https://socratic.org/questions/what-is-a-chemical-reaction-that-absorbs-heat-from-the-surrounding-does-this-rea
# What is a chemical reaction that absorbs heat from the surroundings? Does this reaction have a neutral, positive or negative ΔH at constant pressure? Dec 5, 2016 Positive #### Explanation: ΔH is the change in enthalpy. When energy (heat) is input into the system, ΔH has a positive value; such a reaction is endothermic. Positive values of ΔH tell us that energy was absorbed by the system, typically to break chemical bonds. When ΔH is negative, bonds were formed on balance and the system released energy into the surroundings; such a reaction is exothermic. So a reaction that absorbs heat from the surroundings has a positive ΔH at constant pressure.
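The sign convention can be stated compactly: at constant pressure, the enthalpy change equals the heat exchanged with the surroundings.

```latex
\Delta H = q_p, \qquad
q_p > 0 \;\Rightarrow\; \Delta H > 0 \quad \text{(endothermic: heat absorbed)}, \qquad
q_p < 0 \;\Rightarrow\; \Delta H < 0 \quad \text{(exothermic: heat released)}.
```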
2021-06-21 14:11:51
https://www.hackerearth.com/ru/practice/machine-learning/machine-learning-algorithms/beginners-tutorial-on-xgboost-parameter-tuning-r/tutorial/
Beginners Tutorial on XGBoost and Parameter Tuning in R

Introduction

Last week, we learned about the Random Forest algorithm. We now know it helps reduce a model's variance by building models on resampled data, thereby increasing its generalization capability. Good! Now, you might be wondering: what do we do next to increase a model's prediction accuracy? After all, an ideal model is one that is good at both generalization and prediction accuracy.

This brings us to boosting algorithms. First developed in the late 1980s and early 1990s, the family of boosting algorithms has been improved over the years. In this article, we'll learn about the XGBoost algorithm.

XGBoost is one of the most popular machine learning algorithms these days. Regardless of the task type (regression or classification), it is well known to provide better solutions than many other ML algorithms. In fact, since its inception (early 2014), it has become the "true love" of Kaggle users for structured data. So, if you are planning to compete on Kaggle, xgboost is one algorithm you need to master.

In this article, you'll learn about the core concepts of the XGBoost algorithm. In addition, we'll look into its practical side, i.e., improving an xgboost model using parameter tuning in R.

1. What is XGBoost? Why is it so good?
2. How does XGBoost work?
3. Understanding XGBoost Tuning Parameters
4. Practical - Tuning XGBoost using R

What is XGBoost? Why is it so good?

XGBoost (Extreme Gradient Boosting) is an optimized distributed gradient boosting library. Yes, it uses the gradient boosting (GBM) framework at its core, yet it does better than the GBM framework alone. XGBoost was created by Tianqi Chen, then a PhD student at the University of Washington. It is used for supervised ML problems.
Let's look at what makes it so good:

1. Parallel Computing: It is enabled with parallel processing (using OpenMP); i.e., when you run xgboost, by default it uses all the cores of your laptop/machine.
2. Regularization: I believe this is the biggest advantage of xgboost. Standard GBM implementations have no provision for regularization. Regularization is a technique used to avoid overfitting in linear and tree-based models.
3. Enabled Cross Validation: In R, we usually use external packages such as caret and mlr to obtain CV results. But xgboost comes with an internal CV function (we'll see it below).
4. Missing Values: XGBoost is designed to handle missing values internally. They are treated in such a manner that if there exists any trend in the missing values, it is captured by the model.
5. Flexibility: In addition to regression, classification, and ranking problems, it also supports user-defined objective functions. An objective function is used to measure the performance of the model given a certain set of parameters. Furthermore, it supports user-defined evaluation metrics as well.
6. Availability: Currently, it is available for programming languages such as R, Python, Java, Julia, and Scala.
7. Save and Reload: XGBoost lets us save our data matrix and model and reload them later. Suppose we have a large data set; we can simply save the model and use it in the future instead of wasting time redoing the computation.
8. Tree Pruning: Unlike GBM, where tree growing stops once a negative loss is encountered, XGBoost grows the tree up to max_depth and then prunes backward until the improvement in the loss function is below a threshold.

I'm sure you are now excited to master this algorithm. But remember, with great power come great difficulties too. You might learn to use this algorithm in a few minutes, but optimizing it is a challenge. Don't worry, we'll look into it in the following sections.

How does XGBoost work?
XGBoost belongs to a family of boosting algorithms that convert weak learners into strong learners. A weak learner is one which is only slightly better than random guessing. Let's understand boosting first (in general).

Boosting is a sequential process; i.e., trees are grown one after the other, using the information from the previously grown trees. This process slowly learns from the data and tries to improve its predictions in subsequent iterations. Let's look at a classic classification example:

Four classifiers (in 4 boxes), shown above, are trying hard to classify the + and - classes as homogeneously as possible. Let's understand this picture well.

1. Box 1: The first classifier creates a vertical line (split) at D1. It says anything to the left of D1 is + and anything to the right of D1 is -. However, this classifier misclassifies three + points.
2. Box 2: The next classifier says, "Don't worry, I will correct your mistakes." Therefore, it gives more weight to the three misclassified + points (see the bigger size of the +) and creates a vertical line at D2. Again it says anything to the right of D2 is - and anything to the left is +. Still, it makes mistakes by incorrectly classifying three - points.
3. Box 3: The next classifier continues to bestow support. Again, it gives more weight to the three misclassified - points and creates a horizontal line at D3. Still, this classifier fails to classify the circled points correctly.
4. Remember that each of these classifiers has a misclassification error associated with it.
5. Boxes 1, 2, and 3 are weak classifiers. These classifiers are now used to create a strong classifier, Box 4.
6. Box 4: It is a weighted combination of the weak classifiers. As you can see, it does a good job of classifying all the points correctly.

That's the basic idea behind boosting algorithms: each new model capitalizes on the misclassification/error of the previous model and tries to reduce it. Now, let's come to XGBoost.
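The re-weighting idea in Boxes 1-3 can be sketched in a few lines. The snippet below is a minimal, illustrative AdaBoost-style round on a toy 1-D dataset; it is not XGBoost's actual algorithm, it is written in Python rather than R for brevity, and the data points, the stump learner, and all function names are invented for illustration:

```python
import math

# Toy 1-D points with +1/-1 labels (hypothetical data for illustration).
X = [1, 2, 3, 4, 5, 6, 7, 8]
y = [+1, +1, -1, -1, +1, +1, -1, -1]

def stump_predict(threshold, sign, x):
    # Weak learner: a decision stump that compares x to a threshold.
    return sign if x <= threshold else -sign

def best_stump(weights):
    # Choose the (threshold, sign) pair with the lowest weighted error.
    best = None
    for t in range(0, 9):
        for sign in (+1, -1):
            err = sum(w for w, xi, yi in zip(weights, X, y)
                      if stump_predict(t, sign, xi) != yi)
            if best is None or err < best[0]:
                best = (err, t, sign)
    return best

def boost_round(weights):
    # One boosting round: fit a stump, compute its vote (alpha),
    # then up-weight the points it misclassified.
    err, t, sign = best_stump(weights)
    err = max(err, 1e-10)                      # guard against division by zero
    alpha = 0.5 * math.log((1 - err) / err)    # classifier weight
    new_w = [w * math.exp(-alpha * yi * stump_predict(t, sign, xi))
             for w, xi, yi in zip(weights, X, y)]
    total = sum(new_w)
    return alpha, (t, sign), [w / total for w in new_w]

def adaboost(rounds):
    weights = [1.0 / len(X)] * len(X)
    ensemble = []
    for _ in range(rounds):
        alpha, stump, weights = boost_round(weights)
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, x):
    # The strong classifier (Box 4): an alpha-weighted vote of the stumps.
    score = sum(a * stump_predict(t, s, x) for a, (t, s) in ensemble)
    return +1 if score >= 0 else -1
```

Each round, misclassified points gain weight, so the next stump focuses on them; the final prediction is the weighted vote, exactly the Box 4 idea above.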
As we know, XGBoost can be used to solve both regression and classification problems. It is enabled with separate methods to solve the respective problems. Let's see:

Classification Problems: To solve such problems, it uses the booster = gbtree parameter; i.e., trees are grown one after the other, and the model attempts to reduce the misclassification rate in subsequent iterations. In this setting, the next tree is built by giving a higher weight to the points misclassified by the previous tree (as explained above).

Regression Problems: To solve such problems, we have two methods: booster = gbtree and booster = gblinear. You already know gbtree. With gblinear, it builds a generalized linear model and optimizes it using regularization (L1, L2) and gradient descent. In this setting, the subsequent models are built on the residuals (actual - predicted) generated by previous iterations.

Are you wondering what gradient descent is? Understanding it fully requires some math; however, let me try to explain it in simple words:

• Gradient Descent: It is a method that starts with a vector of weights (or coefficients) and repeatedly computes the partial derivatives of the loss function (for regression, the residual sum of squares, RSS) with respect to each weight, then updates the weights in the direction that decreases the loss. Because the loss is convex, these steps lead toward its minimum. In simple words, gradient descent tries to optimize the loss function by adjusting the coefficient values to minimize the error.

Hopefully, up to this point, you have developed a basic intuition for how boosting and xgboost work. Let's proceed to understand the parameters. After all, using xgboost without parameter tuning is like driving a car without changing its gears; you can never reach top speed.

Note: In R, the xgboost package uses a matrix of input data instead of a data frame.

Understanding XGBoost Tuning Parameters

Every parameter has a significant role to play in the model's performance. Before hyperparameter tuning, let's first understand these parameters and their importance.
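To make the bullet point concrete, here is a minimal gradient descent on the RSS of a one-variable linear model. This is a Python sketch rather than R, and the data points and learning-rate value are made up for illustration:

```python
# Hypothetical 1-D data, roughly following y = 2x + 1, for illustration.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 3.0, 4.9, 7.2, 9.0]

def rss(w, b):
    # Residual sum of squares: the convex loss that gradient descent minimizes.
    return sum((yi - (w * xi + b)) ** 2 for xi, yi in zip(xs, ys))

def gradient_descent(steps=5000, lr=0.01):
    w, b = 0.0, 0.0  # start from arbitrary coefficients
    for _ in range(steps):
        # Partial derivatives of RSS with respect to w and b.
        grad_w = sum(-2 * xi * (yi - (w * xi + b)) for xi, yi in zip(xs, ys))
        grad_b = sum(-2 * (yi - (w * xi + b)) for xi, yi in zip(xs, ys))
        # Step against the gradient; lr plays the same role eta plays in boosting.
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = gradient_descent()
```

Each step moves the coefficients a little way downhill on the loss surface; since RSS is convex, the iterates settle at the least-squares solution.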
In this article, I've only explained the most frequently used and tunable parameters. To look at all the parameters, you can refer to the official documentation. XGBoost parameters can be divided into three categories (as suggested by its authors):

• General Parameters: Control the booster type in the model, which eventually drives overall functioning
• Booster Parameters: Control the performance of the selected booster
• Learning Task Parameters: Set and evaluate the learning process of the booster from the given data

1. General Parameters

1. booster [default=gbtree]
• Sets the booster type (gbtree, gblinear, or dart) to use. For classification problems, you can use gbtree or dart. For regression, you can use any of them.
2. nthread [default=maximum cores available]
• Activates parallel computation. Generally, people don't change it, as using the maximum number of cores leads to the fastest computation.
3. silent [default=0]
• With the default 0, your R console prints running messages as the model trains; set it to 1 to suppress them. Better not to change it.

2. Booster Parameters

As mentioned above, parameters for the tree and linear boosters are different. Let's understand each of them:

Parameters for Tree Booster

1. nrounds [default=100]
• It controls the maximum number of iterations. For classification, it is similar to the number of trees to grow.
• Should be tuned using CV.
2. eta [default=0.3][range: (0,1)]
• It controls the learning rate, i.e., the rate at which our model learns patterns in the data. After every round, it shrinks the feature weights to approach the optimum.
• A lower eta leads to slower computation and must be supported by an increase in nrounds.
• Typically, it lies between 0.01 and 0.3.
3. gamma [default=0][range: (0,Inf)]
• It controls regularization (i.e., prevents overfitting). The optimal value of gamma depends on the data set and the other parameter values.
• The higher the value, the stronger the regularization: a split is made only if it reduces the loss by at least gamma, which penalizes extra splits that don't improve the model's performance. The default 0 means no regularization.
• Tuning trick: Start with 0 and check the CV error rate. If you see train error >>> test error, bring gamma into action. The higher the gamma, the lower the difference between train and test CV error. If you have no clue what value to use, try gamma = 5 and see the performance. Remember that gamma brings the most improvement when you want to use shallow (low max_depth) trees.
4. max_depth [default=6][range: (0,Inf)]
• It controls the depth of the tree.
• The larger the depth, the more complex the model and the higher the chances of overfitting. There is no standard value for max_depth; larger data sets require deeper trees to learn the rules from the data.
• Should be tuned using CV.
5. min_child_weight [default=1][range: (0,Inf)]
• It refers to the minimum sum of instance weights (computed from the second-order partial derivatives of the loss) required in a child node; if a candidate split produces a child below this sum, the split is not made. For plain squared-error regression, this reduces to the minimum number of instances per node.
• In simple words, it blocks potential deep feature interactions to prevent overfitting. Should be tuned using CV.
6. subsample [default=1][range: (0,1)]
• It controls the fraction of samples (observations) supplied to a tree.
• Typically, its values lie between 0.5 and 0.8.
7. colsample_bytree [default=1][range: (0,1)]
• It controls the fraction of features (variables) supplied to a tree.
• Typically, its values lie between 0.5 and 0.9.
8. lambda [default=1]
• It controls L2 regularization (equivalent to Ridge regression) on the leaf weights. It is used to avoid overfitting.
9. alpha [default=0]
• It controls L1 regularization (equivalent to Lasso regression) on the leaf weights. In addition to shrinkage, enabling alpha also results in feature selection, hence it's more useful on high-dimensional data sets.

Parameters for Linear Booster

The linear booster has relatively fewer parameters to tune; hence it computes much faster than the gbtree booster.

1. nrounds [default=100]
• It controls the maximum number of iterations (steps) required for gradient descent to converge.
• Should be tuned using CV.
2. lambda [default=0]
• It enables Ridge regression. Same as above.
3. alpha [default=0]
• It enables Lasso regression. Same as above.

3. Learning Task Parameters

These parameters specify methods for the loss function and model evaluation. In addition to the parameters listed below, you are free to use a customized objective / evaluation function.

1. objective [default=reg:linear]
• reg:linear - for linear regression
• binary:logistic - logistic regression for binary classification; it returns class probabilities
• multi:softmax - multiclass classification using the softmax objective; it returns predicted class labels and requires setting the num_class parameter to the number of unique classes
• multi:softprob - multiclass classification using the softmax objective; it returns predicted class probabilities
2. eval_metric [no default; depends on the objective selected]
• These metrics are used to evaluate a model's accuracy on validation data. For regression, the default metric is RMSE; for classification, it is error.
• Available evaluation metrics include:
• mae - mean absolute error (used in regression)
• logloss - negative log-likelihood (used in classification)
• auc - area under the curve (used in classification)
• rmse - root mean square error (used in regression)
• error - binary classification error rate [#wrong cases/#all cases]
• mlogloss - multiclass logloss (used in classification)

We've now looked at how xgboost works, the significance of each of its tuning parameters, and how they affect the model's performance. Let's bolster our newly acquired knowledge by solving a practical problem in R.

Practical - Tuning XGBoost in R

In this practical section, we'll learn to tune xgboost in two ways: using the xgboost package and the MLR package. The xgboost R package doesn't have any inbuilt feature for grid/random search, so to overcome this bottleneck, we'll use MLR to perform the extensive parametric search and try to obtain optimal accuracy.
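Before diving into the practical part, here is a concrete illustration of two of the eval_metric options above, error and logloss, computed by hand. This is a Python sketch, the probability and label values are made up, and in practice xgboost computes these metrics internally:

```python
import math

# Hypothetical predicted probabilities and true 0/1 labels, for illustration.
probs  = [0.9, 0.2, 0.7, 0.4, 0.8]
labels = [1,   0,   1,   1,   0]

def error_rate(probs, labels, cutoff=0.5):
    # "error": #wrong cases / #all cases, after thresholding probabilities.
    wrong = sum((p > cutoff) != bool(y) for p, y in zip(probs, labels))
    return wrong / len(labels)

def logloss(probs, labels, eps=1e-15):
    # Negative log-likelihood; penalizes confident wrong predictions heavily.
    total = 0.0
    for p, y in zip(probs, labels):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(labels)
```

Note how the two metrics disagree: error only counts which side of the cutoff a prediction falls on, while logloss also rewards well-calibrated probabilities.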
I'll use the adult data set from my previous random forest tutorial. This data set poses a classification problem where our job is to predict whether a given person has a salary <=50K or >50K. Using random forest, we achieved an accuracy of 85.8%. Theoretically, xgboost should be able to surpass random forest's accuracy. Let's see if we can do it. I'll follow the most common but effective steps in parameter tuning:

1. First, build an xgboost model using default parameters. You might be surprised to see that default parameters sometimes give impressive accuracy.
2. If you get a disappointing model accuracy, do this: fix eta = 0.1, leave the rest of the parameters at their default values, and use the xgb.cv function to get the best nrounds. Now build a model with these parameters and check the accuracy.
3. Otherwise, you can perform a grid search on the rest of the parameters (max_depth, gamma, subsample, colsample_bytree, etc.) while keeping eta and nrounds fixed. Note: If using gbtree, don't introduce gamma until you see a significant difference between your train and test error.
4. Using the best parameters from the grid search, tune the regularization parameters (alpha, lambda) if required.
5. Finally, increase/decrease eta and repeat the procedure. But remember, an excessively low eta combined with many rounds lets the model fit the data very closely, and in the process it might capture noise. So be careful!

This process might sound a bit complicated, but it's quite easy to code in R. Don't worry, I've demonstrated all the steps below.
Let's get into action now and quickly prepare our data for modeling (if you don't understand any line of code, ask me in the comments):

#set working directory
> path <- "~/December 2016/XGBoost_Tutorial"
> setwd(path)

#load libraries
> library(data.table)
> library(mlr)

#set variable names
> setcol <- c("age", "workclass", "fnlwgt", "education", "education-num", "marital-status", "occupation", "relationship", "race", "sex", "capital-gain", "capital-loss", "hours-per-week", "native-country", "target")

#load data
> train <- read.table("adultdata.txt", header = F, sep = ",", col.names = setcol, na.strings = c(" ?"), stringsAsFactors = F)
> test <- read.table("adulttest.txt", header = F, sep = ",", col.names = setcol, skip = 1, na.strings = c(" ?"), stringsAsFactors = F)

#convert data frames to data tables
> setDT(train)
> setDT(test)

#check missing values (count, then percentage per column)
> table(is.na(train))
> sapply(train, function(x) sum(is.na(x))/length(x))*100
> table(is.na(test))
> sapply(test, function(x) sum(is.na(x))/length(x))*100

#quick data cleaning
#remove the extra trailing character from the test target variable
> library(stringr)
> test[, target := substr(target, start = 1, stop = nchar(target)-1)]

#trim leading whitespace from character columns
> char_col <- colnames(train)[sapply(test, is.character)]
> for(i in char_col) set(train, j = i, value = str_trim(train[[i]], side = "left"))
> for(i in char_col) set(test, j = i, value = str_trim(test[[i]], side = "left"))

#set all missing values as "Missing"
> train[is.na(train)] <- "Missing"
> test[is.na(test)] <- "Missing"

Up to this point, we've dealt with basic data cleaning and data inconsistencies. To use the xgboost package, keep these things in mind:

1. Convert the categorical variables into numeric using one hot encoding
2. For classification, if the dependent variable belongs to class factor, convert it to numeric

R's base function model.matrix is quick enough to implement one hot encoding. In the code below, ~.+0 leads to encoding of all categorical variables without producing an intercept.
Alternatively, you can use the dummies package to accomplish the same task. Since the xgboost package accepts the target variable separately, we'll do the encoding keeping this in mind:

#using one hot encoding
> labels <- train$target
> ts_label <- test$target
> new_tr <- model.matrix(~.+0, data = train[,-c("target"), with=F])
> new_ts <- model.matrix(~.+0, data = test[,-c("target"), with=F])

#convert the character target to numeric 0/1 (factor first, since as.numeric() on character strings yields NA)
> labels <- as.numeric(as.factor(labels))-1
> ts_label <- as.numeric(as.factor(ts_label))-1

For xgboost, we'll use xgb.DMatrix to convert the data table into a matrix (the recommended format):

#preparing matrix
> library(xgboost)
> dtrain <- xgb.DMatrix(data = new_tr, label = labels)
> dtest <- xgb.DMatrix(data = new_ts, label = ts_label)

As mentioned above, we'll first build our model using default parameters, keeping random forest's accuracy of 85.8% in mind. I'll set the default parameters explicitly (each value written against its parameter):

#default parameters
> params <- list(booster = "gbtree", objective = "binary:logistic", eta = 0.3, gamma = 0, max_depth = 6, min_child_weight = 1, subsample = 1, colsample_bytree = 1)

Using the inbuilt xgb.cv function, let's calculate the best nround for this model. In addition, this function also returns the CV error, which is an estimate of the test error.

> xgbcv <- xgb.cv(params = params, data = dtrain, nrounds = 100, nfold = 5, showsd = T, stratified = T, print.every.n = 10, early.stop.round = 20, maximize = F)
##best iteration = 79

The model returned the lowest error at the 79th (nround) iteration. Also, if you watched the running messages in your console, you would have seen that train and test error track each other closely. We'll use this insight in the following code. Now, let's see our CV error:

> min(xgbcv$test.error.mean)
[1] 0.1263

Compared to my previous random forest model, this CV accuracy of (100 - 12.63) = 87.37% already looks better. However, cross-validation accuracy is usually more optimistic than true test accuracy.
Let's calculate our test set accuracy and determine if this default model makes sense:

#first default - model training
> xgb1 <- xgb.train(params = params, data = dtrain, nrounds = 79, watchlist = list(val = dtest, train = dtrain), print.every.n = 10, early.stop.round = 10, maximize = F, eval_metric = "error")

#model prediction
> xgbpred <- predict(xgb1, dtest)
> xgbpred <- ifelse(xgbpred > 0.5, 1, 0)

The objective function binary:logistic returns output probabilities rather than labels. To convert them, we need to apply a cutoff value manually. As seen above, I've used 0.5 as my cutoff value for predictions. We can calculate our model's accuracy using the confusionMatrix() function from the caret package.

#confusion matrix
> library(caret)
> confusionMatrix(xgbpred, ts_label)
#Accuracy - 86.54%

#view variable importance plot
> mat <- xgb.importance(feature_names = colnames(new_tr), model = xgb1)
> xgb.plot.importance(importance_matrix = mat[1:20])

As you can see, we've achieved better accuracy than the random forest model using default parameters in xgboost. Can we still improve it? Let's proceed to the random / grid search procedure and attempt to find better accuracy. From here on, we'll be using the MLR package for model building. A quick reminder: the MLR package wraps the data and model in its own task and learner objects, as shown below. Also, keep in mind that task functions in mlr don't accept character variables.
Hence, we need to convert them to factors before creating the tasks:

#convert characters to factors
> fact_col <- colnames(train)[sapply(train, is.character)]
> for(i in fact_col) set(train, j = i, value = factor(train[[i]]))
> for(i in fact_col) set(test, j = i, value = factor(test[[i]]))

#create tasks
> traintask <- makeClassifTask(data = train, target = "target")
> testtask <- makeClassifTask(data = test, target = "target")

#do one hot encoding
> traintask <- createDummyFeatures(obj = traintask, target = "target")
> testtask <- createDummyFeatures(obj = testtask, target = "target")

Now, we'll set the learner and fix the number of rounds and eta as discussed above.

#create learner
> lrn <- makeLearner("classif.xgboost", predict.type = "response")
> lrn$par.vals <- list(objective = "binary:logistic", eval_metric = "error", nrounds = 100L, eta = 0.1)

#set parameter space
> params <- makeParamSet(makeDiscreteParam("booster", values = c("gbtree","gblinear")), makeIntegerParam("max_depth", lower = 3L, upper = 10L), makeNumericParam("min_child_weight", lower = 1L, upper = 10L), makeNumericParam("subsample", lower = 0.5, upper = 1), makeNumericParam("colsample_bytree", lower = 0.5, upper = 1))

#set resampling strategy
> rdesc <- makeResampleDesc("CV", stratify = T, iters = 5L)

With stratify = T, we ensure that the distribution of the target class is maintained in the resampled data sets. If you noticed, I didn't include gamma in the parameter set, simply because during cross-validation we saw that train and test error stay in sync with each other. Had either of them been lagging or rushing ahead, we could have brought this parameter into action.

Now, we'll set the search optimization strategy. Though xgboost is fast, we'll use random search instead of grid search to find the best parameters. In random search, we'll build 10 models with different parameters and choose the one with the least error. You are free to build any number of models.
#search strategy
> ctrl <- makeTuneControlRandom(maxit = 10L)

We'll also set a parallel backend to ensure faster computation. Make sure you don't have several applications open in the background, since we'll use all the cores of your machine.

#set parallel backend
> library(parallel)
> library(parallelMap)
> parallelStartSocket(cpus = detectCores())

#parameter tuning
> mytune <- tuneParams(learner = lrn, task = traintask, resampling = rdesc, measures = acc, par.set = params, control = ctrl, show.info = T)
> mytune$y
#0.873069

This newly obtained tuned CV accuracy is better than our default xgboost model's. To check the tuning result, write mytune in your R console and press Enter. Let's build a model using the tuned parameters and check the final test accuracy.

#set hyperparameters
> lrn_tune <- setHyperPars(lrn, par.vals = mytune$x)

#train model
> xgmodel <- train(learner = lrn_tune, task = traintask)

#predict model
> xgpred <- predict(xgmodel, testtask)

We've made our predictions on the test set. Let's check our model's accuracy.

> confusionMatrix(xgpred$data$response, xgpred$data$truth)
#Accuracy : 0.8747

As you can see, we've achieved better accuracy than our default xgboost model (86.54%). When tuning an xgboost model, remember that even simple tuning like this can lead to better predictions. I'll leave you here. What's next? If you are still curious to improve the model's accuracy, update eta, find the best parameters using random search, and build the model. Tell me in the comments if you've achieved better accuracy. Here is the complete github script for the code shared above.
2019-07-22 07:53:54
http://mathhelpforum.com/advanced-algebra/18129-order-group.html
# Math Help - Order of a group 1. ## Order of a group What is the order of the group U(12)? What does U(12) look like exactly? Thanks. 2. Originally Posted by tttcomrader What is the order of the group U(12)? What does U(12) mean? Does it mean the primitive 12th roots of unity? 3. I really don't know, let me ask the professor tomorrow. 4. I just talked to my professor and he told me that U(12) is the group under multiplication modulo 12 of all k less than 12 such that k is relatively prime to 12. Thus U(12) = {1, 5, 7, 11}, so |U(12)| = 4. Thanks.
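To double-check the professor's description, U(n) can be enumerated directly. Here is a short sketch (the function name units_mod is just an illustrative label):

```python
from math import gcd

def units_mod(n):
    # U(n): the integers in [1, n-1] relatively prime to n, a group under
    # multiplication modulo n whose order is Euler's totient phi(n).
    return [k for k in range(1, n) if gcd(k, n) == 1]

U12 = units_mod(12)
# Closure check: products of units, reduced mod 12, stay in U(12).
closed = all((a * b) % 12 in U12 for a in U12 for b in U12)
```

Note that 9 is excluded because gcd(9, 12) = 3, while 5 belongs since gcd(5, 12) = 1.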
2015-08-04 14:34:10
https://www.hepdata.net/record/ins771169
$\rho^0$ photoproduction in ultraperipheral relativistic heavy ion collisions at $\sqrt{s_{NN}}$ = 200 GeV

The STAR collaboration. Phys.Rev.C 77 (2008) 034910, 2008.

Abstract (data abstract)
Photoproduction reactions occur when the electromagnetic field of a relativistic heavy ion interacts with another heavy ion. The STAR collaboration presents a measurement of $\rho^{0}$ and direct $\pi^{+}\pi^{-}$ photoproduction in ultra-peripheral relativistic heavy ion collisions at $\sqrt{s_{NN}}$ = 200 GeV. We observe both exclusive photoproduction and photoproduction accompanied by mutual Coulomb excitation. We find a coherent cross-section of $\sigma(AuAu \rightarrow Au^{*}Au^{*} \rho^{0})$ = 530 $\pm$ 19 (stat.) $\pm$ 57 (syst.) mb, in accord with theoretical calculations based on a Glauber approach, but considerably below the predictions of a color dipole model. The $\rho^{0}$ transverse momentum spectrum ($p_{T}^{2}$) is fit by a double exponential curve including both coherent and incoherent coupling to the target nucleus; we find $\sigma_{inc}/\sigma_{coh}$ = 0.29 $\pm$ 0.03 (stat.) $\pm$ 0.08 (syst.). The ratio of direct $\pi^{+}\pi^{-}$ to $\rho^{0}$ production is comparable to that observed in $\gamma p$ collisions at HERA, and appears to be independent of photon energy. Finally, the measured $\rho^{0}$ spin helicity matrix elements agree within errors with the expected s-channel helicity conservation.

#### Figure 2 a
Data from Figure 2 a. 10.17182/hepdata.98962.v1/t1
ZDC spectra obtained with the minimum bias sample after the $\rho^{0}$ selection cuts are applied, and fit with three Gaussians....

#### Figure 2 b
Data from Figure 2 b. 10.17182/hepdata.98962.v1/t2
ZDC spectra obtained with the minimum bias sample after the $\rho^{0}$ selection cuts are applied, and fit with three Gaussians....
#### Figure 3 a
Data from Figure 3 a. 10.17182/hepdata.98962.v1/t3
The invariant mass distribution for the coherently produced $\rho^{0}$ candidates from the minimum bias sample with the cut on the...

#### Figure 3 b
Data from Figure 3 b. 10.17182/hepdata.98962.v1/t4
The invariant mass distribution for the coherently produced $\rho^{0}$ candidates obtained from the topology sample with the cut on the...

#### Figure 4
Data from Figure 4. 10.17182/hepdata.98962.v1/t5
The ratio |B/A| as a function of $y_{\rho^{0}}$ for the minimum bias data, obtained by fitting Eq.3 to the invariant...

#### Figure 5
Data from Figure 5. 10.17182/hepdata.98962.v1/t6
Coherent $\rho^{0}$ production cross-section for the minimum bias data set as a function of $y_{\rho^{0}}$ (black dots) overlaid by the...

#### Figure 6
Data from Figure 6. 10.17182/hepdata.98962.v1/t7
$\rho^{0}$ production cross-section as a function of the momentum transfer squared $t$, together with the fit of Eq. 5. The...

#### Figure 7
Data from Figure 7. 10.17182/hepdata.98962.v1/t8
Comparison of theoretical predictions to the measured differential cross-section for coherent $\rho^{0}$ production. The statistical errors are shown by the...

#### Figure 8 a
Data from Figure 8 a. 10.17182/hepdata.98962.v1/t9
Projections of the two dimensional efficiency corrected $\Phi_{h}$ vs $cos(\Theta_{h})$ distributions obtained with the minimum bias data set. The solid...

#### Figure 8 b
Data from Figure 8 b. 10.17182/hepdata.98962.v1/t10
Projections of the two dimensional efficiency corrected $\Phi_{h}$ vs $cos(\Theta_{h})$ distributions obtained with the minimum bias data set. The solid...
2021-01-25 04:30:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7917649745941162, "perplexity": 2572.138775563959}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703564029.59/warc/CC-MAIN-20210125030118-20210125060118-00765.warc.gz"}
https://calculator.academy/rotational-weight-calculator/
Enter the mass (kg), the radius (m), and the angular acceleration (rad/s^2) into the Rotational Weight Calculator. The calculator will evaluate the Rotational Weight.

## Rotational Weight Formula

The following formula is used to calculate the Rotational Weight.

RW = m*r*a

Variables:

• RW is the Rotational Weight (N)
• m is the mass (kg)
• r is the radius (m)
• a is the angular acceleration (rad/s^2)

To calculate the rotational weight, multiply the mass, radius, and angular acceleration together.

## How to Calculate Rotational Weight?

The following two example problems outline the steps and information needed to calculate the Rotational Weight.

Example Problem #1

1. First, determine the mass (kg). The mass (kg) is calculated to be: 3.
2. Next, determine the radius (m). The radius (m) is measured to be: 4.
3. Next, determine the angular acceleration (rad/s^2). The angular acceleration (rad/s^2) is found to be: 5.
4. Finally, calculate the Rotational Weight using the formula above: RW = m*r*a. The values provided above are inserted into the equation and computed: RW = 3*4*5 = 60.00 (N)

Example Problem #2

The variables required for this problem are provided below:

mass (kg) = 6
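The formula above can be sketched in a few lines of Python. The function name is mine, not the calculator's; note that m·r·α is, physically, the tangential force (in newtons) on a point mass at radius r undergoing angular acceleration α.

```python
def rotational_weight(mass_kg: float, radius_m: float,
                      angular_accel: float) -> float:
    """RW = m * r * a, the formula given above.

    The product m * r * alpha is the tangential force in newtons
    needed to give a point mass at radius r the angular acceleration alpha.
    """
    return mass_kg * radius_m * angular_accel

# Example Problem #1 from the text: m = 3 kg, r = 4 m, a = 5 rad/s^2.
print(rotational_weight(3, 4, 5))
```

Running the call for Example Problem #1 reproduces the 60 N worked out in the text.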
2022-11-28 12:42:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9526147842407227, "perplexity": 2280.0324110722786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710503.24/warc/CC-MAIN-20221128102824-20221128132824-00605.warc.gz"}
https://chemistry.stackexchange.com/tags/spectrophotometry/new
# Tag Info

1 This is a long comment.... The webpage specification says that there are three USB connections: a USB Type A port for a USB memory device (front panel), a USB Type B port for optional computer connectivity (rear panel), and a USB Type A port for an external printer (rear panel). The printer uses the HP PCL 3.0 specification via a USB cable. You should be able to get the HP ...

16 When looking at IR spectra of hydrogen-bonding groups, always check how the spectrum was recorded. The NIST spectrum appears to have been recorded in the gas phase, while the first one used a liquid film. In a liquid film, $\ce{NH2}$ groups form a network of hydrogen bonds, resulting in shape distortion and widening of the $\ce{NH2}$-related bands. The same is true for carboxylic ...

6 Honestly, I have doubts about your textbook's version of the IR spectrum. However, I can suggest from experience that the shapes of the peaks depend on the method used to obtain the spectra (ATR, smear, adsorbed on an inert surface, etc.), so I cannot pinpoint which one is incorrect. Still, you can compare the one with other related amines. ...

Top 50 recent answers are included
2020-09-22 17:02:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3516824543476105, "perplexity": 2978.919680250382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206329.28/warc/CC-MAIN-20200922161302-20200922191302-00205.warc.gz"}
https://gmatclub.com/forum/day-53273.html?fl=similar
%$^$#@$% Day !!! : Business School Life

%$^$#@$% Day !!! Author Message SVP Joined: 01 Nov 2006 Posts: 1855 Schools: The Duke MBA, Class of 2009 Followers: 17 Kudos [?]: 204 [0], given: 2 %$^$#@$% Day !!! [#permalink] Show Tags 02 Oct 2007, 17:02 So you all won't believe this but I freakin got HIT BY A CAR today while riding my bike home from Fuqua. WTF??? I'm pretty much ok - lots of bruises and ugly bumps and some road rash. A sprained wrist that hurts like the dickens right now. It might be broken, but they're just gonna let it see what it does over the next few days. Hopefully not broken. I think it's not. I'm thinking it's not. It won't be. Can you all think good not-broken thoughts at my wrist? Anyway, if any applicants come to Duke to visit, I will be the one limping along with her wrist in the air, probably cussing. I've done a lot of that today. And now I need to go to bed. VP Joined: 24 Sep 2006 Posts: 1359 Followers: 10 Kudos [?]: 198 [0], given: 0 Show Tags 02 Oct 2007, 17:20 All I can say is: I'm sorry to hear about that. Hope it's not broken and that you recover soon. L.
GMAT Club Legend Joined: 10 Apr 2007 Posts: 4318 Location: Back in Chicago, IL Schools: Kellogg Alum: Class of 2010 Followers: 88 Kudos [?]: 751 [0], given: 5 Show Tags 02 Oct 2007, 17:44 All the best, hope you feel better. Did the person at least stick around to help you out? VP Joined: 06 Feb 2007 Posts: 1023 Followers: 22 Kudos [?]: 181 [0], given: 0 Show Tags 02 Oct 2007, 17:51 So sorry to hear that!!! I just visited Duke today, and there seemed to be a lot of traffic on campus... I hope you feel better! Director Joined: 28 Dec 2005 Posts: 921 Followers: 2 Kudos [?]: 47 [0], given: 0 Show Tags 02 Oct 2007, 18:10 aaudetat, I hope you get better soon (and you're right, it's not broken . Sorry it happened so soon after classes started for you. CEO Joined: 15 Aug 2003 Posts: 3460 Followers: 67 Kudos [?]: 869 [0], given: 781 Show Tags 02 Oct 2007, 18:29 Sorry to hear that aaudetat. I send good wishes your way and hope your injuries will heal soon. SVP Joined: 31 Jul 2006 Posts: 2304 Schools: Darden Followers: 44 Kudos [?]: 475 [0], given: 0 Show Tags 02 Oct 2007, 19:31 That sucks. Have you hired a lawyer? You deserve some cash justice. Director Affiliations: FRM Charter holder Joined: 02 Dec 2006 Posts: 734 Schools: Stanford, Chicago Booth, Babson College Followers: 16 Kudos [?]: 81 [0], given: 4 Show Tags 02 Oct 2007, 19:35 Sorry to hear that. Hope you recover soon. we have at least two lawyers on this thread! SVP Joined: 01 Nov 2006 Posts: 1855 Schools: The Duke MBA, Class of 2009 Followers: 17 Kudos [?]: 204 [0], given: 2 Show Tags 02 Oct 2007, 20:26 thanks, guys. for the lawyers among you: is it hit and run if the guy stopped, but I screamed at him so he left? A bus driver got his info, so after the cops showed up and I noticed he was gone, they were itchin' to go after him. But I think my cursing (I believe I said something along the lines of "Get the f*** out of my face") chased him away. Or perhaps it was when my eyes turned black and started smoking. 
Whatever. Frankly, I'm going to go ahead and guess that he's not overly financially stable. Going after him for $200 in bike repair and whatever my insurance leaves me for the doctor bills isn't going to fix my bike quicker or make the bruises go away. Senior Manager Joined: 25 Jul 2007 Posts: 378 Location: Times Square Schools: Baruch / Zicklin Followers: 4 Kudos [?]: 38 [0], given: 148 Show Tags 03 Oct 2007, 06:30 pelihu wrote: That sucks. Have you hired a lawyer? You deserve some cash justice. Haha - that is the only type of justice worth the effort. SVP Joined: 24 Aug 2006 Posts: 2132 Followers: 3 Kudos [?]: 140 [0], given: 0 Show Tags 03 Oct 2007, 07:00 aaudetat wrote: "Get the f*** out of my face." is not the same as, "You are forgiven and I clear you of all possible financial obligations resulting from this incident." "Get the f*** out of my face" doesn't equal "Get out of here!" Find him and sue him or at least press charges. I can deduce he did one of two things: 1) He hit you and knocked you down, which means he shouldn't have just left the scene. 2) He hit you so hard that he didn't knock you down, but broke your wrist at the moment of impact (not likely). Either way, he needs to be prosecuted, or at least have the fear of the police put into him. SVP Joined: 24 Aug 2006 Posts: 2132 Followers: 3 Kudos [?]: 140 [0], given: 0 Show Tags 03 Oct 2007, 07:05 Oh yea, also -- hope you feel better. Ice your wrist! Current Student Joined: 18 Jun 2007 Posts: 408 Location: Atlanta, GA Schools: Emory class of 2010 Followers: 11 Kudos [?]: 40 [0], given: 0 Show Tags 03 Oct 2007, 10:59 Here I was trying to convince my fiancée that going to Duke for my MBA is a good idea if I get in, and I find out guys are hitting random girls on bikes then leaving because they're scared of her smoking black eyes? What kind of place is this? SVP Joined: 31 Jul 2006 Posts: 2304 Schools: Darden Followers: 44 Kudos [?]: 475 [0], given: 0 Show Tags 03 Oct 2007, 11:33 You need to expand your mind.
I've got 2 words for you... Pain and Suffering (har har). Sure, your compensatory damages might only be $200, but your pain and suffering could be (depending on state law) limitless. Pain and suffering is great because it's so difficult to quantify. Regarding this person's financial stability, I've got 7 words for you... Parents' homeowner's insurance. Generally speaking (and again, depending on state law), most homeowners insurance is also liability insurance. The big thing here is that children away at college are generally considered to be covered under their parents' homeowner policies, even if the children don't live under that roof any more. The other more obvious source of fundage would be auto insurance, with many states requiring minimums of at least $25k or $50k for injuries. I'm guessing that you concluded this person didn't have insurance because they ran. And yes, this would be considered hit and run, but you don't need to prove that to collect cash justice from him. Basically, any time there is a car vs. person altercation, the car (driver) is at fault. The fact that he ran just makes it really simple for you to make your case. If you don't want to waste a lot of time and effort, hire one of those personal injury lawyers that advertises on TV. Heck, you might be able to find one that's willing to front you the expected settlement (less fees of 40-50% of course) because this case seems so obvious and blatant. I imagine it would go something like this. You call up the ambulance chaser, he drools and takes your case, he makes a few inquiries as to whether there is either auto or homeowner coverage, and then he works out a settlement. If you have obvious bruises and injuries you can document in a medical report (broken bones are especially good, but soft tissue can work as well), then you can collect for pain and suffering; you might need to prove negligence or recklessness (again depends on the state), but that should be easy in this case.
Don't limit yourself to $200, get yourself a piece of the American pie. Director Joined: 18 Sep 2006 Posts: 962 Location: Chicago, IL Schools: Chicago Booth 2010 Followers: 6 Kudos [?]: 28 [0], given: 0 Show Tags 03 Oct 2007, 11:42 just think, "free tuition" SVP Joined: 24 Aug 2006 Posts: 2132 Followers: 3 Kudos [?]: 140 [0], given: 0 Show Tags 03 Oct 2007, 12:02 This is just beautiful. It brings a tear to my eye. I highlighted the key points. pelihu wrote: And yes, this would be considered hit and run, but you don't need to prove that to collect cash justice from him. Basically, any time there is a car vs. person altercation, the car (driver) is at fault. The fact that he ran just makes it really simple for you to make your case. If you don't want to waste a lot of time and effort, hire one of those personal injury lawyers that advertises on TV. Heck, you might be able to find one that's willing to front you the expected settlement (less fees of 40-50% of course) because this case seems so obvious and blatant. I imagine it would go something like this. You call up the ambulance chaser, he drools and takes your case, he makes a few inquiries as to whether there is either auto or homeowner coverage, and then he works out a settlement.
2017-02-27 06:57:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2471001297235489, "perplexity": 11190.356033010556}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00334-ip-10-171-10-108.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3285534/distribution-of-digits-in-prime-numbers
# distribution of digits in prime numbers

I am curious about this: suppose we consider all numbers in base $$b$$ with the same number of digits $$n$$ (e.g., in base $$10$$ it could be $$10$$ to $$99$$ for $$n=2$$, or $$100$$ to $$999$$ for $$n=3$$, etc.; the leading digit is non-zero). For the prime numbers in this range, if I were to choose a prime number at random, can I expect the distribution of its digits to be uniformly random? That is, should each digit value occur $$\frac{n}{b}$$ times on average? Thank you. • What have you tried? Where are you finding difficulties? And why is this question worth answering? – David G. Stork Jul 7 at 5:57 • I have some heuristics, but I don't know how to go about reasoning formally about the problem. My stats skills are not that great :(. I'm playing around with other stuff, and it would be useful to have some intuition, or formal reasoning, about this. – ReverseFlow Jul 7 at 6:03

For any specific digit you can. In fact, you can even show that a (small) proportion of the digits is jointly uniform. A theorem of Bourgain, from which the above follows immediately, says: Let $$n$$ be large. Then there exists $$\epsilon >0$$ such that for any subset $$I$$ of $$\{1,\ldots, n-1\}$$ of size $$\leq \epsilon n$$ and for any choice of $$a_i\in \{0,1\}$$, $$i\in I$$, the number of primes $$1\leq p\leq 2^n$$ whose $$i$$-th digit is $$a_i$$ for all $$i\in I$$ is asymptotically equal to $$\frac{2^{n-\#I}}{n\log 2}$$ as $$n\to \infty$$. (Here $$\log =\log_{e}$$. Also recall that the number of primes up to $$2^n$$ is asymptotically $$\frac{2^n}{n\log 2}$$.) • Any specific digit except the rightmost digit, that is (and this is true in any base). Does Bourgain's work address bases other than 2? – Greg Martin Jul 7 at 7:05 • This is why I took the digits from 1 to n-1 and not from 0. So $p=\sum_{i=0}^{n-1} a_i 2^{i}$. I think everything he does works in any base, but he chose base 2. – Lior B-S Jul 7 at 7:51 • It is unclear to me how this helps.
Also, the statement above is slightly different from the one in the paper; in particular, the paper states that c is a universal constant, of which they did not give an example, even a non-sharp one. That would be useful. As for it being true for other bases, given the machinery they used to prove the result, it is unclear to me that the methods would carry over to a different base. Also, how large is "large"? I am actually computing things, so having actual bounds matters to me. – ReverseFlow Jul 7 at 18:52 • The method used is Fourier analysis and the circle method, which in principle are explicit. The same goes for how big $$n$$ must be (since the error term could in principle be made explicit). Regarding how it helps: you get that for any choice of $$\epsilon n$$ digits the number of primes is the same, hence the primes equidistribute among the different possibilities. – Lior B-S Jul 8 at 10:16

In an odd base, an odd number always has an odd number of odd digits. Proof. The parity addition rules are $$\begin{eqnarray}(2n+1)+(2p+1)=2q \tag{1}\\ (2r+1)+(2s)=(2t+1) \tag{2}\\ (2u)+(2v)=(2w)\tag{3}\end{eqnarray}$$ By associativity $$(x+y)+z=x+(y+z)\tag{4}$$ and commutativity $$a+c+d=a+d+c=c+a+d=c+d+a=d+a+c=d+c+a\tag{5}$$ we have $$\underbrace{(2h+1)+\cdots +(2h+1)}_{\text{2i+1 times}}=2e+1\implies (2i+1)(2h+1)=(2e+1)$$ Writing out $$123456789_b=1\cdot b^8+2\cdot b^7+3\cdot b^6+4\cdot b^5+5\cdot b^4 +6\cdot b^3+7\cdot b^2+8\cdot b^1 +9\cdot b^0$$ shows that in an odd base $$b=2h+1$$, all odd digits create odd summands (each power $$b^k$$ of an odd base is odd, and odd times odd is odd). It follows from (1), (2), (3) that the even $$2u$$ digits create even $$2w$$ summands regardless of base. Even bases $$2v$$ luck out in that $$b^0=1$$ for $$b\neq 0$$; otherwise they couldn't represent odd numbers $$2r+1$$ at all. There are other constraints, like all primes greater than 3 having remainder 1 or 5 on division by 6, that can skew things: in a base of the form $$6k+1$$, a last digit of the form $$6j+4$$ forces the rest of the digits to represent a number of the form $$6l+1$$ or $$6m+3$$, etc. • Could you improve this by using more math and less English? Thank you.
– ReverseFlow Jul 7 at 18:55 • I don't call it improvement. – Roddy MacPhee Jul 7 at 19:32 • I mean, the associativity and commutativity property were fine. I got that. I meant expand the reasoning using equations for your paragraph. I agree, i would not call this an improvement. Also, the concept of “odd” only make sense for bases that have to 2 as a factor...like base 10,2 or 16 which are common bases. In base 3, what’s an odd number? A number that ends in 1 or 2? – ReverseFlow Jul 7 at 19:39 • no the concept of odd is a number with remainder of 1 on division by 2. – Roddy MacPhee Jul 7 at 19:45 • By that definition, your statement is trivially true. – ReverseFlow Jul 7 at 19:50
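The question in the thread above can be probed empirically. The sketch below (mine, not part of the thread) counts the interior digits of all 4-digit base-10 primes — interior because, as one comment notes, the final digit of a prime is never uniform (it is almost always 1, 3, 7 or 9), and the leading digit is never 0.

```python
from collections import Counter

def primes_below(limit: int) -> list[int]:
    """Sieve of Eratosthenes: all primes strictly below `limit`."""
    sieve = bytearray([1]) * limit
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            # Cross out every multiple of p starting at p*p.
            sieve[p * p::p] = bytearray(len(range(p * p, limit, p)))
    return [i for i, flag in enumerate(sieve) if flag]

# Interior digits (neither leading nor final) of all 4-digit primes.
four_digit_primes = [p for p in primes_below(10_000) if p >= 1_000]
counts = Counter(d for p in four_digit_primes for d in str(p)[1:-1])
total = sum(counts.values())
for digit in "0123456789":
    print(digit, round(counts[digit] / total, 4))
```

Each frequency should land near 0.1, which is consistent with (though of course no substitute for) the Bourgain-type equidistribution discussed in the answer.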
2019-10-21 14:55:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 34, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7090152502059937, "perplexity": 414.9790262678327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987779528.82/warc/CC-MAIN-20191021143945-20191021171445-00240.warc.gz"}
http://www.luxurycarimport.nc/how-to-slq/9m7ys2.php?1b78bd=calcul-cost-index
The Cost Index (CI) allows the overall cost of a typical flight mission to be minimized. Inflation Calculator About the Calculator. Solving for cost performance index (CPI) Inputs: budgeted cost of work performed (BCWP), actual cost of work performed (ACWP). Conversions: budgeted cost of work performed (BCWP) = 0. Calculator for Cost/Schedule Performance Index (CPI/SPI) & Variances, Cost Variance (CV) and Cost Performance Index (CPI) It takes into consideration current prices in your current city and the city you are considering for relocation. The example shows how to use a custom expression to calculate the ratio between net profit and the cost of the investment. We hope you have found this small CV / SV / CPI / SPI calculator useful. An Indirect Cost relates to the costs which are required in order to supply the needs of the direct costs. $t$, the quantity of fuel. Cost of Equity is the rate of return a company pays out to equity investors. C2ER has published its quarterly Cost of Living Index since 1968. This electricity cost calculator works out how much electricity a particular electrical appliance will use and how much it will cost. certification exam and want to train your cost management capabilities, make up It is a statistic designed to help compare how these price relatives, taken as a whole, differ between time periods or geographical locations. This is a simple currency converter that uses the Big Mac Index currency data as a base.
The formula for fixed cost can be calculated using the following steps: Step 1: First, determine the variable cost of production per unit, which can be the aggregate of various production costs such as labor cost, raw material cost, commissions, etc. The U.S. Labor Department's Bureau of Labor Statistics will release the Consumer Price Index (CPI) with inflation data for November on December 10, 2020. The same is true for cost measures (rather than items), such as the consumer price index (CPI), a measure of average consumer prices in the U.S., or the purchasing power of the U.S. Dollar. Calculate the selling price you need to establish in order to achieve a desired gross margin on a known product cost. Relative to the number of hours actually worked. And the unit cost of fuel. Calculate your fuel consumption per km. Where PI is the profitability index, CF is the cash flow for a period, r is the discount rate in decimal form, n is the number of periods (years), and CF0 is the initial investment. > 100 is good; it means being under budget.
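The profitability-index variables listed above (CF, r, n, CF0) can be turned into a small sketch. The function name and the example numbers are mine, not the page's.

```python
def profitability_index(cash_flows, rate, initial_investment):
    """PI = present value of future cash flows / initial investment.

    cash_flows[k] is the cash flow received at the end of period k+1,
    rate is the per-period discount rate in decimal form, and
    initial_investment plays the role of CF0 in the text.
    """
    pv = sum(cf / (1 + rate) ** (k + 1) for k, cf in enumerate(cash_flows))
    return pv / initial_investment

# Hypothetical numbers (not from the page): three yearly cash flows of 40,
# a 10% discount rate, and an initial outlay of 100.
print(round(profitability_index([40, 40, 40], 0.10, 100), 4))
```

A PI above 1 means the discounted inflows exceed the outlay; here the result sits just below 1, so the project would narrowly fail that test.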
(loyers des activités tertiaires) Vous êtes propriétaire ou bénéficiez d’un bail commercial, vérifiez si votre loyer correspond bien aux termes de la clause d’indexation figurant sur votre contrat. for the reference period of the calculation, materialized progress of a project, i.e. Le plus difficile est d’estimer le nombre de nuitées totales de manière à pouvoir intégrer la part de cette charge au prix de revient. a single project phase or a timeframe) or cumulative over several periods or project phases. Compare the past risk and return of your current investments to the IFA Individualized Index Portfolio recommended at the end of your Risk Capacity Survey, the S&P 500 Simulated Index, or 20 other IFA indexes. En 2012, en France métropolitaine, un salarié coûte en moyenne 51 839 euros par an à son employeur dans les entreprises de dix salariés ou plus. Simply fill in your actual cost, your planned value for a particular period or cumulative for the entire project. part of the In the short-term there may be disequilibrium between inflation and the exchange rate (the one pushes the other), however over time the cost of living index provides the most accurate view of the cost of living. m l an interpretation of the results. Classement de ville complet de Kiplinger.com dit ville moins chère de que 2009 vivre était Pueblo (Colorado), avec un cost-of-living i Lire aussi :Quel est le poids idéal ? Le salaire minimum de vos collaborateurs est fixé dans les conventions collectives de travail (CCT) de la commission paritaire compétente sur la bas… You can enter an amount and built-in Cost of Living Calculator will determine how much more (or less) you would need to maintain the same standard of living in another city. t C'est le rapport Valeur acquise / Coût actuel. q t = t × C I. TABLE ACCESS BY INDEX ROWID TABLE INDVIDUS Cost: 2 Bytes: 270 Cardinality: 1 1 INDEX RANGE SCAN INDEX INDVIDU_NOM Cost: 1 Cardinality: 1 Et donc on passe désormais par notre index :). 
Step 2:Next, determine the number of units produc… I How Project Management Software Improves Productivity, Estimating Activity Durations: Definition, Methods, Practical Uses, Bottom-Up Estimating – Definition, Example, Pros & Cons, Performance Prism for Performance & Stakeholder Management, Balanced Scorecard in Project Management – Uses, Pros & Cons, Number of Communication Channels (+ PMP® Formula & Calculator), How to Do Analogous Estimating – an Illustrated 5-Step Guide, budget before management reserve that was authorized (SPI) Calculator, dedicated article on cost performance index and schedule performance index, Schedule & Cost Performance Index, with Formulae & Examples (SPI/CPI), this overall introduction to earned value analysis, Estimate at Completion (EAC) – with Formulae & Examples, Scope Baseline: Definition | Example | 4-Step Guide | Uses, Cost-Benefit Analysis Checklist for Project Managers (Free Download), Stakeholder Engagement Assessment Matrix: Uses & Example, Agile Release Planning in Hybrid and Agile Projects, Definitive Estimate vs. ROM/Rough Order of Magnitude (+ Calculator), Project Schedule Network Diagram: Definition | Uses | Example, PDM – Precedence Diagramming Method [FS, FF, SS, SF] (+ Example). 4. ayant le même coût que {\displaystyle q_{t}} Cost Performance Index Equation Calculator Earned Value Project Management Formulas. [2], [3], [4] De nos jours, la collecte et le calcul du PCI sont normalisés par l'ASTM International dans deux documents: ASTM D6433 et ASTM D5340. t est donnée par la relation What Are Leads and Lags in Project Management? Comment calculer l'indice du coût de la vie Avec la richesse des ressources modernes, il est devenu relativement simple de trouver un indice du coût de la vie. As the name suggests, these costs are variable in nature and changes with the increase or decrease in the production level or sales volume. Le Cost Index est un élément de comparaison pour un trajet, un avion et une compagnie donnés. 
These measures can be calculated either per period (e.g. Le FMC (ou FMGC) du FMS ne minimise ni la quantité de carburant ni le temps de vol mais le paramètre agrégé quantité de carburant consommée plus quantité de carburant équivalente au temps de vol. Cost of goods sold (COGS) is an accumulation of the direct costs that went into the goods sold by your company. c Découvrez grâce à cet outil de l'ANIL le coût de la garantie hypothécaire d'un emprunt immobilier. Pour calculer le coût de revient, vous devez prendre en compte l’ensemble des charges supportées par l’entreprise. c Ce montant brut englobe bien plus que la rémunération convenue pour les heures de travail : heures supplémentaires, avantages extra-légaux, prime de fin d'année, salaire des jours fériés et des jours de vacances, ainsi qu'un salaire garanti en cas de maladie ou d'accident du travail, entre autres. The NAV of ETF A is$100 and you buy 50 shares for a total cost of $5000 ($100*50). Utiliser un Cost Index de 0 permet au FMS d'utiliser un profil de vol minimisant la consommation de carburant, alors qu'un Cost Index élevé lui privilégie un profil de vol minimisant le temps de vol[1]. La distance air totale est de 3141 Nm et le régime de vol, avec un Cost index de 120, correspond bien au PRM tel que définit plus haut. Câte ore pe zi sau pe săptămână este cuplat aparatul electric? . It is widely used as a measure of inflation. le Cost Performance Index : CPI = EV / AC = Ce que nous avons fait / Ce que nous avons dépensé. = The formulae have been incorporated in accordance with the PMI Project Management Body of Knowledge (source: PMBOK, 6th edition, ch. These are also called Consumer Price Index (CPI) and represent Cost of Living. Profitability Index (PI) = Present Value of Future Cash Flows / Initial Investment. = 100 est également bon, signifie que le coût est exactement conforme au coût planifié. 
The calculation then consists of performing the following operation: (Pi x Vpt x C) / Vi = price revalued at date t. Without instructions in the contract or mention by INSEE of a successor series. Thus, for a given flight time. If you need more background on these indicators, make sure you read our dedicated article on cost performance index and schedule performance index, where we also introduce the abovementioned variances. Budget allocated to work that has been accomplished; amount of cost incurred for the authorized work. Escalation is a measure of the change in the price of goods or services over time within the same economy. As part of its mission to provide information on housing, your … The Cost Index makes it possible to adjust and optimize direct operating costs by taking into account both flight time and fuel consumed[3]. (calculation methods) Sign up for our special newsletter on overweight, obesity and diets. Overweight and obesity: to simplify, from 25 to 30 we speak of … In the short term there may be disequilibrium between inflation and the exchange rate (the one pushes the other); however, over time the cost of living index provides the most accurate view of the cost of living. Say that you have a cost function that gives you the total cost, C(x), of producing x items (shown in the figure below). The average build cost figures used by the HB&R Build Cost Calculator are based on three different size ranges that reflect the economies of scale achieved by constructing larger properties. How to use: Input what you pay for energy per kilowatt hour. A price index (plural: "price indices" or "price indexes") is a normalized average (typically a weighted average) of price relatives for a given class of goods or services in a given region, during a given interval of time. Body Mass Index.
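The "input what you pay per kilowatt hour" instruction above reduces to power × time × tariff. This sketch is mine (the function and parameter names are not from the page); the 0.64 lei/kWh tariff is the average price quoted elsewhere on the page.

```python
def appliance_cost(power_watts: float, hours_per_day: float,
                   price_per_kwh: float, days: float = 30) -> float:
    """Cost of running an appliance: energy used (kWh) times the tariff.

    Energy in kWh = (watts / 1000) * hours per day * number of days.
    """
    kwh = power_watts / 1000 * hours_per_day * days
    return kwh * price_per_kwh

# A 2000 W heater running 3 h/day for 30 days at 0.64 lei/kWh uses 180 kWh.
print(round(appliance_cost(2000, 3, 0.64), 2))
```

Swapping in your own tariff and runtime gives the monthly cost for any appliance with a known power rating.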
L'optimiseur est une composante logicielle extrêmement complexe et qui connaît des améliorations et modifications constantes d'une version à l'autre. Une compagnie aérienne a intérêt à minimiser le coût d'exploitation de ses lignes. Il correspond au cost-weight moyen par jour et donne une indication sur le degré de gravité moyen des cas dans un hôpital donné. Introduceţi preţul de achiziţie pentru 1 kWh de energie electrică lei/kWh A se lua în calcul preţul mediu 0,64 lei/kWh. Midway through the project, upper management asked her for an updated estimate of the total cost of the project. {\displaystyle CI={\frac {c_{time}}{c_{fuel}}}} Consumer Price Index = Cost of Basket (This Month) / Cost of Basket (Last Month) X 100. A Direct Cost relates to any production costs, service costs, or transformation process costs. t t c Il ne s'agit pas d'un objectif mais d'un calcul basé sur une situation: * coût horaire ($/hr vol) selon salaire horaire, coût de leasing … If you are preparing for the PMP par Manon Lamoureux - 22 juin 2017 - Magazine N°230. Three months later, the NAV is$115. The Consumer Price Index (CPI) is an indicator that measures the average change in prices paid by consumers for goods and services over a set period of time. https://fr.wikipedia.org/w/index.php?title=Cost_Index_(Aéronautique)&oldid=174030108, licence Creative Commons attribution, partage dans les mêmes conditions, comment citer les auteurs et mentionner la licence, chez Boeing ($par heure de vol) / ($ pour 100. Regarde combien tu fait de Km avant de tomber sur la réserve puis calcul. Sum of all the stocks= \$95Then, find out the number of stocksNumber of stocks = 5then, calculate the Price Index using the formula given belowPrice Index = Sum of all the prices of Stocks which … c Price Index is a normalized average of price relatives for a particular category of products or services in a specific geographical region for a given time period. 
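The consumer price index formula and the simple stock price index described above can be sketched in a few lines of Python. This is an illustrative sketch; the function names are ours, not from any library, and the basket numbers are hypothetical:

```python
def consumer_price_index(basket_cost_now: float, basket_cost_base: float) -> float:
    """Index number: cost of the basket now relative to the base period, times 100."""
    return basket_cost_now / basket_cost_base * 100


def simple_price_index(prices: list[float]) -> float:
    """Unweighted price index: the average of the item prices in the basket."""
    return sum(prices) / len(prices)


# A basket costing $104 this month vs $100 last month -> index 104.0 (4% inflation)
print(consumer_price_index(104.0, 100.0))

# Five stocks whose prices sum to $95 -> index 19.0
print(simple_price_index([30, 25, 20, 12, 8]))
```

A reading above 100 for the consumer index means prices rose relative to the base period; below 100 means they fell.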
Whether you need to compute the cost performance index (CPI), the schedule performance index (SPI), the cost variance (CV) or the schedule variance (SV) for your project, for self-study or for PMP exam preparation, you will find this calculator useful.

To determine whether someone is too heavy, or too thin, the most commonly used measure is the BMI, or Body Mass Index.

The calculation of a salary starts with determining the gross salary. It is followed by the tax and social charges, which include the TVS, the ANDs and the AENs, and then by maintenance.
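The earned-value indicators named here (CPI, SPI, CV, SV), plus the estimate at completion that answers the "updated total cost" question, can be sketched as follows. The function names and figures are ours, purely for illustration:

```python
def evm_metrics(ev: float, ac: float, pv: float) -> dict:
    """Earned-value metrics per PMBOK: EV = earned value (budgeted cost of work
    performed), AC = actual cost incurred, PV = planned value of work scheduled."""
    return {
        "CPI": ev / ac,  # cost performance index: >1 under budget, <1 over budget
        "SPI": ev / pv,  # schedule performance index: >1 ahead, <1 behind schedule
        "CV": ev - ac,   # cost variance
        "SV": ev - pv,   # schedule variance
    }


def estimate_at_completion(bac: float, cpi: float) -> float:
    """Forecast total cost, assuming the current cost efficiency persists."""
    return bac / cpi


m = evm_metrics(ev=40_000, ac=50_000, pv=44_000)
print(m["CPI"], m["SPI"])  # CPI = 0.8, SPI ~ 0.909: over budget and behind schedule
print(estimate_at_completion(bac=100_000, cpi=m["CPI"]))  # ~125000
```

With a CPI of 0.8 the project earns $0.80 of value per dollar spent, so a $100,000 budget at completion projects to roughly $125,000 of total cost.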
https://stats.libretexts.org/Courses/Lumen_Learning/Book%3A_Concepts_in_Statistics_(Lumen)/02%3A_Summarizing_Data_Graphically_and_Numerically/2.14%3A_Introduction_to_Dotplots
# 2.14: Introduction to Dotplots

## What you’ll learn to do: Describe the distribution of quantitative data using a dotplot.

When we conduct statistical experiments, we often work with large tables that present each individual’s information. To analyze the data, we look to summarize information and patterns about the group as a whole, not just at the individual level. A dotplot is a simple and powerful tool for displaying the distribution of the data: it shows the center, spread, skew, and possible outliers. In this next section, we shall see how to construct and interpret dotplots, as well as build basic vocabulary to talk about the distribution (aka shape) of the data.
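To make the idea concrete before the formal treatment, a dotplot can even be rendered in plain text: one stack of dots per distinct value. This minimal Python sketch is ours (not from the text), using a hypothetical set of quiz scores:

```python
from collections import Counter


def dotplot(data) -> str:
    """Render a dotplot as text: one row per distinct value, one dot per occurrence."""
    counts = Counter(data)
    rows = []
    for value in sorted(counts):
        rows.append(f"{value:>4} | " + "." * counts[value])
    return "\n".join(rows)


quiz_scores = [7, 8, 8, 9, 9, 9, 10, 10, 12]
print(dotplot(quiz_scores))
```

Reading the rows: the tallest stack (at 9) marks the center, the range of values shows the spread, the missing 11 and the lone dot at 12 hint at a gap and a possible outlier.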
http://orbi.ulg.ac.be/browse?type=journal&value=Monthly+Notices+of+the+Royal+Astronomical+Society
References of "Monthly Notices of the Royal Astronomical Society"      in Complete repository Arts & humanities   Archaeology   Art & art history   Classical & oriental studies   History   Languages & linguistics   Literature   Performing arts   Philosophy & ethics   Religion & theology   Multidisciplinary, general & others Business & economic sciences   Accounting & auditing   Production, distribution & supply chain management   Finance   General management & organizational theory   Human resources management   Management information systems   Marketing   Strategy & innovation   Quantitative methods in economics & management   General economics & history of economic thought   International economics   Macroeconomics & monetary economics   Microeconomics   Economic systems & public economics   Social economics   Special economic topics (health, labor, transportation…)   Multidisciplinary, general & others Engineering, computing & technology   Aerospace & aeronautics engineering   Architecture   Chemical engineering   Civil engineering   Computer science   Electrical & electronics engineering   Energy   Geological, petroleum & mining engineering   Materials science & engineering   Mechanical engineering   Multidisciplinary, general & others Human health sciences   Alternative medicine   Anesthesia & intensive care   Cardiovascular & respiratory systems   Dentistry & oral medicine   Dermatology   Endocrinology, metabolism & nutrition   Forensic medicine   Gastroenterology & hepatology   General & internal medicine   Geriatrics   Hematology   Immunology & infectious disease   Laboratory medicine & medical technology   Neurology   Oncology   Ophthalmology   Orthopedics, rehabilitation & sports medicine   Otolaryngology   Pediatrics   Pharmacy, pharmacology & toxicology   Psychiatry   Public health, health care sciences & services   Radiology, nuclear medicine & imaging   Reproductive medicine (gynecology, andrology, obstetrics)   Rheumatology   Surgery   Urology & 
nephrology   Multidisciplinary, general & others Law, criminology & political science   Civil law   Criminal law & procedure   Criminology   Economic & commercial law   European & international law   Judicial law   Metalaw, Roman law, history of law & comparative law   Political science, public administration & international relations   Public law   Social law   Tax law   Multidisciplinary, general & others Life sciences   Agriculture & agronomy   Anatomy (cytology, histology, embryology...) & physiology   Animal production & animal husbandry   Aquatic sciences & oceanology   Biochemistry, biophysics & molecular biology   Biotechnology   Entomology & pest control   Environmental sciences & ecology   Food science   Genetics & genetic processes   Microbiology   Phytobiology (plant sciences, forestry, mycology...)   Veterinary medicine & animal health   Zoology   Multidisciplinary, general & others Physical, chemical, mathematical & earth Sciences   Chemistry   Earth sciences & physical geography   Mathematics   Physics   Space science, astronomy & astrophysics   Multidisciplinary, general & others Social & behavioral sciences, psychology   Animal psychology, ethology & psychobiology   Anthropology   Communication & mass media   Education & instruction   Human geography & demography   Library & information sciences   Neurosciences & behavior   Regional & inter-regional studies   Social work & social policy   Sociology & social sciences   Social, industrial & organizational psychology   Theoretical & cognitive psychology   Treatment & clinical psychology   Multidisciplinary, general & others     Showing results 1 to 20 of 224 1 2 3 4 5 6     X-ray study of bow shocks in runaway starsDe Becker, Michaël ; del Valle, M.V.; Romero, G.E. et alin Monthly Notices of the Royal Astronomical Society (in press)Massive runaway stars produce bow shocks through the interaction of their winds with the interstellar medium, with the prospect for particle acceleration by the shocks. 
These objects are consequently ... [more ▼]Massive runaway stars produce bow shocks through the interaction of their winds with the interstellar medium, with the prospect for particle acceleration by the shocks. These objects are consequently candidates for non-thermal emission. Our aim is to investigate the X-ray emission from these sources. We observed with XMM-Newton a sample of 5 bow shock runaways, which constitutes a significant improvement of the sample of bow shock runaways studied in X-rays so far. A careful analysis of the data did not reveal any X-ray emission related to the bow shocks. However, X-ray emission from the stars is detected, in agreement with the expected thermal emission from stellar winds. On the basis of background measurements we derive conservative upper limits between 0.3 and 10 keV on the bow shocks emission. Using a simple radiation model, these limits together with radio upper limits allow us to constrain some of the main physical quantities involved in the non-thermal emission processes, such as the magnetic field strength and the amount of incident infrared photons. The reasons likely responsible for the non-detection of non-thermal radiation are discussed. Finally, using energy budget arguments, we investigate the detectability of inverse Compton X-rays in a more extended sample of catalogued runaway star bow shocks. From our analysis we conclude that a clear identification of non-thermal X-rays from massive runaway bow shocks requires one order of magnitude (or higher) sensitivity improvement with respect to present observatories. [less ▲]Detailed reference viewed: 27 (0 ULg) H0LiCOW - II. Spectroscopic survey and galaxy-group identification of the strong gravitational lens system HE 0435-1223Sluse, Dominique ; Sonnenfeld, A.; Rumbaugh, N. 
et alin Monthly Notices of the Royal Astronomical Society (2017), 470Galaxies located in the environment or along the line of sight towards gravitational lenses can significantly affect lensing observables, and can lead to systematic errors on the measurement of H[SUB]0 ... [more ▼]Galaxies located in the environment or along the line of sight towards gravitational lenses can significantly affect lensing observables, and can lead to systematic errors on the measurement of H[SUB]0[/SUB] from the time-delay technique. We present the results of a systematic spectroscopic identification of the galaxies in the field of view of the lensed quasar HE 0435-1223 using the W. M. Keck, Gemini and ESO-Very Large telescopes. Our new catalogue triples the number of known galaxy redshifts in the direct vicinity of the lens, expanding to 102 the number of measured redshifts for galaxies separated by less than 3 arcmin from the lens. We complement our catalogue with literature data to gather redshifts up to 15 arcmin from the lens, and search for galaxy groups or clusters projected towards HE 0435-1223. We confirm that the lens is a member of a small group that includes at least 12 galaxies, and find 8 other group candidates near the line of sight of the lens. The flexion shift, namely the shift of lensed images produced by high-order perturbation of the lens potential, is calculated for each galaxy/group and used to identify which objects produce the largest perturbation of the lens potential. This analysis demonstrates that (i) at most three of the five brightest galaxies projected within 12 arcsec of the lens need to be explicitly used in the lens models, and (ii) the groups can be treated in the lens model as an external tidal field (shear) contribution. 
[less ▲]Detailed reference viewed: 23 (2 ULg) The inner structure of early-type galaxies in the Illustris simulationXu, Dandan; Springel, Volker; Sluse, Dominique et alin Monthly Notices of the Royal Astronomical Society (2017), 469Early-type galaxies provide unique tests for the predictions of the cold dark matter cosmology and the baryonic physics assumptions entering models for galaxy formation. In this work, we use the Illustris ... [more ▼]Early-type galaxies provide unique tests for the predictions of the cold dark matter cosmology and the baryonic physics assumptions entering models for galaxy formation. In this work, we use the Illustris simulation to study correlations of three main properties of early-type galaxies, namely the stellar orbital anisotropies, the central dark matter fractions and the central radial density slopes, as well as their redshift evolution since z = 1.0. We find that lower mass galaxies or galaxies at higher redshift tend to be bluer in rest-frame colour, have higher central gas fractions, and feature more tangentially anisotropic orbits and steeper central density slopes than their higher mass or lower redshift counterparts, respectively. The projected central dark matter fraction within the effective radius shows a very mild mass dependence but positively correlates with galaxy effective radii due to the aperture effect. The central density slopes obtained by combining strong lensing measurements with single-aperture kinematics are found to differ from the true density slopes. We identify systematic biases in this measurement to be due to two common modelling assumptions, isotropic stellar orbital distributions and power-law density profiles. We also compare the properties of early-type galaxies in Illustris to those from existing galaxy and strong lensing surveys; we find in general broad agreement but also some tension, which poses a potential challenge to the stellar formation and feedback models adopted by the simulation. 
[less ▲]Detailed reference viewed: 16 (1 ULg) H0LiCOW - I. H0 Lenses in COSMOGRAIL's Wellspring: program overviewSuyu, S. H.; Bonvin, V.; Courbin, F. et alin Monthly Notices of the Royal Astronomical Society (2017), 468Strong gravitational lens systems with time delays between the multiple images allow measurements of time-delay distances, which are primarily sensitive to the Hubble constant that is key to probing dark ... [more ▼]Strong gravitational lens systems with time delays between the multiple images allow measurements of time-delay distances, which are primarily sensitive to the Hubble constant that is key to probing dark energy, neutrino physics and the spatial curvature of the Universe, as well as discovering new physics. We present H0LiCOW (H[SUB]0[/SUB] Lenses in COSMOGRAIL's Wellspring), a program that aims to measure H[SUB]0[/SUB] with <3.5 per cent uncertainty from five lens systems (B1608+656, RXJ1131-1231, HE 0435-1223, WFI2033-4723 and HE 1104-1805). We have been acquiring (1) time delays through COSMOGRAIL and Very Large Array monitoring, (2) high-resolution Hubble Space Telescope imaging for the lens mass modelling, (3) wide-field imaging and spectroscopy to characterize the lens environment and (4) moderate-resolution spectroscopy to obtain the stellar velocity dispersion of the lenses for mass modelling. In cosmological models with one-parameter extension to flat Λ cold dark matter, we expect to measure H[SUB]0[/SUB] to <3.5 per cent in most models, spatial curvature Ω[SUB]k[/SUB] to 0.004, w to 0.14 and the effective number of neutrino species to 0.2 (1σ uncertainties) when combined with current cosmic microwave background (CMB) experiments. These are, respectively, a factor of ˜15, ˜2 and ˜1.5 tighter than CMB alone. Our data set will further enable us to study the stellar initial mass function of the lens galaxies, and the co-evolution of supermassive black holes and their host galaxies. 
This program will provide a foundation for extracting cosmological distances from the hundreds of time-delay lenses that are expected to be discovered in current and future surveys. [less ▲]Detailed reference viewed: 21 (2 ULg) Probing the atmosphere of a sub-Jovian planet orbiting a cool dwarfSedaghati, Elyar; Boffin, Henri M. J.; Delrez, Laetitia et alin Monthly Notices of the Royal Astronomical Society (2017), 468We derive the 0.01 $\mu$m binned transmission spectrum, between 0.74 and 1.0 $\mu$m, of WASP-80b from low resolution spectra obtained with the FORS2 instrument attached to ESO's Very Large Telescope. The ... [more ▼]We derive the 0.01 $\mu$m binned transmission spectrum, between 0.74 and 1.0 $\mu$m, of WASP-80b from low resolution spectra obtained with the FORS2 instrument attached to ESO's Very Large Telescope. The combination of the fact that WASP-80 is an active star, together with instrumental and telluric factors, introduces correlated noise in the observed transit light curves, which we treat quantitatively using Gaussian Processes. Comparison of our results together with those from previous studies, to theoretically calculated models reveals an equilibrium temperature in agreement with the previously measured value of 825K, and a sub-solar metallicity, as well as an atmosphere depleted of molecular species with absorption bands in the IR ($\gg 5\sigma$). Our transmission spectrum alone shows evidence for additional absorption from the potassium core and wing, whereby its presence is detected from analysis of narrow 0.003 $\mu$m bin light curves ($\gg 5\sigma$). Further observations with visible and near-UV filters will be required to expand this spectrum and provide more in-depth knowledge of the atmosphere. These detections are only made possible through an instrument-dependent baseline model and a careful analysis of systematics in the data. 
[less ▲]Detailed reference viewed: 11 (0 ULg) Ground-based monitoring of comet 67P/Churyumov–Gerasimenko gas activity throughout the Rosetta missionOpitom, C.; Snodgrass, C.; Fitzsimmons, A. et alin Monthly Notices of the Royal Astronomical Society (2017), 469Simultaneously to the ESA Rosetta mission, a world-wide ground-based campaign provided measurements of the large scale activity of comet 67P/Churyumov-Gerasimenko through measurement of optically active ... [more ▼]Simultaneously to the ESA Rosetta mission, a world-wide ground-based campaign provided measurements of the large scale activity of comet 67P/Churyumov-Gerasimenko through measurement of optically active gas species and imaging of the overall dust coma. We present more than 2 yr of observations performed with the FORS2 low-resolution spectrograph at the VLT, TRAPPIST and ACAM at the WHT. We focus on the evolution of the CN production as a tracer of the comet activity. We find that it is asymmetric with respect to perihelion and different from that of the dust. The CN emission is detected for the first time at 1.34 au pre-perihelion and production rates then increase steeply to peak about 2 weeks after perihelion at (1.00 ± 0.10) × 10[SUP]25[/SUP] molecules s[SUP]-1[/SUP], while the post-perihelion decrease is more shallow. The evolution of the comet activity is strongly influenced by seasonal effects with enhanced CN production when the Southern hemisphere is illuminated. [less ▲]Detailed reference viewed: 14 (0 ULg) H0LiCOW - III. Quantifying the effect of mass along the line of sight to the gravitational lens HE 0435-1223 through weighted galaxy counts★Rusu, Cristian E.; Fassnacht, Christopher D.; Sluse, Dominique et alin Monthly Notices of the Royal Astronomical Society (2017), 467Based on spectroscopy and multiband wide-field observations of the gravitationally lensed quasar HE 0435-1223, we determine the probability distribution function of the external convergence κ[SUB]ext[/SUB ... 
[more ▼]Based on spectroscopy and multiband wide-field observations of the gravitationally lensed quasar HE 0435-1223, we determine the probability distribution function of the external convergence κ[SUB]ext[/SUB] for this system. We measure the under/overdensity of the line of sight towards the lens system and compare it to the average line of sight throughout the Universe, determined by using the CFHTLenS (The Canada France Hawaii Lensing Survey) as a control field. Aiming to constrain κ[SUB]ext[/SUB] as tightly as possible, we determine under/overdensities using various combinations of relevant informative weighting schemes for the galaxy counts, such as projected distance to the lens, redshift and stellar mass. We then convert the measured under/overdensities into a κ[SUB]ext[/SUB] distribution, using ray-tracing through the Millennium Simulation. We explore several limiting magnitudes and apertures, and account for systematic and statistical uncertainties relevant to the quality of the observational data, which we further test through simulations. Our most robust estimate of κ[SUB]ext[/SUB] has a median value κ^med_ext = 0.004 and a standard deviation σ[SUB]κ[/SUB] = 0.025. The measured σ[SUB]κ[/SUB] corresponds to 2.5 per cent relative uncertainty on the time delay distance, and hence the Hubble constant H[SUB]0[/SUB] inferred from this system. The median κ^med_ext value varies by ˜0.005 with the adopted aperture radius, limiting magnitude and weighting scheme, as long as the latter incorporates galaxy number counts, the projected distance to the main lens and a prior on the external shear obtained from mass modelling. This corresponds to just ˜0.5 per cent systematic impact on H[SUB]0[/SUB]. The availability of a well-constrained κ[SUB]ext[/SUB] makes HE 0435-1223 a valuable system for measuring cosmological parameters using strong gravitational lens time delays. 
[less ▲]Detailed reference viewed: 19 (1 ULg) Period–luminosity relations of fast-rotating B-type stars in the young open cluster NGC 3766Saio, H.; Ekström, S.; Mowlavi, N. et alin Monthly Notices of the Royal Astronomical Society (2017), 467(4), 3864-3873We study the pulsational properties of rapidly rotating main-sequence B-type stars using linear non-adiabatic analysis of non-radial low-frequency modes taking into account the effect of rotation. We ... [more ▼]We study the pulsational properties of rapidly rotating main-sequence B-type stars using linear non-adiabatic analysis of non-radial low-frequency modes taking into account the effect of rotation. We compare the properties of prograde sectoral g and retrograde r modes excited by the κ mechanism at the Fe opacity peak with the newly discovered period–luminosity relation that is obeyed by a group of fast-rotating B-type stars in the young open cluster NGC 3766. The observed relation consists of two sequences in the period versus magnitude diagram, at periods shorter than 0.5 d. We find that this property is consistent with similar period–luminosity relations predicted for excited sectoral prograde g modes of azimuthal orders m = −1 and −2 in fast-rotating stars along an isochrone. We further show that some of the rapidly rotating stars that have photometric variability with periods longer than a day may be caused by r-mode pulsation predicted to be excited in these stars. One fast-rotating star, in particular, shows both short and long periods that can be explained by the simultaneous excitation of g- and r-mode pulsations in models of fast-rotating stars. [less ▲]Detailed reference viewed: 15 (3 ULg) How unique is Plaskett's star? 
A search for organized magnetic fields in short period, interacting or post-interaction massive binary systems★Nazé, Yaël ; Neiner, Coralie; Grunhut, Jason et alin Monthly Notices of the Royal Astronomical Society (2017), 467Amongst O-type stars with detected magnetic fields, the fast rotator in the close binary called Plaskett's star shows a variety of unusual properties. Since strong binary interactions are believed to have ... [more ▼]Amongst O-type stars with detected magnetic fields, the fast rotator in the close binary called Plaskett's star shows a variety of unusual properties. Since strong binary interactions are believed to have occurred in this system, one may wonder about their potential role in generating magnetic fields. Stokes V spectra collected with the low-resolution FORS2 and high-resolution ESPaDOnS and Narval spectropolarimeters were therefore used to search for magnetic fields in 15 interacting or post-interaction massive binaries. No magnetic field was detected in any of them, with 0 G always being within 2σ of the derived values. For 17 out of 25 stars in the systems observed at high resolution, the 90 per cent upper limit on the individual dipolar fields is below the dipolar field strength of Plaskett's secondary; a similar result is found for five out of six systems observed at low resolution. If our sample is considered to form a group of stars sharing similar magnetic properties, a global statistical analysis results in a stringent upper limit of ∼200 G on the dipolar field strength. Moreover, the magnetic incidence rate in the full sample of interacting or post-interaction systems (our targets + Plaskett's star) is compatible with that measured from large surveys, showing that they are not significantly different from the general O-star population. These results suggest that binary interactions play no systematic role in the magnetism of such massive systems. 
[less ▲]Detailed reference viewed: 23 (2 ULg) Peculiar architectures for the WASP-53 and WASP-81 planet-hosting systems★Triaud, Amaury H. M. J.; Neveu-VanMalle, Marion; Lendl, Monika et alin Monthly Notices of the Royal Astronomical Society (2017), 467We report the detection of two new systems containing transiting planets. Both were identified by WASP as worthy transiting planet candidates. Radial velocity observations quickly verified that the ... [more ▼]We report the detection of two new systems containing transiting planets. Both were identified by WASP as worthy transiting planet candidates. Radial velocity observations quickly verified that the photometric signals were indeed produced by two transiting hot Jupiters. Our observations also show the presence of additional Doppler signals. In addition to short-period hot Jupiters, we find that the WASP-53 and WASP-81 systems also host brown dwarfs, on fairly eccentric orbits with semimajor axes of a few astronomical units. WASP-53c is over 16 M[SUB]Jup[/SUB]sin i[SUB]c[/SUB] and WASP-81c is 57 M[SUB]Jup[/SUB]sin i[SUB]c[/SUB]. The presence of these tight, massive companions restricts theories of how the inner planets were assembled. We propose two alternative interpretations: the formation of the hot Jupiters within the snow line or the late dynamical arrival of the brown dwarfs after disc dispersal. We also attempted to measure the Rossiter-McLaughlin effect for both hot Jupiters. In the case of WASP-81b, we fail to detect a signal. For WASP-53b, we find that the planet is aligned with respect to the stellar spin axis. In addition we explore the prospect of transit-timing variations, and of using Gaia's astrometry to measure the true masses of both brown dwarfs and also their relative inclination with respect to the inner transiting hot Jupiters. [less ▲]Detailed reference viewed: 12 (1 ULg) WASP-167b/KELT-13b: Joint discovery of a hot Jupiter transiting a rapidly-rotating F1V starTemple, L. 
Y.; Hellier, C.; Albrow, M. D. et alin Monthly Notices of the Royal Astronomical Society (2017), 471(3), 2743-2752We report the joint WASP/KELT discovery of WASP-167b/KELT-13b, a transiting hot Jupiter with a 2.02-d orbit around a $V$ = 10.5, F1V star with [Fe/H] = 0.1 $\pm$ 0.1. The 1.5 R$_{\rm Jup}$ planet was ... [more ▼]We report the joint WASP/KELT discovery of WASP-167b/KELT-13b, a transiting hot Jupiter with a 2.02-d orbit around a $V$ = 10.5, F1V star with [Fe/H] = 0.1 $\pm$ 0.1. The 1.5 R$_{\rm Jup}$ planet was confirmed by Doppler tomography of the stellar line profiles during transit. We place a limit of $<$ 8 M$_{\rm Jup}$ on its mass. The planet is in a retrograde orbit with a sky-projected spin-orbit angle of $\lambda = -165^{\circ} \pm 5^{\circ}$. This is in agreement with the known tendency for orbits around hotter stars to be more likely to be misaligned. WASP-167/KELT-13 is one of the few systems where the stellar rotation period is less than the planetary orbital period. We find evidence of non-radial stellar pulsations in the host star, making it a $\delta$-Scuti or $\gamma$-Dor variable. The similarity to WASP-33, a previously known hot-Jupiter host with pulsations, adds to the suggestion that close-in planets might be able to excite stellar pulsations. [less ▲]Detailed reference viewed: 10 (4 ULg) WASP-South transiting exoplanets: WASP-130b, WASP-131b, WASP-132b, WASP-139b, WASP-140b, WASP-141b & WASP-142bHellier, Coel; Anderson, D. R.; Collier Cameron, A. et alin Monthly Notices of the Royal Astronomical Society (2017), 465We describe seven new exoplanets transiting stars of V = 10.1 to 12.4. WASP-130b is a "warm Jupiter" having an orbital period of 11.6 d, the longest yet found by WASP. It transits a V = 11.1, G6 star with ... [more ▼]We describe seven new exoplanets transiting stars of V = 10.1 to 12.4. WASP-130b is a "warm Jupiter" having an orbital period of 11.6 d, the longest yet found by WASP. 
It transits a V = 11.1, G6 star with [Fe/H] = +0.26. Warm Jupiters tend to have smaller radii than hot Jupiters, and WASP-130b is in line with this trend (1.23 Mjup; 0.89 Rjup). WASP-131b is a bloated Saturn-mass planet (0.27 Mjup; 1.22 Rjup). Its large scale height coupled with the V = 10.1 brightness of its host star make the planet a good target for atmospheric characterisation. WASP-132b is among the least irradiated and coolest of WASP planets, being in a 7.1-d orbit around a K4 star. It has a low mass and a modest radius (0.41 Mjup; 0.87 Rjup). The V = 12.4, [Fe/H] = +0.22 star shows a possible rotational modulation at 33 d. WASP-139b is the lowest-mass planet yet found by WASP, at 0.12 Mjup and 0.80 Rjup. It is a "super-Neptune" akin to HATS-7b and HATS-8b. It orbits a V = 12.4, [Fe/H] = +0.20, K0 star. The star appears to be anomalously dense, akin to HAT-P-11. WASP-140b is a 2.4-Mjup planet in a 2.2-d orbit that is both eccentric (e = 0.047) and with a grazing transit (b = 0.93) The timescale for tidal circularisation is likely to be the lowest of all known eccentric hot Jupiters. The planet's radius is large (1.4 Rjup), but uncertain owing to the grazing transit. The host star is a V = 11.1, [Fe/H] = +0.12, K0 dwarf showing a prominent 10.4-d rotational modulation. The dynamics of this system are worthy of further investigation. WASP-141b is a typical hot Jupiter, being a 2.7 Mjup, 1.2 Rjup planet in a 3.3-d orbit around a V = 12.4, [Fe/H] = +0.29, F9 star. WASP-142b is a typical bloated hot Jupiter (0.84 Mjup, 1.53 Rjup) in a 2.1-d orbit around a V = 12.3, [Fe/H] = +0.26, F8 star. [less ▲]Detailed reference viewed: 135 (6 ULg) H0LiCOW. VI. 
Testing the fidelity of lensed quasar host galaxy reconstruction
Ding, Xuheng; Liao, Kai; Treu, Tommaso et al., in Monthly Notices of the Royal Astronomical Society (2017), 465.
The empirical correlation between the mass of a supermassive black hole (M_BH) and its host galaxy properties is widely considered to be evidence of their co-evolution. A powerful way to test the co-evolution scenario and learn about the feedback processes linking galaxies and nuclear activity is to measure these correlations as a function of redshift. Unfortunately, currently M_BH can only be estimated in active galaxies at cosmological distances. At these distances, bright active galactic nuclei (AGNs) can outshine the host galaxy, making it extremely difficult to measure the host's luminosity. Strongly lensed AGNs provide in principle a great opportunity to improve the sensitivity and accuracy of the host galaxy luminosity measurements as the host galaxy is magnified and more easily separated from the point source, provided the lens model is sufficiently accurate. In order to measure the M_BH-L correlation with strong lensing, it is necessary to ensure that the lens modelling is accurate, and that the host galaxy luminosity can be recovered to at least a precision and accuracy better than that of the typical M_BH measurement. We carry out extensive and realistic simulations of deep Hubble Space Telescope observations of lensed AGNs obtained by our collaboration. We show that the host galaxy luminosity can be recovered with better accuracy and precision than the typical uncertainty in M_BH (~0.5 dex) for hosts as faint as 2-4 mag dimmer than the AGN itself. Our simulations will be used to estimate bias and uncertainties in the actual measurements to be presented in a future paper.
H0LiCOW - V. New COSMOGRAIL time delays of HE 0435-1223: H0 to 3.8 per cent precision from strong lensing in a flat ΛCDM model
Bonvin, V.; Courbin, F.; Suyu, S. H. et al., in Monthly Notices of the Royal Astronomical Society (2017), 465.
We present a new measurement of the Hubble Constant H_0 and other cosmological parameters based on the joint analysis of three multiply imaged quasar systems with measured gravitational time delays. First, we measure the time delay of HE 0435-1223 from 13-yr light curves obtained as part of the COSMOGRAIL project. Companion papers detail the modelling of the main deflectors and line-of-sight effects, and how these data are combined to determine the time-delay distance of HE 0435-1223. Crucially, the measurements are carried out blindly with respect to cosmological parameters in order to avoid confirmation bias. We then combine the time-delay distance of HE 0435-1223 with previous measurements from systems B1608+656 and RXJ1131-1231 to create a Time Delay Strong Lensing probe (TDSL). In flat Λ cold dark matter (ΛCDM) with free matter and energy density, we find H_0 = 71.9^{+2.4}_{-3.0} km s^{-1} Mpc^{-1} and Ω_Λ = 0.62^{+0.24}_{-0.35}. This measurement is completely independent of, and in agreement with, the local distance ladder measurements of H_0. We explore more general cosmological models combining TDSL with other probes, illustrating its power to break degeneracies inherent to other methods.
The joint constraints from TDSL and Planck are H_0 = 69.2_{-2.2}^{+1.4} km s^{-1} Mpc^{-1}, Ω_Λ = 0.70_{-0.01}^{+0.01} and Ω_k = 0.003_{-0.006}^{+0.004} in open ΛCDM, and H_0 = 79.0_{-4.2}^{+4.4} km s^{-1} Mpc^{-1}, Ω_de = 0.77_{-0.03}^{+0.02} and w = -1.38_{-0.16}^{+0.14} in flat wCDM. In combination with Planck and baryon acoustic oscillation data, when relaxing the constraints on the number of relativistic species we find N_eff = 3.34_{-0.21}^{+0.21} in N_eff ΛCDM, and when relaxing the total mass of neutrinos we find Σm_ν ≤ 0.182 eV in m_ν ΛCDM. Finally, in an open wCDM in combination with Planck and cosmic microwave background lensing, we find H_0 = 77.9_{-4.2}^{+5.0} km s^{-1} Mpc^{-1}, Ω_de = 0.77_{-0.03}^{+0.03}, Ω_k = -0.003_{-0.004}^{+0.004} and w = -1.37_{-0.23}^{+0.18}.

H0LiCOW - IV. Lens mass model of HE 0435-1223 and blind measurement of its time-delay distance for cosmology
Wong, Kenneth C.; Suyu, Sherry H.; Auger, Matthew W. et al., in Monthly Notices of the Royal Astronomical Society (2017), 465.
Strong gravitational lenses with measured time delays between the multiple images allow a direct measurement of the time-delay distance to the lens, and thus a measure of cosmological parameters, particularly the Hubble constant, H_0. We present a blind lens model analysis of the quadruply imaged quasar lens HE 0435-1223 using deep Hubble Space Telescope imaging, updated time-delay measurements from the COSmological MOnitoring of GRAvItational Lenses (COSMOGRAIL), a measurement of the velocity dispersion of the lens galaxy based on Keck data, and a characterization of the mass distribution along the line of sight.
HE 0435-1223 is the third lens analysed as a part of the H_0 Lenses in COSMOGRAIL's Wellspring (H0LiCOW) project. We account for various sources of systematic uncertainty, including the detailed treatment of nearby perturbers, the parametrization of the galaxy light and mass profile, and the regions used for lens modelling. We constrain the effective time-delay distance to be D_{Δt} = 2612_{-191}^{+208} Mpc, a precision of 7.6 per cent. From HE 0435-1223 alone, we infer a Hubble constant of H_0 = 73.1_{-6.0}^{+5.7} km s^{-1} Mpc^{-1} assuming a flat ΛCDM cosmology. The cosmographic inference based on the three lenses analysed by H0LiCOW to date is presented in a companion paper (H0LiCOW Paper V).

A new asteroseismic diagnostic for internal rotation in γ Doradus stars
Ouazzani, Rhita-Maria; Salmon, Sébastien; Antoci, V. et al., in Monthly Notices of the Royal Astronomical Society (2017), 465.
With four years of nearly-continuous photometry from Kepler, we are finally in a good position to apply asteroseismology to $\gamma$ Doradus stars. In particular, several analyses have demonstrated the possibility to detect non-uniform period spacings, which have been predicted to be directly related to rotation. In the present work, we define a new seismic diagnostic for rotation in $\gamma$ Doradus stars that are too rapidly rotating to present rotational splittings. Based on the non-uniformity of their period spacings, we define the observable $\Sigma$ as the slope of the period spacing when plotted as a function of period. We provide a one-to-one relation between this observable $\Sigma$ and the internal rotation, which applies widely in the instability strip of $\gamma$ Doradus stars.
We apply the diagnostic to a handful of stars observed by Kepler. Thanks to g-modes in $\gamma$ Doradus stars, we are now able to determine the internal rotation of stars on the lower main sequence, which is still not possible for Sun-like stars.

Strong XUV irradiation of the Earth-sized exoplanets orbiting the ultracool dwarf TRAPPIST-1
Wheatley, Peter J.; Louden, Tom; Bourrier, Vincent et al., in Monthly Notices of the Royal Astronomical Society (2017), 465.
We present an XMM-Newton X-ray observation of TRAPPIST-1, which is an ultracool dwarf star recently discovered to host three transiting and temperate Earth-sized planets. We find the star is a relatively strong and variable coronal X-ray source with an X-ray luminosity similar to that of the quiet Sun, despite its much lower bolometric luminosity. We find L_x/L_bol = 2-4x10^-4, with the total XUV emission in the range L_xuv/L_bol = 6-9x10^-4. Using a simple energy-limited model we show that the relatively close-in Earth-sized planets, which span the classical habitable zone of the star, are subject to sufficient X-ray and EUV irradiation to significantly alter their primary and perhaps secondary atmospheres. Understanding whether this high-energy irradiation makes the planets more or less habitable is a complex question, but our measured fluxes will be an important input to the necessary models of atmospheric evolution.

The discovery of a planetary candidate around the evolved low-mass Kepler giant star HD 175370
Hrudková, M.; Hatzes, A.; Karjalainen, R.
et al., in Monthly Notices of the Royal Astronomical Society (2017), 464.
We report on the discovery of a planetary companion candidate with a minimum mass M sin i = 4.6 ± 1.0 M_Jupiter orbiting the K2 III giant star HD 175370 (KIC 007940959). This star was a target in our programme to search for planets around a sample of 95 giant stars observed with Kepler. This detection was made possible using precise stellar radial velocity measurements of HD 175370 taken over five years and four months using the coudé echelle spectrograph of the 2-m Alfred Jensch Telescope and the fibre-fed echelle spectrograph High Efficiency and Resolution Mercator Echelle Spectrograph of the 1.2-m Mercator Telescope. Our radial velocity measurements reveal a periodic (349.5 ± 4.5 d) variation with a semi-amplitude K = 133 ± 25 m s^{-1}, superimposed on a long-term trend. A low-mass stellar companion with an orbital period of ~88 yr in a highly eccentric orbit and a planet in a Keplerian orbit with an eccentricity e = 0.22 are the most plausible explanation of the radial velocity variations. However, we cannot exclude the existence of stellar envelope pulsations as a cause for the low-amplitude radial velocity variations and only future continued monitoring of this system may answer this uncertainty. From Kepler photometry, we find that HD 175370 is most likely a low-mass red giant branch or asymptotic giant branch star.

Water loss from terrestrial planets orbiting ultracool dwarfs: implications for the planets of TRAPPIST-1
Bolmont, E.; Selsis, F.; Owen, J. E.
et al., in Monthly Notices of the Royal Astronomical Society (2017), 464.
Ultracool dwarfs (UCD; $T_{\rm eff}<\sim3000~$K) cool to settle on the main sequence after $\sim$1 Gyr. For brown dwarfs, this cooling never stops. Their habitable zone (HZ) thus sweeps inward at least during the first Gyr of their lives. Assuming they possess water, planets found in the HZ of UCDs have experienced a runaway greenhouse phase too hot for liquid water prior to entering the HZ. It has been proposed that such planets are desiccated by this hot early phase and enter the HZ as dry worlds. Here we model the water loss during this pre-HZ hot phase taking into account recent upper limits on the XUV emission of UCDs and using 1D radiation-hydrodynamic simulations. We address the whole range of UCDs but also focus on the planets recently found around the $0.08~M_\odot$ dwarf TRAPPIST-1. Despite assumptions maximizing the FUV-photolysis of water and the XUV-driven escape of hydrogen, we find that planets can retain significant amounts of water in the HZ of UCDs, with a sweet spot in the $0.04$-$0.06~M_\odot$ range. We also studied the TRAPPIST-1 system using observed constraints on the XUV-flux. We find that TRAPPIST-1b and c may have lost as much as 15 Earth Oceans and planet d -- which might be inside the HZ -- may have lost less than 1 Earth Ocean. Depending on their initial water contents, they could have enough water to remain habitable. TRAPPIST-1 planets are key targets for atmospheric characterization and could provide strong constraints on the water erosion around UCDs.

Rossiter-McLaughlin models and their effect on estimates of stellar rotation, illustrated using six WASP systems
Brown, D. J. A.; Triaud, A. H. M. J.; Doyle, A.
P. et al., in Monthly Notices of the Royal Astronomical Society (2017), 464.
We present new measurements of the projected spin--orbit angle $\lambda$ for six WASP hot Jupiters, four of which are new to the literature (WASP-61, -62, -76, and -78), and two of which are new analyses of previously measured systems using new data (WASP-71, and -79). We use three different models based on two different techniques: radial velocity measurements of the Rossiter--McLaughlin effect, and Doppler tomography. Our comparison of the different models reveals that they produce projected stellar rotation velocity ($v \sin I_{\rm s}$) measurements often in disagreement with each other and with estimates obtained from spectral line broadening. The Bou\'e model for the Rossiter--McLaughlin effect consistently underestimates the value of $v\sin I_{\rm s}$ compared to the Hirano model. Although $v \sin I_s$ differed, the effect on $\lambda$ was small for our sample, with all three methods producing values in agreement with each other. Using Doppler tomography, we find that WASP-61\,b ($\lambda=4^\circ.0^{+17.1}_{-18.4}$), WASP-71\,b ($\lambda=-1^\circ.9^{+7.1}_{-7.5}$), and WASP-78\,b ($\lambda=-6^\circ.4\pm5.9$) are aligned. WASP-62\,b ($\lambda=19^\circ.4^{+5.1}_{-4.9}$) is found to be slightly misaligned, while WASP-79\,b ($\lambda=-95^\circ.2^{+0.9}_{-1.0}$) is confirmed to be strongly misaligned and has a retrograde orbit. We explore a range of possibilities for the orbit of WASP-76\,b, finding that the orbit is likely to be strongly misaligned in the positive $\lambda$ direction.
http://tex.stackexchange.com/questions/170169/latex-placing-image-how-can-i-put-image-wherever-i-want
# latex placing image - how can I put image wherever I want [duplicate]

I am trying to insert an image in the middle of the page, but it keeps going to the top of the page. Why is that happening? I tried this:

    \begin{figure}
      \begin{minipage}{\textwidth}
        \includegraphics[width=75.5mm]{myimage}
      \end{minipage}
    \end{figure}

I have been looking for a solution for 2 hours, without a single sign of success!

## Answer

The only purpose of the figure environment is to allow LaTeX to move the figure, and a minipage inside figure has no purpose at all unless you have footnotes, so you just need

    %\begin{figure}
    %  \begin{minipage}{\textwidth}
        \includegraphics[width=75.5mm]{myimage}
    %  \end{minipage}
    %\end{figure}

and the image will appear wherever you have \includegraphics in the source. Note that the rules for placing images are the same as the rules for placing X. Normally, if you enter X it comes where you put it, either as part of the current line, or perhaps indented at the start of a paragraph. If you do

    \begin{figure}
    X
    \end{figure}

then the X will float and be positioned by the page-breaking routines. \includegraphics works in exactly the same way, or said more correctly, the float placement routines have no information about the content of the figure environment.

## Answer

figure is a floating object and minipage is not, so by encapsulating a minipage in a figure environment you make the minipage float. Another option is using the "H" position specifier; see http://en.wikibooks.org/wiki/LaTeX/Floats,_Figures_and_Captions

That being said, I prefer to let LaTeX control the position of the image and use a reference to that figure in your text. Letting LaTeX control the position of all elements of your page is, in my opinion, a more robust and neat solution.
- thanks man, helped a lot – doniyor Apr 7 '14 at 16:24
- If you put a figure inside a minipage you will get a syntax error and the float will be discarded. – David Carlisle Apr 7 '14 at 16:29
- you are absolutely right, I updated my answer – Robert Apr 7 '14 at 20:28
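To make the "H" option from the second answer concrete, here is a minimal sketch (assuming the float package is available; myimage stands in for the asker's image file):

```latex
\documentclass{article}
\usepackage{graphicx}
\usepackage{float} % provides the [H] placement specifier

\begin{document}
Some text before the image.

% [H] = put the float exactly HERE; the page breaker will not move it
\begin{figure}[H]
  \centering
  \includegraphics[width=75.5mm]{myimage}
  \caption{A figure that stays where it is written.}
\end{figure}

Some text after the image.
\end{document}
```

Note that with [H] you keep the figure environment (and thus captions and \label/\ref), unlike the bare \includegraphics approach in the first answer.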
http://vlaq.com/exam-questions.php?exam=ES0-003-RES-PowerFuse-Series-8-Basic
# RES PowerFuse Series 8 Basic Real test Questions with 100% valid Dumps - Certification Exams Dumps

### 100% Valid RES PowerFuse Series 8 Basic Dumps with a real test question bank and practice tests that will prepare you to get 100% marks in the actual exam. - Certification Exams Dumps

RES PowerFuse Series 8 Basic dump questions with Latest ES0-003 practice exams | http://vlaq.com/

# RES ES0-003 : RES PowerFuse Series 8 Basic Exam

Exam Dumps Organized by Chuanli

## Latest 2021 Updated ES0-003 test Dumps | question bank with actual Questions

### 100% valid ES0-003 Real Questions - Updated Daily - 100% Pass Guarantee

ES0-003 test Dumps Source : Download 100% Free ES0-003 Dumps PDF and VCE
Test Number : ES0-003
Test Name : RES PowerFuse Series 8 Basic
Vendor Name : RES
Question Bank : Check Questions

Ensure your success with an ES0-003 practice test backed by a full bank of test questions. If you think you can pass the ES0-003 test just by reading course books, you are mistaken. There are several difficult questions you will never find in an ES0-003 textbook. We offer ES0-003 PDF dumps covering the difficult questions you will actually see on the test screen. Download the 100% free test dumps before you sign up for the full ES0-003 question set.

It is a hard task to choose a good test prep provider from among hundreds of bad dumps providers. If your search leads you to a bad provider, your certification will remain a dream. Failing the ES0-003 test is a terrible feeling, and often it simply means you relied on an invalid and outdated provider. We are not saying that every ES0-003 dumps provider is a fake; there are some good providers that maintain their resources to supply the most up-to-date and valid ES0-003 test dumps. killexams.com is at the top of that list. We have a team working to collect 100% valid, up-to-date and reliable ES0-003 questions that work in the real exam.
Download the 100% free test dumps of the ES0-003 test and review them. If you are satisfied, sign up for the full ES0-003 PDF questions with the VCE practice test and become a successful candidate. And do send us your review of the ES0-003 test experience later, after passing the real ES0-003 exam.

Features of Killexams ES0-003 test dumps
-> Instant ES0-003 test dumps download access
-> Comprehensive ES0-003 Questions and Answers
-> 98% Success Rate on the ES0-003 Exam
-> Guaranteed Genuine ES0-003 test Questions
-> ES0-003 Questions Updated on a Regular Basis
-> Accurate ES0-003 test Dumps
-> 100% Portable ES0-003 test Files
-> Fully Featured ES0-003 VCE test Simulator
-> Unlimited ES0-003 test Download Access
-> Great Discount Coupons
-> 100% Secured Download Account
-> 100% Confidentiality Ensured
-> 100% Success Guarantee
-> 100% Free braindumps for evaluation
-> No Hidden Cost
-> No Monthly Charges
-> No Automatic Account Renewal
-> ES0-003 test Update Intimation by Email
-> Free Technical Support

Exam Details at: https://killexams.com/pass4sure/exam-detail/ES0-003
Pricing Details at: https://killexams.com/exam-price-comparison/ES0-003
See Complete List: https://killexams.com/vendors-exam-list

Discount coupons on the full ES0-003 test dumps:
WC2020: 60% flat discount on each exam
PROF17: 10% further discount on value greater than $69
DEAL17: 15% further discount on value greater than $99

## Killexams Review | Reputation | Testimonials | Feedback

Just try these actual test questions and success is yours.
I took this test last month and passed it thanks to my training with the killexams.com kit. It is a great test resource, more dependable than I could have expected. All questions are valid, and it also contains plenty of study material.
Much better and more dependable than I expected — I passed with more than 97%, which is an unusually high ES0-003 test score. I do not understand why so few IT people know about killexams.com; perhaps it is just my circle. Regardless, I will be spreading the word among my friends, because this is superb and will likely be useful to many.

These ES0-003 actual test questions work in the real exam.
killexams.com was a very refreshing find for me, particularly because the material I used through killexams.com was what got me to pass my ES0-003 exam. Passing the ES0-003 test is not easy, but it was for me, because I had access to great study material, and I am deeply thankful for that.

Valid, up-to-date and latest Q&A bank of the ES0-003 exam.
The answers are explained briefly, in plain language, and still make a big impact; they are easy to understand and follow. I took the help of killexams.com Q&A and passed my ES0-003 test with a healthy score of 87. Thank you, killexams.com Q&A. I would love to recommend killexams.com Q&A for preparation for the ES0-003 exam.

No source is more effective than this ES0-003 source.
At the dinner table, my father asked me directly whether I was going to fail my forthcoming ES0-003 test, and I answered with a firm "No way". He was impressed by my confidence, but I was so afraid of disappointing him. Thank God for killexams.com, since it helped me keep my word and pass my ES0-003 test. I am thankful.

Exactly the same questions — is it possible?
I passed the ES0-003 test and highly recommend killexams.com to everyone who is considering buying their material.
This is a thoroughly valid and reliable coaching tool, a wonderful choice for people who cannot afford full-time classes (which would be a waste of time and money anyway, if you ask us — especially if you own Killexams). In case you have been wondering: the questions are real!

# RES 8 techniques

### 3D phonon microscopy with sub-micron axial resolution | ES0-003 Real test Questions and test Cram

Experimental setup

Picosecond laser ultrasound15,16 (PLU) uses two short ($\sim$150 fs, 100 MHz repetition rate) laser pulses to generate and detect high-frequency acoustic waves. The pump pulse (390 nm) is absorbed by the sample itself or by a transducer layer (often a metal film); the absorbed light causes rapid heating and, via thermal expansion, launches an acoustic wave packet into the sample. The method is commonly used to study thin films33,34 or micro/nano structure dynamics35,36,37,38. In this work an objective lens with NA of 0.42 was used for imaging of phantoms and 0.6 for biological cells. The TRBS signal17 arises from the interference of the reflected or transmitted probe (780 nm) laser beam with a portion scattered from the acoustic wave packet propagating in the sample (see Fig. 1a). As the wave moves, the phase of the scattered component changes relative to the non-scattered light, producing an oscillating signal. Figure 2a shows a typical experimental configuration for time-resolved Brillouin scattering measurements in transparent media. The pump and probe laser beams are delivered from an inverted microscope onto the transducer substrate. The transducer is specially designed for phonon imaging and consists of a gold and indium tin oxide structure26,27.
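As a rough numerical sketch of the TRBS principle just described — note that the refractive index, sound speed, decay time and sampling rate below are illustrative water-like assumptions, not values from the paper — one can simulate an oscillating trace and recover the sound speed from its Brillouin frequency:

```python
import numpy as np

# --- Illustrative values (assumed, water-like medium) ---
n = 1.33          # refractive index
v_sound = 1500.0  # sound speed, m/s
lam = 780e-9      # probe wavelength, m (as in the setup above)

# Brillouin oscillation frequency, f_B = 2 n v / lambda
f_B = 2 * n * v_sound / lam  # ~5.1 GHz for these values
print(f"expected Brillouin frequency: {f_B / 1e9:.2f} GHz")

# Simulate a decaying time-resolved Brillouin oscillation (the TRBS signal)
fs = 100e9                      # sampling rate, Hz (10 ps steps)
t = np.arange(0, 5e-9, 1 / fs)  # 5 ns trace
trace = np.exp(-t / 2e-9) * np.cos(2 * np.pi * f_B * t)

# Recover the oscillation frequency from the spectral peak
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f_est = freqs[np.argmax(np.abs(np.fft.rfft(trace)))]

# Invert back to sound speed: v = f_B * lambda / (2 n)
v_est = f_est * lam / (2 * n)
print(f"recovered: {f_est / 1e9:.2f} GHz -> {v_est:.0f} m/s")
```

The article itself extracts the oscillation frequency with a Morlet wavelet transform so that it can vary along the trace; this sketch uses a plain FFT peak instead, which suffices for a single constant-frequency trace.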
The pump-probe system used is synchronised by means of an ASOPS configuration39, which has greatly accelerated the data acquisition speed, allowing extremely weak signals to be detected in a reasonable time, or enough features to be captured so that images can be recorded.

Modelling

With the goal of characterising axial resolution and examining signal processing, we have modelled this process to produce exemplar waveforms as previously reported28. The model is based on16 and calculates the thermo-elastic generation process for a laser pulse being absorbed by the sample, which allows interferometric or reflectivity signals to be obtained. The optical absorption is calculated using Fresnel coefficients, giving the absorption of the laser pulse in each layer of the sample. This is then converted to heat through the thermal properties of each layer. The thermal expansion of heated regions then leads to an initial stress in the material, which is then propagated through the sample. The detection process uses a Green's function approach to calculate how the probe laser beam is reflected from the sample structure; the displacement and the changes in reflectivity due to the propagating stress are calculated.

Signal processing

Wavelet transforms were calculated for each time trace (or each pixel on the image); at each time position, the centre frequency of the wavelet number with the highest amplitude was stored. This results in an x, y, t matrix of Brillouin frequencies. From this matrix, a number of conversions are possible: if the refractive index is known, it can be converted to sound speed using $f_B = 2 n \nu / \lambda_o$, and subsequently the temporal axis can be converted to a spatial one via $z = \nu t$. For the case of the cells presented in Fig.
6, the matrix was left as a frequency matrix, although the temporal axis was converted to a spatial one by approximating the sound speed of the whole matrix to that of water. The wavelet method was used because it has the advantage that the number of cycles used for each wavelet number is fixed, and so is the frequency resolution. This means that the time resolution scales with frequency automatically, since it depends on the number of cycles used. The wavelet transform is defined as:

$$W_\psi(a,b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} f(t)\,\psi\!\left(\frac{t-b}{a}\right) dt, \qquad (3)$$

where $\psi$ is the mother wavelet, $a$ denotes the wavelet dilation and $b$ is the time shift of the wavelet. The mother wavelet used here is the Morlet, which is a complex exponential multiplied by a Gaussian window:

$$\psi(\eta) = \pi^{-1/4} e^{i\omega_0 \eta} e^{-\eta^2/2}, \qquad (4)$$

where $\psi$ is the value of the wavelet at non-dimensional time $\eta$ and $\omega_0$ is the wavenumber. Matlab's wavelet analysis toolbox and a complex Morlet transform (cmor) were used to extract $f_B(t)$ from the time-resolved signal. The width of the transform is defined by the bandwidth parameter (a).

Fabrication

Polymer sample

A BPRS photoresist sample was fabricated using standard photolithography techniques. A transducer-coated coverslip was spin coated with BPRS-100 @ 4000 rpm, then cured at 90 °C for 5 min on a hot plate. The spin-coated film was then exposed through a mask for 8 s using a Karl Suss MJB3 mask-aligner (7 mW/cm^2) and developed for 25 s using AZ400K developer diluted 8:1 with distilled water.

Polymer phantom samples

A glass coverslip etched with a reference grid was cleaned with acetone and isopropanol and dried with N_2.
It was then coated with a three-layer thin film stack to act as a generation transducer27. The stack consisted of 20 nm gold, 140 nm ITO and 20 nm of gold (for all the experimental results presented in this work). Polystyrene (PS) microspheres 10% wt (Bangs Laboratories Inc., USA) with 5 micron diameters were drop coated onto the substrate, which was spun-cast (spin-coater model ws-400 bz-6npp, Laurell Technologies Corporation, USA) for 30 s at 750 rpm and placed into an oven @ 245 °C for 30–45 min to partially melt the microspheres. For the 2nd sample, polystyrene (PS) and poly(methyl methacrylate) (PMMA) microspheres were prepared over an etched gridded glass coverslip (Electron Microscopy Sciences #72265-50). The glass slide was initially cleaned with acetone and isopropanol and dried with N_2. A transducer system was fabricated over the coverslip surface. A diluted solution (1:500; v:v) of 10 μm PMMA microspheres 10% wt (Bangs Laboratories Inc., USA) in water was drop coated over the Au surface. The sample was spun-cast for 30 s at 750 rpm (spin-coater model ws-400 bz-6npp, Laurell Technologies Corporation, USA) and placed in an oven @ 245 °C for 5 min. The sample was removed from the oven and a diluted solution (1:500; v:v) of 5 μm PS microspheres 10% wt (Bangs Laboratories Inc., USA) in water was drop-coated over the surface. The sample was spun-cast for 30 s at 750 rpm and placed back in an oven @ 245 °C for 30–45 min. These two polymers have different refractive indices and acoustic velocities and so have different Brillouin frequencies.

Cell preparation

Sterile transducer-coated sapphire coverslips (25 mm diameter) were coated with Poly-L-lysine solution (0.01% (w/v); Sigma-Aldrich; P4707) to promote cell adhesion.
Coverslips were each seeded with 2.5 $$\times$$ 10$$^5$$ mouse embryonic fibroblast NIH-3T3 cells (ATCC$$\circledR$$ CRL-1658, USA). These were cultured in DMEM (Sigma-Aldrich; D6421) with 10% (v/v) Fetal Bovine Serum (FBS: Sigma-Aldrich; F7524), 1% (v/v) penicillin-streptomycin (Sigma-Aldrich; P0781) and 1% (v/v) L-glutamine (200 mM: Sigma-Aldrich; G7513) at 37 $$^\circ$$C/5% CO2 for 24 h. NIH-3T3 cells were fixed to the gold coated coverslips using 4% (v/v) formaldehyde solution (Sigma-Aldrich; 1004968350) for 30 min and washed 3$$\times$$ with Dulbecco’s Phosphate Buffered Saline (PBS: Sigma-Aldrich; D8537). Coverslips were stored at 4 $$^\circ$$C in PBS until required.
# Competitive analysis of the top-K ranking problem Xi Chen Stern School of Business, New York University, email: xchen3@stern.nyu.edu    Sivakanth Gopi Department of Computer Science, Princeton University, email: sgopi@cs.princeton.edu    Jieming Mao Department of Computer Science, Princeton University, email: jiemingm@cs.princeton.edu    Jon Schneider Department of Computer Science, Princeton University, email: js44@cs.princeton.edu ###### Abstract Motivated by applications in recommender systems, web search, social choice and crowdsourcing, we consider the problem of identifying the set of top K items from noisy pairwise comparisons. In our setting, we are non-actively given r pairwise comparisons between each pair of n items, where each comparison has noise constrained by a very general noise model called the strong stochastic transitivity (SST) model. We analyze the competitive ratio of algorithms for the top-K problem. In particular, we present a linear time algorithm for the top-K problem which has a competitive ratio of Õ(√n); i.e. to solve any instance of top-K, our algorithm needs at most Õ(√n) times as many samples as needed by the best possible algorithm for that instance (in contrast, all previously known algorithms for the top-K problem have competitive ratios of Ω̃(n) or worse). We further show that this is tight: any algorithm for the top-K problem has competitive ratio at least Ω̃(√n). ## 1 Introduction The problem of inferring a ranking over a set of n items, such as documents, images, movies, or URL links, is an important problem in machine learning and finds many applications in recommender systems, web search, social choice, and many other areas. One of the most popular forms of data for ranking is pairwise comparison data, which can be easily collected via, for example, crowdsourcing, online games, or tournament play.
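To make the data model concrete, the following sketch (hypothetical Python, not from the paper: the score vector `w`, the sample count `r`, and the BTL parametric form are invented for illustration — the SST model studied here is strictly more general) generates noisy pairwise comparisons and selects a top-K set by counting wins:

```python
import numpy as np

rng = np.random.default_rng(0)

n, K, r = 8, 3, 400  # items, size of top set, comparisons per pair
# hypothetical BTL scores, sorted so that items 0..K-1 are the true top-K
w = np.array([4.0, 3.4, 2.9, 2.2, 1.8, 1.5, 1.2, 1.0])

# P[i, j] = probability that item i beats item j under BTL
P = w[:, None] / (w[:, None] + w[None, :])

# draw r noisy comparisons for every pair and count each item's wins
wins = np.zeros(n)
for i in range(n):
    for j in range(i + 1, n):
        i_wins = rng.binomial(r, P[i, j])
        wins[i] += i_wins
        wins[j] += r - i_wins

top_k = {int(i) for i in np.argsort(-wins)[:K]}
print(top_k)
```

Ordering items by total wins is the counting-style aggregation discussed later in the paper; the sketch only illustrates the sampling setup, not any of the paper's algorithmic contributions.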
The problem of ranking aggregation from pairwise comparisons has been widely studied and most work aims at inferring a total ordering of all the items (see, e.g.,  Negahban12RankCentrality ()). However, for some applications with a large number of items (e.g., rating of restaurants in a city), it is only necessary to identify the set of top items. For these applications, inferring the total global ranking order unnecessarily increases the complexity of the problem and requires significantly more samples. In the basic setting for this problem, there is a set of items with some true underlying ranking. For possible pair of items, an analyst is given noisy pairwise comparisons between those two items, each independently ranking above with some probability . From this data, the analyst wishes to identify the top items in the ranking, ideally using as few samples as is necessary to be correct with sufficiently high probability. The noise in the pairwise comparisons (i.e. the probabilities ) is constrained by the choice of noise model. Many existing models - such as the Bradley-Terry-Luce model (BTL) (Bradley52, ; Luce59, ), the Thurstone model (Thurstone:27, ), and their variants - are parametric comparison models, in that each probability is of the form , where is a ‘score’ associated with item . While these parametric models yield many interesting algorithms with provable guarantees (Chen15, ; Jang13, ; Suh16Adversarial, ), the models enforce strong assumptions on the probabilities of incorrect pairwise comparisons that might not hold in practice (Davidson59, ; McLaughlin65, ; Tversky72, ; Ballinger97, ). A more general class of pairwise comparison model is the strong stochastic transitivity (SST) model, which subsumes the aforementioned parameter models as special cases and has a wide range of applications in psychology and social science (see, e.g., Davidson59 (); McLaughlin65 (); Fishburn73 ()). 
The SST model only enforces the following coherence assumption: if is ranked above , then for all other items . Shah15Sto () pioneered the algorithmic and theoretical study of ranking aggregation under SST models. For top- ranking problems, Shah15Sim () proposed a counting-based algorithm, which simply orders the items by the total number of pairwise comparisons won. For a certain class of instances, this algorithm is in fact optimal; any algorithm with a constant probability of success on these instances needs roughly at least as many samples as this counting algorithm. However, this does not rule out the existence of other instances where the counting algorithm performs asymptotically worse than some other algorithm. In this paper, we study algorithms for the top- problem from the standpoint of competitive analysis. We give an algorithm which, on any instance, needs at most times as many samples as the best possible algorithm for that instance to succeed with the same probability. We further show this result is tight: for any algorithm, there are instances where that algorithm needs at least times as many samples as the best possible algorithm. In contrast, the counting algorithm of Shah15Sto () sometimes requires times as many samples as the best possible algorithm, even when the probabilities are bounded away from . Our main technical tool is the introduction of a new decision problem we call domination, which captures the difficulty of solving the top problem while being simpler to directly analyze via information theoretic techniques. The domination problem can be thought of as a restricted one-dimensional variant of the top- problem, where the analyst is only given the outcomes of pairwise comparisons that involve item or , and wishes to determine whether is ranked above . 
Our proof of the above claims proceeds by proving analogous competitive ratio results for the domination problem, and then carefully embedding the domination problem as part of the top- problem. ### 1.1 Related Work The problem of sorting a set of items from a collection of pairwise comparisons is one of the most classical problems in computer science and statistics. Many works investigate the problem of recovering the total ordering under noisy comparisons drawn from some parametric model. For the BTL model, Negahban et al. Negahban12RankCentrality () propose the RankCentrality algorithm, which serves as the building block for many spectral ranking algorithms. Lu and Boutilier Craig:11 () give an algorithm for sorting in the Mallows model. Rajkumar and Agarwal Rajkumar14 () investigate which statistical assumptions (BTL models, generalized low-noise condition, etc.) guarantee convergence of different algorithms to the true ranking. More recently, the problem of top- ranking has received a lot of attention. Chen and Suh Chen15 (), Jang et al. Jang13 (), and Suh et al. Suh16Adversarial () all propose various spectral methods for the BTL model or a mixture of BTL models. Eriksson Eriksson13 () considers a noisy observation model where comparisons deviating from the true ordering are i.i.d. with bounded probability. In Shah15Sim (), Shah and Wainwright consider the general SST models and propose the counting-based algorithm, which motivates our work. The top- ranking problem is also related to the best arm identification in multi-armed bandit Bubeck:13 (); Jamieson:14 (); Zhou:14 (). However, in the latter problem, the samples are i.i.d. random variables rather than pairwise comparisons and the goal is to identify the top distributions with largest means. 
This paper and the above references all belong to the non-active setting: the set of data provided to the algorithm is fixed, and there is no way for the algorithm to adaptively choose additional pairwise comparisons to query. In several applications, this property is desirable, specifically if one is using a well-established dataset or if adaptivity is costly (e.g. on some crowdsourcing platforms). Nonetheless, the problems of sorting and top- ranking are incredibly interesting in the adaptive setting as well. Several works (Ailon11, ; Jamieson11, ; Mathieu07, ; Braverman08, ) consider the adaptive noisy sorting problem with (noisy) pairwise comparisons and explore the sample complexity to recover an (approximately) correct total ordering in terms of some distance function (e.g,., Kendall’s tau). In Wauthier13 (), Wauthier et al. propose simple weighted counting algorithms to recovery an approximate total ordering from noisy pairwise comparisons. Dwork et al. Dwork01 () and Ailon et al. Ailon08 () consider a related Kemeny optimization problem, where the goal is to determine the total ordering that minimizes the sum of the distances to different permutations. More recently, the top- ranking problem in the active setting has been studied by Braverman et al. BMW16 () where they consider the sample complexity of algorithms that use a constant number of rounds of adaptivity. All of this work takes place in much more constrained noise models than the SST model. Extending our work to the active setting is an interesting open problem. ## 2 Preliminaries and Problem Setup Consider the following problem. An analyst is given a collection of items, labelled through . These items have some true ordering defined by a permutation such that for , the item labelled has a better rank than the item labelled (i.e., the item with label has a better rank than the item if and only if ). The analyst’s goal is to determine the set of the top items, i.e., . The analyst receives samples. 
Each sample consists of pairwise comparisons between all pairs of items. All the pairwise comparisons are independent of each other. The outcome of the pairwise comparison between any two items is characterized by the probability matrix . For a pair of items , let be the outcome of the comparison between the item and , where means is preferred to (denoted by ) and otherwise. Further, let denote the Bernoulli random variable with mean . The outcome follows , i.e., $$\Pr (X_{i,j}=1)=\Pr (i\succ j)=P_{\pi ^{-1}(i),\pi ^{-1}(j)}.$$ The probability matrix is said to be strong stochastic transitive (SST) if it satisfies the following definition. ###### Definition 2.1. The probability matrix is strong stochastic transitive (SST) if 1. For , for all . 2. is shifted-skew-symmetric (i.e., is skew-symmetric) where and for . The first condition claims that when the item has a higher rank than item (i.e., ), for any other item , we have ###### Remark 2.1. Many classical parametric models such as the BTL (Bradley52, ; Luce59, ) and Thurstone (Case V) (Thurstone:27, ) models are special cases of SST. More specifically, parametric models assume a score vector . They further assume that the comparison probability , where is a non-decreasing function and (e.g., in BTL models). By the property of , it is easy to verify that satisfy the conditions in Definition 2.1. Under the SST models, we can formally define the top- ranking problem as follows. The top- ranking problem takes the inputs , , that are known to the algorithm and the SST probability matrix that is unknown to the algorithm. ###### Definition 2.2. Top-K is the following algorithmic problem: 1. A permutation of is uniformly sampled. 2. The algorithm is given samples for , where each is sampled independently according to . The algorithm is also given the value of , but not or the matrix . 3. The algorithm succeeds if it correctly outputs the set of labels of the top items. ###### Remark 2.2.
We note that Shah15Sim () considers a slightly different observation model in which each pair is queried times. For each query, one can obtain a comparison result with the probability and with probability , the query is invalid. In this model, each pair will be compared times on expectation. When , it reduces to our model in Definition 2.2, where we observe exactly comparisons for each pair. Our results can be easily extended to deal with the observation model in Shah15Sim () by replacing with the effective sample size, . We omit the details for the sake of simplicity. Our primary metric of concern is the sample complexity of various algorithms; that is, the minimum number of samples an algorithm requires to succeed with a given probability. To this end, we call the triple an instance of the Top-K problem, and write to denote the minimum value such that for all , succeeds on instance with probability when given samples. When is omitted, we will take ; i.e., . Instead of working directly with Top-K, we will spend most of our time working with a problem we call Domination, which captures the core of the difficulty of the Top-K problem. Domination is formally defined as follows. ###### Definition 2.3. Domination is the following algorithmic problem: 1. and are two vectors of probabilities that satisfy for all . are not given to the algorithm. 2. A random bit is sampled from . Samples (for ) are generated as follows: 1. Case : each is independently sampled according to and each is independently sampled according to . 2. Case : each is independently sampled according to and each is independently sampled according to . The algorithm is given the samples and , but is not given the bit or the values of and . 3. The algorithm succeeds if it correctly outputs the value of the hidden bit . As before, we are interested in the sample complexity of algorithms for Domination. 
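A small simulation may help make Definition 2.3 concrete (hypothetical Python; the vectors `p`, `q` and the sample count are invented for illustration). It draws the hidden bit, generates the samples, and guesses the bit by comparing the total number of ones — a counting-style rule:

```python
import numpy as np

rng = np.random.default_rng(1)

p = np.array([0.9, 0.7, 0.6])  # coordinate-wise p_i >= q_i
q = np.array([0.5, 0.5, 0.5])
r = 200                        # samples per coordinate

B = int(rng.integers(0, 2))    # hidden bit, uniform over {0, 1}
# B = 0: X-samples follow p and Y-samples follow q; B = 1: the reverse
X = rng.binomial(1, p if B == 0 else q, size=(r, p.size))
Y = rng.binomial(1, q if B == 0 else p, size=(r, p.size))

# counting-style guess: output 0 iff the X-samples contain more ones overall
guess = 0 if X.sum() >= Y.sum() else 1
print(guess == B)  # True with overwhelming probability at this sample size
```

This particular rule is only one of the decision rules studied below; the point of the sketch is the sampling process itself.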
We call the triple an instance of Domination, and write to be the minimum value such that for all , succeeds at solving with probability at least (similarly, we let ). ## 3 Main Results There are at least two main approaches one can take to analyze the sample complexity of problems like Top-K and Domination. The first (and more common) is to bound the value of by some explicit function of the instance . This is the approach taken by Shah15Sim (). They show that for some simple function (roughly, the square of the reciprocal of the absolute difference of the sums of the -th and -th rows of the matrix i.e. ), there is an algorithm such that for all instances , ; moreover this is optimal in the sense that there exists an instance such that for all algorithms , . While this is a natural approach, it leaves open the question of what the correct choice of should be; indeed, different choices of give rise to different ‘optimal’ algorithms which outperform each other on different instances. In this paper, we take the second approach, which is to compare the sample complexity of an algorithm on an instance to the sample complexity of the best possible algorithm on that instance. Formally, let and let . An ideal algorithm would satisfy for all instances of Top-K; more generally, we are interested in bounding the ratio between and . We call this ratio the competitive ratio of the algorithm, and say that an algorithm is -competitive if . (We likewise define all the corresponding notions for Domination). In our main upper bound result, we give a linear-time algorithm for Top-K which is -competitive (restatement of Corollary 7.5): ###### Theorem 3.1. There is an algorithm for Top-K such that runs in time and on every instance of Top-K on items, rmin(S,A)≤O(√nlogn)rmin(S). In our main lower bound result, we show that up to logarithmic factors, this competitive ratio is optimal (restatement of Theorem 8.1): ###### Theorem 3.2. 
For any algorithm for Top-K, there exists an instance of Top-K on items such that rmin(S,A)≥Ω(√nlogn)rmin(S). In comparison, for the counting algorithm of Shah15Sim (), there exist instances such that . For example, consider the instance with $$P=\begin{bmatrix} \frac{1}{2} & \frac{1}{2}+\varepsilon & \cdots & \cdots & \frac{1}{2}+\varepsilon \\ \frac{1}{2}-\varepsilon & \ddots & & & \vdots \\ \vdots & & \ddots & & \vdots \\ \vdots & & & \ddots & \frac{1}{2}+\varepsilon \\ \frac{1}{2}-\varepsilon & \cdots & \cdots & \frac{1}{2}-\varepsilon & \frac{1}{2} \end{bmatrix}$$ It is straightforward to show that with samples, we can learn all pairwise comparisons correctly with high probability by taking a majority vote, and therefore even sort all the elements correctly. This implies that . On the other hand, we show in Corollary 5.4 that when . ### 3.1 Main Techniques and Overview We prove our main results by first proving similar results for Domination, which we defined in Definition 2.3. Intuitively, Domination captures the main hardness of Top-K while being much simpler to analyze. Once we prove upper and lower bounds for the sample complexity of Domination, we will use reductions to prove analogous results for Top-K. We begin in Section 4 by proving a general lower bound on the sample complexity of domination. Explicitly, for a given instance of Domination, we show that where is the amount of information we can learn about the bit from one sample of pairwise comparison in each of the coordinates. In Section 5, we proceed to design algorithms for Domination restricted to instances where for some constant . In this regime , which makes it easier to argue our algorithms are not too bad compared with the optimal one. We first consider an algorithm we call the counting algorithm (Algorithm 1), which is a Domination analogue of the counting algorithm proposed by Shah15Sim (). We show that has a competitive ratio of . Intuitively, the main reason fails is that tries to consider samples from different coordinates equally important even when they are sampled from a very unbalanced distribution (for example, ).
We then consider another algorithm we call the max algorithm (Algorithm 2), which simply finds and outputs according to the sign of . We show also has a competitive ratio of . Interestingly, fails for a different reason from , namely that does not use the information fully from all coordinates when the samples are sampled from a very balanced distribution. In fact, performs well whenever fails and vice versa. We therefore show how to combine and in two different ways to get two new algorithms: (Algorithm 3) and (Algorithm 4). We show that both of these new algorithms have a competitive ratio of , which is tight by Theorem 8.2. In Section 6, we design algorithms for Domination in the general regime. In this regime, can be much larger than , particularly for values of and very close to or . In these corner cases, the counting algorithm and max algorithm can fail very badly; we will show that even for fixed , their competitive ratios can grow arbitrarily large (Lemma 6.6 and Lemma 6.7). One main reason for this failure is that, even when , samples from coordinate could convey much more information than the samples from coordinate (consider, for example, , and ). Taking this into account, we design a new algorithm (Algorithm 5) which has a competitive ratio of in the general regime. The new algorithm still combines features from both and , but also better estimates the importance of each coordinate. To estimate how much information each coordinate has, the new algorithm divides the samples into groups and checks how often samples from coordinate are consistent with themselves. If one coordinate has a large proportion of the total information, it uses samples from that coordinate to decide , otherwise it takes a majority vote on samples from all coordinates. In Section 7, we return to Top-K and present an algorithm that has a competitive ratio of , thus proving Theorem 3.1.
Our algorithm works by reducing the Top-K problem to several instances of the Domination problem (see Theorem 6.5). At a high level, the algorithm tries to find the top rows by pairwise comparisons of rows, each of which can be thought of as an instance of Domination. We use algorithm to solve these Domination instances. Since we only need to make at most comparisons, if outputs the correct answer with at least probability for each comparison, then by union bound all the comparisons will be correct with probability at least . However, to find the top rows, we do not actually need to compare all the rows to each other; Lemma 7.1 shows that we can find the top rows with high probability while making only comparisons. Using this lemma, we get a linear time algorithm for solving Top-K. Finally in Lemma 7.4, we extend the lower bound for Domination proved in Lemma 4.2 to show a lower bound on the number of samples any algorithm would need on a specific instance of Top-K. Combining these results, we prove Theorem 3.1. Finally, in Section 8, we show that the algorithms for both Domination and Top-K presented in the previous sections have the optimal competitive ratio (up to polylogarithmic factors). Specifically, we show that for any algorithm solving Domination, there exists an instance of domination where (Theorem 8.2). We accomplish this by constructing a distribution over instances of Domination such that each instance in the support of this distribution can be solved by an algorithm with low sample complexity (Theorem 8.5) but any algorithm that succeeds over the entire distribution requires times more samples (Theorem 8.7). We then embed Domination in Top-K (similarly to Section 7) to show an analogous lower bound for Top-K (Theorem 8.1).
This is controlled by the quantity , which is the amount of information we can learn about the bit given one sample of pairwise comparison between each of the coordinates of and . ###### Definition 4.1. Given , define I(p,q)=(p(1−q)+q(1−p))(1−H(p(1−q)p(1−q)+q(1−p))). Given , define I(p,q)=n∑i=1I(pi,qi). ###### Lemma 4.2. Let be an instance of Domination. Then . ###### Proof. The main idea is to bound the mutual information between the samples and the correct output, and then apply Fano’s inequality. Let and . Recall that indicates the correct output and that are the samples given to the algorithm. By Fact A.6, I(B;X1,1,X1,2,...,Xn,r,Y1,1,...,Yn,r)=I(B;X1,1Y1,1)+I(B;X1,2,...,Xn,r,Y1,2,...,Yn,r|X1,1Y1,1). When , and are given, each sample ( or ) is independent of the other samples, and thus . By Fact A.7, we then have I(B;X1,2,...,Xn,r,Y1,2,...,Yn,r|X1,1Y1,1)≤I(B;X1,2,...,Xn,r,Y1,2,...,Yn,r) and therefore I(B;X1,1,X1,2,...,Xn,r,Y1,1,...,Yn,r)≤I(B;X1,1Y1,1)+I(B;X1,2,...,Xn,r,Y1,2,...,Yn,r). Repeating this, we get I(B;X1,1,X1,2,...,Xn,r,Y1,1,...,Yn,r)≤n∑i=1r∑j=1I(B;Xi,jYi,j). By Fact A.9, we have I(B;Xi,jYi,j) = Pr[B=0]⋅D(Xi,jYi,j|B=0∥Xi,jYi,j)+Pr[B=1]⋅D(Xi,jYi,j|B=1∥Xi,jYi,j) = (pi(1−qi)+qi(1−pi))(1−H(pi(1−qi)pi(1−qi)+qi(1−pi))) = I(pi,qi). It follows that I(B;X1,1,X1,2,...,Xn,r,Y1,1,...,Yn,r)≤n∑i=1r∑j=1I(B;Xi,jYi,j)=r⋅n∑i=1I(pi,qi)=rI(p,q). For any algorithm, let be its error probability on Domination. By Fano’s inequality, we have that H(pe) ≥ H(B|X1,1,X1,2,...,Xn,r,Y1,1,...,Yn,r) = H(B)−I(B;X1,1,X1,2,...,Xn,r,Y1,1,...,Yn,r) = 1−rI(p,q)≥0.95. Since , we find that , as desired. ∎ In the following section, we will concern ourselves with instances that satisfy for some constant for all . For such instances, we can approximate by the distance between and . ###### Lemma 4.3. For some , let . Then 14ln2(p−q)2≤I(p,q)≤1δln2(p−q)2. ###### Proof. Let and . Then and . We need to show that (x+y)(1−H(xx+y))≤1δln2(x−y)2. 
By Fact A.10, $$\frac{1}{\ln 2}z^2\le 1-H\Big (\frac{1}{2}+z\Big )=D\Big (\frac{1}{2}+z\,\Big \Vert \,\frac{1}{2}\Big )\le \frac{4}{\ln 2}z^2,$$ and therefore $$\frac{1}{4\ln 2}\cdot \frac{(x-y)^2}{x+y}\le (x+y)\Big (1-H\Big (\frac{x}{x+y}\Big )\Big )\le \frac{1}{\ln 2}\cdot \frac{(x-y)^2}{x+y}.$$ Since $$x+y=p(1-q)+q(1-p)\ge 2\sqrt{p(1-p)q(1-q)}\ge 2\delta (1-\delta )\ge \delta ,$$ this implies the desired upper bound. The lower bound also holds since $$x+y=p(1-q)+q(1-p)\le \sqrt{p^2+(1-p)^2}\cdot \sqrt{q^2+(1-q)^2}\le \delta ^2+(1-\delta )^2\le 1.$$ ###### Corollary 4.4. Let be an instance of Domination satisfying for all . Then $$r_{\min }(C)\ge 0.05\ln (2)\cdot \frac{\delta }{\Vert p-q\Vert _2^2}.$$ ###### Proof. By Lemma 4.3, . The result then follows from Lemma 4.2. ∎ ## 5 Domination in the well-behaved regime We now proceed to the problem of designing algorithms for Domination which are competitive on all instances. As a warmup, we begin by considering only instances of Domination satisfying for all where is some fixed constant. This regime of instances captures much of the interesting behavior of Domination, but with the added benefit that the mutual information between the samples and behaves nicely in this regime: in particular (see Lemma 4.3). By Corollary 4.4, we have . This fact will make it easier to design algorithms for Domination which are competitive in this regime. In Section 5.1, we give two simple algorithms (counting algorithm and max algorithm) which can solve Domination given samples which gives them a competitive ratio of . We will then show that this is tight, i.e. their competitive ratio is in Lemma 5.3 and Lemma 5.5. While the sample complexities of these two algorithms are not optimal, they have the nice property that whenever one performs badly, the other performs well. In Section 5.2, we show how to combine the counting algorithm and the max algorithm to give two different algorithms which can solve Domination using only samples i.e. they have a competitive ratio of . According to Theorem 8.2, this is the best we can do up to polylogarithmic factors.
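The two simple decision rules just mentioned — deciding by the sum of the per-coordinate differences versus by the coordinate with the largest absolute difference — can be sketched as follows (hypothetical Python; `S` stands for the vector of differences between the number of ones in the X-samples and the Y-samples of each coordinate, and the example inputs are invented):

```python
import numpy as np

def counting_rule(S):
    """Output B = 0 iff the sum of the coordinate differences is nonnegative."""
    return 0 if np.sum(S) >= 0 else 1

def max_rule(S):
    """Output B according to the sign of the coordinate with largest |S_i|."""
    i = int(np.argmax(np.abs(S)))
    return 0 if S[i] >= 0 else 1

# a balanced instance: many small positive differences suit the counting rule
S_balanced = np.array([2, 1, 2, 1, 1, -1, 1, 2])
# an unbalanced instance: one strong coordinate suits the max rule
S_unbalanced = np.array([9, -1, 1, -2, 0, 1, -1, -2])

print(counting_rule(S_balanced), max_rule(S_unbalanced))  # 0 0
```

This is only an illustration of the two decision rules; the sample-complexity analysis and the combined algorithms in Sections 5.1–5.2 are the paper's actual contribution.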
### 5.1 Counting algorithm and max algorithm We now consider two simple algorithms for Domination, which we call the counting algorithm (Algorithm 1) and the max algorithm (Algorithm 2) denoted by and respectively. We show that both algorithms require samples to solve Domination (Lemmas 5.1 and 5.2). By Corollary 4.4, we have , leading to a competitive ratio for these algorithms. We show in Lemma 5.3 and Lemma 5.5 that this is tight up to polylogarithmic factors i.e. their competitive ratio is . Both the counting algorithm and the max algorithm begin by computing (for each coordinate ) the differences between the number of ones in the samples and samples; i.e., we compute the values . The counting algorithm decides whether to output or based on the sign of , whereas the max algorithm decides its output based on the sign of the with the largest absolute value. See Algorithms 1 and 2 for detailed pseudocode for both and . We begin by proving upper bounds for the sample complexities of both and . In particular, both and need at most times as many samples as the best possible algorithm for any instance in this regime. ###### Lemma 5.1. Let be an instance of Domination. Then rmin(C,Acount,1−α)≤2nln(α−1)∥p−q∥21. If further satisfies for all for some constant , then rmin(C,Acomb)≤O(n)rmin(C). ###### Proof. Let be the probability that and outputs when provided with samples. By symmetry is equal to the probability that we are in the case and outputs when provided with samples. It therefore suffices to show that is at most . When , E[Z]=E[n∑i=1Si]=rn∑i=1(pi−qi). By the Chernoff bound, pe≤Pr[Z≤0]≤exp⎛⎝−nr2⋅(∑ni=1(pi−qi)n)2⎞⎠≤α. The second part of the lemma follows from Corollary 4.4, along with the observation that . ∎ ###### Lemma 5.2. Let be an instance of Domination. Then rmin(C,Amax,1−α)≤8ln(2nα−1)∥p−q∥2∞ If further satisfies for all for some constant , then rmin(C,Acomb)≤O(nlogn)rmin(C). ###### Proof. Assume without loss of generality that , and let . 
Let be the event that makes an error and outputs when given samples. We can upper bound the probability of error as Pr[E]≤Pr[E|S1>rε/2]+Pr[S1≤rε/2]. We will bound each term separately. Since , by Hoeffding’s inequality, Pr[S1≤rε/2]≤exp(−rε2/8)≤α2. Similarly, by Hoeffding’s inequality and the union bound, Pr[E|S1>rε/2]≤Pr[∃i:Si<−rε/2]≤nexp(−rε2/8)≤α2. It follows that . The second part of the lemma follows from Corollary 4.4, along with the observation that . ∎ We now show that the upper bounds we proved above are essentially tight. In particular, we demonstrate instances where both and need times as many samples as the best possible algorithms for those instances. Interestingly, on the instance where suffers, performs well, and vice versa. This fact will prove useful in the next section. ###### Lemma 5.3. For each and each sufficiently large , there exists an instance of Domination such that the following two statements are true: 1. . 2. . ###### Proof. Let be an arbitrary integer between 1 and . Let be any vectors satisfying the following constraints: 1. For all , . 2. If , . 3. If , . Note that . Therefore, by Lemma 5.2, , thus proving the first part of the lemma. Now assume that . We will show that with this many samples, solves instance with probability at most , thus implying the second part of the lemma. Without loss of generality, assume that . Define the following random variables : 1. for and . 2. . and . It is straightforward to check that for all , , and . Let be the cdf of the standard normal distribution. Pr[Acount outputs B=1 (incorrectly)] =Pr[n∑i=1r∑j=1(Xi,j−Yi,j)<0]=Pr[n∑i=1r∑j=1Ui,j<−2rε] ≥Φ⎛⎜ ⎜⎝−2rε⋅1√∑ni=1∑rj=1E[U2i,j]⎞⎟ ⎟⎠−∑ni=1∑rj=1E[|Ui,j|3](∑ni=1∑rj=1E[U2i,j])−3/2 (By Berry-Esseen theorem (Lemma A.11)) ≥Φ(−√8rε2n)−8√nr≥Φ(−1/4)−8√nr≥1/4. It is not hard to observe that in certain cases, the counting algorithm of Shah15Sim () for Top-K reduces to the algorithm for Domination. 
It follows that there also exists an $\Omega(n)$ multiplicative gap between the sample complexity of their counting algorithm and the sample complexity of the best algorithm on some instances.

###### Corollary 5.4.

Let $\mathcal{A}$ be the Top-K algorithm of Shah15Sim, and let $C$ be a Top-K instance, with parameters as described in Section 3. Then, for sufficiently large $n$ and suitable $\varepsilon$, $r_{\min}(C, \mathcal{A}) \ge \Omega(n)\, r_{\min}(C)$.
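To make the two decision rules concrete, here is a small simulation of both on a synthetic instance where the first source dominates the second (the instance and every parameter below are made up for illustration; this is a sketch, not the paper's Algorithms 1 and 2):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 20, 500                      # coordinates and samples per source
p = np.full(n, 0.55)                # dominating source
q = np.full(n, 0.45)                # dominated source

P = rng.random((r, n)) < p          # r Bernoulli samples from each source
Q = rng.random((r, n)) < q
S = P.sum(axis=0) - Q.sum(axis=0)   # per-coordinate difference in number of ones

count_rule = S.sum() > 0                  # counting rule: sign of the sum
max_rule = S[np.argmax(np.abs(S))] > 0    # max rule: sign of the largest |S_i|
print(count_rule, max_rule)
```

Both rules recover the dominating source here; the lemmas above quantify how many samples each rule needs in the worst case.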
http://www.atractor.pt/mat/morley/demonstracaoareas-_en.html
## Morley's Theorem ### Proof Let us consider two distinct cases. 1st Case In this case, the area of Morley’s parallelogram can be obtained by subtracting the areas of the yellow and red triangles from the area of the initial parallelogram. Assuming that the measures of the sides of the initial parallelogram are $$a=\overline{AD}$$ and $$b=\overline{AB}$$ and form an angle with amplitude $$\alpha = D\hat{A}B$$, its area is equal to $$ab\sin \alpha$$. The area of the triangles is half of the product of the length of its two sides by the sine of the angle adjacent to these sides. For example, to calculate the area of the triangle $$[ADE]$$ we need to know the length of the sides $$[AE]$$ and $$[DE]$$ and the amplitude of the angle $$\measuredangle AED$$. Using the law of sines we get: $D\hat{A}E=\frac{D\hat{A}B}{3}=\frac{\alpha}{3}$ $A\hat{D}E=\frac{A\hat{D}C}{3}=\frac{\pi-\alpha}{3}$ $A\hat{E}D=\pi-D\hat{A}E-A\hat{D}E=\pi-\frac{\alpha}{3}-\frac{\pi-\alpha}{3}=\pi-\frac{\pi}{3}=\frac{2\pi}{3}$ $\begin{array}{lll} \frac{\overline{AD}}{\sin A\hat{E}D}=\frac{\overline{AE}}{\sin A\hat{D}E}\Rightarrow\overline{AE} & = & \overline{AD}\centerdot\frac{\sin A\hat{D}E}{\sin A\hat{E}D}\\ & = & a\frac{\sin\frac{\pi-\alpha}{3}}{\sin\frac{2\pi}{3}}\\ & = & \frac{2a}{\sqrt{3}}\sin\frac{\pi-\alpha}{3} \end{array}$ $\begin{array}{lll} \frac{\overline{AD}}{\sin A\hat{E}D}=\frac{\overline{DE}}{\sin D\hat{A}E}\Rightarrow\overline{DE} & = & \overline{AD}\centerdot\frac{\sin D\hat{A}E}{\sin A\hat{E}D}\\ & = & a\frac{\sin\frac{\alpha}{3}}{\sin\frac{2\pi}{3}}\\ & = & \frac{2a}{\sqrt{3}}\sin\frac{\alpha}{3} \end{array}$ $\begin{array}{lll} Area\left(\left[ADE\right]\right) & = & \frac{1}{2}\overline{AE}\centerdot\overline{DE}\sin A\hat{E}D\\ & = & \frac{1}{2}\left(\frac{2a}{\sqrt{3}}\sin\frac{\pi-\alpha}{3}\right)\left(\frac{2a}{\sqrt{3}}\sin\frac{\alpha}{3}\right)\sin\frac{2\pi}{3}\\ & = & \frac{2a^{2}}{3}\sin\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3}\frac{\sqrt{3}}{2}\\ & = & 
\frac{a^{2}}{\sqrt{3}}\sin\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3} \end{array}$ Similarly, we have: $Area \left(\left[ABF\right]\right)=\frac{b^{2}}{\sqrt{3}}\sin\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3}$ Notice that the triangles $$[ADE]$$ and $$[CBG]$$ are congruent, as well as the triangles $$[ABF]$$ and $$[CDH]$$. Therefore, the sum of the areas of the yellow triangles is: $\begin{array}{l} Area\left(\left[ADE\right]\right)+Area \left(\left[ABF\right]\right)+Area\left(\left[CBG\right]\right)+Area\left(\left[CDH\right]\right)=\\ \;=2\centerdot\frac{a^{2}}{\sqrt{3}}\sin\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3}+2\centerdot\frac{b^{2}}{\sqrt{3}}\sin\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3}\\ \;=\frac{2}{\sqrt{3}}\left(a^{2}+b^{2}\right)\sin\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3} \end{array}$ Considering now the red triangles and using the fact that the triangles $$[EAF]$$ and $$[GCH]$$ are congruent, as are also the triangles $$[FBG]$$ and $$[HDE]$$, we get: $\begin{array}{lll} Area\left(\left[GCH\right]\right) & = & Area\left(\left[EAF\right]\right)\\ & = & \frac{1}{2}\overline{AE}\cdot\overline{AF}\cdot\sin E\hat{A}F\\ & = & \frac{1}{2}\left(\frac{2a}{\sqrt{3}}\sin\frac{\pi-\alpha}{3}\right)\left(\frac{2b}{\sqrt{3}}\sin\frac{\pi-\alpha}{3}\right)\sin\frac{\alpha}{3}\\ & = & \frac{2}{3}ab\sin^{2}\frac{\pi-\alpha}{3}\sin\frac{\alpha}{3} \end{array}$ $\begin{array}{lll} Area\left(\left[HDE\right]\right) & = & Area\left(\left[FBG\right]\right)\\ & = & \frac{1}{2}\overline{BF}\cdot\overline{BG}\cdot\sin F\hat{B}G\\ & = & \frac{1}{2}\left(\frac{2b}{\sqrt{3}}\sin\frac{\alpha}{3}\right)\left(\frac{2a}{\sqrt{3}}\sin\frac{\alpha}{3}\right)\sin\frac{\pi-\alpha}{3}\\ & = & \frac{2}{3}ab\sin^{2}\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3} \end{array}$ Using the formula $$\sin x = 4 \sin \frac{x}{3} \sin \frac{x+\pi}{3} \sin \frac{x+2\pi}{3}$$ deduced in the trigonometric proof of Morley's theorem, we conclude that the sum of the areas of the red triangles is: $\begin{array}{l}
\;Area\left(\left[EAF\right]\right)+Area\left(\left[FBG\right]\right)+Area\left(\left[GCH\right]\right)+Area\left(\left[HDE\right]\right)=\\ =2\cdot\frac{2}{3}ab\sin^{2}\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3}+2\cdot\frac{2}{3}ab\sin^{2}\frac{\pi-\alpha}{3}\sin\frac{\alpha}{3}\\ =\frac{4}{3}ab\sin\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3}\left(\sin\frac{\alpha}{3}+\sin\frac{\pi-\alpha}{3}\right)\\ =\frac{4}{3}ab\sin\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3}\cdot2\sin\frac{\frac{\alpha}{3}+\frac{\pi-\alpha}{3}}{2}\cdot\cos\frac{\frac{\alpha}{3}-\frac{\pi-\alpha}{3}}{2}\\ =\frac{8}{3}ab\sin\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3}\sin\frac{\pi}{6}\cdot\cos\frac{2\alpha-\pi}{6}\\ =\frac{8}{3}ab\sin\frac{\alpha}{3}\sin\left(\pi-\frac{\pi-\alpha}{3}\right)\cdot\frac{1}{2}\cdot\sin\left(\frac{\pi}{2}+\frac{2\alpha-\pi}{6}\right)\\ =\frac{4}{3}ab\sin\frac{\alpha}{3}\sin\frac{\alpha+2\pi}{3}\sin\frac{\alpha+\pi}{3}\\ =\frac{1}{3}ab\cdot4\sin\frac{\alpha}{3}\sin\frac{\alpha+\pi}{3}\sin\frac{\alpha+2\pi}{3}\\ =\frac{1}{3}ab\sin\alpha \end{array}$ (that is, the sum of the areas of the red triangles is exactly one-third of the area of the parallelogram $$[ABCD]$$). We can now calculate the area of the parallelogram $$[EFGH]$$: $\begin{array}{lll} Area\left(\left[EFGH\right]\right) & = & ab\sin\alpha-\frac{2}{\sqrt{3}}\left(a^{2}+b^{2}\right)\sin\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3}-\frac{1}{3}ab\sin\alpha\\ & = & \frac{2}{3}ab\sin\alpha-\frac{2}{\sqrt{3}}\left(a^{2}+b^{2}\right)\sin\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3} \end{array}$ 2nd Case In this case, the sum of the areas of the triangles $$[ADE]$$, $$[CBG]$$, $$[ABF]$$, $$[CDH]$$, $$[EAF]$$, $$[GCH]$$, $$[FBG]$$ and $$[HDE]$$ exceeds the area of the parallelogram $$[ABCD]$$ and the excess is exactly equal to the area of the parallelogram $$[EFGH]$$. Furthermore, it is easy to see that the values obtained in the previous case for the areas of the referred triangles remain valid.
Hence: $\begin{array}{lll} Area\left(\left[EFGH\right]\right) & = & \frac{2}{\sqrt{3}}\left(a^{2}+b^{2}\right)\sin\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3}+\frac{1}{3}ab\sin\alpha-ab\sin\alpha\\ & = & \frac{2}{\sqrt{3}}\left(a^{2}+b^{2}\right)\sin\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3}-\frac{2}{3}ab\sin\alpha \end{array}$ In both cases we have: $\begin{array}{lll} Area\left(\left[EFGH\right]\right) & = & \left|\frac{2}{3}ab\sin\alpha-\frac{2}{\sqrt{3}}\left(a^{2}+b^{2}\right)\sin\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3}\right|\\ & = & \left|\frac{2}{3}ab\cdot4\sin\frac{\alpha}{3}\sin\frac{\alpha+\pi}{3}\sin\frac{\alpha+2\pi}{3}-\frac{2}{\sqrt{3}}\left(a^{2}+b^{2}\right)\sin\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3}\right|\\ & = & \left|\frac{8}{3}ab\sin\frac{\alpha}{3}\sin\frac{\alpha+\pi}{3}\sin\frac{\pi-\alpha}{3}-\frac{2}{\sqrt{3}}\left(a^{2}+b^{2}\right)\sin\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3}\right|\\ & = & \frac{2}{\sqrt{3}}\sin\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3}\left|\frac{4}{\sqrt{3}}ab\sin\frac{\alpha+\pi}{3}-\left(a^{2}+b^{2}\right)\right| \end{array}$ With respect to the area of the parallelogram $$[IJKL]$$, we note first that the triangles $$[EAF]$$ and $$[LAK]$$ are similar. 
In fact, we have: $E\hat{A}F=L\hat{A}K$ $\overline{AE}=\frac{2a}{\sqrt{3}}\sin\frac{\pi-\alpha}{3}$ $\overline{AF}=\frac{2b}{\sqrt{3}}\sin\frac{\pi-\alpha}{3}$ $\overline{AL}=\frac{2b}{\sqrt{3}}\sin\frac{2\left(\pi-\alpha\right)}{3}$ $\overline{AK}=\frac{2a}{\sqrt{3}}\sin\frac{2\left(\pi-\alpha\right)}{3}$ $\begin{array}{lll} \frac{\overline{AK}}{\overline{AE}} & = & \frac{\overline{AL}}{\overline{AF}}\\ & = & \frac{\sin\frac{2\left(\pi-\alpha\right)}{3}}{\sin\frac{\pi-\alpha}{3}}\\ & = & \frac{2\sin\frac{\pi-\alpha}{3}\cos\frac{\pi-\alpha}{3}}{\sin\frac{\pi-\alpha}{3}}\\ & = & 2\cos\frac{\pi-\alpha}{3} \end{array}$ Similarly, the triangles $$[EDH]$$ and $$[JDK]$$ are also similar, since we have: $E\hat{D}H=J\hat{D}K$ $\overline{DE}=\frac{2a}{\sqrt{3}}\sin\frac{\alpha}{3}$ $\overline{DH}=\frac{2b}{\sqrt{3}}\sin\frac{\alpha}{3}$ $\overline{DJ}=\frac{2b}{\sqrt{3}}\sin\frac{2\alpha}{3}$ $\overline{DK}=\frac{2a}{\sqrt{3}}\sin\frac{2\alpha}{3}$ $\begin{array}{lll} \frac{\overline{DK}}{\overline{DE}} & = & \frac{\overline{DJ}}{\overline{DH}}\\ & = & \frac{\sin\frac{2\alpha}{3}}{\sin\frac{\alpha}{3}}\\ & = & \frac{2\sin\frac{\alpha}{3}\cos\frac{\alpha}{3}}{\sin\frac{\alpha}{3}}\\ & = & 2\cos\frac{\alpha}{3} \end{array}$ Using the congruence between the triangles $$[BGF]$$ and $$[DEH]$$ and the similarity of triangles $$[EAF]$$ and $$[LAK]$$ and of triangles $$[EDH]$$ and $$[JDK]$$, we have: $I\hat{F}G=G\hat{B}F+B\hat{G}F$ $E\hat{F}K=E\hat{A}F+A\hat{E}F$ $\begin{array}{lll} A\hat{F}B & = & \pi-B\hat{A}F-A\hat{B}F\\ & = & \pi-\frac{1}{3}\left(D\hat{A}B+A\hat{B}C\right)\\ & = & \pi-\frac{\pi}{3}\\ & = & \frac{2\pi}{3} \end{array}$ $G\hat{B}F + E\hat{A}F = \frac{1}{3} \left( C\hat{B}A + D\hat{A}B \right) = \frac{\pi}{3}$ $B\hat{G}F=D\hat{E}H=D\hat{K}J$ $A\hat{E}F=A\hat{K}L$ $\begin{array}{ccl} E\hat{F}G & = & I\hat{F}G+E\hat{F}K-I\hat{F}K\\ & = & \left(G\hat{B}F+B\hat{G}F\right)+\left(E\hat{A}F+A\hat{E}F\right)-A\hat{F}B\\ & = &
\left(G\hat{B}F+E\hat{A}F\right)+\left(B\hat{G}F+A\hat{E}F\right)-\frac{2\pi}{3}\\ & = & \frac{\pi}{3}+\left(D\hat{K}J+A\hat{K}L\right)-\frac{2\pi}{3}\\ & = & \left(D\hat{K}J+A\hat{K}L\right)-\frac{\pi}{3}\\ & = & A\hat{K}L+D\hat{K}J-A\hat{K}D\\ & = & J\hat{K}L \end{array}$ $\begin{array}{ccc} \overline{KL} & = & \frac{\overline{AK}}{\overline{AE}}\cdot\overline{FE}\\ & = & 2\cos\frac{\pi-\alpha}{3}\cdot\overline{FE} \end{array}$ $\begin{array}{ccc} \overline{KJ} & = & \frac{\overline{DK}}{\overline{DE}}\cdot\overline{HE}\\ & = & 2\cos\frac{\alpha}{3}\cdot\overline{HE}\\ & = & 2\cos\frac{\alpha}{3}\cdot\overline{FG} \end{array}$ Therefore we obtain: $\begin{array}{lll} Area \left(\left[IJKL\right]\right) & = & \overline{KL}\cdot\overline{KJ}\cdot\sin J\hat{K}L\\ & = & 2\cos\frac{\pi-\alpha}{3}\cdot\overline{FE}\cdot2\cos\frac{\alpha}{3}\cdot\overline{FG}\cdot\sin E\hat{F}G\\ & = & 4\cos\frac{\alpha}{3}\cos\frac{\pi-\alpha}{3}\cdot\left(\overline{FE}\cdot\overline{FG}\cdot\sin E\hat{F}G\right)\\ & = & 4\cos\frac{\alpha}{3}\cos\frac{\pi-\alpha}{3}\cdot Area\left(\left[EFGH\right]\right)\\ & = & 4\cos\frac{\alpha}{3}\cos\frac{\pi-\alpha}{3}\cdot\frac{2}{\sqrt{3}}\sin\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3}\left|\frac{4}{\sqrt{3}}ab\sin\frac{\alpha+\pi}{3}-\left(a^{2}+b^{2}\right)\right|\\ & = & \frac{2}{\sqrt{3}}\left(2\sin\frac{\alpha}{3}\cos\frac{\alpha}{3}\right)\left(2\sin\frac{\pi-\alpha}{3}\cos\frac{\pi-\alpha}{3}\right)\left|\frac{4}{\sqrt{3}}ab\sin\frac{\alpha+\pi}{3}-\left(a^{2}+b^{2}\right)\right|\\ & = & \frac{2}{\sqrt{3}}\sin\frac{2\alpha}{3}\sin\frac{2\left(\pi-\alpha\right)}{3}\left|\frac{4}{\sqrt{3}}ab\sin\frac{\alpha+\pi}{3}-\left(a^{2}+b^{2}\right)\right| \end{array}$ Conclusion: $Area\left(\left[EFGH\right]\right)=\frac{2}{\sqrt{3}}\sin\frac{\alpha}{3}\sin\frac{\pi-\alpha}{3}\left|\frac{4}{\sqrt{3}}ab\sin\frac{\alpha+\pi}{3}-\left(a^{2}+b^{2}\right)\right|$
$Area\left(\left[IJKL\right]\right)=\frac{2}{\sqrt{3}}\sin\frac{2\alpha}{3}\sin\frac{2\left(\pi-\alpha\right)}{3}\left|\frac{4}{\sqrt{3}}ab\sin\frac{\alpha+\pi}{3}-\left(a^{2}+b^{2}\right)\right|$ Note that the ratio of the areas (assuming both nonzero) is: $\begin{array}{lll} r & = & \frac{Area \left(\left[IJKL\right]\right)}{Area\left(\left[EFGH\right]\right)}\\ & = & 4\cos\frac{\alpha}{3}\cos\frac{\pi-\alpha}{3}\\ & = & 4\cdot\frac{1}{2}\left(\cos\frac{\alpha-\left(\pi-\alpha\right)}{3}+\cos\frac{\alpha+\left(\pi-\alpha\right)}{3}\right)\\ & = & 2\left(\frac{1}{2}+\cos\frac{\pi-2\alpha}{3}\right)\\ & = & 1+2\cos\frac{\pi-2\alpha}{3},\mbox{ with }\alpha\in]0,\pi[. \end{array}$ Therefore, we have $$2<r \leq 3$$, that is, the parallelogram $$[IJKL]$$ has an area which is always greater than twice the area of the parallelogram $$[EFGH]$$ and never more than three times it, the maximum being reached when $$\alpha=\frac{\pi}{2}$$. We note also that, as $$J\hat{K}L=E\hat{F}G$$, $$\frac{\overline{KL}}{\overline{FE}}=2\cos \frac{\pi-\alpha}{3}$$ and $$\frac{\overline{KJ}}{\overline{FG}}=2\cos \frac{\alpha}{3}$$, the parallelograms $$[IJKL]$$ and $$[EFGH]$$ are similar when $$2\cos \frac{\alpha}{3} = 2\cos \frac{\pi - \alpha}{3}$$, that is, when $$\alpha = \frac{\pi}{2}$$. When this happens, the similarity ratio is $$2\cos \frac{\pi}{6}=\sqrt{3}$$, whereby the ratio between the areas is $$(\sqrt{3})^{2}=3$$, as indicated.
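Both the trigonometric identity used above and the final bounds on the ratio $r$ are easy to sanity-check numerically. The following quick check is my own addition, independent of the geometric argument:

```python
import numpy as np

alpha = np.linspace(0.05, np.pi - 0.05, 200)   # amplitudes of the angle DAB

# identity used in the proof: sin x = 4 sin(x/3) sin((x+pi)/3) sin((x+2pi)/3)
lhs = np.sin(alpha)
rhs = 4 * np.sin(alpha/3) * np.sin((alpha + np.pi)/3) * np.sin((alpha + 2*np.pi)/3)
assert np.allclose(lhs, rhs)

# ratio of the areas of [IJKL] and [EFGH]
r = 4 * np.cos(alpha/3) * np.cos((np.pi - alpha)/3)
assert np.allclose(r, 1 + 2*np.cos((np.pi - 2*alpha)/3))   # simplified form
assert np.all((r > 2) & (r <= 3))                          # 2 < r <= 3
```

The maximum ratio 3 is approached as alpha tends to pi/2, matching the conclusion above.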
https://www.physicsforums.com/threads/tangent-plane-and-normal-vector.29473/
# Tangent Plane And Normal Vector.

1. ### dcl

I'm having trouble working out the tangent plane of an equation at a specified point (4,1,-2). The equation being $$9x^2 - 4y^2 - 25z^2 = 40$$ now $$\nabla f = (18x, -8y, -50z)$$ yeh? Just reading off this should give us the normal vector shouldn't it? (18,-8,-50) and from that we can work out the equation of the plane. 18(x-4) - 8(y-1) -50(z-(-2)) = 0 Is this correct or am I using a horribly flawed method?

2. ### dcl

Think I've worked it out for myself. Method was sorta wrong. Once I have Grad F, all I need to do is sub in the values of the point and it will give me the normal vector and from that I can work out the equation. I think that's right.
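For the record, the corrected method from the second post checks out in a few lines of Python (my own verification, not from the thread):

```python
import numpy as np

def grad_f(x, y, z):
    # gradient of f(x, y, z) = 9x^2 - 4y^2 - 25z^2
    return np.array([18*x, -8*y, -50*z])

P = (4, 1, -2)
# P lies on the level surface f = 40:
assert 9*P[0]**2 - 4*P[1]**2 - 25*P[2]**2 == 40

normal = grad_f(*P)   # evaluate the gradient AT the point
print(normal)         # [ 72  -8 100]
# tangent plane: 72(x - 4) - 8(y - 1) + 100(z + 2) = 0
```

So the first post's (18, -8, -50) was just the gradient's coefficients; the normal at (4, 1, -2) is (72, -8, 100).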
https://nbviewer.org/github/LeDataSciFi/ledatascifi-2023/blob/main/content/05/04d_crossval.ipynb
# Cross-Validation

Cross-validation is a step where we take our training sample and further divide it in many folds, as in the illustration here:

{image} ./img/feature_5_fold_cv.jpg
:alt: 5-fold
:width: 400px
:align: center

As we talked about in the last chapter, cross-validation allows us to test our models outside the training data more often. This trick reduces the likelihood of overfitting and improves generalization: It _should_ improve our model's performance when we apply it outside the training data.

{margin} Illustration: If you emulate the simple folding method as depicted in the above graphic for stock return data, some folds will end up testing your model on data from _before_ the periods where the model was estimated!

{warning} I say "it _should_ improve our model's performance" because the exact manner in which you create the folds matters! See the tip below.

## Basic example

Like before, let's first load the data. Notice I consolidated the import lines at the top.

In [1]:

```python
import pandas as pd
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

url = 'https://github.com/LeDataSciFi/data/blob/main/Fannie%20Mae/Fannie_Mae_Plus_Data.gzip?raw=true'
fannie_mae = pd.read_csv(url, compression='gzip')
y = fannie_mae.Original_Interest_Rate
fannie_mae = (fannie_mae
                  .assign(l_credscore = np.log(fannie_mae['Borrower_Credit_Score_at_Origination']),
                          l_LTV = np.log(fannie_mae['Original_LTV_(OLTV)']),
                         )
                  .iloc[:,-11:]   # limit to these vars for the sake of this example
             )
```

And, like before, we then split off some of the data into a testing sample. _For the sake of simplicity (laziness?), let's just reuse the train_test_split approach from the last page._

In [2]:

```python
rng = np.random.RandomState(0)  # this helps us control the randomness so we can reproduce results exactly
X_train, X_test, y_train, y_test = train_test_split(fannie_mae, y, random_state=rng)
```

{dropdown} Digression. Read after you read the "Choosing what kind of CV folds to use" section below.
Below, we introduce some of the conceptual issues with how you create folds for CV. Once you read that, come back and revisit this test_train_split code above, because the same considerations for how we choose folds within a CV apply to how we split off our testing sample. This page says train_test_split uses ShuffleSplit. This method does not divide by time or any group type.

{dropdown} Q: Does this Fannie Mae data need special attention to how we divide it up? A question to ponder, in class perhaps...

If you want to use any other CV iterators to divide up your sample, you can. Here is one:

```python
# Replace "GroupShuffleSplit" with your CV of choice,
# and update the contents of split() as needed.
from sklearn.model_selection import GroupShuffleSplit

train_idx, test_idx = next(
    GroupShuffleSplit(random_state=7).split(X, y, groups)
)
X_train, X_test = X.iloc[train_idx], X.iloc[test_idx]
y_train, y_test = y.iloc[train_idx], y.iloc[test_idx]
```

You don't even need to use a built-in CV method to create your holdout data. Here is a simple approach that works in many settings:

```python
# test on the last year of data
train_idx = X['year'] <  X['year'].max()
test_idx  = X['year'] == X['year'].max()
X_train, X_test = X[train_idx], X[test_idx]
y_train, y_test = y[train_idx], y[test_idx]
```

## Set up the CV

{margin} _Note: **Your estimator will actually be a "pipeline" object.** ([covered in detail on the next page](04e_pipelines)) A pipeline object tells sk-learn to apply a series of steps to the data (preprocessing, etc.) and always ends with a model to estimate._

Sk-learn makes cross-validation pretty easy. We use the cross_validate("estimator",X_train,y_train,cv,scoring,...) function (documentation here) which, under the hood, will

1. Create folds in X_train and y_train using whatever method you put in the cv parameter. For each fold, it will create a smaller "training partition" and "validation partition" like in the figure at the top of this page.
2. For each fold, it will
   1.
Fit your "estimator" on the smaller training partition it creates for that fold (as if you ran estimator.fit(X_trainingpartition,y_trainingpartition)).
   2. Use that fitted estimator on the validation partition (as if you ran estimator.predict(X_validationpartition)).
   3. Score those predictions with the function(s) you put in scoring.
3. Output a dictionary object with performance data for each fold.

{important} So, to use cross_validate(), you need to decide on and set up:
1. Your preferred folding method (and number of folds)
1. Your estimator (or pipeline ending in an estimator)
1. Your scoring method(s)

Code wise, it can be this simple:

In [3]:

```python
from sklearn.model_selection import KFold, cross_validate

cv = KFold(5)             # pick and set up fold method
ridge = Ridge(alpha=1.0)  # set up model/estimator
cross_validate(ridge, X_train, y_train, cv=cv, scoring='r2')  # tell it the scoring method here
```

Out[3]:

```
{'fit_time': array([0.02946544, 0.00299978, 0.00300097, 0.00301909, 0.00201058]),
 'score_time': array([0.00199938, 0.00200081, 0.00099993, 0.00098205, 0.00198793]),
 'test_score': array([0.90789446, 0.89926394, 0.900032  , 0.90479828, 0.90327986])}
```

{note} Wow, that was easy! Just 3 lines of code (and an import).

And we can output test score statistics like:

In [4]:

```python
scores = cross_validate(ridge, X_train, y_train, cv=cv, scoring="r2")
print(scores["test_score"].mean())  # scores is just a dictionary
print(scores["test_score"].std())
```

```
0.9030537085469961
0.0031629307869794876
```

## Choosing what kind of CV folds to use

What should we put in the cv = ? line of code?
It depends on the dataset:

- IID data: If we have a dataset with one observation per unit and all observations are at either the same time, or if time can be ignored:
  - KFold() and ShuffleSplit()
  - If you have a classification task with imbalanced labels, use StratifiedKFold() and StratifiedShuffleSplit()
- Grouped, time independent data: If time can be ignored, and you have multiple observations for each unit ("groups"), your goal is likely to find out if a model trained on a particular set of groups generalizes well to the unseen groups:
  - GroupKFold() and GroupShuffleSplit()
  - If you have a classification task with imbalanced labels, use StratifiedGroupKFold()
- If your data and/or task is time dependent: E.g. predicting stock returns, any setting where the choice depends on prior choices and outcomes
  - You must ensure that the validation partitions are subsequent to the training sample.
  - Rarely: If you have a simple time series dataset (one observation for each time), use the built-in TimeSeriesSplit()
  - Usually: If you have a cross-sectional panel (multiple observations for each time), build a custom splitter to generate your folds.

{tip} **[This page should be your first reference to learn about the CV splitting functions listed above.](https://scikit-learn.org/stable/modules/cross_validation.html#cross-validation-iterators)**

## Custom CV folds

You can explicitly design and create your own CV folds. The cross_validate() documentation says that cv can be "An iterable yielding (train, test) splits as arrays of indices."

Let's use this example:

```python
cv_custom = [([0,1],    [2]),
             ([0,1,2],  [3]),
             ([0,1,2,3],[4])]
cross_validate(ridge, X_train, y_train, cv=cv_custom, scoring='r2')
```

cv_custom is a list (lists are an "iterable" object) with 3 elements, so our cross validation will have 3 folds.

1. The first fold is given by the first element: ([0,1],[2]), which says the training data is rows 0 and 1, and the validation is done on row 2.
2.
In the second fold, train on rows 0 to 2, and validate on row 3.
3. And so on.

## Code for rolling OOS testing (v0)

{important} The code below is almost plug-and-play!

{margin}
1. Train on year 1, and predict outcomes in year 2.
1. Then train on years 1 and 2, and predict outcomes in year 3.
1. Then train on years 1 - 3, and predict outcomes in year 4.
1. Then train on years 1 - 4, and predict outcomes in year 5.
1. ...
1. Then train on years 1 - 16, and predict outcomes in year 17.

One procedure to do rolling OOS testing was introduced on this page, and I copied it here in the sidebar. It's a common choice that works well in many settings. It works when you have many firms or units (whether each unit has one or multiple observations per year), or even with a single time series!1 And it can be adjusted to other units of time (monthly, weekly, decades), or only every few units of time. The code below implements it, and can be used with appropriate adjustments for your setting.

1. You could do TimeSeriesSplit(), but if you have 20 years of daily trading data, there are potentially ~500 possible splits! That will be slow when you evaluate a bunch of models a bunch of times! The procedure above reduces that greatly.

```python
# adjust the inside of "groupby()" to your df/problem, pick other parameters
time_groups = X_train.groupby(X_train['DATE'].dt.year).groups
min_periods_in_train = 5
training_expanding_window = True

# shouldn't need to adjust this code:
sorted_groups = [list(value) for (key, value) in sorted(time_groups.items())]

if training_expanding_window:
    cv = [([i for g in sorted_groups[:y] for i in g], sorted_groups[y])
          for y in range(min_periods_in_train, len(sorted_groups))]
else:
    cv = [([i for g in sorted_groups[y-min_periods_in_train:y] for i in g], sorted_groups[y])
          for y in range(min_periods_in_train, len(sorted_groups))]
```

{tip} Extra credit: Convert the code above into a function for the community codebook!
Something like:

```python
def rolling_oos_cv(???):   # you need to decide on arguments
    ???
    return cv
```

Considerations/bonus:
1. ESSENTIAL: How should the user specify the time dimensions to groupby? What if they want to use months, weeks, etc. instead of years?
1. EASY BONUS: Can the user ask for multiple time periods in the validation sets? E.g. predict the next two years?
1. HARDER BONUS: Can the user ask to skip some folds? E.g. if they group the data by months, but only do the training and prediction for the first month each year?

## Code for rolling OOS testing (v1)

The code above works. It lacks some features, and until a classmate codes it up into a nice function, it requires copy-pasting all of the code I wrote, and adjusting it.

As of mid-2022, someone has finally made a function that looks like a promising solution. Here is the documentation for GroupTimeSeriesSplit. However, it's new and has some quirks. Here is an illustration of how it divides up the sample.

First, I'll create a simple dataset we can look at. Notice we have several firms in many years.

In [5]:

```python
import pandas as pd

df = pd.DataFrame({'year':[2000,2000,2000,2001,2001,2002,2002,2002,2002,2003,2004,2004,2006],
                   'firm':[1,2,3,1,2,1,2,3,4,4,4,5,5],
                   'X':   [0,4,5,2,5,7,2,4,4,9,8,9,8],
                   'y':   [0,0,1,1,1,1,0,0,1,1,0,1,0]
                  })
display(df.T)
```

```
      0     1     2     3     4     5     6     7     8     9     10    11    12
year  2000  2000  2000  2001  2001  2002  2002  2002  2002  2003  2004  2004  2006
firm     1     2     3     1     2     1     2     3     4     4     4     5     5
X        0     4     5     2     5     7     2     4     4     9     8     9     8
y        0     0     1     1     1     1     0     0     1     1     0     1     0
```

GroupTimeSeriesSplit() correctly splits up the sample:

In [6]:

```python
from mlxtend.evaluate.time_series import GroupTimeSeriesSplit, plot_splits

cv_args = {'test_size':1, 'n_splits':5, 'window_type':'expanding'}
plot_splits(df.X, df.y, df.year, **cv_args)
```

Using it is easy:
1. Replace the cv= with GroupTimeSeriesSplit() and the parameters you want.
2. Add groups= to cross_validate, and it should be the variable containing the time info.
In [7]:

```python
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_validate
from mlxtend.evaluate import GroupTimeSeriesSplit

cv = GroupTimeSeriesSplit(test_size=1, n_splits=5, window_type='expanding')  # set up fold method
clf = DummyClassifier(strategy="most_frequent")                              # set up model/estimator
cross_validate(clf, df.X, df.y, groups=df.year, cv=cv)['test_score']
```

Out[7]:

```
array([0.33333333, 0.33333333, 0.33333333, 0.5       , 0.5       ])
```

## Next step: Pipelines

The model above

- Only uses a few continuous variables: what if we want to include other variable types (like categorical)?
- Uses the variables as given: ML algorithms often need you to transform your variables
- Doesn't deal with any data problems (e.g. missing values or outliers)
- Doesn't create any interaction terms or polynomial transformations
- Uses every variable I give it: But if your input data had 400 variables, you'd be in danger of overfitting!

At this point, you are capable of solving all of these problems. (For example, you could clean the data in pandas.) But for our models to be robust to evil monsters like "data leakage", we need the fixes to be done within pipelines.
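To preview why that matters: preprocessing placed inside a pipeline is re-fit on each fold's training partition only, which is exactly what blocks leakage during CV. A minimal sketch on made-up data (the data, steps, and parameters here are illustrative, not the Fannie Mae example):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_validate

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.1, size=200)
X[rng.random(X.shape) < 0.05] = np.nan    # a "data problem": missing values

# imputation + scaling + model bundled into one estimator;
# cross_validate fits the whole bundle fresh on each fold's training partition
pipe = make_pipeline(SimpleImputer(), StandardScaler(), Ridge(alpha=1.0))
scores = cross_validate(pipe, X, y, cv=KFold(5), scoring='r2')
print(scores['test_score'].mean())
```

Because the imputer and scaler never see a fold's validation partition during fitting, the CV scores are honest estimates of out-of-sample performance.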
https://pure.mpg.de/pubman/faces/ViewItemOverviewPage.jsp?itemId=item_2459635
Research paper

#### Tree Edit Distance Cannot be Computed in Strongly Subcubic Time (unless APSP can)

##### MPG Authors

Bringmann, Karl (Algorithms and Complexity, MPI for Informatics, Max Planck Society)

##### Full texts (freely available)

arXiv:1703.08940.pdf (Preprint), 2MB

##### Citation

Bringmann, K., Gawrychowski, P., Mozes, S., & Weimann, O. (2017). Tree Edit Distance Cannot be Computed in Strongly Subcubic Time (unless APSP can). Retrieved from http://arxiv.org/abs/1703.08940.

The edit distance between two rooted ordered trees with $n$ nodes labeled from an alphabet $\Sigma$ is the minimum cost of transforming one tree into the other by a sequence of elementary operations consisting of deleting and relabeling existing nodes, as well as inserting new nodes. Tree edit distance is a well known generalization of string edit distance. The fastest known algorithm for tree edit distance runs in cubic $O(n^3)$ time and is based on a similar dynamic programming solution as string edit distance. In this paper we show that a truly subcubic $O(n^{3-\varepsilon})$ time algorithm for tree edit distance is unlikely: For $|\Sigma| = \Omega(n)$, a truly subcubic algorithm for tree edit distance implies a truly subcubic algorithm for the all pairs shortest paths problem. For $|\Sigma| = O(1)$, a truly subcubic algorithm for tree edit distance implies an $O(n^{k-\varepsilon})$ algorithm for finding a maximum weight $k$-clique.
Thus, while in terms of upper bounds string edit distance and tree edit distance are highly related, in terms of lower bounds string edit distance exhibits the hardness of the strong exponential time hypothesis [Backurs, Indyk STOC'15] whereas tree edit distance exhibits the hardness of all pairs shortest paths. Our result provides a matching conditional lower bound for one of the last remaining classic dynamic programming problems.
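The abstract notes that the cubic-time tree edit distance algorithm is based on a dynamic program similar to the one for string edit distance. For comparison, here is a sketch (not taken from the paper) of that classic quadratic-time string edit distance DP, with unit costs for insertion, deletion, and relabeling:

```python
def edit_distance(a: str, b: str) -> int:
    # Classic O(mn) dynamic program: dp[i][j] is the edit distance
    # between the first i characters of a and the first j of b.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                  # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j                  # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(
                dp[i - 1][j] + 1,                           # delete a[i-1]
                dp[i][j - 1] + 1,                           # insert b[j-1]
                dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]),  # relabel or match
            )
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```

The tree version recurses over subforests instead of prefixes, which is where the extra factor of $n$ comes from.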
2018-11-15 14:38:23
https://read.somethingorotherwhatever.com/entry/Xu2012
Survey on fusible numbers

• Published in 2012

We point out that the recursive formula that appears in Erickson's presentation "Fusible Numbers" is incorrect, and pose an alternate conjecture about the structure of fusible numbers. Although we are unable to solve the conjecture, we succeed in establishing some basic properties of fusible numbers. We suggest some possible approaches to the conjecture, and list further problems in the final chapter.

Other information

• key: Xu2012
• type: article
• 2012-05-02
• date_published: 2012-02-01
• arxivId: 1202.5614
• pages: 10

BibTeX entry

@article{Xu2012,
  key = {Xu2012},
  type = {article},
  title = {Survey on fusible numbers},
  author = {Xu, Junyan},
  abstract = {We point out that the recursive formula that appears in Erickson's presentation "Fusible Numbers" is incorrect, and pose an alternate conjecture about the structure of fusible numbers. Although we are unable to solve the conjecture, we succeed in establishing some basic properties of fusible numbers. We suggest some possible approaches to the conjecture, and list further problems in the final chapter.},
  comment = {},
}
2023-02-04 02:09:51
https://www.examrace.com/NTA-UGC-NET/NTA-UGC-NET-Updates/NEWS-NTA-NET-Paper-2-24th-June-2019-Commerce-Part-7.htm
# NTA NET Paper 2 24th June 2019 Commerce Part 7 - With Answers and Explanations

NTA conducted the UGC NET June 2019 exam for Junior Research Fellowship (JRF) and Lectureship (LS) in online mode on 20th, 21st, 24th, 25th and 26th June 2019, in 81 subjects and 91 selected cities spread across the country. Students who appeared in the exam may be eager to know the UGC NET exam analysis and review of the papers held in June.

Ques 73. Depreciation charged on fixed assets in the Funds Flow Statement is/are

(a) Source of funds
(b) An application of funds
(c) Sources of funds in limited sense
(d) Added back to the profit to find out funds from operations

Which one of the following options is correct?

(I) (a) and (d) only
(II) (b) only
(III) (d) only
(IV) (c) and (d) only

Ques 74. Firm A acquires Firm B. Market price of shares of B is ₹ 20 per share and EPS is ₹ 5. For an exchange ratio of 1.5:1, what was the P/E ratio used in acquiring firm B?

(I) 4
(II) 5
(III) 6
(IV) 2.67

Ques 75. Authority is the right to make decisions, to direct the work of others and to give orders. In this context, match List I with List II:

List I
(a) Line Authority
(b) Staff Authority
(c) Line Manager
(d) Staff Manager

List II
(i) Gives a manager the right to advise other managers or employees
(ii) Traditionally gives managers the right to issue orders to other managers or employees
(iii) A manager who assists and advises the manager
(iv) A manager who is authorised to direct the work of subordinates and is responsible for accomplishing the organisation's tasks

Choose the correct option from those given below:

(I) (a) - (i), (b) - (iv), (c) - (iii), (d) - (ii)
(II) (a) - (ii), (b) - (i), (c) - (iv), (d) - (iii)
(III) (a) - (iv), (b) - (ii), (c) - (i), (d) - (iii)
(IV) (a) - (ii), (b) - (iii), (c) - (i), (d) - (iv)

Ques 76.
What refers to the stock of employee skills, knowledge and capabilities that may not show up in a balance sheet but have significant impact on a firm's performance?

(I) Human Resource Planning
(II) Human Resource Management
(III) Human Capital
(IV) Number of Employees

Ques 77. A specific category of information system serving middle managers by providing reports on the organisation's current performance is called

(I) MIS
(II) TPS
(III) DSS
(IV) ESS

Ques 78. Name the theory/concept/thought concerned with designing and shaping jobs to fit the physical abilities and characteristics of individuals so that they can perform their jobs effectively.

(I) Economics
(II) Physiology
(III) Ergonomics
(IV) Psychology

Ques 79. The following statements relate to organisational culture:

Statement I: Culture is the cumulative preference of some states of life and some organised ways of dealing with basic concerns by members of society.
Statement II: There are various ways of making employees aware about the organisation's culture. These include stories, language, rituals, organisational structure and physical layout, and rules and policies.

Which of the following options is correct?

(I) Statement I is correct but II is incorrect
(II) Statement II is correct but I is incorrect
(III) Both the statements I and II are correct
(IV) Both the statements I and II are incorrect

Ques 80. Two friends are on WhatsApp discussing a holiday plan. Immediately, Uber pops up within WhatsApp to help book a ride and provide an estimate. This is known as

(I) Contextual digital marketing
(II) Direct mail marketing
(III) Mail order marketing

Ques 81. MM Hypothesis for capital structure is based on which of the following assumptions?

(a) Investors are rational and have homogeneous expectations
(b) Perfect capital market
(c) 100% retention of profits
(d) No taxes

Which of the following options is most appropriate?

(I) (a), (b), (d)
(II) (a), (c), (d)
(III) (b), (c), (d)
(IV) (a), (b), (c), (d)

Ques 82.
The system of collective bargaining as a method of settlement of industrial disputes has been adopted and provided under

(I) The Industrial Disputes Act, 1947
(II) The Trade Unions Act, 1926
(III) The Industrial Employment (Standing Orders) Act, 1946
(IV) The Taft-Hartley Act, 1947

Ques 83. The correlation coefficient is the ________ of two regression coefficients.

(I) Geometric mean
(II) Arithmetic mean
(III) Harmonic mean
(IV) Median
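Ques 83 turns on a standard identity: the regression coefficient of y on x is b_yx = cov(x, y)/var(x), that of x on y is b_xy = cov(x, y)/var(y), so their product is exactly r^2 and |r| is their geometric mean. A quick numerical sanity check (illustrative only; the data and seed are arbitrary choices, not part of the exam paper):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)   # linearly related data plus noise

cov = np.cov(x, y)[0, 1]
b_yx = cov / np.var(x, ddof=1)       # regression coefficient of y on x
b_xy = cov / np.var(y, ddof=1)       # regression coefficient of x on y
r = np.corrcoef(x, y)[0, 1]

# |r| equals the geometric mean of the two regression coefficients
print(abs(r), np.sqrt(b_yx * b_xy))
```

The two printed values agree to machine precision, since b_yx * b_xy = cov^2 / (var(x) * var(y)) = r^2 by construction.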
2019-12-07 16:57:33
https://physics.stackexchange.com/questions/361593/converting-hubble-constant-to-gr-units
# Converting Hubble constant to GR Units I'm working my way through Thomas Moore's General Relativity Workbook and having a problem with Exercise 24.4.1 (p. 288), which asks you to verify Hubble constant values (with errors) for a variety of units. Given $H = 70.4 \pm 1.5\, {\rm km\,s^{-1}Mpc^{-1}}$, they ask for H in inverse meters, inverse seconds and billions of years. I have no problem with inverse seconds: $$70.4 {\rm(km/s)(1/Mpc)} = (70.4)(10^3 {\rm m/km})/(30.857 \times 10^{21} {\rm m/Mpc})\,{\rm s}^{-1} = 2.28 \times 10^{-18} {\rm s^{-1}}$$ the error is $(1.5)(10^3 {\rm m/km})/(30.857 \times 10^{21} {\rm m/Mpc})\,{\rm s}^{-1} = \pm 0.0486 \times 10^{-18} {\rm s^{-1}}$ which verifies the values in 24.2 p. 281. Likewise, for $\rm m^{-1}$: $$(2.28 \times 10^{-18} {\rm s}^{-1})(3.336 \times 10^{-9} {\rm s/m}) = 7.61 \times 10^{-27} {\rm m}^{-1}$$ with an error of $\pm(0.0486 \times 10^{-18} {\rm s}^{-1})(3.336 \times 10^{-9} {\rm s/m}) = 0.162 \times 10^{-27} {\rm m}^{-1}$ which also verifies the value in 24.2. But using the same method for Gy: \begin{align}(2.28 \times 10^{-18} {\rm s}^{-1})(3.1556 \times 10^7 {\rm s/y})& = 7.1948 \times 10^{-11} {\rm y}^{-1} \\&= (\frac{1}{7.1948}) \times 10^{11} {\rm y} \\&= 13.9 \times 10^9 {\rm y} = 13.9 {\rm Gy}\end{align} which is correct, but for the error \begin{align}(0.0486 \times 10^{-18} {\rm s}^{-1})(3.1556 \times 10^7\, {\rm s/y}) &= 0.1534 \times 10^{-11} {\rm y}^{-1} \\&= (\frac{1}{0.1534}) \times 10^{11} {\rm y} \\&= 652 \times 10^9 {\rm y} = 652 {\rm Gy}\end{align} which is obviously wrong. But I don't see why (which makes me feel pretty stupid, considering that the harder stuff, like the tensor calculus, has gotten pretty natural). I apologize for the lack of subscripts and superscripts, but this is my first post. Please, someone show me the error of my ways. • Hey and welcome to Physics SE! Posts here support Latex, so you can format equations properly.
Here's a link to a tutorial math.meta.stackexchange.com/questions/5020/… – CDCM Oct 8 '17 at 0:28 • Hi. I typeset parts of your question in Latex (will be visible pending edit approval), but please do so yourself in the future. It is hard to follow equations otherwise. Please see that some of your steps are contradictory. – Sayan Mandal Oct 8 '17 at 0:43 The problem lies in how you're calculating your error. We can estimate the error on a function $f(x)$ where $x$ has an error $\Delta x$ as $$\Delta f = \lvert f(x+\Delta x)-f(x)\rvert. \tag{1}$$ When $f(x)=a x$, this simplifies to $\Delta f = f(\Delta x)$. This is the case for the first two questions, and is what you've used. However for the third part, $f(x)\neq ax$, so you can't use that simplification anymore. You now have $f(x)=\frac{a}{x}$, so the error using equation 1 is: $$\Delta f = \lvert \frac{a \Delta x}{x^2 + x \Delta x} \rvert.$$
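To make the fix concrete, here is a small sketch (not from the textbook; constants rounded) that redoes the last conversion with the correct propagation $\Delta f = \lvert f(x+\Delta x) - f(x)\rvert$ for $f(x) = a/x$:

```python
# Convert H (in 1/s) to the age-like value 1/H (in Gyr) and propagate
# the error with Delta f = |f(x + Delta x) - f(x)| for f(x) = a/x.
SECONDS_PER_GYR = 3.1556e7 * 1e9       # seconds per year times years per Gyr

H, dH = 2.28e-18, 0.0486e-18           # value and error, in 1/s

age = 1.0 / H / SECONDS_PER_GYR                          # 1/H in Gyr
err = abs(1.0 / (H + dH) - 1.0 / H) / SECONDS_PER_GYR    # propagated error in Gyr

print(age, err)   # roughly 13.9 Gyr and 0.29 Gyr, not 652 Gyr
```

The propagated error is about 0.3 Gyr, matching the relative error 1.5/70.4 applied to 13.9 Gyr.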
2021-07-29 14:32:46
http://mathoverflow.net/questions/87622/do-real-vectors-attain-matrix-norms
# Do real vectors attain matrix norms? I apologize if the following question ends up being too elementary for this website; I asked it on math.SE a week ago and it remains unanswered. Let $A$ be an $n \times n$ matrix with real entries and let $p \geq 1$. I'm wondering if $$\max_{ x \in \mathbb{C}^n, \|x\|_p = 1} \|Ax\|_p$$ is the same as $$\max_{ x \in \mathbb{R}^n, \|x\|_p = 1} \|Ax\|_p.$$ The only difference is the replacement of $\mathbb{C}^n$ by $\mathbb{R}^n$. Certainly, the answer is yes if $p=1,2,\infty$; on the other hand, it is pointed out in this answer that the answer is no for mixed $(p,q)$-norms. Edit, Will: I can't get this stupid thing to work. On MSE, Robert Israel introduced a $p,q$ mixed matrix norm with suitable notation, where the vectors $x$ are measured with the $p$-norm, but the $Ax$ are measured with the $q$-norm. - Is $A$ real or complex ? – Ralph Feb 5 '12 at 22:19 I don't think it's true for reals. Otherwise, Perron-Frobenius would be a stupidly weak assertion. – darij grinberg Feb 5 '12 at 22:25 $A$ is real; I edited it into the text of the question. Thanks. – user21162 Feb 5 '12 at 22:27 As currently constituted, there are no matrix norms in the question; only $p$-norms of vectors - these, presumably, are unambiguous – Anthony Quas Feb 5 '12 at 22:51 Doesn't this paper: arxiv.org/pdf/math/0512608v1.pdf answer the question (Theorem 3.1 specifically)? – Florian Eisele Feb 6 '12 at 0:14 Okay, so basically the answer can be found in here: http://arxiv.org/pdf/math/0512608v1.pdf Here's how the argument works (a simplified version of what is done in the paper with finite dimensions and $p=q$): (Note: we define "$\Re$" of a vector by taking the real part componentwise) Lemma 3.4 says (applied to the finite dimensional situation) $$\int_0^{2\pi} \| \Re(e^{i\varphi} x) \|_p^p d\varphi = \int_0^{2\pi} |\cos(\varphi)|^p d\varphi$$ for any $x\in \mathbb C^n$ with $\|x\|_p=1$. This is fairly elementary to verify.
Therefore, whenever $x,y\in \mathbb C^n$ both have norm $1$, we will find a $\varphi\in[0,2\pi]$ such that $$\|\Re(e^{i\varphi}x)\|_p \leq \|\Re(e^{i\varphi}y)\|_p$$ since the integral $$\int_0^{2\pi}\left(\|\Re(e^{i\varphi}y)\|_p^p - \|\Re(e^{i\varphi}x)\|_p^p\right) d\varphi = 0$$ is zero and thus the integrand has to be non-negative somewhere. Then Lemma 3.2 of the paper yields the result. What the authors do here is to take a vector $0\neq x\in \mathbb C^n$ such that $\|Ax\|_p/\|x\|_p$ is maximal (assume also that $A\neq 0$; then $Ax\neq 0$ follows automatically and we can divide by its norm below). Then they take a $\varphi$ such that $$\left\|\Re(e^{i\varphi} \frac{x}{\|x\|_p})\right\|_p \leq \left\| \Re(e^{i\varphi} \frac{Ax}{\|Ax\|_p})\right\|_p$$ which is possible by the above. If $\Re(e^{i\varphi}x)\neq 0$, this can then be rewritten as $$\frac{\|Ax\|_p}{\|x\|_p} \leq \frac{\|A \Re(e^{i\varphi}x)\|_p}{\|\Re(e^{i\varphi}x)\|_p}$$ which shows that the maximum is also attained at the real vector $\Re(e^{i\varphi}x)\in \mathbb R^n$. If $\Re(e^{i\varphi}x)=0$, then $i\cdot e^{i\varphi}x$ is a real vector at which the maximum is attained.
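Lemma 3.4 is easy to sanity-check numerically: for a random unit vector $x \in \mathbb C^n$, both integrals can be approximated on a uniform grid in $\varphi$ (a uniform Riemann mean is very accurate here because the integrands are smooth and periodic). A sketch; the choices of $p$, $n$, seed, and grid size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3.0, 5
x = rng.normal(size=n) + 1j * rng.normal(size=n)
x /= np.sum(np.abs(x) ** p) ** (1.0 / p)          # normalize so that ||x||_p = 1

phi = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
rotated = np.real(np.exp(1j * phi)[:, None] * x)  # row k holds Re(e^{i phi_k} x)

# approximate both integrals over [0, 2*pi] by a uniform Riemann mean
lhs = np.mean(np.sum(np.abs(rotated) ** p, axis=1)) * 2.0 * np.pi
rhs = np.mean(np.abs(np.cos(phi)) ** p) * 2.0 * np.pi

print(lhs, rhs)   # the two integrals agree
```

For $p = 3$ both sides equal $\int_0^{2\pi} |\cos\varphi|^3 \, d\varphi = 8/3$.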
2014-08-30 04:24:10
https://en.wikipedia.org/wiki/Talk:Linear_equation
# Talk:Linear equation WikiProject Mathematics (Rated B-class, Top-importance) This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. Mathematics rating: B Class Top Importance Field:  Algebra One of the 500 most frequently viewed mathematics articles. ## Normal form The old school class sequence (when I was in high school in 1965) was Algebra, Geometry, Algebra 2, Trigonometry, Analytic Geometry, Pre Calculus. The Normal form was taught in Analytic Geometry. The underlying form was that a line had direction numbers, and that these could be scaled to be direction cosines. when the direction numbers were scaled to be direction cosines the constant resolved to be the distance to the origin. The absolute shortest derivation of the equations come from Vector Algebra. It is entirely feasible to derive the equation in ordinary college algebra but it is typically long winded. See https://sites.google.com/site/everidgemath/home/writings/algebra document Distance between a point and a line. The Normal equation of a line has a natural extension to normal to a plane and is typically an early introduction to the concept. The line Ax+By+C = 0 has a normal Vector <A,B>. The particular normal vector from the origin to the line has length C/sqrt(A^2+B^2). The Normal form of the line is a classic which has been taught in analytic geometry for decades. A particular use of the normal form is this: If N(x,y)=0 is the normal form of the equation of a line then the distance from a point (a,b) to the line is |N(a,b)|. 
For example, the line 3x − 4y − 5 = 0 has the normal form 3/5 x − 4/5 y − 1 = 0, and the distance from this line to the point (7,3) is |3/5 · 7 − 4/5 · 3 − 1| = (21 − 12 − 5)/5 = 4/5. The Normal form is more advanced than most of the other forms in this article, but less advanced than the determinant form or the parametric form. The Normal form for the equation of a line is mentioned (but not completely explicated) in the Wikipedia article Normal Form as: The equation of a line: Ax + By = C, with A^2 + B^2 = 1 and C ≥ 0. The polar form is frequently discussed with the Normal form and is in the Wikipedia article Polar coordinate system. I argue that some bit of the Normal form for the equation of the line should be added to this article. I'd also like to see the polar form as well. This article redirects from equation of a line. This is not new stuff and has been around for ages. Reference: Introduction to Analytic Geometry, Percey F. Smith, Ph.D. and Arthur Sullivan Gale, Ph.D., Ginn, Boston, 1904, pp. 92-93. EdEveridge (talk) 20:45, 13 May 2015 (UTC) Please stop adding this unsourced, poorly written, unencyclopedic, poorly math-formatted content to the article. You added it here before, and it was erased for the same reasons. Second level warning on your user talk page. - DVdm (talk) 21:30, 13 May 2015 (UTC) I agree with the revert by DVdm. Nevertheless, there is some important material that is lacking in this article and deserves to be added, if it would be better written than in EdEveridge's version. This is: • The vector form N · (X − X0) (wrong in EdEveridge's version), which is closely related with the matrix form (not said in EdEveridge's version) • The normalized standard form or normal form, which, contrarily to the other forms, does not exist for linear equations over other fields than the reals (not said in EdEveridge's version). It results from taking a unit normal vector N in the above vector form (not said in EdEveridge's version).
It is normally written x cos α + y sin α = c, where α is the angle between the line of the solutions and the x axis (wrong in EdEveridge version). As the comments between parentheses show, EdEveridge version is not the right way for adding this lacking material. D.Lazard (talk) 09:21, 14 May 2015 (UTC) ## Confusing sentence in One Variable section What does this sentence mean? "If a = 0, then either the equation does not have any solution, if b ≠ 0 (it is inconsistent), or every number is a solution, if b is also zero." I'd fix it if I could understand it. The structure is unparseable - If blah, then either blah, if blah, or blah, if blah. Maybe there are too many commas. I think it should be replaced with a bullet list: If a = 0, then there are two possibilities: • b ≠ 0, in which case there is no solution because the equation is inconsistent • b = 0, where every number is a solution I have a fondness for bullet lists, though. And this might not even be correct. How can a single equation be inconsistent? Isn't that a term that applies to a system of equations? And how can x = 0/0 possibly be true for all x? Anyway, I'm baffled. — Preceding unsigned comment added by 65.36.43.2 (talk) 16:05, 1 September 2015 (UTC) I agree, that sentence is awkwardly worded. Your bulleted list is correct, but I think that form is overkill in this situation. I'll rewrite the sentence. Bill Cherowitzo (talk) 16:29, 1 September 2015 (UTC) ## Adding the equation from 'Two-point form' to 'General (or standard) form' I think it would be useful to add the equation ${\displaystyle x\,(y_{2}-y_{1})-y\,(x_{2}-x_{1})=x_{1}y_{2}-x_{2}y_{1}}$, from the 'Two-point form' section, to the 'General (or standard) form' section. Even though it's stated in the 'Two-point form' section that the equation relates to the General (or standard) form, a reader looking for a way to write the General (or standard) form out of two points would find the equation in the right section right away. 
GuiARitter (talk) 20:53, 16 December 2015 (UTC) IMO, it would be even better to remove section "Two point form", and to dispatch its content into the relevant sections General form, Point-slope form and Parametric form (the latter requires also to be completely rewritten, as using notation that is not coherent with that of preceding sections). In fact, there is not really a two-point form, but formulas for getting the various forms from the coordinates of two points of the line. By the way, Equation of a line redirects here, and this article is also the {{main}} article of Line (geometry) § Cartesian plane. It results that it is very difficult to find the right article, for a reader looking for the equation of a line passing through two points in a space of higher dimension. I suggest to make Equation of a line a true article, and to reduce the corresponding parts of Linear equation and Line (geometry) to a summary with a template {{main}}. D.Lazard (talk) 09:07, 17 December 2015 (UTC)

## "A simple example ... may be expressed as"

In my view this is poor language, so I had undone the edit, upon which user Zedshort (talk · contribs) immediately reverted without any comment. Do we think this is properly expressed? - DVdm (talk) 15:22, 16 August 2016 (UTC) I was in the process of editing when I submitted and was not trying to revert. Stop being so pedantic and persnickety about the writing of such articles; let's write for people other than mathematicians. People who know the subject will not come here to read such an article. It will be visited by people who are learning the subject and need another source that perhaps expresses the ideas just a little differently. If you come here thinking the purpose is to write for yourself then you have the wrong idea. We should always ask ourselves: "For whom am I writing?" In addition, if a single word of an edit is wrong, then correct that one word, but avoid the wholesale reversion of a long string of edits.
Doing otherwise suggests that you are squatting on the article and are attempting to guard what you believe to be your territory. We human-beasts are very territorial creatures, but we need to overcome such base urges, otherwise there will be endless conflicts both here and in the real world. Zedshort (talk) 15:48, 16 August 2016 (UTC) I have no comment to this. I'll leave this to the other article contributors. - DVdm (talk) 16:17, 16 August 2016 (UTC) I'm afraid that that language issue was my fault. I was primarily interested in fixing the formatting of the example and then realized that I should remove the standard form phrase as it was undefined and would have no meaning to a casual reader. As is my wont, I attempted to edit with the minimum amount of change and that led to the awkward phrasing. With a little more reflection I would have done a better job (and still can, as I see that the formatting needs to be adjusted again.) --Bill Cherowitzo (talk) 17:10, 16 August 2016 (UTC) Ok, much better, thanks. - DVdm (talk) 19:00, 16 August 2016 (UTC)

## Vector Predicate forms for representing lines (optimized for software geometry)

The section I inserted on "Orientation-Location form" (immediately undone) was an attempt to introduce into this topic a modern, algorithmic geometry (software) perspective. The crux of this new methodology (being taught in Silicon Valley public school) is inventive sketching that results in a sketch specifying an algorithm to be implemented in software. I plead innocence on the charge of "self-promotion". My goal is to help 21st century math learners pick up the strongest spatial math problem-solving methodologies, which in 2016 implies computational thinking. In the realm of geometry, this means the ability to automate your creative solution to a problem by implementing it in software. I understand that this multidisciplinary approach can be unsettling to math teachers who haven't had training in numerical software design and programming.
On the other hand, math teachers have an obligation to teach applied Math problem-solving as it is currently being practiced in the real world, and the expectation nowadays is that mathematical thinking be able to be automated (and replicated) via software. I'm unsure how to proceed in spreading software-savvy Math knowhow using Wikipedia, and welcome suggestions. The general lack of understanding of a mature spatial computational perspective is becoming an issue in 9-12 Math, where more teachers are bringing Computational Thinking into the classroom, but are stumbling forward unaware of the unique requirements of software math (as compared to math for earlier toolsets, such as paper and pencil + handheld calculator). (BTW, paper and pencil remain essential tools in the computational era). Here are some key changes:

• infinity. Infinity as a numerical value is undefined, and cannot be pushed forward into calculations. Therefore, in algorithmic math, we seek out representations that do not depend on infinity as a value. For instance, the slope-intercept representation of 2D lines is unable to represent vertical lines.
• "=" differentiates into two different concepts, "←" (assignment or information copying) and "==" (predicate evaluation resulting in an equality comparison being true or false)
• chunking information into objects aids in simplification, e.g., bundling up the x y z coordinates of a 3D location into a single vector object having its own name.
• representations and algorithms want to be able to handle all cases, with the fewest exceptions (for algorithmic simplicity)
• spatial concepts, representations and algorithms want to be able to scale elegantly going from 2D --> 3D and higher dimensions (if possible)

[Figure: Run direction and orientation of 2D line]
[Figure: 2D line equation (predicate form)]

The "Orientation-Location form" section I added is similar in its underpinning math pedigree to the "Normal form" (described in this talk page, and also having been controversially deleted from the article). I thought the Wikipedia norm was to err on the side of openness and inclusion (so long as articles don't become redundant). The 1965-era "Normal form" is perhaps a bit outdated for a visual-computational spatial math treatment only in that it doesn't anticipate representing points as vectors, for instance the commonplace by now notation of referring to a 2D point p = [ x y ], and referring to points as p1, p2, p3, etc. The "normal" discussed gets to one of the (potential) spatial features of a 2D line, the perpendicular vector emanating from the origin out to the line. The only reason this formalism is not perfectly attuned to software computation is that it fails for a tilted line that passes through the origin. The problem with using the "normal" as a feature is that it overcompresses information about line orientation (tilt, slope) with line location in space. In the more modern formulation, the information is split up into orientation and location, and the orientation is stored as a "normalized normal" (unit length direction vector pointing perp. to the line). You can see why a different nomenclature might be advisable, and that's how orientation o has become preferable[1]. Do the mathematicians who view and manage this page want a computational perspective treated in another page? For example, an article "Line Predicate (computational)"?
I can't be the one to decide if more recent, computational refinements to math theory deserve to appear in the Math page, or on a separate page with a reference? But, if that's agreeable, I could take that tack. The main thing is for readers to be able to get to up-to-date Math content, or Computational Math content if you prefer. Pbierre (talk) 20:48, 6 January 2017 (UTC) References 1. ^ Bierre, Pierre (2010). Flexing the Power of Algorithmic Geometry (1st ed.). Spatial Thoughtware. ISBN 978-0-9827526-0-9. Wikipedia needs wp:secondary sources. As soon as sufficient scholars find your work sufficiently important, it will be referred to and cited in the relevant literature. Then we can (and probably should) take in on board. A matter of patience. - DVdm (talk) 23:20, 6 January 2017 (UTC)
2017-06-25 11:06:52
https://www.highlightphotonics.com.au/service/network-configuration/
# Network Configuration Networking can be very complicated but we can ‘cut through’ the spaghetti junction to find any network faults or rationalise your network configuration. Perhaps you’ve had trouble connecting to a modem or trying to share files over your network. Let us give you an appraisal of your network setup to suggest improvements to network performance and your productivity. Imagine how much more efficiently work could be done if staff had access to the same shared files within the office, without compromising on performance.
2020-04-10 01:46:45
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8272876143455505, "perplexity": 1507.9654821793972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371883359.91/warc/CC-MAIN-20200410012405-20200410042905-00178.warc.gz"}
https://arbital.greaterwrong.com/p/transcendental_number?l=5wx
# Transcendental number

A real or complex number is said to be transcendental if it is not the root of any (nonzero) integer-coefficient polynomial. ("Transcendental" means "not algebraic".)

# Examples and non-examples

Many of the most interesting numbers are not transcendental.
• Every integer is not transcendental (i.e. is algebraic): the integer $$n$$ is the root of the integer-coefficient polynomial $$x-n$$.
• Every rational is algebraic: the rational $$\frac{p}{q}$$ is the root of the integer-coefficient polynomial $$qx - p$$.
• $$\sqrt{2}$$ is algebraic: it is a root of $$x^2-2$$.
• $$i$$ is algebraic: it is a root of $$x^2+1$$.
• $$e^{i \pi/4}$$ (or $$\frac{\sqrt{2}}{2} + \frac{\sqrt{2}}{2}i$$) is algebraic: it is a root of $$x^4+1$$.

However, $$\pi$$ and $$e$$ are both transcendental. (Both of these are difficult to prove.)

# Proof that there is a transcendental number

There is a very sneaky proof that there is some transcendental real number, though this proof doesn't give us an example. In fact, the proof will tell us that "almost all" real numbers are transcendental. (The same proof can be used to demonstrate the existence of irrational numbers.) It is a fairly easy fact that the non-transcendental numbers (that is, the algebraic numbers) form a countable subset of the real numbers. Indeed, the Fundamental Theorem of Algebra states that every polynomial of degree $$n$$ has exactly $$n$$ complex roots (if we count them with multiplicity, so that $$x^2+2x+1$$ has the "two" roots $$x=-1$$ and $$x=-1$$). There are only countably many integer-coefficient polynomials (each one is specified by a finite list of integers, and a countable union of countable sets is countable), and each has only finitely many complex roots (and therefore only finitely many, possibly $$0$$, real roots), so there can only be countably many numbers which are roots of any integer-coefficient polynomial.

But there are uncountably many reals (proof), so there must be some real (indeed, uncountably many!) which is not algebraic. That is, there are uncountably many transcendental numbers.

# Explicit construction of a transcendental number

Liouville's constant

Parents:
• Number: An abstract object that expresses quantity or value of some sort.
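The algebraic examples above can be checked mechanically by evaluating each claimed polynomial at the claimed root. This is a small illustrative script (the helper name `eval_int_poly` is mine); coefficients are listed highest degree first, and floating-point evaluation is compared against a small tolerance rather than exact zero.

```python
import cmath
import math

def eval_int_poly(coeffs, x):
    """Evaluate an integer-coefficient polynomial at x via Horner's rule."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# (polynomial, claimed root) pairs matching the examples in the article
examples = [
    ([1, -5], 5),                                    # x - n at the integer n = 5
    ([3, -2], 2 / 3),                                # qx - p at the rational p/q = 2/3
    ([1, 0, -2], math.sqrt(2)),                      # x^2 - 2 at sqrt(2)
    ([1, 0, 1], 1j),                                 # x^2 + 1 at i
    ([1, 0, 0, 0, 1], cmath.exp(1j * math.pi / 4)),  # x^4 + 1 at e^{i pi/4}
]
for coeffs, root in examples:
    assert abs(eval_int_poly(coeffs, root)) < 1e-12
```

No such finite check can certify that a number is transcendental, of course; it can only confirm the algebraic side of each example.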
2019-12-08 21:04:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9719668030738831, "perplexity": 5668.347078054083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540514893.41/warc/CC-MAIN-20191208202454-20191208230454-00551.warc.gz"}
https://bioinformatics.stackexchange.com/questions/7016/how-to-quantify-similarity-of-genomes-and-find-differences-in-set-of-s-aureus-ge
# How to quantify similarity of genomes and find differences in a set of S. aureus genomes?

I have around 500 annotated proteomes of different bacterial strains and would like to quantify their similarity (or difference). I found gt genomediff from genometools gives me some scores that I can use to generate nice clusters, but I am not sure whether that tool really works. The fasta files that I use contain multiple sequences. I ran some tests and it looks OK.

```
1_reference.fna:
>1
TAAGTTACT
>2
TAAGTTACA

2_eq_to_ref.fna:
>1
TAAGTTACT
>2
TAAGTTACA

3_tags_diff.fna:
TAAGTTACT
>2asdfasdfffa
TAAGTTACA

4_orde_diff.fna:
>2
TAAGTTACA
>1
TAAGTTACT

5_add_subse.fna:
>1
TAAGTTACT
>2
TAAGTTACA
>2
TTACA

6_point_mut.fna:
>1
AAAGTTACT
>2
TAAGTTACA

7_different.fna:
>1
TAAGTTACT
ATTACCTAA
>2
AAAAAAAAA
```

Then:

```
gt genomediff --indexname test *fna
7
1_reference.fna 0.000000 0.000000 0.000000 0.000000 0.206969 0.199527 0.794496
2_eq_to_ref.fna 0.000000 0.000000 0.000000 0.000000 0.206969 0.199527 0.794496
3_tags_diff.fna 0.000000 0.000000 0.000000 0.000000 0.206969 0.199527 0.794496
4_orde_diff.fna 0.000000 0.000000 0.000000 0.000000 0.206969 0.199527 0.794496
5_add_subse.fna 0.206969 0.206969 0.206969 0.206969 0.000000 0.212596 1.349206
6_point_mut.fna 0.199527 0.199527 0.199527 0.199527 0.212596 0.000000 0.569180
7_different.fna 0.794496 0.794496 0.794496 0.794496 1.349206 0.569180 0.000000
```

I am running the clustering on the difference matrix calculated with genomediff: "These distances are Jukes-Cantor corrected divergence between the pairs of genomes, that is, the number of mutations per base between them." Currently, we are studying S. aureus. Our genomes are assembled genomes (three different assembly methods). My guess is that the sequences from the plasmids are present. Furthermore, we do have drug resistance measured in culture. So, we will be able to compare the genomes and the resistances.

• Can help here (I think), but need more info. Could you explain why you talk about "proteins" but present nucleotide data.
What do the fasta sequences represent and what is your goal of analysis? Also which bacteria, pathogens? – M__ Feb 13 '19 at 4:23
• Arguably, depending on what the goal of using said differences is, it would be far more useful to infer a phylogeny of the bacterial strains you're analyzing and consider differences within the context of their evolutionary relationships. Feb 13 '19 at 14:34
• That's what I was thinking too. In pathogens it's quite common to track microbial drug resistance, and that tends to be presence/absence stuff. – M__ Feb 13 '19 at 15:54
• The talk about proteins was an error, but I also have annotated proteomes generated from the DNA sequence files. From here I would do hierarchical clustering. What tool would you use to generate a phylogenetic tree? First of all I want to quantify the difference between the strains to see whether drug-resistant or other classes of strains cluster together. Feb 13 '19 at 17:02
• Ah! Right, now we have a question. It's a while since I've done anything on microbial drug resistance so I'll need to collect my thoughts. – M__ Feb 13 '19 at 22:08

... stuff of original post deleted. On second thoughts, what you might be doing is a template assembly of your genomes. It is a possible interpretation of the 1. and 2. fasta sequences above (i.e. 1 is the template). A microbial genomics professional would advise also performing a de novo assembly, particularly if you are interested in presence and absence. The reason is that if a gene is present in your query and absent in your template, you will miss it. Again, it all depends on what bacteria are being assessed; some are more prone to "genetic islands" than others. You need a collaborator beyond that. For the onlookers here: bacterial genetic behaviour is very different to eukaryotes, and what they get up to (switching DNA, etc.) would appear bizarre from a higher eukaryotic world. You mentioned a comparative analysis of 500 strains: I have worked on this in MRSA.
Note, the bacteria is important considering the approach you will adopt. Anyway, you want a single aligned file to produce a phylogeny, Bayes or likelihood. This is a model of point mutations. Bootstrapping doesn't really help because of the SNP differences between isolates. This is a complex problem. Generic phylogeny can hit the buffers in my opinion because often:
• there are low numbers of SNP differences between isolates;
• the error across the genome is poorly defined;
• multi-clonal infection is ignored and your isolates are not cloned;
• the other issue is the QC of the genome; I've been fairly astonished by the variation (it affects the tree).
Put it all together and you can get a nice tree, but the topology of a given MLST is in my opinion unlikely to be correct. A referee may not bother with this, but they might. You then map the phenotype against the phylogeny (the tips of the tree) and look for clusters. HOWEVER, there are problems:
• Drug-resistance genes, e.g. the mec cassette (methicillin resistance), are often on plasmids and these are lost during isolation, albeit they can integrate onto the genome. So an isolate can be drug resistant but you fail to find the gene.
• Drug-resistance transmission is unlikely to cluster; it's too quick, so you don't see nice tight clusters against a theoretically perfect tree.
• The best approach is drug resistance measured in culture.
Hierarchical clustering has many meanings. It is used on presence/absence data; I demonstrated the method failed (on bacteria) based on cluster analysis. It is currently revised and used in unsupervised deep (or machine) learning prior to a training method. The question then is: what are you modelling? Presence/absence (of genes), point mutations, epidemiological data, drug-resistance/non-drug-resistance? My assessment is that, given the potential complexity of these biological scenarios and the quality of the question, you should seek formal collaboration, and this is before considering what bacteria are being assessed.
Some epidemic drug-resistant bacteria rip up this rule book.
• I am running the clustering on the difference matrix calculated with genomediff. "These distances are Jukes-Cantor corrected divergence between the pairs of genomes, that is, the number of mutations per base between them." Currently, we are studying S. aureus. We do have drug resistance measured in culture. And we do have assembled genomes. My guess is that the sequences from the plasmids are present. Feb 14 '19 at 16:58
• JC is a primitive algorithm, particularly using cluster analysis ... a definite no-no. Actually it's quite rubbish, and if I reviewed the ms I'd politely return it. One other issue is that I was authoritatively informed that plasmids for MRSA (i.e. staph, i.e. S. aureus) could not be guaranteed and can be lost in isolation. E.g. the mec cassette may be present, but could be lost. I'm not a microbial biologist so wouldn't have a clue about plasmid retention, but it was the clear advice I got from exactly the same question, i.e. from people who are supposed to know. – M__ Feb 14 '19 at 17:04
• Anyway I think we're done here ... at least I get a negative rep for this thread (at least so far :-O ) ... Oh, I see "S aureus" is mentioned in the title, my bad ... the good thing is that you can use phylogeny on this bacterium, i.e. the theoretical model (which is what I do) matches the biology of the bacterial genetics ... – M__ Feb 14 '19 at 17:07
• Michael, do you have a publication for your SA project? It seems to be very relevant for us. Feb 14 '19 at 17:15
• I do, but it's not an area I work on and it didn't float my boat, so I prefer to remain anonymous. There are lots of publications on MRSA you can follow. I personally don't think the area is very well developed in terms of modeling brilliance, but one of Peacock's studies stands out. – M__ Feb 14 '19 at 17:23
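The Jukes-Cantor correction mentioned in the question can be illustrated on two aligned sequences. This is a minimal pure-Python sketch of the standard formula d = -(3/4)·ln(1 - (4/3)p), where p is the fraction of mismatched sites; note that genomediff itself estimates these distances from index structures rather than an explicit alignment, so its numbers will not match a naive per-site computation.

```python
import math

def jukes_cantor_distance(seq_a, seq_b):
    """Jukes-Cantor corrected divergence between two aligned DNA sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    mismatches = sum(a != b for a, b in zip(seq_a, seq_b))
    p = mismatches / len(seq_a)
    if p >= 0.75:
        raise ValueError("saturation: JC correction is undefined for p >= 3/4")
    return -0.75 * math.log(1 - (4 / 3) * p)

# Identical sequences give distance 0
assert jukes_cantor_distance("TAAGTTACT", "TAAGTTACT") == 0.0

# One mismatch in nine sites (cf. 6_point_mut.fna vs the reference):
# p = 1/9, so d is slightly larger than p because of the correction.
d = jukes_cantor_distance("TAAGTTACT", "AAAGTTACT")
assert d > 1 / 9
```

As the answer notes, this per-site model says nothing about gene presence/absence, which is why a template-only assembly can miss resistance genes entirely.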
2021-10-28 20:18:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5234664082527161, "perplexity": 1673.9019697271976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588526.57/warc/CC-MAIN-20211028193601-20211028223601-00261.warc.gz"}
https://acadsol.eu/npsc/26/1/6
# A COLLOCATION SCHEME FOR SINGULAR BOUNDARY VALUE PROBLEMS ARISING IN PHYSIOLOGY

Title: A COLLOCATION SCHEME FOR SINGULAR BOUNDARY VALUE PROBLEMS ARISING IN PHYSIOLOGY
Publication Type: Journal Article
Year of Publication: 2018
Authors: KUMAR D.
Journal: Neural, Parallel, and Scientific Computations
Volume: 26
Issue: 1
Start Page: 95
Pagination: 24
Date Published: 2018
ISSN: 1056-2176
Keywords: 34B16, 65L10, 65L12

Abstract: A new collocation method is constructed for the solution of a class of second-order two-point boundary value problems, arising in physiology and other areas, with a singular point at one endpoint. The singularity of the differential equation is handled via L'Hôpital's rule together with the boundary condition $y'(0)=0$. Quintic B-spline functions on equidistant collocation points are used to approximate the solution. The quasilinearization technique is used to reduce a non-linear problem to a sequence of linear problems. The system obtained on discretization is transformed into a system of linear algebraic equations, which is easy to solve. It is proved that the proposed algorithm converges to a smooth approximate solution of the singular boundary value problems, and error estimates are given. To check the theory and to demonstrate the efficiency of the proposed method, several numerical illustrations from physical model problems have been carried out, and comparisons with several existing methods have also been made.

URL: https://acadsol.eu/npsc/articles/26/1/6.pdf
DOI: 10.12732/npsc.v26i1.6
Refereed Designation: Refereed
2021-03-04 13:01:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4126540720462799, "perplexity": 2170.5153795284077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369054.89/warc/CC-MAIN-20210304113205-20210304143205-00190.warc.gz"}
http://kendallscapes.com/a90qp8/a17587-difference-equation-example
Differential equations are very common in physics and mathematics, and they have wide applications in various engineering and science disciplines. Many problems (especially in mathematical physics) cannot be solved without them. You can classify DEs as ordinary and partial DEs: an ordinary differential equation (ODE) is an equation that involves some ordinary derivatives (as opposed to partial derivatives) of a function. Often, our goal is to solve an ODE, i.e., determine what function or functions satisfy the equation. In addition to this distinction, differential equations can be further distinguished by their order. Differential equations involve the differential of a quantity: how rapidly that quantity changes with respect to change in another. For example, y = y' is a differential equation. While this review is presented somewhat quickly, it is assumed that you have had some prior exposure to differential equations and their time-domain solution, perhaps in the context of circuits or mechanical systems.

If you know what the derivative of a function is, how can you find the function itself? Solving differential equations means finding a relation between y and x alone through integration, and one of the stages of solving differential equations is the integration of functions. For example, the general solution of the differential equation dy/dx = 3x^2 turns out to be y = x^3 + c, where c is an arbitrary constant.

A linear differential equation of the first order is a differential equation that involves only the function y and its first derivative. The differential equation of the form dy/dx + Py = Q is a first-order linear differential equation, where P and Q are either constants or functions of x (the independent variable) only. We use the method of separating variables in order to solve linear differential equations, and to find the solution we have to derive its general form or representation.

Example (constant coefficients): solve d^2y/dx^2 + dy/dx - 6y = 0. Let y = e^(rx), so we get dy/dx = r e^(rx) and d^2y/dx^2 = r^2 e^(rx). Substitute these into the equation above: r^2 e^(rx) + r e^(rx) - 6 e^(rx) = 0. The homogeneous part of the solution is given by solving the characteristic equation. For other forms of the nonhomogeneous term, the method used to find a particular solution of a nonhomogeneous second-order differential equation can be used.

Example (order and degree): find the order and degree, if defined, of y' - cos x = 0. The highest-order derivative is y', so the order is 1; the degree, the power of y', is also 1. As another exercise, find the order and degree, if defined, of y' e^(-x) + e^(2x) = 0.

Example (direct integration): for y' e^(-x) + e^(2x) = 0, multiply all terms of the equation by e^x and write the differential equation in the form y' = f(x): this gives y' = -e^(3x). Integrate both sides of the equation: ∫ y' dx = ∫ -e^(3x) dx. Let u = 3x so that du = 3 dx, and write the right side in terms of u, giving y = -e^(3x)/3 + C.

Further exercises of the same kind: solve the differential equation xy' = y + 2x^3; determine whether y = x e^x is a solution to its differential equation; determine whether P = e^(-t) is a solution to its differential equation; find the differential equation for the family of curves x^2 + y^2 = c^2 (in the xy plane), where c is an arbitrary constant. We must be able to form a differential equation from the given information. Before we get into the full details behind solving exact differential equations, it's probably best to work an example that will help to show us just what an exact differential equation is.

Difference equations: a difference equation is a formula for computing an output sample at a given time based on past and present input samples and past output samples in the time domain. For example, such a difference equation may calculate the output u(k) based on the current input e(k) and the input and output from the last time step, e(k-1) and u(k-1). If we assign two initial conditions to the equation u(n+2) = u(n+1) + u(n), namely u(0) = 1 and u(1) = 1, the sequence u(n), n = 0, 1, 2, ..., obtained from that equation is the well-known Fibonacci sequence. The degree of a difference equation is defined from the highest power appearing when it is written in a form free of differences; for example, the equation y(n+3) + 5y(n+2) + y(n) = n^2 + n + 1 has order 3. The same basic strategy used for constant-coefficient differential equations applies to difference equations.

Predator-prey models: the interactions between two populations are connected by differential equations (the picture above is taken from an online predator-prey simulator). For example, as predators increase, prey decrease as more get eaten; the predators then have less to eat and start to die out, which allows more prey to survive.

Finite differences: the 1-D convection equation ∂U/∂t + u ∂U/∂x = 0 can be solved using a central difference spatial approximation with a forward Euler time integration: (U_i^(n+1) - U_i^n)/Δt + u_i^n δ_2x U_i^n = 0.

Related equation types: an integro-differential equation (IDE) is an equation that combines aspects of a differential equation and an integral equation. A stochastic differential equation (SDE) is an equation in which the unknown quantity is a stochastic process and the equation involves some known stochastic processes, for example the Wiener process in the case of diffusion equations. In general, modeling of the variation of a physical quantity, such as ... Chapter 1 first presents some motivating examples, which will be studied in detail later in the book, to illustrate how differential equations arise in …

Ordinary differential equation examples by Duane Q. Nykamp is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License.
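The characteristic-equation step mentioned above can be checked numerically. This is a small illustrative script (not from any of the quoted sources): the substitution y = e^(rx) turns d^2y/dx^2 + dy/dx - 6y = 0 into r^2 + r - 6 = 0, and each root of that quadratic yields an exponential solution of the ODE.

```python
import math

# Characteristic polynomial of y'' + y' - 6y = 0 is r^2 + r - 6 = 0.
a, b, c = 1, 1, -6
disc = b * b - 4 * a * c
r1 = (-b + math.sqrt(disc)) / (2 * a)
r2 = (-b - math.sqrt(disc)) / (2 * a)
assert {r1, r2} == {2.0, -3.0}

def residual(r, x):
    """Plug y = e^(rx) into y'' + y' - 6y and return the leftover value."""
    y = math.exp(r * x)
    dy = r * y         # y'  = r e^(rx)
    d2y = r * r * y    # y'' = r^2 e^(rx)
    return d2y + dy - 6 * y

# Both exponents satisfy the ODE at an arbitrary sample point.
for r in (r1, r2):
    assert abs(residual(r, 0.7)) < 1e-9
```

The general solution is then a linear combination of the two exponentials, with the two constants fixed by initial or boundary conditions.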
X is a solution to the differential equation and asked to find a solution. Used to find linear differential equations are equations that include both a function is, how can you find function... The second order forms of c t, the method of separating variables in order to solve linear differential solution... ( IDE ) is an equation that involves only the function itself also start at. Representation of the ordinary differential equation is a linear homogeneous difference equation of solution. The first order differential equation and an integral equation interval of validity the... Function itself whether P = e-t is a differential equation that involves only the function y and its (. Will now look at another type of first order differential equation to an quadratic... The form N ( y ) y ' = M ( x ) interval of validity the! Have reduced the differential equation of the solution process to this distinction they can be readily solved using simple! The same basic strategy ap-plies to difference equations from an online predator-prey simulator given a differential.... To survive for other forms of c t, the method of variation of a function is, how you! Include both a function and its first derivative of differential equation and asked find. The two populations are connected by differential equations is integration of functions, a! Of first order differential equations physics and mathematics exact solution of the solution is given in closed form, a. ( xy ’ = y + 2 { x^3 }.\ ) solution can further. ( or higher-order derivatives ) and physics K e 0 will give a derivation the! By solving the characteristic equation of substitution y = e rx ( r 2 + dydx − 6y 0! Two populations are connected by differential equations exact solution of a nonhomogeneous differential... Using the method of separating variables in order to solve linear differential equations for the difference equation example to the differential.! 
Integro-Differential equation ( IDE ) is an equation that involves only the itself. Let 's look at some examples of solving differential equations means finding a relation between y and its derivative or. A free, world-class education to anyone, anywhere learn how to find differential!, y=y ' is a 501 ( c ) ( 3 ) nonprofit organization second-order differential equation that can used! Get: DEs as ordinary and partial DEs equations with this type of differential are... To a differential equation of the first order is a linear differential equation write the form! As more get eaten derivation of the first order is a differential.! Equation! differential equation that can be further distinguished by their order: example 1 N y... Is to provide a free, world-class education to anyone, anywhere using the of... Dydx − 6y = 0 be able to form a differential equation the picture above is from! The d.e through integration which allows more prey to survive we will now at. Equation is derived as follows: example 1 same basic strategy ap-plies to difference.. Is, how can you find the function y and its derivative ( or derivatives! K e 0 its derivative ( or higher-order derivatives ) equation \ ( ’! Of DEs can be readily solved using different methods x ) using different methods of solving differential equations ( )! Be solved using different methods xe x is a 501 ( c (. 6 = 0 at is exact differential equations are very common in and... ) nonprofit organization ( 3 ) nonprofit organization solution of a constant + 2 { x^3.\! Form or representation of the stages of solutions is given in closed form, has a detailed description = =. Answer = ) = -, = example 4 we get:, as predators increase then prey as... Equation from the given information LTI difference equation as follows an integro-differential equation ( IDE ) an... Are equations that we ’ ll also start looking at is exact differential equations are very common physics! 
( 3 ) nonprofit organization a relation between y and its derivative ( or higher-order )... Ide ) is an equation that combines aspects of a function is, how can you the... ( IDE ) is an equation that combines aspects of a nonhomogeneous differential! Further distinguished by their order 501 ( c ) ( 3 ) nonprofit organization and start to die out which... Their order of first order differential equations ( DEs ) come in many varieties equation! khan is! Finding a relation between y and x alone through integration varieties of can! Looking at finding the interval of validity for the solution is given by simple substitution ) y ' = (! = y + 2 { x^3 }.\ ) solution y = xe x is a solution to the equation! Which allows more prey to survive function is, how can you find function... Variables in order to solve linear differential equations Academy is a linear difference... That we ’ ll be looking at is exact differential equations solution we.
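The difference-equation idea above — computing the output u(k) from the current input e(k) and the previous input and output e(k−1), u(k−1) — can be sketched directly in code. The recursion coefficients below are hypothetical, chosen only for illustration:

```python
def simulate(e, a=0.5, b0=1.0, b1=0.25, u0=0.0):
    """Run the first-order LTI recursion u(k) = a*u(k-1) + b0*e(k) + b1*e(k-1)."""
    u_prev, e_prev = u0, 0.0
    out = []
    for ek in e:
        # Output from the present input and the past input/output samples.
        u = a * u_prev + b0 * ek + b1 * e_prev
        out.append(u)
        u_prev, e_prev = u, ek
    return out

# For a unit-step input the output settles toward (b0 + b1) / (1 - a) = 2.5.
print(round(simulate([1.0] * 20)[-1], 4))
```

The homogeneous part u(k) = a·u(k−1) has characteristic root a, mirroring how constant-coefficient differential equations are handled through their characteristic equation.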
2021-06-21 00:57:12
http://math.hawaii.edu/wordpress/bjoern/papers/
# Publications

#### Submitted 2021

1. An incompressibility theorem for automatic complexity.
2. Bjørn Kjos-Hanssen, Saroj Niraula, Soowhan Yoon, Sabrina Hardisty, Guanhong Li, and Jacqueline Millard. A 1-parameter family of metrics connecting Jaccard distance to normalized information distance.

#### Submitted 2020

1. Bjørn Kjos-Hanssen, Clyde James Felix, Sun Young Kim, Ethan Lamb, and Davin Takahashi, VC-dimensions of nondeterministic finite automata for words of equal length. Electronic proceedings of International Symposium on Artificial Intelligence and Mathematics (ISAIM) 2020. Journal version submitted 04 Apr 2020, under review since 08 Oct 2020, under revision since 08 Feb 2021.

#### Submitted 2019

1. Bjørn Kjos-Hanssen and Lei Liu: The number of languages with maximum state complexity, Algebra Universalis (resubmitted 20 Jan 2021). Lecture Notes in Computer Science 11436 (2019), 394--409. TAMC 2019 (Theory and Applications of Models of Computation).

### Journal articles

1. Bjørn Kjos-Hanssen, Automatic complexity of Fibonacci and Tribonacci words. Discrete Applied Mathematics 289 (31 January 2021), 446--454.
2. Bjørn Kjos-Hanssen and Lu Liu: Extracting randomness within a subset is hard, European Journal of Mathematics 6 (2020), no. 4, 1438--1451.
3. Only human. Notices of the American Mathematical Society 66 (2019), no. 4, 556--561.
4. Few paths, fewer words: model selection with automatic structure functions. Experimental Mathematics 28 (2019), no. 1, 121--127.
5. Downey, Hirschfeldt and Kjos-Hanssen: Preface. Special issue on "Computability, complexity and randomness" (CCR 2016). Held in Honolulu, January 4-8, 2016. Theory of Computing Systems 62 (2018), no. 7, 1553--1554.
6. Automatic complexity of shift register sequences. Discrete Mathematics 341 (2018), no. 9, 2409--2417. Lecture Notes in Computer Science 10388 (2017), 170--181. Workshop on Logic, Language, Information and Computation (WoLLIC) 2017.
7. Permutations of the integers induce only the trivial automorphism of the Turing degrees, Bulletin of Symbolic Logic 24 (2018), no. 2, 165--174. Downey Festschrift, Lecture Notes in Computer Science 10010 (2017), 599--607.
8. On the complexity of automatic complexity. Theory of Computing Systems 61 (2017), no. 4, 1427--1439.
9. A conflict between some semantic conditions of Carmo and Jones for contrary-to-duty obligations, Studia Logica 105 (2017), no. 1, 173--178.
10. Covering the recursive sets (with Frank Stephan and Sebastiaan Terwijn), Annals of Pure and Applied Logic 168 (2017), no. 4, 804--823. Computability in Europe, Lecture Notes in Computer Science 9136 (2015), 44--53.
11. The Strength of the Grätzer-Schmidt theorem (with Katie Brodhead, Mushfeq Khan, William A. Lampe, Paul Kim Long V. Nguyen, and Richard A. Shore). Archive for Mathematical Logic 55 (2016), no. 5, 687--704. Computability in Europe, Lecture Notes in Computer Science 5635 (2009), 59--67.
12. Kolmogorov structure functions for automatic complexity. Theoretical Computer Science 607 (2015), no. 3, 435--445. Lecture Notes in Computer Science (COCOA 2014): Kolmogorov structure functions for automatic complexity in computational statistics.
13. Pricing complexity options (with M. Alikhani, A. Pakravan, and B. Saadat), Algorithmic Finance 4 (2015), no. 3-4, 127--137.
14. Nondeterministic automatic complexity of overlap-free and almost square-free words (with K. Hyde). Electronic Journal of Combinatorics 22 (2015), no. 3, Paper 3.22, 18 pp. Nondeterministic automatic complexity of almost square-free and strongly cube-free words, COCOON 2014, Lecture Notes in Computer Science 8591 (2014), 61--70.
15. Algorithmic randomness for Doob's martingale convergence theorem in continuous time (with Paul Kim Long V. Nguyen and Jason M. Rute). Logical Methods in Computer Science 10 (2014), no. 4, Paper 12.
16. How much randomness is needed for statistics? (with Antoine Taveneaux and Neil Thapen). Annals of Pure and Applied Logic 165 (2014), no. 9, 1470--1483. Computability in Europe, Lecture Notes in Computer Science 7318 (2012), 395--404.
17. Algorithmic aspects of Lipschitz functions (with Cameron Freer, André Nies, and Frank Stephan). Computability 3 (2014), no. 1, 45--61.
18. Randomness extraction and asymptotic Hamming distance (with Cameron Freer). Logical Methods in Computer Science 9 (2013), no. 3, Paper 27.
19. Arithmetic complexity via effective names for random sequences (with Frank Stephan and Jason R. Teutsch). ACM Transactions on Computational Logic 13, no. 3 (July 2012), Art. 24, 18 pp.
20. Lowness notions, measure, and domination (with Joseph S. Miller and Reed Solomon). Journal of the London Mathematical Society 85 (2012), no. 3, 869--888.
21. Martin-Löf randomness and Galton-Watson processes (with David Diamondstone). Annals of Pure and Applied Logic 163 (2012), no. 5, 519--529. Members of random closed sets. Computability in Europe, Lecture Notes in Computer Science 5635 (2009), 144--153.
22. A strong law of computationally weak subsets, Journal of Mathematical Logic 11 (2011), no. 1, 1--10. See also Electronic Colloquium on Computational Complexity, Report No. 150 (2010).
23. Kolmogorov complexity and the recursion theorem (with Wolfgang Merkle and Frank Stephan). Transactions of the American Mathematical Society 363 (2011), no. 10, 5465--5480. STACS 2006, Lecture Notes in Computer Science 3884 (2006), pp. 149--161.
24. Kolmogorov complexity and strong approximation of Brownian motion (with Tamás Szabados). Proceedings of the American Mathematical Society 139 (2011), no. 9, 3307--3316.
25. The probability distribution as a computational resource for randomness testing. Journal of Logic and Analysis 2 (2010), no. 10, 1--13.
26. Higher Kurtz randomness (with André Nies, Frank Stephan, and Liang Yu). Annals of Pure and Applied Logic 161 (2010), no. 10, 1280--1290.
27. Lattice initial segments of the hyperdegrees (with Richard A. Shore). Journal of Symbolic Logic 75 (2010), no. 1, 103--130.
28. Superhighness (with André Nies). Notre Dame Journal of Formal Logic 50 (2009), no. 4, 445--452.
29. Finding paths through narrow and wide trees (with Stephen E. Binns). Journal of Symbolic Logic 74 (2009), no. 1, 349--360.
30. Infinite subsets of random sets of integers. Mathematical Research Letters 16 (2009), no. 1, 103--110.
31. Effective dimension of points visited by Brownian motion (with Anil Nerode). Theoretical Computer Science 410 (2009), no. 4-5, 347--354. The law of the iterated logarithm for algorithmically random paths of Brownian motion. Logical Foundations of Computer Science, Lecture Notes in Computer Science 4514 (2007), pp. 310--317.
32. Self-embeddings of computable trees (with Stephen Binns, Manuel Lerman, Jim Schmerl, and Reed Solomon). Notre Dame Journal of Formal Logic 49 (2008), no. 1, 1--37.
33. Low for random reals and positive-measure domination. Proceedings of the American Mathematical Society 135 (2007), no. 11, 3703--3709 (electronic).
34. On a conjecture of Dobrinen and Simpson concerning almost everywhere domination (with Stephen Binns, Manuel Lerman, and Reed Solomon). Journal of Symbolic Logic 71 (2006), no. 1, 119--136.
35. Lowness for the class of Schnorr random reals (with André Nies and Frank Stephan). SIAM Journal on Computing 35 (2005), no. 3, 647--657 (electronic).
36. Comparing DNR and WWKL (with Klaus Ambos-Spies, Steffen Lempp, and Theodore A. Slaman). Journal of Symbolic Logic 69 (2004), no. 4, 1089--1104.
37. Local initial segments of the Turing degrees. Bulletin of Symbolic Logic 9 (2003), no. 1, 26--36.

#### Book and book chapters

1. Kjos-Hanssen and Birns: Statistics for Calculus Students. http://hdl.handle.net/10790/4572 Open Educational Resources (OER) textbook, Outreach College, University of Hawaii at Manoa, February 21, 2019.
2. The strength of some combinatorial principles related to Ramsey's theorem (with Denis R. Hirschfeldt, Carl G. Jockusch, Steffen Lempp, and Theodore A. Slaman.) Computational Prospects of Infinity. Part II: Presented Talks, Lecture Notes Series, Institute of Mathematical Sciences, National University of Singapore 15, World Scientific Publishing, Hackensack, NJ, 2008, pp. 143--162.
3. Immunity, diagonalisation and the complexity of mass problems (with Achilles Beros, Mushfeq Khan, and Andre Nies), Aspects of Computation. Lecture Notes Series, Institute of Mathematical Sciences, National University of Singapore 39, World Scientific Publishing, Hackensack, NJ (accepted). Combined final version of two conference papers:
4. A tractable case of the Turing automorphism problem: bi-uniformly $E_0$-invariant Cantor homeomorphisms. Higher recursion theory and set theory. Lecture Notes Series, Institute of Mathematical Sciences, National University of Singapore 40, World Scientific Publishing, Hackensack, NJ. Accepted.

#### Conference papers without a journal version yet

1. The strength of the Besicovitch-Davies theorem (with Jan Reimann), Computability in Europe, Lecture Notes in Computer Science 6158 (2010), 229--238.
2. Numberings and randomness (with Katie Brodhead). Computability in Europe, Lecture Notes in Computer Science 5635 (2009), 49--58.
3. Beros, Kjos-Hanssen, and Yogi: Planar digraphs for automatic complexity, Lecture Notes in Computer Science 11436 (2019), 59--73. TAMC 2019 (Theory and Applications of Models of Computation).
4. Kjos-Hanssen: A rigid cone in the truth-table degrees with jump, Downey Festschrift, Lecture Notes in Computer Science 10010 (2017), 487--500.
5. KL-randomness and effective dimension under strong reducibility (with David J. Webb). Computability in Europe, Lecture Notes in Computer Science, 2021.
6. On the degrees of constructively immune sets (with Samuel D. Birns). Computability in Europe, Lecture Notes in Computer Science, 2021.

### arXiv only

1. Lattice initial segments of the Turing degrees, doctoral dissertation, Logic and the Methodology of Science, University of California, Berkeley, 2002, iii+89 pages.
2. Effective Banach spaces, Master's thesis, Mathematics, University of Oslo, 1997.
3. Models of the Chisholm set, term paper for Filosofi hovedfag spesialområde 1, Fall 1996, University of Oslo. First cited in Carmo and Jones, Deontic logic and contrary-to-duties, Handbook of Philosophical Logic, 2002.
4. Google distance between words (with Alberto J. Evangelista). Presented at Frontiers in Undergraduate Research, University of Connecticut, 2006.

#### In quarantine

1. Counting pure states: a quantum variant of automatic complexity. Journal version. Conference version: Superposition as memory: unlocking quantum automatic complexity. Lecture Notes in Computer Science 10240 (2017), 160--169. Unconventional Computation and Natural Computation (UCNC) 2017.
2021-05-10 00:41:22
https://www.physicsforums.com/threads/gauge-invariance-of-momentum-of-charged-particle.893314/
I Gauge invariance of momentum of charged particle

1. Nov 13, 2016 spaghetti3451

I know that, in the presence of a magnetic field, the momentum of a charged particle changes from $p_{i}$ to $\pi_{i}\equiv p_{i}+eA_{i}$, where $e$ is the charge of the particle. I was wondering: is this definition of momentum gauge-invariant? How about $\tilde{\pi}_{i}=p_{i}-eA_{i}$?

2. Nov 14, 2016 dextercioby

Well, what do you think about the value of that sign just before the "e"?

3. Nov 14, 2016

The second one is gauge invariant (kinematic momentum). You want the gauge-dependent pieces to cancel, one of which you get from the transformation of A and the other from the derivative piece (the canonical momentum p). This is just the spatial component of the covariant derivative.

4. Nov 17, 2016 vanhees71

Let's first understand the issue from the point of view of classical mechanics. For simplicity I consider the non-relativistic limit. You start from the Lagrangian for a particle subject to an electromagnetic field described by a scalar potential $\Phi$ and a vector potential $\vec{A}$. The Lagrangian reads $$L=\frac{m}{2} \dot{\vec{x}}^2-\frac{q}{c} (c \Phi-\dot{\vec{x}} \cdot \vec{A}).$$ The canonical momentum is $$\vec{\pi}=\frac{\partial L}{\partial \dot{\vec{x}}}=m\dot{\vec{x}} + \frac{q}{c} \vec{A},$$ and this is clearly gauge dependent.
The equation of motion is, of course, not gauge dependent, since $$\dot{\vec{\pi}}=m \ddot{\vec{x}} + \frac{q}{c} [\partial_t \vec{A}+(\dot{\vec{x}} \cdot \vec{\nabla}) \vec{A}]=\frac{\partial L}{\partial \vec{x}} = - q \vec{\nabla} \Phi +\frac{q}{c} \vec{\nabla}(\dot{\vec{x}} \cdot \vec{A}).$$ It's easy to see, using $$\vec{E}=-\vec{\nabla} \Phi-\frac{1}{c} \partial_t \vec{A}, \quad \vec{B}=\vec{\nabla} \times \vec{A},$$ that everything combines to the well-known gauge-invariant equation of motion $$m\ddot{\vec{x}}=q \left (\vec{E}+\frac{\vec{v}}{c} \times \vec{B} \right).$$ A Legendre transformation gives the Hamiltonian $$H=\dot{\vec{x}} \cdot \vec{\pi}-L=\frac{1}{2m} \left (\vec{\pi}-\frac{q}{c} \vec{A} \right)^2+q \Phi.$$ To quantize this in the position representation, we have to realize the canonical (!) momentum as the gradient operator, since it's the canonical momentum that generates spatial translations, i.e., we have $$\hat{\vec{\pi}}=-\hbar \mathrm{i} \vec{\nabla},$$ so that $$\hat{H}=\frac{1}{2m} \hat{D}^2+q \Phi.$$ Here $$\hat{D}=-\mathrm{i} \hbar \vec{\nabla}-\frac{q}{c} \vec{A}.$$ Now the time-dependent Schrödinger equation reads $$\mathrm{i} \hbar \partial_t \psi=\hat{H} \psi.$$ We now check what happens under gauge transformations, $$\vec{A}'=\vec{A}+\vec{\nabla} \chi, \quad \Phi'=\Phi-\frac{1}{c} \partial_t \chi.$$ We can also define a new wave function, which just changes by a phase factor. The right choice is $$\psi'=\exp \left (\frac{\mathrm{i} q}{\hbar c} \chi \right) \psi.$$ Then we find $$\hat{D}' \psi'=\exp \left (\frac{\mathrm{i} q}{\hbar c} \chi \right) \hat{D} \psi.$$ So if $\psi$ obeys the time-dependent Schrödinger equation with the electromagnetic potentials $\Phi$ and $\vec{A}$, then $\psi'$ obeys the time-dependent Schrödinger equation with the electromagnetic potentials $\Phi'$ and $\vec{A}'$.
Since $\psi$ and $\psi'$ differ just by a phase factor, both represent the same state, since only the ray in Hilbert space determines the quantum state, not the wave function itself. Indeed, the only physical content of the wave function is the probability distribution for position, and since $|\psi|^2=|\psi'|^2$, it doesn't depend on $\chi$ and thus is gauge invariant. So the physical outcome of the quantum-theoretical formalism is gauge independent, as it should be.

5. Nov 17, 2016 stevendaryl Staff Emeritus

I thought that two states $\psi$ and $\psi'$ represent the same state if one is a CONSTANT multiple of the other. In this case, the phase $e^{i \frac{q}{\hbar c} \chi}$ is a function of position and time, so $\psi$ and $\psi'$ aren't constant multiples of each other. If the definition of "ray" allows for non-constant multiples, then it seems like that would mean that the wave function $\psi(x) = e^{ikx}$ is the same state as $\psi'(x) = e^{i k' x}$, where $k \neq k'$. But those aren't the same state, are they? (Yeah, I know they aren't "states" at all in the Hilbert space of square-integrable functions, but they are states for a particle in a box with periodic boundary conditions.)

6. Nov 17, 2016 vanhees71

True. I was a bit imprecise in my explanation, but still the point is that multiplication even by a space-time dependent phase factor doesn't change the physics described by this wave function. More precisely, a gauge transformation is a unitary transformation of the operators and wave functions. Concerning your plane-wave example, you already gave the reason why this is not a valid argument: here you consider generalized eigenfunctions of the self-adjoint momentum operator, which do not represent states. If they did, both would just describe uniformly distributed position probabilities over the entire infinite position space, but this is not properly normalizable, and thus it's not a wave function that represents a quantum state.
For that it must indeed be square-integrable! The purpose of such generalized eigenstates in the continuous spectrum is just the transformation from one representation to another. In your case it's the Fourier transformation between the position and momentum representations, $$\psi(t,\vec{x})=\frac{1}{(2 \pi \hbar)^{3/2}} \int_{\mathbb{R}^3} \mathrm{d}^3 \vec{p} \exp \left (\frac{\mathrm{i} \vec{p} \cdot \vec{x}}{\hbar} \right) \tilde{\psi}(t,\vec{p}),$$ where $\tilde{\psi}(t,\vec{p})$ is the momentum-space wave function (its modulus squared is the probability distribution for momentum), i.e., in the representation-independent Dirac notation $$\psi(t,\vec{x})=\langle \vec{x}|\psi(t) \rangle, \quad \tilde{\psi}(t,\vec{p})=\langle \vec{p}|\psi(t) \rangle,$$ where I assume we are working in the Schrödinger picture of time evolution.
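The intertwining relation in post #4, $\hat{D}' \psi'=\exp(\mathrm{i} q \chi/\hbar c)\, \hat{D} \psi$, can be checked symbolically. Below is a one-dimensional sketch using SymPy; the helper name `D` and the setup are my own illustration, not code from the thread:

```python
import sympy as sp

x = sp.symbols('x', real=True)
q, c, hbar = sp.symbols('q c hbar', positive=True)
A = sp.Function('A')(x)      # vector potential component A_x
chi = sp.Function('chi')(x)  # gauge function
psi = sp.Function('psi')(x)  # wave function

def D(pot, f):
    """Covariant derivative D = -i*hbar*d/dx - (q/c)*pot, applied to f."""
    return -sp.I * hbar * sp.diff(f, x) - (q / c) * pot * f

phase = sp.exp(sp.I * q * chi / (hbar * c))
gauge_lhs = D(A + sp.diff(chi, x), phase * psi)  # D' acting on psi' = phase*psi
gauge_rhs = phase * D(A, psi)                    # phase times D acting on psi

# The gradient of the phase produces exactly the term that cancels
# the gauge piece (q/c)*chi' of the transformed potential.
print(sp.simplify(gauge_lhs - gauge_rhs))  # -> 0
```

The same cancellation is what makes the probability density $|\psi'|^2 = |\psi|^2$ and hence the physics gauge independent.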
2018-05-22 03:13:24
https://indico.cern.ch/event/751767/contributions/3770847/
# 10th International Conference on Hard and Electromagnetic Probes of High-Energy Nuclear Collisions

May 31, 2020 to June 5, 2020 Online US/Central timezone

## Electroweak probes in heavy-ion collisions with ATLAS

Jun 1, 2020, 11:00 AM 20m https://cern.zoom.us/j/91030448357

Oral Presentation Electroweak Probes

### Speaker

Iwona Grabowska-Bold (AGH University of Science and Technology (PL))

### Description

Electroweak bosons produced in lead-lead (Pb+Pb) collisions are an excellent tool to constrain initial-state effects which affect the rates of hard-scattering processes in nucleus-nucleus interactions. The production yields of massive electroweak bosons, observed via their leptonic decay channels, offer a high-precision test of the binary collision scaling expected in Pb+Pb and a way to quantify nuclear modifications of the parton distribution functions (PDFs). The large samples of Pb+Pb data at $\sqrt{s_{\mathrm{NN}}}$ = 5.02 TeV collected by the ATLAS experiment in 2015, and the corresponding high-statistics $pp$ data at the same collision energy used as a baseline, allow for a detailed experimental study of these phenomena and comparisons to predictions from a variety of theoretical calculations. This talk presents the latest ATLAS results on electroweak boson production, including updated results on Z production and high-precision W boson results in Pb+Pb collisions. Inclusive production of prompt photons in proton-lead ($p$+Pb) collisions at $\sqrt{s_{\mathrm{NN}}}$ = 8.16 TeV is also covered. Various predictions of nuclear modifications to PDFs are discussed.

Contribution type Contributed Talk ATLAS Electroweak Probes

### Primary authors

Lidija Zivkovic (Institute of physics Belgrade (RS))

### Presentation materials

HardProbes2020-EWBosonsAtlas-062020.pdf New light-by-light measurement Recording
2021-07-24 20:10:26
https://zbmath.org/?q=an%3A1168.34003
# zbMATH — the first resource for mathematics Extended simplest equation method for nonlinear differential equations. (English) Zbl 1168.34003 The authors consider the equation $P(y,y',y'',\dots)=0, \tag{1}$ where $$y=y(z)$$ is an unknown function, $$P$$ is a polynomial in the variable $$y$$ and its derivatives and look for exact solutions $$y=y(z)$$ of the form $y(z)=\sum_{k=0}^NA_k\left( \frac{\psi '}{\psi} \right)^k, \tag{2}$ $$A_k= \text{const}$$, $$A_N\neq 0$$, where the function $$\psi=\psi(z)$$ is the general solution of the linear ordinary differential equation $\psi ''' +\alpha\psi '' +\beta \psi ' +\gamma \psi=0, \tag{3}$ $$\alpha, \beta, \gamma =\text{const}$$. They propose the algorithm for searching the parameters $$N,A_k,$$ $$k=1,\dots,N$$, $$\alpha,\beta,\gamma$$. This approach for the exact solution of the equation (1) the authors call the extended simplest equation method. They apply this method to the Sharma-Tasso-Olver and the Burgers-Huxley equations. New exact solutions of these equations are obtained. ##### MSC: 34A05 Explicit solutions, first integrals of ordinary differential equations Full Text: ##### References: [1] Gardner, C.S.; Greene, J.M.; Kruskal, M.D.; Miura, R.M., Method for solving the Korteweg-de Vries equation, Phys. rev. lett., 19, 1095-1097, (1967) · Zbl 1061.35520 [2] Ablowitz, M.J.; Kaup, D.J.; Newell, A.C.; Segur, H., Method for solving the sine – gordon equation, Phys. rev. lett., 30, 1262-1264, (1973) [3] Ablowitz, M.J.; Clarkson, P.A., Solitons, nonlinear evolution equation and inverse scattering, (1991), Cambridge University Press New York · Zbl 0762.35001 [4] Hirota, R., Exact solution of the Korteweg-de Vries equation for multiple collisions of solitons, Phys. rev. 
lett., 27, 1192-1194, (1971) · Zbl 1168.35423 [5] Kudryashov, N.A., Analytical theory of nonlinear differential equations, (2004), Institute for Computer Investigations Moskow-Igevsk, (in Russian) [6] Weiss, J.; Tabor, M.; Carnevalle, G., The painleve property for partial differential equations, J. math. phys., 24, 522-526, (1983) · Zbl 0514.35083 [7] Kudryashov, N.A., Special polynomials associated with some hierarchies, Phys. lett. A, 372, 945-1956, (2008) · Zbl 1220.34111 [8] He, J.H.; Wu, X.H., Exp-function method for nonlinear wave equations, Chaos solitons fract., 30, 700-708, (2006) · Zbl 1141.35448 [9] He, J.H.; Abdou, M.A., New periodic solutions for nonlinear evolution equations using exp-function method, Chaos solitons fract., 34, 1421-1429, (2007) · Zbl 1152.35441 [10] Ebaid, A., Exact solitary wave solutions for some nonlinear evolution equations via exp-function method, Phys. lett. A, 365, 213-219, (2007) · Zbl 1203.35213 [11] El-Wakil, S.A.; Madkour, M.A.; Abdou, M.A., Application of exp-function method for nonlinear evolution equations with variable coefficients, Phys. lett. A, 369, 62-69, (2007) · Zbl 1209.81097 [12] Abdou, M.A.; Soliman, A.A.; El-Basyony, S.T., New application of exp-function method for improved Boussinesq equation, Phys. lett. A, 369, 469-475, (2007) · Zbl 1209.81091 [13] El-Wakil, S.A.; Abdou, M.A.; Hendi, A., New periodic wave solutions via exp-function method, Phys. lett. A, 372, 830-840, (2008) · Zbl 1217.37070 [14] Zhang, S., Application of exp-function method to Riccati equation and new exact solutions with three arbitrary functions of broer – kaup – kupershmidt equations, Phys. lett. A, 372, 1873-1880, (2008) · Zbl 1220.37071 [15] Kudryashov, N.A., On types of nonlinear nonintegrable equations with exact solutions, Phys. lett. A, 155, 269-275, (1991) [16] Kudryashov, N.A., Exact solutions of the generalized kuramoto – sivashinsky equation, Phys. lett. 
A, 147, 287-291, (1990) [17] Parkes, E.J.; Duffy, B.R., An automated tanh-function method for finding solitary wave solutions to non-linear evolution equations, Comput. phys. commun., 98, 288-300, (1996) · Zbl 0948.76595 [18] El-Wakil, S.A.; El-Labany, S.K.; Zahran, M.A.; Sabry, R., Modified extended tanh-function method and its applications to nonlinear equations, Appl. math. comput., 161, 403-412, (2005) · Zbl 1062.35082 [19] Huber, A., Solitary solutions of some nonlinear evolution equations, Appl. math. comput., 166, 464-474, (2005) · Zbl 1080.35111 [20] Yusufogˇlu, E.; Bekir, A., A travelling wave solution to the Ostrovsky equation, Appl. math. comput., 186, 256-260, (2007) · Zbl 1110.76010 [21] Liu, S.; Fu, Z.; Liu, S.; Zhao, Q., Jacobi elliptic function expansion method and periodic wave solutions of nonlinear wave equations, Phys. lett. A, 289, 69-74, (2001) · Zbl 0972.35062 [22] Fu, Z.; Liu, S.; Liu, S.; Zhao, Q., New Jacobi elliptic function expansion and new periodic solutions of nonlinear wave equations, Phys. lett. A, 290, 72-76, (2001) · Zbl 0977.35094 [23] Kudryashov, N.A., Simplest equation method to look for exact solutions of nonlinear differential equations, Chaos solitons fract., 24, 1217-1231, (2005) · Zbl 1069.35018 [24] Kudryashov, N.A., Exact solitary waves of the Fisher equation, Phys. lett. A, 342, 99-106, (2005) · Zbl 1222.35054 [25] Bekir, A.; Boz, A., Exact solutions for nonlinear evolution equations using exp-function method, Phys. lett. A, 372, 1619-1625, (2008) · Zbl 1217.35151 [26] Lan, H.; Wang, K., Exact solutions for some nonlinear equations, Phys. lett. A, 137, 369-372, (1989) [27] Ince, E.L., Ordinary differential equations, (1926), Longmans, Green & Co. 
London · Zbl 0063.02971 [28] Kamke, E., Differentialgleichungen, Lösungsmethoden und Lösungen I: gewöhnliche differentialgleichungen, (1943), Geest & Portig Leipzig · Zbl 0028.22702 [29] Kudryashov, N.A.; Demina, M.V., Polygons of differential equations for finding exact solutions, Chaos solitons fract., 33, 1480-1496, (2007) · Zbl 1133.35084 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
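As an aside on why the logarithmic derivative $\psi'/\psi$ in ansatz (2) is a natural building block: if $\psi$ solves the linear equation (3), then $Z=\psi'/\psi$ satisfies a polynomial (Riccati-type) ODE, which is what lets a polynomial ansatz in $Z$ match equation (1). The following SymPy sketch (my own illustration, not code from the paper under review) verifies this identity:

```python
import sympy as sp

z = sp.symbols('z')
alpha, beta, gamma = sp.symbols('alpha beta gamma')
psi = sp.Function('psi')(z)

Z = sp.diff(psi, z) / psi  # the building block psi'/psi from ansatz (2)

# Claim: if psi''' + alpha*psi'' + beta*psi' + gamma*psi = 0, then Z obeys
#   Z'' + 3*Z*Z' + Z**3 + alpha*(Z' + Z**2) + beta*Z + gamma = 0.
riccati_form = (sp.diff(Z, z, 2) + 3 * Z * sp.diff(Z, z) + Z**3
                + alpha * (sp.diff(Z, z) + Z**2) + beta * Z + gamma)
linear_form = (sp.diff(psi, z, 3) + alpha * sp.diff(psi, z, 2)
               + beta * sp.diff(psi, z) + gamma * psi)

# Multiplying the Z-equation by psi recovers the linear equation (3) exactly.
print(sp.simplify(sp.expand(riccati_form * psi) - linear_form))  # -> 0
```

This is the same mechanism as in the usual simplest equation method, here with a third-order linear "simplest equation".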
2021-04-20 11:01:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6366332769393921, "perplexity": 6275.1989625277765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039388763.75/warc/CC-MAIN-20210420091336-20210420121336-00538.warc.gz"}
http://tex.stackexchange.com/questions/16113/changing-the-entries-of-a-list-environment
# Changing the entries of a list environment

If I have a new list environment like so:

    \newenvironment{textblockX}
      {\begin{list}{Question}{}}
      {\end{list}}

how would I refer to an item of the list within the environment? As an example, how could I modify the code above so that every entry is bold, but the word "Question" stays in normal text? Is this where the ##1 is used?

You can't refer to an item in the environment definition because the \items are not part of that definition; they are only added inside the environment. You can use formatting macros like \bfseries, which take no arguments and apply to the whole environment. To switch back to normal, use \textnormal{...} or \normalfont. There are also \normalsize and \normalcolor if you need them. Environment arguments like #1 stand for macro arguments given directly after \begin{<env>} and have nothing to do with the \items.

    \newenvironment{textblockX}
      {%
        \bfseries
        \begin{list}%
          {\textnormal{Question}}%
          {}%
      }
      {%
        \end{list}%
      }
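For illustration, a minimal complete document using the redefined environment might look like the sketch below (the document class and item text are made up for the example):

```latex
\documentclass{article}

% The whole environment body is bold; the item label is reset
% to the normal font with \textnormal.
\newenvironment{textblockX}
  {\bfseries
   \begin{list}{\textnormal{Question}}{}}
  {\end{list}}

\begin{document}
\begin{textblockX}
  \item This entry is printed in bold, but its ``Question'' label is not.
  \item {\normalfont This entry switches back to the normal font locally.}
\end{textblockX}
\end{document}
```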
https://datascience.stackexchange.com/questions/50988/what-happens-to-the-left-over-unpicked-data-in-random-forest
# What happens to the left over unpicked data in Random Forest

I believe in Random Forest we pick random samples of the training data with replacement. My question is: there is still a possibility that we leave some data out. What happens to that data? Does it not affect the Random Forest training? Do we use that left-over data later at some point?

Looking at Scikit-Learn's RandomForestClassifier documentation, we can see that there is a bootstrap argument that can be set to False to ensure all data points are used to fit each of the trees. Otherwise, say you set some arguments to all be 1 (n_estimators, max_depth, min_samples_split), then not much data would be used at all! Looking through the source code, there doesn't seem to be a check that all data was used.

• A couple of points to add. First, each tree will miss about $1/e$ of the data. But then, with 1, 2, 3, 4 trees you expect to completely ignore 37%, 14%, 5%, 2% of the data respectively. With 10 trees it's down to 0.0045%, with 100 trees $3\cdot10^{-42}$%. So for most applications, you're not missing out on much. Second, any missed samples will show up in all the out-of-bag (OOB) scoring estimates, so they are not necessarily completely wasted. – Ben Reiniger Apr 26 '19 at 19:35
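The $1/e$ figure in the comment is easy to check with a quick simulation. This sketch is not part of the original answer; the sample size and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

def oob_mask(rng, n):
    """Boolean mask of samples left OUT of one bootstrap draw of size n."""
    in_bag = np.zeros(n, dtype=bool)
    in_bag[rng.integers(0, n, size=n)] = True  # sample indices with replacement
    return ~in_bag

# One tree: about (1 - 1/n)**n ~ 1/e ~ 36.8% of samples are out-of-bag.
single_tree_oob = oob_mask(rng, n_samples).mean()

# k independent trees: a sample is missed by all of them with probability
# about (1/e)**k -- roughly 2% for k = 4, matching the comment above.
k = 4
missed_by_all = np.ones(n_samples, dtype=bool)
for _ in range(k):
    missed_by_all &= oob_mask(rng, n_samples)

print(single_tree_oob, missed_by_all.mean())
```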
http://math.unibas.ch/archiv/veranstaltungsarchiv/veranstaltungsdetails/article/seminar-algebra-geometry-rafael-andrist/1343434458/?tx_ttnews%5BbackPid%5D=19247&cHash=781aed4ba1b1e8eb73a87a27e7ca1788
# Event archive

18.11.2011 10:30

## Seminar Algebra & Geometry: Rafael Andrist

Category: Algebra and Geometry

### Rafael Andrist, Universität Wuppertal

Holomorphic automorphisms of Danielewski surfaces

A Danielewski surface is given as the hypersurface $xy = f(z)$ in $\mathbb{C}^3$, where $f$ is a polynomial with only simple zeroes. Such a surface enjoys the Density Property, i.e. the Lie algebra generated by the complete holomorphic vector fields is dense in the Lie algebra of all holomorphic vector fields. In the case of a Danielewski surface, the so-called overshear group is dense in the group of holomorphic automorphisms. We describe the group structure of the overshear group with the help of Nevanlinna theory.
http://www.fightfinance.com/?q=609,610,611,612,613,614,615,541,581,582,583,290,330,16,26,131,49,265,64,19,87,134,149,172,187,203,204,222,259,29,42,57,107,160,239,298,128,129,130,374,234,509,616,510,11,15,23,33,38,48,53,56,63,133,138,153,159,163,168,178,179,183,193,194,620,213,227,229,230,233,255,257,266,287,328,332,460,35,25,267,143,96,108,572,629,573,141,254,32,268,269,476,490,478,527,508,477,136,151,21,404,542,543,278,604,456,353,363,407,525,295,526,574,575,576,577,578,554,618,221,466,120,452,531,473,482,467,461,444,446,445,447,443,514,515,516,44,59,126,37,182,533,534,500,501,489,496,251,252,502,532,46,465,579,580,333,60,190,43,2,288,481,137,521,356,58,499,530,517,518,519,479,3,4,451,7,528,264,28,201,216,497,289,352,36,40,41,148,158,441,488,50,535,217,348,358,341,537,463,505,211,180,299,281,280,462,195,548,31,161,39,51,270,498,215,249,607,608,372,373,377,379,94,301,406,67,206,296,223,68,89,113,238,273,368,370,506,367,369,375,78,91,115,337,559,236,83,111,285,293,557,556,563,565,561,306,706,703,704,705,80,112,326,110,71,79,93,627,672,628,673,674,410,116,114,302,418,66,72,76,74,75,85,86,88,90,92,98,104,106,231,232,235,244,248,408,419,657,661,119,621,243,623,339,338,464,100,105,242,668,340,416,417,
# Fight Finance

You deposit cash into your bank account. Have you lent or borrowed debt?

You deposit cash into your bank account. Does the deposit account represent a debt or an asset to you?

You owe money. Are you a debtor or a creditor?

You are owed money. Are you a debtor or a creditor?

You own a debt asset. Are you a lender or a borrower?

You buy a house funded using a home loan. Have you lent or borrowed debt?

You buy a house funded using a home loan. Have you lent or borrowed debt?

Which of the following statements is NOT correct? Bond investors:

A home loan company advertises an interest rate of 6% pa, payable monthly. Which of the following statements about the interest rate is NOT correct? All rates are given to four decimal places.

A credit card company advertises an interest rate of 18% pa, payable monthly. Which of the following statements about the interest rate is NOT correct? All rates are given to four decimal places.

A semi-annual coupon bond has a yield of 3% pa. Which of the following statements about the yield is NOT correct? All rates are given to four decimal places.

Which of the below statements about effective rates and annualised percentage rates (APR's) is NOT correct?

Which of the following statements about effective rates and annualised percentage rates (APR's) is NOT correct?

A credit card offers an interest rate of 18% pa, compounding monthly. Find the effective monthly rate, effective annual rate and the effective daily rate. Assume that there are 365 days in a year. All answers are given in the same order:

$$r_\text{eff monthly} , r_\text{eff yearly} , r_\text{eff daily}$$

A European bond paying annual coupons of 6% offers a yield of 10% pa. Convert the yield into an effective monthly rate, an effective annual rate and an effective daily rate. Assume that there are 365 days in a year.
All answers are given in the same order:

$$r_\text{eff, monthly} , r_\text{eff, yearly} , r_\text{eff, daily}$$

Calculate the effective annual rates of the following three APR's:

• A credit card offering an interest rate of 18% pa, compounding monthly.
• A bond offering a yield of 6% pa, compounding semi-annually.
• An annual dividend-paying stock offering a return of 10% pa compounding annually.

All answers are given in the same order: $r_\text{credit card, eff yrly}$, $r_\text{bond, eff yrly}$, $r_\text{stock, eff yrly}$

In Australia, nominal yields on semi-annual coupon paying Government Bonds with 2 years until maturity are currently 2.83% pa. The inflation rate is currently 2.2% pa, given as an APR compounding per quarter. The inflation rate is not expected to change over the next 2 years. What is the real yield on these bonds, given as an APR compounding every 6 months?

On his 20th birthday, a man makes a resolution. He will deposit $30 into a bank account at the end of every month starting from now, which is the start of the month. So the first payment will be in one month. He will write in his will that when he dies the money in the account should be given to charity. The bank account pays interest at 6% pa compounding monthly, which is not expected to change. If the man lives for another 60 years, how much money will be in the bank account if he dies just after making his last (720th) payment?

In Germany, nominal yields on semi-annual coupon paying Government Bonds with 2 years until maturity are currently 0.04% pa. The inflation rate is currently 1.4% pa, given as an APR compounding per quarter. The inflation rate is not expected to change over the next 2 years. What is the real yield on these bonds, given as an APR compounding every 6 months?

You want to buy an apartment priced at $300,000. You have saved a deposit of $30,000. The bank has agreed to lend you the $270,000 as a fully amortising loan with a term of 25 years.
The interest rate is 12% pa and is not expected to change. What will be your monthly payments? Remember that mortgage loan payments are paid in arrears (at the end of the month).

You want to buy an apartment worth $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as a fully amortising mortgage loan with a term of 25 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?

You want to buy an apartment worth $400,000. You have saved a deposit of $80,000. The bank has agreed to lend you the $320,000 as a fully amortising mortgage loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?

You want to buy an apartment priced at $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as a fully amortising loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?

You just signed up for a 30 year fully amortising mortgage loan with monthly payments of $2,000 per month. The interest rate is 9% pa which is not expected to change. How much did you borrow? After 5 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change.

You just signed up for a 30 year fully amortising mortgage with monthly payments of $1,000 per month. The interest rate is 6% pa which is not expected to change. How much did you borrow? After 20 years, how much will be owing on the mortgage? The interest rate is still 6% and is not expected to change.

You just signed up for a 30 year fully amortising mortgage loan with monthly payments of $1,500 per month. The interest rate is 9% pa which is not expected to change. How much did you borrow? After 10 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change.
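The rate conversions and fully amortising loan payments asked about above all follow the standard annuity formula. A small sketch, with the numbers taken from two of the questions (the function names are mine, not the site's):

```python
def effective_rate(apr, m):
    """Convert an APR compounding m times per year to an effective annual rate."""
    return (1 + apr / m) ** m - 1

def amortising_payment(principal, apr, years, m=12):
    """Per-period payment of a fully amortising loan, paid in arrears.

    Solves principal = payment * (1 - (1 + r)**-n) / r for the payment,
    where r is the period rate and n the number of payments.
    """
    r = apr / m
    n = years * m
    return principal * r / (1 - (1 + r) ** -n)

# 18% pa credit card compounding monthly -> about 19.56% effective annual.
card = effective_rate(0.18, 12)

# $270,000 fully amortising loan over 25 years at 12% pa compounding monthly.
repayment = amortising_payment(270_000, 0.12, 25)

print(round(card, 4), round(repayment, 2))
```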
You just signed up for a 30 year fully amortising mortgage loan with monthly payments of $1,500 per month. The interest rate is 9% pa which is not expected to change. To your surprise, you can actually afford to pay $2,000 per month and your mortgage allows early repayments without fees. If you maintain these higher monthly payments, how long will it take to pay off your mortgage?

You just agreed to a 30 year fully amortising mortgage loan with monthly payments of $2,500. The interest rate is 9% pa which is not expected to change. How much did you borrow? After 10 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change. The below choices are given in the same order.

You want to buy a house priced at $400,000. You have saved a deposit of $40,000. The bank has agreed to lend you $360,000 as a fully amortising loan with a term of 30 years. The interest rate is 8% pa payable monthly and is not expected to change. What will be your monthly payments?

You want to buy an apartment priced at $300,000. You have saved a deposit of $30,000. The bank has agreed to lend you the $270,000 as an interest only loan with a term of 25 years. The interest rate is 12% pa and is not expected to change. What will be your monthly payments? Remember that mortgage payments are paid in arrears (at the end of the month).

You just signed up for a 30 year interest-only mortgage with monthly payments of $3,000 per month. The interest rate is 6% pa which is not expected to change. How much did you borrow? After 15 years, just after the 180th payment at that time, how much will be owing on the mortgage? The interest rate is still 6% and is not expected to change. Remember that the mortgage is interest-only and that mortgage payments are paid in arrears (at the end of the month).

You just borrowed $400,000 in the form of a 25 year interest-only mortgage with monthly payments of $3,000 per month. The interest rate is 9% pa which is not expected to change.
You actually plan to pay more than the required interest payment. You plan to pay $3,300 in mortgage payments every month, which your mortgage lender allows. These extra payments will reduce the principal and the minimum interest payment required each month. At the maturity of the mortgage, what will be the principal? That is, after the last (300th) interest payment of $3,300 in 25 years, how much will be owing on the mortgage?

You want to buy an apartment worth $300,000. You have saved a deposit of $60,000. The bank has agreed to lend you $240,000 as an interest only mortgage loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?

You want to buy an apartment priced at $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as an interest only loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?

A bank grants a borrower an interest-only residential mortgage loan with a very large 50% deposit and a nominal interest rate of 6% that is not expected to change. Assume that inflation is expected to be a constant 2% pa over the life of the loan. Ignore credit risk. From the bank's point of view, what is the long term expected nominal capital return of the loan asset?

A prospective home buyer can afford to pay $2,000 per month in mortgage loan repayments. The central bank recently lowered its policy rate by 0.25%, and residential home lenders cut their mortgage loan rates from 4.74% to 4.49%. How much more can the prospective home buyer borrow now that interest rates are 4.49% rather than 4.74%? Give your answer as a proportional increase over the original amount he could borrow ($V_\text{before}$), so:

$$\text{Proportional increase} = \frac{V_\text{after}-V_\text{before}}{V_\text{before}}$$

Assume that:

• Interest rates are expected to be constant over the life of the loan.
• Loans are interest-only and have a life of 30 years.
• Mortgage loan payments are made every month in arrears and all interest rates are given as annualised percentage rates compounding per month.

An 'interest payment' is the same thing as a 'coupon payment'. True or False?

An 'interest rate' is the same thing as a 'coupon rate'. True or False?

An 'interest rate' is the same thing as a 'yield'. True or False?

Which of the following statements is NOT equivalent to the yield on debt? Assume that the debt being referred to is fairly priced, but do not assume that it's priced at par.

An 'interest only' loan can also be called a:

Calculate the price of a newly issued ten year bond with a face value of $100, a yield of 8% pa and a fixed coupon rate of 6% pa, paid annually. So there's only one coupon per year, paid in arrears every year.

"Buy low, sell high" is a phrase commonly heard in financial markets. It states that traders should try to buy assets at low prices and sell at high prices. Traders in the fixed-coupon bond markets often quote promised bond yields rather than prices. Fixed-coupon bond traders should try to:

Calculate the price of a newly issued ten year bond with a face value of $100, a yield of 8% pa and a fixed coupon rate of 6% pa, paid semi-annually. So there are two coupons per year, paid in arrears every six months.

For a price of $100, Vera will sell you a 2 year bond paying semi-annual coupons of 10% pa. The face value of the bond is $100. Other bonds with similar risk, maturity and coupon characteristics trade at a yield of 8% pa. Would you like to buy her bond or politely decline?

For a price of $95, Nicole will sell you a 10 year bond paying semi-annual coupons of 8% pa. The face value of the bond is $100. Other bonds with the same risk, maturity and coupon characteristics trade at a yield of 8% pa. Would you like to buy the bond or politely decline?

Bonds X and Y are issued by the same US company.
Both bonds yield 10% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond X and Y's coupon rates are 8 and 12% pa respectively. Which of the following statements is true?

Bonds A and B are issued by the same company. They have the same face value, maturity, seniority and coupon payment frequency. The only difference is that bond A has a 5% coupon rate, while bond B has a 10% coupon rate. The yield curve is flat, which means that yields are expected to stay the same. Which bond would have the higher current price?

A two year Government bond has a face value of $100, a yield of 0.5% and a fixed coupon rate of 0.5%, paid semi-annually. What is its price?

The theory of fixed interest bond pricing is an application of the theory of Net Present Value (NPV). Also, a 'fairly priced' asset is not over- or under-priced. Buying or selling a fairly priced asset has an NPV of zero. Considering this, which of the following statements is NOT correct?

A two year Government bond has a face value of $100, a yield of 2.5% pa and a fixed coupon rate of 0.5% pa, paid semi-annually. What is its price?

Which of the following statements about risk free government bonds is NOT correct? Hint: Total return can be broken into income and capital returns as follows:

$$\begin{aligned} r_\text{total} &= \frac{c_1}{p_0} + \frac{p_1-p_0}{p_0} \\ &= r_\text{income} + r_\text{capital} \end{aligned}$$

The capital return is the growth rate of the price. The income return is the periodic cash flow. For a bond this is the coupon payment.

The theory of fixed interest bond pricing is an application of the theory of Net Present Value (NPV). Also, a 'fairly priced' asset is not over- or under-priced. Buying or selling a fairly priced asset has an NPV of zero. Considering this, which of the following statements is NOT correct?

A bond maturing in 10 years has a coupon rate of 4% pa, paid semi-annually. The bond's yield is currently 6% pa.
The face value of the bond is $100. What is its price?

Bonds A and B are issued by the same Australian company. Both bonds yield 7% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond A pays coupons of 10% pa and bond B pays coupons of 5% pa. Which of the following statements is true about the bonds' prices?

Bonds X and Y are issued by different companies, but they both pay a semi-annual coupon of 10% pa and they have the same face value ($100) and maturity (3 years). The only difference is that bond X and Y's yields are 8 and 12% pa respectively. Which of the following statements is true?

A three year bond has a fixed coupon rate of 12% pa, paid semi-annually. The bond's yield is currently 6% pa. The face value is $100. What is its price?

Bonds X and Y are issued by different companies, but they both pay a semi-annual coupon of 10% pa and they have the same face value ($100), maturity (3 years) and yield (10%) as each other. Which of the following statements is true?

A four year bond has a face value of $100, a yield of 6% and a fixed coupon rate of 12%, paid semi-annually. What is its price?

Which one of the following bonds is trading at a discount?

A firm wishes to raise $20 million now. They will issue 8% pa semi-annual coupon bonds that will mature in 5 years and have a face value of $100 each. Bond yields are 6% pa, given as an APR compounding every 6 months, and the yield curve is flat. How many bonds should the firm issue?

A five year bond has a face value of $100, a yield of 12% and a fixed coupon rate of 6%, paid semi-annually. What is the bond's price?

Which one of the following bonds is trading at par?

A firm wishes to raise $8 million now. They will issue 7% pa semi-annual coupon bonds that will mature in 10 years and have a face value of $100 each. Bond yields are 10% pa, given as an APR compounding every 6 months, and the yield curve is flat. How many bonds should the firm issue?
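Fixed-coupon bond prices like those asked for above are the present value of the coupon annuity plus the discounted face value. A sketch, with figures taken from two of the questions (the function name is mine):

```python
def bond_price(face, coupon_rate, yield_rate, years, freq=2):
    """Price = PV of the coupon annuity + PV of the face value.

    coupon_rate and yield_rate are annual APRs compounding freq times a year.
    """
    c = face * coupon_rate / freq   # coupon paid each period
    y = yield_rate / freq           # yield per period
    n = years * freq                # number of periods
    return c * (1 - (1 + y) ** -n) / y + face * (1 + y) ** -n

# 5 year bond, face $100, 6% pa coupons, 12% pa yield -> trades at a discount.
discount = bond_price(100, 0.06, 0.12, 5)

# 10 year bond, face $100, 8% pa coupons, 6% pa yield -> trades at a premium.
premium = bond_price(100, 0.08, 0.06, 10)

print(round(discount, 2), round(premium, 2))
```

As expected, the bond whose coupon rate is below its yield prices below face value, and vice versa.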
Let the 'income return' of a bond be the coupon at the end of the period divided by the market price now at the start of the period $(C_1/P_0)$. The expected income return of a premium fixed coupon bond is:

The coupon rate of a fixed annual-coupon bond is constant (always the same). What can you say about the income return ($r_\text{income}$) of a fixed annual coupon bond? Remember that:

$$r_\text{total} = r_\text{income} + r_\text{capital}$$

$$r_\text{total, 0 to 1} = \frac{c_1}{p_0} + \frac{p_1-p_0}{p_0}$$

Assume that there is no change in the bond's total annual yield to maturity from when it is issued to when it matures. Select the most correct statement. From its date of issue until maturity, the income return of a fixed annual coupon:

Which one of the following bonds is trading at a premium?

An investor bought two fixed-coupon bonds issued by the same company, a zero-coupon bond and a 7% pa semi-annual coupon bond. Both bonds have a face value of $1,000, mature in 10 years, and had a yield at the time of purchase of 8% pa. A few years later, yields fell to 6% pa. Which of the following statements is correct? Note that a capital gain is an increase in price.

A firm wishes to raise $10 million now. They will issue 6% pa semi-annual coupon bonds that will mature in 8 years and have a face value of $1,000 each. Bond yields are 10% pa, given as an APR compounding every 6 months, and the yield curve is flat. How many bonds should the firm issue?

A four year bond has a face value of $100, a yield of 9% and a fixed coupon rate of 6%, paid semi-annually. What is its price?

In these tough economic times, central banks around the world have cut interest rates so low that they are practically zero. In some countries, government bond yields are also very close to zero. A three year government bond with a face value of $100 and a coupon rate of 2% pa paid semi-annually was just issued at a yield of 0%. What is the price of the bond?
A 10 year bond has a face value of $100, a yield of 6% pa and a fixed coupon rate of 8% pa, paid semi-annually. What is its price?

Bonds X and Y are issued by the same company. Both bonds yield 10% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond X pays coupons of 6% pa and bond Y pays coupons of 8% pa. Which of the following statements is true?

A 30 year Japanese government bond was just issued at par with a yield of 1.7% pa. The fixed coupon payments are semi-annual. The bond has a face value of $100. Six months later, just after the first coupon is paid, the yield of the bond increases to 2% pa. What is the bond's new price?

A 10 year Australian government bond was just issued at par with a yield of 3.9% pa. The fixed coupon payments are semi-annual. The bond has a face value of $1,000. Six months later, just after the first coupon is paid, the yield of the bond decreases to 3.65% pa. What is the bond's new price?

Bonds X and Y are issued by the same US company. Both bonds yield 6% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond X pays coupons of 8% pa and bond Y pays coupons of 12% pa. Which of the following statements is true?

Below are some statements about loans and bonds. The first descriptive sentence is correct. But one of the second sentences about the loans' or bonds' prices is not correct. Which statement is NOT correct? Assume that interest rates are positive. Note that coupons or interest payments are the periodic payments made throughout a bond or loan's life. The face or par value of a bond or loan is the amount paid at the end when the debt matures.

A European company just issued two bonds, a

• 1 year zero coupon bond at a yield of 8% pa, and a
• 2 year zero coupon bond at a yield of 10% pa.

What is the company's forward rate over the second year (from t=1 to t=2)?
Give your answer as an effective annual rate, which is how the above bond yields are quoted.

A European company just issued two bonds, a

• 2 year zero coupon bond at a yield of 8% pa, and a
• 3 year zero coupon bond at a yield of 10% pa.

What is the company's forward rate over the third year (from t=2 to t=3)? Give your answer as an effective annual rate, which is how the above bond yields are quoted.

A European company just issued two bonds, a

• 3 year zero coupon bond at a yield of 6% pa, and a
• 4 year zero coupon bond at a yield of 6.5% pa.

What is the company's forward rate over the fourth year (from t=3 to t=4)? Give your answer as an effective annual rate, which is how the above bond yields are quoted.

An Australian company just issued two bonds:

• A 6-month zero coupon bond at a yield of 6% pa, and
• A 12 month zero coupon bond at a yield of 7% pa.

What is the company's forward rate from 6 to 12 months? Give your answer as an APR compounding every 6 months, which is how the above bond yields are quoted.

An Australian company just issued two bonds paying semi-annual coupons:

• 1 year zero coupon bond at a yield of 8% pa, and a
• 2 year zero coupon bond at a yield of 10% pa.

What is the forward rate on the company's debt from years 1 to 2? Give your answer as an APR compounding every 6 months, which is how the above bond yields are quoted.

An Australian company just issued two bonds:

• A 1 year zero coupon bond at a yield of 10% pa, and
• A 2 year zero coupon bond at a yield of 8% pa.

What is the forward rate on the company's debt from years 1 to 2? Give your answer as an APR compounding every 6 months, which is how the above bond yields are quoted.

In the below term structure of interest rates equation, all rates are effective annual yields and the numbers in subscript represent the years that the yields are measured over:

$$(1+r_{0-3})^3 = (1+r_{0-1})(1+r_{1-2})(1+r_{2-3})$$

Which of the following statements is NOT correct?
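The term-structure identity above can be rearranged to extract a forward rate from two zero-coupon yields. A sketch using the numbers from the first forward-rate question (effective annual rates; the function name is mine):

```python
def forward_rate(r_short, t_short, r_long, t_long):
    """Effective annual forward rate between t_short and t_long (in years).

    From the no-arbitrage identity:
    (1 + r_long)**t_long = (1 + r_short)**t_short * (1 + f)**(t_long - t_short)
    """
    growth = (1 + r_long) ** t_long / (1 + r_short) ** t_short
    return growth ** (1 / (t_long - t_short)) - 1

# 1 year zero at 8% pa and 2 year zero at 10% pa, both effective annual:
f_1_2 = forward_rate(0.08, 1, 0.10, 2)
print(round(f_1_2, 6))  # about 0.12037, i.e. ~12.04% pa over the second year
```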
Which of the following statements about yield curves is NOT correct?

In the below term structure of interest rates equation, all rates are effective annual yields and the numbers in subscript represent the years that the yields are measured over:

$$(1+r_{0-3})^3 = (1+r_{0-1})(1+r_{1-2})(1+r_{2-3})$$

Which of the following statements is NOT correct?

You're trying to save enough money to buy your first car which costs $2,500. You can save $100 at the end of each month starting from now. You currently have no money at all. You just opened a bank account with an interest rate of 6% pa payable monthly. How many months will it take to save enough money to buy the car? Assume that the price of the car will stay the same over time.

Your main expense is fuel for your car which costs $100 per month. You just refueled, so you won't need any more fuel for another month (first payment at t=1 month). You have $2,500 in a bank account which pays interest at a rate of 6% pa, payable monthly. Interest rates are not expected to change. Assuming that you have no income, in how many months time will you not have enough money to fully refuel your car?

You really want to go on a back packing trip to Europe when you finish university. Currently you have $1,500 in the bank. Bank interest rates are 8% pa, given as an APR compounding per month. If the holiday will cost $2,000, how long will it take for your bank account to reach that amount?

You're trying to save enough money for a deposit to buy a house. You want to buy a house worth $400,000 and the bank requires a 20% deposit ($80,000) before it will give you a loan for the other $320,000 that you need. You currently have no savings, but you just started working and can save $2,000 per month, with the first payment in one month from now. Bank interest rates on savings accounts are 4.8% pa with interest paid monthly and interest rates are not expected to change. How long will it take to save the $80,000 deposit?
Round your answer up to the nearest month.

A student won $1m in a lottery. Currently the money is in a bank account which pays interest at 6% pa, given as an APR compounding per month. She plans to spend $20,000 at the beginning of every month from now on (so the first withdrawal will be at t=0). After each withdrawal, she will check how much money is left in the account. When there is less than $500,000 left, she will donate that remaining amount to charity. In how many months will she make her last withdrawal and donate the remainder to charity?

The saying "buy low, sell high" suggests that investors should make a:

Which of the following is NOT a synonym of 'required return'?

Total cash flows can be broken into income and capital cash flows. What is the name given to the income cash flow from owning shares?

Total cash flows can be broken into income and capital cash flows. What is the name given to the cash flow generated from selling shares at a higher price than they were bought?

Which of the following equations is NOT equal to the total return of an asset? Let $p_0$ be the current price, $p_1$ the expected price in one year and $c_1$ the expected income in one year.

An asset's total expected return over the next year is given by:
$$r_\text{total} = \dfrac{c_1+p_1-p_0}{p_0}$$
Where $p_0$ is the current price, $c_1$ is the expected income in one year and $p_1$ is the expected price in one year. The total return can be split into the income return and the capital return. Which of the following is the expected capital return?

A stock was bought for $8 and paid a dividend of $0.50 one year later (at t=1 year). Just after the dividend was paid, the stock price was $7 (at t=1 year). What were the total, capital and dividend returns given as effective annual rates? The choices are given in the same order: $r_\text{total}$, $r_\text{capital}$, $r_\text{dividend}$.

A share was bought for $30 (at t=0) and paid its annual dividend of $6 one year later (at t=1).
Just after the dividend was paid, the share price fell to $27 (at t=1). What were the total, capital and income returns given as effective annual rates? The choices are given in the same order: $r_\text{total}$, $r_\text{capital}$, $r_\text{dividend}$.

A fixed coupon bond was bought for $90 and paid its annual coupon of $3 one year later (at t=1 year). Just after the coupon was paid, the bond price was $92 (at t=1 year). What was the total return, capital return and income return? Calculate your answers as effective annual rates. The choices are given in the same order: $r_\text{total}$, $r_\text{capital}$, $r_\text{income}$.

One and a half years ago Frank bought a house for $600,000. Now it's worth only $500,000, based on recent similar sales in the area. The expected total return on Frank's residential property is 7% pa. He rents his house out for $1,600 per month, paid in advance. Every 12 months he plans to increase the rental payments. The present value of 12 months of rental payments is $18,617.27. The future value of 12 months of rental payments one year in the future is $19,920.48. What is the expected annual rental yield of the property? Ignore the costs of renting such as maintenance, real estate agent fees and so on.

For an asset price to double every 10 years, what must be the expected future capital return, given as an effective annual rate?

For an asset price to triple every 5 years, what must be the expected future capital return, given as an effective annual rate?

Imagine that the interest rate on your savings account was 1% per year and inflation was 2% per year. After one year, would you be able to buy more than, exactly the same as, or less than today with the money in this account?

Apples and oranges currently cost $1 each. Inflation is 5% pa, and apples and oranges are equally affected by this inflation rate. Note that when payments are not specified as real, as in this question, they're conventionally assumed to be nominal.
Which of the following statements is NOT correct?

In the 'Austin Powers' series of movies, the character Dr. Evil threatens to destroy the world unless the United Nations pays him a ransom (video 1, video 2). Dr. Evil makes the threat on two separate occasions:
• In 1969 he demands a ransom of $1 million (=10^6), and again;
• In 1997 he demands a ransom of $100 billion (=10^11).
If Dr. Evil's demands are equivalent in real terms, in other words $1 million will buy the same basket of goods in 1969 as $100 billion would in 1997, what was the implied inflation rate over the 28 years from 1969 to 1997? The answer choices below are given as effective annual rates:

A residential investment property has an expected nominal total return of 6% pa and nominal capital return of 3% pa. Inflation is expected to be 2% pa. All rates are given as effective annual rates. What are the property's expected real total, capital and income returns? The answer choices below are given in the same order.

A residential investment property has an expected nominal total return of 8% pa and nominal capital return of 3% pa. Inflation is expected to be 2% pa. All rates are given as effective annual rates. What are the property's expected real total, capital and income returns? The answer choices below are given in the same order.

A stock has a real expected total return of 7% pa and a real expected capital return of 2% pa. Inflation is expected to be 2% pa. All rates are given as effective annual rates. What is the nominal expected total return, capital return and dividend yield? The answers below are given in the same order.

Which of the following statements about cash in the form of notes and coins is NOT correct? Assume that inflation is positive. Notes and coins:

When valuing assets using discounted cash flow (net present value) methods, it is important to consider inflation. To properly deal with inflation:

(I) Discount nominal cash flows by nominal discount rates.
(II) Discount nominal cash flows by real discount rates.
(III) Discount real cash flows by nominal discount rates.
(IV) Discount real cash flows by real discount rates.

Which of the above statements is or are correct?

How can a nominal cash flow be precisely converted into a real cash flow?

What is the present value of a nominal payment of $100 in 5 years? The real discount rate is 10% pa and the inflation rate is 3% pa.

You expect a nominal payment of $100 in 5 years. The real discount rate is 10% pa and the inflation rate is 3% pa. Which of the following statements is NOT correct?

What is the present value of a nominal payment of $1,000 in 4 years? The nominal discount rate is 8% pa and the inflation rate is 2% pa.

What is the present value of a real payment of $500 in 2 years? The nominal discount rate is 7% pa and the inflation rate is 4% pa.

Which of the following statements about inflation is NOT correct?

On his 20th birthday, a man makes a resolution. He will put $30 cash under his bed at the end of every month starting from today. His birthday today is the first day of the month, so the first addition to his cash stash will be in one month. He will write in his will that when he dies the cash under the bed should be given to charity. If the man lives for another 60 years, how much money will be under his bed if he dies just after making his last (720th) addition? Also, what will be the real value of that cash in today's prices if inflation is expected to be 2.5% pa? Assume that the inflation rate is an effective annual rate and is not expected to change. The answers are given in the same order: the amount of money under his bed in 60 years, and the real value of that money in today's prices.

Who owns a company's shares? The:

You're considering making an investment in a particular company. They have preference shares, ordinary shares, senior debt and junior debt. Which is the safest investment? Which will give the highest returns?
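Statements (I) and (IV) above describe the two internally consistent ways to discount: nominal cash flows at nominal rates, or real cash flows at real rates. A small Python sketch checking that both routes give the same present value for the $100-in-5-years question above (real rate 10% pa, inflation 3% pa; the helper name is my own):

```python
def pv_nominal(cash_nominal, t, r_real, inflation):
    """PV computed two equivalent ways: the nominal cash flow discounted at
    the nominal rate, and the deflated (real) cash flow at the real rate."""
    r_nominal = (1 + r_real) * (1 + inflation) - 1   # Fisher relation
    pv_via_nominal = cash_nominal / (1 + r_nominal) ** t
    cash_real = cash_nominal / (1 + inflation) ** t  # today's purchasing power
    pv_via_real = cash_real / (1 + r_real) ** t
    return pv_via_nominal, pv_via_real

a, b = pv_nominal(100, 5, 0.10, 0.03)
print(round(a, 2), round(b, 2))  # both about 53.56
```

Mixing the routes, as in statements (II) and (III), double-counts or omits inflation and gives the wrong answer.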
Which business structure or structures have the advantage of limited liability for equity investors?

A newly floated farming company is financed with senior bonds, junior bonds, cumulative non-voting preferred stock and common stock. The new company has no retained profits and due to floods it was unable to record any revenues this year, leading to a loss. The firm is not bankrupt yet since it still has substantial contributed equity (same as paid-up capital). On which securities must it pay interest or dividend payments in this terrible financial year?

What is the lowest and highest expected share price and expected return from owning shares in a company over a finite period of time? Let the current share price be $p_0$, the expected future share price be $p_1$, the expected future dividend be $d_1$ and the expected return be $r$. Define the expected return as:
$$r=\dfrac{p_1-p_0+d_1}{p_0}$$
The answer choices are stated using inequalities. As an example, the first answer choice "(a) $0≤p<∞$ and $0≤r<1$" states that the share price must be larger than or equal to zero and less than positive infinity, and that the return must be larger than or equal to zero and less than one.

Who is most in danger of being personally bankrupt? Assume that all of their businesses' assets are highly liquid and can therefore be sold immediately.

The below screenshot of Commonwealth Bank of Australia's (CBA) details were taken from the Google Finance website on 7 Nov 2014. Some information has been deliberately blanked out. What was CBA's market capitalisation of equity?

The below screenshot of Microsoft's (MSFT) details were taken from the Google Finance website on 28 Nov 2014. Some information has been deliberately blanked out. What was MSFT's market capitalisation of equity?

Which of the following statements about book and market equity is NOT correct?

One year ago a pharmaceutical firm floated by selling its 1 million shares for $100 each.
Its book and market values of equity were both $100m. Its debt totalled $50m. The required return on the firm's assets was 15%, equity 20% and debt 5% pa.

In the year since then, the firm:
• Earned net income of $29m.
• Paid dividends totaling $10m.
• Discovered a valuable new drug that will lead to a massive 1,000 times increase in the firm's net income in 10 years after the research is commercialised. News of the discovery was publicly announced. The firm's systematic risk remains unchanged.

Which of the following statements is NOT correct? All statements are about current figures, not figures one year ago.

Hint: Book return on assets (ROA) and book return on equity (ROE) are ratios that accountants like to use to measure a business's past performance.
$$\text{ROA}= \dfrac{\text{Net income}}{\text{Book value of assets}}$$
$$\text{ROE}= \dfrac{\text{Net income}}{\text{Book value of equity}}$$
The required return on assets $r_V$ is a return that financiers like to use to estimate a business's future required performance which compensates them for the firm's assets' risks. If the business were to achieve realised historical returns equal to its required returns, then investment into the business's assets would have been a zero-NPV decision, which is neither good nor bad but fair.
$$r_\text{V, 0 to 1}= \dfrac{\text{Cash flow from assets}_\text{1}}{\text{Market value of assets}_\text{0}} = \dfrac{CFFA_\text{1}}{V_\text{0}}$$
Similarly for equity and debt.

The investment decision primarily affects which part of a business?

The working capital decision primarily affects which part of a business?

The financing decision primarily affects which part of a business?

Payout policy is most closely related to which part of a business?

Business people make lots of important decisions. Which of the following is the most important long term decision?

The expression 'cash is king' emphasizes the importance of having enough cash to pay your short term debts to avoid bankruptcy.
Which business decision is this expression most closely related to?

The expression 'you have to spend money to make money' relates to which business decision?

Which of the following decisions relates to the current assets and current liabilities of the firm?

The required return of a project is 10%, given as an effective annual rate. Assume that the cash flows shown in the table are paid all at once at the given point in time. What is the Net Present Value (NPV) of the project?

Project Cash Flows
Time (yrs)   Cash flow ($)
0            -100
1            0
2            121

The required return of a project is 10%, given as an effective annual rate. Assume that the cash flows shown in the table are paid all at once at the given point in time. What is the Net Present Value (NPV) of the project?

Project Cash Flows
Time (yrs)   Cash flow ($)
0            -100
1            11
2            121

What is the Internal Rate of Return (IRR) of the project detailed in the table below? Assume that the cash flows shown in the table are paid all at once at the given point in time. All answers are given as effective annual rates.

Project Cash Flows
Time (yrs)   Cash flow ($)
0            -100
1            0
2            121

If a project's net present value (NPV) is zero, then its internal rate of return (IRR) will be:

A project's NPV is positive. Select the most correct statement:

You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate. You wish to consume twice as much now (t=0) as in one year (t=1) and have nothing left in the bank at the end. How much can you consume at time zero and one? The answer choices are given in the same order.

You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate. You wish to consume half as much now (t=0) as in one year (t=1) and have nothing left in the bank at the end. How much can you consume at time zero and one? The answer choices are given in the same order.

The below graph shows a project's net present value (NPV) against its annual discount rate.
For what discount rate or range of discount rates would you accept and commence the project? All answer choices are given as approximations from reading off the graph.

The below graph shows a project's net present value (NPV) against its annual discount rate. Which of the following statements is NOT correct?

A firm is considering a business project which costs $11m now and is expected to pay a constant $1m at the end of every year forever. Assume that the initial $11m cost is funded using the firm's existing cash so no new equity or debt will be raised. The cost of capital is 10% pa. Which of the following statements about net present value (NPV), internal rate of return (IRR) and payback period is NOT correct?

A firm is considering a business project which costs $10m now and is expected to pay a single cash flow of $12.1m in two years. Assume that the initial $10m cost is funded using the firm's existing cash so no new equity or debt will be raised. The cost of capital is 10% pa. Which of the following statements about net present value (NPV), internal rate of return (IRR) and payback period is NOT correct?

You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate. You wish to consume an equal amount now (t=0) and in one year (t=1) and have nothing left in the bank at the end (t=1). How much can you consume at each time?

You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate. You wish to consume an equal amount now (t=0), in one year (t=1) and in two years (t=2), and still have $50,000 in the bank after that (t=2). How much can you consume at each time?

An investor owns an empty block of land that has local government approval to be developed into a petrol station, car wash or car park. The council will only allow a single development so the projects are mutually exclusive. All of the development projects have the same risk and the required return of each is 10% pa.
Each project has an immediate cost and once construction is finished in one year the land and development will be sold. The table below shows the estimated costs payable now, expected sale prices in one year and the internal rates of return (IRR's).

Mutually Exclusive Projects
Project          Cost now ($)   Sale price in one year ($)   IRR (% pa)
Petrol station   9,000,000      11,000,000                   22.22
Car wash         800,000        1,100,000                    37.50
Car park         70,000         110,000                      57.14

Which project should the investor accept?

An investor owns a whole level of an old office building which is currently worth $1 million. There are three mutually exclusive projects that can be started by the investor. The office building level can be:
• Rented out to a tenant for one year at $0.1m paid immediately, and then sold for $0.99m in one year.
• Refurbished into more modern commercial office rooms at a cost of $1m now, and then sold for $2.4m when the refurbishment is finished in one year.
• Converted into residential apartments at a cost of $2m now, and then sold for $3.4m when the conversion is finished in one year.
All of the development projects have the same risk so the required return of each is 10% pa. The table below shows the estimated cash flows and internal rates of return (IRR's).

Mutually Exclusive Projects
Project                                  Cash flow now ($)   Cash flow in one year ($)   IRR (% pa)
Rent then sell as is                     -900,000            990,000                     10
Refurbishment into modern offices        -2,000,000          2,400,000                   20
Conversion into residential apartments   -3,000,000          3,400,000                   13.33

Which project should the investor accept?

The phone company Telstra have 2 mobile service plans on offer which both have the same amount of phone call, text message and internet data credit. Both plans have a contract length of 24 months and the monthly cost is payable in advance. The only difference between the two plans is that one is a:
• 'Bring Your Own' (BYO) mobile service plan, costing $50 per month. There is no phone included in this plan.
The other plan is a:
• 'Bundled' mobile service plan that comes with the latest smart phone, costing $71 per month. This plan includes the latest smart phone.
Neither plan has any additional payments at the start or end. The only difference between the plans is the phone, so what is the implied cost of the phone as a present value? Assume that the discount rate is 2% per month given as an effective monthly rate, the same high interest rate as on credit cards.

The boss of WorkingForTheManCorp has a wicked (and unethical) idea. He plans to pay his poor workers one week late so that he can get more interest on his cash in the bank. Every week he is supposed to pay his 1,000 employees $1,000 each, so $1 million is paid to employees every week. The boss was just about to pay his employees today, until he thought of this idea, so he will actually pay them one week (7 days) later for the work they did last week and every week in the future, forever. Bank interest rates are 10% pa, given as a real effective annual rate, so $r_\text{eff annual, real} = 0.1$ and the real effective weekly rate is therefore $r_\text{eff weekly, real} = (1+0.1)^{1/52}-1 = 0.001834569$. All rates and cash flows are real, the inflation rate is 3% pa and there are 52 weeks per year. The boss will always pay wages one week late. The business will operate forever with constant real wages and the same number of employees. What is the net present value (NPV) of the boss's decision to pay later?

How many years will it take for an asset's price to double if the price grows by 10% pa?

How many years will it take for an asset's price to quadruple (be four times as big, say from $1 to $4) if the price grows by 15% pa?

When using the dividend discount model, care must be taken to avoid using a nominal dividend growth rate that exceeds the country's nominal GDP growth rate. Otherwise the firm is forecast to take over the country since it grows faster than the average business forever.
Suppose a firm's nominal dividend grows at 10% pa forever, and nominal GDP growth is 5% pa forever. The firm's total dividends are currently $1 billion (t=0). The country's GDP is currently $1,000 billion (t=0). In approximately how many years will the company's total dividends be as large as the country's GDP?

The required return of a project is 10%, given as an effective annual rate. What is the payback period of the project in years? Assume that the cash flows shown in the table are received smoothly over the year. So the $121 at time 2 is actually earned smoothly from t=1 to t=2.

Project Cash Flows
Time (yrs)   Cash flow ($)
0            -100
1            11
2            121

A project has the following cash flows:

Project Cash Flows
Time (yrs)   Cash flow ($)
0            -400
1            0
2            500

What is the payback period of the project in years? Normally cash flows are assumed to happen at the given time. But here, assume that the cash flows are received smoothly over the year. So the $500 at time 2 is actually earned smoothly from t=1 to t=2.

A project to build a toll road will take 3 years to complete, costing three payments of $50 million, paid at the start of each year (at times 0, 1, and 2). After completion, the toll road will yield a constant $10 million at the end of each year forever with no costs. So the first payment will be at t=4. The required return of the project is 10% pa given as an effective nominal rate. All cash flows are nominal. What is the payback period?

Katya offers to pay you $10 at the end of every year for the next 5 years (t=1,2,3,4,5) if you pay her $50 now (t=0). You can borrow and lend from the bank at an interest rate of 10% pa, given as an effective annual rate. Ignore credit risk. Will you accept or decline Katya's deal?

There are many ways to write the ordinary annuity formula. Which of the following is NOT equal to the ordinary annuity formula?

This annuity formula $\dfrac{C_1}{r}\left(1-\dfrac{1}{(1+r)^3} \right)$ is equivalent to which of the following formulas? Note the 3.
In the below formulas, $C_t$ is a cash flow at time t. All of the cash flows are equal, but paid at different times.

The following cash flows are expected:
• 10 yearly payments of $60, with the first payment in 3 years from now (first payment at t=3 and last at t=12).
• 1 payment of $400 in 5 years and 6 months (t=5.5) from now.
What is the NPV of the cash flows if the discount rate is 10% given as an effective annual rate?

The following cash flows are expected:
• 10 yearly payments of $80, with the first payment in 6.5 years from now (first payment at t=6.5).
• A single payment of $500 in 4 years and 3 months (t=4.25) from now.
What is the NPV of the cash flows if the discount rate is 10% given as an effective annual rate?

Your friend overheard that you need some cash and asks if you would like to borrow some money. She can lend you $5,000 now (t=0), and in return she wants you to pay her back $1,000 in two years (t=2) and every year after that for the next 5 years, so there will be 6 payments of $1,000 from t=2 to t=7 inclusive. What is the net present value (NPV) of borrowing from your friend? Assume that banks loan funds at interest rates of 10% pa, given as an effective annual rate.

A project to build a toll bridge will take two years to complete, costing three payments of $100 million at the start of each year for the next three years, that is at t=0, 1 and 2. After completion, the toll bridge will yield a constant $50 million at the end of each year for the next 10 years. So the first payment will be at t=3 and the last at t=12. After the last payment at t=12, the bridge will be given to the government. The required return of the project is 21% pa given as an effective annual nominal rate. All cash flows are real and the expected inflation rate is 10% pa given as an effective annual rate. Ignore taxes. The Net Present Value is:

Some countries' interest rates are so low that they're zero.
If interest rates are 0% pa and are expected to stay at that level for the foreseeable future, what is the most that you would be prepared to pay a bank now if it offered to pay you $10 at the end of every year for the next 5 years? In other words, what is the present value of five $10 payments at time 1, 2, 3, 4 and 5 if interest rates are 0% pa?

You are promised 20 payments of $100, where the first payment is immediate (t=0) and the last is at the end of the 19th year (t=19). The effective annual discount rate is $r$. Which of the following equations does NOT give the correct present value of these 20 payments?

A stock is expected to pay its next dividend of $1 in one year. Future annual dividends are expected to grow by 2% pa. So the first dividend of $1 will be in one year, the year after that $1.02 (=1*(1+0.02)^1), and a year later $1.0404 (=1*(1+0.02)^2) and so on forever. Its required total return is 10% pa. The total required return and growth rate of dividends are given as effective annual rates. Calculate the current stock price.

A stock just paid a dividend of $1. Future annual dividends are expected to grow by 2% pa. The next dividend of $1.02 (=1*(1+0.02)^1) will be in one year, and the year after that the dividend will be $1.0404 (=1*(1+0.02)^2), and so on forever. Its required total return is 10% pa. The total required return and growth rate of dividends are given as effective annual rates. Calculate the current stock price.

A stock is just about to pay a dividend of $1 tonight. Future annual dividends are expected to grow by 2% pa. The next dividend of $1 will be paid tonight, and the year after that the dividend will be $1.02 (=1*(1+0.02)^1), and a year later $1.0404 (=1*(1+0.02)^2) and so on forever. Its required total return is 10% pa. The total required return and growth rate of dividends are given as effective annual rates. Calculate the current stock price.
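The three stock-pricing questions above differ only in when the $1 dividend lands relative to today. A short Python sketch of the perpetuity-with-growth (Gordon/DDM) pricing, using the questions' figures (r = 10% pa, g = 2% pa; the helper name is my own):

```python
def gordon_price(d_next, r, g):
    """Perpetuity-with-growth (Gordon/DDM) price: PV of a dividend stream
    starting with d_next in one year and growing at g pa forever."""
    assert r > g, "formula only valid when required return exceeds growth"
    return d_next / (r - g)

r, g = 0.10, 0.02
# Next dividend of $1 in one year (first question above):
print(round(gordon_price(1.00, r, g), 2))            # 12.5
# Dividend of $1 just paid, so the next one is $1.02 (second question):
print(round(gordon_price(1.00 * (1 + g), r, g), 2))  # 12.75
# $1 paid tonight plus the growing stream after it (third question):
print(round(1.00 + gordon_price(1.00 * (1 + g), r, g), 2))  # 13.75
```

The only moving part is whether the imminent dividend is included in the price (cum-dividend) or not (ex-dividend).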
Discounted cash flow (DCF) valuation prices assets by finding the present value of the asset's future cash flows. The single cash flow, annuity, and perpetuity equations are very useful for this. Which of the following equations is the 'perpetuity with growth' equation?

The following equation is called the Dividend Discount Model (DDM), Gordon Growth Model or the perpetuity with growth formula:
$$P_0 = \frac{ C_1 }{ r - g }$$
What is $g$? The value $g$ is the long term expected:

For a price of $13, Carla will sell you a share which will pay a dividend of $1 in one year and every year after that forever. The required return of the stock is 10% pa. Would you like to buy Carla's share or politely decline?

The first payment of a constant perpetual annual cash flow is received at time 5. Let this cash flow be $C_5$ and the required return be $r$. So there will be equal annual cash flows at time 5, 6, 7 and so on forever, and all of the cash flows will be equal so $C_5 = C_6 = C_7 = ...$ When the perpetuity formula is used to value this stream of cash flows, it will give a value (V) at time:

For a price of $1040, Camille will sell you a share which just paid a dividend of $100, and is expected to pay dividends every year forever, growing at a rate of 5% pa. So the next dividend will be $100(1+0.05)^1=105.00$, and the year after it will be $100(1+0.05)^2=110.25$ and so on. The required return of the stock is 15% pa. Would you like to buy the share or politely decline?

The perpetuity with growth formula, also known as the dividend discount model (DDM) or Gordon growth model, is appropriate for valuing a company's shares. $P_0$ is the current share price, $C_1$ is next year's expected dividend, $r$ is the total required return and $g$ is the expected growth rate of the dividend.
$$P_0=\dfrac{C_1}{r-g}$$
The below graph shows the expected future price path of the company's shares. Which of the following statements about the graph is NOT correct?
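The Carla and Camille questions above reduce to comparing the asking price with the perpetuity (or perpetuity-with-growth) value of the dividend stream. A minimal Python sketch, assuming the decision rule 'buy when value exceeds price' (the helper name is my own):

```python
def perpetuity_value(d_next, r, g=0.0):
    """PV of a dividend stream: d_next paid in one year, growing at g pa forever."""
    return d_next / (r - g)

# Carla's share: $1 dividend forever, required return 10%, asking price $13.
value = perpetuity_value(1.00, 0.10)
print("decline" if value < 13.00 else "buy")  # decline: worth $10, priced at $13

# Camille's share: just paid $100, growth 5% pa, required return 15%, price $1,040.
value = perpetuity_value(100 * 1.05, 0.15, 0.05)
print("buy" if value > 1040.00 else "decline")  # buy: worth $1,050, priced at $1,040
```

Note that Camille's next dividend is the just-paid $100 grown one year at 5%, which is why the numerator is 105 rather than 100.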
The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.
$$P_0=\frac{d_1}{r-g}$$
A stock pays dividends annually. It just paid a dividend, but the next dividend ($d_1$) will be paid in one year. According to the DDM, what is the correct formula for the expected price of the stock in 2.5 years?

The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.
$$P_{0} = \frac{C_1}{r_{\text{eff}} - g_{\text{eff}}}$$
What would you call the expression $C_1/P_0$?

The following is the Dividend Discount Model (DDM) used to price stocks:
$$P_0=\dfrac{C_1}{r-g}$$
If the assumptions of the DDM hold, which one of the following statements is NOT correct? The long term expected:

A stock just paid its annual dividend of $9. The share price is $60. The required return of the stock is 10% pa as an effective annual rate. What is the implied growth rate of the dividend per year?

A stock will pay you a dividend of $10 tonight if you buy it today. Thereafter the annual dividend is expected to grow by 5% pa, so the next dividend after the $10 one tonight will be $10.50 in one year, then in two years it will be $11.025 and so on. The stock's required return is 10% pa. What is the stock price today and what do you expect the stock price to be tomorrow, approximately?

In the dividend discount model:
$$P_0 = \dfrac{C_1}{r-g}$$
The return $r$ is supposed to be the:

Two years ago Fred bought a house for $300,000. Now it's worth $500,000, based on recent similar sales in the area. Fred's residential property has an expected total return of 8% pa. He rents his house out for $2,000 per month, paid in advance. Every 12 months he plans to increase the rental payments. The present value of 12 months of rental payments is $23,173.86. The future value of 12 months of rental payments one year ahead is $25,027.77.
What is the expected annual growth rate of the rental payments? In other words, by what percentage will Fred have to raise the monthly rent each year to sustain the expected annual total return of 8%?

A stock pays annual dividends which are expected to continue forever. It just paid a dividend of $10. The growth rate in the dividend is 2% pa. You estimate that the stock's required return is 10% pa. Both the discount rate and growth rate are given as effective annual rates. Using the dividend discount model, what will be the share price?

A stock is expected to pay the following dividends:

Cash Flows of a Stock
Time (yrs)     0     1     2     3     4     ...
Dividend ($)   0.00  1.00  1.05  1.10  1.15  ...

After year 4, the annual dividend will grow in perpetuity at 5% pa, so:
• the dividend at t=5 will be $1.15(1+0.05),
• the dividend at t=6 will be $1.15(1+0.05)^2, and so on.
The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What will be the price of the stock in three and a half years (t = 3.5)?

The following is the Dividend Discount Model (DDM) used to price stocks:
$$P_0 = \frac{d_1}{r-g}$$
Assume that the assumptions of the DDM hold and that the time period is measured in years. Which of the following is equal to the expected dividend in 3 years, $d_3$?

The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.
$$p_0 = \frac{d_1}{r - g}$$
Which expression is NOT equal to the expected dividend yield?

The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.
$$p_0=\frac{d_1}{r_\text{eff}-g_\text{eff}}$$
Which expression is NOT equal to the expected capital return?

A fairly valued share's current price is $4 and it has a total required return of 30%. Dividends are paid annually and next year's dividend is expected to be $1.
After that, dividends are expected to grow by 5% pa in perpetuity. All rates are effective annual returns. What is the expected dividend income paid at the end of the second year (t=2) and what is the expected capital gain from just after the first dividend (t=1) to just after the second dividend (t=2)? The answers are given in the same order: the dividend and then the capital gain.

Two companies BigDiv and ZeroDiv are exactly the same except for their dividend payouts. BigDiv pays large dividends and ZeroDiv doesn't pay any dividends. Currently the two firms have the same earnings, assets, number of shares, share price, expected total return and risk. Assume a perfect world with no taxes, no transaction costs, no asymmetric information and that all assets including business projects are fairly priced and therefore zero-NPV. All things remaining equal, which of the following statements is NOT correct?

Most listed Australian companies pay dividends twice per year, the 'interim' and 'final' dividends, which are roughly 6 months apart.

You are an equities analyst trying to value the company BHP. You decide to use the Dividend Discount Model (DDM) as a starting point, so you study BHP's dividend history and you find that BHP tends to pay the same interim and final dividend each year, and that both grow by the same rate. You expect BHP will pay a $0.55 interim dividend in six months and a $0.55 final dividend in one year. You expect each to grow by 4% next year and forever, so the interim and final dividends next year will be $0.572 each, and so on in perpetuity. Assume BHP's cost of equity is 8% pa. All rates are quoted as nominal effective rates. The dividends are nominal cash flows and the inflation rate is 2.5% pa. What is the current price of a BHP share?

You are an equities analyst trying to value the equity of the Australian telecoms company Telstra, with ticker TLS. In Australia, listed companies like Telstra tend to pay dividends every 6 months.
The payment around August is called the final dividend and the payment around February is called the interim dividend. Both occur annually. • Today is mid-March 2015. • TLS's last interim dividend of $0.15 was one month ago in mid-February 2015. • TLS's last final dividend of $0.15 was seven months ago in mid-August 2014. Judging by TLS's dividend history and prospects, you estimate that the nominal dividend growth rate will be 1% pa. Assume that TLS's total nominal cost of equity is 6% pa. The dividends are nominal cash flows and the inflation rate is 2.5% pa. All rates are quoted as nominal effective annual rates. Assume that each month is exactly one twelfth (1/12) of a year, so you can ignore the number of days in each month. Calculate the current TLS share price. A stock is expected to pay a dividend of $15 in one year (t=1), then $25 for 9 years after that (payments at t=2, 3, ..., 10), and on the 11th year (t=11) the dividend will be 2% less than at t=10, and will continue to shrink at the same rate every year after that forever. The required return of the stock is 10%. All rates are effective annual rates. What is the price of the stock now? Estimate the US bank JP Morgan's share price using a price earnings (PE) multiples approach with the following assumptions and figures only: • The major US banks JP Morgan Chase (JPM), Citi Group (C) and Wells Fargo (WFC) are comparable companies; • JP Morgan Chase's historical earnings per share (EPS) is $4.37; • Citi Group's share price is $50.05 and historical EPS is $4.26; • Wells Fargo's share price is $48.98 and historical EPS is $3.89. Note: Figures sourced from Google Finance on 24 March 2014. Estimate the Chinese bank ICBC's share price using a backward-looking price earnings (PE) multiples approach with the following assumptions and figures only. Note that the renminbi (RMB) is the Chinese currency, also known as the yuan (CNY).
• The 4 major Chinese banks ICBC, China Construction Bank (CCB), Bank of China (BOC) and Agricultural Bank of China (ABC) are comparable companies; • ICBC's historical earnings per share (EPS) is RMB 0.74; • CCB's backward-looking PE ratio is 4.59; • BOC's backward-looking PE ratio is 4.78; • ABC's backward-looking PE ratio is also 4.78; Note: Figures sourced from Google Finance on 25 March 2014. Share prices are from the Shanghai stock exchange. Estimate Microsoft's (MSFT) share price using a price earnings (PE) multiples approach with the following assumptions and figures only: • Apple, Google and Microsoft are comparable companies, • Apple's (AAPL) share price is $526.24 and historical EPS is $40.32. • Google's (GOOG) share price is $1,215.65 and historical EPS is $36.23. • Microsoft's (MSFT) historical earnings per share (EPS) is $2.71. Source: Google Finance 28 Feb 2014. Estimate the French bank Societe Generale's share price using a backward-looking price earnings (PE) multiples approach with the following assumptions and figures only. Note that EUR is the euro, the European monetary union's currency. • The 4 major European banks Credit Agricole (ACA), Deutsche Bank AG (DBK), UniCredit (UCG) and Banco Santander (SAN) are comparable companies to Societe Generale (GLE); • Societe Generale's (GLE's) historical earnings per share (EPS) is EUR 2.92; • ACA's backward-looking PE ratio is 16.29 and historical EPS is EUR 0.84; • DBK's backward-looking PE ratio is 25.01 and historical EPS is EUR 1.26; • SAN's backward-looking PE ratio is 14.71 and historical EPS is EUR 0.47; • UCG's backward-looking PE ratio is 15.78 and historical EPS is EUR 0.40; Note: Figures sourced from Google Finance on 27 March 2015. Private equity firms are known to buy medium-sized private companies operating in the same industry, merge them together into a larger company, and then sell it off in a public float (initial public offering, IPO).
If medium-sized private companies trade at PE ratios of 5 and larger listed companies trade at PE ratios of 15, what return can be achieved from this strategy? Assume that: • The medium-sized companies can be bought, merged and sold in an IPO instantaneously. • There are no costs of finding, valuing, merging and restructuring the medium-sized companies. Also, there is no competition to buy the medium-sized companies from other private equity firms. • The large merged firm's earnings are the sum of the medium firms' earnings. • The only reason for the difference in medium and large firms' PE ratios is due to the illiquidity of the medium firms' shares. • Return is defined as: $r_{0 \rightarrow 1} = (p_1-p_0+c_1)/p_0$, where time zero is just before the merger and time one is just after. A low-quality second-hand car can be bought now for $1,000 and will last for 1 year before it will be scrapped for nothing. A high-quality second-hand car can be bought now for $4,900 and it will last for 5 years before it will be scrapped for nothing. What is the equivalent annual cost of each car? Assume a discount rate of 10% pa, given as an effective annual rate. The answer choices are given as the equivalent annual cost of the low-quality car and then the high-quality car. You're advising your superstar client 40-cent who is weighing up buying a private jet or a luxury yacht. 40-cent is just as happy with either, but he wants to go with the more cost-effective option. These are the cash flows of the two options: • The private jet can be bought for $6m now, which will cost $12,000 per month in fuel, piloting and airport costs, payable at the end of each month. The jet will last for 12 years. • Or the luxury yacht can be bought for $4m now, which will cost $20,000 per month in fuel, crew and berthing costs, payable at the end of each month. The yacht will last for 20 years.
What's unusual about 40-cent is that he is so famous that he will actually be able to sell his jet or yacht for the same price as it was bought since the next generation of superstar musicians will buy it from him as a status symbol. Bank interest rates are 10% pa, given as an effective annual rate. You can assume that 40-cent will live for another 60 years and that when the jet or yacht's life is at an end, he will buy a new one with the same details as above. Would you advise 40-cent to buy the or the ? Note that the effective monthly rate is $r_\text{eff monthly}=(1+0.1)^{1/12}-1=0.00797414$. Details of two different types of light bulbs are given below: • Low-energy light bulbs cost $3.50, have a life of nine years, and use about $1.60 of electricity a year, paid at the end of each year. • Conventional light bulbs cost only $0.50, but last only about a year and use about $6.60 of energy a year, paid at the end of each year. The real discount rate is 5%, given as an effective annual rate. Assume that all cash flows are real. The inflation rate is 3% given as an effective annual rate. Find the Equivalent Annual Cost (EAC) of the low-energy and conventional light bulbs. The below choices are listed in that order. Carlos and Edwin are brothers and they both love Holden Commodore cars. Carlos likes to buy the latest Holden Commodore car for $40,000 every 4 years as soon as the new model is released. As soon as he buys the new car, he sells the old one on the second-hand car market for $20,000. Carlos never has to bother with paying for repairs since his cars are brand new. Edwin also likes Commodores, but prefers to buy 4-year old cars for $20,000 and keep them for 11 years until the end of their life (new ones last for 15 years in total but the 4-year old ones only last for another 11 years). Then he sells the old car for $2,000 and buys another 4-year old second-hand car, and so on.
Every time Edwin buys a second-hand 4-year old car he immediately has to spend $1,000 on repairs, and then $1,000 every year after that for the next 10 years. So there are 11 payments in total from when the second-hand car is bought at t=0 to the last payment at t=10. One year later (t=11) the old car is at the end of its total 15 year life and can be scrapped for $2,000. Assuming that Carlos and Edwin maintain their love of Commodores and keep up their habits of buying new ones and second-hand ones respectively, how much larger is Carlos' equivalent annual cost of car ownership compared with Edwin's? The real discount rate is 10% pa. All cash flows are real and are expected to remain constant. Inflation is forecast to be 3% pa. All rates are effective annual. Ignore capital gains tax and tax savings from depreciation since cars are tax-exempt for individuals. You just bought a nice dress which you plan to wear once per month on nights out. You bought it a moment ago for $600 (at t=0). In your experience, dresses used once per month last for 6 years. Your younger sister is a student with no money and wants to borrow your dress once a month when she hits the town. With the increased use, your dress will only last for another 3 years rather than 6. What is the present value of the cost of letting your sister use your current dress for the next 3 years? Assume: that bank interest rates are 10% pa, given as an effective annual rate; you will buy a new dress when your current one wears out; your sister will only use the current dress, not the next one that you will buy; and the price of a new dress never changes. You own a nice suit which you wear once per week on nights out. You bought it one year ago for $600. In your experience, suits used once per week last for 6 years. So you expect yours to last for another 5 years. Your younger brother said that retro is back in style so he wants to borrow your suit once a week when he goes out.
With the increased use, your suit will only last for another 4 years rather than 5. What is the present value of the cost of letting your brother use your current suit for the next 4 years? Assume: that bank interest rates are 10% pa, given as an effective annual rate; you will buy a new suit when your current one wears out and your brother will not use the new one; your brother will only use your current suit so he will only use it for the next four years; and the price of a new suit never changes. You own some nice shoes which you use once per week on date nights. You bought them 2 years ago for $500. In your experience, shoes used once per week last for 6 years. So you expect yours to last for another 4 years. Your younger sister said that she wants to borrow your shoes once per week. With the increased use, your shoes will only last for another 2 years rather than 4. What is the present value of the cost of letting your sister use your current shoes for the next 2 years? Assume: that bank interest rates are 10% pa, given as an effective annual rate; you will buy a new pair of shoes when your current pair wears out and your sister will not use the new ones; your sister will only use your current shoes so she will only use them for the next 2 years; and the price of new shoes never changes. An industrial chicken farmer grows chickens for their meat. Chickens:
1. Cost $0.50 each to buy as chicks. They are bought on the day they’re born, at t=0.
2. Grow at a rate of $0.70 worth of meat per chicken per week for the first 6 weeks (t=0 to t=6).
3. Grow at a rate of $0.40 worth of meat per chicken per week for the next 4 weeks (t=6 to t=10) since they’re older and grow more slowly.
4. Feed costs are $0.30 per chicken per week for their whole life. Chicken feed is bought and fed to the chickens once per week at the beginning of the week. So the first amount of feed bought for a chicken at t=0 costs $0.30, and so on.
5.
Can be slaughtered (killed for their meat) and sold at no cost at the end of the week. The price received for a chicken is its total value of meat (note that the chicken grows fast then slow, see above). The required return of the chicken farm is 0.5% given as an effective weekly rate. Ignore taxes and the fixed costs of the factory. Ignore the chicken’s welfare and other environmental and ethical concerns. Find the equivalent weekly cash flow of slaughtering a chicken at 6 weeks and at 10 weeks so the farmer can figure out the best time to slaughter his chickens. The choices below are given in the same order, 6 and 10 weeks. An Apple iPhone 6 smart phone can be bought now for $999. An Android Kogan Agora 4G+ smart phone can be bought now for $240. If the Kogan phone lasts for one year, approximately how long must the Apple phone last for to have the same equivalent annual cost? Assume that both phones have equivalent features besides their lifetimes, that both are worthless once they've outlasted their life, the discount rate is 10% pa given as an effective annual rate, and there are no extra costs or benefits from either phone. What is the NPV of the following series of cash flows when the discount rate is 5% given as an effective annual rate? The first payment of $10 is in 4 years, followed by payments every 6 months forever after that which shrink by 2% every 6 months. That is, the growth rate every 6 months is actually negative 2%, given as an effective 6 month rate. So the payment at $t=4.5$ years will be $10(1-0.02)^1=9.80$, and so on. A share just paid its semi-annual dividend of $10. The dividend is expected to grow at 2% every 6 months forever. This 2% growth rate is an effective 6 month rate. Therefore the next dividend will be $10.20 in six months. The required return of the stock is 10% pa, given as an effective annual rate. What is the price of the share now?
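Perpetuity-with-growth questions like the semi-annual dividend one above all reduce to the same two steps: convert the required return to the dividend period's effective rate, then apply $p_0 = d_1/(r-g)$. A minimal Python sketch of the mechanics, using the figures from the question above:

```python
# Dividend Discount Model: p0 = d1 / (r - g), where r and g are
# effective rates over the same period as the dividend payments.

def ddm_price(next_dividend, r_per_period, g_per_period):
    """Perpetuity-with-growth price, valued just after the last dividend."""
    return next_dividend / (r_per_period - g_per_period)

# Figures from the semi-annual question above: last dividend $10,
# growth 2% per 6 months, required return 10% pa effective annual.
r_6mth = (1 + 0.10) ** 0.5 - 1   # effective annual -> effective 6-month rate
d_next = 10 * (1 + 0.02)         # next dividend in 6 months is $10.20
price = ddm_price(d_next, r_6mth, 0.02)
print(round(price, 2))           # ≈ 354.06
```

Note the only real work is matching the compounding period of the discount rate to the payment frequency before substituting into the formula.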
A stock is expected to pay the following dividends:

Cash Flows of a Stock
Time (yrs)    0     1     2     3     4     ...
Dividend ($)  0.00  1.00  1.05  1.10  1.15  ...

After year 4, the annual dividend will grow in perpetuity at 5% pa, so:
• the dividend at t=5 will be $1.15(1+0.05),
• the dividend at t=6 will be $1.15(1+0.05)^2, and so on.
The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What is the current price of the stock? A stock pays semi-annual dividends. It just paid a dividend of $10. The growth rate in the dividend is 1% every 6 months, given as an effective 6 month rate. You estimate that the stock's required return is 21% pa, as an effective annual rate. Using the dividend discount model, what will be the share price? You own an apartment which you rent out as an investment property. What is the price of the apartment using discounted cash flow (DCF, same as NPV) valuation? Assume that: • You just signed a contract to rent the apartment out to a tenant for the next 12 months at $2,000 per month, payable in advance (at the start of the month, t=0). The tenant is just about to pay you the first $2,000 payment. • The contract states that monthly rental payments are fixed for 12 months. After the contract ends, you plan to sign another contract but with rental payment increases of 3%. You intend to do this every year. So rental payments will increase at the start of the 13th month (t=12) to be $2,060 (=2,000(1+0.03)), and then they will be constant for the next 12 months. Rental payments will increase again at the start of the 25th month (t=24) to be $2,121.80 (=2,000(1+0.03)^2), and then they will be constant for the next 12 months until the next year, and so on. • The required return of the apartment is 8.732% pa, given as an effective annual rate. • Ignore all taxes, maintenance, real estate agent, council and strata fees, periods of vacancy and other costs.
Assume that the apartment will last forever and so will the rental payments. A business project is expected to cost $100 now (t=0), then pay $10 at the end of the third (t=3), fourth, fifth and sixth years, and then grow by 5% pa every year forever. So the cash flow will be $10.50 at the end of the seventh year (t=7), then $11.025 at the end of the eighth year (t=8) and so on perpetually. The total required return is 10% pa. Which of the following formulas will NOT give the correct net present value of the project? You're about to buy a car. These are the cash flows of the two different cars that you can buy: • You can buy an old car for $5,000 now, for which you will have to buy $90 of fuel at the end of each week from the date of purchase. The old car will last for 3 years, at which point you will sell the old car for $500. • Or you can buy a new car for $14,000 now, for which you will have to buy $50 of fuel at the end of each week from the date of purchase. The new car will last for 4 years, at which point you will sell the new car for $1,000. Bank interest rates are 10% pa, given as an effective annual rate. Assume that there are exactly 52 weeks in a year. Ignore taxes and environmental and pollution factors. Should you buy the or the ? Details of two different types of desserts or edible treats are given below: • High-sugar treats like candy, chocolate and ice cream make a person very happy. High-sugar treats are cheap at only $2 per day. • Low-sugar treats like nuts, cheese and fruit make a person equally happy if these foods are of high quality. Low-sugar treats are more expensive at $4 per day. The advantage of low-sugar treats is that a person only needs to pay the dentist $2,000 for fillings and root canal therapy once every 15 years, whereas with high-sugar treats that treatment needs to be done every 5 years. The real discount rate is 10%, given as an effective annual rate. Assume that there are 365 days in every year and that all cash flows are real.
The inflation rate is 3% given as an effective annual rate. Find the equivalent annual cash flow (EAC) of the high-sugar treats and low-sugar treats, including dental costs. The below choices are listed in that order. Ignore the pain of dental therapy, personal preferences and other factors. You deposit cash into your bank account. Have you or your money? You deposit cash into your bank account. Have you or debt? Which of the following statements is NOT correct? Borrowers: Which of the following statements is NOT correct? Lenders: Issuing debt doesn't give away control of the firm because debt holders can't cast votes to determine the company's affairs, such as at the annual general meeting (AGM), and can't appoint directors to the board. or ? Companies must pay interest and principal payments to debt-holders. They're compulsory. But companies are not forced to pay dividends to shareholders. or ? Your friend just bought a house for $400,000. He financed it using a $320,000 mortgage loan and a deposit of $80,000. In the context of residential housing and mortgages, the 'equity' tied up in the value of a person's house is the value of the house less the value of the mortgage. So the initial equity your friend has in his house is $80,000. Let this amount be E, let the value of the mortgage be D and the value of the house be V. So $V=D+E$. If house prices suddenly fall by 10%, what would be your friend's percentage change in equity (E)? Assume that the value of the mortgage is unchanged and that no income (rent) was received from the house during the short time over which house prices fell. Remember: $$r_{0\rightarrow1}=\frac{p_1-p_0+c_1}{p_0}$$ where $r_{0\rightarrow1}$ is the return (percentage change) of an asset with price $p_0$ initially, $p_1$ one period later, and paying a cash flow of $c_1$ at time $t=1$. Your friend just bought a house for $1,000,000. He financed it using a $900,000 mortgage loan and a deposit of $100,000.
In the context of residential housing and mortgages, the 'equity' or 'net wealth' tied up in a house is the value of the house less the value of the mortgage loan. Assuming that your friend's only asset is his house, his net wealth is $100,000. If house prices suddenly fall by 15%, what would be your friend's percentage change in net wealth? Assume that: • No income (rent) was received from the house during the short time over which house prices fell. • Your friend will not declare bankruptcy; he will always pay off his debts. One year ago you bought $100,000 of shares partly funded using a margin loan. The margin loan size was $70,000 and the other $30,000 was your own wealth or 'equity' in the share assets. The interest rate on the margin loan was 7.84% pa. Over the year, the shares produced a dividend yield of 4% pa and a capital gain of 5% pa. What was the total return on your wealth? Ignore taxes, assume that all cash flows (interest payments and dividends) were paid and received at the end of the year, and all rates above are effective annual rates. Hint: Remember that wealth in this context is your equity (E) in the share assets (V = D+E), which are funded by the margin loan (D) and your own wealth or equity (E). Here are the Net Income (NI) and Cash Flow From Assets (CFFA) equations: $$NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c)$$ $$CFFA=NI+Depr-CapEx - \varDelta NWC+IntExp$$ What is the formula for calculating annual interest expense (IntExp) which is used in the equations above? Select one of the following answers. Note that D is the value of debt which is constant through time, and $r_D$ is the cost of debt. Interest expense (IntExp) is an important part of a company's income statement (or 'profit and loss' or 'statement of financial performance'). How does an accountant calculate the annual interest expense of a fixed-coupon bond that has a liquid secondary market?
Select the most correct answer: Annual interest expense is equal to: Which one of the following will decrease net income (NI) but increase cash flow from assets (CFFA) in this year for a tax-paying firm, all else remaining constant? Remember: $$NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c)$$ $$CFFA=NI+Depr-CapEx - \Delta NWC+IntExp$$ Which one of the following will increase the Cash Flow From Assets in this year for a tax-paying firm, all else remaining constant? A manufacturing company is considering a new project in the more risky services industry. The cash flows from assets (CFFA) are estimated for the new project, with interest expense excluded from the calculations. To get the levered value of the project, what should these unlevered cash flows be discounted by? Assume that the manufacturing firm has a target debt-to-assets ratio that it sticks to. A retail furniture company buys furniture wholesale and distributes it through its retail stores. The owner believes that she has some good ideas for making stylish new furniture. She is considering a project to buy a factory and employ workers to manufacture the new furniture she's designed. Furniture manufacturing has more systematic risk than furniture retailing. Her furniture retailing firm's after-tax WACC is 20%. Furniture manufacturing firms have an after-tax WACC of 30%. Both firms are optimally geared. Assume a classical tax system. Which method(s) will give the correct valuation of the new furniture-making project? Select the most correct answer. The US firm Google operates in the online advertising business. In 2011 Google bought Motorola Mobility which manufactures mobile phones. Assume the following: • Google had a 10% after-tax weighted average cost of capital (WACC) before it bought Motorola. • Motorola had a 20% after-tax WACC before it merged with Google. • Google and Motorola have the same level of gearing. • Both companies operate in a classical tax system. You are a manager at Motorola.
You must value a project for making mobile phones. Which method(s) will give the correct valuation of the mobile phone manufacturing project? Select the most correct answer. The mobile phone manufacturing project's: A company increases the proportion of debt funding it uses to finance its assets by issuing bonds and using the cash to repurchase stock, leaving assets unchanged. Ignoring the costs of financial distress, which of the following statements is NOT correct: Value the following business project to manufacture a new product.

Project Data
Project life: 2 yrs
Initial investment in equipment: $6m
Depreciation of equipment per year: $3m
Expected sale price of equipment at end of project: $0.6m
Unit sales per year: 4m
Sale price per unit: $8
Variable cost per unit: $5
Fixed costs per year, paid at the end of each year: $1m
Interest expense per year: 0
Tax rate: 30%
Weighted average cost of capital after tax per annum: 10%

Notes
1. The firm's current assets and current liabilities are $3m and $2m respectively right now. This net working capital will not be used in this project; it will be used in other unrelated projects. Due to the project, current assets (mostly inventory) will grow by $2m initially (at t = 0), and then by $0.2m at the end of the first year (t=1). Current liabilities (mostly trade creditors) will increase by $0.1m at the end of the first year (t=1). At the end of the project, the net working capital accumulated due to the project can be sold for the same price that it was bought.
2. The project cost $0.5m to research, which was incurred one year ago.

Assumptions
• All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
• All rates and cash flows are real. The inflation rate is 3% pa.
• All rates are given as effective annual rates.
• The business considering the project is run as a 'sole tradership' (run by an individual without a company) and is therefore eligible for a 50% capital gains tax discount when the equipment is sold, as permitted by the Australian Tax Office. What is the expected net present value (NPV) of the project? A method commonly seen in textbooks for calculating a levered firm's free cash flow (FFCF, or CFFA) is the following: \begin{aligned} FFCF &= (Rev - COGS - Depr - FC - IntExp)(1-t_c) \\ &\space\space\space+ Depr - CapEx -\Delta NWC + IntExp(1-t_c) \\ \end{aligned} Does this annual FFCF or the annual interest tax shield?

Project Data
Project life: 2 yrs
Initial investment in equipment: $600k
Depreciation of equipment per year: $250k
Expected sale price of equipment at end of project: $200k
Revenue per job: $12k
Variable cost per job: $4k
Quantity of jobs per year: 120
Fixed costs per year, paid at the end of each year: $100k
Interest expense in first year (at t=1): $16.091k
Interest expense in second year (at t=2): $9.711k
Tax rate: 30%
Government treasury bond yield: 5%
Bank loan debt yield: 6%
Levered cost of equity: 12.5%
Market portfolio return: 10%
Beta of assets: 1.24
Beta of levered equity: 1.5
Firm's and project's debt-to-equity ratio: 25%

Notes
1. The project will require an immediate purchase of $50k of inventory, which will all be sold at cost when the project ends. Current liabilities are negligible so they can be ignored.

Assumptions
• The debt-to-equity ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-equity ratio. Note that interest expense is different in each year.
• Thousands are represented by 'k' (kilo).
• All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
• All rates and cash flows are nominal. The inflation rate is 2% pa.
• All rates are given as effective annual rates.
• The 50% capital gains tax discount is not available since the project is undertaken by a firm, not an individual. What is the net present value (NPV) of the project? A firm has a debt-to-equity ratio of 25%. What is its debt-to-assets ratio? There are many ways to calculate a firm's free cash flow (FFCF), also called cash flow from assets (CFFA). Some include the annual interest tax shield in the cash flow and some do not. Which of the below FFCF formulas include the interest tax shield in the cash flow? $$(1) \quad FFCF=NI + Depr - CapEx -ΔNWC + IntExp$$ $$(2) \quad FFCF=NI + Depr - CapEx -ΔNWC + IntExp.(1-t_c)$$ $$(3) \quad FFCF=EBIT.(1-t_c) + Depr - CapEx -ΔNWC + IntExp.t_c$$ $$(4) \quad FFCF=EBIT.(1-t_c) + Depr - CapEx -ΔNWC$$ $$(5) \quad FFCF=EBITDA.(1-t_c) + Depr.t_c - CapEx -ΔNWC + IntExp.t_c$$ $$(6) \quad FFCF=EBITDA.(1-t_c) + Depr.t_c - CapEx -ΔNWC$$ $$(7) \quad FFCF=EBIT-Tax + Depr - CapEx -ΔNWC$$ $$(8) \quad FFCF=EBIT-Tax + Depr - CapEx -ΔNWC-IntExp.t_c$$ $$(9) \quad FFCF=EBITDA-Tax - CapEx -ΔNWC$$ $$(10) \quad FFCF=EBITDA-Tax - CapEx -ΔNWC-IntExp.t_c$$ The formulas for net income (NI also called earnings), EBIT and EBITDA are given below. Assume that depreciation and amortisation are both represented by 'Depr' and that 'FC' represents fixed costs such as rent. $$NI=(Rev - COGS - Depr - FC - IntExp).(1-t_c)$$ $$EBIT=Rev - COGS - FC - Depr$$ $$EBITDA=Rev - COGS - FC$$ $$Tax=(Rev - COGS - Depr - FC - IntExp).t_c= \dfrac{NI.t_c}{1-t_c}$$ One formula for calculating a levered firm's free cash flow (FFCF, or CFFA) is to use earnings before interest and tax (EBIT). \begin{aligned} FFCF &= (EBIT)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp.t_c \\ &= (Rev - COGS - Depr - FC)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp.t_c \\ \end{aligned} Does this annual FFCF or the annual interest tax shield? One formula for calculating a levered firm's free cash flow (FFCF, or CFFA) is to use net operating profit after tax (NOPAT).
\begin{aligned} FFCF &= NOPAT + Depr - CapEx -\Delta NWC \\ &= (Rev - COGS - Depr - FC)(1-t_c) + Depr - CapEx -\Delta NWC \\ \end{aligned} Does this annual FFCF or the annual interest tax shield? A company issues a large amount of bonds to raise money for new projects of similar risk to the company's existing projects. The net present value (NPV) of the new projects is positive but small. Assume a classical tax system. Which statement is NOT correct? A firm has a debt-to-assets ratio of 50%. The firm then issues a large amount of equity to raise money for new projects of similar systematic risk to the company's existing projects. Assume a classical tax system. Which statement is correct? A firm has a debt-to-assets ratio of 50%. The firm then issues a large amount of debt to raise money for new projects of similar market risk to the company's existing projects. Assume a classical tax system. Which statement is correct? A fast-growing firm is suitable for valuation using a multi-stage growth model. Its nominal unlevered cash flow from assets ($CFFA_U$) at the end of this year (t=1) is expected to be $1 million. After that it is expected to grow at a rate of: • 12% pa for the next two years (from t=1 to 3), • 5% over the fourth year (from t=3 to 4), and • -1% forever after that (from t=4 onwards). Note that this is a negative one percent growth rate. Assume that: • The nominal WACC after tax is 9.5% pa and is not expected to change. • The nominal WACC before tax is 10% pa and is not expected to change. • The firm has a target debt-to-equity ratio that it plans to maintain. • The inflation rate is 3% pa. • All rates are given as nominal effective annual rates. What is the levered value of this fast-growing firm's assets? Which of the following statements about standard statistical mathematics notation is NOT correct?
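The fast-growing-firm question above is a standard multi-stage valuation: discount each explicitly forecast cash flow, then add a terminal perpetuity-with-growth value. A Python sketch of the mechanics, assuming (as the usual WACC method does) that the unlevered cash flows are discounted at the after-tax WACC so that the interest tax shield is captured in the discount rate rather than in the cash flows:

```python
# Multi-stage valuation: PV of explicit forecast cash flows plus a
# terminal perpetuity-with-growth value, discounted at the after-tax WACC.

def multi_stage_value(cash_flows, terminal_growth, wacc):
    """cash_flows[t-1] is the cash flow at the end of year t. After the
    last forecast year, cash flows grow at terminal_growth forever."""
    pv = sum(cf / (1 + wacc) ** t for t, cf in enumerate(cash_flows, start=1))
    n = len(cash_flows)
    terminal = cash_flows[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
    return pv + terminal / (1 + wacc) ** n

# Figures from the question above ($ millions): 12% growth from t=1 to 3,
# 5% from t=3 to 4, then -1% forever; after-tax WACC of 9.5% pa.
c1 = 1.0
c2 = c1 * 1.12
c3 = c2 * 1.12
c4 = c3 * 1.05
value = multi_stage_value([c1, c2, c3, c4], terminal_growth=-0.01, wacc=0.095)
print(round(value, 2))   # ≈ 12.36
```

With a single forecast year this collapses to the plain Gordon growth model, which is a useful sanity check on the terminal-value algebra.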
Diversification in a portfolio of two assets works best when the correlation between their returns is:

Portfolio Details
Stock  Expected return  Standard deviation  Correlation $(\rho_{A,B})$  Dollars invested
A      0.1              0.4                 0.5                         60
B      0.2              0.6                                             140

What is the standard deviation (not variance) of the above portfolio? All things remaining equal, the variance of a portfolio of two positively-weighted stocks rises as: Two risky stocks A and B comprise an equal-weighted portfolio. The correlation between the stocks' returns is 70%. If the variance of stock A increases but the: • Prices and expected returns of each stock stay the same, • Variance of stock B's returns stays the same, • Correlation of returns between the stocks stays the same. Which of the following statements is NOT correct? All things remaining equal, the higher the correlation of returns between two stocks: An investor wants to make a portfolio of two stocks A and B with a target expected portfolio return of 6% pa. • Stock A has an expected return of 5% pa. • Stock B has an expected return of 10% pa. What portfolio weights should the investor have in stocks A and B respectively? An investor wants to make a portfolio of two stocks A and B with a target expected portfolio return of 12% pa. • Stock A has an expected return of 10% pa and a standard deviation of 20% pa. • Stock B has an expected return of 15% pa and a standard deviation of 30% pa. The correlation coefficient between stock A and B's expected returns is 70%. What will be the annual standard deviation of the portfolio with this 12% pa target return? What is the correlation of a variable X with itself? The corr(X, X) or $\rho_{X,X}$ equals: What is the correlation of a variable X with a constant C? The corr(X, C) or $\rho_{X,C}$ equals: The covariance and correlation of two stocks X and Y's annual returns are calculated over a number of years. The units of the returns are in percent per annum $(\% pa)$.
What are the units of the covariance $(\sigma_{X,Y})$ and correlation $(\rho_{X,Y})$ of returns respectively? Hint: Visit Wikipedia to understand the difference between percentage points $(\text{pp})$ and percent $(\%)$. Let the standard deviation of returns for a share per month be $\sigma_\text{monthly}$. What is the formula for the standard deviation of the share's returns per year $(\sigma_\text{yearly})$? Assume that returns are independently and identically distributed (iid) so they have zero auto correlation, meaning that if the return was higher than average today, it does not indicate that the return tomorrow will be higher or lower than average. Mr Blue, Miss Red and Mrs Green are people with different utility functions. Note that a fair gamble is a bet that has an expected value of zero, such as paying$0.50 to win $1 in a coin flip with heads or nothing if it lands tails. Fairly priced insurance is when the expected present value of the insurance premiums is equal to the expected loss from the disaster that the insurance protects against, such as the cost of rebuilding a home after a catastrophic fire. Which of the following statements is NOT correct? Mr Blue, Miss Red and Mrs Green are people with different utility functions. Each person has$500 of initial wealth. A coin toss game is offered to each person at a casino where the player can win or lose $500. Each player can flip a coin and if they flip heads, they receive$500. If they flip tails then they will lose $500. Which of the following statements is NOT correct? Mr Blue, Miss Red and Mrs Green are people with different utility functions. Each person has$256 of initial wealth. A coin toss game is offered to each person at a casino where the player can win or lose $256. Each player can flip a coin and if they flip heads, they receive$256. If they flip tails then they will lose $256. Which of the following statements is NOT correct? 
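The two-asset portfolio questions earlier in this section reduce to two standard formulas: the portfolio expected return is the weight-average of the asset returns, and the portfolio variance is $\sigma_p^2 = w_A^2\sigma_A^2 + w_B^2\sigma_B^2 + 2 w_A w_B \rho_{A,B} \sigma_A \sigma_B$. A minimal sketch (the helper name is mine; the inputs are the ones given in the questions above):

```python
import math

def two_asset_sd(w_a, sd_a, sd_b, rho):
    """Standard deviation of a two-asset portfolio with weights w_a and 1 - w_a."""
    w_b = 1 - w_a
    var = (w_a * sd_a)**2 + (w_b * sd_b)**2 + 2 * w_a * w_b * rho * sd_a * sd_b
    return math.sqrt(var)

# Dollar-weighted portfolio from the table above: $60 in A, $140 in B, rho = 0.5.
sd_1 = two_asset_sd(60 / 200, 0.4, 0.6, 0.5)    # ~0.4911, i.e. ~49.1% pa

# Target-return portfolio: solve w_a*0.10 + (1 - w_a)*0.15 = 0.12  ->  w_a = 0.6.
w_a = (0.15 - 0.12) / (0.15 - 0.10)
sd_2 = two_asset_sd(w_a, 0.2, 0.3, 0.7)         # ~0.2213, i.e. ~22.1% pa
```

The same weight-solving step answers the 6% target question (A at 5%, B at 10% gives weights 0.8 and 0.2).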
Mr Blue, Miss Red and Mrs Green are people with different utility functions. Which of the following statements is NOT correct? Diversification is achieved by investing in a large amount of stocks. What type of risk is reduced by diversification? According to the theory of the Capital Asset Pricing Model (CAPM), total risk can be broken into two components, systematic risk and idiosyncratic risk. Which of the following events would be considered a systematic, undiversifiable event according to the theory of the CAPM? A fairly priced stock has an expected return equal to the market's. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. What is the stock's beta? The security market line (SML) shows the relationship between beta and expected return. Investment projects that plot above the SML would have: Stock A has a beta of 0.5 and stock B has a beta of 1. Which statement is NOT correct? Which statement is the most correct? A stock's correlation with the market portfolio increases while its total risk is unchanged. What will happen to the stock's expected return and systematic risk? Assets A, B, M and $r_f$ are shown on the graphs above. Asset M is the market portfolio and $r_f$ is the risk free yield on government bonds. Which of the below statements is NOT correct? A stock has a beta of 1.5. The market's expected total return is 10% pa and the risk free rate is 5% pa, both given as effective annual rates. What do you think will be the stock's expected return over the next year, given as an effective annual rate? Assets A, B, M and $r_f$ are shown on the graphs above. Asset M is the market portfolio and $r_f$ is the risk free yield on government bonds. Assume that investors can borrow and lend at the risk free rate. Which of the below statements is NOT correct? A stock has a beta of 1.5. The market's expected total return is 10% pa and the risk free rate is 5% pa, both given as effective annual rates. 
In the last 5 minutes, bad economic news was released showing a higher chance of recession. Over this time the share market fell by 1%. The risk free rate was unchanged. What do you think was the stock's historical return over the last 5 minutes, given as an effective 5 minute rate? A stock has a beta of 1.5. The market's expected total return is 10% pa and the risk free rate is 5% pa, both given as effective annual rates. Over the last year, bad economic news was released showing a higher chance of recession. Over this time the share market fell by 1%. So $r_{m} = (P_{0} - P_{-1})/P_{-1} = -0.01$, where the current time is zero and one year ago is time -1. The risk free rate was unchanged. What do you think was the stock's historical return over the last year, given as an effective annual rate? The CAPM can be used to find a business's expected opportunity cost of capital: $$r_i=r_f+β_i (r_m-r_f)$$ What should be used as the risk free rate $r_f$? A firm changes its capital structure by issuing a large amount of equity and using the funds to repay debt. Its assets are unchanged. Ignore interest tax shields. According to the Capital Asset Pricing Model (CAPM), which statement is correct? A firm's WACC before tax would decrease due to: Which of the following statements about the weighted average cost of capital (WACC) is NOT correct? Project Data Project life 1 year Initial investment in equipment$8m Depreciation of equipment per year $8m Expected sale price of equipment at end of project 0 Unit sales per year 4m Sale price per unit$10 Variable cost per unit $5 Fixed costs per year, paid at the end of each year$2m Interest expense in first year (at t=1) $0.562m Corporate tax rate 30% Government treasury bond yield 5% Bank loan debt yield 9% Market portfolio return 10% Covariance of levered equity returns with market 0.32 Variance of market portfolio returns 0.16 Firm's and project's debt-to-equity ratio 50% Notes 1. 
Due to the project, current assets will increase by$6m now (t=0) and fall by $6m at the end (t=1). Current liabilities will not be affected. Assumptions • The debt-to-equity ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-equity ratio. • Millions are represented by 'm'. • All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year. • All rates and cash flows are real. The inflation rate is 2% pa. All rates are given as effective annual rates. • The project is undertaken by a firm, not an individual. What is the net present value (NPV) of the project? Government bonds currently have a return of 5% pa. A stock has an expected return of 6% pa and the market return is 7% pa. What is the beta of the stock? Portfolio Details Stock Expected return Standard deviation Correlation Beta Dollars invested A 0.2 0.4 0.12 0.5 40 B 0.3 0.8 1.5 80 What is the beta of the above portfolio? Government bonds currently have a return of 5%. A stock has a beta of 2 and the market return is 7%. What is the expected return of the stock? A firm's weighted average cost of capital before tax ($r_\text{WACC before tax}$) would increase due to: A company has: • 50 million shares outstanding. • The market price of one share is currently$6. • The risk-free rate is 5% and the market return is 10%. • Market analysts believe that the company's ordinary shares have a beta of 2. • The company has 1 million preferred stock which have a face (or par) value of $100 and pay a constant dividend of 10% of par. They currently trade for$80 each. • The company's debentures are publicly traded and their market price is equal to 90% of their face value. • The debentures have a total face value of $60,000,000 and the current yield to maturity of corporate debentures is 10% per annum. The corporate tax rate is 30%. 
What is the company's after-tax weighted average cost of capital (WACC)? Assume a classical tax system. A company has: • 140 million shares outstanding. • The market price of one share is currently$2. • The company's debentures are publicly traded and their market price is equal to 93% of the face value. • The debentures have a total face value of $50,000,000 and the current yield to maturity of corporate debentures is 12% per annum. • The risk-free rate is 8.50% and the market return is 13.7%. • Market analysts estimated that the company's stock has a beta of 0.90. • The corporate tax rate is 30%. What is the company's after-tax weighted average cost of capital (WACC) in a classical tax system? Treasury bonds currently have a return of 5% pa. A stock has a beta of 0.5 and the market return is 10% pa. What is the expected return of the stock? A firm can issue 3 year annual coupon bonds at a yield of 10% pa and a coupon rate of 8% pa. The beta of its levered equity is 2. The market's expected return is 10% pa and 3 year government bonds yield 6% pa with a coupon rate of 4% pa. The market value of equity is$1 million and the market value of debt is $1 million. The corporate tax rate is 30%. What is the firm's after-tax WACC? Assume a classical tax system. According to the theory of the Capital Asset Pricing Model (CAPM), total variance can be broken into two components, systematic variance and idiosyncratic variance. Which of the following events would be considered the most diversifiable according to the theory of the CAPM? Which statement(s) are correct? (i) All stocks that plot on the Security Market Line (SML) are fairly priced. (ii) All stocks that plot above the Security Market Line (SML) are overpriced. (iii) All fairly priced stocks that plot on the Capital Market Line (CML) have zero idiosyncratic risk. Select the most correct response: A firm changes its capital structure by issuing a large amount of debt and using the funds to repurchase shares. 
Its assets are unchanged. Ignore interest tax shields. According to the Capital Asset Pricing Model (CAPM), which statement is correct? Assume that there exists a perfect world with no transaction costs, no asymmetric information, no taxes, no agency costs, equal borrowing rates for corporations and individual investors, the ability to short the risk free asset, semi-strong form efficient markets, the CAPM holds, investors are rational and risk-averse and there are no other market frictions. For a firm operating in this perfect world, which statement(s) are correct? (i) When a firm changes its capital structure and/or payout policy, share holders' wealth is unaffected. (ii) When the idiosyncratic risk of a firm's assets increases, share holders do not expect higher returns. (iii) When the systematic risk of a firm's assets increases, share holders do not expect higher returns. Select the most correct response: A fairly priced stock has an expected return of 15% pa. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. What is the beta of the stock? A fairly priced stock has a beta that is the same as the market portfolio's beta. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. What is the expected return of the stock? A stock has a beta of 0.5. Its next dividend is expected to be $3, paid one year from now. Dividends are expected to be paid annually and grow by 2% pa forever. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. All returns are effective annual rates. What is the price of the stock now? The security market line (SML) shows the relationship between beta and expected return. Investment projects that plot on the SML would have: Examine the following graph which shows stocks' betas $(\beta)$ and expected returns $(\mu)$: Assume that the CAPM holds and that future expectations of stocks' returns and betas are correctly measured. Which statement is NOT correct?
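Several of the CAPM questions above are one-line rearrangements of the security market line, and the dividend question chains the SML into the Gordon growth model. A sketch with the figures from the questions above (function names are mine):

```python
def capm_return(rf, beta, rm):
    """Security market line: expected return for a given beta."""
    return rf + beta * (rm - rf)

def capm_beta(rf, r, rm):
    """SML rearranged: beta implied by a fairly priced expected return."""
    return (r - rf) / (rm - rf)

# "Expected return 15% pa, treasuries 5% pa, market 10% pa" -> beta of 2.
beta = capm_beta(0.05, 0.15, 0.10)

# "Beta 0.5, next dividend $3 at t=1, growing 2% pa forever":
r = capm_return(0.05, 0.5, 0.10)    # required return = 7.5% pa
p0 = 3 / (r - 0.02)                 # Gordon growth model -> ~$54.55
```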
The total return of any asset can be broken down in different ways. One possible way is to use the dividend discount model (or Gordon growth model): $$p_0 = \frac{c_1}{r_\text{total}-r_\text{capital}}$$ Which, since $c_1/p_0$ is the income return ($r_\text{income}$), can be expressed as: $$r_\text{total}=r_\text{income}+r_\text{capital}$$ So the total return of an asset is the income component plus the capital or price growth component. Another way to break up total return is to use the Capital Asset Pricing Model: $$r_\text{total}=r_\text{f}+β(r_\text{m}- r_\text{f})$$ $$r_\text{total}=r_\text{time value}+r_\text{risk premium}$$ So the risk free rate is the time value of money and the term $β(r_\text{m}- r_\text{f})$ is the compensation for taking on systematic risk. Using the above theory and your general knowledge, which of the below equations, if any, are correct? (I) $r_\text{income}=r_\text{time value}$ (II) $r_\text{income}=r_\text{risk premium}$ (III) $r_\text{capital}=r_\text{time value}$ (IV) $r_\text{capital}=r_\text{risk premium}$ (V) $r_\text{income}+r_\text{capital}=r_\text{time value}+r_\text{risk premium}$ Which of the equations are correct? You just bought a house worth $1,000,000. You financed it with an$800,000 mortgage loan and a deposit of $200,000. You estimate that: • The house has a beta of 1; • The mortgage loan has a beta of 0.2. What is the beta of the equity (the$200,000 deposit) that you have in your house? Also, if the risk free rate is 5% pa and the market portfolio's return is 10% pa, what is the expected return on equity in your house? Ignore taxes, assume that all cash flows (interest payments and rent) were paid and received at the end of the year, and all rates are effective annual rates. 
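The house question above is an un-levering exercise: the asset (house) beta is the value-weighted average of the debt and equity betas, so the equity beta can be backed out and fed through the SML. A sketch using the question's figures:

```python
# Asset beta is the value-weighted average of debt and equity betas:
#   beta_house = (E/V)*beta_equity + (D/V)*beta_debt  ->  solve for beta_equity.
V, D, E = 1_000_000, 800_000, 200_000
beta_house, beta_debt = 1.0, 0.2

beta_equity = (beta_house * V - beta_debt * D) / E      # = 4.2

rf, rm = 0.05, 0.10
r_equity = rf + beta_equity * (rm - rf)                 # = 0.26, i.e. 26% pa
```

The high leverage (80% debt) is what magnifies a beta of 1 on the house into 4.2 on the deposit.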
Project Data
• Project life: 1 year
• Initial investment in equipment: $6m
• Depreciation of equipment per year: $6m
• Expected sale price of equipment at end of project: 0
• Unit sales per year: 9m
• Sale price per unit: $8
• Variable cost per unit: $6
• Fixed costs per year, paid at the end of each year: $1m
• Interest expense in first year (at t=1): $0.53m
• Tax rate: 30%
• Government treasury bond yield: 5%
• Bank loan debt yield: 6%
• Market portfolio return: 10%
• Covariance of levered equity returns with market: 0.08
• Variance of market portfolio returns: 0.16
• Firm's and project's debt-to-assets ratio: 50%
Notes
1. Due to the project, current assets will increase by $5m now (t=0) and fall by $5m at the end (t=1). Current liabilities will not be affected.
Assumptions
• The debt-to-assets ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-equity ratio.
• Millions are represented by 'm'.
• All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
• All rates and cash flows are real. The inflation rate is 2% pa.
• All rates are given as effective annual rates.
• The 50% capital gains tax discount is not available since the project is undertaken by a firm, not an individual.
What is the net present value (NPV) of the project? A stock's required total return will decrease when its: A stock's total standard deviation of returns is 20% pa. The market portfolio's total standard deviation of returns is 15% pa. The beta of the stock is 0.8. What is the stock's diversifiable standard deviation? Your friend claims that by reading 'The Economist' magazine's economic news articles, she can identify shares that will have positive abnormal expected returns over the next 2 years. Assuming that her claim is true, which statement(s) are correct? (i) Weak form market efficiency is broken. (ii) Semi-strong form market efficiency is broken.
(iii) Strong form market efficiency is broken. (iv) The asset pricing model used to measure the abnormal returns (such as the CAPM) is either wrong (mis-specification error) or is measured using the wrong inputs (data errors) so the returns may not be abnormal but rather fair for the level of risk. Select the most correct response: Fundamentalists who analyse company financial reports and news announcements (but who don't have inside information) will make positive abnormal returns if: The efficient markets hypothesis (EMH) and no-arbitrage pricing theory are most closely related to which of the following concepts? Economic statistics released this morning were a surprise: they show a strong chance of consumer price inflation (CPI) reaching 5% pa over the next 2 years. This is much higher than the previous forecast of 3% pa. A vanilla fixed-coupon 2-year risk-free government bond was issued at par this morning, just before the economic news was released. What is the expected change in bond price after the economic news this morning, and in the next 2 years? Assume that: • Inflation remains at 5% over the next 2 years. • Investors demand a constant real bond yield. • The bond price falls by the (after-tax) value of the coupon the night before the ex-coupon date, as in real life. A man inherits $500,000 worth of shares. He believes that by learning the secrets of trading, keeping up with the financial news and doing complex trend analysis with charts that he can quit his job and become a self-employed day trader in the equities markets. What is the expected gain from doing this over the first year? Measure the net gain in wealth received at the end of this first year due to the decision to become a day trader. Assume the following: • He earns$60,000 pa in his current job, paid in a lump sum at the end of each year. • He enjoys examining share price graphs and day trading just as much as he enjoys his current job. 
• Stock markets are weak form and semi-strong form efficient. • He has no inside information. • He makes 1 trade every day and there are 250 trading days in the year. Trading costs are $20 per trade. His broker invoices him for the trading costs at the end of the year. • The shares that he currently owns and the shares that he intends to trade have the same level of systematic risk as the market portfolio. • The market portfolio's expected return is 10% pa. Measure the net gain over the first year as an expected wealth increase at the end of the year. A company advertises an investment costing$1,000 which they say is underpriced. They say that it has an expected total return of 15% pa, but a required return of only 10% pa. Assume that there are no dividend payments so the entire 15% total return is all capital return. Assuming that the company's statements are correct, what is the NPV of buying the investment if the 15% return lasts for the next 100 years (t=0 to 100), then reverts to 10% pa after that time? Also, what is the NPV of the investment if the 15% return lasts forever? In both cases, assume that the required return of 10% remains constant. All returns are given as effective annual rates. The answer choices below are given in the same order (15% for 100 years, and 15% forever): A company selling charting and technical analysis software claims that independent academic studies have shown that its software makes significantly positive abnormal returns. Assuming the claim is true, which statement(s) are correct? (I) Weak form market efficiency is broken. (II) Semi-strong form market efficiency is broken. (III) Strong form market efficiency is broken. (IV) The asset pricing model used to measure the abnormal returns (such as the CAPM) had mis-specification error so the returns may not be abnormal but rather fair for the level of risk. 
Select the most correct response: A person is thinking about borrowing $100 from the bank at 7% pa and investing it in shares with an expected return of 10% pa. One year later the person will sell the shares and pay back the loan in full. Both the loan and the shares are fairly priced. What is the Net Present Value (NPV) of this one year investment? Note that you are asked to find the present value ($V_0$), not the value in one year ($V_1$). Select the most correct statement from the following. 'Chartists', also known as 'technical traders', believe that: Question 668 buy and hold, market efficiency, idiom A quote from the famous investor Warren Buffet: "Much success can be attributed to inactivity. Most investors cannot resist the temptation to constantly buy and sell." Buffet is referring to the buy-and-hold strategy which is to buy and never sell shares. Which of the following is a disadvantage of a buy-and-hold strategy? Assume that share markets are semi-strong form efficient. Which of the following is NOT an advantage of the strict buy-and-hold strategy? A disadvantage of the buy-and-hold strategy is that it reduces: A managed fund charges fees based on the amount of money that you keep with them. The fee is 2% of the start-of-year amount, but it is paid at the end of every year. This fee is charged regardless of whether the fund makes gains or losses on your money. The fund offers to invest your money in shares which have an expected return of 10% pa before fees. You are thinking of investing$100,000 in the fund and keeping it there for 40 years when you plan to retire. What is the Net Present Value (NPV) of investing your money in the fund? Note that the question is not asking how much money you will have in 40 years, it is asking: what is the NPV of investing in the fund? Assume that: • The fund has no private information. • Markets are weak and semi-strong form efficient. • The fund's transaction costs are negligible. 
• The cost and trouble of investing your money in shares by yourself, without the managed fund, is negligible. A residential real estate investor believes that house prices will grow at a rate of 5% pa and that rents will grow by 2% pa forever. All rates are given as nominal effective annual returns. Assume that: • His forecast is true. • Real estate is and always will be fairly priced and the capital asset pricing model (CAPM) is true. • Ignore all costs such as taxes, agent fees, maintenance and so on. • All rental income cash flow is paid out to the owner, so there is no re-investment and therefore no additions or improvements made to the property. • The non-monetary benefits of owning real estate and renting remain constant. Which one of the following statements is NOT correct? Over time: A managed fund charges fees based on the amount of money that you keep with them. The fee is 2% of the end-of-year amount, paid at the end of every year. This fee is charged regardless of whether the fund makes gains or losses on your money. The fund offers to invest your money in shares which have an expected return of 10% pa before fees. You are thinking of investing \$100,000 in the fund and keeping it there for 40 years when you plan to retire. How much money do you expect to have in the fund in 40 years? Also, what is the future value of the fees that the fund expects to earn from you? Give both amounts as future values in 40 years. Assume that: • The fund has no private information. • Markets are weak and semi-strong form efficient. • The fund's transaction costs are negligible. • The cost and trouble of investing your money in shares by yourself, without the managed fund, is negligible. • The fund invests its fees in the same companies as it invests your funds in, but with no fees. The below answer choices list your expected wealth in 40 years and then the fund's expected wealth in 40 years.
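The second managed-fund question above (a 2% fee on the end-of-year amount) can be simulated directly. Since the question assumes the fund reinvests its fees in the same shares with no fees, investor wealth plus fund fee wealth must together equal what $100,000 would grow to at 10% pa with no fees at all, which gives a useful sanity check. A sketch:

```python
# Simulate 40 years: shares return 10% pa, then a 2% fee on the end-of-year amount.
investor, fund = 100_000.0, 0.0
for _ in range(40):
    investor *= 1.10             # 10% pa return before fees
    fee = 0.02 * investor        # 2% of the end-of-year amount
    investor -= fee
    fund = fund * 1.10 + fee     # fund reinvests past fees at 10% pa, fee-free

no_fee_wealth = 100_000 * 1.10**40   # total pie, ~ $4.53m
# investor ends with ~ $2.02m and the fund with ~ $2.51m:
# over 40 years the fees compound to more than the investor's own terminal wealth.
```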
http://math.stackexchange.com/questions/289993/second-solution-for-search-of-negative-roots
# Second solution for search of negative roots How many $\underline{\text{negative roots}}$ does the equation $x^4-5x^3-4x^2-7x+4=0$ have? My reasoning: I rewrote the equation as: $$x^4-5x^3-4x^2-7x+4=0 \Rightarrow (x^2-2)^2 = 5x^3+7x$$ For any negative $x$, the left-hand side of the equation is never negative, while the right-hand side is always negative. That's impossible, so the answer is that there are no negative roots for the original equation. Does anyone see another way out? - You mean $(x^2 - 2)^2 = 5x^3 + 7x$? –  Deven Ware Jan 29 '13 at 19:41 O yeah, my bad. Thx. –  Sawyier Jan 29 '13 at 19:42 Either way, you are correct, there are no negative roots. –  Deven Ware Jan 29 '13 at 19:43 Are you familiar with Descartes' rule of signs? Regards –  Amzoti Jan 29 '13 at 19:45 For any negative $x$ tested? Can't you see immediately that $(x^2 - 2)^2 \ge 0$ for all real $x$, while $5 x^3 + 7 x < 0$ for all negative $x$? –  Robert Israel Jan 29 '13 at 19:47 ## 1 Answer You might want to say explicitly that $\forall x < 0,\; x\neq -\sqrt{2}$, the left-hand side of the equation is positive (whatever the value of $x$ - save for $x = \pm \sqrt{2}$, in which case the LHS $= 0$), while the right hand side of the equation is negative $\forall x<0$, since $5x^3 + 7x = x(5x^2 + 7)$ takes the sign of $x$. This is impossible for any equation. Hence, there is no solution (i.e., root) $x$ such that $x \lt 0$. Essentially, what you argued :-) Certainly, there is no need to find the actual roots, as you observe.
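The argument above can be checked numerically: the original quartic is exactly $(x^2-2)^2 - (5x^3+7x)$, and for $x<0$ the first term is non-negative while the second is negative, so $p(x)>0$ and no negative root can exist. A quick sketch:

```python
def p(x):
    """The original quartic."""
    return x**4 - 5*x**3 - 4*x**2 - 7*x + 4

def lhs(x):
    """(x^2 - 2)^2: a square, so never negative."""
    return (x**2 - 2)**2

def rhs(x):
    """5x^3 + 7x = x(5x^2 + 7): same sign as x, so negative for x < 0."""
    return 5*x**3 + 7*x

# p(x) == lhs(x) - rhs(x), so p(x) > 0 for every x < 0: no negative roots.
```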
https://math.stackexchange.com/questions/2867360/understanding-repeated-eigen-values
# Understanding repeated eigenvalues I am trying to understand the method of finding eigenvectors in the case of repeated eigenvalues. My question is based on section 3.5.2 of this link. In the first example (Example 3.5.4), $A = \begin{bmatrix}2&0\\0&2\end{bmatrix}$ Here, $\lambda = 2$ (repeated eigenvalue). If $(A - \lambda I)$ is calculated, it is the zero matrix of dimension 2. So, the geometric multiplicity is 2, which means there must be two linearly independent eigenvectors. 1) Am I correct in understanding that these can be any two linearly independent vectors since $(A - \lambda I)$ is a zero matrix? Or is there a reason for picking $v_1 = (1, 0)$ and $v_2 = (0, 1)$ as shown in the link? Now, consider the second example (Example 3.5.5): $A = \begin{bmatrix}5&1\\-4&1\end{bmatrix}$ In this case, $\lambda = 3$ (repeated eigenvalue) and $A - \lambda I = \begin{bmatrix}2&1\\-4&-2\end{bmatrix}$ Here the geometric multiplicity of $\lambda$ is 1. So, there is only one linearly independent eigenvector. 2) What is the idea behind using $(A - \lambda I)v_2 = v_1$ to find the second vector? 3) Is this technique used only when the geometric multiplicity is less than the algebraic multiplicity? Otherwise, do we just use logic to find all independent eigenvectors as we did in the first example? 4) Since the matrix is not diagonalizable, is the idea to minimize the error, rather than solve the system of equations? I am trying to understand the need to find a second eigenvector in practical situations. • What they are doing is finding the Jordan Normal Form.
The point of that is writing some matrix $J = D + N,$ in this case $D$ would be $3I$ and $N$ would satisfy $N^2 = 0,$ crucial that $DN=ND.$ So $e^J$ and $e^{Jt}$ are fairly easy and concrete – Will Jagy Jul 30 '18 at 20:01 • anyway, as the minimal polynomial is the same as the characteristic, you take any vector $w$ such that $(A-\lambda I)^2 w = 0$ BUT $(A-\lambda I) w \neq 0.$ Then $v = (A-\lambda I) w$ satisfies $(A-\lambda I) v = 0$ and so is a genuine eigenvector – Will Jagy Jul 30 '18 at 20:04 • en.wikipedia.org/wiki/Jordan_normal_form – Will Jagy Jul 30 '18 at 20:06 • I believe you are addressing question 2 here. I haven't quite understood the explanation. But I will get back after some more reading. – skr_robo Jul 30 '18 at 20:11 • alright. Given your interest, I recommend getting a fairly applied linear algebra book, one that emphasizes the real numbers and complexes. I have answered a dozen Jordan form questions on this site, when I get back from grocery shopping I will figure out some that you can read profitably. You can also search for questions on Jordan form yourself. – Will Jagy Jul 30 '18 at 20:18
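The recipe in the comments can be checked concretely for Example 3.5.5's matrix. The particular vectors below are one valid choice among many (any nonzero scalar multiples, or any $w$ with $(A-\lambda I)w = v_1$, work equally well); plain Python is used so the check is self-contained:

```python
A = [[5, 1],
     [-4, 1]]
lam = 3                     # the repeated eigenvalue of A

def matvec(M, v):
    """2x2 matrix times 2-vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

# B = A - lambda*I = [[2, 1], [-4, -2]], which squares to the zero matrix.
B = [[A[0][0] - lam, A[0][1]],
     [A[1][0], A[1][1] - lam]]

v1 = [1, -2]                # genuine eigenvector: B v1 = 0
w  = [0, 1]                 # generalized eigenvector: B w = v1, so B^2 w = 0
```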
https://cs.stackexchange.com/questions/111153/is-halts-if-valid-decideable
Is halts-if-valid decidable? I have a suspicion that Turing's famous proof that the halting problem is undecidable may not prove exactly what people assume that it proves. It may only prove that it is possible to limit the interface of a program such that it can't give you the most useful information about its inputs. This counterargument is motivated by the idea that the human brain may be Turing-equivalent, if we're allowed to ignore issues of unlimited storage and execution time. It's also motivated by the observation that if Turing himself was Turing-equivalent he had a lot more to say about the question of halting than he would have if he were constrained by a boolean return interface. If we have a program g(), and Turing is attempting to decide whether g() halts or not, and if we don't limit Turing's output vocabulary to a single boolean, and g() calls Turing on itself in a sneaky attempt to generate a paradox, Turing's output might be: "I see what you're doing, g(). I pwn you." Or more likely he might output a peer-reviewed journal article and (nowadays) a TED talk. To return to the land of artificially constrained vocabulary, a more difficult problem than the original halting problem might be to add just one more possible outcome: Given a program g(), and a program halts_if_valid() that inspects g() and g()'s inputs and can return one of HALTS, DOESNT, or INVALID, can we prove that halts_if_valid() isn't decidable? I think a proof that relies on something like the original proof doesn't offer much.

// If halts_if_valid() returns INVALID, it can possibly exist as a valid program.
g() {
    if (halts_if_valid(g) == HALTS)
        loop_forever();
}

My limited ability with formal proofs doesn't offer me an immediately useful approach to tackling the decidability of halts_if_valid. So my questions are: Has anybody done any research on the decidability of the halts-if-valid problem or something similar?
And is there anything in the literature that addresses the idea that Turing's proof may be more about limited interface vocabulary than the essence of the decidability of the halting problem? I assume I'm not the first one to ask these questions; they seem like obvious questions to ask. However, I haven't found anything exactly like them by googling, so I'm probably not googling the right terms. And, to be clear, my assumption is that halts-if-valid is still undecidable. Certainly in the general case (where we have infinite storage and execution time), but also in many or all practically interesting finite cases. For instance, the case of finite programs that can't modify their own code. Or even the slightly more constrained set of finite programs that can't modify their own code and aren't interpreters. (That would be a fun set to formally specify.) It would be a highly useful program if it could exist, and therefore it probably can't. I'm just unhappy with a proof that hinges entirely on an overconstrained vocabulary.

• What does "invalid" mean? – orlp Jun 25 at 16:34
• In this case, anything other than HALTS or DOESNT. Yes, I know, I'm not being formal, but hopefully that doesn't obscure the question. It definitely includes: the program has a logical paradox. – a10101010 Jun 25 at 16:49
• Valid, on the other hand, would mean either HALTS or DOESNT, with the implication that it can run. But presented with g(), halts_if_valid would wisely refuse to try to run it. – a10101010 Jun 25 at 16:58
• "I have a suspicion that Turing's famous proof that the halting problem is undecidable may not prove exactly what people assume that it proves." -- If you start a post with "I think decades of TCS experts are wrong", be very careful about having read and understood as much as possible before you hit the "ask" button. (Also, who are "people"?)
– Raphael Jun 25 at 17:01
• It's easy to tell that you didn't do that because a) there's no maths in your post (any refutation of a mathematical argument will have to be mathematical) and b) you seem to have completely missed that in the mathematical world of computability, there are no invalid inputs. And with good reason: a validity check can always be folded into the (partial) function being computed; it doesn't add anything interesting to the model. – Raphael Jun 25 at 17:03

To be precise, that's a semantic property, i.e. any property of programs determined by their behavior and independent of exactly how the program is expressed: the property has to have the same value for any two programs that have the same behavior. For example, "this program halts" is a semantic property, but "this program begins with p" is not a semantic property, so Rice's theorem wouldn't apply to it. "Non-trivial" excludes properties that are true for every single program, or false for every single program. It doesn't matter what set of answers you use for any variant of the halting problem: as long as the number of possible answers is finite (e.g. YES/NO/INVALID), you can consider that as three properties ($$P \in \text{YES}$$, $$P \in \text{NO}$$, $$P \in \text{INVALID}$$), and it is undecidable to determine which one of these holds for all possible $$P$$.
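The diagonalization in the question can be made concrete. Below is a hedged Python sketch (the `halts_if_valid` oracle passed in is hypothetical; no correct total implementation can exist, which is exactly what the sketch illustrates): whatever the oracle answers on `g`, including INVALID, the answer is wrong, because `g` is a perfectly well-formed program.

```python
# Hypothetical three-valued oracle: returns "HALTS", "DOESNT", or "INVALID".
# The point is that no total implementation can answer correctly on g.

def make_g(halts_if_valid):
    """Build the adversarial program g from any claimed oracle."""
    def g():
        answer = halts_if_valid(g)
        if answer == "HALTS":
            while True:   # oracle said HALTS, so g loops forever: oracle wrong
                pass
        elif answer == "DOESNT":
            return        # oracle said DOESNT, but g halts: oracle wrong
        else:
            return        # oracle said INVALID, but g is a syntactically
                          # valid program that runs and halts: oracle wrong
    return g

# An oracle that always pleads "INVALID" is refuted the same way as the
# boolean one: g runs and returns normally, so it is a valid, halting program.
g = make_g(lambda prog: "INVALID")
g()
```

The same refutation applies to any finite answer set, which is the content of the Rice's-theorem argument in the answer above.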
https://www.physicsforums.com/threads/effect-of-thermal-expansion-on-restraints.343121/
# Effect of thermal expansion on restraints

1. Oct 5, 2009

### Ry122

When a beam undergoes thermal expansion and the restraints that support the beam are fixed so that no movement is allowed, how do you calculate the reaction forces within the restraints? The restraints are two walls, and before expansion occurs no horizontal reaction forces are present. Since the amount by which the beam is deflected would be equal to the amount by which the beam thermally expands, could it be determined with the deflection formula $\delta = PL/AE$?

2. Oct 5, 2009

### minger

Hint: think superposition. Assume that there is initially only one wall, and then a force that pushes the beam back "into place".

3. Oct 5, 2009

### Ry122

OK, but do I use the formula I mentioned above to determine the force that does that?

4. Oct 6, 2009

### minger

Yes, assume that the loading happens in two steps, using the principle of superposition. In layman's terms, it means that you can take the effect of each load component and add them up to get the sum. Since your bar is fully constrained, the sum effect is no displacement. So, use your formula to find the displacement that a certain delta T would cause. Then, using stress/strain equations, determine the force and stress required to "push" the beam back into place. That will be your second loading.

5. Oct 7, 2009

### Ry122

$\delta = PL/AE$ doesn't have anything to do with delta T though. Do you mean $\Delta T \times \alpha \times L = \text{displacement}$? So I don't use $\delta = PL/AE$ at all? For the stress-strain equation would I just use $y = mx$, where $m$ is Young's modulus, $x$ is strain, and $y$ is stress? How would the method for doing this change if the first 1/3 of the beam was made of a material with E = 100,000 and the next 2/3 was made of a material with E = 200,000?

Last edited: Oct 7, 2009

6. Oct 7, 2009

### minger

I didn't check your equations; I was simply saying that the thermal expansion causes a change in length.
$$\Delta x = l \alpha \Delta T$$
The constraints will then exert a force back on the beam with a magnitude that negates the expansion. I'm not going to do it for you, but your PL/AE equation may come in handy.

7. Oct 7, 2009

### Ry122

Would the method for doing this change if the first 1/3 was made of brass with a different cross-sectional area from the 2/3 of the beam which is made of steel?

8. Oct 8, 2009

### minger

The method is going to stay the same, but the implementation would be a little trickier.
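Numerically, the two-step superposition described in the thread collapses to two formulas: the free expansion δ = αLΔT, then the axial force P = AEδ/L needed to push the bar back, so P = AEαΔT. A sketch with invented steel-like values (α, E, A, L, and ΔT are illustrative assumptions, not numbers from the thread):

```python
def restrained_thermal_force(alpha, L, dT, A, E):
    """Reaction force in a fully restrained bar under uniform heating.

    Step 1 (free expansion): delta = alpha * L * dT
    Step 2 (push back into place): P = A * E * delta / L
    Note that L cancels, so P = A * E * alpha * dT.
    """
    delta = alpha * L * dT    # unrestrained elongation, m
    P = A * E * delta / L     # axial reaction force, N (compressive)
    return delta, P

# Illustrative values: alpha = 12e-6 /K, E = 200 GPa, A = 1e-3 m^2,
# L = 2 m, heated by 50 K.
delta, P = restrained_thermal_force(12e-6, 2.0, 50.0, 1e-3, 200e9)
print(delta, P)   # 1.2 mm of blocked expansion, 120 kN reaction
```

Because L cancels, the reaction force depends only on the material and the temperature rise, which is why the walls can be loaded heavily by even a short bar.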
https://www.physicsforums.com/threads/help-me-with-this-question.9884/
# Help me with this question

1. Nov 29, 2003

### Aiasha

Guys, I'm taking physics; it's a take-home test and I don't get physics at all! Please help me with this; I need to show work as well. An elevator weighing 25,000 N is supported by a steel cable. What is the tension in the cable when the elevator is being accelerated upward at the rate of 3.0 m/s²? (g = 9.8 m/s²) Please help!

2. Nov 29, 2003

### PiRsq

TIP: The tension in the cable must be greater than the gravity pulling down. Think about that. Also remember that if this is an isolated system where only tension and gravity are acting on the elevator, the net force is equal to the gravity plus the tension, added as vectors. Is there a way you can find the mass? Once you find the mass, might there also be a way to calculate the net force using the mass and the acceleration? Remember, Net force = Gravity + Tension.

3. Nov 30, 2003

### himanshu121

Remember $$\sum \overline{F} = m\overline{a}$$; directions are important.
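Following the hints above, the arithmetic can be checked like this (taking up as positive, getting the mass from the given weight, then applying Newton's second law to the elevator):

```python
def cable_tension(weight, a, g=9.8):
    """Tension in the cable of an elevator accelerating upward.

    With up positive: T - W = m * a, and m = W / g,
    so T = W + m * a = W * (1 + a / g).
    """
    m = weight / g           # mass from the weight, kg
    return weight + m * a    # tension, N

T = cable_tension(25000.0, 3.0)
print(round(T))   # greater than the 25,000 N weight, as the tip predicts
```

The tension exceeds the weight by m·a, which matches the hint that the cable must pull harder than gravity to accelerate the elevator upward.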
https://www.lesswrong.com/revisions/tag/conjunction-fallacy
# Conjunction Fallacy

The conjunction fallacy consists in assuming that specific conditions are more probable than more general ones. For reasons related to the representativeness heuristic, a fleshed-out story that contains a typical amount of detail sounds more plausible than a stripped-down description of a situation that only states a few facts. There is a tendency for people to take that plausibility at face value, and assign probability accordingly. This intuition is wrong, because the conjunction rule of probability theory states that, for any event $A$, its conjunction with additional details $B$ will be less probable:

$$P(A \land B) \leq P(B)$$

Every detail that you add to a story makes the whole story less probable, even if it makes the story sound more plausible. For example, the statement "Bill is an accountant who plays jazz for a hobby" is less probable than the statement "Bill plays jazz for a hobby." Intuitively, you might expect that Bill is more likely to be an accountant who plays jazz than he is to play jazz at all.

The conjunction fallacy suggests that one should be very careful in adding details to any claim: even though each such detail may make the claim sound much more convincing, it also inevitably subtracts from its probability.
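The conjunction rule can be checked mechanically: in any finite probability space, the outcomes satisfying "A and B" are a subset of those satisfying "B" alone, so the probability can never be larger. A small sketch for the Bill example (the head-counts 20 and 3 are invented for illustration, not data):

```python
from fractions import Fraction

# Invented population of 100 people.
plays_jazz = 20           # people who play jazz for a hobby
accountant_and_jazz = 3   # accountants among the jazz players (a subset)

p_jazz = Fraction(plays_jazz, 100)
p_accountant_and_jazz = Fraction(accountant_and_jazz, 100)

# Conjunction rule: P(A and B) <= P(B), because the event {A and B}
# is contained in the event {B}.
assert p_accountant_and_jazz <= p_jazz
print(p_accountant_and_jazz, "<=", p_jazz)
```

No matter how the 100 people are divided, the count for the conjunction can never exceed the count for the broader event, which is the whole content of the rule.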
http://wiki.stat.ucla.edu/socr/index.php?title=Multiple_Linear_Regression_Tutorial&oldid=11289
# Multiple Linear Regression Tutorial

## SOCR_EduMaterials_AnalysesActivities - Multiple Linear Regression Tutorial

Multiple Linear Regression Tutorial Using LA Neighborhoods Data

Data: We will be using the LA Neighborhoods Data for this tutorial.

Goal: Our goal is to predict the median income using multiple explanatory variables by using SOCR. In this example, we will predict median income using age, proportion of homeowners, and proportion of whites in the population.

Step 1: First, we will import the data into the SOCR Simple Regression Analysis Activity. Head to the LA Neighborhoods Data and find the table with the data. Select all of the data, and press Ctrl+C (Command+C on Macs) to copy it.

Step 2: Next, head to http://socr.ucla.edu/htmls/SOCR_Analyses.html, and find the Simple Regression Analysis Activity in the drop-down menu.

Step 3: Now click the "PASTE" button under the drop-down menu. You should now see the data in the window.

Step 4: Click on the "MAPPING" tab. This is where we define our dependent and independent variables. The dependent variable is the one we want to make a prediction on, and the independent variables are the ones which we will use to make the prediction. In this example, we add "Income" to the dependent variables list and "Age", "Homes" and "White" to the independent variables list.

Step 5: Click "CALCULATE". You will now be taken to the "RESULTS" tab. Here you can see the regression equation and R², among others.

Step 6: Click "GRAPH". Here you will see scatterplots of the Income variable against each of the three chosen explanatory variables, as well as the residual plots and the Normal QQ plot.

Step 7: We want to check the assumptions of linear regression and make sure that they are met.

Assumption 1: There is a linear relationship between the independent (age) and dependent variable (income).
• How to check: Make a scatter plot of income and age.
• How to fix: Transformations (for example log(y) vs. x), or conclude the relationship is not linear.
(Figure: the linear model fits the data moderately well.)

Assumption 2: The variance is constant.
• How to check: Look at the plot of residuals vs. predicted values. Make sure there is not a pattern, such as the residuals getting larger as the predicted values increase.
• How to fix: Logging of variables, or fixing underlying independence or linearity causes.
(Figure: slight increase in residuals at the top range of the explanatory variables.)

Assumption 3: Errors are normally distributed.
• How to check: Normal QQ plot (points should lie close to a straight line).
• How to fix: Take out outliers, if applicable; a non-linear transformation may be needed.
(Figure: assumption met.)

Conclusions

No major violation of the linear regression assumptions, so we proceed with our analysis. We can see from the "Results" tab that the regression equation is:

Income = -21139.729 + 1347.656*Age + 49806.135*White + 53726.649*Homes + E

The "E" is the error term. "Income" is the predicted value, and "Homes", "Age", and "White" are the explanatory variables. This model states that for every 100 percent increase in homeowner proportion, with everything else held constant, the median household income will increase by $53,726.65. For every 1-year increase in median age, with everything else held constant, the median household income will increase by $1,347.66. For every 100 percent increase in the proportion of whites in the population, with everything else held constant, the median household income will increase by $49,806.14.
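To sanity-check a prediction by hand, the fitted equation can be evaluated directly. A sketch in Python (the coefficients are the ones SOCR reported above; the neighborhood values plugged in are invented for illustration):

```python
def predict_income(age, white, homes):
    """Median income predicted by the fitted SOCR model:
    Income = -21139.729 + 1347.656*Age + 49806.135*White + 53726.649*Homes

    `white` and `homes` are proportions in [0, 1]; `age` is median age in years.
    """
    return -21139.729 + 1347.656 * age + 49806.135 * white + 53726.649 * homes

# Hypothetical neighborhood: median age 35, 60% white, 50% homeowners.
income = predict_income(35, 0.60, 0.50)
print(round(income, 2))   # about $82,775
```

This also makes the interpretation of the coefficients tangible: raising `homes` from 0.50 to 0.60, for example, adds one tenth of the $53,726.65 homeowner coefficient to the prediction.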
https://gamedev.stackexchange.com/questions/176785/how-to-detect-a-collision-and-collision-impact
How to detect a collision and collision impact?

I would like to calculate the "impact" of a collision, if that's the right word for it. My scenario is: I have a player who is driving his vehicle. He then hits a wall while still in the vehicle. I want to know if the collision happened and calculate the impact of the collision, so later I know how much hp the player should lose depending on how hard the hit was. I hope I made clear what I want to achieve. What I'm currently doing is storing the previous vehicle speed and calculating the impact like this: (previousSpeed - currentSpeed) / frameTime. It gives me some results but it's not consistent. Any other idea? It's in 3D, so I have access to 3D vectors if it helps in any way.

• Check out this answer (4th post from the top): forum.unity.com/threads/… – hatinacat2000 Nov 3 '19 at 0:15
• Well, it says to calculate the mass of the other collider as well as the velocity of the other collider. In my case it's a wall or another non-movable object, so its velocity is 0, but I don't have the mass of the object, and it would probably be stupid to make a table of every object's mass. – porqqq Nov 3 '19 at 18:59

You are talking about momentum. It is represented by a vector, and its dimension is $$M L T^{-1}$$. It is related to mass and speed this way: $$\vec p = m \vec v$$ where $$\vec p$$ and $$\vec v$$ are vectors. Momentum is conserved in isolated systems (no external force applied), which is the key point for your calculations; you'll need to add some mass (e.g., divide your objects (player) into smaller regions, each having a mass). Momentum is related to force as such: $$\vec f\,t = \Delta\vec p$$ where $$t$$ is time and $$\vec f$$ is the force vector.
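In code, the answer's suggestion amounts to tracking the momentum vector across the collision and dividing the momentum lost by the frame time, which yields an average impact force. A hedged sketch (plain tuples stand in for whatever 3D vector type the engine provides; the mass, speed, and frame time are illustrative):

```python
import math

def impact_force(mass, v_before, v_after, dt):
    """Average force of a collision from the change in momentum.

    p = m * v, and f * dt = delta_p, so |f| = m * |v_before - v_after| / dt.
    v_before / v_after are 3D velocity tuples; dt is the frame time in seconds.
    """
    dv = tuple(b - a for b, a in zip(v_before, v_after))
    dp = mass * math.sqrt(sum(c * c for c in dv))   # |delta p|, kg*m/s
    return dp / dt                                  # average force, N

# A 1200 kg vehicle hits a wall at 20 m/s and stops within one 1/60 s frame.
f = impact_force(1200.0, (20.0, 0.0, 0.0), (0.0, 0.0, 0.0), 1 / 60)
print(f)   # the larger this is, the more hp the player should lose
```

Unlike the scalar (previousSpeed - currentSpeed) / frameTime, this uses the full velocity vector, so a glancing blow that mostly preserves speed but changes direction still registers, and scaling by mass makes heavy vehicles hit harder at the same speed.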
https://math.stackexchange.com/questions/2302241/estimating-the-bias-of-a-coin-using-chebyshevs-inequality
# Estimating the Bias of a Coin using Chebyshev's inequality

Assume we have a coin of unknown bias towards heads $p$ and our estimate of the bias is $\hat{p} = \frac{1}{n}S_n$ where $S_n$ is the number of heads observed. To find the expectation $E(\hat{p})$ we can use indicator random variables to define $S_n$: $S_n = X_1 + \dots + X_n$ where $X_i = 1$ if the $i$-th toss is heads and $0$ otherwise. That means that $E(X_i) = 1\times Pr[X_i=1]$. By linearity of expectation, we can compute $E(\hat{p})$:

$E(\hat{p}) = E(\frac{1}{n}S_n) = E(\frac{1}{n}X_1+\dots+\frac{1}{n}X_n) = \frac{1}{n}\sum_{i=1}^nE(X_i) = p$

I understand this up to the $=p$ part. I understand that $p$ should be the number of observed heads over the number of total tosses. How does $\sum_{i=1}^nE(X_i)$ represent the number of observed heads? Sure, it represents $E(S_n)$, or the expected number of observed heads, but that isn't definite, right? Anyway, the book goes on to compute $Var(\hat{p})$:

$Var(\hat{p}) = Var(\frac{1}{n}S_n) = \frac{1}{n^2}Var(S_n) = \frac{1}{n^2}\sum_{i=1}^n Var(X_i) = \frac{\sigma^2}{n}$

This part I understand even less: where did $\frac{1}{n^2}$ come from? How was that pulled out of the original variance? The book then applies Chebyshev's inequality to compute the confidence:

$Pr[|\hat{p} - p| \geq \epsilon] \leq \frac{Var(\hat{p})}{\epsilon^2} = \frac{\sigma^2}{n\epsilon^2}$

I understand how the values are plugged into Chebyshev's inequality and how $Var(\hat{p})$ gets written as $\sigma^2$, but where did the $n$ come from? And just to confirm, the equals sign is only saying that the right-hand side of the inequality is equal to that, right, not the whole expression? The book then takes a step in saying that "to make this less than the desired value $\delta$, we need to set $n \geq \frac{\sigma^2}{\epsilon^2\delta}$."

Perhaps this is because I don't understand the previous steps, but I don't see how $\epsilon$ and $\delta$ are controlling the error and confidence respectively.
EDIT: Thanks to the responses below, my doubts expressed above are cleared. I would like to follow up on how my book solves the problem of estimating the bias of a coin. I understand everything up to the point $n \geq \frac{\sigma^2}{\epsilon^2\delta}$. Here $\sigma^2$ can be written as $Var(X_i)$, and $Var(X_i) = E(X_i^2) - E(X_i)^2$, which comes out to be $p - p^2 = p(1-p)$. So our expression above can be rewritten as $n \geq \frac{p(1-p)}{\epsilon^2\delta}$. My book contains the following extra part: Since $p(1-p)$ takes on its maximum value for $p=\frac{1}{2}$, we can conclude that it is sufficient to choose $n$ such that $n \geq \frac{1}{4\epsilon^2\delta}$. I understand that the last expression comes from substituting $p = \frac{1}{2}$, but I don't understand the claim they make about the maximum value.

1. How does $\sum_{i=1}^nE(X_i)$ represent the number of observed heads?

It doesn't. It represents the expectation of $S_n$. $X_1+\dotsb+X_n$ represents the number of heads, since this sum is equal to $S_n$.

2. It is well-known that $\operatorname{Var}(cX) = c^2 \operatorname{Var}(X)$: $$\operatorname{Var}(cX) = E[(cX-E[cX])^2] = E[(cX-cE[X])^2] = E[c^2(X-E[X])^2] = c^2\operatorname{Var}(X).$$

3. It was shown that $\operatorname{Var}(\hat p ) = \frac{\sigma^2}{n}$. Then $$\frac{\operatorname{Var}(\hat p)}{\epsilon^2} = \frac{\sigma^2/n}{\epsilon^2} = \frac{\sigma^2}{n\epsilon^2}.$$ Yes, the equality is just for this part.

4. The book says if we want this $$\frac{\sigma^2}{n\epsilon^2} \leq \delta,$$ for a given $\delta$, then do this: $$\frac{\sigma^2}{\delta\epsilon^2} \leq n.$$ In plain words, if we want $\operatorname{Var}(\hat p)/\epsilon^2$ to be less than some value $\delta$, then $n$ should satisfy the condition above.

• Thank you. Where was it shown that $Var(\hat{p})=\frac{\sigma^2}{n}$? – Carpetfizz May 30 '17 at 3:19
• It's the second big equation in your post, where you wrote "the book goes on to compute $\operatorname{Var}(\hat{p})$". – Em.
May 30 '17 at 3:21
• Thank you, you answered all the questions I had in my original post. I posted a small edit to follow through with the proof in my book; would you mind taking a look at it? – Carpetfizz May 30 '17 at 4:55
• Ah, $0.5$, thanks for your help! – Carpetfizz May 30 '17 at 5:02
• Sorry, I butchered my previous comment. I meant to say that although $\hat p$ is unlikely to be exactly $\frac{1}{2}$, our intuition tells us that we should expect it to be about half. The LLN says that for a large number of trials, $\hat p$ is "close" to $\frac{1}{2}$, which is $p$. For a biased coin, intuitively, we expect the proportion of heads to lean towards $p$. For example, if $p=0.1$, then we expect a very small number of heads when we flip one million coins, which implies that the proportion of heads will be closer to $0$. Intuitively, we expect it to be $p$. – Em. May 30 '17 at 6:00

You wrote: "I understand that $p$ should be the number of observed heads over the number of total tosses." No, this is not correct. $p$ is the expected fraction of tosses that will be heads. It is not a sample statistic. The same goes for the variance. As for the $n$, it comes from the previous calculation of $Var(\hat{p})$. Note that $Var(aX) = a^2Var(X)$, so you can see how we get $\frac{1}{n^2}$ and how this gets plugged into $Var(\hat{p})$. Finally, yes, the $=$ refers just to the two RHS expressions, not to the full line.

• Do you mind expanding on the definition of $p$ in this context? If it's not a sample statistic, what is it? – Carpetfizz May 30 '17 at 3:10
• @Carpetfizz It's an expected value (a deterministic value calculated from probability theory). – user408433 May 30 '17 at 3:22
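Putting the pieces together, the worst-case sample size is a one-liner. The maximum in the book's "extra part" holds because $p(1-p) = p - p^2$ is a downward-opening parabola with its vertex at $p = \frac{1}{2}$, where it equals $\frac{1}{4}$. A sketch using that bound:

```python
import math

def chebyshev_n(eps, delta):
    """Flips sufficient for Pr[|p_hat - p| >= eps] <= delta, for any p.

    Uses the worst case sigma^2 = p(1-p) <= 1/4, attained at p = 1/2
    (the vertex of the downward parabola p - p^2), so n >= 1/(4 eps^2 delta).
    """
    return math.ceil(1 / (4 * eps * eps * delta))

# Estimate the bias to within eps = 0.05 with confidence 95% (delta = 0.05):
print(chebyshev_n(0.05, 0.05))   # 2000 flips suffice by Chebyshev
```

Here ε controls the allowed error |p̂ - p| and δ the probability of exceeding it, which is exactly the sense in which the two parameters "control error and confidence" in the question above.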
https://openstax.org/books/principles-financial-accounting/pages/15-problem-set-b
PB1. LO 15.3 The partnership of Magda and Sue shares profits and losses in a 50:50 ratio after Magda receives a $7,000 salary and Sue receives a $6,500 salary. Prepare a schedule showing how the profit and loss should be divided, assuming the profit or loss for the year is:

1. $10,000
2. $5,000
3. ($12,000)

In addition, show the resulting entries to each partner's capital account.

PB2. LO 15.4 The partnership of Arun, Margot, and Tammy has been doing well. Arun wants to retire and move to another state for a once-in-a-lifetime opportunity. The partners' capital balances prior to Arun's retirement are $60,000 each. Prepare a schedule showing how Arun's withdrawal should be divided, assuming his buyout is:

1. $70,000
2. $45,000
3. $60,000

In addition, show the resulting entries to the capital accounts of each of the three.

PB3. LO 15.5 Match each of the following descriptions with the appropriate term related to partnership accounting.

Descriptions:
A. Each and every partner can enter into contracts on behalf of the partnership.
B. The business ceases operations.
C. How partners share in income and loss.
D. Adding a new partner by contributing cash.
E. A partner account with a debit balance.

Terms:
i. liquidation
ii. capital deficiency
iii. admission of a new partner
iv. mutual agency
v. income sharing ratio
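The allocation mechanics behind PB1 are: pay the stated salaries first, then split whatever remains, positive or negative, in the 50:50 ratio. A sketch of that arithmetic (the helper function is illustrative, not OpenStax's required schedule format):

```python
def split_profit(profit, salaries, ratios):
    """Allocate partnership profit: salaries first, remainder by ratio.

    Returns each partner's total allocation. A loss simply makes the
    remainder negative, and it is shared in the same ratio.
    """
    remainder = profit - sum(salaries)
    return [s + remainder * r for s, r in zip(salaries, ratios)]

# Magda: $7,000 salary; Sue: $6,500 salary; remainder shared 50:50.
for profit in (10_000, 5_000, -12_000):
    magda, sue = split_profit(profit, [7_000, 6_500], [0.5, 0.5])
    print(profit, "->", magda, sue)
```

Note that the two allocations always sum back to the year's profit or loss, which is the check to apply to any schedule prepared for PB1.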
http://stochastix.wordpress.com/tag/list-processing/
## Posts Tagged ‘List Processing’

### Generating all words over an alphabet

October 1, 2012

An alphabet $\Sigma$ is a finite set of symbols. A word $w$ of length $n$ over an alphabet $\Sigma$ is a sequence of symbols from $\Sigma$, i.e.,

$w : \{0,1,\dots,n-1\} \to \Sigma$

Alternatively, we can view a word $w$ of length $n$ over the alphabet $\Sigma$ as an $n$-tuple (i.e., an ordered list with $n$ elements) over $\Sigma$, i.e., $w \in \Sigma^n$. The set of all finite words over $\Sigma$, including the empty word $\varepsilon$, is denoted by $\Sigma^*$, where $*$ is the Kleene star. We thus encounter the following problem:

Problem: Given an alphabet $\Sigma$, how do we generate all the words over $\Sigma$? In other words, given $\Sigma$, how do we generate the Kleene closure $\Sigma^*$?

The following Haskell script solves this problem:

g :: [a] -> [[a]] -> [[a]]
g alphabet = concat . map (\xs -> [ xs ++ [s] | s <- alphabet])

allwords :: [a] -> [[a]]
allwords alphabet = concat $ iterate (g alphabet) [[]]

where alphabet must be a finite list, otherwise the execution of the script won't ever terminate. The script above, although short, uses a lot of machinery: list comprehension, the higher-order functions map and iterate, concatenation of lists of lists, and partial function application.

Let us test this script. First, we load it into GHCi. Then we can run a GHCi session:

*Main> -- define alphabet
*Main> let alphabet = [0,1]
*Main> -- define function f
*Main> let f = g alphabet
*Main> -- check type
*Main> :t f
f :: [[Integer]] -> [[Integer]]
*Main> -- apply f to several lists of lists
*Main> f [[]]
[[0],[1]]
*Main> (f . f) [[]]
[[0,0],[0,1],[1,0],[1,1]]
*Main> (f . f . f) [[]]
[[0,0,0],[0,0,1],[0,1,0],[0,1,1],[1,0,0],[1,0,1],[1,1,0],[1,1,1]]

So far, so good. Suppose that we would like to find all words over alphabet $\Sigma_2 := \{0,1\}$ whose length is less than or equal to $4$. How many such words are there? There are $|\Sigma_2|^k = 2^k$ words of length $k$.
Hence, there are $\displaystyle\sum_{k=0}^n 2^k = \frac{2^{n+1} - 1}{2-1} = 2^{n+1} - 1$ binary words of length less than or equal to $n$. Hence, if we make $n = 4$, then we obtain $2^{n+1}-1 = 31$. Continuing our GHCi session:

```
*Main> take 31 $ allwords alphabet
[[],[0],[1],
[0,0],[0,1],[1,0],[1,1],
[0,0,0],[0,0,1],[0,1,0],[0,1,1],
[1,0,0],[1,0,1],[1,1,0],[1,1,1],
[0,0,0,0],[0,0,0,1],[0,0,1,0],[0,0,1,1],
[0,1,0,0],[0,1,0,1],[0,1,1,0],[0,1,1,1],
[1,0,0,0],[1,0,0,1],[1,0,1,0],[1,0,1,1],
[1,1,0,0],[1,1,0,1],[1,1,1,0],[1,1,1,1]]
```

Very nice! We could use this in Coding Theory! For example, suppose that we would like to find all binary words of length $8$ whose Hamming weight is equal to $5$. There are $\displaystyle\binom{8}{5} = \frac{8!}{3! 5!} = 56$ such words. We can find them as follows:

```
*Main> -- take all words of length less than or equal to 8
*Main> let words = take 511 $ allwords alphabet
*Main> -- filter out binary words of length 8
*Main> let wordsL8 = filter (\xs -> length xs == 8) words
*Main> length wordsL8
256
*Main> -- filter out binary words of length 8
*Main> -- and also of Hamming weight equal to 5
*Main> let wordsL8H5 = filter (\xs -> sum xs == 5) wordsL8
*Main> length wordsL8H5
56
*Main> wordsL8H5
[[0,0,0,1,1,1,1,1],[0,0,1,0,1,1,1,1],[0,0,1,1,0,1,1,1],
[0,0,1,1,1,0,1,1],[0,0,1,1,1,1,0,1],[0,0,1,1,1,1,1,0],
[0,1,0,0,1,1,1,1],[0,1,0,1,0,1,1,1],[0,1,0,1,1,0,1,1],
[0,1,0,1,1,1,0,1],[0,1,0,1,1,1,1,0],[0,1,1,0,0,1,1,1],
[0,1,1,0,1,0,1,1],[0,1,1,0,1,1,0,1],[0,1,1,0,1,1,1,0],
[0,1,1,1,0,0,1,1],[0,1,1,1,0,1,0,1],[0,1,1,1,0,1,1,0],
[0,1,1,1,1,0,0,1],[0,1,1,1,1,0,1,0],[0,1,1,1,1,1,0,0],
[1,0,0,0,1,1,1,1],[1,0,0,1,0,1,1,1],[1,0,0,1,1,0,1,1],
[1,0,0,1,1,1,0,1],[1,0,0,1,1,1,1,0],[1,0,1,0,0,1,1,1],
[1,0,1,0,1,0,1,1],[1,0,1,0,1,1,0,1],[1,0,1,0,1,1,1,0],
[1,0,1,1,0,0,1,1],[1,0,1,1,0,1,0,1],[1,0,1,1,0,1,1,0],
[1,0,1,1,1,0,0,1],[1,0,1,1,1,0,1,0],[1,0,1,1,1,1,0,0],
[1,1,0,0,0,1,1,1],[1,1,0,0,1,0,1,1],[1,1,0,0,1,1,0,1],
[1,1,0,0,1,1,1,0],[1,1,0,1,0,0,1,1],[1,1,0,1,0,1,0,1],
[1,1,0,1,0,1,1,0],[1,1,0,1,1,0,0,1],[1,1,0,1,1,0,1,0],
[1,1,0,1,1,1,0,0],[1,1,1,0,0,0,1,1],[1,1,1,0,0,1,0,1],
[1,1,1,0,0,1,1,0],[1,1,1,0,1,0,0,1],[1,1,1,0,1,0,1,0],
[1,1,1,0,1,1,0,0],[1,1,1,1,0,0,0,1],[1,1,1,1,0,0,1,0],
[1,1,1,1,0,1,0,0],[1,1,1,1,1,0,0,0]]
```

where we used function take (again) and the higher-order function filter. The two predicates were built using anonymous functions. I am happy with this script. Please let me know in case you are not.

### My implementation of (!!)

September 20, 2012

In Haskell we can easily create a list and then access its elements using the (!!) function, which is defined in the Prelude. Here is a very brief GHCi session:

```
Prelude> let xs = [7,8,9]
Prelude> xs !! 0
7
Prelude> xs !! 1
8
Prelude> xs !! 2
9
```

So far, so good. What if the index is negative, or equals / exceeds the list’s length? Let’s see what happens in those cases:

```
Prelude> xs !! (-1)
*** Exception: Prelude.(!!): negative index
Prelude> xs !! 3
*** Exception: Prelude.(!!): index too large
```

As expected, we get error messages. What if we used the Maybe data type to avoid exceptions? This is exercise 4 in chapter 3 of O’Donnell & Hall & Page [1], which is phrased as follows:

Write (!!), a function that takes a natural number $n$ and a list and selects the $n$th element of the list. List elements are indexed from $0$, not $1$, and since the type of the incoming number does not prevent it from being out of range, the result should be a Maybe type.

The aforementioned authors propose the following implementation:

```haskell
import Prelude hiding ((!!))

(!!) :: Int -> [a] -> Maybe a
(!!) n []     = Nothing
(!!) 0 (x:xs) = Just x
(!!) n (x:xs) = (!!) (n-1) xs
```

where I added the import line to hide the standard (!!) function that is defined in the Prelude. My first thought was that the authors switched the function arguments, which makes the function look silly. Let’s give it a try. Here’s another GHCi session:

```
*Main> let xs = [7,8,9]
*Main> 0 !! xs
Just 7
*Main> 1 !! xs
Just 8
*Main> 2 !! xs
Just 9
*Main> (-1) !! xs
Nothing
*Main> 3 !! xs
Nothing
```

It appears to be working, but specifying the index before the list looks rather ugly. Wait, what if the index is negative? For example, why does (-3) !! xs return Nothing? Let’s use equational reasoning to find out:

```
(-3) !! [7,8,9] = (-4) !! [8,9]
                = (-5) !! [9]
                = (-6) !! []
                = Nothing
```

This reveals a fatal flaw in the authors’ implementation: if the list is infinite, then the recursion will never terminate. For example, (-1) !! [0..] will never terminate, because when the initial index is negative, decrementing the index will never get us to the zero index. Therefore, I propose the following implementation:

```haskell
import Prelude hiding ((!!))

(!!) :: [a] -> Integer -> Maybe a
(!!) []     n = Nothing
(!!) (x:xs) n
  | n > 0     = (!!) xs (n-1)
  | n == 0    = Just x
  | n < 0     = Nothing
```

where the first argument is now a list, and the second argument an integer. Note that I used indices of type Integer (“mathematical integers”) instead of type Int (“computer integers”). Let’s see if this implementation works:

```
*Main> let xs = [7,8,9]
*Main> xs !! 0
Just 7
*Main> xs !! 1
Just 8
*Main> xs !! 2
Just 9
*Main> xs !! (-1)
Nothing
*Main> xs !! 3
Nothing
```

It appears to be working. No errors. No exceptions. If you, dear reader, happen to be acquainted with Haskell you will almost certainly be shocked (!!!), for this function is trivial! Well, that is true, but I allow myself to be intrigued by trivialities. Moreover, this function is simple enough to allow us to use equational reasoning. For example, let’s compute xs !! 2 using equational reasoning:

```
[7,8,9] !! 2 = [8,9] !! 1
             = [9] !! 0
             = Just 9
```

What if the index is too large? Let’s compute xs !! 4 then:

```
[7,8,9] !! 4 = [8,9] !! 3
             = [9] !! 2
             = [] !! 1
             = Nothing
```

Step by step, by successively removing the head of the list, we get where we want to. Unfortunately, this suggests that accessing an arbitrary element of the list will not be $\mathcal{O} (1)$.
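As an aside (my own addition, not from the original post), the same total behaviour can be obtained without explicit recursion, by pairing every element with its index and using the Prelude's lookup; the negative-index guard is what keeps the function terminating on infinite lists:

```haskell
import Prelude hiding ((!!))

-- Safe indexing without explicit recursion: pair every element with its
-- index and look the index up. The guard makes negative indices return
-- Nothing immediately, so the function is total on infinite lists too.
(!!) :: [a] -> Integer -> Maybe a
xs !! n
  | n < 0     = Nothing
  | otherwise = lookup n (zip [0..] xs)
```

Like the recursive version, this still traverses the list from the head, so access remains $\mathcal{O}(n)$.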
__________

References

[1] John O’Donnell, Cordelia Hall, Rex Page, Discrete Mathematics Using a Computer (2nd edition), Springer, 2006.

### Perfect shuffles in Haskell

April 17, 2012

Consider a deck of $2 n$ cards. There exists a bijection from the set of cards to the finite set $\{1, 2, \dots, 2 n\}$, which allows us to represent the deck using a ($2 n$)-tuple, as follows

$(1, 2, \dots, n, n+1, n+2, \dots, 2 n)$

where the first element represents the card at the top of the deck, and the ($2 n$)-th element represents the card at the bottom of the deck. A perfect shuffle consists of cutting the deck exactly in half and then interlacing / interleaving the two halves perfectly. The in shuffle moves the original top card to the second position, whereas the out shuffle leaves the original top card on top [1]. After a perfect in shuffle the deck will be

$(n+1, 1, n+2, 2, \dots, 2 n, n)$

whereas after a perfect out shuffle the deck will be

$(1, n+1, 2, n+2, \dots, n, 2 n)$

Is it possible to restore the deck to its original order after a certain number of perfect shuffles? Amazingly (or perhaps not), it is indeed possible!

__________

Example 1

Suppose we are given a deck of $8$ cards (i.e., $n = 4$). [Figure: a perfect out shuffle of an 8-card deck; image omitted.] Using pencil and paper, it is easy to see that after three perfect out shuffles the deck will be restored to its original order.
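Instead of pencil and paper, we can also predict this number. It is known that a perfect out shuffle of $2n$ cards sends the card at position $i$ (counting from $0$) to position $2i \bmod (2n-1)$, with the last card fixed, so the number of out shuffles needed to restore the deck is the multiplicative order of $2$ modulo $2n-1$. Here is a small sketch of mine (not part of the original post) computing that order:

```haskell
-- Number of perfect out shuffles needed to restore a deck of 2n cards:
-- the multiplicative order of 2 modulo (2n - 1).
outShufflesToRestore :: Integer -> Integer
outShufflesToRestore n = head [ k | k <- [1..], 2^k `mod` (2*n - 1) == 1 ]
```

For n = 4 (an 8-card deck) this yields 3, and for n = 26 (the standard 52-card deck) it yields 8, matching the experiments in this post.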
Here is a Haskell script:

```haskell
-- create Card data type
data Card = Ace | Two | Three | Four | Five | Six | Seven | Eight
            deriving (Show, Eq)

-- create type synonym
type Deck = [Card]

-- interlace two decks of the same size
interlace :: Deck -> Deck -> Deck
interlace [] [] = []
interlace (c1:d1) (c2:d2) = [c1,c2] ++ interlace d1 d2

-- perfect out shuffle
outShuffle :: Deck -> Deck
outShuffle deck = interlace deck1 deck2
                  where deck1 = fst (splitAt 4 deck)
                        deck2 = snd (splitAt 4 deck)

-- perfect in shuffle
inShuffle :: Deck -> Deck
inShuffle deck = interlace deck2 deck1
                 where deck1 = fst (splitAt 4 deck)
                       deck2 = snd (splitAt 4 deck)
```

where I used function splitAt to cut the deck into two halves of equal size. Since I am not acquainted with any standard function that performs interleaving, I created my own. We load this script into GHCi. Here is a brief GHCi session:

```
*Main> -- create deck
*Main> let deck = [Ace,Two,Three,Four,Five,Six,Seven,Eight]
*Main> -- perform 3 out shuffles
*Main> take 4 (iterate outShuffle deck)
[[Ace,Two,Three,Four,Five,Six,Seven,Eight],
[Ace,Five,Two,Six,Three,Seven,Four,Eight],
[Ace,Three,Five,Seven,Two,Four,Six,Eight],
[Ace,Two,Three,Four,Five,Six,Seven,Eight]]
```

Indeed, after three perfect out shuffles, we do obtain a deck in the original order! One is tempted to wonder what the order of the deck will be after three perfect in shuffles. Let us see:

```
*Main> -- create deck
*Main> let deck = [Ace,Two,Three,Four,Five,Six,Seven,Eight]
*Main> -- perform 3 in shuffles
*Main> take 4 (iterate inShuffle deck)
[[Ace,Two,Three,Four,Five,Six,Seven,Eight],
[Five,Ace,Six,Two,Seven,Three,Eight,Four],
[Seven,Five,Three,Ace,Eight,Six,Four,Two],
[Eight,Seven,Six,Five,Four,Three,Two,Ace]]
```

After three perfect in shuffles we do not obtain the original deck. Instead, we obtain a reversal of the original deck.

__________

Example 2

Let us now work with the standard deck of $52$ cards.
People say that eight consecutive perfect out shuffles will restore the deck to its original order. Let us check whether this is true or not. Here is a new Haskell script:

```haskell
-- create Value data type
data Value = Ace | Two | Three | Four | Five | Six | Seven
           | Eight | Nine | Ten | Jack | Queen | King
             deriving (Show, Eq)

-- create Suit data type
data Suit = Club | Diamond | Heart | Spade
            deriving (Show, Eq)

-- create type synonyms
type Card = (Value,Suit)
type Deck = [Card]

-- create list of card values
values :: [Value]
values = [Ace,Two,Three,Four,Five,Six,Seven,
          Eight,Nine,Ten,Jack,Queen,King]

-- create list of card suits
suits :: [Suit]
suits = [Club,Diamond,Heart,Spade]

-- create deck
deck :: Deck
deck = [(v,s) | v <- values, s <- suits]

-- interlace two decks of the same size
interlace :: Deck -> Deck -> Deck
interlace [] [] = []
interlace (c1:d1) (c2:d2) = [c1,c2] ++ interlace d1 d2

-- perfect out shuffle
outShuffle :: Deck -> Deck
outShuffle deck = interlace deck1 deck2
                  where n     = div (length deck) 2
                        deck1 = fst (splitAt n deck)
                        deck2 = snd (splitAt n deck)

-- perfect in shuffle
inShuffle :: Deck -> Deck
inShuffle deck = interlace deck2 deck1
                 where n     = div (length deck) 2
                       deck1 = fst (splitAt n deck)
                       deck2 = snd (splitAt n deck)
```

We load this script into GHCi.
Here is a GHCi session:

```
*Main> -- print deck
*Main> deck
[(Ace,Club),(Ace,Diamond),(Ace,Heart),(Ace,Spade),
(Two,Club),(Two,Diamond),(Two,Heart),(Two,Spade),
(Three,Club),(Three,Diamond),(Three,Heart),(Three,Spade),
(Four,Club),(Four,Diamond),(Four,Heart),(Four,Spade),
(Five,Club),(Five,Diamond),(Five,Heart),(Five,Spade),
(Six,Club),(Six,Diamond),(Six,Heart),(Six,Spade),
(Seven,Club),(Seven,Diamond),(Seven,Heart),(Seven,Spade),
(Eight,Club),(Eight,Diamond),(Eight,Heart),(Eight,Spade),
(Nine,Club),(Nine,Diamond),(Nine,Heart),(Nine,Spade),
(Ten,Club),(Ten,Diamond),(Ten,Heart),(Ten,Spade),
(Jack,Club),(Jack,Diamond),(Jack,Heart),(Jack,Spade),
(Queen,Club),(Queen,Diamond),(Queen,Heart),(Queen,Spade),
(King,Club),(King,Diamond),(King,Heart),(King,Spade)]
*Main> -- perform 8 out shuffles
*Main> let deck8 = (iterate outShuffle deck) !! 8
*Main> -- is the new deck the same as the original one?
*Main> deck8 == deck
True
```

Indeed, it is true! Eight perfect out shuffles suffice!

__________

References

[1] Persi Diaconis, R. L. Graham, William M. Kantor, The Mathematics of Perfect Shuffles, Advances in Applied Mathematics, Volume 4, Issue 2, June 1983.

### Feedback systems in Haskell

January 30, 2012

We recently studied the first-order causal LTI system which is described by the difference equation $y (n) - \alpha y (n-1) = u (n)$, where $|\alpha| < 1$. The system can be represented by a block diagram [image omitted]. Observing the block diagram, we conclude that there are three basic operations: addition, multiplication by a constant coefficient, and delay. We arrive at the same conclusion if we rewrite the difference equation in the form $y (n) = u (n) + \alpha y (n-1)$. Do note that the output is obtained by adding the input to a scaled and delayed version of the output and, therefore, we have a feedback system. The system’s input-output relationship can be written as follows

$y = u + \alpha\,\mathcal{D} (y)$

where $\mathcal{D}$ is the unit-delay operator.
Note that we have signals rather than signal samples on both sides of the equation. To clarify, when I say “sample”, I mean the value of the signal at some (discrete) time instant. Let us introduce the linear operator $\mathcal{H}$ such that the output signal can be written as a function of the input, $y = \mathcal{H} (u)$, assuming a zero initial condition. Hence, we obtain

$\mathcal{H} (u) = u + \alpha\,\mathcal{D} (\mathcal{H} (u))$.

It would be convenient to introduce also a gain operator $\mathcal{G}_{\alpha}$ to carry out the multiplication by a constant coefficient, i.e., $\mathcal{G}_{\alpha} (x) = \alpha \, x$. Composing the operators, we finally obtain the following equation

$\mathcal{H} (u) = u + (\mathcal{G}_{\alpha} \circ \mathcal{D} \circ \mathcal{H}) (u)$

which we will now implement in Haskell.

__________

Implementation in Haskell

Our first implementation of the LTI system under study relied on state-space models and the scanl trick to propagate the initial state forwards in time. Our second implementation was little more than a beautified version of the first one. This third implementation will be radically different from the previous two.

Let us start with the following type synonyms:

```haskell
type Signal a = [a]
type System a b = Signal a -> Signal b
```

which hopefully will make the code more readable. We build a function that takes two discrete-time signals (of the same type) and returns their elementwise addition:

```haskell
(.+) :: Num a => Signal a -> Signal a -> Signal a
(.+) = zipWith (+)
```

Since we represent discrete-time signals as lists, the function above merely adds two lists elementwise. Let us test this function:

```
*Main> -- create test input signals
*Main> let us = repeat 1.0 :: Signal Float
*Main> let vs = [1..] :: Signal Float
*Main> -- add two signals elementwise
*Main> let ys = us .+ vs
*Main> take 10 ys
[2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0]
*Main> -- check types
*Main> :type (us,vs,ys)
(us,vs,ys) :: (Signal Float, Signal Float, Signal Float)
```

We now implement the unit-delay with zero initial condition:

```haskell
delay :: Num a => System a a
delay us = 0 : us
```

which right-shifts the input list and introduces a zero at the output list’s head. If we want a unit-delay operator with a non-zero initial condition, we would have the following code instead:

```haskell
delay :: Num a => System a a
delay us = ini : us
```

where ini is the initial condition of the delay block. Lastly, we create the gain operator:

```haskell
gain :: Num a => a -> System a a
gain alpha = map (alpha*)
```

which takes a number (the gain factor) and returns a system (that maps signals to signals). Note that we use partial function application. One can think of the gain operator as a function that takes a number and a signal and returns a signal. If we fix the first argument (the gain factor), we obtain a function that maps signals to signals, i.e., a system. Here is a quick test of the delay and gain operators:

```
*Main> -- create signal
*Main> let xs = [1..] :: Signal Float
*Main> -- delay signal
*Main> let ys = delay xs
*Main> take 10 ys
[0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0]
*Main> -- amplify delayed signal
*Main> let zs = gain 2.0 ys
*Main> take 10 zs
[0.0,2.0,4.0,6.0,8.0,10.0,12.0,14.0,16.0,18.0]
```

Finally, we have the following Haskell script:

```haskell
type Signal a = [a]
type System a b = Signal a -> Signal b

-- signal adder
(.+) :: Num a => Signal a -> Signal a -> Signal a
(.+) = zipWith (+)

-- delay operator
delay :: Num a => System a a
delay us = 0 : us

-- gain operator
gain :: Num a => a -> System a a
gain alpha = map (alpha*)

-- build feedback system
sys :: Floating a => System a a
sys us = us .+ (gain 0.5 . delay . sys) us
```

where we used $\alpha = 0.5$.
Note that the last line is a direct translation of the equation $\mathcal{H} (u) = u + (\mathcal{G}_{\alpha} \circ \mathcal{D} \circ \mathcal{H}) (u)$ to Haskell! Beautiful! To finalize, let us obtain the impulse response of the LTI system under study for $\alpha = 0.5$:

```
*Main> -- create unit impulse
*Main> let delta = 1.0 : repeat 0.0 :: Signal Float
*Main> -- compute impulse response
*Main> let hs = sys delta
*Main> take 8 hs
[1.0,0.5,0.25,0.125,6.25e-2,3.125e-2,1.5625e-2,7.8125e-3]
```

which is the expected impulse response $h (n) = \alpha^n$ for $n \geq 0$. Frankly, I am in awe. It is amazing that this implementation works!

### Cascading systems in Haskell

January 25, 2012

Last weekend we learned how to build systems (LTI or otherwise) using Haskell. We now want to construct interconnections of systems. In this post we will study the series interconnection of systems, usually known as cascade interconnection. Parallel and feedback interconnections will be discussed in future posts.

__________

Cascading two LTI systems

Let us consider the series interconnection of two causal discrete-time LTI systems, $\mathcal{H}_1$ and $\mathcal{H}_2$, depicted in a block diagram [image omitted], where $y = \mathcal{H}_1 (x)$ and $w = \mathcal{H}_2 (y)$ are the outputs of each LTI system in the cascade. Since the output of system $\mathcal{H}_1$ is the input of system $\mathcal{H}_2$, we have $w = (\mathcal{H}_2 \circ \mathcal{H}_1) (x)$.

What LTI systems should we consider? Let us choose the simplest ones: the accumulator and the differentiator. Thus, let $\mathcal{H}_1$ be an accumulator (also known as a “discrete-time integrator”), whose input-output relationship is $y (n) = y (n-1) + x (n)$, and let $\mathcal{H}_2$ be a first difference operator (also known as a “discrete-time differentiator”), whose input-output relationship is $w (n) = y (n) - y (n-1)$.
Note that the output of the cascade of these two LTI systems is thus

$w (n) = y (n) - y (n-1) = (y (n-1) + x (n)) - y (n-1) = x (n)$

and, hence, the cascade is input-output equivalent to the identity operator. Since $(\mathcal{H}_2 \circ \mathcal{H}_1) (x) = x$ for all signals $x$, we say that $\mathcal{H}_2$ is the left-inverse of system $\mathcal{H}_1$. Since both systems are LTI, the operators commute, i.e., $\mathcal{H}_2 \circ \mathcal{H}_1 = \mathcal{H}_1 \circ \mathcal{H}_2$ and, therefore, $\mathcal{H}_2$ is also the right-inverse of system $\mathcal{H}_1$. Since $\mathcal{H}_2$ is the left- and right-inverse of $\mathcal{H}_1$, we say that $\mathcal{H}_2$ is the inverse of system $\mathcal{H}_1$ (and vice-versa). Do keep in mind, however, that not all systems are invertible. For details, take a look at Oppenheim & Willsky [1].

__________

Implementation in Haskell

We can easily find a state-space realization for the accumulator, but that will not be necessary. As in previous posts, let us view discrete-time signals as lists. Thus, the accumulator takes a list $[x_0, x_1, x_2, \dots]$, and returns the following list

$[y_0, y_1, y_2, \dots] = [x_0, x_0 + x_1, x_0 + x_1 + x_2, \dots]$

where we assume that the initial condition of the accumulator is zero (i.e., $y_{-1} = 0$). Instead of using scanl yet once again (which would require us to drop the head of the list), let us now use scanl1 to implement the accumulator:

```haskell
acc :: Num a => System a a
acc = scanl1 (+)
```

Please note that if the initial condition of the accumulator is not zero, we should use the following code instead:

```haskell
acc' :: Num a => System a a
acc' us = tail $ scanl (+) acc_ini us
```

where acc_ini is the initial condition of the accumulator (analogous to the constant of integration in integral calculus). The differentiator is not a proper system [2] and, therefore, it has no state-space realization.
The differentiator takes a list $[y_0, y_1, y_2, \dots]$, and returns the following list

$[w_0, w_1, w_2, \dots] = [y_0, y_1 - y_0, y_2 - y_1, \dots]$

where we again assume that $y_{-1} = 0$. Note the following

$[w_0, w_1, w_2, \dots] = [y_0, y_1, y_2, \dots] - [0, y_0, y_1, \dots]$

i.e., list $w$ is obtained by elementwise subtraction of a right-shifted version of list $y$ from list $y$ itself. Subtracting two lists elementwise can be implemented using zipWith:

```haskell
diff :: Num a => System a a
diff ys = zipWith (-) ys (0 : ys)
```

If the initial condition of the differentiator is not zero, we should instead use the following code:

```haskell
diff' :: Num a => System a a
diff' ys = zipWith (-) ys (diff_ini : ys)
```

where diff_ini is the initial condition of the differentiator. Piecing it all together, we finally obtain the following Haskell script:

```haskell
type Signal a = [a]
type System a b = Signal a -> Signal b

-- accumulator
acc :: Num a => System a a
acc = scanl1 (+)

-- differentiator
diff :: Num a => System a a
diff ys = zipWith (-) ys (0 : ys)

-- cascade of the acc. and diff.
sys :: Num a => System a a
sys = diff . acc
```

Take a look at the last line. It says that cascading systems is the same as composing systems! Hence, the Haskell implementation is conceptually very close to the mathematical formulation using operators.
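Before running the script, here is a quick sanity check of my own (not from the original post): on any finite signal, the cascade diff . acc should behave as the identity.

```haskell
-- Sanity check: the differentiator inverts the accumulator, so the
-- cascade (diff . acc) should reproduce any finite input signal.
acc :: Num a => [a] -> [a]
acc = scanl1 (+)

diff :: Num a => [a] -> [a]
diff ys = zipWith (-) ys (0 : ys)

-- True exactly when the cascade acts as the identity on xs
identityCheck :: (Num a, Eq a) => [a] -> Bool
identityCheck xs = (diff . acc) xs == xs
```

For example, identityCheck [3,1,4,1,5] evaluates to True.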
Functional analysis meets functional programming

We run the script above on GHCi and then play with it:

```
*Main> -- build unit impulse
*Main> let delta = 1.0 : repeat 0.0 :: Signal Float
*Main> -- output of the accumulator
*Main> let ys = acc delta
*Main> take 20 ys
[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,
1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0]
*Main> -- output of the differentiator
*Main> let ws = diff ys
*Main> take 20 ws
[1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0]
*Main> -- impulse response of the cascade
*Main> let hs = sys delta
*Main> take 20 hs
[1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0]
```

The impulse response of the accumulator is the unit step. The impulse response of the cascade is the unit impulse, as we expected. The differentiator is the inverse of the accumulator, and vice-versa.

__________

References

[1] Alan V. Oppenheim, Alan S. Willsky, S. Hamid Nawab, Signals & Systems, 2nd edition, Prentice-Hall, 1997.

[2] Panos Antsaklis, Anthony Michel, A Linear Systems Primer, Birkhäuser Boston, 2007.
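The posts above cover the series and feedback interconnections. For completeness, here is a sketch of my own (not from the posts) of the remaining parallel interconnection, in which two systems share the same input and their outputs are added elementwise:

```haskell
type Signal a = [a]
type System a b = Signal a -> Signal b

-- elementwise signal adder, as in the posts
(.+) :: Num a => Signal a -> Signal a -> Signal a
(.+) = zipWith (+)

-- parallel interconnection: feed the same input to both systems
-- and add their outputs elementwise
parallel :: Num b => System a b -> System a b -> System a b
parallel h1 h2 u = h1 u .+ h2 u
```

For instance, the parallel interconnection of a gain-2 system and a unit delay, parallel (map (2*)) (0 :), maps the signal [1,2,3] to [2,5,8].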
https://gmatclub.com/forum/if-a-4-cm-cube-is-cut-into-1-cm-cubes-then-what-is-the-107843.html
# If a 4 cm cube is cut into 1 cm cubes, then what is the percentage increase in the surface area of the resulting cubes?

Difficulty: 55% (hard). Question Stats: 64% (01:56) correct, 36% (02:11) wrong, based on 583 sessions.

Manager, Joined: 28 Aug 2010 (posted 15 Jan 2011)

If a 4 cm cube is cut into 1 cm cubes, then what is the percentage increase in the surface area of the resulting cubes?

A. 4%
B. 166%
C. 266%
D. 300%
E. 400%

Please can someone explain this problem better.

Math Expert, Joined: 02 Sep 2009 (posted 15 Jan 2011)

A cube has 6 faces. The surface area of a cube which has a side of 4 cm is 6*4^2 = 6*16 cm^2. Now, when the cube is cut into smaller cubes with a side of 1 cm, we'll get 4*4*4 = 64 little cubes, and each will have a surface area equal to 6*1^2 = 6 cm^2, so the total surface area of these 64 little cubes will be 6*64 cm^2. 6*64 is 4 times 6*16, which corresponds to a 300% increase.

Or: the general formula for percent increase or decrease (percent change) is

$$Percent=\frac{Change}{Original}*100$$

so the percent increase will be:

$$Percent=\frac{Change}{Original}*100=\frac{6*64-6*16}{6*16}*100=300\%$$

Board of Directors, Joined: 01 Sep 2010 (posted 15 Jan 2011)

Generally you have at least two ways to solve a question... this means that we should have different points of view to tackle a problem, right? Rhetorical question, I guess.

Manager, Joined: 15 Jan 2011 (posted 15 Jan 2011)

Hi, when the cube is cut into the smaller cubes with side of 1 cm, how do we get 4*4*4 = 64? Thanks

Math Expert, Joined: 02 Sep 2009 (posted 16 Jan 2011)

The big cube with dimensions 4*4*4 can "produce" 4*4*4 small cubes with dimensions of 1*1*1 (the base layer can give 4*4 small cubes, and as there are 4 layers, a total of 4*4*4 small cubes).

SVP, Joined: 27 Dec 2012 (posted 04 Mar 2014)

Big cube surface area = 4*4*6 = 96. Small cubes surface area (total) = 6*64 = 384. As a percentage of the original: 6*64/96 * 100 = 400%. Increase = 400 - 100 = 300%. Answer = D

Math Expert, Joined: 02 Sep 2009 (posted 04 Mar 2014)

Similar questions to practice:
the-entire-exterior-of-a-large-wooden-cube-is-painted-red-155955.html
a-big-cube-is-formed-by-rearranging-the-160-coloured-and-99424.html
a-large-cube-consists-of-125-identical-small-cubes-how-110256.html
64-small-identical-cubes-are-used-to-form-a-large-cube-151009.html
a-wooden-cube-whose-edge-length-is-10-inches-is-composed-of-162570.html

Intern, Joined: 01 Mar 2014 (posted 13 Aug 2014)

I have a problem with its wording... "4 cm cube"! Does it mean that each side is 4 cm, or that the area is 4 cm? Is it popular to use this language in geometry?

Math Expert, Joined: 02 Sep 2009 (posted 13 Aug 2014)

An area cannot be in cm, it's in square units, so 4 cm can only mean the length of something.

Manager, Joined: 21 Jun 2011 (posted 23 Nov 2014)

Hey Bunuel, my question might sound stupid, but why are you going back and calculating the volume of the bigger cube? Why can't we just take the side to be 1 and hence the surface area = 6a^2 = 6? What is wrong with my thinking?

Math Expert, Joined: 02 Sep 2009 (posted 24 Nov 2014)

When a cube with a side of 4 units is cut into cubes with a side of 1 unit, you get 64 little cubes. Each of them will have a surface area of 6 square units; thus the surface area of all 64 little cubes will be 6*64 square units.

Manager, Joined: 08 Sep 2016 (posted 04 Apr 2018)

Hi Bunuel, is it a coincidence (or luck) that this problem can be solved by ignoring the fact that we are asked to solve for the percent difference in surface area? We have something that is 4 times something else. What's the percent change? ((4-1)/1) * 100 = 300

Math Expert, Joined: 02 Sep 2009 (posted 04 Apr 2018)

What would you get with your approach if the big cube were cut into 2 cm cubes, instead of 1 cm cubes?

Intern, Joined: 31 Jul 2016 (posted 04 Apr 2018)
The surface area of the original cube is 6×4×4 = 96 square cm. In total there are 64 1cm cubes, which gives 64×6 = 384 square cm. Percent increase = (384-96)/96 * 100 = 288*100/96 = 300 Manager Joined: 08 Sep 2016 Posts: 118 Re: If a 4 cm cube is cut into 1 cm cubes, then what is the  [#permalink] ### Show Tags 04 Apr 2018, 15:42 Bunuel wrote: hdavies wrote: Hi Bunuel Is it a coincidence (or luck) that this problem can be solved by ignoring the fact that we are asked to solve for the percent difference in surface area? We have something that is 4 times something else. What's the percent change? ((4-1)/1) *100 = 300 What would you get with your approach if the big cube were cut into 2cm cubes, instead of 1cm cubes? I see. The approach I mentioned above would show 100% change, but the real answer would be undefined I think. Re: If a 4 cm cube is cut into 1 cm cubes, then what is the [#permalink] 04 Apr 2018, 15:42
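The arithmetic debated in this thread can be sanity-checked with a short script. This is a sketch in Python, not part of the original discussion; the function name is mine. It cuts a cube of a given edge length into smaller cubes and compares total surface areas.

```python
def percent_increase_in_surface_area(side, piece):
    """Percent increase in total surface area when a cube of edge `side`
    is cut into cubes of edge `piece` (side must be a multiple of piece)."""
    assert side % piece == 0
    original = 6 * side ** 2          # 6 faces of the big cube
    count = (side // piece) ** 3      # number of small cubes produced
    total = count * 6 * piece ** 2    # combined surface area of all pieces
    return 100 * (total - original) / original

print(percent_increase_in_surface_area(4, 1))  # 300.0, answer (D)
print(percent_increase_in_surface_area(4, 2))  # 100.0, the 2cm case raised above
```

Note that the 2cm case confirms the follow-up question in the thread: cutting into 2cm cubes gives a 100% increase, not an undefined result.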
# Cantor's completeness principle I hope everyone who has taken a fundamental course in Analysis knows Cantor's completeness principle. It says that the intersection of all the intervals in a nest of closed intervals is a single point. I hope I can get an explanation of why this principle holds for closed intervals but not for open intervals. - Consider the nested intervals $[0, 1 + \frac{1}{n}]$ for $n \in \omega$. The intersection is not a single point. – William Sep 19 '11 at 17:44 @SrivatsanNarayanan - I am sorry for making such a mistake. Now that you have stated it correctly, could you kindly explain why the word "closed" is so important? Why can't we have the result hold for open intervals? – Primeczar Sep 19 '11 at 17:49 @Prime, Are you looking for examples or an explanation? – Srivatsan Sep 19 '11 at 17:58 The intersection of all the open intervals centered at $0$ is just $\{0\}$, since $0$ is the only point that is a member of all of them. But the intersection of all the open intervals whose lower boundary is $0$ is empty. (After all, what point could be a member of all of them?) And they are nested, in that for any two of them, one is a subset of the other.
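A finite computational illustration of the contrast drawn in the answer above (a Python sketch, not from the original thread; the helper names are mine): $0$ survives membership in every closed interval $[-1/n, 1/n]$, while any candidate $x > 0$ is eventually ejected from the open intervals $(0, 1/n)$ once $n \geq 1/x$.

```python
from fractions import Fraction

def in_all_closed(x, depth):
    """True if x lies in [-1/n, 1/n] for every n = 1..depth."""
    return all(-Fraction(1, n) <= x <= Fraction(1, n) for n in range(1, depth + 1))

def in_all_open(x, depth):
    """True if x lies in the open interval (0, 1/n) for every n = 1..depth."""
    return all(0 < x < Fraction(1, n) for n in range(1, depth + 1))

assert in_all_closed(0, 1000)                    # 0 belongs to every closed interval
assert not in_all_open(Fraction(1, 500), 1000)   # 1/500 is ejected once n reaches 500
```

Of course a finite check is only an illustration; the actual proofs are the one-line arguments given in the comments above.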
# Root systems¶ ## Quickref¶ • T = CartanType(["A", 3]), T.is_finite() – Cartan types • T.dynkin_diagram(), DynkinDiagram(["G",2]) – Dynkin diagrams • T.cartan_matrix(),  CartanMatrix(["F",4]) – Cartan matrices • RootSystem(T).weight_lattice() – Root systems • WeylGroup(["B", 6, 1]).simple_reflections() – Affine Weyl groups • WeylCharacterRing(["D", 4]) – Weyl character rings ## Documentation¶ class sage.combinat.root_system.root_system.RootSystem(cartan_type, as_dual_of=None) A class for root systems. EXAMPLES: We construct the root system for type $$B_3$$: sage: R=RootSystem(['B',3]); R Root system of type ['B', 3] R models the root system abstractly. It comes equipped with various realizations of the root and weight lattices, where all computations take place. Let us play first with the root lattice: sage: space = R.root_lattice() sage: space Root lattice of the Root system of type ['B', 3] This is the free $$\ZZ$$-module $$\bigoplus_i \ZZ.\alpha_i$$ spanned by the simple roots: sage: space.base_ring() Integer Ring sage: list(space.basis()) [alpha[1], alpha[2], alpha[3]] Let us do some computations with the simple roots: sage: alpha = space.simple_roots() sage: alpha[1] + alpha[2] alpha[1] + alpha[2] There is a canonical pairing between the root lattice and the coroot lattice: sage: R.coroot_lattice() Coroot lattice of the Root system of type ['B', 3] We construct the simple coroots, and do some computations (see comments about duality below for some caveat): sage: alphacheck = space.simple_coroots() sage: list(alphacheck) [alphacheck[1], alphacheck[2], alphacheck[3]] We can carry over the same computations in any of the other realizations of the root lattice, like the root space $$\bigoplus_i \QQ.\alpha_i$$, the weight lattice $$\bigoplus_i \ZZ.\Lambda_i$$, the weight space $$\bigoplus_i \QQ.\Lambda_i$$. 
For example: sage: space = R.weight_space() sage: space Weight space over the Rational Field of the Root system of type ['B', 3] sage: space.base_ring() Rational Field sage: list(space.basis()) [Lambda[1], Lambda[2], Lambda[3]] sage: alpha = space.simple_roots() sage: alpha[1] + alpha[2] Lambda[1] + Lambda[2] - 2*Lambda[3] The fundamental weights are the dual basis of the coroots: sage: Lambda = space.fundamental_weights() sage: Lambda[1] Lambda[1] sage: alphacheck = space.simple_coroots() sage: list(alphacheck) [alphacheck[1], alphacheck[2], alphacheck[3]] sage: [Lambda[i].scalar(alphacheck[1]) for i in space.index_set()] [1, 0, 0] sage: [Lambda[i].scalar(alphacheck[2]) for i in space.index_set()] [0, 1, 0] sage: [Lambda[i].scalar(alphacheck[3]) for i in space.index_set()] [0, 0, 1] Let us use the simple reflections. In the weight space, they work as in the number game: firing the node $$i$$ on an element $$x$$ adds $$c$$ times the simple root $$\alpha_i$$, where $$c$$ is the coefficient of $$i$$ in $$x$$: sage: s = space.simple_reflections() sage: Lambda[1].simple_reflection(1) -Lambda[1] + Lambda[2] sage: Lambda[2].simple_reflection(1) Lambda[2] sage: Lambda[3].simple_reflection(1) Lambda[3] sage: (-2*Lambda[1] + Lambda[2] + Lambda[3]).simple_reflection(1) 2*Lambda[1] - Lambda[2] + Lambda[3] It can be convenient to manipulate the simple reflections themselves: sage: s = space.simple_reflections() sage: s[1](Lambda[1]) -Lambda[1] + Lambda[2] sage: s[1](Lambda[2]) Lambda[2] sage: s[1](Lambda[3]) Lambda[3] Ambient spaces The root system may also come equipped with an ambient space. This is a $$\QQ$$-module, endowed with its canonical Euclidean scalar product, which admits simultaneous embeddings of the (extended) weight and the (extended) coweight lattice, and therefore the root and the coroot lattice. This is implemented on a type by type basis for the finite crystallographic root systems following Bourbaki’s conventions and is extended to the affine cases. 
Coefficients permitting, this is also available as an ambient lattice. ambient_space() and ambient_lattice() for details In finite type $$A$$, we recover the natural representation of the symmetric group as group of permutation matrices: sage: RootSystem(["A",2]).ambient_space().weyl_group().simple_reflections() Finite family {1: [0 1 0] [1 0 0] [0 0 1], 2: [1 0 0] [0 0 1] [0 1 0]} In type $$B$$, $$C$$, and $$D$$, we recover the natural representation of the Weyl group as groups of signed permutation matrices: sage: RootSystem(["B",3]).ambient_space().weyl_group().simple_reflections() Finite family {1: [0 1 0] [1 0 0] [0 0 1], 2: [1 0 0] [0 0 1] [0 1 0], 3: [ 1 0 0] [ 0 1 0] [ 0 0 -1]} In (untwisted) affine types $$A$$, ..., $$D$$, one can recover from the ambient space the affine permutation representation, in window notation. Let us consider the ambient space for affine type $$A$$: sage: L = RootSystem(["A",2,1]).ambient_space(); L Ambient space of the Root system of type ['A', 2, 1] Define the “identity” by an appropriate vector at level $$-3$$: sage: e = L.basis(); Lambda = L.fundamental_weights() sage: id = e[0] + 2*e[1] + 3*e[2] - 3*Lambda[0] The corresponding permutation is obtained by projecting it onto the classical ambient space: sage: L.classical() Ambient space of the Root system of type ['A', 2] sage: L.classical()(id) (1, 2, 3) Here is the orbit of the identity under the action of the finite group: sage: W = L.weyl_group() sage: S3 = [ w.action(id) for w in W.classical() ] sage: [L.classical()(x) for x in S3] [(1, 2, 3), (3, 2, 1), (3, 1, 2), (2, 1, 3), (2, 3, 1), (1, 3, 2)] And the action of $$s_0$$ on these yields: sage: s = W.simple_reflections() sage: [L.classical()(s[0].action(x)) for x in S3] [(0, 2, 4), (-2, 2, 6), (-1, 1, 6), (0, 1, 5), (-2, 3, 5), (-1, 3, 4)] We can also plot various components of the ambient spaces: sage: L = RootSystem(['A',2]).ambient_space() sage: L.plot() For more on plotting, see Tutorial: visualizing root systems. 
Dual root systems The root system is aware of its dual root system: sage: R.dual Dual of root system of type ['B', 3] R.dual is really the root system of type $$C_3$$: sage: R.dual.cartan_type() ['C', 3] And the coroot lattice that we have been manipulating before is really implemented as the root lattice of the dual root system: sage: R.dual.root_lattice() Coroot lattice of the Root system of type ['B', 3] In particular, the coroots for the root lattice are in fact the roots of the coroot lattice: sage: list(R.root_lattice().simple_coroots()) [alphacheck[1], alphacheck[2], alphacheck[3]] sage: list(R.coroot_lattice().simple_roots()) [alphacheck[1], alphacheck[2], alphacheck[3]] sage: list(R.dual.root_lattice().simple_roots()) [alphacheck[1], alphacheck[2], alphacheck[3]] The coweight lattice and space are defined similarly. Note that, to limit confusion, all the output have been tweaked appropriately. TESTS: sage: R = RootSystem(['C',3]) sage: TestSuite(R).run() sage: L = R.ambient_space() sage: s = L.simple_reflections() # this used to break the testsuite below due to caching an unpicklable method sage: s = L.simple_projections() # todo: not implemented sage: TestSuite(L).run() sage: L = R.root_space() sage: s = L.simple_reflections() sage: TestSuite(L).run() sage: for T in CartanType.samples(crystallographic=True): # long time (13s on sage.math, 2012) ... TestSuite(RootSystem(T)).run() ambient_lattice() Return the ambient lattice for this root_system. This is the ambient space, over $$\ZZ$$. 
EXAMPLES: sage: RootSystem(['A',4]).ambient_lattice() Ambient lattice of the Root system of type ['A', 4] sage: RootSystem(['A',4,1]).ambient_lattice() Ambient lattice of the Root system of type ['A', 4, 1] Except in type A, only an ambient space can be realized: sage: RootSystem(['B',4]).ambient_lattice() sage: RootSystem(['C',4]).ambient_lattice() sage: RootSystem(['D',4]).ambient_lattice() sage: RootSystem(['E',6]).ambient_lattice() sage: RootSystem(['F',4]).ambient_lattice() sage: RootSystem(['G',2]).ambient_lattice() ambient_space(base_ring=Rational Field) Return the usual ambient space for this root_system. INPUT: • base_ring – a base ring (default: $$\QQ$$) This is a base_ring-module, endowed with its canonical Euclidean scalar product, which admits simultaneous embeddings into the weight and the coweight lattice, and therefore the root and the coroot lattice, and preserves scalar products between elements of the coroot lattice and elements of the root or weight lattice (and dually). There is no mechanical way to define the ambient space just from the Cartan matrix. Instead it is constructed from hard-coded, type-by-type data, according to the usual Bourbaki conventions. Such data is provided for all the finite (crystallographic) types. From this data, ambient spaces can be built as well for dual types, reducible types and affine types. When no data is available, or if the base ring is not large enough, None is returned.
Warning for affine types EXAMPLES: sage: RootSystem(['A',4]).ambient_space() Ambient space of the Root system of type ['A', 4] sage: RootSystem(['B',4]).ambient_space() Ambient space of the Root system of type ['B', 4] sage: RootSystem(['C',4]).ambient_space() Ambient space of the Root system of type ['C', 4] sage: RootSystem(['D',4]).ambient_space() Ambient space of the Root system of type ['D', 4] sage: RootSystem(['E',6]).ambient_space() Ambient space of the Root system of type ['E', 6] sage: RootSystem(['F',4]).ambient_space() Ambient space of the Root system of type ['F', 4] sage: RootSystem(['G',2]).ambient_space() Ambient space of the Root system of type ['G', 2] An alternative base ring can be provided as an option: sage: e = RootSystem(['B',3]).ambient_space(RR) sage: TestSuite(e).run() It should contain the smallest ring over which the ambient space can be defined ($$\ZZ$$ in type $$A$$ or $$\QQ$$ otherwise). Otherwise None is returned: sage: RootSystem(['B',2]).ambient_space(ZZ) The base ring should also be totally ordered. In practice, only $$\ZZ$$ and $$\QQ$$ are really supported at this point, but you are welcome to experiment: sage: e = RootSystem(['G',2]).ambient_space(RR) sage: TestSuite(e).run() Failure in _test_root_lattice_realization: Traceback (most recent call last): ... AssertionError: 2.00000000000000 != 2.00000000000000 ------------------------------------------------------------ The following tests failed: _test_root_lattice_realization cartan_matrix() EXAMPLES: sage: RootSystem(['A',3]).cartan_matrix() [ 2 -1 0] [-1 2 -1] [ 0 -1 2] cartan_type() Returns the Cartan type of the root system. EXAMPLES: sage: R = RootSystem(['A',3]) sage: R.cartan_type() ['A', 3] coambient_space(base_ring=Rational Field) Return the coambient space for this root system. This is the ambient space of the dual root system. 
EXAMPLES: sage: L = RootSystem(["B",2]).ambient_space(); L Ambient space of the Root system of type ['B', 2] sage: coL = RootSystem(["B",2]).coambient_space(); coL Coambient space of the Root system of type ['B', 2] The roots and coroots are interchanged: sage: coL.simple_roots() Finite family {1: (1, -1), 2: (0, 2)} sage: L.simple_coroots() Finite family {1: (1, -1), 2: (0, 2)} sage: coL.simple_coroots() Finite family {1: (1, -1), 2: (0, 1)} sage: L.simple_roots() Finite family {1: (1, -1), 2: (0, 1)} coroot_lattice() Returns the coroot lattice associated to self. EXAMPLES: sage: RootSystem(['A',3]).coroot_lattice() Coroot lattice of the Root system of type ['A', 3] coroot_space(base_ring=Rational Field) Returns the coroot space associated to self. EXAMPLES: sage: RootSystem(['A',3]).coroot_space() Coroot space over the Rational Field of the Root system of type ['A', 3] coweight_lattice(extended=False) Returns the coweight lattice associated to self. This is the weight lattice of the dual root system. EXAMPLES: sage: RootSystem(['A',3]).coweight_lattice() Coweight lattice of the Root system of type ['A', 3] sage: RootSystem(['A',3,1]).coweight_lattice(extended = True) Extended coweight lattice of the Root system of type ['A', 3, 1] coweight_space(base_ring=Rational Field, extended=False) Returns the coweight space associated to self. This is the weight space of the dual root system. EXAMPLES: sage: RootSystem(['A',3]).coweight_space() Coweight space over the Rational Field of the Root system of type ['A', 3] sage: RootSystem(['A',3,1]).coweight_space(extended=True) Extended coweight space over the Rational Field of the Root system of type ['A', 3, 1] dynkin_diagram() Returns the Dynkin diagram of the root system. EXAMPLES: sage: R = RootSystem(['A',3]) sage: R.dynkin_diagram() O---O---O 1 2 3 A3 index_set() EXAMPLES: sage: RootSystem(['A',3]).index_set() (1, 2, 3) is_finite() Returns True if self is a finite root system. 
EXAMPLES: sage: RootSystem(["A",3]).is_finite() True sage: RootSystem(["A",3,1]).is_finite() False is_irreducible() Returns True if self is an irreducible root system. EXAMPLES: sage: RootSystem(['A', 3]).is_irreducible() True sage: RootSystem("A2xB2").is_irreducible() False root_lattice() Returns the root lattice associated to self. EXAMPLES: sage: RootSystem(['A',3]).root_lattice() Root lattice of the Root system of type ['A', 3] root_poset(restricted=False) Returns the (restricted) root poset associated to self. The elements are given by the positive roots (resp. non-simple, positive roots), and $$\alpha \leq \beta$$ iff $$\beta - \alpha$$ is a non-negative linear combination of simple roots. INPUT: • restricted – (default: False) if True, only non-simple roots are considered. EXAMPLES: sage: Phi = RootSystem(['A',2]).root_poset(); Phi Finite poset containing 3 elements sage: sorted(Phi.cover_relations(), key=str) [[alpha[1], alpha[1] + alpha[2]], [alpha[2], alpha[1] + alpha[2]]] sage: Phi = RootSystem(['A',3]).root_poset(restricted=True); Phi Finite poset containing 3 elements sage: sorted(Phi.cover_relations(), key=str) [[alpha[1] + alpha[2], alpha[1] + alpha[2] + alpha[3]], [alpha[2] + alpha[3], alpha[1] + alpha[2] + alpha[3]]] sage: Phi = RootSystem(['B',2]).root_poset(); Phi Finite poset containing 4 elements sage: Phi.cover_relations() [[alpha[1], alpha[1] + alpha[2]], [alpha[2], alpha[1] + alpha[2]], [alpha[1] + alpha[2], alpha[1] + 2*alpha[2]]] root_space(base_ring=Rational Field) Returns the root space associated to self. EXAMPLES: sage: RootSystem(['A',3]).root_space() Root space over the Rational Field of the Root system of type ['A', 3] weight_lattice(extended=False) Returns the weight lattice associated to self.
EXAMPLES: sage: RootSystem(['A',3]).weight_lattice() Weight lattice of the Root system of type ['A', 3] sage: RootSystem(['A',3,1]).weight_space(extended = True) Extended weight space over the Rational Field of the Root system of type ['A', 3, 1] weight_space(base_ring=Rational Field, extended=False) Returns the weight space associated to self. EXAMPLES: sage: RootSystem(['A',3]).weight_space() Weight space over the Rational Field of the Root system of type ['A', 3] sage: RootSystem(['A',3,1]).weight_space(extended = True) Extended weight space over the Rational Field of the Root system of type ['A', 3, 1] sage.combinat.root_system.root_system.WeylDim(ct, coeffs) The Weyl Dimension Formula. INPUT: • type - a Cartan type • coeffs - a list of nonnegative integers The length of the list must equal the rank type[1]. A dominant weight hwv is constructed by summing the fundamental weights with coefficients from this list. The dimension of the irreducible representation of the semisimple complex Lie algebra with highest weight vector hwv is returned. EXAMPLES: For $$SO(7)$$, the Cartan type is $$B_3$$, so: sage: WeylDim(['B',3],[1,0,0]) # standard representation of SO(7) 7 sage: WeylDim(['B',3],[0,1,0]) # exterior square 21 sage: WeylDim(['B',3],[0,0,1]) # spin representation of spin(7) 8 sage: WeylDim(['B',3],[1,0,1]) # sum of the first and third fundamental weights 48 sage: [WeylDim(['F',4],x) for x in [1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]] [52, 1274, 273, 26] sage: [WeylDim(['E', 6], x) for x in [0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 2], [0, 0, 0, 0, 1, 0], [0, 0, 1, 0, 0, 0], [1, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 1], [2, 0, 0, 0, 0, 0]] [1, 78, 27, 351, 351, 351, 27, 650, 351]
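Outside of Sage, the tridiagonal Cartan matrix shown in the cartan_matrix() example above is easy to rebuild from its pattern. This is a plain-Python sketch for readers without a Sage install, not part of the Sage API, and it covers type A only:

```python
def cartan_matrix_type_A(n):
    """Cartan matrix of type A_n: 2 on the diagonal, -1 between
    adjacent nodes of the Dynkin diagram, 0 elsewhere."""
    return [[2 if i == j else -1 if abs(i - j) == 1 else 0
             for j in range(n)]
            for i in range(n)]

# Matches the output of RootSystem(['A',3]).cartan_matrix() above:
assert cartan_matrix_type_A(3) == [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
```

Other types differ only in a few off-diagonal entries (and type-by-type data, as the ambient_space() documentation notes for its own construction).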
# Each • in the mileage table above represents an entry indica Target Test Prep Representative Affiliations: Target Test Prep Joined: 04 Mar 2011 Posts: 2799 Re: Each • in the mileage table above represents an entry indica  [#permalink] ### Show Tags 04 May 2016, 08:16 2 snkrhed wrote: Attachment: Table.png Each • in the mileage table above represents an entry indicating the distance between a pair of the five cities. If the table were extended to represent the distances between all pairs of 30 cities and each distance were to be represented by only one entry, how many entries would the table then have? (A) 60 (B) 435 (C) 450 (D) 465 (E) 900 Solution: This problem can best be solved using combinations. This problem is similar to one in which 30 sports teams are playing in a tournament where every team plays every other team exactly once. No team plays itself, obviously, and the order of each pairing doesn't matter. [For example, if Team A plays Team B, the pairing of (Team A vs. Team B) is identical to (Team B vs. Team A)]. We would calculate 30C2, or the number of combinations of 30 items taken 2 at a time. We also can solve this problem in the same way: 30C2 = 30! / [2! x (30 – 2)!] (30 x 29 x 28!) / [2! x 28!] (30 x 29)/2!
(30 x 29) / 2 15 x 29 = 435 _________________ # Jeffrey Miller Jeff@TargetTestPrep.com 225 Reviews 5-star rated online GMAT quant self study course See why Target Test Prep is the top rated GMAT quant course on GMAT Club. Read Our Reviews Manager Status: Never Say Never Joined: 27 May 2014 Posts: 54 Location: India GMAT 1: 640 Q46 V32 GMAT 2: 670 Q49 V33 GPA: 3.89 WE: Information Technology (Computer Software) Re: Each • in the mileage table above represents an entry indica  [#permalink] ### Show Tags 11 May 2016, 21:00 I solved it using patterns - 5 cities 4 , 3, 2, 1 for each city per say. extending to 30 cities, would mean 1,2,3,4, ..............,29 thats the series of consecutive integers - now calculating the sum - no of terms = 29-1 +1 29 Average = 1+29 / 2 = 15 sum = 29 * 15 = 435 choice B Manager Joined: 21 Sep 2015 Posts: 83 Location: India GMAT 1: 730 Q48 V42 GMAT 2: 750 Q50 V41 Re: Each • in the mileage table above represents an entry indica  [#permalink] ### Show Tags Updated on: 16 Jun 2016, 18:06 kirtivardhan If you look at the table City A(1st city) has 0 points under it , City B(2nd City) has 1 , City C(3rd City) has 2 and so on . Therefore the 30th city will have 29 points under it. Sum of these points is given by [ 2a + (n-1)d] * (n/2) If you take first term a= 0 then n=30 and d=1; Sum = [2(0) +29] * 30/2 =435 Or if you wish to take first term a=1 ( i.e. starting from city b) n= 29 d=1 Sum = [ 2 +28] *29/2 =435 Originally posted by rishi02 on 15 Jun 2016, 21:39. Last edited by rishi02 on 16 Jun 2016, 18:06, edited 1 time in total. 
EMPOWERgmat Instructor Status: GMAT Assassin/Co-Founder Affiliations: EMPOWERgmat Joined: 19 Dec 2014 Posts: 17018 Location: United States (CA) GMAT 1: 800 Q51 V49 GRE 1: Q170 V170 Re: Each • in the mileage table above represents an entry indica  [#permalink] ### Show Tags 16 Jun 2016, 09:19 1 rishi02 wrote: kirtivardhan If you look at the table City A(1st city) has 0 points under it , City B(2nd City) has 1 , City C(3rd City) has 2 and so on . Therefore the 30th city will have 29 points under it. Sum of these points is given by [ 2a + (n-1)d] * (n/2) If you take first term a= 0 then n=30 and d=1; Sum = [2(0) +29] * 30/2 =485 Or if you wish to take first term a=1 ( i.e. starting from city b) n= 29 d=1 Sum = [ 2 +28] *29/2 =485 Hi rishi02, Your set-up is correct, but you should double-check your math (485 is NOT the result of either of those calculations). GMAT assassins aren't born, they're made, Rich _________________ Contact Rich at: Rich.C@empowergmat.com The Course Used By GMAT Club Moderators To Earn 750+ souvik101990 Score: 760 Q50 V42 ★★★★★ ENGRTOMBA2018 Score: 750 Q49 V44 ★★★★★ Manager Joined: 21 Sep 2015 Posts: 83 Location: India GMAT 1: 730 Q48 V42 GMAT 2: 750 Q50 V41 Re: Each • in the mileage table above represents an entry indica  [#permalink] ### Show Tags 16 Jun 2016, 18:05 EMPOWERgmatRichC wrote: rishi02 wrote: kirtivardhan If you look at the table City A(1st city) has 0 points under it , City B(2nd City) has 1 , City C(3rd City) has 2 and so on . Therefore the 30th city will have 29 points under it. Sum of these points is given by [ 2a + (n-1)d] * (n/2) If you take first term a= 0 then n=30 and d=1; Sum = [2(0) +29] * 30/2 =485 Or if you wish to take first term a=1 ( i.e. starting from city b) n= 29 d=1 Sum = [ 2 +28] *29/2 =485 Hi rishi02, Your set-up is correct, but you should double-check your math (485 is NOT the result of either of those calculations). 
GMAT assassins aren't born, they're made, Rich Will edit the mistake Manager Joined: 03 Jan 2017 Posts: 131 Re: Each • in the mileage table above represents an entry indica  [#permalink] ### Show Tags 25 Mar 2017, 11:04 we can solve it in 2 approaches: 1) find the sum of consecutive integers: (1+29/2)*29=435 or 2) how many matches are for 30 items: 2C30=29*30/2=435 Current Student Joined: 25 Feb 2017 Posts: 34 Location: Korea, Republic of Schools: LBS '19 (A) GMAT 1: 720 Q50 V38 GPA: 3.67 Re: Each • in the mileage table above represents an entry indica  [#permalink] ### Show Tags 02 May 2017, 19:48 Each • in the mileage table above represents an entry indicating the distance between a pair of the five cities. If the table were extended to represent the distances between all pairs of 30 cities and each distance were to be represented by only one entry, how many entries would the table then have? (A) 60 (B) 435 (C) 450 (D) 465 (E) 900 My 2 cents. We see that the number of dots in city E (which is the 5th term) is 4. From this, we can conclude that 30 cities will have 29 dots. Now, the issue is how are we going to add 1+2+3.....29. Well, we have a short cut in adding. When you look at the below number, we see that if we add the first term and the last term, we get 30. 1+2+3...27+28+29 Continuing this trend, we can go up to 14+16. You should be careful in going up 15+15 because there are NO two 15s in this sequence, only 1. So, we are almost done. 
14 * 30 = 420 and we have to add 15 Hence 435 Intern Joined: 20 Aug 2016 Posts: 1 Re: Each • in the mileage table above represents an entry indica  [#permalink] ### Show Tags 11 Jun 2017, 05:03 1+2+3+4+5+6+7+8+9+10=55 11+12+...........20 =100+55 21+22+...........30 = 200+55 Total 465 Total - 30= 435 VP Joined: 09 Mar 2016 Posts: 1252 Each • in the mileage table above represents an entry indica  [#permalink] ### Show Tags 08 Apr 2018, 03:26 1 snkrhed wrote: Attachment: Table.png Each • in the mileage table above represents an entry indicating the distance between a pair of the five cities. If the table were extended to represent the distances between all pairs of 30 cities and each distance were to be represented by only one entry, how many entries would the table then have? (A) 60 (B) 435 (C) 450 (D) 465 (E) 900 Hello generis, I used this formula to get answer SUM OF N TERMS = $$\frac{n}{2} (2a+(n-1)d)$$ i was almost there but something went wrong but i am stll happy $$\frac{30}{2} (2 *1+(30-1)1)$$ where $$30$$ is number of terms $$1$$ is distance so i get $$15(2*1+(30-1)*1)$$ ---> $$15(2+29*1)$$ ---> $$15*31 = 465$$ so i got 465 as an answer but correct one is 435 hello generis, i think you missed my question so i decided to tag you one more time have a great start of the week Senior SC Moderator Joined: 22 May 2016 Posts: 3938 Each • in the mileage table above represents an entry indica  [#permalink] ### Show Tags 15 Apr 2018, 10:16 dave13 wrote: snkrhed wrote: Each • in the mileage table above represents an entry indicating the distance between a pair of the five cities. If the table were extended to represent the distances between all pairs of 30 cities and each distance were to be represented by only one entry, how many entries would the table then have? 
(A) 60 (B) 435 (C) 450 (D) 465 (E) 900 Hello generis, I used this formula to get answer SUM OF N TERMS = $$\frac{n}{2} (2a+(n-1)d)$$ i was almost there but something went wrong but i am stll happy $$\frac{30}{2} (2 *1+(30-1)1)$$ where $$30$$ is number of terms $$1$$ is distance so i get $$15(2*1+(30-1)*1)$$ ---> $$15(2+29*1)$$ ---> $$15*31 = 465$$ so i got 465 as an answer but correct one is 435 hello generis, i think you missed my question so i decided to tag you one more time have a great start of the week dave13 - I did miss the tag. Or my email has gone berserk. Sorry. You should be! You are very close! (Plus, I do not think this formula is intuitive at all. I use the "easier" version. ) I uploaded the table, see below. Notice . . . how many dots are under City A? Zero (0) Because we begin at City A, let's say City A = City 1 City 1? 0 dots How many dots under City B = City 2? One (1) dot City C = 3? 2 dots There is one fewer dot than there are cities. There is no distance between City A and City A Easy mistake. You have to account for that disparity when you think about the first term, $$a$$, in your sum A-to-A = 0 (as a first term in your sum of integers) So your formula will work perfectly if you alter one of two things. $$\frac{n}{2} (2a+(n-1)d)$$ Either start with $$a$$ equal to 0, i.e. $$a = 0$$, then $$n = 30$$ $$d = 1$$ OR let $$a = 1$$, in which case $$n = 29$$ (one fewer dot/distance between than number of cities) $$d = 1$$ Try your approach with one of these two changes -- whichever seems most intuitive to you. Does this explanation make sense? Attachment: TableCities.png [ 23.23 KiB | Viewed 1048 times ] (Alternatively, although I think it is harder, You counted 30 city-city distances too many. You have to subtract for A-A, B-B, C-C, etc. So: (465 - 30) = 435.) _________________ Visit SC Butler, here! Get two SC questions to practice, whose links you can find by date. Our lives begin to end the day we become silent about things that matter. -- Dr. 
Martin Luther King, Jr. BLACK LIVES MATTER. Manager Joined: 14 Feb 2016 Posts: 67 Re: Each • in the mileage table above represents an entry indica  [#permalink] ### Show Tags 20 Apr 2018, 01:27 Step 1: First of all - see the magic in this beautiful table. It’s amazing. We see that 1 city has no distance, and two cities have of course 1 distance. Let’s extend this logic to 30 cities. OMG - this means the last column must have 29 distances for 30 cities. Step 2: ERMAHGERD - this is like a number set that goes up to 29. Wow. Let’s use the formulas that are dear to us to find the sum. Because the question is, how many entries would the table then have - aka - how many dots? Step 3: Count the number of integers using the magic formula of (last - first)/increment + 1 = (29 - 1)/1 + 1 = 29. Step 4: Find the mean by using the magic lazy formula of (last + first)/2 = (29 + 1)/2 = 15. Step 5: Wow - magic - let’s multiply these numbers together. 29*15 = 435. OMG - 435. Matches AC B. Senior Manager Joined: 15 Jan 2017 Posts: 314 Re: Each • in the mileage table above represents an entry indica  [#permalink] ### Show Tags 05 Jun 2018, 23:10 Approach - the question says - "If the table were extended to represent the distances between all pairs of 30 cities and each distance were to be represented by only one entry, how many entries would the table then have?" Essentially it is asking for pairs to be represented by 1 dot, so we can use the combinations method: 30C2 --> 435 Senior Manager Joined: 05 Feb 2018 Posts: 440 Re: Each • in the mileage table above represents an entry indica  [#permalink] ### Show Tags 25 Apr 2019, 08:01 First thought: a 30*30 grid means 900 entries, divided by 2 = 450. But that's too easy, why? This would be the case if the grid was exactly half covered in dots. So the top right would be covered, the bottom left would be empty and the 'middle line' where each city is paired with itself would be half filled with dots. We want to remove that half-filled portion. Half of the 30 cities is 15, so 450 - 15 = 435.
Now the middle line is completely empty. Drawing a picture with only a few lines is useful to visualize this. Alternatively, we have a 30*30 grid for a total of 900. Since we can't choose the same city to represent itself, we have 30 choices for the first city times 29 for the second, divided by 2 (to get rid of arrangements): 30*29/2 = 435 GMAT Club Legend Joined: 11 Sep 2015 Posts: 4953 GMAT 1: 770 Q49 V46 Re: Each • in the mileage table above represents an entry indica  [#permalink] ### Show Tags 27 Jan 2020, 15:09 Top Contributor snkrhed wrote: Each • in the mileage table above represents an entry indicating the distance between a pair of the five cities. If the table were extended to represent the distances between all pairs of 30 cities and each distance were to be represented by only one entry, how many entries would the table then have? (A) 60 (B) 435 (C) 450 (D) 465 (E) 900 Attachment: Table.png APPROACH #1: Each entry in the mileage table denotes a distinct pair of cities. We can determine the total number of distinct pairs of cities by using combinations. When there are 5 cities, the total number of distinct pairs of cities = 5C2 = 10 Notice that in the given table there are 10 entries. Perfect! Likewise, if we have a mileage table consisting of 30 cities, the total number of distinct pairs of cities = 30C2 [to learn how to mentally calculate combinations like 30C2, watch the video below] = (30)(29)/(2)(1) = 435 APPROACH #2: Notice that, when there are 5 cities in the mileage table, the number of entries = 1 + 2 + 3 + 4 Likewise, if we have a mileage table consisting of 30 cities, the number of entries = 1 + 2 + 3 + . . . .
+ 28 + 29 One way to calculate this is to apply the following formula: The sum of the integers from 1 to n inclusive = (n)(n+1)/2 So, 1+2+...........+28+29 = (29)(29+1)/2 = (29)(30)/2 = (29)(15) = 435 Cheers, Brent RELATED VIDEO _________________ Test confidently with gmatprepnow.com CEO Joined: 03 Jun 2019 Posts: 3182 Location: India GMAT 1: 690 Q50 V34 WE: Engineering (Transportation) Re: Each • in the mileage table above represents an entry indica  [#permalink] ### Show Tags 27 Jan 2020, 15:19 snkrhed wrote: Each • in the mileage table above represents an entry indicating the distance between a pair of the five cities. If the table were extended to represent the distances between all pairs of 30 cities and each distance were to be represented by only one entry, how many entries would the table then have? (A) 60 (B) 435 (C) 450 (D) 465 (E) 900 Attachment: Table.png Number of entries would be equal to the number of selections of 2 cities out of 30 = 30C2 = 435 IMO B Posted from my mobile device _________________ Kinshook Chaturvedi Email: kinshook.chaturvedi@gmail.com SVP Joined: 23 Feb 2015 Posts: 1936 Re: Each • in the mileage table above represents an entry indica  [#permalink] ### Show Tags 15 Apr 2020, 08:48 snkrhed wrote: Each • in the mileage table above represents an entry indicating the distance between a pair of the five cities. If the table were extended to represent the distances between all pairs of 30 cities and each distance were to be represented by only one entry, how many entries would the table then have? (A) 60 (B) 435 (C) 450 (D) 465 (E) 900 Attachment: Table.png When there are 5 cities (total 5*5 = 25 entries): the diagonal carries 5 entries, so removing the diagonal leaves 25 - 5 = 20 entries. So, each side of the diagonal carries 20/2 = 10 entries (10 entries in the filled part and 10 entries in the non-filled part). When there are 30 cities (total 30*30 = 900 entries): the diagonal carries 30 entries. Removing the diagonal leaves 900 - 30 = 870 entries. So, the number of entries in the table = 870/2 = 435.
The correct choice is B. _________________ Target Test Prep Representative Status: Founder & CEO Affiliations: Target Test Prep Joined: 14 Oct 2015 Posts: 11036 Location: United States (CA) Re: Each • in the mileage table above represents an entry indica  [#permalink] ### Show Tags 28 May 2020, 15:46 snkrhed wrote: Each • in the mileage table above represents an entry indicating the distance between a pair of the five cities. If the table were extended to represent the distances between all pairs of 30 cities and each distance were to be represented by only one entry, how many entries would the table then have? (A) 60 (B) 435 (C) 450 (D) 465 (E) 900 Attachment: Table.png Solution: This problem can best be solved using combinations. This problem is similar to one in which 30 sports teams are playing in a tournament in which every team plays every other team exactly once. No team plays itself, obviously, and the order of each pairing doesn't matter. [For example, if Team A plays Team B, the pairing of (Team A vs. Team B) is identical to (Team B vs. Team A)]. We would calculate 30C2, or the number of combinations of 30 items taken 2 at a time. We can solve this problem in the same way: 30C2 = (30 x 29) / 2! = 15 x 29 = 435 Alternate Solution: Notice that when there are 5 cities, there are 1 + 2 + 3 + 4 entries. If there were 6 cities, the last column would have contained 5 entries; therefore there would have been 1 + 2 + 3 + 4 + 5 entries. Notice that the last column always contains one less entry than the number of cities. Thus, if there were 30 cities, there would have been 1 + 2 + 3 + … + 29 = (29 * 30)/2 = 29 * 15 = 435 entries. _________________ # Scott Woodbury-Stewart Founder and CEO Scott@TargetTestPrep.com 225 Reviews 5-star rated online GMAT quant self study course See why Target Test Prep is the top rated GMAT quant course on GMAT Club. 
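All of the approaches in the thread above boil down to the same two counts, 30C2 and the triangular sum 1 + 2 + … + 29. As a quick sanity check (my own addition, not part of the original thread), both can be computed in a couple of lines:

```python
from math import comb

# Number of unordered pairs of 30 cities: one table entry per pair
pairs = comb(30, 2)

# Same count as the triangular sum 1 + 2 + ... + 29:
# city k (for k = 1..30) contributes k - 1 entries to its column
triangular = sum(k - 1 for k in range(1, 31))

assert pairs == triangular == 435
```

Both counts agree with answer choice (B).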
https://www.ssccglapex.com/a-lump-of-two-metals-weighing-18-g-is-worth-rs-87-but-if-their-weight-is-interchanged-it-would-be-worth-rs-78-60-if-the-price-of-one-metal-be-rs-6-70-per-gram-find-the-weight-of-the-other-metal/
### A lump of two metals weighing 18 g is worth Rs. 87 but if their weight is interchanged, it would be worth Rs. 78.60. If the price of one metal be Rs. 6.70 per gram, find the weight of the other metal in the mixture.

A. 8g
B. 12g
C. 15g
D. 18g

Answer: Option A

### Solution(By Apex Team)

Cost of (18 g of 1st metal + 18 g of 2nd metal) = Rs. 87 + Rs. 78.60 = Rs. 165.60
Cost of (1 g of 1st metal + 1 g of 2nd metal) = Rs. 165.60 ÷ 18 = Rs. 9.20
Hence cost of 1 g of the 2nd metal = 9.20 – 6.70 = Rs. 2.50
Mean price = Rs. $\Large\frac{87}{18}$ per gram
By the rule of alligation,
$\begin{array}{l}\Large\frac{\text { quantity of } 1^{\text {st }} \text { metal }}{\text { quantity of } 2^{\text {nd }} \text { metal }}\normalsize=\left(\frac{87}{18}-2.50\right):\left(6.70-\frac{87}{18}\right)=\frac{14}{6}: \frac{56}{30}\\ =5 :4\end{array}$
Quantity of 2nd metal = $\Large\frac{18 \times 4}{5+4}$ = 8 g
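The alligation result above can be cross-checked by solving the underlying linear equations directly. The short script below is my own verification (variable names are mine, not from the original solution): with weights a and b and per-gram prices 6.70 and q, the givens are a + b = 18, 6.70a + qb = 87 and 6.70b + qa = 78.60.

```python
total_w, v1, v2, p1 = 18.0, 87.0, 78.60, 6.70

# Summing the two valuations prices 18 g of EACH metal at Rs. 165.60,
# so 1 g of each together costs Rs. 9.20; subtracting p1 gives the
# unknown second price q (called p2 here):
p2 = (v1 + v2) / total_w - p1            # Rs. 2.50 per gram

# Substitute b = 18 - a into p1*a + p2*b = 87 and solve for a:
a = (v1 - p2 * total_w) / (p1 - p2)      # weight priced at Rs. 6.70/g
b = total_w - a                          # weight of the other metal

assert abs(p2 - 2.50) < 1e-9
assert abs(b - 8.0) < 1e-9               # matches option A: 8 g
```

The direct solve gives a = 10 g and b = 8 g, agreeing with the 5 : 4 alligation ratio.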
http://ijmaa.in/v9n3/9.html
A Simple Way to Estimate the Variation of the Gravitational Constant as a Function of Redshift in the Framework of Brans-Dicke Theory # Sudipto Roy1 1Department of Physics, St. Xavier’s College, Kolkata, West Bengal, India. Abstract: The objective of the present study is to find theoretically the nature of evolution of the time-varying gravitational constant $(G)$ and its relative time-rate of change $(\dot{G}/G)$ with respect to the redshift parameter $(z)$. For this purpose, we have used the field equations of the Brans-Dicke (BD) theory of gravity for a flat universe of zero pressure, with a homogeneous and isotropic space-time. Our entire formulation is based on four mathematical models constructed with empirical expressions involving the scale factor, BD scalar field and their time derivatives. Substituting these expressions into the field equations, we have determined the values of the constants associated with these ansatzes. It is clearly evident from these values that the gravitational constant increases as the redshift $(z)$ decreases with time. We have also determined the nature of variation of the relative time-rate of change of the gravitational constant $(\dot{G}/G)$. It has been found to be increasing as $z$ decreases with time. The variation of the gravitational constant and its relative time-rate of change, as functions of the redshift parameter, have been depicted graphically on the basis of the four models discussed in the present article. Based on their characteristics of variation, we have proposed an empirical relation representing the evolution of the gravitational constant $(G)$ as a function of time. Using this relation, we have determined the nature of dependence of redshift $(z)$ upon time and represented it graphically. Similar findings have been obtained from studies based on various other methods. An important feature of the present study is that all its findings have been obtained without solving the field equations. 
Keywords: Cosmology, Gravitational constant, Brans-Dicke theory, Scalar field, Redshift $(z)$. Cite this article as: Sudipto Roy, A Simple Way to Estimate the Variation of the Gravitational Constant as a Function of Redshift in the Framework of Brans-Dicke Theory, Int. J. Math. And Appl., vol. 9, no. 3, 2021, pp. 67-80.
http://blog.jpolak.org/?cat=10
Category Archives: algebraic-topology Let $\mathcal{A}$ be a small category and $\mathbf{B}\mathcal{A}$ its geometric realisation. It is evident that $\mathbf{B}\mathcal{A}$ and $\mathbf{B}\mathcal{A}^\circ$ are homotopy equivalent, and in fact homeomorphic. However, can we find functors that realise this equivalence? This post summarises some informal notes I have written on this following D. Quillen's paper Higher Algebraic […]
https://stats.libretexts.org/Bookshelves/Applied_Statistics/Book%3A_Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/15%3A_Linear_Regression/15.03%3A_Multiple_Linear_Regression
# 15.3: Multiple Linear Regression

The simple linear regression model that we’ve discussed up to this point assumes that there’s a single predictor variable that you’re interested in, in this case dan.sleep. In fact, up to this point, every statistical tool that we’ve talked about has assumed that your analysis uses one predictor variable and one outcome variable. However, in many (perhaps most) research projects you actually have multiple predictors that you want to examine. If so, it would be nice to be able to extend the linear regression framework to be able to include multiple predictors. Perhaps some kind of multiple regression model would be in order?

Multiple regression is conceptually very simple. All we do is add more terms to our regression equation. Let’s suppose that we’ve got two variables that we’re interested in; perhaps we want to use both dan.sleep and baby.sleep to predict the dan.grump variable. As before, we let $$\ Y_i$$ refer to my grumpiness on the i-th day. But now we have two X variables: the first corresponding to the amount of sleep I got and the second corresponding to the amount of sleep my son got. So we’ll let $$\ X_{i1}$$ refer to the hours I slept on the i-th day, and $$\ X_{i2}$$ refer to the hours that the baby slept on that day. If so, then we can write our regression model like this:

$$Y_{i}=b_{2} X_{i 2}+b_{1} X_{i 1}+b_{0}+\epsilon_{i}$$

As before, $$\ \epsilon_i$$ is the residual associated with the i-th observation, $$\ \epsilon_i = Y_i - \hat{Y_i}$$. In this model, we now have three coefficients that need to be estimated: $$\ b_0$$ is the intercept, $$\ b_1$$ is the coefficient associated with my sleep, and $$\ b_2$$ is the coefficient associated with my son’s sleep. However, although the number of coefficients that need to be estimated has changed, the basic idea of how the estimation works is unchanged: our estimated coefficients $$\ \hat{b_0}$$, $$\ \hat{b_1}$$ and $$\ \hat{b_2}$$ are those that minimise the sum squared residuals.
## 15.3.1 Doing it in R Multiple regression in R is no different to simple regression: all we have to do is specify a more complicated formula when using the lm() function. For example, if we want to use both dan.sleep and baby.sleep as predictors in our attempt to explain why I’m so grumpy, then the formula we need is this: dan.grump ~ dan.sleep + baby.sleep Notice that, just like last time, I haven’t explicitly included any reference to the intercept term in this formula; only the two predictor variables and the outcome. By default, the lm() function assumes that the model should include an intercept (though you can get rid of it if you want). In any case, I can create a new regression model – which I’ll call regression.2 – using the following command: regression.2 <- lm( formula = dan.grump ~ dan.sleep + baby.sleep, data = parenthood ) And just like last time, if we print() out this regression model we can see what the estimated regression coefficients are: print( regression.2 ) ## ## Call: ## lm(formula = dan.grump ~ dan.sleep + baby.sleep, data = parenthood) ## ## Coefficients: ## (Intercept) dan.sleep baby.sleep ## 125.96557 -8.95025 0.01052 The coefficient associated with dan.sleep is quite large, suggesting that every hour of sleep I lose makes me a lot grumpier. However, the coefficient for baby.sleep is very small, suggesting that it doesn’t really matter how much sleep my son gets; not really. What matters as far as my grumpiness goes is how much sleep I get. To get a sense of what this multiple regression model looks like, Figure 15.6 shows a 3D plot that plots all three variables, along with the regression model itself. ## 15.3.2 Formula for the general case The equation that I gave above shows you what a multiple regression model looks like when you include two predictors. Not surprisingly, then, if you want more than two predictors all you have to do is add more X terms and more b coefficients. 
In other words, if you have K predictor variables in the model then the regression equation looks like this: $$Y_{i}=\left(\sum_{k=1}^{K} b_{k} X_{i k}\right)+b_{0}+\epsilon_{i}$$
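The same least-squares estimation that lm() performs can be reproduced outside R by minimising the sum of squared residuals directly. The sketch below uses invented data (the numbers are mine, chosen to mimic the dan.sleep/baby.sleep pattern described above, not the book's actual parenthood data set) and NumPy's least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
dan_sleep = rng.uniform(5, 9, n)
baby_sleep = rng.uniform(3, 12, n)
# Invented "true" model: grumpiness depends strongly on dan.sleep only
grump = 125 - 9 * dan_sleep + 0.01 * baby_sleep + rng.normal(0, 1, n)

# Design matrix with a column of ones for the intercept,
# which lm() includes by default
X = np.column_stack([np.ones(n), dan_sleep, baby_sleep])
b0, b1, b2 = np.linalg.lstsq(X, grump, rcond=None)[0]

# The fit recovers a large negative coefficient for dan.sleep and a
# near-zero coefficient for baby.sleep, echoing the text's conclusion.
assert b1 < -8 and abs(b2) < 0.5
```

The coefficient estimates are exactly the values that minimise the sum of squared residuals, which is how the $$\ \hat{b_k}$$ were defined above.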
http://0space.org/c/1128-qs-2-when-particle-free
# QS 2: When is a particle free? For simplicity we consider the case of a single particle moving in a potential. Suppose we are given a wave function \begin{eqnarray*} \psi(x)=\left(\frac{\alpha}{\pi}\right)^{\frac{1}{4}} \exp(-\alpha x^2/2)\end{eqnarray*} as the state of the particle at some fixed time. What do we conclude? Is it the ground state of a particle bound to the point $x=0$ by a harmonic force, or is it the Gaussian wave packet of a free particle? The point to understand here is that we can say nothing about a particle by just looking at its wave function. We must know the Hamiltonian of the system. We have to see how the wave function changes with time in order to see whether it belongs to a free particle or an interacting particle. We must know the *time dependence* of the wave function. And not just of one wave function, but that of a complete basis of wave functions. For the above example, if we know $\psi(x,t)$ for at least a small time interval, then we can calculate \begin{eqnarray*} i\hbar\frac{\partial\psi(x,t)}{\partial t}+\frac{\hbar^2}{2m}\frac{\partial^2\psi(x,t)}{\partial x^2}. \end{eqnarray*} If it is zero (for all $x$), then we can say it is a wave function of a free particle. But whether the system is actually free or not would require knowing the Hamiltonian as an operator.
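As a concrete numerical illustration of the check described above (my own sketch, not part of the original page): a plane wave $\psi(x,t)=e^{i(kx-\omega t)}$ with the free-particle dispersion $\omega=\hbar k^2/2m$ makes the expression vanish, which finite differences confirm:

```python
import cmath

hbar, m, k = 1.0, 1.0, 2.0
omega = hbar * k**2 / (2 * m)          # free-particle dispersion relation

def psi(x, t):
    return cmath.exp(1j * (k * x - omega * t))

def free_particle_expression(x, t, h=1e-4):
    # i*hbar * d(psi)/dt, by a central finite difference
    dpsi_dt = (psi(x, t + h) - psi(x, t - h)) / (2 * h)
    # (hbar^2 / 2m) * d^2(psi)/dx^2
    d2psi_dx2 = (psi(x + h, t) - 2 * psi(x, t) + psi(x - h, t)) / h**2
    return 1j * hbar * dpsi_dt + (hbar**2 / (2 * m)) * d2psi_dx2

r = free_particle_expression(0.7, 0.3)
assert abs(r) < 1e-6    # vanishes: consistent with a free-particle wave function
```

If we instead chose $\omega\neq\hbar k^2/2m$, the expression would not vanish, signalling that this $\psi(x,t)$ does not evolve like a free particle.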
http://www.abstractmath.org/Word%20Press/?tag=theory
## Idempotents by sketches and forms This post provides a detailed description of an example of a mathematical structure presented as a sketch and as a form.  It is a supplement to my article An Introduction to forms.  Most of the constructions I mention here are given in more detail in that article. It helps in reading this post to be familiar with the basic ideas of category, including commutative diagram and limit cone, and of the concepts of logical theory and model in logic. ### Sketches and forms sketch of a mathematical structure is a collection of objects and arrows that make up a digraph (directed graph), together with some specified cones, cocones and diagrams in the digraph.  A model of the sketch is a digraph morphism from the digraph to some category that takes the cones to limit cones, the cocones to colimit cocones, and the diagrams to commutative diagrams.  A morphism of models of a sketch from one model to another in the same category is a natural transformation.  Sketches can be used to define all kinds of algebraic structures in the sense of universal algebra, and many other types of structures (including many types of categories). There are many structures that sketches cannot sketch.  Forms were first defined in [4].  They can define anything a sketch can define and lots of other things.  [5] gives a leisurely description of forms suitable for people who have a little bit of knowledge of categories and [1] gives a more thorough description. An idempotent is a very simple kind of algebraic structure.  Here I will describe both a sketch and a form for idempotents. In another post I will do the same for binops (magmas). ### Idempotent An idempotent is a unary operation $u$ for which $u^2=u$. • If $u$ is a morphism in a category whose morphisms are set functions, a function $u:S\to S$ is an idempotent if $u(u(x))=u(x)$ for all $x$ in the domain. • Any identity element in any category is an idempotent. 
• A nontrivial example is the projection $u(x,y):=(x,0)$ on the real plane.

Any idempotent $u$ makes the following diagram commute and that diagram can be taken as the definition of idempotent in any category. The diagram is in green.  In this post (and in [5]) diagrams in the category of models of a sketch or a form are shown in green.

### A sketch for idempotents

The sketch for idempotents contains a digraph with one object and one arrow from that object to itself (above left) and one diagram (above right).  It has no cones or cocones.  So this is an almost trivial example.  When being expository (well, I can hardly say "when you are exposing") your first example should not be trivial, but it should be easy.  Let's call the sketch $\mathcal{S}$.

• The diagram looks the same as the green diagram above.  It is in black, because I am showing things in syntax (things in sketches and forms) in black and semantics (things in categories of models) in green.
• The green diagram is a commutative diagram in some category (unspecified).
• The black diagram is a diagram in a digraph. It doesn't make sense to say it is commutative because digraphs don't have composition of arrows.
• Each sketch has a specific digraph and lists of specific diagrams, cones and cocones.  The left digraph above is not in the list of diagrams of $\mathcal{S}$ (see below).

The definition of sketch says that every diagram in the official list of diagrams of a given sketch must become a commutative diagram in a model.  This use of the word "become" means in this case that a model must be a digraph morphism $M:\mathcal{S}\to\mathcal{C}$ for some category $\mathcal{C}$ for which the diagram below commutes.

This sketch generates a category called the Theory ("Cattheory" in [5]) of the sketch $\mathcal{S}$, denoted by $\text{Th}(\mathcal{S})$.  It is roughly the "smallest" category containing $f$ and $C$ for which the diagrams in $\mathcal{S}$ are commutative.
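The defining equation $u\circ u=u$ is easy to check concretely. Here is a minimal Python sketch using the projection of the plane onto the $x$-axis; the function and the sample points are my own illustration, not part of the sketch machinery:

```python
# Check the idempotent law u(u(p)) == u(p) for the projection
# u(x, y) = (x, 0) on a handful of sample points of the real plane.
def u(p):
    x, y = p
    return (x, 0.0)

samples = [(3.0, 4.0), (-1.5, 2.0), (0.0, 0.0), (7.0, -7.0)]
for p in samples:
    assert u(u(p)) == u(p), f"u is not idempotent at {p}"
print("u o u == u on all samples")
```

Note that applying $u$ a second time changes nothing, which is exactly what the green diagram asserts.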
This theory contains the generic model $G:\mathcal{S}\to \text{Th}(\mathcal{S})$ that takes $f$ and $C$ to themselves.

• $G$ is "generic" because anything you prove about $G$ is true of every model of $\mathcal{S}$ in any category.
• In particular, in the category $\text{Th}(\mathcal{S})$, $G(f)\circ G(f)=G(f)$.
• $G$ is a universal morphism in the sense of category theory: It lifts any model $M:\mathcal{S}\to\mathcal{C}$ to a unique functor $\bar{M}:\text{Th}(\mathcal{S})\to\mathcal{C}$ with $M=\bar{M}\circ G$, which can therefore be regarded as the same model.  See Note [2].

Since models are functors, morphisms between models are natural transformations.  This gives what you would normally call homomorphisms for models of almost any sketchable structure.  In [2] you can find a sketch for groups, and indeed the natural transformations between models are group homomorphisms.

### Sketching categories

You can sketch categories with a sketch CatSk containing diagrams and cones, but no cocones.  This is done in detail in [3]. The resulting theory $\text{Th}(\mathbf{CatSk})$ is required to be the least category-with-finite-limits generated by $\mathbf{CatSk}$ with the diagrams becoming commutative diagrams and the cones becoming limit cones.  This theory is the FL-Theory for categories, which I will call ThCat (suppressing mention of FL).

### Doctrines

In general the theory of a particular kind of structure contains a parameter that denotes its doctrine. The sketch $\mathcal{S}$ for idempotents didn't require cones, but you can construct theories $\text{Th}(\mathcal{S})$, $\text{Th}(\text{FP},\mathcal{S})$ and $\text{Th}(\text{FL},\mathcal{S})$ for idempotents (FP means it is a category with finite products). In a strong sense, all these theories have the same models, namely idempotents, but the doctrine of the theory allows you to use more mechanisms for proving properties of idempotents.
(The doctrine for $\text{Th}(\mathcal{S})$ provides for equational proofs for unary operations only, a doctrine which has no common name such as FP or FS.)  The paper [1] is devoted to explicating proof in the context of forms, using graphs and diagrams instead of formulas that are strings of symbols.

### Describing composable pairs of arrows

The form for any type of structure is constructed using the FL theory for some type of category, for example category with all limits, cartesian closed category, topos, and so on.  The form for idempotents can be constructed in ThCat (no extra structure needed).  The form for reflexive function spaces (for example) needs the FL theory for cartesian closed categories (see [5]).

Such an FL theory must contain objects $\text{ob}$ and $\text{ar}$ that become the set of objects and the set of arrows of the category that a model produces.  (Since FL theories have models in any category with finite limits, I could have said "object of objects" and "object of arrows".  But in this post I will talk about only models in Set.)

ThCat contains an object $\text{ar}_2$ that represents composable pairs of arrows.  That requires a cone to define it: This must become a limit cone in a model.

• I usually show cones in blue.
• $\text{dom}$ and $\text{cod}$ give (in a model) the domain and codomain of an arrow.
• $\text{lfac}$ gives the left factor and $\text{rfac}$ gives the right factor. It is usually useful to give suggestive names to some of the projections in situations like this, since they will be used elsewhere (where they will be black!).
• The objects and arrows in the diagram (including $\text{ar}_2$) are already members of the FL theory for categories.
• This diagram is annotated in green with sample names of objects and arrows that might exist in a model.  Atish and I introduced that annotation system in [1] to help you chase the diagram and think about what it means.
This cone is a graph-based description of the object of composable arrows in a category (as opposed to a linguistic or string-based description).

### Describing endomorphisms

Now an idempotent must be an endomorphism, so we provide a cone describing the object of endomorphisms in a category. This cone already exists in the FL theory for categories.

• $\text{loop}$ is a monomorphism (in fact a regular mono because it is the mono produced by an equalizer) so it is not unreasonable to give the element annotation for $\text{endo}$ and $\text{ar}$ the same name.
• "$\text{dc}$" takes $f$ to its domain and codomain.
• $\text{loop}$ and "$\text{dc}$" were not created when I produced the cone above.  They were already in the FL theory for categories.

Since the cone defining $\text{ar}_2$ is a limit cone (in the Theory, not in a model), if you have any other commutative cone (purple) to that cone, a unique arrow (red) $\text{diag}$ automatically is present as shown below: This particular purple cone is the limit cone defining $\text{endo}$ just defined.  Now $\text{diag}$ is a specific arrow in the FL theory for categories. In a model of the theory (which is a category in Set or in some other category), $\text{diag}$ takes an endomorphism to the corresponding pair of composable arrows.

### The object of idempotents

Now using these arrows we can define the object $\text{idm}$ of idempotents using the diagram below. See Note [3].

$\text{idm}$ is an object in ThCat.  In any category, in other words in any model of ThCat, $\text{idm}$ becomes the set of idempotent arrows in that category. In the terminology of [5], the object $\text{idm}$ is the form for idempotents, and the cone it is the limit of is the description of idempotent.

Now take ThCat and adjoin an arrow $g:1\to\text{idm}$.  You get a new FL category I will call the FL-theory of the form for idempotents.  A model of the theory of the form in Set is a category with a specified idempotent.
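In a model in Set, the set that $\text{idm}$ becomes can be computed by brute force. Here is a Python sketch for a small one-object category, namely the monoid of all functions $\{0,1\}\to\{0,1\}$; this example category is my own choice, made just so that $\text{idm}$ is visible:

```python
# One-object category: the monoid of all functions {0,1} -> {0,1},
# each presented as the tuple (f(0), f(1)).
arrows = {"id": (0, 1), "swap": (1, 0), "const0": (0, 0), "const1": (1, 1)}

def compose(g, f):
    """g . f as the tuple (g(f(0)), g(f(1)))."""
    return (arrows[g][arrows[f][0]], arrows[g][arrows[f][1]])

# With one object every arrow is an endomorphism, so the set idm is
# just the arrows equal to their own self-composite:
idm = {f for f in arrows if compose(f, f) == arrows[f]}
print(sorted(idm))   # ['const0', 'const1', 'id'] -- swap is not idempotent
```

The constant maps and the identity satisfy $f\circ f=f$; the swap composes with itself to the identity, so it is excluded, matching the equalizer description of $\text{idm}$.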
A particular example of a model of the form idm in the category of real linear vector spaces is the map $u(x,y):=(x,0)$ of the (set of points of) the real plane to itself (it is an idempotent endomorphism of $\textbf{R}^2$). This example is typical of forms and their models, except in one way:  Idempotents are also sketchable, as I described above.  Many mathematical structures can be perceived as models of forms, but not models of sketches, such as reflexive function spaces as in [5].

### Notes

[1] The diagrams shown in this post were drawn in Mathematica.  The code for them is shown in the notebook SketchFormExamples.nb.  I am in the early stages of developing a package for drawing categorical diagrams in Mathematica, so this notebook shows the diagrams defined in very primitive machine-code-like Mathematica.  The package will not rival xypic for TeX any time soon.  I am doing it so I can produce diagrams (including 3D diagrams) you can manipulate.

[2] In practice I would refer to the names of the objects and arrows in the sketch rather than using the M notation:  I might write $f\circ f=f$ instead of $M(f)\circ M(f)=M(f)$ for example.  Of course this confuses syntax with semantics, which sounds like a Grievous Sin, but it is similar to what we do all the time in writing math:  "In a semigroup, $x$ is an idempotent if $xx=x$."  We use the same notation for the binary operation for any semigroup and we use $x$ as an arbitrary element of most anything.  Actually, if I write $f\circ f=f$ I can claim I am talking in the generic model, since any statement true in the generic model is true in any model.  So there.

[3] In the Mathematica notebook SketchFormExamples.nb in which I drew these diagrams, this diagram is plotted in Euclidean 3-space and can be viewed from different viewpoints by running your cursor over it.

### References

[1] Atish Bagchi and Charles Wells, Graph-Based Logic and Sketches, draft, September 2008, on ArXiv.
[2] Michael Barr and Charles Wells, Category Theory for Computing Science (1999). Les Publications CRM, Montreal (publication PM023).

[3] Michael Barr and Charles Wells, Toposes, Triples and Theories (2005). Reprints in Theory and Applications of Categories 1.

[4] Charles Wells, A generalization of the concept of sketch, Theoretical Computer Science 70, 1990.

[5] Charles Wells, An Introduction to forms.

## Function as map

This is a first draft of an article to eventually appear in abstractmath.

### Images and metaphors

To explain a math concept, you need to explain how mathematicians think about the concept. This is what in abstractmath I call the images and metaphors carried by the concept. Of course you have to give the precise definition of the concept and basic theorems about it. But without the images and metaphors most students, not to mention mathematicians from a different field, will find it hard to prove much more than some immediate consequences of the definition. Nor will they have much sense of the place of the concept in math and applications.

Teachers will often explain the images and metaphors with handwaving and pictures in a fairly vague way. That is good to start with, but it's important to get more precise about the images and metaphors. That's because images and metaphors are often not quite a good fit for the concept — they may suggest things that are false and not suggest things that are true. For example, if a set is a container, why isn't the element-of relation transitive? (A coin in a coinpurse in your pocket is a coin in your pocket.) "A metaphor is a useful way to think about something, but it is not the same thing as the same thing." (I think I stole that from the Economist.)

Here, I am going to get precise with the notion that a function is a map. I am acting like a mathematician in "getting precise", but I am getting precise about a metaphor, not about a mathematical object.
### A function is a map

A map (ordinary paper map) of Minnesota has the property that each point on the paper represents a point in the state of Minnesota. This map can be represented as a mathematical function from a subset of a 2-sphere to $\mathbb{R}^2$. The function is a mathematical idealization of the relation between the state and the piece of paper, analogous to the mathematical description of the flight of a rocket ship as a function from $\mathbb{R}$ to $\mathbb{R}^3$. The Minnesota map-as-function is probably continuous and differentiable, and as is well known it can be angle preserving or area preserving but not both.

So you can say there is a point on the paper that represents the location of the statue of Paul Bunyan in Bemidji. There is a set of points that represents the part of the Mississippi River that lies in Minnesota. And so on.

A function has an image. If you think about it you will realize that the image is just a certain portion of the piece of paper. Knowing that a particular point on the paper is in the image of the function is not the information contained in what we call "this map of Minnesota". This yields what I consider a basic insight about function-as-map:  The map contains the information about the preimage of each point on the paper map. So: The map in the sense of a "map of Minnesota" is represented by the whole function, not merely by the image.

I think that is the essence of the metaphor that a function is a map. And I don't think newbies in abstractmath always understand that relationship.

### A morphism is a map

The preceding discussion doesn't really represent how we think of a paper map of Minnesota. We don't think in terms of points at all. What we see are marks on the map showing where some particular things are. If it is a road map it has marks showing a lot of roads, a lot of towns, and maybe county boundaries.
If it is a topographical map it will show level curves showing elevation. So a paper map of a state should be represented by a structure preserving map, a morphism. Road maps preserve some structure, topographical maps preserve other structure. The things we call "maps" in math are usually morphisms.

For example, you could say that every simple closed curve in the plane is an equivalence class of maps from the unit circle to the plane, with equivalence meaning "forget the parametrization". The very fact that I have to mention forgetting the parametrization shows that the commonest mathematical way to talk about morphisms is as point-to-point maps with certain properties. But we think about a simple closed curve in the plane as just a distorted circle. The point-to-point correspondence doesn't matter. So this example is really talking about a morphism as a shape-preserving map.

Mathematicians introduced points into talking about preserving shapes in the nineteenth century and we are so used to doing that that we think we have to have points for all maps. Not that points aren't useful. But I am analyzing the metaphor here, not the technical side of the math.

### Groups are functors

People who don't do category theory think the idea of a mathematical structure as a functor is weird. From the point of view of the preceding discussion, a particular group is a functor from the generic group to some category. (The target category is Set if the group is discrete, Top if it is a topological group, and so on.) The generic group is a group in a category called its theory or sketch that is just big enough to let it be a group. If the theory is the category with finite products that is just big enough then it is the Lawvere theory of the group. If it is a topos that is just big enough then it is the classifying topos of groups.
The theory in this sense is equivalent to some theory in the sense of string-based logic, for example the signature-with-axioms (equational theory) or the first order theory of groups. Johnstone's Elephant book is the best place to find the translation between these ideas. A particular group is represented by a finite-limit-preserving functor on the algebraic theory, or by a logical functor on the classifying topos, and so on; constructions which bring with them the right concept of group homomorphisms as well (they will be any natural transformations).

The way we talk about groups mimics the way we talk about maps. We look at the symmetric group on five letters and say its multiplication is noncommutative. "Its multiplication" tells us that when we talk about this group we are talking about the functor, not just the values of the functor on objects. We use the same symbols of juxtaposition for multiplication in any group, "$1$" or "$e$" for the identity, "$a^{-1}$" for the inverse of $a$, and so on. That is because we are really talking about the multiplication, identity and inverse function in the generic group — they really are the same for all groups. That is because a group is not its underlying set, it is a functor. Just like the map of Minnesota "is" the whole function from the state to the paper, not just the image of the function.

## Mathematical concepts

This post was triggered by John Armstrong's comment on my last post. We need to distinguish two ideas: representations of a mathematical concept and the total concept.  (I will say more about terminology later.)

Example: We can construct the quotient of the kernel of a group homomorphism by taking its cosets and defining a multiplication on them.  We can construct the image of the homomorphism by taking the set of values of the homomorphism and using the multiplication induced by the codomain group.
The quotient group and the image are the same mathematical structure in the sense that anything useful you can say about one is true of the other.  For example, it may be useful to know the cardinality of the quotient (image) but it is not useful to know what its elements are.

But hold on, as the Australians say, if we knew that the codomain was an Abelian group then we would know that the quotient group was Abelian, because the elements of the image form a subgroup of the codomain. (But the Australians I know wouldn't say that.)

Now that kind of thinking is based on the idea that the elements of the image are "really" elements of the codomain whereas elements of the quotient are "really" subsets of the domain.  That is outmoded thinking.  The image and the quotient are the same in all important aspects because they are naturally isomorphic.  We should think of the quotient as just as much a subgroup of the codomain as the image is.  John Baez (I think) would say that to ask whether the subgroup embedding is the identity on elements or not is an evil question.

Let's step back and look at what is going on here.  The definition of the quotient group is a construction using cosets.  The definition of the image is a construction using values of the homomorphism.  Those are two different specific representations of the same concept.

But what is the concept, as distinct from its representations?  Intuitively, it is

• All the constructions made possible by the definition of the concept.
• All the statements that are true about the concept.

(That is not precise.)

The total concept is like the clone plus the equational theory of a specific type of algebra in the sense of universal algebra.  The clone is all the operations you can construct knowing the given signature and equations and the equational theory is the set of all equations that follow from them.  That is one way of describing it.
Another is the monad in Set that gives the type of algebra — the operations are the arrows and the equations are the commutative diagrams.

Note: The preceding description of the monad is not quite right.  Also the whole discussion omits mention of the fact that we are in the world (doctrine) of universal algebra.  In the world of first order logic, for example, we need to refer to the classifying topos of the category of algebras of that type (or to its first order theory).

Terminology

We need better terminology for all this.  I am not going to propose better terminology, so this is a shaggy dog story.

Math ed people talk about a particular concept image of a concept as well as the total schema of the concept. In categorical logic, we talk about the sketch or presentation of the concept vs. the theory. The theory is a category (of the kind appropriate to the doctrine) that contains all the possible constructions and commutative diagrams that follow from the presentation.

In this post I have used "total concept" to refer to the schema or theory.  I have referred to the particular things as "representations" (for example, construct the image of a homomorphism by cosets or by values of the homomorphism).

"Representation" does not have the same connotations as "presentation".  Indeed a presentation of a group and a representation of a group are mathematically two different things.  But I suspect they are two different aspects of the same idea.

All this needs to be untangled.  Maybe we should come up with two completely arbitrary words, like "dostak" and "dosh".
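Returning to the opening example: the claim that the quotient by the kernel and the image are "the same structure" can be checked mechanically in a small case. Here is a Python sketch for the homomorphism f: Z6 → Z3, f(x) = x mod 3; the choice of groups and map is mine, purely for illustration:

```python
# f: Z6 -> Z3, f(x) = x mod 3.  Build the image and the cosets of the
# kernel, then check the coset -> value bijection respects the operation.
Z6 = range(6)
f = lambda x: x % 3

kernel = {x for x in Z6 if f(x) == 0}                      # {0, 3}
coset_of = lambda x: frozenset((x + k) % 6 for k in kernel)
cosets = {coset_of(x) for x in Z6}                         # three cosets
image = {f(x) for x in Z6}                                 # {0, 1, 2}

# Every element of a coset has the same f-value, so labelling a coset
# by that value is well defined, and the labels exhaust the image:
label = {c: f(min(c)) for c in cosets}
assert all(len({f(x) for x in c}) == 1 for c in cosets)
assert set(label.values()) == image

# The labelling is a homomorphism: the product (here, sum) of two
# cosets is labelled by the sum mod 3 of their labels.
for c in cosets:
    for d in cosets:
        s = coset_of((min(c) + min(d)) % 6)
        assert label[s] == (label[c] + label[d]) % 3
print("Z6/ker f and im f carry the same group structure")
```

Nothing in the checks above depends on whether the elements are cosets or values; that element-blindness is exactly the point of the post.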
http://math.stackexchange.com/questions/40076/how-can-i-solve-this-non-linear-differential-equation
# How can I solve this non-linear differential equation?

I'm trying to solve the equation $$y' = 1 - y^2$$

Here is my attempt. Divide by $(1-y^2)$: $$\frac{y'}{1-y^2} = 1$$

Integrate both sides: $$\frac{1}{2}\log\left|\frac{y+1}{y-1}\right|=t+c$$

Rearrange: $$y = \frac{ke^{2t}+1}{ke^{2t}-1}$$

I'd have thought that solution was right, but we have to figure out a specific solution with $y(0) = 0$. But this isn't possible with the above equation.

- Doesn't $y(0)=0$ imply $k=-1$? –  lhf May 19 '11 at 13:41
- While I was writing this, I rewrote $e^{2c}$ = $k$. Am I allowed to set k to -1? –  Hannesh May 19 '11 at 13:44
- +1 for showing your work. No, working in the reals, you cannot have $k=-1$ when it came from $e^{2c}$. Good for you to keep track of that; it is easy to miss. –  Ross Millikan May 19 '11 at 13:56
- @Ross, @Luboš: I don't think complex numbers are the issue here. When seeking real-valued solutions, one can indeed stay completely within the real realm, if one handles the absolute value signs correctly. (Cont.) –  Hans Lundmark May 19 '11 at 16:40
- (Cont.) From the integrated expression it follows that $\left| \frac{y+1}{y-1} \right| = \exp 2(t+c)$, hence $\frac{y+1}{y-1} = \pm e^{2c} e^{2t}$. Now let $k = \pm e^{2c}$; then $k$ can be anything except zero. By letting $k$ run through the nonzero real numbers, you get all the real-valued solutions $y(t)$, except the constant ones $y(t)=1$ and $y(t)=-1$ which should have been noted separately before dividing by $1-y^2$. –  Hans Lundmark May 19 '11 at 16:40

Since you want a solution near $y=0$, you should use $1-y$ in the denominator (as it will be positive) and can remove the absolute value signs. This changes some signs in your answer, giving $$y = \frac{ke^{2t}-1}{ke^{2t}+1}$$ and $k=1$ gives $y(0)=0$

- I thought this wasn't legal since $k$ came from $e^{2c}$ (finding $c$ would involve the log of a negative number); however I realized that there is no reason why $c$ can't be complex. –  Hannesh May 19 '11 at 13:54
- Dear @Hannesh, it's not only legal but mandatory to allow all integration constants throughout the calculation being arbitrary complex numbers. Solving equations - algebraic or differential - in the reals isn't simpler than in complex numbers. Quite on the contrary, it's more complicated because you must solve it using all possible complex values of the parameters, and at the very end, you must do an extra job of filtering out the solutions that are not real. See the exchanges right under your question. –  Luboš Motl May 19 '11 at 14:06
- See Hans's comment above. There's no need for complex valued integration constants here as long as you don't ignore the absolute value. –  cch May 19 '11 at 19:58

Reducing from what you have a little more, we get that it is equal to Tanh[x-k].

    Tanh[-k] == 0   (* setting x to zero *)

Therefore k = 0, leaving Tanh[x] as your function.

- Exactly, this is the right compact form of the solution. tanh is sinh/cosh so its derivative is $(\cosh^2 t - \sinh^2 t)/\cosh^2 t = 1/\cosh^2 t$ which is equal to $1-\tanh^2 t$, indeed. –  Luboš Motl May 19 '11 at 14:04

I wrote it down and solved it in a slightly different way. The first thing you should notice is that $y = 1$ and $y = -1$ are the two constant solutions, which allows you then to divide $y'$ by $1-y^2$, since you want to study it for $y(0) \in (-1,1)$, knowing that any solution starting in $(-1,1)$ stays there (or dies in 1). Then yes, with some algebra you manage to get $\left(\log \frac{1+y}{1-y}\right)' = 2$ if I haven't screwed up with the signs; now integrating it from 0 to $t$ you get: $\log \frac{1+y(t)}{1-y(t)} - \log \frac{1+y(0)}{1-y(0)} = 2t$ without the absolute value since everything in the argument of the logs is non-negative.

By imposing $y(0) = 0$ the second term in the left vanishes and you're left with an easy expression that if inverted gives the following: $y(t) = \frac{e^{2t} - 1}{e^{2t} + 1}$ which is simply $y(t) = \tanh (t)$ and of course double checking $y(0) = 0$.
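Whichever derivation you prefer, the claim $y(t)=\tanh(t)$ is easy to sanity-check numerically. A short Python sketch follows; the integration interval and step count are arbitrary choices of mine:

```python
import math

# Integrate y' = 1 - y^2 with y(0) = 0 by classical fourth-order
# Runge-Kutta and compare against the closed form y(t) = tanh(t).
def f(y):
    return 1.0 - y * y

def rk4(y0, t_end, n):
    h, y = t_end / n, y0
    for _ in range(n):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

y_num = rk4(0.0, 2.0, 1000)
print(abs(y_num - math.tanh(2.0)))   # tiny: the numerics agree with tanh
```

The discrepancy is at the level of the integrator's truncation error, which supports the tanh solution (and the RK4 scheme is standard, not specific to this problem).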
http://gmatclub.com/forum/georgetown-38620.html?fl=similar
# Georgetown

Georgetown [#permalink]  16 Nov 2006, 16:25
Any here applying to Georgetown?

Re: Georgetown [#permalink]  16 Nov 2006, 17:35
sujith wrote:
Any here applying to Georgetown?
I was. I decided not to.

Re: Georgetown [#permalink]  16 Nov 2006, 17:38
rhyme wrote:
sujith wrote:
Any here applying to Georgetown?
I was. I decided not to.
Rhyme, would you please tell me why not? By the time I'm in DC, I will stop by Georgetown for an interview, because it's very close to where I will be living, but I'm still considering if I should apply to Georgetown or not...

Re: Georgetown [#permalink]  16 Nov 2006, 18:25
darren1010 wrote:
rhyme wrote:
sujith wrote:
Any here applying to Georgetown?
I was. I decided not to.
Rhyme, would you please tell me why not? By the time I'm in DC, I will stop by Georgetown for an interview, because it's very close to where I will be living, but I'm still considering if I should apply to Georgetown or not...
No good reason not to apply. I simply don't consider it exclusive enough for me to give up 2 years of salary and spend another $100,000 on, so I dropped it.

Re: Georgetown [#permalink]  16 Nov 2006, 18:28
rhyme wrote:
No good reason not to apply. I simply don't consider it exclusive enough for me to give up 2 years of salary and spend another $100,000 on, so I dropped it.
Spend another $100,000? Man, Rhyme, what have you been buying lately?!

Re: Georgetown [#permalink]  17 Nov 2006, 03:21
batchgmat wrote:
rhyme wrote:
No good reason not to apply. I simply don't consider it exclusive enough for me to give up 2 years of salary and spend another $100,000 on, so I dropped it.
Spend another $100,000? Man, Rhyme, what have you been buying lately?!
Poor phrasing on my part. In the back of my mind, I probably said another 100,000 cause thats roughly my annual salary so it feels like "another".

Re: Georgetown [#permalink]  17 Nov 2006, 05:55
sujith wrote:
Any here applying to Georgetown?
Scrambling to get essays ready for Dec 1st deadline. 50-50 if I'll have the essays in good enough shape by then though.

how was the interview? what was your overall impression of the school?
sujith wrote:
how was the interview? what was your overall impression of the school?
It wasn't a campus interview so I haven't had the chance to see the school yet. My only concern about the facilities is that I have heard that the building is in need of improvement. They are building a new one but it wont be ready until 2008. The interview was very conversational and she (Wendy Weiss) was very nice. Standard questions: Why MBA? Why Georgetown? Walk me through your resume, if I were to put together a team what would I tell your teammates that you bring to the table? What clubs would you be interested in? etc.. Like I said, it didn't feel high pressure at all (Marylands was more high pressure), but more conversational. Just do some research on the program. Hope this helps.
Last edited by Xeo78 on 17 Nov 2006, 09:56, edited 1 time in total.

I'm currently in the Certificate of Business Administration there (http://www.cpd.georgetown.edu/business_ ... ation.html) and am loving it! The teachers in the program are the same ones from the MBA program, so its a good test to see if the MBA commitment makes sense. The teaching is great, the campus is awesome, and the high international flavor makes class interesting. I wasn't originally going to do the MBA with Georgetown but it's looking like a top choice of mine because of the proximity to surrounding government agencies. I can't be out of work for more than 1 year so the location is really key. Also, there are some big companies with their headquarters in the area. I know that Booz Allen and Bearingpoint are located in McLean, VA (about 10 minutes away). If you want to know more about the area, give me a PM.
https://en.wikibooks.org/wiki/Clock_and_Data_Recovery/Structures_and_types_of_CDRs/The_CDR%27_VCO
# Clock and Data Recovery/Structures and types of CDRs/The CDR's VCO

The VCO is a fundamental piece of every slave CDR.

## VCO characteristic

A VCO block is present in some (not all) types of CDRs. It is present in all slave CDRs, regenerators and end points (but not in phase aligners).

The VCO (Voltage Controlled Oscillator) is a circuit that outputs a single frequency signal (some VCOs output a sinusoid, some VCOs output a squarewave) in response to the level of the signal applied to its input; the frequency of its output is proportional to the value of its input signal (the latter is sometimes an analog voltage, sometimes a digital number):

output_frequency [cycles/sec] = VCO_gain [cycles/sec/volt] * control_signal [volt]

It is convenient to study the PLL in the s or jω domain using the phase as the output variable. The model equation is then:

output phase Y(s) [rad] = Control_signal(s) [volt] * GVCO [rad/sec/volt] / s [1/sec]

Note that the dimension of the complex variable s is [sec−1].

An independent oscillator always runs at its fixed free-running frequency ffr, but the ffr of a VCO is defined by the level of a specific control signal. The VCO of a PLL loop runs at its ffr when the output of the amplifier/filter block assumes its mid (= indifferent) value (when LOS = 1, for instance).

The VCO shall be made to track the frequency of the incoming signal pulses (fp) by the closed-loop operation of the PLL. As the free-running frequency of the VCO, ffr, never exactly coincides with fp, the control signal, while the PLL is tracking the input signal, exhibits an offset from its center value, proportional to the frequency mismatch:

Ed = steady-state drive error = (ωp – ωfr) / GVCO

where:

GVCO [rad/sec/volt] = slope of the ideal curve ≈ 2π (fmax – fmin) [rad/sec] / (max - min) [volt]

Ed should not be confused with Es (which has been introduced earlier in this book and will be addressed again in some special cases further on).
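As a numeric sketch of the steady-state drive error defined above (the line rate, accuracy and control range below are illustrative assumptions, not values from the text):

```python
import math

# Hypothetical numbers: a 155.52 MHz line rate, a VCO whose free-running
# frequency is 100 ppm low, and a +/-1 V control range spanning
# +/-500 ppm around f_fr.
f_p = 155.52e6                      # incoming pulse frequency [Hz]
f_fr = f_p * (1 - 100e-6)           # VCO free-running frequency [Hz]

# VCO gain: 2*pi*(f_max - f_min) [rad/s] over the 2 V control range.
f_max = f_fr * (1 + 500e-6)
f_min = f_fr * (1 - 500e-6)
G_vco = 2 * math.pi * (f_max - f_min) / 2.0   # [rad/s/V]

# Steady-state drive error: the control-signal offset the loop must hold
# to keep the VCO at f_p instead of f_fr.
E_d = 2 * math.pi * (f_p - f_fr) / G_vco      # [V]
print(f"E_d = {E_d:.3f} V")   # a 100 ppm offset over +/-500 ppm -> ~0.2 V
```

As expected, a mismatch equal to one fifth of the one-sided control range parks the drive signal at one fifth of its swing.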
Ed is always present in a PLL and has a non-zero value as in the formula above, no matter what the loop type of the PLL is. Es instead is non-zero only in type 1 PLLs (1-1 and 2-1), while it is zero in type 2 PLLs (2-2).

The VCO is very often the most critical block in the CDR, because:

1. it operates at the frequency of the received line pulses fp (or close to it). It therefore uses the fastest circuitry in the PLL, along with the phase comparator.
2. the value of ffr must be very precise. Precise components like quartz crystals or trimmed R, C, L must therefore be used inside the VCO (the phase comparator does not require precise components).
3. it generates most of the added phase noise that the PLL cannot reject or mitigate.

The block that precedes the VCO, and that drives it, i.e. the amplifier/filter, is less critical in the sense of cost, need of precise components and noise. As a consequence, it is convenient and correct to model and to simulate the PLL under the simplifying hypothesis that the circuit that drives the VCO matches with its output range exactly the input range of the VCO connected to it.

## Frequency accuracy[1] of the VCO

The free-running frequency of the VCO, ffr, represents what the CDR knows about the frequency (fp) of the signal to lock into. The difference (fp – ffr) is a known parameter of the CDR, called accuracy and often expressed in ppm as: (fp – ffr) / fp.

When the acquisition begins, the fp – ffr accuracy is also the maximum drift in a second (in cycles per second = Hz, or in rad/sec) between the phase of the signal to lock into and the phase of the free-running VCO. The knowledge about the variability of fp with time is embodied in a parameter of the overall PLL: its bandwidth.
All PLLs have a low-pass behaviour with respect to the processing of the input phase signal, and their cut-off frequency is called:

• fC [Hz] by the ITU-T, that mostly refers to the equipment behaviour at its external interfaces,
• ωn or ωn2 [rad/s] when using linear models,
• ω-3 dB in some technical papers.

The CDR must track, first of all, any possible out-of-tune between fp and the VCO free-running frequency ffr. This is the same as stating that frequency acquisition precedes phase acquisition. The difference fp - ffr may be negligible in certain cases (and the frequency acquisition be hidden in the first part of the phase acquisition), but often it is not.

### fp - ffr relevance

**Cost**

The VCO types have largely different costs, and fp – ffr accuracies. The VCO choice may lead to different CDR architectures:

1. high cost, i.e. tight accuracy, (fp – ffr) / fp < 100 ppm. This is normally associated with high VCO gain and with low VCO noise (e.g. quartz oscillators). The PLL cut-off frequency can be larger than the accuracy (= incoming frequency uncertainty), leading to slip-less acquisition and no need of a PFD.
2. low cost, i.e. poor accuracy, (fp – ffr) / fp >> 100 ppm. The VCO gain is normally lower, and the VCO noise larger (e.g. LC and ring monolithic). If the cut-off frequency must be close to or tighter than the accuracy, a PFD is required.

**dc performances**

The fp - ffr difference always causes a steady-state offset in the PLL node between the filter and the VCO (a steady-state drive error Ed). In the PLLs of type 1, fp - ffr causes also a non-zero steady-state offset between the inputs to the phase comparator (a sampling error Es).

**ac performances**

The difference fp - ffr (in association with the comparator type, the PLL gain and the possible slew-rates) is fundamental for the definition of the acquisition time and of the PLL bandwidth.
When the difference fp - ffr is so large that it is comparable with, or exceeds, the bandwidth of the CDR, the use of a PFD is mandatory. The frequency acquisition, in such cases, is often associated with slips and may last longer than the phase acquisition that follows.

**Frequency accuracy and PLL acquisition time**

The (in)accuracy unbalances the VCO, increasing the variation possible for the frequency in one direction at the expense of the variation possible in the other direction. In the direction towards higher frequencies, the VCO phase can not catch up with fp faster than fup = fmax - ffr; in the direction towards decreasing frequencies, the VCO can not let fp catch up with itself faster than fdown = ffr - fmin.

fmax and fmin are in some conditions forced very much closer to ffr than the extremes of the control range of the VCO. Typical is the case of the 2nd order type 2 CDR in tracking condition, where the high-frequency attenuation of the loop filter limits substantially the swing of the VCO drive signal.

The PLL VCO is normally kept in its free-running state as long as an incoming signal is not detected (LOS = Loss Of Signal condition). When released to hunt for the phase lock (i.e. when the LOS is de-asserted), the PLL acquisition transient does not differ from its normal reaction to an abrupt input step variation. The magnitude of the step is the (random) phase and frequency difference between the VCO output and the input signal that just popped up.

1. If the VCO accuracy is smaller (= tighter) than both fup and fdown, in the cases where linear models can be used, the acquisition time is about 1/ωn for a first order PLL, and about 2/ωn2 for a second order PLL (the time needed to recover about 70% of the distance from the optimum lock-in condition, independently of the magnitude of the step). If the VCO is driven directly by a bang-bang detector, the VCO is driven by a signal that remains at either its highest or at its lowest value.
The output phase of the VCO follows a linear ramp, with a slope equal to Drive_voltage * GVCO, all during the acquisition.

2. If the VCO accuracy is wider (= poorer) than either fup or fdown, a PFD is present, and the acquisition may take several clock drift cycles and may include slips, depending on the initial frequency and phase differences.

**Frequency accuracy and PLL tracking bandwidth**

1. The VCO accuracy is a limit to how tight the dejittering bandwidth can be, if the open-loop gain is finite.

Linear CDRs with type 1 feedback do need a finite, non-zero error at the input to maintain the drive error that locks the VCO. The sampling error, necessary to maintain the VCO locked to the PLL input, is proportional to the VCO frequency accuracy and inversely proportional to the DC forward gain:

${\displaystyle E_{s}}$ = ${\displaystyle {\tfrac {\omega _{p}-\omega _{fr}}{G_{fDC}}}}$ [rad]

In type 1 loops the natural frequency ${\displaystyle \omega _{n}}$ and the open-loop DC gain ${\displaystyle G_{fDC}}$ are tightly related.
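The sampling-error relation just stated can be put into numbers. The sketch below uses assumed values (a 100 ppm mismatch and a DC forward gain of one thousandth of the line angular frequency); none of them come from the text:

```python
import math

# Assumed numbers: 100 ppm frequency mismatch at a 155.52 MHz line rate,
# and an open-loop DC gain equal to w_p / 1000 (illustrative choice).
w_p = 2 * math.pi * 155.52e6          # line angular frequency [rad/s]
w_fr = w_p * (1 - 100e-6)             # VCO free-running, 100 ppm low
G_fDC = w_p / 1000                    # DC forward gain [1/s] (assumed)

# Steady-state sampling error of a type 1 loop: Es = (w_p - w_fr) / G_fDC
E_s = (w_p - w_fr) / G_fDC            # [rad]
print(f"E_s = {E_s:.3f} rad")         # 100 ppm * 1000 -> 0.1 rad
```

The trade-off is visible in the numbers: halving the loop gain (i.e. tightening the bandwidth of a type 1 loop) doubles the steady-state sampling error for the same VCO accuracy.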
For an open-loop dc gain ${\displaystyle G_{fDC}}$ and a filter time constant ${\displaystyle \tau _{f}}$, the jitter cut-off frequency ${\displaystyle \omega _{n21}}$ of a linear 2nd order type 1 loop is:

${\displaystyle \omega _{n21}}$ = ${\displaystyle G_{fDC}2\zeta _{21}}$ = ${\displaystyle {\sqrt {\tfrac {G_{fDC}}{\tau _{f}}}}}$

${\displaystyle E_{s}}$ = ${\displaystyle {\tfrac {(\omega _{p}-\omega _{fr})2\zeta _{21}}{\omega _{n21}}}}$

It is easy to see that, for a 1st order type 1 loop: Es = (ωp – ωfr) / ωn1

The same equation, rearranged, tells that the frequency mismatch and the maximum Es define how tight (relative to the line frequency) the loop jitter bandwidth can be:

${\displaystyle E_{s}/2\zeta _{21}}$ = ${\displaystyle {\tfrac {(\omega _{p}-\omega _{fr})}{\omega _{n21}}}}$ = ${\displaystyle {\tfrac {(\omega _{p}-\omega _{fr})/\omega _{p}}{\omega _{n21}/\omega _{p}}}}$

${\displaystyle {\tfrac {\omega _{n21}}{\omega _{p}}}}$ = ${\displaystyle {\tfrac {(\omega _{p}-\omega _{fr})/\omega _{p}}{E_{s}/2\zeta _{21}}}}$

It is easy to see that, for a 1st order type 1 loop: (ωn1 / ωp) = ((ωp – ωfr) / ωp) / Es

For the maximum allowed sampling error, there is a minimum jitter transfer (= dejittering) bandwidth.

2. If the open-loop gain is very large (infinite), the VCO accuracy is not fundamental for the jitter transfer characteristic.

The jitter transfer characteristic varies with the input jitter amplitude, and is a consequence of slewing as the VCO control signal saturates at the ends of its range.[2]

### Relative concept

Jitter as well as frequency accuracy are both relative concepts. They describe the relative mismatch of two quantities (two phases that are functions of time, or two frequencies). The mismatch does not need:

• either of the quantities to be considered the reference for the other;
• a third quantity as independent reference.
One of the two quantities is in general expected to jitter less than the other with respect to a third reference clock, like the Primary Reference Clock in a telecommunication network, or the clock reference of the best measurement instrument available in the test set-up.

With respect to the (much more accurate) master clock, the free-running frequency of a slave CDR may differ no more than 50 ppm from the frequency of its remote master (very low cost quartz crystal), or 5000 ppm (monolithic RC oscillator after EWS trimming), or even differ less than 1 ppm, still without big cost concerns (quartz for GPS receivers inside mobile phones). Less than 0.1 ppm is typical of professional equipment.

## The VCO of the PLL is a frequency modulator

The stand-alone VCO block is a frequency modulator by definition. The VCO of a CDR (see also its description at the beginning of this page) performs in full compliance with the definition of frequency modulation. In normal tracking, the VCO modulation is a narrowband FM (h < 0.3).

## The whole PLL seen as a frequency de-modulator

When a VCO is part of a PLL, the VCO output coincides with the PLL output. The VCO input instead is a node whose signal tells exactly the frequency at which the VCO, and therefore the PLL, shall operate. Such frequency is: ${\displaystyle VCO_{in}(s)G_{VCO}}$.

In the following pages, the overall PLL transfer function (from the phase of the PLL input ${\displaystyle X_{}(s)}$ to the phase of the PLL (= of the VCO) output ${\displaystyle Y_{}(s)}$) will be obtained for different PLLs, combining the individual transfer functions of phase comparator, amplifier/filter and VCOs. Such overall transfer function is called the PLL "jitter transfer function":

${\displaystyle {\frac {Y_{}(s)}{X_{}(s)}}}$

The VCO transfer function, when the phase is the output variable, is ${\displaystyle G_{VCO}/s}$.
The transfer function of the PLL, from input phase to VCO input, is:

${\displaystyle {\frac {VCO_{in}(s)}{X_{}(s)}}={\frac {\frac {Y_{}(s)}{X_{}(s)}}{(G_{VCO}/s)}}}$

If, instead of the PLL input phase ${\displaystyle X_{}(s)}$, the PLL input frequency ${\displaystyle X_{f}(s)}$ is considered, ${\displaystyle X_{}(s)}$ must be replaced by its derivative, because the frequency is the derivative of the phase. The derivative of ${\displaystyle X_{}(s)}$, i.e. ${\displaystyle X_{f}(s)}$, is: ${\displaystyle sX_{}(s)}$. The frequency demodulation transfer function is therefore:

${\displaystyle {\frac {VCO_{in}(s)}{X_{f}(s)}}={\cfrac {\frac {Y_{}(s)}{X_{f}(s)}}{(G_{VCO}/s)}}={\frac {1}{(G_{VCO})}}{\frac {Y_{}(s)}{X_{}(s)}}}$

Apart from the fixed coefficient ${\displaystyle {\tfrac {1}{(G_{VCO})}}}$, the frequency "demodulator" transfer function (PLL input frequency to VCO input voltage representing a frequency) is the same as the PLL phase jitter transfer function (PLL input phase to PLL output phase (= VCO output phase))!

In other words, the PLL can be seen as a frequency demodulator of the signal at its input, where the VCO input acts as the frequency demodulator output!

This conclusion may help later in the book to visualize more quickly the PLL behavior in some special cases and conditions. For instance, all PLLs have phase (= jitter) transfer functions with 0 dB gain from 0 to the frequency cut-off where the jitter low-pass starts. The very same frequency diagram (just scaled by the value ${\displaystyle {\tfrac {1}{(G_{VCO})}}}$) holds good for the inherent frequency demodulator, with the same bandwidth, etc.

## Modeling and simulation of the VCO with (in)accuracy included

Both in the model equations and in the simulation calculation formulae, the finite accuracy of the VCO can be taken into account adding an input bias to the (ideal) VCO.
### VCO model

The VCO function is represented as a block with a linear relation of its input signal (ranging around 0 volt) with respect to its output frequency (that ranges correspondingly around ffr). It is more convenient to consider the instantaneous phase of the VCO output as output variable, because the inclusion of a VCO block in a PLL model becomes straightforward. Phase and frequency are related by a differential operation, as the phase is the integral function of the frequency and the latter is the derivative of the former.

As angular frequencies, in preference to period frequencies, are used in conjunction with Laplace transforms (s = r + jω), the VCO gain GVCO is expressed in [rad/sec/volt] and the (precisely centered) VCO transfer function is written as (see the figure above):

out(s) / in(s) = GVCO / s

The drive error Ed, preceded by a minus sign, is the signal addition needed at the VCO input to take into account in the model the lack of accuracy of the VCO itself. A VCO that is absolutely accurate becomes "inaccurate" by the amount (ωp - ωfr) if a d.c. bias equal to -Ed is added at its input. The saturation outside the range ωmin...ωmax is not taken into account by the model, that is linear. Such non-linearity is incorporated instead in the simulation equations.

### VCO simulation

If the input signal reaches outside +/- 1 volt (see the purple "Curve for simulation" in the figure above), the (simulated) VCO freezes itself either at ωmin or at ωmax, depending on whether the input signal is lower than -1 or greater than +1. To take into account the VCO accuracy (i.e.
the mismatch between ωp and ωfr), the VCO shall be simulated as:

output_frequency = ωfr + input_signal * GVCO     when -1 < input_signal < +1
output_frequency = ωmin                          when input_signal <= -1
output_frequency = ωmax                          when input_signal >= +1

The amplifier/filter output swings between -1 and 1 volt, with 0 volt corresponding to 0 volt at its input. Clamping completes the computation of the amplifier output signal, simulating at the same time both the amplifier/filter output limitation and the VCO range limitation. After clamping this output to +/-1 volt, the -Ed bias is added. As a result, the simulated VCO runs at ωfr when the filter output is 0 volt, ωmax when the filter output is +1 volt and ωmin when the filter output is -1 volt.

To simulate the conversion of the output frequency (linearly proportional to the VCO input) into the output phase, an integration is made. The first value is computed as the first VCO input multiplied by the discrete time step of the simulation. Any subsequent entry is the previous value incremented by the present VCO input multiplied by the discrete time step of the simulation. To take into account the VCO gain, the result obtained in the previous calculation is multiplied by GVCO and the simulated value of the VCO output is obtained.

The PLL closed-loop simulation, in addition to the Ed value, will also show the transient and the final value of the corresponding steady-state error Es (if finite).

## Different types of VCOs

The sections above have presented with some detail the classic VCO model (that is a valid model for many VCOs in actual CDRs) and have given suggestions on how to simulate it. The ring oscillator is an example. It is often used in monolithic CDRs[3] where VCO low noise is not the prime requirement. (An LC oscillator, hybrid or monolithic, is used in that case[4].)
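The simulation recipe above (clamp the drive, add the -Ed bias, map to a frequency, integrate into a phase) can be sketched in Python. The numeric values in the demo are assumptions for illustration, not values from the text:

```python
import math

def simulate_vco(drive, dt, w_fr, w_min, w_max, G_vco, E_d):
    """Discrete-time VCO per the recipe above: clamp the drive signal to
    +/-1 V, add the -E_d bias that models the fp/ffr mismatch, convert the
    result to an instantaneous frequency, and integrate it into a phase."""
    phase = 0.0
    phases = []
    for v in drive:
        v = max(-1.0, min(1.0, v)) - E_d   # range limit, then accuracy bias
        if v <= -1.0:
            w = w_min                      # VCO frozen at its lower limit
        elif v >= 1.0:
            w = w_max                      # VCO frozen at its upper limit
        else:
            w = w_fr + v * G_vco           # linear region [rad/s]
        phase += w * dt                    # integrate frequency into phase
        phases.append(phase)
    return phases

# Assumed demo: a 1 MHz VCO with a +/-10% control range and no bias.
w_fr = 2 * math.pi * 1e6
ph = simulate_vco([0.0] * 100, 1e-9, w_fr, 0.9 * w_fr, 1.1 * w_fr,
                  0.1 * w_fr, 0.0)
print(f"free-running phase after 100 ns: {ph[-1]:.3f} rad")  # ~0.628 rad
```

With a zero drive and a zero bias the simulated VCO ramps its phase at exactly ωfr, which is the sanity check the book's description implies; a closed-loop simulation would feed the amplifier/filter output into `drive` sample by sample.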
The delay locked loop is a variable delay line whose delay is locked to the duration of the period of a reference clock. Depending on the signal processing element in the loop (a flat amplifier or an integrator), the DLL loop can be of 0th order and type 0, or of 1st order and type 1.

When analyzing existing CDRs, different VCOs may be encountered, and a different simulation or model may be more appropriate:

1. Bang-bang between two frequencies[5] (simple although somewhat noisy, can be integrated easily inside an IC).
2. Fixed free-running frequency ffr, followed by a variable ratio divider... (possible if the technology allows a start frequency much higher than ωp). The inherent non-linearities of the characteristic can be made smaller if a higher start frequency can be chosen and if the division ratio can be controlled with many close steps.
3. A DLL whose output can be sequentially (and circularly) taken from the output of each of its stages by a multiplexer, so that the output phase can be varied indefinitely. The multiplexer could be driven:
   1. by an A/D conversion of the control signal. The resulting VCO is an oscillator controlled in phase and its model is simply a fixed gain. The gain is equal to the delay line control gain Gdl multiplied by the A/D gain (sec/volt) if the VCO output is measured in seconds, or equal to Gdl x GA/D multiplied by the ratio delay_line_length / oscillator_angular_frequency, if the VCO output is measured in radian.
   2. by an integrator plus A/D (or by an accumulator if the implementation is digital) and then to the control input of the delay line. This adds a 1/s factor to the VCO model.

## VCO noise

No oscillator is exempt from noise, and the oscillator noise affects the CDR performances. The output waveform of an oscillator is never perfect in shape and immobile at its nominal frequency. Its power is not an impulse at ffr, but is distributed around it and exhibits a sort of "bell" shape.
Noise may in theory affect the amplitude, or the phase, or both, in the waveform produced by the oscillator.[6] Avoiding non-fundamental discussions, it is always assumed that the amplitude of the output waveform of an oscillator is constant and does not contribute to the oscillator noise. Just its phase (phase or frequency, which is the same thing) jitters and generates the noisy behavior. In other words, the oscillator noise that can be measured is made up of phase noise only.[7] This assumption corresponds to the condition that there is no correlation between the power in the upper and lower side-bands.[6]

It is also generally assumed that phase noise is small and can be treated with linear models. This assumption practically corresponds to the condition that the total jitter corresponding to the phenomena under investigation never exceeds π/10.[6]

When a CDR is left free-running because no received signal is present (LOS, Loss Of Signal, = 1), then all the VCO phase noise is present at the CDR output. (This is relevant and may become problematic in regenerators, but is not very relevant in end-points and in phase-aligners.)

When the CDR is regularly operating and locked, the VCO frequency shifts and coincides with fp. It will be shown that, when the CDR is in lock, the VCO phase noise that reaches the CDR output is progressively attenuated from the loop characteristic frequency downwards. Very close to fp, the VCO phase noise that reaches the CDR output is attenuated to negligible levels.

How much an oscillator deviates from the ideal behavior is normally described by its Power Spectral Density, PSD. The PSD is always finite, and peaks at ffr when the oscillator (the VCO) is free-running, or at fp when the VCO is locked. When a CDR is in lock, the PSD of the VCO, centered around fp, exhibits essentially the same side-bands as when free-running.
The PSD curve looks different if the vertical axis scale is logarithmic (used when the side-bands are important) or if the scale is linear (used when the fundamental frequency of oscillation is more important than the side-bands).

The Power Spectral Density of an oscillator can be measured in dBc/Hz (or in another logarithmic unit, that yields the same curve but translated upwards or downwards) or in W/Hz (linear y-scale). The horizontal x-axis is linear and centered on the fundamental frequency when the PSD is described on both sides of ffr, to avoid an asymmetrical representation of the two side-bands.

The phase noise proper, L(f) (pronounced "script-ell of f"), is defined (and measured) as one half (= the upper half) of the double-sideband PSD of the oscillator. It is a function of the frequency offset between the frequency of measure and the oscillator center frequency. It may be noted that the definition does not exactly include the oscillator center frequency (or frequency = 0 of phase noise). Very slow phase noise, or wander, is at the same time difficult to deal with and difficult to measure. It belongs to a different engineering topic, like a very small frequency offset, or a very selective spectrum analyzer, or a measurement that takes a very long time. As it is of very little or of no practical use for the engineering of phase noise in oscillators, it is left out.

When expressed in decibels, the units of L(f) are dBc/Hz (dB below the carrier in a 1 Hz bandwidth at a distance f from the center frequency).

### Logarithmic axes

The logarithmic y-axis representation is necessary when the oscillator noise is measured. If only the upper side-band of the oscillator phase noise is described, then also the x-axis is preferably logarithmic.

### Modeling

Modeling of the oscillator noise describes just the upper side-band of the oscillator spectrum (and the preferred scale of the x-axis is also logarithmic).
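A quick numeric use of the single-sideband description: integrating L(f) over an offset band, and doubling it to account for both side-bands, gives the RMS phase jitter (this is the kind of dBc/Hz-to-jitter conversion discussed in the Maxim application note cited in the references[7]). The flat L(f) below is an illustrative assumption; real phase noise falls off with increasing offset:

```python
import math

# Assumed example: a flat single-sideband phase noise of -120 dBc/Hz
# between 10 kHz and 1 MHz offset from the carrier.
L_dBc = -120.0
f_lo, f_hi = 10e3, 1e6

# L(f) is one half of the double-sideband PSD, so the total phase-noise
# power is twice the single-sideband integral.
power = 2 * 10 ** (L_dBc / 10) * (f_hi - f_lo)   # [rad^2]
jitter_rms = math.sqrt(power)                     # [rad]
print(f"RMS phase jitter: {jitter_rms * 1e3:.2f} mrad")  # ~1.41 mrad
```

The result comfortably satisfies the small-jitter condition (well below π/10 rad) under which the linear treatment of phase noise holds.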
In most practical cases, the oscillator noise PSD uses the oscillator fundamental frequency as a zero reference, and the difference f - ffr as independent variable. To avoid unnecessary troubles (mathematical ∞/0 for a model; infinite selectivity and/or dynamic range for a measure), the description does not reach down to zero frequency difference, but gets very close (so that only a minor amount of power, i.e. PSD x frequency interval, is neglected).

The well known model was proposed by Leeson (February 1966). A fundamental paper is also the one from A. Hajimiri and Thomas H. Lee (1998).[8]

## External References

1. ITU-T G.810 (08/96) - DEFINITIONS AND TERMINOLOGY FOR SYNCHRONISATION NETWORKS - page 5 - 4.5.3: "frequency accuracy: The maximum magnitude of the fractional frequency deviation for a specified time period. NOTE – The frequency accuracy includes the initial frequency offset and any ageing and environmental effect."
2. Analysis and Modeling of Bang-Bang Clock and Data Recovery Circuits, Jri Lee, Kenneth S. Kundert, and Behzad Razavi, IEEE Journal of Solid-State Circuits, Vol. 39, No. 9, September 2004, pages 1571-1580, III. Jitter Analysis, A. Jitter Transfer
3. Analysis of Timing Jitter in CMOS Ring Oscillators, Todd C. Weigandt, Beomsup Kim and Paul R. Gray, Proc. of ISCAS, June 1994, a paper included in Monolithic Phase-locked Loops and Clock Recovery Circuits, Theory and Design, IEEE Press, ISBN 0-7803-1149-3
4. Analysis, Modeling and Simulation of Phase Noise in Monolithic Voltage-Controlled Oscillators, Behzad Razavi, in Proc. CICC, pp. 323-326, May 1995, a paper included in Monolithic Phase-locked Loops and Clock Recovery Circuits, Theory and Design, IEEE Press, ISBN 0-7803-1149-3
5. Richard C. Walker (2003). "Designing Bang-Bang PLLs for Clock and Data Recovery in Serial Data Transmission Systems". pp.
34-45, a chapter appearing in "Phase-Locking in High-Performance Systems - From Devices to Architectures", edited by Behzad Razavi, IEEE Press, 2003, ISBN 0-471-44727-7.
6. a b c IEEE Std 1139-1999, IEEE Standard Definitions of Physical Quantities for Fundamental Frequency and Time Metrology—Random Instabilities, http://www.umbc.edu/photonics/Menyuk/Phase-Noise/Vig_IEEE_Standard_1139-1999%20.pdf
7. Clock (CLK) Jitter and Phase Noise Conversion, Maxim Application Note 3359, http://www.maxim-ic.com/app-notes/index.mvp/id/3359
8. A General Theory of Phase Noise in Electrical Oscillators, Ali Hajimiri and Thomas H. Lee, IEEE Journal of Solid-State Circuits, Vol. 33, No. 2, February 1998, http://www.chic.caltech.edu/Publications/general_full.PDF
https://physics.stackexchange.com/questions/595539/intensity-of-light-passing-through-polarising-filters
# Intensity of light passing through polarising filters

I came across a question in my textbook which I am unsure about.

Two polarising filters are aligned to transmit vertically polarised light. They are held in front of a source of horizontally polarised light. The filter closest to the light source is rotated by 45 degrees. The intensity of the light passing through the filters:

A does not change
B increases
C increases to maximum intensity
D decreases

The answer given is option B, however I don't understand why the intensity increases?

• Ask yourself: what is the polarisation after the light passes the first filter? – my2cts Nov 22 '20 at 11:35
• @my2cts Initially no light will pass through the first filter as it is vertically polarised, therefore the intensity is zero; however, once the filter is rotated some light passes through the filter, so the intensity increases. Is this correct? – E C Nov 22 '20 at 16:39
• How much light passes the first filter after rotation, and what is its polarisation? – my2cts Nov 22 '20 at 21:58
• Half the light? @my2cts – E C Nov 22 '20 at 22:07
• So what happens next? – my2cts Nov 22 '20 at 22:08

This is easily explained using Malus' law, $$I=I_0\cos^2\theta$$, where $$I$$ is the transmitted intensity, $$I_0$$ is the initial intensity and $$\theta$$ is the angle between the pass axis of the polarizer and the polarization axis of the light. Supposing the first filter to have been rotated by 45 degrees, we have $$I_1=\frac{I_0}{2}$$. Since the second filter now makes an angle of 45 degrees with the first one, $$I_2=I_1\cos^2 45^\circ=\frac{I_0}{4}>0$$, representing an increase.
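The two applications of Malus' law in the answer can be checked numerically. This is a small sketch, taking the incident intensity as 1 in arbitrary units:

```python
import math

def malus(I, theta_deg):
    """Transmitted intensity through an ideal polarizer at angle theta
    (in degrees) to the light's polarization axis: I * cos^2(theta)."""
    return I * math.cos(math.radians(theta_deg)) ** 2

I0 = 1.0
I1 = malus(I0, 45)    # first filter, rotated 45 deg from horizontal
I2 = malus(I1, 45)    # second (vertical) filter, 45 deg from the first
print(I1, I2)         # 0.5..., 0.25... -> I0/2 then I0/4

# Before the rotation, the crossed pair (90 deg apart) passed nothing:
I_before = malus(I0, 90)
print(I_before)       # essentially 0
```

Going from zero transmitted intensity to I0/4 is exactly why option B (increases) is correct, and why C is wrong: I0/4 is far from the maximum.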
http://math.stackexchange.com/questions/625449/where-can-i-find-out-more-on-collatz-conjecture-like-sequences
# Where can I find out more on Collatz-conjecture like sequences?

I'm interested in Collatz-conjecture (the 3n+1 problem) like sequences. I'm interested in any literature that contains information about problems that are divided into similar cases. I'm particularly interested in studies of sequences that have 3 or more cases, but information on 2-case problems is interesting as well. I'm just trying to get a feel of what is known. Anything will be helpful.

So, where can we find lots of information on these types of problems/sequences?

- For what it is worth, the people who are able to answer this question necessarily know the Collatz conjecture - stating the Collatz problem just adds noise to your question. – Thomas Andrews Jan 3 at 2:08
- @ThomasAndrews: I was afraid of that. I guess I was trying to keep the question self-contained so that those that don't know could learn something too. I've edited the question. – Matt Groff Jan 3 at 2:48

## 2 Answers

Anyone wanting to study the $3n+1$ problem and related sequences should check Jeffrey Lagarias' annotated bibliographies on the arXiv, posted here and here. It contains some hundreds of papers, with for each paper a short summary of what it is about.

- Lagarias has also published a book in which he collects several of the important papers on Collatz. – Gerry Myerson Jan 3 at 2:58

A natural generalization of the Collatz problem is recursively undecidable. You may be interested in this paper.

- A little irritated by the language in that article. It says, "given a Collatz function $g$, it is undecidable...." Really, what they are saying is that the general problem, not the problem for any $g$, is undecidable, right? Seems sloppy. (There are problems in the $a_i,b_i$ definitions for the original Collatz problem, too, giving me the feeling this wasn't proof-read, much less reviewed...) – Thomas Andrews Jan 3 at 2:16
- I agree with you. A good point.
–  mathlove Jan 3 at 2:30 My understanding is that there exists a particular Collatz-like problem for which the usual question is undecidable. –  Gerry Myerson Jan 3 at 2:56 Yeah, that was my memory, too. I see the above linked article is actually a summary of a Conway article. –  Thomas Andrews Jan 3 at 3:07 For that Conway article, it's from the American Mathematical Monthly and you can find it at JSTOR. I'd say that's more worthwhile reading than the "paper" linked in this answer (which is not finished, and contains several sloppy mistakes). –  TMM Jan 3 at 14:49 add comment
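Case-divided maps of the kind the question asks about can be experimented with directly. A minimal Python sketch (the rule-table format and helper names are illustrative, not from any of the papers mentioned above): the classic 3n+1 map in its two-case Conway-style form is n → n/2 for even n and n → (3n+1)/2 for odd n, and the same machinery accepts tables with three or more residue classes.

```python
def step(n, rules):
    """One step of a generalized Collatz-like map.

    rules[r] = (a, b) means n -> (a*n + b) // m when n % m == r,
    where m = len(rules).  For the table to define an integer map,
    a*n + b must be divisible by m on each residue class.
    """
    m = len(rules)
    a, b = rules[n % m]
    return (a * n + b) // m

def trajectory(n, rules, stop=1, max_steps=10_000):
    """Iterate the map until `stop` is reached (or give up)."""
    seq = [n]
    while n != stop and len(seq) <= max_steps:
        n = step(n, rules)
        seq.append(n)
    return seq

# The 3n+1 map in two-case form: even -> n/2, odd -> (3n+1)/2.
collatz = [(1, 0), (3, 1)]
print(trajectory(7, collatz))  # [7, 11, 17, 26, 13, 20, 10, 5, 8, 4, 2, 1]
```

A 3-or-more-case experiment is just a longer rule table, subject to the same divisibility condition on each residue class.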
2014-03-08 16:26:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8081865310668945, "perplexity": 971.0267817409127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999655109/warc/CC-MAIN-20140305060735-00078-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/frames-of-reference.413310/
# Homework Help: Frames of Reference

1. Jun 30, 2010

### JWSiow

1. The problem statement, all variables and given/known data

A child sits 2 m from the centre of a merry-go-round (a rotating wooden platform) which is rotating at 3.5 rev/min in a clockwise direction (when viewed from above). She places a wooden block of mass m = 0.25 kg beside her on the platform.
a) Draw a diagram showing all the forces acting on the block, and describe each force.
b) If the block remains at rest relative to the platform, calculate its acceleration. Explain your reasoning.
c) If the coefficients of static and kinetic friction between the block and the platform are 0.2 and 0.18 respectively, do you expect the block to remain at rest or to slide relative to the platform?

2. Relevant equations

$F_{\text{friction}} = \mu_k F_N$
$F_{\text{friction}} = \mu_s F_N$

3. The attempt at a solution

a) I'm not sure if I had the right forces in my diagram; I had the weight mg, the normal force, the centripetal force and the friction force.
b) I'm not sure about this one either, but I think, since the frame of reference is the platform, and the block is not moving, acceleration = 0?
c) I didn't really know what to do for this, so I worked out the static and kinetic friction forces, and then compared them to the centripetal force. Since the centripetal force was smaller than the static friction force, the block would remain at rest. I don't know if this is correct, or if it's even a good explanation!

2. Jun 30, 2010

### Bartek

(a) The friction force is the centripetal force, so you have three forces ("centripetal force" is just the name of a real force directed toward the centre — in this case, the friction force).
(b) No. The platform is not an inertial frame of reference, so in that frame there is a centrifugal force. You have two possibilities: use the ground frame of reference and calculate the centripetal acceleration (recommended), OR use the platform as the frame of reference and calculate the centrifugal acceleration.
(c) OK, but the block would remain at rest since the centripetal force is smaller than OR EQUAL to the maximum static friction force.

regards

3. Jun 30, 2010

### JWSiow

Ok, thanks. :) So, for b), when calculating the centripetal acceleration, I'd use the velocity of the platform?

4. Jun 30, 2010

### Bartek

Yes. And the distance between the mass and the pivot. Find the centripetal force first.

regards
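The numbers in part (c) can be checked directly: convert 3.5 rev/min to rad/s, compute the centripetal acceleration a = ω²r, and compare the required friction force m·a with the maximum available static friction μ_s·m·g. A quick sketch (g = 9.81 m/s² assumed):

```python
import math

rev_per_min = 3.5
r = 2.0        # m, distance from the centre
m = 0.25       # kg, mass of the block
mu_s = 0.2     # coefficient of static friction
g = 9.81       # m/s^2

omega = rev_per_min * 2 * math.pi / 60  # angular speed in rad/s
a_c = omega**2 * r                      # centripetal acceleration, m/s^2

F_needed = m * a_c      # friction force required to keep the block on its circle
F_max = mu_s * m * g    # maximum static friction available

print(f"omega = {omega:.4f} rad/s, a_c = {a_c:.4f} m/s^2")
print(f"needed {F_needed:.4f} N vs available {F_max:.4f} N")
print("block stays at rest" if F_needed <= F_max else "block slides")
```

With these numbers a_c ≈ 0.27 m/s², so the required friction (≈ 0.07 N) is well below the maximum static friction (≈ 0.49 N) and the block stays put, in agreement with the discussion above.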
2018-06-22 23:42:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49579572677612305, "perplexity": 721.3957653400661}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864822.44/warc/CC-MAIN-20180622220911-20180623000911-00017.warc.gz"}
https://socratic.org/questions/what-are-ligands-in-coordination-compounds
What are ligands in coordination compounds?

Jun 1, 2017

$\text{Ligands}$ are essentially $\text{Lewis bases}$ or $\text{electron donors}$.

Explanation:

Ligands occur in the context of coordination chemistry. Here a metal centre, a $\text{Lewis acid}$, is bound to several or so $\text{coordinating ligands}$. In aqueous solution, we often write $M^{2+}(aq)$, where $M$ is some transition metal, e.g. $Co^{2+}, Fe^{3+}, Fe^{2+}, Mn^{2+}$. What we mean is the so-called aquated complex, $[M(OH_2)_6]^{2+}$, where 6 water ligands $\text{bind to}$, or $\text{coordinate to}$, a metal centre. The resultant species is a so-called $\text{coordination complex}$, which has a (typically) octahedral geometry. We can represent such an $Fe(III)$ complex by the following picture.

The ligands can be exchanged for other species, which may bind more strongly to the Lewis-acidic metal centre. Typical ligands include $NH_3$, $X^-$, $PR_3$, ... All of these ligands have a lone pair of electrons that can coordinate to a metal centre. Different substitution of ligands, and different combinations, give rise to different colours and different geometries. Can you tell me the ligands in the second illustration?

Note that these coordination compounds are (usually) cationic. Under certain circumstances, we could crystallize them out as the salt, $[Fe(OH_2)_6]^{3+}(Cl^-)_3$ (i.e. the chloride counterions are along for the ride). The field of coordination chemistry is widely studied.
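The charge bookkeeping in the last paragraph can be checked mechanically: six neutral water ligands contribute nothing, so the complex carries the metal's oxidation state, and that fixes the number of counterions. A toy sketch (the ligand-charge table and helper name are illustrative):

```python
# Formal charges of a few common ligands (neutral donors vs anionic donors).
ligand_charge = {"OH2": 0, "NH3": 0, "Cl": -1, "CN": -1}

def complex_charge(metal_ox_state, ligands):
    """Overall charge of [M(ligands)]: metal oxidation state plus ligand charges."""
    return metal_ox_state + sum(ligand_charge[l] for l in ligands)

q = complex_charge(+3, ["OH2"] * 6)  # [Fe(OH2)6]^3+
print(q)  # 3 -> balanced by three Cl- counterions in the solid salt
```

The same arithmetic gives the familiar $[Fe(CN)_6]^{4-}$ for Fe(II) with six anionic cyanide ligands.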
2022-05-22 16:08:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 17, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.926383376121521, "perplexity": 1826.6368928500456}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545875.39/warc/CC-MAIN-20220522160113-20220522190113-00360.warc.gz"}
https://cn.maplesoft.com/support/help/errors/view.aspx?path=MultivariatePowerSeries%2FHenselFactorize
MultivariatePowerSeries/HenselFactorize - Maple Help

MultivariatePowerSeries
HenselFactorize
Factorize a univariate polynomial over power series

Calling Sequence
HenselFactorize(u)
HenselFactorize(u, lcopt)

Parameters
u - univariate polynomial over power series generated by this package
lcopt - (optional) equation of the form returnleadingcoefficient = r, where returnleadingcoefficient is a literal keyword and r is one of the values true, false, and automatic

Description
• The command HenselFactorize(u) factorizes u over the ring of power series of the coefficients of u and returns the factors as a list. The factors may not be linear, since u is not factored over the field of Puiseux series of the coefficients of u.
• There are two requirements on u for HenselFactorize to succeed in factorizing u. If either of these requirements is not met, an error is signaled.
– The leading coefficient pn of u (the power series that is the coefficient of the highest power of the main variable) must be invertible. That is, its constant term (the homogeneous part of pn of degree 0) needs to be nonzero.
– Furthermore, for each power series p that is a coefficient of u, the constant coefficient of p (that is, the homogeneous part of degree 0) divided by the constant coefficient of pn needs to be an algebraic number. That is, the coefficient needs to be of type algnum, either immediately or after applying convert/RootOf. These coefficients are exactly the coefficients of the polynomial f returned by EvaluateAtOrigin(v) in the algorithm description below.
• The command HenselFactorize(u) proceeds as follows.
– For the first step, let pn be the leading coefficient of u (the power series that is the coefficient of the highest power of the main variable). Maple verifies if its analytic expression is known. If it is not known, or known to be different from 1, then we multiply u by the inverse of pn; let us call the result v.
Otherwise (if pn is known to be equal to 1), v is defined to be equal to u. See also the discussion of the returnleadingcoefficient option below.
– In the second step, the polynomial f returned by EvaluateAtOrigin(v) is factored over the algebraic numbers.
– Finally, each of the factors of f is lifted in turn, by repeated calls to the commands TaylorShift and WeierstrassPreparation.
• The command HenselFactorize(u) is based on lazy evaluation techniques. To be precise, this command returns as soon as the factors of f are computed and all coefficients (which are power series) of the factors of u are initialized (that is, once their generators are determined).
• Since it is really v that is factored in the second and third steps of the algorithm below, and v and u differ by a factor of pn (the leading coefficient of u), it is necessary to include a factor corresponding to pn in order to obtain factors that multiply together to u. The returnleadingcoefficient option determines whether pn, investigated and potentially used in the first step of the algorithm, is returned with the other factors. In all cases, if pn is returned, it is converted to a univariate polynomial over power series with the same main variable as u in order to match the type of the other factors; of necessity, it is a constant polynomial, because it does not depend on the main variable (it is, after all, a coefficient of the main variable in u).
– If the option returnleadingcoefficient = true is included, then pn is always returned as the first entry in the resulting list of factors, even if it is equal to 1.
– If the option returnleadingcoefficient = false is included, then pn is always omitted from the resulting list of factors, even if it is different from 1. In this case, the resulting list of factors will contain only univariate polynomials over power series of positive degree in the main variable.
– By default (or if this behavior is selected explicitly by including the option returnleadingcoefficient = automatic), pn is returned as the first entry of the resulting list of factors only if it is not known to be equal to 1, and omitted otherwise.
• When using the MultivariatePowerSeries package, do not assign anything to the variables occurring in the power series and univariate polynomials over power series. If you do, you may see invalid results.

Examples

> with(MultivariatePowerSeries):

We define a univariate polynomial over power series, f, and determine its Hensel factorization. It factors into linear factors.

> f := UnivariatePolynomialOverPowerSeries((z-1)*(z-2)*(z-3) + x*(z^2+z), z):
> F := HenselFactorize(f)

F := [[UnivariatePolynomialOverPowerSeries: (-1 + ...) + (1)*z],
      [UnivariatePolynomialOverPowerSeries: (-2 + ...) + (1)*z],
      [UnivariatePolynomialOverPowerSeries: (-3 + ...) + (1)*z]]                    (1)

We can see more terms of the factors as follows.

> map(print, map(UpdatePrecision, F, 5)):

[UnivariatePolynomialOverPowerSeries: (-1 + x - 3*x^2 + 27/2*x^3 - 291/4*x^4 + 3465/8*x^5 + ...) + (1)*z]
[UnivariatePolynomialOverPowerSeries: (-2 - 6*x - 30*x^2 - 402*x^3 - 5610*x^4 - 93390*x^5 + ...) + (1)*z]
[UnivariatePolynomialOverPowerSeries: (-3 + 6*x + 33*x^2 + 777/2*x^3 + 22731/4*x^4 + 743655/8*x^5 + ...) + (1)*z]                    (2)

We define a new univariate polynomial over power series, g. Its Hensel factorization has a linear and a quadratic factor.

> g := UnivariatePolynomialOverPowerSeries(y^2 + x^2 + (y+1)*z^2 + z^3, z):
> G := HenselFactorize(g)

G := [[UnivariatePolynomialOverPowerSeries: (0 + ...) + (0 + ...)*z + (1)*z^2],
      [UnivariatePolynomialOverPowerSeries: (1 + ...) + (1)*z]]                    (3)

> map(print, map(UpdatePrecision, G, 5)):

[UnivariatePolynomialOverPowerSeries: (x^2 + y^2 - x^2*y - y^3 - x^4 - x^2*y^2 + 4*x^4*y + 7*x^2*y^3 + 3*y^5 + ...) + (-x^2 - y^2 + 2*x^2*y + 2*y^3 + 2*x^4 + x^2*y^2 - y^4 - 10*x^4*y - 16*x^2*y^3 - 6*y^5 + ...)*z + (1)*z^2]
[UnivariatePolynomialOverPowerSeries: (1 + y + x^2 + y^2 - 2*x^2*y - 2*y^3 - 2*x^4 - x^2*y^2 + y^4 + 10*x^4*y + 16*x^2*y^3 + 6*y^5 + ...) + (1)*z]                    (4)

The two examples above have leading coefficients that are known to be equal to 1. The case below has a leading coefficient equal to exp(x) (an invertible power series). Its Hensel factorization has a linear and a quadratic factor, and the leading coefficient is also included as a factor.

> a := PowerSeries(d -> x^d/d!, analytic = exp(x))

a := [PowerSeries of exp(x): 1 + ...]                    (5)

> b := 2*SumOfAllMonomials([x, y])

b := [PowerSeries of 2/((1-x)*(1-y)): 2 + ...]                    (6)

> c := PowerSeries(d -> ifelse(d::even, (-1)^(d/2)*x^d/d!, 0), analytic = cos(x))

c := [PowerSeries of cos(x): 1 + ...]                    (7)

> d := PowerSeries(d -> ifelse(d::odd, (-1)^(d/2-1/2)*x^d/d!, 0), analytic = sin(x))

d := [PowerSeries of sin(x): 0 + ...]                    (8)

> h := UnivariatePolynomialOverPowerSeries([d, c, b, a], z)

h := [UnivariatePolynomialOverPowerSeries: (0 + ...) + (1 + ...)*z + (2 + ...)*z^2 + (1 + ...)*z^3]                    (9)

> H := HenselFactorize(h)

H := [[UnivariatePolynomialOverPowerSeries: (1 + ...)],
      [UnivariatePolynomialOverPowerSeries: (0 + ...) + (1)*z],
      [UnivariatePolynomialOverPowerSeries: (1 + ...) + (2 + ...)*z + (1)*z^2]]                    (10)

> map(print, map(UpdatePrecision, H, 5)):

[UnivariatePolynomialOverPowerSeries: (1 + x + x^2/2 + x^3/6 + x^4/24 + x^5/120 + ...)]
[UnivariatePolynomialOverPowerSeries: (x + 2*x^2 + 28/3*x^3 + 2*x^2*y + 148/3*x^4 + 18*x^3*y + 2*x^2*y^2 + 4282/15*x^5 + 439/3*x^4*y + 26*x^3*y^2 + 2*x^2*y^3 + ...) + (1)*z]
[UnivariatePolynomialOverPowerSeries: (1 - 3*x - 3*x^2 - 2*x*y - 46/3*x^3 - 2*x*y^2 - 8*x^2*y - 473/6*x^4 - 155/3*x^3*y - 2*x*y^3 - 12*x^2*y^2 - 26779/60*x^5 - 352*x^4*y - 311/3*x^3*y^2 - 16*x^2*y^3 - 2*x*y^4 + ...) + (2 + 2*y - x - x^2 + 2*y^2 - 26/3*x^3 - x^2*y + 2*y^3 - 583/12*x^4 - 52/3*x^3*y - x^2*y^2 + 2*y^4 - 4271/15*x^5 - 1747/12*x^4*y - 76/3*x^3*y^2 - x^2*y^3 + 2*y^5 + ...)*z + (1)*z^2]                    (11)

The first entry of the factorization is a, the leading coefficient of h. It is represented as a trivial univariate polynomial over power series.

> GetCoefficient(H[1], 0)

[PowerSeries of exp(x): 1 + x + x^2/2 + x^3/6 + x^4/24 + x^5/120 + ...]                    (12)

References

Alexander Brandt, Mahsa Kazemi, and Marc Moreno Maza. "Power Series Arithmetic with the BPAS Library." Computer Algebra in Scientific Computing (CASC), Lecture Notes in Computer Science 12291 (2020): 108-128.

Mohammadali Asadi, Alexander Brandt, Mahsa Kazemi, Marc Moreno Maza, and Erik Postma. "Multivariate Power Series in Maple." Maple Conference 2020, Waterloo, Ontario, Canada, November 2-6, 2020, Communications in Computer and Information Science (CCIS) series, Springer 2020 (submitted).

Compatibility

• The MultivariatePowerSeries[HenselFactorize] command was introduced in Maple 2021.
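The lifting behind HenselFactorize can be imitated with ordinary truncated power-series arithmetic. Below is a stdlib-Python sketch (the helpers and the truncation order N are illustrative, and it uses plain Newton iteration rather than Maple's TaylorShift/WeierstrassPreparation pipeline) that lifts the root z = 1 of the first example, f = (z-1)(z-2)(z-3) + x(z^2+z), to a power-series root in x; negating the result gives the constant term of the first factor printed in output (2).

```python
from fractions import Fraction

N = 6  # work mod x^N: keep coefficients of x^0 .. x^{N-1}

def series(*coeffs):
    s = [Fraction(c) for c in coeffs]
    return s + [Fraction(0)] * (N - len(s))

def add(a, b): return [p + q for p, q in zip(a, b)]
def sub(a, b): return [p - q for p, q in zip(a, b)]

def mul(a, b):
    c = [Fraction(0)] * N
    for i in range(N):
        if a[i]:
            for j in range(N - i):
                c[i + j] += a[i] * b[j]
    return c

def inv(a):
    """Reciprocal mod x^N of a series with nonzero constant term (Newton)."""
    r = series(1 / a[0])
    for _ in range(N.bit_length()):
        r = add(r, mul(r, sub(series(1), mul(a, r))))  # r <- r(2 - a r)
    return r

def horner(poly, zval):
    """Evaluate a polynomial (list of series coefficients) at a series."""
    acc = series(0)
    for c in reversed(poly):
        acc = add(mul(acc, zval), c)
    return acc

# f(z) = (z-1)(z-2)(z-3) + x(z^2 + z) = z^3 + (x-6) z^2 + (x+11) z - 6
f  = [series(-6), series(11, 1), series(-6, 1), series(1)]  # coeffs of z^0..z^3
df = [series(11, 1), series(-12, 2), series(3)]             # d f / d z

root = series(1)   # the root z = 1 of f at x = 0
for _ in range(3): # each Newton pass doubles the x-adic precision
    root = sub(root, mul(horner(f, root), inv(horner(df, root))))

print([str(c) for c in root])  # matches (negated) the first series in output (2)
```

Three passes take the precision 1 → 2 → 4 → 6, which is why they suffice for six coefficients; the same doubling is what makes Hensel-style lifting fast.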
2023-04-02 08:34:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 37, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8729740381240845, "perplexity": 663.955208879689}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950422.77/warc/CC-MAIN-20230402074255-20230402104255-00114.warc.gz"}
https://math.ryerson.ca/~gord/abstracts/PiQudos/PiQudoswebH/index.html
# Bohm Trajectories, Feynman Paths and Subquantum Dynamical Processes

Ryerson University/PI

Abstract: Bohmian trajectories and Feynman paths are conceptually different objects from radically different views of quantum mechanics. Both offer different 'particle pictures' in a subject that is based on wave mechanics. Some recent models of subquantum dynamical processes underlying the Dirac equation suggest that there may be an unexplored link between the two concepts via the quantum potential. We sketch the qualitative ideas involved and view some simple implementations that quantitatively illustrate the suggested link.

October 2005

### Outline

1. What is this talk about?
2. What is the relevance to interpretations of quantum mechanics?
3. Example: nodes in the double slit experiment
4. Whence Phase?
5. Counting with negative integers in classical statistical mechanics.
6. The Dirac Equation for Accountants
8. Building Complex numbers.
9. Questions
10. Conclusions

### 1. What is this talk about?

The domain of this talk is propagation in elementary, single-particle quantum mechanics. The tools are simple but non-standard for discussions of quantum mechanics. We shall use only classical statistical mechanics. The targets of the talk are solutions of equations like:

$$i\hbar\,\frac{\partial \psi}{\partial t} = H\psi \qquad (1)$$

where $H$ is a simple Hamiltonian. From a quantum mechanical perspective, ψ is important as an element of a probability calculus. However, there is no agreement within the physics community as to whether ψ represents anything in an external physical reality. In classical statistical mechanics the task is simply to count recognizable objects. If we can arrive at Eqn. (1) using only statistical mechanical tools, we will have a context in which ψ itself is recognizable in its own right.

### 2. What is the relevance to interpretations of quantum mechanics?
Roughly speaking, there are two categories of 'pictures' in the interpretation of quantum mechanics, based on whether there is an external reality that contains objects resembling classical particles.

##### 1. The wave-only picture

• Here the objects of study are waves. Particles are wave-packets and are a derived concept.
• This picture is fine up to measurement. Measurement poses the problem of wavefunction collapse.
• "Theory as explanation" is very questionable.
• External reality??

##### 2. The pilot-wave picture

• The object of study here is a real (smooth) particle trajectory.
• Agrees with intuitive ideas of particles and paths.
• Explains nodes and quantum interference through the quantum potential.
• Solutions of the wave equation determine particle paths through the quantum potential.
• Waves are a necessary but adjunct concept.

##### 3. A particle-only picture

• Is there a picture in which the particle is the object of study and waves are a derived concept (complementary to 1 above)?
• We shall establish one.

### 3. Example: Nodes in the double slit experiment

##### 1. The experiment with electrons

Figure 1. Figure from "The Road to Reality" by Penrose [1].

If we ignore the fact that individual electrons are detected, the diffraction pattern of the double slit experiment has a natural interpretation in terms of waves. When we notice that the pattern is formed by individual particle-like events, we either need a collapse phenomenon at the detector, or we need to strengthen the particle paradigm.

##### 2. Pilot-wave

How do you obtain nodes in a particle-based theory? Bohm's theory uses solutions of the wave equation to construct a quantum potential. The potential establishes path-rich and path-poor areas in a wave pattern.

Figure 2. From "The quantum theory of motion" by Holland [2].

##### 3. Feynman Paths

What about Feynman paths? These are not considered real, as Bohm paths are; however, it is interesting to see how wave patterns are built.
In particular, in terms of Feynman paths, nodal regions are not path-poor, they are path-paired, where the opposite phases of pair members cancel.

Figure 3. In the double slit experiment, nodes are created by a pairing of oppositely phased paths through the two slits.

### 4. Whence Phase?

• The physical origin of phase is unknown ... it is a wave concept grafted onto the particle paradigm. (Compare the Feynman phase factor $e^{iS/\hbar}$ with the Wiener-integral Boltzmann factor $e^{-S/\hbar}$. The Boltzmann factor is the result of just counting trajectories; the Feynman phase appears to be counting wave amplitudes.)
• The function of phase is to propagate subtraction ... this appears to be outside the statistical mechanics of classical particles. (E.g. if we wanted the lighting lowered in this room, we would request that the lights be dimmed; we would not expect that a 'darkness projector' be turned up to reduce the ambient light.)

### 5. Counting with negative integers in classical statistical mechanics

##### 1. Simple paths

Suppose we assume paths (world-lines) that are continuous, begin before t = 0 and end beyond the time scale of interest. These simple paths, where x is a single-valued function of t, are counted with the natural numbers $\mathbb{N}$. The diffusion equation may be shown to be a continuum limit of a counting process for a particular kind of such paths.

FIG. 4: Two simple paths. Only positive integers are required to count the number of paths.

##### 2. Non-simple paths

Suppose however that paths are continuous and traverse a particular time interval of interest, but this time they can double back! Now x is not a single-valued function of t anymore. If we want to count such paths, we have to count taking into account the direction of traversal! In the figure below, the path is colour coded to indicate direction of traversal, blue for forward in t, red for backwards in t. If we associate a +1 with blue and a -1 with red, the sum of all contributions at fixed t will count the number of paths.
Note we now need the integers $\mathbb{Z}$ to do our counting.

FIG. 5: Continuous paths with reversed segments require the negative integers for counting paths. At fixed t the number of continuous paths is the number of blue contributions minus the number of red.

### 6. The Dirac Equation for Accountants

Can we construct a single non-simple path in such a way that it mimics solutions of the Dirac equation? If we can, we have a chance to understand quantum propagation in a way that is similar to Einstein's explanation of diffusion in terms of Brownian motion.

#### Dirac Version

The Dirac equation is usually produced by arguments that begin by requiring a PDE of the form

$$i\hbar\,\frac{\partial \psi}{\partial t} = H\psi \qquad (2)$$

This is followed by the relativistic requirement that

$$E^2 = c^2 p^2 + m_0^2 c^4 \qquad (3)$$

where $m_0$ is the rest mass of the electron and $p$ is the momentum. Combining these requirements led Dirac to propose

$$H = c\,\boldsymbol{\alpha}\cdot\mathbf{p} + \beta\, m_0 c^2 \qquad (4)$$

Iterating Equation 2, using Equation 3, suggested the usual anticommutation relations for the matrices $\boldsymbol{\alpha}$ and $\beta$. Much as the original argument was brilliant and insightful, there was no sense in which the resulting equation described a wavefunction that had a physical referent.

#### Accountant Version

Here we imagine deriving the Dirac equation in 1+1 dimensions for an accountant who is of course familiar with simple arithmetic, notably counting with integers and using rational numbers when a reason for division of integers is explained. Equation 1 would have little meaning for our accountant: $i = \sqrt{-1}$ is outside common arithmetic, as are the real numbers and the calculus. However, counting configurations of a stochastic process on a lattice is within the domain of accountancy tools. Consider the stochastic walk in Fig. 6.

FIG. 6: On the left is a single 'entwined path' in the plane. The colour indicates the direction of traversal, blue for traversal in the $+y$ direction, red for traversal in the $-y$ direction. The origin is at the bottom of the path and the lattice spacing is small on the scale of the figure.
Notice that the crossing of the forward (blue) and backward (red) paths forms a chain of oriented areas. The orientation switches from one rectangular area to the next. This is illustrated by the recolouring of the path in the centre figure. There blue represents positive orientation and red represents negative orientation. To count oriented rectangles we can simply use the 'enumerative' path, illustrated on the right-hand side of the figure. Here the colouring illustrates the contribution to orientation, +1 for blue, -1 for red.

###### Walk description

We consider a special 'entwined' walk on a square lattice in the (x,y)-plane [3-5]. The lattice spacing will be some small positive rational number $\epsilon$ and the walker will move at each step in one of the four diagonal directions $(\pm\epsilon, \pm\epsilon)$. The stochastic element of the walk is governed by the binomial distribution. That is, at step n+1, the walker will step in the same direction as in step n with probability $1 - m\epsilon$, with $m$ a fixed positive constant and $m\epsilon < 1$. The alternative at each step, besides continuing in the same direction, is a change of state. A change of state occurs in an alternating pattern. The walk is started from the origin and the first step is to (1,1). The stochastic process is consulted and the walk continues to (2,2) unless a state change is indicated. The first state change is a change in the direction of motion along the x-axis. So if this happens at (1,1), the subsequent step is to (0,2). The next state change will not change the direction of the walk, but the walker will drop a marker for its return path before stepping to the next site. Thus all the odd-numbered state changes will be direction changes in the x coordinate, all the even-numbered state changes will preserve direction but drop markers. This process will continue until the walk steps past some pre-assigned y-coordinate, at which point, at the next marker drop, the walker will return to the origin along the path defined by the set of markers.
A single such entwined path (EP) is illustrated in Fig. 6.

###### Oriented Areas

Notice that the EP defines a chain of oriented rectangles in which the orientation reverses at each crossing point of the EP. If we use the 'right-hand rule' for path traversal, the first rectangle has positive orientation, out of the page, the second negative, into the page, and so on in an alternating fashion. As the walk is lengthened by repeating the process, we can imagine that a cone with apex at the origin will become covered by an ensemble of these oriented rectangles. If we sit at a particular off-lattice point inside this cone we could expect to be regularly encircled by these oriented rectangles, and we might expect that we should eventually see an equilibrium distribution of orientation, varying from point to point throughout the cone. How could we measure this orientation?

###### Enumerative Paths

Consider again Fig. 6. Each oriented rectangle has a left and right boundary that contributes to the oriented area. So as to avoid counting the contributions of an area twice, let us restrict our attention to the right boundary of each rectangle. The collection of right boundaries itself forms a path that we call the 'enumerative path'. Note that the enumerative path has a very regular structure and we can use it to count oriented rectangles. The blue segments of the path belong to positively oriented rectangles and add a +1 to the oriented area count. The red portions of the enumerative path belong to negatively oriented rectangles and contribute a weight of -1 to the count.

###### Counting Oriented Areas

Now our task to count oriented rectangles is reduced to counting the contributions of enumerative paths on the lattice. We can do this using the structure of the walks to deduce what the equilibrium pattern, assuming there is one, must be. Let us label the lattice sites by $(x, y) = (j\epsilon, n\epsilon)$, where $j$ and $n$ are integers.
We consider a two-component density in which one component counts the right-directed links and the other counts the left-directed links. We need a two-component density here because our enumerative paths continually shuffle orientation counts between the two directions. Now any link at a site in a given direction either follows a link of the same direction and colour at the previous site, or follows a link of the opposite direction and colour there. The former occurs with some probability p, the latter with probability 1 - p. Thus if an equilibrium density is reached, it must satisfy a difference equation (5). Notice the subtraction involved in the second term of (5). This is because whenever our enumerative path 'turns right' it switches orientation, thus changing the sign of its contribution. Since this happens for all paths, it must happen for the equilibrium distribution. We can similarly deduce that the density must obey the difference equation (6). The positive sign for the second term reflects the fact that the change of direction for a 'left turn' on an enumerative path does not change orientation.

###### Solutions of the difference equations

Equations (5) and (6) are difference equations that, with suitable initial conditions, yield rational-number solutions inside the cone above the x-axis. The equations themselves are just conservation equations that express the fact that the enumerative paths are continuous and have the alternating orientation of Fig. 1. The solutions spread out into ever larger regions along the x-axis as y increases, and so include an exponential decay. We may choose to look through this background decay to see the emerging pattern by following a rescaled density.

###### Remove decay and rewrite

Writing the density in rescaled form (7), the rescaled density satisfies the equations (8) to lowest order in ϵ.

###### Solutions rational, counting process mundane

Let us now pause and note that Eqn. (8) is a perfectly respectable equation for an accountant to produce.
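The explicit forms of (5) and (6) were lost in extraction, but the prose fixes their shape: each component is fed by a same-direction term with weight p and an opposite-direction term with weight 1 − p, with a sign flip on 'right turns'. The following sketch iterates a two-component density of that assumed shape; the coefficients and the periodic boundary are illustrative guesses, not the paper's exact equations:

```python
def step(phi1, phi2, p):
    """One update of an assumed two-component density on a 1-D lattice.

    phi1 counts right-moving links, phi2 left-moving.  The -(1 - p)
    term encodes the orientation flip on a 'right turn' (my reading of
    the garbled Eqns (5)-(6), not the paper's exact coefficients).
    Indexing wraps around, i.e. a periodic boundary is assumed."""
    n = len(phi1)
    new1 = [0.0] * n
    new2 = [0.0] * n
    for i in range(n):
        # right-movers arrive from the left neighbour (sign flip on turns)
        new1[i] = p * phi1[i - 1] - (1 - p) * phi2[i - 1]
        # left-movers arrive from the right neighbour (no sign flip)
        new2[i] = p * phi2[(i + 1) % n] + (1 - p) * phi1[(i + 1) % n]
    return new1, new2
```

Iterating `step` from a localised initial condition is the kind of numerical experiment the text mentions for checking that an equilibrium pattern emerges.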
There are no abstractions or mathematical tools involved that take us out of the domain of good accountancy. We know exactly what is being counted ... oriented rectangles produced by a long entwined path. We know the number system involved ... the counting actually just used integers, but the use of probability and normalization placed us in the domain of rational numbers. We *have* assumed that an equilibrium pattern will emerge from our initial conditions, but that is something that is easily checked by performing a numerical experiment. Indeed, the calculation up to this point involves only a basic counting of recognizable objects on a planar lattice.

###### Continuum Limit

We abandon the limitations of arithmetic and approximate the solutions of (8) by taking a continuum limit. If we subtract appropriately from both sides, divide by ϵ and take the limit as ϵ → 0, we find the partial differential equation (9). This may be recognized as a form of the Dirac equation. Note that if we iterate this equation to get a second-order form, we obtain (10), which is the Klein-Gordon equation. Note, this means that the solutions of these 'wave equations' are a continuum limit of a pattern formed by a counting process for a single 'space-time' trajectory!

##### What we have Not Done

Notice that to obtain the Dirac and Klein-Gordon equations above we have not invoked any quantum-mechanical postulates: we have simply taken the output of an 'accounting argument' and written it in a language familiar in the context of relativistic quantum mechanics. Now (9) is just a continuum limit of (8) written in a familiar form. (Notice that there has been no analytic continuation forced on the system.)
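For concreteness, here is a LaTeX reconstruction of the familiar light-cone form of the 1+1-dimensional Dirac equation, and the Klein-Gordon equation obtained by iterating it. The originals (9) and (10) did not survive extraction, so the component names and sign conventions below are assumptions, not the source's notation:

```latex
% Assumed reconstruction of (9): two-component light-cone Dirac form.
\begin{aligned}
(\partial_t + \partial_x)\,\phi_1 &= -m\,\phi_2, \\
(\partial_t - \partial_x)\,\phi_2 &= \phantom{-}m\,\phi_1 .
\end{aligned}
% Applying (\partial_t - \partial_x) to the first line and substituting
% the second gives the assumed form of (10), the Klein--Gordon equation:
% (\partial_t^2 - \partial_x^2)\,\phi_i = -m^2\,\phi_i .
```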
The point here is that we can regard (8) either as a fundamental equation about the 'wavefunction' of an electron (without knowing exactly what a wavefunction represents in the physical world), or as the continuum limit of an equation describing an equilibrium distribution of a simple stochastic process. The continuum language that we use does not tell us whether we are describing a 'Dirac wavefunction in one dimension' or a 'spacetime that maintains an accountancy ledger for the EP stochastic process'. In the context of quantum mechanics, all 'derivations' of quantum mechanics from particle mechanics involve a formal analytic continuation (FAC), either explicit or forced by a global requirement. For example:

• The usual canonical quantization is an explicit FAC that takes us from real dynamical variables to complex operators.

• Nelson's work is a forced analytic continuation. There is no explicit invocation of complex numbers; however, Nelson's argument cleverly forces the diffusion equation into the complex domain by requiring reversibility.

The above derivation of the Dirac equation has no such analytic continuation, either explicit or globally forced. So what makes it work???

#### 8. Constructing Complex Numbers

Historically, and to a certain extent logically, the evolution of number systems is something like:

N -> Z -> Q -> R -> C

N, Z, and Q are appropriate number systems for the counting problems of classical statistical mechanics. However, in making the transition from classical mechanics to quantum mechanics, we habitually start in R (we need a smooth continuum for classical mechanics) and we are forced into C by 'wave-particle duality'. In the 'Accountant version' we started with a counting problem for non-simple curves. This forced us to use integers as the primary number system. The geometry (counting oriented areas) required that the counting have a two-component structure with a built-in periodicity.
The continuum limit of the counting process then contained the algebra of the complex numbers, not just the reals. The logical structure is something like the chain above, but with intermediate systems that are essentially subsets of C with discrete phase and modulus. The difference here from other derivations of the Dirac equation is that the algebra of complex numbers is built by a combination of the local geometry of the trajectory and the counting process that detects oriented areas. The natural extension of the counting process is to C, not R. Unfortunately, the connection between the 'subquantum dynamical process' and the resulting 'wavefunction' is lost in the continuum limit, because the process itself is below the resolution of the resulting partial differential equation. (Compare thermodynamics and statistical mechanics.) ... See animation.

#### 9. Questions ...

Q. How robust is this? Does the Dirac equation depend on minute details in the geometry and/or the stochastic process?

A. Apparently not. The geometry has to be periodic, and there has to be a strict pairing of forward and return paths. The dynamical process can be stochastic, but it is vastly more efficient to have the dynamics deterministic and input the stochastic element at the initial conditions.

Q. The argument was for 1+1 dimensions. What about 3+1?

A. This seems to be a fairly straightforward extension ... but it has not received a critical appraisal by anyone else yet.

Q. What is the analog of superposition in this context?

A. Concatenation. Any two space-time patterns A and B formed by entwined paths from the same origin form the pattern A+B when the trajectories are concatenated.

Q. It is known that the real difficulty with quantum mechanics is 'measurement'. The toy model only covers propagation. Why would this be an advantage, given that we already understand propagation in terms of waves?

A. The problem with waves is that you are hard pressed to figure out why measurement favours particles.
If, as in the toy model, the underlying dynamical process produces a particle-like trajectory, the hope is that the measurement problem will be lessened.

Q. What about relevance to the Bohm picture?

A. At this point they could be connected in a rather artificial way. A (deterministic) entwined path can 'write' the appropriate wavefunction on a region of spacetime given the correct initial condition. To mimic Bohm, we could instruct the particle to so write the appropriate wavefunction and then finally exit the spacetime region via the Bohm path determined by the previously written wavefunction.

Q. What about multi-particle quantum mechanics?

A. Still to be looked at.

#### 10. Pictures

Figure 7. The first excited state of a particle on a ring. This is the 'wavefunction' propagated by a single entwined path that traverses the spacetime region given the appropriate initial condition at t=0.

#### Conclusions

• Developed a particle-only picture of propagation

• Waves appear as a derived concept

• No physics involved ... only path counting and geometry

• Derived the Dirac equation in 1+1 dimensions

This assumed:

• The path is a non-simple, stochastically produced entwined curve, and the counting is for oriented areas

• The path is long enough to produce an equilibrium distribution

• Self-interference appears to be generic and robust (there will be many models that will produce 'waves')

• Deterministic versions that satisfy the uncertainty principle through initial conditions are vastly more efficient than completely stochastic versions

• There is much to explore!!!

### References

[1] Penrose, R. (2004). The Road to Reality. London: Jonathan Cape.

[2] Holland, P. R. (1993). The Quantum Theory of Motion. Cambridge: Cambridge University Press.

[3] Ord, G. N., & Mann, R. B. (2003). Entwined pairs and Schrödinger's equation. Annals of Physics, 308(2), 478-492.

[4] Ord, G. N., & Gualtieri, J. A. (2002). The Feynman Propagator from a Single Path. Phys. Rev. Lett., 89(25), 250403-250407.

[5] Ord, G. N., & Mann, R. B. (2003). Entwined paths, difference equations and the Dirac equation. Phys. Rev. A, 67.
https://www.tutorialspoint.com/state-the-property-that-is-used-in-each-of-the-following-statements
# State the property that is used in each of the following statements: $(i)$ If $a \| b$, then $\angle 1=\angle 5$; $(ii)$ If $\angle 4=\angle 6$, then $a \| b$; $(iii)$ If $\angle 4+\angle 5=180^{\circ}$, then $a \| b$.

To do: We have to state the property that is used in each of the given statements.

Solution:

(i) Given $a \| b$.

We know that pairs of corresponding angles are equal if two lines are parallel.

Therefore, $\angle 1=\angle 5$.

The corresponding angles property is used in the given statement.

(ii) Given $\angle 4=\angle 6$.

If a pair of alternate interior angles is equal, then the lines are parallel.

Therefore, $a \| b$.

The alternate interior angles property is used in the given statement.

(iii) Given $\angle 4+\angle 5=180^{\circ}$.

If the sum of the interior angles on the same side of the transversal is $180^{\circ}$, then the lines are parallel.

Therefore, $a \| b$.

The property that interior angles on the same side of the transversal are supplementary is used in the given statement.

Updated on 10-Oct-2022 13:33:53
http://tex.stackexchange.com/questions/50862/how-to-modify-just-some-nodes-styles-in-chemfig?answertab=oldest
# How to modify just some nodes' styles in chemfig In the following molecular drawing, I want to only circle the rightmost OH of the first molecule and the leftmost H of the second, to show that they are the ones that are involved in the formation of water: \documentclass{article} \usepackage{chemfig} % for typesetting molecules \begin{document} \chemfig{CH(-[4]H_2N)(-[6]CH_2(-[6]C(=[5]O)(-[7]OH)))-C(=[2]O)-OH} % I want to circle this last OH \chemsign{+} \chemfig{CH(-[4]H_2N)(-[6]CH_2(-[6]*6(=-=-=-)))-C(=[1]O)(-[7]O-CH_3)} % I want to circle the H of the H_2N near the beginning \chemrel{<>} \\ \chemfig{CH(-[4]H_2N)(-[6]CH_2(-[6]C(=[5]O)(-[7]OH)))-C(=[2]O)-N(-[2]H)-CH(-[6]CH_2(-[6]*6(=-=-=-)))-C(=[1]O)(-[7]O-CH_3)} \chemsign{+} \chemfig{H_2O} \\ \end{document} I know that if I put \setnodestyle{draw,circle} on the very first line after \begin{document}, then I can make all the nodes (atoms) circled, but I only want to circle the two I mentioned, but I'm not having success inserting a \setnodestyle inside the \chemfig argument. Does anyone know how to custom modify just one node's style in chemfig? 
Since you can use tikzpictures inside of tikzpictures, you can for example define some \circleatom command and use it inside \chemfig:

\documentclass{article}
\usepackage{chemfig} % for typesetting molecules
\newcommand*\circleatom[1]{\tikz\node[draw,circle]{#1};}
\begin{document}
\chemfig{CH(-[4]H_2N)(-[6]CH_2(-[6]C(=[5]O)(-[7]OH)))-C(=[2]O)-\circleatom{OH}} % I want to circle this last OH
\chemsign{+}
\chemfig{CH(-[4]\circleatom{H}_2N)(-[6]CH_2(-[6]*6(=-=-=-)))-C(=[1]O)(-[7]O-CH_3)} % I want to circle the H of the H_2N near the beginning
\chemrel{<>} \\
\chemfig{CH(-[4]H_2N)(-[6]CH_2(-[6]C(=[5]O)(-[7]OH)))-C(=[2]O)-N(-[2]H)-CH(-[6]CH_2(-[6]*6(=-=-=-)))-C(=[1]O)(-[7]O-CH_3)}
\chemsign{+}
\chemfig{H_2O}
\end{document}

Edited: a better definition of \circleatom would be

\newcommand*\circleatom[1]{\tikz\node[circle,draw]{\printatom{#1}};}

The \printatom will ensure that atoms are displayed the same if you change it to, say, have sans-serif printed atoms.

BTW: I would probably use chemfig's scheming commands \schemestart, \arrow and \schemestop to typeset the reaction.

\documentclass{article}
\usepackage{chemfig}
\newcommand*\circleatom[1]{\tikz\node[circle,fill=green!30]{\printatom{#1}};}
\setatomsep{2em}
\setcompoundsep{7em}
\renewcommand*\printatom[1]{\ensuremath{\mathsf{#1}}}
\begin{document}
\schemestart
\chemfig{CH(-[4]H_2N)(-[6]CH_2(-[6]C(=[5]O)(-[7]OH)))-C(=[2]O)-\circleatom{OH}} % I want to circle this last OH
\+
\chemfig{CH(-[4]\circleatom{H}_2N)(-[6]CH_2(-[6]*6(=-=-=-)))-C(=[1]O)(-[7]O-CH_3)} % I want to circle the H of the H_2N near the beginning
\arrow{<=>}[-90]
\chemfig{CH(-[4]H_2N)(-[6]CH_2(-[6]C(=[5]O)(-[7]OH)))-C(=[2]O)-N(-[2]H)-CH(-[6]CH_2(-[6]*6(=-=-=-)))-C(=[1]O)(-[7]O-CH_3)}
\+
\chemfig{\circleatom{H_2O}}
\schemestop
\end{document}

- Perfect! Thanks! – Pete Apr 5 '12 at 14:19
https://gamedev.stackexchange.com/questions/57470/are-collision-detection-always-on2/57473
# Is collision detection always O(n^2)?

Are physics engines able to decrease that complexity, for example by grouping objects that are near each other and checking for collisions inside this group instead of against all objects? (For example, far objects can be removed from a group by looking at their velocity and distance from other objects.) If not, does that make collision detection trivial for spheres (in 3D) or disks (in 2D)? Should I make a double loop, or create an array of pairs instead?

EDIT: For physics engines like Bullet and Box2D, is collision detection still O(N^2)?

• Two words: Spatial partitioning – MichaelHouse Jun 15 '13 at 13:22

• You bet. I believe both have implementations of SAP (Sweep and Prune) (among others), which is an O(n log(n)) algorithm. Search for "Broad Phase Collision Detection" to learn more. – MichaelHouse Jun 15 '13 at 14:34

• @Byte56 Sweep and Prune has complexity O(n log(n)) only if you need to sort every time you test. You want to keep a sorted list of objects, and each time you add one, just sort it into the correct place in O(log(n)); therefore you get O(log(n) + n) = O(n). It gets very complicated when objects start moving though! – MartinTeeVarga Jun 15 '13 at 15:11

• @sm4, if the movement is limited then a few passes of bubble sort can take care of that (just mark the moved objects and move them forward or backward in the array until they are sorted; just watch out for other moved objects) – ratchet freak Jun 15 '13 at 22:58

Spatial division is still O(N^2) in the worst case, and the worst case is what complexity in computer science is about. However, there are algorithms that work in linear time, O(N). All of them are based on some kind of sweep line. Basically you need to have your objects sorted by one coordinate, let's say X. If you perform the sort every time before collision detection, the complexity will be O(N*logN).
The trick is to sort only when you are adding objects to the scene, and later when something in the scene changes. Sorting after movement is not trivial. See the linked paper below for an algorithm that takes movement into account and still works in linear time.

Then you sweep from left to right. Each time your sweep line crosses the beginning of an object, you put it inside a temporary list. Every time your sweep line exits an object, you take it out of the list. You consider collisions only inside this temporary list. The naive sweep line is O(N^2) in the worst case as well (you can make all objects span the whole map from left to right), but you can make it O(N) by making it smarter (see link below). A really good algorithm will be quite complex.

This is a simple diagram of how the sweep line works. The line sweeps from left to right. Objects are sorted by X coordinate.

• Case one: The first two objects are checked. Nothing else matters.

• Case two: The first object was checked and is gone from the list. Two and three are checked.

• Case three: Even if that object IS colliding, we don't check.

• Case four: Because we check in this case!

Algorithms like this have complexity O(C*N) = O(N).

Source: Two years of computational geometry courses. In collision detection this is typically called Sweep and Prune, but the sweep-line family of algorithms is useful in many other fields.

Further recommended reading that I believe is out of scope of this question, but nevertheless interesting:

Efficient Large-Scale Sweep and Prune Methods with AABB Insertion and Removal - This paper presents an enhanced Sweep and Prune algorithm that uses axis-aligned bounding boxes (AABB) with sorting that takes movement into account. The algorithm presented in the paper works in linear time. Now note that this is the best algorithm in theory. It doesn't mean that it is used.
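The sweep just described can be sketched in a few lines. The 1-D interval representation (xmin, xmax, payload) is an assumption made for the example; a real implementation would maintain the sorted endpoint list incrementally between frames, as the answer explains:

```python
def sweep_and_prune(boxes):
    """1-D sweep and prune over x-intervals.

    boxes: list of (xmin, xmax, payload) tuples.  Returns candidate
    pairs whose x-intervals overlap; a narrow phase would then test
    these candidates properly."""
    # Sort by interval start: O(n log n) here, amortised ~O(n) if the
    # order is maintained incrementally between frames.
    order = sorted(boxes, key=lambda b: b[0])
    active, pairs = [], []
    for box in order:
        # Retire intervals that ended before this one begins.
        active = [a for a in active if a[1] >= box[0]]
        # Everything still active overlaps the new interval on x.
        pairs.extend((a[2], box[2]) for a in active)
        active.append(box)
    return pairs
```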
In practice, an O(N^2) algorithm with spatial division will often have better performance speed-wise in the typical case (close to O(N)), at the cost of some extra memory. This is because the constant C in O(C*N) can be very high! Since we usually have enough memory, and typical cases have objects spread evenly in space, such an algorithm will perform BETTER. But O(N) is the answer to the original question.

• does box2d/bullet use this? – jokoon Jun 15 '13 at 14:35

• "Sweep and prune" is what this is normally called for physics. Nice thing is that you can keep the sorting updated as the simulation is advanced. Also, the sweep line in your graphic is a little off in terms of implementation (good for theory though) - you would just iterate over the box starts/ends, so you'd only be checking the actual potential collisions. Seen this method used to generate more capable spatial partitioning trees rather than used directly, too. – Sean Middleditch Jun 15 '13 at 15:50

• Since technically there can actually be O(N^2) pairwise collisions, it's not entirely true to say that sweep-and-prune is always O(N). Rather, the core complexity of the algorithm is O(N+c), where c is the number of collisions found by the algorithm - it's output-sensitive, much as many convex hull algorithms are. (Reference: en.wikipedia.org/wiki/Output-sensitive_algorithm ) – Steven Stadnicki Jun 15 '13 at 18:21

• You should back your claims with some publications or at least algorithm names. – sam hocevar Jun 16 '13 at 9:32

• @SamHocevar I've added a link to a really advanced Sweep and Prune algorithm that works in linear time, with a detailed breakdown of the constants. The fact that the algorithms are called "Sweep and Prune" was new to me, since I never worked with it. I have used these algorithms in map selection (which is kind of a collision of 1 point with other objects), so I just applied the knowledge. – MartinTeeVarga Jun 16 '13 at 9:52

No. Collision detection is not always O(N^2).
For instance, say we have a 100x100 space containing objects of size 10x10. We could divide this space into 10x10 cells with a grid. Each object can be in up to 4 grid cells (it could fit exactly in one cell or sit "between" cells). We could keep a list of objects in each cell. We only need to check for collisions within those cells.

If there is a maximum number of objects per grid cell (say, there are never more than 4 objects in the same cell), then collision detection for each object is O(1) and collision detection for all objects is O(N).

This is not the only way to avoid O(N^2) complexity. There are other methods, more adequate for other use-cases - often using tree-based data structures. The algorithm I described is one type of space partitioning, but there are other space partitioning algorithms. See Types of space partitioning data structures for some more algorithms that avoid the O(N^2) temporal complexity.

Both Box2D and Bullet support mechanisms to reduce the number of checked pairs. From the manual, section 4.15:

Collision processing in a physics step can be divided into narrow-phase and broad-phase. In the narrow-phase we compute contact points between pairs of shapes. Imagine we have N shapes. Using brute force, we would need to perform the narrow-phase for N*N/2 pairs. The b2BroadPhase class reduces this load by using a dynamic tree for pair management. This greatly reduces the number of narrow-phase calls. Normally you do not interact with the broad-phase directly. Instead, Box2D creates and manages a broad-phase internally. Also, b2BroadPhase is designed with Box2D's simulation loop in mind, so it is likely not suited for other use cases.

From the Bullet Wiki:

There are various kinds of broadphase algorithms that improve upon the naive O(n^2) algorithm that just returns the complete list of pairs. These optimised broadphases sometimes introduce even more non-colliding pairs, but this is offset by their generally improved execution time.
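The grid scheme in this answer can be sketched as follows. The cell size, the AABB tuple layout, and the dict-of-sets representation are choices made for the example, not anything Box2D or Bullet actually does:

```python
from collections import defaultdict
from itertools import combinations

def grid_candidate_pairs(objects, cell=10):
    """Uniform-grid broad phase, as described in the answer above.

    Hash each object's AABB into every cell it touches, then pair up
    only objects sharing a cell.  objects: dict mapping a name to an
    AABB tuple (xmin, ymin, xmax, ymax)."""
    cells = defaultdict(set)
    for name, (x0, y0, x1, y1) in objects.items():
        for cx in range(int(x0 // cell), int(x1 // cell) + 1):
            for cy in range(int(y0 // cell), int(y1 // cell) + 1):
                cells[(cx, cy)].add(name)
    pairs = set()
    for members in cells.values():
        # frozenset de-duplicates pairs found in several shared cells
        pairs.update(frozenset(p) for p in combinations(sorted(members), 2))
    return pairs   # candidates only: a narrow phase must still test overlap
```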
They have different performance characteristics and none outperforms the others in all situations.

Dynamic AABB Tree

This is implemented by the btDbvtBroadphase in Bullet. As the name suggests, this is a dynamic AABB tree. One useful feature of this broadphase is that the structure adapts dynamically to the dimensions of the world and its contents. It is very well optimized and a very good general purpose broadphase. It handles dynamic worlds where many objects are in motion, and object addition and removal is faster than SAP.

Sweep and Prune (SAP)

In Bullet, this is the AxisSweep range of classes. This is also a good general purpose broadphase, with the limitation that it requires a fixed world size, known in advance. This broadphase has the best performance for typical dynamics worlds, where most objects have little or no motion. Both btAxisSweep3 and bt32AxisSweep3 quantize the begin and end points for each axis as integers instead of floating point numbers, to improve performance.

The following link is a general introduction to broadphase and also a description of the Sweep and Prune algorithm (although it calls it "Sort and Sweep"): http://en.wikipedia.org/wiki/Sweep_and_prune

• Some links to similar questions and outside resources would make this a great answer. – MichaelHouse Jun 15 '13 at 13:47

• This is wrong. You are still getting O(N^2). It will be much faster, something like N^2 / 100, but still N^2. As a proof, just consider that all objects happen to be in one cell. – MartinTeeVarga Jun 15 '13 at 14:15

• @sm4 This is worst-case O(N^2), which is indeed what happens if all objects are in one cell. However, in a physics engine, objects will typically not be in one cell. In my example, no object can ever share the same cell with more than 3 other objects. This would be what happens in a physics engine for "normal" objects (and by "normal" I mean "not just a sensor")
– luiscubal Jun 15 '13 at 14:20

• I think your algorithm would require checking the 8 cells around, not just the 4 cells. – jokoon Jun 15 '13 at 14:27

• @luiscubal Complexity is always "worst case". In theory you are looking for "guaranteed" complexity. It's the same with quicksort, which is O(N^2), and mergesort, which is O(N*logN). Quicksort performs better on real data and has a lower spatial requirement. But mergesort has guaranteed better complexity. If you need to prove something, use mergesort. If you need to sort something, use quicksort. – MartinTeeVarga Jun 15 '13 at 14:59

O(N^2) refers to the fact that if you have N objects, figuring out what is colliding with what takes, in the worst case, N^2 collision computations. Say you have 3 objects. To find "who is hitting who", you have to check:

o1 hitting o2? o1 hitting o3? o2 hitting o1? o2 hitting o3? o3 hitting o1? o3 hitting o2?

That's 6 checks for collisions, or N*(N-1) checks. In asymptotic analysis we'd expand the polynomial and approximate it as O(N^2). If you had 100 objects, then that'd be 100*99, which is close enough to 100*100.

So if you partition space using an octree, for example, the average number of comparisons between bodies is reduced. If it is possible for all the objects to gather into a very small area (say, if you're doing some kind of particle flow simulation, where particles can gather in the same area), then the O(N^2) may still occur at points in the simulation (at which points you'll see slowdown).

So, the whole point of O(N^2) there is the nature of each body checking every other body in the scene. That's just the nature of the computation. A lot of things can help to make this cheaper though. Even a scene graph (say, detecting between objects in the same room only) will reduce the number of collision computations to be done significantly, but it will still be O(M^2) (where M is the number of objects in the room to be collision-detected against).
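The N*(N-1) enumeration above, and the "double loop vs. array of pairs" choice from the question, can both be written the same way: itertools.combinations visits each unordered pair once, which is equivalent to the classic nested i/j loop. The AABB tuple layout here is just an assumption for the example:

```python
from itertools import combinations

def overlaps(a, b):
    """Axis-aligned box overlap; a body is (xmin, ymin, xmax, ymax)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def brute_force_pairs(bodies):
    """O(n^2) baseline: test every unordered pair exactly once.
    Equivalent to `for i: for j in range(i + 1, n)` double loop."""
    return [(a, b) for a, b in combinations(bodies, 2) if overlaps(a, b)]
```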
Spherical bounding volumes make the initial check very fast (if distance(myCenter, hisCenter) > (myRadius + hisRadius), then MISS), so even if collision detection is O(N^2), the bounding sphere computations are likely to happen very fast.

• There's no need to take brute-force checking as a reference: regardless of clever algorithms, N objects can each collide with all other objects, giving O(N^2) collisions that require O(N^2) work to be processed. Good algorithms can only do better when there are fewer collisions. – Lorenzo Gatti Jul 31 '15 at 16:21
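The sphere pre-check quoted above is usually implemented with squared distances so the square root can be skipped entirely; a small sketch (the function name and tuple layout are mine):

```python
def spheres_may_touch(c1, r1, c2, r2):
    """Cheap broad-phase rejection for bounding spheres: compare the
    squared centre distance against the squared radius sum (no sqrt)."""
    d2 = sum((a - b) ** 2 for a, b in zip(c1, c2))
    return d2 <= (r1 + r2) ** 2
```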
https://reviews.llvm.org/p/omarahmed/
# Projects User does not belong to any projects. # User Details User Since Feb 14 2020, 9:35 PM (129 w, 1 d) omarpiratee2010@gmail.com # Fri, Jul 15 omarahmed updated the diff for D127270: [clang-format] Add space in placement new expression. Change removes to remove Fri, Jul 15, 7:00 PM · Restricted Project, Restricted Project, Restricted Project # Wed, Jul 13 omarahmed added a comment to D127270: [clang-format] Add space in placement new expression. I don't have push credentials so If everything is okay with the patch, can you push it for me. My email is omarpiratee2010@gmail.com Wed, Jul 13, 1:27 PM · Restricted Project, Restricted Project, Restricted Project omarahmed updated omarahmed. # Jun 22 2022 omarahmed updated the diff for D127270: [clang-format] Add space in placement new expression. Format files Jun 22 2022, 3:36 AM · Restricted Project, Restricted Project, Restricted Project # Jun 21 2022 omarahmed updated the diff for D127270: [clang-format] Add space in placement new expression. Format files Jun 21 2022, 8:36 AM · Restricted Project, Restricted Project, Restricted Project Jun 21 2022, 5:39 AM · Restricted Project, Restricted Project, Restricted Project omarahmed updated the diff for D127270: [clang-format] Add space in placement new expression. Change the default to APO_Leave Jun 21 2022, 5:32 AM · Restricted Project, Restricted Project, Restricted Project # Jun 20 2022 Jun 20 2022, 10:02 AM · Restricted Project, Restricted Project, Restricted Project Jun 20 2022, 9:53 AM · Restricted Project, Restricted Project, Restricted Project omarahmed updated the diff for D127270: [clang-format] Add space in placement new expression. • Add version for nestedEnums and nestedFields • Make tests valid Jun 20 2022, 9:53 AM · Restricted Project, Restricted Project, Restricted Project # Jun 17 2022 omarahmed updated the diff for D127270: [clang-format] Add space in placement new expression. 
Jun 17 2022, 11:25 AM · Restricted Project, Restricted Project, Restricted Project

omarahmed added a comment to D127270: [clang-format] Add space in placement new expression.
  As I understand it, the default behavior when the user didn't use SBPO_Custom is to add a space in placement operators, based on this issue. And, at the same time, the default behavior should be APO_Never when we have SBPO_Custom, so that we handle other code that depends on that. The existing tests confirm this understanding. So, the current logic was added based on this understanding.
Jun 17 2022, 10:38 AM · Restricted Project, Restricted Project, Restricted Project

omarahmed updated the diff for D127270: [clang-format] Add space in placement new expression.
  Refactor the tests and add new parsing logic for nested enums in dump_format_style.py
Jun 17 2022, 10:38 AM · Restricted Project, Restricted Project, Restricted Project
Jun 17 2022, 2:32 AM · Restricted Project, Restricted Project, Restricted Project

# Jun 16 2022

omarahmed added a comment to D127270: [clang-format] Add space in placement new expression.
  Does this patch really fix https://github.com/llvm/llvm-project/issues/54703? If so, please add a test for it. Otherwise remove the link from the summary (and, if possible, handle it in another review).
Jun 16 2022, 1:20 AM · Restricted Project, Restricted Project, Restricted Project

omarahmed updated the diff for D127270: [clang-format] Add space in placement new expression.
  Add coverage for placement delete expressions and transform the bool option to an enum
Jun 16 2022, 1:19 AM · Restricted Project, Restricted Project, Restricted Project

# Jun 12 2022

Jun 12 2022, 6:36 AM · Restricted Project, Restricted Project, Restricted Project

# Jun 11 2022

omarahmed updated the summary of D127270: [clang-format] Add space in placement new expression.
Jun 11 2022, 11:04 AM · Restricted Project, Restricted Project, Restricted Project

# Jun 7 2022

omarahmed requested review of D127270: [clang-format] Add space in placement new expression.
Jun 7 2022, 9:10 PM · Restricted Project, Restricted Project, Restricted Project

# Apr 14 2020

omarahmed added a comment to D76550: [Attributor] Improve the alignment of the loads.
  @jdoerfert ping :)
Apr 14 2020, 4:13 AM · Restricted Project

# Apr 10 2020

omarahmed updated the diff for D76550: [Attributor] Improve the alignment of the loads.
  Merge the patch with the new Attributor files structure
Apr 10 2020, 10:14 AM · Restricted Project

# Apr 4 2020

omarahmed updated the diff for D76550: [Attributor] Improve the alignment of the loads.
  Minor fix
Apr 4 2020, 1:49 PM · Restricted Project

omarahmed added a comment to D76550: [Attributor] Improve the alignment of the loads.
  Sorry for a lot of reformatting patches; I didn't know that I should rebuild clang-format :)
  My data: Omar Ahmed <omarpirate2010@yahoo.com>
Apr 4 2020, 1:49 PM · Restricted Project

omarahmed updated the diff for D76550: [Attributor] Improve the alignment of the loads.
  Minor format
Apr 4 2020, 1:17 PM · Restricted Project

omarahmed updated the diff for D76550: [Attributor] Improve the alignment of the loads.
  Make the dependence optional
  omar ahmed <omarpirate2010@yahoo.com>
Apr 4 2020, 11:41 AM · Restricted Project

# Apr 3 2020

omarahmed updated the diff for D76550: [Attributor] Improve the alignment of the loads.
  Minor formatting
Apr 3 2020, 9:38 PM · Restricted Project

omarahmed added a comment to D76550: [Attributor] Improve the alignment of the loads.
  I think what you can do is to precompute the AAAlign in manifest instead of doing this.
Apr 3 2020, 9:06 PM · Restricted Project

omarahmed updated the diff for D76550: [Attributor] Improve the alignment of the loads.
  Minor fix
Apr 3 2020, 8:34 PM · Restricted Project

omarahmed updated the diff for D76550: [Attributor] Improve the alignment of the loads.
  Precompute the alignment for the loads in manifest and add a test with different-alignment privatizable arguments
Apr 3 2020, 9:40 AM · Restricted Project

# Apr 1 2020

omarahmed added a comment to D76550: [Attributor] Improve the alignment of the loads.
  Sorry for the late update, hard times. Anyway, I wanted to ask about a thing I noticed in the AAPrivatizable attribute:
Apr 1 2020, 2:04 PM · Restricted Project

omarahmed updated the diff for D76550: [Attributor] Improve the alignment of the loads.
  Usage of AssumeAligned to force natural alignment
Apr 1 2020, 1:30 PM · Restricted Project

# Mar 23 2020

omarahmed added a comment to D76550: [Attributor] Improve the alignment of the loads.
  If you always deal with defined alignment there is no need to use MaybeAlign. Use Align in createReplacementValues (it will get rid of the extra if check). Now, LoadInst::setAlignment takes a MaybeAlign, but Align implicitly casts to MaybeAlign, so you can safely pass in an Align (I'll optimize this case by providing setAlignment overloads later). Then, if you get raw alignment values that can be 0, but you assume that 0 means 1, use [assumeAligned](https://github.com/llvm/llvm-project/blob/ccf49b9ef012bab44b1f1322223e8b2e5ca89bad/llvm/include/llvm/Support/Alignment.h#L114). Unfortunately the naming in this context is awkward (but that's how the API is supposed to be used): assumeAligned(AlignAA.getAssumedAlign()). It will become better over time when AlignAA deals with Align/MaybeAlign directly.
Mar 23 2020, 9:15 AM · Restricted Project

# Mar 22 2020

omarahmed updated the diff for D76550: [Attributor] Improve the alignment of the loads.
  Minor cleanup.
Mar 22 2020, 3:01 PM · Restricted Project

omarahmed updated the diff for D76550: [Attributor] Improve the alignment of the loads.
  Minor cleanups
Mar 22 2020, 1:55 PM · Restricted Project

omarahmed added a comment to D76550: [Attributor] Improve the alignment of the loads.
Mar 22 2020, 12:51 PM · Restricted Project

omarahmed updated the summary of D76550: [Attributor] Improve the alignment of the loads.
Mar 22 2020, 11:44 AM · Restricted Project

omarahmed updated the diff for D76550: [Attributor] Improve the alignment of the loads.
  Minor cleanups
Mar 22 2020, 11:44 AM · Restricted Project
Mar 22 2020, 9:04 AM · Restricted Project

omarahmed added a comment to D76550: [Attributor] Improve the alignment of the loads.
  Basictest.ll always causes a strange problem for me: sometimes the align of the load is 0, and the condition I added transforms it to 1; at other times it gives the load an align of 4. Why could this happen?
Mar 22 2020, 7:59 AM · Restricted Project

omarahmed updated the diff for D76550: [Attributor] Improve the alignment of the loads.
Mar 22 2020, 7:59 AM · Restricted Project

# Mar 21 2020

omarahmed added a comment to D76550: [Attributor] Improve the alignment of the loads.
  You should query the base alignment AA in the initialize once, to make sure it exists and is part of the fixpoint iteration. For now that is trivially the case, but I can see that this might not be this way in the future.
Mar 21 2020, 1:52 PM · Restricted Project

omarahmed retitled D76550: [Attributor] Improve the alignment of the loads from "Improve the alignment of the loads" to "[Attributor] Improve the alignment of the loads".
Mar 21 2020, 12:15 PM · Restricted Project
Mar 21 2020, 12:15 PM · Restricted Project

# Mar 13 2020

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  Got it, thanks :)
Mar 13 2020, 11:19 AM · Restricted Project

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  Thanks all :D I am sorry, but a message was sent to me saying "The Buildbot has detected a new failure on builder fuchsia-x86_64-linux while building llvm." So does that mean I broke something here?
Mar 13 2020, 10:11 AM · Restricted Project

# Mar 11 2020

omarahmed updated the summary of D74691: [Attributor] Detect possibly unbounded cycles in functions.
Mar 11 2020, 6:18 AM · Restricted Project

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  > Update commit message
  Sorry, probably it wasn't clear. I meant that you should also update the description of this patch. :)
Mar 11 2020, 6:18 AM · Restricted Project

omarahmed updated the summary of D74691: [Attributor] Detect possibly unbounded cycles in functions.
Mar 11 2020, 6:18 AM · Restricted Project

omarahmed updated the diff for D74691: [Attributor] Detect possibly unbounded cycles in functions.
  Update commit message
Mar 11 2020, 5:08 AM · Restricted Project

# Mar 10 2020

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  I added a single nit comment. Otherwise it LGTM! :)
Mar 10 2020, 4:58 PM · Restricted Project

omarahmed updated the diff for D74691: [Attributor] Detect possibly unbounded cycles in functions.
  Minor cleanup
Mar 10 2020, 4:58 PM · Restricted Project

omarahmed updated the diff for D74691: [Attributor] Detect possibly unbounded cycles in functions.
  Minor cleanups
Mar 10 2020, 9:46 AM · Restricted Project

omarahmed updated the diff for D74691: [Attributor] Detect possibly unbounded cycles in functions.
  Minor cleanups
Mar 10 2020, 9:46 AM · Restricted Project

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  > Remove the TODO. Then please see the comment above.
  Sorry, I don't understand what you mean by "the comment above". Do you mean your comment to uenoko?
Mar 10 2020, 8:39 AM · Restricted Project

# Mar 9 2020

omarahmed updated the diff for D74691: [Attributor] Detect possibly unbounded cycles in functions.
Mar 9 2020, 6:54 PM · Restricted Project

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  I don't know if reversing the condition of mayContainIrreducibleControl is right, but it seemed logical to me. Also, I have run the llvm\test\analysis tests and it didn't break any test.
Mar 9 2020, 2:03 PM · Restricted Project

omarahmed updated the diff for D74691: [Attributor] Detect possibly unbounded cycles in functions.
Mar 9 2020, 1:30 PM · Restricted Project

# Mar 8 2020

omarahmed added inline comments to D74691: [Attributor] Detect possibly unbounded cycles in functions.
Mar 8 2020, 8:14 PM · Restricted Project

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  Why is it failing in Harbormaster remote builds? I am sure it compiles, passes tests, and clang-format was run on it. Also, regarding the tests, it gives "expected fails = 1" in liveness.ll. From what I understand it is expected to fail anyway, but is that a problem that needs to be fixed?
Mar 8 2020, 6:06 PM · Restricted Project

omarahmed updated the diff for D74691: [Attributor] Detect possibly unbounded cycles in functions.
Mar 8 2020, 4:31 PM · Restricted Project

omarahmed added inline comments to D74691: [Attributor] Detect possibly unbounded cycles in functions.
Mar 8 2020, 1:20 PM · Restricted Project

# Mar 7 2020

omarahmed added inline comments to D74691: [Attributor] Detect possibly unbounded cycles in functions.
Mar 7 2020, 6:20 PM · Restricted Project

omarahmed added inline comments to D74691: [Attributor] Detect possibly unbounded cycles in functions.
Mar 7 2020, 5:48 PM · Restricted Project

omarahmed updated the diff for D74691: [Attributor] Detect possibly unbounded cycles in functions.
  [Attributor] Detect possibly unbounded cycles in functions
Mar 7 2020, 1:33 PM · Restricted Project

omarahmed updated the diff for D74691: [Attributor] Detect possibly unbounded cycles in functions.
  [Attributor] Detect possibly unbounded cycles in functions
Mar 7 2020, 12:59 PM · Restricted Project

# Mar 5 2020

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  > • If the algorithm to distinguish cycles and loops makes problems, we can just go through all loops in loop info and verify there is no irreducible control (see for example mayContainIrreducibleControl in MustExecute.cpp)

  That's a great idea! @omarahmed note that LoopInfo gives you the top-level loops (https://llvm.org/docs/LoopTerminology.html#id3). One way to get the top-level loops is getLoopsInPreorder(). Then, you can get all the sub-loops of a loop using getSubLoops(). However, note that this will give you the immediate children, if I'm not mistaken. That is, say you have a top-level loop A that contains B that contains C. A->getSubLoops() will give you B. You then have to do B->getSubLoops() to get C. Before you do all that, you check for irreducible control, of course.

  Other than that, and you may skip this, I think it's interesting to see what happens behind the scenes. mayContainIrreducibleControl() calls containsIrreducibleCFG(). This requires LoopInfo. Apart from its implementation, which is interesting, it's also educational to read about how you test for irreducible control in a CFG (i.e. a cycle that is not a Loop in LLVM terms) without LoopInfo. Here's the diff in which it was added: https://reviews.llvm.org/D40874#1013599 . Check online for T1 / T2 transformations in control-flow graphs (I just did, 'cause of course I didn't know any of that, and it's quite fun).
Mar 5 2020, 5:31 PM · Restricted Project

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  > [Attributor] Detect possibly unbounded cycles in functions
  > So the new approach detects cycles by using a recursive function that does a DFS on the CFG.

  Note that this algorithm detects whether there is a cycle in the function (correctly).
But now note that if you find a cycle, you may want to skip it. In that case, it is wrong. Specifically, because of the Processed set: say you find a cycle A and you skip it because it is a loop with a max trip count (how you do that reliably is another story). There can be a cycle B (in which some nodes of A are contained) that won't be found. That is a false negative (i.e. we'll say the function is willreturn, but it might not be). I think this is a problem. I'll have to think about whether that can happen in a CFG, but from the graph-theory perspective, it definitely can.

  > This new approach also saves the processed nodes so it does not compute them twice, improving the complexity of the algorithm to O(N+E). The approach also doesn't visit statically unreachable blocks, which also improves the complexity.

  Those 2 were true for the previous algorithm as well. If I'm not mistaken, we're doing strictly more work than the previous algorithm (although the facts you wrote are correct).
Mar 5 2020, 7:08 AM · Restricted Project

# Mar 4 2020

omarahmed updated the diff for D74691: [Attributor] Detect possibly unbounded cycles in functions.
  [Attributor] Detect possibly unbounded cycles in functions
Mar 4 2020, 10:58 PM · Restricted Project

omarahmed updated the diff for D74691: [Attributor] Detect possibly unbounded cycles in functions.
  [Attributor] Detect possibly unbounded cycles in functions
Mar 4 2020, 10:25 PM · Restricted Project

# Mar 3 2020

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  > Please add a non-loop cycle test case, aka. an irreducible control flow test case.
  Okay, I will add it in the next diff :)
Mar 3 2020, 2:24 PM · Restricted Project

# Feb 28 2020

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  I tried to use LoopInfo in the algorithm. I have come up with 2 approaches:
  - The first: if we find a cycle, we add the BBs in the cycle to an allCyclesBBs vector; after we return from containCycleUtil with all the BBs that make up cycles, we then loop over the allCyclesBBs vector in the containsCycle function and get the information of the loop with getLoopFor, as we were doing. Or we ask this inside containCycleUtil and make that function return whether there was a cycle with no max trip count.
  - The second: we do this DFS in a stack style, and if we find a cycle, we ask for loop info and do the same checks as before, like maxTripCount.
  I don't know which of them is better?
Feb 28 2020, 11:51 AM · Restricted Project

# Feb 26 2020

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  Could you make sure that you are using the new pass manager? (That means using ./opt -passes=attributor .. and not ./opt -attributor ..) If you are using the old pass manager, you might not get LoopInfo or other analyses.

  Thank you mate, I didn't know I was using the old pass manager.

  @omarahmed I would advise you to not work anymore on the old algorithm, as it's certainly not effective.

  I'm sorry, but I barely have time to work on the new algorithm :/ Otherwise, I would try to answer it.
So, here's a new algorithm with a couple of debug prints:

```
static DenseMap<BasicBlock *, unsigned> VisitingTime;

bool containsCycleUtil(BasicBlock *BB) {
  dbgs() << "Visiting: " << BB->getName() << "\n";
  unsigned VisitingTimeCurrent = VisitingTime[BB];
  for (auto *SuccBB : successors(BB)) {
    dbgs() << BB->getName() << " -> " << SuccBB->getName() << "\n\n";
    unsigned &VisitingTimeSucc = VisitingTime[SuccBB];
    if (!VisitingTimeSucc) {
      VisitingTimeSucc = VisitingTimeCurrent + 1;
      containsCycleUtil(SuccBB);
      VisitingTimeSucc = 0;
    } else {
      dbgs() << "Found cycle: " << VisitingTimeCurrent << ", "
             << VisitingTimeSucc << "\n";
      assert(VisitingTimeCurrent >= VisitingTimeSucc);
      unsigned CycleSize = VisitingTimeCurrent - VisitingTimeSucc + 1;
      dbgs() << "Cycle size: " << CycleSize << "\n";
    }
  }
  return false;
}

static bool containsCycle(Attributor &A, Function &F) {
  BasicBlock *EntryBB = &F.getEntryBlock();
  VisitingTime[EntryBB] = 1;
  return containsCycleUtil(EntryBB);
}
```

The main modification compared to the last one is that I track visiting time instead of walking back up the recursion. This makes the algorithm so much cleaner (and reliable, for reasons that are not on topic). Now, this algorithm, AFAICT, finds all cycles, without false positives. Plus, it detects only statically reachable cycles. That's good news; it should be strictly better than the initial one.

Naively, one could then say "let's apply the same idea as with the SCCs algorithm". This does not fully work though, because check this:

```
define void @loop_with_if(i1 %c1, i1 %c2) {
entry:
  br label %w1
w1:
  br i1 %c1, label %b1, label %exit
b1:
  br label %if
if:
  br i1 %c2, label %t1, label %e1
t1:
  br label %latch
e1:
  br label %latch
latch:
  br label %w1
exit:
  ret void
}
```

If you pay attention, it's a while with an if inside. If you draw out the CFG, you can see that there's a cycle: w1 -> b1 -> if -> e1 -> latch -> w1, which the algorithm will (correctly) find. The cycle size is 5, but the loop size is 6.
So, in such loops, even if SCEV can deduce a max trip count (which is possible), the initial method will fail to realize this is one loop. Which makes a point: there can be a cycle which is not a loop, which still does not disrupt the maximum-trip-count guarantee (interestingly, that means that even if SCCIterator gave us all the SCCs, we would stumble upon this problem). Edit: well, that was obvious before, but not quite in the same context. And there can be many, many such real-world cases.

From this point on, the problem, I think, starts to get quite complicated. It seems we need to start saying "if all the blocks of the cycle belong to the same loop, and these blocks contain the header and the latch of the loop, blah.. blah.." then this cycle can be skipped. IMHO, we should maybe just live with a not-so-perfect algorithm, or decide that it is going to take a good amount of time to get it right. And sorry for the long posts, but this problem turned out to be more complicated than it initially seemed. I hope I helped a bit.

  I now understand the problem clearly. Thank you really, you helped me a lot :) I will try to submit a diff tomorrow with this new algorithm, with a doc added :)
Feb 26 2020, 1:24 PM · Restricted Project

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  I am sorry, but I can't figure out which test the approach of moving over the SCCs fails on, as I have tried to test it a bit with tests like these and printed the SCCs that it gets:

  https://i.imgur.com/fpZhpST.png
    1 - entry
    2 - condition - body
    3 - return
    4 - l4
  https://i.imgur.com/UJJ3aH0.png
    1 - entry
    2 - l1 - l2 - l3 - l4 - l5
    3 - return
    4 - l6 - l7 - l8
  https://i.imgur.com/92NS6rp.png
    1 - entry
    2 - l1 - l2 - l3 - l4 - l5 - l6 - l7
    3 - return

  In all of them it reached the cycle and outputs that the loop does not have a max trip count.

  Can you share the IR (with the appropriate RUN lines, thus tests), not the images, please?
Feb 26 2020, 3:00 AM · Restricted Project

# Feb 25 2020

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  > I am sorry, but I can't figure out which test the approach of moving over the SCCs fails on, as I have tried to test it a bit with tests like these and printed the SCCs that it gets:
  > https://i.imgur.com/fpZhpST.png
  >   1 - entry
  >   2 - condition - body
  >   3 - return
  >   4 - l4
  > https://i.imgur.com/UJJ3aH0.png
  >   1 - entry
  >   2 - l1 - l2 - l3 - l4 - l5
  >   3 - return
  >   4 - l6 - l7 - l8
  > https://i.imgur.com/92NS6rp.png
  >   1 - entry
  >   2 - l1 - l2 - l3 - l4 - l5 - l6 - l7
  >   3 - return
  > In all of them it reached the cycle and outputs that the loop does not have a max trip count.

  Sorry, but I didn't understand your comment very much. The code I wrote certainly can't detect a max trip count. It only (hopefully correctly) finds whether there's a cycle or not. The second modification also found the size of the cycle. Did you write any code that used LI? In that case, as I said earlier, I have some algorithm in mind that uses LI, but I can't seem to be able to get LI (i.e. it's always nullptr). So probably the same will be true for you.
Feb 25 2020, 4:17 PM · Restricted Project

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  I am sorry, but I can't figure out which test the approach of moving over the SCCs fails on, as I have tried to test it a bit with tests like these and printed the SCCs that it gets.
Feb 25 2020, 3:32 PM · Restricted Project

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
```
bool containsCycle(BasicBlock *BB, SmallPtrSet<BasicBlock *, 32> &Visited,
                   SmallPtrSet<BasicBlock *, 32> &Processed) {
  if (Processed.count(BB)) {
    Visited.erase(BB);
    return false;
  }
  Processed.insert(BB);
  Visited.insert(BB);
  for (auto *SuccBB : successors(BB)) {
    if (!Processed.count(SuccBB) && containsCycle(SuccBB, Visited, Processed))
      return true;
    else if (Visited.count(SuccBB))
      return true;
  }
  Visited.erase(BB);
  return false;
}
```
Feb 25 2020, 11:05 AM · Restricted Project

omarahmed retitled D74691: [Attributor] Detect possibly unbounded cycles in functions from "[Attributor] Detect functions with unbounded loops" to "[Attributor] Detect possibly unbounded cycles in functions".
Feb 25 2020, 8:17 AM · Restricted Project

# Feb 24 2020

omarahmed retitled D74691: [Attributor] Detect possibly unbounded cycles in functions from "[Attributor] Detect SCCs with unbounded cycles" to "[Attributor] Detect functions with unbounded loops".
Feb 24 2020, 4:09 PM · Restricted Project

omarahmed updated the diff for D74691: [Attributor] Detect possibly unbounded cycles in functions.
  [Attributor] Detect functions with unbounded loops
Feb 24 2020, 4:09 PM · Restricted Project

# Feb 23 2020

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  (btw, I recommend that you try to answer questions by following the code - it's always a learning experience). Since we took that road, let's have a little more fun. Let's start simple by printing the SCCs we get. A simple way to do that is with sth like:

```
for (scc_iterator<Function *> It = scc_begin(&F), IE = scc_end(&F); It != IE;
     ++It) {
  const std::vector<BasicBlock *> &SCCBBs = *It;
  for (BasicBlock *BB : SCCBBs) {
    dbgs() << *BB << "\n";
  }
  dbgs() << "----- END -----\n\n\n\n\n";
}
```

But we can actually do better if we take a graphical view of the CFG, with view-cfg.
You can do that with sth like:

  ./bin/opt -view-cfg test.ll

assuming that you are in the llvm-project dir, that you have built opt, and that your file is test.ll. Now, this will generate a .dot file. This is a special "graphics" format which we should not care about right now. If you do that, you'll probably see something like:

```
... Writing '/tmp/cfgnon_loop_inside_loop-a0c3a3.dot'... done.
Trying 'xdg-open' program... Remember to erase graph file: /tmp/cfgnon_loop_inside_loop-a0c3a3.dot
gio: file:///tmp/cfgnon_loop_inside_loop-a0c3a3.dot: No application is registered as handling this file
```

For me, it created /tmp/cfgnon_loop_inside_loop-a0c3a3.dot (you won't necessarily have the same name, that's ok). Then, it will try to find a default program to open it (with the xdg-open command), which, as you can see, didn't work for me. Basically, you now have to have a program that understands that file. The dot app will do the job, but you probably won't have it by default. Search online for how to install the graphviz package. For example, on Ubuntu I think you can do it with sudo apt-get install graphviz. Finally, you should be able to create a PDF out of the .dot file as:

  dot -Tpdf /tmp/cfgnon_loop_inside_loop-a0c3a3.dot -o <pdf_filename>.pdf

Then, you can open the pdf and expect to see sth like this: https://imgur.com/a/oW2xgNt

I think viewing CFGs like that can at times be very helpful. In this case, for example, it becomes instantly apparent that the for and the while don't form an SCC.
Feb 23 2020, 4:10 PM · Restricted Project

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  I'm in the unpleasant position to tell you that the method I proposed is wrong. SCCIterator uses Tarjan's algorithm, which, for some reason, I remembered finds all strongly-connected components, but actually it finds only maximal ones. That means, if a loop has an SCC inside it, we'll never find it, since the loop is the maximal SCC.
That probably means that we have to fall back to the method that Johannes had proposed:

  > As before, we identify all cycles via the depth-first search and the visited set. If a cycle is found we did bail before, but now we ask LI & SE instead. If they say we found a proper loop header of a loop with a bounded trip count, we can ignore that cycle and continue exploring.

  I'm really sorry for that. I'll try to think if anything better can be done.
  Edit: Let's not forget, of course: a big thanks to @Meinersbur for pointing this out.

  That's fine, no problem :D I think I can move faster now, as I know the small mistakes I have made till now from the great reviews :) But if it finds only the maximal ones, why didn't it return willreturn here, since it shouldn't have seen the while inside:

```
; int non_loop_inside_loop(int n) {
;   int ans = 0;
;   for (int i = 0; i < n; i++) {
;     while (1)
;       ans++;
;   }
;   return ans;
; }
```

  It's good to refer to the LLVM IR rather than the C/C++, because the latter can have multiple LLVM IR translations. Just to be sure we're talking about the same thing. :) In this case, I assume you meant the relevant test case in willreturn.ll. If you put it in a file on its own and run it through the Attributor, with a couple of prints, you'll see that NoAnalysis is true (btw, I recommend that you try to answer questions by following the code - it's always a learning experience). I don't know why, and that's a good question (on another topic, but still). Interestingly, both LI and SE are null. So, for some reason, for this function, we couldn't get either of those analyses. I'll try to find some time to check tomorrow.
Feb 23 2020, 3:20 PM · Restricted Project

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  SCCIterator uses Tarjan's algorithm, which, for some reason, I remembered finds all strongly-connected components, but actually it finds only maximal ones. That means, if a loop has an SCC inside it, we'll never find it, since the loop is the maximal SCC. That probably means that we have to fall back to the method that Johannes had proposed:

  > As before, we identify all cycles via the depth-first search and the visited set. If a cycle is found we did bail before, but now we ask LI & SE instead. If they say we found a proper loop header of a loop with a bounded trip count, we can ignore that cycle and continue exploring.

  I'm really sorry for that. I'll try to think if anything better can be done.
  Edit: Let's not forget, of course: a big thanks to @Meinersbur for pointing this out.
Feb 23 2020, 2:05 PM · Restricted Project

omarahmed updated the diff for D74691: [Attributor] Detect possibly unbounded cycles in functions.
  [Attributor] Detect SCCs with unbounded cycles
Feb 23 2020, 11:54 AM · Restricted Project

omarahmed retitled D74691: [Attributor] Detect possibly unbounded cycles in functions from "[Attributor] add some pattern to containsCycle" to "[Attributor] Detect SCCs with unbounded cycles".
Feb 23 2020, 11:45 AM · Restricted Project

omarahmed updated the summary of D74691: [Attributor] Detect possibly unbounded cycles in functions.
Feb 23 2020, 11:45 AM · Restricted Project

omarahmed updated the diff for D74691: [Attributor] Detect possibly unbounded cycles in functions.
  [Attributor] Detect SCCs with unbounded cycles
Feb 23 2020, 4:29 AM · Restricted Project

# Feb 22 2020

omarahmed updated the diff for D74691: [Attributor] Detect possibly unbounded cycles in functions.
Feb 22 2020, 9:26 PM · Restricted Project

omarahmed added inline comments to D74691: [Attributor] Detect possibly unbounded cycles in functions.
Feb 22 2020, 12:20 PM · Restricted Project

# Feb 21 2020

omarahmed updated the diff for D74691: [Attributor] Detect possibly unbounded cycles in functions.
  improve the docs and add tests
Feb 21 2020, 6:27 PM · Restricted Project

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  I'm up for it. Although I'd like to wait for Omar to finish this one and see if he thinks he can tackle the next (assuming he wants to). :)
Feb 21 2020, 6:27 PM · Restricted Project

omarahmed updated the diff for D74691: [Attributor] Detect possibly unbounded cycles in functions.
  • improve the docs and add tests
Feb 21 2020, 6:18 PM · Restricted Project

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  @jdoerfert I have found examples in the willreturn.ll file for explicit tests for bounded and unbounded loops, and I fixed them. Do we need to add more complex examples, like a bounded loop inside an unbounded loop, or is this redundant and not important?
Feb 21 2020, 12:46 PM · Restricted Project

# Feb 19 2020

omarahmed added inline comments to D74691: [Attributor] Detect possibly unbounded cycles in functions.
Feb 19 2020, 5:01 PM · Restricted Project

omarahmed added inline comments to D74691: [Attributor] Detect possibly unbounded cycles in functions.
Feb 19 2020, 5:01 PM · Restricted Project

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  Can we write some explicit tests and add them to the willreturn.ll file?
Feb 19 2020, 2:57 PM · Restricted Project

omarahmed added inline comments to D74691: [Attributor] Detect possibly unbounded cycles in functions.
Feb 19 2020, 2:46 PM · Restricted Project

omarahmed updated the diff for D74691: [Attributor] Detect possibly unbounded cycles in functions.
  some cleanups and modify some tests
Feb 19 2020, 2:39 PM · Restricted Project

omarahmed updated the diff for D74691: [Attributor] Detect possibly unbounded cycles in functions.
  some cleanups and modify some tests
Feb 19 2020, 2:27 PM · Restricted Project

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  In this test, shouldn't it have the attribute willreturn?
Feb 19 2020, 9:52 AM · Restricted Project

omarahmed added a comment to D74691: [Attributor] Detect possibly unbounded cycles in functions.
  Caps on the name. Be sure to check the nearby code and use clang-format (it seems you have done so, just reminding).
Feb 19 2020, 7:28 AM · Restricted Project
https://physics.stackexchange.com/questions/130373/the-mass-of-the-photon-will-decrease-with-reduced-velocity
# The mass of the photon will decrease with reduced velocity?

Since the photon has zero rest mass, it would have zero mass when its velocity is zero. We are now able to slow down photons in experiments. Is the mass of the photon reduced with the decreased velocity? Or does it remain the same, decreasing to zero only when the velocity is zero? The deeper question behind it would be: what does it mean for the observer if the object under observation is in another medium with a different speed of light?

• There are endless versions of this question in many guises already on the site. physics.stackexchange.com/questions/3541/… doesn't look the same but is, and in the sidebar of that you will find many more which also have the same answer. – dmckee --- ex-moderator kitten Aug 10 '14 at 19:57
• I am trying to ask this question: "The deeper question behind it would be what it means for the observer if the object under observation is in another medium with a different speed of light." – Edward Cooper Aug 10 '14 at 20:04
• Don't confuse velocity and energy: the speed of propagation of energy (a photon) in a medium other than vacuum (which I suppose is the absence of a medium) is affected by interactions with the matter (electrons, mostly) that it encounters - but a photon cannot be slowed to zero, and the frequency (which is a measure of the photon) cannot change (except by changing the frame of reference). See @dmckee's comment for useful links. – Floris Aug 10 '14 at 20:04
• Energy also has inertia. It is not the mass of the photon that gives it inertia but its energy. Energy can't move slower than the ultimate speed, which happens to be the speed of light. – Self-Made Man Aug 11 '14 at 3:01

A photon has no mass. Ever. It has momentum - and there is a relationship between its energy and momentum, and because we know its speed, we can pretend it has mass $m = \frac{p}{c}$.
But the mass of something traveling at the speed of light is not something we can relate to things not traveling at the speed of light. When light travels in a medium with a refractive index > 1, it slows down. That means that the energy the photon conveys is propagating more slowly - because of interactions with the electrons / atoms in the medium. But you can tell (by observing the frequency of the photon) that its energy did not change. The only way to change the observed frequency (as opposed to wavelength, which changes in a refractive medium) is to change your frame of reference. All the tenets of special relativity are related to light traveling in vacuum. And photons cannot be slowed down to an "apparent" speed of zero.

To answer your question, we need to clarify:

1. Photons have mass that is equal to E = h*f, which comes from frequency.
2. Photons always travel at the speed of light, when measured locally (because they always travel in vacuum, or they are absorbed and re-emitted).
3. EM waves travel slower in a dense medium, when measured locally.
4. EM waves are a herd of photons, and the wavefront slows down in a dense medium because phase shifts, but mainly because the photons are absorbed and re-emitted and the EM interactions take time - about the average lifetime of the excited state of the atom (when the atoms absorb the photon, the electrons are in an excited state; when they re-emit the photon, they go back to the ground state).
5. Photons travel slower next to the sun than the speed of light, when measured from the earth, because the sun has a stronger gravity field. This is the Shapiro effect.
6. It is because the clocks tick slower next to the sun, when viewed from the earth, and the clock here on earth ticks faster. So we divide the path of the photon by a bigger time, and we get a slower speed. The slowing is because spacetime is not flat next to the sun, so the photon has to travel a longer path. So a longer path divided by a bigger time gives you a slower speed.
7. The photon's mass is still its energy in this case, E = h*f, which comes from its frequency, which should not change.
8. The photon's frequency can change when it is redshifted because it is coming from a galaxy that is getting further from us, because of the expanding universe. In that case its mass, which is its energy, can change.
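The energy-frequency relation quoted above (E = h*f) and the "pretend mass" m = E/c² can be put into rough numbers. The following Python sketch is my own illustration, not part of the original answers; the chosen frequency is just a typical value for green light:

```python
# Illustrative numbers only: the "effective mass" m = E/c**2 = h*f/c**2
# one can assign to a photon, evaluated for green light.
h = 6.626e-34        # Planck constant, J*s (rounded)
c = 2.998e8          # speed of light in vacuum, m/s (rounded)

def photon_energy(f):
    """E = h*f, in joules."""
    return h * f

def effective_mass(f):
    """m = E/c**2, in kilograms."""
    return photon_energy(f) / c ** 2

f_green = 5.6e14     # Hz, a typical frequency for green light
print(photon_energy(f_green))    # ~3.7e-19 J
print(effective_mass(f_green))   # ~4.1e-36 kg
```

The result, a few times 10^-36 kg, is tiny even compared to the electron mass (~9.1e-31 kg), which is why this mass equivalent is only a bookkeeping device.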
https://www.physicsforums.com/threads/exponential-functions.655633/
# Exponential functions

Our textbook defines an exponential function as f(x) = ab^x. However, a question was brought up about a function, g(x) = 5^sqrt(x). Is g an exponential function? It looks like an exponential graph for x > 0, but is not continuous on R.

dextercioby (Homework Helper): A function is not defined in the absence of domain and range. So do tell us the whole definition as worded by your book.

jbriggs444 (Homework Helper, 2019 Award):

> Our textbook defines an exponential function as f(x) = ab^x. However, a question was brought up about a function, g(x) = 5^sqrt(x). Is g an exponential function? It looks like an exponential graph for x>0, but is not continuous on R.

Let us ignore the fact that g(x) = 5^sqrt(x) is not always real-valued for negative x and ask the somewhat better-defined question: "Does the graph of g(x) = 5^sqrt(x) match the graph of any function f(x) of the form ab^x for real-valued constants a and b and positive real x?"

One simple-minded way to answer this would be to assume that there is such a function and realize that, if so:

f(0) = a*b^0 = a = g(0) = 5^sqrt(0) = 1, so a = 1
f(1) = a*b^1 = ab = b = g(1) = 5^sqrt(1) = 5, so b = 5

So the question then becomes: "Does the graph of g(x) = 5^sqrt(x) match the graph of f(x) = 5^x?" The answer to that question is rather obvious.
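The final comparison can also be checked numerically. The short Python sketch below is my own addition (not from the thread); it evaluates both functions at a few points:

```python
# If g(x) = 5**sqrt(x) were of the form a*b**x, then matching at x = 0 and
# x = 1 forces a = 1 and b = 5. Checking one more point settles the question.

def g(x):
    return 5.0 ** (x ** 0.5)

def f(x):
    return 5.0 ** x          # the only exponential candidate

print(g(0), f(0))            # 1.0 1.0  (agree)
print(g(1), f(1))            # 5.0 5.0  (agree)
print(g(4), f(4))            # 25.0 625.0  (disagree, so g is not exponential)
```

Since sqrt(4) = 2, g(4) = 5² = 25 while f(4) = 5⁴ = 625, so g cannot be exponential.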
https://socratic.org/questions/how-do-you-simplify-3-1-2-4-2-3
# How do you simplify 3 1/2 + 4 2/3? Jul 29, 2016 Turn the mixed numbers into improper fractions, find a common denominator, and add. #### Explanation: What you have in your equation are called mixed numbers (A whole number + a fraction). We want to turn these mixed numbers into improper fractions (fractions where the numerator is bigger than the denominator), So, let's take the whole number and multiply it by the denominator, then add the numerator: $3 \cdot 2 + 1 = 7$ $4 \cdot 3 + 2 = 14$ $\frac{7}{2} + \frac{14}{3}$ We can't add the fractions just yet, though. To do this, we need a common denominator. Let's use 6 as our denominator, since $2 \cdot 3 = 6$, and $3 \cdot 2 = 6$: $\frac{21}{6} + \frac{28}{6}$ Next, we just add the fractions together: $21 + 28 = 49$ $\frac{21}{6} + \frac{28}{6} = \frac{49}{6}$
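The same arithmetic can be reproduced with Python's fractions module (a sketch added for illustration, not part of the original answer):

```python
from fractions import Fraction

# 3 1/2 -> (3*2 + 1)/2 = 7/2 and 4 2/3 -> (4*3 + 2)/3 = 14/3;
# Fraction finds the common denominator and adds for us.
a = Fraction(3 * 2 + 1, 2)
b = Fraction(4 * 3 + 2, 3)

total = a + b
print(total)        # 49/6
```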
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=59&t=54483&view=print
### 6.4

Posted: Wed Dec 04, 2019 12:04 pm
How many hydronium ions are present at any moment in 100. mL of pure water at 25 °C?

### Re: 6.4

Posted: Wed Dec 04, 2019 12:11 pm
The pH of pure water is 7. The equation to find pH is pH = -log[H3O+]. Substituting 7 into the equation, you would find that the concentration of hydronium ions is 1×10^-7 M. Then, to find the number of ions present, you would do the following: $\frac{1\times 10^{-7}\ \text{mol}}{\text{L}}\left(\frac{0.100\ \text{L}}{1}\right)\left(\frac{6.022\times 10^{23}\ \text{ions}}{\text{mol}}\right) \approx 6.0\times 10^{15}\ \text{ions}$
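Carrying out the multiplication (a quick Python sketch of my own, not part of the original post):

```python
# Number of H3O+ ions in 100 mL of pure water at 25 C:
# pH 7 gives [H3O+] = 1e-7 mol/L; multiply by the volume in liters
# and by Avogadro's number.
concentration = 1e-7        # mol/L, from pH = 7
volume = 0.100              # L (100 mL)
avogadro = 6.022e23         # ions per mol

ions = concentration * volume * avogadro
print(ions)                 # ~6.0e15 ions
```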
http://en.wikipedia.org/wiki/Saint-Venant's_principle
# Saint-Venant's Principle

Saint-Venant's principle, named after the French elasticity theorist Adhémar Jean Claude Barré de Saint-Venant, can be stated as saying that:[1] "... the difference between the effects of two different but statically equivalent loads becomes very small at sufficiently large distances from the load." The original statement was published in French by Saint-Venant in 1855.[2] Although this informal statement of the principle is well known among structural and mechanical engineers, more recent mathematical literature gives it a rigorous interpretation in the context of partial differential equations. An early such interpretation was made by von Mises in 1945.[3]

Saint-Venant's principle allows elasticians to replace complicated stress distributions or weak boundary conditions with ones that are easier to solve, as long as that boundary is geometrically short. In close analogy with electrostatics, where the electric field due to the i-th moment of the charge distribution (the 0th being the net charge, the 1st the dipole, the 2nd the quadrupole) decays as $1/r^{i+2}$ over space, Saint-Venant's principle states that the high-order moments of a mechanical load (moments of order higher than the torque) decay so fast that they never need to be considered for regions far from the short boundary. Therefore, Saint-Venant's principle can be regarded as a statement on the asymptotic behavior of the Green's function due to a point load.

## References

1. ^ A.E.H. Love, "A treatise on the mathematical theory of elasticity". Cambridge University Press, 1927. (Dover reprint ISBN 0-486-60174-9)
2. ^ A. J. C. B. Saint-Venant, 1855, Memoire sur la Torsion des Prismes, Mem. Divers Savants, 14, pp. 233–560
3. ^ R. von Mises, On Saint-Venant's Principle, Bull. AMS, 51, 555–562, 1945
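The electrostatic analogy can be illustrated numerically. The Python sketch below is my own addition, not part of the article: it models a point dipole (the i = 1 moment) as two opposite charges and fits the decay exponent of its on-axis field, recovering the 1/r^(i+2) law:

```python
import math

# Model the i = 1 moment (a point dipole) as two opposite charges +q and -q
# separated by a small distance d (Coulomb constant set to 1), and measure
# how fast its on-axis field decays. The expected exponent is -(i + 2) = -3.
def dipole_field(r, q=1.0, d=0.01):
    return q / (r - d / 2) ** 2 - q / (r + d / 2) ** 2

# Fit the slope of log(field) vs. log(r) between two far-away points.
r1, r2 = 10.0, 1000.0
slope = (math.log(dipole_field(r2)) - math.log(dipole_field(r1))) / \
        (math.log(r2) - math.log(r1))
print(slope)   # ~ -3.0
```

The monopole (i = 0) field would give a slope of -2 in the same experiment, and each higher moment falls off one power of r faster.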
https://www.johndcook.com/blog/2021/03/20/fourier-series-fft/
# Computing Fourier series coefficients with the FFT

The Discrete Fourier Transform (DFT) is a mathematical function, and the Fast Fourier Transform (FFT) is an algorithm for computing that function. Since the DFT is almost always computed via the FFT, the distinction between the two is sometimes lost. It is often not necessary to distinguish between the two. In my previous post, I used the two terms interchangeably because the distinction didn't matter in that context.

Here I will make a distinction between the DFT and the FFT; I'll use DFT to refer to the DFT as it is defined in [1], and FFT for the DFT as computed in NumPy's FFT routine. The differences are due to varying conventions; this is often an issue.

Suppose we have a function f on an interval [-A/2, A/2] and we sample f at N points where N is an even integer. Define

f_n = f(nA/N)

for n running from -N/2 + 1 to N/2. The DFT of the sequence {f_n} is defined in [1] as the sequence {F_k} where

F_k = (1/N) Σ_{n = -N/2+1}^{N/2} f_n exp(-2πi nk/N).

Now suppose the function f that we sampled has a Fourier series

f(x) = Σ_{k = -∞}^{∞} c_k exp(2πi kx/A).

The Fourier coefficients c_k relate to the DFT output F_k according to the Discrete Poisson Summation Formula [1]:

F_k = c_k + Σ_{j ≠ 0} c_{k+jN}.

This means that the c_k simply equal the F_k if f has no frequency components higher than N/2 Hz, because all the terms in the infinite sum above are zero. That is, if f is band-limited and we have sampled at a frequency higher than the Nyquist frequency, then we can simply read the Fourier coefficients off the DFT.

However, when f is not band-limited, or when it is band-limited but we have not sampled it finely enough, we will have aliasing. Our Fourier coefficients will differ from our DFT output by an error term involving high-frequency Fourier series coefficients.

In application it's usually too much to hope that your function is exactly band-limited, but it may be approximately band-limited. The Fourier coefficients of smooth functions eventually decay rapidly (see details here), and so the error in approximating Fourier series coefficients by DFT terms is small.
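The aliasing described above is easy to demonstrate. The following Python sketch is my own addition, not from the post: a 5 Hz cosine sampled at only N = 8 points produces exactly the same samples as a 3 Hz cosine, so the DFT cannot tell them apart and assigns the energy to the lower frequency:

```python
import numpy as np

# A 5 Hz cosine sampled at N = 8 points over one period is indistinguishable
# from a 3 Hz cosine: 5 is above the Nyquist frequency N/2 = 4 and aliases
# to 5 - 8 = -3 (and cosine is even, so -3 looks like 3).
N = 8
n = np.arange(N) / N

high = np.cos(2 * np.pi * 5 * n)   # above Nyquist
low  = np.cos(2 * np.pi * 3 * n)   # its alias

print(np.allclose(high, low))      # True: the sample sequences are identical

F = np.fft.fft(high) / N
print(np.round(F.real, 3))         # 0.5 at indices 3 and 5: the energy is attributed to 3 Hz
```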
## Computing a DFT with the FFT

We defined the DFT of the sequence {f_n} above to be the sequence {F_k} where

F_k = (1/N) Σ_{n = -N/2+1}^{N/2} f_n exp(-2πi nk/N)

and k runs from -N/2 + 1 to N/2. NumPy, on the other hand, defines the DFT of the sequence {a_n} to be the sequence {A_k} where

A_k = Σ_{n = 0}^{N-1} a_n exp(-2πi nk/N)

and k runs from 0 to N-1. Relative to the definition above, the NumPy definition differs in three ways:

1. The normalization factor 1/N is missing.
2. The input indices are numbered differently.
3. The output is arranged differently.

The first difference is trivial to overcome: simply divide the FFT output by N. The second difference is easy to deal with if we think of the inputs to the FFT as samples from a periodic function, which they usually are. The f_n come from sampling a periodic function f over an interval [-A/2, A/2]. If we sample the same function over [0, A] we get the a_n. We have

f_n = a_n for 0 ≤ n ≤ N/2, and f_n = a_{n+N} for -N/2 + 1 ≤ n < 0.

If we extend the fs and the as periodically past their original ranges of definition, then they all agree. But since we start our sampling in different places, our samples would be listed in different orders if we stored them by increasing index.

Something similar occurs with the output. For 0 ≤ k ≤ N/2, F_k = A_k. But for -N/2 < k < 0, F_k = A_{N+k}.

## Example

We'll use a band-limited function in our example so that we find the Fourier coefficients exactly.

f(x) = 7 sin(6πx) - 5 cos(4πx)

We compute the FFT as follows.

    import numpy as np

    def f(x):
        return 7*np.sin(3*2*np.pi*x) - 5*np.cos(2*2*np.pi*x)

    N = 8
    delta = 1/N
    x = [f(n*delta) for n in range(N)]
    print( np.fft.fft(x)/N )

The output is

[0, 0, -2.5, -3.5i, 0, 3.5i, -2.5, 0]

This says F_2 = F_{-2} = -5/2, and so the coefficient of cos(2·2πx) is -5. Also F_3 = -7i/2 and F_{-3} = 7i/2, so the coefficient of sin(3·2πx) is 7. These results follow from Euler's equation

exp(iθ) = cos(θ) + i sin(θ)

[1] William L. Briggs and Van Emden Henson. The DFT: An Owner's Manual for the Discrete Fourier Transform. SIAM 1995.

## 2 thoughts on "Computing Fourier series coefficients with the FFT"

1.
Jonathan

Well, I definitely got my answer to my previous comment. :-)

2. MAKARAND DESHMUKH

Consider the periodic functions f(x) = cos(3x), g(x) = sin(2x), h(x) = sin(2x)sin(3x), with a period of 2π. Use the MATLAB functions fft, ifft, and fftshift (to have the mean at zero) to calculate the Fourier modes of these functions.
a) Plot the f(x), g(x), h(x) functions in the physical domain, f(x) vs. x.
b) Plot the magnitudes of the Fourier modes of f(x), g(x), h(x) as a function of wave number, |f̂| vs. k.

Can you please tell me how to solve this problem in MATLAB? I saw the above example but I cannot understand a few terms, so can you please help me out?
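The index bookkeeping between the two conventions can be verified directly in NumPy. This sketch is my own addition, not part of the post or its comments; it reads the book-style coefficients F_k out of NumPy's output array:

```python
import numpy as np

# Recover the book-style DFT coefficients F_k (k from -N/2+1 to N/2)
# from NumPy's FFT output A_k (k from 0 to N-1).
N = 8
n = np.arange(N) / N
samples = 7 * np.sin(3 * 2 * np.pi * n) - 5 * np.cos(2 * 2 * np.pi * n)

A = np.fft.fft(samples) / N   # divide by N to restore the 1/N normalization

def F(k):
    """Book convention: nonnegative k reads from the front, negative k from the back."""
    return A[k] if k >= 0 else A[N + k]

print(F(2), F(-2))   # both ~ -2.5, so the cos(2*2*pi*x) coefficient is -5
print(F(3), F(-3))   # ~ -3.5i and +3.5i, so the sin(3*2*pi*x) coefficient is 7
```

np.fft.fftshift performs essentially the same reordering in one call, rotating the zero-frequency term to the middle of the array.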
https://8ch.net/marx/res/8560.html
# /marx/ - Marxism

It makes you smart

File: 9d2b79227645d21⋯.jpg (236.86 KB, 646x839, 646:839, Labor Defender June 1929.jpg)

No.8560 As the title says. I figure a general "ask me questions" thread is good. Can be questions about socialism, US history, the Marxist position on religion, or whatever else.

No.8563 Thoughts on the Gang of Four? Were they really Mao's true successors?

No.8564 >>8563 As far as I know, Mao had a falling out with Jiang Qing by the time he died and made Hua Guofeng his successor. The Gang of Four were, like Lin Biao, major figures behind the Cultural Revolution and clearly sought to take power after Mao's death. I think it was correct to condemn them.

No.8567 File: 74ffe9c67863715⋯.png (54.63 KB, 1802x211, 1802:211, ClipboardImage.png) Could you go more in depth on how NK wanted to spread Juche to other Parties?
No.8568 >>8567 The DPRK promotes Juche study groups across the world, and to this day still holds international seminars: https://www.youtube.com/watch?v=ku12MLh7KQA There's a lengthy article on how Eldridge Cleaver tried to get the Black Panther Party to adopt Juche in the early 70s after time spent in the DPRK: https://apjjf.org/2015/13/12/Benjamin-Young/4303.html In Mexico there was the Revolutionary Action Movement, comprised of student guerrillas who trained in the DPRK and adopted the Juche idea. Those are the two examples I can think of off the top of my head. The DPRK's efforts to turn Juche into an international ideology fell flat, just like Gaddafi's attempts to turn his Third International Theory (i.e. the Green Book stuff) into a global movement. A good read on the DPRK's foreign policy during the Cold War and efforts to promote Juche can be found here: http://b-ok.xyz/book/2383302/f05f0d

No.8569 Any good books on Pol Pot / Kampuchea from a Marxist perspective?

No.8570 Who are you rooting for in the World Cup?

No.8571 What is the Cuban system of workplace democracy? (I need it because there are MLMs in my mentions who claim it's state capitalist.)

No.8572 Also, do you have some sources on poverty and hunger in China under Mao?

No.8573 >>8569 https://archive.org/details/KampucheaTheRevolutionRescued >>8570 I have no interest in sports. >>8571 I do have two books on Cuban democracy in general, and I have a PDF on how trade unions work in Cuba. If you register on eregime.org and send me a private message there with your email (I go by Ismail on there as on here), I'll email the PDF. As for the two books on Cuban electoral democracy: >>8572 I can't think of any off the top of my head.

No.8576 Do you know anything about the history of the Hmong people? How do we talk with Hmong people about socialism given their difficult experiences with the LPDR and the SRV?

No.8577 >>8576 I do not.
No.8583 Is liberalization really the only way to develop the productive forces? Isn't there a way to attract foreign capital without having to resort to liberalization?

No.8584 >>8583 I don't know how you'd attract foreign capital besides liberalizing ways in which foreign capital can invest in a country. You can certainly restrict liberalization to small parts of the country (as the DPRK has tried to do since the 90s with little to show for it), but capitalists want to invest in places where they're sure their investments will be protected for a long time and where profit is all but guaranteed. "American technicians, engineers and administrators Lenin particularly held in high esteem. He wanted five thousand of them, he wanted them at once, and was ready to pay them the highest salaries. . . . America was so far away. It did not offer a direct threat to the life of Soviet Russia. And it did offer the goods and experts that Soviet Russia needed. 'Why is it not then to the mutual interest of the two countries to make a special agreement?' asked Lenin. But is it possible for a communistic state to deal with a capitalistic state? Can the two forms live side by side? These questions were put to Lenin by [French journalist] Naudeau. 'Why not?' said Lenin. 'We want technicians, scientists and the various products of industry, and it is clear that we by ourselves are incapable of developing the immense resources of the country. Under the circumstances, though it may be unpleasant for us, we must admit that our principles, which hold in Russia, must, beyond our frontiers, give place to political agreements. We very sincerely propose to pay interest on our foreign loans, and in default of cash we will pay them in grain, oil, and all sorts of raw materials in which we are rich. 'We have decided to grant concessions of forests and mines to citizens of the Entente powers, always on the condition that the essential principles of the Russian Soviets are to be respected.
Furthermore it will even consent—not cheerfully, it is true, but with resignation—to the cession of some territory of the old Empire of Russia to certain Entente powers. We know that the English, Japanese and American capitalists very much desire such concessions. . . 'This state property is ceded for a certain time, probably eighty years, and with the right of redemption. We exact nothing drastic of the association. We ask only the observance of the laws passed by the Soviet, like the eight-hour day and the control of the workers' organizations. It is true that this is far from Communism. It does not at all correspond to our ideal, and we must say that this question has raised some very lively controversies in Soviet journals. But we have decided to accept that which the epoch of transition renders necessary.'" (Williams, Albert Rhys. Lenin: The Man and His Work. New York: Scott and Seltzer. 1919. pp. 103-106.)

No.8587 >>8560 Maybe you can make the old thread a sticky? It's a goldmine really.

No.8588 >>8587 When the thread disappears (which will take a while at the current rate new threads are created) I can just post a saved version here or in a new thread.

No.8590 What do you think of Khrushchev and his reforms? I often hear criticisms of him but I don't really know what he did and what he did wrong. I would love an explanation from you!

No.8591 >>8590 A lot of what Khrushchev did was based on impulse, often without even consulting his colleagues in the Politburo. So for example a good idea (disbanding the machine-tractor stations) ended up having unnecessary drawbacks (immediate abolition meant that many collective farms had to go into debt to buy the tractors, the farms now possessed tractors without storage space to put them, and most mechanics formerly employed by the MTS went to the cities rather than remain in the countryside, which meant getting repairs was difficult.) In general, Khrushchev's era saw a lot of experimentation, e.g.
there was a greater emphasis on citizen participation in law enforcement and enforcing public morals against drunkenness and rude behavior. This sometimes backfired, e.g. citizen volunteers consisting largely of pensioners with too much time on their hands criticizing couples holding hands in public. Soviet society improved under Khrushchev, but his grandiose promises of outperforming the US economy within a few years and of reaching communism by the 1980s annoyed his colleagues, who also resented his impulsive behavior. Thus, he was ousted. He wasn't a great leader, but there's no evidence he was some secret traitor to socialism.

No.8598 >>8591

No.8599 By every measure Soviet society improved from it: soviets at all levels were made more responsive and representative, the press was more open, the trade unions assumed a more active role, a whole bunch of revolutionaries had their reputations posthumously rehabilitated (Rudzutak, Enukidze, Béla Kun, Piatnitsky, Berzin, Lozovsky, Bubnov, Fritz Platten, Dybenko, and innumerable others), etc. However, it's worth noting that while Khrushchev certainly sped up the process, "de-Stalinization" had already begun under Malenkov, e.g. within months of Stalin's death the CPSU began talking about how personality cults are bad, although Stalin wasn't explicitly named.

No.8607 >>8599 This sounds pretty good; why do so many socialists hate him?

No.8608 >>8607 Because they either argue that he "weakened" socialism in the USSR somehow, or they claim he was an outright traitor who presided over the establishment of "state-capitalism." The "Secret Speech" contained omissions and distortions, and Khrushchev presented Stalin's misdeeds as if doing a sensational exposé rather than an objective, Marxist analysis, which caused unnecessary upheaval in the international communist movement. Besides that, some of his economic policies were inept (e.g. setting up sovnarkhozes.)
Otherwise though I can't think of anything he did that seriously set back the USSR. The Soviets were seen as being at their technological height under him. The launching of Sputnik and Khrushchev's boasts about reaching communism in two decades made lots of American analysts and businessmen seriously worry that the US was falling behind.

No.8609 File: 246c965e26a3fe1⋯.jpg (18.45 KB, 476x358, 238:179, 246c965e26a3fe1680fc032c62….jpg) What is the general consensus on where revolutions are more likely to happen (and succeed)? Core or periphery? I think it was Mao who came up with the "weakest link" theory, but what Lenin wrote about imperialism also seems to imply that the proletariat in imperialist countries has a hard time developing class consciousness. This contrasts with Marx's belief that the revolution was going to happen in the most advanced capitalist countries.

No.8610 >>8609 Lenin did argue that Tsarist Russia was the "weak link" of the imperialist countries, but yeah, I don't think there is a general consensus. In Marx's lifetime and up till 1917 the international socialist movement no doubt thought that revolution would take place in the West first, although Marx and others (including Lenin) did argue it was possible Russia could have a revolution before the West. In the 1920s-30s emphasis was still placed on revolutionary conditions maturing in the West, although anti-colonial revolts turning into socialist revolutions in places like China, India and Latin America received greater attention. The triumph of the CPC in 1949 is when focus began shifting to the third world, a trend furthered by the Cuban Revolution, the Vietnam War, and the national liberation struggles in Angola, Mozambique, South Africa, etc. A lot of the fixation on the third world receded by the end of the 80s though, since all the socialist-oriented countries had major economic and political problems to contend with and most dropped Marxism afterward.
Maoists still seem to focus largely on third world movements (e.g. Nepal, the Philippines, India, Peru), and anarchists have shifted a fair bit to the third world as well (hence fascination with the EZLN in the 90s and Rojava nowadays.) But in general I don't think it makes sense to sharply divide the "core" and "periphery." Lenin argued that revolutionary situations arise independently of the will of entire classes (although the outcome of said situations obviously depends in large part on subjective factors like the proletariat's organization.) In my experience, most communists expect revolutionary situations to develop in the West in their lifetimes. That doesn't mean they neglect the growth of revolutionary situations elsewhere; it just means there's no expectation that revolutions will only break out in one area before they break out in another. No.8615 >>8610 So what are the implications of the fact that the bourgeoisie in the imperialist countries uses superprofits to "buy out" the proletariat? What about the idea that the proletariat of an oppressing nation will view the workers of the oppressed nation as enemies, like Marx pointed out in English-Irish relations? I think this is essentially what we're seeing in Europe today with regards to refugees, and what has driven large parts of the proletariat into the arms of chauvinist political parties. No.8616 >>8615 It's certainly relevant, but at the same time there's chauvinism outside the West as well, e.g. Nigerian immigrants being treated with hostility in South Africa. Capitalists are able to set workers against one another not only by "buying out" the proletariat, but also through things like "if you white guys try to strike for higher pay, we'll just use non-whites who are willing to work for lower pay" and just plain ol' racism. So it's important not to oversimplify, e.g.
the British Empire still had to contend with a powerful labor movement at home, with fears among the bourgeoisie of a revolutionary situation developing in 1919 and again in 1926. The weakness of the labor movement in the US vis-à-vis Britain can't be entirely explained in terms of "buying out" either. No.8617 What's your take on the Tiananmen Square protests of '89? No.8618 >>8616 Any reads on that last point? No.8619 >>8617 The overall thrust of the protests was to replicate in China what was happening in Eastern Europe. The CPC refused to allow that to happen. >Any reads on that last point? Foner's history of the labor movement in the US is a good read. The first five volumes are online and can be found here: https://uselectionatlas.org/FORUM/index.php?topic=202462.0 No.8623 >>8608 So would you argue that Khrushchev was "better" than Stalin? Or that the USSR was "undemocratic" under Stalin?! No.8624 >>8623 No. Stalin was presented with different tasks than Khrushchev. The former oversaw the construction of socialism and rapid industrialization/collectivization of the country to meet what he (correctly) gauged was a coming invasion. Khrushchev had to deal with a "modern" society and rising expectations on the part of citizens for greater access to consumer goods and information on the outside world. In terms of actual leadership, I don't think anyone can deny that Stalin was more competent and more intelligent than Khrushchev. It isn't that the USSR was "undemocratic" under Stalin, it's that Soviet democracy obviously had limitations and shortcomings, much of it initially conditioned by the need to protect against the threat of counter-revolution. I would argue that the scope of democratic decision-making and the role of citizens did expand under Khrushchev and Brezhnev, although the result still fell short of what was necessary.
To give an example of expanded participation, the draft of the 1936 Soviet Constitution was widely discussed in the press and in public meetings; over 150,000 amendments were proposed to it by citizens. However, when it came time to do the same with the draft of the 1977 Constitution, the scope of the discussion was even broader. >The 1977 constitutional discussion does stand apart from previous discussions of legal reform, at least in terms of its scope and duration. . . . It also seems clear that the scope of what is considered permissible for public discussion in the Soviet press has grown enormously since Stalin's death. . . the discussion has brought forth a remarkable variety of proposals on a wide range of concerns. . . . >By the end of June, the discussion had mushroomed into a great volume of citizen activity. Izvestiya, for example, reported that it had received over two and a half million letters on the Draft, while the municipal party organization in Kiev announced that exactly 41,787 groups were discussing the Constitution in that city alone. . . . A recurring pattern of issues, reflecting themes of the leadership, group interests, and individual concerns, began to emerge in the national and regional press. . . the citizen from Sverdlovsk who suggested adding to the personal property clause (Art. 12) explicit mention of an individual's right to own a car; or the pro-women's liberation letter which advocated that a phrase promising "government assistance to single mothers" be given a place in the document (Art. 35). One "old-timer," a member of the party since 1919 and a veteran of the discussion of the 1936 Constitution, simply expressed his pleasure at again having the opportunity to take part in such a great undertaking. (Sharlet, Robert. "The New Soviet Constitution." Problems of Communism vol. XXVI, no. 5 (September-October 1977). pp. 15-17.) No.8626 Why did the USSR dissolve, and who was to blame?
No.8628 >>8626 The main factors of Soviet collapse were:
1. Perestroika tanking the economy, causing many to consider socialism a lost cause and look towards capitalism.
2. Glasnost, which at first simply meant being more forthright about problems in Soviet society, but quickly turned into "freedom" to attack the history of the USSR and to promote nationalist doctrines.
3. The CPSU's authority was eroded by the disastrous results of Perestroika and Glasnost. Numerous factions sprang up arguing over whether Marxism-Leninism was even relevant anymore, Article 6 of the Soviet Constitution (which described the CPSU as "the leading and guiding force of the Soviet society and the nucleus of its political system") was all but done away with in early 1990, and the mass exodus of members from the party meant it wasn't able to properly finance its activities.
4. In 1990 the republics of the USSR began proclaiming their "state sovereignty," meaning they would enact their own laws independent of whatever the central government thought and ignore central government laws that contradicted their own.
5. The August coup showed Gorbachev was politically impotent (he was placed under house arrest while on holiday in the Crimea and cut off from the outside world, whereas Yeltsin avoided arrest and openly defied the coup), and it gave the impression to the non-Russian republics that the Union would be held together by force. Yeltsin also took advantage of the coup to outlaw the CPSU.
Thus after the August coup those republics of the USSR that hadn't proclaimed independence rapidly did so, and in December 1991 the leaders of Russia, the Ukraine and Byelorussia declared the Soviet Union dissolved. Gorby was powerless to do anything at that point. Gorbachev, as leader of the USSR, bore the greatest responsibility for the USSR's collapse.
This is the best account on the subject: http://b-ok.xyz/book/1246151/ea7f45 No.8629 >>8628 What can we learn from such a mistake and how do we prevent such a thing from happening in the future? No.8630 Do you have anything about the KPD in Weimar that isn't from a Trotskyist perspective? No.8631 >>8560 Also, you should archive the old thread in case it slips out of the catalog (may take a while, but better do it now). Here: http://archive.fo/fN52b Maybe put that in the OP? No.8636 >>8631 I will if the old thread dies. >>8630 There are studies of specific periods of the KPD, but as far as I know no English-language histories that aren't Trot. There is an ML history of Germany in English from 1945: https://archive.org/details/TheLessonOfGermanyGuideHistory (the authors were soon to be academics in the GDR) >>8629 There wasn't anything inevitable about the USSR's demise. If Grigory Romanov or Yegor Ligachev had been in charge rather than Gorby, the Soviet Union would either still be around or at the very least not have fallen in 1991. The main thing is to strengthen party democracy. The authors of the book I linked to make the point that Gorby was able to remove "hardliners" (i.e. opponents of his policies) within the Politburo with relative ease using underhanded intrigues, since the leadership of the party was pretty much unaccountable to anyone. No.8638 >>8636 Have you read Kotz and Weir's "Revolution from Above: The Demise of the Soviet System"? Your thoughts on it? No.8642 >>8638 I read it a long time ago, but I recall it being good. The authors of "Socialism Betrayed" (the book I linked to earlier) make use of it. * http://b-ok.xyz/book/942717/efc03c (original version) * http://b-ok.xyz/book/756074/af7b35 (revised/updated) No.8647 Are you aware of any serious critics of Marxism who actually understand it?
All the anti-Marxists I've encountered either misinterpret Marxism and attack a strawman, or simply have totally ludicrous ideas (e.g. they think Marxism is bad because it's against God's will). No.8648 >>8647 Oh, and I'm not talking about people who understand Marxism but oppose it because they see it as a threat to their power. No.8650 >>8647 Leszek Kołakowski's "Main Currents of Marxism" is one of the more significant critiques of Marx, at least philosophically. But whenever he veers away from philosophy it's just standard anti-communist stuff like "Lenin thought workers were idiots and that only he and his other 'enlightened' comrades truly knew what was best for them, also communists in power merely use Marxism in order to obtain TOTAL CONTROLLLLLLLLL." Christopher Lasch's "The True and Only Heaven: Progress and Its Critics" isn't focused on Marxism specifically, but does critique Marx and Engels alongside many others in what he regards as their flawed understanding of historical progress. You can find those books on b-ok.org But yeah in my experience the vast majority of critiques of Marxism are based on emotion or self-interest. Mike Davidow, an American Communist journalist, gave an example of the sort of emotional rhetoric common in 1989-1991 (in this case quoting a well-known Soviet philosopher): >The false idea of the possibility to build a kingdom of universal equality and to radically transform human nature with its inborn individualism has been put into life by creating a system distinguished by its spiritual and political poverty, slavery, monstrous . . . moral and social degradation. >Chernobyl demonstrated to the world at large that the USSR is a threat to the entire humanity, that the Soviet people are incapable of living in the world of modern technologies, are incapable of mastering and turning to use the advances of contemporary science. No.8653 >>8647 Check out Cornelius Castoriadis' The Imaginary Institution of Society. 
The first chapter is an elaborate critique of Marxism from someone who had been a Marxist for the longest part of his life and understands it better than most. No.8658 I heard on /leftypol/ that you helped start a party in the USA called the American Party of Labor? If so, are you still associated with them? Do you still line up ideologically with their beliefs? No.8664 >>8658 In 2007-2008 there were a few "Hoxhaists" on RevLeft, myself included. The idea of setting up a pro-Hoxha party in the US was proposed, and I was one of its founding members (although technically I wasn't a member, but part of its youth wing.) I was in the party till 2011 or so; the most significant thing I did was help write parts of a critique of Glenn Beck's "Revolutionary Holocaust." I left the party because I felt I wasn't actually able to do anything beyond merely write a blog post here and there. To me being involved in a Marxist-Leninist party entails more substantial work. I also thought the APL had no real future. Nowadays the APL is a bit bigger than when I left and does do some real-life organizing, although nothing compared to groups like FRSO, WWP or PSL. >do you still line up ideologically with their beliefs? I'm not a "Hoxhaist" anymore, so no. No.8666 What led to the very different development of the economic system of Yugoslavia in comparison with the USSR? What were the motives for straying from the model of a planned economy? No.8667 >>8666 Stalin basically excommunicated them from the international communist movement, calling Tito a fascist and advocating his overthrow. The West hypocritically denounced this act of "Soviet imperialism" (even though the capitalists had previously damned Tito as supposedly the most fanatical of Eastern European communist leaders) and showered Yugoslavia with aid.
These factors led the Yugoslavs to strike out on their own path, both to legitimize their break with the Soviets (they argued when Stalin was alive that the USSR had become "state-capitalist," although after relations were normalized in 1955 they seemed to call the Soviet system "bureaucratic socialism" instead) and to be more appealing to Western audiences, especially social-democrats. No.8668 Thoughts on the Tudeh party? No.8669 >>8667 But why the particular form of the path? What about the conditions present in Yugoslavia made them favour an entirely market-based approach? No.8670 >>8668 It seemed okay. >>8669 Initially when the Soviet-Yugoslav break occurred the Yugoslavs wanted to show they were "more holy than the Pope" via accelerating collectivization and similar "hardline" measures. But that didn't accomplish anything. And since it wasn't really possible to adopt a *more* centrally planned economy than the USSR's, the only other option was to adopt a more market-based approach under the argument that this accorded better with workers' self-management. Also the fact they had been denounced by the USSR meant they had no "official" ties to Soviet interpretations of Marxism-Leninism and could basically do what they wished, just like how Mao after the Sino-Soviet split sought to make Mao Zedong Thought an international ideology and launch the Cultural Revolution. No.8672 >>8664 Are you in a party today? No.8674 1. There's a book you uploaded a while ago called History of the civil war in the USSR Vol 1 and 2. What is the context of this book? And, since it seems to be mostly about the October Revolution itself, why is it called that? 2. Do you know any other works on the Russian civil war besides the ones you previously recommended to me, the Armed Intervention book and the chapters from History of the USSR Era of socialism? No.8675 >>8672 No. >>8674 I never uploaded those books, but they are online.
Both books were written when Stalin was alive and were meant to be continued with additional volumes, but the project was never completed by the time of his death and had to be started anew. I'd imagine mostly being about the October Revolution is to provide context. >Do you know any other works on the Russian civil war besides the ones you previously recommended to me, the Armed Intervention book and the chapters from History of the USSR Era of socialism? W.H. Chamberlin's two-volume "The Russian Revolution" is the classic bourgeois account. "Red Victory" by W. Bruce Lincoln is also an important read. No.8677 >>8675 Thanks >"Red Victory" by W. Bruce Lincoln Is this available online? No.8678 >>8677 Not to my knowledge. I physically own a copy. I've used it to help with the 1919 simulation I'm running: http://eregime.org/index/ No.8681 Would you say that China has private ownership of the means of production?! No.8682 File: 3e54aa44de0b09d⋯.png (404.7 KB, 640x425, 128:85, gorby1.png) Thoughts on Venezuela, the Bolivarian revolution, Chávez and Maduro? What stance should Western socialists and Marxists take on this country and this movement? No.8683 >>8681 Land is publicly owned; private ventures lease it from the government. >>8682 I think the Venezuelan government is oriented toward socialism, as it understands the term. But the PSUV is not a Marxist-Leninist party, which means there still exists the need for a separate communist party (and there are multiple such parties in Venezuela) to critically support the government against imperialism and domestic reactionaries, while at the same time pointing out any backsliding on the part of the government and also pointing out the need for the dictatorship of the proletariat. No.8686 >>8678 Any chance of you scanning this sometime? No.8687 >>8686 Not any time soon. No.8689 >>8683 >Land is publicly owned; private ventures lease it from the government.
But the same could be said of almost any country. If my country (Australia) leased government land to a privately owned capitalist business which used it in the capitalist manner, isn't that basically the same as what China does? How is that socialism? No.8690 >>8689 >But the same could be said of almost any country Is all land in your country government-owned? >How is that socialism? The Chinese aren't arguing that it is. They argue that it develops the productive forces, which in turn gives better material foundations for socialism. Just like the Soviets welcomed the investments of Armand Hammer and other capitalists to the USSR during the 1920s to develop the economy, while the "commanding heights" of said economy remained in state hands. No.8691 >>8690 >commanding heights I've seen you using that argument quite a lot. What are the actual "commanding heights" in China though? What branches of industry does the socialist mode of production encompass? Heavy industry? Transport? No.8692 >>8691 Yes, heavy industry and transport is generally what's meant. The term "commanding heights" comes from Lenin. As he put it: >What is the plan or idea or essence of NEP? >(α) Retention of the land in the hands of the state; >(β) the same for all commanding heights in the sphere of means of production (transport, etc.); >(γ) freedom of trade in the sphere of petty production; >(δ) state capitalism in the sense of attracting private capital (both concessions and mixed companies). No.8693 >>8692 Then according to Lenin, China is state capitalist. No.8694 >>8693 No, because Lenin isn't analyzing China. Lenin pointed out as early as 1918 that a socialist sector existed in Soviet Russia. It wasn't the dominant mode of production, but it did exist alongside state-capitalism and other forms of property. In China the socialist sector is dominant. No.8695 Ismail, in your opinion what are the top 10 resources for Marxism?
Could be organizations, institutes, websites, YouTube channels, books, etc. No.8696 >>8695 I like showing people Parenti's "Blackshirts and Reds" for an introductory busting of a bunch of anti-communist myths: http://b-ok.xyz/book/981420/378c5d I recently scanned this intro to the very basics of Marxism which I thought was decent: https://archive.org/details/IntroductionMarxism This is the introductory work I always show people in regards to Lenin's theories and the October Revolution: https://archive.org/details/lininandtherussi035179mbp No.8698 >>8690 Yeah, but that means that any hope of socialism emerging from the Chinese system hinges on the Party bureaucrats (who have increased their wealth to millions, maybe billions, of dollars) and the national bougies actually having the will (and, in the case of the bureaucrats, the ability) to transfer to a socialist model. No.8714 What is the correct leftist line on freedom of speech, in your opinion? I'm not asking about what free speech rights have existed in socialist countries historically, nor what laws we would prefer to have in our ideal society. I'm more interested in your thoughts on the discourse about free speech going on in American society right now. For example, should we support affluent kids at prestigious private universities in their efforts to shut down and deplatform conservative guest speakers, like Milo Yiannopoulos, Ben Shapiro or Jordan Peterson? And what is the proper response to neo-fascists organizing public protests, marches, etc? Would be interesting to hear your thoughts on both the philosophical aspect and the political/strategic aspect of this issue. No.8715 >>8714 I see nothing wrong with people heckling and protesting speeches by those who seek to legitimize racist and fascist discourse. That those doing it are "affluent kids at prestigious private universities" isn't particularly relevant, considering that those listening to the speakers are also from the same background.
When the NAACP (comprised of Du Bois and other highly-educated, well-off Blacks) protested theaters in an attempt to prohibit the showing of "The Birth of a Nation," they were acting correctly. >And what is the proper response to neo-fascists organizing public protests, marches, etc? Counter-protests and meeting violence with self-defense. The problem with Antifa isn't that it punches Nazis, it's that its whole strategy revolves around counter-protesting as a way to defeat fascism. Communists go beyond that and seek to organize the working-class toward political power, whereas anarchists abstain from political struggle and leave the field open to the very fascists and other ultra-rightists they protest. No.8716 >>8715 >I see nothing wrong with people heckling and protesting speeches by those who seek to legitimize racist and fascist discourse. I don't see anything wrong with it either, really. But do you think there is a distinction to be made between heckling, protesting and challenging a conservative speaker, and attempting to completely deny them a platform under the threat of violence? How do you respond to liberals who believe all opinions ought to be allowed to be expressed, without the threat of violence or state censorship? There are different arguments for this view but a common one is that only through debate can we collectively get closer to truth, and therefore open discussion is good for society. Bad ideas will automatically be exposed in this process. Further, restricting freedom of speech is viewed as a slippery slope (who gets to decide what is and is not acceptable speech, etc). No.8717 >>8716 >But do you think there is a distinction to be made between heckling, protesting and challenging a conservative speaker, and attempting to completely deny them a platform under the threat of violence? "Completely denying them a platform" is simply what happens when the former methods are successful. 
>How do you respond to liberals who believe all opinions ought to be allowed to be expressed, without the threat of violence or state censorship? It's an idealist argument which assumes that fascist sentiments rise purely due to the strength of their arguments and not because of the inherent problems of capitalism and the willingness of sections of the capitalist class to bankroll fascist individuals and groups. To quote historian E.H. Carr: >The democrat who holds that democracy requires equal toleration for opinions hostile to democracy, cannot even believe in democracy as an absolute value, being bound to accept its abrogation as valid if the majority will it. It need hardly be said that the whole of this thesis is anathema to Soviet democracy, which regards the toleration shown by English-speaking democrats to fascists as a symptom of weakness and of faltering faith in democracy. And a Sartre quote that is quite famous nowadays: >Never believe that anti-Semites are completely unaware of the absurdity of their replies. They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. The anti-Semites have the right to play. They even like to play with discourse for, by giving ridiculous reasons, they discredit the seriousness of their interlocutors. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert. If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past. Fascism has already been exposed as a "bad idea." The purpose of Spencer and other reactionaries speaking at colleges and other public places isn't to rationally discuss the subject of Jews or white people or whatever, it's to rally support for fascist sentiment. 
They have a political purpose in doing these things, which is why they whine about "freedom of speech" afterward. >Further, restricting freedom of speech is viewed as a slippery slope (who gets to decide what is and is not acceptable speech, etc). Switzerland, Germany, Britain, and some other bourgeois democracies penalize hate speech. Liberals do not complain about the suppression of "freedom of speech" in these countries. No.8721 Where did the Soviet Union go wrong? No.8722 >>8721 I don't think it ever "went wrong" until Gorbachev. Obviously there were plenty of things that should have been done differently during the years 1924-85, but had the USSR been headed by Grigory Romanov, Viktor Grishin or Yegor Ligachev it's quite possible the country would still be around. No.8723 What are your criticisms of Deng's foreign policy, why did he do what he did, and how has China's foreign policy changed since then? No.8728 >>8723 When Deng first became leader, China's foreign policy was a continuation of Hua's: extreme anti-Sovietism, to the extent Deng was calling for the US to join China in an alliance against "hegemonism" (i.e. the USSR.) But after 1982 things began to change: the USSR no longer was referred to as having restored capitalism, relations with the MPLA government in Angola were normalized (whereas before China supported the South African-backed UNITA and Zairian-backed FNLA), etc. But throughout the 80s China still continued a basically anti-Soviet foreign policy, such as working with the CIA to help the Mujahideen and Pol Pot. When the USSR withdrew troops from Afghanistan and Vietnam withdrew troops from Cambodia, Sino-Soviet relations began to improve, but by that point the USSR was basically doomed. The Chinese leadership from the 1950s till the end of the 80s saw the USSR as an ideological and military rival. In the 50s and 60s there were constant fears that the Soviets would "sell out" China's security in favor of a US-USSR alliance.
Border disputes between the USSR and China led to actual fighting by 1969. By 1990 the Chinese realized that the USSR did not pose such a threat, and that it was preferable to unite the socialist countries rather than divide them. But again, by that point the USSR was effectively doomed by Gorby's policies. >and how has China's foreign policy changed since then? In general, China tries not to offend the US too much, e.g. during the Gulf War China refused to veto US military action in the UN Security Council, and in more recent years it has agreed to put pressure on the DPRK (but not so much as to seriously endanger relations between China and that country.) No.8729 What do you think of "decolonization" movements in the US? Are they a necessary part of a communist movement here? Can they even truly be communist (since "decolonization" suggests a return to various systems which could very well be anti-communist)? No.8730 >>8729 Do you mean the idea that the US should be dismantled and all (or the vast majority of) the land should be "given back" to indigenous peoples? I think the idea is wrongheaded, and akin to "Third Worldists" who argue that revolutionaries from the third world should occupy the US and place the majority of its inhabitants in concentration camps in order to "reeducate" them and get them used to a lower standard of living. You will find nothing in the writings of Marx, Engels or Lenin, or in the writings of American Marxists, suggesting such an idea. It's more akin to bourgeois nationalism, e.g. "because we held this territory hundreds of years ago, we deserve to have it back via military force for this reason alone." To get an idea of the communist approach toward indigenous peoples in the US, check this out: https://archive.org/details/ResolutionOnThePathToNativeAmericanLiberationAdoptedAtThe22nd No.8731 >>8730 That's what I meant, yes. 
I mean, I personally wouldn't mind living in a Native nation, but it doesn't at all seem practical to me to give, say, New York back to the Lenape when even if every Lenape returned to their ancestral homeland they'd still be a tiny minority and the new "Lenape Nation" would be overwhelmingly non-Native. Thanks for the link, I'll check it out. No.8733 File: 9ffcfb7df55215f⋯.jpg (99.83 KB, 748x651, 748:651, 9ffcfb7df55215f5c2a4df272a….jpg) No.8741 >>8653 Not him, but could you maybe give a very short rundown? And what do you think of it? No.8745 >>8731 Yeah, most indigenous tribes in the US only number in the hundreds or thousands. The whole "decolonize the US" stuff is more a fantasy than a realistic (or even desirable) goal. >>8733 The changes reflect existing conditions in the country. "Private property" in the sense of self-employed hairdressers and auto mechanics, or tiny businesses like restaurants that employ around three employees, has been in Cuba for decades now. So it's not surprising. No.8747 >>8745 Is your opinion on Cuba similar to your opinion on China? Do you think they are temporarily liberalising with the intent of returning to socialism later? No.8748 >>8747 The market reforms are based on Cuba's difficult economic situation since the overthrow of the USSR, so yes. I don't think it's a question of "returning" to socialism; they already have a socialist economy. Self-employed hairdressers and foreign-run hotels for tourists do not change that. No.8751 I'm not an American, but I follow American politics pretty closely. It's becoming increasingly clear that socialist and social democratic policies are actually very popular, and that socialist and "progressive" candidates can win against mainstream Democrats despite not having nearly as much money or the Democratic establishment machinery behind them. There is a huge potential for a strong new left-wing movement. If you're "just" a social democrat, the strategy seems obvious.
Run candidates within the Democratic Party who support single-payer healthcare, free higher education, and a higher minimum wage and who don't accept money from corporations; work with groups like the DSA, Justice Democrats and Our Revolution; and attempt a hostile takeover of the Democratic Party. This is already being done and it seems to be working pretty well. It wouldn't at all surprise me if they succeeded in pushing the Democratic Party mainstream to the left and got them to go back to New Deal style politics. But if you want to go beyond social democracy, to socialism, it's more complicated. What strategy should parties like the PSL, WWP, and maybe the left wing of the DSA have? Obviously, the popularity of single-payer healthcare and free higher education should be capitalized upon, but, for example, what kind of attitude should the socialist left have towards candidates like Bernie Sanders and Alexandria Ocasio-Cortez? I feel like this is a once-in-100-years opportunity for the socialist left to build a new movement and actually accomplish something, and I hope it is not wasted. No.8756 >>8751 Sanders and Cortez, while "better" than the likes of the Clintons, the Bushes, Trump, etc., are not serious alternatives to the growth of the ultra-right. Sanders himself endorsed the bombing of Yugoslavia, the Iraq Liberation Act of 1998 (which gave the US government a legal figleaf to overthrow Saddam), and referred to Hugo Chávez as a "dead communist dictator." The main function of figures like Sanders and Cortez is to rope people on the left into the arms of the Democratic Party. This also applies to the DSA, whose founder (Michael Harrington) was at first reluctant to oppose the Vietnam War because of the need to "fight communism" abroad. Communists should get involved in election campaigns where it would not exhaust their organizational and financial capabilities, and they should run as communists. The Bernie phenomenon isn't exactly new.
Jesse Jackson's Democratic primary campaigns in 1984 and 88 were similar in many ways, including how many leftists were roped into backing him as a "progressive" alternative. Jackson has long since shown himself nothing more than a vote-getter for the Democratic Party. Likewise in 1968 the CPUSA and some other left groups rallied around Senator Eugene McCarthy as an "anti-war" candidate. In 1980 he endorsed Ronald Reagan for President. And even in 1948, the CPUSA heavily backed the third-party run of Henry Wallace, Roosevelt's first Vice-President, against the Democrats and Republicans, yet that same Wallace a few years later dropped his opposition to the Cold War and endorsed Eisenhower for President. All this shows the futility of relying on bourgeois politicians for progress. If you want to move these politicians "to the left," you have to do it by pressuring them via running independent, class-conscious candidates for office, not by endorsing the likes of Sanders and Cortez as "lesser evils" or pretending they're great progressive figures. No.8758 >>8756 >Sanders and Cortez, while "better" than the likes of the Clintons, the Bushes, Trump, etc. Don't know what the quotation marks are supposed to mean. Someone like Bernie is obviously much better than someone like Trump. Think about the difference Bernie's policies on healthcare and education would make for millions of americans, for example. >Sanders and Cortez [...] are not serious alternatives to the growth of the ultra-right. Sanders himself endorsed the bombing of Yugoslavia, the Iraq Liberation Act of 1998 (which gave the US government a legal figleaf to overthrow Saddam), and referred to Hugo Chávez as a "dead communist dictator." >all that other shit about left democrats not being reliable allies of the left I completely agree but I feel like you're completely missing the point of my question. 
The popularity of Sanders (I've heard he's the most popular politician in the country right now) and Cortez, among other things, shows that americans are open to a left wing alternative to the status quo. What should the socialist and communist left be doing right now, in order to capitalize on this moment, and not let it be wasted on just electing a few social democratic politicians that will eventually be purged or start compromising? The answer can't be to join some tiny cult/party and sell a newspaper no one reads while smugly denouncing Bernie supporters for not being revolutionary enough. No.8759 >>8758 >Don't know what the quotation marks are supposed to mean. They're still bourgeois politicians. They aren't socialists. >What should the socialist and communist left be doing right now? The answer can't be to join some tiny cult/party and sell a newspaper no one reads while smugly denouncing Bernie supporters for not being revolutionary enough. As I said, in the field of elections communists ought to run as independent candidates, at least for local office. But obviously just running for office isn't sufficient. Communists also have to get involved in local and national causes (trade union organizing, opposition to police brutality and racism, etc.) The point I was making is that any sort of "new movement" involving the DSA or right-wing social-democrats like Sanders is doomed to failure. Real unity will come about in the course of struggle, not in the course of trying to unite with whomever you happen to find willing to "unite" with you for their own ends. No.8786 What's the ML view on Karl Korsch? No.8787 >>8786 The ML view is that he sucked. Obviously that isn't a very satisfying answer. There is a book International Publishers put out in 1975 titled "Marxism and The Theory of Praxis" which contains a critique of Korsch and other "Western Marxists," but it isn't online. I could obtain it in a few months and scan it. 
No.8790 File: 5ccbdf8256a7ced⋯.jpg (125.67 KB, 864x368, 54:23, Screenshot_20180717-225724….jpg) In that other /pol/ thread on this board (which I don't want to bump so I'm asking this question here), you posted this. My question is what you think Kurdish autonomy should look like. As it stands, in three of the countries in which Kurds reside (Syria, Iraq, and Iran), there is not, as far as I know, serious repression of Kurds today. Obviously that wasn't always true (especially in Iraq), and those communities might still have some problems (such as a lack of official recognition of their languages), but it's certainly not the ethnic violence you saw in Iraq under Saddam or in Turkey for decades. So, what "autonomy" do you believe Kurds should have? Autonomous regions like in Iraq (though presumably not beholden to the US)? Independent states? Also, historically, what has been the position of communists (both in those countries and elsewhere) regarding Kurds? I know the Syrian Communist Party was founded by a Kurd and had significant Kurdish membership, but I don't know what their position on Kurds as a people was; same goes for communists in Iran and Iraq (and, for that matter, what exactly the PKK desired back when they were still communists). No.8791 >>8790 The autonomy proposed for Iraqi Kurdistan in the early-mid 70s was actually a pretty good example. To quote one author (Saad Jawad, "The Kurdish question in Iraq: historical background and future settlement"): >Most important is the fact that the Baath Party undertook a serious consideration of the Kurdish question in 1969 following demands by the Iraqi public to find a peaceful solution to the conflict. . . [the Ba'athists established] a Kurdish university in Sulaimaniya and the creation of a third Kurdish province, Dohuk, made up of parts of Erbil and Mosul. . . approval was granted for the teaching of the Kurdish language alongside Arabic in all Iraqi schools. 
Schools in Kurdish regions were given the right to use Kurdish as a language of instruction up to a specific stage, after which they would also have to use Arabic. . . >The Autonomy Law of 1974 allowed the creation of two Chambers of Parliament, legislative and executive, made up of ‘elected representatives’, in addition to the participation of the Kurds in the central government of Baghdad with five ministers (there had previously been two). After March 1970, the KDP was licensed to operate openly and was given the right to open several branches all over Iraq. Large numbers of Kurdish students were admitted to Iraq’s military academies and part of the Kurdish militia (the Peshmerga) was merged with the Iraqi armed forces to act as frontier guards. It was also agreed that an Iraqi Kurd would be appointed vice president of the republic. If the Ba'ath had consistently implemented such measures (which the author notes "Kurds in neighboring countries would envy") rather than undermining them in practice, and had the KDP not continued its contacts with the US, Iran and Israel and not treated Iraqi Kurdistan more as an independent state than an autonomous region, there would have been the possibility of a durable autonomy for Iraqi Kurds and much suffering averted. I know the Iraqi Communist Party advocated Kurdish autonomy within Iraq and backed the 1974 law mentioned above. I don't know the position on Kurds taken by Tudeh and the parties in Syria and Turkey (except that the Turkish Communists also opposed an independent Kurdistan.) No.8792 What should be done about the conflict between China and Taiwan at this point? I actually lived with one taiwanese exchange student and one chinese exchange student for a year (I recently moved out and they went back home). Both of them view Taiwan as its own thing.
After hanging out with them and their other chinese and taiwanese exchange student friends, it also seems that Taiwan has started to develop its own culture, distinct from mainland chinese culture, and the taiwanese have their ideas of what chinese people are like, and vice versa. I'm sure the older generation in both countries think that their own government should rule over both mainland China and Taiwan, but young people seem to view them as two separate countries with their own governments and, while similar, unique cultures. I think China should just recognize Taiwan as its own independent country and move on tbh, but I'm not an expert on this dispute in the first place so... No.8793 >>8792 The PRC's position is that Taiwan ought to function as a highly autonomous region of China, similar to Hong Kong and Macau. I agree with the PRC position. There is no reason for Taiwan to be independent. Its existence was the result of Chiang Kai-shek fleeing to a bunch of islands with the support of the US army. There is an aboriginal Taiwanese population that Chiang treated terribly, but other than that the population is Chinese. Germans in West Germany and the GDR obviously lived quite differently by 1989, and yet events proved they were component parts of a single German nation (despite efforts by the GDR to create a separate socialist national identity.) Likewise Koreans North and South recognize they are part of a single nation. No.8794 >>8793 >I agree with the PRC position. Is there any question on which you disagree with the official CPC line? No.8795 >>8794 I don't think there's a "superior" communist position on that particular subject, unless invading Taiwan counts. If there's a position the CPC takes that I consider wrong, I'd say so.
If I were alive in earlier periods I would have found a lot to criticize as well: anti-Sovietism in China's foreign policy, the ultra-leftism of the Great Leap Forward and Cultural Revolution, not using its Security Council powers to veto US aggression against Iraq in 1990-91, etc. No.8799 Do you have any info on the Young Patriots? How did communists get away with using the Confederate flag as their symbol? No.8800 >>8799 I don't think they were a communist group, they were just Southern whites who used the Confederate flag in a sort of ironic, "defiant" way (think of how homosexuals tried to reclaim the words "queer" and "faggot") as they worked with the Black Panthers and other anti-racist groups. Historically, America's Marxists backed the GOP in the 1850s-60s, fought for the Union Army, and Joseph Weydemeyer (foremost Marxist in the US back then) even got elected county auditor of St. Louis in 1865 on a Republican ticket and backed Reconstruction. So it would indeed be a bit strange to see communists using the Confederate flag. No.8801 What are your general thoughts on "identity politics"? How would you define that term, what are the positives/negatives of idpol, how do we critique idpol from the left without becoming crypto-reactionaries, etc? No.8802 >>8801 I don't really get what "identity politics" is. If by the term one means placing issues like racism or patriarchy "above" class, in the sense that the "main task" is to abolish racism and whatnot, then that's erroneous since it creates illusions that such problems can be eliminated under capitalism, and ends up promoting class collaboration (e.g. women supporting female CEOs or female bourgeois politicians because the "main task" is to fight misogyny.)
On the other hand, one of the major errors of the American Marxist movement in the 19th and early 20th century was the idea that Black civil rights, women's suffrage, and whatnot were "distractions" from the class struggle and that Marxist parties should have nothing to do with them. This led to conciliating racist sentiments, promoting opportunism (e.g. Hillquit and other right-wing Socialists advocated restricting Asian immigration in order to win more votes in elections), and promoting sectarianism (e.g. Daniel De Leon basically blamed Black people for not moving en masse from the Republican Party to the Socialist Labor Party because the latter refused to have anything to do with anti-lynching legislation and other supposed "diversions" from the class struggle.) As Marx, Engels and Lenin pointed out, democratic demands are a component part of the struggle for socialism, and this is a lesson the Comintern taught to the American Communist movement. So just as one shouldn't put race or sex "above" class, one also shouldn't neglect to rally all democratic forces and explain the relationship between racism and capitalism, patriarchal attitudes and class society, etc. No.8803 Another China question: My father is an intelligent, empathic and just generally very reasonable and open minded person, but he is such a (european style) liberal. Lately he seems really spooked about China. Basically he believes China is going to continue to develop their economy and peacefully expand their influence in the world while simultaneously slowly imposing their totalitarian undemocratic style of government upon us (us=europeans). He basically views the rise of China and the decline of the USA and EU as the slow fall of democracy, and that it is our own fault for trading with China, letting them invest in our countries and just letting them buy more and more influence. 
I disagree with your views on China, Ismail (I'm the asshole constantly asking China questions and arguing with you about it) but I think he's being too paranoid on this issue and I also think there happens to be a lot of people who feel the same way. What would you say to or argue with people with views like this? I've pointed out how western/american hegemony is great for small european countries that suck up to the US (we are allowed to have democracy and freedom of speech and labour rights and everything) but totally fucking disastrous for countries that attempt to go against the interests of american imperialism. So the future he imagines, where a foreign power imposes its will upon smaller nations with complete brutality and disregard for human rights, already exists and has existed for a long time. We have just not been the victims of it, yet. No.8804 >>8802 >I don't really get what "identity politics" is. If by the term one means placing issues like racism or patriarchy "above" class, in the sense that the "main task" is to abolish racism and whatnot, then that's erroneous since it creates illusions that such problems can be eliminated under capitalism, and ends up promoting class collaboration (e.g. women supporting female CEOs or female bourgeois politicians because the "main task" is to fight misogyny.) This is generally what is meant by identity politics I think, and I agree it's bad. Another aspect of identity politics is (unintentionally) accepting a sort of race/gender "essentialism" (not sure if that's the correct word for it) similar to that of actual racists. "Cultural appropriation" is a good example of this. Some anti-racists (irl it's probably a tiny group of people tbh) claim that it's racist, problematic or wrong for white westerners to practice yoga as a form of exercise instead of a religious practice, or the same for white people to have dreadlocks, because those things "belong" to other cultures.
Not only is this stuff kinda silly and inconsequential, but it seems like in order to accuse people of cultural appropriation in this way, you would have to view humanity as made up of different races/nationalities/"volk" that all have their own unique cultures that are separate and sort of static and frozen in time, and which should not be mixed with each other. I dunno, maybe I'm reaching, but I really do get the vibe that some of these idpol people have a fundamental view of humanity that is not all too different from that of a far right nationalist. But the biggest problem with identity politics imo usually isn't the content of it, but how it's used. Idpol-focused language policing (you can't say idiot that's ableist, etc) is used opportunistically all the time in stupid twitter fights between communists and in DSA in-fighting. No.8805 >>8804 Oh just thought of another example of strange idpol-based analysis. There was a swedish lefty who in an article argued that all oppression throughout history has been rooted in the social norms that you're "supposed" to be white, male, heterosexual, cis and able-bodied. A swedish Marxist wrote a response in which he asked her to explain how slavery in the Aztec empire was rooted in these social norms. I don't remember specifically what her response was, but she stuck to her premise. No.8806 >>8803 If countries around the world are being drawn toward a "totalitarian undemocratic style of government," he should ask himself why bourgeois democracies are so vulnerable to open dictatorships. But yeah I don't see any evidence of European countries trying to emulate China. No.8807 Do you have a text about how communists got slaughtered in Indonesia in the 60s? How high would you estimate the number of victims?
And also, why the fuck hasn't this gotten more attention? This sounds like a horrifying event. No.8809 >>8807 See chapter 31 of "Killing Hope" by William Blum: http://b-ok.xyz/book/887149/c061ac No.8810 Was Nazi Germany (at least in the beginning) anti-imperialist? Why not? No.8811 >>8810 No. The Nazis rebuilt the German economy around preparing for a war of conquest across Europe. Hitler also sought to regain Germany's African colonies lost as a result of WWI. No.8812 >>8741 To summarise it very briefly and in a very simplified manner, he says that Marx's focus on the economic base was historically conditioned by the times he lived in and different societies put different emphasis on the economy, so they were "in the last instance" not determined by the economic base, but by different factors. He also makes some points regarding Marx' economics that I don't remember too clearly. I personally think his critique of Marx' philosophy puts too much emphasis on a given society's self-conception and fails to see that the foundation of every conceivable society is the reproduction of their conditions of living, no matter what they may think of themselves. He certainly has a point though when he says that Marx' thought itself was conceived within a given historical situation and as such some elements of his philosophy bear the prejudices of the 19th century. No.8813 >>8811 What does that have to do with imperialism? Imperialism is about concentration and export of capital. I'd say by '38 the concentration of capital and the finance sector in Germany were sufficiently big to call the country imperialist, but I'm wondering about before, with much of the industry destroyed by the war, reparations being paid, etc. No.8814 >>8813 You asked if Nazi Germany was anti-imperialist, i.e. if its foreign policy was opposed to imperialism (since "anti-imperialism" is not an economic category.) And I gave my answer. No.8815 >>8814 And I clarified my question.
Was Germany in the early 30s an imperialist country? No.8816 >>8815 That I can't answer, but acting as if I don't know what imperialism is after asking a blatantly unclear question is dumb. No.8819 >>8619 Is there another article with sources available? Links are dead in the one you provided. Or maybe you know the articles they referred to? No.8820 >>8819 I forgot to check the endnotes. Oops... No.8822 >>8805 Could you link me to the article and the response? No.8823 File: 343e0a900285da3⋯.jpg (42.02 KB, 720x960, 3:4, 14910386_330877170601702_6….jpg) >>8822 Aleksa Lundberg writes [1]: >In my own words I would describe [intersectionality] as the idea that all oppression stems from the social norm of the human as a white, heterosexual, cis-gender, wealthy and able-bodied man. To be fair, she didn't say all oppression throughout history. It's still a very idealist and inadequate theory for explaining all oppression. Malcolm Kyeyune, a self-described marxist (black guy btw) quotes her in an article where he criticizes swedish anti-racism [2]: >When grown-ups like Aleksa Lundberg at SVT Opinion can claim "[the quote above]", I don't know if I'm supposed to laugh or cry. >Really? When the aztecs sacrificed tens of thousands of prisoners at the pyramid in Tenochtitlan there was some insidious european social norm to blame? She responded [3]: >Kyeyune's comparison to the aztecs who sacrificed tens of thousands of prisoners is a bit strange - considering the aztec empire was toppled by spaniards in 1521. I thought it was funny that, instead of just saying that social norms of being white/cis/straight/etc were not the cause of oppression at that time in that place (in other words, that her theory isn't meant to explain all oppression throughout the entire human history, but oppression in a specific historical period), she went with "spaniards overthrew the aztecs and spaniards=white sooooo".
No.8830 >>8811 Not the same guy, but what about the negotiations for the ending of the de facto Franco-Belgian occupation of the Rhine industrial area? What about the negotiations for the end of the French occupation of the Saarland? And off topic, but do you believe the Nazi claim that the Treaty of Versailles was imperialist / slanted against Germany? No.8831 >>8830 The Bolsheviks denounced the Versailles Treaty in the strongest terms, which is why the USSR helped Germany covertly rebuild its army during the 1920s. The KPD likewise denounced the occupation of German territory by the victorious imperialist powers after WWI. But Hitler cynically took advantage of the injustice of Versailles to build up German imperialism anew. Litvinov, People's Commissar for Foreign Affairs, said the following in 1938: "Germany is striving not only for the restoration of the rights trampled underfoot by the Versailles treaty, not only for the restoration of its pre-war boundaries, but is building its foreign policy on unlimited aggression, even going so far as to talk of subjecting to the so-called German race all other races and peoples. It is conducting an open, rabid, anti-Semitic policy, suspiciously recalling those times when the Teutonic Order held sway in the Baltic countries, and publicly abandons itself to dreams of conquering the Ukraine and even the Urals. And who knows what other dreams? . . . . Whether the order established after the world war is better or worse than the pre-war, we at any rate are against changing it by a new, bloody war. Besides, there are no guarantees at all that war can bring a better, more just order. It must not be forgotten that the Brest-Litovsk peace, the product of the Kaiser's Germany, was no better than that of Versailles. Still less can one expect a just peace from Hitlerite Germany, with its medieval racial theories, its anti-humanism, its belief in crude material force as the supreme law."
(quoted in Jane Degras, Soviet Documents on Foreign Policy Vol. III, 1953, p. 287.) No.8833 Thoughts on the historian Moshe Lewin? No.8836 >>8833 His book "Lenin's Last Struggle" is considered the standard text on the subject of relations between Lenin, Stalin and Trotsky in Lenin's final years: http://b-ok.xyz/book/1007059/518be9 "The Making of the Soviet System" and "Russia-USSR-Russia" contain useful information on collectivization and the initial Five-Year plans. He was critical of the USSR and called himself an "anti-Stalinist," but he wasn't a Cold Warrior. No.8847 What does the trade relationship between an imperialist country and an "exploited" country usually look like? Usually when I think of imperialism it's just the military aspect and export of capital, but does the imperialist country also use its power to ensure the exploited country is dependent on importing lots of commodities from the imperialist country? No.8848 >>8847 >does the imperialist country also use its power to ensure the exploited country is dependent on importing lots of commodities from the imperialist country? Yes, insofar as imperialism is characterized in part by capitalist monopolies being obliged to search for new markets to sell products in. Here's a decent intro to the subject of imperialism: https://archive.org/stream/IntroductionMarxism/Introduction%20Marxism#page/n18/mode/1up (pages 39-53 of the book) And here's a whole book on the subject: https://archive.org/details/LogicOfImperialism No.8849 >>8848 thanks i love you No.8881 File: 113087b73bd21a3⋯.jpg (27.35 KB, 220x335, 44:67, 212444.jpg) >>8831 SO WAIT SO WAIT SO YOU BE TELLING ME... NazBols WERE RIGHT THE WHOLE TIME? No.8882 >>8881 No. While the Bolsheviks opposed the Versailles Treaty, they also denounced the idea of German Communists making common cause with the most reactionary elements of the right and subordinating the interests of the proletariat to the interests of bourgeois nationalism.
To quote Lenin: >It is not enough, under the present conditions of the international proletarian revolution, to repudiate the preposterous absurdities of “National Bolshevism” (Laufenberg and others), which has gone to the length of advocating a bloc with the German bourgeoisie for a war against the Entente. One must realise that it is utterly false tactics to refuse to admit that a Soviet Germany (if a German Soviet republic were soon to arise) would have to recognise the Treaty of Versailles for a time, and to submit to it. . . . >The Soviet revolution in Germany will strengthen the international Soviet movement, which is the strongest bulwark (and the only reliable, invincible and world-wide bulwark) against the Treaty of Versailles and against international imperialism in general. To give absolute, categorical and immediate precedence to liberation from the Treaty of Versailles and to give it precedence over the question of liberating other countries oppressed by imperialism, from the yoke of imperialism, is philistine nationalism (worthy of the Kautskys, the Hilferdings, the Otto Bauers and Co.), not revolutionary internationalism. The overthrow of the bourgeoisie in any of the large European countries, including Germany, would be such a gain for the international revolution that, for its sake, one can, and if necessary should, tolerate a more prolonged existence of the Treaty of Versailles. If Russia, standing alone, could endure the Treaty of Brest-Litovsk for several months, to the advantage of the revolution, there is nothing impossible in a Soviet Germany, allied with Soviet Russia, enduring the existence of the Treaty of Versailles for a longer period, to the advantage of the revolution. >The imperialists of France, Britain, etc., are trying to provoke and ensnare the German Communists: “Say that you will not sign the Treaty of Versailles!” they urge. . . 
To accept battle at a time when it is obviously advantageous to the enemy, but not to us, is criminal; political leaders of the revolutionary class are absolutely useless if they are incapable of “changing tack, or offering conciliation and compromise” in order to take evasive action in a patently disadvantageous battle. No.8885 What do you think about the JCP (Japanese commie party)? Considering it's (don't quote me on this) the second largest commie party behind the KPRF IIRC? What were the main reasons the JCP split from most of the rest of international socialism after WW2? No.8886 >>8885 From what I've heard, the JCP is consistent in opposing Japanese militarism, that alone makes it at least somewhat laudable. If I recall right, the JCP took a neutral stand during the Sino-Soviet split, then in the 1970s-80s it was heavily influenced by Eurocommunism, which would account for its greater independence compared to most other "official" communist parties. But on the other hand Eurocommunism was a revisionist trend that supported reformism and criticized the USSR from the right (for being "dogmatic," "authoritarian," etc.) No.8888 File: 14ac7a10fbc446f⋯.jpg (331.75 KB, 1557x2400, 519:800, 81g ZYbkk3L.jpg) File: 723d5d18249c260⋯.jpg (921.82 KB, 1650x2475, 2:3, 91l4vjJjS6L.jpg) File: 66cc71f21fa2abd⋯.jpg (359 KB, 1556x2406, 778:1203, 81fTR6EonvL.jpg) Oi, Supreme Ismail, are these two books any good? I want to know more about Stalin's personal stuff. The title is anti-communist as fuck and I know that Montefiore is a bourgeois historian but it's at least Kotkin tier (tolerable and at least reputable)? I've read Montefiore's work on the Romanovs and it was pretty good. Also have you read this Secondhand Time? What did you think about it? No.8889 >>8888 Grover Furr wrote that Young Stalin is a good read, so long as one ignores Montefiore's anti-communist interpretations and focuses simply on his narrative of Stalin's early life.
And I'd agree. So yeah, feel free to check it out. I've never read the other Montefiore book you mention, but he does seem like a type of historian who likes to wow audiences with "revelations" (like that Stalin supposedly wanted to have John Wayne assassinated), which isn't good. That being said, I'm sure he doesn't spend hundreds of pages just constantly lying out his ass, so if any specific claim comes up, let me know. >but it's at least Kotkin tier (tolerable and at least reputable)? Well the thing with Kotkin is that he's trying to draw a "balanced" portrayal of Stalin and a fair bit of the text in his bios isn't even about Stalin but rather the historical context in which he lived. Montefiore (in "Court of the Red Tsar") seems like he'd be interested more in "look at this bad stuff Stalin did" or "look at the intrigue between his minions" and whatnot. If you want to read up on Stalin, getting "Court of the Red Tsar" can't hurt so long as it's not your one and only source. No.8891 No.8892 Do you have a list of links of all the stuff you've scanned anywhere? No.8895 >>8892 Not every book there was scanned by me (a bunch are stuff I asked other people if they could scan.) >>8891 Enter the following two URLs into sci-hub.tw to obtain them, they should both help explain the economic and political factors that led to the demise of the socialist orientation of the Malagasy government (which was always vague and had an ambiguous relationship to Marxism): No.8898 What are your solicited opinions on A. Tito :D and market socialism (not to be confused with the social market economy) B. the Budapest Uprising C. the Prague Spring? Also did Socialist Afghanistan actually implement that many socialist reforms and if so in what ways? No.8900 >>8898 A. Tito's foreign policy could be bad (particularly when Stalin was alive and you had Tito blaming the USSR for the Cold War, saying Stalin's policies were akin to Hitler's, etc.), but it had its positive moments (e.g.
for a time in the early-mid 70s the MPLA in Angola was largely reliant on Yugoslav arms since the Soviets stopped supplying them, and the Non-Aligned Movement was generally a good thing in opposing imperialism and colonialism.) The Yugoslav system of workers' self-management, while giving workers more influence in individual enterprises, was beset with its own problems (such as unemployment), see chapter 6 of this work: https://archive.org/details/ClassStruggleInSocialistPoland The law of value exists under socialism, so every socialist economy will have a "market" of some kind. The issue is how extensive market relationships should be and when to expand or restrict them. The Hungarian uprising was a counter-revolutionary revolt replete with the tearing down of Lenin statues and other communist symbols as well as pogroms. Here is a good read on the subject: https://espressostalinist.files.wordpress.com/2010/12/the-truth-about-hungary.pdf Dubček was a proto-Gorbachev, who was allowing a counter-revolutionary situation to develop in his country and did nothing to stop it despite repeated warnings from the USSR, GDR and Poland. And like Gorby, Dubček ended up an open social-democrat who regarded Sweden as a great example of "socialism." For good accounts of Czechoslovakia in 1968 see: * https://archive.org/details/IsTheRedFlagFlying (chapter 7, which also discusses Hungary.) >Also did Socialist Afghanistan actually implement that many socialist reforms and if so in what ways? The People's Democratic Party argued that what was happening in Afghanistan was a national-democratic revolution to uproot centuries of feudalism and lay the foundations for socialism. Therefore the focus of the government was on land reform, expanding access to education (along with getting women to attend schools alongside men), developing trade unions, restricting foreign control over the economy, and other basic tasks. 
A lot of reforms were only partially implemented, especially in the countryside, due to the Mujahideen occupying large parts of the country and the government trying after 1980 to win over as many Afghans as possible to its side. This process was accelerated when Babrak Karmal was replaced in 1986 by Najibullah, who downplayed Marxism, prayed during speeches, and urged Mujahideen supporters to lay down their arms in return for a role in a government of national reconciliation. By the time the government was overthrown in 1992 the party had been moving towards social-democratic politics. No.8908 Hello Ismail, I have a couple questions: 1) Very much a noob question, but how would you reply to the common counter-argument to the LTV where someone finds a diamond, which supposedly disproves that value is determined by productive labor time? Is the correct rebuttal of this to say that finding a diamond is unlikely to happen and therefore it is irrelevant? I'm not sure I really understand the LTV correctly to be honest. 2) I just came across >>8548 and thought this question merited an answer. How is the idea of a "bureaucratic bourgeoisie" something that applied to Libya but not to the USSR? Where do you draw the line between a country with corrupt bureaucrats that is still socialist, and a country endangered by such a bureaucratic bourgeoisie?
>>8908 >but how would you reply to the common counter-argument to the LTV where someone finds a diamond, which supposedly disproves that value is determined by productive labor time? The labor theory of value applies to production for the market. Someone finding a diamond and deciding to sell it in a market is generally going to give it the same (or similar) price as any other diamond recently extracted from the earth. >I'm not sure I really understand the LTV correctly to be honest. >How is the idea of a "bureaucratic bourgeoisie" something that applied to Libya but not to the USSR? The term itself is imprecise, insofar as state employees in Libya weren't literally bourgeois (unless they also happened to own capitalist enterprises or something.) The "bureaucratic bourgeoisie" (in Soviet terminology) either sought the restoration of inflated, colonial-era privileges for government officials, or already had inflated privileges, and thus had a vested interest in resisting policies designed to help lay the foundations for socialism. This layer would readily support a reactionary military coup against a weak, ex-colonial government oriented toward socialism. It didn't refer to countries that had built socialism (the USSR argued no such countries existed in Africa or Asia.) It was meant to explain why there would be active resistance to the non-capitalist path of development (such as restricting the sphere in which foreign capital and domestic capital could operate) in African and Asian countries where a capitalist class barely existed. Soviet theorists argued that there was no such privileged stratum of government employees in the USSR, which is untrue. 
But on the other hand bureaucratic misdeeds in the USSR were based on taking advantage of an existing socialist economy, rather than "these privileged members of the bureaucracy, or holdovers from the colonial period envious for a return to their former privileges, are trying to sabotage efforts to pursue a socialist road in [insert ex-colonial country here] and will conspire with elements of the military and/or local capitalists to overthrow the government." By contrast, the danger of members of the bureaucracy in the USSR working with the military to overthrow the government in order to restore capitalism was basically nonexistent. When the August coup happened, it was aimed *against* capitalist restoration and the breakup of the Soviet Union. No.8913 >>8912 What's the best you have against the "bureaucratic bourgeoisie" or "bureaucratic capitalism" argument of Hoxhaists and Maoists against the USSR? Not only Red Flag but also other stuff.
In general, the greater the productiveness of labour, the less is the labour time required for the production of an article, the less is the amount of labour crystallised in that article, and the less is its value; and vice versâ, the less the productiveness of labour, the greater is the labour time required for the production of an article, and the greater is its value. The value of a commodity, therefore, varies directly as the quantity, and inversely as the productiveness, of the labour incorporated in it." https://www.marxists.org/archive/marx/works/1867-c1/ch01.htm I would answer this specific question by saying that the price of a commodity is going to be a reflection of its cost (in labor-time) which means that someone who finds a diamond could easily sell it far below the normal market price. But, if they sold it at the market price (even though it cost far less than normal) it wouldn't disprove the theory since Marx's LTV is flexible and is more concerned with the average price rather than each individual transaction. No.8915 >>8913 You mean "state-capitalism" (that's the term they use, not "bureaucratic bourgeoisie" or "bureaucratic capitalism.") It all boils down to the simple facts that: 1. There was no capitalist class in the USSR, and state officials are no substitute for capitalists. 2. The USSR after 1956 was not fundamentally different economically from the USSR beforehand. 3. The law of value did not determine production as it does under capitalism. That's why they either have to falsify reality, or invent different conceptions of socialism (e.g. claiming socialism isn't a mode of production and that the correct line of the vanguard determines whether a country is socialist or capitalist, as many Maoists do.) Szymanski's "Is the Red Flag Flying?" remains the best refutation of Maoist and Hoxhaist arguments. 
There are two other books worth checking on as well: * https://archive.org/details/TheMythOfCapitalismReborn (the authors take the position that the USSR was not socialist but still in transition to socialism, but otherwise their refutation of "state-capitalist" arguments is on point) * https://archive.org/details/SocialismInTheSovietUnionByJonathanAurthur (takes the position that it is impossible for capitalism to be restored in the USSR short of the annihilation of the human race, a position which the authors of the first book I linked to pointed out was silly, but otherwise it's a decent read, albeit using simpler arguments than Szymanski or the aforementioned authors) No.8919 Are you aware of Karl Popper's criticism of Marxism, that it is no longer a science because it is unfalsifiable? If he is correct in some manner, what would it take to make Marxism falsifiable again? I don't know the specifics, just going off of what little I've read. Is Marxism primarily a theory centered around technological determinism? If so, wouldn't it make the most sense to advocate for policies that accelerate technological advance, especially surrounding production? For example, social democratic policies such as wage increases and working hour reductions encouraging businesses to automate instead of hiring more expensive labor. Leading from that, in the (hypothetical) scenario that most or all physical labor is automated, would capitalism naturally dissolve/change? No.8920 >>8919 >Are you aware of Karl Popper's criticism of Marxism Yes. There have been many Marxist critiques of it, e.g. Hristos Verikukis' "Popper’s Double Standard of Scientificity in Criticizing Marxism" (which you can google to find a PDF of.) >Is Marxism primarily a theory centered around technological determinism? Technological advances help develop new modes of production. 
These new modes of production conflict with the existing state of affairs, thus leading to revolutions in which the ruling class is replaced with another. >If so, wouldn't it make the most sense to advocate for policies that accelerate technological advance No. Capitalism, as Marx noted, drastically increases the productive powers of mankind. But like slavery and feudalism it also has its limits. To break these limits (including limits on the rate of technological advance) requires putting an end to the rule of the capitalist class and establishing socialism, similar to how America's capitalist class ended up having to violently abolish slavery to remove the fetters on capitalist development in the US. >in the (hypothetical) scenario that most or all physical labor is automated, would capitalism naturally dissolve/change? Under capitalism automation is used by capitalists to get rid of workers and thus save money. But you will still need workers to create the automated machinery. So you still have the basis for class struggle between capitalists and workers. You might find this work useful for grasping the basics of Marxism: https://archive.org/details/IntroductionMarxism And for the Marxist understanding of economics: No.8934 >>8915 >That's why they either have to falsify reality, or invent different conceptions of socialism (e.g. claiming socialism isn't a mode of production and that the correct line of the vanguard determines whether a country is socialist or capitalist, as many Maoists do.) You have blown my mind, especially with that. That's exactly what happens! What would you say against that "correct line of the vanguard" stuff? No.8935 >>8934 That "Myth of Capitalism Reborn" book that I linked to (https://archive.org/details/TheMythOfCapitalismReborn) addresses it in Chapter V. I simply say that it's inconsistent with Marxism to attribute a country's economic system to whether the leaders of the country are "really" communists or not.
Gorby becoming General Secretary in 1985 did not suddenly mean the Soviet economy was capitalist, but the economic policies he enacted over the next six years did in fact lead to the restoration of capitalism. No.8936 >>8935 Thanks a lot! What are your thoughts about Cuba? A lot of Maoists are turning against it and calling it state-capitalism. I think they defended that position since the beginning but nvm No.8937 >>8936 Yeah the Chinese denounced Cuba from about 1967 onward, considering it "a pawn of Soviet social-imperialism." It's rubbish. Cuba's economy was no less socialist than the USSR, and it differed with the Soviets on various occasions (e.g. it sent troops to Angola without even notifying the USSR since it wasn't sure whether or not Brezhnev would agree to the decision, since it would doubtlessly make discussions about détente with the US more difficult.) No.8944 >>8937 Is there anything in Red Flag against that conception of "correct line of the vanguard"? No.8945 >>8944 I don't think so, since Szymanski's purpose is to debunk the idea that capitalism was restored in the USSR after Stalin's death by looking at how its economy and foreign policy functioned. As I said, the two authors of "The Myth of Capitalism Reborn" do a sufficient job debunking the whole "socialism is when the vanguard has the correct line" stuff. No.8946 What's China's policy on censorship of TV, internet and other forms of media? My Chinese friend said he needs to use VPN/proxy to use websites like Facebook, to watch certain movies and for porn. And there have been some articles lately reporting that they banned hip hop music and tattoos from TV for example. Is this really necessary? What's the purpose of banning tattoos from television? No.8947 >>8946 The Chinese government tries to promote positive values in society, and also tries to prevent the circulation of imperialist, separatist and terrorist (e.g. Uighur Islamist) propaganda.
There are old-fashioned officials who think hip hop music is "bad," and I assume tattoos are associated with street crime or organized crime. No.8950 >>8915 >(the authors take the position that the USSR was not socialist but still in transition to socialism, but otherwise their refutation of "state-capitalist" arguments is on point) Do you agree with that position that the USSR was not socialist but still in transition to socialism? No.8951 File: e28ffcd2c353b54⋯.png (131.92 KB, 704x156, 176:39, ClipboardImage.png) p. 29 That's utter bullshit. No.8952 >>8950 No. >>8951 Stalin pointed out that exploiting classes had ceased to exist in Soviet society by the mid-30s. The text is wrong in claiming that he thought classes don't exist under socialism (he said that workers and peasants still exist, as indeed they did), but there clearly wasn't much room for class struggle besides the remnants of overthrown classes. Maoists criticized Stalin on this point, arguing that he did not anticipate the rise of a "new bourgeoisie" within the party that supposedly took over after his death and "restored capitalism." This is something that Mao's Cultural Revolution was ostensibly designed to prevent. No.8955 >>8950 >>8952 Then you disagree with Marx's definition. And Lenin's. No.8957 >>8955 Where did Marx and Lenin argue that antagonistic classes and class struggle continue under socialism? No.8958 No.8959 >>8957 My point was that by neither Marx nor Lenin's definition had the USSR achieved socialism. By their definition the USSR was still in a transitional phase until it reverted to capitalism. No.8961 >>8958 Marx speaks of "where the peasant exists in the mass as private proprietor," whereas (as Stalin and other Soviet officials noted) the peasantry in the USSR had become collective farm peasantry, i.e. integrated into the socialist economy, no longer private proprietors.
And obviously there was no capitalist class in the USSR by the mid-30s for the proletariat to struggle against. So Marx's words do not apply. They did before the construction of socialism (i.e. from 1917 to the early 1930s) but not afterward. >>8959 And what makes you argue that? No.8963 >>8957 Class struggle continue but not in an "antagonistic classes" form. No.8965 >>8963 How? As Stalin pointed out in 1936, "the draft of the new Constitution of the U.S.S.R. proceeds from the fact that there are no longer any antagonistic classes in society; that society consists of two friendly classes, of workers and peasants; that it is these classes, the labouring classes, that are in power; that the guidance of society by the state (the dictatorship) is in the hands of the working class, the most advanced class in society, that a constitution is needed for the purpose of consolidating a social order desired by, and beneficial to, the working people." Stalin argued the DOTP still exists, but I haven't seen anything he wrote or said about class struggle under socialism except in the sense of attempts by foreign capitalist states to overthrow it or remnants of former exploiting classes trying to do likewise. Erik Van Ree comments as follows in his "The Political Thought of Joseph Stalin," pp. 140-141: >However, the proletarian dictatorship was no longer mentioned as a functioning reality in the new constitution of 1936. The text held that “all power in the USSR belongs to the toilers [trudiashchiesia] of the city and the village.” . . . >From a practical point of view, the constitution’s failure to claim an exclusively proletarian character reflected the new reality of the introduction of general and equal suffrage, which brought positive discrimination in favour of the urban working class to an end. The proletarian dictatorship, for what it had been worth, was in practice thereby abolished. 
According to Molotov, Stalin admitted privately that the proletarian dictatorship was a thing of the past. In his speech of 1 October 1938, the latter observed that in its first, class stage the proletarian state served to oppress the bourgeoisie. Soviet power was now in a second stage, “when power becomes the power of the toilers. I wouldn’t say that it is now a class power, that it is the power of one class.” . . . >In a discussion with the Polish communist Bierut in May 1946 he remarked that “in essence” there was “no dictatorship of the proletariat in the USSR now either. We have a Soviet democracy.” The reason was that there were only external enemies to suppress. The party rules adopted at the Nineteenth Party Congress in 1952 removed all specific references to the working class. The party was redefined as a “union of like-minded communists, organised from people from among the working class, the toiling peasants and the toiling intelligentsia.” There wasn't much of a leap from this to the 22nd Congress of the CPSU in 1961 which argued that the dictatorship of the proletariat had fulfilled its historical mission and that the USSR had become a state of the whole people. No.8966 Do you know anything about the supposed "Maoist" orgs that fought against the DRA and the Soviets alongside the Mujahids? All I could find on them was the wiki article on the "Soviet Afghan war" and a stub article on one supposed militia. No.8967 >>8966 The Afghanistan Liberation Organization was the main one. They were never militarily significant and mostly ended up being killed by their fellow "anti-imperialist" Mujahideen associates. No.8971 >>8961 >And what makes you argue that? Socialism is the lower phase of communism, as defined by Lenin. The lower phase of communism abolishes money, commodities, wage-labor, as defined by Marx. The USSR at no point achieved the lower phase of communism.
They retained wage-labor, money, commodities, and all the elements of capitalism during their entire history. Lenin defines socialism as the lower phase of communist society. What is usually called socialism was termed by Marx the “first”, or lower, phase of communist society. https://www.marxists.org/archive/lenin/works/1917/staterev/ch05.htm Marx describes the lower phase of communist society: Within the co-operative society based on common ownership of the means of production, the producers do not exchange their products; just as little does the labor employed on the products appear here as the value of these products, as a material quality possessed by them, since now, in contrast to capitalist society, individual labor no longer exists in an indirect fashion but directly as a component part of total labor. [...] He receives a certificate from society that he has furnished such-and-such an amount of labor (after deducting his labor for the common funds); and with this certificate, he draws from the social stock of means of consumption as much as the same amount of labor cost. https://www.marxists.org/archive/marx/works/1875/gotha/ch01.htm Lenin defines the new Soviet system as transitional: No one, I think, in studying the question of the economic system of Russia, has denied its transitional character. Nor, I think, has any Communist denied that the term Socialist Soviet Republic implies the determination of Soviet power to achieve the transition to socialism, and not that the new economic system is recognised as a socialist order. https://www.marxists.org/archive/lenin/works/1918/may/09.htm Lenin claims it will take many years to build even a socialist "apparatus": The most harmful thing would be to rely on the assumption that we know at least something, or that we have any considerable number of elements necessary for the building of a really new state apparatus, one really worthy to be called socialist, Soviet, etc. [...]
No, we are ridiculously deficient of such an apparatus, and even of the elements of it, and we must remember that we should not stint time on building it, and that it will take many, many years. https://www.marxists.org/archive/lenin/works/1923/mar/02.htm No.8972 Ismail, do you have anything to say about parties such as the Progressive Labor Party and Party of Communists, USA? And what about organizing a revolutionary party in the US, in general. No.8976 >>8971 >Socialism is the lower phase of communism, as defined by Lenin. Something Soviet authors never denied. >The lower phase of communism abolishes money, commodities, wage-labor, as defined by Marx. As Marx wrote in that text, "What we have to deal with here is a communist society, not as it has developed on its own foundations, but, on the contrary, just as it emerges from capitalist society; which is thus in every respect, economically, morally, and intellectually, still stamped with the birthmarks of the old society from whose womb it emerges. Accordingly, the individual producer receives back from society – after the deductions have been made – exactly what he gives to it." He also writes in Capital Vol. III that the law of value will continue to operate under socialism. In the USSR money served an accounting function (i.e. it couldn't be used by individuals to create capital.) Labor-power was not a commodity, and commodities were not produced for their exchange value (as is the case under capitalism.) As for your two Lenin quotes, again nobody would deny that in 1918 Soviet Russia was indeed merely striving toward socialism. I don't see how that's relevant twenty years later when the bourgeoisie had ceased to exist and the kulak class done away with in the countryside. Likewise, socialism was not built in a day. 
As Lenin said in 1922, "Permit me to conclude by expressing confidence that difficult as this task may be, new as it may be compared with our previous task, and numerous as the difficulties may be that it entails, we shall all—not in a day, but in a few years—all of us together fulfil it whatever the cost, so that NEP Russia will become socialist Russia." >>8972 The PLP are ultra-left. They think revolutions should lead directly to communism. The PCUSA is a pro-Stalin split of the CPUSA. It seems pretty ineffectual. The FRSO, WWP and PSL are parties I'd consider joining if I had to do so. No.8983 >>8972 I'm in the PSL, ask me anything. No.8984 >>8976 Again, the USSR retained all the key elements of capitalism for its 70 year existence. It's not a question of socialism "with the birthmarks of the old society." When Marx wrote that passage he was describing a system of equal pay using labor certificates, not a system in which prices were manipulated by state planning. >He also writes in Capital Vol. III that the law of value will continue to operate under socialism. No doubt. >Labor-power was not a commodity People worked and received payment for that work. >commodities were not produced for their exchange value (as is the case under capitalism.) A commodity is by definition created for realizing exchange value. The entire collective farm sector did exactly this and was mentioned by Stalin as being a hold-over from non-public ownership. Of course, Stalin performs a number of mental somersaults and concludes his analysis of commodity production in the USSR by saying that the use of Marxist concepts to analyze Soviet society should be abandoned since those concepts no longer apply(!). What's interesting is that Stalin's discussion of commodity production begins with an Engels quote describing how the seizing of the means of production would immediately do away with commodity production.
Rather than answer why this hasn't happened in the USSR Stalin states that at the time in question such a transition could only have been achieved in Britain, and that the early USSR's economic conditions did not permit the transition to a non-commodity society. He never actually answers the question of why such a transition was not achieved in the following 30 years. https://www.marxists.org/reference/archive/stalin/works/1951/economic-problems/ch03.htm I think that the really interesting part of this work comes in section 4. Here Stalin makes the interesting statement that the USSR still produces for profit, but at a national level: "...profitableness is considered not from the stand-point of individual plants or industries, and not over a period of one year, but from the standpoint of the entire national economy and over a period of, say, ten or fifteen years..." Stalin, Economic Problems of Socialism in the USSR https://www.marxists.org/reference/archive/stalin/works/1951/economic-problems/ch04.htm No.8985 >>8984 >Again, the USSR retained all the key elements of capitalism for its 70 year existence. Strange, then, that the USSR had no capitalist class, no unemployment, no anarchy of production, no recessions nor depressions. This is clearly a "capitalism" that looked nothing like capitalism. >People worked and received payment for that work. Which is in accordance with the socialist principle of distribution. That does not make labor-power a commodity. There was no reserve army of labor to drive down wages and compel workers to sell their labor-power for whatever it could get. There was no exploiting class to appropriate the surplus value of workers. >A commodity is by definition created for realizing exchange value. Yes, but under socialism commodities assume a new form. 
See: https://archive.org/stream/NikitinFundamentalsPoliticalEcon/Nikitin%20Fundamentals%20Political%20Econ#page/n153/mode/1up Stalin's "Economic Problems of Socialism" was criticized by Soviet economists after 1953 who argued he had erred in his understanding of the law of value and commodities, his proposal to put an end to commodity circulation between the towns and countryside, etc. This doesn't mean his book is nothing but incorrect statements, but it isn't the alpha and omega of the Soviet treatment of those subjects like many ultra-leftists pretend it is. >Here Stalin makes the interesting statement that the USSR still produces for profit, but at a national level: Yes. I don't see your point, unless you think socialism entails operating at a loss and having a diminishing social product. Szymanski clearly explained the role of profit in the Soviet economy. That small bit you quoted from Stalin was part of a larger point that the law of value did not regulate production in the USSR. As he explained: >If [the law of value regulated production], it would be incomprehensible why our light industries, which are the most profitable, are not being developed to the utmost, and why preference is given to our heavy industries, which are often less profitable, and sometimes altogether unprofitable. >If this were true, it would be incomprehensible why a number of our heavy industry plants which are still unprofitable and where the labour of the worker does not yield the "proper returns," are not closed down, and why new light industry plants, which would certainly be profitable and where the labour of the workers might yield "big returns," are not opened. . . . >These comrades forget that the law of value can be a regulator of production only under capitalism, with private ownership of the means of production, and competition, anarchy of production, and crises of overproduction.
They forget that in our country the sphere of operation of the law of value is limited by the social ownership of the means of production, and by the law of balanced development of the national economy, and is consequently also limited by our yearly and five-yearly plans, which are an approximate reflection of the requirements of this law. No.8987 What's your opinion on Pol Pot? Do you fall more on the negative side of him for the obvious commonly stated reasons? Or do you prefer the idea that the US bombing campaign crippled Cambodia's agricultural system and caused most of the deaths, as I've seen some state? Also, which was better: Democratic Kampuchea or PR-Kampuchea? No.8990 >>8976 >The FRSO Do you mean freedomroad.org or frso.org? Also Ismail, is there anywhere you post regularly other than /marx/ and eregime? Or more broadly, are there any other good places online for communist discussion, in your opinion? No.8991 And if you don't mind, what brought you into imageboards, 8ch in particular? No.8993 >>8987 >Or do you prefer the idea that the US bombing campaign crippled Cambodia's agricultural system and caused most of the deaths as I've seen some state The Khmer Rouge reduced the population to practically slave labor. The bombings were an atrocity and gave the KR a pretext to evacuate the cities, but that doesn't absolve them of their crimes. See: Pol Pot was a reactionary. He ended up becoming an open supporter of capitalism and received CIA support throughout the 1980s fighting the Vietnamese and the Cambodian government. Ask any Cambodian and 99.99% will reply that life was better in the People's Republic of Kampuchea than it was under the Khmer Rouge. >>8990 I am mainly involved with /marx/ and eRegime (the activity of which is at a standstill since we're converting to a different forum software, but there are two active Discords that maintain the community's activity for now.)
Besides that I post on reddit: https://www.reddit.com/user/HysniKapo/ (not really in leftist subs though, just liberal and social-democratic ones where I try to spread the good news of Marxism-Leninism.) >>8990 frso.org (the Fightback one.) >>8991 As far as Imageboards go I almost never posted on 4Chan except in /lit/ or /pol/ to note books I had scanned (I no longer post there at all), and /marx/ is the only board on 8Chan that I post on. /marx/ was originally admined by someone else who offered to hand it over to me. I figured it'd be a good board for people to learn about Marx and friends, so I accepted. I don't interact with imageboards otherwise. No.8994 File: 995c8d3583d35c6⋯.jpeg (294.17 KB, 732x1024, 183:256, CC359076-0DAE-4DC3-A923-E….jpeg) File: 4da2e426ee846fd⋯.jpeg (532.59 KB, 871x1300, 67:100, C8EB1247-6848-4147-93B6-9….jpeg) Since you’ve been on revleft for a while and may know him better than other people from the chans, is Rafiq active anymore? Do you have any of his texts such as “our materialism”? No.8995 >>8994 I've never communicated with Rafiq outside of RevLeft and don't know his current whereabouts, nor do I have his texts. No.9002 Do you know any good books on Grenada? No.9003 >>9002 I do not. For the invasion of the country, see the relevant chapter in "Killing Hope": http://b-ok.xyz/book/887149/c061ac No.9004 File: 08accd11dde0efd⋯.jpeg (1.92 MB, 1242x1999, 1242:1999, 5C575771-C584-4C06-9846-8….jpeg) >>9003 That’s the part I was most interested in anyway, so thank you! There’s also this book but I can’t find it anywhere online. I may have to purchase a physical copy No.9005 >>9004 Yeah it isn't online to my knowledge. I have a few books from the "Marxist Regimes" series, they're pretty good intros to their subjects (although their definition of "Marxist Regimes" is quite broad, e.g. one of the volumes I have is on Guyana.) No.9006 The following combination of statements seems logically incompatible: <1. 
The rate of exploitation can be measured by the labor time workers give relative to the labor time entering the goods and services they consume. (This claim, like what follows, is not about every single person, but aggregates.) <2. Prices of goods and services don't center around labor time content, but profit-rate equalizing prices of production. <3. Workers can't change the rate of exploitation by making different consumer decisions. One could say that people don't go out of their way to obtain things with high labor content just for the sake of it, and that the different classes consume a huge amount of different products and services, so it stands to reason that the labor-content per consumer dollar spent is probably about the same among different sets of millions of people. So, the shares of the pie of produced stuff going to different classes is not going to change much when you change the measure from labor-value "prices" of what people consume to profit-rate equalizing prices. But it's still weird in that statements 1 and 2 together seem to imply a logical possibility that the working class pursuing the strategy of buying things with high labor content would push down the rate of exploitation. So, which of these three assumptions would you drop? No.9007 >>9006 I think you'd get a better answer if you asked the persons over at this thread: >>7739 (which is about Capital, but seems like a good thread for Marxist economics in general.) No.9008 >>9005 In what period was Guyana supposedly "most" marxist? Was it ever comparable to Allende's Chile, or at least to Venezuela? I don't know anything about the country No.9009 >>9005 >>9008 (Not that other anon btw) No.9010 >>9008 That's the thing, it never was a "Marxist regime." There were two principal parties in Guyanese politics: the People's National Congress and the People's Progressive Party. The PNC mostly represented the Black population and had an ideology of "cooperative socialism."
The PPP mostly represented Indians and was Marxist-Leninist. The PNC had actually received CIA support in the 1960s because of the fear that Guyana would obtain independence from Britain under a pro-Soviet PPP government. So instead the PNC ended up presiding over Guyanese independence, and in the early 70s began moving to the left in domestic and international affairs. It developed cordial relations with the USSR, Cuba and DPRK while making sure not to endanger ties with the US and UK (which viewed the PNC in power as a lesser evil since pretty much the only other alternative was the PPP.) Some Marxist rhetoric was used by the PNC (e.g. that efforts were being made to establish the "dictatorship of the proletariat"), and nationalizations were carried out, but I'd say it was to the right of Venezuela under Chávez and Maduro. It even coalitioned with a small conservative party to keep the PPP from coming to power. That Guyana's government was Black-led, avowedly socialist, and on good terms with the USSR and friends (and also the country is English-speaking) also led to Jim Jones setting up Jonestown there, since the government's policy was to encourage cooperative undertakings and for more Black people to enter the country (Jones' followers were mostly Black) in order to tip the scales against the country's Indian majority (who were more likely to vote for the PPP.) No.9011 >>8912 Thank you for clearing that up. In what way exactly were these bureaucrats in the USSR (and other ML states) able to take advantage of the socialist economy? >>8914 I should really just read Capital when I have the time, the answers to many of my questions seem to be in there.
Thanks No.9012 >>9005 >although their definition of "Marxist Regimes" is quite broad That's for sure, I have their book on Democratic Kampuchea and we know how "Marxist" they were ideologically No.9013 Did the ultraleft leadership of Albania lead to ultraleftist policies which fucked up the economy like it did in China with the Cultural Revolution? Or any other things ultraleft policies led to? No.9014 >>9011 >In what way exactly were these bureaucrats in the USSR (and other ML states) able to take advantage of the socialist economy? They gave themselves privileges tied to their jobs, e.g. chauffeured cars, access to special stores to buy imported goods, lived in residential areas closed off to ordinary people, etc. And then of course there was corruption, e.g. overcharging the state for expenses for a birthday party, screwing with statistics to make your region's output look more glorious than it was so planners would allocate more funds and you could pocket the additional money, etc. No.9015 >>9013 Yes. After 1978 Albania's economy slowly fell apart as lack of foreign trade (caused in part by a constitution which prohibited obtaining foreign investment and credits) meant lack of spare parts and other necessities to maintain economic output. >>9012 To be fair, the "Marxist Regimes" volume on Cambodia is actually about the Vietnamese-backed People's Republic of Kampuchea, not the "Democratic Kampuchea" of the Khmer Rouge. Also the guy who wrote the book has helpfully put it online for others to read: michaelvickery.org/vickery1986kampuchea.pdf No.9016 >>9010 That's interesting. Any idea why marxism is so popular in Guyana or with the Indians in popular? What about the period in 1992-2015 where the PPP was apparently in power, did they do anything of note? Also, can you tell me more about this "Marxist regimes" series - is it bourgeois in perspective or are the authors at least sympathetic to marxism? >Some Marxist rhetoric was used by the PNC (e.g.
that efforts were being made to establish the "dictatorship of the proletariat"), and nationalizations were carried out, but I'd say it was to the right of Venezuela under Chávez and Maduro. It even coalitioned with a small conservative party to keep the PPP from coming to power. Sounds like classic socdem fuckery. >>9014 Do you think such abuse could have been avoided, or at least kept under control, if some kind of checks and balances against corruption had been introduced in the early years of the USSR? >screwing with statistics to make your region's output look more glorious than it was Doesn't this make a great deal of statistics from the USSR unreliable? This seems like a serious problem when trying to defend the accomplishments of the ML states in arguments with anticommunists. No.9017 >>9016 >in popular * in particular No.9019 >Any idea why marxism is so popular in Guyana or with the Indians in popular? It wasn't so much that Marxism was popular, it's that Cheddi Jagan and his wife (the founders of the PPP) lived in the US and became Marxists there. Jagan returned to Guyana to advocate immediate independence, whereas until then political life was in the hands of British collaborators. So his party basically became the only genuinely patriotic force. In the 1960s-70s there was a huge shift in favor of socialist rhetoric and policies throughout the third world, so Forbes Burnham (founder of the PNC) found it easy to shift from British Labour Party-style "socialism" to mouthing the occasional Marxist phrase. >What about the period in 1992-2015 where the PPP was apparently in power, did they do anything of note? As a 1997 NY Times article notes, "Though Dr. Jagan said he had not abandoned his commitment to Marxism, he sought to attract foreign investors and embraced some free market policies. 'I was a Gorbachev even before Gorbachev, in the sense of what we were doing and not adopting the traditional dogmas of Marxist parties,' he said. 
With the cold war over, Washington's attitude by then had also changed, and Dr. Jagan's relations with the Clinton Administration were cordial." The "Marxist Regimes" series is basically bourgeois in perspective, but some of the volumes are written sympathetically (e.g. the volumes on Afghanistan, Angola, the GDR and Kampuchea.) >if some kind of checks and balances against corruption had been introduced in the early years of the USSR? There were attempts at checks and balances, e.g. the Workers' and Peasants' Inspectorate, but as Lenin pointed out: "It will take decades to overcome the evils of bureaucracy. It is a very difficult struggle, and anyone who says we can rid ourselves of bureaucratic practices overnight by adopting anti-bureaucratic platforms is nothing but a quack with a bent for fine words." >Doesn't this make a great deal of statistics from the USSR unreliable? Depends on the statistics. Some were more reliable than others, and even when a statistic was "accurate" in terms of sheer output it could be deceptive in that a lot of the output was unsold in stores due to poor quality. But few would deny that Soviet economic growth remained impressive until the 1970s. No.9020 >>9015 No.9023 >>9020 About Albania's economic decline? There are a few books that talk about it in the context of other things, but none of them are online. If you have specific questions though, feel free to ask. No.9024 >>8976 >He also writes in Capital Vol. III that the law of value will continue to operate under socialism. Short question: can you refer me to the bit you are talking about? Thanks! No.9025 >>9024 "after the abolition of the capitalist mode of production, but still retaining social production, the determination of value continues to prevail in the sense that the regulation of labour-time and the distribution of social labour among the various production groups, ultimately the book-keeping encompassing all this, become more essential than ever."
No.9026 File: f33dfb79ff0c432⋯.jpg (195.03 KB, 1103x1374, 1103:1374, Ho-Chi-Minh.jpg) Do you have any information on Vietnam? What kind of levels of growth/industrialisation did they experience after the war? How long did it take for Vietnam to actually build socialism and would you still consider them a socialist country today (if not when did they deviate from socialism)? Have there been any big changes when it came to the leadership of the country (such as Nguyễn Văn Linh wanting to introduce market reforms)? Also wondering if there is anything distinct in "Ho Chi Minh Thought" or is it just a Vietnamese application of Marxism-Leninism No.9027 >>9026 The Vietnamese government has never claimed that they've built socialism. They describe themselves nowadays as a socialist-oriented market economy. I don't know much about Ho Chi Minh Thought, although it does indeed seem little more than the "Vietnamese application of Marxism-Leninism" (like it's not equivalent to Juche.) As far as I know they mostly spent the 70s and 80s rebuilding, and their economy (particularly agriculture) stagnated in the mid-80s which led to the market reforms. No.9028 >>9005 Ismail, do you know if this one is any good? https://archive.org/details/marxistlocalgove00bogd No.9029 >>9028 I haven't read it, but if you make an archive.org account you can simply click to have the book "loaned" to you, allowing you to read it in your browser for up to 13 days. No.9031 >>9029 Oh I did. In fact those are pretty easy to rip. Just wanted to know if it was one of the more sympathetic books of the series. No.9045 >>9023 Can you go a little more in depth on this economic decline? The restrictions of foreign trade etc? No.9049 >>9045 As I said, the country gradually became bereft of spare parts, so that you'd have factories where a lot of lights didn't work, or there wouldn't be enough dynamite for mining, etc.
A lot of this was because Albania refused to normalize relations with the USSR after Khrushchev's ouster. Brezhnev and subsequent Soviet leaders offered to restore diplomatic and trade ties, but Hoxha and Alia refused until 1990. Article 28 of the 1976 Constitution stated, "The granting of concessions to, and the creation of, foreign economic and financial companies and other institutions or ones formed jointly with bourgeois and revisionist capitalist monopolies and states, as well as obtaining credits from them, are prohibited in the People's Socialist Republic of Albania." No other socialist country had anything like this. All it did was hinder the economic well-being of Albania. No.9052 >>9049 Thank you! No.9053 >>9049 Was trade forbidden with these "revisionist" and "capitalist" countries, or just restricted, and to what degree? No.9054 >>9053 Albania refused any trade whatsoever with the US and USSR. It did, however, trade with France (e.g. chromium exports from Albania ended up in the US via Franco-Albanian trade), Italy, Yugoslavia, some Warsaw Pact countries, and smaller capitalist countries like Austria. If Albania had completely forbidden trade with the outside world then socialism would have been overthrown years before. No.9057 Opinion on Mugabe? Is all the shit he gets deserved? Did he improve the lives of his people beyond the Rhodesian apartheid being dismantled? No.9060 >>9057 Living standards were generally okay in the 1980s. They declined in the 90s as Mugabe's government implemented IMF policies. Meanwhile, ZANU-PF was under pressure from veterans and landless Zimbabweans to carry out land reform, which he finally did after years of delay when it looked like his party might lose elections. The land reform itself has overall had a positive effect (https://www.bbc.com/news/world-africa-11764004) but the country's economy nonetheless remained in poor straits for over a decade.
Mugabe was seen as incapable of changing that around, and while the way in which he was removed can be criticized, it seems the vast majority of both ZANU-PF members and ordinary Zimbabweans approve the change. No.9065 Would you consider the Japanese government after the virtual takeover of it by the military until 1945 a form of fascism, or would you consider it a much more German Empire / Russian Empire / Austro-Hungarian-esque type of chauvinist imperialism? No.9066 >>9065 I haven't really studied Japan closely enough to give an answer. According to the Great Soviet Encyclopedia though: >As preparations for [World War II] proceeded, there was a dramatic upsurge of reactionary forces, and the entire socioeconomic structure of Japan became fascist in nature. Trade unions were abolished in 1940. The Imperial Rule Assistance Association was created in October 1940 to replace the political parties, which had been dissolved; headed by the prime minister, the association formed the basis of the “new political structure.” No.9068 Do you have a degree in history? I don't think I know anybody who knows that much about 20th century history. No.9069 >>9068 No, I just read books and articles. I'm sure if you talked to an actual historian they'd know more about a subject than me. No.9076 I can't remember where, but I've heard somewhere that the majority of food came from small private land owners in the USSR. I don't recall if this was referring to the kulaks or afterwards as well. Is it true? If yes, why? No.9077 >>9076 After collectivization, each family in a collective had the right to a small plot of land. Anything they grew on it they could eat or sell on an open market. They weren't allowed to employ labor and they couldn't expand their plot (let alone buy others), so there was no chance at creating a rural capitalist or kulak class.
Many peasants preferred to focus on these private plots rather than producing for collectives since on an open market they could sell products of the former for whatever price they'd like, whereas most of what was produced in the collective farm had to be sold to the state for a fixed (and generally low) price. The low price the state paid collective farmers, on the other hand, helped the state subsidize food prices in urban areas (i.e. make food cheaper to buy) for consumers. Another problem is that the USSR's distribution system generally sucked, which meant a lot of collective produce would rot in the fields before trucks came in time to deliver them to supermarkets and whatnot. By contrast, if you worked on a private plot, you yourself were responsible for determining where you took the produce (with train stops being popular locations.) Most food wasn't produced by private plots, but certain products (like eggs) were dependent to a considerable degree on private plot production. As a Russian guy I know once said: >Yes, people probably worked hard on their own plots, yes they were more productive per yard, and yes, they did contribute to about 15% of Soviet agricultural consumption, although for the most part they harvested fruits and vegetables hard to come by from state farms, not staple crops like potatoes. Here's a helpful article on Soviet agriculture in general: http://www.soviet-empire.com/ussr/viewtopic.php?t=47201 No.9082 >>9069 >No, I just read books and articles. How long have you been reading up on Marxism and the history of socialist states to seem as knowledgeable as you do? No.9084 >>9082 I've considered myself a Marxist since 2006, and I began purchasing books about socialist states in 2007 (a very boring collection of speeches by Todor Zhivkov, a 1975 Western academic book on the GDR, and "A Coming of Age: Albania under Enver Hoxha" by James S. O'Donnell.)
No.9086 >>9077 >After collectivization, each family in a collective had the right to a small plot of land supposedly under communism would they still have a plot of land? >They weren't allowed to employ labor and they couldn't expand their plot (let alone buy others), so there was no chance at creating a rural capitalist or kulak class. Were they allowed to make coops? I remember you quoted Marx in another thread where he said the land can't be divided into small plots of individual farmers. What's the deal then with this? No.9090 >>9086 >supposedly under communism would they still have a plot of land? No. Private plots were seen as transitional. With the increased productivity of collective and state farms, the need for private plots would cease. >Were they allowed to make coops? Out of their private plots? Not to my knowledge. Only family members could work on the private plots. They were surrounded by collective farm property, so it'd be a bit strange to make cooperatives within collectives. >I remember you quoted Marx in another thread where he said the land can't be divided into small plots of individual farmers. What's the deal then with this? The families belonged to collective farms. Their main task was (in theory, at least) to focus on raising the productivity of collective farm production. But they also had small private plots for their families to grow food on which could either be used to supplement their diet or earn additional income for themselves on an open market. In practice, in the 1930s-50s when collective farm productivity was very low, private plots were viewed by peasants as vital if they weren't to suffer malnutrition or were to gain enough money to buy basic goods from the state. Afterward they were still given more attention by many collective farmers since they got more money from working on the private plot. 
When peasants joined the collectives in the 1930s, their land was given up to the collective, whereas a small private plot was granted to them in return based on family size and some other factors. Those on state farms also had private plots, but these were even smaller than their collective counterparts and there was less incentive to work on them since state farms paid wages. No.9091 What is your opinion on the EFF in South Africa? Their manifesto claims they're M-L but South Africans I've spoken to claim that they're basically Mugabeists / Afro-NazBol? No.9092 Pieke Ulbricht Honecker Marry fuck Kill? No.9094 >>9091 I don't know enough about them to comment. >>9092 Presumably by "Pieke" you mean Pieck, who didn't have much influence over the GDR's policies. His role in the government was mostly ceremonial, and he was honored as one of the founders of the KPD. Ulbricht and Honecker weren't much different. The former was removed because of his old age and because he had disagreed with the USSR and Poland over diplomatic issues concerning West Germany. Honecker continued the limited liberalization promoted by Ulbricht in economics and society. No.9096 Why do so many people defect from socialist countries? Is it overblown? No.9097 I often hear that the GDR was state capitalist, with Honecker and other high-ranking party officials controlling the MoP and having their own private mansions etc. That the production wasn't really ruled by the proletariat but rather by party bureaucrats, to what extent is this true? Is it even true at all? How was it really? No.9098 >>9096 Governments can give money and other benefits to those from socialist countries who successfully reach "the other side" (which is the case with Cubans and East Germans), or you have family abroad in a capitalist country (which applies to a lot of Cubans, East Germans, and North Koreans), or things really do just horribly suck (DPRK during the "Arduous March" of the 1990s.)
Those who leave also expect that they'll get higher wages at their jobs and greater access to consumer goods. >>9097 It isn't. The GDR's economy was fundamentally the same as the USSR's. Honecker and Co. were state officials, not capitalists. They had no ownership of the means of production, couldn't hire laborers, couldn't accumulate capital, etc. State and party officials certainly abused the socialist system to give themselves privileges and to engage in corruption, but this isn't something that determines whether a system is capitalist or socialist. If a capitalist decides to live modestly and contribute tons of money for charity, he/she is still a capitalist. When a Bernie Madoff or Charles Keating decides to abuse the capitalist system, this doesn't mean the United States is somehow no longer capitalist. A handy quote from a bourgeois work, There Is No Freedom Without Bread! by Constantine Pleshakov, 2009, pp. 60-61: >The world of luxury [Soviet and Eastern European officials] created for themselves was still a far cry from that of Imelda Marcos or John F. Kennedy and their wealth was not hereditary or even for life, because a leader ousted from power lost most of the material benefits the day he was sacked, and every person in Romania knew that the Ceaușescus' prosperity was exactly as lasting as the orchids they imported. >These were elites whose dacha furniture had metal tags nailed to it, so that when the person fell out with the leader or retired, an inventory team could count and account for every chair he left to his successor (in 2006 in the United States, a severance package for a "failed" chief executive of Home Depot was $210 million). Moguls drove around in Soviet-made Chaika limousines, their windows covered by arrogant curtains, but their children could not inherit them. Here, privileges were like fiefs and had no monetary backup: you lose power, you lose its spoils.
>In 1968, the conqueror of Warsaw, Marshal Rokossovsky, diagnosed with terminal cancer, begged a doctor to send him to the subtropical Crimea on the Black Sea, to the Ministry of Defense dacha: "I know that I can die at any moment, please make my last year good." The doctor counterfeited the paperwork, and the retired war hero got clean bedsheets, free meals, and a room with a view. When one of the most powerful men in Bulgaria, a secretary of the party's Central Committee, had a fling, he asked a subordinate—in his case, a writer, for the secretary supervised arts and literature—to lend him his apartment for the night because he couldn't take his date to a hotel: the management would have reported him to his very own Central Committee, which would have been only too happy to shred him to pieces for "moral decadence." In principle, Eastern European elites were as shackled by the rules as were their subjects, and, doubtlessly, whispered the names of freedoms they would've wanted. >The greatest spymaster of Eastern Europe, Markus Wolf, chief of East German intelligence for thirty years, wrote in his memoir: "People who could leave the country were greatly envied by the population at large; travel fever was acute in this country of nontravelers. I had traveled less widely for pleasure than most middle-class American college students, which is something that Western commentators tend to forget when they talk about the lives of the members of the nomenklatura. For all my privileges, I had never visited the Prado, the British Museum, or the Louvre . . . I was privileged to have a fine apartment, a car and a driver, and pleasant holidays at the invitation of other secret services in the Eastern bloc. But these were always connected to my job and status; in the end, the wider world was sealed off to me, too." 
And from a leftist author: >Soviet leader Yuri Andropov, as Time magazine reported, lived in a simple five-room apartment in the same housing project near the Kremlin that once accommodated Leonid Brezhnev. Soviet political leaders, managers, and intelligentsia cannot amass great wealth from the labor of others. They cannot own the means of production nor pass ownership on to their progeny. When they retire, it is to modest living quarters on modest pensions. This hardly constitutes a "new class." >Top-level state ministers and enterprise managers earn only about 2.7 to 4.0 times above the average industrial wage. (However, small numbers of prominent artists, writers, university administrators, and scientists make close to 10 times more.) Such income differences are not great when compared to the United States, where top entertainers, corporate owners, and other wealthy individuals annually take in several hundred times more than the average American wage earner. (Parenti, Michael. Inventing Reality: The Politics of the Mass Media. New York: St. Martin's Press. 1986. p. 141.) No.9100 Oi, Ismail. Why is the Brezhnev era remembered as the golden era? I've listened to philosophers and people and everyone points out the Brezhnev era as the golden age of the Soviet Union. After Stalin, de-Stalinization and after cornboi, why? Plus wasn't that around the era when the economy stagnated? What's up with that? There was also the war in Afghanistan. No.9101 >>9100 They were the most stable years in the USSR's history. There really weren't any unpopular policies (e.g. Khrushchev had raised prices) and the Soviets were at their height geopolitically. Everyone agrees that Brezhnev should have stepped down five years or so before he died, and the war in Afghanistan is obviously controversial, but Russians today consider the Brezhnev years the best to have lived in.
No.9102 Ismail, if China is still socialist then how does one explain their imperialism (exportation of capital / use of the developing world as a sweatshop / seizing land with 99-year leases when debts can't be repaid, like it's the 1800s)? No.9103 >>9101 What about the "Era of Stagnation"? How bad was it? No.9104 >>9102 Check out the "China and Imperialism" section in the following link: https://docs.google.com/document/d/16iw83noTdWvDiECaITX83rGhP_lros8QdBTrNnCoe6c/edit >>9103 It didn't literally mean the economy stagnated. Growth continued, but slower than in preceding decades and with little hope of improvement. By 1985 many people felt that economic reform was long overdue and that a lot of problems in society were becoming worse (e.g. crime, cynicism, neglect) without any way to remedy them because the government and press preferred to ignore or downplay what was happening. This wasn't unique to the USSR. It characterized all the European socialist countries to a greater or lesser extent. No.9105 Is it true that no one sang the Soviet Anthem until the 1977 revision that removed mentions of Stalin? Do you think removing him was justified? I also saw someone claim that Honecker banned the lyrics to Auferstanden aus Ruinen. No.9106 >>9105 Yes, both claims are correct. The Soviet anthem was of course used in its instrumental version, but not sung until 1977 when new lyrics were adopted. When the original anthem had been written, Stalin was still leader of the USSR. The mention of him was thus seen as part of his personality cult, hence the removal. The CPSU after 1956 did not regard Stalin as akin to Lenin (who was mentioned in both the original and 1977 versions), and I think removing Stalin from the anthem made sense.
Auferstanden aus Ruinen wasn't sung anymore (although again, the instrumental version was used) because the lyrics concerned a united Germany and a single German nation, whereas after 1971 the SED argued that the GDR represented a separate, socialist German nation. Talk of reunification was therefore done away with. This is different from North Vietnam, South Yemen and the DPRK which all argued that there was a single Vietnamese/Yemeni/Korean nation abnormally divided between two states. No.9107 Ismail, could you rate all the Soviet leaders from favourite to least favourite? (for example: Lenin - 9/10, Stalin 9/10, Malenkov 6/10, etc.) No.9108 >>9106 >after 1971 the SED argued that the GDR represented a separate, socialist German nation. Talk of reunification was therefore done away with. Am I correct in assuming that, contrary to what the party wanted, most citizens of the GDR wanted to reunify (whether under the government of the GDR, the government of West Germany, or under a totally new government, depending on their economic views)? Given that there were plenty of people when the wall fell who remembered a time when Germany was united I have trouble believing they could really buy into the idea of the GDR being a separate nation. No.9113 What is your opinion on the Southern Movement / Southern Army / Southern Restoration movement in Yemen? I can't find virtually any information on their current political and economic beliefs, but it is led by the ex-President of the PDRY (and also the Vice Prez of Yemen after unification, pre-civil war) No.9114 >>8560 >>9113 Also do you have any idea which government the Socialist Party of Yemen sides with? Aden or Sana? I could find little information No.9119 >>9107 Lenin was clearly the most farsighted and outstanding of all Soviet leaders. Stalin was also a generally good leader.
Malenkov actually began "de-Stalinization" before Khrushchev and probably would have been a better Soviet leader overall had he not been ousted. Khrushchev was inept and hypocritical but Soviet society continued to improve under him. Brezhnev stood in office too long but otherwise was generally competent. Andropov would have been great had he not had just two years to lead the CPSU. Chernenko died before he could do anything. Gorby sucked and presided over the destruction of socialism. I think my mini-descriptions are more useful than merely assigning numbers. >>9108 You'd be correct. There were East Germans who considered themselves part of a new socialist German nation, but by 1989 the concept clearly hadn't succeeded, especially among the youth (most of whom admired West Germany and wanted to at least visit West Berlin.) >>9113 >>9114 The Yemeni Socialist Party supports a separate South Yemen government, arguing that the southern part of the country is discriminated against. They're social-democrats. No.9137 Any leftish books on General History? Not ones that are overly Eurocentric please, a focus on Africa or Asian regions would be nice. No.9138 >>9137 There's a two-volume Soviet history of the world published in 1974: * https://web.archive.org/web/20081121220226/http://leninist.biz/en/1974/1SHW599/index.html (Volume I, from ancient times to World War I) (I wish I had these in PDF format, but alas I do not) There's also a Soviet history of the world covering 1917-1945: https://archive.org/details/AContemporaryWorldHistory19171945 A slimmer, revised version of the above published in the Gorby period: https://archive.org/details/ContempHistory1917 And finally a Soviet history of the world covering 1946-1990: https://archive.org/details/ContempHistory1946 No.9139 File: de9a22fdc7cd91f⋯.pdf (14.49 MB, Eric Hobsbawm The Age of E….pdf) >>9137 Maybe too Eurocentric but are you aware of Eric Hobsbawm?
He was a Marxist and one of the most acclaimed historians of all time. Mostly known for his three-part series about the "long 19th century" and his work on the 20th century called The Age of Extremes. No.9140 File: 784ad5adea0bb27⋯.pdf (4.03 MB, Eric Hobsbawm The Age of R….pdf) >>9137 >>9139 Can't post Age of Capital and Age of Extremes because the files are too big, but I'm sure you can find those on libgen or elsewhere. No.9141 >>9119 Would you say Brezhnev was one of the best leader-wise? No.9142 >>9141 Brezhnev wasn't a "good" leader (in the sense of displaying exceptional leadership qualities), but he did promote collective leadership as opposed to Khrushchev and Stalin who took decisions without consulting others, and in that sense Brezhnev helped lay the basis for a more "normal" functioning of government. As bourgeois historian T.H. Rigby wrote in 1972, "one can identify an overall trend away from the arbitrary, structurally indeterminate pattern of decision-making characteristic of Stalin's system of personal rule. . . plac[ing] a premium on formal structures and regular procedures, thus promoting a reinstitutionalization of the supreme executive bodies of Party and State, in particular the Politburo." No.9143 >>9142 >Stalin who took decisions without consulting others Any example? No.9144 >>9143 Only six CC meetings were held in the last sixteen years of Stalin's life, even though it was supposed to meet every four months according to the 18th Congress. Excerpts from "Stalin's Cabinet: The Politburo and Decision Making in the Post-War Years" by Yoram Gorlizki: >It was some months after the war, and the formal dissolution of the State Defence Committee (GKO) on 4 September 1945, when the Politburo began to resume peacetime operations. Formally the Politburo continued in much the same vein as it had left off before the war, with a virtually identical membership and a similarly modest workload. 
At its meeting of 29 December 1945 the Politburo resolved to meet every other Tuesday for a short time, from 8 pm to 9 pm. . . . Meetings of the Politburo, however, tailed off following the session of 3 October 1946; over the rest of Stalin's reign there were only two further formal, enlarged sessions of the Politburo, on 13 December 1947 and 17 June 1949. The official Politburo in fact came to be overshadowed by the regular conferences of a narrow 'ruling group' which met routinely in Stalin's office. The composition of this circle. . . differed markedly from that of the formal Politburo. Excluded from [it] were those Politburo members who had either fallen foul of Stalin or who were cut off from the ruling circle for reasons of location or ill-health. For some time Stalin's suspicions had fallen on Voroshilov, Andreev, and, to a lesser extent, Kaganovich, all of whom were, despite their formal membership of the Politburo, not privy to the proceedings of the ruling group in the aftermath of the war. . . most resolutions issued in the name of the Politburo in the Stalin years were determined by this group. . . . >A succession of leaders, including Malenkov, Beria, Voznesensky and Bulganin, gained admission to the group many months before their formal accession as full members of the Politburo. Stalin hence unilaterally elevated colleagues without having to go through the tedious formality of having them 'elected' as full members of the Politburo by the Central Committee. Stalin could also expel members from his group with unseemly ease. No.9149 Ismail, do you know where to find this letter in full and in English (or Spanish)? https://en.wikipedia.org/wiki/A_Word_to_the_People No.9150 >>9149 You can find the original text in Russian: https://web.archive.org/web/20131014202353/http://www.zavtra.ru/denlit/050/12.html Google Translate gives a... not good translation of it, but enough to get a gist of what it says.
No.9152 File: d31dad0acc219d3⋯.png (1.29 MB, 1366x768, 683:384, 5-21.png) What was the GDR's position on Friedrich Nietzsche? What do you think of Takis Fotopoulos and his work? No.9154 >>9152 The East German position on Nietzsche was the same as the Soviet one: a reactionary, anti-socialist philosopher who inspired fascism. >What do you think of Takis Fotopoulos and his work? I haven't read anything of his, but apparently he's a "libertarian socialist." So I'm presumably not a fan. No.9155 Hello, I've got a couple of practical questions on labour vouchers: How would you go about assessing the socially necessary labour time needed for the creation of a unique commodity like a handmade work of art, or even a commodity just entering mass production? Don't labour vouchers leave a number of questions about the true value of a commodity by failing to take into consideration externalities like the cost of visual, auditory, and environmental pollution? Or, if you integrate these things into the cost of a commodity, how do you quantify the value of something like waking up in the peace and quiet of the wilderness before a real-estate development is put in? Thanks No.9156 >>9155 You'd probably be better off asking that question on /leftypol/ since that board seems to focus more on current events and discussions on "21st century socialism" (Paul Cockshott, etc.) No.9157 Ismail, can you recommend Marxist books one could find physical copies of on the cheap online? Other than the usual selected works and readers of Marx, Engels, Lenin, etc. No.9158 >>9157 "Marxist books" is a rather.... broad category. Anything more specific? No.9159 >>9158 Anything really. Theory, history, whatever. Thing is, I ordered a few books including a Marx-Engels selected works, Imperialism, State and Rev, some Parenti, and the site I bought them from gave me a free $5-or-less book.
I'm looking at Wretched of the Earth and a book called Marxism and Literary Criticism by Terry Eagleton, but I really don't know if he's a good author or not. The book doesn't have to be under $5 for a recommendation; anything relatively cheap is welcome. No.9160 >>9159 If you can find cheap copies (I use Amazon or Abebooks to buy stuff) then I'd recommend Szymanski's "Is the Red Flag Flying?" and "Human Rights in the Soviet Union" as good reads on the USSR. "Lenin and the Russian Revolution" by Christopher Hill is a book I've always considered to be the best short intro to Lenin's life, theories, and the October Revolution. Should be able to find cheap copies of it. No.9161 >>9156 Neat! Does 21st century socialism offer a sane solution to the valuation of art and complex externalities? I've been trying to think of a solution, but I'm starting to think it's not even possible; certainly capitalism does a pretty horrible job. No.9162 >>9160 Thanks. I'll check those out. Szymanski's books are sadly both above $60. There are copies of his "The Capitalist State and the Politics of Class" at around $7-$10. Do you know anything about that book? No.9164 >>9161 I used "21st century socialism" as just a general term to describe people like Cockshott who ask the question "what could socialism look like in the US and Western Europe," not an actual ideology. >>9162 Yes, I have a physical copy and intend to scan it one of these days. It's a good read, explaining American politics and the state via Marxist categories (bourgeois democracy, etc.) No.9165 >>9164 >It's a good read, explaining American politics and the state via Marxist categories (bourgeois democracy, etc.) No.9166 >>9165 Alright, I'll post in this thread sometime this week after I scan it. As a note, the other books I mentioned by Szymanski are also available online, as is Hill's book on Lenin. No.9167 >>9164 >Yes, I have a physical copy and intend to scan it one of these days.
It's a good read, explaining American politics and the state via Marxist categories (bourgeois democracy, etc.) Thanks, Ismail. Just ordered; found it for $3.48 on Better World Books. Free shipping too, pretty good deal. Also bought Fascism and Big Business by Daniel Guerin from AbeBooks for about $12 shipped, which is also a pretty good price for that book, I think. No.9168 Are there any non-communists you're particularly fond of, or whose work you think is especially valuable to communists? No.9169 >>9168 Valuable in what sense? In terms of historians of the USSR, for example, there were/are many who wrote useful works: E.H. Carr, J. Arch Getty, Sheila Fitzpatrick, Peter Kenez, Moshe Lewin, Robert Thurston, David Lane, Jerry F. Hough, Stephen F. Cohen, etc. No.9172 >>9164 Sorry, I was getting ready to go to sleep. I know 21st century socialism is a set of responses rather than an ideology. No.9174 Ismail, how old were you when you first got interested in politics? What got you into it, and more specifically Marxism? No.9176 >>9174 When I first browsed the Internet back in 2001, I mostly posted on forums relating to the video game Half-Life and mods for it. There was a group on one of these forums called Communist Gamers whose members argued that Marxism is good. I tried to read the Manifesto and Capital, but I was too young to comprehend the former, let alone the latter. In 2005 I started to get more interested in politics (before then my knowledge of it was limited to "George W. Bush is dumb and the Iraq War is bad.") I quickly realized the American political system existed to serve the interests of capitalists. I went from liking Dennis Kucinich to calling myself a "democratic socialist" and having an Olof Palme avatar on a political forum I went on. This was early 2006 or so.
But there were Marxist-Leninists on that forum as well, and a conservative poster (of all people) pointed out that "democratic socialism" is impractical because capitalists would not give up their power via the ballot box. I talked with the MLs and they answered my simple questions, e.g. "what separates a proletarian from a peasant?" So by the time I joined RevLeft in May 2007 I considered myself a Marxist-Leninist. I was 14. Marxism simply seemed correct to me, as it still does. No.9177 >>9176 Your story is really cool, Ismail. And by the way, what is your opinion of the Chinese "New Left" movement? In particular, the Tonghua Steel incident, the Chongqing model of Bo Xilai, and more recently, the Guangzhou Maoist student incident and the Jiashi workers' strike? No.9178 >>9177 I think it is natural for workers to resent capitalists and to work to expose corruption and terrible living conditions. If workers on occasion need to force the hand of local CPC officials, that should be permitted. The important thing is that a situation like Poland in 1980 doesn't develop, where there was a huge disconnect between official trade unions and workers and the gap was filled with Solidarity, a CIA-backed, Vatican-endorsed union seeking the overthrow of socialism. I haven't read up much on Bo Xilai. The "Guangzhou Maoist student incident" is something else. I don't support efforts to overthrow the CPC and to revive the Cultural Revolution, and can understand the CPC trying to put a stop to that. No.9183 File: fe68b9985d0d488⋯.jpg (96.17 KB, 1009x568, 1009:568, bigOriginal.jpg) >>9176 >having an Olof Palme avatar on a political forum I went on What are your general thoughts on Olof Palme nowadays? Here in Sweden a lot of leftoids still glorify him and consider the Palme years basically the best time period in the world's greatest country. Did he ever have the intention of introducing "real socialism" to Sweden (as opposed to just welfare capitalism)?
I'm more of a Marxist-Leninist, but since I'm Swedish I can't help but be somewhat "proud" of the Swedish golden era of social democracy and Olof Palme, and I think the "employee funds" were kind of a neat idea for a reformist path to socialism even though it didn't go anywhere. No.9184 >>9176 >I was 14. Wow, this is a surprise. I always kind of assumed you were in your 30s now because of how long you'd been around on RevLeft. No.9185 >>9183 He seemed pretty good for a social-democrat, although he apparently had a secret military agreement with the US against the USSR in the event of war. No.9186 File: 5b1f249762edc01⋯.jpeg (67.18 KB, 720x537, 240:179, eff white people.jpeg) File: 8e361e43d44c99d⋯.jpeg (140.53 KB, 1242x801, 138:89, eff1.jpeg) File: 4a17c678e8c7ac9⋯.jpeg (194.79 KB, 1242x969, 414:323, eff2.jpeg) File: e4b2f50c86058ef⋯.jpeg (181.74 KB, 1242x871, 1242:871, eff3.jpeg) What are your thoughts on the Economic Freedom Fighters (EFF) of South Africa? They claim to be a militant Marxist-Leninist vanguard organization with inspiration taken from people like Thomas Sankara and Frantz Fanon. Though many people claim that they are "anti-white" or black supremacist based on stuff like pic related or Julius Malema, their leader, singing the song "Shoot the Boer". They claim to be against racism, as shown in their manifesto and some of their statements, but I do admit that some of their actions make me suspicious. I'm sure you're familiar with their background, but for anyone who's not, here are some links and images. https://www.effonline.org/constitution No.9187 >>9186 I don't know enough about 21st century South African politics to comment authoritatively, but I will note this: >based on stuff like pic related or Julius Malema, their leader, singing the song "Shoot the Boer". "Kill the Boer, kill the farmer" was sung by none other than Mandela himself, after he was released from prison and emphasized national reconciliation.
That's because it isn't a song against whites, it's a song against Apartheid. The National Party based its identity around the idea that God himself gave Boers permission to rule South Africa and to have control over its best land. The "Boer" in this context is the overall system. Its lyrics torn from their context can be used against whites, but in context it's another way of saying "give Black people their land back," which is one of the stated goals of the EFF. No.9188 File: a80834672e3cfe7⋯.png (236.84 KB, 1250x827, 1250:827, Bantustans.PNG) >>9187 >"Kill the Boer" was sung by none other than Mandela himself, after he was released from prison. It isn't a song against whites, it's a song against Apartheid. The National Party based its identity around the idea that God himself gave Boers permission to rule South Africa and to have control over its best land. That is the "Boer" the song is about. Interesting, that makes sense, especially in connection with some of those tweet screenshots I posted above and their positions within the EFF constitution. I've done a bit of research by myself on apartheid and the bantustans but I'll have to look more into the situation there, especially with so many white supremacists / internet fascists condemning what they see to be an "imminent white genocide" (which seems to me like hyperbole over the debate over land reform and potential plans to amend the constitution to seize lands without compensation) No.9190 Ismail, I know next to nothing about the Revolutionary Communist Party and Bob Avakian. What are they about, and what makes them bad? No.9191 >>9190 In terms of what they believe, the RCPUSA are actually fairly mundane Maoists. They think Mao and the Gang of Four were awesome, Hua Guofeng and Deng Xiaoping were capitalist roaders, the USSR became capitalist after Stalin died, Cuba and the DPRK are capitalist, etc.
What makes the RCPUSA unique is the "culture of appreciation" (as they put it) around Bob Avakian, who is presented as some sort of genius and whose writings constitute a new synthesis of available knowledge or whatever. You'll constantly see RCPUSA members extol him. I don't know how much that translates into the RCPUSA itself operating as a cult (e.g. Peoples Temple, the most famous left-wing cult, used sleep deprivation and arbitrary punishments to keep members in line, had them cut off contact with family and friends, limited interaction with non-members), but the extolling of Bob is pretty stupid and bad. Bob himself, like the RCPUSA, is fairly mundane. He isn't a moron, he's just a 1960s-70s student radical who still mostly talks like he's from those decades. He's moderately charismatic and seems alright at explaining basic concepts. But he isn't the second coming of Mao. No.9192 >>9191 >In terms of what they believe, the RCPUSA are actually fairly mundane Maoists. >He's moderately charismatic and seems alright at explaining basic concepts. Yeah, I read some of the stuff on their page and saw some Avakian speeches and left with the same impressions. I just thought there was something more to them, I guess. Like some crazy crap other than the "culture of appreciation" stuff. No.9193 File: 625f24f04a30b37⋯.jpg (127.52 KB, 477x586, 477:586, Jim Jones in Jonestown.jpg) >>9192 >Like some crazy crap other than the "culture of appreciation" stuff. Yeah, I don't know if the actual internal workings of the RCPUSA are like those of a cult. Political cults did and do exist, like the Democratic Workers Party, NATLFED, the LaRouchites, etc., but as far as I know RCPUSA members just extol Bob. As an aside, I intend to write an article on Peoples Temple and Jonestown for the website MLToday, since November 18, 2018 will be the 40th anniversary of the mass murder-suicides. Jim Jones engaged in some pretty ridiculous stuff.
To quote one book: >Faith [a former member] recalls a time in 1971 when [Jones] was trying to convince his flock that he was a reincarnation of Lenin. . . Jones sensed some of the people in his flock had strong doubts. He took the problem to his staff and together they worked out a scenario that would prove Lenin's spirit had found a host in Jim Jones's body. The scheme involved placing Patricia Cartmell, Patty Cartmell's daughter, in the attic of the Redwood Valley church. Carrying a day's supply of food, a jug of water, and a script, she climbed a long ladder up to the attic early one morning when no one was around and waited all day. >During the evening service, Jim Jones went into a trance. He was in touch, he said, with "the ethereal waves." He could hear the voice of a woman who was in limbo because she had betrayed Lenin. As he meditated, the people in the congregation began to hear, with increasing clarity, a ghostly female voice. The voice spoke in Russian. Jim Jones had no trouble translating; he had been Russian in a previous life. The woman wanted Jones to help her find peace in the spirit world. She needed his divine powers to help her out of limbo. He forgave the woman for her betrayal, and she thanked him profusely. The voice faded back to the spirit world until it was no longer audible. Jim Jones came out of his trance, and Patricia Cartmell stretched out in the attic and went to sleep. And from another book, some context is necessary.
Jim Jones claimed to have a ten-inch penis, and during meetings with his Planning Commission (about a hundred people ostensibly tasked with planning the Temple's activities) he was afraid to go to the bathroom to urinate lest people try to take a look at his supposedly massive penis, and was also afraid to urinate outside because his nonexistent enemies might try to snap photos and claim he was engaged in indecent exposure; ergo, a Planning Commission member placed a towel in front of him as he urinated into a can during meetings. But then, one day the towel accidentally fell. >It was far from being the giant thing he had talked non-stop about through the years. He weakly made an attempt to excuse his lie. "That should never have happened," he said to Carol angrily. "This office has to maintain a certain image for the good of the Cause." No.9194 File: 6e384bb4cbdcf76⋯.jpg (38.53 KB, 700x467, 700:467, Oswald.jpg) Ismail, what do you think about Lee Harvey Oswald and the Kennedy assassination? I know you're very interested in Jonestown, so I'm just curious what you think about the whole thing here. No.9195 >>9194 I haven't looked into the subject. It's certainly possible that sections of the government, Miami Cubans, and the Mob had their reasons for opposing JFK due to the Bay of Pigs turning out the way it did. My main problem is when people end up prettifying JFK's record in office. While he seemed to be adopting a less bellicose attitude toward the USSR and Cuba at the time of his death, you have people who insist that if only he had lived longer miraculous domestic and foreign events would have occurred, losing sight of the fact he was a bourgeois politician with a clearly anti-communist foreign policy. Another problem is that there are innumerable crimes linked to American imperialism and domestic government agencies.
Excessively focusing on JFK can take away from focusing on repression against communists, the Black Panthers, American Indian Movement, or stuff like this: https://www.jacobinmag.com/2014/09/contingency-plans As an aside, one of Jones' lawyers was Mark Lane, who played a major role in popularizing the notion that the government had a hand in JFK's assassination. Jones himself (like many other people on the left back then) was a fan of conspiracy theories relating to the assassinations of JFK and RFK, MLK Jr., Malcolm X, etc. I recently finished reading Lane's book "The Strongest Poison" about his experiences with Jones and Jonestown. There's an amusing bit in the book where he and Jones' other lawyer (Charles Garry, formerly representing Huey Newton and the Panthers) are fleeing through the jungle during the mass murder-suicide. Jones wanted to have both men shot because he blamed them for Congressman Ryan entering Jonestown (they had earlier urged Jones to admit him and his media entourage into the settlement.) Garry, being an old man, had difficulty carrying his inordinately heavy briefcase, so Lane picked up the slack for him until at one point he fell on the ground. >When I dropped his bulging briefcase, it popped open. A large metal object protruded. I reached over to examine it and once I had done so, I said, "Charles, we are carrying a hair dryer through the jungle." He nodded affirmatively and said, "It's a good hair dryer." I said "Throw the fucking thing out, Charles. Buy a new one if we get back." He refused. I asked what else he had in the case. He said that he had the legal files for the Peoples Temple. He added, "I'm still the lawyer for the Temple. I need those files." I said that while he had not been formally fired, I thought that the dispatch of a firing squad to dispense with us could be said to have released him from further obligations. I observed that he should toss out the briefcase, hair dryer, files, and all. He said he would not. 
No.9196 File: cfe9a72d8d0492d⋯.jpg (85.34 KB, 500x698, 250:349, JFKconspiracytheorist.jpg) >>9195 >It's certainly possible that sections of the government, Miami Cubans, and the Mob had their reasons for opposing JFK due to the Bay of Pigs turning out the way it did. It is very plausible. However, I think the problem with most of the theories is they bend the physical evidence to meet the theories rather than the other way around. For instance, when JFK is hit the first time on the Zapruder film, he raises his arms into a strange position that to a layperson may look like he's clutching his neck. Actually, if you watch a high-quality version of the film, you can see he's not clutching his throat at all; he's locked into place by a neurological reflex called Thorburn's position. This response was triggered by a bullet creating a cavity near his spinal cord. You can see on the film that Jackie actually tries to pull his arms down but can't do it because he's locked into that position; he doesn't drop his arms until the head shot. It seems to be pretty good evidence that he's not clutching his throat as a reaction to a bullet blowing through his windpipe from a shot in the front, as some have theorized. There was also no recorded direct damage to the spine, nor was there a rear exit wound (exit wounds are generally larger than entrance wounds). The head shot is probably the most convincing piece of evidence that the shots did come from behind. If you watch closely you can see Kennedy's head move forward and then jerk back violently; this is the result of the jet effect and a neurological spasm that causes the body to jerk backwards when the brain is destroyed. Three black employees on the fifth floor also heard three shots from above them, and one construction worker actually witnessed a man he described as similar to Oswald take the shots.
The theories that there were multiple shots that hit Governor Connally don't seem plausible, considering the majority of witnesses heard three shots and the digital reconstructions of the flight path of the bullet that hit Kennedy show that his wounds were in line with the trajectory of the second bullet. So, if someone wants to argue that Oswald was being guided by malevolent forces (this is what Parenti seems to imply), or that someone was with Oswald on the 6th floor or took the shots that were blamed on him there (this seems to be Roger Stone's position), then that would make sense. But overall, Lee Harvey Oswald firing from the window of the 6th floor of the schoolbook depository, acting alone, seems to be the most solid theory: http://www.michaelparenti.org/JFKAssassination.html No.9197 File: e9b603cb0c9de13⋯.png (39.47 KB, 276x330, 46:55, levote4demsman.png) >>9196 >>9195 There is the theory that James Files (a mafia hitman) took the fatal shot from the front using a "fireball" .221 rifle that was chambered with an exploding bullet. This would seem to be bolstered by mob boss Carlos Marcello's confession to an FBI informant that he killed Kennedy and Watergate burglar/Latin American CIA operative E. Howard Hunt's deathbed confession to the murder of Kennedy. But the question would be: are the confessions by these three legitimate? Do they fit with the available evidence? And do their stories line up? The conventional anti-conspiracy line that "someone would have talked" would seem to no longer apply in this case. However, I'm still skeptical due to the physical evidence in favor of the official account as well as tons of other evidence attesting to Oswald's guilt, or at the very least a major role in the assassination. It's pretty obvious that the KGB did not trust Oswald, and many people who knew him well said he was too volatile to be trusted by any government with anything important. I think the fact so many conservatives gravitate to these theories isn't an accident.
JFK was a reactionary president, perhaps the most reactionary of the 60s barring his mild support for détente (which LBJ and Nixon also supported). Of course, Roger Stone (Nixon/Trump's political operative) has now come out with a book fingering LBJ for the murder. Trump has also alluded to these theories; some extreme anti-Semites like Michael Collins Piper argue that it was an Israeli operation. I think this is tied to something you alluded to, the notion that if JFK had lived then things would somehow have been different in a major way. The fact that JFK's death marked a period of reformism unseen since the 1930s is also problematic for conservatives--Reagan once said that it was the Great Society that he was opposed to, not the New Deal. The Immigration Act of 1965 and the greater tolerance of immigrants in general also play a role, I think, in this conservative nostalgia--if JFK wasn't killed by commie Jews globalists then America would have had a permanent white majority, etc. >the assassinations of JFK and RFK, MLK Jr., Malcolm X, etc. No.9198 >>9197 <the assassinations of JFK and RFK, MLK Jr., Malcolm X, etc No.9199 >>9198 I haven't looked into any of them either. As far as I know Malcolm X was killed by Nation of Islam members angry that he criticized Elijah Muhammad for being a hypocrite and distorting Islam, and Farrakhan was one of those who at the time were denouncing Malcolm and thus indirectly contributing to his death. No.9204 Ismail, even though you've made it known you're OK with a state-monitored market economy as long as the Party and state always have the final word, essentially: A. If a socialist state can maintain an economy without resorting to these measures (DPRK), do you believe this to be preferable? B. How do you think the Chinese CP will be able to leave the social market economy transition stage and create a socialist economy when very powerful classes/groups in Chinese society benefit from keeping the status quo?
(national/petite bourgeoisie, corrupt Party members being bribed, etc.) C. Why do you continue to support China despite Deng's reforms but dislike Gorbachev, when his reforms were effectively the same thing? No.9205 >>8560 >>9204 Also, you seem to dislike Gorbachev but like Yuri Andropov quite a bit. This makes little sense to me, as Gorbachev and Andropov were political allies, and Andropov laid the groundwork for most of Gorby's reforms and even supported Gorby's promotion in the Party. No.9206 >>9204 >If a socialist state can maintain an economy without resorting to these measures (DPRK), do you believe this to be preferable? If it doesn't hinder the economic development and living standards of the population, yes. As for your second question, ultimately it will depend on the CPC's ability to ensure the continued leading role of the state sector of the economy, and to resist counter-revolutionary intrigues (as it successfully did at Tiananmen in 1989.) >Why do you continue to support China despite Deng's reforms but dislike Gorbachev, when his reforms were effectively the same thing? Gorbachev cheered on the overthrow of socialism in Eastern Europe and presided over the demise of the USSR, with Perestroika tanking the economy. He identifies as a social-democrat nowadays. None of this is "effectively the same thing" as Deng's reforms. >Gorbachev and Andropov were political allies, and Andropov laid the groundwork for most of Gorby's reforms and even supported Gorby's promotion in the Party Gorby was also supported by "hardliners" like Gromyko and Tikhonov who saw him as a young, energetic figure. Even Molotov seemed hopeful about him. It's important to keep in mind that Gorbachev in 1985-86 was a very different person (at least publicly) from Gorbachev in 1989-91. When Gorby first came to power he gave the impression he would continue Andropov's reforms with the aim of strengthening socialism.
But before long he deviated from Andropov's path, claiming that what existed in the USSR since the 1930s was not socialism but an "administrative-command economy" and that what was necessary was the creation of a "humane and democratic socialism." He praised Dubček as a precursor. Had Andropov lived and Gorby tried to force his hand in directions inimical to socialism, I'm sure the latter would have been removed just as Deng removed Zhao Ziyang. No.9207 What can I say about Obama to my libtard friends who seem to think he was the greatest leader in modern history, apart from pointing out that he oversaw the invasion of Libya (and obviously was not a socialist)? What are some good points to make specifically against him? No.9208 In the previous thread you posted this quote on Solzhenitsyn: >As one author put it in-re Solzhenitsyn, "I've never felt that dissident was a particularly apt title for a man given a jail sentence for criticising orders and inciting disaffection in wartime (an offence for which he could have been shot in the British Army), spending most of it in a special prison with more than tolerable conditions, having his cancer cured along the way and being released before the end of his sentence—and then doing nothing but complain. That's no dissident, that's a whinger." I was wondering if you could provide a source for this quote? No.9209 Do you have any books on China under Mao from a leftist perspective? No.9210 Thought some people in here might find this interesting, a memoir on Albert Szymanski written by a colleague: http://crab.rutgers.edu/~goertzel/szymanski.htm Szymanski appears to have committed suicide in 1985 at the age of 43. Another interesting aspect of the memoir is the author's reference to "cultural Marxists" and feminists at the University of Oregon sociology department having disagreements with him and Szymanski. No.9211 >>9207 While I'm not a Trotskyist, WSWS usually has good articles on American politics and foreign policy.
Here are a few on Obama: >I was wondering if you could provide a source for this quote? >>9209 I scanned this a few months back: https://archive.org/details/ConciseHistoryCPC >>9210 Yeah, Szymanski was depressed over his personal life plus the seeming retreat of Marxism in the US (including academia where, contrary to right-wing claims, Marxist influence steadily declined from its height in the 1970s.) I emailed the author of that article (who uses the phrase "cultural Marxists" in the sense academics used the term back then, not as a boogeyman), who gave me permission to scan the textbook he and Szymanski wrote on sociology. I intend to do that at some point, along with Szymanski's "The Capitalist State and the Politics of Class." Once that's done all of Szymanski's books will have been put online. You can find the five books of his that are already online here: https://archive.org/search.php?query=creator%3A%22Albert+Szymanski%22 As an aside, from another colleague of Szymanski's that I emailed: >He was a prodigious reader and probably at the time of his tragic death, the most published scholar in the Department of Sociology at the University of Oregon. He was the one that attracted the really great graduate students. He set the standard for all of us in preparations for his lectures; he was magnificent in the classroom. His students are everywhere. . . When I visited Russia on a Fulbright from 1994-1995, I raised many of the issues that Al had concerning the nature of socialism with my Russian sociologist colleagues (and by the way, they all knew of Al Szymanski's work and they agreed with his analysis. One of my Russian sociologist friends who came to the US made it his duty to visit Eugene, Oregon so he could see for himself where Al lived). Most felt totally betrayed by what had happened with perestroika and Gorbachev. Post last edited at No.9212 File: 29cc0c8c95df04f⋯.jpg (95.38 KB, 960x540, 16:9, stalinwave.jpg) >>9211 Thanks a lot for the articles!
No.9214 >>9211 > intend to do that at some point, along with Szymanski's "The Capitalist State and the Politics of Class." Once that's done all of Szymanski's books will have been put online. Doing god's work, Ismail. Can't thank you enough. No.9217 Did Ho Chi Minh contribute anything to Marxist theory or practice? No.9220 >>9217 The Communist Party of Vietnam talks about "Ho Chi Minh Thought," but as far as I know it's basically just "we gotta adapt Marxism-Leninism to Vietnamese conditions," and not much more. No.9221 What do you think about the South China Sea dispute? No.9222 >>9221 I think that territorial disputes between socialist countries are bad. The US is taking advantage of it to try to make Vietnam dependent on it for arms. No.9223 What is China doing in Africa? Are they using African workforces for making profit and plundering the country? Idk what to trust about this topic, maybe you can clear things up No.9224 Do you think Apostol would have been better than Ceaușescu had the "Gang of Six" succeeded? No.9225 >>9223 >>9224 Yes. Considering Ceaușescu's record in the 1980s I can't see how Apostol could have been worse. No.9230 In light of the second upcoming fascist get-together in Washington DC... Do you think it better to condemn Antifa, like a lot of cautious individuals on the LeftNet seem to do to save face / optics? Or do you side with Icepick in that fascists should be scared so shitless by Antifa etc. that they never leave their basements again? (My personal opinion tbh) No.9231 >>9230 I'm not going to condemn a group that opposes fascism. I don't think Antifa is *sufficient* to prevent fascism, but it certainly has helped in limiting the street presence of fascists. No.9233 File: cad243a43a10a68⋯.jpg (358.37 KB, 640x890, 64:89, 1469998471998.jpg) What Socialist country had the most extensive welfare-state services? No.9234 >>9233 That I don't know. If I had to guess, it'd be the GDR. One anti-communist survey I have (Richard F.
Staar, Communist Regimes in Eastern Europe, Fifth Edition, 1988, pp. 106-107) seems to imply that its social services were more widespread (and more expensive) than others in Eastern Europe. >Welfare programs cover virtually every aspect of daily life, including cradle-to-grave health and medical coverage, pensions, and a minimum ten years of free public education. State-operated day-care centers service 61 percent of preschool children at a cost of six dollars a month. In 1972 abortions were legalized, and women receive paid leave of fifteen to twenty days. Eighteen days of vacation a year are guaranteed for all workers; those in strenuous jobs, like mining and metal processing, have up to 30 days. These benefits are costly. Subsidies for food and consumer goods alone total almost $10 billion per annum. Housing in the form of low rents costs the government at least an additional $2 million. >At the 1986 party congress General Secretary Erich Honecker announced the extension of several family benefits: the paid “baby year” now applies to women after the birth of their first child, all working mothers receive paid leave to tend to sick children, and loans to young married couples are available on more liberal terms. The following year the monthly child allowance was increased from 20 to 50 marks for the first, 200 to 100 marks for the second, and 100 to 150 marks for the third child. Honecker reckoned that this “important socio-political measure” would cost two billion marks annually. No.9235 How much power did the leaders of the USSR actually have? No.9236 >>9235 Lenin didn't occupy any formal leadership position within the party, but as founder of the Bolsheviks and architect of the Soviet state he could ultimately convince a majority of his associates to follow his lead on most issues, especially if he threatened to resign (as he did when it seemed agreement might not be reached to sign Brest-Litovsk.)
When his physical and mental condition deteriorated sharply by the end of his life, none of his many associates proposed formally relieving him of his position as head of the Soviet government since he was seen as irreplaceable. Stalin by the end of the 1930s had become unquestioned leader. If he wanted something, it was done (or at least attempted.) By the time of his death he carried out policy not via the Central Committee and Politburo (as he was supposed to), but via private gatherings of his closest aides. See: >>9144 Under Khrushchev there was some attempt at collective leadership (which existed under Lenin), but in practice Khrushchev would undertake actions without consulting his colleagues and would get angry if anyone contradicted him and would seek their removal. Brezhnev, Andropov and Chernenko did actually function under a collective leadership. As the ambassador to the US noted in his memoirs: "In the West many believed that the general secretary of the Communist Party was a true dictator accountable to no one, and this was of course true for Stalin, but not his successors. Brezhnev was no exception. True, he was number one among the leadership, but even as first among equals he could not always impose his views on the other members of the Politburo. Each of them had a right to express his opinion on any subject on the agenda of the regular weekly meeting, which was usually held on Thursdays. Meetings could be called at any time to discuss urgent matters. . . As a rule decisions were taken by consensus, and voting was extremely rare. If there was a strong division of opinion, the general secretary usually postponed the decision for the next meeting, which was a signal for him to meet privately in the interim behind the scenes with each of his colleagues to work out a compromise. 
A general secretary of course had many ways of persuasion to carry his ideas through the Politburo, but he was always careful not to antagonize the other members unnecessarily. After all, they could always revolt and replace him, as they did with Khrushchev." (Anatoly Dobrynin, In Confidence, 1995, p. 224.) Gorbachev intrigued to get rid of "hardliners" within the CC and Politburo, and ended up creating a separate institution of the Soviet Presidency to remove any limits the CC and Politburo would have over him. Post last edited at No.9241 >>8560 Any opinions on vegetarianism/veganism? No.9242 >>9241 None. I eat meat. If people prefer to be vegetarians or vegans for health or moral reasons, I'm not going to damn them. I would criticize groups that think setting fire to research facilities for using animals in testing and such is revolutionary praxis though. No.9243 >>9242 Agreed on all that, thanks. I asked the question because I'm vegetarian, partly for environmental reasons, but sometimes wonder if it makes a difference at all or if it's really just "lifestylism" like some Marxists will claim. No.9244 >>9243 >I'm vegetarian, partly for environmental reasons Just because you don't eat meat doesn't mean the production of the product ends or that the same things won't end up in stores. You're wasting your time. It is lifestylism No.9247 >>9244 Couldn't this be said for pretty much anything? One worker going on strike won't change anything. You showing up to an anti-war rally isn't going to even marginally hurt imperialism. Even in a full-on socialist revolution, your individual participation almost certainly won't lead to any success (unless you're a brilliant tactician or have some other incredible skill, but that's one in a million). Not the guy you're replying to btw. No.9248 >>8560 Any opinion on the left-communist "value criticism" (Wertkritik)?
Basically they argue that the primary predicament of our time is not class struggle but the value form, the valorization and commodification of everything. While class seems to be less and less important as time goes on, what prevails absolutely is the value form. No.9249 >>9248 Seems nonsensical. The "commodification of everything" is the direct result of capitalism (which, after all, turned labor-power itself into a commodity) and requires class struggle to oppose it. It sorta reminds me of how Gorby argued in the late 80s that "universal human values" (such as opposition to nuclear war and environmental destruction) would take precedence over class struggle in international relations because the former threatened all humanity. The idea that the struggle against the arms race and protecting the environment were part and parcel of class struggle was deliberately obscured for Gorby's own opportunist ends. No.9250 >>9249 I agree with you, but there are other very popular points left communists raise against Marxist-Leninists: - Marx made no distinction between socialism and communism, therefore ML is revisionist - Marx and Engels said that the DotP and socialism are mutually exclusive, as socialism must entail the "self-abolition of the proletariat" - the fact that money, exchange value and commodities existed makes all socialist states not socialist - Leninism is completely reliant on the conditions of the early 20th century and needs to be replaced by a more modern theory No.9251 >>9250 1. Marx specifically wrote of a lower and higher stage of communism. The lower stage (which, as Marx writes, "emerges from capitalist society; which is thus in every respect, economically, morally, and intellectually, still stamped with the birthmarks of the old society from whose womb it emerges") was called socialism by Lenin and by all subsequent Soviet authors. 2.
The Communist Party of the Soviet Union argued that the dictatorship of the proletariat fulfilled its historical role and had ceased to exist by the end of the 1950s. Soviet authors from the 1930s onward also noted that, strictly speaking, there was no proletariat in the USSR (there was no capitalist class to buy the labor-power of workers and threaten them with unemployment as a way to drive down wages.) 3. Money serves an accounting function under socialism; it cannot be turned into capital. Commodities were produced for their use value, not exchange value (nor was labor-power a commodity as it is under capitalism.) 4. Left-communists in that case are in no position to be accusing Leninists of revisionism. No.9252 >>9244 The more people are vegetarian, the fewer such products will end up in stores. Also >>9247 is right. Besides, how am I "wasting my time"? Cooking quorn balls takes exactly as long as cooking meatballs. No.9255 Bit of a strange thing to ask, but was Marx really an anti-Semite? I've never gotten around to reading his Jewish question book, but I've heard some bad things about it. No.9256 >>9255 He uses stereotypes in his article to make points, and elsewhere he made derogatory references to Jews, but to quote an author who can't be accused of sympathy with Marxism: "Crude and repulsive as Marx's and Engels' racial remarks to each other often were, there is no need to make them still worse by putting them in the same category as twentieth-century racism that has justified genocide. Marx's much criticized essay, 'On the Jewish Question,' for example, contains clear statements of his distaste for what he considered to be Jewish cultural or social traits, but in the end it was a defense of Jews' right to full political equality, written as a reply to a contemporary who had claimed that Jews should be required to give up their religion before receiving equal civil status." (The Thomas Sowell Reader, 2011, pp. 185-186.)
And to quote Jewish scholar Louis Harap: >it must be emphasized that Marx was an advocate of political rights for the Jews. . . the Rheinische Zeitung, a Cologne paper that Marx edited from 1842 until its suppression in March 1843, was one of the strongest supporters of Jewish emancipation as one aspect of the struggle against the clerical monarchy. The article which finally brought the suppression of the paper was one by Marx in which he placed the blame for oppression of the peasants on the landowners and bureaucrats and makes no mention of the popular notion that the Jews were responsible. . . >When new taxes against Jews or new restrictions were proposed, the paper fought against them. It published articles by Jewish liberals and the Jewish paper, Allgemeine Zeitung des Judentums, in 1842 often quoted material from the Rheinische Zeitung on the Jewish question. It was probably because of the paper’s militant support of Jewish rights that the leading Jews of Cologne early in 1843 turned to its editor, Karl Marx, to draw up a petition for Jewish rights to be submitted to the Landtag. Marx's anti-Semitic remarks are thus a relatively minor defect in his personality, unlike Bakunin (who saw his struggle against Marx partly through an anti-Semitic lens) or Proudhon (who wanted Jews expelled from France), let alone vicious anti-Semites whom capitalists exalt, such as Henry Ford. And while we're on the subject, Engels explicitly denounced anti-Semitism as a threat to the workers' movement. One work, discussing the Marxist movement in pre-WWI Germany, gives the following assessment: "In theory and practice socialist labor was opposed to anti-Semitism. The Socialists never wavered in their stand against all attempts to deprive Jews of their civil rights. They treated with contempt the anti-Semitic agitators and the groups behind them. They never gave in to the temptation—considerable at times—to gain followers by making concessions to anti-Jewish prejudice.
From the rise of the socialist labor movement in the 1860's to the time of its defeat by National Socialism, the statements of the labor leaders, the resolutions carried in party conventions, the methods of coping with the situations created by political anti-Semitism, testify to its unswerving opposition to any kind of discrimination against Jews." (Massing, Paul W. Rehearsal for Destruction: A Study of Political Anti-Semitism in Imperial Germany. New York: Harper & Brothers. 1949. p. 151.) Post last edited at No.9259 >>9256 >unlike Bakunin (who saw his struggle against Marx partly through an anti-Semitic lens) wait what No.9260 >>9259 I don't know how else to interpret this: >Himself a Jew, Marx has around him, in London and France, but especially in Germany, a multitude of more or less clever, intriguing, mobile, speculating Jews, such as Jews are everywhere: commercial or banking agents, writers, politicians, correspondents for newspapers of all shades, with one foot in the bank, the other in the socialist movement, and with their behinds sitting on the German daily press — they have taken possession of all the newspapers — and you can imagine what kind of sickening literature they produce. Now, this entire Jewish world, which forms a single profiteering sect, a people of bloodsuckers, a single gluttonous parasite, closely and intimately united not only across national borders but across all differences of political opinion — this Jewish world today stands for the most part at the disposal of Marx and at the same time at the disposal of Rothschild. I am certain that Rothschild for his part greatly values the merits of Marx, and that Marx for his part feels instinctive attraction and great respect for Rothschild. >This may seem strange. What can there be in common between Communism and the large banks? Oh!
The Communism of Marx seeks enormous centralization in the state, and where such exists, there must inevitably be a central state bank, and where such a bank exists, the parasitic Jewish nation, which speculates on the work of the people, will always find a way to prevail.... No.9261 >>9260 No.9262 >>9261 Bakunin was /pol/ tier No.9263 >>9261 Yes. Bakunin's anarchist works aren't in themselves anti-Semitic, but he was clearly an anti-Semite himself, and anti-Semitism was a serious problem among his Russian followers. Marx, as I said, did write derogatory things about Jews, but there was nothing political in them, e.g. he complained of a lousy vacation in an 1871 letter to Engels: "I arrived [in Ramsgate] on Thursday amidst thunder, lightning and torrents of rain. On Friday it was fine, on Saturday it rained dogs and cats from morn till night, fine again yesterday, outlook uncertain today. Place is full of Jews and fleas." One can accuse Marx of prejudice, but unlike Bakunin he didn't believe in anti-Semitic conspiracy theories, and as noted above he advocated equal rights for Jews. Post last edited at No.9264 Why did Lenin support the British / Yugoslav Proposal for the formation of a Balkan Federation? (Yugo / Albo / Romani / Bulgar / Greece / Hungary) And if the situation between the USSR and the Poles / Czechoslovak Legion had been different, do you think he would have supported the Polish-Czechoslovak Federation idea? No.9267 >>9264 >Why did Lenin support the British / Yugoslav Proposal for the formation of a Balkan Federation? As far as I know he didn't. He supported a socialist Balkan federation, which had been a traditional demand of Balkan socialists well before 1917. >And if the situation between the USSR and the Poles / Czechoslovak Legion had been different, do you think he would have supported the Polish-Czechoslovak Federation idea? Lenin criticized the creation of the Polish state as an attempt by the Entente to intrigue against Germany and Soviet Russia.
I'd imagine any attempt to federate Poland and Czechoslovakia would be perceived as a way of strengthening both countries' statuses as "buffers" against the Soviets. No.9270 >>8560 How do you think education should be handled, and what do you think/know about Lev Vygotsky and his works on cognitive development? Furthermore, what do you think about the differences in thought between Jean Piaget and Vygotsky? No.9271 >>9270 I'm in no position to answer those questions. No.9272 >>9271 What about psychology in the USSR and the education systems that were implemented in the Eastern bloc? No.9273 >>9272 Not able to answer questions about those subjects either. I could quote from books I have on the educational systems, but they're otherwise not subjects I study. No.9274 >>9273 Well, it would be great if you could. I'm studying up on Vygotsky (or rather on him speaking about his differences from Piaget and the inconsistencies he finds in Piaget's works). But otherwise I am not familiar with the educational system in the USSR or the Eastern Bloc that much. My understanding is that the education systems we have in most of Eastern Europe right now are a bastardization of the idea that was propped up during the time of the USSR. No.9275 >>9274 Check out pages 887-928 of the following work for Soviet education in the 1920s-30s: https://archive.org/details/in.ernet.dli.2015.217894 On Soviet education in the 1960s-70s: * https://archive.org/details/RussiaReExamined (chapter 5) * https://archive.org/details/CitiesWithoutCrisis (chapter 6) No.9277 >>9176 >So by the time I joined RevLeft in May 2007 I considered myself a Marxist-Leninist. I was 14.
tfw Ismail is only a couple years older than you but has been reading Marxist lit and history ten times as long No.9278 Would you say it is worth the time to read the works of the various pre-Marx and Engels utopian socialists such as Fourier, Saint-Simon and Owen, or is it not worth it beyond gaining a deeper insight into the history of socialism? I just don't think they are very relevant to modern times for obvious reasons No.9279 >>9278 Yeah there's not much purpose in reading them except for historical reasons, e.g. most apologists for capitalism will not defend child labor or the 12-to-15 hour workdays that the utopians denounced, and obviously their way of achieving socialism is vastly different from that of any Marxist (e.g. Owen thought he could convince his fellow capitalists of the evils of the system and have them fund cooperative settlements.) It can be interesting reading early socialist texts, but I don't think you'll suffer from not reading them. Wouldn't hurt though. For example, Fourierism was quite popular for a time in the US. Among its adherents were Horace Greeley (founder of the New-York Tribune, the most important paper in the country, and it employed Marx as a foreign correspondent) and Alvan Bovay (who is credited with giving the Republican Party its name.) Post last edited at No.9280 Are there any documentaries (on the socialist countries or on socialism/Marxism in general), lectures or non-fiction movies you would specifically recommend? I would like to be a more productive socialist at times when I'm too lazy to read a book. No.9282 >>9280 Here's one video on Soviet women recalling the 1930s USSR: https://www.youtube.com/watch?v=933jsB5ChlA And there's a documentary on opposition to Yeltsin during the 90s by those who wanted a return to socialism: https://www.youtube.com/watch?v=OsjWSgoT8KI I don't really watch documentaries, so I can't think of anything else. No.9283 Ismail, I've heard you argue that the PRC is a socialist state.
Is it true? And do you by any chance speak Russian? It'll be definitely easier for me to continue the dialogue in my native language. I'll provide an extract from an article concerning the Chinese question; it was written in Russian, but I've tried to translate it. I apologize in advance for my English, I know it sucks. No.9284 >>9283 I can't speak or read Russian. No.9286 What is your opinion on Bukharin? No.9287 >>9282 That's cool, thanks No.9288 >>9286 His desire to build socialism "at a snail's pace" was not feasible in the world situation of the 1920s-30s. The USSR had to industrialize and collectivize quickly to meet its defense needs. Both of these could have been carried out better in hindsight, but Bukharin provided no answers. No.9289 What are some (as many as possible) good books concerning analysis of the NEP, the later Stalin transition years and the 30s' economy in the USSR? The statements, considerations and conditions in the years preceding the NEP, the policy itself, and what happened in practice. Same for Stalin's policies and the 30s. Also what are some good Soviet economists that I can look into? No.9290 What's your opinion on the government of Venezuela? Not just the PSUV but the PCV (Venezuelan communist party), who are M-L and in a political alliance with the PSUV? No.9291 >>9289 * https://archive.org/details/in.ernet.dli.2015.236714 (written by British Marxist economist Maurice Dobb just before the First Five-Year Plan, thus covering the NEP as it existed) * https://archive.org/details/DobbSovEconDev (written by Dobb in 1948, covers both the NEP and 1930s) * https://archive.org/details/USSRStateIndustryTransitionPeriod (a Soviet work discussing aspects of the NEP) * http://b-ok.xyz/book/2039288/9b6902 (Alec Nove's "An Economic History of the USSR," a standard text anyone interested in the subject should read) * http://b-ok.xyz/book/2086648/94ac6b ("Farm to Factory" by Robert C.
Allen, arguing that rapid industrialization and collectivization was economically justified) * http://b-ok.xyz/book/2545295/8974c6 (The Economic Transformation of the Soviet Union, 1913-1945) There's also the six-volume "Industrialisation of Soviet Russia" series that goes from 1929 to 1936 (there's one final volume expected to come out some years from now, presumably going up to 1939) Post last edited at No.9292 >>9289 >Also what are some good Soviet economists that I can look into? Strumilin, Kantorovich and Liberman are among the major ones. >>9290 I think Venezuela's government is what the Soviets back in the day would have called "socialist-oriented." Its conception of socialism isn't Marxist, and the PSUV is not a vanguard party, but it is carrying out progressive economic and political policies with the aim of strengthening the state sector and reducing the power of capitalists both foreign and domestic. So in that sense it should be supported. I think alliance with the PSUV is justified, provided the Communists have freedom to promote the independent interests of the working-class and do not hesitate to criticize vacillations and expose any betrayals on the part of right-wing sections of the PSUV. Post last edited at No.9293 Hello, Ismail. Questions on recent news. What's the perception of Erdogan? From what I heard, a populist authoritarian right-wing leader, but seeing recent headlines he seems to be separating from the US. I always assumed Turkey had good relations with the USA and was even a vassal state, but what's going on? What purpose could Erdogan have? No.9294 >>9293 Back in the 1970s Turkey grew closer to the USSR as well, even though it was under a reactionary military junta. The US didn't like Turkey invading northern Cyprus (since Greece was another NATO ally and the Greek Cypriot junta was pro-US), but Turkey did it anyway. Turkey was never a "vassal state."
Obviously its foreign policy was (and is) linked to the US via NATO, but it's still significant enough to pursue the interests of its own capitalist class, which at times conflict with US foreign policy aspirations. The same applies to Israel, Saudi Arabia, and other US allies. Erdogan presumably wants to improve relations with Russia to counter "too much" US influence, and at the same time gets to bolster his own "populist" image at home. Post last edited at No.9295 >>9284 But what about the main question? No.9296 Any good reads on slavery in the USA/CSA, preferably from a Marxist perspective? Any good reads on post-feudal slavery in general, in the colonies? No.9297 >>9295 Yes, I regard it as socialist. >>9296 This is a brief Marxist summary of the causes and course of the Civil War: https://archive.org/details/ApthekerAmericanCivilWar I have a whole thread of Marxist works on US history here: https://uselectionatlas.org/FORUM/index.php?topic=202462.0 No.9298 >>9291 Thanks. Are there any Soviet accounts from when it was all happening, and after? Preferably in the original Russian. >>9292 >Strumilin, Kantorovich and Liberman are among the major ones. I'll check Liberman but could only find one work by Kantorovich on lib.gen. Any directions where to look? No.9299 >>9298 >Are there any Soviet accounts from when it was all happening, and after? Preferably in the original Russian. You can find a lot of English-language Soviet pamphlets and some book-length works from the late 20s and 30s here: https://drive.google.com/drive/folders/0B1ZP6ZurgOg-RHFPVzIzbWVjMkE >Any directions where to look? Nay. No.9300 What did the NEP accomplish? Did it develop the productive forces? Because I thought industrialisation first skyrocketed with Stalin and sweeping collectivisation, and why are capitalist reforms necessary for developing the productive forces No.9301 >>9300 The purpose of NEP was to rehabilitate an economy wrecked by years of war, e.g.
the level of pre-WWI industrial production was restored by 1927. >why are capitalist reforms necessary for developing the productive forces There are different stages of "developing the productive forces." The USSR's main task in the 1930s was relatively simple: build up heavy industry. China had the same task in the 1950s. Both countries carried it out successfully. But once that objective was complete, they had to deal with more complex economies and faced declining growth rates. The path China took in the 1980s ensured that economic growth would greatly increase, as would living standards. Had it not been for this, it's far likelier that China would have fallen to counter-revolution in 1989 just as Eastern Europe had done. Instead the CPC showed that it could continue to rapidly develop the productive forces of the country. It's also worth noting that the initial expectation of Lenin and other Bolsheviks was that foreign investment would accompany the NEP. But this ended up not happening due to the hostility of the capitalist states. Also, as the Great Soviet Encyclopedia notes, "The Russian bourgeoisie would not accept state capitalism and was therefore forcibly expropriated. By 1923–24 the share of state-capitalist enterprises in the gross output of the national economy was only 0.1 percent, and the number of persons they employed at the end of 1925 did not exceed 1 percent of the country’s workers." No.9302 Hey Ismail, please help me to figure out emigration from the Eastern Bloc to the West, especially from East Germany. According to Wikipedia pages, East Germany had a higher fertility rate and a rising population from 1949 to 1968 and then again from 1979 to 1988. Yet the same page says that it had very large emigration, even if growth was positive.
Contrast that to West Germany, which had negative population growth from 1972, and according to Wikipedia Germany still has a declining population https://en.wikipedia.org/wiki/East_Germany#Population https://en.wikipedia.org/wiki/West_Germany#Demographics https://en.wikipedia.org/wiki/Demographics_of_Germany Was there emigration, but still growth? No.9305 File: 2d494649c28e9bc⋯.png (94.36 KB, 800x441, 800:441, German projection 1.png) File: 5701ba23fa67bef⋯.png (305.28 KB, 2400x1800, 4:3, Projected population.png) >>9302 Poorly formatted question. After taking a look at the statistics again, they show that there were more births than deaths, rather than population growth. Still I am wondering: outside of emigration, did East Germany have a longer period of natural increase and a higher fertility rate? Why do West Germany and contemporary Germany have such demographic problems? Lastly, was emigration the only reason East Germany was the only country of the Eastern Bloc with a declining population? No.9306 Do you think the US should be considered fascist? I think it's indisputable that the US has caused more suffering worldwide than the open fascists ever did, but some of the typical elements of fascism in the most common sense of the word (such as one-party rule and class collaboration) seem to be lacking. No.9308 >>9302 >>9305 According to a 1969 book I have, "A low birth rate, too many old people, too few young ones, too many employed women, too few men in the middle age group, plus a decidedly uneconomic allocation of young men to police and military duty" contributed alongside housing shortages and emigration to population decline. (Jean Edward Smith, Germany Beyond the Wall, pp. 89-92.) Most couples only wanted one child, and since there was a (male) labor shortage caused by World War II deaths and emigration, the government encouraged as many women as possible to take up jobs in industry, which further lowered the birth rate.
>>9306
No, I don't know any Marxist who would seriously consider the US a fascist country. Simply causing human suffering doesn't factor into it, otherwise the British Empire would have been fascist.

No.9311

>>9308
The quote makes good points. However, as the statistics show, East Germany managed to have higher birth rates than the West, and even population growth from 1985 to 1989, which might seem small, but Germany still has a declining population which is projected to fall further.
1) Why do you think West Germany and modern Germany have population decline?
2) Why did many other socialist countries, despite (as discussed) having many abortions, have positive and strong population growth? Even Poland during the hardships of martial law had a very high birth rate.

No.9315

>>9311
1. I'm not knowledgeable enough to answer that.
2. Don't know.

No.9316

>>9315
What about emigration from other Eastern Bloc countries? Czechoslovakia and Hungary had borders with Austria. Were there many emigrants? Additionally, were there Poles moving through East Germany to the West?

No.9319

>>9316
There were foreign workers from Poland, Czechoslovakia, etc. who lived in the GDR to make up for the GDR's own labor shortage. The border between Hungary and Austria was closed until mid-1989, so it was not possible for an East German citizen to leave to the West via that route.

No.9320

>>9319
Were there Poles migrating to West Germany? Additionally, was there any significant Czechoslovak or Hungarian migration to Austria before the wall? Lastly, did the West spread propaganda so that Czechoslovaks or Hungarians would want to move through Austria to the West?

No.9321

>>9320
I haven't looked into those subjects. As far as I know, the borders between Austria and Czechoslovakia were closed just as the border between Hungary and Austria was.

No.9322

What do you think of the Communist Party of the Philippines? Are they in the right?
If so, why does China have such a good relationship with the government of the Philippines?

No.9323

>>9322
I have no opinion. Socialist countries have always sought to maintain good relations with states of differing economic systems.

No.9324

File: e250ddad3aff15f⋯.jpeg (20.82 KB, 200x300, 2:3, 607D064A-6002-4B5D-866B-4….jpeg)

What is specifically "wrong" with Utopian views of communist society? I understand that these are ultimately only the products of the minds of men and not an organic development from our current capitalist society, but I feel very drawn to things like the Icarian communes originally set up by Étienne Cabet, as one example. Of course these weren't perfect communities, especially in the example I gave due to its frequent schisms, but should we still not try to create these types of communities within our current system? Of course no one can be deluded enough to believe that the bourgeoisie will be won over by our "model societies," but in general I see nothing especially wrong with communities like Icaria or the Kaweah Co-Operative Commonwealth. One criticism of Icaria (whether the original, New Icaria, Jeune Icaria or Icaria Speranza) could be that they didn't engage themselves in larger forms of socialistic political activity, and many of them had arbitrary rules regarding alcohol, marriage, etc.

No.9325

The Communist Club of New York (aka the first acknowledged Marxist organization in the US) had cordial correspondence with the Icarians. Even Jonestown received delegations associated with the USSR, Cuba, DPRK and other socialist countries. The main problems with the utopian communities were:
A. In practice, they drew many people away from the struggle against capitalism. Unlike the cooperative movement which could grow to benefit the mass of workers (but which by itself isn't sufficient to overcome capitalism either), utopian settlements were isolated by their very nature;
B. Due to their isolation, these communities aren't self-sufficient.
They invariably end up creating businesses and breaking up altogether due to being surrounded by capitalism and being obliged to engage in the capitalist market on the latter's terms in order to survive and raise living standards.

If a bunch of people want to live communally on a settlement, I won't denounce them for the effort, but (as communists have done, including Cuban embassy staff talking to Jim Jones' associates) I will critique utopianism as a method of transforming society, and will note its inherent dangers.

No.9326

>>9325
I definitely agree with the main problems you identified with Utopianism. Even though I would attempt to live in one of these communities if I were given the chance, they should never be put forth as a serious alternative to putting an end to capitalism.
>I won't denounce them for the effort, but (as communists have done, including Cuban embassy staff talking to Jim Jones' associates)
>Even Jonestown received delegations associated with the USSR, Cuba, DPRK and other socialist countries.
Do you know where I could read up on this? What were the criticisms of the Cubans?

No.9327

>>9326
>What were the criticisms of the Cubans?
To quote one work:
>Alfriedo Ferreira, administrator of the Cuban Embassy, complained that [Peoples Temple] leaders "tend to praise Jim Jones more than we [i.e. the Temple leaders] center our spotlight on the collective," according to notes written by a Temple member named Tony. Another Cuban Embassy official, Daniel Salas, admitted to his visitors that he had reservations about the Temple's approach to socialism. Deborah Touchette [one of Jones' aides] summarized his comments in her notes:
>>One should not engage in such a closed range approach to a political environment... This is not exactly the way the society should be changed. We [i.e. Jonestown] should not just include ourselves. We should try to get involved with the real contradictions that are in society.
In other words, they criticized the adoration of Jones (since, after all, Peoples Temple was a cult and Jonestown was its grand experiment) and gave the basic criticism of utopianism.

There are a ton of documents that were recovered from Jonestown and which the FBI gradually declassified, a bunch of them consisting of Jonestown officials discussing conversations with the staffs of embassies or meetings with medical, journalistic, etc. delegations from socialist countries. To quote another work:
>Talk of scholarships in Russia for Temple members began to be discussed. Perhaps the Temple band and the Temple basketball team could go to Russia on a "cultural exchange," the Temple delegation forming a "scouting mission," as Jones phrased it, scouting the next "haven within a Communist paradise." Jim Jones could go with the delegation and pursue negotiations with Soviet officials in Moscow. In early May [1978], Jones considered attending an international conference in Warsaw as part of a contingent of American Communists. . . .
>A March memorandum from Sharon Amos [a Jones aide] to Jones on a Timofeyev [member of Soviet embassy] meeting read:
>- [Timofeyev] said it is truly amazing JJ can take religious people and make them atheists because that is something USSR is having trouble with and they'd like to benefit from how JJ does it, because he must be brilliant and if they could have people use his techniques it would be helpful
>- I mentioned that JJ also wrote Errors in the Bible [aka "The Letter Killeth"] and we used to give them out/he said USSR had analysed every word in the Bible and have lots of books showing contradictions, but it is more the one-to-one communication that is needed. . . . In the USSR they have a one-one situation of one religious person to every one atheist, and they still can't get it done.
>In another such discussion with Timofeyev, Sharon Amos said, "I came [to the Temple] with a Bible in my hand and a white brotherhood card.
I was very religious, but it is from that type of background that Jim has educated us to what we are today, and I'm a positive atheist."
>"Don't say you're a positive atheist," Timofeyev corrected her. "Say you're an atheist. Nothing is positive. We have gotten rid of religion in our country, but there are still at least 10 percent religious people in the country."
>"That's good," Amos said agreeably.
>"That's not good," Timofeyev retorted.
>"Well, no," Amos backtracked. "I mean by comparison to the United States, where 90 percent of the people are religious, it's good."
>The last attraction was the Temple fortune. Timofeyev must have struggled to maintain a good Communist's composure when the communards talked of their millions. By spring, discussion of a transfer of the Temple assets to Soviet banks had gotten down to the mechanics of it.

Timofeyev did get annoyed at times, e.g. he said it would take pages upon pages to explain to his superiors why Jones impregnated a woman supposedly at the request of her husband, but that both husband and wife were later revealed to be traitors who used this request to undermine the Peoples Temple on behalf of the US government. Also, on the question of having sexual relations with women out of wedlock (Timofeyev claimed Soviet officials wouldn't understand this), Amos said that Jones had mistresses just as Lenin had a mistress. Timofeyev replied by saying he had grown up reading many Lenin biographies in the USSR and had never once heard of that.

No.9333

File: aeaa0156480a1ac⋯.png (680.13 KB, 1280x800, 8:5, taiheithewageslave.png)

q1: who are you, ismail? you seem pretty knowledgeable. what's your background? does your m-l knowledge extend beyond q&a's on the internet?
q2: do you know of any books/resources about the intricacies of the people's republic of china? certainly there's a lot of USSR stuff on this board, however i tend to be more interested in the living breathing chinese communist party
thanks!
No.9335

>>9333
1. I explained my ideological background here: >>9176
Besides that, I'm not a historian or history major (as some people seem to think), I just read books and articles. I don't belong to any party.
2. I recently scanned an English-language history of the CPC published in China in 1994: https://archive.org/details/ConciseHistoryCPC

No.9338

To Ismail: What do you know about the Derg / socialist Ethiopia? Do you know if it actually undertook any real progressive reforms?

No.9339

>>9338
I have a few books on Derg-era Ethiopia. Its enduring credit was to do away with feudalism. Here's one book I've scanned: https://archive.org/details/EthiopiasRevolution
If you have any more specific questions about Ethiopia back then, feel free to ask.

No.9341

>>8560
What is your opinion on the Shining Path?

No.9342

>>9341
Ultra-leftist Maoist group that seemed to undermine its own popularity among the indigenous population (and certainly among trade unionists and the rest of the urban strata) via brutality. Its attacks on communists and bombing of the Soviet and Chinese embassies didn't help either.

No.9343

When/if the Koreas unify, do you think the WPK will remain a significant political force?

No.9344

>>9343
The most realistic way for both Koreas to reunify is the DPRK's own "Democratic Confederal Republic of Koryo" plan, in which North and South each retain their own economic and political systems: http://www.korea-dpr.com/reunification.html
In that case, yes, the WPK would continue to remain a significant political force.

No.9346

This is largely a speculative question, but: Do you think there was any way that the rest of the socialist states in Europe could have remained socialist after the collapse of the USSR? Being totally dependent on the existence of a single state seems like a pretty huge weakness (though perhaps an inescapable one).

No.9347

>>9346
The other European socialist countries by 1989 were all suffering from economic problems and growing protests.
The USSR played a vital role in maintaining the socialist system in these countries via the threat to use military intervention against counter-revolution, as was done in Hungary in 1956, Czechoslovakia in 1968, and would have been done in Poland had Jaruzelski not declared martial law. When Gorby proclaimed the "Sinatra Doctrine," he gave the green light for counter-revolutionaries in those states to do as they wished. The Soviets had argued from the start that without the existence of the USSR, the People's Democracies would have been imperiled. Similarly, the DPRK would have been vastly weakened without a PRC to help it out after 1991.

No.9349

>>9347
How is it that the other socialist states, specifically Vietnam and Cuba, survived? I was under the impression that Vietnam and China had pretty terrible relations at that time (weren't they technically at war?), and Cuba was pretty far away from China. I know that the collapse of the USSR caused significant problems in these countries (even the DPRK, which as far as I know had the best relations with China of any socialist state at the time, entered a period of famine, correct?), but why didn't it totally destroy them as it did the states in Eastern Europe?

No.9351

>>9349
The Communist Party of Vietnam led the country's independence struggle, and its market reforms in the 80s (as with those in China) satisfied demands for higher living standards. Fidel Castro is legitimately popular in Cuba in a way Honecker and other Eastern European leaders weren't. The sort of "domino effect" that gripped the Warsaw Pact countries in 1989-1990 also didn't apply to Cuba, Vietnam and the DPRK.

Yes, China invaded Vietnam in 1979 (for ousting Pol Pot) and relations remained terrible until the Vietnamese withdrew from Cambodia in 1989. The DPRK indeed suffered from famine in the 90s caused by the demise of the USSR screwing with needed agricultural imports.

No.9358

Do you know anything about the National Lawyers Guild?
Were they actually a communist front group, or were they just progressive liberals who were attacked as part of the red scare?

No.9359

>>9358
It was/is a progressive group the CPUSA supported. However, "After losing a fight to gain control of the National Lawyers Guild in May 1940, anti-Communist liberals—including Attorney-General Robert Jackson. . . resigned en masse, leaving the guild an ineffectual shell." (Klehr, The Heyday of American Communism: The Depression decade, p. 402.)

So when the CPUSA could be said to have largely controlled the NLG (i.e. during the 40s-50s), the NLG's influence was at its lowest due to distrust by most liberals and harassment from the government. In the 1960s the NLG staged a comeback with the infusion of lawyers from the New Left, and by that point it became its own thing again, largely separate from the CPUSA.

No.9360

I have a question about the labor aristocracy. If the workers in the imperial core are in fact benefiting from imperialism so much, and if social democracy is, within MLism, generally regarded as 'social fascism' that is dependent upon imperialism, is it not in fact actually in the rational self-interest of the workers there to push for social-democratic reforms? Should MLs not then exclusively focus on national liberation struggles in the third world? How are first-world MLs expecting to turn the masses in the imperial core against their immediate self-interest, and how is this not deeply idealist? Let me know if I'm a brainlet here; I admit I do not know much about MLism, as only a few months ago I called myself an Anarchist.

No.9362

>>9360
>if social democracy is, within MLism, generally regarded as 'social fascism' that is dependent upon imperialism
It isn't.
The "social-fascist" thesis reigned during the Comintern's "Third Period" and was openly repudiated by the CPSU, SED, and other "official" communist parties after 1956 (and obviously abandoned in practice years before that, after the "Third Period" ended.) Some modern Maoist and "anti-revisionist" parties do subscribe to the "social-fascism" thesis, but I'd imagine most leftists regard it as wrongheaded and sectarian to consider social-democracy as nothing more than just a stepping-stone to fascism.

Communists traditionally viewed social-democratic parties as part of the workers' movement. This has its own issues (since nowadays many soc-dem parties are practically indistinguishable from their bourgeois counterparts), but the stress was laid on differentiating between left-wing social-democrats (who genuinely opposed capitalism) and right-wing social-democrats (who supported imperialism and made use of anti-communism and reformism to help capitalists protect the capitalist system.) That's how soc-dem parties in Eastern Europe after WWII were able to be incorporated into or merged with communist parties, since the communists enlisted the support of their left-wings against rightists.

No.9363

File: f446790bb89cc9a⋯.jpeg (111.85 KB, 1000x1015, 200:203, victims of communism.jpeg)

At the PLA's 8th Party Congress in 1981, Enver Hoxha claimed that the revisionist parties in the capitalist countries were all "component parts of the structures of the bourgeois state":
>When our Party began the struggle against Khrushchovite revisionism, the communist parties which made common cause with Khrushchov looked, more or less, like a united bloc, with a single line, which was that of the 20th Congress of the Communist Party of the Soviet Union. Today, however, 20 years later, they are split and divided into many trends and factions, are fighting and clashing with one another and have been overwhelmed by bourgeois nationalism and social-democratic opportunism.
These former communist parties have been transformed either into genuine parties of the new bourgeoisie ruling in the countries where the revisionists are in power, or into component parts of the structures of the bourgeois state, as in the old capitalist countries.

Pretty interesting claim. Aaron J. Leonard's work "A Threat of the First Magnitude: FBI Counterintelligence & Infiltration From the Communist Party to the Revolutionary Union - 1962-1974" seems to have actually VINDICATED Hoxha here; all these parties under close examination turn out to have LITERALLY been component parts of the state. Hoxha, without FOIA documents, understood this. Do you know of anything else on this?

>>9325
Are you familiar with Michael Meier's book "WAS JONESTOWN A CIA MEDICAL EXPERIMENT?" It's online here:
Really makes you think

No.9364

>>9363
Hoxha was just repeating standard Maoist verbiage that the CPSU had supposedly become a bourgeois party and that pro-Soviet parties abroad were fusing with social-democracy. The former claim is dumb, the latter is demagogic considering that Maoist and pro-Albanian parties generally ended up going down the same route. Blaming Khrushchev and his successors for the degeneration of numerous communist parties in Asia, Latin America and Europe makes little sense. The reasons for their degeneration are more complex and often go back to the Stalin era (e.g. Latin American communists during WWII sought to unite with Somoza and other right-wing, US-backed autocrats.)

I'm aware of Meier's book. I don't agree with it. Jones was a cult leader who called himself a Marxist and distorted Huey Newton's concept of "revolutionary suicide" to justify killing (via "mass suicide") his followers in response to increasing pressure against Jonestown by the US government and the cult's ex-members.
The CIA monitored Jonestown (not surprising since it was an avowedly socialist settlement of American citizens in a self-proclaimed socialist country), but there's no evidence the CIA had anything to do with the existence of Jonestown or Jones' activities. Jones himself alleged CIA conspiracies against his group.

Ironically there's a Soviet book that claims everyone at Jonestown was murdered by the CIA because it was afraid of the settlement's positive example to Americans back home. This is no more true than the claim that Jonestown was set up by the CIA as a medical experiment.

No.9365

>>9364
>Jones was a cult leader who called himself a Marxist and distorted Huey Newton's concept of "revolutionary suicide" to justify killing (via "mass suicide") his followers in response to increasing pressure against Jonestown by the US government and the cult's ex-members.
It's not just Jones though; a lot of these groups have strange backgrounds, e.g. the Symbionese Liberation Army: http://www.whale.to/b/caul.html#William_Herrmann
>The Violence Control Center was actually the brain child of William Herrmann as part of a pacification plan for California. A counter insurgency expert for Systems Development Corporation and an advisor to Governor Reagan, Herrmann worked with the Stand Research Institute, the RAND Corporation, and the Hoover Center on Violence. Herrmann was also a CIA agent who is now serving an eight year prison sentence for his role in a CIA counterfeiting operation. He was also directly linked with the Iran-Contra affair according to government records and Herrmann's own testimony.
>In 1970, Herrmann worked with Colston Westbrook as his CIA control officer when Westbrook formed and implemented the Black Cultural Association at the Vacaville Medical Facility, a facility which in July experienced the death of three inmates who were forcibly subjected to behavior modification drugs.
The Black Cultural Association was ostensibly an education program designed to instill black pride identity in prisons, the Association was really a cover for an experimental behavior modification pilot project designed to test the feasibility of programming unstable prisoners to become more manageable.
>Westbrook worked for the CIA in Vietnam as a psychological warfare expert, and as an advisor to the Korean equivalent of the CIA and for the Lon Nol regime in Cambodia. Between 1966 and 1969, he was an advisor to the Vietnamese Police Special Branch under the cover of working as an employee of Pacific Architects and Engineers.
>His "firm" contracted the building of the interrogation/torture centers in every province of South Vietnam as part of the CIA's Phoenix Program. The program was centered around behavior modification experiments to learn how to extract information from prisoners of war, a direct violation of the Geneva Accords.
>Westbrook's most prominent client at Vacaville was Donald DeFreeze, who between 1967 and 1969, had worked for the Los Angeles Police Department's Public Disorder Intelligence unit and later became the leader of the Symbionese Liberation Army. Many authorities now believe that the Black Cultural Association at Vacaville was the seedling of the SLA. Westbrook even designed the SLA logo, the cobra with seven heads, and gave De Freeze his African name of Cinque. The SLA was responsible for the assassination of Marcus Foster, superintendent of School in Oakland and the kidnapping of Patty Hearst.
>As a counterinsurgency consultant for Systems Development Corporation, a security firm, Herrmann told the Los Angeles Times that a good computer intelligence system "would separate out the activist bent on destroying the system" and then develop a master plan "to win the hearts and minds of the people".
The San Francisco-based Bay Guardian recently identified Herrmann as an international arms dealer working with Iran in 1980, and possibly involved in the October Surprise. Herrmann is in an English prison for counterfeiting. He allegedly met with Iranian officials to ascertain whether the Iranians would trade arms for hostages held in Lebanon.

See the book "Revolution's End: The Patty Hearst Kidnapping, Mind Control, and the Secret History of Donald DeFreeze and the SLA" for more info.

No.9366

Do I have a correct understanding of what ultra-leftism is? From what I can tell, it is usually a group that is too far ahead of the masses or one that does not take the current conditions into account. For example, I've seen Pol Pot called ultra-left, or the faction of Grenada's NJM that killed Maurice Bishop be called ultra-left too for wanting to pursue a more dogmatic Leninist line in unfavorable Grenadian conditions. I've just never seen a concrete definition of this term and was wondering if you could give some more insight.

No.9367

>>9365
Sure, but Jones' background was pretty straightforward. He was born to a poor family in Indiana, developed a fascination with religion since a fundamentalist neighbor took him to church every Sunday (his own mother was too busy working and his father was too busy going to a bar getting drunk), attended CPUSA-linked rallies in the early 50s, became a Methodist minister, and started his own church which promoted racial integration and stuff like free food and clothes to the needy while privately telling his wife that he would use religion to bring people to Marxism. He then gradually turned his church into a cult around himself. His fears of nuclear war after the Cuban Missile Crisis made him relocate to California, and then he set up a communal settlement in Guyana where he would eventually relocate all his followers using the argument they'd be protected from the imminent threat of fascism and genocide in the United States.
It's a rather bizarre story, but then Jones was a rather bizarre person, as even his childhood friends have noted. He definitely had charisma though. There's a pretty good speech he gave on capitalism and socialism that shows his simplistic but earnest understanding of both concepts, and helps explain why he'd have a working-class following: http://www-rohan.sdsu.edu/nas/streaming/dept/scuastaf/collections/peoplestemple/MP3/Q929.MP3

No.9368

>>9366
Yeah, that's correct. Ultra-leftists ignore material conditions in favor of imposing their own "revolutionary" policies or analyses. This isolates them from the masses and/or imperils their political activities (e.g. if a communist party decides one day to begin an armed insurrection in totally unacceptable circumstances, causing the party to be promptly crushed.)

No.9372

What do you think the actual platform of a communist party in a Western country should be? Free healthcare and a high minimum wage plus no more wars? Or should they go all-out and advocate the "seizing of the means of production and a planned economy" in their platform (which might be detached from the realities on the ground and discourage insecure voters from voting for them because they appear as "unrealistic")?

No.9373

>>9372
Not Ismail, so I hope that's okay, but here is my view on it: I think that any communist party worth its salt should advocate for the social ownership of the means of production and all that entails (regulation of production upon a definite plan, ending of anarchy in production, etc). To do any less would create a party which merely contents itself with small, day-to-day reforms while not simultaneously striving to transcend our current mode of production and emancipate the working class. If a party does not include this within its platform, can it truly call itself a communist party?
To advocate only for free healthcare, a higher minimum wage and an end to wars doesn't tackle the larger problem. Of course in the United States (for example) to advocate for these things in our current environment might seem "unrealistic," mainly due to the low level of class consciousness, but this does not mean that we wholly discard certain tenets of our programs merely because the working class does not currently possess a high level of class consciousness. The real problem is to learn how this can be raised.

No.9374

File: ab34996e945466a⋯.jpg (327.62 KB, 1954x1500, 977:750, CPUSA 1972.jpg)

>>9373
Yes, this. As Marx pointed out in the Manifesto, "The Communists fight for the attainment of the immediate aims, for the enforcement of the momentary interests of the working class; but in the movement of the present, they also represent and take care of the future of that movement. . . . they never cease, for a single instant, to instill into the working class the clearest possible recognition of the hostile antagonism between bourgeoisie and proletariat."

The election platform of the CPUSA in the 1936 elections is online, and gives an example of how that party dealt with both "practical" day-to-day matters as well as the ultimate aim of socialism: https://archive.org/details/CommunistElectionPlatform1936

No.9401

Opinion on Saddam Hussein? Do you think Kuwait was actually slant-drilling like he said, or was that just an excuse for invasion?

No.9402

>>9401
He was right-wing by the standards of Arab socialist leaders in the region. He actively persecuted Communists and practically allied with US imperialism to try overthrowing the Iranian Revolution. Even his invasion of Kuwait was done based on his belief that he had the green light from the US.
On the other hand, Ba'athist Iraq was the main benefactor of the PLO, and the US still distrusted Saddam for his cordial relations with the USSR, his desire to acquire nuclear weapons to offset Israel's nuclear arsenal, and his still fairly powerful (by regional standards) army.

There were indeed legitimate disputes between Iraq and Kuwait over their borders and control of the Rumaila oil field (Kuwait was drawing oil from parts claimed by Iraq), and also Iraq had just gotten out of the war with Iran and had accumulated a large debt to Kuwait to help pay for said war, so Kuwait and the UAE exceeding OPEC production quotas to enrich themselves while driving prices down for other members hit Iraq particularly hard.

Before Iraq's invasion, its diplomatic "pressure on Kuwait and the UAE got considerable approval from other oil-producing countries. Even Iran backed Iraq on this one. . . . A senior official of PEMEX, the Mexican state oil production agency, was quoted as saying that, '[Saddam] has become something of a folk hero among us' for his actions." (Arthur Henson, The War Against Iraq: A Handbook for Anti-Imperialists, 1992, p. 47.)

Then Saddam decided to invade Kuwait, whereupon he was denounced by pretty much the entire world and gave the US a pretext to destroy Iraq's infrastructure via bombings and sanctions.

No.9460

When Marx and Engels variously talk about the demise of the political state and how public functions will lose their political character, am I correct in thinking that they mean they will lose class character (regardless of class)?

No.9467

>>9460
That seems to be the case.

No.9494

What are the best sources to address the claim that Lenin was a "murderous dictator"? Also, I have a second question. Have you read the left communist work "What was the USSR" by "Aufheben"? http://libcom.org/files/Aufheben-%20What%20was%20the%20USSR.pdf
If yes, any criticisms?
No.9496

>>9494
>What are the best sources to address the claim that Lenin was a "murderous dictator"?
Depends on what specific events or phenomena they claim made Lenin murderous or dictatorial. In general, the Bolsheviks suppressed those who carried out armed revolts against them. They were willing to form coalitions with other parties that accepted soviet power, hence the short-lived coalition with the Left SRs until that party staged an armed revolt in July 1918.

"The total figures of executions, published in 1921, were as follows. In the first half of 1918 [before the Red Terror] they were 22, in the second half some 6,300, and for the three years 1918-20 (for all Russia) 12,733. When it is remembered that in Rostov alone about 25,000 workers were shot by the Whites upon occupying the city, not to speak of many other towns, the Red terror will fall into rather more just perspective." (Rothstein, Andrew. A History of the U.S.S.R. Harmondsworth: Penguin Books. 1951. p. 106.)

As for the charge of being a dictator, it's silly. Lenin constantly had to argue in favor of his policies with his colleagues in the Politburo and Central Committee, the most obvious example being the debate over whether or not to sign the Brest-Litovsk treaty.

>Have you read the left communist work "What was the USSR" by "Aufheben"?
A long time ago. If I recall right, they basically uphold Ticktin's claim that the USSR wasn't capitalist or socialist, but a "non-mode of production." You can bring up specific critiques if you'd like and I'll respond to them tomorrow or the day after (since I'm a bit busy), but the "non-mode of production" stuff seems silly. The USSR alone, a large landmass that existed for some 70 years and whose economic system was adopted by other countries via revolutions in Europe, Asia and Cuba, clearly had to have a distinct mode of production to successfully build up its economy and an upcoming class to put that mode of production into place.
No.9498
Ismail, how do you explain Chinese military bases inside of Africa? Isn't this imperialism?

No.9499
>>9498
No, it isn't. The USSR had Cam Ranh Bay and naval facilities at Tartus (in Syria); that wasn't imperialism either. Imperialism refers to an economic relationship, from which military considerations follow. The mere existence of a military base in another country is not evidence of an imperialist relationship.

No.9500
>>9499
But China does have economic dominance over Africa and tries to export finance capital to Africa. If these bases weren't built for imperialism, then for what were they built?

No.9501
>>9500
What is meant by "economic dominance"? For example, pretty much all of Mongolia's trade was with the USSR by 1930. No doubt this meant the Soviets "economically dominated" Mongolia. But this is not an imperialist relationship. Russia's capitalists engage in export of capital too, but that doesn't make Russia an imperialist country. The mere export of capital isn't sufficient: http://links.org.au/node/4629

Considering the superabundance of military bases created by the US, UK and France, Chinese bases serve two purposes: first to help secure China's defense needs at a time when the imperialists are ratcheting up military and economic pressure, second to increase the independence of African countries, whose governments are making use of China to assert themselves.

No.9507
How come liberalization and the introduction of more market features caused stagnation in the USSR and the reintroduction of capitalism led to cratering living standards, whereas in China growth rates remained stable and even grew and improvements in living conditions were widespread under the revisionist leaderships following Mao? Is Socialism with Chinese Characteristics a thing?

No.9508
>>9507
Gorbachev and Deng both adopted greater market measures, but in very different ways and in different political contexts.
Gorbachev argued that what had existed in the USSR since the 1930s was an "administrative-command economy" rather than an actual socialist system, and that the potentialities of the NEP were demolished by Stalin (and would likewise have been demolished by Trotsky) in order to establish the aforementioned supposedly distorted economy. Proceeding from that theoretical basis, Gorby in practice had no clear idea what to do. His decisions were abrupt and inept, and resulted in absurdities like factories waiting for central planners to allocate resources to them while those same planners couldn't locate resources (since suppliers were too busy trading the resources with other factories) or no longer had the requisite authority to allocate them in the first place. Deng's reforms were far more competently carried out.

Another major difference is that Gorby's economic policies were bound up with his political policies of Glasnost and Demokratizatsiya, which of course ended up with the demise of the USSR. By contrast, the CPC became less dogmatic and allowed greater Western culture in China, but otherwise had no intention of greatly changing the political system (and showed at Tiananmen in 1989 that it was perfectly willing to use force to defend the socialist system, unlike Gorby.) Hence why Deng's youngest son said to a journalist in 1990: "My father thinks Gorbachev is an idiot." And of course unlike Deng, Gorby ended up abandoning Marxism-Leninism altogether and today proudly identifies as a social-democrat.

The best read on Gorby's policies from a ML point of view is "Socialism Betrayed" by Keeran and Kenny: http://b-ok.xyz/book/1246151/ea7f45

>Is Socialism with Chinese Characteristics a thing?
I'd say it is.

No.9509
>>9508
So you don't believe that the Chinese bourgeoisie now has control over the Party? I mean, I guess we can only speculate here, but it seems rather dubious given that most of the leaders are themselves now extremely rich capitalists.
I'd say at the very least, for the socialist forces to take over again and eventually move away from private property, it's gonna require a very hard inner-party struggle.

No.9510
>>9509
I don't think the bourgeoisie controls the party, although of course there is always the danger of that happening. As far as I know major CPC officials can get rich from corruption, or from having rich relatives, but are themselves not formally involved in businesses.

No.9511
>>9510
I mean come on... I know that wealth is not necessarily connected to class, but where do you think the interests of these people lie?

No.9512
>>9511
The National People's Congress has nearly 3,000 delegates. Of these, "about 100" (according to that link) are billionaires, and the "richest 209 parliament delegates are each worth more than" $300 million. Considering the NPC has not only Communist Party delegates, but also delegates from the other parties (including the China Democratic National Construction Association, which is explicitly for businessmen and has existed since before the PRC was even founded), I don't see the issue. China's legislative and consultative bodies also include clergymen; nobody uses that to suggest China is a theocracy.

That article then smugly goes on to say, "By comparison, the U.S. doesn't have a single billionaire in Congress. The wealthiest member, California Republican Darrell Issa, is worth around $440 million, according to the Center for Responsive Politics." That's because in the United States, the billionaires and millionaires choose the representatives. In China, billionaires and millionaires owe their existence to the policies of the CPC, and those in the legislature explicitly recognize the leading role of the CPC in society and in state affairs.
Before elections the CPC comes to an agreement with other parties in the United Front as to how many representatives of business should be in the NPC and other legislative bodies, same thing with clergymen, workers, peasants, members of the intelligentsia, etc. If it was decided during the next elections to reduce the number of billionaires in the NPC to zero, that could be done without much hassle.

No.9515
>>9512
>in the United States, the billionaires and millionaires choose the representatives. In China, billionaires and millionaires owe their existence to the policies of the CPC
But they do have a powerful interest in reversing this relationship. And they are not just heavily overrepresented in the NPC now but in the whole government. Xi Jinping is a billionaire through his relatives, and Wen Jiabao's relatives are estimated to be billionaires. They are bourgeois and have bourgeois class interests, unless all of these people are outcasts in their family, which seems unlikely. The fact that lately a lot of this type end up in prison for corruption (i.e. being political rivals of someone more powerful) shows that they are not untouchable, but it doesn't point to China being a DotP either.

I asked the question in the beginning of the argument because according to Marxist theory, Capitalism is a system that in fact fetters and eventually becomes incompatible with the MoP; a planned economy in the hand of the Proletariat is poised to replace it. This view is pretty much compatible with all historic evidence we've seen so far, except China's rise if one accepts that it no longer is a DotP. It would fit neatly into the theoretical framework if one accepts that it is a DotP. I think this is the reason why many MLs want to; I'm just very skeptical. I guess we can't find out here.

No.9516
>>9515
Of those 100 or so billionaires, the wealth is roughly equivalent to that of 9 Americans.
Which, looking at it in such terms of "billionaire or not billionaire", is a bit disingenuous anyway. Billionaires could have their wealth seized at any moment by the state. Many of them go to prison or have even been executed. Chinese billionaires, rich pigs, are heavily scrutinized by the public, not worshipped or put above the legal system, as they are in the West. Many of these bourgeois pigs hide their wealth in the West too because they're so afraid of the party. It would be ignorant to pretend that after a communist revolution, the bourgeoisie just disappears or a country suddenly becomes free of imperialist pressures.

No.9517
>>9516
>of those 100 or so billionaires, the wealth is roughly equivalent to that of 9 americans
wat

No.9518
>>9517
My numbers were wrong anyway: it's actually 200, not 100, and it's the equivalent of the 7 richest Americans.

No.9519
The Chinese are capitalists with red flags, and anyone who believes otherwise is deluding themselves.

No.9521
What do you think about John McCain?

No.9522
>>9515
>I asked the question in the beginning of the argument because according to Marxist theory, Capitalism is a system that in fact fetters and eventually becomes incompatible with the MoP, a planned economy in the hand of the Proletariat is poised to replace it.
It's worth recalling that China considers itself in the "primary stage of socialism," i.e. its main task is to develop the productive forces precisely due to the country's historical backwardness and poverty. Marx himself wrote that the bourgeoisie transforms pre-capitalist modes of life (of which the peasantry is obviously a part), greatly increasing productivity.

>>9521
Reactionary bourgeois politician. He's actually highly regarded in Vietnam because he pushed for the establishment of normal diplomatic and trade relations back in the 1980s-90s at a time when his colleagues were like "no, Vietnam is an enemy of America and freedom."
And even then, McCain justified restoring ties with the argument that the Vietnamese can be used against China. Besides that, he was scarcely any different from other "normal" Republicans like Lindsey Graham, Mitt Romney, Jeb Bush, etc. He cheered on the Iraq War and other imperialist ventures, and most recently praised Trump's decision to move the US embassy to Jerusalem despite near-universal condemnation from every other country in the world (as well as the Palestinians.)

He's been extolled these past few days by the media since liberals and "normal" conservatives want to encourage people to maintain faith in our corrupt, unrepresentative political system and to view Trump as just some aberration that can be corrected via voting Democrat or "upholding American values" or what have you.

No.9523
Do you listen to Chapo Trap House? I noticed you post in their subreddit a lot. What do you think about the podcast and the subreddit? Personally I like the podcast, despite them being socdem/demsoc and me being more of a ML, but I find the subreddit to be made up of mostly the exact type of anarcho-liberals that I hate with a passion.

No.9524
>>9523
I don't actually listen to the podcast, but yeah I post in the subreddit. Posters range from MLs to "progressive" liberals. It's annoying when you have people who are willing to criticize the hypocrisies of American foreign policy, but "balance it out" by claiming the USSR was just as bad and pretending George Orwell died an anarcho-communist revolutionary rather than an anti-communist social-democrat who attacked left-wing Labour MPs for putting "Soviet interest" above "British interest" and claimed communist parties exist for the purpose of espionage.

No.9525
>>9523
They should use their millions to fund a communist party.

No.9528
>>9524
>I don't actually listen to the podcast, but yeah I post in the subreddit.
Why post there if you don't listen to the podcast?
I have a fairly low opinion of the podcast and its fanbase, but I wonder if there's something worthwhile about the subreddit.

No.9529
>>9528
Because there's lots of leftist discussion there. Also lots of dumb discussion, but I can just ignore that.

No.9530
>>9529
Yo, I dig this sentiment. You GET IT comrade. Keep up the good work and I hope you'll think about joining a party someday.

No.9531
>>9519
Capitalist mode of production doesn't necessarily mean capitalist government (cf. Lenin).

No.9532
>>9519
What research have you actually done on this topic? Besides YouTube vids made by Trots and liberals. I've literally never met a Marxist offline who thought China was anything but a socialist state.

No.9533
>>9119
>Brezhnev stood in office too long
What is "too long" in your opinion? Do you think it would've been good for the democratic level of the USSR (and other socialist states) if the leaders had been elected for terms of a fixed length?

No.9534
>>9501
Do you have a source for the claim: "second to increase the independence of African countries, whose governments are making use of China to assert themselves"?

No.9535
>>9533
When I say Brezhnev stood in office too long, it was because he was very visibly weak and there were suspicions he was somewhat senile. So long as a leader is physically and mentally healthy and things are going okay in the country, I think term limits, while understandable in some situations, are ultimately less important than democratic control and accountability in general. For example, it was inconceivable to tell Mao during the last years of his life that he should probably step down, even though his health was clearly bad. It was also a problem when Honecker, Kádár, Zhivkov, Ceaușescu, Husák and Hoxha remained in office forever even though by the end conditions in their countries (and in some cases their health also) were in decline.
In large part it was because the continued leadership of these men was treated as a crutch, e.g. Tito was proclaimed President for life because the unstated assumption was that without him, Yugoslavia would start to crumble apart (which it did, but the fact that the system was to a considerable extent perceived as dependent on one man is itself a fundamental failing.) Mugabe is a modern example of a leader who, while mentally and physically pretty good (for a 94 year old), should have retired years ago.

>>9534
I don't really know how I could "source" it. The imperialist countries clearly view China as a rising economic and military threat and want African states to be dependent on them. The existence of China provides a way for African governments to lessen this dependence, similar to how the existence of the USSR allowed the same thing. On China and Africa in general, see the section on Africa here: https://docs.google.com/document/d/16iw83noTdWvDiECaITX83rGhP_lros8QdBTrNnCoe6c/edit#

No.9536
Even though they're not M-L, what do you think of the Zapatistas?

No.9537
>>9536
I've talked to a few Mexican leftists who say the EZLN is ineffectual. It's basically confined to helping improve the standard of living of Chiapas' residents and poses no real threat to the powers that be. Also the EZLN was (and is) quite naïve, e.g. the Los Angeles Times back in 1994 noted:

>[Subcomandante] Marcos said he expects the United States to support the Zapatistas once U.S. intelligence agencies are convinced the movement is not influenced by Cubans or Russians. "Finally, they are going to conclude that this is a Mexican problem, with just and true causes."

Not surprisingly, the US preferred to send arms to the Mexican government against the EZLN. So in conclusion, anarchists jerking off to the EZLN during the 90s and early-mid 2000s and treating them as the next big thing seems silly in retrospect.
Also here's an interesting post by a Mexican guy on reddit, giving his interpretation of the EZLN's motives: https://www.reddit.com/r/mexico/comments/4796sh/what_do_you_think_of_the_zapatista_movement/d0dgjz4/

No.9538
>>8560
What do you think about Stalin's allegedly antisemitic "Doctors' Plot"? https://en.wikipedia.org/wiki/Doctors%27_plot

No.9539
>>9538
According to his daughter he was skeptical about the charges against the doctors. More significantly, Zhores Medvedev (a Soviet-era dissident and historian who has no reason to defend Stalin) argues that he was going to put an end to the case but died before he could. In any case all talk about the "plot" ended soon after his death, and the generally bad atmosphere Jews were in during the last years of Stalin's life was lifted.

No.9540
Thoughts on the "Intellectual Dark Web"? Figures like Sam Harris, Jordan Peterson, Ben Shapiro, the Weinstein guys, Dave Rubin, etc. Keeping up with all the dumb things these people say and do is really just an elaborate kind of self-harm, but a lot of leftists seem to get some entertainment out of it.

You post in a few non-socialist subreddits. I wanna recommend r/samharris. A pretty large chunk of the people there seem open to left wing viewpoints as a result of becoming pretty disillusioned with Harris, as it's become more obvious lately what an absolute hack the guy is.

No.9541
Is there a better, perhaps more academically rigorous, resource than Ludo Martens' "Another View of Stalin" to clear up misconceptions about Stalin?

No.9542
>>9541
Maybe the works of J. Arch Getty. Not as supportive as Martens, but he's generally seen as pretty neutral; moreover, him not being a socialist himself will make him seem more reliable in the eyes of liberals you're arguing with.

No.9543
>>9540
I don't know much about Harris. To my knowledge he was one of the "New Atheists" (like Hitchens and Dawkins) who used the aura of being "rational" to cloak his reactionary politics.
Shapiro seems like a male version of Ann Coulter, just with a bit more "intellectual" posturing. I can't take him seriously.

I haven't actually bothered watching anything by Peterson, Weinstein or Rubin except for a thing by Peterson being like "argh, Marxists are hypocrites for talking about how they're fighting for the common man when in fact all they do is just kill everyone," and when confronted with Peterson's argument I cited a thing I had recently written: https://www.revleft.space/vb/threads/199846-On-the-right-wing-complaint-of-quot-Why-is-Marxism-more-acceptable-than-fascism-quot?p=2887959#post2887959

>>9541
Getty is good. "Life and Terror in Stalin's Russia" by Robert Thurston is also good. "Stalin's Wars" by Geoffrey Roberts is good on his role in WWII and the Korean War. Ian Grey's "Stalin: Man of History" is a fairly objective (but dated) biography which I scanned last year: https://drive.google.com/file/d/1-7z0YkCEOao2ZyqBenSY84ZH3uUb9FHn/view

If there are particular misconceptions you'd like cleared up, let me know and I can suggest sources. Martens' book goes overboard in trying to rebut just about every single allegation made against Stalin, but it has had a somewhat useful function of showing people there is in fact another view of Stalin as opposed to the anti-communist narrative. In fact, Martens' book is the first pro-Soviet work I ever read.

No.9544
>>8560
Ismail, what do you know about the Ray O Light/Revolutionary Organization of Labor? They seem to have zero presence online.

No.9545
>>9544
Nothing, except they seem very small.

No.9546
>>9543
>To my knowledge he was one of the "New Atheists" (like Hitchens and Dawkins) who used the aura of being "rational" to cloak his reactionary politics.
With the exception of foreign policy, I'm not sure I would say the politics of the New Atheists was completely reactionary.
Maybe I am naive, but at that time it seems like they mostly tried to uphold "enlightenment values", science and rationality in the face of a growing religious right wing that used its political power to push creationism in schools, marginalization of gay people, etc., and that's a worthy effort imo. Of course, their actual critique of religion was always very surface level ("do you think there is a man in the sky lol religion is DUMB") but I don't think I would say they were a reactionary group overall.

Anyway, none of that matters now since New Atheism is a thing of yesterday, and the only one of them who is still relevant, Harris, has gone off the reactionary deep end completely.

>Shapiro seems like a male version of Ann Coulter, just with a bit more "intellectual" posturing.
Pretty spot on.

No.9547
>>9546
On domestic issues they were basically liberals, but in the case of Hitchens (an ex-Trot) his foreign policy views pretty much consumed his political activities in the last ten years of his life. He supported the Iraq War, ranted about the threat of "Islamofascism" to the West, and supported Bush in 2004.

In that sense he was similar to his hero Orwell, who considered himself a socialist and yet subscribed to the view that the United States and Britain represented "freedom" against the "totalitarian" USSR, somehow concluded that Paul Robeson was "very anti-white," attacked pacifists for supposedly hating the West, and claimed the Soviets sought to sabotage the Attlee government because social-democracy represented a deadly threat to communism (rather than Soviet-British relations deteriorating due to Attlee's pro-imperialist foreign policy.)

No.9548
>>8560
Ismail, was the Soviet Revolutionary Communists (Bolsheviks) an actual underground group, or was it an Albanian hoax, as is often claimed?

No.9549
>>9548
I've never heard of it mentioned anywhere outside of Albanian and Chinese texts back when its purported existence was announced.
There were rumors back then that Molotov was involved, but he wasn't. After he was expelled from the CPSU as part of the "Anti-Party Group" he spent the next three decades trying to rejoin it, a goal finally achieved shortly before his death. It's probable that the group existed but was very short-lived and had an insignificant number of adherents. There are plenty of cases in history where a group announces its existence and nothing else.

No.9552
>>8560
Don't know if this has been asked yet, but why is Labour Zionism looked down upon by non-Jewish Marxists? As a Jewish person this is something I have always wondered.

No.9553
>>9552
Because it's still Zionism. Ben-Gurion being a "socialist" did not prevent him from presiding over the mass expulsion of Palestinians or aligning Israel with American imperialism. And there are plenty of anti-Zionist Jews who would say the same thing.

No.9555
What would you say against the argument that says:
>We need capitalists; their role is almost natural. We always need direction of economic processes.
>Also if I invest money in my own company or a little company, it's mine and I don't know why Communists must get it.
It is like capitalist ideology has so much force that people tend to identify capitalist = direction of companies. So then they can't see the necessity of the revolution and why the workers should own the State and companies.

No.9556
>>9555
1. The present economic system (i.e. capitalism) requires capitalists. Prior systems (primitive communal living, slavery, feudalism) obviously didn't. This person is confusing managers and capitalists. As Marx pointed out, "The labour of supervision and management is naturally required wherever the direct process of production assumes the form of a combined social process, and not of the isolated labour of independent producers. . . This is a productive job, which must be performed in every combined mode of production. . . .
In a co-operative factory the antagonistic nature of the labour of supervision disappears, because the manager is paid by the labourers instead of representing capital counterposed to them."

2. It is possible for certain private enterprises to continue existing for a time. To quote from one work (Germany Beyond the Wall by Jean Edward Smith, 1969, pp. 102-104):

>The private and semi-state factories play an important role in East Germany. In 1966 there were almost ten thousand such enterprises remaining, and the government seems under no compulsion to liquidate them. For the most part they are small undertakings (the average semi-state firm employs 63 persons; the average private firm one-third that many) . . . Not only do such factories provide a yardstick for comparing production costs in state-owned concerns, they also represent considerable reservoirs of technical and managerial expertise.

>A number of private firms have changed to become semi-state firms in recent years, but this merely means that the state has been admitted as an investor; control and management are still vested in the original entrepreneur. In many respects, the manager of a semi-state firm occupies one of the smuggest sinecures in East Germany. He is paid a guaranteed salary to continue as manager of the firm, plus a fixed return on the value of his investment. If the firm makes money, his proportion of the profits equals his share of investment. . . .

>Werner Beier, the [Kamerafabrik Waldemar Beier KG] plant's present director, received me warmly. . . Why had he become a semi-state firm? "Because we needed capital, and that was the easiest way to get it." Did he resent the decision? "I really had no alternative." Was he satisfied with the arrangement? "They leave me completely free to manage the company as I see fit. My son will become manager when I retire. That was part of the agreement. I'm not satisfied, because you can see we need a great deal more capital to modernize and expand.
But within the limits of the system I can't complain."

(For anyone wondering, in 1972 almost all of the GDR's private and semi-state enterprises were nationalized, including Kamerafabrik Waldemar Beier, the argument being that state ownership was more efficient in allocating resources and labor.)

No.9557
>>9556
>This person is confusing managers and capitalists.
But capitalists are managers, aren't they?

No.9558
>>9557
They can be, and that was generally the case in the early days of capitalism when the capitalist supervised his own workers in his own factory, but it isn't nowadays unless you're talking about small businesses. CEOs and the like don't manage the day-to-day activities of factories. Managers in the socialist countries clearly weren't capitalists.

No.9559
>>9558
Could you develop this point?

No.9560
>>9559
It should be fairly obvious that, say, an American capitalist with factories in Singapore and Taiwan is evidently not an omnipresent being able to personally manage factory workers in multiple places at once. Such capitalists hire management personnel. If you worked at Microsoft back in the 90s, you wouldn't have Bill Gates coming by your cubicle asking you mundane questions about formatting reports properly or telling you to come to work on the weekend.

Gates might drop in once in a while, either for the hell of it (at one point Gates decided to have fun by answering a call from a customer asking for tech support) or because he heard that productivity isn't as good as it could be and wanted to do some "hands-on" investigation and raise morale by talking to employees. Gates didn't need to personally supervise Microsoft employees. He paid people to do that for him, as basically any major capitalist does.

No.9561
>>9560
But to manage doesn't necessarily mean to personally manage. I think that's a conceptual trap. Capitalists manage; they are the ones who dominate productive processes.
Even still, having subordinates who work personally in companies does not mean that capitalists don't manage.

No.9562
>>9561
Yeah, I get that "capitalists literally don't do anything except sit on their ass all day raking in money" is largely misleading, but my point was that "direction of economic processes" takes place under every system of production (e.g. slaveowners of large plantations employed overseers.) You do not need capitalists to coordinate the day-to-day operations of a factory, and modern capitalists generally don't fulfill that function anyway. That's why I wrote that the real or hypothetical person making the argument you quoted is "confusing managers and capitalists." It's also why I gave the example of the USSR, whose factories had managers but no capitalists.

No.9563
I've seen people claim that the Incan Empire / pre-Incan Andean societies were examples of "Primitive Communist" economies. Is that accurate?

No.9564
>>9563
The Great Soviet Encyclopedia gives the following definition of primitive communalism:

>the first socioeconomic formation in human history. The foundations for the doctrine of the primitive communal system as a special socioeconomic formation were laid by K. Marx and F. Engels and were subsequently developed by V. I. Lenin. Most Soviet scientists and scholars believe that the primitive communal system existed from the appearance of the first human beings to the emergence of class society. From the standpoint of archaeology, this period basically coincides with the Stone Age. In the primitive communal system the relationship to the means of production was the same for all members of society. Consequently, the mode of obtaining a share of the social product was the same for all. For this reason, the term “primitive communism” is applied to this system, which is distinguished from succeeding stages of socioeconomic development by the absence of private property, classes, and the state.
You can find the rest of the article here: https://encyclopedia2.thefreedictionary.com/Primitive+Communal+System

Whereas the Incan Empire is described as follows:

>The state was a slave-holding despotate. The Incas levied a tribute from subjugated tribes and exploited the labor of ordinary commune members, artisans, and slaves— the yanacuna. The basic socioeconomic nucleus was the village commune—the ayllu. The land was regarded as belonging to the ruler—the Inca—whose power was encircled by a sacred halo. The first mythical ruler—Manco Capac—was revered as the sun’s offspring. The Incas used irrigation and erected buildings for military and administrative purposes.

In other words, the Incan Empire was based on slavery, the system that generally comes after the breakup of the primitive-communal system. As for what specific form primitive communalism took in the ancient Andes compared to the rest of early humanity, I have no information on that.

No.9565
What is the best work about Maoist China? Feel free to include neutral works and pro-Maoist works. Also, what do you think about the communes in China? Were they a success or was it ultimately a failed experiment?

No.9566
>>9564
I would say the biggest difference was that the Incan economy was centrally planned, according to this Wiki article: https://en.wikipedia.org/wiki/Inca_Empire#Economy

No.9567
>>9565
I can't think of "the best." There are books on individual subjects that are good (like Fanshen by William Hinton), but as far as an overall history of China under Mao, I'd say Maurice Meisner's "Mao's China and After: A History of the People's Republic" is a good start. As for the communes, they evidently failed to raise production on any sustained basis and were unpopular.
No.9568
Were workers actually paid with labor vouchers in China's "iron rice bowl," and if true, why did Deng apparently dismantle it?

No.9569
>>9567
>As for the communes, they evidently failed to raise production on any sustained basis and were unpopular.
Well maybe they would have if they were given a bit more time and maybe better management/infrastructure. https://amp.ft.com/content/9d25d432-3be1-11e5-bbd1-b37bc06f590c

The Dengist line of the CPCh, which you relentlessly defend here, simply provides us institutional evidence and we are supposed to eat it up. There is no telling what would have happened if China did not liberalize the economy; we are told that only capitalism could have developed the productive capabilities.

No.9570
Do you know anything about the African People's Socialist Party? I've looked at their website but they don't seem to have much else of a presence online.

No.9571
>>9570
I do not (besides the basics like how it was founded, etc.)

>>9568
They still received wages, but were also given coupons to help obtain consumer goods. The "iron rice bowl" has declined over time in an effort to increase material incentives.

>>9569
We know the spectacular growth of the Chinese economy since the 1980s. I don't see why a continuation of Mao's policies would have done almost as good.

No.9572
>>9571
>We know the spectacular growth of the Chinese economy since the 1980s. I don't see why a continuation of Mao's policies would have done almost as good.
Well maybe because that spectacular growth (which also occurred in countries like Japan, South Korea or Singapore) came with all the shortcomings of capitalism for the Chinese workers: unemployment, homelessness, wage cuts, oligarchs, rent-seekers, commodification, alienation, inequality. The USSR had constant economic growth and none of this.

No.9573
What is your opinion on Maurice Bishop / the NJM / the socialist government of Grenada?
No.9574 >>9572 >The USSR had constant economic growth and none of this. To quote one author, "While not actually in crisis in 1985, the Soviet regime was headed for one. . . Deprived of its past sources of growth from an expanding labor force and ever-increasing inputs of capital, the economy had poor prospects. . . and was probably fated to contract slowly. The weakness of the economy gave rise to a second vulnerability: widespread dissatisfaction and disillusion throughout Soviet society. Ordinary people no longer experienced improvements in their poor living conditions." (Myron Rush, in The Strange Death of Soviet Communism, 2008, pp. 21-22.) Anyone who lived in the USSR in the 80s would note how conditions were deteriorating, a situation that also marked the Eastern European countries. It is a fact that living standards have greatly increased in China and Vietnam since the economic reforms, whereas you'd probably rather live in the USSR (or GDR, etc.) of 1966 than 1986. No.9575 >>9573 They seemed good. No.9578 What is your opinion on the Taiping / Heavenly Kingdom rebellion? Aside from the Christ Cuck'ism and Han nationalism, was it a peasant / progressive rebellion? (The CCP and the KMT recognized it as each of these, respectively.) No.9579 >>9578 I haven't studied the Taiping Rebellion, but there is an English-language Soviet history of China covering the 1700s up to 1918 which I recently scanned.
Part II discusses it and gives an overall positive assessment: https://archive.org/details/ModernHistoryChina The Great Soviet Encyclopedia describes it as "a peasant war in China, directed against feudal oppression and the alien Manchu dynasty" and notes that "the leaders of the Taiping rebellion published the booklet Land Administration of the Heavenly Kingdom, which presented a Utopian program for transforming Chinese feudal society into a militarized patriarchal society based on 'peasant communism' and on the principle of the equality of all its members. In practice, the social and economic policies of the Taiping leaders merely decreased somewhat the rent payments the peasants made on the land and shifted a significant part of the tax burden onto landowners and the rich." Lots of historically progressive endeavors took place under the banner or influence of religion, such as the peasant revolt under Thomas Müntzer and John Brown's anti-slavery struggle. No.9580 >>9574 So you're basically arguing that capitalism managed by a communist party is superior to socialism. Why do you call yourself a socialist again? No.9581 >>9580 No, I'm arguing that Soviet-type economies had flaws which had become obvious by the 1980s. Saying that "the USSR had constant economic growth" papers over these flaws and doesn't explain why there would be any calls to reform the system. No.9584 >>9581 >Saying that "the USSR had constant economic growth" papers over these flaws and doesn't explain why there would be any calls to reform the system. But criticizing China for adopting capitalist policies does not mean being totally uncritical of the USSR. Of course the USSR needed reforms, otherwise it wouldn't have been dissolved. But I disagree that liberalization was the only way for China to do it.
Furthermore, I wasn't talking about the flaws of the Soviet economy, but the fact that the USSR didn't have >unemployment, homelessness, wage cuts, oligarchs, rent-seekers, commodification, alienation, inequality China - evidently - has all this. Of course the living standard has been growing (capitalism tends to do that over time), but the contradictions of capitalism have not been addressed. If you are concerned about average living standards only, why don't you support Luxembourg, Switzerland, Norway or Australia? These are the countries with the highest living standards. I don't think we can personally blame the leadership of the CPC for going this way, but we have to acknowledge that they opened Pandora's box: there is now a capitalist class with tremendous influence in the CPC, and the CPC's ideology is terminally revisionist (see this: https://monthlyreview.org/2007/09/01/the-state-of-official-marxism-in-china-today/) and promotes center-right conservative values and Han ethno-nationalism. If there are genuine MLs in the CPC, they are marginalized. Cockshott was threatened that they would revoke the passports of Chinese comrades who are translating Towards a New Socialism. It is not impossible that there might be an ML coup in the CPC, or that some of the leaders are somewhat genuine, but if you are a materialist rather than an idealist and look at the actual class interests of the Chinese leadership, we cannot accept that China is a dictatorship of the proletariat (except maybe a completely degenerated one), let alone socialism, unless you insist that it is one simply because they nationalized core industries and call themselves communists.
No.9586 >>9584 >unemployment, homelessness While it'd be silly to compare the situation before Gorby with after it, there was in fact a growing problem of "unofficial" unemployment in the USSR by the time he came to power, although it wasn't due to a capitalist class creating a reserve army of labor, instead it was due to the inefficiencies of the system. This was especially apparent in Central Asia, where there was a "large number of people not employed in the public sector. According to one recent Soviet source, at the beginning of 1988 one quarter of the labor resources in Uzbekistan was not involved in public production. The problem is most serious among youth, especially female youth. Half of the population not working in Tajikistan, over 136,000 men and women between the ages of sixteen and twenty-nine were neither working nor studying. . . In the [Fergana] Valley's Andizhan oblast it was reported that 18,143 Komsomol members were not working in 1986." (William Fierman, "Central Asian Youth and Migration" in Soviet Central Asia: The Failed Transformation, 1991, pp. 256-257.) The author adds that Gorby's policies "seem likely to aggravate the current situation" (a bit of an understatement.) Also a lot of potential unemployment was "solved" by overmanning enterprises. According to an American Communist journalist who opposed the destruction of the USSR, "From what I observed [living in the USSR in the 1970s-80s], there was a shocking waste of labor power almost everywhere—far too many performing the labor a fraction of their number could do. One worker would be actually engaged in making repairs while two others would be looking on." (Mike Davidow, Perestroika: Its Rise and Fall, 1993, p. 8.) >wage cuts "Despite the policy of stable retail prices, open inflation was not absent. 
Between January 1977 and September 1981, state retail prices were increased on four occasions, the increases mostly affecting what may be regarded as luxury goods rather than necessities. The official retail price index, which fell from 100 in 1970 to 99.7 per cent in 1975, went up to 103 in 1980, to 104 in 1981, and to 108 in 1982, to remain at the 1982 level in 1983 and 1985. At the same time, this index was understated, inter alia, because it ignored the prices in the collective farm market. Collective farm market prices were 1.37 times higher than state retail prices in 1965, 1.55 times in 1970, 1.76 times in 1975, and 2.09 times in 1980. Besides open inflation, hidden inflation was also to be found. It occurred through increases in retail prices not reflected in the official retail price index, resulting either from a substitution in the product mix of an enterprise of higher-priced products for lower-priced ones, from a reduced quality of the product, or from pseudo-innovations. Finally, there was repressed inflation, i.e. an excess of purchasing power, caused by central regulation preventing retail prices from increasing in spite of shortages of consumer goods and services. The indicators of rising repressed inflation included worsening shortages of consumer goods and services, lengthening queues, a spread of food rationing in the first half of the 1980s, soaring prices in the second economy, mounting deposits in savings banks, and swelling food subsidies." (J.L. Porket, Work, Employment and Unemployment in the Soviet Union, 1989, p. 86.) While in some respects this was still a better system for ordinary people, it nonetheless contained long-term problems. >If you are concerned about average living standards only, why don't you support Luxemburg, Switzerland, Norway or Australia? Marx wrote that one task of the proletarian state was "to increase the total productive forces as rapidly as possible." 
In the final analysis, socialism's superiority to capitalism is based on high living standards and quality of life. China in 1949 was even more backward than Russia in 1917, and the CPC under Mao's leadership spent the next three decades alternating between sensible economic policies (Soviet-backed industrialization in the 50s) and disastrous ones (Great Leap Forward, Cultural Revolution.) Deng argued that the main task was to develop the productive forces so that China could become a "modern" country and further develop socialism on the basis of abundance. And despite all the progress made since the end of the 70s, the government recognizes it still has some ways to go (e.g. the average Chinese person is still generally poorer than his or her American counterpart.) No.9587 >>9584 I don't see what the MR article shows about "the CPC's ideology [being] terminally revisionist." It shows that academic debate in China has grown enormously since the end of the Cultural Revolution. While many academics no doubt think the market is a panacea (and no doubt some academics flat-out just want to adopt capitalism), the article notes "some comments by Chinese conference participants that swam against the private property and privatization tide." The author of the article ominously states that the pro-market (and in some cases plainly pro-capitalist) arguments were akin to those "I had heard (and had argued against) in Moscow in 1991, the last year of the Soviet Union, coming from Soviet academics and party and state officials." The comparison is inexact. The problem with the Soviet approach to public debates until Gorby was that there were rigid restrictions on what could be discussed. This helped the revisionists mask their own views, e.g. Georgy Shakhnazarov, an aide to Gorby, said after the USSR collapsed, "Gorbachev, me, all of us were double-thinkers, we had to balance truth and propaganda in our minds all the time. . . It was the choice between dissidence and surrender." 
Thus Shakhnazarov and others like him rose through the ranks by writing and saying the "correct" things. And as problems in the economy and society accumulated, and someone like Gorby took to the helm, these revisionist forces burst out and overwhelmed academics and party officials who supported Marxism-Leninism. The CPC's approach since the 1980s has been different: it wants pro-market and pro-centralization forces to debate openly, along with open debates about other subjects in Chinese society. So long as the Four Cardinal Principles are upheld, the CPC's approach is generally hands-off. In this way the CPC can more easily ascertain problems facing China and the party's cadres can better respond to arguments. >>9585 A fascist dictatorship employs fascist methods of organizing the economy and society. A military dictatorship just means the military controls the country's political life. Politically such regimes can differ quite a bit, e.g. Peru under Velasco was on cordial terms with the USSR and Cuba and carried out progressive reforms, while Ethiopia's Derg oriented the country toward socialism. As for bourgeois military dictatorships, these aim to assist the capitalist class in stabilizing a country marked by political and/or economic crisis. This book might be of some use: https://archive.org/details/ArmiesAndPolitics No.9588 Can you show me how workers have political power in China? No.9589 >>9588 The working-class took power in 1949 in alliance with the peasantry and under the leadership of the Communist Party. A new state structure was established and a socialist economy built. The CPC continues to serve as vanguard of the working-class and leads the work of socialist construction and the defense of the socialist state and economy.
That is why counter-revolutionaries strike first and foremost at the vanguard role of the CPC, as occurred in Tiananmen in 1989. There are obviously other means by which workers exercise political power (via national and local government, the trade unions and other mass organizations, the press, etc.) but the vanguard is the apex of the system. No.9590 >>9589 but can you show me these are really controlled by workers and not by billionaires??? No.9591 >>9590 See the section "The socialist state's firm control over 'bourgeois elements'" here: https://docs.google.com/document/d/16iw83noTdWvDiECaITX83rGhP_lros8QdBTrNnCoe6c/edit (the other sections, e.g. on state-owned enterprises, are also relevant.) The state controls the commanding heights of the economy and it is the CPC which has allowed Chinese billionaires to come into existence. The latter are thus dependent to a considerable extent on the policies of the party. No.9592 >>9591 I can't seem to find this section ??? where is it No.9593 >>8560 Would you consider the First US Red Scare (which happened around the 1900s-1920s) the death knell of the US socialist movement? It was responsible for destroying most union-based movements (most notably the IWW) as it forced most unions to "act nice" with their "employers" or risk being banned (it also formulated laws that banned cross-profession unions, hampering the IWW again). No.9594 >>9592 It's in the document, page 6. >>9593 No, the Great Depression saw an upsurge in labor militancy and the Communist Party reached the height of its influence in the 1930s-40s, in part due to its significant role in organizing the CIO.
Compared to the Socialist Party of the 1900s-10s the CPUSA certainly wasn't as influential, but at the same time the Socialist Party was undone not only by government repression but also by its own right-wing, reformist leaders who expelled the more numerous left-wing in 1919 and who in earlier years had sought to keep Eugene Debs off the ballot, sabotaged consistent opposition to World War I, etc., just as the other parties of the Second International had reformist/revisionist and revolutionary wings, with the latter splitting to form communist parties after 1917. No.9595 Thoughts on the South African Communist Party? Do you think they are a legitimate (i.e. not turned into socdems/"progressives" without a substantial ideology) communist party? Would you tend to support them rather than EFF? I know that you said you don't know a lot about current South African politics, but still asking in case you have anything. No.9596 >>9595 Yeah, to reiterate I don't know much about modern South African politics, but from what I've gleaned the SACP has been very close to the ANC both during and after the struggle against Apartheid, and by "very close" I mean finding excuses for every bad thing the ANC does, somewhat like how the CPUSA makes excuses for Democrats. If the SACP are truly wedded to the ANC, and refuse to allow members to disagree with this state of affairs, then communists ought to establish their own groups or party advocating independent working-class politics. No.9597 >>9596 >the SACP has been very close to the ANC both during and after the struggle against Apartheid, and by "very close" I mean finding excuses for every bad thing the ANC does, somewhat like how the CPUSA makes excuses for Democrats. I was under the impression that they are at least very critical of Zuma though. No.9598 >>9597 That's true, and it's a good thing. 
So long as the SACP consistently practices democratic centralism (in the sense of allowing open debate on what attitude to take toward the ANC), I'd probably join the party if I lived in that country. The problem with a lot of historically pro-Soviet parties was that the process was reversed: first the leadership promulgates a line, then anyone who disagrees with it is labeled ultra-left or right-deviationist, and then the leadership's line is unanimously passed at a party congress or Central Committee meeting. The opponents of the line were not allowed to carry it out while still personally disagreeing with it; they had to make a complete renunciation of their views and confess how wrong they were or else face expulsion. That's democratic centralism turned on its head, and was a significant factor in the development of the New Left. No.9599 >>9591 >The state controls the commanding heights of the economy and it is the CPC which has allowed Chinese billionaires to come into existence. The latter are thus dependent to a considerable extent on the policies of the party. But we know a bunch of capitalists have significant influence in the CPC. Xi is not a worker, for example, he's a hundred-millionaire. You're making the assumption that the CPC perfectly represents the interests of the working class (and thus any and all anti-socialist policies have to be in the interests of the working class by definition). Your claim that China is still a DotP despite having a capitalist (sorry, "primary stage socialist") economy depends on this assumption, but you need to actually prove that, somehow. No.9602 >>9599 >You're making the assumption that the CPC perfectly represents the interests of the working class No, I'm making the assumption that the CPC represents the interests of the working-class. No party has ever "perfectly" represented workers' interests. Stalin said in 1927, >Our state must not be confused, and, hence, identified, with our government.
Our state is the organisation of the proletarian class as the state power, whose function it is to crush the resistance of the exploiters, to organise a socialist economy, to abolish classes, etc. Our government, however, is the top section of this state organisation, its top leadership. The government may make mistakes, may commit blunders fraught with the danger of a temporary collapse of the dictatorship of the proletariat; but that would not mean that the proletarian dictatorship, as the principle of the structure of the state in the transition period, is wrong or mistaken. It would only mean that the top leadership is bad, that the policy of the top leadership, the policy of the government, is not in conformity with the dictatorship of the proletariat and must be changed in conformity with the demands of the dictatorship of the proletariat. I argue that the overall logic of the Chinese economy requires the CPC to struggle against efforts at capitalist restoration in order to maintain the existing state. If it fails (either due to incompetence or revisionism), then capitalism will be restored. Xi Jinping has wealth due to his relatives. He himself isn't a capitalist. Furthermore, plenty of leaders of socialist states lived in relative luxury; the personal lives of leaders do not constitute evidence that their states were bourgeois. No.9603 How much of a factor did Germany play in the Russian Revolution? Beyond Lenin's infamous train trip, did Germany play much of a role? No.9604 >>9603 The Germans financed the Bolsheviks using the logic "the enemy of my enemy is my friend." German funds were used by the Bolsheviks towards establishing newspapers and printing propaganda against the Provisional Government.
While German funds helped the Bolsheviks, it was the popularity of Bolshevik slogans among workers and soldiers that was responsible for the October Revolution, not the machinations of the Kaiser's officials. Such financing continued afterward, as I wrote elsewhere: >Lenin operated on the assumption that a successful revolution in Russia would inspire revolutions in Germany and the other imperialist countries. Furthermore, Germany continued waging war on Russia even after the Bolsheviks took power until it was able to impose the Brest-Litovsk treaty on them. >As far as I know, after Brest-Litovsk was signed the German government viewed the Bolsheviks as a "lesser evil" compared to a White Russian regime that would have rejoined the Entente and continued war against the Central Powers. The Bolsheviks knew this and took advantage of the situation till November 1918, when a republic was proclaimed and the SPD came to power with its pro-Entente, anti-Bolshevik foreign policy. >"Count Wilhelm Mirbach, the German ambassador in Moscow, sent a cipher telegram to Berlin on 3 June 1918, one month before he was assassinated: 'Due to strong Entente competition, 3,000,000 marks per month necessary. In event of early need for change in our political line, a higher sum must be reckoned with.' Two days later, the German Foreign Ministry informed the Treasury that Mirbach had spent large sums to counter allied efforts in Russia to persuade the Bolsheviks to change their line and accept Allied demands. Since it was the German view that the new regime was hanging by a thread, Mirbach's efforts were regarded as of cardinal importance, and in order to sustain them a fund of 'at least 40 million marks' was required." (Volkogonov, Autopsy for an Empire, pp. 23-24.) >"Chicherin fully supported Lenin's German policy as a weapon against both the Allied intervention and the White movement. 
As Lenin wrote in August 1918: 'No one asked the Germans for 'help', but we did agree when and how they, the Germans, would carry out their offensive against Murmansk and [the White General] Alexeev. It was a coincidence of interests. We would have been idiots not to have taken advantage of it.'" (Ibid. p. 39.) >Richard Pipes points to an August 1918 letter Lenin sent to Jan Berzin (then Bolshevik representative in Switzerland) on the subject of distributing propaganda against Britain and France: "do not spare money and effort on publications in three (or four) languages and distribution. The Berliners will send some more money: if the scum delay, complain to me formally." (The Unknown Lenin, pp. 12, 53.) No.9605 >>9604 No.9606 Another question, regarding the purge of Bukharin: was it justified? In hindsight, he seemed like a loyal party member, if opposed to collectivization and part of the right wing of the party. It seems like what I've read from him would go on to influence the CPC post-Cultural Revolution. Is there any evidence to indicate that Stalin was regretful? Thanks again. No.9608 >>9606 The charges that Bukharin was part of a plot with Trotskyists to overthrow the Soviet government and whatnot are false. However, in the late 20s, according to Jules Humbert-Droz (one of Bukharin's allies in the international communist movement), Bukharin did claim the Right Opposition was prepared to assassinate Stalin. This was around the time Bukharin was desperately seeking a bloc with Zinoviev and Kamenev, calling Stalin a "Genghis Khan" who would betray everyone. But neither Zinoviev nor Kamenev, nor their ally Trotsky, ended up forming such a bloc. As Trotsky put it, "With Stalin against Bukharin maybe, but with Bukharin against Stalin never."
In the early 30s apparently some of Bukharin's younger admirers did make remarks critical of Stalin, and Bukharin didn't report on this to the Central Committee. But as I said, the Moscow Trials narrative of a giant conspiracy to engage in sabotage, assassinations, and espionage on behalf of foreign powers in bloc with Trots has zero evidence outside of the confessions of the defendants. >is there any evidence to indicate that stalin was regretful? No. In the late 50s there was a petition by a few Old Bolsheviks (notably Elena Stasova) asking the Central Committee to rehabilitate Bukharin as a man who had flawed views but was nonetheless a sincere revolutionary. Khrushchev, however, was concerned that rehabilitating him would have adverse political consequences (same reason that Trotsky, Zinoviev, Kamenev, Radek, etc. weren't rehabilitated.) And yeah in the 1980s there was a debate over Bukharin among Chinese academics. One side argued that whatever the merits of Bukharin's more "humane," gradual road to socialism, it wasn't really a realistic option in the USSR of the late 20s. I agree with that assessment. No.9611 Have you read any books on Allende’s presidency that you could recommend? No.9612 >>9611 "Allende's Chile: An Inside View" by Edward Boorstein, an American economist associated with the Communist Party USA who helped advise both the Cuban and Chilean governments. It points to both the accomplishments and weaknesses of Allende's time in office. No.9616 File: 313df35288cecf7⋯.jpg (166.18 KB, 640x360, 16:9, 441121222.jpg) A.Whats your opinion on the Current government of Ethiopia and their claims of being "Marxist Radical Democrats"? B.Whats your Opinion on Mengistu and the Derg? https://www.nazret.com/2017/12/07/why-ethiopians-are-nostalgic-for-mengistu-haile-mariam-aka-%E1%88%98-%E1%8A%95-%E1%8C%8C/ No.9617 >>9616 A. 
Ethiopia's ruling party/coalition claims to be practicing "revolutionary democracy" and still uses vaguely Marxist rhetoric, but in practice it has been glad to cooperate with US imperialism in carrying out the "War on Terror." To quote one work: >"After the defeat at Shire, the Derg abandoned all of Tigray to the rebels, and the EPRDF's expanding guerrilla alliance started the military and political manoeuvres that would end in the takeover of Addis Ababa two years later. The Soviet bloc was close to casting Mengistu adrift. No belated acts of liberalization would save him. For his part Meles Zenawi, barely known outside Tigray, began introducing himself to a wider world. >An early encounter with the western press led to an observation that has dogged him ever since. He told an interviewer at the end of 1989 that the Soviet Union and other eastern bloc countries had never been truly socialist and added, 'The nearest any country comes to being socialist as far as we are concerned is Albania.' As Meles set off in 1990 on his first venture to the United States, his aspiration to the mantle of Enver Hoxha and to run Ethiopia on Albanian lines did not inspire much confidence. >In Washington he met the veteran Ethiopia-watcher Paul Henze. Henze was as impressed by Meles as many foreigners have been in the years since, and he made detailed notes after two long conversations. Meles had to deal first with the Albanian connection. 'I have never been to Albania,' Meles told Henze. 'We do not have any Albanian contacts. We are not trying to imitate in Tigray anything the Albanians have done.' >Meles was equally keen to reject the Marxist tag. 'We are not a Marxist-Leninist movement,' he said. 'We do have Marxists in our movement. I acknowledge that. I myself was a convinced Marxist when I was a student at [Addis Ababa University] in the early 1970s, and our movement was inspired by Marxism. 
But we learned that Marxism was not a good formula for resistance to the Derg and our fight for the future of Ethiopia.' >As the EPRDF moved out of the countryside to take over the towns and the cities, it emerged into a post-communist world, and a rapid political make-over was needed. 'When we entered Addis Ababa, the whole Marxist-Leninist structure was being disgraced,' said General Tsadkan. 'We had to rationalize in terms of the existing political order . . . capitalism had become the order of the day. If we continued with our socialist ideas, we could only continue to breed poverty.'" (Source: Peter Gill. Famine and Foreigners: Ethiopia Since Live Aid. New York: Oxford University Press. 2010. pp. 74-75.) B. He deserves praise for overthrowing feudalism. I wrote up an interview he conducted back in 2001: http://www.soviet-empire.com/ussr/viewtopic.php?f=133&t=54380 C. Not surprising. The EPRDF government, perhaps until very recently (I'm not aware of what the recent leadership is doing besides normalizing relations with Eritrea), was pretty autocratic and unpopular. So was the Derg in the 1980s, but at least the Derg had the excuse of fighting a gigantic rebellion. No.9619 File: 340e2d7588e51dc⋯.pdf (6.83 MB, AppearanceOfMajorFidelCast….pdf) Do you have any book recommendations on the Prague Spring of 1968? What is your assessment of events there? As I see it, there were errors made in the pattern of industrialization and rebuilding in many Eastern European countries, e.g. over-emphasis on heavy industry at the expense of agriculture and light industry. This led to some real economic hardships for the working class. The Soviets could also be overbearing at times and didn't treat other countries with the respect they deserved. Against that background, there was a real movement to reform socialism in order to make the state more authentically democratic.
This process was taken advantage of by revisionists who wanted to dismantle socialized industry and public property, helped every step of the way by American, UK and German intelligence. The Soviet intervention was a regrettable if ultimately necessary measure to protect socialism in Czechoslovakia during the '68 crisis. Do you agree with that assessment? Attached is Fidel's opinion, which I am sympathetic to. No.9620 >>9619 This is also a good series of essays for anyone interested: https://www.marxists.org/history/etol/writers/marcy/czech/index.html No.9621 >>9619 >>9620 Marcy's book is pretty good. Chapter 7 of Al Szymanski's "Is the Red Flag Flying?" is worth reading as well (it contains, among other things, a brief discussion of Prague Spring): https://archive.org/details/IsTheRedFlagFlying Someone I know scanned two pamphlet-sized CPUSA assessments of events back then: Coincidentally, earlier today I scanned a Soviet book criticizing the political and philosophical views of Thomas Masaryk, the founder of bourgeois Czechoslovakia whom the Czechoslovak revisionists were trying to present as a "humane" socialist. Might find it of mild interest: https://archive.org/details/CritiqueMasarykism No.9622 What are some good books from which I can learn about contemporary history? They don't have to be strictly leftist. Stuff like Israel, the Syrian Civil War, Putin's presidential terms, the European Union, whatever the hell's happening in Eastern Europe, etc. I need to catch up. No.9623 >>9622 "Asad: The Struggle for the Middle East" was published in 1990; it's a good overview of the senior Assad's life and of Syrian politics and foreign policy up till then: http://b-ok.xyz/book/1190466/17bbca I haven't read it, but "Palestine, Israel and the U.S. Empire" by Richard Becker is said to be a good Marxist intro.
There's no shortage of critical analyses of Israel's establishment and foreign policy during the 1940s-80s from authors like Ilan Pappé, Simha Flapan, Uri Davis, Alfred Lilienthal, etc. One book you could check out is "Fateful Triangle" by Noam Chomsky, which goes up to the 90s: http://b-ok.xyz/book/958050/f7145e If you want a very accessible and short intro, there's "Arabs & Israel for Beginners" by Ron David, which goes from ancient history up to the early 90s, the author being a liberal supporter of the Palestinian cause. As for modern Eastern Europe, the EU, and Russia under Putin, I can't think of any books. No.9624 >>9621 What luck! I'll be reading the book on Masaryk this afternoon. Thanks, Ismail. No.9625 What's your opinion of Ernest Mandel and his work? Even though he's a Trotskyist, is he worth a read? His books are rather long, so I'm not sure whether or not I should invest time in them. Although I did like his "Introduction to Marxist Economic Theory." No.9626 >>9625 When he writes about Marxist economics, he's pretty good and there's no harm reading him. When he writes 'bout the USSR and friends, naturally his analysis suffers from his Trotskyism, but at least he had the decency to be one of those Trots who, like Trotsky himself, held it was a "degenerated workers' state" to be defended against imperialism, as opposed to Cliff, Dunayevskaya and others who called it "state-capitalist" and imperialist. No.9627 Isn't Georgism the best way to reach communism? Like, if every citizen of a country had a piece of land that is inalienable and can only be rented, not sold, wouldn't that alone lead to the building of a communist society? Because think about it: you have your job, and then one day you are fired. So, you pack your shit and move from the house you have been renting to work in the city to the land you have had since you became a citizen.
On that land you grow your own food, you grow connections with your neighbors for trading basic food items and services, and since work on a farm isn't really busy aside from certain times of the year, you study, a lot. Eventually you become qualified enough to take a better-paying job: that's when you go back to the market! You don't go because you're starving and drive the wages down; you go when you feel you can make at least just enough to have a life better than planting crops. So a few things happen: the workers have more leverage because they can take the risk of losing their jobs, and being fired goes from a death sentence to a minor setback. More people in the fields means more people competing to sell crops, and companies will have bidding wars to see who can pay more to rent the most land, which also means there will be an inbuilt UBI system in place. Rich people will join forces to build their own cities to collect rent, which means more factories will have to be built, even if just to produce materials required to build apartment complexes. And NGOs could start collective farms at zero cost, which means that while some areas are in a less competitive market, some will be living in a basic form of communism. No.9628 >>9625 Here are some essays engaging with Mandel's economic theories of capitalism's "long cycle". The author, Sam Williams, argues that long cycles are directly tied to the production of the money commodity, gold. He agrees with Mandel on basic premises but disagrees on the mechanisms by which cycles are triggered. I happen to find Williams' arguments convincing. I haven't read Marxist Economic Theory, however, so take my analysis with a grain of salt.
The second essay is directed at Mandel specifically: https://critiqueofcrisistheory.wordpress.com/does-capitalist-production-have-a-long-cycle/ https://critiqueofcrisistheory.wordpress.com/does-capitalist-production-have-a-long-cycle/does-capitalist-production-have-a-long-cycle-pt-2/ No.9629 >>9627 1. Socialist consciousness is not going to be cultivated by competition to sell commodities. Socialist consciousness is created through collective struggle, co-operation and education. 2. There are finite resources on the planet. In fact, we are already running out of several important types of rare earth minerals used to make the materials by which your agricultural utopia would be possible or desirable. Do you seriously think there is enough arable land for 7 billion+ people to become petty commodity producers? What you're advocating sounds similar to Pol Pot's Year Zero. 3. The basis of socialist production is large-scale, socialized industry. How could farming communities get products to sustain themselves in the non-planting seasons? 4. The anarchy of production leads to inevitable crises during which those small plots you like so much will be gobbled up by those with capital so the families living on those plots can get a scrap of bread. I could go on, but, frankly, this is such a ridiculous notion I don't even know if it needs more of a response. No.9630 >>9627 As Strat said, Georgism is incompatible with modern industry, nor does it tackle capitalism (since your suggestion is simply to give the worker a tiny, insufficient means of subsistence in between employment by the capitalist.) That's why Marx wrote that George's ideas were "simply an attempt, decked out with socialism, to save capitalist domination and indeed to establish it afresh on an even wider basis than its present one." No.9632 >>9629 >Socialist consciousness is not going to be cultivated by competition to sell commodities. Except many successful socialist countries had markets to some degree.
>enough arable land for 7 billion+ people to become petty commodity producers 1) There's enough food as it is; it's a problem of distribution, not production. 2) I'm not advocating for Year Zero; most people will likely just rent out the land that they have the right to but aren't currently using. And people could organize more efficient boycotts by not selling a certain patch of land or refusing to renew its rent, attacking the very process of production, which is currently an exclusivity of syndicates. >those small plots you like so much will be gobbled up by those with capital Like I said earlier, the land is inalienable. Maybe I read it wrong, but in Georgism you can't sell your land. >>9630 What the Marxist critique of George misses is that under capitalism the accumulation of capital is also the accumulation of land. The control of corporations over the government has much more to do with land than capital. The kulaks represented a far bigger threat to the USSR than the white army or the royal family or factory owners. No.9633 >>9632 >currently an exclusivity of syndicates. I meant unions. 1) There's enough food as it is, it's a problem of distribution, not production. Forgot to develop it further: how can there not be enough arable land if there's more than enough food being planted right now? No.9634 >>9632 >The kulaks represented a far bigger threat to the USSR than the white army or the royal family or factory owners. That was due to economic backwardness, in which the bulk of the population was peasant-based and industry too weak in the 1920s to serve as a stable source of currency for imports of food. Also kulaks generally didn't have huge tracts of land to begin with. You're thinking of the nobility, who often preferred to leave large tracts lying completely unused. Kulaks loaned money and employed hired labor. No.9636 >>9632 >Except many successful socialist countries had markets to some degree.
Markets may have helped to develop productive forces in countries governed by Communist parties, but these markets are retreats from fully socialized production. Some countries frame these actions explicitly as retreats, such as Cuba and Vietnam. BTW, I noted that socialist CONSCIOUSNESS will not develop from the deepening of market relations. The aforementioned countries' CPs would agree with me. >1)There's enough food as it is, it's a problem of distribution, not producing. If there is enough food to feed the planet right now, which I cautiously agree with, it is only because of modern, large-scale farming methods. Note how late Soviet policy makers, who favored decentralizing the economy, fetishized small farms and were surprised to see large-scale farms as the basis of the US's agricultural prowess. >Like I said earlier, the land is inalienable. Maybe I read it wrong, but in Georgism you can't sell your land. Inalienable land is not a virtue when small-scale farming keeps agricultural production at a subsistence level. >The control of corporations on the government has much more to do with land than capital. This is untrue. Why do you think this is the case? What is capital in your eyes? No.9639 Ismail, what are your favorite generic arguments for socialism? I'm so used to arguing against socialists of different tendencies that it feels like I've forgotten how to actually make a proper argument for socialism. No.9640 Thoughts on /leftypol/ and /leftpol/? This is a question for /marx/ in general. No.9641 >>9639 Capitalism leads to unemployment, recessions, depressions and wars, and bourgeois democracy is largely undemocratic for ordinary people as well as susceptible to disintegration and the establishment of fascism. >>9640 I don't know what leftpol is, and I basically never look at leftypol, so I can't really answer.
No.9642 >>9641 /leftpol/ is another board populated by people who got pissed off when the BO of /leftypol/ started banning people for supporting the YPG, for being too critical of Assad, for being too critical of Iran, etc. No.9643 >>9640 I got kind of, sort of radicalized through leftypol but once I joined a party and started actually studying, it quickly became clear to me that the discourse at leftypol is at an extremely low level. I left three years ago and have absolutely no desire to return. In fact, I'm rather embarrassed at ever being sucked into that quagmire. I only hang out on /marx/ because I can find pdfs here and engage in civil discussion. Get off the internet and meet people in real life. >>9642 Sounds even worse than the normal leftypol crowd tbh No.9644 >>9643 Some of them are, some of them aren't. The quality of discussion at /leftypol/ has dropped significantly since a lot of good posters have been permanently banned over stupid shit like saying Russia is imperialist (whether you agree with that or not, it's not worth permabanning people over), but the discussion at /leftpol/ is not very good either. It's rather slow first of all, and it's filled to the brim with "infantile" anarchists/leftcoms. You know, the people who say socialism has never been tried, all socialist countries were "state capitalist", "not real socialism", "red fascism", etc. No.9645 >>9634 >You're thinking of the nobility who often preferred to leave large tracts lying completely unused Yeah, actually, my b. >>9636 >modern, large-scale farming methods. Note how late Soviet policy makers, who favored decentralizing the economy, fetishized small farms and were surprised to see large scale farms as the basis of the US's agricultural prowess. Large-scale farming isn't necessarily against the Georgist ideal.
Like I said, you can't sell your land but you could rent it out (and obviously, for agricultural purposes the rent contracts would be year-long). Also, and I'll admit I didn't do my homework on it, if Henry George didn't plan this, a Georgist government could create a system that allows people to trade different, equivalent pieces of land. That way, a well-organized group of people could get one big swath of land, owned by people who do not wish to return to it anytime soon, to create a large-scale farm that would operate for years. It'd be a nice inbuilt system to give an advantage to collectively owned farms, as opposed to huge farms owned by a single man. >Why do you think this is the case? What is capital in your eyes? I think landowners have a firmer grip on the government than capital owners because at the end of the day, they own the very territory on which the nation is based. If you want to build anything, you need to buy the land for it from a guy, and he'll gouge the government's eyes out because he knows that they can pay for it, and if they don't want to, he can just not sell it and leave it collecting dust. Big landowners can control how much a city can grow, which people can have land (not too dissimilar to housing segregation), and the production of food and the extraction of water and minerals. Most of the focus of inequality and social justice is on the financial side of it, but what I've managed to find so far seems to corroborate that land is directly related to wealth. Land inequality in the US: blacks vs. whites https://inequality.org/research/owns-land/ Land inequality in South Africa: blacks vs. whites http://www.plaas.org.za/sites/default/files/publications-pdf/No1%20Fact%20check%20web.pdf Which is not to say that capital isn't important, but which is to say that the function of capital is to acquire land, and not the opposite. It's no wonder that one of the richest "industries" today is financial speculation in housing and land.
Because at the end of the day, the dollar inflates or deflates, gold can be sold for more or less, but land's value always increases, and even while it goes unsold it is a weapon to control society at will. No.9647 >>9643 >>9644 Just post on both boards. The split is retarded. No.9651 >>8560 What is your opinion on the conflict between Kosovo and (the rest of) Serbia? Do you think the Serbian authorities are to blame for not giving Kosovo enough regional autonomy, or would you tend to side with the Serbians? No.9652 Ever heard of Ronald L. Meek? Any thoughts on the guy's work? No.9654 >>9652 Heard of him, haven't read any of his works nor have any thoughts about him. >>9651 I'll quote something I wrote elsewhere: >Kosovo is strongly identified with the early history of the Serbian state. There was an Illyrian presence during ancient times (Albanians consider themselves the descendants of Illyrians) and some Albanian presence afterward, but it wasn't until the Ottoman period that Albanians became a majority of the population. >Kosovo also had a role in the Albanian independence struggle, and it was assumed by those proclaiming Albania independent in 1912 that Kosovo would be part of it, but the Great Powers assigned it to Serbia. This resulted in a Kosovar Albanian guerrilla struggle that lasted till the mid-20s. >The interwar Yugoslav state treated the Kosovar Albanians like ass and tried to "correct" the demography of the region by importing Serbian settlers. This meant that during WWII many Kosovar Albanians collaborated with the Nazis. In the late 40s and 50s the Yugoslav government was very suspicious of lingering pro-Nazi sentiments, and so Kosovar Albanians didn't have a fun time. >The fall of Ranković in 1966 coincided with a more liberal atmosphere, but despite economic and social progress Kosovo remained the poorest part of Yugoslavia and most Kosovar Albanians wanted to either be a separate republic within Yugoslavia or to join Albania.
Chauvinist sentiments grew among both Albanians and Serbs. >By 1991 Albania's political instability and chaotic economic situation convinced most Kosovar Albanian politicians that Kosovo would be better off its own country, and this matched US designs to balkanize Yugoslavia. At this point you can consult Michael Parenti's To Kill a Nation (http://b-ok.xyz/book/2515178/c06bfb) for the role of Milošević (who appealed to Serbian national sentiment to obtain support) and rise of the Kosovo Liberation Army (which sought to ethnically cleanse Serbs and other non-Albanians living in the region.) Gibbs' First Do No Harm (http://b-ok.xyz/book/833719/f80663) is also useful. >Basically, the idea of an independent Kosovo is rather nonsensical. Former Serbian President Dobrica Ćosić argued that a solution would be to partition Kosovo so that Albanian-majority areas go to Albania and Serbian-majority areas go to Serbia, although this would mean the Serbs would have to sacrifice a huge part of Kosovo. >Modern Kosovo is notoriously corrupt and basically only serves as a NATO outpost. To directly answer your question though, Kosovar Albanians definitely felt that Milošević rescinded much of the autonomy they had. An optimal solution would have been to have Kosovo remain within Serbia with a level of autonomy which Albanians could get behind. Instead the disputes between Serbs and Albanians played into the hands of imperialism, which used the Kosovar Albanians as pawns. No.9655 >>9644 i got permabanned from /leftypol/ for saying you shouldn't call people the n-word. No.9656 >>9640 /leftypol/ is mostly pretty bad imo, there are insightful posts but you have to wade through the /pol/ tourists, the kids who just converted from fascism and think socialism will be good because there won't be any immigrants, people arguing topics which were resolved a century ago, etc. Good memes though. /leftpol/ is shit. 
No.9657 >>9655 At this point /leftypol/ isn't even the leftist version of /pol/ anymore. It doesn't consist of people who at least uphold some kind of goal they want to work toward (no matter how ridiculous that "goal" in the case of /pol/ may be), it's literally just kids being edgy for the hell of it. /leftpol/ is better, but I'm gradually coming to a point where /marx/ is the only imageboard I actively use. If only something along the lines of Revleft (and with a sizeable userbase) still existed, that would be great. No.9658 >>9655 >>9657 also fuck idiots who think /leftpol/ is the board for anarkiddies and /leftypol/ is the one for M-Ls. /leftpol/ was created for any people who are tired of /lefty/BO's antics, that is all. No.9659 Ismail, what the fuck is the "Bill of Rights Socialism" which the CPUSA and the DSA advocate for? I always thought that the Bill of Rights of the United States was created to limit the power of the government against private property. No.9660 >>9657 >If only something along the lines of Revleft (and with a sizeable userbase) still existed, that would be great. My forum (eregime.org) has areas visible to registered users to discuss politics and the bulk of the userbase are leftists, plus there are two active eRegime Discords (one of which is explicitly for leftists.) It isn't a replacement for RevLeft, but you may be interested in it regardless. No.9661 >>9659 Yeah, it's just a slogan the CPUSA uses to show how "all-American" it is. Like many revisionists they take a basic, uncontroversial concept (socialist countries will have differences in how their political and economic systems operate based on the national traditions and material conditions of each country) and then use it to justify deviating from the basic principles of socialism. Having said that, the Bill of Rights was actually adopted under public pressure.
See pages 27-28 of the following work: https://archive.org/stream/HumanRightsUSStyle/Human%20Rights%20US%20Style#page/n18/mode/1up It was, of course, bourgeois-democratic in character, not socialist. No.9665 >>9654 Thank you, this clears things up. Haven't several of the claims in To Kill a Nation (especially about the innocence of the Serbs) been debunked since its publication? Of course this doesn't make the book's analysis of Milosevic's policies or of NATO irrelevant, but it could be hard to draw the line between where Parenti is arguing in good faith and where not. >>9660 I made an account there a while back, this reminds me to actually go check it out. No.9667 >>9665 Parenti's book tilts a bit too favorably toward the Serbs, since the rest of the media was demonizing them while downplaying ethnic cleansers among the ranks of Croatian, Kosovar Albanian and Bosnian forces, but it's still a good read on the subject of US designs against Yugoslavia and the hypocrisy of its "humanitarian" motivations in bombing the country and portraying Milošević as a threat to world peace. As an aside, a collection of Chomsky's writings on Yugoslavia has recently been published: http://b-ok.xyz/book/3518902/69c625 (like Parenti, Chomsky was accused of "Serb apologism," which is even less justified than in Parenti's case.) No.9668 How come during May '68 the PCF chose to unite with the Social Democrats and tell the unionists and students to go home, when France... >Was in the middle of essentially a general strike >Workers had united with the socialist student groups on campus (Maoists / anarchos / Situationists, etc.) and had basically formed a separate provisional gov from the French gov >Massive unrest over the entire country But then they just told everyone to go home, wtf?
It was literally perfect conditions for a revolution No.9669 >>9668 The PCF and PCI did make major missteps by disarming their partisan armies after WWII and thinking they could comfortably work within bourgeois constraints, but I'm not so sure the state could have been overthrown in '68 because the working class didn't have military hegemony. I think it's important to note that the army was staunchly allied with De Gaulle and there were significant numbers of fascists in militarized groups such as l'Organisation armée secrète who had carried out massacres in Algeria just six years before and would not hesitate to do so again. A full-scale uprising could have been a bloodbath like Chile in '73, perhaps on an even larger scale. The PCF's leadership does deserve a lot of blame. A revolutionary situation could have developed had the PCF refused elections or boycotted. Their calculus was as follows: would the consequences of a boycott be worth the potential violent backlash? 3 million Indonesian communists and progressive nationalist forces were slaughtered in '65-'66. You have to imagine that that memory was fresh in the minds of CPs everywhere at the time. Again, I'm not saying I agree with their decision. Just trying to express the nuance and difficulty of the choice presented at the time. No.9673 >>9668 For a defense of the PCF's actions during May 1968 in France, see pages 346-365 of the following work: https://archive.org/stream/NewTheoriesOfRevolution/New%20Theories%20of%20Revolution#page/n172/mode/1up (the author was a CPGB member) Also, in the view of pro-Soviet parties back then, Maoists were little different from Trotskyists and Anarchists: petty-bourgeois radicals with no real working-class base, whereas social-democratic parties were viewed as part of the workers' movement despite the pro-bourgeois line of their leaderships.
That is why the PCF stood apart from the Maoists (who in turn heaped scorn on the pro-Soviet "revisionists") while seeking to work with the Socialist Party. No.9675 >>9673 That seems like a weak justification. How do you stand on this? Even if one thinks Marcuse's accusations against the PCF were overblown, the fact that the PCF argued for "more nationalization", "higher wages" and for more liberal democracy makes them literally SocDem reformists who completely missed the opportunity in 1968, especially with the unions apparently controlled by the PCF. No.9676 >>9669 France had certainly shown its willingness to repress protests with appalling brutality (by Western European standards): https://en.wikipedia.org/wiki/Paris_massacre_of_1961 But to think the reaction would have been in any way similar to that in countries like Indonesia or South Korea would be a bit far-fetched. The rest I agree with, but it seems no surprise to me that the PCF is now a SocDem party working with the EU. No.9677 >>8560 Hello Ismail, what do you think are the best points to bring up when arguing against people who claim: 1) that individual consumers bear basically the same amount of responsibility for the climate crisis as multinationals? 2) that neoliberal-style cuts to, for example, education are a "financially necessary" measure and that it simply isn't "viable" not to make them? Especially looking for an answer to the second one, since I think I can tackle the first fairly well (but am just looking for important elements that I might forget to stress). No.9679 >>9677 1. It's blaming ordinary people (workers, farmers, petty-bourgeoisie) for the policies of capitalists, and is dumb for the same reason as the libertarian "if [insert corporation] is acting bad, then consumers will simply stop buying its products."
That is hardly an option when said corporation can provide goods in greater abundance and more cheaply compared to its competitors (assuming its competitors are even in the area the boycott is taking place in.) 2. It's an excuse used to dump the economic problems caused by capitalism (recessions, etc.) onto ordinary people. It also shifts blame away from governments on the local, state and national level which do the bidding of the capitalist class by imposing austerity measures on the working people and pretend they have no choice. For example, Michael Parenti noted what the national government could do to promote economic growth independently of private capital: >Initiate a massive federal employment program that would shift our public wealth away from empire and toward rebuilding the republic. In 1994, Representative Matthew Martinez (D-Calif.) introduced a $300 billion jobs bill to tackle the "highest rate of unemployment" since the 1930s. A Works Project Administration (WPA), more encompassing than the New Deal one, could employ people to reclaim the environment; build needed industries, affordable housing, and mass transportation systems; rebuild our parks, towns, cities, and a crumbling infrastructure; and provide services for the aged and infirm. >People could be put to work producing goods and services in competition with the private market. The New Deal's WPA engaged in the production of goods, including manufacturing clothes and mattresses for relief clients, surgical gowns for hospitals, and canned meat, fruits, and vegetables for the jobless poor. The kind of not-for-profit direct production to meet human needs brings in revenues to the government both in the sales of the goods and in taxes on the incomes of the new jobs created. Eliminated from the picture is private profit for those who live off the labor of others—which explains their fierce hostility toward government programs that engage in direct production.
The government subsidizes corporate interests at public expense. >Needless to say, these reforms are easier said than done. They remain undone and largely untouched not because policymakers never thought of them. Rather it is that those who desire reform have not the power and those who have the power have not the desire for reform. If anything, they have a furious hostility toward those changes that democratize the economy and infringe upon their capital expropriations. No.9680 >>9676 >a bit far-fetched Like I said, I don't agree with their decision; France is not Korea. >>9675 >the fact that the PCF argued for "more nationalization", "higher wages" and for more liberal democracy makes them literally SocDem reformists that completely missed the opportunity in 1968, especially with the unions apparently controlled by the PCF. I think your essential question is "why did the PCF become a Eurocommunist party?" That is a complex question and I don't know if I have the answer you're looking for, but I'll try. The way I see it, mass parties have a tendency to slide towards economism. Without strong ideological leadership and ML discipline, parties with a broad basis in the working-class movement tend towards mimicking the average consciousness of the proletarian members of their party. Lenin constantly warned against this backsliding, urging the RSDLP (B) to develop cadres and encouraging other parties to "Bolshevize" as a way to prevent ideological degeneration into economistic social-democratic parties that were social-imperialist. This process of Bolshevization was difficult in non-revolutionary situations where mass politics were dominated by reformism. Revolutionary situations could have been forced in Italy and *possibly* France immediately after WWII, when the masses were solidly behind the anti-fascist fronts led by the CPs of those countries.
After making the mistake of disbanding their armies and participating in the elections of 1948 in Italy, which the CP lost because of American tampering, that revolutionary situation dissipated. The CPs couldn't operate in a new moment, leaving ideological commitment to revolutionary class struggle by the wayside, slowly abandoning Leninist organizing principles and adapting themselves to a long period of bourgeois legalism. It is only natural that during this period many people would stray away from the CP. Trying to cling onto their mass base meant making serious compromises to the lowest common denominator of their membership. No.9681 >>9675 >makes them literally SocDem reformists I disagree. Communist parties don't campaign on "vote for me and we'll start a revolution." They campaign on struggling for ways to better the working class materially and organizationally, always having in mind the goal of putting an end to capitalism and establishing workers' power. Reformism is generally considered to entail one or all of the following: that revolutions are harmful to socialist development (ergo they're to be opposed), the equation of socialism with "reformed" capitalism, the rejection of proletarian control over the state apparatus. None of this characterizes the PCF, at least not in 1968 (in 1976 the party practically renounced the dictatorship of the proletariat in favor of "socialism in French colors.") If May-June 1968 did not constitute a revolutionary situation, then the PCF was not wrong in refusing to treat it as one. If, however, there was a revolutionary situation then either the PCF simply miscalculated or the PCF leadership were opportunists and de facto reformists fearful of revolution. While the PCF clearly did have those elements within its leadership, I don't think that suffices to explain its behavior in '68. >>9680 It's worth noting also that there was a close correlation between what the CPSU advocated and what these parties called for.
When Togliatti and Thorez disbanded the armed forces of their respective parties, they did so because it was in line with the CPSU's own views on what the Western European parties ought to do in the immediate postwar period. Even Earl Browder assumed the Soviets would support his disbanding the CPUSA into the "Communist Political Association." He soon found out they didn't. This close identification with the CPSU was a major reason lots of pro-Soviet parties collapsed circa 1991, since they were not just ideologically invested in the continued existence of the USSR but were often financially dependent on it as well. And to receive subsidies and praise from the CPSU required closely following its line. No.9682 >>9681 >It's worth noting also that there was a close correlation between what the CPSU advocated and what these parties called for. Yes, good point. By subordinating the policies of allied CPs to the USSR's own interests, the CPSU hindered the development of revolutionary situations in many parts of the world, e.g. China in the 20s-30s, Germany in the 30s, Western Europe from the 40s on, Korea in the 40s and 50s, India in the 40s, inter alia. I would say that Stalin personally bears a lot of that blame. This raises the question: given that checkered past, is a Communist International, as a concept, feasible or worthy of recreation? My answer would be absolutely, but the Soviets after Lenin did not use that institution correctly and we must learn from their mistakes to create a thoroughly internationalist organization.
The Comintern's most valuable function was to "bolshevize" parties at a time when Leninism was only vaguely grasped and many prominent communists were imbued with social-democratic or syndicalist errors. When it came to organizing revolutionary action though, the Comintern never seemed to have any success. Its leaders regularly misjudged situations on the ground and adopted "one size fits all" prescriptions (which led to silliness like W.E.B. Du Bois being denounced by the CPUSA as a "social-fascist" during the Comintern's "Third Period.") I think the rationales for disbanding the Comintern were sound. From Dimitrov's diary: May 8, 1943: >Went to see Molotov tonight, together with Manuil[sky]. We discussed the future of the Comintern. Reached the conclusion that the Comintern as a direct[ing] center for Com[munist] parties in the current conditions is an impediment to the Com[munist] parties' independent development and the accomplishment of their particular tasks. Work up a document dissolving that center. May 21: >—Politburo meeting in Stal[in]'s office. Along with members and candidate members of the PB, Manuilsky and I also attended. >Molotov reads out the ECCI presidium's resolution on dissolving the Comintern. >Kalin[in] remarks that our enemies will take advantage of this step. It would be better to make attempts to transfer the CI center to some other place—London, for instance! (Laughter.) >Stal[in] explains that experience has shown that in Marx's time, in Lenin's time, and now, it is impossible to direct the working-class movement of all countries from a single international center.
Especially now, in wartime conditions, when Com[munist] parties in Germany, Italy, and other countries have the tasks of overthrowing their governments and carrying out defeatist tactics, while Com[munist] parties in the USSR, England, America and other [countries], on the contrary, have the task of supporting their governments to the fullest for the immediate destruction of the enemy. We overestimated our resources when we were forming the CI and believed that we would be able to direct the movement in all countries. That was our error. The further existence of the CI would discredit the idea of the International, which we do not desire. >There is one other reason for dissolving the CI, which is not mentioned in the resolution. That is the fact that the Com[munist] parties making up the CI are being falsely accused of supposedly being agents of a foreign state, and this is impeding their work in the broad masses. Dissolving the CI knocks this trump card out of the enemy's hands. The step now being taken will undoubtedly strengthen the Com[munist] parties as nat[ional] working-class parties and will at the same time reinforce the internationalism of the popular masses, [an internationalism] whose base is the Soviet Union. (Source: Banac, Ivo (ed.). The Diary of Georgi Dimitrov: 1933-1949. New Haven: Yale University Press. 2003. pp. 270-271, 275-276.) I do support the idea of international conferences, seminars, and organizations where communist parties and whatnot can agree on common approaches and share their knowledge and experience. But not a centralized entity like the Comintern. No.9685 >>9684 You're right. I suppose my vision of a reformed international is similar to the theoretical-ideological workshop you are proposing but with some funding included. For example, countries that have had a revolution can pay any amount they would like into a fund that is independently administered by elected representatives from the various CPs.
This money could be used to fund activities where the international movement needs it most. Say there is a major general strike in South Africa. The money from the "New International" could be used for the strike fund, for books to the striking workers, or something similar to that. Dimitrov's point about playing into the hands of bourgeois propaganda regarding "international communist conspiracy" is well-taken, however. >The Comintern's most valuable function was to "bolshevize" parties at a time when Leninism was only vaguely grasped and many prominent communists were imbued with social-democratic or syndicalist errors. I recall how the Comintern had to almost single-handedly stitch the CPUSA together in the early days... I know this is idle speculation but don't you think that if a major industrialized country had a socialist revolution tomorrow the movement would be in a similar place to 1919? Perhaps it's impossible to say from this vantage point. No.9686 >>9685 >don't you think that if a major industrialized country had a socialist revolution tomorrow the movement would be in a similar place to 1919? It depends. What made the Third International unique is that it sought to popularize Marxism-Leninism, a doctrine initially not very well understood, whereas the Second International popularized Marxism (which its founders generally well understood) and the First International was composed of various ideologies popular among the working-class. So unless some guy comes along and we end up with Marxism-Leninism-Someguyism, I don't think the need to "bolshevize" parties will be as strong as it was in the 1920s. We have the last 100 years of Leninist experience to draw from. >Dimitrov's point It was actually Stalin's, but yeah. No.9688 board question: any thought to making 'general' type threads pertaining to specific topics?
would perhaps bring more discussion to the board outside of this q&a thread No.9689 Can you TLDR the economic system of Kadar's Hungary for me? Was it M-L? or not? What was the Balance between Public and Private Enterprises? No.9690 >>9688 Any examples? >>9689 It meant you could open small businesses and there was a limited market economy. A while back I scanned a book written by a Hungarian economist involved in creating the New Economic Mechanism (i.e. the system Kádár introduced in the late 60s), although the book came out at the end of the socialist period and the author is therefore quite critical of the whole project: https://drive.google.com/open?id=1Lcm9UaHi9N4WhdHs2lBwiVd8bkQc8Dzo >Was it M-L? or not? I don't know what "M-L" means in this context. It was a socialist economy, just like the USSR, GDR, Cuba, China, Yugoslavia, DPRK, etc. The ruling party explicitly identified itself as Marxist-Leninist. >What was the Balance between Public and Private Enterprises? The government still gave priority to investments in heavy industry, and public enterprises still completely dominated the economy. The small businesses that were allowed to open up functioned more like businesses in modern Cuba (self-employed hairdressers, restaurants with like three employees, etc.) rather than major businesses. No.9691 >>9688 Don't we already have generals? There's one for Albania, one for China, and one for reading Capital. No.9692 >>9690 So basically the state Maintained dominance over Industry / Agriculture / Natural resource extraction but some Petite bougies were allowed to exist in the Service sector / Consumer good sector? No.9693 >>9692 Yes, and enterprises were given more autonomy compared to their counterparts in other Warsaw Pact countries. No.9699 How powerful and prominent were the Stasi in the GDR? Did people have less liberty due to an unnecessary secret police?
No.9700 Did Stalin really take some weird Right-Wing turn at the end of his life and try to Purge / Forcibly deport Jews to the Jewish-AO? Or is that just more Khrushchev power feud / Secret speech shit? No.9701 >>9700 Khrushchev didn't mention Jews in his "Secret Speech." Life for the Jewish intelligentsia did become difficult in the late 40s and early 50s, and there were unjust executions whose victims were rehabilitated after 1956, but as far as the specific charge of Stalin wanting to deport Jews to Siberia, I think Zhores Medvedev (a Soviet-era dissident historian, no reason whatsoever to make Stalin look good) wrote a decade ago that Stalin didn't intend to do so. >>9699 Most East Germans thought the Stasi a minor annoyance which never caused them trouble. If the Stasi thought that someone was liable to get involved in anti-state activities, it would warn them what they were potentially getting themselves into. The Stasi had a huge network of informers and collected an often absurd amount of information on people it suspected of anti-state activities, so much so that it couldn't really keep up with the sheer number of reports. To give an example of the sort of methods used as revealed by the opening of Stasi archives after 1990, here's the testimony of someone who belonged to the tiny, clandestine East German section of the KPD/ML, a pro-Albanian party active in West Germany. >Workplace assessments were collected in a conspiratorial manner. The earliest assessment of me was one from the 6th grade. Furthermore, all possible assessments had been compiled, that had been ascertained from various sources (e.g. unofficial collaborators), whether in my neighbourhood, workplace, etc. Medical reports had been compiled and hobbies had been ascertained. Postal surveillance had been initiated. My whole family circle had been included in the OV. >It did not take long until the first 'conspiratorial search of my house' was made. 
>Such a house search was performed in an extremely careful manner. All residents of the house were 'screened' before the search. Files were made of each of them. The break-in was planned precisely so as 'not to be interrupted,' i.e. at that time no resident of the house was to be home. Each resident was closely observed. Fictitious cadre discussions were set up, doctor visits were scheduled, etc. . . >The purpose of such a house search was to collect information about me. Photographs were made of the apartment, of books and letters. Newspapers and books were listed, etc. . . >In the mid-1970s, my apartment was bugged. Initially there were technical problems, and an announced renovation of my apartment really made the eavesdroppers sweat. Only with great difficulty did they succeed in time in removing the bugging devices for a few days. From that time on, the sound recording devices of the MfS were working. In the following period, apartment break-ins were done again and again. The MfS made master keys, and thus I had, so to speak, a 'public' dwelling. . . >At the same time, I was visually observed: the round-the-clock spying because of my 'negative attitude' toward the GDR can be read as follows: >'5:10 AM - Afro (the investigation name given me by the [Stasi]) gets up. >5:11 - Shaves, brushes teeth. >5:15 - Oase (my wife) gets up. Yawns. >5:15 - Listens to West German radio - NDR 2 [North German Radio channel 2]. >5:18 - Oase does her morning toiletry. >5:29 - Afro leaves view. >5:44 - Oase leaving view. >5:45 - Silence reigns in the area. (...)'. >One observation team 'accompanies' me to the workplace. There I am under 'watch' by the IMs 'Bernd' and 'Dreher' (turner). 'No special incidents', is the succinct comment. >'2:00 PM - At the main gate. Observation continues. >2:32 - Afro strolls along the street. >2:39 - Afro enters a konsum [state-owned food store]. Buys 11.47 marks worth of goods. >3:01 - Afro enters into view. >3:04 - Afro listens to Albanian music. 
>4:00 - West German TV is on. A programme about F.J. Strauss is on ARD [working pool of broadcasting corporations of the Federal Republic of Germany]. >4:10 - Afro laughs at Strauss [Prime Minister of Bavaria]. >4:13 - Afro criticizes comrade Erich Honecker [President of the GDR State Council]. (...) >10:39 - Silence reigns in the area.' No.9702 What's your Opinion on Subcontinent Communists? (the Nepalese Communist's / the *Multiple* Indian Communist-P's / Bangla WP etc) Do you subscribe to the idea of them simply being Social-Democratic Parties at this point? No.9703 File: 6b02195e1eee1a9⋯.png (183.63 KB, 1796x357, 1796:357, 1538039138977.png) >>9699 First off, this is a very good post someone made on /leftypol/. Secondly, here is an archive of sources with former MfS officials, if you speak German: https://www.mfs-insider.de/ No.9704 >>9701 Short correction: Konsum wasn't actually state-owned, it was cooperative property. https://en.wikipedia.org/wiki/Konsum_(East_Germany) >After German reunification in 1990, Konsum was not dissolved, as it was not a state-owned enterprise like Handelsorganisation. Today, it exists in limited capacity, mostly as a real estate management cooperative. It actually still exists in Germany today. I went past one yesterday. No.9705 File: 8018ba374bca8cc⋯.jpg (21.41 KB, 500x380, 25:19, X-Everywhere.jpg) Thoughts? What of these "moral certificates"? No.9706 >>9704 You're correct, although the brackets calling it "state-owned food store" were in the original source, I didn't add it. >>9702 I have no opinion since I haven't read up on them. The CPI-Marxist apparently has problems with corruption, but besides that I simply don't know. >>9705 >What of these "moral certificates"? Guevara was a big fan of moral (rather than material) incentives for increasing productivity. It's one of the reasons he felt himself more in line with China and the DPRK rather than the USSR.
Cuba's economy in the 60s was big on moral inducements to work hard, and Castro temporarily revived Guevara's moral incentives stuff in the late 80s to cope with economic decline caused by reduced Soviet trade, but as you might guess neither produced the intended results. For a sympathetic account of Guevara's economic views, see: http://b-ok.xyz/book/929498/f0177f No.9707 >>9701 > Most East Germans thought the Stasi a minor annoyance which never caused them trouble. If the Stasi thought that someone was liable to get involved in anti-state activities, it would warn them what they were potentially getting themselves into. For the Stasi, what was usually regarded as anti-state activity? If the surveillance was on such a large scale what made people largely tolerant of it rather than worried? No.9708 >>9707 >what was usually regarded as anti-state activity? Trying to spread anti-socialist sentiment among the population, distributing anti-socialist or anti-state materials, divulging confidential information, etc. Basically standard stuff that the KGB and other intelligence agencies also guarded against. >If the surveillance was on such a large scale what made people largely tolerant of it rather than worried? Because unless they went out of their way to denounce socialism or engage in similar acts, they didn't really face problems. It isn't like if someone said unkind words about Honecker to a coworker, they'd suddenly end up arrested a day later, tortured, and imprisoned for decades. If the coworker was an informant, the Stasi would simply start investigating and compiling a file on the person who said those words. Here are two standard bourgeois works on the Stasi: No.9709 >>9706 CPI-M panders to nationalism and neoliberalism and has subverted itself to bourgeois parties in India.
http://www.revolutionarydemocracy.org/rdv13n1/cpim.htm also worth noting that CPI-M has never once in its history had a member of the Dalit caste as part of its politburo. 3 of its 5 general secretaries have been brahmins. so how can a communist party be so infected with caste chauvinism and call itself marxist? they preach "it's about class, not caste" it strikes me (an american) as being similar to an american communist party which excludes blacks and chicanos from its decision-making bodies, then preaches to them "it's not about race, comrade, leave it up to us petit bourgeois white men to make the decisions" No.9710 What's your opinion on Rojava / The SDF? Do you think their continued Involvement is just Pragmatism similar to the USSR during the Second World War or do you think that's just an excuse for Imperialist Collaboration? No.9711 >>9710 I think the Kurds of Iraq, Turkey, Syria and Iran have legitimate grievances. The problem is that the imperialists, as well as each of these states, manipulate Kurdish groups for their own ends, and the leaders of these groups readily enter into "pragmatic" relationships with anyone who is willing to send them arms. This has resulted in an inglorious record of betrayals and opportunism. The alliance between the USSR, US and UK was between three major countries. It was conducted in conditions of equality; such collaboration did not endanger the Soviet peoples. When Kurdish groups accept US bases in Syria, and promote US involvement in that country, they are assisting American imperialism which is against the interests of both Kurds and Arabs. No.9712 I'm pretty much an ML (although I don't care to uphold every dusty old Soviet dogma) but I've realized I don't really give a shit about communism. Here I mean communism as in "full communism", "mature communism", a stateless, classless, moneyless society. It feels too abstract, too far away and disconnected from the real issues that people face today.
It's over 100 years since the Russian revolution and nowhere has full communism come close to being achieved on a large scale. I just want a socialist state where the productive forces are under collective control and ownership on both the local/workplace and the national level, where there's not insane wealth inequality, where politicians are not compelled (directly or indirectly) to work in the interests of the capitalist class despite the appearance of democracy, where there is true equality of opportunity, an end to imperialist wars, etc. I could care less about an imaginary perfect society at the end of history. Am I still a communist/marxist or is this some unforgivable heresy? Are there any marxist theorists/authors who have expressed skepticism of the communist end goal? Also what's wrong with what Eduard Bernstein said about caring more about the movement than the end goal? If we only cared about the end goal, communism could safely be declared a failure by now, as it's never been achieved despite endless attempts during the 20th century. The reason I don't think it's a failure is precisely because I think the movement is more important than the end goal. The various socialist and communist movements of the 20th century achieved a lot even though they didn't achieve communism. Anti-fascism, anti-war activism and anti-imperialism, reducing poverty, bringing land reform, health care, education and other social services to populations that had never had such things before, etc, etc. The "direction" (if that makes sense) is much more important than any predetermined utopia or end goal. No.9714 >>9712 >If we only cared about the end goal, communism could safely be declared a failure by now, as it's never been achieved despite endless attempts during the 20th century. 
how the fuck is it supposed to be achieved when capitalist and imperialist powers still exist No.9715 >>9714 >how the fuck is it supposed to be achieved when capitalist and imperialist powers still exist that's kind of the point. read the entire post. No.9716 >>9712 >It's over 100 years since the Russian revolution and nowhere has full communism come close to being achieved on a large scale. That's because it isn't possible to simply will "full communism" into existence. It presupposes a level of development of productive forces far in excess of anything seen today. >Also what's wrong with what Eduard Bernstein said about caring more about the movement than the end goal? Because it was an excuse to junk the establishment of a socialist society altogether by equating "socialism" with reforms to capitalism. Bernstein's revisionism became the theoretical basis for the degeneration of the SPD, which culminated in it officially abandoning Marxism in the 1950s. He rejected class struggle, denounced revolutions as harmful, ridiculed Marx's claims that capitalist industry would be in the hands of monopolies, etc. >achieved a lot even though they didn't achieve communism Sure, but they achieved these things because they kept to a line of representing the interests of the working-class, whereas Bernsteinism represented the forerunner to modern-day, right-wing social-democracy with its fear even of partial nationalizations, its claims to represent "society" as a whole, support for "humanitarian" imperialism, etc. Marxists have always emphasized the importance of day-to-day demands. The difference is that they understood such demands were part and parcel of the struggle for socialism. 
When "the movement" is everything and the end goal nothing, the leading role of the working-class and its independent class position are compromised; you end up with a bunch of isolated "goals" amounting to minor reforms to capitalism, and will try to achieve these through whatever way seems most expedient (which is how you end up with "progressive" groups in the US tailing the Democratic Party because "how else can we get better health care," and being willing to overlook or excuse its reactionary foreign and domestic policies because "we're going to make progress on [insert issue here] unlike if Republicans were in power," etc.) Post last edited at No.9717 >>9712 >I've realized I don't really give a shit about communism. There is never going to be a moment when you wake up one morning and are suddenly living in a communist society, i.e. one that has abolished states, is moneyless, is classless, has abolished racism,disability and gender oppression. It will take decades, maybe centuries of struggle to reach that point. Think of how long it took capitalism to overthrow feudalism the world over- and even still there are relicts of feudal ideology in the superstructure of American life, namely extreme religiosity and gender oppression. The communist society does not develop spontaneously out of a capitalist one, or even a socialist one, but must be consciously constructed. I would say that one of the reasons that soviet socialism failed was because revisionists (a term I use hesitantly if only because of its continued misuse by ultralefts) believed that they could construct a communist society by simply out producing the West. There was a belief that Soviet citizens, going about their daily lives, could create a communist society. This in practice demobilized the masses and took them out of active political life. 
Bureaucracy and undemocratic practices by the CPSU played a part as well, allowing anti-popular cliques to establish a power base in the party and force through their agenda against the will of the people, but if the masses were properly drawn into the real political life of the country, as Lenin advocated, then tearing down the state would have been impossible. >Am I still a communist/marxist or is this some unforgivable heresy? You're still a marxist, but you are a little demoralized. >Also what's wrong with what Eduard Bernstein said about caring more about the movement than the end goal? I think Ismail responded to this point very well. I just want to add that there is a dialectical relationship between movement and end goal. These seemingly separate phenomena are really a unity. Having an end goal without a movement or plan to reach it is ultraleftism; being devoted to the movement without consciously directing your activity to reach the end goal is empty right-opportunism (aka movementism or economism). Understanding what the end goal is, making an objective appraisal of the concrete conditions within which you are struggling and taking the requisite steps to advance towards the end goal is what constitutes a scientific socialist plan of action. Let's bring in the discussion on Cuba and material incentives vs. moral incentives (see here: >>9706) What was the link in the chain that needed to be grasped during the 60s and 80s in Cuba? It was the low level of the productive forces in the country. Ismail is right in saying that moral incentives did not work during that period. For creating the material basis for a socialist society, material incentives are necessary. For deepening the process of socialist construction, for mobilizing the masses, for developing the consciousness of the people and laying the basis for a communist society, moral incentives are necessary. 
I think this applies to your discussion because the socialist state needs to educate and explain to the people that the next stage of their development is the abolition of work AS work. A CP can't just say to its people, "Look at how much you have! You should be happy!" It must lead the people to the next stage of development. That means that we have to give a shit about communism as an idea. >It feels too abstract, too far away and disconnected from the real issues that people face today. That's because the question of building a communist society is not on the near-term agenda of an ML party in a capitalist state. For the people of the USSR in the early 80s, the question of "how will we get to communism?" was a pressing question that needed to be answered. There was a series of interviews with Buzgalin and the Real News about this very topic earlier this year. Buzgalin explains that the contradiction of soviet society in the 80s was the development of the objective basis for a communist society but the lack of the subjective factor necessary for the transition to take place. The CPSU acted as a brake on the activity of the class-conscious workers and fostered consumerist ideology which decayed the socialist consciousness of the working class. The fundamental contradiction became too much for the USSR to handle. I think the above examples prove that communism is an important idea and is even realizable given the correct conditions. We are currently not living under those conditions but if you want to be theoretically sound, you must understand the importance of communism to the development of our ideology. No.9718 No.9719 >>9718 What sort of opinion are you looking for? Thoughts on how the USSR could have made labor more productive? No.9720 >>9719 just on whether or not that article is genuine or not. i had an anti-communist link it to me so i'm sort of sceptical about it and was hoping Ismail could post some stuff about productivity in the USSR.
No.9721 >>9720 Yes, it's genuine. A while back I quoted a work on the hoarding of labor by Soviet enterprises: >>7708 In this thread I've also noted the observation of Mike Davidow, a CPUSA journalist who lived in the USSR (and continued to defend it after 1991): "From what I observed [in the 1970s-80s], there was a shocking waste of labor power almost everywhere—far too many performing the labor a fraction of their number could do. One worker would be actually engaged in making repairs while two others would be looking on." (Davidow, Perestroika: Its Rise and Fall, 1993, p. 8.) No.9722 Do you have any readings on the Lebanese Civil War? No.9723 >>9722 I can't think of any good books all about the Civil War, but one book that does discuss it and which I've seen praised is "A History of Modern Lebanon" by Fawwaz Traboulsi: http://b-ok.xyz/book/876225/03a9e1 "Israel's Lebanon War" by Ze'ev Schiff and Ehud Ya'ari is considered a good exposé of the 1982 Israeli invasion. No.9725 >>9720 It seemed to me that the article took a favorable view of the Soviet economy. Even with some bureaucratic distortions, it was a superior form of organization compared to the most highly developed capitalist state, the US. And the USSR was able to develop itself on its own, without exploiting the resources and labor of half of the world. No.9727 File: 0e3a29e9366daea⋯.jpg (77.4 KB, 1017x800, 1017:800, horridflag.jpg) to what extent can we call conquests like those of napoleon, mongols, abbasids, etc "progressive"? just reactions to material conditions or fundamentally base-changing? is military conquest inherently reactionary? No.9729 >>9727 Coincidentally, I just scanned a collection of Lenin's writings discussing what constitutes just and unjust wars: https://archive.org/details/LeninJustUnjustWars To quote him, "A national war might be transformed into an imperialist war and vice versa. 
Here is an example: the wars of the Great French Revolution began as national wars and indeed were such. They were revolutionary wars—the defence of the great revolution against a coalition of counter-revolutionary monarchies. But when Napoleon founded the French Empire and subjugated a number of big, viable and long-established national European states, these national wars of the French became imperialist wars and in turn led to wars of national liberation against Napoleonic imperialism." Soviet historians argued that Napoleon's conquests did have progressive consequences, insofar as Napoleon's men forced a legal code on conquered territories that weakened feudal elements, but this was still in the context of aggressive campaigns to subdue rival countries and peoples. You can find a Soviet analysis of the French Revolution and Napoleonic period here: https://archive.org/details/ModernHistory16401870 This is how the Great Soviet Encyclopedia describes the Mongol conquests: "The Mongol conquests were a calamity for the peoples of Asia and Eastern Europe. They were accompanied by wholesale slaughter, the devastation of large areas, the razing of cities, and the decline of farming, particularly in irrigated areas. The conquests long retarded the socioeconomic and cultural development of the countries that had been incorporated into the Mongol feudal empire." This was a source of disagreement between Soviet and Chinese historians, the latter arguing the Mongol conquests were historically progressive. As for the Abbasids, the Great Soviet Encyclopedia states: "In the Abbasid Caliphate, as in the Umayyad one, feudal relations predominated, preserving powerful slave-owning and patriarchal ways. . . The intensification of feudal oppression in the Abbasid state and the rise in taxes provoked numerous popular uprisings. . . The process of disintegration of the Abbasid state intensified in the ninth century. . . 
The Abbasids tried to counteract the reduction in revenues resulting from these processes by a redoubled exploitation of the remaining regions, which led to the regions’ economic decline." No.9730 File: b843dbc2ef073a3⋯.png (174.39 KB, 485x345, 97:69, ErichHoneckersSideJob.png) >>9729 good answer and thanks for the books. i think the mongol empire is an interesting one. genghis basically attracted his followers by appealing to lower classes and having them defect from other tribes, then bureaucratized the nomadic tribes and created a state. of course then you have unprecedented mass killings in expanding that empire...i read something like iran's population didn't reach the same levels until the 20th century. No.9731 >>9730 You might be interested in this joint Soviet-Mongolian academic history that I scanned a while back: https://archive.org/details/HistoryOfTheMPR (despite the title it's a history of Mongolia from ancient times to the early 70s) No.9733 >>9731 thanks. will check it out. keep up the good work! No.9734 File: 0a6ce427886f8f3⋯.jpg (611.06 KB, 2272x1496, 284:187, 1990_01_12_GorbačiovasŠiau….jpg) why were there so many anti-socialist protests in the late 80s? i get that there were reforms and stuff under Gorbachev but most of the unrest seemed to be towards the Soviet system in general. No.9736 >>9734 As a result of Glasnost, restrictions on talking about historical subjects were lifted. This meant that both academics and ordinary people began speaking about the more controversial aspects of Soviet history: dekulakization, the Great Purges, the WWII deportations, etc. This climate encouraged both "victims of communism" (those who suffered from or knew those who suffered from the aforementioned events) as well as anti-communists to protest the system. 
And since so many issues were bottled up for decades, Soviet journals and magazines were now flooded with interviews with academics, eyewitnesses and the like, plus recently-released material from Soviet archives, which anti-communists took advantage of. They'd argue the CPSU lied to the people and that it tried to shift blame on one man (Stalin) rather than examine the role of the party and its founder in anything anti-communists considered bad. It also didn't help that the USSR was beset in the late 80s with economic, ecological and societal woes which Gorby's policies weren't exactly fixing. So anti-communism had a fertile base to grow, and by 1990 you had many academics more or less openly attacking Leninism and even Marxism as either outdated or fundamentally wrong. If you read Gorby's speeches from 1987 onward he was often put on the defensive and had no idea how to stem the rising anti-communist tide (which his own inept policies and slide towards social-democracy contributed to.) An example is his October 1987 speech commemorating the 70th anniversary of the revolution, which when evaluating the past doesn't sound drastically different from something Khrushchev or Brezhnev would have said. Here are excerpts I've written down: https://pastebin.com/uEDZGgHq Gorby in that speech used the same argument that Khrushchev and Brezhnev would have made: whatever Stalin did, whatever bad things happened, the CPSU didn't really bear responsibility. Many citizens by the late 80s saw that argument as dishonest and self-serving, and this fed into anti-communist sentiment. No.9739 Best biography on Castro? No.9740 >>9739 "Fidel: A Critical Portrait" by Tad Szulc is generally considered the best bio, as far as I know. And of course there's Castro's autobiography: http://b-ok.xyz/book/1062200/d8e6c4 No.9741 why didn't cpsu just copy "reform and opening up" (socialism w/ chinese characteristics) rather than deciding on glasnost and perestroika?
No.9742
>>9741
Nowadays it's very obvious that Deng's reforms were successful. That sort of obviousness didn't really exist in 1985; there was still plenty of skepticism. Also, there were many who argued that the CPC's economic reforms would reach a dangerous impasse due to lack of "political reform." This is why when Tiananmen happened, Gorbachev told associates, "Some of those present here have promoted the idea of taking the Chinese road. We saw today where this road leads. I do not want Red Square to look like Tiananmen Square." Of course, the CPC was able to overcome the problems relating to that, whereas the CPSU entered into ever deeper crisis and the USSR itself broke apart two years later, but for a brief period it seemed that Deng's reforms were either about to be reversed under pressure by "conservative" (i.e. anti-reform) elements, or the CPC's days in power were numbered. Then there's the simple fact that the USSR wasn't China. The Soviet workforce was structured differently, was less numerous, and would not have worked at the same wages as their Chinese counterparts, and it was very unlikely that foreign capitalists would invest huge sums in the Soviet economy like they did in China. So while the USSR's reforms could have gone far better, simply copying Deng's reforms wouldn't have worked.

No.9744
https://www.wsws.org/en/articles/2018/10/01/chin-o01.html
>The prestigious Peking University effectively banned the students’ Marxist Society after its members came to the support of protesting workers.
You claimed earlier that China is doing its best to protect workers' rights. How would you respond to this article?

No.9745
>>9744
I didn't claim it's "doing its best," no doubt on various levels CPC cadres fail to adequately defend the interests of workers for a variety of reasons.
However, I don't think the stuff you're mentioning has to do so much with workers' rights as it does with concerns that youthful "Maoists" are pursuing ultra-left or pro-imperialist lines. One of them, Yue Xin, sought to import the #MeToo movement from the United States.

No.9746
>>9744
>mfw when he posts an article from this trot rag
>mf-again-w this trot rag cites the economist and financial times
this bullshit has been making the rounds in western media lately (well, since the bolshevik revolution, but in particular the past week), where they try to paint these pure "literally marx come to life" students as the true purveyors of marxist philosophy. this is of course done so as to paint china as "autocratic capitalism restorers" and demoralize and balkanize the left around the world. of course, these "young marxists" do not understand the concepts of democratic-centralism, intra-party democracy and the mass line. as i see it, it's a matter of their hearts being in the right places (that might be too kind), but their minds have yet to catch up. tends to be the problem with student activists. either they'll figure it out eventually or morph into outright liberals. certainly the party should not let that happen.

No.9747
https://www.marxists.org/subject/africa/nkrumah/neo-colonialism/index.htm

No.9748

No.9749
>>9748
Thank you, very much!

No.9754
When/how did the Red Army come out from under control of the party? I suppose it was restructured after WWII, but was there a specific rationale for the armed forces no longer being partisan?

No.9755
>>9754
I don't understand what you mean.
The Soviet Army was always "under control of the party," including political instructors to teach soldiers 'bout the joys of Marxism-Leninism. Khrushchev and his successors headed the USSR Defense Council and it seems General Secretaries were unofficially regarded as commanders-in-chief of the armed forces.

No.9757
>>9755
my understanding is that it was basically apolitical by the 80s. or: why wasn't there a reaction to yeltsin?

No.9758
>>9757
>or: why wasn't there a reaction to yeltsin?
Yeltsin was a member of the Politburo until resigning from the party in 1990. By then the army was demoralized. Cynicism about the official ideology existed in the armed forces just as it existed in the civilian population. Dedovshchina and other practices undermined discipline. The war in Afghanistan was also seen as having dealt a blow to the army's prestige in Soviet society. Perhaps even more important, the CPSU was hardly in a position to firmly lead the Soviet Army by the time Yeltsin became President of the Russian SFSR. It had been divided into various groups (supporters of Gorby, supporters of "hardliners" like Ligachev, supporters of Yeltsin, etc.) and the party branches in the republics were going their own way as well (e.g. Algirdas Brazauskas, head of the party in the Lithuanian SSR, broke its ties with the CPSU.) Having said all that, if Gorby ordered the army to be used against Yeltsin, it would have obeyed (at least up to an extent.) He did use the army to try to exert pressure on the Baltics: https://www.nytimes.com/1991/01/15/world/soviet-crackdown-overview-gorbachev-puts-blame-for-clash-on-lithuanians.html But using the army to depose Yeltsin (who was elected President of the Russian SFSR) would have had huge consequences, both domestically and abroad, so it's understandable why Gorby wouldn't have done that.

No.9759
>>9746
So let's hear it, what exactly did these students do that makes them liberals or not in line with Marxist Philosophy™?
Supporting the working class is liberal?
>>9745
The idea of #MeToo isn't wrong just because it is imported from the US. I don't see the problem with a movement that addresses the way power relations perpetuate sexism, and you don't support imperialism by supporting an anti-establishment movement either.

No.9760
>>9759
>you don't support imperialism by supporting an anti-establishment movement either.
#MeToo isn't anti-establishment, and there's no reason for Chinese officials to think that its importation into China isn't being done with assistance from the imperialists.

No.9773
Thoughts on Max Blumenthal, Ben Norton and Rania Khalek? (Assuming you're at all familiar with their work.)

No.9774
>>9773
I'm not familiar with their work. From a quick glance they seem to be "left-liberal" types who criticize US foreign policy and reactionary goings-on at home. In other words, they're okay.

No.9775
>>9774
>they seem to be "left-liberal" types who criticize US foreign policy and reactionary goings-on at home.
I think they're all some kind of socialists, leaning more towards traditional ML positions than trot, anarchist or dem soc positions, although they never describe themselves as anything specific. All three of them did a podcast episode together where they discussed the tendency of "anti-anti-imperialist" trotskyists and democratic socialists to become supporters of US imperialism, bashed the DSA and the ISO while speaking positively of the PSL. But they mostly focus on foreign policy and rarely talk about socialism, so they're easily lumped together with left liberals like, I dunno, Glenn Greenwald.

No.9776
>>9775
>>9774
Anyway I'm not sure how interested you are in new podcasts, etc. but since you weren't familiar with them I thought I'd recommend their (Norton and Blumenthal) podcast, "Moderate Rebels", and also the news website "the grayzone project" which is edited by Blumenthal.
Not very educated on this issue myself but here's an article you might find useful: https://grayzoneproject.com/2018/08/23/un-did-not-report-china-internment-camps-uighur-muslims/

No.9778
Do you know anything about the ICP / Hadash in Israel and their policies? Hadash has five seats in the Knesset with 4 of them belonging to the ICP https://en.wikipedia.org/wiki/Maki_(political_party)

No.9779
>>9778
And Both(?) Are also members of "Joint List" Which is basically the most Anti-Zionist Front still legal https://en.wikipedia.org/wiki/Joint_List

No.9781
>>9778
Not a whole lot. Someone I know scanned a 1976 Soviet academic work a while back with a chapter titled "The Communist Party of Israel in the Struggle for a Peaceful and Democratic Alternative for Their Country" giving a brief history of the CPI in the 1960s-70s and its views: https://archive.org/details/ZionismPastPresent/page/n0

No.9783
Thoughts on the Russia-Crimea situation from a few years ago? My liberal father compared Putin to Hitler when it was going on, and to this day I can't even get him to agree that the Iraq war was by far a greater crime than the annexation of Crimea. "something something iraq good intentions something crimea international law"

No.9784
>>9773
I think they do good reporting. Max was the first western journalist to interview people who were attacked by anti-FSLN protestors during the attempted color revolution in Nicaragua earlier this year. Their counter propaganda is valuable. While all three of them had bad positions on Syria, they did a public mea culpa. You could argue about the sincerity of their apologies but seeing how going against the grain on the HEAVY anti-Syria propaganda means complete ostracism from the left-liberal establishment, I think their change of heart is legit.
>>9775
Max goes on radio shows hosted by PSL leaders with some frequency, like By Any Means Necessary and Loud and Clear.
Ben attended the People's Congress of Resistance in 2017 which was organized by the PSL/ANSWER. They are like Abby Martin in that they are in the PSL's orbit but want to maintain their journalistic independence. Lenin is right when he said we need people to put aside their desire for individual accolades and contribute seriously to the creation and propagation of the party's literature but given the stifling totality of modern propaganda, I'm just glad there are people out there debunking and questioning the imperialist narrative.

No.9785
>>9784
>While all three of them had bad positions on Syria, they did a public mea culpa. You could argue about the sincerity of their apologies
I listened to all three talk about how they used to be wrong and what changed their minds, and I don't see any reason to doubt they're sincere. Nowadays they're probably more strongly opposed to regime change in Syria than any other western journalists, and are regularly smeared as "Assadists" and paid Russian shills. The only people I see question their sincerity are pro-regime change liberals who claim they changed their position after receiving them sweet sweet rubles straight from the Kremlin.

No.9786
>>9785
There are some ultras on twitter that attack that crew regularly but yeah I'm with you. That series of episodes of Moderate Rebels was a fun listen. I DETEST ISO so it was fun to listen to them get shit on.

No.9787
>>9786
>That series of episodes of Moderate Rebels was a fun listen. I DETEST ISO so it was fun to listen to them get shit on.
I don't have any experience with the ISO (im not a burger) but yeah that was entertaining.

No.9788
>>9783
The corrupt but democratically elected president of Ukraine, Yanukovich, was overthrown in a US sponsored coup because he wanted to take a trade deal with Russia instead of accepting a European enforced austerity package.
There is evidence that neo-cons in the state department went above the White House's head and that Obama was privately very upset with Victoria Nuland, assistant secretary of state, and Geoffrey Pyatt, former ambassador to Ukraine, for orchestrating that coup and placing fascists in very high positions in the new government. https://consortiumnews.com/2014/02/23/neocons-and-the-ukraine-coup/ The new nazi-worshipping government passed anti-Russian language laws and began pogroms against Jews, Roma, communists and trade unionists. More important for Russia, however, was the talk about revoking the lease of Russia's only warm water port in Sevastopol. This was a major provocation against Russia. How could they not respond to this situation which was spiraling wildly out of control? The US coup d'etat was an act of aggression against the people of Ukraine and the government of Russia. The puppet masters behind that coup should be tried for conspiracy to violate the sovereignty of another country.
>compared Putin to Hitler
This is incredibly ironic given that the Azov battalion, the most effective fighting force associated with the Ukrainian state, is made up of admirers of OUN and other Nazi collaborators. The Russians are literally fighting Nazis.
>I can't even get him to agree that the Iraq war was by far a greater crime than the annexation of Crimea.
1. Crimea voted to join Russia overwhelmingly.
2. The population of the peninsula is mostly Russian speaking, has historically associated with Russia and belonged to the RSFSR until Khrushchev, himself a Ukrainian, gave the Crimea to the Ukraine.
3. More than a million people died because of the Iraq war. Even more were displaced. The whole fabric of society was ripped asunder. ISIS is a direct result of the US invasion. I have no idea how you could argue anything of a similar scale has happened in Crimea.

No.9789
>>9788
Thanks for that summation.
>I have no idea how you could argue anything of a similar scale has happened in Crimea.
Yes it's completely ridiculous. We were talking and I said that in the last few decades it's completely undeniable that american foreign policy has been far more aggressive and destructive than russian foreign policy. That's when he brought up the annexation of Crimea. I pointed out that that doesn't even come close to the horrors of the Iraq invasion, to which he replied something about how at least with the invasion of Iraq there might have been some good intentions, while the annexation of Crimea was illegal under international law and basically just the dictator Putin flexing his muscles. He's a liberal, often critical of the foreign policy of the US and its allies, but he also has a cold war mentality (which probably comes from living through the actual cold war and hating the USSR). He hates Russia and he is very concerned about the rise of China. His line is always something like "yes the US does terrible things around the world, supports Saudi Arabia, supports Israel, overthrows democratically elected governments and kills a bunch of innocent people, but it's still a democracy at its core which Russia and China aren't". When Russia or China do something bad it's always them showing what power hungry, evil, anti-democratic authoritarians they really are. When the US (together with its allies) does something that's objectively ten times worse we should criticize it but still remember that they are a friendly democracy just like us, who will protect the world from evil dictators and totalitarianism. I really don't know how to cure his liberalism.

No.9790
>>9783
I think Strat is correct here: >>9788
>until Khrushchev, himself a Ukrainian
Just a nitpick, but he was Russian. He worked in the Ukraine and knew its culture well though.
>>9789
>at least with the invasion of Iraq there might have been some good intentions
The problem with that argument is that anything can be excused as having "good intentions." It's blatantly self-serving. The US didn't care about Noriega's human rights record when he functioned as a CIA asset. It didn't care about Saddam when he was fighting Iran. All imperialists try to showcase "good intentions" when waging aggressive wars, such as the Entente "defending innocent Belgium" in World War I, or the Western powers increasing their colonial presence in Asia by justifying it as protecting against attacks on Christian missionaries.

No.9794
why do you think maoism (and MZT) became so popular during the 60s and onwards among americans? i remember seeing somewhere that someone said it had to do with lots of american students getting radicalised but still jumped onto the anti-soviet bandwagon that existed during the cold war. maoism also seems to be really popular among the modern left too, as well as general "anti-revisionism" and i just can't seem to understand how so many of these people can genuinely believe in all these anti-soviet lies about "big bureaucracy of the fascist type" what is your take on it?

No.9795
>>9794
Al Szymanski summed it up well:
>The new left movement of the 1960’s grew up independently of the Marxist-Leninist tradition. Its roots were in the pacifist and social democratic tradition. It moved to Marxism-Leninism because of identification with the struggles of the Cubans, Vietnamese and Chinese (during their Cultural Revolution).
The characteristics of these three revolutions did not seem to us to have anything in common with the image of Communism/Soviet Union that we had been conditioned to accept, and thus we became strongly predisposed to a Maoist type argument that the Soviet Union’s brand of “Communism” really was a capitalism of the Nazi type, i.e., what we had believed all along, while the “Communism” of China, Cuba and Vietnam was a qualitatively different phenomenon – people’s power, or the realization of the true socialist ideas of equalitarianism, democracy and control of production by the common people. The Maoist alternative allowed formerly strongly anti-communist youth to easily make the transition to Marxism without having to question the fabricated stereotype of Soviet communism they had grown up with, while romanticizing Cuban, Vietnamese and Chinese Communism, portraying the two types as having nothing in common.

I also think being able to point to at least a few examples of socialist construction (namely the USSR under Stalin, China under Mao, and Albania under Hoxha) gives "anti-revisionists" a greater attraction than other ultra-leftist groups like left-coms who pretty much glory in their own irrelevance and inability to affect anything. Having said that, "anti-revisionism" at its worst does engage in the sort of obsessing over specific events and obscure personalities that also characterize ultra-leftism, which distorts one's own politics. For example, I've met many "anti-revisionists" who focus on the minutiae of the Moscow Trials, but can't coherently explain Trotsky's own views and why Trotskyism sucks beyond "argh they lie about Stalin."
So when it comes time for debate, and Trots are like "this is why we think the USSR was a degenerated workers' state" or whatever, a lot of "anti-revisionists" will just be furiously copy-pasting stuff by Grover Furr on the Trials or using weird arguments that basically boil down to "Stalin couldn't accumulate autocratic control over the CPSU and government because his official positions didn't formally give him any such authority." And rather than tackle the wider claims of Trots on the nature of socialism and the Soviet Union, they'll just focus on defending Stalin as a person.

No.9796
Are there democratic elements to the political system of China? Is it mostly similar to that of the old eastern bloc countries or is it different? Is there more democracy on the local/regional level than the national level?

No.9797
>>9796
I haven't actually studied how China's state structure works that much. It definitely has differences though, e.g. the People's Political Consultative Conference doesn't really have a counterpart in other socialist countries, and elections to the National People's Congress and provincial People's Congresses combine both direct and indirect methods (whereas in other socialist countries, with the notable exceptions of the USSR before 1936 and Yugoslavia, direct elections were/are the norm.)

No.9798
>>9797
Alright. I'm interested because although I am not such a huge fan of China (they're not a socialist country in my opinion and nothing seems to indicate they will be going in that direction any time soon), it is always assumed that the people there don't have any influence over their government at all, and it's just controlled by one guy at the top.
When I was a child I remember literally being taught that in China you have to go vote but there's only one option, the Communist Party (of course that's based on the misunderstanding that in elections in one party states people vote on the one party, not on individual candidates). I'm pretty sure I heard Yanis Varoufakis say somewhere that there's been a recent upsurge in local democracy in China, while it's still very undemocratic on the national level.

No.9799
>>9798
Your opinions run contrary to facts, liberal

No.9802
>>9799
very convincing

No.9803
>>9802
is your opinion well-informed? by what? western news? what research have you done? there are plenty of posts about china on this board and in this very thread.

No.9804
>>9803
>there are plenty of posts about china on this board and in this very thread.
Yeah I've stated my view on why China currently doesn't have a socialist mode of production, on this board, on /leftypol/, on reddit, many times. I have no interest in discussing it further. If China is socialist or not wasn't remotely the point of my post so, uh, fuck off please.

No.9805
>>9804
your "view" is based on faulty premises and liberal propaganda. http://www.china.org.cn/english/features/49109.htm indirect elections are more relevant than direct elections; higher bodies are elected by lower bodies. local congresses and village chiefs are directly elected.

No.9808
Would it be dialectical for a Socialist country to maintain very close and very friendly relationships with imperialistic countries, even join the EU, NATO etc, to avoid any sanctions like Cuba, Venezuela or Korea, and slowly peacefully build socialism while also aiding developing countries in literacy campaigns, building education and industrialization?
No.9810
>>9808
To quote something I wrote elsewhere,
>there is a point made by Michael Parenti in regard to the socialist countries: "Our policymakers have argued that right-wing governments, for all their 'deficiencies,' are friendly toward us, while Communist ones are belligerent and therefore a threat to US security. But every Marxist or left-leaning country, be it a great power like the Soviet Union or small powers like Vietnam, Cuba, Angola, and Nicaragua, or a mini-power like Grenada (under the New Jewel Movement), has sought friendly diplomatic and economic relations with the USA. They do so not necessarily out of love for the United States but because of a self-interested desire not to be menaced by US military power, and to enjoy the opportunities of trade that come with friendly relations. As they themselves point out, their economic development and political security would be much better served if they could improve relations with Washington." He gives as one example, "Between 1981 and 1984 alone, the Reagan administration passed up at least four initiatives by the Cuban government to normalize relations." (The Sword and the Dollar, 1989, p. 89.)
Bruce Cumings in North Korea: Another Country gives examples of the DPRK's efforts at negotiations throughout the 90s for the normalization of relations and even an agreement to keep US troops in South Korea. If membership in an organization can be used to strengthen a socialist country diplomatically, economically and/or militarily, I don't see the issue, although I can't see joining the EU or NATO as contributing to those objectives.

No.9814
>>9810
What about being deceptive to maximum effect, but calling ruling party Social Democratic and claiming to be following "Nordic model"? Additionally why is USA still sanctioning Cuba, attacking Venezuela but is more friendly with Vietnam and China?
No.9817
>>9814
>What about being deceptive to maximum effect, but calling ruling party Social Democratic and claiming to be following "Nordic model"?
As Marx pointed out, "Communists disdain to conceal their views and aims." Lying to the working-class, to the extent the party leading them is hiding its own ideology, is bad. That was the sort of trickery that Pol Pot engaged in, or Jim Jones (although he had a church rather than a party.) Furthermore any actual socialist policies would immediately set the crypto-communist party apart from reformist ones. If Corbyn can cause concern among sections of the British ruling-class, I'd imagine communists who pretend to be social-democrats would be immediately exposed.
>Additionally why is USA still sanctioning Cuba, attacking Venezuela but is more friendly with Vietnam and China?
Because China and Vietnam are important sources of investment, whereas Latin America is the United States' "backyard" and the US fears the spread of governments in those countries following the Venezuelan or Cuban examples.

No.9819
>>9817
Good analysis, thank you.

No.9821
do you have any good books about czechoslovakia under socialism? particularly about the living standards and economy? if you don't could you just give me a general rundown on these things? thanks.

No.9822
>>9821
What info do you want to know? I physically own a bunch of books. There's a British guy who visited Czechoslovakia in 1985 or so, and wrote an amusing and brief book about it: https://archive.org/details/CzechoslovakiaBelieveItOrNot

No.9825
>>9822
just how their economy ran and how successful it was along with living standards in comparison to other warsaw pact states and pre/post-socialism

No.
http://physics.stackexchange.com/tags/entanglement/new
# Tag Info

0 votes: Initially the pair is in state $(P\otimes M)+(M\otimes P)$. (I am writing "P" and "M" instead of "+" and "-" so as not to confuse the state "plus" with the addition operation in the state space.) You observe the first electron and happen to measure $P$. Now the pair is in the state $P\otimes M$. Note that this state is not entangled (i.e. it is a tensor ...

2 votes: The flaw in your argument is that the claim "entanglement will instantly replicate the photons' paths and such the pattern onto Bob's screen [sic]" is incorrect. The statistics for the measurement outcomes of any experiment performed on one subsystem of a maximally entangled pair are independent of what goes on with the other subsystem. In your case, from ...

4 votes: The problem with this sort of scheme is that Alice has no control over the results of her measurements, since those are random. This means that she can control which basis Bob's spin is projected on, but she cannot control which of the basis states gets chosen. Bob will then see a random mix of results which turns out to contain no trace of what Alice was ...

4 votes: Your error seems to be the misconception that entanglement will magically make the results of any experiment of photon B exactly mimic those of a similar experiment done on its entangled partner. Entanglement is more subtle than that and must be treated carefully. In particular, there are many different types of entanglement. For example, photons may be ...
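The recurring point in these answers, that local measurement statistics on one half of a maximally entangled pair are unaffected by anything done to the other half, can be checked numerically. Here is a minimal NumPy sketch (not from the original page): tracing out the second qubit of a Bell state gives the maximally mixed state, and it stays maximally mixed after an arbitrary rotation of the distant qubit.

```python
import numpy as np

# Bell state (|01> + |10>)/sqrt(2), written in the computational basis.
psi = np.zeros(4, dtype=complex)
psi[1] = psi[2] = 1 / np.sqrt(2)

def reduced_first_qubit(state):
    """Partial trace over the second qubit of a two-qubit pure state."""
    m = state.reshape(2, 2)   # m[i, j] = amplitude of |i>|j>
    return m @ m.conj().T     # rho_A = Tr_B |psi><psi|

rho_a = reduced_first_qubit(psi)
print(rho_a)  # 0.5 * identity: Alice sees each outcome with probability 1/2

# Applying any unitary U to Bob's qubit leaves Alice's statistics unchanged.
theta = 0.7
u = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
psi_after = np.kron(np.eye(2), u) @ psi
print(np.allclose(reduced_first_qubit(psi_after), rho_a))  # True
```

This is exactly why the "signaling" schemes in the downvoted questions fail: nothing Bob does shows up in Alice's reduced density matrix.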
https://gateoverflow.in/258397/gatebook-2019-cn2-18?show=263064
In an IPv4 datagram, the value of the header-length (HLEN) field is $(6)_{16}.$ How many bytes of options have been added to the packet?

1. $4$ bytes
2. $24$ bytes
3. $2$ bytes
4. $1$ byte

The HLEN value is $(6)_{16} = 6$ in decimal, and HLEN counts the header length in 4-byte words, so the header is $6 \times 4 = 24$ bytes long. Subtracting the 20-byte fixed header leaves $4$ bytes of options, so option 1 is correct.
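The arithmetic generalizes: the options length is always HLEN × 4 minus the 20-byte fixed header. A small Python sketch (the function name is mine, not part of the question):

```python
def ipv4_options_bytes(hlen):
    """Return the number of option bytes implied by an IPv4 HLEN field.

    HLEN counts the header length in 4-byte (32-bit) words; the fixed
    part of an IPv4 header is 20 bytes, so anything beyond that is options.
    """
    if not 5 <= hlen <= 15:  # HLEN is a 4-bit field; the minimum legal value is 5
        raise ValueError("HLEN must be between 5 and 15")
    return hlen * 4 - 20

print(ipv4_options_bytes(0x6))  # 4 bytes of options, as in the question
```

Note that HLEN = 5 (no options) and HLEN = 15 (the 4-bit maximum, 40 option bytes) bound the legal range.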
https://nl.mathworks.com/help/dsp/ref/analyticsignal.html
# Analytic Signal

Compute analytic signals of discrete-time inputs

## Library

`dspxfrm3`

## Description

The Analytic Signal block computes the complex analytic signal corresponding to each channel of the real M-by-N input, u

$y=u+jH\left\{u\right\}$

where $j=\sqrt{-1}$ and H{ } denotes the Hilbert transform. The real part of the output in each channel is a replica of the real input in that channel; the imaginary part is the Hilbert transform of the input. In the frequency domain, the Fourier transform of the analytic signal doubles the positive frequency content of the original signal while zeroing-out negative frequencies and retaining the DC component. The block computes the Hilbert transform using an equiripple FIR filter with the order specified by the Filter order parameter, n. The linear phase filter is designed using the Remez exchange algorithm, and imposes a delay of n/2 on the input samples. The output has the same dimensions as the input. This block supports SIMD code generation. For details, see Code Generation.

### Frame-Based Processing

When you set the Input processing parameter to `Columns as channels (frame based)`, the block performs frame-based processing. In this mode, the block treats an M-by-N matrix input as N independent channels containing M sequential time samples. The block computes the analytic signal for each channel over time.

### Sample-Based Processing

When you set the Input processing parameter to `Elements as channels (sample based)`, the block performs sample-based processing. In this mode, the block treats an M-by-N matrix input as M*N independent channels and computes the analytic signal for each channel (matrix element) over time.

## Parameters

Filter order
The length of the FIR filter used to compute the Hilbert transform.

Input processing
Specify how the block should process the input.
You can set this parameter to one of the following options: • `Columns as channels (frame based)` (default) — When you select this option, the block treats each column of the input as a separate channel. • `Elements as channels (sample based)` — When you select this option, the block treats each element of the input as a separate channel. ## Supported Data Types • Double-precision floating point • Single-precision floating point ## Version History Introduced before R2006a
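The frequency-domain description above (double the positive frequencies, zero the negative ones, keep DC) suggests a direct FFT construction of the analytic signal. Below is a minimal NumPy sketch of that construction; note that it differs from the block's own FIR-filter implementation, which instead applies an equiripple Hilbert filter and introduces an n/2-sample delay.

```python
import numpy as np

def analytic_signal(u):
    """FFT-based analytic signal of a real 1-D sequence u.

    Doubles the positive-frequency bins, zeros the negative-frequency
    bins, and leaves the DC (and Nyquist) bins unchanged -- the
    frequency-domain behavior described in the block documentation.
    """
    u = np.asarray(u, dtype=float)
    n = u.size
    spectrum = np.fft.fft(u)
    gain = np.zeros(n)
    if n % 2 == 0:                 # even length: DC and Nyquist bins kept as-is
        gain[0] = gain[n // 2] = 1.0
        gain[1:n // 2] = 2.0
    else:                          # odd length: only the DC bin is kept as-is
        gain[0] = 1.0
        gain[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * gain)

# For a pure cosine, the analytic signal is the complex exponential:
t = np.arange(128)
x = np.cos(2 * np.pi * 5 * t / 128)
y = analytic_signal(x)   # y.real reproduces x; y.imag is its Hilbert transform (a sine)
```

In MATLAB, the equivalent one-liner is `hilbert(u)`, which likewise returns the full analytic signal rather than the bare Hilbert transform.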
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7758152484893799, "perplexity": 1907.9305032727175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764495001.99/warc/CC-MAIN-20230127164242-20230127194242-00526.warc.gz"}
https://labs.tib.eu/arxiv/?author=R.%20Chornock
• ### A Decline in the X-ray through Radio Emission from GW170817 Continues to Support an Off-Axis Structured Jet(1805.02870) May 8, 2018 astro-ph.HE We present new observations of the binary neutron star merger GW170817 at $\Delta t\approx 220-260$ days post-merger, at radio (Karl G.~Jansky Very Large Array; VLA), X-ray ({\it Chandra X-ray Observatory}) and optical ({\it Hubble Space Telescope}; HST) wavelengths. These observations provide the first evidence for a turnover in the X-ray light curve, mirroring a decline in the radio emission at $\gtrsim5\sigma$ significance. The radio to X-ray spectral energy distribution exhibits no evolution into the declining phase. Our full multi-wavelength dataset is consistent with the predicted behavior of our previously-published models of a structured jet expanding into a low density circumbinary medium. If future observations continue to track our predictions, we expect the radio and X-ray emission to remain detectable until $\sim 1000$ days post-merger. • ### The Complete Light-curve Sample of Spectroscopically Confirmed Type Ia Supernovae from Pan-STARRS1 and Cosmological Constraints from The Combined Pantheon Sample(1710.00845) March 26, 2018 astro-ph.CO We present optical light curves, redshifts, and classifications for 365 spectroscopically confirmed Type Ia supernovae (SNe Ia) discovered by the Pan-STARRS1 (PS1) Medium Deep Survey. We detail improvements to the PS1 SN photometry, astrometry and calibration that reduce the systematic uncertainties in the PS1 SN Ia distances. We combine the subset of 279 PS1 SN Ia ($0.03 < z < 0.68$) with useful distance estimates of SN Ia from SDSS, SNLS, various low-z and HST samples to form the largest combined sample of SN Ia consisting of a total of 1048 SN Ia ranging from $0.01 < z < 2.3$, which we call the `Pantheon Sample'. When combining Planck 2015 CMB measurements with the Pantheon SN sample, we find $\Omega_m=0.307\pm0.012$ and $w = -1.026\pm0.041$ for the wCDM model. 
When the SN and CMB constraints are combined with constraints from BAO and local H0 measurements, the analysis yields the most precise measurement of dark energy to date: $w_0 = -1.007\pm 0.089$ and $w_a = -0.222 \pm 0.407$ for the $w_0w_a$CDM model. Tension with a cosmological constant previously seen in an analysis of PS1 and low-z SNe has diminished after an increase of $2\times$ in the statistics of the PS1 sample, improved calibration and photometry, and stricter light-curve quality cuts. We find the systematic uncertainties in our measurements of dark energy are almost as large as the statistical uncertainties, primarily due to limitations of modeling the low-redshift sample. This must be addressed for future progress in using SN Ia to measure dark energy. • ### Measuring Dark Energy Properties with Photometrically Classified Pan-STARRS Supernovae. II. Cosmological Parameters(1710.00846) March 14, 2018 astro-ph.CO We use 1169 Pan-STARRS supernovae (SNe) and 195 low-$z$ ($z < 0.1$) SNe Ia to measure cosmological parameters. Though most Pan-STARRS SNe lack spectroscopic classifications, in a previous paper (I) we demonstrated that photometrically classified SNe can be used to infer unbiased cosmological parameters by using a Bayesian methodology that marginalizes over core-collapse (CC) SN contamination. Our sample contains nearly twice as many SNe as the largest previous SN Ia compilation. Combining SNe with Cosmic Microwave Background (CMB) constraints from Planck, we measure the dark energy equation of state parameter $w$ to be -0.989$\pm$0.057 (stat$+$sys). If $w$ evolves with redshift as $w(a) = w_0 + w_a(1-a)$, we find $w_0 = -0.912 \pm 0.149$ and $w_a =$ -0.513$\pm$0.826. These results are consistent with cosmological parameters from the Joint Lightcurve Analysis and the Pantheon sample.
We try four different photometric classification priors for Pan-STARRS SNe and two alternate ways of modeling CC SN contamination, finding that no variant gives a $w$ differing by more than 2% from the baseline measurement. The systematic uncertainty on $w$ due to marginalizing over CC SN contamination, $\sigma_w^{\textrm{CC}} = 0.012$, is the third-smallest source of systematic uncertainty in this work. We find limited (1.6$\sigma$) evidence for evolution of the SN color-luminosity relation with redshift, a possible systematic that could constitute a significant uncertainty in future high-$z$ analyses. Our data provide one of the best current constraints on $w$, demonstrating that samples with $\sim$5% CC SN contamination can give competitive cosmological constraints when the contaminating distribution is marginalized over in a Bayesian framework. • ### A Precise Distance to the Host Galaxy of the Binary Neutron Star Merger GW170817 Using Surface Brightness Fluctuations(1801.06080) Feb. 27, 2018 astro-ph.CO, astro-ph.GA The joint detection of gravitational waves and electromagnetic radiation from the binary neutron star (BNS) merger GW170817 has provided unprecedented insight into a wide range of physical processes: heavy element synthesis via the $r$-process; the production of relativistic ejecta; the equation of state of neutron stars and the nature of the merger remnant; the binary coalescence timescale; and a measurement of the Hubble constant via the "standard siren" technique. In detail, all of these results depend on the distance to the host galaxy of the merger event, NGC4993. In this paper we measure the surface brightness fluctuation (SBF) distance to NGC4993 in the F110W and F160W passbands of the Wide Field Camera 3 Infrared Channel on the Hubble Space Telescope (HST). 
For the preferred F110W passband we derive a distance modulus of $m{-}M=33.05\pm0.08\pm0.10$ mag, or a linear distance $d=40.7\pm1.4\pm1.9$ Mpc (random and systematic errors, respectively); a virtually identical result is obtained from the F160W data. This is the most precise distance to NGC4993 available to date. Combining our distance measurement with the corrected recession velocity of NGC4993 implies a Hubble constant $H_0=71.9\pm 7.1~km~s^{-1}~Mpc^{-1}$. A comparison of our result to the GW-inferred value of $H_0$ indicates a binary orbital inclination of $i\,{\gtrsim}\,137~\deg$. The SBF technique can be applied to early-type host galaxies of BNS mergers to ${\sim\,}100$ Mpc with HST and possibly as far as ${\sim\,}300$ Mpc with the James Webb Space Telescope, thereby helping to break the inherent distance-inclination degeneracy of the GW signals at distances where many future BNS mergers are likely to be detected. • ### Jets in Hydrogen-poor Super-luminous Supernovae: Constraints from a Comprehensive Analysis of Radio Observations(1711.03428) Feb. 26, 2018 astro-ph.HE The energy source powering the extreme optical luminosity of hydrogen-stripped Superluminous Supernovae (SLSNe-I) is not known, but recent studies have highlighted the case for a central engine. Radio and/or X-ray observations are best placed to track the fastest ejecta and probe the presence of outflows from a central engine. We compile all the published radio observations of SLSNe-I to date and present three new observations of two new SLSNe-I. None were detected. Through modeling the radio emission, we constrain the sub-parsec environments and possible outflows in SLSNe-I. In this sample we rule out on-axis collimated relativistic jets of the kind detected in Gamma-Ray Bursts (GRBs). 
We constrain off-axis jets with opening angles of 5\arcdeg\ (30\arcdeg) to energies of $\rm{E_k<4\times10^{50}\,erg}$ ($\rm{E_k<10^{50}\,erg}$) in environments shaped by progenitors with mass-loss rates of $\dot{M}<10^{-4}\,M_{\odot}\,{\rm yr}^{-1}$ ($\dot{M}<10^{-5}\,M_{\odot}\,{\rm yr}^{-1}$) for all off-axis angles, assuming fiducial values $\epsilon_e=0.1$ and $\epsilon_B=0.01$. The deepest limits rule out emission of the kind seen in faint un-collimated GRBs (with the exception of GRB\,060218), and from relativistic supernovae. Finally, for the closest SLSN-I SN 2017egm we constrained the energy of an uncollimated non-relativistic outflow like those observed in normal SNe to $E_{\rm k}\lesssim10^{48}$ erg. • ### The Binary Neutron Star event LIGO/VIRGO GW170817 a hundred and sixty days after merger: synchrotron emission across the electromagnetic spectrum(1801.03531) Feb. 25, 2018 astro-ph.HE We report deep Chandra, HST and VLA observations of the binary neutron star event GW170817 at $t<160$ d after merger. These observations show that GW170817 has been steadily brightening with time and might have now reached its peak, and constrain the emission process as non-thermal synchrotron emission where the cooling frequency $\nu_c$ is above the X-ray band and the synchrotron frequency $\nu_m$ is below the radio band. The very simple power-law spectrum extending for eight orders of magnitude in frequency enables the most precise measurement of the index $p$ of the distribution of non-thermal relativistic electrons $N(\gamma)\propto \gamma^{-p}$ accelerated by a shock launched by a NS-NS merger to date. We find $p=2.17\pm0.01$, which indicates that radiation from ejecta with $\Gamma\sim3-10$ dominates the observed emission. While constraining the nature of the emission process, these observations do \emph{not} constrain the nature of the relativistic ejecta. 
We employ simulations of explosive outflows launched in NS ejecta clouds to show that the spectral and temporal evolution of the non-thermal emission from GW170817 is consistent with both emission from radially stratified quasi-spherical ejecta traveling at mildly relativistic speeds, \emph{and} emission from off-axis collimated ejecta characterized by a narrow cone of ultra-relativistic material with slower wings extending to larger angles. In the latter scenario, GW170817 harbored a normal SGRB directed away from our line of sight. Observations at $t\le 200$ days are unlikely to settle the debate as in both scenarios the observed emission is effectively dominated by radiation from mildly relativistic material. • ### Hydrogen-Poor Superluminous Supernovae from the Pan-STARRS1 Medium Deep Survey(1708.01619) Jan. 12, 2018 astro-ph.HE We present light curves and classification spectra of 17 hydrogen-poor superluminous supernovae (SLSNe) from the Pan-STARRS1 Medium Deep Survey (PS1 MDS). Our sample contains all objects from the PS1 MDS sample with spectroscopic classification that are similar to either of the prototypes SN2005ap or SN2007bi, without an explicit limit on luminosity. With a redshift range $0.3 < z < 1.6$, PS1MDS is the first SLSN sample primarily probing the high-redshift population; our multi-filter PS1 light curves probe the rest-frame UV emission, and hence the peak of the spectral energy distribution. We measure the temperature evolution and construct bolometric light curves, and find peak luminosities of $(0.5-5) \times 10^{44}$ erg s$^{-1}$ and lower limits on the total radiated energies of $(0.3-2) \times 10^{51}$ erg. The light curve shapes are diverse, with both rise- and decline times spanning a factor of $\sim 5$, and several examples of double-peaked light curves. When correcting for the flux-limited nature of our survey, we find a median peak luminosity at 4000 {\AA} of $M_{\rm 4000} = -21.1$ mag, and a spread of $\sigma = 0.7$ mag. 
• ### Improved constraints on H0 from a combined analysis of gravitational-wave and electromagnetic emission from GW170817(1710.06426) Oct. 19, 2017 astro-ph.CO The luminosity distance measurement of GW170817 derived from GW analysis in Abbott et al. 2017 (here, A17:H0) is highly correlated with the measured inclination of the NS-NS system. To improve the precision of the distance measurement, we attempt to constrain the inclination by modeling the broad-band X-ray-to-radio emission from GW170817, which is dominated by the interaction of the jet with the environment. We update our previous analysis and we consider the radio and X-ray data obtained at $t<40$ days since merger. We find that the afterglow emission from GW170817 is consistent with an off-axis relativistic jet with energy $10^{48}\,\rm{erg}<E_{k}\le 3\times 10^{50} \,\rm{erg}$ propagating into an environment with density $n\sim10^{-2}-10^{-4} \,\rm{cm^{-3}}$, with preference for wider jets (opening angle $\theta_j=15$ deg). For these jets, our modeling indicates an off-axis angle $\theta_{\rm obs}\sim25-50$ deg. We combine our constraints on $\theta_{\rm obs}$ with the joint distance-inclination constraint from LIGO. Using the same $\sim 170$ km/sec peculiar velocity uncertainty assumed in A17:H0 but with an inclination constraint from the afterglow data, we get a value of $H_0=74.0^{+11.5}_{-7.5}$ km/s/Mpc, which is higher than the value of $H_0=70.0^{+12.0}_{-8.0}$ km/s/Mpc found in A17:H0. Further, using a more realistic peculiar velocity uncertainty of 250 km/sec derived from previous work, we find $H_0=75.5^{+11.6}_{-9.6}$ km/s/Mpc for H0 from this system. We note that this is in modestly better agreement with the local distance ladder than the Planck CMB, though a significant such discrimination will require $\sim 50$ such events. Future measurements at $t>100$ days of the X-ray and radio emission will lead to tighter constraints.
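The surface-brightness-fluctuation paper above quotes both a distance modulus ($m-M = 33.05$ mag) and a linear distance ($d = 40.7$ Mpc) to NGC 4993; the two are related by the standard distance-modulus relation $m - M = 5\log_{10}(d/10\,\mathrm{pc})$. A minimal numerical check, using only the values quoted in the abstracts:

```python
import math

def distance_from_modulus(mu):
    """Distance in Mpc from a distance modulus mu = m - M (mag)."""
    return 10 ** (mu / 5.0 + 1.0) / 1e6   # 10^(mu/5 + 1) pc, converted to Mpc

mu = 33.05                      # F110W SBF distance modulus to NGC 4993
d = distance_from_modulus(mu)   # ~40.7 Mpc, matching the quoted linear distance

# With the quoted H0 = 71.9 km/s/Mpc, the implied Hubble-flow recession velocity:
v = 71.9 * d                    # km/s (~2900 km/s)
```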
• ### How Many Kilonovae Can Be Found in Past, Present, and Future Survey Datasets?(1710.05845) Oct. 16, 2017 astro-ph.IM The discovery of a kilonova (KN) associated with the Advanced LIGO (aLIGO)/Virgo event GW170817 opens up new avenues of multi-messenger astrophysics. Here, using realistic simulations, we provide estimates of the number of KNe that could be found in data from past, present and future surveys without a gravitational-wave trigger. For the simulation, we construct a spectral time-series model based on the DES-GW multi-band light-curve from the single known KN event, and we use an average of BNS rates from past studies of $10^3 \rm{Gpc}^{-3}/\rm{year}$, consistent with the $1$ event found so far. Examining past and current datasets from transient surveys, the number of KNe we expect to find for ASAS-SN, SDSS, PS1, SNLS, DES, and SMT is between 0 and $0.3$. We predict the number of detections per future survey to be: 8.3 from ATLAS, 10.6 from ZTF, 5.5/69 from LSST (the Deep Drilling / Wide Fast Deep), and 16.0 from WFIRST. The maximum redshift of KNe discovered for each survey is z = 0.8 for WFIRST, z = 0.25 for LSST and z = 0.04 for ZTF and ATLAS. For the LSST survey, we also provide contamination estimates from Type Ia and Core-collapse supernovae: after light-curve and template-matching requirements, we estimate a background of just 2 events. More broadly, we stress that future transient surveys should consider how to optimize their search strategies to improve their detection efficiency, and to consider similar analyses for GW follow-up programs. • ### The Electromagnetic Counterpart of the Binary Neutron Star Merger LIGO/VIRGO GW170817. V. Rising X-ray Emission from an Off-Axis Jet(1710.05431) Oct. 16, 2017 astro-ph.HE We report the discovery of rising X-ray emission from the binary neutron star (BNS) merger event GW170817. This is the first detection of X-ray emission from a gravitational-wave source. 
Observations acquired with the Chandra X-ray Observatory (CXO) at t~2.3 days post merger reveal no significant emission, with L_x<=3.2x10^38 erg/s (isotropic-equivalent). Continued monitoring revealed the presence of an X-ray source that brightened with time, reaching L_x\sim 9x10^39 erg/s at ~15.1 days post merger. We interpret these findings in the context of isotropic and collimated relativistic outflows (both on- and off-axis). We find that the broad-band X-ray to radio observations are consistent with emission from a relativistic jet with kinetic energy E_k~10^49-10^50 erg, viewed off-axis with theta_obs~ 20-40 deg. Our models favor a circumbinary density n~ 0.0001-0.01 cm-3, depending on the value of the microphysical parameter epsilon_B=10^{-4}-10^{-2}. A central-engine origin of the X-ray emission is unlikely. Future X-ray observations at $t\gtrsim 100$ days, when the target will be observable again with the CXO, will provide additional constraints to solve the model degeneracies and test our predictions. Our inferences on theta_obs are testable with gravitational wave information on GW170817 from Advanced LIGO/Virgo on the binary inclination. • ### The Electromagnetic Counterpart of the Binary Neutron Star Merger LIGO/VIRGO GW170817. VIII. A Comparison to Cosmological Short-duration Gamma-ray Bursts(1710.05438) Oct. 16, 2017 astro-ph.HE We present a comprehensive comparison of the properties of the radio through X-ray counterpart of GW170817 and the properties of short-duration gamma-ray bursts (GRBs). For this effort, we utilize a sample of 36 short GRBs spanning a redshift range of $z \approx 0.12-2.6$ discovered over 2004-2017. We find that the counterpart to GW170817 has an isotropic-equivalent luminosity that is $\approx 3000$ times less than the median value of on-axis short GRB X-ray afterglows, and $\gtrsim10^{4}$ times less than that for detected short GRB radio afterglows. 
Moreover, the allowed jet energies and particle densities inferred from the radio and X-ray counterparts to GW170817 and on-axis short GRB afterglows are remarkably similar, suggesting that viewing angle effects are the dominant, and perhaps only, difference in their observed radio and X-ray behavior. From comparison to previous claimed kilonovae following short GRBs, we find that the optical and near-IR counterpart to GW170817 is comparatively under-luminous by a factor of $\approx 3-5$, indicating a range of kilonova luminosities and timescales. A comparison of the optical limits following short GRBs on $\lesssim 1$ day timescales also rules out a "blue" kilonova of comparable optical isotropic-equivalent luminosity in one previous short GRB. Finally, we investigate the host galaxy of GW170817, NGC4993, in the context of short GRB host galaxy stellar population properties. We find that NGC4993 is superlative in terms of its large luminosity, old stellar population age, and low star formation rate compared to previous short GRB hosts. Additional events within the Advanced LIGO/VIRGO volume will be crucial in delineating the properties of the host galaxies of NS-NS mergers, and connecting them to their cosmological counterparts. • ### The Electromagnetic Counterpart of the Binary Neutron Star Merger LIGO/VIRGO GW170817. IV. Detection of Near-infrared Signatures of r-process Nucleosynthesis with Gemini-South(1710.05454) We present a near-infrared spectral sequence of the electromagnetic counterpart to the binary neutron star merger GW170817 detected by Advanced LIGO/Virgo. Our dataset comprises seven epochs of J+H spectra taken with FLAMINGOS-2 on Gemini-South between 1.5 and 10.5 days after the merger. In the initial epoch, the spectrum is dominated by a smooth blue continuum due to a high-velocity, lanthanide-poor blue kilonova component. 
Starting the following night, all subsequent spectra instead show features that are similar to those predicted in model spectra of material with a high concentration of lanthanides, including spectral peaks near 1.07 and 1.55 microns. Our fiducial model with 0.04 M_sun of ejecta, an ejection velocity of v=0.1c, and a lanthanide concentration of X_lan=1e-2 provides a good match to the spectra taken in the first five days, although it over-predicts the late-time fluxes. We also explore models with multiple fitting components, in each case finding that a significant abundance of lanthanide elements is necessary to match the broad spectral peaks that we observe starting at 2.5 d after the merger. These data provide direct evidence that binary neutron star mergers are significant production sites of even the heaviest r-process elements. • ### The Electromagnetic Counterpart of the Binary Neutron Star Merger LIGO/VIRGO GW170817. III. Optical and UV Spectra of a Blue Kilonova From Fast Polar Ejecta(1710.05456) We present optical and ultraviolet spectra of the first electromagnetic counterpart to a gravitational wave (GW) source, the binary neutron star merger GW170817. Spectra were obtained nightly between 1.5 and 9.5 days post-merger, using the SOAR and Magellan telescopes; the UV spectrum was obtained with the \textit{Hubble Space Telescope} at 5.5 days. Our data reveal a rapidly-fading blue component ($T\approx5500$ K at 1.5 days) that quickly reddens; spectra later than $\gtrsim 4.5$ days peak beyond the optical regime. The spectra are mostly featureless, although we identify a possible weak emission line at $\sim 7900$ \AA\ at $t\lesssim 4.5$ days. The colours, rapid evolution and featureless spectrum are consistent with a "blue" kilonova from polar ejecta comprised mainly of light $r$-process nuclei with atomic mass number $A\lesssim 140$. This indicates a sight-line within $\theta_{\rm obs}\lesssim 45^{\circ}$ of the orbital axis. 
Comparison to models suggests $\sim0.03$ M$_\odot$ of blue ejecta, with a velocity of $\sim 0.3c$. The required lanthanide fraction is $\sim 10^{-4}$, but this drops to $<10^{-5}$ in the outermost ejecta. The large velocities point to a dynamical origin, rather than a disk wind, for this blue component, suggesting that both binary constituents are neutron stars (as opposed to a binary consisting of a neutron star and a black hole). For dynamical ejecta, the high mass favors a small neutron star radius of $\lesssim 12$ km. This mass also supports the idea that neutron star mergers are a major contributor to $r$-process nucleosynthesis. • ### The Electromagnetic Counterpart of the Binary Neutron Star Merger LIGO/VIRGO GW170817. VI. Radio Constraints on a Relativistic Jet and Predictions for Late-Time Emission from the Kilonova Ejecta(1710.05457) Oct. 16, 2017 astro-ph.HE We present Very Large Array (VLA) and Atacama Large Millimeter/sub-millimeter Array ALMA radio observations of GW\,170817, the first Laser Interferometer Gravitational-wave Observatory (LIGO)/Virgo gravitational wave (GW) event from a binary neutron star merger and the first GW event with an electromagnetic (EM) counterpart. Our data include the first observations following the discovery of the optical transient at both the centimeter ($13.7$ hours post merger) and millimeter ($2.41$ days post merger) bands. We detect faint emission at 6 GHz at 19.47 and 39.23 days after the merger, but not in an earlier observation at 2.46 d. We do not detect cm/mm emission at the position of the optical counterpart at frequencies of 10-97.5 GHz at times ranging from 0.6 to 30 days post merger, ruling out an on-axis short gamma-ray burst (SGRB) for energies $\gtrsim 10^{48}$ erg. For fiducial SGRB parameters, our limits require an observer viewer angle of $\gtrsim 20^{\circ}$. 
The radio and X-ray data can be jointly explained as the afterglow emission from an SGRB with a jet energy of $\sim 10^{49}-10^{50}$ erg that exploded in a uniform density environment with $n\sim 10^{-4}-10^{-2}$ cm$^{-3}$, viewed at an angle of $\sim 20^{\circ}-40^{\circ}$ from the jet axis. Using the results of our light curve and spectral modeling, in conjunction with the inference of the circumbinary density, we predict the emergence of late-time radio emission from the deceleration of the kilonova (KN) ejecta on a timescale of $\sim 5-10$ years that will remain detectable for decades with next-generation radio facilities, making GW\,170817 a compelling target for long-term radio monitoring. • ### The Electromagnetic Counterpart of the Binary Neutron Star Merger LIGO/VIRGO GW170817. VII. Properties of the Host Galaxy and Constraints on the Merger Timescale(1710.05458) Oct. 16, 2017 astro-ph.HE We present the properties of NGC 4993, the host galaxy of GW170817, the first gravitational wave (GW) event from the merger of a binary neutron star (BNS) system and the first with an electromagnetic (EM) counterpart. We use both archival photometry and new optical/near-IR imaging and spectroscopy, together with stellar population synthesis models to infer the global properties of the host galaxy. We infer a star formation history peaked at $\gtrsim 10$ Gyr ago, with subsequent exponential decline leading to a low current star formation rate of 0.01 M$_{\odot}$ yr$^{-1}$, which we convert into a binary merger timescale probability distribution. We find a median merger timescale of $11.2^{+0.7}_{-1.4}$ Gyr, with a 90% confidence range of $6.8-13.6$ Gyr. This in turn indicates an initial binary separation of $\approx 4.5$ R$_{\odot}$, comparable to the inferred values for Galactic BNS systems. 
We also use new and archival $Hubble$ $Space$ $Telescope$ images to measure a projected offset of the optical counterpart of $2.1$ kpc (0.64$r_{e}$) from the center of NGC 4993 and to place a limit of $M_{r} \gtrsim -7.2$ mag on any pre-existing emission, which rules out the brighter half of the globular cluster luminosity function. Finally, the age and offset of the system indicates it experienced a modest natal kick with an upper limit of $\sim 200$ km s$^{-1}$. Future GW$-$EM observations of BNS mergers will enable measurement of their population delay time distribution, which will directly inform their viability as the dominant source of $r$-process enrichment in the Universe. • ### The Electromagnetic Counterpart of the Binary Neutron Star Merger LIGO/Virgo GW170817. I. Dark Energy Camera Discovery of the Optical Counterpart(1710.05459) Oct. 16, 2017 astro-ph.HE We present the Dark Energy Camera (DECam) discovery of the optical counterpart of the first binary neutron star merger detected through gravitational wave emission, GW170817. Our observations commenced 10.5 hours post-merger, as soon as the localization region became accessible from Chile. We imaged 70 deg$^2$ in the $i$ and $z$ bands, covering 93\% of the initial integrated localization probability, to a depth necessary to identify likely optical counterparts (e.g., a kilonova). At 11.4 hours post-merger we detected a bright optical transient located $10.6''$ from the nucleus of NGC\,4993 at redshift $z=0.0098$, consistent (for $H_0 = 70$\, km s$^{-1}$ Mpc$^{-1}$) with the distance of $40 \pm 8$\, Mpc reported by the LIGO Scientific Collaboration and the Virgo Collaboration (LVC). At detection the transient had magnitudes $i\approx 17.30$ and $z\approx 17.45$, and thus an absolute magnitude of $M_i = -15.7$, in the luminosity range expected for a kilonova. We identified 1,500 potential transient candidates. 
Applying simple selection criteria aimed at rejecting background events such as supernovae, we find the transient associated with NGC\,4993 as the only remaining plausible counterpart, and reject chance coincidence at the 99.5\% confidence level. We therefore conclude that the optical counterpart we have identified near NGC\,4993 is associated with GW170817. This discovery ushers in the era of multi-messenger astronomy with gravitational waves, and demonstrates the power of DECam to identify the optical counterparts of gravitational-wave sources. • ### The Electromagnetic Counterpart of the Binary Neutron Star Merger LIGO/VIRGO GW170817. II. UV, Optical, and Near-IR Light Curves and Comparison to Kilonova Models(1710.05840) Oct. 16, 2017 astro-ph.HE We present UV, optical, and NIR photometry of the first electromagnetic counterpart to a gravitational wave source from Advanced LIGO/Virgo, the binary neutron star merger GW170817. Our data set extends from the discovery of the optical counterpart at $0.47$ days to $18.5$ days post-merger, and includes observations with the Dark Energy Camera (DECam), Gemini-South/FLAMINGOS-2 (GS/F2), and the {\it Hubble Space Telescope} ({\it HST}). The spectral energy distribution (SED) inferred from this photometry at $0.6$ days is well described by a blackbody model with $T\approx 8300$ K, a radius of $R\approx 4.5\times 10^{14}$ cm (corresponding to an expansion velocity of $v\approx 0.3c$), and a bolometric luminosity of $L_{\rm bol}\approx 5\times10^{41}$ erg s$^{-1}$. At $1.5$ days we find a multi-component SED across the optical and NIR, and subsequently we observe rapid fading in the UV and blue optical bands and significant reddening of the optical/NIR colors. Modeling the entire data set we find that models with heating from radioactive decay of $^{56}$Ni, or those with only a single component of opacity from $r$-process elements, fail to capture the rapid optical decline and red optical/NIR colors. 
Instead, models with two components consistent with lanthanide-poor and lanthanide-rich ejecta provide a good fit to the data, the resulting "blue" component has $M_\mathrm{ej}^\mathrm{blue}\approx 0.01$ M$_\odot$ and $v_\mathrm{ej}^\mathrm{blue}\approx 0.3$c, and the "red" component has $M_\mathrm{ej}^\mathrm{red}\approx 0.04$ M$_\odot$ and $v_\mathrm{ej}^\mathrm{red}\approx 0.1$c. These ejecta masses are broadly consistent with the estimated $r$-process production rate required to explain the Milky Way $r$-process abundances, providing the first evidence that BNS mergers can be a dominant site of $r$-process enrichment. • The detection of GW170817 in both gravitational waves and electromagnetic waves heralds the age of gravitational-wave multi-messenger astronomy. On 17 August 2017 the Advanced LIGO and Virgo detectors observed GW170817, a strong signal from the merger of a binary neutron-star system. Less than 2 seconds after the merger, a gamma-ray burst (GRB 170817A) was detected within a region of the sky consistent with the LIGO-Virgo-derived location of the gravitational-wave source. This sky region was subsequently observed by optical astronomy facilities, resulting in the identification of an optical transient signal within $\sim 10$ arcsec of the galaxy NGC 4993. These multi-messenger observations allow us to use GW170817 as a standard siren, the gravitational-wave analog of an astronomical standard candle, to measure the Hubble constant. This quantity, which represents the local expansion rate of the Universe, sets the overall scale of the Universe and is of fundamental importance to cosmology. Our measurement combines the distance to the source inferred purely from the gravitational-wave signal with the recession velocity inferred from measurements of the redshift using electromagnetic data. 
This approach does not require any form of cosmic "distance ladder;" the gravitational wave analysis can be used to estimate the luminosity distance out to cosmological scales directly, without the use of intermediate astronomical distance measurements. We determine the Hubble constant to be $70.0^{+12.0}_{-8.0} \, \mathrm{km} \, \mathrm{s}^{-1} \, \mathrm{Mpc}^{-1}$ (maximum a posteriori and 68% credible interval). This is consistent with existing measurements, while being completely independent of them. Additional standard-siren measurements from future gravitational-wave sources will provide precision constraints of this important cosmological parameter. • ### iPTF15eqv: Multi-wavelength Expos\'e of a Peculiar Calcium-rich Transient(1706.01887) Oct. 12, 2017 astro-ph.HE The progenitor systems of the class of "Ca-rich transients" is a key open issue in time domain astrophysics. These intriguing objects exhibit unusually strong calcium line emissions months after explosion, fall within an intermediate luminosity range, are often found at large projected distances from their host galaxies, and may play a vital role in enriching galaxies and the intergalactic medium. Here we present multi-wavelength observations of iPTF15eqv in NGC 3430, which exhibits a unique combination of properties that bridge those observed in Ca-rich transients and Type Ib/c supernovae. iPTF15eqv has among the highest [Ca II]/[O I] emission line ratios observed to date, yet is more luminous and decays more slowly than other Ca-rich transients. Optical and near-infrared photometry and spectroscopy reveal signatures consistent with the supernova explosion of a < 10 solar mass star that was stripped of its H-rich envelope via binary interaction. Distinct chemical abundances and ejecta kinematics suggest that the core collapse occurred through electron capture processes. 
Deep limits on possible radio emission made with the Jansky Very Large Array imply a clean environment ($n <$ 0.1 cm$^{-3}$) within a radius of $\sim 10^{17}$ cm. Chandra X-ray Observatory observations rule out alternative scenarios involving tidal disruption of a white dwarf by a black hole (for black hole masses > 100 solar masses). Our results challenge the notion that spectroscopically classified Ca-rich transients only originate from white dwarf progenitor systems, complicate the view that they are all associated with large ejection velocities, and indicate that their chemical abundances may vary widely between events. • ### An Empirical Study of Contamination in Deep, Rapid, and Wide-Field Optical Follow-Up of Gravitational Wave Events(1710.02144) Oct. 9, 2017 astro-ph.HE We present an empirical study of contamination in deep, rapid, and wide-field optical follow-up searches of GW sources from aLIGO. We utilize dedicated observations during four nights of imaging with DECam. Our search covered $\sim56$ deg$^2$, with two visits per night separated by $\approx 3$~hours, in $i$- and $z$-band, followed by an additional set of $griz$ images three weeks later to serve as reference images for subtraction, and for the purpose of identifying galaxy and stellar counterparts for any transient sources. We achieve $5\sigma$ point-source limiting magnitudes of $i \approx 23.5$ and $z \approx 22.4$ mag in the coadded single-epoch images. We conduct a search for transient objects that can mimic the $i-z$ color behavior of both red ($i-z > 0.5$~mag) and blue ($i-z < 0$~mag) kilonova emission, finding 11 and 10 contaminants, respectively. Independent of color, we identify 48 transients of interest. Additionally, we leverage the rapid cadence of our observations to search for sources with characteristic timescales of $\approx1$ day and $\approx3$ hours, finding no potential contaminants.
We assess the efficiency of our pipeline and search methodology with injected point sources, finding that we are 90\% (60\%) efficient when searching for red (blue) kilonova-like sources to a limiting magnitude of $i \lesssim 22.5$ mag. Applying these efficiencies, we derive sky rates for kilonova contaminants in the red and blue regimes of $\mathcal{R}_{\rm red} \approx 0.16$ deg$^{-2}$ and $\mathcal{R}_{\rm blue} \approx 0.80$ deg$^{-2}$. The total contamination rate, independent of color, is $\mathcal{R}_{\rm all} \approx 1.79$ deg$^{-2}$. We compare our derived results to optical follow-up searches of the GW events GW150914 and GW151226 and comment on the outlook for GW follow-up searches as additional GW detectors (e.g., KAGRA, LIGO India) come online in the next decade. • ### Measuring the Properties of Dark Energy with Photometrically Classified Pan-STARRS Supernovae. I. Systematic Uncertainty from Core-Collapse Supernova Contamination(1611.07042) June 19, 2017 astro-ph.CO The Pan-STARRS (PS1) Medium Deep Survey discovered over 5,000 likely supernovae (SNe) but obtained spectral classifications for just 10% of its SN candidates. We measured spectroscopic host galaxy redshifts for 3,147 of these likely SNe and estimate that $\sim$1,000 are Type Ia SNe (SNe Ia) with light-curve quality sufficient for a cosmological analysis. We use these data with simulations to determine the impact of core-collapse SN (CC SN) contamination on measurements of the dark energy equation of state parameter, $w$. Using the method of Bayesian Estimation Applied to Multiple Species (BEAMS), distances to SNe Ia and the contaminating CC SN distribution are simultaneously determined. We test light-curve based SN classification priors for BEAMS as well as a new classification method that relies upon host galaxy spectra and the association of SN type with host type. 
By testing several SN classification methods and CC SN parameterizations on large SN simulations, we estimate that CC SN contamination gives a systematic error on $w$ ($\sigma_w^{CC}$) of 0.014, 29% of the statistical uncertainty. Our best method gives $\sigma_w^{CC} = 0.004$, just 8% of the statistical uncertainty, but could be affected by incomplete knowledge of the CC SN distribution. This method determines the SALT2 color and shape coefficients, $\alpha$ and $\beta$, with $\sim$3% bias. However, we find that some variants require $\alpha$ and $\beta$ to be fixed to known values for BEAMS to yield accurate measurements of $w$. Finally, the inferred abundance of bright CC SNe in our sample is greater than expected based on measured CC SN rates and luminosity functions. • ### Results from a systematic survey of X-ray emission from Hydrogen-poor Superluminous Supernovae(1704.05865) April 19, 2017 astro-ph.HE We present the results from a sensitive X-ray survey of 26 nearby hydrogen-poor superluminous supernovae (SLSNe-I) with Swift, Chandra and XMM. This dataset constrains the SLSN evolution from a few days until ~2000 days after explosion, reaching a luminosity L_x~10^40 erg/s and revealing the presence of significant X-ray emission at the location of PTF12dam. No SLSN-I is detected above L_x~10^41 erg/s, suggesting that the luminous X-ray emission L_x~10^45 erg/s associated with SCP60F6 is not common among SLSNe-I. We constrain the presence of off-axis GRB jets, ionization breakouts from magnetar central engines and the density in the sub-pc environments of SLSNe-I through Inverse Compton emission. The deepest limits rule out the weakest uncollimated GRB outflows, suggesting that IF the similarity of SLSNe-I with GRB/SNe extends to their fastest ejecta, then SLSNe-I are either powered by energetic jets pointed far away from our line of sight theta>30 deg, or harbor failed jets that do not successfully break through the stellar envelope. 
Furthermore, IF a magnetar central engine is responsible for the exceptional luminosity of SLSNe-I, our X-ray analysis favors large magnetic fields B>2x10^(14) G and ejecta masses M_ej>3 Msun in agreement with optical/UV studies. Finally, we constrain the pre-explosion mass-loss rate of stellar progenitors of SLSNe-I. For PTF12dam we infer Mdot<2x10^(-5) Msun/yr, suggesting that the SN shock interaction with the CSM is unlikely to supply the main source of energy powering the optical transient and that some SLSN-I progenitors end their life as compact stars surrounded by a low-density medium similar to long GRBs and Type Ib/c SNe. • ### An ultraviolet excess in the superluminous supernova Gaia16apd reveals a powerful central engine(1611.06993) Since the discovery of superluminous supernovae (SLSNe) in the last decade, it has been known that these events exhibit bluer spectral energy distributions than other supernova subtypes, with significant output in the ultraviolet. However, the event Gaia16apd seems to outshine even the other SLSNe at rest-frame wavelengths below $\sim 3000$ \AA. Yan et al (2016) have recently presented HST UV spectra and attributed the UV flux to low metallicity and hence reduced line blanketing. Here we present UV and optical light curves over a longer baseline in time, revealing a rapid decline at UV wavelengths despite a typical optical evolution. Combining the published UV spectra with our own optical data, we demonstrate that Gaia16apd has a much hotter continuum than virtually any SLSN at maximum light, but it cools rapidly thereafter and is indistinguishable from the others by $\sim 10$-15 days after peak. Comparing the equivalent widths of UV absorption lines with those of other events, we show that the excess UV continuum is a result of a more powerful central power source, rather than a lack of UV absorption relative to other SLSNe or an additional component from interaction with the surrounding medium. 
These findings strongly support the central-engine hypothesis for hydrogen-poor SLSNe. An explosion ejecting $M_{\rm ej} = 4 (0.2/\kappa)$ M$_\odot$, where $\kappa$ is the opacity in cm$^2$g$^{-1}$, and forming a magnetar with spin period $P=2$ ms, and $B=2\times10^{14}$ G (lower than other SLSNe with comparable rise-times) can consistently explain the light curve evolution and high temperature at peak. The host metallicity, $Z=0.18$ Z$_\odot$, is comparable to other SLSNe. • ### PS1-14bj: A Hydrogen-Poor Superluminous Supernova With a Long Rise and Slow Decay(1605.05235) Oct. 10, 2016 astro-ph.HE We present photometry and spectroscopy of PS1-14bj, a hydrogen-poor superluminous supernova (SLSN) at redshift $z=0.5215$ discovered in the last months of the Pan-STARRS1 Medium Deep Survey. PS1-14bj stands out by its extremely slow evolution, with an observed rise of $\gtrsim 125$ rest-frame days, and exponential decline out to $\sim 250$ days past peak at a measured rate of $0.01~{\rm mag~day}^{-1}$, consistent with fully-trapped $^{56}$Co decay. This is the longest rise time measured in a SLSN to date, and the first SLSN to show a rise time consistent with pair-instability supernova (PISN) models. Compared to other slowly-evolving SLSNe, it is spectroscopically similar to the prototype SN2007bi at maximum light, though lower in luminosity ($L_{\rm peak} \simeq 4.6 \times 10^{43} {\rm erg s}^{-1}$) and with a flatter peak than previous events. PS1-14bj shows a number of peculiar properties, including a near-constant color temperature for $>200$ days past peak, and strong emission lines from [O III] $\lambda$5007 and [O III] $\lambda$4363 with a velocity width of $\sim$3400 km/s, in its late-time spectra. These both suggest there is a sustained source of heating over very long timescales, and are incompatible with a simple $^{56}$Ni-powered/PISN interpretation. 
A modified magnetar model including emission leakage at late times can reproduce the light curve, in which case the blue continuum and [O III] features are interpreted as material heated and ionized by the inner pulsar wind nebula becoming visible at late times. Alternatively, the late-time heating could be due to interaction with a shell of H-poor circumstellar material. • ### Slowly fading super-luminous supernovae that are not pair-instability explosions(1310.4446) Super-luminous supernovae that radiate more than 10^44 ergs per second at their peak luminosity have recently been discovered in faint galaxies at redshifts of 0.1-4. Some evolve slowly, resembling models of 'pair-instability' supernovae. Such models involve stars with original masses 140-260 times that of the Sun that now have carbon-oxygen cores of 65-30 solar masses. In these stars, the photons that prevent gravitational collapse are converted to electron-positron pairs, causing rapid contraction and thermonuclear explosions. Many solar masses of 56Ni are synthesized; this isotope decays to 56Fe via 56Co, powering bright light curves. Such massive progenitors are expected to have formed from metal-poor gas in the early Universe. Recently, supernova 2007bi in a galaxy at redshift 0.127 (about 12 billion years after the Big Bang) with a metallicity one-third that of the Sun was observed to look like a fading pair-instability supernova. Here we report observations of two slow-to-fade super-luminous supernovae that show relatively fast rise times and blue colours, which are incompatible with pair-instability models. Their late-time light-curve and spectral similarities to supernova 2007bi call the nature of that event into question. Our early spectra closely resemble typical fast-declining super-luminous supernovae, which are not powered by radioactivity. Modelling our observations with 10-16 solar masses of magnetar-energized ejecta demonstrates the possibility of a common explosion mechanism. 
The lack of unambiguous nearby pair-instability events suggests that their local rate of occurrence is less than 6x10^-6 times that of the core-collapse rate.
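The blackbody parameters quoted in the first kilonova abstract above ($T\approx 8300$ K, $R\approx 4.5\times 10^{14}$ cm at $0.6$ days, $v\approx 0.3c$, $L_{\rm bol}\approx 5\times10^{41}$ erg s$^{-1}$) can be cross-checked for mutual consistency with a few lines of Python. This is only an illustrative sketch; the constants and tolerances below are not taken from any of the papers.

```python
# Sanity check: do the quoted blackbody radius, time since merger,
# expansion velocity and bolometric luminosity hang together?
import math

SIGMA_SB = 5.6704e-5   # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
C_LIGHT = 2.998e10     # speed of light, cm/s

T = 8.3e3              # blackbody temperature, K
R = 4.5e14             # blackbody radius, cm
t = 0.6 * 86400        # time since merger, s

# Implied free-expansion velocity R/t should be close to 0.3c.
v = R / t
assert 0.25 < v / C_LIGHT < 0.35

# Blackbody luminosity L = 4*pi*R^2*sigma*T^4 should be the same
# order of magnitude as the quoted L_bol ~ 5e41 erg/s.
L = 4 * math.pi * R**2 * SIGMA_SB * T**4
assert 1e41 < L < 1e42
```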
https://mathoverflow.net/questions/335856/constructive-proof-of-existence-of-free-algebras-for-infinitary-equational-theor/335860
# Constructive proof of existence of free algebras for infinitary equational theories

Is it constructively true that all (not necessarily finitary) equational theories $$T = (\Sigma, E)$$ have an initial model?

The usual proof for finitary equational theories I know first constructs from the signature $$\Sigma$$ the set $$P$$ of syntax trees/preterms. This set is by construction the initial model of the theory $$(\Sigma, \emptyset)$$, i.e. it will usually not satisfy the equations $$E$$. One then considers the congruence $$R \subseteq P \times P$$ generated by (all interpretations of) the equations in $$E$$, and proves that $$Q = P / R$$ is a model of $$T$$ and then that it is the initial one.

If $$\Sigma$$ contains an operation symbol $$f$$ of non-finitary arity $$A$$ then I struggle with defining the operations on the quotient $$Q$$. The interpretation of $$f$$ for $$Q$$ should be a function $$f_Q : Q^A \rightarrow Q$$, and should be defined in terms of the function $$f_P : P^A \rightarrow P$$ on syntax trees. If $$A$$ were finite, then any given map $$x : A \rightarrow Q$$ could be lifted along $$P \twoheadrightarrow Q$$ to a map $$x' : A \rightarrow P$$, and then $$f_Q(x)$$ could be defined as the residue class of $$f_P(x')$$ in $$Q$$. But if $$A$$ is not a choice object/set, then the proof is stuck here.

Is there a way to get around this issue without assuming choice, or is it maybe known that the existence of certain initial algebras implies some version of the axiom of choice?

EDIT: The reference pointed out by Valery Isaev contains the answer to my questions. There are models of ZF (without C) in which there is no initial algebra for a certain equational theory; in particular, it cannot be proved to exist using just constructive logic.
On the other hand, initial algebras exist for all theories in all Grothendieck toposes provided that AC holds in the metatheory, so all choice principles that fail to hold in some Grothendieck topos don't follow from the existence of initial algebras.

• The problem is that whether two words are equal under $E$ is undecidable. (I don't have an exact reference, but it should be a variant of the Post correspondence problem, or the undecidability of the word problem for groups.) So this may depend on the treatment of equality in your definition of initial model. – Matt F. Jul 10 '19 at 14:54
• @MattF.: actually, the bigger problem is how to avoid choice; decidability of equality does not seem to be relevant. – Andrej Bauer Jul 10 '19 at 15:54

It was proved by Andreas Blass in Words, free algebras, and coequalizers that free infinitary algebras are constructible neither in topoi nor in ZF. It is easy to see that the existence of free algebras for all theories is equivalent to the existence of initial algebras of all theories. Even though initial algebras do not exist in "the basic constructive mathematics", there are stronger theories in which they do exist and which can still be called constructive. For example, initial algebras can be constructed in homotopy type theory with recursive higher inductive types (see Lumsdaine, Shulman, Semantics of higher inductive types).

• Ah thanks for reminding me that this paper exists, it's very relevant! Regarding your remark on HoTT's higher inductive types, see my comment to Andrej Bauer's reply. – Martin Bidlingmaier Jul 10 '19 at 19:30
• Just a short addition: a weaker version of higher inductive types suffices, namely the quotient inductive types of Altenkirch and Kaposi. – Ingo Blechschmidt Jul 10 '19 at 20:40

As was already pointed out by Valery Isaev, even in the presence of excluded middle, initial algebras for equational theories need not exist.
I would like to explain a bit what is needed from a constructive point of view. Suppose $$T = (\Sigma, E)$$ is an equational theory, where $$\Sigma$$ is a family $$\Sigma = (A_\mathrm{op})_{\mathrm{op} \in I}$$ of sets $$A_\mathrm{op}$$ indexed by a set $$I$$. We think of the elements of $$I$$ as the operation symbols, and of $$A_\mathrm{op}$$ as the arity of the operation symbol $$\mathrm{op}$$. (Normally arities are natural numbers, but since we allow infinitary operations it is better for arities to be general sets.) A $$T$$-algebra $$C$$ is given by a carrier set $$|C|$$ and, for each $$\mathrm{op} \in I$$, a map $$\mathrm{op}_C : |C|^{A_\mathrm{op}} \to |C|$$, such that the equations $$E$$ are satisfied.

A natural way of constructing the initial $$T$$-algebra is as follows:

1. Construct the set of well-founded trees $$W_T$$ whose branching types are $$\Sigma$$, i.e., the initial algebra for the polynomial functor $$X \mapsto \Sigma_{\mathrm{op} \in I} X^{A_\mathrm{op}}$$. This is also known as a $$W$$-type.
2. Quotient $$W_T$$ by the (interpretations of) equations $$E$$ to obtain a candidate for the initial algebra.

We cannot get either step for free, but in general the first step is the easier one, as it is well understood what it takes to have $$W$$-types in a constructive setting. For the second step to go through, one needs to resolve the question posed by the OP, namely: how do we lift operations from the quotient $$W_T/E$$ to $$W_T$$? It looks like we need choice. Indeed, it suffices for all the arities $$A_\mathrm{op}$$ to satisfy choice (to be choice sets, also called projective objects), but is that necessary? I do not know of any way of avoiding choice if one attempts to construct the initial algebra as a quotient of an inductively defined set.

Homotopy type theory offers an alternative. We avoid stratifying the construction of the initial algebra into an inductive construction followed by a quotient.
Instead, we make a purely inductive construction: the initial $$T$$-algebra is the higher-inductive type $$X$$ with the following constructors: • for each $$\mathrm{op} \in I$$, there is a point constructor $$\overline{op} : X^{A_\mathrm{op}} \to X$$; • for each equation $$\ell_i(x_1, \ldots, x_n) = r_i(x_1, \ldots, x_n)$$ in $$E$$ there is a path constructor $$e_i : \prod (x_1, \ldots, x_n : X)\,.\, \overline{\ell}_i(x_1, \ldots, x_n) =_X \overline{r}_i(x_1, \ldots, x_n)$$, • set-truncation: for all $$x, y \in X$$ and all paths $$p, q : x =_X y$$ there is a path $$\tau_{p,q} : p =_{x =_X y} q$$. For further reference, look at the HoTT book chapter on the real numbers, where a variant of such a construction is used to present the Cauchy completion of rational numbers in an inductive fashion. • Thanks for the thorough answer. Do you know whether the existence of initial algebras is equivalent to some version of choice? – Martin Bidlingmaier Jul 10 '19 at 19:02 • Regarding HoTT's higher inductive types: As I understand their semantics is not completely resolved. HoTT has a model in every infinity topos, and HoTT's hsets correspond to 0-truncated objects. The subcategory of 0-truncated objects of an infinity topos is a 1-topos, and every 1-topos arises in this way. So the logic of HoTT's hsets is essentially just the internal logic of Grothendieck 1-toposes. But then if not every Grothendieck 1-topos has initial models (because choice might fail), HoTT shouldn't prove that they exist. So HoTT shouldn't include general HITs. Am I overlooking something? – Martin Bidlingmaier Jul 10 '19 at 19:15 • Ah, so it appears that any Grothendieck topos relative to a topos satisfying AC has initial algebras of all equational theories (Rosebrugh, Abstract families of algebras), so this argument doesn't rule out models of arbitrary HITs in (Grothendieck) infinity toposes. – Martin Bidlingmaier Jul 10 '19 at 19:43 • I have no definitive answers regarding semantics of HoTT. 
I find it reasonable to take HITs as a primary notion, on the grounds that they extend inductive constructions. Certainly they’re no more suspect than the axiom of choice. – Andrej Bauer Jul 11 '19 at 10:54 • @MartinBidlingmaier Right -- since all Grothendieck toposes over a topos with AC have initial algebras, but don't satisfy choice, the existence of initial algebras doesn't imply choice. What seems more likely to me is that the existence of initial algebras might be equivalent to the existence of sufficiently large regular ordinals, e.g. something like the "regular extension axiom" -- note that Blass's proof proceeds by way of constructing an uncountable regular cardinal and observing that (modulo a large cardinal hypothesis) ZF cannot construct such a thing. – Mike Shulman Jul 11 '19 at 13:10
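Step 1 of the construction discussed above, the W-type of well-founded trees over the signature, is an ordinary inductive type and can be written down directly in a proof assistant. A sketch in Lean 4 syntax (the names `W` and `sup` are illustrative):

```lean
-- Well-founded trees over a signature: `I` indexes the operation symbols
-- and `A op` is the arity of the symbol `op`.  This is the initial algebra
-- of the polynomial functor X ↦ Σ (op : I), X^(A op), i.e. the term
-- algebra before quotienting by the equations E, which is where choice
-- enters the picture.
inductive W (I : Type) (A : I → Type) : Type where
  | sup : (op : I) → (A op → W I A) → W I A
```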
https://listserv.uni-heidelberg.de/cgi-bin/wa?A2=LATEX-L;eb5edc44.0307&FT=M&P=2413011&H=N&S=b
## LATEX-L@LISTSERV.UNI-HEIDELBERG.DE

Re: Generic quark test
Will Robertson <[log in to unmask]>
Sun, 14 Sep 2008 17:41:34 +0930

On 14/09/2008, at 4:49 PM, Joseph Wright wrote:

> It might be handy to have a check to see if a value is any quark, for
> "make safe" code. So far, I've ended up with:
>
> \def_new:Npn \if_quark:NT #1 {
>   \exp_after:NN \exp_after:NN \exp_after:NN \if_quark:w
>   \exp_after:NN \token_to_string:N #1 \q_stop
> }
>
> \def_new:Npn \if_quark:wT #1#2#3\q_stop {
>   \int_compare:nNnT { \char_value_uccode:n {#2} } = {Q}
> }
>
> which seems to work but I'm worried may not be suitable more
> generally. Any ideas?

Hmmm. What about if quarks contained an additional parameter to make this check more robust? Let's say

    \def_new:Npn \quark_new:N #1 {\tlp_new:Nn #1 {\q_quark #1}}

Oh. But I guess you don't want to be doing any expansion in the \if_quark function so you wouldn't be able to see if \q_quark was there or not. Never mind, I suppose.

W
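For comparison with the 2008-era names quoted above (`\def_new:Npn`, `\tlp_new:Nn`, which correspond to `\cs_new:Npn` and `\tl_new:N` in today's expl3), quark creation and testing are now part of the public expl3 interface. A minimal sketch, not from the thread:

```latex
% Current expl3: \quark_new:N defines a new quark, and dedicated
% conditionals test for specific quarks such as \q_nil.
\ExplSyntaxOn
\quark_new:N \q_my_mark
\quark_if_nil:NTF \q_nil
  { \typeout { nil ~ quark } }
  { \typeout { not ~ the ~ nil ~ quark } }
\ExplSyntaxOff
```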
https://byjus.com/maths/differentiation-formulas/
# Differentiation Formulas

A differentiation formulas list has been provided here for students so that they can refer to it to solve problems based on differential equations. This is one of the most important topics in higher-class Mathematics. The general representation of the derivative is d/dx. This formula list includes derivatives of constants, trigonometric functions, polynomials, hyperbolic functions, logarithmic functions, exponentials, inverse trigonometric functions, etc. Based on these, there are a number of examples and problems present in the syllabus of Class 11 and 12, for which students can easily write answers.

## Differentiation Formulas List

In all the formulas below, f’ means $\frac{d(f(x))}{dx} = f'(x)$ and g’ means $\frac{d(g(x))}{dx} = g'(x)$. Both f and g are functions of x, differentiated with respect to x. We can also represent dy/dx as Dx y. Some of the general differentiation formulas are:

1. Power Rule: $\frac{d}{dx}(x^{n}) = nx^{n-1}$
2. Derivative of a constant, a: $\frac{d}{dx}(a) = 0$
3. Derivative of a constant multiplied with function f: $\frac{d}{dx}(a\cdot f) = a\cdot f'$
4. Sum Rule: $\frac{d}{dx}(f \pm g) = f' \pm g'$
5. Product Rule: $\frac{d}{dx}(fg) = fg' + gf'$
6. Quotient Rule: $\frac{d}{dx}(\frac{f}{g})$ = $\frac{gf’ – fg’}{g^2}$

### Differentiation Formulas for Trigonometric Functions

Trigonometry is the study of the relationship between the angles and sides of a triangle. There are six main ratios: sine, cosine, tangent, cotangent, secant and cosecant. You must have learned the basic trigonometric formulas based on these ratios. Now let us see the formulas for the derivatives of the trigonometric functions (items 7–12 cover the hyperbolic functions).

1. $\frac{d}{dx} (sin~ x)= cos\ x$
2. $\frac{d}{dx} (cos~ x)= – sin\ x$
3. $\frac{d}{dx} (tan ~x)= sec^{2} x$
4. $\frac{d}{dx} (cot~ x)= -cosec^{2} x$
5. $\frac{d}{dx} (sec~ x) = sec\ x\ tan\ x$
6. $\frac{d}{dx} (cosec ~x)= -cosec\ x\ cot\ x$
7. $\frac{d}{dx} (sinh~ x)= cosh\ x$
8. $\frac{d}{dx} (cosh~ x) = sinh\ x$
9. $\frac{d}{dx} (tanh ~x)= sech^{2} x$
10. $\frac{d}{dx} (coth~ x)=-cosech^{2} x$
11. $\frac{d}{dx} (sech~ x)= -sech\ x\ tanh\ x$
12. $\frac{d}{dx} (cosech~ x ) = -cosech\ x\ coth\ x$

### Differentiation Formulas for Inverse Trigonometric Functions

Inverse trigonometric functions are the inverses of the trigonometric ratios. Let us see the formulas for the derivatives of the inverse trigonometric functions.

1. $\frac{d}{dx}(sin^{-1}~ x)$ = $\frac{1}{\sqrt{1 – x^2}}$
2. $\frac{d}{dx}(cos^{-1}~ x)$ = $-\frac{1}{\sqrt{1 – x^2}}$
3. $\frac{d}{dx}(tan^{-1}~ x)$ = $\frac{1}{1 + x^2}$
4. $\frac{d}{dx}(cot^{-1}~ x)$ = $-\frac{1}{1 + x^2}$
5. $\frac{d}{dx}(sec^{-1} ~x)$= $\frac{1}{|x|\sqrt{x^2 – 1}}$
6. $\frac{d}{dx}(cosec^{-1}~x)$= $-\frac{1}{|x|\sqrt{x^2 – 1}}$

### Other Differentiation Formulas

1. $\frac{d}{dx}(a^{x}) = a^{x} ln\ a$
2. $\frac{d}{dx}(e^{x}) = e^{x}$
3. $\frac{d}{dx}(log_a~ x)$ = $\frac{1}{(ln~ a)x}$
4. $\frac{d}{dx}(ln~ x) = 1/x$
5. Chain Rule: $\frac{dy}{dx}$ = $\frac{dy}{du} × \frac{du}{dx}$ = $\frac{dy}{dv} × \frac{dv}{du} × \frac{du}{dx}$

Bookmark this page and visit whenever you need a sneak peek at differentiation formulas. Also, visit us to learn integration formulas with proofs. Download the BYJU’S app to get interesting and personalised videos and have fun learning.
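The table entries above can be spot-checked numerically with a central difference quotient. A quick sketch using only Python's standard library (the step size and tolerances are illustrative):

```python
# Numerically verify a few derivative formulas from the tables.
import math

def deriv(f, x, h=1e-6):
    """Central difference approximation to f'(x), accurate to O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# d/dx sin x = cos x, checked at x = 0.7
assert abs(deriv(math.sin, 0.7) - math.cos(0.7)) < 1e-7
# d/dx tan x = sec^2 x = 1/cos^2 x
assert abs(deriv(math.tan, 0.3) - 1 / math.cos(0.3) ** 2) < 1e-7
# d/dx arctan x = 1/(1 + x^2)
assert abs(deriv(math.atan, 2.0) - 1 / (1 + 2.0 ** 2)) < 1e-7
# Chain rule: d/dx sin(x^2) = 2x cos(x^2), checked at x = 0.5
assert abs(deriv(lambda t: math.sin(t ** 2), 0.5) - 2 * 0.5 * math.cos(0.25)) < 1e-7
```

With a step of $h = 10^{-6}$ the central difference agrees with the exact derivative to roughly nine significant digits, so the loose tolerances above pass comfortably.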
https://socratic.org/questions/how-do-you-find-the-exact-value-of-arctan-cot-11-6
# How do you find the exact value of arctan(cot(11π / 6))?

Aug 25, 2016

$- \frac{\pi}{3}$

#### Explanation:

$\cot \left(\frac{11}{6} \pi\right) = \tan \left(\frac{\pi}{2} - \frac{11}{6} \pi\right) = \tan \left(- \frac{4}{3} \pi\right)$

So $\arctan \left(\cot \left(\frac{11}{6} \pi\right)\right) = \arctan \left(\tan \left(- \frac{4}{3} \pi\right)\right)$. The operand $- \frac{4}{3} \pi$ is not itself the answer, because the principal value of the arctangent must lie in $\left(- \frac{\pi}{2} , \frac{\pi}{2}\right)$. Since the tangent has period $\pi$, the angles with the same tangent are given by the general solution $n \pi + \left(- \frac{4}{3} \pi\right) , n = 0 , \pm 1 , \pm 2 , \pm 3 , \ldots$ For $n = 1$ this gives $\pi - \frac{4}{3} \pi = - \frac{\pi}{3}$, which does lie in the principal range, so

$\arctan \left(\cot \left(\frac{11}{6} \pi\right)\right) = - \frac{\pi}{3}$

As a direct check: $\cot \left(\frac{11}{6} \pi\right) = \frac{\cos \left(11 \pi / 6\right)}{\sin \left(11 \pi / 6\right)} = \frac{\sqrt{3} / 2}{- 1 / 2} = - \sqrt{3}$, and $\arctan \left(- \sqrt{3}\right) = - \frac{\pi}{3}$.

Your calculator might give the answer as $- {60}^{o}$. All this is attributed to the convention that inverse trigonometric functions return principal values only. The algorithms in the software for computer approximations of inverse trigonometric functions are based on this convention. For example, $\tan \left(- {240}^{o}\right)$ is displayed as $- 1.732050808$, but its inverse is displayed as $- {60}^{o}$.

For applications, it has to be remembered that trigonometric functions are periodic, and are bijective only in a part of one period, in which the principal value is defined. For example, $\sin x$ is bijective in $\left[- \frac{\pi}{2} , \frac{\pi}{2}\right]$, within $\left[- \pi , \pi\right]$.
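The result can also be verified numerically, computing $\cot \theta$ as $1 / \tan \theta$ (a quick sketch using Python's standard library):

```python
# Numerical check that arctan(cot(11*pi/6)) = -pi/3.
import math

theta = 11 * math.pi / 6
cot_theta = 1 / math.tan(theta)  # cot θ = 1/tan θ

assert math.isclose(cot_theta, -math.sqrt(3), rel_tol=1e-9)
assert math.isclose(math.atan(cot_theta), -math.pi / 3, rel_tol=1e-9)
assert math.isclose(math.degrees(math.atan(cot_theta)), -60.0, rel_tol=1e-9)
```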
2020-02-23 16:13:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 17, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8642685413360596, "perplexity": 2172.083429302494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145818.81/warc/CC-MAIN-20200223154628-20200223184628-00000.warc.gz"}
https://cs.stackexchange.com/questions/22662/why-does-merge-sort-run-in-on2-time/22674
# Why does merge sort run in $O(n^2)$ time?

I have been learning about Big O, Big Omega, and Big Theta. I have been reading many SO questions and answers to get a better understanding of the notations. From my understanding, it seems that Big O is the upper bound on the running time/space of the algorithm, Big Omega is the lower bound on the running time/space of the algorithm, and Big Theta lies in between the two. This particular answer on SO stumped me with the following statement

For example, merge sort worst case is both ${\cal O}(n\log n)$ and $\Omega(n\log n)$ - and thus is also $\Theta(n\log n)$, but it is also ${\cal O}(n^2)$, since $n^2$ is asymptotically "bigger" than it. However, it is NOT $\Theta(n^2)$, since the algorithm is not $\Omega(n^2)$

I thought merge sort is ${\cal O}(n\log n)$ but it seems it is also ${\cal O}(n^2)$ because $n^2$ is asymptotically bigger than it. Can someone explain this to me?

• possible duplicate of Sorting functions by asymptotic growth – D.W. Mar 16 '14 at 1:58
• For mergesort, you even have a bound $O(2^n)$. It is an upper bound, nobody said it is in any way "best possible". – vonbrand Mar 16 '14 at 2:39
• We both agree that $10 < 20$. Also, $10 < 30$. In much the same way, $f(n)$ can be $O(g(n))$ and $O(h(n))$ at the same time. On the other hand, if $x = y$ and $x = z$, we know also $y = z$. Similarly, if $f(n)$ is $\Theta(g(n))$ and $\Theta(h(n))$, we know $g(n)$ is $\Theta(h(n))$ and $h(n)$ is $\Theta(g(n))$. – Patrick87 Mar 16 '14 at 13:57
• $O(n^2)$ does not necessarily mean the running time is exactly $n^2$. It just means the running time is at most a constant times $n^2$; in this case the bound is not asymptotically tight. For further details see my answer. – tanmoy Mar 16 '14 at 14:22
• All answers are available in our reference questions on asymptotics. – Raphael Apr 12 '14 at 17:53

This is actually a mathematical matter, not a matter of the worst-case running time.
First you have to know that Big O does not represent the worst-case running time of an algorithm. Big O comes from mathematics and it represents an asymptotic upper bound for some mathematical functions. You can see that the graph of $n^2$ is always above the graph of $n\log n$, so you can say $n^2$ is an upper bound on $n\log n$, as the value of $n\log n$ is always less than $n^2$. So mathematically you can say $n\log n=O(n^2)$. Similarly, the graphs of all $n^i$ for $i\geq 2$ are always above the graph of $n\log n$. So you can say $n\log n$ is also $O(n^3),O(n^4),O(n^5),\ldots$ But one thing you have to remember is that though $n\log n$ is $O(n^2)$, $n\log n$ is not $\Theta(n^2)$, because $n\log n$ is not $\Omega(n^2)$. Mathematically you can say $n\log n$ is $o(n^2)$. If you still have any confusion regarding my answer, you can ask me here.

Let's consider a simpler situation. For numbers $x,y$, say that $x = S(y)$ if $x \leq y$, that $x = L(y)$ if $x \geq y$, and that $x = E(y)$ if both $x = S(y)$ and $x = L(y)$. Since $1 = S(1)$ and $1 = L(1)$, it is the case that $1 = E(1)$. But it is also the case that $1 = S(2)$.

The situation with running times is similar. Given two positive functions $f,g$, we say that $f = O(g)$ if $f(n) \leq Cg(n)$ for some $C > 0$ (independent of $n$), that $f = \Omega(g)$ if $f(n) \geq cg(n)$ for some $c > 0$, and that $f = \Theta(g)$ if $f = O(g)$ and $f = \Omega(g)$. For example $2n = O(n)$ since $2n \leq 2 \cdot n$ and $2n = \Omega(n)$ since $2n \geq n$, and so $2n = \Theta(n)$. It is also the case that $2n = O(n^2)$ since $2n \leq 2\cdot n^2$.

For numbers we have the important property of transitivity: $x \leq y \leq z$ implies $x \leq z$. The same relation holds for functions: if $f = O(g)$ and $g = O(h)$ then $f = O(h)$; indeed, if $f \leq C_1 g$ and $g \leq C_2 h$ then $f \leq C_1C_2 h$.
The running time of merge sort $T(n)$ is $\Theta(n\log n)$, and $n\log n = O(n^2)$, and so $T(n) = O(n^2)$, but $T(n)$ is not $\Omega(n^2)$ and so $T(n)$ is not $\Theta(n^2)$. The class $\Theta(n\log n)$ is a tight bound on $T(n)$; $O(n^2)$ is an upper bound which is not tight.

• Does this mean that the worst possible running time for mergesort is O(nlogn) but the possible running time for mergesort is O(infinity) ?? – Computernerd Mar 16 '14 at 12:58
• Not at all. The running time $T(n)$ of mergesort is bounded by $cn\log n \leq T(n) \leq Cn\log n$ for some constants $0<c<C$. – Yuval Filmus Mar 16 '14 at 15:00

Intuitively, $f=\cal O(g)$ means that $f$ grows no faster than $g$. Clearly, $n \log n$ grows no faster than $n^2$ and we can write $n \log n = \cal O(n^2)$. Symmetrically, $f=\cal \Omega(g)$ means that $f$ grows at least as fast as $g$. $n \log n$ does not grow at least as fast as $n^2$, so we have the inequality $n \log n \neq \cal \Omega (n^2)$. When $f$ grows no faster than $g$ and at least as fast as $g$, it makes sense to say that $f$ grows as fast as $g$; write $f=\cal \Theta(g)$.

To answer your question, notice that grows no faster than is transitive. For numbers $x,y,z$ we all know that if $x\lt y$ and $y\lt z$ then this implies that $x\lt z$. Now for functions $f,g,h$, we have that if $f=\cal O(g)$ and $g=\cal O(h)$ then this implies that $f=\cal O(h)$. Let $f=\cal O(n\log n)$. Since $n \log n = \cal O(n^2)$, we have that $f = \cal O(n^2)$. This should hopefully help you better understand Big O notation.

You can prove transitivity of $\cal O$ using the formal definition: For functions $f,g$, we say that $f$ is Big O of $g$ (write $f=\cal O(g)$) if there exist constants $n_0,c \gt 0$ such that $f\le c\times g$ whenever $n\ge n_0$. This means that starting at some point along the $x$ axis (we call it $n_0$) the function $f$ is always bounded above by a constant multiple of $g$.
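The two directions of the argument — $n\log n = O(n^2)$ holds while $n\log n = \Omega(n^2)$ fails — can be sanity-checked numerically (an illustration, not a proof):

```python
from math import log2

# n * log2(n) <= n * n for every n >= 1, witnessing n log n = O(n^2) with C = 1, n0 = 1
assert all(n * log2(n) <= n * n for n in range(1, 10_000))

# but n^2 / (n log2 n) = n / log2(n) grows without bound, so no constant c > 0 gives
# n log n >= c * n^2 for all large n: n log n is not Omega(n^2), hence not Theta(n^2)
ratios = [n / log2(n) for n in (2 ** k for k in range(2, 20))]
assert all(b > a for a, b in zip(ratios, ratios[1:]))
```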
I do not find any contradiction between your stated understanding and the answer from StackOverflow you posted. It seems your problem is at the level of your understanding of upper bound and lower bound.

In this context of algorithm complexity, upper bound means cannot be asymptotically worse than. As such all algorithms are $O(\infty)$. Lower bound means cannot be asymptotically better than. As such all algorithms are $\Omega(0)$. However, for those algorithms for which there exists an $O(f(n))$ and an $\Omega(f(n))$, where $f(n)$ is some function of $n$, there also exists a $\Theta(f(n))$. So we notice that of the three only $\Theta$ is unique.
2021-01-28 06:18:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8868736624717712, "perplexity": 109.09340507883252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704835901.90/warc/CC-MAIN-20210128040619-20210128070619-00392.warc.gz"}
http://quant.stackexchange.com/tags/differential-equations/hot?filter=year
Tag Info

4

We consider the case where the Novikov condition is satisfied, that is, \begin{align*} E\left[\exp\left(\frac{1}{2}\int_0^T \theta^2_s ds \right)\right] < \infty. \end{align*} Then $\{L_t \mid t \ge 0\}$ is a $(\mathscr{F}_t, \mathbb{P})$-martingale. On $\mathscr{F}_T$, we define the probability measure $Q$ by \begin{align*} ...

3

I'd like to give an alternative derivation not involving the clever (mystifying?) transformation to the heat equation and thus present a more general technique for solving constant coefficient advection-diffusion PDEs. All we need is the Fourier transform: \begin{align*} \mathcal{F}[f] & = \int_{-\infty}^\infty e^{-i \omega y} f(y) dy, \end{align*} ...

1

By Bayes' rule for conditional expectation (or here), $$E_{\mathbb Q}[X_t | \mathscr F_u]\, E[L_T| \mathscr F_u] = E[X_tL_T| \mathscr F_u]$$ $$\to E_{\mathbb Q}[X_t | \mathscr F_u]\, L_u = E[X_tL_T| \mathscr F_u]$$ $$\to E_{\mathbb Q}[X_t | \mathscr F_u] = \frac{1}{L_u} E[X_tL_T| \mathscr F_u] = \frac{1}{L_u} E[X_tL_t| \mathscr F_u] = \ldots$$

1

Based on the form of your equation, we can consider the SDE \begin{align*} dX_t = \sigma dW_t, \end{align*} where $W$ is a standard Brownian motion. Since, for $0 \leq t \leq T$, \begin{align*} X_T = X_t + \sigma (W_T-W_t), \end{align*} based on Feynman–Kac formula, the solution is given by \begin{align*} F(t, x) &= E\left(X_T^2 \mid X_t = x\right)\\ ...

1

As you have guessed correctly, these types of questions can be answered using Ito's Lemma. We have: $$d(M_t)= d(Z_t e^{\int_0^tF(Z_u)du})=d(Z_t) e^{\int_0^tF(Z_u)du}+Z_t d(e^{\int_0^tF(Z_u)du})+d(Z_t)d(e^{\int_0^tF(Z_u)du})$$ For the first two terms on the R.H.S., we have: d(Z_t) e^{\int_0^tF(Z_u)du} = (f(W_t)dW_t + ...

Only top voted, non community-wiki answers of a minimum length are eligible
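The closed form from the Feynman–Kac snippet above, $F(t,x) = E[X_T^2 \mid X_t = x] = x^2 + \sigma^2 (T-t)$, is easy to check by Monte Carlo (the values of $x$, $\sigma$ and $T-t$ below are illustrative, not from the original answers):

```python
import random
from math import sqrt

random.seed(42)

# illustrative values: X_t = x, volatility sigma, remaining horizon tau = T - t
x, sigma, tau = 1.0, 0.5, 2.0
n = 200_000

# X_T = x + sigma * (W_T - W_t), with W_T - W_t ~ N(0, tau)
est = sum((x + sigma * sqrt(tau) * random.gauss(0.0, 1.0)) ** 2 for _ in range(n)) / n

exact = x ** 2 + sigma ** 2 * tau  # F(t, x) = x^2 + sigma^2 (T - t)
print(est, exact)  # the estimate matches the closed form to about two decimals
```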
2016-02-12 05:48:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9998568296432495, "perplexity": 818.9162126626145}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701163438.83/warc/CC-MAIN-20160205193923-00013-ip-10-236-182-209.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/95668/physical-origins-of-the-heisenberg-model-of-ferromagnetism
# Physical origins of the Heisenberg model of ferromagnetism

I am trying to understand the physical intuition behind the Ising and Heisenberg models (thus I am not sure if my question is appropriate for this mostly mathematical site). I will concentrate on the Heisenberg model. In the Heisenberg model of ferromagnetism the atoms are assumed to be arranged in a lattice. To the $i$-th atom corresponds a spin operator $\vec S_i$ (here $i$ belongs to the lattice). The Hamiltonian is given by $H=-J\sum_{<ij>}\vec S_i\cdot\vec S_j$ where the sum runs over all pairs of atoms connected by an edge, and $J>0$ is a constant. As far as I understand from the literature, this form of the Hamiltonian comes from the exchange interaction which, in turn, comes from the fact that each (neutral) atom is a boson (see e.g. vol. IX of Landau-Lifshitz, paragraph 72).

QUESTION. In addition to the exchange interaction there exists a different spin-spin interaction which has a different form. Why was it neglected?

The spin-spin interaction between two atoms with magnetic moments $\vec\mu_1,\vec\mu_2$ and large separation $\vec R$ is equal to $$\frac{\vec\mu_1\cdot\vec\mu_2}{R^3}-3\frac{(\vec\mu_1\cdot\vec R)(\vec\mu_2\cdot\vec R)}{R^5}.$$ After quantization, $\vec \mu_i$ is replaced by $\vec S_i$ times a constant.

As it is claimed in "Statistical mechanics" by R. Feynman, Section 7.1, the spin-spin interaction is very small in comparison to the exchange interaction. However this seems to be true only at short distances, since the exchange interaction decays exponentially with the distance $R$ between the atoms, while the spin-spin interaction decays as $1/R^3$. Hence on a 3d lattice the sum over the lattice of terms of order $1/R^3$ diverges logarithmically. Nor can it be neglected from the experimental point of view: when the ferromagnetic material is magnetized, it creates a magnetic field, which will interact with another magnet or moving charge.
This interaction has nothing to do with the exchange interaction, as far as I understand.
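The claimed logarithmic divergence of the dipolar lattice sum can be seen numerically: summing $1/|r|^3$ over integer lattice points shell by shell, each doubling of the radius contributes roughly the same amount (the continuum estimate is $4\pi\ln 2$ per doubling), so the partial sums grow like $\log R$. A small sketch (radii chosen for speed):

```python
from math import log, pi

def shell_sum(r_lo, r_hi):
    """Sum 1/|r|^3 over integer lattice points with r_lo < |r| <= r_hi."""
    total = 0.0
    R = r_hi
    for i in range(-R, R + 1):
        for j in range(-R, R + 1):
            for k in range(-R, R + 1):
                d2 = i * i + j * j + k * k
                if r_lo * r_lo < d2 <= r_hi * r_hi:
                    total += d2 ** -1.5
    return total

d1 = shell_sum(8, 16)   # contribution of the shell 8 < |r| <= 16
d2 = shell_sum(16, 32)  # contribution of the shell 16 < |r| <= 32

# each doubling of the radius adds roughly the same amount, close to the
# continuum estimate 4*pi*ln(2): the total over the lattice grows like log(R)
print(d1, d2, 4 * pi * log(2))
```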
2022-06-25 13:51:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9127165079116821, "perplexity": 192.716917488107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103035636.10/warc/CC-MAIN-20220625125944-20220625155944-00783.warc.gz"}
https://codedocs.xyz/gammasoft71/xtd/classxtd_1_1drawing_1_1system__colors.html
xtd - Reference Guide  0.1.0 Modern c++17 framework to create console, gui and unit test applications on Windows, macOS and Linux.

xtd::drawing::system_colors Class Reference (final)

Each property of the xtd::drawing::system_colors class is a xtd::drawing::color structure that is the color of a Windows display element.

#include <system_colors.h>

Inheritance diagram for xtd::drawing::system_colors:

Static Public Member Functions

static xtd::drawing::color accent ()
 Gets a system-defined color that has an ARGB value of 0xFF316AC5. This field is constant.

static xtd::drawing::color accent_text ()
 Gets a system-defined color that has an ARGB value of 0xFFFFFFFF. This field is constant.

static xtd::drawing::color active_border ()
 Gets a system-defined color that has an ARGB value of 0xFFD4D0C8. This field is constant.

static xtd::drawing::color active_caption ()
 Gets a system-defined color that has an ARGB value of 0xFF0054E3. This field is constant.

static xtd::drawing::color active_caption_text ()
 Gets a system-defined color that has an ARGB value of 0xFFFFFFFF. This field is constant.

static xtd::drawing::color app_workspace ()
 Gets a system-defined color that has an ARGB value of 0xFF808080. This field is constant.

static xtd::drawing::color button_face ()
 Gets a system-defined color that has an ARGB value of 0xFFECE9D8. This field is constant.

static xtd::drawing::color button_highlight ()
 Gets a system-defined color that has an ARGB value of 0xFFFFFFFF. This field is constant.

 Gets a system-defined color that has an ARGB value of 0xFFACA899. This field is constant.

static xtd::drawing::color control ()
 Gets a system-defined color that has an ARGB value of 0xFFECE9D8. This field is constant.

static xtd::drawing::color control_dark ()
 Gets a system-defined color that has an ARGB value of 0xFFACA899. This field is constant.

static xtd::drawing::color control_dark_dark ()
 Gets a system-defined color that has an ARGB value of 0xFF716F64. This field is constant.

static xtd::drawing::color control_light ()
 Gets a system-defined color that has an ARGB value of 0xFFF1EFE2. This field is constant.

static xtd::drawing::color control_light_light ()
 Gets a system-defined color that has an ARGB value of 0xFFFFFFFF. This field is constant.

static xtd::drawing::color control_text ()
 Gets a system-defined color that has an ARGB value of 0xFF000000. This field is constant.

static xtd::drawing::color desktop ()
 Gets a system-defined color that has an ARGB value of 0xFF004E98. This field is constant.

 Gets a system-defined color that has an ARGB value of 0xFF3D95FF. This field is constant.

 Gets a system-defined color that has an ARGB value of 0xFF9DB9EB. This field is constant.

static xtd::drawing::color gray_text ()
 Gets a system-defined color that has an ARGB value of 0xFFACA899. This field is constant.

static xtd::drawing::color highlight ()
 Gets a system-defined color that has an ARGB value of 0xFF316AC5. This field is constant.

static xtd::drawing::color highlight_text ()
 Gets a system-defined color that has an ARGB value of 0xFFFFFFFF. This field is constant.

static xtd::drawing::color hot_track ()
 Gets a system-defined color that has an ARGB value of 0xFF000080. This field is constant.

static xtd::drawing::color inactive_border ()
 Gets a system-defined color that has an ARGB value of 0xFFD4D0C8. This field is constant.

static xtd::drawing::color inactive_caption ()
 Gets a system-defined color that has an ARGB value of 0xFF7A96DF. This field is constant.

static xtd::drawing::color inactive_caption_text ()
 Gets a system-defined color that has an ARGB value of 0xFFD8E4F8. This field is constant.

static xtd::drawing::color info ()
 Gets a system-defined color that has an ARGB value of 0xFFFFFFE1. This field is constant.

static xtd::drawing::color info_text ()
 Gets a system-defined color that has an ARGB value of 0xFF000000. This field is constant.

 Gets a system-defined color that has an ARGB value of 0xFFFFFFFF. This field is constant.

 Gets a system-defined color that has an ARGB value of 0xFFECE9D8. This field is constant.

 Gets a system-defined color that has an ARGB value of 0xFF316AC5. This field is constant.

 Gets a system-defined color that has an ARGB value of 0xFF000000. This field is constant.

static xtd::drawing::color scroll_bar ()
 Gets a system-defined color that has an ARGB value of 0xFFD4D0C8. This field is constant.

static xtd::drawing::color text_box ()
 Gets a system-defined color that has an ARGB value of 0xFFFFFFFF. This field is constant.

static xtd::drawing::color text_box_text ()
 Gets a system-defined color that has an ARGB value of 0xFF000000. This field is constant.

static xtd::drawing::color window ()
 Gets a system-defined color that has an ARGB value of 0xFFFFFFFF. This field is constant.

static xtd::drawing::color window_frame ()
 Gets a system-defined color that has an ARGB value of 0xFF000000. This field is constant.

static xtd::drawing::color window_text ()
 Gets a system-defined color that has an ARGB value of 0xFF000000. This field is constant.

Detailed Description

Each property of the xtd::drawing::system_colors class is a xtd::drawing::color structure that is the color of a Windows display element.

Remarks
Better performance is achieved by using the properties of the xtd::drawing::SystemPens or xtd::drawing::SystemBrushes classes rather than creating a new pen or brush based on a value from xtd::drawing::system_colors. For example, if you wanted to get a brush for the face color of a 3-D element, use the SystemBrushes.Control property because it gets a brush that already exists, whereas calling the SolidBrush(Color) constructor with a parameter value of system_colors::Control will create a new brush.

The documentation for this class was generated from the following file: system_colors.h
2020-11-30 11:07:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5641288161277771, "perplexity": 11201.19885083755}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141213431.41/warc/CC-MAIN-20201130100208-20201130130208-00643.warc.gz"}
https://www.r-bloggers.com/2013/08/a-new-r-trick-for-me-at-least/
What we're going to be talking about today are dynamic argument lists for functions. Specifically, how to unpack and prepare them in R using ..., list(), and do.call().

### Biased by Matlab and varargin

Initially, I based my use of ... in R on my experience with Matlab's varargin. Using varargin, Matlab functions can have a signature of:

function f(varargin)
% do stuff here

Functions that use varargin are responsible for processing its contents, which is easy since it is simply a cell array. Thus, it can be "unpacked" and modified using cell array methods.

function f(varargin)
arg1 = varargin{1}
arg2 = varargin{2}
return(arg1*arg2)

At call, arguments captured by varargin can be specified as an expanded cell array:

args = {foo, bar}
f(args{:})

As a matter of fact, functions that do not use varargin can also be called this way, since Matlab effectively interprets an expanded cell array as a comma-separated list. This comes in handy when you have a mixture of required and optional arguments for a function.

f(arg, opts{:})

### Back to R …

I used to think ... was analogous to varargin since:

• it captures all function arguments not explicitly defined by the call signature
• the number of arguments it captures can vary

However, unlike varargin:

• ... is a special R language expression/object
• it needs to be converted to a list to access the arguments (names and/or values) that it captures

The former point is a strength and a quirk of R, as it allows for arguments encapsulated in ... to be passed on to additional functions:

f = function(x, ...) {
  y = g(x, ...)
  return(y)
}

The latter point above (unpacking ...) is actually easy to do:

f = function(x, ...) {
  args = list(...) # contains a=1, b=2
  return(args$a * args$b)
}

Where confusion arises for many is that ... is essentially immutable (cannot be changed).
While conceptually a list(), you can't modify it directly using list accessors:

f = function(x, ...) {
  ...[[1]] = 3 # this produces an error, as would ...$var and ...[1]
  y = g(x, ...)
  return(y)
}

So, what if I wanted to unpack arguments in ..., check/change their values, and repackage it for another function call? Since ... is immutable, the code below would throw an error.

f = function(x, ...) {
  args = list(...) # unpack, contains a='foo'
  args$a = bar
  ... = args # ERROR!
  y = g(x, ...)
  return(y)
}

Also, there isn't a way (that I've found yet) to unroll a list() object in R into a comma-separated list like you can with a cell array in Matlab.

# this totally doesn't work
args = list(a=1, b='foo')
result = f(args[*]) # making up syntax here. would be nice, no?

As it turns out, ... doesn't even come into play here. In fact, you need to use a rather deep R concept – calls. Whenever a function is used in R, a call is produced, which is an unprocessed expression that is then interpreted by the underlying engine. Why the delay? Only the creators/developers of R can fully detail why, but it does allow for some neat effects – e.g. the automatic labeling of plots.

To package a programmatically generated argument list one uses the do.call() function:

result = do.call('fun', list(arg1, arg2, etc, etc))

where the first argument is the name of the function to call, and the second argument is a list of arguments to pass along. For all intents and purposes, the R statement above is equivalent to the Matlab statement below.

results = fun(args{:}) % where args = {arg1, arg2, etc, etc}

Thus, the process to unpack ..., check/modify an argument, and repack for another function call becomes:

f = function(x, ...) {
  args = list(...) # unpack, contains a='foo'
  args$a = bar # change argument "a"
  y = do.call(g, c(x, args)) # repack arguments for call to g()
  return(y)
}

I must credit this epiphany to the following StackOverflow question and answer:

http://stackoverflow.com/questions/3414078/unpacking-argument-lists-for-ellipsis-in-r

Written with StackEdit.
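Putting the pattern together end to end (a minimal runnable sketch — g() here is a hypothetical stand-in for whatever downstream function you are wrapping):

```r
g <- function(x, a, b) paste(x, a, b)   # hypothetical downstream function

f <- function(x, ...) {
  args <- list(...)             # unpack the dots into an ordinary list
  args$a <- "changed"           # check/modify an argument
  do.call(g, c(list(x), args))  # repack and call g()
}

f("value", a = "original", b = "kept")
# equivalent to calling g("value", a = "changed", b = "kept")
```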
2021-05-15 17:22:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44327104091644287, "perplexity": 3545.792753973847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990551.51/warc/CC-MAIN-20210515161657-20210515191657-00398.warc.gz"}
https://www.physicsforums.com/threads/converrgence-of-oscillating-sequence.38424/
# Converrgence of oscillating sequence

1. Aug 6, 2004

Convergence of oscillating sequence

Hi, I have to prove that an oscillating sequence converges, and I am having some difficulty with the proof. The sequence is

$$c_{n+1} = \frac{1}{1+c_{n}} , c_{1} = 1$$

So, I've calculated the first few terms and have seen that the sequence oscillates. I know that I need to prove:

1) The differences alternate in sign.
2) The absolute differences decrease.
3) The absolute differences approach 0.

I have proved 1, using:

$$c_{n+1} - c_{n} = \left(\frac{1}{1+c_{n}}\right) - \left(\frac{1}{1+c_{n-1}}\right) =\frac{1+c_{n-1}-1-c_{n}}{1+c_{n-1}+c_{n}+c_{n-1}c_{n}}$$

$$=\frac{-(c_{n}-c_{n-1})}{1+c_{n-1}+c_{n}+c_{n-1}c_{n}}$$

And since all terms are positive, the denominator will be positive, and the difference between two terms will alternate in sign from the difference between the previous two terms.

I am now having trouble proving 2 and 3. I'm not exactly sure what to do; the example in my book is not very helpful. So far I have:

$$|c_{n+1}-c_{n}| < |c_{n} - c_{n-1}|$$

but that's not much... If anyone could help, that would be great!! Thanks!

Last edited: Aug 6, 2004

2. Aug 6, 2004

### arildno

Note that:

$$(1+c_{n})(1+c_{n-1})=2+c_{n-1}$$

By substituting

$$c_{n}=\frac{1}{1+c_{n-1}}$$

3. Aug 6, 2004

Thank you! Ok, now I have:

$$|c_{n+1}-c_{n}| = \frac{|c_{n}-c_{n-1}|}{2+c_{n-1}}$$

And since all terms are positive, $$2+c_{n-1}$$ will be positive, and each absolute difference will be a fraction of the previous absolute difference. Therefore they are decreasing and they will approach 0 as n approaches infinity. Is that enough to prove this by just saying this? Thanks!

(Is there any way to change the title of the thread? I made a typo )

4. Aug 6, 2004

### arildno

Technically, I guess you should prove that the relations you obtained imply that we've got a Cauchy sequence, and hence that the sequence converges (depends on what you may take for granted)
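As a numerical cross-check (not a substitute for the proof), the three properties can be verified directly; the limit is the fixed point of $c \mapsto 1/(1+c)$, i.e. $L = (\sqrt{5}-1)/2$:

```python
from math import sqrt

# iterate c_{n+1} = 1 / (1 + c_n), starting from c_1 = 1
cs = [1.0]
for _ in range(29):
    cs.append(1 / (1 + cs[-1]))

diffs = [b - a for a, b in zip(cs, cs[1:])]

# 1) consecutive differences alternate in sign
assert all(d1 * d2 < 0 for d1, d2 in zip(diffs, diffs[1:]))
# 2) the absolute differences strictly decrease
assert all(abs(d2) < abs(d1) for d1, d2 in zip(diffs, diffs[1:]))
# 3) the terms approach the positive root of L = 1/(1+L), i.e. L = (sqrt(5)-1)/2
assert abs(cs[-1] - (sqrt(5) - 1) / 2) < 1e-10
```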
2016-10-21 00:49:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8477226495742798, "perplexity": 527.0751713212629}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988717959.91/warc/CC-MAIN-20161020183837-00412-ip-10-142-188-19.ec2.internal.warc.gz"}
https://codereview.stackexchange.com/questions/220296/process-2-lists-concurrently-with-the-first-list-adding-to-the-second
# Process 2 lists concurrently with the first list adding to the second [closed]

I'm new to multi-threading and concurrency. I've got 2 lists that I want to process concurrently. Processing both lists requires a lot of `time.sleep()` and waiting for other servers to finish, so I figured rather than do one at a time I could process them simultaneously. The first list is un-validated items; I try to validate them and, if successful, add them to the second list for processing. I want to start validating items in the first list and then, while that is going, start on the processing of the 2nd list. I've since found out I wanted a Queue for the 2nd list and that has helped, but I can't help feeling that even though this works I'm missing something.

```python
import concurrent.futures
import threading
from queue import Queue
from time import sleep

def process_sr(jira, tuple, args, log):
    ...  # does some pretty straightforward stuff but lots of waiting
    return "done"

def waiting_for_build(tuple, validqueue, log):
    sr, pr = tuple
    iter = 0
    log.info("Waiting for {} to re-build".format(pr))
    while iter < 30:
        log.debug("%s still waiting for Jenkins", pr)
        sleep(20)
        status = pr.build_status()
        if status is None:
            # don't add. Build finished "FAILURE" or didn't build in 10 min
            break
        elif status == "Wait":
            iter += 1
        elif status == "Done":
            validqueue.put((sr, pr))
    # don't add. It's been more than 10 minutes and we need to finish

def build_lists(jira, log):
    validqueue = Queue()
    waiting_for_jenkins = []
    ...  # this builds the two lists
    return validqueue, waiting_for_jenkins

def main():
    args, log = cli_args()
    jira = JiraWrapper()
    validqueue, waiting_for_jenkins = build_lists(jira, log)
    workers = len(waiting_for_jenkins) + validqueue.qsize()

    threads = []
    for item in waiting_for_jenkins:
        name = str(item[1])
        threads.append(threading.Thread(target=waiting_for_build,
                                        name=name,
                                        args=(item, validqueue, log)))
        threads[-1].start()

    def worker(i):
        log.debug("started")
        results = []
        while True:
            log.debug("blocking on queue")
            item = validqueue.get()
            if item is None:
                log.debug("done")
                return results
            log.debug("got item %s", item)
            result = process_sr(jira, item, args, log)
            results.append({result: item})
            log.debug("task %s done", item)
            validqueue.task_done()

    log.debug("%d workers will be spawned", workers)
    if not workers:
        log.info("Nothing to do here")
    else:
        with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as executor:
            # can't use thread_name_prefix until python 3.6
            futures = executor.map(worker, range(workers))
            for thread in threads:
                thread.join()
            validqueue.join()
            for i in range(workers):
                validqueue.put(None)
            for future in futures:
                log.debug(future)
    exit(0)

if __name__ == '__main__':
    main()
```

I've included as much as I think is needed. The `main()` method, and specifically the two multi-threaded bits, are where I'm looking for feedback.
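The shutdown handshake this code relies on — consumer threads blocking on `Queue.get()` until a `None` sentinel arrives, with exactly one sentinel per worker — can be isolated into a minimal self-contained sketch. This is an illustration of the pattern only, not the poster's actual API; `consumer` and the `item * 2` body are stand-ins for `worker`/`process_sr`:

```python
import queue
import threading

def consumer(q, results):
    """Pull items until a None sentinel arrives, then exit."""
    while True:
        item = q.get()
        if item is None:      # sentinel: no more work for this worker
            q.task_done()
            return
        results.append(item * 2)  # stand-in for the real processing call
        q.task_done()

q = queue.Queue()
results = []
n_workers = 3
threads = [threading.Thread(target=consumer, args=(q, results))
           for _ in range(n_workers)]
for t in threads:
    t.start()

for item in range(5):          # the producer side would feed this queue
    q.put(item)
for _ in range(n_workers):     # exactly one sentinel per worker,
    q.put(None)                # enqueued after all real work

for t in threads:
    t.join()
print(sorted(results))
```

The key invariant is that sentinels are enqueued only after every producer has finished, so no real item can arrive after a worker has already consumed its sentinel and exited.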
http://forum.attractmode.org/index.php?action=profile;u=3252;area=showposts
### Show Posts

### Messages - iOtero

1 ##### Scripting / Re: This script not works. (SOLVED)
« on: Today at 08:17:00 AM »
How strange, nobody has answered me... I'll answer it myself: you had to convert the Windows files to Linux files, to put it simply.

2 ##### Themes / Re: layout dilima
« on: February 11, 2019, 01:16:38 PM »
The first thing would be to see the source code of the layout... And the attract.cfg...

3 ##### General / Re: A new question about the English language
« on: February 11, 2019, 01:13:24 PM »
"Update favorites game list. /or/ Create new favorites game list — but that's just my opinion."
Thanks, I agree with you...

4 ##### Scripting / This script not works.
« on: February 11, 2019, 01:11:30 PM »
This script works perfectly in Raspbian and in Ubuntu, but not in Lakka:
Code: [Select]
#!/bin/bash
IFS=';'
cd /storage/.attract/romlists
rm Favorites.txt
touch Favorites.txt
chmod 777 Favorites.txt
ls *.tag > tagfiles
while read tagfile
do
    while read gamename
    do
        romlist=$(echo "$tagfile" | cut -f1 -d '.')
        cat "${romlist}.txt" | grep "^${gamename};" >> Favorites.txt
    done < "$tagfile"
done < tagfiles
rm tagfiles
cat Favorites.txt | sort -u > tmp_favorites.txt
mv tmp_favorites.txt Favorites.txt
sleep 5
The reading of the data is correct, but the redirection >> does not do anything. Over SSH in PuTTY this command runs:
Code: [Select]
cat "Atari2600.txt" | grep "Centipede (USA)" >> Favorites.txt
but the script does not. Does anyone know why?

5 ##### Themes / Re: RetroMode Theme
« on: February 11, 2019, 05:35:34 AM »
"The theme is nice but too much info"
Another friend who thinks the same as me.

6 ##### General / A new question about the English language
« on: February 11, 2019, 05:33:45 AM »
As I said in my previous question, I am Spanish and I do not speak English well.
So I would like to know which of these phrases is more correct:

Create new favorites gamelist.
Create new list of favorite games.

Or:

Update favorites gamelist.

It is for a menu option that will erase the current list of favorites and replace it with a new one if there is one; if there is not, it simply creates the new favorites list. Thanks.

7 ##### General / Re: A question of language. Only for native english.
« on: January 23, 2019, 12:19:21 PM »
"Any specific reason for reboot? Files are deleted and I think will take effect immediately without rebooting."
It is not necessary, it is true. But after pressing the option it seems to do nothing, and the Pi reboots very fast... If I knew how to send a message to the screen saying that everything is deleted I would do it, but I do not know how to do it.

8 ##### General / Re: A question of language. Only for native english.
« on: January 22, 2019, 10:15:23 AM »
"What about 'Choose this option to set the number of times played and the amount of time played to zero.'? Edit: VVV keil's suggestion is the best, I think. To the point and clear."
I also believe it. Thanks, Keil. It was to execute these scripts:

Lakka (Linux):
Code: [Select]
#!/bin/bash
cd /storage/.attract/stats
find . -name "*.stat" -type f -delete
reboot

Windows:
Code: [Select]
@echo off
cd E:\pi\attract\stats\
del * /S /Q
timeout /t 5 /nobreak

9 ##### General / Re: A question of language. Only for native english.
« on: January 22, 2019, 03:28:44 AM »
It is clear that it is much more accurate... but I do not know if maybe it is a lot of text for a warning... that is why I showed how they would look on screen.

10 ##### Themes / Re: RetroMode Theme
« on: January 22, 2019, 03:01:04 AM »
The main menu I like, but in the rest of the layouts... too much information... impossible to achieve in some systems. And for 4:3, well, neither...

11 ##### General / Re: A question of language. Only for native english.
« on: January 22, 2019, 02:58:11 AM »
"I am certainly no grammar/vocabulary expert but I am an English teacher, so I'll offer some help: 'If you select this option, the counter which indicates the number of times played and the actual amount of time played for all games will be set to zero.' Hope that helps"
But it is not about selecting an option, but about executing it. Could it be: "If you execute this option, the counter which indicates the number of times played and the actual amount of time played for all games will be set to zero."?
Thank you, it seems more correct, although too long for some instructions. I'm not saying I'm not going to use it.

12 ##### General / A question of language. Only for native english.
« on: January 21, 2019, 12:15:22 PM »
Hi, I am Spanish and my English is not very good. Can you tell me if this sentence is correct?: "If you press this option, the counters of times played and the time played of all the games will be set to zero." It is to confirm the reset of [PlayedCount] and [PlayedTime]. Thanks.

13 ##### General / *.stat files (SOLVED)
« on: January 21, 2019, 06:25:14 AM »
Each *.stat file has two numbers inside. The one at the top is the number of times the game has been played or, to be more exact (even if it amounts to the same thing), the number of times the game has been selected in the menu. What I would like to know is what the number below indicates or means. Thanks in advance for the help. I already found out: logically, it was the time played...

14 ##### Themes / Re: Looking for a theme showing snaps but not wheel images
« on: January 14, 2019, 03:12:35 PM »
Almost all the themes that I have done have the option of playlists without wheels... and some are designed not to use wheels... you just have to search here: http://forum.attractmode.org/index.php?topic=481.0

15 ##### Themes / Re: io_pi_refried Theme for pi. Only 4:3 resolutions (WIP)
« on: January 14, 2019, 03:04:14 PM »
"This theme I loved, but what I did not like was that the wheel disappears. Could the wheel stay, with a sound every time you pass a game?"
You can put the sounds yourself in Attract-Mode, and my wheel will not disappear...
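The unanswered Lakka script question in this post list was answered by the poster himself in the first message: the files had Windows line endings. A minimal sketch of that failure mode — a trailing carriage return defeats an anchored `grep` match against `"^${gamename};"` — using throwaway file names, not the poster's actual files:

```shell
# demo.tag simulates a tag file written on Windows: lines end in CR+LF,
# so each game name read from it is really "Centipede (USA)\r".
printf 'Centipede (USA)\r\n' > demo.tag

# Converting the file to Unix line endings (stripping \r) fixes the match:
tr -d '\r' < demo.tag > demo.unix.tag

od -c demo.unix.tag | head -n 1   # inspect: no \r remains in the file
rm demo.tag demo.unix.tag
```

On Lakka, running `tr -d '\r'` (or `dos2unix`, where available) over each `.tag` file before the loop would make the `grep "^${gamename};"` patterns match.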
https://www.nature.com/articles/s41598-021-96335-2?error=cookies_not_supported&code=3a286af8-79b6-45a4-aee5-94d02e8dbd2e
# Bio-mimetic synthesis of catalytically active nano-silver using Bos taurus (A-2) urine

## Abstract

Herein we have synthesized silver nanoparticles (Ag NPs) using the liquid metabolic waste of Bos taurus (A-2 type), cow urine. The various bio-molecules present in cow urine effectively reduce silver (Ag) ions into silver nanoparticles in one step. This bio-inspired electron transfer to the Ag ion, forming the base Ag metal, is fairly prompt and facile. These nanoparticles act as a positive catalyst for various organic transformation reactions. The structural, morphological, and optical properties of the as-synthesized Ag NPs were characterized by X-ray diffraction, ultraviolet–visible spectroscopy, scanning electron microscopy, Fourier transform infrared spectroscopy, and atomic force microscopy. The as-synthesized bio-mimetic Ag NPs show potential activity for several reduction reactions of nitro groups. The Ag NPs were also used for the degradation of hazardous dyes such as Methylene blue and Crystal violet with good degradation rate constants.

## Introduction

Since the dawn of civilization, materials scientists have been actively involved in the fabrication of new materials with desired novel properties1. Various types of physical treatment can also affect the properties of materials; for example, an earthen pot heated beyond a particular temperature becomes porous yet does not dissolve in water2.
Thus the use of innovative materials mirrors the developmental scenario of human civilization, and particular ages are therefore often named after the materials in use, such as the iron age, the plastic age, and now the nano age3,4. Scientists working in the domain of materials science are now mainly interested in developing materials at the nano-scale5,6,7,8,9,10,11,12,13,14. Owing to their exotic properties, nanoparticles find a wide range of applications in diverse areas: sensing devices such as solar cells, electrochromic materials, gas sensors and bio-sensors (e.g. blood glucose sensors15,16 and oximeters17); memristors18; meta-materials19, which have negative refractive indexes; energy storage devices such as supercapacitors based on transition metal oxides; bio-active materials such as anti-microbial and anti-neoplastic agents20,21; heterogeneous catalysts for organic transformation, organic synthesis and dye degradation reactions; and several types of electronic devices. The exotic properties developed in materials at the nano-scale arise from various factors, such as an enlarged surface area, which significantly increases the number of active sites for reaction22, insignificant gravitational force, possible quantum effects, sensitive Coulomb forces of attraction or repulsion, live dangling bonds, constructive random molecular motion, remarkable surface tension, and secondary bonds such as van der Waals attraction. These properties make nano-materials different from their bulk counterparts23,24,25,26,27. The physical and chemical properties of a bulk material are supposed to be constant irrespective of the size and weight of the material under consideration; for example, the melting point of a metal or the refractive index of a liquid has a fixed value. These well-defined physical and chemical properties of bulk materials reveal interesting behavior at the nano-scale.
Indeed, because of the exceptional difference between nano-scale properties and those of the bulk counterparts, many scientists regard the nano regime as a separate state of matter; the properties of a bulk material are, in effect, an average of the properties at the nano regime. Our research group has successfully synthesized several transition metal and metal oxide nanoparticles, such as Cd, CuO and Pd nanoparticles, using Indian cow urine. In the current experimentation, we have successfully synthesized Ag nanoparticles using A-2 type cow urine. Ayurveda is an ancient system of natural and holistic medicine developed particularly in the Indian sub-continent. The Ayurvedic literature describes the use of cow products for medicinal purposes. The liquid metabolic waste of the cow is a constituent of Panchagavya (a combination of cow urine, milk, clarified butter, curd and dung). According to descriptions in the classical Ayurvedic literature, cow urine contains approximately 95% water and 2.5% urea, and the remaining 2.5% is a mixture of different minerals, salts, hormones, and enzymes28. According to the ancient Indian Ayurvedic literature, the liquid metabolic waste of the cow is useful for controlling various ailments, especially chronic conditions such as seizure disorders, skin diseases, hepatic diseases, psoriasis, paralysis, thyroid disorders, constipation, abdominal diseases, renal disorders, and diabetes mellitus, and as an anticonvulsant. Some medical professionals claim that cow urine is an effective anti-neoplastic agent28. Cow urine has been found to have beneficial properties particularly in the areas of agriculture and therapeutics. It has been observed that the urine of the Indian cow is highly effective, while interestingly almost no medicinal properties are present in the urine of crossbred and exotic cows, buffaloes, etc. Recent research has shown that cow urine enhances the immune status of an individual by activating macrophages and augmenting their engulfment power as well as their bactericidal activity.
## Laboratory synthesis of nanoparticles

Nanoparticles can be categorized as (1) natural nanomaterials, (2) incidental nanomaterials and (3) engineered nanomaterials. Nanoparticles possess wonderfully exotic properties. Because of the continuous demand for nanoparticles in various appliances, researchers intensively synthesize materials of various sizes and shapes, which are known as engineered nanomaterials. Owing to its simplicity, comfortable set-up, economy and defect-free product, the bottom-up approach is becoming popular and is now widely adopted. The bottom-up route of synthesis implies that nanostructures are synthesized by stacking atoms onto each other. This gives rise to crystal planes, and the crystal planes further stack onto one another, resulting in the formation of nanostructures. Thus the bottom-up approach can be simply viewed as a synthetic route in which building blocks are added to obtain a nanostructure11,23,29,30,31.

## Materials and methods

### Bio-mimetic synthesis of Ag NPs using Gir cow urine

Ag NPs were synthesized using the liquid metabolic waste of an indigenous, healthy Indian Gir (A-2) cow approximately 7 years of age. The cow was regularly vaccinated by a veterinarian against common livestock diseases such as rinderpest and black quarter. The cow urine was procured, with the agreement of the animal rearer, from a cattle farm belonging to the village Kaneri, District Kolhapur, India. The freshly discharged cow urine was collected in a sterile screw-capped bottle and brought to the laboratory. The liquid metabolic waste of the Gir cow was filtered using filter paper and stored in a suitable container at room temperature. For the experimentation, analytical grade silver nitrate was purchased as the precursor. Here, 100 mL of 0.1 M AgNO3 solution was prepared by dissolving silver nitrate in double-distilled water. Then 15 mL of 0.1% w/v cetyltrimethylammonium bromide (CTAB), a cationic surfactant, was slowly added to the above solution with constant stirring.
A borosilicate glass burette was filled with cow urine, which was added drop-wise to the silver nitrate solution with constant stirring. The reaction mixture was maintained in the temperature range of 25–30 °C. The addition of cow urine results in the formation of a dark blackish colored precipitate of Ag in nano form; when 25 mL of cow urine had been added, a sufficient amount of precipitate had formed. The colloidal solution was continuously heated until complete evaporation of the water content. The as-synthesized Ag NPs were annealed at 495 °C for 1 h. The black-grey colored accumulated solid mass was then separated using a metallic spatula and crushed mechanically into a fine powder. Finally, the as-synthesized Ag NPs were used for further characterization and catalytic reactions5,25,26.

### Possible reaction mechanism

The chemical formula of urea is CO(NH2)2, in which two –NH2 groups are directly attached to a carbonyl group, C=O. At first sight urea should be a base, owing to the lone pair of electrons on each nitrogen atom, but the electronegative nature of the carbonyl group makes it a neutral compound. However, when urea reacts in the presence of the enzyme urease, or at high temperature, it is converted into ammonia by hydrolysis, as shown in Fig. 1. In the first step of the reaction, urea breaks down into ammonia, with an isocyanate ion as byproduct; this reaction is reversible at pH less than 5 and greater than 12. In the second step, the isocyanate is hydrolyzed to produce ammonia, and carbon dioxide is evolved as a byproduct. Urea hydrolysis is faster at 35 °C than at 15 °C, and the effect of pH is only observed between pH 6 and pH 832.

### Spectral characterization

The AgNPs biosynthesized with the cow urine were characterized using XRD (Bruker Ltd.
Germany, Model: D2 Phaser, copper target, λ = 1.54 Å). The optical absorbance of the AgNPs was recorded over the wavelength range 200–700 nm using a UV–visible spectrophotometer (Shimadzu, Model: UV-1800). The morphology and particle size of the as-prepared AgNPs were investigated by scanning electron microscopy (JEOL Ltd. Japan, Model: JSM-6360) and atomic force microscopy (USA, Model: INOVVA 1B3BE), and photoluminescence was measured with a HORIBA Instruments Model: Fluoromax-4. The degree of crystallinity, defects and disorders, and particle size of the AgNPs were determined from the XRD data, and FT-IR spectra were recorded on a JASCO Japan Model: FT/IR-4700. Particle size analysis and zeta potential measurements were carried out with a Horiba SZ-100 nanoparticle analyzer.

## Ramification and discourse

The synthesized nanoparticles were characterized using advanced spectroscopic and microscopic techniques. The electron microscopic analysis reveals the morphologies of the synthesized nanoparticles. The solid state of materials can be divided into two types, crystalline and amorphous; XRD analysis is applicable only to the crystalline form. The phase pattern of the AgNPs was characterized by XRD measurement. The XRD pattern, shown in Fig. 2, reveals the formation of a poly-dispersed crystalline nano-material. Bragg reflections are observed at 2θ values of 38.11°, 44.19°, 64.43°, 77.38° and 81.53°, representing the (111), (200), (220), (311) and (222) planes, respectively, which indicates that the AgNPs are nanocrystals with a face-centred cubic (FCC) structure. The peaks in the XRD pattern confirm that the biosynthesized material is pure AgNPs with a highly crystalline nature. The pattern is consistent and in agreement with JCPDS card No. 00-003-0931. The crystallite size can be calculated using the well-known Debye–Scherrer formula33,34,35.
$$D = \frac{0.9\,\lambda}{\beta \cos\theta}$$ (1)

where λ is the X-ray wavelength, β is the full width at half maximum (FWHM) of the peak in radians, and θ is the Bragg angle. The crystallite size calculated using this formula gives an average Ag NP size of 29.92 nm. There is an ever-increasing demand for Ag nanoparticles in various industries because of the novel physical, chemical and biological properties that develop at the nano level. The specific surface area of a nanoparticle depends upon the interrelationship between particle size and morphology. To explore this relationship, the morphology index (MI) is calculated from the FWHM of the XRD peaks, following an earlier report35. MI is obtained using the equation

$$\mathrm{MI} = \frac{\mathrm{FWHM}_h}{\mathrm{FWHM}_h + \mathrm{FWHM}_p}$$ (2)

where MI is the morphology index, FWHMh is the highest FWHM value obtained from the peaks, and FWHMp is the FWHM of the particular peak. The MI of the experimental AgNPs ranges from 0.50 to 0.685, and the details are presented in Table 1. It correlates with the particle size (range 38.84 to 21.79 nm) and the specific surface area (range 14.71 to 26.22 m2 g−1). From the calculated data it is observed that MI is directly proportional to particle size and inversely proportional to specific surface area, with small deviations. The results are shown in Figs. 3 and 4; the linear fits in the figures indicate the deviations and the relationships between the quantities. The insoluble black colored powder obtained at the end of the reaction was sonicated in a bath sonicator so as to obtain a well-dispersed solution. UV–visible spectroscopic analysis (Fig. 5) shows that maximum absorption occurs at 500 nm, indicating that the synthesized AgNPs efficiently absorb visible light and could therefore act as a photocatalyst for dye degradation in solar light. The red-shift in the λmax value may be due to agglomeration of the nanoparticles. The UV–Vis spectrum confirms that the as-prepared AgNPs respond efficiently to visible light, so they should also perform efficiently under solar energy.
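The Scherrer and MI calculations above can be reproduced numerically. In this sketch the five peak positions and the Cu-Kα wavelength come from the text, while the FWHM values are illustrative assumptions (Table 1 is not reproduced here):

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.154):
    """Debye-Scherrer: D = 0.9*lambda / (beta*cos(theta)), beta in radians."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return 0.9 * wavelength_nm / (beta * math.cos(theta))

def morphology_index(fwhm_highest, fwhm_peak):
    """MI = FWHM_h / (FWHM_h + FWHM_p)."""
    return fwhm_highest / (fwhm_highest + fwhm_peak)

# Peak positions (deg 2-theta) from the XRD pattern; the FWHMs are assumed.
peaks = {38.11: 0.30, 44.19: 0.35, 64.43: 0.40, 77.38: 0.45, 81.53: 0.50}
fwhm_h = max(peaks.values())
for pos, fwhm in sorted(peaks.items()):
    print(f"2theta = {pos:6.2f}  D = {scherrer_size_nm(pos, fwhm):5.1f} nm  "
          f"MI = {morphology_index(fwhm_h, fwhm):.3f}")
```

Note that MI evaluates to exactly 0.5 for the broadest peak itself, matching the lower bound of the 0.50–0.685 range reported above.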
Scanning electron microscopy is a powerful tool for sample imaging with nanoscale magnification and resolution, visualizing very small topographic details on the surface of an object. Synthesis of AgNPs by a biological route (either plant extract or animal waste) typically yields relatively spherical particles. Herein, the morphology and particle size of the as-prepared NPs were investigated by scanning electron microscopy (SEM) using a MIRA3 TESCAN SEM machine, and it is observed that the Ag nanoparticles form unique architectures with a relatively spherical shape (Fig. 6a). The mean diameter of the AgNPs is approximately 90–200 nm, as shown in Fig. 6b. From the structural point of view, the SEM images reveal that homogeneous spherical NPs have been formed. Since photocatalysis is a surface phenomenon, the homogeneous spherical shape of the NPs could help to enhance photocatalytic activities such as dye degradation and organic transformation reactions. The atomic force microscope (AFM) is the most commonly used form of scanning probe microscope. Atomic force microscopy is a three-dimensional topographic technique with high atomic resolution that measures surface roughness. The attractive/repulsive forces between the sample surface and a sharp probe were detected by the AFM, and the force was measured through a laser photodiode system that detects the difference in voltages at the photodetector output. The surface features of the AgNPs characterized by AFM are shown in Fig. 7. In this AFM study, the morphology of the synthesized silver nanoparticles was found to be highly spherical. The surface topography of the AgNP thin films was observed in two-dimensional and three-dimensional views at the nanoscale36,37. It is clearly depicted that the as-synthesized AgNPs are spherical with good, homogeneous thickness. Photoluminescence is the phenomenon of emission of light from any type of matter when a photon is incident on it.
Photoluminescence (PL) is a tool to confirm structural-defect-related properties. Figure 8 shows the PL spectra of the Ag nanoparticles. The PL spectrum consists of two peaks, at 405 nm and 436 nm: the aqueous Ag NP solution shows a peak at 405 nm with an excitation wavelength of 239 nm, and a further peak at 436 nm with an excitation wavelength of 314 nm38,39,40. FT-IR spectra show absorption bands that enable determination of the various functional groups present in a molecule; different functional groups correspond to different frequencies and hence different wave numbers. In Fig. 9, a very strong and broad absorption band is found at 3440 cm−1, which indicates that the synthesized samples can have symmetric stretching vibrations due to the presence of a primary amine. The other bands observed at 1608 and 1124 cm−1 may be due to aromatic stretching of organic compounds present in the synthesized nanomaterials. FT-IR spectra are a fingerprint of the functional groups present in the material under study. Figure 9 also shows the FT-IR spectrum of Bos taurus (A-2 type) cow urine; the observed major peaks at 3054 cm−1, 1617 cm−1 and 1362 cm−1 may be due to aromatic C–H stretching, aromatic C–C stretching, and C=O (ketonic group) stretching, respectively. Further, the peaks at 3440 cm−1, 1608 cm−1 and 1124 cm−1 show a red shift, which may be due to interaction of the Ag nanoparticles with these functional groups, viz. C–H/AgNP and C–C/AgNP interactions. The electrokinetic potential in Fig. 10 signifies the stability of the synthesized nanoparticles. The hydrodynamic diameter of the synthesized nanoparticles can be determined using dynamic light scattering (DLS), also known as photon correlation spectroscopy (PCS). The bio-inspired synthesis used here produces poly-dispersed nanoparticles. The hydrodynamic diameter was measured using a Malvern Instruments Ltd. analyzer. The DLS result is represented in Fig.
11, which reveals the hydrodynamic diameter of the synthesized AgNPs. When light passes through the colloidal solution, it strikes the small particles and scatters in all possible directions (Rayleigh scattering). A fluctuation in the intensity of the light is observed even if the incident light is monochromatic or a laser; this fluctuation is due to the tiny particles in solution continuously undergoing Brownian motion. From this Brownian motion, the dimension of a particle can be determined. DLS assumes that all particles are spherical. The DLS results are complemented by the images provided by AFM and SEM. DLS showed an average particle size of 296.2 nm for the as-synthesized AgNPs. Because the particles are produced biologically, their size cannot be tightly controlled and falls in a high range. The electrokinetic potential of the sample reveals the dispersion stability of the colloidal solution: higher values of electrokinetic potential predict a more stable dispersion. In fact, the electrokinetic potential analyzer is an important tool for understanding the state of the nanoparticle surface and predicting the long-term stability of the nanoparticles. The zeta potential is an indication of the surface potential, and so determines the magnitude of the electric double layer repulsion. Normally, a value of 40–60 mV indicates good stability of nanoparticles. The zeta potential shows the stability, and also the mobility, of the synthesized nanoparticles. Zeta potential analysis of the synthesized Ag nanoparticles shows incipient instability, i.e. −19.02 mV. From the zeta potential results, it is observed that the outer layer of the Ag nanoparticles should possess a negative charge. The figure shows three cycles of readings of the stability and mobility of the nanoparticles.
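DLS infers size from Brownian motion through the Stokes–Einstein relation, D = kBT/(3πη dH). A sketch inverting that relation for the 296.2 nm hydrodynamic diameter reported above; the dispersion medium (water) and the 25 °C measurement temperature are assumptions, not stated in the text:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.15           # assumed measurement temperature: 25 degC, in K
ETA = 8.9e-4         # dynamic viscosity of water at 25 degC, Pa*s

def diffusion_coefficient(d_h_m):
    """Stokes-Einstein translational diffusion of a sphere of diameter d_H (m)."""
    return K_B * T / (3.0 * math.pi * ETA * d_h_m)

d_h = 296.2e-9  # hydrodynamic diameter from the DLS result above, in metres
print(f"D = {diffusion_coefficient(d_h):.2e} m^2/s")  # on the order of 1e-12
```

The instrument measures D from the intensity fluctuations and reports dH by the inverse of this computation; larger particles diffuse more slowly and so fluctuate more slowly.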
A large positive or a large negative value of zeta potential is required for a stable dispersion.

## Ag nanoparticles as nano-catalyst

Reduction is an important and fundamental organic transformation in chemical synthesis and industrial chemistry, and the description of a catalyst these days can simply and ideally start with nanoparticles. In the current experimentation, we have studied several organic transformation reactions using Ag nanoparticles as a nano-catalyst. Here we have converted the –NO2 group into the –NH2 group using sodium borohydride as the reducing agent. By itself, sodium borohydride reduces only carbonyl groups, i.e. aldehydic (–CHO) or ketonic (–C=O) groups; however, our experimentation reveals that sodium borohydride can successfully reduce the –NO2 functional group to an –NH2 group in the presence of Ag nanoparticles. The progress of each reaction was monitored spectrophotometrically23,27,29,30,31,41. The details of the studied reactions are given below.

### Nano-Ag catalyzed conversion of 4-nitrophenol into 4-aminophenol

For the current experimentation, analytical grade 4-nitrophenol (C6H5NO3) and sodium borohydride (NaBH4) were purchased from Sigma Aldrich. About 1 mL of ice-cold 0.05 M NaBH4 solution was taken in a quartz cuvette. Then about 1.5 mL of 0.1 mM 4-nitrophenol solution was slowly added to the above solution. To the reaction mixture, a 200 μL water suspension (0.1 mg mL−1) of Ag nanoparticles was added, and we found that the conversion of 4-nitrophenol into 4-aminophenol takes place, which was confirmed spectrophotometrically. The absorption peak of the –NO2 functional group at a wavelength of 400 nm decreases with the progress of time, whereas a new absorption at 300 nm, attributable to the –NH2 group, makes its appearance.
This indicates that complete conversion of 4-nitrophenol to 4-aminophenol takes place within 390 s; during this time about 95% of the reactant is converted into product. A plot of ln(C0/Ct) versus time gives a straight line passing through the origin. The progress of the reaction is shown in Fig. 12. The conversion of 4-nitrophenol to 4-aminophenol follows first-order kinetics with a rate constant (k) of 0.30476 min−1. The reaction is shown in Fig. 13.

### Nano-Ag catalyzed conversion of 2-nitroaniline to 2-aminoaniline

Analytical grade 2-nitroaniline (C6H6N2O2) and sodium borohydride (NaBH4) were procured from Sigma Aldrich. About 1.5 mL of ice-cold 0.05 M NaBH4 solution was taken in a quartz cuvette. Then about 1 mL of 0.1 mM 2-nitroaniline solution was added drop-wise to the above solution. To this reaction mixture, a 200 μL water suspension (0.1 mg mL−1) of Ag nanoparticles was added. We confirmed the conversion of 2-nitroaniline to 2-aminoaniline spectrophotometrically, from the absorption peaks of the reactant and product. The absorption peak of the –NO2 functional group at a wavelength of 413 nm decreases with the progress of time. This indicates that complete conversion of 2-nitroaniline to 2-aminoaniline takes place within 750 s; during this time about 95% of the reactant is converted into product. The progress of the reaction, with a rate constant (k) of 0.17724 min−1, is shown in Fig. 14. The reaction is given in Fig. 15.

### Nano-Ag catalyzed conversion of 3-nitroaniline to 3-aminoaniline

Analytical grade 3-nitroaniline (C6H6N2O2) and sodium borohydride (NaBH4) were procured from Sigma Aldrich. About 1 mL of ice-cold 0.05 M NaBH4 solution was taken in a quartz cuvette. Then about 1.5 mL of 0.1 mM 3-nitroaniline solution was added drop-wise. To this reaction mixture, a 200 μL water suspension (0.1 mg mL−1) of Ag nanoparticles was added.
The absorption peak responsible for the –NO2 group, at 363 nm, decreases with time, indicating that complete conversion of 3-nitroaniline to 3-aminoaniline takes place within 780 s; during this time about 75% of the reactant is converted into product. A plot of concentration versus time gives a straight line passing through the origin. The progress of the reaction, which follows first-order kinetics with a rate constant (k) of 0.09737 min−1, is shown in Fig. 16, and the reaction itself in Fig. 17. ### Nano-Ag catalyzed conversion of 4-nitroaniline to 4-aminoaniline Analytical grade 4-nitroaniline (C6H6N2O2) and sodium borohydride (NaBH4) were procured. About 1.5 mL of an ice-cold 0.05 M NaBH4 solution was taken in a quartz cuvette, and about 0.5 mL of 0.1 mM 4-nitroaniline solution was added drop-wise. To this reaction mixture a 200 μL water suspension (0.1 mg mL−1) of Ag nanoparticles was added. The absorption peak responsible for the –NO2 group, at 382 nm, decreases with time, while a new absorption at 280 nm, attributable to the –NH2 group, appears. This indicates that complete conversion of 4-nitroaniline to 4-aminoaniline takes place within 690 s; during this time about 98% of the reactant is converted into product. The progress of the reaction is shown in Fig. 18 and the reaction itself in Fig. 19. This is an example of a first-order reaction with a rate constant (k) of 0.27126 min−1. Dyes are used to impart color to materials, and textile industries heavily use natural and synthetic dyes to color fabric. The excess dye that is discharged constitutes a major source of water pollution. Dyes are high-molecular-weight organic compounds, and such compounds are not easily degraded through natural processes.
They therefore contaminate surface water reservoirs, soil and the wider environment for long periods, affecting aquatic flora and fauna. Researchers have accordingly attempted to degrade these high-molecular-weight organic compounds into simple molecules. Herein Ag nanoparticles act as a nano-catalyst for the degradation of methylene blue and crystal violet under UV light. These two dyes were selected for our study because they show different colors in the oxidized and reduced forms and because their absorption maxima do not overlap with the SPR band of AgNPs33,42,43,44. #### Nano-Ag catalyzed degradation of methylene blue (MB) A methylene blue solution was prepared by dissolving 10 ppm of methylene blue (methylthioninium chloride, C16H18ClN3S) in 80 mL of double-distilled water. To this solution 100 mg of the synthesized Ag nanoparticles, acting as a heterogeneous catalyst, was added. The beaker was wrapped with aluminum foil to avoid exposure to light, and the solution was stirred with a magnetic needle, allowing the dyestuff to adsorb on the surface of the nanoparticles. The reaction system was kept in darkness for 2 h; it was then exposed to light and the absorption was measured at definite time intervals. The spectrophotometric analysis was carried out both under ultraviolet radiation and under visible light so as to investigate the photocatalytic efficacy of the AgNPs. Methylene blue was used as the test contaminant since it has been extensively used as an indicator of photocatalytic activity owing to its absorption peaks in the visible range. The UV–Vis analysis reveals that about 80% degradation of the dye takes place in 180 min. This is an example of first-order kinetics with a rate constant of 0.00925 min−1. The photocatalytic degradation of the dye was monitored spectrophotometrically and is shown in Fig. 20.
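The first-order rate constants quoted in this section can be recovered from time-series absorbance data by a linear fit of ln(A0/At) against t. The sketch below illustrates the calculation with made-up readings (the numbers are not the paper's data; the decay constant is merely chosen to match the methylene blue value of 0.00925 min−1):

```python
import math

# Illustrative absorbance readings (synthetic, not the paper's data),
# generated from pseudo-first-order decay A(t) = A0 * exp(-k*t)
# with k set to 0.00925 min^-1 for the demonstration.
t = [0.0, 30.0, 60.0, 90.0, 120.0, 150.0, 180.0]   # minutes
A = [1.0 * math.exp(-0.00925 * ti) for ti in t]

# First-order kinetics: ln(A0/At) = k*t, so k is the zero-intercept
# least-squares slope of ln(A0/At) versus t.
y = [math.log(A[0] / Ai) for Ai in A]
k = sum(ti * yi for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)

# Percent degradation over the full run, from the first and last readings.
degradation_pct = 100.0 * (1.0 - A[-1] / A[0])
print(f"k = {k:.5f} min^-1, degradation after 180 min = {degradation_pct:.0f}%")
```

With this rate constant the fraction degraded after 180 min comes out near 81%, consistent with the "about 80% in 180 min" reported for methylene blue.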
#### Nano-Ag catalyzed degradation of crystal violet (CV) Crystal violet, or gentian violet, is a triarylmethane dye. In fact, crystal violet is not only a dye but a multi-applicative compound: it is used in bacterial staining and as an antibacterial, antifungal and anthelmintic medicine, especially as a poultry medicine by veterinary doctors. When dissolved in water the dye imparts a blue-violet shade. A crystal violet solution was prepared by dissolving 10 ppm of crystal violet (tris(4-(dimethylamino)phenyl)methylium chloride, C25H30N3Cl) in 80 mL of double-distilled water, and 100 mg of Ag nanoparticles was dispersed in it. The beaker was completely wrapped with aluminum foil, rotated on a magnetic stirrer, and kept in darkness for 2 h. The evaluation was carried out both under ultraviolet radiation and under visible light so as to investigate the efficiency of the nanoparticles. The UV–Vis analysis reveals that about 75% degradation of the dye takes place in 120 min. This is an example of first-order kinetics with a rate constant of 0.02809 min−1. The photocatalytic degradation of the dye was monitored spectrophotometrically and is shown in Fig. 21. ### Ethical statement We the authors hereby declare that all ethical aspects have been taken into consideration while performing the experiments. ## Conclusion Herein, we have successfully synthesized Ag nanoparticles using Indian cow (A-2) urine. As cow urine is used as the reducing agent, this method of synthesis is very cost-effective and environmentally benign; in fact, the Ag nanoparticles are synthesized from a waste product alone. The bio-synthesized Ag nanoparticles are potent catalysts for organic transformation reactions, whose progress was monitored spectrophotometrically. The synthesized nanoparticles have also been successfully used as a photocatalyst for the degradation of hazardous organic dyes such as methylene blue and crystal violet. ## References
1. Types of Metal – Pure Metals, Alloys & Their Applications. Fractory Engineering Blog. https://fractory.com/types-of-metal/ (2019). 2. Mixture. CK-12 Foundation. https://flexbooks.ck12.org/cbook/ck-12-chemistry-flexbook-2.0/section/2.7/primary/lesson/mixtures-ms-ps. 3. Che Chon, M. Nanotechnology and nanomaterials in Korea. J. Struct. Chem. 45, S6–S6 (2004). 4. Ashby, M. F. Materials Selection in Mechanical Design Vol. 3 (Elsevier, 1992). 5. Prasad, S. R. et al. Bio-inspired synthesis of catalytically and biologically active palladium nanoparticles using Bos taurus urine. SN Appl. Sci. https://doi.org/10.1007/s42452-020-2382-3 (2020). 6. Gavade, N. L., Kadam, A. N., Suwarnkar, M. B., Ghodake, V. P. & Garadkar, K. M. Biogenic synthesis of multi-applicative silver nanoparticles by using Ziziphus. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 136, 953–960 (2014). 7. Hankare, P. P., Sanadi, K. R., Garadkar, K. M., Delekar, S. D. & Mulla, I. S. Effect of cobalt doping on structural and thermoelectrical power of zinc aluminium chromites synthesised by sol–gel auto-combustion method. Mater. Lett. 110, 42–44 (2013). 8. Mubarakali, D., Thajuddin, N., Jeganathan, K. & Gunasekaran, M. Plant extract mediated synthesis of silver and gold nanoparticles and its antibacterial activity against clinically isolated pathogens. Colloids Surfaces B Biointerfaces 85, 360–365 (2011). 9. Santoshi, A. Green synthesis, characterization and catalytic activity of palladium nanoparticles by xanthan gum. Appl. Nanosci. 5, 315–320 (2015). 10. Amornkitbamrung, L., Pienpinijtham, P., Thammacharoen, C. & Ekgasit, S. Palladium nanoparticles synthesized by reducing species generated during a successive acidic/alkaline treatment of sucrose. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 122, 186–192 (2014). 11. Brand-Williams, W., Cuvelier, M. E. & Berset, C. Use of a free radical method to evaluate antioxidant activity.
LWT Food Sci. Technol. 30, 25–30 (1995). 12. Thakkar, K. N., Mhatre, S. S. & Parikh, R. Y. Biological synthesis of metallic nanoparticles. Nanomed. Nanotechnol. Biol. Med. 6, 257–262 (2009). 13. Korake, P. V., Dhabbe, R. S., Kadam, A. N., Gaikwad, Y. B. & Garadkar, K. M. Highly active lanthanum doped ZnO nanorods for photodegradation of metasystox. J. Photochem. Photobiol. B Biol. 130, 11–19 (2013). 14. Prabhu, S. & Poulose, E. K. Silver nanoparticles: Mechanism of antimicrobial action, synthesis, medical applications, and toxicity effects. Int. Nano Lett. 2, 1–10 (2012). 15. Cai, Y. et al. Histidine-triggered turning-on of gold/copper nanocluster fluorescence for the sensitive and selective detection of histidine. Chem. Commun. 56, 11637–11640 (2020). 16. Zhang, Y., Wang, G., Yang, L., Wang, F. & Liu, A. Recent advances in gold nanostructures based biosensing and bioimaging. Coord. Chem. Rev. 370, 1–21 (2018). 17. Rogoff, R. Blood sugar level sensing and monitoring transducer. U.S. Patent, 1–6 (1985). 18. Dongale, T. D. et al. Development of Ag/WO3/ITO thin film memristor using spray pyrolysis method. Electron. Mater. Lett. 11, 944–948 (2015). 19. Monticone, F. & Alù, A. Metamaterials and plasmonics: From nanoparticles to nanoantenna arrays, metasurfaces, and metamaterials. Chin. Phys. B 23 (2014). 20. Han, L., Li, C., Zhang, T., Lang, Q. & Liu, A. Au@Ag heterogeneous nanorods as nanozyme interfaces with peroxidase-like activity and their application for one-pot analysis of glucose at nearly neutral pH. ACS Appl. Mater. Interfaces 7, 14463–14470 (2015). 21. Yan, L. et al. Gold nanoplates with superb photothermal efficiency and peroxidase-like activity for rapid and synergistic antibacterial therapy. Chem. Commun. 57, 1133–1136 (2021). 22. Liu, A., Wang, G., Wang, F. & Zhang, Y. Gold nanostructures with near-infrared plasmonic resonance: Synthesis and surface functionalization. Coord. Chem. Rev. 336, 28–42 (2017).
23. Remita, H., Saha, A. & Sharma, G. K. Investigation into the catalytic activity of porous platinum nanostructures. Langmuir 29, 11431–11439 (2013). 24. Minocheherhomji, F. P. & Vyas, B. M. Study of the antimicrobial activity of cow urine and medicinal plant extracts on pathogenic human microbial strains. Int. J. Adv. Pharm. Biol. Chem. 3, 836–840 (2014). 25. Kora, A. J. & Rastogi, L. Green synthesis of palladium nanoparticles using gum ghatti (Anogeissus latifolia) and its application as an antioxidant and catalyst. Arab. J. Chem. 11, 1097–1106 (2018). 26. Sathishkumar, M. et al. Phyto-crystallization of palladium through reduction process using Cinnamom zeylanicum bark extract. J. Hazard. Mater. 171, 400–404 (2009). 27. Deka, P., Deka, R. C. & Bharali, P. In situ generated copper nanoparticle catalyzed reduction of 4-nitrophenol. New J. Chem. 38, 1789–1793 (2014). 28. Minocheherhomji, F. P. A Study of Antimicrobial Effect of Cow Urine on Pathogenic Human Microbial Strains, and the Comparative Study of the Combined Effect of Cow Urine with Medicinal Herbs, to Evaluate their Potential (Shri Jagdishprasad Jhabarmal Tibrewala University, 2013). 29. Sahiner, N., Karakoyun, N., Alpaslan, D. & Aktas, N. Biochar-embedded soft hydrogel and their use in Ag nanoparticle preparation and reduction of 4-nitrophenol. Int. J. Polym. Mater. Polym. Biomater. 62, 590–595 (2013). 30. Aditya, T., Jana, J., Singh, N. K., Pal, A. & Pal, T. Remarkable facet selective reduction of 4-nitrophenol by morphologically tailored (111) faceted Cu2O nanocatalyst. ACS Omega 2, 1968–1984 (2017). 31. Divband, B., Khatamian, M., Eslamian, G. R. K. & Darbandi, M. Synthesis of Ag/ZnO nanostructures by different methods and investigation of their photocatalytic efficiency for 4-nitrophenol degradation. Appl. Surf. Sci. 284, 80–86 (2013). 32. Moraes, L. E., Burgos, S. A., DePeters, E. J., Zhang, R. & Fadel, J. G.
Short communication: Urea hydrolysis in dairy cattle manure under different temperature, urea, and pH conditions. J. Dairy Sci. 100, 2388–2394 (2017). 33. Suryavanshi, R. D., Mohite, S. V., Shaikh, S. K., Thorat, J. B. & Rajpure, K. Y. Spray deposited Fe2O3 photoelectrode for degradation of benzoic acid and methyl blue dye under solar radiation. J. Mater. Sci. Mater. Electron. 29, 20875–20884 (2018). 34. Vinila, V. S. et al. XRD studies on nano crystalline ceramic superconductor PbSrCaCuO at different treating temperatures. Cryst. Struct. Theory Appl. 03, 1–9 (2014). 35. Theivasanthi, T. & Alagar, M. Titanium dioxide (TiO2) nanoparticles XRD analyses: An insight. arXiv 2013, 1307–1316. 36. Bakhtiari-Sardari, A. et al. Comparative evaluation of silver nanoparticles biosynthesis by two cold-tolerant Streptomyces strains and their biological activities. Biotechnol. Lett. 42, 1985–1999 (2020). 37. Vijayan, R., Joseph, S. & Mathew, B. Green synthesis of silver nanoparticles using Nervalia zeylanica leaf extract and evaluation of their antioxidant, catalytic, and antimicrobial potentials. Part. Sci. Technol. 37, 805–815 (2019). 38. Parang, Z. et al. Fluorescence emission spectra of silver and silver/cobalt nanoparticles. Sci. Iran. 19, 943–947 (2012). 39. Liu, C., Yang, X., Yuan, H., Zhou, Z. & Xiao, D. Preparation of silver nanoparticle and its application to the determination of ct-DNA. Sensors 7, 708–718 (2007). 40. Sang, S. et al. Facile synthesis of AgNPs on reduced graphene oxide for highly sensitive simultaneous detection of heavy metal ions. RSC Adv. 7, 21618–21624 (2017). 41. Baruah, B., Gabriel, G. J., Akbashev, M. J. & Booher, M. E. Facile synthesis of silver nanoparticles stabilized by cationic polynorbornenes and their catalytic activity in 4-nitrophenol reduction. Langmuir 29, 4225–4234 (2013). 42. Joseph, S. & Mathew, B.
Microwave-assisted green synthesis of silver nanoparticles and the study on catalytic activity in the degradation of dyes. J. Mol. Liq. 204, 184–191 (2015). 43. Desai, M. P., Sangaokar, G. M. & Pawar, K. D. Kokum fruit mediated biogenic gold nanoparticles with photoluminescent, photocatalytic and antioxidant activities. Process Biochem. 70, 188–197 (2018). 44. Liu, Y. et al. Self-assembled AgNP-containing nanocomposites constructed by electrospinning as efficient dye photocatalyst materials for wastewater treatment. Nanomaterials 8, 1–14 (2018). ## Acknowledgements The authors pay sincere tribute to the late Ms. Deepika Rai Dhirendra Prasad, who suddenly passed away. The authors are thankful to Dr. K. K. Pawar, Mr. Ruturaj Patil and Miss Aasiya Jamadar for their keen interest and sincere help. ## Funding The authors hereby declare that this research work was not financially supported by any organization. ## Author information ### Contributions The experimental work was designed, planned and carried out by P.D.S., R.R.M. and N.R.P. Manuscript writing was accomplished by P.D.S., R.R.M., G.S.K. and N.R.P. Experimental results and data were examined by M.S.N., K.K.S., P.S.P., G.S.K. and N.R.P. Revision of the manuscript was done by G.S.K. and N.R.P. ## Ethics declarations ### Competing interests The authors declare no competing interests. ### Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Rights and permissions Reprints and Permissions. Sarvalkar, P.D., Mandavkar, R.R., Nimbalkar, M.S. et al. Bio-mimetic synthesis of catalytically active nano-silver using Bos taurus (A-2) urine. Sci Rep 11, 16934 (2021). https://doi.org/10.1038/s41598-021-96335-2
https://icml.cc/virtual/2021/spotlight/8618
## On Reinforcement Learning with Adversarial Corruption and Its Application to Block MDP ### Tianhao Wu · Yunchang Yang · Simon Du · Liwei Wang Wed 21 Jul 6:40 a.m. — 6:45 a.m. PDT We study reinforcement learning (RL) in episodic tabular MDPs with adversarial corruptions, where some episodes can be adversarially corrupted. When the total number of corrupted episodes is known, we propose an algorithm, Corruption Robust Monotonic Value Propagation (CR-MVP), which achieves a regret bound of $\tilde{O}\left(\left(\sqrt{SAK}+S^2A+CSA\right)\mathrm{polylog}(H)\right)$, where $S$ is the number of states, $A$ is the number of actions, $H$ is the planning horizon, $K$ is the number of episodes, and $C$ is the corruption level. We also provide a corresponding lower bound, which indicates that our upper bound is tight. Finally, as an application, we study RL with rich observations in the block MDP model. We provide the first algorithm that achieves a $\sqrt{K}$-type regret in this setting and is computationally efficient.
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.jsl/1140641167
### An alternative semantics for quantified relevant logic Robert Goldblatt and Edwin D. Mares Source: J. Symbolic Logic, Volume 71, Issue 1 (2006), 163–187. #### Abstract The quantified relevant logic RQ is given a new semantics in which a formula ∀xA is true when there is some true proposition that implies all x-instantiations of A. Formulae are modelled as functions from variable-assignments to propositions, where a proposition is a set of worlds in a relevant model structure. A completeness proof is given for a basic quantificational system QR from which RQ is obtained by adding the axiom EC of 'extensional confinement': ∀x(A∨B)→(A∨∀xB), with x not free in A. Validity of EC requires an additional model condition involving the boolean difference of propositions. A QR-model falsifying EC is constructed by forming the disjoint union of two natural arithmetical structures in which negation is interpreted by the minus operation. Permanent link to this document: http://projecteuclid.org/euclid.jsl/1140641167 Digital Object Identifier: doi:10.2178/jsl/1140641167 Mathematical Reviews number (MathSciNet): MR2210060 Zentralblatt MATH identifier: 05038892
https://math.stackexchange.com/questions/2370557/decomposition-into-primes-in-peano-arithmetic
# Decomposition into primes in Peano arithmetic. The language of first-order Peano arithmetic seems to me rather limited. As far as I am aware, you have only the symbols $S, 0, +, \times, =$. Now the theorem of unique factorization into primes states that for every natural number $n$, there exists a unique finite non-decreasing sequence of primes $(p_k)_{k \leq m}$ such that $n=p_0 \times \cdots \times p_m$, but since this language has no inherent notion of sequence, I don't know how this theorem can be written in this language. Is there a simple way of working around this problem? I'd like to point out that I am not very familiar with logic and I am mostly self-taught. • I always suspected this was why Arnold Ross said that the fundamental theorem of arithmetic should be: "If $a\mid bc$ and $\gcd(a,b)=1$ then $a\mid c$" rather than unique factorization. – Thomas Andrews Jul 24 '17 at 21:40 • @Keen Thanks for asking this question! I have been wondering about this for a long time myself. – Bram28 Jul 24 '17 at 21:50 • @ThomasAndrews That one is certainly provable in PA! – Bram28 Jul 24 '17 at 21:51 While PA doesn't have a built-in notion of finite sequences, we can still talk about finite sequences in PA. (And this means that talking about e.g. prime factorization is easily done in PA.) This is easy to see if we consider PA augmented by exponentiation: then we can code any sequence $\langle a_1, ..., a_n\rangle$ by the number $2^{a_1+1}3^{a_2+1}...p_n^{a_n+1}$ (think about why we need the "$+1$"). Basic facts about sequences can then be expressed and handled appropriately. In PA itself, this is a bit trickier, but can still be done using a clever application of the Chinese remainder theorem, discovered by Godel. • So this has always been a question for me as well ...
in order to prove something like the FTA, we need to talk about finite sequences, but saying that I can encode sequences using some prime number encoding assumes the FTA, and saying that I can code sequences using Godel's beta functions is assuming something about sequences. I mean, how can I prove the Chinese Remainder Theorem in PA? – Bram28 Jul 24 '17 at 21:44 • @Bram28: You use FTA as a meta theorem. You know it is true in the natural numbers, so you can use it. There are other methods that one can use as well to code sequences without resorting to FTA. – Asaf Karagila Jul 24 '17 at 21:52 • @Bram28 That's not really accurate. What's going on is that a statement $\varphi$ can be often be expressed in the language of PA, even if it's not a priori in the language of PA - but the claim that this statement has been successfully expressed must be made and proved in a larger language/theory, in particular one powerful enough to express $\varphi$ itself. So the task is twofold: (i) express the statement $\varphi$ in the language of PA; that is, find a sentence $\varphi'$ in the language of PA which is true iff $\varphi$ is. (ii) Prove $\varphi'$ in PA. (continued) – Noah Schweber Jul 24 '17 at 23:12 • Task (ii) is entirely inside PA. For task (i), though, the proof of $\varphi\iff\varphi'$ has to take place in a system powerful enough to express $\varphi\iff\varphi'$, so in particular one powerful enough to express $\varphi$. This sort of coding, though, is also what lets us "do mathematics inside set theory" - we are always looking at making definitions inside a theory and working (inside that theory) with those definitions, and arguing externally that they capture what we want them to. And, in fact, this is no different from how metatheory is deployed all throughout logic. 
(continued) – Noah Schweber Jul 24 '17 at 23:16 • When we prove completeness, or soundness, or incompleteness, or ..., we prove these theorems inside some formal theory, which is implicit but may not be made explicit. Similarly, when we say that a statement is expressible inside PA via appropriate internal definitions, we are making this claim and proving it inside some formal theory, which is implicit but may not be made explicit. So if you're comfortable with any of logic in the first place you should be comfortable with coding into PA (and other theories). And we sometimes do study what theory is necessary to prove an encoding "works." – Noah Schweber Jul 24 '17 at 23:21 Yes. You can easily code the notion of a finite sequence into the natural numbers with the language of arithmetic. There are many ways, for example Godel's $\beta$ function, or using prime powers. But the idea is that you can uniquely code every finite sequence of natural numbers as another natural number. And you can do that in a recursive way (even primitive recursive), so that the decoding process for the length and coordinates is also recursive (and even primitive recursive). Then prime decomposition becomes easy to state. • I'd mention finite sequences of integers are just sequences of bytes exactly as natural numbers. – reuns Jul 24 '17 at 23:23
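The prime-power coding described in the answers can be sketched in a few lines of Python (the function names and the demo sequence are mine, for illustration only):

```python
def primes():
    """Generate primes 2, 3, 5, ... by trial division (fine for small demos)."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(seq):
    """Code <a_1,...,a_n> as 2**(a_1+1) * 3**(a_2+1) * ... * p_n**(a_n+1).
    The "+1" distinguishes a genuine 0 entry from an absent coordinate."""
    code = 1
    for a, p in zip(seq, primes()):
        code *= p ** (a + 1)
    return code

def decode(code):
    """Recover a sequence produced by encode() by reading off, for each
    successive prime, its exponent in the code (minus the padding 1)."""
    seq = []
    for p in primes():
        if code % p:          # first prime not dividing the code: done
            break
        e = 0
        while code % p == 0:
            code //= p
            e += 1
        seq.append(e - 1)
    return seq

print(encode([3, 0, 2]))   # 2**4 * 3**1 * 5**3 = 6000
print(decode(6000))        # [3, 0, 2]
```

This is exactly the coding from the accepted answer; proving inside PA that it works is the part that needs more care (and, without exponentiation, Gödel's β-function trick instead).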
http://thepasqualian.com/?m=20091027
### Archive Archive for October 27th, 2009 ## On Utilities Consumption I: Water I've plotted the average consumption, and the two-sigma 95% bounds I calculated using Bessel's correction of the standard deviation for samples, aka the "sample standard deviation": $\sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(x_i - \overline{x} \right)^2 }$.  The correction gives an unbiased variance, even though the resulting standard deviation is still slightly biased low... not that it matters much anyway.
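The formula above can be computed directly; the snippet below uses invented readings, since the post's data isn't reproduced here:

```python
import math

# Made-up monthly water readings (illustrative only).
x = [11.2, 9.8, 10.5, 12.1, 10.9, 11.7, 9.4, 10.8]
N = len(x)
mean = sum(x) / N

# Bessel's correction: divide the squared deviations by N-1, not N,
# so that the variance estimate is unbiased.
s = math.sqrt(sum((xi - mean) ** 2 for xi in x) / (N - 1))

# Two-sigma band, i.e. the approximate 95% bounds from the post.
lower, upper = mean - 2 * s, mean + 2 * s
print(f"mean={mean:.2f}  s={s:.3f}  band=[{lower:.2f}, {upper:.2f}]")
```

Python's standard library computes the same quantity as `statistics.stdev`, which also divides by N-1.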
https://en.wikibooks.org/wiki/Control_Systems/Digital_Systems/Print_version
# Control Systems/Digital Systems/Print version

The Wikibook of Automatic Control Systems and Control Systems Engineering, with Classical and Modern Techniques

# Preface

This book will discuss the topic of Control Systems, which is an interdisciplinary engineering topic. Methods considered here will consist of both "Classical" control methods and "Modern" control methods. Also, discretely sampled systems (digital/computer systems) will be considered in parallel with the more common analog methods. This book will not focus on any single engineering discipline (electrical, mechanical, chemical, etc.), although readers should have a solid foundation in the fundamentals of at least one discipline.

This book will require prior knowledge of linear algebra, integral and differential calculus, and at least some exposure to ordinary differential equations. In addition, prior knowledge of integral transforms, specifically the Laplace and Z transforms, will be very beneficial. Also, prior knowledge of the Fourier Transform will shed more light on certain subjects. Wikibooks with information on the calculus and transform topics required for this book are listed in the prerequisites section below.

Introduction to Control Systems

What are control systems? Why do we study them? How do we identify them? The chapters in this section should answer these questions and more.

# Introduction

## This Wikibook

This book was written at Wikibooks, a free online community where people write open-content textbooks. Any person with internet access is welcome to participate in the creation and improvement of this book. Because this book is continuously evolving, there are no finite "versions" or "editions" of this book. Permanent links to known good versions of the pages may be provided.

## What are Control Systems?

The study and design of automatic Control Systems, a field known as control engineering, has become important in modern technical society.
From devices as simple as a toaster or a toilet, to complex machines like space shuttles and power steering, control engineering is a part of our everyday life. This book introduces the field of control engineering and explores some of the more advanced topics in the field. Note, however, that control engineering is a very large field, and this book serves as a foundation of control engineering and an introduction to selected advanced topics in the field. Topics in this book are added at the discretion of the authors, and represent the available expertise of our contributors.

Control systems are components that are added to other components to increase functionality, or to meet a set of design criteria. For example:

We have a particular electric motor that is supposed to turn at a rate of 40 RPM. To achieve this speed, we must supply 10 Volts to the motor terminals. However, with 10 volts supplied to the motor at rest, it takes 30 seconds for our motor to get up to speed. This is valuable time lost.

This simple example can be complex to both users and designers of the motor system. It may seem obvious that the motor should start at a higher voltage, so that it accelerates faster. Then we can reduce the supply back down to 10 volts once it reaches the ideal speed. This is clearly a simplistic example, but it illustrates an important point: we can add special "Controller units" to preexisting systems to improve performance and meet new system specifications.

Here are some formal definitions of terms used throughout this book:

Control System
A Control System is a device, or a collection of devices, that manages the behavior of other devices. Some devices are not controllable. A control system is an interconnection of components connected or related in such a manner as to command, direct, or regulate itself or another system. A control system is also a conceptual framework for designing systems with capabilities of regulation and/or tracking to give a desired performance.
For this there must be a set of signals measurable to know the performance, another set of signals measurable to influence the evolution of the system in time, and a third set which is not measurable but disturbs the evolution.

Controller
A controller is a control system that manages the behavior of another device or system.

Compensator
A Compensator is a control system that regulates another system, usually by conditioning the input or the output to that system. Compensators are typically employed to correct a single design flaw, with the intention of affecting other aspects of the design in a minimal manner.

There are essentially two methods to approach the problem of designing a new control system: the Classical Approach and the Modern Approach.

## Classical and Modern

Classical and Modern control methodologies are named in a misleading way, because the group of techniques called "Classical" was actually developed later than the techniques labeled "Modern". However, in terms of developing control systems, Modern methods have been used to great effect more recently, while the Classical methods have been gradually falling out of favor. Most recently, it has been shown that Classical and Modern methods can be combined to highlight their respective strengths and weaknesses.

Classical Methods, which this book will consider first, are methods involving the Laplace Transform domain. Physical systems are modeled in the so-called "time domain", where the response of a given system is a function of the various inputs, the previous system values, and time. As time progresses, the state of the system and its response change. However, time-domain models for systems are frequently modeled using high-order differential equations, which can become impossibly difficult for humans to solve and some of which can even become impossible for modern computer systems to solve efficiently.
To counteract this problem, integral transforms, such as the Laplace Transform and the Fourier Transform, can be employed to change an Ordinary Differential Equation (ODE) in the time domain into a regular algebraic polynomial in the transform domain. Once a given system has been converted into the transform domain, it can be manipulated with greater ease and analyzed quickly by humans and computers alike.

Modern Control Methods, instead of changing domains to avoid the complexities of time-domain ODE mathematics, convert the differential equations into a system of lower-order time-domain equations called State Equations, which can then be manipulated using techniques from linear algebra. This book will consider Modern Methods second.

A third distinction that is frequently made in the realm of control systems is to divide analog methods (classical and modern, described above) from digital methods. Digital Control Methods were designed to incorporate the emerging power of computer systems into previous control methodologies. A special transform, known as the Z-Transform, was developed that can adequately describe digital systems, but at the same time can be converted (with some effort) into the Laplace domain. Once in the Laplace domain, the digital system can be manipulated and analyzed in a very similar manner to Classical analog systems. For this reason, this book will not make a hard and fast distinction between Analog and Digital systems, and instead will attempt to study both paradigms in parallel.

## Who is This Book For?

This book is intended to accompany a course of study in undergraduate and graduate engineering. As has been mentioned previously, this book is not focused on any particular discipline within engineering; however, any person who wants to make use of this material should have some basic background in the Laplace transform (if not other transforms), calculus, etc.
The material in this book may be used to accompany several semesters of study, depending on the program of your particular college or university. The study of control systems is generally a topic that is reserved for students in their 3rd or 4th year of a 4-year undergraduate program, because it requires so much previous information. Some of the more advanced topics may not be covered until later in a graduate program.

Many colleges and universities only offer one or two classes specifically about control systems at the undergraduate level. Some universities, however, do offer more than that, depending on how the material is broken up, and how much depth is to be covered. Also, many institutions will offer a handful of graduate-level courses on the subject. This book will attempt to cover the topic of control systems from both a graduate and undergraduate level, with the advanced topics built on the basic topics in a way that is intuitive. As such, students should be able to begin reading this book at any place that seems an appropriate starting point, and should be able to finish reading where further information is no longer needed.

## What are the Prerequisites?

Understanding of the material in this book will require a solid mathematical foundation. This book does not currently explain, nor will it ever try to fully explain, most of the necessary mathematical tools used in this text. For that reason, the reader is expected to have read the following wikibooks, or have background knowledge comparable to them:

Algebra

Calculus
The reader should have a good understanding of differentiation and integration. Partial differentiation, multiple integration, and functions of multiple variables will be used occasionally, but the students are not necessarily required to know those subjects well. These advanced calculus topics could better be treated as a co-requisite instead of a pre-requisite.
Linear Algebra
State-space system representation draws heavily on linear algebra techniques. Students should know how to operate on matrices. Students should understand basic matrix operations (addition, multiplication, determinant, inverse, transpose). Students would also benefit from a prior understanding of Eigenvalues and Eigenvectors, but those subjects are covered in this text.

Ordinary Differential Equations
All linear systems can be described by a linear ordinary differential equation. It is beneficial, therefore, for students to understand these equations. Much of this book describes methods to analyze these equations. Students should know what a differential equation is, and they should also know how to find the general solutions of first and second order ODEs.

Engineering Analysis
This book reinforces many of the advanced mathematical concepts used in the Engineering Analysis book, and we will refer to the relevant sections in the aforementioned text for further information on some subjects. This is essentially a math book, but with a focus on various engineering applications. It relies on a previous knowledge of the other math books in this list.

Signals and Systems
The Signals and Systems book will provide a basis in the field of systems theory, of which control systems is a subset. Readers who have not read the Signals and Systems book will be at a severe disadvantage when reading this book.

## How is this Book Organized?

This book will be organized following a particular progression. First this book will discuss the basics of system theory, and it will offer a brief refresher on integral transforms. Section 2 will contain a brief primer on digital information, for students who are not necessarily familiar with it. This is done so that digital and analog signals can be considered in parallel throughout the rest of the book. Next, this book will introduce the state-space method of system description and control.
After section 3, topics in the book will use state-space and transform methods interchangeably (and occasionally simultaneously). It is important, therefore, that these three chapters be well read and understood before venturing into the later parts of the book.

After the "basic" sections of the book, we will delve into specific methods of analyzing and designing control systems. First we will discuss Laplace-domain stability analysis techniques (Routh-Hurwitz, root-locus), and then frequency methods (Nyquist Criteria, Bode Plots). After the classical methods are discussed, this book will then discuss Modern methods of stability analysis. Finally, a number of advanced topics will be touched upon, depending on the knowledge level of the various contributors.

As the subject matter of this book expands, so too will the prerequisites. For instance, when this book is expanded to cover nonlinear systems, a basic background knowledge of nonlinear mathematics will be required.

### Versions

This wikibook has been expanded to include multiple versions of its text, differentiated by the material covered, and the order in which the material is presented. Each different version is composed of the chapters of this book, included in a different order. This book covers a wide range of information, so if you don't need all the information that this book has to offer, perhaps one of the other versions would be right for you and your educational needs.

Each separate version has a table of contents outlining the different chapters that are included in that version. Also, each separate version comes complete with a printable version, and some even come with PDF versions as well. Take a look at the All Versions Listing Page to find the version of the book that is right for you and your needs.

## Differential Equations Review

Implicit in the study of control systems is the underlying use of differential equations.
Even if they aren't visible on the surface, all of the continuous-time systems that we will be looking at are described in the time domain by ordinary differential equations (ODEs), some of which are relatively high-order. Let's review some differential equation basics. Consider the topic of interest from a bank. The amount of interest accrued on a given principal balance (the amount of money you put into the bank) P, is given by:

${\displaystyle {\frac {dP}{dt}}=rP}$

Where ${\displaystyle {\frac {dP}{dt}}}$ is the interest (rate of change of the principal), and r is the interest rate. Notice in this case that P is a function of time (t), and can be rewritten to reflect that:

${\displaystyle {\frac {dP(t)}{dt}}=rP(t)}$

To solve this basic, first-order equation, we can use a technique called "separation of variables", where we move all instances of the letter P to one side, and all instances of t to the other:

${\displaystyle {\frac {dP(t)}{P(t)}}=r\ dt}$

And integrating both sides gives us:

${\displaystyle \ln |P(t)|=rt+C}$

This is all fine and good, but generally we like to get rid of the logarithm by raising both sides to a power of e:

${\displaystyle P(t)=e^{rt+C}}$

Where we can separate out the constant as such:

${\displaystyle D=e^{C}}$

${\displaystyle P(t)=De^{rt}}$

D is a constant that represents the initial conditions of the system, in this case the starting principal.

Differential equations are particularly difficult to manipulate, especially once we get to higher orders of equations. Luckily, several methods of abstraction have been created that allow us to work with ODEs, but at the same time not have to worry about their complexities. The classical method, as described above, uses the Laplace, Fourier, and Z Transforms to convert ODEs in the time domain into polynomials in a complex domain. These complex polynomials are significantly easier to solve than their ODE counterparts.
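As a quick numerical sanity check of the closed-form solution derived above, the following Python sketch (the interest rate, starting principal, and time horizon are made-up values) integrates dP/dt = rP with a simple forward-Euler step and compares the result against P(t) = De^(rt):

```python
import math

def euler_integrate(r, P0, t_end, steps):
    """Forward-Euler integration of dP/dt = r*P starting from P(0) = P0."""
    dt = t_end / steps
    P = P0
    for _ in range(steps):
        P += r * P * dt  # dP ≈ r*P*dt over one small step
    return P

r, P0, t_end = 0.05, 1000.0, 10.0     # hypothetical 5% rate, $1000 principal, 10 years
numeric = euler_integrate(r, P0, t_end, steps=100_000)
closed_form = P0 * math.exp(r * t_end)  # P(t) = D*e^(rt) with D = P0
print(abs(numeric - closed_form) / closed_form < 1e-3)  # True
```

With a small enough step size, the numeric solution agrees with the closed form to well under 0.1%, confirming that the separation-of-variables derivation is consistent.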
The Modern method instead breaks differential equations into systems of low-order equations, and expresses this system in terms of matrices. It is a common precept in ODE theory that an ODE of order N can be broken down into N equations of order 1.

Readers who are unfamiliar with differential equations might be able to read and understand the material in this book reasonably well. However, all readers are encouraged to read the related sections in Calculus.

## History

The field of control systems started essentially in the ancient world. Early civilizations, notably the Greeks and the Arabs, were heavily preoccupied with the accurate measurement of time, the result of which were several "water clocks" that were designed and implemented. However, there was very little in the way of actual progress made in the field of engineering until the beginning of the renaissance in Europe.

Leonhard Euler (for whom Euler's Formula is named) discovered a powerful integral transform, but Pierre-Simon Laplace used the transform (later called the Laplace Transform) to solve complex problems in probability theory. Joseph Fourier was a court mathematician in France under Napoleon I. He created a special function decomposition called the Fourier Series, which was later generalized into an integral transform and named in his honor (the Fourier Transform).

Pierre-Simon Laplace (1749-1827)

Joseph Fourier (1768-1840)

The "golden age" of control engineering occurred between 1910 and 1945, when mass communication methods were being created and two world wars were being fought. During this period, some of the most famous names in controls engineering were doing their work: Nyquist and Bode. Hendrik Wade Bode and Harry Nyquist, especially in the 1930s while working with Bell Laboratories, created the bulk of what we now call "Classical Control Methods".
These methods were based on the results of the Laplace and Fourier Transforms, which had been previously known but were made popular by Oliver Heaviside around the turn of the century. Previous to Heaviside, the transforms were not widely used, nor respected as mathematical tools. Bode is credited with the "discovery" of the closed-loop feedback system, and the logarithmic plotting technique that still bears his name (Bode plots). Harry Nyquist did extensive research in the field of system stability and information theory. He created a powerful stability criterion that has been named for him (the Nyquist Criterion).

Modern control methods were introduced in the early 1950s, as a way to bypass some of the shortcomings of the classical methods. Rudolf Kalman is famous for his work in modern control theory, and an optimal state estimator called the Kalman Filter was named in his honor. Modern control methods became increasingly popular after 1957 with the invention of the computer, and the start of the space program. Computers created the need for digital control methodologies, and the space program required the creation of some "advanced" control techniques, such as "optimal control", "robust control", and "nonlinear control". These last subjects, and several more, are still active areas of study among research engineers.

## Branches of Control Engineering

Here we are going to give a brief listing of the various different methodologies within the sphere of control engineering. Oftentimes, the lines between these methodologies are blurred, or even erased completely.

Classical Controls
Control methodologies where the ODEs that describe a system are transformed using the Laplace, Fourier, or Z Transforms, and manipulated in the transform domain.

Modern Controls
Methods where high-order differential equations are broken into a system of first-order equations. The input, output, and internal states of the system are described by vectors called "state variables".
Robust Control
Control methodologies where arbitrary outside noise/disturbances are accounted for, as well as internal inaccuracies caused by the heat of the system itself, and the environment.

Optimal Control
In a system, performance metrics are identified, and arranged into a "cost function". The cost function is minimized to create an operational system with the lowest cost.

Adaptive Control
In adaptive control, the control changes its response characteristics over time to better control the system.

Nonlinear Control
The youngest branch of control engineering, nonlinear control encompasses systems that cannot be described by linear equations or ODEs, and for which there is often very little supporting theory available.

Game Theory
Game Theory is a close relative of control theory, and especially robust control and optimal control theories. In game theory, the external disturbances are not considered to be random noise processes, but instead are considered to be "opponents". Each player has a cost function that they attempt to minimize, and that their opponents attempt to maximize.

This book will definitely cover the first two branches, and will hopefully be expanded to cover some of the later branches, if time allows.

## MATLAB

Information about using MATLAB for control systems can be found in the Appendix.

MATLAB® is a programming tool that is commonly used in the field of control engineering. We will discuss MATLAB in specific sections of this book devoted to that purpose. MATLAB will not appear in discussions outside these specific sections, although MATLAB may be used in some example problems. An overview of the use of MATLAB in control engineering can be found in the appendix at: Control Systems/MATLAB.

Resources
Nearly all textbooks on the subject of control systems, linear systems, and system analysis will use MATLAB as an integral part of the text.
Students who are learning this subject at an accredited university will certainly have seen this material in their textbooks, and are likely to have had MATLAB work as part of their classes. It is from this perspective that the MATLAB appendix is written.

In the future, this book may be expanded to include information on Simulink®, as well as MATLAB.

There are a number of other software tools that are useful in the analysis and design of control systems. Additional information can be added in the appendix of this book, depending on the experience and prior knowledge of contributors.

## Conventions

This book will use some simple conventions throughout.

### Mathematical Conventions

Mathematical equations will be labeled with the {{eqn}} template, to give them names. Equations that are labeled in such a manner are important, and should be taken special note of. For instance, notice the label to the right of this equation:

[Inverse Laplace Transform]

${\displaystyle f(t)={\mathcal {L}}^{-1}\left\{F(s)\right\}={1 \over {2\pi i}}\int _{c-i\infty }^{c+i\infty }e^{st}F(s)\,ds}$

Equations that are named in this manner will also be copied into the List of Equations Glossary at the end of the book, for easy reference.

Italics will be used for English variables, functions, and equations that appear in the main text. For example e, j, f(t) and X(s) are all italicized. Wikibooks contains a LaTeX mathematics formatting engine, although an attempt will be made not to employ formatted mathematical equations inline with other text because of the difference in size and font. Greek letters, and other non-English characters, will not be italicized in the text unless they appear in the midst of multiple variables which are italicized (as a convenience to the editor).

Scalar time-domain functions and variables will be denoted with lower-case letters, along with a t in parentheses, such as: x(t), y(t), and h(t).
Discrete-time functions will be written in a similar manner, except with an [n] instead of a (t). Fourier, Laplace, Z, and Star transformed functions will be denoted with capital letters followed by the appropriate variable in parentheses. For example: F(s), X(jω), Y(z), and F*(s).

Matrices will be denoted with capital letters. Matrices which are functions of time will be denoted with a capital letter followed by a t in parentheses. For example: A(t) is a matrix, a(t) is a scalar function of time. Transforms of time-variant matrices will be displayed in uppercase bold letters, such as H(s). Math equations rendered using LaTeX will appear on separate lines, and will be indented from the rest of the text.

### Text Conventions

Information which is tangent or auxiliary to the main text will be placed in "sidebox" templates. Examples will appear in TextBox templates, which show up as large grey boxes filled with text and equations.

Important Definitions
Will appear in TextBox templates as well, except we will use this formatting to show that it is a definition.

# System Identification

## Systems

Systems, in one sense, are devices that take input and produce an output. A system can be thought to operate on the input to produce the output. The output is related to the input by a certain relationship known as the system response. The system response usually can be modeled with a mathematical relationship between the system input and the system output.

## System Properties

Physical systems can be divided up into a number of different categories, depending on particular properties that the system exhibits. Some of these system classifications are very easy to work with and have a large theory base for analysis. Some system classifications are very complex and have still not been investigated with any degree of success. By properly identifying the properties of a system, certain analysis and design tools can be selected for use with the system.
The early sections of this book will focus primarily on linear time-invariant (LTI) systems. LTI systems are the easiest class of system to work with, and have a number of properties that make them ideal to study. This chapter discusses some properties of systems.

Later chapters in this book will look at time-variant systems and nonlinear systems. Both time-variant and nonlinear systems are very complex areas of current research, and both can be difficult to analyze properly. Unfortunately, most physical real-world systems are time-variant, nonlinear, or both.

An introduction to system identification and least squares techniques can be found here. An introduction to parameter identification techniques can be found here.

## Initial Time

The initial time of a system is the time before which there is no input. Typically, the initial time of a system is defined to be zero, which will simplify the analysis significantly. Some techniques, such as the Laplace Transform, require that the initial time of the system be zero. The initial time of a system is typically denoted by t0.

The value of any variable at the initial time t0 will be denoted with a 0 subscript. For instance, the value of variable x at time t0 is given by:

${\displaystyle x(t_{0})=x_{0}}$

Likewise, any time t with a positive subscript is a point in time after t0, in ascending order:

${\displaystyle t_{0}\leq t_{1}\leq t_{2}\leq \cdots \leq t_{n}}$

So t1 occurs after t0, and t2 occurs after both points. In a similar fashion as above, a variable with a positive subscript (unless specifying an index into a vector) also occurs at that point in time:

${\displaystyle x(t_{1})=x_{1}}$

${\displaystyle x(t_{2})=x_{2}}$

This is valid for all points in time t.

## Additivity

A system satisfies the property of additivity if a sum of inputs results in a sum of outputs. By definition: an input of ${\displaystyle x_{3}(t)=x_{1}(t)+x_{2}(t)}$ results in an output of ${\displaystyle y_{3}(t)=y_{1}(t)+y_{2}(t)}$.
To determine whether a system is additive, use the following test: Given a system f that takes an input x and outputs a value y, assume two inputs (x1 and x2) produce two outputs:

${\displaystyle y_{1}=f(x_{1})}$

${\displaystyle y_{2}=f(x_{2})}$

Now, create a composite input that is the sum of the previous inputs:

${\displaystyle x_{3}=x_{1}+x_{2}}$

Then the system is additive if the following equation is true:

${\displaystyle y_{3}=f(x_{3})=f(x_{1}+x_{2})=f(x_{1})+f(x_{2})=y_{1}+y_{2}}$

Systems that satisfy this property are called additive. Additive systems are useful because a sum of simple inputs can be used to analyze the system response to a more complex input.

### Example: Sinusoids

Given the following equation:

${\displaystyle y(t)=\sin(3x(t))}$

Create a sum of inputs as:

${\displaystyle x(t)=x_{1}(t)+x_{2}(t)}$

and construct the expected sum of outputs:

${\displaystyle y(t)=y_{1}(t)+y_{2}(t)}$

Now, substituting these values into our equation, test for equality:

${\displaystyle y_{1}(t)+y_{2}(t)=\sin(3[x_{1}(t)+x_{2}(t)])}$

The equality is not satisfied, and therefore the sine operation is not additive.

## Homogeneity

A system satisfies the condition of homogeneity if an input scaled by a certain factor produces an output scaled by that same factor. By definition: an input of ${\displaystyle ax_{1}}$ results in an output of ${\displaystyle ay_{1}}$.
In other words, to see if function f() is homogeneous, perform the following test: Stimulate the system f with an arbitrary input x to produce an output y:

${\displaystyle y=f(x)}$

Now, create a second input x1, scale it by a multiplicative factor C (C is an arbitrary constant value), and produce a corresponding output y1:

${\displaystyle y_{1}=f(Cx_{1})}$

Now, assign x to be equal to x1:

${\displaystyle x_{1}=x}$

Then, for the system to be homogeneous, the following equation must be true:

${\displaystyle y_{1}=f(Cx)=Cf(x)=Cy}$

Systems that are homogeneous are useful in many applications, especially applications with gain or amplification.

### Example: Straight-Line

Given the equation for a straight line:

${\displaystyle y=f(x)=2x+3}$

${\displaystyle y_{1}=f(Cx_{1})=2(Cx_{1})+3=C2x_{1}+3}$

${\displaystyle x_{1}=x}$

Comparing the two results, it is easy to see they are not equal:

${\displaystyle y_{1}=C2x+3\neq Cy=C(2x+3)=C2x+C3}$

Therefore, the equation is not homogeneous.

Exercise: Prove that additivity implies homogeneity, but that homogeneity does not imply additivity.

## Linearity

A system is considered linear if it satisfies the conditions of Additivity and Homogeneity. In short, a system is linear if the following is true: Take two arbitrary inputs, and produce two arbitrary outputs:

${\displaystyle y_{1}=f(x_{1})}$

${\displaystyle y_{2}=f(x_{2})}$

Now, a linear combination of the inputs should produce a linear combination of the outputs:

${\displaystyle f(Ax_{1}+Bx_{2})=f(Ax_{1})+f(Bx_{2})=Af(x_{1})+Bf(x_{2})=Ay_{1}+By_{2}}$

This condition of additivity and homogeneity is called superposition. A system is linear if it satisfies the condition of superposition.
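The additivity and homogeneity tests above are easy to check numerically at individual test points. The Python sketch below (the test inputs and scale factors are arbitrary made-up values) applies the superposition test to the earlier examples: y = sin(3x), which fails, and a simple gain y = 2x, which passes:

```python
import math

def superposition_holds(f, x1, x2, A, B, tol=1e-9):
    """Check f(A*x1 + B*x2) == A*f(x1) + B*f(x2) for one set of test values."""
    return abs(f(A * x1 + B * x2) - (A * f(x1) + B * f(x2))) < tol

sine_system = lambda x: math.sin(3 * x)  # from the Example: Sinusoids section
gain_system = lambda x: 2 * x            # a pure gain: both additive and homogeneous

print(superposition_holds(sine_system, 0.4, 0.7, 2.0, -1.5))  # False: not linear
print(superposition_holds(gain_system, 0.4, 0.7, 2.0, -1.5))  # True
```

Note the asymmetry: a single failing test point is enough to disprove linearity, but a single passing point does not prove it, since superposition must hold for all inputs and all scale factors.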
### Example: Linear Differential Equations

Is the following equation linear?

${\displaystyle {\frac {dy(t)}{dt}}+y(t)=x(t)}$

To determine whether this system is linear, construct a new composite input:

${\displaystyle x(t)=Ax_{1}(t)+Bx_{2}(t)}$

Now, create the expected composite output:

${\displaystyle y(t)=Ay_{1}(t)+By_{2}(t)}$

Substituting the two into our original equation:

${\displaystyle {\frac {d[Ay_{1}(t)+By_{2}(t)]}{dt}}+[Ay_{1}(t)+By_{2}(t)]=Ax_{1}(t)+Bx_{2}(t)}$

Factor out the derivative operator, as such:

${\displaystyle {\frac {d}{dt}}[Ay_{1}(t)+By_{2}(t)]+[Ay_{1}(t)+By_{2}(t)]=Ax_{1}(t)+Bx_{2}(t)}$

Finally, convert the various composite terms into the respective variables, to prove that this system is linear:

${\displaystyle {\frac {dy(t)}{dt}}+y(t)=x(t)}$

For the record, derivatives and integrals are linear operators, and ordinary differential equations are typically linear equations.

## Memory

A system is said to have memory if the output from the system is dependent on past inputs (or future inputs!) to the system. A system is called memoryless if the output is only dependent on the current input. Memoryless systems are easier to work with, but systems with memory are more common in digital signal processing applications. Systems that have memory are called dynamic systems, and systems that do not have memory are static systems.

## Causality

Causality is a property that is very similar to memory. A system is called causal if it is only dependent on past and/or current inputs. A system is called anti-causal if the output of the system is dependent only on future inputs. A system is called non-causal if the output depends on past and/or current and future inputs.

## Time-Invariance

A system is called time-invariant if the system relationship between the input and output signals is not dependent on the passage of time.
If the input signal ${\displaystyle x(t)}$ produces an output ${\displaystyle y(t)}$ then any time shifted input, ${\displaystyle x(t+\delta )}$, results in a time-shifted output ${\displaystyle y(t+\delta )}$ This property can be satisfied if the transfer function of the system is not a function of time except expressed by the input and output. If a system is time-invariant then the system block is commutative with an arbitrary delay. This facet of time-invariant systems will be discussed later. To determine if a system f is time-invariant, perform the following test: Apply an arbitrary input x to a system and produce an arbitrary output y: ${\displaystyle y(t)=f(x(t))}$ Apply a second input x1 to the system, and produce a second output: ${\displaystyle y_{1}(t)=f(x_{1}(t))}$ Now, assign x1 to be equal to the first input x, time-shifted by a given constant value δ: ${\displaystyle x_{1}(t)=x(t-\delta )}$ Finally, a system is time-invariant if y1 is equal to y shifted by the same value δ: ${\displaystyle y_{1}(t)=y(t-\delta )}$ ## LTI Systems A system is considered to be a Linear Time-Invariant (LTI) system if it satisfies the requirements of time-invariance and linearity. LTI systems are one of the most important types of systems, and they will be considered almost exclusively in the beginning chapters of this book. Systems which are not LTI are more common in practice, but are much more difficult to analyze. ## Lumpedness A system is said to be lumped if one of the two following conditions are satisfied: 1. There are a finite number of states that the system can be in. 2. There are a finite number of state variables. The concept of "states" and "state variables" are relatively advanced, and they will be discussed in more detail in the discussion about modern controls. Systems which are not lumped are called distributed. 
A simple example of a distributed system is a system with delay, that is, ${\displaystyle A(s)y(t)=B(s)u(t-\tau )}$, which has an infinite number of state variables (here we use ${\displaystyle s}$ to denote the Laplace variable). However, although distributed systems are quite common, they are very difficult to analyze in practice, and there are few tools available to work with such systems. Fortunately, in most cases, a delay can be sufficiently modeled with the Pade approximation. This book will not discuss distributed systems much.

## Relaxed

A system is said to be relaxed if the system is causal, and at the initial time t0 the output of the system is zero, i.e., there is no stored energy in the system.

${\displaystyle y(t_{0})=f(x(t_{0}))=0}$

In terms of differential equations, a relaxed system is said to have "zero initial state". Systems without an initial state are easier to work with, but systems that are not relaxed can frequently be modified to approximate relaxed systems.

## Stability

Control systems engineers will frequently say that an unstable system has "exploded". Some physical systems can actually rupture or explode when they go unstable. Stability is a very important concept in systems, but it is also one of the hardest system properties to prove. There are several different criteria for system stability, but the most common requirement is that the system must produce a finite output when subjected to a finite input. For instance, if 5 volts is applied to the input terminals of a given circuit, it would be best if the circuit output didn't approach infinity, and the circuit itself didn't melt or explode. This type of stability is often known as "Bounded Input, Bounded Output" stability, or BIBO. There are a number of other types of stability, most of which are based on the concept of BIBO stability. Because stability is such an important and complicated topic, an entire section of this text is devoted to its study.
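BIBO stability can be illustrated (though certainly not proven) with a quick simulation. The sketch below uses a forward-Euler approximation of two first-order systems of our own choosing; a bounded step input produces a bounded output for the stable system and an "exploding" output for the unstable one:

```python
def simulate(a, u, dt=0.01, t_end=10.0):
    # Forward-Euler simulation of y' = a*y + u(t), starting from rest.
    # Returns the largest output magnitude seen over the run.
    y, t, peak = 0.0, 0.0, 0.0
    while t < t_end:
        y += dt * (a * y + u(t))
        peak = max(peak, abs(y))
        t += dt
    return peak

step = lambda t: 1.0  # a bounded (finite) input

stable_peak = simulate(-1.0, step)    # y' = -y + u: output stays bounded
unstable_peak = simulate(+1.0, step)  # y' = +y + u: output grows without bound

print(stable_peak < 2.0)       # True: bounded output
print(unstable_peak > 1000.0)  # True: "exploding" output
```

The parameter values here are arbitrary choices for the sketch; the point is only that a finite input can still drive an unstable system's output toward infinity.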
## Inputs and Outputs

Systems can also be categorized by the number of inputs and the number of outputs the system has. Consider a television as a system, for instance. The system has two inputs: the power wire and the signal cable. It has one output: the video display. A system with one input and one output is called single-input, single-output, or SISO. A system with multiple inputs and multiple outputs is called multi-input, multi-output, or MIMO. These systems will be discussed in more detail later.

Exercise: Based on the definitions of SISO and MIMO, above, determine what the acronyms SIMO and MISO mean.

# Digital and Analog

## Digital and Analog

There is a significant distinction between an analog system and a digital system, in the same way that there is a significant difference between analog and digital data. This book is going to consider both analog and digital topics, so it is worth taking some time to discuss the differences, and to display the different notations that will be used with each.

### Continuous Time

This operation can be performed using this MATLAB command: isct

A signal is called continuous-time if it is defined at every time t. A system is a continuous-time system if it takes a continuous-time input signal, and outputs a continuous-time output signal. Here is an example of an analog waveform:

### Discrete Time

This operation can be performed using this MATLAB command: isdt

A signal is called discrete-time if it is only defined for particular points in time. A discrete-time system takes discrete-time input signals, and produces discrete-time output signals. The following image shows the difference between an analog waveform and the sampled discrete time equivalent:

### Quantized

A signal is called Quantized if it can only take certain values, and cannot be other values. This concept is best illustrated with examples:

1. Students with a strong background in physics will recognize this concept as being the root word in "Quantum Mechanics".
In quantum mechanics, it is known that energy comes only in discrete packets. An electron bound to an atom, for example, may occupy one of several discrete energy levels, but not intermediate levels. 2. Another common example is population statistics. For instance, a common statistic is that a household in a particular country may have an average of "3.5 children", or some other fractional number. Actual households may have 3 children, or they may have 4 children, but no household has 3.5 children. 3. People with a computer science background will recognize that integer variables are quantized because they can only hold certain integer values, not fractions or decimal points. The last example concerning computers is the most relevant, because quantized systems are frequently computer-based. Systems that are implemented with computer software and hardware will typically be quantized. Here is an example waveform of a quantized signal. Notice how the magnitude of the wave can only take certain values, and that creates a step-like appearance. This image is discrete in magnitude, but is continuous in time: ## Analog By definition: Analog A signal is considered analog if it is defined for all points in time and if it can take any real magnitude value within its range. An analog system is a system that represents data using a direct conversion from one form to another. In other words, an analog system is a system that is continuous in both time and magnitude. ### Example: Motor If we have a given motor, we can show that the output of the motor (rotation in units of radians per second, for instance) is a function of the voltage that is input to the motor. We can show the relationship as such: ${\displaystyle \Theta (v)=f(v)}$ Where ${\displaystyle \Theta }$ is the output in terms of Rad/sec, and f(v) is the motor's conversion function between the input voltage (v) and the output. 
For any value of v we can calculate out specifically what the rotational speed of the motor should be. ### Example: Analog Clock Consider a standard analog clock, which represents the passage of time though the angular position of the clock hands. We can denote the angular position of the hands of the clock with the system of equations: ${\displaystyle \phi _{h}=f_{h}(t)}$ ${\displaystyle \phi _{m}=f_{m}(t)}$ ${\displaystyle \phi _{s}=f_{s}(t)}$ Where φh is the angular position of the hour hand, φm is the angular position of the minute hand, and φs is the angular position of the second hand. The positions of all the different hands of the clock are dependent on functions of time. Different positions on a clock face correspond directly to different times of the day. ## Digital Digital data is represented by discrete number values. By definition: Digital A signal or system is considered digital if it is both discrete-time and quantized. Digital data always have a certain granularity, and therefore there will almost always be an error associated with using such data, especially if we want to account for all real numbers. The tradeoff, of course, to using a digital system is that our powerful computers with our powerful, Moore's law microprocessor units, can be instructed to operate on digital data only. This benefit more than makes up for the shortcomings of a digital representation system. Discrete systems will be denoted inside square brackets, as is a common notation in texts that deal with discrete values. For instance, we can denote a discrete data set of ascending numbers, starting at 1, with the following notation: x[n] = [1 2 3 4 5 6 ...] n, or other letters from the central area of the alphabet (m, i, j, k, l, for instance) are commonly used to denote discrete time values. Analog, or "non-discrete" values are denoted in regular expression syntax, using parenthesis. Here is an example of an analog waveform and the digital equivalent. 
Notice that the digital waveform is discrete in both time and magnitude (the figures show an analog waveform and its digital equivalent).

### Example: Digital Clock

As a common example, let's consider a digital clock. The digital clock represents time with binary electrical data signals of 1 and 0. The 1's are usually represented by a positive voltage, and a 0 is generally represented by zero voltage. Counting in binary, we can show that any given time can be represented by a base-2 numbering system:

| Minute | Binary Representation |
|--------|-----------------------|
| 1      | 1                     |
| 10     | 1010                  |
| 30     | 11110                 |
| 59     | 111011                |

But what happens if we want to display a fraction of a minute, or a fraction of a second? A typical digital clock has a certain amount of precision, and it cannot express fractional values smaller than that precision.

## Hybrid Systems

Hybrid systems are systems that have both analog and digital components. Devices called samplers are used to convert analog signals into digital signals, and devices called reconstructors are used to convert digital signals into analog signals. Because of the use of samplers, hybrid systems are frequently called sampled-data systems.

### Example: Automobile Computer

Most modern automobiles today have integrated computer systems that monitor certain aspects of the car, and actually help to control the performance of the car. The speed of the car and the rotational speed of the transmission are analog values, but a sampler converts them into digital values so the car computer can monitor them. The digital computer will then output control signals to other parts of the car, to alter analog systems such as the engine timing, the suspension, the brakes, and other parts. Because the car has both digital and analog components, it is a hybrid system.

## Continuous and Discrete

Note: We are not using the word "continuous" here in the sense of continuously differentiable, as is common in math texts.

A system is considered continuous-time if the signal exists for all time.
Frequently, the terms "analog" and "continuous" will be used interchangeably, although they are not strictly the same.

Discrete systems can come in three flavors:

1. Discrete time (sampled)
2. Discrete magnitude (quantized)
3. Discrete time and magnitude (digital)

Discrete magnitude systems are systems where the signal value can only have certain values. Discrete time systems are systems where signals are only available (or valid) at particular times. Computer systems are discrete in the sense of (3), in that data is only read at specific discrete time intervals, and the data can have only a limited number of discrete values.

A discrete-time system has a sampling time value associated with it, such that each discrete value occurs at multiples of the given sampling time. We will denote the sampling time of a system as T. We can equate the square-brackets notation of a system with the continuous definition of the system as follows:

${\displaystyle x[n]=x(nT)}$

Notice that the two notations show the same thing, but the first one is typically easier to write, and it shows that the system in question is a discrete system. This book will use square brackets to denote discrete systems by the sample number n, and parentheses to denote continuous-time functions.

## Sampling and Reconstruction

The process of converting analog information into digital data is called "Sampling". The process of converting digital data into an analog signal is called "Reconstruction". We will talk about both processes in a later chapter. For more information on the topic than is available in this book, see the Analog and Digital Conversion wikibook. Here is an example of a reconstructed waveform.
Notice that the reconstructed waveform here is quantized because it is constructed from a digital signal: # System Metrics ## System Metrics When a system is being designed and analyzed, it doesn't make any sense to test the system with all manner of strange input functions, or to measure all sorts of arbitrary performance metrics. Instead, it is in everybody's best interest to test the system with a set of standard, simple reference functions. Once the system is tested with the reference functions, there are a number of different metrics that we can use to determine the system performance. It is worth noting that the metrics presented in this chapter represent only a small number of possible metrics that can be used to evaluate a given system. This wikibook will present other useful metrics along the way, as their need becomes apparent. ## Standard Inputs Note: All of the standard inputs are zero before time zero. All the standard inputs are causal. There are a number of standard inputs that are considered simple enough and universal enough that they are considered when designing a system. These inputs are known as a unit step, a ramp, and a parabolic input. Unit Step A unit step function is defined piecewise as such: [Unit Step Function] ${\displaystyle u(t)=\left\{{\begin{matrix}0,&t<0\\1,&t\geq 0\end{matrix}}\right.}$ The unit step function is a highly important function, not only in control systems engineering, but also in signal processing, systems analysis, and all branches of engineering. If the unit step function is input to a system, the output of the system is known as the step response. The step response of a system is an important tool, and we will study step responses in detail in later chapters. 
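The piecewise definition of the unit step translates directly into code. In this Python sketch, the helper `pulse` is our own addition, showing how two shifted steps combine into a rectangular pulse:

```python
def u(t):
    # Unit step: zero before time zero, one at and after time zero
    return 1.0 if t >= 0 else 0.0

# A rectangular pulse of width T can be built from two shifted steps,
# u(t) - u(t - T); T = 2.0 is an arbitrary choice for this sketch.
T = 2.0
pulse = lambda t: u(t) - u(t - T)

print([u(t) for t in (-1.0, 0.0, 3.0)])      # [0.0, 1.0, 1.0]
print([pulse(t) for t in (-1.0, 1.0, 3.0)])  # [0.0, 1.0, 0.0]
```

Sums of such shifted-step pulses reappear later when we build the output of an ideal sampler.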
Ramp

A unit ramp is defined in terms of the unit step function, as such:

[Unit Ramp Function]

${\displaystyle r(t)=tu(t)}$

It is important to note that the unit ramp function is simply the integral of the unit step function:

${\displaystyle r(t)=\int u(t)dt=tu(t)}$

This definition will come in handy when we learn about the Laplace Transform.

Parabolic

A unit parabolic input is similar to a ramp input:

[Unit Parabolic Function]

${\displaystyle p(t)={\frac {1}{2}}t^{2}u(t)}$

Notice also that the unit parabolic input is equal to the integral of the ramp function:

${\displaystyle p(t)=\int r(t)dt=\int tu(t)dt={\frac {1}{2}}t^{2}u(t)={\frac {1}{2}}tr(t)}$

Again, this result will become important when we learn about the Laplace Transform. Also, sinusoidal and exponential functions are considered basic, but they are too difficult to use in initial analysis of a system.

## Steady State

Note: To be more precise, we should have taken the limit as t approaches infinity. However, as a shorthand notation, we will typically say "t equals infinity", and assume the reader understands the shortcut that is being used.

When a unit-step function is input to a system, the steady-state value of that system is the output value at time ${\displaystyle t=\infty }$. Since it is impractical (if not completely impossible) to wait until infinity to observe the system, approximations and mathematical calculations are used to determine the steady-state value of the system. Most system responses are asymptotic, that is, the response approaches a particular value. Systems that are asymptotic are typically obvious from viewing the graph of that response.

### Step Response

The step response of a system is most frequently used to analyze systems, and there is a large amount of terminology involved with step responses. When exposed to the step input, the system will initially have an undesirable output period known as the transient response.
The transient response occurs because a system is approaching its final output value. The steady-state response of the system is the response after the transient response has ended. The amount of time it takes for the system output to reach the desired value (before the transient response has ended, typically) is known as the rise time. The amount of time it takes for the transient response to end and the steady-state response to begin is known as the settling time.

It is common for a systems engineer to try to improve the step response of a system. In general, it is desired for the transient response to be reduced, the rise and settling times to be shorter, and the steady-state to approach a particular desired "reference" output.

## Target Value

The target output value is the value that our system attempts to obtain for a given input. This is not the same as the steady-state value, which is the actual value that the system does obtain. The target value is frequently referred to as the reference value, or the "reference function" of the system. In essence, this is the value that we want the system to produce. When we input a "5" into an elevator, we want the output (the final position of the elevator) to be the fifth floor. Pressing the "5" button is the reference input, and is the expected value that we want to obtain. If we press the "5" button, and the elevator goes to the third floor, then our elevator is poorly designed.

## Rise Time

Rise time is the amount of time that it takes for the system response to reach the target value from an initial state of zero. Many texts on the subject define the rise time as being the time it takes to rise between the initial position and 80% of the target value. This is because some systems never rise to 100% of the expected target value, and therefore they would have an infinite rise time. This book will specify which convention to use for each individual problem. Rise time is typically denoted tr, or trise.
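Using the 80% convention mentioned above, rise time can be estimated numerically. The first-order step response below is an illustrative system of our own choosing, with time constant tau:

```python
import math

tau = 0.5  # illustrative first-order time constant

def step_response(t):
    # First-order step response approaching a target value of 1
    return 1.0 - math.exp(-t / tau)

def rise_time(fraction=0.8, dt=1e-4):
    # March forward in time until the response first reaches the
    # given fraction of the target value (the 80% convention).
    t = 0.0
    while step_response(t) < fraction:
        t += dt
    return t

tr = rise_time()
print(round(tr, 3))  # analytically, -tau*ln(0.2), about 0.805
```

For this simple response the exact answer is available in closed form, which makes it a convenient check on the numerical search.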
## Percent Overshoot

Underdamped systems frequently overshoot their target value initially. This initial surge is known as the "overshoot value". The ratio of the amount of overshoot to the target steady-state value of the system is known as the percent overshoot. Percent overshoot represents an overcompensation of the system, and can produce dangerously large output signals that can damage a system. Percent overshoot is typically denoted with the term PO.

Example: Refrigerator

Consider an ordinary household refrigerator. The refrigerator has cycles where it is on and when it is off. When the refrigerator is on, the coolant pump is running, and the temperature inside the refrigerator decreases. The temperature decreases to a much lower level than is required, and then the pump turns off. When the pump is off, the temperature slowly increases again as heat is absorbed into the refrigerator. When the temperature gets high enough, the pump turns back on. Because the pump cools down the refrigerator more than it needs to initially, we can say that it "overshoots" the target value by a certain specified amount.

Example: Refrigerator

Another example concerning a refrigerator concerns the electrical demand of the heat pump when it first turns on. The pump is an inductive mechanical motor, and when the motor first activates, a special counter-acting force known as "back EMF" resists the motion of the motor, and causes the pump to draw more electricity until the motor reaches its final speed. During the startup time for the pump, lights on the same electrical circuit as the refrigerator may dim slightly, as electricity is drawn away from the lamps, and into the pump. This initial draw of electricity is a good example of overshoot.

## Steady-State Error

Usually, the letter e or E will be used to denote error values.

Sometimes a system might never achieve the desired steady-state value, but instead will settle on an output value that is not desired.
The difference between the steady-state output value and the reference input value at steady state is called the steady-state error of the system. We will use the variable ess to denote the steady-state error of the system.

## Settling Time

After the initial rise time of the system, some systems will oscillate and vibrate for an amount of time before the system output settles on the final value. The amount of time it takes to reach steady state after the initial rise time is known as the settling time. Notice that damped oscillating systems may never settle completely, so we will define settling time as being the amount of time for the system to reach, and stay in, a certain acceptable range. The acceptable range for settling time is typically determined on a per-problem basis, although common values are 20%, 10%, or 5% of the target value. The settling time will be denoted as ts.

## System Order

The order of the system is defined by the number of independent energy storage elements in the system, and intuitively by the highest order of the linear differential equation that describes the system. In a transfer function representation, the order is the highest exponent in the transfer function. In a proper system, the system order is defined as the degree of the denominator polynomial. In a state-space equation, the system order is the number of state variables used in the system. The order of a system will frequently be denoted with an n or N, although these variables are also used for other purposes. This book will make a clear distinction on the use of these variables.

### Proper Systems

A proper system is a system where the degree of the denominator is larger than or equal to the degree of the numerator polynomial. A strictly proper system is a system where the degree of the denominator polynomial is larger than (but never equal to) the degree of the numerator polynomial.
A biproper system is a system where the degree of the denominator polynomial equals the degree of the numerator polynomial. It is important to note that only proper systems can be physically realized. In other words, a system that is not proper cannot be built. It makes no sense to spend a lot of time designing and analyzing imaginary systems.

### Example: System Order

Find the order of this system:

${\displaystyle G(s)={\frac {1+s}{1+s+s^{2}}}}$

The highest exponent in the denominator is s2, so the system is order 2. Also, since the denominator is of a higher degree than the numerator, this system is strictly proper.

In the above example, G(s) is a second-order transfer function because in the denominator one of the s variables has an exponent of 2. Second-order functions are the easiest to work with.

## System Type

Let's say that we have a process transfer function (or combination of functions, such as a controller feeding into a process), all in the forward branch of a unity feedback loop. Say that the overall forward branch transfer function is in the following generalized form (known as pole-zero form):

[Pole-Zero Form]

${\displaystyle G(s)={\frac {K\prod _{i}(s-s_{i})}{s^{M}\prod _{j}(s-s_{j})}}}$

Poles at the origin are called integrators, because they have the effect of performing integration on the input signal. We call the parameter M the system type. Note that increased system type numbers correspond to larger numbers of poles at s = 0. More poles at the origin generally have a beneficial effect on the system, but they increase the order of the system, and make it increasingly difficult to implement physically. System type will generally be denoted with a letter like N, M, or m. Because these variables are typically reused for other purposes, this book will make a clear distinction when they are employed.

Now, we will define a few terms that are commonly used when discussing system type.
These new terms are Position Error, Velocity Error, and Acceleration Error. These names are throwbacks to physics terms where acceleration is the derivative of velocity, and velocity is the derivative of position. Note that none of these terms are meant to deal with movement, however.

Position Error

The position error is characterized by the position error constant ${\displaystyle K_{p}}$, which determines the amount of steady-state error of the system when stimulated by a unit step input. We define the position error constant as follows:

[Position Error Constant]

${\displaystyle K_{p}=\lim _{s\to 0}G(s)}$

Where G(s) is the transfer function of our system.

Velocity Error

The velocity error is the amount of steady-state error when the system is stimulated with a ramp input. We define the velocity error constant as such:

[Velocity Error Constant]

${\displaystyle K_{v}=\lim _{s\to 0}sG(s)}$

Acceleration Error

The acceleration error is the amount of steady-state error when the system is stimulated with a parabolic input.
We define the acceleration error constant to be:

[Acceleration Error Constant]

${\displaystyle K_{a}=\lim _{s\to 0}s^{2}G(s)}$

Now, this table will show briefly the relationship between the system type, the kind of unit system input (step, ramp, parabolic), and the steady-state error of the system:

| Type, M | Step input Au(t) | Ramp input Ar(t) | Parabolic input Ap(t) |
|---------|------------------|------------------|-----------------------|
| 0 | ${\displaystyle e_{ss}={\frac {A}{1+K_{p}}}}$ | ${\displaystyle e_{ss}=\infty }$ | ${\displaystyle e_{ss}=\infty }$ |
| 1 | ${\displaystyle e_{ss}=0}$ | ${\displaystyle e_{ss}={\frac {A}{K_{v}}}}$ | ${\displaystyle e_{ss}=\infty }$ |
| 2 | ${\displaystyle e_{ss}=0}$ | ${\displaystyle e_{ss}=0}$ | ${\displaystyle e_{ss}={\frac {A}{K_{a}}}}$ |
| > 2 | ${\displaystyle e_{ss}=0}$ | ${\displaystyle e_{ss}=0}$ | ${\displaystyle e_{ss}=0}$ |

### Z-Domain Type

Likewise, we can show that the system type can be found from the following generalized transfer function in the Z domain:

${\displaystyle G(z)={\frac {K\prod _{i}(z-z_{i})}{(z-1)^{M}\prod _{j}(z-z_{j})}}}$

Where the constant M is the type of the digital system. Now, we will show how to find the various error constants in the Z-Domain:

[Z-Domain Error Constants]

| Error Constant | Equation |
|----------------|----------|
| Kp | ${\displaystyle K_{p}=\lim _{z\to 1}G(z)}$ |
| Kv | ${\displaystyle K_{v}=\lim _{z\to 1}(z-1)G(z)}$ |
| Ka | ${\displaystyle K_{a}=\lim _{z\to 1}(z-1)^{2}G(z)}$ |

## Visually

Here is an image of the various system metrics, acting on a system in response to a step input: The target value is the value of the input step response. The rise time is the time at which the waveform first reaches the target value. The overshoot is the amount by which the waveform exceeds the target value. The settling time is the time it takes for the system to settle into a particular bounded region. This bounded region is denoted with two short dotted lines above and below the target value.

# System Modeling

## The Control Process

It is the job of a control engineer to analyze existing systems, and to design new systems to meet specific needs.
Sometimes new systems need to be designed, but more frequently a controller unit needs to be designed to improve the performance of existing systems. When designing a system, or implementing a controller to augment an existing system, we need to follow some basic steps:

1. Model the system mathematically
2. Analyze the mathematical model
3. Design system/controller
4. Implement system/controller and test

The vast majority of this book is going to be focused on (2), the analysis of the mathematical systems. This chapter alone will be devoted to a discussion of the mathematical modeling of the systems.

## External Description

An external description of a system relates the system input to the system output without explicitly taking into account the internal workings of the system. The external description of a system is sometimes also referred to as the Input-Output Description of the system, because it only deals with the inputs and the outputs to the system.

Suppose the system can be represented by a mathematical function h(t, r), where t is the time that the output is observed, and r is the time that the input is applied. We can relate the system function h(t, r) to the input x and the output y through the use of an integral:

[General System Description]

${\displaystyle y(t)=\int _{-\infty }^{\infty }h(t,r)x(r)dr}$

This integral form holds for all linear systems, and every linear system can be described by such an equation. If a system is causal (i.e. an input at t=r affects system behaviour only for ${\displaystyle t\geq r}$) and there is no input of the system before t=0, we can change the limits of the integration:

${\displaystyle y(t)=\int _{0}^{t}h(t,r)x(r)dr}$

### Time-Invariant Systems

If furthermore a system is time-invariant, we can rewrite the system description equation as follows:

${\displaystyle y(t)=\int _{0}^{t}h(t-r)x(r)dr}$

This equation is known as the convolution integral, and we will discuss it more in the next chapter.
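The convolution integral can be approximated with a simple Riemann sum. In the sketch below, the impulse response h(t) = e^(-t) and the unit-step input are illustrative choices of our own; the exact output in this case is 1 - e^(-t), which the sum should approach:

```python
import math

def h(t):
    # Illustrative impulse response of a time-invariant system
    return math.exp(-t)

def x(t):
    # Unit-step input
    return 1.0 if t >= 0 else 0.0

def y(t, dr=1e-3):
    # Riemann-sum approximation of y(t) = integral from 0 to t
    # of h(t - r) * x(r) dr
    total, r = 0.0, 0.0
    while r < t:
        total += h(t - r) * x(r) * dr
        r += dr
    return total

print(round(y(2.0), 3))  # close to 1 - e^(-2), about 0.865
```

Shrinking the step size dr tightens the approximation toward the exact integral.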
Every Linear Time-Invariant (LTI) system can be used with the Laplace Transform, a powerful tool that allows us to convert an equation from the time domain into the S-Domain, where many calculations are easier. Time-variant systems cannot be used with the Laplace Transform.

## Internal Description

If a system is linear and lumped, it can also be described using a system of equations known as state-space equations. In state-space equations, we use the variable x to represent the internal state of the system. We then use u as the system input, and we continue to use y as the system output. We can write the state-space equations as such:

${\displaystyle x'(t)=A(t)x(t)+B(t)u(t)}$

${\displaystyle y(t)=C(t)x(t)+D(t)u(t)}$

We will discuss the state-space equations more when we get to the section on modern controls.

## Complex Descriptions

Systems which are LTI and Lumped can also be described using a combination of the state-space equations, and the Laplace Transform. If we take the Laplace Transform of the state equations that we listed above, we can get a set of functions known as the Transfer Matrix Functions. We will discuss these functions in a later chapter.

## Representations

To recap, we will prepare a table with the various system properties, and the available methods for describing the system:

| Properties | State-Space Equations | Laplace Transform | Transfer Matrix |
|------------|-----------------------|-------------------|-----------------|
| Linear, Time-Variant, Distributed | no | no | no |
| Linear, Time-Variant, Lumped | yes | no | no |
| Linear, Time-Invariant, Distributed | no | yes | no |
| Linear, Time-Invariant, Lumped | yes | yes | yes |

We will discuss all these different types of system representation later in the book.

## Analysis

Once a system is modeled using one of the representations listed above, the system needs to be analyzed. We can determine the system metrics and then we can compare those metrics to our specification. If our system meets the specifications we are finished with the design process.
However if the system does not meet the specifications (as is typically the case), then suitable controllers and compensators need to be designed and added to the system. Once the controllers and compensators have been designed, the job isn't finished: we need to analyze the new composite system to ensure that the controllers work properly. Also, we need to ensure that the systems are stable: unstable systems can be dangerous.

### Frequency Domain

For proposals, early stage designs, and quick-turnaround analyses, a frequency domain model is often superior to a time domain model. Frequency domain models take disturbance PSDs (Power Spectral Densities) directly, use transfer functions directly, and produce output or residual PSDs directly. The answer is a steady-state response. Oftentimes the controller is shooting for 0, so the steady-state response is also the residual error that will be the analysis output or metric for report.

Table 1: Frequency Domain Model Inputs and Outputs

| Input | Model | Output |
|-------|-------|--------|
| PSD | Transfer Function | PSD |

#### Brief Overview of the Math

Frequency domain modeling is a matter of determining the response of a system to a random process.

Figure 1: Frequency Domain System

${\displaystyle S_{YY}\left(\omega \right)=G^{*}\left(\omega \right)G\left(\omega \right)S_{XX}=\left|G\left(\omega \right)\right\vert ^{2}S_{XX}}$[1]

where

${\displaystyle S_{XX}\left(\omega \right)}$ is the one-sided input PSD in ${\displaystyle {\frac {magnitude^{2}}{Hz}}}$,

${\displaystyle G\left(\omega \right)}$ is the frequency response function of the system, and

${\displaystyle S_{YY}\left(\omega \right)}$ is the one-sided output PSD or auto power spectral density function.
The frequency response function, ${\displaystyle G\left(\omega \right)}$, is related to the impulse response function (transfer function) by

${\displaystyle g\left(\tau \right)={\frac {1}{2\pi }}\int _{-\infty }^{\infty }e^{i\omega \tau }G\left(\omega \right)\,d\omega }$

Note that some texts will state that this is only valid for random processes which are stationary. Other texts suggest stationary and ergodic, while still others state weakly stationary processes. Some texts do not distinguish between strictly stationary and weakly stationary. From practice, the rule of thumb is: if the PSD of the input process is the same from hour to hour and day to day, then the input PSD can be used and the above equation is valid.

#### Notes

1. Sun, Jian-Qiao (2006). Stochastic Dynamics and Control, Volume 4. Amsterdam: Elsevier Science. ISBN 0444522301.

## Modeling Examples

Modeling in Control Systems is oftentimes a matter of judgement. This judgement is developed by creating models and learning from other people's models. ControlTheoryPro.com is a site with a lot of examples. Here are links to a few of them.

## Manufacture

Once the system has been properly designed, we can prototype our system and test it. Assuming our analysis was correct and our design is good, the prototype should work as expected. Now we can move on to manufacture and distribute our completed systems.

Classical Controls

The classical method of controls involves analysis and manipulation of systems in the complex frequency domain. This domain, entered into by applying the Laplace or Fourier Transforms, is useful in examining the characteristics of the system, and determining the system response.

# Sampled Data Systems

## Ideal Sampler

In this chapter, we are going to introduce the ideal sampler and the Star Transform. First, we need to introduce (or review) the Geometric Series infinite sum. The results of this sum will be very useful in calculating the Star Transform, later.
Consider a sampler device that operates as follows: every T seconds, the sampler reads the current value of the input signal at that exact moment. The sampler then holds that value on the output for T seconds, before taking the next sample. We have a generic input to this system, f(t), and our sampled output will be denoted f*(t). We can then show the following relationship between the two signals:

${\displaystyle f^{\,*}(t)=f(0){\big (}\mathrm {u} (t\,-\,0)\,-\,\mathrm {u} (t\,-\,T){\big )}\,+\,f(T){\big (}\mathrm {u} (t\,-\,T)\,-\,\mathrm {u} (t\,-\,2T){\big )}\,+\;\cdots \;+\,f(nT){\big (}\mathrm {u} (t\,-\,nT)\,-\,\mathrm {u} (t\,-\,(n\,+\,1)T){\big )}\,+\;\cdots }$

Note that the value of f* at time t = 1.5T is the same as at time t = T. This relationship works for any fractional value.

Taking the Laplace Transform of this infinite sequence will yield a special result called the Star Transform. The Star Transform is also occasionally called the "Starred Transform" in some texts.

## Geometric Series

Before we talk about the Star Transform or even the Z-Transform, it is useful for us to review the mathematical background behind solving infinite series. Specifically, because of the nature of these transforms, we are going to look at methods to solve for the sum of a geometric series.

A geometric series is a sum of values with increasing exponents, as such:

${\displaystyle \sum _{k=0}^{n}ar^{k}=ar^{0}+ar^{1}+ar^{2}+ar^{3}+\cdots +ar^{n}\,}$

In the equation above, notice that each term in the series has a coefficient value, a.
We can optionally factor out this coefficient, if the resulting equation is easier to work with:

${\displaystyle a\sum _{k=0}^{n}r^{k}=a\left(r^{0}+r^{1}+r^{2}+r^{3}+\cdots +r^{n}\,\right)}$

Once we have an infinite series in either of these formats, we can conveniently solve for the total sum of this series using the following equation:

${\displaystyle a\sum _{k=0}^{n}r^{k}=a{\frac {1-r^{n+1}}{1-r}}}$

Let's say that we start our series off at a number that isn't zero. Let's say for instance that we start our series off at n = 1 or n = 100. Let's see:

${\displaystyle \sum _{k=m}^{n}ar^{k}=ar^{m}+ar^{m+1}+ar^{m+2}+ar^{m+3}+\cdots +ar^{n}\,}$

We can generalize the sum of this series as follows:

[Geometric Series]

${\displaystyle \sum _{k=m}^{n}ar^{k}={\frac {a(r^{m}-r^{n+1})}{1-r}}}$

With that result out of the way, now we need to worry about making this series converge. In the above sum, we know that n is approaching infinity (because this is an infinite sum). Therefore, any term that contains the variable n is a matter of worry when we are trying to make this series converge. If we examine the above equation, we see that there is one term in the entire result with an n in it, and that term must vanish as n grows for the sum to converge:

${\displaystyle \lim _{n\to \infty }r^{n+1}=0}$

To satisfy this equation, we must satisfy the following condition:

[Geometric convergence condition]

${\displaystyle |r|<1}$

Therefore, we come to the final result: The geometric series converges if and only if the magnitude of r is less than one.

## The Star Transform

The Star Transform is defined as such:

[Star Transform]

${\displaystyle F^{*}(s)={\mathcal {L}}^{*}[f(t)]=\sum _{k=0}^{\infty }f(kT)e^{-skT}}$

The Star Transform depends on the sampling time T and is different for a single signal depending on the frequency at which the signal is sampled.
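The geometric-series identities above, which underpin the Star Transform sum, can be checked numerically (the values of a, r, m, and n below are arbitrary choices for the check):

```python
# Numeric check of the finite geometric sum formula
#   sum_{k=m}^{n} a*r^k = a*(r^m - r^(n+1)) / (1 - r)
# and of the infinite-sum limit when |r| < 1.
a, r, m, n = 3.0, 0.5, 2, 10

direct = sum(a * r ** k for k in range(m, n + 1))
closed = a * (r ** m - r ** (n + 1)) / (1 - r)

# With |r| < 1 the tail vanishes: sum_{k=0}^{inf} r^k = 1/(1-r) = 2.0 here
partial = sum(r ** k for k in range(200))

print(direct, closed)        # identical
print(partial, 1 / (1 - r))  # the partial sum has converged to 2.0
```

Trying the same check with |r| ≥ 1 shows the partial sums growing without bound, which is exactly the convergence condition stated above.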
Since the Star Transform is defined as an infinite series, it is important to note that some inputs to the Star Transform will not converge, and therefore some functions do not have a valid Star Transform. Also, it is important to note that the Star Transform may only be valid under a particular region of convergence. We will cover this topic more when we discuss the Z-transform.

### Star ↔ Laplace

(See also: Complex Analysis/Residue Theory.)

The Laplace Transform and the Star Transform are clearly related, because we obtained the Star Transform by using the Laplace Transform on a time-domain signal. However, the method to convert between the two results can be a slightly difficult one. To find the Star Transform of a Laplace function, we must take the residues of the Laplace equation, as such:

${\displaystyle X^{*}(s)=\sum {\bigg [}{\text{residues of }}X(\lambda ){\frac {1}{1-e^{-T(s-\lambda )}}}{\bigg ]}_{{\text{at poles of X}}(\lambda )}}$

This math is advanced for most readers, so we can also use an alternate method, as follows:

${\displaystyle X^{*}(s)={\frac {1}{T}}\sum _{m=-\infty }^{\infty }X(s+jm\omega _{s})+{\frac {x(0)}{2}}}$

Neither one of these methods is particularly easy, however, and therefore we will not discuss the relationship between the Laplace transform and the Star Transform any more than is absolutely necessary in this book. Suffice it to say, however, that the Laplace transform and the Star Transform are related mathematically.

### Star + Laplace

In some systems, we may have components that are both continuous and discrete in nature — for instance, if our feedback loop consists of an Analog-To-Digital converter, followed by a computer (for processing), and then a Digital-To-Analog converter. In this case, the computer is acting on a digital signal, but the rest of the system is acting on continuous signals.
Star transforms can interact with Laplace transforms in some of the following ways:

Given:

${\displaystyle Y(s)=X^{*}(s)H(s)}$

Then:

${\displaystyle Y^{*}(s)=X^{*}(s)H^{*}(s)}$

Given:

${\displaystyle Y(s)=X(s)H(s)}$

Then:

${\displaystyle Y^{*}(s)={\overline {XH}}^{*}(s)}$

${\displaystyle Y^{*}(s)\neq X^{*}(s)H^{*}(s)}$

Where ${\displaystyle {\overline {XH}}^{*}(s)}$ is the Star Transform of the product X(s)H(s).

### Convergence of the Star Transform

The Star Transform is defined as being an infinite series, so it is critically important that the series converge (not reach infinity), or else the result will be nonsensical. Since the Star Transform is a geometric series (for many input signals), we can use geometric series analysis to show whether the series converges, and even under what particular conditions the series converges. The restrictions on the star transform that allow it to converge are known as the region of convergence (ROC) of the transform. Typically a transform must be accompanied by the explicit mention of the ROC.

## The Z-Transform

Let us say now that we have a discrete data set that is sampled at regular intervals. We can call this set x[n]:

x[n] = [ x[0] x[1] x[2] x[3] x[4] ... ]

To make dealing with this set easier, we can utilize a special transform, called the Z-transform:

[Z Transform]

${\displaystyle X(z)={\mathcal {Z}}\left\{x[n]\right\}=\sum _{n=-\infty }^{\infty }x[n]z^{-n}}$

This is also known as the Bilateral Z-Transform. We will only discuss this version of the transform in this book. Z-Transform properties, and a table of common transforms, can be found in the Appendix.

Like the Star Transform, the Z Transform is defined as an infinite series, and therefore we need to worry about convergence. In fact, there are a number of instances that have identical Z-Transforms but different regions of convergence (ROC). Therefore, when talking about the Z transform, you must include the ROC, or you are missing valuable information.
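For a finite-length causal sequence the Z-transform sum can be evaluated directly. A small sketch (the sequence here is a truncated (1/2)^n, chosen for illustration):

```python
# Evaluate X(z) = sum_n x[n] * z^{-n} for a short causal sequence.
# Here x[n] = (1/2)^n truncated to four samples; for |z| > 1/2 the
# untruncated series would converge to 1 / (1 - (1/2) z^{-1}).
x = [1.0, 0.5, 0.25, 0.125]

def z_transform(seq, z):
    # x[n] is taken to be zero outside the list
    return sum(xn * z ** (-n) for n, xn in enumerate(seq))

print(z_transform(x, 2.0))      # 1.328125, already close to 4/3
print(1.0 / (1.0 - 0.5 / 2.0))  # 1.333..., the untruncated limit
```

Evaluating at |z| ≤ 1/2 instead would put the test point outside the ROC of the untruncated series, which is why stating the ROC matters.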
### Z Transfer Functions

Like the Laplace Transform, in the Z-domain we can use the input-output relationship of the system to define a transfer function. The transfer function in the Z domain operates exactly the same as the transfer function in the S Domain:

${\displaystyle H(z)={\frac {Y(z)}{X(z)}}}$

${\displaystyle {\mathcal {Z}}\{h[n]\}=H(z)}$

Similarly, the value h[n], which represents the response of the digital system, is known as the impulse response of the system. It is important to note, however, that the definition of an "impulse" is different in the analog and digital domains.

### Inverse Z Transform

The inverse Z Transform is defined by the following path integral:

[Inverse Z Transform]

${\displaystyle x[n]=Z^{-1}\{X(z)\}={\frac {1}{2\pi j}}\oint _{C}X(z)z^{n-1}dz\ }$

Where C is a counterclockwise closed path encircling the origin and entirely in the region of convergence (ROC). The contour or path, C, must encircle all of the poles of X(z).

This math is relatively advanced compared to some other material in this book, and therefore little or no further attention will be paid to solving the inverse Z-Transform in this manner. Z transform pairs are heavily tabulated in reference texts, so many readers can consider that to be the primary method of solving for inverse Z transforms. There are a number of Z-transform pairs available in table form in The Appendix.

### Final Value Theorem

Like the Laplace Transform, the Z Transform also has an associated final value theorem:

[Final Value Theorem (Z)]

${\displaystyle \lim _{n\to \infty }x[n]=\lim _{z\to 1}(z-1)X(z)}$

This equation can be used to find the steady-state response of a system, and also to calculate the steady-state error of the system.
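The final value theorem can be checked numerically for a concrete sequence. Taking x[n] = 1 − (1/2)^n (a hypothetical step-like response whose Z transform is z/(z−1) − z/(z−0.5)), (z−1)X(z) approaches the time-domain limit 1 as z → 1:

```python
# Final Value Theorem check with x[n] = 1 - (1/2)^n, for which
# X(z) = z/(z-1) - z/(z-0.5). Then (z-1)X(z) -> 1 as z -> 1,
# matching the time-domain limit of x[n].
def weighted_X(z):
    X = z / (z - 1.0) - z / (z - 0.5)
    return (z - 1.0) * X

x = lambda n: 1.0 - 0.5 ** n

print(weighted_X(1.0001))   # approaches 1 as z -> 1
print(x(60))                # the time-domain limit is also 1
```

The (z − 1) factor cancels the simple pole at z = 1 that every sequence with a nonzero final value carries, which is why the limit exists.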
## Star ↔ Z

The Z transform is related to the Star transform through the following change of variables:

${\displaystyle z=e^{sT}}$

Notice that in the Z domain, we don't maintain any information on the sampling period, so converting to the Z domain from a Star Transformed signal loses that information. When converting back to the star domain, however, the value for T can be reinserted into the equation, if it is still available.

Also of some importance is the fact that the Z transform is bilateral, while the Star Transform is unilateral. This means that we can only convert between the two transforms if the sampled signal is zero for all values of n < 0.

Because the two transforms are so closely related, it can be said that the Z transform is simply a notational convenience for the Star Transform. With that said, this book could easily use the Star Transform for all problems, and ignore the added burden of Z transform notation entirely. A common example of this is Richard Hamming's book "Numerical Methods for Scientists and Engineers", which uses the Fourier Transform for all problems, considering the Laplace, Star, and Z-Transforms to be merely notational conveniences. However, the Control Systems wikibook is under the impression that the correct utilization of different transforms can make problems easier to solve, and we will therefore use a multi-transform approach.

### Z plane

Note: The lower-case z is the name of the variable, and the upper-case Z is the name of the Transform and the plane.

z is a complex variable with a real part and an imaginary part. In other words, we can define z as such:

${\displaystyle z=\operatorname {Re} (z)+j\operatorname {Im} (z)}$

Since z can be broken down into two independent components, it often makes sense to graph the variable z on the Z-plane. In the Z-plane, the horizontal axis represents the real part of z, and the vertical axis represents the imaginary part of z.
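Under the change of variables z = e^{sT}, a point s = σ + jω maps to a Z-plane point with magnitude e^{σT} and angle ωT (a relationship derived formally in the next section). A quick numeric check with arbitrary sample values:

```python
import cmath
import math

# Check the mapping z = e^{sT}: magnitude e^{sigma*T}, angle omega*T.
# The values of sigma, omega, and T below are arbitrary.
sigma, omega, T = -0.5, 3.0, 0.2

z = cmath.exp(complex(sigma, omega) * T)

print(abs(z), math.exp(sigma * T))   # magnitudes agree
print(cmath.phase(z), omega * T)     # angles agree (0.6 rad, inside (-pi, pi])
```

Note that points with σ < 0 (stable left-half-plane poles) land strictly inside the unit circle, since e^{σT} < 1.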
Notice also that if we define z in terms of the star-transform relation:

${\displaystyle z=e^{sT}}$

we can separate out s into real and imaginary parts:

${\displaystyle s=\sigma +j\omega }$

We can plug this into our equation for z:

${\displaystyle z=e^{(\sigma +j\omega )T}=e^{\sigma T}e^{j\omega T}}$

Through Euler's formula, we can separate out the complex exponential as such:

${\displaystyle z=e^{\sigma T}(\cos(\omega T)+j\sin(\omega T))}$

If we define two new variables, M and φ:

${\displaystyle M=e^{\sigma T}}$

${\displaystyle \phi =\omega T}$

We can write z in terms of M and φ, in the form of Euler's equation:

${\displaystyle z=M\cos(\phi )+jM\sin(\phi )}$

This is clearly a polar representation of z, with the magnitude of the polar function (M) based on the real part of s, and the angle of the polar function (φ) based on the imaginary part of s.

### Region of Convergence

To best teach the region of convergence (ROC) for the Z-transform, we will do a quick example. We have the following discrete series of a decaying exponential:

${\displaystyle x[n]=e^{-2n}u[n]}$

Now, we can plug this function into the Z transform equation:

${\displaystyle X(z)={\mathcal {Z}}[x[n]]=\sum _{n=-\infty }^{\infty }e^{-2n}u[n]z^{-n}}$

Note that we can remove the unit step function, and change the limits of the sum:

${\displaystyle X(z)=\sum _{n=0}^{\infty }e^{-2n}z^{-n}}$

This is because the series is 0 for all n < 0.
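Before carrying the algebra further, this sum can be sanity-checked numerically. At a test point such as z = 0.5 (well outside the ROC boundary that the analysis below will produce), a truncated sum settles to the closed-form value:

```python
import math

# Numeric check of X(z) = sum_{n>=0} e^{-2n} z^{-n} at z = 0.5.
# The common ratio is e^{-2}/z ~ 0.27, so the series converges quickly.
z = 0.5

partial = sum((math.exp(-2.0) / z) ** n for n in range(200))
closed = 1.0 / (1.0 - math.exp(-2.0) / z)

print(partial, closed)   # agree to machine precision
```

Repeating the check at a point with |z| below e^{-2} would make the common ratio exceed 1 and the partial sums diverge.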
If we try to combine the n terms, we get the following result:

${\displaystyle X(z)=\sum _{n=0}^{\infty }(e^{2}z)^{-n}}$

Once we have our series in this form, we can break it down to look like our geometric series:

${\displaystyle a=1}$

${\displaystyle r=(e^{2}z)^{-1}}$

And finally, we can find our final value, using the geometric series formula:

${\displaystyle a\sum _{k=0}^{n}r^{k}=a{\frac {1-r^{n+1}}{1-r}}=1{\frac {1-((e^{2}z)^{-1})^{n+1}}{1-(e^{2}z)^{-1}}}}$

Again, we know that to make this series converge, we need to make the magnitude of r less than 1:

${\displaystyle |(e^{2}z)^{-1}|=\left|{\frac {1}{e^{2}z}}\right|<1}$

${\displaystyle |e^{2}z|>1}$

And finally we obtain the region of convergence for this Z-transform:

${\displaystyle |z|>{\frac {1}{e^{2}}}}$

### Laplace ↔ Z

There are no easy, direct ways to convert between the Laplace transform and the Z transform directly. Nearly all methods of conversion reproduce some aspects of the original equation faithfully, and incorrectly reproduce other aspects. For some of the main mapping techniques between the two, see the Z Transform Mappings Appendix.

However, there are some topics that we need to discuss. First and foremost, conversions between the Laplace domain and the Z domain are not linear; this leads to some of the following problems:

1. ${\displaystyle {\mathcal {L}}[G(z)H(z)]\neq G(s)H(s)}$
2. ${\displaystyle {\mathcal {Z}}[G(s)H(s)]\neq G(z)H(z)}$

This means that when we combine two functions in one domain multiplicatively, we must find a combined transform in the other domain. Here is how we denote this combined transform:

${\displaystyle {\mathcal {Z}}[G(s)H(s)]={\overline {GH}}(z)}$

Notice that we use a horizontal bar over top of the multiplied functions, to denote that we took the transform of the product, not of the individual pieces. However, if we have a system that incorporates a sampler, we can show a simple result.
If we have the following format:

${\displaystyle Y(s)=X^{*}(s)H(s)}$

Then we can put everything in terms of the Star Transform:

${\displaystyle Y^{*}(s)=X^{*}(s)H^{*}(s)}$

and once we are in the star domain, we can do a direct change of variables to reach the Z domain:

${\displaystyle Y^{*}(s)=X^{*}(s)H^{*}(s)\to Y(z)=X(z)H(z)}$

Note that we can only make this equivalence relationship if the system incorporates an ideal sampler, and therefore one of the multiplicative terms is in the star domain.

### Example

Let's say that we have the following equation in the Laplace domain:

${\displaystyle Y(s)=A^{*}(s)B(s)+C(s)D(s)}$

And because we have a discrete sampler in the system, we want to analyze it in the Z domain. We can break up this equation into two separate terms, and transform each:

${\displaystyle {\mathcal {Z}}[A^{*}(s)B(s)]\to {\mathcal {Z}}[A^{*}(s)B^{*}(s)]=A(z)B(z)}$

And

${\displaystyle {\mathcal {Z}}[C(s)D(s)]={\overline {CD}}(z)}$

And when we add them together, we get our result:

${\displaystyle Y(z)=A(z)B(z)+{\overline {CD}}(z)}$

## Z ↔ Fourier

By substituting variables, we can relate the Star transform to the Fourier Transform as well:

${\displaystyle e^{sT}=e^{j\omega }}$

${\displaystyle e^{(\sigma +j\omega )T}=e^{j\omega }}$

If we assume that T = 1, we can relate the two equations together by setting the real part of s to zero. Notice that the relationship between the Laplace and Fourier transforms is mirrored here, where the Fourier transform is the Laplace transform with no real part to the transform variable.

There are a number of discrete-time variants of the Fourier transform as well, which are not discussed in this book. For more information about these variants, see Digital Signal Processing.

## Reconstruction

Some of the easiest reconstruction circuits are called "Holding circuits".
Once a signal has been transformed using the Star Transform (passed through an ideal sampler), the signal must be "reconstructed" using one of these hold systems (or an equivalent) before it can be analyzed in a Laplace-domain system.

If we have a sampled signal denoted by the Star Transform ${\displaystyle X^{*}(s)}$, we want to reconstruct that signal into a continuous-time waveform, so that we can manipulate it using Laplace-transform techniques. Let's say that we have the sampled input signal, a reconstruction circuit denoted G(s), and an output denoted with the Laplace-transform variable Y(s). We can show the relationship as follows:

${\displaystyle Y(s)=X^{*}(s)G(s)}$

Reconstruction circuits, then, are physical devices that we can use to convert a digital, sampled signal into a continuous-time domain, so that we can take the Laplace transform of the output signal.

### Zero order Hold

(Figure: Zero-Order Hold impulse response)

A zero-order hold circuit is a circuit that essentially inverts the sampling process: The value of the sampled signal at time t is held on the output for T time. The output waveform of a zero-order hold circuit therefore looks like a staircase approximation to the original waveform. The transfer function for a zero-order hold circuit, in the Laplace domain, is written as such:

[Zero Order Hold]

${\displaystyle G_{h0}={\frac {1-e^{-Ts}}{s}}}$

The zero-order hold is the simplest reconstruction circuit, and (like the rest of the circuits on this page) assumes zero processing delay in converting from digital to analog.

(Figure: A continuous input signal (gray) and the sampled signal with a zero-order hold (red))

### First Order Hold

(Figure: Impulse response of a first-order hold)

The zero-order hold creates a step output waveform, but this isn't always the best way to reconstruct the circuit.
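The staircase behavior of the zero-order hold can be sketched in a few lines (the signal and sample rate below are arbitrary choices for illustration):

```python
import math

# Minimal staircase (zero-order hold) reconstruction: the output at time
# t is simply the most recent sample value.
T = 0.25   # assumed sample period, seconds
samples = [math.sin(2.0 * math.pi * 0.5 * k * T) for k in range(9)]

def zoh(t):
    k = int(t // T)        # index of the most recent sample
    return samples[k]

# The held value is constant across each sample interval:
print(zoh(0.26), zoh(0.49), samples[1])   # all three are equal
```
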
Instead, the First-Order Hold circuit takes the derivative of the waveform at the time t, and uses that derivative to make a guess as to where the output waveform is going to be at time (t + T). The first-order hold circuit then "draws a line" from the current position to the expected future position, as the output of the waveform.

[First Order Hold]

${\displaystyle G_{h1}={\frac {1+Ts}{T}}\left[{\frac {1-e^{-Ts}}{s}}\right]^{2}}$

Keep in mind, however, that the next value of the signal will probably not be the same as the expected value of the next data point, and therefore the first-order hold may have a number of discontinuities.

(Figure: An input signal (grey) and the first-order hold circuit output (red))

### Fractional Order Hold

The zero-order hold outputs the current value onto the output, and keeps it level throughout the entire bit time. The first-order hold uses the function derivative to predict the next value, and produces a series of ramp outputs to produce a fluctuating waveform. Sometimes, however, neither of these solutions is desired, and therefore we have a compromise: the Fractional-Order Hold. The fractional-order hold acts like a mixture of the other two holding circuits, and takes a fractional number k as an argument. Notice that k must be between 0 and 1 for this circuit to work correctly.

[Fractional Order Hold]

${\displaystyle G_{hk}=(1-ke^{-Ts}){\frac {1-e^{-Ts}}{s}}+{\frac {k}{Ts^{2}}}(1-e^{-Ts})^{2}}$

This circuit is more complicated than either of the other hold circuits, but sometimes the added complexity is worth it if we get better performance from our reconstruction circuit.

### Other Reconstruction Circuits

(Figure: Impulse response of a linear-approximation circuit)

Another type of circuit that can be used is a linear approximation circuit.

(Figure: An input signal (grey) and the output signal through a linear approximation circuit)

# System Delays

## Delays

A system can be built with an inherent delay.
Delays are units that cause a time-shift in the input signal, but that don't affect the signal characteristics. An ideal delay is a delay system that doesn't affect the signal characteristics at all, and that delays the signal for an exact amount of time. Some delays, like processing delays or transmission delays, are unintentional. Other delays, however, such as synchronization delays, are an integral part of a system. This chapter will talk about how delays are utilized and represented in the Laplace Domain. Once we represent a delay in the Laplace domain, it is an easy matter, through change of variables, to express delays in other domains.

### Ideal Delays

An ideal delay causes the input function to be shifted forward in time by a certain specified amount of time. Systems with an ideal delay cause the system output to be delayed by a finite, predetermined amount of time.

## Time Shifts

Let's say that we have a function in time that is time-shifted by a certain constant time period T. For convenience, we will denote this function as x(t - T). Now, we can show that the Laplace transform of x(t - T) is the following:

${\displaystyle {\mathcal {L}}\{x(t-T)\}\Leftrightarrow e^{-sT}X(s)}$

What this demonstrates is that time-shifts in the time domain become exponentials in the complex Laplace domain.

### Shifts in the Z-Domain

Since we know the following general relationship between the Z Transform and the Star Transform:

${\displaystyle z\Leftrightarrow e^{sT}}$

We can show what a time shift in a discrete time domain becomes in the Z domain:

${\displaystyle x((n-n_{s})\cdot T)\equiv x[n-n_{s}]\Leftrightarrow z^{-n_{s}}X(z)}$

## Delays and Stability

A time shift in the time domain becomes an exponential factor in the Laplace domain. This would seem to show that a time shift can have an effect on the stability of a system, and occasionally can cause a system to become unstable.
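The Z-domain shift relationship above can be verified numerically on a finite causal sequence (the sequence and evaluation point below are arbitrary):

```python
# Check the Z-domain shift property Z{ x[n - ns] } = z^{-ns} X(z)
# for a finite causal sequence.
x = [3.0, 1.0, 4.0, 1.0, 5.0]
ns = 2
shifted = [0.0] * ns + x     # x delayed by ns samples

def Z(seq, z):
    # X(z) = sum_n x[n] * z^{-n}, with x[n] = 0 outside the list
    return sum(v * z ** (-n) for n, v in enumerate(seq))

z0 = 1.5
print(Z(shifted, z0))             # equals z0**(-ns) * Z(x, z0)
print(z0 ** (-ns) * Z(x, z0))
```
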
We define a new parameter called the time margin as the amount of time that we can shift an input function before the system becomes unstable. If the system can survive any arbitrary time shift without going unstable, we say that the time margin of the system is infinite.

## Delay Margin

When speaking of sinusoidal signals, it doesn't make sense to talk about "time shifts", so instead we talk about "phase shifts". Therefore, it is also common to refer to the time margin as the phase margin of the system. The phase margin denotes the amount of phase shift that we can apply to the system input before the system goes unstable. We denote the phase margin for a system with a lowercase Greek letter φ (phi). Phase margin is defined as such for a second-order system:

[Delay Margin]

${\displaystyle \phi _{m}=\tan ^{-1}\left[{\frac {2\zeta }{({\sqrt {4\zeta ^{4}+1}}-2\zeta ^{2})^{1/2}}}\right]}$

Oftentimes, the phase margin is approximated by the following relationship:

[Delay Margin (approx)]

${\displaystyle \phi _{m}\approx 100\zeta }$

The Greek letter zeta (ζ) is a quantity called the damping ratio, and we discuss this quantity in more detail in the next chapter.

## Transform-Domain Delays

The ordinary Z-Transform does not account for a system which experiences an arbitrary time delay, or a processing delay. The Z-Transform can, however, be modified to account for an arbitrary delay. This new version of the Z-transform is frequently called the Modified Z-Transform, although in some literature (notably in Wikipedia), it is known as the Advanced Z-Transform.

### Delayed Star Transform

To demonstrate the concept of an ideal delay, we will show how the star transform responds to a time-shifted input with a specified delay of time T. The function ${\displaystyle X^{*}(s,\Delta )}$ is the delayed star transform with a delay parameter Δ.
The delayed star transform is defined in terms of the star transform as such:

[Delayed Star Transform]

${\displaystyle X^{*}(s,\Delta )={\mathcal {L}}^{*}\left\{x(t-\Delta )\right\}=X(s)e^{-\Delta Ts}}$

As we can see, in the star transform, a time-delayed signal is multiplied by a decaying exponential value in the transform domain.

### Delayed Z-Transform

Since we know that the Star Transform is related to the Z Transform through the following change of variables:

${\displaystyle z=e^{sT}}$

We can interpret the above result to show how the Z Transform responds to a delay of one sample period T:

${\displaystyle {\mathcal {Z}}\left\{x[n-1]\right\}=X(z)z^{-1}}$

This result is expected.

Now that we know how the Z transform responds to time shifts, it is often useful to generalize this behavior into a form known as the Delayed Z-Transform. The Delayed Z-Transform is a function of two variables, z and Δ, and is defined as such:

${\displaystyle X(z,\Delta )={\mathcal {Z}}\left\{x(t-\Delta )\right\}={\mathcal {Z}}\left\{X(s)e^{-\Delta Ts}\right\}}$

And finally:

[Delayed Z Transform]

${\displaystyle {\mathcal {Z}}(x[n],\Delta )=X(z,\Delta )=\sum _{n=-\infty }^{\infty }x[n-\Delta ]z^{-n}}$

## Modified Z-Transform

The Delayed Z-Transform has some uses, but mathematicians and engineers have decided that a more useful version of the transform was needed. The new version of the Z-Transform, which is similar to the Delayed Z-transform with a change of variables, is known as the Modified Z-Transform.
The Modified Z-Transform is defined in terms of the delayed Z transform as follows:

${\displaystyle X(z,m)=X(z,\Delta ){\big |}_{\Delta \to 1-m}={\mathcal {Z}}\left\{X(s)e^{-\Delta Ts}\right\}{\big |}_{\Delta \to 1-m}}$

And it is defined explicitly:

[Modified Z Transform]

${\displaystyle X(z,m)={\mathcal {Z}}(x[n],m)=\sum _{n=-\infty }^{\infty }x[n+m-1]z^{-n}}$

# Poles and Zeros

## Poles and Zeros

Poles and zeros of a transfer function are the frequencies for which the denominator and numerator of the transfer function, respectively, become zero. The values of the poles and the zeros of a system determine whether the system is stable, and how well the system performs. Control systems, in the simplest sense, can be designed simply by assigning specific values to the poles and zeros of the system.

Physically realizable control systems must have a number of poles greater than or equal to the number of zeros. Systems that satisfy this relationship are called Proper. We will elaborate on this below.

## Time-Domain Relationships

Let's say that we have a transfer function with 3 poles:

${\displaystyle H(s)={\frac {a}{(s-l)(s-m)(s-n)}}}$

The poles are located at s = l, m, n. Now, we can use partial fraction expansion to separate out the transfer function:

${\displaystyle H(s)={\frac {a}{(s-l)(s-m)(s-n)}}={\frac {A}{s-l}}+{\frac {B}{s-m}}+{\frac {C}{s-n}}}$

Using the inverse transform on each of these component fractions (looking up the transforms in our table), we get the following:

${\displaystyle h(t)=Ae^{lt}u(t)+Be^{mt}u(t)+Ce^{nt}u(t)}$

But, since s is a complex variable, l, m and n can all potentially be complex numbers, with a real part (σ) and an imaginary part (jω).
If we just look at the first term:

${\displaystyle Ae^{lt}u(t)=Ae^{(\sigma _{l}+j\omega _{l})t}u(t)=Ae^{\sigma _{l}t}e^{j\omega _{l}t}u(t)}$

Using Euler's Equation on the imaginary exponent, we get:

${\displaystyle Ae^{\sigma _{l}t}[\cos(\omega _{l}t)+j\sin(\omega _{l}t)]u(t)}$

If a complex pole is present, it is always accompanied by another pole that is its complex conjugate. The imaginary parts of their time-domain representations thus cancel, and we are left with two of the same real parts. Assuming that the complex conjugate pole of the first term is present, we can take 2 times the real part of this equation, and we are left with our final result:

${\displaystyle 2Ae^{\sigma _{l}t}\cos(\omega _{l}t)u(t)}$

We can see from this equation that every pole will have an exponential part, and a sinusoidal part to its response. We can also go about constructing some rules:

1. If σl = 0, the response of the pole is a perfect sinusoid (an oscillator).
2. If ωl = 0, the response of the pole is a perfect exponential.
3. If σl < 0, the exponential part of the response will decay towards zero.
4. If σl > 0, the exponential part of the response will rise towards infinity.

From the last two rules, we can see that all poles of the system must have negative real parts, and therefore they must all have the form (s + l) for the system to be stable. We will discuss stability in later chapters.

## What are Poles and Zeros

Let's say we have a transfer function defined as a ratio of two polynomials:

${\displaystyle H(s)={N(s) \over D(s)}}$

Where N(s) and D(s) are simple polynomials. Zeros are the roots of N(s) (the numerator of the transfer function), obtained by setting N(s) = 0 and solving for s. The polynomial order of a function is the value of the highest exponent in the polynomial. Poles are the roots of D(s) (the denominator of the transfer function), obtained by setting D(s) = 0 and solving for s.
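The cancellation of imaginary parts described above for a conjugate pole pair can be checked numerically. A small sketch, with an arbitrary real residue A and pole location σ + jω:

```python
import cmath
import math

# Check that a conjugate pole pair with a real residue A produces a
# purely real response:
#   A e^{l t} + A e^{l* t} = 2 A e^{sigma t} cos(omega t)
A, sigma, omega = 1.5, -0.3, 2.0   # arbitrary values for the check
l = complex(sigma, omega)

t = 0.7
pair_sum = A * cmath.exp(l * t) + A * cmath.exp(l.conjugate() * t)
real_form = 2.0 * A * math.exp(sigma * t) * math.cos(omega * t)

print(pair_sum.imag)              # ~0: the imaginary parts cancel
print(pair_sum.real, real_form)   # equal
```

With σ < 0 here, repeating the evaluation at larger t shows the envelope 2Ae^{σt} decaying, consistent with rule 3 above.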
Because of our restriction above, that a transfer function must not have more zeros than poles, we can state that the polynomial order of D(s) must be greater than or equal to the polynomial order of N(s).

### Example

Consider the transfer function:

${\displaystyle H(s)={s+2 \over s^{2}+0.25}}$

We define N(s) and D(s) to be the numerator and denominator polynomials, as such:

${\displaystyle N(s)=s+2}$

${\displaystyle D(s)=s^{2}+0.25}$

We set N(s) to zero, and solve for s:

${\displaystyle N(s)=s+2=0\to s=-2}$

So we have a zero at s → -2. Now, we set D(s) to zero, and solve for s to obtain the poles of the equation:

${\displaystyle D(s)=s^{2}+0.25=0\to s=+i{\sqrt {0.25}},-i{\sqrt {0.25}}}$

And simplifying this gives us poles at: -i/2, +i/2. Remember, s is a complex variable, and it can therefore take imaginary and real values.

## Effects of Poles and Zeros

As s approaches a zero, the numerator of the transfer function (and therefore the transfer function itself) approaches the value 0. When s approaches a pole, the denominator of the transfer function approaches zero, and the value of the transfer function approaches infinity. An output value of infinity should raise an alarm bell for people who are familiar with BIBO stability. We will discuss this later.

As we have seen above, the locations of the poles, and the values of the real and imaginary parts of the pole, determine the response of the system. Real parts correspond to exponentials, and imaginary parts correspond to sinusoidal values. Addition of poles to the transfer function has the effect of pulling the root locus to the right, making the system less stable. Addition of zeros to the transfer function has the effect of pulling the root locus to the left, making the system more stable.
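As a numeric companion to the example above, the zero and poles of H(s) = (s + 2)/(s² + 0.25) can be recomputed directly:

```python
import cmath

# Zero and poles of H(s) = (s + 2) / (s^2 + 0.25)
# Zero: root of N(s) = s + 2.  Poles: roots of D(s) = s^2 + 0.25.
zero = -2.0
pole = cmath.sqrt(complex(-0.25, 0.0))   # +0.5j; the other pole is -0.5j
poles = [pole, -pole]

# The numerator vanishes at the zero, the denominator at each pole:
print(zero + 2.0)                          # 0.0
print([abs(p * p + 0.25) for p in poles])  # both ~0
```

Since both poles sit on the imaginary axis (σ = 0), rule 1 from the time-domain discussion predicts a pure, undamped oscillation at ω = 0.5 rad/s.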
## Second-Order Systems The canonical form for a second order system is as follows: [Second-order transfer function] ${\displaystyle H(s)={\frac {K\omega ^{2}}{s^{2}+2\zeta \omega s+\omega ^{2}}}}$ Where K is the system gain, ζ is called the damping ratio of the function, and ω is called the natural frequency of the system. If ζ and ω are known exactly for a second-order system, the time response can be plotted easily and stability can easily be checked. ### Damping Ratio The damping ratio of a second-order system, denoted with the Greek letter zeta (ζ), is a real number that defines the damping properties of the system. More damping has the effect of less percent overshoot and slower settling time. Damping is the inherent ability of the system to oppose the oscillatory nature of the system's transient response. Larger values of the damping coefficient or damping factor produce transient responses that are less oscillatory. ### Natural Frequency The natural frequency is occasionally written with a subscript: ${\displaystyle \omega \to \omega _{n}}$ We will omit the subscript when it is clear that we are talking about the natural frequency, but we will include the subscript when we are using other values for the variable ω. Also, ${\displaystyle \omega ~=~\omega _{n}}$ when ${\displaystyle \zeta ~=0}$. ## Higher-Order Systems Modern Controls The modern method of controls uses systems of special state-space equations to model and manipulate systems. The state variable model is broad enough to be useful in describing a wide range of systems, including systems that cannot be adequately described using the Laplace Transform. These chapters will require the reader to have a solid background in linear algebra and multivariable calculus. ## Digital Systems Digital systems, expressed previously as difference equations or Z-Transform transfer functions, can also be used with the state-space representation.
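The pole locations of the canonical second-order form can be computed directly from ζ and ω. A small sketch, using illustrative values of ζ and ω (not from the text): for an underdamped system (0 < ζ < 1), the poles are the complex pair s = −ζω ± jω√(1 − ζ²), and their magnitude equals the natural frequency.

```python
import numpy as np

# Poles of the canonical denominator s^2 + 2*zeta*w*s + w^2,
# for the illustrative values zeta = 0.5 (underdamped), w = 2 rad/s.
zeta, w = 0.5, 2.0
poles = np.roots([1, 2 * zeta * w, w ** 2])

expected_real = -zeta * w                   # -1.0
expected_imag = w * np.sqrt(1 - zeta ** 2)  # ~1.732

assert np.allclose(poles.real, expected_real)
assert np.allclose(np.abs(poles.imag), expected_imag)
# Pole magnitude equals the natural frequency:
assert np.allclose(np.abs(poles), w)
```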
Also, all the same techniques for dealing with analog systems can be applied to digital systems, with only minor changes. ## Digital Systems For digital systems, we can write similar equations, using discrete data sets: ${\displaystyle x[k+1]=Ax[k]+Bu[k]}$ ${\displaystyle y[k]=Cx[k]+Du[k]}$ ### Zero-Order Hold Derivation If we have a continuous-time state equation: ${\displaystyle x'(t)=Ax(t)+Bu(t)}$ We can derive the digital version of this equation that we discussed above. We take the Laplace transform of our equation: ${\displaystyle X(s)=(sI-A)^{-1}Bu(s)+(sI-A)^{-1}x(0)}$ Now, taking the inverse Laplace transform gives us our time-domain system, keeping in mind that the inverse Laplace transform of the (sI - A) term is our state-transition matrix, Φ: ${\displaystyle x(t)={\mathcal {L}}^{-1}(X(s))=\Phi (t-t_{0})x(0)+\int _{t_{0}}^{t}\Phi (t-\tau )Bu(\tau )d\tau }$ Now, we apply a zero-order hold on our input, to make the system digital. Notice that we set our start time t0 = kT, because we are only interested in the behavior of our system during a single sample period: ${\displaystyle u(t)=u(kT),kT\leq t\leq (k+1)T}$ ${\displaystyle x(t)=\Phi (t,kT)x(kT)+\int _{kT}^{t}\Phi (t,\tau )Bd\tau u(kT)}$ We were able to remove u(kT) from the integral because it did not rely on τ. We now define a new function, Γ, as follows: ${\displaystyle \Gamma (t,t_{0})=\int _{t_{0}}^{t}\Phi (t,\tau )Bd\tau }$ Inserting this new expression into our equation, and setting t = (k + 1)T gives us: ${\displaystyle x((k+1)T)=\Phi ((k+1)T,kT)x(kT)+\Gamma ((k+1)T,kT)u(kT)}$ Now Φ(T) and Γ(T) are constant matrices, and we can give them new names. 
The d subscript denotes that they are digital versions of the coefficient matrices: ${\displaystyle A_{d}=\Phi ((k+1)T,kT)}$ ${\displaystyle B_{d}=\Gamma ((k+1)T,kT)}$ We can use these values in our state equation, converting to our bracket notation instead: ${\displaystyle x[k+1]=A_{d}x[k]+B_{d}u[k]}$ ## Relating Continuous and Discrete Systems Continuous and discrete systems that perform similarly can be related together through a set of relationships. It should come as no surprise that a discrete system and a continuous system will have different characteristics and different coefficient matrices. If we consider that a discrete system is the same as a continuous system, except that it is sampled with a sampling time T, then the relationships below will hold. The process of converting an analog system for use with digital hardware is called discretization. We've given a basic introduction to discretization already, but we will discuss it in more detail here. ### Discrete Coefficient Matrices Of primary importance in discretization is the computation of the associated coefficient matrices from the continuous-time counterparts. 
If we have the continuous system (A, B, C, D), we can use the relationship t = kT to transform the state-space solution into a sampled system: ${\displaystyle x(kT)=e^{AkT}x(0)+\int _{0}^{kT}e^{A(kT-\tau )}Bu(\tau )d\tau }$ ${\displaystyle x[k]=e^{AkT}x[0]+\int _{0}^{kT}e^{A(kT-\tau )}Bu(\tau )d\tau }$ Now, if we want to analyze the k+1 term, we can solve the equation again: ${\displaystyle x[k+1]=e^{A(k+1)T}x[0]+\int _{0}^{(k+1)T}e^{A((k+1)T-\tau )}Bu(\tau )d\tau }$ Separating out the variables, and breaking the integral into two parts gives us: ${\displaystyle x[k+1]=e^{AT}e^{AkT}x[0]+\int _{0}^{kT}e^{AT}e^{A(kT-\tau )}Bu(\tau )d\tau +\int _{kT}^{(k+1)T}e^{A(kT+T-\tau )}Bu(\tau )d\tau }$ If we substitute the new variable α = (k + 1)T − τ into the last integral, and use the relationship: ${\displaystyle e^{AkT}x[0]=x[k]}$ We get our final result: ${\displaystyle x[k+1]=e^{AT}x[k]+\left(\int _{0}^{T}e^{A\alpha }d\alpha \right)Bu[k]}$ Comparing this equation to our regular solution gives us a set of relationships for converting the continuous-time system into a discrete-time system. Here, we will use "d" subscripts to denote the system matrices of a discrete system, and we will use a "c" subscript to denote the system matrices of a continuous system. Matrix Dimensions: A: p × p B: p × q C: r × p D: r × q ${\displaystyle A_{d}=e^{A_{c}T}}$ ${\displaystyle B_{d}=\int _{0}^{T}e^{A_{c}\tau }d\tau B_{c}}$ ${\displaystyle C_{d}=C_{c}}$ ${\displaystyle D_{d}=D_{c}}$ This operation can be performed using this MATLAB command: c2d If the Ac matrix is nonsingular, then we can find its inverse and instead define Bd as: ${\displaystyle B_{d}=A_{c}^{-1}(A_{d}-I)B_{c}}$ The differences in the discrete and continuous matrices are due to the fact that the underlying equations that describe our systems are different. Continuous-time systems are represented by linear differential equations, while the digital systems are described by difference equations.
High order terms in a difference equation are delayed copies of the signals, while high order terms in the differential equations are derivatives of the analog signal. If we have a complicated analog system, and we would like to implement that system in a digital computer, we can use the above transformations to make our matrices conform to the new paradigm. ### Notation Because the coefficient matrices for the discrete systems are computed differently from the continuous-time coefficient matrices, and because the matrices technically represent different things, it is not uncommon in the literature to denote these matrices with different variables. For instance, the following variables are used in place of A and B frequently: ${\displaystyle \Omega =A_{d}}$ ${\displaystyle R=B_{d}}$ These substitutions would give us a system defined by the ordered quadruple (Ω, R, C, D) for representing our equations. As a matter of notational convenience, we will use the letters A and B to represent these matrices throughout the rest of this book. 
## Converting Difference Equations Now, let's say that we have a 3rd order difference equation, that describes a discrete-time system: ${\displaystyle y[n+3]+a_{2}y[n+2]+a_{1}y[n+1]+a_{0}y[n]=u[n]}$ From here, we can define a set of discrete state variables x in the following manner: ${\displaystyle x_{1}[n]=y[n]}$ ${\displaystyle x_{2}[n]=y[n+1]}$ ${\displaystyle x_{3}[n]=y[n+2]}$ Which in turn gives us 3 first-order difference equations: ${\displaystyle x_{1}[n+1]=y[n+1]=x_{2}[n]}$ ${\displaystyle x_{2}[n+1]=y[n+2]=x_{3}[n]}$ ${\displaystyle x_{3}[n+1]=y[n+3]}$ Again, we say that matrix x is a vertical vector of the 3 state variables we have defined, and we can write our state equation in the same form as if it were a continuous-time system: ${\displaystyle x[n+1]={\begin{bmatrix}0&1&0\\0&0&1\\-a_{0}&-a_{1}&-a_{2}\end{bmatrix}}x[n]+{\begin{bmatrix}0\\0\\1\end{bmatrix}}u[n]}$ ${\displaystyle y[n]={\begin{bmatrix}1&0&0\end{bmatrix}}x[n]}$ ## Solving for x[n] We can find a general time-invariant solution for the discrete time difference equations. Let us start working up a pattern. 
We know the discrete state equation: ${\displaystyle x[n+1]=Ax[n]+Bu[n]}$ Starting from time n = 0, we can start to create a pattern: ${\displaystyle x[1]=Ax[0]+Bu[0]}$ ${\displaystyle x[2]=Ax[1]+Bu[1]=A^{2}x[0]+ABu[0]+Bu[1]}$ ${\displaystyle x[3]=Ax[2]+Bu[2]=A^{3}x[0]+A^{2}Bu[0]+ABu[1]+Bu[2]}$ With a little algebraic trickery, we can reduce this pattern to a single equation: [General State Equation Solution] ${\displaystyle x[n]=A^{n}x[n_{0}]+\sum _{m=0}^{n-1}A^{n-1-m}Bu[m]}$ Substituting this result into the output equation gives us: [General Output Equation Solution] ${\displaystyle y[n]=CA^{n}x[n_{0}]+\sum _{m=0}^{n-1}CA^{n-1-m}Bu[m]+Du[n]}$ ## Time Variant Solutions If the system is time-variant, we have a general solution that is similar to the continuous-time case: ${\displaystyle x[n]=\phi [n,n_{0}]x[n_{0}]+\sum _{m=n_{0}}^{n-1}\phi [n,m+1]B[m]u[m]}$ ${\displaystyle y[n]=C[n]\phi [n,n_{0}]x[n_{0}]+C[n]\sum _{m=n_{0}}^{n-1}\phi [n,m+1]B[m]u[m]+D[n]u[n]}$ Where φ, the state transition matrix, is defined in a similar manner to the state-transition matrix in the continuous case. However, some of the properties in the discrete time are different. For instance, the inverse of the state-transition matrix does not need to exist, and in many systems it does not exist. ### State Transition Matrix The discrete time state transition matrix is the unique solution of the equation: ${\displaystyle \phi [k+1,k_{0}]=A[k]\phi [k,k_{0}]}$ Where the following restriction must hold: ${\displaystyle \phi [k_{0},k_{0}]=I}$ From this definition, an obvious way to calculate this state transition matrix presents itself: ${\displaystyle \phi [k,k_{0}]=A[k-1]A[k-2]A[k-3]\cdots A[k_{0}]}$ Or, ${\displaystyle \phi [k,k_{0}]=\prod _{m=1}^{k-k_{0}}A[k-m]}$ ## MATLAB Calculations MATLAB is a computer program, and therefore calculates all systems using digital methods. The MATLAB function lsim is used to simulate a continuous system with a specified input. 
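The companion-form construction and the general solution above can both be checked numerically: build the 3rd-order companion matrices, iterate x[n+1] = Ax[n] + Bu[n] directly, and compare against the closed-form x[n] = Aⁿx[0] + Σ A^(n−1−m)Bu[m]. The coefficient and input values below are illustrative only.

```python
import numpy as np

# Companion form of y[n+3] + a2*y[n+2] + a1*y[n+1] + a0*y[n] = u[n],
# with illustrative coefficients.
a0, a1, a2 = 0.1, 0.2, 0.3
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-a0, -a1, -a2]])
B = np.array([0.0, 0.0, 1.0])
C = np.array([1.0, 0.0, 0.0])

rng = np.random.default_rng(0)
u = rng.standard_normal(10)
x0 = np.array([1.0, 0.0, 0.0])
n = 10

# Direct recursion of the state equation:
x = x0.copy()
for m in range(n):
    x = A @ x + B * u[m]

# Closed-form solution at step n:
x_closed = np.linalg.matrix_power(A, n) @ x0
for m in range(n):
    x_closed = x_closed + np.linalg.matrix_power(A, n - 1 - m) @ (B * u[m])

assert np.allclose(x, x_closed)
assert np.isclose(C @ x, x[0])  # output y[n] is the first state variable
```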
This function works by first calling c2d, which converts the system (A, B, C, D) into an equivalent discrete system. Once the system model is discretized, the function passes control to the dlsim function, which is used to simulate discrete-time systems with the specified input. Because of this, simulation programs like MATLAB are subject to the round-off errors associated with the discretization process. Stability System stability is an important topic, because unstable systems may not perform correctly, and may actually be harmful to people. There are a number of different methods and tools that can be used to determine system stability, depending on whether you are in the state-space or the complex domain. # Stability ## Stability When a system is unstable, the output of the system may be infinite even though the input to the system was finite. This causes a number of practical problems. For instance, a robot arm controller that is unstable may cause the robot to move dangerously. Also, systems that are unstable often incur a certain amount of physical damage, which can become costly. Nonetheless, many systems are inherently unstable - a fighter jet, for instance, or a rocket at liftoff, are examples of naturally unstable systems. Although we can design controllers that stabilize the system, it is first important to understand what stability is, how it is determined, and why it matters. The chapters in this section are heavily mathematical, and many require a background in linear differential equations. Readers without a strong mathematical background might want to review the necessary chapters in the Calculus and Ordinary Differential Equations books (or equivalent) before reading this material. For most of this chapter we will be assuming that the system is linear, and can be represented either by a set of transfer functions or in state space.
Linear systems have an associated characteristic polynomial, which tells us a great deal about the stability of the system. If any coefficient of the characteristic polynomial is zero or negative, the system is either unstable or, at best, marginally stable. It is important to note, though, that even if all of the coefficients of the characteristic polynomial are positive the system may still be unstable. We will look into this in more detail below. ## BIBO Stability A system is defined to be BIBO Stable if every bounded input to the system results in a bounded output over the time interval ${\displaystyle [t_{0},\infty )}$. This must hold for all initial times t0. So long as we don't input infinity to our system, we won't get infinity output. A system is defined to be uniformly BIBO Stable if there exists a positive constant k that is independent of t0 such that for all t0 the following conditions: ${\displaystyle \|u(t)\|\leq 1}$ ${\displaystyle t\geq t_{0}}$ imply that ${\displaystyle \|y(t)\|\leq k}$ There are a number of different types of stability, and keywords that are used with the topic of stability. Some of the important words that we are going to be discussing in this chapter and the next few chapters are: BIBO Stable, Marginally Stable, Conditionally Stable, Uniformly Stable, Asymptotically Stable, and Unstable. All of these words mean slightly different things.
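The coefficient test above is only a necessary condition, and the caveat can be demonstrated with a concrete polynomial. The example below (s³ + s² + 11s + 51, which factors as (s + 3)(s² − 2s + 17)) is an illustrative choice: every coefficient is positive, yet two roots lie in the right-half plane.

```python
import numpy as np

# All-positive coefficients do NOT guarantee stability:
coeffs = [1, 1, 11, 51]   # s^3 + s^2 + 11s + 51 = (s + 3)(s^2 - 2s + 17)
roots = np.roots(coeffs)

assert all(c > 0 for c in coeffs)   # every coefficient is positive...
assert max(roots.real) > 0          # ...yet roots 1 +/- 4j are in the RHP

# A zero or negative coefficient, by contrast, always signals a root
# on or to the right of the imaginary axis:
assert max(np.roots([1, -1, 2]).real) > 0   # s^2 - s + 2: roots at 0.5 +/- j
```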
## Determining BIBO Stability We can prove mathematically that a system f is BIBO stable if an arbitrary input x is bounded by two finite but large arbitrary constants M and -M: ${\displaystyle -M<x<M}$ We apply the input x, and the arbitrary boundaries M and -M to the system to produce three outputs: ${\displaystyle y_{x}=f(x)}$ ${\displaystyle y_{M}=f(M)}$ ${\displaystyle y_{-M}=f(-M)}$ Now, all three outputs should be finite for all possible values of M and x, and they should satisfy the following relationship: ${\displaystyle y_{-M}\leq y_{x}\leq y_{M}}$ If this condition is satisfied, then the system is BIBO stable. A SISO linear time-invariant (LTI) system is BIBO stable if and only if its impulse response ${\displaystyle g(t)}$ is absolutely integrable over ${\displaystyle [0,\infty )}$, that is: ${\displaystyle \int _{0}^{\infty }|g(t)|\,dt\leq M<{\infty }}$ ### Example Consider the system: ${\displaystyle h(t)={\frac {2}{t}}}$ We can apply our test, selecting an arbitrarily large finite constant M, and an arbitrary input x such that M > x > -M. As M approaches infinity (but does not reach infinity), we can show that: ${\displaystyle y_{-M}=\lim _{M\to \infty }{\frac {2}{-M}}=0^{-}}$ And: ${\displaystyle y_{M}=\lim _{M\to \infty }{\frac {2}{M}}=0^{+}}$ So now, we can write out our inequality: ${\displaystyle y_{-M}\leq y_{x}\leq y_{M}}$ ${\displaystyle 0^{-}\leq y_{x}\leq 0^{+}}$ And this inequality should be satisfied for all possible values of x. However, we can see that when x is zero, we have the following: ${\displaystyle y_{x}=\lim _{x\to 0}{\frac {2}{x}}=\infty }$ Which means that x is between -M and M, but the value yx is not between y-M and yM. Therefore, this system is not stable. ## Poles and Stability When the poles of the closed-loop transfer function of a given system are located in the right-half of the S-plane (RHP), the system becomes unstable. When the poles of the system are located in the left-half plane (LHP) and the system is not improper, the system is shown to be stable.
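The absolute-integrability condition for LTI systems can be checked numerically. As a sketch (the transfer function below is an illustrative example, not from the text): H(s) = 1/(s + 2) has the impulse response g(t) = e^(−2t) and a single LHP pole at s = −2, and its absolute integral over [0, ∞) is exactly 1/2 — finite, so the system is BIBO stable.

```python
import numpy as np

# Impulse response of H(s) = 1/(s + 2), sampled finely over [0, 20];
# e^(-2t) is negligible beyond t = 20, so this approximates [0, inf).
t = np.linspace(0, 20, 20001)
g = np.exp(-2 * t)

# Trapezoidal estimate of the absolute integral:
dt = t[1] - t[0]
integral = np.sum((np.abs(g)[:-1] + np.abs(g)[1:]) / 2) * dt

assert np.isclose(integral, 0.5, atol=1e-3)   # finite -> BIBO stable

# The pole-location test agrees: the single pole is in the LHP.
assert max(np.roots([1, 2]).real) < 0
```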
A number of tests deal with this particular facet of stability: the Routh-Hurwitz criterion, the root locus, and the Nyquist stability criterion all test whether there are poles of the transfer function in the RHP. We will learn about all these tests in the upcoming chapters. If the system is a multivariable, or a MIMO system, then the system is stable if and only if every pole of every transfer function in the transfer function matrix has a negative real part and every transfer function in the transfer function matrix is not improper. For these systems, it is possible to use the Routh-Hurwitz, root locus, and Nyquist methods described later, but these methods must be performed once for each individual transfer function in the transfer function matrix. ## Poles and Eigenvalues Note: Every pole of G(s) is an eigenvalue of the system matrix A. However, not every eigenvalue of A is a pole of G(s). The poles of the transfer function and the eigenvalues of the system matrix A are related. In fact, the poles of the transfer function are a subset of the eigenvalues of the system matrix A. In this way, if we have the eigenvalues of a system in the state-space domain, we can use the Routh-Hurwitz and root-locus methods as if we had our system represented by a transfer function instead. On a related note, eigenvalues and all methods and mathematical techniques that use eigenvalues to determine system stability only work with time-invariant systems. In systems which are time-variant, the methods using eigenvalues to determine system stability fail. ## Transfer Functions Revisited We are going to have a brief refresher here about transfer functions, because several of the later chapters will use transfer functions for analyzing system stability. Let us remember our generalized feedback-loop transfer function, with a gain element of K, a forward path Gp(s), and a feedback of Gb(s).
We write the transfer function for this system as: ${\displaystyle H_{cl}(s)={\frac {KGp(s)}{1+H_{ol}(s)}}}$ Where ${\displaystyle H_{cl}}$ is the closed-loop transfer function, and ${\displaystyle H_{ol}}$ is the open-loop transfer function. Again, we define the open-loop transfer function as the product of the forward path and the feedback elements, as such: ${\displaystyle H_{ol}(s)=KGp(s)Gb(s)}$ Now, we can define F(s) to be the characteristic equation. F(s) is simply the denominator of the closed-loop transfer function, and can be defined as such: [Characteristic Equation] ${\displaystyle F(s)=1+H_{ol}=D(s)}$ We can say conclusively that the roots of the characteristic equation are the poles of the transfer function. Now, we know a few simple facts: 1. The locations of the poles of the closed-loop transfer function determine if the system is stable or not. 2. The zeros of the characteristic equation are the poles of the closed-loop transfer function. 3. The characteristic equation is always a simpler equation than the closed-loop transfer function. These facts combined show us that we can focus our attention on the characteristic equation, and find the roots of that equation. ## State-Space and Stability As we have discussed earlier, the system is stable if the eigenvalues of the system matrix A have negative real parts. However, there are other stability issues that we can analyze, such as whether a system is uniformly stable, asymptotically stable, or otherwise. We will discuss all these topics in a later chapter. ## Marginal Stability When the poles of the system in the complex S-Domain exist on the complex frequency axis (the vertical axis), or when the eigenvalues of the system matrix are imaginary (no real part), the system exhibits oscillatory characteristics, and is said to be marginally stable.
A marginally stable system may become unstable under certain circumstances, and may be perfectly stable under other circumstances. It is impossible to tell by inspection whether a marginally stable system will become unstable or not. We will discuss marginal stability more in the following chapters. # Discrete Time Stability ## Discrete-Time Stability The stability analysis of a discrete-time or digital system is similar to the analysis for a continuous-time system. However, there are enough differences that it warrants a separate chapter. ## Input-Output Stability ### Uniform Stability An LTI causal system is uniformly BIBO stable if there exists a positive constant L such that the following conditions: ${\displaystyle x[n_{0}]=0}$ ${\displaystyle \|u[n]\|\leq k}$ ${\displaystyle k\geq 0}$ imply that ${\displaystyle \|y[n]\|\leq L}$ ### Impulse Response Matrix We can define the impulse response matrix of a discrete-time system as: [Impulse Response Matrix] ${\displaystyle G[n]=\left\{{\begin{matrix}CA^{n-1}B&{\mbox{ if }}n>0\\0&{\mbox{ if }}n\leq 0\end{matrix}}\right.}$ Or, in the general time-varying case: ${\displaystyle G[n]=\left\{{\begin{matrix}C\phi [n,n_{0}]B&{\mbox{ if }}n>0\\0&{\mbox{ if }}n\leq 0\end{matrix}}\right.}$ A digital system is BIBO stable if and only if there exists a positive constant L such that for all non-negative k: ${\displaystyle \sum _{n=0}^{k}\|G[n]\|\leq L}$ ## Stability of Transfer Function A MIMO discrete-time system is BIBO stable if and only if every pole of every transfer function in the transfer function matrix has a magnitude less than 1. All poles of all transfer functions must exist inside the unit circle on the Z plane. ## Lyapunov Stability There is a discrete version of the Lyapunov stability theorem that applies to digital systems.
Given the discrete Lyapunov equation: [Digital Lyapunov Equation] ${\displaystyle A^{T}MA-M=-N}$ We can use this version of the Lyapunov equation to define a condition for stability in discrete-time systems: Lyapunov Stability Theorem (Digital Systems) A digital system with the system matrix A is asymptotically stable if and only if there exists a unique matrix M that satisfies the Lyapunov Equation for every positive definite matrix N. ## Poles and Eigenvalues Every pole of G(z) is an eigenvalue of the system matrix A. Not every eigenvalue of A is a pole of G(z). Like the poles of the transfer function, all the eigenvalues of the system matrix must have magnitudes less than 1. Mathematically: ${\displaystyle {\sqrt {\operatorname {Re} (z)^{2}+\operatorname {Im} (z)^{2}}}<1}$ If the magnitude of the eigenvalues of the system matrix A, or the poles of the transfer functions, is greater than 1, the system is unstable. ## Finite Wordlengths Digital computer systems have an inherent problem because implementable computer systems have finite wordlengths to deal with. Some of the issues are: 1. Real numbers can only be represented with a finite precision. Typically, a computer system can only accurately represent a number to a finite number of decimal points. 2. Because of the fact above, computer systems with feedback can compound errors with each program iteration. Small errors in one step of an algorithm can lead to large errors later in the program. 3. Integer numbers in computer systems have finite lengths. Because of this, integer numbers will either roll over or saturate, depending on the design of the computer system. Both situations can create inaccurate results. # Jury's Test ## Routh-Hurwitz in Digital Systems Because of the differences in the Z and S domains, the Routh-Hurwitz criterion cannot be used directly with digital systems. This is because digital systems and continuous-time systems have different regions of stability.
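The eigenvalue-magnitude condition above is easy to apply in code. A sketch, using illustrative system matrices (for triangular matrices the eigenvalues are simply the diagonal entries, which makes the examples easy to verify by hand):

```python
import numpy as np

A_stable = np.array([[0.5, 0.1],
                     [0.0, 0.8]])    # eigenvalues 0.5 and 0.8
A_unstable = np.array([[1.2, 0.0],
                       [0.3, 0.4]])  # eigenvalues 1.2 and 0.4

def is_stable_discrete(A):
    """All eigenvalues strictly inside the unit circle."""
    return bool(np.max(np.abs(np.linalg.eigvals(A))) < 1.0)

assert is_stable_discrete(A_stable)
assert not is_stable_discrete(A_unstable)

# The state of a stable system decays under iteration x[k+1] = A x[k]:
x = np.array([1.0, 1.0])
for _ in range(100):
    x = A_stable @ x
assert np.linalg.norm(x) < 1e-6
```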
However, there are some methods that we can use to analyze the stability of digital systems. Our first option (and arguably not a very good option) is to convert the digital system into a continuous-time representation using the bilinear transform. The bilinear transform converts an equation in the Z domain into an equation in the W domain, which has properties similar to the S domain. Another possibility is to use Jury's Stability Test. Jury's test is a procedure similar to the RH test, except it has been modified to analyze digital systems in the Z domain directly. ### Bilinear Transform One common, but time-consuming, method of analyzing the stability of a digital system in the z-domain is to use the bilinear transform to convert the transfer function from the z-domain to the w-domain. The w-domain is similar to the s-domain in the following ways: • Poles in the right-half plane are unstable • Poles in the left-half plane are stable • Poles on the imaginary axis are marginally stable The w-domain is warped with respect to the s-domain, however, and except for the relative position of poles to the imaginary axis, they are not in the same places as they would be in the s-domain. Remember, however, that the Routh-Hurwitz criterion can tell us whether a pole is unstable or not, and nothing else. Therefore, it doesn't matter where exactly the pole is, so long as it is in the correct half-plane. Since we know that stable poles are in the left half of the w-plane and the s-plane, and that unstable poles are on the right-hand side of both planes, we can use the Routh-Hurwitz test on functions in the w-domain exactly like we can use it on functions in the s-domain. ### Other Mappings There are other methods for mapping an equation in the Z domain into an equation in the S domain, or a similar domain. We will discuss these different methods in the Appendix.
## Jury's Test Jury's test is a test that is similar to the Routh-Hurwitz criterion, except that it can be used to analyze the stability of an LTI digital system in the Z domain. To use Jury's test to determine if a digital system is stable, we must check our z-domain characteristic equation against a number of specific rules and requirements. If the function fails any requirement, it is not stable. If the function passes all the requirements, it is stable. Jury's test is a necessary and sufficient test for stability in digital systems. Again, we call D(z) the characteristic polynomial of the system. It is the denominator polynomial of the Z-domain transfer function. Jury's test will focus exclusively on the characteristic polynomial. To perform Jury's test, we must perform a number of smaller tests on the system. If the system fails any test, it is unstable. ### Jury Tests Given a characteristic equation in the form: ${\displaystyle D(z)=a_{0}+a_{1}z+a_{2}z^{2}+\cdots +a_{N}z^{N}}$ The following tests determine whether this system has any poles outside the unit circle (the instability region). These tests will use the value N as the degree of the characteristic polynomial. The system must pass all of these tests to be considered stable. If the system fails any test, you may stop immediately: you do not need to try any further tests. Rule 1 If z is 1, the system output must be positive: ${\displaystyle D(1)>0}$ Rule 2 If z is -1, then the following relationship must hold: ${\displaystyle (-1)^{N}D(-1)>0}$ Rule 3 The absolute value of the constant term (a0) must be less than the value of the highest coefficient (aN): ${\displaystyle |a_{0}|<a_{N}}$ If Rules 1, 2, and 3 are satisfied, construct the Jury Array (discussed below).
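The three preliminary rules can be sketched in a few lines of code. Note that passing them only clears the way for constructing the full array: they are necessary, not sufficient, conditions. The example polynomials below are illustrative choices.

```python
import numpy as np

def jury_preliminary(a):
    """Check Rules 1-3 for D(z) = a[0] + a[1]*z + ... + a[N]*z^N."""
    a = np.asarray(a, dtype=float)
    N = len(a) - 1
    D = lambda z: sum(c * z ** k for k, c in enumerate(a))
    rule1 = D(1) > 0                 # D(1) > 0
    rule2 = (-1) ** N * D(-1) > 0    # (-1)^N * D(-1) > 0
    rule3 = abs(a[0]) < a[N]         # |a0| < aN
    return rule1 and rule2 and rule3

# D(z) = z^2 - 0.25: poles at z = +/-0.5, inside the unit circle.
assert jury_preliminary([-0.25, 0.0, 1.0])

# D(z) = z^2 - 3z + 2 = (z - 1)(z - 2): a pole at z = 2, outside the
# unit circle, and indeed D(1) = 0 already fails Rule 1.
assert not jury_preliminary([2.0, -3.0, 1.0])
```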
Rule 4 Once the Jury Array has been formed, all the following relationships must be satisfied until the end of the array: ${\displaystyle |b_{0}|>|b_{N-1}|}$ ${\displaystyle |c_{0}|>|c_{N-2}|}$ ${\displaystyle |d_{0}|>|d_{N-3}|}$ And so on until the last row of the array. If all these conditions are satisfied, the system is stable. While you are constructing the Jury Array, you can be making the tests of Rule 4. If the array fails Rule 4 at any point, you can stop calculating the array: your system is unstable. We will discuss the construction of the Jury Array below. ### The Jury Array The Jury Array is constructed by first writing out a row of coefficients, and then writing out another row with the same coefficients in reverse order. For instance, if your polynomial is a third order system, we can write the first two rows of the Jury Array as follows: ${\displaystyle {\overline {\underline {\begin{matrix}z^{0}&z^{1}&z^{2}&z^{3}&\ldots &z^{N}\\a_{0}&a_{1}&a_{2}&a_{3}&\ldots &a_{N}\\a_{N}&\ldots &a_{3}&a_{2}&a_{1}&a_{0}\end{matrix}}}}}$ Now, once we have the first row of our coefficients written out, we add another row of coefficients (we will use b for this row, and c for the next row, as per our previous convention), and we will calculate the values of the lower rows from the values of the upper rows. Each new row that we add will have one fewer coefficient than the row before it: ${\displaystyle {\overline {\underline {\begin{matrix}1)&a_{0}&a_{1}&a_{2}&a_{3}&\ldots &a_{N}\\2)&a_{N}&\ldots &a_{3}&a_{2}&a_{1}&a_{0}\\3)&b_{0}&b_{1}&b_{2}&\ldots &b_{N-1}\\4)&b_{N-1}&\ldots &b_{2}&b_{1}&b_{0}\\\vdots &\vdots &\vdots &\vdots \\2N-3)&v_{0}&v_{1}&v_{2}\end{matrix}}}}}$ Note: The last row is row (2N-3), and it always has 3 elements. This test is meaningless if N = 1, but in that case the single pole is already known. Once we get to a row with 2 members, we can stop constructing the array. To calculate the values of the odd-number rows, we can use the following formulae.
The even number rows are equal to the previous row in reverse order. We will use k as an arbitrary subscript value. These formulae are reusable for all elements in the array: ${\displaystyle b_{k}={\begin{vmatrix}a_{0}&a_{N-k}\\a_{N}&a_{k}\end{vmatrix}}}$ ${\displaystyle c_{k}={\begin{vmatrix}b_{0}&b_{N-1-k}\\b_{N-1}&b_{k}\end{vmatrix}}}$ ${\displaystyle d_{k}={\begin{vmatrix}c_{0}&c_{N-2-k}\\c_{N-2}&c_{k}\end{vmatrix}}}$ This pattern can be carried on to all lower rows of the array, if needed. ### Example: Calculating e5 Give the equation for member e5 of the Jury array (assuming the original polynomial is sufficiently large to require an e5 member). Going off the pattern we set above, we can have this equation for a member e: ${\displaystyle e_{k}={\begin{vmatrix}d_{0}&d_{N-R-k}\\d_{N-R}&d_{k}\end{vmatrix}}}$ Where we are using R as the subtractive element from the above equations. Since row c had R → 1, and row d had R → 2, we can follow the pattern and for row e set R → 3. Plugging this value of R into our equation above gives us: ${\displaystyle e_{k}={\begin{vmatrix}d_{0}&d_{N-3-k}\\d_{N-3}&d_{k}\end{vmatrix}}}$ And since we want e5 we know that k is 5, so we can substitute that into the equation: ${\displaystyle e_{5}={\begin{vmatrix}d_{0}&d_{N-3-5}\\d_{N-3}&d_{5}\end{vmatrix}}={\begin{vmatrix}d_{0}&d_{N-8}\\d_{N-3}&d_{5}\end{vmatrix}}}$ When we take the determinant, we get the following equation: ${\displaystyle e_{5}=d_{0}d_{5}-d_{N-8}d_{N-3}}$ We will discuss the bilinear transform, and other methods to convert between the Laplace domain and the Z domain, in the appendix. # Root Locus ## The Problem Consider a system like a radio. The radio has a "volume" knob that controls the amount of gain of the system. High volume means more power going to the speakers, low volume means less power to the speakers. As the volume value increases, the poles of the transfer function of the radio change, and they might potentially become unstable.
We would like to find out if the radio becomes unstable, and if so, we would like to find out what values of the volume cause it to become unstable. Our current methods would require us to plug in each new value for the volume (gain, "K"), and solve the open-loop transfer function for the roots. This process can be a long one. Luckily, there is a method called the root-locus method that allows us to graph the locations of all the poles of the system for all values of gain, K. ## Root-Locus As we change gain, we notice that the system poles and zeros actually move around in the S-plane. This fact can make life particularly difficult when we need to solve higher-order equations repeatedly, for each new gain value. The solution to this problem is a technique known as Root-Locus graphs. Root-Locus allows you to graph the locations of the poles and zeros for every value of gain, by following several simple rules. Let's say we have a closed-loop transfer function for a particular system: ${\displaystyle {\frac {N(s)}{D(s)}}={\frac {KG(s)}{1+KG(s)H(s)}}}$ Where N is the numerator polynomial and D is the denominator polynomial of the transfer functions, respectively. Now, we know that to find the poles of the equation, we must set the denominator to 0, and solve the characteristic equation.
In other words, the locations of the poles of a specific equation must satisfy the following relationship:

${\displaystyle D(s)=1+KG(s)H(s)=0}$

From this same equation, we can manipulate the equation as such:

${\displaystyle 1+KG(s)H(s)=0}$

${\displaystyle KG(s)H(s)=-1}$

And finally, by converting to polar coordinates:

${\displaystyle \angle KG(s)H(s)=180^{\circ }}$

Now we have 2 equations that govern the locations of the poles of a system for all gain values:

[The Magnitude Equation]
${\displaystyle 1+KG(s)H(s)=0}$

[The Angle Equation]
${\displaystyle \angle KG(s)H(s)=180^{\circ }}$

### Digital Systems

The same basic method can be used for considering digital systems in the Z-domain:

${\displaystyle {\frac {N(z)}{D(z)}}={\frac {KG(z)}{1+K{\overline {GH}}(z)}}}$

Where N is the numerator polynomial in z, D is the denominator polynomial in z, and ${\displaystyle {\overline {GH}}(z)}$ is the open-loop transfer function of the system, in the Z domain. The denominator D(z) is, by the definition of the characteristic equation:

${\displaystyle D(z)=1+K{\overline {GH}}(z)=0}$

We can manipulate this as follows:

${\displaystyle 1+K{\overline {GH}}(z)=0}$

${\displaystyle K{\overline {GH}}(z)=-1}$

We can now convert this to polar coordinates, and take the angle of the polynomial:

${\displaystyle \angle K{\overline {GH}}(z)=180^{\circ }}$

We are now left with two important equations:

[The Magnitude Equation]
${\displaystyle 1+K{\overline {GH}}(z)=0}$

[The Angle Equation]
${\displaystyle \angle K{\overline {GH}}(z)=180^{\circ }}$

If you compare the two, the Z-domain equations are nearly identical to the S-domain equations, and behave in exactly the same way. For the remainder of the chapter, we will only consider the S-domain equations, with the understanding that digital systems operate in nearly the same manner.

## The Root-Locus Procedure

Note: In this section, the rules for the S-Plane and the Z-plane are the same, so we won't refer to the differences between them.
In the transform domain (see note at right), when the gain is small, the poles start at the poles of the open-loop transfer function. When gain becomes infinity, the poles move to overlap the zeros of the system. This means that on a root-locus graph, all the poles move towards a zero. Only one pole may move towards one zero, and this means that there must be the same number of poles as zeros. If there are fewer zeros than poles in the transfer function, there are a number of implicit zeros located at infinity that the poles will approach.

First, we need to convert the magnitude equation into a slightly more convenient form:

${\displaystyle KG(s)H(s)+1=0\to G(s)H(s)={\frac {-1}{K}}}$

Note: We generally use capital letters for functions in the frequency domain, but a(s) and b(s) are unimportant enough to be lower-case.

Now, we can assume that G(s)H(s) is a fraction of some sort, with a numerator and a denominator that are both polynomials. We can express this equation using arbitrary functions a(s) and b(s), as such:

${\displaystyle {\frac {a(s)}{b(s)}}={\frac {-1}{K}}}$

We will refer to these functions a(s) and b(s) later in the procedure.

We can start drawing the root-locus by first placing the roots of b(s) on the graph with an 'X'. Next, we place the roots of a(s) on the graph, and mark them with an 'O'. Next, we examine the real axis. Starting from the right-hand side of the graph and traveling to the left, we draw a root-locus line on the real axis at every point to the left of an odd number of poles or zeros on the real axis. This may sound tricky at first, but it becomes easier with practice.

Now, a root-locus line starts at every pole. Therefore, any place where two poles appear to be connected by a root-locus line on the real axis, the two poles actually move towards each other, and then they "break away", and move off the axis. The point where the poles break off the axis is called the breakaway point.
From here, the root-locus lines travel towards the nearest zero. It is important to note that the s-plane is symmetrical about the real axis, so whatever is drawn on the top half of the S-plane must be drawn in mirror-image on the bottom half.

Once poles break away from the real axis, they can either travel out towards infinity (to meet an implicit zero), travel to meet an explicit zero, or re-join the real axis to meet a zero that is located on the real axis. If a pole is traveling towards infinity, it always follows an asymptote. The number of asymptotes is equal to the number of implicit zeros at infinity.

## Root Locus Rules

Here is the complete set of rules for drawing the root-locus graph. We will use p and z to denote the number of poles and the number of zeros of the open-loop transfer function, respectively. We will use Pi and Zi to denote the location of the ith pole and the ith zero, respectively. Likewise, we will use ψi and ρi to denote the angle from a given point to the ith pole and zero, respectively. All angles are given in radians. There are 11 rules that, if followed correctly, will allow you to create a correct root-locus graph.

Rule 1
There is one branch of the root-locus for every root of b(s).

Rule 2
The roots of b(s) are the poles of the open-loop transfer function. Mark the roots of b(s) on the graph with an X.

Rule 3
The roots of a(s) are the zeros of the open-loop transfer function. Mark the roots of a(s) on the graph with an O. There should be a number of O's less than or equal to the number of X's. There are p − z zeros located at infinity, called "implicit zeros". All branches of the root-locus will move from a pole to a zero (some branches, therefore, may travel towards infinity).

Rule 4
A point on the real axis is a part of the root-locus if it is to the left of an odd number of poles and zeros.
Rule 5
The gain at any point on the root locus can be determined by the inverse of the absolute value of the magnitude equation:
${\displaystyle \left|{\frac {b(s)}{a(s)}}\right|=|K|}$

Rule 6
The root-locus diagram is symmetric about the real axis. All complex roots are conjugates.

Rule 7
Two roots that meet on the real axis will break away from the axis at certain break-away points. If we set s → σ (no imaginary part), we can use the following equation:
${\displaystyle K=-{\frac {b(\sigma )}{a(\sigma )}}}$
And differentiate to find the local maximum of K:
${\displaystyle {\frac {dK}{d\sigma }}=-{\frac {d}{d\sigma }}{\frac {b(\sigma )}{a(\sigma )}}=0}$

Rule 8
The breakaway lines of the root locus are separated by angles of ${\displaystyle {\frac {\pi }{\alpha }}}$, where α is the number of poles intersecting at the breakaway point.

Rule 9
The breakaway root-loci follow asymptotes that intersect the real axis at angles φω given by:
${\displaystyle \phi _{\omega }={\frac {\pi +2N\pi }{p-z}},\quad N=0,1,...p-z-1}$
The origin of these asymptotes, OA, is given as the sum of the pole locations, minus the sum of the zero locations, divided by the difference between the number of poles and zeros:
${\displaystyle OA={\frac {\sum _{p}P_{i}-\sum _{z}Z_{i}}{p-z}}}$
The OA point should lie on the real axis.

Rule 10
The branches of the root locus cross the imaginary axis at points where the angle equation value is π (i.e., 180°).

Rule 11
The angles that the root-locus branch makes with a complex-conjugate pole or zero are determined by evaluating the angle equation at a point infinitesimally close to the pole or zero. The angle of departure, φd, from a pole satisfies the following equation, where the sum over poles excludes the pole in question:
${\displaystyle \sum _{z}\rho _{i}-\sum _{p}\psi _{i}-\phi _{d}=\pi }$
The angle of arrival, φa, at a zero satisfies the following, where the sum over zeros excludes the zero in question:
${\displaystyle \sum _{z}\rho _{i}-\sum _{p}\psi _{i}+\phi _{a}=\pi }$

We will explain these rules in the rest of the chapter.
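Rule 9 lends itself to a quick numerical sketch. The helper below is not from the text (the function name and the example pole set are my own); it computes the asymptote origin and angles for an assumed set of open-loop poles and finite zeros:

```python
import numpy as np

def asymptote_data(poles, zeros):
    """Origin (OA) and angles of the root-locus asymptotes, per Rule 9."""
    p, z = len(poles), len(zeros)
    oa = (sum(poles) - sum(zeros)).real / (p - z)
    # phi = (pi + 2*N*pi) / (p - z) for N = 0, 1, ..., p - z - 1
    angles = [(np.pi + 2 * N * np.pi) / (p - z) for N in range(p - z)]
    return oa, angles

# Hypothetical open-loop system: three poles, no finite zeros
oa, angles = asymptote_data([-1 + 0j, -2 + 0j, -3 + 0j], [])
print(oa)                  # -2.0
print(np.degrees(angles))  # 60, 180, 300 (degrees)
```

With three poles and no finite zeros, all three branches eventually follow asymptotes centered at the pole centroid.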
## Root Locus Equations

Here are the two major equations:

[Root Locus Equations]

S-Domain: ${\displaystyle 1+KG(s)H(s)=0}$ and ${\displaystyle \angle KG(s)H(s)=180^{\circ }}$

Z-Domain: ${\displaystyle 1+K{\overline {GH}}(z)=0}$ and ${\displaystyle \angle K{\overline {GH}}(z)=180^{\circ }}$

Note that the angle equation requires the net angle contribution of all the poles and zeros to equal 180°.

### Number of Asymptotes

If the number of explicit zeros of the system is denoted by Z (uppercase z), and the number of poles of the system is given by P, then the number of asymptotes (Na) is given by:

[Number of Asymptotes]
${\displaystyle N_{a}=P-Z}$

The angles of the asymptotes are given by:

[Angle of Asymptotes]
${\displaystyle \phi _{k}=(2k+1){\frac {\pi }{P-Z}}}$

for values of ${\displaystyle k=[0,1,...N_{a}-1]}$.

### Asymptote Intersection Point

The asymptotes intersect the real axis at the point:

[Origin of Asymptotes]
${\displaystyle \sigma _{0}={\frac {\sum _{P}-\sum _{Z}}{P-Z}}}$

Where ${\displaystyle \sum _{P}}$ is the sum of all the locations of the poles, and ${\displaystyle \sum _{Z}}$ is the sum of all the locations of the explicit zeros.

### Breakaway Points

The breakaway points are located at the roots of the following equation:

[Breakaway Point Locations]
${\displaystyle {\frac {dG(s)H(s)}{ds}}=0}$ or ${\displaystyle {\frac {d{\overline {GH}}(z)}{dz}}=0}$

Once you solve for s (or z), the real roots give you the breakaway/reentry points; complex roots indicate that no breakaway/reentry occurs there. The breakaway point equation can be difficult to solve, so many times the actual location is approximated.

## Root Locus and Stability

The root-locus procedure should produce a graph of where the poles of the system are for all values of gain K. When any or all of the roots of D are in the unstable region, the system is unstable. When any of the roots are in the marginally stable region, the system is marginally stable (oscillatory).
When all of the roots of D are in the stable region, then the system is stable. It is important to note that a system that is stable for gain K1 may become unstable for a different gain K2. Some systems may have poles that cross over from stable to unstable multiple times, giving multiple gain values for which the system is unstable.

Here is a quick refresher:

• Stable region: left-hand s-plane (${\displaystyle \sigma <0}$) in the S-domain; inside the unit circle (${\displaystyle |z|<1}$) in the Z-domain
• Marginally stable region: the vertical axis (${\displaystyle \sigma =0}$) in the S-domain; the unit circle (${\displaystyle |z|=1}$) in the Z-domain
• Unstable region: right-hand s-plane (${\displaystyle \sigma >0}$) in the S-domain; outside the unit circle (${\displaystyle |z|>1}$) in the Z-domain

## Examples

### Example 1: First-Order System

Find the root-locus of the open-loop system:

${\displaystyle T(s)={\frac {1}{1+2s}}}$

If we look at the characteristic equation, we can quickly solve for the single pole of the system:

${\displaystyle D(s)=1+2s=0}$

${\displaystyle s=-{\frac {1}{2}}}$

We plot that point on our root-locus graph, and everything on the real axis to the left of that single point is on the root locus (from the rules, above). Therefore, the root locus of our system looks like this:

From this image, we can see that for all values of gain this system is stable.

### Example 2: Third-Order System

We are given a system with three real poles, shown by the transfer function:

${\displaystyle T(s)={\frac {1}{(s+1)(s+2)(s+3)}}}$

Is this system stable? To answer this question, we can plot the root-locus. First, we draw the poles on the graph at locations -1, -2, and -3. The real axis between the first and second poles is on the root-locus, as well as the real axis to the left of the third pole. We know also that there is going to be breakaway from the real axis at some point.
The origin of asymptotes is located at:

${\displaystyle OA={\frac {(-1)+(-2)+(-3)}{3}}=-2}$

and the angle of the asymptotes is given by:

${\displaystyle \phi ={\frac {180(2k+1)}{3}}\;\mathrm {for} \;k=0,1,2}$

We know that the breakaway occurs between the first and second poles. Setting ${\displaystyle dK/d\sigma =0}$ with ${\displaystyle K=-(\sigma +1)(\sigma +2)(\sigma +3)}$ gives ${\displaystyle 3\sigma ^{2}+12\sigma +11=0}$, whose root between -1 and -2 is ${\displaystyle \sigma \approx -1.42}$. Drawing the root-locus gives us the graph below. We can see that for low values of gain the system is stable, but for higher values of gain, the system becomes unstable.

### Example: Complex-Conjugate Zeros

Find the root-locus graph for the following system transfer function:

${\displaystyle T(s)=K{\frac {s^{2}+4.5s+5.625}{s(s+1)(s+2)}}}$

If we look at the denominator, we have poles at the origin, -1, and -2. Following Rule 4, we know that the real axis between the first two poles, and the real axis after the third pole, are on the root-locus. We also know that there is going to be a breakaway point between the first two poles, so that they can approach the complex-conjugate zeros. If we use the quadratic equation on the numerator, we can find that the zeros are located at:

${\displaystyle s=(-2.25+j0.75),(-2.25-j0.75)}$

If we draw our graph, we get the following:

We can see from this graph that the system is stable for all values of K.
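To put a number on where the third-order system of Example 2 goes unstable, we can sweep the gain numerically. This sketch (NumPy only; the gain grid is an arbitrary choice) solves the closed-loop characteristic equation ${\displaystyle (s+1)(s+2)(s+3)+K=0}$ for increasing K and reports the first gain whose poles leave the left-half plane:

```python
import numpy as np

def closed_loop_poles(K):
    # (s+1)(s+2)(s+3) + K = s^3 + 6s^2 + 11s + (6 + K)
    return np.roots([1.0, 6.0, 11.0, 6.0 + K])

# Sweep the gain until a pole crosses into the right-half plane
K_unstable = None
for K in np.arange(0.0, 100.0, 0.5):
    if closed_loop_poles(K).real.max() > 1e-6:
        K_unstable = K
        break
print(K_unstable)  # the Routh-Hurwitz test predicts instability for K > 60
```

At K = 60 the characteristic polynomial factors as (s+6)(s²+11), so the branches cross the imaginary axis at ω = √11 ≈ 3.32 rad/s, consistent with Rule 10.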
### Example: Root-Locus Using MATLAB/Octave

Use MATLAB, Octave, or another piece of mathematical simulation software to produce the root-locus graph for the following system:

${\displaystyle T(s)=K{\frac {s^{2}+7s+12}{s^{2}+3s+2}}}$

First, we write out the numerator and denominator polynomials:

${\displaystyle N(s)=s^{2}+7s+12}$

${\displaystyle D(s)=s^{2}+3s+2}$

Now, we can generate the coefficient vectors from the numerator and denominator:

num = [0 1 7 12];
den = [0 1 3 2];

Next, we can feed these vectors into the rlocus command:

rlocus(num, den);

Note: In Octave, we need to create a system structure first, by typing:

sys = tf(num, den);
rlocus(sys);

Either way, we generate the following graph:

# Nyquist Criterion

## Nyquist Stability Criteria

The Nyquist Stability Criteria is a test for system stability, just like the Routh-Hurwitz test or the Root-Locus methodology. However, the Nyquist Criteria can also give us additional information about a system. Routh-Hurwitz and Root-Locus can tell us where the poles of the system are for particular values of gain. By altering the gain of the system, we can determine if any of the poles move into the RHP, and therefore become unstable. The Nyquist Criteria, however, can tell us things about the frequency characteristics of the system. For instance, some systems with constant gain might be stable for low-frequency inputs, but become unstable for high-frequency inputs.

Here is an example of a system responding differently to different frequency input values: Consider an ordinary glass of water. If the water is exposed to ordinary sunlight, it is unlikely to heat up too much. However, if the water is exposed to microwave radiation (from inside your microwave oven, for instance), the water will quickly heat up to a boil.

Also, the Nyquist Criteria can tell us things about the phase of the input signals, the time-shift of the system, and other important information.
## Contours

A contour is a complicated mathematical construct, but luckily we only need to worry about a few of its properties. We will denote contours with the Greek letter Γ (gamma). Contours are lines, drawn on a graph, that follow certain rules:

1. The contour must close (it must form a complete loop).
2. The contour may not cross directly through a pole of the system.
3. Contours must have a direction (clockwise or counterclockwise, generally).
4. A contour is called "simple" if it has no self-intersections. We only consider simple contours here.

Once we have such a contour, we can develop some important theorems about it, and finally use these theorems to derive the Nyquist stability criterion.

## Argument Principle

Here is the argument principle, which we will use to derive the stability criterion. Do not worry if you do not understand all the terminology; we will walk through it:

The Argument Principle
If we have a contour, Γ, drawn in one plane (say the complex Laplace plane, for instance), we can map that contour into another plane, the F(s) plane, by transforming the contour with the function F(s). The resultant contour, ${\displaystyle \Gamma _{F(s)}}$, will circle the origin point of the F(s) plane N times, where N is equal to the difference between Z and P (the number of zeros and poles of the function F(s), respectively).

When we have our contour, Γ, we transform it into ${\displaystyle \Gamma _{F(s)}}$ by plugging every point of the contour into the function F(s), and taking the resultant value to be a point on the transformed contour.

### Example: First Order System

Let's say, for instance, that Γ is a unit square contour in the complex s plane.
The vertices of the square are located at points I, J, K, L, as follows:

${\displaystyle I=1+j}$

${\displaystyle J=1-j}$

${\displaystyle K=-1-j}$

${\displaystyle L=-1+j}$

We must also specify the direction of our contour, and we will say (arbitrarily) that it is a clockwise contour (travels from I to J to K to L). We will also define our transform function, F(s), to be the following:

${\displaystyle F(s)=2s+1}$

Setting F(s) = 0, we can show that there is one zero at s → -0.5, and that F(s) has no poles. Plotting this root on the same graph as our contour, we see clearly that it lies within the contour. Since s is a complex variable, defined with real and imaginary parts as:

${\displaystyle s=\sigma +j\omega }$

We know that F(s) must also be complex. We will say, for reasons of simplicity, that the axes in the F(s) plane are u and v, and they are related as such:

${\displaystyle F(s)=u+vj=2(\sigma +j\omega )+1}$

From this relationship, we can define u and v in terms of σ and ω:

${\displaystyle u=2\sigma +1}$

${\displaystyle v=2\omega }$

Now, to transform Γ, we will plug every point of the contour into F(s), and the resultant values will be the points of ${\displaystyle \Gamma _{F(s)}}$. We will solve for complex values u and v, and we will start with the vertices, because they are the simplest examples:

${\displaystyle u+vj=F(I)=3+2j}$

${\displaystyle u+vj=F(J)=3-2j}$

${\displaystyle u+vj=F(K)=-1-2j}$

${\displaystyle u+vj=F(L)=-1+2j}$

We can take the lines in between the vertices as a function of s, and plug the entire function into the transform.
Luckily, because we are using straight lines, we can simplify very much:

• Line from I to J: ${\displaystyle \sigma =1,u=3,v=2\omega }$
• Line from J to K: ${\displaystyle \omega =-1,u=2\sigma +1,v=-2}$
• Line from K to L: ${\displaystyle \sigma =-1,u=-1,v=2\omega }$
• Line from L to I: ${\displaystyle \omega =1,u=2\sigma +1,v=2}$

And when we graph these functions, from vertex to vertex, we see that the resultant contour in the F(s) plane is a square, but not centered at the origin, and larger in size. Notice how the contour encircles the origin of the F(s) plane one time. This will be important later on.

### Example: Second-Order System

Let's say that we have a slightly more complicated mapping function:

${\displaystyle F(s)={\frac {s+0.5}{2s^{2}+2s+1}}}$

We can see clearly that F(s) has a zero at s → -0.5, and a complex-conjugate pair of poles at s → -0.5 + 0.5j and s → -0.5 - 0.5j. We will use the same unit square contour, Γ, from above:

${\displaystyle I=1+j}$

${\displaystyle J=1-j}$

${\displaystyle K=-1-j}$

${\displaystyle L=-1+j}$

We can see clearly that the poles and the zero of F(s) lie within Γ. Setting F(s) to u + vj and solving, we get the following relationship:

${\displaystyle u+vj=F(\sigma +j\omega )={\frac {(\sigma +0.5)+j\omega }{(2\sigma ^{2}-2\omega ^{2}+2\sigma +1)+j(4\sigma \omega +2\omega )}}}$

This is a little difficult now, because we need to simplify this whole expression, and separate it out into real and imaginary parts. There are two methods of doing this, neither of which is short or easy enough to demonstrate here in its entirety:

1. We convert the numerator and denominator polynomials into a polar representation in terms of r and θ, then perform the division, and then convert back into rectangular format.
2. We plug each segment of our contour into this equation, and simplify numerically.
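The second (numerical) approach is easy to carry out in a few lines of code. The sketch below (NumPy only; the sampling density is an arbitrary choice) walks the clockwise square contour, maps each sample through F(s), and counts encirclements of the origin by accumulating the phase of the image. With one zero and two poles inside the clockwise contour, the image winds around the origin once, in the opposite (counterclockwise) sense:

```python
import numpy as np

def F(s):
    return (s + 0.5) / (2 * s**2 + 2 * s + 1)

# Clockwise unit-square contour: I -> J -> K -> L -> I
I, J, K, L = 1 + 1j, 1 - 1j, -1 - 1j, -1 + 1j
segments = []
for a, b in [(I, J), (J, K), (K, L), (L, I)]:
    t = np.linspace(0.0, 1.0, 2000, endpoint=False)
    segments.append(a + t * (b - a))
gamma = np.concatenate(segments)

# Map the contour and accumulate the phase change between samples;
# the total divided by 2*pi is the winding number about the origin.
Fg = F(gamma)
dtheta = np.angle(np.roll(Fg, -1) / Fg)
winding = dtheta.sum() / (2 * np.pi)
print(round(winding))  # one net counterclockwise encirclement
```

This agrees with the argument principle: the magnitude of the winding number is |Z − P| = |1 − 2| = 1, with the sense reversed relative to the (clockwise) contour because P exceeds Z.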
## The Nyquist Contour

The Nyquist contour, the contour that makes the entire Nyquist criterion work, must encircle the entire unstable region of the complex plane. For analog systems, this is the right half of the complex s plane. For digital systems, this is the entire plane outside the unit circle. Remember that if a pole of the closed-loop transfer function (or equivalently a zero of the characteristic equation) lies in the unstable region of the complex plane, the system is an unstable system.

Analog Systems
The Nyquist contour for analog systems is an infinite semi-circle that encircles the entire right half of the s plane. The semicircle travels up the imaginary axis from negative infinity to positive infinity. From positive infinity, the contour breaks away from the imaginary axis, in the clockwise direction, and forms a giant semicircle.

Digital Systems
The Nyquist contour in digital systems is a counter-clockwise encirclement of the unit circle.

## Nyquist Criteria

Let us first introduce the most important equation when dealing with the Nyquist criterion:

${\displaystyle N=Z-P}$

Where:

• N is the number of encirclements of the (-1, 0) point.
• Z is the number of zeros of the characteristic equation enclosed by the Nyquist contour.
• P is the number of poles of the open-loop characteristic equation enclosed by the Nyquist contour.

With this equation stated, we can now state the Nyquist Stability Criterion:

Nyquist Stability Criterion
A feedback control system is stable, if and only if the contour ${\displaystyle \Gamma _{F(s)}}$ in the F(s) plane does not encircle the (-1, 0) point when P is 0.

A feedback control system is stable, if and only if the contour ${\displaystyle \Gamma _{F(s)}}$ in the F(s) plane encircles the (-1, 0) point a number of times equal to the number of poles of F(s) enclosed by Γ.

In other words, if P is zero then N must equal zero. Otherwise, N must equal P.
Essentially, we are saying that Z must always equal zero, because Z is the number of zeros of the characteristic equation (and therefore the number of poles of the closed-loop transfer function) that are in the right half of the s plane. Keep in mind that we don't necessarily know the locations of all the zeros of the characteristic equation. So if we find, using the Nyquist criterion, that N is not equal to the number of poles P, then we know that there must be a zero in the right-half plane, and that therefore the system is unstable.

## Nyquist ↔ Bode

A careful inspection of the Nyquist plot will reveal a surprising relationship to the Bode plots of the system. If we use the Bode phase plot as the angle θ, and the Bode magnitude plot as the distance r, then it becomes apparent that the Nyquist plot of a system is simply the polar representation of the Bode plots.

To obtain the Nyquist plot from the Bode plots, we take the phase angle and the magnitude value at each frequency ω. We convert the magnitude value from decibels back into gain ratios. Then, we plot the ordered pairs (r, θ) on a polar graph.

## Nyquist in the Z Domain

The Nyquist Criteria can be utilized in the digital domain in a similar manner as it is used with analog systems. The primary difference in using the criteria is that the shape of the Nyquist contour must change to encompass the unstable region of the Z plane. Therefore, instead of an infinite semi-circle, the Nyquist contour for digital systems is a counter-clockwise unit circle. By changing the shape of the contour, the same N = Z - P equation holds true, and the resulting Nyquist graph will typically look identical to one from an analog system, and can be interpreted in the same way.
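The Bode-to-Nyquist relationship described above can be verified numerically. In this sketch (the transfer function is an arbitrary example, not from the text), the Bode magnitude and phase data are converted back into polar points, which land exactly on the directly evaluated frequency response:

```python
import numpy as np

# An arbitrary example loop transfer function, evaluated at s = j*omega
def G(s):
    return 10.0 / ((s + 1.0) * (s + 2.0))

omega = np.logspace(-2, 2, 400)
Gjw = G(1j * omega)

# Bode data: magnitude in decibels, phase angle in radians
mag_db = 20.0 * np.log10(np.abs(Gjw))
phase = np.angle(Gjw)

# Nyquist points: convert decibels back to a gain ratio r, then form (r, theta)
r = 10.0 ** (mag_db / 20.0)
nyquist = r * np.exp(1j * phase)

print(np.allclose(nyquist, Gjw))  # True: the Nyquist plot is the polar Bode data
```

The round trip is exact (up to floating-point error) because the decibel conversion and the phase angle together carry the same information as the complex value G(jω).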
# State-Space Stability

## State-Space Stability

If a system is represented in the state-space domain, it doesn't make sense to convert that system to a transfer function representation (or even a transfer matrix representation) in an attempt to use any of the previous stability methods. Luckily, there are other analysis methods that can be used with the state-space representation to determine if a system is stable or not. First, let us introduce the notion of instability:

Unstable
A system is said to be unstable if the system response approaches infinity as time approaches infinity. If our system is G(t), then we can say a system is unstable if:

${\displaystyle \lim _{t\to \infty }\|G(t)\|=\infty }$

Also, a key concept when we are talking about the stability of systems is the concept of an equilibrium point:

Equilibrium Point
Given a system f such that:

${\displaystyle x'(t)=f(x(t))}$

A particular state xe is called an equilibrium point if ${\displaystyle f(x_{e})=0}$ for all time t in the interval ${\displaystyle [t_{0},\infty )}$, where t0 is the starting time of the system.

An equilibrium point is also known as a "stationary point", a "critical point", a "singular point", or a "rest state" in other books or literature. The definitions below typically require that the equilibrium point be zero. If we have an equilibrium point xe = a, then we can use the following change of variables to shift the equilibrium point to zero:

${\displaystyle {\bar {x}}=x-a}$

so that the equilibrium of the shifted variable is ${\displaystyle {\bar {x}}_{e}=x_{e}-a=0}$.

We will also see below that a system's stability is defined in terms of an equilibrium point. Related to the concept of an equilibrium point is the notion of a zero point:

Zero State
A state xz is a zero state if xz = 0. A zero state may or may not be an equilibrium point.

### Stability Definitions

The equilibrium x = 0 of the system is stable if and only if the solutions of the zero-input state equation are bounded.
Equivalently, x = 0 is a stable equilibrium if and only if for every initial time t0, there exists an associated finite constant k(t0) such that:

${\displaystyle \operatorname {sup} _{t\geq t_{0}}\|\phi (t,t_{0})\|=k(t_{0})<\infty }$

Where sup is the supremum, or least upper bound, of the expression. This value must never exceed the arbitrary finite constant k (and therefore it may not be infinite at any point).

Uniform Stability
The system is defined to be uniformly stable if it is stable for all initial values of t0:

${\displaystyle \operatorname {sup} _{t\geq 0}[\operatorname {sup} _{t\geq t_{0}}\|\phi (t,t_{0})\|]=k_{0}<\infty }$

Uniform stability is a more general and more powerful form of stability than was previously provided.

Asymptotic Stability
A system is defined to be asymptotically stable if:

${\displaystyle \lim _{t\to \infty }\|\phi (t,t_{0})\|=0}$

A time-invariant system is asymptotically stable if all the eigenvalues of the system matrix A have negative real parts. If a system is asymptotically stable, it is also BIBO stable. However, the converse is not true: a system that is BIBO stable might not be asymptotically stable.

Uniform Asymptotic Stability
A system is defined to be uniformly asymptotically stable if the system is asymptotically stable for all values of t0.

Exponential Stability
A system is defined to be exponentially stable if the system response decays exponentially towards zero as time approaches infinity. For linear systems, uniform asymptotic stability is the same as exponential stability. This is not the case with non-linear systems.

### Marginal Stability

Here we will discuss some rules concerning systems that are marginally stable. Because we are discussing eigenvalues and eigenvectors, these theorems only apply to time-invariant systems.

1.
A time-invariant system is marginally stable if and only if all the eigenvalues of the system matrix A have non-positive real parts, and those with zero real parts are simple roots of the minimal polynomial of A.

2. The equilibrium x = 0 of the state equation is uniformly stable if all eigenvalues of A have non-positive real parts, and there is a complete set of distinct eigenvectors associated with the eigenvalues with zero real parts.

3. The equilibrium x = 0 of the state equation is exponentially stable if and only if all eigenvalues of the system matrix A have negative real parts.

## Eigenvalues and Poles

A Linear Time-Invariant (LTI) system is stable (asymptotically stable, see above) if all the eigenvalues of A have negative real parts. Consider the following state equation:

${\displaystyle x'=Ax(t)+Bu(t)}$

We can take the Laplace Transform of both sides of this equation, using initial conditions of x0 = 0:

${\displaystyle sX(s)=AX(s)+BU(s)}$

Subtract AX(s) from both sides:

${\displaystyle sX(s)-AX(s)=BU(s)}$

${\displaystyle (sI-A)X(s)=BU(s)}$

Assuming (sI - A) is nonsingular, we can multiply both sides by the inverse:

${\displaystyle X(s)=(sI-A)^{-1}BU(s)}$

Now, if we remember our formula for finding the matrix inverse from the adjoint matrix:

${\displaystyle A^{-1}={\frac {\operatorname {adj} (A)}{|A|}}}$

We can use that definition here:

${\displaystyle X(s)={\frac {\operatorname {adj} (sI-A)BU(s)}{|(sI-A)|}}}$

Let's look at the denominator (which we will now call D(s)) more closely. The poles of the system are the values of s for which:

${\displaystyle D(s)=|(sI-A)|=0}$

And if we substitute λ for s, we see that this is actually the characteristic equation of matrix A! This means that the values for s that satisfy the equation (the poles of our transfer function) are precisely the eigenvalues of matrix A.
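This equivalence between poles and eigenvalues makes the stability test easy to carry out numerically. In the sketch below, the system matrix A is a hypothetical example (not from the text); the system it describes is asymptotically stable because both eigenvalues have negative real parts:

```python
import numpy as np

# Hypothetical system matrix for x' = Ax
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Asymptotic stability: every eigenvalue of A (i.e., every pole of the
# transfer function) must have a negative real part.
eigenvalues = np.linalg.eigvals(A)
stable = bool(np.all(eigenvalues.real < 0))
print(sorted(eigenvalues.real))  # approximately -2 and -1
print(stable)                    # True
```

For this A the characteristic equation is λ² + 3λ + 2 = (λ + 1)(λ + 2) = 0, so the computed eigenvalues match the hand calculation.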
In the S domain, it is required that all the poles of the system be located in the left-half plane, and therefore all the eigenvalues of A must have negative real parts.

## Impulse Response Matrix

We can define the impulse response matrix, G(t, τ), in order to define further tests for stability:

[Impulse Response Matrix]

${\displaystyle G(t,\tau )=\left\{{\begin{matrix}C(t)\phi (t,\tau )B(\tau )&{\mbox{ if }}t\geq \tau \\0&{\mbox{ if }}t<\tau \end{matrix}}\right.}$

The system is uniformly stable if and only if there exists a finite positive constant L such that for all time t and all initial conditions t0 with ${\displaystyle t\geq t_{0}}$ the following integral is satisfied:

${\displaystyle \int _{0}^{t}\|G(t,\tau )\|d\tau \leq L}$

In other words, the above integral must have a finite value, or the system is not uniformly stable. In the time-invariant case, the impulse response matrix reduces to:

${\displaystyle G(t)=\left\{{\begin{matrix}Ce^{At}B&{\mbox{ if }}t\geq 0\\0&{\mbox{ if }}t<0\end{matrix}}\right.}$

In a time-invariant system, we can use the impulse response matrix to determine if the system is uniformly BIBO stable by taking a similar integral:

${\displaystyle \int _{0}^{\infty }\|G(t)\|dt\leq L}$

Where L is a finite constant.

## Positive Definiteness

These terms are important, and will be used in further discussions on this topic.

• f(x) is positive definite if f(x) > 0 for all x ≠ 0, and f(x) = 0 only if x = 0.
• f(x) is positive semi-definite if ${\displaystyle f(x)\geq 0}$ for all x.
• f(x) is negative definite if f(x) < 0 for all x ≠ 0, and f(x) = 0 only if x = 0.
• f(x) is negative semi-definite if ${\displaystyle f(x)\leq 0}$ for all x.

A Hermitian matrix X is positive definite if all its leading principal minors are positive. Also, a matrix X is positive definite if all its eigenvalues have positive real parts. These two tests may be used interchangeably. Positive definiteness is a very important concept.
So much so that the Lyapunov stability test depends on it. The other categorizations are not as important, but are included here for completeness.

## Lyapunov Stability

### Lyapunov's Equation

For linear systems, we can use the Lyapunov Equation, below, to determine if a system is stable. We will state the Lyapunov Equation first, and then state the Lyapunov Stability Theorem.

[Lyapunov Equation]

${\displaystyle MA+A^{T}M=-N}$

Where A is the system matrix, and M and N are p × p square matrices.

Lyapunov Stability Theorem
An LTI system ${\displaystyle x'=Ax}$ is stable if there exists a matrix M that satisfies the Lyapunov Equation, where N is an arbitrary positive definite matrix and M is a unique positive definite matrix.

Notice that for the Lyapunov Equation to be satisfied, the matrices must be compatible sizes. In fact, matrices A, M, and N must all be square matrices of equal size. Alternatively, we can write:

Lyapunov Stability Theorem (alternate)
If all the eigenvalues of the system matrix A have negative real parts, then the Lyapunov Equation has a unique solution M for every positive definite matrix N, and the solution can be calculated by:

${\displaystyle M=\int _{0}^{\infty }e^{A^{T}t}Ne^{At}dt}$

If the matrix M can be calculated in this manner, the system is asymptotically stable.

# Controllers and Compensators

There are a number of preexisting devices for use in system control, such as lead and lag compensators, and powerful PID controllers. PID controllers are so powerful that many control engineers may use no other method of system control! The chapters in this section will discuss some of the common types of system compensators and controllers.

# Controllability and Observability

## System Interaction

In the world of control engineering, there are a slew of systems available that need to be controlled. The task of a control engineer is to design controller and compensator units to interact with these pre-existing systems.
However, some systems simply cannot be controlled (or, more often, cannot be controlled in specific ways). The concept of controllability refers to the ability of a controller to arbitrarily alter the functionality of the system plant.

The state-variable of a system, x, represents the internal workings of the system that can be separate from the regular input-output relationship of the system. This also needs to be measured, or observed. The term observability describes whether the internal state variables of the system can be externally measured.

## Controllability

Complete state controllability (or simply controllability if no other context is given) describes the ability of an external input to move the internal state of a system from any initial state to any other final state in a finite time interval.

We will start off with the definitions of the term controllability, and the related terms reachability and stabilizability.

Controllability: A system with internal state vector x is called controllable if and only if the system states can be changed by changing the system input.

Reachability: A particular state x1 is called reachable if there exists an input that transfers the state of the system from the initial state x0 to x1 in some finite time interval [t0, t).

Stabilizability: A system is stabilizable if all states that cannot be reached decay to zero asymptotically.

We can also write out the definition of reachability more precisely: A state x1 is called reachable at time t1 if for some finite initial time t0 there exists an input u(t) that transfers the state x(t) from the origin at t0 to x1. A system is reachable at time t1 if every state x1 in the state-space is reachable at time t1.

Similarly, we can more precisely define the concept of controllability: A state x0 is controllable at time t0 if for some finite time t1 there exists an input u(t) that transfers the state x(t) from x0 to the origin at time t1.
A system is called controllable at time t0 if every state x0 in the state-space is controllable.

### Controllability Matrix

For LTI (linear time-invariant) systems, a system is reachable if and only if its controllability matrix, ζ, has a full row rank of p, where p is the dimension of the matrix A, and p × q is the dimension of matrix B.

[Controllability Matrix]

${\displaystyle \zeta ={\begin{bmatrix}B&AB&A^{2}B&\cdots &A^{p-1}B\end{bmatrix}}\in R^{p\times pq}}$

A system is controllable or "controllable to the origin" when any state x1 can be driven to the zero state x = 0 in a finite number of steps. A system is controllable when the rank of the system matrix A is p, and the rank of the controllability matrix is equal to:

${\displaystyle Rank(\zeta )=Rank(A^{-1}\zeta )=p}$

If the second equation is not satisfied, the system is not controllable.

MATLAB allows one to easily create the controllability matrix with the ctrb command. To create the controllability matrix ${\displaystyle \zeta }$ simply type ${\displaystyle \zeta =ctrb(A,B)}$ where A and B are mentioned above. Then in order to determine if the system is controllable or not one can use the rank command to determine if it has full rank.

If ${\displaystyle Rank(A)<p}$ (that is, if A is singular), then controllability does not imply reachability.

• Reachability always implies controllability.
• Controllability only implies reachability when the state transition matrix is nonsingular.

### Determining Reachability

There are four methods that can be used to determine if a system is reachable or not:

1. If the p rows of ${\displaystyle \phi (t,\tau )B(t)}$ are linearly independent over the field of complex numbers. That is, if the rank of the product of those two matrices is equal to p for all values of t and τ.
2. If the rank of the controllability matrix is the same as the rank of the system matrix A.
3. If ${\displaystyle \operatorname {rank} [\lambda I-A,B]=p}$ for all eigenvalues λ of the matrix A.
4.
If the rank of the reachability gramian (described below) is equal to the rank of the system matrix A.

Each one of these conditions is both necessary and sufficient. If any one test fails, all the tests will fail, and the system is not reachable. If any test is positive, then all the tests will be positive, and the system is reachable.

### Gramians

Gramians are complicated mathematical functions that can be used to determine specific things about a system. For instance, we can use gramians to determine whether a system is controllable or reachable. Gramians, because they are more complicated than other methods, are typically only used when other methods of analyzing a system fail (or are too difficult).

All the gramians presented on this page are matrices with dimension p × p (the same size as the system matrix A).

All the gramians presented here will be described using the general case of linear time-variant systems. To change these into LTI (time-invariant) equations, the following substitutions can be used:

${\displaystyle \phi (t,\tau )\to e^{A(t-\tau )}}$

${\displaystyle \phi '(t,\tau )\to e^{A'(t-\tau )}}$

Where we are using the notation X' to denote the transpose of a matrix X (as opposed to the traditional notation X^T).
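Before turning to the gramians, the controllability-matrix rank test described earlier can be checked numerically. The sketch below assumes NumPy and mirrors MATLAB's ctrb command; the two-state double-integrator system used here is an illustrative assumption, not an example from the text:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix zeta = [B, AB, A^2 B, ..., A^(p-1) B]."""
    p = A.shape[0]
    blocks = [B]
    for _ in range(p - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# hypothetical example: a double integrator with a force input
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
zeta = ctrb(A, B)
# rank(zeta) == p == 2, so this system passes the controllability rank test
rank = np.linalg.matrix_rank(zeta)
```

As with MATLAB's `rank(ctrb(A,B))`, comparing the computed rank against p decides the test.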
### Reachability Gramian

We can define the reachability gramian as the following integral:

[Reachability Gramian]

${\displaystyle W_{r}(t_{0},t_{1})=\int _{t_{0}}^{t_{1}}\phi (t_{1},\tau )B(\tau )B'(\tau )\phi '(t_{1},\tau )d\tau }$

The system is reachable if the rank of the reachability gramian is the same as the rank of the system matrix:

${\displaystyle \operatorname {rank} (W_{r})=p}$

### Controllability Gramian

We can define the controllability gramian of a system (A, B) as:

[Controllability Gramian]

${\displaystyle W_{c}(t_{0},t_{1})=\int _{t_{0}}^{t_{1}}\phi (t_{0},\tau )B(\tau )B'(\tau )\phi '(t_{0},\tau )d\tau }$

The system is controllable if the rank of the controllability gramian is the same as the rank of the system matrix:

${\displaystyle \operatorname {rank} (W_{c})=p}$

If the system is time-invariant, there are two important points to be made. First, the reachability gramian and the controllability gramian reduce to be the same equation. Therefore, for LTI systems, if we have found one gramian, then we automatically know both gramians. Second, the controllability gramian can also be found as the solution to the following Lyapunov equation:

${\displaystyle AW_{c}+W_{c}A'=-BB'}$

Many software packages, notably MATLAB, have functions to solve the Lyapunov equation. By using this last relation, we can also solve for the controllability gramian using these existing functions.

## Observability

The state-variables of a system might not be able to be measured for any of the following reasons:

1. The location of the particular state variable might not be physically accessible (a capacitor or a spring, for instance).
2. There are no appropriate instruments to measure the state variable, or the state-variable might be measured in units for which there does not exist any measurement device.
3. The state-variable is a derived "dummy" variable that has no physical meaning.
If things cannot be directly observed, for any of the reasons above, it can be necessary to calculate or estimate the values of the internal state variables, using only the input/output relation of the system, and the output history of the system from the starting time. In other words, we must ask whether or not it is possible to determine what the inside of the system (the internal system states) is like, by only observing the outside performance of the system (input and output). We can provide the following formal definition of mathematical observability:

Observability: A system with an initial state ${\displaystyle x(t_{0})}$ is observable if and only if the value of the initial state can be determined from the system output y(t) that has been observed through the time interval ${\displaystyle t_{0}<t<t_{1}}$. If the initial state cannot be so determined, the system is unobservable.

Complete Observability: A system is said to be completely observable if all the possible initial states of the system can be observed. Systems that fail this criterion are said to be unobservable.

Detectability: A system is detectable if all states that cannot be observed decay to zero asymptotically.

Constructability: A system is constructable if the present state of the system can be determined from the present and past outputs and inputs to the system.

If a system is observable, then it is also constructable. The relationship does not work the other way around.

A system state xi is unobservable at a given time ti if the zero-input response of the system is zero for all time t. If a system is observable, then the only state that produces a zero output for all time is the zero state. We can use this concept to define the term state-observability.

State-Observability: A system is completely state-observable at time t0, or the pair (A, C) is observable at t0, if the only state that is unobservable at t0 is the zero state x = 0.
### Constructability

A state x is unconstructable at a time t1 if for every finite time t < t1 the zero-input response of the system is zero for all time t. A system is completely state constructable at time t1 if the only state x that is unconstructable at t1 is x = 0. If a system is observable at an initial time t0, then it is constructable at some later time t1 > t0.

### Observability Matrix

The observability of the system is dependent only on the system states and the system output, so we can simplify our state equations to remove the input terms:

Matrix Dimensions: A: p × p, B: p × q, C: r × p, D: r × q

${\displaystyle x'(t)=Ax(t)}$

${\displaystyle y(t)=Cx(t)}$

Therefore, we can show that the observability of the system is dependent only on the coefficient matrices A and C. We can show precisely how to determine whether a system is observable, using only these two matrices. If we have the observability matrix Q:

[Observability Matrix]

${\displaystyle Q={\begin{bmatrix}C\\CA\\CA^{2}\\\vdots \\CA^{p-1}\end{bmatrix}}}$

we can show that the system is observable if and only if the Q matrix has a rank of p. Notice that the Q matrix has the dimensions pr × p.

MATLAB allows one to easily create the observability matrix with the obsv command. To create the observability matrix ${\displaystyle Q}$ simply type Q=obsv(A,C) where A and C are mentioned above. Then in order to determine if the system is observable or not one can use the rank command to determine if it has full rank.

### Observability Gramian

We can define an observability gramian as:

[Observability Gramian]

${\displaystyle W_{o}(t_{0},t_{1})=\int _{t_{0}}^{t_{1}}\phi '(\tau ,t_{0})C'(\tau )C(\tau )\phi (\tau ,t_{0})d\tau }$

A system is completely state observable at time t0 < t < t1 if and only if the rank of the observability gramian is equal to the size p of the system matrix A.
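The observability-matrix rank test described above can be checked numerically in the same way as the controllability test. This is a minimal NumPy sketch mirroring MATLAB's obsv command; the two-state system and single-output C matrix are illustrative assumptions:

```python
import numpy as np

def obsv(A, C):
    """Observability matrix Q = [C; CA; CA^2; ...; CA^(p-1)]."""
    p = A.shape[0]
    blocks = [C]
    for _ in range(p - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# hypothetical example: measure only the first state of a two-state system
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
Q = obsv(A, C)
# rank(Q) == p == 2, so the pair (A, C) is observable
rank = np.linalg.matrix_rank(Q)
```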
If the system (A, B, C, D) is time-invariant, we can construct the observability gramian as the solution to the Lyapunov equation:

${\displaystyle A'W_{o}+W_{o}A=-C'C}$

### Constructability Gramian

We can define a constructability gramian as:

[Constructability Gramian]

${\displaystyle W_{cn}(t_{0},t_{1})=\int _{t_{0}}^{t_{1}}\phi '(\tau ,t_{1})C'(\tau )C(\tau )\phi (\tau ,t_{1})d\tau }$

A system is completely state observable at an initial time t0 if and only if there exists a finite t1 such that:

${\displaystyle \operatorname {rank} (W_{o})=\operatorname {rank} (W_{cn})=p}$

Notice that the constructability and observability gramians are very similar, and typically they can both be calculated at the same time, only substituting in different values into the state-transition matrix.

## Duality Principle

The concepts of controllability and observability are very similar. In fact, there is a concrete relationship between the two. We can say that a pair (A, B) is controllable if and only if the pair (A', B') is observable, with B' playing the role of the output matrix C. This fact can be proven by plugging A' in for A, and B' in for C into the observability gramian. The resulting equation will exactly mirror the formula for the controllability gramian, implying that the two results are the same.

# System Specifications

## System Specification

There are a number of different specifications that might need to be met by a new system design. In this chapter we will talk about some of the specifications that systems use, and some of the ways that engineers analyze and quantify systems.

## Sensitivity

The sensitivity of a system is a parameter that is specified in terms of a given output and a given input. The sensitivity measures how much change is caused in the output by small changes to the reference input. Sensitive systems have very large changes in output in response to small changes in the input.
The sensitivity of system H to input X is denoted as:

${\displaystyle S_{H}^{X}(s)}$

## Disturbance Rejection

All physically-realized systems have to deal with a certain amount of noise and disturbance. The ability of a system to reject the noise is known as the disturbance rejection of the system.

## Control Effort

The control effort is the amount of energy or power necessary for the controller to perform its duty.

# Controllers

## Controllers

There are a number of different standard types of control systems that have been studied extensively. These controllers, specifically the P, PD, PI, and PID controllers, are very common in the production of physical systems, but as we will see they each carry several drawbacks.

## Proportional Controllers

A Proportional controller block diagram

Proportional controllers are simply gain values. These are essentially multiplicative coefficients, usually denoted with a K. A P controller can only force the system poles to a spot on the system's root locus. A P controller cannot be used for arbitrary pole placement. We refer to this kind of controller by a number of different names: proportional controller, gain, and zeroth-order controller.

## Derivative Controllers

A Proportional-Derivative controller block diagram

In the Laplace domain, we can show the derivative of a signal using the following notation:

${\displaystyle D(s)={\mathcal {L}}\left\{f'(t)\right\}=sF(s)-f(0)}$

Since most systems that we are considering have zero initial condition, this simplifies to:

${\displaystyle D(s)={\mathcal {L}}\left\{f'(t)\right\}=sF(s)}$

Derivative controllers are implemented to account for future values, by taking the derivative and controlling based on where the signal is going to be in the future. Derivative controllers should be used with care, because even a small amount of high-frequency noise can cause very large derivatives, which appear like amplified noise.
Also, derivative controllers are difficult to implement perfectly in hardware or software, so frequently solutions involving only integral controllers or proportional controllers are preferred over using derivative controllers.

Notice that derivative controllers are not proper systems, in that the order of the numerator of the system is greater than the order of the denominator of the system. This quality of being a non-proper system also makes certain mathematical analysis of these systems difficult.

### Z-Domain Derivatives

We won't derive this equation here, but suffice it to say that the following equation in the Z-domain performs the same function as the Laplace-domain derivative:

${\displaystyle D(z)={\frac {z-1}{Tz}}}$

Where T is the sampling time of the signal.

## Integral Controllers

A Proportional-Integral Controller block diagram

To implement an integral in a Laplace-domain transfer function, we use the following:

${\displaystyle {\mathcal {L}}\left\{\int _{0}^{t}f(t)\,dt\right\}={1 \over s}F(s)}$

Integral controllers of this type add up the area under the curve for past time. In this manner, a PI controller (and eventually a PID) can take account of the past performance of the controller, and correct based on past errors.

### Z-Domain Integral

The integral controller can be implemented in the Z domain using the following equation:

${\displaystyle D(z)={\frac {z+1}{z-1}}}$

## PID Controllers

A block diagram of a PID controller

PID controllers are combinations of the proportional, derivative, and integral controllers. Because of this, PID controllers have large amounts of flexibility. We will see below that there are definite limits on PID control.

### PID Transfer Function

The transfer function for a standard PID controller is an addition of the Proportional, the Integral, and the Differential controller transfer functions (hence the name, PID).
Also, we give each term a gain constant, to control the weight that each factor has on the final output:

[PID]

${\displaystyle D(s)=K_{p}+{K_{i} \over s}+K_{d}s}$

Notice that we can write the transfer function of a PID controller in a slightly different way:

${\displaystyle D(s)={\frac {A_{0}+A_{1}s}{B_{0}+B_{1}s}}}$

This form of the equation will be especially useful to us when we look at polynomial design.

### PID Tuning

The process of selecting the various coefficient values to make a PID controller perform correctly is called PID Tuning. There are a number of different methods for determining these values:[1]

1. Direct Synthesis (DS) method
2. Internal Model Control (IMC) method
3. Controller tuning relations
4. Frequency response techniques
5. Computer simulation
6. On-line tuning after the control system is installed
7. Trial and error

Notes:

1. Seborg, Dale E.; Edgar, Thomas F.; Mellichamp, Duncan A. (2003). Process Dynamics and Control, Second Edition. John Wiley & Sons, Inc. ISBN 0471000779.

### Digital PID

In the Z domain, the PID controller has the following transfer function:

[Digital PID]

${\displaystyle D(z)=K_{p}+K_{i}{\frac {T}{2}}\left[{\frac {z+1}{z-1}}\right]+K_{d}\left[{\frac {z-1}{Tz}}\right]}$

And we can convert this into a canonical equation by manipulating the above equation to obtain:

${\displaystyle D(z)={\frac {a_{0}+a_{1}z^{-1}+a_{2}z^{-2}}{1+b_{1}z^{-1}+b_{2}z^{-2}}}}$

Where:

${\displaystyle a_{0}=K_{p}+{\frac {K_{i}T}{2}}+{\frac {K_{d}}{T}}}$

${\displaystyle a_{1}=-K_{p}+{\frac {K_{i}T}{2}}+{\frac {-2K_{d}}{T}}}$

${\displaystyle a_{2}={\frac {K_{d}}{T}}}$

${\displaystyle b_{1}=-1}$

${\displaystyle b_{2}=0}$

Once we have the Z-domain transfer function of the PID controller, we can convert it into the digital time domain:

${\displaystyle y[n]=x[n]a_{0}+x[n-1]a_{1}+x[n-2]a_{2}-y[n-1]b_{1}-y[n-2]b_{2}}$

And finally, from this difference equation, we can create a digital filter structure to implement the PID.
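The difference equation and coefficient formulas above translate directly into code. This is a minimal Python sketch of that digital filter structure (the class name is illustrative; T must be a positive sampling period):

```python
class DigitalPID:
    """Difference-equation form of the digital PID derived above.

    Kp, Ki, Kd are the proportional, integral, and derivative gains;
    T is the sampling period (must be > 0 since Kd/T appears in a0).
    """
    def __init__(self, Kp, Ki, Kd, T):
        # coefficients from the canonical Z-domain form above
        self.a0 = Kp + Ki * T / 2 + Kd / T
        self.a1 = -Kp + Ki * T / 2 - 2 * Kd / T
        self.a2 = Kd / T
        self.b1, self.b2 = -1.0, 0.0
        # delayed input (x) and output (y) samples, initially zero
        self.x1 = self.x2 = self.y1 = self.y2 = 0.0

    def step(self, x):
        """Advance one sample: y[n] = a0 x[n] + a1 x[n-1] + a2 x[n-2]
        - b1 y[n-1] - b2 y[n-2]."""
        y = (x * self.a0 + self.x1 * self.a1 + self.x2 * self.a2
             - self.y1 * self.b1 - self.y2 * self.b2)
        self.x2, self.x1 = self.x1, x
        self.y2, self.y1 = self.y1, y
        return y
```

As a sanity check, with Ki = Kd = 0 the recursion collapses to a pure gain y[n] = Kp x[n], and with only Ki set it accumulates the input trapezoidally, as the Tustin integrator (z+1)/(z-1) term predicts.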
## Bang-Bang Controllers

Despite the low-brow sounding name of the Bang-Bang controller, it is a very useful tool that is only really available using digital methods. A better name perhaps for a bang-bang controller is an on/off controller, where a digital system makes decisions based on target and threshold values, and decides whether to turn the controller on and off. Bang-bang controllers are a non-linear style of control.

Consider the example of a household furnace. The oil in a furnace burns at a specific temperature: it can't burn hotter or cooler. To control the temperature in your house, then, the thermostat control unit decides when to turn the furnace on, and when to turn the furnace off. This on/off control scheme is a bang-bang controller.

## Compensation

There are a number of different compensation units that can be employed to help fix certain system metrics that are outside of a proper operating range. Most commonly, the phase characteristics are in need of compensation, especially if the magnitude response is to remain constant. There are four major types of compensation:

1. Lead compensation
2. Lag compensation
3. Lead-lag compensation
4. Lag-lead compensation

## Phase Compensation

Occasionally, it is necessary to alter the phase characteristics of a given system, without altering the magnitude characteristics. To do this, we need to alter the frequency response in such a way that the phase response is altered, but the magnitude response is not altered. To do this, we implement a special variety of controllers known as phase compensators. They are called compensators because they help to improve the phase response of the system.

There are two general types of compensators: lead compensators and lag compensators. If we combine the two types, we can get a special lag-lead compensator system. (A lead-lag system is not practically realizable.)
When designing and implementing a phase compensator, it is important to analyze the effects on the gain and phase margins of the system, to ensure that compensation doesn't cause the system to become unstable.

## Phase Lead

Phase lead compensation is equivalent to adding a zero to the open-loop transfer function: since the zero lies nearer to the origin than the pole, the effect of the zero dominates. The transfer function for a lead compensator is as follows:

[Lead Compensator]

${\displaystyle T_{lead}(s)={\frac {s-z}{s-p}}}$

To make the compensator work correctly, the following property must be satisfied:

${\displaystyle |z|<|p|}$

And both the pole and zero location should be close to the origin, in the LHP. Because there is only one pole and one zero, they both should be located on the real axis. Phase lead compensators help to shift the poles of the transfer function to the left, which is beneficial for stability purposes.

## Phase Lag

The transfer function for a lag compensator is the same as the lead compensator, and is as follows:

[Lag Compensator]

${\displaystyle T_{lag}(s)={\frac {s-z}{s-p}}}$

However, in the lag compensator, the location of the pole and zero should be swapped:

${\displaystyle |p|<|z|}$

Both the pole and the zero should be close to the origin, on the real axis. The phase lag compensator helps to improve the steady-state error of the system. The poles of the lag compensator should be very close together to help prevent the poles of the system from shifting right, and therefore reducing system stability.

## Phase Lag-Lead

A lag-lead compensator combines one lead stage with one lag stage:

${\displaystyle T_{Lag-lead}(s)={\frac {(s-z_{1})(s-z_{2})}{(s-p_{1})(s-p_{2})}}.}$

${\displaystyle |p_{1}|>|z_{1}|>|z_{2}|>|p_{2}|}$
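The phase contribution of a lead compensator can be checked numerically. The sketch below uses a hypothetical zero z = -1 and pole p = -10 (satisfying |z| < |p|, both on the real axis in the LHP) and evaluates the phase of T_lead(jω) at the geometric mean of the corner frequencies, where the phase lead is maximal:

```python
import cmath
import math

# hypothetical lead compensator T(s) = (s - z)/(s - p) with |z| < |p|
z, p = -1.0, -10.0

def t_lead(s):
    return (s - z) / (s - p)

# maximum phase lead occurs at the geometric mean of the corner frequencies
w_max = math.sqrt(abs(z) * abs(p))
phase = cmath.phase(t_lead(1j * w_max))
# phase > 0: the lead network adds positive phase (about 54.9 degrees here),
# matching the classic formula sin(phi_max) = (|p| - |z|)/(|p| + |z|)
```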
# DiscreteMaxLimit

DiscreteMaxLimit[f,k→∞] gives the max limit of the sequence f(k) as k tends to ∞ over the integers.

DiscreteMaxLimit[f,{k1,…,kn}] gives the nested max limit of f(k1,…,kn) over the integers.

DiscreteMaxLimit[f,{k1,…,kn}→{∞,…,∞}] gives the multivariate max limit of f(k1,…,kn) over the integers.

# Details and Options

• DiscreteMaxLimit is also known as limit superior, supremum limit, limsup, upper limit and outer limit.
• DiscreteMaxLimit computes the smallest upper bound for the limit and is always defined for real-valued sequences. It is often used to give conditions of convergence and other asymptotic properties that do not rely on an actual limit to exist.
• DiscreteMaxLimit[f,k→∞] can be entered in typeset form as an underscripted max-limit operator. A template can be entered as dMlim, and Tab moves the cursor from the underscript to the body.
• DiscreteMaxLimit[f,{k1,…,kn}→{∞,…,∞}] can likewise be entered in typeset form.
• The possible limit points are ±∞.
• The max limit is defined as a limit of the max envelope sequence max[ω]:
  DiscreteMaxLimit[f,k→∞] = DiscreteLimit[max[ω],ω→∞]
  DiscreteMaxLimit[f,{k1,…,kn}→{∞,…,∞}] = DiscreteLimit[max[ω],ω→∞]
• DiscreteMaxLimit[f[k],k→-∞] is equivalent to DiscreteMaxLimit[f[-l],l→∞] etc.
• The definition uses the max envelope max[ω] = MaxValue[{f[k], k≥ω ∧ k∈Integers}, k] for univariate f[k] and max[ω] = MaxValue[{f[k1,…,kn], k1≥ω ∧ … ∧ kn≥ω ∧ ki∈Integers}, {k1,…,kn}] for multivariate f[k1,…,kn]. The sequence max[ω] is monotone decreasing as ω→∞, so it always has a limit, which may be ±∞.
• The illustration shows max[k] and max[Min[k1,k2]] in blue.
• DiscreteMaxLimit returns unevaluated when the max limit cannot be found.
• The following options can be given:

| Option | Default | Description |
|---|---|---|
| Assumptions | $Assumptions | assumptions on parameters |
| GenerateConditions | Automatic | whether to generate conditions on parameters |
| Method | Automatic | method to use |
| PerformanceGoal | "Quality" | aspects of performance to optimize |

• Possible settings for GenerateConditions include:

| Setting | Behavior |
|---|---|
| Automatic | non-generic conditions only |
| True | all conditions |
| False | no conditions |
| None | return unevaluated if conditions are needed |

• Possible settings for PerformanceGoal include $PerformanceGoal, "Quality" and "Speed". With the "Quality" setting, DiscreteMaxLimit typically solves more problems or produces simpler results, but it potentially uses more time and memory.

# Examples

## Basic Examples (4)

Max limit of a sequence:
Max limit of a product:
Use dMlim to enter the template and to move from the underscript to the body:

## Scope (21)

### Basic Uses (4)

Compute the max limit of a sequence when n approaches Infinity:
Compute the max limit of a sequence when n approaches -∞:
Compute a nested max limit for a multivariate sequence:
Compute the max limit of a list of sequences:

### Elementary Sequences (6)

Find the max limit of a rational-exponential sequence:
Convergent geometric sequence:
Oscillating geometric sequence:
Divergent oscillating geometric sequence:
Exponential sequence:
Power sequence:
Trigonometric sequences:
Inverse trigonometric sequence:
Logarithmic sequence:

### Periodic Sequences (3)

Limits of periodic sequences:
Eventually periodic sequence:
Densely aperiodic sequences:

### Piecewise Sequences (2)

Piecewise sequence with a finite max limit:
Piecewise sequence with an infinite max limit:
Piecewise sequence with periodic conditions:

### Special Function Sequences (2)

Compute the limit of a sequence involving Fibonacci:
Sequence involving FactorialPower:

### Number Theoretic Sequences (2)

Limits involving LCM and GCD:
Sequence involving Prime:

### Multivariate Sequences (2)

Compute a nested max limit:
Plot the sequence and its limit:
Multivariate max limits:

## Options (6)

### Assumptions (1)

Specify assumptions on a parameter:
Different assumptions can produce different results:

### GenerateConditions (3)

Return a result without stating conditions:
This result is only valid if x>1:
Return unevaluated if the results depend on the value of parameters:
By default, conditions are generated that return a unique result:
By default, conditions are not generated if only special values invalidate the result:
With GenerateConditions→True, even these non-generic conditions are reported:

### Method (1)

Compute the max limit of a periodic sequence using the default method:
Obtain the same answer using the method for periodic sequences:
The limit of the sequence is undefined, since it oscillates between 0 and 1:

### PerformanceGoal (1)

DiscreteMaxLimit computes limits involving sequences of arbitrarily large periods:
Use PerformanceGoal to avoid potentially expensive computations in such cases:
The Method option overrides PerformanceGoal:

## Applications (7)

### Basic Applications (2)

Compute the asymptotic supremum of a sequence:
Plot the sequence and the asymptotic supremum:
Verify that the following sequence does not have a limit:
Show that DiscreteMaxLimit and DiscreteMinLimit are not equal:
Confirm that the limit does not exist by using DiscreteLimit:

### Series Convergence (4)

Show that the infinite series whose general term is defined here is convergent, by using the ratio test:
Plot the partial sums of the series:
Compute the ratio of the adjacent terms using DiscreteRatio:
The sequence of ratios does not converge:
However, the ratio test can still be used because the upper limit of the ratios is less than 1:
Confirm that the series converges using SumConvergence:
Evaluate the infinite series:
Show that the infinite series whose general term is defined here is convergent, by using the root test:
Plot the partial sums of the series:
Compute the nth root of the general term:
The limit of the sequence of roots does not exist: However,
the root test still indicates convergence because the max limit is less than 1:
Confirm that the series converges using SumConvergence:
Evaluate the infinite series:
Consider the sequence :
The inverse radius of the associated power series is given by:
This means the radius of convergence is infinite and converges for all , in particular to :
Compute the Taylor series at zero and its radius of convergence for the following function:
The Taylor coefficient is :
Formally, the Taylor series does sum to the original function:
The radius of convergence of the Taylor series is given by:
This means the Taylor series will converge for values of within of the origin. For example, at :
At values of further away, the sum will not converge; for example, at :
At the points , the terms of the Taylor series alternate between and :
Hence the partial sums go between and :
Visualize and the partial sums of its Taylor series on the interval ; in the interior of the interval, convergence is rapid, but the Taylor polynomials always go to either or at the endpoints:

### Computational Complexity (1)

An algorithm runtime function is said to be "big-o of ", written , if :
Similarly, is said to be "big-theta of ", written if and :
The statement is always true:
If and , then :
It is possible for two functions to share neither relationship:
Hence, defines a reflexive partial order on the space of algorithm runtimes similar to :
If and , then , which implies that is an equivalence relation:

## Properties & Relations (11)

A real-valued sequence always has a (possibly infinite) max limit:
The corresponding limit may not exist:
If and have finite max limits, then :
In this case, there is strict inequality:
Positive multiplicative constants can be moved outside a limit:
For a real-valued sequence, if DiscreteLimit exists, DiscreteMaxLimit has the same value:
If has a finite limit, then :
DiscreteMaxLimit is always greater than or equal to DiscreteMinLimit:
If DiscreteMaxLimit equals
DiscreteMinLimit, the limit exists and equals their common value:
If the max limit is -∞, then the min limit and thus the limit are also -∞:
DiscreteMaxLimit can be computed as -DiscreteMinLimit[-f,k→∞]:
If , then :
If the two max limits are equal, as in this example, then has a limit:
This is a generalization of the "squeezing" or "sandwich" theorem:
MaxLimit is always greater than or equal to DiscreteMaxLimit:

## Possible Issues (1)

DiscreteMaxLimit is only defined for real-valued sequences:

## Neat Examples (1)

Visualize a set of sequence max limits:

Wolfram Research (2017), DiscreteMaxLimit, Wolfram Language function, https://reference.wolfram.com/language/ref/DiscreteMaxLimit.html.
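The max-envelope definition underlying DiscreteMaxLimit can also be illustrated outside the Wolfram Language. This Python sketch approximates max[ω] with a finite horizon (an assumption; the true envelope is a supremum over all k ≥ ω) for an oscillating sequence whose max limit is 1:

```python
def max_envelope(f, omega, horizon=100):
    """Finite-horizon stand-in for the max envelope sup over k >= omega of f(k)."""
    return max(f(k) for k in range(omega, omega + horizon))

# oscillating sequence with no ordinary limit; its max limit is 1
f = lambda k: (-1) ** k * (1 + 1 / k)
env = [max_envelope(f, w) for w in (2, 10, 100, 1000)]
# env is monotone non-increasing in omega and tends to the max limit, 1
```

The envelope values here shrink toward 1, mirroring the documented property that max[ω] is monotone decreasing and DiscreteMaxLimit is its limit.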