# analysis question Suppose $f(x) \in \mathbb{R}[x]$ is such that $\operatorname{deg}{f(x)} = 2011$; then $\exists \: c \in \mathbb{R}$ such that $f(c) = f'(c)$. How can I prove or disprove the above statement? Any hints? - Consider $g(x) = f(x)-f'(x)$. Now apply the fact that an odd degree polynomial has at least one real root. - +1 Painfully and beautifully simple... –  DonAntonio May 30 '12 at 2:39 Chandrasekhar's answer is boss; I wondered what would happen if we replaced 2011 with, say, 2010. But note that any polynomial $f = \sum b_i x^i \in \mathbb{R}[x]$ can be written as $p - p'$, $p = \sum a_i x^i \in \mathbb{R}[x]$ (set $a_n$ to be $b_n$, where $f$ has degree $n$, then just inductively solve $a_i - (i+1)a_{i+1} = b_i$ for $a_{i}$), so this being true for $f$ of even degree is equivalent to $\mathbb{R}$ being algebraically closed o_O! -
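The inductive construction in the last comment is easy to check by machine. Here is a small sketch (plain Python, with invented function names and an invented example polynomial) that builds $p$ from $f$ via $a_i = b_i + (i+1)a_{i+1}$ and verifies $p - p' = f$:

```python
def p_coeffs(b):
    # given f = sum b[i] x^i, build p = sum a[i] x^i with p - p' = f,
    # using a[n] = b[n] and a[i] = b[i] + (i+1)*a[i+1]
    n = len(b) - 1
    a = [0] * (n + 1)
    a[n] = b[n]
    for i in range(n - 1, -1, -1):
        a[i] = b[i] + (i + 1) * a[i + 1]
    return a

def p_minus_pprime(a):
    # coefficients of p - p'
    n = len(a) - 1
    return [a[i] - ((i + 1) * a[i + 1] if i < n else 0) for i in range(n + 1)]

b = [5, -2, 0, 7]          # f(x) = 7x^3 - 2x + 5, an invented example
a = p_coeffs(b)
assert p_minus_pprime(a) == b   # p - p' recovers f exactly
```

The same loop works for any degree, which is what makes the comment's reduction go through.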
# How do you differentiate xy^2-x^3y=6? Mar 9, 2015 You have an implicit function in which $y$ is a function of $x$, so you have to differentiate it as well. Using the Product Rule you get: $\left(1 \cdot {y}^{2} + 2 x y \frac{\mathrm{dy}}{\mathrm{dx}}\right) - \left(3 {x}^{2} y + {x}^{3} \frac{\mathrm{dy}}{\mathrm{dx}}\right) = 0$ Then: ${y}^{2} + 2 x y \frac{\mathrm{dy}}{\mathrm{dx}} - 3 {x}^{2} y - {x}^{3} \frac{\mathrm{dy}}{\mathrm{dx}} = 0$ Collecting $\frac{\mathrm{dy}}{\mathrm{dx}}$: $\frac{\mathrm{dy}}{\mathrm{dx}} \left[2 x y - {x}^{3}\right] = 3 {x}^{2} y - {y}^{2}$ $\frac{\mathrm{dy}}{\mathrm{dx}} = \frac{y \left(3 {x}^{2} - y\right)}{x \left(2 y - {x}^{2}\right)}$ Hope it helps
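As a sanity check on the result (a plain-Python sketch, not part of the original answer): the point $(1, -2)$ lies on the curve, so we can solve the quadratic in $y$ explicitly on that branch, differentiate it numerically, and compare with the formula above:

```python
import math

def y_branch(x):
    # solve x*y**2 - x**3*y - 6 = 0 for y; this branch passes through (1, -2)
    return (x**3 - math.sqrt(x**6 + 24 * x)) / (2 * x)

def dydx(x, y):
    # the implicit-differentiation result: dy/dx = y(3x^2 - y) / (x(2y - x^2))
    return y * (3 * x**2 - y) / (x * (2 * y - x**2))

x0 = 1.0
y0 = y_branch(x0)                                       # -2.0
h = 1e-6
numeric = (y_branch(x0 + h) - y_branch(x0 - h)) / (2 * h)
assert abs(y0 + 2.0) < 1e-12
assert abs(numeric - dydx(x0, y0)) < 1e-5               # both give 2
```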
# zbMATH — the first resource for mathematics Determinantal equations for curves of high degree. (English) Zbl 0681.14027 Let C be a reduced, irreducible curve of arithmetic genus g and let $$L_1, L_2$$ be two line bundles on C of degree $$\geq 2g+1,$$ nonisomorphic if $$g>0$$ and both have degree $$2g+1.$$ If $${\mathfrak L}_i$$, $$i=1,2$$, denotes the complete linear series $$(L_i,H^0(L_i))$$ on C, $${\mathfrak L}_1{\mathfrak L}_2$$ the product series $$(L_1\otimes L_2,V)$$ and $$\phi_{{\mathfrak L}_1{\mathfrak L}_2}$$ the associated rational map from C to $${\mathbb{P}}(V)$$, then the main result (see theorem 1) of this paper says that the (maximal) homogeneous ideal of $$\phi_{{\mathfrak L}_1{\mathfrak L}_2}(C)$$ is generated by the $$2\times 2$$-minors of a matrix of linear forms. This is an extension of a corresponding result of Castelnuovo (see theorem). As a corollary one gets that any curve of genus $$g>0,$$ embedded by a complete linear series of degree $$4g+2$$ into $${\mathbb{P}}^{3g+2}$$ has equations which may be realized as the $$2\times 2$$ minors of a $$(g+2)\times (g+2)$$ matrix of linear forms. Examples of the determinantal representations of elliptic and hyperelliptic curves are considered. In an appendix a generalized form of Clifford’s theorem (for singular curves) is given (see theorem A). Reviewer: M.Herrmann ##### MSC: 14M12 Determinantal varieties 14H45 Special algebraic curves and curves of low genus 13A15 Ideals and multiplicative ideal theory in commutative rings Full Text:
# Experiment Settings¶ This section describes the settings that are available when running an experiment. ## Display Name¶ Optional: Specify a display name for the new experiment. There are no character or length restrictions for naming. If this field is left blank, Driverless AI will automatically generate a name for the experiment. ## Dropped Columns¶ Dropped columns are columns that you do not want to be used as predictors in the experiment. Note that Driverless AI will automatically drop ID columns and columns that contain a significant number of unique values (above max_relative_cardinality in the config.toml file or Max. allowed fraction of uniques for integer and categorical cols in Expert settings). ## Validation Dataset¶ The validation dataset is used for tuning the modeling pipeline. If provided, the entire training data will be used for training, and validation of the modeling pipeline is performed with only this validation dataset. When you do not include a validation dataset, Driverless AI will do K-fold cross validation for I.I.D. experiments and multiple rolling window validation splits for time series experiments. For this reason it is not generally recommended to include a validation dataset as you are then validating on only a single dataset. Please note that time series experiments cannot be used with a validation dataset: including a validation dataset will disable the ability to select a time column and vice versa. This dataset must have the same number of columns (and column types) as the training dataset. Also note that if provided, the validation set is not sampled down, so it can lead to large memory usage, even if accuracy=1 (which reduces the train size). ## Test Dataset¶ The test dataset is used for testing the modeling pipeline and creating test predictions. The test set is never used during training of the modeling pipeline. (Results are the same whether a test set is provided or not.) 
If a test dataset is provided, then test set predictions will be available at the end of the experiment. ## Weight Column¶ Optional: Column that indicates the observation weight (a.k.a. sample or row weight), if applicable. This column must be numeric with values >= 0. Rows with higher weights have higher importance. The weight affects model training through a weighted loss function and affects model scoring through weighted metrics. The weight column is not used when making test set predictions, but a weight column (if specified) is used when computing the test score. ## Fold Column¶ Optional: Rows with the same value in the fold column represent groups that should be kept together in the training, validation, or cross-validation datasets. By default, Driverless AI assumes that the dataset is i.i.d. (independent and identically distributed) and creates validation datasets randomly for regression or with stratification of the target variable for classification. The fold column is used to create the training and validation datasets so that all rows with the same fold value will be in the same dataset. This can prevent data leakage and improve generalization. For example, when viewing data for a pneumonia dataset, person_id would be a good fold column. This is because the data may include multiple diagnostic snapshots per person, and we want to ensure that the same person's characteristics show up in either the training or validation frame, but not in both, to avoid data leakage. This column must be an integer or categorical variable and cannot be specified if a validation set is used or if a Time Column is specified. ## Time Column¶ Optional: Specify a column that provides a time order (time stamps for observations), if applicable. This can improve model performance and model validation accuracy for problems where the target values are auto-correlated with respect to the ordering (per time-series group). 
The values in this column must be a datetime format understood by pandas.to_datetime(), like “2017-11-29 00:30:35” or “2017/11/29”, or integer values. If [AUTO] is selected, all string columns are tested for potential date/datetime content and considered as potential time columns. If a time column is found, feature engineering and model validation will respect the causality of time. If [OFF] is selected, no time order is used for modeling and data may be shuffled randomly (any potential temporal causality will be ignored). When your data has a date column, then in most cases, specifying [AUTO] for the Time Column will be sufficient. However, if you select a specific date column, then Driverless AI will provide you with an additional side menu. From this side menu, you can specify Time Group columns or specify [Auto] to let Driverless AI determine the best time group columns. You can also specify the columns that will be unavailable at prediction time (see More About Unavailable Columns at Time of Prediction for more information), the Forecast Horizon (in a unit of time identified by Driverless AI), and the Gap between the train and test periods. Refer to Time Series in Driverless AI for more information about time series experiments in Driverless AI and to see a time series example. Notes: • Engineered features will be used for MLI when a time series experiment is built. This is because munged time series features are more useful features for MLI compared to raw time series features. • A Time Column cannot be specified if a Fold Column is specified. This is because both fold and time columns are only used to split training datasets into training/validation, so once you split by time, you cannot also split with the fold column. If a Time Column is specified, then the time group columns play the role of the fold column for time series. • A Time Column cannot be specified if a validation dataset is used. 
• A column that is specified as being unavailable at prediction time will only have lag-related features created for (or with) it, so this option is only used when the Time-Series Lag-Based Recipe is enabled. ## Accuracy, Time, and Interpretability Knobs¶ The experiment preview describes what the Accuracy, Time, and Interpretability settings mean for your specific experiment. This preview will automatically update if any of the knob values change. The following is more detailed information describing how these values affect an experiment. ### Accuracy¶ As accuracy increases (as indicated by the tournament_* toml settings), Driverless AI gradually adjusts the method for performing the evolution and ensemble. At low accuracy, Driverless AI varies features and models, but they all compete evenly against each other. At higher accuracy, each independent main model will evolve independently and be part of the final ensemble as an ensemble over different main models. At higher accuracies, Driverless AI will also toggle feature types like Target Encoding on and off, with each variant evolving independently. Finally, at the highest accuracies, Driverless AI performs both model and feature tracking and ensembles all those variations. Changing this value affects the feature evolution and final pipeline. Note: A check for a shift in the distribution between train and test is done for accuracy >= 5. Feature evolution: This represents the algorithms used to create the experiment. If a test set is provided without a validation set, then Driverless AI will perform a 1/3 validation split during the experiment. If a validation set is provided, then the experiment will perform external validation. Final Pipeline: This represents the level of ensembling done for the final model (if no time column is selected) along with the cross-validation values. 
### Time¶ This specifies the relative time for completing the experiment (i.e., higher settings take longer). Early stopping will take place if the experiment doesn't improve the score for the specified number of iterations. ### Interpretability¶ Specify the relative interpretability for this experiment. Higher values favor more interpretable models. Changing the interpretability level affects the feature pre-pruning strategy, monotonicity constraints, and the feature engineering search space. Feature pre-pruning strategy: This represents the feature selection strategy (to prune away features that do not clearly improve the model score). Strategy = "FS" if interpretability >= 6; otherwise the strategy is None. Monotonicity constraints: If Monotonicity Constraints are enabled, the model will satisfy knowledge about monotonicity in the data and monotone relationships between the predictors and the target variable. For example, in house price prediction, the house price should increase with lot size and number of rooms, and should decrease with crime rate in the area. If enabled, Driverless AI will automatically determine if monotonicity is present and enforce it in its modeling pipelines. Depending on the correlation, Driverless AI will assign positive, negative, or no monotonicity constraints. Monotonicity is enforced if the absolute correlation is greater than 0.1. All other predictors will not have monotonicity enforced. Note: Monotonicity constraints are used in XGBoost GBM, XGBoost Dart, LightGBM, and Decision Tree models. Feature engineering search space: This represents the transformers used in the experiment. Note that when mixing GBM and GLM in parameter tuning, the search space is split 50%/50% between GBM and GLM. ## Classification/Regression Button¶ Driverless AI automatically determines the problem type based on the response column. Though not recommended, you can override this setting by clicking this button. 
## Reproducible¶ The Reproducible button allows you to build an experiment with a random seed and get reproducible results. If this is disabled (default), then results will vary between runs, which can give a good sense of variance among experiment results. Please keep in mind the following when enabling this option: • Experiments can only be reproducible when run on the same hardware (same number and type of GPUs/CPUs, same architecture such as Linux or PPC, etc). For example, you will not get the same results if you try an experiment on a GPU machine, and then attempt to reproduce the results on a CPU-only machine or on a machine with a different number and type of GPUs. • This option should be used with the Reproducibility Level expert setting option, which ensures different degrees of reproducibility based on the OS and environment architecture. Keep in mind that when Reproducibility is enabled, then reproducibility_level=1 by default. • Experiments run using TensorFlow with multiple cores cannot be reproduced. • LightGBM is more reproducible with 64-bit floats, and Driverless AI will switch to 64-bit floats for LightGBM. (Refer to https://lightgbm.readthedocs.io/en/latest/Parameters.html#gpu_use_dp for more information.) • Enabling this option automatically disables all of the Feature Brain expert settings options; specifically: ## Enable GPUs¶ Click the Enable GPUs button to enable/disable GPUs. Note that this option is ignored on CPU-only systems.
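As an aside, the two example datetime formats the Time Column section says are understood by pandas.to_datetime() can be sanity-checked directly in pandas (a quick standalone check, not a Driverless AI API):

```python
import pandas as pd

# the two example formats from the Time Column docs
a = pd.to_datetime("2017-11-29 00:30:35")
b = pd.to_datetime("2017/11/29")

assert a == pd.Timestamp("2017-11-29 00:30:35")
assert b == pd.Timestamp("2017-11-29")
```

Any column whose values parse cleanly this way is a reasonable candidate for the Time Column.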
# Evaluate the integral. int 8 sin(4t) sin(t/2)dt Carol Gates 2021-03-04 Answered Evaluate the integral. $\int 8\mathrm{sin}\left(4t\right)\mathrm{sin}\left(\frac{t}{2}\right)dt$ cheekabooy Step 1 Let the given integral be $\int 8\mathrm{sin}\left(4t\right)\mathrm{sin}\left(\frac{t}{2}\right)dt$ By using the formula $\mathrm{sin}\left(a\right)\mathrm{sin}\left(b\right)=\frac{\mathrm{cos}\left(a-b\right)-\mathrm{cos}\left(a+b\right)}{2}$ we get $\int 8\left(\frac{\mathrm{cos}\left(4t-\frac{t}{2}\right)-\mathrm{cos}\left(4t+\frac{t}{2}\right)}{2}\right)dt$ $⇒8\int \left(\frac{\mathrm{cos}\left(\frac{7t}{2}\right)-\mathrm{cos}\left(\frac{9t}{2}\right)}{2}\right)dt$ Step 2 By separating the integrals, $⇒\frac{8}{2}\left[\int \mathrm{cos}\left(\frac{7t}{2}\right)dt-\int \mathrm{cos}\left(\frac{9t}{2}\right)dt\right]$ Simplifying this, $⇒\int 8\mathrm{sin}\left(4t\right)\mathrm{sin}\left(\frac{t}{2}\right)dt=4\left[\frac{2}{7}\mathrm{sin}\left(\frac{7t}{2}\right)-\frac{2}{9}\mathrm{sin}\left(\frac{9t}{2}\right)\right]+C$
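A quick numerical check of the final answer (a plain-Python sketch, not part of the original solution): the derivative of the proposed antiderivative should match the integrand at arbitrary points.

```python
import math

F = lambda t: (8/7) * math.sin(7*t/2) - (8/9) * math.sin(9*t/2)  # proposed antiderivative
f = lambda t: 8 * math.sin(4*t) * math.sin(t/2)                  # integrand

# F'(t) should equal f(t) everywhere; spot-check with a central difference
h = 1e-6
for t0 in (0.3, 1.1, 2.7):
    numeric = (F(t0 + h) - F(t0 - h)) / (2 * h)
    assert abs(numeric - f(t0)) < 1e-6
```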
All solutions here are SUGGESTED. Mr. Teng will hold no liability for any errors. Comments are entirely personal opinions. (i) $\int \frac{1}{100-v^2}dv = \frac{1}{20} \mathrm{ln}|\frac{10+v}{10-v}| + C$ (ii) (a) $\frac{dv}{dt} = 10 - 0.1 v^2$ $\int \frac{1}{100-v^2} dv = \int \frac{1}{10} dt$ $\frac{1}{20} \mathrm{ln} |\frac{10+v}{10-v}| = \frac{1}{10} t + C$ When $t = 0, v = 0 \Rightarrow C = 0$ $\therefore t = 0.5 \mathrm{ln} |\frac{10+v}{10-v}|$ When $v = 5, t = 0.5\mathrm{ln}3$ (b) When $t = 1$ $\Rightarrow 2 = \mathrm{ln}|\frac{10+v}{10-v}|$ $\pm e^2 = \frac{10 + v}{10 - v}$ Reject $- e^2$ since $v = 0$ when $t = 0$ $e^2 (10 - v) = 10 + v$ $10 e^2 - 10 = v + e^2 v$ $v = 7.62$ (c) From the graph, as $t \rightarrow \infty, v \rightarrow 10$ (i) can be resolved easily with the MF15 formula. For (ii), students can actually plot the entire graph of t against v on the graphing calculator and read off v when t = 1; they should then sketch the graph. Doing so also makes (c) convenient to solve.
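For those who want to verify the solution numerically, note that $t = 0.5\ln\frac{10+v}{10-v}$ inverts to $v = 10\tanh t$. The following Python sketch (not part of the suggested solutions) checks each part:

```python
import math

# closed form: t = 0.5*ln((10+v)/(10-v)) inverts to v(t) = 10*tanh(t)
v = lambda t: 10 * math.tanh(t)

# (a): when v = 5, t = 0.5*ln(3)
assert abs(math.atanh(0.5) - 0.5 * math.log(3)) < 1e-12

# (b): when t = 1, v = 10(e^2 - 1)/(e^2 + 1), approximately 7.62
assert abs(v(1) - 7.62) < 0.005

# the closed form satisfies dv/dt = 10 - 0.1 v^2 (numerical derivative check)
h = 1e-6
for t0 in (0.2, 0.8, 1.5):
    numeric = (v(t0 + h) - v(t0 - h)) / (2 * h)
    assert abs(numeric - (10 - 0.1 * v(t0) ** 2)) < 1e-5

# (c): as t -> infinity, v -> 10
assert abs(v(50) - 10) < 1e-12
```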
FM Equilibrium 2.1 Simple Matrix Recurrence Relations The Simple Matrix Recurrence Relation Formula • The simplest type of matrix recurrence relation formula we will analyse in Further Maths models a system where the next "state", S_{n+1}, can be reached by multiplying the current state, S_{n}, by a transition matrix, T, in the form: S_{n+1}=T S_{n} • The state matrices S_{n} (where n is a positive whole number representing the state of the system) are column matrices listing the value of each of the system's variables in the corresponding state. • The transition matrix T is a square matrix. • As with a linear recurrence relation, it is important to state the initial state of a system, S_{0}.
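The recurrence S_{n+1} = T S_n is easy to play with in code. The sketch below uses an invented 2x2 transition matrix and initial state (purely illustrative, not from the notes) and shows the system settling to an equilibrium state S where T S = S, which is the idea behind the section title:

```python
def mat_vec(T, S):
    # one step of the recurrence: S_{n+1} = T * S_n (states held as plain lists)
    return [sum(T[i][j] * S[j] for j in range(len(S))) for i in range(len(T))]

# invented transition matrix (columns sum to 1) and initial state
T = [[0.9, 0.2],
     [0.1, 0.8]]
S = [120.0, 30.0]          # S_0

for _ in range(60):        # iterate S_{n+1} = T S_n
    S = mat_vec(T, S)

# the system settles to the equilibrium state [100, 50], where T S = S
assert all(abs(a - b) < 1e-6 for a, b in zip(S, [100.0, 50.0]))
assert all(abs(a - b) < 1e-9 for a, b in zip(mat_vec(T, [100.0, 50.0]), [100.0, 50.0]))
```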
# HW help (Double Slit, Resolving power etc) 1. Oct 25, 2006 ### Alt+F4 (a) A double-slit experiment is set up using red light (l = 708 nm). A first order bright fringe is seen at a given location on a screen. What wavelength of visible light (between 380 nm and 750 nm) would produce a dark fringe at the identical location on the screen? l = nm All right, so I know d * sin theta = m * wavelength. How do I find the distance and the angle? 2. Oct 25, 2006 ### Max Eilerson The condition for constructive interference is what you have written down (i.e. peak on peak), but destructive interference happens when you have a trough meet a peak, i.e. there is a half wavelength difference, so for destructive interference (dark fringe) $$dsin (\theta) = (m + 0.5)\lambda$$. You should be able to go from there. 3. Oct 28, 2006 ### Alt+F4 (b) A new experiment is created with the screen at a distance of 1.8 m from the slits (with spacing 0.08 mm). What is the distance between the second order bright fringe of light with l = 691 nm and the third order bright fringe of light with l = 414 nm? (Give the absolute value of the smallest possible distance between these two fringes: the distance between bright fringes on the same side of the central bright fringe.) Ok so d sin theta = m * wavelength. I am going to find theta with m = 1; the wavelength is 691 in one part, and then I do the equation again for 414, right? 4. Oct 28, 2006 ### OlderDan Don't lose track of the statement about the different orders of the fringes. What is m for second order, and for third order? 5. Oct 28, 2006 ### Alt+F4 2 for second order, and 3 for third order 6. Oct 28, 2006 ### Alt+F4 Last edited: Oct 28, 2006 7. Oct 28, 2006 ### Max Eilerson You don't actually need to find the distance at all. You will have two equations equal to $$dsin (\theta)$$, and you just have to solve for the 1 unknown (wavelength). 8. 
Oct 28, 2006 ### Max Eilerson and that would be? 9. Oct 28, 2006 ### Alt+F4 .00315 m ................ 10. Oct 28, 2006 ### Max Eilerson Yes that's correct for (b). What answer did you get for (a).
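The thread's numbers can be checked with a few lines of Python (a sketch using the small-angle fringe position $y_m = m\lambda L/d$; the variable names are mine):

```python
lam1, lam2 = 691e-9, 414e-9   # wavelengths in metres
L, d = 1.8, 0.08e-3           # screen distance and slit spacing in metres

# small-angle bright-fringe position: y_m = m * lambda * L / d
y = lambda m, lam: m * lam * L / d

# part (b): distance between the 2nd-order 691 nm and 3rd-order 414 nm fringes
dist = abs(y(2, lam1) - y(3, lam2))
assert abs(dist - 0.00315) < 1e-8     # matches the .00315 m given in the thread

# part (a): dark fringe at the 708 nm first-order bright-fringe location:
# d sin(theta) = 708 nm = (m + 0.5) * lambda'  =>  lambda' = 708 / (m + 0.5) nm
lam_dark = 708 / 1.5                  # m = 1 is the only choice landing in 380-750 nm
assert 380 < lam_dark < 750           # 472 nm
```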
# How do I find the exact value of $\cos^2\left(\frac{5\pi}{12}\right)$? I'm having trouble finding the exact value of $\cos^2\left(\frac{5\pi}{12}\right)$ in radians. I was able to figure out that: \begin{align} \cos\left(\frac{7\pi}{12}\right) &= \cos\left(\frac{3\pi}{12} + \frac{4\pi}{12}\right) &\text{Split into known unit circle values.}\\ &= \cos\left(\frac{\pi}{4} + \frac{\pi}{3}\right) &\text{Reduce.}\\ &= \cos(a + b) = \cos(a)\cos(b) - \sin(a)\sin(b) &\text{Use Cosine Identity.}\\ &= \cos\left(\frac{\pi}{4}\right)\cos\left(\frac{\pi}{3}\right) - \sin\left(\frac{\pi}{4}\right)\sin\left(\frac{\pi}{3}\right) &\text{Substitute values.}\\ &= \left(\frac{\sqrt2}{2} \times \frac{1}{2}\right) - \left(\frac{\sqrt2}{2} \times \frac{\sqrt3}{2}\right) &\text{Evaluate.}\\ &= \frac{\sqrt2}{4} - \frac{\sqrt6}{4}\\ &= \frac{\sqrt2-\sqrt6}{4}\\ \end{align} I know that $\cos^2\left(\frac{5\pi}{12}\right)$ is not all that different, but the square exponent ($^2$) is throwing me for a loop. Can anyone break it down for me? ## 2 Answers Hint: Try the double angle identity $$\cos^2\alpha = \frac{1}{2}(1+\cos 2\alpha).$$ • So would I just plug in 5pi/12 in for α and work it out from there? – Analytic Lunatic Sep 3 '15 at 20:07 • I think you mean $\cos^2(a)=\frac{1}{2}(1+\cos(2a))$. – Eric Sep 3 '15 at 20:07 • @Eric, same question: Would I then just plug in 5pi/12 in for α and work it out from there? – Analytic Lunatic Sep 3 '15 at 20:12 • @AnalyticLunatic Yes. Try it. – rogerl Sep 3 '15 at 20:14 • @AnalyticLunatic I think you probably do, since $2\cdot \frac{5\pi}{12} = \frac{5\pi}{6}$. – rogerl Sep 3 '15 at 20:18 Since $\cos\alpha=-\cos(\pi-\alpha)$, $$\cos\frac{5\pi}{12}=-\cos\left(\pi-\frac{5\pi}{12}\right)= -\cos\frac{7\pi}{12}=\frac{\sqrt{6}-\sqrt{2}}{4}$$ Then $$\cos^2\frac{5\pi}{12}=\left(\frac{\sqrt{6}-\sqrt{2}}{4}\right)^2 =\frac{2-\sqrt{3}}{4}$$
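Both routes above (the supplement-angle answer and the double-angle hint) can be confirmed numerically, for example with Python's math module:

```python
import math

a = 5 * math.pi / 12

# supplement route: cos(5pi/12) = -cos(7pi/12) = (sqrt(6) - sqrt(2)) / 4
assert abs(math.cos(a) - (math.sqrt(6) - math.sqrt(2)) / 4) < 1e-12

# double-angle route: cos^2(a) = (1 + cos(2a)) / 2, with 2a = 5pi/6
assert abs(math.cos(a) ** 2 - (1 + math.cos(5 * math.pi / 6)) / 2) < 1e-12

# both agree with the exact value (2 - sqrt(3)) / 4
assert abs(math.cos(a) ** 2 - (2 - math.sqrt(3)) / 4) < 1e-12
```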
## 666 Text Symbol Jesus declares and requires the end from the beginning (Ecclesiastes 3:15), symbolizing that world events at the end of the world will mirror the former events. The 666 is identified in many other realms, as well. In the 'Speaker's Commentary,' Introduction, § 11 (a), we find, "Six is the 'signature' of non-perfection;" and, "This number is also a symbol of human rule and power." The following words, codes, special symbols, numbers and additional resources may prove useful to the individual trying to interpret common online slang. SpiritualRay takes a closer look at the various Illuminati symbols and their related meanings popularly used to convey cryptic messages to the masses. A variation with three lines was used by some to represent the number of the beast. The Meaning of Numbers: The Number 666. Sometimes, one or three dots are added above the pillars. The funny thing is that in two other ciphers used by the Illuminati (also shown in the photos in this group) the Point within the Circle was one of two possible signs for the letter "S", which, as we have seen, has the value. 
The symbol of Baphomet is a combination of the serpent Leviathan, the goat and the inverted pentacle. Plato called the study of number symbolism "the highest level of knowledge". One of the typical "gotcha" proofs of this claim is the Papal symbol of the upside down or inverted cross, which the anti-Catholic confidently asserts as being satanic. Symbols, Freemasonry, Illuminati and secret societies. Satanic Symbols, Symbolism, and Symbology - Symbols of Satan! PETER'S CROSS - Satanists are not the brightest folks to begin with, but you would think they would check to see if a symbol already had a meaning before adopting it as their own. Unfortunately there is not a superscript letter for "q" and "i", so approximate replacements had to be used. As an angel message, 666 can mean that there is an imbalance in your life. ⚦ ⚧ ⚨ ⚩ ☿ ♁ ⚯ ♛ ♕ ♚ ♔ ♜ ♖ ♝ ♗ ♞ ♘ ♟ ♙ ☗ ☖ ♠ ♣ ♦ ♥ ♡ ♢ ♤ ♧ ⚀ ⚁ ⚂ ⚃ ⚄ ⚅ ⚇ ⚆ ⚈ ⚉ ♨ ♩ ♪ ♫ ♬ ♭ ♮ ♯ ⏏ ⎗ ⎘ ⎙ ⎚ ⎇ ⌘ ⌦ ⌫ ⌧ ♲ ♳ ♴ ♵ ♶ ♷ ♸ ♹ ♻ ♼ ♽ ⁌ ⁍ ⎌ ⌇. Triangle text symbol. 
Related Symbols: Top 10 Illuminati Symbols Video, Gmail Logo, Masonic Apron, Disney 666, Monster Energy Drinks. The name Celts is a 'modern' name used to collectively describe all the many tribes of people living during the Iron Age. Above is the double cross in which the lower crossbar is usually represented wider than the one on top. So, the devils, signs of black magic, fire tongues are often depicted as a representation of Satan. That's the thing on the picture. These letters may be roman numerals or old-style figures with varying heights. In this study I would like to examine the significance and meaning of the number six. Inverted Pentagram was approved as part of Unicode 6.0. These are all occult symbols and manifestations of the spirit of Satan. The four symbols on the label and inside sleeve of Led Zeppelin IV represent (from left to right) Jimmy Page, John Paul Jones, John Bonham and Robert Plant. Now, a friend of mine who is Greek and knows ancient Greek showed me in the Greek Bible that the 666 is put as Xi Chi Stau. Schwarze Sonne (Black Sun), sometimes called the sonnenrad: this symbol has become synonymous with myriad far-right groups who traffic in neo-Nazi and/or neo-Volkisch ideologies. For example, 2 6 A 0 Alt X will insert a warning symbol as ⚠. 
Clearly, 666 is not a nice number! Bible Gematria. Emoji are pictographs (pictorial symbols) that are typically presented in a colorful cartoon form and used inline in text. We discuss the different interpretations of this number, according to the Bible, in this SpiritualRay article. Sometimes, when I am in a creative essence I like to pen symbols and numbers, and also write poetry; one of the symbols that came to me for myself was a V with 2 waves (water) running through the V. Plus – minus/Minus – plus. The symbol can be seen on the New King James Bible, on certain rock albums (like Led Zeppelin's), or you can see it on the cover of such New Age books as The Aquarian Conspiracy. This symbol - the magical hexagram - was used by Hubbard and Parsons during their attempts at incarnating the Antichrist in human form. Copy and paste this emoji: 🚩 This Unicode character has no emoji version, meaning this is intended to display only as a black and white glyph on most platforms. In the Greek text it is represented by Greek words. § To obtain an ALT Character: Make certain that the Num Lock key has been pressed to activate the numeric key section of the keyboard. The symbols of Satan. Preterist interpretation. Unicode is a Universal Character Set (UCS), the standard character encoding which does not depend on any. The mark of the Beast. 
Scientists have finally been able to read the oldest biblical text ever found. The most basic form of this code involved a row 1-5 and a column 1-5, and then filled the matrix in with each letter from left to right and down the grid (combining I and J into one space). It has not been Recommended For General Interchange (RGI) as an emoji by Unicode. An inverted cross symbol is the cross of St. Peter. We use Arabic numerals. They are called satanic symbols which are written in some sort of creature form or beastly form. Make a star symbol, mostly for use on social websites like Facebook. The trinity ring was a symbol of Celtic culture before it was adopted as a Christian symbol. 
Copy Paste Icons, Cool Symbols & Special Characters text sets. Copy and paste these cool triangle symbols. The Flower of Life - this shape was not done using the first pattern above. In his German-language Hebrew and Aramaic Lexicon, the first edition of which was published in sections between 1810 and 1812 and the third edition in 1828, in the 1833 edition in Latin, and in the revised German text of 1834, the Hebrew scholar Wilhelm Gesenius (1786–1842), while recognising that the vowels attached to the Tetragrammaton in the Masoretic text are those of "Adonai" and. Cute symbol emoticons are here too. Against the Numerical Representation 666: Perhaps the most famous (or infamous) number in the entire Bible is the strange number six hundred sixty-six, the number of one of the beasts in the thirteenth chapter of the Book of Revelation. Five-Pointed Stars. ASCII Table and Description. 666 (in the Greek text of the New Testament: χξς΄) is a biblical number for "the Beast", found in the Book of Revelation of the New Testament. What do all the hand symbols in Emoji mean? You would have seen them on your iPhone Emoji Keyboard, or on your Android or Windows Phone. Whatever is revealed must not conflict with the word of God (the text), or add to it. List of triangle signs: make over 43 triangle symbol text characters. The number of the beast (Greek: Ἀριθμὸς τοῦ θηρίου, Arithmos tou Thēriou), also known as the devil's number in the Book of Revelation, of the New Testament, is associated with the Beast of Revelation in chapter 13, verse 18. 
The mark of the beast is a combination of letters and symbols that will be physically and permanently placed on your forehead or right hand. info under CC BY 3. Find an agent and view current listings. I really wanna use one haha. By: Editorial Staff Your average (active) internet user has a lot of social media choices available to him, and is likely using more than one of them at any given time. Regularly check the flow of Chi at your house and personal office. Green Apostrophe Number 69. The only Bible interpretation of the symbol „beast" is found in Daniel 7:17-18. So what John saw was something that looked like "Chi Xi Stigma", being worn on the foreheads and right arms of the multitude of the beast. Available as Barcode ActiveX, Barcode. It also stands for Satan and the Antichrist's dominion over the Father, Son, and Holy Spirit. Chinese Symbol for Strength. Specifically: Chi Xi Stigma, or "Six-hundred, three-score and six". In the Celtic Christian world, the tri-symmetrical shape was quickly interpreted as alluding to the Holy Trinity of the Father, the Son and the Holy Spirit. For some unexplained reason it was taken as a number symbol for 6 out of the letter sequence. Securities products and services offered to self-directed investors through ST Invest, LLC. If you are mixing 1 digit numbers and 4 digit numbers, it won't work. In the New Testament it is referred to as the "number of the beast. They describe themselves as a "movement dedicated to presenting hope and love to those who are struggling with depression. NET Web Forms Control, Barcode DLL. Chat Number 5 App. These spooky evil symbols and emoji represent death, devil, other creepy stuff. We have 688 free vintage fonts to offer for direct downloading · 1001 Fonts is your favorite site for free fonts since 2001. června 1976 a přepracovaná verze byla uvedena 6. Reputation and "I Am", "I know" counters. I saw a 666 character that made a triangle but I can't find it. 
=A1&A2&A3 will add the text in cell A1, A2 and A3 together. A fresh and strong flow of Chi, coupled with various feng shui wealth symbols, must be your goal if you are focused on attracting the energy of wealth and abundance. 7 years ago. For Creusot, 666 marks a threshold where it is great time to turnaway from the terrestrial world and prepare to enter in the spiritual world. In the Tarot deck, the star card represents hope, inspiration, and renewal. Download high quality Sex clip art from our collection of 41,940,205 clip art graphics. ALEISTER CROWLEY - SYMBOLS. Try the refresh button to reload this website, or use a different browser (Microsoft Edge, Google Chrome or Mozilla FireFox) preferably on a different device. 8% Positive Feedback Welcome to Decal Vantage with hundreds of the most popular outdoor vinyl decals to sticker your world!. Apple may provide or recommend responses as a possible solution based on the information provided; every potential issue may involve several factors not detailed in the conversations captured in an electronic forum and Apple can therefore provide no guarantee as to the. The Demonic and Satanic text emoticon is single line; Visual size: 10x1 characters; Added on 30 March, 2013; Last commented on 05 October, 2016; Text Emoticon category: Devil text emoticons; Demonic and Satanic has 1 line and is 10 characters long. Smileys ☹ ☺ ☻ ت ヅ ツ ッ シ Ü ϡ ﭢ. Up until then, Phillip was. As it goes, our forehead represents our thoughts, and our right hand represents our actions. The sum of the numbers in your birth date and the sum of value derived from the letters in the name provide. answered May 10 '10 at 15:28. 
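The Excel behavior described above (joining cell text with `&`, with a null value treated as an empty string) can be sketched in Python. This is only an analogy of the spreadsheet rule, not Excel itself; `concat_cells` is a hypothetical helper name, and `None` stands in for a blank cell.

```python
# Sketch of Excel's & text concatenation (e.g. =A1&A2&A3).
# Assumption: None models a blank/null cell, which Excel treats
# as an empty string when concatenating.
def concat_cells(*cells):
    return "".join("" if c is None else str(c) for c in cells)

print(concat_cells("foo", None, "bar"))  # foobar
```

Non-text values (like numbers) are converted to their text form before joining, which matches how the formula bar displays the result.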
regarding the symbol (number)666,it is written in the bible that it is allowed for us to discover the true meaning of it,unlike the end of days which will come as a thief ,i beliebe that we should search in the original text (not in translated texts) ,for ENDTIMESUK ,what you have written is very interesting and deserves further seatching,of. Satanic text generator Not many text generators has the ability to turn the fonts and texts into some kind of strange characters that are often very hard look and read. This standard yet cool icon set includes ticks, crosses, circles, suns, moons, numbers, exclamation marks and question marks, music symbols, lines and arrows, squares. The problem with most of the other answers here is you need to tweak the size of the outer container so that it is the perfect size based on the font size and number of characters to be displayed. I then went back into text, typed + and removed any number in recents that had the + in front of it. This database provides an overview of many of the symbols most frequently used by a variety of white supremacist groups and movements, as well as some other types of hate groups. Each has advocated extremist reactionary positions such as white nationalism, anti. We use Arabic numerals. The number 666 itself is mentioned as‘Number of the Beast'. Antonyms for mathematical symbol. Alan gives some examples of Islamic headbands and shows you the Arabic symbols that appear in the original Greek text of Revelation 13:18, as it appears in the Interlinear Greek to English Bible. If one of the values is null, then it is treated as an empty string. The number 666 is considered to be very unlucky and is called the number or mark of the Beast in the Book of Revelations. NEW -- 7/08/2015 -- VERY IMPORTANT!! MUST LISTEN!! I highly encourage everyone to listen to my interview with a "NEAR DEATH EXPERIENCER WHO EXPERIENCED A DEMIURGE". 
This text was originally written in ancient Greek, where numbers are written as letters, as they are in Hebrew – the other main language of the original Biblical texts. There are many animals that prey on the mouse, but mouse depends on a spiritual level of instinct from which it derives a personal sense. 666 is refered to as the "devil's symbol" because people are clueless about history or about the symbolism with the Book of Revelation--not because it is so mysteriious (scholars and historians know what it means) but because they are willfully ignorant. The Kleenex brand is so famous that it is often used a a generic name for tissues in the United States and Canada. We have close relationships with the full range of publishers to help you find the right resources for your children/students, and we always ensure the textbooks we supply meet our exacting. Meaning of XXX. * BJSM’s web, print, video and audio material serves the international sport and exercise medicine. 38 - This is the symbol of dangers, conflicts and troubles that are the consequence of lowself-confidence. On average, women have just three-fourths the legal rights of men. Download this free picture about Heart Care Medical from Pixabay's vast library of public domain images and videos. Some notes about the ankh symbol. Now, a friend of mines whom is Greek and who has ancient knowledge in Greek, showed me in the Greek Bible the 666 is put as Xi Chi Stau. Accrediting Association of Bible Colleges. ConvertCodes, the free online Unicode converter website in real-time by javascript. Ordinarily, I try to keep the math to a minimum when I discuss an aspect of Numerology. Put simply, 666 is being used as a code, and not a particularly subtle one, if you were alive and literate at the time of the New Testament. Tables can be placed either next to the relevant text in the article, or on separate page(s) at the end. To create a file called foo. 666 hand sign, Six Six Six, Circle Eye, OK Sign, Okay. 
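Since Greek (like Hebrew) wrote numbers as letters, "calculating the number of a name" just means summing the letter values. A minimal sketch of that gematria/isopsephy arithmetic, using only the three numerals the text mentions (chi = 600, xi = 60, stigma = 6; a full table would list every letter):

```python
# Partial Greek letter-value table (isopsephy/gematria).
# Assumption: letters are referred to by name; only the three
# numerals discussed in the text are included here.
GREEK_VALUES = {
    "chi": 600,   # χ
    "xi": 60,     # ξ
    "stigma": 6,  # ϛ (archaic numeral)
}

def isopsephy(letters):
    """Sum the numeric values of a sequence of Greek letter names."""
    return sum(GREEK_VALUES[name] for name in letters)

print(isopsephy(["chi", "xi", "stigma"]))  # 600 + 60 + 6 = 666
```

This is why χξς reads as "six hundred sixty and six" rather than the digit string 6-6-6.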
However, Codex C/04 of the fifth century has 616 spelled out with Greek letters as the number of the beast. The Ku Klux Klan (/ ˌ k uː k l ʌ k s ˈ k l æ n, ˌ k j uː-/), commonly called the KKK or the Klan, is an American white supremacist hate group, whose primary target is African Americans. It will be like a key for them that will open doors of acceptance, prosperity and peace. The following article deals with the information regarding the Chinese symbol of strength, as it has been an important symbol for centuries and extremely considered important by the natives. This beast is usually identified as the antichrist. Use unicode star symbols in a html document or copy paste the character. It would appear there is a deep integration of the Creation. • Signal word must be immediately below the hazard symbol. The four symbols on the label and inside sleeve of Led Zeppelin IV, represent (from left to right) Jimmy Page, John Paul Jones, John Bonham and Robert Plant. we would be so bold as to say it is an absolute MUST for anyone who is even remotely interested in magick and the occult) to obtain a copy of Aleister Crowley's Liber DCCLXXVII (Liber 777) which, thanks to his research into, knowledge of and dedication to this fascinating subject, provides us with the. Want to convert ASCII to text?. It was also used by Satanist Aleister Crowley around the turn of this century. 13:18 The number is NOT simply three SIXES or 6-6-6, but 600 + 60 + 6, or the number "Six hundred sixty and six". This is a good, readable table. Open yourself to a freedom of expression with text emoticons. He or she tends to be meditative, quiet and intuitive. And yes, as a matter of fact, I absolutely did enjoy screaming out his name into his, soon to be, ex wife's pillow.
Columns are separated by any white space. The Temple of Solomon, the Bible and Freemasonry Craft Ritual. Cross of St. Text Symbols Reference. *The last three symbols are finals not letters. NET Web Forms Control, Barcode DLL. Meanings of the Letter X, Esoteric and Otherwise. "The three divisions which were plucked up where the Heruli in 493 the vandals and 534 and the Ostrogoths in 538. This can be explained by the fact that the Overwatch group’s original mission is to stop conflicts and keep the peace. Chains of importance and higher positions acquired by the Illuminati members of that time. Justinian, the emperor, who we see was at Constantinople, working through the general Belisarius, was the power that overthrough the three kingdoms represented by the three horns, and the reason for their overthrow with their adherence to Arianism in opposition to the orthodox. Initially it was named as Corna which means horns in Italian language. The number 666 therefore represents the number of Man carried to its zenith. Chinese characters usually have one or more meanings and some of them are particularly loved by Chinese people. But always used in NT to denote that which is unending – 1: 18, 4: 10 , etc. In HTML, color is defined using CSS properties. Columns are separated by any white space. These spooky evil symbols and emoji represent death, devil, other creepy stuff. But what did John see? Did he see "666" as we do today? No. Since time immemorial, symbols have appeared in every culture, social structures, and religious systems. After an interlude in which Luke. The World Bank Group has two goals, to end extreme poverty and promote shared prosperity in a sustainable way. Property Data; This page displays only the text of a material data sheet. Khof Meaning– 19th Letter of the Hebrew Alphabet The letter Khof (also spelled Kuf, or Qof) originally meant the back of the head, or the eye of a needle and which also means monkey. Welcome back to Instagram. 
All Africa Baptist Fellowship. For example, when the giant CPU manufacturer Intel introduced the 666 MHz Pentium III in 1999, they chose to market it as the Pentium III 667 on the pretext that, since the actual clock speed was 666. French Fries and Fingers ~ FFF ~ 666. BLOODS are known to be involved in the trafficking of crack cocaine with BLOOD factions now being found in larger Texas cities. Click a symbol to copy and paste. 13:18 The number is NOT simply three SIXES or 6-6-6, but 600 + 60 + 6, or the number "Six hundred sixty and six". The stone also has a symbol of a swastika on it, which was a common symbol in that period. this whole article is way off and so are most the responses, Satan the individual originates from various characters given the title satan which is hebrew for obstacle and in the Old Testament Satan or the satan characters were servants of god who tested or accused christians. Origin: the numbers 69 look like they are "69ing". 666: Blood Ritual Symbol Represents animal and human sacrifices. Its Chi Vav Vav. and Symbol font. 16 Emoji Changelog 📙 Emojipedia Lookups At All Time High 🗓 What the 2021 Unicode Delay Means for Emoji Updates 🦠 Spread of the Coronavirus Emoji 📋 What's New in Unicode 13. Each "Number" monster has a corresponding natural number included at the start of its name after "Number" (and occasionally a letter, such as C). Revelation 13:18: Another form of 666 cleverly concealed: Three Snakes One Charm - Egyptian goddess religion associated with Osiris the horned god. Actually, Satan brings evil, temptation, and seducing human beings into the ways of sin. Select all text with your mouse, right click it and click Copy. To make special characters and accented letters show up on your pages, use a special set of codes called character entities, which you insert into your HTML code and which your. Illuminati official website with information on our members, symbols, photos, videos, and more. 
Outcomes in Patients Receiving Neoadjuvant Chemotherapy Undergoing Immediate Breast Reconstruction: Effect of Timing, Postoperative Complications, and Delay to Radiation Therapy. Other Symbolic Meanings Of 666. The basic braille alphabet, braille numbers, braille punctuation and special symbols characters are constructed from six dots. They also happen to be the first two letters of “Christ” in Greek ( Christos ). Siddhartha Gautama, the Lord Buddha, was born in 623 B. 39 - This is the symbol of emotional and financial problems. " Since as early as 2016, the symbol has been frequently associated with the supporters of the U. 666: Hexakosioihexekontahexaphobia means fear of the number 666. This is non-sense. Check flight numbers, seat assignments, airport gates, check-in times, itinerary changes, and more. The symbol of Baphomet is a combination of the serpent Leviathan, the goat and the inverted pentacle. It is the principal symbol of Satanism. 666 on $bill. The second alphabet is a set of tiny superscript characters. Tables Please submit tables as editable text and not as images. A large number of symbols and numerals come from the occult, such as the pentagram; numerology is the use of numbers to denote things, philosophies, and people, such as 666 which is the sign of the devil. The “flip-off”, the “bird”, the “highway salute”, or the “one-fingered victory salute” — whatever you call it, its meaning is crystal-clear all throughout the world. Number 5 Messaging. The semicolon tattoo represents mental health struggles and the importance of suicide prevention. The Kleenex brand is so famous that it is often used a a generic name for tissues in the United States and Canada. Nicknames, cool fonts, symbols and tags for Hacker – ꧁H҉A҉C҉K҉E҉R҉꧂, H҉A҉C҉K҉E҉R҉😈, ☠️H҉A҉C҉K҉E҉R҉☠️, ꧁༒☬₣ℜøźєη•₣ℓα₥єֆ꧂, ЕЯЯОЯ, H҉A҉C҉K҉E҉R҉. Tables can be placed either next to the relevant text in the article, or on separate page(s) at the end. 
Below is the complete character text set of useful copy and paste special characters for designers, websites, documents, designer fonts, trademarks and other copy and paste marks. According to the last book in the Bible, 666 is the number, or name, of the wild beast with seven heads and ten horns that comes out of the sea. In a table, letter Э located at intersection line no. Similar Images. Up until then, Phillip was. claiming that the OK hand sign is a symbol of white supremacy," reads the post-zero of the phenomenon. When examining its physical structure, we can see that it, like the “nine”, is shaped like a spiral, which is a highly symbolic symbol itself. Om symbol is drawn mostly as in the picture. The following article deals with the information regarding the Chinese symbol of strength, as it has been an important symbol for centuries and extremely considered important by the natives. We use cookies for various purposes including analytics. Here is wisdom. Masonic traditions the symbol is used for abbreviation, instead of the usual period. In some cultures, goddesses are associated with Earth, motherhood, love, and the household. Convert "Plant trees" text to binary ASCII code: Use ASCII table to get ASCII code from character. Depicted on the bottom is an infinity sign (∞), and above is a double cross (‡). Educational Attainment by County Subdivision in the Fort Smith Area There are 62 county subdivisions in the Fort Smith Area. Although only a brief overview of Satan worship rituals is presented, it should assist the investigator/patrol officer in identifying some of the symbols. Synonyms for mathematical symbol in Free Thesaurus. 666: epitome of numerical triangularity. Evangelicals and conspiracy believers truly do label the triquetra as a satanic symbol, saying it is a stylized and veiled 666, an allusion to the number of the beast in the Book of Revelation. Meanings of the Letter X, Esoteric and Otherwise. 
Yet to this day, the Trinity is always thought of as having its origin in Roman Catholicism—most notably at the Council of Nicaea in AD 325, the first ecumenical council of Christian bishops. Moreover, 444 represents your rigorous goal-seeking nature. The symbol of Baphomet is a combination of the serpent Leviathan, the goat and the inverted pentacle. All Africa Baptist Fellowship. Miscellaneous Sixes. 69: [verb] to receive and perform oral sex at the same time. A few verses later, this dragon is clearly identified as Satan: “So the great. I've known both of them since we were 12. Hidden in plain sight, it conspicuously appears on the flags of Communist dictatorships, it is woven into public and private logos, and is evidently used in. List of Triangle signs, make over 43 triangle symbols text character. Press and hold the ALT key and type the number 9733 or 9734 to make star symbol. Use MathJax to format equations. Related Symbols:Top 10 Illuminati Symbols VideoGmail Logo Masonic ApronDisney 666Monster Energy Drinks. Check flight numbers, seat assignments, airport gates, check-in times, itinerary changes, and more. If that is what the text is saying, there is nothing to calculate. OK, I Understand. Photo: Getty Images. Kleenex is a a brand of tissue owned by Kimberly-Clark, an American multinational. The individual has a global perspective. A person wears a Guy Fawkes mask, a trademark and symbol for the online. The Sabela consists of the words, symbols and colors that differentiate one from another. Download free fonts for Windows and Mac. 2 A substance or article with a fragment projection hazard, but not a mass explosion hazard. Note: if you want to add a space between the text from each cell enter your formula like this: =A1& &A2 & ^&A3 Where the ^ is adding a space. I'm looking for a 666 Alt code? Please don't give me links for pages because I've searched. WTF text font style Wʀɪᴛᴇ ᴛᴇxᴛ 𝘂𝘀𝗶𝗻𝗴 c̲o̲m̲p̲u̲t̲e̲r̲ s̲y̲m̲b̲o̲l̲s̲! 
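The Alt codes quoted above (Alt+9733, Alt+9734) are simply the decimal Unicode code points of the two star characters: 9733 is U+2605 BLACK STAR and 9734 is U+2606 WHITE STAR. As an illustration (not part of the original page), Python's `chr()` performs the same code-point-to-character mapping:

```python
# Alt codes are decimal Unicode code points:
# Alt+9733 -> U+2605 BLACK STAR, Alt+9734 -> U+2606 WHITE STAR.
black_star = chr(9733)
white_star = chr(9734)

print(black_star, white_star)   # ★ ☆
print(hex(ord(black_star)))     # 0x2605
```

`ord()` is the inverse mapping, from a character back to its code point.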
∞ 🄰🄽🄳 draw text symbols 🅰🅽🅳 emojis 🅣🅞 find ⓣⓗⓔⓜ. The Sabela consists of the words, symbols and colors that differentiate one from another. Return to LOGOS. At the same time, these symbols can leave you confused and wondering what that dream was all about. OK, I Understand. This Illuminati sign is named because when given with the right hand (see image of Emma Watson above- more subtle and with left hand) the middle, ring, & pinky fingers make up the tail of 3 sequential 6's. Retrieved May 19, 2013, from The American Thinker. It is ostensibly the number of a new creation, new beginning, resurrection, etc. While, in general, triangular numbers are comparatively rare, 666 is unique - its uniqueness resting on the fact that all its numerical attributes are themselves triangles!. One of the fundamental characteristics of Freemasonry is the use of the symbols as a channel of communication. 39 - This is the symbol of emotional and financial problems. Pothier (July 1991) I. Similar Images. Revelation 16:2 and 19:20 cite the "mark of the beast" as a sign that identifies those who worship the beast out of the sea ( Rev 13:1). The second alphabet is a set of tiny superscript characters. The art of deciphering a gang code, much like being a brew master or great chef, uses both science and experience. In this first section, I will teach you about the most popular and important sacred geometry symbols. The text of this Aleister Crowley material is made available here only for personal and non-commercial use. Seven pointed star being the Whore and the phallus should be fairly obvious. Amendment 7 of the United States Constitution. Not all satanic emoji and evil symbols are available as Unicode symbols or emoji. Put simply, 666 is being used as a code, and not a particularly subtle one, if you were alive and literate at the time of the New Testament. Omaha, NE (68102) Today. Use ROBLOX DECALS and thousands of other assets to build an immersive game or experience. 
The ancient Greeks originally had a number system like the Romans, but in the 4th century BC, they started using this system. 111 111 111 222 222 222 333 333 333 444 444 444 555 555 555 666 666 666 777 777 777 888 888 888 999 999 999 000 000 000 101 101 292 292 383 383 474 474 565 565 656 656 747 747 838 838 929 929 010 010 123-456-7890 123-456-7890 1. ⚦ ⚧ ⚨ ⚩ ☿ ♁ ⚯ ♛ ♕ ♚ ♔ ♜ ♖ ♝ ♗ ♞ ♘ ♟ ♙ ☗ ☖ ♠ ♣ ♦ ♥ ♡ ♢ ♤ ♧ ⚀ ⚁ ⚂ ⚃ ⚄ ⚅ ⚇ ⚆ ⚈ ⚉ ♨ ♩ ♪ ♫ ♬ ♭ ♮ ♯ ⏏ ⎗ ⎘ ⎙ ⎚ ⎇ ⌘ ⌦ ⌫ ⌧ ♲ ♳ ♴ ♵ ♶ ♷ ♸ ♹ ♻ ♼ ♽ ⁌ ⁍ ⎌ ⌇. Specifically: Chi Xi Stigma, or "Six-hundred, three-score and six". We have close relationships with the full range of publishers to help you find the right resources for your children/students, and we always ensure the textbooks we supply meet our exacting. For example, two popular CSS properties for defining color are the color property (for applying foreground color to the text) and background-color property (for applying color to an element's. This can be proven from original biblical texts prior to the copies which read 666. Download this free picture about Heart Care Medical from Pixabay's vast library of public domain images and videos. The barcode reader detects these relative widths and decodes the data from the barcode. The process of working from a number to a name was an ancient process called gematria in Hebrew and isopsephia in Greek. Too bad he faced unlikely foes in the Texas Longhorns and the widow of a hard-rock icon. A symbol is an electromagnetic signal of a particular frequency that our brain decodes as a specific thing. The combination of the All-Seeing Eye floating in a capstone over a 13-step unfinished pyramid is the most popular Illuminati symbols and by far the most recognizable symbol of the Illuminati. It could well be worth your while (at tomegatherion. Anyways, 666 written in Greek starts with an X and 666 is used to show who the Beast aka AntiChrist is. The Baphomet is a popular choice among the followers of occultism. 
Ultimately, the claim involving a 666 on Monster energy drink cans relies on the incorrect assumption the three claw marks comprising the logo represent three iterations of the Hebrew symbol. Baphomet represents the equilibrium of the opposite. if you are looking for it. This symbol contains binary elements representing the sum total of the universe (male and female, good and evil). Satanists are found wearing this symbol on their necks in the form of pendants, while carrying out satanic rituals. Combine Text from Multiple Cells – Enter your formula with the ampersand & between the cell references e. The Gangster Disciples are a criminal street gang which was formed in the South Side of Chicago in the late 1960s, by Larry Hoover, leader of the Supreme Gangsters, and David Barksdale, leader of the Black Disciples. For example, Alt 9 6 9 8 will produce the black lower right triangle symbol as. That is the Mark of the Beast. The 666 building is home to Lucent (Lucifer) Technologies and its RFID microchip (the Mark of the Beast). Although many Internet chat rooms have different 'lingo', 'slang', and etiquette, the symbols, special character and numbers listed here are fairly standard throughout the Internet and Chat Room /SMS/ Instant Messaging / Text Messaging areas. TCP3: I’ve been on To Catch a Predator three times: 13. The eye represents the Illuminati ruling from their position on the capstone of the pyramid. The individual is somewhat aloof, but introspective and thoughtful.
ℂ ℗⒴ ℘ⓐṨͲℰ Ⓒℌ ℝ ℂ⒯℮ℛ CopyPasteCharacter. The Book of the Witch Moon is dedicated to Hecate, the inspiration behind this Grimoire, the Goddess of the Triple Moon, of Youth, of Wisdom and of Darkness. My co-worker Parker's photo of her own semicolon tattoo. Spirals refer to journeying from one’s center (or soul) outward while never forgetting the core of the self. Small Simple Text Art Small text art pictures. I then went back into text, typed the first letter of the contact and removed any recents that related to that contact. The Baphomet is a popular choice among the followers of occultism. Geometric Green Chain. You were guided here to find out about the 777 meaning. If that is what the text is saying, there is nothing to calculate. This was the longest ruling dynasty in China's history. It is also another form of 666: Baal Symbol - This is one of many symbols that represent the false god Baal. They ruled a lot of the area along the Yellow River. This free online barcode generator creates all 1D and 2D barcodes. While the Alt key is depressed, type the proper sequence of numbers (on the numeric keypad) of the ALT code from the table above. This document covers the Linux version of umask. Combine Text from Multiple Cells – Enter your formula with the ampersand & between the cell references e. You can combine them together creatively to form cool things like I did with the Masonic pyramid symbol "Eye of Providence", aka Illuminati pyramid emoji symbol 👁️⃤ combining the "combining triangle symbol" ⃤ with an eye emoji. 0 Triple 6 Also known as: 666. Origin: the numbers 69 look like they are "69ing". the Greek character for alpha (α) is similar to the fish symbol, as is the Omega (Ω) if rotated 90°. Cassaro has discussed his work on the History. Simply type or paste any words you want to convert into the form below, hit the magic Scrambled Text button and that's it. 
Hi, i'm getting my first tattoo done next week, I've had an idea for months and months, its 'happiness is a journey not a destination' around my back on my right hip, ideally i would like a longer quote (just a few words longer)anyone got any similar quotesi like the ones about dont wait for the storm to past, dance in the rain. we would be so bold as to say it is an absolute MUST for anyone who is even remotely interested in magick and the occult) to obtain a copy of Aleister Crowley's Liber DCCLXXVII (Liber 777) which, thanks to his research into, knowledge of and dedication to this fascinating subject, provides us with the. That is the Mark of the Beast. X=ten in Roman numerals - Mac OS X - used as a mark for scoring a strike in bowling where it has a value of ten. TCP3: I’ve been on To Catch a Predator three times: 13. Trump is a Heretic and an avaricious Wolf who has written books about "winning" at any cost. The number 666 has been linked to scary scenes and hideous demonic beasts which have spooked the fraught nerves of the laity for almost two thousand years. The first one is 14 which refers to the statement: “We must secure the existence of our people and a. The explanation of Prince's symbol as the basic male/female representation is accurate, but there is much more to it. The 666 is identified in many other realms, as well. En mängd olika tolkningsmodeller finns, och nedan finns en översikt av den vanliga klassificeringen av dessa modeller. Member FINRA / SIPC. A Comparison of the Biblical and Masonic Accounts of King Solomon’s Temple. The Kleenex brand is so famous that it is often used a a generic name for tissues in the United States and Canada. Note : 10% interest rate is applied while computing implied volatility. Scrambled Text Generator ToggleCase cuts out all the hassle of manually figuring out what words to scramble or shuffle to create Scrambled Text. 
The Satanic Bible Anton Szandor LaVey Called "The Black Pope" by many of his followers, Anton LaVey began the road to High Priesthood of the Church of Satan when he was only 16 years old and an organ player in a carnival: "On Saturday night I would see men lusting after half -naked girls dancing at the carnival, and on Sunday morning when I was. Business Source Premier Active Full-Text Journals & Magazines: 1,110 Active Full-Text Peer-Reviewed Journals: 666. Each Unicode character has its own number and HTML-code. It implies calculation of the equation inside it should. TouchWiz Nature UX 2. In this ‘Ultimate Collection’ Series. Orange background. Create a Text File using joe text editor. Thus, we have 666, the sun deity (Lucifer), the Goddess (Mystery, Babylon the Great, Mother of Harlots), and the beast (antichrist, 666), all in one unitary hand sign. Oh what a web of evil wicked men can weave around something seemingly so ordinary and mundane. The Fruit of Life. Some have made Emperor Titus add up to 666. Exposing Spiritual Corruption: Spiritual Alchemy and The Bible (1) 6. The Stefan–Boltzmann constant. Atrial fibrillation is the most common of all sustained cardiac arrhythmias, with the prevalence increasing with age to up to 5 percent in persons more than 65 years of age, and it is a major. Bible verses about 666.
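The "Convert 'Plant trees' text to binary ASCII code" step mentioned above can be sketched in a few lines of Python: look up each character's ASCII/Unicode code with `ord()` and format it as 8 bits. `text_to_binary` is a hypothetical helper name for illustration.

```python
# Text-to-binary sketch: each character becomes its 8-bit ASCII code.
def text_to_binary(text):
    return " ".join(format(ord(ch), "08b") for ch in text)

print(text_to_binary("Plant trees"))
# e.g. 'P' is ASCII 80 -> 01010000, 'l' is 108 -> 01101100, and so on.
```

Going the other way (binary back to text) would apply `int(bits, 2)` and `chr()` to each 8-bit group.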
Tables, especially large ones, should be placed on a separate slide. How the number 666 is found through the Roman numerals marked on the base of the pyramid. Variables begin with symbol$, and are declared just like properties. Modern conspiracies suggest that the world, and its most powerful leaders are actually under the control of the Illuminati. This serpent worship absolutely proves Freemasonry is Satan worship. Symbols are the language of dreams. See also free ankh pictures and clip-art. In meditated fraud and malice, bent [ 55 ] On mans destruction, maugre what might hap. neither was coke cola. 13:18 of the Textus Receptus cites the number 666 as Strong's G5516 (χξς - Chi, Xi, and Stigma - 666), while other manuscripts (Nestle, et. Select one of the fonts from the list. Select character encoding type. 11 Internet, Tips Trick Blogging. However, this is not the case in the divine world. • Must be at least 4% of the area of the display panel and more than 6 mm. Devil Emoji is the equivalent of the interjection "muahahaha". When examining its physical structure, we can see that it, like the “nine”, is shaped like a spiral, which is a highly symbolic symbol itself. Check it out! Want to convert ASCII to text?. ; Venutolo. ” — Lao-Tzu. Clone via HTTPS Clone with Git or checkout with SVN using the repository’s web address. Telnet online is a service through which you can connect to hosts and communicate with them, checking the operability and proper configuration of the services started there. The symbol can be seen on the New King James Bible, on certain rock albums (like Led Zepplin's), or you can see it on the cover of such New Age books as The Aquarian Conspiracy. Similar Images. Revelation 13:18: Another form of 666 cleverly concealed: Three Snakes One Charm - Egyptian goddess religion associated with Osiris the horned god. 
Just click on the symbol to get more information such as stars symbol unicode, download stars emoji as a png image at different sizes, or copy stars symbol to clipboard then paste into. Copy and paste this emoji: 🚩 This Unicode character has no emoji version, meaning this is intended to display only as a black and white glyph on most platforms. It was a number system closer to Arabic numbers (our own number system). #ushuaia h1{font-size:36px}#ushuaia h2{font-size:30px}#ushuaia h3{font-size:24px}#ushuaia h4{font-size:21px}#ushuaia h1,#ushuaia h2. If someone enters a phone number without the area code, Access won’t write the data until the area code data is added. I liked it and used it frequently as a signature on writings, etc. Geometric Green Chain. For example, you might use an input mask with a field that stores phone numbers so that Access requires ten digits of input. Spiral Ancient Goddess symbol of universal pattern of growth in nature. We have all the lyrics you are looking for you to write a funny nick, different and Nice Nick. " Since as early as 2016, the symbol has been frequently associated with the supporters of the U. I then went back into text, typed + and removed any number in recents that had the + in front of it. The problem with most of the other answers here is you need to tweak the size of the outer container so that it is the perfect size based on the font size and number of characters to be displayed. A little "detail" concerning the Biblical number "666": Notice how Revelation 13:18 reads: Here is wisdom. Baphomet represents the equilibrium of the opposite. Please do as follows. If you have ever strayed into the Options screens of Excel you may have noticed something called R1C1 reference style. It can also be said that the triquetra evolved from being a symbol that hinted at the female (mother, maiden, crone) to one that would be interpreted as male ('Father' and 'Son'). The Legends of History 1,926,098 views. 
The Pythagoreans divided the numbers into two groups: odd and even, male and female, light and dark etc. So the small text letters that you see in the output box above are just a few of the 130,000+ symbols that are specified in the Unicode standard - just like the symbols that you're reading right now. You may have even tried it and when you saw that all the columns had changed from letters to numbers, panicked and switched it back. nothing when. A person hanging on such a cross would be positioned head-downwards. Historically, 666 is the name of Emperor Nero reduced to its numerological value through Hebrew Gematria. However, the design firm responsible for creating the logo claimed the. The Amplified Bible states it as follows: For truly I tell you, until the sky and earth pass away and perish, not one smallest letter nor one little hook [identifying certain Hebrew letters] will pass from the Law until all things [it foreshadows] are accomplished. Love ♥ ۵ 웃 유 ღ ♂ ♀ Zodiac ♈ ♉ ♊ ♋ ♌ ♍ ♎ ♏ ♐ ♑ ♒ ♓ Phone ☎ ☏ Scissors Cross ☩ ☨ ☦ † ‡ Music ♪ ♫ ♩ ♬ ♭ ♮ ♯ ° ø. The two groups united to form the Black Gangster Disciple Nation (BGDN). It also represents a tradition that should result in a perfect social order. Other than the holy trinity, the triangle tattoo has been used to represent a variety of other trinities. Here are the 10 most popular symbols of the Illuminati:. While the Alt key is depressed, type the proper sequence of numbers (on the numeric keypad) of the ALT code from the table above. 0420 and column D. Related Symbols:Top 10 Illuminati Symbols VideoGmail Logo Masonic ApronDisney 666Monster Energy Drinks. murtazamzk. They can have any value that is allowed for a CSS property, such as colors, numbers (with units), or text. The home of free fonts since 1998. " This goat's head is the symbol of Satan. 
Despite skyrocketing demands brought on by the COVID-19 pandemic, Catholic Charities of the Diocese of Wilmington has maintained the level of service its clients have come to expect, executive director Richelle Vible said April 24. Atheist symbols. AMAMCF: ask me about my cheese fetish: 15. com is mainly design and produce fashion clothing for women all over the world for about 5 years. Start sharing your documents, photos, videos and more with FilesAnywhere © 2020 FilesAnywhere - Send Feedback. 666 MHz, 667 was the more accurate approximation—conveniently ignoring their own usual rounding practice: as examples, consider the earlier 66. It has a specific meaning. ) of this same particular Passage spell each word out - hexakosioi, hexēkonta, and hex (εξακοσιοι εξηκοντα εξ - six hundred sixty six). From the six dots that make up the basic grid, 64 different configurations can be created. Although a common symbol throughout history, the lightning bolt has a deep, Biblical link to Lucifer himself. In his German-language Hebrew and Aramaic Lexicon, the first edition of which was published in sections between 1810 and 1812 and the third edition in 1828, in the 1833 edition in Latin, and in the revised German text of 1834 the Hebrew scholar Wilhelm Gesenius (1786–1842), while recognising that the vowels attached to the Tetragrammaton in the Masoretic text are those of "Adonai" and. High Quality with affordable prices. It also stands for Satan and the Antichrist's dominion over the Father, Son, and Holy Spirit. We engage the development community with real-world statistics. As you review this Top 10 list of the lucky ones, please note Pinyin is also used here, which is the Chinese spelling system for the characters. 30 31 16 17. com you can easily find the emojis you want and copy them to the clipboard. It is the Sabbath, and Jesus is invited to a meal at the house of a leading Pharisee. edited by Laura J. (Physics, scattering) Cross_section_(physics). PROPHETIC SYMBOLS. 
At Yahoo Finance, you get free stock quotes, up-to-date news, portfolio management resources, international market data, social interaction and mortgage rates that help you manage your financial life. The ancient Babylonians observed the movements of the planets, recorded them as numbers. A list containing 100+ different styles of your own typed or pasted text will appear. Yet to this day, the Trinity is always thought of as having its origin in Roman Catholicism—most notably at the Council of Nicaea in AD 325, the first ecumenical council of Christian bishops. The 666 building is home to Lucent (Lucifer) Technologies and its RFID microchip (the Mark of the Beast). SC: still chafing: 16. The All-Seeing Eye and The. The image of Jesus is a popular Catholic picture called, "The Image Of Divine Mercy," featuring an Illuminati Pyramid with detached capstone formed by the Lord's belt. the easiest way to download a document is to add the symbol after the following url:. Sequence analysis predicted that the 666-amino acid FZD3 protein, which is 98% identical to mouse Fzd3, contains an N-terminal CRD, 7 transmembrane domains, 2 cys residues in the second and third extracellular loops, and 3 N-linked glycosylation sites. Although many Internet chat rooms have different 'lingo', 'slang', and etiquette, the symbols, special character and numbers listed here are fairly standard throughout the Internet and Chat Room /SMS/ Instant Messaging / Text Messaging areas. If you turn the Arabic symbol on its side, it looks exactly like the Greek symbol for 666. population has a fear of the number 13, and each year the even more specific fear of Friday the 13,. A SymbolArts badge always has the “jewelry quality” stamp to re-assure that our badge can be worn as a piece of jewelry. On 12/3/2012 at 2:04 pm beach boy Said: Santa cruz, ca. 🔣 Symbols Heart emojis, clocks, arrows, signs and shapes. Saint Peter's cross is often inverted too. 
The Temple of Solomon, the Bible and Freemasonry Craft Ritual. The Bible in many places is written in code or symbols, especially the prophecies which foretell future events. Up until then, Phillip was. These are all occult symbols and manifestations of the spirit of Satan. With CheckMyTrip, traveling has never been more stress-free. Reputation and "I Am", "I know" counters. All Angel Channel. Stay seated. Although a common symbol throughout history, the lightning bolt has a deep, Biblical link to Lucifer himself. This symbol stands for evil and is used to conjure evil spirits. The 666 number in 1 Kings 10:14 appears to be simply coincidence. Various Illuminati Symbols And Their Meanings. When using this transparent words maker to design an online transparent writing or transparent word art, you can choose among more than 450 cool artistic font faces to produce transparent PNG text with your name, message, slogan, or any words or letters you need to your banner, header, title, cover, folder, flyer, interface, page, blog, site. Peter David Phillips was elected President of People’s National Party (PNP) on Sunday, March 26, 2017 and was sworn in as Leader of the Opposition on April 2, 2017. Cross of St. Write text symbols using keyboard, HTML or by copy-pasting. Symbol of the perfected nullity and the unkindness, because it is composed of three 6 and that the 6 is the symbol of the imperfection, the evil, the iniquity and the apostasy. Make a star symbol, for mostly use of social websites like facebook. ShareTweetEmailThe life, career, and occult symbolism of XXXTentacion and the bizarre circumstances around his death. Arlandson, J. 144 (5):1023-1032, November 2019. 666 - The number of man. The number 666 has been linked to scary scenes and hideous demonic beasts which have spooked the fraught nerves of the laity for almost two thousand years. The Temple of Solomon (1) 4. High Quality with affordable prices. 
We have 688 free vintage fonts to offer for direct downloading · 1001 Fonts is your favorite site for free fonts since 2001. com Click to copy — press down alt for multiple Clear As HTML. Although many Internet chat rooms have different 'lingo', 'slang', and etiquette, the symbols, special character and numbers listed here are fairly standard throughout the Internet and Chat Room /SMS/ Instant Messaging / Text Messaging areas. If you insert hyperlinks instead of cross-references, they'll be blue by default and won't revert to black when you update the document. Animations show Christ in English and Greek, as well as Jesus in English and Greek - Notice Greek Jesus has "I" (capital "I" is another form for the Greek Zeta as well as "Z") as first letter as iota is shown in middle symbol of some Greek New Testaments for the "number name". As mentioned, gematria is a system of assigning a numerical value to a word or phrase. All accessible from your mobile device. Instead, we will try to explain how the initiation process goes. How to Read 12 Digit UPC Barcodes. Photo: Getty Images. Use this text generator to make zalgo text for use on Facebook, Twitter, etc. Note that when the UI look'n'feel isn't your biggest concern, but the functionality is, then just use instead of. χριστε χριστέ Χριστον Χριστόν Χριστὸν ΧΡΙΣΤΟΣ Χριστός χριστὸς ΧΡΙΣΤΟΥ χριστού Χριστοῦ χριστόυ Χριστω χριστώ Χριστῷ. We use Arabic numerals. If your KJV has maps or notes, then it may have a copyright, but the text itself does not. In the year 1969, psychedelic-occult rock band Coven used this sign before starting and ending their show on stage. The four symbols on the label and inside sleeve of Led Zeppelin IV, represent (from left to right) Jimmy Page, John Paul Jones, John Bonham and Robert Plant. 666: Blood Ritual Symbol Represents animal and human sacrifices. 
The use of this number is most appropriate because during the Tribulation Satan will work to exalt his man, the Antichrist, as the world's Messiah. com Click to copy — press down alt for multiple Clear As HTML. In a table, letter Э located at intersection line no. It has often been said that the phrase Bismillah ir-Rahman ir-Rahim contains the true essence of the entire Qur'an, as well as the true essence of all religions. They represent things such as faces, weather, vehicles and buildings, food and drink, animals and plants, or icons that represent emotions, feelings, or activities. Love ♥ ۵ 웃 유 ღ ♂ ♀ Zodiac ♈ ♉ ♊ ♋ ♌ ♍ ♎ ♏ ♐ ♑ ♒ ♓ Phone ☎ ☏ Scissors Cross ☩ ☨ ☦ † ‡ Music ♪ ♫ ♩ ♬ ♭ ♮ ♯ ° ø. From: Subject: =?utf-8?B?U2NoZW5nZW4gYsO2bGdlc2luZGUgc8SxbsSxciBrb250cm9sbGVyaSBzxLFrxLFsYcWfdMSxcsSxbGTEsSAtIETDvG55YSBIYWJlcmxlcmk=?= Date: Fri, 14 Apr 2017 17:03. " (Churchward, p. Prince's symbol is laced with Freemasonic and Satanic imagery. Nicknames, cool fonts, symbols and tags for Hacker - ꧁H҉A҉C҉K҉E҉R҉꧂, H҉A҉C҉K҉E҉R҉😈, ☠️H҉A҉C҉K҉E҉R҉☠️, ꧁༒☬₣ℜøźєη•₣ℓα₥єֆ꧂, ЕЯЯОЯ, H҉A҉C҉K҉E҉R҉. The Amplified Bible states it as follows: For truly I tell you, until the sky and earth pass away and perish, not one smallest letter nor one little hook [identifying certain Hebrew letters] will pass from the Law until all things [it foreshadows] are accomplished. This symbol is also found on the heavy metal Satanic album; the Slayer.
# TS SET July 2018 Paper 1 (Part 2)

This TS SET paper was quite different from previous papers, with lots of focus on development, regional topics and statistics; there were very few questions from remote sensing. Detailed solutions to the questions are provided at Doorsteptutor CBSE-NET (Paper-Based Test Series) Test Series (Paper-I); for Paper 1 preparation, see Doorsteptutor - CBSE-NET (Paper-Based Test Series) Paper-I.

Dr. Manishika Jain explains TS SET 2018 Paper 1 Solved, Part 2 (important for NET Paper 1).

Q26. The question gives a pair of words having a certain relation. Select the pair among the four options that has the same relation as the first pair.
FOOD : HUNGER :: SLEEP : ________
(A) REST (B) NIGHT (C) DREAM (D) WEARINESS

Q27. The question gives a pair of words having a certain relation. Select the pair among the four options that has the same relation as the first pair.
SUMMER : WINTER :: ________ : ________
(A) MONDAY : SATURDAY (B) TUESDAY : MONDAY (C) SUNDAY : HOLIDAY (D) JANUARY : MARCH

Q28. The diagram, among the following, which represents the relation between bus, scooter and conveyance is
(A) (B) (C) (D)

Start passage
Note: Study the diagram given below where denotes youth, denotes unemployed and denotes educated; and answer the questions below.

Q29. The region representing uneducated and unemployed youth is
(A) 5 (B) 4 (C) 6 (D) 7

Q30. The regions representing educated unemployed youth are
(A) 3 (B) 7 (C) 5 (D) 4
End passage

Start passage
The following pie chart shows the expenditure incurred in publishing a book. Study the pie chart and answer the questions based on it.

Q31. If for a certain quantity of books the publisher has to pay Rs.
30,600 as printing cost, then what will be the amount of royalty to be paid for these books?
(A) Rs. 19,450 (B) Rs. 21,200 (C) Rs. 22,950 (D) Rs. 26,150

Q32. What is the central angle of the sector corresponding to the expenditure incurred on royalty?
(A) 15° (B) 24° (C) 54° (D) 48°

Q33. If 5500 copies are published and the transportation cost on them amounts to Rs. 82,500, then what should be the selling price of the book so that the publisher can earn a profit of 25%?
(A) Rs. 187.50 (B) Rs. 191.50 (C) Rs. 175 (D) Rs. 180

Q34. The price of the book is marked 20% above the cost price. If the marked price of the book is Rs. 180, then what is the cost of the paper used in a single copy of the book?
(A) Rs. 36 (B) Rs. 37.50 (C) Rs. 42 (D) Rs. 44.25

Q35. Royalty on the book is less than the printing cost by
(A) 5% (B) $33\frac{1}{5}$% (C) 20% (D) 25%
End passage

Q36. The method of delivering Information Technology (IT) services in which resources are retrieved from the internet through web-based tools and applications, as opposed to a direct connection to a server, is known as
(A) Wikipedia (B) LAN (C) WAN (D) Cloud Computing

Q37. Which of the following are correct about QR codes?
i) It is a two-dimensional (matrix) machine-readable bar code made up of black and white squares.
ii) It is used for storing URLs or other information that links directly to text, emails, websites, phone numbers.
iii) It can store up to 7089 digits.
(A) i, ii only (B) ii, iii only (C) i, ii, iii (D) i, iii only

Q38. VoLTE stands for
(A) Voice over Long-Term Evolution (B) Voice only Local Telecom Enterprises (C) Velocity of Locomotive Telecom Efficiency (D) Virtual office Level of Telecom Enterprises

Q39. Match the ICT meanings.
List-I:
a) A set of instructions written in a computer language
b) The software that accesses and displays pages and files on the web
c) The practice of blindly posting commercial messages or advertisements to a large number of unrelated and uninterested groups
d) A mechanism that isolates a network from the rest of the internet, permitting only specific traffic to pass in and out
List-II:
1) Firewall
2) Cookies
3) Browser
4) Programme
5) SPAM
Codes (a b c d):
(A) 1 2 3 4 (B) 5 4 3 2 (C) 4 3 5 1 (D) 4 3 2 5

Q40. Which of the following is not correct about ICT?
(A) ICT is a modern teaching aid
(B) Using ICT in teaching reduces the physical labour of a teacher
(C) ICT is a substitute for a teacher
(D) ICT in teaching facilitates learning

Q41. Match the following.
List-I (Colour codes):
I) Yellow plastic bags
II) Black plastic bags
III) Blue/white plastic bags
IV) Red plastic bags
List-II (Methods for disposal):
1) Disposal in secured landfills
2) Incineration and deep burial
3) Autoclaving and chemical treatment
4) Microwave treatment and destruction
Codes (I II III IV):
(A) 4 3 2 1 (B) 1 4 3 2 (C) 2 1 4 3 (D) 3 2 1 4

Q42. Arrange the following atmospheric layers in proper order (starting from the Earth's surface):
(1) Mesosphere (2) Thermosphere (3) Mesopause (4) Tropopause (5) Stratosphere (6) Troposphere (7) Stratopause
(A) 5 → 3 → 1 → 7 → 2 → 4 → 6
(B) 6 → 4 → 7 → 1 → 3 → 5 → 2
(C) 4 → 6 → 5 → 3 → 7 → 1 → 2
(D) 6 → 4 → 5 → 7 → 1 → 3 → 2
Refer: Layers of Atmosphere - 4 Layers in Thermal & 2 in Magneto-Electrical Structure. This lecture focuses on the thermal and magneto-electrical layers of the atmosphere.

Q43.
Coral reefs are mainly distributed globally in
(A) Temperate water (B) Tropical water (C) Antarctic water (D) Arctic water
Refer: Beautiful Coral Reefs: Understanding the Formation & Development. In this session the formation of coral reefs, characteristics of coral reefs and types of corals are covered.

Q44. One of the best-known matrix methods for environmental impact assessment is
(A) Leopold matrix (B) Sphere matrix (C) Saratoga matrix (D) Component interaction method
Refer: Environmental Impact Assessment - Analyzing Benefits and Actions. Dr. Manishika Jain in this lecture explains the concept of Environmental Impact Assessment (EIA) and the difference between EIA and Strategic EIA.

Q45. Protected areas, which include national parks, wildlife sanctuaries, biosphere reserves, sacred groves and reserve forests, are examples of
(A) Ex-situ conservation (B) In-situ conservation (C) Cybernetic conservation (D) Liebig's conservation
Refer: Insitu & Exsitu Conservation; Topological and Non-Topological Conservation. Dr. Manishika Jain explains Biodiversity: Insitu & Exsitu Conservation; Topological and Non-Topological Conservation.

Q46. NIRF stands for
(A) National Informational Repository Fund (B) National Institute of Rural Folks (C) National Institutional Ranking Framework (D) National Institute of Ranking Framework
Refer: National Institute Ranking Framework (NIRF) & NIRF Rankings 2018. Dr. Manishika Jain in this video explains National Institute Ranking Framework (NIRF) & Rankings 2018 - Higher Education. Also refer: NIRF Ranking Weightage Criteria for 2018 - Higher Education.

Q47. Which of the following are true about 'Pragathi'?
i) It is a scholarship initiated by AICTE meant for girl students.
ii) The amount of the scholarship is Rs. 30,000/- for tuition fee, or the actual fee, whichever is less, for each year of the course.
iii) Rs.
2000/- per month for 10 months as incidentals for each year of the course.
(A) i, ii only (B) ii, iii only (C) i, ii, iii (D) i, iii only

Q48. Which one of the following is not a component of the National Knowledge Commission (2006-2009) report?
(B) Knowledge Concepts (C) Delivery of services (D) Borrowing of knowledge

Q49. The minimum CGPA aggregate score (quantitative and qualitative) in each criterion for an institution to obtain NAAC accreditation is
(A) 1.50 (B) 1.51 (C) 2.01 (D) 2.51
Refer: NAAC (National Assessment and Accreditation Council) & IQAC - Examrace (Higher Education). Dr. Manishika Jain explains NAAC (National Assessment and Accreditation Council) & IQAC.

Q50. Match the following with regard to the location of the institutes.
List-I:
a) Hyderabad
b) Dona Paula
c) Manesar
d) Dehradun
List-II:
1) National Brain Research Centre
2) University of Petroleum and Energy Studies
3) National Institute of Nutrition
4) National Institute of Oceanography
5) National Institute of Design
Codes (a b c d):
(A) 3 2 1 5 (B) 3 1 2 4 (C) 3 4 1 2 (D) 3 4 5 2
Sockets - Maple Help Overview of the Sockets Package Calling Sequence Sockets[command](arguments) command(arguments) use Sockets in ... end use Description • The Sockets package is a suite of tools for network communication in Maple. The commands in this package enable you to connect to processes on remote hosts on a network (such as an Intranet or the Internet) and exchange data with these processes. In particular, it enables two independent Maple processes running on different machines on a network to communicate with one another. • For important notes on network security, see the release notes for this package Sockets,release. • This package presents a user interface to reliable, connection-oriented, stream sockets in the Internet domain (TCP/IP). You can create a client socket by using procedure Sockets[Open], which enables you to connect to and communicate with a server on a remote machine. You can also create a server socket by using procedure Sockets[Serve], which enables you to offer computational services to which others can connect and make requests. • The Sockets package automatically shuts down any open connections that it is managing before being garbage collected, or when the Maple process in which it is running terminates normally. However, there is no user control over when this occurs (except in the case of termination), so you should not rely on it to shut down socket connections normally. • Each command in the Sockets package can be accessed by using either the long form or the short form of the command name in the command calling sequence. As the underlying implementation of the Sockets package is a module, it is also possible to use the form Sockets:-command to access a command from the package. For more information,  see Module Members. • If you are using these routines for programming, you can access the exports of this package by enclosing your code in a use statement that binds this package; for example, use Sockets in ... end use. 
See use for more details.

• All socket connections created by routines in this package are represented by an opaque type exported as Sockets:-socketID, which is local to the Sockets package. You can test whether an expression expr has the correct structure for a socket ID by using type( expr, Sockets:-socketID ). (Note that the type must not be quoted in this case.)

List of Sockets Package Commands

• The following is a list of available commands. To display the help page for a particular Sockets command, see Getting Help with a Command in a Package.

Examples

> with(Sockets)
[Address, Close, Configure, GetHostName, GetLocalHost, GetLocalPort, GetPeerHost, GetPeerPort, GetProcessID, HostInfo, LookupService, Open, ParseURL, Peek, Read, ReadBinary, ReadLine, Serve, Status, Write, WriteBinary]   (1)

Find out where you are on the network.

> GetHostName()
"be527fdac554"   (2)

Open a connection to the echo server on the peer (a particular machine on the same network as the machine that generated this help page).

> with(Sockets):
> sid := Open("mantis", "echo")
0   (3)

Send a message to the peer.

> Write(sid, sprintf("Hello from %s!\n", GetHostName()))
25   (4)

> Read(sid)
"Hello from be527fdac554!"   (5)

Shut down the connection.

> Close(sid)
true   (6)

The following program finger enables you to make a query by using the finger protocol.
> finger := proc( _who )
      local who, at, host, sock;
      who := _who;
      at := StringTools:-FirstFromLeft( "@", who );
      if at <> FAIL then
          host := who[ 1 + at .. -1 ];
          who := who[ 1 .. at - 1 ]
      else
          host := "localhost"
      end if;
      use Sockets in
          sock := Open( host, "finger" );
          Write( sock, sprintf( "%s\r\n", who ) );
          printf( "%s\n", Read( sock, 5 ) );
          Close( sock )
      end use;
      NULL
  end proc:

References

For background material on network programming concepts in the Sockets package, see references.
There is in souls a sympathy with sounds:
And as the mind is pitch'd the ear is pleased
With melting airs, or martial, brisk or grave;
Some chord in unison with what we hear
Is touch'd within us, and the heart replies.
--- William Cowper

Sound is an important component of modern computer games; without sound, games lose a lot of atmospheric detail.

## Sound

Sound is a vibration that propagates as an audible wave of pressure through a transmission medium such as a gas, liquid or solid. These mechanical waves travel at different velocities as the medium varies; for example, sound travels a lot faster through water (1478 m/s) than through air (344 m/s), which is why sonar works so well in water.

Besides its velocity, there are two other parameters to a sound wave: its amplitude and its frequency. The amplitude is a measure of how much air volume is moved over a single period of time. Large speakers (and big-mouthed people) move more air, thus the sound they emit is "stronger" or more "intense".

A frequency, strictly speaking, is the number of occurrences of a repeating event per unit of time. In our context, the frequency is how many complete waves are emitted per second by the sound source. The frequency is measured in Hertz (Hz). Most human beings can hear sounds in the range of 20 to 20000 Hz. The average male voice ranges from 20 to 2000 Hz, while the voice of an average female ranges from 70 to 3000 Hz. (Note to all the feminists: there are differences between men and women - men have more bass and women have more treble.)

Mathematically speaking, the velocity of sound, $v$, can be computed from its frequency $f$ and its wavelength $\lambda$ as follows: $v = f \cdot \lambda$. Also note that sounds with the same frequency and the same amplitude might still sound different, due to having different wave forms (think of sine waves or sawtooth waves).
Our ears have numerous little hair-like structures called stereocilia, which are responsible for catching sound waves. Each of these cilia can detect sound waves of different frequencies. Once a sound wave enters the ear, the cilia resonate and send the corresponding signals to the brain, which then transforms those signals into the perception of sound. (Can you imagine creatures that do not hear, but see sound?)

The form of a single pure tone is always a sine wave, with an arbitrary frequency and amplitude. Mixing thousands of those pure tones together, we get the spectrum of a sound. The most basic waveform is the sine wave, as all other waveforms can be represented by linear combinations of several sine waves. This is the topic of Fourier analysis, which we will postpone to later, more advanced, tutorials. For now, I just want to give a short outlook into the beautiful world of mathematics:

## Can you hear the shape of a drum?

A famous mathematical question is whether you can actually hear the shape of a drum. To hear the shape of a drum is to infer information about the shape of the drumhead from the sound it makes. "Can One Hear the Shape of a Drum?" was the title of an article by Mark Kac in the American Mathematical Monthly in 1966, but the phrasing of the title is due to Lipman Bers. The mathematics behind the question can be traced all the way back to Hermann Weyl.

The frequencies at which a drumhead can vibrate depend on its shape. The Helmholtz equation yields the frequencies if the shape is known; these frequencies are the eigenvalues of the Laplacian on the domain. A central question is whether the shape can be predicted if the frequencies are known. No shape other than a square vibrates at the same frequencies as a square, but Kac did not know whether it was possible for two different shapes to yield the same set of frequencies. Mathematically speaking, a drum is conceived as an elastic membrane, which is represented by a domain in the plane.
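In symbols, and assuming the drumhead is clamped along its rim, the pure tones are the Dirichlet eigenvalues $\lambda$ of the Laplacian on the domain $\Omega \subset \mathbb{R}^2$:

```latex
% Helmholtz equation with Dirichlet boundary condition (clamped drumhead):
\begin{aligned}
  -\Delta u &= \lambda u && \text{in } \Omega,\\
          u &= 0         && \text{on } \partial\Omega.
\end{aligned}
% Two domains "sound the same" (are isospectral) when their sequences of
% eigenvalues \lambda_1 \le \lambda_2 \le \dots coincide.
```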
Two domains are called isospectral, or homophonic, if they have the same spectrum, i.e. the same eigenvalues. The Dirichlet eigenvalues are exactly the fundamental tones that the drum in question is capable of producing; they appear naturally as Fourier coefficients in the wave equation of the emitted sound wave. The question can thus be reformulated as follows: what can be inferred about the domain if one only knows its eigenvalues?

John Milnor immediately observed that a theorem by Ernst Witt implied the existence of two $16$-dimensional tori with the same eigenvalues but different shapes. The two-dimensional problem remained unanswered until 1992, when Carolyn Gordon, David Webb and Scott Wolpert constructed a pair of regions in the plane (concave polygons) with different shapes but the same eigenvalues, thus answering the question in the negative: in general, one cannot hear the shape of a drum completely.

This is an image of the two surfaces with the same spectrum that they constructed. Notice that both polygons have the same area and perimeter.

Some information might still be acquired: Zelditch showed that the question can be answered positively for certain convex planar regions with analytic boundary, and today it is known that the set of isospectral domains is compact in the $C^\infty$-topology.

## Digital versus MIDI

There are two kinds of sounds that a computer can produce: digital and synthesized. Digital sounds are recordings of sounds, while synthesized sounds are programmed reproductions of sounds based on algorithms.

### Digital Sound

Digital sound obviously needs digitalization, i.e. a way to encode data in a digital form of ones and zeros. Just as an electrical signal can create sound by causing a magnetic field to move the cone of a speaker, "talking" to a speaker creates the opposite effect: the speaker then produces electrical signals based on the vibrations it "feels".
Thus, with proper hardware, it is possible to digitize sounds. Once the sound is recorded into memory, it can be processed or simply played back with a digital-to-analog converter.

The number of samples of a sound recorded per second is called the sample rate. If we want to reproduce a sound, the sample rate must be at least twice the frequency of the original sound. Thus, for example, if we want to reproduce a human voice, the sound must be sampled at at least 4000 Hz. The mathematical reasoning behind this is that if we can sample the highest-frequency sine wave of a sound, we can sample all the lower ones as well. Can you figure out why you need double the frequency to sample a sine wave? (Hint: sine waves do rise and fall.) For the mathematically interested, have a look at the Nyquist-Shannon sampling theorem.

The second parameter of a sound, the amplitude, also plays a crucial role when sampling a sound. The so-called amplitude resolution defines how many different values are available for the amplitude. For example, with a 16-bit resolution, there are $65536$ possible values.

To conclude: digital sound is a recording, or sampling, of sound converted into a digital form from an analog signal.

### Synthesized Sound

Synthesized sound isn't a "real" sound converted into a digital form; it is a mathematical reproduction of a sound. Playing back a single tone is easy, but real sound is made up of many frequencies: it has undertones, overtones and harmonics, for example. Thus, to produce full sound, the hardware must be able to play back many "base" sounds simultaneously.

One of the first attempts to create synthesized sound was so-called frequency modulation (FM) synthesis. FM synthesis feeds the output of a signal back to itself, thus modulating the signal and creating harmonics from the original single sine wave. About the same time as FM synthesis, a technical standard for music synthesis was introduced: the Musical Instrument Digital Interface (MIDI) standard.
Instead of digitizing a sound, a MIDI document describes a sound as keys, instruments and special codes:

    Channel 1: B sharp
    Channel 2: C flat

A drawback of this format is that the synthesis is up to the hardware; thus different hardware produces different sounds. The huge gain in memory size compared to digital music (a few kilobytes against a few megabytes) often made up for that drawback, though.

To further increase the quality of synthetic sound, a process called wave table synthesis was introduced. Basically, wave table synthesis works like this: the wave table has a number of real, sampled digital sounds which can be played back at any desired frequency and amplitude by a digital signal processor. Obviously this takes up a bit more memory again, but the increase in quality is often worth it. As computers got faster, it also became possible to have software synthesizer-based wave table systems.

To take things even further, wave guide synthesis was introduced. By using special hardware, the sound synthesizer can generate a mathematical model of an instrument and then actually play it! With this technology, the human ear is no longer able to perceive a difference between "real" and synthesized sound.

So much for a basic introduction to the theory of sound. Let us briefly examine what type of sound is needed in a game.

## Basic Sound System

At the most basic level, a sound system must be able to play back sounds at any given time. This can be done using our newly implemented event queue. Unfortunately, playing one sound at a time is not good enough in most cases. Imagine a dog running around in the garden, playing fetch, for example. If we played the same footstep sound all the time, it would become repetitive and boring quite quickly; we thus already need different sound files for the same sound effect.
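Picking among those variations can be sketched as follows (a hypothetical helper, not from any particular engine; the file names are made up for illustration):

```cpp
#include <cstddef>
#include <random>
#include <string>
#include <vector>

// Pick one of several recorded variations of the same sound effect
// (e.g. footsteps) so that repetition is less noticeable.
std::size_t pickVariation(const std::vector<std::string>& files, std::mt19937& rng)
{
    if (files.empty())
        return 0; // nothing to play; the caller should check for an empty list
    std::uniform_int_distribution<std::size_t> pick(0, files.size() - 1);
    return pick(rng);
}
```

The engine would then play back the file at the returned index instead of always using the first recording.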
Another important thing to consider is that a computer only has a finite number of sound channels it can use at the same time; thus, sometimes, the game has to prioritize sounds, i.e. we have to play back those sounds that are important and ignore the less important ones. An example of this would be two cars crashing into each other on a road next to the garden our dog is playing in. Further imagine that the dog has invited many of his friends to play with him. Obviously the sound of the cars crashing would be more important than playing back a "pawstep" sound for each dog in the garden.

To be able to do this, our game engine will handle something we will call sound events. A sound event is a map (I love mappings!) between a game event and a sound (or multiple sounds). The game will not directly call for a sound to be played; it will rather trigger a sound event.

There is another thing we have to take into consideration: from how far away do we want the car crash to be heard? It would be quite useful if we could define a maximum distance for sound to travel - we will call this the falloff of a sound event. This leads to the following first draft of a sound event:

    struct SoundEvent
    {
        unsigned int falloff;
        unsigned int priority;
        std::vector<SoundFiles*> sounds;
    };

There is yet another thing we have to think about. Imagine the game world having different ground surfaces, such as grass and stone, or even caves for the dog to explore. Obviously the pawsteps will sound different in each scenario. We thus need to be able to switch between sounds based on game variables:

    struct SoundEvent
    {
        unsigned int falloff;
        unsigned int priority;
        std::map<std::string, std::vector<SoundFiles*> > sounds;
    };

Now the map stores pairs of strings and vectors of sound files to be able to associate different conditions with different sound files. We will see further examples of this in the next tutorial, when we will actually implement a sound system.
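How the engine might pick the right variation list at runtime can be sketched like this (a self-contained illustration: `SoundFile`, the field spellings and the `"default"` fallback key are assumptions for this sketch, not part of any engine API):

```cpp
#include <map>
#include <string>
#include <vector>

struct SoundFile { std::string path; };

struct SoundEvent
{
    unsigned int falloff;   // maximal distance the sound travels
    unsigned int priority;  // higher value wins when channels run out
    std::map<std::string, std::vector<SoundFile*>> sounds;
};

// Return the variations matching `condition` (e.g. "grass" or "stone"),
// falling back to a "default" entry, or an empty list if neither exists.
const std::vector<SoundFile*>& resolveSounds(const SoundEvent& event, const std::string& condition)
{
    static const std::vector<SoundFile*> empty;
    auto it = event.sounds.find(condition);
    if (it != event.sounds.end())
        return it->second;
    it = event.sounds.find("default");
    if (it != event.sounds.end())
        return it->second;
    return empty;
}
```

The triggered sound event would first be resolved this way, and only then would a concrete file be chosen and sent to a free channel.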
Take note, though, that creating different sound files with all those little changes to the footstep sound will take a lot of memory. It would most probably be better to use digital signal processing (see above) to simply alter the sounds on the go.

In the next tutorial we will learn how to use XAudio2 to implement a basic sound system.

# References (in alphabetical order)

• Game Programming Algorithms, by Sanjay Madhav
• Game Programming Patterns, by Robert Nystrom
• Microsoft Developer Network (MSDN)
• Tricks of the Windows Game Programming Gurus, by André LaMothe
• Wikipedia
# Nonempty, associative, and closed under inverses but not a group

Give an example of a set $G$ and an operation $*$ on $G$ such that $*$ is not a binary operation on $G$ (i.e. $G$ is not closed under $*$) but the associativity, identity and inverse properties hold.

Basically, try to find an example showing that the closure property must hold for $G$ to be a group.

-

How about the set $G = \{-1,0,1\}$ under usual addition? Associativity, the identity $0$ and inverses all hold, yet $1+1=2\notin G$, so closure fails.

-

All integers under addition. Let $\mathbb{Z}$ be the set of all integers, and take $G=(\mathbb{Z},+,0)$. Closure property: consider the subset $\{1,2\}$ of the integers, where $1+2=3$ and $3$ is an integer. This shows that the closure property holds for all integers under addition.

-

What's the inverse of 2? –  Gamma Function Jun 3 '14 at 0:31
the inverse of 2 is -2 –  April Jun 3 '14 at 13:06
## anonymous 3 years ago Cot^2 x + cot x = 0 1. anonymous 2. anonymous No written question, just have to solve for that 3. anonymous $\cot(x)\left(\cot(x)+1\right)=0$ $\iff \cot(x)=0$ or $\cot(x)=-1$ 4. anonymous Got it, thank you!
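For completeness, solving each factor gives the general solutions (each with period $\pi$):

```latex
\cot(x) = 0 \iff x = \frac{\pi}{2} + n\pi,
\qquad
\cot(x) = -1 \iff x = \frac{3\pi}{4} + n\pi,
\qquad n \in \mathbb{Z}.
```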
# Kinematics in One Dimension practice problems

On this page we have Kinematics in One Dimension practice problems. Hope you like them, and do not forget to like, social share and comment at the end of the page.

Question 1
(a) Under what condition will the distance and displacement of a moving object have the same magnitude?
(b) Can a body have a constant velocity but a varying speed?
(c) Can an x-t graph be a straight line parallel to the time axis?
(d) Can an x-t graph be a straight line parallel to the position axis?

Question 2
Show that the area under the velocity-time graph of a particle in uniform motion gives the displacement of the particle in a given time.

Question 3
Derive the following relations for uniformly accelerated motion using the calculus method:
(a) Velocity-time relation
(b) Position-time relation
(c) Velocity-displacement relation

Question 4
A car starting from rest accelerates at a rate $f$ through a distance $S$, then continues at constant speed for time $t$, and then decelerates at the rate $\frac{f}{2}$ to come to rest. If the total distance traversed is $5S$, then prove that $S=\frac{1}{2}ft^{2}$.

Question 5
A car starts from rest and accelerates uniformly for 10 s to a velocity of 8 m/s. It then runs at a constant velocity and is finally brought to rest in 64 m with a constant retardation. The total distance covered by the car is 584 m. Find the acceleration, the retardation and the total time taken.

Question 6
The relation between time $t$ and distance $x$ is $t=ax^3+bx$, where $a$ and $b$ are constants. Find the instantaneous acceleration.

Question 7
(a) Can a body have a constant speed but varying velocity?
(b) A ball is thrown straight up. What is its velocity and acceleration at the top?
(c) Two balls of different masses (one lighter and one heavier) are thrown vertically upwards with the same initial speed. Which one will rise to the greater height?
(d) Can a body have zero velocity and finite acceleration? Justify your answer with an example.
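A solution sketch for Question 4 (one possible route, using $v^2 = u^2 + 2as$ from Question 3):

```latex
% accelerating phase (from rest): v^2 = 2fS
% decelerating phase at rate f/2: \text{distance} = \frac{v^2}{2(f/2)} = \frac{v^2}{f} = 2S
% so the constant-speed stretch covers 5S - S - 2S = 2S = vt, giving v = \frac{2S}{t}
% substituting back into v^2 = 2fS:
\frac{4S^2}{t^2} = 2fS
\quad\Longrightarrow\quad
S = \frac{1}{2} f t^2 .
```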
Question 8
The v-t graphs of two objects make angles of $30^\circ$ and $60^\circ$ with the time axis. Find the ratio of their accelerations.

Question 9
'The direction in which the object moves is given by the direction of the velocity of the object and not by the direction of acceleration.' Explain the above statement with a suitable example.

Question 10
If the distance covered by a moving object varies directly as the time, what conclusion could you draw about the motion and the forces?

Question 11
An object is covering distance in direct proportion to $t^3$, where $t$ is the time elapsed.
(a) What conclusion might you draw about the acceleration? Is it constant? Increasing? Decreasing? Zero?
(b) What might you conclude about the force acting on the object?
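A solution sketch for Question 6, using the calculus method of Question 3:

```latex
t = a x^3 + b x
\;\Longrightarrow\;
\frac{dt}{dx} = 3 a x^2 + b
\;\Longrightarrow\;
v = \frac{dx}{dt} = \frac{1}{3 a x^2 + b},
\qquad
\text{acceleration} = \frac{dv}{dt} = \frac{dv}{dx}\,\frac{dx}{dt}
= \frac{-6 a x}{(3 a x^2 + b)^2}\cdot\frac{1}{3 a x^2 + b}
= -6 a x\, v^3 .
```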
# Re: [tlaplus] Conditionally modify array elements Hello, the main reference for getting started with TLA+ is the Hyperbook, available at http://research.microsoft.com/en-us/um/people/lamport/tla/hyperbook.html. More generally, the documentation available from http://research.microsoft.com/en-us/um/people/lamport/tla/tla.html is very helpful if you take the time to study it. From your message, it seems that you are talking about PlusCal. The initialization of the variable is syntactically incorrect, a function (array) value is written in TLA+ in the form [p \in S |-> e]. You can use a similar form to assign a new value to the variable in the process body in a single step (hint: the expression may contain references to the existing array). More conventionally, you could use a while loop to assign individual array elements in different steps. Deciding the "grain of atomicity" in the model of a computation is one of the main design choices. Again, the Hyperbook will give you more information. Hope this helps, and happy exploring. Stephan > On 11 Nov 2015, at 20:18, fai...@xxxxxxxxx wrote: > > Hi, > > I'm new to TLA+ and unable to figure out a way to solve the below problem: > > I have a array a, and would like to modify certain elements in the array if they satisfy some condition. How could I achieve this? > > Eg: > > variable a = [\A p \in 1..N |-> p] > > now in the process body, I would like to achieve: > > \A p \in 1..N, if (a[p] < k) then a[p] = k+1 >
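Concretely, the single-step conditional assignment hinted at above could look like this in PlusCal (a sketch, reusing the names `a`, `N` and `k` from the question):

```tla
variables a = [p \in 1..N |-> p];  \* correct function syntax, no \A

(* one atomic step: raise every element that is below k *)
Bump:
    a := [p \in 1..N |-> IF a[p] < k THEN k + 1 ELSE a[p]];
```

Note that the whole array is assigned once, which keeps the update atomic; a `while` loop over the indices would instead split it into $N$ separate steps.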
## NFT Blueprint: What is ERC-721?

NOTE: At the end of the article, we have a GetSmart Quiz for you. If you correctly answer all the questions, you will earn 1,000 SATs (which will be sent to your CoinSmart account).

## In Brief:

• NFTs have become the hottest thing in crypto in a very short time.
• NFT = Non-Fungible Tokens
• “Non-fungibility” is the property that makes an asset unique and irreplaceable.
• ERC-721 is one of the most well-known token standards that imparts non-fungibility.
• ERC-721 defines the functions that help in “establishing token ownership” and “transfer of ownership.”

Everyone is talking about NFTs. By the way, that’s not an exaggeration. Like we said….EVERYONE. Here is a fun fact for you. NFTs are so mainstream that even SNL has covered it.

Earlier last month, news came out that NFTs have ballooned into a billion-dollar market. From Beeple selling his art for $69 million at the prestigious Christie’s auction house to Lindsay Lohan and Paris Hilton selling their digital art, NFTs have quickly become a household term. Marketplaces like OpenSea and Rarible have made NFTs more accessible than ever before.

So, what exactly are NFTs? Well…let’s start with the most obvious question first.

## What does “Non-Fungible” mean?

NFT stands for Non-Fungible Tokens. To understand what that’s supposed to mean, let’s first learn about the differences between fungibility and non-fungibility.

Fungibility is the property of an asset that makes it interchangeable with another asset of the same type. To understand this, let’s take an example of the most popular fungible asset – money. Imagine you borrowed a $100 note from your friend. When you pay her back, you don’t need to give her back that exact same note. You can pay her back with:

• Another $100 note.
• Two $50 notes.
• Ten $10 notes.

The reason being, fiat currency is fungible. You can easily replace one note with another (of the same type and value).
You can also break it down and pay it back in smaller denominations. This fungible property of currency allows the citizens of a country to unanimously accept it as a valuable and reliable entity. Every unit of that currency is worth the same, no matter which unit is being spent.

Now, let’s look at the other side of the equation. Imagine you borrowed your wife’s car for the weekend. What do you think will happen if you return it piece by piece? What if, instead of a car, you give her a seat, an engine, and a tailpipe? Well, if that does happen, you might as well get comfortable sleeping on the couch!

Cars are an example of a non-fungible asset. You can’t break one down into smaller units. In fact, you can’t even replace it with another car, even if it is of the same make and model. A non-fungible token gets its value from the very fact that it is unique and irreplaceable. NFTs can represent digital files such as art, audio, videos, items in video games, and other forms of creative work.

## Ethereum Token Standards: ERC-20 vs ERC-721

Now, if you have been around the crypto space, you must have heard the terms “ERC-20” and “ERC-721” thrown around quite a bit. “ERC” stands for “Ethereum Request for Comments,” while 20 and 721 are the numbers assigned to the requests. The ERCs are used to create a token standard that developers use in their smart contracts.

So, why is this necessary? Imagine that you are walking down a street with 4 different shops accepting 4 different currencies. Can you imagine how much of a nightmare that would be? Not only do you need to have all four currencies, but the shops need to find ways to work with each other and constantly keep an eye on currency exchange rates.

Now, extend the same logic to Ethereum. For Ethereum to be a healthy ecosystem, the dApps built on top of it need to be as interoperable as possible. We can’t have Project A with its own unique token and Project B with another unique token.
The tokens need to follow a set of rules to ensure that they can easily move in and out of different apps within the ecosystem. This is why the ERC-20 standard was adopted for fungible tokens and ERC-721 was adopted for non-fungible tokens.

## ERC-721 – The Non-Fungible Token Standard

In this section, let us run through the different features of the ERC-721 standard that allow it to create non-fungible tokens. The ownership functions defined in the standard are as follows:

• ownerOf(): The function keeps track of the token owner’s address. This way the NFT is continually mapped to its owner.
• approve(): The function permits another entity to transfer the token on the owner’s behalf. So, if you own an NFT, you can authorize a friend to take it by calling this function.
• takeOwnership(): This function acts as a withdrawal function wherein an outside party can call it to take a token out of another user’s account. When a user has been approved to take ownership of a token, they can call this function to withdraw it from the owner’s balance.
• transfer(): This function allows the owner to send the token directly to another user. Only the current owner of the token can execute it.
• tokenOfOwnerByIndex(): Think of this function as a database that keeps track of all the NFT tokens owned by a user.

As defined by ERC-721, the functions explained above fire one of these two events when they execute – Ownership Transfer and Transfer Approval.

### Event #1: Ownership Transfer

This event gets fired every time a token’s ownership changes hands. During this transfer of ownership, the event records the following data –

• The account that sent the token
• The account that received the token
• The token ID that was transferred

### Event #2: Transfer Approval

The second event is the transfer approval, wherein a user allows another user to take ownership of a particular token.
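Collecting the functions and events just listed, the early-draft ERC-721 interface could be sketched in Solidity roughly as follows. The interface name and exact signatures are assumptions for illustration; the final standard renamed several of these (for example, takeOwnership was superseded by transferFrom/safeTransferFrom):

```solidity
// Sketch of the early ERC-721 draft interface described in this article.
interface ERC721Draft {
    // Event #1: fired on every change of ownership
    event Transfer(address indexed from, address indexed to, uint256 tokenId);
    // Event #2: fired when an owner approves another account
    event Approval(address indexed owner, address indexed approved, uint256 tokenId);

    function ownerOf(uint256 tokenId) external view returns (address);
    function approve(address to, uint256 tokenId) external;
    function takeOwnership(uint256 tokenId) external;
    function transfer(address to, uint256 tokenId) external;
    function tokenOfOwnerByIndex(address owner, uint256 index) external view returns (uint256);
}
```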
## The Rise and Rise Of NFTs

While DeFi may be the most innovative space in crypto, NFTs have carved out a niche for themselves by appealing to the masses. Slowly but surely, “NFTs” has become the defining narrative of 2021, not just in the crypto space but overall. ERC-721 is the token standard that has made this revolution happen.

## Alright, it's quiz time!

Before you take the quiz, make sure that:

• You have a verified CoinSmart account (to get your reward if you successfully answer all the questions).
• You use the same email in the quiz that you use to register your CoinSmart account.

ERC-721 GetSmart Quiz: Answer All The Questions And Earn 1000 SATs
## Algebraic & Geometric Topology

### Vanishing theorems for representation homology and the derived cotangent complex

#### Abstract

Let $G$ be a reductive affine algebraic group defined over a field $k$ of characteristic zero. We study the cotangent complex of the derived $G$–representation scheme $\mathrm{DRep}_G(X)$ of a pointed connected topological space $X$. We use an (algebraic version of) unstable Adams spectral sequence relating the cotangent homology of $\mathrm{DRep}_G(X)$ to the representation homology $\mathrm{HR}_*(X,G) := \pi_* \mathcal{O}[\mathrm{DRep}_G(X)]$ to prove some vanishing theorems for groups and geometrically interesting spaces. Our examples include virtually free groups, Riemann surfaces, link complements in $\mathbb{R}^3$ and generalized lens spaces. In particular, for any finitely generated virtually free group $\Gamma$, we show that $\mathrm{HR}_i(B\Gamma, G) = 0$ for all $i > 0$. For a closed Riemann surface $\Sigma_g$ of genus $g \geq 1$, we have $\mathrm{HR}_i(\Sigma_g, G) = 0$ for all $i > \dim G$. The sharp vanishing bounds for $\Sigma_g$ actually depend on the genus: we conjecture that if $g = 1$, then $\mathrm{HR}_i(\Sigma_g, G) = 0$ for $i > \operatorname{rank} G$, and if $g \geq 2$, then $\mathrm{HR}_i(\Sigma_g, G) = 0$ for $i > \dim Z(G)$, where $Z(G)$ is the center of $G$. We prove these bounds locally on the smooth locus of the representation scheme $\mathrm{Rep}_G[\pi_1(\Sigma_g)]$ in the case of complex connected reductive groups. One important consequence of our results is the existence of a well-defined $K$–theoretic virtual fundamental class for $\mathrm{DRep}_G(X)$ in the sense of Ciocan-Fontanine and Kapranov (Geom. Topol. 13 (2009) 1779–1804). We give a new “Tor formula” for this class in terms of functor homology.

#### Article information

Source
Algebr. Geom. Topol., Volume 19, Number 1 (2019), 281-339.
Dates Revised: 20 August 2018 Accepted: 2 September 2018 First available in Project Euclid: 12 February 2019 https://projecteuclid.org/euclid.agt/1549940434 Digital Object Identifier doi:10.2140/agt.2019.19.281 Mathematical Reviews number (MathSciNet) MR3910582 Zentralblatt MATH identifier 07053575 #### Citation Berest, Yuri; Ramadoss, Ajay C; Yeung, Wai-kit. Vanishing theorems for representation homology and the derived cotangent complex. Algebr. Geom. Topol. 19 (2019), no. 1, 281--339. doi:10.2140/agt.2019.19.281. https://projecteuclid.org/euclid.agt/1549940434 #### References • M André, Homologie des algèbres commutatives, Grundl. Math. Wissen. 206, Springer (1974) • Y Berest, G Felder, S Patotski, A C Ramadoss, T Willwacher, Representation homology, Lie algebra cohomology and the derived Harish-Chandra homomorphism, J. Eur. Math. Soc. 19 (2017) 2811–2893 • Y Berest, G Felder, A Ramadoss, Derived representation schemes and noncommutative geometry, from “Expository lectures on representation theory” (K Igusa, A Martsinkovsky, G Todorov, editors), Contemp. Math. 607, Amer. Math. Soc., Providence, RI (2014) 113–162 • Y Berest, G Khachatryan, A Ramadoss, Derived representation schemes and cyclic homology, Adv. Math. 245 (2013) 625–689 • Y Berest, A Ramadoss, Stable representation homology and Koszul duality, J. Reine Angew. Math. 715 (2016) 143–187 • Y Berest, A C Ramadoss, W-k Yeung, Representation homology of spaces and higher Hochschild homology, preprint (2017) • A K Bousfield, D M Kan, Homotopy limits, completions and localizations, Lecture Notes in Math. 304, Springer (1972) • A K Bousfield, D M Kan, The homotopy spectral sequence of a space with coefficients in a ring, Topology 11 (1972) 79–106 • K S Brown, Cohomology of groups, Graduate Texts in Math. 87, Springer (1982) • H Cartan, S Eilenberg, Homological algebra, Princeton Univ. Press (1956) • I Ciocan-Fontanine, M Kapranov, Virtual fundamental classes via dg–manifolds, Geom. Topol. 
13 (2009) 1779–1804 • F R Cohen, M Stafa, A survey on spaces of homomorphisms to Lie groups, from “Configuration spaces” (F Callegaro, F Cohen, C De Concini, E M Feichtner, G Gaiffi, M Salvetti, editors), Springer INdAM Ser. 14, Springer (2016) 361–379 • M Culler, P B Shalen, Varieties of group representations and splittings of $3$–manifolds, Ann. of Math. 117 (1983) 109–146 • J Cuntz, D Quillen, Algebra extensions and nonsingularity, J. Amer. Math. Soc. 8 (1995) 251–289 • W G Dwyer, J Spaliński, Homotopy theories and model categories, from “Handbook of algebraic topology” (I M James, editor), North-Holland, Amsterdam (1995) 73–126 • P G Goerss, J F Jardine, Simplicial homotopy theory, Progr. Math. 174, Birkhäuser, Basel (1999) • W M Goldman, The symplectic nature of fundamental groups of surfaces, Adv. in Math. 54 (1984) 200–225 • W M Goldman, Representations of fundamental groups of surfaces, from “Geometry and topology” (J Alexander, J Harer, editors), Lecture Notes in Math. 1167, Springer (1985) 95–117 • K Habiro, On the category of finitely generated free groups, preprint (2016) • D K Harrison, Commutative algebras and cohomology, Trans. Amer. Math. Soc. 104 (1962) 191–204 • A Hatcher, Algebraic topology, Cambridge Univ. Press (2002) • S Iyengar, André–Quillen homology of commutative algebras, from “Interactions between homotopy theory and algebra” (L L Avramov, J D Christensen, W G Dwyer, M A Mandell, B E Shipley, editors), Contemp. Math. 436, Amer. Math. Soc., Providence, RI (2007) 203–234 • D Johnson, J J Millson, Deformation spaces associated to compact hyperbolic manifolds, from “Discrete groups in geometry and analysis” (R Howe, editor), Progr. Math. 67, Birkhäuser, Boston (1987) 48–106 • D M Kan, A combinatorial definition of homotopy groups, Ann. of Math. 67 (1958) 282–312 • D M Kan, On homotopy theory and c.s.s. groups, Ann. of Math. 68 (1958) 38–53 • D M Kan, A relation between $\mathrm{CW}$–complexes and free c.s.s. groups, Amer. J. Math. 
81 (1959) 512–528 • M Kapranov, Injective resolutions of $BG$ and derived moduli spaces of local systems, J. Pure Appl. Algebra 155 (2001) 167–179 • L Le Bruyn, Qurves and quivers, J. Algebra 290 (2005) 447–472 • J-L Loday, Cyclic homology, 2nd edition, Grundl. Math. Wissen. 301, Springer (1998) • A Lubotzky, A R Magid, Varieties of representations of finitely generated groups, Mem. Amer. Math. Soc. 336, Amer. Math. Soc., Providence, RI (1985) • J Lurie, Derived algebraic geometry, PhD thesis, Massachusetts Institute of Technology (2004) Available at https://search.proquest.com/docview/305095302 • J Lurie, Higher topos theory, Annals of Mathematics Studies 170, Princeton Univ. Press (2009) • J P May, Simplicial objects in algebraic topology, Van Nostrand Mathematical Studies 11, Van Nostrand, Princeton, NJ (1967) • T Pantev, B Toën, M Vaquié, G Vezzosi, Shifted symplectic structures, Publ. Math. Inst. Hautes Études Sci. 117 (2013) 271–328 • T Pirashvili, Hodge decomposition for higher order Hochschild homology, Ann. Sci. École Norm. Sup. 33 (2000) 151–179 • J P Pridham, Pro-algebraic homotopy types, Proc. Lond. Math. Soc. 97 (2008) 273–338 • J P Pridham, Constructing derived moduli stacks, Geom. Topol. 17 (2013) 1417–1495 • J P Pridham, Presenting higher stacks as simplicial schemes, Adv. Math. 238 (2013) 184–245 • D G Quillen, Homotopical algebra, Lecture Notes in Math. 43, Springer (1967) • D Quillen, Rational homotopy theory, Ann. of Math. 90 (1969) 205–295 • D Quillen, On the (co)homology of commutative rings, from “Applications of categorical algebra” (A Heller, editor), Amer. Math. Soc., Providence, RI (1970) 65–87 • R W Richardson, Commuting varieties of semisimple Lie algebras and algebraic groups, Compositio Math. 38 (1979) 311–327 • S Schwede, Spectra in model categories and applications to the algebraic cotangent complex, J. Pure Appl.
Algebra 120 (1997) 77–104 • P B Shalen, Representations of $3$–manifold groups, from “Handbook of geometric topology” (R J Daverman, R B Sher, editors), North-Holland, Amsterdam (2002) 955–1044 • A S Sikora, Character varieties, Trans. Amer. Math. Soc. 364 (2012) 5173–5208 • S Thomas, The functors $\bar {W}$ and $\mathrm{Diag}\circ \mathrm {Nerve}$ are simplicially homotopy equivalent, J. Homotopy Relat. Struct. 3 (2008) 359–378 • R W Thomason, Une formule de Lefschetz en $K$–théorie équivariante algébrique, Duke Math. J. 68 (1992) 447–462 • B Toën, Derived algebraic geometry, EMS Surv. Math. Sci. 1 (2014) 153–240 • B Toën, Derived algebraic geometry and deformation quantization, from “Proceedings of the International Congress of Mathematicians” (S Y Jang, Y R Kim, D-W Lee, I Ye, editors), volume II, Kyung Moon Sa, Seoul (2014) 769–792 • B Toën, G Vezzosi, Homotopical algebraic geometry, I: Topos theory, Adv. Math. 193 (2005) 257–372 • B Toën, G Vezzosi, Homotopical algebraic geometry, II: Geometric stacks and applications, Mem. Amer. Math. Soc. 902, Amer. Math. Soc., Providence, RI (2008) • C A Weibel, An introduction to homological algebra, Cambridge Studies in Advanced Mathematics 38, Cambridge Univ. Press (1994) • A Weil, Remarks on the cohomology of groups, Ann. of Math. 80 (1964) 149–157 • W-k Yeung, Representation homology and knot contact homology, PhD thesis, Cornell University (2017) Available at https://search.proquest.com/docview/1959333106
# “Wheel Theory”, Extended Reals, Limits, and “Nullity”: Can DNE limits be made to equal the element “$0/0$”?

"Wheels" are a little-known kind of algebraic structure: they modify the concept of a field or a ring in such a way that division by any element is possible, including division by zero, while also avoiding contradictions (such as $2 = 1$) in the algebra. They do this by essentially promoting and generalizing the "inversion" operator $x^{-1} = \frac{1}{x}$ to a primary operation, and modifying the distributive laws. Specifically, a wheel is an algebraic structure $(W, +, *, /)$ consisting of a set $W$, two binary operations $+$ and $*$, which are just addition and multiplication, and a third, unary operation $/$, which could be called "division" or "involution", satisfying:

1. $(W, +)$ and $(W, *)$ are commutative monoids.
2. $/$ is an involution compatible with multiplication, i.e. for all $a, b \in W$, $//a = a$ and $/(ab) = /a\,/b$.
3. A number of modified distributivity principles: for all $a, b, c \in W$,
$$ac + bc = (a + b)c + 0c$$
$$(a + bc)/b = a/b + c + 0b$$
$$(a + 0b)c = ac + 0b$$
$$/(a + 0b) = /a + 0b$$
4. $0 * 0 = 0$
5. Existence of an additive annihilator: for all $a \in W$, $0/0 + a = 0/0$.

Following the lead of a somewhat eccentric "computer scientist" who proposed some stuff along these lines but otherwise was kinda loopy, I call $0/0$ "nullity", and denote it $\Phi$. We can then form a wheel from the reals by forming the set $W = \mathbb{R} \cup \{ \infty, \Phi \}$, where we take $/0 = \infty$ and $\Phi = 0/0$. This infinity is unsigned, as in the real projective line. Addition and multiplication are defined similarly, except whenever an operation is "undefined", we define it to equal $\Phi$. In particular, we have $\infty + \infty = \Phi$, $0 * \infty = \Phi$, $0^0 = \Phi$, etc.
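To make that arithmetic table concrete, here is a small sketch of these operations in code. The tagged representation is only an illustration of the rules just stated, not any standard library:

```cpp
// The wheel on R ∪ {∞, Φ}: a real number, unsigned infinity, or nullity.
enum class Tag { Real, Inf, Nullity };

struct Wheel
{
    Tag tag;
    double x; // meaningful only when tag == Tag::Real
};

Wheel real(double v) { return {Tag::Real, v}; }
const Wheel INF{Tag::Inf, 0.0};
const Wheel PHI{Tag::Nullity, 0.0};

// unary division /a
Wheel inv(Wheel a)
{
    if (a.tag == Tag::Nullity) return PHI;   // /Φ = Φ
    if (a.tag == Tag::Inf) return real(0.0); // /∞ = 0
    if (a.x == 0.0) return INF;              // /0 = ∞
    return real(1.0 / a.x);
}

Wheel add(Wheel a, Wheel b)
{
    if (a.tag == Tag::Nullity || b.tag == Tag::Nullity) return PHI; // Φ absorbs
    if (a.tag == Tag::Inf && b.tag == Tag::Inf) return PHI;         // ∞ + ∞ = Φ
    if (a.tag == Tag::Inf || b.tag == Tag::Inf) return INF;
    return real(a.x + b.x);
}

Wheel mul(Wheel a, Wheel b)
{
    if (a.tag == Tag::Nullity || b.tag == Tag::Nullity) return PHI;
    bool aZero = (a.tag == Tag::Real && a.x == 0.0);
    bool bZero = (b.tag == Tag::Real && b.x == 0.0);
    if ((a.tag == Tag::Inf && bZero) || (b.tag == Tag::Inf && aZero))
        return PHI;                                                  // 0 * ∞ = Φ
    if (a.tag == Tag::Inf || b.tag == Tag::Inf) return INF;          // unsigned ∞
    return real(a.x * b.x);
}
```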
We can define a "topological wheel" to be a wheel where the set $W$ has topological structure and the functions $+$, $*$ and $/$ are continuous functions, in a manner analogous to the definitions of topological rings and fields. The topology put on the real wheel above would be like that of the projective line plus an isolated point $\Phi$. This is the inspiration for the term "wheel": you can draw this structure on a piece of paper as a circle with a point for $\Phi$ in the center (of course, you can put it anywhere not on the circle, but this is where the term comes from), and that will look like a cart wheel with an axle.

So in this space, we have that "undefined" operations like $\frac{0}{0}$ yield $\Phi$. Yet with limits, we still have that, say, $\lim_{x \rightarrow 0} \sin\left(\frac{1}{x}\right)$ DNE. So my question is: is it possible to put a topology on this wheel so that all functions have a limit, with those whose limit DNE in the usual topology having limit $\Phi$, and those whose limit exists in the usual topology having that same limit here?

If "no", what is the largest possible class of functions, including all those whose limits exist in the usual topology, for which the above can be done?

EDIT: Hmmmmmmm... I notice that the "wheel-shaped" topology actually doesn't give a topological wheel after all! In particular, the map $x \mapsto x + \infty$ is not continuous in this topology. Note that the preimage of the open set $\{ \Phi \}$ (which is open since $\Phi$ is an isolated point and is actually in fact clopen) pulled back through this map is not $\{ \Phi \}$ but $\{ \infty, \Phi \}$, since $\infty + \infty = \Phi$. Yet this set is not open, but closed, being the union of the closed sets $\{ \infty \}$ and $\{ \Phi \}$, and $\{ \infty \}$ is not a connected component, so it can't be clopen and must be only closed.
So this raises another question: is there even any topology on this wheel at all which makes it into a topological wheel and such that real limits are preserved? If so, does such a topology automatically give "DNE" (in the projective reals) limits $\Phi$ as a value?

• Wheels are cool. Too bad they're so little-known. – Joao Oct 28 '14 at 4:13
• Quick question: where do you get $0a=0$ from? It doesn't seem to follow from your definition of $W$. – Nate Diamond May 12 '16 at 0:47
• @Nate Diamond: Good catch. I'll have to look at that more closely. It may be that $\Phi = 0/0$ does not annihilate everything multiplicatively in general for arbitrary wheels. It does for $\mathbb{R}^{\odot}$, though. (That is the wheel of projective reals with nullity I just described above. Given this is such an obscure topic, I don't think there's any standard symbol.) – The_Sympathizer May 12 '16 at 3:19
• (E.g. there could be other elements which multiply with 0 in other ways, and then how they react to $\Phi$ in a multiplication could be different. But if an element multiplies with 0 to give either 0 or $\Phi$ then multiplying it with $\Phi$ will give $\Phi$. It is easy to check that if we add elements $\Phi = 0/0$ and $\infty = 1/0 = /0$ to $\mathbb{R}$ with suitably defined unary division and nothing additional to that, then these new elements must behave that way.) – The_Sympathizer May 12 '16 at 3:22
• One way you may go about attempting to prove it is via the question: Does $\phi + \phi = \phi$? Further, is multiplication equal to summation? If so, you could say $a\phi = \sum_{i=0}^a \phi = \phi + \phi + ... + \phi = \phi$. For $a = 0$ it's trivial because $0\phi = 0 * 0 * /0 = 0/0 = \phi$. Since you're doing it over the reals, you'll have to take into account non-integral $a$, along with creating a definition for summation over $/a$ (to maintain your continuously defined 'division'), but that seems potentially doable as well.
– Nate Diamond May 12 '16 at 17:29 First of all I'd like to say that I'm very amused by the fact that you and I independently decided to call wheel theory's $0/0$ 'nullity' after James Anderson's 'transreal arithmetic'. I'm fairly sure that if you take the real projective line topology on $\mathbb{R}\cup\{\infty\}$ and append $\Phi$ as an open extension topology (i.e. the open sets are precisely the pre-existing open sets in $\mathbb{R}\cup \{\infty\}$ and the entire space $\mathbb{R}\cup\{\infty,\Phi\}=\odot_\mathbb{R}$) then you get a topological wheel. Furthermore I think this may be the only way to extend the ordinary real projective line to get a topological wheel (largely because of reasoning similar to that in your edit), but I haven't proved that. This topology is somewhat reminiscent of generic points in the Zariski topology on the spectrum of a ring in that nullity is 'next to' every other number, but it's not exactly the same. Also it's somewhat natural in that it's the quotient topology of $\mathbb{R}^2$ under the equivalence relation $(a,b)\sim(c,d)$ iff $(a,b)$ and $(c,d)$ are not $(0,0)$ and $(a,b)=(e\cdot c,e \cdot d)$ for some nonzero $e$, which is just the construction of the real projective line without deleting $(0,0)$. As far as the limits are concerned you almost get what you want. Every sequence converges to $\Phi$ and at most one other point. The non-$\Phi$ limit point exists iff the sequence converges in the real projective line topology and is equal to that limit. Furthermore I think that this may be the best that you can do. 
If you consider any sequence $a_n\in\mathbb{R}$ that normally does not converge, but in your topological wheel converges to $\Phi$, then the sequence $(a_n, -a_n)$ converges in the product topology $\odot_\mathbb{R} \times \odot_\mathbb{R}$ to $(\Phi,\Phi)$; addition is a continuous map $+ : \odot_\mathbb{R} \times \odot_\mathbb{R} \rightarrow \odot_\mathbb{R}$, therefore the sequence $a_n - a_n=0$ must converge to $\Phi+\Phi=\Phi$ as well as to the obvious limit of $0$. • Wow. What kind of space is $\odot_{\mathbb{R}}$, anyways? That is, is it homeomorphic to some kind of more familiar object, like how $\mathbb{RP}^1 \cong \mathbb{R} \cup \{ \infty \}$ itself is homeomorphic to a circle? – The_Sympathizer Jul 30 '15 at 3:11 • @mike4ty4: It's not Hausdorff, so it probably doesn't look like any space you've thought about in the last half hour. $\Phi$ could be said to be a "dense point" - every other point in $\mathbb {RP}^1 = S^1$ is "arbitrarily close" to it. So it's like a circle with one extra point that's everywhere dense. – user98602 Jul 30 '15 at 3:15 • @Mike Miller: So could one think of it as being like the circle but with an extra point that's kind of "smeared out all over the circle" (while keeping in mind the "smeary" thing is a single point, not many points) so as to be connected up with every other point? – The_Sympathizer Jul 30 '15 at 3:17 • Sure, that's good. My personal visual would be that $\Phi$ is the circle itself, considered as a single point. – user98602 Jul 30 '15 at 3:17 • @mike4ty4: That picture also fits with the comparison to generic points in Zariski topologies, because in some sense the generic point corresponding to a curve 'is the curve'. – James Hanson Jul 30 '15 at 3:26 Personally, I tend to think that “nullity” is exactly the wrong name for 0/0, as “null” means “nothing” and 0/0 is anything but. Rather, I'd call it “omnity” after the fact that 0/0 is usually left undefined because it could literally be anything. 
My personal inclination would be to also use ⊙ to denote 0/0 precisely because that's also the preferred symbol of the wheel, thus reinforcing the notion that the element is a stand-in for “could be anything”; but I can understand the problems of conflation they would cause. Alternately, the symbol could be an underscore “_”, denoting the “fill in the blank” nature of the element. My only substantive difference with Wheel Theory has to do with 0^0: as I understand it, the limit as you approach 0^0 is the same as 0/0; but the value of 0^0 itself should be 1, for much the same reason why 0! is 1: you're dealing with an empty product, which is 1. As I see it, the most important contribution of Wheel Theory is an explicit unary operator for multiplicative inversion, denoted by a prefixed “/”. Prior to Wheel Theory, I couldn't find a notation for inversion that wasn't some sort of binary operator, whether it be “1/x” or “x^{-1}”. It wasn't really all that important until you got to things like the Riemann Sphere, where we started getting examples of “inverses” that didn't give a product of 1. But at that point, it becomes quite important.
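To make the arithmetic debated in this thread concrete, here is a small sketch of the wheel $\odot_\mathbb{R}$ (projective reals with nullity) in Python. This is my own toy encoding, not any standard library: the tags `INF` and `PHI` are just illustrative names for $\infty = /0$ and $\Phi = 0/0$. It implements the cases discussed above: $0\cdot\infty = \Phi$, $\infty+\infty = \Phi$, and $\Phi$ absorbing every operation.

```python
# Toy model of the wheel of projective reals with nullity.
# Elements are Python floats plus two symbolic tags (my own encoding).
INF, PHI = "inf", "phi"   # INF = 1/0 = /0,  PHI = 0/0

def add(x, y):
    if PHI in (x, y): return PHI           # PHI absorbs addition
    if x == INF and y == INF: return PHI   # inf + inf = 0/0
    if INF in (x, y): return INF           # finite + inf = inf
    return x + y

def mul(x, y):
    if PHI in (x, y): return PHI           # PHI absorbs multiplication
    if INF in (x, y):
        other = y if x == INF else x
        return PHI if other == 0 else INF  # 0 * inf = 0/0, else inf
    return x * y

def inv(x):                                # the unary division /x
    if x == PHI: return PHI
    if x == INF: return 0.0
    if x == 0:   return INF
    return 1 / x

print(mul(0.0, inv(0.0)))   # phi   (0 * /0 = 0/0)
print(add(INF, INF))        # phi
print(inv(INF))             # 0.0
```

Note that `mul(0.0, x)` is `0.0` only for finite `x`, which matches the comment above: in this particular wheel, multiplying an element by $\Phi$ always yields $\Phi$, but $0a = 0$ need not hold once $\infty$ and $\Phi$ are adjoined.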
# String Theory: Volume 1, An Introduction to the Bosonic String

Pendulum X and D have equal length and consequently equal natural frequency. The newly appointed Nazi Rector of the University of Graz persuaded Schrodinger to make a ‘repentant confession’. When walking, if you take steps more often, each step must make you travel less distance if you are to continue walking at the same speed. The DNA antenna in our cells’ energy production centers (mitochondria) assumes the shape of what is called a super-coil. Pages: 426 Publisher: Cambridge University Press; 1 edition (October 13, 1998) ISBN: B00AKE1RLW So does this persuade proponents of Mystical Physics? In the reality of quantum common sense, this fact — when we recognize what is obvious, the fact that almost all events, now and in the past, occur without being observed by humans — should be seen as evidence for the irrelevance of “consciousness” and as a logical reason to abandon claims that we play an important role in “creating reality.” It should be seen as a logical reason to be humble about the importance of humans, because the universe does not need us to “observe things and make them happen.” { Do scientists create reality? - silly postmodern claims! } In the delusion of quantum mysticism, this obvious reason for humility is rejected. In Pierre-Simon Laplace’s day, probability was thought of as a subjective statement — you don’t know everything, but you can manage by quantifying your knowledge. 
But sometime in the late 1800s and early 1900s, probabilities started cropping up in ways that appeared objective. People were using statistical methods to derive things that could be measured in the laboratory — things like heat. It can escape a volume, because the charges are moving, but if it doesn't escape, well, the charge remains the same. So the divergence of $j$ in this case reduces to $\partial j/\partial x + \partial \rho/\partial t = 0$. So, perhaps in equations, it's easier to think of interpretation. Consider the real line and the points $a$ and $b$, with $a$ less than $b$. If the displacements vary with $x$, then there will be density changes. The sign is also right: if the displacement $\chi$ increases with $x$, so that the air is stretched out, the density must go down. We now need the third equation, which is the equation of the motion produced by the pressure. If we know the relation between the force and the pressure, we can then get the equation of motion. Consider this example: the surface of an ocean, lake, pond or other body of water. When a pebble is thrown, a wave front may travel in a straight-line direction, or the waves may be circular waves that originate from the point where the disturbance occurs in the pond. More explicitly, the superposition principle ($\psi = \sum_n a_n \psi_n$) of quantum physics dictates that for a wave function $\psi$, a measurement will result in a state of the quantum system with one of the $m$ possible eigenvalues $f_n$, $n=1,2,\dots,m$, of the operator $\hat{F}$, expanded in the space of the eigenfunctions $\psi_n$. 
Covers a range of solid-state phenomena that can be understood within an independent particle description. Topics include: chemical versus band-theoretical description of solids, electronic band structure calculation, lattice dynamics, transport phenomena and electrodynamics in metals, optical properties, semiconductor physics. (F) Deals with collective effects in solids arising from interactions between constituents.
# How much can value learning be disentangled?

post by Stuart_Armstrong · 2019-01-29T14:17:00.601Z · score: 24 (7 votes) · LW · GW · 30 comments

In the context of whether the definition of human values can be disentangled from the process of approximating/implementing that definition, David asks me [LW · GW]: • But I think it's reasonable to assume (within the bounds of a discussion) that there is a non-terrible way (in principle) to specify things like "manipulation". So do you disagree? I think it's a really good question, and its answer is related to a lot of relevant issues, so I put this here as a top-level post. My current feeling is, contrary to my previous intuitions, that things like "manipulation" might not be possible to specify in a way that leads to useful disentanglement.

## Why manipulate?

First of all, we should ask why an AI would be tempted to manipulate us in the first place. It may be that it needs us to do something for it to accomplish its goal; in that case it is trying to manipulate our actions. Or maybe its goal includes something that cashes out as our mental states; in that case, it is trying to manipulate our mental state directly. The problem is that any reasonable friendly AI would have our mental states as part of its goal - it would at least want us to be happy rather than miserable. And (almost) any AI that wasn't perfectly indifferent to our actions would be trying to manipulate us just to get its goals accomplished. So manipulation is to be expected from most AI designs, friendly or not.

## Manipulation versus explanation

Well, since the urge to manipulate is expected to be present, could we just rule it out? The problem is that we need to define the difference between manipulation and explanation. Suppose I am fully aligned/corrigible/nice or whatever other properties you might desire, and I want to inform you of something important and relevant. 
In doing so, especially if I am more intelligent than you, I will simplify, I will omit irrelevant details, I will omit arguably relevant details, I will emphasise things that help you get a better understanding of my position, and de-emphasise things that will just confuse you. And these are exactly the same sorts of behaviours that a smart manipulator would do. Nor can we define the difference as whether the AI is truthful or not. We want human understanding of the problem, not truth. It's perfectly possible to manipulate people while telling them nothing but the truth. And if the AI structures the order in which it presents the true facts, it can manipulate people while presenting the whole truth as well as nothing but the truth. It seems that the only difference between manipulation and explanation is whether we end up with a better understanding of the situation at the end. And measuring understanding is very subtle. And even if we do it right, note that we have now motivated the AI to... aim for a particular set of mental states. We are rewarding it for manipulating us. This is contrary to the standard understanding of manipulation, which focuses on the means, not the end result.

## Bad behaviour and good values

Does this mean that the situation is completely hopeless? No. There are certain manipulative practices that we might choose to ban. Especially if the AI is limited in capability at some level, this would force it to follow behaviours that are less likely to be manipulative. Essentially, there is no boundary between manipulation and explanation, but there is a difference between extreme manipulation and explanation, so ruling out the first can help (or maybe not). The other thing that can be done is to ensure that the AI has values close to ours. The closer the values of the AI are to us, the less manipulation it will need to use, and the less egregious the manipulation will be. 
It might be that, between partial value convergence and ruling out specific practices (and maybe some physical constraints), we may be able to get an AI that is very unlikely to manipulate us much. Incidentally, I feel the same about low-impact approaches. The full generality problem, an AI that is low impact but value-agnostic, I think is impossible. But if the values of the AI are better aligned with us, and more physically constrained, then low impact becomes easier to define. comment by Jan_Kulveit · 2019-01-31T11:42:44.401Z · score: 9 (5 votes) · LW · GW Not only is it hard to disentangle manipulation and explanation; it is actually difficult to disentangle even manipulation and just asking the human about preferences (like here). Manipulation via incorrect "understanding" is IMO a somewhat easier problem (understanding can possibly be tested by something like simulating the human's capacity to predict). Manipulation via messing with our internal multi-agent system of values seems subtle and harder. (You can imagine an AI roughly in the shape of Robin Hanson, explaining to one part of the mind how some of the other parts work. Or just drawing the attention of consciousness to some sub-agents and not others.) My impression is that in full generality it is unsolvable, but something like starting with an imprecise model of approval / utility function learned via ambitious value learning and restricting explanations/questions/manipulation by that may work. comment by Stuart_Armstrong · 2019-02-01T13:47:55.946Z · score: 4 (2 votes) · LW · GW My impression is that in full generality it is unsolvable, but something like starting with an imprecise model of approval / utility function learned via ambitious value learning and restricting explanations/questions/manipulation by that may work. Yep. As so often, I think these things are not fully value agnostic, but don't need full human values to be defined. 
comment by capybaralet · 2019-01-31T04:58:48.703Z · score: 3 (2 votes) · LW · GW So I want to emphasize that I'm only saying it's *plausible* that *there exists* a specification of "manipulation". This is my default position on all human concepts. I also think it's plausible that there does not exist such a specification, or that the specification is too complex to grok, or that there end up being multiple conflicting notions we conflate under the heading of "manipulation". See this post [LW · GW] for more. Overall, I understand and appreciate the issues you're raising, but I think all this post does is show that naive attempts to specify "manipulation" fail; I think it's quite difficult to argue compellingly that no such specification exists ;) "It seems that the only difference between manipulation and explanation is whether we end up with a better understanding of the situation at the end. And measuring understanding is very subtle." ^ Actually, I think "ending up with a better understanding" (in the sense I'm reading it) is probably not sufficient to rule out manipulation; what I mean is that I can do something which actually improves your model of the world, but leads you to follow a policy with worse expected returns. A simple example would be if you are doing Bayesian updating and your prior over returns for two bandit arms is P(r|a_1) = N(1,1), P(r|a_2) = N(2, 1), while the true returns are 1/2 and 2/3 (respectively). So your current estimates are optimistic, but they are ordered correctly, and so induce the optimal (greedy) policy. Now if I give you a bunch of observations of a_2, I will be giving you true information, that will lead you to learn, correctly and with high confidence, that the expected reward for a_2 is ~2/3, improving your model of the world. But since you haven't updated your estimate for a_1, you will now prefer a_1 to a_2 (if acting greedily), which is suboptimal. 
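For what it's worth, this bandit scenario is easy to simulate. A quick sketch, using the numbers from the example above and a conjugate Gaussian update (known observation noise variance 1); the arm names and variable names are just illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior estimates and true expected returns from the example
# (observation noise has variance 1).
prior_mean = {"a1": 1.0, "a2": 2.0}
prior_var  = {"a1": 1.0, "a2": 1.0}
true_mean  = {"a1": 0.5, "a2": 2/3}

def greedy(est):
    return max(est, key=est.get)

est = dict(prior_mean)
assert greedy(est) == "a2"   # before the "help": the optimal arm is chosen

# The "helper" supplies many perfectly honest observations of a_2 only.
n = 1000
samples = rng.normal(true_mean["a2"], 1.0, size=n)

# Conjugate Gaussian posterior mean for a_2 (noise variance 1):
post_var = 1 / (1 / prior_var["a2"] + n)
est["a2"] = post_var * (prior_mean["a2"] / prior_var["a2"] + samples.sum())

print(round(est["a2"], 2))   # ~0.67: a genuinely better model of a_2...
print(greedy(est))           # ...yet now the *worse* arm a_1 is preferred
```

The agent's beliefs about a_2 improve (the estimate moves from 2 to roughly 2/3), but the greedy policy switches to a_1, whose true return of 1/2 is lower.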
So overall I've informed you with true information, but disadvantaged you nonetheless. I'd argue that if I did this intentionally, it should count as a form of manipulation. comment by Stuart_Armstrong · 2019-01-31T09:04:05.184Z · score: 2 (1 votes) · LW · GW Thanks for writing that post; have you got much in terms of volunteers currently? comment by capybaralet · 2019-01-31T19:41:05.045Z · score: 3 (2 votes) · LW · GW Haha no not at all ;) I'm not actually trying to recruit people to work on that, just trying to make people aware of the idea of doing such projects. I'd suggest it to pretty much anyone who wants to work on AI-Xrisk without diving deep into math or ML. comment by Stuart_Armstrong · 2019-02-01T13:48:27.256Z · score: 2 (1 votes) · LW · GW Shame :-( comment by John_Maxwell_IV · 2019-01-30T00:25:00.467Z · score: 3 (2 votes) · LW · GW It seems that the only difference between manipulation and explanation is whether we end up with a better understanding of the situation at the end. And measuring understanding is very subtle. And even if we do it right, note that we have now motivated the AI to... aim for a particular set of mental states. We are rewarding it for manipulating us. This is contrary to the standard understanding of manipulation, which focuses on the means, not the end result. It sounds like by the definitions you're using, a teacher who aims to help a student end up with a better understanding of the situation at the end is "manipulating" the student. Is that right? I'm not persuaded measuring understanding is "very subtle". It seems like teachers manage to do it alright. comment by Stuart_Armstrong · 2019-01-30T18:02:46.537Z · score: 5 (3 votes) · LW · GW Certain groups (most prominently religious ones) see secular education systems as examples of indoctrination. I'm not saying that it's impossible to distinguish manipulation from coercion, just that we have to use part of our values when doing the judgement. 
comment by John_Maxwell_IV · 2019-01-31T07:09:14.401Z · score: 4 (2 votes) · LW · GW Hm, I understood the traditional Less Wrong view to be something along the lines of: there is truth about the world, and that truth is independent of your values. Wanting something to be true won't make it so. Whereas I'd expect a postmodernist to say something like: the Christians have their truth, the Buddhists have their truth, and the Atheists have theirs. Whose truth is the "real" truth comes down to the preferences of the individual. Your statement sounds more in line with the postmodernist view than the Less Wrong one. This matters because if the Less Wrong view of the world is correct, it's more likely that there are clean mathematical algorithms for thinking about and sharing truth that are value-neutral (or at least value-orthogonal, e.g. "aim to share facts that the student will think are maximally interesting or surprising". Note that this doesn't necessarily need to be implemented in a way that a "fact" which triggers an epileptic fit and causes the student to hit the "maximally interesting" button will be selected for sharing. If I have a rough model of the user's current beliefs and preferences, I could use that to estimate the VoI of various bits of information to the user and use that as my selection criterion. Point being that our objective doesn't need to be defined in terms of "aiming for a particular set of mental states".) comment by Stuart_Armstrong · 2019-01-31T09:16:50.701Z · score: 6 (3 votes) · LW · GW Humans have beliefs and values twisted together in all kinds of odd ways. In practice, increasing our understanding tends to go along with having a more individualist outlook, a greater power to impact the natural world, less concern about difficult-to-measure issues, and less respect for traditional practices and group identities (and often the creation of new group identities, and sometimes new traditions). 
Now, I find those changes to be (generally) positive, and I'd like them to be more common. But these are value changes, and I understand why people with different values could object to them. comment by John_Maxwell_IV · 2019-01-31T10:13:30.696Z · score: 2 (1 votes) · LW · GW Your original argument, as I understood it, was something like: Explanation aims for a particular set of mental states in the student, which is also what manipulation does, so therefore explanation can't be defined in a way that distinguishes it from manipulation. I pushed back on that. Now you're saying that explanation tends to produce side effects in the listener's values. Does this mean you're allowing the possibility that explanation can be usefully defined in a way that distinguishes it from manipulation? BTW, computer security researchers distinguish between "reject by default" (whitelisting) and "accept by default" (blacklisting). "Reject by default" is typically more secure. I'm more optimistic about trying to specify what it means to explain something (whitelisting) than what it means to manipulate someone in a way that's improper (blacklisting). So maybe we're shooting at different targets. Tying all of this back to FAI... you say you find the value changes that come with greater understanding to be (generally) positive and you'd like them to be more common. I'm worried about the possibility that AGI will be a global catastrophic risk. I think there are good arguments that by default, AGI will be something which is not positive. Maybe from a triage point of view, it makes sense to focus on minimizing the probability that AGI is a global catastrophic risk, and worry about the prevention of things that we think are likely to be positive once we're pretty sure the global catastrophic risk aspect of things has been solved? 
In Eliezer's CEV paper, he writes: In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted. I haven't seen anyone on Less Wrong argue against CEV as a vision for how the future of humanity should be determined. And CEV seems to involve having the future be controlled by humans who are more knowledgable than current humans in some sense. But maybe you're a CEV skeptic? comment by Stuart_Armstrong · 2019-02-01T13:51:29.291Z · score: 2 (1 votes) · LW · GW I haven't seen anyone on Less Wrong argue against CEV as a vision for how the future of humanity should be determined. Well, now you've seen one ^_^ : https://www.lesswrong.com/posts/vgFvnr7FefZ3s3tHp/mahatma-armstrong-ceved-to-death I've been going on about the problems with CEV (specifically with extrapolation) for years. This post could also be considered a CEV critique: https://www.lesswrong.com/posts/WeAt5TeS8aYc4Cpms/values-determined-by-stopping-properties comment by Stuart_Armstrong · 2019-02-01T13:45:50.128Z · score: 2 (1 votes) · LW · GW possibility that explanation can be usefully defined in a way that distinguishes it from manipulation? I think explanation can be defined (see https://agentfoundations.org/item?id=1249 ). I'm not confident "explanation with no manipulation" can be defined. comment by Davidmanheim · 2019-02-11T08:38:43.716Z · score: 5 (3 votes) · LW · GW This matters because if the Less Wrong view of the world is correct, it's more likely that there are clean mathematical algorithms for thinking about and sharing truth that are value-neutral (or at least value-orthogonal, e.g. "aim to share facts that the student will think are maximally interesting or surprising". 
I don't think this is correct - it misses the key map-territory distinction in the human mind. Even though there is "truth" in an objective sense, there is no necessity that the human mind can think about or share that truth. Obviously we can say that experientially we have something in our heads that correlates with reality, but that doesn't imply that we can think about truth without implicating values. It also says nothing about whether we can discuss truth without manipulating the brain to represent things differently - and all imperfect approximations require trade-offs. If you want to train the brain to do X, you're implicitly prioritizing some aspect of the brain's approximation of reality over others. comment by TAG · 2019-02-11T08:56:47.886Z · score: 4 (3 votes) · LW · GW Yep. There are a number of intelligent agents, each with their own subset of true beliefs. Since agents have finite resources, they cannot learn everything, and so their subset of true beliefs must be random or guided by some set of goals or values. So truth is entangled with value in that sense, if not in the sense of wishful thinking. Also, there is no evidence of any kind of One Algorithm To Rule Them All. It's in no way implied by the existence of objective reality, and everything that has been exhibited along those lines has turned out to be computationally intractable. comment by John_Maxwell_IV · 2019-02-12T06:25:41.359Z · score: 2 (1 votes) · LW · GW comment by Stuart_Armstrong · 2019-02-12T10:59:20.880Z · score: 7 (2 votes) · LW · GW That they make some sensible points, but they're wrong when they push them too far (and that they are mixing factual truths with preferences a lot). Christians do have their own "truths", if we interpret these truths as values, which is what they generally are. "It is a sin to engage in sex before marriage" vs "(some) sex can lead to pregnancy". If we call both of these "truths", then we have a confusion. 
comment by G Gordon Worley III (gworley) · 2019-06-12T22:05:39.236Z · score: 2 (1 votes) · LW · GW Right, both of these views on truth, traditional rationality and postmodernism, result in theories of truth that don't quite line up with what we see in the world but in different ways. The traditional rationality view fails to account for the fact that humans judge truth and we have no access to the view from nowhere, so it's right that traditional rationality is "wrong" in the sense that it incorrectly assumes it can gain privileged access to the truth of claims to know which ones are facts and which ones are falsehoods. The postmodernist view makes an opposite and only slightly less equal mistake by correctly noticing that humans judge truth but then failing to adequately account for the ways those judgements are entangled with a shared reality. The way through is to see that both there is something shared out there that there can in theory be a fact of the matter of and also realizing that we can't directly ascertain those facts because we must do so across the gap of (subjective) experience. As always, I say it comes back to the problem of the criterion [LW · GW] and our failure to adequately accept that it demands we make a leap of faith, small though we may manage to make it. comment by capybaralet · 2019-01-31T19:58:48.431Z · score: 1 (1 votes) · LW · GW IMO, VoI is also not a sufficient criteria for defining manipulation... I'll list a few problems I have with it, OTTMH: 1) It seems to reduce it to "providing misinformation, or providing information to another agent that is not maximally/sufficiently useful for them (in terms of their expected utility)". An example (due to Mati Roy) of why this doesn't seem to match our intuition is: what if I tell someone something true and informative that serves (only) to make them sadder? That doesn't really seem like manipulation (although you could make a case for it). 
2) I don't like the "maximally/sufficiently" part; maybe my intuition is misleading, but manipulation seems like a qualitative thing to me. Maybe we should just constrain VoI to be positive? 3) Actually, it seems weird to talk about VoI here; VoI is prospective and subjective... it treats an agent's beliefs as real and asks how much value they should expect to get from samples or perfect knowledge, assuming these samples or the ground truth would be distributed according to their beliefs; this makes VoI strictly non-negative. But when we're considering whether to inform an agent of something, we might recognize that certain information we'd provide would actually be net negative (see my top level comment for an example). Not sure what to make of that ATM... comment by Davidmanheim · 2019-02-11T08:42:46.557Z · score: 1 (1 votes) · LW · GW re: #2, VoI doesn't need to be constrained to be positive. If in expectation you think the information will have a net negative impact, you shouldn't get the information. re: #3, of course VoI is subjective. It MUST be, because value is subjective. Spending 5 minutes to learn about the contents of a box you can buy is obviously more valuable to you than to me. Similarly, if I like chocolate more than you, finding out if a cake has chocolate is more valuable for me than for you. The information is the same, the value differs. comment by capybaralet · 2019-02-11T22:53:59.141Z · score: 1 (1 votes) · LW · GW FWICT, both of your points are actually responses to be point (3). RE "re: #2", see: https://en.wikipedia.org/wiki/Value_of_information#Characteristics RE "re: #3", my point was that it doesn't seem like VoI is the correct way for one agent to think about informing ANOTHER agent. You could just look at the change in expected utility for the receiver after updating on some information, but I don't like that way of defining it. 
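As an aside on point 3 above: prospective VoI, computed under the agent's own beliefs, is indeed strictly non-negative, because extra information can only improve the inner maximisation. A toy sketch with made-up numbers (the states, actions, and payoffs here are purely illustrative):

```python
# Toy value-of-perfect-information calculation, evaluated under the
# agent's own prior beliefs (hypothetical numbers).
p = {"rain": 0.3, "sun": 0.7}                       # agent's prior
u = {("umbrella", "rain"): 1, ("umbrella", "sun"): 0,
     ("none", "rain"): -5,   ("none", "sun"): 2}    # utilities

def eu(action, belief):
    # expected utility of an action under a belief over states
    return sum(belief[s] * u[(action, s)] for s in belief)

# Without information: pick one action now, maximising prior EU.
best_without = max(eu(a, p) for a in ("umbrella", "none"))

# With perfect information: pick the best action in each state,
# averaged over the prior.
best_with = sum(p[s] * max(u[(a, s)] for a in ("umbrella", "none"))
                for s in p)

voi = best_with - best_without
print(round(best_without, 2), round(best_with, 2), round(voi, 2))
```

Since `best_with` maximises inside the expectation and `best_without` outside, `voi` can never be negative under the agent's own beliefs; the point in the comments is that this says nothing about what happens when the information is evaluated against the true distribution rather than the prior.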
comment by TurnTrout · 2019-01-29T17:06:20.863Z · score: 2 (1 votes) · LW · GW Incidentally, I feel the same about low-impact approaches. The full generality problem, an AI that is low impact but value-agnostic, I think is impossible. My (admittedly hazy) recollection of our last conversation is that your concerns were that “value agnostic, low impact, and still does stuff” is impossible. Can you expand on what you mean by value agnostic here, and why you think we can’t even have that and low impact? comment by Stuart_Armstrong · 2019-01-30T18:06:23.405Z · score: 2 (1 votes) · LW · GW This is based more on experience than on a full formal argument (yet). Take an AI that, according to our preferences, is low impact and still does stuff. Then there is a utility function for which that "does stuff" is the single worst and highest impact thing the AI could have done (you just trivially define a utility function that only cares about that "stuff"). Now, that's a contrived case, but my experience is that problems like that come up all the time in low impact research, and that we really need to include - explicitly or implicitly - a lot of our values/preferences directly, in order to have something that satisfies low impact. comment by TurnTrout · 2019-01-30T18:41:14.242Z · score: 2 (1 votes) · LW · GW This seems to prove too much; the same argument proves friendly behavior can’t exist ever, or that including our preferences directly is (literally) impossible. The argument doesn’t show that that utility has to be important to / considered by the impact measure. Plus, low impact doesn’t have to be robust to adversarially chosen input attainable utilities - we get to choose them. Just choose the “am I activated” indicator utility and AUP seems to do fine, modulo open questions raised in the post and comments. 
comment by Stuart_Armstrong · 2019-01-30T20:31:59.835Z · score: 2 (1 votes) · LW · GW This seems to prove too much; the same argument proves friendly behavior can’t exist ever, or that including our preferences directly is (literally) impossible. ? I don't see that. What's the argument? (If you want to say that we can't define friendly behaviour without using our values, then I would agree ^_^ but I think you're trying to argue something else). comment by TurnTrout · 2019-01-30T21:43:05.499Z · score: 3 (2 votes) · LW · GW Take a friendly AI that does stuff. Then there is a utility function for which that "does stuff" is the single worst thing the AI could have done. The fact that no course of action is universally friendly doesn’t mean it can’t be friendly for us. As I understand it, the impact version of this argument is flawed in the same way (but less blatantly so): something being high impact according to a contrived utility function doesn’t mean we can’t induce behavior that is, with high probability, low impact for the vast majority of reasonable utility functions. comment by Stuart_Armstrong · 2019-01-30T22:12:42.509Z · score: 2 (1 votes) · LW · GW The fact that no course of action is universally friendly doesn’t mean it can’t be friendly for us. Indeed, by "friendly AI" I meant "an AI friendly for us". So yes, I was showing a contrived example of an AI that was friendly, and low impact, from our perspective, but that was not, as you said, universally friendly (or universally low impact). something being high impact according to a contrived utility function doesn’t mean we can’t induce behavior that is, with high probability, low impact for the vast majority of reasonable utility functions. In my experience so far, we need to include our values, in part, to define "reasonable" utility functions. 
comment by TurnTrout · 2019-01-30T22:37:51.936Z · score: 2 (1 votes)

> In my experience so far, we need to include our values, in part, to define "reasonable" utility functions.

It seems that an extremely broad set of input attainable functions suffices to capture the "reasonable" functions with respect to which we want to be low impact. For example, "remaining on", "reward linear in how many blue pixels are observed each time step", etc. All thanks to instrumental convergence and opportunity cost.

comment by avturchin · 2019-01-29T21:25:05.342Z · score: 1 (1 votes)

Even a zero-impact AI which is limited to pure observation may not be acceptable for many people (not everybody wants his or her sex life to be recorded and analysed).

comment by TurnTrout · 2019-01-29T23:30:14.484Z · score: 2 (1 votes)

If the AI isn't just fed all the data by default (i.e. via a camera already at the opportune location), taking steps to observe is (AUP-)impactful. I think you're right that agents with small impact allowances can still violate values.
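For readers unfamiliar with the AUP penalty being discussed, the idea can be sketched in a few lines. This is my own paraphrase rather than code from the post: the penalty compares, for a set of auxiliary utility functions, how much an action changes the agent's attainable value relative to doing nothing.

```python
# Hedged sketch of an AUP-style penalty (my illustration, not code from the
# thread): sum over auxiliary utilities u of |Q_u(s, a) - Q_u(s, noop)|.
def aup_penalty(q_values, state, action, noop="noop", scale=1.0):
    """q_values: list of dicts mapping (state, action) -> attainable value
    of one auxiliary utility. Returns the scaled total attainable-utility
    shift caused by taking `action` instead of the no-op."""
    total = sum(abs(q[(state, action)] - q[(state, noop)]) for q in q_values)
    return total / scale

# Toy numbers (made up): an action that shifts two auxiliary attainable
# utilities by +2 and -1 is penalized 3 relative to doing nothing.
q_aux = [
    {("s", "a"): 5.0, ("s", "noop"): 3.0},
    {("s", "a"): 1.0, ("s", "noop"): 2.0},
]
print(aup_penalty(q_aux, "s", "a"))  # 3.0
```

The point of the "we get to choose them" remark above is that `q_values` is chosen by the designer, not by an adversary.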
# Is noise density over a bandwidth a measure of the noise's standard deviation? Suppose I have a sensor which has a dominant white Gaussian noise source, a $$1\ \text{unit}/\sqrt{\text{Hz}}$$ noise density, and a bandwidth of 100 Hz. Then the noise becomes $$10\ \text{units}$$. Is this a measure of the standard deviation of the noise? What exactly is $$10\ \text{units}$$ a measurement of? In essence my question is, assuming an AWGN source with noise $$10\ \text{units}$$, is it proper to say that this implies that the noise $$w \sim N(0, 10^2)$$? • I believe I am correct, and this answers my question – john morrison Jan 7 at 20:57 • The 10 units is a measure of the RMS noise. – Spehro Pefhany Jan 7 at 21:00
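The arithmetic behind the accepted comment can be checked numerically. A short sketch (variable names are mine): for white noise, the total RMS is the density times the square root of the bandwidth, and for zero-mean Gaussian noise that RMS equals the standard deviation.

```python
import numpy as np

# White-noise RMS = noise density * sqrt(bandwidth):
# 1 unit/sqrt(Hz) over 100 Hz gives 10 units.
density = 1.0        # unit / sqrt(Hz)
bandwidth = 100.0    # Hz
sigma = density * np.sqrt(bandwidth)
print(sigma)  # 10.0

# Interpreting the 10 units as the standard deviation of w ~ N(0, 10^2):
# the sample standard deviation of simulated AWGN converges to sigma.
rng = np.random.default_rng(0)
w = rng.normal(0.0, sigma, size=200_000)
print(abs(np.std(w) - sigma) < 0.1)  # True
```

So "10 units" is the RMS value of the band-limited noise, which for a zero-mean Gaussian process is the same number as its standard deviation.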
# Archived Space Science Seminars

Announcements are sent via the email lists for faculty (SWL-Faculty-L@listserv.gmu.edu) and students (SWL-Students-L@listserv.gmu.edu). Send an email to one of these lists to subscribe.

## Spring 2015

January 14th (Tuesday) 1pm, 242 Planetary Hall

Building the Next-Generation of Model-Based Solar Cycle Predictions

Andrés Muñoz-Jaramillo, Montana State University, Harvard-Smithsonian Center for Astrophysics

The solar cycle plays a determinant role in shaping the solar magnetic field, defining the conditions of the interplanetary environment, and driving changes in the Earth's atmosphere and magnetosphere. Because of this, solar cycle prediction has become one of the main practical goals of solar physics. Traditionally, cycle prediction has been performed using the mathematical properties of historical data (extrapolation methods), and/or hunting for observables that correlate with the characteristics of the following cycle (precursor methods). As a new promising development, the solar minimum of cycle 23 saw the debut of predictions based on the assimilation of solar observations into dynamo models (although with highly varying results). In this talk we will discuss the relative performance of model-based cycle predictions (compared with other forms of prediction), what caused them to yield such varying results (and what this tells us about the solar dynamo), and some of the steps being taken to improve them.

Tuesday, March 18th at 1pm, 242 Planetary Hall

An Introduction of FORMOSAT-3/COSMIC Occultation Ion Density and Scintillation S4-index Profile

Shih-Ping Chen

The Formosa Satellite 3, also named the Constellation Observing System for Meteorology, Ionosphere, and Climate (abbreviated as FORMOSAT-3/COSMIC, or F3/C in short), is a constellation of six microsatellites, designed to monitor weather via radio occultation observations in both the troposphere and the ionosphere.
In this talk, we first examine the electron density, for which F3/C has provided thousands of profiles per day since 2006. Secondly, F3/C can also record scintillation S4-index observations between the GPS and F3/C satellites (S4max is also provided/defined), which is calculated from the signal-to-noise ratio of the L1 band C/A code (1.575 GHz). With its high spatial and temporal resolutions, especially over the oceanic regions that cannot be covered by ground-based observation, the F3/C data can be used as a powerful investigator of the global ionosphere.

Tuesday, April 1st at 1pm, 242 Planetary Hall

The Interaction between Coronal Mass Ejections (CMEs) and Coronal Holes (CHs) during the Solar Cycle 23 and its Geomagnetic Consequences

Amaal A. Mohamed, Department of Physics, The Catholic University of America, Washington, DC, 20064, USA

Abstract: The interactions between the two large-scale phenomena, coronal holes (CHs) and coronal mass ejections (CMEs), may be considered one of the most important relations, having a direct impact not only on space weather but also on the relevant plasma physics. Many observations have shown that throughout their propagation from the Sun to interplanetary space, CMEs interact with heliospheric structures (e.g., other CMEs, corotating interaction regions (CIRs), helmet streamers, and CHs). Such interactions could enhance the southward magnetic field component, which has important implications for geomagnetic storm generation. These interactions also imply a significant energy and momentum transfer between the interacting systems, where magnetic reconnection is taking place. When CHs deflect CMEs away from or towards the Sun-Earth line, the geomagnetic response of the CME is highly affected. Gopalswamy et al. [2009] have addressed the deflection of CMEs due to the existence of CHs that are in close proximity to the eruption regions.
They have shown that CHs can act as magnetic barriers that constrain CME propagation and can significantly affect their trajectories. Here, we study the interaction between coronal holes (CHs) and coronal mass ejections (CMEs) using the resultant force exerted by all coronal holes present on the disk, defined as the coronal hole influence parameter (CHIP). The CHIP magnitude for each CH depends on the CH area, the distance between the CH centroid and the eruption region, and the average magnetic field within the CH at the photospheric level. The CHIP direction for each CH points from the CH centroid to the eruption region. We focus on Solar Cycle 23 CMEs originating from the disk center of the Sun (central meridian distance ≤15°). We present an extensive statistical study via compiling data sets of observations of CMEs and their interplanetary counterparts, known as interplanetary CMEs (ICMEs). There are 2 subsets of ICMEs: magnetic cloud (MC) and non-magnetic cloud (non-MC) ICMEs. MCs are identified by a smooth change of the magnetic field as measured with spacecraft at 1 AU, using the ACE and Wind spacecraft. It is found that the maximum phase has the largest CHIP value (2.9 G) for non-MCs. The CHIP is the largest (5.8 G) for driverless (DL) shocks, which are shocks at 1 AU with no discernible MC or non-MC. These results suggest that the behavior of non-MCs is similar to that of the DL shocks and different from that of MCs. In other words, the CHs may deflect the CMEs away from the Sun-Earth line and force them to behave like limb CMEs with DL shocks. This finding supports the idea that all CMEs may be flux ropes if viewed from an appropriate vantage point.
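The vector summation described in the abstract lends itself to a short numerical sketch. This is my own illustration, not code from the study, and the combination rule is an assumption: I take each hole's contribution to scale as B x A / d^2 (average field times area over squared centroid distance), directed from the hole's centroid toward the eruption region, with per-hole vectors summed.

```python
import math

# Hypothetical sketch of a CHIP-style resultant (assumed form B * A / d^2;
# the study's exact formula may differ).
def chip(holes, eruption_xy):
    """holes: list of dicts with centroid 'xy', area 'A', and average
    photospheric field 'B'. Returns the resultant vector (fx, fy)."""
    fx = fy = 0.0
    for h in holes:
        dx = eruption_xy[0] - h["xy"][0]
        dy = eruption_xy[1] - h["xy"][1]
        d = math.hypot(dx, dy)
        mag = h["B"] * h["A"] / d**2
        fx += mag * dx / d  # unit vector from centroid toward eruption site
        fy += mag * dy / d
    return fx, fy

# Toy numbers (made up): two holes on opposite sides of the eruption region
# partially cancel, leaving a net deflecting influence.
holes = [
    {"xy": (-0.5, 0.0), "A": 0.02, "B": 5.0},
    {"xy": (0.4, 0.1), "A": 0.01, "B": 3.0},
]
fx, fy = chip(holes, (0.0, 0.0))
print(math.hypot(fx, fy))
```

The vector sum is what makes the "magnetic barrier" picture quantitative: a single strong hole near the source dominates the resultant, while symmetric holes largely cancel.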
## Fall 2014

242 Planetary Hall at 1pm

- 08/26/2014: Wenjing Liang, German Geodetic Research Institute (DGFI), "Regional ionosphere modeling from ground- and space-based GPS data"
- 09/16/2014: Phil Richards, GMU, "Causes of high latitude plasma density troughs"
- 09/30/2014: Ron Turner, NASA/HQ, "Technology Development in the NASA Innovative Advanced Concepts (NIAC) Program"
- 10/07/2014: Nishu Karna, GMU, "Coronal Cavity"
- 10/14/2014: "How the Sun Knocks Out My Cell Phone from 150 Million Kilometers Away"
- 10/21/2014: Cao Wei Jiang, "MHD Simulations of Solar Eruption"
- 10/28/2014: David Siskind, NRL, "How weather in the lower atmosphere can drive weather in space"
- 11/04/2014: Weijia Kuang, NASA GSFC, "Earth's magnetic environment: variability, prediction and geophysical implication"
- 11/11/2014: George Chintzoglou, GMU, "Sounding rocket experiment to study coronal heating"
- 11/18/2014: Steven Brown, GMU, "A study of seasonal ionospheric peak electron density variation"
- 12/01/2014: Alex Kutepov, NASA GSFC / Catholic U. of America, "Breakdown of non-LTE in the planetary atmospheres"
- 12/09/2014: Mel Goldstein, NASA GSFC, "The Solar Wind as a Laboratory for the Study of Magnetofluid Turbulence"

## Spring 2014

Announcements are sent via the email lists for faculty (SWL-Faculty-L@listserv.gmu.edu) and students (SWL-Students-L@listserv.gmu.edu). Send an email to one of these lists to subscribe.

## Fall 2013

Meets in Planetary Hall, Room 242 at noon.

Title: Fast Magnetosonic Waves and Global Coronal Seismology in the Extended Solar Corona

Abstract: We present global coronal seismology, for the first time, that allows us to determine inhomogeneous magnetic field strengths in a wide range of the extended solar corona. We use observations of a propagating disturbance associated with a coronal mass ejection observed on 2011 August 4 by the COR1 inner coronagraphs on board the STEREO spacecraft. We establish that the disturbance is in fact a fast magnetosonic wave, the upper coronal counterpart of the EIT wave observed by STEREO EUVI. The wave travels across magnetic field lines with inhomogeneous speeds, passing through various coronal regions such as quiet/active corona, coronal holes, and streamers. We derive magnetic field strengths along the azimuthal trajectories of the fronts at heliocentric distances 2.0, 2.5, and 3.0 Rs, using the varying speeds and electron densities.
The derived magnetic field strengths are consistent with values determined with a potential field source surface model and reported in previous works. The ranges of the magnetic field strengths at these heliocentric distances are 0.44 ± 0.29, 0.23 ± 0.15, and 0.26 ± 0.14 G, respectively. The uncertainty in determining magnetic field strengths is about 40%. This work demonstrates that observations of fast magnetosonic waves by white-light coronagraphs can provide us with a unique way to diagnose the magnetic field strength of an inhomogeneous medium in a wide spatial range of the extended solar corona.

The magnitude and inter-hemispheric asymmetry of the equatorial ionization anomaly: CHAMP and GRACE observations

Chao Xiong (1, 2), Hermann Lühr (1), ShuYing Ma (2), Kristian Schlegel (3)

1. Helmholtz Centre Potsdam, GFZ German Research Centre for Geosciences, Telegrafenberg, 14473 Potsdam, Germany. 2. Department of Space Physics, College of Electronic Information, Wuhan University, 430079 Wuhan, China. 3. Copernicus Gesellschaft e.V., Göttingen, Germany.

Abstract. Based on nearly nine years (2001-2009) of observations from CHAMP and GRACE, a comprehensive study has been made of the morphology of the equatorial ionization anomaly (EIA), focusing on the EIA's magnitude and inter-hemispheric asymmetry, resolving their seasonal and local time variations at different altitudes and solar activity levels. The electron density and the magnetic latitudes of the EIA crests both peak around 1400 LT, while the crest-to-trough ratio (CTR) of the EIA reaches its highest value post-sunset around 2000 LT, with a value almost twice the daytime level. The magnetic latitude of the EIA at CHAMP altitude (~400 km) can reach 13° around December solstice during both high and low solar activity years, while at GRACE altitude (~480 km) the crests are observed much closer to the dip equator during low solar activity years.
During high solar activity years the averaged apex height of the EIA crests can reach 800 km. During solstice seasons a clear inter-hemispheric asymmetry of the EIA can be seen. At CHAMP altitude the electron density of the EIA crest is stronger in the winter hemisphere during morning to noontime hours. It reverses after noontime, and the transition time appears around 1400 LT and 1200 LT for high and low solar activity years, respectively. At higher altitude (GRACE), the electron density of the EIA crest is always stronger in the summer hemisphere over the whole daytime. Simulation results from the SAMI2 model also show the differences in EIA inter-hemisphere asymmetry at the two altitudes.

## Spring 2013

Meets from 12:30-1:30 in Research Hall, Room 162.

### March 26th

Room moved to Showcase of Research Hall (glass room outside of elevators).

## Fall 2012

Meets from 1:30-2:50 in Innovation 316.

### December 11th

Student presentation. Dan.

### November 27th

• Terry Fu – GPU Computation of MHD Turbulence
• Phil Richards – AGU presentation

### November 13th

Student presentations. Nishu and Andrew.

### November 13th

Student presentations. Shea and Phil.

### November 6th

John V. Shebalin, NASA Johnson Space Center

Dipole Alignment in Rotating MHD Turbulence

The geodynamo is believed to originate in the highly turbulent liquid outer core of the Earth. There is a strong theoretical similarity between magnetohydrodynamic (MHD) turbulence in a rotating periodic box and that confined by concentric, rotating spherical boundaries. This enables us to use Fourier method numerical simulations as a surrogate for investigating rotating MHD turbulence within the Earth's liquid core. These simulations indicate strong dipole alignment with the rotation axis as long as there is sufficient time available before the growing dipole locks into a quasistationary coherent structure.
Here, results from numerical simulations will be presented, as well as advances in statistical theory that may explain strong dipole alignment observed in simulations, and by extension, help explain alignment seen in planetary magnetic fields.

### October 30th

Student presentations.

### October 23rd

Jeff Klenzing (GSFC)

The low-latitude ionosphere during the extreme solar minimum

During the recent solar minimum, solar activity reached the lowest levels observed during the space age, resulting in a contracted ionosphere/thermosphere. This extremely low solar activity provides an unprecedented opportunity to understand the variability of the Earth's ionosphere. Recent studies of topside ionospheric densities above 400 km as measured by the Communication/Navigation Outage Forecasting System (C/NOFS) satellite show that the plasma density is significantly lower than predicted by empirical models such as IRI-2007. Additionally, the average ExB drifts measured by the VEFI instrument on C/NOFS during this period are found to have several differences from the expected climatology based on previous solar minima, including downward drifts in the early afternoon and a weak to non-existent pre-reversal enhancement. Using SAMI2 as a computational engine, we discuss the effects of a contracted thermosphere, reduced EUV ionization, and altered electrodynamics for this new baseline ionosphere and compare the results to the average ion densities measured by C/NOFS and NmF2/hmF2 measured by COSMIC.

### October 16th

Student presentations. No class.

### October 2nd

No seminar scheduled.

### September 25th

No seminar scheduled.

### September 11th

Note: there are two speakers in today's seminar.

(1) Kyungsuk Cho, KASI (Korea), NASA/GSFC, and CUA

Title: Relationship between Metric Type II Solar Radio Bursts and Coronal Mass Ejections

Metric type II solar radio bursts are known radio signatures of coronal shocks.
Since the first discovery of the metric type II burst by Payne-Scott, Yabsley, and Bolton (1947), the debate on the origin (solar flare and/or coronal mass ejection) of the type II bursts has continued. By comparing kinematics of m-type II shocks with those of CMEs observed by SOHO/LASCO C1 & C2, MLSO/MK4, STEREO/COR1, and SDO/AIA, I have investigated the relationship between the type II shocks and CMEs. I found that CMEs could be the main source of type II bursts, and suggested that type II bursts are generated in two sites: either at the CME nose or at the CME-streamer interaction site. I will review my studies on the relationship between CMEs and metric type II radio bursts.

(2) Roksoon Kim, KASI (Korea), NASA/GSFC, and CUA

Title: Magnetic Field Strength in the Upper Solar Corona Using White-light Shock Structures Surrounding Coronal Mass Ejections

To measure the magnetic field strength in the solar corona, we examined 10 fast (>1000 km/s) limb coronal mass ejections (CMEs) that show clear shock structures in Solar and Heliospheric Observatory/Large Angle and Spectrometric Coronagraph images. By applying the piston–shock relationship to the observed CME's standoff distance and electron density compression ratio, we estimated the Mach number, Alfven speed, and magnetic field strength in the height range 3–15 solar radii (Rs).
The main results from this study are as follows: (1) the standoff distance observed in the solar corona is consistent with those from a magnetohydrodynamic model and near-Earth observations; (2) the Mach number as a shock strength is in the range 1.49–3.43 from the standoff distance ratio, but when we use the density compression ratio, the Mach number is in the range 1.47–1.90, implying that the measured density compression ratio is likely to be underestimated owing to observational limits; (3) the Alfven speed ranges from 259 to 982 km/s and the magnetic field strength is in the range 6–105 mG when the standoff distance is used; (4) if we multiply the density compression ratio by a factor of two, the Alfven speeds and the magnetic field strengths are consistent in both methods; and (5) the magnetic field strengths derived from the shock parameters are similar to those of empirical models and previous estimates. ### August 28th Kile Baker Title: SuperDARN: What is it, how does it work, and where is it going Abstract: Incoherent scatter radars (ISR) are well known in the aeronomy community and to some extent in the magnetospheric physics community as well. But coherent scatter radars are much less well known. The first radar that was to become part of the Super Dual Auroral Radar Network (SuperDARN) began operations almost 30 years ago. That original radar has now been joined by 26 others with at least three more in the pipeline. The SuperDARN system is an international consortium that involves universities and research laboratories from 8 countries. The SuperDARN system provides continuous, global measurements of ionospheric convection in both hemispheres. This talk will present a brief review of the history of SuperDARN, a discussion of how the radars work and a look at the future evolution of the radars. 
Data from the SuperDARN radars are freely available to scientists throughout the world, and the data can be used in conjunction with other ground-based data as well as with a large variety of satellite data to study a wide range of geospace phenomena.

## Summer 2012

Meets from 1:00-2:00 on Tuesdays in Research Hall room 302.

### August 21

Spiros Patsourakos, Department of Physics, University of Ioannina, Greece

Title: The Nature and Genesis of EUV Waves: A Synthesis of Observations from SOHO, STEREO, SDO, and Hinode

Abstract: A major, albeit serendipitous, discovery of the SOlar and Heliospheric Observatory mission was the observation by the Extreme Ultraviolet Telescope (EIT) of large-scale extreme ultraviolet (EUV) intensity fronts propagating over a significant fraction of the Sun's surface. These so-called EIT or EUV waves are associated with eruptive phenomena and have been studied intensely. However, their wave nature has been challenged by non-wave (or pseudo-wave) interpretations and the subject remains under debate. A string of recent solar missions has provided a wealth of detailed EUV observations of these waves, bringing us closer to resolving the question of their nature. With this talk, we gather the current state-of-the-art knowledge in the field and synthesize it into a picture of an EUV wave driven by the lateral expansion of the CME. This picture can account for both wave and pseudo-wave interpretations of the observations, thus resolving the controversy over the nature of EUV waves to a large degree but not completely. We close with a discussion on several remaining open questions in the field of EUV waves research.

### May 15th

Anand D. Joshi, Udaipur Solar Observatory, Physics Research Laboratory

Title: Kinematics of CMEs and Associated Prominences from Stereoscopic Measurements

Abstract: The dynamics of coronal mass ejections (CMEs) and prominences have been studied in detail for a long time now.
The twin STEREO spacecraft have enabled us to determine the true location and direction of propagation of such solar phenomena. In this talk a new triangulation technique that we have developed for the STEREO measurements will be discussed. Based on this technique, results from the analysis of two eruptive prominences will be presented. Both prominences showed pronounced helical twist in their legs during eruption. Several features along the two legs of both prominences, as observed in the EUVI 304 Angstrom images, were reconstructed. Changes in latitude during the eruptive phase indicate that the prominences underwent a non-radial equator-ward propagation in this phase. True heights of the prominences reveal two phases of eruption: the slow rise and the fast eruptive phase, with constant values of acceleration during both phases for each reconstructed feature. We attribute the difference in acceleration of the features along the two legs of the prominences in the fast eruptive phase to an interplay between the two motions, namely helical twist and non-radial propagation. Additionally, I will also present a study of six CMEs observed from the COR1 and COR2 coronagraphs. Our study of the kinematics of the CMEs in 3D reveals that the CME leading edge undergoes maximum acceleration typically below 2 R_sun. Eruptive prominences associated with three of the CMEs were also analysed. The acceleration profiles of CMEs associated with flares and prominences exhibit different behaviour. Results from this study pertaining to the kinematics of CMEs will also be presented in the talk.

### May 1st

Pat Danenault. Zhang et al., East-West Coast TEC Differences, and then Boding pdf

### April 24th

Journal club-style presentation by Phil H. (ajp_746_1_64.pdf) and John R. ("an alternative way to visualize the cross-covariance calculation of Ilonidis", Science-2011-Ilonidis-993-6.pdf).
### April 16th Notice the special time 1:30 PM – 2:30 PM, April 16 (Monday) Michael Thompson High Altitude Observatory, the National Center for Atmospheric Research (NCAR) Helioseismology and the roots of solar activity Helioseismology provides a unique probe into the structure and dynamics of the solar interior. I will review some of the results from helioseismology that may provide constraints on the generation and evolution of magnetic flux in the solar interior, and I will conclude with a discussion of future challenges. ### April 10th Journal club-style presentation by Nishu and Victoir ### April 3rd John V. Shebalin NASA JSC Planetary Dynamos Many planets in our solar system have global, quasistationary magnetic fields with large dipole moments that interact with the solar wind to create planetary magnetospheres. Strong planetary magnetic fields are believed to arise from magnetohydrodynamic (MHD) processes in a conducting liquid core, although it has taken centuries to reach this viewpoint. In particular, the Earth’s magnetic field has caused wonder and bafflement throughout history, and only in the last century have we come to a more realistic understanding of its origin. I will review past efforts to understand the origins of planetary magnetic fields and also discuss a recent application of ideal MHD theory to “the dynamo problem.” ### March 20th Joseph Helmboldt NRL Ionospheric and plasmaspheric science with interferometric VHF data ### March 6th Jeff Stevens will present Meredith et al., Energetic electron precipitation during high‐speed solar wind stream driven storms ### February 28th Dean Pesnell NASA/GSFC The Solar Dynamics Observatory: 65 million images of the Sun and 2 comets NASA’s Solar Dynamics Observatory has returned over 60 million images of the Sun in its first 22 months of taking data. This data has given us spectacular views of flares and erupting prominences. 
We have learned how to predict when magnetic field will emerge from the surface and why satellite drag caused by solar flares is more complicated than we thought. A comet flew across the face of the Sun on July 6, 2011, causing a bright tail in the images. We are still asking why it was bright. Come hear about the images, see some movies, and learn about the new results from SDO.

### February 21st

Pål Brekke, Senior adviser at the Norwegian Space Centre in Oslo

Our Explosive Sun: The Source of the Northern Lights

Our sun is a stormy and variable star that hurls billions of tons of gas toward the Earth, and in the process creates the northern lights, or aurora borealis. This stunning phenomenon of lights dancing across the sky is embedded in the mythology of many cultures and has been characterized as everything from dancing spirits to a sign of God's anger. Solar physicist Paal Brekke gives a multimedia presentation on the sun-Earth connection, including spectacular images and movies from the new NASA spacecraft Solar Dynamics Observatory as well as mind-bending videos of the northern lights. He also discusses how solar storms can be a hazard for our technology-based society, and for humans in space.

### February 14th

Thomas G Moran, GSFC, CUA

Radiative Heating of the Sun's Corona

The atmosphere surrounding the Sun known as the corona has a temperature of 2,000,000 K, which is a factor of 200 higher than that of the photosphere, or 'surface'. Therefore, the corona must be heated by a nonconductive mechanism. We consider the possibility that some portion of this heating is provided by sunlight in the visible and infrared, and test this idea through a Monte-Carlo simulation of the wave-particle interaction. We conclude that sunlight provides at least 40% and possibly all of the power required to heat the corona, with the exception of dense magnetic flux loops. Coronal electrons are heated in a stochastic manner by low-coherence, solar electromagnetic radiation.
The low coherence of solar radiation allows moving electrons to gain energy from the chaotic wave field, which imparts multiple random velocity "kicks" to these particles, causing their velocity distribution to broaden or heat at levels required to balance radiative losses. We propose to test this model through a laboratory experiment, confining electrons in a Penning trap, illuminating them with low-coherence broadband visible and infrared light, and measuring the resulting heating for comparison with our predictions. This experiment could determine whether sunlight heats the corona.

### February 7th

Andrew will lead a discussion of Media:athena_simpleGodunov_2009.pdf

### January 31st

Shea will lead a discussion of Media:Science-2011-Ilonidis-993-6.pdf (extra figures Media:Ilonidis-SOM.pdf)

## Fall 2011

Meets from 12:00-1:00 on Tuesdays in Research Hall room 301.

### November 29

Statistical constraints on outer zone relativistic electron dynamics

T.P. O'Brien

Abstract: Outer zone relativistic electron dynamics are routinely treated as linear in the electron phase-space density. For such linear systems, there is an intimate relationship between the time evolution operator (such as the Fokker-Planck equation) and the spatio-temporal covariance of phase space density. Using particle fluxes observed by the CRRES mission, we exploit this relationship to derive constraints on the time evolution operator. We answer three questions: Is a Fokker-Planck operator appropriate? If so, what diffusion coefficients should be used? What plasma waves must be assumed to obtain the inferred diffusion coefficients?

### November 8

Yuan-Yong Deng & Dong-Guang Wang, National Astronomical Observatory, Chinese Academy of Sciences

In this presentation, we will briefly introduce the projects the Chinese solar community is proposing, the Space Solar Telescope (SST) and the Chinese Giant Solar Telescope (CGST).
SST is a one-meter optical telescope to be launched to L1 to observe the vector magnetic field and line-of-sight velocity. Coordinated with high-energy, EUV, and radio observations, SST will provide important observations for solar physics and space weather forecasting. CGST is a ground-based solar telescope with an 8-m ring diameter. Its spectral coverage extends from the visible to the mid-infrared region. CGST aims to achieve high spatial resolution and high magnetic sensitivity, and also to discover new solar phenomena in the largely unexplored infrared region. As background, some other progress on solar instrumentation in China will also be summarized in this presentation.

### November 1

Laura A. Balmaceda
Institute for Astronomical, Terrestrial and Space Sciences (ICATE), Argentina

Solar irradiance variations have been continuously recorded only since 1978. Undoubtedly, there is a need to extend these records into the past in order to evaluate their possible influence on the Earth's climate. A reconstruction of solar irradiance back to the Maunder minimum from the surface magnetic flux will be described. The reconstruction is based on a simple physical model that builds on the sunspot number records and sunspot areas where available. Since sunspot data from different sources, if combined directly, can lead to errors in estimating the increase of solar irradiance during the past centuries, a description of the cross-calibration of sunspot areas will also be presented. Finally, a brief review of the latest advances in modeling solar irradiance variations on long-term timescales will be given.

### October 18

Dr. Mei Zhang
National Astronomical Observatory, Chinese Academy of Sciences

Consequences of Magnetic Helicity Accumulation in the Corona

Abundant observations have shown that magnetic fields emerging on the solar photosphere obey a so-called hemispheric helicity sign rule, that is, positive helicity sign in the southern hemisphere and negative helicity sign in the northern hemisphere.
This observational rule, together with the theoretical concept that the total magnetic helicity is approximately conserved in the corona, leads to a natural result: total magnetic helicity accumulates in the corona, in the southern and northern hemispheres respectively. In this talk I will present our understanding of the consequences of this magnetic helicity accumulation in the corona. We show that magnetic flux ropes will form in the corona as a result of Taylor relaxation; free magnetic energy will build up according to Woltjer's theorem; coronal mass ejections will take place due to the existence of an upper bound on the total magnetic helicity of force-free fields; and finally, Parker-spiral-like structures will form in interplanetary space to accommodate the large amount of magnetic helicity ejected from the corona.

### October 11

Journal paper discussion led by Phillip Hess

(1) "Propagation of an Earth-Directed Coronal Mass Ejection in Three Dimensions", Byrne, J. et al., Nature Communications, 1:74, doi:10.1038, 2010

(2) "Experimental Onset Threshold and Magnetic Pressure Pile-up for 3D Reconnection", Intrator, T.P. et al., Nature Physics, 5, 521, 2009

### October 04

Dr. Diego Janches
Space Weather Lab, GSFC/NASA

The impact of the micrometeor flux in the Earth's Mesosphere and Lower Thermosphere

Every day, billions of microgram-sized extraterrestrial particles enter and ablate in the upper layers of the Earth's atmosphere, depositing their mass in the Mesosphere/Lower Thermosphere (MLT). These particles, mostly originating from the sporadic meteor complex, are the major contributors of metals in the MLT. The material deposited by these particles gives rise to the upper atmospheric metallic and ion layers observed by radars and lidars.
In addition, micrometeoroids are believed to be an important source of condensation nuclei (CN), the existence of which is a prerequisite for the formation of noctilucent cloud (NLC) particles in the polar mesopause region. In order to understand how this flux gives rise to these atmospheric phenomena, accurate knowledge of the global meteoric input function (MIF) is critical. This function accounts for the annual and diurnal variations of meteor rates, global distribution, directionality, and velocity and mass distributions. Estimates of most of these parameters are still under investigation. This talk will focus on results from an effort which aims to address how much, when, where, and how micrometeoric mass is deposited in the MLT. This includes radar observations of meteor head-echoes as well as the coupling of several models, including astronomical, plasma, and chemical models of the interaction and ablation processes that these particles undergo upon atmospheric entry. We then use the Whole Atmosphere Community Climate Model to study the final distribution of metals throughout the MLT.

### September 20

Title: Improvements and Applications of Kinematic Models of the Solar Magnetic Cycle

By Andrés Muñoz-Jaramillo (Harvard-Smithsonian Center for Astrophysics)

The best tools we have for understanding the origin of solar magnetic variability are kinematic dynamo models. During the last decade, this type of model has seen a continuous evolution and has become increasingly successful at reproducing solar cycle characteristics. However, the ingredients that are part of these models remain poorly constrained, which allows one to obtain solar-like solutions by "tuning" the input parameters, leading to controversy regarding which parameter set is more appropriate. In this presentation we will visit each of those ingredients and the work we have done to constrain their free parameters.
Additionally, using the improved model as a starting point, we will explore the causes that led to the unusually quiet minimum of cycle 23.

### September 13

Space Science in the new South African Space Agency

Dr. Lee-Anne McKinnell
Managing Director, South African National Space Agency (SANSA) Space Science (formerly NRF Hermanus Magnetic Observatory), Hermanus, South Africa

This presentation will focus on the establishment of the new South African National Space Agency (SANSA) and the role that Space Science will play in the future South African space programme. Details of the involvement of the agency in fundamental and applied research, innovation and technology, space weather, human capital development, and science advancement will be covered. The scientific focus of SANSA, with details on the research infrastructure under its care, will also be presented.

SANSA Webpage: http://www.sansa.org.za
HMO Webpage: http://www.hmo.ac.za
Space Weather: http://spaceweather.hmo.ac.za

### September 6

Title: Investigation of Magnetosheath Cavities and Upper Atmosphere and Space Weather Activities at ITU

By Zerefsan Kaymaz
Istanbul Technical University

In the Earth's magnetosheath, regions of depressed density and magnetic field, called magnetosheath cavities, have been detected during increased fluxes of highly energetic particles. Magnetic field and plasma observations from the Cluster spacecraft have been scanned to carry out a statistical study of the effects of the energetic particles on the magnetosheath field and plasma structure and to determine the characteristics of the magnetosheath cavities. The magnetosheath cavities are best described as depressions in the magnetosheath magnetic field and density. Temperatures within the cavities are found to be increased, while the velocity is either increased or unchanged. As a result of the low density and speed, the magnetopause expands locally outward from the Earth.
One of the most distinguishing features that characterize the magnetosheath cavities is the level of fluctuations within the cavity regions in all magnetosheath parameters. Especially in the magnetic field, fluctuations of higher amplitude and higher frequency than in the background magnetosheath were observed. These indicate wave activity within the cavities. No IMF clock angle relationship has been determined; however, the cavities are found to occur during low IMF cone angles (radial IMF). In kinetic-hybrid simulations of the solar wind-magnetosheath interaction under low IMF cone angles, they are seen to be transmitted from the upstream solar wind region into the magnetosheath with the incoming flow. In this talk, results from the Cluster data and kinetic-hybrid simulations will be presented and compared. At the beginning of the talk, a brief introduction to the Upper Atmosphere and Space Weather Laboratory of Istanbul Technical University (ITU) will be given.

## Summer 2011

### July 26, 2011

11 AM – Noon at Room 302, R1

Title: Can viscous drag account for CME deceleration?

By Prasad Subramanian (IISER Pune, India)

An understanding of the forces that act on Coronal Mass Ejections (CMEs) in the interplanetary medium is of prime importance in understanding their dynamics and predicting their arrival at the Earth. These forces have been characterized so far in terms of a "drag parameter" $C_{\rm D} \sim 1$ that quantifies the role of the aerodynamic drag experienced by a typical CME due to its interaction with the ambient solar wind. We examine this issue critically, and start by examining microphysical models for viscosity in the turbulent solar wind. We envisage the CME as a bubble propagating through the solar wind and compute the drag on it using these viscosity prescriptions applied to a simple 1D hydrodynamical model. We find that viscous drag is quite inadequate to account for the observed slowing down of CMEs from the Sun to the Earth.
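As an illustration of the kind of 1D kinematic treatment described above, the sketch below integrates a CME relaxing toward the ambient solar wind speed under a quadratic aerodynamic drag law, dv/dt = -γ(v − v_sw)|v − v_sw|. The drag parameter γ, the wind speed, and the initial conditions are illustrative round numbers, not values from the talk.

```python
# Minimal sketch of drag-governed CME kinematics (all parameters assumed).
AU = 1.496e11                 # Sun-Earth distance (m)
v_sw = 400e3                  # ambient solar wind speed (m/s), assumed
gamma = 1e-10                 # drag parameter (1/m), assumed typical magnitude
v = 1000e3                    # initial CME speed (m/s), assumed
r = 20 * 6.96e8               # initial height: 20 solar radii (m)
t, dt = 0.0, 60.0             # elapsed time and time step (s)

# Forward-Euler integration out to 1 AU
while r < AU:
    dv = v - v_sw
    v += -gamma * dv * abs(dv) * dt   # quadratic drag toward the wind speed
    r += v * dt
    t += dt

print(f"arrival speed ~{v/1e3:.0f} km/s after ~{t/3600:.1f} h")
```

With these numbers the CME decelerates substantially but never drops below the wind speed, since the drag force vanishes as the relative speed goes to zero.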
Other factors, such as the energy lost while driving a shock, and/or the tension in magnetic field lines that might connect the CME to the Sun, could manifest themselves as an effective aerodynamic drag.

## Spring 2011

Meets from 1:00-2:30 in Research I room 301.

### May 3, 2011

Title: Photochemistry and Energetics of the Ionosphere and Thermosphere

Phil Richards, GMU

The energy that is deposited in the thermosphere and ionosphere by solar EUV photons ultimately ends up heating the ambient neutral gases through a complex set of ion and minor neutral chemical reactions. We summarize the current state of knowledge of ionospheric and thermospheric chemistry. With the aid of the Field Line Interhemispheric Plasma (FLIP) model, we show that the latest chemical scheme, solar EUV irradiances, and MSIS thermosphere model can satisfactorily account for most solar cycle and seasonal variations in the daytime peak density of the midlatitude ionosphere during magnetically quiet periods. The model calculations also demonstrate the importance of vibrationally excited N2 in the ionosphere. It is particularly important in producing negative ionosphere storms and also helps explain the rapid recovery after storms.

### April 19, 2011

The evolution of fast and slow CMEs in interplanetary space observed by STEREO, SOHO and SDO

Alexis Rouillard, GMU / NRL

The STEREO mission allows detailed comparisons of white-light images of the solar wind (SECCHI experiment) with in-situ measurements (ACE, WIND or STEREO) to be performed. We will review the recent results of such comparisons. They confirm that the location, orientation and topology of the magnetic field inside Coronal Mass Ejections (CMEs) largely dictate the aspect of CMEs in white-light images. SECCHI can be used to investigate the interaction between CMEs and the background solar wind during their propagation to 1 AU.
Combined images of the solar corona obtained by STEREO, SOHO and SDO also provide high-cadence, high-resolution observations of shock waves. We use the unprecedented and complementary observations of a shock-sheath region tracked continuously from the Sun to 1 AU during the 2010 April 3-5 period to investigate the onset of a Solar Energetic Particle (SEP) event. The spatial extent, radial coordinates, and speed of the driver are measured from SECCHI observations and used as inputs to a numerical simulation of the CME propagation in the background solar wind. The simulated magnetic and plasma properties of the shock and the sheath region at 1 AU agree very well with those measured in situ at L1. These simulation results reveal that during this event, Earth and STEREO-B are magnetically connected to the eastern and western edges of the CME bow shock. The simulation shows that the nine-hour delay of the estimated SEP release time relative to the eruption time of the ejecta corresponds to the time required by the shock to reach the magnetic field line connected to L1. The shock compression ratio is found to grow along the magnetic field line until the maximum flux of high-energy particles is reached, and then levels off.

### April 12, 2011

How and Why Does the Ionospheric Total Electron Content Vary?

Robert R. Meier¹, Judith Lean², John Emmert², Michael Picone¹

April 12, 2011

A new general linear model of ionospheric climatology is described that accounts simultaneously for the influences of solar and geomagnetic activity, oscillations at four frequencies and a secular trend. The model captures more than 98% of the variance in the daily-averaged, global total electron content (TEC) of the ionosphere derived from GPS observations during the 16 years from 1995 to 2010, and enables the reconstruction of TEC variations since 1950.
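A model of this general form can be sketched as an ordinary least-squares fit. The regressors below (an F10.7-like solar proxy, an Ap-like geomagnetic proxy, annual and semiannual harmonics, and a linear trend) and the synthetic "observed" TEC series are illustrative stand-ins for the drivers named in the abstract, not the authors' actual data or formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(3650)                               # ten years of synthetic daily samples
two_pi = 2 * np.pi

# Synthetic drivers standing in for the real indices (assumed, for illustration)
f107 = 120 + 50 * np.sin(two_pi * days / 3800)       # solar activity proxy
ap = rng.gamma(2.0, 4.0, days.size)                  # geomagnetic activity proxy

# Synthetic "observed" TEC: drivers + harmonics + secular trend + noise
tec = (0.25 * f107 - 0.4 * ap
       + 5 * np.cos(two_pi * days / 365.25)          # annual oscillation
       + 3 * np.cos(two_pi * days / 182.6)           # semiannual oscillation
       + 0.002 * days                                # secular trend
       + rng.normal(0, 1.0, days.size))

# Design matrix: constant, drivers, trend, and sin/cos pairs at two frequencies
cols = [np.ones(days.size), f107, ap, days.astype(float)]
for period in (365.25, 182.6):
    cols += [np.cos(two_pi * days / period), np.sin(two_pi * days / period)]
X = np.column_stack(cols)

# Least-squares fit and fraction of variance captured
coef, *_ = np.linalg.lstsq(X, tec, rcond=None)
resid = tec - X @ coef
r2 = 1 - resid.var() / tec.var()
print(f"variance captured: {r2:.3f}")
```

Because every term in the synthetic signal also appears in the design matrix, the fit captures nearly all of the variance, mirroring the >98% figure quoted for the real model.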
Solar EUV irradiance variations are the dominant ionospheric influence, directly increasing TEC by as much as 40 TECU from solar activity minimum to maximum and producing additional 27-day fluctuations of as much as 15 TECU (in October 2003). The semiannual and annual oscillations are comparable in magnitude to the 27-day fluctuations, with (peak to valley) amplitudes that increase from a few TECU at low solar activity to ~17 TECU during solar activity maximum. The phase and amplitude of the semiannual oscillation are identical in the northern and southern hemispheres (and hence globally). In contrast, the annual oscillation is twice as large in the southern hemisphere (where it peaks in December-January) as in the northern hemisphere (where it peaks in April-May). Seasonal, semiannual and annual anomalies in TEC are direct effects of semiannual and annual oscillations produced by orbitally driven photoionization and thermospheric composition changes, not of corresponding oscillations in solar or geomagnetic activity. Geomagnetic influences on daily-averaged global electron densities are relatively modest, with the maximum effect a reduction of 11 TECU (in October 2003) and only 11 episodes of depletions in excess of 5 TECU during the past 16 years. A statistically significant positive trend of 0.6 ± 0.3 TECU (10¹⁶ electrons m⁻²) per decade is detected in the 15-year record.

¹ SPACS, GMU
² Space Science Division, NRL

### April 5, 2011

Lara Waldrop

Remote sensing of neutral species abundance in the Earth's upper thermosphere from ground- and space-based platforms

The density and composition of the terrestrial upper atmosphere are key state parameters whose knowledge is essential for accurate photochemical modeling, understanding responses to space weather events, assessing secular atmospheric evolution, and supporting magnetospheric imaging capabilities.
However, the empirical quantification of the few neutral species that comprise the upper thermosphere and exosphere is notoriously challenging. During the decades-long absence of direct, in-situ mass-spectrometer measurements, attempts to infer neutral density routinely and reliably from ground-based instrumentation have been limited by both observational constraints and theoretical ambiguities. Recent advances in numerical models and experimental techniques, including the availability of satellite-based remote sensing measurements, have motivated renewed efforts toward estimation of these key parameters. In this talk, I will summarize the long-standing challenges of ground-based upper thermosphere remote sensing and describe several promising new techniques, combining multi-platform observations with state-of-the-art photochemical and radiative transfer models, with the goal of achieving self-consistent upper thermospheric state estimation on a routine basis.

### March 29, 2011

Phillip C. Chamberlin, NASA/GSFC

SDO/EVE observations of EUV irradiance changes during solar flares, and what impact these changes have on the Earth's Ionosphere and Thermosphere

The Solar Dynamics Observatory (SDO) began normal operations in May 2010. Since then, the Extreme ultraviolet Variability Experiment (EVE) has been returning the most accurate solar XUV and EUV irradiance measurements (6.5-105 nm) every 10 seconds at almost 100% duty cycle. Having these high temporal resolution observations at good spectral resolution (0.1 nm) allows EVE to quantify the changes in the radiative output during solar flares, leading to new insights into the solar plasma's thermal evolution at all stages of the flare. These changes in the solar EUV output then drive similar changes in the Earth's Ionosphere and Thermosphere due to higher ionization rates and heating in these upper atmospheric layers.
The presentation will not only present and discuss the new results on solar flare plasma evolution, but also how these changes can influence the Earth's I/T system.

### March 8, 2011

Journal club discussion of Image:Angeo-26-2.pdf and Image:2010JA015.pdf

### March 1, 2011

John V. Shebalin, NASA JSC

Coherent Eigenmodes in MHD Turbulence

The statistical mechanics of Fourier models of ideal, homogeneous, incompressible magnetohydrodynamic (MHD) turbulence will be presented. Although statistical theory predicts that Fourier coefficients of fluid velocity and magnetic field are zero-mean random variables, numerical simulations clearly show that certain coefficients have a non-zero mean value that can be very large compared to the standard deviation, i.e., a coherent structure generally exists in MHD turbulence. An eigenanalysis of the system reveals eigenvariables that are generalizations of the Elsässer variables. When certain eigenvariables are large compared to others, coherent structure and broken ergodicity result. Relevance for dissipative magnetofluids will be discussed.

### February 22, 2011

Dusan Odstrcil
George Mason University

Title: The first STEREO multi-event: Numerical simulation of coronal mass ejections (CMEs) launched on August 1, 2010

On 2010-08-01 at least four coronal mass ejections (CMEs) were observed by the Heliospheric Imagers (HIs) onboard the STEREO spacecraft. These events originated at different parts of the solar corona and generated a complex scenario of four mutually interacting CMEs. Real-time prediction of the arrival times at Earth failed, and it is difficult to associate features observed by HIs with their solar sources and impacts at spacecraft. We use the heliospheric code ENLIL to show the global solution for two different scenarios using fitted CME parameters from coronagraph observations. We present the temporal profiles and synthetic white-light images that enable direct comparison with in-situ and remote observations.
### February 8th, 2011

Steven Meier
Director, Division for Crosscutting Capability Demonstrations

Suborbital Opportunities at NASA: Facilitated Access to the Space environment for Technology (FAST) and Commercial Reusable Suborbital Research (CRuSR)

## Fall 2010

Meets from 12:00-1:00 in Research I room 302 (except on Sept 14th, Oct 12th, and Nov 9th, when we are in room 306 of Science and Tech I)

### December 7, 2010

Rebekah Evans
George Mason University

Title: Coronal Heating by Surface Alfvén Wave Damping: Implementation in MHD Modeling and Connection to Observations

We present results from the development of a solar wind model driven by Alfvén waves with realistic damping mechanisms. We self-consistently introduce surface Alfvén wave damping, which is characterized by transverse gradients in density. The plasma gradients set up a resonant layer, in which the waves dissipate energy to the wind. First, we applied surface Alfvén wave damping in a solar wind model driven by a flat wave spectrum (van der Holst et al. 2010), and demonstrated its effect at the boundary of open and closed magnetic fields (Evans et al. 2010). Here we apply surface wave damping to a model which allows a Kolmogorov-type spectrum of Alfvén waves to evolve in frequency space (Oran et al. 2010). We consider waves with frequencies lower than those damped in the chromosphere, and on the order of those dominating the heliosphere (0.0001 to 100 Hz). We provide wave dissipation as a function of frequency. We connect our modeling results to recent observations, including an estimation of resonant absorption damping by Verth, Terradas & Goossens (2010) and density and temperature distributions using differential emission measure tomography by Vasquez, Frazin & Manchester (2010), which we present as both direct and indirect evidence that this dissipation mechanism occurs and is important in the lower corona.
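For orientation on the wave-heating picture above, the Alfvén speed and the energy flux carried by such waves follow from the textbook MHD relations v_A = B/sqrt(μ₀ρ) and F ≈ ρ⟨δv²⟩v_A. The coronal field strength, density, and velocity amplitude below are illustrative round numbers, not values from the talk.

```python
import math

MU0 = 4e-7 * math.pi          # vacuum permeability (SI)
M_P = 1.67e-27                # proton mass (kg)

def alfven_speed(B, n):
    """Alfvén speed (m/s) for field B (T) and proton number density n (m^-3)."""
    rho = n * M_P
    return B / math.sqrt(MU0 * rho)

def wave_energy_flux(B, n, dv):
    """Energy flux (W/m^2) of Alfvén waves with velocity amplitude dv (m/s)."""
    rho = n * M_P
    return rho * dv**2 * alfven_speed(B, n)

# Illustrative low-corona values (assumed): B = 5 G, n = 1e14 m^-3, dv = 30 km/s
B, n, dv = 5e-4, 1e14, 30e3
print(f"v_A ~ {alfven_speed(B, n)/1e3:.0f} km/s")
print(f"F   ~ {wave_energy_flux(B, n, dv):.0f} W/m^2")
```

With these round numbers the wave energy flux comes out at a few hundred W/m², the order of magnitude commonly quoted for quiet-Sun coronal heating requirements.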
### November 9, 2010

Institute of Atmospheric Physics, Academy of Sciences of the Czech Republic

Title: Relating Solar-Wind and Plasmaspheric Parameters to Topside Ionospheric Parameters by Using a Multi-Satellite Comprehensive Database

The ionosphere and plasmasphere (inner magnetosphere) are very complicated coupled systems, and their state depends strongly on solar wind conditions. We employ a large database available from the Space Physics Data Facility (SPDF) at the Goddard Space Flight Center to study parameters in the upper F region and the topside ionosphere in relation to the Earth's plasmasphere and the solar wind. The database comprises all available topside sounder data from the four Alouette/ISIS satellites, in-situ measurements made by Langmuir probes and ion mass spectrometers on the ISIS-1 and ISIS-2 satellites, plasmaspheric measurements made from the OGO-5 and Explorer 45 satellites, as well as solar wind data primarily from the Wind satellite. We mainly focus (1) on the relation of the main ionospheric trough to the position of the plasmapause and (2) on the response of the high-latitude ionosphere to magnetic clouds detected in the solar wind. The main goal of this research is to establish links between features observed in the plasmasphere and solar wind and features observed in the upper ionosphere using the capability of the Goddard SPDF database. We also discuss a possible contribution to the IRI model.

### October 26

Peter Williams
NASA/GSFC

Title: Supergranule Convection at Solar Minimum

As well as transporting energy outward, the solar convection zone is responsible for generating the Sun's magnetic field. Improving our understanding of solar convection may lead to improved solar dynamo models and better predictability of the solar cycle. Supergranulation is a component of solar convection with cells approximately 35 Mm across that last for 1-2 days.
They are well observed in Doppler data as features with a strong divergent flow (~300 m/s) and are the dominant characteristic of the chromospheric network seen in Ca II K images, where they play an important role in structuring the local magnetic field. Our recent study of supergranulation uses Doppler images obtained from the Michelson Doppler Imager (MDI). Characteristics of supergranulation such as sizes, lifetimes and velocities have been studied for two years around two solar minima, times during which very few sunspots were present. Manifestations of supergranulation from data simulations as well as within other data sources are also described. Ongoing studies using Doppler data from the Helioseismic and Magnetic Imager (HMI) aboard the Solar Dynamics Observatory (SDO) are presented, offering new insight into surface convection features with unprecedented clarity and resolution.

### September 21

On the consistency of satellite measurements of thermospheric composition and solar EUV irradiance with Australian ionosonde electron density data

Phil Richards and Bob Meier
George Mason University

We use a comprehensive ionosphere model to demonstrate that the TIMED satellite measurements of solar EUV irradiances, neutral densities, and neutral temperatures are consistent with Australian ionosonde measurements of the electron density from 2002 to 2006. Our approach is to adjust the NRLMSISE-00 model neutral densities and temperature to determine the changes that are needed for the ionosphere model to reproduce the electron density. These model-derived neutral densities and temperatures are found to agree well with measurements of neutral densities and temperatures from the TIMED-GUVI instrument for both magnetically quiet and disturbed conditions. The model calculations also demonstrate the importance of vibrationally excited N2 in the ionosphere, particularly in producing negative ionosphere storms.
This technique opens up the prospect of using the vast ionosonde database to improve the representation of temporal variations in empirical models of the thermosphere during magnetic storms.

### September 14

Journal club discussion of "Spinning Motions in Coronal Cavities", by Y.-M. Wang and G. Stenborg

Discussion led by Georgios Chintzoglou

Abstract: In movies made from Fe XII 19.5 nm images, coronal cavities that graze or are detached from the solar limb appear as continually spinning structures, with sky-plane projected flow speeds in the range 5-10 km s⁻¹. These whirling motions often persist in the same sense for up to several days and provide strong evidence that the cavities and the immediately surrounding streamer material have the form of helical flux ropes viewed along their axes. A pronounced bias toward spin in the equatorward direction is observed during 2008. We attribute this bias to the poleward concentration of the photospheric magnetic flux near sunspot minimum, which leads to asymmetric heating into an equatorward spinning motion when the loops pinch off to form a flux rope. As sunspot activity increases and the polar fields weaken, we expect the preferred direction of the spin to reverse.

### September 7th

Journal club discussion of

## Summer 2010

### July 6th

Thanks to the development of space tourism, a new generation of unusually low-cost, extremely high-flight-rate suborbital vehicles is coming on line in 2011. All of these vehicles are capable of carrying experiments and experimenters into space at prices about 10x lower than conventional sounding rockets. To explore the specific applications of such vehicles for space physics purposes, and the kinds of vehicle attributes that best suit space physics applications, we will hold a small workshop at George Mason University in Fairfax on Friday, July 6th. This workshop will begin at 9:30 a.m. and run until early afternoon. Briefings by Alan Stern will describe the capabilities of these vehicles.
Following this, workshop participants will individually present and discuss concepts for space physics applications in auroral, ionospheric, magnetospheric, and heliospheric research. The meeting organizers are Mike Summers (GMU) and Alan Stern. Lunch will be provided. Please indicate your interest by emailing us at either msummers@gmu.edu or astern2010@aol.com.

### June 21st

Parallel, grid-adaptive simulations of astrophysical jet plasma

Dr. Zakaria Meliani

The computational effort involves the use of modern shock-capturing schemes exploited at very high effective resolutions. Our implementation in the AMRVAC code allows various schemes for hydrodynamic and magnetohydrodynamic applications. The governing equations of relativistic (magneto)hydrodynamics need accurate numerical treatment, fully obeying their conservation-law nature in four-dimensional space-time. To make predictions for the long-term behavior of astrophysical jet flows, the use of parallelized, grid-adaptive software is a requirement, optimally exploiting modern high performance computing platforms. I will discuss the octree-based adaptive mesh refinement (AMR) strategy, its parallel implementation, and provide quantitative information on its performance on some typical applications. I will highlight recent results on the classification of radio source galaxies according to the properties of the external medium and of the central engine. In fact, we elaborate a model of a two-component jet induced by intrinsic features of the central engine (accretion disk + black hole). We demonstrate that two-component jets with a high kinetic energy flux contribution from the inner jet are subject to the development of a relativistically enhanced, rotation-induced Rayleigh-Taylor-type non-axisymmetric instability. This instability induces strong mixing between both components, decelerating the inner jet and leading to overall jet decollimation.
This novel scenario of sudden jet deceleration and decollimation can explain the Fanaroff-Riley dichotomy of radio sources as a consequence of the efficiency of the central engine in launching the inner jet component vs. the outer jet component. We infer that the FRII/FRI transition, interpreted in our two-component jet scenario, occurs when the relative kinetic energy flux of the inner to the outer jet exceeds a critical ratio.

### June 15th

Shin-Yi Su
Institute of Space Science, and Center for Space and Remote Sensing Research, National Central University, Chung-Li, Taiwan

Equatorial-to-Middle Latitude Ionospheric Irregularities: Studies Using ROCSAT Data

Equatorial-to-middle latitude topside ionospheric ion density variations observed by ROCSAT-1 at the 600-km altitude have been studied to construct the global/seasonal/local-time distributions of the equatorial plasma depletion (plasma bubble) occurrence rates, and the low-to-middle latitude plasma enhancement (plasma blob) occurrence rates, from 1999 to 2004, when solar activity was moderate to high. The occurrence distributions of the two contrasting density irregularity structures indicate some complementary pattern in the latitudinal distribution. The seasonal/longitudinal (s/l) distributions of the equatorial density depletions have been studied extensively, and the proposed causes of such distributions are (1) the magnetic declination angle, which affects the longitudinal gradient of the ionospheric conductivity across the sunset terminator; (2) the geographic location of the dip equator, which affects the ionospheric seasonal density variation; and (3) the strength of the geomagnetic field at the dip equator, which drives the overall electrodynamics. In contrast, the study of the low-to-middle latitude density enhancements has just begun, and the occurrence distribution only indicates that the maximum occurrence rates appear during the June solstice in both the northern and southern hemispheres.
Some occurrence dependence is noticed at longitudes with a large magnetic declination, but the causal relationship between the equatorial density depletion and the density enhancement irregularities needs further investigation. Details of the global/seasonal/local-time distributions of the two different density irregularities are compared, and the causes of the plasma enhancement irregularity structures are discussed.

## Spring 2010

The weekly SWL meeting will be on Tuesdays from 10:30 am to 12:00 pm in room 301. The meeting will either be a journal club discussion, a faculty meeting, or a seminar. On the weeks that there is a Space Physics-related seminar hosted on Friday by the Physics department, we may cancel the SWL meeting.

### May 18th

Driving Currents Enclosed by Coronal Mass Ejections from the Sun

Indian Institute of Science Education and Research, Pune, India

It is well known that magnetic fields play a very important role in the solar corona. Appropriately enough, coronal magnetic field measurements are an area of intense research. However, somewhat strangely, there are not many measurements or realistic estimates of coronal currents. We present estimates of the currents enclosed by coronal mass ejections. We envisage a scenario where the J×B forces associated with these currents are primarily responsible for driving them. We compare these current estimates with those at other levels in the solar atmosphere (i.e., at the photosphere and chromosphere).

### May 7th

Holly R. Gilbert
NASA/GSFC-6700

Solar Surface Phenomena Associated with Coronal Mass Ejections

Solar coronal mass ejections (CMEs) drive some of the most dramatic space weather events that impact the terrestrial environment. These explosive, energetic events are often associated with phenomena on the solar surface, such as flares, prominence eruptions, and large global waves traveling across the low corona and chromosphere.
I will discuss the interesting relationship between prominences, which are relatively cool, dense material suspended in the hot corona, and CMEs, which are manifestations of the destabilization of the corona. Additionally, I will discuss the nature of globally propagating chromospheric “waves”, which are chromospheric imprints of fast, CME-generated hydromagnetic waves propagating in the solar corona. Understanding the underlying physics involved with surface phenomena associated with CMEs leads to a more complete picture of how these eruptions are initiated and subsequently evolve. ### April 27 Satellite Observations of Shuttle Plumes: Implications for Diffusion, Transport, and Polar Mesospheric Clouds Robert R. Meier George Mason University The satellite-borne Global Ultraviolet Imager (GUVI) on the TIMED satellite has produced more than 20 images of NASA Space Shuttle main engine plumes in the lower thermosphere. These reveal atomic hydrogen and, by inference, water vapor transport over hemispheric-scale distances with speeds much faster than expected from models of thermospheric wind motions. Furthermore, the hydrogen expands at rates that exceed the horizontal diffusion speed at nominal plume altitudes of 104-112 km. Some of the plumes are transported to polar regions, where they form Polar Mesospheric Clouds, thought by some to be a harbinger of global change in the upper atmosphere. I will present a number of GUVI images and discuss the problems they present to our understanding of the dynamics of the lower thermosphere. ### April 20 Eileen Chollet A Multi-Point Perspective on the Solar Wind at a Small Scale While a broad-brush picture of the heliosphere is typically available for space weather prediction, key parameters can change significantly over a million-kilometer scale.
In this presentation, I will discuss the importance of energetic particle predictions for space weather and delve into recent energetic particle transport modeling work that may substantially improve available predictions. Using joint data from the ACE, Wind, and STEREO spacecraft, I will explore the limitations on these predictions created by sharp gradients in the energetic particle intensity. I will present the physical properties of these gradients and their relationship to both large-scale solar wind structures and turbulence. ### April 13 Observation and modeling of the Earth-ionosphere cavity electromagnetic transverse resonance and variation of the D-region electron density near sunset – how lightning measurements contribute to improving the International Reference Ionosphere model – Fernando Simões NASA/GSFC, Heliophysics Science Division, Space Weather Laboratory In the frame of the African Monsoon Multidisciplinary Analyses campaign, measurements of very low frequency electric fields were performed onboard a stratospheric balloon launched on 7 August 2006 from Niamey, Niger. During the flight, numerous sferics associated with lightning from active convective cells a few hundred kilometers from the balloon were observed. Lightning data analysis shows the mean frequency of the transverse mode of the Earth-ionosphere cavity decreasing from ~2.4 to 2 kHz over a period of 1 h around sunset. The observed change of the transverse resonance near dusk can be reproduced fairly well by an electromagnetic wave propagation model that takes into account the D-region electron density variation predicted by the International Reference Ionosphere model.
In this seminar we discuss the significance of lightning data analysis for the investigation of ionospheric processes, namely the dynamics of the D-region, and how ground and balloon lightning measurements can be combined with incoherent scatter radar, radar networks, and rockets to investigate electron density variability in the low ionosphere. Rebekah M Evans ### March 26 Note special date, time, and location: Room 134, Innovation Hall Friday, 11:30am-12:30pm IBEX and tails Edward Roelof APL Abstract will be posted at http://www.physics.gmu.edu/wiki/Seminars:Spring_2010 ### March 23rd 10-11 am because of the CDS meeting. 5-minute presentations of what we are all working on. ### March 9 5-minute presentations of what we are all working on. Ken Dere Brian Curtis ### March 5 Note special date, time, and location: Room 134, Innovation Hall Friday, 11:30am-12:30pm Mercury’s Magnetosphere after MESSENGER’s Three Flybys James Slavin NASA Goddard Space Flight Center Abstract will be posted at http://www.physics.gmu.edu/wiki/Seminars:Spring_2010 ### January 26th Lower Atmosphere Sources of Thermosphere Ionosphere Structure and Variability Tim Fuller-Rowell CIRES University of Colorado and NOAA Space Weather Prediction Center The conventional sources of ionospheric structure and variability are changes in solar radiative output and geomagnetic activity, together with the subsequent response of the thermosphere and ionosphere system and the interaction between their components. In the past, the extreme events of storms and flares have captured much of the interest, but most of the time there is no flare or geomagnetic storm in progress, so it is prediction of the day-to-day changes that is required: e.g., will the ionospheric total electron content be higher or lower tomorrow? With the recent development of whole atmosphere models (WAM), some attention is now being directed towards quantifying the impact of wave forcing from the lower atmosphere.
Features such as the midnight temperature maximum can now be simulated realistically in WAM, and the physics behind the four-cell ionospheric and electrodynamic longitude structure is attracting significant interest. It has also been suggested that episodic lower atmosphere events, such as stratospheric sudden warmings (SSW), impose a strong signature on the ionosphere. A SSW can be simulated in WAM, but following a real event will require data assimilation in order to confirm a real physical connection between changes in the dynamics of the lower atmosphere and the thermosphere-ionosphere response. ### February 12 Note special date, time, and location: Room 134, Innovation Hall Friday, 11:30am-12:30pm Dana Longcope Physics Department Montana State University Abstract will be posted at http://www.physics.gmu.edu/wiki/Seminars:Spring_2010 ## Fall, 2009 ### November 18th Satellite based FUV observations and their applications Yongliang Zhang The Johns Hopkins University Applied Physics Laboratory Satellite-based far ultraviolet (FUV) observations provide a unique way to monitor conditions in the thermosphere and ionosphere and to monitor auroral particle precipitation. The major FUV emissions include Lyman alpha (121.6 nm), OI 130.4 nm, OI 135.6 nm, N2 LBHS (140.0-150.0 nm), and LBHL (165.0-180.0 nm). We will discuss how the FUV data from TIMED/GUVI can be used to retrieve products for space weather studies, such as energy flux and mean energy of precipitating electrons, thermospheric neutral composition (O/N2 ratio), neutral density profiles, ionospheric density profiles, equatorial plasma bubbles, and solar EUV flux. The FUV measurements greatly benefit near-real-time space weather monitoring on a global scale and provide inputs for models such as IRI, TIMEGCM, etc. ### November 17th (Tuesday) 11 AM – 12 PM.
Research 1 room 301 Mikhail Sitnov APL Empirical reconstruction of CME- and CIR-driven magnetic storms A significant advance in the modeling of magnetic storms has become possible due to a dramatic increase in the number of in-situ measurements and a new generation of empirical geomagnetic field models that abandon the main limitation of the past models, their pre-defined modular structure. The new-generation model TS07D (http://geomag_field.jhuapl.edu/model/) employs an expansion of the magnetic field of equatorial currents into a series of basis functions, making the current distribution entirely determined by data. The evolution in time is reconstructed by fitting the model field to a subset of the 1995-2005 database sampled when the average solar wind electric field vBz, the Sym-H index, and its time derivative were close to their values at the considered moment. To demonstrate the model performance we consider two events: the April 21-23, 2001 storm, caused by a coronal mass ejection, and the March 8-11, 2008 storm, driven by a corotating interaction region. In the latter case the results of an out-of-sample validation using five THEMIS probes are shown. The results are also compared with geosynchronous, IMAGE, and Iridium data. ### November 11th Larry Kepko NASA/GSFC Flow, aurora and Pi2 associations observed by THEMIS It has been known for decades that auroral substorm onset occurs on (or at least near) the most equatorward auroral arc, which is thought to map to the near-geosynchronous region. The lack of auroral signatures poleward of this arc prior to onset has been a major criticism of flow-burst driven models of substorm onset. The combined THEMIS five-spacecraft in-situ and ground array measurements provide an unprecedented opportunity to examine the causal relationship between midtail plasma flows, aurora, and ground magnetic signatures. I first present an event from 2008 using multi-spectral all-sky imager data from Gillam and in-situ data from THEMIS.
The multispectral data indicate an equatorward-moving auroral form prior to substorm onset. When this form reaches the most equatorward arc, the arc brightens and an auroral substorm begins. The THEMIS data show fast Earthward flows prior to onset as well. I suggest that the results strongly support flow-burst driven models of magnetospheric activity. I discuss further the association of flow bursts and Pi2 pulsations, and discuss the possibility of using Pi2 waveforms to infer midtail reconnection dynamics. ### November 2nd Note special day, time, and location: Monday in room 306 of Science and Tech I at 1pm. Combining Observations and Simulations to Advance our Understanding of Solar Eruptions Noé Lugaz (Institute for Astronomy – University of Hawaii) As solar cycle 24 slowly begins, thanks to the ever-expanding fleet of satellites observing the Sun and the heliosphere, immense progress can be expected in the forecasting and understanding of space weather, in particular regarding the initiation and propagation of coronal mass ejections (CMEs). To make full use of the new observational capabilities, numerical simulations are often required, in particular to separate instrumental effects from the observed physical phenomena. This is particularly true for line-of-sight observations, such as coronagraphic and heliospheric images, as well as for in-situ measurements of complex series of CMEs. In this talk, I will discuss recent progress in determining CME physical properties from white-light images, both in the corona (LASCO) and in the heliosphere (SECCHI), with the help of numerical magnetohydrodynamic models. I will also discuss new geometrical models that can give information about the azimuthal properties of CMEs from stereoscopic heliospheric observations and that could greatly improve the forecasting of CME hit/miss at Earth.
Finally, I will also explore how MHD models can help explain in-situ measurements at 1 AU, from isolated and multiple CMEs. ### October 28th Christopher J. Mertens NASA Langley Research Center, Hampton, Virginia, USA Models of Atmospheric Response to Low- and High-Energy Particle Precipitation Enhanced low-energy particle precipitation during solar-geomagnetic storms increases the ion concentrations in the ionosphere. The state of the ionospheric E-region, in particular, is governed by ion-neutral chemistry. During geomagnetic storms, auroral particle precipitation increases the ionization of the neutral atmosphere, producing vibrationally excited NO+ (i.e., NO+(v)) through fast exothermic ion-neutral chemical reactions; NO+(v) emits in the 4.3 um spectral region. Since NO+ is the terminal E-region ion, by charge neutrality, NO+(v) 4.3 um emission is an excellent proxy for characterizing storm-time enhancements to the E-region electron densities. Auroral nighttime infrared emission observed by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite is used to develop an empirical model of geomagnetic storm enhancements to E-region electron densities. The empirical model is called STORM-E and will be incorporated into the International Reference Ionosphere (IRI). In the first half of the talk, STORM-E development is discussed and results during the Halloween 2003 storm period are presented. The second half of the talk is focused on radiation exposure from high-energy particle precipitation in the atmosphere. Galactic cosmic rays (GCR) and solar energetic particles (SEP) are the primary sources of human exposure to high linear energy transfer (LET) radiation in the atmosphere. A prototype operational nowcast model of air-crew radiation exposure is currently under development.
The model predicts air-crew radiation exposure levels from both GCR and the SEP events that may accompany solar storms. The new air-crew radiation exposure model is called the Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) model. NAIRAS will provide global, data-driven, real-time exposure predictions of biologically harmful radiation at aviation altitudes. Observations are utilized from the ground (neutron monitors), from the atmosphere (the NCEP Global Forecast System), and from space (NASA/ACE and NOAA/GOES). Radiation exposure rates are calculated using the NASA physics-based HZETRN (High Charge (Z) and Energy TRaNsport) code. An overview of the NAIRAS model is given and results during the Halloween 2003 storms are presented. ### October 6th (Note that this is a Tuesday) Philip Judge HAO/NCAR Look what’s under the magnetic carpet! Solar physics’ key to open the corona: the chromosphere. I will attempt to show why we can no longer just “brush the chromosphere under the carpet”, magnetic or otherwise, and ignore its importance in either a solar or plasma physics context. I hope to convince you that the chromosphere deserves to be studied by more than an interesting group of souls who have, like myself, long since lost their way and become hopelessly entangled in one of the most awkward parts of the Sun. ### September 30th Antti Pulkkinen Automatic determination of the conic coronal mass ejection model parameters NASA/GSFC Characterization of the three-dimensional structure of solar transients using incomplete plane-of-sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of a conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced.
The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis. ### September 9th Modeling Solar Coronal Flux Tubes in 2-D with Non-Isotropic Conduction Art Poland George Mason University ## Spring, 2009 ### May 12 “Temporal and Spatial Distribution of Metal Species in the Upper Atmosphere” John Correira Catholic University of America ABSTRACT: Every day the Earth is bombarded by approximately 100 tons of meteoric material. Most of this material is completely ablated on atmospheric entry, resulting in a layer of atomic metals in the upper atmosphere between 70 km and 150 km. These neutral atoms are ionized by solar radiation and charge exchange with ambient ions. UV radiances from the Global Ozone Monitoring Experiment (GOME) spectrometer on the ERS-2 satellite are used to determine long-term dayside temporal and spatial variations of the total vertical column density below 795 km of the meteoric metal species Mg I and Mg II in the upper atmosphere. A retrieval algorithm developed to determine magnesium column densities was applied to all available data from the years 1996-2001.
Long-term results show that the middle-latitude dayside Mg II column peaks in total vertical content during the summer, while neutral Mg demonstrates a much more subtle summer maximum. An analysis of spatial variations shows that the geospatial distributions are patchy, with local regions of increased column density. To study short-term variations and the role of meteor showers, a time-dependent mass flux rate is calculated using published estimates of meteor stream mass densities and activity profiles. There appears to be little correlation between modeled meteor shower mass flux rates and changes in the observed Mg I and Mg II metal column densities. ### May 5 “The Lunar Data Project – Resurrecting Data From the Apollo Program” David Williams NASA, Goddard Space Flight Center ABSTRACT: Every NASA mission must have a detailed plan to archive the scientific data collected in a standard format before it ever leaves the launch pad, but this was not always the case. Back in the Apollo era, there was no systematic requirement to archive the data at all, and no guidelines as to what constituted a standard archive product or what formats were acceptable. As a result, the Apollo data received at the National Space Science Data Center, NASA’s archive located at Goddard Space Flight Center, were in many cases incomplete, not well documented, and on different media in various formats. The current collection of Apollo data at the NSSDC is housed on microfilm, microfiche, hard-copy documents, and magnetic tape in UNIVAC, CDC 6600, EBCDIC, and other obsolete formats. With the recent interest in returning to the Moon, we have undertaken the Lunar Data Project to restore these old Apollo data sets, some from instruments which operated on the Moon for 7 years, into standard digital formats for online distribution, as well as to look for data that were never archived with NSSDC originally.
We have had some success on both fronts, but as we get farther from the Apollo days, the data and the people with direct experience are becoming harder to find. We will discuss the history of the Apollo science program and our efforts to resurrect these old data for use in the current lunar exploration program. ### April 28 “Probing Dark Matter Properties through Dynamics of the Galaxy and the Local Group” Ed Shaya University of Maryland ABSTRACT: Some key parameters of Dark Matter (DM) can most accurately be measured in the very nearby universe because DM dominates the mass in the outer Milky Way (MW) and in the other galaxies of the Local Group. Soon, the distribution of DM will be quantified by study of dynamical processes observable in fine detail within these entities. Precise measurements of 3-D velocities for stars, coherent star streams, and stars in satellite stellar systems out to the edge of the Galaxy can reveal the detailed shape of the dark matter halo as well as the total mass of the Galaxy. Similarly, 3-D velocities of galaxies in the Local Group can reveal the masses of individual dominant galaxies, the mass of the Local Group in total, and the density of the more smoothly distributed warm and hot DM. NASA’s Space Interferometry Mission (SIM) will make measurements at the level of 2-3 microarcsec/yr per star and will provide us with the 2nd and 3rd dimensions of the velocity vectors of stars as faint as 20th mag. The specifics of these mass distributions, total mass-to-light ratios, clumpiness of the Galaxy potential, flatness of the halo, and cuspiness of galaxy cores provide the mass and nature of the dark matter particle(s), and test the standard model of cosmology on small scales. ### April 21 “Whither the thermosphere? Climate change at the edge of space” John T. Emmert Naval Research Laboratory ABSTRACT: The Earth’s thermosphere is the hot, thin, and partially ionized part of the atmosphere situated between altitudes of 90 and 800 km. 
Its high temperature is primarily due to absorption of solar extreme ultraviolet (EUV) radiation, which is balanced by radiative infrared cooling by carbon dioxide and other agents. The thermosphere exerts significant drag on orbiting spacecraft, which causes their orbits to decay at a rate proportional to the mass density of the ambient gas. Satellite tracking data thereby provide an extensive historical record of the thermosphere back to the dawn of the space age. Recent studies indicate that, after taking into account the strong influence of solar EUV variations, the thermosphere is slowly cooling and contracting, a trend that has important implications for orbit prediction and orbital debris management. In this presentation we review the structure and physics of the thermosphere and briefly describe how density is extracted from orbital tracking data. We then examine trends in thermospheric density, as well as trends in other upper atmospheric properties, and discuss their interpretation. ### April 14 “Connecting Stars (their planets), Galaxies, and the Universe in the Decade of Astrometry” Rob Olling University of Maryland ABSTRACT: In the coming era of precision astrophysics, new telescopes on the ground and in space will provide many, many terabytes of highly precise photometric and astrometric (positional) measurements. The job of astrophysicists is to turn those precise measurements into “highly accurate facts,” i.e. inferences with small systematic errors. The accuracy of many astronomical inferences has been improving steadily over the last few decades: from factors of several to tens of percent. In the near-field, GAIA and SIM-Lite (hopefully) will push the accuracies to the sub-percent level, while the Planck mission will measure the Cosmic Microwave Background with similar accuracy. Many experiments aim to achieve similar accuracies in the intervening parts of the Universe.
I will briefly touch on several subjects: – briefly introduce the proposed SIM-Lite mission – how to use millimag (0.1%) photometry to find transiting extra-solar planets with GAIA-like spacecraft and with my own LEAVITT design – how to find solar-system analogs with astrometry – how to perform cosmology in our own backyard with double stars – how to obtain 1% geometric distances for galaxies in the Local Group (H_0) ### April 7 “Numerical Simulation of Interplanetary Coronal Mass Ejections for Space Weather Prediction” Dusan Odstrcil ABSTRACT: Coronal mass ejections (CMEs) have been identified as a prime causal link between solar activity and large, non-recurrent geomagnetic storms. Modeling of the origin of CMEs is still in the research phase, and it is not expected that real events can be routinely simulated in the near future. Therefore, we have developed an intermediate modeling system which uses fitted coronagraph observations, specifies 3D ejecta, and drives the 3D numerical magnetohydrodynamic code ENLIL, which uses the WSA coronal maps for the background solar wind. We simulated a number of heliospheric events selected by community campaigns, which enabled us to analyze the match between different parameters predicted by the model and observed by spacecraft. Attention is given to the development of tools facilitating prediction of solar wind parameters at planets and spacecraft. ### March 24th “The Solar Dynamics Observatory and the Wait for Solar Cycle 24” W. Dean Pesnell Goddard Space Flight Center ABSTRACT: The Sun hiccups and satellites die. That is what NASA’s Living With a Star Program is all about. The Solar Dynamics Observatory (SDO) is the first Space Weather Mission in LWS. SDO’s main goal is to understand, driving towards a predictive capability, those solar variations that influence life on Earth and humanity’s technological systems.
The SDO science investigations will determine how the Sun’s magnetic field is generated and structured, and how this stored magnetic energy is released into the heliosphere and geospace as the solar wind, energetic particles, and variations in the solar irradiance. The SDO mission consists of three scientific investigations (AIA, EVE, and HMI), a spacecraft bus, and a dedicated Ka-band ground station to handle the 150 Mbps data flow. The science teams at LMSAL, LASP, and Stanford are responsible for processing, analyzing, distributing, and archiving the science data. We will talk about the building of SDO and the data and science it will provide to NASA. The late start of Solar Cycle 24 will allow SDO to measure a very interesting solar minimum period. In particular, helioseismic studies of the solar interior will benefit from the low activity that should still be present at the launch of SDO later this year. ### March 17th “Chasing Lightning: Sferics, Tweeks and Whistlers” Phillip A. Webb (GSFC and UMBC/GEST) Kathleen Franzen (INSPIRE) Abstract: The visible flash that we see from lightning is only part of the story. Lightning generates electromagnetic emissions at other frequencies that can propagate hundreds or thousands of kilometers across the surface of the Earth in the form of special signals called “tweeks” and “sferics”. Some of these emissions can even travel tens of thousands of kilometers out into space before returning to the Earth as “whistlers”. The INSPIRE Project, Inc. is a non-profit scientific and educational corporation whose original mission was to bring the excitement of observing these very low frequency (VLF) natural radio emissions to high school students and interested individuals. Since 1989, INSPIRE has provided specially designed VLF radio receiver kits to over 2,600 participants around the world. A number of these participants use the VLF data they collect in very creative projects that include fiction, music and art exhibitions.
This presentation will provide an overview of lightning and the resulting VLF emissions, the INSPIRE program and the VLF receiver, and discuss experiences gained from using the INSPIRE VLF kits as the basis of an undergraduate course that was taught for the first time in the Fall 2008 semester at the University of Maryland Baltimore County (UMBC). ### February 24th “A reinterpretation of the energy balance in EUV loops due to new results from Hinode-EIS” Abstract: New observations made by the Hinode EUV Imaging Spectrometer have revealed persistent redshifts in solar active region loops in the temperature range 10^5.6 K ≤ T ≤ 10^6.4 K. The presence of redshifts, interpreted as bulk downflows, indicates that the loops are undergoing radiative cooling rather than continuous heating. This has significant consequences for current ideas regarding the physics of the ubiquitous 1 MK loops observed by instruments such as TRACE and SoHO-EIT. A new interpretation of the energy balance in such loops is presented, with model results that are found to agree well with the observed redshifts. ### February 10th AGN jet interaction with ICM plasma: Kinetic effects and Thermal conduction Fathalah Alouani Bibi (Physics and Astronomy Department, GMU) Abstract: I will talk about some of my work prior to joining George Mason University. In particular, I will be talking about the importance of kinetic effects in electron transport and thermal conduction during the interaction of an AGN jet with inhomogeneous intra-cluster plasma. I will show some of the dynamics of a cooling-flow cluster and the role of AGN jets as a main heating source.
I will also discuss the limitations of the classical Spitzer theory in cases of steep temperature gradients and in non-Maxwellian plasmas, and give some alternatives/corrections. ### February 3rd Forward Modeling of Coronal Mass Ejections using STEREO-SECCHI Data Abstract: I will present a forward modeling technique to reconstruct coronal mass ejections observed with the Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI) instrument package aboard the Solar Terrestrial Relations Observatory (STEREO). First, I will review the different techniques that can be used to reconstruct the 3D electron density of coronal structures such as CMEs. I will then describe in more detail the forward modeling method, which consists of fitting a geometric model of a flux rope to the observed images. Finally, I will present a survey of more than 30 CMEs studied with this technique. ## Fall, 2008 ### November 18th Flux Rope Instabilities at the Onset of CMEs Bernhard Kliem Mullard Space Science Lab, University College London, UK and Institute of Physics, University of Potsdam, Germany Images of erupting prominences typically suggest the magnetic topology of a single line-tied flux rope. Many prominence eruptions and CMEs begin with an approximately exponential rise, suggesting that an instability of a flux rope may occur at the onset of the eruptions. I will present numerical simulations of two relevant instabilities, the well-known helical kink instability and the torus instability, using the force-free line-tied flux rope equilibrium of Titov and Demoulin as the initial condition. The properties of these instabilities indicate which parameters of the initial configuration control whether the eruption stays confined or becomes ejective, evolves into a fast or a slow CME, and shows strong or weak writhing. Exponential as well as power-law rise profiles can be modeled. Supporting quantitative comparison of the simulations with several well-observed eruptions will be included.
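The distinction drawn in the abstract above between exponential and power-law rise profiles can be illustrated with a simple least-squares fit. The sketch below (Python, using made-up synthetic height-time data; none of the numbers come from the talk, and the function names are ours) fits both forms to a track and compares their residuals:

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate rise profiles for the early phase of an eruption.
def exp_rise(t, h0, tau):
    """Exponential rise: h(t) = h0 * exp(t / tau)."""
    return h0 * np.exp(t / tau)

def power_rise(t, h0, a, n):
    """Power-law rise: h(t) = h0 + a * t**n."""
    return h0 + a * t**n

# Synthetic height-time track (solar radii vs. minutes), exponential by construction.
t = np.linspace(0.0, 30.0, 60)
rng = np.random.default_rng(0)
h_obs = exp_rise(t, 1.1, 12.0) + rng.normal(0.0, 0.01, t.size)

# Fit both profiles and compare the summed squared residuals.
p_exp, _ = curve_fit(exp_rise, t, h_obs, p0=[1.0, 10.0])
p_pow, _ = curve_fit(power_rise, t, h_obs, p0=[1.0, 0.01, 2.0],
                     bounds=([0.0, 0.0, 0.5], [10.0, 10.0, 10.0]))

res_exp = np.sum((h_obs - exp_rise(t, *p_exp)) ** 2)
res_pow = np.sum((h_obs - power_rise(t, *p_pow)) ** 2)
print(f"exponential fit: h0={p_exp[0]:.2f} Rs, tau={p_exp[1]:.1f} min, residual={res_exp:.4f}")
print(f"power-law fit residual={res_pow:.4f}")
```

On real observed height-time tracks, the same comparison would be restricted to the early rise phase, since neither simple form holds over the full eruption.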
### October 21st Interstellar neutrals in the heliospheric interface Univ of Moscow Abstract The heliospheric interface is the region where the solar wind meets the local interstellar cloud. The cloud is partly ionized, and the neutral component of the cloud penetrates into the heliosphere, where it can be observed. New observational information, such as the crossing of the heliospheric termination shock (TS) by both Voyagers, new SOHO/SWAN and Ulysses data, and the maps of heliospheric ENA spectra expected from the Interstellar Boundary Explorer (IBEX) mission after its launch on October 19, 2008, creates new requirements and new challenges for modelling the heliospheric interface. Modern kinetic-gasdynamic models of the SW/LIC interaction take into account the multi-component nature of both the solar wind and the interstellar medium. New results that include dynamic effects of the interstellar H atoms, the 11-year and latitudinal variations of the solar wind, and interstellar and heliospheric magnetic fields will be discussed. An analysis of the constraints on the models which follow from the TS crossing by Voyagers and other observational data will also be given. Theoretical predictions of the ENA fluxes that will be measured by IBEX will be provided. ### September 30th Physics of Solar and Stellar Coronal Heating GMU at NASA Goddard Space Flight Center Since the discovery of the enigmatically hot layer in the Sun, the solar corona, the problem of its heating has remained elusive. Over the last 53 years we have learned that the solar corona is not just hot “quiet” plasma, but a highly “emotional” place for the most violent eruptions in the solar system, occurring on scales from hundreds to hundreds of thousands of kilometers. The recent solar missions, SOHO, TRACE and Hinode, provided crucial clues for resolving the long-standing problem of solar coronal heating.
Meanwhile the space missions HST, Chandra, FUSE, and XMM-Newton have confirmed that hot and X-ray bright coronae exist not only on the Sun, but on stars ranging from young pre-main sequence stars to evolved giants. These new data raise one fundamental question in astrophysics: can the solar analogy be directly applied to other stars, and how do the underlying physical processes differ? In this review I will discuss recent observations of the solar and stellar coronae and the physical mechanisms involved in their heating. ### September 23rd Ulysses Observations of Periodic Structures in the Solar Wind Velocity Christina Henderson Christina Henderson will present a surface-level talk about her research. The talk is aimed at an undergraduate level with no prior knowledge of the Sun/Earth system. The talk is open to discussion. George Mason University As we bask in the warm glow of a summer day, we become intimately familiar with the constant output of solar photons. Thanks to deflection by the Earth’s magnetic field (called the magnetosphere), we are less intimately familiar with the constant outflow of solar plasma known as the solar wind. Spacecraft sitting outside the magnetosphere, however, can constantly measure properties of the solar wind such as the bulk velocity, magnetic field, and density. Ulysses is one such spacecraft. In this research, we use data gathered by Ulysses in its polar solar orbit over a span of 18 years from 1990 to 2008. We compute the power spectrum of the velocity, searching for periods in the range of 5 to 40 days, as these have the largest impact on predicting processes, like aurora, that occur in the Earth’s magnetosphere. We develop methods to see the periods as they change in time by computing many power spectra while stepping through the data; we call this the spectrogram. We are able to compare the spectrogram with calculations of the fundamental period and harmonics of an idealized sawtooth-type solar wind.
We conclude that many periods in the spectrogram of the Ulysses data are due to real physical processes and not artifacts of the numerical calculations, such as harmonics. 3-D solar wind simulation results can be compared with Ulysses data; we hope to learn what physical processes could be missing in the simulations. ### September 16th SECCHI View of CME Dynamics: Observations and Theory Valbona Kunkel George Mason University The propagation of CMEs through the field of view of LASCO (2–30 Rs) has been extensively studied in the past 10 years. Based on theory-data comparison, it has been established that most, if not all, CMEs can be understood as erupting magnetic flux ropes and that the observed dynamics in this regime can be correctly described by the erupting flux rope model (Chen 1996). Until STEREO became available, CME dynamics were not observed and the EFR model had not been directly compared with data beyond 30 Rs. In this talk, I will discuss new SECCHI observations of CMEs and their dynamics and extend the modeling of CME propagation to the HI1 field of view (out to about 100 Rs projected). Four CMEs are discussed. It is shown that the erupting flux rope model is able to fit the observed height-time and velocity-time data throughout the EUVI-COR1-COR2-HI1 field of view. This suggests that the model correctly captures the main acceleration phase and the residual acceleration phase of CME dynamics, i.e., the forces acting on CMEs. It is found that significantly larger values of the drag coefficient in the model than previously used are required to fit both the COR1-COR2 data and the HI1 data. This means that the extended field of view imposes stronger constraints on model parameters than previously thought, such as the drag coefficient and therefore the magnetic energy required to power the eruption and subsequent propagation.
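The role of the drag coefficient mentioned above can be illustrated with a generic quadratic-drag propagation sketch. This is not the erupting flux rope model itself (which includes the driving forces); it only shows how a drag term couples the CME speed to the ambient solar wind, and every parameter value below is invented for illustration.

```python
# Generic quadratic-drag sketch: beyond the main acceleration phase, a CME
# with speed v relaxes toward the solar wind speed v_sw as
#   dv/dt = -gamma * (v - v_sw) * |v - v_sw|,
# so a larger drag coefficient gamma forces faster convergence to v_sw.
import numpy as np

def propagate(v0_km_s, v_sw=400.0, gamma=1e-7, dt=60.0, t_end=48 * 3600.0):
    """Euler-integrate the drag equation; returns times (s), heights (km), speeds (km/s)."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    v = np.empty(n)
    h = np.empty(n)
    v[0] = v0_km_s
    h[0] = 3.0 * 6.96e5  # start near 3 solar radii (in km)
    for i in range(1, n):
        dv = v[i - 1] - v_sw
        v[i] = v[i - 1] - gamma * dv * abs(dv) * dt
        h[i] = h[i - 1] + v[i] * dt
    return t, h, v

t, h, v = propagate(1200.0)                  # fast CME decelerating toward v_sw
t2, h2, v2 = propagate(1200.0, gamma=5e-7)   # stronger drag: converges sooner
```

Fitting the drag coefficient against both the COR and HI1 portions of a height-time curve, as described in the abstract, constrains gamma much more tightly than the coronagraph data alone.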
### September 9th An analytical model unscrambling the inner state of CMEs based on scale measurements in coronagraphs Yuming WANG George Mason University An analytical model is proposed to unscramble two physical parameters of flux-rope CMEs, the polytropic index $\Gamma$ and the non-force-free index $I_{nff}$, defined by $|f_{em}/f_{th}|$ where $f_{em}$ and $f_{th}$ are the Lorentz force and thermal pressure force respectively, based only on scale measurements in coronagraphs. By applying this model to the 2007 October 8 CME, we find that (1) $\Gamma$ of the CME plasma decreased quickly from 1.35 at the beginning to 1.05 before it went beyond 15 Rs and then continuously approached 1.0, and (2) $I_{nff}$ kept decreasing from nearly 1.0 to below 0.1 by the time the CME leading edge arrived at about 70 Rs. The first result implies that the plasma in this CME was heated throughout interplanetary space, and the CME underwent a nearly isothermal process. The second result suggests that the CME was not force-free in the early phase, but tended to approach the force-free state as it ran away from the Sun. Besides, the model predicts that, for an initially non-force-free flux rope, $\Gamma$ will be less than 4/3 if it reaches the force-free state at infinite distance, and in particular, $\Gamma=1$ and the density at the flux rope axis must be larger than that at the boundary if the flux rope finally reaches a steadily propagating and expanding state. We expect that this model has potential application to other research where a flux rope is employed. ### August 28th Multi-dimensional representation of the ionosphere from GNSS, altimetry and COSMIC M. Schmidt (1), C. Zeilhofer (1), D. Bilitza (2), C.K. Shum (3), J. Zhang (1), L.-C.
Tsai (4) (1) Deutsches Geodaetisches Forschungsinstitut (DGFI), Alfons-Goppel-Strasse 11, 80539 Muenchen, Germany (2) Heliospheric Physics Laboratory/GMU, NASA Goddard Space Flight Center, Greenbelt, Maryland, USA (3) Geodetic Science, School of Earth Sciences, The Ohio State University, 275 Mendenhall, 125 S Oval Mall, Columbus OH 43210, USA (4) Center for Space and Remote Sensing Research, National Central University, Taiwan During the last decade various satellite missions have turned out to be promising tools for monitoring ionospheric parameters. Dual-frequency GNSS observations, e.g., can be used to determine the slant total electron content (STEC), i.e. the integral of the electron density along the ray path of the signal between the transmitting satellite and a receiver. Furthermore, dual-frequency altimetry satellites allow measuring the vertical total electron content (VTEC). In this contribution we present a multi-dimensional ionospheric model calculated from GNSS, altimetry and COSMIC measurements. To be more specific, our model consists of a given reference part, computed from the International Reference Ionosphere (IRI), and an unknown correction term. Since the latter is represented as a series expansion in terms of multi-dimensional base functions, e.g., constructed from polynomial B-splines, trigonometric B-splines or spherical harmonics, our approach can be applied to global, regional and local data sets. The unknown series coefficients are calculable by applying parameter estimation procedures. Since the input data are heterogeneously sampled in space and in time due to the specific orbit and instrumental characteristics, finer structures of the target function are modelable only in regions with a sufficient number of observation sites.
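The reference-plus-correction structure of the model can be sketched in one dimension. The snippet below is an editorial toy, not the authors' implementation: the "reference" and "observations" are synthetic stand-ins for IRI values and GNSS/altimetry/COSMIC-derived TEC, and the real model uses multi-dimensional tensor-product bases rather than a single latitude coordinate.

```python
# 1-D toy of the reference + B-spline-correction approach: estimate the
# coefficients of a B-spline series for the correction term by least squares.
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(1)
lat = np.sort(rng.uniform(-60.0, 60.0, 200))           # observation latitudes
reference = 20.0 + 10.0 * np.cos(np.radians(lat))      # toy IRI-like VTEC (TECU)
truth_corr = 3.0 * np.sin(np.radians(3.0 * lat))       # unknown correction
observed = reference + truth_corr + 0.3 * rng.standard_normal(lat.size)

# Cubic B-spline knot vector; knot density controls how fine a structure can
# be resolved (cf. the data-coverage caveat in the abstract). Interior knots
# are placed at data quantiles so every basis function is supported by data.
k = 3
interior = np.quantile(lat, np.linspace(0.1, 0.9, 9))
knots = np.r_[[lat[0]] * (k + 1), interior, [lat[-1]] * (k + 1)]

# Least-squares estimate of the series coefficients for (observed - reference).
spline = make_lsq_spline(lat, observed - reference, knots, k=k)

model = reference + spline(lat)    # reference part + estimated correction term
rms = np.sqrt(np.mean((model - observed) ** 2))
```

The same structure carries over to the multi-dimensional case: the design matrix is built from tensor products of 1-D bases, and the coefficients are again obtained by (regularized) parameter estimation.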
## Summer, 2008 ### Tuesday August 5th at 11 am in room 301 Space Weather research in South Africa Lee-Anne McKinnell ## Spring, 2008 ### Friday May 23rd at 11 am in room 302 Note special day, time, and room Theoretical and Observational Constraints on Accretion Flows onto Black Holes: The Case of sub-Keplerian Motion Sandip K. Chakrabarti (1) Senior Professor, S.N. Bose National Centre for Basic Sciences Theoretically, matter enters into a black hole with the velocity of light, and thus every flow, independent of its past history, must be supersonic on the horizon. Not surprisingly, the transonic flow solutions respect such a boundary condition, even as they allow the very exciting possibility that flows should pass through shocks and slow the matter down at a few Schwarzschild radii before the horizon. It is thus no surprise that ALL the observations (ranging from spectral state transitions, quasi-periodic oscillations, jets and outflows, and non-thermal spectra from black holes) agree with the fact that such a Centrifugal Pressure Dominated Boundary Layer (CENBOL) should exist. There are several post-facto cartoon models in the literature which apparently have no knowledge of such beautiful behaviour of the flow and yet surprisingly come up with cartoon diagrams having the same behaviour. In our advective disk paradigm, jets are produced when the CENBOL is present. Thus it is no surprise that some post-facto models include the base of the jet (which is nothing but the CENBOL in our picture) in explaining the outgoing spectrum from the disk surface, thereby creating a confusion that X-rays from the jets are also serious contestants. We show that every such observation has been a pre-facto prediction of our paradigm of the two-component advective flow (TCAF). These are re-discovered by many in the literature under new names, pictures and models. What is more important, however, is that the theoretical solutions and the cartoons from fitting observational data are finally converging.
This paves the way for further progress in the subject. (1) Also, In Charge, Academic Affairs, Indian Centre for Space Physics, Kolkata ### May 7th Robert Duffin George Mason University Type III-L Solar Radio Bursts and their Correlations with Solar Energetic Proton Events Abstract Type III-L bursts are a sub-class of type III solar radio bursts that tend to occur after the impulsive phase of flares, are longer in duration than individual type IIIs, and tend to be low-frequency. It has been proposed that type III-Ls are connected to solar energetic proton (SEP) events. Most work on this connection has started from samples of SEP events, but if type III-Ls are to be useful for the prediction of SEP events, then we need to understand the properties of samples of type III-L bursts. This talk reports preliminary results from such a study. An operating definition based on previous work is used to identify type III-L events amongst M- and X-class flares from 2001, and then correlations with other properties of these events are investigated, including association with SEP events. If there is a correlation with SEP events, one important question that these bursts allow us to address is whether acceleration takes place at an associated CME, or closer to the flare site well below the CME. ### April 30th Joseph Lazio Naval Research Laboratory The Dark Ages Lunar Interferometer (DALI) The Dark Ages represent the last frontier in cosmology, the era between the genesis of the cosmic microwave background (CMB) at recombination and the formation of the first stars. During the Dark Ages, when the Universe was unlit by any star, the only detectable signal is likely to be that from neutral hydrogen (HI), which will appear in absorption against the CMB.
The HI absorption represents potentially the richest of all data sets in cosmology: not only is the underlying physics relatively simple, so that the HI absorption can be used to constrain fundamental cosmological parameters in a manner similar to that of CMB observations, but the spectral nature of the signal allows the evolution of the Universe as a function of redshift (z) to be followed. The HI absorption occurs in dark matter-dominated overdensities, locations that will later become the birthplaces of the first stars, so tracing this evolution will provide crucial insights into the properties of dark matter and potentially reveal aspects of cosmic inflation. Moreover, given the relatively simple physics (the universal expansion, Compton scattering between CMB photons and residual electrons, and gravity), any deviation from the expected evolution would be a “clean” signature of fundamentally new physics. The Dark Ages Lunar Interferometer (DALI) is a mission proposed for study to NASA for a telescope located on the far side of the Moon, the only site in the solar system shielded from human-generated interference and, at night, from solar radio emissions. The DALI array will observe at 3–30 m wavelengths (10–100 MHz; redshifts 15 ≤ z ≤ 150), and the DALI baseline concept builds on ground-based telescopes operating at similar wavelengths, e.g., the Long Wavelength Array (LWA) and Murchison Widefield Array (MWA). Specifically, the fundamental collecting element will be dipoles. The dipoles will be grouped into “stations,” deployed via rovers over an area of approximately 50 km in diameter to obtain the requisite angular resolution. The desired three-dimensional imaging requires approximately 1000 stations, each containing 100 dipoles (i.e., ~ 10^5 dipoles); alternate processing approaches may produce useful results with significantly fewer dipoles (by a factor of ~ 3–10).
Each station would be deployed by one rover, which would also serve as a “transmission hub” for sending the signals for correlation to a central processing facility. After sending the correlator output to Earth, analysis would then proceed via standard methods being developed for ground-based arrays. ### March 26th Juan C Luna George Mason University The role of bulk and thermal Comptonization in producing the time lags observed in X-ray pulsars Fourier analysis of X-ray pulsar data reveals the presence of time lags between hard and soft channels in millisecond pulsars. There is currently no consistent theoretical explanation for this effect based on a fundamental physical model for pulsar sources. In the proposed research, a new theoretical model is developed from first principles based on the bulk and thermal Comptonization occurring in the gas inside the accretion column above one (or both) of the magnetic poles on a rotating neutron star. The model utilizes a combination of Fourier and Laplace transformation in order to obtain quantitative predictions for the time lags. This approach will be used to make predictions about the possible presence of time lags in the spectra of bright pulsars such as Her X-1. Theoretical interpretation of the time lags can provide detailed information about the size and properties of the scattering plasma and also the spatial density profile of the scattering electrons. ### March 19th V. Truhlik Institute of Atmospheric Physics, Prague, Czech Republic Studying of solar activity variation of the electron temperature in the topside ionosphere Electron temperature (Te) in the topside ionosphere and plasmasphere is an important parameter because thermal electrons play a key role in the energy balance of these regions. The IRI (International Reference Ionosphere) model includes an empirical representation of Te in the topside ionosphere depending on altitude, latitude, local time, and season.
But due to a lack of data and sometimes conflicting measurements, the solar activity variation of Te has not been reliably modeled so far. We have made good progress in modeling the Te behavior with the help of a large database of satellite electron temperature measurements and of incoherent scatter radar observations, and with the assistance of simulations with the theoretical FLIP model. The presentation will focus in particular on (1) comparison of FLIP model calculations with data, (2) latitudinal and altitudinal variation of Te and the heat flux, and (3) a discussion of the prevailing cooling and heating terms influencing the Te balance and causing its variation with solar activity. We will also discuss the development of a new global Te model with the Te solar activity variation as a correction term, which can help to improve the current Te model in IRI. ### February 20th Phil Richards George Mason University Controversies in Solar EUV Irradiance and Ionospheric Photoelectron Fluxes For many years, there has been controversy over the magnitude of both the solar EUV (0-100 nm) irradiance and the 0-1 keV photoelectron flux. The solar EUV irradiance is the primary driver of the energetics and dynamics of the Earth’s upper atmosphere above 100 km. There are uncertainties in theoretical photoelectron fluxes because of uncertainties in cross sections and solar EUV irradiance. Accurate solar EUV irradiance measurements are difficult to make because they must be made at high altitudes and because the energetic photons degrade the instruments that measure them. The ionization of oxygen and nitrogen in the upper atmosphere produces energetic photoelectrons as well as ions. Photoelectrons take approximately half the incident photon energy in the creation of secondary ions and electrons and airglow emissions. In recent years, the photoelectron flux has become important because the airglow emissions are heavily used in diagnosing variations in the upper atmosphere.
This paper reexamines the consistency of solar EUV irradiance and ionospheric photoelectron fluxes using recent measurements. ### February 13th Geospace Imaging: The Big Picture Bob Meier George Mason University Various regions of the geospace environment have been named and are often studied as if they exist in isolation. Yet emerging high-quality multidisciplinary global datasets clearly demonstrate the complex and highly variable synergy among traditional space physics regimes. As a result, interdisciplinary endeavors, such as magnetospheric-ionospheric coupling studies, are growing rapidly but face difficult challenges in understanding just how the various geospace regions interact. The recent progression of global imaging missions and the encouraging efforts to interface models of the various geospace regions give hope that one day we may actually be able to literally see “the big picture” that is crucial for understanding the space environment as a whole system. Ultimately we may be able to trace the paths of radiation and plasma eruptions from their origins at the Sun through to the responsive interactions among the magnetosphere, plasmasphere, ionosphere, and thermosphere. This lecture will trace the evolution of global imaging, from the initial measurements, to what we are learning now, to innovative prospects for developing new understanding from big pictures of the neutral and ionized components of geospace. ## Fall, 2007 ### November 14th, 2007 Merging Galaxies: A Nearby Laboratory for High-Redshift Star Formation and Supermassive Black Holes David Rupke University of Maryland drupke@astro.umd.edu The rates of star formation and black hole activity in the universe peaked 10 billion years ago. The majority of this star formation occurred in dusty, merging galaxies, which in turn evolved into galaxies containing luminous black holes. Many examples of these dusty mergers occur in the local universe.
I will review some of the unique properties of these local mergers, including morphologies, masses, gas dynamics, and heavy element content. I will place them both in the context of other galaxies in the local universe and in the context of their high-redshift counterparts. ### November 7, 2007 Stephen Rinehart NASA — Goddard Space Flight Center Stephen.A.Rinehart@nasa.gov From Spitzer to SPECS: The Future of Far-Infrared Astronomy The development of infrared astronomy in the 20th century led to the discovery that the universe appears fundamentally different at long wavelengths. Missions such as the Infrared Astronomical Satellite (IRAS), the Infrared Space Observatory (ISO), and the Kuiper Airborne Observatory (KAO) have led to new understanding of the origins of galaxies, stars, and planets. Spitzer, currently on orbit, has continued breaking new ground, and upcoming facilities such as the Herschel Space Telescope and the Stratospheric Observatory for Infrared Astronomy (SOFIA) promise to continue the legacy of their predecessors. As these missions move forward, we will develop the next generation of far-infrared observatories, taking advantage of new technologies and new techniques to address some of the most compelling astrophysical questions of our time. ### October 31, 2007 Manolis K. Georgoulis Johns Hopkins University/APL http://sd-www.jhuapl.edu/FlareGenesis/Team/Manolis/ Progress and Challenges in the Analysis of Solar Vector Magnetograms: Why do we need these measurements, anyway? Despite decades of ground-breaking advances in solar vector magnetography, vector magnetograms with a potential for meaningful science have been routinely produced only within the last fifteen years or so. However, serious limitations in the acquisition of such pristine data result in an inherently incomplete physical understanding of the solar magnetized atmosphere.
We are in urgent need of even this partial information because nearly every aspect of the long- or short-term evolution of the Sun stems from the emergence and evolution of solar magnetic fields. I will briefly review the current status of the analysis and the challenges pertaining to solar vector magnetograms. My main focus, however, will be on the new insight into the magnetic Sun that these measurements can help us gain. I will try to show that fitting even some pieces of the inextricable puzzle of solar magnetism can lead to substantial developments in the physical understanding of our magnetic star. ### October 17th, 2007 The Angry Sun: Explosions in the Corona James Klimchuk james.klimchuk@nrl.navy.mil Space Science Division, Naval Research Lab Although the Sun is a benevolent provider of warmth and comfort, it also has a very angry side. Solar outbursts cause inclement space weather that sometimes wreaks havoc on the technological systems on which our society is progressively more dependent. These outbursts involve the sudden release of energy that is stored in stressed coronal magnetic fields. They occur on a wide variety of scales. In this talk, I will discuss the smallest and largest events: nanoflares, which collectively heat the corona to multi-million degree temperatures and are responsible for the variable X-ray and UV radiation that modifies the Earth’s upper atmosphere; and coronal mass ejections (CMEs), which are spectacular eruptions responsible for the largest geomagnetic storms. I will present new observations from the recently launched Hinode and STEREO missions, and I will review the current state of theoretical understanding.
### October 10, 2007 Seeing the Heliosphere with New Eyes: First Results from the SECCHI Experiment on STEREO Angelos Vourlidas SECCHI Project Scientist, Naval Research Laboratory The STEREO mission was launched in October 2006 with the main objective of studying Coronal Mass Ejections (CMEs) from their initiation in the solar corona to their arrival at Earth, using a suite of remote sensing and in-situ instruments on two almost identical spacecraft. The mission objectives are mainly addressed by the imaging experiment, named the Sun-Earth Connection Coronal & Heliospheric Investigation (SECCHI), which comprises a suite of five telescopes: an EUVI full-disk imager, two coronagraphs covering the range from 1.5 to 15 solar radii, and two heliospheric imagers observing along the Sun-Earth line from 15 solar radii to the Earth’s orbit and beyond. It is the first time that such imaging capabilities have been available, and they will certainly lead to important advances in our understanding of CME initiation, propagation, and three-dimensional configuration. In this talk, we will showcase the observations and initial results from the first months of operations of the SECCHI telescopes. We will also discuss the instrument performance and synergies with existing observatories (e.g., SOHO). SECCHI was built by a consortium of US and European institutions under the direction of the Solar Physics Branch at the U.S. Naval Research Laboratory. ### October 3, 2007 New perspective on CME rates and their distributions: Lessons from CACTus (Computer Aided CME Tracking) Eva Robbrecht Eva.Robbrecht@oma.be SIDC/Royal Observatory of Belgium We present the first ‘objective’ LASCO CME catalog, a result of the application of the CACTus software to the LASCO archive during the interval September 1997 – January 2007. We have studied the CME characteristics over solar cycle 23 and have compared them with similar results obtained by manual detection (the CDAW catalog).
The main results that I will discuss during my talk are: I. There is a great discrepancy between the CACTus and CDAW CME rates, both in shape and in amplitude. The CACTus statistics are dominated by narrow events that are mostly not included in the CDAW catalog. II. While the classical CME picture is a white-light structure having a typical angular width of 45° in the coronagraphic field of view, our catalog suggests that the CME process is scale invariant, i.e. that no typical CME size exists. III. Are narrow CMEs witnessing the continuous renewal of the magnetic field? Are all plasma outflows an indication of the same physical mechanism? IV. Our different CME statistics shed new light on the composition of CME (and other) catalogs and highlight the need for caution in the usage of catalogs. Perhaps the most revealing conclusion of this work is that at present no ‘ground truth’ CME catalog exists, since no consensus exists about the nature and origin of marginal coronal eruptions. ### September 26, 2007 Reducing Parameter Estimation Bias in Empirical Models: A Case for Data Assimilation in Radiation Belt Science Josh Rigler, NCAR/HAO jrigler@hao.ncar.edu Leaving relevant variables out of a model will almost invariably lead to biased estimates of any empirical parameters required by the model, assuming that these parameters are optimized to somehow minimize the discrepancy between model output and real data. Since this is quite often the case, the previous statement acknowledges formally what many modelers already understand intuitively: parameterized models that ignore relevant physics tend to compensate by over- or under-stating the influence of whatever physics were actually included in the model. This raises the question of how one mitigates bias error so that the physics being modeled are most accurately portrayed.
I present results from an ongoing study using data-derived radiation belt electron flux models, combined with a relatively simple data assimilation technique. By simultaneously estimating its parameters, and correcting the model to better match observations, much correlated structure in the model residuals that causes bias error is removed, thereby providing more realistic parameter estimates. This is a very general result, so similar results should be possible using any formal data assimilation scheme and empirical or semi-empirical model. ### September 19, 2007 Global Hybrid Modeling of Magnetic and Energetic Particle Storms on the Magnetosphere West Virginia High Tech Consortium Foundation A 2.5-dimensional hybrid model of massless fluid electrons and kinetic ions which also includes a simple ionosphere-magnetosphere coupling is used to investigate the impacts of interplanetary shocks and high energy particles presumably resulting from magnetic storms on the magnetosphere. The code is structured to model the magnetosphere dynamics of the Earth-solar wind system by utilizing a finite element mesh specifically tailored to the magnetosphere’s regions. It spans many hundred Earth radii in each direction (upstream, downstream, dawn and dusk). Realistic parameters characteristic of the solar wind, its IMF and the geomagnetic field are used. The code has been tested by its ability to predict a magnetosphere by initializing a dipole at equilibrium with a flow subjected to an incoming solar wind with an IMF. The tests revealed generation of a steady state bow shock, dayside reconnection (for southward IMF), and tail sheet formation. The interplanetary shock is generated by a sudden enhancement of the incoming IMF by an order of magnitude. This act introduced a fast MHD shock which propagated downstream and collided with the bow shock. 
This collision resulted not only in a steep rise in density and temperature of the bow shock, but also in the tail sheet region as the shock propagated downstream. The densities and temperatures, though, eventually relaxed to normal bow shock and tail values as the fast shock left the simulation domain. The sharp rise in the tail density, which is insulated by geomagnetic field lines, can only be a result of kinetic effects. The results are analyzed, and the role of different kinetic effects, along with diagnostics, is discussed. The high energy flux of particles is simulated by injecting keV to MeV range particles. These particles are traced and their trajectories stored. The deflection angle of the incoming particles versus their incident energies and incident latitudes is obtained for the cases in which the incident IMF points northward versus southward. Both these investigations are aimed at a better understanding of the transport of energy and momentum by geomagnetic storms, through their resulting interplanetary shock waves and high energy particles, into the inner magnetosphere. This work is supported by NSF grant ATM-0651690. ### September 12, 2007 Observations of Interplanetary Coronal Mass Ejections in the Inner Heliosphere Using Multiple Spacecraft Ian Richardson Astroparticle Physics Laboratory, NASA Goddard Space Flight Center ianr@milkyway.gsfc.nasa.gov Observations of interplanetary coronal mass ejections (ICMEs), the interplanetary counterparts of coronal mass ejections at the Sun, in the inner heliosphere during cycle 23 have largely been confined to the vicinity of Earth. However, in the mid-1970s to early 1980s, observations made by the Helios 1 and 2 spacecraft, in heliocentric orbits at 0.3-1 AU, together with observations near the Earth, provided a unique opportunity to investigate ICMEs and their associated shocks and energetic particle events at widely separated locations. 
With the recent launch of the STEREO spacecraft, and the Sentinels mission under development, multi-point observations of interplanetary structures at <~ 1 AU will again be possible. The in-situ signatures of ICMEs will be reviewed, and results from earlier multi-point studies discussed, including their implications for these newer missions and space weather forecasting. ### September 5, 2007 Living in an Asymmetric Solar System: What are we learning from the solar system final frontier Merav Opher George Mason University, 4400 University Drive, Fairfax, VA 22030 mopher@physics.gmu.edu In the last couple of years there has been a flurry of activity at the edge of the solar system. After more than 30 years, the twin Voyager spacecraft arrived at the edge of the solar system and are sending back new data that are putting old paradigms in check and forcing us to reexamine old theories. The twin spacecraft are probing the northern and southern hemispheres of the heliosphere, providing us a stereo view. We showed recently (Opher et al. Science 2007; Opher et al. ApJL 2006) that only an asymmetric solar system can explain the current data (radio emissions and streaming of low energy particles). Furthermore, we were able to constrain the direction of the local interstellar magnetic field as not being in the plane of the disk of the galaxy (as was thought previously). This could be the first constraint on turbulence in the local interstellar medium at small scales. In this presentation I will review these findings, our present knowledge of the solar system’s final frontier, and the current puzzles and open questions. ## Spring, 2007 ### February 21, 2007 The CFD center at GMU Rainald Lohner Head, CFD Center Dept. of Computational and Data Sciences College of Sciences M.S. 6A2, George Mason University, Fairfax, VA 22030-4444, USA An overview of the activities of the CFD center at GMU will be presented. 
Both the strategic application areas covered by the Center, as well as the fundamental and technical questions associated with them, will be discussed. Current open questions will be discussed, as well as possible ways of resolving them. ### February 28, 2007 A Review of Atmospheric Flow and Dispersion Patterns Fernando E. Camelli Center for Computational Fluid Dynamics Department of Computational Data Sciences College of Sciences, George Mason University fcamelli@gmu.edu The application of Computational Fluid Dynamics (CFD) for transport and dispersion of pollutants at urban scales has increased in the last decade. Improvement in computer performance is one of the pivotal reasons for this growing interest in using CFD for this type of application. In addition, the threat of an intentional chemical/biological/nuclear (CBN) release in a densely populated urban area has sparked research on dispersion patterns in urban scales for the past decade. The research of gas dispersion for scales larger than a city has been the focus of study for decades now, and Gaussian models have been applied most successfully to these large scales. Unfortunately, the simpler models have been unable to reproduce and capture all the complex processes at an urban level. The reason for this failure is primarily the inability to represent the mechanical forces (i.e. building geometry, trees, traffic) and the thermal forces (i.e. surface heating, HVAC systems) that control dispersion at this scale level. Dispersion models that use first principle physics are available today as a direct result of the sustained increase of computational ability, thus allowing the performance of more operations in less time. This talk will review a Computational Fluid Dynamics (CFD) model called FEFLO-URBAN used to accurately calculate in time the flow field inside an urban layout. The transport and dispersion of a passive release is incorporated using an Eulerian framework. 
Five different scenarios will be presented: first, a realistic urban setting in Northern Virginia where the building geometry was obtained through blueprints (commercial development at Tysons Corner); second, the MUST experiment conducted in the U.S. Army Dugway Proving Ground Horizontal Grid test site in Utah; third, a scenario in New York City using FEFLO-URBAN as part of a collaborative effort supporting the design of the upcoming experiment in the Madison Square Garden area; fourth, the study of assessing maximum possible damage for contaminant release events in a generic subway station; and finally, the transport and dispersion problem around ship vessels, studying the flow patterns for the LPD17 and the concentration levels for the T-AKE 1. Discussion of last two seminars and handout ### March 21, 2007 Comparative studies of multi-scale convective transport through the Earth’s plasma sheet Timothy B. Guild The Aerospace Corporation Space Sciences Department/Chantilly In this talk we will explore multi-scale, convective transport through the Earth’s plasma sheet using in situ observations and global terrestrial magnetospheric simulations. We statistically test the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamic (MHD) model with observations from the Geotail spacecraft at a variety of spatial and temporal scales within the plasma sheet. These comparisons illuminate model shortcomings and highlight the additional physics necessary to resolve data/model discrepancies. Specifically, we will describe comparisons of global-scale plasma moments, magnetic fields, and bulk flows within the plasma sheet. By characterizing the LFM plasma sheet velocity distribution as a function of simulation resolution, we find that increased resolution inherently changes the nature of the dynamics and transport within the LFM plasma sheet, bringing it into closer agreement with magnetotail observations containing fast, localized bulk flows. 
Due to the importance of these fast flows to mass, momentum, and energy transport in both the observed and simulated plasma sheets, we use the LFM to establish that locally-reconnecting magnetic lobe field lines initiate these simulated “flow channels”, explore the physical processes governing their subsequent evolution, and examine their similarity to observations of bursty bulk flows. ### March 28, 2007 Self-Organized Criticality in a Numerical MHD Current Sheet with Cross-Scale Coupling to a Current-Driven Kinetic Instability Alex Klimas NASA, Goddard Space Flight Center alex.klimas@nasa.gov 301-286-3682 Through analyses due to Uritsky et al. [JGR, 2002; GRL, 2003, 2006] of Polar UVI auroral emissions data, it is now well established that regions of bright UV emissions in the night-side aurora exhibit the properties of avalanches in a system in SOC. Based on the observed relationship between localized reconnection in Earth’s magnetotail and consequent auroral UV emissions, on the large range of emissions scales plus the necessary excitation energy, neither of which can have their origins in the ionosphere, and on various analogies between the driving and dissipation of sandpile SOC models and the loading and unloading of magnetic flux/energy in the magnetotail, Uritsky et al. and Klimas et al. [JGR, 2004] have suggested that the auroral dynamics is a reflection of the reconnection dynamics of the magnetotail, which is in or near a self-organized critical state. A study of reconnection in a 2-D current-sheet model containing coupled resistive MHD and kinetic-micro turbulence components will be discussed. The current sheet supports a magnetic field reversal and is configured so that under steady loading at its boundaries an equilibrium state can be reached in which the rate at which magnetic flux is driven into the reversal is balanced by the rate at which it is dissipated through annihilation. 
The transport of electromagnetic (primarily magnetic) energy carried by the Poynting flux into the reconnection region of the model has been examined. It has been shown that the Poynting flux evolves through bursts of avalanching activity separated by quiet times during which the current sheet recovers. All of the analysis techniques (and more) that have been applied to the auroral image data have also been applied to this Poynting flux. New results will be presented showing that the Poynting flux exhibits so many of the key properties of systems in self-organized criticality that an alternate interpretation is implausible. A strong correlation between these key properties of the model and those of the auroral UV emissions will be demonstrated. We suggest that, in general, the driven reconnection model is an important step toward a realistic plasma physical model of self-organized criticality and we conclude, more specifically, that it is also a step in the right direction toward modeling the multiscale reconnection dynamics of the magnetotail. ### May 2, 2007 The International Reference Ionosphere – Climatological standard for the ionosphere Dieter Bilitza Raytheon IIS, GSFC, Space Physics Data Facility, Code 672, Greenbelt, MD 20771 Dieter.Bilitza.1@gsfc.nasa.gov The International Reference Ionosphere (IRI), a joint project of URSI and COSPAR, is the de facto standard for a climatological specification of ionospheric parameters. IRI is based on a wide range of ground and space data and has been steadily improved since its inception in 1969 with the ever-increasing volume of ionospheric data and with better mathematical descriptions of the observed global and temporal variation patterns. The IRI model has been validated with a large amount of data, including data from the most recent ionospheric satellites (KOMPSAT, ROCSAT and TIMED) and data from the global network of ionosondes. 
This talk will give an overview of the IRI effort with special emphasis on the activities that are currently in progress. I will discuss the latest version of the IRI model, IRI-2007, highlighting the most recent changes and additions. Finally, the talk will review some of the applications of the IRI model. # Journal Club ## Fall 2012 The SWL journal club will meet ad-hoc on Tuesdays 1:00 pm – 2:30 pm in a room that will change from week-to-week. To receive announcements, register for the SWL student email list given at http://aurora.gmu.edu/spaceweather/index.php/Main_Page Astrophysics Journal Club: This fall, the journal club will meet on alternate Wednesdays (Sep 5, 19; Oct 3, 17, 31; Nov 14, 28) from 3:00pm to 4:00pm in Planetary Hall (S&T I) room 306. ## Spring, 2012 The SWL journal club will meet ad-hoc on Tuesdays 1:00 pm – 2:30 pm in Research Hall 302. To receive announcements, register for the SWL student email list given at http://aurora.gmu.edu/spaceweather/index.php/Main_Page Astrophysics Journal Club: Jan 25; Feb 8, 22; March 7, 21; April 4, 18; May 2 from 1:30pm to 2:30pm in S&T I, room 306. To receive announcements, contact Joseph Weingartner joe@physics.gmu.edu. ## Fall, 2011 The SWL journal club will meet ad-hoc on Tuesdays from 12:00-1:00pm. During the fall 2011 semester, the astrophysics journal club will meet on alternate Wednesdays (Sep 7, 21; Oct 5, 19; Nov 2, 16, 30) from 1:30pm to 2:30pm in S&T I, room 306. ## Spring, 2011 The SWL journal club will meet ad-hoc on Tuesdays from 1:00-2:00. To receive announcements, register for the SWL student email list given at http://aurora.gmu.edu/spaceweather/index.php/Main_Page The Astronomy journal club’s first meeting of the semester is Wednesday, Jan 26 at 3:30 in S&T I, room 306. Suggested papers can be found on the web site: http://physics.gmu.edu/~joe/jc.html ## Fall, 2010 The SWL journal club will meet ad-hoc on Tuesdays from 12:00-1:00. 
To receive announcements, register for the SWL student email list given at http://aurora.gmu.edu/spaceweather/index.php/Main_Page The Astronomy journal club web site is http://physics.gmu.edu/~joe/jc.html ## Spring, 2010 The SWL journal club will meet ad-hoc on Tuesdays from 10:30-11:30. To receive announcements, register for the SWL student email list given at http://aurora.gmu.edu/spaceweather/index.php/Main_Page The Astronomy journal club will meet on alternate Wednesdays at 12:30 in S&T I, room 306 starting next on Jan 20. Suggested papers can be found on the web site: http://physics.gmu.edu/~joe/jc.html ## Fall, 2009 ### Sept. 29th • Roundtable discussions of Graduate Work. ### Sept. 22nd • Yod Poomvises will discuss his research • Title: CME propagation and expansion in 3-D space in the heliosphere based on STEREO/SECCHI observations. • Abstract: We report a study of kinematics and morphological evolution of CMEs by using STEREO/SECCHI observations to track in 3-D space a set of well observed events from the Sun to a large distance in the heliosphere. The CME tracking is based on the Raytrace model (Thernisien et al 2006), which is able to represent a CME as a 3-D flux rope in the upper portion and two straight legs in the lower portion. The true 3-D location can be obtained. We are able to further calculate 3D velocity and 3D acceleration of CMEs free of projection effects. In particular, the true cross-section of CMEs, and thus the expansion speed, can be found. For the 5 events studied, we find that their bulk velocities eventually converge into a narrow range of 190 km/s – 430 km/s, while their initial velocities range from about 150 km/s to 1500 km/s. Their expansion velocities also converge into a narrow range between 140 km/s and 300 km/s. We find that the deceleration for fast events and acceleration for slow events mainly occur within 40 solar radii. ## Spring, 2009 • Meeting Time: 10:30 – 11:30 AM Tuesdays • Room 302 ### Mar. 
3, 2009 • On Formation of a Shock Wave in Front of a Coronal Mass Ejection With Velocity Exceeding the Critical One • Author: M.V. Eselevich and V.G. Eselevich, Geophysical Research Letters, Vol. 35, L22105, doi:10.1029/2008GRL035482, 2008 • Host: Indrajit Das ### Feb. 17, 2009 • Processes and Mechanisms Governing the Initiation and Propagation of CMEs • B. Vrsnak. Annales Geophysicae, 26, 3089-3101, 2008 • HOST: Oscar Olmedo ## Fall, 2008 • New Meeting Time 3:00-4:00PM Tuesdays • Room 301 ### Oct 14th Particle Acceleration at Perpendicular Shocks • HOST: Oscar Olmedo ### Sep 2 Multiscale modeling of magnetospheric reconnection • HOST: Rebekah Evans ## Summer, 2008 ### Aug 12 Solar excursion phases during the last 14 solar cycles • HOST: Christy Henderson ### July 29 Characteristic magnetic field and speed properties of interplanetary coronal mass ejections and their sheath regions • HOST: Yod Poomvises ### July 15 NUMERICAL INVESTIGATION OF THE HOMOLOGOUS CORONAL MASS EJECTION EVENTS FROM ACTIVE REGION 9236 ## Fall, 2007 There are two journal clubs this semester: a joint Space Weather and Astrophysics Journal Club, and a stand-alone Space Weather Journal Club. The joint journal club meets every other week (generally in SUB II). See http://physics.gmu.edu/~mjordan/AJC.html for links to papers. The stand-alone club meets intermittently in the off weeks (generally in Res. I, room 301). • September 12 (Room: SUB II room 5) • “Implications of Interstellar Dust and Interstellar Magnetic Field at the Heliosphere”, P.C. Frisch, July 23, 2007 astro-ph arXiv:0707.2970v2 • “The radius and mass of the subgiant star beta Hyi from interferometry and asteroseismology”, J. R. North, et al, MNRAS 380, L80-L83 (2007) • “The Orientation of the Local Interstellar Magnetic Field”, M. Opher, et al, Science 11 May 2007 316:875-878 • “NGC 4254: An act of harassment uncovered by the Arecibo Legacy Fast ALFA survey”, M.P. 
Haynes, et al, ApJ 665: L19-L22, 2007 August 10 • “Optically unseen HI detections towards the Virgo Cluster detected in the Arecibo Legacy Fast ALFA survey”, B.R. Kent, et al, ApJ 665: L15-L18, 2007 August 10 • September 26 (Room: SUB II room 5) • TBD • October 10 (Room: SUB II room 4) • TBD • October 31 (Room: SUB II room 6) • TBD • November 14 (Room: SUB II room 4) • TBD • November 28 (Room: SUB II room 4) • TBD • December 5 (Room: SUB II room 4) • TBD ## Spring, 2007 Space Weather Journal Club • January 23: Yuming Wang : “Geomagnetic storms caused by compressed structures” • January 30: Yong Liu: “Ion Thermalization and Wave Excitation Downstream of Earth’s Quasiperpendicular and Marginally Supercritical Bow Shock” • February 06: Yong Liu – second part • February 13: Bob Weigel “Ring Currents” • February 20: Ken Dere • February 27: Art Poland • March 06: Merav Opher “Magnetic Effects in the Heliosphere” • March 13: NOTE: Spring Break • March 20: Bob Meier
# Which one of the following formulas correctly expresses this statement: a quantity x is equal... ## Question: Which one of the following formulas correctly expresses this statement: a quantity {eq}x {/eq} is equal to the sum of the squares of {eq}a {/eq} and {eq}b {/eq}? a. {eq}x = a^2 + b^2 {/eq} b. {eq}x = \sqrt{ab} {/eq} c. {eq}x = 2a + 2b {/eq} d. {eq}x = \sqrt{a + b} {/eq} ## Translating Words into Mathematical Equations: In solving mathematical problems, we need to carefully understand the problem itself so that it can be correctly translated into mathematical notation. From this, the problem can be solved correctly. The problem states that {eq}x{/eq} is a quantity equal to the sum of the squares. This means that the elements making up {eq}x{/eq} should be squared first before adding up, therefore: $$x = a^2 + b^2$$ Therefore, the answer is {eq}\rm (a){/eq}.
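As a quick sanity check, the chosen formula can be evaluated directly; the function name and the numbers below are illustrative, not part of the original question:

```python
# "The sum of the squares of a and b": square each term first, then add.
def sum_of_squares(a, b):
    return a**2 + b**2

a, b = 3, 4
print(sum_of_squares(a, b))  # option (a): 3**2 + 4**2 = 25
print(2*a + 2*b)             # option (c) gives 14, which is not the sum of squares
```

Note the order of operations: squaring happens before the addition, which is exactly what distinguishes option (a) from the distractors.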
# Function with continuous derivative is continuous? Is it true that if $\frac{d}{dx}f(x)$ is continuous, then $f(x)$ is continuous too? If not, can you give a counterexample? • Have you tried relating the definition of the derivative to the definition of continuity? – Mark Bennet Jun 8 '13 at 14:14 • Do you mean the derivative in the sense of distributions? – Siméon Jun 8 '13 at 14:19 Just the fact that your function $f(x)$ is differentiable is enough to prove that it is continuous. The derivative $\frac{d}{dx}f(x)$, need not even be continuous. Please have a look here http://www-math.mit.edu/~djk/18_01/chapter02/proof04.html To be differentiable at a point $a$, a function must also be continuous at that point $a$. In your question, this holds for all $a\in \mathbb{R}$.
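The implication in the answer follows from a one-line limit computation, sketched here for completeness: if $f'(a)$ exists, then for $x \neq a$,

$$\lim_{x\to a}\left(f(x)-f(a)\right)=\lim_{x\to a}\frac{f(x)-f(a)}{x-a}\cdot\lim_{x\to a}(x-a)=f'(a)\cdot 0=0,$$

so $\lim_{x\to a}f(x)=f(a)$, which is the definition of continuity at $a$. A standard example showing the derivative itself need not be continuous is $f(x)=x^2\sin(1/x)$ with $f(0)=0$: it is differentiable everywhere, but $f'$ is discontinuous at $0$.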
# S&P 500 Technical Analysis for 12/30/09 The SPY gave back some of the gains of the prior 5-day rally but it remains comfortably above the prior important resistance at 111.80-112.00, and as long as it holds above this level the breakout should be trusted, even with light volume. The nearer term support remains at the prior breakout attempt high made on Dec 4 at 112.38; although that high and Monday's low of 112.32 are labeled, they shouldn't be taken as literal support levels, but the 112.30 area should be considered first support.
#### 2.1.34 $$u_t+u u_x=x$$ with $$u(x,0)=f(x)$$ Example 3.5.11 in Lokenath Debnath. problem number 34 From example 3.5.11, page 219, Nonlinear PDEs by Lokenath Debnath, 3rd edition. Solve for $$u(x,t)$$ $u_t+u u_x=x$ with $$u(x,0)=f(x)$$ Mathematica: Failed. Maple: returns the solution implicitly; writing $\xi$ for Maple's placeholder $\_Z$, it reads $u \left (x , t\right ) = \left ( \xi + f \left ( \xi \right ) \right ) {\mathrm e}^{t} - x ,$ where $\xi = \xi(x,t)$ is a root of $\left ( \xi \, {\mathrm e}^{2 t} - 2 x \, {\mathrm e}^{t} + {\mathrm e}^{2 t} f \left ( \xi \right ) + \xi - f \left ( \xi \right ) \right ) \left ( \xi + f \left ( \xi \right ) \right ) = 0 .$
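Maple's implicit answer can be reproduced by the method of characteristics (a sketch, writing $\xi$ for the characteristic's starting point, i.e. Maple's $\_Z$). Along curves with

$$\frac{dx}{dt}=u,\qquad \frac{du}{dt}=x,\qquad x(0)=\xi,\quad u(0)=f(\xi),$$

the combinations $x+u$ and $x-u$ decouple:

$$\frac{d}{dt}(x+u)=x+u \;\Rightarrow\; x+u=\left(\xi+f(\xi)\right)e^{t},\qquad \frac{d}{dt}(x-u)=-(x-u) \;\Rightarrow\; x-u=\left(\xi-f(\xi)\right)e^{-t}.$$

Subtracting the first relation from $x$ gives $u=\left(\xi+f(\xi)\right)e^{t}-x$, with $\xi$ determined implicitly from $2x=\left(\xi+f(\xi)\right)e^{t}+\left(\xi-f(\xi)\right)e^{-t}$; multiplying this last equation by $e^{t}$ recovers the polynomial inside Maple's RootOf.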
# If $$\left| z + \bar{z} \right| = \left| z - \bar{z} \right|$$, then the locus of z is 1. A pair of straight lines 2. A line 3. A set of four straight lines 4. A circle Option 1 : A pair of straight lines ## Detailed Solution Concept: Consider a complex number x + iy. It can be represented on a graph with the real part on the X-axis and the imaginary part on the Y-axis. Calculation: Let z = x + iy, so z̅ = x - iy. $$\left| z + \bar{z} \right| = \left| z - \bar{z} \right|\\ ⇒ |x+iy+x-iy|=|x+iy-x+iy|\\ ⇒ |2x| = |2iy|\\ ⇒ |x| = |y| \quad (\text{since } |i| = 1)$$ Case 1: x > 0, y > 0 ⇒ y = x Case 2: x < 0, y > 0 ⇒ y = -x Case 3: x > 0, y < 0 ⇒ -y = x Case 4: x < 0, y < 0 ⇒ -y = -x ∴ In every case y = ±x, so the locus of z is a pair of straight lines Hence, option (1) is correct.
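A quick numerical check of the condition (the helper name here is illustrative): for z = x + iy, |z + z̅| = |2x| and |z − z̅| = |2y|, so equality holds exactly when |x| = |y|, i.e. on the lines y = ±x.

```python
# The condition |z + conj(z)| == |z - conj(z)| holds exactly on y = x and y = -x.
def on_locus(z, tol=1e-12):
    return abs(abs(z + z.conjugate()) - abs(z - z.conjugate())) < tol

print(on_locus(complex(3, 3)))    # True:  on the line y = x
print(on_locus(complex(-2, 2)))   # True:  on the line y = -x
print(on_locus(complex(3, 1)))    # False: off both lines
```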
2020 Fundamentals of Fluid Mechanics B Midterm Exam Thursday March 26th 2020 11:00 — 12:15 INSTRUCTIONS • USE FUNDAMENTALS OF FLUID MECHANICS TABLES THAT WERE DISTRIBUTED. • ALL QUESTIONS HAVE EQUAL VALUE; ANSWER ALL 2 QUESTIONS. • WRITE YOUR SOLUTIONS IN SINGLE COLUMN FORMAT, WITH ONE STATEMENT FOLLOWING ANOTHER VERTICALLY. • WRITE YOUR SOLUTIONS NEATLY SO THAT THEY ARE EASY TO READ AND VERIFY. • DON'T WRITE ONE LINE WITH TWO EQUAL SIGNS. • HIGHLIGHT YOUR ANSWERS USING A BOX. 12.12.19 Question #1 Recall that for Poiseuille flow between two plates, we obtained: $$\frac{\dot{m}}{W} = -\frac{\rho H^3}{12\mu} \frac{\partial P}{\partial x}$$ $$\vec{v}=\frac{y}{2\mu} \frac{\partial P}{\partial x} (y-H) \vec{i}$$ where $W$ is the width of the plates (along $z$) and $H$ is the distance between the two plates (along $y$). Do the following: (a) Find the wall shear stress $\tau_w$ on each plate due to the fluid friction. (b) Derive an expression for the Darcy friction factor as a function of Reynolds number. Clearly define your Reynolds number. (c) Write down the hydraulic diameter for this problem. (d) Rewrite your Reynolds number and friction factor in terms of the hydraulic diameter. 03.05.20 Question #2 Consider two fluid layers flowing along a plane as follows: Given the plane inclination $\phi$, the gravitational acceleration $g$, as well as the fluid properties $\rho_{\rm A}$, $\mu_{\rm A}$, $\rho_{\rm B}$, $\mu_{\rm B}$, and starting from the mass and momentum transport equations, do the following: (a) Knowing that the speed of the flow at point C is $q_C$, derive an expression for the velocity within fluid A and fluid B as a function of $q_C$, and $x$, $y$, $H_{\rm A}$, $H_{\rm B}$, $g$, $\phi$. (b) Derive an expression for $H_{\rm B}$ as a function of $H_{\rm A}$, $q_{\rm C}$, $g$, $\phi$, and the fluid properties $\rho_{\rm A}$, $\rho_{\rm B}$, $\mu_{\rm A}$, $\mu_{\rm B}$.
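For part (b) of Question #1, a short numerical sketch (all property values arbitrary) confirms the classical result that the Darcy friction factor for laminar flow between parallel plates satisfies f·Re = 96 when the Reynolds number is based on the hydraulic diameter D_h = 2H:

```python
# Laminar Poiseuille flow between parallel plates of gap H, width W >> H.
# From m_dot/W = -rho*H^3/(12 mu) dP/dx, the bulk velocity is u_bulk = -H^2/(12 mu) dP/dx.
rho, mu = 1000.0, 1e-3     # arbitrary water-like properties, SI units
H, dPdx = 0.01, -50.0      # plate gap [m], pressure gradient [Pa/m]

u_bulk = -H**2 / (12.0 * mu) * dPdx
D_h = 2.0 * H                                  # hydraulic diameter: 4*(H*W)/(2*W) as W -> inf
Re = rho * u_bulk * D_h / mu                   # Reynolds number on D_h
f = (-dPdx) * D_h / (0.5 * rho * u_bulk**2)    # Darcy friction factor definition

print(round(f * Re, 6))   # -> 96.0, i.e. f = 96/Re_Dh
```

Algebraically, f·Re = 8H²(−dP/dx)·12μ/(H²(−dP/dx)μ) = 96 regardless of the numbers chosen, which is why the check is exact.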
# How do you find all zeros of f(x)=5x^4+15x^2+10? Dec 30, 2016 $f \left(x\right)$ has zeros $\pm i$ and $\pm \sqrt{2} i$ #### Explanation: We will use the difference of squares identity, which can be written: ${a}^{2} - {b}^{2} = \left(a - b\right) \left(a + b\right)$ with $a = x$ and $b = i$ or $b = \sqrt{2} i$ as follows: $f \left(x\right) = 5 {x}^{4} + 15 {x}^{2} + 10$ $\textcolor{w h i t e}{f \left(x\right)} = 5 \left({x}^{4} + 3 {x}^{2} + 2\right)$ $\textcolor{w h i t e}{f \left(x\right)} = 5 \left({x}^{2} + 1\right) \left({x}^{2} + 2\right)$ $\textcolor{w h i t e}{f \left(x\right)} = 5 \left({x}^{2} - {i}^{2}\right) \left({x}^{2} - {\left(\sqrt{2} i\right)}^{2}\right)$ $\textcolor{w h i t e}{f \left(x\right)} = 5 \left(x - i\right) \left(x + i\right) \left(x - \sqrt{2} i\right) \left(x + \sqrt{2} i\right)$ Hence the zeros of $f \left(x\right)$ are: $x = \pm i$ $x = \pm \sqrt{2} i$
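The factorization can be cross-checked numerically with only the standard library, by treating f as a quadratic in t = x² (a sketch; variable names are illustrative):

```python
import cmath

# f(x) = 5x^4 + 15x^2 + 10 is a quadratic in t = x^2: 5t^2 + 15t + 10 = 5(t + 1)(t + 2).
a, b, c = 5, 15, 10
disc = cmath.sqrt(b*b - 4*a*c)
t_roots = [(-b + disc) / (2*a), (-b - disc) / (2*a)]   # t = -1 and t = -2

# Each t gives a +/- pair of x roots: the conjugate pairs +/- i and +/- sqrt(2)*i.
roots = [s * cmath.sqrt(t) for t in t_roots for s in (1, -1)]
for r in roots:
    assert abs(5*r**4 + 15*r**2 + 10) < 1e-9   # each root satisfies f(r) = 0
```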
# American Institute of Mathematical Sciences March 2009, 2(1): 215-229. doi: 10.3934/krm.2009.2.215 ## Three-dimensional instabilities in non-parallel shear stratified flows 1 Department of Mathematics and Statistics, Center for Environmental Fluid Dynamics, Department of Mechanical and Aerospace Engineering, Arizona State University, Tempe, AZ 85287-1804, United States, United States 2 N/A, United States Received November 2008 Revised November 2008 Published January 2009 The instabilities of non-parallel flows $(\overline{U}(x_3), \overline{V}(x_3), 0)$ ($\overline{V} \ne 0$) such as those induced by polarized inertia-gravity waves embedded in a stably stratified environment are analyzed in the context of the 3D Euler-Boussinesq equations. We derive a sufficient condition for shear stability and a necessary condition for instability in the case of non-parallel velocity fields. Three dimensional numerical simulations of the full nonlinear equations are conducted to characterize the respective modes of instability, their topology and dynamics, and subsequent breakdown into turbulence. We describe fully three-dimensional instability mechanisms, and study spectral properties of the most unstable modes. Our stability/instability criteria generalizes that in the case of parallel shear flows ($\overline{V}=0$), where stability properties are governed by the Taylor-Goldstein equations previously studied in the literature. Unlike the case of parallel flows, the polarized horizontal velocity vector rotating with respect to the vertical coordinate ($x_3$) excites unstable modes that have different spectral properties depending on the orientation of the velocity vector. At each vertical level, the horizontal wave vector of the fastest growing mode is parallel to the local vector ($d\overline{U}(x_3)/dx_3$, $d \overline{V}(x_3)/dx_3)$. We investigate three-dimensional characteristics of the unstable modes and present computational results on Lagrangian particle dynamics. 
Citation: Alex Mahalov, Mohamed Moustaoui, Basil Nicolaenko. Three-dimensional instabilities in non-parallel shear stratified flows. Kinetic and Related Models, 2009, 2 (1): 215-229. doi: 10.3934/krm.2009.2.215
# Testability of GUTs at the LHC

1. Jan 24, 2009 ### Orbb

Hello, my understanding of particle physics is very limited. I know that several GUTs involving various symmetries have been proposed. My question is whether experiments at the LHC can help to rule out, or even verify, some of the proposed GUTs? Maybe someone has some specific answers. Thank you!

2. Jan 24, 2009 ### Haelfix

Doubtful. The usual hypothetical GUT scale is at very high energies, near the Planck scale, many orders of magnitude away from what the LHC probes. The best you can do is get better experimental precision on the standard coupling constants (electroweak, strong); that way we can refine the extrapolation when we run them to that scale to see if they unify. Right now, they nearly hit but not quite (with supersymmetry it's even closer). Barring that, we need better precision on the bounds of potential proton decay (a typical prediction of GUTs is that protons do indeed decay after a very long time), but the LHC won't help us there directly either. Still, sometimes there are roundabout ways of getting there, depending on which physics we see or don't see, but I wouldn't count on it.

3. Jan 24, 2009 ### bomanfishwow

I respectfully disagree with the post above. Many GUTs include decompositions such as (and as a very simple example):

$$SO(10) \to SU(5) \otimes U(1)_{GUT} \to SU(3) \otimes SU(2) \otimes U(1)_{Y} \otimes U(1)_{GUT}$$

The new U(1) would exhibit itself as a new neutral-current process (so-called Z primes). There are further decompositions where one finds new SU(2) x U(1), implying W primes as well as Z primes. There are people actively looking for these things in various channels of potential interest, and some models provide striking signatures. Now, we will either make some exceedingly exciting discoveries or rule out certain models and model parameters.

4. Jan 26, 2009 ### Haelfix

I agree that the discovery of, say, TeV neutral-current processes *might* shed some light on GUT processes if we are really lucky, but it's far more probable that a hadron collider won't be able to illuminate the specifics. The inverse problem is in full effect there, since a number of potential non-GUT models contain them. AFAIK, they are hard to disentangle.

5. Jan 26, 2009 ### bomanfishwow

Depends on just how much you want to disentangle. It's fairly easy (given enough integrated luminosity) to measure the spin and couplings of any new resonance from the decay kinematics, and this constrains things hugely. I.e., spin 1 implies a new U(1)-like thing, spin 2 implies (amongst other things) KK models, same-sign events can imply exotic models with doubly-charged resonances, etc. You can also add in further evidence: displaced decay vertices can imply models with B-L symmetry, boosted decay products can give further handles, etc. The argument is, of course, that a linear collider will be needed to really probe any new structure, but we'll definitely be able to say more than "It's a bump at 1.5 TeV" if we find something.
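The coupling-unification test mentioned in post #2 can be sketched numerically with the standard one-loop renormalization-group running. The beta coefficients below are the textbook Standard Model values; the inverse couplings at the Z mass are rough, illustrative numbers rather than precise measurements:

```python
import math

# One-loop running: alpha_i^-1(mu) = alpha_i^-1(MZ) - b_i/(2*pi) * ln(mu/MZ)
MZ = 91.19  # Z boson mass in GeV
# Approximate GUT-normalized inverse couplings at MZ
alpha_inv_MZ = {"U(1)_Y": 59.0, "SU(2)_L": 29.6, "SU(3)_c": 8.5}
# One-loop Standard Model beta coefficients
b = {"U(1)_Y": 41.0 / 10.0, "SU(2)_L": -19.0 / 6.0, "SU(3)_c": -7.0}

def alpha_inv(group, mu):
    """Inverse coupling of `group` at energy scale mu (GeV), one loop."""
    return alpha_inv_MZ[group] - b[group] / (2 * math.pi) * math.log(mu / MZ)

for mu in (1e3, 1e10, 1e15):  # LHC-ish, intermediate, near-GUT scales
    print(mu, {g: round(alpha_inv(g, mu), 1) for g in b})
```

Running this shows the three inverse couplings drawing close near 10^14-10^16 GeV without meeting exactly, which is the "nearly hit but not quite" statement above; swapping in the MSSM beta coefficients makes the crossing much closer.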
# Thorough test of ephemerides: JPARSEC vs Horizons, IMCCE, and Astronomical Almanac

This month I have implemented more than 250 test cases of ephemerides in JPARSEC, using data from the Astronomical Almanac (AA) and the ephemerides servers Horizons and IMCCE. I last tested ephemerides a few years ago, and never in depth, so I expected to find some bugs in the results, and that has been the case, but the errors are not very important. These are the main improvements in the ephemerides of the JPARSEC version uploaded today.

• The calculation of the heliocentric distance of planets was not completely correct (a secondary light-time correction was not implemented), and this had small effects on the values of the elongations and phase angles. Both values depended on the ephemeris type (apparent, astrometric, and geometric).
• The astrometric position of the Moon was always wrong by a few arcseconds due to a bug in the correction for aberration.
• The sign of the subsolar longitude of the Moon was wrong.
• Ephemerides for asteroids/comets never showed the orientation of the axis or the sizes, even for those comets/asteroids with known parameters for the orientation of their rotation axis.
• The calculation of LSR velocities in StarEphem was wrong, and the B1950 to J2000 transformation was not completely accurate/well documented.

In addition to these bug fixes, new features have been implemented in JPARSEC. The main ones are listed next.

• Ephemerides calculations now support two new reduction algorithms: the IAU 1976 method (Lieske precession and obliquity, among other algorithms), required to match results from the IMCCE server, and the IAU 2006 resolutions to fit AA values. The IAU 2006 resolutions are finally supported, and some methods to work with matrices (like the NPB matrix) are available, although the implementation in the ephemerides is not based on matrices.
• The orientation of the rotation axis of planets and satellites is supported for the IAU 2000, 2006, and 2009 resolutions, independently. The orientation parameters used depend on the selected reduction algorithm. IAU 2006 improves the accuracy of the rotation of some planets, but the IAU 2009 resolutions change the longitude of the central meridian of Jupiter and Pluto too much. Since the reduction itself is the same (RA and DEC), I think the best current method is IAU 2006.
• The JPLEphemeris class has been redesigned, adding support for DE422, improving the speed of calculations (Chebyshev polynomials are now stored in memory), and allowing the use of external files. DE102 is no longer available.
• The rise, set, and transit times in the EphemElement object are now arrays that can hold different values in case a given object has more than one rise/set/transit in a given day, or none at all. Calculations in the RiseSetTransit class will automatically add a new rise/set/transit time after the last one calculated.
• JPL ephemerides were not used as the reference to calculate ephemerides for natural satellites in the accurate mode of the Ephem class. Now this is done automatically if DE4xx files are detected to be available.

A small set of limitations has been identified and could require some kind of fix in the future. These are:

• Moon orientation seems more accurate with Eckhardt's theory than with the IAU resolutions, since Eckhardt's theory is much more compatible with the results of JPL ephemerides. I've kept the IAU resolutions by default, but the IAU model for the rotation of the Moon seems to be outdated. I expected a difference of 0.03 deg according to the documentation, but found 0.7. An accurate reduction of nutation/librations should be implemented in the JPLEphemeris and PlanetEphem classes.
• Planetary positions are relative to the barycenter (center of mass).
For instance, there's an offset of 2000 km in Pluto, which is noticeable in the RA/DEC values (0.07”) despite the distance of this object.
• The getConstellation method in the Constellation class should be used with astrometric positions instead of apparent ones. This means that the constellation where an object lies could be wrong in very extreme cases, when the limit of the next constellation is closer than 30” (aberration + nutation).

But the main effort has been to test ephemerides against different sources. The official ephemerides theory used in the astronomical community is the DE405 integration, but the difference between modern JPL integrations (DE422) and DE405, which is clearly becoming outdated, is noticeable. For instance, the difference in the position of Jupiter (not even the object where this difference is largest) varies from 0.05” at J2000 to about 0.15” at J1900. Also noticeable is the extremely old implementation of ephemerides reduction algorithms in the astronomical ephemerides servers (Horizons and IMCCE), which produces differences in the RA/DEC from 0.05” at J2000 to 0.3-0.4” at J1900 (mainly due to Lieske's precession), compared to the IAU 2006 resolutions. These differences explain the difficulties when comparing the results from different sources and when trying to match them with JPARSEC or test them accurately.

## The Horizons ephemerides server

It is available at http://ssd.jpl.nasa.gov/horizons.cgi. After testing it with JPARSEC, it seems to have a very good implementation of the geometry of the Solar System, which means that the values of parameters like planetary elongations and phase angles, or the longitudes of the central meridians, are fine. I've been unable to fit the RA/DEC values to the milliarcsecond, probably because I don't know the reduction algorithms used internally. It seems to use some IAU 1976 algorithms (precession), but not all of them, or at least in a non-standard way.
Maybe an accurate apparent RA/DEC output is not the main objective. I've also found unexpected results for the longitude of the central meridian of Saturn and Pluto (it should follow the IAU 2006 resolutions as in the other bodies, but it doesn't), and also a possible bug: the longitudes of the central meridian seem to be wrong far from J2000 (even a few years before or after, the difference is noticeable), compared to IMCCE and JPARSEC. There's also a strange value for the distance to the Moon. In fact, Horizons gives two different values for this distance depending on the ephemeris type (VECTOR and OBSERVER), the one for the VECTOR type being what I can reproduce. Since the difference can reach 100 km (much greater than the 2 km difference between the geometric and barycentric centers of the Moon), maybe the other is the distance that light crosses to reach the observer (corrected for relativistic effects), but it is confusing. Also noticeable is the difference in the distance to the planets when selecting or not the (quite hidden) extra-precision flag, which should be checked to get correct results for all those decimal places given.

## The IMCCE ephemerides server

It is available at http://www.imcce.fr/en/ephemerides/generateur_ephemerides.php. In this case the objective is clearly to produce accurate RA/DEC output, but the claim that output positions are relative to the ICRF makes little sense considering the effects of using these old algorithms instead of those of the IAU 2000/2006 resolutions. In addition, according to the IAU resolutions, ICRF-based positions should be used with astrometric ephemerides, and apparent places of objects should be based on the mean dynamical equinox of J2000. The IAU 1976 standard reduction algorithms are used, but close to J2000 the reduction is different and seems to follow more recent techniques (either the APPLY_WILLIAMS or the APPLY_JPLDE40x set of algorithms implemented in JPARSEC).
The IAU 2000 resolutions are used to calculate the orientation of the planets, but these values are often wrong, sometimes in the sign of the longitude of the central meridian and in other cases in the value itself. There is a notable option to generate a chart with the orientation of the planet, which seems to always show the correct value for the longitude of the central meridian, which, as I said, sometimes differs from that of the table. I've also seen wrong values for the elongations sometimes. The IMCCE server also has an option to access 'General ephemeris of natural satellites position', which produces extended calculations for natural satellites compared to the main option for Solar System bodies. In this case the output for the Martian satellites (the only test I have done here) is wrong far from J2000, compared to the output positions of the main page. IMCCE seems the right server for testing RA/DEC output at the milliarcsecond level, which clearly has been well tested, but the rest is a little buggy.

## Results of the tests

I have implemented different tests to ensure the quality of JPARSEC. A first test of the rectangular coordinates of the different planetary and satellite theories against the results from Horizons and from the documentation of the IMCCE theories (see ftp://ftp.imcce.fr/pub/ephem) shows a very small difference in the positions of the Uranian satellites using the GUST86 theory (< 1 km), which I will examine in detail and fix if necessary. Another test using Horizons checks apparent/astrometric/geometric RA/DEC to an accuracy of 0.1”, as well as other values (elongation, phase angle, subsolar and subearth positions). This test is repeated for all theories in JPARSEC (VSOP87, ELP2000, every JPL integration, Moshier's method) to ensure, basically, that correct results are always obtained for every field of an EphemElement object when the test is not done at the milliarcsecond (mas) level.
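A mas-level comparison of two RA/DEC results like the ones used in these tests can be sketched as follows. The parsing helpers and the small-angle separation formula below are my own generic illustration, not JPARSEC code:

```python
import math

def parse_ra(s):
    """Parse a right ascension like '09h 24m 41.58178s' into degrees."""
    h, m, sec = (float(x[:-1]) for x in s.split())
    return (h + m / 60 + sec / 3600) * 15

def parse_dec(s):
    """Parse a declination like "19º 15' 18.9756”" into degrees."""
    d, m, sec = (float(x.rstrip("º'\"”")) for x in s.split())
    sign = -1 if s.lstrip().startswith("-") else 1
    return sign * (abs(d) + m / 60 + sec / 3600)

def separation_mas(ra1, dec1, ra2, dec2):
    """Angular separation in milliarcseconds (flat-sky approximation,
    which is fine for mas-sized offsets)."""
    dra = (ra1 - ra2) * math.cos(math.radians((dec1 + dec2) / 2))
    ddec = dec1 - dec2
    return math.hypot(dra, ddec) * 3600e3

# Example: the IMCCE vs JPARSEC Mars positions for 1600, Jan 1
ra_imcce = parse_ra("09h 24m 41.58178s")
dec_imcce = parse_dec("19º 15' 18.9756”")
ra_jpar = parse_ra("09h 24m 41.58188s")
dec_jpar = parse_dec("19º 15' 18.9746”")
print(separation_mas(ra_imcce, dec_imcce, ra_jpar, dec_jpar))  # roughly 1.7 mas
```

A difference of 0.0001s in RA is 1.5 mas on the sky before the cos(DEC) factor, so agreement at this level is what "fitting to the mas" means in practice.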
Next I extended the tests to the milliarcsecond for planets and satellites, trying to 'fit' the results from Horizons, IMCCE, and AA. The IAU 1976 and 2006 resolutions were implemented for this task, and I found it easy for AA and IMCCE with planets, and a little tricky but possible with natural satellites. The Uranian satellites with GUST86 were already perfect at the mas level. For Saturn, using TASS 1.7, I had to rotate ecliptic positions from TASS to equatorial using the ecliptic of date (even though TASS positions are J2000). With the L1 theory for Jupiter the discrepancy also increases away from J2000, so some kind of correction seems to be required, although the difference is a few tenths of a mas at most. Martian satellite results depart from JPARSEC results by 2” over 4 centuries, and JPARSEC seems wrong here, since Horizons gives output similar to IMCCE's. Maybe the public version of the theory for the Martian satellites cannot be used far from J2000. Below is a set of comparative results obtained with JPARSEC, IMCCE, and HORIZONS.
| Date and time (TT) | Method | Body | RA, DEC, distance | Comment |
| --- | --- | --- | --- | --- |
| 1600, Jan 1, 0h | IMCCE | Mars | 09h 24m 41.58178s, 19º 15' 18.9756”, 0.746598316 | |
| | JPARSEC | | 09h 24m 41.58188s, 19º 15' 18.9746”, 0.746598317 | Using IAU 1976 reduction algorithms and ICRF frame |
| 1900, Jan 1, 0h | IMCCE | Mars | 19h 00m 39.90359s, -23º 38' 58.3076”, 2.400963430 | |
| | JPARSEC | | 19h 00m 39.90409s, -23º 38' 58.3100”, 2.4009634300 | |
| 2200, Jan 1, 0h | IMCCE | Mars | 10h 25m 57.25433s, 13º 29' 37.0809”, 0.842946359 | |
| | JPARSEC | | 10h 25m 57.25447s, 13º 29' 37.0803”, 0.84294636204 | |
| 1600, Jan 1, 0h | IMCCE | Neptune | 10h 00m 31.59378s, 12º 51' 13.3640”, 29.466102577 | |
| | JPARSEC | | 10h 00m 31.59382s, 12º 51' 13.3634”, 29.4661025774 | |
| 1900, Jan 1, 0h | IMCCE | Neptune | 05h 39m 21.87494s, 22º 03' 58.9701”, 28.920271361 | |
| | JPARSEC | | 05h 39m 21.87494s, 22º 03' 58.9700”, 28.9202713608 | |
| 2200, Jan 1, 0h | IMCCE | Neptune | 01h 24m 58.76536s, 07º 07' 44.9398”, 29.615237432 | |
| | JPARSEC | | 01h 24m 58.76543s, 07º 07' 44.9404”, 29.6152374318 | |
| 2000, Jan 1, 0h | IMCCE | Neptune | 20h 21m 39.47908s, -19º 13' 01.9774”, 31.021117421 | |
| 2000, Jan 1, 0h | HORIZONS | Neptune | 20h 21m 39.4756s, -19º 13' 01.987”, 31.0211174207 | |
| | JPARSEC | | 20h 21m 39.47555s, -19º 13' 01.9870”, 31.0211174207 | (IAU 1976 reduction algorithms) |
| | JPARSEC | | 20h 21m 39.47906s, -19º 13' 01.9774”, 31.0211174207 | Using JPLDE40x reduction algorithms |
| 2011, July 18, 0h | AA | Mars | 05h 11m 32.063s, 23º 05' 16.56”, - | |
| | JPARSEC | | 05h 11m 32.06253s, 23º 05' 16.5596”, 2.17658764475 | Using IAU 2006 reduction algorithms and J2000 frame |
| 2011, July 18, 0h | AA | Jupiter | 02h 21m 54.955s, 12º 49' 28.91”, - | |
| | JPARSEC | | 02h 21m 54.95512s, 12º 49' 28.9066”, 5.06967777879 | |
| 2011, July 18, 0h | AA | Neptune | 22h 10m 46.150s, -11º 49' 46.06”, - | |
| | JPARSEC | | 22h 10m 46.15016s, -11º 49' 46.0553”, 29.1731323295 | |
| 1600, Jan 1, 0h | IMCCE | Deimos | 09h 24m 44.52675s, 19º 15' 19.6628”, 0.746640479 | |
| | JPARSEC | | 09h 24m 44.58900s, 19º 15' 18.2332”, 0.74662669602 | Using IAU 1976 reduction algorithms and ICRF |
| 1900, Jan 1, 0h | IMCCE | Deimos | 19h 00m 40.56578s, -23º 38' 59.5043”, 2.400848638 | |
| | JPARSEC | | 19h 00m 40.56644s, -23º 38' 59.5203”, 2.4008486685 | |
| 2200, Jan 1, 0h | IMCCE | Deimos | 10h 25m 59.52983s, 13º 29' 25.4275”, 0.842883773 | |
| | JPARSEC | | 10h 25m 59.45245s, 13º 29' 25.0445”, 0.84287562191 | |
| 1600, Jan 1, 0h | IMCCE | Titan | 13h 45m 15.35365s, -08º 17' 57.0768”, 9.99873438 | |
| | JPARSEC | | 13h 45m 15.35371s, -08º 17' 57.0768”, 9.99873438030 | Using ecliptic of date, not J2000 ecliptic |
| 1900, Jan 1, 0h | IMCCE | Titan | 17h 50m 10.06797s, -22º 24' 27.4608”, 11.031604402 | |
| | JPARSEC | | 17h 50m 10.06800s, -22º 24' 27.4608”, 11.0316044020 | |
| 2200, Jan 1, 0h | IMCCE | Titan | 22h 02m 43.88507s, -13º 27' 15.7030”, 10.450165195 | |
| | JPARSEC | | 22h 02m 43.88515s, -13º 27' 15.7028”, 10.4501651962 | |
| 1600, Jan 1, 0h | IMCCE | Callisto | 09h 33m 53.33724s, 15º 30' 35.5888”, 4.558908178 | |
| | JPARSEC | | 09h 33m 53.33510s, 15º 30' 35.6000”, 4.55890833039 | |
| 1900, Jan 1, 0h | IMCCE | Callisto | 15h 56m 43.78523s, -19º 36' 26.0122”, 6.125545509 | |
| | JPARSEC | | 15h 56m 43.78401s, -19º 36' 26.0106”, 6.12554669865 | |
| 2200, Jan 1, 0h | IMCCE | Callisto | 22h 26m 48.97572s, -10º 48' 53.3763”, 5.494266711 | |
| | JPARSEC | | 22h 26m 48.97557s, -10º 48' 53.3779”, 5.49426633534 | |
| 1600, Jan 1, 0h | IMCCE | Oberon | 01h 41m 49.40287s, 10º 01' 47.6674”, 19.470081383 | |
| | JPARSEC | | 01h 41m 49.40292s, 10º 01' 47.6673”, 19.47008138146 | |
| 1900, Jan 1, 0h | IMCCE | Oberon | 16h 34m 01.71001s, -21º 54' 48.8151”, 19.838041698 | |
| | JPARSEC | | 16h 34m 01.71002s, -21º 54' 48.8149”, 19.83804168796 | |
| 2200, Jan 1, 0h | IMCCE | Oberon | 05h 46m 52.47223s, 23º 32' 39.3707”, 18.121971509 | |
| | JPARSEC | | 05h 46m 52.47231s, 23º 32' 39.3711”, 18.12197150938 | |

Tests were implemented for other parts of the library, which resulted in various minor corrections. Everything seems to be fine now, but there's still margin for improvement. The B1950 to J2000 transformation can maybe be improved according to J. Bennett (see http://ned.ipac.caltech.edu/help/calc_doc.txt). I would like to test eclipses in more detail, and also the MainEvents class. There are some failures in the tests for occultations of stars by planets, mutual events of natural satellites, and the ephemerides of Triton.
Other failures mentioned above include the orientation of the Saturn and Pluto axes, the GUST86 positions, and the ephemerides of the Martian, Jovian, and maybe Saturnian satellites at the mas level. Perhaps I will have to contact people at IMCCE or Horizons to solve these failures. Other things remain, like the ephemerides of dwarf satellites, maps of solar and lunar eclipses, or testing the polar motion correction, among others, and I would like to improve code readability and documentation even more.
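As a footnote, the light-time correction behind the heliocentric-distance fix mentioned at the top of this post can be sketched generically: the astrometric position uses the body's position at t - tau, where tau itself depends on the corrected distance, so it is solved by fixed-point iteration. This toy model uses a circular orbit, not JPARSEC's ephemerides:

```python
import math

C_AU_PER_DAY = 173.1446  # speed of light in au/day

def body_pos(t):
    """Toy heliocentric position (au) on a circular orbit, t in days.
    Radius and period are Jupiter-like, purely illustrative."""
    r, period = 5.2, 4332.0
    ang = 2 * math.pi * t / period
    return (r * math.cos(ang), r * math.sin(ang), 0.0)

def light_time_corrected(t, obs):
    """Iterate tau = |r_body(t - tau) - obs| / c until it converges,
    returning the antedated position and the light time in days."""
    tau = 0.0
    for _ in range(10):
        pos = body_pos(t - tau)
        new_tau = math.dist(pos, obs) / C_AU_PER_DAY
        if abs(new_tau - tau) < 1e-12:
            break
        tau = new_tau
    return pos, tau

pos, tau = light_time_corrected(1000.0, (1.0, 0.0, 0.0))
print(pos, tau)
```

The iteration converges in a handful of steps because each pass shrinks the error by roughly v/c of the body; the "secondary" part of the correction is that the light time must be consistent with the already-antedated position, not with the instantaneous one.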
# Matching in Multi-arm Bandit with Collision

YiRui Zhang · Siwei Wang · Zhixuan Fang

Poster, Tue Nov 29 02:00 PM -- 04:00 PM (PST) @ Hall J #939

In this paper, we consider the matching version of the multi-agent multi-armed bandit problem, i.e., while agents prefer arms with higher expected reward, arms also have preferences over agents. In such a case, agents pulling the same arm may encounter collisions, which leads to a reward of zero. For this problem, we design a specific communication protocol which uses deliberate collisions to transmit information among agents, and propose a layer-based algorithm that helps establish an optimal stable matching between agents and arms. With this subtle communication protocol, our algorithm achieves a state-of-the-art $O(\log T)$ regret in the decentralized matching market, and outperforms existing baselines in experimental results.
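The collision reward model described in the abstract can be sketched minimally; `collision_round` and its signature are my own illustration of the setting, not the authors' algorithm:

```python
import random
from collections import Counter

def collision_round(pulls, arm_means, rng=random.Random(0)):
    """One round of the collision model.
    pulls: list of arm indices chosen by each agent.
    Agents sharing an arm collide and all receive reward 0;
    a lone agent gets a Bernoulli reward with its arm's mean."""
    counts = Counter(pulls)
    rewards = []
    for arm in pulls:
        if counts[arm] > 1:  # collision: the round is wasted for these agents
            rewards.append(0.0)
        else:
            rewards.append(1.0 if rng.random() < arm_means[arm] else 0.0)
    return rewards

# Agents 0 and 1 both pull arm 0 and collide; agent 2 pulls arm 2 alone.
print(collision_round([0, 0, 2], [0.9, 0.5, 0.99]))
```

The paper's protocol exploits exactly this: a collision is observable (zero reward), so agents can collide *deliberately* to signal bits to each other without any explicit communication channel.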
# Expansion redshift VS gravitational redshift?

1. Jan 12, 2010 ### anya2

While objects closer to us can shift in either direction, red or blue, depending on their movement relative to us, distant objects such as galaxies tend to shift only to the red. As I understand it, this is the basis of the idea that the universe is expanding. But how are we sure that is the case, and that the redshifts are not due to the gravitational pull of all those objects that lie between us and the observed objects?

2. Jan 12, 2010 ### sylas

Because gravitational effects of stuff in between the source and the observer cancel out. It is the difference in gravitational potential between source and observer that matters. Dropping a little bit into and then out of a gravitational well along the way may alter the direction of light, but that's all. This change in direction is measured and used in the study of gravitational lensing.

Cheers -- sylas

3. Jan 12, 2010 ### anya2

Yep, that makes sense, thanks a lot. The only gravity that would not cancel out is that of the observed object; as it is the starting point, it can only pull light back. I am not sure, but I think I've read that the CMBR has shifted uniformly; but if space indeed expanded at such a high rate, shouldn't different regions of the CMBR be redshifted by a different amount, depending on their position relative to our point of observation?

4. Jan 12, 2010 ### Ich

No, it results in a blueshift. If it'd cancel out, there would be no deceleration of the expansion either.

5. Jan 13, 2010 ### Chalnoth

Huh? Sylas is correct. On average, the effects completely cancel, because compared to the average density, there are just as many voids as collapsed objects (by some appropriate measure).

6. Jan 13, 2010 ### Wallace

I think we probably all agree and are just using the words differently, but in fact gravitational effects play a crucial role in cosmological redshift.
The distance vs redshift relationship is a key cosmological probe, and the very reason it tells us about the composition of the Universe is that the gravitational effects at play can be modelled, and that tells us how much stuff (and how much of different kinds of stuff) is around. This is true even if we ignore structure (i.e. the simplest homogeneous FRW model). To see how gravity is important, try modelling a matter-only Universe using Newtonian physics only. You will see that even for quite large distances, you get pretty close to the correct answer by modelling the redshift as a combination of a Doppler redshift plus a blueshift due to the gravitating matter enclosed in a sphere centred on the observer with a radius equal to the distance to the emitter (Gauss's law lets us ignore everything outside in a homogeneous universe). In the Newtonian case you can work out the gravitational blueshift by thinking about the potential energy difference between the emitter and observer. Now, in the full relativistic case, there is an inherent ambiguity in dividing up the redshift into the 'Doppler' and 'gravitational' parts, and it depends on the co-ordinates you choose as to which label gets what. There have been various bun fights in the literature about this, but the bottom line is that both motion and gravity are at play in determining what redshift is observed.

7. Jan 13, 2010 ### Ich

Thanks, Wallace. If we have, as a toy model, a whole universe filled with static dust, the gravitational effects between any two points do not "cancel out". Instead, they make the whole thing collapse. This concerns photons also; they get blueshifted. Exactly. But "potential" is not a basic feature of GR; for example, it is not defined in homogeneous coordinates. That does not mean that there are no gravitational effects. To define a potential, you have to use static coordinates. The potential is then $\sqrt{g_{tt}} \simeq g_{tt}/2$.
Static coordinates are centered around one (arbitrary, of course) point r=0. The potential is then (in the Newtonian limit)

$$\frac{2 \pi G}{3 c^2} \rho r^2$$

As long as there are no significant density changes during the light travel time, you can decompose photon redshift unambiguously into gravitational blueshift and Doppler redshift. In the case of a static spacetime, like de Sitter, an unambiguous decomposition is always possible.

8. Jan 13, 2010 ### sylas

Not so. The simplest such model, the Milne model, is a flat universe filled with dust at critical density, and it keeps expanding indefinitely, though slowing down indefinitely as well. Photons in this universe are always redshifted. The only way you get blueshift is with supercritical densities, which can reverse expansion into contraction and a Big Crunch. You can get blueshifts once contraction gets underway, which is perfectly obviously not going on in our universe. As Wallace points out, you can, depending on how you work with co-ordinates, regard the redshift (or blueshift, in a contracting universe) as a gravitational effect, associated with the difference in density between emission and observation of a photon in different regions of the dust-filled universe. Whether you do this with Newtonian approximations or the more correct relativistic methods only makes a difference on sufficiently large scales. If the dust is inhomogeneous on small scales, then you end up with something a bit more like our own universe, with local motions and clusters of galaxies and so on. If a photon passes by clumps of matter between emission and observation, this makes no difference, which is the point I was trying to make. What counts is the state at emission, and at observation. Going into and out of a localized gravitational well along the way has no effect, except perhaps on directions, which is the basis of gravitational lensing.

Cheers -- sylas

9.
Jan 13, 2010 ### Ich

No, the Milne model is massless, filled only with "expanding" test particles. Consequently, it has zero density and is negatively curved. Redshift is purely Doppler; there are no gravitational effects. I'm talking about homogeneous static dust, like a closed universe at maximum expansion. Just to illustrate that gravitational effects definitely do not cancel out. I'm not talking about a net blueshift. I said that gravitation results in a blueshift, which is outweighed by the Doppler redshift in an expanding universe. That's not how I read Wallace's post, and I wouldn't agree either. As I understand it, Wallace and I are claiming that the "potential" approach is valid in a homogeneous universe. (Quote: "This is true even if we ignore structure (i.e. the simplest homogeneous FRW model).") Yep, but the underlying physics is more accessible in the Newtonian formulation. You can see easily that it's exactly the matter between two points which accelerates them, not some other dubious effect. Again: perfectly homogeneous dust (or DE, for that matter) does not mean that there is no potential difference. It means that, by choosing an origin for static coordinates, you can define where the global minimum is. Every photon being observed at r=0 comes from a higher potential in that picture.

10. Jan 13, 2010 ### Wallace

Edit: Crossed posts with Ich. We seem to be in good agreement though... Right, but when you add a homogeneous matter distribution what do you find? (Edit: missed seeing that you suggest the Milne model is at critical density; as Ich points out, it is empty, there is no dust in it. In the Milne model nothing ever slows down, all proper velocities remain fixed.) You find that the more matter you add, the less redshift you see for objects a fixed distance from you (however you define that). This is because of the effects of gravity "adding a blueshift" in some loosely defined way as the photon travels. I think we are talking at cross purposes.
Ich explained how you get a component of the redshift which is gravitational (and in fact this component reduces the redshift), not how gravity gives you a net blueshift.

Last edited: Jan 13, 2010

11. Jan 13, 2010 ### Chalnoth

After the collapse has begun, sure. But our universe is expanding.

12. Jan 13, 2010 ### sylas

Sorry! You are quite right; I mixed up the names of my models. I meant what is sometimes called the "Einstein-de Sitter" model, which is confusing because neither Einstein nor de Sitter proposed it. I meant "dust", with mass, at critical density; not the massless test particles of the Milne model. My mistake. Ah! I had taken the "static" to mean no peculiar motions, sometimes taken as the defining quality of "dust". My apologies again. Yes, this model will contract from that static starting point, and you will have blueshifts. The gravitational effects can be considered gravitational in the sense Wallace described, and I think we are all on the same page with that. My original remark about "cancellation" was strictly intended to refer to the effect of a photon passing near a massive object, in an inhomogeneous universe. Fritz Zwicky considered whether something like this could work (it is a form of "tired light" model). But it doesn't work. A localized patch of higher-density matter along the photon's path has no net effect. You can think of the photon being blueshifted as it moves into the denser local region, and then redshifted as it moves back out again, with net cancellation as if that intervening clump of matter had not been there at all. Apart from a change in direction, possibly, as in gravitational lensing.

Cheers -- sylas

13. Jan 13, 2010 ### Chalnoth

Well, actually this is only the case for matter domination. In the case of some form of dark energy, there is some net effect, because gravitational potentials decay with time: the photon has less of a well to climb out of on the way out than in. This is why I added the point that on average, due to the various underdensities and overdensities of the universe, these effects tend to cancel. In a more detailed analysis, they don't cancel exactly, but instead have some extra directional variation as a result (there's still no average effect when taken over the entire sky).

Last edited: Jan 13, 2010

14. Jan 13, 2010 ### sylas

That's an interesting idea... I had not thought of that. The effect would have to be phenomenally tiny in our universe, but I see how it could work. I wouldn't like to try and calculate it, however!

Cheers -- sylas

15. Jan 13, 2010 ### Chalnoth

Good stuff! It's known as the Integrated Sachs-Wolfe effect, and basically it slightly increases the fluctuations in the CMB at large scales (at small scales the effect cancels more).

16. Jan 13, 2010 ### Ich

That's why Doppler redshift dominates. In fact, there is additional redshift due to a negative dark energy potential. Now I understand why you meant that the effects cancel on average. What cancels are the inhomogeneities (except our own cluster, of course). I'm talking about the total matter distribution, which adds a blueshift component to incoming light.

17. Jan 13, 2010 ### Chalnoth

That doesn't make any sense to me. If you take, for instance, a closed universe, and take two times equidistant from the turnover point, there will be no net redshift or blueshift between them, whereas by your claim, one would expect a net blueshift.

18. Jan 13, 2010 ### Ich

Sorry, I'm not sure I understand that phrase. If you mean a photon emitted dt before maximum expansion and received the same dt after maximum expansion: the distance r is then 2dt*c.
In this case, you have a gravitational blueshift of $$\frac{2 \pi G}{3 c^2} \rho r^2$$ The coordinate acceleration of the emitter is $$\frac{4 \pi G}{3} \rho r$$ Since emitter and observer were at relative rest at turnaround, and the signal was sent dt = r/2c before, the relative velocity at the time of emission was $$dv=\frac{4 \pi G}{3} \rho r*dt = \frac{2 \pi G}{3c} \rho r^2$$ giving a redshift of $$\frac{2 \pi G}{3c^2} \rho r^2$$ which exactly cancels the blueshift above. Really, I'm not claiming new physics. This is simply a local Newtonian approximation to an FRW metric - weak field, small velocity, no pressure. 19. Jan 13, 2010 ### Chalnoth How do they have a net relative velocity, though? At emission, the emitter would have been moving away from the observer. But at the same time, since the system is symmetric, the observer would be moving towards the emitter by the same amount when the photon was observed, canceling that redshift. 20. Jan 14, 2010 ### Chronos Gravity works both ways, matter on the far side counters gravitational effects from the near side. A net zero effect. Expansion is the only logical explanation. 21. Jan 14, 2010 ### Wallace You are double counting somehow. Lets look at this in two ways. The simplest way is to place to origin of some co-ordinates at the reciever such that they remain fixed. Imagine a spherical region around them with the emmitter at the edge of that region. When they fire the photon towards the centre they are moving away from the reciever. Since the reciever is always fixed, this means there is a redshift from the original motion so it doesn't matter that later on the emmitter starts moving towards the observer when the Universe begins contracting. The gravitational blueshift, in this case, exactly cancels this original redshift. It looks like this: Motion at emmission causing a Doppler redshift Obs . . . . . . Em -> Photon is falling towards the bottom of the potential well, causing a blueshift Obs . . . . . . 
<< Photon

We can instead define the co-ordinates to be centred on the emitter. In this case it remains fixed. If you think about this, it means that compared to the rest frame of the emitter, the observer will be moving towards the emitter when the photon is observed. Thus you will have a blueshift due to motion. This might be confusing, until you realise that in these co-ordinates the photon is moving away from the origin, climbing out of the potential well we have defined, and therefore in this system the effect of gravity is to cause a redshift, in this case exactly cancelling the Doppler blueshift. It looks like this:

Motion at reception, causing Doppler blueshift
Em . . . . . . <- Obs

Photon is climbing out of potential well, causing gravitational redshift
Em . . . . . >> Photon

We could also place the origin between the emitter and observer. In this case the relative motion cancels out, so there is no Doppler contribution. But also, we now define the bottom of the potential well to be between the two, so the photon picks up a blueshift falling in, which exactly cancels the redshift of it climbing out. It looks like this:

Motion at emission
<-Obs . . . . . . O . . . . . . Em ->

Motion cancelled at reception, no net Doppler effect
Obs -> . . . . . . O . . . . . . <- Em

Photon falls into potential well, gaining energy
. . . . . . O . . . . . . << Photon

But then loses the same amount climbing out again
<< Photon . . . . . . O . . . . . .

This might sound like a bit of mathemagic, but it is all just co-ordinate tricks with classical physics. As with any problem to do with energy, you have to be very careful about where you are defining the arbitrary zero point, and make sure you are referencing everything consistently with respect to that.

22. Jan 14, 2010

### Chalnoth

In any case, these things are vastly easier to understand if you just take them in co-moving coordinates, where both the emitter and observer are stationary (up to local peculiar velocities).
In co-moving coordinates, the only source of redshift is the overall expansion, and so the redshift is simply: $$z + 1 = \frac{a_{obs}}{a_{emit}}$$

23. Jan 14, 2010

### Wallace

But hang on, we know that we can always just use these co-ordinates. The question is what the hell do they mean? The OP asked how motion and/or gravity is responsible for causing redshift, which is a very reasonable question. Simply stating the above equation tells you how to calculate it, but it doesn't tell you what that means and doesn't answer the question. Reducing everything to the effect of 'the overall expansion' leaves you at square one; what precisely is that motion, and how does it cause redshift? In fact the 'motion' implied by looking at da/dt is nothing like the intuitive motion we see in day to day life, since it encodes gravitational effects as well. This is very convenient for cosmologists, since it reduces everything to the single function a(t), but it is horrible for people new to the area trying to work out what that function means in terms that are familiar. Ich and I explained how you can understand the interplay between motion and gravity by looking at how the more familiar Newtonian physics gives you the same answer, but more obviously demonstrates how motion and gravity are both at work, even in a homogeneous universe. Writing down a simple relation, and really understanding what that means, are two vastly different things. Last edited: Jan 14, 2010

24. Jan 14, 2010

### Chalnoth

I guess I just don't see those sorts of questions as very productive. There are so vastly many ways of looking at the situation that one can't say that they mean any one particular thing in these terms. So I'd rather just go by the simplest explanation, which is that the photons are expanded along with space.

25. Jan 14, 2010

### Wallace

Well then I have to disagree.
When you say 'photons are expanded along with space' you are talking about something that is only true for one specific set of co-ordinates, and you also imply a false causality: that there is a physical effect called 'expansion of space' which causes photons to stretch. Simply saying 'there are many ways of looking at this, so none of them mean anything' is not very useful. In fact, as has been explained, the physics is universal, and can be seen readily by looking at the Newtonian picture, to which all co-ordinate descriptions will converge for small distances. The co-ordinates are what are malleable, yet you want to fix on just one co-ordinate system and force the physics to conform to that (since you remove gravity and motion and invent a new placeholder fictitious effect which acts for both). I'm afraid that is bass-ackwards. As can be readily evidenced in this forum, blanket use of this phrase without context leads to much wailing and gnashing of teeth, such as 'why don't galaxies get expanded by space?' and 'does the expansion of space drive electrons further from the nucleus of atoms?'. These are reasonable questions to ask when you've been told to just think of everything in terms of some ill-defined 'expansion of space', but they are easily done away with when you break it down into the simple underlying physics. Again, I go back to the OP. It was asked whether motion and/or gravity is responsible for the observed redshift of galaxies. How does writing down 1 + z = a/a_0 and saying 'the photons get stretched by expanding space' answer this question? Redshift can be understood in simple, well understood terms like motion and gravity; I see no reason to force people to abandon these intuitive notions in favour of a co-ordinate dependent mathematical function which has no universal physical meaning. It depends on what we are trying to help people with.
If you want to learn how to calculate cosmological quantities, then you need to learn the maths behind co-moving co-ordinates, and learn the easiest way to make calculations. If someone wants a good non-mathematical intuitive understanding in terms of familiar concepts, then this is clearly not the best way to go. Last edited: Jan 14, 2010
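As a footnote to the thread: the exact cancellation Ich derived in post 18 (gravitational blueshift versus the Doppler redshift built up over the light-travel time) is easy to verify numerically. A minimal sketch; the density and separation values below are arbitrary illustrative choices, not taken from the thread.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
rho = 9.9e-27   # mass density, kg/m^3 (roughly the critical density)
r = 1e24        # emitter to observer separation in m (arbitrary choice)

# Gravitational blueshift of the photon falling through the sphere
blueshift = 2 * math.pi * G * rho * r**2 / (3 * c**2)

# Doppler redshift: the emitter accelerates at (4*pi*G/3)*rho*r for the
# light-travel time dt = r/(2c) between emission and turnaround
dt = r / (2 * c)
dv = (4 * math.pi * G / 3) * rho * r * dt
redshift = dv / c

print(blueshift, redshift)  # both reduce to 2*pi*G*rho*r^2 / (3*c^2)
```

The two printed numbers agree to floating-point precision, since algebraically both expressions are the same quantity.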
## How to stop Google from giving too much link juice to particular URLs?

We have a product website with separate pages for product details, product images, product videos, and product reviews. We want to design a card for our products which we can use everywhere, i.e. on internal website ads, cross-sell, etc. Below is a sample card. There is a problem that we see here – this will create too many linkages to our product review, images, and videos pages. The most important page for us is the product details page, and we want to give maximum link juice to that page. How can we fix this link juice distribution problem and indicate to Google that product details is the most important link out of all these links? We are apprehensive of doing no-crawl/no-follow as we are not sure if it would solve this issue.

## Morphism of Lie groups $\theta:G\rightarrow H$ giving an equivalence of categories $BG\rightarrow BH$?

Given a morphism of Lie groups $$\theta:G\rightarrow H$$ and a principal $$G$$ bundle $$\pi:P\rightarrow M$$ there are (at least) two ways to assign a principal $$H$$ bundle.

1. See that the morphism of Lie groups $$\theta:G\rightarrow H$$ gives an action of $$G$$ on $$H$$ by $$g.h=\theta(g).h$$. Given an action of $$G$$ on a manifold (a Lie group in this case) $$H$$ there is an associated fibre bundle $$P\times_G H\rightarrow M$$ with fibre $$H$$. This gives a principal $$H$$ bundle.
2. For the principal bundle $$\pi:P\rightarrow M$$, we can find an open cover $$\{U_\alpha\}$$ of $$M$$ and (transition) maps $$g_{\alpha\beta}:U_{\alpha\beta}\rightarrow G$$ satisfying the cocycle condition $$g_{\alpha\beta}g_{\beta\gamma}=g_{\alpha\gamma}$$ on $$U_\alpha\cap U_\beta\cap U_\gamma$$. Then the compositions $$\tau_{\alpha\beta}=\theta\circ g_{\alpha\beta}:U_{\alpha\beta}\rightarrow G\rightarrow H$$ also satisfy the cocycle condition $$\tau_{\alpha\beta}\tau_{\beta\gamma}=\tau_{\alpha\gamma}$$ on $$U_\alpha\cap U_\beta\cap U_\gamma$$.
One can then produce a principal $$H$$ bundle over $$M$$ given this open cover $$\{U_\alpha\}$$ of $$M$$ and smooth maps $$\tau_{\alpha\beta}:U_\alpha\cap U_\beta\rightarrow H$$ satisfying the cocycle condition. This gives a principal $$H$$ bundle. It is a good exercise (that I have not tried) to check that the principal $$H$$ bundles obtained from the above two methods are (naturally) isomorphic. Given a Lie group $$G$$, let $$BG$$ denote the category of principal $$G$$ bundles. Objects are principal $$G$$ bundles and morphisms are $$G$$-equivariant morphisms. Given a morphism of Lie groups $$\theta:G\rightarrow H$$, the above construction gives a functor (at the level of objects) $$B\theta:BG\rightarrow BH$$. It is not difficult to see that a $$G$$-equivariant map induces an $$H$$-equivariant map. This gives a functor. I am trying to understand what we can say about $$\theta:G\rightarrow H$$ if we know that $$B\theta:BG\rightarrow BH$$ is an equivalence of categories. Does it have to be a diffeomorphism? Any comments are welcome.

## Server Hunter – Giving away $1,000 worth of VPSs

To celebrate the launch of ServerHunter.com, we partnered with three hosting providers to give away three annual subscriptions to 3 powerful KVM VPSs worth over $1,000 USD. Head over to www.serverhunter.com/giveaway/ to read the full mechanics of the contest. This giveaway will run from the 14th of January at 00:00:01 UTC until the 14th of February at 23:59:59 UTC. Good luck!

## XRDP: connection problem, giving up (ubuntu 18.04 server)

Here's the exact message it's giving me: (screenshot not preserved). I've tried opening up an ssh connection on port 3350, and also tried just about every "solution" online for this. I did get connected with UltraVNC earlier, but it's so laggy and hard to use that I decided I needed something better. I really like using RDP, but if you know something else I can use fullscreen with no lag, let me know.
## MORPH TARGET INFLUENCES continuously keeps giving me UNDEFINED when animating object in three.js

So I exported this simple 2D animation (a circle that morphs into a triangle) as a glTF file into my three.js project. But when I run it, I get this error: "Uncaught TypeError: Cannot set property '0' of undefined at render". This error comes from this line of code: mesh.morphTargetInfluences[ 0 ] = Math.sin(delta) * 20.0; Looking at my code, I made sure my scene has my mesh. I also logged the mesh geometry to see that it is not undefined. I get no errors when I set my morph targets to true either. But when I do console.log(mesh.morphTargetInfluences) I do get UNDEFINED, which I don't understand, since all the mesh geometry is there.

<html>
<head>
<title>threejs - models</title>
<style>
body{ margin: 0; overflow: hidden; }
</style>
</head>
<body>
<canvas id="myCanvas"></canvas>
<script src="js/three.js"></script>
<script src="js/GLTFLoader.js"></script>
<script>
var renderer, scene, camera, myCanvas = document.getElementById('myCanvas');
var mesh;

//RENDERER
renderer = new THREE.WebGLRenderer({ canvas: myCanvas, antialias: true });
renderer.setClearColor(0xffffff);
renderer.setPixelRatio(window.devicePixelRatio);
renderer.setSize(window.innerWidth, window.innerHeight);

//CAMERA
camera = new THREE.PerspectiveCamera(35, window.innerWidth / window.innerHeight, 0.1, 1000);

//SCENE
scene = new THREE.Scene();

//LIGHTS
var light = new THREE.AmbientLight(0xffffff, 0.5);
scene.add(light);
var light2 = new THREE.PointLight(0xffffff, 0.5);
scene.add(light2);

var loader = new THREE.GLTFLoader();
loader.load('morphObj.gltf', function ( gltf ) {
    gltf.scene.traverse( function ( node ) {
        if ( node.isMesh ) {
            mesh = node;
            mesh.material.morphTargets = true;
            console.log(mesh.geometry);
        }
    } );
    //mesh.material.morphTargets = true;
    console.log(mesh.morphTargetInfluences);
    //console.log(mesh.material.morphTargets);
    //mesh.material = new THREE.MeshLambertMaterial();
    scene.add( mesh );
    mesh.position.z = -10;
});

//RENDER LOOP
render();
var delta = 0;
var prevTime = Date.now();
function render() {
    delta += 0.1;
    if ( mesh !== undefined ) {
        console.log("mesh is not undefined!");
        mesh.rotation.y += 0.01;
        //animation mesh
        mesh.morphTargetInfluences[ 0 ] = Math.sin(delta) * 20.0;
    }
    renderer.render(scene, camera);
    requestAnimationFrame(render);
}
</script>
</body>
</html>

I'm very new to three.js, so for sure I'm forgetting something in my program, but I don't know what that is. I will really appreciate your help, guys.

## Countifs and Sumproduct is giving Different results

I am using Excel 2010. I used COUNTIFS instead of SUMPRODUCT and, to my surprise, the results are different; the SUMPRODUCT results are accurate. So I would like your help in understanding what I did with COUNTIFS that kept me from getting accurate results. The data and the results are provided in the links below with the formulas. Could you please help me understand where I am doing it wrong. Regards, Kiran

## Connection on a principal bundle $P(M,G)$ giving a functor on $\mathcal{P}_1(M)$

Question : Let $$P(M,G)$$ be a principal $$G$$ bundle. How does a connection on $$P(M,G)$$ define a functor $$\text{Hol}: \mathcal{P}_1(M)\rightarrow BG$$ (here $$BG$$ is the Lie groupoid whose morphism set is $$G$$ and whose object set is $$\{*\}$$)?

I have seen in some places that giving a connection on $$P(M,G)$$ is giving a map $$P_1(M)\rightarrow G$$. Here $$P_1(M)$$ is a collection of special types of paths. This is the morphism set of what is called the path groupoid of $$M$$, usually denoted by $$\mathcal{P}_1(M)$$, whose objects are elements of $$M$$. Once this is done, see the Lie group $$G$$ as a Lie groupoid $$BG$$ (I know this is a bad notation but let me use this for this time) whose set of objects is a singleton and whose set of morphisms is $$G$$. This would then give a functor $$\mathcal{P}_1(M)\rightarrow BG$$.
They say giving a connection means giving a functor $$\mathcal{P}_1(M)\rightarrow BG$$ with some good conditions. Then, to make sense of $$2$$-connections, they just have to consider $$\mathcal{P}_2(M)\rightarrow \text{some category}$$. This is the set up. I do not understand (I could not search it better) how giving a connection on $$P(M,G)$$ gives a map $$\text{Hol}:P_1(M)\rightarrow G$$. For each path $$\gamma$$ in $$M$$ they associate an element of $$G$$ and call it the holonomy of that path $$\gamma$$. They say it is given by integrating forms along paths. All I know is that a connection on $$P(M,G)$$ is a $$\mathfrak{g}$$ valued $$1$$-form on $$P$$ with some extra conditions. Suppose I have a path $$\gamma$$ on $$M$$; how do I associate an element of $$G$$? Is it $$\int_{\gamma}\omega$$? How do I make sense of this? It is not clear how I should see this, as $$\omega$$ is a form on $$P$$ and $$\gamma$$ is a path on $$M$$. To make sense of this, there are two possible ways I can think of.

• I have to pull back the path $$\gamma$$ which is on $$M$$ to a path on $$P$$, so that both the differential form and the path are in the same space.
• I have to push forward $$\omega$$ to a (collection of) form(s) on $$M$$, so that both the differential form and the path are in the same space.

Given a path $$\gamma:[0,1]\rightarrow M$$ with $$\gamma(0)=x$$, fix a point $$u\in \pi^{-1}(x)$$. Then the connection gives a unique path $$\widetilde{\gamma}$$ in $$P$$ whose starting point is $$u$$ such that the projection of $$\widetilde{\gamma}$$ along $$\pi$$ is $$\gamma$$. The problem here is that we have to fix a point $$u$$. Only then can we get a curve. It can happen that any two points of $$\pi^{-1}(x)$$ give the same result, but I am not sure if that is true. I mean, let $$\widetilde{\gamma}_u,\widetilde{\gamma}_v$$ be lifts of $$\gamma$$ fixing $$u\in \pi^{-1}(x)$$ and $$v\in \pi^{-1}(x)$$ respectively.
Does it then happen that $$\int_{\widetilde{\gamma}_u}\omega=\int_{\widetilde{\gamma}_v}\omega$$? Even if this is the case, what does it mean to integrate a $$\mathfrak{g}$$ valued $$1$$-form along a path? How is it defined? I guess it should give an element $$A$$ of $$\mathfrak{g}$$ (just like integrating an $$\mathbb{R}$$ valued $$1$$-form along a path gives an element of $$\mathbb{R}$$). Do we then take the image of $$A$$ under $$\text{exp}:\mathfrak{g}\rightarrow G$$ to get an element of $$G$$? We can declare this to be $$\int_{\gamma}\omega$$. Is this how we associate an element of $$G$$ to a path $$\gamma$$ in $$M$$?

Otherwise, given $$\omega$$ on $$P$$, using trivializations, we can get an open cover $$\{U_i\}$$ of $$M$$ and $$\mathfrak{g}$$ valued $$1$$-forms $$\omega_i$$ on $$U_i$$ with some compatibility on intersections. We can consider $$\gamma_i:[0,1]\bigcap \gamma^{-1}(U_i)\rightarrow U_i$$. These $$\gamma_i$$ are paths in $$U_i$$ and the $$\omega_i$$ are $$1$$-forms on $$U_i$$. So $$\int_{\gamma_i}\omega_i$$ makes sense. This gives a collection of elements $$\{A_i\}$$ of $$\mathfrak{g}$$, and maybe all these come from a single element $$A\in \mathfrak{g}$$; seeing its image under $$\text{exp}:\mathfrak{g}\rightarrow G$$ gives an element in $$G$$. We can then declare it to be $$\int_{\gamma}\omega$$. Is this how we associate an element of $$G$$ to a path $$\gamma$$ in $$M$$?

Error : Exception in thread "main" java.util.concurrent.CancellationException: Task was cancelled.
at com.google.common.util.concurrent.AbstractFuture.cancellationExceptionWithCause(AbstractFuture.java:1237)
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:524)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:487)
at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:83)
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:62)
at com.google.api.gax.longrunning.OperationFutureImpl.get(OperationFutureImpl.java:127)

## Giving birth to a child in UK [migrated]

I have been in the UK on a visit visa since the 13th of October 2018, with my UK husband. He got his passport from his dad, not by birth. Now I am pregnant and my due date is the 7th of September 2019. I have to go to my home country after 6 months from my first entry, so I have to be in my home country by April. The thing is that I want to deliver my baby in the UK to grant him citizenship. Regarding the spouse visa, we will be able to submit it after July, so I am afraid we might get a reply after my delivery. So please help.

## elasticsearch-6.5.4 unable to start giving jvm errors on Ubuntu 17.0

Good morning all, Merry Christmas and good luck. I installed Linux Ubuntu 17.0 and then I installed elasticsearch-6.5.4, followed by Java 11.0. However, when I start elasticsearch, it keeps giving me errors related to the JVM. Could you please help me? I was able to run elasticsearch on my old machine with Ubuntu 14.0. I really appreciate your kind help because I am new to elasticsearch-6.5.4. Thank you.
venu@venu-INVALID:~/elasticsearch-6.5.4/bin$ java -version
java version "11.0.1" 2018-10-16 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.1+13-LTS)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.1+13-LTS, mixed mode)

venu@venu-INVALID:~/elasticsearch-6.5.4/bin$ ./elasticsearch
Exception in thread "main" java.nio.file.AccessDeniedException: /home/venu/elasticsearch-6.5.4/config/jvm.options
at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:215)
at java.base/java.nio.file.Files.newByteChannel(Files.java:370)
at java.base/java.nio.file.Files.newByteChannel(Files.java:421)
at java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
at java.base/java.nio.file.Files.newInputStream(Files.java:155)
at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:60)

venu@venu-INVALID:~/elasticsearch-6.5.4/bin$ sudo ./elasticsearch
[sudo] password for venu:
Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Java HotSpot(TM) 64-Bit Server VM warning: UseAVX=2 is not supported on this CPU, setting it to UseAVX=1
[2018-12-25T09:36:22,232][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [unknown] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:140) ~[elasticsearch-6.5.4.jar:6.5.4]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:127) ~[elasticsearch-6.5.4.jar:6.5.4]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.5.4.jar:6.5.4]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.5.4.jar:6.5.4]
at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.5.4.jar:6.5.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:93) ~[elasticsearch-6.5.4.jar:6.5.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:86) ~[elasticsearch-6.5.4.jar:6.5.4]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:103) ~[elasticsearch-6.5.4.jar:6.5.4]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170) ~[elasticsearch-6.5.4.jar:6.5.4]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:333) ~[elasticsearch-6.5.4.jar:6.5.4]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:136) ~[elasticsearch-6.5.4.jar:6.5.4]
... 6 more
venu@venu-INVALID:~/elasticsearch-6.5.4/bin$
# Department of Mathematics

Seminar Calendar for events the day of Thursday, February 13, 2014.

11:00 am in 241 Altgeld Hall, Thursday, February 13, 2014

#### Small gaps between primes

###### James Maynard (Univ. Montreal)

Abstract: It is believed that there should be infinitely many pairs of primes which differ by 2; this is the famous twin prime conjecture. More generally, it is believed that for every positive integer $m$ there should be infinitely many sets of $m$ primes, with each set contained in an interval of size roughly $m\log{m}$. Although proving these conjectures seems to be beyond our current techniques, recent progress has enabled us to obtain some partial results. We will introduce a refinement of the `GPY sieve method' for studying these problems. This refinement will allow us to show (amongst other things) that $\liminf_n(p_{n+m}-p_n)<\infty$ for any integer $m$, and so there are infinitely many bounded length intervals containing $m$ primes.

1:00 pm in Altgeld Hall 347, Thursday, February 13, 2014

#### Moduli spaces of constant curvature spacetimes

###### Jeff Danciger (University of Texas - Austin)

Abstract: A Margulis spacetime is the quotient of three-dimensional space by a free group of affine transformations acting properly discontinuously. Each of these manifolds is equipped with a flat Lorentzian metric compatible with the affine structure.
I will survey some recent results, joint with Francois Gueritaud and Fanny Kassel, about the geometry, topology, and deformation theory of these flat spacetimes. In particular, we give a parameterization of the moduli space in the same spirit as Penner's cell decomposition of the decorated Teichmuller space of a punctured surface. I will also discuss connections with the negative curvature (anti de Sitter) setting.

2:00 pm in 140 Henry Administration Building, Thursday, February 13, 2014

#### Some proofs of quadratic reciprocity

###### Dane Skabelund (UIUC Math)

Abstract: The law of quadratic reciprocity, first completely proven by Gauss in 1801, is perhaps the most proved theorem in number theory. I will present several proofs of this theorem which I find interesting, some of which I hope not everyone will have seen before.

4:00 pm in 245 Altgeld Hall, Thursday, February 13, 2014

#### Complex dynamics in several variables

###### Eric Bedford (Indiana University)

Abstract: The study of the dynamics of polynomial and rational maps of C has developed into a beautiful theory. The analogous results in higher dimension are still in the early stages of development. We will give a general survey of the behavior of polynomial and rational maps in higher dimensions. Our focus is on self-mappings of complex Euclidean space.
# [NTG-context] Strange behaviour with tikz Mojca Miklavec mojca.miklavec.lists at gmail.com Thu Sep 11 10:05:47 CEST 2008 On Wed, Sep 10, 2008 at 7:44 PM, Eric DÉTREZ wrote: > Hello again > > I don't understand a strange thing. > > Patterns in tikz become black in some case. > > Here is a minimal example : > *************************************** > \usemodule[tikz] > \usetikzlibrary[patterns] > \starttext > > blabla > > \chapter {blabla} > > \starttikzpicture > \draw [pattern=north west lines](0,0) rectangle +(1,2); > \stoptikzpicture > > \stoptext > ************************************* > > Without any text before the chapter or without the chapter command I > see the patterns. > With these commands I just see a black rectangle. > > Can I get my patterns back ? What version of ConTeXt and TikZ are you using? I first tried your example with TeX Live 2008, and it looked like a ConTeXt-related problem in TikZ. The patterns only worked on the first page. But then I checked with the latest version of both ConTeXt and TikZ (http://minimals.contextgarden.net/current/modules/t-tikz/ - you can fetch it with rsync) rsync -av rsync://contextgarden.net/minimals/current/modules/t-tikz/ place-on-your-computer And it worked fine. Mojca
# Linear Programming and Optimization

What is linear programming? A two-variable linear programming problem has a linear function f(x,y) in two variables, called the objective function, along with a system of linear inequalities called constraints, which defines the feasible solution set as described in systems of inequalities with two variables. Linear programming is one of the methods of optimization where there is a need to find values of some variables x, y so that a function f of the variables x, y has a maximum or minimum value, depending on the application to solve. Possible applications of linear programming may be found in engineering, agriculture, medicine, finance, economics, etc. Solutions to a linear programming problem may be found using the following theorems:

Theorem 1
The solution of a linear programming problem, if it exists, must occur at a vertex of the feasible set associated with the constraints of the problem. If the solution associated with the constraints and the objective function occurs at two adjacent vertices, then all points on the line segment joining these two vertices are solutions.

Theorem 2
In a linear programming problem with a feasible set A and objective function f(x,y) = a x + b y, the following cases may occur:
1) If A is bounded, then f(x,y) has both a maximum and a minimum at the vertices of A.
2) If x ≥ 0 and y ≥ 0, if A is unbounded, and both a and b are positive, then f(x,y) has a minimum at one (or more) of the vertices of A.

Examples with detailed solutions and explanations are presented.
Example 1: Find maximum value
Find the values of x and y that make z(x , y) = 2 x + 4 y maximum subject to the conditions shown below, and find the value of z at these values of x and y:
$\begin{cases} \ x \ge 0 \\ \ y \ge 0 \\ \ y \le x + 1 \\ \ 4y + x \le 10 \\ \ y - x \ge - 3 \\ \end{cases}$
Solution to Example 1:
The solution set of the system of inequalities given above and the vertices of the feasible set obtained have already been found in Example 3 of "solving systems of inequalities in two variables". The solution set is shown in the figure below.
The vertices were also found (in the same example) to be: A(0 , 0), B(0 , 1), C(6/5 , 11/5), D(22/5 , 7/5), E(3 , 0)
We now evaluate the function z(x , y) = 2 x + 4 y at all 5 vertices of the feasible set.
• at A: the x and y coordinates of A are x = 0 and y = 0. Substitute x and y by 0 and 0 respectively in the linear function z = 2 x + 4 y to obtain z(A) = 2 (0) + 4 (0) = 0
• at B: the x and y coordinates of B are x = 0 and y = 1. Substitute x and y by 0 and 1 respectively in the linear function z = 2 x + 4 y to obtain z(B) = 2 (0) + 4 (1) = 4
• at C: the x and y coordinates of C are x = 6/5 and y = 11/5. Substitute x and y by 6/5 and 11/5 respectively in the linear function z = 2 x + 4 y to obtain z(C) = 2 (6/5) + 4 (11/5) = 11.2
• at D: the x and y coordinates of D are x = 22/5 and y = 7/5. Substitute x and y by 22/5 and 7/5 respectively in the linear function z = 2 x + 4 y to obtain z(D) = 2 (22/5) + 4 (7/5) = 14.4
• at E: the x and y coordinates of E are x = 3 and y = 0. Substitute x and y by 3 and 0 respectively in the linear function z = 2 x + 4 y to obtain z(E) = 2 (3) + 4 (0) = 6
The maximum value of z occurs at vertex D with coordinates x = 22/5 and y = 7/5.
Hence the solution of the given problem is: z has a maximum value of 14.4, attained at x = 22/5 and y = 7/5.

Example 2: Find minimum value
Find the minimum value of z(x , y) = 4 x + 7 y where x ≥ 0 and y ≥ 0, subject to the conditions
$\begin{cases} \ 2x + 3y \ge 6 \\ \ x - y/3 \le 4 \\ \ -2x+2y \le 8 \\ \ x + (5/2)y \le 13 \\ \end{cases}$
Solution to Example 2:
The feasible set of the system of inequalities is shown below. A vertex is determined by solving the system of equations corresponding to the equations of the lines whose intersection is the vertex to be found.
Solve the system of equations 2x + 3y = 6 and x = 0 to find A(0 , 2)
Solve the system of equations x = 0 and -2x + 2y = 8 to find B(0 , 4)
Solve the system of equations -2x + 2y = 8 and x + (5/2)y = 13 to find C(6/7 , 34/7)
Solve the system of equations x + (5/2)y = 13 and x - y/3 = 4 to find D(86/17 , 54/17)
Solve the system of equations x - y/3 = 4 and y = 0 to find E(4 , 0)
Solve the system of equations y = 0 and 2x + 3y = 6 to find F(3 , 0)
We now evaluate z(x , y) = 4 x + 7 y at each vertex found above:
at A(0 , 2): z = 4(0) + 7(2) = 14
at B(0 , 4): z = 4(0) + 7(4) = 28
at C(6/7 , 34/7): z = 4(6/7) + 7(34/7) = 37.4
at D(86/17 , 54/17): z = 4(86/17) + 7(54/17) = 42.5
at E(4 , 0): z = 4(4) + 7(0) = 16
at F(3 , 0): z = 4(3) + 7(0) = 12
Function z = 4 x + 7 y has a minimum value of 12, which occurs at x = 3 and y = 0 (vertex F).

Example 3: Many solutions
Find the maximum value of the function z = 3 x + 3 y where x ≥ 0 and y ≥ 0, subject to the conditions
$\begin{cases} \ x \le 6 \\ \ y \le 7 \\ \ y \le - x + 9 \\ \end{cases}$
Solution to Example 3:
The feasible set of the system of inequalities given above is shown below. Its vertices are (see Examples 1 and 2 above on how to find them):
A(0 , 0) B(0 , 7) C(2 , 7) D(6 , 3) E(6 , 0)
Evaluate the function z = 3 x + 3 y at each vertex:
at A(0 , 0) : z = 3 (0) + 3 (0) = 0
at B(0 , 7) : z = 3 (0) + 3 (7) = 21
at C(2 , 7) : z = 3 (2) + 3 (7) = 27
at D(6 , 3) : z = 3 (6) + 3 (3) = 27
at E(6 , 0) : z = 3 (6) + 3 (0) = 18
Function z = 3 x + 3 y has a maximum value at two vertices: C and D. In fact, all points between C and D on the line y = - x + 9 give z = 27.
Explanation: z may be written as z = 3 (x + y). The equation of the line through C and D, y = - x + 9, may be written as x + y = 9. Substitute x + y by 9 in z = 3 (x + y) to obtain z = 3 (9) = 27, which is the maximum for all points on the line y = - x + 9.
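The vertex procedure used in all three examples can be automated: intersect each pair of constraint boundary lines, keep only the intersection points satisfying every constraint (these are the vertices of Theorem 1), and evaluate the objective at each. Below is a minimal Python sketch; encoding each constraint as a row (a, b, c) meaning a x + b y ≤ c is a convention chosen here, not part of the examples.

```python
from itertools import combinations

def vertices(constraints, tol=1e-9):
    """Feasible vertices of {(x, y) : a*x + b*y <= c for each (a, b, c)}."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < tol:           # parallel boundary lines: no vertex
            continue
        # Cramer's rule for the intersection of the two boundary lines
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + tol for a, b, c in constraints):
            pts.append((x, y))
    return pts

def optimize(obj, constraints):
    """Minimum and maximum of obj over the feasible vertices (Theorem 1)."""
    vals = [obj(x, y) for x, y in vertices(constraints)]
    return min(vals), max(vals)

# Example 1: maximize 2x + 4y  (x >= 0 becomes -x <= 0, etc.)
ex1 = [(-1, 0, 0), (0, -1, 0), (-1, 1, 1), (1, 4, 10), (1, -1, 3)]
ex1_min, ex1_max = optimize(lambda x, y: 2 * x + 4 * y, ex1)

# Example 2: minimize 4x + 7y
ex2 = [(-1, 0, 0), (0, -1, 0), (-2, -3, -6), (1, -1 / 3, 4),
       (-2, 2, 8), (1, 5 / 2, 13)]
ex2_min, ex2_max = optimize(lambda x, y: 4 * x + 7 * y, ex2)

# Example 3: maximize 3x + 3y (the tie at two vertices means many solutions)
ex3 = [(-1, 0, 0), (0, -1, 0), (1, 0, 6), (0, 1, 7), (1, 1, 9)]
ex3_min, ex3_max = optimize(lambda x, y: 3 * x + 3 * y, ex3)

print(ex1_max, ex2_min, ex3_max)  # matches the three worked solutions
```

Feeding the same routine Fraction values from the fractions module yields exact rational vertices, which is convenient for checking values like 722/17 at vertex D of Example 2.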
The multi component spreading model adds a second bound state $$q_{i,2}$$ to the Langmuir model (see Section Multi Component Langmuir) and allows the exchange between the two bound states $$q_{i,1}$$ and $$q_{i,2}$$. In the spreading model, a second state of the bound molecule (e.g., a different orientation on the surface or a different folding state) is added. The exchange of molecules between the two states is allowed and, since the molecules can potentially bind in both states at the same binding site, competitive effects are present. This is different from the Bi-Langmuir model, in which another type of binding site is added and no exchange between the different bound states is considered (see Section Multi Component Bi-Langmuir). For all components $$i = 0, \dots, N_{\text{comp}} - 1$$ the equations are given by \begin{split}\begin{aligned} \frac{\mathrm{d} q_{i,1}}{\mathrm{d} t} &= \left( k_a^A\: c_{p,i} - k_{12} q_{i,1} \right) q_{\text{max},i}^A \left( 1 - \sum_{j=0}^{N_{\text{comp}} - 1} \frac{q_j^A}{q_{\text{max},j}^A} - \sum_{j=0}^{N_{\text{comp}} - 1} \frac{q_j^B}{q_{\text{max},j}^B} \right) - k_d^A q_{i,1} + k_{21} q_{i,2}, \\ \frac{\mathrm{d} q_{i,2}}{\mathrm{d} t} &= \left( k_a^B\: c_{p,i} + k_{12} q_{i,1} \right) q_{\text{max},i}^A \left( 1 - \sum_{j=0}^{N_{\text{comp}} - 1} \frac{q_j^A}{q_{\text{max},j}^A} - \sum_{j=0}^{N_{\text{comp}} - 1} \frac{q_j^B}{q_{\text{max},j}^B} \right) - \left( k_d^B + k_{21} \right) q_{i,2}. \end{aligned}\end{split}
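A minimal way to get a feel for these kinetics is to integrate them for a single component with explicit Euler steps. The rate constants and capacities below are arbitrary illustrative values (not CADET defaults), the pore concentration is held constant, and the sketch is no replacement for CADET's own solver.

```python
# Explicit-Euler sketch of the two-state spreading kinetics for a single
# component at constant pore concentration c_p. All parameter values are
# arbitrary illustrative choices.
k_aA, k_dA = 1.0, 0.1      # adsorption/desorption rates, state 1
k_aB, k_dB = 0.5, 0.05     # adsorption/desorption rates, state 2
k_12, k_21 = 0.2, 0.1      # exchange rates between the two bound states
q_maxA, q_maxB = 1.0, 1.0  # capacities of the two states
c_p = 1.0                  # pore-phase concentration (held constant)

q1, q2 = 0.0, 0.0
dt = 1e-3
for _ in range(20000):     # integrate to t = 20
    # fraction of free binding sites, shared by both states
    free = 1.0 - q1 / q_maxA - q2 / q_maxB
    dq1 = (k_aA * c_p - k_12 * q1) * q_maxA * free - k_dA * q1 + k_21 * q2
    dq2 = (k_aB * c_p + k_12 * q1) * q_maxA * free - (k_dB + k_21) * q2
    q1 += dt * dq1
    q2 += dt * dq2

print(round(q1, 3), round(q2, 3))
```

Because both states draw on the same free-site factor, the total coverage $$q_{1}/q_{\text{max}}^A + q_{2}/q_{\text{max}}^B$$ stays below 1 throughout the integration, which is the competitive effect described above.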
# Surface function for a circle tilted with an angle and then rotating around z axis

1. Nov 14, 2014

### zs96742

My first idea is that this will result in an elliptic torus. The horizontal semi-axis is a = R and the vertical semi-axis is b = R·cos(beta), assuming the tilt (inclination) angle is beta. The distance of the circle from the z-axis is c, and it is a constant. But it does not look like that when I plot the surface in 3D using the elliptic torus equation given on Wolfram MathWorld. The green dot points are generated by rotating the red line around the z axis and then plotting the corresponding circle in 3D space. The surface I created using the torus equation is somehow like the bottom one:
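One way to probe the guess numerically is a sketch like the following (R, c and beta are arbitrarily chosen values, not from the original post). It parametrizes the tilted circle, sweeps it around the z axis, and looks at the generating curve in the ρ–z half-plane. An elliptic torus would cut the z = 0 plane at two radii, c − a and c + a; the swept tilted circle gives only one.

```python
import math

R, c, beta = 1.0, 3.0, math.radians(30)

# A point of the tilted circle (tilt beta about the y axis, center on the
# x axis at distance c). Rotating it about z sweeps the surface, so the
# surface is fully described by (rho, z) of the generating circle.
def rho_z(theta):
    x = c + R * math.cos(theta) * math.cos(beta)
    y = R * math.sin(theta)
    z = R * math.cos(theta) * math.sin(beta)
    return math.hypot(x, y), z

# Both theta = pi/2 and theta = 3*pi/2 land at z = 0 with the SAME radius
# sqrt(c^2 + R^2), whereas a torus cross-section would need two radii there.
(rho1, z1) = rho_z(math.pi / 2)
(rho2, z2) = rho_z(3 * math.pi / 2)
print(rho1, rho2)  # equal, ~sqrt(c**2 + R**2)
```

Since ρ and z both depend on θ only through cos(θ), the pairs ±θ map to the same (ρ, z): the generating curve is an open arc rather than a closed ellipse, so the swept surface has no inner/outer wall and cannot be an elliptic torus. That is consistent with the mismatch seen in the plots.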
# Frank Morley

### Quick Info

Born: 9 September 1860, Woodbridge, Suffolk, England
Died: 17 October 1937, Baltimore, Maryland, USA

Summary: Frank Morley wrote mainly on geometry but also on algebra. He is best known for his theorem about the trisectors of the angles of a triangle.

### Biography

Frank Morley was born into a Quaker family. His mother was Elizabeth Muskett, and his father was Joseph Roberts Morley, who ran a china shop in Woodbridge. Frank attended Seckford Grammar School in Woodbridge before he entered King's College, Cambridge, in 1879, having won an open scholarship. An important influence on his school career was Airy, whom Frank had met through their shared passion for chess; it was Airy's encouragement which saw Frank compete for the scholarship. However, ill health disrupted Morley's undergraduate course, and he was forced to take an extra year. Morley achieved only eighth place in the First Class Honours. To say 'only' here may seem strange, since this was an extremely good result in an examination which saw Mathews first and Whitehead fourth. Richmond writes in [4], however:-

Ill health beyond all doubt had prevented him from doing himself justice, but the disappointment was keen. In middle life he was loath to speak of his student days...

Morley graduated from Cambridge with a B.A. in 1884, but his relatively poor performance meant that he had no hope of a fellowship. He took a job as a schoolmaster, teaching mathematics at Bath College until 1887. This was an important period for Morley, since he was able to overcome his health problems, and with the improvement in health came a renewed confidence in his own mathematical abilities. He settled in the United States and was appointed an instructor at the Quaker College in Haverford, Pennsylvania, in 1887. The following year he was promoted to professor.
At Haverford, Morley worked not with others at the College but with the mathematicians Scott and James Harkness, both also graduates of Cambridge, England, who were at Bryn Mawr, which was close to Haverford. In 1889 he married Lilian Janet Bird, a musician and poet. The marriage produced three sons, Christopher (born in Haverford on 5 May 1890), Felix and Frank, who all went on to become Rhodes scholars. Christopher Darlington Morley (1890-1957) became a novelist, and his works include The Trojan Horse, Kitty Foyle and The Old Mandarin. Felix M Morley (1894-1982) became editor of the Washington Post and was also president of Haverford College from 1940 to 1945. Frank Vigor Morley (1899-1985) became a director of the publishing firm Faber and Faber but was also a mathematician who collaborated with his father for over twenty years. Morley was appointed Professor of Mathematics at Johns Hopkins University in 1900. The University had been a leading one in the United States during the period while Sylvester worked there, but he had left in 1883. The strong graduate programme in mathematics which had been set up there continued to flourish, but by 1900 it had begun to decline, and Morley's appointment was a very definite attempt to reinvigorate the programme. Certainly he proved an excellent choice, leading by example and supervising 48 doctoral students over his years at Johns Hopkins. Coble writes that Morley made it (see [1] or [2]):-

... a cardinal point to have on hand a sufficient variety of thesis problems to accommodate particular tastes and capacities.

We should look now at Morley's mathematical achievements. He wrote papers mainly on geometry but also on algebra. We mentioned that while at Haverford he had collaborated with Harkness. They jointly authored a text, A treatise on the theory of functions, which was published in 1893 and revised as Introduction to the theory of analytic functions in 1898.
These were excellent advanced level texts published at a time when very few such advanced mathematics books were being produced in the United States. Many years later, in 1933, he published Inversive geometry written jointly with his son Frank V Morley. Morley's own favourite among his geometry papers was On the Lüroth quartic curve which he published in 1919. He is perhaps best known, however, for a theorem which is now known as Morley's Theorem:- If the angles of any triangle be trisected, the triangle, formed by the meets of pairs of trisectors, each pair being adjacent to the same side, is equilateral. Morley loved posing mathematical problems and over a period of 50 years, starting in his undergraduate days, he published over 60 problems in the Educational Times. Most are of a geometric nature. Here is an example, see [1]:- Show that on a chess-board the number of squares visible is 204, and the number of rectangles (including squares) visible is 1296; and that, on a similar board with $n$ squares in each side, the number of squares is the sum of the first $n$ square numbers, and the number of rectangles (including squares) is the sum of the first $n$ cube numbers. We mentioned that Morley was a chess enthusiast while at school and, indeed, he was an exceptionally good chess player, so the problem above reflects one of his hobbies. He played at the highest level and beat Lasker on one occasion while Lasker was World Chess Champion. Zund writes about the significance of Morley's mathematical work:- Today much of Morley's research seems of less than compelling significance, and one is tempted to regard his interests as those of a talented amateur - an artist who took delight in small and beautiful things - rather than those of a professional mathematician. Yet, whatever the significance one chooses to attach to them, Morley must be given credit for both finding and solving such questions. Morley made a major contribution to mathematics in the United States. 
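Morley's chessboard problem quoted above can be checked by direct enumeration. This small script is an illustration added here, not part of the original biography.

```python
# Count squares and rectangles visible on an n x n chessboard by brute
# force, and compare with the closed forms in Morley's problem.
def count_squares(n):
    # an s x s square can be placed in (n - s + 1)^2 positions
    return sum((n - s + 1) ** 2 for s in range(1, n + 1))

def count_rectangles(n):
    # a w x h rectangle can be placed in (n - w + 1)(n - h + 1) positions
    return sum((n - w + 1) * (n - h + 1)
               for w in range(1, n + 1) for h in range(1, n + 1))

# Morley's claim: squares = sum of the first n squares,
# rectangles = sum of the first n cubes.
for n in range(1, 12):
    assert count_squares(n) == sum(k ** 2 for k in range(1, n + 1))
    assert count_rectangles(n) == sum(k ** 3 for k in range(1, n + 1))

print(count_squares(8), count_rectangles(8))  # 204 1296
```

For the standard board (n = 8) this reproduces the 204 squares and 1296 rectangles stated in the problem.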
He undertook editorial work for the Bulletin of the American Mathematical Society and the American Journal of Mathematics while at Haverford. Later, at Johns Hopkins University, he became the editor of the American Journal of Mathematics and held this position for 30 years. In 1919-20 he served as president of the American Mathematical Society. He is described by Cohen in [2] as:-

... a striking figure in any group. Deliberate in manner and speech, there was a suggestion of shyness about him. He was generally very well informed and interested in a strikingly wide range of subjects. He was of an artistic temperament. While many of his papers and lectures seemed involved to the uninitiated, they all possessed a characteristic artistic charm.

His son, Frank V Morley, gives this description of his father:-

... then he would begin to fiddle in his waistcoat pocket for a stub of pencil perhaps two inches long, and there would be a certain amount of scrabbling in a side pocket for an old envelope, and then there would be silence for a long time; until he would get up a little stealthily and make his way toward his study - but the boards in the hall always creaked, and my mother would call out, "Frank, you're not going to work!" - and the answer would always be, "A little, not much!" - and the study door would close. (It wasn't hard to gather that my father was working at geometry, and I knew pretty well what geometry was, because for a long time I had been drawing triangles and things; but when you examined the envelope he left behind, what was really mysterious was that there was hardly ever a drawing on it, but just a lot of calculations in Greek letters. Geometry without pictures I found it hard to approve; indeed, I prefer it with pictures to this present day.)

### References

1. R C Archibald, A semicentennial history of the American Mathematical Society 1888-1938 (New York, 1980), 194-201.
2. A B Coble, Frank Morley, Bull. Amer. Math. Soc.
44 (1938), 167-170.
3. A Cohen, Frank Morley, Science N.S. 86 (1937), 461.
4. H B Phillips, Obituary: Frank Morley (1860-1937), Proc. Amer. Acad. Arts Sci. 73 (1939), 138-139.
5. H W Richmond, Frank Morley, Nature 140 (1937), 880.
Atmos. Meas. Tech., 12, 4261–4276, 2019
https://doi.org/10.5194/amt-12-4261-2019

Research article | 07 Aug 2019

# Analyzing the atmospheric boundary layer using high-order moments obtained from multiwavelength lidar data: impact of wavelength choice

Gregori de Arruda Moreira1,2,3,4, Fábio Juliano da Silva Lopes4, Juan Luis Guerrero-Rascado1,2, Jonatan João da Silva4,5, Antonio Arleques Gomes4, Eduardo Landulfo4, and Lucas Alados-Arboledas1,2

• 1Andalusian Institute for Earth System Research, Granada, Spain
• 3Astronomy, Geophysics and Atmospheric Science Institute, University of São Paulo, São Paulo, Brazil
• 4Nuclear and Energy Research Institute, São Paulo, Brazil
• 5Federal University of Western Bahia, Bahia, Brazil

Correspondence: Gregori de Arruda Moreira (gregori.moreira@usp.br)

Abstract

The lowest region of the troposphere is a turbulent layer known as the atmospheric boundary layer (ABL), characterized by high daily variability due to the influence of surface forcings. This is why detection systems with high spatial and temporal resolution, such as lidar, have been widely applied to research this region. In this paper, we present a comparative analysis of the use of lidar-backscattered signals at three wavelengths (355, 532 and 1064 nm) to study the ABL by investigating the high-order moments, which give us information about the ABL height (derived by the variance method), aerosol layer movement (skewness) and mixing conditions (kurtosis) at several heights.
Previous studies have shown that the 1064 nm wavelength, due to the predominance of the particle signature in the total backscattered atmospheric signal and the practically negligible molecular signal (which can represent noise in the high-order moments), provides an appropriate description of the turbulence field; thus, in this study it was taken as the reference. We analyze two case studies showing that the backscattered signal at 355 nm, even after applying some corrections, has limited applicability for turbulence studies using the proposed methodology, due to the strong contribution of the molecular signature to the total backscatter signal. This increases the noise associated with the high-order profiles and, consequently, leads to misleading results. On the other hand, the information on the turbulence field derived from the backscattered signal at 532 nm is similar to that obtained at 1064 nm, because the proposed corrections adequately attenuate the noise generated by the molecular component of the backscattered signal.

1 Introduction

The atmospheric boundary layer (ABL) is the part of the troposphere that is directly or indirectly influenced by the Earth's surface (land and sea) and responds to gases and aerosol particles emitted at the Earth's surface and to surface forcing at timescales of less than a day. Forcing mechanisms include heat transfer, fluxes of momentum, frictional drag and terrain-induced flow modification. The height of this layer (ABLH) varies from hundreds of meters up to a few kilometers, due to the intensification or reduction of convective or mechanical processes, with an additional contribution from orographic effects. The ABL presents a daily pattern controlled by the energy balance at the Earth's surface.
Thus, after sunrise the positive net radiative flux (Rn) induces a rise in surface air temperature that initiates the convective process, which is responsible for the growth of the so-called mixing layer (ML) or convective boundary layer (CBL). This layer grows over the day, extending the region affected by the convective process until around midday, when it reaches maximum development. Slightly before sunset, the decrease in the incoming solar irradiance at the surface results in radiative cooling of the Earth's surface. This cooling affects the closest air layer, diminishing the convective process. In this way, the CBL disappears and two new layers characterize the ABL: a stable and stratified layer known as the stable boundary layer (SBL) at the bottom, and the residual layer (RL) above it, with the characteristics of the previous day's ML (Stull, 1988). The turbulent features of the ABL are relevant to air quality and weather forecasting and are thus worthy of study. As a rule, turbulent processes are treated as nondeterministic, and therefore turbulence is characterized by its statistical properties. Thus, high-order statistical moments are used to generate information about the turbulent fluctuation field, as well as a description of mixing processes in the ABL. ABL turbulence has commonly been studied by means of anemometer towers (e.g., Kaimal and Gaynor, 1983) and aircraft. Nevertheless, the former are restricted to regions near the surface, due to their limited vertical range. Aircraft offer an alternative approach that allows the analyses to be extended to higher atmospheric layers, but, conversely, they have a reduced time window, thus limiting the period of analysis. Due to the large variability of ABL characteristics over the day, the use of systems endowed with high spatial and temporal resolution allows for studies with a higher degree of detail.
Consequently, remote-sensing systems (mainly lidar) have become an important tool in ABLH detection, as well as in turbulence studies. In addition, the different lidar techniques offer the possibility of analyses with several variables, such as vertical wind velocity by Doppler lidar, water vapor mixing profiles by Raman lidar or differential absorption lidar (DIAL), temperature by rotational Raman lidar, and aerosol number density by elastic lidar or high spectral resolution lidar (HSRL). Therefore, a wider range of results can be obtained, especially when different types of systems are synergistically used, as shown by studies that combine elastic and Doppler lidar data for deriving the vertical aerosol flux. It has been shown that it is feasible to use an elastic lidar measuring at a high acquisition rate for characterizing atmospheric turbulence. In particular, the fluctuation of the range-corrected signal (RCS) at 1064 nm is a proxy for the fluctuation of the particle concentration, due to the predominance of the particle signature (βpar) in the total backscattered signal at this wavelength, and thus it can be used for observing the turbulent aerosol movements in the CBL. However, if other wavelengths are used in this kind of analysis, the effects of the molecular backscatter coefficient (βmol) and atmospheric extinction (α) must be considered. In this work, we perform a comparative analysis of the use of three different wavelengths, namely 355, 532 and 1064 nm (the latter adopted as the reference), to obtain the high-order moments, i.e., variance (σ²), skewness (S), kurtosis (K), and also the integral timescale (τ). Moreover, the interference of the noise ε and of βmol on the high-order moments and τ obtained from each of the considered wavelengths was analyzed, in order to quantify how such factors can influence the correct interpretation of the statistical variables.
The goal of this study is to show the viability of the proposed methodology for studying turbulence by computing the high-order moments of the backscattered signal at different wavelengths. We pay special attention to the advantages and limitations of each wavelength analyzed, considering the importance of the proposed correction schemes. This paper is organized as follows. The measurement site and the experimental setup are introduced in Sect. 2. The methodology is described in Sect. 3. The comparisons and case studies are analyzed in Sect. 4. Conclusions are given in Sect. 5.

2 Experimental site and instrumentation

This study was performed at LEAL (Laser Environmental Applications Laboratory) from July 2017 to July 2018; however, to illustrate the analysis, only two cases are discussed in detail in this article. LEAL has been part of the Latin America Lidar Network (LALINET) since 2001. This lidar facility is installed at the Nuclear and Energy Research Institute in São Paulo, Brazil (23°33′ S, 46°38′ W, 760 m a.s.l.), in the largest metropolitan area in South America, with a population of approximately 12 million and a subtropical climate where winter is mild (15 °C) and dry, while summer is wet with moderately high temperatures (23 °C) (IBGE, 2017). The São Paulo lidar station (SPU) has a coaxial ground-based multiwavelength Raman lidar system operated at LEAL. The system operates with a pulsed Nd:YAG laser emitting radiation at 355, 532 and 1064 nm, a laser repetition rate of 10 Hz and the laser beam pointing in the zenith direction. The pulse energy (and stability) of each wavelength is 225 mJ (2 mJ) at 355 nm, 400 mJ (4 mJ) at 532 nm and 850 mJ (6 mJ) at 1064 nm. The Metropolitan São Paulo I (MSPI) lidar detects three elastic channels at 355, 532 and 1064 nm and three Raman-shifted channels at 387 nm, 408 nm (corresponding to the shifting from 355 nm by N2 and H2O) and 530 nm (corresponding to the rotational Raman shifting from 532 nm by N2).
This system is equipped with Hamamatsu R7400 photomultipliers. The SPU lidar reaches full overlap at around 300 m a.g.l. This system operates with temporal and spatial resolutions of 2 s and 7.5 m, respectively.

3 Methodology

The turbulence study is based on the observation of the fluctuation q'(t) of a given variable q in time t. The values are obtained as follows: first, q(t) is averaged in packages that cover a certain time interval, from which the mean value $\overline{q}$ is extracted. This mean is then subtracted from each q(t) value, providing the fluctuation q'(t) via Reynolds decomposition:

$$ q'(t) = q(t) - \overline{q}(t). \qquad (1) $$

In the analysis performed with elastic lidar systems, the variable of interest is the aerosol number density (N), from which we obtain its fluctuation (N') by Eq. (1). However, elastic lidar systems do not directly provide the value of N. Therefore, considering the validity of Mie theory (where the aerosol backscatter coefficient is linked to the backscatter efficiency, particle radius (r) and the number of particles with radius r), we can write Eq. (2) under several assumptions. The premises adopted here are that (i) the variation in aerosol size with the relative humidity can be neglected, (ii) the atmospheric volume probed is composed of similar types of aerosol particles and (iii) the fluctuations of the aerosol microphysical properties are smaller than the fluctuations of the total number density in the volume probed by the lidar. More details about these assumptions can be found in the literature. Previous studies demonstrated the relation between relative humidity and hygroscopic growth; such effects can start at 80 % RH. The two cases presented in this work were gathered in winter, the driest season of São Paulo. In particular, RH was below 80 % on both days (see Sect. 4).
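As a toy illustration of the Reynolds decomposition in Eq. (1), the fluctuation is simply the series minus its mean over the averaging window. The numbers below are synthetic and only chosen to mimic 1 h of 2 s samples; they do not come from the paper.

```python
# Reynolds decomposition of a synthetic 2 s resolution time series:
# subtract the mean over the averaging window to get the fluctuation q'.
import random

random.seed(1)
q = [10.0 + random.gauss(0.0, 0.5) for _ in range(1800)]  # 1 h of 2 s samples

q_mean = sum(q) / len(q)          # window mean, the overline-q of Eq. (1)
q_prime = [qi - q_mean for qi in q]

# By construction the fluctuations average to zero over the window.
print(abs(sum(q_prime)) < 1e-6)  # True
```

The same operation, applied per range gate to RCS(z, t) with a 1 h window, yields the RCS'(z, t) series from which the high-order moments of Sect. 3.1 are computed.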
Such a value is lower than the RH threshold for hygroscopic effects indicated by the two papers mentioned above. Consequently, ignoring hygroscopic growth and assuming similar types of aerosol throughout the atmospheric column, the following equations can be used:

$$ \beta_{\mathrm{aer}}(z,t) \approx N(z,t)\, Y(z), \qquad (2) $$

$$ \beta'_{\mathrm{aer}}(z,t) = N'(z,t), \qquad (3) $$

where βaer and β'aer represent the particle backscatter coefficient and its fluctuation, respectively. The variable z is the height above the ground, t is the time and Y is a variable that does not depend on time. The lidar equation is defined as follows:

$$ P(z,t) = P_0\, \frac{c\tau}{2}\, A\, \eta\, O(\lambda,z)\, \frac{\beta(\lambda,z)}{z^2}\, \exp\left[-2\int_0^z \alpha(\lambda,z')\, \mathrm{d}z'\right], \qquad (4) $$

where P(λ, z) is the power signal (W) detected at a distance z (m) and time t (s), z is the distance (m) of the atmospheric volume investigated, P0 is the power emitted by the laser source (W), c is the speed of light (m s−1), τ is the laser pulse duration (ns), A is the effective area of the receiving telescope (m2), η is a variable related to the efficiency of the lidar system and O(λ, z) is the laser-beam receiver field-of-view overlap function.
The most important quantities are β(λ, z), the total backscatter coefficient, with contributions from atmospheric molecules, βmol(λ, z), and aerosol, βaer(λ, z); in other words, β(λ, z) = βmol(λ, z) + βaer(λ, z) (m sr)−1 at distance z. Likewise, α(λ, z) is the total extinction coefficient, with contributions from atmospheric molecules, αmol(λ, z), and aerosols, αaer(λ, z); in other words, α(λ, z) = αmol(λ, z) + αaer(λ, z) (m)−1 at distance z. If the wavelength 1064 nm is used, we can neglect the influence of the extinction coefficient α(λ, z) provided by aerosol, the Rayleigh scattering generated by atmospheric molecules and βmol(λ, z). Therefore, Eq. (4) for the wavelength of 1064 nm can be rewritten as follows:

$$ \mathrm{RCS}_{1064}(z,t) = P_{1064}(z,t)\, z^2 \approx G\, \beta_{1064}(z,t) \approx G\, \beta_{\mathrm{aer}}(z,t), \qquad (5) $$

where RCS1064 is the range-corrected signal, G is a constant and the subscripts denote the wavelength and the particles. Then, by applying Reynolds decomposition (Eq. 1) over Eq.
(5), the following equation is derived:

$$ \mathrm{RCS}'_{1064}(z,t) \approx \beta'_{1064}(z,t) = \beta'_{\mathrm{aer}}(z,t) = N'(z,t). \qquad (6) $$

Our purpose is to evaluate the use of other wavelengths under the effects of the molecular backscatter coefficient (βmol). The interest is based on the better performance of detection technology at wavelengths in the VIS and UV, and on the extended use of these wavelengths in the following lidar networks: the Latin America Lidar Network (LALINET), the European Aerosol Research Lidar Network (EARLINET) and the NASA Micropulse Lidar Network (MPLNet).

## 3.1 High-order moments

The high-order moments used in this study are obtained from RCS'(z, t), generated by Eq. (1), where $\overline{\mathrm{RCS}}(z)$ represents the 1 h average package of RCS(z, t) data. From this, the high-order moments, variance (σ²), skewness (S) and kurtosis (K), are obtained as shown in the first column of Table 1, with their corrections and errors in the second and third columns of the same table, respectively. Table 2 presents the physical meaning of each high-order moment in the context of the proposed analysis.

Table 1: Variables applied to the statistical analysis of turbulence in the ABL region. The sum of the subindices of the autocovariance function Mij represents the order of the analysis.

Table 2: Physical meaning of the high-order moments.

The integral timescale (τ) is an important prerequisite in turbulence studies.
It guarantees that most of the horizontal variability of the turbulent eddies is detected with good resolution, enabling the inertial subrange and the dissipation range to be resolved in the spectrum and autocorrelation function, respectively. τ must be larger than the temporal resolution of the analyzed time series (the SPU lidar station acquisition time is 2 s). In the same way as the high-order moments, these variables are obtained from RCS'(z, t), as shown in the first column of Table 1.

## 3.2 Error analysis

The high-order moments and τ generated from RCS'(z, t) can also be obtained from the autocovariance function Mij, whose order is given by the sum of the subscripts i and j:

$$ M_{ij} = \int_0^{t_{\mathrm{f}}} \left[\mathrm{RCS}'(z,t)\right]^i \left[\mathrm{RCS}'(z,t+t_{\mathrm{f}})\right]^j \mathrm{d}t, \qquad (7) $$

where tf is the final time. However, it is important to consider the influence of the instrument noise ε(z, t) in the RCS'(z, t) profile. Therefore, Mij can be rewritten as follows:

$$ M_{ij} = \int_0^{t_{\mathrm{f}}} \left[\mathrm{RCS}'(z,t)+\varepsilon(z,t)\right]^i \left[\mathrm{RCS}'(z,t+t_{\mathrm{f}})+\varepsilon(z,t+t_{\mathrm{f}})\right]^j \mathrm{d}t. \qquad (8) $$

Although atmospheric fluctuations are correlated in time, ε(z, t) is random and uncorrelated with the atmospheric signal; therefore, ε(z, t) is only associated with lag zero.
Consequently, it is possible to obtain the corrected autocovariance function, M11(→ 0), by removing the error ΔM11(0) from the uncorrected autocovariance function M11(0):

$$ M_{11}(\to 0) = M_{11}(0) - \Delta M_{11}(0). \qquad (9) $$

Based on this concept, two methods have been proposed to correct for the noise influence:

- First-lag correction: the lag-zero error (ΔM11(0)) is directly subtracted from the uncorrected autocovariance function M11(0), generating M11(→ 0).
- Two-thirds correction: a new lag-zero value is obtained by extrapolating M11 from the first nonzero lags back to lag zero, using the inertial subrange hypothesis:

$$ M_{11}(\to 0) = \overline{\mathrm{RCS}'(z,t)} + C t^{2/3}, \qquad (10) $$

where C represents a parameter of the turbulent-eddy dissipation rate. In this study, we used the first five points after lag zero to perform this correction. The second and third columns of Table 1 present the corrections and errors, respectively, of the high-order moments and τ.

Figure 1: Methodological description of the data analysis performed for the elastic lidar data.

Figure 1 shows how the procedures described in Sect. 3.1 and 3.2 are used. First, the lidar data are acquired with a time resolution of 2 s. These data are then averaged in packages of 1 h (the influence of the time window has been demonstrated previously), generating $\overline{\mathrm{RCS}}(z)$, from which it is possible to obtain RCS'(z, t), as illustrated in Eq. (1). Then, the two corrections described in Sect. 3.2 are separately applied.
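The lag-zero noise spike, and the effect of removing it, can be illustrated on a synthetic series. This is a sketch only: the "atmospheric" signal is an AR(1) process with white noise added, and none of the numbers come from the paper. The corrected variance is estimated here from the first lag, exploiting the fact that the noise is uncorrelated beyond lag zero.

```python
import random

random.seed(0)

# Synthetic correlated "atmospheric" signal (AR(1)) plus white noise:
# the noise contributes to the autocovariance only at lag zero.
phi, n = 0.95, 20000
sig_var = 0.09 / (1 - phi ** 2)   # theoretical variance of the AR(1) part
s, series = 0.0, []
for _ in range(n):
    s = phi * s + random.gauss(0.0, 0.3)
    series.append(s + random.gauss(0.0, 1.0))  # add white noise (var = 1)

mean = sum(series) / n
x = [v - mean for v in series]

def autocov(lag):
    return sum(x[i] * x[i + lag] for i in range(n - lag)) / (n - lag)

raw_variance = autocov(0)   # contaminated by the full noise variance
# First-lag estimate: the noise is absent beyond lag zero, so M(1)
# approximates the noise-free variance (slightly low, by the signal's
# decorrelation over one lag).
corrected = autocov(1)

print(raw_variance, corrected, sig_var)
```

The raw lag-zero value overshoots the true signal variance by roughly the noise variance, while the first-lag estimate recovers it closely, which is the essence of the correction applied to the lidar autocovariance functions.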
Finally, the high-order moments and τ, corrected and uncorrected, are estimated. The ABLH is estimated from the variance method, which, under convective conditions, takes the top of the CBL (the ABLH) as the maximum of the variance of the RCS, σ²RCS(z). Examples of the application of this methodology in varied meteorological scenarios (presence of clouds and aerosol sublayers) have been presented in previous work.

4 Results

In this section we present two case studies, applying the methodology described in Sect. 3, in order to perform a comparative analysis of the influence of βmol and ε on the high-order moments and τ obtained from different wavelengths (355, 532 and 1064 nm).

Figure 2: Time–height plot of RCS532.

## 4.1 Case study I: 26 July 2017

In this case study we gathered measurements from 13:00 to 19:00 UTC. Figure 2 shows the time–height plot of RCS532 during this period. This case comprises two distinct periods: in the first 2 h there is an RL with an underlying shallow CBL; toward the end of the second hour, the CBL quickly grows and mixes with the RL, forming a fully developed ABL with its top situated between 1500 and 1600 m from 15:00 to 19:00 UTC. The dotted black box between 17:00 and 18:00 UTC represents the period selected for the statistical analysis. In order to check the hypothesis that there is no particle hygroscopic growth and that the same type of aerosol is present in the entire atmospheric column in the ABL region, we analyzed the relative humidity and mixing ratio profiles retrieved from radiosonde measurements (http://weather.uwyo.edu/upperair/sounding.html, last access: 25 September 2018) launched at the Campo de Marte Airport (São Paulo, Brazil), which is about 10 km away from the SPU lidar system. Figure 3a and b show the relative humidity and mixing ratio profiles, respectively, measured on 26 July 2017 at 12:00 UTC.
Both relative humidity and mixing ratio can be considered constant below 1500 m, with mean values of 67±8 % and 7.6±0.9 g kg−1, respectively. Since there is no large variation in water vapor mixing ratio and relative humidity values in this region, we assume that this case is not affected by particle hygroscopic growth. In addition, the AERONET Sun photometer data from the São Paulo station were retrieved in order to check the aerosol type, as can be seen in Fig. 3c. Figure 3: (a) Vertical profile of relative humidity derived from radio sounding. (b) Mixing ratio derived from radio sounding. (c) Aerosol-optical-depth-related Ångström exponent time series from AERONET, for measurements retrieved on 26 July 2017. According to , the Ångström exponent (AE) can be a useful tool for distinguishing different types of atmospheric aerosols. Figure 3c shows the aerosol AE time series for the case study of 26 July 2017. The AE was calculated in the spectral ranges 340–440 and 440–675 nm using AERONET products from level 1.5 version 3 data. For this measurement period, the percentage variation in AE was no more than 3 % in both ranges. Therefore, there are no considerable changes during the whole measurement period, which is a strong indication that there is no aerosol type change throughout the day. In Fig. 4 the signal-to-noise-ratio (SNR) profile of the raw lidar signal is presented, as calculated following Heese (2010), at three wavelengths (1064 nm, red line; 532 nm, green line; and 355 nm, violet line) during the analyzed period. All wavelengths have SNR values higher than 1 (the threshold for good quality) below the ABLH (dotted blue line), with a predominance of values lower than 1 in the free troposphere (FT), which was expected due to the strong reduction of aerosol concentration in that region.
Although the three wavelengths have similar SNR profiles, close to the ABLH the differences among them become more evident, principally the fast decrease of the 355 nm signal and the high values at 532 nm. Figure 4: Signal-to-noise-ratio (SNR) profile of the three wavelengths (1064 nm, red line; 532 nm, green line; and 355 nm, violet line) obtained on 26 July 2017 between 17:00 and 18:00 UTC. Figure 5 shows the autocovariance function (ACF), obtained between 17:00 and 18:00 UTC for the wavelengths 355 (ACF355), 532 (ACF532) and 1064 nm (ACF1064) at 1000 and 1700 m a.g.l. Thus, from the comparison of Figs. 2 and 5 it is possible to observe that the altitude chosen at 1000 m (red line) is situated below the top of the CBL, while the altitude chosen at 1700 m (light green line) is in the FT. As expected, the ε, which is represented by the peak at lag zero of the autocovariance function (Fig. 5), increases with height for all the wavelengths due to the reduction of the aerosol load with height. ACF355 has the lowest intensity (around 90 % smaller than those of ACF532 and ACF1064) and is clearly much more affected by the magnitude of ε, which represents approximately 25 % of ACF355, while for ACF532 and ACF1064 the noise represents around 10 % of the respective autocovariance. Figure 6 presents all statistical variables, their respective corrections and errors (shadows), generated from the methodology described in Sect. 3 for data acquired between 17:00 and 18:00 UTC. Figure 5: Autocovariance function at 1064 nm (a), 532 nm (b) and 355 nm (c) on 26 July 2017 from 17:00 to 18:00 UTC. For 355 nm the inset magnifies the signal 10×. The variance profiles, ${\mathit{\sigma }}_{\mathrm{RCS}}^{\mathrm{2}}\left(z\right)$, with and without corrections for all wavelengths are represented in Fig. 6.01–.09.
The low and almost constant values of uncorrected ${\mathit{\sigma }}_{{\mathrm{RCS}}_{\mathrm{1064}}}^{\mathrm{2}}\left(z\right)$ from the bottom up to around 1000 m in altitude demonstrate an almost constant distribution of aerosol particles in this region, as can be seen in Fig. 6.01. Above 1000 m in altitude, the value of uncorrected ${\mathit{\sigma }}_{{\mathrm{RCS}}_{\mathrm{1064}}}^{\mathrm{2}}\left(z\right)$ increases, reaching its maximum peak at around 1600 m. This peak represents the entrainment zone, the region where mixing occurs between air parcels coming from the CBL and the FT. According to , there is an intense variation in aerosol concentration during this process, generating a maximum in the uncorrected ${\mathit{\sigma }}_{{\mathrm{RCS}}_{\mathrm{1064}}}^{\mathrm{2}}\left(z\right)$, which represents the ABLH. Above the ABLH, the aerosol concentration is considerably lower than in the CBL and thus the uncorrected ${\mathit{\sigma }}_{{\mathrm{RCS}}_{\mathrm{1064}}}^{\mathrm{2}}\left(z\right)$ is reduced to practically zero. This methodology for estimating the ABLH is named the variance method or centroid method and it was described by and , respectively. The main limitations of this method are its applicability only to the CBL and its ambiguous results in complex cases, such as the presence of several aerosol layers (Emeis, 2011). In such situations more sophisticated methods like the wavelet method, PathfinderTURB and POLARIS are recommended. The uncorrected ${\mathit{\sigma }}_{{\mathrm{RCS}}_{\mathrm{532}}}^{\mathrm{2}}\left(z\right)$, presented in Fig. 6.04, is rather similar to the uncorrected ${\mathit{\sigma }}_{{\mathrm{RCS}}_{\mathrm{1064}}}^{\mathrm{2}}\left(z\right)$, including the position of the maximum peak. Nevertheless, although the uncorrected ${\mathit{\sigma }}_{{\mathrm{RCS}}_{\mathrm{355}}}^{\mathrm{2}}\left(z\right)$, presented in Fig.
6.07, also has its maximum peak situated at around 1600 m in altitude, the profile is noisier than the profiles obtained from the other wavelengths and, therefore, it is not possible to identify the regions with uniform aerosol distribution as evidenced in the uncorrected ${\mathit{\sigma }}_{{\mathrm{RCS}}_{\mathrm{1064}}}^{\mathrm{2}}\left(z\right)$. Although the ${\mathit{\sigma }}_{{\mathrm{RCS}}_{\mathrm{355}}}^{\mathrm{2}}\left(z\right)$ is noisier than the other ones, there is only a small difference among the ABLH values estimated from the three different wavelengths (lower than 10 %). The two-thirds correction, shown in Fig. 6.02, 6.05 and 6.08, does not cause significant changes in the uncorrected profiles. On the other hand, the first lag correction significantly changes the profiles: ${\mathit{\sigma }}_{{\mathrm{RCS}}_{\mathrm{532}}}^{\mathrm{2}}\left(z\right)$ becomes very similar to ${\mathit{\sigma }}_{{\mathrm{RCS}}_{\mathrm{1064}}}^{\mathrm{2}}\left(z\right)$, while ${\mathit{\sigma }}_{{\mathrm{RCS}}_{\mathrm{355}}}^{\mathrm{2}}\left(z\right)$ still shows some differences, mainly in the region below the ABLH, as can be seen in Fig. 6.03, 6.06 and 6.09. The integral timescale profiles ${\mathit{\tau }}_{\mathrm{RCS}{}^{\prime }}\left(z\right)$, with and without corrections, ${\mathit{\tau }}_{\mathrm{RCS}{}^{\prime }}^{\mathrm{corr}}\left(z\right)$ and ${\mathit{\tau }}_{\mathrm{RCS}{}^{\prime }}^{\mathrm{unc}}\left(z\right)$, respectively, calculated for the three wavelengths are presented in Fig. 6.10–.18. The ${\mathit{\tau }}_{\mathrm{RCS}{}^{\prime }}^{\mathrm{unc}}\left(z\right)$ presents values larger than the SPU lidar station acquisition time (shown as a dotted black line) in the region below the ABLH at all wavelengths, as can be seen in Fig. 6.10, 6.13 and 6.16.
The largest values of ${\mathit{\tau }}_{\mathrm{RCS}{}^{\prime }}^{\mathrm{unc}}\left(z\right)$ correspond to 1064 nm, while the lowest values are computed for 355 nm, which are practically half of those obtained with the reference wavelength, 1064 nm. The low values of the ${\mathit{\tau }}_{\mathrm{RCS}{}^{\prime }}^{\mathrm{unc}}\left(z\right)$ at 355 nm can be associated with the influence of the noise in the signal retrieved at this wavelength. The application of the two-thirds correction does not cause significant changes in the profiles, while the first lag correction changes the profiles significantly, mainly in the region below the ABLH, as can be seen in Fig. 6.11, 6.14 and 6.17 and in Fig. 6.12, 6.15 and 6.18, respectively. The skewness profiles SRCS(z) represent the degree of asymmetry of a distribution: SRCS(z)=0 represents a distribution that is symmetric around its mean, while positive and negative values represent cases where the tail of the distribution is on the right and left side of the distribution, respectively. The uncorrected skewness profiles ${S}_{\mathrm{RCS}}^{\mathrm{unc}}\left(z\right)$ and their respective corrections ${S}_{\mathrm{RCS}}^{\mathrm{corr}}\left(z\right)$ for the three wavelengths are presented in Fig. 6.19–.27. The ${S}_{\mathrm{RCS}}^{\mathrm{unc}}\left(z\right)$ generated from the wavelengths 1064 and 532 nm, presented in Fig. 6.19 and 6.22, respectively, present similar behavior up to approximately 150 m above the ABLHelastic, with positive values in the lower part of the profile and one inflection point close to the ABLHelastic. Such a point characterizes the transition from the region with entrainment of clean FT air into the CBL (negative values) to a region a few meters above the ABLHelastic with the presence of aerosol plumes (positive values) due to convective movement. This behavior of the skewness profile was also observed by and in the region of the ABLHelastic.
Therefore, the same set of phenomena is evidenced by the dataset at both wavelengths, although there are differences in the absolute values. The two corrections cause negligible variations in the profiles at 1064 nm, as shown in Fig. 6.20 and 6.21. On the other hand, the corrections applied to the ${S}_{\mathrm{RCS}}^{\mathrm{unc}}\left(z\right)$ at 532 nm produce skewness profiles similar to those at the reference wavelength, as can be seen in Fig. 6.23 and 6.24. It is possible to observe a difference between the skewness profiles at 532 nm (positive) and 1064 nm (negative) in the region above the ABLHelastic. Such a difference is a consequence of the low values of the signal-to-noise ratio (SNR) of the RCS' and, consequently, of τRCS(z) observed in this region, preventing the observation of turbulence due to the technical limitations of the instruments used. The skewness profiles at 355 nm, ${S}_{\mathrm{RCS}}^{\mathrm{corr}}\left(z\right)$ and ${S}_{\mathrm{RCS}}^{\mathrm{unc}}\left(z\right)$, present a rather different behavior and do not follow the same variations observed in the reference wavelength profile, as can be seen in Fig. 6.25, 6.26 (two-thirds correction) and 6.27 (first lag correction). Consequently, it is not possible to observe the aerosol dynamics using the information gathered at the wavelength 355 nm. The kurtosis profile ${K}_{\mathrm{RCS}{}^{\prime }}$ is the most complex high-order moment presented in this study and, consequently, in such profiles the differences among the three wavelengths are more evident. In the context of our analysis, the values of ${K}_{\mathrm{RCS}{}^{\prime }}$ are indicators of the mixing degree at each altitude, as well as of the intermittency of turbulence caused by large eddies. Because of some technical limitations of our lidar system, it is possible to resolve eddies only down to a predetermined size. Therefore, in regions where turbulence occurs at overly small scales, our system cannot resolve these eddies.
The kurtosis equation presented in Table A1 gives a value of 3 for a normal distribution (Bulmer, 1965). Consequently, such a value is applied as a threshold in the analyses performed in this paper. Values lower than 3 represent a well-mixed region, indicating a flatter distribution in comparison to a normal distribution; thus, the turbulence caused by large eddies can be characterized as frequent. In contrast, values higher than 3 indicate a peaked distribution in comparison to a Gaussian distribution. In other words, there is an unusual variation in the RCS${}^{\prime }\left(z,t\right)$, which represents a low degree of mixing and the presence of infrequent large-eddy turbulence. The ${K}_{\mathrm{RCS}{}^{\prime }}^{\mathrm{unc}}$ at 532 and 1064 nm have some differences in the region below 1300 m in altitude, where the profile at 1064 nm only shows values higher than 3, representing a region with a low degree of mixing, while the ${K}_{\mathrm{RCS}{}^{\prime }}^{\mathrm{unc}}$ obtained from 532 nm is composed of values higher and lower than 3. From 1300 to 3500 m in altitude, the profiles of these two wavelengths are very similar, with values lower than 3 in the region below the ABLH, characterizing a well-mixed region, a peak of values higher than 3 in the first meters above the ABLH, and values between 3 and 4 in the remainder of the profile. The corrections do not cause significant changes in the 1064 nm kurtosis profile, as can be seen in Fig. 6.29 and 6.30. However, the variation in the kurtosis profile at 532 nm is remarkable, as presented in Fig. 6.32 and 6.33. Thus, it becomes very similar to the 1064 nm profile, mainly with the use of the first lag correction. The ${K}_{\mathrm{RCS}{}^{\prime }}^{\mathrm{unc}}$ obtained from 355 nm does not have the same variations observed in the profiles obtained at the reference wavelength. Therefore, it is not possible to identify the occurrence of the phenomena previously described.
The same problem occurs in the ${K}_{\mathrm{RCS}{}^{\prime }}^{\mathrm{corr}}$, although the application of the corrections causes relevant variations relative to the values observed in the ${K}_{\mathrm{RCS}{}^{\prime }}^{\mathrm{unc}}$. Figure 6: High-order moments and τ without correction and corrected by the two-thirds law and the first lag correction at 1064 (red line), 532 (green line) and 355 nm (violet line) on 26 July 2017 from 17:00 to 18:00 UTC. The horizontal dotted blue line represents the ABLHelastic. Figure 7 shows the profiles of βmol, βmol+aer and βratio at the wavelengths 1064 nm (Fig. 7.1 and 7.2), 532 nm (Fig. 7.3 and 7.4) and 355 nm (Fig. 7.5 and 7.6). Such profiles were obtained from the data retrieved during the period of analysis presented previously. From Fig. 7.1 it is possible to observe the predominance of βaer at the wavelength 1064 nm; because of this, the βratio presented in Fig. 7.2 reaches large values. In Fig. 7.3 it is possible to observe the predominance of βaer at the wavelength 532 nm and a small impact of βmol. The backscatter profile at 355 nm presented in Fig. 7.5 shows that βaer and βmol have the same order of magnitude, but with a predominance of βaer. Such profiles justify the differences and similarities observed in the results obtained from each wavelength. Although the backscatter profiles at 532 nm are composed of the molecular and aerosol signatures, the predominance of the latter enables the observation of the phenomena presented by the high-order moment profiles obtained from the reference wavelength. The small presence of βmol can also be an indicator of the low values of noise, although they are higher than those of the reference wavelength. Figure 7: Total (aerosol and molecular) backscatter profile and backscatter ratio retrieved using the Klett–Fernald–Sasano inversion technique at 1064, 532 and 355 nm, respectively, for data retrieved on 26 July 2017 at 17:00–18:00 UTC by the SPU lidar system.
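The contrast among the three wavelengths follows directly from the spectral dependence of the two backscatter components. As a rough, self-contained illustration (ours, not taken from the paper), molecular backscatter scales approximately as λ−4 (Rayleigh scattering; Bucholtz, 1995, tabulates exact values), while aerosol backscatter falls off much more slowly, here assumed as a power law with a hypothetical Ångström-type exponent of 1:

```python
def molecular_backscatter_rel(wl_nm, ref_nm=1064.0):
    """Molecular backscatter relative to the reference wavelength, using the
    approximate Rayleigh lambda**-4 scaling (Bucholtz, 1995, gives exact values)."""
    return (ref_nm / wl_nm) ** 4

def aerosol_backscatter_rel(wl_nm, angstrom=1.0, ref_nm=1064.0):
    """Aerosol backscatter relative to the reference wavelength, assuming a
    power-law spectral dependence with a hypothetical Angstrom-type exponent."""
    return (ref_nm / wl_nm) ** angstrom

for wl in (1064.0, 532.0, 355.0):
    b_mol = molecular_backscatter_rel(wl)
    b_aer = aerosol_backscatter_rel(wl)
    print(f"{wl:6.0f} nm: beta_mol x{b_mol:6.1f}, beta_aer x{b_aer:4.1f}, "
          f"molecular share grows x{b_mol / b_aer:5.1f}")
```

Under these assumptions, moving from 1064 to 355 nm boosts the molecular component by a factor of roughly 80 but the aerosol component only by a factor of about 3, which is why the 355 nm signal tends to become molecule-dominated.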
## 4.2 Case study II: 19 July 2018 In this case study, measurements were gathered with the SPU lidar station from 12:00 to 21:00 UTC. Figure 8 shows the time–height plot of RCS532 during this period (the time–height plots of RCS355 and RCS1064 are available in the Supplement as Figs. S1 and S2, respectively). At the beginning of the measurement it is possible to observe the presence of an ascending CBL covered by an RL, whose top is situated at around 1300 m in altitude. At approximately 15:30 UTC the CBL breaks up the RL and becomes fully developed; thus, its growth speed is reduced and the top height remains practically constant (1600 m) from 17:00 until 21:00 UTC. The dotted black box in Fig. 8 represents the chosen period for performing the statistical analysis (18:00–19:00 UTC). Figure 8: Time–height plot of RCS532. In the same way as case study I, the hypothesis proposed by is validated from the profiles presented in Fig. 9 (more information is available in the Supplement as Fig. S3). The profiles of relative humidity and mixing ratio, presented in Fig. 9a and b, respectively, do not have large variations in the CBL below 1200 m in altitude. In addition, the aerosol-optical-depth-related Ångström exponent time series did not show considerable changes during the whole measurement period, as can be seen in Fig. 9c. For this measurement period the percentage variation in AE was no more than 4 % and 3 % in the spectral ranges 340–440 and 440–675 nm, respectively. Therefore, there are no considerable changes during the whole measurement period, which is a strong indication that there is no aerosol type change throughout the day and that the atmospheric conditions are not propitious for particle hygroscopic growth events. Figure 10 presents the SNR profile of the raw lidar signal at the three wavelengths (1064 nm, red line; 532 nm, green line; and 355 nm, violet line) during the analyzed period.
In the ABL region, all wavelengths have similar profiles with values higher than 1. However, as the ABLH is approached, the SNR values decrease sharply, mainly at 355 nm. Consequently, in the FT region all profiles have values lower than 1, as expected. Figure 9: (a) Vertical profile of relative humidity derived from radio sounding. (b) Mixing ratio derived from radio sounding. (c) Aerosol-optical-depth-related Ångström exponent time series from AERONET, for measurements retrieved on 19 July 2018. Figure 11 shows a comparison among the ACF obtained from the three wavelengths 1064 nm (Fig. 11a), 532 nm (Fig. 11b) and 355 nm (Fig. 11c), between 18:00 and 19:00 UTC at two heights: 1000 m (red line) and 1700 m (green line). In the same way as case study I, the region above the ABLH (green line) is more influenced by noise than the region situated below this height (red line). The intensities of ACF532 and ACF1064 are very similar, although the presence of noise in the former (40 % and 46 % below and above the ABLH, respectively) is higher than in the latter (27 % and 30 %, respectively). The ACF355 presents a lower intensity value in comparison to the other two wavelengths and a strong presence of noise below and above the ABLH, 50 % and 67 %, respectively. Figure 10: Signal-to-noise-ratio (SNR) profile of the three wavelengths (1064 nm, red line; 532 nm, green line; and 355 nm, violet line) obtained on 19 July 2018 between 18:00 and 19:00 UTC. The three high-order moments and τRCS, all corrected by the first lag correction and obtained between 18:00 and 19:00 UTC, are presented in Fig. 12. The ${\mathit{\tau }}_{\mathrm{RCS}}^{\mathrm{corr}}$ for all wavelengths has values higher than 2 s from the bottom of the profile up to the first meters above the ABLHelastic, with a maximum of ${\mathit{\sigma }}_{\mathrm{RCS}{}^{\prime }}^{\mathrm{2}}\left(z\right)$.
Although the values obtained from 1064 and 532 nm are almost twice as large as those generated from 355 nm (in the same way as case study I), there are some differences among the maxima of [${\mathit{\sigma }}_{\mathrm{RCS}}^{\mathrm{2}}\left(z\right)$], but they do not significantly influence the ABLH estimation; the difference among the ABLH values obtained from each wavelength is lower than 10 %. The positive values of the ${S}_{\mathrm{RCS}}^{\mathrm{corr}}\left(z\right)$ at 1064 nm indicate the presence of aerosol updrafts from the bottom of the profile up to around 750 m in altitude. From this height up to the ABLH, the ${S}_{\mathrm{RCS}}^{\mathrm{corr}}\left(z\right)$ is characterized by negative values, which represent a region with entrainment of clean FT air into the CBL. In the same way as case study I, there is an inflection point at the ABLH, which reproduces the transition from negative to positive values, the latter indicating the presence of aerosol updraft layers in the first 200 m above the ABLH. Such behavior in the region of the ABLH was also observed by and and can be considered characteristic of a convective regime. The ${S}_{\mathrm{RCS}}^{\mathrm{corr}}\left(z\right)$ obtained from the wavelengths 1064 and 532 nm present an identical pattern of behavior, demonstrating the occurrence of the same phenomena. The ${S}_{\mathrm{RCS}}^{\mathrm{corr}}\left(z\right)$ obtained from the wavelength 355 nm, in the same way as in the previous case study, does not exhibit the behavior observed at the reference wavelength, presenting only positive values in the whole profile. Therefore, it is not possible to identify variations in the aerosol dynamics using 355 nm. Figure 11: Autocovariance function at 1064 (a), 532 (b) and 355 nm (c) on 19 July 2018 from 18:00 to 19:00 UTC.
The ${K}_{\mathrm{RCS}}^{\mathrm{corr}}\left(z\right)$ obtained from the wavelength 1064 nm presents values higher than 3 from the bottom up to around 1300 m in altitude, characterizing a region with a low degree of mixing. From 1300 m up to the ABLH the ${K}_{\mathrm{RCS}}^{\mathrm{corr}}\left(z\right)$ has values lower than 3, which characterize this region as showing a large degree of mixing and (in a more evident way) the presence of turbulence. Such behavior occurs mainly due to the entrainment of cleaner air. A few meters above the ABLH, the ${K}_{\mathrm{RCS}}^{\mathrm{corr}}\left(z\right)$ has a pronounced peak, which occurs due to rare aerosol plumes penetrating into this region. Such behavior was also observed in case study I, as well as by and . Above the ABLH the profile only has values higher than 3; however, as ${\mathit{\tau }}_{\mathrm{RCS}}^{\mathrm{corr}}\left(z\right)$ decreases to values close to zero and low values of the SNR of the RCS are characteristic of this region, it is not possible to extract conclusive information from ${K}_{\mathrm{RCS}}^{\mathrm{corr}}\left(z\right)$. In the same way as in the comparison performed with the other variables, the ${K}_{\mathrm{RCS}}^{\mathrm{corr}}\left(z\right)$ obtained from the wavelength 532 nm presents similar behavior to the profile obtained from 1064 nm; thus, the same phenomena can be observed. On the other hand, the ${K}_{\mathrm{RCS}}^{\mathrm{corr}}\left(z\right)$ obtained from the wavelength 355 nm does not allow for observing the behavior detected in the profile obtained from the reference wavelength, because along the whole profile the ${K}_{\mathrm{RCS}}^{\mathrm{corr}}\left(z\right)$ at 355 nm presents values higher than 3. Figure 12: High-order moments corrected by the first lag correction at 1064 (red line), 532 (green line) and 355 nm (violet line) on 19 July 2018 from 18:00 to 19:00 UTC.
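The kurtosis threshold of 3 used in this discussion can be checked numerically. The short sketch below (our own illustration, using synthetic series rather than lidar data) shows that a Gaussian series gives a kurtosis near 3, a flatter distribution gives values below 3 and a peaked one gives values above 3:

```python
import numpy as np

def kurtosis(x):
    """Non-excess sample kurtosis: fourth central moment over the squared variance.
    Equals 3 for a Gaussian, <3 for flatter and >3 for more peaked distributions."""
    xp = x - x.mean()
    return np.mean(xp ** 4) / np.mean(xp ** 2) ** 2

rng = np.random.default_rng(0)
print(kurtosis(rng.normal(size=200_000)))   # ~3: Gaussian reference
print(kurtosis(rng.uniform(size=200_000)))  # <3: flat, "well-mixed" analogue
print(kurtosis(rng.laplace(size=200_000)))  # >3: peaked, intermittent analogue
```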
Figure 13 shows the composition of the signal in terms of βaer and βmol, retrieved during the analyzed period of this case study (18:00–19:00 UTC) using the Klett–Fernald–Sasano inversion, at each of the three wavelengths, as well as the βratio calculated using the aerosol and molecular backscatter profiles (Bucholtz, 1995). From Fig. 13.1 it is possible to observe that the backscattered signal at 1064 nm has a predominance of βaer, with almost negligible values of βmol. The composition of the backscattered signal at 532 nm is shown in Fig. 13.3. Although the component βmol has values higher than the ones observed at the wavelength 1064 nm, the component βaer is predominant in the backscattered signal composition. The backscattered signal at 355 nm, presented in Fig. 13.5, unlike the other wavelengths, is predominantly composed of βmol and has a low percentage of βaer. Figure 13: Total (aerosol and molecular) backscatter profile and backscatter ratio retrieved using the Klett–Fernald–Sasano inversion technique at 1064, 532 and 355 nm, respectively, for data retrieved on 19 July 2018 from 18:00 to 19:00 UTC. From the results obtained in both case studies, it is possible to observe the influence of the wavelength on the proposed methodology. The wavelength 1064 nm, considered our reference signal, has a negligible molecular contribution; therefore, the backscatter signal retrieved at 1064 nm can be considered approximately equal to the backscatter signal retrieved only from the aerosol contribution, β1064 ≈ βaer. Taking into account the approximation demonstrated in Eq. (5) (RCS1064 ≈ β1064), we can conclude that the range-corrected signal retrieved from a lidar at 1064 nm can be considered, with good precision, approximately equal to the backscatter signal retrieved at the same wavelength for the aerosol component, RCS1064 ≈ βaer. Such a relation enables the observation of the behavior of aerosol plumes from the high-order moments.
In the case of the wavelength 532 nm, β532 is composed of βaer and βmol (${\mathit{\beta }}_{\mathrm{532}}={\mathit{\beta }}_{{\mathrm{aer}}_{\mathrm{532}}}+{\mathit{\beta }}_{{\mathrm{mol}}_{\mathrm{532}}}$); however, as shown in Figs. 7 and 13, there is a predominance of βaer. Although the high-order moment profiles obtained from the wavelength 532 nm are noisier than those generated from the reference wavelength data, the phenomena observed from the 1064 nm data can also be observed in the 532 nm data, mainly after the application of the first lag correction. Consequently, the wavelength 532 nm can be used in the proposed methodology, providing satisfactory results. On the other hand, the backscatter at 355 nm is predominantly composed of βmol and has a small percentage of βaer, as presented in Figs. 7 and 13. This fact justifies the low quality observed in the results retrieved using the wavelength of 355 nm. As established in Eq. (3), the turbulent variable is directly associated with $\mathit{\beta }{{}^{\prime }}_{\mathrm{aer}}$, but, due to the low contribution of this component to the backscatter signal at 355 nm, the assumption established in Eq. (6) cannot be applied. Consequently, the high-order moments obtained from the proposed methodology are noisier and the value of ${\mathit{\tau }}_{\mathrm{RCS}{}^{\prime }}\left(z\right)$ is almost half of the value obtained from the reference wavelength, both due to the influence of βmol, which presents the strongest contribution to the total backscatter coefficient at this wavelength. Therefore, the behavior observed in the high-order moment profiles generated from the 1064 nm wavelength data can be partially (or even totally) suppressed as the complexity of the high-order moments increases. In both case studies it was possible to observe, from the third-order moment (skewness), that the results obtained from the wavelength 355 nm provide misleading information.
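The per-height statistics compared throughout this section, together with the variance-method ABLH, can be condensed into a short sketch (a minimal numpy illustration with our own array and function names, assuming a regularly gridded time–height RCS field; it is not the authors' processing code):

```python
import numpy as np

def moment_profiles(rcs):
    """High-order moment profiles from a (time, height) array of RCS values.
    Returns the variance, skewness and kurtosis per height bin."""
    fluct = rcs - rcs.mean(axis=0)           # fluctuation RCS'(z, t), as in Eq. (1)
    var = fluct.var(axis=0)
    skew = (fluct ** 3).mean(axis=0) / var ** 1.5
    kurt = (fluct ** 4).mean(axis=0) / var ** 2
    return var, skew, kurt

def ablh_from_variance(var, heights):
    """Variance method: the ABLH is taken as the height of the variance maximum."""
    return heights[np.argmax(var)]
```

For a convective case, the height bin with the strongest RCS fluctuations (the entrainment zone) dominates the variance profile, and its height is returned as the ABLH.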
5 Conclusions In this paper we performed a comparative analysis of the use of different wavelengths (355, 532 and 1064 nm) in turbulence studies. The data were acquired with an elastic lidar, from the SPU lidar station of LALINET, by measurements gathered at high frequency (0.5 Hz) from July 2017 to July 2018. The RCS provided by this system was used to calculate high-order moments (variance, skewness and kurtosis) and the integral timescale, which were applied to the characterization of the aerosol dynamics. Based on previous studies, the wavelength 1064 nm was adopted as the reference due to the predominance of βaer. Two case studies (26 July 2017 and 19 July 2018) were performed in order to verify the proposed methodology, as well as the applicability of each wavelength. In both cases, the results obtained from the 1064 nm wavelength demonstrate that the high-order moments can support a detailed analysis of the ABL region. In addition, it is remarkable that the values of τRCS in the region below the ABLH demonstrate the viability of the proposed methodology. The high-order moments obtained from the wavelength 532 nm are slightly more influenced by the noise than the results obtained from the reference wavelength (the noise level can be observed in ACF532). However, the same phenomena observed in the high-order moment profiles generated from the 1064 nm wavelength can be observed in those generated from the wavelength 532 nm, mainly with the application of the first lag correction. On the other hand, the high-order moments obtained from 355 nm have a strong presence of noise, and thus the phenomena presented in the high-order moments obtained from the 1064 nm wavelength cannot be observed, from the third-order moment (skewness) onward, in the 355 nm high-order moment profiles. The analysis of the backscatter signal at each wavelength shows that for both case studies βaer is the predominant contribution at 532 nm, while βmol is predominant at 355 nm.
In this way, the high-order statistics become noisier at 355 nm and cannot be applied in the proposed methodology. In contrast, the predominance of βaer at 532 nm implies that this wavelength provides results similar to those obtained at 1064 nm, especially after the application of the first lag correction. Consequently, the 532 nm wavelength can be used to apply the proposed methodology, providing results similar to those obtained from the 1064 nm wavelength. The results obtained in this paper show the viability of the proposed methodology and its applicability to the 532 nm wavelength, due to the similarity with the results derived at 1064 nm and the evidence of a low ε influence. On the other hand, the wavelength 355 nm does not provide satisfactory results in such a methodology due to the predominance of the molecular signal in its composition. However, a better assessment of the molecular backscatter at 355 nm could reduce the influence of the noise caused by the molecular signal and improve the results obtained from the data generated from this channel. In addition, the high-order moments obtained from the SPU lidar station using elastic lidar data provided us with detailed information about some phenomena in the ABL, giving us a better comprehension of the aerosol dynamics. Data availability. Data used in this paper are available upon request from the corresponding author (gregori.moreira@usp.br). Supplement. Author contributions. The conceptualization was done by GdAM, JLGR and LAA. The methodology was done by GdAM, JLGR and LAA. The analysis software was developed by GdAM. The experiments were designed by GdAM. The data acquisition was performed by GdAM, FJSL, JJS and AAG. The formal analysis, investigation, writing of the original draft, preparation and review of the writing, and editing were performed by GdAM, FJSL, JLGR and LAA. The supervision, project administration and funding acquisition were done by LAA and EL.
Competing interests. The authors declare that they have no conflict of interest. Acknowledgements. This work was supported by the Andalusia Regional Government, through the project P12-RNM-2409, the Spanish Agencia Estatal de Investigación (AEI), through projects CGL2016-81092-R, CGL2017-90884-REDT and CGL2017-83538-C3-1-R, and the Spanish Ministry of Economy and Competitiveness through projects CGL2016-81092-R and CGL2017-90884-REDT. We acknowledge the financial support by the European Union's Horizon 2020 research and innovation program through project ACTRIS-2 (grant agreement no. 654109). The authors gratefully acknowledge the University of Granada, which supported this study through the Excellence Units Program and “Plan Propio. Programa 9 Convocatoria 2013”. The authors would also like to acknowledge the National Council for Scientific and Technological Development (CNPQ) for its support (projects 152156/2018-6, 432515/2018-6 and 150716/2017-6), the São Paulo Research Foundation (FAPESP; grant no. 2015/12793-0) and the FEDER program for the University of Granada. Financial support. This research has been supported by the Andalusian Regional Government (project P12-RNM-2409), the Spanish Agencia Estatal de Investigación (AEI; projects CGL2016-81092-R, CGL2017-90884-REDT and CGL2017-83538-C3-1-R), the Spanish Ministry of Economy and Competitiveness (projects CGL2016-81092-R and CGL2017-90884-REDT), the European Union's Horizon 2020 program (ACTRIS-2, grant agreement no. 654109), the University of Granada, the National Council for Scientific and Technological Development (CNPQ; projects 152156/2018-6, 432515/2018-6 and 150716/2017-6), the São Paulo Research Foundation (FAPESP; grant no. 2015/12793-0), and the FEDER program for the University of Granada. Review statement.
This paper was edited by Vassilis Amiridis and reviewed by three anonymous referees. References Andrews, E., Sheridan, P. J., Ogren, J. A., and Ferrare, R.: In situ aerosol profiles over the Southern Great Plains cloud and radiation test bed site: 1. Aerosol optical properties, J. Geophys. Res.-Atmos., 109, D06208, https://doi.org/10.1029/2003JD004025, 2004. a Antuña Marrero, J. C., Landulfo, E., Estevan, R., Barja, B., Robock, A., Wolfram, E., Ristori, P., Clemesha, B., Zaratti, F., Forno, R., Armandillo, E., Bastidas, A. E., de Frutos Baraja, A. M., Whiteman, D. N., Quel, E., Barbosa, H. M. J., Lopes, F., Montilla-Rosero, E., and Guerrero-Rascado, J. L.: LALINET: The First Latin American-Born Regional Atmospheric Observational Network, B. Am. Meteorol. Soc., 98, 1255–1275, https://doi.org/10.1175/BAMS-D-15-00228.1, 2017. a, b Baars, H., Ansmann, A., Engelmann, R., and Althausen, D.: Continuous monitoring of the boundary-layer top with lidar, Atmos. Chem. Phys., 8, 7281–7296, https://doi.org/10.5194/acp-8-7281-2008, 2008. a Bravo-Aranda, J. A., de Arruda Moreira, G., Navas-Guzmán, F., Granados-Muñoz, M. J., Guerrero-Rascado, J. L., Pozo-Vázquez, D., Arbizu-Barrena, C., Olmo Reyes, F. J., Mallet, M., and Alados Arboledas, L.: A new methodology for PBL height estimations based on lidar depolarization measurements: analysis and comparison against MWR and WRF model-based results, Atmos. Chem. Phys., 17, 6839–6851, https://doi.org/10.5194/acp-17-6839-2017, 2017. a Bucholtz, A.: Rayleigh-scattering calculations for the terrestrial atmosphere, Appl. Opt., 34, 2765–2773, https://doi.org/10.1364/AO.34.002765, 1995. a de Arruda Moreira, G., Guerrero-Rascado, J. L., Benavent-Oltra, J. A., Ortiz-Amezcua, P., Román, R., Bedoya-Velásquez, A. E., Bravo-Aranda, J. A., Olmo Reyes, F. J., Landulfo, E., and Alados-Arboledas, L.: Analyzing the turbulent planetary boundary layer by remote sensing systems: the Doppler wind lidar, aerosol elastic lidar and microwave radiometer, Atmos. 
Chem. Phys., 19, 1263–1280, https://doi.org/10.5194/acp-19-1263-2019, 2019. a, b, c Eck, T. F., Holben, B. N., Reid, J. S., Dubovik, O., Smirnov, A., O'Neill, N. T., Slutsker, I., and Kinne, S.: Wavelength dependence of the optical depth of biomass burning, urban, and desert dust aerosols, J. Geophys. Res.-Atmos., 104, 31333–31349, https://doi.org/10.1029/1999JD900923, 1999. a Emeis, S.: Surface-Based Remote Sensing of the Atmospheric Boundary Layer, Atmospheric and Oceanographic Sciences Library, Vol. 40, Springer Heidelberg, https://doi.org/10.1007/978-90-481-9340-0, 2011. a Engelmann, R., Wandinger, U., Ansmann, A., Müller, D., Zeromskis, E., Althausen, D., and Wehner, B.: Lidar Observations of the Vertical Aerosol Flux in the Planetary Boundary Layer, J. Atmos. Ocean. Technol., 25, 1296–1306, https://doi.org/10.1175/2007JTECHA967.1, 2008. a Feingold, G. and Morley, B.: Aerosol hygroscopic properties as measured by lidar and comparison with in situ measurements, J. Geophys. Res., 108, 4327, https://doi.org/10.1029/2002JD002842, 2003. a Fernald, F. G.: Analysis of atmospheric lidar observations: some comments, Appl. Opt., 23, 652–653, https://doi.org/10.1364/AO.23.000652, 1984. a Guerrero-Rascado, J. L., Landulfo, E., Antuña, J. C., de Melo Jorge Barbosa, H., Barja, B., Bastidas, A. E., Bedoya, A. E., da Costa, R. F., Estevan, R., Forno, R., Gouveia, D. A., Jiménez, C., Gonçalves Larroza, E., da Silva Lopes, F. J., Montilla-Rosero, E., de Arruda Moreira, G., Nakaema, W. M., Nisperuza, D., Alegria, D., Múnera, M., Otero, L., Papandrea, S., Pallota, J. V., Pawelko, E., Quel, E. J., Ristori, P., Rodrigues, P. F., Salvador, J., Sánchez, M. F., and Silva, A.: Latin American Lidar Network (LALINET) for aerosol research: Diagnosis on network instrumentation, J. Atmos. Sol.-Terr. Phys., 138/139, 112–120, https://doi.org/10.1016/j.jastp.2016.01.001, 2016. 
a, b Hammann, E., Behrendt, A., Le Mounier, F., and Wulfmeyer, V.: Temperature profiling of the atmospheric boundary layer with rotational Raman lidar during the HD(CP)2 Observational Prototype Experiment, Atmos. Chem. Phys., 15, 2867–2881, https://doi.org/10.5194/acp-15-2867-2015, 2015. a Heese, B., Flentje, H., Althausen, D., Ansmann, A., and Frey, S.: Ceilometer lidar comparison: backscatter coefficient retrieval and signal-to-noise ratio determination, Atmos. Meas. Tech., 3, 1763–1770, https://doi.org/10.5194/amt-3-1763-2010, 2010. Holben, B., Eck, T., Slutsker, I., Tanré, D., Buis, J., Setzer, A., Vermote, E., Reagan, J., Kaufman, Y., Nakajima, T., Lavenu, F., Jankowiak, I., and Smirnov, A.: AERONET—A Federated Instrument Network and Data Archive for Aerosol Characterization, Remote Sens. Environ., 66, 1–16, https://doi.org/10.1016/S0034-4257(98)00031-5, 1998a. a Holben, B. N., Eck, T. F., Slutsker, I., Tanré, D., Buis, J. P., Setzer, A., Vermote, E., Reagan, J. A., Kaufman, Y. J., Nakajima, T., Lavenu, F., Jankowiak, I., and Smirnov, A.: AERONET – A Federated Instrument Network and Data Archive for Aerosol Characterization, Remote Sens. Environ., 66, 1–16, https://doi.org/10.1016/S0034-4257(98)00031-5, 1998b. a Hooper, W. P. and Eloranta, E. W.: Lidar Measurements of Wind in the Planetary Boundary Layer: The Method, Accuracy and Results from Joint Measurements with Radiosonde and Kytoon, J. Clim. Appl. Meteor., 25, 990–1001, 1986. a Kaimal, J. C. and Gaynor, J. E.: The Boulder Atmospheric Observatory, J. Clim. Appl. Meteor., 22, 863–880, https://doi.org/10.1175/1520-0450(1983)022<0863:TBAO>2.0.CO;2, 1983. a Kiemle, C., Ehret, G., Fix, A., Wirth, M., Poberaj, G., Brewer, W. A., Hardesty, R. M., Senff, C., and LeMone, M. A.: Latent Heat Flux Profiles from Collocated Airborne Water Vapor and Wind Lidars during IHOP_2002, J. Atmos. Ocean. Technol., 24, 627–639, https://doi.org/10.1175/JTECH1997.1, 2007. a Klett, J. 
D.: Lidar calibration and extinction coefficients, Appl. Opt., 22, 514–515, https://doi.org/10.1364/AO.22.000514, 1983. a Klett, J. D.: Lidar inversion with variable backscatter/extinction ratios, Appl. Opt., 24, 1638–1643, https://doi.org/10.1364/AO.24.001638, 1985. a Lagouarde, J.-P., Commandoire, D., Irvine, M., and Garrigou, D.: Atmospheric boundary-layer turbulence induced surface temperature fluctuations. Implications for TIR remote sensing measurements, Remote Sens. Environ., 138, 189–198, https://doi.org/10.1016/j.rse.2013.06.011, 2013. a Lagouarde, J.-P., Irvine, M., and Dupont, S.: Atmospheric turbulence induced errors on measurements of surface temperature from space, Remote Sens. Environ., 168, 40–53, https://doi.org/10.1016/j.rse.2015.06.018, 2015. a Lenschow, D. H., Wyngaard, J. C., and Pennell, W. T.: Mean-Field and Second-Moment Budgets in a Baroclinic, Convective Boundary Layer, J. Atmos. Sci., 37, 1313–1326, https://doi.org/10.1175/1520-0469(1980)037<1313:MFASMB>2.0.CO;2, 1980. a Lenschow, D. H., Mann, J., and Kristensen, L.: How Long Is Long Enough When Measuring Fluxes and Other Turbulence Statistics?, J. Atmos. Ocean. Technol., 11, 661–673, https://doi.org/10.1175/1520-0426(1994)011<0661:HLILEW>2.0.CO;2, 1994. a Lenschow, D. H., Wulfmeyer, V., and Senff, C.: Measuring Second- through Fourth-Order Moments in Noisy Data, J. Atmos. Ocean. Technol., 17, 1330–1347, https://doi.org/10.1175/1520-0426(2000)017<1330:MSTFOM>2.0.CO;2, 2000. a, b, c Lopes, F. J. S., Luis Guerrero-Rascado, J., Benavent-Oltra, J. A., Román, R., Moreira, G. A., Marques, M. T. A., da Silva, J. J., Alados-Arboledas, L., Artaxo, P., and Landulfo, E.: Rehearsal for Assessment of atmospheric optical Properties during biomass burning Events and Long-range transportation episodes at Metropolitan Area of São Paulo-Brazil (RAPEL), EPJ Web Conf., 176, 08011, https://doi.org/10.1051/epjconf/201817608011, 2018. 
a Lothon, M., Lenschow, D., and Mayor, S.: Coherence and Scale of Vertical Velocity in the Convective Boundary Layer from a Doppler Lidar, Bound.-Lay. Meteorol., 121, 521–536, https://doi.org/10.1007/s10546-006-9077-1, 2006. a Martucci, G., Matthey, R., Mitev, V., and Richner, H.: Comparison between Backscatter Lidar and Radiosonde Measurements of the Diurnal and Nocturnal Stratification in the Lower Troposphere, J. Atmos. Ocean. Technol., 24, 1231–1244, https://doi.org/10.1175/JTECH2036.1, 2007. a McNicholas, C. and Turner, D. D.: Characterizing the convective boundary layer turbulence with a High Spectral Resolution Lidar, J. Geophys. Res.-Atmos., 119, 12910–12927, https://doi.org/10.1002/2014JD021867, 2014. a, b, c, d Menut, L., Flamant, C., Pelon, J., and Flamant, P. H.: Urban boundary-layer height determination from lidar measurements over the Paris area, Appl. Opt., 38, 945–954, https://doi.org/10.1364/AO.38.000945, 1999. a, b Monin, A. S. and Yaglom, A. M.: Statistical Fluid Mechanics, Vol. 2, MIT Press, 874 pp., 1979. a Muppa, S. K., Behrendt, A., Späth, F., Wulfmeyer, V., Metzendorf, S., and Riede, A.: Turbulent Humidity Fluctuations in the Convective Boundary Layer: Case Studies Using Water Vapour Differential Absorption Lidar Measurements, Bound.-Lay. Meteorol., 158, 43–66, https://doi.org/10.1007/s10546-015-0078-9, 2016. a O'Connor, E. J., Illingworth, A. J., Brooks, I. M., Westbrook, C. D., Hogan, R. J., Davies, F., and Brooks, B. J.: A Method for Estimating the Turbulent Kinetic Energy Dissipation Rate from a Vertically Pointing Doppler Lidar, and Independent Evaluation from Balloon-Borne In Situ Measurements, J. Atmos. Ocean. Technol., 27, 1652–1664, https://doi.org/10.1175/2010JTECHA1455.1, 2010. a Pal, S., Behrendt, A., and Wulfmeyer, V.: Elastic-backscatter-lidar-based characterization of the convective boundary layer and investigation of related statistics, Ann. Geophys., 28, 825–847, https://doi.org/10.5194/angeo-28-825-2010, 2010. 
a, b, c, d, e, f, g, h, i, j, k, l, m, n, o Pappalardo, G., Amodeo, A., Apituley, A., Comeron, A., Freudenthaler, V., Linné, H., Ansmann, A., Bösenberg, J., D'Amico, G., Mattis, I., Mona, L., Wandinger, U., Amiridis, V., Alados-Arboledas, L., Nicolae, D., and Wiegner, M.: EARLINET: towards an advanced sustainable European aerosol lidar network, Atmos. Meas. Tech., 7, 2389–2409, https://doi.org/10.5194/amt-7-2389-2014, 2014. a Poltera, Y., Martucci, G., Collaud Coen, M., Hervo, M., Emmenegger, L., Henne, S., Brunner, D., and Haefele, A.: PathfinderTURB: an automatic boundary layer algorithm. Development, validation and application to study the impact on in situ measurements at the Jungfraujoch, Atmos. Chem. Phys., 17, 10051–10070, https://doi.org/10.5194/acp-17-10051-2017, 2017. a Sasano, Y. and Nakane, H.: Significance of the extinction/backscatter ratio and the boundary value term in the solution for the two-component lidar equation, Appl. Opt., 23, 1–13, https://doi.org/10.1364/AO.23.0011_1, 1984. a Stull, R.: An Introduction to Boundary Layer Meteorology, Atmospheric and Oceanographic Sciences Library, Springer Netherlands, 1988. a Stull, R., Santoso, E., Berg, L., and Hacker, J.: Boundary Layer Experiment 1996 (BLX96), B. Am. Meteorol. Soc., 78, 1149–1158, https://doi.org/10.1175/1520-0477(1997)078<1149:BLEB>2.0.CO;2, 1997. a Titos, G., Cazorla, A., Zieger, P., Andrews, E., Lyamani, H., Granados-Muñoz, M., Olmo, F., and Alados-Arboledas, L.: Effect of hygroscopic growth on the aerosol light-scattering coefficient: A review of measurements, techniques and error sources, Atmos. Environ., 141, 494–507, https://doi.org/10.1016/j.atmosenv.2016.07.021, 2016.  a Turner, D. D., Ferrare, R. A., Wulfmeyer, V., and Scarino, A. J.: Aircraft Evaluation of Ground-Based Raman Lidar Water Vapor Turbulence Profiles in Convective Mixed Layers, J. Atmos. Ocean. Technol., 31, 1078–1088, https://doi.org/10.1175/JTECH-D-13-00075.1, 2014. a Veselovskii, I., Whiteman, D. 
N., Korenskiy, M., Suvorina, A., and Pérez-Ramírez, D.: Use of rotational Raman measurements in multiwavelength aerosol lidar for evaluation of particle backscattering and extinction, Atmos. Meas. Tech., 8, 4111–4122, https://doi.org/10.5194/amt-8-4111-2015, 2015. a Vogelmann, A. M., McFarquhar, G. M., Ogren, J. A., Turner, D. D., Comstock, J. M., Feingold, G., Long, C. N., Jonsson, H. H., Bucholtz, A., Collins, D. R., Diskin, G. S., Gerber, H., Lawson, R. P., Woods, R. K., Andrews, E., Yang, H.-J., Chiu, J. C., Hartsock, D., Hubbe, J. M., Lo, C., Marshak, A., Monroe, J. W., McFarlane, S. A., Schmid, B., Tomlinson, J. M., and Toto, T.: RACORO Extended-Term Aircraft Observations of Boundary Layer Clouds, B. Am. Meteorol. Soc., 93, 861–878, https://doi.org/10.1175/BAMS-D-11-00189.1, 2012. a Wang, Z., Cao, X., Zhang, L., Notholt, J., Zhou, B., Liu, R., and Zhang, B.: Lidar measurement of planetary boundary layer height and comparison with microwave profiling radiometer observation, Atmos. Meas. Tech., 5, 1965–1972, https://doi.org/10.5194/amt-5-1965-2012, 2012. a Weitkamp, C. (Ed.): Lidar: Range-Resolved Optical Remote Sensing of the Atmosphere, Springer Series in Optical Sciences, Springer New York, 2005. a Welton, E. J., Campbell, J. R., Spinhirne, J. D., and Scott, V. S.: Global monitoring of clouds and aerosols using a network of micropulse lidar systems, Proc. SPIE 4153, Lidar Remote Sensing for Industry and Environment Monitoring, https://doi.org/10.1117/12.417040, 2001. a Williams, A. G. and Hacker, J. M.: The composite shape and structure of coherent eddies in the convective boundary layer, Bound.-Lay. Meteorol., 61, 213–245, https://doi.org/10.1007/BF02042933, 1992. a Wulfmeyer, V.: Investigation of Turbulent Processes in the Lower Troposphere with Water Vapor DIAL and Radar–RASS, J. Atmos. Sci., 56, 1055–1076, https://doi.org/10.1175/1520-0469(1999)056<1055:IOTPIT>2.0.CO;2, 1999. a Wulfmeyer, V., Pal, S., Turner, D. 
D., and Wagner, E.: Can water vapor Raman lidar resolve profiles of turbulent variables in the convective boundary layer?, Bound.-Lay. Meteorol., 136, 253–284, https://doi.org/10.1007/s10546-010-9494-z, 2010. a
# Question #0ac33

Feb 23, 2017

$\textcolor{blue}{\text{Making "y" the dependent variable}}$

Divide both sides by 4

$\frac{4}{4} y = \frac{1.4}{4} x - \frac{1}{4}$

But $\frac{4}{4} = 1$

$y = \frac{1.4}{4} x - \frac{1}{4}$

But $\frac{1.4}{4}$ is the same as $\frac{1.4 \times 10}{4 \times 10} = \frac{14}{40} = \frac{14 \div 2}{40 \div 2} = \frac{7}{20}$

$\textcolor{green}{y = \frac{7}{20} x - \frac{1}{4}}$

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

$\textcolor{blue}{\text{Making "x" the dependent variable}}$

I am a bit fed up with the decimal so let's get rid of it. Multiply everything by 10

$40 y = 14 x - 10$

Add 10 to both sides

$40 y + 10 = 14 x - 10 + 10$

But $-10 + 10 = 0$

$40 y + 10 = 14 x$

Divide both sides (everything) by 14

$\frac{40}{14} y + \frac{10}{14} = \frac{14}{14} \times x$

$\frac{20}{7} y + \frac{5}{7} = x$

Write as: $\textcolor{green}{x = \frac{20}{7} y + \frac{5}{7}}$

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Substitute any known value into these and you have your answer. However, no known values are provided in the question.
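A quick consistency check: both forms should describe the same line. Picking (say) the test value $x = 5$:

$y = \frac{7}{20} \times 5 - \frac{1}{4} = \frac{7}{4} - \frac{1}{4} = \frac{3}{2}$

$x = \frac{20}{7} \times \frac{3}{2} + \frac{5}{7} = \frac{30}{7} + \frac{5}{7} = \frac{35}{7} = 5$

So the two rearrangements agree.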
# The financial implications of working longer: An application of a micro-economic model of retirement in Belgium 1. Federal Planning Bureau, Belgium 2. Centre for Sociological Research (CESO), Katholieke Universiteit Leuven, Belgium Research article Cite this article as: G. J. M. Dekkers; 2007; The financial implications of working longer: An application of a micro-economic model of retirement in Belgium; International Journal of Microsimulation; 1(1); 10-25. doi: 10.34196/ijm.00003 ## Abstract In this paper, the costs and benefits associated with postponing retirement are simulated in a standard simulation model for Belgium, using the approach of Stock and Wise (1990). Unlike earlier microsimulation-based applications of this approach, such as Gruber and Wise (1999, 2004), this model does not take a representative sample as the point of departure, but simulates the costs and benefits of postponing retirement for four fictitious employees, representing male and female white- and blue-collar workers. While confirming conclusions drawn by other authors, this model allows for the separation of specific retirement schemes, and of the effect of different fiscal regimes for those retired and working. It is shown that differences between retirement schemes show up in differences in replacement rates and by whether or not the retirement benefit is a function of career length. Furthermore, advantageous fiscal regulations for the retired have a strong impact on the implicit costs of postponing retirement. ## 1. Introduction As a result of structurally low fertility and ever-increasing life expectancy, the Belgian population is ageing rapidly. The 2004 report of the ‘Studiecommissie voor de Vergrijzing’ (Commission for the Study of Ageing) (High Council of Finances, 2004: 20) estimates the budgetary cost of ageing to be 3.4% of GDP. 
It also concludes that an effective way to moderate costs would be to persuade individuals to postpone their retirement, as an increase by one year of the average age at which people retire would decrease the budgetary cost of ageing by 0.9% of GDP. In this, Belgium especially faces a challenge. Figure 1 shows the activity rate and effective retirement age of older employees in European countries. Figure 1 For the European Union as a whole, 38.8% of those older than 55 still have a job. This percentage is 25.1 in Belgium, a figure lower only in Luxembourg. The effective age of retirement is 59.9 in Europe as a whole, and 57 in Belgium. The European countries decided at the European Summit in Barcelona in 2002 that this effective retirement age should be increased by five years before 2010 (EC, 2003). This clearly is even more important for Belgium than for other countries. The purpose of this paper is to provide an insight into the extent to which the current public retirement schemes for private-sector wage earners might encourage older workers to voluntarily leave the labour market and enter retirement. What is the penalty for continuing to work? And does this differ between workers of different categories, and between the two main public retirement schemes available in Belgium? These questions are answered using the so-called option-value approach. The model presented in this paper is based on the notion of actuarial non-neutrality (OECD, 2003: 4) of a retirement scheme, setting the gains from postponing retirement (extra salary) against the losses (foregone expected pension, for all future years until decease) associated with a specific retirement scheme. The next sections present the option value approach and the model, whose simulation results will then be discussed at length. Finally, conclusions will be drawn.

## 2. The option value approach

The well-known replacement ratio compares the last-earned salary with the pension benefit one receives immediately after retirement. 
This may not be the best variable to reflect a retirement decision, for two reasons. First, it is based on the pension benefit in the first year after retirement, and ignores the future development of the pension benefit conditional upon the year of retirement. Second, it suggests that working and retirement are two exchangeable strategies, i.e. that a retired individual can re-enter the labour market in any future year. Although this may be theoretically possible, in practice such behaviour is rare. Hence Duval (2003: 34) describes retirement as an “absorbing rather than as a dynamic state”. The option value approach, developed by Stock and Wise (1990), supposes that a representative agent considers the gains and losses in utility pertaining to every year that he or she could retire. He or she weighs the utility of consumption (i.e. the higher income when working) against leisure (retirement). In this, he or she does not compare just the current alternative incomes, but the expected value of all current and future incomes. Define t as the first year that our individual has the institutional possibility to retire, and define r as the year that he or she retires. The flow of expected future utilities can then be written as:

(1) ${V}_{t}\left(r\right)=\sum _{s=t}^{r-1}{\beta }^{s-t}{a}_{s}{U}_{y}\left({y}_{s}\right)+\sum _{s=r}^{\infty }{\beta }^{s-t}{a}_{s}{U}_{b}\left({b}_{s}\left(r\right)\right)$

where:
- t: the year of potential retirement
- r: the year of actual retirement
- s: future year, starting from either t or r
- $y_s$: labour income or salary in year s
- $b_s(r)$: pension income in year s, given retirement in r
- $U_y$: the utility of consumption
- $U_b$: the utility of leisure
- $\beta$: discount factor = 1/(1 + discount rate)
- $a_s$: the probability of survival from t to s

Given that r* = arg max(Vt(r)), it holds that Vt(r*) − Vt(r) > 0 for each year that r* ≠ r. Considering all the future years that one can choose to retire (i.e. 
from t to the year one reaches the mandatory retirement age), the option value in t is Gt(r*) = Vt(r*) − Vt(t), and one will postpone retirement from year t for as long as Gt(r*) > 0. In r*, there is no additional expected utility from working, and one will therefore retire. Following Gruber and Wise (2004: 26), the option value can be rewritten as

(2) ${G}_{t}\left({r}^{\ast }\right)=\sum _{s=t}^{{r}^{\ast }-1}{\beta }^{s-t}{a}_{s}{U}_{y}\left({y}_{s}\right)+\left[\sum _{s={r}^{\ast }}^{\infty }{\beta }^{s-t}{a}_{s}{U}_{b}\left({b}_{s}\left({r}^{\ast }\right)\right)-\sum _{s=t}^{\infty }{\beta }^{s-t}{a}_{s}{U}_{b}\left({b}_{s}\left(t\right)\right)\right]$

The closely related peak value considers pension wealth only: it is the maximum, over future retirement years, of the gain in discounted expected pension benefits from postponing retirement. As defined, the option value and peak value both assume that the older worker considers all years from t to the legal retirement age in one decision. One might, however, also want to consider year-to-year decisions, in which one decides (not) to retire just for the year to come. This decision is reflected by some additional variables, closely related to the above option and peak value, which are to be presented now. First of all, Social Security Wealth (SSW) can be expressed as the flow of discounted expected utility from retirement in the year r, as seen from t:

(3) $SS{W}_{r}^{t}=\sum _{s=r}^{\infty }{\beta }^{s-t}{a}_{s}{U}_{b}\left(b\left(r\right)\right)$

The change of SSW as a result of postponing retirement by just one year is

(4) $△SS{W}_{r}^{t}=SS{W}_{r}^{t}-SS{W}_{r-1}^{t}.$

This is referred to as the wealth accrual. When retirement is postponed by one year, one renounces one year of pension benefits. However, depending on the scheme, the pension benefit may be a function of the career length, and postponing retirement then implies a higher benefit in the remaining years of retirement. If the two effects cancel each other out for an additional year of work, the system is said to be “actuarially neutral at the margin” (OECD, 2003: 34). 
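As a minimal numerical sketch of Eqs. (1) and (2), with utilities taken as monetary amounts (as MEP does later in the paper), the following Python fragment computes Vt(r) over a finite horizon and the option value Gt(r*). All incomes, survival probabilities and the discount rate below are illustrative assumptions, not values from the paper.

```python
# Sketch of the option value calculation (Eqs. 1-2), with utilities taken
# as monetary amounts. All numbers below are illustrative assumptions.

def v(t, r, horizon, salary, benefit, beta, surv):
    """Expected discounted income stream V_t(r): salary until r, pension after.

    The infinite sum is truncated at `horizon` (a stand-in for expected death).
    """
    total = 0.0
    for s in range(t, horizon):
        disc = beta ** (s - t) * surv(t, s)
        total += disc * (salary(s) if s < r else benefit(s, r))
    return total

def option_value(t, max_r, horizon, salary, benefit, beta, surv):
    """G_t(r*) = max_r V_t(r) - V_t(t); one retires in t only if this is <= 0."""
    best = max(v(t, r, horizon, salary, benefit, beta, surv)
               for r in range(t, max_r + 1))
    return best - v(t, t, horizon, salary, benefit, beta, surv)

# Illustrative inputs: flat salary, pension rising with career length.
salary = lambda s: 30000.0
benefit = lambda s, r: 600.0 * (r - 20)      # benefit depends on career length
surv = lambda t, s: 0.98 ** (s - t)          # crude survival probability
g = option_value(t=60, max_r=65, horizon=90, salary=salary,
                 benefit=benefit, beta=1 / 1.03, surv=surv)
print(g > 0)
```

With these numbers the benefit rises with career length, so postponing retirement raises expected discounted income and Gt(r*) > 0: the worker keeps working.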
In practice, however, the first effect usually outweighs the second one (not least because of discounting), so usually ∆SSWrt < 0, in which case continuing to work comes with a loss in pension wealth (i.e. total post-retirement pension income). The option value and peak value reflect a retirement decision to be taken over all the future years from t to the year one reaches the mandatory retirement age. The wealth accrual describes a year-to-year retirement decision, but considers only pension income and neglects labour income. One may therefore want to add variables that compare discounted salaries and pension incomes in a year-to-year retirement decision. Write PR as the balance of discounted salaries and changes in pension wealth that result from postponing retirement by one year:

(5) $P{R}_{t}\left(r\right)=△SS{W}_{r}^{t}+{\beta }^{r-t}{a}_{r}{y}_{r}$

This variable is the balance of expected gains (one additional year of salary) and losses (a decrease of the stream of future pension benefits) if one postpones retirement in r. Income earned usually outweighs the negative wealth accrual, and therefore PRt(r) is usually positive. Gains associated with continuing to work will often outweigh the losses, but we shall see that these increases in wealth as a result of continuing to work are (sometimes considerably) lower than the salary suggests. This may help to account for the fact that a proportion of wage-earners chooses not to work but to retire instead. Note again the strong relation between the single- and multiple-year retirement decisions; PR and SSW are the single-year ‘versions’ of the option value and peak value. A second additional variable is used by the OECD (2003: 34), Duval (2003: 18) and Börsch-Supan (2000: 31), as well as Nelissen (2001: 5), and is referred to as the ‘implicit tax on working through r’. 
For every year that retirement is postponed, the ratio of expected losses (renounced pension wealth) to gains (salary) shows the extent to which the former acts as an implicit tax on the latter:

(6) $ita{x}_{r}^{t}=\frac{-△SS{W}_{r}^{t}}{{\beta }^{r-t}{a}_{r}{y}_{r}}$

Following the same line of reasoning explaining that PR usually is positive, one can expect itax to be usually positive as well. If not, it is to be interpreted as an implicit subsidy on work.

## 3. A comparison between MEP and earlier applications of the option value approach

The option value approach has been applied in several studies to date. The best known is that of Gruber and Wise (2004), in which the probability that one retires at a certain age is regressed on wealth accrual, peak value and option value. A similar study has been undertaken by Dellis et al. (2004: 41) for Belgium. The broad conclusion is that, even though the estimators of all variables are significant, wealth accrual has the highest explanatory value for both men and women. Subsequently these models have been used to simulate the effect of policy measures on the retirement probability. The OECD (OECD, 2003; Duval, 2003) also uses this approach to simulate and compare the actuarial non-neutrality of retirement schemes between countries. Their conclusion is that the implicit tax on continuing to work is low for 55-year-old workers, but increases considerably with age. Countries differ significantly, and these differences coincide with differences in replacement ratios. Similarly, Börsch-Supan (2000) explains labour market status (retired or working) using a regression model, including option value with gender, health, education, age and pension type. This model is estimated on German data and the results were used to simulate the impact of actuarial changes in retirement benefits on retirement age. 
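The year-to-year indicators of Eqs. (3)–(6) can be sketched in the same stylised way, again treating utilities as monetary amounts; all parameter values below are illustrative assumptions, not figures from the paper.

```python
# Sketch of the year-to-year indicators (Eqs. 3-6): social security
# wealth, wealth accrual, PR and the implicit tax. Illustrative only.

def ssw(t, r, horizon, benefit, beta, surv):
    """Social security wealth: discounted expected pension stream from r on."""
    return sum(beta ** (s - t) * surv(t, s) * benefit(s, r)
               for s in range(r, horizon))

def accrual(t, r, horizon, benefit, beta, surv):
    """Wealth accrual from postponing retirement from r-1 to r (Eq. 4)."""
    return (ssw(t, r, horizon, benefit, beta, surv)
            - ssw(t, r - 1, horizon, benefit, beta, surv))

def pr(t, r, horizon, salary, benefit, beta, surv):
    """Net gain from one extra year of work (Eq. 5)."""
    return (accrual(t, r, horizon, benefit, beta, surv)
            + beta ** (r - t) * surv(t, r) * salary(r))

def itax(t, r, horizon, salary, benefit, beta, surv):
    """Implicit tax on working through r (Eq. 6)."""
    return (-accrual(t, r, horizon, benefit, beta, surv)
            / (beta ** (r - t) * surv(t, r) * salary(r)))

flat = lambda s, r: 20000.0   # benefit independent of career length
tax = itax(t=60, r=61, horizon=90, salary=lambda s: 30000.0,
           benefit=flat, beta=1 / 1.03, surv=lambda t, s: 0.98 ** (s - t))
print(0 < tax < 1)
```

With a flat benefit that does not depend on career length, the accrual is negative and the implicit tax is positive: every extra year of work forfeits one year of benefits with no later compensation.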
In later work, Berkel and Börsch-Supan (2003) concentrate on institutional characteristics of the German retirement system, and simulate the effect of implementing a system of Notional Defined Contributions (NDC) on the effective retirement age. The above papers all present models which can be classified as static microsimulations (Van Mechelen & Verbist, 2005), since the calculations of the option values are made for a representative sample, and then regressed on the probability that an individual retires. This means that information on specific alternative retirement schemes is combined for existing individuals, and that the information on different individuals is combined in the estimation results. In contrast, the Micro-Economic Pension model (MEP) to be presented in this paper may be classified as a standard simulation model. As in a conventional microsimulation model, the point of departure is the individual; but a standard simulation model concentrates on the calculation of the various indicator variables for four fictional individuals representing male and female blue- and white-collar private-sector wage earners. It also includes two separate ‘first-pillar’ (state) retirement schemes – the early retirement scheme and a conventional early leavers’ scheme. Instead of aggregating outcomes across these schemes and types of individual, MEP keeps them separate, so that comparisons can be made as to the actuarial non-neutrality of the two retirement schemes with respect to male or female white- or blue-collar workers, and the degree to which these schemes may stimulate a step into retirement. Furthermore, an explicit goal of MEP is to bring to the fore the effect of different tax regimes on the implicit costs of continuing to work. For this reason we follow the two OECD studies (OECD, 2003; Duval, 2003) in expressing the variables presented above in monetary units rather than in terms of utility. 
The advantage of this approach is that it prevents the simulation results from being determined to some degree simply by assumptions concerning the difference in utility between a salary and a pension benefit of one euro. Moreover, it is in line with the widely used replacement ratio, which is also a ratio of currency units. Finally, MEP does not include a behavioural equation relating the option value variables to effective retirement probabilities. So, a higher (lower) implicit cost of postponing retirement is assumed to encourage (discourage) retirement, but it is unknown by exactly how much.

## 4. The two Belgian state retirement schemes

The Belgian retirement system consists of three pillars. The first pillar is provided by public social security programs, which are the most important source of income for current pensioners. The second pillar is that of company pension schemes. Although coverage of these schemes is increasing rapidly, their importance in terms of the income they provide to current pensioners is still limited. The third pillar consists of individual life insurances and retirement savings. We concentrate upon the first-pillar social security retirement schemes. Ignoring disability schemes, older private-sector wage earners have two ways of retiring before the mandatory age of retirement. The first is via the system of early retirement (‘vervroegd rustpensioen’), and the second is via the conventional early leavers’ scheme (‘brugpensioen’), hereafter abbreviated to CELS. Both will be explained in broad outline in this section. The retirement system provides former private-sector employees with a pension benefit which essentially is a function of their past career. The mandatory age of retirement is 65 for males. For females it is gradually increasing from 61 years of age (from July 1997) via 63 years (from 2003 on) up to 65 (from 2009 on). Males become eligible for early retirement from the age of 60 on, if they have a minimum career of 35 years. 
For females, this minimum career length increased from 20 years in 1997 to 34 years in 2004, and is set to increase further, to 35 years, in 2009. Males and females can in principle therefore retire, and start to draw public social security pensions, 5 and 3 years before the mandatory retirement age. The remainder of this paper seeks to identify the implicit costs associated with working through to the mandatory retirement age rather than exercising the optional right to retire early. State pension benefit is calculated as follows. The wage-base is essentially the sum of past salaries, indexed on the development of prices and with additional discretionary adjustments for the development of wages between the year of earning and the year of retirement. This modified sum of corrected salaries is then multiplied by the length of the career and divided by the length of the career needed for a full pension. The latter equals the age at which one becomes eligible for a full pension benefit minus 20. So, for males, it is 65 − 20 = 45 years. For females, it is gradually increasing to 45 years. As a result, if one does not have a full career, continuing to work causes the pension benefit to move towards the ‘full-career pension benefit’. This wage-base is then multiplied by either 60% or 75%. If the individual is single, 60% is used. If they have a partner, they can choose the ‘family pension benefit’ of 75%, but then their partner loses his or her own pension entitlement. In consequence, this choice is only beneficial if one’s partner has no significant income of his or her own. Redistributive solidarity is embedded in the pension system in several ways. First of all, the wage one earns in a certain year during one’s career is taken into account only up to a certain limit. All incomes higher than this limit do not add to the wage-base, and hence not to the future pension benefit. Those earning a higher income therefore face a lower replacement rate. 
Second, for those with a career of at least 15 years, the wage-base is calculated substituting a minimum annual allowance for periods of low earnings. Third, there is a minimum pension benefit guaranteed to all, modified for those without a full career history. An alternative to this system of early retirement is the conventional early leavers’ scheme, which is essentially an unemployment scheme. It allows older workers to exit the labour market and become unemployed on favourable terms until the mandatory retirement age. People generally become eligible for a CELS benefit from the age of 58 onwards. (In practice, many older workers retire to the CELS before the age of 60. But these regulations are of an ad hoc nature and therefore not considered further in this paper.) The benefit consists of two parts. First, there is a general unemployment benefit which is equal to 60% of the last wage up to a ceiling. Second, there is an additional benefit which equals 50% of the difference between the unemployment benefit and the last wage net of taxes and social-security contributions. Relative to income, the CELS benefit decreases as income increases, because (i) the regular unemployment benefit is capped, and (ii) the progressive tax system causes net income to lag behind gross income as the latter increases. Unlike the early retirement benefit, the CELS benefit does not depend on the number of working years. Furthermore, when one enters the CELS, formally one does not retire but becomes unemployed. The career length, on which the future state pension will be based upon reaching the mandatory retirement age, therefore continues to increase. Note that one may choose to work after retirement, but the additional income one may earn is very limited. Furthermore, returning to work from a situation of retirement is possible in theory, but rarely seen in practice. Finally, switching from the CELS to the early retirement scheme may be possible, but is not rational and therefore never done.
Switching in the opposite direction is not possible.

## 5. The Micro-Economic Pension Model (MEP)

The Micro-Economic Pension model (MEP) evaluates Equations 2 to 6 for fictitious individuals representing male and female white- and blue-collar workers. The model is written in the SAS macro language and consists of a body of several modules, accompanied by two modules that read time-dependent parameters, such as gross wages, pensions, minimum and/or maximum benefit levels and wage ceilings, tax and social contribution rates, survival probabilities, inflation rates, and so forth. Figure 2 shows the technical structure of the model.

Figure 2

The discussion of the model starts in the top-left quadrant of Figure 2. The user needs to provide the model with information on the gender and labour market status of the fictitious individual, and whether or not he or she applies for a family pension benefit or a single-person pension benefit. Furthermore, one needs to provide a discount rate, and to decide whether results should be expressed in gross or net amounts. Using this information, MEP will, for every year r ≥ t that the individual is eligible for the pension benefit in question, calculate the flow of future expected pension benefits for all future years s until expected death, using the information available at r. All benefits are then discounted back to t using a discount rate and a survival probability. In the case of the CELS, the social security wealth SSW will sum CELS benefits up to the mandatory retirement age, and pension benefits afterwards. For the calculation of the pension or CELS benefit, as well as the various indicators of actuarial non-neutrality, one needs past and current gross incomes. The model includes a matrix containing wages per day for every combination of gender and labour market status, for all ages between 20 and 64 and for all the years between 1955 and 2004. The creation of this matrix is discussed in the appendix.
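The calculations described in this section can be sketched compactly. The fragment below is only an illustrative Python reading of the model's logic — the actual MEP is written in SAS — and the parameter values (`wage_ceiling`, `unemployment_ceiling`, the 3% discount rate used as a default) are placeholder assumptions for the sketch, not the statutory figures.

```python
# Illustrative sketch of the MEP calculations; parameter values are
# placeholders, not the statutory Belgian figures.

def pension_benefit(adjusted_salaries, full_career=45, family=False,
                    wage_ceiling=40_000.0):
    """Yearly average of the (capped, price/wage-adjusted) salaries,
    scaled by the career fraction and by 60% (single) or 75% (family)."""
    capped = [min(s, wage_ceiling) for s in adjusted_salaries]
    career = len(capped)
    wage_base = (sum(capped) / career) * (career / full_career)
    return (0.75 if family else 0.60) * wage_base

def cels_benefit(last_gross_wage, last_net_wage,
                 unemployment_ceiling=15_000.0):
    """Two-part CELS benefit: a capped unemployment benefit of 60% of the
    last wage, plus 50% of the gap between that benefit and the net wage."""
    unemployment = min(0.60 * last_gross_wage, unemployment_ceiling)
    return unemployment + 0.50 * max(last_net_wage - unemployment, 0.0)

def ssw(benefits, survival, discount_rate=0.03):
    """Social security wealth: expected discounted value, at the decision
    year, of the benefit stream; benefits[k] is paid k years ahead and is
    weighted by the probability survival[k] of still being alive then."""
    return sum(b * s / (1.0 + discount_rate) ** k
               for k, (b, s) in enumerate(zip(benefits, survival)))

def year_to_year_indicators(gross_salary, ssw_retire_now, ssw_retire_next):
    """dSSW, PR and itax as used in Section 6: the change in pension wealth
    from postponing retirement one year, the balance of that change against
    the extra salary, and the loss as a fraction of the gain."""
    d_ssw = ssw_retire_next - ssw_retire_now
    pr = gross_salary + d_ssw
    itax = -d_ssw / gross_salary
    return d_ssw, pr, itax
```

For the CELS case, the stream passed to `ssw` would consist of CELS benefits up to the mandatory retirement age and pension benefits afterwards, exactly as described above.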
For a thorough understanding of the model, two fundamental assumptions need to be discussed briefly. The first assumption pertains to the set of laws, rules and regulations on pensions, taxes and contributions that an individual faces in a certain year. In any year r, potential retirement benefits for all future years s are derived using information on laws, rules and regulations available at r. Suppose that an individual may choose to retire in 2003. His potential pension benefit in the future year 2010 will be derived using information available in 2003. If he or she postpones retirement and reconsiders in 2004, then the potential pension in the same year 2010 will be calculated using the information available in 2004. The second assumption pertains to the individuals who form the basis of the MEP model. To keep things simple, especially with respect to fiscal laws and regulations, it is assumed that an individual is either single or the sole income earner in the household. In other words, the income of the individual is the only income of their household. A consequence of this is that a married individual will by definition choose the family pension benefit of 75% of the wage-base. Inputs to the model are based on information drawn from Put (various years) and Ministry of Finance (various years) and follow regulations on taxes, social contributions, pension benefits and CELS benefits for the years 1997–2004. It is assumed that the four typical employees are born in 1940 and enter the labour market at the age of 20. Survival probabilities are derived from the 2000 mortality rates (National Institute of Statistics and Federal Planning Bureau, 2003). Finally, and in accordance with Dellis et al. (2004: 5) and Berkel and Börsch-Supan (2003: 3), the discount rate is set at 3%.

## 6. Results

### 6.1 Retirement

Tables 1 to 4 below present the results for a single male or female white- or blue-collar worker.
Table 1

Table 2

Table 3

Table 4

Table 1 presents the results for a single male white-collar worker. The year 2000 (at the age of 60) is the first year in which he can choose to retire. If he continues to work through this year, his gross income will be 47,811 euro. If he decides to apply for a pension benefit, this gross benefit will amount to 15,763 euro. The second row contains this information if he decides to work through 2000 and reconsider retirement in 2001 (i.e. at the age of 61). The ratio of alternative incomes, hereafter loosely called the replacement ratio, lies around 34%. That this is below 60%, the fraction by which the pension base is multiplied to calculate the pension benefit, is caused by the fact that the pension benefit is based on the average (adjusted) wage over the length of the career, and is subject to a maximum. Especially for male white-collar workers, the salary increases with age. The wage earned in the last year of the career is therefore higher than the average wage, and the ‘replacement ratio’ ends up well below 60%. The increase of the replacement ratio with age is caused by the increasing career length, causing the pension benefit (the numerator) to increase, combined with the lower growth of the gross salary (the denominator). Finally, the net replacement ratio is 53% on average, and is higher than the gross replacement ratio. This is because of the progressiveness of the tax rate, and of a tax exemption for retirees. This will be discussed further in Section 6.3.1. Equation 1 shows Vr to be the expected discounted value of future benefits and salaries if the individual retires in r. When postponing retirement by one year, one roughly loses one year of pension benefits. The future pension benefits for the remaining years of retirement may however increase as a result of the longer career. One moreover gains a year’s salary. The gains outweigh the losses, so total wealth is maximized if one continues to work as long as possible, i.e.
max(Vr) = V2004. The option value is therefore positive and becomes zero at r = 2004. The peak value in the first row of Table 1 shows the difference in the flow of expected discounted benefits if one retires at the age of 60 compared to retiring at the age where the option value is zero (i.e. at 64). The sooner one retires, the greater the flow of expected future benefits, meaning that the increase of the future pension benefit as a result of continuing to work does not fully compensate for the lost pension year. The peak value will therefore be negative, and zero at the age of 64. When only pension benefits are considered, one should of course retire as soon as possible. The three indicators to be discussed next consider a year-to-year retirement decision. The variable ∆SSW reflects the loss in the expected flow of pension benefits if retirement is postponed by one year. One can expect ∆SSW < 0 for the same reason as for the peak value. Table 1 shows that this net loss of postponement is more than 5000 euro in the first year, and increasing. So there is reason to retire as soon as the possibility arises. However, one does not only lose when postponing retirement; one gains a salary as well. The variable PRr shows the balance between the two alternatives. It again shows that the profit of postponing retirement decreases with age, especially between the first and second year. The last indicator, itaxr, shows the loss (the sum of lost future expected discounted pension benefits) as a fraction of the gain (the salary). The advantage of this indicator is that it does not have a scale, and this will turn out to be convenient when the results are to be compared with those of other representative individuals. As PR is positive, itax will usually be positive as well, meaning that there is an implicit tax on working longer. In the first year, this implicit tax is almost 24%, but it increases immediately to almost 33% in the second year.
After that, the increase continues, albeit at a lower speed. The conclusion again is that retirement is encouraged, and that implicit taxes increase with age. Comparing Table 1 with Tables 2 (female white-collar workers) and 3 (male blue-collar workers) reveals that both the gross and net replacement rates are inversely related to income. There are two explanations for this. First, the increase of the wage with age is higher for male white-collar workers than for female white-collar workers and blue-collar workers (see appendix). The average income will therefore be further below the last wage for the first category compared to the last two, and hence the difference in the replacement ratio. Second, as male white-collar workers have the highest income, the effect of the upper wage limit in the calculation of the wage-base (and therefore the pension) is more important, causing the replacement rate to be lower. Compared to the figures in Table 1, incomes, profits and losses of postponing retirement, and therefore the indicators, are all lower in Tables 2 and 3. In contrast, the implicit tax rate itax has no scale. Two effects explain the change of the implicit tax rate between categories of employees. The first, relating to the difference in itax between male blue- and white-collar workers, is that the implicit tax rate increases with the replacement ratio (cf. Börsch-Supan, 2000: 131; Duval, 2003: 33). The second, relating to the difference between male and female white-collar workers, is that the loss associated with postponing retirement is especially high for females. This is because the length of career required for a full pension benefit is shorter for women than for men, although it is increasing over time. As the year of entering the labour market is assumed to be the same for all typical individuals whose simulations are discussed, female employees reach a full pension before males.
If a full career is reached, postponing retirement no longer results in an increase in the pension benefit for the remaining retirement years. The profit associated with postponing retirement therefore decreases, and the implicit tax increases. The results for female blue-collar workers are discussed separately. With the employee types discussed so far, the loss of one year of pension benefits as a result of postponing retirement outweighed the gain in the higher pension benefit in the remaining years. As a result ∆SSW was negative, and itax was therefore positive. In the case of female blue-collar workers, this is no longer the case, as their pension benefit is not a function of the wage, but of the minimum pension benefit set by law. The effect of postponing retirement is therefore determined by the development of this minimum pension over time. If a female blue-collar worker postpones retirement from 2000 to 2001, the minimum pension benefit given a full career will have gone up considerably. Moreover, the woman does not have a full career and the effective minimum benefit level is decreased pro rata temporis. Postponing retirement therefore results in the effective minimum pension benefit increasing even further. This causes the profit of postponement to outweigh the loss, and thus results in an implicit subsidy (a negative implicit tax) on labour of 21%. In later years, the situation is back to normal in that itax > 0. This is because of the lower increase of the minimum pension benefit over time, combined with the fact that the woman has by then reached a full career. The conclusion therefore is that it is profitable for blue-collar females to continue to work between 2000 and 2001, but highly costly afterwards.

### 6.2 CELS benefit

Tables 5 to 8 contain the simulation results of the CELS benefit for the four types of employee discussed in this paper.
Table 5

Table 6

Table 7

Table 8

The CELS benefit is a function of the wage earned in the last year of the career, and not of the average wage. Moreover, the CELS benefit is subject to a higher wage ceiling than the pension benefit. The replacement ratio of the CELS benefit therefore exceeds that of the pension benefit. Consider an individual, say a male white-collar worker at the age of 60, who has the choice between a pension (the first line in Table 1) and a CELS benefit (the third line in Table 5). The gross income of course is the same in both cases, namely 47,811 euro, but the gross benefits are not: the pension benefit is 15,763 euro, whereas the CELS benefit amounts to 25,773 euro. But there is more; if one applies for the latter benefit, one does not retire but becomes unemployed. Pensionable career length will therefore continue to increase, and at 65 one will become eligible for a pension benefit of 18,856 euro. Clearly, when given the choice, one will always choose to enter the CELS over the retirement scheme. In practice, many older workers enter the CELS as a result of company restructuring. These people obviously do not have the choice between working and retiring. But even then it remains advantageous to choose the CELS over the pension scheme. When considering the implicit tax on working longer, itax, it immediately becomes clear that this is much higher for the CELS than for the old-age pension scheme. For the four categories of employees, the average net itax is almost 86% (0.784 vs 0.422) higher for the former than for the latter. The first and most obvious reason for this is that the former benefit is higher than the latter, both before and after retirement. This is only part of the explanation though, for the average replacement ratio for the four workers is ‘only’ 16% higher for the CELS benefit than for the pension benefit. The second reason is that the CELS benefit is not affected by the length of the career.
Postponing retirement, in contrast, only results in a loss of benefits, whilst the expected benefit to be received during the remaining future years does not change. Before turning to a discussion of simulation variants, let us once again take a closer look at the female blue-collar worker. For the other worker types considered, the salary earned when postponing retirement was larger than the pension benefit lost. However, the salary of female blue-collar workers is low. Moreover, due to the minimum unemployment benefit level and the fact that the additional benefit is a function of the difference between this benefit and net income, the total gross CELS benefit is relatively high, and the loss of postponing (in net amounts) outweighs the gain. The PR therefore is often negative or positive but very small, and the option value is zero in the first year. In stark contrast to the other types of employees, the female blue-collar worker actually loses wealth when she continues to work, and the average after-tax implicit tax on working longer is higher than 1.

### 6.3 Simulation variants

The previous section discussed the considerable costs associated with delaying retirement. For pension benefits, these costs are generally higher for females than for males, and higher for blue-collar workers than for white-collar workers (with the exception of female blue-collar workers). For the CELS benefit, the results are more alike between the four categories, and the implicit taxes generally are higher. These results are based upon after-tax indicators of the cost of postponing retirement for single employees, who enter the labour market at the age of 20.
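Before turning to the variants, note that the headline comparison between the two schemes reduces to a one-line ratio of the table averages. A minimal check, using the average net itax figures quoted in the text:

```python
def relative_excess(itax_cels, itax_pension):
    """Fraction by which the CELS implicit tax exceeds the pension one."""
    return itax_cels / itax_pension - 1.0

# Average net itax figures quoted in the text: 0.784 (CELS) vs 0.422 (pension).
excess = relative_excess(0.784, 0.422)  # roughly 0.86, i.e. about 86%
```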
How do these findings change when (i) before-tax indicators are calculated; (ii) the individual is no longer single, but has a partner without any income of their own; (iii) the individual has experienced a shorter career at the moment of becoming eligible for any benefit; or (iv) changes are made to the rules for the calculation of the pension benefit, the CELS benefit and the systems of social contributions and taxes? To facilitate discussion of these questions, Tables 9 and 10 do not contain the year-to-year simulation results, but these results averaged over all decision years r. For example, the top-left quadrant of Table 9 contains the averages of Table 1. To further facilitate discussion, Table 9 introduces an additional variable, which is simply the mean of the row variable over the four columns: depending on the row, it is the mean value of either the replacement ratio or itax over the four categories of workers. The following sub-sections use this information to address in turn each of questions (i)–(iv) above.

Table 9

Table 10

#### 6.3.1 Simulation results before taxes and social contributions

How can we expect the results from Sections 6.1 and 6.2 to change when expressed in gross amounts? In other words, can we disentangle the effect of the tax system from the effect of the retirement system itself? First of all, let us take a brief look at the system of taxes and contributions in Belgium. Whereas social security contributions for employees amount to 13.07% of gross income, the equivalent is 3.55% for pensioners unless the resulting taxable income drops below a minimum. For those receiving CELS benefits, the social contributions are 3.5% plus 3%, each conditional upon receipt of the same minimum level of taxable income. Moreover, tax rates increase by income band, so that the tax system by itself is progressive.
In addition, those receiving a non-salary income (including a pension or a CELS benefit) are granted an additional tax exemption which, for example, equals 1612 euro for singles in 2003. All in all, those who are retired are subject to a favourable regime of taxes and social contributions. The relative gain from working over retirement therefore decreases when expressed as an after-tax amount. Or, to put it another way, expressing the simulation results in gross instead of net amounts should cause the costs of postponing retirement to decrease. This is confirmed by Table 9. For the pension benefit, the average itax over the four categories of workers is 0.265 before taxes and 0.422 after taxes, giving a net implicit tax on working longer. The causes of this implicit tax can be sub-divided into a ‘gross benefit effect’ and a ‘tax effect’. The gross benefit effect, being the direct result of the pension scheme itself, accounts for 0.265/0.422 or 63% of the overall net implicit tax, with the remaining 37% attributable to the ‘tax effect’ of the state system of social contributions and taxation. For the CELS, the contribution of the gross benefit effect to the overall net implicit tax is 0.593/0.789, or 76%, leaving 24% to be accounted for by the ‘tax effect’.

#### 6.3.2 Family versus single-beneficiary pension benefit

When applying for a pension benefit, a single individual will receive a pension benefit equal to 60% of their wage-base, given a full career. If one’s partner has no or limited revenues of his or her own, one can choose the ‘family pension benefit’ of 75% of the wage-base. The minimum pension benefit also increases by 25%. How does this change the above results? Is the implicit tax of working longer higher for those receiving a family pension benefit as compared to those receiving a single-beneficiary pension benefit? Of course, the average gross replacement rate of the pension benefit increases by 25%, from 0.487 to 0.609.
Likewise, the before-tax cost of postponing retirement increases: the average itax for the four types of employees increases by 25% as well, from 0.487 to 0.609. Similarly, one can expect the CELS benefit to increase, though not by as much as 25%. The general unemployment benefit (the first part of the CELS benefit) does not change. However, the fact that one is financially responsible for a partner without any income of his or her own has important fiscal consequences. In this case, 30% of one’s income is taxed as if it were the income of the partner. As the tax system is progressive, this implies a reduction of the tax burden compared to a single individual. The net income that is the basis of the additional part of the CELS benefit increases, and so does the additional part of the CELS benefit. The average replacement ratio of the CELS benefit therefore increases, albeit only by 10% (from 0.614 to 0.657). The increase of itax is more important, namely 13% (from 0.593 to 0.672). This difference is caused by the fact that the implicit tax increases not only as a result of the increasing CELS benefit, but also of the increasing future expected pension benefit which the individual will receive after reaching the retirement age. The pattern of changes is comparable when considering after-tax amounts, though all changes are considerably smaller than the changes in before-tax amounts. This is because the increase of the gross amounts is partially taxed away by the progressive tax scheme.

#### 6.3.3 A shorter career

So far it has been assumed that each individual enters the labour market at the age of 20 in 1960, and therefore chooses to retire (at the age of 58 or 60) after a career of 38 (CELS) or 40 (pension) years. In this section, the impact of a later entry date is considered. Changing the length of the career has no effect on the CELS benefit, and a limited effect on the implicit tax of this scheme.
For the pension scheme, one may expect both the replacement ratio and the implicit tax on working longer to decrease. Table 9 contains the simulation results when somebody enters the labour market at the age of 25 and therefore becomes eligible for retirement after a career of 35 years instead of 40. The average gross and net replacement ratios decrease by 8.5% (0.446/0.487) and 8.3% (0.680/0.706). Gross and net itax decrease by 25% (0.199/0.265) and 19.3% (0.341/0.422). Why is the decrease of the latter so much stronger? This is because the pension benefit increases with career length. An increase in career length of one year is more important in relative terms when one has a career of 35 years than of 40 years. The relative increase of the pension benefit will therefore be more important, and the cost of postponing retirement will be lower. The conclusion, consequently, is that a shorter career comes with a lower implicit tax on working longer.

#### 6.3.4 Some technical variants

In this final section of results, the effect of changes to the rules and regulations for calculating the pension benefit, the CELS benefit and taxation will be introduced and discussed briefly. The goal is to further demonstrate the simulation possibilities of the model and to show the effect of possible policy measures on the implicit tax of working longer. For a more elaborate discussion of these technical variants, see Dekkers (2005). Table 10 contains the simulation results for these variants.

##### 6.3.4.1 An increase of the career requirement

In a first technical variant, consider what would happen if the career required for a full pension benefit were to be increased by one year. The number of years that one should work in order to get a full pension would increase from 45 to 46 years for males, from 40 to 41 years for females joining a scheme before 2001, and so on. This clearly has no effect on the CELS benefit, and only a limited effect on the implicit cost of postponing CELS.
The discussion will therefore be limited to the implications for the pension benefit. One can expect both the replacement rate and the implicit tax on working longer to decrease. Table 10 shows that this indeed is the case, although the changes are limited. Taking the average over the four categories of employees, the gross replacement ratio decreases by 1.85%, from 0.487 to 0.478. The net replacement ratio also decreases, by 1.70%, from 0.706 to 0.694. The gross and net average implicit taxes on postponing retirement decrease by 12.35% (from 0.265 to 0.233) and 9.07% (from 0.422 to 0.384) respectively. Changing the required career has a stronger effect on itax than on the replacement rate, analogous to the results in the previous section. But the explanation is not the same. As the required career length increases, workers reach a full career in a later year than before. Postponing retirement therefore has a greater effect on the pension benefit for all future years, and the gain of delaying retirement therefore increases, meaning that itax decreases.

##### 6.3.4.2 A simultaneous change in the system of taxes and contributions, and the minimum pension benefit

A second technical variant introduces a simultaneous change of the system of taxes and social contributions, and of the minimum pension benefit. As explained in Section 6.3.1, those retired benefit from an additional tax exemption, which decreases their effective tax rate relative to workers. Now suppose that this tax deduction is abolished for both pensioners and CELS beneficiaries. At the same time, the minimum pension benefit is increased by 20%. The higher taxes to be paid on the future pension benefit result in a decrease of both the average net replacement rate (by 8.17%, from 0.706 to 0.649) and itax (by 11.29%, from 0.422 to 0.375) over the four categories of workers. However, the results differ strongly between the categories of workers.
For the pension scheme and for workers other than female blue-collar workers, the decrease of the net replacement ratio is comparable, and lies between 12.4 and 15.3%. The decrease of the implicit tax on postponing retirement lies between 14.9 and 16.7%. The results are different only for female blue-collar workers, as their pension benefit level is determined by the designated minimum benefit level. As a result, the higher tax burden is accompanied by an increase of their gross pension benefit by 20%. As might be expected, the latter is more important than the former, so their net replacement ratio and itax increase by 6.9% and 3.06% respectively. For the CELS benefit, only the loss of the tax exemption has an effect on the net replacement ratio. The average net replacement ratio decreases by 8.96% (from 0.784 to 0.714). For all employee categories but the female blue-collar worker, itax will decrease by between 5.5% and 12.39%. The increasing minimum pension benefit of female blue-collar workers will only become effective once they reach the retirement age. As this future value is discounted and corrected for the survival rate, the positive effect of this increase on itax is not strong enough to compensate for the decreasing effect of the higher taxes. The itax therefore decreases by 6.76% for female blue-collar workers.

##### 6.3.4.3 A change in the social contribution rate for CELS beneficiaries

In a third and final technical variant, the advantageous social contribution rate is abolished for CELS beneficiaries, but maintained for pension beneficiaries. CELS beneficiaries now face the same social contribution rate as workers, and only pensioners have a lower social contribution rate. The result is that nothing changes for the pension beneficiaries, whilst the gross replacement rate and itax remain the same for the CELS benefit.
The only change is that the net replacement rate and the net itax decrease for the latter, as the social contributions which have to be paid on the gross benefit increase. The average net replacement ratio for the four types of employees decreases by 5.54%, from 0.820 to 0.775, and the net implicit tax on postponing retirement decreases by 4.88%, from 0.784 to 0.746. The increase of the net itax relative to its gross value is now limited, and this fiscal measure therefore decreases the attractiveness of the CELS benefit relative to the pension benefit. In the base variant, the average net implicit tax rate was almost 86% (0.784 vs 0.422) higher for the former than for the latter. This difference is now reduced to 77% (0.746 vs 0.422).

## 7. Conclusions

One of the possible solutions to limiting the budgetary consequences of demographic ageing is to increase the activity rate of older workers in Europe. Acknowledging the fact that retirement is an absorbing state from which few or none return, a Micro-Economic Pension Model (MEP) expresses the costs of postponing retirement by one or more years, both before and after taxes and social contributions, and this for two early-retirement schemes: the pension scheme and the CELS. The main conclusion is that the gains from continuing to work in most cases outweigh the losses, so that working longer causes total wealth to increase. However, the implicit costs associated with postponing retirement are in some cases considerable. They may be limited for the gross pension benefit, but they increase considerably when net amounts are simulated. Furthermore, the costs associated with postponing retirement are systematically higher for the CELS benefit than for the pension benefit, and this difference is only partially explained by the higher CELS benefit relative to the pension benefit.
The higher costs associated with working longer are mostly endogenous to the CELS system itself, and to a lesser extent caused by the fiscal inequality between workers and retirees. The model has also considered the impact of alternative worker characteristics: singles versus those with a partner without income of their own; longer versus shorter careers. As might be expected, the effect of these different characteristics is more important in the case of the pension benefit. It also comes as no surprise that the cost of postponing retirement is lower if one has a shorter career, although the magnitude of this difference is remarkable. Furthermore, several changes to the rules and regulations for the pension benefit, the CELS benefit and the tax and contribution regime have been simulated. This was done not so much to suggest policy measures as to demonstrate the capabilities of the model. This exercise clearly shows that different measures have different effects, not only upon the four types of workers as a whole, but also between male and female and white- and blue-collar workers. Policy measures designed to increase the activity rate among older workers should take these differences into account. Finally, by virtue of being able to disaggregate impacts in this way, the fact that MEP is a standard simulation model and not a genuine microsimulation model is arguably an advantage. But there are also disadvantages. How representative are the results for the population of older workers? Are the differences between categories of workers statistically significant? And by how much will the employment rate of older workers change as a result of a policy change? These questions cannot be answered at this stage, as they require MEP to be based on a representative dataset of ‘real’ individuals, and not upon a limited selection of fictitious agents.
A useful line of future enquiry, therefore, would be to combine MEP outputs with a static microsimulation model to generate representative aggregate costs to the state of alternative policy scenarios.

## The estimation of wage matrices

The calculation of the pension and CELS benefit is based on the income an individual made throughout his or her career. Since the model simulates retirement benefits of male and female white- and blue-collar workers, we ideally need category-specific datasets containing $w_{age}^{t}$, the wage per day, spanning all of the years, $t$, that the workers were of age between 20 and 65. As this is not available, it has been necessary to construct such wage rate datasets for four fictitious individuals, one in each category. This has been achieved using the following information:

1. The total gross wage mass and total working days from the centralized statistics of the “Rijksdienst Sociale Zekerheid” or RSZ; quarterly data available from the first quarter of 1976 (cf. Bresseleers & Hendrickx, 2003a, b).
2. Long-term time series of gross wages and employment, available from 1953 (cf. Hendrickx, 2001).
3. Individual information from the “Loon- en arbeidstijdengegevensbank” of the RSZ; quarterly data available from the first quarter of 1997 to the last quarter of 2000.
4. Employment figures for blue-collar workers, white-collar workers and civil servants, specified to gender and age (5 year groups) from the “Enquete arbeidskrachten” of the National Institute of Statistics, available for the years between 1986 and 2003.
5. Population statistics on age and gender and population averages, from the National Institute of Statistics, available from 1948 onward.

First of all, using data source (1), the macroeconomic wage per day $w^{t}$ in the year $t$ was derived for the four categories of workers. This was extrapolated from 1976 back to 1955 using (2). Figure A1 shows this general wage per day for the four categories of workers.
Starting with this $w^{t}$, we use a simple model to derive $w_{age}^{t}$, the wage per day of an individual of a certain age in year $t$. We use the fact that the macroeconomic wage per day is a weighted average of the unknown $w_{age}^{t}$ for every age group at $t$, where the weight is the proportional number of workers of every age group (4; and 5 for the years before 1986). So, the equation

(A1) $\frac{\sum_{age=20}^{64} N_{age}^{t} w_{age}^{t}}{\sum_{age=20}^{64} N_{age}^{t}} = w^{t}$

holds for every category of worker, where $N_{age}^{t}$ is the number of workers in the age group $age$. At the same time, the proportional size of the group of $age$ years old in $t$ may be denoted as

(A2) $p_{age}^{t} = \frac{N_{age}^{t}}{\sum_{age=20}^{64} N_{age}^{t}}$

so that

(A3) $\sum_{age=20}^{64} p_{age}^{t} w_{age}^{t} = w^{t}.$

For every $t$, we have one equation with 45 unknowns, being $w_{20}^{t}$ through $w_{64}^{t}$, and we therefore need additional information to solve this model. This is provided by data source (3). Suppose a relation $f(.)$ between the gross wage per day at a certain age and the gross wage per day at a reference age, say 20, and suppose that this relation is the same for all $t$, giving

(A4) $w_{age \ne 20}^{t} = f\left(w_{20}^{t}\right).$

Substitution results in

(A5) $\sum_{age=20}^{64} p_{age}^{t} f\left(w_{20}^{t}\right) = w^{t}.$

The remaining unknown is $w_{20}^{t}$, which of course is

(A6) $w_{20}^{t} = f^{-1}\left(\frac{w^{t}}{\sum_{age=20}^{64} p_{age}^{t}}\right).$

Now all that is left to do is to estimate a wage profile $f(.)$, separately for each category of worker, which relates the wage per day of an individual in that category aged $age$ to that of a 20 year old. This has been done using simple quadratic regression on wage data from the 3rd quarter of 1998, again using data source (3). Figure A2 contains the resulting wage profiles. As might be expected, both male and female blue-collar workers have a less steep wage profile than white-collar workers.
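The inversion in (A5)–(A6) is easy to sketch numerically. The snippet below is an illustrative reconstruction, not the actual estimation code: the quadratic profile coefficients, the uniform age weights and the macro wage are made-up values, and the profile is assumed to scale multiplicatively with the age-20 wage so that $f$ inverts by a simple division.

```python
import numpy as np

ages = np.arange(20, 65)  # 45 age groups, ages 20..64

# Hypothetical quadratic wage profile relative to a 20-year-old:
# profile[i] = ratio of the wage at ages[i] to the wage at age 20.
profile = 1.0 + 0.045 * (ages - 20) - 0.0004 * (ages - 20) ** 2

# Hypothetical age-structure weights p_age^t (sum to 1) and an observed
# macroeconomic wage per day w^t for one year t (made-up value).
p = np.ones_like(ages, dtype=float) / len(ages)
w_macro = 95.0

# (A5)-(A6): with a multiplicative profile, w^t = w_20^t * sum(p * profile),
# so inverting f amounts to a division.
w20 = w_macro / np.sum(p * profile)

# (A4): wage per day at every age in year t
w_age = w20 * profile

# (A1) check: the weighted average reproduces the macro wage
assert abs(np.sum(p * w_age) - w_macro) < 1e-9
print(round(w20, 2), round(w_age[-1], 2))
```

Repeating this for every year $t$ (with that year's weights and macro wage) yields the full wage matrix for one category of worker.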
Furthermore, the wage per day of young male blue-collar workers is higher than that of white-collar workers, both male and female; and although the wage of female white-collar workers catches up rapidly at first, it only ends up slightly higher than that of male blue-collar workers from the age of 40 on. To summarize, the growth of individual wages between two years, t and t + 1 (and, therefore, between age and age + 1) is determined by the growth rate of the macroeconomic wage $w^{t}$ (Figure A1), whereas the wage difference between two individuals of different age at t is determined by the relationships captured in Figure A2.

Figure A1

Figure A2

## References

1. Pension Reform in Germany: The impact on Retirement Decisions. NBER Working Paper No 9913 (2003) National Bureau of Economic Research.
2. Incentive effects of social security on labor force participation: Evidence in Germany and across Europe (2000) Journal of Public Economics 78:25–29.
3. Databanken RSZ: “LATG-brochure” en “snelle ramingen” (2003), mimeo General Directorate, ADDG(03) VB-KH/6450/9070, dossier 006/002, Brussels, Federal Planning Bureau, August 4.
4. Databank RSZ-gecentraliseerd (2003), mimeo General Directorate ADDG(03)6476/VB-KH/9112, dossier 006/002, Brussels, Federal Planning Bureau, October 22.
5. De Financiële Implicaties van Langer Werken: een MicroEconomisch Pensioen Model (MEP). Working Paper No 15/05 (2005) Brussels: Federal Planning Bureau.
6. Micro-modeling of retirement in Belgium (2004) In: J. Gruber, D. Wise, editors. Social Security Programs and Retirement around the world: Micro-estimation (1). Chicago: the University of Chicago Press. pp. 41–98.
7. The Retirement Effects of Old-age Pension and Early Retirement Schemes in OECD Countries. Economics Department Working Papers Nr. 370 (2003) Paris: Organisation for Economic Co-operation and Development OECD.
8.
Adequate and Sustainable Pensions – joint report by the Commission and the Council (2003) Brussels: European Commission, Directorate-General for Employment and Social Affairs, Unit E.2.
9. Introduction (1999) In: J. Gruber, D. Wise, editors. Social Security Programs and Retirement around the world. Chicago: the University of Chicago Press. pp. 1–36.
10. Introduction (2004) In: J. Gruber, D. Wise, editors. Social Security Programs and Retirement around the world: micro-estimation. Chicago: the University of Chicago Press. pp. 1–41.
11. Bruto-lonen en Werk-gelegenheid: lange-termijnreeksen (2001), mimeo General Directorate ADDG(01) KH/6305/8719, Brussels, Federal Planning Bureau, July 31.
12. Jaarlijks verslag van de Studiecommissie voor de vergrijzing (2004), Brussels, April.
13. Fiscaal Memento. Brussels: Ministerie van Financiën.
14. Mathematische Demografie: Bevolkingsvooruitzichten 2000–2050 per arrondissement (2001) Brussels: Nationaal Instituut voor de Statistiek.
15. Het effect van wijzigingen in vervroegde uittredingsregelingen op de arbeidsparticipatie van oudere werknemers (2001) The Netherlands: research rapport Center Applied Research for the Ministry of Social Affairs.
16. Labour Force Participation of Groups at the Margin of the Labour Market: Past and Future Trends and Policy Challenges. ECO/CPE/WP1(2003) (2003) Paris: Organisation for Economic Co-operation and Development OECD, Economics Department for Working Party No 1 on Macroeconomic and Structural Policy Analysis.
17. Praktijkboek Sociale Zekerheid voor de onderneming en de sociale adviseur (editors) Brussels: Ced.Samsom.
18. Pensions, the option value of work, and retirement (1990) Econometrica 58:1151–1180.
19. Simulatiemodellen: Instrumenten voor Sociaal economisch Onderzoek en Beleid (2005) Tijdschrift voor Sociologie 26:137–153.
## Article and author information ### Author details 1. #### Gijs J. M. Dekkers 1. Federal Planning Bureau, Belgium 2. Centre for Sociological Research (CESO), Katholieke Universiteit Leuven, Belgium gd@plan.be ### Acknowledgements The author wishes to thank two anonymous referees for their helpful comments. ### Publication history 1. Version of Record published: December 31, 2007 (version 1)
Laplace transform dictionary in PDF format

### Dictionary of originals and their Laplace transforms

| $f(t)$ (original) | $F(s)$ (transform) |
| --- | --- |
| $\delta(t)$ | $1$ |
| $1(t)$ | $\frac{1}{s}$ |
| $A \cdot 1(t)$ | $\frac{A}{s}$ |
| $e^{-at} \cdot 1(t)$ | $\frac{1}{s+a}$ |
| $e^{at} \cdot 1(t)$ | $\frac{1}{s-a}$ |
| $A t \cdot 1(t)$ | $\frac{A}{s^2}$ |
| $A \frac{1}{(n-1)!} t^{n-1} \cdot 1(t),\; n>1$ | $\frac{A}{s^n}$ |
| $t e^{-at} \cdot 1(t)$ | $\frac{1}{(s+a)^2}$ |
| $\frac{1}{(n-1)!} t^{n-1} e^{-at} \cdot 1(t),\; n \geq 1$ | $\frac{1}{(s+a)^n}$ |
| $\sin(\omega t) \cdot 1(t)$ | $\frac{\omega}{s^2 + \omega^2}$ |
| $\cos(\omega t) \cdot 1(t)$ | $\frac{s}{s^2 + \omega^2}$ |
| $e^{-at}\sin(\omega t) \cdot 1(t)$ | $\frac{\omega}{(s+a)^2 + \omega^2}$ |
| $e^{-at}\cos(\omega t) \cdot 1(t)$ | $\frac{s+a}{(s+a)^2 + \omega^2}$ |
| $A \cdot f(t)$ | $A \cdot F(s)$ |
| $A_1 f_1(t) + A_2 f_2(t)$ | $A_1 F_1(s) + A_2 F_2(s)$ |
| $f'(t)$ | $sF(s) - f(0)$ |
| $f^{(n)}(t)$ | $s^n F(s) - s^{n-1} f(0) - s^{n-2} f'(0) - \dots - f^{(n-1)}(0)$ |
| $\int_0^t f(\tau)\, \mathrm{d}\tau$ | $\frac{F(s)}{s}$ |
| $\lim_{t \to \infty} f(t)$ | $\lim_{s \to 0} \left[ sF(s) \right]$ |
| $\lim_{t \to 0} f(t)$ | $\lim_{s \to \infty} \left[ sF(s) \right]$ |
| $\lim_{t \to \infty} f'(t)$ | $\lim_{s \to 0} \left[ s^2 F(s) \right]$ |
| $\lim_{t \to 0} f'(t)$ | $\lim_{s \to \infty} \left[ s^2 F(s) \right]$ |

## Tables for controller synthesis

### Naslin's synthesis method

| $\delta_{max}$ [%] | 16 | 12 | 8 | 5 | 3 | 1 |
| --- | --- | --- | --- | --- | --- | --- |
| $\alpha$ [–] | 1.75 | 1.8 | 1.9 | 2 | 2.2 | 2.4 |

where $\delta_{max}$ [%] is the maximum overshoot.

### Synthesis method: Ziegler-Nichols

| Type | $r_0$ | $r_{-1}$ | $r_1$ |
| --- | --- | --- | --- |
| P | $0.5\, r_{0KR}$ | – | – |
| PI | $0.45\, r_{0KR}$ | $\frac{r_0}{0.85\, T_K}$ | – |
| PID | $0.6\, r_{0KR}$ | $\frac{r_0}{0.5\, T_K}$ | $0.125\, T_K r_0$ |

where $r_{0KR}$ is the critical gain and $T_K$ is the period of the critical oscillations.
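The Ziegler-Nichols rows translate directly into a small helper function. This is a sketch of the table above, not code from the course; the critical gain and period in the example call are made-up values.

```python
def ziegler_nichols(r0_crit, T_crit, kind="PID"):
    """Controller coefficients r0, r_{-1}, r1 from the critical gain
    r0_crit and the period of critical oscillations T_crit."""
    if kind == "P":
        return {"r0": 0.5 * r0_crit}
    if kind == "PI":
        r0 = 0.45 * r0_crit
        return {"r0": r0, "r_1": r0 / (0.85 * T_crit)}
    if kind == "PID":
        r0 = 0.6 * r0_crit
        return {"r0": r0, "r_1": r0 / (0.5 * T_crit), "r1": 0.125 * T_crit * r0}
    raise ValueError(kind)

# Example with assumed critical parameters r0_crit = 8 and T_crit = 2 s:
print(ziegler_nichols(8, 2))  # PID: r0 = 4.8, r_1 = 4.8, r1 = 1.2
```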
# Which is true? Algebra Level 1

\large{\begin{aligned} x&=y^a \\ y&=z^b \\ z&=x^c \end{aligned}}

If $x, y$ and $z$ satisfy the system of equations above, which one of the following is true?
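One route to the relation the answer choices presumably test, sketched under the assumption that $x$, $y$, $z$ are positive and $x \neq 1$:

```latex
x = y^{a} = \left(z^{b}\right)^{a} = z^{ab} = \left(x^{c}\right)^{ab} = x^{abc}
\;\Longrightarrow\; abc = 1 .
```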
# Micro Exam 1

What is the difference between the lytic and lysogenic cycles of bacteriophages? (Level 2 - Understanding)
A. The lytic cycle is a viral reproduction mechanism, while the lysogenic cycle is how archaea reproduce.
B. The lytic cycle results in the lysis of the infected cell, while the lysogenic cycle is when the viral DNA is integrated into the host genome.
C. During the lytic cycle the phage injects its DNA, while in the lysogenic cycle the DNA is not injected at all.
D. The lytic cycle results in the lysis of the infected cell, while the lysogenic cycle releases its DNA through exocytosis.
E. The lytic cycle happens over time, while the lysogenic cycle happens very quickly.

B - TRUE, because the lytic cycle is when viral DNA is injected and new DNA is synthesized into phages. The cell then lyses, which releases the new phages. The lysogenic cycle is when the injected DNA integrates into the DNA of its host, so that when the daughter cells are made, each copy contains the DNA with the section of the viral DNA.

Yeasts reproduce asexually via: (Level 1 - Remembering)
A. Sporulation, where mycelia grow into a fruiting body which makes spores
B. Budding, where a small bud forms on the cell membrane of the parent cell and splits off
C. Sporulation, where two different mating types come together and form spores
D. Binary fission, where the parent cell splits in two after copying the DNA
E. Meiosis, where one parent cell creates four daughter cells each with half the initial DNA

Why is the electron transport chain the last step in aerobic respiration? (Level 2 - Understanding)
a. Oxygen was not available until the last step
b. It relies on the products from the other steps of respiration
c. The protein complexes were not fully formed until the last step
d. ATP was not available until the last step
e. It forms CO2, which is the final waste product

A disease has broken out among lab mice.
The lead scientist is a microbiologist and wants to determine what is causing the illness. They initially isolated the sick mice from the healthy ones to contain the pathogen. They did not change anything else in the mice's environment. From one of the sick mice, a sample was taken and grown in a pure culture. The microbe was then injected into one of the mice in the healthy population. How will the microbiologist know whether or not that microbe was the pathogen or not? (Level 2 - Understanding) A. If the pathogen is present in the injected mouse B. If the pathogen is present in the mice's food/environment C. If the injected mouse is showing the same symptoms as the ill mice. D. If there is a bump at the injection site E. If the first culture matches the one taken from the injected mouse.
# Cauchy Sequence Problem

## Homework Statement

q(n) = Sum(from k=1 to n) 1/n!

Exercise 3: Prove that $\{q(n)\}_{n \in \mathbb{N}}$ is a Cauchy sequence.

## Homework Equations

none.

## The Attempt at a Solution

So many attempts at a solution. I know that a sequence is a Cauchy sequence if for all epsilons greater than 0 there exists an N such that for m, n > N, the absolute value of q(m) minus q(n) is less than epsilon. A sequence is a Cauchy sequence if its terms approach a limit (and converge). My problem is with proving this as it is a sum, and not letting it get messy with double factorials. How do I prove this?

## Answers and Replies

AKG, Homework Helper:

I think you mean the sum of 1/k!, not 1/n!.

Hint 1: If $\sum _{k = 1} ^{\infty}\frac{1}{k!}$ converges, then for any $\epsilon > 0$, there exists a natural N such that $\sum _{k=N} ^{\infty} \frac{1}{k!} < \epsilon$

Hint 2: What's the Taylor (or Maclaurin) expansion of $e^x$?

OK, so.. if q(n) converges, then for any epsilon > 0 there exists a natural N such that (q(n) when N=k) is less than epsilon. With the Maclaurin formula we can write that e^x = the sum (from n=0 to infinity) of x^n/n!. Therefore can we just say that since the lim (as n approaches infinity) of q(n) is e, then it converges, and therefore is a Cauchy sequence? Or do we still need to show that there's an N such that q(n) is less than epsilon (for any epsilon greater than 0)?
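Not a proof, but a quick numeric illustration of AKG's hint: the partial sums $q(n) = \sum_{k=1}^{n} 1/k!$ close in on $e - 1$, so the differences $|q(m) - q(n)|$ between far-out terms become arbitrarily small.

```python
import math

def q(n):
    # partial sum of 1/k! for k = 1..n
    return sum(1.0 / math.factorial(k) for k in range(1, n + 1))

print(q(5), q(10), math.e - 1)  # partial sums closing in on e - 1
print(abs(q(12) - q(10)))       # the tail difference is already tiny
```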
# Chapter 5 - Section 5.5 - Trigonometric Equations - Exercise Set - Page 704: 118

$(\frac{7\pi}{6}, -\frac{3}{2})$ and $(\frac{11\pi}{6}, -\frac{3}{2})$; see graph.

#### Work Step by Step

Step 1. Graph $f(x)$ (red curve) and $g(x)$ (blue curve) as shown in the figure.

Step 2. Let $f(x)=g(x)$; we have $3sin(x)=sin(x)-1$ or $sin(x)=-\frac{1}{2}$, which gives solutions in $[0,2\pi]$ as $x=\frac{7\pi}{6}$ and $x=\frac{11\pi}{6}$ corresponding to points of intersection $(\frac{7\pi}{6}, -\frac{3}{2})$ and $(\frac{11\pi}{6}, -\frac{3}{2})$

Step 3. We can identify the above points as shown on the graph.
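As a numeric sanity check on Step 2, the intersections can be located by bisecting $g(x) = \sin(x) + \frac{1}{2}$ on the two subintervals of $[0, 2\pi]$ where it changes sign (the bracketing intervals here are read off the graph):

```python
import math

def bisect(g, lo, hi, tol=1e-12):
    # simple bisection; assumes g(lo) and g(hi) have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

g = lambda x: math.sin(x) + 0.5
roots = [bisect(g, math.pi, 1.5 * math.pi), bisect(g, 1.5 * math.pi, 2 * math.pi)]
print([r / math.pi for r in roots])  # ≈ [1.1667, 1.8333], i.e. 7π/6 and 11π/6
```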
Joint Continuous Probability Distributions

The joint continuous distribution is the continuous analogue of a joint discrete distribution. For that reason, all of the conceptual ideas will be equivalent, and the formulas will be the continuous counterparts of the discrete formulas. Most often, the PDF of a joint distribution having two continuous random variables is given as a function of two independent variables.

Formulas

Suppose the PDF of a joint distribution of the random variables $X$ and $Y$ is given by $f_{XY}(x,y)$. As with all continuous distributions, two requirements must hold: the PDF must be nonnegative at each ordered pair   $(x,y)$   in the domain of $f$, and it must integrate to 1.

$f_{XY}(x,y) \ge 0$

$\int\limits_x \int\limits_y f_{XY}(x,y) \, \mathrm{d}y \, \mathrm{d}x = 1$

Then the marginal PDFs $f_X (x)$ and $f_Y(y)$, the expected values $E(X)$ and $E(Y)$, and the variances $Var(X)$ and $Var(Y)$ can be found by the following formulas.

\begin{align} f_X (x) &= \int\limits_y f_{XY}(x,y) \, \mathrm{d}y \\ f_Y (y) &= \int\limits_x f_{XY}(x,y) \, \mathrm{d}x \\ E(X) &= \int\limits_x x f_X (x) \, \mathrm{d}x \\ E(Y) &= \int\limits_y y f_Y (y) \, \mathrm{d}y \\ Var(X) &= \int\limits_x x^2 f_X (x) \, \mathrm{d}x - (E(X))^2 \\ Var(Y) &= \int\limits_y y^2 f_Y (y) \, \mathrm{d}y - (E(Y))^2 \end{align}

As always, the standard deviations $\sigma_X$ and $\sigma_Y$ are the square roots of their respective variances. To measure any relationship between two random variables, we use the covariance, defined by the following formula.

$Cov(X,Y) = \int\limits_x \int\limits_y xy f_{XY} (x,y) \, \mathrm{d}y \, \mathrm{d}x - E(X)E(Y)$

The correlation has the same definition,   $\rho_{XY} = \dfrac{Cov(X,Y)}{\sigma_X \sigma_Y}$,   and the same interpretation as for joint discrete distributions.

An Example

A college professor wants to learn if there is a relationship between time spent on homework and the percent of the homework that is completed.
Using $X$ as the number of weeks after being distributed that an assignment is turned in, and $Y$ as the percent of the assignment that is completed, he finds that the PDF of the distribution follows the function   $f_{XY}(x,y) = \dfrac{9}{10} xy^2 + \dfrac15$,   when   $0 \le x \le 2$   and   $0 \le y \le 1$. First, we shall verify that this function meets the requirements to be a continuous PDF. For nonnegative values of $x$ and $y$, the function will satisfy   $f_{XY}(x,y) \ge 0$. As for the integral, we have: \begin{align} \int_x \int_y f_{XY}(x,y) \, \mathrm{d}y \, \mathrm{d}x &= \int_0^2 \int_0^1 \left( \dfrac{9}{10} xy^2 + \dfrac15 \right) \, \mathrm{d}y \, \mathrm{d}x \\ &= \int_0^2 \left[ \dfrac{3}{10}xy^3 + \dfrac15 y \right]_0^1 \, \mathrm{d}x \\ &= \int_0^2 \left( \dfrac{3}{10} x + \dfrac15 \right) \, \mathrm{d}x \\ &= \left[ \dfrac{3}{20} x^2 + \dfrac15 x \right]_0^2 \\ &= \dfrac{12}{20} + \dfrac25 - 0 - 0 \\ &= 1 \end{align} The marginal density functions (or marginal PDFs) are found by integrating over the variable to be removed from consideration. \begin{align} f_X (x) &= \int_0^1 \left( \dfrac{9}{10} xy^2 + \dfrac15 \right) \, \mathrm{d}y \\ &= \left[ \dfrac{3}{10} xy^3 + \dfrac15 y \right]_0^1 \\ &= \dfrac{3}{10} x + \dfrac15 \end{align} \begin{align} f_Y (y) &= \int_0^2 \left( \dfrac{9}{10} xy^2 + \dfrac15 \right) \, \mathrm{d}x \\ &= \left[ \dfrac{9}{20} x^2 y^2 + \dfrac15 x \right]_0^2 \\ &= \dfrac95 y^2 + \dfrac25 \end{align} With these formulas, we can obtain probabilities. 
The probability that a student will turn in the assignment less than half of a week after it is assigned is given by

\begin{align} P(x < 0.5) &= \int_0^{0.5} f_X(x) \, \mathrm{d}x \\ &= \int_0^{0.5} \left( \dfrac{3}{10} x + \dfrac15 \right) \, \mathrm{d}x \\ &= \left[ \dfrac{3}{20}x^2 + \dfrac15 x \right]_0^{0.5} \\ &= 0.0375 + 0.1 \\ &= 0.1375 \end{align}

The probability that an assignment will be less than 40% completed when it is turned in is given by

\begin{align} P(y < 0.4) &= \int_0^{0.4} f_Y(y) \, \mathrm{d}y \\ &= \int_0^{0.4} \left( \dfrac95 y^2 + \dfrac25 \right) \, \mathrm{d}y \\ &= \left[ \dfrac35 y^3 + \dfrac25 y \right]_0^{0.4} \\ &= 0.0384 + 0.16 \\ &= 0.1984 \end{align}

The probability that a randomly selected student will turn in an assignment in less than one week with more than half of the assignment completed is given by

\begin{align} P(x < 1, y > 0.5) &= \int_0^1 \int_{0.5}^1 f_{XY}(x,y) \, \mathrm{d}y \, \mathrm{d}x \\ &= \int_0^1 \int_{0.5}^1 \left( \dfrac{9}{10} xy^2 + \dfrac15 \right) \, \mathrm{d}y \, \mathrm{d}x \\ &= \int_0^1 \left[ \dfrac{3}{10} xy^3 + \dfrac15 y \right]_{0.5}^1 \, \mathrm{d}x \\ &= \int_0^1 \left( \dfrac{21}{80} x + \dfrac{1}{10} \right) \, \mathrm{d}x \\ &= \left[ \dfrac{21}{160} x^2 + \dfrac{1}{10} x \right]_0^1 \\ &= 0.13125 + 0.1 \\ &= 0.23125 \end{align}

The expected value (or mean) of each random variable can be found by use of the formulas.
\begin{align} E(X) &= \int_x x f_X (x) \, \mathrm{d}x \\ &= \int_0^2 x \left( \dfrac{3}{10} x + \dfrac15 \right) \, \mathrm{d}x \\ &= \int_0^2 \left( \dfrac{3}{10} x^2 + \dfrac15 x \right) \, \mathrm{d}x \\ &= \left[ \dfrac{1}{10} x^3 + \dfrac{1}{10} x^2 \right]_0^2 \\ &= \dfrac{8}{10} + \dfrac{4}{10} - 0 - 0 \\ &= \dfrac65 = 1.2 \end{align} \begin{align} E(Y) &= \int_y y f_Y (y) \, \mathrm{d}y \\ &= \int_0^1 y \left( \dfrac95 y^2 + \dfrac25 \right) \, \mathrm{d}y \\ &= \int_0^1 \left( \dfrac95 y^3 + \dfrac25 y \right) \, \mathrm{d}y \\ &= \left[ \dfrac{9}{20} y^4 + \dfrac15 y^2 \right]_0^1 \\ &= \dfrac{9}{20} + \dfrac{1}{5} \\ &= \dfrac{13}{20} = 0.65 \end{align} Therefore, students are turning in the assignment after 1.2 weeks on average, and the assignments are 65% complete on average. Or in other words, if a student is randomly selected, we could expect them to turn in a paper after 1.2 weeks, and that paper would be 65% complete. We can also use the formulas to compute the variance and standard deviation of each random variable. 
\begin{align} Var(X) &= \int_x x^2 f_X (x) \, \mathrm{d}x - (E(X))^2 \\ &= \int_0^2 x^2 \left( \dfrac{3}{10} x + \dfrac15 \right) \, \mathrm{d}x - \left( \dfrac65 \right)^2 \\ &= \int_0^2 \left( \dfrac{3}{10} x^3 + \dfrac15 x^2 \right) \, \mathrm{d}x - \dfrac{36}{25} \\ &= \left[ \dfrac{3}{40} x^4 + \dfrac{1}{15} x^3 \right]_0^2 - \dfrac{36}{25} \\ &= \dfrac65 + \dfrac{8}{15} - \dfrac{36}{25} \\ &= \dfrac{22}{75} \approx 0.2933 \\ \sigma_X &= \sqrt{ \dfrac{22}{75} } \approx 0.5416 \end{align}

\begin{align} Var(Y) &= \int_y y^2 f_Y(y) \, \mathrm{d}y - (E(Y))^2 \\ &= \int_0^1 y^2 \left( \dfrac95 y^2 + \dfrac25 \right) \, \mathrm{d}y - \left( \dfrac{13}{20} \right)^2 \\ &= \int_0^1 \left( \dfrac95 y^4 + \dfrac25 y^2 \right) \, \mathrm{d}y - \dfrac{169}{400} \\ &= \left[ \dfrac{9}{25} y^5 + \dfrac{2}{15} y^3 \right]_0^1 - \dfrac{169}{400} \\ &= \dfrac{9}{25} + \dfrac{2}{15} - \dfrac{169}{400} \\ &= \dfrac{17}{240} \approx 0.0708 \\ \sigma_Y &= \sqrt{ \dfrac{17}{240}} \approx 0.2661 \end{align}

Interpreting these results, we find a variance of 0.2933 weeks² for the turn-in time and 0.0708 for the squared completion proportion. The standard deviations are clearer: 0.5416 weeks and 26.61% completion. These standard deviations are an average distance of a data point from the means computed earlier. To obtain the strength of any relationship between these variables, we can compute the covariance and the correlation.
\begin{align} Cov(X,Y) &= \int_x \int_y xy f_{XY}(x,y) \, \mathrm{d}y \, \mathrm{d}x - E(X)E(Y) \\ &= \int_0^2 \int_0^1 xy \left( \dfrac{9}{10} xy^2 + \dfrac15 \right) \, \mathrm{d}y \, \mathrm{d}x - \left( \dfrac65 \right) \left( \dfrac{13}{20} \right) \\ &= \int_0^2 \int_0^1 \left( \dfrac{9}{10} x^2y^3 + \dfrac15 xy \right) \, \mathrm{d}y \, \mathrm{d}x - \dfrac{39}{50} \\ &= \int_0^2 \left[ \dfrac{9}{40} x^2 y^4 + \dfrac{1}{10} xy^2 \right]_0^1 \, \mathrm{d}x - \dfrac{39}{50} \\ &= \int_0^2 \left( \dfrac{9}{40} x^2 + \dfrac{1}{10} x \right) \, \mathrm{d}x - \dfrac{39}{50} \\ &= \left[ \dfrac{3}{40} x^3 + \dfrac{1}{20} x^2 \right]_0^2 - \dfrac{39}{50} \\ &= \dfrac35 + \dfrac15 - \dfrac{39}{50} \\ &= \dfrac{1}{50} = 0.02 \\ \rho_{XY} &= \dfrac{Cov(X,Y)}{\sigma_X \sigma_Y} = \dfrac{0.02}{(0.5416)(0.2661)} = 0.1388 \end{align} The correlation between these variables is slightly positive, indicating that papers will generally be more complete as the time spent on them increases. However, it is a rather weak correlation, because the value of $\rho_{XY}$ is quite close to zero.
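All of the moments above can be double-checked numerically. The sketch below approximates the double integrals with a midpoint Riemann sum on a grid over $[0,2]\times[0,1]$ (the grid size is an arbitrary accuracy/speed trade-off):

```python
# Midpoint Riemann-sum check of the worked example's moments.
N = 500
dx, dy = 2.0 / N, 1.0 / N

f = lambda x, y: 0.9 * x * y * y + 0.2  # the joint PDF f_XY(x, y)

total = ex = ey = exy = ex2 = ey2 = 0.0
for i in range(N):
    x = (i + 0.5) * dx
    for j in range(N):
        y = (j + 0.5) * dy
        w = f(x, y) * dx * dy
        total += w
        ex += x * w
        ey += y * w
        exy += x * y * w
        ex2 += x * x * w
        ey2 += y * y * w

var_x, var_y = ex2 - ex ** 2, ey2 - ey ** 2
cov = exy - ex * ey
rho = cov / (var_x ** 0.5 * var_y ** 0.5)
print(total, ex, ey, cov, rho)  # ≈ 1, 1.2, 0.65, 0.02, 0.139
```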
# Tangential Contact Forces Between Objects

My book says that there are two forces that two objects experience when they're in contact with each other. (It uses an example of a cylinder and a parallelepiped that are in contact with each other.) The chapter is about statics.

1) The normal force.

2) A tangential force, tangential to both surfaces that are in contact with each other. This happens only when the surfaces are rough.

I find this strange because I have never seen the second. The book further states that to be in static equilibrium, these tangential forces can't exceed the maximum force of static friction. I would understand this if the tangential force came from an external source, but I have never seen two surfaces experience tangential forces due to their contact alone. Can someone clarify this?

• @delivosa: Yes, the surfaces must be trying to slide, and if you use $F < \mu_s N$, the normal force can't be zero as well (i.e. they must be pressing against each other). – user7777777 Oct 3 '18 at 12:36
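A toy numeric illustration of that condition: for a block resting on a rough incline (the mass, gravity and friction coefficient below are made-up values), equilibrium demands a tangential contact force of $mg\sin\theta$, which friction can only supply while it does not exceed $\mu_s N$.

```python
import math

m, g, mu_s = 2.0, 9.81, 0.4  # made-up mass, gravity, static friction coefficient

for angle_deg in (10, 20, 25):
    theta = math.radians(angle_deg)
    N = m * g * math.cos(theta)          # normal force from the contact
    F_needed = m * g * math.sin(theta)   # tangential force equilibrium demands
    ok = F_needed <= mu_s * N            # can static friction supply it?
    print(angle_deg, round(F_needed, 2), round(mu_s * N, 2), ok)
```

With these numbers the block holds at 10 and 20 degrees but slips at 25 degrees, since the condition reduces to $\tan\theta \le \mu_s$.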
## 30 July, 2015 ### Shrinkage Estimators for Counting Statistics Edit 19 March 2020: This post has been adapted into a paper in the Journal of Mathematical Psychology. In the process of writing the paper, a number of  mistakes, omissions, or misstatements were found in this post. It is being left up as it was originally written, just in case anybody is interested. For a more correct version, please refer to the journal article. Warning: this post is going to be incredibly technical, even by the standards of this blog.  If what I normally post is gory math, this is the running of the bulls. I'm making it so I can refer back to it when I need to. The goal is to set up the theoretical framework for shrinkage estimation of normalized counting statistics to some common mean. I will fully admit this is a very, very limited framework, but some of the most basic baseball statistics fit into it. In the future I hope I can possibly expand this to include more advanced statistics. I will give (not show) a few purely theoretical results - for proofs, see Natural Exponential Families with Quadratic Variance Functions by Carl Morris in The Annals of Statistics, Vol. 11, No. 2 (1983), 515-529, or the more updated version of that paper. ## Theoretical Framework Let's say I have some metric $X_i$ for player, team, or object $i$. In this framework, $X_i$ represents a count or a sum of some kind - the raw number of hits, or the raw number of earned runs, etc. I know that $X_i$ is the result of a random process that is controlled by a probability distribution with parameter $\theta_i$, which is unique to each player, team, or object - in baseball, for example, $\theta_i$ represents the player's true "talent" level with respect to metric $X_i$. $X_i \sim p(x_i | \theta_i)$ I have to assume that the talent levels $\theta_i$ are exchangeable, though the definition is a bit too much to go into here. 
I'm going to assume that $p(x_i | \theta_i)$ is a member of the natural exponential family with a quadratic variance function (NEFQVF) - this includes very common distributions such as the normal, binomial, Poisson, gamma, and negative binomial. Each of these can be written as the convolution (sum) of $n_i$ other independent, identical distributions, each of which is also NEFQVF with mean $\theta_i$ - the normal is the sum of normals, the binomial is the sum of Bernoullis, the Poisson is the sum of Poissons, the negative binomial is the sum of geometrics, etc.. I will assume that is the case here - that $X_i = \displaystyle \sum_{j = 1}^{n_i} Y_{ij}$ Translating this to baseball terms, this means that $Y_{ij}$ is the outcome of inning, plate appearance, etc.,  $j$ for player $i$ ($j$ ranges from 1 to $n_i$). The metric $X_i$ is then sum of $n_i$ of these outcomes. Each outcome is assumed independent and identical. Once again, $X_i$ is not normalized by dividing by $n_i$. Conditional on having mean $\theta_i$, the expectations of the $Y_{ij}$ are $E[Y_{ij} | \theta_i] = \theta_i$ And so conditional on having mean $\theta_i$, the expected value of the $X_i$ are $E[X_i | \theta_i] = E\left[\displaystyle \sum_{j = 1}^{n_i} Y_{ij} \biggr | \theta_i \right] = \displaystyle \sum_{j = 1}^{n_i} E\left[ Y_{ij} \biggr | \theta_i \right] = n_i E[Y_{ij}| \theta_i] = n_i \theta_i$ Baseball terms: if a player has, for example, on-base percentage $\theta_i$, then the number of on-base events I expect in $n_i$ plate appearances is $n_i \theta_i$. This does not have to be a whole number. 
Similarly, and again conditional on mean $\theta_i$, the independence assumption allows us to write the variance of the $X_i$ as $Var(X_i | \theta_i) = Var\left(\displaystyle \sum_{j = 1}^{n_i} Y_{ij} \biggr | \theta_i \right) = \displaystyle \sum_{j = 1}^{n_i} Var\left( Y_{ij} \biggr | \theta_i \right) = n_i Var(Y_{ij}| \theta_i) = n_i V(\theta_i)$ I'm going to repeat that last bit of notation again, because it's important: $Var(Y_{ij}| \theta_i) =V(\theta_i)$ $V(\theta_i)$ is the variance of the outcome at the most basic level - plate appearance, inning, batter faced, etc.  - conditional on having mean $\theta_i$. For NEFQVF distributions, this has a very particular form -  the variance can be written as a polynomial function of the mean $\theta_i$ up to degree 2 (this is the "Quadratic Variance Function" part of NEFQVF): $Var(Y_{ij} | \theta_i) = V(\theta_i) = c_0 + c_1 \theta_i + c_2 \theta_i^2$ For example, the normal distribution has $V(\theta_i) = \sigma^2$, so it fits the QVF model with $c_0 = \sigma^2$ and $c_1 = c_2 = 0$. For the Binomial distribution, $V(\theta_i) = \theta_i (1-\theta_i) = \theta_i - \theta_i^2$, so it fits the QVF model with $c_0 = 0, c_1 = 1$, and $c_2 = -1$. The Poisson distribution has $V(\theta_i) = \theta_i$, so it fits the QVF model with $c_0 = c_2 = 0$ and $c_1 = 1$. I'm now going to assume that the talent levels $\theta_i$ themselves follow some distribution $G(\theta_i | \mu, \eta)$. The parameter $\mu$ is the expected value of  the $\theta_i$ ($E[\theta_i] = \mu$), and it represents the league average talent level. The parameter $\eta$ controls, but is not necessarily equal to, the variance of $\theta_i$ (how spread out the talent levels are). Both are assumed to be known. 
The two-stage model is then

$X_i \sim p(x_i | \theta_i)$

$\theta_i \sim G(\theta_i | \mu, \eta)$

The unconditional expectation of the $X_i$ is

$E[X_i] = E[E[X_i | \theta_i]] = E[n_i \theta_i] = n_i \mu$

And the unconditional variance of $X_i$ is

$Var(X_i) = E[Var(X_i | \theta_i)] + Var(E[X_i | \theta_i]) = n_i E[ V(\theta_i)] + n_i^2 Var(\theta_i)$

In the above formula, the quantity $E[V(\theta_i)]$ is the average variance of the outcome at the most basic level (plate appearance, inning, etc.), averaging over all possible talent levels $\theta_i$. The quantity $Var(\theta_i)$ is the variance of the talent levels themselves - how spread out talent is in the league. To this point I haven't normalized the $X_i$ by dividing each by $n_i$ - let's do that. If I define $\bar{X_i} = X_i/n_i,$ then based on the formulas above

$E[\bar{X_i}] = E\left[\dfrac{X_i}{n_i}\right] = \dfrac{1}{n_i} E[X_i] = \dfrac{n_i \mu}{n_i} = \mu$

And variance

$Var(\bar{X_i}) = Var\left(\dfrac{X_i}{n_i}\right) = \dfrac{1}{n_i^2} Var(X_i) = \dfrac{n_i E[ V(\theta_i)] + n_i^2 Var(\theta_i)}{n_i^2} = \dfrac{1}{n_i}E[ V(\theta_i)] + Var(\theta_i)$

As members of the exponential family, members of the NEFQVF family are guaranteed to have a conjugate prior distribution, so I'll assume that $G(\theta_i | \mu, \eta)$ is conjugate to $p(x_i | \theta_i)$. For example, if $X_i$ follows a normal distribution, $G(\theta_i | \mu, \eta)$ is a normal as well. If $X_i$ follows a Binomial distribution, then $G(\theta_i | \mu, \eta)$ is a beta distribution. If $X_i$ follows a Poisson distribution, then $G(\theta_i | \mu, \eta)$ is a gamma distribution. The priors themselves do not have to be NEFQVF.
Since $\eta$ and $\mu$ are assumed known, we can use Bayes' rule with conjugate prior $G(\theta_i | \mu, \eta)$ to calculate the posterior distribution for $\theta_i$

$\theta_i | x_i, \mu, \eta \sim \dfrac{p(x_i | \theta_i)G(\theta_i | \mu, \eta)}{\int p(x_i | \theta_i)G(\theta_i | \mu, \eta) d\theta_i}$

NEFQVF families have closed-form posterior densities. I'm then going to take as my estimator the expected value of the posterior, $\hat{\theta_i} = E[\theta_i | x_i]$. Specifically for NEFQVF distributions with conjugate priors, the estimator is then given by

$\hat{\theta_i} = \mu + (1 - B)(\bar{x_i} - \mu) = (1-B) \bar{x_i} + B \mu$

Where $B$ is known as the shrinkage coefficient. For NEFQVF distributions, the form of $B$ is

$B = \dfrac{E[Var(\bar{X_i} | \theta_i)]}{Var(\bar{X_i})} = \dfrac{\dfrac{1}{n_i}E[ V(\theta_i)]}{\dfrac{1}{n_i}E[ V(\theta_i)] + Var(\theta_i)} = \dfrac{E[V(\theta_i)]}{E[V(\theta_i)] + n_i Var(\theta_i)}$

Note: The above two formulas, and several of the rules I used to derive them, are guaranteed for NEF distributions and not just NEFQVF distributions; however, the conjugate prior for a NEF may not have a normalizing constant that exists in closed form, and in practical application the distributions that are actually used tend to be NEFQVF. For NEFQVF distributions, a few more algebraic results can be shown about the exact form of the shrinkage estimator by writing the conjugate prior in the general form for exponential densities - for more information, see section 5 of Morris (1983), mentioned in the introduction.

The shrinkage coefficient $B$ for NEFQVF distributions is the ratio of the within-metric variance to the total variance - which is a function of how noisy the data are and how spread out the talent levels are. If at a certain $n_i$ the normalized metric tends to be very noisy around its mean but the means tend to be clustered together, shrinkage will be large.
If the normalized metric tends to stay close to its mean value but the means tend to be very spread out, shrinkage will be small. And as the number of observations $n_i$ grows, the effect of the noise gets smaller, decreasing the shrinkage amount.

$B$ itself can be thought of as a shrinkage proportion - if $B = 0$ then there is no shrinkage, and the estimator is just the raw observation. This would occur if the average variance around the mean is zero - if there's no noise. If $B = 1$ then complete shrinkage takes place and the estimate of the player's true talent level is just the league average talent level. This occurs if the variance in league talent levels is equal to zero - every player has the exact same talent level. Note that $B$ has no units, since both the top and bottom are variances, so rescaling the data will not change the shrinkage proportion.

I'm going to show a few examples, working through the gory mathematical details. WARNING: the above results are guaranteed only for NEFQVF distributions - the normal, binomial, negative binomial, Poisson, gamma, and NEF-GHS. Some results also apply to NEF distributions - see Morris (1983) for details. If the data model is not one of those distributions, I can't say whether or not the formulas I've given above will be correct.

## Normal-Normal Example

Let's start with one familiar form - the normal model. This model says that $X_i$, the metric for player $i$, is normally distributed, and is constructed as a sum of $Y_{ij}$ random variables, which are also normally distributed with mean $\theta_i$ and known variance $\sigma^2$. The distribution of talent levels also follows a normal distribution with league mean $\mu$ and variance $\tau^2$. This can be written as

$Y_{ij} \sim N(\theta_i, \sigma^2)$

$X_i \sim N(n_i \theta_i, n_i \sigma^2)$

$\theta_i \sim N(\mu, \tau^2)$

The average variance is simple.
As stated before, $V(\theta_i) = \sigma^2$ is constant for the normal distribution, no matter what the actual $\theta_i$ is. Hence,

$E[V(\theta_i)] = E[\sigma^2] = \sigma^2$

The variance of the averages is simple, too - the model assumes it's constant as well.

$Var(\theta_i) = \tau^2$

This gives a shrinkage coefficient of

$B = \dfrac{\sigma^2}{\sigma^2 + n_i \tau^2}$

Which, if I divide both the top and bottom by $n_i$, might look more familiar as

$B = \dfrac{\sigma^2/n_i}{\sigma^2/n_i + \tau^2}$

The shrinkage estimator is then

$\hat{\theta_i} = \mu + \left(1 - \dfrac{\sigma^2/n_i}{\sigma^2/n_i + \tau^2}\right)(\bar{x_i} - \mu)$

Alternatively, I can write $B$ as

$B = \dfrac{\sigma^2/\tau^2}{\sigma^2/\tau^2 + n_i}$

And then it follows the familiar pattern from other estimators of $B = m/(m + n)$ for some parameter $m$.

It may seem like the normal-normal model is not of much use - how many counting statistics are there that are normally distributed at the level of an inning, plate appearance, or batter faced? The very fact that they are counting statistics says that that's impossible. However, the central limit theorem guarantees that sums of independent, identically distributed random variables converge to a normal - hence the distribution of $X_i$ should be unimodal and bell-shaped for large enough $n_i$ (and I'll intentionally leave the discussion of what constitutes "large enough" aside). Thus, as long as the distribution of the $\theta_i$ (the distribution of talent levels) is bell-shaped and symmetric, using the normal-normal model with the normal as an approximation at the $X_i$ level should work.

## Beta-Binomial Example

Suppose we're measuring the sum of binary events of some kind - a hit, an on-base event, a strikeout, etc. - in $n_i$ observations - plate appearances, innings pitched, batters faced, etc. Each event can be thought of as a sample from a Bernoulli distribution (these are the $Y_{ij}$) with variance function $V(\theta_i) = \theta_i(1-\theta_i)$.
The observed metric $X_i$ is binomial, and it is constructed as the sum of these Bernoulli random variables

$Y_{ij} \sim Bernoulli(\theta_i)$

$X_i \sim Binomial (n_i, \theta_i)$

The conjugate prior distribution for the binomial is the beta.

$\theta_i \sim Beta(\mu, M)$

Fitting with the framework given above, I'm using $\mu = \alpha/(\alpha+\beta)$ and $M = \alpha + \beta$ instead of the traditional $\alpha, \beta$ parametrization, so that $\mu$ represents the league mean and $M$ controls the variation. The average variance is fairly complicated here. We need to find

$E[V(\theta_i)] = E[\theta_i(1-\theta_i)] = \displaystyle \int_0^1 \dfrac{\theta_i(1-\theta_i) \theta_i^{\mu M-1}(1-\theta_i)^{(1-\mu) M-1}}{\beta(\mu M, (1-\mu) M)} d\theta_i = \dfrac{\displaystyle \int_0^1 \theta_i^{\mu M}(1-\theta_i)^{(1-\mu) M} d\theta_i}{\beta(\mu M, (1-\mu) M)}$

The top part is a $\beta(\mu M + 1, (1-\mu)M + 1)$ function. Utilizing the properties of the beta function, we have

$E[\theta_i(1-\theta_i)] = \dfrac{\beta(\mu M+1, (1-\mu) M + 1)}{\beta(\mu M, (1-\mu) M)} = \dfrac{\beta(\mu M, (1-\mu) M + 1)}{\beta(\mu M, (1-\mu) M)}\left(\dfrac{\mu M}{\mu M + (1-\mu) M + 1}\right)$

$= \dfrac{\beta(\mu M, (1-\mu) M )}{\beta(\mu M, (1-\mu) M)}\left(\dfrac{\mu M}{\mu M + (1-\mu) M + 1}\right) \left(\dfrac{(1-\mu) M}{\mu M + (1-\mu) M}\right) = \dfrac{\mu(1-\mu)M^2}{(M+1)M} = \dfrac{\mu(1-\mu) M}{M+1}$

The variance of the $\theta_i$ doesn't require nearly as much calculus, since it can be taken directly as the variance of a beta distribution

$Var(\theta_i) = \dfrac{\mu(1-\mu)}{M+1}$

The shrinkage coefficient $B$ is then

$B = \dfrac{\dfrac{\mu(1-\mu)M}{(M+1)}}{\dfrac{\mu(1-\mu)M}{(M+1)} +\dfrac{n_i \mu(1-\mu)}{(M+1)}} = \dfrac{M}{M + n_i}$

since $\mu(1-\mu)/(M+1)$ appears in every term on the top and bottom and cancels out.
Using this model, the shrinkage estimator is given by

$\hat{\theta_i} = \mu + \left(1 - \dfrac{M}{M + n_i}\right)\left(\bar{x_i} - \mu\right)$

## Poisson-Gamma Example

Now suppose that instead of a binary event, the outcome can be a count - zero, one, two, three, etc. Each count can be thought of as a sample from a Poisson distribution with parameter $\theta_i$ (these are the $Y_{ij}$, with $V(\theta_i) = \theta_i$), with $X_i$ as the sum total of counts, which also has a Poisson distribution with parameter $n_i \theta_i$.

$Y_{ij} \sim Poisson(\theta_i)$

$X_i \sim Poisson(n_i \theta_i)$

The conjugate prior distribution of $\theta_i$ for a Poisson is a gamma.

$\theta_i \sim Gamma(\mu, K)$

In this parametrization, I'm using $\mu = \alpha/\beta$ and $K = \beta$ as compared to the traditional $\alpha, \beta$ parametrization. The average variance is

$E[V(\theta_i)] = E[\theta_i] = \mu$

And the variance of the averages is

$Var(\theta_i) = \dfrac{\mu}{K}$

So the shrinkage coefficient $B$ is

$B = \dfrac{\mu}{\mu + \dfrac{n_i \mu}{K}} = \dfrac{1}{1 + \dfrac{n_i}{K}} = \dfrac{K}{K + n_i}$

Which gives a shrinkage estimator of

$\hat{\theta_i} = \mu + \left(1 - \dfrac{K}{K + n_i}\right)(\bar{x_i} - \mu)$

## What Statistics Fit Into this Framework?

Any counting statistic that is constructed as a sum of the same basic events falls under this framework. It's possible to combine multiple basic events into one "super" event, as long as they are considered equivalent. Examples include batting average, on-base percentage, earned run average, batting average on balls in play, fielding percentage, stolen base percentage, team win percentage, etc. It's possible to weight the sum, as long as you're just adding the same type of event to itself over and over.
Any statistic that is a sum, weighted or unweighted, of different events does not fall into this framework - examples include weighted on-base average, slugging percentage, on-base plus slugging percentage, fielding independent pitching, isolated power, etc. Also, any statistics that are ratios of counts - strikeout-to-walk ratio, for example - do not fall under this framework. Statistics like wins above replacement are right out.

I want to make clear that this is simply a discussion of what statistics fit nominally into a very specific theoretical framework. Falling under the framework does not imply that a statistic is good, nor does falling outside it imply that a statistic is bad. Furthermore, even if a statistic does not fall under this framework, shrinkage estimation using these formulas may still work as a very good approximation - the best statistics in sabermetrics today are often weighted sums of counting events, and people have been using these shrinkage estimators on them successfully for years, so clearly they must be doing something right. This is simply what I can justify using statistical theory.

## Performing the Analysis

The values of $\eta$ and $\mu$ must be chosen or estimated. If prior data exists - like, for example, historical baseball data - values can be chosen based upon a careful analysis of that information. If no prior data exists, one option is to estimate the parameters through either moment-based or marginal likelihood-based estimation, and then plug in those values - this method is known as parametric empirical Bayes. Another option is to place a hyperprior or hyperpriors on $\eta$ and $\mu$ and perform a full hierarchical Bayesian analysis, which will almost certainly involve MCMC. Depending on the form of your prior, your shrunk results will likely be similar to, but not equal to, the shrinkage estimators given here.

What if none of the NEFQVF models appear to fit your data?
You have a few options, such as nonparametric or hierarchical Bayesian modeling, but any such method is going to be more difficult and more computationally intensive.

## 24 July, 2015

### Normal-Normal Shrinkage Estimation by Empirical Bayes

Shrinkage estimation is a very common technique in baseball statistics. So is the normal model. It turns out that one way to shrink is to assume that both the data you see and the distribution of means of the data you see are normal - and then estimate the prior distribution of the means from the data itself. The (non-annotated) code I used to generate these results may be found on my github.

## The Normal-Normal Model

The basic normal-normal model assumes two "stages" of data - the first is the observed data, which we will call $Y_i$ - this is assumed to be normally distributed with mean $\theta_i$ and variance $\sigma^2$.

$Y_i \sim N(\theta_i, \sigma^2)$

$\theta_i \sim N(\mu, \tau^2)$

Bayes' rule says that if you assume the "second" level - the distribution of the $\theta_i$ - is also normal, and treat it as a prior, the posterior distribution of $\theta_i$ (that is, the distribution in belief of values of $\theta_i$ after taking into account the data $y_i$ - see my previous post on Bayesian inference) is also normal with distribution

$\theta_i | y_i, \mu, \sigma^2 \sim N(B\mu + (1-B)y_i, (1-B)\sigma^2)$

where

$B = \left(\dfrac{\sigma^2}{\sigma^2 + \tau^2}\right)$

The quantity $B$ gives the amount that the mean of the data shrinks towards the mean of the prior. If $\sigma^2$ is large compared to $\tau^2$ (and so the data has a large amount of variance relative to the prior) then the mean of the data gets shrunk towards the prior mean by quite a bit. If $\sigma^2$ is small compared to $\tau^2$ (and so the data has a small amount of variance relative to the prior) then the mean of the data doesn't get shrunk much at all - it tends to stay near the observed $y_i$.
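To make the shrinkage concrete, here is a small Python sketch of the posterior computation (the function name and the numbers are mine, for illustration only):

```python
def normal_normal_posterior(y_i, mu, sigma2, tau2):
    """Posterior mean and variance of theta_i in the normal-normal model."""
    B = sigma2 / (sigma2 + tau2)  # shrinkage coefficient toward the prior mean
    return B * mu + (1 - B) * y_i, (1 - B) * sigma2

# Noisy data relative to the prior (sigma2 >> tau2): heavy shrinkage toward mu
print(normal_normal_posterior(0.400, 0.265, 0.009, 0.001))

# Precise data relative to the prior (sigma2 << tau2): barely any shrinkage
print(normal_normal_posterior(0.400, 0.265, 0.001, 0.009))
```

In the first call $B = 0.9$ and the observation is pulled almost all the way to the prior mean; in the second, $B = 0.1$ and the posterior mean stays near the observed 0.400.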
No matter what the prior is, this Bayesian estimator can be thought of as a shrinkage estimator. It's just a question of what you're shrinking to, and by how much. A Bayesian, if he or she wishes to be "noninformative" (and I'm going to completely ignore all the controversy of naming and choosing priors) might pick something like $\theta_i \sim N(0,1000)$ as a prior, so $B$ is very close to zero and the shrinkage is very small.

## Empirical Bayes

What we're going to do, however, is focus on using the data to choose the prior distribution, by assuming the normal-normal model as described above and estimating the parameters of the prior from the data itself - this is known as empirical Bayes. The effect of using the data to choose the prior is that the data is shrunk towards the mean of the data, in an amount determined by the variance(s) of the data.

How do we estimate $\mu$ and $\sigma^2$? One nice property of the normal-normal model is that the marginal distribution of $y_i$ is also normal:

$y_i | \sigma^2, \tau^2, \mu \sim N( \mu, \sigma^2 + \tau^2 )$

There are three quantities that need to be estimated to perform this - $\mu$, $\sigma^2$, and $\tau^2$. The formula above gives us two - the first, the population mean, can be estimated by the sample mean $\hat{\mu} = \bar{y}$. The variance is a bit trickier. If I just take the standard variance estimator of the $y_i$

$Var(Y) = \dfrac{\sum (y_i - \bar{y})^2}{N-1}$

then that gives an estimate $\hat{\sigma^2 + \tau^2}$ (the hat is over the entire thing - it's not estimating two individual variances and summing them, it's estimating the sum of two individual variances), assuming $\sigma^2$ is the same for every observation. So we're going to need to get some information from somewhere about what $\sigma^2$ is. If the $y_i$ are sums or averages of observations, we can use that. If not, we have to get creative.
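A moment-based sketch of that estimation step in Python - the helper name and sample values are mine, and it assumes a common, known $\sigma^2$:

```python
def estimate_prior(y, sigma2):
    """Empirical Bayes moment estimates of mu and tau^2,
    given a common, known within-unit variance sigma2."""
    n = len(y)
    mu_hat = sum(y) / n
    total_var = sum((v - mu_hat) ** 2 for v in y) / (n - 1)  # estimates sigma2 + tau2
    tau2_hat = max(total_var - sigma2, 0.0)  # truncate at zero if noise dominates
    return mu_hat, tau2_hat

# Made-up batting averages and a made-up within-player variance
print(estimate_prior([0.240, 0.310, 0.265, 0.250, 0.285], 0.0004))
```

The `max(..., 0.0)` guard matters in practice: with small samples the estimated total variance can fall below the assumed $\sigma^2$, which would otherwise give a negative $\hat{\tau}^2$.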
## Baseball Example

Let's say $Y_i$ is the distribution of a player's batting average in $n_i$ at-bats (the $Y_i$ are already divided by $n_i$, so they are averaged), and the player has true batting average $\theta_i$. We know that a true 0.300 hitter isn't going to always hit 0.300 - he will hit sometimes above, sometimes below. The model says that the player's observed batting average follows a normal distribution with mean equal to his true batting average $\theta_i$ and variance $\sigma^2/n_i$ (since it is an average) - this is the "first" normal distribution as described above.

The second stage is the underlying distribution of the $\theta_i$ - that is, the distribution of all players' true batting averages. It is normal with mean $\mu$ (the true mean league average) and variance $\tau^2$.

So using this two-stage model is equivalent to saying that if I selected a random, unknown batter from the pool of all major league baseball players and observed his batting average $y_i$, it will follow a normal distribution with mean $\mu$ (the true league mean batting average) and variance $\sigma^2/n_i + \tau^2$ (the sum of the variation due naturally to a player's luck and the variation in batting averages between all players). I understand that this is a bit weird to think about, but imagine trying to describe the distribution of a player's observed batting average when we don't even know his name - you have to figure his true average is somewhere between 0.200 and 0.330, with a mean of around 0.265, and then on top of that add in the average amount of natural variation around his true average (sometimes above, sometimes below) at $n_i$ at-bats. That's what's going on here.

In baseball terms, $\sigma^2/n_i$ can be thought of as the "within-player" variance or "luck" and $\tau^2$ can be thought of as the "between-player" variance or "talent."
If a batter is very consistent in hitting near his true ability and the distribution of all batting averages is very spread out, then not much shrinkage will occur. Conversely, if a batter has a lot of variation in his observed batting average but there's not much apparent variation in the distribution of all batting averages, then the player will be shrunk heavily towards the league mean.

We need three estimates in order to do the procedure - an estimate of each of $\mu$, $\sigma^2/n_i$, and $\tau^2$. The first, $\mu$, is the mean batting average of the league - and since $y_i$ is the batting average for player $i$, the estimate of the league mean batting average is the obvious one - $\hat{\mu} = \bar{y}$.

In the case that $n_i$ is the same for all of your observations, taking $Var(y_i)$ gives us $\hat{\sigma^2/n_i + \tau^2}$. If there are differing $n_i$, the estimation method gets more complex - it's worth its own post at some point to work through it. I'm going to assume that all the players have the same number of at-bats to keep things simple for this example, though it admittedly does make it feel rather artificial.

Now we need an estimate of $\sigma^2/n_i$ - the "within-player" variance. But the normal distribution doesn't tell us what $\sigma^2/n_i$ should be. It's pretty typical to model a player's hits in $n_i$ at-bats as following a binomial distribution with true batting average $\theta_i$. Then the sample batting average (hits over at-bats) has variance

$Var(\textrm{Sample Batting Average}) = \dfrac{\theta_i(1-\theta_i)}{n_i}$

The binomial distribution is just a sum of independent, identical Bernoulli trials (at-bats, in this case), each with probability $\theta_i$ of getting a hit. So the central limit theorem says that for large $n_i$, we can approximate the distribution of the sample batting average with a normal!

$\textrm{Sample Batting Average} \sim N\left(\theta_i, \dfrac{\theta_i(1-\theta_i)}{n_i}\right)$

A value for $\theta_i$ is needed.
It feels natural to use $y_i$ - the player's observed batting average - in the estimation. This is wrong, however - remember, we don't know the player's true talent level! We need to use the average variance amount, which is estimated by the variance at the league mean batting average - a quantity we already have an estimate for. The estimate of the within-player variance is then

$\dfrac{\hat{\sigma^2}}{n_i} \approx \dfrac{\bar{y}(1-\bar{y})}{n_i}$

Then the empirical Bayes estimator is given by

$\hat{\theta_i} = \hat{B} \bar{y} + (1-\hat{B})y_i$

where

$\hat{B} = \dfrac{\bar{y}(1-\bar{y})/n_i}{\sum (y_i - \bar{y})^2/(N-1)}$

## Comparison

Using the famous Morris data set to compare these estimators (I'll call them $\hat{\theta}^{NN}$ for normal-normal) to the shrinkage estimators from the Beta-Binomial (see post here - I'll call these $\hat{\theta}^{BB}$) and James-Stein (see post here - and note that the James-Stein estimator is just a specific version of the normal-normal estimator - I'll call them $\hat{\theta}^{JS}$), we see that it performs well.

\begin{array}{l c c c c c}
\hline
\textrm{Player} & y_i & \hat{\theta}^{NN} & \hat{\theta}^{BB} & \hat{\theta}^{JS} & \theta \\
\hline
Clemente & 0.400 & 0.280 & 0.280 & 0.290 & 0.346 \\
F. Robinson & 0.378 & 0.277 & 0.278 & 0.286 & 0.298 \\
F. Howard & 0.356 & 0.275 & 0.275 & 0.282 & 0.276 \\
Johnstone & 0.333 & 0.273 & 0.273 & 0.277 & 0.222 \\
Barry & 0.311 & 0.270 & 0.270 & 0.273 & 0.273 \\
Spencer & 0.311 & 0.270 & 0.270 & 0.273 & 0.270 \\
Kessinger & 0.289 & 0.268 & 0.268 & 0.268 & 0.263 \\
L. Alvarado & 0.267 & 0.266 & 0.266 & 0.264 & 0.210 \\
Santo & 0.244 & 0.263 & 0.263 & 0.259 & 0.269 \\
Swoboda & 0.244 & 0.263 & 0.263 & 0.259 & 0.230 \\
Unser & 0.222 & 0.261 & 0.261 & 0.254 & 0.264 \\
Williams & 0.222 & 0.261 & 0.261 & 0.254 & 0.256 \\
Scott & 0.222 & 0.261 & 0.261 & 0.254 & 0.303 \\
Petrocelli & 0.222 & 0.261 & 0.261 & 0.254 & 0.264 \\
E. Rodriguez & 0.222 & 0.261 & 0.261 & 0.254 & 0.226 \\
Campaneris & 0.200 & 0.258 & 0.258 & 0.249 & 0.285 \\
Munson & 0.178 & 0.256 & 0.256 & 0.244 & 0.316 \\
Alvis & 0.156 & 0.254 & 0.253 & 0.239 & 0.200 \\
\hline
\end{array}

For this data set, the normal-normal and beta-binomial estimates are almost identical. This shouldn't be a surprise - both the distribution of batting average talent and the variation around batting averages are roughly bell-shaped and symmetric, so the normal-normal and beta-binomial models are both flexible enough to take that shape. The normal-normal and beta-binomial estimators shrink the most, while the James-Stein shrinks a moderate amount. For this specific data set, the James-Stein estimator seems to hold a slight advantage - not by much, though.
- $\sum (\hat{\theta}^{NN}_i - \theta_i)^2 = 0.0218$
- $\sum (\hat{\theta}^{BB}_i - \theta_i)^2 = 0.0218$
- $\sum (\hat{\theta}^{JS}_i - \theta_i)^2 = 0.0215$
- $\sum (y_i - \theta_i)^2 = 0.0753$

Whatever method you use to shrink, estimates are produced that, when judged using the squared error loss function, are far superior to the raw batting averages.

This estimator relies on the assumption of normality for both the data and the underlying distribution of means - this means it will work well for batting statistics (which tend to be constructed as sums, weighted or otherwise, of assumed independent, identical events) but not as well for other statistics which don't naturally look "normal." Furthermore, if I have to estimate $\sigma^2$ with a binomial variance - why don't I just use a beta-binomial model? That doesn't depend on a large number of at-bats for normality of the distribution of the sample batting average. Overall, I think it will give nice results when used appropriately, but in many situations a different model will fit more naturally to the data.

## 16 July, 2015

### Bayes' Rule

(This post acts as sort of a prequel to my post on Bayesian inference)

I've talked enough about it, so I figured I would make an actual post on a probability rule that I've been using quite a bit - Bayes' rule.

## Purpose

I want to jump straight into a probability example first. Much like the infield fly rule or the offside rule in soccer, I think Bayes' rule makes much more sense when you understand what it's trying to do before learning the technical details.

Suppose a manager has exactly two pinch hitters available to him - let's call them Adam and José. Adam has a $0.350$ OBP and José has a $0.300$ OBP. The manager calls Adam 70% of the time and calls José 30% of the time. So without knowing who the manager will call, how do we calculate the probability that the pinch hitter will get on base?
Like so:

$P(\textrm{ On-Base }) = P(\textrm{ On-Base } | \textrm{ Adam })P(\textrm{ Adam })+P(\textrm{ On-Base } | \textrm{ José })P(\textrm{ José })$

The notation $P(\textrm{ On-Base }|\textrm{ Adam })$ means the probability of getting on base "given" that Adam was chosen to pinch hit - that is, if we knew that the manager selected Adam, there would be a $0.350$ probability of getting on base. As stated above, $P(\textrm{ Adam })$ - the probability the manager selects Adam - is $0.7$. Similarly, $P(\textrm{ On-Base }|\textrm{ José }) = 0.300$ and $P(\textrm{ José }) = 0.3$. Plugging numbers into the formula above,

$P(\textrm{ On-Base }) = (0.350)(0.7) + (0.300)(0.3) = 0.335$

The OBP is $0.350$ with probability $0.7$, and $0.300$ with probability $0.3$, so overall there is a $0.335$ probability that the pinch hitter will get on base.

Now, let's flip what you know around - suppose you know that the pinch hitter got on base, but not which pinch hitter it was. Which player do you think was picked, Adam or José? Logically it's more likely to be Adam - but how much more likely? Can you give probabilities?

## Bayes' Rule

This is the basic idea of Bayes' rule - it flips conditional probabilities around. Instead of $P(\textrm{ On-Base } | \textrm{ Adam })$, it allows you to find $P(\textrm{ Adam } | \textrm{ On-Base })$. For two events $A$ and $B$, the basic formulation is

$P(B | A) = \dfrac{P(A|B)P(B)}{P(A)}$

$P(A)$ on the bottom can be calculated as

$P(A) = P(A | B)P(B) + P(A | \textrm{ Not B }) P( \textrm{ Not B })$

Note that this is why I specified above that there were exactly two pinch hitters, so that saying "Not Adam" is the same thing as saying "José". If there are more than two pinch hitters available, the formula above can be expanded.
Applying this to the on-base probabilities, we have

$P(\textrm{ Adam }| \textrm{ On-Base }) = \dfrac{P(\textrm{ On-Base }|\textrm{ Adam })P(\textrm{ Adam })}{P(\textrm{ On-Base })}$

$=\dfrac{P(\textrm{ On-Base }|\textrm{ Adam })P(\textrm{ Adam })}{ P(\textrm{ On-Base } | \textrm{ Adam })P(\textrm{ Adam })+P(\textrm{ On-Base } | \textrm{ José })P(\textrm{ José })}$

Plugging in numbers, we get

$P(\textrm{ Adam }| \textrm{ On-Base }) = \dfrac{(0.350)(0.7)}{0.335} \approx 0.731$

And similarly,

$P(\textrm{ José }| \textrm{ On-Base }) = \dfrac{(0.300)(0.3)}{0.335} \approx 0.269$

So, given that the pinch hitter got on base, there was approximately a 73.1% chance it was Adam and approximately a 26.9% chance it was José.

## One More Example

Adam tests positive for PED use. Let's suppose 10% of all MLB players are using PEDs. The particular test Adam took has 95% specificity and sensitivity - that is, if the player is using PEDs it will correctly identify so 95% of the time, and if the player is not using PEDs it will correctly identify so 95% of the time. Given a positive test, what is the probability that Adam is actually using PEDs?

It's not 95%! We have to use Bayes' rule to figure it out. I'm going to use "+" to indicate a positive test (indicating the test says that the player is using drugs) and a "-" to indicate a negative test.

$P(\textrm{ PEDs } | +) = \dfrac{P( + | \textrm{ PEDs })P(\textrm{ PEDs })}{P( + | \textrm{ PEDs })P(\textrm{ PEDs })+P( + | \textrm{ Not PEDs })P(\textrm{ Not PEDs }) }$

Let's figure these out one by one. As stated in the problem description, if a player is using PEDs, the test will identify so 95% of the time. Hence, $P( + | \textrm{ PEDs }) = 0.95$. Furthermore, 10% of all players are using PEDs, so $P(\textrm{ PEDs }) = 0.1$ and $P(\textrm{ Not PEDs }) = 0.9$. Lastly, since $P( - | \textrm{ Not PEDs }) = 0.95$ as stated in the problem description, it must be that $P( + | \textrm{ Not PEDs }) = 0.05$.
Plugging all these numbers in, we get

$P(\textrm{ PEDs } | +) = \dfrac{(0.95)(0.1)}{(0.95)(0.1) + (0.05)(0.9)} \approx 0.678$

So given that Adam tests positive for PEDs, there's actually only about a 2/3 chance that he's using. It seems counter-intuitive given that the test is pretty good - 95% sensitivity and specificity - but since most players (90%) aren't using, there are bound to be a lot of false positives, and Adam has a very good argument if he gets suspended over this particular test.

Put another way - suppose you have 200 MLB players. 180 (90% of the total) are clean, and 20 (10% of the total) are using PEDs. Of the 180 that are clean, 171 (95% of the clean) test negative and 9 (5% of the clean) test positive. Of those that are using, 19 (95% of the PED users) test positive and 1 (5% of the PED users) tests negative. This gives 19 PED users testing positive and 9 clean players testing positive, so the probability of being a PED user given testing positive is $19/(9+19) \approx 0.678$.

(Note that I made all these numbers up. I'm sure that the tests MLB actually uses have higher specificity and sensitivity than 95%, and I have no idea what proportion of all MLB players are using PEDs)

## From Bayes' Rule to Inference

So how do we go from the rule to inference? Given some sort of model with parameter $\theta$, we can calculate $p(x | \theta)$ - the probability of seeing the data that you saw given a particular value of the parameter. You may recognize this as the likelihood from earlier posts. Bayesians use Bayes' rule to flip around what's inside the probability statement and calculate $p(\theta | x)$ - the probability of a particular value of the parameter given the data that you saw - by

$p(\theta | x) = \dfrac{p(x | \theta)p(\theta)}{\int p(x | \theta)p(\theta) d\theta}$

where $p(\theta)$ is the prior distribution chosen by the Bayesian and $p(\theta | x)$ is the posterior distribution that is calculated.
Inference about $\theta$ is then performed using the posterior. That's Bayesian inference in a nutshell - start with a model, calculate the probability of seeing the data $x$ given a parameter $\theta$, and then use Bayes' rule to flip that around to the probability of the parameter $\theta$ given that you saw the data $x$.

(Oh, and then do checking to make sure your model fits - but that's another post)

## 15 July, 2015

### 2015 Win Total Predictions (At All-Star Game)

These predictions are based on my own "secret sauce" method, which I definitely think can be improved. I set the nominal coverage at 95% (meaning the way I calculated it, the intervals should get it right 95% of the time), but based on testing the actual coverage might be around 93%. Intervals are inclusive.

\begin{array}{c c c}
Team & Lower & Upper \\
\hline
ATL & 66 & 86 \\
ARI & 69 & 90 \\
BAL & 71 & 92 \\
BOS & 66 & 86 \\
CHC & 75 & 96 \\
CHW & 64 & 84 \\
CIN & 64 & 84 \\
CLE & 68 & 89 \\
COL & 62 & 82 \\
DET & 71 & 91 \\
HOU & 78 & 97 \\
KCR & 84 & 105 \\
LAA & 76 & 97 \\
LAD & 81 & 101 \\
MIA & 63 & 83 \\
MIL & 61 & 82 \\
MIN & 76 & 96 \\
NYM & 73 & 94 \\
NYY & 76 & 97 \\
OAK & 69 & 88 \\
PHI & 45 & 64 \\
PIT & 84 & 104 \\
SDP & 64 & 83 \\
SEA & 65 & 85 \\
SFG & 74 & 94 \\
STL & 89 & 109 \\
TBR & 73 & 93 \\
TEX & 67 & 87 \\
TOR & 75 & 94 \\
WSN & 77 & 98 \\
\hline
\end{array}

Interesting features: my model thinks that the St. Louis Cardinals are the best team in baseball, and that the Phillies are the worst. I will add that the model gives the Phillies a true winning percentage of around 36% (roughly 58 games out of 162), but they've been both bad and unlucky, and so are likely to finish with fewer wins than that.
Also note that even at this point in the season, for all but four teams (the Cardinals, Pirates, Royals, and Phillies) the model still can't predict whether they will finish above 0.500 or not.

## 10 July, 2015

### A Beta-Binomial Derivation of the 37-37 Shrinkage Rule

A rule of thumb is that to estimate a team's "true" ability $\theta_i$, you should add 74 games of 0.500 ball - that is,

$\hat{\theta_i} = \dfrac{w_i + 37}{n_i + 74}$

where $\hat{\theta_i}$ is the estimate of team $i$'s true winning proportion, $w_i$ is the number of wins of team $i$, and $n_i$ is the number of games team $i$ has played so far. Notice that the number of added games stays the same no matter what $n_i$ is - if the team has played a full season ($n_i = 162$), shrink by 74 games of 0.500 ball. If the team has only played 10 games ($n_i = 10$), shrink by 74 games of 0.500 ball. In this post I'm going to derive a very similar result as the posterior expectation of a binomial model with a beta prior, and try to give a less mathematical explanation of why the rule works no matter what $n_i$ is. The code I used to generate the images in this post may be found on my github.

## The Beta-Binomial Model

First off, let's assume that the number of wins $w_i$ follows a binomial distribution with number of games played $n_i$ and true winning proportion $\theta_i$.

$w_i \sim Bin(n_i, \theta_i)$

Furthermore, let's assume that the winning proportions themselves follow a beta distribution. Traditionally the beta distribution has parameters $\alpha$ and $\beta$, but I'm going to use the parametrization $\mu = \alpha/(\alpha + \beta)$ and $M = \alpha + \beta$. This makes $\mu$ the mean of the $\theta_i$ and $M$ a control on the variation - how spread out the $\theta_i$ are. The reason I'm doing this is that we know what $\mu$ is - mathematically, we must have $\mu = 0.5$. Why? Because in a system like baseball, every win by one team represents a loss by another team.
The wins and losses cancel out, the scales remain balanced, and the average $\theta_i$ must be equal to 0.5.

$\theta_i \sim Beta(0.5, M)$

If we knew $M$, we could just apply Bayes' rule to obtain the posterior distribution of the $\theta_i$ - but there's no intuitive value for it like there is for $\mu$. Thankfully, we have a way to get $M$.

## Estimating M

Often, rather than working with the observed win totals $w_i$, people work with the observed win proportion $w_i/n_i$ (and in fact, we're going to use some data in that form in a bit). In a two-level model like this, we can calculate the variance of the observed win proportions as

$Var\left(\dfrac{w_i}{n_i}\right) = E\left[Var\left(\dfrac{w_i}{n_i} \biggr | \theta_i\right)\right] + Var\left(E\left[\dfrac{w_i}{n_i} \biggr | \theta_i\right]\right)$

The left-hand side - $Var\left(w_i/n_i\right)$ - is the variance of the observed win proportions. I'm going to call this the total variance.

The first part on the right - $E\left[Var\left(w_i/n_i \, | \, \theta_i\right)\right]$ - is the average amount of variance of a team's observed winning proportion around its true winning proportion $\theta_i$. I'm going to call this the within-team variance (this is what others have referred to as "luck"). This can be calculated as

$E\left[Var\left(\dfrac{w_i}{n_i} \biggr | \theta_i\right)\right] = E\left[\dfrac{\theta_i(1-\theta_i)}{n_i}\right] = \dfrac{1}{n_i}E[\theta_i(1-\theta_i)]$

$= \left(\dfrac{1}{n_i}\right)\left(0.5(1-0.5) \right)\left(\dfrac{M}{M+1}\right) = \dfrac{0.25M}{n_i(M+1)}$

I'm skipping a bit of gory math here - $E[\theta_i(1-\theta_i)]$ can be found by noting that multiplying the $Beta(0.5, M)$ density by $\theta_i(1-\theta_i)$ produces the kernel of another beta density.

The last part - $Var(E[w_i/n_i | \theta_i])$ - is the variance of the $\theta_i$ themselves. It represents the natural variance of true winning proportions among all teams.
I'm going to call this the between-team variance (this is what others have referred to as "talent"). This can be calculated as

$Var\left(E\left[\dfrac{w_i }{n_i} \biggr | \theta_i\right]\right) = Var(\theta_i) = \dfrac{0.5(1-0.5)}{M+1} = \dfrac{0.25}{M+1}$

Hence, the total variation in observed winning proportion is

$Var\left(\dfrac{w_i}{n_i}\right) = \dfrac{0.25M}{n_i(M+1)} +\dfrac{0.25}{M+1}$

Based on historical data, sports analyst Tom Tango suggests that the correct value of $Var(w_i/n_i)$ for teams that have played at least 160 games is $Var(w_i/n_i) = 0.07^2$. I'll trust him that this is accurate. Notice from the formula above that the $Var(w_i/n_i)$ value is linked to the number of games used to estimate it - it's within-team variance plus between-team variance, and the within-team variance shrinks as $n_i$ grows while the between-team variance stays constant. This is why it's important to use a point estimate based on observations with the same number of games - the $n_i$ is constant in the formula above, and it becomes a function solely of $M$. What we're going to do is assume that $n_i = 162$ for all the teams behind the $Var(w_i/n_i) = 0.07^2$ value. This isn't technically true, but it's true for most of them, and the difference in within-team variation between a team that has played 160 games and one that has played 163 is very small, so we can safely ignore it. Then using Tom Tango's value, this sets up the equation

$0.07^2 = \dfrac{0.25M}{162 (M+1)} +\dfrac{0.25}{M+1}$

Doing a bit of algebra yields the value of M

$M = \dfrac{0.25*162-0.07^2*162}{0.07^2*162-0.25} = 73.01618$

which is close enough that $M = 73$ can be used as the variance parameter for the distribution of the $\theta_i$. This is only one game smaller than the value of $74$ used in the rule of thumb, and the difference probably derives from different distributional choices used when calculating the between-team variance.
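The algebra here is easy to check numerically. Below is a short standalone sketch (plain Python rather than the R used elsewhere in this post; the function name is my own) that solves the variance equation for $M$ given a total variance of $0.07^2$ at 162 games:

```python
def solve_M(total_var, n):
    """Solve total_var = 0.25*M / (n*(M+1)) + 0.25/(M+1) for M.

    Multiplying both sides by n*(M+1) and collecting the M terms gives
    the closed form used above.
    """
    return (0.25 * n - total_var * n) / (total_var * n - 0.25)

M = solve_M(0.07 ** 2, 162)
print(round(M, 5))  # → 73.01618
```

Plugging the result back into the variance equation recovers $0.07^2$ exactly, which is a quick way to confirm the closed form.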
As a side note, the variance of observed win proportions in $n_i$ games is given by

$Var\left(\dfrac{w_i}{n_i}\right) = \dfrac{0.25(73)}{n_i(74)} +\dfrac{0.25}{74}$

which implies that at $n_i = M = 73$ games, the within-team variance (luck) is equal to the between-team variance (talent).

## Bayesian Estimator

Now that we have $M$, we can treat the $Beta(0.5, 73)$ distribution as the prior distribution for $\theta_i$ and use Bayes' rule to get the posterior distribution for the $\theta_i$

$\theta_i | w_i \sim Beta(w_i + 0.5*73, n_i - w_i + 0.5*73)$

(Here I'm using the traditional $\alpha$ and $\beta$ parametrization to define the beta distribution above)

And if we use $\hat{\theta_i} = E[\theta_i | w_i]$, that gives us the famous

$\hat{\theta_i} = \dfrac{w_i + 0.5*73}{w_i + 0.5*73 + n_i - w_i + 0.5*73} = \dfrac{w_i + 36.5}{n_i + 73}$

I want to add that this is not the only possible estimator that can be derived from the posterior - you could take the posterior mode rather than the mean, and calculate

$\hat{\theta_i} = \dfrac{w_i + 36.5 -1}{n_i + 73 - 2} = \dfrac{w_i + 35.5}{n_i + 71}$

That is, add 71 games of 0.500 ball to the team's record in order to shrink it, and it would still be a statistically justified estimator. (Since the beta posterior here is bell-shaped and symmetric, the mean and the mode remain very close - so these two estimates should closely coincide.)

You could also use the posterior to calculate a credible interval for $\theta_i$ by taking quantiles from the posterior distribution - see my previous post on credible intervals for doing this in the beta-binomial situation. For example, if you have a team that has won $w_i = 6$ out of $n_i = 10$ games (for an observed 0.60 winning proportion), a 95% interval estimate for $\theta_i$ is given as $(0.405, 0.618)$.
Similarly, if you have a team that has won $w_i = 96$ out of $n_i = 160$ games (again, for an observed 0.60 winning proportion), a 95% interval estimate for $\theta_i$ is given as $(0.505, 0.632)$. Above is the posterior distribution for $\theta_i$ for the 6-4 team. The solid vertical lines represent the boundaries of the 95% credible interval and the dashed line represents $\hat{\theta_i}$.

## Why Does this Happen?

And by that, I mean - why do you add the same approximately 74 "shrinkage" games, no matter what the actual number of games played is? As I write this post, the major league baseball season is currently underway. I'm going to ask you to estimate $\theta_i$, the true winning proportion of team $i$. That doesn't sound tough, right? First, you'll want to know what team I'm thinking of. Here's the thing, though: I'm not going to tell you which team I'm thinking of. So how in the world can you guess how good a team is without knowing anything about it? Use what you know about baseball! How good is the average team? The average is 0.500, right? So let's start there. Your guess for $\theta_i$ is 0.500. Okay, so now I want you to think of how much variation there is for $\theta_i$. What range do you think $\theta_i$ could possibly be in? I'm guessing most people would agree with me that most teams are 60 to 100 win teams over the course of the season. That sounds like a pretty reasonable estimate - and I'll add that there's a better chance of being near 81 wins than near 60 or 100. This corresponds to a winning proportion of between approximately 0.37 and 0.617. I'm not going to show the math, but this range can and should be adjusted a little bit based on historical data. When we calculate the correct range, it's the same as the range of observed win proportions you would expect from a 0.500 team that has played 74 (or 73, by my calculation) games.
Check it - the within-team standard deviation (luck) is $\sqrt{0.5(1-0.5)/74} = 0.058$, and so two standard deviations below and above 0.500 gives a range of (0.384, 0.616), or roughly between 62 and 100 wins over a full season. Okay, so now you've used your baseball knowledge to determine that without knowing anything about the team, you can guess how good it is by assuming it has gone 37-37. Now, let's get some information about the identity of team $i$. I'll start by telling you that the team went 0-2 in its first two games. Now try to estimate $\theta_i$. The raw point estimate is $\hat{\theta_i} = 0/2 = 0$. But you don't really believe that the team has a true winning proportion of 0, do you? That would mean it never wins a single game the entire season. No team has ever won zero games before, or even come close. And you just told me that most teams finish with between 60 and 100 wins! But you shouldn't throw away the 0-2 information either - that's information about the team as it is now, rather than about the hypothetical average team. What you want to do is mix your guess with your current information. Before I told you the name of the team, your guess represented a team that was 37-37. Now that you know the team is 0-2, just add that record to what you thought before. Your new estimate for the team's true winning percentage is $\hat{\theta_i} = (0+37)/(2+74) \approx 0.487$. You've shifted a little bit towards the observed record to reflect the new information, but not by much - two games is very little additional knowledge. So little, in fact, that you're better off leaning heavily on the 37-37 you figured before you even knew what the team was. Now how about if I told you the team played 100 games and went 60-40? Great. We do the same thing - mix our guess about the average team with the new information, by taking $\hat{\theta_i} = (60+37)/(100+74) \approx 0.557$.
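The mixing arithmetic above is simple enough to sketch in a few lines (plain Python; the function name and the extra 96-64 example are mine):

```python
def shrink(wins, games, add_wins=37.0, add_games=74.0):
    """Estimate a team's true winning proportion by mixing its record
    with add_games games of 0.500 ball (the 37-37 rule of thumb)."""
    return (wins + add_wins) / (games + add_games)

print(shrink(0, 2))     # 0-2 team:   37/76  ≈ 0.4868
print(shrink(60, 100))  # 60-40 team: 97/174 ≈ 0.5575
print(shrink(96, 160))  # 96-64 team: 133/234 ≈ 0.5684
```

Notice that a 0.500 team stays at exactly 0.500 under this rule no matter how many games it has played - the added 37-37 block never moves the estimate away from the league average, only toward it.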
As the number of games the team has played increases, the information you had about the hypothetical average team stays constant - but the information you have about the current team grows. So adding 37-37 to whatever the current record is means that the current information is slowly overtaking your guess - which is exactly what you want. If the number of games your hypothetical average team played were increasing, then the amount of information being mixed together from what you guessed and from what you observed would be increasing at the same rate - and so your current information would never overtake your guess! That's not what you want - so keeping 37-37 constant is the correct thing to do.

## 06 July, 2015

### Bayesian Credible Intervals for a Batting Average

(This post is a followup to my previous post on Bayesian inference)

I've posted a fair amount about confidence intervals for various quantities. All of the ones I've posted so far - central limit theorem based intervals, Wald theory intervals, and likelihood intervals - are based on a frequentist understanding of probability - that is to say, probability is defined as the limiting proportion of times an event (say, a hit) happens as the number of trials (say, at-bats) goes to infinity. The term "95% confidence" refers to the construction of the interval itself - that is, if we were to calculate a 95% confidence interval for the true batting average $\theta$ for each of our millions and millions of trials, then 95% of them would contain the true batting average $\theta$. Statisticians tend to avoid making probability statements about confidence intervals. The statement that "There is a 95% chance that $\theta$ is in the interval." is incorrect because $\theta$ is conceptualized as a fixed quantity. Furthermore, the statement "There is a 95% chance that the interval contains $\theta$."
is awkward because once an interval has been calculated, there's no randomness left - it either contains $\theta$, or it doesn't. This is why statisticians prefer to use confidence rather than probability to describe intervals. But what if you are working in a Bayesian framework? The end result of a Bayesian analysis is a distribution $p(\theta | x)$ that represents the distribution of belief in $\theta$ after the data has been accounted for - so it makes perfect sense to write, for example, $P(0.250 \le \theta \le 0.300)$. All the code used to generate the images in this article may be found on my github.

## Credible Intervals

Instead of confidence intervals, Bayesian statisticians calculate "credible" intervals - these are intervals from the distribution $p(\theta | x)$ that contain the desired amount of probability. Say, for example, that $P(0.250 \le \theta \le 0.300) = 0.95$ - then the interval $(0.250, 0.300)$ would be a 95% credible interval for $\theta$. The main issue with this method is that there is more than one way to get a 95% credible interval given $p(\theta | x)$ - technically, any interval $(L, U)$ with $P(L \le \theta \le U) = 0.95$ is a valid 95% credible interval. Statisticians have several ways to determine which one to use, but I'm going to show you one that can be easily done with most computer software.

## Baseball Example

In the previous post, I explained how to use the beta-binomial model to get a posterior distribution for a batting average. Let's take observer A, who for the batter with $15$ hits in $n = 50$ at-bats had a prior distribution of belief given by a beta distribution with parameters $\alpha = 1$ and $\beta = 1$, and a posterior distribution of belief given by a beta distribution with parameters $\alpha' = 16$ and $\beta' = 36$. To get a credible interval, we can take quantiles from the beta distribution.
A quantile is a value $Q_{p}$ for a distribution so that $P(X \le Q_{p}) = p$. To get a 95% credible interval, we can take $Q_{0.025}$ as the lower boundary of the interval and $Q_{0.975}$ as the upper boundary of the interval (since 97.5% - 2.5% = 95%), so that the interval contains the middle 95% of the probability. Since these do not have nice formulas to calculate by hand, it's easiest to use computer software to get them - any good statistical software should be able to give quantiles for common distributions. In the program $R$, the command to do this is

> qbeta(c(.025,.975),16,36)
[1] 0.1911040 0.4382887

So for observer A, a 95% credible interval for $\theta$ is given by $(0.191, 0.438)$. With quantiles, the posterior belief for observer A looks like

The area under the curve between the two vertical lines is 0.95 - and so the values of the vertical lines give the 95% credible interval. What about observer B? Observer B used a beta distribution as their prior with $\alpha = 53$ and $\beta = 147$, for a posterior distribution that is beta with $\alpha' = 68$ and $\beta' = 182$. Quantiles from observer B's posterior distribution are

> qbeta(c(.025,.975),68,182)
[1] 0.2187438 0.3287111

So observer B's 95% credible interval for $\theta$ is $(0.219, 0.329)$. Note that observer B's interval is much more realistic - as baseball fans, we know that a $\theta = 0.400$ batting average is very, very unlikely - so good prior information can lead to improved inference.
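If R isn't handy, the same equal-tailed intervals can be approximated with nothing but a standard library, by sampling from the beta posterior and reading off empirical quantiles. This is a Monte Carlo sketch in Python (the function name is my own; with this many draws the answers match qbeta to roughly three decimal places):

```python
import random

def beta_credible_interval(alpha, beta, level=0.95, draws=200_000, seed=0):
    """Approximate an equal-tailed credible interval for a Beta(alpha, beta)
    posterior by Monte Carlo: sample, sort, and read off the two quantiles."""
    rng = random.Random(seed)
    samples = sorted(rng.betavariate(alpha, beta) for _ in range(draws))
    tail = (1 - level) / 2
    lo = samples[int(tail * draws)]
    hi = samples[int((1 - tail) * draws) - 1]
    return lo, hi

print(beta_credible_interval(16, 36))   # ≈ (0.191, 0.438), observer A
print(beta_credible_interval(68, 182))  # ≈ (0.219, 0.329), observer B
```

Exact quantile functions (R's qbeta, or scipy's beta.ppf) are preferable in practice; the sampling version is only meant to show that a credible interval is nothing more than a statement about where the posterior probability mass sits.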
# [QGIS Commit] r11063 - docs/trunk/english_us/gis_introduction svn_qgis at osgeo.org svn_qgis at osgeo.org Tue Jul 14 05:04:28 EDT 2009 Author: dassau Date: 2009-07-14 05:04:28 -0400 (Tue, 14 Jul 2009) New Revision: 11063 Modified: docs/trunk/english_us/gis_introduction/Makefile docs/trunk/english_us/gis_introduction/attributedata.tex docs/trunk/english_us/gis_introduction/authors.tex docs/trunk/english_us/gis_introduction/crs.tex docs/trunk/english_us/gis_introduction/datacapture.tex docs/trunk/english_us/gis_introduction/gis_introduction.tex docs/trunk/english_us/gis_introduction/gisintro.tex docs/trunk/english_us/gis_introduction/mapproduction.tex docs/trunk/english_us/gis_introduction/rasteranalysis.tex docs/trunk/english_us/gis_introduction/rasterdata.tex docs/trunk/english_us/gis_introduction/topology.tex docs/trunk/english_us/gis_introduction/vectoranalysis.tex docs/trunk/english_us/gis_introduction/vectordata.tex Log: finished migration to latex Modified: docs/trunk/english_us/gis_introduction/Makefile =================================================================== --- docs/trunk/english_us/gis_introduction/Makefile 2009-07-13 20:58:51 UTC (rev 11062) +++ docs/trunk/english_us/gis_introduction/Makefile 2009-07-14 09:04:28 UTC (rev 11063) @@ -21,8 +21,8 @@ make pics latex $(FILE) #bibtex$(FILE) - bibtex $(FILE)1 - bibtex$(FILE)2 + #bibtex $(FILE)1 + #bibtex$(FILE)2 #now loop over Latex files, until stable: echo Rerun > $(FILE).log while grep Rerun$(FILE).log >/dev/null 2>&1 ; do latex $(FILE).tex ; done @@ -30,7 +30,7 @@ text: latex$(FILE) - bibtex $(FILE) + #bibtex$(FILE) #now loop over Latex files, until stable: echo Rerun > $(FILE).log while grep Rerun$(FILE).log >/dev/null 2>&1 ; do latex \$(FILE).tex ; done Modified: docs/trunk/english_us/gis_introduction/attributedata.tex =================================================================== --- docs/trunk/english_us/gis_introduction/attributedata.tex 2009-07-13 20:58:51 UTC (rev 11062) +++ 
docs/trunk/english_us/gis_introduction/attributedata.tex 2009-07-14 09:04:28 UTC (rev 11063) @@ -15,7 +15,7 @@ \hline \end{tabular} -\subsection{Overview}\label{subsec:overview} +\subsection{Overview} If every line on a map was the same colour, width, thickness, and had the same label, it would be very hard to make out what was going on. The map Modified: docs/trunk/english_us/gis_introduction/authors.tex =================================================================== --- docs/trunk/english_us/gis_introduction/authors.tex 2009-07-13 20:58:51 UTC (rev 11062) +++ docs/trunk/english_us/gis_introduction/authors.tex 2009-07-14 09:04:28 UTC (rev 11063) @@ -12,68 +12,65 @@ %\updatedisclaimer \begin{figure}[ht] -\centering +\begin{center} \begin{minipage}[h]{5cm}\includegraphics[width=4.7cm]{tim_sutton} \end{minipage} -\begin{minipage}[h]{11cm} +\begin{minipage}[h]{11.5cm} \textbf{Tim Sutton - Editor \& Lead Author} \\ Tim Sutton is a developer and project steering committee member of the Quantum GIS project. He is passionate about seeing GIS being Freely available to everyone. Tim is also a founding member of Linfiniti Consulting CC. - a small business set up with the goal of helping people to learn and use open source GIS software. \\ -\textbf{Web: http://linfiniti.com , Email: tim at linfiniti.com} +Web: http://linfiniti.com , Email: tim at linfiniti.com \end{minipage} -\end{figure} -\begin{figure}[ht] -\centering +\vspace{0.1cm} + \begin{minipage}[h]{5cm}\includegraphics[width=4.7cm]{otto_dassau} \end{minipage} -\begin{minipage}[h]{11cm} +\begin{minipage}[h]{11.5cm} \textbf{Otto Dassau - Assistant Author} \\ Otto Dassau is the documentation maintainer and project steering committee member of the Quantum GIS project. Otto has considerable experience in using and training people to use Free and Open Source GIS software. 
\\ -\textbf{Web: http://www.gbd-consult.de , Email: dassau at gbd-consult.de} +Web: http://gbd-consult.de , Email: dassau at gbd-consult.de \end{minipage} -\end{figure} -\begin{figure}[ht] -\centering +\vspace{0.1cm} + \begin{minipage}[h]{5cm}\includegraphics[width=4.7cm]{marcelle_sutton} \end{minipage} -\begin{minipage}[h]{11cm} +\begin{minipage}[h]{11.5cm} \textbf{Marcelle Sutton - Project Manager} \\ Marcelle Sutton studied english and drama and is a qualified teacher. Marcelle is also a founding member of Linfiniti Consulting CC. - a small business set up with the goal of helping people to learn and use open source GIS software. \\ -\textbf{Web: http://linfiniti.com , Email: marcelle at linfiniti.com} +Web: http://linfiniti.com , Email: marcelle at linfiniti.com \end{minipage} -\end{figure} -\begin{figure}[ht] -\centering +\vspace{0.1cm} + \begin{minipage}[h]{5cm}\includegraphics[width=4.7cm]{lerato_nsibande} \end{minipage} -\begin{minipage}[h]{11cm} +\begin{minipage}[h]{11.5cm} \textbf{Lerato Nsibande} \\ Lerato is a grade 12 scholar living in Pretoria. Lerato learns Geography at school and has enjoyed learning GIS with us! \end{minipage} -\end{figure} -\begin{figure}[ht] -\centering +\vspace{0.1cm} + \begin{minipage}[h]{5cm}\includegraphics[width=4.7cm]{sibongile_mthombeni} \end{minipage} -\begin{minipage}[h]{11cm} +\begin{minipage}[h]{11.5cm} \textbf{Sibongile Mthombeni} \\ Sibongile lives near Johannesburg with her young daughter. Her goal is to continue her studies and become a nurse. Working on this project was the first time Sibongile used a computer. 
\end{minipage} +\end{center} \end{figure} Modified: docs/trunk/english_us/gis_introduction/crs.tex =================================================================== --- docs/trunk/english_us/gis_introduction/crs.tex 2009-07-13 20:58:51 UTC (rev 11062) +++ docs/trunk/english_us/gis_introduction/crs.tex 2009-07-14 09:04:28 UTC (rev 11063) @@ -15,7 +15,7 @@ \hline \end{tabular} -\subsection{Overview}\label{subsec:overview} +\subsection{Overview} \textbf{Map projections} try to portray the surface of the earth or a portion of the earth on a flat piece of paper or computer screen. A \textbf{coordinate reference Modified: docs/trunk/english_us/gis_introduction/datacapture.tex =================================================================== --- docs/trunk/english_us/gis_introduction/datacapture.tex 2009-07-13 20:58:51 UTC (rev 11062) +++ docs/trunk/english_us/gis_introduction/datacapture.tex 2009-07-14 09:04:28 UTC (rev 11063) @@ -14,7 +14,7 @@ \hline \end{tabular} -\subsection{Overview}\label{subsec:overview} +\subsection{Overview} In the previous two topics we looked at vector data. 
We saw that there are two key concepts to vector data, namely: \textbf{geometry} and Modified: docs/trunk/english_us/gis_introduction/gis_introduction.tex =================================================================== --- docs/trunk/english_us/gis_introduction/gis_introduction.tex 2009-07-13 20:58:51 UTC (rev 11062) +++ docs/trunk/english_us/gis_introduction/gis_introduction.tex 2009-07-14 09:04:28 UTC (rev 11063) @@ -22,7 +22,7 @@ \include{rasteranalysis} \include{authors} \include{appendices/fdl} -\include{literature} +%\include{literature} \begin{htmlonly} \input{qgis_style.tex} Modified: docs/trunk/english_us/gis_introduction/gisintro.tex =================================================================== --- docs/trunk/english_us/gis_introduction/gisintro.tex 2009-07-13 20:58:51 UTC (rev 11062) +++ docs/trunk/english_us/gis_introduction/gisintro.tex 2009-07-14 09:04:28 UTC (rev 11063) @@ -20,7 +20,7 @@ \hline \end{tabular} -\subsection{Overview}\label{subsec:overview} +\subsection{Overview} Just as we use a word processor to write documents and deal with words on a computer, we can use a \textbf{GIS application} to deal with \textbf{spatial Modified: docs/trunk/english_us/gis_introduction/mapproduction.tex =================================================================== --- docs/trunk/english_us/gis_introduction/mapproduction.tex 2009-07-13 20:58:51 UTC (rev 11062) +++ docs/trunk/english_us/gis_introduction/mapproduction.tex 2009-07-14 09:04:28 UTC (rev 11063) @@ -15,7 +15,7 @@ \hline \end{tabular} -\subsection{Overview}\label{subsec:overview} +\subsection{Overview} \textbf{Map production} is the process of arranging map elements on a sheet of paper in a way that, even without many words, the average person can understand Modified: docs/trunk/english_us/gis_introduction/rasteranalysis.tex =================================================================== --- docs/trunk/english_us/gis_introduction/rasteranalysis.tex 2009-07-13 20:58:51 UTC (rev 11062) 
+++ docs/trunk/english_us/gis_introduction/rasteranalysis.tex 2009-07-14 09:04:28 UTC (rev 11063) @@ -15,6 +15,243 @@ \hline \end{tabular} -\subsection{Overview}\label{subsec:overview} +\subsection{Overview} +\textbf{Spatial analysis} is the process of manipulating spatial information +to extract +new information and meaning from the original data. Usually spatial analysis +is carried out with a Geographic Information System (GIS). A GIS usually +provides spatial analysis tools for calculating feature statistics and +carrying out geoprocessing activities as data interpolation. +In hydrology, users will likely emphasize the importance of terrain analysis +and hydrological modelling (modelling the movement of water over and in the +earth). In wildlife management, users are interested in analytical functions +dealing with wildlife point locations and their relationship to the +environment. Each user will have different things they are interested in +depending on the kind of work they do. +\subsection{Spatial interpolation in detail} + +\begin{figure}[ht] + \begin{center} + \caption{Temperature map interpolated from South African Weather Stations.} +\label{fig:temperature}\smallskip + \includegraphics[clip=true, width=0.5\textwidth]{temperatures_20090415} +\end{center} +\end{figure} + +\textbf{Spatial interpolation} is the process of using points with known values to +estimate values at other unknown points. For example, to make a precipitation +(rainfall) map for your country, you will not find enough evenly spread +weather stations to cover the entire region. Spatial interpolation can +estimate the temperatures at locations without recorded data by using known +temperature readings at nearby weather stations (see Figure +\ref{fig:temperature}). This type of interpolated surface is often called a +\textbf{statistical surface}. 
+Elevation data, precipitation, snow accumulation, water table and population
+density are other types of data that can be computed using interpolation.
+
+Because of high cost and limited resources, data collection is usually
+conducted only in a limited number of selected point locations. In GIS,
+spatial interpolation of these points can be applied to create a raster
+surface with estimates made for all raster cells.
+
+In order to generate a continuous map, for example, a digital elevation map
+from elevation points measured with a GPS device, a suitable interpolation
+method has to be used to optimally estimate the values at those locations
+where no samples or measurements were taken. The results of the interpolation
+analysis can then be used for analyses that cover the whole area and for
+modelling.
+
+There are many interpolation methods. In this introduction we will present
+two widely used interpolation methods called \textbf{Inverse Distance
+Weighting} (IDW) and \textbf{Triangulated Irregular Networks (TIN)}. If you
+want to learn about other interpolation methods, please refer to the further
+reading section at the end of this topic.
+
+\subsection{Inverse Distance Weighted (IDW)}
+
+In the IDW interpolation method, the sample points are weighted during
+interpolation such that the influence of one point relative to another
+declines with distance from the unknown point you want to create (see
+Figure \ref{fig:idw}).
+
+\begin{figure}[ht]
+ \begin{center}
+ \caption{Inverse Distance Weighted interpolation based on weighted sample
+point distance (left). Interpolated IDW surface from elevation vector points
+(right). Image Source: Mitas, L., Mitasova, H. (1999).}
+\label{fig:idw}\smallskip
+ \includegraphics[clip=true, width=0.6\textwidth]{interpolation_IDW}
+\end{center}
+\end{figure}
+
+Weighting is assigned to sample points through the use of a weighting
+coefficient that controls how the weighting influence will drop off as the
+distance from new point increases.
The greater the weighting coefficient, the +less the effect points will have if they are far from the unknown point +during the interpolation process. As the coefficient increases, the value of +the unknown point approaches the value of the nearest observational point. + +It is important to notice that the IDW interpolation method also has some +disadvantages: The quality of the interpolation result can decrease, if the +distribution of sample data points is uneven. Furthermore, maximum and +minimum values in the interpolated surface can only occur at sample data +points. This often results in small peaks and pits around the sample data +points as shown in Figure \ref{fig:idw}. + +In GIS, interpolation results are usually shown as a 2 dimensional raster +layer. In Figure \ref{fig:qgisidw}, you can see a typical IDW interpolation +result, based on elevation sample points collected in the field with a GPS +device. + +\begin{figure}[ht] + \begin{center} + \caption{IDW interpolation result from irregularly collected elevation +sample points (shown as black crosses).} +\label{fig:qgisidw}\smallskip + \includegraphics[clip=true, width=0.6\textwidth]{qgis_interpolation_IDW} +\end{center} +\end{figure} + +\subsection{Triangulated Irregular Network (TIN)} + +TIN interpolation is another popular tool in GIS. A common TIN algorithm is +called \textbf{Delaunay} triangulation. It tries to create a surface formed by +triangles of nearest neighbour points. To do this, circumcircles around +selected sample points are created and their intersections are connected to a +network of non overlapping and as compact as possible triangles (see Figure +\ref{fig:tin}). + +%% Minipage to put both figures on one page +\begin{figure}[htpb] + \begin{minipage}[h]{\textwidth} + \begin{center} + \caption{Delaunay triangulation with circumcircles around the red sample +data. The resulting interpolated TIN surface created from elevation vector +points is shown on the right. 
Image Source: Mitas, L., Mitasova, H. (1999).} + \label{fig:tin}\smallskip + \includegraphics[clip=true, width=0.8\textwidth]{interpolation_TIN} + \end{center} + \end{minipage} \\ + \vspace{1cm} + \begin{minipage}[h]{\textwidth} + \begin{center} + \caption{Delaunay TIN interpolation result from irregularly collected +rainfall sample points (blue circles).} + \label{fig:qgistin}\smallskip + \includegraphics[clip=true, width=0.8\textwidth]{qgis_interpolation_TIN} + \end{center} + \end{minipage} +\end{figure} + +The main disadvantage of the TIN interpolation is that the surfaces are not +smooth and may give a jagged appearance. This is caused by discontinuous +slopes at the triangle edges and sample data points. In addition, +triangulation is generally not suitable for extrapolation beyond the area +with collected sample data points (see Figure \ref{fig:qgistin}). + +\subsection{Common problems / things to be aware of} + +It is important to remember that there is no single interpolation method that +can be applied to all situations. Some are more exact and useful than others +but take longer to calculate. They all have advantages and disadvantages. In +practice, selection of a particular interpolation method should depend upon +the sample data, the type of surfaces to be generated and tolerance of +estimation errors. Generally, a three step procedure is recommended: + +\begin{enumerate} +\item Evaluate the sample data. Do this to get an idea on how data are +distributed in the area, as this may provide hints on which interpolation +method to use. +\item Apply an interpolation method which is most suitable to both the sample +data and the study objectives. When you are in doubt, try several methods, if +available. +\item Compare the results and find the best result and the most suitable method. +This may look like a time consuming process at the beginning. 
However, as you
+gain experience and knowledge of different interpolation methods, the time
+required for generating the most suitable surface will be greatly reduced.
+\end{enumerate}
+
+\subsection{Other interpolation methods}
+
+Although we concentrated on IDW and TIN interpolation methods in this
+worksheet, there are more spatial interpolation methods provided in GIS, such
+as Regularized Splines with Tension (RST), Kriging or Trend Surface interpolation.
+
+\subsection{What have we learned?}
+
+Let's wrap up what we covered in this worksheet:
+
+\begin{itemize}
+\item \textbf{Interpolation} uses vector points with known values to estimate
+values at unknown locations to create a raster surface covering an entire area.
+\item The interpolation result is typically a \textbf{raster} layer.
+\item It is important to \textbf{find a suitable interpolation method} to
+optimally estimate values for unknown locations.
+\item \textbf{IDW interpolation} gives weights to sample points, such that
+the influence of one point on another declines with distance from the new
+point being estimated.
+\item \textbf{TIN interpolation} uses sample points to create a surface
+formed by triangles based on nearest neighbour point information.
+\end{itemize}
+
+\subsection{Now you try!}
+
+Here are some ideas for you to try with your learners:
+
+\begin{itemize}
+\item The Department of Agriculture plans to cultivate new land in your area but
+apart from the character of the soils, they want to know if the rainfall is
+sufficient for a good harvest. All the information they have available comes
+from a few weather stations around the area. Create an interpolated surface
+with your learners that shows which areas are likely to receive the highest
+rainfall.
+\item The tourist office wants to publish information about the weather conditions
+in January and February.
They have temperature, rainfall and wind strength +data and ask you to interpolate their data to estimate places where tourists +will probably have optimal weather conditions with mild temperatures, no +rainfall and little wind strength. Can you identify the areas in your region +that meet these criteria? +\end{itemize} + + +If you don't have a computer available, you can use a toposheet and a ruler +to estimate elevation values between contour lines or rainfall values between +fictional weather stations. For example, if rainfall at weather station A is +50 mm per month and at weather station B it is 90 mm, you can estimate, that +the rainfall at half the distance between weather station A and B is 70 mm. + + +\textbf{Books}: + +\begin{itemize} +\item Chang, Kang-Tsung (2006): Introduction to Geographic Information Systems. 3rd +Edition. McGraw Hill. (ISBN 0070658986) +\item DeMers, Michael N. (2005): Fundamentals of Geographic Information Systems. +3rd Edition. Wiley. (ISBN 9814126195) +\item Mitas, L., Mitasova, H. (1999): Spatial Interpolation. In: P.Longley, M.F. +Goodchild, D.J. Maguire, D.W.Rhind (Eds.), Geographical Information Systems: +Principles, Techniques, Management and Applications, Wiley. +\end{itemize} + +\textbf{Websites}: + +\url{http://en.wikipedia.org/wiki/Interpolation} \\ +\url{http://en.wikipedia.org/wiki/Delaunay\_triangulation} \\ +\url{http://www.agt.bme.hu/public_e/funcint/funcint.html} + +The QGIS User Guide also has more detailed information on interpolation tools +provided in QGIS. + +\subsection{What's next?} + +This is the final worksheet in this series. We encourage you to explore QGIS +and use the QGIS manual to discover all the other things you can +do with GIS software! 
+ + Modified: docs/trunk/english_us/gis_introduction/rasterdata.tex =================================================================== --- docs/trunk/english_us/gis_introduction/rasterdata.tex 2009-07-13 20:58:51 UTC (rev 11062) +++ docs/trunk/english_us/gis_introduction/rasterdata.tex 2009-07-14 09:04:28 UTC (rev 11063) @@ -14,7 +14,7 @@ \hline \end{tabular} -\subsection{Overview}\label{subsec:overview} +\subsection{Overview} In the previous topics we have taken a closer look at vector data. While vector features use geometry (points, polylines and polygons) to represent Modified: docs/trunk/english_us/gis_introduction/topology.tex =================================================================== --- docs/trunk/english_us/gis_introduction/topology.tex 2009-07-13 20:58:51 UTC (rev 11062) +++ docs/trunk/english_us/gis_introduction/topology.tex 2009-07-14 09:04:28 UTC (rev 11063) @@ -15,7 +15,7 @@ \hline \end{tabular} -\subsection{Overview}\label{subsec:overview} +\subsection{Overview} \textbf{Topology} expresses the spatial relationships between connecting or Modified: docs/trunk/english_us/gis_introduction/vectoranalysis.tex =================================================================== --- docs/trunk/english_us/gis_introduction/vectoranalysis.tex 2009-07-13 20:58:51 UTC (rev 11062) +++ docs/trunk/english_us/gis_introduction/vectoranalysis.tex 2009-07-14 09:04:28 UTC (rev 11063) @@ -15,9 +15,275 @@ \hline \end{tabular} -\subsection{Overview}\label{subsec:overview} +\subsection{Overview} +\textbf{Spatial analysis} uses spatial information to extract new and additional +meaning from GIS data. Usually spatial analysis is carried out using a GIS +Application. GIS Applications normally have spatial analysis tools for +feature statistics (e.g. how many vertices make up this polyline?) or +geoprocessing such as feature buffering. The types of spatial analysis that +are used vary according to subject areas. 
People working in water management +and research (hydrology) will most likely be interested in analysing terrain +and modelling water as it moves across it. In wildlife management users are +interested in analytical functions that deal with wildlife point locations +and their relationship to the environment. In this topic we will discuss +buffering as an example of a useful spatial analysis that can be carried out +with vector data. +\subsection{Buffering in detail} +\textbf{Buffering} usually creates two areas: one area that is +\textbf{within} a specified distance to selected real world features and the +other area that is \textbf{beyond}. The area that is within the specified +distance is called the \textbf{buffer zone}. +\begin{figure}[ht] + \begin{center} + \caption{The border between the United States of America and Mexico is +separated by a buffer zone. (Photo taken by SGT Jim Greenhill 2006).} +\label{fig:mexborder}\smallskip + \includegraphics[clip=true, width=0.6\textwidth]{border_usa_mexico} +\end{center} +\end{figure} +A \textbf{buffer zone} is any area that serves the purpose of keeping real world +features distant from one another. Buffer zones are often set up to protect +the environment, protect residential and commercial zones from industrial +accidents or natural disasters, or to prevent violence. Common types of +buffer zones may be greenbelts between residential and +commercial areas, border zones between countries (see Figure +\ref{fig:mexborder}), noise protection zones around airports, or pollution +protection zones along rivers. + +\newpage + +In a GIS Application, \textbf{buffer zones} are always represented as +\textbf{vector polygons} enclosing other polygon, line or point features (see +Figures \ref{fig:buffer}a-c). + +\begin{figure}[ht] +\centering +\caption{Buffering vector points, polylines and polygons}\label{fig:buffer} + \subfigure[A buffer zone around vector points.] 
+ {\label{subfig:poibuffer}\includegraphics[clip=true, width=0.3\textwidth]{pointbuffer}}\goodgap + \subfigure[A buffer zone around vector polylines.] + {\label{subfig:linebuffer}\includegraphics[clip=true, width=0.3\textwidth]{polylinebuffer}}\goodgap + \subfigure[A buffer zone around vector polygons] + {\label{subfig:polybuffer}\includegraphics[clip=true, width=0.3\textwidth]{polygonebuffer}} +\end{figure} + +\subsection{Variations in buffering} + +There are several variations in buffering. The \textbf{buffer distance} or +buffer size \textbf{can vary} according to numerical values provided in the +vector layer attribute +table for each feature. The numerical values have to be defined in map units +according to the Coordinate Reference System (CRS) used with the data. For +example, the width of a buffer zone along the banks of a river can vary +depending on the intensity of the adjacent land use. For intensive +cultivation the buffer distance may be bigger than for organic farming (see +Figure \ref{fig:riverbuffer} and Table \ref{tab:buffer}). + +\begin{figure}[ht] + \begin{center} + \caption{Buffering rivers with different buffer distances.} +\label{fig:riverbuffer}\smallskip + \includegraphics[clip=true, width=0.5\textwidth]{variable_buffer} +\end{center} +\end{figure} + +%% Note: xdvi does not show white text on black background but it works! 
+\begin{table}[ht]
+\centering
+\caption{Attribute table with different buffer distances to rivers based on
+adjacent land use.}
+  \label{tab:buffer}
+  \begin{tabular}{|p{5cm}|p{6cm}|p{5cm}|}
+  \hline
+  \rowcolor{black}
+  \textcolor{white}{\textbf{River}} &
+  \textcolor{white}{\textbf{Adjacent land use}} &
+  \textcolor{white}{\textbf{Buffer distance (meters)}} \\
+  \hline Breede river & Intensive vegetable cultivation & 100 \\
+  \hline Komati & Intensive cotton cultivation & 150 \\
+  \hline Oranje & Organic farming & 50 \\
+  \hline Telle river & Organic farming & 50 \\
+\hline
+\end{tabular}
+\end{table}
+
+Buffers around polyline features, such as rivers or roads, do not have to be
+on both sides of the lines. They can be on either the left side or the right
+side of the line feature. In these cases the left or right side is determined
+by the direction from the starting point to the end point of the line during
+digitising.
+
+\subsection{Multiple buffer zones}
+
+A feature can also have more than one buffer zone. A nuclear power plant may
+be buffered with distances of 10, 15, 25 and 30 km, thus forming multiple
+rings around the plant as part of an evacuation plan (see Figure
+\ref{fig:powerplant}).
+
+\begin{figure}[ht]
+  \begin{center}
+  \caption{Buffering a point feature with distances of 10, 15, 25 and 30 km.}
+\label{fig:powerplant}\smallskip
+  \includegraphics[clip=true, width=0.4\textwidth]{multiple_buffer}
+\end{center}
+\end{figure}
+
+\subsection{Buffering with intact or dissolved boundaries}
+
+Buffer zones often have dissolved boundaries so that there are no overlapping
+areas between the buffer zones. In some cases though, it may also be useful
+for boundaries of buffer zones to remain intact, so that each buffer zone is
+a separate polygon and you can identify the overlapping areas (see Figure
+\ref{fig:buffertypes}).
+
+\begin{figure}[ht]
+  \begin{center}
+  \caption{Buffer zones with dissolved (left) and with intact boundaries
+(right) showing overlapping areas.}
+\label{fig:buffertypes}\smallskip
+  \includegraphics[clip=true, width=0.9\textwidth]{dissolved_intact_buffer}
+\end{center}
+\end{figure}
+
+\subsection{Buffering outward and inward}
+
+Buffer zones around polygon features are usually extended outward from a
+polygon boundary but it is also possible to create a buffer zone inward from
+a polygon boundary. Say, for example, the Department of Tourism wants to plan
+a new road around Robben Island and environmental laws require that the road
+is at least 200 meters inward from the coast line. They could use an inward
+buffer to find the 200m line inland and then plan their road not to go beyond
+that line.
+
+\subsection{Common problems / things to be aware of}
+
+Most GIS Applications offer buffer creation as an analysis tool, but the
+options for creating buffers can vary. For example, not all GIS Applications
+allow you to buffer on either the left side or the right side of a line
+feature, to dissolve the boundaries of buffer zones or to buffer inward from
+a polygon boundary.
+
+A buffer distance always has to be defined as a whole number
+(\textbf{integer}) or a decimal number (\textbf{floating point value}). This
+value is defined in \textbf{map units} (meters, feet, decimal degrees)
+according to the Coordinate Reference System (CRS) of the vector layer.
+
+\subsection{More spatial analysis tools}
+
+Buffering is an important and often used spatial analysis tool but there
+are many others that can be used in a GIS and explored by the user.
+
+\begin{figure}[ht]
+  \begin{center}
+  \caption{Spatial overlay with two input vector layers (a\_input =
+rectangle, b\_input = circle).
The resulting vector layer is displayed
+in green.}
+\label{fig:overlay}\smallskip
+  \includegraphics[clip=true, width=\textwidth]{overlay}
+\end{center}
+\end{figure}
+
+\textbf{Spatial overlay} is a process that allows you to identify the relationships
+between two polygon features that share all or part of the same area. The
+output vector layer is a combination of the input features' information (see
+Figure \ref{fig:overlay}). Typical spatial overlay examples are:
+
+\begin{itemize}
+\item \textbf{Intersection}: The output layer contains all areas where both
+layers overlap (intersect).
+\item \textbf{Union}: The output layer contains all areas of the two input
+layers combined.
+\item \textbf{Symmetrical difference}: The output layer contains all areas of
+the input layers except those areas where the two layers overlap (intersect).
+\item \textbf{Difference}: The output layer contains all areas of the first
+input layer that do not overlap (intersect) with the second input layer.
+\end{itemize}
+
+\subsection{What have we learned?}
+
+Let's wrap up what we covered in this worksheet:
+
+\begin{itemize}
+\item \textbf{Buffer zones} describe areas around real world features.
+\item Buffer zones are always \textbf{vector polygons}.
+\item A feature can have \textbf{multiple} buffer zones.
+\item The size of a buffer zone is defined by a \textbf{buffer distance}.
+\item A buffer distance has to be an \textbf{integer} or \textbf{floating
+point} value.
+\item A buffer distance can be different for each feature within a vector layer.
+\item Polygons can be buffered \textbf{inward} or \textbf{outward} from the
+polygon boundary.
+\item Buffer zones can be created with \textbf{intact} or \textbf{dissolved}
+boundaries.
+\item Besides buffering, a GIS usually provides a variety of vector analysis tools.
+\end{itemize}
+
+\subsection{Now you try!}
+
+\begin{itemize}
+\item Because of dramatic traffic increase, the town planners want to widen the
+main road and add an extra lane. Create a buffer around the road to find
+properties that fall within the buffer zone (see Figure \ref{fig:lanebuffer}).
+\item For controlling protesting groups, the police want to establish a neutral
+zone to keep protesters at least 100 meters from a building. Create a buffer
+around a building and colour it so that event planners can see where the
+buffer area is.
+\item A truck factory plans to expand. The siting criteria stipulate that a
+potential site must be within 1 km of a heavy-duty road. Create a buffer
+along a main road so that you can see where potential sites are.
+\item Imagine that the city wants to introduce a law stipulating that no bottle
+stores may be within a 1000 meter buffer zone of a school or a church. Create
+a 1 km buffer around your school and then go and see if there would be any
+bottle stores too close to your school.
+\end{itemize}
+
+\begin{figure}[ht]
+  \begin{center}
+  \caption{Buffer zone (green) around a roads map (brown). You can see which
+houses fall within the buffer zone, so now you could contact the owner and
+talk to him about the situation.}
+\label{fig:lanebuffer}\smallskip
+\end{center}
+\end{figure}
+
+
+If you don't have a computer available, you can use a toposheet and a compass
+to create buffer zones around buildings. Make small pencil marks at equal
+distance all along your feature using the compass, then connect the marks
+using a ruler!
+
+
+\textbf{Books}:
+
+\begin{itemize}
+\item Galati, Stephen R. (2006): Geographic Information Systems Demystified. Artech
+House Inc. (ISBN 158053533X)
+\item Chang, Kang-Tsung (2006): Introduction to Geographic Information Systems. 3rd
+Edition. McGraw Hill. (ISBN 0070658986)
+\item DeMers, Michael N. (2005): Fundamentals of Geographic Information Systems.
+3rd Edition. Wiley.
(ISBN 9814126195) +\end{itemize} + +\textbf{Websites}: + +\url{http://www.manifold.net/doc/transform\_border_buffers.htm} + +The QGIS User Guide also has more detailed information on analysing vector +data in QGIS. + +\subsection{What's next?} + +In the section that follows we will take a closer look at +\textbf{interpolation} as an example of spatial analysis you can do with +raster data. + + Modified: docs/trunk/english_us/gis_introduction/vectordata.tex =================================================================== --- docs/trunk/english_us/gis_introduction/vectordata.tex 2009-07-13 20:58:51 UTC (rev 11062) +++ docs/trunk/english_us/gis_introduction/vectordata.tex 2009-07-14 09:04:28 UTC (rev 11063) @@ -15,7 +15,7 @@ \hline \end{tabular} -\subsection{Overview}\label{subsec:overview} +\subsection{Overview} \textbf{Vector data} provide a way to represent real world \textbf{features} within the GIS environment. A feature is anything you can see on the
# Algebraic Number Theory - Lemma for Fermat's Equation with $n=3$

I have to prove the following; in my notes it is a lemma before Fermat's Equation, case $n=3$. I was able to prove everything up to the last two points:

Let $\zeta=e^{2\pi i/3}$. Consider $A:=\mathbb{Z}[\zeta]=\{a+\zeta b \mid a,b\in \mathbb{Z}\}$. Then

1. $\zeta$ is a root of the irreducible polynomial $X^2+X+1$.
2. The field of fractions of $A$ is $\mathbb{Q}(\sqrt{-3})$.
3. The norm map $N:\mathbb{Q}(\sqrt{-3})\rightarrow \mathbb{Q}$, given by $a+\sqrt{-3}b \mapsto a^2+3b^2$, is multiplicative and sends every element of $A$ to an element of $\mathbb{Z}$. In particular, $u\in A$ is a unit iff $N(u)\in\{-1,1\}$. Moreover, if $N(a)$ is $\pm$ a prime number, then $a$ is irreducible.
4. The unit group $A^\times$ is cyclic of order $6$. ($A^\times=\{\pm 1, \pm\zeta, \pm\zeta^2\}$)
5. The ring $A$ is Euclidean with respect to the norm $N$ and hence a unique factorisation domain.
6. The element $\lambda=1-\zeta$ is a prime element in $A$ and $3=-\zeta^2\lambda^2$.
7. The quotient $A/(\lambda)$ is isomorphic to $\mathbb{F}_3$.
8. The image of the set $A^3=\{a^3\mid a\in A\}$ under $\pi: A \rightarrow A/(\lambda^4)=A/(9)$ is equal to $\{0+(\lambda^4),\pm 1+(\lambda^4),\pm \lambda^3+(\lambda^4)\}$.

I was not able to prove 7 and 8. For 7 I do not even know which isomorphism to look for; I guess it should be an isomorphism of rings? I hope somebody knows what to do or at least has some hints!

-

Here is a direct bash to show (7). We have $$\begin{eqnarray*} \Bbb{Z}[\zeta]/(1 - \zeta) &\cong& \left[\frac{\Bbb{Z}[x]}{(x^2 + x + 1)}\right]\bigg/ \left[\frac{\left(1-x, x^2 + x + 1\right)}{(x^2 + x + 1)}\right] \\ &\cong& \Bbb{Z}[x]\Big/\left(x^2 + x + 1, 1-x\right) \hspace{1in} (\text{Third Isomorphism Theorem}) \\ &\cong& \Bbb{Z}[x]/(1-x,3) \\ &\cong& \Bbb{F}_3.\end{eqnarray*}$$ If at any point you don't understand how I got those isomorphisms, please tell me and I will elaborate.
Edit: For the sake of the OP, let me elaborate on how I got from the third-last line to the second-last line. I claim that in fact we have an equality of ideals $$(1-x,3) = (x^2 + x + 1,1-x).$$ How do we show this equality? We just need to show that both generators on the left are in the right ideal, and both generators on the right are in the left one. Now by long division from school I get $$(x^2 + x + 1) = (1-x)(-x-2) + 3.$$ This shows that $x^2 +x +1 \in (1-x,3)$ and $3 \in (x^2 + x+1,1-x)$, and so we are done!

-

Your first line of isomorphisms is particularly confusing to me. It seems to need a few more parentheses at the very least. – JSchlather Feb 23 '13 at 14:03

Thank you! A really nice proof! Could you just explain how you obtained the second-last line? Thanks! – Mathoman Feb 23 '13 at 16:16

@Mathoman It is a quotient by an ideal with two generators, so you can "double quotient" it, thus obtaining the result. – awllower Feb 23 '13 at 16:36

I still do not understand how you passed from $\Bbb{Z}[x]/\left(x^2 + x + 1,1-x\right)$ to $\Bbb{Z}[x]/(1-x,3)$; the last line is OK for me. – Mathoman Feb 23 '13 at 16:49

$x\equiv 1$ in $\mathbb Z[x]/(1-x)$, and hence... – awllower Feb 23 '13 at 17:06

For 7), note that 6) tells you that $3 \in (\lambda)$, and since by 6) $\lambda$ is prime, $A \ne (\lambda)$. Moreover $a + \zeta b = a + (1-\lambda) b \equiv a + b \pmod{\lambda}$. So if you want an explicit isomorphism, it is $a + \zeta b + (\lambda) \mapsto a + b \pmod{3}$.

-

For (7), notice first that, since $\lambda\mid 3$, the characteristic of the quotient ring $A/(\lambda)$ is $3$. Now $N(\lambda)=3$, so by (3), $\lambda$ is irreducible. This implies that the quotient ring is in fact a field. Again since $N(\lambda)=3$, we find that the inertia degree of $\lambda$ over $\mathbb Q$ is $1$, i.e. $A/(\lambda)\cong \mathbb F_3$.
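The ideal equality $(x^2+x+1,\,1-x)=(1-x,\,3)$ used in the answers above rests entirely on the long-division identity. As a quick machine check of that identity (a sketch in Python, assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')

# The division identity used above: x^2 + x + 1 = (1 - x)(-x - 2) + 3
lhs = x**2 + x + 1
rhs = (1 - x) * (-x - 2) + 3

assert sp.expand(lhs - rhs) == 0  # the identity holds
```

Since the remainder is $3$, each generator of either ideal lies in the other ideal, which gives the claimed equality.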
For (8), since $(a+b\zeta)^3=a^3+b^3-3ab^2+\zeta(3a^2b-3ab^2)$, we find that $(a+b\zeta)^3$ is sent to $0$ if and only if both $9$ divides $a^3+b^3-3ab^2$ and $3$ divides $ab(a-b)$. The latter occurs if and only if $3$ divides $a$, $b$, or $a\equiv b\pmod 3$. In each of the cases, we conclude that $3$ divides both $a$ and $b$. So, to find the image of $A^3$ under $\pi$, one just checks each of the $9$ cases, noting that $\zeta\equiv1\pmod{\lambda}$ and $\lambda^2=-3\zeta$. Hope this helps.

-

Notice also that the conjugate of $\lambda$ is $-\zeta^2\lambda$, so (6) told you that the norm of $\lambda$ is $3$. And I assume here that the OP already knew something about elementary algebraic number theory, such as inertia degrees and the like. – awllower Feb 23 '13 at 13:31

I understand what you wrote for (8), but I do not know how to find the image under $\pi$. I do not understand how your computations should help to find the image. – Mathoman Feb 23 '13 at 16:23

We try to find the kernel of the map, then find the number of elements in the image. Finally we do some more calculations to exhaust all elements and find the image. Is this way too ambiguous? – awllower Feb 23 '13 at 16:25

As I know our teacher, there should be a shorter way to prove this; if I do not find it, I will take this method. Thanks. – Mathoman Feb 23 '13 at 16:49

It is a field isomorphism. We are trying to compute $F = \mathbb Z[\sqrt{-3}]/(1-\zeta)$ and $(1-\zeta)$ is a prime ideal. In $F$ we have $3=0$ because $\zeta = \frac{-1 + \sqrt{-3}}{2}$ and the norm is $(1-\frac{-1 + \sqrt{-3}}{2})(1-\frac{-1 - \sqrt{-3}}{2}) = 3$. We also have $\zeta = 1$ in this field (when you take a quotient $F/(p)$, think of multiples of $p$ as being equal to zero), so $F=\mathbb F_3$.

-

You meant $\mathbb Z[\sqrt{-3}]$? – user18119 Feb 24 '13 at 17:43

@QiL'8, yes, thank you – user58512 Feb 24 '13 at 17:46
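Claim (8) can also be confirmed by brute force. Writing elements as $a+b\zeta$ and reducing products with $\zeta^2=-1-\zeta$, one can cube every residue pair modulo $9$ (recall $(\lambda^4)=(9)$ since $\lambda^4=9\zeta^2$ and $\zeta^2$ is a unit) and compare with the five claimed classes; here $\lambda^3=-3-6\zeta$. A Python sketch:

```python
def mul(p, q):
    # (a + b*z)(c + d*z) with z^2 = -1 - z:
    # = (ac - bd) + (ad + bc - bd) z
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c - b * d)

def cube_mod9(a, b):
    x = mul((a, b), (a, b))
    x = mul(x, (a, b))
    return (x[0] % 9, x[1] % 9)

cubes = {cube_mod9(a, b) for a in range(9) for b in range(9)}

# Expected classes 0, +-1, +-lambda^3 with lambda^3 = -3 - 6*zeta,
# written as coordinate pairs (a, b) mod 9
expected = {(0, 0), (1, 0), (8, 0), (6, 3), (3, 6)}
assert cubes == expected
```

The check passes, which is exactly the statement of (8) after identifying $A/(\lambda^4)$ with $A/(9)$.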
# Parabola Equation

Thank you for helping me :)

May 11, 2021

#1

The vertex is (4, 0), so the vertex form is y = a(x-4)^2 + 0. Sub in the given point (6, 2) to calculate 'a':

2 = a(6-4)^2
a = 1/2

Your equation is then y = 1/2 (x-4)^2 + 0.

May 11, 2021

#2

General equation of a parabola: $y=a(x-h)^2 + k$, where $(h,k)$ is the vertex of the parabola.

We have $y=a(x-4)^2 + 0.$

We can't use the vertex $(4,0)$ to plug in for $(x,y)$ because it will yield $0=0+0.$ Let's use the other point given (namely $(6,2)$):

$2=a(6-4)^2 + 0$

We have $a=\frac{2}{4}=\frac{1}{2}.$ Thus the equation is $y=\frac{1}{2}(x-4)^2.$

May 11, 2021
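As a quick numeric sanity check of the answers above, the curve $y=\tfrac12(x-4)^2$ should pass through both the vertex and the given point (a short Python sketch):

```python
def f(x):
    # Vertex form y = a(x - h)^2 + k with a = 1/2 and vertex (h, k) = (4, 0)
    return 0.5 * (x - 4) ** 2

assert f(4) == 0.0  # the vertex lies on the curve
assert f(6) == 2.0  # the given point (6, 2) lies on the curve
```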
## Calculus (3rd Edition)

A circle in the xy-plane of radius $9$ centered at the origin. We put $$x=9\cos t, \quad y=9\sin t;$$ hence we get $$x^2+y^2=81\cos^2t+81\sin^2t=81,$$ which is a circle in the xy-plane of radius $9$ centered at the origin.
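The identity $x^2+y^2=81$ can be spot-checked numerically at a few parameter values (a Python sketch):

```python
import math

def point(t):
    # Parametrization x = 9 cos t, y = 9 sin t
    return 9 * math.cos(t), 9 * math.sin(t)

# x^2 + y^2 = 81 for every t, so the curve is a circle of radius 9
for t in [0.0, 0.7, 2.5, 4.1]:
    x, y = point(t)
    assert math.isclose(x**2 + y**2, 81.0)
```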
S. Y. Novak Statistics , 2014, DOI: 10.3150/13-BEJ512 Abstract: The paper suggests a simple method of deriving minimax lower bounds to the accuracy of statistical inference on heavy tails. A well-known result by Hall and Welsh (Ann. Statist. 12 (1984) 1079-1084) states that if $\hat{\alpha}_n$ is an estimator of the tail index $\alpha_P$ and $\{z_n\}$ is a sequence of positive numbers such that $\sup_{P\in{\mathcal{D}}_r}\mathbb{P}(|\hat{\alpha}_n-\alpha_P|\ge z_n)\to0$, where ${\mathcal{D}}_r$ is a certain class of heavy-tailed distributions, then $z_n\gg n^{-r}$. The paper presents a non-asymptotic lower bound to the probabilities $\mathbb{P}(|\hat{\alpha}_n-\alpha_P|\ge z_n)$. We also establish non-uniform lower bounds to the accuracy of tail constant and extreme quantiles estimation. The results reveal that normalising sequences of robust estimators should depend in a specific way on the tail index and the tail constant.

Statistics , 2013, DOI: 10.1214/12-AOS1062 Abstract: Researchers are often interested in drawing inferences regarding the order between two experimental groups on the basis of multivariate response data. Since standard multivariate methods are designed for two-sided alternatives, they may not be ideal for testing for order between two groups. In this article we introduce the notion of the linear stochastic order and investigate its properties. Statistical theory and methodology are developed to both estimate the direction which best separates two arbitrary ordered distributions and to test for order between the two groups. The new methodology generalizes Roy's classical largest root test to the nonparametric setting and is applicable to random vectors with discrete and/or continuous components.
The proposed methodology is illustrated using data obtained from a 90-day pre-chronic rodent cancer bioassay study conducted by the National Toxicology Program (NTP). PLOS ONE , 2013, DOI: 10.1371/journal.pone.0058369 Abstract: This article provides a fully Bayesian approach for modeling of single-dose and complete pharmacokinetic data in a population pharmacokinetic (PK) model. To overcome the impact of outliers and the difficulty of computation, a generalized linear model is chosen with the hypothesis that the errors follow a multivariate Student t distribution which is a heavy-tailed distribution. The aim of this study is to investigate and implement the performance of the multivariate t distribution to analyze population pharmacokinetic data. Bayesian predictive inferences and the Metropolis-Hastings algorithm schemes are used to process the intractable posterior integration. The precision and accuracy of the proposed model are illustrated by the simulating data and a real example of theophylline data. Mathematics , 2009, Abstract: A complete and user-friendly directory of tails of Archimedean copulas is presented which can be used in the selection and construction of appropriate models with desired properties. The results are synthesized in the form of a decision tree: Given the values of some readily computable characteristics of the Archimedean generator, the upper and lower tails of the copula are classified into one of three classes each, one corresponding to asymptotic dependence and the other two to asymptotic independence. For a long list of single-parameter families, the relevant tail quantities are computed so that the corresponding classes in the decision tree can easily be determined. In addition, new models with tailor-made upper and lower tails can be constructed via a number of transformation methods. The frequently occurring category of asymptotic independence turns out to conceal a surprisingly rich variety of tail dependence structures. 
Mathematics , 2015, Abstract: Branching random walks on a multidimensional lattice with heavy tails and a constant branching rate are considered. It is shown that under these conditions (heavy tails and constant rate), the front propagates exponentially fast, but the particles inside of the front are distributed very non-uniformly. The particles exhibit intermittent behavior in a large part of the region behind the front (i.e., the particles are concentrated only in very sparse spots there). The zone of non-intermittency (where particles are distributed relatively uniformly) extends with a power rate. This rate is found.

Gilles Zumbach Quantitative Finance , 2009, Abstract: The covariance matrix is formulated in the framework of a linear multivariate ARCH process with long memory, where the natural cross product structure of the covariance is generalized by adding two linear terms with their respective parameter. The residuals of the linear ARCH process are computed using historical data and the (inverse square root of the) covariance matrix. Simple measures of quality assessing the independence and unit magnitude of the residual distributions are proposed. The salient properties of the computed residuals are studied for three data sets of size 54, 55 and 330. Both new terms introduced in the covariance help in producing uncorrelated residuals, but the residual magnitudes are very different from unity. The large sizes of the inferred residuals are due to the limited information that can be extracted from the empirical data when the number of time series is large, and denotes a fundamental limitation to the inference that can be achieved.

Statistics , 2015, Abstract: The Marcinkiewicz Strong Law, $\displaystyle\lim_{n\to\infty}\frac{1}{n^{\frac1p}}\sum_{k=1}^n (D_{k}- D)=0$ a.s.
with $p\in(1,2)$, is studied for outer products $D_k=X_k\overline{X}_k^T$, where $\{X_k\},\{\overline{X}_k\}$ are both two-sided (multivariate) linear processes ( with coefficient matrices $(C_l), (\overline{C}_l)$ and i.i.d.\ zero-mean innovations $\{\Xi\}$, $\{\overline{\Xi}\}$). Matrix sequences $C_l$ and $\overline{C}_l$ can decay slowly enough (as $|l|\to\infty$) that $\{X_k,\overline{X}_k\}$ have long-range dependence while $\{D_k\}$ can have heavy tails. In particular, the heavy-tail and long-range-dependence phenomena for $\{D_k\}$ are handled simultaneously and a new decoupling property is proved that shows the convergence rate is determined by the worst of the heavy-tails or the long-range dependence, but not the combination. The main result is applied to obtain Marcinkiewicz Strong Law of Large Numbers for stochastic approximation, non-linear functions forms and autocovariances. David D. Hanagal Journal of Reliability and Statistical Studies , 2009, Abstract: Block (1975) extended bivariate exponential distributions (BVEDs) of Freund (1961)and Proschan and Sullo (1974) to multivariate case and called them as Generalized Freund-Weinman's multivariate exponential distributions (MVEDs). In this paper, we obtain MLEs of theparameters and large sample test for testing independence and symmetry of k components in thegeneralized Freund-Weinman's MVEDs. Statistics , 2010, Abstract: In this paper we consider sparse and identifiable linear latent variable (factor) and linear Bayesian network models for parsimonious analysis of multivariate data. We propose a computationally efficient method for joint parameter and model inference, and model comparison. It consists of a fully Bayesian hierarchy for sparse models using slab and spike priors (two-component delta-function and continuous mixtures), non-Gaussian latent factors and a stochastic search over the ordering of the variables. 
The framework, which we call SLIM (Sparse Linear Identifiable Multivariate modeling), is validated and bench-marked on artificial and real biological data sets. SLIM is closest in spirit to LiNGAM (Shimizu et al., 2006), but differs substantially in inference, Bayesian network structure learning and model comparison. Experimentally, SLIM performs equally well or better than LiNGAM with comparable computational complexity. We attribute this mainly to the stochastic search strategy used, and to parsimony (sparsity and identifiability), which is an explicit part of the model. We propose two extensions to the basic i.i.d. linear framework: non-linear dependence on observed variables, called SNIM (Sparse Non-linear Identifiable Multivariate modeling) and allowing for correlations between latent variables, called CSLIM (Correlated SLIM), for the temporal and/or spatial data. The source code and scripts are available from http://cogsys.imm.dtu.dk/slim/.

Mathematics , 2008, Abstract: Multivariate normal mixtures provide a flexible model for high-dimensional data. They are widely used in statistical genetics, statistical finance, and other disciplines. Due to the unboundedness of the likelihood function, classical likelihood-based methods, which may have nice practical properties, are inconsistent. In this paper, we recommend a penalized likelihood method for estimating the mixing distribution. We show that the maximum penalized likelihood estimator is strongly consistent when the number of components has a known upper bound. We also explore a convenient EM-algorithm for computing the maximum penalized likelihood estimator. Extensive simulations are conducted to explore the effectiveness and the practical limitations of both the new method and the ratified maximum likelihood estimators. Guidelines are provided based on the simulation results.
Question

(a) A bat uses ultrasound to find its way among trees. If this bat can detect echoes 1.00 ms apart, what minimum distance between objects can it detect? (b) Could this distance explain the difficulty that bats have finding an open door when they accidentally get into a house?

1. $0.343 \textrm{ m}$
2. Since doors are wider than 34 cm, this does not explain why bats have a difficult time finding the door to exit a house.
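Part (a) is a one-line computation: distance = (speed of sound) × (time between echoes). A Python sketch, assuming v = 343 m/s for air at about 20 °C, which is the value the stated answer implies:

```python
v_sound = 343.0  # m/s, assumed speed of sound in air at about 20 degrees C
dt = 1.00e-3     # s, smallest echo separation the bat can resolve

d_min = v_sound * dt  # minimum detectable distance, in meters
assert abs(d_min - 0.343) < 1e-9  # matches the stated answer of 0.343 m
```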
# Bivariate Model Example library(BGPhazard) library(dplyr) #> #> Attaching package: 'dplyr' #> The following objects are masked from 'package:stats': #> #> filter, lag #> The following objects are masked from 'package:base': #> #> intersect, setdiff, setequal, union library(ggplot2) We will use the built-in dataset KIDNEY to show how the bivariate model functions work. All the functions for the bivariate model start with the letters BSB, which stand for Bayesian Semiparametric Bivariate. KIDNEY #> # A tibble: 38 x 6 #> id sex t1 t2 delta1 delta2 #> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> #> 1 1 1 8 16 1 1 #> 2 2 0 23 13 1 0 #> 3 3 1 22 28 1 1 #> 4 4 0 447 318 1 1 #> 5 5 1 30 12 1 1 #> 6 6 0 24 245 1 1 #> 7 7 1 7 9 1 1 #> 8 8 0 511 30 1 1 #> 9 9 0 53 196 1 1 #> 10 10 1 15 154 1 1 #> # … with 28 more rows ## Initial setup First, we use the BSBInit function to create the necessary data structure that we have to feed the Gibbs Sampler. We can skim the data structure with the summary and print methods. bsb_init <- BSBInit( KIDNEY, alpha = 0.001, beta = 0.001, c = 1000, part_len = 10, seed = 42 ) summary(bsb_init) #> #> Individuals: 38 #> Time partition intervals: 57 #> Censored t1: 6 #> Censored t2: 12 #> Predictors: TRUE sex Our data consists of 38 individuals with two failure times each. For the first failure time t1 we have six censored observations, while for the second failure time we have twelve. The model will use sex as a predictor variable. ## Gibbs Sampler To obtain the posterior samples, we use the function BSBHaz. We run 100 iterations with a burn-in period of 10. The number of simulations is low in order to reduce the complexity of building this vignette. In practice, you should see how many iterations the model needs to reach convergence. 
samples <- BSBHaz(
  bsb_init,
  iter = 100,
  burn_in = 10,
  gamma_d = 0.6,
  theta_d = 0.3,
  seed = 42
)
print(samples)
#> 
#> Samples: 90
#> Individuals: 38
#> Time partition intervals: 57
#> Predictors: TRUE

The print method shows that we only kept the last 90 iterations as posterior simulations.

## Summaries

### Tables

We can get posterior sample summaries with the function BSBSumm. This function returns the posterior mean and a 0.95 probability interval for all the model parameters. Additionally, it returns the acceptance rate for variables sampled using the Metropolis-Hastings algorithm.

BSBSumm(samples, "omega1")
#> Individual Mean Prob. Low 95% Prob. High 95% Acceptance Rate
#> 1 1 3.5161998 2.93559241 3.9604323 0.04494382
#> 2 2 1.1065573 0.18571302 1.6881673 0.06741573
#> 3 3 0.7416358 0.36037618 1.5592198 0.15730337
#> 4 4 1.2766251 0.47604867 2.3175307 0.03370787
#> 5 5 0.6756267 0.38536523 1.3612876 0.11235955
#> 6 6 0.5794627 0.13703276 0.7149068 0.08988764
#> 7 7 0.3970680 0.14758059 0.7535655 0.10112360
#> 8 8 1.4426608 1.34200170 1.5006374 0.02247191
#> 9 9 0.8038096 0.46468569 1.4004952 0.10112360
#> 10 10 2.7630778 2.05615796 3.6564398 0.06741573
#> 11 11 0.8593926 0.40157655 3.5427683 0.12359551
#> 12 12 2.8803886 2.89006674 2.8900667 0.01123596
#> 13 13 0.7382331 0.44092970 0.9362285 0.06741573
#> 14 14 1.2081150 0.86413885 1.4607704 0.07865169
#> 15 15 1.7902202 1.29562801 2.2319309 0.06741573
#> 16 16 1.3627014 0.82697995 1.9902757 0.04494382
#> 17 17 0.5523012 0.33161142 1.3102429 0.03370787
#> 18 18 0.5630820 0.49259116 0.7187072 0.04494382
#> 19 19 1.7544980 0.17037680 2.8154057 0.08988764
#> 20 20 2.0120744 1.97106409 2.0230490 0.01123596
#> 21 21 8.2350556 1.85218433 12.9039642 0.11235955
#> 22 22 0.6949245 0.48424296 0.8675818 0.06741573
#> 23 23 0.2214919 0.08392286 0.6664909 0.15730337
#> 24 24 0.2812495 0.08070932 1.2778070 0.13483146
#> 25 25 1.2771252 0.67825416 2.5472605 0.11235955
#> 26 26 2.8058518 1.83041665 3.3339332 0.06741573
#>
27 27 0.8531273 0.47196104 1.7233703 0.06741573 #> 28 28 0.8204746 0.52322421 1.4459134 0.08988764 #> 29 29 0.9169544 0.30734051 1.4592335 0.10112360 #> 30 30 0.5382379 0.22762565 0.7643319 0.07865169 #> 31 31 0.8629524 0.09194932 2.5330043 0.08988764 #> 32 32 0.7209464 0.19416043 1.3089953 0.17977528 #> 33 33 2.1644123 1.65823576 2.8291761 0.04494382 #> 34 34 1.3729098 0.29452080 2.4280549 0.08988764 #> 35 35 1.7693823 1.09815920 2.5212324 0.10112360 #> 36 36 1.6086777 0.78312506 1.8237267 0.05617978 #> 37 37 0.9144461 0.43560334 1.5250703 0.11235955 #> 38 38 0.7551478 0.54353895 1.1552375 0.05617978 BSBSumm(samples, "lambda1") #> Interval Mean Prob. Low 95% Prob. High 95% #> 1 1 0.0025857030 1.095838e-03 0.004529922 #> 2 2 0.0028201306 1.072372e-03 0.006209417 #> 3 3 0.0028701368 1.028741e-03 0.004577725 #> 4 4 0.0014738311 4.456613e-04 0.002929883 #> 5 5 0.0010384908 1.513226e-04 0.002427628 #> 6 6 0.0015676796 2.653196e-04 0.003885577 #> 7 7 0.0016694291 2.945030e-04 0.003660394 #> 8 8 0.0009113779 1.018708e-04 0.002452031 #> 9 9 0.0009840101 1.035397e-04 0.002962389 #> 10 10 0.0014811821 9.997731e-05 0.003422372 #> 11 11 0.0010404547 4.245591e-05 0.002902399 #> 12 12 0.0015906226 7.036417e-05 0.004392124 #> 13 13 0.0021560652 8.016327e-05 0.006407150 #> 14 14 0.0019912459 1.563375e-05 0.004534501 #> 15 15 0.0018077036 3.339335e-05 0.004162587 #> 16 16 0.0022137172 5.657934e-05 0.006073974 #> 17 17 0.0013511934 1.000000e-05 0.003463942 #> 18 18 0.0013227631 1.000000e-05 0.003636501 #> 19 19 0.0017753832 2.399019e-05 0.004081210 #> 20 20 0.0013563878 1.000000e-05 0.003769219 #> 21 21 0.0010545479 1.000000e-05 0.003959654 #> 22 22 0.0013524100 1.000000e-05 0.003525046 #> 23 23 0.0013050529 1.000000e-05 0.004016113 #> 24 24 0.0013235438 1.000000e-05 0.004210845 #> 25 25 0.0011104413 1.000000e-05 0.003055708 #> 26 26 0.0007253316 1.000000e-05 0.002132804 #> 27 27 0.0009401048 1.000000e-05 0.002904711 #> 28 28 0.0010058183 1.000000e-05 0.002763355 #> 29 29 
0.0012358063 1.000000e-05 0.004230714 #> 30 30 0.0018967682 1.828473e-05 0.004760684 #> 31 31 0.0018014437 5.402194e-05 0.004791687 #> 32 32 0.0011479481 8.625434e-05 0.003700070 #> 33 33 0.0011435303 2.343359e-04 0.002767294 #> 34 34 0.0012261302 7.771174e-05 0.003636052 #> 35 35 0.0013919237 1.270542e-04 0.004347896 #> 36 36 0.0012904054 1.361895e-04 0.003190328 #> 37 37 0.0015285924 1.837256e-04 0.004401775 #> 38 38 0.0015326806 1.507929e-04 0.004137058 #> 39 39 0.0019105030 5.312515e-04 0.004687678 #> 40 40 0.0026009009 3.822327e-04 0.006309135 #> 41 41 0.0033056801 9.513998e-04 0.007796441 #> 42 42 0.0039823749 1.055664e-03 0.008110874 #> 43 43 0.0042108617 1.282347e-03 0.007878363 #> 44 44 0.0045044175 1.930756e-03 0.007491707 #> 45 45 0.0052750300 2.019230e-03 0.009657346 #> 46 46 0.0049449366 1.368544e-03 0.009301134 #> 47 47 0.0045744088 2.421738e-04 0.011400391 #> 48 48 0.0047951245 7.088428e-04 0.011686684 #> 49 49 0.0053951413 1.780167e-03 0.011833267 #> 50 50 0.0070323069 3.014840e-03 0.011168967 #> 51 51 0.0088950441 2.929981e-03 0.015553042 #> 52 52 0.0109052593 4.808901e-03 0.018417408 #> 53 53 0.0121661018 4.616973e-03 0.026259576 #> 54 54 0.0145331873 3.989288e-03 0.029947030 #> 55 55 0.0154093520 4.827066e-03 0.034733410 #> 56 56 0.0139422058 2.577616e-03 0.030471030 #> 57 57 0.0124603843 3.500266e-03 0.024895142 It is important to notice that lambda1 and lambda2 are the estimated hazard rates for the baseline hazards $$h_0$$. They do not include the effect of predictor variables. The same applies for the survival function estimates s1 and s2. ### Plots We can get two summary plots: estimated hazard rates and estimated survival functions. Baseline hazards BSBPlotSumm(samples, "lambda1") BSBPlotSumm(samples, "lambda2") Survival functions BSBPlotSumm(samples, "s1") BSBPlotSumm(samples, "s2") You can also get diagnostic plots for the simulated variables. Choose the type of plot with the argument type. 
BSBPlotDiag(samples, "omega1", type = "traceplot")

BSBPlotDiag(samples, "omega1", type = "ergodic_means")
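For readers who want to see what a summary like the one `BSBSumm` prints involves, here is a rough sketch of computing a posterior mean, a central 0.95 probability interval, and a Metropolis–Hastings acceptance rate from a vector of retained draws. This is an illustration in Python, not the package's internal code; the function name `posterior_summary` is made up for this example.

```python
import math

def posterior_summary(draws, prob=0.95):
    """Posterior mean, central credible interval and MH acceptance rate
    from a list of retained (post burn-in) draws of one parameter."""
    s = sorted(draws)
    n = len(s)
    mean = sum(s) / n
    # central interval from order statistics
    lo = s[int(math.floor((1 - prob) / 2 * (n - 1)))]
    hi = s[int(math.ceil((1 + prob) / 2 * (n - 1)))]
    # heuristic acceptance rate: fraction of iterations where the chain moved
    moves = sum(1 for a, b in zip(draws, draws[1:]) if a != b)
    acc = moves / (n - 1)
    return mean, (lo, hi), acc
```

Applied to each column of a samples matrix, this reproduces the shape of the tables above: one mean, one interval, and (for Metropolis–Hastings variables) one acceptance rate per parameter.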
# Math Help - checking

1. ## checking

Instructions are: In a right triangle, find the length of the side not given. The sides that are given are b = 1, c = sqrt(5).

This is what I have done:

a^2 + b^2 = c^2
a^2 + 1^2 = (sqrt 5)^2
a^2 + 1 = 5
a^2 + 1 - 1 = 5 - 1
a^2 = 4
a = 2

Have I got it?

2. Yes.
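For anyone following along, the same check can be done numerically in a couple of lines (this is an illustration, not part of the original thread):

```python
import math

b, c = 1, math.sqrt(5)
# a^2 + b^2 = c^2  =>  a = sqrt(c^2 - b^2)
a = math.sqrt(c**2 - b**2)  # approximately 2
```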
# Explanation of definition of George Wilson's adelic Grassmannian How is George Wilson's adelic Grassmannian from e.g. the paper https://link.springer.com/article/10.1007%2Fs002220050237 related to the adeles or (especially) the affine Grassmannian (a.k.a. the loop Grassmannian)? Is there a more algebraic definition of the adelic Grassmannian than the one presented in the above paper of G. Wilson? Xinwen Zhu has fantastic notes on all sorts of affine Grassmannians from the point of view of algebraic geometry: see here. (You can take your base field to be $$\mathbf{C}$$ everywhere, and some of the ind-schemes and non-representable sheaves and things can probably actually be represented in a more concrete way by some infinite-dimensional complex-analytic spaces). His section 4.3 describes a relationship between "adelic" affine Grassmannians and the adele ring of the corresponding function field (here, this would be the adele ring of the function field $$\mathbf{C}(z)$$, i.e. the restricted product of $$\mathbf{C}((z-\lambda))$$ as $$\lambda$$ ranges through $$\mathbf{C} \cup \{\infty\}$$). I think Wilson's $$\mathrm{Gr}_\lambda$$'s are loosely analogous to the loop affine Grassmannian $$\mathrm{Gr}_{\mathrm{GL}_1} = \mathbf{C}((z-\lambda))^\times/\mathbf{C}[[z-\lambda]]^\times$$ (note that this latter space, however, is a disjoint union of non-reduced points). The $$\mathrm{Gr}_\lambda$$'s parametrize $$\mathbf{C}$$-subspaces of $$\mathbf{C}(z)$$ which, for some $$k$$, sit between $$(z-\lambda)^k \mathbf{C}[z]$$ and $$(z-\lambda)^{-k}\mathbf{C}[z]$$, with the condition that these inclusions have codimension $$k$$. If you drop this codimension condition and require the subspace to be a $$\mathbf{C}[z]$$-module, you'd get $$\mathrm{Gr}_{\mathrm{GL}_1}$$. I don't really know how to think about Wilson's definition, or why it's useful for things - it seems related to a more analytic notion of affine Grassmannian. 
I think the relationship with the usual "loop" affine Grassmannian is mostly an analogy, and the two things arise in rather different contexts, but I'm not an expert here. This becomes "adelic" when you allow $$\lambda$$ to vary, and consider a space parametrizing finite sets $$\{\lambda_1, \ldots, \lambda_n\}$$ inside $$\mathbf{C}$$. Wilson's $$\mathrm{Gr}^{\mathrm{Ad}}$$ is analogous to Zhu's $$\mathrm{Gr}_{\mathrm{Ran}, \mathrm{GL}_1}$$. The latter can be thought of as parametrizing finite subsets $$\{\lambda_1, \ldots, \lambda_n\}$$ of $$\mathbf{C}$$, together with $$\mathbf{C}[z]$$-modules inside of $$\mathbf{C}(z)$$ which sit between $$(z-\lambda_1)^{-k} \cdots (z-\lambda_n)^{-k} \mathbf{C}[z]$$ and $$(z-\lambda_1)^k \cdots (z-\lambda_n)^k\mathbf{C}[z]$$ for some $$k$$. The points of this space are related to $$\mathrm{GL}_1$$ of the adeles of $$\mathbf{C}(z)$$ by "Weil's uniformization theorem": see this answer.

One answer is provided in this paper I wrote with Tom Nevins (inspired especially by work of Berest–Wilson). We show Wilson's Grassmannian is precisely analogous to the Beilinson–Drinfeld Grassmannian, which is the "adèlic" form of the affine Grassmannian (as explained e.g. in Zhu's excellent article cited by dorebell). Namely, it can be identified with the moduli space of "D-line-bundles" (rank 1 projective D-modules) on a curve (usually $$P^1$$) equipped with a trivialization (identification with D) outside of finitely many points. Wilson's linear-algebraic picture is obtained from this D-module picture by applying a Riemann–Hilbert correspondence. It forms a factorization ind-scheme; roughly speaking, these play the role of groups to the "Lie algebras" which are vertex algebras (in this case the $$\mathcal{W}_{1+\infty}$$ algebra).
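For reference, the condition on subspaces in Wilson's $$\mathrm{Gr}_\lambda$$, described in prose in the first answer, can be written out explicitly. This is a paraphrase of that description, not notation taken from Wilson's paper:

```latex
\mathrm{Gr}_\lambda =
\left\{ W \subset \mathbf{C}(z) \;\middle|\;
\begin{array}{l}
\exists\, k \ge 0 :\ (z-\lambda)^{k}\,\mathbf{C}[z] \subseteq W \subseteq (z-\lambda)^{-k}\,\mathbf{C}[z], \\[2pt]
\dim_{\mathbf{C}} \bigl( W / (z-\lambda)^{k}\mathbf{C}[z] \bigr) = k
  = \dim_{\mathbf{C}} \bigl( (z-\lambda)^{-k}\mathbf{C}[z] / W \bigr)
\end{array}
\right\}
```

Since the ambient quotient $$(z-\lambda)^{-k}\mathbf{C}[z] / (z-\lambda)^{k}\mathbf{C}[z]$$ has dimension $$2k$$, either of the two dimension conditions implies the other.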
# All Questions 17,921 questions Filter by Sorted by Tagged with 7 views ### Set parent task to DONE based on child conditions I'm new to Emacs (spacemacs) and trying to get Org-Mode's TODO feature working how I'd like it to work. I am trying to have parent tasks set to DONE when all child checkboxes (or TODO's are complete).... 7 views ### Change the color of ivy-current-match depending on the caller When using ivy, say I want the current match to be red when calling counsel-M-x, but blue when calling counsel-find-file. Is there a simple way of changing ivy-current-match depending on the function ... 21 views ### define-key doesn't seem to work on certain keymap I'm trying to add some bindings to undo-tree-visualizer-mode-map. To do so I use: (define-key undo-tree-visualizer-mode-map (kbd "S-l") 'undo-tree-visualizer-quit) But it doesn't work, and if I do ... 7 views ### Org-mode verbatim tilde (=~=) is still processed by Latex back-end I have some verbatim text containing a tilde, =some ~ text=. Now, org-mode is supposed to leave the tilde alone. However, upon exporting with the latex backend, I get the following in my .tex: \texttt{... 14 views ### org-priority and org-insert-structure-template key binding clash I'm using or-mode 9.2 and when I hit C-c C-, and I got the following message: Before first headline at position 480 in buffer index.org The message indicates that the key actually get translated as ... 49 views ### Sort ripgrep results based on proximity to current buffer I use counsel-projectile-rg to search within the project I work on. But there is my sub-project inside this big repo that most of my development is in. Is there a way to sort ripgrep results based on ... 19 views ### Spaces don't work as expected in isearch I recently installed a new major mode for something I'm working on, and suddenly spaces in expressions I'm searching for don't work as expected. Help! 
26 views ### hs-minor-mode and sage-shell-mode (derived from python-mode) I would like to use emacs' hide-show mode (to collapse class and function definitions) with Sho Takemori's sage-shell-mode (for the SageMath computer algebra system) which derives from python-mode. ... 9 views ### Newsticker failure: (wrong-type-argument listp \.\.\.) Suddenly newsticker is failing to load. I get the following stack trace on error. I have made no recent upgrades. Debugger entered–Lisp error: (wrong-type-argument listp \.\.\.) ... 14 views ### How to run a command with 'modify-syntax-entry' reset? I have used modify-syntax-entry to remove underscores from being delimiting characters. e.g: (modify-syntax-entry ?_ "w") However I would like to temporary allow underscores to be used as a ... 16 views ### Busy Python console I am using Spacemacs on MacOS. Whenever I run a python file (Spc m s b), I get this in the console: import codecs, os;__pyfile = codecs.open('''/var/folders/vg/vztzj2jn1yn_y46_zcx6w_nh0000gn/T/... 18 views ### Capture at an arbitrary date In this example I changed the date for Entered to 12/07/2019 at the capture editing stage. Yet it gets filed under today's date. How can file it under the right date, that is 12/07/2019? Ideally in ... 32 views ### Preload problem: Emacs as daemon in X Background: In my .profile, I try to start emacs in daemon mode in the background, and it didn't start, and there is this message: http://bugzilla.gnome.org/show_bug.cgi?id=85715 Emacs might crash ... 23 views ### How to navigate to the next/previous python class? I would like to navigate quickly between classes in Python code. I'm looking for a command to go to the next / previous class so I can bind that to a key sequence. This question is not about ... 28 views ### How to navigate in a Dired buffer? window 10, Emacs 26.1, Dired+ Suppose I open in Dired+ mode some folder: d:/TEMP/test_folder/folder2/ I need next: When press button "End" then go to the last file in the folder. 
Like this: When ... 30 views ### Regular template On a near weekly basis, I need to create an org entry that looks something like * <TODAYS DATE> - <Person 1> - What this person did - <Person 2> - What this person did ... 27 views ### Unescape elisp string I have a function which returns a file path, but the returned value has backslash excaped 'special' characters e.g. "/home/fred/Documents/This\ file\ \(20200120\).txt" but the function where I want ... 45 views ### How to make fonts show anti-aliased on Linux/X11? With Linux/X11, certain font's show without anti-aliasing. JetBrains Mono Cascadia Code These two fonts show with anti-aliasing in other programs (st terminal for example). All other fonts show with ... 11 views ### why defcustom variable is not seeable from describe symbol? take org-brain-path for example: (defcustom org-brain-path (expand-file-name "brain" org-directory) "The root directory of your org-brain. `org-mode' files placed in this directory, or its ... 31 views ### Unable to install emacs 26.3 I am trying to install emacs 26.3. In installed sudo apt install autoconf make gcc texinfo libgtk-3-dev libxpm-dev libjpeg-dev libgif-dev libtiff5-dev libgnutls-dev libncurses5-dev It cant find ... 34 views ### Emacs leaving behind files preceded with # despite lock files being disabled I've noticed that emacs is leaving behind files in the directories of files I've edited with it. The files are named the same as the original, except the filename is preceeded and succeeded with a # ... 22 views ### Send command to run in eshell, after compiled So I've been mapping my M-x compile like this, according to c-mode-hook or c++-mode-hook: (add-hook 'c++-mode-hook (lambda () (set (make-local-variable 'compile-command) ... 18 views ### How to read the python documentation offline in emacs? Sometimes I use ctrl h i or M-x woman, though I almost always head to the web for basic queries like python find line in string. 
Do I need to download this[0], that[1] or something else before leaving ... 11 views ### org-mode call occur not function Since I need to search some pattern frequently, I wish to write some code there then execute it in org-mode. for example: #+begin_src elisp :results output (defun myoccur (arg) (occur (s-replace " "... 6 views ### how to use bbdb-snarf to capture addresses from point i want catch address information from point inside a buffer or from kill-ring. as far as i know, bbdb-snarf from bbdb3 would be a candidate to use. but it doesn't find email addresses or names from ... 14 views ### Automatically highlight elements in an XML buffer I spend a lot of time in XML buffers using nxml-mode (Emacs 26). Out of hundreds of XML elements, I am only interested in one or two. How can I have Emacs always show the XML elements I'm interested ... 25 views ### Help; how use Persistent Sets of Completion Candidates for search? I want to use a set of files inside my directory ~/Dokumente to search for changes in title, content. So I saved such a set of file names persistently, using C-} during file-name completion with ... 11 views ### TCL is adding different tab widths in the program I am trying to rearrange the columns of below file input.txt, Contents of this file: Name Location Purpose --------------------------------------------- Andy US Business1 ... 33 views ### whitespace-line-column highlight basing on the column number instead of char count config (require 'whitespace) (setq whitespace-line-column 80) ;; limit line length (setq whitespace-style '(face tabs empty trailing lines-tail)) For first line, whitespace-mode correctly highlights ... 17 views ### Restore after `maximize-window` If I have multiple windows open, I can maximize one using M-x maximize-window. How should I restore the windows to their original size? 
36 views ### Regular expression for finding 'href' attribute value of <a> HTML element I need a regex pattern for finding web page links in HTML. For example <a href="https://www.google.com/search?q=elisp+regex+lookaround" ....></a> I can use (?<=href=\").+?(?=\") | (?&... 38 views ### How to “call” a keymap I would like to have a function call-keymap which takes one argument km (a keymap) and such that the result of (call-keymap km) is the same as binding km to a key and then pressing that key. It ... 10 views ### How to get first nested parameter indented based on outermost function I would like to get the following cc-mode indentation behavior (c-basic-offset is 4): outerFunction(innerFunction( parameter1, parameter2)); However I would like to maintain arglist aligned ... 18 views ### org-mode : how to get more space between 'List of all TODO entries' C-a-t is bound to `org-todo-list (i.e., "List of all TODO entries"). Is there a way to add space between each row in the *Org Agenda* output buffer when the list is displayed? 17 views ### C-g doesn't work within Counsel's action menu (M-o) Let's say I'm completing a file with counsel-find-file. I then press M-o to show the menu with available actions. If within that menu, I press C-g, I get the message "C-g is not bound". The ... 28 views ### keymap (overlay / text property): how to create and modify? I'd like to have certain keymap active when the cursor is in a particular highlighted area. For instance, I would like to bind hl-todo-next to some easy key combination, but only when the cursor is on ... 37 views I have a bookmark in one of my org files (set using the usual emacs bookmarks), but I'm finding that when I try to jump to the bookmark (with the usual C-x r b, which invokes bookmark-jump), it doesn'... 36 views ### Emacs font style and colorisation I am used to Emacs under Windows 10 and trying to switch to Ubuntu. 
My Windows 10 Emacs font is M-x describe-font: name (opened by): -outline-Courier New-normal-normal-normal-mono-13-*-*-*-c-*-... 11 views ### Markdown equivalent of `org-indent-mode`? What's the simplest way of ensuring that the text below a headline in a Markdown file is aligned with the headline? In org-mode you can accomplish this by calling org-indent-mode. Is there a similar ... 15 views ### “Run all code blocks above this point” in org-babel? I know you can use org-babel-execute-src-block (C-c C-c) to execute individual code blocks, but is there something akin to "Run all chunks above" that RStudio has for Rmarkdown files? 137 views ### Ligatures with the JetBrains Mono font After installing the JetBrains Mono Font and setting it as my default face. I am wandering how to enable Ligatures in Emacs. I tried a solution based on this answer that show how to do this with Fira ... 24 views ### (octave-mode) highlighted region remains highlighted after command I am in octave-mode running an inferior octave shell. Now I mark a region and issue 'octave-send-region'. All commands are executed as expected and echoed in the shell buffer, and the highlighting is ... 45 views ### Goto line by number in folded org buffer I looking for a way to go to the specific line by number in an org file, when the structure is folded. An ideal solution would be to unfold only necessary headings to put the cursor in the line. I was ... 31 views ### How to improve eldoc + lsp-mode output with C/C++ comments? When using eldoc with lsp-mode, there are some irritations with the default output. Various characters are backslash escaped, so: /* This: isn't so -> "good", you see. */ Displays as: This\: ... 34 views ### How do I run an external command on an org mode source block? I'm writing a screenplay using Fountain mode. My org file is set up so there are multiple screenplays in a given file, for example * Number one! #+begin_src fountain INT A PARK IN SAN FRANCISCO ... 
13 views ### gnutls-negotiate: GnuTLS error: #<process smtpmail<1>>, -15 Following Sending Mail Since no credentials are given in this configuration, Emacs will look them up in \$(HOME)/.authinfo or \$(HOME)/.authinfo.gpg (encrypted). The content of this file should ... 14 views ### How to make helm matching faces inherit from ivy? I like Ivy face when matching better than helm for the particular theme I am using. When listing all faces: list-faces-display I see that I want helm matching to be exactly like: ivy-minibuffer-... 41 views ### How to search certain subdirectories of directories that are ignored by Git I'm working in a Scala project that has target/ directories entered as ignored in the .gitignore. However, there is also Scala code being generated during the build which is being nested under a ...
# Periodic Functions

## What are Periodic Functions?

Periodic functions are applied to study signals and waves in electrical and electronic systems, vibrations in mechanical and civil engineering systems, and waves in physics and wireless systems, and they have many other applications.

The graph of a periodic function repeats itself over cycles for $x$ in the domain of the function. If $f$ is known over one cycle, it is known everywhere over the domain of $f$ since the graph repeats itself.

A function $f$ is periodic with period $P$ if $f(x) = f(x + P)$, for $x$ in the domain of $f$. $P$ is the smallest positive real number for which the above condition holds.

The graph below shows a periodic function with two cycles as an example. The period $P$ is the distance, along the x axis, between any two points making a cycle, as shown in the graph below. $P = x_2 - x_1 = x_4 - x_3$

Example 1

All six trigonometric functions are periodic.

1. $\sin(x + 2\pi ) = \sin(x)$ , the period of $\sin(x)$ is equal to $P = 2\pi$. The graph of $\sin(x)$ is shown below with one cycle, in red, whose length over the x axis is equal to one period $P$ given by: $P = 2 \pi - 0 = 2 \pi$
2. $\cos(x + 2\pi ) = \cos(x)$ , the period of $\cos(x)$ is equal to $P = 2\pi$
3. $\sec(x + 2\pi ) = \sec(x)$ , the period of $\sec(x)$ is equal to $P = 2\pi$
4. $\csc(x + 2\pi ) = \csc(x)$ , the period of $\csc(x)$ is equal to $P = 2\pi$
5. $\tan(x + \pi ) = \tan(x)$ , the period of $\tan(x)$ is equal to $P = \pi$. The graph of $\tan(x)$ is shown below with one cycle, in red, whose length over the x axis is equal to one period $P$ given by: $P = \dfrac{\pi}{2} - (-\dfrac{\pi}{2} ) = \pi$
6. 
$\cot(x + \pi ) = \cot(x)$ , the period of $\cot(x)$ is equal to $P = \pi$

## Period of Transformed Functions

1) If $P$ is the period of $f(x)$, then the period of $A f(b x + c ) + D$ is given by $\dfrac{P}{|b|}$
2) If $P$ is the period of $f(x)$, then $f(x + n P) = f(x)$, for $n$ an integer

Example 2

Use the period of the trigonometric functions given in example 1 to find the period of each function given below

1. $f(x) = \sin(0.5 x)$
2. $g(x) = \tan(2 x + \pi/6)$
3. $h(x) = \cos(-(2/3) x - \pi)$
4. $j(x) = \sec(\pi x - 2)$
5. $k(x) = \cot(-(2\pi/3) x)$

Solution to Example 2

1. The period of $\sin(x)$ is $2\pi$. We use the above formula to find the period of $f(x) = \sin(0.5 x)$ as follows: $\dfrac{2\pi }{|0.5|} = 4\pi$
2. The period of $\tan(x)$ is $\pi$, hence the period of $g(x) = \tan(2 x + \pi/6)$ is equal to $\dfrac{\pi }{|2|} = \dfrac{\pi}{2}$
3. The period of $\cos(x)$ is $2\pi$, hence the period of $h(x) = \cos(-(2/3) x - \pi)$ is equal to $\dfrac{2\pi }{|-2/3|} = 3\pi$
4. The period of $\sec(x)$ is $2\pi$ and the period of $j(x) = \sec(\pi x - 2)$ is given by $\dfrac{2\pi }{|\pi|} = 2$
5. The period of $\cot(x)$ is $\pi$ and the period of $k(x) = \cot(-(2\pi/3) x)$ is given by $\dfrac{\pi }{|-2\pi/3|} = 3/2$

## More Examples with Solutions

Find the period of each of the functions given below

Example 3

Use the period of the trigonometric functions given in example 1 to find the period of each function given below

1. $f(x) = \sin(x) \cos(x)$
2. $g(x) = \sin^2(x)$
3. $h(x) = \cos(x) + \sin(x)$

Solution to Example 3

1. Use the identity $\sin(2x) = 2 \sin(x) \cos(x)$ to write $f(x) = \sin(x) \cos(x) = (1/2) \sin(2x)$, hence the period of $f(x)$ is given by $\dfrac{2\pi }{|2|} = \pi$
2. Use the identity $\cos(2x) = 1 - 2 \sin^2(x)$ to write $g(x) = \sin^2(x) = \dfrac{1}{2} - \dfrac{1}{2}\cos(2x)$, hence the period of $g(x)$ is given by $\dfrac{2\pi }{|2|} = \pi$
3. 
Use the trigonometric identity for the sine of a sum to expand $\sin(x + \pi/4) = \sin(x) \cos(\pi/4) + \cos(x) \sin(\pi/4) = \dfrac{\sqrt 2}{2}\sin(x) + \dfrac{\sqrt 2}{2}\cos(x)$, and hence rewrite $h(x)$ as $h(x) = \sin(x) + \cos(x) = \dfrac{2}{\sqrt 2}\sin(x + \pi/4) = \sqrt 2 \, \sin(x + \pi/4)$, and calculate the period of $h(x)$ as: $\dfrac{2\pi }{|1|} = 2\pi$
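The period formula $\dfrac{P}{|b|}$ used in the examples above can also be checked numerically. The following sketch (an illustration, not part of the original tutorial) tests whether a candidate $P$ satisfies $f(x + P) = f(x)$ on a grid of sample points:

```python
import math

def is_period(f, P, samples=1000, tol=1e-9):
    """Check f(x + P) == f(x) on a grid of sample points in [0, 10)."""
    return all(
        abs(f(x + P) - f(x)) < tol
        for x in (i * 0.01 for i in range(samples))
    )

# f(x) = sin(0.5 x): period 2*pi / |0.5| = 4*pi, as in Example 2
assert is_period(lambda x: math.sin(0.5 * x), 4 * math.pi)
# h(x) = cos(-(2/3) x - pi): period 2*pi / |2/3| = 3*pi
assert is_period(lambda x: math.cos(-(2 / 3) * x - math.pi), 3 * math.pi)
```

Note this only confirms that $P$ is *a* period on the sampled points; the smallest positive period still has to be argued analytically, as in the worked solutions.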
Favorite Answer. Never attempt to react calcium or the alkali metals with steam. Having seen the use of an indicator in metal–water reactions and the production of hydrogen gas in metal–acid reactions, students might suggest these as possible signs that the reactions are connected. magnesium + steam → magnesium oxide + hydrogen . Lv 6. What is the equation when magnesium reacts with steam forming magnesium oxide and liberating hydrogen? Here, the reaction initially produces magnesium oxide (Equation 3), which can continue to produce the hydroxide on reaction with liquid water (Equation 4). What is the contribution of candido bartolome to gymnastics? What is the birthday of carmelita divinagracia? Word equation for Magnesium + Steam? The reaction of magnesium with steam PDF, Size 0.15 mb; Additional information. In the second method, the hydrogen is collected over water and tested with a lighted spill. … Is evaporated milk the same thing as condensed milk? In … hydroxides and hydrogen. The rinsed glassware can be disposed of in the broken glass bin. Downloads . This demonstration shows how the reaction of metals with water can be dramatically sped up by an increase in temperature. This is an example of displacement of hydrogen in water by a The boiling tube should not be reused but can be rinsed in 500 cm3 of water to convert any silicides to silanes. The indicator will begin to change colour within a few minutes (Figure 2) but a few days may be needed to collect a significant volume of gas which could be tested in the next lesson. The magnesium glows as it reacts with the steam; A white powdery substance remains in the heated tube where the magnesium was; When the experiment is repeated using zinc it is less vigorous but hydrogen is also given out; Conclusion. The demonstrator should wear splash-proof goggles. When magnesium ribbon reacts with steam, a solid called magnesium oxide and hydrogen gas are formed. 4 years ago. 
Magnesium ribbon reacts in a satisfying manner with strong acids and barely reacts at all with room temperature water. Word Equation For Magnesium With Steam - Displaying top 8 worksheets found for this concept. It’s the same as when magnesium reacts with liquid water, only faster: Mg + 2H₂O → MgO + H₂. 1 decade ago. How long does it take to cook a 23 pound turkey in an oven? The audience should remain at least 2 metres away, wearing eye protection. magnesium plus steam produces magnesium oxide and hydrogen gas. Finally, add the bung fitted with glass tubing. Students can see the production of the hydrogen gas and draw a mental connection between the reactions seen in Equation 1 and Equation 2. In the second method, the hydrogen is collected over water and tested with a lighted spill. Active metals react with water/steam to produce metal Magnesium burns in steam to produce white magnesium oxide and hydrogen gas. $Mg_{(s)} + H_2O_{(g)} \rightarrow MgO_{(s)} + H_{2(g)} \label{1}$ Very clean magnesium ribbon has a mild reaction with cold water, given below. In the first method, the hydrogen that is formed is allowed to burn at the mouth of the flask. Answer Save. Inter state form of sales tax income tax? The difference is that in water, the reaction has a second step, converting MgO to Mg (OH)₂. Copyright © 2020 Multiply Media, LLC. Then, move the Bunsen burner to the water-soaked mineral wool to begin vaporising the water. Small pops or flashes from pyrophoric silanes may be seen. Take steps to prevent theft; never leave reels of magnesium in the laboratory. It will, however, react rapidly with steam. Materials: magnesium ribbon, hard glass test tube, water, sand, 2 burners, short glass tube, splint, clamp stand Method Set up the apparatus as shown above Heat the damp sand and magnesium ribbon so that steam is passed This site uses cookies from Google and other third parties to deliver its services, to personalise adverts and to analyse traffic. 
Set up safety screens to protect the audience and demonstrator. In the first method, the hydrogen that is formed is allowed to burn at the mouth of the flask. Declan Fleming is a chemistry teacher and author of our Exhibition Chemistry column. Similarly, they might see demonstrations of the more reactive metals with water or even explore these themselves practically. How long will it take to cook a 12 pound turkey? Active metals react with water/steam to produce metal hydroxides and hydrogen. Mg +2 H2O---> Mg(OH)2 + H2. This approach gives some pupils the impression that the two types of reactions are disparate. The end of the tubing should protrude at least 2 cm from the rubber to enable the evolved hydrogen from the demonstration to be lit. Word Equation For Magnesium With Steam - Displaying top 8 worksheets found for this concept.. When did organ music become associated with baseball? By Declan Fleming2020-11-09T09:20:00+00:00, Here’s how to bridge a common gap in students’ understanding of the reactivity series, Watch the video and download the technician notes from the Education in Chemistry website: rsc.li/3oBNyqC. Magnesium. science teacher. The balanced chemical equation is: Mg(s) +2H2O(g) yields Mg(OH)2(aq) + H2(g) O magnesium + hydrogen -> magnesium … What is plot of the story Sinigang by Marby Villaceran? How long will the footprints on the moon last? Who of the proclaimers was married to a little person? hps23456 hps23456 11 minutes ago Chemistry High School When magnesium ribbon reacts with steam, a solid called magnesium oxide and hydrogen gas are formed. 4 Answers. What is the conflict of the story of sinigang? They may also have seen the reaction of lithium with water to produce hydrogen gas and a hydroxide, as evidenced by the use of a universal indicator or phenolphthalein (Equation 2). Who is the longest reigning WWE Champion of all time? It’s the same as when magnesium reacts with liquid water, only faster: Mg + 2H₂O → MgO + H₂. 
Information about your use of this site is shared with Google. Signs to look out for and how to help during the pandemic, Ready to start a fire with water? The magnesium glows more brightly as the steam passes over it and a splint can be used to light the evolved hydrogen at the end of the glass tubing. The material on this site can not be reproduced, distributed, transmitted, cached or otherwise used, except with prior written permission of Multiply. The magnesium glows more brightly as the steam passes over it and a splint can be used to light the evolved hydrogen at the end of the glass tubing. Students can see the production of the hydrogen Students will likely already have seen the reaction of magnesium with acids to produce hydrogen gas and a salt (Equation 1). Read our policy. metal. What details make Lochinvar an attractive and romantic figure? go on bbc bitzize. mangesium does not react with cold water but reacts with steam What is the equation when magnesium reacts with steam forming magnesium oxide and liberating hydrogen? After several minutes, hydrogen gas bubbles form on its surface, and the coil of magnesium ribbon usually floats to the surface. 1 decade ago. A strip of magnesium ribbon is heated to produce magnesium oxide . Lv 7. Burning magnesium ribbon is plunged into the steam above boiling water in a conical flask. Burning magnesium ribbon is plunged into the steam above boiling water in a conical flask. Students could then be invited to make predictions about what the reactants and products are for the reaction of magnesium and water at room temperature and what evidence we might collect for the reaction taking place (Equation 5) if they had several days to wait for the results. You can test this by leaving an inverted funnel and collecting tube over some magnesium ribbon which has been submerged in water with a few drops of phenolphthalein. By using this site, you agree to its use of cookies. 
When teaching the reactivity series, it's common for pupils to undertake a practical to explore the patterns in the reactions of acids with the less reactive metals. Metals which react with steam form the solid metal oxide and hydrogen gas: magnesium + steam → magnesium oxide + hydrogen, i.e. Mg(s) + H₂O(g) → MgO(s) + H₂(g).

You will need magnesium ribbon (flammable), approximately 10 cm. Load the mineral wool into the boiling tube and soak it with water before clamping the tube horizontally and inserting the magnesium coil. Heat the tube with a Bunsen burner directly below the magnesium until the ribbon just catches fire. Never look directly at burning magnesium. (Note: magnesium silicide may have been produced.)

Figure 1: Set up, ready to react magnesium ribbon with steam. Figure 2: Testing for the production of hydrogen.
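As a rough cross-check of the stoichiometry of Mg(s) + H₂O(g) → MgO(s) + H₂(g), one mole of hydrogen per mole of magnesium, here is a back-of-envelope sketch. It is not part of the original demonstration notes, and the ribbon's mass per centimetre is an assumed typical value (roughly 1 g per metre; weigh your own ribbon):

```python
# Back-of-envelope estimate of hydrogen evolved by ~10 cm of magnesium ribbon
# reacting with steam: Mg(s) + H2O(g) -> MgO(s) + H2(g)  (1:1 Mg:H2 ratio).

M_MG = 24.305           # g/mol, molar mass of magnesium
MOLAR_VOLUME = 24000.0  # cm^3/mol at room temperature and pressure (approx.)

def hydrogen_volume_cm3(ribbon_cm, g_per_cm=0.01):
    """Estimate H2 volume; g_per_cm (~1 g per metre) is an assumed ribbon density."""
    moles_mg = ribbon_cm * g_per_cm / M_MG
    return moles_mg * MOLAR_VOLUME  # 1 mol Mg -> 1 mol H2

print(hydrogen_volume_cm3(10))  # ~0.1 g Mg gives roughly 100 cm^3 of H2
```

So even a short length of ribbon evolves enough hydrogen for an audible pop at the end of the tubing.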
# Taking advantage of linearity of integration in Mathematica

I want to evaluate an integral of the form given below

$$\int\limits_\alpha^\beta (f(x) + g(x) + h(x) + ...) dx$$

When I give it to Mathematica it takes forever to evaluate. But if I give it in this form

$$\int\limits_\alpha^\beta f(x)dx + \int\limits_\alpha^\beta g(x)dx + \int\limits_\alpha^\beta h(x)dx + ...$$

it takes considerably less time. I can write integrate[y_ + z_, x_] := integrate[y, x] + integrate[z, x] for two terms, but I want to be able to do this for an arbitrary number of terms. How to do that is the question.

- Perhaps you could list your functions as $f_1, f_2, \ldots$ instead of $f(x), g(x), \ldots$ and set Mathematica up to read it as $\displaystyle\sum_{i=1}^n \displaystyle\int_{\alpha}^{\beta} f_i(x) dx$? I don't have the Mathematica skill to tell you the exact code, though. –  tomcuchta Jul 19 '11 at 0:05
- I got it: integrate[y_ + z_, x_] := integrate[y, x] + integrate[z, x] is recursively defined, so it takes care of an arbitrary number of summands. Now my problem is that integrate does not Integrate. –  Pratik Deoghare Jul 19 '11 at 1:42
- I tried integrate := Integrate and wow!! it worked! –  Pratik Deoghare Jul 19 '11 at 1:43

I just noticed this question, so please forgive the (very) late reply. If you want a function that will automatically split across addition, like you've tried to define, I'd do this

Clear[integrate]
integrate[a_Plus, x_, opts:OptionsPattern[]] := integrate[#, x, opts]& /@ a

which with input

integrate[a + b + c, {x, 0, 5}]

gives

integrate[a, {x, 0, 5}] + integrate[b, {x, 0, 5}] + integrate[c, {x, 0, 5}]

Then, you can define

integrate[a_, x_, opts:OptionsPattern[]] := Integrate[a, x, opts]

to map it back to the original function.

- For

$$\int\limits_\alpha^\beta (f(x) + g(x) + h(x) + ...) dx$$

In[1]:= f[x_]:= your definition

This does what you want, i.e. it integrates the $f,g,h\cdots$ and then adds them, rather than adding and then integrating.
Tested on Mathematica 7.

- Problem is I get the output in the form of $f(x)+g(x)+h(x)+...$ after doing many operations, say after expanding something. I am not defining the functions myself. –  Pratik Deoghare Jul 19 '11 at 0:39
- @MachineCharmer is that perhaps because you have used the lower case i for Integrate in your question? Mathematica is case sensitive. –  kuch nahi Jul 19 '11 at 0:41
- i.e. integrate[x,y] just echoes the same expression while Integrate[x,y] returns xy. Maybe that is why you are getting the sums. –  kuch nahi Jul 19 '11 at 0:42
- @kuch nahi: I think he means that he has an arbitrary number of terms in his sum. You could generalize this answer by automatically splitting the integrand with F = Apply[List, q[x]] where q[x] = f[x] + g[x] + ... (untested). Edit: I missed that a recursive solution was found in the other comments. –  Mikael Öhman Jul 19 '11 at 2:16
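The reason the split form is faster is just linearity of the integral: ∫(f + g + h) = ∫f + ∫g + ∫h, so each addend can be handled by whichever special-case rule matches it. The Mathematica-specific recursion is shown in the thread; as a language-neutral sanity check of the identity itself, here is a small numeric sketch (function names are illustrative, not from any library mentioned above):

```python
from math import sin, exp

def midpoint_integral(f, a, b, n=10000):
    """Crude midpoint-rule quadrature of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: sin(x)
g = lambda x: exp(-x)
h = lambda x: x * x

# Integrating the sum versus summing the integrals gives the same result.
combined = midpoint_integral(lambda x: f(x) + g(x) + h(x), 0.0, 1.0)
split = sum(midpoint_integral(t, 0.0, 1.0) for t in (f, g, h))
print(abs(combined - split) < 1e-9)  # linearity: the two agree
```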
# If p, q, r have truth values T, F, T respectively, which of the following is true?

Updated On: 17-04-2022 (Objective RD Sharma, Class 12 Maths)

Text Solution

A. (p → q) ∧ r
B. (p → q) ∧ ~r
C. (p ∧ q) ∧ (p ∨ r)
D. q → (p ∧ r)

Answer: D

Transcript

Hello friends, in this question we are given the truth values of p, q and r, and we need to find which of the given statements is true. First of all, recall the truth table of p → q: the conditional is false only when p is true and q is false, and true in every other case. Here p is true, q is false and r is true.

Option A: (p → q) ∧ r. Since p is true and q is false, p → q is false; false ∧ true is false, so option A is false.

Option B: (p → q) ∧ ~r. Again p → q is false, and ~r, the negation of true, is also false; false ∧ false is false, so option B is false.

Now we have to check option D: q → (p ∧ r).
Here q is false, while p and r are both true, so p ∧ r is true. A conditional is true whenever its antecedent is false (false → true is true), so q → (p ∧ r) is true, and option D holds.

Finally, option C: (p ∧ q) ∧ (p ∨ r). Since p is true and q is false, p ∧ q is false; p ∨ r is true ∨ true = true, but a conjunction with a false part is false, so option C is false.

Hence option D is the correct answer. Thank you.
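The option-by-option reasoning in the transcript can be verified mechanically; a small sketch (using `not ... or ...` to express the conditional p → q):

```python
p, q, r = True, False, True

def implies(a, b):
    """Material conditional a -> b: false only when a is true and b is false."""
    return (not a) or b

options = {
    "A": implies(p, q) and r,
    "B": implies(p, q) and (not r),
    "C": (p and q) and (p or r),
    "D": implies(q, p and r),
}
print(options)  # only D is True
```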
This check simplifies subscript expressions. Currently this covers calling .data() and immediately doing an array subscript operation to obtain a single element, in which case simply calling operator[] suffices. Examples: std::string s = ...; char c = s.data()[i]; // char c = s[i]; ## Options Types The list of type(s) that triggers this check. Default is ::std::basic_string;::std::basic_string_view;::std::vector;::std::array
### User initiated connect to database.

No. 48 An application writing records to a database server

Our aim is enhancing our first GUI prototype described in Figure 946, "A simple GUI to insert data into a database server". The application shall start disconnected from the database server. Prior to entering data the user shall be guided to open a connection. The following video illustrates the desired user interface:

Figure 948. A GUI frontend for adding personal data to a server.

Mind the following topics:

1. There are potential database related problems:

   1. JDBC driver registration may fail, e.g. due to a missing library.
   2. Establishing a connection may fail, e.g. due to wrong connection parameters.
   3. Inserting data may fail, e.g. due to a missing table definition.

   Separate end-user and expert related hints:

   - Expert messages shall be handled by log4j.
   - End-user error messages shall just convey a basic idea containing a reference to the log file, i.e. Could not open database, see log file for details.

2. Separate your GUI and database components: Basically the GUI part shall have no reference to SQL / JDBC, and your database part shall have no Vaadin GUI related references.

### Tip

- The database layer may send both error and informative messages to your GUI to be presented in an end-user-friendly manner.
- Using your class PersistenceHandler from Database layer handling requires two steps:

  1. Executing mvn install in your project Database layer handling. This will create libraries to be installed below ~/.m2.
  2. Importing these libraries as Maven dependencies into your GUI project's pom.xml:

    ...
    <dependencies>
      <dependency>
        <groupId>de.hdm_stuttgart.mi.persistence</groupId>
        <artifactId>persistencehandler</artifactId>
        <version>3</version>
      </dependency>
    ...

A:

### Caution

Caveat: Calling mvn compile in the dependent project P/Sda1/PersistenceHandler/Statement is a prerequisite which installs required libraries below ~/.m2.
Our implementation uses class de.hdm_stuttgart.mi.PersistenceHandler for handling database communication. Our GUI needs to visualize the two different states, disconnected and connected. In the disconnected state the whole input pane for entering datasets and clicking the Insert button is locked, so the user is forced to open a database connection first. Notice the detach listener:

public class LoginUI extends UI implements DetachListener {
  ...
  @Override
  public void detach(DetachEvent event) {
    if (persistenceHandler.isConnected()) {
      persistenceHandler.toggleConnectionState();
    }
  }

This allows for closing JDBC connections whenever the application server chooses to evict our application instance, e.g. due to timeouts.
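One way to realise the expert/end-user message split without the full Vaadin stack is a small translator that logs the technical detail and returns only the generic hint. This is a hypothetical sketch: the class and method names are mine, not from the project, and java.util.logging merely stands in for log4j:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

/** Hypothetical helper: expert detail goes to the log, users get a generic hint. */
public class ErrorMessages {

    private static final Logger log = Logger.getLogger(ErrorMessages.class.getName());

    /** Log the full exception for experts, return a friendly one-liner for the GUI. */
    public static String forUser(String action, Exception cause) {
        log.log(Level.SEVERE, "Failure during: " + action, cause); // expert detail
        return "Could not " + action + ", see log file for details.";
    }

    public static void main(String[] args) {
        String msg = forUser("open database", new RuntimeException("bad JDBC URL"));
        System.out.println(msg);
    }
}
```

The GUI layer only ever sees the returned string, keeping SQL/JDBC specifics out of the Vaadin code.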
# A retailer originally bought 50 equally priced phones for a total of z

Math Expert
Joined: 02 Sep 2009
24 Jan 2019, 03:51

A retailer originally bought 50 equally priced phones for a total of z dollars. If he sold each phone for 25% more than he paid for it, then in terms of z, how much was each phone sold for?

A. z/50
B. z/40
C. 5z/4
D. 4z/5
E. 62.5z

Director
Joined: 09 Mar 2018
Location: India
24 Jan 2019, 04:01

Bunuel wrote:
A retailer originally bought 50 equally priced phones for a total of z dollars. If he sold each phone for 25% more than he paid for it, then in terms of z, how much was each phone sold for?
A. z/50
B. z/40
C. 5z/4
D. 4z/5
E. 62.5z

IMO B

Say 50 mobiles were bought for 5000 dollars (just choose a value), so 1 mobile is bought for 100 dollars. He sold each phone for a 25% greater value, i.e. 125. Plug the value back in for z in the options to get the price of each phone: 5000/40 = 125.

VP
Joined: 31 Oct 2013
24 Jan 2019, 04:02

Bunuel wrote:
A retailer originally bought 50 equally priced phones for a total of z dollars. If he sold each phone for 25% more than he paid for it, then in terms of z, how much was each phone sold for?

A. z/50
B. z/40
C. 5z/4
D. 4z/5
E. 62.5z

Cost price of each set = z/50
Selling price = z/50 * 125/100 = z/40

B is the correct answer.
e-GMAT Representative
Joined: 04 Jan 2015
24 Jan 2019, 05:07

Solution

Given:
- Original number of phones bought = 50, at a total price of z dollars
- The price of each phone is the same
- Each phone is sold at a price 25% more than the price at which he bought it

To find:
- The selling price of each phone

Approach and Working:
- Cost price of each phone = z/50
- Hence, selling price of each phone = z/50 + 25% of z/50 = 1.25 * (z/50) = z/40

Hence, the correct answer is Option B.

GMAT Club Legend
Joined: 18 Aug 2017
24 Jan 2019, 05:17

Bunuel wrote:
A retailer originally bought 50 equally priced phones for a total of z dollars. If he sold each phone for 25% more than he paid for it, then in terms of z, how much was each phone sold for?

A. z/50
B. z/40
C. 5z/4
D. 4z/5
E. 62.5z

Original price per phone = z/50; selling price = (z/50) * 1.25 = z/40

IMO B
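The algebra is easy to sanity-check numerically, picking z = 5000 as in the first reply (the function name is mine, for illustration only):

```python
def selling_price(z, phones=50, markup=0.25):
    """Cost per phone is z/phones; the selling price adds the markup."""
    return (z / phones) * (1 + markup)

z = 5000
print(selling_price(z))            # 125.0, as in the plug-in solution
print(selling_price(z) == z / 40)  # True: answer B, z/40
```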
# Free product We’ve already solved two universal problems! In this post, we will solve the third, which is no more complicated than the last problem. However, the next problem is pretty complicated and it will use what we’ll do here, so it’s important to understand this problem as well as possible. Ok, let’s not spend any more time and just dive in: ## The problem In this problem, two groups are given – $\color{blue}G$ and $\color{red} H$. From those groups we can define homomorphisms $\color{blue}\varphi$ and $\color{red}\psi$ to some other group $K$. But that’s not really a problem, right? The real problem is to find some universal group $U$ with homomorphisms: i_G :\color{blue}G\color{black}\to U \\ i_H: \color{red}H\color{black}\to U such that every homomorphism to $K$ goes “through it”. That is, there exists a unique homomorphism $L:U\to K$ such that: \color{blue}\varphi\color{black}=L\circ i_G\\ \color{red}\psi\color{black}=L\circ i_H I am not going to prove the uniqueness of $U$ here; I’ve already proved uniqueness in the previous problems and the process is the same! (You can find a full proof here – not the same case, but you can easily apply that proof to this case.) It turns out that in order to solve this problem we need to define a new group – the free product: ### Sets of Words First, we need to define a set $M$ that is the set of all the words that are made from elements of $G$ and $H$. And what’s that exactly? Such a “word” is just a finite sequence of elements from $G$ and $H$ (it can be empty…). For example, the following sequence: (g_1,h_1,g_2,g_3,h_2,h_3,h_4,g_4) (where $g_i\in G, h_i\in H$) is an example of a word. But dealing with words alone won’t give us anything; in order to make things a bit more interesting we need to define an equivalence relation on the set.
We’ll define: (---, g_1,g_2,---)\sim (---, g_1g_2,---) \ , \ g_1,g_2\in G\\ (---, h_1,h_2,---)\sim (---, h_1h_2,---) \ , \ h_1,h_2\in H That is, if two elements from the same group are next to each other in the word, we can ‘reduce’ the word by replacing both elements with their product. Moreover, we can reduce the identities: (---,1_G,---)\sim(------) \\ (---,1_H,---)\sim(------) We’ll denote the set of equivalence classes as $G* H$. The only thing left to do is to define an action on the set: #### The action on the set The most natural action to define on the set of words is the following: (---,x_1)\circ(x_2,---)=(---,x_1,x_2,---) We just “connect” the words. We can now easily define a similar action on $G*H$: [(---,x_1)]\circ[(x_2,---)]=[(---,x_1,x_2,---)] This action is indeed well defined: we need to show that if $x_1\sim x_2$ and $y_1\sim y_2$ then $x_1\circ y_1\sim x_2\circ y_2$. But that’s not so complicated – suppose that: x_1=(a_1,\dots,a_k), y_1=(b_1,\dots,b_m) So their product is: (a_1,\dots,a_k,b_1,\dots,b_m) Intuitively we can put an ‘imaginary bar’: (a_1,\dots,a_k\ |\ b_1,\dots,b_m) and ‘split’ the word into two parts. In the left part we can do the required manipulations to bring it to the form of $x_2$ (this is possible since $x_1\sim x_2$), and similarly, in the right part, the manipulations that bring it to the form of $y_2$. Thus, the word after the manipulations is: (x_2\ |\ y_2) And this yields that $x_1y_1\sim x_2y_2$, as we wanted. The proof of associativity is done pretty much in the same way – try to prove it yourself! The identity will be the class of the empty word – $[()]$. The inverse of the class: [(x_1,\dots,x_m)] will be the class: [(x_m^{-1},\dots,x_1^{-1})] It’s not hard to see why (convince yourself! multiply those classes if you want and see what you get).
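The reduction rules above are straightforward to implement. The sketch below is my own illustration (not from the post): letters are tagged pairs ("G", g) or ("H", h), with each factor taken to be the integers modulo some n under addition, so each group's identity is 0:

```python
# Sketch: the free product Z/4 * Z/6 (additive notation, identity 0).
# A word is a tuple of letters; a letter is ("G", g) or ("H", h).
MOD = {"G": 4, "H": 6}

def reduce_word(word):
    """Merge adjacent letters from the same factor; drop identity letters."""
    out = []
    for tag, val in word:
        val %= MOD[tag]
        if out and out[-1][0] == tag:           # (---, x1, x2, ---) ~ (---, x1x2, ---)
            val = (out.pop()[1] + val) % MOD[tag]
        if val != 0:                            # (---, 1, ---) ~ (------)
            out.append((tag, val))
    return tuple(out)

def multiply(w1, w2):
    """The group operation: concatenate, then pass to the reduced representative."""
    return reduce_word(w1 + w2)

g, h = ("G", 1), ("H", 2)
print(multiply((g,), (("G", 3),)))       # () -- g times its inverse is the empty word
print(multiply((g, h), (("H", 4), g)))   # (('G', 2),) -- reductions cascade
print(multiply((g,), (h,)) != multiply((h,), (g,)))  # True: not commutative
```

The stack in reduce_word keeps the invariant that stored letters alternate between the two factors, so every word lands on its fully reduced representative.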
Note that if the groups are not trivial, then the action is not commutative; to see this we can just pick some non-identity $g\in G$ and $h\in H$ and get: [(g)][(h)]=[(g,h)]\neq[(h,g)]=[(h)][(g)] ### The group Great, so we’ve found out that $G* H$ is indeed a group! In fact, this group has a special name – it is called the free product of $G$ and $H$. Note that we can easily define natural homomorphisms: i_G:G\to G*H\\ i_H:H\to G*H How? Just send an element to the class of the word that is made from it: i_G(g)=[(g)] \\ i_H(h)=[(h)] Those are indeed homomorphisms, for example: i_G(g_1g_2)=[(g_1g_2)]=[(g_1,g_2)]=[(g_1)][(g_2)]=i_G(g_1)i_G(g_2) Moreover, those homomorphisms are in fact injective (they are monomorphisms): they clearly send identity to identity, i_G(1_G)=[(1_G)]=[()]=1_{G*H} and for $g\neq 1_G$ the one-letter word $(g)$ cannot be reduced to the empty word, so the kernel of $i_G$ is trivial. Great, so we have a group and we have homomorphisms into it; the only thing left to check is that this group is indeed the solution of the universal problem: ## Is that what we are looking for? So, now the diagram looks like: We just need to figure out what $L$ is and why it is unique. Well, we know that it should satisfy: \color{blue}\varphi\color{black}=L\circ i_G \\ \color{red}\psi\color{black}=L\circ i_H So for every $g\in G, h\in H$ it should satisfy: \color{blue}\varphi\color{black}(g)=L\circ i_G(g)=L([(g)]) \\ \color{red}\psi\color{black}(h)=L\circ i_H(h)=L([(h)]) And that’s exactly how $L$ is defined! (That also proves the uniqueness.) For example, if the sequence is: [(g_1,h_1,g_2,g_3)] Then: L([(g_1,h_1,g_2,g_3)])=L([(g_1)][(h_1)][(g_2)][(g_3)]) = L([(g_1)])L([(h_1)])L([(g_2)])L([(g_3)])=\color{blue}\varphi\color{black}(g_1)\color{red}\psi\color{black}(h_1)\color{blue}\varphi\color{black}(g_2)\color{blue}\varphi\color{black}(g_3) ## Summary I find the construction of the group pretty natural – think about it: the universal group receives ‘data’ from two groups, and it should know how to ‘handle’ it. When I first saw this problem I asked myself – why not just pick $U$ to be the cartesian product $G\times H$?
Well, it doesn’t work: if you try to use this group as a solution, you will find that $L$ is not a homomorphism… Try it, it’s a really nice exercise – think about what $i_G,i_H$ should be and how $L$ has to be defined. Great, so we have now solved three universal problems – three more to go! The next problem is the most complicated one! However, it is a really important one and it will allow us to compute fundamental groups like crazy!
# Series

SERIES (a Latin word from serere, to join), a succession or sequence. In mathematics, the term is applied to a succession of arithmetical or algebraic quantities (see below); in geology it is synonymous with formation, and denotes a stage in the classification of strata, being superior to group (and consequently to bed, and zone or horizon) and inferior to system; in chemistry, the term is used particularly in the form homologous series, given to hydrocarbons of similar constitution and their derivatives which differ in empirical composition by a multiple of CH₂, and in the form isologous series, applied to hydrocarbons and their derivatives which differ in empirical composition by a multiple of H₂; it is also used in the form isomorphous series to denote elements related isomorphously. The word is also employed in zoological and botanical classification.

In mathematics a set of quantities, real or complex, arranged in order so that each quantity is definitely and uniquely determined by its position, is said to form a series. Usually a series proceeds in one direction and the successive terms are denoted by u₁, u₂, ..., u_n, ...; we may, however, have a series proceeding in both directions, a back-and-forwards series, in which case the terms are denoted by ..., u₋₂, u₋₁, u₀, u₁, u₂, ..., ...; or its general term may depend on two integers, positive or negative, and its general term may be denoted by u_{m,n}; such a series is called a double series, and so on. The number of terms may be limited or unlimited, and we have two theories, (1) of finite series and (2) of infinite series. The first concerns itself mainly with the summation of a finite number of terms of the series; the notions of convergence and divergence present themselves in the theory of infinite series.

Finite Series. 1. When we are given a series, it is supposed that we are given the law by which the general term is formed.
The first few terms of a series afford no clue to the general term; the series of which the first four terms are 1, 2, 4, 8 may be the series of which the general term is 2ⁿ; it may equally well be the series of which the general term is ⅙(n³ + 5n + 6); in fact we can construct an infinite number of series of which the leading terms shall be any assigned quantities. The only case in which the series may be completely determined from its leading terms is that of a "recurring series." A recurring series is a series in which the consecutive terms, after the earlier ones, are connected by a linear relation; thus if we have a relation of the form

a_p u_r + a_{p−1} u_{r+1} + a_{p−2} u_{r+2} + ... + a_1 u_{r+p−1} + a_0 u_{r+p} = 0,

the series is said to be a recurring series with a scale of relation a_0 + a_1 x + a_2 x² + ... + a_p x^p. It is clear that we can regard the series u_0 + u_1 x + u_2 x² + ... as the expansion in powers of x of an expression of the form (b_0 + b_1 x + ... + b_{p−1} x^{p−1})/(a_0 + a_1 x + ... + a_p x^p), and by splitting this expression into partial fractions we can obtain the general term of the series. If we know that a series is a recurring series and know the number of terms in its scale of relation, we can determine this scale if we are given a sufficient number of terms of the series, and obtain its general term.
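As a concrete illustration (mine, not the article's): with scale of relation 1 − x − x² the linear relation is u_n = u_{n−1} + u_{n−2}, and successive terms can be generated directly from the scale:

```python
def recurring_terms(scale, seed, count):
    """Terms of a recurring series with scale 1 + p1*x + ... + pk*x^k.

    scale lists (p1, ..., pk); each new term satisfies
    u_n + p1*u_{n-1} + ... + pk*u_{n-k} = 0.
    """
    u = list(seed)
    while len(u) < count:
        u.append(-sum(p * u[-i - 1] for i, p in enumerate(scale)))
    return u

# Scale of relation 1 - x - x^2  ->  u_n = u_{n-1} + u_{n-2} (Fibonacci-like).
print(recurring_terms((-1, -1), (1, 1), 10))
# [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```

Here the generating function is (1)/(1 − x − x²), a rational function whose denominator is exactly the scale of relation, matching the partial-fraction remark above.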
It follows that the general term of a recurring series is of the form Σφ(n)aⁿ, where φ(n) is a rational integral algebraic function of n, and a is independent of n. The series whose general term is of the form Kaⁿ + φ(n), where φ(n) is a rational integral algebraic function of degree r, is a recurring series whose scale of relation is (1 − ax)(1 − x)^{r+1}, but the general term of this series may be obtained by another method. Suppose we have a series u₀, u₁, u₂, ... From this we can form a series v₀, v₁, v₂, ... where v_n = u_{n+1} − u_n; from v₀, v₁, v₂, ... we similarly form another series, and so on; we write v_n = Δu_n, and we suppose E to be an operation such that Eu_n = u_{n+1} (the notation is that of the calculus of finite differences); the operations E and 1 + Δ are equivalent, and hence the operations Eⁿ and (1 + Δ)ⁿ are equivalent, so that we obtain

u_n = u₀ + nΔu₀ + [n(n − 1)/2!]Δ²u₀ + ...

This is true whatever the form of u_n. When u_n is of the form Kaⁿ + φ(n), where φ(n) is of degree r, the terms Δ^{r+1}u₀, Δ^{r+2}u₀, ... form a geometrical progression, of which the common ratio is a − 1, or vanish if the term Kaⁿ is absent. In either case we readily obtain the expression for u_n.

2. The general problem of finite series is to find the sum of n terms of a series of which the law of formation is given. By finding the sum to n terms is meant finding some simple function of n, or a sum of a finite number of simple functions, the number being independent of n, which shall be equal to this sum. Such an expression cannot always be found even in the case of the simplest series. The sum of n terms of the arithmetic progression a, a + b, a + 2b, ... is na + ½n(n − 1)b; the sum of n terms of the geometric progression a, ab, ab², ... is a(1 − bⁿ)/(1 − b); yet we can find no simple expression to represent the sum of n terms of the harmonic progression.

3. The only type of series that can be summed to n terms with complete generality is a recurring series. If we let S_n = u₀ + u₁x + ...
+ u_{n−1}x^{n−1}, where u₀, u₁, ... is a recurring series with a given scale of relation – for simplicity take it to be 1 + px + qx² – we shall have

S_n(1 + px + qx²) = u₀ + (u₁ + pu₀)x + (pu_{n−1} + qu_{n−2})xⁿ + qu_{n−1}x^{n+1}.

If x had a value that made 1 + px + qx² vanish, this method would fail, but we could find the sum in this case by finding the general term of the series. For particular cases of recurring series we may proceed somewhat differently. If the nth term is u_n xⁿ we have, from the equivalence of the operations E and 1 + Δ, a sum expressible in powers of 1/(1 − x) in general, and for the case of x = unity we have

u₀ + u₁ + ... + u_{n−1} = nu₀ + [n(n − 1)/2!]Δu₀ + [n(n − 1)(n − 2)/3!]Δ²u₀ + ...,

which will give the sum of the series very readily when u_n is a polynomial in n, or a polynomial plus a term of the form Kaⁿ.

4. Other types of series, when they can be summed to n terms at all, are summed by some special artifice. Summing the series to 3 or 4 terms may suggest the form of the sum to n terms, which can then be established by induction. Or it may be possible to express u_n in the form w_{n+1} − w_n, in which case the sum to n terms is w_{n+1} − w₁. Thus, if u_n = a(a + b)(a + 2b)...(a + (n − 1)b) / c(c + b)(c + 2b)...(c + (n − 1)b), the relation (c + nb)u_{n+1} = (a + nb)u_n can be thrown into the form (c + nb)u_{n+1} − (c + (n − 1)b)u_n = (a − c + b)u_n, whence the sum can be found. Again, if u_n = tan nx tan(n + 1)x, the summation can be effected by writing u_n in the form cot x(tan(n + 1)x − tan nx) − 1. Or a series may be recognized as a coefficient in a product. Thus, if f(x) = u₀ + u₁x + u₂x² + ..., then u₀ + u₁ + ... + u_n is the coefficient of xⁿ in f(x)/(1 − x); in this way the sum of the first n coefficients in the expansion of a negative power of 1 − x may be found. The sum of one series may be deduced from that of another by differentiation or integration. For further information the reader may consult G. Chrystal's Algebra (vol. ii.).
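The formula for the case x = unity is easy to verify mechanically. The following sketch (my own illustration, not part of the article) checks u₀ + u₁ + ... + u_{n−1} = nu₀ + C(n,2)Δu₀ + C(n,3)Δ²u₀ + ... for a term of the form Kaⁿ plus a polynomial, exactly the case discussed above:

```python
from math import comb

def leading_differences(seq):
    """[u0, Δu0, Δ²u0, ...] by repeated forward differencing."""
    out, row = [], list(seq)
    while row:
        out.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
    return out

# u_n = 2*3^n + n^2 + 1: K*a^n plus a polynomial of degree 2.
u = [2 * 3**n + n * n + 1 for n in range(12)]
d = leading_differences(u)

for n in range(1, 12):
    lhs = sum(u[:n])                                    # u_0 + ... + u_{n-1}
    rhs = sum(comb(n, k + 1) * d[k] for k in range(n))  # n*u0 + C(n,2)Δu0 + ...
    assert lhs == rhs
print("finite-difference summation formula verified")
```

Note also that d[3], d[4], ... here form a geometrical progression with ratio a − 1 = 2, as stated in §1.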
5. The sum of an infinite series may be deduced from the sum to n terms, when this is known, by increasing n indefinitely and finding the limit, if any, to which it tends, but a series may often be summed to infinity when it cannot be summed to n terms; the sum of the infinite series 1/1² + 1/2² + 1/3² + ... is π²/6, yet the sum to n terms cannot be found. For methods and transformations by means of which the sum to n terms of a series may be found approximately when it cannot be found exactly, the reader may consult G. Boole's Treatise on the Calculus of Finite Differences.

Infinite Series. 6. Let u₁, u₂, u₃, ... be a series of numbers real or complex, and let S_n denote u₁ + u₂ + ... + u_n. We thus form a sequence of numbers S₁, S₂, ..., S_n. This sequence may tend to a definite finite limit S as n increases indefinitely. In this case the series u₁ + u₂ + ... + u_n + ... is said to be convergent, and to converge to a sum S. If by taking n sufficiently large |S_n| can be made to exceed any assignable quantity, however large, the series is said to be divergent. If the sequence S₁, S₂, ... tends to finite but different limits according to the form of n, the series is said to oscillate, and is also classed under the head of divergent series.

The sum of n terms of the geometric series 1 + x + x² + ... is (1 − xⁿ)/(1 − x). If x is less than unity S_n clearly tends to the limit 1/(1 − x), and the series is convergent and its sum is 1/(1 − x). If x is greater than unity S_n clearly can be made greater than any assignable quantity by taking n large enough, and the series is divergent. The series 1 − 1 + 1 − 1 + ..., where S_n is unity or zero according as n is odd or even, is an example of an oscillating series.

The condition of convergency may also be presented under the following form. Let ρ_{n,p} denote S_{n+p} − S_n; let ε be any arbitrarily assigned positive quantity as small as we please; if we can find a number m such that for n ≥ m, |ρ_{n,p}| < ε for all values 1, 2, ... of p, then the series converges.
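The behaviours just defined, convergence, divergence and oscillation, are easy to see from partial sums; an illustrative sketch (not from the article):

```python
def partial_sums(terms):
    """Running sums S_1, S_2, ... of the given terms."""
    s, out = 0.0, []
    for t in terms:
        s += t
        out.append(s)
    return out

n = 60
convergent = partial_sums(0.5 ** k for k in range(n))    # geometric, x = 1/2
oscillating = partial_sums((-1) ** k for k in range(n))  # 1 - 1 + 1 - 1 + ...

print(abs(convergent[-1] - 2.0) < 1e-12)  # S_n -> 1/(1 - x) = 2
print(sorted(set(oscillating)))           # S_n is forever 1 or 0: it oscillates
```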
The least value of the number m corresponding to a given value of ε, if it can be found, may be regarded as a measure of the rapidity of the convergency of the series; it may happen that, when u_n involves a variable x, m increases indefinitely as x approaches some value; in this case the convergence of the series is said to be infinitely slow for this value of x.

7. An infinite series may contain both positive and negative terms. The terms may be positive and negative alternately, or they may occur in groups which, without altering the order of the terms of the series, may each be collected into a single term; thus all series may be regarded as belonging to one of two types, u₁ + u₂ + u₃ + ... in which the terms are all positive, or u₁ − u₂ + u₃ − ... in which the terms are alternately positive and negative.

8. It is clear that if a series is convergent u_n must tend to the limit zero as n is increased indefinitely. This condition, though necessary, is by no means sufficient. If all the terms of a convergent series are positive, a series obtained by writing its terms in any other order is convergent and converges to the same sum. For if S_n denotes the sum of n terms of the first series and Σ_n denotes the sum of n terms of the new series, then, when n is any large number, we can choose numbers p and q such that S_q ≥ Σ_n ≥ S_p; so that Σ_n tends to the common limit of S_p and S_q, which is the sum of the original series.

If u₁, u₂, u₃, ... are all positive, and if after some fixed term, say the pth, u_n continually decreases and tends to the limit zero, the series u₁ − u₂ + u₃ − ... is convergent. For |S_{p+2n} − S_p| lies between |u_{p+1} − u_{p+2}| and |u_{p+1}|, so that, when n is increased indefinitely, |S_{p+n}| remains finite; also |S_{p+2n+1} − S_{p+2n}| tends to zero, so that the series converges. If u_n tends to a limit a distinct from zero, then the series v₁ − v₂ + v₃ − ..., where v_n = u_n − a, converges, and the series u₁ − u₂ + u₃ − ... oscillates. As examples we may take the series 1 − ½ + ⅓ − ¼ + ...
and 2 - 3/2 + 4/3 - 5/4 + ...; the first of these converges, the second oscillates.

9. The series u_1 + u_3 + u_5 + ..., u_2 + u_4 + u_6 + ... may each of them diverge, though the series u_1 - u_2 + u_3 - ... converges. A series such that the series formed by taking all its terms positively is convergent is said to be absolutely convergent; when this is not the case the series is said to be semi-convergent or conditionally convergent. A series of complex numbers in which u_n = p_n + iq_n, where p_n and q_n are real (i being sqrt(-1)), is said to be convergent when the series p_1 + p_2 + p_3 + ... and q_1 + q_2 + q_3 + ... are separately convergent, and if they converge to P and Q respectively the sum of the series is P + iQ. Such a series is said to be absolutely convergent when the series of moduli of u_n, i.e. the series of terms (p_n^2 + q_n^2)^(1/2), is convergent; this is sufficient but not necessary for the separate convergence of the p and q series. There is an important distinction between absolutely convergent and conditionally convergent series. In an absolutely convergent series the sum is the same whatever the order of the terms; this is not the case with a conditionally convergent series. The two series 1 - 1/2 + 1/3 - 1/4 + ... and 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ..., in which the terms are the same but in different orders, are convergent but not absolutely convergent. If we denote the sum of the first by S and the sum of the second by T, it can be shown that T = (3/2)S. G. F. B. Riemann and P. G. L. Dirichlet have shown that the terms of a semi-convergent series may be so arranged as to make the series converge to any assigned value, or even to diverge.

10. Tests for convergency of series of positive terms are obtained by comparing the series with some series whose convergency or divergency is readily established. If the series of positive terms u_1 + u_2 + u_3 + ..., v_1 + v_2 + v_3 + ...
are such that u_n/v_n tends to a finite limit distinct from zero, then they are convergent or divergent together; if u_{n+1}/u_n < v_{n+1}/v_n and the series of terms v_n is convergent, then the series of terms u_n is convergent; if u_{n+1}/u_n > v_{n+1}/v_n and the series of terms v_n is divergent, then the series of terms u_n is divergent. By comparison with the ordinary geometric progression we obtain the following tests. If u_n^(1/n) approaches a limit l as n is indefinitely increased, the series will converge if l is less than unity and will diverge if l is greater than unity (Cauchy's test); if u_{n+1}/u_n approaches a limit l as n is indefinitely increased, the series will converge if l is less than unity and diverge if l is greater than unity (d'Alembert's test). Nothing is settled when the limit l is unity, except in the case when the ratio remains greater than unity as it approaches unity; the series then diverges. It may be remarked that if u_{n+1}/u_n approaches a limit, then u_n^(1/n) approaches a limit, and the two limits are the same. The choice of the more useful test to apply to a particular series depends on its form.

In the case in which u_{n+1}/u_n approaches unity remaining constantly less than unity, J. L. Raabe and J. M. C. Duhamel have given the following further criterion. Write u_n/u_{n+1} = 1 + a_n, where a_n is positive and approaches zero as n is indefinitely increased. If na_n approaches a limit l, the series converges for l > 1 and diverges for l < 1. For l = 1 nothing is settled, except in the case where na_n remains constantly less than unity as it approaches it; in this case the series diverges.

If f(n) is positive and decreases as n increases, the series of terms f(n) is convergent or divergent with the series of terms a^n f(a^n), where a is any number equal to or greater than 2 (Cauchy's condensation test). By means of this theorem we can show that the series whose general terms are 1/n^a, 1/(n (ln)^a), 1/(n . ln . (l^2 n)^a), 1/(n . ln . l^2 n . (l^3 n)^a), ..., where ln denotes log n, l^2 n denotes log log n, l^3 n denotes log log log n, and so on, are convergent if a > 1 and divergent if a = or < 1. By comparison with these series, a sequence of criteria, known as the logarithmic criteria, has been established by De Morgan and J. L. Bertrand. A.
De Morgan's form is as follows: writing u_n = 1/f(n), a succession of functions p_0, p_1, p_2, ... is formed from f(x) by means of the successive logarithms l x, l^2 x, l^3 x, ... (l^k x denoting log log ... log x, the logarithm being taken k times). If the limit, when x is infinite, of the first of the functions p_0, p_1, p_2, ... whose limit is not unity is greater than unity the series is convergent; if less than unity it is divergent. In Bertrand's form we take the series of functions log(1/u_n)/ln, log(1/(n u_n))/l^2 n, log(1/(n . ln . u_n))/l^3 n, .... If the limit, when n is infinite, of the first of these functions whose limit is not unity is greater than unity the series is convergent; if less than unity it is divergent. Other forms of these criteria may be found in Chrystal's Algebra, vol. ii. Though sufficient to test such series as occur in ordinary mathematics, it is possible to construct series for which they entirely fail. It follows that in a convergent series of this class not only must we have Lt u_n = 0, but also Lt n u_n = 0, Lt n . ln . u_n = 0, etc. Abel has, however, shown that no function f(n) can exist such that the series is convergent or divergent according as Lt f(n) u_n is or is not zero.

11. Two or more absolutely convergent series may be added together, thus (u_1 + u_2 + ...) + (v_1 + v_2 + ...) = (u_1 + v_1) + (u_2 + v_2) + ...; that is, the resulting series is absolutely convergent and has for its sum the sum of the sums of the two series. Similarly two or more absolutely convergent series may be multiplied together, thus (u_1 + u_2 + ...)(v_1 + v_2 + ...) = u_1 v_1 + (u_1 v_2 + u_2 v_1) + (u_1 v_3 + u_2 v_2 + u_3 v_1) + ..., and the resulting series is absolutely convergent and its sum is the product of the sums of the two series. This was shown by Cauchy, who also showed that the series of terms w_n, where w_n = u_1 v_n + u_2 v_{n-1} + ... + u_n v_1, is not necessarily convergent when both series are semi-convergent. A striking instance is furnished by the series 1 - 1/sqrt(2) + 1/sqrt(3) - 1/sqrt(4) + ..., which is convergent, while its square may be shown to be divergent. F. K. L. Mertens has shown that a sufficient condition is that one of the two series should be absolutely convergent, and Abel has shown that if the series of terms w_n converges at all, it converges to the product of the sums of the two series.
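A modern numerical check (not part of the original article) of the two multiplication results just stated; the function name is ours, and the code uses 0-based indices where the article uses 1-based:

```python
import math

def cauchy_product_terms(a, b):
    """w_n = a_0 b_n + a_1 b_{n-1} + ... + a_n b_0."""
    N = min(len(a), len(b))
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(N)]

# Square of the semi-convergent series sum (-1)^n / sqrt(n+1): its
# product terms w_n do not tend to zero (|w_n| always exceeds 1), so
# the square cannot converge.
u = [(-1) ** n / math.sqrt(n + 1) for n in range(2000)]
w = cauchy_product_terms(u, u)
print(abs(w[1999]) > 1.0)

# Product of two absolutely convergent geometric series (x=1/2, y=1/3):
# the sum of the product series equals the product 2 * 3/2 = 3.
a = [0.5 ** n for n in range(200)]
b = [(1 / 3) ** n for n in range(200)]
prod_sum = sum(cauchy_product_terms(a, b))
print(abs(prod_sum - 3.0) < 1e-9)
```

Both checks print True: each term of |w_n| is at least 2/(n + 2) by the inequality of the means, so the n + 1 terms together exceed 1.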
But more properly the multiplication of two series gives rise to a double series, of which the general term is u_m v_n.

12. Before considering a double series we may consider the case of a series extending backwards and forwards to infinity, ... + u_{-2} + u_{-1} + u_0 + u_1 + u_2 + .... Such a series may be absolutely convergent, and the sum is then independent of the order of the terms and is equal to the sums of the two series u_0 + u_1 + u_2 + ... and u_{-1} + u_{-2} + ...; but, if not absolutely convergent, the expression has no definite meaning until it is explained in what manner the terms are intended to be grouped together; for instance, the expression may be used to denote the foregoing sum of two series, or to denote the series u_0 + (u_1 + u_{-1}) + (u_2 + u_{-2}) + ..., and the sum may have different values, or there may be no sum, accordingly. Thus, if the series be ... - 1/3 - 1/2 + 0 + 1/2 + 1/3 + ..., with the former meaning the two series 0 + 1/2 + 1/3 + ... and -1/2 - 1/3 - ... are each divergent, and there is no sum; but with the latter meaning the series is 0 + 0 + 0 + ..., which has a sum 0. So, if the series be taken to denote the limit of (u_0 + u_1 + ... + u_n) + (u_{-1} + ... + u_{-m}), where n and m are each of them ultimately infinite, there may be a sum depending on the ratio n : m, which sum acquires a determinate value only when this ratio is given. In the case of the series given above, if this ratio is k, the sum of the series is log k.

13. In a singly infinite series we have a general term u_n, where n is an integer, positive in the case of an ordinary series, and positive or negative in the case of a back-and-forwards series. Similarly for a doubly infinite series we have a general term u_{m,n}, where m, n are integers which may be each of them positive, and the form of the series is then

u_{0,0} + u_{0,1} + u_{0,2} + ...
u_{1,0} + u_{1,1} + u_{1,2} + ...
u_{2,0} + u_{2,1} + u_{2,2} + ...
...

or they may be each of them positive or negative. The latter is the more general supposition, and includes the former, since u_{m,n} may = 0 for m or n each or either of them negative.
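Two modern numerical checks (not part of the original article) of the order-dependence described above: the rearranged alternating series of section 9, whose sum becomes (3/2) log 2, and the back-and-forwards series of section 12, whose sum is log k when the ratio n : m is k; the function names and the choice k = 3 are ours:

```python
import math

# Rearranged alternating harmonic series: two positive terms for each
# negative one, 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ...
def rearranged_sum(blocks):
    total, odd, even = 0.0, 1, 2   # odd denominators give +, even give -
    for _ in range(blocks):
        total += 1.0 / odd + 1.0 / (odd + 2) - 1.0 / even
        odd += 4
        even += 2
    return total

print(abs(rearranged_sum(200000) - 1.5 * math.log(2)) < 1e-5)

# Back-and-forwards series ... - 1/3 - 1/2 + 0 + 1/2 + 1/3 + ...:
# n positive and m negative terms with n = k*m give log k in the limit.
def two_sided_sum(m, k):
    n = k * m
    return (sum(1.0 / j for j in range(2, n + 1))
            - sum(1.0 / j for j in range(2, m + 1)))

print(abs(two_sided_sum(100000, 3) - math.log(3)) < 1e-4)
```

Both checks print True; the second sum telescopes to 1/(m+1) + ... + 1/n, whose limit is log(n/m).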
To attach a definite meaning to the notion of a sum, we may regard m, n as the rectangular co-ordinates of a point in a plane; if m and n are each positive we attend only to the positive quadrant of the plane, but otherwise to the whole plane. We may imagine a boundary depending on a parameter T, which for T infinite is at every point thereof at an infinite distance from the origin; for instance, the boundary may be the circle x^2 + y^2 = T, or the four sides of a rectangle, x = +-aT, y = +-bT. Suppose the form is given and the value of T, and let the sum S_{m,n} be understood to denote the sum of the terms u_{m,n} within the boundary; then, if, as T increases without limit, S_{m,n} continually approaches a determinate limit (dependent, it may be, on the form of the boundary), for such form of boundary the series is said to be convergent, and the sum of the doubly infinite series is the limit of S_{m,n}. The condition of convergency may be otherwise stated: it must be possible to take T so large that the sum R_{m,n} for all terms u_{m,n} which correspond to points outside the boundary shall be as small as we please.

14. It is easy to see that, if each of the terms u_{m,n} is positive and the series is convergent for any particular form of boundary, it will be convergent for any other form of boundary, and the sum will be the same in each case. Suppose that in the first case the boundary is the curve f_1(x, y) = T. Draw any other boundary f(x, y) = T'. Wholly within this we can draw a curve f_1(x, y) = T_1 of the first family, and wholly outside it we can draw a second curve of the first family, f_1(x, y) = T_2. The sum of all the terms within f(x, y) = T' lies between the sum of all the terms within f_1(x, y) = T_1 and the sum of all the terms within f_1(x, y) = T_2. It therefore tends to the common limit to which these two last sums tend. The sum is therefore independent of the form of the boundary.
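A modern numerical check (not part of the original article) of the boundary-independence just proved for positive terms; the double series with u_{m,n} = 2^(-m) 3^(-n), the function names, and the two boundary shapes are our illustrative choices:

```python
# An absolutely convergent double series summed within two different
# boundaries (a square and a triangle) tends to the same sum, 2 * 3/2 = 3.
def sum_in_square(T):
    return sum(2.0 ** -m * 3.0 ** -n
               for m in range(T) for n in range(T))

def sum_in_triangle(T):
    return sum(2.0 ** -m * 3.0 ** -n
               for m in range(T) for n in range(T - m))

print(abs(sum_in_square(60) - 3.0) < 1e-12)
print(abs(sum_in_triangle(120) - 3.0) < 1e-12)
```

Both checks print True; the terms omitted outside either boundary are bounded by a geometric tail, so R_{m,n} can be made as small as we please.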
Such a series is said to be absolutely convergent, and similarly a doubly infinite series of positive and negative terms is absolutely convergent when the series formed by taking all its terms positively is convergent.

15. It is readily seen that when the series is not absolutely convergent the sum will depend on the form of the boundary. Consider the case in which m and n are always positive, and the boundary is the rectangle formed by x = m, y = n, and the axes. Let the sum within this rectangle be S_{m,n}. This may have a limit when we first make n infinite and then m; it may have a limit when we first make m infinite and then n; but the limits are not necessarily the same; or there may be no limit in either of these cases but a limit depending on the ratio of m to n, that is to say, on the shape of the rectangle. When the product of two series is arranged as a doubly infinite series, summing for the rectangular boundary x = aT, y = bT we obtain the product of the sums of the series. When we arrange the double series in the form u_1 v_1 + (u_1 v_2 + u_2 v_1) + ... we are summing over the triangle bounded by the axes and the straight line x + y = T, and the results are not necessarily the same if the terms are not all positive. For full particulars concerning multiple series the reader may consult E. Goursat, Cours d'analyse, vol. i.; G. Chrystal, Algebra, vol. ii.; or T. J. I'A. Bromwich, The Theory of Infinite Series.

16. In the series so far considered the terms are actual numbers, or, at least, if the terms are functions of a variable, we have considered the convergency only when that variable has an assigned value. In the case, however, of a series u_1(z) + u_2(z) + ..., where u_1(z), u_2(z), ...
are single-valued continuous functions of the general complex variable z, if the series converges for any value of z, in general it converges for all values of z whose representative points lie within a certain area called the "domain of convergence," and within this area it defines a function which we may call S(z). It might be supposed that S(z) was necessarily a continuous function of z, but this is not the case. G. G. Stokes (1847) and P. L. Seidel (1848) independently discovered that in the neighbourhood of a point of discontinuity the convergence is infinitely slow, and thence arises the notion of uniform and non-uniform convergence.

17. If for any value of z the series u_1(z) + u_2(z) + ... converges, it is possible to find an integer n such that |S(z) - S_n(z)| < e, |S(z) - S_{n+1}(z)| < e, ..., where e is any arbitrarily assigned positive quantity, however small. For a given e the least value of n will vary throughout any region from point to point of that region. It may, however, be possible to find an integer v which is a superior limit to all the values of n in that region, and we thus have, throughout this region, |S(z) - S_v(z)| < e, |S(z) - S_{v+1}(z)| < e, ..., where z is any point in the region and v is a finite integer depending only on e and not on z. The series is then said to converge uniformly throughout this region. If, as z approaches the value z_1, n increases as |z - z_1| diminishes and becomes indefinitely great as |z - z_1| becomes indefinitely small, the series is said to be non-uniformly convergent at the point z_1. A function represented by a series is continuous throughout any region in which the series is uniformly convergent; there cannot be discontinuity with uniform convergence; on the other hand there may be continuity and non-uniform convergence. If u_1(z) + u_2(z) + ... is uniformly convergent we shall have the integral of S(z) equal to the sum of the integrals of u_1(z), u_2(z), ... along any path in the region of uniform convergence; and we shall also have d/dz S(z) = d/dz u_1(z) + d/dz u_2(z) + ...
if the series d/dz u_1(z) + d/dz u_2(z) + ... is uniformly convergent. Uniform convergence is essentially different from absolute convergence; neither implies the other (see FUNCTION).

18. A series of the form a_0 + a_1 z + a_2 z^2 + ..., in which a_0, a_1, a_2, ... are independent of z, is called a power series. In the case of a power series there is a quantity R such that the series converges if |z| < R, and diverges if |z| > R. A circle described with the origin as centre and radius R is called the circle of convergence. A power series may or may not converge on the circle of convergence. The circle of convergence may be of infinite radius, as in the case of the series for sin z, viz. z - z^3/3! + z^5/5! - .... In this case the series converges over the whole of the z plane. Or its radius may be zero, as in the case of the series 1 + 1! z + 2! z^2 + ..., which converges nowhere except at the origin. The radius R may be found, usually but not always, from the consideration that a series converges absolutely if |u_{n+1}/u_n| < 1, and diverges if |u_{n+1}/u_n| > 1. A power series converges absolutely and uniformly at every point within its circle of convergence; it may be differentiated or integrated term by term; the function represented by a power series is continuous within its circle of convergence; and, if the series is convergent on the circle of convergence, the continuity extends on to the circle of convergence. Two power series cannot be equal throughout any region in which both are convergent without being identical.

19. Series of the type a_0 + a_1 cos z + a_2 cos 2z + ... + b_1 sin z + b_2 sin 2z + ..., where the coefficients a_0, a_1, a_2, ..., b_1, b_2, ... are independent of z, are called Fourier's series. They are of the greatest interest and importance, both from the point of view of analysis and also because of their applications to physical problems. For the consideration of these series and the expansion of arbitrary functions in series of this type see FUNCTION and FOURIER'S SERIES.
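Returning to the power series of section 18, a modern numerical illustration (not part of the original article); the function name is ours:

```python
import math

# Radius of convergence from the ratio test: R = lim |a_n / a_{n+1}|.
# For the series 1 + 1! z + 2! z^2 + ... the ratio 1/(n+1) -> 0, so R = 0;
# for z - z^3/3! + z^5/5! - ... the ratios grow without limit, so the
# series converges over the whole plane, as the partial sums confirm.
def sin_partial(z, terms):
    """Partial sum of z - z^3/3! + z^5/5! - ..."""
    return sum((-1) ** k * z ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

z = 2.0
print(abs(sin_partial(z, 20) - math.sin(z)) < 1e-12)
```

The check prints True: twenty terms already agree with sin 2 to machine precision, since the remainder after the term in z^39 is smaller than 2^41/41!.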
For the general problem of the development of functions in infinite series of various types see FUNCTION.

20. The modern theory of convergence dates from the publication in 1821 of Cauchy's Analyse algebrique. The great mathematicians of the 18th century used infinite series freely, with very little regard to their convergence or divergence, and with, occasionally, very extraordinary results. Series which are ultimately divergent may be used to calculate values of functions in special cases and to represent what are called "asymptotic expansions" of functions (see FUNCTION).

Infinite Products.

21. The product of an infinite number of factors formed in succession according to any given law is called an infinite product. The infinite product P_n = (1 + u_1)(1 + u_2) ... (1 + u_n) is said to be convergent when P_n tends to a definite finite limit, other than zero, as n is increased indefinitely. If Lt P_n is zero or infinite, or tends to different finite values according to the form of n, the product is said to be divergent. The condition for convergency may also be stated in the following form: (1) the value of P_n remains finite and different from zero however great n may become, and (2) Lt P_n and Lt P_{n+r} must be equal, when n is increased indefinitely and r is any positive integer. Since in particular Lt P_n = Lt P_{n+1}, we must have Lt u_{n+1} = 0. Hence after some fixed term u_1, u_2, ..., or their moduli in the case of complex quantities, must diminish continually down to zero. Since we may remove any finite number of terms in which |u_n| > 1 without affecting the convergence of the whole product, we may regard as the general type of a convergent product (1 + u_1)(1 + u_2) ... (1 + u_n) ..., where |u_1|, |u_2|, ... |u_n|, ... are all less than unity and decrease continually to zero. A convergent infinite product is said to be absolutely convergent when the order of its factors is immaterial. Where this is not the case it is said to be semi-convergent.

22.
The necessary and sufficient condition that the product (1 + u_1)(1 + u_2) ... should converge absolutely is that the series |u_1| + |u_2| + ... should be convergent. If u_1, u_2, ... are all of the same sign, then, if the series u_1 + u_2 + ... is divergent, the product is infinite if u_1, u_2, ... are all positive, and zero if they are all negative. If u_1 + u_2 + ... is a semi-convergent series, the product converges, but not absolutely, or diverges to the value zero, according as the series u_1^2 + u_2^2 + ... is convergent or divergent. These results may be deduced by considering, instead of the product, its logarithm, which is the series log(1 + u_1) + log(1 + u_2) + ... (see G. Chrystal's Algebra, vol. ii., or E. T. Whittaker's Modern Analysis, chap. ii.); they may also be proved by means of elementary theorems on inequalities (see E. W. Hobson's Plane Trigonometry, chap. xvii.).

23. If u_1, u_2, ... are functions of a variable z, a convergent infinite product (1 + u_1)(1 + u_2) ... defines a function of z. For such products there is a theory of uniform convergence analogous to that of infinite series. It is not in general possible to represent a function as an infinite product; the question has been dealt with by Weierstrass (see his Abhandlungen aus der Functionenlehre or A. R. Forsyth's Theory of Functions). One of the simplest cases of a function expressed as an infinite product is that of sin z/z, which is the value of the absolutely convergent infinite product (1 - z^2/pi^2)(1 - z^2/4pi^2)(1 - z^2/9pi^2) ....

24. K. T. W. Weierstrass has shown that a semi-convergent or divergent infinite product may be made absolutely convergent by the association with each factor of a suitable exponential factor, called sometimes a "convergency factor." The product (1 + z)(1 + z/2)(1 + z/3) ... is divergent; the product (1 + z)e^(-z) (1 + z/2)e^(-z/2) (1 + z/3)e^(-z/3) ... is absolutely convergent.
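A modern numerical check (not part of the original article) of the product for sin z/z; the function name and the chosen value z = 1.3 are ours:

```python
import math

def sinc_product(z, n_factors):
    """Partial product of (1 - z^2/(n^2 pi^2)), n = 1..n_factors."""
    p = 1.0
    for n in range(1, n_factors + 1):
        p *= 1.0 - z * z / (n * n * math.pi * math.pi)
    return p

z = 1.3
exact = math.sin(z) / z
print(abs(sinc_product(z, 100000) - exact) < 1e-4)
```

The check prints True; the product converges absolutely because the series of terms z^2/(n^2 pi^2) converges, though, as for the harmonic-like tail, the approach to the limit is only of order 1/n.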
The product for sin z/z is semi-convergent when written in the form (1 - z/pi)(1 + z/pi)(1 - z/2pi)(1 + z/2pi) ..., but absolutely convergent when written in the form (1 - z^2/pi^2)(1 - z^2/4pi^2) .... From this last form it can be shown that if

f(z) = (1 - z/pi)(1 - z/2pi) ... (1 - z/m.pi) . (1 + z/pi)(1 + z/2pi) ... (1 + z/n.pi),

then the limit of f(z), as m and n are both made infinite in any given ratio, is (n/m)^(z/pi) sin z / z. Another example of an absolutely convergent infinite product, whose convergency depends on the presence of an exponential factor, is the product z . prod (1 - z/W) e^(z/W + z^2/2W^2), where W denotes 2m.w_1 + 2n.w_2, w_1 and w_2 being any two quantities having a complex ratio, and the product is taken over all positive and negative integer and zero values of m and n, except simultaneous zeros. This product is the expression in factors of Weierstrass's elliptic function sigma(z).

AUTHORITIES. G. Chrystal, Algebra, vol. ii. (1900); E. Goursat, Cours d'analyse (translated by E. R. Hedrick), vol. i. (1902); J. Harkness and F. Morley, A Treatise on the Theory of Functions (1893) and Introduction to the Theory of Analytic Functions (1899); E. W. Hobson, Plane Trigonometry (1891), and Theory of Functions of a Real Variable; H. S. Carslaw, Fourier's Series; E. T. Whittaker, Modern Analysis (1902); J. Tannery, Introduction a la theorie des fonctions d'une variable; C. Jordan, Cours d'analyse de l'Ecole Polytechnique (2nd ed., 1896); E. Cesaro, Corso di analisi algebrica (1894); O. Stolz, Allgemeine Arithmetik (1886); O. Biermann, Elemente der hoheren Mathematik (1895); W. F. Osgood, Introduction to Infinite Series; T. J. I'A. Bromwich, Theory of Infinite Series (1908). Also the article by A. Pringsheim, "Irrationalzahlen und Konvergenz unendlicher Prozesse," in the Encyklopadie der mathematischen Wissenschaften, I A 3 (Leipzig). For the history of the subject see R. Reiff, Geschichte der unendlichen Reihen; G. H. Hardy, A Course of Pure Mathematics. (A. E. J.)

Note - this article incorporates content from Encyclopaedia Britannica, Eleventh Edition (1910-1911)
### Brice Rodrigue Mbombo: Extreme amenability of the isometry group of the Urysohn space Data: sexta-feira, 04 de abril de 2014, às 14h Sala: 243-A Palestrante: Brice Rodrigue Mbombo, IME-USP Título: Extreme amenability of the isometry group of the Urysohn space Resumo: The definition of an extremely amenable group is obtained from the classical definition of an amenable group by removing the two underlined words of the definition: A topological group $G$ is amenable if every continuous $\underline{\text{affine}}$ action of $G$ on a compact $\underline{\text{convex}}$ set $X$ admits a fixed point: for some $\xi\in X$ and all $g\in G$, one has $g\xi=\xi$. The existence of extremely amenable semigroups was proved by Granirer in $1967$. But it was at first unclear if extremely amenable topological groups existed at all. The first example of this kind was given by Herer and Christensen in $1975$. Some further examples known to date include: • $\mathcal{U}(\ell^{2})$, equipped with the strong operator topology (Gromov-Milman, $1984$); • $Aut\,(\mathbb{Q},\leq)$ with the topology of simple convergence (Pestov, $1998$); • $Iso(\mathbb{U})$, where $\mathbb{U}$ is the universal Urysohn space (Pestov, $2002$). In this talk, we will provide a short history (motivations) of the subject, recall the Katětov construction of the Urysohn space $\mathbb{U}$, and give all details of Pestov's proof of the extreme amenability of the group $Iso(\mathbb{U})$.
# Difference between revisions of "Literature on Carbon Nanotube Research" Title: CNT Literature Moderator: [Markus Landgraf] Created: March 13th, 2009 Modified: April 3rd, 2009 This is a collaborative article Discipline(s): Wiki, Engineering, Chemistry I have hijacked this page to write down my views on the literature on Carbon Nanotube (CNT) growth and processing, procedures that should give us the cable/ribbon we desire for the space elevator. ## Summary and Way Forward For now I have finished the review of current literature on carbon nanotube (CNT) research, at least the part that is concerned with the creation of strong fibers/yarns for application in the space elevator. The following nine papers are interesting to read and give quite a good overview of the state of the art. The state of the art as of early 2009 appears to be that it is impossible to match the impressive specific tensile strength of sub-millimetre-size single-wall nanotubes (SWNTs) in infinitely long wires or yarns. The current limited understanding of the CNT growth process and of the inter-fiber forces in a spun yarn does not allow us to build a sufficiently strong wire for the space elevator from CNTs. The maximum reported breaking strength is in the order of 10 GPa for a 1 mm long spun yarn. However, in the respective paper by Koziol et al. (2007) it is very clear that this strength is lost when going to longer yarns. Realistically speaking, we are still at around 3 GPa of breaking strength. On the positive side: there is progress in the understanding of the molecular dynamics of CNT growth in chemical vapour deposition (CVD) processes. In a recent paper actual observations of the growth of CNTs on a catalyst are presented. One idea to produce the strong wires we need could be to optimise the manufacturing of the catalytic growth substrate in the CVD process, so that the CNTs grow infinitely long.
This could be achieved by using the aluminium oxide buffer layer on top of the silicon base as it was done by G. Zhang et al. (2005). The catalyst should be applied on top of the buffer layer using lithographic techniques. The catalyst layout created by the lithographic process would have to be optimised in order to guarantee a long lifetime of the catalyst as well as good accessibility of the growth site (the interface between catalyst and underlying layers) to the feedstock compounds. Overall, this process will not be able to grow the space elevator wire in one piece, because the growth rate is about $100\ \mu m\ min.^{-1}$ or so, meaning that for 100,000 km to grow we would need 2 million years! Now the yarn spinning explained by M. Zhang et al. (2004) comes into play. If we are able to transfer the full tensile strength from one fiber to another, we would not need the full 100,000 km as a single fiber. I am quite optimistic that if we have 1 m-long CNT fibers, we can transfer the full tensile strength of the fiber to a neighbouring one. In this context "transfer the full tensile strength" means that in a pull test of a yarn spun of two fibers one of the fibers would break before they can be separated from each other. If the 1 m fibers are enough, we are in good shape, as they take one week to grow in the CVD chamber at $100\ \mu m\ min.^{-1}$. Whether or not this proposed approach works or whether there are show-stoppers remains to be seen. I propose to contact the CNT industry in order to find out. A new paper by Wang et al. (2009) discussed below describes a promising new development: CNTs 18.5 cm in length! If spun into a yarn, perhaps the van der Waals forces between those long fibers can be strong enough to transmit the impressive mechanical properties of the fibers to the macroscopic yarn, which could make our ribbon. ## Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes B. G.
Demczyk et al., Materials Science and Engineering, A334, 173-178, 2002 The paper by Demczyk et al. (2002) is the basic reference for the experimental determination of the tensile strength of individual multi-wall nanotube (MWNT) fibers. The experiments are performed with a microfabricated piezo-electric device. On this device CNTs in the length range of tens of microns are mounted. The tensile measurements are observed by transmission electron microscopy (TEM) and videotaped. Measurements of the tensile strength (tension vs. strain) were performed, as well as of the Young modulus and the bending stiffness. Breaking tension is reached for the MWNT at 150 GPa and between 3.5% and 5% of strain. During the measurements 'telescoping' extension of the MWNTs is observed, indicating that single-wall nanotubes (SWNTs) could be even stronger. However, 150 GPa remains the value for the tensile strength that was experimentally observed for carbon nanotubes. ## Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis Y.-L. Li, I. A. Kinloch, and A. H. Windle, Science, 304, 276-278, 2004 The work described in the paper by Y.-L. Li et al. is a follow-on of the famous paper by Zhu et al. (2002), which was cited extensively in Brad's book. This article goes a little more into the details of the process. If you feed a mixture of ethene (as the source of carbon), ferrocene, and thiophene (both as catalysts, I suppose) into a furnace (1050 to 1200 deg C) using hydrogen as carrier gas, you apparently get an 'aerogel' or 'elastic smoke' forming in the furnace cavity, which comprises the CNTs. Here's an interesting excerpt: Under these synthesis conditions, the nanotubes in the hot zone formed an aerogel, which appeared rather like "elastic smoke," because there was sufficient association between the nanotubes to give some degree of mechanical integrity.
The aerogel, viewed with a mirror placed at the bottom of the furnace, appeared very soon after the introduction of the precursors (Fig. 2). It was then stretched by the gas flow into the form of a sock, elongating downwards along the furnace axis. The sock did not attach to the furnace walls in the hot zone, which accordingly remained clean throughout the process.... The aerogel could be continuously drawn from the hot zone by winding it onto a rotating rod. In this way, the material was concentrated near the furnace axis and kept clear of the cooler furnace walls,... The elasticity of the aerogel is interpreted to come from the forces between the individual CNTs. The authors describe the procedure to extract the aerogel and start spinning a yarn from it as it is continuously drawn out of the furnace. In terms of mechanical properties of the produced yarns, the authors found a wide range, from 0.05 to 0.5 GPa/(g/ccm). That's still not enough for the SE, but the process appears to be interesting as it allows one to draw the yarn directly from the reaction chamber without mechanical contact and secondary processing, which could affect purity and alignment. Also, a discussion of the roles of the catalysts as well as of hydrogen and oxygen is given, which can be compared to the discussion in G. Zhang et al. (2005, see below). ## Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology M. Zhang, K. R. Atkinson, and R. H. Baughman, Science, 306, 1358-1361, 2004 In the research article by M. Zhang et al. (2004) the procedure of spinning long yarns from forests of MWNTs is described in detail. The maximum breaking strength achieved is only 0.46 GPa, based on 30 μm-long CNTs. The initial CNT forest is grown by chemical vapour deposition (CVD) on a catalytic substrate, as usual.
A very interesting formula for the tensile strength of a yarn relative to the tensile strength of the fibers (in our case the MWNTs) is given: $\frac{\sigma_{\rm yarn}}{\sigma_{\rm fiber}} = \cos^2 \alpha \left(1 - \frac{k}{\sin \alpha} \right)$ where $\alpha$ is the helix angle of the spun yarn, i.e. the fiber direction relative to the yarn axis. The constant $k=\sqrt{dQ/\mu}/(3L)$ is given by the fiber diameter d = 1 nm, the fiber migration length Q (the distance along the yarn over which a fiber shifts from the yarn surface to the deep interior and back again), the quantity $\mu=0.13$, which is the friction coefficient of CNTs (the friction coefficient is the ratio of the maximum along-fiber force divided by the lateral force pressing the fibers together), and the fiber length $L=30\ {\rm \mu m}$. A critical review of this formula is given here. In the paper interesting transmission electron microscope (TEM) pictures are shown, which give insight into how the yarn is assembled from the CNT forest. The authors describe other characteristics of the yarn, like how knots can be introduced and how the yarn performs when knitted, apparently in preparation for application in the textile industry. ## Ultra-high-yield growth of vertical single-walled carbon nanotubes - Hidden roles of hydrogen and oxygen Important aspects of the production of CNTs that are suitable for the SE are the efficiency of the growth and the purity (i.e. the lack of embedded amorphous carbon and of imperfections in the carbon bonds in the CNT walls). In their article G. Zhang et al. go into detail about the roles of oxygen and hydrogen during the chemical vapour deposition (CVD) growth of CNT forests from hydrocarbon sources on catalytic substrates. In earlier publications the role of oxygen was believed to be to remove amorphous carbon by oxidation into CO. The authors show, however, that, at least for this CNT growth technique, oxygen is important because it removes hydrogen from the reaction.
Hydrogen apparently has a very detrimental effect on the growth of CNTs; it even destroys existing CNTs, as shown in the paper. Since hydrogen radicals are released during the dissociation of the hydrocarbon source compound, it is important to have a removal mechanism. Oxygen provides this mechanism, because its chemical affinity towards hydrogen is greater than towards carbon. In summary, if you want to efficiently grow pure CNT forests on a catalyst substrate from a hydrocarbon CVD reaction, you need a few percent oxygen in the source gas mixture. An additional interesting piece of information in the paper is that you can design the places on the substrate on which CNTs grow by placing the catalyst only in certain areas of the substrate using lithography. In this way you can grow grids and ribbons. Figures are shown in the paper. In the paper no information is given on the reason why the CNT growth stops at some point. The growth rate is given as 1 micron per minute. Of course for us it would be interesting to eliminate the mechanism that stops the growth, so we could grow infinitely long CNTs. ## Sustained Growth of Ultralong Carbon Nanotube Arrays for Fiber Spinning Q. Li et al. have published a paper on a subject that is very close to our hearts: growing long CNTs. The longer fibers, which we hope have a couple of hundred GPa of tensile strength, can hopefully be spun into the yarns that will make our SE ribbon. In the paper the method of chemical vapour deposition (CVD) onto a catalyst-covered silicon substrate is described, which appears to be the leading method in the publications after 2004. This way a CNT "forest" is grown on top of the catalyst particles. The goal of the authors was to grow CNTs that are as long as possible. They found that the growth was terminated in earlier attempts by the iron catalyst particles interdiffusing with the substrate.
This can apparently be avoided by putting an aluminium oxide layer of 10 nm thickness between the catalyst and the substrate. With this method the CNTs grow to an impressive 4.7 mm! Also, at fiber lengths from 0.5 to 1.5 mm, the forests grown with this method can be spun into yarns. The growth rate with this method was initially $60\,{\rm \mu m\ min.^{-1}}$ and could be sustained for 90 min. This is very different from the $1\,{\rm \mu m\ min.^{-1}}$ reported by G. Zhang, et al. (2005), which shows that the growth is very dependent on the method and materials used. The growth was prolonged by the introduction of water vapour into the mixture, which achieved the 4.7 mm after 2 h of growth. By introducing periods of restricted carbon supply, the authors produced CNT forests with growth marks. This allowed them to determine that the forest grew from the base, which is in line with the in situ observations by S. Hofmann, et al. (2007). Overall the paper is somewhat short on the details of the process, but the results are very interesting. Perhaps the 5 mm CNTs are long enough to be spun into a usable yarn.

## In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation

The paper by S. Hofmann et al. (2007) is a key publication for understanding the microscopic processes of growing CNTs. The authors describe an experiment in which they observe in situ the growth of CNTs from chemical vapour deposition (CVD) onto metallic catalyst particles. The observations are made with time-lapse transmission electron microscopy (TEM) and with x-ray photo-electron spectroscopy. Since I am not an expert on spectroscopy, I stick to the images and movies produced by the time-lapse TEM. In the observations it can be seen that the catalysts are covered by a graphite sheet, which forms the initial cap of the CNT. The formation of that cap apparently deforms the catalyst particle, due to the cap's inherent shape, as it tries to form a minimum-energy configuration.
Since the graphite sheet does not extend under the catalyst particle, which is prevented by the catalyst sitting on the silicon substrate, the graphite sheet cannot close on itself. The deformation of the catalyst due to the forming cap leads to a restoring force exerted by the crystalline structure of the catalyst particle. As a consequence the carbon cap lifts off the catalyst particle. At the base of the catalyst particle more carbon atoms attach to the initial cap, starting the formation of the tube. The process continues to grow a CNT as long as there is enough carbon supply to the base of the catalyst particle and as long as the particle is not enclosed by the carbon compounds. During the growth of the CNT the catalyst particle "breathes" and so drives the growth process mechanically. Of course for the SE community the most interesting question in this paper is: can we grow CNTs that are long enough to be spun into a yarn that would hold the 100 GPa/g/ccm? In this regard the question is about the termination mechanism of the growth. The authors point to a very important player in CNT growth: the catalyst. If we can make a catalyst that does not break off from its substrate and does not wear off, the growth could be sustained as long as the catalyst/substrate interface is accessible to enough carbon from the feedstock. If you are interested, get the paper from our archive, including the supporting material, in which you'll find the movies of the CNTs growing.

## High-Performance Carbon Nanotube Fiber

K. Koziol et al., Science, 318, 1892, 2007.

The paper "High-Performance Carbon Nanotube Fiber" by K. Koziol et al. is a research paper on the production of macroscopic fibers out of an aerogel (a low-density, porous, solid material) of SWNT and MWNT that has been formed by chemical vapour deposition. They present an analysis of the mechanical performance figures (tensile strength and stiffness) of their samples.
The samples are fibers of 1, 2, and 20 mm length and have been extracted from the aerogel at high winding rates (20 metres per minute). Even higher winding rates appear desirable, but the authors were not able to achieve them: the limit of extraction speed from the aerogel was reached, and higher speeds led to breakage of the aerogel. They show in their results plot (Figure 3A) that the fibers typically split into two performance classes: low-performance fibers with a few GPa and high-performance fibers with around 6.5 GPa. It should be noted that all tensile strengths are given in the paper as GPa/SG, where SG is the specific gravity, i.e. the density of the material divided by the density of water. SG was around 1 for most samples discussed in the paper. The two performance classes have been interpreted by the authors as the typical result of the process of producing high-strength fibers: since fibers break at the weakest point, you will find some fibers in the sample which have no weak point and some which have one or more, provided the length of the fibers is of the order of the typical spacing between weak points. This can be seen from the fact that for the 20 mm fibers there are no high-performance fibers left, as the likelihood of encountering a weak point on a 20 mm long fiber is 20 times higher than on a 1 mm long fiber. In conclusion the paper is bad news for the SE, since the difficulty of producing a flawless composite with a length of 100,000 km and a tensile strength of better than 3 GPa using the proposed method is enormous. This comes back to the ribbon design proposed on the Wiki: using just cm-long fibers and interconnecting them with load-bearing structures (perhaps also CNT threads). Now we have shifted the problem from finding a strong enough material to finding a process that produces the required interwoven ribbon.
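The weak-link interpretation above can be made quantitative with a simple Poisson sketch (the flaw rate below is an invented illustrative number, not a value from the paper): if weak points occur independently at some average rate per millimetre, the chance that a fiber of length L contains none is exp(-rate * L), so a 20 mm fiber's survival chance is the 20th power of a 1 mm fiber's.

```python
import math

def survival_probability(length_mm, flaws_per_mm):
    """Chance that a fiber of the given length contains no weak point,
    assuming weak points are scattered at random (Poisson) along the
    fiber.  The flaw rate used below is made up for illustration."""
    return math.exp(-flaws_per_mm * length_mm)

p1 = survival_probability(1.0, 0.5)    # roughly 61% of 1 mm fibers flawless
p20 = survival_probability(20.0, 0.5)  # essentially none of the 20 mm fibers
```

This reproduces the paper's qualitative finding: at 20 mm the high-performance class disappears, because almost every fiber of that length samples at least one flaw.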
In my opinion the race to come up with a fiber better than Kevlar is still open.

## Tensile and Electrical Properties of Carbon Nanotube Yarns and Knitted Tubes in Pure or Composite Form

The paper by S. Hutton et al. is the latest on yarns spun out of CNTs. The core of the paper is concerned with the effect that different amounts of twist have on the tensile strength and on the electrical conductivity of the yarn. The bad news for us is that they arrive only at 1 GPa/g/ccm for the optimum tensile strength of the yarn. However, some insight is given into the spinning process and into the different methods of processing the CNTs. They use relatively short CNTs (0.2 to 0.3 mm) grown into a MWNT forest by chemical vapour deposition (CVD) onto a silicon substrate covered with a metal catalyst. The latter method appears to have become standard recently.

## Strong and Ductile Colossal Carbon Tubes with Walls of Rectangular Macropores

This paper does not actually fit here, because it is not about CNTs. It is about "colossal carbon tubes" (CCTs), which are rolled-up sandwiches of porous sheets of amorphous carbon. I came across the paper from the article on the elevator on the main wiki. The two obvious questions are: do CCTs actually exist, and what can CCTs do for the space elevator? Let me start by summarising the paper: in a process that is identical to the growth process of CNTs, except that no catalysts are used, CCTs apparently grow in the furnace. They are imaged by transmission electron microscopy (TEM) and measured: $50\,{\rm \mu m}$ in diameter and half a centimetre long. The paper states that it is still unclear how the CCTs form. It is speculated that a graphite sheet with embedded rectangular macropores grows in the chemical vapour deposition (CVD) process. The two sides of the sheet grow at different rates, which curls it up to form the tube.
Now comes the interesting part: while the CCTs are not much stronger than regular carbon composite wires (6.9 GPa breaking strength), they are much lighter. The density of the carrying structure (the wall of the CCTs) is given in the paper as 0.116 g/ccm. This might not be so important for applications like bullet-proof vests, but it is critical for the SE. We get a specific tensile strength of 59 GPa/g/ccm, enough for the SE! The open questions are:

• Do CCTs really exist?
• Can they be produced in quantity?
• Can they be produced as an infinitely long wire?
• If not, how much is the specific breaking strength reduced if a yarn is spun out of them?

## Pressure-induced Interlinking of Carbon Nanotubes

This offers a new way for nanotubes to be linked to each other laterally. Previously, they were held to each other by van der Waals forces, much like static cling. This research put various types of CNTs under lateral pressure of many GPa, which caused deformation and then bonding of the tubes to each other. This has the potential to be stronger than van der Waals forces, as well as a way to bundle CNTs other than weaving and tape.

## Fabrication of Ultralong and Electrically Uniform Single-Walled Carbon Nanotubes on Clean Substrates

This paper shows how to produce really long CNTs using CVD. The maximum length reached was 18.5 cm. This is two orders of magnitude longer than previous work. The trick appears to be to place the catalysts on a thin CNT film and not on an inert substrate like aluminium oxide or silicates. The paper discusses only electrical properties, with an application in microelectronics (FET devices). Mechanical properties that could be interesting for the SE are not discussed. It is, however, mentioned that the ultra-long CNTs are capable of bridging gaps between substrate plates. They do, however, break if the different substrate plates are moved.
This shows that, as expected, individual CNTs are not very strong due to their small diameter (a few nanometres). This means that, in order to use this new finding for strong ropes, ribbons, or yarns, an efficient method for post-processing must be defined. For this, treatment with acetone and subsequent spinning is quite conceivable, and I would be very interested in results from such experiments. We can expect that the strength of yarn spun from CNTs increases with the length of the individual fibers, as indicated by the formula given above. Can we get the required ~50 GPa/g/ccm out of those ultra-long CNTs? We'll see.
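The yarn-strength formula quoted at the top of this page can be evaluated numerically to get a feel for how little the twist costs in strength once the fibers are long. This is only a sketch: d, mu and L are the values quoted above, while the migration length Q is not given as a single number in the paper, so the value below is a placeholder assumption.

```python
import math

def yarn_to_fiber_strength(alpha_deg, d=1e-9, Q=1e-4, mu=0.13, L=30e-6):
    """sigma_yarn / sigma_fiber for a twist-spun yarn.

    alpha_deg -- helix angle in degrees
    d, Q, mu, L -- fiber diameter, migration length, friction
    coefficient and fiber length (SI units); Q = 0.1 mm is a guess.
    """
    alpha = math.radians(alpha_deg)
    k = math.sqrt(d * Q / mu) / (3.0 * L)
    return math.cos(alpha) ** 2 * (1.0 - k / math.sin(alpha))
```

With these numbers a 20 degree helix angle retains roughly 86% of the fiber strength; the loss is dominated by the cos^2 geometry term, since k is small once the fibers are tens of microns long.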
# Planar Subgraph Isomorphism Revisited

* Corresponding author

Abstract: The problem of Subgraph Isomorphism is defined as follows: Given a pattern H and a host graph G on n vertices, does G contain a subgraph that is isomorphic to H? Eppstein [SODA 95, J'GAA 99] gives the first linear time algorithm for subgraph isomorphism for a fixed-size pattern, say of order k, and an arbitrary planar host graph, improving upon the O(n^\sqrt{k})-time algorithm obtained by using the "color-coding" technique of Alon et al [J'ACM 95]. Eppstein's algorithm runs in time k^O(k) n, that is, the dependency on k is superexponential. We solve an open problem posed in Eppstein's paper and improve the running time to 2^O(k) n, that is, single exponential in k, while keeping the term in n linear. Next to deciding subgraph isomorphism, we can construct a solution and enumerate all solutions in the same asymptotic running time. We may list w subgraphs with an additive term O(w k) in the running time of our algorithm. We introduce the technique of "embedded dynamic programming" on a suitably structured graph decomposition, which exploits the topology of the underlying embeddings of the subgraph pattern (rather than of the host graph). To achieve our results, we give an upper bound on the number of partial solutions in each dynamic programming step as a function of pattern size; as it turns out, for the planar subgraph isomorphism problem, that function is single exponential in the number of vertices in the pattern.

Document type: Conference papers

Jean-Yves Marion and Thomas Schwentick. 27th International Symposium on Theoretical Aspects of Computer Science - STACS 2010, Mar 2010, Nancy, France.
pp.263-274, 2010, Proceedings of the 27th Annual Symposium on the Theoretical Aspects of Computer Science

https://hal.inria.fr/inria-00455215
Submitted on: Tuesday, February 9, 2010

### Identifiers

• HAL Id : inria-00455215, version 1

### Citation

Frédéric Dorn. Planar Subgraph Isomorphism Revisited. Jean-Yves Marion and Thomas Schwentick. 27th International Symposium on Theoretical Aspects of Computer Science - STACS 2010, Mar 2010, Nancy, France. pp.263-274, 2010, Proceedings of the 27th Annual Symposium on the Theoretical Aspects of Computer Science. 〈inria-00455215〉
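To make the problem statement in the abstract concrete, here is a brute-force checker (a sketch only: it enumerates vertex images in O(n^k) time, does not exploit planarity, and has nothing to do with the paper's 2^O(k) n embedded-dynamic-programming algorithm):

```python
from itertools import permutations

def has_subgraph_iso(host_edges, pattern_edges):
    """Does the host graph contain a subgraph isomorphic to the pattern?

    Graphs are given as lists of undirected edges.  Tries every
    injective mapping of pattern vertices into host vertices."""
    host_nodes = sorted({v for e in host_edges for v in e})
    pat_nodes = sorted({v for e in pattern_edges for v in e})
    host = {frozenset(e) for e in host_edges}
    for image in permutations(host_nodes, len(pat_nodes)):
        phi = dict(zip(pat_nodes, image))
        if all(frozenset((phi[a], phi[b])) in host
               for a, b in pattern_edges):
            return True
    return False
```

For example, a triangle is found as a subgraph of K4 but not of a path on four vertices.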
# Does anything like a "pdfXeLaTeX" exist?

I have generally been using `pdfLaTeX` to typeset my documents. I recently heard about `XeTeX`, which is supposedly the same thing, except that it has better support for things like Unicode and fonts. Is there a tool which operates as XeTeX does but which produces PDF directly, rather than going through intermediate stages?

- Can you tell Lualatex that a document uses Unicode as its character encoding, without using inputenc? – Charles Stewart Sep 24 '10 at 7:08
- Yes, you should not use inputenc (or anything else) if you're using UTF8 encoded input. For font encoding, use EU2 fontenc or the fontspec package. – topskip Sep 24 '10 at 7:18
- That was really not subtle, Patrick ;-) – Arthur Reutenauer Sep 24 '10 at 16:58
- @Charles: LuaTeX expects UTF-8 input, full stop. Well, that's not the whole story, as Patrick says, but that's how LuaTeX is supposed to be used. – Arthur Reutenauer Sep 24 '10 at 17:00

XeLaTeX outputs a PDF by default. Yes, it does use `xdvipdfmx` along the way, but why should that bother you? No DVI file is left behind.

- One problem is that routing the typesetting through `xdvipdfmx` precludes the use of pdfTeX-like enhancements such as those provided by the `microtype` package. `microtype` is currently adding support for LuaTeX. – Sharpie Sep 23 '10 at 18:49
- It's unclear to me that the intermediate level is the problem. Anyway, `microtype` is also adding (partial) support for XeTeX... See my question here. tex.stackexchange.com/questions/2986/… – frabjous Sep 23 '10 at 19:45
- `microtype` has added LuaTeX support long ago, and a preliminary version that works with XeTeX is available from xetex.tk – Philipp Sep 23 '10 at 19:54
- I stand corrected. Sorry for the noise. – Sharpie Sep 23 '10 at 20:51
- What you say is true for the TeX engine, but XeTeX uses an extended DVI format (en.wikipedia.org/wiki/XeTeX), so I expect them to have added PNG support to that format. – Blaisorblade Sep 30 '10 at 16:36
dof_to_vertex_map¶

dolfin.cpp.fem.dof_to_vertex_map()

Return a map between dof indices and vertex indices.

Only works for a FunctionSpace with dofs exclusively on vertices. For mixed FunctionSpaces the vertex index is offset by the number of dofs per vertex. In parallel the returned map maps both owned and unowned dofs (using local indices), thus covering all the vertices. Hence the returned map is an inversion of vertex_to_dof_map.

Parameters:
space (const FunctionSpace &) – The FunctionSpace for which the dof to vertex map should be computed

Returns:
std::vector<std::size_t> – The dof to vertex map
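The inversion relationship described above can be illustrated in plain Python (a sketch of the semantics only, not the DOLFIN implementation, and only for the simple case of exactly one dof per vertex):

```python
def invert_map(vertex_to_dof):
    """Invert a vertex -> dof map into a dof -> vertex map.

    Mirrors the documented relationship: dof_to_vertex_map is the
    inversion of vertex_to_dof_map when dofs sit exclusively on
    vertices (one dof per vertex)."""
    dof_to_vertex = [0] * len(vertex_to_dof)
    for vertex, dof in enumerate(vertex_to_dof):
        dof_to_vertex[dof] = vertex
    return dof_to_vertex
```

Applying the inversion twice returns the original map, which is the sense in which the two DOLFIN functions are inverses of each other.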
# Thread: Solving Complex Fractional Equations

1. ## Solving Complex Fractional Equations

$\frac{4m}{m-2}-\frac{13}{3m-6}=\frac{1}{3}$

I'm having problems finding a common denominator.

2. (4 m)/(m-2) - 13/(m-6) = 1/3
Multiply both sides by m-6:
(4 (m-6) m)/(m-2) - 13 = (m-6)/3
Expand out terms on both sides:
(4 m^2)/(m-2) - (24 m)/(m-2) - 13 = m/3 - 2
Write the left hand side as a single fraction:
(4 m^2 - 37 m + 26)/(m-2) = m/3 - 2
Multiply both sides by m-2:
4 m^2 - 37 m + 26 = 1/3 (m-2) m - 2 (m-2)
Expand out terms on the right hand side:
4 m^2 - 37 m + 26 = m^2/3 - (8 m)/3 + 4
Subtract (m^2/3 - (8 m)/3 + 4) from both sides:
(11 m^2)/3 - (103 m)/3 + 22 = 0
Solve the quadratic equation by completing the square. Divide both sides by 11/3:
m^2 - (103 m)/11 + 6 = 0
Subtract 6 from both sides:
m^2 - (103 m)/11 = -6
Add 10609/484 to both sides:
m^2 - (103 m)/11 + 10609/484 = 7705/484
Factor the left hand side:
(m - 103/22)^2 = 7705/484
Take the square root of both sides:
|m - 103/22| = sqrt(7705)/22
Eliminate the absolute value:
m - 103/22 = -sqrt(7705)/22 or m - 103/22 = sqrt(7705)/22
m = 1/22 (103 - sqrt(7705)) or m = 1/22 (103 + sqrt(7705))

3. i dont understand how u wrote it, and the denominator was $\frac{13}{3m-6}$

4. oops. that would have made it much easier haha. anyways m=1. do you need the steps?

5. yes, I'm trying to understand.

6. factor out a 3 in the denominator 3m-6, so you will have 3(m-2). So to get common denominators just multiply the numerator and denominator of 4m/(m-2) by 3. this will give you common denominators. can you solve for m from there?

7. Originally Posted by purplec16
$\frac{4m}{m-2}-\frac{13}{3m-6}=\frac{1}{3}$
I'm having problems finding a common denominator

note that $3m-6 = 3(m-2)$

$\frac{3 \cdot 4m}{3(m-2)}-\frac{13}{3(m-2)}=\frac{1(m-2)}{3(m-2)}$

denominators are all the same ... the numerator forms the equation

$12m - 13 = m-2$

solve for $m$, remembering that m cannot equal $2$.

8. i got the 12m-13 but i dont get how u got the m-2

9.
would be willing to help me with a few other questions at least by telling me if you got the same answers? 10. $\frac{4m}{m-2}-\frac{13}{3m-6}=\frac{1}{3}$ $\frac{3(4m)}{3(m-2)}-\frac{13}{3(m-2)}=\frac{1}{3}$ $\frac{12m-13}{3(m-2)}=\frac{1}{3}$ $\frac{12m-13}{m-2}=1$ $12m-13 = m-2$ $11m=11$ $m=1$ 11. Originally Posted by purplec16 i got the 12m-13 but i dont get how u got the m-2 look at the numerator on right side of the equation ... 12. Originally Posted by purplec16 would be willing to help me with a few other questions at least by telling me if you got the same answers? 13. $\frac{3}{2b+4}-\frac{4}{b-2}=\frac{3}{2b^2-8}$
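Going back to the original equation, the answer m = 1 from post 10 can be verified mechanically with exact rational arithmetic (a quick sketch, not part of the thread):

```python
from fractions import Fraction

def lhs(m):
    """Left-hand side of 4m/(m-2) - 13/(3m-6) = 1/3, evaluated exactly."""
    m = Fraction(m)
    return 4 * m / (m - 2) - 13 / (3 * m - 6)
```

Plugging in m = 1 gives 4/(-1) - 13/(-3) = -4 + 13/3 = 1/3, matching the right-hand side; m = 2 is excluded because both denominators vanish there.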
# First Order Differential • March 11th 2013, 01:49 PM quantoembryo First Order Differential $2r(s^2+1)dr+(r^4+1)ds=0$ This is the second question in the textbook and I am able to do every question after it, so perhaps there is something simple that I am not seeing? The answer has no trig functions in it. $(2r)/(r^4+1)dr+(s^2+1)ds=0$ The only way I can integrate the dr term is with trig functions, but according the the answers there is a simple way such that no trig functions are required. • March 11th 2013, 01:58 PM ILikeSerena Re: First Order Differential Quote: Originally Posted by quantoembryo $2r(s^2+1)dr+(r^4+1)ds=0$ This is the second question in the textbook and I am able to do every question after it, so perhaps there is something simple that I am not seeing? The answer has no trig functions in it. $(2r)/(r^4+1)dr+(s^2+1)ds=0$ The only way I can integrate the dr term is with trig functions, but according the the answers there is a simple way such that no trig functions are required. Hi quantoembryo! :) After separation of variables, your equation should be: $\frac{2r}{r^4+1}dr+\frac{1}{s^2+1}ds=0$ How would you integrate the ds term? Can you perhaps apply something similar to the dr term? • March 11th 2013, 02:17 PM Prove It Re: First Order Differential The first term can be integrated with a substitution if you write \displaystyle \begin{align*} \frac{2r}{r^4 + 1} = \frac{2r}{\left( r^2 \right)^2 + 1} \end{align*} and let \displaystyle \begin{align*} u = r^2 \implies du = 2r\,dr \end{align*}. • March 11th 2013, 02:51 PM quantoembryo Re: First Order Differential Ah. So simple. Thanks!
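The implicit solution arctan(r^2) + arctan(s) = C that falls out of the substitution u = r^2 can be sanity-checked numerically (a sketch, not part of the thread): integrate the ODE with a standard RK4 stepper and confirm the invariant stays constant along the trajectory.

```python
import math

def ds_dr(r, s):
    # ds/dr rearranged from 2r(s^2 + 1) dr + (r^4 + 1) ds = 0
    return -2.0 * r * (s ** 2 + 1.0) / (r ** 4 + 1.0)

def invariant(r, s):
    # arctan(r^2) + arctan(s): should be constant along solutions
    return math.atan(r ** 2) + math.atan(s)

def rk4_step(r, s, h):
    k1 = ds_dr(r, s)
    k2 = ds_dr(r + h / 2, s + h / 2 * k1)
    k3 = ds_dr(r + h / 2, s + h / 2 * k2)
    k4 = ds_dr(r + h, s + h * k3)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

r, s, h = 0.0, 0.5, 1e-3
c0 = invariant(r, s)
for _ in range(1000):  # integrate r from 0 to 1
    s = rk4_step(r, s, h)
    r += h
drift = abs(invariant(r, s) - c0)  # should be tiny
```

The drift is negligibly small, confirming that arctan(r^2) and arctan(s) were the right antiderivatives for the two separated terms.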
# In the arithmetic sequence t1, t2, t3, ..., tn, t1=23 and tn

Director, Joined: 25 Jun 2011

### Show Tags

20 Mar 2012, 08:00

In the arithmetic sequence t1, t2, t3, ..., tn, t1=23 and tn = tn-1 - 3 for each n > 1. What is the value of n when tn = -4?

A. -1
B. 7
C. 10
D. 14
E. 20

Intern, Joined: 04 Feb 2012

### Show Tags

20 Mar 2012, 08:42

well.....t2 = t1 - 3 = 23 - 3 = 20
t3 = t2 - 3 = 20 - 3 = 17
So every time n increases by one, tn decreases by 3.
Since t1=23 we have 23-3=20, 20-3=17, 17-3=14, 14-3=11, 11-3=8, 8-3=5, 5-3=2, 2-3=-1, -1-3=-4! VOILA. So n=10: subtracting 3 from 23 ten times reaches -4.

Math Expert, Joined: 02 Sep 2009

Re: In the arithmetic sequence t1, t2, t3, ..., tn, t1=23 and tn [#permalink]

### Show Tags

20 Mar 2012, 12:44

$$t_n=t_{n-1}-3$$ means that each term is 3 less than the previous term. Now, the difference between $$t_1=23$$ and $$t_n=-4$$ is $$23-(-4)=27$$, so we moved $$\frac{27}{3}=9$$ terms from $$t_1$$, that is from $$t_1$$ to $$t_{10}$$.

Senior Manager, Joined: 23 Oct 2010

Re: In the arithmetic sequence t1, t2, t3, ..., tn, t1=23 and tn [#permalink]

### Show Tags

12 Apr 2012, 09:24

tn = tn-1 - 3 means that d = -3
tn = t1 + d(n-1)
-4 = 23 - 3(n-1)
-30 = -3n
n = 10

SVP, Joined: 27 Dec 2012

Re: In the arithmetic sequence t1, t2, t3, ..., tn, t1=23 and tn [#permalink]

### Show Tags

22 Apr 2014, 01:15

23... 20.... 17... 14... 11.... 8... 5... 2.... -1....
-4

-4 is the 10th term.

Senior Manager, Joined: 07 Apr 2014

Re: In the arithmetic sequence t1, t2, t3, ..., tn, t1=23 and tn [#permalink]

### Show Tags

03 Feb 2015, 00:44

I also did the same as Paresh, but like this: We know that t1 = 23. So, using the given formula we have: t1 = (t1-1) - 3 = 23, so t0 - 3 = 23, t0 = 26. The same way we find that t2 = 20. It seems that the sequence goes like this:

t0 = 26
t1 = 23
t2 = 20
t3 = 17
t4 = 14
t5 = 11
t6 = 8
t7 = 5
t8 = 2
t9 = -1
t10 = -4

So, our ANS is C. However, I did do it wrong at first because, by the way it was written, I thought that the whole formula equaled 23 (t1, t2, t3, ..., tn, t1=23). I didn't see any relationship as to why this is 23 (is it an addition, is it a multiplication?). So, then I thought that the sequence makes a circle, going from t1 to t1 again, and that there are 23 numbers in the sequence, and that we needed to find tn. Hopefully the GMAT would show more clearly that t1=23 refers to the first term and not to the whole sequence.

Intern, Joined: 22 Jul 2016

Re: In the arithmetic sequence t1, t2, t3, ..., tn, t1=23 and tn [#permalink]

### Show Tags

08 Jan 2017, 23:59

t1 = 23
t2 = t1 - 3 = 20
t3 = t2 - 3 = 17
and so on... Here is where we need to use the formula for an AP, as we know the common difference is -3:
tn = t1 + d(n-1)
given, tn = -4
-4 = 23 + (-3)(n-1) >> n = 10
Ans: C

Intern, Joined: 07 Sep 2016

Re: In the arithmetic sequence t1, t2, t3, ..., tn, t1=23 and tn [#permalink]

### Show Tags

09 Jan 2017, 02:17

In the arithmetic sequence t1, t2, t3, ..., tn, t1=23 and tn = tn-1 - 3 for each n > 1.
What is the value of n when tn = -4?

A. -1
B. 7
C. 10
D. 14
E. 20

It's a normal arithmetic progression question, whose 1st term is 23 and common difference is -3:
Tn = a + (n-1)d
-4 = 23 + (n-1)(-3)
n = 10

Target Test Prep Representative, Joined: 04 Mar 2011

Re: In the arithmetic sequence t1, t2, t3, ..., tn, t1=23 and tn [#permalink]

### Show Tags

11 Jan 2017, 07:09

In the given sequence, since we are given the first term, we can use that value to find the second term; once we know the second term, we can use that value to find the third term, and so on.

t_1 = 23
t_2 = t_1 – 3 = 23 – 3 = 20
t_3 = t_2 – 3 = 20 – 3 = 17
t_4 = t_3 – 3 = 17 – 3 = 14
t_5 = t_4 – 3 = 14 – 3 = 11
t_6 = t_5 – 3 = 11 – 3 = 8
t_7 = t_6 – 3 = 8 – 3 = 5
t_8 = t_7 – 3 = 5 – 3 = 2
t_9 = t_8 – 3 = 2 – 3 = -1
t_10 = t_9 – 3 = -1 – 3 = -4

So n = 10.

Alternative Solution:

Notice that starting from the second term, each term is 3 less than the previous term, which makes the sequence an arithmetic sequence. In an arithmetic sequence the nth term, a_n, can be found using the formula a_n = a_1 + d(n – 1), in which a_1 is the first term and d is the common difference. Since we are given t_n, we can modify the formula to t_n = t_1 + d(n – 1), in which t_1 = 23 and d = -3.
So we have:

t_n = t_1 + d(n – 1)
-4 = 23 + (-3)(n – 1)
-27 = -3(n – 1)
9 = n – 1
10 = n

Senior Manager, Status: Professional GMAT Tutor, Joined: 10 Jul 2015

Re: In the arithmetic sequence t1, t2, t3, ..., tn, t1=23 and tn [#permalink]

### Show Tags

08 Jun 2017, 15:44

Attached is a visual that should help.

Attachments: Screen Shot 2017-06-08 at 4.33.09 PM.png [ 108.8 KiB | Viewed 15830 times ]

Intern, Joined: 14 May 2016

Re: In the arithmetic sequence t1, t2, t3, ..., tn, t1=23 and tn [#permalink]

### Show Tags

05 Jul 2017, 18:18

Here is how I did it: t1 = 23 and d = -3; therefore for every term you need to subtract 3. Let's begin:
t2 ----> 23 - 3 = 20
t3 ----> 20 - 3 = 17
t4 ----> 17 - 3 = 14
... and so on, until tn = -4 is reached at n = 10.

Manager, Joined: 03 Jul 2017

Re: In the arithmetic sequence t1, t2, t3, ..., tn, t1=23 and tn [#permalink]

### Show Tags

01 Dec 2017, 19:53

I have one doubt in the question. Why can't we use the AP formula for the nth term to calculate the number of terms? When I did, I got n to be 8. Why is that method incorrect?
Math Expert, Joined: 02 Sep 2009

Re: In the arithmetic sequence t1, t2, t3, ..., tn, t1=23 and tn [#permalink]

### Show Tags

01 Dec 2017, 22:59

longhaul123 wrote: I have one doubt in the question. Why can't we use the AP formula for the nth term to calculate the number of terms? When I did, I got n to be 8. Why is that method incorrect?

You have to show your work.
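The counting in the solutions above is easy to check mechanically (a quick sketch, not part of the thread): walk the recurrence directly and compare with the closed form t_n = t_1 + d(n - 1).

```python
def index_of_term(t1, d, target):
    """Walk t_n = t_(n-1) + d from t_1 and return the index n
    at which the target value is reached."""
    n, t = 1, t1
    while t != target:
        t += d
        n += 1
    return n
```

Both the direct walk from 23 down by 3s and the closed form 23 + (-3)(n - 1) = -4 give n = 10, answer C.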
# GDP Revised Down Slightly to 2.4% for Q1 2013

Q1 2013 real GDP was revised downward slightly to 2.4% from 2.5%. This is still an improvement from the fourth quarter's 0.4% GDP, which showed a stagnant economy. Consumer spending was the biggest improvement, while increased imports posed a major economic drag. Government spending declines continue to be an economic damper. The revision shows more consumer spending than originally reported, less investment, fewer imports, fewer exports, and government expenditures that were less than previously estimated. Generally speaking, a 2.4% GDP implies moderate economic growth, yet overall real demand in the economy is still fairly weak.

As a reminder, GDP is made up of:

$Y=C+I+G+{\left(X-M\right)}$

where Y=GDP, C=Consumption, I=Investment, G=Government Spending, (X-M)=Net Exports, X=Exports, M=Imports.

The below table shows the percentage point breakdown of the major GDP components from Q4 to Q1. GDP percentage point component contributions are calculated individually.

Comparison of Q1 2013 and Q4 2012 GDP Components

| Component | Q1 2013 | Q4 2012 | Change |
|-----------|---------|---------|--------|
| GDP | +2.38 | +0.37 | +2.01 |
| C | +2.40 | +1.28 | +1.12 |
| I | +1.16 | +0.17 | +0.99 |
| G | –0.97 | –1.41 | +0.44 |
| X | +0.11 | –0.40 | +0.51 |
| M | –0.32 | +0.73 | –1.05 |

Comparison of Q1 2013 Component Revisions

| Component | Q1 2013 | Revised | Revision |
|-----------|---------|---------|----------|
| GDP | +2.50 | +2.38 | –0.12 |
| C | +2.24 | +2.40 | +0.16 |
| I | +1.56 | +1.16 | –0.40 |
| G | –0.80 | –0.97 | –0.17 |
| X | +0.40 | +0.11 | –0.29 |
| M | –0.90 | –0.32 | +0.58 |

Consumer spending, C in our GDP equation, shows an increase from Q4. In terms of percentage changes, real consumer spending increased 3.4% in Q1 in comparison to a 1.8% increase in Q4. Services drove consumer spending with a 1.42 percentage point contribution to household consumption expenditures. Goods contributed 0.98 percentage points to personal consumption expenditures. Below is a percentage change graph in real consumer spending going back to 2000.
Graphed below is PCE with the quarterly annualized percentage change breakdown of durable goods (red or bright red), nondurable goods (blue) versus services (maroon).

Imports and exports, M & X, together subtracted 0.21 percentage points from Q1 GDP as imports increased from Q4. Import services, which include offshore outsourcing, were -0.16 percentage points of the -0.32 percentage point GDP reduction caused by imports. The below graph shows real imports vs. exports in billions. The breakdown of the GDP percentage change into point contributions gives a clear picture of how much the trade deficit stunts U.S. economic growth.

Government spending, G, was –0.97 percentage points of Q1's GDP. For the second quarter in a row there were national defense spending declines, with another –12.1% drop for Q1, a –0.63 percentage point contribution. State and local governments subtracted –0.29 percentage points from Q1 GDP. Below is the quarterly percentage change of government spending, adjusted for prices and annualized.

Investment, I, is made up of fixed investment and changes to private inventories. The change in private inventories alone gave a +0.63 percentage point contribution to Q1, with farm inventories contributing +0.84 percentage points of the change in private inventories. Business inventories were a negative -0.21 percentage point GDP contribution. Farm inventories saved the Q1 GDP day. Below are the change in real private inventories and, in the next graph, the change in that value from the previous quarter.

Fixed investment is residential and nonresidential and is a bright spot in the Q1 GDP report, as it was in Q4. Overall, fixed investment contributed +0.53 percentage points to GDP. Equipment and software investments contributed +0.34 percentage points to Q1 GDP. Part of fixed investment is residential fixed investment. Residential contributed +0.30 percentage points to Q1 GDP.
One can see the housing bubble collapse in the below graph, and also the meteoric recovery. Yet overall volume is still below the bubble years, in spite of the recent dramatic increase in housing prices.

Motor vehicle output as a whole was +0.28 percentage points of Q1 real GDP. Computer final sales added 0.02 percentage points to Q1 GDP. These categories are different from personal consumption, or C sub-components, such as autos & parts. They are separate overall indices showing how much these products added to GDP overall. Motor vehicles and computers are bought as investment, as fleets, in bulk, by the government, as well as through consumer spending, and so on.

Services overall added +0.84 percentage points to GDP, while goods overall added +1.60 percentage points. Structures overall, which is building activity, both residential and commercial, and the economic activity it generates, subtracted -0.06 percentage points from Q1 real GDP.

The price index for gross domestic purchases was revised upward by 0.1 percentage points to 1.2% for Q1. In Q4 the price index was 1.6%. This means there was less inflation in comparison to last quarter. Since the price index is used to remove inflation from GDP to obtain real growth, less inflation means fewer price increases to eat away at economic growth. The core price index, or prices excluding food and energy products, was 1.4%, an upward revision of 0.2 percentage points.

Real final sales of domestic product is GDP minus the change in private inventories. This gives a better feel for real demand in the economy, because while private inventories represent economic activity, the stuff is sitting on the shelf; it's not demanded or sold. Real final sales increased 1.8% for Q1, which reflects weak demand. Real final sales was revised upward 0.3 percentage points on lower estimates of the change in private inventories. Q4 real final sales increased 1.9%.

Gross domestic purchases are what U.S.
consumers bought, no matter whether it was made in Ohio or China. It's defined as GDP plus imports and minus exports, or using our above equation:

$P=Y-X+M$

where P = Real gross domestic purchases. Real gross domestic purchases increased only 2.5% in Q1, in comparison to no change in Q4. Exports are subtracted off because they are outta here, you can't buy 'em, but imports, as we all know all too well, are available for purchase at your local Walmart. When gross domestic purchases exceed GDP, that's actually bad news; it means America is buying imports instead of goods made domestically.

GNP - Gross National Product: Real gross national product, GNP, is the goods and services produced by the labor and property supplied by U.S. residents.

GNP = GDP + (Income receipts from the rest of the world) - (Income payments to the rest of the world)

Real GNP increased 1.5% in Q1, whereas in Q4 GNP increased 0.9%. GNP includes, while GDP excludes, net income from the rest of the world. GNP increases beyond GDP if Americans made out like bandits from foreign investments more than foreigners cashed in on investments within the U.S. borders. The fact that GNP is less than GDP implies a lot of foreigners are making a lot of money while operating and investing within the U.S. borders.

GDI - Gross Domestic Income: Gross Domestic Income is all income from within the borders of a nation and should normally equal GDP. GDI is wages, profits & taxes minus subsidies. Real GDI increased 2.5% in Q1, in comparison to a 5.5% increase in Q4.

Nominal GDP: In current dollars, not adjusted for prices, U.S. output was \$16,004.5 billion, a 3.6% increase. In Q4 nominal GDP increased 1.3%.

Below are the percentage changes of Q1 2013 GDP components, from Q4. There is a difference between percentage change and percentage point change. Point change adds up to the total GDP percentage change and is reported above.
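The two accounting identities in this section can be written as tiny functions. The dollar levels below are hypothetical round numbers for illustration only, not BEA data:

```python
def gross_domestic_purchases(gdp, exports, imports):
    """P = Y - X + M: everything U.S. buyers purchased, regardless of origin."""
    return gdp - exports + imports

def gnp(gdp, income_receipts_from_abroad, income_payments_abroad):
    """GNP = GDP + income receipts from the rest of the world
    - income payments to the rest of the world."""
    return gdp + income_receipts_from_abroad - income_payments_abroad

# Hypothetical levels in billions (illustrative only, not BEA figures):
p = gross_domestic_purchases(gdp=16000.0, exports=2200.0, imports=2700.0)
print(p)  # → 16500.0, purchases exceed GDP because imports exceed exports
```

When imports exceed exports, purchases exceed GDP, which is the "bad news" case described above.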
The below are the individual quarterly percentage changes, against themselves, of each component which makes up overall GDP. Additionally, these changes are seasonally adjusted and reported by the BEA in annualized format.

Q1 2013 Component Percentage Change (annualized)

| Component | Percentage Change from Q4 |
|-----------|---------------------------|
| GDP       | +2.4%                     |
| C         | +3.4%                     |
| I         | +9.0%                     |
| G         | –4.9%                     |
| X         | +0.8%                     |
| M         | +1.9%                     |

Here is our overview for the Q1 GDP advance estimate. Other reports on gross domestic product can be found here.

## Forum Categories:

### farm inventories > 1/3rd of GDP

you figure that big jump in farm inventories was just a seasonal adjustment aberration due to drought in earlier quarters? you know they weren't filling their silos with corn harvested in january...

rjs

### farm inventories include animals

They killed many more animals earlier as feed prices soared during the drought. Side note, the entire concept of raising animals to be eaten is disgusting, especially how they stuff animals into way too small areas, cages, and then the way they kill them is often inhumane. I should be a vegetarian.

Yes, anyone notice beyond this site how farm inventory fluctuations boosting up GDP are completely ignored by the financial press and even many economists?
# Snub dodecahedron

| Snub dodecahedron | |
|---|---|
| Type | Archimedean solid, Uniform polyhedron |
| Elements | F = 92, E = 150, V = 60 (χ = 2) |
| Faces by sides | (20+60){3} + 12{5} |
| Conway notation | sD |
| Schläfli symbols | sr{5,3} or ${\displaystyle s{\begin{Bmatrix}5\\3\end{Bmatrix}}}$, ht0,1,2{5,3} |
| Wythoff symbol | \| 2 3 5 |
| Symmetry group | I, 1/2H3, [5,3]+, (532), order 60 |
| Rotation group | I, [5,3]+, (532), order 60 |
| Dihedral angle | 3-3: 164°10′31″ (164.18°); 3-5: 152°55′53″ (152.93°) |
| References | U29, C32, W18 |
| Properties | Semiregular, convex, chiral |
| Vertex figure | 3.3.3.3.5 |
| Dual polyhedron | Pentagonal hexecontahedron |

In geometry, the snub dodecahedron, or snub icosidodecahedron, is an Archimedean solid, one of thirteen convex isogonal nonprismatic solids constructed from two or more types of regular polygon faces. The snub dodecahedron has 92 faces (the most of the 13 Archimedean solids): 12 are pentagons and the other 80 are equilateral triangles. It also has 150 edges and 60 vertices. It has two distinct forms, which are mirror images (or "enantiomorphs") of each other. The union of both forms is a compound of two snub dodecahedra, and the convex hull of both forms is a truncated icosidodecahedron.

Kepler first named it in Latin as dodecahedron simum in 1619 in his Harmonices Mundi. H. S. M. Coxeter, noting it could be derived equally from either the dodecahedron or the icosahedron, called it snub icosidodecahedron, with a vertical extended Schläfli symbol ${\displaystyle s\scriptstyle {\begin{Bmatrix}5\\3\end{Bmatrix}}}$ and flat Schläfli symbol sr{5,3}.

## Cartesian coordinates

With the 15 absolute values given, the coordinates of a snub dodecahedron with edge length 1 are the even permutations of

• (c2, c1, c14), (c0, c8, c12) and (c7, c6, c11) with an even number of plus signs
• (c3, c4, c13) and (c9, c5, c10) with an odd number of plus signs.
[1]

## Surface area and volume

For a snub dodecahedron whose edge length is 1, the surface area is

${\displaystyle A=20{\sqrt {3}}+3{\sqrt {25+10{\sqrt {5}}}}\approx 55.286\,744\,958\,445\,15}$

and the volume is

${\displaystyle V={\frac {12\xi ^{2}(3\varphi +1)-\xi (36\varphi +7)-(53\varphi +6)}{6{\sqrt {3-\xi ^{2}}}^{3}}}\approx 37.616\,649\,962\,733\,36}$

The circumradius is

${\displaystyle R={\frac {1}{2}}{\sqrt {\frac {2-x}{1-x}}}=2.15584\dots }$

where ${\displaystyle x}$ is the appropriate root of ${\displaystyle x^{3}+2x^{2}={\Big (}{\tfrac {1\pm {\sqrt {5}}}{2}}{\Big )}^{2}}$ and ${\displaystyle \varphi }$ is the golden ratio.

The four positive real roots of the sextic in ${\displaystyle R^{2}}$

${\displaystyle 4096R^{12}-27648R^{10}+47104R^{8}-35776R^{6}+13872R^{4}-2696R^{2}+209=0}$

are the circumradii of the snub dodecahedron (U29), great snub icosidodecahedron (U57), great inverted snub icosidodecahedron (U69), and great retrosnub icosidodecahedron (U74).

The snub dodecahedron has the highest sphericity (about 0.982) of all Archimedean solids.

## Orthogonal projections

The snub dodecahedron has no point symmetry, so the vertex in the front does not correspond to an opposite vertex in the back. The snub dodecahedron has two especially symmetric orthogonal projections, centered on two types of faces, triangles and pentagons, corresponding to the A2 and H2 Coxeter planes. The projections centered on a triangle face, a pentagon face, and an edge have projective symmetries [3], [5]+, and [2], respectively.

## Geometric relations

Dodecahedron, rhombicosidodecahedron and snub dodecahedron (animated expansion and twisting)

The snub dodecahedron can be generated by taking the twelve pentagonal faces of the dodecahedron and pulling them outward so they no longer touch. At a proper distance this can create the rhombicosidodecahedron by filling in square faces between the divided edges and triangle faces between the divided vertices.
But for the snub form, pull the pentagonal faces out slightly less, only add the triangle faces and leave the other gaps empty (the other gaps are rectangles at this point). Then apply an equal rotation to the centers of the pentagons and triangles, continuing the rotation until the gaps can be filled by two equilateral triangles. (The fact that the proper amount to pull the faces out is less in the case of the snub dodecahedron can be seen in either of two ways: the circumradius of the snub dodecahedron is smaller than that of the icosidodecahedron; or, the edge length of the equilateral triangles formed by the divided vertices increases when the pentagonal faces are rotated.)

Uniform alternation of a truncated icosidodecahedron

The snub dodecahedron can also be derived from the truncated icosidodecahedron by the process of alternation. Sixty of the vertices of the truncated icosidodecahedron form a polyhedron topologically equivalent to one snub dodecahedron; the remaining sixty form its mirror-image. The resulting polyhedron is vertex-transitive but not uniform.

## Related polyhedra and tilings

This semiregular polyhedron is a member of a sequence of snubbed polyhedra and tilings with vertex figure (3.3.3.3.n) and Coxeter–Dynkin diagram . These figures and their duals have (n32) rotational symmetry, being in the Euclidean plane for n = 6, and in the hyperbolic plane for any higher n. The series can be considered to begin with n = 2, with one set of faces degenerated into digons.

## Snub dodecahedral graph

| Snub dodecahedral graph | |
|---|---|
| Vertices | 60 |
| Edges | 150 |
| Automorphisms | 60 |
| Properties | Hamiltonian, regular |

(Schlegel diagram with 5-fold symmetry)

In the mathematical field of graph theory, a snub dodecahedral graph is the graph of vertices and edges of the snub dodecahedron, one of the Archimedean solids. It has 60 vertices and 150 edges, and is an Archimedean graph.[2]
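The counts and the closed-form surface area quoted in this article are easy to sanity-check numerically. A sketch, using only the edge-length-1 area formula given above:

```python
import math

# Face, edge, and vertex counts from the article.
F, E, V = 92, 150, 60
print(V - E + F)  # → 2, the Euler characteristic of a convex polyhedron

# Faces by sides: 80 triangles + 12 pentagons = 92 faces.
print(80 + 12)  # → 92

# Surface area for edge length 1: A = 20*sqrt(3) + 3*sqrt(25 + 10*sqrt(5))
area = 20 * math.sqrt(3) + 3 * math.sqrt(25 + 10 * math.sqrt(5))
print(round(area, 6))  # → 55.286745
```

The computed area agrees with the quoted value 55.28674495844515 to floating-point precision.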
# Autocorrelation Function Question

1. May 3, 2010

### frenzal_dude

Hi, we're learning about the autocorrelation function at uni, and I know it's meant to show similarities between a function and a delayed version of that function. But how does the autocorrelation show these similarities?

For example, if $$x(t)=A\,\mathrm{sinc}(2Wt)$$ then $$R_x(\tau )=\frac{A^2}{2W}\mathrm{sinc}(2W\tau)$$

How can you look at the resulting function and see what the similarities are?

Thanks for the help guys.
David

2. May 3, 2010

### marcusl

The cross-correlation doesn't tell "similarities" so much as how correlated two functions are as they slide past each other. Autocorrelation is just cross-correlation where the functions are one and the same. To see why the autocorrelation of a sinc is another sinc, draw the function onto two slips of paper and visually perform the cross-correlation (multiply point by point and integrate) as you slide them past each other. At zero lag (offset) they line up and the correlation is one. As they slide apart, the amplitude falls, then goes negative when the big peak lines up with the first negative lobe. At large lag there's not much correlation (where one is big the other is small).
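The slide-multiply-integrate picture in the reply above can be checked numerically. A sketch with NumPy, taking A = 1 and W = 1 so that theory predicts a peak value R(0) = A²/(2W) = 1/2 (the sampling grid and truncation window are arbitrary choices):

```python
import numpy as np

# Sample x(t) = A*sinc(2*W*t) on a fine grid (A = 1, W = 1 here).
A, W = 1.0, 1.0
t = np.linspace(-10, 10, 2001)
dt = t[1] - t[0]
x = A * np.sinc(2 * W * t)  # np.sinc(u) = sin(pi*u)/(pi*u)

# Discrete approximation of R(tau) = integral of x(t)*x(t - tau) dt.
R = np.correlate(x, x, mode="full") * dt
lags = (np.arange(R.size) - (x.size - 1)) * dt

# The peak sits at zero lag, and R(0) approaches A**2/(2*W) = 0.5
# (slightly less here because the integration window is truncated).
print(lags[np.argmax(R)])  # → 0.0
print(R.max())
```

The computed R(τ) traces out the predicted scaled sinc: maximal at zero lag, dipping negative where the main peak overlaps the first negative lobe, and decaying for large lags.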
# Calculating Mean and Standard Deviation

## Numerical Data

Numerical data is data that is measurable, such as time, speed and distance. It is described with numbers that can be either discrete or continuous. When the data is continuous, in theory, there are infinitely many possibilities.

## Measure of Center

A measure of center is a statistic that summarizes a data set by finding its center. The most common measures of center are mean, median and mode.

## Mean

The mean or average of a data set is one representation of the center of the data set. It is one measure of center; the others are the median and the mode. To calculate the mean, add all the data points together, then divide by the number of data points.

mean $=\dfrac{\text{sum of values}}{\text{number of values}}$

Suppose a data set represents the heights of different towers. The mean of this data set gives an idea of a typical height. Calculating the mean could be seen as rearranging the blocks so that all the towers have the same height. After the blocks are rearranged, the towers each have a height of $4.$ Therefore, the mean is $4.$ If the heights are written as $x$, then the mean is sometimes written as $\bar{x}.$ The towers' mean height can then be written as $\bar{x}=4.$

### Exercise

When on vacation in Mexico, Peter finds a rose species he has never seen before. He decides to study how many petals each flower has. The result of his study is this.

$10,\; 14,\; 11,\; 9,\; 16$

How many petals do the flowers on average have?
### Solution

In order to determine the mean, we need to add all the data points together. Then we divide the sum by the total number of points, which in this case is $5.$

$\bar{x} = \dfrac{10 + 14 + 11 + 9 + 16}{5} = \dfrac{60}{5} = 12$

The mean is $12.$ Thus, on average the roses have $12$ petals each.

## Measure of Spread

A measure of spread is a way of quantifying how spread out, or different, the points in a data set are. A small spread means data points are similar, while a large spread means they are different. This is illustrated by the two data sets below. Both have a mean, median and mode of $3,$ but we can assume the second data set has a larger spread because of how different its data points are. Some commonly used measures of spread are range, mean absolute deviation, standard deviation, and interquartile range. These are often used together with a measure of center, to give an idea both of what a typical value is and how much the data can be expected to deviate from it.

## Standard Deviation

Standard deviation is a commonly used measure of spread. It is a measure of how much a randomly selected value from a data set is expected to differ from the mean. To denote the standard deviation, the Greek letter $\sigma$ is used, which is read as "sigma." To calculate a standard deviation, the rule

$\sigma = \sqrt{ \dfrac{ (x_1 - \bar{x})^2 + (x_2 - \bar{x})^2 + \ldots + (x_n - \bar{x})^2}{n} }$

is used, where $n$ is the number of values in the data set and $\bar{x}$ is the mean of the set.

## Finding the Standard Deviation of a Data Set

The standard deviation, $\sigma,$ of a data set is calculated using the rule

$\sigma = \sqrt{ \dfrac{ (x_1 - \bar{x})^2 + (x_2 - \bar{x})^2 + \ldots + (x_n - \bar{x})^2}{n} },$

where $n$ is the number of values in the data set and $\bar{x}$ is the mean of the set. Performing this calculation in one step makes for a convoluted expression. Therefore, it is best divided into a few smaller steps.
Consider the following data set as an example.

$1, 5, 3, 8, 3, 12$

### 1 Find the mean, $\bar{x}$

First, the mean, $\bar{x},$ should be calculated. The example data set has $6$ values, so the denominator is $6.$

$\bar{x} = \dfrac{1 + 5 + 3 + 8 + 3 + 12}{6} = \dfrac{32}{6} = \dfrac{16}{3} \approx 5.33$

### 2 Find the deviation of each data value, $x - \bar{x}$

For each data value, $x - \bar{x}$ can now be calculated and added to a table. This shows how much each data point varies from the mean. Here the exact mean $\frac{16}{3}$ is used, with decimal approximations shown for reference.

| $x$ | $x - \bar{x}$ |
|---|---|
| $1$ | $1 - \frac{16}{3} = \text{-}\frac{13}{3} \approx \text{-}4.33$ |
| $5$ | $5 - \frac{16}{3} = \text{-}\frac{1}{3} \approx \text{-}0.33$ |
| $3$ | $3 - \frac{16}{3} = \text{-}\frac{7}{3} \approx \text{-}2.33$ |
| $8$ | $8 - \frac{16}{3} = \frac{8}{3} \approx 2.67$ |
| $3$ | $3 - \frac{16}{3} = \text{-}\frac{7}{3} \approx \text{-}2.33$ |
| $12$ | $12 - \frac{16}{3} = \frac{20}{3} \approx 6.67$ |

### 3 Square the deviations

Square the deviations, and add them to a new column in the table.

| $x$ | $x - \bar{x}$ | $(x - \bar{x})^2$ |
|---|---|---|
| $1$ | $\text{-}\frac{13}{3}$ | $\frac{169}{9} \approx 18.78$ |
| $5$ | $\text{-}\frac{1}{3}$ | $\frac{1}{9} \approx 0.11$ |
| $3$ | $\text{-}\frac{7}{3}$ | $\frac{49}{9} \approx 5.44$ |
| $8$ | $\frac{8}{3}$ | $\frac{64}{9} \approx 7.11$ |
| $3$ | $\text{-}\frac{7}{3}$ | $\frac{49}{9} \approx 5.44$ |
| $12$ | $\frac{20}{3}$ | $\frac{400}{9} \approx 44.44$ |

### 4 Find the mean of the squared deviations

The squared deviations should be added and divided by the number of data values. In other words, the mean of the squared deviations is found.

$\dfrac{\frac{169}{9} + \frac{1}{9} + \frac{49}{9} + \frac{64}{9} + \frac{49}{9} + \frac{400}{9}}{6} = \dfrac{732}{54} = \dfrac{122}{9} \approx 13.56$

This value is called the variance of the data set.

### 5 Square-root the mean of the squared deviations

Finally, take the square root of the just found quotient to get the standard deviation. Here, the exact fraction is used instead of the rounded quotient, to avoid rounding errors.

$\sigma = \sqrt{ \dfrac{122}{9} } \approx 3.68$

Thus, a randomly chosen value from this data set is expected to deviate roughly $3.7$ units from the mean.
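The five steps can be carried out directly in Python and checked against the standard library. This is a sketch; `statistics.pstdev` computes the same population standard deviation as the rule above:

```python
import math
import statistics

data = [1, 5, 3, 8, 3, 12]

# Step 1: the mean
mean = sum(data) / len(data)

# Steps 2-3: deviations from the mean, squared
squared_devs = [(x - mean) ** 2 for x in data]

# Step 4: mean of the squared deviations (the variance)
variance = sum(squared_devs) / len(data)

# Step 5: square root of the variance
sigma = math.sqrt(variance)

print(round(sigma, 2))  # → 3.68
print(math.isclose(sigma, statistics.pstdev(data)))  # → True
```

Note that `statistics.stdev` would instead divide by $n-1$ (the sample standard deviation), which is a different rule from the population formula used in this lesson.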
# Dimensional analysis

## Approach

Dimensional analysis is very important for extending experimental laboratory results (prototype models) to a full-scale system. Two criteria must be fulfilled to achieve such an objective. First, geometric similarity, in which all dimensions of the prototype and the full-scale system are in the same ratio, should be fulfilled. Secondly, dynamic similarity should be met, in which the relevant dimensionless groups are the same for the prototype model and the full-scale system.

The convective heat transfer coefficient is a function of the thermal properties of the fluid, the geometric configuration, flow velocities, and driving forces. As a simple example, let us consider forced convection in a circular tube with a length L and a diameter D. The flow is assumed to be incompressible and natural convection is negligible compared with forced convection. The heat transfer coefficient can be expressed as

$h = h(k,\mu ,{c_p},\rho ,U,\Delta T,D,L) \qquad \qquad(1)$

where k is the thermal conductivity of the fluid, μ is viscosity, cp is specific heat, ρ is density, U is velocity, and ΔT is the temperature difference between the fluid and tube wall. Equation (1) can also be rewritten as

$F(h,k,\mu ,{c_p},\rho ,U,\Delta T,D,L) = 0 \qquad \qquad(2)$

It can be seen from eq. (2) that nine dimensional parameters are required to describe the convection problem. To determine the functions in eq. (1) or (2), it would be necessary to perform a large number of experiments, which is not practical. The theory of dimensional analysis shows – as will become evident later – that it is possible to use fewer dimensionless variables to describe the convection problem. According to Buckingham's Π theorem (Buckingham, 1914), the number of dimensionless variables required to describe the problem equals the number of dimensional variables minus the number of primary dimensions required to describe the problem.
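For the tube-flow example of eq. (2), this count can be checked numerically: write the exponents of each variable in the five primary dimensions as the columns of a matrix; dimensionless groups then correspond to null-space vectors of that matrix. A sketch (the dimension rows and variable ordering below are my own bookkeeping, consistent with the derivation that follows):

```python
import numpy as np

# Columns: exponents of [h, k, mu, cp, rho, U, dT, D, L]
# Rows: primary dimensions [M, L, t, T, Q]
A = np.array([
    #  h   k  mu  cp rho   U  dT   D   L
    [  0,  0,  1, -1,  1,  0,  0,  0,  0],  # M (mass)
    [ -2, -1, -1,  0, -3,  1,  0,  1,  1],  # L (length)
    [  0,  0, -1,  1,  0, -1,  0,  0,  0],  # t (time)
    [ -1, -1,  0, -1,  0,  0,  1,  0,  0],  # T (temperature)
    [  1,  1,  0,  1,  0,  0,  0,  0,  0],  # Q (heat rate)
], dtype=float)

# Buckingham Pi: number of groups = variables - rank of the dimension matrix
nullity = A.shape[1] - np.linalg.matrix_rank(A)
print(nullity)  # → 4

# Exponent vectors for Re = rho*U*D/mu, Nu = h*D/k, Pr = cp*mu/k, and L/D
groups = {
    "Re":  [0, 0, -1, 0, 1, 1, 0, 1, 0],
    "Nu":  [1, -1, 0, 0, 0, 0, 0, 1, 0],
    "Pr":  [0, -1, 1, 1, 0, 0, 0, 0, 0],
    "L/D": [0, 0, 0, 0, 0, 0, 0, -1, 1],
}
for name, v in groups.items():
    # Each product of powers has all dimensional exponents equal to zero.
    assert np.allclose(A @ np.array(v, dtype=float), 0), name
print("all four groups are dimensionless")
```

The nullity of 4 reproduces the 9 − 5 = 4 count, and the four familiar groups all lie in the null space, exactly as the hand derivation finds.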
Since there are five primary dimensions or units required to describe the problem – mass M (kg), length L (m), time t (s), temperature T (K), and heat transfer rate Q (W) – four dimensionless variables are needed to describe the problem. These dimensionless variables can be identified using Buckingham's Π theorem and are formed from products of powers of certain original dimensional variables. Any such dimensionless group can be written as

$\Pi = {h^a}{k^b}{\mu ^c}c_p^d{\rho ^e}{U^f}{(\Delta T)^g}{D^h}{L^i} \qquad \qquad(3)$

Substituting dimensions (units) of all variables into eq. (3) yields

$\begin{array}{l} \Pi = {\left( {\frac{Q}{{{L^2}T}}} \right)^a}{\left( {\frac{Q}{{LT}}} \right)^b}{\left( {\frac{M}{{Lt}}} \right)^c}{\left( {\frac{{Qt}}{{MT}}} \right)^d}{\left( {\frac{M}{{{L^3}}}} \right)^e}{\left( {\frac{L}{t}} \right)^f}{T^g}{L^h}{L^i} \\ = {{\rm{M}}^{c - d + e}}{L^{ - 2a - b - c - 3e + f + h + i}}{t^{ - c + d - f}}{T^{ - a - b - d + g}}{{\rm{Q}}^{a + b + d}} \\ \end{array} \qquad \qquad(4)$

For Π to be dimensionless, the exponents of each primary dimension must sum to zero, i.e.:

$\begin{array}{l} c - d + e = 0 \\ - 2a - b - c - 3e + f + h + i = 0 \\ - c + d - f = 0 \\ - a - b - d + g = 0 \\ a + b + d = 0 \\ \end{array} \qquad \qquad(5)$

This gives a set of five equations with nine unknowns. According to linear algebra, the number of linearly independent solutions of eq. (5) is four (9 − 5 = 4), which coincides with the Π theorem. In order to obtain the four distinctive solutions, we have free choices on four of the nine components. If we select a = d = i = 0 and f = 1, the solutions of eq. (5) become b = 0, c = −1, e = 1, g = 0, and h = 1, which give us the first nondimensional variable

${\Pi _1} = \frac{{\rho UD}}{\mu } \qquad\qquad(6)$

which is the Reynolds number, Π1 = Re. Similarly, we can set a = 1 and c = f = i = 0, and get the solutions of eq. (5) as b = −1, d = 0, e = 0, g = 0, and h = 1.
The second nondimensional variable becomes

${\Pi _{\rm{2}}} = \frac{{hD}}{k} \qquad\qquad(7)$

which is the Nusselt number (Π2 = Nu). Following a similar procedure, we can get two other dimensionless variables:

${\Pi _{\rm{3}}} = \frac{{{c_p}\mu }}{k} \qquad \qquad(8)$

${\Pi _{\rm{4}}} = \frac{L}{D} \qquad \qquad(9)$

which are the Prandtl number (${\Pi _3} = \Pr = \nu /\alpha$) and the aspect ratio of the tube. Equation (2) can be rewritten as

$F({\Pi _{\rm{1}}},{\Pi _{\rm{2}}},{\Pi _{\rm{3}}},{\Pi _{\rm{4}}}) = 0 \qquad \qquad(10)$

or

$Nu = f(Re, Pr, L/D) \qquad\qquad(11)$

Furthermore, if the flow and heat transfer in the tube are fully developed, meaning no change of flow and heat transfer in the axial direction, eq. (11) can be simplified further:

$Nu = f(Re, Pr) \qquad \qquad(12)$

It can be seen that the number of nondimensional variables is three, as opposed to the nine dimensional variables in eq. (1).

For liquid-vapor phase change processes such as boiling and condensation, the heat transfer coefficient can similarly be expressed as

$h = h[k,\mu ,{c_p},\rho ,L,\Delta T,({\rho _\ell } - {\rho _v})g,{h_{\ell v}},\sigma ] \qquad \qquad(13)$

There are 10 dimensional variables in eq. (13) and there are five primary dimensions in the boiling and condensation problem. Therefore, it will be necessary to use (10 − 5) = 5 dimensionless variables to describe the liquid-vapor phase change process, i.e.:

$Nu = f(Gr, Ja, Pr, Bo) \qquad \qquad(14)$

where the Grashof number is defined as:

$Gr = \frac{{\rho g({\rho _\ell } - {\rho _v}){L^3}}}{{{\mu ^2}}} \qquad \qquad(15)$

The new dimensionless parameters introduced in eq. (14) are the Jakob number, Ja, and the Bond number, Bo, which are defined as

$Ja = \frac{{{c_p}\Delta T}}{{{h_{\ell v}}}} \qquad \qquad(16)$

$Bo = \frac{{({\rho _\ell } - {\rho _v})g{L^2}}}{\sigma } \qquad \qquad(17)$

## Dimensionless numbers

The following table provides a summary of the definitions, physical interpretations, and areas of significance of the important dimensionless numbers for transport phenomena in multiphase systems.
At reduced length scales, the effects of gravitational and inertial forces become less important, while surface tension plays a dominant role.

Summary of dimensionless numbers for transport phenomena

| Name | Symbol | Definition | Physical interpretation | Area of significance |
|---|---|---|---|---|
| Bond number | Bo | $({\rho _\ell } - {\rho _v})g{L^2}/\sigma$ | Buoyancy force/surface tension | Boiling and condensation |
| Brinkman number | Br | μU² / (kΔT) | Viscous dissipation/enthalpy change | High-speed flow |
| Capillary number | Ca | μU / σ | We/Re | Two-phase flow |
| Eckert number | Ec | U² / (cpΔT) | Kinetic energy/enthalpy change | High-speed flow |
| Fourier number | Fo | αt / L² | Dimensionless time | Transient problems |
| Froude number | Fr | U² / (gL) | Inertial force/gravitational force | Flow with a free surface |
| Grashof number | Gr | gβΔTL³ / ν² | Buoyancy force/viscous force | Natural convection |
| Jakob number (liquid) | ${Ja_\ell}$ | ${c_{p\ell }}\Delta T/{h_{\ell v}}$ | Sensible heat/latent heat | Film condensation and boiling |
| Jakob number (vapor) | $Ja_v$ | ${c_{pv}}\Delta T/{h_{\ell v}}$ | Sensible heat/latent heat | Film condensation and boiling |
| Kapitza number | Ka | $\mu _\ell ^4g/[({\rho _\ell } - {\rho _v}){\sigma ^3}]$ | Surface tension/viscous force | Wave on liquid film |
| Knudsen number | Kn | λ / L | Mean free path/characteristic length | Noncontinuum flow |
| Lewis number | Le | α / D | Ratio between thermal and mass diffusivities | Mass transfer |
| Mach number | Ma | U / c | Velocity/speed of sound | Compressible flow |
| Nusselt number | Nu | hL / k | Thermal resistance of conduction/thermal resistance of convection | Convective heat transfer |
| Peclet number | Pe | UL / α | RePr | Forced convection |
| Prandtl number | Pr | ν / α | Rate of diffusion of viscous effect/rate of diffusion of heat | Convection |
| Rayleigh number | Ra | gβΔTL³ / (να) | GrPr | Natural convection |
| Reynolds number | Re | UL / ν | Inertial force/viscous force | Forced convection |
| Schmidt number | Sc | ν / D | Rate of diffusion of viscous effect/rate of diffusion of mass | Convective mass transfer |
| Sherwood number | Sh | hmL / D₁₂ | Resistance of diffusion/resistance of convection | Convective mass transfer |
| Stanton number | St | h / (ρcpU) | Nu/(Re Pr) | Forced convection |
| Stanton number (mass transfer) | Stm | hm / U | Sh/(Re Sc) | Mass transfer |
| Stefan number | Ste | ${c_p}\Delta T/{h_{s\ell }}$ | Sensible heat/latent heat | Melting and solidification |
| Strouhal number | Sr | Lf / U | Time characteristics of fluid flow | Oscillating flow/bioengineering |
| Weber number | We | ρU²L / σ | Inertial force/surface tension force | Liquid-vapor phase change |
| Womersley number | Wr | $\sqrt {\omega {R^2}/\nu }$ | Radial force/viscous force | Bioengineering |

The convective heat transfer coefficient can be obtained analytically, numerically, or experimentally, and the results are often expressed in terms of the Nusselt number, in a fashion similar to eqs. (12) and (14).

## References

Buckingham, E., 1914, "On Physically Similar Systems: Illustrations of the Use of Dimensional Equations," Phys. Rev., Vol. 4, pp. 345-376.

Faghri, A., and Zhang, Y., 2006, Transport Phenomena in Multiphase Systems, Elsevier, Burlington, MA.

Faghri, A., Zhang, Y., and Howell, J. R., 2010, Advanced Heat and Mass Transfer, Global Digital Press, Columbia, MO.
# Train DDPG Agent to Swing Up and Balance Cart-Pole System

This example shows how to train a deep deterministic policy gradient (DDPG) agent to swing up and balance a cart-pole system modeled in Simscape™ Multibody™.

For more information on DDPG agents, see Deep Deterministic Policy Gradient (DDPG) Agents. For an example showing how to train a DDPG agent in MATLAB®, see Train DDPG Agent to Control Double Integrator System.

### Cart-Pole Simscape Model

The reinforcement learning environment for this example is a pole attached to an unactuated joint on a cart, which moves along a frictionless track. The training goal is to make the pole stand upright without falling over using minimal control effort.

Open the model.

```matlab
mdl = "rlCartPoleSimscapeModel";
open_system(mdl)
```

The cart-pole system is modeled using Simscape Multibody. For this model:

• The upward balanced pole position is 0 radians, and the downward hanging position is `pi` radians.
• The force action signal from the agent to the environment is from –15 to 15 N.
• The observations from the environment are the position and velocity of the cart, and the sine, cosine, and derivative of the pole angle.
• The episode terminates if the cart moves more than 3.5 m from the original position.
• The reward $r_t$, provided at every time step, is

$r_t = -0.1\left(5\theta_t^2 + x_t^2 + 0.05\,u_{t-1}^2\right) - 100B$

Here:

• $\theta_t$ is the angle of displacement from the upright position of the pole.
• $x_t$ is the position displacement from the center position of the cart.
• $u_{t-1}$ is the control effort from the previous time step.
• $B$ is a flag (1 or 0) that indicates whether the cart is out of bounds.
### Create Environment Interface

Create a predefined environment interface for the cart-pole system.

```matlab
env = rlPredefinedEnv("CartPoleSimscapeModel-Continuous")
```

```
env =
  SimulinkEnvWithAgent with properties:
             Model : rlCartPoleSimscapeModel
        AgentBlock : rlCartPoleSimscapeModel/RL Agent
          ResetFcn : []
    UseFastRestart : on
```

The interface has a continuous action space where the agent can apply force values from –15 to 15 N to the cart.

Obtain the observation and action information from the environment interface.

```matlab
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
```

Specify the simulation time `Tf` and the agent sample time `Ts` in seconds.

```matlab
Ts = 0.02;
Tf = 25;
```

Fix the random generator seed for reproducibility.

```matlab
rng(0)
```

### Create DDPG Agent

DDPG agents use a parametrized Q-value function approximator to estimate the value of the policy. A Q-value function critic takes the current observation and an action as inputs and returns a single scalar as output (the estimated discounted cumulative long-term reward when the agent takes the given action from the state corresponding to the current observation and follows the policy thereafter).

To model the parametrized Q-value function within the critic, use a neural network with two input layers (one for the observation channel, as specified by `obsInfo`, and the other for the action channel, as specified by `actInfo`) and one output layer (which returns the scalar value).

Note that `prod(obsInfo.Dimension)` and `prod(actInfo.Dimension)` return the number of dimensions of the observation and action spaces, respectively, regardless of whether they are arranged as row vectors, column vectors, or matrices.

Define the network as an array of layer objects. Assign names to the input and output layers of each path. These names allow you to connect the paths and then later explicitly associate the network input and output layers with the appropriate environment channel.
For more information on creating a deep neural network value function representation, see Create Policies and Value Functions.

```matlab
% Define path for the state input
statePath = [
    featureInputLayer(prod(obsInfo.Dimension),Name="NetObsInLayer")
    fullyConnectedLayer(128)
    reluLayer
    fullyConnectedLayer(200,Name="sPathOut")];

% Define path for the action input
actionPath = [
    featureInputLayer(prod(actInfo.Dimension),Name="NetActInLayer")
    fullyConnectedLayer(200,Name="aPathOut",BiasLearnRateFactor=0)];

% Define path for the critic output (value)
commonPath = [
    additionLayer(2,Name="add")
    reluLayer
    fullyConnectedLayer(1,Name="CriticOutput")];

% Create layerGraph object and add layers
criticNetwork = layerGraph(statePath);
criticNetwork = addLayers(criticNetwork,actionPath);
criticNetwork = addLayers(criticNetwork,commonPath);

% Connect paths and convert to dlnetwork object
criticNetwork = connectLayers(criticNetwork,"sPathOut","add/in1");
criticNetwork = connectLayers(criticNetwork,"aPathOut","add/in2");
criticNetwork = dlnetwork(criticNetwork);
```

Display the number of weights and plot the network configuration.

```matlab
summary(criticNetwork)
```

```
   Initialized: true
   Number of learnables: 27.1k
   Inputs:
      1   'NetObsInLayer'   5 features
      2   'NetActInLayer'   1 features
```

```matlab
plot(criticNetwork)
```

Create the critic representation using the specified deep neural network and options. You must also specify the action and observation information for the critic, which you already obtained from the environment interface. For more information, see `rlQValueFunction`.

```matlab
critic = rlQValueFunction(criticNetwork, ...
    obsInfo,actInfo,...
    ObservationInputNames="NetObsInLayer", ...
    ActionInputNames="NetActInLayer");
```

DDPG agents use a parametrized deterministic policy over continuous action spaces, which is learned by a continuous deterministic actor. This actor takes the current observation as input and returns as output an action that is a deterministic function of the observation.
To model the parametrized policy within the actor, use a neural network with one input layer (which receives the content of the environment observation channel, as specified by `obsInfo`) and one output layer (which returns the action to the environment action channel, as specified by `actInfo`). Since the output of `tanhLayer` is limited between –1 and 1, scale the network output to the range of the action using `scalingLayer`.

```
actorNetwork = [
    featureInputLayer(prod(obsInfo.Dimension))
    fullyConnectedLayer(128)
    reluLayer
    fullyConnectedLayer(200)
    reluLayer
    fullyConnectedLayer(prod(actInfo.Dimension))
    tanhLayer
    scalingLayer(Scale=max(actInfo.UpperLimit))];
```

Convert the network to a `dlnetwork` object and display the number of weights.

```
actorNetwork = dlnetwork(actorNetwork);
summary(actorNetwork)
```

```
   Initialized: true

   Number of learnables: 26.7k

   Inputs:
      1   'input'   5 features
```

Create the actor in a similar manner to the critic. For more information, see `rlContinuousDeterministicActor`.

`actor = rlContinuousDeterministicActor(actorNetwork,obsInfo,actInfo);`

Specify training options for the critic and the actor using `rlOptimizerOptions`.

```
criticOptions = rlOptimizerOptions(LearnRate=1e-03,GradientThreshold=1);
actorOptions = rlOptimizerOptions(LearnRate=5e-04,GradientThreshold=1);
```

Specify the DDPG agent options using `rlDDPGAgentOptions`, and include the training options for the actor and critic.

```
agentOptions = rlDDPGAgentOptions(...
    SampleTime=Ts,...
    ActorOptimizerOptions=actorOptions,...
    CriticOptimizerOptions=criticOptions,...
    ExperienceBufferLength=1e6,...
    MiniBatchSize=128);
```

You can also modify the agent options using dot notation.

```
agentOptions.NoiseOptions.Variance = 0.4;
agentOptions.NoiseOptions.VarianceDecayRate = 1e-5;
```

Alternatively, you can create the agent first, and then access its option object and modify the options using dot notation.

Then, create the agent using the actor, critic, and agent options objects.
For more information, see `rlDDPGAgent`.

`agent = rlDDPGAgent(actor,critic,agentOptions);`

### Train Agent

To train the agent, first specify the training options. For this example, use the following options.

• Run each training session for at most 2000 episodes, with each episode lasting at most `ceil(Tf/Ts)` time steps.
• Display the training progress in the Episode Manager dialog box (set the `Plots` option) and disable the command-line display (set the `Verbose` option to `false`).
• Stop training when the agent receives an average cumulative reward greater than –400 over five consecutive episodes. At this point, the agent can quickly balance the pole in the upright position using minimal control effort.
• Save a copy of the agent for each episode where the cumulative reward is greater than –400.

For more information, see `rlTrainingOptions`.

```
maxepisodes = 2000;
maxsteps = ceil(Tf/Ts);
trainingOptions = rlTrainingOptions(...
    MaxEpisodes=maxepisodes,...
    MaxStepsPerEpisode=maxsteps,...
    ScoreAveragingWindowLength=5,...
    Verbose=false,...
    Plots="training-progress",...
    StopTrainingCriteria="AverageReward",...
    StopTrainingValue=-400,...
    SaveAgentCriteria="EpisodeReward",...
    SaveAgentValue=-400);
```

Train the agent using the `train` function. Training this agent is a computationally intensive process that takes several hours to complete. To save time while running this example, load a pretrained agent by setting `doTraining` to `false`. To train the agent yourself, set `doTraining` to `true`.

```
doTraining = false;
if doTraining
    % Train the agent.
    trainingStats = train(agent,env,trainingOptions);
else
    % Load the pretrained agent for the example.
    load("SimscapeCartPoleDDPG.mat","agent")
end
```

### Simulate DDPG Agent

To validate the performance of the trained agent, simulate it within the cart-pole environment. For more information on agent simulation, see `rlSimulationOptions` and `sim`.
```
simOptions = rlSimulationOptions(MaxSteps=500);
experience = sim(env,agent,simOptions);
```

`bdclose(mdl)`
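As an optional follow-up (an addition to this example, not part of the original), you can total the reward logged during the simulation before closing the model. The `sim` function returns `experience` with a `Reward` timeseries, so the cumulative episode reward is the sum of its `Data` field (field layout may vary by toolbox release):

```
totalReward = sum(experience.Reward.Data)
```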
Physics Forums (http://www.physicsforums.com/index.php)
-   Differential Equations (http://www.physicsforums.com/forumdisplay.php?f=74)
-   -   trivial solution method for constant coeff case (http://www.physicsforums.com/showthread.php?t=114337)

mathwonk Mar14-06 10:35 PM

trivial solution method for constant coeff case

I have noticed the following trivial method for solving special constant coeff. ode's, but cannot find it in the usual ode books (i had better look in courant though, as everything else is in there).

given a cc ode of form (D-a)(D-b)y = f, where f is a solution of some cc ode, i.e. where f is annihilated by some polynomial P(D) in D: if the polynomial P does not have D-a or D-b as a factor (which case is easily done separately), then when expressed as a polynomial in D-a, P has a non zero constant term, hence can be solved as follows: (D-a)Q(D) = c where c is not zero. Hence we get that (D-a)^-1 = (1/c)Q(D). Thus y = (1/c)Q(D)f. Repeat for D-b.

example: This is not exactly the same type but is an easy example: To solve (D^2 + D + 1)y = f, where f is a polynomial of degree 2, just take y = (1-D)f. I.e. mod D^3, (D^2+D+1)^-1 = 1-D.

This is just the usual principle that the inverse of an invertible matrix with given minimal polynomial can be computed by solving the minimal polynomial for the non zero constant term, then dividing out the variable, and dividing by the constant term. e.g. if T^2 + T - 2 = 0, then T^2 + T = 2, so T[T+1] = 2, so T^(-1) = (1/2)(T+1).

Surely a solution method as obvious as this was standard hundreds of years ago, but i have not found it in any of over 15 books I have consulted.

In the example above, if the RHS of the equation is a polynomial of degree < n, then D^n annihilates it, so the inverse of (1-D) is 1+D+D^2+...+D^n. This can be modified to give the inverse of any (D-a), hence any polynomial in D.

On this forum there is surely someone who has seen this method. If so please give me a reference.
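The degree-2 example in the post is easy to machine-check. Below is a short Python sketch (my addition, not part of the thread) that represents a polynomial by its coefficient list, lets formal differentiation play the role of D, and verifies that y = (1-D)f really solves (D^2 + D + 1)y = f for a sample quadratic f:

```python
# Represent a polynomial c0 + c1 x + c2 x^2 + ... by its coefficient list.

def deriv(p):
    # coefficients of the derivative (the operator D)
    return [i * c for i, c in enumerate(p)][1:] or [0]

def add(p, q):
    # coefficient-wise sum, padding the shorter list with zeros
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def scale(k, p):
    return [k * c for c in p]

f = [7, 5, 3]                    # f(x) = 7 + 5x + 3x^2 (any quadratic works)
y = add(f, scale(-1, deriv(f)))  # candidate solution y = (1 - D) f

# Apply L = D^2 + D + 1 to y and compare with f
Ly = add(add(deriv(deriv(y)), deriv(y)), y)
print(Ly)  # [7, 5, 3], i.e. L y = f
```

The extra terms of the truncated geometric series vanish on low-degree polynomials, which is exactly why the finite inverse works.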
:confused: This is obviously related to the annihilator method, or undetermined coefficients method, but obviates the need to solve for any constants as they are provided automatically.

mathwonk Mar15-06 10:18 PM

more details: method 1: suppose the RHS of our ode is a polynomial of degree < n. Then D^n = 0 on that function, so if the LHS factors with factors such as D-a, then since (1-D)(1+D+D^2+D^3+...+D^[n-1]) = 1-D^n, then also (1-D/a)(1+D/a+[D/a]^2+[D/a]^3+...+[D/a]^[n-1]) = 1-[D/a]^n, hence a(1-D/a)(1+D/a+[D/a]^2+[D/a]^3+...+[D/a]^[n-1]) = (a-D)(1+D/a+[D/a]^2+[D/a]^3+...+[D/a]^[n-1]) = a(1-[D/a]^n). Hence (D-a)(-1/a)(1+D/a+[D/a]^2+[D/a]^3+...+[D/a]^[n-1]) = (1-[D/a]^n).

Thus if we want to solve (D-a)y = f where f is a polynomial of degree < n, then taking y = (-1/a)(1+D/a+[D/a]^2+[D/a]^3+...+[D/a]^[n-1])f gives us (D-a)y = (1-[D/a]^n)f = f, since (D/a)^n f = 0.

Repeat for each factor (D-a) of the differential operator. I.e. this method inverts any operator which is a product of operators of form D-a, i.e. any linear diff op with constant coefficients.

mathwonk Mar15-06 10:24 PM

method 2: If the RHS of our ode is say sin x, hence is annihilated by D^2+1, then we solve for the constant term, getting 1 = -D^2. Thus if the LHS has factor D-a, we rewrite the annihilating polynomial as 1 = -D^2 = -(D-a+a)^2 = -[(D-a)^2 + 2a(D-a) + a^2]. Then we put all constants on the same side, getting: 1+a^2 = -[(D-a)^2 + 2a(D-a)] = (D-a)[-(D-a) - 2a]. Thus we have, acting on sin x, that (D-a)^(-1) = (-1/(1+a^2))[(D-a)+2a] = (-1/(1+a^2))[D+a]. so to solve (D-a)y = sin x, we set y = (-1/(1+a^2))[D+a] sin x.

it is too late at night to check this and find my error, which seems always to be present, but here is "the idea". As everyone knows this is the usual fact that if T is an invertible linear transformation with minimal polynomial X^2+1, then the inverse of T is precisely -T. I.e.
every invertible linear map T on a finite dimensional space has a minimal annihilating polynomial with non zero constant term, hence T^(-1) is also a polynomial in T, obtained by setting the constant term of the minimal polynomial on the other side and factoring out T from what is left.

Pardon me for belaboring this, but I was not very clear before, and having given experts the chance to cite this method elsewhere, i am now trying to explain it for novices.

Hurkyl Mar16-06 06:01 PM

When I saw it yesterday, it didn't seem familiar. When I looked again today, it seemed vaguely familiar. Of course, I did see it just yesterday, so read into it what you will.

mathwonk Mar16-06 09:09 PM

one of the fun parts of this method is it provides a right inverse which is not a left inverse to the given operator. and it is only a "local inverse", in that it depends on the nature of the desired output, as a right inverse should.

I.e. we have a differential operator L that maps all smooth functions to other smooth functions. But this "right inverse" of L, which is designed to solve Ly = f, is defined so as to be a right inverse of L only on the space annihilated by some polynomial P(D) that annihilates f. I.e. the "inverse" operator M depends on f, and has the property that L(M(f)) = f, but L(M(g)) may well not equal g for any g which is not annihilated by the given polynomial P(D) which annihilates f. I.e. y = M(f) solves Ly = f, but y = M(g) may well not solve Ly = g for most other g; and since L is not injective, we cannot have M(L(y)) = y for all y.

mathwonk Mar16-06 09:24 PM

for a theorem in linear algebra which inspired one of these methods, see Herstein, Topics in Algebra, 1964, page 220, Theorem 6B, "a linear transformation of a finite dimensional space is invertible if and only if the constant term of its minimal polynomial is non zero".
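The "method 2" formula above is easy to spot-check numerically. The claimed inverse gives y = -(1/(1+a^2))(D+a) sin x, i.e. y = -(cos x + a sin x)/(1+a^2), as a particular solution of (D-a)y = sin x; the following Python snippet (my addition, not part of the thread) verifies the residual is essentially zero at a few sample points:

```python
import math

a = 2.0  # any a works; the formula assumed D - a is not a factor of D^2 + 1

def y(x):
    # candidate particular solution y = -(1/(1+a^2)) (D + a) sin x
    return -(math.cos(x) + a * math.sin(x)) / (1 + a * a)

def dy(x, h=1e-6):
    # central-difference approximation of y'(x)
    return (y(x + h) - y(x - h)) / (2 * h)

residuals = [abs(dy(x) - a * y(x) - math.sin(x)) for x in (0.0, 0.7, 2.0)]
print(max(residuals))  # essentially zero: y solves (D - a) y = sin x
```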
Hurkyl Mar16-06 09:41 PM

Quote: method 2: If the RHS of our ode is say sin x, hence is annihilated by D^2+1,

I think just this much is what I was remembering seeing -- finding a differential operator that annihilates the RHS, and hitting your equation with it to turn it into a homogeneous cc equation.

mathwonk Mar16-06 11:02 PM

in my book that is called the annihilator method, but they do not carry it further to give a formula for the inverse operator. they merely say to write down the general homogeneous solution from memory, plug it in and solve for the relevant coefficients. this is wasteful and requires more knowledge. the present method gives a specific expression for the solution, without knowing any general solution formulas, and without solving for any constants.

perhaps this method is so old it has been forgotten, as I still have not found it in any current ode books, except of course it is a direct application of the previously cited inversion principle from finite dimensional linear algebra in herstein. but even linear algebra books such as herstein in which the technique appears do not seem to apply it to differential operators. i bet it could be in hoffman and kunze though. that is a good linear algebra book, probably with applications to differential operators.

mathwonk Mar17-06 12:51 PM

i still have not found this exact method anywhere, including loomis and sternberg, and courant vol 1, and dieudonne vol 1, but the idea is the same as the annihilator or undetermined coefficient method explained almost everywhere. the theoretical discussion in loomis (he wrote the chapter on differential equations in L-S) does help understand the method as follows: as in herstein, cited above, a linear endomorphism L of a finite dimensional vector space V is invertible if and only if it satisfies a polynomial with non zero constant term, e.g. its minimal polynomial. to find the inverse of L if P(L) = L^n + ....
+ a1 L + a0 = 0 but a0 is not zero, solve for a0 = -L[a1 + ... + L^(n-1)]. Then divide both sides by a0, and get L^(-1) = (-1/a0)[a1 + ... + L^(n-1)].

we apply this as follows to a linear nth order differential operator L acting on the infinite dimensional space C(n) = functions on some interval I with n continuous derivatives, mapping C(n) to C = continuous functions on I. By the usual theory, L is a surjective linear map C(n)-->C with n dimensional kernel or null space. Moreover if t=0 is a point of the interval I, then the subspace where y(0) = y'(0) = .... = y^(n-1)(0) = 0 of C(n) has codimension n, and is a direct sum complement to the solution space {y: L(y)=0}.

Now consider the non homogeneous equation Ly = f, where f is annihilated by some polynomial in D, i.e. f is itself a solution of some homogeneous constant coefficient operator M. Now suppose L has constant coefficients and factors as a product of factors (D-a), where L and M have no factors in common. Then the homogeneous solution subspace V = {y: My=0} to which f belongs meets the solution space {y: Ly=0} only in the function {0}. Hence the operator L induces an isomorphism of the finite dimensional space V to itself. In particular, there is a solution y of the equation Ly = f in the space V to which f itself belongs. This is the key idea behind the annihilator or undetermined coefficients method.

But in fact it is easy to write down an explicit inverse for L on this space, by the "herstein" method above. I.e. If M is the annihilator of f, where the polynomials L(D) and M(D) have no common linear factors, then we invert L on the space {M=0} as follows: Factor L into linear factors and invert each factor separately one at a time. If (D-a) is a linear factor of L, then write D = (D-a)+a in the polynomial M(D), and expand as a polynomial in (D-a). By the theory above, (since D-a is invertible on the space {M=0}) the constant term will be non zero.
Thus we can solve explicitly for (D-a)^(-1) as a polynomial in D-a, hence in D. Doing this for each factor in L gives us an explicit polynomial Q(D) in D which is inverse to L on the space {M=0}. Since f belongs to this space, y = Q(D)(f) solves the non homogeneous equation Ly = f. :biggrin:

mathwonk Mar17-06 12:57 PM

a quicker way to write down the inverse in the case that f is a polynomial of degree m, hence is annihilated by the polynomial D^(m+1) = 0, is to use the fact that we know the inverse to (1-D), modulo the polynomial D^(m+1), is the truncated geometric series 1+D+D^2+...+D^m. this easily allows us also to invert (D-a) = -a(1 - [D/a]) by a similar truncated geometric series.

Remark: When I tried this on some problems in the book, I originally abandoned it as useless, since ordinary undetermined coefficients was quicker. But I soon found that was because the book had chosen those problems to suit its own method. I.e. on some problems this method is quicker, and no method is best for all problems. I suggest for example that to solve (D^n + D^(n-1) + .... + D + 1)y = f, where f is a polynomial of degree n, no method is quicker than this, since y = (1-D)f is a solution.

second remark, to young persons: notice that I, an oft published PhD in mathematics, am delighted to have noticed this simple application of a well traveled idea, while many neophytes here are unsatisfied unless they can submit some crackpot solution of the Riemann hypothesis. To understand this phenomenon, read Don Quixote, or merely look at the famous illustrations by the great French caricaturist (of the Don going out in the morning on his quest, and then coming back afterwards). :cool:

mathwonk Mar18-06 03:13 PM

please forgive me for repeating this again, but i like to go over and over an idea until i understand it in the simplest way possible for me.
the point here is to find an inverse for an invertible operator acting on a finite dimensional space, by finding a polynomial that annihilates it. The first step in solving Ly = f this way is to identify the correct finite dimensional space, by looking for a polynomial annihilator of f, say P(D). This of course is only possible if f is a product of exponentials, polynomials and sins and cosines. E.g. if f = x^n e^(bx) then (D-b)^(n+1) works, and if f = sin or cos, then D^2 + 1 works.

Now that P is found, the appropriate finite dimensional space is V = ker(P), but with this method there is no need to even know exactly what functions make up this space, unlike in the annihilator method. The only thing one needs to know is what polynomial P annihilates f. Then assuming that the polynomial operator L(D) has no linear factors in common with P(D), it follows that L is invertible on V.

Hence to invert L on V, we only need to find an appropriate polynomial annihilating L on V. Since L(D) is a polynomial in D, it suffices to find an annihilator of each linear factor (D-a) of L separately. But since we have a polynomial P(D) which equals zero on V, we can find a polynomial vanishing on any operator of form D-a by re-expanding P(D) as P(D-a+a). The fact that D-a is not a factor of P(D) guarantees that the resulting polynomial in D-a will have non zero constant term, [if I am not wrong]. This is the key point.

Although all polynomials L(D) do theoretically factor into complex linear factors, the method is easier when the factors are simple integer factors.

So to sum up, the annihilator P(D) of f defines a space V = ker P(D), on which P annihilates D, hence re-expands to yield a polynomial annihilating any operator of form D-a. This allows one to invert on V all polynomials L(D) which are relatively prime to P(D). If L does have linear factors in common with P, it is easy to invert those factors by hand first, and eliminate them. E.g.
to solve (D-a)y = x^k e^(ax), one simply integrates the polynomial factor, i.e. then y = {x^(k+1)/[k+1]} e^(ax). this is no harder than solving Dy = x^k, by y = x^(k+1)/[k+1].

here is a worked example: to solve (D-a)y = xe^(bx), the annihilator of the RHS is (D-b)^2, which re-expands as (D-a+a-b)^2 = (D-a)^2 + 2(a-b)(D-a) + (a-b)^2, where the constant term (a-b)^2 is non zero if and only if a differs from b. Then (a-b)^2 = -(D-a)^2 - 2(a-b)(D-a) = [-(D-a) - 2(a-b)](D-a), so (D-a)^(-1) = [1/(a-b)^2][a - D - 2(a-b)] = [1/(a-b)^2][(2b-a) - D].

E.g. to solve (D-1)y = xe^(3x), we have (D-3)^2 = 0 = (D-1-2)^2 = (D-1)^2 - 4(D-1) + 4, so 4 = 4(D-1) - (D-1)^2 = (D-1)[4 - (D-1)] = (D-1)[5-D]. So (D-1)^(-1) = (1/4)[5-D]. Since a = 1, b = 3, this agrees with the general formula [1/(a-b)^2][(2b-a) - D] above.

Hence y = (1/4)[5-D](xe^(3x)) = (1/4)[5xe^(3x) - 3xe^(3x) - e^(3x)] = (1/4)[2xe^(3x) - e^(3x)].

Checking: (D-1)y = (D-1)(1/4)[2xe^(3x) - e^(3x)] = (1/4)(D-1)[2xe^(3x) - e^(3x)] = (1/4)[2e^(3x) + 6xe^(3x) - 3e^(3x) - 2xe^(3x) + e^(3x)] = (1/4)[4xe^(3x)] = xe^(3x), as desired.

batman394 Mar19-06 01:37 AM

ok wow.. that all looks familiar.. and i just took DiffEQ this summer.. and i got an A in the 8 week course.... but its like stuck at the back of my brain... what youre saying looks good.. but there's gotta be a reason that it isnt published anywhere... there might be a style of cc ode that it doesnt work for....?

mathwonk Mar19-06 12:07 PM

i think i have proved conclusively that it always works. i have also tried to make the point that no method is always best, and that this one is clearly better than others for some problems, just as some others are clearly better than this one on other problems. :smile:

batman394 Mar20-06 12:03 AM

oh i know.. im just saying there's gotta be some exception.. otherwise like you said theyd have figured it out before.

Hurkyl Mar20-06 12:12 AM

Thinking about it again, I'm less convinced I haven't seen it before.
Argh, if only I could remember where I saw this method! I had thought it was in my DiffEq book, but if it is, it's been eluding me. Now, my DiffEq book itself is eluding me, so I can't even go back to look for it again. :frown: mathwonk Mar28-06 12:57 PM perhaps hurkyl, you saw it here: I have at last found one of the methods I discovered of formally inverting constant coefficient linear differential operators, the one using the formal power series for 1/(1-D) = 1 +D + D^2 +.... The classic book by tenenbaum and pollard devotes 25 pages to it, 268-292. It is very clearly explained there from a user’s point of view, rather than a theoretist’s. There is also an enhancement of the method, using partial fractions expansions to express the product of the inverses of different operators (D-a), (D-b),,, as a sum of constant multiples of the individual inverses. This book has been highly recommended, probably in these pages, and that is no doubt where I learned of it, and why I purchased it. It is one of those old, unhurried, scholarly, thorough books, that are no longer being written or adopted. Almost every conceivable basic method seems to be covered in a careful and useful way, and many good problems are given, some with solutions. It is also cheap, being a Dover paperback. There are also many applications, from cave paintings to engineering problems. [Some of the discussion there is not quite precise theoretically. E.g. the definition of a normalized particular solution at the bottom of page 269, as one not involving any terms from the homogeneous solution does not make sense, except with the usual tacit assumption one is using standard familiar functions as a basis to express everything. I.e. these statements are “basis dependent” but no basis has been specified except tacitly. A (different) basis - free definition of a normalized particular solution y is one with y(0) = y’(0) = 0, but this is actually less convenient than the imprecise one used there. 
The question involves choosing a complement to a given subspace. A natural basis - free complement to the space of homogeneous solutions of (D-r)(D-s)y = 0, is the space of y's where the initial value vector (y(0), y'(0)) of y is zero, but the usual basis 1,x,x^2,...,x^n for the space of polynomials Pn(x) of degree at most n, provides a more convenient complement to the space spanned by e^rx, e^sx, for use in solving (D-r)(D-s)y = Pn(x).]

So the basic principle that any good idea that uses only classical concepts, has already been thought of by the classical workers is verified again. However, I did not find there the other incarnation I noticed of this method, via expressing the minimal polynomial m(D) of the RHS, in terms of (D-a), and solving for the constant term, to invert D-a on the space spanned by the derivatives of the RHS. Since the general historical principle above is usually valid, it too may yet be found somewhere. :smile:
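As a postscript (my addition, not part of the original thread): the worked example (D-1)y = xe^(3x) from earlier in the thread also admits a tiny machine check. On the two-dimensional space of functions (ax + b)e^{3x}, encoded as pairs (a, b), the operator D acts as (a, b) -> (3a, a + 3b):

```python
def D(f):
    # action of D on (a x + b) e^{3x}, encoded as the pair (a, b)
    a, b = f
    return (3 * a, a + 3 * b)

# mathwonk's solution y = (1/4)(2x - 1) e^{3x}, i.e. (a, b) = (1/2, -1/4)
y = (0.5, -0.25)
Dy = D(y)
lhs = (Dy[0] - y[0], Dy[1] - y[1])  # (D - 1) y
print(lhs)  # (1.0, 0.0), which encodes x e^{3x}
```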
• anonymous

The heights of a certain species of plant are normally distributed, with mean 23 cm and standard deviation 1 cm. What is the probability that a plant chosen at random will be between 20.5 and 25.5 cm tall?
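Not part of the original post, but for reference the requested probability can be computed directly from the normal CDF, using only the Python standard library (parameters taken from the question: mean 23 cm, standard deviation 1 cm):

```python
import math

mu, sigma = 23.0, 1.0

def normal_cdf(x):
    # CDF of Normal(mu, sigma) via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

p = normal_cdf(25.5) - normal_cdf(20.5)
print(round(p, 4))  # 0.9876
```

Since both endpoints lie 2.5 standard deviations from the mean, this is just P(-2.5 < Z < 2.5) for a standard normal Z.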
Repository URL:

Author(s): Craig Callender

##### preprint description

A persistent question about the deBroglie–Bohm interpretation of quantum mechanics concerns the understanding of Born's rule in the theory. Where do the quantum mechanical probabilities come from? How are they to be interpreted? These are the problems of emergence and interpretation. In more than 50 years no consensus regarding the answers has been achieved. Indeed, mirroring the foundational disputes in statistical mechanics, the answers to each question are surprisingly diverse. This paper is an opinionated survey of this literature. While acknowledging the pros and cons of various positions, it defends particular answers to how the probabilities emerge from Bohmian mechanics and how they ought to be interpreted.

# This preprint has 1 Wikipedia mention.

#### Quantum non-equilibrium

Quantum non-equilibrium is a concept within stochastic formulations of the De Broglie–Bohm theory of quantum physics.

Quantum non-equilibrium: $\rho(X,t) \neq |\psi(X,t)|^2$

Relaxation to quantum equilibrium: $\rho(X,t) \to |\psi(X,t)|^2$

Quantum equilibrium hypothesis: $\rho(X,t) = |\psi(X,t)|^2$

with $\rho(X,t)$ representing ...
1. ## continuity problem (domain)

Hello, I don't understand why the answer for the domain is wrong. Can anyone tell me the correct answer? Thank you.

2. ## Re: continuity problem (domain)

$Q(x)$ isn't defined for $x = \sqrt[3]{3}$, therefore the domain of $Q(x)$ is $(-\infty, \sqrt[3]{3}) \cup (\sqrt[3]{3},\infty)$.

$Q(x)$ is, however, continuous at all the points in its domain. It's not continuous at $x=\sqrt[3]{3}$, but that point isn't in its domain.
# Funny identities

Here is a funny exercise $$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$ (If you prove it don't publish it here please). Do you have similar examples?

-

Maybe a moderator should put the zeta ones together since there are three already? –  anon Nov 3 '10 at 22:29

Perhaps this should be a community wiki question. –  Nuno Nov 3 '10 at 22:31

This is related. –  J. M. Nov 3 '10 at 22:35

I have tripped up many calculus students with this one: $\log(1+2+3)=\log 1+\log 2+\log 3$. I am evil... –  user641 Dec 8 '12 at 1:23

@SteveD If only we could find an odd example... –  peoplepower Jan 13 '13 at 0:31

$$\int_0^1\frac{\mathrm{d}x}{x^x}=\sum_{k=1}^\infty \frac1{k^k}$$

-

I had to do something about my accept range :) –  AD. May 17 '12 at 4:47

Sophomore's Dream? –  rotskoff Jun 19 '12 at 20:50

$$\sum_{n=1}^{+\infty}\frac{\mu(n)}{n}=1-\frac12-\frac13-\frac15+\frac16-\frac17+\frac1{10}-\frac1{11}-\frac1{13}+\frac1{14}+\frac1{15}-\cdots=0$$

This relation was discovered by Euler in 1748 (before Riemann's studies on the $\zeta$ function as a complex variable function, from which this relation becomes much easier!).
Then one of the most impressive formulas is the functional equation for the $\zeta$ function, in its symmetric form: it highlights a very very deep and smart connection between the $\Gamma$ and the $\zeta$: $$\pi^{-\frac s2}\Gamma\left(\frac s2\right)\zeta(s)= \pi^{-\frac{1-s}2}\Gamma\left(\frac{1-s}2\right)\zeta(1-s)\;\;\;\forall s\in\mathbb C\;.$$

Moreover no one seems to have written the Basel problem (Euler, 1735): $$\sum_{n=1}^{+\infty}\frac1{n^2}=\frac{\pi^2}{6}\;\;.$$

-

$$\frac{\pi}{4}=\sum_{n=1}^{\infty}\arctan\frac{1}{f_{2n+1}},$$ where the $f_{2n+1}$ are Fibonacci numbers, $n=1,2,...$

-

$$\int_0^\infty\frac1{1+x^2}\cdot\frac1{1+x^\pi}\,\mathrm dx=\int_0^\infty\frac1{1+x^2}\cdot\frac1{1+x^e}\,\mathrm dx$$

-

$(x-a)(x-b)(x-c)\ldots(x-z) = 0$

-

$$\frac{1}{2}=\frac{\frac{1}{2}}{\frac{1}{2}+\frac{\frac{1}{2}}{\frac{1}{2}+\frac{\frac{1}{2}}{\frac{1}{2}+\frac{\frac{1}{2}}{\frac{1}{2}+\frac{\frac{1}{2}}{\frac{1}{2}+\frac{\frac{1}{2}}{\frac{1}{2}+\cdots}}}}}}$$ and more generally we have $$\frac{1}{n+1}=\frac{\frac{1}{n(n+1)}}{\frac{1}{n(n+1)}+\frac{\frac{1}{n(n+1)}}{\frac{1}{n(n+1)}+\frac{\frac{1}{n(n+1)}}{\frac{1}{n(n+1)}+\frac{\frac{1}{n(n+1)}}{\frac{1}{n(n+1)}+\frac{\frac{1}{n(n+1)}}{\frac{1}{n(n+1)}+\frac{\frac{1}{n(n+1)}}{\frac{1}{n(n+1)}+\ddots}}}}}}$$

-

Here's one clever trigonometric identity that impressed me in high-school days. Add $\sin \alpha$ to both the numerator and the denominator of $\sqrt{\frac{1-\cos \alpha}{1 + \cos \alpha}}$ and get rid of the square root, and nothing changes. In other words: $$\frac{1 - \cos \alpha + \sin \alpha}{1 + \cos \alpha + \sin \alpha} = \sqrt{\frac{1-\cos \alpha}{1 + \cos \alpha}}$$ If you take a closer look you'll notice that the RHS is the formula for tangent of a half-angle. Actually if you want to prove those, nothing but the addition formulas are required.
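The thread's opening identity, $\sin(x-y)\sin(x+y) = (\sin x - \sin y)(\sin x + \sin y)$, is also easy to spot-check numerically. A quick Python comparison of both sides at a few points (my addition, and no spoiler of the proof):

```python
import math

pairs = [(0.3, 1.1), (2.0, -0.7), (5.0, 4.0)]
ok = all(
    abs(math.sin(x - y) * math.sin(x + y)
        - (math.sin(x) - math.sin(y)) * (math.sin(x) + math.sin(y))) < 1e-12
    for x, y in pairs
)
print(ok)  # True
```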
\begin{align}
\frac{\mathrm d}{\mathrm dx}(x^x) &= x\cdot x^{x-1} &\text{Power Rule?}&\text{False}\\
\frac{\mathrm d}{\mathrm dx}(x^x) &= x^{x}\ln(x) &\text{Exponential Rule?}&\text{False}\\
\frac{\mathrm d}{\mathrm dx}(x^x) &= x\cdot x^{x-1}+x^{x}\ln(x) &\text{Sum of these?}&\text{True}\\
\end{align}

-

This is a special case of $\frac{d}{dx} h(f(x),g(x)) = \partial_1 h\, f' + \partial_2 h\, g'$ –  ronno Dec 20 '13 at 13:56

$$\frac{\pi}{2}=1+2\sum_{k=1}^{\infty}\frac{\eta(2k)}{2^{2k}}$$ $$\frac{\pi}{3}=1+2\sum_{k=1}^{\infty}\frac{\eta(2k)}{6^{2k}}$$ where $\eta(n)=\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^{n}}$

-

For all $n\in\mathbb{N}$ with $n\neq1$, $$\prod_{k=1}^{n-1}2\sin\frac{k \pi}{n} = n$$ For some reason, the proof involves complex numbers and polynomials.

-

Best near miss $$\int_{0}^{\infty }\cos\left ( 2x \right )\prod_{n=1}^{\infty}\cos\left ( \frac{x}{n} \right )~\mathrm dx\approx \frac{\pi}{8}-7.41\times 10^{-43}$$ One can easily be fooled into thinking that it is exactly $\dfrac{\pi}{8}$.

References:

-

Let $\sigma(n)$ denote the sum of the divisors of $n$. If $$p=1+\sigma(k),$$ then $$p^a=1+\sigma(kp^{a-1})$$ where $a,k$ are positive integers and $p$ is a prime such that $p\not\mid k$.

-

$$27\cdot56=2\cdot756,$$ $$277\cdot756=27\cdot7756,$$ $$2777\cdot7756=277\cdot77756,$$ and so on.

-

\begin{align}\frac{64}{16}&=\frac{6\!\!/\,4}{16\!\!/}\\&=\frac41\\&=4\end{align} For more examples of these weird fractions, see "How Weird Are Weird Fractions?", Ryan Stuffelbeam, The College Mathematics Journal, Vol. 44, No. 3 (May 2013), pp. 202-209.
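The product identity $\prod_{k=1}^{n-1} 2\sin\frac{k\pi}{n} = n$ quoted above can likewise be checked by machine for small $n$ (my addition, not part of the thread):

```python
import math

def sine_product(n):
    # prod_{k=1}^{n-1} 2 sin(k pi / n)
    prod = 1.0
    for k in range(1, n):
        prod *= 2.0 * math.sin(k * math.pi / n)
    return prod

ok = all(abs(sine_product(n) - n) < 1e-9 for n in range(2, 13))
print(ok)  # True
```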
-

$$\sin \theta \cdot \sin \bigl(60^\circ - \theta \bigr) \cdot \sin \bigl(60^\circ + \theta \bigr) = \frac{1}{4} \sin 3\theta$$ $$\cos \theta \cdot \cos \bigl(60^\circ - \theta \bigr) \cdot \cos \bigl(60^\circ + \theta \bigr) = \frac{1}{4} \cos 3\theta$$ $$\tan \theta \cdot \tan \bigl(60^\circ - \theta \bigr) \cdot \tan \bigl(60^\circ + \theta \bigr) = \tan 3\theta$$

-

I just wanted to mention that your first identity is equivalent to the case $n=3$ of the formula for $\sin nx$ given there. (Just replace $\sin(60^{\circ}-\theta)$ by $\sin(\theta+120^{\circ})$.) –  Hans Lundmark Nov 4 '10 at 9:56

considering your first two identities the third should be $$\tan \theta \cdot \tan \bigl(60 - \theta \bigr) \cdot \tan \bigl(60 + \theta \bigr) = \tan 3\theta$$ –  Neves Mar 6 '11 at 16:08

$\textbf{Claim:}\quad \frac{\sin x}{n}=6$ for all $n,x$ ($n\neq 0$). $\textit{Proof:}\quad \frac{\sin x}{n}=\frac{\dfrac{1}{n}\cdot\sin x}{\dfrac{1}{n}\cdot n}=\frac{\operatorname{si}x}{1}=\text{six}.\quad\blacksquare$

-

$\tan^{-1}(1)+\tan^{-1}(2)+\tan^{-1}(3) = \pi$ (using the principal value), but if you blindly use the addition formula $\tan^{-1}(x) + \tan^{-1}(y) = \tan^{-1}\dfrac{x+y}{1-x y}$ twice, you get zero: $\tan^{-1}(1) + \tan^{-1}(2) = \tan^{-1}\dfrac{1+2}{1-1*2} =\tan^{-1}(-3)$; $\tan^{-1}(1) + \tan^{-1}(2) + \tan^{-1}(3) =\tan^{-1}(-3) + \tan^{-1}(3) =\tan^{-1}\dfrac{-3+3}{1-(-3)(3)} = 0$.

-

$$\lim_{\omega\to\infty}3=8$$ The "proof" is by rotation through $\pi/2$. More of a joke than an identity, I suppose.
-

Remind me of this: http://xkcd.com/184/ –  alex.jordan Nov 3 '13 at 17:31

$$2592=2^5 9^2$$

Found this in one of Dudeney's puzzle books

-

Here's an interesting one again $3435=3^3+4^4+3^3+5^5$

-

$$\begin{array}{rcrcl}
\vdots & & \vdots & & \vdots \\[1mm]
\int{1 \over x^{3}}\,{\rm d}x & = & -\,{1 \over 2}\,{1 \over x^{2}} & \sim & x^{-2} \\[1mm]
\int{1 \over x^{2}}\,{\rm d}x & = & -\,{1 \over x} & \sim & x^{-1} \\[1mm]
\int{1 \over x}\,{\rm d}x & = & \ln\left(x\right) & \sim & x^{0}\quad ? \\[1mm]
\int x^{0}\,{\rm d}x & = & x^{1} & \sim & x^{1} \\[1mm]
\int x\,{\rm d}x & = & {1 \over 2}\,x^{2} & \sim & x^{2} \\[1mm]
\vdots & & \vdots & & \vdots
\end{array}$$

-

Hmm, considering that logarithms get at the exponent, and x has a constant exponent ... Since $\ln\left(x^a\right)=a\ln\left(x\right)$ (the log of an expression equals the exponent times the log of the base), then $\ln\left(x^1\right)=1\ln\left(x\right)=x^0\ln\left(x\right)$ might be saying something to the effect that it's more important that your exponent is a constant, than the fact that the log of your base $\ln\left(x\right)$ is growing slowly.
– Travis Bemrose Sep 28 '13 at 10:11

$$\int_{-\infty}^{\infty}{\sin\left(x\right) \over x}\,{\rm d}x = \pi\int_{-1}^{1}\delta\left(k\right)\,{\rm d}k$$

-

$$\sqrt{\vphantom{\large A}2025\,} = 20 + 25 = 45$$

-

\begin{align} E &= \sqrt{\left(pc\right)^{2} + \left(mc^{2}\right)^{2}} = mc^{2} + \left[\sqrt{\left(pc\right)^{2} + \left(mc^{2}\right)^{2}} - mc^{2}\right] \\[3mm]&= mc^{2} + {\left(pc\right)^{2} \over \sqrt{\left(pc\right)^{2} + \left(mc^{2}\right)^{2}} + mc^{2}} = mc^{2} + {p^{2}/2m \over 1 + {\sqrt{\left(pc\right)^{2} + \left(mc^{2}\right)^{2}} - mc^{2} \over 2mc^{2}}} \\[3mm]&= mc^{2} + {p^{2}/2m \over 1 + {p^{2}/2m \over \sqrt{\left(pc\right)^{2} + \left(mc^{2}\right)^{2}} + mc^{2}}} = mc^{2} + {p^{2}/2m \over 1 + {p^{2}/2m \over 1 + {p^{2}/2m \over \sqrt{\left(pc\right)^{2} + \left(mc^{2}\right)^{2}} - mc^{2}}}} \end{align}

-

$$\sqrt{n^{\log n}}=n^{\log \sqrt{n}}$$

-

$a^{\log b} = b^{\log a}$ for $a$ and $b$ at least 1. –  Wok Nov 30 '10 at 10:03

You should have written $\sqrt{n^{\log n}}$ –  ypercube Feb 25 '11 at 22:13

-

Here is an Asian kid paradox - perhaps they will understand (apologies for not being strictly mathematical). If $$Study=No\;Fail$$ and $$No\;Study=Fail$$ then $$Study+No\;Study=Fail+No\;Fail\implies (1+No)Study=(1+No)Fail$$ Cancelling gives $$Study=Fail$$ Isn't that weird???

-

$1+No$ is $0$ in both sides. You can't cancel zero... –  CODE Jun 4 '13 at 16:47

-

I have one: In a $\Delta ABC$, $$\tan A+\tan B+\tan C=\tan A\tan B\tan C.$$

-

Also $\cot(A/2)+\cot(B/2)+ \cot(C/2)=\cot(A/2)\cot(B/2)\cot(C/2)$. –  N. S.
Apr 11 '13 at 22:39 add comment

We have by the block partition rule for determinants $$ \det \left[ \begin{array}{cc} U & R \\ L & D \end{array} \right] = \det U\cdot \det ( D-LU^{-1}R) $$ But if $U, R, L$ and $D$ commute we have that $$ \det \left[ \begin{array}{cc} U & R \\ L & D \end{array} \right] = \det (UD-LR) $$ - add comment

The following number is prime p = 785963102379428822376694789446897396207498568951 and p in base 16 is 89ABCDEF012345672718281831415926141424F7 which includes counting in hexadecimal, and digits of $e$, $\pi$, and $\sqrt{2}$. Do you think this is surprising or not?

$$11 \times 11 = 121$$ $$111 \times 111 = 12321$$ $$1111 \times 1111 = 1234321$$ $$11111 \times 11111 = 123454321$$ $$\vdots$$

- The prime is unsurprising -- the final F7 doesn't seem to mean anything, and about one in 111 numbers of that size is prime. So it's not very remarkable that there's a prime among the 256 40-hex-digit numbers that start with those particular 38 chosen digits. –  Henning Makholm Nov 20 '13 at 18:04

I remember that last from reading "The number devil"! And it works for other bases too; for a base $b$, until $\left(\sum_{n=0}^{b-1}\left(b^n\right)\right)^2=123...\ \text{digit } b-1\ ...321$. –  JMCF125 Nov 24 '13 at 11:06

$32768=(3-2+7)^6 / 8$
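Most of the numeric curiosities in this thread are one-liners to verify by machine. Here is a short script (Python, not part of the original thread) checking a few of them:

```python
# 2592 = 2^5 * 9^2 (from Dudeney)
assert 2 ** 5 * 9 ** 2 == 2592

# 3435 = 3^3 + 4^4 + 3^3 + 5^5
assert 3 ** 3 + 4 ** 4 + 3 ** 3 + 5 ** 5 == 3435

# sqrt(2025) = 45 = 20 + 25
assert 45 * 45 == 2025 and 20 + 25 == 45

# 32768 = (3 - 2 + 7)^6 / 8
assert (3 - 2 + 7) ** 6 // 8 == 32768

# Repunit squares: 11^2 = 121, 111^2 = 12321, ..., 11111^2 = 123454321
for k in range(2, 6):
    repunit = int("1" * k)
    digits = "".join(str(d) for d in range(1, k + 1))
    palindrome = int(digits + digits[-2::-1])
    assert repunit * repunit == palindrome

print("all identities check out")
```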
• 0 Custom Quests Lists (bad argument #1 to 'ipairs') Question

I vaguely remember a thread about the exact same issue I'm facing these days, but I can't find it using either the forum's search engine or Google, so I apologize in advance if this was already answered somewhere else. My problem, however, is not only the common "bad arguments to ipairs", which I believe deals with a missing list table, but not knowing how to customize the lists (dropdown menus) to my liking. For a few days now I've been trying to put the new quest window to good use, adding new missions and the like. However, I'd like to also be able to insert new lists in the episode tab, local and so forth. So, as a form of test list, I created a file called local_testequest_list.lub, following the steps of the official Dewata quest list, only changing the scrfilename = [[TesteQuest]], followed by a new entry in the quest/quest_function.lub file. I'm using the 2013-03-20eRagexe client and my files are in the data folder instead of some grf file. As far as troubleshooting goes, I tried changing back and forth to .lua files, another version of the quest_function.lub, as well as comparing names to make sure that I didn't get the syntax wrong. Sadly, nothing seems to solve my issue.

quest_function.lub: http://upaste.me/bf4f92077fc5050f
localquest/local_testequest_list.lub: http://upaste.me/618a92063f1e1c58
localquest/questinfo/l_teste_list.lub: http://upaste.me/2f83920824db004b

I'm not sure if custom lists are even possible, since I noticed that the local_begintutorialquest_list.lub that comes with the folder will cause errors too. I haven't seen any private servers using the new quest window to its fullest extent, so I also apologize if this is not possible due to client restrictions.

Share on other sites

• 0

Hello perculis, yes it is possible to add custom quest lub files. Most existing servers just add new tabs into an existing lub file. 
BUT with the NEMO patcher you can add your own lub files. Since the quest lubs are hardcoded into the client, NEO made a patch to add custom entries for it.

If you don't have the NEMO Patcher, you can download it here: http://herc.ws/board/topic/2905-nemo-client-patcher/

When you download it, you will need a .txt file that contains the paths and names of your custom lub files.

Example: I made a txt file called endlessro.txt and I want to have an EndlessRO quest tab ingame. Then you have to add these 2 lines into the txt file:

localquest\local_endlessroquest_list             <------------ Just an Example
localquest\questinfo\l_endlessro_list            <------------ Just an Example

If you want them displayed ingame in the Episode Tab, you need to write this in the txt file:

epquest\ep_146quest_list                         <------------ Just an Example
epquest\questinfo\epsoid146_list                 <------------ Just an Example

When the txt file is ready, open up NEMO, select the "Read custom quest lubs" patch, browse to your txt file and patch it.

After you are done with that, be sure to add the correct values into the quest_function.lub, which is located in the quest/ lub folder. 
makeLocalQuestList(LOCAL_EndlessroQuest_List)
QuestTable.EndlessroQuest = EndlessroQuest_List

After you are done with the quest_function.lub, be sure to create the correct custom quest lubs. Make sure the names all match.

/localquest/local_endlessroquest_list

LOCAL_EndlessroQuest_List = {
	{
		name = [[EndlessRO Quests]],
		imagefile = [[ep_test_sample.bmp]],
		list = {
			{
				name = [[Test entry 1]],
				list = {
					{
						name = [[Test entry 2]],
						scrfilename = [[EndlessroQuest]],
						questID = 15000,
					},
				},
			}
		},
	},
}

/localquest/quest/l_endlessro_list.lub

EndlessroQuest_List = {
	[15000] = {
		NPCFromName = [[4_MAL_CAPTAIN]],
		NPCFromMap = [[malaya]],
		NPCFromSpr = [[4_MAL_CAPTAIN]],
		NPCFromX = 290,
		NPCFromY = 340,
		NPCToName = [[Guard Leader]],
		NPCToMap = [[malaya]],
		NPCToSpr = [[4_MAL_CAPTAIN]],
		NPCToX = 290,
		NPCToY = 340,
		Item = [[]],
		PrizeItem = [[ < image = "6497">Lesser Agimat<end> (3) < image = "6497">Lesser Agimat(PC)<end> (6)]],
		Title = [[Hello Neo Its Working]],
		Info = [[Hello Neo Its Working]],
		Hunt1 = [[ < link = "PORING">Poring<end>]],
		Hunt2 = [[]],
		Hunt3 = [[]],
		Time = [[0]],
		LV = [[0]],
	},
}

Share on other sites

• 0

> (quoting ossi0110's guide above)

I wonder how I missed that program, it looks like such a great tool and a nifty alternative. My custom lists are nice and working already after following your simple step-by-step guide. Thank you so much for introducing me to it, ossi0110. Before I mark this as solved, just one more question: any idea on how to insert a quest list in the event tab? 
I hope I'm not asking for too much, it's ok if it's not possible, I will take what I can get. As far as I can tell the quest_function.lub file only has options for function makeEPQuestList and function makeLocalQuestList, so I'm a tidbit confused. Edited by perculis

Share on other sites

• 0

Nope, currently I didn't get the EVENT tab to work, still working on it.

Share on other sites

• 0

>_< I'm working on it too, and I think I have a key, but I don't know much "hardcode" stuff. Need help.

Share on other sites

• 0

Need to be pinned !

Share on other sites

• 0

I pinned this @@vykimo, ossi0110's response is an actual thorough guide into adding Custom Quests.

Share on other sites

• 0

I want to use the "recommended tab" and the "event tab", but I don't know much about "hardcode" stuff :'(

Share on other sites

• 0

> (quoting ossi0110's guide above)

If I use <image> in my lub file, the client crashes. Any alternatives for this?

Share on other sites

• 0

When viewing some of the quests that are in the quest window, the client crashes. 
I realized it's the ones that use <image>, for example:

[9155] = {
	NPCFromName = [[Tribal Chief Paiko]],
	NPCFromMap = [[dew_in01]],
	NPCFromSpr = [[4_M_DEWZATICHIEF]],
	NPCFromX = 15,
	NPCFromY = 49,
	NPCToName = [[Tribal Chief Paiko]],
	NPCToMap = [[dew_in01]],
	NPCToSpr = [[4_M_DEWZATICHIEF]],
	NPCToX = 15,
	NPCToY = 49,
	Item = [[ < image = "6405">Cendrawasih Feather<\end> (15)]],
	PrizeItem = [[]],
	QuickInfo  = [[]],
	Hunt1 = [[]],
	Hunt2 = [[]],
	Hunt3 = [[]],
	Time = [[0]],
	LV = [[0]],
},

Then this happens:

I'm guessing I am missing some sort of lua file, but I'm not sure which. Any help? The client is set to show errors, however there is no error shown with these ones. Edited by iraciz
# Real-Time Robotics Framework

### Sidebar

getting_started:tutorials:oneaxis

# Control a Single Motor

This tutorial will show you how to control a single motor using EEROS. You can find the code in the directory with examples. Navigate to examples/simpleMotorController.

## Part 1: Theoretical Background

The motor's position is measured by an encoder. After differentiating this signal we obtain the velocity.

Control loop

For a good dynamical stiffness we choose f0 = fs / 20, where fs is the sampling frequency. With fs = 1kHz we get f0 = 50Hz. With ω0 = 2·π·f0 the parameters for the position and velocity controller, kp and kv respectively, will be as follows:

kp = ω0 / (2·D) and kv = 2·D·ω0

D is the damping factor and we choose it as 0.9.

The input of the velocity controller is the difference between the reference and the measured velocity. Additionally, the feed-forward velocity is added. The output of this controller is an acceleration. This value is then multiplied by the inertia and divided by the motor constant in order to obtain a current reference value to control the motor.

## Part 2: Experimental Setup

As a processing platform we use a regular PC (x86-64) together with a National Instruments card: PCIe-6251 (M-Series). The card requires the comedi library together with the EEROS hardware wrapper, see Hardware Libraries. As an alternative we use our cb20 controller board (http://wiki.ntb.ch/infoportal/embedded_systems/imx6/cb) together with flink (http://www.flink-project.ch) and the appropriate EEROS hardware wrapper, see Hardware Libraries. For both alternatives a maxon motor controller (50V / 5A) delivers the necessary power.

The motor we use has the following properties:

| Property | Value | Unit |
|---|---|---|
| inertia | 9.49 | kgm2 |
| motor constant | 16.3·10-3 | Nm/A |
| encoder resolution | 500 | |

## Part 3: Test Application

In the EEROS library you will find a directory with examples. Navigate to examples/simpleMotorController. You will find two different hardware configuration files. 
• HalSimpleMotorControllerComedi.json

Start the application by choosing the appropriate configuration file, e.g.:

$ ./simpleMotorController -c HalSimpleMotorControllerComedi.json

## Part 4: Implementation

### Control System

The control system declares in MyControlSystem.hpp all the necessary blocks as given in the picture at the top of this page. Those blocks are then defined in MyControlSystem.cpp, connected together, and added to a time domain. Finally, the time domain is added to the executor.

### Safety System

Safety levels and events are declared in MySafetyProperties.hpp. MySafetyProperties.cpp initializes these objects, defines critical inputs and outputs, defines level actions, and adds the levels to the safety system. The levels and the events causing transitions between those levels are shown in the next figure.

Safety levels and events

Two critical inputs are defined: "emergency" and "readySig1". "enable" is a critical output. Critical inputs and outputs are checked and set by each safety level. For example, "enable" is set to true as soon as the safety level is equal to or higher than powerOn. "emergency" is ignored in the two lowest levels; in higher levels it triggers a transition to the level emergency.

### Sequencer

The sequencer runs a sequence which turns the motor several steps forward. After 20 seconds it will position the motor back to some base position and restart the process.
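The gain formulas from Part 1 can be sanity-checked with a few lines of code (a sketch in Python; the EEROS application itself is C++, and these numbers only restate the formulas above):

```python
import math

fs = 1000.0        # sampling frequency [Hz]
f0 = fs / 20.0     # chosen bandwidth: f0 = fs / 20 = 50 Hz
D = 0.9            # damping factor

omega0 = 2.0 * math.pi * f0

kp = omega0 / (2.0 * D)  # position controller gain
kv = 2.0 * D * omega0    # velocity controller gain

print(f0, round(kp, 2), round(kv, 2))  # 50.0 174.53 565.49
```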
# Relationship between Coefficients of Orthogonal Polynomial and Normal regression

This is more a question to help me understand what is going on rather than an application. In normal regression we have $$\mathbf{Y=X} \mathbf{\beta} + \varepsilon$$ Now the part that really confuses me: a regression with orthogonal polynomials is just $$\mathbf{Y= \phi(X)} \mathbf{W}+ \varepsilon$$ where $\mathbf{\phi(X) = P^{T} X}$ and $W= \mathbf{P\beta}$, and where $P$ is the orthonormal change-of-basis matrix that you can obtain in R using contr.poly(). The transformations $P$ that are usually chosen also appear to centre the data?

Also, if I am correct, how exactly is this change-of-basis matrix determined?
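One concrete way to see the relationship is numerically (a sketch in Python with NumPy rather than R, using a QR factorisation of the raw polynomial design matrix as the orthonormal basis — loosely analogous to what contr.poly() produces, up to centring and scaling). With $X = QR$, the orthogonal-basis coefficients satisfy $W = R\beta$, and both bases give identical fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=50)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(scale=0.1, size=50)

# Raw polynomial design matrix (Vandermonde): columns 1, x, x^2
X = np.vander(x, N=3, increasing=True)

# Orthonormalise the columns: X = Q R, where Q has orthonormal columns
Q, R = np.linalg.qr(X)

# Ordinary least squares in each basis
beta = np.linalg.lstsq(X, y, rcond=None)[0]  # raw-polynomial coefficients
W = Q.T @ y                                  # orthogonal-basis coefficients

# Same fitted values, and the coefficient vectors are related by W = R beta
assert np.allclose(Q @ W, X @ beta)
assert np.allclose(W, R @ beta)
```

So the change-of-basis matrix is determined entirely by the factorisation that orthonormalises the design matrix; the fit itself is unchanged.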
# Problem 23 Chapter 2. Evans PDE 2nd edition

This is a problem from Evans PDE, 2nd Edition, Chapter 2, Problem 23.

Let $S$ denote the square lying in $\Bbb R\times (0,\infty)$ with corners at the points $(0,1),(1,2),(0,3),(-1,2)$. Define $$f(x,t):=\left \{ \begin{array}{ll} \ -1, \text{ for } (x,t)\in S\cap \{ t>x+2\} \\ 1, \text{ for } (x,t)\in S\cap \{ t<x+2\} \\ \ 0, \text{ otherwise. } \end{array} \right.$$ Assume $u$ solves $$\left \{ \begin{array}{ll} \ u_{ tt}-u_{ xx}=f, \text{ in } \Bbb R\times (0,\infty) \\ \ u=g,u_t=0,\text{ on } \Bbb R\times\{ t=0\}. \end{array} \right.$$ Describe the shape of $u$ for times $t>3$.

By Duhamel's principle, for any $(x, t)$, we have $$u(x,t) = \int^t_0 \int^{x + (t-s) }_{x-(t-s)} f(y,s)\, dy\, ds.$$ Referring to the picture, we see that for $t_0 \ge 3$, if $(x_0,t_0)$ is on the right hand side of the grey area, then the domain of dependence for $u(x_0,t_0)$ does not intersect the support of $f$ and thus $u(x_0,t_0) = 0$. Likewise, if $(x_0,t_0)$ is on the left hand side of the grey area, then the domain of dependence of $u(x_0, t_0)$ contains the entire support of $f$, so since $f$ is balanced, the integral will be zero. Because of this, the only "interesting" region is the grey region. As pictured, for $(x_0, t_0)$ in the grey region, $u(x_0, t_0)$ depends only on the length $d = d(x_0, t_0)$ of the orange line segment, because it determines how much of the support of $f$ lies in the domain of dependence of $u(x_0,t_0)$ (the blue/red region is the intersection of the domain of dependence with the support of $f$). In this region we see that if $d = 0$, we should get $u(x_0,t_0) = 0$ and if $d = \sqrt 2$ then $u(x_0, t_0) = 0$. Further, as $d$ grows from $0$ to $\tfrac{\sqrt 2}2$ the value of $u$ is simply the amount of blue area. For $\tfrac{\sqrt 2}{2} \le d \le \sqrt 2$, $u$ is the blue area minus the red area. 
Calculating the area, we see $$u(x_0, t_0) = \left\{ \begin{matrix} d(x_0,t_0) \sqrt 2, & 0 \le d(x_0,t_0) \le \tfrac{\sqrt{2}}{2} \\ \tfrac{\sqrt 2}{2}\sqrt 2 - \sqrt 2(\tfrac{\sqrt 2}2 - (\sqrt 2 - d(x_0,t_0))), & \tfrac{\sqrt{2}}{2} \le d(x_0,t_0) \le \sqrt 2. \end{matrix} \right.$$ or simplifying a bit $$u(x_0, t_0) = \left\{ \begin{matrix} d(x_0,t_0) \sqrt 2, & 0 \le d(x_0,t_0) \le \tfrac{\sqrt{2}}{2} \\ 2 - d(x_0, t_0) \sqrt 2, & \tfrac{\sqrt{2}}{2} \le d(x_0,t_0) \le \sqrt 2. \end{matrix} \right.$$ Pictorially, this $u$ looks like a triangular spike along the line between $t = x +1$ and $t = x+3$. Solving explicitly for $d(x_0, t_0)$ is not hard, but the algebra is somewhat ugly and the process is in no way edifying.
SMS scnews item created by Ulrich Thiel at Wed 8 Aug 2018 1807 Type: Seminar Distribution: World Expiry: 19 Sep 2018 Calendar1: 17 Aug 2018 1200-1300 CalLoc1: Carslaw 375 CalTitle1: Scherich -- Discrete Representations of the Braid Groups Auth: thiel@202-159-175-70.dyn.iinet.net.au (uthi9031) in SMS-WASM # Algebra Seminar: Scherich -- Discrete Representations of the Braid Groups Nancy Scherich (UC Santa Barbara) Friday 17 August, 12-1pm, Place: Carslaw 375 Title: Discrete Representations of the Braid Groups Abstract: Many well known representations of the braid groups are parameterized. Using a little algebraic number theory, I will show how to carefully choose evaluations of the parameter so that the image is a discrete group, and sometimes lands in a lattice! It is exciting to see how algebraic techniques give rise to more geometric results. Actions:
# Computing Conditional Probabilities of Brownian Motion with Strict Inequalities

I was doing some reading after recently finishing a course in introductory stochastic processes, where we finished by talking about Gaussian processes and Brownian motion, and came across a problem I have no idea how to solve.

Let $$W_t$$ be standard Brownian motion. Find $$\mathbb{P}\left(W_{4}<2\mid W_{5}>1\right)$$ and $$\mathbb{P}\left(W_{5}>1\mid W_{4}<2\right)$$.

My first thought was to exploit the independence of increments by writing $$W_{5}=W_{5}-W_{4}+W_{4}$$ but I'm having trouble applying this idea to the first conditional probability, as we are conditioning on $$W_{5}$$. Could someone please elaborate on how the first conditional probability could be found?

If I apply this property to the second, would it be true that I will have: $$\mathbb{P}\left(W_{5}>1\mid W_{4}<2\right)=\mathbb{P}\left(W_{5}-W_{4}>1\mid W_{4}<2\right)+\mathbb{P}\left(W_{4}>1\mid W_{4}<2\right)$$ But due to independence of increments, we have that $$W_{5}-W_{4}$$ is independent of $$W_{4}$$ and so the above would reduce to $$\mathbb{P}\left(W_{5}>1\mid W_{4}<2\right)=\mathbb{P}\left(W_{5}-W_{4}>1\right)+\mathbb{P}\left(1<W_{4}<2\right)$$ Then we could exploit the fact that increments are normally distributed to have that $$\left(W_{5}-W_{4}\right)\sim\mathcal{N}\left(0,1\right)$$ to get $$\mathbb{P}\left(W_{5}>1\mid W_{4}<2\right)=1-\Phi(1)+\left(\Phi\left(1\right)-\Phi\left(\frac{1}{2}\right)\right)$$ Where I have transformed $$W_{4}$$ to have a standard normal distribution. Is what I have done for the second conditional probability correct?

• Aww hell yeah! I love brownies! – clathratus Oct 5 '18 at 1:13
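For intuition, both conditional probabilities can be estimated by simulation. Below is a quick Monte Carlo sketch (Python, not part of the question) using the decomposition $W_5 = W_4 + (W_5 - W_4)$ with $W_4 \sim \mathcal{N}(0,4)$ and an independent $\mathcal{N}(0,1)$ increment; comparing any closed-form attempt against these estimates is an easy way to check it:

```python
import random

random.seed(42)
n = 200_000

count_w5_gt1 = 0
count_w4_lt2 = 0
count_both = 0

for _ in range(n):
    # W_4 ~ N(0, 4) (sd = 2); increment W_5 - W_4 ~ N(0, 1), independent of W_4
    w4 = random.gauss(0.0, 2.0)
    w5 = w4 + random.gauss(0.0, 1.0)
    if w5 > 1:
        count_w5_gt1 += 1
    if w4 < 2:
        count_w4_lt2 += 1
    if w5 > 1 and w4 < 2:
        count_both += 1

p_w4lt2_given_w5gt1 = count_both / count_w5_gt1  # P(W_4 < 2 | W_5 > 1)
p_w5gt1_given_w4lt2 = count_both / count_w4_lt2  # P(W_5 > 1 | W_4 < 2)

print(p_w4lt2_given_w5gt1, p_w5gt1_given_w4lt2)
```

Since strict inequalities on continuous variables have the same probability as non-strict ones, the estimates are unaffected by the boundary cases.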
# optimisers

##### optimisation methods

Functions to set up optimisers (which find parameters that maximise the joint density of a model) and change their tuning parameters, for use in opt(). For details of the algorithms and how to tune them, see the SciPy optimiser docs or the TensorFlow optimiser docs.

##### Usage

nelder_mead()
powell()
cg()
bfgs()
newton_cg()
l_bfgs_b(maxcor = 10, maxls = 20)
tnc(max_cg_it = -1, stepmx = 0, rescale = -1)
cobyla(rhobeg = 1)
slsqp()
gradient_descent(learning_rate = 0.01)
adadelta(learning_rate = 0.001, rho = 1, epsilon = 1e-08)
adagrad(learning_rate = 0.8, initial_accumulator_value = 0.1)
adagrad_da(learning_rate = 0.8, global_step = 1L, l1_regularization_strength = 0, l2_regularization_strength = 0)
momentum(learning_rate = 0.001, momentum = 0.9, use_nesterov = TRUE)
adam(learning_rate = 0.1, beta1 = 0.9, beta2 = 0.999, epsilon = 1e-08)
ftrl(learning_rate = 1, learning_rate_power = -0.5, initial_accumulator_value = 0.1, l1_regularization_strength = 0, l2_regularization_strength = 0)
proximal_gradient_descent(learning_rate = 0.01, l1_regularization_strength = 0, l2_regularization_strength = 0)
proximal_adagrad(learning_rate = 1, initial_accumulator_value = 0.1, l1_regularization_strength = 0, l2_regularization_strength = 0)
rms_prop(learning_rate = 0.1, decay = 0.9, momentum = 0, epsilon = 1e-10)

##### Arguments

- maxcor: maximum number of 'variable metric corrections' used to define the approximation to the hessian matrix
- maxls: maximum number of line search steps per iteration
- max_cg_it: maximum number of hessian * vector evaluations per iteration
- stepmx: maximum step for the line search
- rescale: log10 scaling factor used to trigger rescaling of objective
- rhobeg: reasonable initial changes to the variables
- learning_rate: the size of steps (in parameter space) towards the optimal value
- rho: the decay rate
- epsilon: a small constant used to condition gradient updates
- initial_accumulator_value: initial value of the 'accumulator' used to tune the algorithm
- global_step: the current training step number
- l1_regularization_strength: L1 regularisation coefficient (must be 0 or greater)
- l2_regularization_strength: L2 regularisation coefficient (must be 0 or greater)
- momentum: the momentum of the algorithm
- use_nesterov: whether to use Nesterov momentum
- beta1: exponential decay rate for the 1st moment estimates
- beta2: exponential decay rate for the 2nd moment estimates
- learning_rate_power: power on the learning rate, must be 0 or less
- decay: discounting factor for the gradient

##### Details

cobyla() does not provide information about the number of iterations or convergence, so these elements of the output are set to NA.

##### Value

an optimiser object that can be passed to opt.

##### Examples

# NOT RUN {
# use optimisation to find the mean and sd of some data
x <- rnorm(100, -2, 1.2)
mu <- variable()
sd <- variable(lower = 0)
distribution(x) <- normal(mu, sd)
m <- model(mu, sd)

# configure optimisers & parameters via 'optimiser' argument to opt
opt_res <- opt(m, optimiser = bfgs())

# compare results with the analytic solution
opt_res$par
c(mean(x), sd(x))
# }

Documentation reproduced from package greta, version 0.3.0, License: Apache License 2.0
# Chi-Square Test

## Closeness of observed data to expected data of the model

Suppose you wanted to evaluate a recent statistic stating that iOS represents 32% and Android 51% of active smart phones. You would like to know if the statistic actually reflects the distribution of phones among your friends. How could you evaluate the data you collect to see if it supports this hypothesis? Look to the end of the lesson for the answer.

### Chi-Squared Statistic

The Greek letter "chi", written as \begin{align*}\chi\end{align*}, is the symbol used to identify a chi-square statistic, which we will use here to evaluate how well a set of observed data fits a corresponding expected set. Conducting a Chi-Square test is much like conducting a Z-test or T-test as we did in Chapter 10. We will follow the same basic series of steps and compare a calculated value to a chart to evaluate the probability of getting the results we have if the null hypothesis is true, just as we did with the Z and F tests. Additionally, as was the case with F-testing, we will be evaluating the number of degrees of freedom and choosing values from a chart based on that number. The primary difference between a Chi-Square test and the tests we have worked with before is that the previous tests were all primarily dedicated to comparing single parameters, whereas Chi-Square tests are used to determine if two random variables are independent or related, and so deal with multiple values for each variable. Additionally, the Chi-Square statistic is useful for looking at categorical data rather than quantitative data. 
The Chi-Square statistic is actually pretty straightforward to calculate:

\begin{align*}\chi^2=\sum \frac{(observed - expected)^2}{expected}\end{align*}

#### Determining the Validity of a Study

The American Pet Products Association conducted a survey in 2011 and determined that 60% of dog owners have only one dog, 28% have two dogs, and 12% have three or more. Supposing that you have decided to conduct your own survey and have collected the data below, determine whether your data supports the results of the APPA study. Use a significance level of 0.05.

Data: Out of 129 dog owners, 73 had one dog, 38 had two dogs, and 18 had three or more.

• Step 1: Clearly state the null and alternative hypotheses

\begin{align*}H_0\end{align*}: The survey agrees with the sample.

\begin{align*}H_1\end{align*}: The survey does not agree with the sample.

• Step 2: Identify an appropriate test and significance level

Since we are comparing two sets of data, and not just a single value, a Chi-Square test is appropriate. In the absence of a stated significance level in the problem, we assume the default 0.05. 
• Step 3: Analyze sample data

Create a table to organize data and compare the observed data to the expected data:

|  | One Dog | Two Dogs | 3+ Dogs | TOTAL |
|---|---|---|---|---|
| Observed | 73 | 38 | 18 | 129 |
| Expected |  |  |  |  |

To identify the expected values, multiply the expected % by the total number observed:

|  | One Dog | Two Dogs | 3+ Dogs | TOTAL |
|---|---|---|---|---|
| Observed | \begin{align*}73\end{align*} | \begin{align*}38\end{align*} | \begin{align*}18\end{align*} | \begin{align*}129\end{align*} |
| Expected | \begin{align*}0.60 \times 129=77.4\end{align*} | \begin{align*}0.28 \times 129=36.1\end{align*} | \begin{align*}0.12 \times 129=15.5\end{align*} | \begin{align*}129\end{align*} |

To calculate our chi-square statistic, we need to sum the squared difference between each observed and expected value divided by the expected value:

\begin{align*}\chi^2 &=\sum \frac{(observed - expected)^2}{expected} \\ \chi^2 &=\frac{(73 - 77.4)^2}{77.4} + \frac{(38 - 36.1)^2}{36.1} + \frac{(18 - 15.5)^2}{15.5} \\ \chi^2 &=\frac{(-4.4)^2}{77.4} + \frac{(1.9)^2}{36.1} + \frac{(2.5)^2}{15.5} \\ \chi^2 &=\frac{19.36}{77.4} + \frac{3.61}{36.1} + \frac{6.25}{15.5} \\ \chi^2 &=0.2501 + 0.1000 + 0.4032 \\ \chi^2 &=0.7533\end{align*}

Now that we have our chi-square statistic, we need to compare it to the chi-square value for the significance level 0.05. We can use a reference table such as the one below, or a chi-square value calculator. Just as with the T-tests in Chapter 10, we will need to know the degrees of freedom, which equal the number of observed category values minus one. In this case, there are three category values: one dog, two dogs, and three or more dogs. The degrees of freedom, therefore, are \begin{align*}3 - 1 = 2\end{align*}. Using the calculator or the table, we find that the critical value for a 0.05 significance level with \begin{align*}df = 2\end{align*} is 5.9915. That means that 95 times out of 100, a survey that agrees with a sample will have a \begin{align*}\chi ^2\end{align*} statistic of 5.9915 or less. 
If our chi-square value is greater than 5.9915, then results like ours would occur 5 or fewer times out of 100 if the null hypothesis were true, and we would reject the null hypothesis. Our chi-square statistic is only 0.7533, so we will not reject the null hypothesis.

• Step 4: Interpret the results

Since our chi-square statistic was less than the critical value, we do not reject the null hypothesis, and we can say that our survey data does support the data from the APPA.

#### Real-World Application: Car Insurance

Rachel told Eric that the reason her car insurance is less expensive is that female drivers get in fewer accidents than male drivers. Specifically, she says that male drivers are held responsible in 65% of accidents involving drivers under 23.

Credit: Derek Hatfield
Source: https://www.flickr.com/photos/loimere/6772341183

If Eric does some research of his own and discovers that 46 out of the 85 accidents he investigates involve male drivers, does his data support Rachel's hypothesis?

• Step 1: Clearly state the null and alternative hypotheses

\begin{align*}H_0\end{align*}: The survey agrees with the sample.

\begin{align*}H_1\end{align*}: The survey does not agree with the sample.

• Step 2: Identify an appropriate test and significance level

Since we are comparing two sets of data, and not just a single value, a Chi-Square test is appropriate. In the absence of a stated significance level in the problem, we assume the default 0.05. 
• Step 3: Analyze sample data

Create a table to organize data and compare the observed data to the expected data:

|  | Male Drivers | Female Drivers | TOTAL |
|---|---|---|---|
| Observed | 46 | 39 | 85 |
| Expected |  |  |  |

To identify the expected values, multiply the expected % by the total number observed:

|  | Male Drivers | Female Drivers | TOTAL |
|---|---|---|---|
| Observed | \begin{align*}46\end{align*} | \begin{align*}39\end{align*} | \begin{align*}85\end{align*} |
| Expected | \begin{align*}0.65 \times 85=55.25\end{align*} | \begin{align*}0.35 \times 85=29.75\end{align*} | \begin{align*}85\end{align*} |

To calculate our chi-square statistic, we need to sum the squared differences between each observed and expected value divided by the expected value:

\begin{align*}\chi^2 &= \sum \frac{(observed - expected)^2}{expected} \\ \chi^2 &= \frac{(46 - 55.25)^2}{55.25} + \frac{(39 - 29.75)^2}{29.75} \\ \chi^2 &= \frac{(-9.25)^2}{55.25} + \frac{(9.25)^2}{29.75} \\ \chi^2 &= \frac{85.5625}{55.25} + \frac{85.5625}{29.75} \\ \chi^2 &= 1.5486 + 2.8760 \\ \chi^2 &= 4.4246\end{align*}

Now that we have our chi-square statistic, we need to compare it to the chi-square critical value for 0.05 with one degree of freedom, since we have two categories. Using the chi-square value calculator, we find the critical value to be 3.8414. The critical value indicates that only 0.05, or 5%, of values would be as high as 3.8414. If the \begin{align*}\chi ^2\end{align*} of our data is greater than 3.8414, then fewer than 5 times out of 100 would we expect to get that result if the null hypothesis is true.

• Step 4: Interpret your results

Our calculated data value of \begin{align*}\chi^2 = 4.4246\end{align*} is greater than the 0.05 significance level critical value of 3.8414, so we reject the null hypothesis. The data that Eric observed does not support the distribution that Rachel claimed. 
#### Real-World Application: Car Magazine

Credit: PGHAuto2010
Source: https://www.flickr.com/photos/47591094@N08/4354783689

Credit: George Rigato
Source: https://www.flickr.com/photos/georgerigato/2880099644

The online car magazine "Camaro5.com" claims that 51% of Ford Mustang or Chevy Camaro owners own Camaros. Ellen is a Mustang lover and decides to do some research. If Ellen collects the data below, does her data support the magazine's claim?

Data: Mustang owners: 28, Camaro owners: 34

• Step 1: Clearly state the null and alternative hypotheses

\begin{align*}H_0\end{align*}: The survey agrees with the sample.

\begin{align*}H_1\end{align*}: The survey does not agree with the sample.

• Step 2: Identify an appropriate test and significance level

Since we are comparing two sets of data, and not just a single value, a Chi-Square test is appropriate. In the absence of a stated significance level in the problem, we assume the default 0.05.

• Step 3: Analyze sample data

We will start by creating a table to organize our data:

|  | Mustang | Camaro | TOTAL |
|---|---|---|---|
| Observed | \begin{align*}28\end{align*} | \begin{align*}34\end{align*} | \begin{align*}62\end{align*} |
| Expected | \begin{align*}0.49 \times 62=30.4\end{align*} | \begin{align*}0.51 \times 62=31.6\end{align*} | \begin{align*}62\end{align*} |

Now we can calculate our chi statistic:

\begin{align*}\chi^2 &= \sum \frac{(observed - expected)^2}{expected} \\ \chi^2 &= \frac{(28 - 30.4)^2}{30.4} + \frac{(34 - 31.6)^2}{31.6} \\ \chi^2 &= \frac{(-2.4)^2}{30.4} + \frac{(2.4)^2}{31.6} \\ \chi^2 &= .3718\end{align*}

The chi-square critical value for \begin{align*}df=1\end{align*} and a significance level of 0.05 is 3.8414 (the same as in Example B).

• Step 4: Interpret your results

Our calculated data value of \begin{align*}\chi^2=0.3718\end{align*} is significantly less than the 0.05 significance level critical value of 3.8414, so we fail to reject the null hypothesis. 
This means that, unfortunately for Ellen, her research did not allow her to reject the claim that Camaros are more popular.

#### Earlier Problem Revisited

Suppose you wanted to evaluate a recent statistic stating that iOS represents 32% and Android 51% of active smartphones. You would like to know if the statistic actually reflects the distribution of phones among your friends. How could you evaluate the data you collect to see if it supports this hypothesis?

You could evaluate the hypothesis by collecting data from a simple random sample (SRS) of cell phone owners and using a chi-square goodness of fit test to see if your data supports the hypothesis.

### Examples

Examples 1-5 refer to the following data:

Tuscany claims that 70% of dog or cat owners own a dog, and 30% own a cat. Sayber decides to test her claim and learns that 23 of the 40 people he asks own dogs, and 17 own cats.

#### Example 1

What kind of test could you use to see if Sayber’s data supports Tuscany’s claim?

A chi-square goodness of fit test would be appropriate.

#### Example 2

What would be the null and alternative hypotheses?

The null hypothesis, \begin{align*}H_0\end{align*}, would be that the observed data supports Tuscany’s claimed distribution; the alternative hypothesis, \begin{align*}H_1\end{align*}, would be that it does not.

#### Example 3

What would be the expected values of dog and cat owners?

The expected number of dog owners, according to Tuscany's claim, would be 70% of the 40 people that Sayber polled, or 28 dog owners. The expected number of cat owners would be 30% of the 40 people polled, or 12.

#### Example 4

What is the chi-square statistic of the observed data?

The \begin{align*}\chi ^2\end{align*} statistic is the sum of the squared differences between the observed and expected values, divided by the expected values:

\begin{align*}\chi^2 &= \frac{(23 - 28)^2}{28} + \frac{(17 - 12)^2}{12} \\ &= \frac{25}{28} + \frac{25}{12} \\ &= 0.8929 + 2.0833 \\ \chi^2 & = 2.9762\end{align*}

#### Example 5

Assuming a 0.1 significance level, does Sayber’s data support Tuscany’s claim?
The critical value of chi-square for 1 degree of freedom at a significance level of 0.1 is 2.705. Since our calculated chi-square statistic of 2.9762 is more extreme than the critical value, we reject the null hypothesis and say that Sayber’s data does not support Tuscany’s claim.

### Review

Questions 1-7 refer to the following:

Evan claims that 15% of computer gamers have played “Team Fortress 2”, and 35% have played “World of Warcraft”. Evan’s brother is skeptical of those figures and decides to do some research. He discovers that 60 of the 200 computer gamers he polls have played “Team Fortress 2”, and 90 have played “World of Warcraft”.

1. Create a table to organize the data and prepare for hypothesis testing.
2. What sort of test would be appropriate to determine if the observed data supports Evan’s claim?
3. What would be \begin{align*}H_0\end{align*} and \begin{align*}H_1\end{align*}?
4. What would be the \begin{align*}\chi ^2\end{align*} statistic for the observed data?
5. How many degrees of freedom are there in the variable “played game”?
6. Assuming a significance level of 0.05, what is the \begin{align*}\chi ^2\end{align*} critical value?
7. Does the observed data support Evan’s claim? Explain your findings.

Questions 8-15 refer to the following:

Mack claims that 84% of street racers drive import cars, and 16% drive domestic muscle cars. Abbi likes domestic cars and thinks Mack is overstating the percentage of imports, so she does some research of her own and finds that 57 of the street racers she interviewed drive imports, and 31 drive American muscle.

8. Create a table to organize the data and prepare for hypothesis testing.
9. What sort of test would be appropriate to determine if the observed data supports Mack’s claim?
10. What would be \begin{align*}H_0\end{align*} and \begin{align*}H_1\end{align*}?
11. What would be the \begin{align*}\chi ^2\end{align*} statistic for the observed data?
12.
How many degrees of freedom are there in the variable “type of car driven”?
13. Assuming a significance level of 0.10, what is the \begin{align*}\chi ^2\end{align*} critical value?
14. Does the data indicate that Abbi should reject, or fail to reject, \begin{align*}H_0\end{align*}?

### Vocabulary

- **chi-squared distribution**: The distribution of the chi-square statistic is called the chi-square distribution.
- **chi-squared goodness of fit test**: The chi-square goodness of fit test can be used to estimate how closely an observed distribution matches an expected distribution.
- **chi-squared statistic**: The chi-squared statistic (\begin{align*}\chi^2\end{align*}) is used to evaluate how well a set of observed data fits a corresponding expected set.
- **chi-squared test**: The chi-squared test calculates the probability that a given distribution is a good fit for observed data.
- **contingency tables**: A contingency table (two-way table) is used to organize data from multiple categories of two variables so that various assessments may be made.
- **degrees of freedom**: Degrees of freedom are essentially the number of samples that have the ‘freedom’ to change without necessarily affecting the sample mean. Degrees of freedom has the formula \begin{align*}df = n - 1\end{align*}.
- **test for independence**: The test for independence is used when estimating if two random variables are independent of one another.
- **test of significance**: A test of significance (calculating a z-score or a t-statistic) is done when a claim is made about the value of a population parameter.