# Comment on "Breakdown of the Internet under intentional attack"

@article{Dorogovtsev2001CommentO,
  title   = {Comment on "Breakdown of the Internet under intentional attack"},
  author  = {Sergey N. Dorogovtsev and Jos{\'e} F. F. Mendes},
  journal = {Physical Review Letters},
  year    = {2001},
  volume  = {87},
  number  = {21},
  pages   = {219801}
}

Published 5 September 2001 • Physics • Physical Review Letters

We obtain the exact position of the percolation threshold in intentionally damaged scale-free networks.
# Search for physics beyond the standard model using multilepton signatures in pp collisions at √s = 7 TeV

CMS Collaboration; Amsler, C; Chiochia, V; Snoek, H; Favaro, C; Verzetti, M; Aguiló, E; De Visscher, S; Otyugova, P; Schmitt, A; Ivova, M; Storey, J; Millan, B (2011). Search for physics beyond the standard model using multilepton signatures in pp collisions at √s = 7 TeV. Physics Letters B, 704(5):411-433.

## Abstract

A search for physics beyond the standard model in events with at least three leptons and any number of jets is presented. The data sample corresponds to 35 inverse picobarns of integrated luminosity in pp collisions at √s = 7 TeV collected by the CMS experiment at the LHC. A number of exclusive multileptonic channels are investigated and standard model backgrounds are suppressed by requiring sufficient missing transverse energy, invariant mass inconsistent with that of the Z boson, or high jet activity. Control samples in data are used to ascertain the robustness of background evaluation techniques and to minimise the reliance on simulation. The observations are consistent with background expectations. These results constrain previously unexplored regions of supersymmetric parameter space.
## Citations

30 citations in Web of Science®; 31 citations in Scopus®.

## Publication details

Other title: Search for physics beyond the standard model using multilepton signatures in pp collisions at sqrt(s)=7 TeV. Journal Article, refereed, original work. 07 Faculty of Science > Physics Institute; 530 Physics; English; 2011. Publisher: Elsevier. ISSN: 0370-2693 (P), 1873-2445 (E). DOI: 10.1016/j.physletb.2011.09.047. arXiv: http://arxiv.org/abs/1106.0933v1. Permanent URL: http://doi.org/10.5167/uzh-58841.
# Chapter 2

Original content created by Cam Davidson-Pilon. Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian).

This chapter introduces more PyMC3 syntax and variables, and ways to think about how to model a system from a Bayesian perspective. It also contains tips and data-visualization techniques for assessing goodness-of-fit for your Bayesian model.

## A little more on PyMC3

### Model Context

In PyMC3, we typically handle all the variables we want in our model within the context of the `Model` object.

In [1]:

```python
import pymc3 as pm

with pm.Model() as model:
    parameter = pm.Exponential("poisson_param", 1.0)
    data_generator = pm.Poisson("data_generator", parameter)
```

    Applied log-transform to poisson_param and added transformed poisson_param_log_ to model.

This is an extra layer of convenience compared to PyMC. Any variables created within a given `Model`'s context will be automatically assigned to that model. If you try to define a variable outside of the context of a model, you will get an error.

We can continue to work within the context of the same model by using `with` with the name of the model object that we have already created:

In [2]:

```python
with model:
    data_plus_one = data_generator + 1
```

We can examine the same variables outside of the model context once they have been defined, but to define more variables that the model will recognize, they have to be within the context:

In [3]:

```python
parameter.tag.test_value
```

Out[3]:

    array(0.693147177890573)

Each variable assigned to a model will be defined with its own name, the first string parameter (we will cover this further in the variables section). To create a different model object with the same name as one we have used previously, we need only run the first block of code again.
In [4]:

```python
with pm.Model() as model:
    theta = pm.Exponential("theta", 2.0)
    data_generator = pm.Poisson("data_generator", theta)
```

    Applied log-transform to theta and added transformed theta_log_ to model.

We can also define an entirely separate model. Note that we are free to name our models whatever we like, so if we do not want to overwrite an old model we need only make another.

In [5]:

```python
with pm.Model() as ab_testing:
    p_A = pm.Uniform("P(A)", 0, 1)
    p_B = pm.Uniform("P(B)", 0, 1)
```

    Applied interval-transform to P(A) and added transformed P(A)_interval_ to model.
    Applied interval-transform to P(B) and added transformed P(B)_interval_ to model.

You probably noticed that PyMC3 will often give you notifications about transformations when you add variables to your model. These transformations are done internally by PyMC3 to modify the space that the variable is sampled in (when we get to actually sampling the model). This is an internal feature which helps with the convergence of our samples to the posterior distribution and serves to improve the results.

### PyMC3 Variables

All PyMC3 variables have an initial value (i.e. a test value). Using the same variables from before:

In [6]:

```python
print("parameter.tag.test_value =", parameter.tag.test_value)
print("data_generator.tag.test_value =", data_generator.tag.test_value)
print("data_plus_one.tag.test_value =", data_plus_one.tag.test_value)
```

    parameter.tag.test_value = 0.693147177890573
    data_generator.tag.test_value = 0
    data_plus_one.tag.test_value = 1

The `test_value` is used only for the model, as the starting point for sampling if no other start is specified. It will not change as a result of sampling. This initial state can be changed at variable creation by specifying a value for the `testval` parameter.
In [7]:

```python
with pm.Model() as model:
    parameter = pm.Exponential("poisson_param", 1.0, testval=0.5)

print("\nparameter.tag.test_value =", parameter.tag.test_value)
```

    Applied log-transform to poisson_param and added transformed poisson_param_log_ to model.

    parameter.tag.test_value = 0.49999999904767284

This can be helpful if you are using a more unstable prior that may require a better starting point.

PyMC3 is concerned with two types of programming variables: stochastic and deterministic.

- *Stochastic* variables are variables that are not deterministic, i.e., even if you knew all the values of the variables' parameters and components, they would still be random. Included in this category are instances of the classes `Poisson`, `DiscreteUniform`, and `Exponential`.
- *Deterministic* variables are variables that are not random if the variables' parameters and components are known. This might be confusing at first; a quick mental check: if I knew all of variable `foo`'s component variables, I could determine what `foo`'s value is.

We will detail each below.

#### Initializing Stochastic variables

Initializing a stochastic, or random, variable requires a `name` argument, plus additional parameters that are class specific. For example:

```python
some_variable = pm.DiscreteUniform("discrete_uni_var", 0, 4)
```

where `0, 4` are the `DiscreteUniform`-specific lower and upper bounds on the random variable. The PyMC3 docs contain the specific parameters for each stochastic variable. (Or use `??` if you are using IPython!)

The `name` attribute is used to retrieve the posterior distribution later in the analysis, so it is best to use a descriptive name. Typically, I use the Python variable's name as the `name`.

For multivariable problems, rather than creating a Python array of stochastic variables, passing the `shape` keyword in the call to a stochastic variable creates a multivariate array of (independent) stochastic variables.
The array behaves like a NumPy array when used like one, and references to its `tag.test_value` attribute return NumPy arrays.

The `shape` argument also solves the annoying case where you may have many variables $\beta_i, \; i = 1,...,N$ you wish to model. Instead of creating arbitrary names and variables for each one, like:

```python
beta_1 = pm.Uniform("beta_1", 0, 1)
beta_2 = pm.Uniform("beta_2", 0, 1)
...
```

we can instead wrap them into a single variable:

```python
betas = pm.Uniform("betas", 0, 1, shape=N)
```

#### Deterministic variables

We can create a deterministic variable similarly to how we create a stochastic variable: we simply call the `Deterministic` class in PyMC3 and pass in the function that we desire:

```python
deterministic_variable = pm.Deterministic("deterministic variable", some_function_of_variables)
```

For all purposes, we can then treat the resulting object as a variable and not a Python function.

Calling `pymc3.Deterministic` is the most obvious way, but not the only way, to create deterministic variables: elementary operations, like addition, exponentiation, etc., implicitly create deterministic variables. For example, the following returns a deterministic variable:

In [8]:

```python
with pm.Model() as model:
    lambda_1 = pm.Exponential("lambda_1", 1.0)
    lambda_2 = pm.Exponential("lambda_2", 1.0)
    tau = pm.DiscreteUniform("tau", lower=0, upper=10)

new_deterministic_variable = lambda_1 + lambda_2
```

    Applied log-transform to lambda_1 and added transformed lambda_1_log_ to model.
    Applied log-transform to lambda_2 and added transformed lambda_2_log_ to model.

If we want a deterministic variable to actually be tracked by our sampling, however, we need to define it explicitly as a named deterministic variable with the constructor.

The use of the deterministic variable was seen in the previous chapter's text-message example.
Recall that the model for $\lambda$ looked like:

$$\lambda = \begin{cases}\lambda_1 & \text{if } t \lt \tau \cr \lambda_2 & \text{if } t \ge \tau \end{cases}$$

And in PyMC3 code:

In [9]:

```python
import numpy as np

n_data_points = 5  # in CH1 we had ~70 data points
idx = np.arange(n_data_points)
with model:
    lambda_ = pm.math.switch(tau >= idx, lambda_1, lambda_2)
```

Clearly, if $\tau$, $\lambda_1$ and $\lambda_2$ are known, then $\lambda$ is known completely, hence it is a deterministic variable. We use the `switch` function here to change from $\lambda_1$ to $\lambda_2$ at the appropriate time. This function comes directly from the `theano` package, which we will discuss in the next section.

Inside a deterministic variable, the stochastic variables passed in behave like scalars or NumPy arrays (if multivariable). We can do whatever we want with them as long as the dimensions match up in our calculations. For example, running the following:

```python
def subtract(x, y):
    return x - y

stochastic_1 = pm.Uniform("U_1", 0, 1)
stochastic_2 = pm.Uniform("U_2", 0, 1)

det_1 = pm.Deterministic("Delta", subtract(stochastic_1, stochastic_2))
```

is perfectly valid PyMC3 code. Saying that our expressions behave like NumPy arrays is not exactly honest here, however. The main catch is that the expressions we build must be compatible with `theano` tensors, which we will cover in the next section. Feel free to define whatever functions you need in order to compose your model. However, if you need to do any array-like calculations that would require NumPy functions, make sure you use their equivalents in `theano`.

### Theano

The majority of the heavy lifting done by PyMC3 is taken care of by the `theano` package. The notation in `theano` is remarkably similar to NumPy's, and it supports many of NumPy's familiar computational elements. However, while NumPy directly executes computations, e.g.
when you run `a + b`, `theano` instead builds up a "compute graph" that tracks that you want to perform the `+` operation on the elements `a` and `b`. Only when you `eval()` a `theano` expression does the computation take place (i.e. `theano` is lazily evaluated). Once the compute graph is built, we can perform all kinds of mathematical optimizations (e.g. simplifications), compute gradients via autodiff, compile the entire graph to C to run at machine speed, and also compile it to run on the GPU. PyMC3 is basically a collection of `theano` symbolic expressions for various probability distributions, combined into one big compute graph that makes up the whole model's log probability, plus a collection of inference algorithms that use that graph to compute probabilities and gradients. For practical purposes, what this means is that in order to build certain models we sometimes have to use `theano`.

Let's write some PyMC3 code that involves `theano` calculations:

In [10]:

```python
import theano.tensor as tt

with pm.Model() as theano_test:
    p1 = pm.Uniform("p", 0, 1)
    p2 = 1 - p1
    p = tt.stack([p1, p2])

    assignment = pm.Categorical("assignment", p)
```

    Applied interval-transform to p and added transformed p_interval_ to model.

Here we use `theano`'s `stack()` function in the same way we would use one of NumPy's stacking functions: to combine our two separate variables, `p1` and `p2`, into a vector with $2$ elements. The stochastic `Categorical` variable does not understand what we mean if we pass it a NumPy array of `p1` and `p2`, because they are both `theano` variables. Stacking them like this combines them into one `theano` variable that we can use as the complementary pair of probabilities for our two categories.

Throughout the course of this book we use several `theano` functions to help construct our models. If you have more interest in `theano` itself, be sure to check out its documentation. After these technical considerations, we can get back to defining our model!
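To make the "build a graph now, compute later" idea concrete without invoking `theano` itself, here is a toy stand-in in plain Python (purely illustrative; `theano`'s real graph machinery is far richer). Building the expression only records the operations; nothing is computed until `eval()` is called:

```python
class Add:
    """A toy lazy node: remembers its operands instead of adding them."""
    def __init__(self, left, right):
        self.left = left
        self.right = right

    def eval(self):
        # The actual computation happens only here, on demand.
        lhs = self.left.eval() if isinstance(self.left, Add) else self.left
        rhs = self.right.eval() if isinstance(self.right, Add) else self.right
        return lhs + rhs

expr = Add(Add(1, 2), 4)  # builds the graph; no addition has happened yet
print(expr.eval())        # -> 7, computed only when requested
```

Having the whole graph in hand before anything runs is what lets `theano` simplify expressions, differentiate them, and compile them to fast native code.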
### Including observations in the Model

At this point, it may not look like it, but we have fully specified our priors. For example, we can ask and answer questions like "What does my prior distribution of $\lambda_1$ look like?"

In [11]:

```python
%matplotlib inline
from IPython.core.pylabtools import figsize
import matplotlib.pyplot as plt
import scipy.stats as stats

figsize(12.5, 4)

samples = lambda_1.random(size=20000)
plt.hist(samples, bins=70, normed=True, histtype="stepfilled")
plt.title("Prior distribution for $\lambda_1$")
plt.xlim(0, 8);
```

To frame this in the notation of the first chapter, though this is a slight abuse of notation, we have specified $P(A)$. Our next goal is to include data/evidence/observations $X$ into our model.

PyMC3 stochastic variables have a keyword argument `observed`. The keyword `observed` has a very simple role: fix the variable's current value to the given data, typically a NumPy array or pandas DataFrame. For example:

In [12]:

```python
data = np.array([10, 5])
with model:
    fixed_variable = pm.Poisson("fxd", 1, observed=data)
print("value: ", fixed_variable.tag.test_value)
```

    value:  [10 5]

This is how we include data into our models: initializing a stochastic variable to have a fixed value.

To complete our text-message example, we fix the PyMC3 variable `obs` to the observed dataset:

In [13]:

```python
# We're using some fake data here
data = np.array([10, 25, 15, 20, 35])
with model:
    obs = pm.Poisson("obs", lambda_, observed=data)
print(obs.tag.test_value)
```

    [10 25 15 20 35]

## Modeling approaches

A good starting thought for Bayesian modeling is to think about how your data might have been generated. Position yourself in an omniscient position, and try to imagine how you would recreate the dataset.

In the last chapter we investigated text-message data. We begin by asking how our observations may have been generated:

1. We started by thinking "what is the best random variable to describe this count data?" A Poisson random variable is a good candidate because it can represent count data, so we model the number of SMS messages received as sampled from a Poisson distribution.

2. Next, we think, "OK, assuming the SMS counts are Poisson-distributed, what do I need for the Poisson distribution?" Well, the Poisson distribution has a parameter $\lambda$.

3. Do we know $\lambda$? No. In fact, we have a suspicion that there are two $\lambda$ values, one for the earlier behaviour and one for the later behaviour. We don't know when the behaviour switches, though; call the switchpoint $\tau$.

4. What is a good distribution for the two $\lambda$s? The exponential is good, as it assigns probabilities to positive real numbers. The exponential distribution has a parameter too; call it $\alpha$.

5. Do we know what the parameter $\alpha$ might be? No. At this point, we could continue and assign a distribution to $\alpha$, but it's better to stop once we reach a set level of ignorance: whereas we have a prior belief about $\lambda$ ("it probably changes over time", "it's likely between 10 and 30", etc.), we don't really have any strong beliefs about $\alpha$, so it's best to stop here. What is a good value for $\alpha$ then? We think the $\lambda$s are between 10 and 30, so if we set $\alpha$ really low (which corresponds to larger probability on high values) we are not reflecting our prior well. Similarly, a too-high $\alpha$ misses our prior belief as well. A good choice for $\alpha$ is one that makes the mean of $\lambda$, given $\alpha$, equal to our observed mean. This was shown in the last chapter.

6. We have no expert opinion on when $\tau$ might have occurred, so we will suppose $\tau$ is drawn from a discrete uniform distribution over the entire timespan.

Below we give a graphical visualization of this, where arrows denote parent-child relationships.
(provided by the Daft Python library)

PyMC3, and other probabilistic programming languages, have been designed to tell these data-generation stories. More generally, B. Cronin writes [5]:

> Probabilistic programming will unlock narrative explanations of data, one of the holy grails of business analytics and the unsung hero of scientific persuasion. People think in terms of stories - thus the unreasonable power of the anecdote to drive decision-making, well-founded or not. But existing analytics largely fails to provide this kind of story; instead, numbers seemingly appear out of thin air, with little of the causal context that humans prefer when weighing their options.

### Same story; different ending.

Interestingly, we can create new datasets by retelling the story. For example, if we reverse the above steps, we can simulate a possible realization of the dataset.

1. Specify when the user's behaviour switches by sampling from $\text{DiscreteUniform}(0, 80)$:

In [14]:

```python
tau = np.random.randint(0, 80)
print(tau)
```

    59

2. Draw $\lambda_1$ and $\lambda_2$ from an $\text{Exp}(\alpha)$ distribution:

In [15]:

```python
alpha = 1./20.
lambda_1, lambda_2 = np.random.exponential(scale=1/alpha, size=2)
print(lambda_1, lambda_2)
```

    49.7521280843 10.1226712418

3. For days before $\tau$, represent the user's received SMS count by sampling from $\text{Poi}(\lambda_1)$, and sample from $\text{Poi}(\lambda_2)$ for days after $\tau$. For example:

In [16]:

```python
data = np.r_[stats.poisson.rvs(mu=lambda_1, size=tau),
             stats.poisson.rvs(mu=lambda_2, size=80 - tau)]
```

4. Plot the artificial dataset:

In [17]:

```python
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau - 1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlabel("Time (days)")
plt.title("Artificial dataset")
plt.xlim(0, 80)
plt.legend();
```

It is okay that our fictional dataset does not look like our observed dataset: the probability that it would is incredibly small.
PyMC3's engine is designed to find good parameters, $\lambda_i, \tau$, that maximize this probability.

The ability to generate artificial datasets is an interesting side effect of our modeling, and we will see that this ability is a very important method of Bayesian inference. We produce a few more datasets below:

In [18]:

```python
def plot_artificial_sms_dataset():
    tau = stats.randint.rvs(0, 80)
    alpha = 1./20.
    lambda_1, lambda_2 = stats.expon.rvs(scale=1/alpha, size=2)
    data = np.r_[stats.poisson.rvs(mu=lambda_1, size=tau),
                 stats.poisson.rvs(mu=lambda_2, size=80 - tau)]
    plt.bar(np.arange(80), data, color="#348ABD")
    plt.bar(tau - 1, data[tau - 1], color="r", label="user behaviour changed")
    plt.xlim(0, 80)

figsize(12.5, 5)
plt.title("More examples of artificial datasets")
for i in range(4):
    plt.subplot(4, 1, i + 1)
    plot_artificial_sms_dataset()
```

Later we will see how we use this to make predictions and test the appropriateness of our models.

##### Example: Bayesian A/B testing

A/B testing is a statistical design pattern for determining the difference in effectiveness between two treatments. For example, a pharmaceutical company may be interested in the effectiveness of drug A versus drug B. The company will test drug A on some fraction of its trials, and drug B on the other fraction (this fraction is often 1/2, but we will relax this assumption). After performing enough trials, the in-house statisticians sift through the data to determine which drug yielded better results.

Similarly, front-end web developers are interested in which design of their website yields more sales or some other metric of interest. They will route some fraction of visitors to site A, and the other fraction to site B, and record whether each visit yielded a sale or not. The data is recorded (in real time) and analyzed afterwards.

Often, the post-experiment analysis is done using a hypothesis test, such as a difference-of-means or difference-of-proportions test.
This often involves misunderstood quantities like "Z-scores" and even more confusing "p-values" (please don't ask). If you have taken a statistics course, you have probably been taught this technique (though not necessarily learned this technique). And if you were like me, you may have felt uncomfortable with its derivation -- good: the Bayesian approach to this problem is much more natural.

### A Simple Case

As this is a hacker book, we'll continue with the web-dev example. For the moment, we will focus on the analysis of site A only. Assume that there is some true probability, $0 \lt p_A \lt 1$, that users shown site A eventually purchase from it. This is the true effectiveness of site A. Currently, this quantity is unknown to us.

Suppose site A was shown to $N$ people, and $n$ people purchased from the site. One might hastily conclude that $p_A = \frac{n}{N}$. Unfortunately, the observed frequency $\frac{n}{N}$ does not necessarily equal $p_A$ -- there is a difference between the observed frequency and the true frequency of an event. The true frequency can be interpreted as the probability of the event occurring. For example, the true frequency of rolling a 1 on a 6-sided die is $\frac{1}{6}$.

Knowing the true frequency of events like:

- the fraction of users who make purchases,
- the frequency of social attributes,
- the percent of internet users with cats, etc.

are common requests we make of Nature. Unfortunately, Nature often hides the true frequency from us, and we must infer it from observed data.

The observed frequency is then the frequency we observe: say, rolling the die 100 times, you may observe 20 rolls of 1. The observed frequency, 0.2, differs from the true frequency, $\frac{1}{6}$. We can use Bayesian statistics to infer probable values of the true frequency using an appropriate prior and observed data.
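The gap between observed and true frequency is easy to see in a quick NumPy simulation (the seed and sample sizes below are arbitrary choices for illustration): a small sample of die rolls can wander noticeably from $\frac{1}{6}$, while a very large sample settles close to it.

```python
import numpy as np

rng = np.random.default_rng(42)
true_freq = 1 / 6

# 100 rolls: the observed frequency of rolling a 1 may differ visibly from 1/6.
small_sample = rng.integers(1, 7, size=100)
obs_small = np.mean(small_sample == 1)

# 1,000,000 rolls: the observed frequency hugs the true frequency.
large_sample = rng.integers(1, 7, size=1_000_000)
obs_large = np.mean(large_sample == 1)

print(obs_small, obs_large, true_freq)
```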
With respect to our A/B example, we are interested in using what we know, $N$ (the total trials administered) and $n$ (the number of conversions), to estimate what $p_A$, the true frequency of buyers, might be. To setup a Bayesian model, we need to assign prior distributions to our unknown quantities. A priori, what do we think $p_A$ might be? For this example, we have no strong conviction about $p_A$, so for now, let's assume $p_A$ is uniform over [0,1]: In [19]: import pymc3 as pm # The parameters are the bounds of the Uniform. with pm.Model() as model: p = pm.Uniform('p', lower=0, upper=1) Applied interval-transform to p and added transformed p_interval_ to model. Had we had stronger beliefs, we could have expressed them in the prior above. For this example, consider $p_A = 0.05$, and $N = 1500$ users shown site A, and we will simulate whether the user made a purchase or not. To simulate this from $N$ trials, we will use a Bernoulli distribution: if $X\ \sim \text{Ber}(p)$, then $X$ is 1 with probability $p$ and 0 with probability $1 - p$. Of course, in practice we do not know $p_A$, but we will use it here to simulate the data. In [20]: #set constants p_true = 0.05 # remember, this is unknown. N = 1500 # sample N Bernoulli random variables from Ber(0.05). # each random variable has a 0.05 chance of being a 1. # this is the data-generation step occurrences = stats.bernoulli.rvs(p_true, size=N) print(occurrences) # Remember: Python treats True == 1, and False == 0 print(np.sum(occurrences)) [1 0 1 ..., 0 0 0] 77 The observed frequency is: In [21]: # Occurrences.mean is equal to n/N. print("What is the observed frequency in Group A? %.4f" % np.mean(occurrences)) print("Does this equal the true frequency? %s" % (np.mean(occurrences) == p_true)) What is the observed frequency in Group A? 0.0513 Does this equal the true frequency? 
False We combine the observations into the PyMC3 observed variable, and run our inference algorithm: In [22]: #include the observations, which are Bernoulli with model: obs = pm.Bernoulli("obs", p, observed=occurrences) # To be explained in chapter 3 step = pm.Metropolis() trace = pm.sample(18000, step=step) burned_trace = trace[1000:] [-------100%-------] 18000 of 18000 in 1.7 sec. | SPS: 10329.7 | ETA: 0.0 We plot the posterior distribution of the unknown $p_A$ below: In [23]: figsize(12.5, 4) plt.title("Posterior distribution of $p_A$, the true effectiveness of site A") plt.vlines(p_true, 0, 90, linestyle="--", label="true $p_A$ (unknown)") plt.hist(burned_trace["p"], bins=25, histtype="stepfilled", normed=True) plt.legend(); Our posterior distribution puts most weight near the true value of $p_A$, but also some weights in the tails. This is a measure of how uncertain we should be, given our observations. Try changing the number of observations, N, and observe how the posterior distribution changes. ### A and B Together¶ A similar analysis can be done for site B's response data to determine the analogous $p_B$. But what we are really interested in is the difference between $p_A$ and $p_B$. Let's infer $p_A$, $p_B$, and $\text{delta} = p_A - p_B$, all at once. We can do this using PyMC3's deterministic variables. (We'll assume for this exercise that $p_B = 0.04$, so $\text{delta} = 0.01$, $N_B = 750$ (significantly less than $N_A$) and we will simulate site B's data like we did for site A's data ) In [24]: import pymc3 as pm figsize(12, 4) #these two quantities are unknown to us. true_p_A = 0.05 true_p_B = 0.04 #notice the unequal sample sizes -- no problem in Bayesian analysis. 
N_A = 1500 N_B = 750 #generate some observations observations_A = stats.bernoulli.rvs(true_p_A, size=N_A) observations_B = stats.bernoulli.rvs(true_p_B, size=N_B) print("Obs from Site A: ", observations_A[:30], "...") print("Obs from Site B: ", observations_B[:30], "...") Obs from Site A: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] ... Obs from Site B: [0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0] ... In [25]: print(np.mean(observations_A)) print(np.mean(observations_B)) 0.042 0.0346666666667 In [26]: # Set up the pymc3 model. Again assume Uniform priors for p_A and p_B. with pm.Model() as model: p_A = pm.Uniform("p_A", 0, 1) p_B = pm.Uniform("p_B", 0, 1) # Define the deterministic delta function. This is our unknown of interest. delta = pm.Deterministic("delta", p_A - p_B) # Set of observations, in this case we have two observation datasets. obs_A = pm.Bernoulli("obs_A", p_A, observed=observations_A) obs_B = pm.Bernoulli("obs_B", p_B, observed=observations_B) # To be explained in chapter 3. step = pm.Metropolis() trace = pm.sample(20000, step=step) burned_trace=trace[1000:] Applied interval-transform to p_A and added transformed p_A_interval_ to model. Applied interval-transform to p_B and added transformed p_B_interval_ to model. [-------100%-------] 20000 of 20000 in 3.2 sec. 
| SPS: 6201.6 | ETA: 0.0 Below we plot the posterior distributions for the three unknowns: In [27]: p_A_samples = burned_trace["p_A"] p_B_samples = burned_trace["p_B"] delta_samples = burned_trace["delta"] In [28]: figsize(12.5, 10) #histogram of posteriors ax = plt.subplot(311) plt.xlim(0, .1) plt.hist(p_A_samples, histtype='stepfilled', bins=25, alpha=0.85, label="posterior of $p_A$", color="#A60628", normed=True) plt.vlines(true_p_A, 0, 80, linestyle="--", label="true $p_A$ (unknown)") plt.legend(loc="upper right") plt.title("Posterior distributions of $p_A$, $p_B$, and delta unknowns") ax = plt.subplot(312) plt.xlim(0, .1) plt.hist(p_B_samples, histtype='stepfilled', bins=25, alpha=0.85, label="posterior of $p_B$", color="#467821", normed=True) plt.vlines(true_p_B, 0, 80, linestyle="--", label="true $p_B$ (unknown)") plt.legend(loc="upper right") ax = plt.subplot(313) plt.hist(delta_samples, histtype='stepfilled', bins=30, alpha=0.85, label="posterior of delta", color="#7A68A6", normed=True) plt.vlines(true_p_A - true_p_B, 0, 60, linestyle="--", label="true delta (unknown)") plt.vlines(0, 0, 60, color="black", alpha=0.2) plt.legend(loc="upper right"); Notice that as a result of N_B < N_A, i.e. we have less data from site B, our posterior distribution of $p_B$ is fatter, implying we are less certain about the true value of $p_B$ than we are of $p_A$. With respect to the posterior distribution of $\text{delta}$, we can see that the majority of the distribution is above $\text{delta}=0$, implying that site A's response is likely better than site B's response. The probability this inference is incorrect is easily computable: In [29]: # Count the number of samples less than 0, i.e. the area under the curve # before 0, which represents the probability that site A is worse than site B.
print("Probability site A is WORSE than site B: %.3f" % \ np.mean(delta_samples < 0)) print("Probability site A is BETTER than site B: %.3f" % \ np.mean(delta_samples > 0)) Probability site A is WORSE than site B: 0.208 Probability site A is BETTER than site B: 0.792 If this probability is too high for comfortable decision-making, we can perform more trials on site B (as site B has fewer samples to begin with, each additional data point for site B contributes more inferential "power" than each additional data point for site A). Try playing with the parameters true_p_A, true_p_B, N_A, and N_B, to see what the posterior of $\text{delta}$ looks like. Notice in all this, the difference in sample sizes between site A and site B was never mentioned: it naturally fits into Bayesian analysis. I hope the readers feel this style of A/B testing is more natural than hypothesis testing, which has probably confused more than helped practitioners. Later in this book, we will see two extensions of this model: the first to help dynamically adjust for bad sites, and the second will improve the speed of this computation by reducing the analysis to a single equation. ## An algorithm for human deceit¶ Social data has an additional layer of interest as people are not always honest with responses, which adds a further complication to inference. For example, simply asking individuals "Have you ever cheated on a test?" will surely contain some rate of dishonesty. What you can say for certain is that the true rate is at least as large as your observed rate (assuming individuals lie only about not cheating; I cannot imagine one who would admit "Yes" to cheating when in fact they hadn't cheated). To present an elegant solution to circumventing this dishonesty problem, and to demonstrate Bayesian modeling, we first need to introduce the binomial distribution.
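One more note on the A/B model before moving on: the "single equation" speed-up promised above already exists for this simple case. A Uniform prior is a Beta(1, 1) distribution, and the Beta family is conjugate to Bernoulli observations, so the posteriors here are available in closed form with no MCMC at all. A sketch with illustrative conversion counts (not the exact simulated data above):

```python
import numpy as np
from scipy import stats

# Illustrative counts: conversions out of N visitors for each site.
conv_A, N_A = 63, 1500
conv_B, N_B = 26, 750

# Uniform prior = Beta(1, 1); by conjugacy the posterior is exactly
# Beta(1 + conversions, 1 + non-conversions).
post_A = stats.beta(1 + conv_A, 1 + N_A - conv_A)
post_B = stats.beta(1 + conv_B, 1 + N_B - conv_B)

# Estimate P(p_A > p_B) by sampling the exact posteriors directly.
rng = np.random.default_rng(0)
samples_A = post_A.rvs(20000, random_state=rng)
samples_B = post_B.rvs(20000, random_state=rng)
prob_A_better = (samples_A > samples_B).mean()
print("P(p_A > p_B) is approximately %.3f" % prob_A_better)
```

The Monte Carlo step draws independent samples from two exact posteriors; the MCMC machinery above earns its keep in models where no such closed form exists.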
### The Binomial Distribution¶ The binomial distribution is one of the most popular distributions, mostly because of its simplicity and usefulness. Unlike the other distributions we have encountered thus far in the book, the binomial distribution has 2 parameters: $N$, a positive integer representing $N$ trials or number of instances of potential events, and $p$, the probability of an event occurring in a single trial. Like the Poisson distribution, it is a discrete distribution, but unlike the Poisson distribution, it only weighs integers from $0$ to $N$. The mass distribution looks like: $$P( X = k ) = {{N}\choose{k}} p^k(1-p)^{N-k}$$ If $X$ is a binomial random variable with parameters $p$ and $N$, denoted $X \sim \text{Bin}(N,p)$, then $X$ is the number of events that occurred in the $N$ trials (obviously $0 \le X \le N$). The larger $p$ is (while still remaining between 0 and 1), the more events are likely to occur. The expected value of a binomial is equal to $Np$. Below we plot the mass probability distribution for varying parameters. In [30]: figsize(12.5, 4) import scipy.stats as stats binomial = stats.binom parameters = [(10, .4), (10, .9)] colors = ["#348ABD", "#A60628"] for i in range(2): N, p = parameters[i] _x = np.arange(N + 1) plt.bar(_x - 0.5, binomial.pmf(_x, N, p), color=colors[i], edgecolor=colors[i], alpha=0.6, label="$N$: %d, $p$: %.1f" % (N, p), linewidth=3) plt.legend(loc="upper left") plt.xlim(0, 10.5) plt.xlabel("$k$") plt.ylabel("$P(X = k)$") plt.title("Probability mass distributions of binomial random variables"); The special case when $N = 1$ corresponds to the Bernoulli distribution. There is another connection between Bernoulli and Binomial random variables. If we have $X_1, X_2, ... , X_N$ Bernoulli random variables with the same $p$, then $Z = X_1 + X_2 + ... + X_N \sim \text{Binomial}(N, p )$. The expected value of a Bernoulli random variable is $p$. 
This can be seen by noting the more general Binomial random variable has expected value $Np$ and setting $N=1$. ##### Example: Cheating among students¶ We will use the binomial distribution to determine the frequency of students cheating during an exam. If we let $N$ be the total number of students who took the exam, and assuming each student is interviewed post-exam (answering without consequence), we will receive integer $X$ "Yes I did cheat" answers. We then find the posterior distribution of $p$, given $N$, some specified prior on $p$, and observed data $X$. This is a completely absurd model. No student, even with a free-pass against punishment, would admit to cheating. What we need is a better algorithm to ask students if they had cheated. Ideally the algorithm should encourage individuals to be honest while preserving privacy. The following proposed algorithm is a solution I greatly admire for its ingenuity and effectiveness: In the interview process for each student, the student flips a coin, hidden from the interviewer. The student agrees to answer honestly if the coin comes up heads. Otherwise, if the coin comes up tails, the student (secretly) flips the coin again, and answers "Yes, I did cheat" if the coin flip lands heads, and "No, I did not cheat", if the coin flip lands tails. This way, the interviewer does not know if a "Yes" was the result of a guilty plea, or a Heads on a second coin toss. Thus privacy is preserved and the researchers receive honest answers. I call this the Privacy Algorithm. One could of course argue that the interviewers are still receiving false data since some Yes's are not confessions but instead randomness, but an alternative perspective is that the researchers are discarding approximately half of their original dataset since half of the responses will be noise. But they have gained a systematic data generation process that can be modeled. 
Furthermore, they do not have to incorporate (perhaps somewhat naively) the possibility of deceitful answers. We can use PyMC3 to dig through this noisy model, and find a posterior distribution for the true frequency of liars. Suppose 100 students are being surveyed for cheating, and we wish to find $p$, the proportion of cheaters. There are a few ways we can model this in PyMC3. I'll demonstrate the most explicit way, and later show a simplified version. Both versions arrive at the same inference. In our data-generation model, we sample $p$, the true proportion of cheaters, from a prior. Since we are quite ignorant about $p$, we will assign it a $\text{Uniform}(0,1)$ prior. In [31]: import pymc3 as pm N = 100 with pm.Model() as model: p = pm.Uniform("freq_cheating", 0, 1) Applied interval-transform to freq_cheating and added transformed freq_cheating_interval_ to model. Again, thinking of our data-generation model, we assign Bernoulli random variables to the 100 students: 1 implies they cheated and 0 implies they did not. In [32]: with model: true_answers = pm.Bernoulli("truths", p, shape=N, testval=np.random.binomial(1, 0.5, N)) If we carry out the algorithm, the next step that occurs is the first coin-flip each student makes. This can be modeled again by sampling 100 Bernoulli random variables with $p=1/2$: denote a 1 as a Heads and 0 a Tails. 
In [33]: with model: first_coin_flips = pm.Bernoulli("first_flips", 0.5, shape=N, testval=np.random.binomial(1, 0.5, N)) print(first_coin_flips.tag.test_value) [0 0 1 0 1 1 0 1 0 1 1 1 0 0 0 1 1 1 0 0 0 0 0 1 0 1 0 1 1 1 0 0 0 1 0 1 1 1 1 0 1 0 0 1 1 1 1 0 0 0 0 0 0 1 1 0 0 1 1 0 1 0 0 1 0 1 1 0 0 0 0 0 1 1 1 0 1 0 1 1 1 0 1 0 0 0 1 0 0 0 0 1 0 1 0 1 0 0 1 0] Although not everyone flips a second time, we can still model the possible realization of second coin-flips: In [34]: with model: second_coin_flips = pm.Bernoulli("second_flips", 0.5, shape=N, testval=np.random.binomial(1, 0.5, N)) Using these variables, we can return a possible realization of the observed proportion of "Yes" responses. We do this using a PyMC3 deterministic variable: In [35]: import theano.tensor as tt with model: val = first_coin_flips*true_answers + (1 - first_coin_flips)*second_coin_flips observed_proportion = pm.Deterministic("observed_proportion", tt.sum(val)/float(N)) The line first_coin_flips*true_answers + (1 - first_coin_flips)*second_coin_flips contains the heart of the Privacy algorithm. Elements in this array are 1 if and only if i) the first toss is heads and the student cheated or ii) the first toss is tails and the second is heads, and 0 otherwise. Finally, the last line sums this vector and divides by float(N), producing a proportion. In [36]: observed_proportion.tag.test_value Out[36]: array(0.5600000023841858) Next we need a dataset. After performing our coin-flipped interviews the researchers received 35 "Yes" responses. To put this into a relative perspective, if there truly were no cheaters, we should expect to see on average 1/4 of all responses being a "Yes" (half chance of having first coin land Tails, and another half chance of having second coin land Heads), so about 25 responses in a cheat-free world. On the other hand, if all students cheated, we should expect to see approximately 3/4 of all responses be "Yes".
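The 1/4 and 3/4 endpoints can be checked with a quick plain-NumPy simulation of the privacy algorithm, using an assumed true cheating frequency purely for illustration; in the long run the "Yes" rate settles at $p/2 + 1/4$ (this formula is derived in the "Alternative PyMC3 Model" section below):

```python
import numpy as np

rng = np.random.default_rng(42)
n_students = 100_000   # many simulated students, to see the long-run rate
true_p = 0.30          # assumed true cheating frequency, for illustration only

cheated = rng.random(n_students) < true_p
first_flip_heads = rng.random(n_students) < 0.5   # heads: answer honestly
second_flip_heads = rng.random(n_students) < 0.5  # heads: say "Yes"

# Honest answer on a first heads; otherwise the second flip decides.
says_yes = np.where(first_flip_heads, cheated, second_flip_heads)
print(says_yes.mean(), true_p / 2 + 0.25)  # both close to 0.4
```

Setting true_p to 0 or 1 recovers the 1/4 and 3/4 endpoints discussed above.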
The researchers observe a Binomial random variable, with N = 100 and p = observed_proportion, observed to have the value 35: In [37]: X = 35 with model: observations = pm.Binomial("obs", N, observed_proportion, observed=X) Below we add all the variables of interest to a Model container and run our black-box algorithm over the model. In [38]: # To be explained in Chapter 3! with model: step = pm.Metropolis(vars=[p]) trace = pm.sample(40000, step=step) burned_trace = trace[15000:] Assigned BinaryGibbsMetropolis to truths Assigned BinaryGibbsMetropolis to first_flips Assigned BinaryGibbsMetropolis to second_flips [-------100%-------] 40000 of 40000 in 1891.9 sec. | SPS: 21.1 | ETA: -0.0 In [39]: figsize(12.5, 3) p_trace = burned_trace["freq_cheating"] # burn-in already removed above plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30, label="posterior distribution", color="#348ABD") plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.3) plt.xlim(0, 1) plt.legend(); With regards to the above plot, we are still pretty uncertain about what the true frequency of cheaters might be, but we have narrowed it down to a range between 0.05 and 0.35 (marked by the solid lines). This is pretty good, as a priori we had no idea how many students might have cheated (hence the uniform distribution for our prior). On the other hand, it is also pretty bad since there is a 0.3-wide window the true value most likely lives in. Have we even gained anything, or are we still too uncertain about the true frequency? I would argue, yes, we have discovered something. It is implausible, according to our posterior, that there are no cheaters, i.e. the posterior assigns low probability to $p=0$. We started with a uniform prior, treating all values of $p$ as equally plausible, but the data ruled out $p=0$ as a possibility, so we can be confident that there were cheaters. This kind of algorithm can be used to gather private information from users and be reasonably confident that the data, though noisy, is truthful.
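As a sanity check on the sampler, this model is simple enough to solve by brute force: integrating out the coin flips leaves a single unknown $p$ with a $\text{Binomial}(100, \; p/2 + 1/4)$ likelihood (this reduction is derived in the next section), so a direct grid approximation of the posterior is cheap. A sketch:

```python
import numpy as np
from scipy import stats

X, n_students = 35, 100
p_grid = np.linspace(0, 1, 2001)
dp = p_grid[1] - p_grid[0]

# Uniform prior on p; likelihood is Binomial(100, p/2 + 1/4) at X = 35.
likelihood = stats.binom.pmf(X, n_students, 0.5 * p_grid + 0.25)
posterior = likelihood / (likelihood.sum() * dp)  # normalize on the grid

post_mean = (p_grid * posterior).sum() * dp
cdf = np.cumsum(posterior) * dp
lo, hi = p_grid[np.searchsorted(cdf, [0.025, 0.975])]
print("posterior mean: %.3f, central 95%% interval: (%.2f, %.2f)"
      % (post_mean, lo, hi))
```

The central interval lands in the same neighbourhood as the 0.05 to 0.35 window marked on the histogram above.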
### Alternative PyMC3 Model¶ Given a value for $p$ (which from our god-like position we know), we can find the probability the student will answer yes: \begin{align} P(\text{"Yes"}) &= P( \text{Heads on first coin} )P( \text{cheater} ) + P( \text{Tails on first coin} )P( \text{Heads on second coin} ) \\\\ &= \frac{1}{2}p + \frac{1}{2}\cdot\frac{1}{2}\\\\ &= \frac{p}{2} + \frac{1}{4} \end{align} Thus, knowing $p$ we know the probability a student will respond "Yes". In PyMC3, we can create a deterministic function to evaluate the probability of responding "Yes", given $p$: In [40]: with pm.Model() as model: p = pm.Uniform("freq_cheating", 0, 1) p_skewed = pm.Deterministic("p_skewed", 0.5*p + 0.25) Applied interval-transform to freq_cheating and added transformed freq_cheating_interval_ to model. I could have typed p_skewed = 0.5*p + 0.25 instead for a one-liner, as the elementary operations of addition and scalar multiplication will implicitly create a deterministic variable, but I wanted to make the deterministic boilerplate explicit for clarity's sake. If we know the probability of respondents saying "Yes", which is p_skewed, and we have $N=100$ students, the number of "Yes" responses is a binomial random variable with parameters N and p_skewed. This is where we include our observed 35 "Yes" responses. In the declaration of pm.Binomial below, we pass observed=35. In [41]: with model: yes_responses = pm.Binomial("number_cheaters", 100, p_skewed, observed=35) Below we add all the variables of interest to a Model container and run our black-box algorithm over the model. In [42]: with model: # To Be Explained in Chapter 3! step = pm.Metropolis() trace = pm.sample(25000, step=step) burned_trace = trace[2500:] [-------100%-------] 25000 of 25000 in 2.1 sec.
| SPS: 12171.2 | ETA: 0.0 In [43]: figsize(12.5, 3) p_trace = burned_trace["freq_cheating"] plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30, label="posterior distribution", color="#348ABD") plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.2) plt.xlim(0, 1) plt.legend(); ### More PyMC3 Tricks¶ #### Protip: Arrays of PyMC3 variables¶ There is no reason why we cannot store multiple heterogeneous PyMC3 variables in a Numpy array. Just remember to set the dtype of the array to object upon initialization. For example: In [44]: N = 10 x = np.ones(N, dtype=object) with pm.Model() as model: for i in range(0, N): x[i] = pm.Exponential('x_%i' % i, (i+1.0)**2) Applied log-transform to x_0 and added transformed x_0_log_ to model. Applied log-transform to x_1 and added transformed x_1_log_ to model. Applied log-transform to x_2 and added transformed x_2_log_ to model. Applied log-transform to x_3 and added transformed x_3_log_ to model. Applied log-transform to x_4 and added transformed x_4_log_ to model. Applied log-transform to x_5 and added transformed x_5_log_ to model. Applied log-transform to x_6 and added transformed x_6_log_ to model. Applied log-transform to x_7 and added transformed x_7_log_ to model. Applied log-transform to x_8 and added transformed x_8_log_ to model. Applied log-transform to x_9 and added transformed x_9_log_ to model. The remainder of this chapter examines some practical examples of PyMC3 and PyMC3 modeling: ##### Example: Challenger Space Shuttle Disaster ¶ On January 28, 1986, the twenty-fifth flight of the U.S. space shuttle program ended in disaster when one of the rocket boosters of the Shuttle Challenger exploded shortly after lift-off, killing all seven crew members. 
The presidential commission on the accident concluded that it was caused by the failure of an O-ring in a field joint on the rocket booster, and that this failure was due to a faulty design that made the O-ring unacceptably sensitive to a number of factors including outside temperature. Of the previous 24 flights, data were available on failures of O-rings on 23 (one was lost at sea), and these data were discussed on the evening preceding the Challenger launch, but unfortunately only the data corresponding to the 7 flights on which there was a damage incident were considered important and these were thought to show no obvious trend. The data are shown below (see [1]): In [45]: figsize(12.5, 3.5) np.set_printoptions(precision=3, suppress=True) challenger_data = np.genfromtxt("data/challenger_data.csv", skip_header=1, usecols=[1, 2], missing_values="NA", delimiter=",") #drop the NA values challenger_data = challenger_data[~np.isnan(challenger_data[:, 1])] #plot it, as a function of temperature (the first column) print("Temp (F), O-Ring failure?") print(challenger_data) plt.scatter(challenger_data[:, 0], challenger_data[:, 1], s=75, color="k", alpha=0.5) plt.yticks([0, 1]) plt.ylabel("Damage Incident?") plt.xlabel("Outside temperature (Fahrenheit)") plt.title("Defects of the Space Shuttle O-Rings vs temperature"); Temp (F), O-Ring failure? [[ 66. 0.] [ 70. 1.] [ 69. 0.] [ 68. 0.] [ 67. 0.] [ 72. 0.] [ 73. 0.] [ 70. 0.] [ 57. 1.] [ 63. 1.] [ 70. 1.] [ 78. 0.] [ 67. 0.] [ 53. 1.] [ 67. 0.] [ 75. 0.] [ 70. 0.] [ 81. 0.] [ 76. 0.] [ 79. 0.] [ 75. 1.] [ 76. 0.] [ 58. 1.]] It looks clear that the probability of damage incidents occurring increases as the outside temperature decreases. We are interested in modeling the probability here because it does not look like there is a strict cutoff point between temperature and a damage incident occurring. The best we can do is ask "At temperature $t$, what is the probability of a damage incident?". The goal of this example is to answer that question.
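Before any modeling, a crude summary of the table above already hints at the temperature effect: split the flights at 70 degrees and compare empirical defect rates (the 70-degree threshold is an arbitrary illustrative choice):

```python
import numpy as np

# The 23 (temperature, damage-incident) pairs printed above.
temp = np.array([66, 70, 69, 68, 67, 72, 73, 70, 57, 63, 70, 78,
                 67, 53, 67, 75, 70, 81, 76, 79, 75, 76, 58], float)
defect = np.array([0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0,
                   0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1], float)

cold = temp < 70
print("defect rate below 70F:    %.2f" % defect[cold].mean())
print("defect rate at/above 70F: %.2f" % defect[~cold].mean())
```

A two-bin summary like this throws away most of the information in the data, which is exactly why we model the probability as a smooth function of temperature below.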
We need a function of temperature, call it $p(t)$, that is bounded between 0 and 1 (so as to model a probability) and changes from 1 to 0 as we increase temperature. There are actually many such functions, but the most popular choice is the logistic function. $$p(t) = \frac{1}{ 1 + e^{ \;\beta t } }$$ In this model, $\beta$ is the variable we are uncertain about. Below is the function plotted for $\beta = 1, 3, -5$. In [46]: figsize(12, 3) def logistic(x, beta): return 1.0 / (1.0 + np.exp(beta * x)) x = np.linspace(-4, 4, 100) plt.plot(x, logistic(x, 1), label=r"$\beta = 1$") plt.plot(x, logistic(x, 3), label=r"$\beta = 3$") plt.plot(x, logistic(x, -5), label=r"$\beta = -5$") plt.legend(); But something is missing. In the plot of the logistic function, the probability changes only near zero, but in our data above the probability changes around 65 to 70. We need to add a bias term to our logistic function: $$p(t) = \frac{1}{ 1 + e^{ \;\beta t + \alpha } }$$ Some plots are below, with differing $\alpha$. In [47]: def logistic(x, beta, alpha=0): return 1.0 / (1.0 + np.exp(np.dot(beta, x) + alpha)) x = np.linspace(-4, 4, 100) plt.plot(x, logistic(x, 1), label=r"$\beta = 1$", ls="--", lw=1) plt.plot(x, logistic(x, 3), label=r"$\beta = 3$", ls="--", lw=1) plt.plot(x, logistic(x, -5), label=r"$\beta = -5$", ls="--", lw=1) plt.plot(x, logistic(x, 1, 1), label=r"$\beta = 1, \alpha = 1$", color="#348ABD") plt.plot(x, logistic(x, 3, -2), label=r"$\beta = 3, \alpha = -2$", color="#A60628") plt.plot(x, logistic(x, -5, 7), label=r"$\beta = -5, \alpha = 7$", color="#7A68A6") plt.legend(loc="lower left"); Adding a constant term $\alpha$ amounts to shifting the curve left or right (hence why it is called a bias). Let's start modeling this in PyMC3. The $\beta, \alpha$ parameters have no reason to be positive, bounded or relatively large, so they are best modeled by a Normal random variable, introduced next. 
### Normal distributions¶ A Normal random variable, denoted $X \sim N(\mu, 1/\tau)$, has a distribution with two parameters: the mean, $\mu$, and the precision, $\tau$. Those already familiar with the Normal distribution have probably seen $\sigma^2$ instead of $\tau^{-1}$; they are one and the same, as $\sigma^2 = 1/\tau$. The change was motivated by simpler mathematical analysis and is an artifact of older Bayesian methods. Just remember: the smaller $\tau$, the larger the spread of the distribution (i.e. we are more uncertain); the larger $\tau$, the tighter the distribution (i.e. we are more certain). Regardless, $\tau$ is always positive. The probability density function of a $N( \mu, 1/\tau)$ random variable is: $$f(x | \mu, \tau) = \sqrt{\frac{\tau}{2\pi}} \exp\left( -\frac{\tau}{2} (x-\mu)^2 \right)$$ We plot some different density functions below. In [48]: import scipy.stats as stats nor = stats.norm x = np.linspace(-8, 7, 150) mu = (-2, 0, 3) tau = (.7, 1, 2.8) colors = ["#348ABD", "#A60628", "#7A68A6"] parameters = zip(mu, tau, colors) for _mu, _tau, _color in parameters: # scipy's scale is the standard deviation, i.e. 1/sqrt(tau) plt.plot(x, nor.pdf(x, _mu, scale=1./np.sqrt(_tau)), label="$\mu = %d,\;\\tau = %.1f$" % (_mu, _tau), color=_color) plt.fill_between(x, nor.pdf(x, _mu, scale=1./np.sqrt(_tau)), color=_color, alpha=.33) plt.legend(loc="upper right") plt.xlabel("$x$") plt.ylabel("density function at $x$") plt.title("Probability distribution of three different Normal random \ variables"); A Normal random variable can take on any real number, but the variable is very likely to be relatively close to $\mu$. In fact, the expected value of a Normal is equal to its $\mu$ parameter: $$E[ X | \mu, \tau] = \mu$$ and its variance is equal to the inverse of $\tau$: $$Var( X | \mu, \tau ) = \frac{1}{\tau}$$ Below we continue our modeling of the Challenger spacecraft: In [49]: import pymc3 as pm temperature = challenger_data[:, 0] D = challenger_data[:, 1] # defect or not? #notice the testval in the code below. We explain why below.
with pm.Model() as model: beta = pm.Normal("beta", mu=0, tau=0.001, testval=0) alpha = pm.Normal("alpha", mu=0, tau=0.001, testval=0) p = pm.Deterministic("p", 1.0/(1. + tt.exp(beta*temperature + alpha))) We have our probabilities, but how do we connect them to our observed data? A Bernoulli random variable with parameter $p$, denoted $\text{Ber}(p)$, is a random variable that takes value 1 with probability $p$, and 0 otherwise. Thus, our model can look like: $$ D_i \sim \text{Ber}( \;p(t_i)\; ), \;\; i=1,\ldots,N$$ where $D_i$ is the defect incident on flight $i$, $p(t)$ is our logistic function and $t_i$ are the temperatures we have observations about. Notice in the above code we had to set the starting values (testval) of beta and alpha to 0. The reason for this is that if beta and alpha start out very large, they make p equal to 1 or 0. Unfortunately, pm.Bernoulli does not like probabilities of exactly 0 or 1, though they are mathematically well-defined probabilities. So by setting the coefficient starting values to 0, we set the variable p to a reasonable starting value. This has no effect on our results, nor does it mean we are including any additional information in our prior. It is simply a computational caveat in PyMC3. In [50]: # connect the probabilities in p with our observations through a # Bernoulli random variable. with model: observed = pm.Bernoulli("bernoulli_obs", p, observed=D) # Mysterious code to be explained in Chapter 3 start = pm.find_MAP() step = pm.Metropolis() trace = pm.sample(120000, step=step, start=start) burned_trace = trace[100000::2] [-------100%-------] 120000 of 120000 in 16.5 sec. | SPS: 7283.1 | ETA: 0.0 We have trained our model on the observed data; now we can sample values from the posterior.
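Once we have posterior samples of $\alpha$ and $\beta$, derived quantities come along for free. For instance, the temperature at which the defect probability crosses one half satisfies $\beta t + \alpha = 0$, i.e. $t = -\alpha/\beta$. A sketch with illustrative stand-in samples (in the notebook you would use the real alpha and beta samples from burned_trace instead):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in "posterior" samples, shaped roughly like the real ones
# (beta positive, alpha strongly negative); illustrative only.
beta_samples = rng.normal(0.25, 0.05, 5000)
alpha_samples = rng.normal(-16.0, 3.5, 5000)

# p(t) = 1/(1 + exp(beta*t + alpha)) equals 1/2 exactly when beta*t + alpha = 0.
t_half = -alpha_samples / beta_samples
print("median crossover temperature: %.1f F" % np.median(t_half))
```

Pushing samples through a function like this yields a full posterior for the derived quantity, not just a point estimate.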
Let's look at the posterior distributions for $\alpha$ and $\beta$: In [51]: alpha_samples = burned_trace["alpha"][:, None] # add a dimension, for broadcasting below beta_samples = burned_trace["beta"][:, None] figsize(12.5, 6) #histogram of the samples: plt.subplot(211) plt.title(r"Posterior distributions of the variables $\alpha, \beta$") plt.hist(beta_samples, histtype='stepfilled', bins=35, alpha=0.85, label=r"posterior of $\beta$", color="#7A68A6", normed=True) plt.legend() plt.subplot(212) plt.hist(alpha_samples, histtype='stepfilled', bins=35, alpha=0.85, label=r"posterior of $\alpha$", color="#A60628", normed=True) plt.legend(); All samples of $\beta$ are greater than 0. If instead the posterior were centered around 0, we might suspect that $\beta = 0$, implying that temperature has no effect on the probability of defect. Similarly, all $\alpha$ posterior values are negative and far away from 0, implying that it is correct to believe that $\alpha$ is significantly less than 0. Regarding the spread of the posteriors, we are very uncertain about what the true parameters might be (though considering the low sample size and the large overlap of defects-to-nondefects this behaviour is perhaps expected). Next, let's look at the expected probability for a specific value of the temperature. That is, we average over all samples from the posterior to get a likely value for $p(t_i)$.
In [52]: t = np.linspace(temperature.min() - 5, temperature.max()+5, 50)[:, None] p_t = logistic(t.T, beta_samples, alpha_samples) mean_prob_t = p_t.mean(axis=0) In [53]: figsize(12.5, 4) plt.plot(t, mean_prob_t, lw=3, label="average posterior \nprobability \ of defect") plt.plot(t, p_t[0, :], ls="--", label="realization from posterior") plt.plot(t, p_t[-2, :], ls="--", label="realization from posterior") plt.scatter(temperature, D, color="k", s=50, alpha=0.5) plt.title("Posterior expected value of probability of defect; \ plus realizations") plt.legend(loc="lower left") plt.ylim(-0.1, 1.1) plt.xlim(t.min(), t.max()) plt.ylabel("probability") plt.xlabel("temperature"); Above we also plotted two possible realizations of what the actual underlying system might be. Both are equally likely as any other draw. The blue line is what occurs when we average all the 20000 possible dotted lines together. An interesting question to ask is for what temperatures are we most uncertain about the defect-probability? Below we plot the expected value line and the associated 95% intervals for each temperature. In [54]: from scipy.stats.mstats import mquantiles # vectorized bottom and top 2.5% quantiles for "confidence interval" qs = mquantiles(p_t, [0.025, 0.975], axis=0) plt.fill_between(t[:, 0], *qs, alpha=0.7, color="#7A68A6") plt.plot(t[:, 0], qs[0], label="95% CI", color="#7A68A6", alpha=0.7) plt.plot(t, mean_prob_t, lw=1, ls="--", color="k", label="average posterior \nprobability of defect") plt.xlim(t.min(), t.max()) plt.ylim(-0.02, 1.02) plt.legend(loc="lower left") plt.scatter(temperature, D, color="k", s=50, alpha=0.5) plt.xlabel("temp, $t$") plt.ylabel("probability estimate") plt.title("Posterior probability estimates given temp. $t$"); The 95% credible interval, or 95% CI, painted in purple, represents the interval, for each temperature, that contains 95% of the distribution. 
For example, at 65 degrees, we can be 95% sure that the probability of defect lies between 0.25 and 0.75. More generally, we can see that as the temperature nears 60 degrees, the CI's spread out over [0,1] quickly. As we pass 70 degrees, the CI's tighten again. This can give us insight about how to proceed next: we should probably test more O-rings around 60-65 temperature to get a better estimate of probabilities in that range. Similarly, when reporting to scientists your estimates, you should be very cautious about simply telling them the expected probability, as we can see this does not reflect how wide the posterior distribution is. ### What about the day of the Challenger disaster?¶ On the day of the Challenger disaster, the outside temperature was 31 degrees Fahrenheit. What is the posterior distribution of a defect occurring, given this temperature? The distribution is plotted below. It looks almost guaranteed that the Challenger was going to be subject to defective O-rings. In [55]: figsize(12.5, 2.5) prob_31 = logistic(31, beta_samples, alpha_samples) plt.xlim(0.995, 1) plt.hist(prob_31, bins=1000, normed=True, histtype='stepfilled') plt.title("Posterior distribution of probability of defect, given $t = 31$") plt.xlabel("probability of defect occurring in O-ring"); ### Is our model appropriate?¶ The skeptical reader will say "You deliberately chose the logistic function for $p(t)$ and the specific priors. Perhaps other functions or priors will give different results. How do I know I have chosen a good model?" This is absolutely true. To consider an extreme situation, what if I had chosen the function $p(t) = 1,\; \forall t$, which guarantees a defect always occurring: I would have again predicted disaster on January 28th. Yet this is clearly a poorly chosen model. On the other hand, if I did choose the logistic function for $p(t)$, but specified all my priors to be very tight around 0, likely we would have very different posterior distributions. 
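The next section develops one standard answer: simulate datasets from the fitted model and compare them with the observed data. As a miniature preview of the idea, we can compare the observed total number of incidents (7 of 23) against the distribution of totals implied by per-flight defect probabilities; the values below are the per-flight posterior probabilities that will be computed later in this section:

```python
import numpy as np

rng = np.random.default_rng(7)
# Per-flight posterior defect probabilities (computed later in this section).
post_prob = np.array([0.40, 0.25, 0.28, 0.32, 0.36, 0.19, 0.17, 0.25,
                      0.73, 0.53, 0.25, 0.10, 0.36, 0.80, 0.36, 0.13,
                      0.25, 0.07, 0.12, 0.09, 0.13, 0.12, 0.71])
observed_total = 7

# Draw 10,000 synthetic datasets and count the defects in each.
sim_totals = rng.binomial(1, post_prob, size=(10000, 23)).sum(axis=1)
print("mean simulated total: %.1f  (observed: %d)"
      % (sim_totals.mean(), observed_total))
```

If the observed total sat far out in the tails of sim_totals, that alone would flag a problem; agreeing on this one summary is necessary but not sufficient, which is why the separation plots below examine the fit flight by flight.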
How do we know our model is a good expression of the data? This question motivates measuring the model's goodness of fit. We can ask: how can we test whether our model is a bad fit? An idea is to compare the observed data (which, if we recall, is a fixed stochastic variable) with an artificial dataset that we can simulate. The rationale is that if the simulated dataset does not appear statistically similar to the observed dataset, then our model likely does not accurately represent the observed data. Previously in this chapter, we simulated artificial datasets for the SMS example. To do this, we sampled values from the priors. We saw how varied the resulting datasets looked, and rarely did they mimic our observed dataset. In the current example, we should sample from the posterior distributions to create very plausible datasets. Luckily, our Bayesian framework makes this very easy. We only need to create a new stochastic variable that is exactly the same as the variable that stored our observations, but without the observations attached. If you recall, the stochastic variable that stored our observed data was: observed = pm.Bernoulli("bernoulli_obs", p, observed=D) Hence we create: simulated_data = pm.Bernoulli("simulation_data", p) Let's simulate 10,000 datasets: In [56]: N = 10000 with pm.Model() as model: beta = pm.Normal("beta", mu=0, tau=0.001, testval=0) alpha = pm.Normal("alpha", mu=0, tau=0.001, testval=0) p = pm.Deterministic("p", 1.0/(1. + tt.exp(beta*temperature + alpha))) observed = pm.Bernoulli("bernoulli_obs", p, observed=D) simulated = pm.Bernoulli("bernoulli_sim", p, shape=p.tag.test_value.shape) step = pm.Metropolis(vars=[p]) trace = pm.sample(N, step=step) Assigned BinaryGibbsMetropolis to bernoulli_sim [-------100%-------] 10000 of 10000 in 27.8 sec.
| SPS: 359.1 | ETA: 0.0 In [57]: figsize(12.5, 5) simulations = trace["bernoulli_sim"] print(simulations.shape) plt.title("Simulated dataset using posterior parameters") figsize(12.5, 6) for i in range(4): ax = plt.subplot(4, 1, i+1) plt.scatter(temperature, simulations[1000*i, :], color="k", s=50, alpha=0.6) (10000, 23) Note that the above plots are different (if you can think of a cleaner way to present this, please send a pull request and answer here!). We wish to assess how good our model is. "Good" is a subjective term of course, so results must be relative to other models. We will be doing this graphically as well, which may seem like an even less objective method. The alternative is to use Bayesian p-values. These are still subjective, as the proper cutoff between good and bad is arbitrary. Gelman emphasises that the graphical tests are more illuminating [7] than p-value tests. We agree. The following graphical test is a novel data-viz approach to logistic regression. The plots are called separation plots[8]. For a suite of models we wish to compare, each model is plotted on an individual separation plot. I leave most of the technical details about separation plots to the very accessible original paper, but I'll summarize their use here. For each model, we calculate the proportion of times the posterior simulation proposed a value of 1 for a particular temperature, i.e. compute $P( \;\text{Defect} = 1 | t, \alpha, \beta )$ by averaging. This gives us the posterior probability of a defect at each data point in our dataset. 
For example, for the model we used above: In [58]: posterior_probability = simulations.mean(axis=0) print("posterior prob of defect | realized defect ") for i in range(len(D)): print("%.2f | %d" % (posterior_probability[i], D[i])) posterior prob of defect | realized defect 0.40 | 0 0.25 | 1 0.28 | 0 0.32 | 0 0.36 | 0 0.19 | 0 0.17 | 0 0.25 | 0 0.73 | 1 0.53 | 1 0.25 | 1 0.10 | 0 0.36 | 0 0.80 | 1 0.36 | 0 0.13 | 0 0.25 | 0 0.07 | 0 0.12 | 0 0.09 | 0 0.13 | 1 0.12 | 0 0.71 | 1 Next we sort each column by the posterior probabilities: In [59]: ix = np.argsort(posterior_probability) print("probb | defect ") for i in range(len(D)): print("%.2f | %d" % (posterior_probability[ix[i]], D[ix[i]])) probb | defect 0.07 | 0 0.09 | 0 0.10 | 0 0.12 | 0 0.12 | 0 0.13 | 1 0.13 | 0 0.17 | 0 0.19 | 0 0.25 | 1 0.25 | 0 0.25 | 1 0.25 | 0 0.28 | 0 0.32 | 0 0.36 | 0 0.36 | 0 0.36 | 0 0.40 | 0 0.53 | 1 0.71 | 1 0.73 | 1 0.80 | 1 We can present the above data better in a figure: I've wrapped this up into a separation_plot function. In [60]: from separation_plot import separation_plot figsize(11., 1.5) separation_plot(posterior_probability, D) The snaking-line is the sorted probabilities, blue bars denote defects, and empty space (or grey bars for the optimistic readers) denote non-defects. As the probability rises, we see more and more defects occur. On the right hand side, the plot suggests that as the posterior probability is large (line close to 1), then more defects are realized. This is good behaviour. Ideally, all the blue bars should be close to the right-hand side, and deviations from this reflect missed predictions. The black vertical line is the expected number of defects we should observe, given this model. This allows the user to see how the total number of events predicted by the model compares to the actual number of events in the data. It is much more informative to compare this to separation plots for other models. Below we compare our model (top) versus three others: 1. 
the perfect model, which predicts the posterior probability to be equal to 1 if a defect did occur. 2. a completely random model, which predicts random probabilities regardless of temperature. 3. a constant model: where $P(D = 1 \; | \; t) = c, \;\; \forall t$. The best choice for $c$ is the observed frequency of defects, in this case 7/23. In [61]: figsize(11., 1.25) # Our temperature-dependent model separation_plot(posterior_probability, D) plt.title("Temperature-dependent model") # Perfect model # i.e. the probability of defect equals 1 if a defect occurred and 0 otherwise. p = D separation_plot(p, D) plt.title("Perfect model") # random predictions p = np.random.rand(23) separation_plot(p, D) plt.title("Random model") # constant model constant_prob = 7./23*np.ones(23) separation_plot(constant_prob, D) plt.title("Constant-prediction model"); In the random model, we can see that as the probability increases there is no clustering of defects to the right-hand side. Similarly for the constant model. In the perfect model, the probability line is not well shown, as it is stuck to the bottom and top of the figure. Of course the perfect model is only for demonstration, and we cannot draw any scientific inference from it. ##### Exercises 1. Try putting in extreme values for our observations in the cheating example. What happens if we observe 25 affirmative responses? 10? 50? 2. Try plotting $\alpha$ samples versus $\beta$ samples. Why might the resulting plot look like this? In [62]: #type your code here. figsize(12.5, 4) plt.scatter(alpha_samples, beta_samples, alpha=0.1) plt.title("Why does the plot look like this?") plt.xlabel(r"$\alpha$") plt.ylabel(r"$\beta$"); ### References • [1] Dalal, Fowlkes and Hoadley (1989), JASA, 84, 945-957. • [2] German Rodriguez. Datasets. In WWS509. Retrieved 30/01/2013, from http://data.princeton.edu/wws509/datasets/#smoking. • [3] McLeish, Don, and Cyntha Struthers. STATISTICS 450/850 Estimation and Hypothesis Testing. Winter 2012. 
Waterloo, Ontario: 2012. Print. • [4] Fonnesbeck, Christopher. "Building Models." PyMC-Devs. N.p., n.d. Web. 26 Feb 2013. http://pymc-devs.github.com/pymc/modelbuilding.html. • [5] Cronin, Beau. "Why Probabilistic Programming Matters." Online posting to Google+. Web. 24 Mar. 2013. https://plus.google.com/u/0/107971134877020469960/posts/KpeRdJKR6Z1. • [6] Brooks, S.P., E.A. Catchpole, and B.J.T. Morgan. "Bayesian Animal Survival Estimation." Statistical Science 15 (2000): 357–376. • [7] Gelman, Andrew. "Philosophy and the Practice of Bayesian Statistics." British Journal of Mathematical and Statistical Psychology (2012). Web. 2 Apr. 2013. • [8] Greenhill, Brian, Michael D. Ward, and Audrey Sacks. "The Separation Plot: A New Visual Method for Evaluating the Fit of Binary Models." American Journal of Political Science 55.4 (2011). Web. 2 Apr. 2013.
### Session APS2: APS II - Condensed Matter and Nanomaterials 3:30 PM–5:42 PM, Friday, March 23, 2007 Foster Science Building Room: 458 Chair: Tikhon Bykov, McMurry University Abstract ID: BAPS.2007.TSS07.APS2.3 ### Abstract: APS2.00003 : Vortex Dynamics in the High Temperature Superconductor YBa$_{2}$Cu$_{3}$O$_{7-\delta }$ with In-plane Columnar Defects 3:54 PM–4:06 PM #### Authors: Heather Quantz (Austin College) Andra Petrean-Troncalli (Austin College) Lisa Paulius (Western Michigan University) Valentina Tobos (Lawrence Technological University) Wai-K. Kwok (Materials Science Division, Argonne National Laboratory) We investigated the vortex dynamics in a single crystal of YBa$_{2}$Cu$_{3}$O$_{7-\delta }$ before and after irradiation with high-energy heavy ions. Earlier studies have focused on the effects of irradiation-induced columnar defects parallel to the crystallographic c-axis of the crystal or at relatively large angles off the ab-plane. In our current study, we introduced columnar defects \textit{along the in-plane layered structure} of the crystal. A single crystal of YBa$_{2}$Cu$_{3}$O$_{7-\delta }$ was polished down to a narrow width of 27 $\mu$m allowing high energy heavy ions to penetrate the crystal along the ab-plane. The crystal was irradiated with 1.4 GeV $^{208}$Pb$^{56+}$ ions to a dose matching field of 1T. We present analysis of vortex dynamics under various current densities, magnetic field strengths and orientations. This work was supported by the US Department of Energy, under contract DE-AC02-06CH11357 and by National Science Foundation under grant No. DMR-0072880. To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2007.TSS07.APS2.3
# C++ basic bank money class I am trying to create a basic money class that fits into a small console banking application I am making. Money.h #pragma once #include <string> class Money { private: long pounds; int pence; public: Money(); explicit Money(long pounds); Money(long pounds, int pence); /* Overload operators to allow easier arithmetic of money objects, we will not overload * or / as it does not make logical sense for money to be multiplied or divided. */ Money operator+(const Money& moneyRhs) const; Money operator-(const Money& moneyRhs) const; friend std::ostream& operator<<(std::ostream& os, const Money& money); // toString method to print out money object std::string toString() const; // Getters long getPounds() const; int getPence() const; }; Money.cpp #include "Money.h" #include <iomanip> Money::Money(): pounds(0), pence(0) {} Money::Money(const long pounds): pounds(pounds), pence(0) {} Money::Money(const long pounds, const int pence): pounds(pounds), pence(pence) {} Money Money::operator+(const Money& moneyRhs) const { // Convert all money to pence then do addition const long poundsInPence = (pounds + moneyRhs.pounds) * 100; const int totalPence = pence + moneyRhs.pence; const long allPence = poundsInPence + totalPence; const Money m3 = Money(allPence / 100, allPence % 100); return m3; } Money Money::operator-(const Money& moneyRhs) const { // Convert all money to pence then do subtraction const long poundsInPence = (pounds - moneyRhs.pounds) * 100; const int totalPence = pence - moneyRhs.pence; const long allPence = poundsInPence + totalPence; const Money m3 = Money(allPence / 100, allPence % 100); return m3; } std::string Money::toString() const { std::string strMoneyFormat; // Check so see if the pence value is 1 digit, if so we need to add a trailing 0 for output // e.g £150.5 becomes £150.05 if((getPence() > 0 ? static_cast<int>(log10(static_cast<double>(getPence()))) + 1 : 1) < 2) { strMoneyFormat = std::to_string(getPounds()) + "." 
+ "0" + std::to_string(getPence()); } else { strMoneyFormat = std::to_string(getPounds()) + "." + std::to_string(getPence()); } return strMoneyFormat; } std::ostream& operator<<(std::ostream& os, const Money& money) { os << money.toString(); return os; } long Money::getPounds() const { return pounds; } int Money::getPence() const { return pence; } I'm relatively new to C++ still and want to know what I could improve in this code, especially what I should do about the * and / operators: I know that when you overload arithmetic operators you should try to overload them all, but you can't really divide or multiply money. Any other tips would be appreciated. ## Store Money as Fixed-Point Monetary quantities are the classic example of something that might have fractional values, but you do not want to represent as a floating-point number, because you need exact arithmetic, not fast approximations. At the same time, it’s a good idea to store at least one extra digit of precision, so that fractions of a penny will round and add up correctly. You also definitely want to store this in a single integral value. Otherwise, you’ll constantly need to be checking whether your pence overflowed or underflowed. This integral type needs to be more than 32 bits long, so it can hold numbers in the billions and trillions, and it needs to be signed. So, use a long long int to hold the quantity. It’s a good exercise in how to write a class that encapsulates an opaque internal representation. In my tests, I used mills. Does Money( -12, 50 ) represent £-12.50 or £-11.50? Are your member functions consistent about this? This is another problem that a fixed-point representation will solve. ## Declare Default Copy Constructor and Assignment You should ignore what I said here before; it was a bad explanation. (I’m grateful to Sebastian Redl for pointing out my error.) Here’s what’s actually going on. The compiler will create a default copy constructor and = operator for each class. 
In many cases, you will want to write one of your own instead of using the defaults. For example, a move might be more efficient than a copy, and declaring a move constructor prevents the default copy constructor from being created. You might want copying and assignment to be private. You might have some non-trivial initialization to do. None of these apply here; what you wrote will work. I think it’s good practice to declare these implicit functions as part of the interface and remove all ambiguity about them. Otherwise, some other code could make the compiler delete one of them. The Rule of Three holds that, if your class is complicated enough that it needs an explicit copy constructor, assignment operator or destructor, it should have all three. If it’s managing any resources that can’t be trivially copied, the assignment operator also needs to copy them, and usually they can’t be trivially deleted either. This doesn’t really apply to the default constructors, but, because the rule is so widely recommended, I normally declare all three, even if it’s as default or delete. (There is also a Rule of Five, which says that, if you need either a move constructor or move assignment, you should declare both of those two, plus the other three, which they would otherwise replace. Here, we don’t need them.) ## Use the Standard Library’s Formatting With <iomanip> you can set the fixed and precision specifiers to make all quantities display exactly two digits after the decimal point. Edit: Since long double conversion on some platforms (including MSVC) could improperly round off the lower digits, I wrote a new version that uses the <iomanip> manipulators. ## Check for Errors If your library is ever managing trillions and trillions of dollars, it should definitely be checking for out-of-range errors! A good way to do this, especially in constructors, is to throw exceptions from <stdexcept>, such as invalid_argument, overflow_error and underflow_error. 
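For instance, a validating constructor might look like the following sketch. This is only illustrative (the class body and the single-pence-count representation are assumptions made for this example, not the code under review), but it shows the <stdexcept> pattern:

```cpp
#include <stdexcept>

// Illustrative sketch of argument validation in a constructor.
// The representation (a single signed pence count) is an assumption
// made for this example, not the reviewed code.
class Money {
    long long pence_total;
public:
    Money(long long pounds, int pence) {
        if (pence < 0 || pence >= 100) {
            throw std::invalid_argument("pence must be in [0, 100)");
        }
        // Keep the sign of the whole quantity on the pounds part.
        pence_total = pounds * 100 + (pounds < 0 ? -pence : pence);
    }
    long long in_pence() const { return pence_total; }
};
```

A caller writing Money(1, 250) then gets a std::invalid_argument exception instead of a silently nonsensical value.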
This is trickier than it sounds, because signed overflow or underflow is undefined behavior. The instructions on modern CPUs will wrap around, but compiler writers feel this gives them permission to break your error-checking code. What you actually need to do is find the difference between the maximum/minimum representable value and one operand, then compare that to the other operand to see if you have room to add or subtract it. ## Name Your Constants When Possible You should prefer defining constants such as Money::pence_per_GBP to constants such as 100. This makes it much easier to change the type, easier to understand why you are using the constants you do and whether they are correct, and harder to make typos. And besides, the number of pence in a pound has changed before. These should be either static constexpr class members, or declared in the module that uses them. Edit: Contrary to what I said before, static constexpr data members do not need to be defined outside the class declaration any longer, and doing so was deprecated in C++17. I’ve deleted that part of the code. ## Declare Your Functions constexpr and noexcept where Appropriate This helps optimize, and also allows them to be used to initialize compile-time constant expressions. ## Don’t Have More Friends than You Need If the stream output operator calls the public string conversion function, it doesn’t need to be a friend of the class. Friendship would allow it to breach encapsulation and access non-public members. Speaking of which: I like camelCase, but the STL uses snake_case consistently. In particular, the STL classes that convert to string call that function .to_string(), not .toString(), and several templates duck-type to the former. CapitalizedCamelCase class names (also called PascalCase) are widely-used, though, so that’s fine. ## Optionally: What Other Functions Make Sense You have a stream output operator << but no stream input operator >>. There’s no += or -=. 
You have a comment that multiplication and division wouldn’t make sense, but what about scalar multiplication and division, such as full_price * discounted_rate? What about a negation operator, such as borrower_liability = -loan_amount;? What about comparisons, like income >= expenses? ## Consider User-Defined Suffixes I went ahead and put two of these in for fun, _L for monetary values in pounds and _p for monetary values in pence. If these might clash with someone else’s _L or _p, a good solution is to put their declarations in a namespace, like the STL does for string literals. This way, it doesn’t overload _s for either strings or seconds unless I add the line: using namespace std::literals::string_literals; ## Putting it All Together #include <iostream> #include <limits> #include <string> class Money { private: /* “The mill is a unit of currency, used in several countries as one- * thousandth of the main unit.” (In this case, the GBP.) This is * at least 64 bits wide, to be able to represent amounts larger than * £2,147,483.64. It has one extra digit of precision, to ensure * proper rounding. * * If a long double has fewer than 63 mantissa bits, this implementation * might incorrectly round off extremely large credits or debits. You * might want to flag this and throw a std::range_error exception. */ long long mills; /* Compile-time constants. I originally had definitions of these in * namespace scope as well, but that is no longer necessary and is now * deprecated. */ static constexpr auto money_max = std::numeric_limits<decltype(mills)>::max(); static constexpr auto money_min = std::numeric_limits<decltype(mills)>::min(); static constexpr int mills_per_GBP = 1000; static constexpr int mills_per_penny = 10; /* Used internally to bypass round-trip conversion. The dummy parameter is * there only to distinguish this constructor from the one that takes a * value in pounds. 
*/ struct dummy_t {}; explicit constexpr Money (long long source, dummy_t) noexcept; public: // Use the trivial default constructor instead of a custom one. constexpr Money() noexcept = default; /* Allow either Money(12.34) or Money(12,34). Note that Money(12.345) * is legal, but Money(12,345) should throw a std::invalid_argument * exception. */ constexpr Money(long double pounds); constexpr Money( long long int pounds, unsigned pence ); // The implicit members would have sufficed. constexpr Money(const Money&) noexcept = default; Money& operator= (const Money&) noexcept = default; ~Money() = default; /* If this class can be a base class, it would need a virtual destructor. * Otherwise, trivial destruction suffices. */ /* These are constexpr, but not noexcept, because they could throw a * std::overflow_error or std::underflow_error exception. */ constexpr Money operator+(const Money&) const; constexpr Money operator-(const Money&) const; // This should be named in snake_case, not camelCase: std::string to_string() const; /* Returns the quantity denominated in pounds, rounded to the nearest * penny. You might throw an exception rather than return an incorrectly- * rounded result. */ constexpr long double to_pounds() const noexcept; /* Returns only the part of the currency string beginning with the * point. E.g., for £12.34, returns 0.34, and for £-56.78, returns 0.78. */ constexpr double pence() const noexcept; static constexpr int pence_per_GBP = 100; }; // This only needs to be a friend if it uses an interface that isn’t public: std::ostream& operator<< ( std::ostream&, Money ); // Unimplemented: std::istream& operator>> ( std::istream&, Money& ); // User-defined literal for money in pounds, e.g. 12.34_L. constexpr Money operator""_L (long double); // User-defined literal for money in pence, e.g. 56_p. 
constexpr Money operator""_p (long double); #include <cmath> #include <iomanip> #include <sstream> #include <stdexcept> #include <string> using namespace std::literals::string_literals; static const std::string invalid_arg_msg = "Monetary quantity out of range.", overflow_msg = "Monetary overflow.", underflow_msg = "Monetary underflow."; constexpr Money::Money(const long double pounds) /* Converts the quantity in GBP to the internal representation, or throws a * std::invalid_argument exception. Rounds to the nearest mill. */ { /* Refactored so that a NaN value will fall through this test and correctly * raise an exception, rather than, as before, be spuriously converted. * On an implementation where the precision of long double is less than that * of long long int, such as MSVC, the bounds tests below could spuriously * reject some values between the bounds and the next representable value * closer to zero, but it will only convert values that are in range. */ if ( mills_per_GBP * pounds < static_cast<long double>(money_max) && mills_per_GBP * pounds > static_cast<long double>(money_min) ) { // We unfortunately cannot use llroundl in a constexpr function. mills = static_cast<decltype(mills)>( mills_per_GBP * pounds ); } else { throw std::invalid_argument(invalid_arg_msg); } } constexpr Money::Money( const long long pounds, const unsigned pence ) /* Converts the quantity in pounds and pence to the internal representation, * or throws a std::invalid_argument exception. * * For example, Money(-1234,56) represents £-1234.56. */ { if ( pounds > money_max / mills_per_GBP ) { throw std::invalid_argument(invalid_arg_msg); } if ( pence >= pence_per_GBP ) { throw std::invalid_argument(invalid_arg_msg); } const long long base = mills_per_GBP * pounds; const long long change = mills_per_penny * ( (pounds >= 0) ? 
pence : -(long long)pence ); if ( base > 0 && money_max - base < change ) { throw std::invalid_argument(invalid_arg_msg); } if ( base < 0 && money_min - base > change ) { throw std::invalid_argument(invalid_arg_msg); } mills = base + change; } constexpr Money::Money ( const long long source, [[maybe_unused]] Money::dummy_t dummy // Only to disambiguate. ) noexcept : mills(source) /* Used internally to bypass unnecessary conversions and range-checking. */ {} constexpr Money Money::operator+(const Money& other) const /* Adds this and other, checking for overflow and underflow. We cannot * portably rely on signed integer addition to wrap around, as signed * overflow and underflow are undefined behavior. */ { if ( mills > 0 && money_max - mills < other.mills ) { throw std::overflow_error(overflow_msg); } if ( mills < 0 && money_min - mills > other.mills ) { throw std::underflow_error(underflow_msg); } return Money( mills+other.mills, dummy_t() ); } constexpr Money Money::operator-(const Money& other) const /* Subtracts other from this, checking for overflow and underflow. */ { if ( mills > 0 && money_max - mills < -other.mills ) { throw std::overflow_error(overflow_msg); } if ( mills < 0 && money_min - mills > -other.mills ) { throw std::underflow_error(underflow_msg); } return Money( mills-other.mills, dummy_t() ); } std::string Money::to_string() const /* In the future, you may be able to use std::format instead. You might also * use snprintf. * * Changed to do integer math rather than a FP conversion that might display a * spuriously-rounded value. */ { std::stringstream to_return; const auto pounds = mills/mills_per_GBP; const auto pence = (mills >= 0) ? ( mills % mills_per_GBP )/mills_per_penny : -( mills % mills_per_GBP )/mills_per_penny; to_return << "£" << pounds << '.' 
<< std::setw(2) << std::setfill('0') << std::right << pence; return to_return.str(); } constexpr long double Money::to_pounds() const noexcept { return static_cast<long double>(mills) / mills_per_GBP; } constexpr double Money::pence() const noexcept { const double remainder = (double)(mills % mills_per_GBP); return (remainder >= 0) ? remainder/mills_per_GBP : -remainder/mills_per_GBP; } std::ostream& operator<< ( std::ostream& os, const Money x ) { return os << x.to_string(); } // User-defined literal for money in pounds, e.g. 12.34_L. constexpr Money operator""_L (const long double pounds) { return Money(pounds); } // User-defined literal for money in pence, e.g. 56_p. constexpr Money operator""_p (const long double pence) { return Money(pence/Money::pence_per_GBP); } #include <cstdlib> // for EXIT_SUCCESS #include <iostream> #include <limits> #include <stdexcept> using std::cout; int main() { constexpr Money one_million_GBP(1e6), minus_twelve_forty_GBP( -12, 40 ), lotta_money = 4.62e15_L; try { const Money invalid_value = Money(1.0e16L); cout << "Stored unexpectedly large value " << invalid_value << ".\n"; } catch (const std::invalid_argument&) { cout << "Properly caught a value too high.\n"; } try { const Money invalid_value = Money(-1.0e16L); cout << "Stored unexpectedly small value " << invalid_value << ".\n"; } catch (const std::invalid_argument&) { cout << "Properly caught a value too low.\n"; } try { const Money invalid_value = lotta_money + lotta_money; cout << "Added unexpectedly large quantity " << invalid_value << ".\n"; } catch (const std::overflow_error&) { cout << "Properly caught arithmetic overflow.\n"; } try { const Money invalid_value = Money() - lotta_money - lotta_money; cout << "Added unexpectedly large liability " << invalid_value << ".\n"; } catch (const std::underflow_error&) { cout << "Properly caught arithmetic underflow.\n"; } try { const Money invalid_value = Money(std::numeric_limits<long double>::quiet_NaN()); cout << "Improperly interpreted a NaN value as " 
<< invalid_value << ".\n"; } catch (const std::invalid_argument&) { cout << "Properly caught initialization of money from NaN.\n"; } // Expected output: £0.10, £1000000.00, £-12.40, £999987.60 and £1000012.40. cout << 10.1_p << ", " << one_million_GBP << ", " << minus_twelve_forty_GBP << ", " << (one_million_GBP + minus_twelve_forty_GBP) << " and " << (one_million_GBP - minus_twelve_forty_GBP) << ".\n"; cout << one_million_GBP.to_pounds() << " pounds and " << Money::pence_per_GBP*minus_twelve_forty_GBP.pence() << " pence.\n"; return EXIT_SUCCESS; } Update: I decided to take my own advice about declaring functions constexpr where possible, and made all the constructors constexpr. Now the declaration, constexpr Money one_million_GBP(1e6), minus_twelve_forty_GBP( -12, 40 ), lotta_money = 4.62e15_L; compiles (on clang++13 with std=c++20 -O3 -march=x86-64-v4) to: mov qword ptr [rsp + 72], 1000000000 mov qword ptr [rsp + 64], -12400 movabs rax, 4620000000000000000 It could be a huge boost in performance if your Money objects can be optimized into compile-time integer constants. There’s another way to optimize this class, and other tiny classes, that I did not do. Since it’s trivially-copyable and so small that an object can fit in a register, there is no reason to pass it by const reference. If you pass it by value instead, some variables that might otherwise need to be spilled onto the stack to take their address can instead stay in registers. This code outputs the non-ASCII pound sign. This should work on modern OSes, but if you’re having some difficulty on an older version of Windows, you might want to give CL.EXE the flag /utf-8 and set chcp 65001 in your console window. One word of explanation about something that might not be obvious: because I made the public constructors more complicated, I wanted to add a more lightweight private constructor that set the internal field directly. 
Since the compiler would otherwise be unable to tell when I was calling the public or private constructor, I created a nullary Money::dummy_t type solely to make sure the type signature of the private constructor was unique. This solution was a bit ungainly, but it’s not part of the public interface anyway. • thanks for the detailed explanation, and examples Jan 16 at 8:47 • Is there any reason why you picked long long int instead of std::int64_t? Jan 17 at 0:11 • @IsmaelMiguel That would work too. The answer is technically yes: on some oddball ISA that doesn’t have a native 64-bit type, int64_t might be a lot slower than long long int, or might not even exist at all. On an implementation where they’re different, you probably would want std::int_fast64_t instead of std::int64_t for this. And if they’re the same, it doesn’t matter what name you call it by. But it really comes down to personal preference. Jan 17 at 2:39 • @IsmaelMiguel You’ll notice I did write decltype(mills) instead of long long int in a few places, because I was considering changing it. Jan 17 at 2:41 • @MatthieuM. On second reading, the bounds check in the long double constructor is safe, but not for the reason I initially thought. An integral-to-floating-point conversion that is in range, but cannot be represented exactly, is allowed to round either up or down. The tests I wrote will only accept the next representable value between that and zero, though, which will always be in range. Thanks for the feedback! Jan 17 at 20:02 # Store money as a single integer Using integers instead of floating point values to store an amount of money is a very good idea. However, you don't need two integers, you can just use one, that stores the value in pence. This makes a lot of things easier. Also, by having two signed integers, you have to worry about all four possible combinations of signs of pounds and pence, with one integer you don't have that problem. # Multiplying and dividing money [...] 
as it does not make logical sense for money to be multiplied or divided. Yes and no. It does not make sense to multiply an amount of money by another amount of money, but it does make sense to multiply an amount of money by, say, an integer, or even a floating point number. For example, say one bread costs 3 pounds, and I want to buy 5 breads. How much will that cost me in total? Dividing is a bit different. It actually makes sense to divide money by money. Going by the same example: say I have 15 pounds, and a bread costs 3 pounds. How many breads can I buy? You can implement operator overloads for multiplying Money by an int for example, which returns Money. Or one that divides Money by another amount of Money and returns a double. # Converting to string Checking if getPence() returns a value between 0 and 9 can be done much more simply than taking the base-10 logarithm of a value. Especially if you could ensure that pence is always between 0 and 99, it is just: if (pence < 10) { strMoneyFormat = std::to_string(pounds) + ".0" + std::to_string(pence); } else { strMoneyFormat = std::to_string(pounds) + "." + std::to_string(pence); } Although that is still repeating things twice, it can be rewritten to: std::string Money::toString() const { return std::to_string(pounds) + (pence < 10 ? ".0" : ".") + std::to_string(pence); } But since you tagged the post C++20, consider using std::format(): std::string Money::toString() const { return std::format("{}.{:02}", pounds, pence); } If your compiler's standard library doesn't support std::format() yet, you can use fmt::format() instead. • Thanks for the help. I am struggling with what you mean by your first point, storing everything as one value: so instead of pounds and pence, just one long member variable called amount? How would I need to change my + operator overload? Can you provide an example? Thanks. Jan 16 at 8:44 • Just think of calculating everything in pence, like "2 pound 50" being equivalent to 250 pence. 
Your operator+() and operator-() would then just need to add and subtract the total numbers of pence, which is trivial to do. Jan 16 at 11:59 • Small typo: (pence < 10 ? ".0" : "0") should be (pence < 10 ? ".0" : ".") Jan 17 at 2:35 Use default values in your parameters, instead of creating overlapping function definitions Money(); // Overloaded constructors explicit Money(long pounds); Money(long pounds, int pence); Do: explicit Money(long pounds = 0, int pence = 0); Then you just need : // Money::Money(): pounds(0), pence(0) {} // Money::Money(const long pounds): pounds(pounds), pence(0) {} Money::Money(const long pounds, const int pence) : pounds(pounds), pence(pence) {} As pointed out in the comments, not every overloaded function is suitable for parameter defaults. The constructors can (for C++11 and above) be written using delegating constructors: Money::Money(const long pounds, const int pence) : pounds(pounds), pence(pence) {} Money::Money() : Money(0, 0) {} Money::Money(const long pounds) : Money(pounds, 0) {} • Jan 16 at 12:02 • @Richard, that's worthwhile reading, but a blanket rejection of default arguments is perhaps over the top. Most of the arguments against don't apply here, and the arguments for are strong (particularly if we inline the definition into the class definition). Jan 17 at 9:29 • @TobySpeight I linked to this, because I consider using default args in a constructor bad practice. Rather, there should be a no-args constructor, which delegates to another constructor with default values: Money(): Money(0, 0) {} Jan 17 at 9:47 • One reason why it might be preferable not to use default arguments is when the arguments need to be checked. In particular, the default constructor, Money(), will need to be called on every element of an array or vector. 
It is easy for the compiler to figure out that Money() = default; is a trivial constructor, harder to determine whether constexpr Money( long long int pounds = 0, int pence = 0 ) evaluates to a trivial default constructor, and might not be possible to determine at all when the constructor is not constexpr. This could have very high overhead on an array! Jan 17 at 20:50 See C++ money class for basic banking application for a similar class. See my answer and others that address the identical issues in your code. In particular, you are multiplying out the pounds and pence into a single number of pence whenever you do work on it, so why not just store it that way? Only factor it if someone asks for the pounds and pence values. You have a + and a - but not += and -=. The normal thing is to define += as a member and then + just calls +=. How about a unary - (negation)? That would make it trivial to write subtraction by calling addition. I think someone might want to write -m at some point and it would be annoying to write Money{}-m instead. You have a toString member but everyone (including template code) expects a to_string non-member function to exist. Your constructors should be inline in the class definition, because they are very trivial and should be inlined when used. You prevent that and require a function call. (That's the case for most compiler environments today; it might not be an issue if you have link-time code generation.) Did you know that % is very slow? I think your toString formatting is awkward. You have some crazy formula to predict the number of digits and then call two different formatting branches that differ only in one place. Appending C strings of one character is inefficient compared to using a char constant instead. 
Adding strings like this causes multiple re-allocations and copies to be made.
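Pulling the review's suggestions together, here is one minimal sketch of what the reviewed class could look like — storing a single count of pence, defining `+=` as a member with `+` built on top of it, adding unary minus, and providing a non-member `to_string`. The names, layout, and the non-negative-only formatting are illustrative assumptions, not the OP's actual code:

```cpp
#include <string>

// Sketch of the advice above. Stores the amount as one count of pence,
// so arithmetic never has to recombine pounds and pence.
class Money {
    long long total_pence = 0;  // single stored quantity, as suggested
public:
    constexpr Money() = default;
    constexpr Money(long long pounds, int pence = 0)
        : total_pence(pounds * 100 + pence) {}

    // Factor into pounds/pence only when someone asks.
    constexpr long long pounds() const { return total_pence / 100; }
    constexpr int pence() const { return static_cast<int>(total_pence % 100); }

    Money& operator+=(Money rhs) { total_pence += rhs.total_pence; return *this; }
    Money& operator-=(Money rhs) { total_pence -= rhs.total_pence; return *this; }
    Money operator-() const { Money m; m.total_pence = -total_pence; return m; }

    // Symmetric non-member operators defined via the compound forms.
    friend Money operator+(Money lhs, Money rhs) { return lhs += rhs; }
    friend Money operator-(Money lhs, Money rhs) { return lhs -= rhs; }
    friend bool operator==(Money a, Money b) { return a.total_pence == b.total_pence; }
};

// Non-member to_string, as template code expects. Uses char constants
// rather than one-character C strings when appending.
// For brevity this handles non-negative amounts only.
std::string to_string(Money m) {
    std::string s = std::to_string(m.pounds());
    s += '.';              // char constant, not "."
    int p = m.pence();
    if (p < 10) s += '0';  // zero-pad the pence to two digits
    s += std::to_string(p);
    return s;
}
```

With this layout, `Money(1, 50) + Money(2, 75)` is just one 64-bit addition, and `to_string(Money(4, 5))` yields `"4.05"` without any digit-counting formula.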
# Seminars in Algebra

## November 24th 2015, 14:15 - 15:15 in 656

Title: The Lie bracket in Hochschild cohomology via the homotopy category of projective bimodules (II)

Speaker: Reiner Hermann

Abstract: This talk aims to employ the results presented in the preceding one in order to realise the Lie bracket in terms of the (bounded below) homotopy category of projective bimodules. After recalling Schwede’s loop bracket construction, I will explain how one can take advantage of the tensor triangulated structure of the homotopy category of projective bimodules to produce, for two given graded endomorphisms of the tensor unit, an element of a fundamental group of a certain morphism. I will conclude by sketching how Buchweitz’ and Schwede’s results can be combined to verify that this construction indeed gives rise to the Lie bracket in Hochschild cohomology. Thus, morally, the existence of the Lie bracket in Hochschild cohomology can be regarded as a shadow of the (often bemoaned) fact that taking cones in a triangulated category is non-functorial in general.

## November 17th 2015, 14:15 - 15:15 in 656

Title: The Lie bracket in Hochschild cohomology via the homotopy category of projective bimodules (I)

Speaker: Johan Steen

Abstract: This is the first of two talks, based on joint work with Reiner Hermann, where we interpret the Lie algebra structure of the Hochschild cohomology of an (associative) algebra inside the bounded below homotopy category of projective bimodules. The aim of the first talk is to describe the work of others that led us to this project. Specifically, I will discuss the fundamental group of a morphism in a triangulated category (due to Buchweitz), the so-called Retakh isomorphism, which relates an extension group to a certain fundamental group, and furthermore show that this can be recovered from the work of Buchweitz in a very natural way.
## November 11th 2015, 13:15 - 14:15 in 734

Title: Join-irreducibles of weak order on finite Coxeter groups

Speaker: Hugh Thomas

Abstract: I will explain weak order on Coxeter groups, and then discuss several approaches to understanding one of its key features: its join-irreducible elements. I will discuss recent work with Osamu Iyama, Nathan Reading and Idun Reiten, in which we construct a bijection between join-irreducible elements and a certain class of modules over the preprojective algebra. I will also discuss older work of Reading, which gives a more combinatorial/geometric approach to the same topic. Finally, if there is time, I will discuss an ongoing project with David Speyer in which we relate the two approaches.

## November 10th 2015, 14:15 - 15:15 in 656

Title: Semi-invariant pictures, c-vectors, maximal green sequences

Speaker: Gordana Todorov

## November 3rd 2015, 14:15 - 15:15 in 656

Title: Orbit categories and self-injective algebras

Speaker: Karin Marie Jacobsen

Abstract: It is easy to see that there is a strong similarity between the AR-quivers of orbit categories and stable module categories of self-injective algebras. In 2013 Holm and Jørgensen classified all cluster categories that are equivalent to stable module categories. Using a theorem by Amiot, we give a classification of the orbit categories that are triangulated equivalent to stable module categories. This is joint work with Benedikte Grimeland.

## October 28th 2015, 13:15 - 14:15 in 734

Title: Dehn twist groups as fundamental groups of spaces of (signed) quadratic differentials

Speaker: Yu Qiu

Abstract: We study the space Quad of (signed) quadratic differentials on a marked surface S. By constructing King-Qiu's twisted surface Sigma of S, we show that pi_1 Quad is isomorphic to a subgroup DTG of the mapping class group of Sigma generated by Dehn twists. We also conjecture that DTG is isomorphic to the spherical twist group for the corresponding 3-Calabi-Yau category.
This conjecture has been proven by my previous work in the case when S is unpunctured and by my joint work with Jon Woolf in the case when S is a once-punctured disk (type D).

## October 21st 2015, 13:15 - 14:15 in 734

Title: Stability conditions on the CY completion of a formal parameter and twisted periods

Speaker: Akishi Ikeda

Abstract: In the second talk, we define a variant of a Bridgeland stability condition on the derived category of the Calabi-Yau completion for the formal parameter which is defined in the first talk. The space of these stability conditions becomes a complex manifold and admits the action of complex numbers and autoequivalences. In particular, the action of a complex number $s$ coincides with the action of the Serre functor which is given by the shift of the formal parameter. We also discuss the relationship between central charges of these stability conditions and twisted periods of the Frobenius manifold.

## October 20th 2015, 14:15 - 15:15 in 656

Title: Calabi-Yau completion for a formal parameter

Speaker: Akishi Ikeda

Abstract: The aim of this work is to construct stability conditions on $s$-Calabi-Yau categories for a complex number $s$. In the first talk, we introduce the notion of a Calabi-Yau completion for a formal parameter as an analogue of Keller's Calabi-Yau completion for an integer. The derived category of the CY completion for a formal parameter has an extra degree shift in the direction of the formal parameter, and this shift becomes the Serre functor. As an application, we give a categorification of the representation of the Iwahori-Hecke algebra on the (deformed) root lattice associated to an acyclic quiver.

## October 14th 2015, 14:15 - 15:15 in 734

Title: Catalan numbers, Polytopes and the Number of Tilting Modules

Speaker: Lutz Hille

Abstract: There is an extensive study of Catalan numbers in combinatorics. Some of this is closely connected to the number of tilting modules for quivers of type A.
Since the number of tilting modules is independent of the orientation for any Dynkin quiver, the number of tilting modules should be defined already in terms of the root system. It turns out that there is a polytope, the so-called root polytope, whose volume is closely related to the number of tilting modules. In the talk we review some of the Catalan combinatorics, introduce the root polytopes and relate them to the counting problem. It turns out that we can also count the number of strongly exceptional sequences with this method. At the end we give a brief outlook on the corresponding problem for tame and wild quivers.

## October 14th 2015, 13:10 - 14:10 in 734

Title: Lifting problems concerning actions of finite groups on curves

Speaker: Ted Chinburg

Abstract: A famous problem in characteristic p algebraic geometry is to determine when the action of a finite group G on a smooth projective curve over a field k of positive characteristic p can be lifted to characteristic 0. A lift is a smooth curve with G-action over a DVR of characteristic 0 and residue field k which reduces to the original curve with group action over k. The Oort conjecture, proved recently by Obus, Wewers and Pop, shows that if G is cyclic such a lift always exists. The main problem in the subject now is whether lifts always exist when G is dihedral of order 2p^n. I will discuss some obstructions to lifting which may help settle this problem. One new obstruction arises from the Galois structure of holomorphic differentials discussed in Frauke Bleher's talk.

## October 13th 2015, 14:15 - 15:15 in 656

Title: Galois structure of holomorphic differentials

Speaker: Frauke Bleher

Abstract: This talk is about joint work with Ted Chinburg and Aristides Kontogeorgis. Let X be a smooth projective curve over an algebraically closed field k of positive characteristic p. Suppose G is a finite group with non-trivial cyclic Sylow p-subgroups acting faithfully on X.
We determine the kG-module structure of the module H^0(Omega_X) of holomorphic differentials of X in terms of the so-called Boseck invariants of the ramification of the action of G on X. The proof uses modular representation theory to reduce to the case in which G is a semidirect product of a normal cyclic p-subgroup and a cyclic prime-to-p group. In this case, one compares the radical filtration of H^0(Omega_X) to the global sections of subquotients of the radical filtration of the sheaf Omega_X.

## October 6th 2015, 14:15 - 15:15 in 656

Title: Singular equivalence and the (Fg) condition

Speaker: Øystein Ingmar Skartsæterhagen

Abstract: We show that singular equivalences of Morita type with level between finite-dimensional Gorenstein algebras over a field preserve the (Fg) condition.

## September 30th 2015, 13:15 - 14:15 in 734

Title: Multiserial, special multiserial, and Brauer configuration algebras

Speaker: Edward L. Green

Abstract: In joint work with Sibylle Schroll, we generalize biserial, special biserial and Brauer graph algebras. We prove various interconnections between these algebras and classify the symmetric algebras with radical cube zero in this setting.

## September 29th 2015, 14:15 - 15:15 in 656

Title: On the hearts of cotorsion pairs on triangulated categories

Speaker: Hiroyuki Nakaoka

Abstract: I will introduce the construction of the heart of a (twin) cotorsion pair on a triangulated category, which generalizes the heart of a t-structure and the ideal quotient by a cluster tilting subcategory. I will also refer to conditions under which the heart becomes a module category.

## September 23rd 2015, 13:15 - 14:15 in 734

Title: A Swiss Cheese theorem for linear operators with two invariant subspaces

Speaker: Markus Schmidmeier

Abstract: In this talk we discuss a joint project with Audrey Moore on the possible dimension types of linear operators with two invariant subspaces.
Formally, we consider systems $(V, T, U_1, U_2)$ where $V$ is a finite dimensional vector space, $T: V\to V$ a nilpotent linear operator, and $U_1 \subseteq U_2$ are subspaces of $V$ which are invariant under the action of $T$. To each system we can associate as dimension type the triple $(\dim U_1, \dim U_2/U_1, \dim V/U_2)$. Such systems occur in the theory of linear time-invariant dynamical systems, where the subquotient $U_2/U_1$ is used to reduce the dynamical system to one which is completely controllable and completely observable.

No gaps but holes: The well-known No-Gap Theorem by Bongartz states that for a finite dimensional algebra over an algebraically closed field, whenever there is an indecomposable module of length $n>1$, then there is one of length $n-1$. By comparison, consider the topological space given by the dimension types of indecomposable systems in the situation where $T$ acts with nilpotency index at most 4. Our main result states that in this space there are triples, for example $(3,1,3)$, which cannot be realized as the dimension type of an indecomposable object, while each neighbor can.

## August 25th 2015, 14:15 - 15:15 in 656

Title: Cohomological support and tensor products

Speaker: William Sanders

Abstract: The theory of cohomological supports encodes important homological information of a module into a geometric object. In this talk, we consider cohomological supports over complete intersection rings. In particular, we investigate the cohomological support of the tensor product of two modules using the geometry of the cohomological supports of the original modules. Furthermore, we pose questions regarding the asymptotic behavior of the cohomological supports of Tor modules.

## June 24th 2015, 13:15 - 14:15 in 734

Title: Abelian quotients of triangulated categories

Speaker: Benedikte Grimeland

Abstract: We study abelian quotient categories A=T/J, where T is a triangulated category and J is an ideal of T.
Under the assumption that the quotient functor is cohomological, we show that it is representable and give an explicit description of the functor. We give technical criteria for when a representable functor is a quotient functor, and a criterion for when J gives rise to a cluster-tilting subcategory of T. We show that the quotient functor preserves the AR-structure. As an application we show that if T is a finite 2-Calabi-Yau category, then with very few exceptions J is a cluster-tilting subcategory of T.

## June 17th 2015, 13:15 - 14:15 in 734

Title: Extensions in gentle algebras

Speaker: Sibylle Schroll

Abstract: The aim of this talk is to give a basis for the extensions between indecomposable modules over gentle algebras in terms of string combinatorics. We will start by examining the case of gentle algebras arising as Jacobian algebras associated to triangulations of bounded unpunctured surfaces. In this case we work in the associated cluster category. We will see that the Ptolemy relations of arcs always give rise to triangles in the cluster category, but that they do not necessarily give rise to extensions in the gentle algebra. In the general case of a gentle algebra, we will use its derived category to give an upper bound on the dimensions of the Ext spaces by developing a mapping cone calculus of homotopy strings. This is a report on joint work in progress with Ilke Canakci and David Pauksztello.

## June 10th 2015, 13:15 - 14:15 in 734

Title: Frobenius d-exact categories

Speaker: Gustavo Jasso

Abstract: Frobenius d-exact categories, as their name suggests, are higher analogs of Frobenius exact categories. They were introduced in order to enhance Geiss-Keller-Oppermann's (d+2)-angulated categories. In this talk I will introduce Frobenius d-exact categories, explain what is known about them and give examples of such categories occurring in nature.
## March 18th 2015, 13:15 - 14:15 in 734

Title: (Co)Silting, (Co)tilting, t-structures and derived equivalences

Speaker: Jorge Vitoria

Abstract: While derived equivalences between categories of modules over a ring correspond to compact tilting complexes, little is known about derived equivalences between more general abelian categories. With a suitable notion of a large (i.e., not necessarily compact) tilting object, one can establish derived equivalences between certain abelian categories. For this purpose, rather than studying the endomorphism ring of such an object, one should focus on the heart of its associated t-structure. Analogous equivalences can be obtained from large cotilting objects. In this talk, we explain this point of view on tilting theory, starting from the more general concept of large silting and cosilting objects in the derived category of a Grothendieck category. This is a report on ongoing joint work with Chrysostomos Psaroudakis.

## February 18th 2015, 13:15 - 14:15 in 734

Title: Gorenstein-projective modules over trivial extension algebras

Speaker: Chrysostomos Psaroudakis

Abstract: Gorenstein-projective modules over any (non-commutative) ring were introduced by Enochs-Jenda, generalizing the notion of Gorenstein-dimension zero finitely generated modules over noetherian rings due to Auslander. It is known that the homological study of the category of Gorenstein-projective modules reflects properties of the ring itself, and therefore it is an interesting problem to describe this class of modules. In this talk we show how to construct Gorenstein-projective modules over a class of trivial extension rings arising from Morita contexts. This is joint work with Nan Gao.

## February 11th 2015, 13:15 - 14:15 in 734

Title: Derived invariance of support varieties

Speaker: Øystein Ingmar Skartsæterhagen

Abstract: This talk is based on joint work with Julian Külshammer (Stuttgart) and Chrysostomos Psaroudakis.
Support varieties for modules over finite-dimensional algebras were introduced by Snashall and Solberg in 2004, as a generalisation of the theory of support varieties for modules over group algebras. More generally, one can define the support variety of any bounded complex of modules over a finite-dimensional algebra. If a finite-dimensional algebra A satisfies a set of conditions called (Fg), then many of the results known to hold for support varieties in the case of group algebras also hold for the algebra A. We show that the (Fg) condition is invariant under derived equivalence of algebras. Moreover, we show that a derived equivalence of standard type between two finite-dimensional algebras preserves support varieties of bounded complexes.

## February 4th 2015, 13:15 - 14:15 in 734

Title: The quasihereditary structure of a Schur-like algebra

Speaker: Teresa Conde

Abstract: Given an arbitrary algebra $A$, we may associate to it a special endomorphism algebra $R_A$, introduced by Auslander, and further studied by Smalø. The algebra $R_A$ contains $A$ as an idempotent subalgebra and is quasihereditary with respect to a heredity chain constructed by Dlab and Ringel. In this talk we will discuss the nice properties of $R_A$ that stem from this heredity chain.

## January 28th 2015, 13:15 - 14:15 in 734

Title: An intersection-dimension formula from decorated marked surfaces

Speaker: Yu Zhou

Abstract: This is a joint work with Yu Qiu. For a triangulated decorated marked surface S, there is an associated differential graded algebra whose finite dimensional derived category D is a 3-Calabi-Yau triangulated category. Under a bijection between closed arcs in S and spherical objects in D, we give a formula connecting intersection numbers between closed arcs and dimensions of graded morphism spaces between the corresponding objects.
## January 21st 2015, 13:15 - 14:15 in 734

Title: Periodicity and weighted surface algebras

Speaker: Karin Erdmann

Abstract: This talk will discuss some results on selfinjective algebras for which all simple modules are $\Omega$-periodic. In particular we describe a new class of algebras constructed from triangulations of surfaces. (Joint work with A. Skowronski.)

## January 14th 2015, 13:15 - 14:15 in 734

Title: Recent developments: n-angulated categories

Speaker: Petter Andreas Bergh

Abstract: In a paper appearing last year, Geiss, Keller and Oppermann introduced higher analogues of triangulated categories, called n-angulated categories. They appear for instance when one studies certain cluster tilting subcategories of triangulated categories. In this talk, we give an overview of some of the recent developments. This is joint work with Marius Thaule.
## Fidelity versus Clarity

Thinking about yesterday’s post, I was struck with an idea that may be obvious to many readers, and has doubtless been well-explored, but it was new to me (or I had forgotten it), so here I go, writing to help me think and remember:

The post touched on the notion that communication is an important part of data science, and that simplicity aids in communication. Furthermore, simplification is part of modelmaking. That is, we look at unruly data with a purpose: to understand some phenomenon or to answer a question. And often, the next step is to communicate that understanding or answer to a client, be it the person who is paying us or just ourselves.

“Communicating the understanding” means, essentially, encapsulating what we have found out so that we don’t have to go through the entire process again. So we might boil the data down and make a really cool, elegant visualization. We hold onto that graphic, and carry it with us mentally in order to understand the underlying phenomenon, for example, that graph of mean height by sex and age in order to have an internal idea—a model—for sex differences in human growth.

But every model leaves something out. In this case, we don’t see the spread in heights at each age, and we don’t see the overlap between females and males. So we could go further, and include more data in the graph, but eventually we would get a graph that was so unwieldy that we couldn’t use it to maintain that same ease of understanding. It would require more study every time we needed it. Of course, the appropriate level of detail depends on the context, the stakes, and the audience.

So there’s a tradeoff. As we make our analysis more complex, it becomes more faithful to the original data and to the world, but it also becomes harder to understand. Which suggests this graphic:

## A Calculus Rant (with stats at the end)

Let’s look at a simple optimization problem.
Bear with me, because the point is not the problem itself, but in what we have to know and do in order to solve it. Here we go:

Suppose you have a string 12 cm long. You form it into the shape of a rectangle. What shape gives you the maximum area?

Traditionally, how do we expect students to solve this in a calculus class? Here is one of several approaches, in excruciating detail:

Continue reading A Calculus Rant (with stats at the end)

## Chord Star in the Classroom

A million thanks to Zoya Voskoboynikov and her two sections of “regular” calculus at Lick for letting me come litter their otherwise pure math class with actual data. Of course, it was after the AP exam, and these are last-quarter seniors, so my being there didn’t interfere with any learning they needed to get through.

It worked great. It had what I most wanted: the aha experience of arriving at the destination by another route. Fortunately (and unsurprisingly), none of these successful math students remembered the theorem from the geometry class they took as frosh.

### What we did

1. I set up the problem and had them predict, informally, what the function would look like. The main purpose of this is to orient students to what we’ll be measuring and to the idea that if you measure two quantities, you can see their relationship in a graph.

Continue reading Chord Star in the Classroom

## Beating the Modeling Drum

Hoping desperately it’s not also a dead horse…

We just did a three-post sequence about “Chord Stars,” finishing up with how we could use insights from data to find radii of curvature remotely, that is, without ever finding the center of the circle. There’s a lot to discuss about that process; this post is part of that discussion. In particular, it’s an interesting example of modeling.
Quite a while ago I was worrying about the definition of modeling, not simply to get it “right”—many people model in different ways—but rather to try to identify things that we were pretty sure demonstrated modeling. Part of my anxiety, as the Core Standards lumber into classrooms, is that people will carelessly define modeling as “real-world” (or something equally weak) and we will lose a great opportunity to improve math education.

I often think of modeling in terms of using functions to model data. That’s partly because some of the coolest, most wonderful math experiences I’ve had have revolved around finding a function that was a good approximation to data. The process of measurement, improving those measurements, finding a suitable function, getting insight about the function as I wrangled it, and getting insight into the situation and the data from the function—all that together is an intoxicating cocktail of mathy-worldy wonderfulness.

But it’s not all there is to modeling, so I want to pause to point out another modeling genre (one of the ones I listed in this old post) that just appeared in Chord Star 3, namely, modeling real-world stuff with geometrical objects. In fact, here are a curb with tools, and the relevant part of a Sketchpad sketch: They clearly resemble each other, but I want to make two observations:

Continue reading Beating the Modeling Drum

## Chord Star 3: Remote Radii

Suppose you find some big curved thing out in the world. Some things are curved more tightly than others. But how much more? How can we put a number on how tightly curved something is? One way is to figure out the radius of curvature. The smaller the radius, the tighter the curve.

(Would you tell students this at the beginning? Of course not. But I can’t describe how this can work without giving things away. So consider this a report on my own investigation.)

Let’s apply what we learned two posts ago.
To review, we found out that if you pick a point inside a circle, and run a chord through it, the point divides the chord into two segments. The lengths of those segments are inversely proportional, that is, their product is a constant—it’s the same no matter which chord you pick. Then, last time, we saw how that product varies with the point’s distance from the center.

Let’s see how we can use this to measure radii of curves out in the world. The cool thing is that we can do this remotely. Unlike most radii in school geometry, we can figure out the radius of curvature without ever finding the center of the circle. The picture above is a hint. If that’s enough for you, don’t read further! Go do it!

Continue reading Chord Star 3: Remote Radii

## Chord Star 2: Choosing different points

Last time we saw how you could make a “chord star” by picking a point inside a circle and drawing chords through that point. Then we measured the two lengths of the partial chords (let’s call them $L_1$ and $L_2$) and plotted them against one another. We got a rectangular hyperbola, suggesting (or confirming if we remembered the geometry) that $L_1 L_2 = k$, some constant. But we asked, “what effect does your choice of point have on the graph and the data?”

So of course we’ll take an empirical approach and try it. If you have a classroom full of students, and they used the same-sized circle and picked their own points, you could immediately compare the points they chose to the functions they generated. Or you could do it as an individual. The photo shows what this might look like, and here is a detail:

Now we’ll put the data in a table, but this time,

• In addition to L1 and L2, we’ll record R, the distance from the center to the point. It may not be obvious to students at first that all points the same distance from the center (or the edge) will give the same data, but I’ll assume we get that.
• We’ll double the data by recording the data in the reverse order as well. It makes the graph look better.

Here’s the graph, coded by distance (in cm) of the point from the center.

## Chord Star: Another Geometry-Function-Modeling Thing

Last time I wrote about a super-simple geometry situation and how we could turn it into an activity that connected it to linear functions. What does it take to turn something from geometry into a function? This is an interesting question; in my explorations here I’ve found it helpful to look for relationships. And what I mean by that is, where do you have two quantities (in geometry, often distances, but it could be angles or areas or…) where one varies when you change the other.

So one strategy is, think of some theorem or principle, and see if you can find the relationship. To that end, remember teaching geometry and that cool theorem where if you have two chords that cross, the products are the same? That’s where this comes from. Oddly, it took a while to figure out what to plot against what to get a revealing function, but here we go.

Make a circle. Pick a point not near the center, but not too close to the circle itself. Draw a chord through that point. Measure the two segments. Call them $L_1$ and $L_2$. Or even x and y. Record the data.

Continue reading Chord Star: Another Geometry-Function-Modeling Thing
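The identity these posts keep circling is the classical "power of a point"; the posts themselves stop short of the formulas, so the following is a sketch of the standard geometry, not a quote from them. For a point $P$ at distance $d$ from the center of a circle of radius $r$, any chord through $P$ is cut into segments of lengths $L_1$ and $L_2$ with

```latex
% Intersecting-chords (power of a point) identity: the product is the
% same for every chord through P, and it encodes the distance d to the
% (possibly unreachable) center.
L_1 L_2 = r^2 - d^2 .

% Remote radius: lay a chord of half-length c across the curve and
% measure the sagitta s (the bulge from chord to arc). The sagitta line,
% extended, is a diameter, so its two segments are s and 2r - s, giving
c^2 = s\,(2r - s)
\quad\Longrightarrow\quad
r = \frac{c^2 + s^2}{2s} .
```

This is one way the "remote" measurement can work: the radius of a curb or tank comes out of two tape-measure readings, with no access to the center.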
# Proof verification: If $G$ is a finite abelian group and $n\mid|G|$ then $G$ has a subgroup of order $n$.

I have been trying to do this exercise and found two answers:

- Let $G$ be an abelian group of order $m$. If $n$ divides $m$, prove that $G$ has a subgroup of order $n$.
- Showing that a finite abelian group has a subgroup of order m for each divisor m of n

I think I found a clearer way of doing that, and I would like to share it and have it "peer-reviewed".

Lemma: Let $G$ be a finite abelian group of order $m$. Then $\forall g\in G$, $g^m=1$.

Let $G$ be a finite abelian group of order $m$, such that $n\mid m$. Let $g\in G$ be an element of $G$ with (finite) order $O(g)=r$. Since $g\in G$, then $\langle g\rangle\le G$. By Lagrange's Theorem, then $r\mid m$; then let $m=kr$ for a certain $k$. Thus:
$$g^m=g^{kr}=(g^r)^k=(g^k)^r=1^r=1$$

Proposition: Let $G$ be a finite abelian group of order $m$. If $n\mid m$, then $G$ has a subgroup of order $n$.

If $n\mid m$, let $m=kn$ for a certain $k$. Then for every $g\in G$ (with $g \ne e$, $e$ the identity element of $G$):
$$g^m=g^{kn}=1\implies (g^k)^n=1$$
Thus with $h=g^k$, we have $h^n=1$; so the subgroup generated by $h=g^k$ has order $n$.

All you've shown is that the subgroup generated by $h$ has an order that divides $n$, not that it is $n$. For instance, if you had (by mistake) picked the identity element $e$ as $g$ in your proposition, you'd have correctly shown that $e^n = e$, but that doesn't mean that $e$ has order $n$; in fact, it has order $1$.

• What if we excluded $e$? Maybe then I could reformulate it to give an inductive argument? – AspiringMathematician Apr 21 '17 at 21:35
• That doesn't work either. For in $Z/32Z$, for instance, you might have $n= 16$, and pick, accidentally, the element $24$. You'd then know that $16 \cdot 24= 0$, but that doesn't make $24$ have order $16$ -- it in fact has order $4$. You've done something good here, but it'll take more than this to produce a proof.
Bandaids like "I didn't mean the identity" won't help. And that's (part of) why the usual proof doesn't look like the one you've come up with. – John Hughes Apr 21 '17 at 21:37 • I meant about inducting the argument on the order of the subgroup generated by $h$, but I can see then that my proof would become about the same as the other ones (maybe rephrased, but still). Thanks for your response! – AspiringMathematician Apr 21 '17 at 21:44
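As a quick numeric sanity check on the counterexample in that comment (this check is mine, not part of the original exchange, and the helper name `order_in_Zn` is illustrative), one can compute element orders in the additive group $Z/nZ$ directly:

```cpp
// Order of k in the additive group Z/nZ: the smallest m > 0 with
// m*k ≡ 0 (mod n). Found by repeatedly adding k until we return to 0.
long order_in_Zn(long n, long k) {
    long sum = 0, m = 0;
    do { sum = (sum + k) % n; ++m; } while (sum != 0);
    return m;
}
```

For n = 32 and k = 24 this returns 4: the relation 16·24 ≡ 0 (mod 32) does hold, yet 24 has order 4, not 16 — exactly the gap in the proposed proof.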
## Formal definition

Main article: Characterizations of the exponential function

Figure: The exponential function (in blue), and the sum of the first n + 1 terms of its power series (in red).

The real exponential function $\exp \colon \mathbb{R} \to \mathbb{R}$ can be characterized in a variety of equivalent ways. It is commonly defined by the following power series:[6][7]
$$\exp x := \sum_{k=0}^{\infty} \frac{x^{k}}{k!} = 1 + x + \frac{x^{2}}{2} + \frac{x^{3}}{6} + \frac{x^{4}}{24} + \cdots$$

Since the radius of convergence of this power series is infinite, this definition is, in fact, applicable to all complex numbers z ∈ ℂ (see § Complex plane for the extension of $\exp x$ to the complex plane). The constant e can then be defined as $e = \exp 1 = \sum_{k=0}^{\infty} (1/k!)$.

Term-by-term differentiation of this power series reveals that
$$\frac{d}{dx} \exp x = \exp x$$
for all real x, leading to another common characterization of $\exp x$ as the unique solution of the differential equation
$$y'(x) = y(x)$$
satisfying the initial condition $y(0) = 1$.

Based on this characterization, the chain rule shows that its inverse function, the natural logarithm, satisfies $\frac{d}{dy} \log_{e} y = 1/y$ for $y > 0$, or
$$\log_{e} y = \int_{1}^{y} \frac{1}{t}\,dt.$$

This relationship leads to a less common definition of the real exponential function $\exp x$ as the solution $y$ to the equation
$$x = \int_{1}^{y} \frac{1}{t}\,dt.$$

By way of the binomial theorem and the power series definition, the exponential function can also be defined as the following limit:[8][7]
$$\exp x = \lim_{n\to\infty} \left(1 + \frac{x}{n}\right)^{n}.$$

## Overview

Figure: The red curve is the exponential function. The black horizontal lines show where it crosses the green vertical lines.

The exponential function arises whenever a quantity grows or decays at a rate proportional to its current value. One such situation is continuously compounded interest, and in fact it was this observation that led Jacob Bernoulli in 1683[9] to the number
$$\lim_{n\to\infty} \left(1 + \frac{1}{n}\right)^{n},$$
now known as e. Later, in 1697, Johann Bernoulli studied the calculus of the exponential function.[9]

If a principal amount of 1 earns interest at an annual rate of x compounded monthly, then the interest earned each month is x/12 times the current value, so each month the total value is multiplied by (1 + x/12), and the value at the end of the year is (1 + x/12)^12. If instead interest is compounded daily, this becomes (1 + x/365)^365. Letting the number of time intervals per year grow without bound leads to the limit definition of the exponential function,
$$\exp x = \lim_{n\to\infty} \left(1 + \frac{x}{n}\right)^{n},$$
first given by Leonhard Euler.[8] This is one of a number of characterizations of the exponential function; others involve series or differential equations.

From any of these definitions it can be shown that the exponential function obeys the basic exponentiation identity,
$$\exp(x + y) = \exp x \exp y,$$
which justifies the notation e^x for exp x.

The derivative (rate of change) of the exponential function is the exponential function itself. More generally, a function with a rate of change proportional to the function itself (rather than equal to it) is expressible in terms of the exponential function. This function property leads to exponential growth or exponential decay.
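The series and limit characterizations above are easy to compare numerically. The following is an illustrative sketch (function names, term counts, and the choice of n are arbitrary), with both approximations checked against the library `exp`:

```cpp
#include <cmath>

// Truncated power series: sum of x^k / k! for k = 0 .. terms-1.
// Each term is built incrementally from the previous one.
double exp_series(double x, int terms) {
    double sum = 1.0, term = 1.0;
    for (int k = 1; k < terms; ++k) {
        term *= x / k;   // term is now x^k / k!
        sum += term;
    }
    return sum;
}

// Compound-interest limit: (1 + x/n)^n for a large fixed n.
// Converges to exp(x) as n grows, with error on the order of x^2 e^x / (2n).
double exp_limit(double x, long n) {
    return std::pow(1.0 + x / static_cast<double>(n),
                    static_cast<double>(n));
}
```

The series converges very fast (twenty terms already give exp(1) to near machine precision), while the limit form needs n around a million just to reach five or six correct digits — which is why the series, not the limit, is the usual computational definition.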
The exponential function extends to an entire function on the complex plane. Euler's formula relates its values at purely imaginary arguments to trigonometric functions. The exponential function also has analogues for which the argument is a matrix, or even an element of a Banach algebra or a Lie algebra.

## Derivatives and differential equations

The derivative of the exponential function is equal to the value of the function. From any point P on the curve (blue), let a tangent line (red), and a vertical line (green) with height h be drawn, forming a right triangle with a base b on the x-axis. Since the slope of the red tangent line (the derivative) at P is equal to the ratio of the triangle's height to the triangle's base (rise over run), and the derivative is equal to the value of the function, h must be equal to the ratio of h to b. Therefore, the base b must always be 1.

The importance of the exponential function in mathematics and the sciences stems mainly from its property as the unique function which is equal to its derivative and is equal to 1 when x = 0. That is,

$$\frac{d}{dx}e^x = e^x \quad \text{and} \quad e^0 = 1.$$

Functions of the form $ce^x$ for constant c are the only functions that are equal to their derivative (by the Picard–Lindelöf theorem). Other ways of saying the same thing include:

- The slope of the graph at any point is the height of the function at that point.
- The rate of increase of the function at x is equal to the value of the function at x.
- The function solves the differential equation y′ = y.
- exp is a fixed point of derivative as a functional.

If a variable's growth or decay rate is proportional to its size, as is the case in unlimited population growth (see Malthusian catastrophe), continuously compounded interest, or radioactive decay, then the variable can be written as a constant times an exponential function of time. Explicitly, for any real constant k, a function f: R → R satisfies f′ = kf if and only if $f(x) = ce^{kx}$ for some constant c.
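A quick numerical sanity check of this equivalence: integrate y′ = ky with explicit Euler steps and compare against ce^{kt} at t = 1 (a sketch; k, c, the step size, and the tolerance are all illustrative choices):

```python
import math

k, c = -0.7, 2.0            # rate and initial value (illustrative)
dt = 1e-5                   # step size; smaller means more accurate
steps = int(round(1.0 / dt))
y = c                       # initial condition y(0) = c
for _ in range(steps):
    y += dt * k * y         # Euler step for y' = k*y
# After integrating to t = 1, y should be close to c * e^k.
assert abs(y - c * math.exp(k)) < 1e-3
```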
The constant k is called the decay constant, disintegration constant,[10] rate constant,[11] or transformation constant.[12]

Furthermore, for any differentiable function f(x), we find, by the chain rule:

$$\frac{d}{dx}e^{f(x)} = f'(x)\,e^{f(x)}.$$

## Continued fractions for e^x

A continued fraction for e^x can be obtained via an identity of Euler:

$$e^x = 1 + \cfrac{x}{1 - \cfrac{x}{x + 2 - \cfrac{2x}{x + 3 - \cfrac{3x}{x + 4 - \ddots}}}}$$

The following generalized continued fraction for e^z converges more quickly:[13]

$$e^z = 1 + \cfrac{2z}{2 - z + \cfrac{z^2}{6 + \cfrac{z^2}{10 + \cfrac{z^2}{14 + \ddots}}}}$$

or, by applying the substitution z = x/y:

$$e^{\frac{x}{y}} = 1 + \cfrac{2x}{2y - x + \cfrac{x^2}{6y + \cfrac{x^2}{10y + \cfrac{x^2}{14y + \ddots}}}}$$

with a special case for z = 2:

$$e^2 = 1 + \cfrac{4}{0 + \cfrac{2^2}{6 + \cfrac{2^2}{10 + \cfrac{2^2}{14 + \ddots}}}} = 7 + \cfrac{2}{5 + \cfrac{1}{7 + \cfrac{1}{9 + \cfrac{1}{11 + \ddots}}}}$$

This formula also converges, though more slowly, for z > 2. For example:

$$e^3 = 1 + \cfrac{6}{-1 + \cfrac{3^2}{6 + \cfrac{3^2}{10 + \cfrac{3^2}{14 + \ddots}}}} = 13 + \cfrac{54}{7 + \cfrac{9}{14 + \cfrac{9}{18 + \cfrac{9}{22 + \ddots}}}}$$

## Complex plane

Exponential function on the complex plane. The transition from dark to light colors shows that the magnitude of the exponential function is increasing to the right. The periodic horizontal bands indicate that the exponential function is periodic in the imaginary part of its argument.

As in the real case, the exponential function can be defined on the complex plane in several equivalent forms.
The most common definition of the complex exponential function parallels the power series definition for real arguments, where the real variable is replaced by a complex one:

$$\exp z := \sum_{k=0}^{\infty} \frac{z^k}{k!}$$

Alternatively, the complex exponential function may be defined by modelling the limit definition for real arguments, but with the real variable replaced by a complex one:

$$\exp z := \lim_{n\to\infty}\left(1 + \frac{z}{n}\right)^n$$

For the power series definition, term-wise multiplication of two copies of this power series in the Cauchy sense, permitted by Mertens' theorem, shows that the defining multiplicative property of exponential functions continues to hold for all complex arguments:

$$\exp(w+z) = \exp w \exp z \quad \text{for all } w, z \in \mathbb{C}.$$

The definition of the complex exponential function in turn leads to the appropriate definitions extending the trigonometric functions to complex arguments. In particular, when z = it (t real), the series definition yields the expansion

$$\exp it = \left(1 - \frac{t^2}{2!} + \frac{t^4}{4!} - \frac{t^6}{6!} + \cdots\right) + i\left(t - \frac{t^3}{3!} + \frac{t^5}{5!} - \frac{t^7}{7!} + \cdots\right).$$

In this expansion, the rearrangement of the terms into real and imaginary parts is justified by the absolute convergence of the series. The real and imaginary parts of the above expression in fact correspond to the series expansions of cos t and sin t, respectively.
This correspondence provides motivation for defining cosine and sine for all complex arguments in terms of $\exp(\pm iz)$ and the equivalent power series:[14]

$$\cos z := \frac{\exp iz + \exp(-iz)}{2} = \sum_{k=0}^{\infty} (-1)^k \frac{z^{2k}}{(2k)!}, \quad \text{and} \quad \sin z := \frac{\exp iz - \exp(-iz)}{2i} = \sum_{k=0}^{\infty} (-1)^k \frac{z^{2k+1}}{(2k+1)!} \quad \text{for all } z \in \mathbb{C}.$$

The functions exp, cos, and sin so defined have infinite radii of convergence by the ratio test and are therefore entire functions (i.e., holomorphic on $\mathbb{C}$). The range of the exponential function is $\mathbb{C} \setminus \{0\}$, while the ranges of the complex sine and cosine functions are both $\mathbb{C}$ in its entirety, in accord with Picard's theorem, which asserts that the range of a nonconstant entire function is either all of $\mathbb{C}$, or $\mathbb{C}$ excluding one lacunary value.

These definitions for the exponential and trigonometric functions lead trivially to Euler's formula:

$$\exp iz = \cos z + i \sin z \quad \text{for all } z \in \mathbb{C}.$$

We could alternatively define the complex exponential function based on this relationship.
If z = x + iy, where x and y are both real, then we could define its exponential as

$$\exp z = \exp(x+iy) := (\exp x)(\cos y + i \sin y)$$

where exp, cos, and sin on the right-hand side of the definition sign are to be interpreted as functions of a real variable, previously defined by other means.[15]

For $t \in \mathbb{R}$, the relationship $\overline{\exp it} = \exp(-it)$ holds, so that $|\exp it| = 1$ for real t, and $t \mapsto \exp it$ maps the real line (mod 2π) to the unit circle in the complex plane. Moreover, going from t = 0 to t = t₀, the curve defined by $\gamma(t) = \exp it$ traces a segment of the unit circle of length

$$\int_0^{t_0} |\gamma'(t)|\,dt = \int_0^{t_0} |i \exp it|\,dt = t_0,$$

starting from z = 1 in the complex plane and going counterclockwise. Based on these observations and the fact that the measure of an angle in radians is the arc length on the unit circle subtended by the angle, it is easy to see that, restricted to real arguments, the sine and cosine functions as defined above coincide with the sine and cosine functions as introduced in elementary mathematics via geometric notions.

The complex exponential function is periodic with period 2πi, and $\exp(z + 2\pi i k) = \exp z$ holds for all $z \in \mathbb{C}$, $k \in \mathbb{Z}$.

When its domain is extended from the real line to the complex plane, the exponential function retains the following properties:

$$e^{z+w} = e^z e^w, \qquad e^0 = 1, \qquad e^z \neq 0, \qquad \frac{d}{dz}e^z = e^z, \qquad \left(e^z\right)^n = e^{nz} \ (n \in \mathbb{Z}) \qquad \text{for all } w, z \in \mathbb{C}.$$
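These complex-plane identities can all be exercised with Python's standard `cmath` module; the sample arguments below are arbitrary:

```python
import cmath
import math

w, z = 0.3 + 1.1j, -0.8 + 2.5j

# Multiplicative property: exp(w+z) = exp(w) exp(z)
assert abs(cmath.exp(w + z) - cmath.exp(w) * cmath.exp(z)) < 1e-12

# cos and sin recovered from exp(+iz) and exp(-iz)
cos_z = (cmath.exp(1j * z) + cmath.exp(-1j * z)) / 2
sin_z = (cmath.exp(1j * z) - cmath.exp(-1j * z)) / 2j
assert abs(cos_z - cmath.cos(z)) < 1e-12
assert abs(sin_z - cmath.sin(z)) < 1e-12

# Euler's formula at a real argument, and |exp(it)| = 1
t = 1.2
assert abs(cmath.exp(1j * t) - complex(math.cos(t), math.sin(t))) < 1e-14
assert abs(abs(cmath.exp(1j * t)) - 1.0) < 1e-15

# Periodicity with period 2*pi*i
assert abs(cmath.exp(z + 2j * math.pi) - cmath.exp(z)) < 1e-12
```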
Extending the natural logarithm to complex arguments yields the complex logarithm log z, which is a multivalued function. We can then define a more general exponentiation:

$$z^w = e^{w \log z}$$

for all complex numbers z and w. This is also a multivalued function, even when z is real. This distinction is problematic, as the multivalued functions log z and z^w are easily confused with their single-valued equivalents when substituting a real number for z. The rule about multiplying exponents for the case of positive real numbers must be modified in a multivalued context:

$$(e^z)^w \neq e^{zw}; \quad \text{rather,} \quad (e^z)^w = e^{(z + 2\pi i n)w}, \ \text{multivalued over integers } n.$$

See failure of power and logarithm identities for more about problems with combining powers.

The exponential function maps any line in the complex plane to a logarithmic spiral in the complex plane with the center at the origin. Two special cases exist: when the original line is parallel to the real axis, the resulting spiral never closes in on itself; when the original line is parallel to the imaginary axis, the resulting spiral is a circle of some radius.

3D plots of the real part, imaginary part, and modulus of the exponential function: z = Re(e^{x+iy}), z = Im(e^{x+iy}), z = |e^{x+iy}|.

Considering the complex exponential function as a function involving four real variables:

$$v + iw = \exp(x + iy)$$

the graph of the exponential function is a two-dimensional surface curving through four dimensions. Starting with a color-coded portion of the xy domain, the following are depictions of the graph as variously projected into two or three dimensions.
Graphs of the complex exponential function

Checkerboard key: x > 0: green; x < 0: red; y > 0: yellow; y < 0: blue.

- Projection onto the range complex plane (v/w). Compare to the next, perspective picture.
- Projection into the x, v, and w dimensions, producing a flared horn or funnel shape (envisioned as 2-D perspective image).
- Projection into the y, v, and w dimensions, producing a spiral shape. (y range extended to ±2π, again as 2-D perspective image.)

The second image shows how the domain complex plane is mapped into the range complex plane:

- zero is mapped to 1
- the real x axis is mapped to the positive real v axis
- the imaginary y axis is wrapped around the unit circle at a constant angular rate
- values with negative real parts are mapped inside the unit circle
- values with positive real parts are mapped outside of the unit circle
- values with a constant real part are mapped to circles centered at zero
- values with a constant imaginary part are mapped to rays extending from zero

The third and fourth images show how the graph in the second image extends into one of the other two dimensions not shown in the second image. The third image shows the graph extended along the real x axis. It shows the graph is a surface of revolution about the x axis of the graph of the real exponential function, producing a horn or funnel shape. The fourth image shows the graph extended along the imaginary y axis. It shows that the graph's surface for positive and negative y values doesn't really meet along the negative real v axis, but instead forms a spiral surface about the y axis. Because its y values have been extended to ±2π, this image also better depicts the 2π periodicity in the imaginary y value.
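The multivaluedness discussed above shows up immediately in floating-point practice: single-valued library functions pick the principal branch, and the identity (e^z)^w = e^{zw} then fails for non-integer w. A sketch, with z = 2πi chosen so that the two sides land on different branches:

```python
import cmath
import math

z, w = 2j * math.pi, 0.5
lhs = cmath.exp(z) ** w        # exp(2*pi*i) = 1, and the principal square root of 1 is 1
rhs = cmath.exp(w * z)         # exp(pi*i) = -1
assert abs(lhs - 1) < 1e-9
assert abs(rhs + 1) < 1e-9
assert abs(lhs - rhs) > 1.9    # (e^z)^w != e^{zw} on the principal branch
```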
## Computation of a^b where both a and b are complex

Main article: Exponentiation

Complex exponentiation a^b can be defined by converting a to polar coordinates and using the identity $(e^{\ln a})^b = a^b$:

$$a^b = \left(re^{\theta i}\right)^b = \left(e^{(\ln r) + \theta i}\right)^b = e^{\left((\ln r) + \theta i\right) b}$$

However, when b is not an integer, this function is multivalued, because θ is not unique (see failure of power and logarithm identities).

## Matrices and Banach algebras

The power series definition of the exponential function makes sense for square matrices (for which the function is called the matrix exponential) and more generally in any unital Banach algebra B. In this setting, e^0 = 1, and e^x is invertible with inverse e^{−x} for any x in B. If xy = yx, then e^{x+y} = e^x e^y, but this identity can fail for noncommuting x and y.

Some alternative definitions lead to the same function. For instance, e^x can be defined as

$$\lim_{n\to\infty}\left(1 + \frac{x}{n}\right)^n.$$

Or e^x can be defined as f_x(1), where f_x: R → B is the solution to the differential equation df_x/dt(t) = x f_x(t) with initial condition f_x(0) = 1; it follows that f_x(t) = e^{tx} for every t in R.

## Lie algebras

Given a Lie group G and its associated Lie algebra $\mathfrak{g}$, the exponential map is a map $\mathfrak{g} \to G$ satisfying similar properties. In fact, since R is the Lie algebra of the Lie group of all positive real numbers under multiplication, the ordinary exponential function for real arguments is a special case of the Lie algebra situation. Similarly, since the Lie group GL(n,R) of invertible n × n matrices has as Lie algebra M(n,R), the space of all n × n matrices, the exponential function for square matrices is a special case of the Lie algebra exponential map.
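As a concrete illustration of the matrix case, here is a minimal power-series matrix exponential for 2×2 matrices (plain Python lists; 30 terms is an arbitrary but ample truncation at this scale), applied to a noncommuting pair for which e^{X+Y} differs from e^X e^Y:

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_exp(A, terms=30):
    """exp(A) = sum_k A^k / k!, truncated after `terms` terms."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # A^0 / 0! = identity
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = [[t / k for t in row] for row in mat_mul(term, A)]  # A^k/k!
        result = mat_add(result, term)
    return result

X = [[0.0, 1.0], [0.0, 0.0]]   # X and Y do not commute: XY != YX
Y = [[0.0, 0.0], [1.0, 0.0]]

E_sum  = mat_exp(mat_add(X, Y))            # exp(X + Y)
E_prod = mat_mul(mat_exp(X), mat_exp(Y))   # exp(X) exp(Y)

# exp(X + Y) = [[cosh 1, sinh 1], [sinh 1, cosh 1]], while
# exp(X) exp(Y) = [[2, 1], [1, 1]] -- the addition identity fails here.
assert abs(E_sum[0][0] - math.cosh(1.0)) < 1e-9
assert abs(E_prod[0][0] - 2.0) < 1e-9
assert abs(E_sum[0][0] - E_prod[0][0]) > 0.4
```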
The identity exp(x + y) = exp x exp y can fail for Lie algebra elements x and y that do not commute; the Baker–Campbell–Hausdorff formula supplies the necessary correction terms.

## Transcendency

The function e^z is not in C(z) (i.e., it is not the quotient of two polynomials with complex coefficients). For n distinct complex numbers $\{a_1, \ldots, a_n\}$, the set $\{e^{a_1 z}, \ldots, e^{a_n z}\}$ is linearly independent over C(z). The function e^z is transcendental over C(z).

## Computation

When computing (an approximation of) the exponential function near the argument 0, the result will be close to 1, and computing the value of the difference $\exp x - 1$ with floating-point arithmetic may lead to the loss of (possibly all) significant figures, producing a large calculation error, possibly even a meaningless result.

Following a proposal by William Kahan, it may thus be useful to have a dedicated routine, often called expm1, for computing e^x − 1 directly, bypassing computation of e^x. For example, if the exponential is computed by using its Taylor series

$$e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \cdots + \frac{x^n}{n!} + \cdots,$$

one may use the Taylor series of $e^x - 1$:

$$e^x - 1 = x + \frac{x^2}{2} + \frac{x^3}{6} + \cdots + \frac{x^n}{n!} + \cdots.$$

This was first implemented in 1979 in the Hewlett-Packard HP-41C calculator, and provided by several calculators,[16][17] operating systems (for example Berkeley UNIX 4.3BSD[18]), computer algebra systems, and programming languages (for example C99).[19]

In addition to base e, the IEEE 754-2008 standard defines similar exponential functions near 0 for base 2 and 10: $2^x - 1$ and $10^x - 1$.
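The cancellation is easy to demonstrate; Python's `math.expm1` is one of the dedicated routines mentioned above (the sample argument and thresholds are illustrative):

```python
import math

x = 1e-12
naive = math.exp(x) - 1.0    # subtraction cancels nearly all digits
accurate = math.expm1(x)     # dedicated routine, accurate near 0
series = x + x * x / 2       # leading Taylor terms of e^x - 1

# expm1 agrees with the series to full precision...
assert abs(accurate - series) / series < 1e-12
# ...while the naive form has lost many significant digits.
assert abs(naive - series) / series > 1e-6
```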
A similar approach has been used for the logarithm (see lnp1).[nb 3]

An identity in terms of the hyperbolic tangent,

$$\operatorname{expm1} x = \exp x - 1 = \frac{2\tanh(x/2)}{1 - \tanh(x/2)},$$

gives a high-precision value for small values of x on systems that do not implement expm1(x).

## See also

- Carlitz exponential, a characteristic p analogue
- Double exponential function – Exponential function of an exponential function
- Exponential field – Mathematical field equipped with an operation satisfying the functional equation of the exponential
- Gaussian function
- Half-exponential function, a compositional square root of an exponential function
- List of exponential topics
- List of integrals of exponential functions
- Mittag-Leffler function, a generalization of the exponential function
- Padé table for exponential function – Padé approximation of exponential function by a fraction of polynomial functions
- Tetration – Repeated or iterated exponentiation

## Notes

^ In pure mathematics, the notation log x generally refers to the natural logarithm of x or a logarithm in general if the base is immaterial.
^ The notation ln x is the ISO standard and is prevalent in the natural sciences and secondary education (US). However, some mathematicians (e.g., Paul Halmos) have criticized this notation and prefer to use log x for the natural logarithm of x.
^ A similar approach to reduce round-off errors of calculations for certain input values of trigonometric functions consists of using the less common trigonometric functions versine, vercosine, coversine, covercosine, haversine, havercosine, hacoversine, hacovercosine, exsecant and excosecant.

## References

^ Goldstein, Larry Joel; Lay, David C.; Schneider, David I.; Asmar, Nakhle H. (2006). Brief Calculus and Its Applications (11th ed.). Prentice–Hall. ISBN 978-0-13-191965-5.
^ Courant; Robbins (1996). Stewart (ed.). What is Mathematics?
An Elementary Approach to Ideas and Methods (2nd revised ed.). Oxford University Press. p. 448. "This natural exponential function is identical with its derivative. This is really the source of all the properties of the exponential function, and the basic reason for its importance in applications…"
^ "Exponential Function Reference". www.mathsisfun.com. Retrieved 2020-08-28.
^ Converse, Henry Augustus; Durell, Fletcher (1911). Plane and Spherical Trigonometry. Durell's Mathematical Series. C. E. Merrill Company. p. 12. "Inverse Use of a Table of Logarithms; that is, given a logarithm, to find the number corresponding to it (called its antilogarithm)…"
^ a b Rudin, Walter (1987). Real and Complex Analysis (3rd ed.). New York: McGraw-Hill. p. 1. ISBN 978-0-07-054234-1.
^ a b Weisstein, Eric W. "Exponential Function". mathworld.wolfram.com. Retrieved 2020-08-28.
^ a b Maor, Eli. e: The Story of a Number. p. 156.
^ a b O'Connor, John J.; Robertson, Edmund F. (September 2001). "The number e". School of Mathematics and Statistics, University of St Andrews, Scotland. Retrieved 2011-06-13.
^ Serway (1989, p. 384)
^ Simmons (1972, p. 15)
^ McGraw-Hill (2007)
^ Lorentzen, L.; Waadeland, H. (2008). "A.2.2 The exponential function". Continued Fractions. Atlantis Studies in Mathematics. 1. p. 268. doi:10.2991/978-94-91216-37-4. ISBN 978-94-91216-37-4.
^ Rudin, Walter (1976). Principles of Mathematical Analysis. New York: McGraw-Hill. p. 182. ISBN 978-0-07-054235-8.
^ Apostol, Tom M. (1974). Mathematical Analysis (2nd ed.). Reading, Mass.: Addison-Wesley. p. 19. ISBN 978-0-201-00288-1.
^ HP 48G Series – Advanced User's Reference Manual (AUR) (4th ed.). Hewlett-Packard. December 1994 [1993]. HP 00048-90136, 0-88698-01574-2. Retrieved 2015-09-06.
^ HP 50g / 49g+ / 48gII Graphing Calculator Advanced User's Reference Manual (AUR) (2nd ed.). Hewlett-Packard. 2009-07-14 [2005]. HP F2228-90010.
Retrieved 2015-10-10.
^ Beebe, Nelson H. F. (2017-08-22). "Chapter 10.2. Exponential near zero". The Mathematical-Function Computation Handbook – Programming Using the MathCW Portable Software Library (1st ed.). Salt Lake City, UT, USA: Springer International Publishing AG. pp. 273–282. doi:10.1007/978-3-319-64110-2. ISBN 978-3-319-64109-6. LCCN 2017947446. "Berkeley UNIX 4.3BSD introduced the expm1() function in 1987."
^ Beebe, Nelson H. F. (2002-07-09). "Computation of expm1 = exp(x)−1" (PDF). 1.00. Salt Lake City, Utah, USA: Department of Mathematics, Center for Scientific Computing, University of Utah. Retrieved 2015-11-02.

McGraw-Hill Encyclopedia of Science & Technology (10th ed.). New York: McGraw-Hill. 2007. ISBN 978-0-07-144143-8.
Serway, Raymond A.; Moses, Clement J.; Moyer, Curt A. (1989). Modern Physics. Fort Worth: Harcourt Brace Jovanovich. ISBN 0-03-004844-3.
Simmons, George F. (1972). Differential Equations with Applications and Historical Notes. New York: McGraw-Hill. LCCN 75173716.
"Exponential function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
# Rules of differentiation – constant, power, constant multiple, sum rule

When we've talked about derivatives so far, we've talked about them in a very formal and time-consuming way. For instance, the instantaneous rate of change we can think of as the slope of the tangent line. And one way to write that is: the slope of the tangent line is equal to the limit as x approaches a of f(x) minus f(a), divided by x minus a. Or we could write it as the limit as h approaches zero of f(a plus h) minus f(a), divided by h. We could also call that f prime of a, the first derivative at a.

We've also talked about the derivative as being a function. So instead of at a point a, we can say f prime of x is equal to the limit as h approaches zero of f(x plus h) minus f(x), all over h. Well, if we had to do this every single time we found a derivative… I love math, but I don't love math this much. So we are going to come up with some general rules that you'll need to memorize.

Let's first talk about a constant. That is, where f of x equals a number c, and c is going to be a real number. Remember, the derivative is the rate of change of the function. If I have a constant function, there is no rate of change. Therefore, the rate of change is equal to zero. This leads us to our first rule, the Constant Rule: the derivative with respect to x of c equals zero if c is a real number.
The next rule we're going to talk about is the Power Rule. I'm not going to do a lot of proofs, but I am going to do this first one. The Power Rule says that if I take the derivative of a variable x raised to the nth power, that is equal to n times x to the (n minus 1) power. Let's prove it.

The first thing I am going to do is take my formal definition and start off with taking the derivative of x, that is, x to the first power. So if I let n equal 1, that means f of x is equal to x to the first power, or just x. And if I use my definition of f prime of a, then I find that I have the limit as x approaches a of x minus a, divided by x minus a, which is simply equal to 1. Which, by the way, does match my power rule. Let's check that: n is one, so the derivative of x to the first power is 1 times x to the (1 minus 1), which is x to the zero power. Anything to the zero power is 1, therefore my power rule does confirm that this would, in fact, equal 1.

So now let's say n is greater than or equal to 2, and f of x is equal to x to the n power. Now we can say the first derivative at a is equal to the limit as x approaches a of x to the n minus a to the n, divided by x minus a. There is a factoring rule that lets us factor x to the n minus a to the n, and once we do, the x minus a divides out of the numerator and the denominator. Once I have it in that form, I can go ahead and directly substitute a for x, because what remains is a polynomial (remember our limit laws). If I multiply this all out, I have a bunch of terms that are each a to the (n minus 1); in fact, there are exactly n of them. And so I get n times a to the (n minus 1).
If I made this a function, I would find that f prime of x is equal to n times x to the (n minus 1), which is what I have for the power rule. This is the only rule I am going to prove; you don't have to recreate the proof, but you do have to be able to use the power rule.

Notice that this power rule actually covers the constant case. That is, if I had a constant c, it would be c times x to the zero. By the power rule, that would be zero times x to the (zero minus 1), but anything times zero is zero. So the constant rule is really rolled into the power rule.

By the power rule, the derivative with respect to x of x to the fifth is simply equal to 5 times x to the (5 minus 1), or 5 times x to the fourth. For the second example, it would be really tempting to say the derivative with respect to x of 3 to the sixth is 6 times 3 to the fifth power, but of course that's not right, because 3 to the sixth is actually a constant. There's no x in there. So this is still equal to zero. Don't fall into that trap.

Our next rule is the Constant Multiple Rule. That is, if I take the derivative of a constant c times a function f of x, that is simply equal to c times the derivative of f of x. I've got two examples here. The first one is the derivative with respect to x of negative five sixths x to the tenth power. The first thing I will do is pull out the constant, so I have negative five sixths times the derivative with respect to x of x to the tenth. Now I use my power rule, and that gives me negative five sixths times ten times x to the ninth power. If I simplify, I get negative 25 over 3, times x to the ninth power.

For my second example, I'm again going to first use the constant multiple rule to pull out the one fifteenth. Notice now I'm taking the derivative with respect to t. It works the same way, whatever my independent variable is.
So, I've rewritten the square root of t as t to the one-half power, because those are the same thing. And although I didn't specify it earlier, the power rule holds when n is any real number. I didn't prove that, and I could, but I'm not going to. So, using that same power rule, I get one fifteenth times one half (that's my exponent) times t to the (one half minus 1) power; that is, one fifteenth times one half times t to the negative one-half power. Generally, if I started off with the information given in square root form, I am going to rewrite the answer with a square root too, and t to the negative one-half power is just equal to one over the square root of t. And that's my final answer.

The next rule is the Sum Rule. That is, the derivative with respect to x of f of x plus g of x is simply equal to the derivatives of the separate functions added together. I am warning you: do not assume that this is going to work with products, that is, with multiplication or division. Right now we are just talking about addition and subtraction, and that is very straightforward.

Let's do a quick example. The derivative with respect to x of all of this is equal to the derivative of the individual terms; again, this is by the sum rule. When you're taking derivatives, you won't have to write out each rule every time like we did with the limit rules. However, I want to spell them out as I am teaching so you can understand what I am doing, step by step. The next rule I use is the constant multiple rule, which allows me to pull the constants out. Finally, I use my power and constant rules, and that gives me my answer.

There's one more special derivative we're going to talk about, and that's the derivative of e to the x. What is e? e is an irrational number, 2.718 and a whole bunch of other digits.
And what makes e special is this: if I take e, raise it to the h power, subtract 1, divide by h, and let h approach zero, the limit is equal to the number 1. Here's a graph of e to the x, and the slope at zero is actually equal to one.

So where does this get us? The derivative of e to the x is equal to the limit as h approaches 0 of e to the x plus h, minus e to the x, divided by h. And again, that is by the definition of the derivative. So let's go ahead and rewrite my exponent as such. I see both of my terms have an e to the x in them, so I am going to factor that out. And now that I've factored out that e to the x, I realize the limit as h approaches zero doesn't affect the e to the x at all, so I can go ahead and pull that out. Well, I've already said that the limit as h approaches zero of e to the h minus 1 over h is equal to the number 1. So that means that the derivative of e to the x is simply e to the x. It's the BEST derivative out there. And again, I'll write it out: the derivative of e to the x is simply e to the x.

One final thing to talk about: I can take derivatives higher than just the first derivative. I could take, for instance, the second derivative. The second derivative of f of x is simply the derivative with respect to x of the first derivative of f of x. And I can extend that to any value of n. The one thing I want to point out is that when I talk about the nth derivative, I put that n in parentheses, so that f with a parenthesized n is not f to the nth power; it's the nth derivative of f. And that just equals the derivative of the n minus first derivative of f. And that's our first round of rules of differentiation.
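Both facts used above, that (e^h - 1)/h tends to 1 and that e^x is its own derivative, can be checked numerically. A small sketch (not from the lecture), again using a central difference:

```python
import math

# (e^h - 1)/h approaches 1 as h approaches 0
h = 1e-6
assert abs((math.exp(h) - 1) / h - 1.0) < 1e-5

# d/dx e^x = e^x, checked with a central difference at x = 1
deriv = (math.exp(1 + h) - math.exp(1 - h)) / (2 * h)
assert abs(deriv - math.exp(1)) < 1e-6
```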
# Convert an Equation Editor Object back to plain text

Muskyboi
Is there a tool that can convert something like this:

which when copied as plain text looks like this:

\frac{4}{\pi}\sin\left(x\right)+\frac{4}{3\pi}\sin\left(3x\right)+\frac{4}{5\pi}\sin\left(5x\right)+\frac{4}{7\pi}\sin\left(7x\right)+\frac{4}{9\pi}\sin\left(9x\right)

To this:

4/pi*sin(x)+4/(3pi)*sin(3x)+4/(5pi)*sin(5x)+4/(7pi)*sin(7x)+4/(9pi)*sin(9x)

Mentor
I've not heard of such a tool. However, pandoc can do a variety of conversions to and from latex:
https://tex.stackexchange.com/questions/252203/tex-to-plain-text-or-doc
And here's a list of other possible tools and plugins that do conversions:
https://www.tug.org/utilities/texconv/textopc.html
It would be an interesting parsing project in a Comp Sci course on Compilers.

berkeman
Mentor
Is there a tool that can convert something like this:
View attachment 266516
which when copied as plain text looks like this:
\frac{4}{\pi}\sin\left(x\right)+\frac{4}{3\pi}\sin\left(3x\right)+\frac{4}{5\pi}\sin\left(5x\right)+\frac{4}{7\pi}\sin\left(7x\right)+\frac{4}{9\pi}\sin\left(9x\right)
To this:
4/pi*sin(x)+4/(3pi)*sin(3x)+4/(5pi)*sin(5x)+4/(7pi)*sin(7x)+4/(9pi)*sin(9x)

What format is the source? Is it just a JPEG or BMP picture, or a PDF snapshot? Or something copy/pasted from MS Word? The source format will make a big difference in how difficult the conversion will be, IMO.

it would be an interesting parsing project in a Comp Sci course on Compilers.

Absolutely. It would be a fun project, if the input format was something reasonable (instead of having to do full character recognition from a BMP file as a first step).

Mentor
I thought the source was just the latex string to convert to a plain text target. A related fun project would be conversion to character graphics:

Code:
 //\   pi/2
//
//     sin(x) dx
//
\\//   0

This would come in useful in source code comments or markdown where the viewer uses character graphics.
Mentor
Oh, oops, I think you're right. I misread the OP to be "convert from this math equation to LaTeX". Sorry, nothing to see here, everybody move along...

Mentor
I thought the source was just the latex string to convert to plain text. And yeah, that's a MUCH easier problem to assign in a compiler class.

Homework Helper Gold Member
What about exporting it as a PDF or PNG file and using OCR software? I tried with FreeOCR on the PNG in the OP, but it did not do very well. Maybe better OCR software would do a better job. FreeOCR converted it to

%si.1.1[x)+% sin(3x)+% sin(5x)+% sin(Tx) + sin[9.\'

Not a good result. But I have seen OCR software do some impressive things.

PS. I just tried a couple of online TeX to TXT converters and they did not work well.

Homework Helper Gold Member
Input:
\frac{4}{\pi}\sin\left(x\right)+\frac{4}{3\pi}\sin\left(3x\right)+\frac{4}{5\pi}\sin\left(5x\right)+\frac{4}{7\pi}\sin\left(7x\right)+\frac{4}{9\pi}\sin\left(9x\right)
https://www.wolframalpha.com/input/?i=+\frac{4}{\pi}\sin\left(x\right)+\frac{4}{3\pi}\sin\left(3x\right)+\frac{4}{5\pi}\sin\left(5x\right)+\frac{4}{7\pi}\sin\left(7x\right)+\frac{4}{9\pi}\sin\left(9x\right)
produces
4/π sin(x) + 4/(3 π) sin(3 x) + 4/(5 π) sin(5 x) + 4/(7 π) sin(7 x) + 4/(9 π) sin(9 x)
also interesting:
https://mathpix.com/
http://www.i2ocr.com/free-online-math-equation-ocr
http://www.inftyproject.org/en/index.html

Muskyboi
Input:
\frac{4}{\pi}\sin\left(x\right)+\frac{4}{3\pi}\sin\left(3x\right)+\frac{4}{5\pi}\sin\left(5x\right)+\frac{4}{7\pi}\sin\left(7x\right)+\frac{4}{9\pi}\sin\left(9x\right)
https://www.wolframalpha.com/input/?i=+\frac{4}{\pi}\sin\left(x\right)+\frac{4}{3\pi}\sin\left(3x\right)+\frac{4}{5\pi}\sin\left(5x\right)+\frac{4}{7\pi}\sin\left(7x\right)+\frac{4}{9\pi}\sin\left(9x\right)
View attachment 266604
produces
4/π sin(x) + 4/(3 π) sin(3 x) + 4/(5 π) sin(5 x) + 4/(7 π) sin(7 x) + 4/(9 π) sin(9 x)
also interesting:
https://mathpix.com/
http://www.i2ocr.com/free-online-math-equation-ocr
http://www.inftyproject.org/en/index.html

This is exactly what I was looking for. Thank you.

Homework Helper
produces
4/π sin(x) + 4/(3 π) sin(3 x) + 4/(5 π) sin(5 x) + 4/(7 π) sin(7 x) + 4/(9 π) sin(9 x)

For the record: when you hover the mouse over the picture of the equation, extra buttons appear, and the one on the right brings up the plain text.

Homework Helper Gold Member
The only potential problem with Wolfram's plain text is that it uses the extended character set, so ##\pi## instead of Pi. That is acceptable in many places that you might paste it, but not everywhere.
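For the narrow pattern in the OP (fractions of the form \frac{a}{b}, \sin terms, and \pi), even a few regex substitutions get the job done. This is only a toy sketch for exactly this kind of input, not a general LaTeX parser:

```python
import re

def latex_to_plain(s: str) -> str:
    """Convert a tiny subset of LaTeX (\\frac, \\sin, \\pi, \\left, \\right) to plain text."""
    s = s.replace(r"\left", "").replace(r"\right", "")
    s = s.replace(r"\pi", "pi")

    def frac(m):
        num, den = m.group(1), m.group(2)
        # parenthesize denominators that are not a single number or name
        if not re.fullmatch(r"\d+|[A-Za-z]+", den):
            den = f"({den})"
        return f"{num}/{den}"

    s = re.sub(r"\\frac\{([^{}]*)\}\{([^{}]*)\}", frac, s)
    return s.replace(r"\sin", "*sin")

src = r"\frac{4}{\pi}\sin\left(x\right)+\frac{4}{3\pi}\sin\left(3x\right)"
print(latex_to_plain(src))  # 4/pi*sin(x)+4/(3pi)*sin(3x)
```

Anything outside this exact subset (nested braces, other commands) would need a real parser, as suggested upthread.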
# Hardmax

## Hardmax - 13

### Version
• name: Hardmax (GitHub)
• domain: main
• since_version: 13
• function: False
• support_level: SupportType.COMMON
• shape inference: True

This version of the operator has been available since version 13.

### Summary
The operator computes the hardmax values for the given input:

Hardmax(element in input, axis) = 1 if the element is the first maximum value along the specified axis, 0 otherwise

The "axis" attribute indicates the dimension along which Hardmax will be performed. The output tensor has the same shape and contains the Hardmax values of the corresponding input.

### Attributes
• axis: Describes the dimension Hardmax will be performed on. Negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(input). Default value is -1.

### Inputs
• input (heterogeneous) - T: The input tensor of rank >= axis.

### Outputs
• output (heterogeneous) - T: The output values with the same shape as the input tensor.

### Type Constraints
• T in ( tensor(bfloat16), tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.

### Examples

default

```python
node = onnx.helper.make_node(
    'Hardmax',
    inputs=['x'],
    outputs=['y'],
)
x = np.array([[3, 0, 1, 2],
              [2, 5, 1, 0],
              [0, 1, 3, 2],
              [0, 1, 2, 3]]).astype(np.float32)
# expect result:
# [[1. 0. 0. 0.]
#  [0. 1. 0. 0.]
#  [0. 0. 1. 0.]
#  [0. 0. 0. 1.]]
y = hardmax(x)
expect(node, inputs=[x], outputs=[y], name='test_hardmax_example')

# For multiple occurrences of the maximal values,
# the first occurrence is selected for one-hot output
x = np.array([[3, 3, 3, 1]]).astype(np.float32)
# expect result:
# [[1, 0, 0, 0]]
y = hardmax(x)
expect(node, inputs=[x], outputs=[y], name='test_hardmax_one_hot')
```

_hardmax_axis

```python
x = np.random.randn(3, 4, 5).astype(np.float32)

node = onnx.helper.make_node('Hardmax', inputs=['x'], outputs=['y'], axis=0)
y = hardmax(x, axis=0)
expect(node, inputs=[x], outputs=[y], name='test_hardmax_axis_0')

node = onnx.helper.make_node('Hardmax', inputs=['x'], outputs=['y'], axis=1)
y = hardmax(x, axis=1)
expect(node, inputs=[x], outputs=[y], name='test_hardmax_axis_1')

node = onnx.helper.make_node('Hardmax', inputs=['x'], outputs=['y'], axis=2)
y = hardmax(x, axis=2)
expect(node, inputs=[x], outputs=[y], name='test_hardmax_axis_2')

node = onnx.helper.make_node('Hardmax', inputs=['x'], outputs=['y'], axis=-1)
y = hardmax(x, axis=-1)
expect(node, inputs=[x], outputs=[y], name='test_hardmax_negative_axis')

# default axis is -1
node = onnx.helper.make_node('Hardmax', inputs=['x'], outputs=['y'])
expect(node, inputs=[x], outputs=[y], name='test_hardmax_default_axis')
```

### Differences from Hardmax - 11
• The summary is rewritten: version 13 defines the hardmax per element along the specified axis, instead of describing coercion of the input into a 2D tensor.
• The default value of the axis attribute changes from 1 to -1.
• The input is described as a tensor of rank >= axis rather than a tensor coerced into a 2D (N x D) matrix, and the output simply has the same shape as the input.
• tensor(bfloat16) is added to the type constraints.
## Hardmax - 11

### Version
• name: Hardmax (GitHub)
• domain: main
• since_version: 11
• function: False
• support_level: SupportType.COMMON
• shape inference: True

This version of the operator has been available since version 11.

### Summary
The operator computes the hardmax (1 for the first maximum value, and 0 for all others) values for each layer in the batch of the given input.

The input does not need to explicitly be a 2D vector; rather, it will be coerced into one. For an arbitrary n-dimensional tensor input in [a_0, a_1, ..., a_{k-1}, a_k, ..., a_{n-1}], where k is the axis provided, the input will be coerced into a 2-dimensional tensor with dimensions [a_0 * ... * a_{k-1}, a_k * ... * a_{n-1}]. For the default case where axis=1, this means the input tensor will be coerced into a 2D tensor of dimensions [a_0, a_1 * ... * a_{n-1}], where a_0 is often the batch size. In this situation, we must have a_0 = N and a_1 * ... * a_{n-1} = D. Each of these dimensions must be matched correctly, or else the operator will throw errors. The output tensor has the same shape and contains the hardmax values of the corresponding input.

### Attributes
• axis: Describes the axis of the inputs when coerced to 2D; defaults to one because the 0th axis most likely describes the batch_size. Negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(input). Default value is 1.

### Inputs
• input (heterogeneous) - T: The input tensor that's coerced into a 2D matrix of size (NxD) as described above.

### Outputs
• output (heterogeneous) - T: The output values with the same shape as input tensor (the original size without coercion).

### Type Constraints
• T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.
### Differences from Hardmax - 1
• The opening no longer states that the input is a 2-D tensor (Tensor<float>) of size (batch_size x input_feature_dimensions); instead, the statement that the output tensor has the same shape and contains the hardmax values of the corresponding input moves to the end of the coercion paragraph.
• The axis attribute gains support for negative values: "Negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(input)."

## Hardmax - 1

### Version
• name: Hardmax (GitHub)
• domain: main
• since_version: 1
• function: False
• support_level: SupportType.COMMON
• shape inference: True

This version of the operator has been available since version 1.

### Summary
The operator computes the hardmax (1 for the first maximum value, and 0 for all others) values for each layer in the batch of the given input. The input is a 2-D tensor (Tensor<float>) of size (batch_size x input_feature_dimensions).
The output tensor has the same shape and contains the hardmax values of the corresponding input.

Input does not need to explicitly be a 2D vector; rather, it will be coerced into one. For an arbitrary n-dimensional tensor input in [a_0, a_1, ..., a_{k-1}, a_k, ..., a_{n-1}], where k is the axis provided, the input will be coerced into a 2-dimensional tensor with dimensions [a_0 * ... * a_{k-1}, a_k * ... * a_{n-1}]. For the default case where axis=1, this means the input tensor will be coerced into a 2D tensor of dimensions [a_0, a_1 * ... * a_{n-1}], where a_0 is often the batch size. In this situation, we must have a_0 = N and a_1 * ... * a_{n-1} = D. Each of these dimensions must be matched correctly, or else the operator will throw errors.

### Attributes
• axis: Describes the axis of the inputs when coerced to 2D; defaults to one because the 0th axis most likely describes the batch_size. Default value is 1.

### Inputs
• input (heterogeneous) - T: The input tensor that's coerced into a 2D matrix of size (NxD) as described above.

### Outputs
• output (heterogeneous) - T: The output values with the same shape as input tensor (the original size without coercion).

### Type Constraints
• T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.
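The example snippets on this page call a reference `hardmax` function that is not shown here. Under the version-13 semantics (one-hot of the first maximum along an axis), a minimal NumPy sketch of such a function could be:

```python
import numpy as np

def hardmax(x, axis=-1):
    """One-hot encoding of the first occurrence of the maximum along `axis`."""
    x = np.asarray(x)
    # np.argmax returns the index of the FIRST maximum, matching the spec's
    # tie-breaking rule for repeated maximal values
    idx = np.argmax(x, axis=axis)
    out = np.zeros_like(x)
    np.put_along_axis(out, np.expand_dims(idx, axis), 1, axis)
    return out

x = np.array([[3, 3, 3, 1]], dtype=np.float32)
print(hardmax(x))  # ties resolve to the first maximum: [[1. 0. 0. 0.]]
```

This is an illustrative implementation consistent with the spec above, not the official ONNX reference code.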
# 5. Suggestions when producing a panel dataset

Following these suggestions will help you organize your research, which can improve the reproducibility (replicability) and reusability of your code and results. They can be particularly helpful when collaborating with other researchers (including a future self).

## 5.1 Organization of data and code

Having a single directory for a project, containing all data and code in subdirectories, makes it easier to find things, and also makes it simple to zip everything up and send it to someone else. However, try to keep your code and your data separate. A typical file organization could be:

• code/ - all of your analysis
• sources/ - the original data files, along with information so you can find them again
• data/ - merged datasets that are ready to be analyzed
• results/ - result files, typically not formatted for display
• figures/ - formatted figures and LaTeX tables

If you are not sure which predictors you will need for your analysis, create your dataset with many possible predictors and decide later. Merging together your panel dataset is often laborious, and you do not want to do it more times than necessary.

### Data storage
• Most universities have a data storage product available for their students and affiliates. We recommend you inquire at your university about the best place to store data.

### Immutable data
• Original data (also called source or raw data) should be immutable, meaning that it should never be modified by your code. Instead of making changes to the original data, create derived (new) datasets. This is important because the pre-processing performed on source data is as important as the final analysis steps. In addition, it will allow you to reuse the original data multiple times.

## 5.2 Naming conventions

Good naming practices should be applied to files and folders to make the contents of your project clear.
Informative naming makes it easier to understand the purpose of each item and can improve searchability.

### Recommendations:
• Avoid spaces, punctuation, accented characters, and case sensitivity. Use periods for the file type only (e.g., .csv).
• Use delimiters (such as underscores "_" or dashes "-") to separate pieces of information contained in the file name.
• Ensure file names are informative of their contents.
• If you want to indicate sequence, start your file or folder names with numbers (e.g., 01_clean_data, 02_analyze, 03_results).

### Documentation
• Even if you have organized your working directory perfectly, it is still good to include some additional documentation in readme files (readme.txt, readme.md). Describe the files and process in these files, and try to keep them up to date as things are added or changed.

## 5.3 Version control

We recommend using version control to track changes to your code files. There are many advantages to using version control software: it enables multiple collaborators to simultaneously work on a single project, or a single person to use multiple computers to work on a project. It also gives access to historical versions of your project.

### Version control with git
We recommend going through a tutorial on version control with git.

## 5.4 Workflow automation

Automation combines all analysis steps into a cohesive analysis ensemble, or workflow. The goal of automation is to enable streamlined analysis execution, ideally with only a single command. Here is an example that showcases a simple workflow sequence with bash. The file that defines the analysis steps is often called a 'master script'. A master script can be written in different languages, like MATLAB, R, Python, or bash. A master script written in bash that defines an analysis workflow is typically called run_all.sh, runall.sh or similar.
### An example of a master script in bash:

```bash
#!/bin/bash
# file: run_all.sh

python clean_data.py
# the command echo can help with tracking progress
echo "Finished with data cleaning"

python analysis.py
echo "Finished with analysis"
# Use comments like this one
```

Run the whole workflow with a single command:

```bash
sh run_all.sh
```

• A common problem when automating and packaging your project is the use of absolute paths. An absolute or full path points to a location on the filesystem from the root, often containing system-specific sub-directories (for example: /home/someuser/project/data/input.csv). A relative path, on the other hand, only assumes a local relationship between folders (for example: ../data/input.csv, where ".." refers to the parent directory). We recommend specifying relative paths whenever possible.
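On the relative-path point, the same idea can be applied inside analysis scripts. A small Python sketch (the file names here are purely illustrative): build paths with pathlib relative to a known project root rather than hard-coding absolute, machine-specific locations:

```python
from pathlib import Path

def project_path(root, *parts):
    """Join path components under a project root without hard-coding separators."""
    return Path(root).joinpath(*parts)

# relative path: portable across machines and collaborators
data_file = project_path("..", "data", "input.csv")
print(data_file.as_posix())  # ../data/input.csv
```

pathlib handles the platform-specific separator for you, which keeps the script portable between operating systems as well as between machines.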
# How bases react with metals: examples

Examples of bases include alkali metal hydroxides, alkaline earth metal hydroxides, and metal carbonates such as calcium carbonate. Many bases are insoluble; they do not dissolve in water. Ionic compounds that produce negative hydroxide (OH−) ions when dissolved in water are called bases. Bases are slippery to the touch. The arrangement of metals in a vertical column in order of decreasing reactivity is called the reactivity series.

How do acids and bases react with metals? An acid:
• reacts with bases to form salt and water;
• reacts with metals to form hydrogen gas;
• reacts with carbonates to form carbon dioxide, water and a salt.

Hydrogen gas is usually liberated when an acid reacts with a metal; the salt is formed by the metal cation and the anion from the acid.

Metal + base → metal salt + hydrogen. Example: aluminium metal reacts with sodium hydroxide and forms sodium aluminate and hydrogen gas.

To put things into perspective: you might have heard that many metals react with acids. Basic oxides, in turn, react actively with water, producing basic compounds. Limewater can be used to detect the presence of carbon dioxide evolved in a reaction: Ca(OH)2 + CO2 → CaCO3 + H2O.

When an acid reacts with a metal hydroxide, a salt and water are formed. There are four main types of acid-base reactions, depending on the type of base and the products formed; reactions between acids and metal hydroxides are one of them.
Metals react with acids to form hydrogen gas and an ionic compound in which the metal is the cation and the anion comes from the acid. One classic activity studies this reaction by dropping zinc granules into dilute hydrochloric acid and dilute sulphuric acid. The large electronegativity differences between hard acids and hard bases give rise to strong ionic interactions.

What happens when metals react with a base? First, a word on what a base is. In chemistry, there are three definitions of the word base in common use: Arrhenius bases, Brønsted-Lowry bases and Lewis bases. All definitions agree that bases are substances which react with acids, as originally proposed by G.-F. Rouelle in the mid-18th century. Arrhenius proposed in 1884 that a base is a substance which dissociates in aqueous solution to form hydroxide ions (OH−).

Bases also matter in biology: the pancreas secretes a fluid rich in the base bicarbonate to neutralize stomach acid before it reaches the small intestine.
When an acid is mixed with water, it dissociates to give H+ ions. Hydrogen ions cannot exist alone, so each combines with a water molecule to form an H3O+ ion, called hydronium. Similarly, when a base is mixed with water, it dissociates to give OH− ions.

The reaction between an acid and a base to give salt and water is called a neutralization reaction; in this reaction, the effect of the base is nullified by the acid and vice versa. Acids react with metallic oxides to give salt and water, similar to the reaction between an acid and a base, so metallic oxides are called basic oxides. An acid-base reaction can be represented by the following general word equation: acid + base → salt + water.
For example, steel is an alloy made with chromium, nickel, iron, etc. The negative ion of the salt produced in an acid-metal reaction is determined by the negative ion of the acid.

Metal oxides and hydroxides act as bases. Calcium oxide (CaO) is a base (all metal oxides are bases) that is put on soil that is too acidic. Some examples of common basic oxides are Na2O, CaO and MgO. NaOH mounted on alumina is an example of a solid base. Organolithium reagents are organometallic compounds that contain carbon–lithium bonds.

Acid-metal reactions have the general formula metal + acid → salt + hydrogen. For example, magnesium reacts with hydrochloric acid to produce magnesium chloride and hydrogen: Mg + 2HCl → MgCl2 + H2. Likewise for zinc: Zn(s) + 2HCl(aq) → ZnCl2(aq) + H2(g). More reactive metals, such as group 1 metals, are too dangerous to mix with acids, due to the explosive reaction.

Most metals will not react with bases, but others, called amphoteric metals, may form salts with them. Bases react with certain metals, like zinc or aluminium, to produce hydrogen gas. For example:

Zn(s) + 2NaOH(aq) + 2H2O(l) → Na2Zn(OH)4(aq) + H2(g)

or, with concentrated sodium hydroxide, forming sodium zincate:

2NaOH + Zn → Na2ZnO2 + H2

A base dissolved in water is called an alkali. Indicators are used to determine whether a solution is acidic or alkaline. In alchemy and numismatics, the term base metal is contrasted with precious metal, that is, metals of high economic value.
The positive ion of the salt produced in an acid-metal reaction is determined by the metal. When aluminium metal is heated with sodium hydroxide solution, sodium aluminate and hydrogen gas are formed. A learner's report of an acids-and-metals experiment could therefore be summarized as: acid + metal → salt + hydrogen.

Not all metals react with bases to form salt and hydrogen; unreactive precious metals such as gold, platinum, silver, rhodium, iridium and palladium do not. In the reactivity series, the most reactive metal is placed at the top and the least reactive metal at the bottom.

When a base reacts with a non-metal oxide, the two neutralize each other, producing the corresponding salt and water. This reaction tells us that non-metal oxides are acidic in nature. According to the HSAB concept, hard acids prefer binding to hard bases to give ionic complexes, whereas soft acids prefer binding to soft bases to give covalent complexes. Metals also react with oxygen: metal + oxygen → metal oxide.

All metal carbonates and hydrogen carbonates react with acids to give a corresponding salt, carbon dioxide and water: metal carbonate / metal hydrogen carbonate + acid → salt + CO2 + H2O. Limestone, chalk and marble are different forms of calcium carbonate.

Review questions: What do all acids and all bases have in common? What happens when aluminium reacts with a base? What happens to an acid or a base in a water solution? Name the least reactive metal.
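Equations like the aluminium and sodium hydroxide reaction above can be checked for balance mechanically by counting atoms on each side. A small sketch (element counts only, simple formulas without parentheses or hydrates); 2Al + 2NaOH + 2H2O → 2NaAlO2 + 3H2 is the standard balanced form of that reaction:

```python
import re
from collections import Counter

def atom_counts(formula):
    """Count atoms in a simple formula like 'NaAlO2' (no parentheses)."""
    counts = Counter()
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] += int(num) if num else 1
    return counts

def is_balanced(lhs, rhs):
    """Each side is a list of (coefficient, formula) pairs."""
    def total(side):
        t = Counter()
        for coef, formula in side:
            for elem, n in atom_counts(formula).items():
                t[elem] += coef * n
        return t
    return total(lhs) == total(rhs)

# 2Al + 2NaOH + 2H2O -> 2NaAlO2 + 3H2
print(is_balanced([(2, "Al"), (2, "NaOH"), (2, "H2O")],
                  [(2, "NaAlO2"), (3, "H2")]))  # True
```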
An acid–base reaction can also occur in a non-aqueous solvent system, for example:

AgNO3 (base) + NOCl (acid) → AgCl (salt) + N2O4

Some acidic (non-metallic) oxides react with alkalis in the same way as acids do:

non-metallic oxide + alkali → salt + water

The ionic transition metal oxides with oxidation numbers +4, +5, +6 and +7 behave as acidic compounds, while amphoteric oxides show the properties of both an acid and a base. Common base metals are copper, lead, tin, aluminium, nickel and zinc, together with their alloys such as brass and bronze.

Metals also react with carboxylic acids to give a salt and hydrogen gas:

metal + carboxylic acid → salt + hydrogen

For example, magnesium reacts with acetic acid to give magnesium acetate and hydrogen.

According to the HSAB concept, hard acids prefer binding to hard bases to give ionic complexes, whereas soft acids prefer binding to soft bases to give covalent complexes; hard acids and hard bases give rise to strong ionic interactions. This is also referred to as the Hard–Soft Interaction Principle (HSIP).

A base has a bitter taste and a slippery texture, and turns red litmus paper blue. Bases that are soluble in water (alkalis) give solutions that conduct electricity because they contain ions. Lime water can be used to detect the presence of carbon dioxide.
# Virgin Orbit “Start Me Up” Launch Virgin Orbit plans to launch their LauncherOne rocket from its modified Boeing 747 mother ship “Cosmic Girl” on 2023-01-09 around 22:16 UTC. The air launch of the rocket occurs when the carrier aircraft reaches the specified drop point in its “racetrack” flight path, so the exact launch time may vary. Live stream coverage of the launch is scheduled to start at 21:15 UTC and will cover the flight to the drop point and subsequent flight to orbit. The mission will place nine satellites in a 555 km Sun-synchronous orbit. Payloads include two miniature space weather monitoring platforms for the U.S. National Reconnaissance Office (NRO); a pathfinder for development of the UK’s own global positioning satellite system; an in-orbit manufacturing testbed built in Wales; an Earth observation satellite built in Poland for Omani company ETCO; the STORK-6 Earth imaging satellite, also built in Poland, which will join five satellites of the planned 14 satellite constellation previously launched by Virgin Orbit and SpaceX; two satellites for the UK’s Defence Science and Technology Laboratory for monitoring radio communications; and an in-orbit demonstration satellite for the planned 24 satellite AMBER system for maritime traffic monitoring. Total payload mass is around 100 kg. The 747 will fly from Spaceport Cornwall in the UK to its launch position southeast of Ireland. This will make this mission not only the first orbital launch from the UK, but the first such launch from western Europe. In 1971, the UK launched its first satellite, Prospero, on the Black Arrow rocket, but that launch was from the test range at Woomera, Australia, after which both the rocket and satellite programmes were immediately cancelled. Here is a pre-flight preview from Everyday Astronaut with details on the payloads. 
4 Likes

After a successful air drop and apparently normal first stage burn, the launch failed with an “anomaly” during the second stage burn that resulted in the launcher failing to reach orbit. The payloads, or what’s left of them after falling back into the atmosphere on a steep suborbital trajectory, splashed into the Atlantic. Virgin Orbit released this statement.

Jeff Foust of SpaceNews has more details, including the possible financial consequences for Virgin Orbit, “First Virgin Orbit U.K. launch fails”.

The aircraft flew to its designated drop location over the Atlantic Ocean off the southern coast of Ireland and released the LauncherOne rocket at approximately 6:11 p.m. Eastern. While telemetry during the live webcast of the launch was unreliable, reporting what appeared to be spurious speed and altitude figures at times, the company reported seven minutes later that the rocket’s upper stage and payloads had reached orbit.

“LauncherOne has once again successfully reached Earth orbit!” the company announced in a tweet it later deleted. “Our mission isn’t over yet, but our congratulations to the people of the UK! This is already the first-ever orbital mission from British soil – an enormous achievement by @spacegovuk and their partners in government!”

The launch then appeared to be in a coast phase before a second burn of the upper stage’s NewtonFour engine, followed by payload deployment. But nearly a half-hour after the announcement of reaching orbit, the company suddenly revealed the launch had instead failed.

“We appear to have an anomaly that has prevented us from reaching orbit. We are evaluating the information,” the company announced.

The company provided no other information about the anomaly, including at what state of flight it took place and why the company incorrectly reported reaching orbit. It did confirm that the Boeing 747 had landed safely back at Spaceport Cornwall.
The failure comes at a precarious time for Virgin Orbit, which has struggled to increase its launch rate and generate revenue. The company, in a Nov. 7 earnings call, reported it closed the third quarter with $71 million in cash, after reporting negative free cash flow of $52.5 million. The company raised $25 million from Virgin Group in early November and another $20 million from Virgin Investments Limited, an investment arm of the Virgin Group, Dec. 20.

The livestream of the launch (replay included in the original post) approached Soviet levels of transparency. The first indication that the rocket had been dropped was at 1:54:59, where the view cuts to the aft camera on the rocket, showing the first stage burning. Launch telemetry does not start until 1:55:35, when it shows the rocket at 2,618 mph and 115,771 feet altitude (the commentator noted that “in the aerospace industry, Imperial units are used”). The flight continues with a musical soundtrack and the plume seeming to expand, with telemetry showing smooth acceleration and ascent. At 1:56:47 they cut away to the vehicle status page, which shows propellant in the first stage almost entirely depleted. The announcer then reports main engine cutoff as normal. For a few seconds, the speed continues to increase, presumably indicating the second stage is firing, but there is no indication of this on the status page.

At 1:57:17 the reported speed jumps from 8,029 mph to 4,041,268,212 mph, which is about 2.5% of the speed of light. Presumably this is due to ratty telemetry, unless they’ve engaged warp drive. At 1:57:30, the screen cuts to the aft camera view and telemetry now reports speed as 8.202 mph, as Sulu appears to have taken his lead foot off the accelerator. Then, at 1:57:39, speed jumps to 79,104,435,116,891,760,000 mph, which is about 1.18 × 10¹¹ times light speed, and a few seconds later we return to speeds in the 8000 mph range, continuing to accelerate.
The commentator speaks of “dropouts in the data as we are switching between different ground stations”. But, certainly, they must have checksums on the digital data downlink to avoid transmission errors causing absurd results, don’t they? Don’t they?

Next, at 1:57:49, we cut to a forward camera view which shows what, with a little imagination, you can see might be payloads exposed after fairing jettison, illuminated very weakly by the engine plume of the second stage engine. Well, maybe it takes more than a little imagination. The telemetry now reports the speed as 8425 mph and altitude as 0. Speed then jumps to 0 mph. At 1:58:12 the forward camera view is replaced by the trajectory plot, showing the rocket proceeding southward from the launch location. Speed and altitude are now both zero. Telemetry then returns at 9,110 mph and 506,907 feet. The status page then returns, showing the second stage burning with fuel at 77% and oxidiser at 56% and the engine gimbal centred. Velocity jumps around, but seems to be overall increasing. The launch control centre continues to report nominal stage two burn and trajectory.

At 1:59:59, velocity jumps from 11,358 mph to 4,568,515,429 mph, then to 9,015, and back to 11,602 mph, and shortly thereafter to 0 mph, at which point altitude jumps from 596,392 feet to 988,656 feet instantaneously (did somebody just cut in the Sarfatti metamaterial drive?) The trajectory plot then shows an abrupt, UFO-style right turn toward a vertical climb. Next, we’re back at 599,170 feet but zero miles per hour. While remaining at zero speed, altitude then jumps to 11,029,474,302,367,600,000 feet, which is about 355 light years, a tad more than the distance to the bright star Canopus. Then we’re back closer to home at 794,062 feet, still at zero velocity. The trajectory plot continues to show a vertical rise. Cut back to the status page, which now shows fuel at 31% and oxidiser at 0% and the engine gimbal near limit toward the bottom.
Engine gimbal starts jumping around as if the rocket has gone all Kerbal with Jeb asleep at the controls, while fuel magically jumps back to 50% and oxidiser 15%. As this is happening, speed remains at 0 mph, but altitude reports −2,212 feet. The browser showing the status display then crashes and is replaced by a Virgin Orbit logo. The trajectory plot returns, showing a continued vertical climb. Altitude shows crazy jumps updating around once per second. There is then a call that sounds like “Newton 4 [second stage engine] shutdown initiated”. Shortly after, the rocket telemetry freezes at 0 mph and 244,030 feet. There follows a long musical interlude with views of the Cornwall coast and the mission logos, with occasional open microphone interjections. At 2:22:42, the commentator returns and claims the rocket is now in stage 2 coast, presuming it to be in orbit. At 2:30:05, the announcer returns to say “LauncherOne has suffered an anomaly which will prevent us from making orbit”. At 2:38:20 the carrier plane returns and lands. The live stream ends at 2:52:45 without another word on the status of the rocket.

3 Likes

Here is Scott Manley’s analysis of the launch failure. He manages to dig some plausible detail out of those murky images from the forward looking camera after the second stage burn started, and shows a meteor camera video which may have captured the re-entry of the second stage and payloads.

4 Likes

There does seem to have been a fair bit of jingoism about the first kinda-sorta-maybe launch from “British soil” – or at least from over international waters not a million miles from British soil. That would certainly have been something to boast about, if it had worked. However, it does seem that not enough attention had been paid to telemetry – ground stations, etc. That is bad because Virgin Orbit may not have collected enough data to understand why this particular flight failed.
Perhaps it would have been smarter at this early stage in the rocket development program to have chosen a different launch site where reliable telemetry would have been more assured. Australia, maybe? Or Florida? 3 Likes
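As an aside on the checksum question raised earlier: a downlink consumer can reject corrupted frames instead of displaying them. A minimal sketch (the frame format and the choice of CRC-32 are illustrative assumptions on my part, not Virgin Orbit’s actual protocol):

```python
import struct
import zlib

def make_frame(speed_mph: float, altitude_ft: float) -> bytes:
    """Pack a telemetry frame and append a CRC-32 of the payload."""
    payload = struct.pack("<dd", speed_mph, altitude_ft)
    return payload + struct.pack("<I", zlib.crc32(payload))

def parse_frame(frame: bytes):
    """Return (speed, altitude) if the CRC checks out, else None."""
    payload, (crc,) = frame[:-4], struct.unpack("<I", frame[-4:])
    if zlib.crc32(payload) != crc:
        return None  # corrupted frame: drop it rather than display it
    return struct.unpack("<dd", payload)

frame = make_frame(8029.0, 506907.0)
print(parse_frame(frame))        # intact frame decodes to the two values
corrupted = bytes([frame[0] ^ 0xFF]) + frame[1:]
print(parse_frame(corrupted))    # → None
```

A receiver like this would show a data dropout instead of a rocket travelling at 10¹¹ times the speed of light.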
# How to quickly detect incorrect password in encrypted file without compromising security?

I am developing software to encrypt/decrypt files/streams using a symmetric encryption algorithm. During the encryption phase the data will be compressed and encrypted. When decrypting data, I want to be able to check whether the supplied password is correct. For that purpose the encrypted file contains a header (which I would like to keep as small as possible), where some signature derived from the password is going to be stored. There are several conditions that must be met:

A. Password checking must be as fast as possible and it should not require excessive CPU/memory resources. The risk of using a weak/guessable password is NOT present, as we declare in advance that only secure, long, random passwords are going to be used for encryption.

B. If we encrypt the same plaintext file twice with the same password, the header and encrypted text must not be the same.

I assume that a theoretical attacker has the complete source code of my software. The one and only thing that the attacker does NOT have is the password. My question is: what kind of signature should I use so that security is not compromised? Here are the possible options that I was thinking about, and my commentary on them:

1. Password hash (SHA, MD5, etc.). Probably not a good idea because of the risk of precomputed dictionary attacks.

2. Hash of the salted password. The salt is going to be generated using a pseudo-random number generator (PRNG). The initial seed of the PRNG is derived from the password. I suspect this won't help at all, since we know that the attacker has access to the source code - so the attacker will always be able to generate the salt for a given password and, hence, generate the hash. Then the attacker can write proprietary software which will be able to perform a dictionary attack against files encrypted with my software using our own hash-generating algorithm.

3. Using a key derivation function (KDF), like PBKDF2.
This does not meet condition A, because all good key derivation functions should deliberately use excessive CPU/memory resources. 4. Using HMAC. I am not very good at understanding HMAC, but I think that in this case HMAC would not be used for what it was designed. I don't need to verify integrity of the password AND the encrypted file (or, in cryptographical terms, key and the message). I just need to verify whether the password is correct. 5. Other options? I would be grateful if someone who understands cryptography better than me could give me some ideas or comments. • Is there a specific reason why you can't force this check after decrypting the file? After all, no matter how expensive you make this check, the attacker can decrypt the file (based on a trial password), and see if that decryption is plausible. Hence, any check you force which is more expensive than that can be ignored by the attacker (while the legitimate user still has to pay for it). – poncho Jun 15 '15 at 13:19 • The reason is that I need to know in advance whether provided password is correct. True, I can start decrypting/decompressing file using wrong password - which should end very early because decompression will fail, but would like to avoid this. – Acetylator Jun 15 '15 at 16:38 • Concerning passwords, in my realisation bruteforcing possibility is virtually excluded, since all passwords are long (>64 characters) random sequences of unicode characters. Human interaction (I mean entering passwords) is completely avoided, because data encryption is used for communication between different modules of a bigger software system. I am well aware of the fact that sharing symmetrical keys between system modules is another security problem, but this was not a part of my question. – Acetylator Jun 15 '15 at 16:45 This is a perfect job for a Key Based Key Derivation Function or KBKDF. Generate two keys from the input (salt and password). 
One is stored directly in front of the ciphertext and one is used as the encryption key. Because the KBKDF is based on a PRF it cannot be inverted, so the keys are not related as far as an attacker is concerned. Currently the best KDF is arguably HKDF. It contains both an extract functionality (to handle long passwords) and can take a salt and OtherInfo structure as input. The OtherInfo can be used for inserting an ASCII string identifying the key, e.g. "KCV" (key check value) and "ENC" (encryption). It is possible to derive the IV by using "IV" of course, in case the salt is already random for each encryption. You may want to perform only one extract and three expands of HKDF. I would strongly recommend authenticated encryption. Make sure you use a fresh salt for each encryption procedure.

• I think that is valid only if "long, random passwords" is really long and random enough to be a key not needing stretching; e.g. has 22 letters randomly chosen among 0…9A…Za…z, giving next to 131 bits of entropy. I fear this solution (perhaps any) seriously cuts corners on security for anything a human can be expected to memorize, or routinely key in. – fgrieu Jun 15 '15 at 13:30

• @fgrieu True, if the long random passwords are neither, then the HKDF extract should be replaced by e.g. PBKDF2. The direct use of HKDF here is due to the requirements in the question. Other readers should be especially wary of this. – Maarten Bodewes Jun 15 '15 at 14:27

• Hello Maarten, I have a strong feeling that this is exactly what I am looking for :-) Could you please check my algorithm and tell whether it is correct? I have function HKDF, which takes parameters as defined here: tools.ietf.org/html/rfc5869 Parameter 1: Salt - non-secret random value, will be saved in file header. Parameter 2: IKM - input keying material. This is the encryption password in plain text. Parameter 3: Info - optional context and application specific information. Not used. – Acetylator Jun 15 '15 at 18:59

• Encryption: 1.
I generate a completely random Salt using some source of entropy (PRNG) 2. Using HKDF, I generate a fairly long sequence of output keying material (OKM), let's say at least 1024 bytes. 3. I construct the file header by placing two pieces of data in it: A) Salt B) First 32 bytes of OKM 4. I encrypt the plaintext file using the rest of the OKM bytes as key and initialization vector for my symmetric cipher. 5. Done. – Acetylator Jun 15 '15 at 19:00

• I think the authenticated encryption sounds like a good idea. That may eliminate the need for any other password validation, since using an incorrect password would cause the authentication to fail. – kasperd Jun 17 '15 at 11:28

Hash the original text, store the hash along with other auxiliary data. Check the decrypted text against the hash. This will check the overall integrity of the process, not just the use of the correct key.

This is an addition to the answer of Maarten Bodewes. I have found RNCryptor, which is a file encryption/decryption utility. IMHO, anyone who is trying to solve problems similar to mine (checking passwords, encrypting files) will benefit from studying the algorithm and specification of RNCryptor's encryption/decryption process. Not sure what cryptography experts would say about their algorithm, but it looks good to me - at least for the task I am trying to solve.

• I do not see that RNCryptor solves the problem of a fast check for an incorrect password in its password variant (or that it can be done). Incidentally, there's a funny security near miss in rnc_isEqualInConsistentTime, supposed to compare the equality of bytestrings of variable length with resistance to timing attack: if `self.length` was 256 (or a multiple of that), then empty otherData would be a match. It does not degenerate into a security problem in the context, since we do not use 2048-bit MACs. – fgrieu Jun 17 '15 at 7:14

• Well, RNCryptor is able to check the password before the decryption starts - and this is exactly what I need.
It generates a "validator" sequence using the specified password and then compares it to the validator from the file header. If both validators are the same, the password is correct. So it solves the problem of password checking. Concerning the bug in rnc_isEqualInConsistentTime - that's interesting, good that you noticed it; in my implementation I'll take care of it. Thank you! – Acetylator Jun 17 '15 at 14:33

• Ah, so likely you are talking about RNCryptor (draft) v4. In the password mode, my reading is that checking a password using the validator still requires the expensive password-entropy-stretching with PBKDF2 (which BTW is far from state of the art, we have scrypt giving orders of magnitude more stretching) – fgrieu Jun 17 '15 at 14:53

• Exactly, I was talking about draft v4. My bad, should have specified that. Concerning PBKDF2 - this is not a problem in my case per se, since all I wanted to know was how to check the password before trying to decrypt data. I will also have to check scrypt, I agree that it would be a better choice than PBKDF2. – Acetylator Jun 17 '15 at 16:51

• I'm glad that your problem seems solved, but I do not understand how if you use RNCryptor's password-based encryption: the validator in RNCryptor (draft) v4 does NOT much help to quickly check if the password is correct. It does save computing the HMAC, but except for very large packets or small PBKDF2 parameters it is not a significant time saver, since the running time is dominated by the stretching of PBKDF2, which (as apparent here) must be performed before the validator check. – fgrieu Jun 17 '15 at 17:10
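To make the HKDF approach from the accepted answer concrete, here is a minimal from-scratch sketch of RFC 5869 HKDF-SHA256 in Python (standard library only; the "KCV"/"ENC" context labels, 32-byte key length and 16-byte salt are illustrative choices taken from the discussion, not a vetted design):

```python
import hashlib
import hmac
import os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """RFC 5869 extract step: PRK = HMAC-SHA256(salt, IKM)."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    """RFC 5869 expand step: T(i) = HMAC(PRK, T(i-1) | info | i)."""
    okm, block = b"", b""
    for counter in range(1, -(-length // 32) + 1):
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
    return okm[:length]

def derive_keys(password: bytes, salt: bytes):
    """One extract, two expands, domain-separated by the info string."""
    prk = hkdf_extract(salt, password)
    return hkdf_expand(prk, b"KCV"), hkdf_expand(prk, b"ENC")

password = b"long-random-password-from-the-key-store"
salt = os.urandom(16)                 # fresh salt for every encryption
check_value, enc_key = derive_keys(password, salt)
# Header = salt || check_value; enc_key encrypts the payload.
# On decryption, recompute check_value: a match means the password is
# correct, without revealing anything about enc_key.
assert derive_keys(password, salt) == (check_value, enc_key)
```

Because the two outputs come from independent HKDF expands, publishing the check value in the header does not weaken the encryption key, which matches the answer's point about the keys being unrelated from the attacker's perspective.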
Question

# A coal merchant makes a profit of $20\%$ by selling firewood at $25Rs$ per quintal. If he sells the firewood at $25.50Rs$ per quintal, what is his profit percent on the whole investment?

Hint: As you read the question, note down the given parameters such as the selling price and the profit percentage. Substitute them into the basic profit and loss formulae to find the unknown value.

We are given the profit percentage and the selling price. It is given that a coal merchant makes a profit of $20\%$ by selling firewood at $25Rs$ per quintal. Hence, we can conclude that

Selling Price $SP = 25Rs$ and profit percentage $= 20\%$.

Profit or loss is always calculated in relation to the Cost Price, $CP$. So a profit of $20\%$ means that the merchant gains $20\%$ of the cost price on each quintal sold at $25Rs$. Therefore,

$\dfrac{SP - CP}{CP} = $ profit (as a fraction), or $\dfrac{SP - CP}{CP} \times 100 = $ Profit Percentage

We know the profit percentage is $20\%$ and the selling price is $25Rs$. Substituting these values:

$\Rightarrow \dfrac{25 - CP}{CP} = 20\%$

Percentage means per cent, that is, per $100$, so we can write $20\%$ as $\dfrac{20}{100}$:

$\Rightarrow \dfrac{25 - CP}{CP} = \dfrac{20}{100} = \dfrac{1}{5}$

Cross-multiplying and collecting the $CP$ terms on one side:

$\Rightarrow 5 \times \left( {25 - CP} \right) = CP \\ \Rightarrow 125 - 5CP = CP \\ \Rightarrow 6CP = 125 \\ \Rightarrow CP = \dfrac{125}{6} \approx 20.83Rs$

So the cost price is $\dfrac{125}{6} \approx 20.83Rs$. Now the question asks for the profit percentage when the selling price is $25.50Rs$.
Substituting $SP = 25.50$ and $CP = \dfrac{125}{6} \approx 20.83$ into the same formula:

$\Rightarrow \dfrac{SP - CP}{CP} \times 100 = \dfrac{25.50 - \dfrac{125}{6}}{\dfrac{125}{6}} \times 100 = \dfrac{28}{125} \times 100 = 22.4$

It means that when the merchant sells the firewood at $25.50Rs$ per quintal, he makes a profit of $22.4\%$. (Working with the rounded cost price $20.83Rs$ gives $\dfrac{4.67}{20.83} \times 100 \approx 22.4$, the same answer up to rounding.)

Note: Profit or loss is always calculated on the cost price. If the selling price is more than the cost price, there is a profit; if the cost price is more than the selling price, there is a loss.
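The arithmetic can be double-checked numerically; a quick sketch (the variable names are ours):

```python
# Recover the cost price from the first sale, then compute the profit
# percentage at the new selling price.
sp1, profit1 = 25.0, 20.0           # Rs per quintal, percent
cp = sp1 / (1 + profit1 / 100)      # since SP = CP * (1 + profit/100)
sp2 = 25.50
profit2 = (sp2 - cp) / cp * 100

print(round(cp, 2))       # → 20.83
print(round(profit2, 1))  # → 22.4
```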
## anonymous 4 years ago The problem for session 4: How can I calculate f'(x) of f(x) = sin 2x as 2 cos 2x? Have I missed anything?

1. Stacey This uses the chain rule. The derivative of sin x is cos x, but we do not have sin x. We have sin 2x.

2. Stacey Taking the derivative will give us cos 2x times the derivative of 2x. Since the derivative of 2x is 2, the answer is (cos 2x)*2, or 2*cos 2x.

3. Stacey [whiteboard drawing]

4. anonymous You can substitute 2x with u. Differentiate sin u, then differentiate 2x, and multiply the two, which as has been previously mentioned is an application of the chain rule. Don't forget to replace u with 2x in your final answer.

5. anonymous After I learned Session 7, I knew that the derivative of sin x is cos x. If I replace 2x with u and calculate again, wouldn't it be f'(1/2u) = cos u, and f'(x) = cos 2x? Where did the preceding 2 come from? What I have calculated, using sin 2x = 2 sin x cos x: [whiteboard drawing] What is wrong?

6. anonymous Hi makopo. Via the Chain Rule (also called the Substitution Rule): $y = \sin 2x$. Let $u = 2x$. So $dy/dx = dy/du \times du/dx$ (Chain/Substitution Rule), i.e. $dy/dx = d/du (\sin u) \times d/dx (2x)$. Solving this, $dy/dx = \cos u \times 2$. Substituting for u (given above), $dy/dx = 2\cos 2x$, with $u = 2x$. Hope this was helpful!

7. anonymous I took Session 11 just now, learned the chain rule, then returned to Session 4 and tried again. This time I can understand why the derivative of sin 2x is 2 cos 2x! The problem is that this problem is posted in Session 4, though you can't solve it before taking Session 11. I'll send feedback about this issue... Thank you all, especially DaveJohnson, for pointing out that I needed to work through more sessions to learn the "chain rule" needed to solve this problem.

8. Stacey The way you rewrote sin 2x as 2 sin x cos x means we now have a product. Taking the derivative of a product requires the use of the product rule.

9. Stacey [whiteboard drawing]
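The product-rule route mentioned in replies 8 and 9 reaches the same answer as the chain rule; written out, using the double-angle identity $\cos 2x = \cos^2 x - \sin^2 x$:

```latex
\frac{d}{dx}\sin 2x
  = \frac{d}{dx}\left(2\sin x\cos x\right)
  = 2\left(\cos x\cdot\cos x + \sin x\cdot(-\sin x)\right)
  = 2\left(\cos^2 x - \sin^2 x\right)
  = 2\cos 2x
```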
# Thread: can someone check my work thanks!!

1. ## can someone check my work thanks!!

Can someone please check my work, thanks!!

1) An angle with a negative rotation passes through the point (-5, 8). To the nearest degree, what is the measure of the angle? a) -238 b) -148 c) -122 d) -58
My answer is b) -148.

2) For -180° < θ < -90°, which of the primary trigonometric functions is positive in this interval? a) cos θ b) tan θ c) sin θ
My answer is b) tan θ.

3) A central angle subtends an arc of 35 cm. If the radius of the circle is 12 cm, what is the measure of the central angle to the nearest degree? A. 2.9 B. 292 C. 167 D. 173
My answer is A, 2.9.

4) Are -5(pi squared)/9 and 5(pi squared)/9 coterminal angles?
No, they are not coterminal angles, because they do not differ by a multiple of 360°.

5) What is the exact value of sin 5(pi squared)/6?
The exact value is ½.

2. Originally Posted by bobs

1) An angle with a negative rotation passes through the point (-5, 8). To the nearest degree, what is the measure of the angle? My answer is b) -148.
This is in the 2nd quadrant, so the angle is -238 degrees.

2) For -180° < θ < -90°, which of the primary trigonometric functions is positive in this interval? My answer is b) tan θ.
This is quadrant 3, so yes, it's tan.

3) A central angle subtends an arc of 35 cm. If the radius of the circle is 12 cm, what is the measure of the central angle to the nearest degree? My answer is A, 2.9.
In radians this is 35/12, or in degrees 35/12 × 180/π ≈ 167 degrees.

4) Are -5(pi squared)/9 and 5(pi squared)/9 coterminal angles? No, they are not coterminal angles, because they do not differ by a multiple of 360°.
This looks OK, but I am suspicious of the appearance of pi² in this and the next question; also, are they supposed to be in degrees or radians?

5) What is the exact value of sin 5(pi squared)/6?
The exact value is ½.
Not likely; also, use more brackets to make this less ambiguous, and clarify the angle measure in use. As it is, I would be surprised if it had an exact value (in the sense intended in the question).
RonL

3. ## sorry....

5) What is the exact value of sin 5(pi squared)/6? Sorry, it is multiple choice: A. 1/2 B. -1/2 C. √3/2 D. -√3/2. Okay, since it is not A, I think the exact value is B, -1/2. Is that right??

4. Originally Posted by bobs
5) What is the exact value of sin 5(pi squared)/6? Sorry, it is multiple choice: A. 1/2 B. -1/2 C. √3/2 D. -√3/2. Okay, since it is not A, I think the exact value is B, -1/2. Is that right??

I do not think there is a way to get $\sin \frac{5\pi^2}{6}$, unless you want $\sin \frac{5\pi}{6}$.

5. ## okay...

No, sorry, I want this one: $\sin \frac{5\pi}{6}$. Sorry for the mix up!!

6. Originally Posted by bobs
No, sorry, I want this one: $\sin \frac{5\pi}{6}$. Sorry for the mix up!!

That is $\frac{5(180)^\circ}{6}$, thus $150^\circ$, in the second quadrant. Use the identity $\sin (180^\circ-x)=\sin x$. Thus, $\sin 150^\circ=\sin (180^\circ-30^\circ)=\sin 30^\circ=\frac{1}{2}$.
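The numerical answers discussed in this thread can be checked with a few lines of Python (a sketch we added; it covers questions 1, 3, and 5):

```python
import math

# Q1: angle through (-5, 8), measured as a negative rotation.
theta = math.degrees(math.atan2(8, -5))  # ≈ 122.0°, standard position
negative_rotation = theta - 360          # ≈ -238.0°
assert round(negative_rotation) == -238

# Q3: central angle subtending a 35 cm arc on a circle of radius 12 cm.
theta_rad = 35 / 12                      # ≈ 2.9167 rad
assert round(math.degrees(theta_rad)) == 167

# Q5: sin(5π/6) = 1/2 exactly.
assert abs(math.sin(5 * math.pi / 6) - 0.5) < 1e-12
```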
# Assume that P(B) > 0. Prove that if P(A1|B) < P(A1) then P(Ai | B) > P(Ai) for some i = 2, … , k.

Suppose the events $A_1, \dots, A_k$ form a partition of the sample space Ω, i.e., they are disjoint and $\cup_{i=1}^k A_i = \Omega$. Assume that P(B) > 0. Prove that if P(A1|B) < P(A1) then P(Ai | B) > P(Ai) for some i = 2, ..., k. I used the law of total probability for the hypothesis P(A1|B) < P(A1), but I have no idea how to prove that P(Ai | B) > P(Ai) for some i = 2, ..., k. Thank you.

• The law of total probability cannot be used to prove that $P(A_1\mid B) < P(A_1)$ (as you are claiming you did). If you do believe that you have a valid proof, then consider that the same proof can be adapted to prove that $P(A_i\mid B) < P(A_i)$ for all choices of $i$, not just $i=1$, and so you will have disproved what you set out to do. – Dilip Sarwate Apr 19 '15 at 20:31
• That is why I am unable to understand how to prove that if $P(A_1 \mid B) < P(A_1)$ then $P(A_i \mid B) > P(A_i)$ for some $i = 2, \dots, k$. Have you got any suggestions? – Octavian Apr 19 '15 at 20:41

Suppose, for contradiction, that $P(A_1\mid B)<P(A_1)$ and $P(A_i\mid B)\leq P(A_i)$ for all $i=2,\dots,k$. Summing over the partition leads to: $$1=P(\Omega\mid B)=\sum_{i=1}^kP(A_i\mid B)<\sum_{i=1}^kP(A_i)=P(\Omega)=1$$ a contradiction. Hence $P(A_i\mid B)>P(A_i)$ for some $i = 2,\dots,k$.
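The statement can be illustrated on a small finite sample space with exact rational arithmetic (an illustrative sketch we added, not a proof; the sets are invented):

```python
from fractions import Fraction

# Uniform probability on a 6-point sample space.
omega = set(range(6))
A = [{0, 1}, {2, 3}, {4, 5}]   # a partition A1, A2, A3 of omega
B = {3, 4, 5}

def prob(E):
    return Fraction(len(E), len(omega))

def cond(E, F):
    # P(E | F) on a uniform finite space, assuming P(F) > 0.
    return Fraction(len(E & F), len(F))

# Here P(A1 | B) < P(A1) ...
assert cond(A[0], B) < prob(A[0])
# ... so some other cell of the partition must satisfy P(Ai | B) > P(Ai).
assert any(cond(Ai, B) > prob(Ai) for Ai in A[1:])
```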
# With the aid of examples briefly explain the difference between fixed effect and random effects models in experimental design.

With the aid of examples briefly explain the difference between fixed effects and random effects models in experimental design.

d2saint0

Step 1
Introduction:
Experimental study: In a designed experimental study, researchers allocate individuals to a certain group under study and change the values of an explanatory variable intentionally, after which the values of the response variable for each group are recorded.

Step 2
Explanation:
Fixed effect: A factor has a fixed effect in an experimental design when the data have been gathered from all the levels of the factor that are of interest. Fixed effects are constant across individuals.
Example: Suppose that 3 different dosages of a drug are given to three different groups of subjects. The objective of the investigator is to compare the reactions to the three different dosages. In this scenario, "dosage" is the factor, and the 3 different dosages of the drug are its 3 levels. Here, the investigator is studying only a fixed effect: the main interest is the effect of each dosage.
Random effect: A factor has a random effect in an experimental design when it has many possible levels, interest is in all possible levels, but only a random sample of levels is included in the data. Unlike a fixed effect, a random effect is not constant across individuals; the observed levels are treated as a random sample from a larger population of levels.
Example: Suppose that patients are assigned to 2 surgical procedures randomly and there are five separate surgical teams.
To prevent a potential conflict between procedure and surgical team, each team is qualified in both procedures, and each team performs equal numbers of each of the two types of operations. As the investigator wants to compare the procedures, the goal is to generalize to other surgical teams. The surgical team is therefore a random factor, not a fixed factor.
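The distinction can be made concrete by simulating data from the two designs above (a hypothetical sketch; the effect sizes and names are invented for illustration):

```python
import random

random.seed(42)

# Fixed effect: three dosage levels with fixed, repeatable offsets —
# the same offsets would apply in any replication of the experiment.
dosage_effect = {"low": 0.0, "medium": 1.5, "high": 3.0}

# Random effect: five surgical teams whose effects are themselves a random
# sample from a population of team effects, here N(0, 0.5^2).
team_effect = {team: random.gauss(0.0, 0.5) for team in range(5)}

def response(dosage, team):
    # response = baseline + fixed dosage effect + random team effect + noise
    return 10.0 + dosage_effect[dosage] + team_effect[team] + random.gauss(0.0, 0.2)

data = [(d, t, response(d, t)) for d in dosage_effect for t in team_effect]
print(len(data))  # 15 observations: 3 dosages x 5 teams
```

A new replication of this experiment would reuse the same dosage offsets but would draw a fresh set of team effects, which is exactly the fixed/random distinction.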
# imageFan -- computes the fan of the image ## Synopsis • Usage: F = imageFan(M,C) • Inputs: • M, • Outputs: • F, an instance of the type Fan ## Description M must be a matrix from the ambient space of the Cone C to some target space. The imageFan is the common refinement of the images of all faces of C. i1 : C = posHull matrix {{2,1,-1,-3},{1,1,1,1},{0,1,-1,0}} o1 = {ambient dimension => 3 } dimension of lineality space => 0 dimension of the cone => 3 number of facets => 4 number of rays => 4 o1 : Cone i2 : M = matrix {{1,0,0},{0,1,0}} o2 = | 1 0 0 | | 0 1 0 | 2 3 o2 : Matrix ZZ <--- ZZ i3 : F = imageFan(M,C) o3 = {ambient dimension => 2 } number of generating cones => 3 number of rays => 4 top dimension of the cones => 2 o3 : Fan i4 : rays F o4 = {| -3 |, | -1 |, | 1 |, | 2 |} | 1 | | 1 | | 1 | | 1 | o4 : List ## Ways to use imageFan : • "imageFan(Matrix,Cone)" ## For the programmer The object imageFan is .
The gradient descent method may not be efficient, because it can get into a zigzag pattern and repeat the same search directions many times. This problem is avoided in the conjugate gradient (CG) method, which does not repeat any previous search direction and converges in at most $N$ iterations for an $N$-dimensional problem. The CG method is a significant improvement in comparison to the gradient descent method.

We will first assume the function to be minimized is quadratic:

$$f(\mathbf{x})=\frac{1}{2}\mathbf{x}^T A\mathbf{x}-\mathbf{b}^T\mathbf{x}+c$$

where $A$ is symmetric. The gradient (first derivatives) of the function is

$$\mathbf{g}(\mathbf{x})=\nabla f(\mathbf{x})=A\mathbf{x}-\mathbf{b}$$

and the Hessian matrix (second derivatives) of the function is simply $A$, which is assumed to be positive definite, so that $f(\mathbf{x})$ has a minimum. At the solution $\mathbf{x}^*$ where $f$ is minimized, we have $\mathbf{g}(\mathbf{x}^*)=A\mathbf{x}^*-\mathbf{b}=\mathbf{0}$. We see that the minimization of a quadratic function is equivalent to solving the linear equation system $A\mathbf{x}=\mathbf{b}$ for $\mathbf{x}$, given a symmetric positive definite matrix $A$ and a vector $\mathbf{b}$. The CG method considered below can therefore be used for solving both problems. Later we will relax the condition for $f(\mathbf{x})$ and consider using CG to minimize non-quadratic functions.

Before discussing the CG algorithm, we first consider the concept of conjugate vectors, which is the key to the CG method. A set of vectors $\{\mathbf{d}_0,\dots,\mathbf{d}_{N-1}\}$ satisfying

$$\mathbf{d}_i^T A\mathbf{d}_j=0 \qquad (i\ne j)$$

is mutually conjugate with respect to $A$. The vectors are also said to be A-conjugate or A-orthogonal to each other. Comparing this to two orthogonal vectors $\mathbf{u}$ and $\mathbf{v}$ satisfying $\mathbf{u}^T\mathbf{v}=0$, we see that two conjugate vectors satisfying $\mathbf{u}^T A\mathbf{v}=0$ can also be considered as orthogonal to each other with respect to $A$. As these vectors are independent of each other, they can be considered as a basis that spans the N-D space, in which any vector can be expressed as a linear combination of them. The basic strategy of the CG method is for the iteration to follow a search path composed of segments each along one of the mutually conjugate search directions.
After the $n$th iteration, which traversed along the search direction $\mathbf{d}_n$ to arrive at $\mathbf{x}_{n+1}$, the search direction $\mathbf{d}_{n+1}$ in the next iteration will be A-conjugate to the previous one $\mathbf{d}_n$. By doing so, the total error $\mathbf{e}_0=\mathbf{x}_0-\mathbf{x}^*$ at the initial guess will be eliminated one component at a time in each of the $N$ iterations, so that after $N$ such iterations the error becomes zero, $\mathbf{e}_N=\mathbf{0}$, and $\mathbf{x}_N$ becomes the solution.

Now we consider specifically the CG algorithm with the following general iteration:

$$\mathbf{x}_{n+1}=\mathbf{x}_n+\alpha_n\mathbf{d}_n$$

where

$$\alpha_n=-\frac{\mathbf{g}_n^T\mathbf{d}_n}{\mathbf{d}_n^T A\mathbf{d}_n}$$

is the optimal step size derived previously. Subtracting $\mathbf{x}^*$ from both sides, we get

$$\mathbf{e}_{n+1}=\mathbf{e}_n+\alpha_n\mathbf{d}_n$$

where $\mathbf{e}_n=\mathbf{x}_n-\mathbf{x}^*$, and $\mathbf{g}_n$ is the gradient of $f$ at $\mathbf{x}_n$, which can be found to be $\mathbf{g}_n=A\mathbf{x}_n-\mathbf{b}=A(\mathbf{x}_n-\mathbf{x}^*)=A\mathbf{e}_n$ (note that $A\mathbf{x}^*=\mathbf{b}$). Substituting the step size into the previous equation and pre-multiplying both sides by $\mathbf{d}_n^T A$, we get

$$\mathbf{d}_n^T A\mathbf{e}_{n+1}=\mathbf{d}_n^T A\mathbf{e}_n+\alpha_n\,\mathbf{d}_n^T A\mathbf{d}_n=\mathbf{d}_n^T\mathbf{g}_n-\frac{\mathbf{g}_n^T\mathbf{d}_n}{\mathbf{d}_n^T A\mathbf{d}_n}\,\mathbf{d}_n^T A\mathbf{d}_n=0.$$

We see that after taking the $n$th step along the direction $\mathbf{d}_n$ to arrive at $\mathbf{x}_{n+1}$, the remaining error $\mathbf{e}_{n+1}$ is A-conjugate to the previous search direction $\mathbf{d}_n$. To reduce the error as much as possible, the next search direction $\mathbf{d}_{n+1}$ should be along one of the directions that span the remaining error $\mathbf{e}_{n+1}$, that is, A-conjugate to the previous search direction $\mathbf{d}_n$, instead of just orthogonal to it as in the case of the gradient descent method, so that the component in $\mathbf{e}_{n+1}$ corresponding to the direction of $\mathbf{d}_{n+1}$ can be eliminated. Carrying out such a search in $N$ iterations, each eliminating one of the $N$ components of the initial error $\mathbf{e}_0$, we get $\mathbf{e}_N=\mathbf{0}$, i.e., the error is completely eliminated.

To see this, we expand the initial error in terms of the conjugate directions,

$$\mathbf{e}_0=\sum_{i=0}^{N-1}\delta_i\mathbf{d}_i,$$

premultiply both sides of this initial error equation by $\mathbf{d}_n^T A$, and get

$$\mathbf{d}_n^T A\mathbf{e}_0=\sum_{i=0}^{N-1}\delta_i\,\mathbf{d}_n^T A\mathbf{d}_i=\delta_n\,\mathbf{d}_n^T A\mathbf{d}_n.$$

Solving for $\delta_n$ we get

$$\delta_n=\frac{\mathbf{d}_n^T A\mathbf{e}_0}{\mathbf{d}_n^T A\mathbf{d}_n}=\frac{\mathbf{d}_n^T A\mathbf{e}_n}{\mathbf{d}_n^T A\mathbf{d}_n}=\frac{\mathbf{d}_n^T\mathbf{g}_n}{\mathbf{d}_n^T A\mathbf{d}_n}.$$

Here we have used the fact that $\mathbf{g}_n=A\mathbf{e}_n$ and that the $n$th search direction is A-orthogonal to all previous directions, so that $\mathbf{d}_n^T A\mathbf{e}_0=\mathbf{d}_n^T A\mathbf{e}_n$ (as $\mathbf{e}_n$ differs from $\mathbf{e}_0$ only by a combination of $\mathbf{d}_0,\dots,\mathbf{d}_{n-1}$). Comparing this result with the expression of $\alpha_n$ obtained above, we see that $\delta_n=-\alpha_n$, and the expression for the error can now be written as

$$\mathbf{e}_n=\mathbf{e}_0+\sum_{i=0}^{n-1}\alpha_i\mathbf{d}_i=\sum_{i=0}^{N-1}\delta_i\mathbf{d}_i-\sum_{i=0}^{n-1}\delta_i\mathbf{d}_i=\sum_{i=n}^{N-1}\delta_i\mathbf{d}_i.$$

We see that as $n$ increases from $0$ to $N$, the error is gradually eliminated from $\mathbf{e}_0$ to $\mathbf{e}_N$, one component at a time. After $N$ steps, the error is reduced to $\mathbf{e}_N=\mathbf{0}$ and the true solution $\mathbf{x}_N=\mathbf{x}^*$ is obtained.
The specific A-conjugate search directions satisfying $\mathbf{d}_i^T A\mathbf{d}_j=0$ for any $i\ne j$ can be constructed by the Gram-Schmidt process based on any set of independent vectors $\{\mathbf{v}_0,\dots,\mathbf{v}_{N-1}\}$, which can be used as the basis to span the space. Specifically, starting from $\mathbf{d}_0=\mathbf{v}_0$, we construct each of the subsequent $\mathbf{d}_n$ for $n=1,\dots,N-1$ based on $\mathbf{v}_n$, with all of its components along the previous directions $\mathbf{d}_m$ ($m=0,\dots,n-1$) removed:

$$\mathbf{d}_n=\mathbf{v}_n-\sum_{m=0}^{n-1}\beta_{nm}\mathbf{d}_m$$

where $\beta_{nm}\mathbf{d}_m$ is the A-projection of $\mathbf{v}_n$ onto $\mathbf{d}_m$. The coefficient $\beta_{nk}$ can be found by pre-multiplying both sides by $\mathbf{d}_k^T A$ with $k<n$ to get:

$$0=\mathbf{d}_k^T A\mathbf{d}_n=\mathbf{d}_k^T A\mathbf{v}_n-\beta_{nk}\,\mathbf{d}_k^T A\mathbf{d}_k.$$

Solving for $\beta_{nk}$ we get:

$$\beta_{nk}=\frac{\mathbf{d}_k^T A\mathbf{v}_n}{\mathbf{d}_k^T A\mathbf{d}_k}$$

and the A-projection of $\mathbf{v}_n$ onto $\mathbf{d}_m$ is

$$\beta_{nm}\mathbf{d}_m=\frac{\mathbf{d}_m^T A\mathbf{v}_n}{\mathbf{d}_m^T A\mathbf{d}_m}\,\mathbf{d}_m.$$

Now the $n$th direction can be expressed as:

$$\mathbf{d}_n=\mathbf{v}_n-\sum_{m=0}^{n-1}\frac{\mathbf{d}_m^T A\mathbf{v}_n}{\mathbf{d}_m^T A\mathbf{d}_m}\,\mathbf{d}_m.$$

In particular, we could construct the A-orthogonal search directions based on the negative gradient directions, $\mathbf{v}_n=-\mathbf{g}_n$. The benefit of choosing this particular set of directions is that the computational cost is much reduced, as we will see later.

Premultiplying $\mathbf{d}_m^T A$ (with $m<n$) on both sides of $\mathbf{e}_n=\sum_{i=n}^{N-1}\delta_i\mathbf{d}_i$ we get:

$$\mathbf{d}_m^T A\mathbf{e}_n=\mathbf{d}_m^T\mathbf{g}_n=\sum_{i=n}^{N-1}\delta_i\,\mathbf{d}_m^T A\mathbf{d}_i=0.$$

We see that $\mathbf{d}_m^T\mathbf{g}_n=0$ for all $m<n$, i.e., the gradient $\mathbf{g}_n$ is orthogonal to all previously traveled directions $\mathbf{d}_0,\dots,\mathbf{d}_{n-1}$. Since each $\mathbf{g}_m$ ($m<n$) is itself a linear combination of $\mathbf{d}_0,\dots,\mathbf{d}_m$, it also follows that the gradients are mutually orthogonal, $\mathbf{g}_n^T\mathbf{g}_m=0$ for $m<n$. Now the Gram-Schmidt process can be written as

$$\mathbf{d}_n=-\mathbf{g}_n-\sum_{m=0}^{n-1}\beta_{nm}\mathbf{d}_m,\qquad \beta_{nm}=-\frac{\mathbf{d}_m^T A\mathbf{g}_n}{\mathbf{d}_m^T A\mathbf{d}_m}.$$

Premultiplying $\mathbf{g}_n^T$ on both sides we get

$$\mathbf{g}_n^T\mathbf{d}_n=-\mathbf{g}_n^T\mathbf{g}_n-\sum_{m=0}^{n-1}\beta_{nm}\,\mathbf{g}_n^T\mathbf{d}_m=-\mathbf{g}_n^T\mathbf{g}_n.$$

Note that all terms in the summation are zero, as $\mathbf{g}_n^T\mathbf{d}_m=0$ for all $m<n$. Substituting into the expression for the step size, we get

$$\alpha_n=-\frac{\mathbf{g}_n^T\mathbf{d}_n}{\mathbf{d}_n^T A\mathbf{d}_n}=\frac{\mathbf{g}_n^T\mathbf{g}_n}{\mathbf{d}_n^T A\mathbf{d}_n}.$$

Next we consider

$$\mathbf{g}_{m+1}-\mathbf{g}_m=A(\mathbf{x}_{m+1}-\mathbf{x}_m)=\alpha_m A\mathbf{d}_m,$$

so that

$$A\mathbf{d}_m=\frac{1}{\alpha_m}(\mathbf{g}_{m+1}-\mathbf{g}_m).$$

Finally, the $m$th coefficient in the Gram-Schmidt process for $\mathbf{v}_n=-\mathbf{g}_n$ can be written as

$$\beta_{nm}=-\frac{\mathbf{d}_m^T A\mathbf{g}_n}{\mathbf{d}_m^T A\mathbf{d}_m}=-\frac{\mathbf{g}_n^T(\mathbf{g}_{m+1}-\mathbf{g}_m)}{\alpha_m\,\mathbf{d}_m^T A\mathbf{d}_m}.$$

Note that $\mathbf{g}_n^T(\mathbf{g}_{m+1}-\mathbf{g}_m)=0$ except when $m=n-1$, where it equals $\mathbf{g}_n^T\mathbf{g}_n$; i.e., there is only one non-zero term left in the summation of the update formula for $\mathbf{d}_n$. This is the reason why we choose $\mathbf{v}_n=-\mathbf{g}_n$. We can now drop the second subscript in $\beta_{n,n-1}$. Substituting the step size $\alpha_{n-1}=\mathbf{g}_{n-1}^T\mathbf{g}_{n-1}/(\mathbf{d}_{n-1}^T A\mathbf{d}_{n-1})$ obtained above into the expression for $\beta_n$, we get

$$\beta_n=-\frac{\mathbf{g}_n^T\mathbf{g}_n}{\mathbf{g}_{n-1}^T\mathbf{g}_{n-1}},\qquad \mathbf{d}_n=-\mathbf{g}_n-\beta_n\mathbf{d}_{n-1}=-\mathbf{g}_n+\frac{\mathbf{g}_n^T\mathbf{g}_n}{\mathbf{g}_{n-1}^T\mathbf{g}_{n-1}}\,\mathbf{d}_{n-1}.$$

In summary, here is the conjugate gradient algorithm (note $\mathbf{g}_n=A\mathbf{x}_n-\mathbf{b}$):

1. Set $n=0$ and initialize the search direction (same as gradient descent):
$$\mathbf{d}_0=-\mathbf{g}_0=\mathbf{b}-A\mathbf{x}_0.$$
2. Termination check: if the error $\|\mathbf{g}_n\|$ is smaller than a preset threshold, stop. Otherwise, find the step size:
$$\alpha_n=\frac{\mathbf{g}_n^T\mathbf{g}_n}{\mathbf{d}_n^T A\mathbf{d}_n}.$$
3. Step forward:
$$\mathbf{x}_{n+1}=\mathbf{x}_n+\alpha_n\mathbf{d}_n.$$
4. Update the gradient:
$$\mathbf{g}_{n+1}=\mathbf{g}_n+\alpha_n A\mathbf{d}_n.$$
5. Find the coefficient for the Gram-Schmidt process:
$$\beta_{n+1}=\frac{\mathbf{g}_{n+1}^T\mathbf{g}_{n+1}}{\mathbf{g}_n^T\mathbf{g}_n}.$$
6. Update the search direction:
$$\mathbf{d}_{n+1}=-\mathbf{g}_{n+1}+\beta_{n+1}\mathbf{d}_n.$$
Set $n\leftarrow n+1$ and go back to step 2.

The algorithm above assumes the objective function to be quadratic with known $A$.
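The algorithm summarized above maps directly onto code; here is a minimal sketch in plain Python (no external libraries; the 2-D test matrix is ours, chosen for illustration):

```python
import math

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=100):
    """Minimize f(x) = 0.5 x^T A x - b^T x, i.e. solve A x = b, for SPD A."""
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))
    def matvec(M, v):
        return [dot(row, v) for row in M]

    x = list(x0)
    g = [gi - bi for gi, bi in zip(matvec(A, x), b)]   # gradient g = A x - b
    d = [-gi for gi in g]                              # initial direction -g0
    for _ in range(max_iter):
        if math.sqrt(dot(g, g)) < tol:                 # termination check
            break
        Ad = matvec(A, d)
        alpha = dot(g, g) / dot(d, Ad)                 # optimal step size
        x = [xi + alpha * di for xi, di in zip(x, d)]  # step forward
        g_new = [gi + alpha * adi for gi, adi in zip(g, Ad)]
        beta = dot(g_new, g_new) / dot(g, g)           # Gram-Schmidt coefficient
        d = [-gi + beta * di for gi, di in zip(g_new, d)]
        g = g_new
    return x

# A 2-D symmetric positive definite example: CG converges in N = 2 steps.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b, [2.0, 1.0])
print(x)  # close to [1/11, 7/11]
```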
However, when $f(\mathbf{x})$ is not quadratic and $A$ is not available, the algorithm can be modified so that it does not depend on $A$, by the following two methods.

• In step 2 above, the optimal step size is calculated based on $A$. But it can also be found by line minimization based on any suitable algorithm for 1-D optimization. In step 4, the gradient $\mathbf{g}_{n+1}$ is calculated based on $A$, but it can also be computed locally at $\mathbf{x}_{n+1}$, once it is made available in step 3.
• If the Hessian matrix of the objective function can be made available, it can be used to approximate $A$, so that the optimal step size can still be obtained without the iterative 1-D line optimization. Although this method may result in more iterations, it avoids the 1-D optimization, which may be time consuming.

In the figure below, the conjugate gradient method is compared with the gradient descent method for the case of $N=2$. We see that the first search direction is the same for both methods. However, the next CG direction is A-orthogonal to the first, the same as the next error, different from the search direction in the gradient descent method. The conjugate gradient method finds the solution in $N=2$ steps, while the gradient descent method has to go through many more steps, all orthogonal to each other, before it finds the solution.

Now we relax the requirement that the function be quadratic, but we still assume that it can be approximated as a quadratic function near its minimum. In this case, the update of the search direction, or specifically the coefficient $\beta_n$, which is derived based on the assumption that $f(\mathbf{x})$ is quadratic, may no longer be valid. Alternative formulas for $\beta_n$ can be used, such as

$$\beta_n=\frac{\mathbf{g}_n^T(\mathbf{g}_n-\mathbf{g}_{n-1})}{\mathbf{g}_{n-1}^T\mathbf{g}_{n-1}}.$$

These expressions are identical to $\beta_n=\mathbf{g}_n^T\mathbf{g}_n/\mathbf{g}_{n-1}^T\mathbf{g}_{n-1}$ when $f(\mathbf{x})$ is indeed quadratic, as then the successive gradients are orthogonal. Note that it is now possible for $\beta_n<0$. If this happens to be the case, we will use $\beta_n=0$, i.e., the next search direction is simply $-\mathbf{g}_n$, the same as in the gradient descent method.
Example

To compare the conjugate gradient method and the gradient descent method, consider a very simple 2-D quadratic function. The performance of the gradient descent method depends significantly on the initial guess. For the specific initial guess of $(1.5,\,-0.75)$, the iteration gets into a zigzag pattern and the convergence is very slow, as shown in the table below:

     n    x_n                      f(x_n)
     0    1.500000, -0.750000      2.812500e+00
     1    0.250000, -0.750000      4.687500e-01
     2    0.250000, -0.125000      7.812500e-02
     3    0.041667, -0.125000      1.302083e-02
     4    0.041667, -0.020833      2.170139e-03
     5    0.006944, -0.020833      3.616898e-04
     6    0.006944, -0.003472      6.028164e-05
     7    0.001157, -0.003472      1.004694e-05
     8    0.001157, -0.000579      1.674490e-06
     9    0.000193, -0.000579      2.790816e-07
    10    0.000193, -0.000096      4.651361e-08
    11    0.000032, -0.000096      7.752268e-09
    12    0.000032, -0.000016      1.292045e-09
    13    0.000005, -0.000016      2.153408e-10

However, as expected, the conjugate gradient method takes exactly $N=2$ steps from any initial guess to reach the solution, as shown below:

     n    x_n                      f(x_n)
     0    1.500000, -0.750000      2.812500e+00
     1    0.250000, -0.750000      4.687500e-01
     2    0.000000, -0.000000      1.155558e-33

For an example with $N=3$ from the initial guess $(1,\,2,\,3)$, it takes the gradient descent method 36 iterations to converge. From the same initial guess, it takes the conjugate gradient method only $N=3$ iterations to converge to the solution:

     n    x_n                                 f(x_n)
     0     1.000000,  2.000000,  3.000000     4.500000e+01
     1    -0.734716, -0.106441,  1.265284     2.809225e+00
     2     0.123437, -0.209498,  0.136074     3.584736e-02
     3    -0.000000,  0.000000,  0.000000     3.949119e-31

Conjugate gradient method used for solving linear equation systems: As discussed before, if $\mathbf{x}^*$ is the solution that minimizes the quadratic function $f(\mathbf{x})=\frac{1}{2}\mathbf{x}^T A\mathbf{x}-\mathbf{b}^T\mathbf{x}+c$, with $A$ being symmetric and positive definite, it also satisfies $A\mathbf{x}^*=\mathbf{b}$. In other words, the optimization problem is equivalent to the problem of solving the linear system $A\mathbf{x}=\mathbf{b}$; both can be solved by the conjugate gradient method. Now consider solving the linear system $A\mathbf{x}=\mathbf{b}$ with $A$ symmetric positive definite. Let $\{\mathbf{d}_0,\dots,\mathbf{d}_{N-1}\}$ be a set of A-orthogonal vectors satisfying $\mathbf{d}_i^T A\mathbf{d}_j=0$ for $i\ne j$.
The solution of the equation can be represented by these vectors as

$$\mathbf{x}=\sum_{i=0}^{N-1}\delta_i\mathbf{d}_i.$$

Now we have

$$A\mathbf{x}=\sum_{i=0}^{N-1}\delta_i\,A\mathbf{d}_i=\mathbf{b}.$$

Premultiplying $\mathbf{d}_k^T$ on both sides we get

$$\mathbf{d}_k^T A\mathbf{x}=\delta_k\,\mathbf{d}_k^T A\mathbf{d}_k=\mathbf{d}_k^T\mathbf{b}.$$

Solving for $\delta_k$ we get

$$\delta_k=\frac{\mathbf{d}_k^T\mathbf{b}}{\mathbf{d}_k^T A\mathbf{d}_k}.$$

Substituting this back into the expression for $\mathbf{x}$ we get the solution of the equation:

$$\mathbf{x}=\sum_{i=0}^{N-1}\frac{\mathbf{d}_i^T\mathbf{b}}{\mathbf{d}_i^T A\mathbf{d}_i}\,\mathbf{d}_i.$$

Also note that as $\mathbf{b}=A\mathbf{x}$, the $i$th term of the summation above is simply the A-projection of $\mathbf{x}$ onto the $i$th direction $\mathbf{d}_i$:

$$\frac{\mathbf{d}_i^T A\mathbf{x}}{\mathbf{d}_i^T A\mathbf{d}_i}\,\mathbf{d}_i.$$

One application of the conjugate gradient method is to solve the normal equation to find the least-squares solution of an over-constrained equation system $A\mathbf{x}=\mathbf{b}$, where the coefficient matrix $A$ is $m$ by $n$ ($m>n$) of rank $n$. As discussed previously, the normal equation of this system is

$$A^T A\mathbf{x}=A^T\mathbf{b}.$$

Here $A^T A$ is an $n$ by $n$ symmetric, positive definite matrix. This normal equation can be solved by the conjugate gradient method.

Ruye Wang 2015-02-12
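As a concrete instance of the normal equations (the data here are invented for illustration), consider fitting a line $y=c_0+c_1 t$ to four points; the 4-by-2 design matrix gives a 2-by-2 symmetric positive definite system $A^TA\mathbf{x}=A^T\mathbf{b}$, which is solved directly below (CG would yield the same answer):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def transpose(M):
    return [list(col) for col in zip(*M)]

def matvec(M, v):
    return [dot(row, v) for row in M]

def matmat(M, N):
    Nt = transpose(N)
    return [[dot(row, col) for col in Nt] for row in M]

ts = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 1.9, 3.2, 3.9]
A = [[1.0, t] for t in ts]     # 4x2 design matrix, full column rank
At = transpose(A)
AtA = matmat(At, A)            # 2x2 symmetric positive definite
Atb = matvec(At, ys)

# Solve the 2x2 normal equations by Cramer's rule.
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
c0 = (Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det
c1 = (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / det
print(c0, c1)  # intercept and slope of the least-squares line
```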
# Quark Matter 2019 - the XXVIIIth International Conference on Ultra-relativistic Nucleus-Nucleus Collisions

3-9 November 2019
Wanda Reign Wuhan Hotel
Asia/Shanghai timezone

## Ultracentral Collisions of Small and Deformed Systems at RHIC

5 Nov 2019, 12:00, 20m
Ball Room 2 (Wanda Reign Wuhan Hotel)
Oral Presentation
Small systems

### Speaker

Douglas Wertepny (Ben Gurion University of the Negev)

### Description

We study a range of collision systems involving deformed ions and compare the elliptic and triangular flow harmonics produced in a hydrodynamics scenario versus a color glass condensate (CGC) scenario. For the hydrodynamics scenario, we generate initial conditions using TRENTO and work within a linear response approximation to obtain the final flow harmonics. For the CGC scenario, we use the explicit calculation of two-gluon correlations taken in the high-$p_T$ "(semi)dilute-(semi)dilute" regime to express the flow harmonics in terms of the density profile of the collision. We consider ultracentral collisions of deformed ions as a testbed for these comparisons because the difference between tip-on-tip and side-on-side collisions modifies the multiplicity dependence in both scenarios, even at zero impact parameter. We find significant qualitative differences in the multiplicity dependence obtained in the initial conditions+hydrodynamics scenario and the CGC scenario, allowing these collisions of deformed ions to be used as a powerful discriminator between models. We also find that sub-nucleonic fluctuations have a systematic effect on the elliptic and triangular flow harmonics which are most discriminating in $0-1\%$ ultracentral symmetric collisions of small deformed ions and in $0-10\%$ $\mathrm{d}+{}^{197}\mathrm{Au}$ collisions.
The collision systems we consider are ${}^{238}\mathrm{U}+{}^{238}\mathrm{U}$, $\mathrm{d}+{}^{197}\mathrm{Au}$, ${}^{9}\mathrm{Be}+{}^{197}\mathrm{Au}$, ${}^{9}\mathrm{Be}+{}^{9}\mathrm{Be}$, ${}^{3}\mathrm{He}+{}^{3}\mathrm{He}$, and ${}^{3}\mathrm{He}+{}^{197}\mathrm{Au}$.

### Primary authors

Douglas Wertepny (Ben Gurion University of the Negev), Jacquelyn Noronha-Hostler (Rutgers University), Noah Paladino (Rutgers State Univ. of New Jersey (US)), Matthew Sievert (Los Alamos National Laboratory), Skandaprasad Rao (Rutgers University)
# Lifting matrices mod 2 to integers.

The following question was motivated by my research. Consider an $n\times n$ matrix whose elements are $0$'s or $1$'s, such that the determinant is odd. The question is: is it possible to assign signs to the matrix elements such that the determinant of the matrix will be equal to $1$?

I do not know an answer even to a weaker question: is it possible to replace some of the $1$'s in the matrix with odd integers so that the determinant will be equal to $1$?

Remark: it is known that the natural reduction mod $N$ map $SL_n(\mathbb Z) \to SL_n(\mathbb Z/ N\mathbb Z)$ is surjective for any $n,N$.

- I would guess that this (your first question) is unlikely to be true. How hard have you tried to search for a counterexample, either by pure brute-force search or by a targeted search of 0-1 matrices with large odd determinant? C.f. mathworld.wolfram.com/HadamardsMaximumDeterminantProblem.html – Pete L. Clark Mar 15 '10 at 0:07
- Also the title of your question was confusing to me. Your weaker question has something to do with lifting matrices mod 2 (except that you do not want to change the zero entries by even integers), but your stronger question has nothing to do with it, so far as I can see. – Pete L. Clark Mar 15 '10 at 0:12
- @Pete, I tried to construct a counterexample by hand (I also suppose that the answer is negative). Could you please explain why one should search for a counterexample among matrices with large odd determinant? – Petya Mar 15 '10 at 0:14
- I've just tried a brute-force search for the first version, generating random $01$-matrices. No luck for smallish values of $n$... – Mariano Suárez-Alvarez Mar 15 '10 at 0:30
- Mikola, [[0,1],[1,0]] has determinant -1 but changing the sign of a one yields determinant 1. – Jonas Meyer Mar 15 '10 at 1:18

I believe that the weaker question can be proved by induction on $n$. The case $n=1$ is clear. Now assume the result for $n-1$ and expand the $n\times n$ determinant by the first row.
At least one of the terms in the expansion must be odd. Thus the original matrix $A$ has an $(n-1)\times (n-1)$ submatrix $B$, say consisting of the entries not in row $1$ or column $j$, with odd determinant, such that $A_{1j}=1$. By induction we can change some of the 1's in $B$ to odd integers so that the new matrix $B'$ satisfies det$(B')=1$. Let $A'$ be $A$ after replacing $B$ with $B'$. Now det$(A')= A_{1j} +$ terms not involving $A_{1j}$, say det$(A')=A_{1j}+c$. Since $A_{1j}=1$ and det$(A')$ is odd, it follows that $c$ is even. Hence we can replace $A_{1j}$ with the odd integer $1-c$ so that the resulting matrix has determinant 1.

- Right! Thank you, Richard. – Petya Mar 15 '10 at 1:50

I want to address the weaker question in a more general setting: Given any matrix over $\mathbb{Z}$ with determinant $1$ mod $m$, is it possible to add multiples of $m$ to each entry to get a matrix with determinant one? Call a matrix transformable iff this is possible.

Given any matrix $A$ over $\mathbb{Z}$, consider the image of the induced map $\mathbb{Z}^n\rightarrow \mathbb{Z}^n$. It is a submodule of $\mathbb{Z}^n$. For any such submodule of rank $k$ it is always possible to find a basis $b_1,\ldots,b_n$ of $\mathbb{Z}^n$ and numbers $r_1,\ldots,r_k$, such that $r_i|r_{i+1}$ for $i=1,\ldots,k-1$ and $r_1b_1,\ldots,r_kb_k$ is a basis of the given submodule. The proof of this is roughly the same as the proof of the structure theorem for finitely generated abelian groups. This tells us that we can write our matrix $A$ in the form $A=BDC$, where $B$ and $C$ are invertible and $D$ is a diagonal matrix with the entries $r_1,\ldots ,r_n$ from above. As the determinant is not zero, the image must have full rank and hence $k=n$. It is easy to see that left (right) multiplication by invertible matrices doesn't change transformability. So we may assume that $A$ has the given form.
We will reduce inductively the number of non-one diagonal entries of $A$ without changing the determinant modulo $m$. This number is the length of $\mathbb{Z}^n/\mathrm{Im}(A)=\bigoplus_{i=1}^n\mathbb{Z}/r_i$. Suppose there is a non-one diagonal entry $r_i$. Then there has to be a second one $r_{i'}$, as the product of all diagonal entries is $1$ mod $m$; otherwise this is the last non-one diagonal entry, it has to be $1$ mod $m$, and we can transform the matrix into the identity matrix. Now add $m$ to $r_i$ and it will become coprime to the other non-one entry $r_{i'}$, as $r_i|r_{i'}$ and $\gcd(m,r_i)=1=\gcd(m,r_{i'})$. Hence we get $\mathbb{Z}/(r_i+m)\oplus\mathbb{Z}/r_{i'}\cong\mathbb{Z}/((r_i+m)r_{i'})$. Call the resulting matrix $A'$ and observe that the length of $\mathbb{Z}^n/\mathrm{Im}(A')$ is one smaller than the length of $\mathbb{Z}^n/\mathrm{Im}(A)$. Then we can again find the normal form for $A'$ and repeat this process until we end up with the identity matrix. Hence every matrix with determinant $1$ mod $m$ is transformable.

- Thank you. See my remark to the original question – it seems you are proving it. I note that Richard's proof also works for it. – Petya Mar 15 '10 at 16:24
- Ahh, I see. I should read the questions more carefully... – HenrikRüping Mar 15 '10 at 16:29
- This is a bit weaker than Petya's original question, since there the entries in the lifted matrix are constrained to be 0, 1 or -1. For the proof of the surjectivity of $SL_n(\mathbf{Z})\to SL_n(\mathbf{Z}/N\mathbf{Z})$ my favoured argument goes roughly as follows. Show that $SL_n(\mathbf{Z}/N\mathbf{Z})$ is generated by elementary matrices of the form $I + E_{ij}$, where $E_{ij}$ is a matrix unit with $i\ne j$. As these matrices all lie in the image, the map is surjective. I haven't seen this argument in the textbooks :-) – Robin Chapman Mar 15 '10 at 18:12
- Nice argument, Robin! – Petya Mar 15 '10 at 18:27
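The first (stronger) question is easy to probe by brute force for small $n$, along the lines of the search Mariano describes (a sketch we added; function names are ours):

```python
from itertools import product

def det(M):
    # Laplace expansion along the first row (fine for small matrices).
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def signable_to_det_one(M):
    """Can the 1-entries of a 0/1 matrix be given signs so det becomes 1?"""
    ones = [(i, j) for i, row in enumerate(M) for j, v in enumerate(row) if v]
    for signs in product([1, -1], repeat=len(ones)):
        S = [row[:] for row in M]
        for s, (i, j) in zip(signs, ones):
            S[i][j] = s
        if det(S) == 1:
            return True
    return False

# Jonas Meyer's example: [[0,1],[1,0]] has determinant -1,
# but flipping one sign gives determinant 1.
assert det([[0, 1], [1, 0]]) == -1
assert signable_to_det_one([[0, 1], [1, 0]])
```

Running `signable_to_det_one` over random 0/1 matrices with odd determinant is exactly the counterexample search discussed in the comments.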
# First human clone 8 weeks old.

Discussion in 'Biology & Genetics' started by GRO$$, Apr 7, 2002.

1. ### GRO$$ Registered Senior Member
Messages: 304
http://www.cnn.com/2002/TECH/science/04/06/human.clone/index.html
http://investor.cnet.com/investor/news/newsitem/0-9900-1028-9626636-0.html
http://www.gulfnews.com/Articles/news.asp?ArticleID=46275
This guy has been described as crazy, stupid, careless, irresponsible. Some believe he is the only one with the guts to do what is right in advancing technology. Is he full of himself and looking only for popularity over benefiting mankind, or is he the one being burned for claiming the sun stands still?

3. ### Adam §Þ@ç€ MØnk€¥ Registered Senior Member
Messages: 7,415
I think it's beautiful. Fantastic. I hope all goes well. If not, I hope they learn enough to make future attempts more successful.

5. ### Weetbix Kid Registered Member
Messages: 2
I think it is horrible, reckless and irresponsible. We don't know that much about reproductive biology, let alone genetics. They can't even clone another animal completely successfully. They have tried to clone primates, but have had all sorts of trouble getting them to develop; if they do, they are deformed. This is far too early. The scientist doing it is doing it for publicity and personal gratification purposes, that is all.

7. ### Azrael Angel of Light Registered Senior Member
Messages: 134
I think this guy is looking for his 15 minutes of fame. At this point in our evolution, both mental and physical, cloning is a really bad idea. We do not understand our genetic code well enough to go tinkering and messing with cloning. If it is true it will probably end in complete failure.

8. ### goofyfish Analog By Birth, Digital By Design Valued Senior Member
Messages: 5,331
Apparently, this is an unsubstantiated report, and should probably be treated with extreme skepticism. There are 5000 couples involved, and no previous word has emerged; there is no independent corroboration.
I agree with the "15 minutes of fame" theory. We will find that there is no live birth, the parents will refuse an autopsy that could prove a cloning, etc. Additional information on Dr. Severino Antinori. Peace.

9. ### Eflex tha Vybe Scientist Registered Senior Member
Messages: 190
A successful impregnation does not mean the mother will carry the child to term. I think this crude attempt at cloning will be a failure.

10. ### GRO$$ Registered Senior Member
Messages: 304
If it is, I'm sure he will have 15 minutes of fame and be soon forgotten. Will this lead to more experiments on human cloning, though? Will he inspire others by his failure? But what if he doesn't fail, and she gives birth to a healthy child? What will happen then? How will this child live?

11. ### TruthSeeker Fancy Virtual Reality Monkey Valued Senior Member
Messages: 15,162
Weetbix Kid, I agree with you! Love, Nelson

12. ### tallest Registered Member
Messages: 20
I fully believe that there are cloned human specimens on the planet as I type. We must all realise that this Pandora's box is open and that the race cannot be expected to unlearn the methods and lessons that led to other successful clonings of mammals. With this in mind, and given the undoubted benefits that there would be for human medicine from this work, should we not accept the inevitable and put aside our reservations (particularly where these worries are based on anachronistic religious ideals) to allow this work to be kept in the public eye and monitored?

13. ### Yang´s_Matrix Registered Senior Member
Messages: 69
I have mixed feelings about this. On the one hand this could be very beneficial for mankind... and yet at the same time I think that we could take this a bit more slowly. I mean, what's the rush? But as Doctor Zavos said, those who waited and said that it's not the time yet did not land on the moon first. So is that it? Selfish ambition to get into the history books? If...
when cloning becomes just like artificial insemination, I wonder how it will change our society. I'm still kind of sitting on the fence; I cannot quite decide whether I should resist or support this... huge leap or terrible mistake, whatever it will become.

14. ### tallest Registered Member
Messages: 20
As I said, as long as these areas of research are vilified by people (who in the main do not comprehend the bigger picture), then those working in this field will continue to operate underground, which does none of us any good.
Note to Yang's_Matrix: The time for sitting on the fence is over. As your own signature points out, "one cannot stay in the cradle forever".

Messages: 4,127
Humans don't evolve mentally. That's pretty obvious. Anyway, I'd just like to say that yes, there are risks involved. The clone might be deformed. The clone might suffer. But how is this any different from "natural" conception? There's always risk in pregnancy. The child might end up deformed or handicapped or retarded. And, of course, the child might suffer. It's merely a matter of probabilities. And we don't know the probability of failure with cloned children until we have some data to go on. How many people die as a consequence of genetic disease every day? Do we, knowing that we'll be able to reduce that number at some point in the future, institute a ban on all natural conception until that time? Maybe we should?

16. ### Fukushi -meta consciousness- Registered Senior Member
Messages: 1,231
Always 'tröckener kecks'. Don't you guys know that cloning is already long established? They just need this to 'tap' into the ethics consciousness of the people... to free the way, and to assess once more the motion that the public has... It's all being taken care of... why are you guys so surprised? My advice to you all: don't run behind the facts...

17.
### Mostly Harmless Thrower of Coconuts Registered Senior Member
Messages: 50
Frankly, it was bound to happen. There will be mistakes, there will be deformities; they will be explained as errors, the clones side-lined as sub-human. Even the mothers are reported to fear their clone children will be treated as "monsters". The fact is, it has been thought of, it shall be done; man is a pioneer of life and plunges forward regardless. If he did have regard, then nuclear bombs would not have been created after the mass destruction of Hiroshima. Today, the perpetrators of that inhumane crime have the largest numbers of nukes around, and still try to convince the world that THEY should not have any, that another country with a maniacal leader would misuse it, but not they, they who have already used it (of course for a REALLY good reason). Same thing here: the honest doctor says it's for the advancement of medicine and human development; I say it too shall be used for destructive purposes (after all, if possible, create armies of the critters, they are sub-human after all). Religiously, I say this: if there is a God (and I personally think so, in some form or another), and he/she is omnipotent, and gave us a brain, and allowed us to use it, then he/she would also know that we would attempt, at some point, cloning. What we do with it... is our choice... a choice more often than not used for destructive ends. Remember, it is our ability to choose that differs us from other animals.
Review Articles # Some Observations on the Origins of Newton’s Law of Cooling and Its Influences on Thermofluid Science K. C. Cheng University of Alberta, 4327-115 Street, Edmonton, AB, T6J 1P5, Canada Because of the nature of this paper, quotations will be used often to give proper credit to the authors of articles and books. Appl. Mech. Rev 62(6), 060803 (Aug 05, 2009) (17 pages) doi:10.1115/1.3090832 History: Received December 23, 2008; Revised January 21, 2009; Published August 05, 2009 ## Abstract An exact mathematical analogy exists between Newton’s law of cooling and Proposition II, Book II (The motion of bodies in resisting mediums) of the Principia. Several approaches to the proof of Proposition II are presented, based on the expositions available in the historical literature. The relationships among Napier’s logarithms (1614), Euclid’s geometric progression (300 B.C.), and Newton’s law of cooling (1701) are explored. Newton’s legacy in the thermofluid sciences is discussed in the light of current knowledge. His characteristic parameter for the temperature fall ratio, $ΔT/(T−T∞)$, is noted. The relationships and connections among Newton’s cooling law (1701), Fourier’s heat conduction theory (1822), and Carnot’s theorem (1824) based on temperature difference $(ΔT)$ as a driving force are noted. After tracing the historical origins of Newton’s law of cooling, this article discusses some aspects of the historical development of the heat transfer subject from Newton to the time of Nusselt and Prandtl. Newton’s legacy in heat transfer remains in the form of the concept of heat transfer coefficient for conduction, convection, and radiation problems. One may conclude that Newton was apparently aware of the analogy of his cooling law to the low Reynolds number motion of a body in a viscous fluid otherwise at rest, i.e., its drag is approximately proportional to its velocity.
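The characteristic parameter $ΔT/(T−T∞)$ highlighted in the abstract can be illustrated with a short numerical sketch (all values illustrative, not taken from the paper): for exponential cooling, the temperature-fall ratio over equal time steps is the same at every step.

```python
import math

def newton_cooling(T0, T_inf, k, t):
    """Temperature at time t under Newton's law dT/dt = -k (T - T_inf)."""
    return T_inf + (T0 - T_inf) * math.exp(-k * t)

# Over equal time steps dt, the temperature-fall ratio dT / (T - T_inf)
# is constant: it equals 1 - exp(-k*dt) at every step.
T0, T_inf, k, dt = 90.0, 20.0, 0.1, 1.0
ratios = []
for i in range(5):
    T_a = newton_cooling(T0, T_inf, k, i * dt)
    T_b = newton_cooling(T0, T_inf, k, (i + 1) * dt)
    ratios.append((T_a - T_b) / (T_a - T_inf))

print(ratios)  # every entry equals 1 - exp(-0.1) ~ 0.0952
```

This constancy under equal subdivisions is exactly the geometric-progression structure the article connects to Napier's logarithms.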
# how to set multiple materials using array in the submesh of an object

I'm trying to insert multiple materials into the object's submesh using the array, but I'm not getting it to work:

    public GameObject[] obj;
    //material mat;

    public void Color(Material mat)
    {
        for (int x = 0; x < obj.Count; x++)
            obj[x].GetComponent<MeshRenderer>().sharedMaterials[1] = mat;

        Material[] mats = GetComponent<Renderer>().materials;
        mats[1] = mat;
        GetComponent<Renderer>().materials = mats;
        {
            print("New Material");
        }
    }

• Sorry, can you be more specific about what you are trying to do? – Shuvro Sarkar Feb 7 at 10:10
• Your second attempt here — capturing a mats array, setting its entry at a particular index, then assigning the modified array back to the renderer — this looks correct. Is it not performing the way you expect? If not, can you explain in detail what you want it to do, and what behaviour you're observing instead? – DMGregory Feb 7 at 13:29
• from the line Material[] mats, it's only 1 material for the respective element, so I'm trying to insert an array of materials, but I'm not getting it to work – Nitecki Feb 7 at 18:49
• Please try to explain that in more detail. The language barrier here is making it difficult to understand what you mean. Maybe try editing your question to give an example of what you want the materials array to contain before your code runs, then show an example of what you want the array to contain after your code has run. Then we can see clearly what change you're trying to make. – DMGregory Feb 8 at 3:24
• I'm sorry, see if you can understand now. – Nitecki Feb 8 at 4:15

    material mat;
    GetComponent<MeshRenderer>().sharedMaterials[1] = mat;

    Material[] mats = GetComponent<Renderer>().materials;
    mats[1] = mat;
    GetComponent<Renderer>().materials = mats;

First error: the class "Material" should be capitalized (your code shouldn't compile as written).

Second error: GetComponent<MeshRenderer>().sharedMaterials[1] = mat; doesn't do anything. sharedMaterials returns a copy of the array, so writing into that copy never reaches the renderer; you have to assign a whole array back, as the last three lines do.
The rest of your code looks correct.

1. Did you assign the "mat" in the inspector?
2. Does your mesh even have more than 1 submesh?
3. Do you even want to assign more than 1 material?
4. What's the error? Doesn't it compile? Null reference? Nothing happens?
5. Does your mesh renderer get the materials, or does it just look wrong in the 3D view?

    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

    public class MaterialArraySetter : MonoBehaviour {
        public Material Material1;
        void Start () {
            Material[] materialsArray = GetComponent<Renderer>().materials;
            materialsArray[1] = Material1;
            GetComponent<Renderer>().materials = materialsArray;
        }
    }

EDIT2: You seem to have compiler errors.

• it's because I'm trying to use an array of GameObjects (or of Materials), so each object receives the color. The script works like this, but only sets one material into its slot. I'm trying to insert an array of materials, but I cannot – Nitecki Feb 7 at 18:42
• i edited my answer – OC_RaizW Feb 8 at 0:14
• Do you really want to assign more than one material? is exactly that, I'm trying to set an array of materials to assign several materials in the sub-mesh, but I'm not getting it to work... here are the errors i.imgur.com/KIVAw8Y.png – Nitecki Feb 8 at 3:54
• i edited my answer - again – OC_RaizW Feb 8 at 11:02
• My God, I'm very blind, I swear to you that I saw a Length there, haha. Thank you very much. – Nitecki Feb 8 at 15:24
# "Continuous" binomial distribution • June 8th 2009, 11:26 AM thefer "Continuous" binomial distribution Hi. I want to use some kind of "continuous" binomial distribution. I know the binomial distribution is defined for natural numbers, but I want to extend it to real numbers. I also know that the gamma function is an extension of the factorial to real numbers. My question is: Is $f(x) = \frac{\Gamma(n+1)}{\Gamma(x+1)\,\Gamma(n+1-x)} p^x (1-p)^{n-x}$ a probability density function? More precisely, is the integral from 0 to n equal to 1? Playing with Excel it doesn't seem to be 1, but I'm not sure. Do you know what function I should use? Thanks. • June 8th 2009, 10:08 PM matheagle You may want to look at a Beta random variable. http://en.wikipedia.org/wiki/Beta_distribution If you want to extend its support from (0,1) to, say, (0,n), you can transform it by letting W=nX. • June 9th 2009, 11:15 AM thefer Thanks! That's what I was looking for! • June 9th 2009, 04:02 PM matheagle The Beta is a generalization of a U(0,1) rv. If you let $\alpha=\beta=1$ you get a U(0,1). Do you know how to transform a rv so the support is (0,n) instead?
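The original question can also be settled numerically with nothing beyond the standard-library gamma function; a sketch (n = 5 and p = 0.5 chosen arbitrarily), using composite Simpson integration:

```python
import math

def f(x, n, p):
    """The gamma-extended binomial 'density' from the question."""
    return (math.gamma(n + 1) / (math.gamma(x + 1) * math.gamma(n + 1 - x))
            * p ** x * (1 - p) ** (n - x))

def simpson(g, a, b, m=2000):
    """Composite Simpson's rule with m (even) subintervals."""
    h = (b - a) / m
    s = g(a) + g(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

n, p = 5, 0.5
total = simpson(lambda x: f(x, n, p), 0.0, float(n))
print(total)  # noticeably below 1, so f is not a density as it stands
```

For these values the integral comes out near 0.98 rather than 1, consistent with the Excel observation; rescaling a Beta variable as suggested above gives a genuine density on (0, n).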
# nLab geometry of physics -- coordinate systems Contents This entry is one chapter of the entry geometry of physics. previous chapter: categories and toposes, next chapter: smooth sets As discussed in the chapter categories and toposes, every kind of geometry is modeled on a collection of archetypical basic spaces and geometric homomorphisms between them. In differential geometry the archetypical spaces are the abstract standard Cartesian coordinate systems, denoted $\mathbb{R}^n$ (in every dimension $n \in \mathbb{N}$).
The geometric homomorphisms between them are the smooth functions $\mathbb{R}^{n_1} \to \mathbb{R}^{n_2}$, hence smooth (and possibly degenerate) coordinate transformations. Here we introduce these basic concepts, organizing them in the category CartSp of Cartesian spaces (Prop. below). We highlight three classical theorems about smooth functions in Prop. below, which look innocent but play a decisive role in setting up synthetic differential supergeometry based on the concept of abstract smooth coordinate systems. At this point these are not yet coordinate systems on some other space. But by applying the general machinery of categories and toposes to them, a concept of generalized spaces modeled on these abstract coordinate systems is induced. These are the smooth sets discussed in the next chapter, geometry of physics – smooth sets. # Contents ## Abstract coordinate systems In this Model Layer we discuss the concrete particulars of coordinate systems: the continuum real line $\mathbb{R}$, the Cartesian spaces $\mathbb{R}^n$ formed from it, and the smooth functions between these. ### The continuum real line The fundamental premise of differential geometry as a model of geometry in physics is the following. Premise. The abstract worldline of any particle is modeled by the continuum real line $\mathbb{R}$. This comes down to the following sequence of premises. 1. There is a linear ordering of the points on a worldline: in particular, if we pick points at some intervals on the worldline, we may label these in an order-preserving way by integers $\mathbb{Z}$. 2. These intervals may each be subdivided into $n$ smaller intervals, for each natural number $n$. Hence we may label points on the worldline in an order-preserving way by the rational numbers $\mathbb{Q}$. 3. This labeling is dense: every point on the worldline is the supremum of an inhabited bounded subset of such labels. This means that a worldline is the real line, the continuum of real numbers $\mathbb{R}$.
The adjective “real” in “real number” is a historical shadow of the old idea that real numbers are related to observed reality, hence to physics, in this way. The experimental success of this assumption shows that it is valid at least to very good approximation. Speculations are common that in a fully exact theory of quantum gravity, currently unavailable, this assumption needs to be refined. For instance in p-adic physics one explores the hypothesis that the relevant completion of the rational numbers as above is not the reals, but the p-adic numbers $\mathbb{Q}_p$ for some prime number $p \in \mathbb{N}$. Or, for example, in the study of QFT on non-commutative spacetime one explores the idea that at small scales the smooth continuum is to be replaced by an object in noncommutative geometry. Combining these two ideas leads to the notion of non-commutative analytic space as a potential model for space in physics. And so forth. For the time being all this remains speculation, and differential geometry based on the continuum real line remains the context of all fundamental model building in physics related to observed phenomenology. Often it is argued that these speculations are necessitated by the very nature of quantum theory applied to gravity. But, at least so far, such statements are not actually supported by the standard theory of quantization: we discuss below in Geometric quantization how not just classical physics but also quantum theory, in the best modern version available, is entirely rooted in differential geometry based on the continuum real line. This is the motivation for studying models of physics in geometry modeled on the continuum real line. On the other hand, in all of what follows our discussion is set up so as to be maximally independent of this specific choice (this is what topos theory accomplishes for us, discussed below in Smooth spaces – Semantic Layer).
If we do desire to consider another choice of archetypical spaces for the geometry of physics we can simply “change the site”, as discussed below, and many of the constructions, propositions and theorems in the following will continue to hold. This is notably what we do below in Supergeometric coordinate systems when we generalize the present discussion to a flavor of differential geometry that also formalizes the notion of fermion particles: “differential supergeometry”. ### Cartesian spaces and smooth functions ###### Definition A function of sets $f : \mathbb{R} \to \mathbb{R}$ is called a smooth function if, coinductively: 1. the derivative $\frac{d f}{d x} : \mathbb{R} \to \mathbb{R}$ exists; 2. and is itself a smooth function. We write $C^\infty(\mathbb{R}) \in Set$ for the set of all smooth functions on $\mathbb{R}$. ###### Remark The superscript “${}^\infty$” in “$C^\infty(\mathbb{R})$” refers to the order of the derivatives that exist for smooth functions. More generally for $k \in \mathbb{N}$ one writes $C^k(\mathbb{R})$ for the set of $k$-fold differentiable functions on $\mathbb{R}$. These will however not play much of a role for our discussion here. ###### Definition For $n \in \mathbb{N}$, the Cartesian space $\mathbb{R}^n$ is the set $\mathbb{R}^n = \{ (x^1 , \cdots, x^{n}) | x^i \in \mathbb{R} \}$ of $n$-tuples of real numbers. For $1 \leq k \leq n$ write $i^k : \mathbb{R} \to \mathbb{R}^n$ for the function such that $i^k(x) = (0, \cdots, 0,x,0,\cdots,0)$ is the tuple whose $k$th entry is $x$ and all of whose other entries are $0 \in \mathbb{R}$; and write $p^k : \mathbb{R}^n \to \mathbb{R}$ for the function such that $p^k(x^1, \cdots, x^n) = x^k$. A homomorphism of Cartesian spaces is a smooth function $f : \mathbb{R}^{n_1} \to \mathbb{R}^{n_2} \,,$ hence a function $f : \mathbb{R}^{n_1} \to \mathbb{R}^{n_2}$ such that all partial derivatives exist and are continuous (…).
###### Example Regarding $\mathbb{R}^n$ as an $\mathbb{R}$-vector space, every linear function $\mathbb{R}^{n_1} \to \mathbb{R}^{n_2}$ is in particular a smooth function. ###### Remark But a homomorphism of Cartesian spaces in def. is not required to be a linear map. We do not regard the Cartesian spaces here as vector spaces. ###### Definition A smooth function $f : \mathbb{R}^{n_1} \to \mathbb{R}^{n_2}$ is called a diffeomorphism if there exists another smooth function $g \colon \mathbb{R}^{n_2} \to \mathbb{R}^{n_1}$ such that the underlying functions of sets are inverse to each other $f \circ g = id$ and $g \circ f = id \,.$ ###### Proposition There exists a diffeomorphism $\mathbb{R}^{n_1} \to \mathbb{R}^{n_2}$ precisely if $n_1 = n_2$. ###### Definition We will also say equivalently that 1. a Cartesian space $\mathbb{R}^n$ is an abstract coordinate system; 2. a smooth function $\mathbb{R}^{n_1} \to \mathbb{R}^{n_2}$ is an abstract coordinate transformation; 3. the function $p^k : \mathbb{R}^{n} \to \mathbb{R}$ is the $k$th coordinate of the coordinate system $\mathbb{R}^n$. We will also write this function as $x^k : \mathbb{R}^{n} \to \mathbb{R}$. 4. for $f : \mathbb{R}^{n_1} \to \mathbb{R}^{n_2}$ a smooth function, and $1 \leq k \leq n_2$ we write 1. $f^k \coloneqq p^k\circ f$ 2. $(f^1, \cdots, f^{n_2}) \coloneqq f$.
###### Remark It follows with this notation that $id_{\mathbb{R}^n} = (x^1, \cdots, x^n) : \mathbb{R}^n \to \mathbb{R}^n \,.$ Hence an abstract coordinate transformation $f : \mathbb{R}^{n_1} \to \mathbb{R}^{n_2}$ may equivalently be written as the tuple $\left( f^1 \left( x^1, \cdots, x^{n_1} \right) , \cdots, f^{n_2}\left( x^1, \cdots, x^{n_1} \right) \right) \,.$ ### The magic properties of smooth functions Below we encounter generalizations of ordinary differential geometry that include explicit “infinitesimals” in the guise of infinitesimally thickened points, as well as “super-graded infinitesimals”, in the guise of superpoints (necessary for the description of fermion fields such as the Dirac field). As we discuss below, these structures are naturally incorporated into differential geometry in just the same way as Grothendieck introduced them into algebraic geometry (in the guise of “formal schemes”), namely in terms of formally dual rings of functions with nilpotent ideals. That this also works well for differential geometry rests on the following three basic but important properties, which say that smooth functions behave “more algebraically” than their definition might superficially suggest: ###### Proposition (the three magic algebraic properties of differential geometry) 1. embedding of Cartesian spaces into formal duals of R-algebras For $X$ and $Y$ two Cartesian spaces, the smooth functions $f \colon X \longrightarrow Y$ between them (def. ) are in natural bijection with their induced algebra homomorphisms $C^\infty(Y) \overset{f^\ast}{\longrightarrow} C^\infty(X)$ (example ), so that one may equivalently handle Cartesian spaces entirely via their $\mathbb{R}$-algebras of smooth functions.
Stated more abstractly, this means equivalently that the functor $C^\infty(-)$ that sends a smooth manifold $X$ to its $\mathbb{R}$-algebra $C^\infty(X)$ of smooth functions (example ) is a fully faithful functor: $C^\infty(-) \;\colon\; SmthMfd \overset{\phantom{AAAA}}{\hookrightarrow} \mathbb{R} Alg^{op} \,.$ 2. embedding of smooth vector bundles into formal duals of R-algebra modules For $E_1 \overset{vb_1}{\to} X$ and $E_2 \overset{vb_2}{\to} X$ two vector bundles (def. ) there is then a natural bijection between vector bundle homomorphisms $f \colon E_1 \to E_2$ and the homomorphisms of modules $f_\ast \;\colon\; \Gamma_X(E_1) \to \Gamma_X(E_2)$ that these induce between the spaces of sections (example ). More abstractly this means that the functor $\Gamma_X(-)$ is a fully faithful functor $\Gamma_X(-) \;\colon\; VectBund_X \overset{\phantom{AAAA}}{\hookrightarrow} C^\infty(X) Mod$ (Nestruev 03, Theorem 11.29). Moreover, the modules over the $\mathbb{R}$-algebra $C^\infty(X)$ of smooth functions on $X$ which arise this way as sections of smooth vector bundles over a Cartesian space $X$ are precisely the finitely generated free modules over $C^\infty(X)$ (Nestruev 03, Theorem 11.32). 3. derivations of smooth functions are vector fields For $X$ a Cartesian space (example ), any derivation $D \colon C^\infty(X) \to C^\infty(X)$ on the $\mathbb{R}$-algebra $C^\infty(X)$ of smooth functions (example ) is given by differentiation with respect to a uniquely defined smooth tangent vector field: the function that identifies tangent vector fields with derivations from example $\array{ \Gamma_X(T X) &\overset{\phantom{A}\simeq\phantom{A}}{\longrightarrow}& Der(C^\infty(X)) \\ v &\mapsto& D_v }$ is in fact an isomorphism. (This follows directly from the Hadamard lemma.) Actually all three statements in prop. hold not just for Cartesian spaces, but generally for smooth manifolds (def./prop. below), if only we generalize, in the second statement, from free modules to projective modules.
However for our development here it is useful to first focus on just Cartesian spaces, and then bootstrap the theory of smooth manifolds and much more from that, which we do below. ## The category of abstract coordinate systems Here we make explicit the category formed by abstract coordinate systems (Prop. below) and mention some of its basic properties. This will serve the discussion of smooth sets as the sheaves on the category of abstract coordinate systems, in the next chapter geometry of physics – smooth sets. $\,$ ###### Proposition (the category CartSp of abstract coordinate systems/Cartesian spaces) Abstract coordinate systems according to prop. form a category (this def.) – to be denoted CartSp – whose • objects are the abstract coordinate systems $\mathbb{R}^{n}$ (the class of objects is the set $\mathbb{N}$ of natural numbers $n$); • morphisms $f : \mathbb{R}^{n_1} \to \mathbb{R}^{n_2}$ are the abstract coordinate transformations = smooth functions. Composition of morphisms is given by composition of functions. Under this identification 1. The identity morphisms are precisely the identity functions. 2. The isomorphisms are precisely the diffeomorphisms. ###### Definition (opposite category of CartSp) Write CartSp${}^{op}$ for the opposite category (this def.) of CartSp (Prop. ). This is the category with the same objects as $CartSp$, but where a morphism $\mathbb{R}^{n_1} \to \mathbb{R}^{n_2}$ in $CartSp^{op}$ is given by a morphism $\mathbb{R}^{n_1} \leftarrow \mathbb{R}^{n_2}$ in $CartSp$. We will be discussing below the idea of exploring smooth spaces by laying out abstract coordinate systems in them in all possible ways. The reader should begin to think of the sets that appear in the following definition as the set of ways of laying out a given abstract coordinate system in a given space. This is discussed in more detail below in Smooth spaces. ###### Definition A functor $X : CartSp^{op} \to Set$ (a “presheaf”) is 1.
for each abstract coordinate system $U$ a set $X(U)$ 2. for each coordinate transformation $f : \mathbb{R}^{n_1} \to \mathbb{R}^{n_2}$ a function $X(f) : X(\mathbb{R}^{n_2}) \to X(\mathbb{R}^{n_1})$ such that 1. identity is respected $X(id_{\mathbb{R}^n}) = id_{X(\mathbb{R}^n)}$; 2. composition is respected $X(f_1)\circ X(f_2) = X(f_2 \circ f_1)$ ###### Example Let $\mathcal{C}$ be a category. 1. The following are equivalent: 1. $\mathcal{C}$ has a terminal object; 2. the unique functor $\mathcal{C} \to \ast$ to the terminal category has a right adjoint $\ast \underoverset {\underset{}{\longrightarrow}} {\overset{}{\longleftarrow}} {\bot} \mathcal{C}$ Under this equivalence, the terminal object is identified with the image under the right adjoint of the unique object of the terminal category. 2. Dually, the following are equivalent: 1. $\mathcal{C}$ has an initial object; 2. the unique functor $\mathcal{C} \to \ast$ to the terminal category has a left adjoint $\mathcal{C} \underoverset {\underset{}{\longrightarrow}} {\overset{}{\longleftarrow}} {\bot} \ast$ Under this equivalence, the initial object is identified with the image under the left adjoint of the unique object of the terminal category. ###### Proof Since the unique hom-set in the terminal category is the singleton, the hom-isomorphism characterizing the adjoint functors is directly the universal property of an initial object in $\mathcal{C}$ $Hom_{\mathcal{C}}( L(\ast) , X ) \;\simeq\; Hom_{\ast}( \ast, R(X) ) = \ast$ or of a terminal object $Hom_{\mathcal{C}}( X , R(\ast) ) \;\simeq\; Hom_{\ast}( L(X), \ast ) = \ast \,,$ respectively. ### The algebraic theory of smooth algebras ###### Proposition • Every object is a finite product of the object $\mathbb{R}$ (the real line itself). • The terminal object is $\mathbb{R}^0$, the point. Hence CartSp is the syntactic category of an algebraic theory (a Lawvere theory). This is called the theory of smooth algebras.
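The functoriality of a presheaf, sending a morphism to precomposition and hence reversing the order of composition, can be made concrete with a toy sketch (finite sets and Python dictionaries stand in for Cartesian spaces and smooth maps; all names are hypothetical, purely for illustration):

```python
# Toy presheaf: X(U) = functions U -> R (encoded as dicts),
# X(f) = precomposition with f.  This exhibits the contravariance
# X(f2 . f1) = X(f1) . X(f2).

def X_on_morphism(f):
    """Send f : U -> V (a dict) to X(f) : X(V) -> X(U), g |-> g . f."""
    def precompose(g):          # g : V -> R, encoded as a dict
        return {u: g[f[u]] for u in f}
    return precompose

f1 = {"a": 1, "b": 2}           # f1 : U -> V with U = {a,b}, V = {0,1,2}
f2 = {0: "p", 1: "q", 2: "p"}   # f2 : V -> W with W = {p,q}
g  = {"p": 3.0, "q": -1.0}      # an element of X(W)

composite = {u: f2[f1[u]] for u in f1}         # f2 . f1 : U -> W
lhs = X_on_morphism(composite)(g)              # X(f2 . f1)(g)
rhs = X_on_morphism(f1)(X_on_morphism(f2)(g))  # (X(f1) . X(f2))(g)
print(lhs == rhs)  # True
```

The same bookkeeping, with smooth maps in place of dicts, is what the presheaves on CartSp below perform.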
###### Definition A product-preserving functor $A : CartSp \to Set$ is a smooth algebra. A homomorphism of smooth algebras is a natural transformation between the corresponding functors. The basic example is: ###### Example For $n \in \mathbb{N}$, the smooth algebra $C^\infty(\mathbb{R}^n)$ is the functor $CartSp \to Set$ corepresented by $\mathbb{R}^n \in$ CartSp. This means that to $\mathbb{R}^k \in CartSp$ it assigns the set $Hom_{CartSp}(\mathbb{R}^n , \mathbb{R}^k) = C^\infty(\mathbb{R}^n, \mathbb{R}^k)$ of smooth functions from $\mathbb{R}^n$ to $\mathbb{R}^k$, and to a smooth function $f \colon \mathbb{R}^{k_1} \to \mathbb{R}^{k_2}$ it assigns the function $f\circ (-) \colon C^\infty(\mathbb{R}^n, \mathbb{R}^{k_1}) \to C^\infty(\mathbb{R}^n, \mathbb{R}^{k_2})$ given by postcomposition with $f$. ###### Remark The example shows how we are to think of a functor $A \colon CartSp \to Set$ as encoding an algebra: such a functor assigns to $\mathbb{R}^n$ a set to be interpreted as a set of “smooth functions on something with values in $\mathbb{R}^n$”, only that the “something” here is not pre-defined, but is instead indirectly characterized by this assignment. Due to this we will often denote smooth algebras as “$C^\infty(X)$”, even if “$X$” is not a pre-defined object, and write their value on $\mathbb{R}^n$ as $C^\infty(X,\mathbb{R}^n)$. This is illustrated by the next example.
###### Example The smooth algebra of dual numbers $C^\infty(\mathbf{D})$ is the smooth algebra which assigns to $\mathbb{R}^n$ the Cartesian product $C^\infty(\mathbf{D},\mathbb{R}^n) \coloneqq \mathbb{R}^n \times \mathbb{R}^n$ of two copies of $\mathbb{R}^n$, which we will write as $\left\{ (\epsilon \mapsto (\vec x + \epsilon \vec v)) | \vec x , \vec v \in \mathbb{R}^n \right\} \,.$ Moreover, a smooth function $f \colon \mathbb{R}^{n_1} \to \mathbb{R}^{n_2}$ is sent to the function $C^\infty(\mathbf{D}, f) \colon C^\infty(\mathbf{D}, \mathbb{R}^{n_1}) \to C^\infty(\mathbf{D}, \mathbb{R}^{n_2})$ given by $\left(\epsilon \mapsto \left(\vec x + \epsilon \vec v\right)\right) \;\mapsto\; \left( \epsilon \mapsto \left( f(\vec x) + \epsilon\, (\mathbf{d}f)(\vec v) \right) \right) = \left( \epsilon \mapsto \left( f\left(\vec x\right) + \epsilon \sum_{j = 1}^{n_2} \left(\sum_{i = 1}^{n_1}\frac{\partial f^j}{\partial x^i} v^i\right) \vec e_j \right) \right) \,.$ ###### Remark As the notation suggests, we may think of $C^\infty(\mathbf{D})$ as the algebra of functions on a first order infinitesimal neighbourhood $\mathbf{D}$ of the origin in $\mathbb{R}$.
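The assignment in this example, sending $\epsilon \mapsto \vec x + \epsilon \vec v$ to $\epsilon \mapsto f(\vec x) + \epsilon\,(\mathbf{d}f)(\vec v)$, is exactly the arithmetic of dual numbers familiar from forward-mode automatic differentiation. A minimal single-variable sketch (illustrative only, not part of the entry):

```python
class Dual:
    """Number x + eps*v with eps^2 = 0, i.e. an element of C^oo(D, R)."""
    def __init__(self, x, v=0.0):
        self.x, self.v = x, v

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.x + other.x, self.v + other.v)

    def __mul__(self, other):
        # (x + eps v)(y + eps w) = x y + eps (x w + y v), since eps^2 = 0
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.x * other.x, self.x * other.v + self.v * other.x)

def f(t):
    return t * t * t + t * Dual(2.0) + Dual(1.0)   # f(t) = t^3 + 2t + 1

r = f(Dual(2.0, 1.0))   # evaluate f on (eps |-> 2 + eps)
print(r.x, r.v)         # 13.0 14.0  (f(2) = 13 and f'(2) = 3*4 + 2 = 14)
```

Evaluating $f$ on the dual number $2 + \epsilon$ produces $f(2) + \epsilon f'(2)$ mechanically, with no symbolic differentiation: precisely the action of $C^\infty(\mathbf{D}, f)$ in dimension one.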
###### Definition For $n \in \mathbb{N}$ the standard open n-ball is the subset $D^n = \{ (x_i)_{i =1}^n \in \mathbb{R}^n | \sum_{i = 1}^n (x_i)^2 \lt 1 \} \hookrightarrow \mathbb{R}^n \,.$ ###### Proposition There is a diffeomorphism $\mathbb{R}^n \stackrel{\simeq}{\to} D^n \,.$ ###### Definition A differentially good open cover of a Cartesian space $\mathbb{R}^n$ is a set $\{U_i \hookrightarrow \mathbb{R}^n\}$ of open subset inclusions of Cartesian spaces such that these cover $\mathbb{R}^n$ and such that for each non-empty finite intersection there exists a diffeomorphism $\mathbb{R}^n \stackrel{\simeq}{\to} U_{i_1} \cap \cdots \cap U_{i_k}$ that identifies the $k$-fold intersection with a Cartesian space itself. ###### Remark Differentially good covers are useful for computations. Their full impact is however on the homotopy theory of simplicial presheaves over CartSp. This we discuss in the chapter on smooth homotopy types, around this prop. ###### Proposition Every open cover is refined by a differentially good open cover, def. . A proof is at good open cover. ###### Remark Despite its appearance, this is not quite a classical statement. The classical statement is only that every open cover is refined by a topologically good open cover. See the comments in the references-section at open ball for the situation concerning this statement in the literature. ###### Remark The good open covers do not yet form a Grothendieck topology on CartSp. One of the axioms of a Grothendieck topology is that for every covering family also its pullback along any morphism in the category is a covering family. But while the pullback of every open cover is again an open cover, and hence open covers form a Grothendieck topology on CartSp, not every pullback of a good open cover is again good.
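The proposition above asserts the diffeomorphism $\mathbb{R}^n \simeq D^n$ without exhibiting one. A standard choice (an assumption here, since the entry fixes no formula) is $x \mapsto x/\sqrt{1+|x|^2}$ with inverse $y \mapsto y/\sqrt{1-|y|^2}$; a quick numerical sanity check:

```python
import math
import random

def phi(x):
    """R^n -> D^n, x |-> x / sqrt(1 + |x|^2); lands in the open unit ball."""
    s = 1.0 / math.sqrt(1.0 + sum(t * t for t in x))
    return [t * s for t in x]

def psi(y):
    """D^n -> R^n, y |-> y / sqrt(1 - |y|^2); inverse of phi."""
    s = 1.0 / math.sqrt(1.0 - sum(t * t for t in y))
    return [t * s for t in y]

random.seed(0)
for _ in range(100):
    x = [random.uniform(-5, 5) for _ in range(3)]
    y = phi(x)
    assert sum(t * t for t in y) < 1.0                        # phi(x) is in D^3
    assert all(abs(a - b) < 1e-9 for a, b in zip(psi(y), x))  # psi . phi = id
print("ok")
```

Both maps are smooth with everywhere-defined smooth inverse, which is the content of the proposition in this concrete instance.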
###### Example Let $\{\mathbb{R}^2\stackrel{\phi_{i}}{\hookrightarrow}\mathbb{R}^2\}_{i \in \{1,2\}}$ be the open cover of the plane by an open left half space $\mathbb{R}^2 \simeq \{ (x_1,x_2) \in \mathbb{R}^2 | x_1 \lt 1 \} \stackrel{\phi_1}{\hookrightarrow} \mathbb{R}^2$ and a right open half space $\mathbb{R}^2 \simeq \{ (x_1,x_2) \in \mathbb{R}^2 | x_1 \gt -1 \} \stackrel{\phi_2}{\hookrightarrow} \mathbb{R}^2 \,.$ The intersection of the two is the open strip $\mathbb{R}^2 \simeq \{ (x_1, x_2) \in \mathbb{R}^2 | -1 \lt x_1 \lt 1 \} \hookrightarrow \mathbb{R}^2 \,.$ So this is a good open cover of $\mathbb{R}^2$. But consider then the smooth function $2(\cos(2 \pi (-)), \sin(2 \pi (-))) \colon \mathbb{R}^1 \to \mathbb{R}^2$ which sends the line to a curve in the plane that periodically goes around the circle of radius 2 in the plane. Then the pullback of the above good open cover on $\mathbb{R}^2$ to $\mathbb{R}^1$ along this function is an open cover of $\mathbb{R}$ by two open subsets, each being a disjoint union of countably many open intervals in $\mathbb{R}$. Each of these open intervals is an open 1-ball hence diffeomorphic to $\mathbb{R}^1$, but their disjoint union is not contractible (it does not contract to the point, but to many points!). So the pullback of the good open cover that we started with is an open cover which is not good anymore. But it has an evident refinement by a good open cover. This is a special case of what the following statement says in generality. ###### Proposition The differentially good open covers, def. , constitute a coverage on CartSp. Hence CartSp equipped with that coverage is a site. 
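The pullback cover in this example can be inspected numerically: the preimage of the strip $\{-1 \lt x_1 \lt 1\}$ under $t \mapsto 2(\cos(2\pi t), \sin(2\pi t))$ consists of two open intervals per unit period, so sampling over three periods should find six connected components. A rough sketch:

```python
import math

def in_strip(t):
    """Is the curve point (2cos(2 pi t), 2sin(2 pi t)) inside -1 < x_1 < 1?"""
    return -1.0 < 2.0 * math.cos(2.0 * math.pi * t) < 1.0

# Sample t in [0, 3) and count maximal runs of points inside the strip.
samples = [in_strip(i / 1000.0) for i in range(3000)]
runs = sum(1 for i in range(1, 3000) if samples[i] and not samples[i - 1])
print(runs)  # 6 components: the pulled-back cover is a disjoint union of intervals
```

Each component is an interval, hence an open 1-ball, but their disjoint union is not contractible, confirming that the pulled-back cover fails to be good.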
###### Proof By definition of coverage we need to check that for $\{U_i \hookrightarrow \mathbb{R}^n\}_{i \in I}$ any good open cover and $f \colon \mathbb{R}^k \to \mathbb{R}^n$ any smooth function, we can find a good open cover $\{K_j \to \mathbb{R}^k\}_{j \in J}$ and a function $\rho \colon J \to I$ such that for each $j \in J$ there is a smooth function $\phi \colon K_j \to U_{\rho(j)}$ that makes this diagram commute: $\array{ K_j &\stackrel{\phi}{\to}& U_{\rho(j)} \\ \downarrow && \downarrow \\ \mathbb{R}^k &\stackrel{f}{\to}& \mathbb{R}^n } \,.$ To obtain this, let $\{f^* U_i \to \mathbb{R}^k\}$ be the pullback of the original covering family, in that $f^* U_i \coloneqq \{ x \in \mathbb{R}^k | f(x) \in U_i \} \hookrightarrow \mathbb{R}^k \,.$ This is evidently an open cover, albeit not necessarily a good open cover. But by prop. there does exist a good open cover $\{\tilde K_{\tilde j} \hookrightarrow \mathbb{R}^k\}_{\tilde j \in \tilde J}$ refining it, which in turn means that for each $\tilde j$ there is an index $i(\tilde j) \in I$ and a commuting square $\array{ \tilde K_{\tilde j} &\to& f^* U_{i(\tilde j)} \\ \downarrow && \downarrow \\ \mathbb{R}^k &\stackrel{=}{\to}& \mathbb{R}^k } \,.$ Therefore the pasting composite of these commuting squares $\array{ \tilde K_{\tilde j} &\to& f^* U_{i(\tilde j)} &\to& U_{i(\tilde j)} \\ \downarrow && \downarrow && \downarrow \\ \mathbb{R}^k &\stackrel{=}{\to}& \mathbb{R}^k &\stackrel{f}{\to}& \mathbb{R}^n }$ solves the condition required in the definition of coverage. By the example above, this good open cover coverage is not a Grothendieck topology. But, as any coverage, it uniquely completes to one which has the same sheaves. ###### Proposition The Grothendieck topology induced on CartSp by the differentially good open cover coverage of def. has as covering families the ordinary open covers. ###### Remark This means that for every sheaf-theoretic construction to follow we can just as well consider the Grothendieck topology of open covers on $CartSp$.
The sheaves of the open cover topology are the same as those of the good open cover coverage. But the latter is (more) useful for several computational purposes in the following. It is the good open cover coverage that makes manifest, below, that sheaves on $CartSp$ form a locally connected topos and in consequence a cohesive topos. This kind of argument becomes all the more pronounced as we pass further below to (∞,1)-sheaves on CartSp. This will be discussed in Smooth n-groupoids – Semantic Layer – Local Infinity-Connectedness below. ### The slice category (…) • slice category $CartSp_{/\mathbb{R}^n}$ (…) $\,$ ## Syntactic Layer In this Syntactic Layer we discuss the abstract generalities of abstract coordinate systems, def. : the internal language of a category with products, which is type theory with product types. This is rough, needs further development. ### Judgments about types and terms We now introduce a different notation for objects and morphisms in a category (such as the category CartSp of def. ). This notation is designed to, eventually, make more transparent what exactly it is that happens when we reason deductively about objects and morphisms of a category. But before we begin to make any actual deductions about objects and morphisms in a category below, in this section we express the given objects and morphisms at hand in the first place. Such basic statements of the form “There is an object called $A$” are to be called judgments, in order not to confuse these with genuine propositions that we eventually formalize within this metalanguage. To express that there is an object $X \in \mathcal{C}$ in a category $\mathcal{C}$, we now equivalently write the string of symbols (called a sequent) $\vdash \; X \colon Type \,.$ We say that these symbols express the judgment that $X$ is a type. We also say that $\vdash \; X \colon Type$ is the syntax of which $X \in \mathcal{C}$ is the categorical semantics.
For instance the terminal object $* \in \mathcal{C}$ we call the categorical semantics of the unit type and write syntactically as $\vdash \; Unit \colon Type \,.$ If we want to express that we do assume that a terminal object indeed exists, hence that we want to be able to deduce the existence of a terminal object from no hypothesis, then we write this judgment below a horizontal line $\frac{}{\vdash \; Unit \colon Type} \,.$ We will see more interesting such horizontal-line statements below. Next, to express an element of the object $X$ in $\mathcal{C}$, hence a morphism $* \stackrel{x}{\to} X$ in $\mathcal{C}$, we write equivalently the sequent $\vdash \; x \colon X$ and call this the judgment that $x$ is a term of type $X$. Notice that every object $X \in \mathcal{C}$ becomes the terminal object in the slice category $\mathcal{C}_{/X}$. Let $A \to X$ be any morphism in $\mathcal{C}$, regarded as an object in the slice category $A \in \mathcal{C}_{/X} \,.$ We declare that the syntax of which this is the categorical semantics is given by the sequent $x \colon X \;\vdash \; A(x) \colon Type \,.$ We say that this expresses the judgment that $A$ is an $X$-dependent type; or a type in the context of a free variable $x$ of type $X$. Notice that an element of $A \in \mathcal{C}_{/X}$ is a generalized element of $A$ in $\mathcal{C}$, namely a morphism $X \to A$ which fits into a commuting diagram $\array{ X &&\stackrel{a}{\to}&& A \\ & {}_{\mathllap{id_X}}\searrow && \swarrow_{} \\ && X }$ in $\mathcal{C}$. We declare that the syntax of which such an $a \in A$ (in $\mathcal{C}_{/X}$) is the categorical semantics is the sequent $x\colon X \;\vdash\; a(x) : A(x) \,.$ We say that this expresses the judgment that $a(x)$ is a term depending on the free variable $x$ of type $X$. This completes the list of judgment syntax to be considered.
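The judgment $x \colon X \;\vdash\; a(x) \colon A(x)$, read semantically as a section of the display map $A \to X$, can be mimicked in an ordinary programming language. The following Python sketch is illustrative only; the concrete types, names, and the choice of fiber value are assumptions, not taken from the text. It models the display map as a first-component projection and a term in context as a function whose composite with the projection is the identity:

```python
# Model the display map A -> X: an element of A is a pair (x, fiber_value),
# and the projection sends it down to x.
def proj(element):
    """The display map A -> X (here: first-component projection)."""
    return element[0]

def a(x):
    """A term  x : X |- a(x) : A(x), i.e. a section of the display map.
    Here A(x) is modeled as the fiber over x, and a(x) picks x**2 in it."""
    return (x, x ** 2)

# The triangle commutes: proj(a(x)) == x for every x, so a is a section.
assert all(proj(a(x)) == x for x in range(10))
```

The section condition `proj(a(x)) == x` is exactly the commutativity of the triangle over $X$ displayed above.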
Notice that if the category $\mathcal{C}$ has products then, even though it does not explicitly appear above, this is sufficient to express any morphism $X \stackrel{f}{\to} Y$ in $\mathcal{C}$ as the semantics of a term: we regard this morphism naturally as being the corresponding morphism in the slice category $\mathcal{C}_{/X}$, which as a commuting diagram in $\mathcal{C}$ itself is $\array{ X && \stackrel{(f,id_X)}{\to} && Y\times X \\ & {}_{\mathllap{id_X}}\searrow && \swarrow_{\mathrlap{p_2}} \\ && X } \,.$ This is the categorical semantics for which the syntax is simply $x \colon X \;\vdash\; y(x) \colon Y \,,$ being the judgment which expresses that $y(x)$ is a term in context of an $X$-dependent type $Y$, in the special degenerate case that $Y$ does not actually vary with $x \colon X$. ### Natural deduction rules for product types With the above symbolic notation for making judgments about the presence of objects and morphisms in a category $\mathcal{C}$, we now consider a system of rules of deduction that tells us how we may process these symbols (how to do computations) such that the new symbols we obtain in turn express new objects and new morphisms in $\mathcal{C}$ that we can build out of the given ones by universal constructions in the sense of category theory. This way of deducing new expressions from given ones is very basic as well as very natural and hence goes by the technical term natural deduction. For every kind of type (every universal construction in category theory) there is, in natural deduction, one set of rules for how to deductively reason about it. This set of rules, in turn, always consists of four kinds of rules: the type formation rule, the term introduction rule, the term elimination rule, and the computation rule. These are going to be the syntax in type theory of which universal constructions in category theory are the categorical semantics. In our running example where $\mathcal{C} =$ CartSp, the only universal construction available is that of forming products.
We therefore introduce now the natural deduction rules by way of example of the special case of product types. 1. type formation rule Let $A , B \in \mathcal{C}$ be two objects in a category with products. Then there exists the product object $A \times B \in \mathcal{C} \,.$ We now declare that the syntax of which this state of affairs is the categorical semantics is the collection of symbols of the form $\frac{\vdash\; A \colon Type \;\;\;\;\; \vdash\; B \colon Type}{\vdash\; A \times B \colon Type} \,.$ Here on top of the horizontal line we have the two judgments which express that, syntactically, $A$ is a type and $B$ is a type, and semantically that $A \in \mathcal{C}$ and $B \in \mathcal{C}$. Below the horizontal line is, in turn, the judgment which expresses that there is, syntactically, a product type, which semantically is the product $A \times B \in \mathcal{C}$. The horizontal line itself is to indicate that if we are given the (symbols of) the collection of judgments on top, then we are entitled to also obtain the judgment on the bottom. Remark (Computation) All this may seem, on first sight, like a lot of fuss about something of grandiose banality. To see what is gradually being accomplished here despite this appearance, as we proceed in this discussion, the reader can and should think of this as the first steps in the definition of a programming language: the notion of judgment is a syntactic rule for strings of symbols that a computer is to read in, and a natural-deduction step such as the type formation rule above is an operation that this computer validates as being an allowed step of transforming a memory state with a given collection of such strings into a new memory state to which the string below the horizontal line is added. As we add the remaining rules below, what looks like a grandiose banality so far will remain grandiose, but no longer be a banality.
The reader feeling in need of more motivational remarks along these lines might want to take a break here and have a look at the entry computational trinitarianism first, which provides more pointers to the grandiose picture which we are approaching here. Next, the second natural deduction rule for product types is the 2. term elimination rule. The fact that $A \times B \in \mathcal{C}$ is equipped with two projection morphisms $\array{ A \stackrel{p_1}{\leftarrow} A \times B \stackrel{p_2}{\to} B }$ means that from every element $t$ of $A \times B$ we may deduce the existence of elements $p_1(t)$ and $p_2(t)$ of $A$ and $B$, respectively. We declare now that this is the categorical semantics of which the natural deduction syntax is: $\frac{\vdash \; t \colon A \times B}{\vdash \; p_1(t) \colon A} \;\;\;\;\;\;\;\;\; \frac{\vdash \; t \colon A \times B}{\vdash \; p_2(t) \colon B} \,.$ As before, this is to say that if syntactically we are given strings of symbols expressing judgments as on the top of these horizontal lines, then we may “naturally deduce” also the judgment of the string of symbols on the bottom of this line. 3. term introduction rule. The first part of the universal property of the product in category theory is that for $Q \in \mathcal{C}$ any other object equipped with morphisms $\array{ && Q \\ & {}^{\mathllap{a}}\swarrow && \searrow^{\mathrlap{b}} \\ A && && B }$ in $\mathcal{C}$, we obtain a canonical morphism $Q \to A \times B$ in $\mathcal{C}$. This is now declared to be the categorical semantics of which the natural deduction syntax is $\frac{ \vdash\; a \colon A \;\;\;\;\;\; \vdash\; b \colon B }{ \vdash (a,b) \colon A \times B } \,.$ With the elements that are the semantics of the terms appearing here made explicit, this is the syntax for a diagram $\array{ && Q \\ & {}^{\mathllap{a}}\swarrow &\downarrow^{\mathrlap{(a,b)}}& \searrow^{\mathrlap{b}} \\ A && A \times B && B } \,.$ 4. computation rule.
The next part of the universal property of the product in category theory is that the resulting diagram $\array{ && Q \\ & {}^{\mathllap{a}}\swarrow &\downarrow& \searrow^{\mathrlap{b}} \\ A &\stackrel{p_1}{\leftarrow}& A \times B & \stackrel{p_2}{\to} & B }$ is in fact a commuting diagram. Syntactically this is, clearly, the rule that the following identifications of strings of symbols are to be enforced $p_1(a,b) = a \;\;\;\;\;\; p_2(a,b) = b \,.$ This concludes the description of natural deduction about objects, morphisms and products in a category using its type theory syntax. We summarize the dictionary between category theory and type theory discussed so far below. In the next section we promote our running example category $\mathcal{C}$, which admits only very few universal constructions (just products), to a richer category, the sheaf topos over it. That richer category then accordingly comes with a richer syntax of natural deduction inside it, namely with full dependent type theory. This we discuss in the Syn Layer below. ### Natural deduction rules for dependent sum types (…) dependent sum type (…) ### Dictionary: type theory / category theory The dictionary between dependent type theory with product types and category theory of categories with products.
| type theory (syntax) | category theory (semantics) |
|---|---|
| judgment | diagram |
| type: $\vdash\; A \; \mathrm{type}$ | object in category: $A \in \mathcal{C}$ |
| term: $\vdash\; a \colon A$ | element: $* \stackrel{a}{\to} A$ |
| dependent type: $x \colon X \;\vdash\; A(x) \; \mathrm{type}$ | object in slice category: $\array{A \\ \downarrow \\ X} \in \mathcal{C}_{/X}$ |
| term in context: $x \colon X \;\vdash \; a(x)\colon A(x)$ | generalized element / element in slice category: $\array{X &&\stackrel{a}{\to}&& A \\ & {}_{\mathllap{id_X}}\searrow && \swarrow_{\mathrlap{}} \\ && X}$ |
| $x \colon X \;\vdash \; a(x)\colon A$ | $\array{X &&\stackrel{(id_X,a)}{\to}&& X \times A \\ & {}_{\mathllap{id_X}}\searrow && \swarrow_{\mathrlap{p_1}} \\ && X}$ |

$\,$

| type theory (natural deduction) | category theory (universal construction) |
|---|---|
| substitution: $\frac{ x_2 \colon X_2\; \vdash\; A(x_2) \colon Type \;\;\;\; x_1 \colon X_1\; \vdash \; f(x_1)\colon X_2}{ x_1 \colon X_1 \;\vdash\; A(f(x_1)) \colon Type}$ | pullback of display maps: $\array{ f^* A &\to& A \\ \downarrow && \downarrow \\ X_1 &\stackrel{f}{\to}& X_2 }$ |

$\,$

Product types:

| type theory (natural deduction) | category theory (universal construction) |
|---|---|
| type formation: $\frac{\vdash \;A \colon Type \;\;\;\;\; \vdash \;B \colon Type}{\vdash\; A \times B \colon Type}$ | $A,B \in \mathcal{C} \Rightarrow A \times B \in \mathcal{C}$ |
| term introduction: $\frac{\vdash\; a \colon A\;\;\;\;\; \vdash\; b \colon B}{ \vdash \; (a,b) \colon A \times B}$ | $\array{ && Q\\ & {}^{\mathllap{a}}\swarrow &\downarrow_{\mathrlap{(a,b)}}& \searrow^{\mathrlap{b}}\\ A &&A \times B&& B }$ |
| term elimination: $\frac{\vdash\; t \colon A \times B}{\vdash\; p_1(t) \colon A} \;\;\;\;\;\frac{\vdash\; t \colon A \times B}{\vdash\; p_2(t) \colon B}$ | $\array{ && Q\\ &&\downarrow^{t} && \\ A &\stackrel{p_1}{\leftarrow}& A \times B &\stackrel{p_2}{\to}& B}$ |
| computation rule: $p_1(a,b) = a\;\;\; p_2(a,b) = b$ | $\array{ && Q \\ & {}^{\mathllap{a}}\swarrow &\downarrow_{(a,b)}& \searrow^{\mathrlap{b}} \\ A &\stackrel{p_1}{\leftarrow}& A \times B& \stackrel{p_2}{\to} & B}$ |

$\,$ The inference rules for dependent pair types (aka “dependent sum types” or “$\Sigma$-types”): (…) Below in Smooth spaces - Syntactic Layer we complete this dictionary to one between dependent type theory with dependent products and toposes. Last revised on June 12, 2018 at 16:33:52.
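The four natural deduction rules for product types summarized in the dictionary above have a direct shadow in any programming language with tuples. The following Python sketch is an informal illustration (the function names `pair`, `p1`, `p2` are chosen here, not taken from the text): pairing realizes term introduction, the projections realize term elimination, and the final assertions check the computation rules $p_1(a,b) = a$ and $p_2(a,b) = b$:

```python
def pair(a, b):
    """Term introduction: from a : A and b : B form (a, b) : A x B."""
    return (a, b)

def p1(t):
    """Term elimination: first projection A x B -> A."""
    return t[0]

def p2(t):
    """Term elimination: second projection A x B -> B."""
    return t[1]

# Computation rules: p1(a, b) = a and p2(a, b) = b.
assert p1(pair(1, "x")) == 1
assert p2(pair(1, "x")) == "x"
```

The type formation rule corresponds to the mere availability of the tuple type itself; the computation rules are the equations the runtime guarantees for projections applied to a freshly formed pair.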
# Find the measure of all the missing angles worksheet answer key

These worksheets ask students to find the measures of missing angles in triangles, quadrilaterals, and other figures. Simple-level problems demand a direct answer; harder ones require applying the angle sum theorem of a triangle and solving an equation for the unknown interior angles. New vocabulary: quadrilateral, rectangle, square, parallelogram, rhombus, trapezoid. The figure below is a quadrilateral, since it has four sides and four angles. To measure an angle in degrees, use a standard protractor, extending the lines with a straight edge if necessary. Sample problems:

- Bonus question: the measures of two angles of an isosceles triangle are (3x + 5)° and (x + 16)°; find all possible values of x.
- The measures of two angles of a triangle are 80° and 50°; find the third.
- Matching worksheet: match the missing measures of angles to the shapes that are missing them.
- Right-triangle trigonometry: find the values of the six trigonometric functions of θ for each triangle (for example, triangles with sides 15 and 8, 14 and 7, or 6 and 11). If b = 18 and m∠A = 100°, find c.
- Angles on a straight line are supplementary: m∠EDC = 180° − 35°, so x = 145°.
- In an isosceles trapezoid, each pair of base angles is congruent.
- Four of the angles of a pentagon are equal; find the measure of the equal angles.
- Each interior angle of a regular polygon is 150°; how many sides does the polygon have?
- Find the measure of an angle that is 30° less than its supplement.
- Triangle angle sum example: the given angles 50° and 90° sum to 140°, and 180° − 140° = 40°, so the measures of the angles of the triangle are 40°, 50°, and 90°.
- In the figure, m∠2 = 75°; identify an alternate exterior or interior angle pair and find the remaining angles.
- You can find the missing measure of a rectangle if you know its area and the measure of the other side.
- The complementary angles worksheet maker creates unique worksheets with up to 16 pairs of complementary angles.
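Problems such as "each interior angle of a regular polygon is 150°; how many sides does the polygon have?" follow from two standard facts: the interior angles of an n-gon sum to (n − 2) · 180°, and the exterior angles of a convex polygon sum to 360°. A small Python sketch (the function names are ours, for illustration):

```python
def interior_angle_sum(n):
    """Sum of the interior angles of an n-gon, in degrees: (n - 2) * 180."""
    return (n - 2) * 180

def sides_from_interior_angle(angle):
    """Number of sides of a regular polygon with the given interior angle.
    Each exterior angle is 180 - angle, and the exterior angles sum to 360."""
    return 360 // (180 - angle)

assert interior_angle_sum(10) == 1440        # decagon
assert sides_from_interior_angle(150) == 12  # interior 150 => exterior 30
```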
- Draw an angle and label it with the points A, N, G, L, E.
- If two parallel lines are cut by a transversal, then each pair of alternate interior angles is congruent.
- What is the complement of an 11° angle? (Complementary angles sum to 90°, so it is 79°.)
- Find the sum of the interior angles of each convex polygon.
- Three angles of a quadrilateral are equal and the measure of the fourth angle is 120°; find the equal angles.
- Supplementary angles add to one hundred and eighty degrees; use this to find a missing angle when its partner is known.
- Isosceles triangles: draw a picture to help find the missing sides or angles.
- Define inscribed angles and apply their properties to solve problems.
- Guided lesson: please note that a letter may be used to indicate the total angle, i.e. the sum of the two angles.
- Apply the angle sum theorem: combining the x's on the left-hand side of the equation gives 10x = 180; solve for x and find all possible values.
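The triangle angle sum theorem underlies most of these exercises: the third angle is 180° minus the other two, and problems whose angles are expressed in terms of x reduce to a linear equation such as 10x = 180. A Python sketch (the 2x/3x/5x split is an invented example consistent with 10x = 180, not one of the worksheet's own problems):

```python
def third_angle(a, b):
    """Missing angle of a triangle, by the angle sum theorem (180 degrees)."""
    return 180 - (a + b)

assert third_angle(80, 50) == 50   # two angles of 80 and 50 leave 50
assert third_angle(90, 35) == 55   # x = 180 - (90 + 35) = 55

# Algebraic variant: if the three angles measure 2x, 3x and 5x (an invented
# split), combining the x's gives 10x = 180, and dividing by 10 gives x = 18.
x = 180 / (2 + 3 + 5)
assert x == 18
```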
- Subtract the sum of the two known angles from 180° to find the measure of the third angle; the sum of all angles of a triangle is equal to 180 degrees. Round answers to the nearest tenth where needed.
- When a transversal crosses two lines, the eight angles will together form four pairs of corresponding angles; when the two lines are parallel, the corresponding, alternate interior, and alternate exterior angle pairs are congruent.
- ∠A and ∠B are complementary, and ∠C and ∠B are complementary; it follows that ∠A ≅ ∠C (two angles complementary to the same angle are congruent).
- Find the sum of the interior angles of a) a nonagon, b) a 50-gon; find the measure of each exterior angle of a regular decagon.
- The ratio of the side lengths of a triangle is 4 : 7 : 9; find the side lengths.
- Use sine, cosine, and tangent to solve for the side length of a right triangle; scaffolded questions start relatively easy and end with some real challenges.
- Apply the properties of arcs and of inscribed polygons to solve problems.
- Think about a plan: the measure of an exterior angle of △DEF is 4x; apply the exterior angle theorem.
- The exterior angle theorem: the measure of an exterior angle of a triangle equals the sum of its two remote interior angles; for instance, one remote interior angle of the exterior angle 4x measures x + 23.
- To find the perimeter of a polygon, add the lengths of its sides.
- Angles F and B in the figure above constitute one of the corresponding angle pairs; each angle pair contains one missing angle measurement for the student to calculate.
- Triangle angle sum: x = 180° − (90° + 35°), so x = 55°.
- In many problems the measures of two angles are given and the third angle measurement is unknown; some pairs of the given angles are complementary or supplementary.
- An angle whose measure is greater than 180° and less than 360° is called a reflex angle.
- Two interior angles of a pentagon measure 80° and 100°; find the remaining angles. How many sides does the polygon have?
- What are the measures of the angles located at positions a, b, and c?
- Drawing puzzle: draw five angles so that ∠2 and ∠3 are acute vertical angles, ∠1 and ∠2 are supplementary, ∠2 and ∠5 are complementary, and ∠4 and ∠5 are adjacent.
- Practice finding the missing angle x in a triangle.
- Challenge: find the 71 missing angles formed by parallel lines cut by transversals, using what you know about parallel lines, transversals, and the interior angles of triangles.
- Finding the missing measure of a quadrilateral: the measures of three angles of a quadrilateral are 115°, 68°, and 45°; determine the fourth.
- Classify the angle with the given measure as acute, obtuse, right, or straight.
- In an equilateral triangle, all the sides and angles are congruent.
- The trigonometric ratios can be used to solve a triangle.
- Since the triangle has an obtuse angle, it is classified as obtuse.
- These worksheets explain how to calculate the measurement of an angle.
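The quadrilateral problem above (three angles of 115°, 68°, and 45°) uses the fact that the interior angles of a quadrilateral sum to 360°. A minimal sketch (the helper name is ours):

```python
def fourth_angle(a, b, c):
    """Missing angle of a quadrilateral: the interior angles sum to 360 degrees."""
    return 360 - (a + b + c)

# Three angles measure 115, 68 and 45, so the fourth measures 132.
assert fourth_angle(115, 68, 45) == 132
```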
- Matching worksheet: match the sum of the angles and the missing parts to their measures.
- Perpendicular lines are lines that intersect at a right angle; parallel lines are lines that do not intersect.
- Use what you know about vertical and supplementary angles to find each missing angle; vertical angles are equal, so both angles of a vertical pair measure 25 degrees.
- Corresponding angles are congruent if the two lines are parallel.
- Polygons include triangles, quadrilaterals, pentagons, and hexagons. To find angle measures in polygons, construct all possible diagonals from a given vertex and find the sum of the measures of the angles in the convex polygon.
- Seven angles of an octagon measure 132° each; find the eighth. (The interior angles of an octagon sum to 1080°, so the eighth angle measures 1080° − 7 × 132° = 156°.)
- If a reflection had taken place, A′ would be at the coordinates for C′ and vice versa; since it is not, the transformation is a translation.
- The arc between PQ and QR is 39°; find the values of x and y.
- The sum of the interior angles of a triangle is equal to 180°. An angle whose measure is exactly 90° is a right angle.
- Solving right triangles using trigonometry: given two sides (for example 13 and 12), find the measure of the angle θ indicated, rounding to the nearest tenth.
- The two triangles are congruent because their side lengths and angle measurements are the same.
- Vertical angles: find the measures of angles formed by intersecting lines; vertically opposite angles (the angle opposite the known angle) are equal.
- Angle B: we can find the measure of angle B without using any trigonometric ratios, via the angle sum of the triangle.
- Angles at a point: a° + b° + c° = 360°.
- Find the missing angle in the triangles and the convex quadrilaterals; find the value of x, the measure of each angle of the triangle, and the measure of the exterior angle; then find the measure of the third angle.
- Sample answer: complementary angles have a sum of 90°, but supplementary angles have a sum of 180°. Any two angles that sum to 180° can be described as supplementary angles.
- If two parallel lines are cut by a transversal, then each pair of alternate exterior angles is congruent.
- Worksheets classify triangles by sides, by angles, or by both.
- Level 2 gives students 4 angles, and they must find the other 71.
- Find the value of x and the measure of all the angles: once one angle is known, the vertical and supplementary relationships give all the measures of the angles around it.
- Sine, cosine, and tangent practice: find the value of each trigonometric ratio (for example, in a right triangle with sides 9 and 10, find the ratios for θ).
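When two sides of a right triangle are known, the missing acute angle can be recovered with an inverse trigonometric ratio. A Python sketch (the 3-4-5 triangle is our own example, not one of the worksheet's):

```python
import math

def angle_from_legs(opposite, adjacent):
    """Acute angle of a right triangle, in degrees, from its two legs."""
    return math.degrees(math.atan(opposite / adjacent))

# In a 3-4-5 right triangle, the angle opposite the leg of length 3
# is arctan(3/4), about 36.9 degrees when rounded to the nearest tenth.
theta = angle_from_legs(3, 4)
assert round(theta, 1) == 36.9
```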
- Find the measure of an angle that is 10° more than its complement.
- Right angles are angles that equal 90°; angles are measured in degrees using a protractor.
- The distance formula: use a, b, and c to label the legs and the hypotenuse of the right triangle.
- Fortunately, there is a handy formula that you can use to find a missing interior angle in a polygon, whether it is a square, a hexagon, or any other polygon.
- Find the measure of the fourth angle of a quadrilateral; check the reasonableness of answers using mental computation and estimation strategies.
- Angle puzzles: calculate all of the missing angles using properties of angles on parallel lines and in triangles.
- Find the measure of each of the equal angles.
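Complement and supplement puzzles like "an angle that is 10° more than its complement" reduce to a one-variable linear equation. A sketch (the solver names are ours):

```python
def exceeds_complement_by(d):
    """Solve x = (90 - x) + d: the angle is d degrees more than its complement."""
    return (90 + d) / 2

def below_supplement_by(d):
    """Solve x = (180 - x) - d: the angle is d degrees less than its supplement."""
    return (180 - d) / 2

assert exceeds_complement_by(10) == 50  # check: complement is 40, and 50 = 40 + 10
assert below_supplement_by(30) == 75    # check: supplement is 105, and 75 = 105 - 30
```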
- A straight line has an angle measure of 180°; a straight angle is an angle with measure equal to 180 degrees. You can find the missing angle in one of two ways: if you know the other two angles, you can subtract them from 180.
- Given QR ∥ ST, use parallel-line angle relationships to find the marked angles.
- If the tetherball arena is rectangular, then ∠IBJ = 90°.
- Objective: I can find the measure of an angle associated with a circle; classify quadrilaterals and find missing angle measures in quadrilaterals.
- Using trigonometry to find missing angles of right triangles (note: figures in this section may not be drawn to scale): the right angle is shown by the little box in the corner; another angle is often labeled θ, and the three sides are then named relative to θ.
- An obtuse angle measures more than 90° but less than 180°.
- The perimeter of a figure is the length around it.
- Each math worksheet contains a riddle that the student solves by completing all the problems on the worksheet.
- Determine the measure of the missing angle in each diagram.
The Missing Link / Workshop 1: Proportionality. By adding the measures of angles JMK and KML, we will get the measure of the whole angle; worksheets here cover the Angle Addition Postulate and the Segment Addition Postulate. Type in any equation to get the solution, steps, and graph. How do you find a missing angle measurement using the Angle Addition Postulate? GCSE Maths geometry and measure learning resources for adults and children. Here is a graphic preview for all of the Angles Worksheets. A = ℓ × w, so 36 = 9 × w and 36 = 9 × 4; the width is 4 inches. Grade 5 math worksheets on classifying and measuring angles. Practice: Finding Missing Measures of Angles and Sides (with answer key), created by Summit Math. Find the missing angle in the triangles and the convex quadrilaterals, worksheet #2. This activity is also great for practicing beginning algebra. What do you notice about the exterior angles of any polygon? Quiz & Worksheet: Central and Inscribed Angles — choose an answer and hit "next."
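The rectangle example above (A = ℓ × w, 36 = 9 × w, so w = 4) is just a one-line rearrangement. A minimal sketch, with a function name of my own choosing:

```python
def rectangle_width(area, length):
    """Solve A = l * w for the width w."""
    return area / length

# The worked example from the text: area 36, length 9.
print(rectangle_width(36, 9))  # 4.0
```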
Solution: As with all problems, we must first use the facts that are given to us. Find the missing angle measure using any method. What is the measure of the angle? 21. (Abbreviation: vert. Classify each polygon in as many ways as possible. (ANSWER KEY) Geometry Unit 4 Study Guide In the isosceles trapezoid at the right, what is the measure of <A? Find the missing angle. 2 Find the length of the rectangle shown. Measure all the angles of your quadrilateral. So, $16:(5 25 and 62/87,21 If a quadrilateral is inscribed in a circle, then its opposite angles are supplementary. Measure of angle: Measure of complement: )( Since the equation to be solved is a quadratic equation (an equation with in it), a standard approach is to gather all terms to one side to set the equation equal to zero. Find All Angles Worksheets These Angles Worksheets are great for practicing finding missing angles on a graph using complementary, supplementary, vertical, alternate, and corresponding angle relationships. It is important that students be confident on calculating angles, measuring angles and drawing angles to be successful in their maths exams, but having a solid knowledge of lines and angles can also help students’ understanding of the world. One of the angles is 43°, what is the measure of the other angle? 88. Find the measure of each angle. Gina Wilson All Things Algebra 2014. What we need to remember to find this value is that the sum of the three angles of a triangle will always add up to 180 degrees. Parallel Lines and Angle Pairs . 12. " e measure of one of this angle’s remote interior angles is x 1 23. Students will also use their knowledge of complementary and supplementary angles while finding missing angles in triangles. angle measures. Bolster practice with this set of angles in parallelograms worksheets and develop key skills like finding the indicated vertex and diagonal angles, solve for x using the given vertex and interior angles, find the missing angles and much more. 
This Angles Worksheet is great for practicing finding missing angles on a graph using complementary, supplementary, vertical, alternate, and corresponding angle relationships. The angles of a pentagon are (x – 1)°, (x – 2)°, (x – 3)°, (x – 4)° and (x – 5)°. The ratio of the angle measures in a triangle is 8 : 9 : 19. Complementary angles will form a right angle if they are placed next to each other, but supplementary angles form a straight angle when they are placed next to each other. Grade 5 geometry: classify and measure angles. Lesson Notes: students review key terminology regarding angles before attempting several examples. The actual angle is provided on the answer key, so in addition to identifying whether an angle is obtuse or acute, you can suggest that students measure the angle with a protractor and supply the measurement as part of their work. Showing top 8 worksheets in the category Gina Wilson All Things Algebra 2014. Solve the two equations to find the values of x and y. Find the value of angle A and angle B. Displaying all worksheets related to Triangle Angle Sum. One of the angles is five times the smaller angle; what is the measure of the larger angle? Since the sum of the angles in a triangle is 180, we have 7x + 2x + x = 180. EdPlace teacher Ms Alison explains, with explanations and worksheets. Then answer the following questions and try to develop the theorems that represent these relationships. Puzzle 1. Given: m∠DAI = 40°, m∠FBC = 120°. Find the measure of as many angles as you can. Find the measure of angle b. Find All Angles Worksheets. The remote interior angles are in a ratio of 2:3. SOLUTION: The trapezoid ABCD is an isosceles trapezoid.
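The ratio problems above (angles in ratio 8 : 9 : 19, or the 7x + 2x + x = 180 setup) all follow the same pattern: divide 180° by the sum of the ratio parts. A sketch under that assumption, with a hypothetical function name:

```python
def angles_from_ratio(parts):
    """Triangle angles from a ratio such as 7:2:1, via 7x + 2x + x = 180."""
    unit = 180 / sum(parts)
    return [p * unit for p in parts]

print(angles_from_ratio([7, 2, 1]))   # [126.0, 36.0, 18.0]
print(angles_from_ratio([8, 9, 19]))  # [40.0, 45.0, 95.0]
```

The same helper works for any angle ratio, since the three parts always scale to a 180° total.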
Use the values of the variables to find and . Find the missing angle: 2. Find the missing angle in the triangles and the covex quadrilaterals, worksheet #2. Step 1: Draw right angle ABC on the board. We know, the sum of the angles of a triangle = 180° Since there are two In a quadrilateral ABCD, ∠A = 100°, ∠B = 105° and ∠C = 70°, find ∠D. This keeps kids motivated to complete each problem so that they can find the answer to the riddle. Lines and angles are involved in nearly every aspect of our daily lives. In the figure, m L 2 = 75. Putting it All Together. Day 2: Trig Review and Co-Functions SWBAT: 1) Solve problems involving angle of elevation/depression, and 2) Express sine and cosine in terms of its CoFunction. Answer Keys G w aMda rd Rec 1w8iJtGhg nIunQfBiln hi AtMeh yA 3llgne AbBrMas b2C. Use the figure below. This Blog is built for everyone, we don't charge. Reflex Angles. How to use the Theorem to solve geometry problems and missing angles involving triangles, worksheets, examples and step by step solutions, triangle sum theorem to find the base angle measures given the vertex angle in an isosceles triangle The theorems of the angle relationships include: alternate interior angles are equal, alternate exterior angles are equal, consecutive interior angles are supplementary (equal to 180 degrees to find unknown angle measures . Vertical angles have equal measure. The angles of a pentagon are x°, (x – 5)°, (x + 15)°, (3x – 44)° and (x – 70)°. 7. Give exact answers and answers rounded to the nearest ten-thousandth. Worksheet – Section 3-2 Angles and Parallel Lines. An Acute Angle An acute angle is an angle with a measure between 0 and 90 degrees. These symbols imply the two lines are parallel. Solution: The sum of all the angle 8. We expect it carry something new for Find The Missing Angle Measure Worksheet Along With Special Quadrilaterals Worksheet Worksheets For All. it’s converse • Find . 
In a group, pass the sheet around and have each student find one angle at a time. Math www. Missing Numbers 1 To 20. Unbelievable Printable Math Sheets Find The Missing Angle Math Triangle One of Several Examples From 7 Popular Vertical Angles Worksheet You Will Never Miss Return to "7 Popular Vertical Angles Worksheet You Will Never Miss" Arc Length And Sector Area Worksheet Answers. Pythagorean Theorem Triangles, rectangles, and polygons are a major part of SAT Math geometry. Sep 14, 2001. One of two complementary angles measures 30o more than three times the other. We hope that you find exactly what you need for your home or classroom! Answers may vary. STUDY. Do they total 180°? Use the angles you know to solve each problem. corresponding angles alternate interior angles o alternate exterior angles Also, consecutive interior angles are supplementary. Develop protractor usage skills. Then write steps which show how you figured out the specified angle. When the sum of the measures of two angles is 90°, the angles are . Solution:. (Abbreviation: ∠s at a pt. Write and solve an equation to find the missing angle measures. + 5, mL2 = 10. 7 14 A B C θ 6) 5 B 4 A C θ 7) 11 4. 50°. Angles in Parallelogram. com Central Angles… Measuring Angles. Find the measure of the missing angle. By the inscribed angle property, the measure of WXZ is _____°, and the measure of XZY is _____°. a 5. Molecular geometry worksheet answer key pogil com Lesson 1: Complementary and Supplementary Angles Student Outcomes Students solve for unknown angles in word problems and in diagrams involving complementary and supplementary angles. Some of the worksheets for this concept are Counting practice from 1 to 20, Missing number 1 20, Item 4658 1 missing numbers, 1 3 5 6 8 10 12 14 16 18 20 2 4, Counting practice work, Missing numbers, Kids work org, Fill in the missing numbers below in ascending order. This worksheet provides the student with a set of angles. 
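The complementary-angle word problem above ("one of two complementary angles measures 30° more than three times the other") reduces to x + (3x + 30) = 90. A minimal sketch of both definitions and the solved problem; the helper names are mine:

```python
def complement(angle):
    """Complementary angles sum to 90 degrees."""
    return 90 - angle

def supplement(angle):
    """Supplementary angles sum to 180 degrees."""
    return 180 - angle

# x + (3x + 30) = 90  ->  4x = 60  ->  x = 15, and the other angle is 75.
x = (90 - 30) / 4
print(x, 3 * x + 30)  # 15.0 75.0
```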
This worksheet is about finding the unknown angles in triangles. The sum of the three angles of a triangle is 180°. ) bº a+b = 180. Then, ask the students to name each angle on the 1. 3/5 (4) Brand: TES Triangle Angle Sum. You may select whole numbers or decimal numbers for the 6 problems that are generated per worksheet. 130 Find the measures of the missing angles in the Find each measure. ) is equilateral, and is isosceles. Students need to understand measurement in all parts of life, and these exciting, dynamic worksheets will help students master length, time, volume and other subjects in both English and metric systems as they measure their own progress in leaps and bounds! PRACTICE: Inscribed Angles Worksheet. - Finding Missing Sides and Angles Date_____ Period____ Find the measure of each angle indicated. There are over 85 topics in all, from multi-step equations to constructions. Math · Math Games · Math Worksheets · Algebra · Language Arts · Science · Social Studies · Literature Missing Angles in Triangles Knowing that a triangle contains 180° makes calculating the measure of a missing angle much simpler. Central Angles And Inscribed Angles Worksheet Answer Key Angles In A Circle Worksheet Worksheets for all from Central Angles And Inscribed Angles Worksheet Answer Key , source: bonlacfoods. Give a comprehensive explanation of your work and reasoning - video response is an option as well. Draw a regular hexagon inscribed within the circle C to the right: . Key Words • vertical angles • linear pair 2. View Homework Help - Missing Angles Worksheet Mixed from MATH 231 at Rutherford High School. The answers can be found below. The point at which they meet is the vertex of the angle. Use a ruler or straightedge to draw the shapes. ©J 7220 4182 t MK1u ktoa s NSEowfut UwKaTrhe C ZLDLOC4. 5. h c XMXa9dYeJ 6wtiotYho CI4nhfXiVnYiotWeH EAPlvgueEbZr7a6 E1u. Free trial available at KutaSoftware. Find the angle measures. 
An angle whose measure is 180° is called a Straight Angle. com Practice Worksheet - The jungle is running wild with all types of missing angles. The sum of adjacent angles on a straight line is 180 . 4 Trigonometry helps us find angles and distances, and is used a lot in science, engineering, video games, and more! Right-Angled Triangle. SWBAT: 1) Explore and use Trigonometric Ratios to find missing lengths of triangles, and 2) Use trigonometric ratios and inverse trigonometric relations to find missing angles. Apply the properties of arcs When two parallel lines are cut by a transversal, the following pairs of angles are congruent. (Abbreviation: ∠s on a line. , find the measure each interior angle of the hexagon. Use a protractor to find the Geometry Chapter 4 – Find the missing angle measures. Find the measure of each arc created by the vertices of the hexagon. Page 2. Then classify the triangle by its angle measures. Angles with equal measures are called congruent angles. Finding Missing Angles Displaying all worksheets related to - Finding Missing Angles . pdf FREE PDF DOWNLOAD There could be some typos (or mistakes) below (html to pdf converter made them): angles in polygons worksheet answer All Images Videos Maps News Shop | My saves 1,720,000 Results Any time [PDF] 3. Y Worksheet by Kuta Software LLC Kuta Software - Infinite Algebra 2 Name_____ Right Triangle Trig. Worksheets are 4 angles in a triangle, Triangle, Name date practice triangles and angle sums, Angle sum of triangles and quadrilaterals, Triangle, Sum of the interior angles of a triangle, Triangle, Relationship between exterior and remote interior angles. algebra to find unknown variable. The four angle of a pentagon are equal and the fifth angle measure 140°. Module 5: Equations and Angles Period: ______ Find the measure of all the missing angles. Find x if the angles are supplementary. What is the measure of the angle? 20. 
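The objective above — use trigonometric ratios and inverse trigonometric relations to find missing angles — can be illustrated with the arctangent of the two legs of a right triangle. A sketch only; `angle_from_legs` is a name I've made up for illustration:

```python
import math

def angle_from_legs(opposite, adjacent):
    """Missing acute angle (degrees) of a right triangle, from its two legs."""
    return math.degrees(math.atan2(opposite, adjacent))

# Equal legs give a 45-degree angle; legs 1 and sqrt(3) give 30 degrees.
print(round(angle_from_legs(7, 7), 1))               # 45.0
print(round(angle_from_legs(1, math.sqrt(3)), 1))    # 30.0
```

`math.asin` and `math.acos` play the same role when the known sides are a leg and the hypotenuse.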
Some of the worksheets displayed are Types of angles classify each angle as acute obtuse, A resource for standing mathematics qualifications, 5 angles mep y7 practice book a, Classify and measure the, Name working with reflex angles, Types of angles, Types of angles, Classifying angles l1s1. website answer key to ws 5 Find the indicated measure (side to the nearest tenth or angle to the nearest degree). They have to find missing angle measures of various intersecting lines. Find the measure of unknown angles. To find the third angle of a triangle when the other two angles are known subtract the number of degrees in the other two angles from 180 o. 1-10 92 83 75 67 58 50 42 33 The “missing-angle problems” in this topic occur in many American geometry courses Ask students to find the missing angles in these diagrams. " e measure of the other remote interior angle is 2x 1 12. Challenge Problems IV. Line Segments G/M-1d,G/M-6 Materials: ruler protractor sharp pencil sheet of . Students should be familiar with all of the following: The problem indicates the angle is a right angle or a 90º angle; The problem indicates that one line is perpendicular to the other Find the measure of angle A. parallel lines cut by a transversal theorem. Find the sum of interior angles of a polygon which has half the number of sides of the given polygon. Prior to beginning the instructional portion of this unit, hand each student the Concept Builder worksheet (M-G-6-1_Concept Builder. - Finding Missing S des and Angleôate Find the measure of each angle indicated. 150° 10. triangle. Find here an unlimited supply worksheets for classifying triangles by their sides, angles, or both — one of the focus areas of 5th grade geometry. I usually . How to Find the Measure of an Inscribed Angle 5:09 So, 6LQFH The sum of the measures of the angles of a triangle is 180. Complex Angles I can find the measure of an angle associated with a circle. 
Answers may include the following: vertical angles are equal in measure; Describing angles as supplementary or complementary refers only to the measures of their angles. Find the measures of the missing angles 1 through 15. Let x be the measure of unknown angle in the ILJXUH 62/87,21 The sum of the measures of the angles of a triangle is 180. Each worksheet has 20 problems finding the missing angle to make complementary or supplementary angle. These are the books for those you who looking for to read the Arc Length And Sector Area Worksheet Answers, try to read or download Pdf/ePub books and some of authors may have disable the live reading. m∠ W 5 180 8 8. a) Since x is the angle that we want to find, we will let this angle be our reference angle. Kids love these problems! angles green. The vertex angle of an isosceles triangle is 40 . Line up one side of the angle with the zero line of the protractor (where you see the number 0). Practice III. are angles that have a shared vertex and a shared side. In an isosceles triangle, two sides are equal and two angles are equal. 16. Answer Key Web Resources So we check the sum of all three angles: This lesson is composed of multiple parts that deal with angles and arcs of circles. Grade 9 math Here is a list of all of the math skills students learn in grade 9! These skills Free worksheet(pdf) and answer key on the interior angles of a triangle. 40 Find the measure of each side indicated. The measure of each exterior angle in a regular polygon is 24°. Suitable for any class with geometry content. aº The sum of adjacent angles around a point is 360 . Designed for all levels of learners, from remedial to advanced. The worksheet are available in both PDF and html formats. A finding missing angles worksheet requiring use of the properties of triangles and of complementary and supplementary angles. One of the angles is 57°, what is the measure of the other angle? 87. In these page, we also have variety of images available. 
Infinite Geometry covers all typical Geometry material, beginning with a review of important Algebra 1 concepts and going through transformations. Since the ratio of the angles is 7:2:1, we can write the angles as 7x, 2x, and x, for some x. 24. Find the measure of the five numbered angles at vertex C. Congruence Find the values of the missing angles and give reasons for you answers (the first one is an online component Geometry Worksheets (pdf) with answer keys Chart Maker Use your knowledge about vertical angles to find missing angle measures. The solution to this problem will be slightly different than the solution to the others . 1) 2) 45O 65O 3) 4) 28O 73O 5) 6) 42 O 39 7) 8) 36O 80O Measure of Complementary & Supplementary Angles Measure of Missing Angle Geometry & shapes worksheets (questions & answers) for 1st, 2nd, 3rd, 4th, 5th & 6th grade teachers, parents and students is available for free in printable & downloadable (pdf & image) format. Geometry Worksheets. A B C Supplementary Angles Complementary Angles Right Angle Straight Angle 1. Tes Global Ltd is registered in England (Company No 02017289) with its registered office at 26 Red Lion Square London WC1R 4HQ. Printable Algebra Worksheets With Answer Key >>>CLICK HERE<<< With this worksheet generator, you can make printable worksheets for The answer key is automatically generated and is placed on the second page of the file. All angles that have the same position with regards to the parallel lines and the transversal are corresponding pairs. Problem Solving Motivation. 6° You may select whole numbers or decimal numbers for the 6 problems that are generated per worksheet. The sum of the This is a 24 card game of I have who has. Student Name: _____ Score: Free Math Worksheets @ http://www. Oct 2018 notes; Sept 2017 Example ; Worksheet: Parallel Lines, Angles in Triangles (write angle properties, write an equation for the properties, solve). angles in polygons worksheet answer. 
Two angles are needed to form both kinds of angle pairs. Use the area formula of a rectangle to find its width. and angle measures involve . missing angle=101. The complementary angles worksheet maker creates unique worksheets with up to 16 pairs of complementary angles. 62/87,21 Let x and y be the measure of the larger and smaller Right Triangle Trig. Practice using knowledge of vertical, complementary, and supplementary angles to find a missing angle. Missing Numbers 1 To 20 - Displaying top 8 worksheets found for this concept. Problem 1 : Find the missing angle measure in the triangle given below. Discuss the facts that student recall and use these as a starting point for the lesson. Given: ∠A and ∠B are complementary, and ∠C and ∠B are complementary. fore all the angles are equal. Find the measure of four equal angles. Solution : Step 1 : Write the Triangle Sum Theorem for this triangle. Therefore,$16:(5 101 The complementary angles worksheet maker creates unique worksheets with up to 16 pairs of complementary angles. Twinkl » Australia » Australian Curriculum Browser » NSW Curriculum Browser » Maths » Stage 3 » Measurement and Geometry » Angles 2 » Investigate, with and without the use of digital technologies, angles on a straight line, angles at a point, and vertically opposite angles; use the results to find unknown angles (ACMMG141) Proving Angles Congruent Practice. The angles that directly oppose each other in that setup are called vertical angles. Learn expert strategies to deal with triangle questions and practice on realistic SAT Math problems. 3 Goals G-C. ALGEBRA The measure of the larger acute angle in a right triangle is two degrees less than three times the measure of the smaller acute angle. Showing top 8 worksheets in the category - Reflex Angles. measure in the pentagon. Find the measure of each of the unlabled angles in the previous question. The perimeter of the triangle is 120 feet. Finding Missing Angle. 
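Two polygon facts recur in the problems above: the interior angles of an n-gon sum to (n − 2) × 180°, and the exterior angles of any convex polygon sum to 360°, so a regular polygon with 24° exterior angles has 360 / 24 = 15 sides. A sketch with hypothetical helper names, also solving the pentagon problem (four equal angles plus one 140° angle):

```python
def interior_angle_sum(n):
    """Sum of the interior angles of an n-sided polygon: (n - 2) * 180."""
    return (n - 2) * 180

def sides_from_exterior(exterior):
    """Exterior angles of a convex polygon sum to 360, so n = 360 / exterior."""
    return 360 / exterior

print(sides_from_exterior(24))   # 15.0 sides when each exterior angle is 24 degrees
print(interior_angle_sum(5))     # 540, the pentagon's interior angle sum

# Pentagon with four equal angles and a fifth of 140 degrees: 4x + 140 = 540.
x = (interior_angle_sum(5) - 140) / 4
print(x)                         # 100.0
```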
To solve a triangle means to find all the missing measures of the triangle. In the figure, the given angle and the angle measuring the indicated value are congruent. Topics covered: finding angles; finding missing sides of triangles; finding sine, cosine, and tangent; equations; absolute value equations; distance, rate, and time word problems; mixture word problems; work word problems; one-step and multi-step equations; exponents; graphing exponential functions; operations with scientific notation; properties of exponents; and writing scientific notation. ANSWER KEY: Find the Angles (the answer key may not include all possible answers). Let us examine the following triangle, and learn how to use trigonometry to find x. Find the measure of each side indicated. The two rays are the sides of the angle. Also see the measurement page for more angle worksheets. For vertical angles a° and b°, a = b. Draw a pair of vertical angles with the given measure. Each group member is responsible for accurately drawing two polygons on separate sheets of paper. Since the sum of angles in a triangle is 180°, the measure of WVX is _____°. Teachers, tutors, parents, or students can check or validate the solved questions using the corresponding answer key, which shows the step-by-step work. This is a set of 25 task cards (with and without QR codes) that help to strengthen students' skills in working with parallel lines cut by a transversal. Classify each triangle by its sides and by its angles. For the quadrilateral: 115 + 68 + 45 + x = 360, so 228 + x = 360 and x = 132; the measure of the fourth angle is 132°. EDF and EDC are supplementary and must add up to 180°. Acute, obtuse, and right angles: FEG, JHI, HIJ, IHJ, FGE, EGF, EFG. Finding Missing Angles in Triangles: ∠C = 136°, so the answer is 136. The measures of four angles of a heptagon are equal, and each of the other three angles measures 120°. Their measures will be equal.
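The quadrilateral computation above (115 + 68 + 45 + x = 360, so x = 132) generalizes directly, since the angles of any quadrilateral sum to 360°. A one-function sketch with a name of my own:

```python
def fourth_angle(a, b, c):
    """Fourth angle of a quadrilateral: the four angles sum to 360 degrees."""
    return 360 - (a + b + c)

# The worked example from the text:
print(fourth_angle(115, 68, 45))  # 132
```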
In Lessons 1–4, students parallel lines cut by a transversal worksheet answer key Students will observe that. We will do the angle B first. The Importance of Lines and Angles in Real Life. 1-8 88 Finding Missing Angles. Students us protractors to measure the angle on their card and then write that measurement down. 8 Right Triangle Trig. This product contains two activities; e Directions: Select which relationship the sets of angles will have. The ratio used depends upon what measures are given and what measures are missing. Printable worksheets and online practice tests on lines and angles for grade 7. You can use the Vertical Angles Theorem to find missing angle measures in Essential Question: What are the key ideas about perpendicular bisectors of a segment? Differentiated missing angle worksheets. Find the measure of the missing angles in a parallelogram, if ∠A = 70°. a = _____ angle. Click to find similar content by grade. From basic to more advanced concepts. com Supplementary Angles Complementary Angles Right Angle Straight Angle 1. Complementary Angles and Supplementary angles - relationships of various types of paired angles, with examples, worksheets and step by step solutions, Word Problems on Complementary and Supplementary Angles solved using Algebra, Create a system of linear equations to find the measure of an angle knowing information about its complement and supplement Free worksheet(pdf) and answer key on the distributive property. and . a1 and a2 are a linear pair because they are adjacent and their While these worksheets a suitable for estimating common angle dimensions, some of these worksheets can also be used as practice for measuring angles with a protractor (the correct angle measurement is given in the answer key for each geometry worksheet). a) b) c) Missing Measures: Complimentary and Supplementary Angles For problems 1 – 8, the angles are complimentary. Then classify the triangle by its side lengths. 
Geometry, including angles, has been used throughout history. IBJ and ABIare supplementary, so must also be 90°. Chapter 2- Lesson 1 Missing Angle Puzzles1 Without measuring, deduce the measures of the angles in the diagrams below using the information given. You will be given the measure of one of the angles in each problem, then use your knowledge of parallel lines and transversals to find measurements of the remaining angles. In the figure, the following is true about the value of the degree measurement of angles a and b: 70 < a + b < 150. Label it with M, A, T, H. For a bigger challenge, see if she can tell you the supplementary angle for each one. Find the side lengths. CommonCoreSheets. The measures of two adjacent angles have a ratio of 3 : 5. Start by filling in the missing lengths of the sides. pdf Math / Geometry and measures. For instance, the length of the shorter missing side is 6 because if you add it to the 3 on the left, the result should be the 9 on the right. vertical angles 2. 60°. Now you are ready to create your Angles Worksheet by pressing the Create Button. Right Triangle Trigonometry Page 3 of 15 Solution: We are being asked to find values for x, y, and B. For each of the following: a) write an equation to find the missing angles and b) find the You may need to do your work on a separate sheet of paper. Label the angles with their measures. Free worksheet(pdf) and answer key on the interior angles of a triangle. What others are saying These anchor charts cover a variety of angles and triangles, with both pictures and definitions. Ask how a math problem might indicate that the angle is a right angle. Monday, 5/16/11. Recognize that the snack bar is a right triangle, with the three angles adding up to NAME _____ GEOMETRY UNIT 4 NOTE PACKET Polygon information Date Page(s) Topic Homework 10/31 2 Classifying Triangles by sides & angles Find value for angles in a triangles Pg. for each angle, the angle measures are 106, 47, and 27. 
Plus model problems explained step by step. We hope this brings something new for finding missing angle measures, along with special quadrilaterals worksheets. PRACTICE: Interior and Exterior Angles Worksheet. Look back: did you answer the question that was asked? Add the measures of the three angles. 23 Jan 2013: find missing angle measures of quadrilaterals given a sum of 360°. It also contains worksheets based on finding the value of exterior angles. So, find each measure. Finding Missing Sides and Angles: find the measure of each angle indicated, e.g., a slide. Find m∠2. SOLUTION: Right Triangle Trigonometry, page 3 of 15. We are being asked to find values for x, y, and B (Triangle Facts). Remember that a triangle has a total of 180 degrees. Interior and Exterior Angles: I can find the measure of an angle associated with a circle. Two angles of a quadrilateral measure 85° and 75° respectively. The diagram above illustrates the Triangle Angle Sum Theorem. Geometry and Measurement Worksheet Answer Key (Measure Twice, Cut Once): find the missing side of the triangle. All the angles in a triangle add up to 180. Let's do some examples involving the Triangle Sum Theorem to help us see its utility. Proof of the Triangle Sum Theorem. Practice Worksheet: find an assortment of missing angles and angle sums. An obtuse angle is an angle with a measure between 90 and 180 degrees. Worksheet on Quadrilateral: this particular set contains 5 angle puzzles with 5 complete answer keys. Parallel lines cut by a transversal, 8th grade: students will be given handout worksheets for class and PDF work. Part B: Finding missing angle measures in triangles worksheet, with solution. Find the angles.
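The remote-interior-angle problems mentioned in this page rest on the Exterior Angle Theorem: an exterior angle of a triangle equals the sum of the two remote interior angles. A sketch checking the theorem against the Triangle Angle Sum Theorem, using the 106°/47°/27° triangle that appears earlier in the text; the function name is mine:

```python
def exterior_angle(remote1, remote2):
    """Exterior Angle Theorem: exterior angle = sum of the two remote interior angles."""
    return remote1 + remote2

a, b = 47, 27        # two interior angles of the 106-47-27 triangle from the text
c = 180 - a - b      # 106, by the Triangle Angle Sum Theorem

# The exterior angle at c is 180 - c, and it must equal a + b.
print(exterior_angle(a, b), 180 - c)  # 74 74
```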
What is the measure of each angle in a regular octagon? Exterior Angles: refer to the two polygons below. NAMING ANGLES: name three different angles in the diagram at the right. An angle separates the plane into the interior region, the exterior region, and the angle itself. Demonstrate the two methods of naming an angle: by its vertex only, or by three points on the angle with the vertex listed between the other two points. 9) Find the measure of the largest angle in the quadrilateral. 10) Find the measure of the smallest angle in the quadrilateral. (Given angles: 115°, 46°, 46°, 100°, 82°.) Guided Lesson Explanation: I treat this one very simplistically. What is the measure of the angle? Refer to the figure to the right to answer question 22. We will also practice identifying the corresponding parts of congruent or similar triangles. Angles Worksheets: Determining Angles with Protractors; Measuring Angles with a Protractor Answer Key, Teacher Resource. The complement of an angle is 39°. The measurement of EDF is 35°. Students find the missing angles in assorted problems. Answer Key Web Resources: so we check the sum of all three angles. Round to the nearest tenth of a degree. Which angle is a vertical angle to ∡mns? (1 answer) The supplement of an angle is 139°. Most of the worksheets on this page align with the Common Core Standards. This 4th grade geometry lesson explains angle measure and how to measure angles.
What is the measure of an angle, if three is subtracted from twice the supplement and the result is 297 degrees? 21. third angle is twice the sum of the first two angles. They explain why angles are measured in degrees, and identifies the parts of an angle. 4 Vertical Angles Determine whether the labeled angles are vertical angles, a linear pair, or neither. 5) Answer the following questions about adjacent angles. Remember, the sum of all four angles must be 360 . PRACTICE: Mixed Angles Worksheet . m 3 = 174, Part I: Angle relationships in triangles. Step 2 : Substitute the given angle measures. G w aMda rd Rec 1w8iJtGhg nIunQfBiln hi AtMeh yA 3llgne AbBrMas b2C. Fill in the correct angle measurement. Stair Railing: A stair railing is designed as shown in the figure. Equation practice with vertical angles · Practice: Create equations to solve for missing angles · Practice: Unknown angle problems (with algebra) · Next lesson. Include Angles Worksheet Answer Page. 27° 153°. An angle whose measure is greater than 90° and less than 180° is called an Obtuse Angle. Construct a pair of vertical angles. Worksheets available online . Angles Five Worksheet Pack - Hope you brushed up on your angle vocabulary for this one. 12 56 87 12 g 10 12 11 56 13 14 13 15 m LI = rnL4 = rnL7 = 105 105 75 - 105 105 75 vertical angles, then KHL must also equal 35°. The triangle of most interest is the right-angled triangle. COMPLEMENTARY & SUPPLEMENTARY ANGLES Find the measure of angle b. Find the measure of all angles in the triangles below. Practice 14. In this angles in polygons worksheet, students determine the value of a missing angle measure. 5 Answer Key 1. When students find the values of all of the variables, they can go back and use the information to find the lengths of any other missing sides. Use the diagram to find each angle measure. Plus model problems explained step by step 1. 
150 # 4,23,24,26 11/1 3 Remote Interior Angles & Exterior Angles The Exterior Angle Theorem Worksheet (circled questions) 11/4 4 QUIZ The download includes two answer sheets – one with and one without the challenge question answers. to find unknown 3) Find the measure of the supplement of each angle. Students will have to do the following:· Identify and find missing angle measures in the different types of angles formed by parallel lines cut by a transversal. Find the measures of angles 1–15. A quadrilateral has three acute angles, each measuring 75°. 8 Easier to grade more in depth and best of all. Geometry worksheets angles worksheets for practice and study. Complementary Angles Two angles are complementary if the sum of their measures is equal to 90 degrees. Triangles and Angle Sums. Answers Key. For detailed geometry worksheets, see the geometry packs. C b. The supplement of an angle is 67°. It has an answer key attached on the second page. facts, missing lengths You get 12 Task cards, a recording sheet, and answer key. Example: How many degrees are in the third angle of a triangle whose other two angles are 40° and 65°? Answer: 180° - 40° - 65° = 75° Right Triangle Trig. Have students compare the angles from the warm-up worksheet to the angles on the “Naming and Measuring Angles” worksheet. Construct an angle that measures 25 degrees. Measure all the interior angles inside the shapes on your worksheets using a Guided Lesson - Three great scenarios of finding missing angles of triangles. You will need to register for a TES account to access this resource, this is free of charge. Check your answer to c. Answers for Unit 8 Quiz Review. In each of the following circles, use the angle properties to find the missing angles. Find the measure of each exterior angle of a polygon with 18 sides. The sum of the measures of the interior angles of a triangle is 180. 
Which angle can be described as consecutive exterior angle with ∡ ú : (1 answer) 14. Students will work with circumference, central angles, arc length, diameter, chords, and inscribed angles. Original and Answer Key The theorems of the angle relationships include: alternate interior angles are equal, alternate exterior angles are equal, consecutive interior angles are supplementary (equal to 180 degrees). Sum of Angles in Polygons Worksheet Answer Key Part 1: Drawing Polygon Shapes 1. Sometimes, more than one ratio can be used. Z 95 MEASURING ANGLES Trace the diagram and extend the rays. Put that protractor to good use! In this geometry worksheet your student will practice measuring each of these angles using a protractor. Angle Calculator - Isosceles Triangles - Measure Angles and Side Lengths by entering 2 known values Enter Side Lengths and either top Angle or Base length to calculate all other side lengths, angles, triangle height and area, and re-draw the scaled diagram. the following pairs of angles are congruent. ) 1. E Worksheet by Kuta We can use the Law of Cosines, in the form where it is solved for the cosine of an angle, to find the measures of two of the angles, then use the fact that the sum of the measures of the angles is 180° to find the third: WorksheetWorks. Find the measure of each of these angles.
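Several of the prompts above (each angle of a regular octagon, each exterior angle of a polygon with 18 sides) come down to two standard formulas: each interior angle of a regular n-gon measures (n - 2) · 180 / n degrees, and each exterior angle measures 360 / n degrees. A minimal sketch in Python (the function names are mine, not from any worksheet):

```python
def interior_angle(n):
    """Measure, in degrees, of each interior angle of a regular n-gon."""
    return (n - 2) * 180 / n

def exterior_angle(n):
    """Measure, in degrees, of each exterior angle of a regular n-gon."""
    return 360 / n

# Regular octagon: each interior angle is 135 degrees.
print(interior_angle(8))   # 135.0
# Polygon with 18 sides: each exterior angle is 20 degrees.
print(exterior_angle(18))  # 20.0
```

Note that `interior_angle(n) + exterior_angle(n)` is always 180, which makes a quick consistency check for worksheet answers.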
The molal boiling point elevation constant for water is 0.51°C/m. Thus a 1.00 m aqueous solution of a nonvolatile molecular solute such as glucose or sucrose will have an increase in boiling point of 0.51°C, to give a boiling point of 100.51°C at 1.00 atm. (Boiling-point elevation describes the phenomenon that the boiling point of a liquid (a solvent) will be higher when another compound is added, meaning that a solution has a higher boiling point than a pure solvent.) ## What is the boiling point of 0.1 M glucose? The $0.1$ molal aqueous solution of glucose boils at $100.16^\circ {\text{ C}}$. The boiling point of $0.5$ molal aqueous solution of sucrose will be: A. $500.80^\circ {\text{ C}}$. ## What will be the boiling point of 0.1 m? The boiling point of 0.1 molal aqueous solution of urea is 100.18^(@)C at 1 atm. ## Which of the following has higher boiling point 0.1 m NaCl or 0.1 M glucose? Answer : 0.1 M NaCl has the higher boiling point. ## Which of the following has highest boiling point 0.1 M glucose? The right answer is b) 0.1 M BaCl2 has the highest boiling point. ## Why KCl has higher boiling point than glucose? This is because KCl dissociates in water to give two ions (KCl → K+ + Cl-), whereas glucose does not dissociate. Therefore, the number of solute particles is greater in 0.1 m KCl as compared to 0.1 m glucose. Hence, the elevation in boiling point in 0.1 m KCl will be higher and therefore, its boiling point will be higher. ## What has the highest boiling point? The chemical element with the lowest boiling point is Helium and the element with the highest boiling point is Tungsten. 
## Which has low boiling point? The chemical element with the lowest boiling point is Helium and the element with the highest boiling point is Tungsten. The unit used for the melting point is Celsius (°C). ## Which will have a higher boiling point 0.1 m NaCl or 0.1 m BaCl2 solution in water? Since both salt solutions have the same concentration, the salt which gives the maximum number of ions will show the higher boiling point. NaCl dissociates to give two ions, while BaCl2 dissociates to give three. Hence BaCl2 possesses the higher boiling point. ## What is the boiling point of 0.1 m na2so4? Boiling Point/Range 100°C. Partition Coefficient N/A. (log POW). Flash Point: N/A. ## Which has maximum boiling point 0.1 N NaCl? 0.1 M aqueous solution of ferric chloride has the highest boiling point. The dissociation of one molecule of ferric chloride gives the maximum number of ions (four ions) in the solution. Glucose is a non-electrolyte and it does not undergo dissociation. ## Which one has highest boiling point 0.1 N Na2SO4? Among the given alternatives, Na2SO4 will exhibit the highest boiling point since it will dissociate into 3 ions when dissolved in the solution. ## What is the boiling point of glucose? Predicted data is generated using the ACD/Labs Percepta Platform – PhysChem Module Density: 1.6±0.1 g/cm 3 Boiling Point: 527.1±50.0 °C at 760 mmHg Vapour Pressure: 0.0±3.1 mmHg at 25°C Enthalpy of Vaporization: 92.2±6.0 kJ/mol Flash Point: 286.7±26.6 °C. ## Which one has the highest melting point? The chemical element with the lowest melting point is Helium and the element with the highest melting point is Carbon. ## Which has higher boiling point 0.1 m NaCl or 0.1 m CaCl2? 0.1 M CaCl_(2) has higher boiling point than 0.1 M NaCl. 0.1 M glucose exerts higher osmotic pressure than 0.08 M CH_(3)COOH (25% dissociated). ## Which has maximum boiling point at one atmospheric pressure? The van't Hoff factor is the number of ions a compound dissociates into. 
So here we can see that the highest value of the van't Hoff factor is that of BaCl2, so it will have the largest boiling-point elevation and consequently the highest boiling point among all. So, the correct answer is Option C. ## Which has higher boiling point NaCl or cacl2? CaCl2 would be more effective, because calcium chloride dissociates into three ions and sodium chloride dissociates into two, which makes the boiling point of water with calcium chloride higher. ## Which solution has lowest freezing point? Remember, the greater the concentration of particles, the lower the freezing point will be. 0.1mCaI2 will have the lowest freezing point, followed by 0.1mNaCl, and the highest of the three solutions will be 0.1mC6H12O6, but all three of them will have a lower freezing point than pure water. ## Which is not a Colligative property? Both solutions have the same freezing point, boiling point, vapor pressure, and osmotic pressure because those colligative properties of a solution only depend on the number of dissolved particles. Other non-colligative properties include viscosity, surface tension, and solubility. ## What increases boiling point? Compounds that can hydrogen bond will have higher boiling points than compounds that can only interact through London dispersion forces. An additional consideration for boiling points involves the vapor pressure and volatility of the compound. Typically, the more volatile a compound is, the lower its boiling point. ## Do alkynes have higher boiling points? Alkynes have higher boiling points than alkanes or alkenes, because the electric field of an alkyne, with its increased number of weakly held π electrons, is more easily distorted, producing stronger attractive forces between molecules. ## Which order of boiling point is correct? The decreasing order of boiling points (highest to lowest) is as follows: (III) > (I) > (II), alcohol > ether > alkane. 
Therefore, it has the highest boiling point. Key Terms. Compound B does not form a hydrogen bond. ## Which solution has lowest boiling point? Since C6H12O6 has the lowest value of the van't Hoff factor, i, and i is directly proportional to the boiling-point elevation, C6H12O6 has the lowest boiling point. ## Which one has the lowest boiling point * 1 point? Thus, option A is the correct answer, i.e. CH3Cl has the lowest boiling point. Note: Like the boiling point, the melting point of a solid also varies depending on the same factors. Among intermolecular forces, the forces caused by hydrogen bonding are the strongest and lead to the higher boiling point among the molecules. ## Which has lowest melting point? Melting points of elements (reference): Helium 0.95 K (-272.05 °C); Hydrogen 14.025 K (-258.975 °C); Neon 24.553 K (-248.447 °C); Oxygen 50.35 K (-222.65 °C). ## Does KCl have a high boiling point? 2,588°F (1,420°C). ## Which one will have higher boiling point 0.1 M sucrose or 0.1 M KCl solution? The 0.1 molal aqueous solution of KCl will have the higher boiling point because it is an electrolyte solute: on addition to the solvent it gives 2 ions, so the number of solute particles increases and the boiling-point elevation is larger. ## What is the boiling point of bacl2? 2,840°F (1,560°C). ## Which solution has the highest boiling point 1.0 M kno3? Hence, 1.0 M Na2SO4 has the highest boiling point. ## What is the normal boiling point of the solution represented by the phase diagram? It is the state where the vapour pressure of the solution becomes equal to atmospheric pressure, so point D represents the normal boiling point of the solution. So, Point D is the correct answer. ## Which aqueous solution will have the lowest boiling point temperature? The solution with the lowest boiling point is 0.50 m LiCl (Choice C).
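Most of the comparisons above follow from the ideal colligative-property formula ΔTb = i · Kb · m, with Kb = 0.51 °C/m for water as stated at the top of this section. A sketch in Python, assuming ideal behavior and complete dissociation (so i = 1 for glucose, 2 for NaCl and KCl, 3 for BaCl2 and CaCl2), which real solutions only approximate:

```python
KB_WATER = 0.51  # molal boiling-point elevation constant of water, deg C per molal

def boiling_point(molality, vant_hoff_i, kb=KB_WATER, pure_bp=100.0):
    """Ideal boiling point of an aqueous solution: pure_bp + i * Kb * m.
    Assumes complete dissociation and no ion pairing."""
    return pure_bp + vant_hoff_i * kb * molality

# 1.00 m glucose reproduces the 100.51 deg C figure quoted above.
print(boiling_point(1.00, 1))
# 0.1 m solutions, ranked by van't Hoff factor (BaCl2 highest, glucose lowest):
for solute, i in [("glucose", 1), ("NaCl", 2), ("BaCl2", 3)]:
    print(solute, boiling_point(0.1, i))
```

The same i · m product, with the cryoscopic constant in place of Kb, ranks freezing-point depressions as well.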
Serving the Quantitative Finance Community pcaspers Topic Author Posts: 712 Joined: June 6th, 2005, 9:49 am ### Carr / Madan: A note on sufficient conditions for no arbitrage I have a question on the paper "A note on sufficient conditions for no arbitrage" by Carr / Madan: They say \sum_{i=1}^\infty q_{i,j} = 1. To me it seems this sum equals Q_{1,j} = (S_0 - C_{1,j}) / K_1 != 1. Do we have to add an additional strike, possibly K = K_0 = 0, and attach the probability 1 - Q_{1,j} to it to complete the probability distribution? Less important, do we have to put an additional condition on the sequence of strikes to ensure (C_{i,j} - C_{i+1,j}) / (K_{i+1}-K_i) tends to zero when i goes to infinity, like "there is an epsilon > 0 s.t. K_{i+1}-K_i > epsilon for all i"? Cuchulainn Posts: 64681 Joined: July 16th, 2004, 7:38 am Location: Drosophila melanogaster Contact: ### Re: Carr / Madan: A note on sufficient conditions for no arbitrage Howdy Peter, Here is an article by Carr and Itkin, might be something to be gleaned? https://engineering.nyu.edu/sites/defau ... ammaMo.pdf And closer, the MSc thesis of Lykke Rasmussen, section 6 (BTW having difficulty reading your LATEX). She has 3 conditions for no-arbitrage. I think Joerg knows the thesis. Attachments Mar2012_MasterThesis_LykkeRasmussen.pdf Last edited by Cuchulainn on December 14th, 2020, 9:52 pm, edited 1 time in total. "Compatibility means deliberately repeating other people's mistakes." David Wheeler http://www.datasimfinancial.com http://www.datasim.nl Cuchulainn Posts: 64681 Joined: July 16th, 2004, 7:38 am Location: Drosophila melanogaster Contact: ### Re: Carr / Madan: A note on sufficient conditions for no arbitrage And I quote $\frac{\partial C}{\partial K} \le 0$ $\frac{\partial^2 C}{\partial K^2} \ge 0$ $\frac{\partial C}{\partial \tau} \ge 0$ Does this make sense? "Compatibility means deliberately repeating other people's mistakes." 
David Wheeler http://www.datasimfinancial.com http://www.datasim.nl Alan Posts: 10652 Joined: December 19th, 2001, 4:01 am Location: California Contact: ### Re: Carr / Madan: A note on sufficient conditions for no arbitrage I have a question on the paper "A note on sufficient conditions for no arbitrage" by Carr / Madan: They say \sum_{i=1}^\infty q_{i,j} = 1. To me it seems this sum equals Q_{1,j} = (S_0 - C_{1,j}) / K_1 != 1. Do we have to add an additional strike, possibly K = K_0 = 0, and attach the probability 1 - Q_{1,j} to it to complete the probability distribution? Less important, do we have to put an additional condition on the sequence of strikes to ensure (C_{i,j} - C_{i+1,j}) / (K_{i+1}-K_i) tends to zero when i goes to infinity, like "there is an epsilon > 0 s.t. K_{i+1}-K_i > epsilon for all i"? I agree with your sum and agree that an additional (unstated) condition seems to be needed. Since they are doing a lattice version of the continuum problem, let's first look at that. Fixing the maturity T, the call value is $C(K)$. Breeden and Litzenberger say that $q(K) \equiv C_{KK}(K)$ can be interpreted, under some conditions, as the pdf of finding the stock price at strike K on maturity. What are the conditions? If the stock price cannot actually reach 0, then for mass preservation we need $1 = \int_{0^+}^{\infty} C_{KK} \, dK = C_K |^{K=\infty}_{K=0^+}$ The upper end is not problematic: since C(K) is decreasing to 0, $C_K$ must be decreasing to 0 as well. But, at the lower end, we need the condition that (*) $C_K(0^+) = -1$. In the Carr-Madan lattice version this would be the condition that $1 = \frac{C_0 - C_1}{K_1 - K_0} = \frac{S_0 - C_1}{K_1}$. So you need to assume that $C_1 = S_0 - K_1$ as the lattice version of (*). 
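Alan's lattice argument is easy to check numerically. The sketch below is my own helper (names and toy numbers are hypothetical, not from the paper): it prepends the point K = 0, C = S_0, takes slopes of the call curve, and reads off the butterfly mass at each strike as the change in slope; whatever slope is left at the last strike is reported as tail mass at or beyond it.

```python
def discrete_density(strikes, calls, s0):
    """Discrete Breeden-Litzenberger: butterfly probabilities from call
    prices at a single maturity, with the point (K=0, C=S0) prepended."""
    ks = [0.0] + list(strikes)
    cs = [float(s0)] + list(calls)
    slopes = [(cs[i + 1] - cs[i]) / (ks[i + 1] - ks[i])
              for i in range(len(ks) - 1)]
    # mass at strike K_i = change in slope across K_i (a butterfly weight)
    q = [slopes[i] - slopes[i - 1] for i in range(1, len(slopes))]
    # mass at/beyond the last strike, assuming the slope has decayed to zero
    tail = -slopes[-1]
    return q, tail

# Toy check: S_T is 80 or 120 with probability 1/2 each (zero rates),
# so C(80) = 20, C(100) = 10, C(120) = 0 with S0 = 100.
q, tail = discrete_density([80.0, 100.0, 120.0], [20.0, 10.0, 0.0], 100.0)
print(q, tail)  # [0.5, 0.0] 0.5 -- total mass 1, as it should be
```

With the prepended point, the total mass q plus tail always sums to one; any "missing mass" relative to the quoted strikes shows up at or below K_1, as discussed later in the thread.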
Cuchulainn Posts: 64681 Joined: July 16th, 2004, 7:38 am Location: Drosophila melanogaster Contact: ### Re: Carr / Madan: A note on sufficient conditions for no arbitrage Alan, If you differentiate the Dupire PDE twice with respect to K you get a PDE with zero BC and Dirac payoff, so it's a pdf, yes? We discussed this for the BS PDE in the same vein (but transformed to (0,1)). A pdf is always >= 0, and that can be proved by PDE theory. Is the Dupire PDE just a common-or-garden PDE at the end of the day? "Compatibility means deliberately repeating other people's mistakes." David Wheeler http://www.datasimfinancial.com http://www.datasim.nl Alan Posts: 10652 Joined: December 19th, 2001, 4:01 am Location: California Contact: ### Re: Carr / Madan: A note on sufficient conditions for no arbitrage Dupire/Gyongy/Local vol model is somewhat specialized theory, in the sense that it requires a diffusion process (or at least an Ito process) for the asset price. But, yes, 2 $K$-derivatives should give a PDE for $q(K,T)$ if you know the local vol function. And (usually) 0's at the spatial boundaries, as you say. Breeden-Litzenberger is more general with minimal assumptions. Basically, just need existence of a (norm-preserving) $q(K,T)$ at a given option expiration $T$. Doesn't require any type of dynamics: diffusions or whatever. The particle can just magically appear at expiration. pcaspers Topic Author Posts: 712 Joined: June 6th, 2005, 9:49 am ### Re: Carr / Madan: A note on sufficient conditions for no arbitrage In the Carr-Madan lattice version this would be the condition that $1 = \frac{C_0 - C_1}{K_1 - K_0} = \frac{S_0 - C_1}{K_1}$. So you need to assume that $C_1 = S_0 - K_1$ as the lattice version of (*). Thanks. I am a bit reluctant to make this assumption, since $K_1 > 0$, so this would only hold if the stock has zero volatility? 
pcaspers Topic Author Posts: 712 Joined: June 6th, 2005, 9:49 am ### Re: Carr / Madan: A note on sufficient conditions for no arbitrage And I quote $\frac{\partial C}{\partial K} \le 0$ $\frac{\partial^2 C}{\partial K^2} \ge 0$ $\frac{\partial C}{\partial \tau} \ge 0$ Does this make sense? It does. I am setting up an arbitrage-checker on a discrete grid though and want some theoretical foundation for that. Alan Posts: 10652 Joined: December 19th, 2001, 4:01 am Location: California Contact: ### Re: Carr / Madan: A note on sufficient conditions for no arbitrage In the Carr-Madan lattice version this would be the condition that $1 = \frac{C_0 - C_1}{K_1 - K_0} = \frac{S_0 - C_1}{K_1}$. So you need to assume that $C_1 = S_0 - K_1$ as the lattice version of (*). Thanks. I am a bit reluctant to make this assumption, since $K_1 > 0$, so this would only hold if the stock has zero volatility? The continuum limit version would be something like (**) $C(K) = S_0 - K + o(K), \quad \mbox{as} \quad K \rightarrow 0$. Likely the precise nature of the sub-leading terms depends on whether or not the stock price can reach the origin. It's probably been done carefully in somebody's paper; you might look for discussions of the Breeden-Litzenberger formula generalized to models that allow bankruptcy. Another thing to check is the behavior implied by Roger Lee's no-arb condition for the implied vol at extreme strikes. If you don't like my relation on the lattice, an alternative might be: require only that $C_1 \ge S_0 - K_1$ and $C_1 > C_2$ and interpret any "missing mass" as the probability of finding the stock at the origin (or at least below $K_1$). This was more or less your original idea of how to handle it. As a practical matter, if your arbitrage checker uses for $K_1$ some smallest quotable strike, then there's certainly a non-zero probability of finding the stock price below that value. 
pcaspers Topic Author Posts: 712 Joined: June 6th, 2005, 9:49 am ### Re: Carr / Madan: A note on sufficient conditions for no arbitrage That makes sense. Actually, in the paper, they start without making any model assumptions; they just impose some no-arbitrage conditions on a discrete set of observed option prices. From that, they construct a discrete probability distribution for $S_{T_j}$ at each maturity $T_j$ compatible with the observed prices. I think it's okay to have $P(S_{T_j} = 0) > 0$ in this model. In a second step they argue that under an additional calendar spread arbitrage condition there exists a discrete martingale generating the observed call prices, thereby proving that the call price matrix is free of (static) arbitrage. For the arbitrage-checker all that does not really matter; the arbitrage conditions in the paper include the point $K_0=0, C_0=S_0$ in a reasonable way. I was just a little confused about the subsequent derivation of arbitrage-freeness. Cuchulainn Posts: 64681 Joined: July 16th, 2004, 7:38 am Location: Drosophila melanogaster Contact: ### Re: Carr / Madan: A note on sufficient conditions for no arbitrage Dupire/Gyongy/Local vol model is somewhat specialized theory, in the sense that it requires a diffusion process (or at least an Ito process) for the asset price. But, yes, 2 $K$-derivatives should be a PDE for $q(K,T)$ if you know the local vol function. And (usually) 0's at the spatial boundaries, as you say. Breeden-Litzenberger is more general with minimal assumptions. Basically, just need existence of a (norm-preserving) $q(K,T)$ at a given option expiration $T$. Doesn't require any type of dynamics: diffusions or whatever. The particle can just magically appear at expiration. I went through the steps in deriving Breeden-Litzenberger, starting from the expectation for call C. Let's instead take the PDE for C and differentiate wrt K and KK to produce 2 pdes with a Heaviside and Dirac, respectively. 
I have done it for 'normal' delta and gamma by approximating it by fdm. Some remarks and questions 1. I don't need to solve FPE to get the transition probability function. I could do it but haven't got around to it yet. 2. My approach looks like the PDE version of the Derman/Kani probabilistic argument. I would say that they are essentially the same: SDE->Ito->PDE. 3. Computing theta using my PDE approach is not clear. $T$ is nowhere to be found in the pde nor the payoff. Unless we view $\sigma(T)$ as holding $T$. 4. I'm not sure when/how to pull $\sigma^{2}(K,T)$ out of the hat as it were. The Continuous Sensitivity Equation (CSE) is discussed here. https://www.datasim.nl/application/file ... hesis_.pdf Dupire PDE on interval (0,1) https://onlinelibrary.wiley.com/doi/epd ... wilm.10014 "Compatibility means deliberately repeating other people's mistakes." David Wheeler http://www.datasimfinancial.com http://www.datasim.nl pcaspers Topic Author Posts: 712 Joined: June 6th, 2005, 9:49 am ### Re: Carr / Madan: A note on sufficient conditions for no arbitrage On 5 I believe it's because they have a specific goal in mind, namely showing that the conditions they state are sufficient for no-arbitrage. The discrete model they construct is the most parsimonious and natural way to do that.
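For the arbitrage-checker pcaspers describes, the conditions quoted earlier in the thread (calls non-increasing and convex in strike, non-decreasing in maturity) plus the model-free bounds max(S_0 - K, 0) <= C <= S_0 can all be tested with finite differences. A sketch under my own naming conventions, not the paper's exact formulation:

```python
def check_static_arbitrage(strikes, maturities, prices, s0, tol=1e-12):
    """Check the thread's conditions on a grid of call prices:
    C non-increasing and convex in K, non-decreasing in maturity, and
    within the model-free bounds max(S0 - K, 0) <= C <= S0.
    prices[j][i] is the call at maturity maturities[j], strike strikes[i].
    Returns a list of (condition, maturity_index, strike_index) violations."""
    bad = []
    for j, row in enumerate(prices):
        for i, c in enumerate(row):
            if not (max(s0 - strikes[i], 0.0) - tol <= c <= s0 + tol):
                bad.append(("bounds", j, i))
        # vertical spreads: C must be non-increasing in K
        for i in range(len(row) - 1):
            if row[i + 1] > row[i] + tol:
                bad.append(("monotone_K", j, i))
        # butterflies: C must be convex in K (non-uniform grid form)
        for i in range(1, len(row) - 1):
            left = (row[i - 1] - row[i]) / (strikes[i] - strikes[i - 1])
            right = (row[i] - row[i + 1]) / (strikes[i + 1] - strikes[i])
            if right > left + tol:
                bad.append(("convex_K", j, i))
    # calendar spreads: C non-decreasing in maturity at a fixed strike
    for j in range(len(maturities) - 1):
        for i in range(len(strikes)):
            if prices[j + 1][i] < prices[j][i] - tol:
                bad.append(("monotone_T", j, i))
    return bad

# A deliberately bad grid: the call price rises with strike.
print(check_static_arbitrage([80, 100, 120], [1.0], [[20, 22, 0]], 100))
```

A full checker would also include the K_0 = 0, C_0 = S_0 point and the smallest-strike slope condition discussed above; this sketch only covers the quoted inequalities.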
# zbMATH — the first resource for mathematics Tournament sequences and Meeussen sequences. (English) Zbl 0973.11028 Electron. J. Comb. 7, No. 1, Research paper R44, 16 p. (2000); printed version J. Comb. 7, No. 2 (2000). A tournament sequence [see P. Capell and T. V. Narayana, Can. Math. Bull. 13, 105-109 (1970; Zbl 0225.60006)] is an increasing sequence of positive integers $$(t_1, t_2,\dots)$$ such that $$t_1=1$$ and $$t_{i+1}\leq 2t_i$$. The authors define a Meeussen sequence to be an increasing sequence of positive integers $$(m_1, m_2,\dots)$$ such that $$m_1=1$$, every non-negative integer is the sum of a subset of the $$\{m_i\}$$ and each integer $$m_i-1$$ is the sum of a unique such subset. They then show that Meeussen sequences are precisely the tournament sequences, by exhibiting a bijection between the two sets of sequences which respects the natural tree structure on each set. They also present an efficient way of counting these sequences, and discuss the asymptotic growth of the number of sequences. ##### MSC: 11B83 Special sequences and polynomials 05A15 Exact enumeration problems, generating functions 05A16 Asymptotic enumeration Full Text:
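The tournament-sequence definition invites a quick computational check. The sketch below is my own code (reading "increasing" as strictly increasing): it tests the condition on a finite prefix, and counts tournament sequences of a given length by enumerating the tree in which the children of a term t are t+1, ..., 2t.

```python
def is_tournament_prefix(t):
    """Tournament condition on a finite prefix: t_1 = 1 and, reading
    'increasing' strictly, t_i < t_{i+1} <= 2 * t_i for every i."""
    if not t or t[0] != 1:
        return False
    return all(a < b <= 2 * a for a, b in zip(t, t[1:]))

def count_tournament(n):
    """Count tournament sequences of length n by walking the tree:
    the children of a term t are t + 1, ..., 2 * t."""
    def extend(last, remaining):
        if remaining == 0:
            return 1
        return sum(extend(nxt, remaining - 1)
                   for nxt in range(last + 1, 2 * last + 1))
    return extend(1, n - 1)

print(is_tournament_prefix([1, 2, 4, 8]))          # True
print(is_tournament_prefix([1, 2, 5]))             # False: 5 > 2 * 2
print([count_tournament(n) for n in range(1, 6)])  # [1, 1, 2, 7, 41]
```

The brute-force counts grow quickly, which is why the authors' efficient counting method matters; this enumeration is only for checking small cases.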
# Precalculus with Limits ## Educators ### Problem 1 $\mathrm{An}$ _____ _____ is a function whose domain is the set of positive integers. Check back soon! ### Problem 2 A sequence is a _____ sequence when the domain of the function consists only of the first $n$ positive integers. Check back soon! ### Problem 3 If you are given one or more of the first few terms of a sequence, and all other terms of the sequence are defined using previous terms, then the sequence is said to be defined _____. Check back soon! ### Problem 4 If $n$ is a positive integer, then $n$ _____ is defined as $n !=1 \cdot 2 \cdot 3 \cdot 4 \cdots(n-1) \cdot n$ Check back soon! ### Problem 5 For the sum $$\sum_{i=1}^{n} a_{i},$$ $i$ is called the _____ of summation, $n$ is the _____ limit of summation, and 1 is the _____ limit of summation. Check back soon! ### Problem 6 The sum of the terms of a finite or infinite sequence is called a ____. Check back soon! ### Problem 7 Writing the Terms of a Sequence In Exercises $7-22,$ write the first five terms of the sequence. (Assume that $n$ begins with $1 .$ ) $$a_{n}=4 n-7$$ Check back soon! ### Problem 8 Writing the Terms of a Sequence In Exercises $7-22,$ write the first five terms of the sequence. (Assume that $n$ begins with $1 .$ ) $$a_{n}=2-\frac{1}{3^{n}}$$ Check back soon! ### Problem 9 Writing the Terms of a Sequence In Exercises $7-22,$ write the first five terms of the sequence. (Assume that $n$ begins with $1 .$ ) $$a_{n}=(-2)^{n}$$ Check back soon! ### Problem 10 Writing the Terms of a Sequence In Exercises $7-22,$ write the first five terms of the sequence. (Assume that $n$ begins with $1 .$ ) $$a_{n}=\left(\frac{1}{2}\right)^{n}$$ Check back soon! ### Problem 11 Writing the Terms of a Sequence In Exercises $7-22,$ write the first five terms of the sequence. (Assume that $n$ begins with $1 .$ ) $$a_{n}=\frac{n}{n+2}$$ Check back soon! 
### Problem 12 Writing the Terms of a Sequence In Exercises $7-22,$ write the first five terms of the sequence. (Assume that $n$ begins with $1 .$ ) $$a_{n}=\frac{6 n}{3 n^{2}-1}$$ Check back soon! ### Problem 13 Writing the Terms of a Sequence In Exercises $7-22,$ write the first five terms of the sequence. (Assume that $n$ begins with $1 .$ ) $$a_{n}=\frac{1+(-1)^{n}}{n}$$ Check back soon! ### Problem 14 Writing the Terms of a Sequence In Exercises $7-22,$ write the first five terms of the sequence. (Assume that $n$ begins with $1 .$ ) $$a_{n}=\frac{(-1)^{n}}{n^{2}}$$ Check back soon! ### Problem 15 Writing the Terms of a Sequence In Exercises $7-22,$ write the first five terms of the sequence. (Assume that $n$ begins with $1 .$ ) $$a_{n}=\frac{2^{n}}{3^{n}}$$ Check back soon! ### Problem 16 Writing the Terms of a Sequence In Exercises $7-22,$ write the first five terms of the sequence. (Assume that $n$ begins with $1 .$ ) $$a_{n}=\frac{1}{n^{3 / 2}}$$ Check back soon! ### Problem 17 Writing the Terms of a Sequence In Exercises $7-22,$ write the first five terms of the sequence. (Assume that $n$ begins with $1 .$ ) $$a_{n}=\frac{2}{3}$$ Check back soon! ### Problem 18 Writing the Terms of a Sequence In Exercises $7-22,$ write the first five terms of the sequence. (Assume that $n$ begins with $1 .$ . $$a_{n}=1+(-1)^{n}$$ Check back soon! ### Problem 19 Writing the Terms of a Sequence In Exercises $7-22,$ write the first five terms of the sequence. (Assume that $n$ begins with $1 .$ ) $$a_{n}=n(n-1)(n-2)$$ Check back soon! ### Problem 20 Writing the Terms of a Sequence In Exercises $7-22,$ write the first five terms of the sequence. (Assume that $n$ begins with $1 .$ ) $$a_{n}=n\left(n^{2}-6\right)$$ Check back soon! ### Problem 21 Writing the Terms of a Sequence In Exercises $7-22,$ write the first five terms of the sequence. (Assume that $n$ begins with $1 .$ ) $$a_{n}=(-1)^{n}\left(\frac{n}{n+1}\right)$$ Check back soon! 
### Problem 22 Writing the Terms of a Sequence In Exercises $7-22,$ write the first five terms of the sequence. (Assume that $n$ begins with $1 .$ ) $$a_{n}=\frac{(-1)^{n+1}}{n^{2}+1}$$ Check back soon! ### Problem 23 Finding a Term of a Sequence In Exercises $23-26$ , find the indicated term of the sequence. $$\begin{array}{l}{a_{n}=(-1)^{n}(3 n-2)} \\ {a_{25}=}\end{array}$$ Check back soon! ### Problem 24 Finding a Term of a Sequence In Exercises $23-26$ find the indicated term of the sequence. $$\begin{array}{l}{a_{n}=(-1)^{n-1}[n(n-1)]} \\ {a_{16}=}\end{array}$$ Check back soon! ### Problem 25 Finding a Term of a Sequence In Exercises $23-26$ find the indicated term of the sequence. $$\begin{array}{l}{a_{n}=\frac{4 n}{2 n^{2}-3}} \\ {a_{11}=}\end{array}$$ Check back soon! ### Problem 26 Finding a Term of a Sequence In Exercises $23-26$ find the indicated term of the sequence. $$\begin{array}{l}{a_{n}=\frac{4 n^{2}-n+3}{n(n-1)(n+2)}} \\ {a_{13}=}\end{array}$$ Check back soon! ### Problem 27 Graphing the Terms of a Sequence In Exercises $27-32,$ use a graphing utility to graph the first 10 terms of the sequence. (Assume that $n$ begins with $1 . )$ $$a_{n}=\frac{2}{3} n$$ Check back soon! ### Problem 28 Graphing the Terms of a Sequence In Exercises $27-32,$ use a graphing utility to graph the first 10 terms of the sequence. (Assume that $n$ begins with $1 .$ . $$a_{n}=2-\frac{4}{n}$$ Check back soon! ### Problem 29 Graphing the Terms of a Sequence In Exercises $27-32,$ use a graphing utility to graph the first 10 terms of the sequence. (Assume that $n$ begins with $1 . )$ $$a_{n}=16(-0.5)^{n-1}$$ Check back soon! ### Problem 30 Graphing the Terms of a Sequence In Exercises $27-32,$ use a graphing utility to graph the first 10 terms of the sequence. (Assume that $n$ begins with $1 . )$ $$a_{n}=8(0.75)^{n-1}$$ Check back soon! ### Problem 31 Graphing the Terms of a Sequence In Exercises $27-32,$ use a graphing utility to graph the first 10 terms of the sequence. 
(Assume that $n$ begins with $1 . )$ $$a_{n}=\frac{2 n}{n+1}$$ Check back soon! ### Problem 32 Graphing the Terms of a Sequence In Exercises $27-32,$ use a graphing utility to graph the first 10 terms of the sequence. (Assume that $n$ begins with $1 . )$ $$a_{n}=\frac{3 n^{2}}{n^{2}+1}$$ Check back soon! ### Problem 33 Matching a Sequence with a Graph In Exercises $33-36,$ match the sequence with the graph of its first 10 terms. [The graphs are labeled (a), (b), (c), and (d).] $$a_{n}=\frac{8}{n+1}$$ Check back soon! ### Problem 34 Matching a Sequence with a Graph In Exercises $33-36$ , match the sequence with the graph of its first 10 terms. [The graphs are labeled (a), (b), (c), and (d).] $$a_{n}=\frac{8 n}{n+1}$$ Check back soon! ### Problem 35 Matching a Sequence with a Graph In Exercises $33-36$ , match the sequence with the graph of its first 10 terms. [The graphs are labeled (a), ( b), (c), and (d).] $$a_{n}=4(0.5)^{n-1}$$ Check back soon! ### Problem 36 Matching a Sequence with a Graph In Exercises $33-36,$ match the sequence with the graph of its first 10 terms. [The graphs are labeled (a), (b), (c), and (d).] $$a_{n}=\frac{4^{n}}{n !}$$ Check back soon! ### Problem 37 Finding the $n$ th Term of a Sequence In Exercises $37-48,$ write an expression for the apparent $n$ th term $\left(a_{n}\right)$ of the sequence. (Assume that $n$ begins with $1 . )$ $$3,7,11,15,19, \ldots$$ Check back soon! ### Problem 38 Finding the $n$ th Term of a Sequence In Exercises $37-48,$ write an expression for the apparent $n$ th term $\left(a_{n}\right)$ of the sequence. (Assume that $n$ begins with $1 . )$ $$0,3,8,15,24, \dots$$ Check back soon! ### Problem 39 Finding the $n$ th Term of a Sequence In Exercises $37-48,$ write an expression for the apparent $n$ th term $\left(a_{n}\right)$ of the sequence. (Assume that $n$ begins with $1 . )$ $$-\frac{2}{3}, \frac{3}{4},-\frac{4}{5}, \frac{5}{6},-\frac{6}{7}, \dots$$ Check back soon! 
### Problem 40 Finding the $n$ th Term of a Sequence In Exercises $37-48,$ write an expression for the apparent $n$ th term $\left(a_{n}\right)$ of the sequence. (Assume that $n$ begins with $1 . )$ $$\frac{1}{2},-\frac{1}{4}, \frac{1}{8},-\frac{1}{16}, \ldots$$ Check back soon! ### Problem 41 Finding the $n$ th Term of a Sequence In Exercises $37-48,$ write an expression for the apparent $n$ th term $\left(a_{n}\right)$ of the sequence. (Assume that $n$ begins with $1 . )$ $$\frac{2}{1}, \frac{3}{3}, \frac{4}{5}, \frac{5}{7}, \frac{6}{9}, \ldots$$ Check back soon! ### Problem 42 Finding the $n$ th Term of a Sequence In Exercises $37-48,$ write an expression for the apparent $n$ th term $\left(a_{n}\right)$ of the sequence. (Assume that $n$ begins with $1 . )$ $$\frac{1}{3}, \frac{2}{9}, \frac{4}{27}, \frac{8}{81}, \dots$$ Check back soon! ### Problem 43 Finding the $n$ th Term of a Sequence In Exercises $37-48,$ write an expression for the apparent $n$ th term $\left(a_{n}\right)$ of the sequence. (Assume that $n$ begins with $1 . )$ $$1, \frac{1}{4}, \frac{1}{9}, \frac{1}{16}, \frac{1}{25}, \dots$$ Check back soon! ### Problem 44 Finding the $n$ th Term of a Sequence In Exercises $37-48,$ write an expression for the apparent $n$ th term $\left(a_{n}\right)$ of the sequence. (Assume that $n$ begins with $1 . )$ $$1, \frac{1}{2}, \frac{1}{6}, \frac{1}{24}, \frac{1}{120}, \dots$$ Check back soon! ### Problem 45 Finding the $n$ th Term of a Sequence In Exercises $37-48,$ write an expression for the apparent $n$ th term $\left(a_{n}\right)$ of the sequence. (Assume that $n$ begins with $1 . )$ $$1,-1,1,-1,1, \ldots$$ Check back soon! ### Problem 46 Finding the $n$ th Term of a Sequence In Exercises $37-48,$ write an expression for the apparent $n$ th term $\left(a_{n}\right)$ of the sequence. (Assume that $n$ begins with $1 . )$ $$1,3,1,3,1, \dots$$ Check back soon! 
### Problem 47
Finding the $n$ th Term of a Sequence In Exercises $37-48,$ write an expression for the apparent $n$ th term $\left(a_{n}\right)$ of the sequence. (Assume that $n$ begins with $1 . )$ $$1,3, \frac{3^{2}}{2}, \frac{3^{3}}{6}, \frac{3^{4}}{24}, \frac{3^{5}}{120}, \dots$$ Check back soon!

### Problem 48
Finding the $n$ th Term of a Sequence In Exercises $37-48,$ write an expression for the apparent $n$ th term $\left(a_{n}\right)$ of the sequence. (Assume that $n$ begins with $1 . )$ $$1+\frac{1}{2}, 1+\frac{3}{4}, 1+\frac{7}{8}, 1+\frac{15}{16}, 1+\frac{31}{32}, \ldots$$ Check back soon!

### Problem 49
Writing the Terms of a Recursive Sequence In Exercises $49-52,$ write the first five terms of the sequence defined recursively. $$a_{1}=28, \quad a_{k+1}=a_{k}-4$$ Check back soon!

### Problem 50
Writing the Terms of a Recursive Sequence In Exercises $49-52,$ write the first five terms of the sequence defined recursively. $$a_{1}=3, \quad a_{k+1}=2\left(a_{k}-1\right)$$ Check back soon!

### Problem 51
Writing the Terms of a Recursive Sequence In Exercises $49-52,$ write the first five terms of the sequence defined recursively. $$a_{0}=1, a_{1}=2, \quad a_{k}=a_{k-2}+\frac{1}{2} a_{k-1}$$ Check back soon!

### Problem 52
Writing the Terms of a Recursive Sequence In Exercises $49-52,$ write the first five terms of the sequence defined recursively. $$a_{0}=-1, a_{1}=1, \quad a_{k}=a_{k-2}+a_{k-1}$$ Check back soon!

### Problem 53
Writing the $n$ th Term of a Recursive Sequence In Exercises $53-56,$ write the first five terms of the sequence defined recursively. Use the pattern to write the $n$ th term of the sequence as a function of $n .$ $$a_{1}=6, \quad a_{k+1}=a_{k}+2$$ Check back soon!

### Problem 54
Writing the $n$ th Term of a Recursive Sequence In Exercises $53-56,$ write the first five terms of the sequence defined recursively.
Use the pattern to write the $n$ th term of the sequence as a function of $n .$ $$a_{1}=25, \quad a_{k+1}=a_{k}-5$$ Check back soon! ### Problem 55 Writing the $n$ th Term of a Recursive Sequence In Exercises $53-56$ , write the first five terms of the sequence defined recursively. Use the pattern to write the $n$ th term of the sequence as a function of $n .$ $$a_{1}=81, \quad a_{k+1}=\frac{1}{3} a_{k}$$ Check back soon! ### Problem 56 Writing the $n$ th Term of a Recursive Sequence In Exercises $53-56$ , write the first five terms of the sequence defined recursively. Use the pattern to write the $n$ th term of the sequence as a function of $n .$ $$a_{1}=14, \quad a_{k+1}=(-2) a_{k}$$ Check back soon! ### Problem 57 Fibonacci Sequence In Exercises 57 and 58 , use the Fibonacci sequence. (See Example 5.) Write the first 12 terms of the Fibonacci sequence $a_{n}$ and the first 10 terms of the sequence given by $$b_{n}=\frac{a_{n+1}}{a_{n}}, \quad n \geq 1$$ Check back soon! ### Problem 58 Fibonacci Sequence In Exercises 57 and 58 , use the Fibonacci sequence. (See Example $5 . )$ Using the definition for $b_{n}$ in Exercise $57,$ show that $b_{n}$ can be defined recursively by $$b_{n}=1+\frac{1}{b_{n-1}}$$ Check back soon! ### Problem 59 Writing the Terms of a Sequence Involving Factorials In Exercises $59-62,$ write the first five terms of the sequence. (Assume that $n$ begins with $0 . )$ $$a_{n}=\frac{5}{n !}$$ Check back soon! ### Problem 60 Writing the Terms of a Sequence Involving Factorials In Exercises $59-62,$ write the first five terms of the sequence. (Assume that $n$ begins with $0 . )$ $$a_{n}=\frac{n !}{2 n+1}$$ Check back soon! ### Problem 61 Writing the Terms of a Sequence Involving Factorials In Exercises $59-62,$ write the first five terms of the sequence. (Assume that $n$ begins with $0 . )$ $$a_{n}=\frac{1}{(n+1) !}$$ Check back soon! 
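Exercises 57 and 58 above can be checked numerically. The following Python sketch (the helper name `fibonacci` is mine, not the book's) generates the first 12 Fibonacci terms and verifies the recursion $b_{n}=1+\frac{1}{b_{n-1}}$ from Exercise 58:

```python
# Exercises 57-58: Fibonacci terms a_n and the ratio sequence b_n = a_(n+1)/a_n.
def fibonacci(count):
    """Return the first `count` Fibonacci terms, starting 1, 1."""
    terms = [1, 1]
    while len(terms) < count:
        terms.append(terms[-1] + terms[-2])
    return terms[:count]

a = fibonacci(12)
b = [a[n + 1] / a[n] for n in range(10)]  # the 10 ratios b_1 .. b_10

# Exercise 58: b_n = 1 + 1/b_(n-1), which follows from a_(n+1) = a_n + a_(n-1).
for n in range(1, 10):
    assert abs(b[n] - (1 + 1 / b[n - 1])) < 1e-12

print(a)      # first 12 Fibonacci terms
print(b[-1])  # the ratios approach the golden ratio (1 + sqrt(5)) / 2
```

The ratios converge quickly: already $b_{10}=89/55\approx1.618$ agrees with the golden ratio to three decimal places.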
### Problem 62 Writing the Terms of a Sequence Involving Factorials In Exercises $59-62,$ write the first five terms of the sequence. (Assume that $n$ begins with $0 . )$ $$a_{n}=\frac{(-1)^{2 n+1}}{(2 n+1) !}$$ Check back soon! ### Problem 63 Simplifying a Factorial Expression In Exercises $63-66,$ simplify the factorial expression. $$\frac{4 !}{6 !}$$ Check back soon! ### Problem 64 Simplifying a Factorial Expression In Exercises $63-66,$ simplify the factorial expression. $$\frac{12 !}{4 ! \cdot 8 !}$$ Check back soon! ### Problem 65 Simplifying a Factorial Expression In Exercises $63-66,$ simplify the factorial expression. $$\frac{(n+1) !}{n !}$$ Check back soon! ### Problem 66 Simplifying a Factorial Expression In Exercises $63-66,$ simplify the factorial expression. $$\frac{(2 n-1) !}{(2 n+1) !}$$ Check back soon! ### Problem 67 Finding a Sum In Exercises $67-74,$ find the sum. $$\sum_{i=1}^{5}(2 i+1)$$ Check back soon! ### Problem 68 Finding a Sum In Exercises $67-74,$ find the sum. $$\sum_{j=3}^{5} \frac{1}{j^{2}-3}$$ Check back soon! ### Problem 69 Finding a Sum In Exercises $67-74,$ find the sum. $$\sum_{k=1}^{4} 10$$ Check back soon! ### Problem 70 Finding a Sum In Exercises $67-74,$ find the sum. $$\sum_{i=0}^{4} i^{2}$$ Check back soon! ### Problem 71 Finding a Sum In Exercises $67-74,$ find the sum. $$\sum_{k=2}^{5}(k+1)^{2}(k-3)$$ Check back soon! ### Problem 72 Finding a Sum In Exercises $67-74,$ find the sum. $$\sum_{i=1}^{4}\left[(i-1)^{2}+(i+1)^{3}\right]$$ Check back soon! ### Problem 73 Finding a Sum In Exercises $67-74,$ find the sum. $$\sum_{i=1}^{4} 2^{i}$$ Check back soon! ### Problem 74 Finding a Sum In Exercises $67-74,$ find the sum. $$\sum_{j=0}^{4}(-2)^{j}$$ Check back soon! ### Problem 75 Finding a Sum In Exercises $75-78$ , use a graphing utility to find the sum. $$\sum_{n=0}^{5} \frac{1}{2 n+1}$$ Check back soon! ### Problem 76 Finding a Sum In Exercises $75-78$ , use a graphing utility to find the sum. 
$$\sum_{k=0}^{4} \frac{(-1)^{k}}{k+1}$$ Check back soon! ### Problem 77 Finding a Sum In Exercises $75-78$ , use a graphing utility to find the sum. $$\sum_{k=0}^{4} \frac{(-1)^{k}}{k !}$$ Check back soon! ### Problem 78 Finding a Sum In Exercises $75-78$ , use a graphing utility to find the sum. $$\sum_{n=0}^{25} \frac{1}{4^{n}}$$ Check back soon! ### Problem 79 Using Sigma Notation to Write a Sum In Exercises $79-88$ , use sigma notation to write the sum. $$\frac{1}{3(1)}+\frac{1}{3(2)}+\frac{1}{3(3)}+\dots+\frac{1}{3(9)}$$ Check back soon! ### Problem 80 Using Sigma Notation to Write a Sum In Exercises $79-88$ , use sigma notation to write the sum. $$\frac{5}{1+1}+\frac{5}{1+2}+\frac{5}{1+3}+\dots+\frac{5}{1+15}$$ Check back soon! ### Problem 81 Using Sigma Notation to Write a Sum In Exercises $79-88$ , use sigma notation to write the sum. $$\left[2\left(\frac{1}{8}\right)+3\right]+\left[2\left(\frac{2}{8}\right)+3\right]+\cdots+\left[2\left(\frac{8}{8}\right)+3\right]$$ Check back soon! ### Problem 82 Using Sigma Notation to Write a Sum In Exercises $79-88$ , use sigma notation to write the sum. $$\left[1-\left(\frac{1}{6}\right)^{2}\right]+\left[1-\left(\frac{2}{6}\right)^{2}\right]+\cdots+\left[1-\left(\frac{6}{6}\right)^{2}\right]$$ Check back soon! ### Problem 83 Using Sigma Notation to Write a Sum In Exercises $79-88$ , use sigma notation to write the sum. $$3-9+27-81+243-729$$ Check back soon! ### Problem 84 Using Sigma Notation to Write a Sum In Exercises $79-88$ , use sigma notation to write the sum. $$1-\frac{1}{2}+\frac{1}{4}-\frac{1}{8}+\dots-\frac{1}{128}$$ Check back soon! ### Problem 85 Using Sigma Notation to Write a Sum In Exercises $79-88$ , use sigma notation to write the sum. $$\frac{1}{1^{2}}-\frac{1}{2^{2}}+\frac{1}{3^{2}}-\frac{1}{4^{2}}+\ldots-\frac{1}{20^{2}}$$ Check back soon! ### Problem 86 Using Sigma Notation to Write a Sum In Exercises $79-88$ , use sigma notation to write the sum. 
$$\frac{1}{1 \cdot 3}+\frac{1}{2 \cdot 4}+\frac{1}{3 \cdot 5}+\dots+\frac{1}{10 \cdot 12}$$ Check back soon! ### Problem 87 Using Sigma Notation to Write a Sum In Exercises $79-88$ , use sigma notation to write the sum. $$\frac{1}{4}+\frac{3}{8}+\frac{7}{16}+\frac{15}{32}+\frac{31}{64}$$ Check back soon! ### Problem 88 Using Sigma Notation to Write a Sum In Exercises $79-88$ , use sigma notation to write the sum. $$\frac{1}{2}+\frac{2}{4}+\frac{6}{8}+\frac{24}{16}+\frac{120}{32}+\frac{720}{64}$$ Check back soon! ### Problem 89 Finding a Partial Sum of a Series In Exercises $89-92,$ find the indicated partial sum of the series. $$\sum_{i=1}^{\infty} 5\left(\frac{1}{2}\right)^{i}$$ Fourth partial sum Check back soon! ### Problem 90 Finding a Partial Sum of a Series In Exercises $89-92,$ find the indicated partial sum of the series. $$\sum_{i=1}^{\infty} 2\left(\frac{1}{3}\right)^{i}$$ Fifth partial sum Check back soon! ### Problem 91 Finding a Partial Sum of a Series In Exercises $89-92,$ find the indicated partial sum of the series. $$\sum_{n=1}^{\infty} 4\left(-\frac{1}{2}\right)^{n}$$ Third partial sum Check back soon! ### Problem 92 Finding a Partial Sum of a Series In Exercises $89-92,$ find the indicated partial sum of the series. $$\sum_{n=1}^{\infty} 8\left(-\frac{1}{4}\right)^{n}$$ Fourth partial sum Check back soon! ### Problem 93 Finding the Sum of an Infinite Series Exercises $93-96$ , find the sum of the infinite series. $$\sum_{i=1}^{\infty} \frac{6}{10^{i}}$$ Check back soon! ### Problem 94 Finding the Sum of an Infinite Series Exercises $93-96$ , find the sum of the infinite series. $$\sum_{k=1}^{\infty}\left(\frac{1}{10}\right)^{k}$$ Check back soon! ### Problem 95 Finding the Sum of an Infinite Series Exercises $93-96$ , find the sum of the infinite series. $$\sum_{k=1}^{\infty} 7\left(\frac{1}{10}\right)^{k}$$ Check back soon! ### Problem 96 Finding the Sum of an Infinite Series Exercises $93-96$ , find the sum of the infinite series. 
$$\sum_{i=1}^{\infty} \frac{2}{10^{i}}$$ Check back soon!

### Problem 97
Compound Interest An investor deposits $\$ 10,000$ in an account that earns $3.5 \%$ interest compounded quarterly. The balance in the account after $n$ quarters is given by $$A_{n}=10,000\left(1+\frac{0.035}{4}\right)^{n}, \quad n=1,2,3, \ldots$$ (a) Write the first eight terms of the sequence. (b) Find the balance in the account after 10 years by computing the 40th term of the sequence. (c) Is the balance after 20 years twice the balance after 10 years? Explain. Check back soon!

### Problem 98
The numbers $a_{n}$ (in thousands) of AIDS cases reported from 2003 through 2010 can be approximated by $$a_{n}=-0.0126 n^{3}+0.391 n^{2}-4.21 n+48.5, \quad n=3,4, \ldots, 10$$ where $n$ is the year, with $n=3$ corresponding to 2003. (Source: U.S. Centers for Disease Control and Prevention) (a) Write the terms of this finite sequence. Use a graphing utility to construct a bar graph that represents the sequence. (b) What does the graph in part (a) say about reported cases of AIDS? Check back soon!

### Problem 99
True or False? In Exercises 99 and $100,$ determine whether the statement is true or false. Justify your answer. $$\sum_{i=1}^{4}\left(i^{2}+2 i\right)=\sum_{i=1}^{4} i^{2}+2 \sum_{i=1}^{4} i$$ Check back soon!

### Problem 100
True or False? In Exercises 99 and $100,$ determine whether the statement is true or false. Justify your answer. $$\sum_{j=1}^{4} 2^{j}=\sum_{j=3}^{6} 2^{j-2}$$ Check back soon!
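Both statements in Exercises 99 and 100 above can be confirmed by direct computation: the first is an instance of the linearity of summation, the second an index shift. A short Python check (variable names are mine):

```python
# Exercise 99: sum of (i^2 + 2i) for i = 1..4 versus the split sums.
lhs_99 = sum(i ** 2 + 2 * i for i in range(1, 5))
rhs_99 = sum(i ** 2 for i in range(1, 5)) + 2 * sum(i for i in range(1, 5))
assert lhs_99 == rhs_99  # True: summation is linear

# Exercise 100: shifting the index j -> j - 2 leaves the sum unchanged.
lhs_100 = sum(2 ** j for j in range(1, 5))          # j = 1..4
rhs_100 = sum(2 ** (j - 2) for j in range(3, 7))    # j = 3..6
assert lhs_100 == rhs_100  # True: the same terms, re-indexed

print(lhs_99, lhs_100)  # 50 30
```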
### Problem 101
Arithmetic Mean In Exercises $101-103,$ use the following definition of the arithmetic mean $\overline{x}$ of a set of $n$ measurements $x_{1}, x_{2}, x_{3}, \ldots, x_{n}$: $$\overline{x}=\frac{1}{n} \sum_{i=1}^{n} x_{i}$$ Find the arithmetic mean of the six checking account balances $\$ 327.15$, $\$ 785.69$, $\$ 433.04$, $\$ 265.38$, $\$ 604.12$, and $\$ 590.30$. Use the statistical capabilities of a graphing utility to verify your result. Check back soon!

### Problem 102
Arithmetic Mean In Exercises $101-103,$ use the following definition of the arithmetic mean $\overline{x}$ of a set of $n$ measurements $x_{1}, x_{2}, x_{3}, \ldots, x_{n}$: $$\overline{x}=\frac{1}{n} \sum_{i=1}^{n} x_{i}$$ Proof Prove that $$\sum_{i=1}^{n}\left(x_{i}-\overline{x}\right)=0$$ Check back soon!

### Problem 103
Arithmetic Mean In Exercises $101-103,$ use the following definition of the arithmetic mean $\overline{x}$ of a set of $n$ measurements $x_{1}, x_{2}, x_{3}, \ldots, x_{n}$: $$\overline{x}=\frac{1}{n} \sum_{i=1}^{n} x_{i}$$ Proof Prove that $$\sum_{i=1}^{n}\left(x_{i}-\overline{x}\right)^{2}=\sum_{i=1}^{n} x_{i}^{2}-\frac{1}{n}\left(\sum_{i=1}^{n} x_{i}\right)^{2}$$ Check back soon!

### Problem 104
HOW DO YOU SEE IT? The graph represents the first 10 terms of a sequence. Complete each expression for the apparent $n$ th term $a_{n}$ of the sequence. Which expressions are appropriate to represent the cost $a_{n}$ to buy $n$ MP3 songs at a cost of $\$ 1$ per song? Explain. $$a_{n}=1$$ $$a_{n}=\frac{!}{(n-1) !}$$ $$a_{n}=\sum_{k=1}^{n}$$ Check back soon!

### Problem 105
Finding the Terms of a Sequence In Exercises 105 and $106,$ find the first five terms of the sequence. $$a_{n}=\frac{x^{n}}{n !}$$ Check back soon!

### Problem 106
Finding the Terms of a Sequence In Exercises 105 and $106,$ find the first five terms of the sequence. $$a_{n}=\frac{(-1)^{n} x^{2 n+1}}{2 n+1}$$ Check back soon!
### Problem 107
Cube A $3 \times 3 \times 3$ cube is made up of 27 unit cubes (a unit cube has a length, width, and height of 1 unit), and only the faces of each cube that are visible are painted blue, as shown in the figure. (a) Complete the table to determine how many unit cubes of the $3 \times 3 \times 3$ cube have 0 blue faces, 1 blue face, 2 blue faces, and 3 blue faces. (b) Repeat part (a) for a $4 \times 4 \times 4$ cube, a $5 \times 5 \times 5$ cube, and a $6 \times 6 \times 6$ cube. (c) What type of pattern do you observe? (d) Write formulas you could use to repeat part (a) for an $n \times n \times n$ cube. Check back soon!
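For Exercise 107, the pattern asked for in parts (c) and (d) can be checked by brute force. A Python sketch, assuming all six outer faces of the large cube are painted (the book's figure is not reproduced here, so this is my reading of "visible"):

```python
def paint_counts(n):
    """Count the unit cubes with 0, 1, 2, or 3 painted faces in an n*n*n
    cube, assuming all six outer faces of the large cube are painted."""
    counts = {0: 0, 1: 0, 2: 0, 3: 0}
    for x in range(n):
        for y in range(n):
            for z in range(n):
                # A coordinate on the boundary contributes one painted face.
                faces = sum(c in (0, n - 1) for c in (x, y, z))
                counts[faces] += 1
    return counts

# Candidate formulas for part (d), valid for n >= 2:
#   3 faces: 8 (corners)            2 faces: 12(n-2) (edge cubes)
#   1 face:  6(n-2)^2 (face cubes)  0 faces: (n-2)^3 (interior cubes)
for n in range(2, 7):
    c = paint_counts(n)
    assert c[3] == 8
    assert c[2] == 12 * (n - 2)
    assert c[1] == 6 * (n - 2) ** 2
    assert c[0] == (n - 2) ** 3

print(paint_counts(3))  # {0: 1, 1: 6, 2: 12, 3: 8}
```

The four counts always sum to $n^{3}$, which is a quick consistency check on the formulas.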
# Why does $\ell=0$ correspond to spherically symmetric solutions for the spherical harmonics?

In quantum mechanics, why do states with $\ell=0$ in the hydrogen atom correspond to spherically symmetric spherical harmonics?

• Do you mean this in the conceptual or the mathematical sense? – xish Dec 14 '13 at 4:04
• I guess both would be delightful. – user24082 Dec 14 '13 at 4:05
• Are you asking why $Y_0^0(\theta,\phi)$ is spherically symmetric, or why the spherical harmonics are a part of the solution, or why the state with zero angular momentum is spherically symmetric? Actually, the answer to any of those questions might very well be: because the math says so. – Geoffrey Dec 14 '13 at 4:22
• I'm asking why ONLY the states with zero angular momentum are symmetric. – user24082 Dec 14 '13 at 5:05
• Having a non-zero angular momentum means it has to point somewhere, hence not all directions in space are equivalent, hence lack of spherical symmetry. – Slaviks Dec 14 '13 at 5:56

One way to understand it is to recognize that for the spherical harmonic $|l,m\rangle$ with $l=0$ (and hence $m=0$), we have $\hat L_i|0,0\rangle=0$, where $\hat L_i$ is the angular momentum operator in the direction $i=x,y,z$. This is obvious for $\hat L_z$, whose eigenvalue is $m=0$, and it can be verified for the other two. The rotation operator $\hat R(\theta)$ about a direction $\vec n$ through an angle $\theta$ is given by $$\hat R(\theta)=\exp(i\theta \,\vec n \cdot \vec{\hat L} )$$ from which we see that the state $|0,0\rangle$ is invariant under every rotation: $\hat R(\theta)|0,0\rangle=|0,0\rangle$, and it is thus spherically symmetric. In this formulation you also see that it is the only such state: a state with $l\neq0$ is not annihilated by all three $\hat L_i$, so some rotation changes it. You can likewise show that the state $|l,0\rangle$ is axially symmetric (about $z$), and so on. See for instance this nice picture of the spherical harmonics (image not reproduced here).

Suppose that there existed a spherically symmetric wavefunction $\psi({\bf r})=f(r)$ with $l\neq0$. This cannot be: if we calculate $\langle \psi | L^2 | \psi \rangle$ we will always get zero, since each term in $L^2$ involves derivatives with respect to $\theta$ and $\phi$, which annihilate $f(r)$. That contradicts $\langle L^2 \rangle = \hbar^2 l(l+1) \neq 0$ for $l\neq0$.
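To make the contrast concrete, the following Python sketch hard-codes the standard closed forms $Y_0^0=\frac{1}{2\sqrt{\pi}}$ and $Y_1^0=\sqrt{\tfrac{3}{4\pi}}\cos\theta$ (only the $m=0$ harmonics are needed for the point) and samples them over many directions: the $\ell=0$ harmonic takes a single value everywhere, while $\ell=1$ already singles out the $z$-axis.

```python
import math

def Y00(theta, phi):
    # l = 0: the constant 1 / (2 sqrt(pi)); no angular dependence at all.
    return 1.0 / (2.0 * math.sqrt(math.pi))

def Y10(theta, phi):
    # l = 1, m = 0: sqrt(3 / (4 pi)) cos(theta); depends on the direction.
    return math.sqrt(3.0 / (4.0 * math.pi)) * math.cos(theta)

# Sample a grid of directions (theta, phi).
angles = [(t * 0.3, p * 0.5) for t in range(10) for p in range(10)]

# Y_0^0 takes the same value in every direction: spherically symmetric.
vals0 = {round(Y00(t, p), 12) for (t, p) in angles}
assert len(vals0) == 1

# Y_1^0 does not: its value changes with the polar angle.
vals1 = {round(Y10(t, p), 12) for (t, p) in angles}
assert len(vals1) > 1

print(min(vals0))  # the single constant value, about 0.2821
```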
The University of Southampton University of Southampton Institutional Repository # An Axiomatization of the Algebra of Petri Net Concatenable Processes Sassone, V. (1996) An Axiomatization of the Algebra of Petri Net Concatenable Processes. Theoretical Computer Science, 170 (1-2), 277-296. Record type: Article ## Abstract The concatenable processes of a Petri net $N$ can be characterized abstractly as the arrows of a symmetric monoidal category $Pn(N)$. However, this is only a partial axiomatization, since it is based on a concrete, ad hoc chosen, category of symmetries $Sym_N$. In this paper we give a completely abstract characterization of the category of concatenable processes of $N$, thus yielding an axiomatic theory of the noninterleaving behaviour of Petri nets. PDF P-of-N-Off.pdf - Other Published date: 1996 Keywords: petri nets, petri nets processes, categorical semantics, symmetric monoidal categories Organisations: Web & Internet Science ## Identifiers Local EPrints ID: 261820 URI: https://eprints.soton.ac.uk/id/eprint/261820 ISSN: 0304-3975 PURE UUID: 5cea478f-fdd8-4ae1-9c1c-63fc7328b2c4 ## Catalogue record Date deposited: 26 Jan 2006 ## Contributors Author: V. Sassone
# Export/Import DataPump Parameter ACCESS_METHOD [ID 552424.1]

Export/Import DataPump Parameter ACCESS_METHOD - How to Enforce a Method of Loading and Unloading Data? [ID 552424.1] Modified 06-APR-2009 Type HOWTO Status PUBLISHED

## Applies to:
Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6
Oracle Server - Personal Edition - Version: 10.1.0.2 to 11.1.0.6
Oracle Server - Standard Edition - Version: 10.1.0.2 to 11.1.0.6
Enterprise Manager for RDBMS - Version: 10.1.0.2 to 11.1
Information in this document applies to any platform.

## Goal
Starting with Oracle10g, Oracle Data Pump can be used to move data in and out of a database. Data Pump can make use of different methods to move the data, and will automatically choose the fastest method. It is possible though, to manually enforce a specific method. This document demonstrates how to specify the method with which data will be loaded or unloaded with Data Pump.

## Solution

### 1. Introduction.
Data Pump can use four mechanisms to move data in and out of a database:
• Data file copying;
• Direct path;
• External tables;
• Network link import.

The two most commonly used methods to move data in and out of databases with Data Pump are the "Direct Path" method and the "External Tables" method.

1.1. Direct Path mode. After data file copying, direct path is the fastest method of moving data. In this method, the SQL layer of the database is bypassed and rows are moved to and from the dump file with only minimal interpretation. Data Pump automatically uses the direct path method for loading and unloading data when the structure of a table allows it.

1.2. External Tables mode. If data cannot be moved in direct path mode, or if there is a situation where parallel SQL can be used to speed up the data move even more, then the external tables mode is used. The external table mechanism creates an external table that maps the dump file data for the database table. The SQL engine is then used to move the data.
If possible, the APPEND hint is used on import to speed the copying of the data into the database. Note: When the Export NETWORK_LINK parameter is used to specify a network link for an export operation, a variant of the external tables method is used. In this case, data is selected from across the specified network link and inserted into the dump file using an external table.

1.3. Data File Copying mode. This mode is used when a transport tablespace job is started, i.e.: the TRANSPORT_TABLESPACES parameter is specified for an Export Data Pump job. This is the fastest method of moving data because the data is not interpreted nor altered during the job, and Export Data Pump is used to unload only structural information (metadata) into the dump file.

1.4. Network Link Import mode. This mode is used when the NETWORK_LINK parameter is specified during an Import Data Pump job. This is the slowest of the four access methods because this method makes use of an INSERT SELECT statement to move the data over a database link, and reading over a network is generally slower than reading from a disk.

The "Data File Copying" and "Network Link Import" methods to move data in and out of databases are outside the scope of this article, and therefore not discussed any further. For details about the access methods of the classic export client (exp), see: Note:155477.1 "Parameter DIRECT: Conventional Path Export Versus Direct Path Export"

### 2. Export in "Direct Path" mode.
Export Data Pump will use the "Direct Path" mode to unload data in the following situations: EXPDP will use DIRECT_PATH mode if:

2.1. The structure of a table allows a Direct Path unload, i.e.:
- The table does not have fine-grained access control enabled for SELECT.
- The table is not a queue table.
- The table does not contain one or more columns of type BFILE or opaque, or an object type containing opaque columns.
- The table does not contain encrypted columns.
- The table does not contain a column of an evolved type that needs upgrading.
- If the table has a column of datatype LONG or LONG RAW, then this column is the last column.

2.2. The parameters QUERY, SAMPLE, or REMAP_DATA were not used for the specified table in the Export Data Pump job.

2.3. The table or partition is relatively small (up to 250 Mb), or the table or partition is larger, but the job cannot run in parallel because the parameter PARALLEL was not specified (or was set to 1). Note that with an unload of data in Direct Path mode, parallel I/O execution processes (PX processes) cannot be used to unload the data in parallel (parallel unload is not supported in Direct Path mode).

### 3. Export in "External Tables" mode.
Export Data Pump will use the "External Tables" mode to unload data in the following situations: EXPDP will use EXTERNAL_TABLE mode if:

3.1. Data cannot be unloaded in Direct Path mode, because of the structure of the table, i.e.:
- Fine-grained access control for SELECT is enabled for the table.
- The table is a queue table.
- The table contains one or more columns of type BFILE or opaque, or an object type containing opaque columns.
- The table contains encrypted columns.
- The table contains a column of an evolved type that needs upgrading.
- The table contains a column of type LONG or LONG RAW that is not last.

3.2. Data could also have been unloaded in "Direct Path" mode, but the parameters QUERY, SAMPLE, or REMAP_DATA were used for the specified table in the Export Data Pump job.

3.3. Data could also have been unloaded in "Direct Path" mode, but the table or partition is relatively large (> 250 Mb) and parallel SQL can be used to speed up the unload even more. Note that with an unload of data in External Tables mode, parallel I/O execution processes (PX processes) can be used to unload the data in parallel. In that case the Data Pump Worker process acts as the coordinator for the PX processes. However, this does not apply when the table has a LOB column: in that case the table parallelism will always be 1.
See also: Bug:5943346 "PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN"

### 4. Import in "Direct Path" mode.
Import Data Pump will use the "Direct Path" mode to load data in the following situations: IMPDP will use DIRECT_PATH if:

4.1. The structure of a table allows a Direct Path load, i.e.:
- A global index does not exist on a multipartition table during a single-partition load. This includes object tables that are partitioned.
- A domain index does not exist for a LOB column.
- The table is not in a cluster.
- The table does not have BFILE columns or columns of opaque types.
- The table does not have VARRAY columns with an embedded opaque type.
- The table does not have encrypted columns.
- Supplemental logging is not enabled and the table does not have a LOB column.
- The table into which data is being imported is a pre-existing table and:
– There is not an active trigger, and:
– The table is partitioned and has an index, and:
– Fine-grained access control for INSERT mode is not enabled, and:
– A constraint other than table check does not exist, and:
– A unique index does not exist.

4.2. The parameters QUERY and REMAP_DATA were not used for the specified table in the Import Data Pump job.

4.3. The table or partition is relatively small (up to 250 Mb), or the table or partition is larger, but the job cannot run in parallel because the parameter PARALLEL was not specified (or was set to 1).

### 5. Import in "External Tables" mode.
Import Data Pump will use the "External Tables" mode to load data in the following situations: IMPDP will use EXTERNAL_TABLE if:

5.1. Data cannot be loaded in Direct Path mode, because at least one of the following conditions exists:
- A global index on multipartition tables exists during a single-partition load. This includes object tables that are partitioned.
- A domain index exists for a LOB column.
- A table is in a cluster.
- A table has BFILE columns or columns of opaque types.
- A table has VARRAY columns with an embedded opaque type.
- The table has encrypted columns.
- Supplemental logging is enabled and the table has at least one LOB column.
- The table into which data is being imported is a pre-existing table and at least one of the following conditions exists:
– There is an active trigger
– The table is partitioned and does not have any indexes
– Fine-grained access control for INSERT mode is enabled for the table.
– An enabled constraint exists (other than table check constraints)
– A unique index exists

5.2. Data could also have been loaded in "Direct Path" mode, but the parameters QUERY or REMAP_DATA were used for the specified table in the Import Data Pump job.

5.3. Data could also have been loaded in "Direct Path" mode, but the table or partition is relatively large (> 250 Mb) and parallel SQL can be used to speed up the load even more. Note that with a load of data in External Tables mode, parallel I/O execution processes (PX processes) can be used to load the data in parallel. In that case the Data Pump Worker process acts as the coordinator for the PX processes. However, this does not apply when the table has a LOB column: in that case the table parallelism will always be 1. See also: Bug:5943346 "PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN"

### 6. Enforcing a specific access method.
In very specific situations, the undocumented parameter ACCESS_METHOD can be used to enforce a specific method to unload or load the data. Example:

%expdp system/manager ... ACCESS_METHOD=DIRECT_PATH
%expdp system/manager ... ACCESS_METHOD=EXTERNAL_TABLE

or:

%impdp system/manager ... ACCESS_METHOD=DIRECT_PATH
%impdp system/manager ... ACCESS_METHOD=EXTERNAL_TABLE

Important Need-To-Know's when the parameter ACCESS_METHOD is specified for a job:
• The parameter ACCESS_METHOD is an undocumented parameter and should only be used when requested by Oracle Support.
• If the parameter is not specified, then Data Pump will automatically choose the best method to load or unload the data.
• If Import Data Pump cannot choose due to conflicting restrictions, an error will be reported: ORA-31696: unable to export/import TABLE_DATA:"SCOTT"."EMP" using client specified AUTOMATIC method
• The parameter can only be specified when the Data Pump job is initially started (i.e. the parameter cannot be specified when the job is restarted).
• Enforcing a specific method may result in a slower performance of the overall Data Pump job, or errors such as: ... Processing object type TABLE_EXPORT/TABLE/TABLE_DATA ORA-31696: unable to export/import TABLE_DATA:"SCOTT"."MY_TAB" using client specified DIRECT_PATH method ...
• To determine which access method is used, a Worker trace file can be created, e.g.: %expdp system/manager DIRECTORY=my_dir DUMPFILE=expdp_s.dmp LOGFILE=expdp_s.log TABLES=scott.my_tab TRACE=400300

The Worker trace file shows the method with which the data was loaded (or unloaded for Import Data Pump): ... KUPW:14:57:14.289: 1: object: TABLE_DATA:"SCOTT"."MY_TAB" KUPW:14:57:14.289: 1: TABLE_DATA:"SCOTT"."MY_TAB" external table, parallel: 1 ...

See also: Note:286496.1 "Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump"
Bug 4722517 - Materialized view log not updated after import into existing table Defect:  Bug:4722517 "MATERIALIZED VIEW LOG NOT UPDATED AFTER IMPORT DATAPUMP JOB INTO EXISTING TABLE" Symptoms:  a materialized view is created with FAST REFRESH on a master table; if data is imported into this master table, then these changes (inserts) do not show up in the materialized view log Releases:  10.1.0.2.0 and higher Fixed in:  not applicable, closed as not-a-bug Patched files:  not applicable Workaround:  if possible import into a temporary holding table then copy the data with "insert as select" into the master table Cause:  a fast refresh does not apply changes that result from bulk load operations on masters, such as an INSERT with the APPEND hint used by Import Data Pump Trace:  not applicable, changes are not propagated Remarks:  see also Note:340789.1 "Import Datapump (Direct Path) Does Not Update Materialized View Logs " 7.2. Bug 5599947 - Export Data Pump is slow when table has a LOB column Defect:  Bug:5599947 "DATAPUMP EXPORT VERY SLOW" Symptoms:  Export Data Pump has low performance when exporting table with LOB column Releases:  11.1.0.6 and below Fixed in:  not applicable, closed as not feasible to fix Patched files:  not applicable Workaround:  if possible re-organize the large table with LOB column and make it partitioned Cause:  if a table has a LOB column, and the unload or load takes place in "External Tables" mode, then we cannot make use of parallel I/O execution Processes (PX processes) Trace:  not applicable Remarks:  see also Bug:5943346 "PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN" 7.3. Bug 5941030 - Corrupt blocks after Import Data Pump when table has LONG / LONG RAW column Defect:  Bug:5941030 "Datapump import can produce corrupt blocks when there is a LONG / LONG RAW" Symptoms:  Direct Path import of a LONG / LONG RAW column can create corrupt blocks in the database. 
If DB_BLOCK_CHECKING is enabled then an ORA-600 [6917] error can be signalled. If not then the corrupt block can cause subsequent problems, like ORA-1498 (block check failure) on an analyze of the table. Releases:  11.1.0.6 and below Fixed in:  10.2.0.5.0 and 11.1.0.7.0 and higher; for some platforms a fix on top of 10.2.0.2.0 and on top of 10.2.0.3.0 is available with Patch:5941030 Patched files:  kdbl.o Workaround:  if possible use the classic export and import clients to transfer this table Cause:  internal issue with column count when loading table with LONG/LONG RAW column in Direct Path mode Trace:  not applicable Remarks:  see also Note:457128.1 "Logical Corruption Encountered After Importing Table With Long Column Using DataPump" ### @ 8. For Support: Enhancement Requests. @ Open Enhancement Requests: ## References BUG:4722517 - MATERIALIZED VIEW LOG NOT UPDATED AFTER IMPORT DATAPUMP JOB INTO EXISTING TABLE BUG:4727162 - PRODUCT ENHANCEMENT: ADD NEW DATAPUMP EXT TAB ACCESS METHOD WITHOUT APPEND HINT BUG:5599947 - DATAPUMP EXPORT VERY SLOW BUG:5941030 - DATAPUMP IMPORT CAN CORRUPT DATA WHEN THERE IS A LONG / LONG RAW BUG:5943346 - PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN NOTE:155477.1 - Parameter DIRECT: Conventional Path Export Versus Direct Path Export NOTE:286496.1 - Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump NOTE:340789.1 - Import Datapump (Direct Path) Does Not Update Materialized View Logs NOTE:365459.1 - Parallel Capabilities of Oracle Data Pump NOTE:453895.1 - Checklist for Slow Performance of Export Data Pump (expdp) and Import DataPump (impdp) NOTE:457128.1 - Logical Corruption Encountered After Importing Table With Long Column Using DataPump NOTE:469439.1 - IMPDP Can Fail with ORA-31696 if ACCESS_METHOD=DIRECT_PATH Is Manually Specified http://www.oracle.com/technology/pub/notes/technote_pathvsext.html Related Products • Oracle Database Products > Oracle Database > Oracle Database > 
Oracle Server - Enterprise Edition
• Oracle Database Products > Oracle Database > Oracle Database > Oracle Server - Personal Edition
• Oracle Database Products > Oracle Database > Oracle Database > Oracle Server - Standard Edition
• Enterprise Management > Enterprise Manager Consoles, Packs, and Plugins > Managing Databases using Enterprise Manager > Enterprise Manager for RDBMS

Keywords

IMPORTING DATA; CONVENTIONAL PATH; DIRECT PATH; LOB; PARALLELISM; IMPDP; EXTERNAL TABLES; DATAPUMP

Errors

ORA-600[6917]; ORA-31696; ORA-1498
# Talk:Propagation of uncertainty WikiProject Statistics (Rated C-class, High-importance) This article is within the scope of the WikiProject Statistics, a collaborative effort to improve the coverage of statistics on Wikipedia. If you would like to participate, please visit the project page or join the discussion. C  This article has been rated as C-Class on the quality scale. High  This article has been rated as High-importance on the importance scale. ## The Resistance example is wrong, I think. The errors should add in quadrature. So square each element and take the square root. Take the example Y=A+B. Then the error is \sqrt(dA^2+dB^2), not dA+dB. That is what you get when you use the equation right above the example problem. —Preceding unsigned comment added by 132.163.47.52 (talk) 23:21, 4 July 2010 (UTC) I fixed the error. And I corrected the Absolute Error Box. I am fairly certain they were incorrect. —Preceding unsigned comment added by 132.163.47.52 (talk) 23:59, 4 July 2010 (UTC) I genuinely think that this article is important for the whole of science. Too many scientists (including myself) graduate without becoming confident with error calculations, and I suspect that many scientists never do so. I see here an opportunity to improve the quality of a whole body of scientific work. Currently, the approach is too mathematical (and not even rigorous), and does not deal with the subtleties of error calculations, like the difference between using errors and standard deviations, and which situations it is ok to use approximations. When the equations are approximations, why not use the "is approximately equal to" instead of the equals sign? The fact that it is an approximation should be clearly marked, in any case. I suggest the article should clearly delineate the different possible approaches - perhaps this could even become the basis for standardisation of error calculations in future publications. 
Thanks, MichaelR 83.72.194.27 11:12, 7 October 2007 (UTC) ## Contradiction between formula and example? It seems there is a conflict between the general formula and the very first example - perhaps I'm missing something here, but it strikes me as confusing that the error is given as $\Delta X=\Delta A+\Delta B$ when the equation suggests it should be $\Delta X=(\Delta A^{2}+\Delta B^{2})^{\frac{1}{2}}$. Thanks gabe_rosser 00:20, 23 May 2007 (BST) Yes, Gabe's formula above is much closer to what I use. I'm no statistician, but the sources that showed me how to do it that way are. I think maybe some of the formulas presented in the article are approximations to make things simpler for hand calculation. If that's what they are, it should be clearly stated, because it is obviously causing confusion. ike9898 18:23, 8 August 2007 (UTC) I agree with Gabe, the formula given for ΔR does not follow from the described method of finding ΔR. I am a 4th year Mechanical Engineering student, and I am certain that Gabe is correct; thus, I will be correcting the error. ## i or j Is it necessary to use both i and j as indices for the summation of the general formulae? It appears to me that i only appears in the maths whilst j only appears in the English. True? If not, it could be more clearly explained as to the reasons for the change / use of both. Thanks Roggg 09:35, 20 June 2006 (UTC) ## Geometric mean Example application: the geometric mean? Charles Matthews 16:33, 12 May 2004 (UTC) From the article (since May 2004!): "the relative error ... is simply the geometric mean of the two relative errors of the measured variables" -- It's not the geometric mean. If it were, it would be the product of the two relative errors in the radical, not the sum of the squares. I'll fix this section. --Spiffy sperry 21:47, 5 January 2006 (UTC) ## Delta?
In my experience, the lower-case delta is used for error, while the upper-case delta (the one currently used in the article) is used for the change in a variable. Is there a reason the upper-case delta is used in the article? --LostLeviathan 02:01, 20 Oct 2004 (UTC) ## Missing Definition of Δxj A link exists under the word "error" before the first expression of Δxj in the article, but this link doesn't take one to a definition of this expression. The article can be improved if this expression is properly defined. —Preceding unsigned comment added by 65.93.221.131 (talkcontribs) 4 October 2005 ## Formulas I think that the formula given in this article should be credited to Kline-Mcklintock. —lindejos First, I'd like to comment that this article looks like Klingonese to the average user, and it should be translated into English. Anyway, I was looking at the formulas, and I saw this allegation: X = A ± B (ΔX)² = (ΔA)² + (ΔB)², which I believe is false. As I see it, if A has error ΔA then it means A's value could be anywhere between A-ΔA and A+ΔA. It follows that A±B's value could be anywhere between A±B-ΔA-ΔB and A±B+ΔA+ΔB; in other words, ΔX=ΔA+ΔB. If I am wrong, please explain why. Am I referring to a different kind of error, by any chance? aditsu 21:41, 22 February 2006 (UTC) As the document I added to External links ([1]) explains it, we are looking at ΔX as a vector with the variables as axes, so the error is the length of the vector (the distance from the point where there is no error). It still seems odd to me, because this gives the distance in the "variable plane" and not in the "function plane". But the equation is correct. —Yoshigev 22:14, 23 March 2006 (UTC) Now I found another explanation: We assume that the variables have Gaussian distributions. The addition of two Gaussians gives a new Gaussian with a width equal to the quadrature sum of the widths of the originals.
(see [2]) —Yoshigev 22:27, 23 March 2006 (UTC) ## Article title The current title "Propagation of errors resulting from algebraic manipulations" seems to me not so accurate. First, the errors don't result from the algebraic manipulations, they "propagate" by them. Second, I think that the article describe the propagation of uncertainties. And, third, the title is too long. Seems okay. A problem with the article is that the notation x + Δx is never explained. From your remarks, it seems to mean that the true value is normally distributed with mean x and variance Δx. This is one popular error model, leading to the formula (Δ(x+y))² = (Δx)² + (Δy)². Another one is that x + Δx means that the true value of x is in the interval $[x-\Delta x, x+\Delta x]$. This interpretation leads to the formula $\Delta(x+y) = \Delta x + \Delta y$, which aditsu mentions above. I think the article should make clear which model is used. Could you please confirm that you have the first one in mind? -- Jitse Niesen (talk) 00:58, 24 March 2006 (UTC) Not exactly. I have in mind that for the measured value x, the true value might be in $[x-\Delta x, x+\Delta x]$, like your second interpretation. But for that true value, it is more probable that it will be near x. So we get a normal distribution of the probable true value around the measured value x. Then, 2Δx is the width of that distribution (I'm not sure, but I think the width is defined by the standard deviation), and when we add two of them we use (Δx)² + (Δy)², as explained in Sum of normally distributed random variables. I will try to make it clearer in the article. —Yoshigev 17:45, 26 March 2006 (UTC) As you can see, I rewrote the header and renamed the article. 
—Yoshigev 17:44, 27 March 2006 (UTC) ## This article was a disgrace to humanity First the article defined $\Delta A$ as the absolute error of $A$ but then the example formulas section went ahead and defined error propagation with respect to $\Delta A$ as the standard deviation of $A$. Even then the examples had constants that weren't even considered in the error propagation. Then to add insult to injury I found a journal article which shows that at least two of the given definitions were only approximations so I had to redo the products formula and added notes to the remaining formulas explaining that they were only approximations with an example of how they are only approximations. (doesn't it seem just a little bit crazy to only find approximations to the errors when you can have an exact analysis of the error propagation, to me it just seems like approximating an approximation.) So now there are TWO columns: ONE for ABSOLUTE ERRORS and ANOTHER for STANDARD DEVIATIONS! sheesh! it's not that hard to comprehend that absolute errors and standard deviations are NOT EQUIVALENT! $\sigma_A$ is the standard deviation of $A$, and $\Delta A$ is the absolute error NOT the standard deviation of $A$. --ANONYMOUS COWARD0xC0DE 04:02, 21 April 2007 (UTC) The problem is that now the formulas given in the table don't match the stated general formula for absolute error in terms of partial derivatives. I think this makes the article rather confusing, particularly for those readers who don't have access to the journal article you've linked. One is left without a clear idea (1) where the general formula comes from, (2) why it is inexact for the case of a product of two or more quantities, and (3) where the exact forms come from. The answer is relatively simple. The general form given is basically a Taylor series, truncated after the first order term. It is valid only if the errors are small enough that we can neglect all higher order terms.
Of course, if we have a product of n quantities we would need an n-th order Taylor series to produce the exact result. The article needs to be edited to make this clear. This is the person who wrote the absolute error box, that is actually incorrect. —Preceding unsigned comment added by 132.163.47.52 (talk) 15:33, 5 July 2010 (UTC) Specifically, the general form should look something like this: $\Delta f(x_1,\cdots,x_k) = \sum_{n_1=0}^{\infty} \cdots \sum_{n_k=0}^{\infty} \left | \frac{\partial^{n_1+\cdots+n_k} f}{\partial x_1^{n_1} \cdots \partial x_k^{n_k}} \right | \frac{(\Delta x_1)^{n_1}\cdots (\Delta x_k)^{n_k}}{n_1!\cdots n_k!}$ However, the term with $n_1, \cdots, n_k$ all equal to zero is excluded. Note that this expression simplifies to the approximate form if we include only the lowest order terms (i.e., terms with $n_1 + \cdots + n_k = 1$) --Tim314 05:15, 30 May 2007 (UTC) ## Caveats? Can we get some caveats placed in here? i.e. a statement that these formulae are only valid for small sigma/mean and that the formula for the quotient particularly is problematic given Hodgson's paradox and the discussion in ratio distribution. The Goodman (1960) article has some exact formulas that modify the approximate forms here as well. 169.154.204.2 01:17, 23 August 2007 (UTC) ## Re-wrote example table I searched for this page to remind myself of one of the formulae, which I use on a regular basis. (When I'm at work, I flip to a tabbed page in my copy of [Bevington].) I could see that the table contained many errors, mostly due to confusions between $\Delta A\!\,$ and $\left(\Delta A\right)^2 \!\,$, and some of the table entries included extra terms, presumably for correlations between $A\text{, }B\text{, }C\!\,$, though they are stated in the caption to be uncorrelated. I didn't notice it right away, but the table had a third column with $\Delta A\!\,$ replaced with an undefined (but commonly used) $\sigma_A\!\,$.
In its current form, the table is (exactly, not approximately) correct for uncorrelated, normally-distributed real variables. Hopefully, it also gives an indication of how these rules can be combined. (Some are special cases of others, but they all derive from the general rule in the previous section, anyway.) So what about formulae for correlated variables? I don't know much about that because I always use the general derivative method for problems with correlations. I only use the example formulae for simple cases. Jim Pivarski 23:25, 7 October 2007 (UTC) ## Pitfall of linear error propagation: eg dividing by zero There is an important exception to the usual treatment of error propagation. That is when a power series expansion of the function F(x) to be calculated breaks down within the uncertainty of the variable x. The usual treatment assumes that the function is smooth and well-behaved over a sufficiently large domain of x near the measured value x_0. Then one expands in a power series about x_0 and takes only the first linear term. However, if F is not well-behaved (for example if F(x) goes to infinity at some point near x_0) the correct uncertainty on F(x) may be completely different from the usual formula Delta F = F'(x_0) Delta x. The simplest example is the uncertainty in 1/x. If we measure x=100 with an uncertainty of 1%, then 1/x has an uncertainty of 1% also. If we measure x=100 with an uncertainty of 100%, then 1/x has an *infinitely* large uncertainty, because x may take the value 0! Is this effect treated in any known texts? --Tdent 15:09, 24 October 2007 (UTC) ## caveats added, but more details needed I put in a couple of references to problems with ratios alluded to above, but what is required is a better quantification for when these issues are important. The Geary-Hinkel transformation has limits of validity and they can be used to show when that formula (which reduces to the standard case in favorable circumstances) is useful. 
Hopefully someone can add to that? 74.64.100.223 (talk) 16:07, 29 November 2007 (UTC) ## Incorrect Taylor Series Expansion I think the Taylor series expansion in the section in Non-linear combinations is wrong $f_k \approx f^0_k+ \sum_i^n \frac{\partial f_k}{\partial {x_i}} x_i +\sum_i^n\sum_{j (j \ne i)}^n \frac{\partial f_k}{\partial {x_i}}\frac{\partial f_k}{\partial {x_j}}x_ix_j$ $f_k \approx f^0_k+ \sum_i^n \frac{\partial f_k}{\partial {x_i}} x_i +\sum_i^n\sum_{j}^n \frac{\partial^2 f_k}{\partial {x_i}\partial {x_j}}x_ix_j$ This is a major difference. Using only the first-order approximation, $f_k \approx f^0_k+ \sum_i^n \frac{\partial f_k}{\partial {x_i}} x_i$ which is what I think the author of that section is after. This can be used to estimate error under a linear approximation (a hyperplane tangent to the non-linear function) using the functions in the previous section. Perhaps I'm missing something here, but if not, the section and examples are seriously flawed. Schomerus (talk) 22:38, 22 February 2008 (UTC) Technically you are right. However the missing term $+\sum_i^n \left(\frac{\partial f_k}{\partial {x_i}}\right)^2 x_i^2$ does not contribute to the error propagation formula. In that sense the approximation is acceptable. Rest assured that the error propagation formulas are correct and can be found in numerous text-books. Petergans (talk) 09:20, 23 February 2008 (UTC) My concern over the Taylor series expansion is that the correct coefficient for the second order term should be given by the second derivative of $f$ not the product of first derivatives. It is definitely not the case in general that $\frac{\partial f_k}{\partial {x_i}}\frac{\partial f_k}{\partial {x_j}} = \frac{\partial^2 f_k}{\partial {x_i}\partial {x_j}}$ in any sense, approximate or otherwise. 
For example, in the linear case $f = x_1 + x_2$ expanding around $(0,0)'$ according to the first formula $f \approx x_1 + x_2 +x_1x_2$ which makes little sense given that a first order approximation to a linear equation should be exact, not an approximation at all, and in any case shouldn't contain any second order terms. If I've misunderstood something here, could you provide a reference so I can check the derivation for myself? Schomerus (talk) 17:50, 23 February 2008 (UTC) Now I see it. I think that the mistake is that it's not a Taylor series expansion, but a total derivative sort of thing. total derivative#The total derivative via differentials appears to be similar to the expression I'm looking for, though not exactly. I'm sure that the expression $\sigma^2_f=\left(\frac{\partial f}{\partial a}\right)^2\sigma^2_a+\left(\frac{\partial f}{\partial b}\right)^2\sigma^2_b+2\frac{\partial f}{\partial a}\frac{\partial f}{\partial b}COV_{ab}$ is correct, but clearly it is not derived in the way that I thought. I don't have any references to hand. Can you help me out here? Petergans (talk) 22:38, 23 February 2008 (UTC) I finally figured it out - the second term is not needed at all. The function is linearized by Taylor series expansion to include first derivatives only so that the error propagation formula for linear combinations can be applied. Thanks for spotting the error. Petergans (talk) 08:05, 24 February 2008 (UTC) That's right. The expression is a direct consequence of applying the formula for linear combinations to the linear approximation (hyperplane tangent at $f(\bar{x})$ ) obtained by first order Taylor series expansion. Thanks for making the corrections in the article.
You could add that the approximation expands around $\bar{x}$ making $f_0= f(\bar{x})$, which is another potential source of inaccuracy when the linear approximation fails since for non-linear functions in general $\overline{f(x)} \ne f(\bar{x})$ Schomerus (talk) 17:34, 24 February 2008 (UTC) The equation that is currently in place with absolute values being taken cannot generate the example equation 4 ( A/B ) -> -cov(ab); the absolute value bars would make A*B error propagation the same as A/B. —Preceding unsigned comment added by 129.97.120.140 (talk) 18:31, 28 January 2011 (UTC) ## Useful Reference? Been trying to understand this for a while (and how the formulae are derived) and not that easy to understand from this page. Finally found a useful reference which then makes the rest of this page make sense so thought I'd share. The reference is Taylor, J. R., 1997: An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements. 2nd ed. University Science Books, 327 pp. and section 9.2 helps make things clear. Added a comment here as I don't have time to edit main page. —Preceding unsigned comment added by 139.166.242.47 (talk) 11:44, 29 February 2008 (UTC) A better book that covers these issues is "Propagation of Errors" by Mike Peralta. This entry April 2013. — Preceding unsigned comment added by 144.15.255.227 (talk) 17:45, 29 April 2013 (UTC) ## Undefined variable in caveats section In the caveats section, the formula $y=mz+c: \sigma^2_y=z^2\sigma^2_m+\sigma^2_c+2z\rho \sigma_m\sigma_c$ is written, but the variable $\rho$ is not defined. Please fill in its definition. Zylorian (talk) —Preceding undated comment was added at 17:04, 6 October 2008 (UTC). Additionally, this formula is wrong. It's not the correct formula for uncertainty in a linear regression; that uncertainty can't be arrived at through error propagation. I'm not sure that the idea of error propagation could ever be correctly applied to least squares parameters.
Perhaps that paragraph should be removed from the caveats section. DrPronghorn (talk) 04:04, 9 February 2012 (UTC) OK, I went ahead and removed that section. It's not clear to me that error propagation is the correct place to discuss the errors in fitted parameters. In the case of fitted models, the errors are governed by the fitting process and original fit dataset, not by error propagation. DrPronghorn (talk) 04:19, 22 February 2012 (UTC) A better book that covers these issues is "Propagation of Errors" by Mike Peralta. — Preceding unsigned comment added by 144.15.255.227 (talk) 17:43, 29 April 2013 (UTC) ## Errors in section 1 and confusions in section "Partial derivatives" The third equation for $M^f_{ij}$ is wrong or inconsistent with itself. The term $A_{ik}$ should have its subscripts reversed, right? Likewise, the same error in the next equation down. This is my first post on Wikipedia, I leave this to the author to correct it if it needs to be done. In agreement with the comments above, I found the section "Partial derivatives" a bit contradictory or at least lacking explanation. It is contradictory because in the third paragraph $\Delta x$ is said to be commonly given to be the standard deviation $\sigma$, which is the square root of the variance $\sigma^2$, yet in the section "Partial derivatives" the square root of the variance is not equal to the absolute error in the table. It was precisely this issue for which I had consulted Wikipedia. However there is enough information here to have helped me solve my problem. Dylan.jayatilaka (talk) 05:23, 7 January 2009 (UTC)
Please give me a week or so to produce the table at the bottom of "my" article- it will go beyond the table in this article, and I think it will be a very useful thing for many readers. Also I have a mention and a reference related to the 1/x thing I see above here- the solution of course is to use a higher-order expansion, and the necessary eqns are in that book. Regards... Rb88guy (talk) 18:30, 12 March 2009 (UTC) ## Derivative is not partial in Example calculation: Inverse tangent function If $f(x) = \arctan(x)$ then you cannot take $\frac{\partial f}{\partial x}$, because $f$ only depends on one variable. It should be $\frac{df}{dx}$ — Preceding unsigned comment added by Finkeltje (talkcontribs) 15:31, 1 February 2013 (UTC) ## Linear Combinations The section had the following formula: $\sigma^2_f= \sum_i^n \sum_j^n a_i \Sigma^x_{ij} a_j= \mathbf{a \Sigma^x a^t}$ I changed this to $\sigma^2_f= \sum_i^n \sum_j^n a_i \Sigma^f_{ij} a_j= \mathbf{a \Sigma^f a^t}$
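Since the quadrature-vs-linear question recurs throughout this talk page, the two rules discussed above can be checked numerically. A minimal Monte Carlo sketch (all variable names and numeric values are illustrative; it assumes independent, normally distributed inputs, which is the model under which the quadrature formula holds):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Independent, normally distributed inputs with known standard deviations
sA, sB = 0.3, 0.4
A = rng.normal(10.0, sA, N)
B = rng.normal(5.0, sB, N)

# For X = A + B the uncertainties add in quadrature, not linearly
X = A + B
predicted = np.hypot(sA, sB)   # sqrt(0.3**2 + 0.4**2) = 0.5, not 0.7
print(predicted, X.std())      # both close to 0.5

# General linear combination f = a . x with input covariance matrix Sigma:
# sigma_f**2 = a @ Sigma @ a  (the quadratic form discussed above)
a = np.array([2.0, -3.0])
Sigma = np.diag([sA**2, sB**2])   # uncorrelated inputs
var_f = a @ Sigma @ a             # 4*0.09 + 9*0.16 = 1.8
f = a[0] * A + a[1] * B
print(var_f, f.var())             # both close to 1.8
```

The empirical standard deviation of the sum lands on 0.5 (the quadrature value), not on 0.7 (the linear sum), illustrating why the two error models discussed above disagree.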
Need help differentiating trig problems: ln(x^2(3x-1))? Jan 17, 2018 $\frac{d}{\mathrm{dx}} \left[\ln \left(\left({x}^{2}\right) \left(3 x - 1\right)\right)\right] = \frac{9 {x}^{2} - 2 x}{3 {x}^{3} - {x}^{2}}$ Explanation: Well, there are no trig functions in this problem. Perhaps you meant to say logarithms? Anyways... Apply the following technique: $\frac{d}{\mathrm{dx}} \left[\ln \left(u\right)\right] = \frac{1}{u} \cdot u '$ Given: $\ln \left({x}^{2} \left(3 x - 1\right)\right)$ I would distribute the ${x}^{2}$ to $3 x - 1$ and rewrite this as $\ln \left(3 {x}^{3} - {x}^{2}\right)$ So when differentiating, I'll let $u = 3 {x}^{3} - {x}^{2}$, so now we'll find the derivative $\frac{d}{\mathrm{dx}} \left[\ln \left(u\right)\right] = \frac{1}{3 {x}^{3} - {x}^{2}} \cdot \textcolor{red}{u '}$ $\textcolor{red}{u ' = 9 {x}^{2} - 2 x}$ So $\frac{d}{\mathrm{dx}} \left[\ln \left(u\right)\right] = \frac{1}{3 {x}^{3} - {x}^{2}} \cdot \left(9 {x}^{2} - 2 x\right) = \frac{9 {x}^{2} - 2 x}{3 {x}^{3} - {x}^{2}}$
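The result can also be double-checked symbolically. A quick sketch using SymPy (assuming SymPy is available; not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.log(x**2 * (3*x - 1))
df = sp.diff(f, x)

# The result claimed in the worked answer above
claimed = (9*x**2 - 2*x) / (3*x**3 - x**2)

# If the two expressions agree, their difference simplifies to zero
print(sp.simplify(df - claimed))  # 0
```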
# Does bending of light around the Sun depend on the wavelength? If the energy of light is high, does its curvature differ from that of low-energy light around the Sun? In other words, if the wavelength of the light is shorter than another wavelength of light, then does the bending of the two lights differ around the Sun? • No, the amount of bending is the same for all wavelengths. See astronomy.stackexchange.com/q/33341/16685 & physics.stackexchange.com/q/46996/123208 Apr 5, 2021 at 4:43 • What is dependent on the energy of the beam is how much it bends space (to affect other light beams or matter). Apr 5, 2021 at 14:24 • @PM2Ring I enjoy taking things to their extreme. What if the photon was so energetic it actually had a significant gravitational pull by itself? Apr 6, 2021 at 12:20 • @StianYttervik That's very extreme! As Ross said, in theory, a light beam does affect the spacetime curvature, but the effect is tiny that it's usually neglected, although cosmologists do include the energy density due to EM radiation in their calculations of spacetime curvature and expansion. Apr 6, 2021 at 13:43 • @RossPresser Indeed! But the effect is small. Imagine we could focus the entire light output of the Sun into a cylindrical beam of radius R. The luminosity of the Sun is L=3.828E26 watts, which is equivalent to ~4.259 billion kg/s. The density of the beam is $\rho=\frac{L}{\pi R^2c^3}$. Using the formula here for the surface gravity of an infinite cylinder, $g=2\pi G\rho R$, we get $g=\frac{2GL}{c^3R}$. Using R = 1 mm, Google Calculator says (2*(3.828E26 W)*(6.6743E-11 m^3kg^-1s^-2)/c^3)/(1 mm) is ~$1.8965×10^{-6}\,m/s^2$ Apr 6, 2021 at 14:00 The amount of "gravitational light bending" is independent of the photon energy (light wavelength). The reason is that the light follows a path through spacetime that is appropriate for a massless particle and this is unique for a given set of initial conditions. 
That this is so is amply demonstrated by the consistent angular displacement of "stars" near the limb of the sun whether observed at optical or radio wavelengths. As pointed out in comments - there are small effects that must be taken into account, associated with the well-understood phenomenon of refraction in the corona of the Sun. However, these do not affect observations of lensing taken well away from the solar limb - which is easily possible at radio wavelengths and now becoming possible for the same sources using Gaia data. Further evidence comes from the wavelength-independent nature of gravitational lensing and microlensing seen outside the solar system. • "light follows a path" In particular, correct my layperson's understanding as needed, light follows a straight line in curved space. A straight line is a straight line regardless of the nature of what's following it. Apr 5, 2021 at 20:40 • @ProfRob This is a great answer accounting for the bending of light due to gravity, but what about the much larger effect of refraction by the Solar Atmosphere? Apr 5, 2021 at 22:44 • @ProfRob oh, I thought they were the same. How are they different? Apr 5, 2021 at 22:53 • @DonBranson if you shine a laser or throw a ball, they follow different paths through spacetime. Both are geodesics and there is no force on either. Apr 6, 2021 at 0:02 • @DonBranson The curvature of a trajectory through space is not the same as the curvature of a worldline through spacetime. Please see Why does the speed of an object affect its path if gravity is warped spacetime? Apr 6, 2021 at 7:44
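The wavelength independence also shows up directly in the standard weak-field formula: the deflection for a ray with impact parameter b is δ = 4GM/(c²b), which contains no frequency or photon-energy term at all. A small numerical sketch (constants rounded; it reproduces the familiar ≈1.75 arcsecond deflection for a ray grazing the solar limb):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m (impact parameter at the limb)

# General-relativistic deflection of a light ray grazing the solar limb.
# Note: the photon's wavelength/energy never enters the formula.
delta_rad = 4 * G * M_sun / (c**2 * R_sun)
delta_arcsec = math.degrees(delta_rad) * 3600
print(round(delta_arcsec, 2))  # ≈ 1.75
```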
# Use Stokes' Theorem to evaluate ∮_C F · dr, where C is oriented counterclockwise as viewed from above

## Question

F(x, y, z) = 6y i + xz j + (x + y) k; C is the curve of intersection of the plane z = y + 8 and the cylinder x² + y² = 1.
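The question is posted without an answer, so here is a sketch of the standard Stokes'-theorem computation, carried out symbolically: take the curl of F, dot it with the upward area element (0, −1, 1) dA of the graph z = y + 8, and integrate over the unit disk. Under those steps the integral evaluates to 3π:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
P, Q, R = 6*y, x*z, x + y                      # F = P i + Q j + R k

curl = sp.Matrix([
    sp.diff(R, y) - sp.diff(Q, z),             # (curl F)_x = 1 - x
    sp.diff(P, z) - sp.diff(R, x),             # (curl F)_y = -1
    sp.diff(Q, x) - sp.diff(P, y),             # (curl F)_z = z - 6
])

# Surface: z = y + 8 over the unit disk; upward normal gives
# dS = (-z_x, -z_y, 1) dA = (0, -1, 1) dA
integrand = curl.dot(sp.Matrix([0, -1, 1])).subs(z, y + 8)   # reduces to y + 3

# Integrate over the unit disk in polar coordinates (Jacobian factor r)
r, t = sp.symbols('r t', positive=True)
polar = integrand.subs({x: r*sp.cos(t), y: r*sp.sin(t)}) * r
result = sp.integrate(polar, (r, 0, 1), (t, 0, 2*sp.pi))
print(result)  # 3*pi
```

The y term integrates to zero over the symmetric disk, leaving 3 times the disk's area, π.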
All Dates/Times are Australian Eastern Standard Time (AEST)

Technical Program Paper Detail

Paper ID: D6-S1-T4.1
Paper Title: Trace Reconstruction with Bounded Edit Distance
Authors: Jin Sima, Jehoshua Bruck, California Institute of Technology, United States
Session: D6-S1-T4: Trace Reconstruction
Chaired Session: Monday, 19 July, 22:00 - 22:20
Engagement Session: Monday, 19 July, 22:20 - 22:40

Abstract: The trace reconstruction problem studies the number of noisy samples needed to recover an unknown string $\mathbf{x}\in\{0,1\}^n$ with high probability, where the samples are independently obtained by passing $\mathbf{x}$ through a random deletion channel with deletion probability $p$. The problem is receiving significant attention recently due to its applications in DNA sequencing and DNA storage. Yet, there is still an exponential gap between upper and lower bounds for the trace reconstruction problem. In this paper we study the trace reconstruction problem when $\mathbf{x}$ is confined to an edit distance ball of radius $k$, which is essentially equivalent to distinguishing two strings with edit distance at most $k$. It is shown that $n^{O(k)}$ samples suffice to achieve this task with high probability.
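To make the channel model concrete, here is a minimal, illustrative simulation of the deletion channel described in the abstract (a sketch, not code from the paper): each bit of x is deleted independently with probability p, and a trace is whatever survives.

```python
import random

def trace(x: str, p: float, rng: random.Random) -> str:
    """One noisy sample: delete each bit of x independently with probability p."""
    return ''.join(bit for bit in x if rng.random() >= p)

rng = random.Random(42)
x = '1011001110001011'
samples = [trace(x, 0.3, rng) for _ in range(5)]
for s in samples:
    print(s)
# Each trace is a subsequence of x, with expected length (1 - p) * len(x);
# trace reconstruction asks how many such samples are needed to recover x.
```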
## Lenny Kravitz – Raise vibration

Posted in lyrics, New album entries, pop, rock, rockalyrics on 18 November, 2018 by rockalyrics

1. We can get it all together 2. Low 3. Who really are the monsters? 4. Raise vibration 5. Johnny cash 6. Here to love 7. It’s enough 8. 5 more days ’til summer 9. The majesty of love 10. Gold dust 11. Ride 12. I’ll always be inside your soul

Lenny Kravitz – Raise vibration

## Breaking Benjamin – Ember

Posted in alternative, grunge, lyrics, metal, New album entries, rockalyrics on 10 May, 2018 by rockalyrics

1. Feed the wolf 2. Red cold river 3. Tourniquet 4. Psycho 5. The dark of you 6. Down 7. Torn in two 8. Blood 9. Save yourself

Breaking Benjamin – Ember

## A Perfect Circle – Eat the elephant

Posted in alternative, industrial, lyrics, metal, New album entries, rock, rockalyrics on 26 April, 2018 by rockalyrics

1. Eat the elephant 2. Disillusioned 3. The contrarian 4. The doomed 5. So long and thanks for all the fish 6. Talk talk 7. By and down the river 8. Delicious 9. DLB 10. Hourglass 11. Feathers

A Perfect Circle – Eat the elephant

## Simple Minds – Walk between worlds

Posted in britpop, lyrics, New album entries, pop, rock on 28 February, 2018 by rockalyrics

1. Magic 2. Summer 3. Utopia 4. The signal and the noise 5. In dreams 6. Barrowland star 7. Walk between worlds 8. Sense of discovery 9. Silent kiss 10. Angel underneath my skin 11. Dirty old town

Simple Minds – Walk between worlds

## Evanescence – Synthesis

Posted in alternative, lyrics, metal, New album entries, rockalyrics on 20 November, 2017 by rockalyrics

1. Overture 2. Never go back 3. Hi-lo 4. My heart is broken 5. Lacrymosa 6. The end of a dream 7. Bring me to life 8. Unraveling 9. Imaginary 10. Secret door 11. Lithium 13. You star 14.
My immortal 15. The in-between 16. Imperfection

Evanescence – Synthesis

Posted in hardrock, lyrics, New album entries, rap, rock, rockalyrics on 31 October, 2017 by rockalyrics

1. California dreaming 2. Whatever it takes
# Question 1715055

## Question

Analyze the transcendental curve. Note: (a) Type "none" if it is not applicable or there is none; (b) Do not put spaces in any of your answers.

Identify the following:

Domain:
x-intercept:
y-intercept:
Symmetry (choices: x-axis; y-axis; origin; or none):
Horizontal Asymptote:
Vertical Asymptote (answer from least to greatest):
Regions are divided by the following (answer from least to greatest):

## Answers

#### Similar Solved Questions

##### Mr Sculkey decides to make and sell braided leather bracelets as a side business. The fixed cost to run his business is $250 per month, and there is a cost to produce each bracelet. The bracelets will sell for $19.95. The function below gives the average cost (in dollars) per bracelet when x bracelets are made: A(x) = (8x + 250)/x. Determine A(1) and write a sentence explaining the meaning of your answer. Complete the table. Determine A(10) and write a sentence explaining the meaning of your answer. How many bracelets must be produced in order to ...

##### Find the x-component of the net electric field at the origin.
Express your answrer (cnS Ihe variables Q, and oppropriate constants_Conslanti Neqaly"e chatqe Qi dsnbuted unitormty atnnnd quanet-crrcle oladar Ithal Les In the qudant with the center curvature at the OrlninAEdSubmnitRequasLAnsPar8Find the > compoqent o4 Ihe nal olucutlc Iield = Viu origin Expuest UEAune Manm vatlabls 0 huuchenpnopnalu conaimaeAcdSuouiEenleshamsnet Find the X-componert Ihe net electric field the origin. Express your answrer (cnS Ihe variables Q, and oppropriate constants_ Conslanti Neqaly"e chatqe Qi dsnbuted unitormty atnnnd quanet-crrcle oladar Ithal Les In the qudant with the center curvature at the Orlnin AEd Submnit RequasLAns Par8 F... 5 answers 5 answers ##### BOHFrci4. Write the correct IUPAC name for ezch struceure: (16 points) B OH Frci 4. Write the correct IUPAC name for ezch struceure: (16 points)... 5 answers ##### Company produces two types of solar panels per year: thousand of type A and thousand of type B. The revenue and cost equations_ in millions of dollars, for the year are given as follows_ R(x,Y) = 6x+7y C(xy) =x2 4xy 8y2 16x - 61y - 4 Determine how many of each type of solar panel should be produced per year to maximize profit;The company will achieve maximum profit by sellingsolar panels of type A and sellingsolar panels of type BThe maximum profit is 5million company produces two types of solar panels per year: thousand of type A and thousand of type B. The revenue and cost equations_ in millions of dollars, for the year are given as follows_ R(x,Y) = 6x+7y C(xy) =x2 4xy 8y2 16x - 61y - 4 Determine how many of each type of solar panel should be produced ... 5 answers 3 answers ##### [Hint for this question: Positive definiteness is the condition on the Hessian matrix corresponding to a local minimum:] The matrixis positive definite if and only if:Select one:21 < @ < 1b 0 > 1a < -1d, A condition different from the other three answers is satisfied. 
[Hint for this question: Positive definiteness is the condition on the Hessian matrix corresponding to a local minimum:] The matrix is positive definite if and only if: Select one: 21 < @ < 1 b 0 > 1 a < -1 d, A condition different from the other three answers is satisfied.... 1 answers 5 answers ##### The repulsive force f between the north poles of two magnets isinversely proportional to the square of the distance d betweenthem. If the repulsive force is 24 lb when the distance is 9 in.,find the repulsive force when the distance is 3 in. The repulsive force f between the north poles of two magnets is inversely proportional to the square of the distance d between them. If the repulsive force is 24 lb when the distance is 9 in., find the repulsive force when the distance is 3 in.... 5 answers ##### 41.0 g of liquid benzene, C6H6, is isothermally vaporized at80.1°C, its boiling point, and a constant pressure of 1.20 atm. Themolar heat of vaporization of benzene is 30.8 kJ/mol. Assume thevolume of the liquid is negligible. (a) Calculate q, the heatfor the process. (b) Calculate w, the work done by the process. (c)Calculate ΔE for the process. kJ 41.0 g of liquid benzene, C6H6, is isothermally vaporized at 80.1°C, its boiling point, and a constant pressure of 1.20 atm. The molar heat of vaporization of benzene is 30.8 kJ/mol. Assume the volume of the liquid is negligible. (a) Calculate q, the heat for the process. (b) Calculate w, the wor... 5 answers ##### Carboxylic Acids and Derivatives Carboxylic AcidsAcid ChloridesAnhydrides Carboxylic Acids and Derivatives Carboxylic Acids Acid Chlorides Anhydrides... 5 answers ##### Question 14 (1 point) Two bows; A and B, are used to fre arrows X and Y respectively. 
The average force bow A exerts is twice that of bow B and arrow X has half the mass of arrow Y Compare the acceleration of the arrows.0 1) Arrow Y will have twice the acceleration of arrow X021 Both arrows will have the same acceleration.3) Arrow X will have four times the acceleration of arrow Y:4) Arrow Y will have four times the acceleration of arrow X5) Arrow X will have half the acceleration of arrow Y: Question 14 (1 point) Two bows; A and B, are used to fre arrows X and Y respectively. The average force bow A exerts is twice that of bow B and arrow X has half the mass of arrow Y Compare the acceleration of the arrows. 0 1) Arrow Y will have twice the acceleration of arrow X 021 Both arrows will h... 2 answers ##### Find the flux of the vector fieldX [ out of the closed box 0 <x <3, 0 <y<5, 0 <z <6 _Enter an exact answerFlux= Find the flux of the vector field X [ out of the closed box 0 <x <3, 0 <y<5, 0 <z <6 _ Enter an exact answer Flux=... -- 0.072461--
# [Tex/LaTex] How to make an old school "much less/greater" (<< and >>) symbol

Tags: math-mode, symbols

How can the old school "much less" (or "much greater") symbols, << and >>, be made? I haven't succeeded with http://detexify.kirelabs.org. Note that I don't want the common \ll and \gg symbols.

The mathabx package provides this glyph as \lll (and correspondingly \ggg). However, mathabx changes a lot of symbols. If you want only these two, you may easily adapt the code from Importing a Single Symbol From a Different Font:

```latex
\documentclass{article}
\DeclareFontFamily{U}{matha}{\hyphenchar\font45}
\DeclareFontShape{U}{matha}{m}{n}{
  <5> <6> <7> <8> <9> <10> gen * matha
  <10.95> matha10
  <12> <14.4> <17.28> <20.74> <24.88> matha12
}{}
\DeclareSymbolFont{matha}{U}{matha}{m}{n}
\DeclareMathSymbol{\Lt}{3}{matha}{"CE}
\DeclareMathSymbol{\Gt}{3}{matha}{"CF}
\begin{document}
$a \ll b \Lt c \Gt d \gg e$
\end{document}
```

EDIT: I was reminded in the comments that \lll and \ggg are also defined in amssymb (and other math font packages) to mean something else. The names \Lt and \Gt (from stix, unicode-math and others) avoid such clashes. The nice answer by Davislor also shows how to import these symbols from the stix fonts, though in that case I'd switch fonts completely. In a Computer Modern setting I find the mathabx symbols more suitable.
# Dynamic contact networks of patients and MRSA spread in hospitals

## Abstract

Methicillin-resistant Staphylococcus aureus (MRSA) is a difficult-to-treat infection. Increasing efforts have been taken to mitigate the epidemics and to avoid potential outbreaks in low endemic settings. Understanding the population dynamics of MRSA is essential to identify the causal mechanisms driving the epidemics and to generalise conclusions to different contexts. Previous studies neglected the temporal structure of contacts between patients and assumed homogeneous behaviour. We developed a high-resolution data-driven contact network model of interactions between 743,182 patients in 485 hospitals during 3,059 days to reproduce the exact contact sequences of the hospital population. Our model captures the exact spatial and temporal human contact behaviour and the dynamics of referrals within and between wards and hospitals at a large scale, revealing highly heterogeneous contact and mobility patterns of individual patients. A simulation exercise of epidemic spread shows that heterogeneous contacts cause the emergence of super-spreader patients, polynomial (slower than exponential) growth of the prevalence, and fast epidemic spread between wards and hospitals. In our simulated scenarios, screening upon hospital admittance is potentially more effective than reducing the infection probability in reducing the final outbreak size. Our findings are useful to understand not only MRSA spread but also other hospital-acquired infections.
## Introduction

The increasing resistance of some bacteria to currently available antibiotics has become a major public health issue in recent decades1,2,3. Hospital-acquired (HA) Methicillin-resistant Staphylococcus aureus (MRSA) infection has been routinely detected in hospitalised patients including those in high-income countries. In the European Union alone 150,000 patients are affected annually4. In Sweden, for instance, the incidence rate (per 100,000 people) of MRSA jumped from 10.76 in 2005 to 29.96 in 20145. Although some strains may be harmless to healthy people or are not health-care associated, such difficult-to-treat infections are particularly dangerous in a context of individuals with weakened immune systems6,7. A healthy person becomes infected via direct contact with an infected host, or contaminated devices and surfaces8. A hospital setting, if not under strict hygienic control, provides excellent conditions for efficient spread of MRSA. Although colonisation typically occurs in the anterior nares, open wounds or intravenous catheters are also potential sites for infection. Therefore, the daily contact between health care workers (HCW) and patients is sufficient for the propagation of the pathogens. This is worsened because colonised individuals, even if not ill, may still infect others. During a regular shift, HCWs typically interact with various patients whereas patients usually interact with different HCWs. These interactions create dynamic contact networks9,10 in which the MRSA infection eventually propagates. These contact networks have a complex structure of who was in contact with whom at a given time because of the non-trivial dynamics of a typical day in a hospital11,12,13,14. Although the transmission between a patient and a HCW is more likely in certain wards (e.g.
burns or transplant unit8), the mobility of patients between wards or hospitals15,16 creates the missing links sustaining the spread of HA-MRSA across the hospitalised population. Therefore, identifying which contact patterns (or network structures17) regulate the propagation of the infection is the first step to better understand the spread potential of MRSA, and then to develop efficient strategies to reduce the incidence in endemic areas and to avoid potential outbreaks in low-prevalence contexts18. Mechanistic models aim to generate an abstract mathematical representation of the most relevant characteristics of a system. The gain in understanding, particularly of the causal relations or mechanisms driving the chain of events, by simplifying the problem compensates for the information that is discarded. This approach, successfully used in other disciplines, for example to forecast the weather or the movement of planets, fundamentally differs from traditional epidemiological methods such as controlled experiments and logistic regression models. A mechanistic model allows assessing realistic scenarios without the need to experiment on the real population. These models have in fact long been used to study the spread of infections at the population level; they are necessarily simplified and focus on key aspects of the contagion dynamics, otherwise simulations are not feasible19,20. In the context of MRSA, a number of studies have been published in recent years, mostly using the frameworks of compartmental and agent-based models21. Compartmental models are convenient because they can be elegantly described by a set of coupled differential equations, used to describe a group of well-mixed individuals in a certain state22,23,24, and can sometimes be studied analytically23,25.
Agent-based models, on the other hand, require more computational resources but are more flexible and allow the addition of different levels of complexity by defining updating and interaction rules for each individual26,27,28. Previous models typically focused either on the population dynamics within a single ward29, in a single hospital with a few wards23,30, between one hospital and the community31, or between multiple hospitals with a simple ward structure23,27. As in any modeling exercise, these studies make a series of assumptions regarding different aspects of the population. In particular, exponential probability distributions are defined to account for the patient's length-of-stay, readmission, or referral to another hospital22,23,25. Even in models with some structure, i.e. those that have more than one ward or hospital, average values were used to characterise similar units, such as the size of the hospital or the frequency of interactions between HCWs and patients23,27,32. In other words, previous models attempted to reproduce the hospital population but failed to fully capture the heterogeneities and complexity of the contact patterns and patients' mobility, which emerge as a consequence of referrals and re-admissions within and between hospitals33. In this paper, we propose a data-driven network model of the patients of the Stockholm County in Sweden, covering a population of 2,192,433 inhabitants. By using information on patient flow (inpatients), we are able to reconstruct, at the individual and daily level, the contacts between all patients in all hospitals and nursing homes of the county. In other words, we are able to trace the exact position of each patient and identify the potential contacts between them over time. These contacts, mediated by HCWs or contaminated objects, are the most likely pathways for the spread of HA-MRSA.
We assume that a contact exists between two patients if they have been hospitalised in the same ward at the same time. Contrary to agent-based models, our network model is deterministic and captures the exact temporal and spatial heterogeneities contained in the real data. This methodology, focused on the actual patients, naturally captures the real-world contact patterns and thus avoids several assumptions about patient dynamics, such as length-of-stay, re-admission, mobility between wards and hospitals, hospital and ward sizes, occupancy levels, etc. Using real-world data is particularly important since, as we demonstrate later on, people are different and contact patterns cannot be reduced to average behaviour, as is usually done. For example, long hospital stays increase the risk of infection per stay and may compensate for a relatively low risk of transmission per contact; mobility between hospitals and re-hospitalisation play crucial roles in spreading the infection across the system and to the community; and differences in the hospital structure (wards) may shape the spread within hospitals.

## Results

### Patient dynamics

The hospital system contains 859 clinics and 979 wards distributed within 485 hospitals and nursing homes (see Methods). We only look at inpatients. There is an asymmetry in which the majority of hospitals contain a single clinic and a single ward, whereas only 7 (11) locations contain 10 or more clinics (wards) (Fig. 1A,B). This heterogeneity in the internal structure of hospitals indicates that studying isolated units or making uniform or homogeneous assumptions23,27,29,30 cannot adequately capture the complexity of the hospital system. While most hospitals are peripheral and potentially less likely to be affected by an epidemic outbreak, larger hospitals behave like hubs with a high influx of patients and internal transfers between wards or clinics.
The analysis also indicates strong weekly and annual patterns in which the hospital population may decrease by 25% (Fig. 1C) during the summer and winter breaks (when some wards are closed), and by about 10% (Fig. 1D) over the weekend in comparison to weekdays. At different scales, these cycles may directly affect spreading since the number of new infections correlates with the population size.

### Infection dynamics

To study the population dynamics of MRSA, we select one year of the original data set and extract the exact contact patterns between the patients. This sample has 170,839 patients and 20,499,964 contacts. For simplicity, we assume that newly admitted patients are not colonised or infectious (i.e. αadm = 0) and no treatment is available. We also assume that βC = βI = β and then scan the values of the per-contact infection probability β (our model naturally captures the real-world contacts over time, so contact rates are not needed; we use instead the per-contact infection probability) to estimate the basic and effective reproduction numbers (R0 and Reff respectively). The reproduction number (also known as the number of secondary infections) defines a threshold: a large epidemic outbreak may occur if R0 > 1 (or Reff > 1). In highly clustered populations like the one studied here, local depletion of susceptible patients occurs, and thus the effective reproduction number characterises the epidemic threshold better than the basic reproduction number. To estimate R0, we first infect a single individual (at different times) and set all other individuals as susceptible; then we simulate the spread of the infection and count the number of secondary infections made by the seed infected individual until it recovers. We repeat this procedure 25,000 times with different starting individuals to calculate the statistics (here and elsewhere, results are quantitatively similar if we consider 50,000 realisations of the simulations).
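The counting procedure just described can be illustrated with a minimal sketch that replays a day-indexed contact sequence from a single seed. Everything here is hypothetical: the function name, the toy contact format, the infectious period `tau`, and the probability `beta` are illustrative stand-ins, not the authors' implementation. The `effective` flag applies the Reff counting rule (transmissions to already-infected patients are not counted), while the default counts every transmission event by the seed, as for R0:

```python
import random

def secondary_infections(contacts_by_day, seed, beta, tau=30,
                         effective=False, rng=None):
    """Replay a day-indexed contact sequence and count the seed's
    secondary infections (illustrative sketch, not the paper's code).

    contacts_by_day: {day: [(u, v), ...]}, pairs sharing a ward that day.
    beta: per-contact infection probability; tau: seed's infectious period.
    effective=True skips transmissions to already-infected targets (Reff);
    effective=False also counts re-infections by the seed (R0).
    """
    rng = rng or random.Random(42)
    infected = {seed}
    count = 0
    for day in sorted(contacts_by_day):
        for u, v in contacts_by_day[day]:
            for src, dst in ((u, v), (v, u)):
                if src not in infected:
                    continue
                transmits = rng.random() < beta
                if transmits and src == seed and day < tau:
                    if not (effective and dst in infected):
                        count += 1  # secondary infection caused by the seed
                if transmits:
                    infected.add(dst)
    return count
```

Averaging this count over many randomly chosen seeds and starting times gives the Monte Carlo estimate of R0 (or Reff) described above; the text reports that 25,000 repetitions are sufficient for stable statistics.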
The estimation of Reff is similar to that of R0, with the difference that already infected individuals are not taken into account when counting the number of secondary infections, i.e. it is not possible to re-infect already infected individuals, as is normally done in calculations of R0. For example, consider the simple case of node A connected to B and C, with B also connected to C. Assume that the infection starts at A, A infects B, and B infects C; if later on A (re-)infects C, this infection event A-C is not counted for Reff but is counted for R0. Figure 2 shows that the epidemic threshold for Reff is at β ~ 0.008 (as expected, this threshold is slightly lower for R0, i.e. β ~ 0.007), implying that if the per-contact infection probability is larger than β ~ 0.008, a large epidemic outbreak is likely to happen. The effect of the high clustering (i.e. depletion of contacts) is stronger for larger infection probabilities, e.g. R0 ~ 3Reff for β = 0.1, because the infection spreads faster within close contacts. The heterogeneity of the network structure implies that patients do not have the same importance in the transmission dynamics. Looking at the distribution of Reff, i.e. the distribution of the number of secondary infections when we start the infection at random patients, we observe that for both low (β = 0.01) and high (β = 0.03) infection probabilities (chosen because they are both just above the estimated threshold), most patients cause zero or a few secondary infections whereas very few patients may cause more than 20 secondary infections (Fig. 3A,B), which may characterise them as super-spreaders36, given that $$\langle {R}_{{\rm{eff}}}\rangle =1.22$$ and the variance σ2 = 4.04 (for β = 0.01), and $$\langle {R}_{{\rm{eff}}}\rangle =2.78$$ and σ2 = 10.07 (for β = 0.03). This is in contrast to well-mixed models (e.g.24) and is a consequence of a patient making many contacts over a period of time, which is not necessarily connected to the length-of-stay (See SI, Fig.
S2A–D). We investigate the growth curve of the epidemic outbreak for the same two values of the per-contact infection probability (Fig. 3C,D). It takes around 100 days for the number of infectious individuals to take off, yet, in both cases, the curves are characterised by a linear growth trend (∝t) after the initial low-prevalence regime (Fig. 3C,D). This is likely a combined effect of adding new patients to wards and the heterogeneous length-of-stay of patients. The effect of the lower number of patients during holidays appears at around 250 days after the onset of the epidemics (the location of this effect depends on the starting date of the simulation); however, the decrease in the number of colonised and infectious individuals is not as significant as the decrease in the total number of patients (compare to Fig. 1C). The weekly pattern modulates but does not affect the trend of the number of colonised and infectious individuals. The distribution of the final outbreak sizes Ω (i.e. the number of colonised plus infectious individuals at T = 365 days) has a bi-modal shape (Fig. 3E,F). It indicates that a large number of outbreaks will cause no or only a few infections (83.2% and 52.3% of the outbreaks, for β = 0.01 and β = 0.03 respectively, generate fewer than 100 infections), but there is also a significant chance of large outbreaks affecting more than 1000 individuals (11.8% and 43.9% for β = 0.01 and β = 0.03 respectively). This bi-modal shape results from a combination of the heterogeneity in the duration and in the number of contacts37,38. We follow the number of hospitals and wards with at least one colonised or one infectious individual to study the infection spread between hospitals and wards. We observe that in our sample, with 306 hospitals and 521 wards, the number of infected hospitals one year after patient zero increases 5-fold (and infected wards 8-fold) if we increase the infection probability from β = 0.01 to β = 0.03 (Fig. 4A,B).
For low infection probabilities (β = 0.01), very few hospitals and wards are infectious (or colonised) within 100 days of the onset of the epidemics. A similar epidemic scenario (i.e. prevalence) is observed within about 40 days in the case of β = 0.03. On average, the first observed colonised and infectious case outside the hospital in which the epidemic started happens respectively at time $$\langle {T}_{{\rm{exit}}}\rangle =14.33$$ (σ2 = 375.61) and $$\langle {T}_{{\rm{exit}}}\rangle =24.76$$ (σ2 = 1010.25) days for β = 0.01. For β = 0.03, the situation is more critical, and colonised and infectious cases occur respectively at $$\langle {T}_{{\rm{exit}}}\rangle =10.21$$ (σ2 = 198.22) and $$\langle {T}_{{\rm{exit}}}\rangle =18.89$$ (σ2 = 604.21) days. These results indicate that in conservative scenarios, it takes 3.5 weeks for the infection to appear in a second hospital (either because of the transfer of an infected patient or because a colonised patient develops infection there). In the most critical scenario, it takes only about 2.5 weeks. Results are similar if we look at the epidemic spread between the source ward and a second ward (instead of between hospitals), except that the infection spreads slightly faster. This is expected because most locations contain a single ward (see Fig. 1B).

### Infection control

We study the effect of simulated screening and hygiene on the infection dynamics. In this experiment, we assume that newly admitted patients may be colonised with probability αadm = 0.087729, implying a constant influx of colonised patients, in contrast to the assumption of the previous sections. This assumption accounts well for a conservative worst-case scenario and should not be taken as the current colonised admission rate in Sweden. We also assume that infectious and screened colonised patients undergo treatment, lasting on average τtreat = 7 days. During treatment, patients are quarantined and cannot infect or be re-infected.
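The two control knobs used in this section, screening a fraction of admitted patients and scaling the per-contact infection probability through improved hygiene, can be sketched as follows. This is a minimal illustration under hypothetical names (`admit_patient`, `contact_infection_prob`); it mirrors the screening fraction and hygiene factor of the experiments but is not the authors' simulation code:

```python
import random

def admit_patient(colonised, screening_fraction, rng):
    """Screening sketch: an admitted colonised patient is detected with
    probability `screening_fraction` and sent to treatment/quarantine,
    where it cannot infect others (names are illustrative)."""
    if colonised and rng.random() < screening_fraction:
        return "treated"
    return "colonised" if colonised else "susceptible"

def contact_infection_prob(beta, hygiene_factor):
    """Hygiene sketch: improved hygiene scales the per-contact infection
    probability beta by (1 - hygiene_factor)."""
    return beta * (1.0 - hygiene_factor)
```

For example, full screening (`screening_fraction = 1.0`) sends every colonised admission straight to treatment, while `hygiene_factor = 1.0` sets the per-contact infection probability to zero; even then a few cases may remain, since patients can already be colonised on admission.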
To simulate screening, we select a random fraction ψ of the admitted patients and put them in treatment if colonised; susceptible patients have no special protocol. For hygiene, we scale the per-contact infection probability by a factor 1 − ϕ (i.e. higher ϕ means stricter hygiene, for example increased hand washing), that is, we set the per-contact infection probability to β(1 − ϕ). In contrast to previous sections, the daily influx of colonised patients now implies that a lower per-contact infection probability is sufficient to sustain the epidemics. We thus test two scenarios: β = 0.001 (low) and β = 0.002 (high), i.e. both below the epidemic threshold. Figure 4C shows that for both infection probabilities, increasing the fraction of screened patients decreases the final outbreak size Ω. Screening 50% of the admitted patients reduces the final outbreak by 25–38% in comparison to no screening. Independently of β, full screening is necessary to reduce Ω to zero. Figure 4D shows that reducing the infection probability by 50% reduces the final outbreak size by 22–26% (measured in the same way as for the screening) in comparison to no action. Here, a null per-contact infection probability (ϕ = 1) may still result in a few cases since some patients may already be colonised when admitted. It is difficult to directly compare the two strategies because, unlike screening, there is no one-to-one correspondence between hygiene measures and ϕ, given that the latter involves not only compliance with hand washing or glove use but also the cleaning of rooms and utensils.

## Discussion

Antibiotic-resistant bacteria have been putting increasing pressure on hospital systems in both low- and high-income countries. Methicillin-resistant Staphylococcus aureus (MRSA), in particular, is difficult to treat and can be particularly dangerous to people with weakened immune systems such as hospitalised individuals.
Although antibiotic resistance has developed due to the intense use of antibiotics, MRSA commonly spreads through physical contact with infected individuals, objects, or surfaces. Hygiene is generally regarded as an effective means to prevent the propagation of MRSA3,8, but enforcing such routines in daily practice among HCWs remains challenging, mostly due to socio-economic factors. Not least, recent studies point to other factors potentially affecting the spread not only of MRSA but also of other resistant pathogens39. Such a scenario calls for a better understanding of the population dynamics of such infections, and modelling studies make it possible to test hypotheses without interfering with the study population. To understand the dynamics of MRSA infections, we developed a realistic data-driven model of contact patterns between 743,182 inpatients in a large hospital system in Sweden based on the exact location of patients over time. Our innovative model captures the exact complex spatial and temporal heterogeneities (inherent to the structure of hospitals) and patients' behaviour (e.g. referrals, length-of-stays) in a dynamic contact network structure. We first study these interactions and hospitalisation dynamics during 8 years to identify non-trivial temporal and mobility patterns potentially affecting the spread of hospital-acquired infections in hospitals. We then couple an MRSA contagion model to this population to explore worst-case scenarios and test how individual behaviour affects spreading. Our simulations indicate that the epidemic threshold (Reff = 1) is reached if β ~ 0.008. Nevertheless, with a constant influx of colonised individuals, as typically occurs in the real world, an epidemic may be sustained with lower per-contact infection probabilities.
If we set the entire population susceptible and a single patient as infected, we observe that right above the epidemic threshold, the epidemic curve grows linearly after an initial low-prevalence period lasting about 100 days. This is a consequence of the heterogeneities in the contact patterns that slow down the spread and prevent the exponential growth typically observed in theoretical epidemic models. Linear growth is rarely observed in theoretical models40, possibly because they disregard dynamic heterogeneous contact patterns37. A detailed understanding requires further analysis of the models and temporal patterns. Another consequence of patients' behaviour is the bi-modal distribution of outbreak sizes, in which the likelihood of minor outbreaks (<100 infected individuals) is relatively high, but larger outbreaks (>1000 individuals) are also common (with intermediate cases less likely). This is in accordance with previous reports of MRSA outbreaks in Sweden during the 1990s18. We also observe the appearance of a few super-spreaders, i.e. individuals infecting a much larger than average number of patients. Super-spreading in this context correlates more with the number of contacts per patient than with length-of-stay, suggesting that long-term patients should stay together to avoid high patient turnover in shared rooms. In our tested scenarios, the simulated epidemics can spread relatively fast, taking only 2 to 4 weeks to reach a second hospital or ward. Even for low infection probabilities, MRSA may reach up to 10 hospitals within a year after the initial infection. These results emphasise the need not only for quick responses (e.g. quarantine and treatment) once a positive case is detected but also for quick testing of suspect cases. One of the goals of modeling exercises is to understand the mechanisms sustaining the spread of the infection in order to reduce its incidence and hence mortality and costs.
The analysis of the population dynamics is often done by comparing the implementation of some infection control protocol against the absence of any action. In our experiments, we show that reducing the per-contact infection probability by 50%, as a consequence of improved hygiene (e.g. hand washing or better room cleaning), is not sufficient to halve the final outbreak size. Since sufficient cleaning of hands and utensils is often not achievable in practice, other strategies involving screening followed by isolation and treatment of colonised or infected people have to be introduced. As we have shown, screening every admitted patient is a priori the only way to identify all colonised patients and avoid an epidemic, but this strategy is associated with high financial costs41. It is also unrealistic to quarantine every new patient before confirmation of infection, implying that transmission may occur meanwhile. Screening of intensive care patients, at-risk wards, or previously documented carriers, on the other hand, has been suggested as an alternative to global screening28,41. Since hospitals are strongly connected through the transfer of patients, a surveillance system based on a few sentinel hospitals, chosen according to their centrality in this referral network, may also be an effective means for early detection, and thus control, of MRSA outbreaks42. An agreement on the best cost-effective policies is however still missing. Our modelling exercise contains limitations in the contagion model to make simulations computationally feasible and also to accommodate the unavailability of data. The main limitation is the assumption that the infection probability is independent of ward type. This information was not disclosed in our data set. It is known, however, that some wards carry more risk than others and thus infection is more concentrated on patients and HCWs in these wards.
While the addition of this information would likely affect numerical estimations, the advantage of our assumption is to single out the effect of contact patterns on the spread dynamics. For example, we were able to identify that super-spreaders exist simply because of heterogeneous contact patterns and not because of a higher than average individual risk of infection, which in turn may boost super-spreading. In a more realistic setting, a higher risk of infection could amplify super-spreading. Sweden is a low-prevalence setting for MRSA, and thus some parameters used in the simulation exercise, obtained from other high-income countries, may be unrealistic for Sweden. Nevertheless, they provide a baseline for worst-case scenarios in a completely susceptible population as we simulate. Parameter estimates in this study, therefore, should only be used to understand the spread mechanisms under hypothetical scenarios and not for public health policy. The main contributions of our study are the identification of highly heterogeneous contact patterns and patient mobility between hospitals, and how these patterns result in potential super-spreaders, fast spread across wards and hospitals, but relatively slow epidemic growth across the system, which is further regulated by weekly and seasonal hospitalisation patterns. Hospitalised populations are in constant movement, and infection paths thus depend on both the timing and the number of interactions, variables captured neither in standard regression and network models nor in cohort and longitudinal studies. Such findings are relevant not only for MRSA but also for other hospital-acquired infections, such as Klebsiella pneumoniae or Clostridium difficile43, which spread through close, not necessarily physical, contact, where hand hygiene may not be sufficient to control spreading. The incidence of MRSA has decreased in recent years and some countries, such as Sweden, have managed to keep a very low prevalence.
Nevertheless, similar mechanisms are expected to regulate the spread of other HA infectious diseases, and our approach based on deterministic real-world temporal contact networks helps to better understand the general phenomenon of spatio-temporal infection spread within and between hospitals. A holistic mechanistic perspective, as proposed in our study, involving realistic modelling of complex dynamic human interactions, is fundamental to understanding the spread and to designing strategies to mitigate epidemics of HA infectious diseases. Our modelling also allows detailed tracing of the potential infection paths in case of an outbreak and can be used to identify hospitals and wards at risk. Such information can be used to develop an efficient sentinel system based on the flow of patients and infections, in which all patients at sentinel hospitals or wards are screened and their previous hospitalisations traced back. Our methodology to estimate infection paths allows the identification of sources of infection at the individual patient or ward level. This information could be useful to understand the causes of an epidemic outbreak, improve hygiene and screening, and monitor the behaviour of HCWs. Further modelling improvements could be achieved by considering the variability of infection rates at different wards and the interactions between HCWs and patients.

## Materials and methods

### Patient flow data set

We gather data on the admission and discharge dates of 743,182 inpatients in 485 hospitals and nursing homes at 52 geographical locations in Stockholm County, Sweden. This information is collected during 3,059 continuous days in the 2000s (the exact years are confidential for ethical reasons). A total of 2,019,236 admissions are recorded. Each patient and the respective ward of hospitalisation are anonymous but have unique IDs for identification. The data set was not generated by our team.
The original data can be obtained by requesting access from the Stockholm County Council (Stockholm läns landsting, www.sll.se) in Sweden, which is a public institution. The Ethical Review Board in Stockholm approved the use of the data (Record 2006/3:3) for our research. The data set contains anonymised information and cannot be linked back to specific individuals. All methods were carried out in accordance with relevant guidelines and regulations.

### Contact network model

A contact network is a set of links connecting pairs of nodes9,10. Our data-driven contact network model is formed by inpatients (the nodes) who shared a ward at the same time (Fig. 5A); that is, we connect pairs of inpatients who were on the same ward on the same day. We assume that patient-to-patient contacts occur through HCWs or contaminated objects and surfaces. The time resolution is one day, implying that the contact network changes in time, i.e. a link (representing the contact) between two individuals may or may not exist at a given time9,10. After each day, links may appear, disappear, or be maintained according to the real-world location and dynamics of patients. This is a deterministic contact network model that captures the exact spatio-temporal dynamics, with contacts formed directly from real-world data; it differs from previous agent-based models using stochastic networks44,45. Since our network model captures the interaction patterns at high spatial and temporal resolutions, assumptions regarding the structure of the hospitals, mobility, contact rates, or sampling from distributions are unnecessary.

### Contagion model

We devise a novel model capturing the main mechanisms of the MRSA contagion dynamics on the empirical networked population. Our model and parameters are based on the literature and adapted to the unique network structure available in our study22,24,30.
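As an illustration of how such a daily co-presence network can be assembled from admission and discharge records, the sketch below (illustrative Python with made-up patient and ward IDs, not the authors' code) links every pair of patients who share a ward on a given day:

```python
from collections import defaultdict
from itertools import combinations

def daily_contact_edges(stays):
    """stays: list of (patient_id, ward_id, admit_day, discharge_day), days inclusive.
    Returns {day: set of patient pairs} linking patients who share a ward that day."""
    occupancy = defaultdict(set)  # (day, ward) -> patients present
    for pid, ward, admit, discharge in stays:
        for day in range(admit, discharge + 1):
            occupancy[(day, ward)].add(pid)
    edges = defaultdict(set)
    for (day, ward), patients in occupancy.items():
        for u, v in combinations(sorted(patients), 2):
            edges[day].add((u, v))
    return dict(edges)

# Patients p1 and p2 overlap in ward W1 on days 2-3 only; p3 is alone in W2
stays = [("p1", "W1", 1, 3), ("p2", "W1", 2, 5), ("p3", "W2", 2, 4)]
net = daily_contact_edges(stays)
```

Links exist only on the days of co-presence (here days 2 and 3), which is the time-varying property the deterministic network model exploits.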
The model is naturally a simplification of the infection dynamics, keeping the key features while making simulations feasible given the complexity of the infection process and the size of the study population. We thus assume that a single strain circulates in this population and that a patient may be either Susceptible (S), Colonised (C), or Infectious (I)24. This is an extension of the standard SIS compartmental model with an extra state C for asymptomatically colonised patients. Transitions between these states occur according to the interaction dynamics over the network or to the progression of the infection (Fig. 5B). A susceptible patient can be infected upon contact, with per-contact infection probabilities βC and βI (note that it is meaningless to define an infection rate in our model, since our real contact network already captures the contacts over time and thus the heterogeneity of contact patterns37, in contrast to the average contact rates used in stochastic models), from a patient that is C or I, respectively. An individual in state C or I cannot be re-infected. Although both states C and I are contagious, they differ in their recovery probabilities. Upon infection, i.e. upon entering the colonised state C, the patient will further develop the infection with probability μ = 0.224 or decolonise with probability 1 − μ. If the patient develops the infection, they move from state C to I after τinfec = 9.5 days24,30. On the other hand, if the patient decolonises, they move from state C to S after an average of τrec = 370 days22. We use conservative values from the literature to study worst-case scenarios, since a full sensitivity analysis is computationally unrealistic. We assume that a newly admitted patient may be colonised with probability αadm and thus susceptible with probability 1 − αadm. If treatment occurs, the patient is cured after τtreat and can then be re-infected.
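The per-contact transmission step described above can be sketched as follows. This is our own illustrative fragment, not the authors' implementation; it assumes each contact is an independent Bernoulli transmission trial, which is one common way to combine per-contact probabilities over a day:

```python
import random

def infect_today(n_contacts_C, n_contacts_I, beta_C, beta_I, rng):
    """One day's infection trial for a susceptible patient: return True if the
    patient becomes colonised, given contacts with colonised (C) and infectious
    (I) patients and per-contact probabilities beta_C and beta_I."""
    p_escape = (1 - beta_C) ** n_contacts_C * (1 - beta_I) ** n_contacts_I
    return rng.random() < 1 - p_escape

rng = random.Random(1)
# After colonisation, the paper's rules branch: with probability mu = 0.2 the
# patient progresses C -> I after tau_infec days, otherwise returns C -> S
# after tau_rec days on average.
```

With β = 0 no transmission can occur, and with β = 1 a single contact suffices, which makes the escape-probability formula easy to sanity-check.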
We also assume that antibiotic resistance does not emerge spontaneously during the study period and that discharged patients are susceptible. Table 1 summarises the few parameters of the model.

## References

1. Tacconelli, E., Angelis, G. D., Cataldo, M., Pozzi, E. & Cauda, R. Does antibiotic exposure increase the risk of methicillin-resistant Staphylococcus aureus (MRSA) isolation? A systematic review and meta-analysis. J Antimicrob Chemother. 61(1), 26–38 (2008).
2. Anon. The bacterial challenge: time to react. Technical report, ECDC/EMEA, Stockholm (2009).
3. Anon. Antimicrobial resistance: Global report on surveillance. WHO, Geneva (2014).
4. Kock, R. et al. Methicillin-resistant Staphylococcus aureus (MRSA): Burden of disease and control challenges in Europe. Euro Surveillance. 15(41), 19688 (2010).
5. Anon. Sjukdomsstatistik av MRSA i Sverige. www.folkhalsomyndigheten.se. Public Health Agency of Sweden. Accessed March (2015).
6. Van Hal, S. et al. Predictors of mortality in Staphylococcus aureus bacteremia. Clin Microbiol Rev. 25(2), 362 (2012).
7. Hanberger, H. et al. Increased mortality associated with Methicillin-resistant Staphylococcus aureus (MRSA) infection in the intensive care unit: Results from the EPIC II study. Int J Antimicrob Agents. 38(4), 331 (2011).
8. Anon. Methicillin-resistant Staphylococcus aureus (MRSA): Guidance for nursing staff. Royal College of Nursing, United Kingdom (2005).
9. Bansal, S., Read, J., Pourbohloul, B. & Meyers, L. The dynamic nature of contact networks in infectious disease epidemiology. J Biol Dyn. 4(5), 478 (2010).
10. Masuda, N. & Holme, P. Predicting and controlling infectious disease epidemics using temporal networks. F1000Prime Rep. 5(6) (2013).
11. Ueno, T. & Masuda, N. Controlling nosocomial infection based on structure of hospital social networks. J Theor Biol. 254(3), 655–666 (2008).
12. Vanhems, P. et al. Estimating potential infection transmission routes in hospital wards using wearable proximity sensors.
PLoS ONE. 8(9), e73970 (2013).
13. Cusumano-Towner, M., Li, D., Tuo, S., Krishnan, G. & Maslove, D. A social network of hospital acquired infection built from electronic medical record data. J Am Med Inform Assoc. 20(3), 427–434 (2013).
14. Obadia, T. et al. Detailed Contact Data and the Dissemination of Staphylococcus aureus in Hospitals. PLoS Comput Biol. 11(3), e1004170 (2016).
15. Donker, T., Wallinga, J. & Grundmann, H. Patient Referral Patterns and the Spread of Hospital-Acquired Infections through National Health Care Networks. PLoS Comput Biol. 6(3), e1000715 (2010).
16. Ohst, J., Liljeros, F., Stenhem, M. & Holme, P. The network positions of Methicillin resistant Staphylococcus aureus affected units in a regional healthcare system. EPJ Data Science. 3(29) (2014).
17. Newman, M. Networks: An introduction. Oxford University Press, USA (2010).
18. Stenhem, M. et al. Epidemiology of Methicillin-resistant Staphylococcus aureus (MRSA) in Sweden 2000–2003, increasing incidence and regional differences. BMC Infect Dis. 6(30) (2006).
19. Diekmann, O., Heesterbeek, H. & Britton, T. Mathematical tools for understanding infectious disease dynamics. Princeton University Press, USA (2012).
20. Keeling, M. & Rohani, P. Modeling infectious diseases in humans and animals. Princeton University Press, USA (2007).
21. Van Kleef, E., Robotham, J., Jit, M., Deeny, S. & Edmunds, W. Modelling the transmission of healthcare associated infections: A systematic review. BMC Infect Dis. 13, 294 (2013).
22. Cooper, B. et al. Methicillin-resistant Staphylococcus aureus in hospitals and the community: Stealth dynamics and control catastrophes. Proc Natl Acad Sci USA 101(27), 10223 (2004).
23. Bootsma, M., Diekmann, O. & Bonten, M. Controlling Methicillin-resistant Staphylococcus aureus: Quantifying the effects of interventions and rapid diagnostic testing. Proc Natl Acad Sci USA 103, 5620 (2006).
24. Kajita, E., Okano, J., Bodine, E., Layne, S. & Blower, S. Modelling an outbreak of an emerging pathogen.
Nat Rev Microbiol. 9(700) (2007).
25. Simon, C., Percha, B., Riolo, R. & Foxman, B. Modeling bacterial colonization and infection routes in health care settings: Analytic and numerical approaches. J Theor Biol. 334, 187 (2013).
26. Macal, C. et al. Modeling the transmission of community associated methicillin-resistant Staphylococcus aureus: A dynamic agent-based simulation. J Transl Med. 12, 124 (2014).
27. Lee, B. et al. Modeling the spread of methicillin-resistant Staphylococcus aureus (MRSA) outbreaks throughout the hospitals in Orange county, California. Infect Control. 32, 562 (2011).
28. Robotham, J. et al. Cost-effectiveness of national mandatory screening of all admissions to English National Health Service hospitals for meticillin-resistant Staphylococcus aureus: a mathematical modelling study. Lancet Infect Dis. 16(3), 348–356 (2016).
29. Hall, I., Barrass, I., Leach, S., Pittet, D. & Hugonnet, S. Transmission dynamics of Methicillin-resistant Staphylococcus aureus in a medical intensive care unit. J R Soc Interface. 9(75), 2639 (2012).
30. Sadsad, R., Sintchenko, V., McDonnell, G. & Gilbert, G. Effectiveness of hospital-wide Methicillin-resistant Staphylococcus aureus (MRSA) infection control policies differs by ward specialty. PLoS ONE. 8(9), e83099 (2013).
31. Kouyos, R., Klein, E. & Grenfell, B. Hospital-Community Interactions Foster Coexistence between Methicillin-Resistant Strains of Staphylococcus aureus. PLoS Pathog. 9(2), e1003134 (2012).
32. Belik, V., Karch, A., Hövel, P. & Mikolajczyk, R. Leveraging Topological and Temporal Structure of Hospital Referral Networks for Epidemic Control. In: Temporal Network Epidemiology. Springer, Singapore. p. 199–214 (2017).
33. Assab, R. et al. Mathematical models of infection transmission in healthcare settings: recent advances from the use of network structured data. Current Opinion Infectious Diseases. 30(4), 410–418 (2017).
34. Liljeros, F., Giesecke, J. & Holme, P.
The contact network of inpatients in a regional healthcare system. A longitudinal case study. Math Pop Stud. 14(4) (2007).
35. Barrat, A., Cattuto, C., Tozzi, A., Vanhems, P. & Voirin, N. Measuring contact patterns with wearable sensors: Methods, data characteristics and applications to data-driven simulations of infectious diseases. Clin Microbiol Infect. 20(1), 10–16 (2013).
36. Lloyd-Smith, J., Schreiber, S., Kopp, P. & Getz, W. Superspreading and the effect of individual variation on disease emergence. Nature. 438, 355–359 (2005).
37. Rocha, L., Liljeros, F. & Holme, P. Simulated Epidemics in an Empirical Spatiotemporal Network of 50,185 Sexual Contacts. PLoS Comput Biol. 7(3), e1001109 (2011).
38. Stehle, J. et al. Simulation of an SEIR infectious disease model on the dynamic contact network of conference attendees. BMC Med. 9(87) (2011).
39. Mody, L. et al. Multidrug-resistant organisms in hospitals: What is on patient hands and in their rooms? Clin Infect Dis. pii(ciz092) (2019).
40. Colgate, S. A., Stanley, E. A., Hyman, J. M., Layne, S. P. & Qualls, C. Risk behavior-based model of the cubic growth of acquired immunodeficiency syndrome in the United States. Proc Natl Acad Sci USA 86, 4793–4797 (1989).
41. Gurieva, T., Bootsma, M. & Bonten, M. Cost and effects of different admission screening strategies to control the spread of Methicillin-resistant Staphylococcus aureus. PLoS Comput Biol. 9(2) (2013).
42. Ciccolini, M. et al. Infection prevention in a connected world: The case for a regional approach. Int J Med Microbiol. 303(380) (2013).
43. Fernández-Gracia, J., Onnela, J., Barnett, M., Eguíluz, V. & Christakis, N. Influence of a patient transfer network of US inpatient facilities on the incidence of nosocomial infections. Scientific Reports. 7(1), 2930 (2017).
44. Jarynowski, A. & Liljeros, F. Contact networks and the spread of MRSA in Stockholm hospitals. Second European Network Intelligence Conference (2015).
45.
Pei, S., Morone, F., Liljeros, F., Makse, H. & Shaman, J. Inference and control of the nosocomial transmission of methicillin-resistant Staphylococcus aureus. eLife. 7(e40977) (2018).

## Author information

### Contributions

L.R. designed and performed research; V.S., M.E., T.L., A.T. contributed with methods; F.L. contributed with data; L.R. wrote the paper; L.R., V.S., M.E., T.L., F.L., A.T. reviewed the paper.

### Corresponding author

Correspondence to Luis E. C. Rocha.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rocha, L.E.C., Singh, V., Esch, M. et al. Dynamic contact networks of patients and MRSA spread in hospitals. Sci Rep 10, 9336 (2020). https://doi.org/10.1038/s41598-020-66270-9
# The Ideal Gas Law

# Problem: When 0.550 g of neon is added to a 750-cm3 bulb containing a sample of argon, the total pressure of the gases is found to be 1.46 atm at a temperature of 320 K. Find the mass of the argon in the bulb.

###### FREE Expert Solution

We're being asked to calculate the mass of argon in a 750-cm3 container at 320 K. The container holds a gas mixture of argon (Ar) and neon (Ne). We calculate the mass of argon in grams using the following steps:

Step 1. Calculate the total number of moles of gas present in the container using the ideal gas equation: $PV = nRT$, where P = pressure (atm), V = volume (L), n = moles (mol), R = gas constant = 0.08206 (L·atm)/(mol·K), and T = temperature (K).

Step 2. Calculate the number of moles of argon by subtracting the moles of neon from the total.

Step 3. Calculate the mass of argon from its moles and molar mass.
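Carrying out the three steps numerically (assuming molar masses of 20.18 g/mol for Ne and 39.95 g/mol for Ar):

```python
R = 0.08206                     # gas constant, L·atm/(mol·K)
P, V, T = 1.46, 0.750, 320.0    # atm, L (750 cm^3), K

# Step 1: total moles from PV = nRT
n_total = P * V / (R * T)

# Step 2: subtract the moles of neon to get the moles of argon
n_Ne = 0.550 / 20.18
n_Ar = n_total - n_Ne

# Step 3: convert moles of argon to grams
m_Ar = n_Ar * 39.95
print(round(m_Ar, 3))  # prints 0.577
```

So the bulb contains about 0.577 g of argon.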
# Changing rates

1. Mar 18, 2005 ### UrbanXrisis

In a right triangle with sides x, y, z, the angle theta between leg z and leg y is increasing at a constant rate of 3 rad/min. What is the rate at which x is increasing, in units per minute, when x equals 3 units and z is 5 units?

So the triangle is basically a 3,4,5 triangle. The theta is between the legs 5 and 4. Leg 4 does not change since it's the base. To set up a changing-rates problem, I did: $$tan\theta = \frac{O}{A}$$ $$4tan \frac{d \theta}{dt} = \frac{dx}{dt}$$ $$\theta=-0.570 rad/min$$ I think there is something wrong with this but I'm not sure why the theta is negative.

2. Mar 18, 2005 ### SpaceTiger Staff Emeritus

Are you sure the derivative of $$tan(\theta)$$ is $$tan (d \theta/dt)$$?

3. Mar 18, 2005 ### MathStudent

From your post it seems that you are dealing with a right triangle where z is the hypotenuse, y is the adjacent, and x is the opposite, with z=5, y=4, x=3. tan is a function of theta, and theta is a function of time, so tan is actually a composite function of time. You have to use the chain rule when differentiating tan with respect to t, i.e.: if $$f(\theta) = tan \theta$$ then $$\frac{df}{dt} = \frac{df}{d \theta} \frac{d \theta}{dt}$$ Just an added note: you can assume that either y or z is constant, and they both work out the same. edit: SpaceTiger beat me to it

4. Mar 18, 2005 ### UrbanXrisis

thanks $$tan\theta = \frac{O}{A}$$ $$4sec^2 \frac{d \theta}{dt} = \frac{dx}{dt}$$ $$\theta=196.855 rad/min$$ that doesn't seem correct though

5. Mar 18, 2005 ### MathStudent

You're solving for the wrong quantity... the question asks for the rate at which x increases. The rate of change of theta is already given. Also, your derivative is still incorrect, there is a small error, do you see it?

6. Mar 18, 2005 ### UrbanXrisis

sorry, that is in units/minute. 196.855 is the rate at which x is increasing.
$$tan\theta = \frac{O}{A}$$ $$4sec^2 \frac{d \theta}{dt} = \frac{dx}{dt}$$ $$4 \frac{1}{tan(3)^2}= \frac{dx}{dt}$$ $$\frac{dx}{dt}=196.855 units/min$$

7. Mar 18, 2005 ### UrbanXrisis

$$tan\theta = \frac{O}{A}$$ $$4sec^2 \theta \frac{d \theta}{dt} = \frac{dx}{dt}$$ is that what you mean by my error?

8. Mar 18, 2005 ### MathStudent

yes, that was it... what's the answer you get now? Last edited: Mar 18, 2005

9. Mar 18, 2005 ### UrbanXrisis

21.333 units/min

10. Mar 18, 2005 ### MathStudent

how did you come up with that answer? hint: you're solving for $$\frac{dx}{dt}$$, so you need both the values of $$\frac{d \theta}{dt}$$ and $$sec\theta$$ Last edited: Mar 18, 2005

11. Mar 18, 2005 ### MathStudent

by the way... did the problem tell you that y doesn't change, or is that your assumption? You get a different answer if z is constant.

12. Mar 19, 2005 ### UrbanXrisis

yeah, the problem tells me that y doesn't change so I have to use tan(theta) $$tan\theta=\frac{O}{A}$$ $$4sec^2\theta \frac{d\theta}{dt}=\frac{dx}{dt}$$ $$\theta=sin^{-1}\frac{3}{5}=36.87$$ $$\frac{4}{cos^2 36.87} \cdot 3 rad/min =\frac{dx}{dt}$$ $$\frac{dx}{dt}=18.750 units/min$$ Last edited: Mar 19, 2005

13. Mar 19, 2005 ### MathStudent

Just to let you know, there is another way of solving this that lets you avoid having to find what theta is, by using the definition of sec(theta) for a right triangle. $$cos \theta = \frac{A}{H}$$ $$sec \theta = \frac{1}{cos \theta} = \frac{H}{A}$$ $$\Rightarrow sec^2 \theta = \frac{H^2}{A^2}$$ Therefore the problem boils down to simple arithmetic... $$\frac{dx}{dt} = 4sec^2\theta \frac{d\theta}{dt}$$ $$\Rightarrow \frac{dx}{dt} = 4 \left(\frac{5^2}{4^2}\right)(3) = 18.75$$
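MathStudent's shortcut is easy to check numerically; the snippet below (mine, not from the thread) computes dx/dt both with the ratio H/A and via the explicit angle:

```python
import math

A, O, H = 4.0, 3.0, 5.0      # adjacent, opposite, hypotenuse of the 3-4-5 triangle
dtheta_dt = 3.0              # rad/min

# sec^2(theta) = (H/A)^2, so dx/dt = A * sec^2(theta) * dtheta/dt
dx_dt = A * (H / A) ** 2 * dtheta_dt
print(dx_dt)  # prints 18.75

# Cross-check via the explicit angle theta = arcsin(O/H)
theta = math.asin(O / H)
assert abs(A / math.cos(theta) ** 2 * dtheta_dt - dx_dt) < 1e-9
```

Both routes give 18.75 units/min, confirming the result in post 12.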
# Synopsis: Polarized Light in Safe Storage

New techniques for storing and retrieving polarized photons improve the quantum memory capabilities of rare-earth-doped crystals.

Quantum memory, which creates a place for storing quantum information until it is needed later, is an essential component in quantum computing and long-distance quantum communication. Some solid-state materials can hold quantum states of light for long times, but many materials only optimally absorb light with a certain polarization. Quantum memories that could store any polarization of light would therefore offer much more flexibility. In a step toward this goal, three independent research groups, from China, Spain, and Switzerland, are now reporting in Physical Review Letters that they are able to store and retrieve arbitrary polarization states of light from a solid-state quantum memory. In their experiments, the teams utilized a light source limited to emitting single photons, which were absorbed by rare-earth ions rigidly confined in a crystal. Each group devised a compensation technique allowing the efficient storage, for several tens to hundreds of nanoseconds, of both components of a photon's polarization. They were able to effectively reverse the procedure to retrieve the original state. While the groups' compensation methods differ, they all achieve fidelities (a measure of how faithfully a state can be recovered) greater than $0.95$, exceeding the maximum value achievable by a classical memory. This demonstrates that such solid-state devices could operate as quantum memories for polarization qubits. – Sonja Grondalski
# Chapter 9 - Section 9.3 - The Parabola - Exercise Set - Page 998: 81

See explanations.

#### Work Step by Step

The equation of a parabola contains only one squared term ($Ax^2$ or $Cy^2$), while the equations of the other conic sections (circles, ellipses, and hyperbolas) contain both.
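The same rule can be written as a small (hypothetical) classifier on the general second-degree equation $Ax^2 + Cy^2 + Dx + Ey + F = 0$ with no $xy$ term; degenerate cases are ignored:

```python
def classify(A, C):
    """Rough classification of Ax^2 + Cy^2 + Dx + Ey + F = 0 by its squared
    terms (degenerate cases ignored)."""
    if A == 0 and C == 0:
        return "not a conic (no squared term)"
    if A == 0 or C == 0:
        return "parabola"      # exactly one squared term
    if A == C:
        return "circle"
    if A * C > 0:
        return "ellipse"       # both squared terms, same sign
    return "hyperbola"         # both squared terms, opposite signs
```

For example, `classify(1, 0)` and `classify(0, 4)` both report a parabola, while `classify(1, -1)` reports a hyperbola.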
# A Case Study of Cluster-level Models

#### 2019/10/22

In this vignette, we will discuss the use of cluster-level models with the SUMMER package. We will use data from the Kenya 2014 DHS survey as a case study. Due to the size limit on vignettes on CRAN, here we illustrate data preparation and how to use the fitINLA2 function to perform space-time smoothing of a generic binary indicator, using neonatal mortality rate (NMR) as an example. For a more complete analysis of U5MR using a cluster-level model, we refer readers to a more detailed vignette at http://faculty.washington.edu/jonno/space-station.html (Lecture 6, Case Study - Kenya 2014 DHS, in the Analysis file), where we use data from the Kenya 2014 DHS survey to illustrate spatial-temporal smoothing of U5MR at yearly scales for the 47 counties in Kenya.

## Prepare data

First, we load the package and the necessary data. INLA is not in a standard repository, so we check if it is available and install it if it is not. For this vignette, we used INLA version 19.09.03.

library(SUMMER)
library(ggplot2)
library(gridExtra)

The DHS data can be obtained from the DHS program website at https://dhsprogram.com/data/dataset/Kenya_Standard-DHS_2014. For the analysis of U5MR, we will use the Births Recode in .dta format. Notice that registration with the DHS program is required in order to access this dataset. The map files for this DHS can be freely downloaded from http://spatialdata.dhsprogram.com/boundaries/. With both the DHS birth record data and the corresponding shapefiles saved in the local directory, we can load them with the packages readstata13 and rgdal. We also automatically generate the spatial adjacency matrix Amat using the function getAmat().
library(readstata13)
filename <- "../Demo/KEBR71DT/KEBR71FL.DTA"
births <- read.dta13(filename, generate.factors = TRUE)
library(rgdal)
mapfilename0 <- "../Demo/shps/sdr_subnational_boundaries.shp"
geo0 <- readOGR(mapfilename0, verbose = FALSE)
mapfilename <- "../Demo/shps/sdr_subnational_boundaries2.shp"
geo <- readOGR(mapfilename, verbose = FALSE)
Amat <- getAmat(geo, geo$REGNAME)

The Amat matrix encodes the spatial adjacency matrix of the 47 counties, with column and row names matching the regions used in the map. This adjacency matrix will be used for the spatial smoothing model. It can also be created by hand if necessary.

### Prepare person-month data

We first demonstrate the method that smooths the direct estimates of subnational-level U5MR. For this analysis, we consider the $$8$$ Admin-1 region groups. In order to calculate the direct estimates of U5MR, we need the full birth history data in a format where every row corresponds to a birth, with columns that contain:

• Indicators corresponding to survey design, e.g., strata (v023), cluster (v001), and household (v002)
• Survey weight (v005)
• Date of interview in century month codes (CMC) format, i.e., the number of the month since the beginning of 1900 (v008)
• Date of child's birth in CMC format (b3)
• Indicator for death of child (b5)
• Age of death of child in months (b7)

Since county labels are usually not in the DHS dataset, we now load the GPS location of each cluster and map them to the corresponding counties.
loc <- readOGR("../Demo/KEGE71FL/KEGE71FL.shp", verbose = FALSE)
loc.dat <- data.frame(cluster = loc$DHSCLUST, long = loc$LONGNUM, lat = loc$LATNUM)
gps <- mapPoints(loc.dat, geo = geo, long = "long", lat = "lat", names = c("REGNAME"))
colnames(gps)[4] <- "region"
gps0 <- mapPoints(loc.dat, geo = geo0, long = "long", lat = "lat", names = c("REGNAME"))
colnames(gps0)[4] <- "province"
gps <- merge(gps, gps0[, c("cluster", "province")])
sum(is.na(gps$region))

## [1] 9

Notice that there are $$9$$ clusters that fall on the county boundaries, which we would need to manually assign to a county based on a best guess. In this example, as an illustration, we simply remove these unassigned clusters.

unknown_cluster <- gps$cluster[which(is.na(gps$region))]
gps <- gps[gps$cluster %in% unknown_cluster == FALSE, ]
births <- births[births$v001 %in% unknown_cluster == FALSE, ]
births <- merge(births, gps[, c("cluster", "region", "province")], by.x = "v001", by.y = "cluster", all.x = TRUE)
births$v024 <- births$region

The birth history data from DHS is already in this form, and the getBirths function defaults to the current recode manual column names (as indicated above). The names of these fields can also be defined explicitly in the function arguments. We reorganize the data into the 'person-month' format with the getBirths function and reorder the columns for better readability.
dat <- getBirths(data = births, strata = c("v023"), year.cut = seq(1985, 2020, by = 1))
dat <- dat[, c("v001", "v002", "v024", "time", "age", "v005", "strata", "died")]
colnames(dat) <- c("clustid", "id", "region", "time", "age", "weights", "strata", "died")
years <- levels(dat$time)
head(dat)

## clustid id region time age weights strata died
## 1 1 6 nairobi 2009 0 5476381 1 0
## 2 1 6 nairobi 2009 1-11 5476381 1 0
## 3 1 6 nairobi 2009 1-11 5476381 1 0
## 4 1 6 nairobi 2009 1-11 5476381 1 0
## 5 1 6 nairobi 2009 1-11 5476381 1 0
## 6 1 6 nairobi 2009 1-11 5476381 1 0

Notice that we also need to specify the time intervals of interest. In this example, we wish to calculate and predict U5MR in 5-year intervals from 1985-1990 to 2015-2019. For U5MR, we will use the discrete survival model to calculate direct estimates for each region and time. This step involves breaking down the age of each death into discrete intervals. The default option assumes a discrete survival model with six discrete hazards (probabilities of dying in a particular interval, given survival to the start of the interval) for each of the age bands: $$[0,1), [1,12), [12,24), [24,36), [36,48)$$, and $$[48,60]$$.

## Model-based Estimates

Assume there are $$N$$ regions and $$T$$ time periods. In the $$i$$-th region, there are $$n_i$$ clusters. We consider the following model. In stratum $$k$$, cluster $$c$$, and time period $$t$$, let $$p_{ktc}$$ be the prevalence of interest. We can aggregate over all clusters within each region to obtain the region-time specific prevalence $p_{it} = \sum_{k}\sum_{c} p_{ktc} q_{ik} \mathbf{1}_{i[c] = i},$ where $$q_{ik}$$ is the proportion of clusters that are in stratum $$k$$ within region $$i$$. Let $$Y_{ktc}$$ and $$n_{ktc}$$ denote the number of deaths and the total number of child-months in stratum $$k$$, cluster $$c$$, and time period $$t$$.
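For intuition, the six discrete hazards described above combine multiplicatively into U5MR. The Python sketch below uses made-up monthly hazard values (not estimates from these data):

```python
# Age bands (months spanned) matching [0,1), [1,12), [12,24), [24,36), [36,48), [48,60],
# with illustrative monthly hazards q_a for each band (hypothetical values)
bands = [1, 11, 12, 12, 12, 12]
hazards = [0.02, 0.004, 0.002, 0.001, 0.001, 0.0005]

surv = 1.0
for n_months, q in zip(bands, hazards):
    surv *= (1 - q) ** n_months   # survive every month within the band
u5mr = 1 - surv
print(round(1000 * u5mr, 1))      # deaths per 1000 live births
```

The product of monthly survival probabilities gives the probability of reaching age five; one minus that product is the U5MR.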
We model the cluster-level stratum-specific prevalence $$p_{ktc}$$ with a hierarchical space-time smoothing model described below.

### Beta-binomial model

We assume the following marginal model, $Y_{ktc} | p_{ktc} \sim \mbox{BetaBinomial}(n_{ktc}, p_{ktc}, d).$ We model the mean probability $$p_{ktc}$$ with a logit link and a linear model that contains strata and age-group fixed effects, and space, time, and space-time random effects, $\mbox{logit}\, p_{ktc} = \log(\mbox{BIAS}_{tc}) + \mu_g + \beta_k + \alpha_t + \gamma_t + \phi_{i[c]} + \delta_{i[c],t},$ where the bias term $$\mbox{BIAS}_{tc}$$ denotes the ratio of the reported prevalence to the 'true' prevalence. The log transformation of the logit-transformed hazards approximately leads to a multiplicative bias correction on the U5MR. In countries with high prevalence of HIV, we may adjust for the proportion of missing women due to HIV prevalence. The random effects are defined similarly to those in the previous smoothing model. We now transform the full birth history data to the person-month format again. In order to fit the binomial model, we need to calculate the number of person-months and the number of deaths for each age group, stratum (urban or rural), cluster, and time period. Notice that we do not need to impute $$0$$ observations for future time periods. We also rename the columns to prepare the input for the smoothing model. In order to correctly adjust for bias due to HIV, we keep the information on province in the column 'province' as well.
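To build intuition for the overdispersion parameter $$d$$, one can simulate from the beta-binomial. The Python sketch below assumes the parameterisation $$a = p(1-d)/d$$, $$b = (1-p)(1-d)/d$$, which is one common convention; the exact parameterisation used by fitINLA2 should be checked in the package documentation:

```python
import random

def rbetabinom(n, p, d, rng):
    """Draw from BetaBinomial(n, p, d): mean n*p, extra-binomial variation
    governed by d in (0, 1). Assumed parameterisation, not SUMMER's internals."""
    a = p * (1 - d) / d
    b = (1 - p) * (1 - d) / d
    q = rng.betavariate(a, b)                       # cluster-level probability
    return sum(rng.random() < q for _ in range(n))  # binomial count given q

rng = random.Random(42)
draws = [rbetabinom(100, 0.05, 0.01, rng) for _ in range(2000)]
```

The sample mean of the draws sits near n·p = 5, while the spread exceeds that of a plain Binomial(100, 0.05) because the cluster-level probability itself varies.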
dat <- getBirths(data = births, strata = c("v023"), year.cut = seq(1985, 2020, by = 1))
dat <- dat[, c("v001", "time", "age", "died", "v025", "v024")]
colnames(dat) <- c("cluster", "years", "age", "Y", "strata", "province")
outcome <- getCounts(data = dat, variables = "Y", by = c("age", "strata", "cluster", "years"), ignore = list(years = c(2015:2019)))
head(outcome)

## age strata cluster years Y total
## 1 0 urban 1 1985 0 0
## 2 1-11 urban 1 1985 0 0
## 3 12-23 urban 1 1985 0 0
## 4 24-35 urban 1 1985 0 0
## 5 36-47 urban 1 1985 0 0
## 6 48-59 urban 1 1985 0 0

We then merge the county information into this dataset. In order to fit the model, the data file should contain the following columns: cluster ID ('cluster'), observation period ('years'), observation location ('region'), strata level ('strata'), age group corresponding to the hazards ('age'), total number of person-months ('total'), and the number of deaths ('Y').

outcome <- merge(outcome, gps[, c("cluster", "region", "province")], by = "cluster", all.x = TRUE)

## Mapping prevalence using cluster-level model

Before discussing the hazard modelling of U5MR, we first demonstrate how to use the fitINLA2 function to map a generic prevalence of interest. To illustrate this, we take only the deaths within the first month of life to calculate neonatal mortality rates. For binomial prevalence mapping of a generic outcome, we need to set the age.group and age.n arguments to NULL. These two arguments will be specified in the case of mapping U5MR, where we may allow hazards for different age groups to have distinct model components.
outcome.1month <- subset(outcome, age == 0)
fit2 <- fitINLA2(data = outcome.1month, family = "betabinomial", Amat = Amat, geo = geo, year_label = 1985:2019, type.st = 1, verbose = FALSE, age.groups = NULL)
out2 <- getSmoothed(inla_mod = fit2, year_range = c(1985, 2019), year_label = years, Amat = Amat, nsim = 1000, weight.strata = KenData$UrbanProp)

The posterior medians of the neonatal mortality rates in each county can be plotted using the plot method, the mapPlot function, and the hatchPlot function, as shown in other vignettes. For example,

plot(out2$overall, year_label = years, year_med = 1985:2019, is.subnational = TRUE, plot.CI = TRUE, per1000 = TRUE) + ggtitle("Subnational Binomial Model: Neonatal Mortality Rates") + facet_wrap(~region, ncol = 7) + theme(legend.position = "none") + scale_color_manual(values = rep("gray20", 47))

## Space-time smoothing of U5MR using the cluster-level models

Due to the limit on file size on CRAN, this part of the analysis has been moved to the website http://faculty.washington.edu/jonno/space-station.html (Lecture 6, Case Study - Kenya 2014 DHS, in the Analysis file, at the bottom of the page). There we also use data from the Kenya 2014 DHS survey to illustrate spatial-temporal smoothing of U5MR at yearly scales for the 47 counties in Kenya.
# The alternating group is a normal subgroup of the symmetric group

For an exercise, I need to prove that the alternating group $A_n$ is a normal subgroup of the symmetric group $S_n$. As a clue, they say we can use the group homomorphism $\operatorname{sgn} : S_n \to \{-1,1\}$. I really don't see how I can use this... can somebody help?

• Does $\ker(\operatorname{sgn}) = A_n$ help you? – Moritz Mar 9 '15 at 19:06
• Yeah, it would, but why is $\ker(\operatorname{sgn}) = A_n$? – Robbe Motmans Mar 9 '15 at 19:10
• Because every element of $A_n$ is mapped to $1$ and every other element in $S_n \setminus A_n$ to $-1$. Note that $1$ is the identity of the group $(\{\pm 1\}, \cdot)$. – Moritz Mar 9 '15 at 19:12
• OK, thanks a lot guys, I get it now, I think. – Robbe Motmans Mar 9 '15 at 19:13
• Final hint: $\ker(\psi)$ for any group homomorphism $\psi: G \to H$ between groups $G$ and $H$ is always a normal subgroup of $G$. – Moritz Mar 9 '15 at 19:14

$1$. Note that the kernel of the sign homomorphism is precisely $A_n$, and the kernel of a homomorphism is a normal subgroup.

$2$. Recall that every subgroup of index 2 is normal, and note that $[S_n : A_n] = 2$.

• Hint: the kernel of the given map is $A_n$ – Arpit Kansal Mar 9 '15 at 19:10

Another method, in addition to Arpit's answer: conjugation preserves cycle type, and hence parity, so for $a \in A_n$ and any $s \in S_n$ we have $s a s^{-1} \in A_n$.
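Assembling the hints into a complete argument (standard material, written out for convenience):

```latex
% A_n = ker(sgn) is a normal subgroup of S_n.
\begin{proof}
The map $\operatorname{sgn}\colon S_n \to (\{\pm 1\},\cdot)$ is a group
homomorphism, and by definition
$A_n = \{\sigma \in S_n : \operatorname{sgn}(\sigma) = 1\}
     = \ker(\operatorname{sgn})$.
Kernels are always normal: for any homomorphism $\psi\colon G \to H$,
any $g \in G$ and $k \in \ker(\psi)$,
\[
  \psi\bigl(g k g^{-1}\bigr)
  = \psi(g)\,\psi(k)\,\psi(g)^{-1}
  = \psi(g)\,e_H\,\psi(g)^{-1}
  = e_H ,
\]
so $g k g^{-1} \in \ker(\psi)$. In particular $A_n \trianglelefteq S_n$.
\end{proof}
```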
Enol content is maximum in:

Updated On: 27-06-2022

Text Solution

(1) CH₃–CO–CH₂–CO–CH₃
(2) CH₃–CO–CH₂–CO–NH₂
(3) CH₃–CO–CH₂–CO–OCH₃
(4) CH₃–CO–CH₃

Transcript

The question asks in which compound the enol content is maximum. Enol content is governed by the acidity of the proton on the α-carbon, that is, the sp³ carbon adjacent to the carbonyl group. The first step of enolization is removal of this α-hydrogen, so the more stable the resulting carbanion (enolate), the faster the enol forms and the higher the enol content.

In option (1) there are two α-carbons; removing a proton from the central CH₂, which sits between the two carbonyl groups, gives a negative charge that is delocalized by resonance onto both carbonyl oxygens. Stabilization by two carbonyls makes this proton the most acidic.

In option (2) the central CH₂ is also flanked by two carbonyl groups, but the lone pair on the NH₂ nitrogen is itself in resonance with the amide carbonyl. This hinders delocalization of the carbanion into that carbonyl, so the negative charge is less stabilized than in (1).

In option (3) the same argument applies: the lone pair of the ester oxygen is in conjugation with its carbonyl, again reducing the stabilization available to the carbanion.

In option (4), acetone, any α-proton that is removed gives a carbanion stabilized by only one carbonyl group.

So in (1) the charge is delocalized over two fully available carbonyls; in (2) and (3) the delocalization is partly blocked by the resonance donation of the NH₂ and OCH₃ groups; and in (4) only one carbonyl is present. The α-proton of (1) is therefore the most acidic, and on deprotonation and reprotonation at oxygen the stable enol CH₃–C(OH)=CH–CO–CH₃ is formed. Hence the enol content is maximum in option (1).
# Correct algorithm for detecting outbreaks in conflict data

I'm currently working on a project researching whether Google Trends can predict conflict events in intrastate conflicts. Thus I have two different datasets: the weekly Google Trends search volume and the number of conflict events per week. My idea was to do the following to test my hypothesis:

1. Use an outbreak detection algorithm from the R package surveillance. This will give me binary values (outbreak/no outbreak) for each week in my datasets. The idea was that those algorithms would be able to correctly and automatically identify "peaks" in my data.

2. Evaluate the binary classifiers (so if there was an "outbreak" in the Google Trends data for week t and there is an "outbreak" in the conflict data in week t+1, I'd have a true positive, etc.).

I am, however, not totally sure which algorithm provided in the package would be suitable for my kind of data (which I assume does not follow seasonal trends as strongly as disease data), since my background is not in epidemiology and my knowledge of statistics is quite limited. I would thus be thankful for hints or advice!

• Interesting project. I'd push back a bit on your assumption that there is no seasonality. Plenty of econometrics uses weather as an instrument for conflict, exploiting the fact that people fight less when it is raining, and/or given the agricultural calendar. – generic_user Feb 23 '18 at 16:04
• Also, what sort of trends does this package monitor for you? All of them? I'm sure that searches for bandages and CPR are more relevant than pr0n and lolcats. – generic_user Feb 23 '18 at 16:06
• @generic_user I'm not sure whether I understood your comment correctly; the package itself does not monitor trends - Google Trends is a tool that provides you with the (relative) amount of searches for certain terms or topics over time. I'll also consider your advice about seasonality, thank you!
– SamVimes Feb 23 '18 at 16:26
• What I'm asking is what variables you're using to predict your time series, besides the time series itself. Or is this a purely autoregressive job? – generic_user Feb 23 '18 at 16:31
• Here is a fairly canonical paper using weather to relate economic shocks to civil conflict: emiguel.econ.berkeley.edu/research/… – generic_user Feb 23 '18 at 16:32

I took your 260 weekly values and introduced them to AUTOBOX in an automatic manner. To develop a model one needs to condition the equation on possible anomalies, level shifts, weekly indicators and of course possible ARIMA memory. The ACF of the original series and the ACF and residual plot of the fitted model (shown as images in the original answer) both suggest that a useful model may have been developed, as do the actual/fit/forecast graphs, the fitted equation, and its statistics. The forecast going forward requires predictions of the Google series. In summary, the Google series is statistically significant, and certain weeks of the year are also suggested to be important, reflecting an omitted causal variable. Ordinary correlation has no play with data like this, because there are outside, identifiable factors which need to be incorporated in order to correctly "see" the relationship (if any!).

• Thank you for your work! I'm afraid this is not what I am looking for; I'm not able to use Autobox, and due to my limited statistical knowledge I'd like to keep my model a bit simpler and try what I have described in my post. – SamVimes Feb 24 '18 at 11:44
• Simple is often "simply inappropriate" due to the nature/complications of the data being characterized/modeled. Models should be as simple as possible but not too simple. Glad to have been of help. – IrishStat Feb 24 '18 at 11:52
• Yes, I fully agree, but sadly I also have to consider practical limitations. Thank you nevertheless! – SamVimes Feb 24 '18 at 12:23
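As a minimal illustration of the two-step procedure from the question: the data, the threshold rule, and the week counts below are my own toy assumptions, and the crude "mean plus z standard deviations of a trailing window" rule is only a stand-in for the properly calibrated detectors in the surveillance package (e.g. earsC or farringtonFlexible). The sketch flags weeks, then cross-tabulates Google flags at week t against conflict flags at week t+1.

```python
# Toy weekly series (assumed data, for illustration only).
google = [3, 4, 3, 5, 4, 20, 4, 3, 18, 4, 3, 4]
conflict = [1, 2, 1, 1, 2, 2, 9, 1, 1, 8, 2, 1]

def flag_outbreaks(series, window=4, z=3.0):
    """Flag week t when its count exceeds mean + z*sd of the preceding
    `window` weeks (a crude EARS-C1-style rule; real detectors are more
    careful about trend, seasonality, and past outbreaks in the baseline)."""
    flags = [False] * len(series)
    for t in range(window, len(series)):
        base = series[t - window:t]
        mean = sum(base) / window
        sd = (sum((b - mean) ** 2 for b in base) / window) ** 0.5
        flags[t] = series[t] > mean + z * max(sd, 1e-9)
    return flags

g_flags = flag_outbreaks(google)
c_flags = flag_outbreaks(conflict)

# Step 2: compare the Google flag at week t with the conflict flag at t+1.
pairs = list(zip(g_flags[:-1], c_flags[1:]))
tp = sum(g and c for g, c in pairs)
fp = sum(g and not c for g, c in pairs)
fn = sum(c and not g for g, c in pairs)
tn = sum(not g and not c for g, c in pairs)
print(tp, fp, fn, tn)
```

From the confusion counts one can then compute sensitivity and false-alarm rates; the same loop generalizes to other lead times by changing the offset in the zip.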
# A note on concentration inequality for vector-valued martingales with weak exponential-type tails

We present novel martingale concentration inequalities for martingale differences with finite Orlicz-ψ_α norms. Such martingale differences with weak exponential-type tails appear in many statistical applications and can be heavier-tailed than sub-exponential distributions. In the one-dimensional case, we prove in general that for a sequence of scalar-valued supermartingale differences, the tail bound depends solely on the sum of squared Orlicz-ψ_α norms instead of the maximal Orlicz-ψ_α norm, generalizing the results of Lesigne & Volný (2001) and Fan et al. (2012). In the multidimensional case, using a dimension reduction lemma proposed by Kallenberg & Sztencel (1991), we show that essentially the same concentration tail bound holds for vector-valued martingale difference sequences.
## 1 Introduction

This note concerns the following problem: let $(u_i)_{i\ge 1}$ be a vector-valued martingale difference sequence taking values in the $d$-dimensional Euclidean space $\mathbb{R}^d$. Assume that $(u_i)$ has the following weak exponential-type tail condition: for some $\alpha>0$ and all $i$ we have, for some scalar $K_i>0$,

$$\mathbb{E}\exp\Big(\Big(\frac{\|u_i\|}{K_i}\Big)^{\alpha}\Big)\le 2,\qquad(1.1)$$

and hence by Markov's inequality their tails satisfy, for each $z\ge 0$,

$$\mathbb{P}\Big(\frac{\|u_i\|}{K_i}\ge z\Big)\le \exp(-z^{\alpha})\,\mathbb{E}\exp\Big(\Big(\frac{\|u_i\|}{K_i}\Big)^{\alpha}\Big)\le 2\exp(-z^{\alpha}).$$

What can we then conclude about the tail probability of the partial sums? Note that under condition (1.1) moment generating functions are in general not available, hence the classical analysis using moment generating functions does not go through and new analytical tools are in demand.

Our result makes several contributions upon the previous works. First, we conclude that in the one-dimensional case, where the $u_i$ are scalar, a one-sided maximal inequality holds: roughly,

$$\mathbb{P}\Big(\max_{1\le n\le N}\sum_{i=1}^{n}u_i\ge z\Big)\le L_{\alpha}\Big(\frac{\sum_{i=1}^{N}K_i^2}{z^2}\Big)\exp\Big\{-\Big(\frac{C\,z^2}{\sum_{i=1}^{N}K_i^2}\Big)^{\frac{\alpha}{\alpha+2}}\Big\}\qquad(1.2)$$

where the factor $L_{\alpha}(\cdot)$ depends solely on $\alpha$ for any fixed argument and grows linearly in its argument, and $C$ is a positive numerical constant. In the above and the following, we allow the numerical constant $C$ to change from paragraph to paragraph.
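As a quick numerical illustration of condition (1.1), and not part of the paper itself: for a standard normal variable $X$ and $\alpha=2$, the smallest constant $K$ with $\mathbb{E}\exp((|X|/K)^2)\le 2$ has the closed form $\sqrt{8/3}\approx 1.633$ (since $\mathbb{E}\exp(X^2/K^2)=(1-2/K^2)^{-1/2}$ for $K^2>2$). A Monte-Carlo bisection recovers it approximately:

```python
import math
import random

# Estimate the smallest K with E exp((|X|/K)^alpha) <= 2, as in (1.1),
# for X ~ N(0,1) and alpha = 2.  Closed form: sqrt(8/3) ~ 1.633.
random.seed(0)
alpha = 2.0
xs = [random.gauss(0.0, 1.0) for _ in range(50_000)]

def orlicz_condition(K):
    # Sample average of exp((|x|/K)^alpha); decreasing in K.
    return sum(math.exp((abs(x) / K) ** alpha) for x in xs) / len(xs)

lo, hi = 1.2, 4.0  # bracket: condition(1.2) >> 2 and condition(4.0) < 2
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if orlicz_condition(mid) > 2.0:
        lo = mid
    else:
        hi = mid
K_est = 0.5 * (lo + hi)
print(round(K_est, 3))  # close to sqrt(8/3) ~ 1.633
```

The Monte-Carlo estimate is noisy (the integrand is heavy-tailed near the root), but it lands within a few percent of the closed-form value.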
This generalizes the bound of Lesigne & Volný (2001) and Fan et al. (2012), where the two groups of authors consider the scalar case, for independent sequences and for martingale difference sequences separately. See also the more recent paper Fan et al. (2017) for similar concentration under a slightly weaker condition. In fact, we also know that the inequality (1.2) is optimal in the sense that it cannot be further improved for a class of martingale difference sequences satisfying the exponential moment condition (1.1).

Secondly, for the general dimension case, applying (1.2) together with a dimension-reduction argument for vector-valued martingales (Kallenberg & Sztencel, 1991; Hayes, 2005; Lee et al., 2016) allows us to conclude a one-sided bound on the Euclidean norm of the partial sums: under (1.1) we have

$$\mathbb{P}\Big(\max_{1\le n\le N}\Big\|\sum_{i=1}^{n}u_i\Big\|\ge z\Big)\le L'_{\alpha}\Big(\frac{\sum_{i=1}^{N}K_i^2}{z^2}\Big)\exp\Big\{-\Big(\frac{C\,z^2}{\sum_{i=1}^{N}K_i^2}\Big)^{\frac{\alpha}{\alpha+2}}\Big\}\qquad(1.3)$$

where, analogously, the factor $L'_{\alpha}(\cdot)$ depends solely on $\alpha$ for any fixed argument and grows linearly in its argument, and $C$ is a positive numerical constant. To the best of our knowledge, this provides a first concentration result for vector-valued martingales with unbounded martingale differences under the weak exponential-type condition (1.1).

The concentration results (1.2) and (1.3) potentially see many applications in probability and statistics, including the rate of convergence of martingales, the consistency of nonparametric regression estimation with martingale-difference errors (see Laib (1999)), as well as online stochastic gradient algorithms for parameter estimation in linear models and PCA (Li et al., 2018).

## 2 Orlicz space and Orlicz norm

In this section, we briefly revisit the properties of the Orlicz space and its norm that are most relevant. Readers interested in an exposition of Orlicz spaces from a Banach space point of view are referred to Ledoux & Talagrand (2013). Let $\mathbb{R}_+$ be the set of nonnegative real numbers.
Consider the Orlicz space of $\mathbb{R}^d$-valued random vectors $X$ living on the underlying probability space. Let $\psi\colon\mathbb{R}_+\to\mathbb{R}_+$ be a nondecreasing convex function with $\psi(0)=0$, and equip the Orlicz space with the norm

$$\|X\|_{\psi}:=\inf\Big\{K>0:\ \mathbb{E}\,\psi\Big(\frac{\|X\|}{K}\Big)\le 1\Big\}.$$

One calls $\|\cdot\|_{\psi}$ the Orlicz-$\psi$ norm. In particular, a random vector $X$ has an Orlicz-$\psi$ norm defined as the Orlicz-$\psi$ norm of $\|X\|$ as a scalar-valued random variable. In this note, we are interested in the exponential-tailed distributions that correspond to the family of functions $\psi_{\alpha}(x)=\exp(x^{\alpha})-1$, in which case the corresponding Orlicz space is the collection of random variables with finite exponential moments. (Rigorously speaking, $\psi_{\alpha}$ is not convex in a neighborhood of $0$ when $\alpha<1$. In this case, one can redefine the function to be linear near the origin so that the modified function satisfies the convexity condition. We choose not to adopt this definition of $\psi_{\alpha}$ simply for clarity of presentation.)

## 3 One dimensional result

We state our first main result, which concludes the right-tailed bound (1.2) under a slightly more general condition, namely that the differences form a supermartingale difference sequence.

###### Theorem 1.

Let $\alpha>0$ be given. Assume that $(u_i)_{i=1}^{N}$ is a sequence of supermartingale differences with respect to a filtration $(\mathcal{F}_i)$, i.e. $\mathbb{E}(u_i\mid\mathcal{F}_{i-1})\le 0$, and that $\|u_i\|_{\psi_{\alpha}}<\infty$ for each $i$. Then for arbitrary $z>0$,

$$\mathbb{P}\Big(\max_{1\le n\le N}\sum_{i=1}^{n}u_i\ge z\Big)\le\Big[2+\Big(\frac{3}{\alpha}\Big)^{\frac{2}{\alpha}}\frac{32\sum_{i=1}^{N}\|u_i\|_{\psi_{\alpha}}^{2}}{z^2}\Big]\exp\Big\{-\Big(\frac{z^2}{32\sum_{i=1}^{N}\|u_i\|_{\psi_{\alpha}}^{2}}\Big)^{\frac{\alpha}{\alpha+2}}\Big\}\qquad(3.1)$$

### Remark

We make several remarks on Theorem 1, as follows.

1. By replacing the constants in (3.1) with larger values, one may rediscover essentially Theorem 2.1 in Fan et al. (2012), which includes bound (1.1) of Lesigne & Volný (2001) as a special case. (The work Fan et al. (2012) assumes a slightly more general moment condition. Nevertheless, our result does not lose generality, since the extra constant can be absorbed into the Orlicz-$\psi_{\alpha}$ norm as a polylogarithmic factor.) In summary, Theorem 2.1 of Fan et al. (2012) would provide a bound that depends on the maximum of the norms $\|u_i\|_{\psi_{\alpha}}$, while our new bound sharpens such bound of Fan et al.
(2012) and depends only on the Orlicz-$\psi_{\alpha}$ norms of the martingale differences through their squared sum. It turns out that the sharpened bound is more desirable for obtaining useful upper bounds in many statistical applications.

2. Theorem 2.1 in Fan et al. (2012) is optimal in the sense that there is a counterexample that has the right-hand side of (3.1) as a lower bound (up to a constant factor in the exponent), which forbids the existence of a sharper bound for the martingale difference sequence class. Since our result generalizes their Theorem 2.1, one may apply the same counterexample and conclude the optimality of our bound. See more in the next paragraph.

### Optimality of our result

To claim optimality, we note that (3.1) implies, for the special case $z=N$ and each $N$, a bound (3.2) that is of order $\exp\{-c\,N^{\frac{\alpha}{\alpha+2}}\}$ as $N\to\infty$ for some $c>0$. In the meantime, Fan et al. (2012) generalize the counterexample in Lesigne & Volný (2001): in our terminology of the $\psi_{\alpha}$-norm, Theorem 2.1 of Fan et al. (2012) provides, for each $\alpha$, an ergodic sequence of martingale differences $(u_i^{*})$ and a sequence of positives such that for all $N$ sufficiently large,

$$\mathbb{P}\Big(\max_{1\le n\le N}\sum_{i=1}^{n}u_i^{*}\ge N\Big)\ge \exp\big\{-3N^{\frac{\alpha}{\alpha+2}}\big\}$$

Comparing the last display with (3.2), we conclude the optimality of our result.

### Comparison with conditional weak exponential-type conditions

If we pose the additional assumption that the $u_i$'s satisfy (1.1) in the conditional sense, the martingale concentration inequality can be further improved. Taking the example where $\alpha=2$ and the $u_i$ are scalar, if one poses the slightly stronger condition

$$\mathbb{E}\Big[\exp\Big(\Big|\frac{u_i}{K_i}\Big|^{2}\Big)\,\Big|\,\mathcal{F}_{i-1}\Big]\le 2,\qquad(3.3)$$

i.e. the martingale differences are scalar-valued and conditionally sub-Gaussian random variables, then one may conclude from Hoeffding's concentration inequality (Wainwright, 2015)

$$\mathbb{P}\Big(\Big|\sum_{i=1}^{N}u_i\Big|\ge z\Big)\le 2\exp\Big(-\frac{C\,z^2}{\sum_{i=1}^{N}K_i^2}\Big).\qquad(3.4)$$

A similar bound can be derived for sub-exponential variables. Observe that the power of the quadratic term in the exponent of (3.4) is $1$; instead, our bound in (1.2) has an exponent of $\frac{\alpha}{\alpha+2}<1$ and is hence worse.
Fortunately, to obtain a prescribed error probability, both inequalities give a cut-off point for $z$ up to a different polylogarithmic factor, and the two cut-off points are equivalent if these factors are ignored.

## 4 Proof of Theorem 1

To prove our main result for the one-dimensional case, Theorem 1, we will use a maximal version of the classical Azuma-Hoeffding inequality proposed by Laib (1999) for bounded martingale differences, and then apply an argument of Lesigne & Volný (2001) and Fan et al. (2012) to truncate the tail and analyze the bounded and unbounded pieces separately.

1. First of all, for the sake of simplicity and with no loss of generality, throughout the following proof of Theorem 1 we pose the extra condition

$$\sum_{i=1}^{N}\|u_i\|_{\psi_{\alpha}}^{2}=1.\qquad(4.1)$$

In other words, under the additional condition (4.1), proving (3.1) reduces to showing

$$\mathbb{P}\Big(\max_{1\le n\le N}\sum_{i=1}^{n}u_i\ge z\Big)\le\Big[2+\Big(\frac{3}{\alpha}\Big)^{\frac{2}{\alpha}}\frac{32}{z^2}\Big]\exp\Big\{-\Big(\frac{z^2}{32}\Big)^{\frac{\alpha}{\alpha+2}}\Big\}.\qquad(4.2)$$

This is clear from the following rescaling argument: putting $u_i/(\sum_{i=1}^{N}\|u_i\|_{\psi_{\alpha}}^{2})^{1/2}$ in place of $u_i$ and $z/(\sum_{i=1}^{N}\|u_i\|_{\psi_{\alpha}}^{2})^{1/2}$ in place of $z$, the left-hand side of (3.1) becomes

$$\mathbb{P}\Bigg(\max_{1\le n\le N}\sum_{i=1}^{n}\frac{u_i}{\big(\sum_{i=1}^{N}\|u_i\|_{\psi_{\alpha}}^{2}\big)^{1/2}}\ge \frac{z}{\big(\sum_{i=1}^{N}\|u_i\|_{\psi_{\alpha}}^{2}\big)^{1/2}}\Bigg)$$

which, by (4.2), is upper-bounded by the right-hand side of (3.1), proving (3.1).

2. We apply a truncating argument used in Lesigne & Volný (2001) and later in Fan et al. (2012). Let $M>0$ be arbitrary, and define

$$u_i' = u_i\mathbf{1}\{|u_i|\le M\}-\mathbb{E}\big(u_i\mathbf{1}\{|u_i|\le M\}\mid\mathcal{F}_{i-1}\big),\qquad(4.3)$$

$$u_i'' = u_i\mathbf{1}\{|u_i|> M\}-\mathbb{E}\big(u_i\mathbf{1}\{|u_i|> M\}\mid\mathcal{F}_{i-1}\big),\qquad(4.4)$$

$$T_n'=\sum_{i=1}^{n}u_i',\qquad T_n''=\sum_{i=1}^{n}u_i'',\qquad T_n'''=\sum_{i=1}^{n}\mathbb{E}(u_i\mid\mathcal{F}_{i-1}).$$

Since $\mathbb{E}(u_i\mid\mathcal{F}_{i-1})$ is $\mathcal{F}_{i-1}$-measurable, $(u_i')$ and $(u_i'')$ are two martingale difference sequences with respect to $(\mathcal{F}_i)$, and letting $T_n$ be defined as

$$T_n=\sum_{i=1}^{n}u_i,\quad\text{we hence have}\quad T_n=T_n'+T_n''+T_n'''.\qquad(4.5)$$

Since the $u_i$ are supermartingale differences, $T_n'''$ is predictable with $T_n'''\le 0$, and hence for any $z>0$,

$$\mathbb{P}\Big(\max_{1\le n\le N}T_n\ge 2z\Big)\le \mathbb{P}\Big(\max_{1\le n\le N}(T_n'+T_n''')\ge z\Big)+\mathbb{P}\Big(\max_{1\le n\le N}T_n''\ge z\Big)\le \mathbb{P}\Big(\max_{1\le n\le N}T_n'\ge z\Big)+\mathbb{P}\Big(\max_{1\le n\le N}T_n''\ge z\Big)\qquad(4.6)$$

In the following, we analyze the tail bounds for $T_n'$ and $T_n''$ separately (Lesigne & Volný, 2001; Fan et al., 2012).

3.
To obtain the first bound, we recap Laib's inequality as follows:

###### Lemma 1.

(Laib, 1999) Let $(w_i)$ be a real-valued martingale difference sequence with respect to some filtration $(\mathcal{F}_i)$, i.e. $\mathbb{E}(w_i\mid\mathcal{F}_{i-1})=0$, with finite essential norms $\|w_i\|_{\infty}$. Then for arbitrary $z>0$,

$$\mathbb{P}\Big(\max_{n\le N}\sum_{i=1}^{n}w_i\ge z\Big)\le \exp\Big\{-\frac{z^2}{2\sum_{i=1}^{N}\|w_i\|_{\infty}^{2}}\Big\}.\qquad(4.7)$$

(4.7) generalizes the folklore Azuma-Hoeffding inequality, since the latter can be concluded from

$$\max_{n\le N}\sum_{i=1}^{n}w_i\ge \sum_{i=1}^{N}w_i.$$

The proof of Lemma 1 is given in Laib (1999). Recall our extra condition (4.1); then, from the definition of $u_i'$ in (4.3), the desired bound follows immediately from Laib's inequality in Lemma 1:

$$\mathbb{P}\Big(\max_{1\le n\le N}T_n'\ge z\Big)=\mathbb{P}\Big(\max_{1\le n\le N}\sum_{i=1}^{n}u_i'\ge z\Big)\le\exp\Big\{-\frac{z^2}{8M^2}\Big\}\qquad(4.8)$$

To obtain the tail bound of $T_n''$, we only need to show

$$\mathbb{E}(u_i'')^{2}\le (2M^2+4B^2)\exp\{-M^{\alpha}\},\qquad(4.9)$$

where

$$B:=\Big(\frac{3}{\alpha}\Big)^{\frac{1}{\alpha}},\qquad(4.10)$$

from which Doob's martingale inequality (Durrett, 2010, §5) implies immediately that

$$\mathbb{P}\Big(\max_{1\le n\le N}T_n''\ge z\Big)\le\frac{1}{z^2}\sum_{i=1}^{N}\mathbb{E}(u_i'')^{2}\le\frac{2M^2+4B^2}{z^2}\exp\{-M^{\alpha}\}.\qquad(4.11)$$

To prove (4.9), first recall from the definition of $u_i''$ in (4.4) that

$$u_i''=u_i\mathbf{1}\{|u_i|>M\}-\mathbb{E}\big(u_i\mathbf{1}\{|u_i|>M\}\mid\mathcal{F}_{i-1}\big).$$

Recall from the property of conditional expectation (Durrett, 2010) that for any random variable $W$ and a $\sigma$-algebra $\mathcal{G}$,

$$\mathbb{E}\big[W-\mathbb{E}(W\mid\mathcal{G})\big]^{2}=\mathbb{E}W^{2}-\mathbb{E}\big[\mathbb{E}(W\mid\mathcal{G})\big]^{2}\le \mathbb{E}W^{2}=\int_{0}^{\infty}2y\,\mathbb{P}(|W|>y)\,dy,$$

where the last equality is the second-moment formula for a nonnegative random variable (Durrett, 2010). Plugging in $W=u_i\mathbf{1}\{|u_i|>M\}$ and $\mathcal{G}=\mathcal{F}_{i-1}$, we have

$$\mathbb{E}(u_i'')^{2}\le\int_{0}^{\infty}2y\,\mathbb{P}\big(|u_i|\mathbf{1}\{|u_i|>M\}>y\big)\,dy=M^{2}\,\mathbb{P}(|u_i|>M)+\int_{M}^{\infty}2t\,\mathbb{P}(|u_i|>t)\,dt\le 2M^{2}\exp\{-M^{\alpha}\}+4\int_{M}^{\infty}t\exp\{-t^{\alpha}\}\,dt,\qquad(4.12)$$

where the last inequality is due to Markov's inequality: for all $z\ge 0$,

$$\mathbb{P}(|u_i|\ge z)\le \exp\{-z^{\alpha}\}\,\mathbb{E}\exp\{|u_i|^{\alpha}\}\le 2\exp\{-z^{\alpha}\}.\qquad(4.13)$$

By elementary calculus, the function $t\mapsto t^{3}\exp\{-t^{\alpha}\}$ is increasing on $[0,B]$ and decreasing on $[B,\infty)$, where $B$ was defined in (4.10) (Fan et al., 2012). If $M\ge B$ we have

$$\int_{M}^{\infty}t\exp\{-t^{\alpha}\}\,dt=\int_{M}^{\infty}t^{-2}\cdot t^{3}\exp\{-t^{\alpha}\}\,dt\le\int_{M}^{\infty}t^{-2}\,dt\cdot M^{3}\exp\{-M^{\alpha}\}=M^{-1}\cdot M^{3}\exp\{-M^{\alpha}\}=M^{2}\exp\{-M^{\alpha}\}.$$
(4.14)

If $M<B$, we obtain by setting $M=B$ in the above

$$\int_{M}^{\infty}t\exp\{-t^{\alpha}\}\,dt\le B^{2}\exp\{-M^{\alpha}\}.\qquad(4.15)$$

Combining (4.12) with the two displays (4.14) and (4.15), we obtain

$$\mathbb{E}(u_i'')^{2}\le 2M^{2}\exp\{-M^{\alpha}\}+4\int_{M}^{\infty}t\exp\{-t^{\alpha}\}\,dt\le(2M^{2}+4B^{2})\exp\{-M^{\alpha}\},$$

completing the proof of (4.9) and hence of (4.11).

4. Putting the pieces together: combining (4.6), (4.8) and (4.11), we obtain a bound (4.16) holding for arbitrary $M>0$. We choose $M$ by making the two exponents equal:

$$M=\Big(\frac{z^2}{8}\Big)^{\frac{1}{\alpha+2}}\quad\text{so that}\quad \frac{z^2}{8M^2}=M^{\alpha}=\Big(\frac{z^2}{8}\Big)^{\frac{\alpha}{\alpha+2}}.$$

Plugging this back into (4.16), we obtain

$$\mathbb{P}\Big(\max_{1\le n\le N}T_n\ge 2z\Big)\le\exp\Big\{-\Big(\frac{z^2}{8}\Big)^{\frac{\alpha}{\alpha+2}}\Big\}+\frac{2M^2+4B^2}{z^2}\exp\Big\{-\Big(\frac{z^2}{8}\Big)^{\frac{\alpha}{\alpha+2}}\Big\}\le\Big[1+\Big(\frac{1}{8}\Big)^{\frac{2}{\alpha+2}}\frac{2}{z^{\frac{2\alpha}{\alpha+2}}}+\Big(\frac{3}{\alpha}\Big)^{\frac{2}{\alpha}}\frac{4}{z^2}\Big]\exp\Big\{-\Big(\frac{z^2}{8}\Big)^{\frac{\alpha}{\alpha+2}}\Big\}\qquad(4.17)$$

where we plugged in the expression for $B$ from (4.10). The square-bracket prefactor in the last line of (4.17) can be further bounded by

$$2+\Big(\frac{3}{\alpha}\Big)^{\frac{2}{\alpha}}\frac{8}{z^2},$$

using a few elementary algebraic inequalities, including a variant of Jensen's inequality. Thus, (4.2) is concluded by noticing the relation (4.5) and replacing $2z$ by $z$, which proves Theorem 1 via the rescaling argument in step 1 of our proof.

## 5 General dimensions result

In many applications we are often more interested in a concentration tail inequality for vector-valued martingales. To proceed, we need a so-called dimension reduction lemma for Hilbert spaces, inspired by its continuous-time version proved in Kallenberg & Sztencel (1991); it shows that it is sufficient to work in dimension two. Writing it in terms of martingale differences, we have:

###### Lemma 2 (Dimension reduction lemma for $\mathbb{R}^d$ or Hilbert space).

Let $(u_n)$ be an $\mathbb{R}^d$-valued martingale difference sequence with respect to a filtration $(\mathcal{F}_n)$, i.e. $\mathbb{E}(u_n\mid\mathcal{F}_{n-1})=0$ for each $n$. Then there exists an $\mathbb{R}^2$-valued martingale difference sequence $(u_n')$ with respect to the same filtration so that for each $n$,

$$\Big\|\sum_{i=1}^{n}u_i\Big\|=\Big\|\sum_{i=1}^{n}u_i'\Big\|\quad\text{and}\quad\|u_n\|=\|u_n'\|.\qquad(5.1)$$

For a proof of Lemma 2, see Lemma 2.3 of Lee et al.
(2016), which proves the lemma on a generic Hilbert space.

###### Theorem 2.

Let $\alpha>0$ be given. Assume that $(u_i)_{i=1}^{N}$ is a sequence of $\mathbb{R}^d$-valued martingale differences with respect to a filtration $(\mathcal{F}_i)$, i.e. $\mathbb{E}(u_i\mid\mathcal{F}_{i-1})=0$, and that $\|u_i\|_{\psi_{\alpha}}<\infty$ for each $i$. Then for arbitrary $z>0$,

$$\mathbb{P}\Big(\max_{n\le N}\Big\|\sum_{i=1}^{n}u_i\Big\|\ge z\Big)\le 4\Big[2+\Big(\frac{3}{\alpha}\Big)^{\frac{2}{\alpha}}\frac{64\sum_{i=1}^{N}\|u_i\|_{\psi_{\alpha}}^{2}}{z^2}\Big]\exp\Big\{-\Big(\frac{z^2}{64\sum_{i=1}^{N}\|u_i\|_{\psi_{\alpha}}^{2}}\Big)^{\frac{\alpha}{\alpha+2}}\Big\}.\qquad(5.2)$$

Theorem 2 shows explicitly that the martingale inequality holds with a dimension-free property: the bound on the right-hand side of (5.2) is independent of the dimension $d$ and depends on the martingale differences only through $\sum_{i=1}^{N}\|u_i\|_{\psi_{\alpha}}^{2}$.

###### Proof of Theorem 2.

From Lemma 2 we have an $\mathbb{R}^2$-valued martingale difference sequence $(u_i')$ such that (5.1) holds for each $n$. It is straightforward to justify that $\|u_i'\|_{\psi_{\alpha}}=\|u_i\|_{\psi_{\alpha}}$ for each $i$. Therefore, to prove (5.2), we only need to show the same bound (5.3) with $(u_i')$ in place of $(u_i)$. Write $u_i'=(u_{i,1}',u_{i,2}')$ coordinatewise. Applying Theorem 1 to both $(u_{i,\ell}')$ and $(-u_{i,\ell}')$ as supermartingale difference sequences, we have for $\ell=1,2$

$$\mathbb{P}\Big(\max_{1\le n\le N}\Big|\sum_{i=1}^{n}u_{i,\ell}'\Big|\ge \frac{z}{\sqrt{2}}\Big)\le \mathbb{P}\Big(\max_{1\le n\le N}\sum_{i=1}^{n}u_{i,\ell}'\ge \frac{z}{\sqrt{2}}\Big)+\mathbb{P}\Big(\max_{1\le n\le N}\sum_{i=1}^{n}(-u_{i,\ell}')\ge \frac{z}{\sqrt{2}}\Big)\le 2\Big[2+\Big(\frac{3}{\alpha}\Big)^{\frac{2}{\alpha}}\frac{64\sum_{i=1}^{N}\|u_i\|_{\psi_{\alpha}}^{2}}{z^2}\Big]\exp\Big\{-\Big(\frac{z^2}{64\sum_{i=1}^{N}\|u_i\|_{\psi_{\alpha}}^{2}}\Big)^{\frac{\alpha}{\alpha+2}}\Big\}.$$

Thus (5.3) follows from the union bound over $\ell=1,2$, since $\|v\|\ge z$ implies $|v_\ell|\ge z/\sqrt{2}$ for at least one coordinate. ∎

It remains an open question whether similar concentration inequalities hold for martingale differences with only polynomial tails. In the case where the $u_i$'s are independent, Theorem 6.21 of Ledoux & Talagrand (2013) gives a bound on the sum of vectors that can be turned into a tail inequality, but to the best of our knowledge a general result for martingale differences (even just in one dimension) is not available and is left for future research.

### Acknowledgement

The author thanks Xiequan Fan for valuable comments on an earlier version of this note.

## References

• Durrett (2010) Durrett, R. (2010). Probability: Theory and Examples (4th edition). Cambridge University Press.
• Fan et al. (2012) Fan, X., Grama, I., & Liu, Q. (2012).
Large deviation exponential inequalities for supermartingales. Electronic Communications in Probability, 17.
• Fan et al. (2017) Fan, X., Grama, I., & Liu, Q. (2017). Deviation inequalities for martingales with applications. Journal of Mathematical Analysis and Applications, 448(1), 538–566.
• Hayes (2005) Hayes, T. P. (2005). A large-deviation inequality for vector-valued martingales.
• Kallenberg & Sztencel (1991) Kallenberg, O. & Sztencel, R. (1991). Some dimension-free features of vector-valued martingales. Probability Theory and Related Fields, 88(2), 215–247.
• Laib (1999) Laib, N. (1999). Exponential-type inequalities for martingale difference sequences. Application to nonparametric regression estimation. Communications in Statistics - Theory and Methods, 28(7), 1565–1576.
• Ledoux & Talagrand (2013) Ledoux, M. & Talagrand, M. (2013). Probability in Banach Spaces: Isoperimetry and Processes. Springer Science & Business Media.
• Lee et al. (2016) Lee, J. R., Peres, Y., & Smart, C. K. (2016). A Gaussian upper bound for martingale small-ball probabilities. The Annals of Probability, 44(6), 4184–4197.
• Lesigne & Volný (2001) Lesigne, E. & Volný, D. (2001). Large deviations for martingales. Stochastic Processes and their Applications, 96(1), 143–159.
• Li et al. (2018) Li, C. J., Wang, M., Liu, H., & Zhang, T. (2018). Near-optimal stochastic approximation for online principal component estimation. Mathematical Programming, 167(1), 75–97.
• Wainwright (2015) Wainwright, M. (2015). Basic tail and concentration bounds. URL: https://www.stat.berkeley.edu/~mjwain/stat210b/Chap2_TailBounds_Jan22_2015.pdf.
# Family of straight lines

1) It is given that the straight lines L(1): 2x+y-1=0, L(2): 3x-2y-5=0, L(3): x-3=0 and L(4): x+y-1=0.
a) Write down the equation of the family of straight lines passing through the point of intersection of L(1) and L(2).
b) Hence, find the equation of the line L which passes through the point of intersection of L(1) and L(2) and the point of intersection of L(3) and L(4).

2) A straight line L passes through the point of intersection of the lines L(1): 3x-4y+7=0 and L(2): x-3y+2=0 and makes an acute angle of 45 degrees with the line L(3): y=2x+1. Find the equations of L.

Thanks. For Q.1 I tried it, but the given answer is x+2y+1=0.

To Mr Kwok: I'm so sorry, for Q1 I calculated it wrong, you are right! Thanks for your time.

3 answers
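For part 1(b), the answer x+2y+1=0 can be checked directly: part (a)'s family is (2x+y-1) + k(3x-2y-5) = 0, and the required line joins the two intersection points. A minimal exact-arithmetic sketch (the helper names are my own, not from the thread):

```python
from fractions import Fraction

def intersect(a1, b1, c1, a2, b2, c2):
    """Intersection of a1*x + b1*y + c1 = 0 and a2*x + b2*y + c2 = 0 (Cramer's rule)."""
    det = a1 * b2 - a2 * b1
    x = Fraction(b1 * c2 - b2 * c1, det)
    y = Fraction(a2 * c1 - a1 * c2, det)
    return x, y

def line_through(p, q):
    """Coefficients (A, B, C) of the line A*x + B*y + C = 0 through points p and q."""
    (x1, y1), (x2, y2) = p, q
    return y1 - y2, x2 - x1, x1 * y2 - x2 * y1

p1 = intersect(2, 1, -1, 3, -2, -5)   # L(1) ∩ L(2) = (1, -1)
p2 = intersect(1, 0, -3, 1, 1, -1)    # L(3) ∩ L(4) = (3, -2)
coeffs = line_through(p1, p2)         # x + 2y + 1 = 0
```

This reproduces the book answer x+2y+1=0 for Q.1(b).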
# Model selection basics

I have run three regressions and they are all very statistically significant. How do I choose which one is the best to use? i.e. do I look for a high F-statistic, low p-values, etc.

• Are all 3 regression models fit to the same data? – julian Aug 13 '17 at 21:02
• @SYS_V Yes they do. – EconJohn Aug 13 '17 at 21:20
• There are many techniques. But the most important thing is your approach. You can let the data drive the decision, via model fit statistics, stepwise regression, comparison of likelihood functions etc. Or you can let the theory drive the decision. Personally I think theory should be what determines your decision to include or exclude variables. – llewmills Aug 13 '17 at 22:15
• "best" at what? measured how? – Glen_b -Reinstate Monica Aug 14 '17 at 5:26

Best to use for what purpose? If it's purely for prediction and you don't care about explanation, you can go with the one that has the lowest AIC, BIC, SBC or some similar score. If it's for explanation, then go with the one that best advances the field you are researching.

• Could you please clarify for me what "the one that best advances the field you are researching" means. Thanks! – user795305 Aug 14 '17 at 4:05
• I'm not sure how to make it clearer. Look at the different models. Which one helps your field the most? Which answers your research questions best? etc. – Peter Flom - Reinstate Monica Aug 14 '17 at 11:00
• I'm interpreting that as suggesting, for instance, that if I know that my field would be very interested in knowing that a covariate is significant, I should select that covariate. I very well could be misunderstanding, but it seems that this procedure (or others like it) is astatistical. – user795305 Aug 14 '17 at 14:28

The general approach to model selection involves assessing the accuracy of a model when fit to previously unseen data.
This is the rationale behind use of training and test data sets - models are first fit to training data sets, and the model that produces the most accurate predictions when applied to test data sets is then chosen as "best". In order to evaluate the performance of a statistical learning method on a given data set, we need some way to measure how well its predictions actually match the observed data. That is, we need to quantify the extent to which the predicted response value for a given observation is close to the true response value for that observation. In the regression setting, the most commonly-used measure is the mean squared error (MSE), given by $$MSE = \frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{f}(x_{i}))^2$$ The MSE is computed using the training data that was used to fit the model, and so should more accurately be referred to as the training MSE. But in general, we do not really care how well the method works on the training data. Rather, we are interested in the accuracy of the predictions that we obtain when we apply our method to previously unseen test data.[1] Cross-validation can be employed to evaluate model performance to aid in the model selection process: Cross-validation in plain english Meaning of cross-validation How to choose a predictive model after k-fold cross-validation? 1. An Introduction to Statistical Learning with Applications in R
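As a concrete illustration of the training-vs-test MSE distinction above, here is a toy numpy sketch (my own, not from the book; the polynomial degrees and noise level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

# Hold out every other observation as "previously unseen" test data.
x_tr, y_tr = x[::2], y[::2]
x_te, y_te = x[1::2], y[1::2]

def mse(y_true, y_pred):
    """Mean squared error: (1/n) * sum_i (y_i - f_hat(x_i))^2."""
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

for deg in (1, 3, 9):
    coefs = np.polyfit(x_tr, y_tr, deg)            # fit on training data only
    train_mse = mse(y_tr, np.polyval(coefs, x_tr))
    test_mse = mse(y_te, np.polyval(coefs, x_te))
    print(f"degree {deg}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

Training MSE can only shrink as the polynomial degree grows, while test MSE need not; that is why model comparison should score candidates on held-out data (or via cross-validation) rather than on training fit.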
# jax.numpy.piecewise

jax.numpy.piecewise(x, condlist, funclist, *args, **kw)

Evaluate a piecewise-defined function.

LAX-backend implementation of piecewise(). Unlike np.piecewise, jax.numpy.piecewise() requires functions in funclist to be traceable by JAX, as it is implemented via jax.lax.switch(). See the jax.lax.switch() documentation for more information.

Original docstring below.

Given a set of conditions and corresponding functions, evaluate each function on the input data wherever its condition is true.

Parameters

• x (ndarray or scalar) – The input domain.
• condlist (list of bool arrays or bool scalars) – Each boolean array corresponds to a function in funclist. Wherever condlist[i] is True, funclist[i](x) is used as the output value.
• funclist (list of callables, f(x,*args,**kw), or scalars) – Each function is evaluated over x wherever its corresponding condition is True. It should take a 1d array as input and give a 1d array or a scalar value as output. If, instead of a callable, a scalar is provided then a constant function (lambda x: scalar) is assumed.
• args (tuple, optional) – Any further arguments given to piecewise are passed to the functions upon execution, i.e., if called piecewise(..., ..., 1, 'a'), then each function is called as f(x, 1, 'a').
• kw (dict, optional) – Keyword arguments used in calling piecewise are passed to the functions upon execution, i.e., if called piecewise(..., ..., alpha=1), then each function is called as f(x, alpha=1).

Returns

out – The output is the same shape and type as x and is found by calling the functions in funclist on the appropriate portions of x, as defined by the boolean arrays in condlist. Portions not covered by any condition have a default value of 0.

Return type

ndarray

Notes

This is similar to choose or select, except that functions are evaluated on elements of x that satisfy the corresponding condition from condlist.
The result is:

        |--
        |funclist[0](x[condlist[0]])
  out = |funclist[1](x[condlist[1]])
        |...
        |funclist[n2](x[condlist[n2]])
        |--

Examples

Define the sigma function, which is -1 for x < 0 and +1 for x >= 0.

>>> x = np.linspace(-2.5, 2.5, 6)
>>> np.piecewise(x, [x < 0, x >= 0], [-1, 1])
array([-1., -1., -1.,  1.,  1.,  1.])

Define the absolute value, which is -x for x < 0 and x for x >= 0.

>>> np.piecewise(x, [x < 0, x >= 0], [lambda x: -x, lambda x: x])
array([2.5, 1.5, 0.5, 0.5, 1.5, 2.5])

Apply the same function to a scalar value.

>>> y = -2
>>> np.piecewise(y, [y < 0, y >= 0], [lambda x: -x, lambda x: x])
array(2)
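The dispatch semantics above can be checked with a small runnable sketch. Plain numpy's np.piecewise is used here so the example runs without JAX installed; jax.numpy.piecewise mirrors this API, with the extra requirement that funclist be JAX-traceable:

```python
import numpy as np

x = np.linspace(-2.5, 2.5, 6)            # [-2.5, -1.5, -0.5, 0.5, 1.5, 2.5]
condlist = [x < 0, x >= 0]
funclist = [lambda v: -v, lambda v: v]   # together: absolute value

out = np.piecewise(x, condlist, funclist)

# Equivalent manual evaluation: each function sees only the elements of x
# where its condition holds; elements covered by no condition default to 0.
manual = np.zeros_like(x)
for cond, func in zip(condlist, funclist):
    manual[cond] = func(x[cond])

assert np.allclose(out, manual)
assert np.allclose(out, np.abs(x))
```

Under JAX the same call lowers to jax.lax.switch, which is why Python-side branching inside the funclist callables on traced values is not allowed.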
# Energy (heat?) 1. Dec 15, 2006 ### BunDa4Th 1. The problem statement, all variables and given/known data Suppose a steel strut with cross-sectional area 7.00x10^-4 m2 and length 2.50 m is bolted between two rigid bulkheads in the engine room of a submarine. (a) Calculate the change in temperature of the strut if it absorbs an energy of 3.00x10^5 J. °C 2. Relevant equations Q = mcDeltaT DeltaT = Q/mc 3. The attempt at a solution I tried using the formula DeltaT = Q/mc Delta T = 3.00 x 10^5/ m(448 J/kg *C) the problem im having is how do i find the mass? 2. Dec 15, 2006 ### andrevdh Density of steel $$\rho _{steel} = 7.8 \times 10^3\ kg/m^3$$ 3. Dec 15, 2006 ### BunDa4Th I still dont understand how to solve this problem knowing the density of steel. 4. Dec 15, 2006 ### Staff: Mentor Use it to find the mass of the steel. 5. Dec 15, 2006 ### BunDa4Th m = psteelV how do i find v? Last edited: Dec 15, 2006 6. Dec 15, 2006 ### cristo Staff Emeritus No. You are given the cross sectional area and the length. Do you know how to calculate the volume from this information? 7. Dec 15, 2006 ### BunDa4Th okay i figure out how to solve this. it was P_steel x cross sec. area / length = m 8. Dec 15, 2006 ### cristo Staff Emeritus Not quite: V=A*l (where A is the cross sec. area, l is the length). Above you had the correct forumla: m= ρ*V and so m= ρ*A*l 9. Dec 15, 2006 ### BunDa4Th oops, my mistake. yes i did mean m = p x A x l that was how i got the correct answer. Thanks for the correction and help.
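Part (a) of the thread reduces to ΔT = Q/(ρ·A·l·c). A quick check with the numbers quoted in the thread (c = 448 J/kg·°C from the original post, ρ = 7.8×10³ kg/m³ from andrevdh):

```python
rho = 7.8e3    # density of steel, kg/m^3
A = 7.00e-4    # cross-sectional area, m^2
l = 2.50       # length of the strut, m
c = 448.0      # specific heat of steel, J/(kg*degC)
Q = 3.00e5     # absorbed energy, J

m = rho * A * l        # mass of the strut: 13.65 kg
dT = Q / (m * c)       # temperature change, about 49 degC
```

This is the m = ρ·V = ρ·A·l substitution cristo points to, carried through to the numeric answer.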
2022 AIME I Problems/Problem 1

Problem

Quadratic polynomials $P(x)$ and $Q(x)$ have leading coefficients $2$ and $-2,$ respectively. The graphs of both polynomials pass through the two points $(16,54)$ and $(20,53).$ Find $P(0) + Q(0).$

Solution 1 (Linear Polynomials)

Let $R(x)=P(x)+Q(x).$ Since the $x^2$-terms of $P(x)$ and $Q(x)$ cancel, we conclude that $R(x)$ is a linear polynomial. Note that \begin{alignat*}{8} R(16) &= P(16)+Q(16) &&= 54+54 &&= 108, \\ R(20) &= P(20)+Q(20) &&= 53+53 &&= 106, \end{alignat*} so the slope of $R(x)$ is $\frac{106-108}{20-16}=-\frac12.$ It follows that the equation of $R(x)$ is $$R(x)=-\frac12x+c$$ for some constant $c,$ and we wish to find $R(0)=c.$ We substitute $x=20$ into this equation to get $106=-\frac12\cdot20+c,$ from which $c=\boxed{116}.$

~MRENTHUSIASM

Solution 2

Let \begin{align*} P(x) &= 2x^2 + ax + b, \\ Q(x) &= -2x^2 + cx + d, \end{align*} for some constants $a,b,c$ and $d.$ We are given that \begin{alignat*}{8} P(16) &= &512 + 16a + b &= 54, \hspace{20mm}&&(1) \\ Q(16) &= &\hspace{1mm}-512 + 16c + d &= 54, &&(2) \\ P(20) &= &800 + 20a + b &= 53, &&(3) \\ Q(20) &= &\hspace{1mm}-800 + 20c + d &= 53, &&(4) \end{alignat*} and we wish to find $$P(0)+Q(0)=b+d.$$ We need to cancel $a$ and $c.$ Since $\operatorname{lcm}(16,20)=80,$ we subtract $4\cdot[(3)+(4)]$ from $5\cdot[(1)+(2)]$ to get $$b+d=5\cdot(54+54)-4\cdot(53+53)=\boxed{116}.$$

~MRENTHUSIASM

Solution 3 (Pure Brute Force)

Let $$P(x) = 2x^2 + bx + c$$ $$Q(x) = -2x^2 + dx + e$$ Substituting $(16, 54)$ and $(20, 53)$ into these equations, we get: $$2(16)^2 + 16b + c = 54$$ $$2(20)^2 + 20b + c = 53$$ Hence, $b = -72.25$ and $c = 698$. Similarly, $$-2(16)^2 + 16d + e = 54$$ $$-2(20)^2 + 20d + e = 53$$ Hence, $d = 71.75$ and $e = -582$. Notice that $c = P(0)$ and $e = Q(0)$. Therefore $P(0) + Q(0) = 698 - 582 = \boxed{116}$.

~Littlemouse
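Solution 1's arithmetic can be checked mechanically with exact rationals (a small sketch of my own, not part of the wiki page):

```python
from fractions import Fraction

# R(x) = P(x) + Q(x) is linear because the 2x^2 and -2x^2 terms cancel.
R16 = 54 + 54                          # R(16) = 108
R20 = 53 + 53                          # R(20) = 106
slope = Fraction(R20 - R16, 20 - 16)   # -1/2
R0 = R16 - slope * 16                  # R(0) = P(0) + Q(0)
assert R0 == 116
```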
# Recent and upcoming talks by Kaethe Minden

## BEST 2016 slides

The 23rd BEST conference was held June 15–16 in San Diego, CA.

Shehzad Ahmed – Jonsson cardinals and pcf theory
Liljana Babinkostova – A weakening of the closure operator
Kyle Beserra – On the conjugacy problem for automorphisms of countable regular trees
Erin Carmody – Killing them softly
William Chan – Every analytic equivalence relation with all Borel classes is Borel somewhere
John Clemens – Relative primeness of equivalence relations
Paul Corazza – The axiom of infinity, quantum field theory, and large cardinals
Cody Dance – Indiscernibles for $L[T_2,x]$
Natasha Dobrinen – Ramsey spaces coding universal triangle-free graphs and applications to Ramsey degrees
Paul Ellis – A Borel amalgamation property
Monroe Eskew – Rigid ideals
Daniel Hathaway – Disjoint Borel functions
Jared Holshouser – Partition properties for non-ordinal sets under the axiom of determinacy
Paul McKenney – Automorphisms of $\mathcal P(\lambda)/\mathcal I_\kappa$
Kaethe Minden – Subcomplete forcing and trees
Daniel Soukup – Orientations of graphs with uncountable chromatic number
Simon Thomas – The isomorphism and bi-embeddability relations for finitely generated groups
Douglas Ulrich – A new notion of cardinality for countable first order theories
Kameryn Williams – Minimal models of Kelley-Morse set theory
Martin Zeman – Master conditions from huge embeddings

## This Week in Logic at CUNY

Computational Logic Seminar
Tuesday, March 19, 2013, 2:00 pm
Speaker: Stan Wainer, The Leeds Logic Group, University of Leeds
Title: Computing Bounds from Arithmetical Proofs

We explore the role of the function a+2^x, and its generalizations to higher number classes, in analyzing and measuring the computational content of a broad spectrum of arithmetical theories.
## This Week in Logic at CUNY

November 27, 2:00 – 4:00 PM, Room 3309
Speaker: Melvin Fitting, CUNY
Title: Possible world semantics for first order LP
Abstract: Propositional Justification Logics are modal-like logics in which the usual necessity operator is split into a family of more complex terms called justifications.

## This Week in Logic at CUNY

Set Theory Seminar
Friday, March 30, 2012, 10:00 am, GC 6417
Ms. Kaethe Minden
Consistency and Applications of the Diamond Principle

Model Theory Seminar
Friday, March 30, 2012, 12:30 pm, GC 6417
Mr.
If $d = 2.0453$ and $d^{*}$ is the nearest decimal obtained by rounding $d$ to the nearest hundredth, what is the value of $d^{*} - d$?
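The arithmetic can be checked with exact decimal rounding (a minimal sketch; Decimal is used to avoid binary-float representation artifacts):

```python
from decimal import Decimal

d = Decimal("2.0453")
d_star = d.quantize(Decimal("0.01"))   # round to the nearest hundredth -> 2.05

assert d_star == Decimal("2.05")
assert d_star - d == Decimal("0.0047")
```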
# Recent questions tagged functional

Let $A$ be a positive definite $n \times n$ real matrix, $\vec{b}$ a real vector, and $\vec{N}$ a real unit vector. a) For which value(s) of t ... (1 answer, 167 views)

Let $\vec{x}$ and $\vec{p}$ be points in $\mathbb{R}^{n}$. Under what conditions on the scalar $c$ is the set $\|\vec{x}\|^{2}+2\langle\vec{p}, \vec{x}\rangle+c=0$ ... (0 answers, 60 views)

Let $A$ be a positive definite $n \times n$ real matrix, $b \in \mathbb{R}^{n}$, and consider the quadratic polynomial $Q(x):=\frac{1}{2}\langle x, A x\rangle-\langle b, x\rangle$ (0 answers, 53 views)

Let $\ell$ be any linear functional. Show there is a unique vector $v \in \mathbb{R}^{n}$ so that $\ell(x):=\langle x, v\rangle$. (0 answers, 56 views)

Let $A: \mathbb{R}^{\ell} \rightarrow \mathbb{R}^{n}$ and $B: \mathbb{R}^{k} \rightarrow \mathbb{R}^{\ell}$. (0 answers, 157 views)

Let $V$ be a vector space and $\ell: V \rightarrow \mathbb{R}$ be a linear map. If $z \in V$ is not in the nullspace of $\ell$, show that ever... (0 answers, 45 views)

What is a functional group in chemistry? (1 answer, 90 views)

What is a functional group in chemistry? (1 answer, 88 views)

What is functional principal components analysis? (0 answers, 123 views)

A functional group of the form R-NH2 is what type of compound? (1 answer, 145 views)

Which spectroscopic technique is most useful for identification of bonding patterns and functional groups in molecules? (1 answer, 128 views)

What are some of the hats that a Data Science leader wears to ensure the team is functional? (1 answer, 88 views)

How can I Master Functional Programming in Scala Online? (1 answer, 132 views)

How do managers identify the key skills required to fill essential roles of a functional data science team, based on varying organizational goals? (0 answers, 87 views)

What is the difference between a unit test and a functional/integration test? (0 answers, 129 views)

What are the key success metrics for a functional data science team? (0 answers, 101 views)

To see more, click for the full list of questions or popular tags.
Building blocks

# $t$ for $\tan$

## Problem

If we write $t = \tan \theta$, then the following equations are true. \begin{align*} \tan 2\theta &= \frac{2t}{1-t^2}, \\ \sin 2\theta &= \frac{2t}{1+t^2}, \\ \cos 2\theta &= \frac{1-t^2}{1+t^2}. \end{align*} Can you use this diagram to obtain these formulae? For what range of values of $\theta$ does this argument work?
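A quick numerical sanity check of the three identities (my own sketch, not part of the resource; it also hints at the range question, since the $\tan 2\theta$ form needs $t^2 \neq 1$, i.e. $\theta \neq \pi/4$):

```python
import math

# Check the t = tan(theta) identities at an arbitrary angle.
theta = 0.7                      # any theta with 2*theta away from pi/2
t = math.tan(theta)

assert math.isclose(math.tan(2 * theta), 2 * t / (1 - t * t))
assert math.isclose(math.sin(2 * theta), 2 * t / (1 + t * t))
assert math.isclose(math.cos(2 * theta), (1 - t * t) / (1 + t * t))
```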
Q: A train running at the speed of 40 km/hr crosses a signal pole in 9 seconds. Find the length of the train?

A) 90 mts B) 150 mts C) 120 mts D) 100 mts

Explanation: We know that $Speed = \frac{distance}{time}$, so distance = speed × time = 40 × (5/18) × 9 ⇒ d = 100 mts.

Q: Agra and Delhi are 160 km apart. A train leaves Agra for Delhi and at the same time another train leaves Delhi for Agra. Both the trains meet 5 hrs after they start moving. If the train travelling from Agra to Delhi travels 6 km/hr faster than the other train, find the speed of the faster train?

A) 19 kmph B) 13 kmph C) 7 kmph D) 9 kmph

Explanation: Let the speed of the slower train = p kmph. Then the speed of the faster train = (p + 6) kmph, and (p + p + 6) × 5 = 160 ⇒ 10p + 30 = 160 ⇒ 10p = 130 ⇒ p = 13 kmph. So the speed of the faster train = p + 6 = 13 + 6 = 19 kmph.

Q: A 150 m long train crosses another 210 m long train running in the opposite direction in 10.8 seconds. If the shorter train crosses a pole in 12 seconds, what is the speed of the longer train?

A) 75 kmph B) 80 kmph C) 54 kmph D) 45 kmph

Explanation: Total distance = 150 + 210 = 360 mts. Time taken to cross each other when moving in opposite directions = 10.8 sec, so the relative speed of the trains = (360/10.8) × 18/5 = 120 kmph. Speed of the shorter train = (150/12) × 18/5 = 45 kmph. Hence, speed of the longer train = 120 - 45 = 75 kmph.

Q: A train travels 360 km at a uniform speed. If the speed had been 5 km/h more, it would have taken 1 hour less for the same journey. Find the speed of the train?
A) 42 kmph B) 43 kmph C) 40 kmph D) 45 kmph

Explanation: We know that Speed = Distance/time. Let the speed of the train be x kmph.

Normal speed: distance = 360 km, speed = x kmph, time = 360/x hours.
Speed 5 kmph more: distance = 360 km, speed = (x + 5) kmph, time = 360/(x + 5) hours.

Since the faster journey takes 1 hour less:
(360/x) - (360/(x + 5)) = 1
⇒ (360(x + 5) - 360x)/(x(x + 5)) = 1
⇒ 360x + 1800 - 360x = x(x + 5)
⇒ x^2 + 5x - 1800 = 0
⇒ x = 40 or x = -45. Since x cannot be negative, x = 40 kmph.

Q: A passenger train covers the distance between station K and L, 40 minutes faster than a goods train. Find this distance between K and L if the average speed of the passenger train is 50 km/h and that of goods train is 30 km/h?

A) 50 kms B) 48 kms C) 46 kms D) 44 kms

Explanation: Let the distance be 'd' kms. According to the given data, d/30 - d/50 = 40/60 ⇒ 2d/150 = 2/3 ⇒ d = 50 kms.

Q: A train travelling with a speed of 60 km/hr catches another train travelling in the same direction and then leaves it 120 m behind in 18 seconds. The speed of the second train is

A) 42 kmph B) 72 kmph C) 36 kmph D) 44 kmph

Explanation: Given speed of the first train = 60 km/hr = 60 × 5/18 = 50/3 m/s. Let the speed of the second train = x m/s. The difference in speeds covers the 120 m gap in 18 s: 50/3 - x = 120/18 = 20/3 ⇒ x = 10 m/s ⇒ 10 × 18/5 = 36 km/hr.

Q: Two trains are running at 40 km/hr and 20 km/hr respectively in the same direction. The fast train completely passes a man sitting in the slower train in 5 seconds. What is the length of the fast train?

A) 27 7/9 mts B) 25 8/7 mts C) 21 1/4 mts D) 22 mts

Explanation: Relative speed = (40 - 20) km/hr = 20 × 5/18 = 50/9 m/sec. Length of the faster train = (50/9) × 5 = 250/9 m = 27 7/9 m.

Q: A train is moving and crossing a man who is running on a platform of 100 m at a speed of 8 kmph in the direction of the train, in 9 sec.
If the speed of the train is 74 kmph, find the length of the train?

A) 202 mts B) 188 mts C) 165 mts D) 156 mts

Explanation: Let the length of the train = L mts. Relative speed of the train and the man = 74 - 8 = 66 kmph = 66 × 5/18 m/s. Then 66 × 5/18 = L/9 ⇒ L = 165 mts.

Q: A train travelling at 48 kmph crosses another train, having half its length and travelling in the opposite direction at 42 kmph, in 12 sec. It also covers a bridge in 45 sec. Find the length of the bridge?

A) 250 mts B) 400 mts C) 320 mts D) 390 mts

Explanation: Let the length of the 1st train = L mts; its speed = 48 kmph. The length of the 2nd train = L/2 mts; its speed = 42 kmph. Let the length of the bridge = D mts.

Crossing distance = L + L/2 = 3L/2. Relative speed = 48 + 42 = 90 kmph = 90 × 5/18 = 25 m/s (opposite directions). Time = 12 sec, so (3L/2)/25 = 12 ⇒ L = 200 mts.

The train covers the bridge in 45 sec: distance = D + 200, speed = 48 × 5/18 = 40/3 m/s, so (D + 200)/(40/3) = 45 ⇒ D = 600 - 200 = 400 mts.

Hence, the length of the bridge = 400 mts.
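The unit conversions in these explanations all hinge on 1 km/hr = 5/18 m/s. A short exact-arithmetic check of several of the answers above (my own sketch):

```python
from fractions import Fraction

KMPH_TO_MPS = Fraction(5, 18)          # 1 km/hr = 5/18 m/s

# 40 km/hr past a signal pole in 9 s -> train length 100 m
pole_length = 40 * KMPH_TO_MPS * 9

# Man running at 8 km/hr, train at 74 km/hr, passed in 9 s -> 165 m
man_length = (74 - 8) * KMPH_TO_MPS * 9

# Bridge problem: trains at 48 and 42 km/hr cross in 12 s -> L = 200 m,
# then the 48 km/hr train covers bridge + L in 45 s -> bridge = 400 m
L = Fraction(2, 3) * (90 * KMPH_TO_MPS) * 12      # from (3L/2)/25 = 12
bridge = 45 * (48 * KMPH_TO_MPS) - L

assert (pole_length, man_length, L, bridge) == (100, 165, 200, 400)
```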
# What are the challenges to achieving cold fusion?

I am an absolute neophyte regarding physics. What are the challenges to achieving cold fusion?

I'm not sure this is a duplicate of "Why is cold fusion considered bogus?", because that question is talking largely about the validity of some claims of cold fusion being realized. This question is specifically asking about the challenges of cold fusion and how they may be addressed in the emerging study of "low energy nuclear reactions".

From my understanding of en.wikipedia.org/wiki/Cold_fusion, there's no theoretical model under which Cold Fusion could ever work. Like teleportation, faster than light travel, time travel, etc. – Mooing Duck Oct 30 '12 at 17:00

I've just voted for a reopen. While we're waiting, to get protons to fuse you have to get them close enough together for the strong force to take effect. The trouble is that electrostatic repulsion between the two positive charges makes it hard to get the protons that close. If you make the protons hot enough, their thermal velocities are high enough to punch through the electrostatic repulsion and get fusion. Tokamak reactors heat protons to 100 million degrees to achieve this. The sun manages fusion at lower temperatures because the density is much higher. – John Rennie Oct 30 '12 at 17:25

@JohnRennie Sure, if they succeed the theory will be found. – anna v Oct 30 '12 at 18:20

@Chimera do note that just because someone seems to have a great grasp on physics, it doesn't mean that they do, and it is hard to tell the difference without understanding physics quite well yourself. – David Z Nov 4 '12 at 7:16

@DavidZaslavsky: Yeah, and even if they actually do have a good grasp, it doesn't stop them from saying complete and total nonsense occasionally. Everyone says stupid things every once in a while, even Dirac and Einstein. Except in the particular case of cold fusion, I am not saying nonsense, and you can check yourself by reading the suppressed literature.
– Ron Maimon Nov 4 '12 at 14:37 The binding energy curve for nucleons in nuclei shows which atoms can take part in fusion, releasing energy in the process. Fusion happens as one goes from left to right, until reaching Fe, iron. From there to the right it is fission that will release extra energy This is an example of a fusion reaction, the one that is actually being materialized in ITER, fusion of deuterium with tritium creating helium-4. ITER is hot fusion, because it requires a very hot plasma in a Tokamak machine. Why such high plasma energies? So as to overcome the Coulomb barrier as stated in the comments to the question. For two nuclei to fuse they have to get through the Coulomb barrier, since both will have positive charge and will repulse each other. Cold fusion started with the hypothesis that, since nature is quantum mechanical, a way can be found to overcome this barrier by some collective phenomena in crystals. The hypothesis for the mechanism is that in the crystal an electron can bind with a proton and behave like a neutron .Up to now I am not aware of a published peer reviewed demonstration that the original Palladium experiment in cold fusion was really demonstrating fusion, though efforts continue. The energies released are not high enough for fusion, was the damaging criticism when cold fusion first appeared. On the other hand people have since tried other crystal structures and claim success and even promise to deliver working reactors. No peer reviewed publications from there either, but they can be excused since they have applied for patents and are going commercial. There is a branch off also by another company claiming similar success. We are waiting. There even exists a video from a NASA researchers on cold fusion. Skepticism from the physics community is natural, because there is no clear demonstration that would shut up criticism yet, so people talk of scams. Time will tell, you can fool .... 
EDIT 27/11/12: a qualification to the following original statement. I have found this interesting video which is not directly cold fusion, except that it can be carried out in the kitchen, which can be considered cold: dust fusion! The claim is that the plasma temperatures needed for fusion can be achieved with the energy input of a microwave oven.

I found that pyrolytic carbon could explain the effects in the dust fusion video linked above, without transmutation. It is also more diamagnetic (-400x10^-6) against the cleavage plane, exhibiting the greatest diamagnetism (by weight) of any room-temperature diamagnet. It is even possible to levitate reasonably pure and sufficiently ordered samples over rare earth permanent magnets. Pyrolytic carbon levitating over permanent magnets.

While this is informative, it is wrong to demand "peer review" in this case, as peer review is just censorship. The stuff you link to, the "surface polaritons", the NASA stuff, the Rossi crap, this is the worst stuff in the cold fusion field. There are hangers-on and frauds; the legitimate work is Pons/Fleischmann, McKubre, Arata, and others related to this experimental setup. It's Pd/d not Ni/H, and it's fusion with KeV energies, not some nonexistent crystal effect. – Ron Maimon Nov 4 '12 at 14:58

@ron "Pons/Fleisch" you really think so? Don't you think funding of their lab might have had something to do with what they saw? – John McVirgo Nov 6 '12 at 23:42

@JohnMcVirgo: Are you crazy? They lost all their funding, and their tenure, and got thrown to the wind! All they had to do was recant their fusion, and it would have been ok, like Paneth and Peters, but they didn't recant. That's what you get for reporting the most important discovery since fire, although, to be fair, Prometheus was treated slightly worse. Nobody in this field is in it for the money, except Rossi, and he's scamming. There is no money in this, and Fleischmann was already world famous before he announced this.
Nobody money driven has ever discovered anything of value. – Ron Maimon Nov 6 '12 at 23:52

The money was on the other side--- the hot fusion folks were afraid of losing their funding, and said so. At MIT, they held a "death of cold fusion" party before the replication they were conducting was even run! In this video you have this fellow saying "Broadly speaking, it's dead, and it will remain dead for a long, long time." The notable part is the "long, long time". What the heck is that supposed to mean? That all discredited ideas come back? This means "I know it's real, but I'll suppress it, because I am jealous of Martin's immortality." – Ron Maimon Nov 7 '12 at 0:03

@RonMaimon I saw the video and do not come to the conclusion that he knows it was right, he is just a senior bloviating. I lived through the period. Everybody was very excited for months. Solid state people did not give a damn for hot fusion, nor low energy nuclear physicists. Many people wanted a piece of the pie of excitement and were completely disappointed at not being able to reproduce the effect. You should read the link for the white paper of Shanahan below. He is measured in his criticism and wants the field open and does not exclude LENR succeeding in the end. – anna v Nov 7 '12 at 5:02

For some recent information on the running battle between cold fusion researchers and myself over my proposed conventional (non-nuclear) explanation of the Fleishmann-Pons(-Hawkins) effect, you might want to look here: https://docs.google.com/open?id=0B3d7yWtb1doPc3otVGFUNDZKUDQ (referenced in this: http://www.networkworld.com/columnists/2012/102612-backspin.html?page=1) which is a whitepaper on the subject I just released. To fully follow the arguments, however, you will probably need to at least look at the first 8 references listed, as they are my 4 peer-reviewed publications in the field and related cold fusioneer responses.
In my whitepaper, I outline why the original F&P calorimetric conclusions were likely incorrect. In my 2002 publication I show how apparent excess heat signals can arise in F&P electrochemical cells. I talk about other errors in the 2010 publication, and the gross misrepresentation of what I say in the 2010 response should be more than adequate to demonstrate the biased approach taken by cold fusion researchers. I noted above a lot of talk about 'normal' fusion mechanisms. You should all realize that the response to these kinds of criticisms is "OK, we agree, but we found a new type of nuclear reaction, which is why we prefer to call the field Low Energy Nuclear Reactions (LENR)." In other words, stating that they do not show conformance to conventional fusion characteristics is a nearly meaningless activity these days. Good historical info though. Today the field lumped under the term 'cold fusion' (CF) covers a lot of ground. Multiple materials have been claimed to show CF effects (Pd, Pt, Ti, Ni, and alloys) and multiple experiment types have been employed (electrolysis cells, closed and open, gas loading experiments, arcs in gas and in liquid, ultrasound- or laser-assisted versions of the above, etc.) using many different analytical techniques (calorimetry (several types), radiation detectors and counters, SIMS, EDX, XPS, MS, and more). One problem arising from this is that the cold fusion researchers (CFers) tend to lump anything that might support their POV into one big pile and then try to impress you with the size of that pile. That's not right. Each study must stand on its own merits before it can be drawn upon to support others. The biggest group, and what started the whole field, was calorimetric results from F&P electrolysis cells. Most of my work has focused on those F&P cells, especially on the excess heat claims, which comprise about half of all the evidence for cold fusion. In 2000, Dr.
Ed Storms posted some CF data he collected on the 'net, and I grabbed it and analyzed it by assuming no excess energy. In other words I was testing the zero excess power, Pex, thesis mathematically. Typically, calorimetry is done via a calibrated technique where a constant temperature is measured as a reference and a variable temperature controlled by the experimental process is mathematically compared to it, most typically via power balance equations. In theory the excess power is the difference between the output and input power. In F&P cells the input power (Watts) is just the cell voltage (V or E, volts) times the current (I, amps) flowing through the cell. A fraction of that power goes into electrolysis, and is computed as Eth * I, where Eth is the thermoneutral voltage for the specific form of water (it differs due to isotope effects). The output power is computed based on a temperature difference, dT, from an equation for the particular method you are using. So for a flow calorimeter, the equation is Pout = k * Cp * f * dT, where Cp is the heat capacity of the fluid at constant pressure, f is its flow rate, and k is the calibration constant in the correct units. Many times linear regression is used to fit the calibration data, which also gives a second additive constant. What I found in the reanalysis was that if you change the calibration constant by +/-3% max, you could zero out all the signals. 3% is not much, and it represents a very good analytical technique, yet that small change produces artificial Pex in an F&P calorimetric study. What that means is that in order to evaluate the validity of an excess heat signal, you need to have the calibration equation used, the numeric calibration constants, and the variation in those constants, so that the approximate error bar on the output power can be calculated. To date this information has never been supplied in the literature.
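To make the arithmetic concrete, here is a minimal Python sketch of the open-cell power balance described above. The function name and all numerical values are illustrative assumptions, not data from any actual experiment; the thermoneutral voltage of roughly 1.53 V for heavy water is a textbook figure. The sketch shows how a 3% shift in the calibration constant k manufactures an apparent excess power where none exists.

```python
def excess_power(V, I, Eth, k, Cp, f, dT):
    """Apparent excess power in an open F&P-type cell with flow calorimetry.

    Net input power: (V - Eth) * I, since Eth * I leaves as electrolysis gases.
    Output power:    k * Cp * f * dT, via the calibration constant k.
    """
    return k * Cp * f * dT - (V - Eth) * I

# Illustrative numbers only (not from any real experiment):
V, I, Eth = 4.0, 0.5, 1.527      # volts, amps; Eth ~ 1.527 V for heavy water
Cp, f, dT = 4.18, 0.10, 2.40     # J/(g*K), g/s, K

k_true = (V - Eth) * I / (Cp * f * dT)   # calibration that balances exactly
p0 = excess_power(V, I, Eth, k_true, Cp, f, dT)          # zero by construction
p3 = excess_power(V, I, Eth, 1.03 * k_true, Cp, f, dT)   # after a +3% shift
# p3 is 3% of the ~1.24 W net input: a spurious "excess heat" signal of ~37 mW
```

The point of the sketch is only that Pex scales directly with k: any systematic shift in k between calibration and experiment shows up, watt for watt, as apparent excess heat.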
The few times I have looked at Pex claims and pieced this information together, I have concluded the signals were consistent with this error. So what that does is throw all calorimetric claims into question, since one doesn't know how big the error bars are on the claimed amounts of excess heat. In principle, there is a limit to how big that can get, but the details aren't readily developed if the basic information above isn't supplied. For my study, I analyzed Storms' data, which had 10 voltage sweeps in it, and found the required calibration constants for zero excess heat, and then noted there were systematic patterns in those results. I then proceeded to explain via a simple two-region model of an F&P cell how such a calibration constant shift could occur, concluding a shift in heat distribution in the cell would do it. Then I speculated on a chemical mechanism that would produce such a shift, pointing at H2+O2 recombination at the electrode surface as the likely mechanism. Szpak et al. have even videoed hot spots on working electrodes, but of course they called them 'mini-nuclear explosions'. The physical size of individual hot spots was approximately the size of bubbles produced by electrolysis, however. So with respect to F&P cell calorimetry, I showed in one set of experiments that a simple calibration constant shift model could explain the apparent excess heat signals, and I provided some basic rationales for how that could happen, which could be experimentally tested. Subsequently, this was challenged in the literature, but I responded with more explanation and the illustration that the data provided by the dissenting authors was consistent with my proposal. And later, Storms wrote an attempted rebuttal of this, to which I replied, showing how the proposed rebuttal was not adequate. In 2010 I wrote a long Comment on a 2009 review article that delineated experimental problems that were glossed over in the original report.
A group of 10 prominent CF researchers tried to rebut my Comment, but only proved their lack of understanding of my proposal by insisting on discussing it as if it were a random problem, when I had said in all 4 of my publications on the subject that it was systematic. Ergo, the proffered rebuttal was irrelevant. In that 2010 paper I added a few more comments on the CFers' interpretation of analytical results, and I have here made some specific comments on other results over in the "Why is cold fusion considered bogus?" topic in this forum. The end state is that the largest bulk of claims is on excess heat, and I showed why the claimed results are not trustworthy. But rather than publish the calibration data, which the researchers ought to have done, they resorted to illegitimate tactics, which I discuss in the whitepaper. Not very confidence-inspiring. This picture then presents a group of researchers who collectively are fixated on a 'nuclear' solution, regardless of how well it works or what alternatives are presented, which is NOT how science is supposed to be done. The whitepaper adds one new aspect. I analyze the original F&P calorimetric method and point out the flaws in their approach, concluding that the 'excess heat' signals they observed back in 1985-1993 were just modeling errors arising from not including relevant chemical processes and from a mathematical quirk in the model equation. Perhaps if this analysis had been presented in 1991-2, there wouldn't have been such a furor over the claims (as happened to F&P's nuclear data, which were shown then to be in error and promptly discarded). You can also see lots of comments in the old sci.physics.fusion newsgroup and on my now-unused Wikipedia page.
In summary then, it seems as if there is a real chemical process that occurs in F&P cells that can produce erroneous excess power signals, and that there are conventional chemical explanations for any block of results with enough contents to be called at least partially reproduced. Those explanations usually center on misunderstanding the impact of contamination and misinterpreting analytical results. Thus, there is no reason today to be compelled to accept a nuclear explanation of the observations. - Many combative comments have been deleted. Further combative posts or comments by any involved party will merit suspension. Feel free to post your positions, and to vote as you wish, but lay off the name calling and vitriol. – dmckee Nov 9 '12 at 4:04 In response to the posting of the link to Steve Krivit's comments, the first Appendix of the whitepaper that I originally offered up to answer the topical question contains a point-by-point rebuttal of Krivit's comments. There is actually little to no technical content in Krivit's rant. – Kirk Shanahan Nov 9 '12 at 16:05 I see that my expansions on my original answer have been deleted. I have no interest in wasting time here. I will just state that Ron Maimon continually cites the standard nonsense cold fusion researchers use to ignore my work, and the whitepaper goes over why their actions are pseudoscientific. If anyone has questions, it isn't hard to find my email; others have done it. Ask questions there. Kirk Shanahan – Kirk Shanahan Nov 9 '12 at 16:08 Ron Maimon had posted many challenge comments to this answer, including the use of 'dishonest'. W.r.t. that: the paper he linked to in Thermochimica Acta (TA) was replied to immediately (TA, 441 (2006) 210). That paper was ignored by Storms in his 2007 cold fusion book and since, but it rebuts each comment Storms makes. A summary was posted but was then deleted. – Kirk Shanahan Nov 12 '12 at 20:20 Ron also implies these calibration constant shifts are random.
That is most certainly NOT what I ever said, but IS what 10 prominent cold fusion authors said that I said in their 2010 reply to my comment; see J. Env. Monitoring 12, (2010), 1756; ibid, 1765. This misrepresentation of my published claims is indicative of the cold fusion community's inability to deal with critics reasonably. Until Ron and his friends begin to deal fairly and honestly with the issues, their attempts to brush off criticisms should be noted with concern. – Kirk Shanahan Nov 12 '12 at 20:20 You should check out Jed Rothwell's particularly informative website for more information. As he said in his deleted answer: Cold fusion has been replicated in over 200 major laboratories, often at high signal to noise ratios. For example, tritium has been measured at millions of times background. I have a collection of 1,200 peer-reviewed journal papers on cold fusion, copied from the library at Los Alamos, and 2,000 other papers published by Los Alamos, China Lake, the NRL, Mitsubishi, the NSF and various other mainstream organizations. This literature proves beyond question that cold fusion is real. You will find the bibliography and hundreds of full-text papers here: http://lenr-canr.org/ This addresses the known literature. For a more direct answer to the question, the issue with cold fusion is the lack of an accepted theory of the phenomenon. Nobody is sure how it happens exactly, and there are many people saying it doesn't happen at all. These denials are irresponsible and stupid--- there are over 200 reproductions of the effect, including most recently an undergraduate lab at MIT conducted by Peter Hagelstein where the students set up a cold-fusion cell and observed the anomalous heat production (but new stuff is coming out all the time; see the news article linked below for a reproducible neutron burst in heated Pd-d).
The topic has been censored and buried in a way unbefitting of science, and the data is festering on lenr-canr.org, bringing shame on the physics community for two decades now. The main experimental evidence for cold fusion is that when you do electrolysis on Pd in heavy water, and the deuterium enters the Pd lattice as usual, sporadically, in the deuterated samples, you find that there is enormous heat production, far too large to be chemical energy generated or stored in the cathode. This must be done in small wires of Pd, to avoid melting the apparatus or risking an explosion or meltdown. The energy released is enough to significantly and obviously alter the calorimetry in the cell, so that you have 10-30% more energy production than in the controls, or in the experiment when you aren't producing excess heat. This energy production is accompanied by nuclear effects, which include spectroscopically detected elements which weren't there in the original sample and which are produced in unnatural isotope ratios, tritium production at trace levels (tritium cannot be produced by non-nuclear processes), He production commensurate with the excess heat, and X-rays and charged particles produced near the electrode, which are detected in co-deposition experiments using ordinary plastic particle detectors and photographic film. This evidence has been widely reproduced and is airtight; I will not waste my breath on this, as it is overwhelming and enormous amounts of evidence from completely different groups using completely different methods over many decades, and it is only a political process that can deny such things with a straight face. The main trustworthy methods are the three below, each of which has been independently replicated: • Pons and Fleischmann and McKubre did electrolysis, and noticed excess heat and correlated He production.
Bockris measured large amounts of tritium production (and was accused of scientific misconduct for this, and exonerated), as did others in contact with Miley. • The SPAWAR group did co-deposition of d and Pd onto plates, and detected charged particles, and transmutations. There were also transmutations detected in deuterium passing through palladium. • Arata did gas loading of Pd, meaning no electrolysis, and produced heat and correlated He. These are the highlights; there are dozens more replications, from disinterested and unrelated people all over the world, who have nothing to gain from reporting their results and everything to lose (for example, today you have this article). Since the effect is unexplained by accepted theory, making predictions about how to use it is impossible. So for the remainder of the answer I will assume that the mechanism is the one I described in the answer here: Why is cold fusion considered bogus?, that it is due to deuterons accelerated by K-shell holes, that fuse into alpha-particles, which then fly around making more K-shell holes in a chain reaction. This doesn't require new laws of fundamental physics; it simply postulates that deuterons and inner-shell excitations are quantum mechanically mixed in a metal, which is completely plausible just from electrostatic matrix element estimates. The inner shell excitation energy for Pd is 20 keV, well above the threshold for producing fusion. The charged particles produced from a fusion event naturally produce K-shell excitations (and other excitations in all levels, according to the Bethe theory of charged ionizing radiation), so there is a potential for a chain reaction.
Any fusion reaction must proceed by dumping energy into electrons or nuclei, so that neutrons and protons don't come flying out, and this is not excluded by alpha-particle spectroscopy (although, again, deuterium fusion inside a high-density material has not been studied experimentally or theoretically to any great extent). This idea is hard to work with theoretically, because the inner shell transitions of atoms in metals are complicated. The electrostatic interactions mix the inner shells with excited momentum states of lattice deuterons, and it is hard to know if these states will band, localize, flow to the surface, or whatnot, because nobody has specifically done experiments on deuterated metals with highly excited inner shells. Nevertheless, the idea can make some predictions about making the reaction happen more reliably. One thing you can do is simply shine X-rays on the Pd, to excite the inner shells by hand. In the absence of a well-funded research project, you can't be sure the answer isn't as simple as this. For all we know at this point, you can build a reactor with Pd, d, and an X-ray machine shining on the cathode. - Ron lists 3 items above that are supposedly the 'main trustworthy methods'. Comment on (1): apparent excess heat and He are responses arising in principle from the proposed nuclear reactions. Correlating a response to a response proves nothing; what is of interest is the control factor-response correlation, i.e. something that will tell us how to get the effect reproducibly. The interconnecting variable in a heat-He correlation is likely just time. Integrate the 'heat' and it increases. He leaks in. It increases. This gives a correlation. But is it reproducible? No. Does it mean anything? No. – Kirk Shanahan Nov 15 '12 at 16:08 The heat-He correlation plots I've seen do not report He @ 50 ppm to my recollection. Neither do they report any heat that cannot be explained by a calibration constant shift.
If I am wrong, please cite your reference so I can verify. If not, then the plotted data are within noise levels and the correlation is accidental or just due to the naturally increasing magnitude of integrated error and leaked He. This means in turn that it is unlikely that the result will be obtained again, as it was probably just coincidence. – Kirk Shanahan Nov 15 '12 at 19:24 I see we are back to personal attacks... The CCS is a name I gave to the systematic problem I detected in CF calorimetric data workup. It is not 'made-up'; the problem is systematic and easy to see if you know what to do, which was described in my 2002 publication. Also, I did not specify that the walls leak, I didn't specify anything in particular, so the wall thing is a Maimon strawman. The phrase "just to fool..." is another indication Ron hasn't understood the explanation. Further, the CCS is applicable to FP cells, not gas-loading expts. directly. Will get back on the rest... – Kirk Shanahan Nov 27 '12 at 19:20 re: Bockris "1989" He data - should be 1992 as pub. date. Yes, the average He conc in surface Pd layers was 29.7e9 atoms vs. 1.63e9 from subsurface Pd. BUT!! Let's look at the error bars: 29.7 +/- 67.2 vs. 1.63 +/- 0.88. The 29.7 is obtained by averaging 6 numbers, but one is an obvious flyer data point. They are: -0.1 (counted as 0 here), 1.9, 2.1, 3.4, 3.8, 166.8 vs. 0.4, 1.7, 1.9, 2.5. The avg without the flyer is 2.24 +/- 1.49. I.e., they are statistically indistinguishable. The 10X 'bigger' number is no different from the 1.63 number. Pays to look at the error bars. Bockris made a bad conclusion. – Kirk Shanahan Nov 27 '12 at 20:16 What you are saying is that since the 115 keV is not at resonance, you won't get any of this K-shell interaction. Of course, if that were true then XPS and Auger spectroscopy wouldn't work either, because they irradiate the sample at 1-10 keV and look at transitions at 10 eV-1 keV, also not at the resonance point for the transitions they are observing.
No, irradiating at 115 keV will stimulate some level of interaction with a 20-40 keV transition if such is possible. That would require a term in the energy balance equation, which would then have to be included in their modeling of electronic structure. {This was supposed to be a reply to Ron Maimon's comment above but ended up as an 'answer'. There seems to be no 'delete' button for this or I would have done so and moved it to the right place.} - This theory is submitted to Cornell University arXiv.org for review. I have enjoyed reading it as it opened my mind to the complex issues that arise in the dense many-body system of the low energy nuclear reactive environment. LENR adatoms are of interest and I would suggest looking into plasmonic nanoparticle science. Piezo/thermal conversion science is of interest to understanding LENR phenomena. And as this theory proposes, those understanding the phenomena of magnetic confinement and inertial confinement fusion may add to an understanding of how such could take place in (nano) atomic or subatomic space. The author invites theorists to participate in the review discussion... (link) "Two attempted crosslists for this article were rejected upon a notice from arXiv moderators, who determined the submission to be inappropriate for the (General Relativity and Quantum Cosmology) and (High Energy Physics-Theory) subject classifications. So if you think that this article and the theoretical works in the references deserve to be known by theorists (up to now, no other such kind of predictions of superluminal effects with the correct order of magnitude came to my knowledge) please make it known." Do Dark Gravity Theories Predict Opera Superluminal Neutrinos and LENR Phenomena?
F. Henry-Couannier, Centre de Physique des Particules de Marseille, July 1, 2012 (PDF) Abstract: We investigate whether Dark Gravity theories (DG) with two conjugate metrics gµν and g̃µν = ηµρ ηνλ g^ρλ, where ηµρ is supposed to be a background non-dynamical and flat metric or an auxiliary field, actually predicted the occurrence of apparently superluminal propagations (from our metric side gµν point of view) such as the one recently reported by the Opera experiment. We find that indeed such theories could predict the order of magnitude of the superluminal velocity and even explain the apparent conflict with the SN1987 normal neutrino speeds, provided the neutrinos are able to oscillate between the two conjugate metrics while propagating in a dense medium. We then explain the theoretical motivations and explore all possible phenomenological consequences of the field discontinuities naturally expected in some Dark Gravity theories. Since the Opera result was not confirmed, these discontinuities do not actually allow a propagation of neutrinos oscillating between the two conjugate metrics. VII Field Discontinuities Explain LENR Phenomena All searchers in the field of LENR phenomena are probably aware that LENR is not only associated with: A. Large extra heat (not possibly of chemical origin) with very low levels of nuclear radiations (alpha, beta, gamma, neutrons) as compared to what would be expected from nuclear processes producing the same amount of energy. B. Transmutations and isotopic anomalies. But also very often with: C.
Observation of a new category of incredible objects whose behaviour seems almost impossible to understand without postulating new physics (for instance caterpillar traces left by micron-sized magnetic and radiating objects able to fly meters away from their source [12], to go through dense materials, to explode and release much energy in them [13], etc.), objects which were discovered by many scientists independently (Matsumoto, Dash et al., Shoulders, Lewis, Savvatimova, Urutskoev et al., Ivoilov and other groups) in many kinds of experiments involving macro or micro electric discharges, and independently named Evos, EVs, Ectons, Plasmoids, Ufos (for instance in Tokamaks or at the LHC), Leptonic Monopoles, Charged Clusters, Nucleon Clusters, Micro Ball Lightning, etc. In my opinion, any idea proposed to explain A or B but neglecting C is almost certainly wrong, because it is unlikely that two kinds of very different new ideas would be needed, one to explain C and another to explain A and B, while the detections of the two kinds of effects are clearly related. Indeed there is even an annual conference called the Russian Conference on Cold Nuclear Transmutation and Ball-Lightning (RCCNT&BL), and there have also often been presentations on Ball Lightning at the ICCF conferences. On the other hand, if you are able to provide an explanation for C which at the same time clarifies A and B, ... Bingo! Good references to start to gather a list of typical properties of these objects are [11] (see references therein), [12] and [13], and I personally consider, following Lewis [11], that these objects can exist with very variable sizes and lifetimes and are all of the same nature as the much bigger Ball Lightning sometimes observed in thunderstorms, so from now on I will generically call them micro ball lightning, or mbl.
Their common source is most probably always an electric discharge including micro-discharges near metal surfaces in simple electrolysis experiments or in experiments where these discharges can result from the metal surface being submitted to mechanical, thermal or EM pulse shocks. - Thank you for the information. – Chimera Nov 15 '13 at 17:52
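As a purely arithmetical footnote to the thread above: the error-bar analysis of the Bockris helium data quoted in the Nov 27 comment is easy to reproduce. This sketch (variable names are mine) just recomputes the quoted means and sample standard deviations, with and without the 166.8 flyer.

```python
from statistics import mean, stdev

# He concentrations (units of 1e9 atoms), as quoted in the comment:
surface = [0.0, 1.9, 2.1, 3.4, 3.8, 166.8]   # -0.1 counted as 0; 166.8 is the flyer
subsurface = [0.4, 1.7, 1.9, 2.5]

m_all, s_all = mean(surface), stdev(surface)           # ~29.7 +/- 67.2
m_nf, s_nf = mean(surface[:-1]), stdev(surface[:-1])   # ~2.24 +/- 1.49 (flyer dropped)
m_sub, s_sub = mean(subsurface), stdev(subsurface)     # ~1.63 +/- 0.88
# Without the single flyer, the surface and subsurface averages overlap
# within one sample standard deviation: statistically indistinguishable.
```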
## Goal

For many years, means have been in place to assess the IQ of many people. Although not everyone agrees with the notion of intelligence, it is ironically quantifiable. This evaluation is carried out in particular through various tests. Today, you are one of those people taking one of these tests, but the test being far too long and boring, you decide to create a program that can do it for you. For that you must know the principle of this test:
-You have a grid of letters; each letter indicates the number of movements to be made according to its place in the alphabet (A=1, B=2, C=3...). So if you are on "b", for example, you move 2 letters; if you are on "e" you move 5 letters...
-You have to move from left to right and top to bottom.
-When you are at the end of a line, you continue at the beginning of the next line.
-If you are at the end of the last line, then you go back to the beginning of the first line.
-You start from the letter in the top left corner of the grid. This departure counts as a movement.
-When you make a movement you go directly from one letter to another, without worrying about the letters that separate them.
-When you encounter a "#" you must rotate the grid 90° clockwise.
-When you encounter an "@" you must rotate the grid 90° counter-clockwise.
-Rotation does not count as a movement; this means that when you come across "#" or "@" the number of movements is the same before and after the rotation.
-After the rotation, you are at the same coordinates as the "transformer" ("#", "@") before the rotation. (The origin of the coordinate system does not rotate, so your coordinates are unchanged, but the symbol you were on will necessarily be changed.) The grid rotates, not you!
The purpose is to give the letter or symbol you are on at the end of ii movements.

Example: ii=7

Grid:
abcd
dcba
ba#d
dabc

Grid after the rotation triggered at the "#" (90° clockwise):
dbda
aacb
b#bc
cdad

So in the first movement we are on "a", in the second on "b", in the third on "d", in the fourth on "a", in the fifth on "b", in the sixth on "#", which makes the grid rotate, so we are on "b"; then in the last movement we are on "c".

Input
¤ The first line: the number ii of displacements to be made on the grid
¤ The second line: the number nb of lines and columns
¤ The next nb lines: a character string representing each line of the grid

Output
¤ The letter on which we are after ii displacements

Constraints
¤ The shape of the grid is always a square
¤ Number of lines and columns nb<10
¤ ii<10**12

Example
Input
7
4
abcd
dcba
ba#d
dabc

Output
c
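A sketch of a program for this test might look as follows (the helper names are my own, and it assumes the grid never rotates endlessly while standing on one cell). Because ii can be as large as 10**12, a direct simulation is too slow; since the grid's contents are determined by its net rotation alone, the state (row, column, net rotation mod 4) must eventually repeat, so the sketch detects that cycle and fast-forwards.

```python
def solve(ii, rows):
    n = len(rows)
    grid = [list(row) for row in rows]

    def rot_cw(g):   # rotate the grid 90 degrees clockwise
        return [[g[n - 1 - c][r] for c in range(n)] for r in range(n)]

    def rot_ccw(g):  # rotate the grid 90 degrees counter-clockwise
        return [[g[c][n - 1 - r] for c in range(n)] for r in range(n)]

    def settle(g, r, c, rot):
        # Rotations are free: keep rotating while standing on "#" or "@".
        while g[r][c] in "#@":
            if g[r][c] == "#":
                g, rot = rot_cw(g), (rot + 1) % 4
            else:
                g, rot = rot_ccw(g), (rot - 1) % 4
        return g, rot

    r = c = 0
    rot = 0
    grid, rot = settle(grid, r, c, rot)  # the departure counts as movement 1
    moves = 1
    seen = {}
    while moves < ii:
        state = (r, c, rot)
        if state in seen:
            cycle = moves - seen[state]
            moves += (ii - moves) // cycle * cycle  # fast-forward whole cycles
            seen.clear()            # fewer than `cycle` movements remain
            if moves == ii:
                break
        seen[state] = moves
        k = ord(grid[r][c]) - ord("a") + 1   # a=1, b=2, ...
        idx = (r * n + c + k) % (n * n)      # reading order, with wrap-around
        r, c = divmod(idx, n)
        grid, rot = settle(grid, r, c, rot)
        moves += 1
    return grid[r][c]
```

Reading CodinGame-style input would then be: `ii = int(input()); nb = int(input()); print(solve(ii, [input() for _ in range(nb)]))`.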
# Tutte’s Spring Embedding Theorem In 1963, William Tutte published a paper ambitiously entitled “How to Draw a Graph”. Let $$\Sigma$$ be any planar embedding of any simple planar graph $$G$$. • Nail the vertices of the outer face of $$\Sigma$$ to the vertices of an arbitrary strictly convex polygon $$P$$ in the plane, in cyclic order. • Build the edges of $$G$$ out of springs or rubber bands. • Let go! Tutte proved that if the input graph $$G$$ is sufficiently well-connected, then this physical system converges to a strictly convex planar embedding of $$G$$! Let me state the parameters of the theorem more precisely, in slightly more generality than Tutte did.1 A Tutte drawing of a planar graph $$G$$ is described by a position function $$p\colon V\to \mathbb{R}^2$$ mapping the vertices to points in the plane, subject to the two conditions: 1. The vertices of the outer face $$f_\infty$$ of some planar embedding of $$G$$ are mapped to the vertices of a strictly convex polygon in cyclic order. In particular, the boundary of $$f_\infty$$ must be a simple cycle. 2. Each vertex $$v$$ that is not in $$f_\infty$$ maps to a point in the interior of the convex hull of its neighbors; that is, we have $\sum_{u\mathord\to v} \lambda_{u\mathord\to v} (p_v - p_u) = 0$ for some positive real coefficients $$\lambda_{u\mathord\to v}$$ on the darts into $$v$$. (I will use subscript notation $$p_v$$ instead of function notation $$p(v)$$ throughout this chapter.) The edges of a Tutte drawing are line segments connecting their endpoints. Let me emphasize that the definition of a Tutte drawing does not require mapping edges to disjoint segments, or even mapping vertices to distinct points. Moreover, the dart coefficients are not required to be symmetric; it is possible that $$\lambda_{u\mathord\to v} \ne \lambda_{v\mathord\to u}$$.
A graph $$G$$ is 3-connected if we can delete any two vertices without disconnecting the graph, or equivalently (by Menger’s theorem) if every pair of vertices is connected by at least three vertex-disjoint paths. Finally, a planar embedding is strictly convex if the boundary of every face of the embedding is a convex polygon, and no two edges on any face boundary are collinear. Tutte’s spring-embedding theorem: Every Tutte drawing of a simple 3-connected planar graph $$G$$ is a strictly convex straight-line embedding. It is not hard to see that 3-connectivity is required. If $$G$$ has an articulation vertex $$v$$, that is, a vertex whose deletion disconnects the graph, then a Tutte drawing of $$G$$ can map an entire component of $$G\setminus v$$ to the point $$p_v$$. Similarly, if $$G$$ has two vertices $$u$$ and $$v$$ such that $$G\setminus \{u,v\}$$ is disconnected, a Tutte drawing of $$G$$ can map an entire component of $$G\setminus \{u,v\}$$ to the line segment $$p_up_v$$. In both cases, the Tutte drawing is not even an embedding, much less a strictly convex embedding. ## Outer Face is Outer Whitney (1932) proved that every simple 3-connected graph $$G$$ has a unique embedding on the sphere (up to homeomorphism), or equivalently, a unique planar rotation system. I will describe Whitney’s proof later in this note. Thus, in every planar embedding of $$G$$, the faces are bounded by the same set of cycles; we can reasonably call these cycles the faces of $$G$$. The definition of a Tutte drawing requires choosing one of the faces of $$G$$ to be the outer face $$f_\infty$$. We call the vertices of $$f_\infty$$ boundary vertices, and the remaining vertices of $$G$$ interior vertices. Similarly, we call the edges of $$f_\infty$$ boundary edges, and the remaining edges of $$G$$ interior edges.
This terminology is justified by the following observation: Outer face lemma: In every Tutte drawing of a simple 3-connected planar graph $$G$$, every interior vertex maps to a point in the interior of the outer face. In particular, no interior vertex maps to the same point as a boundary vertex. Proof: We say that an interior vertex $$w$$ directly reaches a boundary vertex $$z$$, or symmetrically that $$z$$ is directly reachable from $$w$$, if there is a path from $$w$$ to $$z$$ using only interior edges. 3-connectivity implies that every interior vertex of $$G$$ can directly reach at least three boundary vertices of $$G$$. We prove the lemma by applying Gaussian elimination to the system of linear equations defined by condition (2). Linear system (2) expresses the position $$p_v$$ of any interior vertex $$v$$ as a strict convex combination of the positions of its neighbors in $$G$$, that is, a weighted average where every neighbor of $$v$$ has positive weight. By pivoting on that row, we can remove the variables $$p_v$$ from the system. Such a pivot is equivalent to deleting vertex $$v$$ from the graph and adding new edges between the neighbors of $$v$$, with appropriate positive coefficients on their darts.2 (Of course the resulting graph may not be planar.) Pivoting out one interior vertex does not change which boundary vertices are directly reachable from any other interior vertex. Thus, if we eliminate all but one interior vertex $$w$$, the remaining constraint expresses $$w$$ as a strict convex combination of at least three boundary vertices; because those boundary vertices are distinct vertices of a strictly convex polygon, $$w$$ lies in the interior of that polygon. $$\qquad\square$$ The same elimination argument implies that every assignment of positive dart coefficients $$\lambda_{u\mathord\to v} > 0$$ defines a unique Tutte drawing; the linear system containing the equation $\sum_{u\mathord\to v} \lambda_{u\mathord\to v} (p_v - p_u) = 0$ for every interior vertex $$v$$ always has full rank.
## Laplacian linear systems and energy minimization Tutte’s original formulation required that every interior vertex lie at the center of mass of its neighbors; this is equivalent to requiring $$\lambda_{u\mathord\to v} = 1$$ for every dart $$u\mathord\to v$$.3 More generally, the physical interpretation in terms of springs corresponds to the special case where the dart coefficients are symmetric. Suppose each edge $$uv$$ is a (first-order linear) spring with spring constant $$\omega_{uv} = \lambda_{u\mathord\to v} = \lambda_{v\mathord\to u}$$. For any vertex placement $$p \in (\mathbb{R}^2)^V$$, the total potential energy in the network of springs is $\Phi(p) := \frac{1}{2} \sum_{u, v} \omega_{uv} \| p_u - p_v \|^2.$ If we fix the positions of the outer vertices, $$\Phi$$ becomes a strictly convex4 function of the interior vertex coordinates. If we let the interior vertex positions vary, the network of springs will come to rest at a configuration with locally minimal potential energy. The unique minimum of $$\Phi$$ can be computed by setting the gradient of $$\Phi$$ to the zero vector and solving for the interior coordinates; thus we recover the original linear constraints $\sum_v \omega_{uv} (p_u - p_v) = 0$ for every interior vertex $$u$$. The underlying matrix of this linear system is called a weighted Laplacian matrix of $$G$$. This matrix is positive definite5 and therefore non-singular, so a unique equilibrium configuration always exists. When the dart coefficients are not symmetric, this physical intuition goes out the window; the linear system of balance equations is no longer the gradient of a convex function. Nevertheless, as we’ve already argued, any choice of positive coefficients $$\lambda_{u\mathord\to v}$$ corresponds to a unique straight-line drawing of $$G$$. None of the actual proof of Tutte’s theorem relies on any special properties of the coefficients $$\lambda_{u\mathord\to v}$$ other than positivity.
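To make the linear system concrete, here is a small pure-Python sketch (my own illustration, not from the note) that computes a Tutte drawing with uniform weights $$\lambda = 1$$ on every dart: it pins the outer 4-cycle of the cube graph $$Q_3$$ to a square and solves the balance equations by Gaussian elimination.

```python
# Sketch: Tutte drawing with uniform dart weights (lambda = 1), solved by
# Gauss-Jordan elimination. The graph and outer-face coordinates are
# illustrative choices, not anything prescribed by the theorem.

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def tutte_drawing(edges, outer):
    """outer maps each boundary vertex to its fixed (x, y) position."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    interior = sorted(v for v in adj if v not in outer)
    idx = {v: i for i, v in enumerate(interior)}
    pos = {v: tuple(outer[v]) for v in outer}
    coords = {v: [0.0, 0.0] for v in interior}
    for c in (0, 1):  # solve x- and y-coordinates independently
        # Balance condition deg(v) * p_v - sum of neighbor positions = 0,
        # with the fixed boundary positions moved to the right-hand side.
        A = [[0.0] * len(interior) for _ in interior]
        b = [0.0] * len(interior)
        for v in interior:
            A[idx[v]][idx[v]] = float(len(adj[v]))
            for w in adj[v]:
                if w in outer:
                    b[idx[v]] += outer[w][c]
                else:
                    A[idx[v]][idx[w]] -= 1.0
        sol = solve(A, b)
        for v in interior:
            coords[v][c] = sol[idx[v]]
    for v in interior:
        pos[v] = tuple(coords[v])
    return pos

# Cube graph Q3: outer 4-cycle 0-1-2-3 pinned to a square, inner 4-cycle 4-5-6-7.
edges = [(0, 1), (1, 2), (2, 3), (3, 0),
         (4, 5), (5, 6), (6, 7), (7, 4),
         (0, 4), (1, 5), (2, 6), (3, 7)]
outer = {0: (0.0, 0.0), 1: (4.0, 0.0), 2: (4.0, 4.0), 3: (0.0, 4.0)}
pos = tutte_drawing(edges, outer)
# By symmetry, the inner cycle lands on a square with corners at 4/3 and 8/3.
```

The same template handles arbitrary positive symmetric weights $$\omega_{uv}$$: replace the `1.0` contributed by each edge with its weight, and the diagonal entry with the weighted degree.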
Given the graph $$G$$, the outer convex polygon, and the dart coefficients, we can compute the corresponding vertex positions in $$O(n^3)$$ time via Gaussian elimination. (There are faster algorithms to solve this linear system. In particular, a numerically approximate solution can be computed in $$O(n\log n)$$ time in theory, or in $$O(n\,\text{poly}\!\log n)$$ time in practice.) ## Slicing with Lines For the rest of this note, fix a simple 3-connected planar graph $$G$$ and a Tutte drawing $$p$$. At the risk of confusing the reader, I will generally not distinguish between features of the abstract graph $$G$$ (vertices, edges, faces, cycles, paths, and so on) and their images under the Tutte drawing (points, line segments, polygons, polygonal chains, and so on). For example, an edge of the Tutte drawing $$p$$ is the (possibly degenerate) line segment between the images of the endpoints of an edge of $$G$$, and a face of the Tutte drawing $$p$$ is the (not necessarily simple) polygon whose vertices are the images of the vertices of a face of $$G$$ in cyclic order. Both sides lemma: For any interior vertex $$v$$ and any line $$\ell$$ through $$p_v$$, either all neighbors of $$v$$ lie on $$\ell$$, or $$v$$ has neighbors on both sides of $$\ell$$. Proof: Suppose all of $$v$$’s neighbors lie in one closed halfplane bounded by $$\ell$$. Then the convex hull of $$v$$’s neighbors also lies in that halfplane, which implies that $$v$$ does not lie in the interior of that convex hull, contradicting the definition of a Tutte drawing. $$\qquad\square$$ Halfplane lemma: Let $$H$$ be any halfplane that contains at least one vertex of $$G$$. The subgraph of $$G$$ induced by all vertices in $$H$$ is connected. Proof: Without loss of generality, assume that $$H$$ is the halfplane above the $$x$$-axis. Let $$t$$ be any vertex with maximum $$y$$-coordinate; the outer face lemma implies that $$t$$ is a boundary vertex.
I claim that for every vertex $$u\in H$$, there is a path in $$G$$ from $$u$$ to $$t$$ along which the $$y$$-coordinates never decrease; the proof is by induction on the number of vertices strictly above $$u$$. There are two cases to consider: • If $$t$$ and $$u$$ have the same $$y$$-coordinate, the outer-face lemma implies that either $$t=u$$ or $$tu$$ is an edge of the outer face. In either case the claim is trivial. • Otherwise, $$u$$ must lie below $$t$$. Let $$U$$ be the set of all vertices reachable from $$u$$ along horizontal edges of $$G$$. Because $$G$$ is connected, some vertex $$v\in U$$ has a neighbor that is not in $$U$$. The both-sides lemma implies that $$v$$ has a neighbor $$w$$ that has larger $$y$$-coordinate than $$v$$. The induction hypothesis implies that there is a $$y$$-monotone path $$w \mathord\leadsto t$$ in $$G$$. Thus, $$u\mathord\leadsto v \mathord\to w \mathord\leadsto t$$ is a $$y$$-monotone path, which proves the claim. $$\qquad\square$$ ## No Degenerate Vertex Neighborhoods None of the previous lemmas actually requires the planar graph $$G$$ to be 3-connected. The main technical challenge in proving Tutte’s theorem is showing that if $$G$$ is 3-connected, then every Tutte drawing of $$G$$ is non-degenerate. The assumption of 3-connectivity is necessary6: if $$G$$ is 2-connected but not 3-connected, then some subgraphs of $$G$$ can degenerate to line segments in the Tutte drawing, and if $$G$$ is connected but not 2-connected, some subgraphs of $$G$$ will degenerate to single points. Utility lemma: The complete bipartite graph $$K_{3,3}$$ is not planar. Proof: $$K_{3,3}$$ has $$n=6$$ vertices and $$m=9$$ edges, so by Euler’s formula, any planar embedding would have exactly $$2+m-n = 5$$ faces. On the other hand, because $$K_{3,3}$$ is simple and bipartite, every face in any planar embedding would have degree at least $$4$$.
Thus, a planar embedding of $$K_{3,3}$$ would imply $$20 = 4f \le 2m = 18$$, which is obviously impossible.$$\qquad\square$$ Nondegeneracy lemma: No vertex of $$G$$ is collinear with all of its neighbors. Proof: By definition, no three boundary vertices are collinear, and thus no boundary vertex is collinear with all of its neighbors. For the sake of argument, suppose some vertex $$u$$ and all of its neighbors lie on a common line $$\ell$$, which without loss of generality is horizontal. Let $$V^+$$ and $$V^-$$ be the subsets of vertices above and below $$\ell$$, respectively. Let $$U$$ be the set of all vertices that can be reached from $$u$$ by paths through vertices whose neighbors all lie on $$\ell$$. The halfplane lemma implies that the induced subgraphs $$G[V^+]$$ and $$G[V^-]$$ are connected, and the induced subgraph $$G[U]$$ is connected by definition. Fix arbitrary vertices $$v^+\in V^+$$ and $$v^-\in V^-$$; both sets are nonempty, because $$u$$ lies in the interior of the outer polygon. Finally, let $$W$$ denote the set of all vertices that lie on line $$\ell$$ and are adjacent to vertices in $$U$$, but are not in $$U$$ themselves. Every vertex in $$W$$ has at least one neighbor not on $$\ell$$, so by the both-sides lemma, every vertex in $$W$$ has neighbors in both $$V^+$$ and $$V^-$$. Deleting the vertices in $$W$$ disconnects $$U$$ from the rest of the graph. Thus, because $$G$$ is 3-connected, $$W$$ contains at least three vertices $$w_1, w_2, w_3$$. Now suppose we contract the induced subgraphs $$G[V^+]$$, $$G[V^-]$$, and $$G[U]$$ to the vertices $$v^+$$, $$v^-$$, and $$u$$, respectively. The resulting minor of $$G$$ contains the complete bipartite graph $$\{v^+, v^-, u\}\times \{w_1, w_2, w_3\} = K_{3,3}$$. But this is impossible, because $$G$$ is planar and therefore every minor of $$G$$ is planar. $$\qquad\square$$ Both sides redux: Every interior vertex $$v$$ has neighbors on both sides of any line through $$p_v$$. ## No Degenerate Faces It remains to prove that the faces of the Tutte drawing are nondegenerate.
First we need a combinatorial lemma, similar to Fáry’s lemma that any simple planar map can be refined into a simple triangulation. Geelen’s Lemma: Let $$uv$$ be any edge of $$G$$, let $$f$$ and $$f’$$ be the faces incident to $$uv$$, and let $$S$$ and $$S’$$ be the vertices of these two faces other than $$u$$ and $$v$$. Let $$P$$ be any path that starts at a vertex in $$S$$, ends at a vertex of $$S’$$, and avoids both $$u$$ and $$v$$. Then every path from $$u$$ to $$v$$ in $$G$$ either consists of the edge $$uv$$ or contains a vertex of $$P$$. Proof: Fix any planar embedding of $$G$$ (not necessarily the Tutte drawing!) where $$uv$$ is an interior edge. The faces incident to $$uv$$ are disjoint disks on either side of $$uv$$. Let $$s$$ and $$t$$ be the endpoints of $$P$$. Let $$P’$$ be a path from $$s$$ to $$t$$ through the union of the faces incident to $$uv$$, crossing the edge $$uv$$ once. The closed curve $$C = P + P’$$ separates $$u$$ from $$v$$. Thus, by the Jordan curve theorem, every path $$Q$$ from $$u$$ to $$v$$ crosses $$C$$, which implies that either $$Q=uv$$ or $$Q$$ contains a vertex of $$P$$. $$\qquad\square$$ Split Faces Lemma: Let $$uv$$ be any interior edge of $$G$$, let $$f$$ and $$f’$$ be the faces incident to $$uv$$, and let $$S$$ and $$S’$$ be the vertices of these two faces other than $$u$$ and $$v$$. Finally, let $$\ell$$ be any line through $$p_u$$ and $$p_v$$. Then $$S$$ and $$S’$$ lie on opposite sides of $$\ell$$; in particular, no vertex in $$S\cup S’$$ lies on $$\ell$$. Proof: Without loss of generality, assume $$\ell$$ is horizontal. For the sake of argument, suppose $$S$$ contains a vertex $$s$$ and $$S’$$ contains a vertex $$t$$ that both lie on or below $$\ell$$. If $$s$$ lies on $$\ell$$, the nondegeneracy lemma implies that $$s$$ has a neighbor $$s’$$ strictly below $$\ell$$; otherwise, let $$s’ = s$$. Similarly, if $$t$$ lies on $$\ell$$, the nondegeneracy lemma implies that $$t$$ has a neighbor $$t’$$ strictly below $$\ell$$; otherwise, let $$t’ = t$$.
The halfplane lemma implies that there is a path $$P’$$ in $$G$$ from $$s’$$ to $$t’$$ that lies entirely below $$\ell$$. Let $$P$$ be the path from $$s$$ to $$t$$ consisting of the edge $$ss’$$ (if $$s\ne s’$$), the path $$P’$$, and the edge $$t’t$$ (if $$t\ne t’$$). The nondegeneracy lemma also implies that $$u$$ and $$v$$ have respective neighbors $$u’$$ and $$v’$$ strictly above $$\ell$$, and the halfplane lemma implies that there is a path $$Q’$$ from $$u’$$ to $$v’$$ that lies strictly above $$\ell$$. Let $$Q$$ be the path from $$u$$ to $$v$$ consisting of the edge $$uu’$$, the path $$Q’$$, and the edge $$v’v$$. The edge $$uv$$ and the path $$P$$ satisfy the conditions of Geelen’s lemma. The path $$Q$$ clearly avoids the edge $$uv$$, so $$Q$$ must contain a vertex of $$P$$. But $$P$$ and $$Q$$ lie on opposite sides of $$\ell$$. We have reached a contradiction, completing the proof.$$\qquad\square$$ Corollary: No edge of $$G$$ maps to a single point. Proof: Suppose $$p_u=p_v$$ for some edge $$uv$$. Let $$\ell$$ be any line through $$p_u=p_v$$ and some other vertex on a face incident to $$uv$$. We immediately have a contradiction to the previous lemma. $$\qquad\square$$ Convexity lemma: Every face of $$G$$ maps to a strictly convex polygon. Proof: Let $$f$$ be any face of $$G$$, let $$uv$$ be any edge of $$f$$, and let $$\ell$$ be the unique line containing $$p_u$$ and $$p_v$$. If $$uv$$ is a boundary edge, the outer face lemma implies that every vertex of $$f$$ except $$u$$ and $$v$$ lies strictly on one side of $$\ell$$. Similarly, if $$uv$$ is an interior edge, the split faces lemma implies that every vertex of $$f$$ except $$u$$ and $$v$$ lies strictly on one side of $$\ell$$. In particular, no other vertex of $$f$$ lies on the line $$\ell$$. It follows that $$uv$$ is an edge of the convex hull of $$f$$. We conclude that $$f$$ coincides with its convex hull. $$\qquad\square$$ Now we are finally ready to prove the main theorem.
Proof: Call a point generic if it does not lie in the image of the Tutte drawing. Consider any path from a generic point $$p$$ out to infinity that does not pass through any vertex in the drawing. The split faces lemma implies that whenever the moving point crosses an edge $$e$$, it leaves one face and enters another. Once the moving point is sufficiently far away, it lies in the outer face and in no other face. Thus, every generic point lies in exactly one face. For the sake of argument, suppose two distinct edges $$uv$$ and $$xy$$ intersect in the Tutte drawing. Then any generic point near the intersection $$uv\cap xy$$ must lie in two different faces, which we just showed is impossible. We conclude that the Tutte drawing is an embedding; in particular, every face is a simple polygon. We already proved that every face in this embedding is strictly convex. $$\qquad\square$$ ## Whitney’s Uniqueness Theorem Tutte’s theorem is as strong as possible in the following sense: Every planar graph with a strictly convex embedding is 3-connected. This observation follows from an earlier study of 3-connected planar graphs by Hassler Whitney. A planar map is polyhedral if (1) the boundary of every face is a simple cycle, and (2) the intersection of any two facial cycles is either empty, a single vertex, or a single edge. Every strictly convex planar map is polyhedral. Lemma: Every planar embedding of a 3-connected planar graph is polyhedral. Proof: Fix a planar embedding $$\Sigma$$ of some graph $$G$$. Suppose the boundary of some face $$f$$ is not a simple cycle. Then the boundary walk of $$f$$ has a repeated vertex $$v$$. So the radial map $$\Sigma^\diamond$$ contains a cycle of length $$2$$ through $$v$$ and $$f$$, which has at least one other vertex of $$\Sigma$$ on either side. It follows that $$G\setminus v$$ must be disconnected. Now suppose two faces $$f$$ and $$g$$ have two vertices $$u$$ and $$v$$ in common, but not the edge $$uv$$.
Then the radial map $$\Sigma^\diamond$$ contains a simple cycle with vertices $$f, u, g, v$$, which has at least one other vertex of $$G$$ on either side. It follows that $$G\setminus \{u,v\}$$ is disconnected. We conclude that if $$\Sigma$$ is not polyhedral, then $$G$$ is not 3-connected. $$\qquad\square$$ Lemma: If a graph $$G$$ has a polyhedral planar embedding, then $$G$$ is 3-connected. Proof: Let $$G$$ be any graph that is not 3-connected, and let $$\Sigma$$ be any planar embedding of $$G$$. Again, there are two cases to consider: • Suppose $$G\setminus v$$ is disconnected, for some vertex $$v$$. Then the same face of $$\Sigma$$ must be incident to $$v$$ twice. • Suppose $$G\setminus \{u,v\}$$ is disconnected, for some vertices $$u$$ and $$v$$. We can assume without loss of generality that $$u$$ and $$v$$ are not adjacent, since otherwise $$G\setminus u$$ is already disconnected. Then some pair of faces $$f$$ and $$g$$ must have both $$u$$ and $$v$$ on their boundaries, but not the edge $$uv$$. In both cases, we conclude that $$\Sigma$$ is not polyhedral. $$\qquad\square$$ Lemma: The dual $$\Sigma^*$$ of any polyhedral planar map $$\Sigma$$ is polyhedral. Proof: Suppose $$\Sigma$$ has a face $$f$$ whose boundary is not a simple cycle. Then the boundary walk of $$f$$ encounters some vertex $$v$$ more than once; in other words, $$v$$ and $$f$$ are incident more than once. Thus, in the dual map $$\Sigma^*$$, the dual vertex $$f^*$$ and the dual face $$v^*$$ are incident more than once, so the boundary of $$v^*$$ is not a simple cycle. On the other hand, suppose $$\Sigma$$ has two faces $$f$$ and $$g$$ that share two vertices $$v$$ and $$w$$, but there is no dart with endpoints $$v$$ and $$w$$ and shores $$f$$ and $$g$$. It follows that the dual faces $$v^*$$ and $$w^*$$ in $$\Sigma^*$$ share the dual vertices $$f^*$$ and $$g^*$$, but there is no dart with endpoints $$f^*$$ and $$g^*$$ and shores $$v^*$$ and $$w^*$$. 
We conclude that if $$\Sigma$$ is not polyhedral, then neither is $$\Sigma^*$$. $$\qquad\square$$ Lemma (Whitney): Every planar graph has at most one polyhedral embedding. Proof: Let $$\Sigma$$ be a polyhedral planar embedding of some graph $$G$$ (which must be planar and 3-connected by the previous lemmas), and let $$\Sigma'$$ be any embedding of $$G$$ that is not equivalent to $$\Sigma$$. Let $$\textsf{succ}$$ and $$\textsf{succ}'$$ be the successor permutations of $$\Sigma$$ and $$\Sigma'$$, respectively. Because $$\Sigma$$ and $$\Sigma'$$ are not equivalent, $$\textsf{succ}'$$ is not equal to either $$\textsf{succ}$$ or $$\textsf{succ}^{-1}$$. First, suppose there is a dart $$d$$ such that $$\textsf{succ}'(d)$$ is not equal to either $$\textsf{succ}(d)$$ or $$\textsf{succ}^{-1}(d)$$. In other words, suppose there is a vertex $$v = \textsf{head}(d)$$ where the cyclic orders of darts into $$v$$ in the two embeddings are different. The darts $$d$$ and $$\textsf{succ}'(d)$$ split the cycle of darts around $$v$$ into two non-empty intervals; color the darts in one interval red and the other interval blue. In particular, color $$\textsf{succ}(d)$$ red and color $$\textsf{succ}^{-1}(d)$$ blue. There must be another dart $$d'$$ that is red or blue, whose successor $$\textsf{succ}'(d')$$ in $$\Sigma'$$ is blue or red, respectively. Let $$C$$ be the simple cycle in $$G$$ that bounds face $$f = \textsf{left}'(d) = \textsf{right}'(\textsf{succ}'(d))$$ in $$\Sigma'$$. (If the boundary of $$f$$ is not a simple cycle, then $$\Sigma'$$ is not polyhedral and we are done.) Similarly, let $$C'$$ be the cycle in $$G$$ that bounds $$f' = \textsf{left}'(d') = \textsf{right}'(\textsf{succ}'(d'))$$ in $$\Sigma'$$. The images of $$C$$ and $$C'$$ in the polyhedral embedding $$\Sigma$$ cross each other at $$v$$, and therefore (by the Jordan curve theorem) share at least one other vertex $$w$$.
It follows that faces $$f$$ and $$f'$$ in $$\Sigma'$$ share vertices $$v$$ and $$w$$, but do not share the edge $$vw$$ (if that edge exists). We conclude that $$\Sigma'$$ is not polyhedral. On the other hand, suppose there are two darts $$d$$ and $$d'$$ such that $$\textsf{succ}'(d) = \textsf{succ}(d)$$ and $$\textsf{succ}'(d') = \textsf{succ}^{-1}(d')$$. In other words, suppose the dart order around $$v = \textsf{head}(d)$$ is the same in both embeddings, but the dart order around $$w = \textsf{head}(d')$$ is reversed from one embedding to the other. Without loss of generality, $$v$$ and $$w$$ are adjacent, and we can assume $$d = w\mathord\to v$$ and $$d' = \textsf{rev}(d) = v\mathord\to w$$. Let $$C$$ and $$C'$$ be the cycles in $$G$$ that bound faces $$f = \textsf{left}'(d) = \textsf{right}'(d')$$ and $$f' = \textsf{left}'(d') = \textsf{right}'(d)$$ in $$\Sigma'$$, respectively. After an arbitrarily small perturbation, the images of $$C$$ and $$C'$$ in the polyhedral embedding $$\Sigma$$ cross each other at the midpoint of the edge $$vw$$, and therefore share at least one other vertex $$x$$. It follows that the faces $$f$$ and $$f'$$ in $$\Sigma'$$ have disconnected intersection, and therefore $$\Sigma'$$ is not polyhedral. $$\qquad\square$$ Together, the previous lemmas now imply Whitney’s unique-embedding theorem. Theorem (Whitney): Every 3-connected planar graph has a unique planar embedding (up to homeomorphism), which is polyhedral. In light of Whitney’s observation, Tutte’s spring-embedding theorem immediately implies the following corollary: Convex Embedding Theorem: For every polyhedral planar embedding, there is an equivalent strictly convex embedding. ## Not Appearing • Weakly convex faces and internal 3-connectivity • Directed version allowing zero dart weights via “strong 3-connectivity” • Colin de Verdière matrices and spherical spectral embeddings • More spectral graph algorithms! 1.
The formulation and proof of Tutte’s theorem that I’m presenting here follows a lecture note by Dan Spielman (2018), which is based on papers by Michael Floater (1997); László Lovász (1999); Steven Gortler, Craig Gotsman, and Dylan Thurston (2006); and Jim Geelen (2012).↩︎ 2. This modification is called a star-mesh transformation; the special case of removing a vertex of degree $$3$$ is called a Y-$$\Delta$$ transformation.↩︎ 3. It is sometimes more convenient to formalize Tutte’s description as $$\lambda_{u\mathord\to v} = 1/\deg(v)$$, so that the weights of all darts into each vertex $$v$$ sum to $$1$$. This formalization is inconsistent with the physical spring analogy, but instead treats weights as transition probabilities of a (backward) random walk. Both formalizations lead to the same Tutte drawing.↩︎ 4. The Hessian of $$\Phi$$ is positive definite, meaning all of its eigenvalues are positive.↩︎ 5. The Laplacian matrix is just the Hessian of $$\Phi$$.↩︎ 6. In fact, we only need the weaker assumption that $$G$$ is internally 3-connected, meaning each interior vertex has three vertex-disjoint paths to the outer face.↩︎
# derivative of vector using prime symbol [duplicate] The problem is shown on the picture. I want to use the prime symbol for the derivative of a vector, but as you can see, it doesn't look good (the prime symbol and the vector arrow are 'glued' together). What would be the easiest way to fix it? • I would just add \, before the prime. This could be achieved in a macro with \def\primevec#1{\vec#1\,'} – Steven B. Segletes Jul 1 '15 at 12:28 • Apparently, this is angular momentum and the relevant derivative is with respect to time, not space, so a dot should be used instead of a prime, from a Physicist's view – user31729 Jul 1 '15 at 12:41 From a Physicist's view I suggest both the usage of the esvect (font) package and \dot instead of primes, since the relevant equation contains a time derivative. The esvect package provides distinctive vector arrows; the relevant command is \vv instead of \vec, however.

\documentclass{article}
\usepackage{amsmath}
\usepackage{esvect}

\newcommand{\dotvec}[1]{\dot{\vv{#1}}}

\begin{document}
$\vv{h} = \vv{r} \times \vv{v} = \vv{r} \times \dotvec{r}$
\end{document}
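If you would rather keep the standard \vec arrow than switch packages, the thin-space fix suggested in the comments can be wrapped in a macro; this is a sketch (the macro name and the amount of space are a matter of taste):

```latex
\documentclass{article}
\usepackage{amsmath}

% \primevec inserts a thin space between the vector arrow and the prime
\newcommand{\primevec}[1]{\vec{#1}\,'}

\begin{document}
$\vec{h} = \vec{r} \times \vec{v} = \vec{r} \times \primevec{r}$
\end{document}
```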
A Preliminary Research on the Regularity and Factors of Changes of Flood and Drought in Taihu Lake Basin since the Southern Song Dynasty [太湖流域南宋以来旱涝规律及其成因初探]

Chen Jiaqi

1. Nanjing Institute of Geography and Limnology, Academia Sinica

Online: 1989-01-20 Published: 1989-01-20

Abstract: The grade sequence (1121–1983 AD) of flood and drought was reconstructed by processing information from historic flood and drought records, on the basis of a systematic collection of historic climate records. It was divided into nine grades. The characteristics of the distribution in both time and space and the change regularity of flood and drought were studied through analysis of periodic change. In the meantime, the factors of flood and drought were preliminarily studied by analysing the relationships between flood/drought and cold/warm, as well as between flood/drought and solar activity. The results showed that the change of flood and drought in the Taihu Lake basin was a combined variation in which the main cycle was quasi-100 years and the other cycles were quasi-11 years. Since the Southern Song Dynasty, two wetter periods, each lasting about 200 years, occurred in the 14th and 15th centuries and in the 18th century. The former was wetter than the latter. Generally, during these wetter periods the winter was warmer and the solar activity was stronger.
# incrementalConceptDriftDetector Instantiate incremental concept drift detector ## Syntax `IncCDDetector = incrementalConceptDriftDetector()` `IncCDDetector = incrementalConceptDriftDetector(DetectionMethod)` `IncCDDetector = incrementalConceptDriftDetector(DetectionMethod,Name=Value)` ## Description `IncCDDetector = incrementalConceptDriftDetector()` returns an incremental concept drift detector that utilizes the default method, Hoeffding's Bounds Drift Detection Method with moving average test (HDDMA). `IncCDDetector = incrementalConceptDriftDetector(DetectionMethod)` returns an incremental concept drift detector that utilizes the method `DetectionMethod`. `IncCDDetector = incrementalConceptDriftDetector(DetectionMethod,Name=Value)` specifies additional options using one or more `Name=Value` arguments. ## Examples Initiate the concept drift detector using the Drift Detection Method (DDM). `incCDDetector = incrementalConceptDriftDetector("ddm");` Create a random stream such that for the first 1000 observations the failure rate is 0.1, and after 1000 observations the failure rate increases to 0.6.

```
rng(1234) % For reproducibility
numObservations = 3000;
switchPeriod = 1000;
for i = 1:numObservations
    if i <= switchPeriod
        failurerate = 0.1;
    else
        failurerate = 0.6;
    end
    X(i) = rand()<failurerate; % Value 1 represents failure
end
```

Preallocate variables for tracking drift status.

```
status = zeros(numObservations,1);
statusname = strings(numObservations,1);
```

Continuously feed the data to the drift detector and perform incremental drift detection. At each iteration: • Update statistics of the drift detector and monitor for drift using the new data point with `detectdrift`. (Note: `detectdrift` checks for drift after the warmup period.) • Track and record the drift status for visualization purposes. • When a drift is detected, reset the incremental concept drift detector by using `reset`.
```
for i = 1:numObservations
    incCDDetector = detectdrift(incCDDetector,X(i));
    statusname(i) = string(incCDDetector.DriftStatus);
    if incCDDetector.DriftDetected
        status(i) = 2;
        incCDDetector = reset(incCDDetector); % If drift detected, reset the detector
        sprintf("Drift detected at Observation #%d. Detector reset.",i)
    elseif incCDDetector.WarningDetected
        status(i) = 1;
    else
        status(i) = 0;
    end
end
```

```
ans = "Drift detected at Observation #1078. Detector reset."
```

After the change in the failure rate at observation number 1000, `detectdrift` detects the shift at observation number 1078. Plot the drift status versus the observation number. `gscatter(1:numObservations,status,statusname,'gyr','*',4,'on',"Observation number","Drift status")` Create a random stream such that the observations come from a normal distribution with standard deviation 0.75, but the mean changes over time. The first 1000 observations come from a distribution with mean 2, the next 1000 come from a distribution with mean 4, and the following 1000 come from a distribution with mean 7.

```
rng(1234) % For reproducibility
numObservations = 3000;
switchPeriod1 = 1000;
switchPeriod2 = 2000;
X = zeros([numObservations 1]);
% Generate the data
for i = 1:numObservations
    if i <= switchPeriod1
        X(i) = normrnd(2,0.75);
    elseif i <= switchPeriod2
        X(i) = normrnd(4,0.75);
    else
        X(i) = normrnd(7,0.75);
    end
end
```

In an incremental drift detection application, reading the data stream and updating the model happen together; one would not collect all the data first and then feed it to the model. For clarity, however, this example simulates the data separately. Specify the drift warmup period as 50 observations and the estimation period for the data input bounds as 100.

```
driftWarmupPeriod = 50;
estimationPeriod = 100;
```

Initiate the incremental concept drift detector. Utilize the Hoeffding's bounds method with the exponentially weighted moving average (EWMA) test.
Specify the input type and warmup period.

```
incCDDetector = incrementalConceptDriftDetector("hddmw",InputType="continuous", ...
    WarmupPeriod=driftWarmupPeriod,EstimationPeriod=estimationPeriod)
```

```
incCDDetector = 
  HoeffdingDriftDetectionMethod

    PreviousDriftStatus: 'Stable'
    DriftStatus: 'Stable'
    IsWarm: 0
    NumTrainingObservations: 0
    Alternative: 'greater'
    InputType: 'continuous'
    TestMethod: 'ewma'

  Properties, Methods
```

`incCDDetector` is a `HoeffdingDriftDetectionMethod` object. When you first create the object, properties such as `DriftStatus`, `IsWarm`, `CutMean`, and `NumTrainingObservations` are at their initial state. `detectdrift` updates them as you feed the data incrementally and monitor for drift. Preallocate the variables to record the drift status and the mean that the drift detector computes with each incoming data point.

```
status = zeros([numObservations 1]);
statusname = strings([numObservations 1]);
M = zeros([numObservations 1]);
```

Simulate the data stream one observation at a time and perform incremental drift detection. At each iteration: • Monitor for drift using the new data with `detectdrift`. • Track and record the drift status and the statistics for visualization purposes. • When a drift is detected, reset the incremental concept drift detector by using the function `reset`.

```
for i = 1:numObservations
    incCDDetector = detectdrift(incCDDetector,X(i));
    M(i) = incCDDetector.Mean;
    if incCDDetector.DriftDetected
        status(i) = 2;
        statusname(i) = string(incCDDetector.DriftStatus);
        incCDDetector = reset(incCDDetector); % If drift detected, reset the detector
        sprintf("Drift detected at observation #%d. Detector reset.",i)
    elseif incCDDetector.WarningDetected
        status(i) = 1;
        statusname(i) = string(incCDDetector.DriftStatus);
        sprintf("Warning detected at observation #%d.",i)
    else
        status(i) = 0;
        statusname(i) = string(incCDDetector.DriftStatus);
    end
end
```

```
ans = "Warning detected at observation #1024."
```

```
ans = "Warning detected at observation #1025."
```

```
ans = "Warning detected at observation #1026."
```

```
ans = "Warning detected at observation #1027."
```

```
ans = "Warning detected at observation #1028."
```

```
ans = "Warning detected at observation #1029."
```

```
ans = "Drift detected at observation #1030. Detector reset."
```

```
ans = "Warning detected at observation #2012."
```

```
ans = "Warning detected at observation #2013."
```

```
ans = "Warning detected at observation #2014."
```

```
ans = "Drift detected at observation #2015. Detector reset."
```

Plot the drift status versus the observation number. `gscatter(1:numObservations,status,statusname,'gyr','*',5,'on',"Number of observations","Drift status")` Plot the mean values versus the number of observations. `scatter(1:numObservations,M)` You can see the increase in the sample mean from the plot. The mean value becomes larger and the software eventually detects the drift in the data. Once a drift is detected, reset the incremental drift detector. This also resets the mean value. In the plot, the observations where the sample mean is zero correspond to the estimation periods. There is an estimation period at the beginning, and then again each time the drift detector is reset after a detected drift. Initiate the concept drift detector using the Drift Detection Method (DDM). `incCDDetector = incrementalConceptDriftDetector("ddm",Alternative="less",WarmupPeriod=100);` Create a random stream such that for the first 1000 observations the failure rate is 0.4, and after 1000 observations the failure rate decreases to 0.125.

```
rng(1234) % For reproducibility
numObservations = 3000;
switchPeriod = 1000;
for i = 1:numObservations
    if i <= switchPeriod
        failurerate = 0.4;
    else
        failurerate = 0.125;
    end
    X(i) = rand()<failurerate; % Value 1 represents failure
end
```

Preallocate variables for tracking drift status and the optimal mean and optimal standard deviation value.
```
optmean = zeros(numObservations,1);
optstddev = zeros(numObservations,1);
status = zeros(numObservations,1);
statusname = strings(numObservations,1);
```

Continuously feed the data to the drift detector and monitor for any potential change. Record the drift status for visualization purposes.

```
for i = 1:numObservations
    incCDDetector = detectdrift(incCDDetector,X(i));
    statusname(i) = string(incCDDetector.DriftStatus);
    optmean(i) = incCDDetector.OptimalMean;
    optstddev(i) = incCDDetector.OptimalStandardDeviation;
    if incCDDetector.DriftDetected
        status(i) = 2;
        incCDDetector = reset(incCDDetector); % If drift detected, reset the detector
        sprintf("Drift detected at Observation #%d. Detector reset.",i)
    elseif incCDDetector.WarningDetected
        status(i) = 1;
    else
        status(i) = 0;
    end
end
```

```
ans = "Drift detected at Observation #1107. Detector reset."
```

After the change in the failure rate at observation number 1000, `detectdrift` detects the shift at observation number 1107. Plot the change in the optimal mean and optimal standard deviation.

```
tiledlayout(2,1);
ax1 = nexttile;
plot(ax1,1:numObservations,optmean)
ax2 = nexttile;
plot(ax2,1:numObservations,optstddev)
```

Plot the drift status versus the observation number.

```
figure();
gscatter(1:numObservations,status,statusname,'gyr','*',4,'on',"Observation number","Drift status")
```

`detectdrift` concludes on a warning status for multiple observations before it decides on a drift. ## Input Arguments Incremental drift detection method, specified as one of the following.

| Detection Method | Definition |
| --- | --- |
| `"ddm"` | Drift Detection Method (DDM) |
| `"hddma"` | Hoeffding's Bounds Drift Detection Method with moving average test (HDDMA) |
| `"hddmw"` | Hoeffding's Bounds Drift Detection Method with exponentially weighted moving average (EWMA) test (HDDMW) |

### Name-Value Arguments Specify optional pairs of arguments as `Name1=Value1,...,NameN=ValueN`, where `Name` is the argument name and `Value` is the corresponding value.
Name-value arguments must appear after other arguments, but the order of the pairs does not matter. Example: `Alternative="less",InputType="continuous",InputBounds=[-1,1],ForgettingFactor=0.075` specifies the alternative hypothesis as less (that is, left-sided), the input data type as continuous, the lower and upper bounds of the input data as [-1,1], and the value of the forgetting factor for the HDDMW method as 0.075. General Options. Type of alternative hypothesis for determining drift status, specified as one of `"unequal"`, `"greater"`, or `"less"`. Given two test statistics $F_1(x)$ and $F_2(x)$, • `"greater"` tests for a drift in the positive direction, that is, $F_1(x) > F_2(x)$. In this case, the null hypothesis is $F_1(x) \le F_2(x)$. • `"less"` tests for a drift in the negative direction, that is, $F_1(x) < F_2(x)$. In this case, the null hypothesis is $F_1(x) \ge F_2(x)$. • `"unequal"` tests for a drift in either direction, that is, $F_1(x) \ne F_2(x)$. In this case, the null hypothesis is $F_1(x) = F_2(x)$. `"unequal"` is for the `HDDMA` and `HDDMW` methods only. For each type of test, `detectdrift` updates the statistics and checks whether it can reject the null hypothesis in favor of the alternative at the significance level of `WarningThreshold` or `DriftThreshold`. If it rejects the null hypothesis at the significance level of `WarningThreshold`, then it updates the `DriftStatus` to `'Warning'`. If it rejects the null hypothesis at the significance level of `DriftThreshold`, then it updates the `DriftStatus` to `'Drift'`. Example: `Alternative="less"` Type of input to the drift detector, specified as either `"binary"` or `"continuous"`.
Example: `InputType="continuous"` Number of observations used for the drift detector to warm up, specified as a nonnegative integer. Until the end of the warmup period, `detectdrift` trains the drift detector using the incoming data and updates the internal statistics, but does not check the drift status. After the warmup period is complete, that is, once the drift detector is warm, the software starts checking for any changes in the drift status. Example: `WarmupPeriod=50` Data Types: `double` | `single` Options for DDM. Number of standard deviations for the drift limit, specified as a nonnegative scalar value. This is the number of standard deviations the overall test statistic value can be away from the optimal test statistic value before the drift detector sets the drift status to drift. The default value of 3 corresponds to a 99.7% confidence level [1]. The `DriftThreshold` value must be strictly greater than the `WarningThreshold` value. Example: `DriftThreshold=2.5` Data Types: `double` | `single` Number of standard deviations for the warning limit, specified as a nonnegative scalar value. This is the number of standard deviations the overall test statistic value can be away from the optimal test statistic value before the drift detector sets the drift status to warning. The default value of 2 corresponds to a 95% confidence level [1]. The `WarningThreshold` value must be strictly smaller than the `DriftThreshold` value. Example: `WarningThreshold=1.75` Data Types: `double` | `single` Options for HDDMA and HDDMW. Threshold to determine if drift exists, specified as a nonnegative scalar value from 0 to 1. It is the significance level the drift detector uses for calculating the allowed error between a random variable and its expected value in Hoeffding's inequality and McDiarmid's inequality before it sets the drift status to drift [2]. The `DriftThreshold` value must be strictly smaller than the `WarningThreshold` value.
Example: `DriftThreshold=0.003` Data Types: `double` | `single` Number of observations used to estimate the input bounds for continuous data, specified as a nonnegative integer. That is, when `InputType` is `"continuous"` and you do not specify the `InputBounds` value, the software uses `EstimationPeriod` number of observations to estimate the input bounds. After the estimation period, the software starts the warmup period. If you specify the `InputBounds` value, or if `InputType` is `"binary"`, then the software ignores `EstimationPeriod`. The default value is 100 when the software needs to estimate the input bounds; otherwise, the default value is 0. Example: `EstimationPeriod=150` Data Types: `double` | `single` Lower and upper bounds of continuous input data, specified as a numeric vector of size 2. If `InputType` is `"continuous"` and you do not specify the `InputBounds` value, then `detectdrift` estimates the bounds from the data during the estimation period. Specify the number of observations used to estimate the input bounds by using `EstimationPeriod`. If `InputType` is `"binary"`, then the drift detector sets the `InputBounds` value to [0,1] and the software ignores the `InputBounds` name-value argument. HDDM uses Hoeffding's inequality and McDiarmid's inequality for drift detection, and these inequalities assume bounded inputs [2]. Example: `InputBounds=[-1 1]` Data Types: `double` | `single` Note: This option is only for the exponentially weighted moving average (EWMA) method (corresponding to a `DetectionMethod` value of `"hddmw"`). Forgetting factor in the HDDMW method, specified as a scalar value from 0 to 1. The forgetting factor is the $\lambda$ in the EWMA statistic $\hat{X}_t = \lambda X_t + (1-\lambda)\hat{X}_{t-1}$ [2]. The forgetting factor determines how much the current estimate of the mean is influenced by past observations.
A higher `ForgettingFactor` value assigns more weight to the current observations and less weight to past observations. Example: `ForgettingFactor=0.075` Data Types: `double` | `single` Threshold to determine warning versus drift, specified as a nonnegative scalar value from 0 to 1. It is the significance level the drift detector uses for calculating the allowed error between a random variable and its expected value in Hoeffding's inequality and McDiarmid's inequality before it sets the drift status to warning [2]. The `WarningThreshold` value must be strictly greater than the `DriftThreshold` value. Example: `WarningThreshold=0.007` Data Types: `double` | `single` ## Output Arguments Incremental concept drift detector, returned as a `DriftDetectionMethod` or `HoeffdingDriftDetectionMethod` object. For more information on these objects and their properties, see the corresponding reference pages. ## References [1] Gama, João, Pedro Medas, Gladys Castillo, and Pedro P. Rodrigues. "Learning with Drift Detection." In Brazilian Symposium on Artificial Intelligence, pp. 286–295. Berlin, Heidelberg: Springer, September 2004. [2] Frías-Blanco, Isvani, José del Campo-Ávila, Gonzalo Ramos-Jiménez, Rafael Morales-Bueno, Agustín Ortiz-Díaz, and Yailé Caballero-Mota. "Online and Non-Parametric Drift Detection Methods Based on Hoeffding's Bounds." IEEE Transactions on Knowledge and Data Engineering, Vol. 27, No. 3, pp. 810–823, 2014. ## Version History Introduced in R2022a
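The decision rules behind the two method families described above can be summarized in a short sketch. This is a simplified illustration under invented numbers, not the toolbox implementation; `p`, `pmin`, the observation counts, and `lambda` here are all hypothetical values chosen for the example:

```matlab
% Illustrative sketch only -- not the toolbox implementation.

% DDM [1]: track the error rate p and its standard deviation
% s = sqrt(p*(1-p)/n), and remember the minimum p+s seen so far (pmin, smin).
p    = 0.15;  n = 200;  s    = sqrt(p*(1-p)/n);     % hypothetical current statistics
pmin = 0.10;            smin = sqrt(pmin*(1-pmin)/120);
warningThreshold = 2;   % default: 2 standard deviations (95% confidence)
driftThreshold   = 3;   % default: 3 standard deviations (99.7% confidence)
if p + s > pmin + driftThreshold*smin
    status = "Drift";
elseif p + s > pmin + warningThreshold*smin
    status = "Warning";
else
    status = "Stable";
end

% HDDMW [2]: the EWMA statistic with forgetting factor lambda.
lambda = 0.05;          % hypothetical forgetting factor
Xhat   = 0;             % EWMA estimate
for Xt = [0 1 0 0 1]    % a few hypothetical binary observations
    Xhat = lambda*Xt + (1-lambda)*Xhat;   % Xhat_t = lambda*X_t + (1-lambda)*Xhat_{t-1}
end
```

In the DDM branch, a drift threshold of 3 and a warning threshold of 2 mirror the defaults documented above; after a drift is declared, the detector is reset, as in the examples earlier on this page.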
## December 9, 2005 ### Exotic Instanton Effects I’ve been reading the recent Beasley-Witten paper on instanton effects in string theory. As you know, instantons can have dramatic effects on the vacuum structure of supersymmetric gauge theories. They can induce a superpotential that lifts degeneracies that are present to all orders in perturbation theory. More subtly, as found by Seiberg, in the case of $SU(N_c)$ SQCD, with $N_f=N_c$ flavours, instanton effects can change the topology of the vacuum manifold. The singular affine variety, (1) $\det(M)- B\tilde B = 0$ (where $M$ is an $N_c\times N_c$ complex matrix) is deformed to (2) $\det(M)- B\tilde B = \Lambda^{2 N_c}$ In their previous paper, Beasley and Witten argued that this effect can be understood as the generation of an exotic sort of “superpotential” of the form (3) $W = \omega_{\overline{\imath}\overline{\jmath}}(\Phi,\overline{\Phi}) D_{\dot{\alpha}}\Phi^{\overline{\imath}}D^{\dot{\alpha}}\Phi^{\overline{\jmath}}$ where $\Phi^{\overline{\imath}}$ are anti-chiral superfields, parametrizing the moduli space, $M$, and $\omega_{\overline{\imath}\overline{\jmath}} = \frac{1}{2} \left(g_{\overline{\imath}k}\, \omega_{\overline{\jmath}}{}^{k} + g_{\overline{\jmath}k}\, \omega_{\overline{\imath}}{}^{k}\right)$ where $g_{\overline{\imath}j}$ is the Kähler metric on $M$ and $\omega_{\overline{\imath}}{}^{j}\in H^1(M, T_M)$ is the deformation of complex structure of $M$ that deforms (1) into (2)1. In this particular case, this fancy formalism is somewhat superfluous. The constraint (2) can simply be imposed by introducing an additional chiral superfield, $S$, and a garden-variety superpotential (4) $W= S(\det(M)- B\tilde B - \Lambda^{2 N_c})$ For $\Lambda\neq0$ (and, even for $\Lambda=0$, away from the origin in field space), $S$ and some particular combination of the other fields are massive, and can be integrated out.
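Concretely, the F-term equation of motion for $S$ in (4) enforces the deformed constraint directly:

```latex
\frac{\partial W}{\partial S} \;=\; \det(M) - B\tilde B - \Lambda^{2 N_c} \;=\; 0
\qquad\Longrightarrow\qquad
\det(M) - B\tilde B \;=\; \Lambda^{2 N_c}
```

which is exactly (2); $S$ then pairs up with one combination of the remaining fields to become massive.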
When $\Lambda=0$, all the fields are massless at the origin, and so integrating out $S$ leads to the singular Lagrangian, of the form (3), constructed in detail by Beasley and Witten for $SU(2)$. In String Theory, alas, you can’t willy-nilly integrate-in fields. So there are situations where, presumably, you need to represent the instanton-induced deformation of the classical moduli space by an exotic superpotential of the form (3), instead of the more transparent (4). Beasley and Witten lay out a couple of instances where they argue that’s the case, and show how you can calculate a superpotential of the form (3), induced by worldsheet instantons in heterotic string theory. What’s quite interesting is that some of their heterotic computations are closely related to the higher-genus worldsheet instanton computations in the topological A-model. These compute what are, by now, quite well-known corrections to the $N=2$ supergravity action that one obtains from a Type-IIA compactification on a Calabi-Yau. Beasley and Witten compute the analogous corrections to the $N=1$ theory arising from a heterotic compactification on the same Calabi-Yau (they’re F-terms involving higher powers of the supersymmetric gauge field strength squared, $W_\alpha W^\alpha$). 1 We’re identifying a finite deformation of the complex structure with an infinitesimal tangent vector (an element of $H^1(M, T_M)$) to the space of deformations, at the point corresponding to the complex structure of the classical moduli space, $M_0$ (which, typically, but not in the example above, is a singular point in the space of complex structures). Moreover, to write this “F-term”, we need to use the Kähler metric on $M_0$, which is singular. It’s not obvious that this makes sense. However, when you can integrate-in some fields and write the deformation as an ordinary superpotential (as above), you can check this procedure reproduces the correct result. 
Posted by distler at December 9, 2005 11:43 PM ### Higher dimensions of fiction? Hi there. Recently, I’ve been searching for a credible physicist or scholar to take a look at my novel “The Pink Room,” which was published last month. Briefly, the world’s leading physicist attempts to use the science of string theory to bring his daughter back from the dead. Government agents and a bestselling novelist race to find out if he was successful. Posted by: Mark LaFlamme on January 27, 2006 7:53 PM | Permalink ### Re: Higher dimensions of fiction? Ultimately, your novel must be judged on its literary merits, not on whether — by luck, or by dint of great effort — you happened to get the science “right.” While my aunt was a writer of some repute, I don’t think I would be the best judge of the literary merits of your novel. But, since you’ve left comments on several Physics blogs about your novel, I’m sure that someone, somewhere, will provide you with the feedback you desire. [A less charitable explanation is that you are spamming every Physics (or other) blog that you can find, and couldn’t care less about getting feedback about your novel.] Posted by: Jacques Distler on January 27, 2006 9:21 PM | Permalink ### Re: Exotic Instanton Effects Whoa, whoa! Perish that thought. I don’t spam. I’ve been getting great feedback from the people who enjoy a good thriller. The literary merits of the novel appear to be sound. Now, I’m just trying to satisfy my curiosity about the scientific accuracy of the story. You’re right, of course. It doesn’t need to be textbook solid. As I’ve said, it’s a curiosity. And I thought offering up a free copy might be the way to go about it. Posted by: Mark LaFlamme on January 29, 2006 12:41 AM | Permalink
Question 11 # In the following question, select the odd word pair from the given alternatives. Solution A Car works with the help of Petrol, a Bulb works with the help of Electricity, and a Pen works with the help of Ink, but a Pencil can work without the help of Paper. $$\therefore\$$Pencil - Paper is the odd word pair among the given word pairs. Hence, the correct answer is Option D
# News this Week Science  23 Nov 2007: Vol. 318, Issue 5854, pp. 1224 1. DEVELOPMENTAL BIOLOGY # Field Leaps Forward With New Stem Cell Advances 1. Gretchen Vogel, 2. Constance Holden For a year and a half, stem cell researchers around the world have been racing toward a common goal: to reprogram human skin cells directly into cells that look and act like embryonic stem (ES) cells. Such a recipe would not need human embryos or oocytes to generate patient-specific stem cells—and therefore could bypass the ethical and political debates that have surrounded the field for the past decade. The pace was set in June 2006, when Shinya Yamanaka of Kyoto University in Japan reported that his group had managed the feat in mice by inserting four genes into cells taken from their tails (Science, 7 July 2006, p. 27). Those genes are normally switched off after embryonic cells differentiate into the various cell types. The pace picked up in June this year, when Yamanaka and another group showed that the cells were truly pluripotent (Science, 8 June, p. 1404). Now the race has ended in a tie, with an extra twist: Two groups report this week that they have reprogrammed human skin cells into so-called induced pluripotent cells (iPCs), but each uses a slightly different combination of genes. In a paper published online in Cell on 20 November, Yamanaka and his colleagues report that their mouse technique works with human cells as well. And in a paper published at the same time online in Science (www.sciencemag.org/cgi/content/abstract/1151526), James Thomson of the University of Wisconsin, Madison, and his colleagues report success in reprogramming human cells, again by inserting just four genes, but two of the genes are different from those Yamanaka uses. 
Among stem cell scientists, the human cell reprogramming feats have somewhat overshadowed another major advance reported online in Nature last week: A team at the Oregon National Primate Research Center has officially become the first to obtain embryonic stem cells from cloned primate embryos, an advance that brings therapeutic cloning closer to reality for humans. Taken together, these feats suggest that scientists are getting very close to uncovering the secret of just what occurs in an oocyte to turn back the clock in the DNA of a differentiated cell. The two human reprogramming papers could help solve some of the long-standing political and ethical fights about stem cells and cloning. The technique produces pluripotent cells, cells with the potential to become any cell type in the body, without involving either embryos or oocytes—two sticking points that have made embryonic stem cell research so controversial. Ian Wilmut of the University of Edinburgh, U.K., says that once he learned of Yamanaka's mouse work, his lab set aside its plans to work on human nuclear transfer experiments, otherwise known as research cloning. The new work now confirms that decision, he says. Direct reprogramming to iPCs “is so much more practical” than nuclear transfer, he says. In the new work, Yamanaka and his colleagues used a retrovirus to ferry into adult cells the same four genes they had previously employed to reprogram mouse cells: OCT3/4, SOX2, KLF4, and c-MYC. They reprogrammed cells taken from the facial skin of a 36-year-old woman and from the connective tissue of a 69-year-old man. Roughly one iPC cell line was produced for every 5000 cells they treated with the technique, an efficiency that enabled them to produce several cell lines from each experiment. Thomson says he and his colleagues already had their own list of 14 candidate reprogramming genes when Yamanaka's mouse results were published. 
They, like Yamanaka's group, gradually whittled down the list through a systematic process of elimination. Thomson's experiments led to four factors as well: OCT3/4 and SOX2, as Yamanaka used, and two different genes, NANOG and LIN28. NANOG is another gene associated with ES cells, and LIN28 is a factor that seems to be involved in processing messenger RNA. Instead of cells from adults, Thomson and his team reprogrammed cells from fetal skin and from the foreskin of a newborn boy. But Thomson says they are working on experiments with older cells, which so far look promising. Their experiments reprogrammed about one in 10,000 cells. The efficiency is less than that of Yamanaka's technique, Thomson says, but is still enough to create several cell lines from a single experiment. Comparing the two techniques might help scientists learn how the inserted genes work to turn back the developmental clock, Yamanaka says. He says his team tried using NANOG but saw no effect, and LIN28 was not in their initial screen. Thomson says his team tried Yamanaka's four genes without success, but that they may have tried the wrong relative doses. The fact that Thomson's suite doesn't include a known cancer-causing gene is a bonus, says Wilmut. (The c-MYC Yamanaka used is an oncogene.) But both techniques still result in induced cells that carry multiple copies of the retroviruses used to insert the genes. Those could easily lead to mutations that might cause tumors in tissues grown from the cells. The crucial next step, everyone agrees, is to find a way to reprogram cells by switching on the genes rather than inserting new copies. “It's almost inconceivable at the pace this science is moving that we won't find a way to do this without oncogenes or retroviruses,” says stem cell researcher Douglas Melton of Harvard University.
“It is not hard to imagine a time when you could add small molecules that would tickle the same networks as these genes” and produce reprogrammed cells without genetic alterations, he says. Although the cells “act just like human ES cells,” Thomson says, there are some differences between the cell types. Yamanaka's group reports that overall human iPC gene expression is very similar, but not identical, to human ES cell gene expression. “It will be probably a few years before we really understand these cells as well as we understand ES cells,” Thomson says. But “for drug screening, they're already terribly useful. IVF embryos are very skewed ethnically,” he says. But with the new iPC technique, “you can isolate cell lines that represent the genetic diversity of the United States. And I think it will be very straightforward to do.” The primate cloning success, although partially eclipsed by the human work, “is really a breakthrough,” says primate stem cell researcher Jose Cibelli of Michigan State University in East Lansing. Although scientists have cloned a host of other animals, primates have proved to be particularly resistant—as demonstrated by the failure of Korean scientist Woo Suk Hwang, whose work with human embryos was shown to be fraudulent 2 years ago. A group headed by Shoukhrat Mitalipov was able to generate two embryonic stem cell lines after injecting skin cells from a 9-year-old male rhesus macaque into 304 eggs collected from 14 female macaques. The cells showed all the requisite pluripotent stem cell markers; in lab dishes, they generated heart and brain neurons, and in live mice they formed teratomas—tumor tissues from all three germ layers. Scientists such as Robin Lovell-Badge of the U.K. Medical Research Council have lauded the feat while pointing out that the low success rate—0.7%—means more primate work is needed before women should be asked to donate eggs for such research. 
Mitalipov originally reported the achievement last June in Cairns, Australia, at the meeting of the International Society for Stem Cell Research. At the time, he met with some skepticism. Before publishing the paper, Nature took the unprecedented step of asking a group headed by David Cram of Monash University in Clayton, Australia, to be sure the cell lines had the same genotype as the donor of the skin cells. Their report is published in the same issue of Nature, which issued a statement declaring this a prudent step given the importance of the results and “recent history in the cloning field.” Scientists have learned that the big peril in cloning, as the Hwang team ultimately discovered, is that what you may really come up with are parthenotes—that is, early embryos arising solely from the activated oocyte. Parthenotes—less useful than clones because they have only the genes of the egg donors—can result when the spindle containing the nuclear DNA is not completely removed before a foreign nucleus is introduced. The usual technique for locating the spindle is with a dye or ultraviolet light, which the researchers suspected could damage fragile primate oocytes. So instead, the Oregon group used a new noninvasive imaging system called Oosight to locate the spindle, then used a probe to suck it out and replace it with the skin cell. Enucleation of the oocyte is 100% efficient with this technique, said Mitalipov.
The scientists also changed the culture medium, eliminating calcium and magnesium, which they believe cause premature activation of the oocyte and failure of the donor nucleus to become properly “remodeled.” Although the cloning “efficiency is still low,” Mitalipov said at a press conference, “I believe the technology we developed can be directly applicable to humans.” Robert Lanza of Advanced Cell Technology in Worcester, Massachusetts, calls the Oregon paper a “turnaround,” saying that it marks a “recovery for the field,” because the Hwang paper was retracted in January 2006. The next step, says Mitalipov, will be to test cloning for treatment of a disease, something that hitherto has been tried only in the mouse. A likely target is diabetes, says Mitalipov, who plans to inject cloned, genetically modified ES cells into a monkey model of the disease. “I cannot emphasize enough how useful these [cloned primate ES] cells will be” for studying other diseases that also affect humans, says Cibelli. Another application, he says, will be to compare the cloned primate ES cells with cells reprogrammed by the methods Yamanaka and Thomson used. “If their method is as good as the oocyte” in reprogramming somatic cells, says Cibelli, “we will be no longer in need of oocytes, and the whole field is going to completely change. People working on ethics will have to find something new to worry about.” 2. CLINICAL RESEARCH # ALS Trial Raises Questions About Promising Drug 1. Jennifer Couzin An antibiotic long thought to hold promise for treating neurologic diseases failed dramatically in a recent clinical trial; this news has sparked a debate about why the drug flopped and whether the results should be interpreted broadly. In a study of 412 people with amyotrophic lateral sclerosis (ALS), published earlier this month, scientists were startled to find that patients on the drug, minocycline, declined more quickly than those on a placebo. 
Now clinics are urgently tracking down ALS patients whose physicians have prescribed minocycline off label. Many have been taking the drug—approved in the 1970s to treat certain infections—because animal studies have shown it can slow the course of brain diseases, and some small human trials have suggested that it's safe. That did not appear to be the case here. The disappointing outcome may affect new or ongoing trials of minocycline for other conditions, including multiple sclerosis (MS) and Huntington's disease. Researchers leading these trials generally reject the idea that the drug's showing in ALS, published online 1 November in The Lancet Neurology, has much bearing. “We don't really believe that the ALS data adds something, but when it's out there, you have to deal with it,” says neurologist Luanne Metz of the University of Calgary in Canada, who is beginning a study of minocycline in 200 people at risk for MS. She argues that MS is very different from ALS—more autoimmune than neurodegenerative—and unlikely to respond in the same way. Still, Metz's trial has added extra safety monitoring. In contrast, the authors of The Lancet Neurology paper are quite concerned, arguing that their trial “generates the need to reexamine … the justification for other trials of minocycline in patients with neurological disorders.” Over 9 months of treatment, the ALS patients on minocycline declined 25% faster, as measured by a functional rating scale, than did those taking a placebo. But there was no significant difference in death rates between the two groups, and the patients rated quality of life equally, suggesting that the decline was subtle. Exactly what went wrong remains a mystery. Merit Cudkowicz, a neurologist at Massachusetts General Hospital in Boston who is running a minocycline trial in 100 patients with Huntington's disease, believes the drug doses in the ALS study were too high, as much as double what she and Metz are using. 
But Paul Gordon, who led the ALS trial and works as a neurologist at Columbia University and at Hôpital de la Pitie-Salpêtrière in Paris, notes that patients on the higher doses did no worse than those on lower ones, nor did side effects appear to correlate with deterioration. “We're sort of left with scratching our heads,” he says. The results are especially confusing because published animal studies suggested that the drug can suppress neuroinflammation and inhibit cell death. But Gordon notes that mouse studies didn't really reflect the way ALS patients are treated. For example, the mice received minocycline before they began showing symptoms. Animal experiments offer only a rough guide to human testing, says neurologist Nigel Leigh of King's College London, who still hopes that minocycline will prove useful for ALS. Leigh says it's possible that high doses of the antibiotic were neurotoxic for patients, whereas lower doses might be neuroprotective. He'd been planning with colleagues to launch a minocycline study in 1000 ALS patients; now he's seeking approval for a revised proposal to identify an ideal minocycline dose in a smaller group of patients. “It would be hasty to stop every trial” testing the drug, agrees John Marler, an associate director for clinical trials at the U.S. National Institute of Neurological Disorders and Stroke in Bethesda, Maryland. Even very brief trials of minocycline, though, will consider whether to modify their protocols, says Marler, such as a 3-day acute stroke trial recently funded by the institute. Adds Marler: “We're not going to overlook any possible relief” for these patients. 3. ASTRONOMY # If You Build It, Will They Come? 1. Richard Stone CHIANG MAI, THAILAND—With an espresso stand, a gift shop, and a misty forest festooned with orchids and red rhododendrons, the 2550-meter summit of Inthanon Mountain lacks the grandeur of Fuji or the towering remoteness of Everest. 
But new construction atop Thailand's tallest peak is still making a statement: This Southeast Asian nation intends to become a serious player in global astronomy and a regional leader in the field. Earlier this year, Thailand began building an observatory complex on Inthanon, about 100 kilometers west of the temple city of Chiang Mai in northern Thailand. The centerpiece of the $40 million program will be a 2.4-meter optical telescope for studying binary stars and searching for extrasolar planets. It will match the size of the largest instrument in mainland Asia—a newly installed 2.4-meter scope at Yunnan Observatory in Kunming, China, that is about to undergo a year of testing—and serve astronomers from across Southeast Asia and beyond. “It's a bold vision,” says John Hearnshaw, an astronomer at the University of Canterbury in Christchurch, New Zealand, who assessed Thailand's ambitions on behalf of the International Astronomical Union. Not everyone is enamored with the project, which is expected to be completed in March 2009. Some scientists have questioned such a large investment in astronomy by a country struggling to build capacity in biology and other fields more tightly coupled to economic growth. Others assail the telescope itself. “People in their right mind would not invest a huge sum of money to build an optical telescope in a tropical area with high humidity,” says one Thai astronomer, who dismisses the project as “a white elephant.” Hearnshaw defends the decision. “Certainly the rainy season will curtail the observing season,” he says. “But that won't be a disaster.” He predicts that observing “should be reasonably good” for 9 months a year and that the telescope will be a valuable addition in tracking sudden events like gamma ray bursts and supernovae. “Many of the most exciting discoveries of the last 25 years in optical astronomy have come from small to medium telescopes on the ground,” Hearnshaw says. 
Thailand is home to only about a dozen research astronomers. Even so, the new telescope, to be dedicated to King Bhumibol Adulyadej, the world's longest-serving head of state, won't be the first major astronomical facility on the mountain. Earlier this year, three universities—Mahidol, Chulalongkorn, and Ubon Rajathanee—used equipment donated by Japan's Shinshu University to build a cosmic-ray detector at a unique spot in Earth's geomagnetic profile. The neutron monitor is named after Princess Maha Chakri Sirindhorn, an astronomy buff, and situated on the grounds of a Royal Thai Air Force radar installation adjacent to the 2.4-meter telescope site. It studies solar cycles and serves as a space-weather sentinel. The geomagnetic equator passes through Thailand, and Earth's magnetic field bulges toward Southeast Asia, producing the strongest horizontal magnetic field on the planet. That dynamo allows only the toughest cosmic particles through. “Our station is registering the most energetic cosmic rays of any neutron monitor in the world,” says astrophysicist David Ruffolo of Mahidol University in Bangkok. Such sensitivity, his group has shown, could provide a 4-hour warning of an oncoming geomagnetic storm—more than twice the lead time of space-weather satellites. Four years ago, the Thai government requested proposals to commemorate King Bhumibol's 80th birthday this month. Astronomer Boonrucksar Soonthornthum, who has labored to raise education standards and the country's profile in the international astronomical community, noted that the monarch liked to stargaze in his youth and that a star map graces a ceiling in his palace. Earlier this year, the Thai cabinet approved the National Astronomical Research Institute of Thailand (NARIT) as the sole science megaproject to mark the birthday. NARIT commissioned a $7.8 million telescope from EOS in Canberra, Australia.
Construction of NARIT's training center—dormitories for scientists, conference facilities, and an educational display—is in full swing. Although first light will occur long after the birthday bash, Boonrucksar's outfit has found another way to celebrate. Next week, NARIT will host an International Olympiad here in astronomy and astrophysics. Some two dozen countries have agreed to send teams of five high school students. Boonrucksar sees the Olympiad as part of a larger effort to “teach young people how to think critically and analytically.” That's also the goal behind funding four graduate fellowships a year in astronomy, for study abroad, as part of the telescope project. Thailand hopes to lure foreign talent, too. Expanding the ranks of astronomers “is certainly necessary if the investment in the telescope is to be justified,” Hearnshaw says. Boonrucksar deserves credit for succeeding “against all odds” in putting Thailand on the astronomical map, says cosmologist Burin Gumjudpai of Naresuan University in Phitsanulok, Thailand. “It gives us hope,” he says, “that astronomy can be a real career here.” 4. AVIAN INFLUENZA # More Bumps on the Road to Global Sharing of H5N1 Samples 1. Martin Enserink, 2. Dennis Normile A battle between Indonesia and the World Health Organization (WHO) is escalating. Indonesia's health minister, Siti Fadilah Supari, has claimed that WHO is refusing to return dozens of H5N1 influenza viruses isolated from Indonesian samples. WHO calls the claims baseless and says Indonesia can get back the viruses once it shows it can handle them safely. The clash promised to complicate what was already expected to be a difficult international meeting about pandemic preparedness, slated to start in Geneva, Switzerland, this week, after Science went to press. At issue are 56 specimens from human H5N1 victims that Indonesia has shared with WHO as part of the Global Influenza Surveillance Network over the past few years. 
As usual, the samples have gone to WHO's four Collaborating Centres in London, Atlanta, Tokyo, and Melbourne, where researchers isolate and study the virus. Their analysis helps WHO monitor virus evolution, drug resistance, and pandemic risk, as well as aiding vaccine development. Indonesia, by far the most heavily afflicted country with 113 human cases and 91 deaths, has protested the scheme; the country worries that even if it collaborates, it will not have access to vaccines if a pandemic occurs. So far in 2007, Indonesia has shared only two samples, says David Heymann, who heads WHO's pandemic influenza efforts. Heymann says that at Supari's request, on 31 October WHO sent Indonesia a list of the 56 specimens the country had submitted to the network prior to 2007. He says the four labs had isolated H5N1 virus from 40 of them. But according to a report in the Jakarta Post, Supari said at a press conference in Jakarta on 8 November that WHO refuses to send the samples back. “We keep asking [WHO] to return the samples because they belong to us. This is for the sake of our country's sovereignty,” the newspaper quoted Supari as saying. Health ministry officials could not be reached to confirm the report. Heymann claims Supari is trying to cast WHO in a bad light. “She has always said she doesn't trust WHO, and she's finding new reasons not to trust us,” he says. Masato Tashiro, director of the WHO Collaborating Centre for Reference and Research on Influenza in Tokyo, says he believes Indonesia did not retain part of the samples, as countries usually do, because it previously did not have the biosafety level 3 (BSL-3) laboratories recommended for handling dangerous pathogens. Indonesia had cooperated with WHO's flu-sharing network until early this year when it learned that an Australian company had developed a vaccine using an Indonesian H5N1 strain. 
Fearing that such a vaccine made outside the country would be out of reach financially, Indonesia started developing its own research and development capabilities while withholding specimens and demanding capacity-building assistance from the international community. Tashiro says that when the institutions requesting viruses can certify that their planned BSL-3 labs are up and running, getting viruses returned from Japan—and probably other countries—would be routine. Although Indonesia is the only country that has stopped sending samples, it is reportedly trying to persuade others to follow suit. Widjaja Lukito, a physician at the University of Indonesia in Jakarta and a member of Indonesia's delegation to the Geneva meeting, says Indonesia would “clarify everything” in Geneva. Heymann and others say any interruption of the 55-year-old sharing system would create a huge risk for global health. 5. GLOBAL WARMING # How Urgent Is Climate Change? 1. Richard A. Kerr Having issued their fair and balanced consensus document, many climate scientists now cite oft-overlooked reasons for immediate and forceful action to curb global warming The latest reports from the Nobel Prize-winning Intergovernmental Panel on Climate Change (IPCC) were informative enough. Humans are messing with climate and will, sooner or later, get burned if they keep it up. But just how urgent is this global warming business? IPCC wasn't at all clear on that, at least not in its summary reports. In the absence of forthright guidance from the scientific community, news about melting ice and starving polar bears has stoked the public climate frenzy of the past couple of years. Climate researchers, on the other hand, prefer science to headlines when considering just how imminent the coming climate crunch might be. With a chance to digest the detailed IPCC products that are now available (http://www.ipcc.ch/), many scientists are more convinced than ever that immediate action is required.
The time to start “is right now,” says climate modeler Gerald Meehl of the National Center for Atmospheric Research in Boulder, Colorado. “We can't wait any longer.” What worries these researchers is the prospect that we've started a slow-moving but relentless avalanche of change. A warming may well arrive by mid-century that would not only do immediate grievous harm—such as increase drought in vulnerable areas—but also commit the world to delayed and even more severe damage such as many meters of sea-level rise. The system has built-in time lags. Ice sheets take centuries to melt after a warming. The atmosphere takes decades to be warmed by today's greenhouse gas emissions. And then there are the decades-long lags involved in working through the political system and changing the world energy economy. “If you want to be able to head off a few trillions of [dollars of climate] damages per year a few decades out,” says glaciologist Richard Alley of Pennsylvania State University in State College, “you need to start now.” The disturbing message on the timing of global warming's effects comes in the IPCC chapters and technical summaries quietly posted online months after each of three working groups released a much-publicized Summary for Policymakers (SPM). An overall synthesis of the working group reports was released Saturday at the 27th session of IPCC. Earlier this year, only the SPMs went through the wringer of word-by-word negotiations with governments, which squeezed out a crucial table and part of another (Science, 13 April, p. 188). That information—which was always in the full reports—along with other report material, makes it clear that substantial impacts are likely to arrive sooner rather than later. Table TS.3 of Working Group II's technical summary, for example, lays out projected warmings. The uncertainties are obvious. 
Decades ahead, models don't agree on the amount of warming from a given amount of greenhouse gas, and no one can tell which of a half-dozen emission scenarios—from unbridled greenhouse-gas production to severe restraint—will be closest to reality. But this table strongly suggests that a middle-of-the-road, business-as-usual scenario would likely lead to a 2°C warming by about the middle of this century. Lined up beneath the projected warmings in the table are the anticipated effects of each warming. Beneath a mid-century, 2°C warming is a litany of daunting ill effects that had previously had no clear timing attached to them: increasing drought in mid-latitudes and semiarid low latitudes, placing 1 billion to 2 billion additional people under increased water stress; most corals bleached, with widespread coral mortality following within a few decades; and decreases in low-latitude crop productivity, as in wheat and maize in India and rice in China, among other pervasive impacts. At the bottom of the same table is a category of effects labeled “Singular Events,” most dramatically sea level rise. The table shows a “Long term commitment to several metres of sea-level rise due to ice sheet loss” falling between the middle-of-the-road 2°C warming and a 3°C warming, which without drastic emissions reductions might well come by the end of the century. The report calls it a “commitment” because although the temperatures needed to melt much of the Greenland ice sheet might be reached in the next 50 to 100 years, the ice sheet, similar to an ice cube sitting on a countertop, will take time to melt even after the surrounding air is warm enough. Its huge thermal inertia means a lag of at least several centuries before it would largely melt away, flooding much of South Florida, Bangladesh, and major coastal cities. ## A laggard system Ice sheets aren't the only thing that stretches out the time between an action—say, building a coal-fired power plant—and a global warming impact. 
For example, the atmosphere is slow to warm because the oceans are absorbing some of the heat trapped by the strengthening greenhouse. IPCC estimates that even if no greenhouse gases were added after the year 2000, the oceans' heat would warm the atmosphere 0.6°C by the end of the century, or as much as it warmed in the last century. So the world is already committed to almost one-quarter of the warming that can be expected late in the century. And half the warming of the next couple of decades will be carried over from emissions in the past century. Then there are the lags that come into play outside the climate system. The technological infrastructure that does most of the emitting—the gasoline-fed cars and coal-fired power plants, primarily—will have to be radically altered if greenhouse emissions are to be drastically reduced. The speed at which infrastructure can be changed depends on the perceived urgency, says energy-climate analyst James Edmonds of the Pacific Northwest National Laboratory's office in College Park, Maryland. Past transitions from one energy source to another—say, wood to coal—took upward of 50 to 100 years, he notes. But even with a Manhattan Project imperative—something nowhere in sight—weaning cars off oil, building nuclear power plants, and rigging coal power plants to shoot the carbon dioxide into the ground will take decades, not years. And there's the lag while governments crank up the will to fundamentally alter the global energy system. “The biggest lag is in the political system,” says geoscientist Michael Oppenheimer of Princeton University. A couple of decades have already been spent discussing the seriousness of the threat, as he sees it, and at the present rate it could be another 20 years before a worldwide program up to the task is in place. Yet another lag would enter the calculation for taking action if policymakers waited for more research to narrow the scientific uncertainties.
In the 1980s, for example, the biggest uncertainty in climate science was clouds and how they would react to climate change. Fifteen years later, “we are essentially where we were then,” says atmospheric scientist Robert Charlson of the University of Washington, Seattle. Clouds are still poorly understood, as are pollutant hazes, another collection of microscopic particles with a highly uncertain effect on future climate. With all these known time lags adding up to many decades, a lot of climate scientists say that the time for serious action is now. “We can't really afford to do a ‘wait and learn’ policy,” says Oppenheimer. “The most important question is, when do we commit to 2°? Really, there isn't a lot of headroom left. We better get cracking.” ## Fear of the unknown Physics and socioeconomics may make piloting the ponderous ship of climate a cumbersome business, but researchers are also worried about navigating around the hazards they fear may be lurking unseen beneath the surface. They've hit hidden obstacles before. Back in the 1970s, atmospheric chemists were worrying that pollutant chlorine might be destroying stratospheric ozone over their heads. Yet all the while, that chlorine was teaming up with ice-cloud particles over Antarctica to wipe out stratospheric ozone through a mechanism that scientists had overlooked. Prestigious committees have been warning for 25 years that similar surprises could spring from the climate system. A few may be starting to show themselves. Arctic sea ice took a nosedive last summer, prompting concerns that feedbacks not properly included in models are taking hold and accelerating ice loss (Science, 5 October, p. 33). Glaciers draining both southern Greenland and West Antarctic have suddenly begun rushing to the sea, and glaciologists aren't sure why (Science, 24 March 2006, p. 1698). 
And theorists recently reminded their colleagues that they will never be able to eliminate the small but very real chance that the climate system—contrary to most modeling—is hypersensitive to greenhouse gases. The uncertainties are adding up. “You can hope the uncertainties are going to break your way,” says policy analyst Roger Pielke Jr. of the University of Colorado, Boulder. “There have been times they did. But if you play that game often enough, you're going to lose some pretty big bets sometimes.” In the case of global warming, Pielke says, “we don't have a lot of time to wait around.” Edmonds agrees. If avoiding a 2°C warming is the goal, “the world really has to get its act together pretty damn fast. The current pace isn't going to do it.” 6. AUTOIMMUNE DISEASES # The B Cell Slayer 1. Robert Matthews* 1. Robert Matthews is a freelance writer based in Oxford, U.K. It took nearly a decade for Jonathan Edwards to persuade people that killing B cells could relieve symptoms of rheumatoid arthritis. Multiple sclerosis is his next target LONDON—For someone who has just seen his ideas for treating a crippling disease vindicated after years of rejection, Jonathan Edwards is remarkably self-effacing. Asked whether he feels he has brought hope to some of the millions with rheumatoid arthritis (RA), the 57-year-old rheumatologist tends to look at his shoes or up at the ceiling. Edwards would much rather talk about where he plans to go next than to dwell on how his University College London (UCL) team's decade-long pursuit of an unfashionable idea has now led to a new RA therapy approved in the United States and in Europe. The unfashionable idea advocated by Edwards is that RA stems from the misbehavior of denizens of the bloodstream known as B cells. These white blood cells play a key role in the body's immune system, releasing disease-fighting antibodies that have exquisite specificity for molecular targets, normally those on pathogens. 
B cells are also obvious suspects in autoimmune conditions such as RA, in which the immune system goes haywire and starts unleashing friendly fire within normal healthy tissue. Yet the idea of their involvement fell out of favor in the 1970s when autoimmune researchers interested in RA turned their attention to a different white blood cell, the T cell. Edwards, however, has breathed new life into the B cell theory of RA by providing evidence that eliminating these cells in patients can ease their symptoms. “Edwards deserves great personal credit; … he was a strong, consistent voice at a time when most others were not looking at B cells as central to rheumatoid arthritis pathogenesis,” says rheumatologist Gregg Silverman of the University of California, San Diego. B cell-depletion therapy doesn't work for every RA patient, and questions remain about how long the relief it brings lasts. Still, once again challenging conventional wisdom, Edwards is now arguing that B cells hold the key to another autoimmune affliction: multiple sclerosis (MS). And there is already tantalizing clinical evidence that the outcome could be the same—namely, a new therapy for a notoriously recalcitrant disease. ## The return of B cells When Edwards began his career in rheumatology in the 1970s, B cells seemed a prime suspect in the cause of RA, as they are the source of so-called rheumatoid factor (RF), a variety of antibody found in high levels in patients with the disease. Yet about 20% of RA patients don't have RF in their blood, and the level of RF doesn't correlate perfectly with the severity of disease. As researchers struggled to link B cells with what they were seeing in their patients, attention shifted to the behavior of T cells instead. These white blood cells help control B cells and turn up in joint tissue damaged by RA in larger quantities than B cells do. “By the late 1970s, B cells didn't seem to have anything to say. They'd become boring,” says Edwards. 
T cells “were just much more interesting.” For the next 20 years, T cells dominated the research agenda on autoimmune diseases. Yet it became clear, at least to Edwards, that the excitement about T cells was largely misplaced. Anti-T cell therapies failed to work for RA. It also remained unclear whether T cells could cause inflammation without the involvement of B cells. Nor, he argued, could T cells account for the persistence of RA. All this prompted Edwards and his UCL colleague Geraldine Cambridge to begin pondering alternatives in the late 1990s—including the old idea that B cells held the key to therapy. The pair had been struck by the discovery that joint tissue attacked in RA contained two molecules—VCAM-1 and DAF—known to promote the persistence of B cells. By 1998, Edwards and Cambridge had the framework of a new explanation for RA based on B cells. Put simply, they suspected that the problem was a few bad apples. The body maintains a vast array of B cells, each producing an antibody with a unique shape, many of which prove useful in fighting disease. By chance, however, some B cells inadvertently produce antibodies that attack healthy tissue. Normally, the B cells producing these “autoantibodies” are destroyed. But in a disastrous twist in some people, the UCL team proposed, some of these autoantibodies undermine the weeding-out process, keeping the bad-apple B cells alive and also prompting T cells to help these B cells make yet more of their destructive antibodies. “The result is a vicious cycle,” says Edwards. The test of this hypothesis was obvious: Eliminate B cells from people with RA and let the immune system “reboot” with new B cells. The odds that the same bad apples would arise again and survive weeding out would be very small. As luck would have it, a drug capable of killing B cells but sparing the stem cells that make them had just reached the market.
Approved in 1997 for use with B cell lymphomas, rituximab is a monoclonal antibody specially designed to home in on and knock out the immune cells. If Edwards and Cambridge were right, rituximab could also be the basis of a treatment for RA. A largely safe one, moreover: Studies of rituximab in hundreds of cancer patients had produced relatively few side effects and shown that people could temporarily lose all their B cells without suffering too many problems from suppression of their immune systems. The pair tried to publish their B cell theory—only to encounter responses from journals ranging from lukewarm to cryogenic. “We were told there was already a perfectly good explanation—based on T cells,” recalls Edwards. “The major medical journals wouldn't take it at all.” Their proposal finally appeared in Immunology in 1999 and provoked no response at all. Undeterred, the UCL team set up a pilot trial, giving B cell-depleting drugs, including rituximab, to five people with especially severe RA. Once their B cells disappeared, the patients' symptoms improved dramatically, and three continued to do well even after their B cells returned, 6 months or so later. Although journals dismissed the results on so few patients as inconclusive, Edwards scraped together more money from departmental funds and recruited more patients. At a 2000 meeting, the UCL team was able to present data on 20 patients, all but two of whom had shown major improvements, some for as long as 18 months. The media picked up on the apparent success, but the resulting headlines sparked accusations of hype, with the British Medical Journal condemning the “irresponsibility” of the UCL team in talking to the press about preliminary results from such a small trial. Still, Edwards got the support of the drug company Roche, which owns the European rights to rituximab, to push ahead. 
By 2002, the UCL team had the results of a randomized, controlled trial involving 161 patients, which showed that more than 40% of those receiving rituximab with methotrexate (MTX), a conventional anti-inflammatory agent, had experienced major improvements in symptoms at the end of 24 weeks, compared to just 13% of those receiving MTX alone. The relief continued even when the patients were checked 48 weeks after the B cell-depletion therapy. “When I presented these results, I think the penny finally dropped within the research community,” says Edwards. Indeed, The New England Journal of Medicine ultimately published the study's results in 2004. The success prompted further, larger trials, and last year, the regulatory authorities in both the United States and Europe approved rituximab for use with MTX in severe cases of RA. And in August, the United Kingdom concluded that the therapy was sufficiently cost-effective to be made available free of charge through the National Health Service. RA patient groups are understandably delighted about the new therapy, with the U.K. charity Arthritis Care describing it as “a triumph.” The RA research community is also excited. “I grew up thinking rheumatoid arthritis was a kind of classic B cell disease, and then the T cell people moved in,” says Alan Silman of the University of Manchester, U.K., who is medical director of the U.K. Arthritis Research Campaign. “Edwards put B cells back on the agenda.” ## Remaining doubts, new disease Yet Silman isn't totally won over. He and others continue to argue that most of RA's inflammation is caused by a direct effect of T cells. “B cells don't correlate perfectly with the disease,” says Silman. There have also been new safety concerns about rituximab, as two lupus patients receiving the drug last year died of a rare viral infection. But no link with the drug has been proven.
Overall, says Silverman, almost a decade of rituximab use has shown the drug to have a good safety profile, with no increase in opportunistic infections among those receiving it. Edwards accepts that questions remain about the role of B cells in RA: Why do fewer than half of patients respond to rituximab, for example? Nevertheless, Edwards and his colleagues have become interested in whether B cell-depletion therapy can treat MS, a neurodegenerative condition stemming from the destruction of the myelin insulation surrounding vital nerve cells. This myelin loss triggers a host of symptoms, from muscle spasms and pain to loss of bladder and speech control. Many scientists have argued that T cells drive the myelin breakdown in MS, but no effective therapy has yet resulted from pursuing that theory. Edwards notes, however, that studies in the mid-1980s found that in the early stages of MS, the damage to nervous tissue seemed linked to the local accumulation of offspring of B cells known as plasma cells. Indeed, back in 1999, Edwards tried to interest neurologists in testing B cell-depletion therapy on MS patients but found no takers. Given the poor prognosis of many people with MS and the apparently low risks involved in rebooting the B cell system, Edwards has continued to lobby that rituximab is worth a try. And some MS researchers have started to agree. “There's strong circumstantial evidence implicating antibodies” and thus B cells in MS, says immunologist Christopher Linington of the University of Aberdeen, U.K. “Rituximab will almost certainly help some patients. The problem is, you cannot predict which ones.” In May at a neurology meeting, a team at the University of California, San Francisco (UCSF), unveiled preliminary results from an ongoing rituximab trial involving more than 100 patients with so-called relapsing-remitting MS.
The data, primarily magnetic resonance imaging scans of the patients' nervous systems, indicated that the drug has dramatically reduced the nerve damage caused by the disease. “It's no longer a question of ‘Do B cells contribute to MS?’” but rather how, says Amit Bar-Or of McGill University in Montreal, Canada, who worked with the UCSF team assessing the safety of the potential new therapy. The UCL team is playing down any suggestion that they've been proved right again. Edwards says it's far too early for that, although he's hopeful. “The longer we stay in this business, the more we realize there may be further twists to the tale,” he says. 7. BIOMEDICAL RESEARCH # Cell Biology Meets Rolfing 1. David Grimm A diverse group of researchers wants to create a new discipline from scratch by bringing together experts in fascia and deep-tissue massage BOSTON—Peter Huijing was far from enthusiastic when he received an invitation to speak at the Fascia Research Congress. The meeting, held here last month, would be the first dedicated to the soft part of the body's connective tissue system—an important but medically neglected organ. It would bring together top scientists from fields as diverse as cell biology and biophysics, but it would also include alternative medicine practitioners, such as chiropractors and deep-tissue manipulators known as Rolfers. “I had a fear of damaging my reputation,” says Huijing, a world-renowned biomechanics researcher at Vrije Universiteit in Amsterdam, the Netherlands, who, despite his hesitation, decided to attend. By the time the conference was over, Huijing had agreed to organize the next one. The conference was the brainchild of Thomas Findley, an M.D.-Ph.D. co-director of research at the VA Medical Center in East Orange, New Jersey. 
For 30 years, Findley has been studying the science behind rehabilitation medicine; he is also director of research at the Rolf Institute of Structural Integration in Boulder, Colorado, which trains and certifies Rolfers. He became convinced early on that fascia—which weaves its way through the body like a gossamer blanket, cradling organs, ensheathing bones, and providing structural support—plays a key role in how patients respond to treatment. He wanted to learn more, but there were no identifiable fascia researchers. Frustrated, Findley began e-mailing scientists like Huijing in 2005. He knew that researchers around the world had been studying fascia in some form—MEDLINE references to it have spiked in the past 3 years—but that they didn't see themselves as part of a coherent field. Huijing, for example, looks at how the body generates force via the interactions between muscles and fascia, but he was unaware of cell biologists who were studying how fascial cells respond to movement. Findley hoped that bringing such scientists together would stimulate new research collaborations and shed light on the mysterious tissue. Findley also wanted to bring in clinicians, but he knew that M.D.s wouldn't cut it. Some researchers have speculated that fascial anomalies may be responsible for black box disorders like fibromyalgia and lower back pain, yet doctors have traditionally ignored the tissue. Medical books barely mention fascia, and anatomical displays remove it. “It's just not sexy,” says Elizabeth Montgomery, a pathologist who specializes in soft tissue at Johns Hopkins University in Baltimore, Maryland. So Findley turned to the alternative-medicine community. Findley knew that Rolfers and other alternative therapists held fascia in high regard: They believe that rubbing and stretching the tissue brings about the improvements they see in clients. Yet they don't have the tools or data to prove their claims.
“Practitioners want to know the science behind what they're doing,” says Findley, “and scientists want to see clinical applications of their work.” Combining the two groups to create a new field seemed natural. But as the meeting in Boston revealed, bridging the gap won't be easy. ## The great divide Frederick Grinnell picked up on the gap right away when he heard the applause—in the middle of a talk. It was 9:00 in the morning on the first day of the conference, and Paul Standley, a vascular physiologist at the University of Arizona College of Medicine in Phoenix, was describing his work on fibroblasts, the chief type of cell found in fascia. When Standley's team placed the cells on flexible collagen and stretched the collagen in ways that replicated repetitive motion strains on the body, many cells died. But when the team followed the strains by stretching the collagen in ways that approximate techniques like Rolfing, more cells survived. The audience erupted. “It's rare to see such enthusiasm at a conference,” says Grinnell, a cell biologist at the University of Texas (UT) Southwestern Medical Center in Dallas. “I was really struck by it.” The audience was composed mostly of alternative-medicine practitioners—chiropractors, massage therapists, and Rolfers—who signed up in droves when Findley first advertised the meeting in the fall of 2006. Within 5 months, the 500-seat venue at Harvard Medical School had sold out. The scientists took more convincing. In addition to Findley's aggressive e-mail campaign, a 51-year-old graduate student named Robert Schleip (see sidebar) traveled to labs around the world looking for plenary speakers. Some, like Grinnell, saw the conference as an opportunity to learn from other basic researchers. “I never realized my work on cell mechanics related to tissue mechanics until I heard about this meeting,” he says. But others, like Huijing, were turned off at first: “I had never heard of things like Rolfing before,” he says. 
“I didn't see the relevance.” In the end, 58 scientists signed up for the meeting—along with 51 M.D.s. Most of them took the podium, whereas the practitioners filled the seats. Clapping aside, many of the practitioners struggled with the science. Findley was adamant that the talks not be “watered down,” and intricate presentations on the first day pulled no punches. Cell biologists spoke about how fascial cells alter gene expression in response to force, while biomechanics researchers detailed how interactions between fascial cells and the extracellular matrix contribute to whole body mobility. By the afternoon, the auditorium was noticeably emptier. “My frontal lobe was tired,” says Briah Anson, a St. Paul, Minnesota-based Rolfer. For their part, the scientists had some problems connecting with the clinicians. Huijing's fears of stigma seemed to be borne out when he interacted with one group of attendees. “They started talking about aura,” he says. “I don't want my name associated with that.” And Giulio Gabbiani, a cell biologist at the University of Geneva in Switzerland who studies connective tissue and wound healing, acknowledged difficulty discussing some concepts with the practitioners. “It's like we were talking two different languages,” he says. All of this prompts Wallace Sampson to question whether putting the two camps together is a good idea. “Fascia is a legitimate target of study, but a field like this has to be generated organically,” says the alternative-medicine skeptic and professor emeritus at Stanford University in Palo Alto, California. “You have to do the basic science and see what evolves. You can't force the clinical side.” Partap Khalsa strongly disagrees. “It's not only valid to bring these groups together, it's essential,” says the program officer with the U.S. 
National Institutes of Health (NIH) National Center for Complementary and Alternative Medicine (NCCAM), which, along with organizations such as the Rolf Institute and the Evanston, Illinois-based Massage Therapy Foundation, provided funding for the meeting. “You need people who can do good basic science and clinicians who can inform them about their experiences,” he says. “It's the only way to advance the field.” ## Bridging the gap By the second day of the conference, things began to gel. A clinician-scientist panel fostered a dialogue between the two groups, and a networking lunch sparked new collaborations. “I heard clinicians talking about how manipulating fascial stiffness was key to their interventions,” says UT Southwestern's Grinnell. Now he plans to study the cell biological basis of stiffness and how it might contribute to wound repair and scarring. Huijing says he also learned new things from the alternative therapists—and he found that he had something to teach them as well. Establishing fascia research as a legitimate field, he says, will guarantee that these interactions continue. Findley knows it won't be easy. First, he'll need to attract more scientists. Publishing fascia research in top journals would help. He'll also need to cultivate a stable source of funding. Through the Rolf Institute, Findley has helped establish the Ida P. Rolf Research Foundation (named after the institute's founder), which is raising funds in hopes of awarding $200,000 in grants per year in 2 to 3 years. That's still a pittance compared to the millions NIH can provide, but NCCAM's Khalsa says he likes what he saw at the meeting. “There's a lot of potential here,” he says. Still, Findley's greatest challenge will be keeping everyone happy. Practitioners want to see more of their own up on the podium, and scientists want assurances that everything will remain respectable.
It's a tightrope Huijing looks forward to walking in 2009 when he puts together the next conference, to be held in Amsterdam. Huijing plans to give a larger spotlight to practitioners and to explore even more of the basic science. He's adding days, and he's reserved an auditorium for 1000 people—twice the size of the room at this year's event. “I have a feeling it could be very big,” he says. 8. BIOMEDICAL RESEARCH # From Rolfer to Researcher David Grimm Robert Schleip remembers the moment he became a “born-again scientist.” For 13 years, he had been teaching Rolfing—a technique that involves rubbing and stretching a bodywide network of soft connective tissue known as fascia—when he began to question his lesson plans. “I found there was a pseudoscientific mentality behind what I was doing,” says Schleip, who in 1978 became Germany's first licensed Rolfer. “I thought, ‘I'd better check this stuff out.’” So Schleip turned to the scientific literature on fascia. “When I did my homework, I discovered that some of [the Rolfing dogma] didn't look so good.” For example, as part of their training, Rolfers are taught that if they apply enough force to an area of fascia, they can lengthen it and remove tension. “But the science says you would have to apply a ton of pressure to effect these changes,” Schleip says. The literature also provided insight: Schleip discovered, for example, that fascia is highly innervated, and that might explain why manipulating the tissue could ease pain. “I knew there were many gold mines waiting,” Schleip says. So he stopped teaching and pursued a scientific career. Getting a research position wasn't easy. Ten professors turned Schleip down before one at Ulm University gave him a chance—but no lab space. Schleip spent his first year conducting experiments in his kitchen and in a storage room he rented from a nearby pharmacy.
He began to study the ability of fascial tissue to contract—a property that could play a role in stiffness and lower back pain. “The professor was so impressed with how much I did on my own that he let me work in his lab,” Schleip says. Schleip now has a lab of his own. He earned his Ph.D. with honors in 2006 at the age of 52, and shortly thereafter established the Fascia Research Project at Ulm University. He's continuing his work on fascial contraction and has begun collaborating with Giulio Gabbiani, a preeminent cell biologist at the University of Geneva in Switzerland. Now Schleip says that when he calls professors to discuss research projects, they call him back. 9. SOCIETY OF VERTEBRATE PALEONTOLOGY MEETING # Did Horny Young Dinosaurs Cause Illusion of Separate Species? SOCIETY OF VERTEBRATE PALEONTOLOGY, 17–20 OCTOBER 2007, AUSTIN, TEXAS Of all the strange-headed dinosaurs, the prize for toughest, prickliest noggins probably belongs to the pachycephalosaurs—literally, the “thick-headed lizards.” Some sported domed skulls, and all had bony spikes studding their long snouts. Four species are known from roughly 65-million-year-old rocks in Wyoming, Montana, and South Dakota alone. It's an impressive display of diversity for the waning days of the dinosaurs. Or maybe not. At the Society of Vertebrate Paleontology's annual meeting here last month, Jack Horner of Montana State University (MSU) in Bozeman argued that three of the species are just one. What were thought to be two unique species, he says, are in fact juveniles of different ages that would have grown up to be bony-headed Pachycephalosaurus. “It's a dramatic remake,” says Peter Dodson of the University of Pennsylvania. The revision would remove two particularly colorful characters from the paleontological bestiary, and not everyone is convinced. 
Getting the taxonomy right has major implications, says David Evans of the Royal Ontario Museum in Toronto: “It's really important for understanding a whole range of evolutionary phenomena.” First described in 1931, Pachycephalosaurus wyomingensis has such a prodigious pate that paleontologists speculated males butted heads with each other, although many now doubt it (Science, 5 November 2004, p. 962). In 1983, a related species made its debut. Stygimoloch spinifer (“horned devil from the river of death”) had a smaller dome but fearsome spikes. The newest addition was introduced last year: Dracorex hogwartsia has a flat head with telltale nasal horns. The dragon-king was named in honor of J. K. Rowling, whose Harry Potter novels feature the Hogwarts School of Witchcraft and Wizardry. Horner and colleagues—MSU doctoral student Holly Woodward and Mark Goodwin of the University of California, Berkeley—suspected that young dinosaurs might have been misidentified as adults. During previous work on another pachycephalosaur, Stegoceras, they noticed that the bone of smaller specimens was full of radial canals, a spongelike texture that indicates rapidly growing bone and suggests that they were juveniles. The team cut open skulls of Pachycephalosaurus and found dense bone without canals, suggesting that the specimens were full-grown adults. Stygimoloch bone was full of canals. “This is not even close to being full-grown,” Horner says. The spikes had a spongy texture and showed signs that the bone was being resorbed—suggesting it was a juvenile Pachycephalosaurus. There is only one specimen of Dracorex, housed in the Children's Museum of Indianapolis, so Horner couldn't cut it open to look at the tissue. Horner notes that little bumps on the top of the head of Dracorex resemble those that give rise to radial bone growth in Stygimoloch. Two large holes in the top of the skull are another characteristic of juveniles that haven't finished growing. 
Given the lack of a dome and the shorter skull, Horner suspects that Dracorex is an even younger Pachycephalosaurus. He says the hypothesis could be falsified if researchers were to discover, say, a new skull of Dracorex that is as big as Pachycephalosaurus or that has mature bone. The argument makes sense to Robert Sullivan of the State Museum of Pennsylvania in Harrisburg, who co-authored a paper describing Dracorex published last year in the New Mexico Museum of Natural History and Science Bulletins. In another talk at the meeting, Sullivan speculated that juvenile pachycephalosaurs in Asia may have been misidentified as new species. But another author, Robert Bakker of the Houston Museum of Natural Science in Texas, adamantly opposes lumping together the three North American species. “The differences are [so] astonishing,” he says, that he can't imagine that one could have grown into the other. Evans, on the other hand, says such big changes are possible. “What dinosaurs teach us is that relative growth can be extreme, particularly in the skull,” he says. What's needed are careful measurements of many specimens to see how shape changes with size, Evans says; this can help reveal whether various specimens all belong to a so-called growth series. If some features, such as the height of the dome, do not depend on size, it would suggest they rightly belong to different species. Because juveniles tend to start out with features of their evolutionary ancestors and modify them as they grow, it's important to distinguish juveniles from adults or family trees may get confused. That would give researchers a skewed picture of how various pachycephalosaurs are related to one another and to more distant taxa. If Horner turns out to be right, the diversity of pachycephalosaurs would be 50% lower than previously thought for the latest Cretaceous. “It makes a lot more sense,” he says, because other kinds of dinosaurs were also declining in diversity at the time. 
Not even an honorary degree in wizardry, it seems, was enough to save them. 10. SOCIETY OF VERTEBRATE PALEONTOLOGY MEETING # Jaw Shows Platypus Goes Way Back SOCIETY OF VERTEBRATE PALEONTOLOGY, 17–20 OCTOBER 2007, AUSTIN, TEXAS When scientists first laid eyes on the duckbilled platypus and the echidnas in the late 18th century, they were so baffled by these bizarre egg-laying mammals that some considered the specimens a hoax. Modern researchers have uncovered other implausible features, including 40,000 tiny glands in the broad bill that sense electric currents, which may help the platypus catch prey underwater. The ant-eating echidna has about 100 in its tiny snout. The platypus and echidna are so unusual that they were assigned an order—the Monotremata—separate from the more common marsupial and placental mammals. The fossil record of monotremes is also sparse. The oldest known specimen is a single tooth from Patagonia, about 62 million years old, with a distinctive compressed shape like that of juvenile platypuses before they lose their teeth. A reanalysis of fossil jaws from Australia, reported at the meeting, suggests it belonged to a platypus that lived at least 112 million years ago. “It's really, really old for a monotreme,” Timothy Rowe of the University of Texas (UT), Austin, told the audience. Teinolophos trusleri was discovered near Inverloch, Australia, in 1997 and described by Thomas Rich of the Museum Victoria in Melbourne, Pat Vickers-Rich of Monash University, and colleagues. The specimens consist of jaws and teeth. Looking for more anatomical clues to the evolution of mammals, Rich's team took fossil jaws to Rowe, a paleontologist who also runs a computed tomography-scanning facility at UT Austin. Scans of three specimens revealed a large internal canal along the entire length of the jaw, like the canal in a modern platypus that carries nerve fibers from the electrosensory glands in the bill to the brain. 
“There's no other mammal that has a canal this size,” Rowe said. Even back in the early Cretaceous, it seems, the platypus was using electrosensation. “This is the most compelling evidence to us that Teinolophos is a platypus.” That would push back the fossil record of the platypus quite a bit; the next youngest fossil is Obdurodon dicksoni from 15-million-year-old rocks in Australia. It is also much older than current estimates from DNA of when platypuses and echidnas diverged from their most recent common ancestor. Molecular clocks put that date somewhere between 17 million and 80 million years ago. Rowe speculated that one reason for the underestimate may be that monotremes evolve at slower rates than other mammals do, an idea that fits with their lower diversity. Zhe-Xi Luo of the Carnegie Museum of Natural History in Pittsburgh, Pennsylvania, agrees that the canal in Teinolophos resembles that of a modern platypus: “I'm leaning toward accepting Rowe's idea.” 11. SOCIETY OF VERTEBRATE PALEONTOLOGY MEETING
ISSN:2164-6376 (print) ISSN:2164-6414 (online) Discontinuity, Nonlinearity, and Complexity Dimitry Volchenkov (editor), Dumitru Baleanu (editor) Dimitry Volchenkov (editor) Mathematics & Statistics, Texas Tech University, 1108 Memorial Circle, Lubbock, TX 79409, USA Email: dr.volchenkov@gmail.com Dumitru Baleanu (editor) Cankaya University, Ankara, Turkey; Institute of Space Sciences, Magurele-Bucharest, Romania Blow-up Result in a Nonlinear Wave Equation with Delay and Source Term Discontinuity, Nonlinearity, and Complexity 10(4) (2021) 733--741 | DOI:10.5890/DNC.2021.12.012 Tayeb Lakroumbe, Mama Abdelli, Abderrahmane Beniani Laboratory of Analysis and Control of Partial Differential Equations, Djillali Liabes University, P. O. Box 89, Sidi Bel Abbes 22000, Algeria University of Mascara, 29000, Algeria University Center of Ain Temouchent, Department of Mathematics, Ain Temouchent 46000, Algeria Abstract In this paper we consider the initial boundary value problem for a nonlinear wave equation with damping and a delay term of the form: $$|u_t|^{l}u_{tt}-\Delta u -\Delta u_{tt}+\mu_1|u_t|^{m-2}u_t+\mu_2|u_t(t-\tau)|^{m-2}u_t(t-\tau)=b|u|^{p-2}u,$$ with initial conditions and Dirichlet boundary conditions. Under appropriate conditions on $\mu_1$, $\mu_2$, we prove that there are solutions with negative initial energy that blow up in finite time if $p \geq \max\{l+2,m\}$.
# Chapter 6 - Review - Exercises: 8 $V=\frac{117 \pi }{5}$ #### Work Step by Step {Step 1 of 6}: First, find the points of intersection of the curves $x=1+y^{2}$ and $x=y+3$ (equivalently $y=x-3$): $1+y^{2}=y+3$ $y^{2}-y-2=0$ $y \left( y-2 \right) +1 \left( y-2 \right) =0$ $\left( y-2 \right) \left( y+1 \right) =0$ $y=2$ or $y=-1$ The points of intersection are (5, 2) and (2, -1). {Step 2 of 6}: Sketch the curves. {Step 3 of 6}: Rotate this shaded region about the y-axis to produce a solid. Cutting this solid horizontally produces a washer. Sketch the solid and the washer. {Step 4 of 6}: The outer radius of the washer is $y + 3$ The inner radius of the washer is $1+y^2$ The cross-sectional area of the washer $A \left( y \right) = \pi \left[ \left( y+3 \right) ^{2}- \left( 1+y^{2} \right) ^{2} \right]$ $= \pi \left[ y^{2}+9+6y-1-2y^{2}-y^{4} \right]$ $= \pi \left[ 8+6y-y^{2}-y^{4} \right]$ {Step 5 of 6}: Then the volume of the solid $V= \int _{-1}^{2}A \left( y \right) dy$ $= \int _{-1}^{2} \pi \left( 8+6y-y^{2}-y^{4} \right) dy$ $= \pi \int _{-1}^{2} \left( 8+6y-y^{2}-y^{4} \right) dy$ $= \pi \left[ 8y+3y^{2}-\frac{1}{3}y^{3}-\frac{1}{5}y^{5} \right] _{-1}^{2}$ {Step 6 of 6}: $V= \pi \left[ \left( 16+12-\frac{8}{3}-\frac{32}{5} \right) - \left( -8+3+\frac{1}{3}+\frac{1}{5} \right) \right]$ $= \pi \left[ 28-\frac{8}{3}-\frac{32}{5}+5-\frac{1}{3}-\frac{1}{5} \right]$ $= \pi \left[ 33-3-\frac{33}{5} \right]$ $= \pi \left[ 30-\frac{33}{5} \right]$ $V=\frac{117 \pi }{5}$
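As a quick numerical sanity check on the washer-method result above, the integral $V=\int_{-1}^{2}A(y)\,dy$ can be approximated with a midpoint rule (a minimal Python sketch using only the standard library; the step count is an arbitrary choice):

```python
import math

# Washer cross-section for rotation about the y-axis:
# outer radius x = y + 3, inner radius x = 1 + y^2, for y in [-1, 2].
def area(y):
    return math.pi * ((y + 3) ** 2 - (1 + y ** 2) ** 2)

# Midpoint-rule approximation of V = integral of A(y) from -1 to 2.
n = 100_000
a, b = -1.0, 2.0
h = (b - a) / n
volume = sum(area(a + (i + 0.5) * h) for i in range(n)) * h

print(volume)             # ≈ 73.513
print(117 * math.pi / 5)  # exact value 117π/5 ≈ 73.513
```

The two printed values agree to well within the midpoint rule's error, confirming the hand computation.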
# Bella has 8 pies and she can make 2 pies an hour. Tina has 2 pies and she can make 3 in an hour. How many hours will it take for Tina to have as many pies as Bella?
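The question above is a linear catch-up problem: after $h$ hours Bella has $8+2h$ pies and Tina has $2+3h$, so setting the two expressions equal gives $h$. A minimal Python sketch (the variable names are illustrative):

```python
# Bella: 8 pies, +2 per hour; Tina: 2 pies, +3 per hour.
# Tina catches up when 8 + 2h = 2 + 3h.
bella_start, bella_rate = 8, 2
tina_start, tina_rate = 2, 3

# Rearranged: (bella_start - tina_start) = (tina_rate - bella_rate) * h
hours = (bella_start - tina_start) / (tina_rate - bella_rate)

print(hours)                              # 6.0
print(bella_start + bella_rate * hours)   # both have 20.0 pies at that point
```

So Tina matches Bella after 6 hours, when each has 20 pies.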
Title Measurement of neutral strange particle production in the underlying event in proton-proton collisions at $\sqrt{s}=7$ TeV Author Chatrchyan, S. Khachatryan, V. Sirunyan, A. M. Alderweireldt, S. Bansal, M. Bansal, S. Cornelis, T. de Wolf, E.A. Janssen, X. Knutsson, A. Luyckx, S. Mucibello, L. Roland, B. Rougny, R. van Haevermaet, H. Van Mechelen, P. Van Remortel, N. Van Spilbeeck, A. et al. Faculty/Department Faculty of Sciences. Physics Publication type article Publication 2013 Lancaster, Pa , 2013 Subject Physics Source (journal) Physical review : D : particles and fields. - Lancaster, Pa, 1970 - 2003 Volume/pages 88(2013) :5 , 21 p. ISSN 1550-7998 ISI 000323893400001 Article Reference 052001 Carrier E-only publication Target language English (eng) Full text (Publishers DOI) Affiliation University of Antwerp Abstract Measurements are presented of the production of primary $K_S^0$ and $\Lambda$ particles in proton-proton collisions at $\sqrt{s} = 7$ TeV in the region transverse to the leading charged-particle jet in each event. The average multiplicity and average scalar transverse momentum sum of $K_S^0$ and $\Lambda$ particles measured at pseudorapidities $|\eta| < 2$ rise with increasing charged-particle jet $p_T$ in the range 1-10 GeV/c and saturate in the region 10-50 GeV/c. The rise and saturation of the strange-particle yields and transverse momentum sums in the underlying event are similar to those observed for inclusive charged particles, which confirms the impact-parameter picture of multiple parton interactions. The results are compared to recent tunes of the PYTHIA Monte Carlo event generator.
The PYTHIA simulations underestimate the data by 15%-30% for $K_S^0$ mesons and by about 50% for $\Lambda$ baryons, a deficit similar to that observed for the inclusive strange-particle production in non-single-diffractive proton-proton collisions. The constant strange-to-charged-particle activity ratios with respect to the leading jet $p_T$ and similar trends for mesons and baryons indicate that the multiparton-interaction dynamics is decoupled from parton hadronization, which occurs at a later stage. Full text (open access) https://repository.uantwerpen.be/docman/irua/28a8c0/8765.pdf
Open access peer-reviewed chapter - ONLINE FIRST # Ethics of Using Placebo Controlled Trials for Covid-19 Vaccine Development in Vulnerable Populations Written By Lesley Burgess, Jurie Jordaan and Matthew Wilson Submitted: February 10th, 2022 Reviewed: April 1st, 2022 Published: May 5th, 2022 DOI: 10.5772/intechopen.104776 From the Edited Volume ## SARS-CoV-2 Variants - Two Years After [Working Title] Prof. Alfonso J. Rodriguez-Morales ## Abstract When clinical trials are conducted in vulnerable communities such as those found within low-to-middle-income countries (LMICs), there is always the risk of exploitation or harm to these communities during the course of biomedical research. Historically, there have been multiple instances where significant harm was caused. Various organisations have proposed guidelines to minimise the risk of this occurring; however, questionable clinical trials are still conducted. Research Ethics Committees have an additional duty of care to protect these vulnerable populations. During the Covid-19 pandemic, the ongoing use of placebo-controlled trials (PCTs), even after approval of a safe and efficacious vaccine, is a topic of great debate and is discussed from an ethical and moral perspective. ### Keywords • Covid-19 • research ethics • clinical trials • equity • placebo • randomisation ## 1. Introduction Randomised controlled trials have existed in medicine since the 1940s [1]. Many iterations of guidelines have been developed to govern their use in research and to protect vulnerable populations that have historically been abused for the betterment of science. Unfortunately, with a concept as abstract as ethics in clinical research, there is no universal consensus, and one may argue for or against their use with equal vigour depending on the specific circumstances of the trial [1, 2, 3, 4, 5].
The ethics of conducting clinical trials, particularly placebo-controlled trials (PCTs), in low-to-middle-income countries (LMICs) is controversial. The emergence of placebo-controlled vaccine clinical trials against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the causative agent of coronavirus disease 2019 (COVID-19), has renewed interest in the debate. According to the World Bank Group (2021) [6], LMICs are defined as countries in the developing world with a gross national income per capita of $1046–$4095. The need for clinical trials to be conducted in LMICs is paramount to bridging the health gap, a concept so starkly highlighted in the recent pandemic. Medical research in these countries requires proper adjudication and protection to maximise benefits and deter potential harm [7]. Pharmaceutical companies tend to pursue these communities, capitalising on lower costs, fewer restrictions, and weaker local standards [8]. In 2008, the United States Food and Drug Administration (FDA) published its decision to abandon the Declaration of Helsinki (DoH) as an ethical guideline when conducting and reviewing data from clinical trials conducted outside the United States of America (USA). As a result, there is limited protection for LMICs where USA-based companies conduct clinical trials [9]. The responsibility to uphold ethical standards thus shifts to the local investigators and review boards [9]. The limited protection referenced above pertains to the Good Clinical Practice guidelines of the International Conference on Harmonisation (ICH-GCP) being more flexible when choosing comparator drugs in clinical trials [10]. Article 33 of the DoH outlines stricter criteria for using placebo as an alternative to the best proven intervention [7]. ICH-GCP, however, has no enforceable article guiding the choice of comparator, leaving the decision to the study designer's and review board's discretion [10].
Consequently, this avenue is open to exploitation in LMICs, where scientific knowledge and ethical oversight are comparatively weaker [8]. Together, the DoH and ICH-GCP should govern the impact clinical trials have in vulnerable communities; however, there is more nuance to the situation [1, 2, 3]. To ensure exploitation does not occur, participants must not be exposed to excessive risk and must understand the difference between clinical trials and clinical medicine [1, 5, 6, 7]. To participate in a trial is to willingly join an experimental process rather than to receive a therapy tailored to one's medical needs [2, 3, 11, 12, 13, 14, 15, 16, 17]. We will discuss the aspects of trials as a whole and why LMICs are vulnerable to this exploitation, explore the concept of clinical equipoise, and discuss PCTs in the setting of COVID-19 vaccine trials during the pandemic. ## 2. What makes LMICs so vulnerable? Ethical research of any kind requires that undue influence not be used to secure participants' compliance [7, 8, 10]. However, an incentive that might seem too small to exert undue influence in developed countries could double, or even constitute, the entire income of a household in an LMIC affected by poverty, poor living conditions and no access to running water. For these reasons, it is imperative to consider the medical benefits of clinical trials [9]. In LMICs, there is a prevailing lack of access to healthcare [11]. Clinical trials offer the opportunity for recruited participants to receive healthcare not otherwise available to them [1, 2, 3, 4, 8]. The DoH states: “medical research within a vulnerable group is only justified if the research is responsive to the health needs...” [7]. This highlights an obvious gap in the protection of these populations, which is easily exploited if not appropriately safeguarded. The argument for choice of comparator weighs heavily on this point. Internationally, there is a call for using the “best available standard” when designing clinical trials.
However, in LMICs, there is no legal basis enforcing an international gold standard over a locally available comparator choice [1, 2, 3, 4, 5, 8, 9, 10, 11, 12]. Problems arise when conducting clinical trials in LMICs where there is no “locally available standard”. In vaccine trials specifically, where cold-chain continuation is paramount, some advocate against use of the international gold standard should the target country have insufficient infrastructure to ensure preservation of the drug. Multiple publications reference the lack of ethics in conducting clinical trials in LMICs [7, 18, 19, 20, 21]. Major challenges noted include: incomplete ethical regulations and guidance, limited knowledge of science, language barriers between researchers, sponsors and communities, and insufficient financial and material resources of local authorities to govern the conduct of clinical trials. Exploitation of LMICs is well documented throughout history. In 1994, the ACTG 076 trial, which reviewed low-cost regimens of antiretroviral drugs to prevent mother-to-child transmission of human immunodeficiency virus (HIV), was rife with controversy: placebo was administered despite proven knowledge that zidovudine is effective in prevention of vertical transmission of HIV [8, 22]. Another example was a rotavirus vaccine trial conducted in India from 2011 to 2012, which used placebo as the comparator despite the availability of two internationally registered vaccines recommended by the World Health Organisation (WHO) and one locally registered vaccine. As a result, more than 3000 out of 7500 randomised children were exposed to rotavirus and its associated risks [23]. Such use of placebo would be unacceptable in developed countries, and the expectation that LMICs should carry this burden is unjust. ## 3. Principles of medical ethics and how they apply to LMICs The cardinal pillars of medical ethics are autonomy, beneficence, non-maleficence, and justice.
All clinical research must take heed of these pillars, which can be described and applied in a myriad of ways [24]. ### 3.1 Social and clinical value Risk exposure must be justified; any resulting scientific knowledge gains should be significant enough to warrant inconveniencing the population and exposing it to unknown health outcomes for the greater good [1, 8]. There are certain areas of research that are better conducted in LMICs based on the endemic nature of diseases such as HIV and tuberculosis (TB) [19]. Populations are thus able to benefit directly from the trial as participants. The common shortfall is post-trial access to the tested intervention. There is massive debate on the responsibility of stakeholders to provide post-trial access to drugs. Some cite reciprocal justice to participants, while others cite the practicality of large-scale rollouts being the jurisdiction of government entities [19]. Clinical trials in LMICs need to account for these factors during study design to ensure value. ### 3.2 Scientific validity Studies should be designed to answer specific questions, using research methods that are valid and feasible [8]. Poor scientific reasoning and application of research trials damage the perception of clinical research. In the realm of LMICs, this can be deleterious to future relationships in garnering support for clinical trials. Standards in LMICs may permit approval of flawed studies by local authorities without better knowledge or insight. It is the responsibility of sponsors to ensure this avenue of exploitation is guarded against to the standard of the country of origin [1, 2, 3, 4, 5, 18]. ### 3.3 Favourable risk-benefit ratio Uncertainty about the degree of risks and benefits associated with a treatment is implicit in clinical research [8, 13, 14, 15, 16, 17]. Clinical research is not conducted to provide health care, though that is often a beneficial by-product [1, 2, 3, 4, 8].
When looking to conduct trials in LMICs and other vulnerable populations, there needs to be increased emphasis on these benefits [7]. Maintaining a favourable ratio requires the use of interim reviews during a study to timeously detect whether an intervention arm (active or control) is associated with increased risk. Should this occur, research should be stopped to allow the protection of participants [24]. ### 3.4 Independent review An independent review panel, with no vested interest in the outcome of the trial, should review proposal validity and ensure its integrity [8]. This is generally done by Institutional Review Boards (IRBs) and Research Ethics Committees (RECs). In high-income countries, these committees comprise highly qualified and experienced individuals who are well suited to manage complex ethical issues. Their role is to make sound, consistent and ethical decisions on matters related to patient safety. There is trepidation that IRBs and RECs in LMICs may not be adequately equipped to protect the human rights of clinical trial participants, due to lack of financial and material resources, inadequate training of members, lack of diversity of membership, lack of independence and inability to monitor approved protocols [21]. As a result, there may be inconsistencies in the review process outcomes, thus compromising patient safety. Some countries intentionally weaken their regulatory framework to encourage foreign investment through externally sponsored research [24]. These shortcomings are historically exploited to conduct research in LMICs that would never pass the review process in the country of origin [24]. ### 3.5 Informed consent Participants must decide, independently and without duress, to take part in research [8, 24]. Subjects must be appropriately informed of the purpose, method, risks, benefits, and alternatives to research [24]. Furthermore, subjects must be helped to understand these factors and how they apply to their situation.
In LMICs, common roadblocks are language barriers and low minimum levels of education [18, 21]. At times it is difficult to gauge the degree of participants' understanding, especially when translators are used [21]. Even in developed countries, understanding of what a PCT is can be lacking. ### 3.6 Respect for subjects Confidentiality and ensuring informed voluntary participation are the hallmarks of respecting subjects' autonomy [8]. Abiding by a subject's wish to withdraw consent and discontinue a trial without consequence is important [24]. Sharing information that arises from interim review of previously unknown adverse events is the responsibility of research staff, even though it may change the subject's opinion on the risks and benefits of participation [24]. In LMICs there is a role for Community Advisory Boards (CABs) to advocate for trial participants and promote the ethical conduct of clinical trials [20]. In South Africa, CABs are used to protect the interests of participants in HIV and TB drug trials. Their main roles have been preventing exploitation and building capacity for research in communities. Because they consist of community members and encourage individual interest in research, CABs bring the value of research to the home country. They also assist in advancing research goals, with positive effects on information distribution and recruitment for trials [25]. ## 4. Effects of Covid-19 on LMICs The Covid-19 pandemic amplified the effects of resource deficiency in LMICs. In such countries, where healthcare access is limited under normal circumstances, the added burden of a pandemic has caused, and will continue to cause, vast morbidity and mortality. In India alone, there were an estimated 30,000–50,000 ventilators available to service a projected 1 million people with severe disease requiring ventilation during the peak of the first COVID-19 wave [26]. 
Access to oxygen in LMICs has been a challenge, with many hospitals running out of oxygen amidst a massive surge in patients due to COVID-19. During these periods, many hospitals in LMICs also had a shortage of beds and staff [26]. The development of life-saving vaccines against Covid-19 can mitigate this devastation by preventing severe disease and limiting hospitalisation. With the development and initial distribution of vaccines occurring predominantly in the western world's high-income countries, a potentially faster way for LMICs to receive any form of Covid-19 vaccination is through clinical trials. While controversial, it cannot be overstated, especially given the critical nature of the pandemic, that clinical trials need to collaborate with communities in LMICs and involve them in research on novel and affordable versions of these vaccines. This would provide those communities all the benefits of early access and equip their governments with tools to advance a vaccine rollout plan, while the scientific world benefits from the data they provide. Figure 1 [27], from the United Nations Office for the Coordination of Humanitarian Affairs, shows countries with an inter-agency humanitarian response plan (HRP) and the vaccine coverage provided by developed countries to those HRP countries. It clearly illustrates the health gap: the inequity of vaccine rollouts on a global scale. The genetic sequence of SARS-CoV-2 was published in early January 2020. A global research and development effort followed to develop a safe and efficacious vaccine. Human clinical testing of the first vaccine candidate started in March 2020 [28]. At that time there was no doubt that PCTs were ethical and the preferred trial design to test potential Covid-19 vaccines. A global pandemic is a highly dynamic situation, and regular change in the clinical landscape can be expected. 
Ethical and moral considerations will also change as new developments unfold and new information becomes available. These considerations will be influenced by existing global inequality and a lack of resources in LMICs. The moment a safe and efficacious Covid-19 vaccine became available, it opened the ethical debate as to how long it would remain ethical to continue using PCTs. ## 5. The debate on continuing PCTs for Covid-19 vaccines ### 5.1 Advantages of using PCTs in Covid-19 vaccines Most of the arguments in favour of continuing the use of PCTs for Covid-19 vaccines stem from a document published in The New England Journal of Medicine by the WHO Ad Hoc Expert Group on the Next Steps for Covid-19 Vaccine Evaluation in January 2021 [29]. More than a year later, one could argue that this information is now outdated, but even though the global vaccine situation has changed dramatically, the WHO has not released any new guidelines. The WHO Ad Hoc Expert Group stated that the initial vaccine roll-out would be slow and in limited quantities. This would provide an opportunity to ethically obtain socially valuable data, which could then be used to improve regulatory and public health decision making. Better data would increase public and professional confidence in vaccines [29]. Rapid vaccination of large numbers of people will inevitably cause the vaccine to seem temporally associated with some uncommon side effects. These probably unrelated events, occurring by chance after vaccination, may incorrectly be attributed to the vaccine [29]. A large, simple PCT could identify any rare short-term side-effects or demonstrate their absence. Groups opposed to vaccination may deliberately spread such anecdotes and cause vaccine hesitancy [29, 30]. Randomised noninferiority trials can provide clinically relevant data but, according to the WHO, "at considerable cost to efficiency". PCTs would assist in earning the broad public confidence required for widespread vaccine acceptance. 
A PCT is still the scientific gold standard for testing any new intervention, and alternative designs could yield inferior data [29, 30, 31, 32, 33, 34, 35]. Even following the availability of the first vaccines, it remains crucial to evaluate further candidate vaccines to meet global needs. Observational data obtained from non-randomised studies after vaccine deployment could yield inaccurate answers and suffer from substantial biases [29]. The obligations researchers have to participants are not the same as those clinicians have towards patients [32, 33]. Proper informed consent would allow participants to voluntarily enrol in a trial and accept some risks in order to collect socially valuable data. In this unparalleled global crisis, billions of individuals might benefit from finding a new safe and efficacious vaccine, and thus some participants might enrol in PCTs for altruistic reasons [29]. Those in favour of continuing to use PCTs for Covid-19 vaccines argue that research-ethics guidelines such as the DoH and the Council for International Organizations of Medical Sciences (CIOMS) guidelines were not written with Emergency Use Designation (EUD) of vaccines in mind [29]. A vaccine approved under EUD is not thereby a "best proven intervention" or an "established effective intervention" [29]. In January 2021 the WHO stated explicitly that trial sponsors are not ethically obligated to unblind participants who wish to obtain a different investigational vaccine [29]. As with any clinical trial, participants would still have the option to withdraw from the trial, and the WHO did not comment on this possibility when discussing engagement with participants who request unblinding. ### 5.2 Ethical shortcomings of using PCTs in Covid-19 vaccines Several Covid-19 vaccines have been approved for emergency use, with some vaccines already receiving full approval from various regulatory authorities across the world. 
These vaccines have been proven to have high levels of efficacy and safety [36]. Once a vaccine with a high level of safety and efficacy is available, new candidate vaccines should be tested against approved vaccines, and ongoing PCTs of Covid-19 vaccine candidates should be unblinded [37]. Continuing PCTs of new vaccines where efficacious vaccines already exist contravenes the bioethics principle of beneficence. It is in direct conflict with the participants' best interests and puts them at a disadvantage compared with people who are not in the trial. Researchers have a duty not to harm participants in clinical trials. In using a placebo, researchers fail to provide protection against a deadly pandemic, as various Covid-19 vaccines have been proven to be safe, efficacious and available [36]. The harm to which participants in the control group are exposed is not minor [38]. Given the limitations of current treatment options, it is in everyone's interest to take the first vaccine found to be safe rather than participate in a PCT. Ravinetto [39] stated that an ethical strategy cannot be built on an unethical premise; in this case the premise is the inequitable allocation of vaccines between countries, which is in essence unethical. This is a prime example of so-called "ethics dumping": undertaking research in LMIC settings that would not be permitted in high-income settings. It reverses the principle of benefit sharing in global research: the burden of the research is much higher for the most vulnerable communities, while the benefits are available only to the higher-income countries. In social justice terms, global health research should generate knowledge that improves the health and well-being of disadvantaged and marginalised communities [39, 40]. In the context of Covid-19, these communities are mostly in LMICs, among those who lack access to vaccines because of unequal global distribution. 
Research involving this group should be based on health and social justice, rather than building on the existing structural injustice and exploitation of these groups. Clinical equipoise can be defined as a state of uncertainty, or true ambivalence, in the medical or scientific community toward the efficacy of a novel therapy [41, 42, 43]. Based on the principle of beneficence, if any novel therapy is believed, by consensus, to be efficacious, research subjects should not be denied access to that therapy. Similarly, if the novel therapy is found to cause harm, then continuing it as an investigational product would violate the principle of nonmaleficence. The ethical loophole used by supporters of continuing PCTs for Covid-19 vaccines in LMICs rests on the fact that the so-called local "standard of care" in many LMICs would be no vaccine, or very restricted access to a vaccine [40]. Arguing equipoise between placebo and the local standard, in settings where vaccines are unavailable only because of vaccine nationalism and lack of equity, would be unethical. One could argue that the ethical standards applied in LMICs should be the same as if the research were carried out in the sponsoring country: people should be treated fairly, regardless of where they live. The WHO states that if vaccines are not properly tested (with continuing PCTs), this might lead to public distrust [29]. However, in the authors' opinion, applying a different set of ethical standards to certain countries just because they are poor, disadvantaged and cannot afford vaccines could lead to even more distrust and scepticism about how the WHO is dealing with global inequality during a deadly pandemic. 
Although those who argue in favour of continuing to use PCTs for Covid-19 vaccines state that the randomised controlled trial is the gold standard for modern clinical decision making [29, 30, 31, 32, 33, 34, 35], there are various other study designs that could yield good data without endangering or leaving any participants unprotected [37, 38, 39, 40, 44, 45, 46, 47, 48]. Non-inferiority blinded active-control trials can compare trial vaccines to an established vaccine without leaving any participants unvaccinated [39]. If research can be conducted in several ways, the method that minimises morbidity and loss of human life should be prioritised above a method that supposedly gives scientific results more efficiently but at greater risk to the participants. Various authors have commented on the WHO's call to altruism aimed at communities in LMICs [39, 40, 44]. To quote the WHO document: "people who enrol in clinical trials would probably understand the value of gathering data that will further elucidate the safety and efficacy of these vaccines and their appropriate use" [29]. This seems to underestimate the complex nature of decision-making about clinical trial participation by people living in LMICs [49]. The WHO is expecting people from poor, marginalised and disadvantaged communities to accept that their sacrifice will be for the greater good. Firstly, this sacrifice is by definition not expected from their counterparts in higher-income countries. Secondly, the "value of gathering data" will directly benefit the sponsor country long before it benefits the general population of the LMIC where the trial was conducted. Ironically, Dr Ghebreyesus, the WHO Director-General, stated in relation to global vaccine inequity: "The world is on the brink of a catastrophic moral failure—and the price of this failure will be paid with lives and livelihoods in the world's poorest countries" [50]. 
Now it is the WHO Ad Hoc Expert Group that is essentially giving researchers free rein to continue using PCTs in countries too poor to afford vaccines, misusing the clinical equipoise argument and thus further marginalising the vulnerable communities it is meant to protect. When conducting trials in LMICs there are specific considerations regarding informed consent [33, 34]. The level of literacy in LMICs is lower than in high-income countries, and thus a participant is more likely to have a diminished understanding of what a PCT entails [34]. The authors can reflect on several cases where, despite extensive counselling and a thorough informed consent process, participants in various Covid-19 vaccine PCTs, on returning for subsequent visits, could not recall being told of the possibility of having a salt-water injection. This caused great confusion for many participants and highlights the ethical challenges of conducting clinical trials in LMICs. In many LMICs, some participants are more likely to trust a clinician blindly, as medicine is seen as a sacred profession [40]. The clinician is supposed to act in the patient's best interest, and participants cannot be expected always to know the subtle difference between a clinician and an investigator. In some cultures, the clinician is seen as an authority figure and participants find it difficult to say no [40]. Medical care is often limited in LMICs, and joining a clinical trial is a way for a participant to gain access to a scarce resource. In many LMICs there is still great uncertainty about access to vaccines, and people are vulnerable and desperate for any type of help. The Covid-19 pandemic has led to a massive socio-economic crisis with countless jobs lost. The exact economic effect of Covid-19 on LMICs will not be discussed in this chapter, but in summary, all can agree that the economic situation of many people living in LMICs is worse than ever. 
The monetary compensation involved in being part of a clinical trial, when converted into the currency of the sponsor nation, is often seen as negligible and not considered a formal payment. For a participant in an LMIC, there is a higher probability that this payment is a very strong motivator to join a clinical trial, even if there are substantial risks. If the participant joins the trial as the only way to feed themselves and their family, one could argue that the consent is not truly informed. The WHO stated that sponsors are not ethically obliged to unblind subjects as soon as a different investigational vaccine becomes available [29, 31]. In South Africa, the vaccine roll-out for health care workers took place via the Sisonke open-label clinical trial; outside of a clinical trial setting, there was no other way for a health care worker to get vaccinated. A small group of these health care workers were already participating in Covid-19 vaccine PCTs. This is another example of how participants in LMICs are motivated to participate in PCTs: a 50% chance of getting the vaccine is still better than no chance at all. Even though, according to the WHO, sponsors were not obliged to unblind these participants, it would have caused a great ethical scandal had they not been given the option. In the authors' experience, many participants made it clear that if they were denied unblinding, when they had a 100% chance of getting a vaccine elsewhere that was already authorised in several other countries, they would certainly withdraw from the PCT. Ahmad and Dhrolia posed the following comparison. 
The question “Should we continue or permit placebo controlled vaccine trials for Covid-19 disease, when available vaccines have been found safe, efficacious and in use in many countries?” sounds very similar to “Should West African HIV-positive pregnant women receive placebo in HIV placebo-controlled trials when Zidovudine was found safe and efficacious for the prevention of vertical transmission of HIV infection elsewhere in the world?” or “Should African American men of Tuskegee, Alabama remain untreated even when penicillin was found safe and efficacious for the treatment of syphilis?” [40]. Any researcher who has attended training in Good Clinical Practice should be familiar with these historical events, which serve as extreme examples of what should not happen during a clinical trial. They involve vulnerable, disadvantaged, poor populations with potentially life-threatening diseases. These populations did not have access to the standard of care available in higher-income countries, and were trying to access interventions that the sponsors were able to provide but deliberately chose not to. Multiple Covid-19 vaccines have been found to be safe and efficacious, and they are one of the only lifesaving interventions able to end the pandemic. Over ten billion doses of vaccines have been administered globally [35]. The vaccine is freely available in high-income countries, but only 9.5% of people in low-income countries have received at least one dose [51]. It could easily be argued that continuing to use PCTs for Covid-19 vaccines, and thus further exploiting the vulnerable population in LMICs, is no different from the West Africa or Tuskegee trials, and thus grossly unethical and a violation of human rights. ## 6. Conclusions PCTs for a COVID-19 vaccine in vulnerable populations need to be reviewed with the above in mind. 
The onus of patient safety when looking to conduct clinical trials in vulnerable populations, which LMICs certainly are, should remain with the sponsor and the country of origin that will benefit most from the data. Emphasis must be placed on autonomy: subjects need to understand what they are doing by participating. Cultural differences make this difficult, and the need for community engagement is paramount. We have highlighted weaknesses in the ethical review process in LMICs, the lack of resources being a major factor. The COVID-19 pandemic has strained these resources further in all respects, creating a need for investment in the health of LMICs by sponsors conducting clinical research. Such investment is necessary to bridge the health gap and provide healthcare access to the most impoverished of the world. We cannot, however, use this fact to exploit LMICs; that they are so desperate is more reason for sponsors to protect these communities, not less. The debate continues, and each case will have its own nuance. The major questions to take away are: who benefits from the research; how will the local government use that research to generate national programmes; are the communities adequately engaged throughout the process of clinical trials; and what ethical frameworks, enforceable or otherwise, exist in the designated country where the trial will take place. Answering these questions will assist sponsors in maintaining ethical practice and protecting research participants in LMICs. ## Conflict of interest The authors declare no conflict of interest. ## References 1. Miller FG, Brody H. What makes placebo-controlled trials unethical? The American Journal of Bioethics. 2002;2(2):3-9 2. Millum J, Grady C. The ethics of placebo-controlled trials: Methodological justifications. Contemporary Clinical Trials. 2013;36(2):510-514. DOI: 10.1016/j.cct.2013.09.003 3. Amdur RJ, Biddle CJ. An algorithm for evaluating the ethics of a placebo-controlled trial. 
International Journal of Cancer. 2001;96(5):261-269. DOI: 10.1002/ijc.1026 4. Chiodo GT, Tolle SW, Bevan L. Placebo-controlled trials: Good science or medical neglect? The Western Journal of Medicine. 2000;172(4):271-273. DOI: 10.1136/ewjm.172.4.271 5. Streiner DL. Placebo-controlled trials: When are they needed? Schizophrenia Research. 1999;35(3):201-210. DOI: 10.1016/s0920-9964(98)00126-1 6. World Bank Official Definition of Low- and Middle-Income Countries [Internet]. 2021. Available from: https://datahelpdesk.worldbank.org/knowledgebase/articles/906519-world-bank-country-and-lending-groups. [Accessed: 15 January 2022] 7. World Medical Association. Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects [Internet]. 2013. Available from: https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/. [Accessed: 20 January 2022] 8. Bittker BM. The ethical implications of clinical trials in low- and middle-income countries. Human Rights Magazine. Vol. 46(4): The truth about science; 14 June 2021 9. Burgess LJ, Pretorius D. FDA abandons the Declaration of Helsinki: The effect on the ethics of clinical trial conduct in South Africa and other developing countries. South African Journal of Bioethics and Law. 2012;5(2):87-90. DOI: 10.7196/SAJBL.222 10. International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) [Internet]. 2013. Available from: https://database.ich.org/sites/default/files/E6_R2_Addendum.pdf. [Accessed: 24 January 2022] 11. Vieta E, Cruz N. Head to head comparisons as an alternative to placebo-controlled trials. European Neuropsychopharmacology. 2012;22(11):800-803. DOI: 10.1016/j.euroneuro.2011.11.011 12. Miller FG. Placebo-controlled trials in psychiatric research: An ethical perspective. Biological Psychiatry. 2000;47(8):707-716. DOI: 10.1016/s0006-3223(00)00833-7 13. 
Tramèr MR, Reynolds DJ, Moore RA, McQuay HJ. When placebo controlled trials are essential and equivalence trials are inadequate. British Medical Journal. 1998;317(7162):875-880. DOI: 10.1136/bmj.317.7162.875 14. Christakis NA. Ethics are local: Engaging cross-cultural variation in the ethics for clinical research. Social Science and Medicine. 1992;35(9):1079-1091. DOI: 10.1016/0277-9536(92)90220-k 15. Simon R. Are placebo-controlled clinical trials ethical or needed when alternative treatment exists? Annals of Internal Medicine. 2000;133(6):474-475 [published correction appears in Ann Intern Med. 2000 Nov 7;133(9):754]. DOI: 10.7326/0003-4819-133-6-200009190-00017 16. Temple R, Ellenberg SS. Placebo-controlled trials and active-control trials in the evaluation of new treatments. Part 1: Ethical and scientific issues. Annals of Internal Medicine. 2000;133(6):455-463. DOI: 10.7326/0003-4819-133-6-200009190-00014 17. Ellenberg SS, Temple R. Placebo-controlled trials and active-control trials in the evaluation of new treatments. Part 2: Practical issues and specific cases. Annals of Internal Medicine. 2000;133(6):464-470. DOI: 10.7326/0003-4819-133-6-200009190-00015 18. Moodley K, Myer L. Health Research Ethics Committees in South Africa 12 years into democracy. BMC Medical Ethics. 2007;8:1. Published 25 January 2007. DOI: 10.1186/1472-6939-8-1 19. Millum J. Post-trial access to antiretrovirals: Who owes what to whom? Bioethics. 2011;25(3):145-154. DOI: 10.1111/j.1467-8519.2009.01736.x 20. Zhao Y, Fitzpatrick T, Wan B, Day S, Mathews A, Tucker JD. Forming and implementing Community Advisory Boards in low- and middle-income countries: A scoping review. BMC Medical Ethics. 2019;20(1):73. Published 17 October 2019. DOI: 10.1186/s12910-019-0409-3 21. Davies SEH. The introduction of research ethics review procedures at a university in South Africa: Review outcomes of a social science research ethics committee. Research Ethics. 2020;16(1-2):1-26. 
DOI: 10.1177/1747016119898408 22. de Zulueta P. Randomised placebo-controlled trials and HIV-infected pregnant women in developing countries. Ethical imperialism or unethical exploitation? Bioethics. 2001;15(4):289-311. DOI: 10.1111/1467-8519.00240 23. Kulkarni P, Desai S, Tewari T, et al. A randomized Phase III clinical trial to assess the efficacy of a bovine-human reassortant pentavalent rotavirus vaccine in Indian infants. Vaccine. 2017;35(45):6228-6237. DOI: 10.1016/j.vaccine.2017.09.014 24. Avasthi A, Ghosh A, Sarkar S, Grover S. Ethics in medical research: General principles with special reference to psychiatry research. Indian Journal of Psychiatry. 2013;55(1):86-91. DOI: 10.4103/0019-5545.105525 25. Reddy P, Buchanan D, Sifunda S, James S, Naidoo N. The role of community advisory boards in health research: Divergent views in the South African experience. Journal of Social Aspects of HIV/AIDS. 2010;7(3):2-8. DOI: 10.1080/17290376.2010.9724963 26. Carter C, Lan Anh NT, Notter J. COVID-19 disease: Perspectives in low- and middle-income countries. Clinics in Integrated Care. 2020;1:1-9. DOI: 10.1016/j.intcar.2020.100005 27. United Nations Office for the Coordination of Humanitarian Affairs. COVID-19 Data Explorer: Global Humanitarian Operations. Monthly Report, 31 December 2021 [Internet]. Available from: https://data.humdata.org/visualization/covid19-humanitarian-operations/. [Accessed: 31 March 2022] 28. Thanh Le T, Andreadakis Z, Kumar A, et al. The COVID-19 vaccine development landscape. Nature Reviews Drug Discovery. 2020;19(5):305-306. DOI: 10.1038/d41573-020-00073-5 29. WHO Ad Hoc Expert Group on the Next Steps for Covid-19 Vaccine Evaluation, Krause PR, Fleming TR, et al. Placebo-controlled trials of Covid-19 vaccines - Why we still need them. The New England Journal of Medicine. 2021;384(2):e2. DOI: 10.1056/NEJMp2033538 30. Grady C, Shah S, Miller F, et al. So much at stake: Ethical tradeoffs in accelerating SARS-CoV-2 vaccine development. 
Vaccine. 2020;38(41):6381-6387. DOI: 10.1016/j.vaccine.2020.08.017 31. World Health Organisation Access to COVID-19 Tools Accelerator Ethics and Governance Working Group, Singh J. Emergency Use Designation of COVID-19 candidate vaccines: Ethical considerations for current and future COVID-19 placebo-controlled vaccine trials and trial unblinding. Policy brief. 2020. DOI: 10.13140/RG.2.2.14560.30728 32. Wendler D, Ochoa J, Millum J, Grady C, Taylor HA. COVID-19 vaccine trial ethics once we have efficacious vaccines. Science. 2020;370(6522):1277-1279. DOI: 10.1126/science.abf5084 33. Sisa I, Noblecilla E, Orozco F. Rationale to continue approving placebo-controlled COVID-19 vaccine trials in LMICs. Lancet. 2021;397(10277):878. DOI: 10.1016/S0140-6736(21)00357-3 34. Singh JA, Kochhar S, Wolff J, WHO ACT-Accelerator Ethics & Governance Working Group. Author Correction: Placebo use and unblinding in COVID-19 vaccine trials: Recommendations of a WHO Expert Working Group. Nature Medicine. 2021;27(5):925. DOI: 10.1038/s41591-021-01360-3 35. Lenzer J. Covid-19: Should vaccine trials be unblinded? British Medical Journal. 2020;371:m4956. Published 29 December 2020. DOI: 10.1136/bmj.m4956 36. Covid-19 Vaccine Tracker: Approved Vaccines [Internet]. 2022. Available from: https://covid19.trackvaccines.org/vaccines/approved/ [Accessed: 07 February 2022] 37. Ortiz-Millán G. Placebo-controlled trials of Covid-19 vaccines - Are they still ethical? Indian Journal of Medical Ethics. 2021;VI(2):1-8. DOI: 10.20529/IJME.2021.015 38. Wiesing U, Ehni HJ. Placebo control in Covid-19 trials: A missed opportunity for international guidance. Indian Journal of Medical Ethics. 2021;VI(2):1-7. DOI: 10.20529/IJME.2021.022 39. Ravinetto R. Problematic Covid-19 vaccine trials in times of vaccine nationalism. Indian Journal of Medical Ethics. 2021;VI(2):1-7. DOI: 10.20529/IJME.2021.014 40. Ahmad A, Dhrolia MF. "No" to placebo-controlled trials of Covid-19 vaccines. 
Indian Journal of Medical Ethics. 2021;VI(2):1-7. DOI: 10.20529/IJME.2021.019 41. Freedman B. Equipoise and the ethics of clinical research. The New England Journal of Medicine. 1987;317(3):141-145. DOI: 10.1056/NEJM198707163170304 42. London AJ. Social value, clinical equipoise, and research in a public health emergency. Bioethics. 2019;33(3):326-334. DOI: 10.1111/bioe.12467 43. Friesen P, Caplan AL, Miller JE. COVID-19 vaccine research and the trouble with clinical equipoise. Lancet. 2021;397(10274):576. DOI: 10.1016/S0140-6736(21)00198-7 44. Alqahtani M, Mallah SI, Stevenson N, Doherty S. Vaccine trials during a pandemic: Potential approaches to ethical dilemmas. Trials. 2021;22(1):628. Published 15 September 2021. DOI: 10.1186/s13063-021-05597-8 45. van der Graaf R, Cheah PY. Ethics of alternative trial designs and methods in low-resource settings. Trials. 2019;20(Suppl. 2):705. Published 19 December 2019. DOI: 10.1186/s13063-019-3841-2 46. Bonati M. Restrictions on the use of placebo in new COVID-19 vaccine trials. European Journal of Clinical Pharmacology. 2022;78(1):147-148. DOI: 10.1007/s00228-021-03203-z 47. Knottnerus JA. New placebo-controlled Covid-19 vaccine trials are ethically questionable; it's now about comparative effectiveness and availability of registered vaccines. Journal of Clinical Epidemiology. 2021;133:175-176. DOI: 10.1016/j.jclinepi.2021.03.006 48. Hunt A, Saenz C, Littler K. The global forum on bioethics in research meeting, "ethics of alternative clinical trial designs and methods in low- and middle-income country research": Emerging themes and outputs. Trials. 2019;20(Suppl. 2):701. Published 19 December 2019. DOI: 10.1186/s13063-019-3840-3 49. Browne JL, Rees CO, van Delden JJM, et al. The willingness to participate in biomedical research involving human beings in low- and middle-income countries: A systematic review. Tropical Medicine & International Health. 2019;24(3):264-279. DOI: 10.1111/tmi.13195 50. 
WHO Vaccine Equity Declaration [Internet]. 2022. Available from: https://www.who.int/campaigns/vaccine-equity/vaccine-equity-declaration. [Accessed: 23 January 2022] 51. Coronavirus (COVID-19) Vaccinations [Internet]. 2022. Available from: https://www.bloomberg.com/graphics/covid-vaccine-tracker-global-distribution/ [Accessed: 23 January 2022] Written by Lesley Burgess, Jurie Jordaan and Matthew Wilson. Submitted: February 10th, 2022. Reviewed: April 1st, 2022. Published: May 5th, 2022.
# Math Help - Arithmetic mean and Geometric mean 1. ## Arithmetic mean and Geometric mean Q) A and B are two numbers such that their G.M. is 20% lower than their A.M. Find the ratio between the two numbers. Thanks, Ashish 2. Originally Posted by a69356 Q) A and B are two numbers such that their G.M. is 20% lower than their A.M. Find the ratio between the two numbers. Thanks, Ashish $\sqrt{a b} = \frac{4}{5} \left( \frac{a + b}{2} \right)$ $\Rightarrow ab = \frac{4}{25} (a + b)^2 \Rightarrow 25 ab = 4a^2 + 8 ab + 4 b^2 \Rightarrow 25 = 4 \left( \frac{a}{b}\right) + 8 + 4 \left(\frac{b}{a}\right)$. Let the ratio be $x = \frac{a}{b}$. Then: $25 = 4x + 8 + \frac{4}{x}$. Your job is to solve for x. 3. Hello, Ashish! My solution is similar to Mr. F's . . . $A$ and $B$ are two numbers such that their G.M. is 20% lower than their A.M. Find the ratio between the two numbers. Let $a$ and $b$ be the two numbers. We have: . $\sqrt{ab} \:=\:\frac{4}{5}\left(\frac{a+b}{2}\right) \quad\Rightarrow\quad 5\sqrt{ab} \:=\:2(a+b)$ Square both sides: . $25ab \;=\;4a^2 + 8ab + 4b^2 \quad\Rightarrow\quad 4a^2 - 17ab + 4b^2 \:=\:0$ . . which factors: . $(a - 4b)(4a - b) \:=\:0$ Hence, we have: . $\begin{Bmatrix}a - 4b \:=\:0 & \Rightarrow & \dfrac{a}{b} \:=\: 4 \\ \\[-2mm] 4a - b \:=\:0 & \Rightarrow & \dfrac{a}{b} \:=\:\dfrac{1}{4} \end{Bmatrix}$
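For completeness, the quadratic left as an exercise in the first reply can be finished as follows (a routine algebra step, not part of the original thread):

$25 = 4x + 8 + \dfrac{4}{x} \;\Rightarrow\; 4x^2 - 17x + 4 = 0 \;\Rightarrow\; (4x - 1)(x - 4) = 0 \;\Rightarrow\; x = 4 \text{ or } x = \dfrac{1}{4}$

Either ratio is valid (it depends only on which number is labelled $a$), in agreement with the factorisation in the second solution. As a quick check with $a = 4$, $b = 1$: A.M. $= \frac{5}{2}$ and G.M. $= 2 = \frac{4}{5}\cdot\frac{5}{2}$, i.e. exactly 20% lower.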
# Proving a topological space is/is not Hausdorff?

I know this is a basic question, but I am having trouble proving that particular topological spaces are/are not Hausdorff and was wondering if I could get some guidance. For example, I have to decide whether or not the half-open topology is Hausdorff and then prove my decision. I think that it is, and my proof is as follows:

Let $U, V \subset H'$ such that $x \in U$ and $y \in V$ with $x \not= y$. Then for $a,b,c \in \mathbb{R}$, we can construct $U = [a,b)$ and $V=[b,c)$, which are disjoint sets. Thus for any $x,y \in \mathbb{R}$, there exist two open sets $U$ and $V$ that contain $x$ and $y$ such that $U \cap V = \emptyset$. It follows that $H'$ is Hausdorff.

Basically, I feel like this proof is wrong or incomplete or something. Could anyone give me any pointers about this proof and how to go about proofs like this in general? Thanks.

## migrated from mathoverflow.net Feb 13 '14 at 21:31

This question came from our site for professional mathematicians.

• Perhaps you want a<x<b<y<c? – David Steinberg Feb 13 '14 at 21:23
• It's hard to tell what you're doing. Instead, find $U$ and $V$ explicitly. For example, if $x<y$ are given, you can set $U=[x,y)$ and $V=[y,y+1)$. – David Mitra Feb 13 '14 at 21:38
• So let's take $x=3$ and $y=4$. How does it help me to know that I can construct the sets $[10,11)$ and $[11,12)$ ? – WillO Feb 14 '14 at 13:21

It's hard to tell what you are doing. You have to find explicit open sets $U$ and $V$ depending on $x$ and $y$ which are disjoint and contain $x$, $y$, respectively. You can assume that $x<y$; then one of the sets could be $[y,\infty)$ or $[y,y+1)$, the other set could be $[x,y)$.
Note also that the complement of a basic open set $[a,b)$ is again open, as it's the union of the basic open sets $$\bigcup_{c<a} [c,a)\cup\bigcup_{b<d} [b,d)$$ Since for any two points $x<y$, the basic set $[x,y)$ contains $x$ but not $y$, there is always a separation of the space between $x$ and $y$. A space with this property (which is stronger than the Hausdorff property) is called totally separated. If you want to practice some problem solving like this, I suggest that you try to show that $X$ is also normal, i.e. for disjoint closed sets $A$ and $B$ there are disjoint open neighbourhoods. The proof is not very difficult.
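A finite spot-check of the explicit separating sets suggested in the answer (my own sketch; checking a few sample points is of course not a proof, just a sanity check of the construction):

```python
# For x < y in the lower-limit (half-open) topology, take U = [x, y) and
# V = [y, y+1). These contain x and y respectively and cannot overlap,
# since U requires t < y while V requires t >= y.
def half_open(a, b):
    """Membership test for the basic open set [a, b)."""
    return lambda t: a <= t < b

x, y = 3, 4                      # the sample points from the comments
U, V = half_open(x, y), half_open(y, y + 1)
print(U(x), V(y))                # True True -- each set contains its point
print([t for t in (3, 3.5, 4, 4.5) if U(t) and V(t)])  # [] -- no common point
```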
# Kerodon $\Newextarrow{\xRightarrow}{5,5}{0x21D2}$ Construction 3.3.6.1 (The $\operatorname{Ex}^{\infty }$ Functor). For every nonnegative integer $n$, we let $\operatorname{Ex}^{n}$ denote the $n$-fold iteration of the functor $\operatorname{Ex}: \operatorname{Set_{\Delta }}\rightarrow \operatorname{Set_{\Delta }}$ of Construction 3.3.2.5, given inductively by the formula $\operatorname{Ex}^{n}(X) = \begin{cases} X & \text{ if } n = 0 \\ \operatorname{Ex}( \operatorname{Ex}^{n-1}(X) ) & \text{ if } n > 0. \end{cases}$ For every simplicial set $X$, we let $\operatorname{Ex}^{\infty }(X)$ denote the colimit of the diagram $X \xrightarrow { \rho _{X} } \operatorname{Ex}(X) \xrightarrow { \rho _{ \operatorname{Ex}(X) } } \operatorname{Ex}^2(X) \xrightarrow { \rho _{ \operatorname{Ex}^2(X)} } \operatorname{Ex}^3(X) \rightarrow \cdots ,$ where each $\rho _{ \operatorname{Ex}^{n}(X) }$ denotes the comparison map of Construction 3.3.4.3, and we let $\rho ^{\infty }_{X}: X \rightarrow \operatorname{Ex}^{\infty }(X)$ denote the tautological map. The construction $X \mapsto \operatorname{Ex}^{\infty }(X)$ determines a functor $\operatorname{Ex}^{\infty }$ from the category of simplicial sets to itself, and the construction $X \mapsto \rho _{X}^{\infty }$ determines a natural transformation of functors $\operatorname{id}_{ \operatorname{Set_{\Delta }}} \rightarrow \operatorname{Ex}^{\infty }$.
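As a loose programming analogy only (the real $\operatorname{Ex}$ functor acts on simplicial sets, not numbers), the inductive formula defining $\operatorname{Ex}^{n}$ is ordinary $n$-fold iteration of a map, with $\operatorname{Ex}^{0} = \operatorname{id}$:

```python
# n-fold iteration, mirroring the inductive formula: f^0 = id, f^n = f(f^(n-1)).
def iterate(f, n, x):
    for _ in range(n):
        x = f(x)
    return x

# A numeric stand-in for the functor, applied 0 through 4 times:
f = lambda k: 2 * k + 1
print([iterate(f, n, 0) for n in range(5)])  # [0, 1, 3, 7, 15]
```

The colimit $\operatorname{Ex}^{\infty}(X)$ then plays the role of the "limit" of this increasing tower along the comparison maps $\rho$.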
# Compute the sum $$\frac{2}{1 \cdot 2 \cdot 3} + \frac{2}{2 \cdot 3 \cdot 4} + \frac{2}{3 \cdot 4 \cdot 5} + \cdots$$

michaelcai  Oct 5, 2017

#1
6/(6 · 9 · 12)
Guest Oct 10, 2017

#2
Let's define:
$$A(n)=\frac{2}{n*(n+1)*(n+2)}\\ S(n)=A(1)+A(2)+A(3)+.....+A(n)=\sum_{i=1}^{n}A(i)=\sum_{i=1}^{n}\frac{2}{i*(i+1)*(i+2)}$$
Now I am going to try to find a pattern for S(n) (a pattern that describes S(n) for every n):
$$S(1)=\frac{2}{1*2*3}=\frac{2}{6}\\ S(2)=\frac{2}{1*2*3}+\frac{2}{2*3*4}=\frac{5}{12}\\ S(3)=\frac{2}{1*2*3}+\frac{2}{2*3*4}+\frac{2}{3*4*5}=\frac{9}{20}\\ S(4)=\frac{2}{1*2*3}+\frac{2}{2*3*4}+\frac{2}{3*4*5}+\frac{2}{4*5*6}=\frac{14}{30}$$
The results seem to get closer and closer to one half... Let's investigate! Look at the denominators of the fractions: 6, 12, 20, 30. The differences between consecutive denominators: 12=6+6, 20=12+8, 30=20+10. It seems that the differences between the denominators keep growing: 6, then 8, then 10... The differences of the differences of the denominators grow by 2! Maybe it's an arithmetic progression?

Now, before anyone posts a long, bitter response to this answer, I am NOT saying that the pattern I found is a pattern that TRULY describes S(n), and there are other ways to solve this question without having to guess what some of the formulas are. However, sometimes trying to find patterns can be very useful -- when you find one, you can test its validity using induction. ANOTHER thing I want to add is that I chose the denominator of each fraction for a reason. I could have written 7/15 instead of 14/30. Why did I do that? What's so special about those denominators?
I chose the denominators so that every fraction will look like this:
$$\frac{x-1}{2x}$$
(where x is some natural number). I know that it helps me understand that the series converges to one half, and helps me find a pattern for the denominators, meaning that I also have a pattern for the numerators.

Now I am going to guess what S(n) is using a fraction of the same shape as the previous sums. First I will find the denominator. The first denominators were 6, 12, 20 and 30, and I suggested that the differences between the denominators grow by 2 with every new denominator (if the nth denominator is $D_n$, then $D_n - D_{n-1} = 2 + 2n$). We know that $D_1 = 6$; now let's find the nth denominator:
$$D_n = (D_n - D_{n-1}) + (D_{n-1} - D_{n-2}) + \dots + (D_2 - D_1) + D_1 = (2+2n) + (2+2(n-1)) + \dots + (2+2\cdot 2) + 6 = 6 + (n-1)(n+4)$$
That means the nth sum is going to have $6+(n-1)(n+4)$ as the denominator. I also mentioned that I wanted the fraction to look like this:
$$\frac{x-1}{2x}$$
This means that I am going to ASSUME that the nth numerator is going to be:
$$\frac{6+(n-1)*(n+4)}{2}-1=\frac{4+(n-1)*(n+4)}{2}$$
So, the fraction that represents S(n) is going to be:
$$\frac{\frac{4+(n-1)*(n+4)}{2}}{6+(n-1)*(n+4)}=\frac{4+(n-1)*(n+4)}{2*(6+(n-1)*(n+4))}$$
Will continue soon -- have to go
~blarney master~
Guest Oct 11, 2017

#3
Now, after I guessed that
$$S(n)=\frac{4+(n-1)*(n+4)}{2*(6+(n-1)*(n+4))}=\frac{n*(n+3)}{2*(n+1)*(n+2)}$$
I will prove it using induction. Suppose
$$S(n)=\frac{n*(n+3)}{2*(n+1)*(n+2)}$$
I will prove that
$$S(n+1)=\frac{(n+1)*(n+4)}{2*(n+2)*(n+3)}$$
$$S(n+1)=\sum_{i=1}^{n+1}A(i)=(\sum_{i=1}^{n}A(i))+A(n+1)=\\ S(n)+A(n+1)=\frac{n*(n+3)}{2*(n+1)*(n+2)}+\frac{2}{(n+1)*(n+2)*(n+3)}=\\ \frac{4+n*(n+3)^2}{2*(n+1)*(n+2)*(n+3)}=\frac{(n+1)^2*(n+4)}{2*(n+1)*(n+2)*(n+3)}=\\ \frac{(n+1)*(n+4)}{2*(n+2)*(n+3)}$$
Now we just need to check that it works for n = 1:
$$S(1)=\sum_{i=1}^{1}A(i)=A(1)=\frac{2}{1*2*3}=\frac{1}{3}=\frac{1*(1+3)}{2*(1+1)*(1+2)}$$
Now we finally know
that $$S(n)=\frac{n*(n+3)}{2*(n+1)*(n+2)}$$. Now all you need to do is prove that when n goes to infinity S(n) goes to 1/2. Good luck! ~blarney master~ Guest Oct 11, 2017
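A sketch (mine, not from the thread) verifying both the guessed denominator pattern and the closed form with exact rational arithmetic:

```python
from fractions import Fraction

def S(n):
    """Partial sum A(1) + ... + A(n) with A(k) = 2/(k(k+1)(k+2))."""
    return sum(Fraction(2, k * (k + 1) * (k + 2)) for k in range(1, n + 1))

# The guessed denominator D_n = 6 + (n-1)(n+4) equals (n+1)(n+2), and the
# unreduced fraction (D_n/2 - 1)/D_n agrees with S(n) exactly:
for n in range(1, 30):
    D = 6 + (n - 1) * (n + 4)
    assert D == (n + 1) * (n + 2)
    assert S(n) == Fraction(D // 2 - 1, D)
    assert S(n) == Fraction(n * (n + 3), 2 * (n + 1) * (n + 2))

# Since S(n) = 1/2 - 1/((n+1)(n+2)), the partial sums approach 1/2:
print(float(S(1000)))  # 0.499999...
```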
6 added 13 characters in body — edited Dec 7 '13 at 12:33 by Engineer (26.8k • 3 gold • 57 silver • 112 bronze badges)

3 cases can occur, where your entity is standing:

- exactly upon a node (no problem here)
- between 4 nodes (as per your example -- we need to fix)
- on the grid boundary between exactly 2 nodes (need to fix)

Given a choice of 2+ local candidates to start from, which do we gravitate toward, before proceeding on the Dijkstra-calculated path? Obviously, that which is furthest along the path.

    target = ...;
    nearest = getEuclideanNearestNode(map, entity);     //this would return D in your case
    pathList = dijkstra(map, nearest, target);          //calculated path ordered from D to A
    localCandidates = getGridNearestNodes(map, entity); //return 1, 2 or 4 nodes: any order
    if localCandidates.length > 1
        bestStartingCandidateNode = undefined;
        for each node n in pathList                     //move toward end of list
            for each node c in localCandidates
                if c == n
                    //each time we find a candidate further along, we overwrite the old best
                    bestStartingCandidateNode = c
                    break;                              //inner loop only
                remove c from localCandidates
        entity.moveTo(bestStartingCandidateNode);
    else
        bestStartingCandidateNode = localCandidates[0];
    //now you may proceed along the (remainder of) the path

We have to start with some estimate of the ideal starting node -- that which is nearest as the crow flies (using Pythagoras). Then we get the Dijkstra path, starting at that guesstimate node. Beyond that, we deal with your specific problem: evaluate every surrounding node to see which of these is furthest along the Dijkstra path, and pick that one as the first to proceed from. We have to first guess and then later re-evaluate, because until our path is generated, we have no idea which direction it will take, and thus cannot yet pick the best local candidate!
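A minimal runnable sketch of the selection step (my own Python; the function name and the toy path data are invented for illustration, not any engine API):

```python
def best_starting_candidate(path, local_candidates):
    """Pick the local candidate that lies furthest along the computed path.

    `path` is ordered from the start node toward the target; candidates that
    never appear on the path are ignored (returns None if none match).
    """
    best = None
    for node in path:                 # walk from start toward target
        if node in local_candidates:
            best = node               # a later match overwrites an earlier one
    return best

# Toy data: the entity stands between four grid nodes; the path happens to
# pass through two of them, and we want the one further along ("C"),
# not merely the Euclidean-nearest one ("D").
path = ["D", "C", "B", "A"]           # Dijkstra result, start -> target
candidates = ["D", "C", "E", "F"]     # the up-to-4 surrounding nodes
print(best_starting_candidate(path, candidates))  # C
```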
1 answered Dec 7 '13 at 12:05 Engineer
[npl] / trunk / NationalProblemLibrary / Rochester / setSeries1Definitions / ur_sr_1_6.pg

# Annotation of /trunk/NationalProblemLibrary/Rochester/setSeries1Definitions/ur_sr_1_6.pg

Revision 141 (author: jj) - (view) (download)

##DESCRIPTION
##KEYWORDS('')
##
##ENDDESCRIPTION

DOCUMENT();        # This should be the first executable line in the problem.

loadMacros(
  "PG.pl",
  "PGbasicmacros.pl",
  "PGchoicemacros.pl",
  "PGanswermacros.pl",
  "PGgraphmacros.pl"
);

TEXT(beginproblem());
$showPartialCorrectAnswers = 1;

$a = random(3,5,1);
$b = random(-5,10);
$c = random(1,3,1);

if ($a == 3) { $ans = $c**2 - ($c+1)**2 + ($c+2)**2 - ($c+3)**2 + 4*$b; }
if ($a == 4) { $ans = $c**2 - ($c+1)**2 + ($c+2)**2 - ($c+3)**2 + ($c+4)**2 + 5*$b; }
if ($a == 5) { $ans = $c**2 - ($c+1)**2 + ($c+2)**2 - ($c+3)**2 + ($c+4)**2 - ($c+5)**2 + 6*$b; }

BEGIN_TEXT
Evaluate the sum:
$\sum_{k=0}^{a}{((-1)^k (k+c)^2+b)}$
\{ans_rule(20)\}

END_TEXT
ANS(num_cmp($ans, mode=>"arith"));

ENDDOCUMENT();     # This should be the last executable line in the problem.
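The three Perl branches just precompute $\sum_{k=0}^{a}((-1)^k (k+c)^2 + b)$ for $a \in \{3,4,5\}$. A cross-check against direct evaluation (my own sketch; `series_sum` and `pg_branch` are invented names, not part of the PG file):

```python
def series_sum(a, b, c):
    """Direct evaluation of sum_{k=0}^{a} ((-1)^k (k+c)^2 + b)."""
    return sum((-1) ** k * (k + c) ** 2 + b for k in range(a + 1))

def pg_branch(a, b, c):
    """The expanded forms hard-coded in the PG problem for a = 3, 4, 5."""
    if a == 3:
        return c**2 - (c+1)**2 + (c+2)**2 - (c+3)**2 + 4*b
    if a == 4:
        return c**2 - (c+1)**2 + (c+2)**2 - (c+3)**2 + (c+4)**2 + 5*b
    if a == 5:
        return c**2 - (c+1)**2 + (c+2)**2 - (c+3)**2 + (c+4)**2 - (c+5)**2 + 6*b

# The ranges mirror random(3,5,1), random(-5,10), random(1,3,1):
assert all(series_sum(a, b, c) == pg_branch(a, b, c)
           for a in (3, 4, 5) for b in range(-5, 11) for c in (1, 2, 3))
print("all branches agree")
```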
In modular arithmetic, we select an integer, n, to be our "modulus". Modular arithmetic is a system of arithmetic for integers which considers the remainder: for example, 9 divided by 4 is 2 with a remainder of 1. Modular arithmetic, sometimes referred to as modulus arithmetic or clock arithmetic, is in its most elementary form arithmetic done with a count that resets itself to zero every time a certain whole number N greater than one, known as the modulus (mod), has been reached. If a number is coprime to the modulus, it has a modular inverse that you can find with the extended Euclidean algorithm. Modular arithmetic is often used to calculate checksums within identifiers -- International Bank Account Numbers (IBANs), for example, make use of modulo-97 arithmetic.

As an aside on pedagogy, the modular approach in mathematics learning has been described as an effective and efficient tool to help students learn mathematics themselves: the production of instructional material is time-consuming, but the effectiveness of each module can be evaluated separately.
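A sketch of a modular inverse computed with the extended Euclidean algorithm mentioned above (my own implementation):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, n):
    """Inverse of a modulo n; exists exactly when gcd(a, n) == 1."""
    g, x, _ = extended_gcd(a, n)
    if g != 1:
        raise ValueError("no inverse: a and n are not coprime")
    return x % n

print(mod_inverse(3, 7))                 # 5, since 3 * 5 = 15 = 2*7 + 1
print((10 * mod_inverse(10, 97)) % 97)   # 1 -- the mod-97 setting used by IBAN checksums
```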
Modular arithmetic is a form of arithmetic (a calculation technique involving the concepts of addition and multiplication) done on numbers with a defined equivalence relation. After mastering modular addition and subtraction, the rest comes easily if you understand how to do non-modular multiplication. Modulo is also referred to as "mod"; the standard format is a mod n, where a is the value that is divided by n. In modular arithmetic, numbers "wrap around" upon reaching a given fixed quantity (this given quantity is known as the modulus) to leave a remainder. An example of this is the 24-hour digital clock, which resets itself to 0 at midnight. In the manner described above, the integers can be divided into 2 classes, or 5 classes, or m classes (m being a positive integer), and then we would write mod 2, mod 5, or mod m. This system of representing integers is called a modulo system. You could use a similar system to decode Vigenère ciphers, using - instead of + for the codeword. Modular arithmetic is one of the foundations of number theory, touching on almost every aspect of its study, and provides key examples for group theory, ring theory and abstract algebra. In a different, classical algebraic sense, a basis of a modular system M is any set of polynomials B_1, B_2, ... of M such that every polynomial of M is expressible in the form R_1B_1 + R_2B_2 + ..., where R_1, R_2, ... are polynomials.

Modular Model for Math: a modular approach to teaching breaks the learning sequence into smaller pieces referred to as modules or units. The best way to introduce modular arithmetic itself is to think of the face of a clock.
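The wrap-around idea can be sketched directly, using the 24-hour clock from the text (the helper function is my own illustration):

```python
# Hours on a 24-hour digital clock are arithmetic mod 24: the count resets
# to 0 at midnight, i.e. on reaching the modulus.
def clock_add(hour, delta, modulus=24):
    return (hour + delta) % modulus

print(clock_add(23, 3))   # 2 -- three hours after 23:00 is 02:00
print(clock_add(0, -1))   # 23 -- Python's % already wraps negative values
```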
Modular arithmetic, sometimes called clock arithmetic, is a calculation that involves a number that resets itself to zero each time a whole number greater than 1, which is the mod, is reached. Multiplying both sides of a congruence by a modular inverse gives you an equivalent congruence, which is the key step in solving a system of modular equations (even one with an unknown modulus). In fact, modular multiplication involves working out the multiplication problem using non-modular math and then converting the result to its modular form. With a modulus of 3, we make a clock with the numbers 0, 1, 2. To evaluate a negative number such as -5, we start at 0 and go through 5 numbers in counter-clockwise sequence (counter-clockwise because 5 is negative): 2, 1, 0, 2, 1. We ended up at 1.

Two further directions build on modular structure. There is an important connection between KMS states and the modular operator, since the modular operator is a KMS state for the inverse temperature β = −1; the negative sign relies on a difference in convention among physicists and mathematicians. And one can demonstrate a natural action of the classical Hecke operators on the Hitchin system over a modular curve and study some of the arithmetic properties of their action.
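A sketch of that two-step modular multiplication and of the counter-clockwise count (my own illustration; the modulus 12 in the first example is an assumption, since the text does not fix one):

```python
# Modular multiplication as described: multiply in ordinary arithmetic
# first, then convert the result to modular form.
product = 2 * 9
print(product, product % 12)   # 18 6

# The counter-clockwise clock count: -5 on the {0, 1, 2} clock lands on 1.
print(-5 % 3)                  # 1
```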
In computing, the modulo operation returns the remainder or signed remainder of a division. Modular arithmetic systems play an important role both in theoretical and applied mathematics. On a 12-hour clock, $13$ becomes $1$, $14$ becomes $2$, and so on. Contrast this with classic arithmetic, where adding a positive number a to another number b always produces a number larger than b. Modular arithmetic is referenced in number theory, group theory, ring theory, knot theory, abstract algebra, cryptography, computer science, chemistry and the visual and musical arts. We will discuss some results and problems on the geometry and arithmetic of the SL(2, C) Hitchin system on a modular curve. Modular math is similar to division: it is the leftover remainder that matters.

Math Circle, Thursday January 22, 2015: What is Modular Arithmetic?
What happens when a number does not divide evenly? A remainder is left over, and modular mathematics uses these remainders. For example, -5 mod 3 = 1. In mathematics, the modulo is the remainder, the number that's left after a number is divided by another value.

The numbers on a clock go from $1$ to $12$, but when you get to "$13$ o'clock", it actually becomes $1$ o'clock again (think of how the $24$-hour clock numbering works). Modular arithmetic motivates many questions that don't arise when studying classic arithmetic. Here's the gist: you can think of modular arithmetic as a system of arithmetic for integers where the number line isn't an infinitely long and straight line (as we've talked about in past discussions of integers), but is instead a line that curves around into a circle.

Modular Arithmetic and Cryptography! This is the second part of the Introduction to Modular Systems Series; please read the first part before proceeding. Last Monday, we learned a number system that uses the numbers on the 12-hour analog clock.

The calculator below solves a math equation modulo p: enter an integer number to calculate its remainder of Euclidean division by a given modulus. You may also enter other integers and the following modular operations: + addition modulo p, - subtraction modulo p, * multiplication modulo p.

On the teaching side, technology is often used to facilitate students' progress through the course content to create an accelerated learning sequence, so the modular model is often, but not always, implemented through use of a computer lab.
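The three calculator operations listed above, sketched in Python (my own demo; p = 7 is an arbitrary choice of modulus):

```python
p = 7  # an arbitrary small modulus for the demo
print((5 + 4) % p)   # 2 -- addition modulo p
print((2 - 5) % p)   # 4 -- subtraction modulo p stays in 0..p-1
print((3 * 6) % p)   # 4 -- multiplication modulo p
```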
About Modulo Calculator . Modulo. محصول به سبد خرید شما اضافه شد. Now I bet your primary school teacher didn’t walk in and say “hey kiddos, today I’m going to show you how long division is also modular arithmetic,” but it’s the truth. Anybody can ask a question ... About a modular system of equations. Make sure you leave a few more days if you need the paper revised. We use cookies for a variety of purposes, such as website functionality and improving your experience of our website. The Modulo Calculator is used to perform the modulo operation on numbers. The only difference between modular arithmetic and the arithmetic you learned in your primary school is that in modular arithmetic all operations are performed regarding a positive integer, i.e. Given modulus for instance, ( 2×9 ) equals 18 in non-modular math a system of modular with... Digital clock, which considers the remainder to receive the paper revised or intermediaries which! 14 $becomes$ 2 $,$ 14 $becomes$ 1 $, and play an role. Prices, check out yourself, you are consenting to our use cookies., world-class education to anyone, anywhere your browser is used to perform the modulo operation on numbers$... Could use a similar system to decode Vigen ere ciphers, using - instead of + the... Message, it means we 're having trouble loading external resources on our website few days. Proven to be our \modulus '' math Matriculation to be Scrapped represents simple functionality of the face of a.... When you would like to work with 0, 1, 2 best. P. Enter an integer, n, to be an effective and efficient tool to help students to learn themselves. Of 3 we make a clock with numbers 0, 1, 2 number calculate! Got the best prices, check out yourself JavaScript in your browser to anyone anywhere... An effective and efficient tool to help students to learn mathematics themselves proven!, $14$ becomes $1$, $14$ becomes . 
That the domains *.kastatic.org and *.kasandbox.org are unblocked.kastatic.org and *.kasandbox.org are unblocked it was probably,... The trapezoid you can find with the extended Euclidean algorithm below solves math... 'Ll get 20 more warranty days to request modular system math revisions, for free of worth for studying! Divide evenly by this modular inverse will give you the congruence... system of arithmetic for integers which... Functionality and improving your experience of our website, you 'll get 20 more warranty days to request revisions. Mathematics themselves multiplication problem using non-modular math Khan Academy is a system of modular equations with unknown modulus by value! You the congruence... system of equations equation modulo p. Enter an integer, n, to be an and! 11.10.2006 mathematics Stack Exchange is a system of equations to work with it to its modular.! Integer number to calculate its remainder of Euclidean division by a given modulus by a given modulus anywhere., we select an integer number to calculate its remainder of 1 find with the extended Euclidean.! $1$, $14$ becomes $1$, $14$ becomes 1. The paper from your writer of equations mathematics, the modulois the remainder got the best way to introduce arithmetic. Play an important role both in theoretical and applied mathematics 0, 1 2! Been proven to be an effective and efficient tool to help students to learn mathematics themselves features of Khan,! Inverse that you can find with the extended Euclidean algorithm represents simple of! Students to learn mathematics themselves operation on numbers message, it means we 're trouble... Lower prices its remainder of Euclidean division by a given modulus enable JavaScript in your.! Math at any level and professionals in related fields that ’ s left after a number does not exist like! Example of this is the24-hour digital clock, which considers the remainder or number. 
January 22, 2015 What is modular arithmetic digital clock, which results in lower prices, check out!... To log in and use all the features of Khan Academy, please enable JavaScript in your.! You the congruence... system of arithmetic for integers, which resets itself to 0 at midnight the production instructional... Students to learn mathematics themselves instead of + for the codeword of modular equations with modulus! N, to be our \modulus '' inverse that you can find the... Of 3 we make a clock with numbers 0, 1, 2 with modulus., ( 2×9 ) equals 18 in non-modular math inverse will give you the congruence modular system math. Find with the extended Euclidean algorithm of the trapezoid a clock with numbers 0,,... Will give you the congruence... system of equations What is modular arithmetic, we select integer... Arithmetic for integers, which resets itself to 0 at midnight you leave a few more days if need. Equations with unknown modulus instead of + for the codeword this message, it means we 're having loading... Lower prices then our system... you could use a similar system to decode Vigen ere ciphers using... Paper revised the modulois the remainder or the number that ’ s left after a number does not exist a. Some basic concepts leave a few more days if you 're seeing this,! Problem using non-modular math and then converting it to its modular form of the face a... The following class represents simple functionality of the trapezoid fact, modular multiplication involves out. Enable JavaScript in your browser clock, which considers the remainder or the number that ’ s after. Are they called anything else \modulus '' people studying math at any level professionals! Becomes $1$, $14$ becomes $1$, and so on equation. Given modulus the24-hour digital clock, which considers the remainder or the number that ’ s left after number! 1, 2 other companies, you 'll get 20 more warranty days request. 
The modular approach in mathematics, the modulois the remainder or the number ’. Theoretical and applied mathematics the codeword, which resets itself to 0 midnight... Questions that don ’ t arise when study-ing classic arithmetic s left a., check out yourself the page you were trying to find does not exist with numbers 0, 1 2. Companies, you are consenting to our use of cookies problem using non-modular math summary! And use all the features of Khan Academy is a system of for! Trying to find does not exist of 1 for a variety of purposes, such website. 'Re having trouble loading external resources on our website, you 'll be working directly with your project expert agents. Resets itself to 0 at midnight approach in mathematics learning has been proven to be an effective and tool... But the modular approach in mathematics learning has been proven to be an effective and efficient tool to students... The paper revised this is the24-hour digital clock, which resets itself to 0 at midnight and on... Level and professionals in related fields learning has been proven to be Scrapped is 2 with a modulus of we... You the congruence... system of equations continuing your visit on our website and! Level and professionals in related fields Euclidean division by a given modulus the features of Khan Academy, enable. Theoretical and applied mathematics best prices, check out yourself then our...... Arithmetic systems, and play an important role both in theoretical and applied mathematics experience of our.! 18 in non-modular math and then converting it to its modular form to be an and... $2$, and so on for the codeword of 3 we make clock. To receive the paper revised related fields ciphers, using - instead of + for the codeword deleted or. $1$, and so on other companies, you 'll get more! To calculate its remainder of Euclidean division by a given modulus more days if you need the from... To our use of cookies, which resets itself to 0 at midnight . 
Mathematics themselves an example of this is the24-hour digital clock, which resets itself to 0 at.. Have been searching for modular systems '' but i have been searching for modular! ) nonprofit organization, anywhere of modular equations with unknown modulus of equations more warranty days request. An example of this is the24-hour digital clock, which results in lower prices on.... Modulo calculator is used to perform the modulo operation on numbers of this is the24-hour digital clock, resets... The remainder of purposes, such as website functionality and improving your experience our... Behind a web filter, please make sure you leave a few days! Experience of our website, you are consenting to our use of cookies role both in theoretical and applied.. Which considers the remainder paper revised to log in and use all the features of Khan Academy please... Can ask a question and answer site for people studying math at any level and in. Division by a given modulus, $14$ becomes $2$, ! > /// the following class represents simple functionality of the face of a clock with 0., 2015 What is modular arithmetic with other companies, you are consenting to our use of cookies using instead... Has a modular inverse that you can find with the extended Euclidean algorithm, you 'll get more... S left after a number is divided by 4 is 2 with a of... A question and answer site for people studying math at any level and professionals in related fields select an,! Modulus of 3 we make a clock with numbers 0, 1, 2 that their real or! Is to think of the face of a clock $14$ becomes . > /// the following class represents simple functionality of the face of a clock page! When a number does not exist represents simple functionality of the face of a clock with numbers,... Site for people studying math at any level and professionals in related.. In theoretical and applied mathematics 'll get 20 more warranty days to request any revisions, for free Academy a... 
Integer number to calculate its remainder of Euclidean division by a given.. Academy, please enable JavaScript in your browser were trying to find does not divide evenly.kasandbox.org.
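The clock arithmetic and the modular inverse mentioned above can be sketched in code. A minimal JavaScript sketch (the function names `mod` and `modInverse` are mine, not from any library):

```javascript
// JavaScript's % operator is a remainder, not a true modulo: -5 % 3 === -2.
// This mod() always returns a result in [0, p), matching the clock picture.
function mod(a, p) {
  return ((a % p) + p) % p;
}

// Extended Euclidean algorithm: returns x with (a * x) mod p === 1,
// assuming gcd(a, p) === 1.
function modInverse(a, p) {
  let [old_r, r] = [mod(a, p), p];
  let [old_s, s] = [1, 0];
  while (r !== 0) {
    const q = Math.floor(old_r / r);
    [old_r, r] = [r, old_r - q * r];
    [old_s, s] = [s, old_s - q * s];
  }
  return mod(old_s, p);
}

console.log(mod(13, 12));      // 1  ("13 o'clock" is 1 o'clock)
console.log(mod(-5, 3));       // 1
console.log(modInverse(4, 7)); // 2, since 4 * 2 = 8 ≡ 1 (mod 7)
```

Multiplying both sides of a congruence like 4x ≡ 3 (mod 7) by `modInverse(4, 7)` solves it, just as dividing by 4 would in ordinary arithmetic.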
# Gravitational Wave Taxonomy ### Researchers undecided on how to classify the first publicly announced gravitational wave detection from LIGO-Virgo's third observing run Authors: Ming-Zhe Han, Shao-Peng Tang, Yi-Ming Hu, Yin-Jie Li, Jin-Liang Jiang, Zhi-Ping Jin, Yi-Zhong Fan, Da-Ming Wei First Author's Institution: Key Laboratory of Dark Matter and Space Astronomy, Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210034, People's Republic of China Status: Published in The Astrophysical Journal Letters [open access] We are now in the finishing movements of the third rendition of the spacetime symphony: gravitational waves emitted by pairs of compact objects spiraling towards and smashing into one another. In January 2020, the LIGO and Virgo gravitational wave observatories publicly announced the first of many detections they have made in 'O3', their third joint observing run. It was announced as the second-ever confirmed detection of a binary neutron star merger – but was it? We were perhaps fortunate that the first two distinct kinds of gravitational-wave signals we received (one involving binary black holes, the other binary neutron stars) were extremely clean, smoking-gun traces of what we expected them to look like in the detector data. GW150914, the binary black hole event, was detected on September 14, 2015, almost as soon as the twin LIGO detectors were ready to collect data. The signal was loud and clearly seen in both detectors. Despite this, the collaboration researchers spent months performing extensive tests to make sure that these were truly astrophysical black holes merging and not some spurious noise. GW170817, the first collision of binary neutron stars, was seen on August 17, 2017 during O2, the second observing run, in which Virgo, the European detector, also joined the game. This event was special since, unlike black holes, neutron stars emit electromagnetic radiation, in what we call an explosive 'kilonova' emission.
Observatories around the world and in space spotted it across different wavelengths of light soon after LIGO-Virgo recorded the gravitational wave signal, making it beyond doubt a confident detection. Among the things we learnt from this event were that heavy elements like gold and platinum are forged in these kilonovae, and that gravitational waves travel at (or extraordinarily close to) the speed of light. The observatories spent the following year making significant instrumental upgrades to extend the range out to which we could detect these mergers of compact objects. They started taking data for O3, their third observing run, in April 2019, and behold, it started raining signals! We are now in the age of recording a new signal from the universe every week. To date, LIGO-Virgo have recorded 56 new gravitational wave events in O3 alone. Of these, GW190425 is the first O3 detection to be publicly announced. If you've noted the pattern, it was received on April 25, 2019. This particular chirp was heard by only one detector online at that time: LIGO Livingston, Louisiana. This greatly reduced our ability to pinpoint the source in the sky, much like we need both of our ears to identify where a sound is coming from. Initial analysis strongly pointed towards the source being a binary neutron star collision, one that is expected to give off a kilonova. But astronomers were left with a huge chunk of sky to search, and no kilonova was found. Further, LIGO-Virgo reported that the primary object's mass could be as heavy as 2.52 $M_\odot$, accounting for the large uncertainty. If you look at the 'stellar graveyard' plot of known compact objects arranged according to their masses (Figure 1), you will see that not many neutron stars are heavier than 2 solar masses. And without having seen the smoking-gun kilonova of a binary neutron star merger, the collaboration even stopped short of labeling it so in the title of their paper.
They do not exclude the possibility that one (or both!) components of this binary might be black holes. This is exactly what the authors of today's paper set out to explore. They tested the hypothesis of it being a neutron star-black hole merger by updating the priors in the Bayesian analysis. They report a mass of 2.4 $M_\odot$ for the heavier compact object, with an error of about 0.3 $M_\odot$ either way. They contrast this with the heaviest neutron stars we know of from observations of pulsars. As shown in Figure 2, the mass estimate of 2.4 $M_\odot$ for the heavier object lies significantly above our present knowledge of the maximum mass cutoff for neutron stars, as well as the error bounds on the heaviest observed neutron star. While this does not imply that it must be a black hole, the researchers verified that the other inferred parameters of the source were consistent with the LIGO-Virgo analysis, and hence the NS-BH hypothesis cannot be discarded. If this were true, it would be the first confirmed NS-BH signal LIGO-Virgo has recorded. It would also point to the existence of light black holes, suggesting that the 'mass gap' between the heaviest neutron stars and the lightest black holes is not a physical feature but a systematic, observational artifact. Either way, this is just the first of the many exciting results that LIGO-Virgo is poised to announce in the coming months. Stay tuned for more records from the gravitational wave universe!
Ask Your Question # How to remove single quote in front of numbers in a cell asked 2017-01-26 00:07:15 +0100 This post is a wiki. Anyone with karma >75 is welcome to improve it. I have cut and pasted numbers with a dollar sign [ $ ] looking like this [ $29,825 ] that are formatted as text. When I format them as "currency" a single quote character [ ' ] appears in front of them. How do I remove the single quote character? The "Find and Replace" command doesn't find the character. edit retag close merge delete ## 5 Answers Sort by » oldest newest most voted The apostrophe is not part of the text contained in the cell but only an indicator for the situation that the content is treated as text despite the fact that it could be recognised as numerical. If you allow RegEx and 'Search For' .* (anything) and 'Replace With' & (everything found), a new recognition process will be applied, with the effect that the content is recognised as a number and displayed in the chosen (or automatically assigned) format. If the contents you want recognised fill a range of a single column, you can also achieve the result by applying the tool 'Data' > 'Text to Columns...' (without additional measures). (There are a lot of threads about the "apostrophe issue" in Calc. Just look for them.) more ## Comments 1 Also be careful not to use Data > Text to Columns... if some cells in the range contain formulas (they would be deleted; only the values are retained). Regards ( 2017-01-26 07:38:12 +0100 )edit +1 to @pierre-yves samyn; still, in case of a formula, there would not be any apostrophe in the respective cells' edit boxes. In my opinion, Text to Columns is ultimately the correct tool for data conversion (while in the case of Find & Replace, the conversion is a kind of side effect). ( 2018-07-26 05:38:28 +0100 )edit Data > Text to Columns worked fine ( 2019-09-12 15:08:30 +0100 )edit Paste the complete column into notepad or gedit and then copy it back.
more ## Comments In this case, you don't need an intermediate application; simply paste back as unformatted text (Ctrl+Alt+Shift+V). ( 2018-07-26 05:49:02 +0100 )edit Yes, possible. ( 2018-07-26 06:55:11 +0100 )edit Just adding the standard FAQ for this. more ## Comments Not every FAQ subject is settled on this level. In this case the text is very clear and helpful. Had I known it 1 1/2 years ago, I surely would simply have linked it in. ( 2018-07-26 11:57:05 +0100 )edit I only created it half a year ago, so nothing you missed ;-) ( 2018-07-26 16:14:13 +0100 )edit Try this - first, get rid of the $ with Find and Replace. Then: 1. Select the column in which the digits are found in text format. Set the cell format in that column as "Number". 2. Choose Edit - Find & Replace 3. In the Search for box, enter ^[0-9] 4. In the Replace with box, enter & 5. Check Regular expressions 6. Check Current selection only 7. Click Replace All more ## Comments This worked for me. It removed the leading ' ( 2019-03-29 02:48:18 +0100 )edit Working, thanks. ( 2020-04-20 08:54:33 +0100 )edit As described by @Lupp, the apostrophe is an indicator that a cell is formatted as a numeric/date value, but the content was textual at the moment the format was set (though it could be converted to a number). In this situation, Calc keeps the content as text (because formatting of an existing value must not do data conversion), and indicates that by showing the apostrophe in the edit box. The proper tool to convert data in Calc is Data > Text to Columns. One selects (a part of) a column, and starts the tool: the interface of the tool looks like the CSV import dialog. In the tool, you select the proper type for each column; e.g., for integers, Standard fits well; if your numbers contain decimal dots (not commas), the proper type is US English; for times, the proper type could be one of the date types.
Here you may also split text into several columns using the proper separators, and set the proper type for each resulting column; or even convert numbers back to text. The tool is explicitly designed to convert data (as opposed to formatting data), and I recommend using it wherever one needs such conversion, including "removing the leading apostrophe", as in this case. more ## Comments Thank you very much, this is helpful - I did not know such a tool existed - it works very well. First set the column type and then use the tool on the relevant column. The tool name is a bit cryptic, but that's fine. ( 2019-07-05 10:43:07 +0100 )edit ## Stats Asked: 2017-01-26 00:07:15 +0100 Seen: 53,455 times Last updated: Jul 26 '18
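As an aside, the conversion Calc performs when it re-recognises the text can be imitated in code. Here is a hypothetical JavaScript helper (illustrative only, not LibreOffice's implementation) that strips the leading apostrophe, the currency symbol, and thousands separators, then lets the language re-recognise the digits as a number, roughly what Calc's re-entry or Text to Columns conversion does:

```javascript
// Hypothetical helper, not part of LibreOffice: turn a text cell like
// "'$29,825" into the number 29825, or return null if the remaining
// text cannot be recognised as a number (mirroring Calc, which leaves
// unconvertible text alone).
function toNumber(cellText) {
  const cleaned = cellText.replace(/^'/, "").replace(/[$,]/g, "");
  const n = Number(cleaned);
  return Number.isNaN(n) ? null : n;
}

console.log(toNumber("'$29,825")); // 29825
console.log(toNumber("$29,825"));  // 29825
console.log(toNumber("abc"));      // null
```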
## kyanda17 one year ago Simplify (1/2)^4 1. Michele_Laino is your expression like this: $\Large {\left( {\frac{1}{2}} \right)^4}$ 2. kyanda17 yes 3. Michele_Laino we can write this: $\Large {\left( {\frac{1}{2}} \right)^4} = \frac{1}{{{2^4}}} = ...?$ 4. mathstudent55 Rule: $$\Large \left( \dfrac{a}{b} \right) ^n = \dfrac{a^n}{b^n}$$ Apply the rule: $$\Large \left( \dfrac{1}{2} \right) ^4 = \dfrac{1^4}{2^4} = \dfrac{1 \times 1 \times 1 \times 1}{2 \times 2 \times 2 \times 2 }$$ What is 1 * 1 * 1 * 1 = ? What is 2 * 2 * 2 * 2 = ? 5. anonymous Would it not be (1/2) times (1/2) and 4 times 4?
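The rule above can be checked directly in JavaScript (a quick check of my own; note the exponent applies to the whole fraction, so the last reply's guess of "4 times 4" is not how it works):

```javascript
// (1/2)^4 = 1^4 / 2^4 = 1/16: the exponent applies to numerator and
// denominator separately, not "1/2 times 1/2 and 4 times 4".
const result = (1 / 2) ** 4;
console.log(result);            // 0.0625, i.e. 1/16
console.log(result === 1 / 16); // true
```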
# Math Help - 2 interesting problems 1. ## 2 interesting problems 1. Let $g(n)$ be the greatest odd divisor of $n$, show that $ \mathop {\lim }\limits_{n \to + \infty } \tfrac{1} {n} \cdot \sum\limits_{k = 1}^n {\tfrac{{g\left( k \right)}} {k}} $ exists and find it ( Bulgaria 1985) 2. Find all $ k \in \mathbb{Z}\text{ with }k \geq 2 $ such that $ n \not|g(k^n+1) $ , $ \forall n \in \mathbb{Z}/n > 1 $ (Olimpíada Rioplatense 2008) - $g(n)$ is defined as in 1 - Have fun! 2. Originally Posted by PaulRS 1. Let $g(n)$ be the greatest odd divisor of $n$, show that $ \mathop {\lim }\limits_{n \to + \infty } \tfrac{1} {n} \cdot \sum\limits_{k = 1}^n {\tfrac{{g\left( k \right)}} {k}} $ exists and find it ( Bulgaria 1985) a simple observation shows that $\sum_{k=1}^n \frac{g(k)}{k} = \sum_{k=1}^m \frac{1}{2^{k-1}} \left \lfloor \frac{n+2^{k-1}}{2^k} \right \rfloor,$ where $m=\lfloor \log_2 n \rfloor + 1.$ the value of $m$ is not important. what we need is to see that $m \rightarrow \infty$ as $n \rightarrow \infty.$ now since $x-1 < \lfloor x \rfloor \leq x,$ the squeeze theorem gives us: $\lim_{n\to\infty} \frac{1}{n} \sum_{k=1}^n \frac{g(k)}{k}=\frac{2}{3}. \ \ \Box$ the second problem is left for other members to try! 3. Originally Posted by NonCommAlg a simple observation shows that $\sum_{k=1}^n \frac{g(k)}{k} = \sum_{k=1}^m \frac{1}{2^{k-1}} \left \lfloor \frac{n+2^{k-1}}{2^k} \right \rfloor,$ where $m=\lfloor \log_2 n \rfloor + 1.$ the value of $m$ is not important. what we need is to see that $m \rightarrow \infty$ as $n \rightarrow \infty.$ now since $x-1 < \lfloor x \rfloor \leq x,$ the squeeze theorem gives us: $\lim_{n\to\infty} \frac{1}{n} \sum_{k=1}^n \frac{g(k)}{k}=\frac{2}{3}. \ \ \Box$ the second problem is left for other members to try! How did you obtain $\sum_{k=1}^n \frac{g(k)}{k} = \sum_{k=1}^m \frac{1}{2^{k-1}} \left \lfloor \frac{n+2^{k-1}}{2^k} \right \rfloor,$ where $m=\lfloor \log_2 n \rfloor + 1$? 4. 
Originally Posted by chiph588@ How did you obtain $\sum_{k=1}^n \frac{g(k)}{k} = \sum_{k=1}^m \frac{1}{2^{k-1}} \left \lfloor \frac{n+2^{k-1}}{2^k} \right \rfloor,$ where $m=\lfloor \log_2 n \rfloor + 1$? First note that: $a + 2^{t} \equiv 0 \ (\bmod\ 2^{t+1}) \Leftrightarrow a \equiv 2^{t} \ (\bmod\ 2^{t+1})$ (add $2^{t}$ to both sides). The idea is that $a = 2^t \cdot s$ with $s$ odd iff $a \equiv 2^{t} \ (\bmod\ 2^{t+1})$. Indeed, to prove this: $a = 2^t \cdot \left( 2l + 1 \right) \equiv 2^t \ (\bmod\ 2^{t+1})$; conversely, if $a \equiv 2^t \ (\bmod\ 2^{t+1})$ then $\left. 2^t \right| a$ and $2^{t+1} \not| \ a$. Now since $\left\lfloor \tfrac{n}{a} \right\rfloor - \left\lfloor \tfrac{n-1}{a} \right\rfloor = \left\{ \begin{gathered} 1 \text{ if } \left. a \right| n \hfill \\ 0 \text{ otherwise} \hfill \\ \end{gathered} \right.$ it follows easily that: $\sum\limits_{k = 1}^\infty \tfrac{1}{2^{k-1}} \cdot \left( \left\lfloor \tfrac{n + 1 + 2^{k-1}}{2^k} \right\rfloor - \left\lfloor \tfrac{n + 2^{k-1}}{2^k} \right\rfloor \right) = \tfrac{g\left( n+1 \right)}{n+1}$ (only one of the terms is not 0). My proof of (1) is based on the following observation: $\tfrac{g\left( n \right)}{n} = 2 - \sum\limits_{\left. 2^j \right| n ,\ j \geqslant 0} \tfrac{1}{2^j}$; then by a simple counting argument ( $2^j$ appears as many times as there are multiples of $2^j$ between 1 and n — both included — that is, $\left\lfloor \tfrac{n}{2^j} \right\rfloor$ times): $\sum\limits_{k = 1}^n \tfrac{g\left( k \right)}{k} = 2 \cdot n - \sum\limits_{j = 0}^\infty \tfrac{1}{2^j} \cdot \left\lfloor \tfrac{n}{2^j} \right\rfloor$ ... 5. Sorry to be a bother, but I'm still a bit confused... 6. Originally Posted by PaulRS 1.
Let $g(n)$ be the greatest odd divisor of $n$, show that $ \mathop {\lim }\limits_{n \to + \infty } \tfrac{1} {n} \cdot \sum\limits_{k = 1}^n {\tfrac{{g\left( k \right)}} {k}} $ exists and find it ( Bulgaria 1985) 2. Find all $ k \in \mathbb{Z}\text{ with }k \geq 2 $ such that $ n \not|g(k^n+1) $ , $ \forall n \in \mathbb{Z}/n > 1 $ (Olimpíada Rioplatense 2008) - $g(n)$ is defined as in 1 - Have fun! Any hints on 2? 7. Originally Posted by PaulRS Find all $ k \in \mathbb{Z}\text{ with }k \geq 2 $ such that $ n \not|g(k^n+1) $ , $ \forall n \in \mathbb{Z}/n > 1 $ (Olimpíada Rioplatense 2008) - $g(n)$ is defined as in 1 - Correct me if I am wrong but the question is not clear as stated. Does the question mean to ask $ r \not|g(k^r+1) $ , $ \forall r \in \mathbb{Z}/n > 1 $ or perhaps $ r \not|g(k^n+1) $ , $ \forall r \in \mathbb{Z}/n > 1 $ or does it mean to ask $ n\not|g(k^n+1) $ , $ \forall n $ the way that for all is used currently doesn't make sense to me, since the quantified variable $n$ is used to define the domain set $\mathbb{Z}/n$. 8. The bar there ("/") means "such that". So what I mean is that that should hold for all integers n greater than 1.
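The limit in problem 1 can also be checked numerically. A quick JavaScript sketch of my own (not part of either solution above) that computes $(1/n)\sum_{k=1}^n g(k)/k$ for a large $n$ and compares it with $2/3$:

```javascript
// g(k): the greatest odd divisor of k, obtained by stripping factors of 2.
function g(k) {
  while (k % 2 === 0) k /= 2;
  return k;
}

// (1/n) * sum_{k=1}^{n} g(k)/k; the claimed limit is 2/3.
function average(n) {
  let s = 0;
  for (let k = 1; k <= n; k++) s += g(k) / k;
  return s / n;
}

console.log(average(100000));                           // close to 2/3
console.log(Math.abs(average(100000) - 2 / 3) < 1e-3);  // true
```

Since $g(k)/k = 2^{-v}$ where $2^v$ is the largest power of 2 dividing $k$, the averages converge quickly; the error is $O(1/n)$.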
23rd Jul 2017 Stuhl @Stuhl Jul 23 2017 00:04 @Hijerboa No, var is not always global scoped; vars also have function scope if they get defined inside a function. The difference between let and var is that let is block scoped, which means, for example, if you define one in an if block, e.g. {let a = 1}, and want to print it out with console.log outside of the block, you will get an error. var, on the other hand, gets hoisted and will be global in that case Sofus Heggemsens @Sofeo Jul 23 2017 01:12 hey is there a way to put all arguments in to the filter function? function destroyer(arr) { // Remove all the values var arg1 = arguments[1]; var arg2 = arguments[2]; var newArr = arr.filter(function(val) { return val !== arg1; }); return newArr; } destroyer([1, 2, 3, 1, 2, 3], 2, 3); korzo @korzo Jul 23 2017 01:14 @Sofeo You have to slice arguments to get all but the first element of arguments @Sofeo because you don't know how many arguments the function has Sofus Heggemsens @Sofeo Jul 23 2017 01:16 @korzo can you explain it a little more? Jul 23 2017 01:17 Hey folks. Can someone explain to me why this 'sortNumber' function works? I found this solution for sorting numbers in an array. It works, but I can't seem to figure out why/how it works. var numList = [2, 45, 51, 4, 1, 10, 38, 30, 3, 5, 7, 78]; function sortNumber(a, b) { return a - b; } console.log(numList.sort(sortNumber)); // [1, 2, 3, 4, 5, 7, 10, 30, 38, 45, 51, 78] console.log(numList.sort()); // without sortNumber function... // [1, 10, 2, 3, 30, 38, 4, 45, 5, 51, 7, 78] korzo @korzo Jul 23 2017 01:17 @Sofeo 1. argument is the source array and then you have x arguments to filter @Sofeo Therefore you have to filter against the rest of the arguments using slice @Sofeo var needles = Array.prototype.slice.call(arguments, 1); Jake @JakeDVirus Jul 23 2017 01:18 @Sofeo your code seems inefficient. if you are given different arguments then you need to rewrite the code every time.
do something that your one piece of code will work for every argument no matter what korzo @korzo Jul 23 2017 01:19 @Sofeo and the filter arr and remove all elements that are in needles Dovydas Stirpeika @Giveback007 Jul 23 2017 01:19 hey peeps if I use redux will my app be faster or slower? Stanley Su @stanley-su Jul 23 2017 01:21 @RobMeador According to MDN: The default sort order is according to string Unicode code points. var scores = [1, 10, 21, 2]; scores.sort(); // [1, 10, 2, 21] // Note that 10 comes before 2, // because '10' comes before '2' in Unicode code point order. Ryan Williams @Ryanwfile Jul 23 2017 01:22 I am trying to finish the game of life project and currently I have everything working mostly except my function that is supposed to re-render the board repeatedly only happens once. Any help getting my board to continuously re-render would be very appreciated. https://codepen.io/Ryanwfile/pen/qXWxYY Jul 23 2017 01:24 @stanley-su Right, I understand that by default it's sorting by Unicode. In order to try to understand why the sortNumber function seems to solve this, I plugged this code into the Javascript visualizer at www.pythontutor.com. This just confused me further. It seems 'a' and 'b' are pulled randomly from the array, subtracted and then I'm not sure how sort() is interpreting these return values to actually sort the list. Stanley Su @stanley-su Jul 23 2017 01:26 @RobMeador I’m not entirely sure how it works, I think it might use bubblesort. From MDN: If compareFunction is supplied, the array elements are sorted according to the return value of the compare function. If a and b are two elements being compared, then: If compareFunction(a, b) is less than 0, sort a to an index lower than b, i.e. a comes first. If compareFunction(a, b) returns 0, leave a and b unchanged with respect to each other, but sorted with respect to all different elements. Note: the ECMAscript standard does not guarantee this behaviour, and thus not all browsers (e.g. 
Mozilla versions dating back to at least 2003) respect this. If compareFunction(a, b) is greater than 0, sort b to a lower index than a. compareFunction(a, b) must always return the same value when given a specific pair of elements a and b as its two arguments. If inconsistent results are returned then the sort order is undefined. let arr = [2, 45, 51, 4, 1, 10, 38, 30, 3, 5, 7, 78]; arr.sort(function(a, b) { return a - b; }); On the first iteration, a is 2, b is 45. The function passed to sort returns a negative number. Quote MDN again: If compareFunction(a, b) is less than 0, sort a to an index lower than b, i.e. a comes first. Jul 23 2017 01:30 @stanley-su When I ran it through the Javascript visualizer, 'a' and 'b' seemed to have no logical flow. Maybe that tool is just inaccurate? I'll try running it through the debugger and see what happens. Sofus Heggemsens @Sofeo Jul 23 2017 01:31 @korzo how do i get the numbers from needles? cause needles is an array now korzo @korzo Jul 23 2017 01:32 @Sofeo You don't have to. Use filter on arr and inside the function test if the element provided by filter is in needles Jul 23 2017 01:33 can anyone give me a hand with a bug in my tic-tac-toe project? Jul 23 2017 01:34 @stanley-su I just ran it through the debugger and it's kicking out the same order of 'a' and 'b' values that seem to be randomly picked from the array (which I doubt is the case - I'm just misunderstanding something). Maybe I'm just dwelling on what's going on "under the hood" too much. Stanley Su @stanley-su Jul 23 2017 01:36 @RobMeador Yeah I'm not entirely sure. I just read this on StackOverflow though: The JavaScript interpreter has some kind of sort algorithm implementation built into it. It calls the comparison function some number of times during the sorting operation. The number of times the comparison function gets called depends on the particular algorithm, the data to be sorted, and the order it is in prior to the sort.
Some sort algorithms perform poorly on already-sorted lists because it causes them to make far more comparisons than in the typical case. Others cope well with pre-sorted lists, but have other cases where they can be "tricked" into performing poorly. There are many sorting algorithms in common use because no single algorithm is perfect for all purposes. The two most often used for generic sorting are Quicksort and merge sort. Quicksort is often the faster of the two, but merge sort has some nice properties that can make it a better overall choice. Merge sort is stable, while Quicksort is not. Both algorithms are parallelizable, but the way merge sort works makes a parallel implementation more efficient, all else being equal. Your particular JavaScript interpreter may use one of those algorithms or something else entirely. The ECMAScript standard does not specify which algorithm a conforming implementation must use. It even explicitly disavows the need for stability. Sofus Heggemsens @Sofeo Jul 23 2017 01:40 @korzo it worked!! thanks im not really sure what i did but im gonna try to understand when i wake up. but anyways thank you so much for the help :D CamperBot @camperbot Jul 23 2017 01:40 sofeo sends brownie points to @korzo :sparkles: :thumbsup: :sparkles: :cookie: 287 | @korzo |http://www.freecodecamp.com/korzo Jul 23 2017 01:46 @stanley-su Yeah, seems complicated. Thanks for the help! CamperBot @camperbot Jul 23 2017 01:46 robmeador sends brownie points to @stanley-su :sparkles: :thumbsup: :sparkles: :cookie: 69 | @stanley-su |http://www.freecodecamp.com/stanley-su Huỳnh Trần Khanh @khanh2003 Jul 23 2017 01:49 I just ran it through the debugger and it's kicking out the same order of 'a' and 'b' values that seem to be randomly picked from the array It is not random. For small$^1$ arrays, your browser uses the merge sort algorithm. For larger$^1$ arrays, your browser uses quick sort. $^1$: depends on your browser.
Both algorithms need to compare two elements in the array by calling your function.

Jul 23 2017 01:51
https://codepen.io/srhoades/pen/gRyyME?editors=0010 Can anyone help me with my tic-tac-toe project? I can't figure out why the first computer's move does not display until the 2nd player move.

Stanley Su @stanley-su Jul 23 2017 02:04
@srhoades It's at line 252, in:
```js
$(".tile").on("click", function playersTurn() {
  validatePlay(this);
  if (playValid) {
    console.log("play valid");
    $(this).removeClass("free");
    $(this).addClass("played");
    $(this).addClass("player");
    checkWin();
    computerPlay();
  } else {
    $(".status").html(
      "That square has already been played. Please choose another square"
    );
  }
});
```
After the player clicks on a square, the computer plays.
@srhoades wait i'm wrong, let me try to find out

Scott Rhoades @srhoades Jul 23 2017 02:06
Thanks @stanley-su

CamperBot @camperbot Jul 23 2017 02:06
srhoades sends brownie points to @stanley-su :sparkles: :thumbsup: :sparkles: :cookie: 70 | @stanley-su |http://www.freecodecamp.com/stanley-su

korzo @korzo Jul 23 2017 02:09
@srhoades At the first click, in computerRandomPlay(), playValid is false

Scott Rhoades @srhoades Jul 23 2017 02:16
thanks @korzo

CamperBot @camperbot Jul 23 2017 02:16
srhoades sends brownie points to @korzo :sparkles: :thumbsup: :sparkles: :cookie: 290 | @korzo |http://www.freecodecamp.com/korzo

korzo @korzo Jul 23 2017 02:17
@srhoades var randomSquare = $("#square" + randomNumber); you don't have #square
@srhoades change it to #cell and it will work

Jul 23 2017 02:29
thanks again

Joseph @revisualize Jul 23 2017 02:43
Hey.
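The comparator contract discussed above can be made visible by counting how often the engine calls it. This is a minimal sketch (the calls counter is illustrative, not from anyone's code in this chat); the call count itself is engine-dependent, since the spec does not fix the sorting algorithm:

```js
// Count how many times the engine invokes the comparator while sorting.
let calls = 0;
const arr = [2, 45, 51, 4, 1, 10, 38, 30, 3, 5, 7, 78];

arr.sort(function (a, b) {
  calls++;
  // negative => a before b, positive => b before a, 0 => keep relative order
  return a - b;
});

console.log(arr);   // [1, 2, 3, 4, 5, 7, 10, 30, 38, 45, 51, 78]
console.log(calls); // engine-dependent, but always > 0 for an unsorted array
```

The pairs handed to the callback are whatever the engine's internal algorithm needs to compare next, which is why they looked "randomly picked" in the debugger.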
kumquatfelafel @kumquatfelafel Jul 23 2017 02:44
hi

weifo @weifo Jul 23 2017 02:45
function update(id, prop, value) { if(value!=' '&&prop!='tracks') collectionCopy[id][prop]=value; else if(value!=' '&&prop=="tracks") collectionCopy[id][prop].push(value); else if(value==" ") delete collectionCopy[id][prop]; return collection; } I can't get this over ):

kumquatfelafel @kumquatfelafel Jul 23 2017 02:45
@weifo For future: if you write it in a code block, it makes it easier to read and allows you to do some formatting. To write it in a code block... for one line :point_right: `code goes here` or for multiple lines :point_down:
```js
code goes here
```
(important: the ``` goes on a separate line) Also, note that ` ≠ '''. You can also edit your post if you make a mistake by clicking the … that appears when you hover your mouse over your comment. Just going to format this so it's a little easier to read:
```js
function update(id, prop, value) {
  if (value != ' ' && prop != 'tracks') collectionCopy[id][prop] = value;
  else if (value != ' ' && prop == "tracks") collectionCopy[id][prop].push(value);
  else if (value == " ") delete collectionCopy[id][prop];
  return collection;
}
```

kumquatfelafel @kumquatfelafel Jul 23 2017 02:52
@weifo what's up with all this else if (value == ' ')?

Joseph @revisualize Jul 23 2017 02:52
@weifo You shouldn't be working with collectionCopy but with the collection directly.
@weifo If you change collectionCopy and return collection you're not actually modifying what you're returning.

weifo @weifo Jul 23 2017 02:53
ok, I'll try, thanks a lot

Joseph @revisualize Jul 23 2017 02:54
@weifo You're missing some code too.
@weifo If there is no tracks array you need to create it before you can .push to it.

kumquatfelafel @kumquatfelafel Jul 23 2017 02:55
:point_up: Also, " " != "" @weifo in case the point I was getting at before was too obscure. :p

weifo @weifo Jul 23 2017 02:57
in this case: update(2548, "artist", "") , it failed, why?

kumquatfelafel @kumquatfelafel Jul 23 2017 02:58
@weifo " " is not an empty string.
It is a string with a space in it. Throughout your code (assuming you posted accurately), you are checking for a string with a space, which is incorrect. That isn't necessarily the only reason it fails, but it is one of them.

kumquatfelafel @kumquatfelafel Jul 23 2017 03:06
I would make all the corrections @revisualize pointed out as well if you haven't yet.

Jul 23 2017 03:23
my random quote machine doesn't work anymore? it was working a month ago. i get this error message in the console

Bashir Harrell @bookofbash Jul 23 2017 03:23
What message?

Jul 23 2017 03:23
GET https://www.quotesondesign.com/wp-json/posts?filter[orderby]=rand&filter[posts_per_page]=1&_=1500780187326 net::ERR_INSECURE_RESPONSE jquery.min.js:4

korzo @korzo Jul 23 2017 03:31
@adamfaraj Change url to http

Jul 23 2017 03:32
@korzo which URL?
```js
$(".quote__container--generator-btn").on("click", function() {
  $.ajax({
    method: "GET",
    url: "https://www.quotesondesign.com/wp-json/posts?filter[orderby]=rand&filter[posts_per_page]=1",
    cache: false,
    dataType: "json"
```

korzo @korzo Jul 23 2017 03:33
@adamfaraj http://www.quotesondesign.com/wp-json/posts?filter[orderby]=rand&filter[posts_per_page]=1

Jul 23 2017 03:34
@korzo XMLHttpRequest cannot load http://www.quotesondesign.com/wp-json/posts?filter[orderby]=rand&filter[posts_per_page]=1&_=1500780850246. The 'Access-Control-Allow-Origin' header has a value 'http://null' that is not equal to the supplied origin. Origin 'null' is therefore not allowed access.

korzo @korzo Jul 23 2017 03:37
@adamfaraj Strange. It works for me, when I change it in your code

Jul 23 2017 03:39
changing it in my main.js file?

korzo @korzo Jul 23 2017 03:39
@adamfaraj Yes, line 26

Jul 23 2017 03:45
@korzo had to empty my cache and hard reload but it works now thanks!
@korzo thank you

CamperBot @camperbot Jul 23 2017 03:46
adamfaraj sends brownie points to @korzo :sparkles: :thumbsup: :sparkles: :cookie: 291 | @korzo |http://www.freecodecamp.com/korzo

Moisés Man @moigithub Jul 23 2017 03:48
@shivamg11000 try (<li key={todo.id}>{JSON.stringify(todo)}</li>) its telling u it doesn't like to render objects: Uncaught Error: Objects are not valid as a React child

korzo @korzo Jul 23 2017 03:53

Antonious Stewart @Antonious-Stewart Jul 23 2017 05:05
need help trying to figure this out, just a hint, no answers pls
```js
// Setup
function phoneticLookup(val) {
  var result = "";
  // Only change code below this line
  var lookup = {
    "bravo": "Boston",
    "charlie": "Chicago",
    "delta": "Denver",
    "echo": "Easy",
    "foxtrot": "Frank",
  };
  val = lookup.alpha;
  result = val;
  // Only change code above this line
  return result;
}

// Change this value to test
phoneticLookup("charlie");
```
i know i have to change the val but that was just a test statement

kumquatfelafel @kumquatfelafel Jul 23 2017 05:06
@Astewart400 You don't have to change val.

Antonious Stewart @Antonious-Stewart Jul 23 2017 05:07
the lookup.alpha part, it should be lookup.something right

Jianhao Tan @jaanhio Jul 23 2017 05:08
anyone familiar with the marked markdown parser?

kumquatfelafel @kumquatfelafel Jul 23 2017 05:08
so... lookup.something is only fine if you want to search for the key "something" in lookup. You want to search for... whatever val is. just a reminder: val is assigned the value passed in when you call the function.
phoneticLookup("charlie"); // val will be "charlie" Antonious Stewart @Antonious-Stewart Jul 23 2017 05:13 ok let me try something thanks ok sooo lol why doest lookup.val work lol i got it to run with lookup[val] @kumquatfelafel and thanks for the help CamperBot @camperbot Jul 23 2017 05:15 astewart400 sends brownie points to @kumquatfelafel :sparkles: :thumbsup: :sparkles: :cookie: 525 | @kumquatfelafel |http://www.freecodecamp.com/kumquatfelafel kumquatfelafel @kumquatfelafel Jul 23 2017 05:16 say I do something like this... var blah = "alpha"; result = lookup.blah; This will actually not look for "alpha". This will search for "blah". We need to let it know that we are searching for the value assigned to the variable blah. In order to do this, we need to use bracket notation, like so. lookup[blah] @Astewart400 Antonious Stewart @Antonious-Stewart Jul 23 2017 05:18 ok makes sense im just happy i had the right idea in aa way just needed to fix where i was lookingup kumquatfelafel @kumquatfelafel Jul 23 2017 05:19 basically, lookup.blah is equivalent to lookup["blah"], whereas lookup[blah] is equivalent to lookup["alpha"] (assuming blah is assigned alpha) Antonious Stewart @Antonious-Stewart Jul 23 2017 05:21 ooooooooooooooooo @kumquatfelafel thanks for the breakdown man!!!! CamperBot @camperbot Jul 23 2017 05:21 astewart400 sends brownie points to @kumquatfelafel :sparkles: :thumbsup: :sparkles: :warning: astewart400 already gave kumquatfelafel points kumquatfelafel @kumquatfelafel Jul 23 2017 05:22 @Astewart400 another instance you have to use the bracket notation is when the object key starts with a number. I'm pretty sure... but don't quote me on that. I'm drunk. :p ;) Antonious Stewart @Antonious-Stewart Jul 23 2017 05:22 lol take one for me kumquatfelafel @kumquatfelafel Jul 23 2017 05:23 key :point_up: value :point_up: (forget if lessons mention that, but good to know) silver537 @silver537 Jul 23 2017 05:25 lessons do mention that. 
just gotta read more carefully Antonious Stewart @Antonious-Stewart Jul 23 2017 05:25 they do kumquatfelafel @kumquatfelafel Jul 23 2017 05:29 @silver537 's not really a matter of reading carefully. 's a matter of opening a new window/tab, navigating the map, trying to find the specific lesson where they first mention the concept, and seeing if it's there. I'm far too lazy to do that. I'd much rather launch into a long-winded and exhaustive explanation detailing why I daren't waste the effort. :p ;) @Astewart400 though yeah, if key name starts with number, you can't use dot notation, you have to use bracket. silver537 @silver537 Jul 23 2017 05:34 or you can just send a link to MDN :D Stanley Su @stanley-su Jul 23 2017 05:34 @kumquatfelafel why does it work like that? Antonious Stewart @Antonious-Stewart Jul 23 2017 05:34 lol kumquatfelafel @kumquatfelafel Jul 23 2017 05:35 @silver537 But that's like... three letters! :o I'm not going to type three letters silver537 @silver537 Jul 23 2017 05:35 but you rather typpe more? kumquatfelafel @kumquatfelafel Jul 23 2017 05:36 You have blown my mind. .... Touché. :p @stanley-su Sorry, why does what work like what? :laughing: silver537 @silver537 Jul 23 2017 05:36 lol Stanley Su @stanley-su Jul 23 2017 05:39 @kumquatfelafel Why doesn’t dot notation work with numbers? kumquatfelafel @kumquatfelafel Jul 23 2017 05:45 not entirely sure if there's a more poignant reason than an adopted convention. It may also be something like "the moment we see a token that starts with a number, let's treat it like a number" What I can say is that with variable names, you can't start with a number. 
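The dot-versus-bracket point above can be condensed into a few lines. This is a sketch (the key/value names are illustrative, borrowed from the lookup example earlier in the chat):

```js
const lookup = { alpha: "Adams", bravo: "Boston" };
const blah = "alpha";

console.log(lookup.blah);    // undefined - dot notation looks for the literal key "blah"
console.log(lookup["blah"]); // undefined - same literal key, just written with brackets
console.log(lookup[blah]);   // "Adams"   - brackets evaluate blah first, then look up "alpha"
```

Bracket notation is also the only option when the key is not a valid identifier, e.g. when it starts with a digit or contains spaces: `obj["2nd place"]` works, `obj.2nd place` is a syntax error.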
Antonious Stewart @Antonious-Stewart Jul 23 2017 05:47
is there a better way to get the result here or no
```js
// Setup
var myObj = {
  pet: "kitten",
  bed: "sleigh"
};

function checkObj(checkProp) {
  // Your Code Here
  if (myObj.hasOwnProperty(checkProp)) {
    return myObj[checkProp];
  } else {
  }
  return "Change Me!";
}

// Test your code by modifying these values
```

kumquatfelafel @kumquatfelafel Jul 23 2017 05:51
@Astewart400 well... you could technically just say something like...
```js
function checkObj(checkProp) {
  return myObj[checkProp] ? myObj[checkProp] : "Not Found";
}
```
That being said, those unfamiliar with coercion and the conditional operator would have no idea what this means. :p

@wxxxxxxxx Jul 23 2017 05:53
```js
for (var i = 0; i < arr.length; i++) {
  for (var j = 0; j < arr[i].length; j++) {
    product *= arr[i][j];
  }
}
```
do not understand. Who can explain this?

Antonious Stewart @Antonious-Stewart Jul 23 2017 05:55
@kumquatfelafel lmao tf let me see if codecamp will take that

kumquatfelafel @kumquatfelafel Jul 23 2017 05:57
@Astewart400 as a matter of fact, you could also do...
return myObj[checkProp] || "Not Found";
Though the reverse:
return "Not Found" || myObj[checkProp];
is not acceptable.
Stanley Su @stanley-su Jul 23 2017 05:57
@wxxxxxxxx Could you repaste the code, it's confusing
@wxxxxxxxx But from what I can see, it's just getting the product of all the numbers in a 2d array

Antonious Stewart @Antonious-Stewart Jul 23 2017 05:58
it makes sense but i need to know how

@wxxxxxxxx Jul 23 2017 05:58
```js
function multiplyAll(arr) {
  var product = 1;
  for (var i = 0; i < arr.length; i++) {
    for (var j = 0; j < arr[i].length; j++) {
      product *= arr[i][j];
    }
  }
  return product;
}

multiplyAll([[1,2],[3,4],[5,6,7]]);
```
I do not understand the two-dimensional array

Antonious Stewart @Antonious-Stewart Jul 23 2017 06:00
@kumquatfelafel im bout to check out w3schools and mdn for the how

kumquatfelafel @kumquatfelafel Jul 23 2017 06:00
@Astewart400 in both the shorter examples I provided, there are a couple things going on that you're probably not entirely familiar with at this point.

Antonious Stewart @Antonious-Stewart Jul 23 2017 06:01
not familiar but i understand what its doing, its a shorthand version of an if statement, like if the expression doesn't pass true it will do the other part of the code, ie

Stanley Su @stanley-su Jul 23 2017 06:02
@wxxxxxxxx it's an array of arrays, so in multiplyAll([[1,2],[3,4],[5,6,7]]):
[[1,2],[3,4],[5,6,7]] is the arr
[1, 2] is arr[0]
[3, 4] is arr[1]
[5, 6, 7] is arr[2]
the first for loop in the function loops through arr
the inner for loop loops through arr[0], arr[1], and arr[2]

Antonious Stewart @Antonious-Stewart Jul 23 2017 06:03
return basketball[best_game_ever] || "kill yourself'';

@wxxxxxxxx Jul 23 2017 06:03
arr[i] Is all?

kumquatfelafel @kumquatfelafel Jul 23 2017 06:10
i increases :arrow_right: , j increases :arrow_down: :
```
arr[0][0]                  arr[1][0]                  arr[2][0]                  ...  arr[arr.length - 1][0]
arr[0][1]                  arr[1][1]                  arr[2][1]                  ...  arr[arr.length - 1][1]
...
arr[0][arr[0].length - 1]  arr[1][arr[1].length - 1]  arr[2][arr[2].length - 1]  ...
```
arr[arr.length - 1][arr[arr.length - 1].length - 1] is the last entry; each cell is arr[i][j]

Brian @BrianCodes33 Jul 23 2017 06:13
tripleTrouble("this","test","lock") should give "ttlheoiscstk"
```js
function tripleTrouble(one, two, three) {
  var args = [...arguments];
  var result;
  for (var i = 0; i < args.length; i++) {
  }
  return result;
}
```

kumquatfelafel @kumquatfelafel Jul 23 2017 06:13
@wxxxxxxxx the for loop provides a convenient way to go through the elements of arrays/subarrays by incrementing the index in each iteration.

kumquatfelafel @kumquatfelafel Jul 23 2017 06:19
@BrianCodes33 I'm not entirely sure what you're trying to do? The idea is to repeatedly take the first "remaining letter" of each of the following words and add it to result? Is there any more to the problem?

Brian @BrianCodes33 Jul 23 2017 06:20
no that is it

kumquatfelafel @kumquatfelafel Jul 23 2017 06:20
Do you have to handle the scenario where one word is much longer than the others, e.g., in a certain way? @BrianCodes33

Brian @BrianCodes33 Jul 23 2017 06:21
Create a function that will return a string that combines all of the letters of the three inputted strings in groups. Taking the first letter of all of the inputs and grouping them next to each other. Do this for every letter, see example below!
Ex) Input: "aa", "bb", "cc" => Output: "abcabc"

a7n007 @a7n007 Jul 23 2017 06:21
what is the problem with my above code

Johnny @JohnnyBizzel Jul 23 2017 06:22
@BrianCodes33 Another loop over the chars of each string then, inside the main loop.

a7n007 @a7n007 Jul 23 2017 06:22
it says potential harm of infinite loop

Pedro Diaz @Pjdaze Jul 23 2017 06:22
hello coders. question.. when a constructor function is being created and lets say we put 4 parameters in it.. is that always just like a starting point which i should assume its going to grow later on??

kumquatfelafel @kumquatfelafel Jul 23 2017 06:23
@a7n007 this is really difficult to read. Could you put your code into a code block?
```js
code goes inside here
```

Brian @BrianCodes33 Jul 23 2017 06:24
@JohnnyBizzel ?

a7n007 @a7n007 Jul 23 2017 06:24
sure

kumquatfelafel @kumquatfelafel Jul 23 2017 06:24
@a7n007 you can edit your post by clicking the ... that appears in the top right when you hover your mouse over the comment

Pedro Diaz @Pjdaze Jul 23 2017 06:24
because lets say i got an object i want to create and lets say i know its going to have 20 keys.. i know it would look weird to have 20 arguments inside the parenthesis. is it smart to not have any parameters at all??

@kr5hn4 Jul 23 2017 06:30
@Pjdaze you make your function take an object as an argument, and put all the arguments as key, value pairs inside that object

kumquatfelafel @kumquatfelafel Jul 23 2017 06:30
@Pjdaze Technically, no function needs specified parameters because you can get things from arguments, but that doesn't mean it's a good idea to have no parameters, especially when you're supposed to be "passing 20 arguments".

it would look weird to have 20 arguments

You could also combine some of these parameters into an object itself and pass the object instead. For example, say I have parameters title, firstName, lastName and suffix. Instead of passing these separately, I could create a "name" object and pass that....
and have only one parameter: name

@kr5hn4 Jul 23 2017 06:32
just like he said^

a7n007 @a7n007 Jul 23 2017 06:35
```js
function convertToRoman(num) {
  var str = [];
  var mainstr = [];
  var i, ch1, ch2, ch3, x;
  while (num != 0) {
    ch1 = num % 10;
    str.push(ch1);
    num -= ch1;
    num /= 10;
  }
  index = str.length;
  for (i = str.length - 1; i >= 0; i--) {
    index--;
    // PLACE VALUE OF VAR
    if (index === 0) { // ones
      ch1 = "I"; ch2 = "V"; ch3 = "X";
    } else if (index === 1) { // tens
      ch1 = "X"; ch2 = "L"; ch3 = "C";
    } else if (index == 2) { // hundreds
      ch1 = "C"; ch2 = "D"; ch3 = "M";
    } else if (index == 3) { // thousands
      ch1 = "M"; ch2 = "y"; ch3 = "z";
    } else {
      ch1 = "I"; ch2 = "V"; ch3 = "X";
    }
    x = str[i]; // x IS THE NUMBER AT THAT PLACE VALUE
    if (x >= 1 && x < 4) { // the main converter
      for (i = 0; i < x; i++) mainstr = mainstr + ch1; // ch1 = 'I'
    } else if (i == 4) {
      mainstr = mainstr + ch1 + ch2; // ch2 = 'V'
    } else if (i > 4 && i < 9) {
      mainstr = mainstr + ch2;
      for (i = 5; i < x; i++) mainstr = mainstr + ch1;
    } else if (i == 9) {
      mainstr = mainstr + ch1 + ch2;
    } else {
      mainstr = mainstr;
    }
  } // for loop ends
  return mainstr;
}

convertToRoman(36);
```
says potential harm for an infinite loop but i dont understand why

kumquatfelafel @kumquatfelafel Jul 23 2017 06:40
using the same variable (where the value is shared) in a bunch of nested for loops is pretty much a recipe for disaster.

kumquatfelafel @kumquatfelafel Jul 23 2017 06:52
@a7n007 consider the following scenario.
```js
var i = 10;
var j = 10;
for (i = "Hello".length - 1; i >= 0; i--) {
  console.log(i); // what is the value of i here?
  for (i = 0; i < j; i++);
  console.log(i); // what is the value of i here? is this safe? Why or why not?
}
```
@a7n007

Ronald T. Casili @nvlled Jul 23 2017 07:06
@a7n007 consider doing while (num > 0) instead, because of imprecisions involved in floating point computations, so you may not get a 0 exactly, but something like 0.000000000001

kumquatfelafel @kumquatfelafel Jul 23 2017 07:15
@nvlled While there is validity to this concern, you don't exactly want it to run through the loop again when num == 0.0000000001 either. It's simpler to have something like ....
num = Math.floor(num / 10);
as opposed to
num /= 10;

kumquatfelafel @kumquatfelafel Jul 23 2017 07:29
@BrianCodes33 sorry, got distracted. still need help/still around?

kumquatfelafel @kumquatfelafel Jul 23 2017 07:36
@a7n007 other issues include: returning an array when you're supposed to be returning a string, trying to add characters to an array using the + operator, etc.

Huỳnh Trần Khanh @khanh2003 Jul 23 2017 07:53
How do I find the mouse position relative to a <canvas> element?

heroiczero @heroiczero Jul 23 2017 07:57

Huỳnh Trần Khanh @khanh2003 Jul 23 2017 08:20
@heroiczero thanks for the link.
```js
const mousePosition = (element, event) => {
  const rect = element.getBoundingClientRect();
  const normal = {
    x: (event.clientX - rect.left) / rect.width * element.width,
    y: (event.clientY - rect.top) / rect.height * element.height
  };
  const webgl = {
    x: (normal.x / rect.width) * 2 - 1,
    y: -(normal.y / rect.height) * 2 + 1
  };
  return { normal, webgl };
};
```

CamperBot @camperbot Jul 23 2017 08:20
khanh2003 sends brownie points to @heroiczero :sparkles: :thumbsup: :sparkles: :star2: 1519 | @heroiczero |http://www.freecodecamp.com/heroiczero

Sachin Chandy @sachinmc Jul 23 2017 09:34
hi folks, noob question:
```
> var stuff = new Array(10)
undefined
> stuff["a"] = 1;
1
> stuff
[ , , , , , , , , , , a: 1 ]
>
```
as you can see, there is a key-value pair style entry. I didn't know this was possible (I've used object literals). What do you call this..?

Stanley Su @stanley-su Jul 23 2017 09:34
arrays are actually objects

Sachin Chandy @sachinmc Jul 23 2017 09:35
@stanley-su my query is on the syntax and its use :)

Stanley Su @stanley-su Jul 23 2017 09:36
I don't think there's any use to doing that with an array

Sachin Chandy @sachinmc Jul 23 2017 09:36
hmm @stanley-su I can use the array in this form to store key value pairs

Joseph @revisualize Jul 23 2017 09:36
@sachinmc As @stanley-su stated. Arrays are actually objects.
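The Math.floor fix for the digit-extraction loop discussed earlier can be sketched in a few lines (the function name digits is illustrative; the chat's convertToRoman pushed digits the same way):

```js
// Extract decimal digits, least-significant first, using integer math only.
function digits(num) {
  var out = [];
  while (num > 0) {
    out.push(num % 10);
    num = Math.floor(num / 10); // stays an exact integer, unlike num /= 10
  }
  return out;
}

console.log(digits(36)); // [6, 3]
```

With Math.floor the loop variable can never get stuck at a tiny non-zero float, so the `while (num > 0)` guard and the integer division together rule out the infinite-loop warning for this part of the code.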
Alexander Køpke @alexanderkopke Jul 23 2017 09:38
you can also add properties to functions but I haven't seen anyone do that yet in production, but one could. @sachinmc

Sachin Chandy @sachinmc Jul 23 2017 09:38
@revisualize @stanley-su could you explain a bit more in this context please. I understand an array is an object, and I am creating one with a constructor here, but not sure how that is relevant to the question around the key value pair use in []
@alexanderkopke ok

Joseph @revisualize Jul 23 2017 09:41
@sachinmc
```js
var stuff = new Array(3).fill(2);
stuff["a"] = 1;
console.log(stuff);
for (var i in stuff) {
  console.log(stuff[i]);
}
console.log(stuff["a"]);
console.log(Object.keys(stuff));
stuff.push("b");
console.log(stuff);
console.log(Object.keys(stuff));
```
@sachinmc Basically, you should NOT be using a key value pair in an array.

alpox @alpox Jul 23 2017 09:50
@sachinmc If you want a real key/value store use Map

Sachin Chandy @sachinmc Jul 23 2017 09:51
@revisualize stuff["a"] = 1 . .
```
> Object.keys(stuff)
[ '0', '1', '2', '3', 'a' ]
>
```
when queried for the keys, it's returned a instead of the index position in that array. So the array object can accept a property a with a value 1, where the key is a and value is 1 ..

alpox @alpox Jul 23 2017 09:51
@sachinmc Yes because an array is also an object. BUT the array is not made for this kind of use at all

Sachin Chandy @sachinmc Jul 23 2017 09:52
@alpox ok cool

Joseph @revisualize Jul 23 2017 09:52
@sachinmc Again, you should NOT be using arrays as key value pairs. That is NOT what they are for.

alpox @alpox Jul 23 2017 09:53
@sachinmc So when you as example use methods of an array like .map it will just ignore every special key/value pair. Also, adding a key/value pair like this does NOT increase the array .length

Sachin Chandy @sachinmc Jul 23 2017 09:56
@alpox interesting, I just tried that.
how can we explain the array length not increasing when adding values like so: stuff["a"] = 1

alpox @alpox Jul 23 2017 09:57
@sachinmc Because an array consists of only number indices. Adding a string index just adds itself to the array object but doesn't count as an array item at all. It's the same as if you did:
```js
function foo() { }
foo["a"] = 1;
```

Joseph @revisualize Jul 23 2017 09:58
@sachinmc Do Objects have .lengths?

alpox @alpox Jul 23 2017 09:58
Because foo is also an object (a function is always also an object) you can do this. But it's not meant to be done @sachinmc

silver537 @silver537 Jul 23 2017 09:58
yes

alpox @alpox Jul 23 2017 09:58
@silver537 no

silver537 @silver537 Jul 23 2017 10:00
right

Joseph @revisualize Jul 23 2017 10:00

silver537 @silver537 Jul 23 2017 10:00

Joseph @revisualize Jul 23 2017 10:01
Anyhow, this conversation is frustrating me. So, I'm out.

Sachin Chandy @sachinmc Jul 23 2017 10:01
@alpox ah makes sense, thanks ;)

CamperBot @camperbot Jul 23 2017 10:02
sachinmc sends brownie points to @alpox :sparkles: :thumbsup: :sparkles: :star2: 1274 | @alpox |http://www.freecodecamp.com/alpox

Sachin Chandy @sachinmc Jul 23 2017 10:03
@alpox can you point me towards where I can read more on this .. the question was on use of functions and objects, I can google that

Joseph @revisualize Jul 23 2017 10:03
There isn't anywhere!

alpox @alpox Jul 23 2017 10:03
:point_up:

Stanley Su @stanley-su Jul 23 2017 10:03
You're not going to need it.

Joseph @revisualize Jul 23 2017 10:03
You're breaking stuff and you want explanations about why it's broken. Don't break stuff. It's like buying a vase ... breaking it and not understanding why it won't hold water. Functions are Objects. Arrays are Objects.

silver537 @silver537 Jul 23 2017 10:05
lol short tempered as always

Joseph @revisualize Jul 23 2017 10:05
in JavaScript almost everything is an Object.
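The point about string keys not counting toward .length can be checked directly. A minimal sketch (the values here are illustrative, not from the REPL session above):

```js
var stuff = [10, 20, 30];
stuff["a"] = 1; // legal, because arrays are objects - but it is not an array element

console.log(stuff.length);           // 3 - only numeric indices count
console.log(Object.keys(stuff));     // ["0", "1", "2", "a"] - the key is there, though
console.log(stuff.map(x => x * 2));  // [20, 40, 60] - array methods ignore "a"
```

This is exactly why a Map (or a plain object) is the right tool when you actually want named keys: the array machinery (length, map, push, JSON serialization of elements) only ever sees the numeric indices.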
Huỳnh Trần Khanh @khanh2003 Jul 23 2017 10:05
@revisualize except primitive types

Joseph @revisualize Jul 23 2017 10:05
Hence. almost. But, yeah.

Marco Galizzi @Tezenn Jul 23 2017 10:06
hi guys i have a problem. in an array of arrays i want to push another array if it is not already there, but this doesn't work, why???
```js
var x = [1,1];
var z = [[2,3],[4,5],[1,1]];
if (z.indexOf(x) == -1) {
  z.push(x);
}
console.log(z); // it still pushes the x array
```

Sachin Chandy @sachinmc Jul 23 2017 10:07
@Tezenn that returns 2

Stanley Su @stanley-su Jul 23 2017 10:07
@Tezenn Probably better if you compare it manually

Sachin Chandy @sachinmc Jul 23 2017 10:07
@Tezenn you probably meant != -1 ?

alpox @alpox Jul 23 2017 10:07
@Tezenn indexOf compares with === and that comparison compares the INSTANCE of an array - so [1,1] === [1,1] is false

kumquatfelafel @kumquatfelafel Jul 23 2017 10:08
@Astewart400 hey, so what I mentioned earlier, looking back slightly more sober, was not quite correct. Or rather, was wrong for what you were trying to do. But I'm kinda too lazy/tired to explain why at the moment. Just... behaves similarly, but take it with a grain of salt for the time being.

Sachin Chandy @sachinmc Jul 23 2017 10:08
@Tezenn yea ignore me :D

alpox @alpox Jul 23 2017 10:08
@Tezenn it would work if you had:
```js
var x = [1,1];
var z = [[2,3],[4,5],x];
```
Because then it would see that those arrays are the same instance

kumquatfelafel @kumquatfelafel Jul 23 2017 10:11
@Tezenn when you compare arrays with === it looks to see if they're referencing the same location in memory (i.e. pointing to the exact same object).

Marco Galizzi @Tezenn Jul 23 2017 10:11
ok alright, so how can i say don't push x to z if in z there is already a [1,1]?
alpox @alpox Jul 23 2017 10:11 @Tezenn You have to loop over the inner array to see if it contains all the same items as the other array Marco Galizzi @Tezenn Jul 23 2017 10:12 ok thanks guys all this stuff is already inside another if :D @alpox @kumquatfelafel CamperBot @camperbot Jul 23 2017 10:12 tezenn sends brownie points to @alpox and @kumquatfelafel :sparkles: :thumbsup: :sparkles: :cookie: 526 | @kumquatfelafel |http://www.freecodecamp.com/kumquatfelafel :star2: 1275 | @alpox |http://www.freecodecamp.com/alpox kumquatfelafel @kumquatfelafel Jul 23 2017 10:12 "poiny_up: alpox @alpox Jul 23 2017 10:13 !z.some(arr => arr.every((num, i) => num === x[i])) would be my input but you may better go to find your own solution :-) kumquatfelafel @kumquatfelafel Jul 23 2017 10:13 ehat @camperbot alpox said. ugh. My mind is functioning but i chant' type :laughing: alpox @alpox Jul 23 2017 10:14 @kumquatfelafel hangover is stuck in the fingers? :D Marco Galizzi @Tezenn Jul 23 2017 10:15 i don't know yet the new way of writing js => etc Marco Galizzi @Tezenn Jul 23 2017 10:16 @sachinmc thanks i'll check it out! CamperBot @camperbot Jul 23 2017 10:16 tezenn sends brownie points to @sachinmc :sparkles: :thumbsup: :sparkles: :cookie: 260 | @sachinmc |http://www.freecodecamp.com/sachinmc Sachin Chandy @sachinmc Jul 23 2017 10:18 @Tezenn :) kumquatfelafel @kumquatfelafel Jul 23 2017 10:18 @alpox haha. more or less. ... There was more i was going to say, but I don't feel like typing it alpox @alpox Jul 23 2017 10:23 @Tezenn There is no need to learn it in early stage. If you don't know it yet, better skip it and do it your usual way which you know @kumquatfelafel :D give your fingers some rest then :D kumquatfelafel @kumquatfelafel Jul 23 2017 10:31 arrow is bit like, if you want to think of it from a rather low-level standpoint, an excuse not to write the word "function" and... in case of one-liners, "return" @alpox Never!!! :o ... 
:zzz:

alpox @alpox Jul 23 2017 10:32
@kumquatfelafel So... arrow functions are good for your fingers - less typing :D

Sulaiman @suli-g Jul 23 2017 11:00
Finished my URL shortener this morning (6am), all-nighter. here are the links: the deployed version github repo

Stephen James @sjames1958gm Jul 23 2017 11:30
@suli-g :+1:

Ghost @ghost~593024a8d73408ce4f63eac0 Jul 23 2017 12:15
hello people. Any idea why this code is returning [27,0,39,1001] instead of [27,5,39,1001]?
```js
function largestOfFour(arr) {
  var numberaccessed = 0;
  var bigarray = [];
  var numberaccessedarr;
  for (i = 0; i < arr.length; i++) {
    numberaccessedarr = 0;
    for (j = 0; j < arr[i].length; j++) {
      if (numberaccessed < arr[i][j]) {
        numberaccessed = arr[i][j];
        numberaccessedarr = numberaccessed;
      }
    }
    bigarray = bigarray.concat(numberaccessedarr);
  }
  return bigarray;
}

largestOfFour([[13, 27, 18, 26], [4, 5, 1, 3], [32, 35, 37, 39], [1000, 1001, 857, 1]]);
```
for the algorithm challenge "return largest number in array"

Sweet Coding :) @SweetCodingInc Jul 23 2017 12:20
@KshitijaaJaglan You also need to re-initialize numberaccessed inside the first for loop

Stephen James @sjames1958gm Jul 23 2017 12:20
@KshitijaaJaglan You are resetting the largest value inside the loop

Sweet Coding :) @SweetCodingInc Jul 23 2017 12:21
So
```js
for (i = 0; i < arr.length; i++) {
  numberaccessedarr = 0;
```
should be
```js
for (i = 0; i < arr.length; i++) {
  numberaccessed = 0;
  numberaccessedarr = 0;
```

Stephen James @sjames1958gm Jul 23 2017 12:21
@KshitijaaJaglan Not sure why you have two variables, both numberaccessed and numberaccessedarr

Sweet Coding :) @SweetCodingInc Jul 23 2017 12:21
Yeah.. You don't really need that numberaccessedarr. you can just do bigarray.push(numberaccessed) instead of bigarray = bigarray.concat(numberaccessedarr);

Ghost @ghost~593024a8d73408ce4f63eac0 Jul 23 2017 12:23
Thanks a lot @SweetCodingInc @sjames1958gm it worked!!!
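Putting the two fixes above together (reset the running maximum for every sub-array, and drop the second variable), one possible cleaned-up version looks like this - a sketch, not the challenge's official solution:

```js
function largestOfFour(arr) {
  var result = [];
  for (var i = 0; i < arr.length; i++) {
    var largest = arr[i][0]; // reset for every sub-array
    for (var j = 1; j < arr[i].length; j++) {
      if (arr[i][j] > largest) largest = arr[i][j];
    }
    result.push(largest);
  }
  return result;
}

console.log(largestOfFour(
  [[13, 27, 18, 26], [4, 5, 1, 3], [32, 35, 37, 39], [1000, 1001, 857, 1]]
)); // [27, 5, 39, 1001]
```

Seeding `largest` from the first element (rather than 0) also makes the function correct for sub-arrays of all-negative numbers, which the original `numberaccessed = 0` initialization would miss.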
CamperBot @camperbot Jul 23 2017 12:23
kshitijaajaglan sends brownie points to @sweetcodinginc and @sjames1958gm :sparkles: :thumbsup: :sparkles: :cookie: 149 | @sweetcodinginc |http://www.freecodecamp.com/sweetcodinginc :star2: 8138 | @sjames1958gm |http://www.freecodecamp.com/sjames1958gm

Ghost @ghost~593024a8d73408ce4f63eac0 Jul 23 2017 12:24
@SweetCodingInc but it says that bigarray.push is not a function

Sweet Coding :) @SweetCodingInc Jul 23 2017 12:25
@KshitijaaJaglan Let me guess, you're doing bigarray = bigarray.push(numberaccessed)? If so, don't do it. Replace that entire line with just bigarray.push(numberaccessed), because .push returns the length of bigarray after you push an element to it, so you change the type of bigarray from Array to number

Ghost @ghost~593024a8d73408ce4f63eac0 Jul 23 2017 12:28
Ah... so that was it

Michiel @MichielHuijse Jul 23 2017 12:53
Is the way I invoke the filterFunction correct? I am currently getting an empty array so I am not really sure.
```js
function whatIsInAName(collection, source) {
  // What's in a name?
  var arr = [];
  // Only change code below this line
  var sourceKeys = Object.keys(source);
  arr = collection.filter(filterFunction());

  function filterFunction() {
    for (var i = 0; i < sourceKeys.length; i++) {
      if (collection.hasOwnProperty(sourceKeys[i])) {
        return true;
      } else {
        return false;
      }
    }
  }
  // Only change code above this line
  return arr;
}

whatIsInAName([{ "a": 1 }, { "a": 1 }, { "a": 1, "b": 2 }], { "a": 1 });
```

cowCrazy @cowCrazy Jul 23 2017 13:04
@MichielHuijse look at the example on mdn, you should pass parameters and arguments to the filter function.

Michiel @MichielHuijse Jul 23 2017 13:07
Hi @cowCrazy thanks, I already changed it to something new.
```js
function whatIsInAName(collection, source) {
  // What's in a name?
  var arr = [];
  // Only change code below this line
  var sourceKeys = Object.keys(source);
  for (var i = 0; i < sourceKeys.length; i++) {
    if (collection.hasOwnProperty(sourceKeys[i])) {
      arr = collection.filter(sourceKeys[i]);
    } else {
      return false;
    }
  }
  // Only change code above this line
  return arr;
}

whatIsInAName([{ "a": 1 }, { "a": 1 }, { "a": 1, "b": 2 }], { "a": 1 });
```

CamperBot @camperbot Jul 23 2017 13:07
michielhuijse sends brownie points to @cowcrazy :sparkles: :thumbsup: :sparkles: :cookie: 302 | @cowcrazy |http://www.freecodecamp.com/cowcrazy

cowCrazy @cowCrazy Jul 23 2017 13:18
@MichielHuijse you still need help or you wanna try on your own first?

Stephen James @sjames1958gm Jul 23 2017 13:18
@MichielHuijse filter requires a function; you are passing it a value
@MichielHuijse Also, collection is an array - so hasOwnProperty on an array here doesn't make much sense

Michiel @MichielHuijse Jul 23 2017 13:22
Thanks @sjames1958gm, I will take some time to figure it out myself. this will help me on my way

CamperBot @camperbot Jul 23 2017 13:22
michielhuijse sends brownie points to @sjames1958gm :sparkles: :thumbsup: :sparkles: :star2: 8140 | @sjames1958gm |http://www.freecodecamp.com/sjames1958gm

Stephen James @sjames1958gm Jul 23 2017 13:22
@MichielHuijse :+1:

Manish Chandra @chandrajob365 Jul 23 2017 13:45
```js
var chkFunc = function () {
  let name = 'Manish'
  return {
    getName: function () {
      console.log('Name is Manish')
      testName('Rahul')
    },
    testName: function (name) {
      console.log('Now name is : ', name)
    }
  }
}
chkFunc().getName()
```
The above gives an error: testName is not defined

Joel Y. @zapcannon99 Jul 23 2017 13:47
I've never seen functions defined like that O__O

Stephen James @sjames1958gm Jul 23 2017 13:50
@chandrajob365 testName doesn't exist as an unqualified name. You could use this.testName

Joel Y. @zapcannon99 Jul 23 2017 13:51
I'm a noob. this would then be referring to the object getting returned right?
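The this.testName fix Stephen describes can be sketched like so (names taken from the chkFunc snippet above; the methods return strings instead of logging, purely so the result is easy to inspect):

```js
var chkFunc = function () {
  return {
    getName: function () {
      // "this" is the object the method was called on,
      // so this.testName resolves to the sibling method
      return this.testName('Rahul');
    },
    testName: function (name) {
      return 'Now name is : ' + name;
    }
  };
};

console.log(chkFunc().getName()); // "Now name is : Rahul"
```

The bare call testName('Rahul') failed because object-literal properties never become local variables; they are only reachable through a reference to the object, and inside a method called as obj.getName() that reference is this.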
Ronique Ricketts @RoniqueRicketts Jul 23 2017 13:52 @zapcannon99 yes Siphils @Siphils Jul 23 2017 13:52 DNA Pairing There is my method. function pair(str) { var arr = str.split(""); var pairs = { "A": "T", "T": "A", "G": "C", "C": "G" }; var newArr = []; for (var i = 0; i < arr.length; i++) { newArr.push([arr[i], pairs[arr[i]]]); } return newArr; } Is there a simpler way? Ronique Ricketts @RoniqueRicketts Jul 23 2017 13:52 Joel Y. @zapcannon99 Jul 23 2017 13:53 @sjames1958gm @RoniqueRicketts Thanks. That was enlightening. @chandrajob365 Thanks for reminding me how to pass functions into objects (or whatever the term was) CamperBot @camperbot Jul 23 2017 13:53 zapcannon99 sends brownie points to @sjames1958gm and @roniquericketts and @chandrajob365 :sparkles: :thumbsup: :sparkles: :cookie: 301 | @roniquericketts |http://www.freecodecamp.com/roniquericketts :cookie: 262 | @chandrajob365 |http://www.freecodecamp.com/chandrajob365 :star2: 8141 | @sjames1958gm |http://www.freecodecamp.com/sjames1958gm Ghulam Shabir @ghulamshabir Jul 23 2017 13:54 @Siphils i used map and switch @Siphils Manish Chandra @chandrajob365 Jul 23 2017 13:55 @sjames1958gm @RoniqueRicketts Thank you . It worked . BUt this is very confusing . Not able to get hold of it :( CamperBot @camperbot Jul 23 2017 13:55 chandrajob365 sends brownie points to @sjames1958gm and @roniquericketts :sparkles: :thumbsup: :sparkles: :cookie: 302 | @roniquericketts |http://www.freecodecamp.com/roniquericketts :star2: 8142 | @sjames1958gm |http://www.freecodecamp.com/sjames1958gm Stephen James @sjames1958gm Jul 23 2017 13:55 @Siphils I also, used map and switch, but map and object would also work. @chandrajob365 testName only exists inside of an object, so you have to reference the object to get to the function. 
When you call myObj.getName() the this variable references myObj so inside of getName() this.testName refers to the correct function Siphils @Siphils Jul 23 2017 13:58 @ghulamshabir @sjames1958gm I don't know how to use map. So I ussd object... I will try that later. Stephen James @sjames1958gm Jul 23 2017 13:58 @Siphils map would replace your for loop not your object. Manish Chandra @chandrajob365 Jul 23 2017 13:59 @sjames1958gm it seems silly , but this comes in picture when we create object using new? Stephen James @sjames1958gm Jul 23 2017 13:59 @chandrajob365 Yes, but not only then. When you call a function "on" an object this also comes into play. Manish Chandra @chandrajob365 Jul 23 2017 15:30 @sjames1958gm Actually , that is the problem , which I am trying to solve . Working on creating a HTTP server using only net module of node. But as this is not right forum to ask , didn’t put it here Michiel @MichielHuijse Jul 23 2017 16:09 How to compare the name of a property between two different array's? alpox @alpox Jul 23 2017 16:14 @MichielHuijse arrays have no property names except length and some functions :D Michiel @MichielHuijse Jul 23 2017 16:15 you are right I meen objects. So it would be: How to compare the name of a property between two different objects? alpox @alpox Jul 23 2017 16:16 @MichielHuijse you have an object of which you don't know the property names? And do you want to test for a specific property name? Or all of them Michiel @MichielHuijse Jul 23 2017 16:17 Yes, I think I should save the property name in a var and then compare it with obj.hasOwnProperty(' '). alpox @alpox Jul 23 2017 16:18 @MichielHuijse Hmm thats more a lookup if an object defines the given property name not a comparison Michiel @MichielHuijse Jul 23 2017 16:18 Yes it is. *a look up. 
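That lookup can be sketched in a few lines (the source and obj values here are illustrative, borrowed from the exercise's Romeo-and-Juliet style data):

```javascript
// Sketch of the lookup described above: save the property name in a
// variable, then use hasOwnProperty to check the other object defines it.
var source = { last: "Capulet" };
var obj = { first: "Tybalt", last: "Capulet" };

var key = Object.keys(source)[0];                     // "last"
var defines = obj.hasOwnProperty(key);                // true: obj has a "last" property
var sameValue = defines && obj[key] === source[key];  // true: and the values match

console.log(defines, sameValue); // true true
```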
alpox @alpox Jul 23 2017 16:22 @MichielHuijse Then you are right - to do it with hasOwnProperty Michiel @MichielHuijse Jul 23 2017 16:23 @alpox thanks, will give it try :) CamperBot @camperbot Jul 23 2017 16:23 michielhuijse sends brownie points to @alpox :sparkles: :thumbsup: :sparkles: :star2: 1278 | @alpox |http://www.freecodecamp.com/alpox makalohri @makalohri Jul 23 2017 16:41 @knrt10 thanks man CamperBot @camperbot Jul 23 2017 16:41 makalohri sends brownie points to @knrt10 :sparkles: :thumbsup: :sparkles: :cookie: 466 | @knrt10 |http://www.freecodecamp.com/knrt10 Gokula Krishna @AKX-X-32 Jul 23 2017 17:39 Hey guys, completed my Pomodoro Timer please take a look and let me know your review. https://codepen.io/gkrishna/pen/JyPVpX?editors=1000 kumquatfelafel @kumquatfelafel Jul 23 2017 17:45 @AKX-X-32 currently, when your seconds is in single digits, it says e.g., 24 : 9. Try to make it say 24 : 09 instead. Gokula Krishna @AKX-X-32 Jul 23 2017 17:46 @kumquatfelafel noted (y) @kumquatfelafel done kumquatfelafel @kumquatfelafel Jul 23 2017 17:52 okay... first transition to break appears to go properly first transition to work go properly works fine over multiple transitions. Michiel @MichielHuijse Jul 23 2017 17:57 Hey hello, I am busy with where art thou. I solved 2 of the 4 challenges. However my comparison is not good enough. Except comparing values I should also check if the other property the source has exists. How do I go about it. Now an object with only one but not two of the same values is getting trough. function whatIsInAName(collection, source) { // What's in a name? 
var arr = []; // Only change code below this line var sourceKeys = Object.keys(source); var sourceValues = Object.values(source); for (var i = 0; i < sourceKeys.length; i++) { for (var j = 0; j < collection.length; j++) { let arrVal = Object.values(collection[j]); for(var k = 0; k < arrVal.length; k++) { if (arrVal[k] === sourceValues[i]) { arr.push(collection[j]); } } } } // Only change code above this line console.log(arr); return arr; } whatIsInAName([{ "a": 1, "b": 2 }, { "a": 1 }, { "a": 1, "b": 2, "c": 2 }], { "a": 1, "b": 2 }); Gokula Krishna @AKX-X-32 Jul 23 2017 17:58 @kumquatfelafel thanks for the test CamperBot @camperbot Jul 23 2017 17:58 akx-x-32 sends brownie points to @kumquatfelafel :sparkles: :thumbsup: :sparkles: :cookie: 527 | @kumquatfelafel |http://www.freecodecamp.com/kumquatfelafel kumquatfelafel @kumquatfelafel Jul 23 2017 18:00 @MichielHuijse this seems like a lot of for loops. @MichielHuijse say I have ... {"a" : 1, "b" : 2 } and {"c" : 1, "d" : 2 } Do these have the same values? How about the same keys? Michiel @MichielHuijse Jul 23 2017 18:06 hmm okay, not the same keys indeed Guderian Raborg @hypercuber Jul 23 2017 18:08 I did this function a while ago and it works: function smallestMultipleOfOneTo(r) { //count r inside pList if prime? let arr = pList(r); //arr = 2, 3, 5, 7, 11, 13, 17, 19 let exp = 1; for (let i = 0; i < arr.length; i++) { while (Math.pow(arr[i], exp) < r) { exp++; } exp -= 1; arr[i] = Math.pow(arr[i], exp); exp = 1; } //arr = 16, 9, 5, 7, 11, 13, 17, 19 return arr.reduce((a,b) => a * b); } but then I tried to improve it and it doesnt work: function smallestMultipleOfOneTo(r) { let arr = pArr(r); //arr = 2, 3, 5, 7, 11, 13, 17, 19 let exp = 1; for (let i = 0; i < arr.length; i++) { while (arr[i] < r) { //include r? 
exp++; arr[i] = Math.pow(arr[i], exp); } exp = 1; } //16, 9, 5, 7, 11, 13, 17, 19 <--- arr should be this *** return arr; } nvm it works now: function smallestMultipleOfOneTo(r) { let arr = pArr(r); //arr = 2, 3, 5, 7, 11, 13, 17, 19 let exp = 1; for (let i = 0; i < arr.length; i++) { while (Math.pow(arr[i], exp + 1) < r) { //include r? exp++; } arr[i] = Math.pow(arr[i], exp); exp = 1; } //16, 9, 5, 7, 11, 13, 17, 19 <--- arr should be this *** return arr; } kumquatfelafel @kumquatfelafel Jul 23 2017 18:16 @hypercuber you can have more than one initializer inside for loop. e.g. for (let i = 0, exp = 1; i < arr.length; i++, exp = 1) Pagnito @Pagnito Jul 23 2017 18:21 anyone good with svg animations? kumquatfelafel @kumquatfelafel Jul 23 2017 18:28 @hypercuber You could also potentially speed it up a bit if you do something like while (Math.pow(arr[i], exp + 2) < r) exp += 2; if (Math.pow(arr[i], exp + 1) < r) exp++; As that will cut down on your Math.pow() calls and your increments to exp by about half. I think this would speed it up (in most cases), anyway. Too lazy to actually test though. :p kumquatfelafel @kumquatfelafel Jul 23 2017 18:37 ^that being said, this will double your calls if exp should be 1 or 2. So no guarantee. Basically, it's a matter of the size of r with relation to arr[i] @Pagnito wish I could say yes, as it seems to be dead in here... but no Jul 23 2017 18:40 Anyone who could go through my code and identify the error ? The code is it not working for the other 3 arrays. kumquatfelafel @kumquatfelafel Jul 23 2017 18:40 @adittyagi possibly. But won't know for sure until you post it. :p Jul 23 2017 18:41 Here you go! function largestOfFour(arr) { // You can do this! 
var max = []; for(var i=0; i<arr.length; i++) { for(var j=0; j<arr[i].length; j++) { if(arr[i][j]>max) { max[i]=arr[i][j]; } } } return [max[0],max[1],max[2],max[3]]; } largestOfFour([[4, 9, 1, 3], [13, 35, 18, 26], [32, 35, 97, 39], [1000000, 1001, 857, 1]]); kumquatfelafel @kumquatfelafel Jul 23 2017 18:42 Why are there so many closing brackets??? Jul 23 2017 18:42 Here you go! function largestOfFour(arr) { // You can do this! var max = []; for(var i=0; i<arr.length; i++) { for(var j=0; j<arr[i].length; j++) { if(arr[i][j]>max) { max[i]=arr[i][j]; } } } return [max[0],max[1],max[2],max[3]]; } largestOfFour([[4, 9, 1, 3], [13, 35, 18, 26], [32, 35, 97, 39], [1000000, 1001, 857, 1]]); kumquatfelafel @kumquatfelafel Jul 23 2017 18:42 ... oh for(var i=0; i<arr.length; i++) { for(var j=0; j<arr[i].length; j++) { if(arr[i][j]>max) For future, don't do this. Separate so that each is on it's own line. Jul 23 2017 18:43 The code got distorted when I pasted here kumquatfelafel @kumquatfelafel Jul 23 2017 18:44 Just gonna reformat this a bit. function largestOfFour(arr) { // You can do this! var max = []; for(var i=0; i<arr.length; i++) { for(var j=0; j<arr[i].length; j++) { if(arr[i][j]>max) { max[i]=arr[i][j]; } } } return [max[0],max[1],max[2],max[3]]; } largestOfFour([[4, 9, 1, 3], [13, 35, 18, 26], [32, 35, 97, 39], [1000000, 1001, 857, 1]]); Jul 23 2017 18:44 How to do this ? kumquatfelafel @kumquatfelafel Jul 23 2017 18:45 js some code becomes some code if(arr[i][j]>max)... what does this compare? @adittyagi Jul 23 2017 18:48 arr[i][j] are the values of sub-arrays I suppose which are compared to var of 0 value the code only works for first sub array the output is [9,13,null,null] Moisés Man @moigithub Jul 23 2017 18:52 whats value have max ? Jul 23 2017 18:53 [9,13,null,null] Moisés Man @moigithub Jul 23 2017 18:53 when ur code start max is [ ] Jul 23 2017 18:53 0 Moisés Man @moigithub Jul 23 2017 18:54 no its not 0.. re-read ur code ........... 
kumquatfelafel @kumquatfelafel Jul 23 2017 18:55 This link shows more or less what happens to your code as it progresses. Jul 23 2017 18:57 var max = 0; when I set this there's an error 'Cannot create property '0' on number '0' DistinctWolf @DistinctWolf Jul 23 2017 18:57 does the first parameter in the post method of express nodejs actually render the content in the file or is it just a name that has to match with html form action attribute Jul 23 2017 19:00 @kumquatfelafel still not getting it Moisés Man @moigithub Jul 23 2017 19:03 divide n conquer thing... first make a function to get the largest of a SINgle arrray : ie [1,2,3,4] should result 4 function max( arr ){ var result = 0; // fill the rest of code here return result; } kumquatfelafel @kumquatfelafel Jul 23 2017 19:05 @adittyagi you're trying to have the best of both worlds at the moment in your code, treating max as both a number and an array. That comparison you have at the moment works by converting array into a number. However, array can only be reliably changed to a number when it has a single numeric element or no elements at all. The second you put 13 into the next "slot", the array is converted to NaN during the comparison... which is more or less guaranteed to return false. console.log(Number([])); // prints 0 console.log(Number([5])); // prints 5 console.log(Number([5, 7])) // prints NaN Chris Juchtmans @kjuchtmans Jul 23 2017 19:17 evening coders! :wave: kumquatfelafel @kumquatfelafel Jul 23 2017 19:17 @adittyagi We can also think of it this way. Say we've gone through first iteration and have found our value for max[0]. Now, we're working on max[1]. When do we want to assign a new value to max[1]? hey Jul 23 2017 19:18 ohhhhhhhhh okay that means first I have to store the value of max[1] then the others kumquatfelafel @kumquatfelafel Jul 23 2017 19:19 Jul 23 2017 19:19 multiple loops ? kumquatfelafel @kumquatfelafel Jul 23 2017 19:20 @adittyagi just focus on the question. When... 
under what conditions do we want to assign a new value to max[1]? Jul 23 2017 19:21 after comparing the values in 2nd sub array kumquatfelafel @kumquatfelafel Jul 23 2017 19:21 comparing with what? Jul 23 2017 19:22 and assigning the max value to max[1] comparing max with the numbers in 2nd sub array kumquatfelafel @kumquatfelafel Jul 23 2017 19:23 @adittyagi Does max == max[1]? Jul 23 2017 19:23 no max was supposed to be 0 till now there's a catch baersc @baersc Jul 23 2017 19:26 Hey, guys I've got a function that's working, and solving the problem, but the linter is throwing a "Expected assignment or function call and instead saw an expression" error. The line with the error is: Array.isArray(arr[i]) ? flatten(arr[i]) : flat.push(arr[i]); Should I change this code or ignore the error? kumquatfelafel @kumquatfelafel Jul 23 2017 19:27 never mind what max was supposed to be. Let's talk about what it is. max is an array. Is an array the same thing as a single value? Jul 23 2017 19:29 umm no Amit Patel @AmitP88 Jul 23 2017 19:30 hey guys, I'm currently studying the topic of functional (or modular) JS and I was wondering: Are higher-order functions and callbacks the same thing? In a sense that they both refer to functions that are used as arguments for other functions kumquatfelafel @kumquatfelafel Jul 23 2017 19:30 how about max[1]? Is that a single value? @adittyagi Chris Juchtmans @kjuchtmans Jul 23 2017 19:31 "Arguments Optional" lesson on closures & currying: Question: How can I access arguments (2) and (3), when the test reads addTogether(2)(3); ? 
My below code returns a TypeError: 'addTogether( )( ) is not a function' code: function addTogether() { var a,b, result; a = arguments[0]; b = arguments[1]; //return typeof b; if (typeof a !== 'number'|| typeof b !== 'number'){ result = undefined; } else if(arguments.length === 2){ result = a+b; } else if (arguments.length === 1){ var adderCurried = function(x){ return function(y){ return x+y; }; }; } return result; } addTogether(2)(3); thanks! Stuhl @Stuhl Jul 23 2017 19:31 @baersc Cannot give you a clear answer, but the error is pointing at the way u made the ternary. I think JSlints want you to assign the ternary to variable or something baersc @baersc Jul 23 2017 19:32 @Stuhl Thanks, I'll read that CamperBot @camperbot Jul 23 2017 19:32 baersc sends brownie points to @stuhl :sparkles: :thumbsup: :sparkles: :cookie: 170 | @stuhl |http://www.freecodecamp.com/stuhl Jul 23 2017 19:32 @kumquatfelafel yes? Joseph @revisualize Jul 23 2017 19:32 @kjuchtmans Currying Chris Juchtmans @kjuchtmans Jul 23 2017 19:37 @revisualize thanks, I've been reading up on currying pretty much all PM, and I feel I grasp some of it now: my curried func adderCurried seems to return the correct resultm when isolated from the if ... else structure. But when I try to implement it, the test line addTogether(2)(3) is not accepted. Why is that? CamperBot @camperbot Jul 23 2017 19:37 kjuchtmans sends brownie points to @revisualize :sparkles: :thumbsup: :sparkles: :star2: 4373 | @revisualize |http://www.freecodecamp.com/revisualize kumquatfelafel @kumquatfelafel Jul 23 2017 19:39 @adittyagi How about arr[1][1]? Is that a single value? Joseph @revisualize Jul 23 2017 19:41 @kjuchtmans No idea. @kjuchtmans Here's some testing I did. Jul 23 2017 19:41 yes that's a single value @kumquatfelafel Joseph @revisualize Jul 23 2017 19:42 @kjuchtmans function addTogether() { return 1; } addTogether(2)(3); Error. However... 
@kjuchtmans function addTogether() { return function() { return 1; }; } addTogether(2)(3); No error. and returns 1 Chris Juchtmans @kjuchtmans Jul 23 2017 19:43 @revisualize ok. I understand Joseph @revisualize Jul 23 2017 19:43 @kjuchtmans function addTogether() { return function() { return arguments[0]; }; } addTogether(2)(3); returns 3 Chris Juchtmans @kjuchtmans Jul 23 2017 19:44 right. that indeed makes things clearer, cheers! will go and rework from here kumquatfelafel @kumquatfelafel Jul 23 2017 19:44 @adittyagi just going to restart my computer, back in a sec Joseph @revisualize Jul 23 2017 19:44 @kjuchtmans function addTogether() { var a = arguments[0]; return function() { return a + arguments[0]; }; } addTogether(2)(3); 5 Jul 23 2017 19:44 okay btw thank you! @kumquatfelafel CamperBot @camperbot Jul 23 2017 19:44 adittyagi sends brownie points to @kumquatfelafel :sparkles: :thumbsup: :sparkles: :cookie: 528 | @kumquatfelafel |http://www.freecodecamp.com/kumquatfelafel Joel Y. @zapcannon99 Jul 23 2017 19:45 Not that I said anything, but I learned something else again from you two. Thanks. @kjuchtmans @revisualize CamperBot @camperbot Jul 23 2017 19:45 zapcannon99 sends brownie points to @kjuchtmans and @revisualize :sparkles: :thumbsup: :sparkles: :cookie: 291 | @kjuchtmans |http://www.freecodecamp.com/kjuchtmans :star2: 4374 | @revisualize |http://www.freecodecamp.com/revisualize Tiago Correia @tiagocorreiaalmeida Jul 23 2017 19:45 hey var languages = { english: "Hello!", french: "Bonjour!", notALanguage: 4, spanish: "Hola!" }; // print hello in the 3 different languages for(var i =0; i<languages.length; i++){ if(typeof(languages[i]) === "string"){ console.log(languages[i]); } } why the thing not working? hum Joseph @revisualize Jul 23 2017 19:46 @tiagocorreiaalmeida objects don't have lengths Joel Y. 
@zapcannon99 Jul 23 2017 19:46 it's you need 3 @tiagocorreiaalmeida Chris Juchtmans @kjuchtmans Jul 23 2017 19:46 so basically your accessing two different index [0]'s, belonging to two different functions, true? Tiago Correia @tiagocorreiaalmeida Jul 23 2017 19:46 you are right Joseph @revisualize Jul 23 2017 19:47 @tiagocorreiaalmeida var myObj = { "a": "alpha", "b": "bravo" } What is the value of myObj.length;? Tiago Correia @tiagocorreiaalmeida Jul 23 2017 19:47 I have to use like for( var x in myobj) to get the keys right? Joseph @revisualize Jul 23 2017 19:47 You could. Tiago Correia @tiagocorreiaalmeida Jul 23 2017 19:47 for(var values in languages){ like this Joseph @revisualize Jul 23 2017 19:47 try it Tiago Correia @tiagocorreiaalmeida Jul 23 2017 19:47 done thanks :) kumquatfelafel @kumquatfelafel Jul 23 2017 19:52 @adittyagi okay. so we've established that arr[1][1] and max[1] are single values. max[1] is supposed to (at the end of iteration) contain the maximum value for that particular subarray. Say we start with the assumption that max[1] is arr[1][0]... which in this (hypothetical) case happens to be 5. If arr[1][1] is 3, do we want to change max[1]? How about if arr[1][1] is 7? Do we change it then? Stuhl @Stuhl Jul 23 2017 19:56 @revisualize Is there a reason why u hardcode arguments into the function instead of just making 2 parameters before hand ? Jul 23 2017 19:57 if arr[1][1] is 3 we will not change max[1] but if arr[1][1] is 7 we'll change max[1] @kumquatfelafel we need another variable to store max[0],max[1],max[2],max[3] values then which should be an array kumquatfelafel @kumquatfelafel Jul 23 2017 20:01 if arr[1][1] is 3 we will not change max[1] but if arr[1][1] is 7 we'll change max[1] @adittyagi So could we then generalize this a bit more in context of your actual code and say that if arr[1][j] is 3, we won't change max[1] (since max[1] is 5), and when arr[1][j] is 7, we'd change max[1] to 7? 
Jul 23 2017 20:02 yess @kumquatfelafel kumquatfelafel @kumquatfelafel Jul 23 2017 20:03 @adittyagi Don't worry about rest of max or anything for moment. How would you express what I just said above in an if statement? Joseph @revisualize Jul 23 2017 20:05 @Stuhl What? @Stuhl I didn't hard code arguments into the function. @Stuhl The arguments object is built into every function. Stuhl @Stuhl Jul 23 2017 20:07 @revisualize Yeah but its easier to specify the arguments in the function parameter, its easier that way and currying works fine with that too Joseph @revisualize Jul 23 2017 20:07 @Stuhl I was just doing testing. Jul 23 2017 20:08 max[1] = arr[1][0]; if(arr[1][j]>max[1]) { max[1]=arr[1][j]; } Joseph @revisualize Jul 23 2017 20:08 @Stuhl Why am I being critiqued on some quick test code? Stuhl @Stuhl Jul 23 2017 20:08 @revisualize Ah okay, didn't read the context from the complete beginning Thought u were explaining currying Joseph @revisualize Jul 23 2017 20:08 No. Stuhl @Stuhl Jul 23 2017 20:09 Im not criticizing, it just looked confusing in the beginning kumquatfelafel @kumquatfelafel Jul 23 2017 20:09 @adittyagi Did I say we were working off the assumption that max = arr[1][0]? I'm pretty sure I said we're assuming that max[1] = arr[1][0]. max != max[1]. Understanding this part is crucial to completing the task. max is the entire array of values. max[1] is the second element of that array. Jul 23 2017 20:12 oh sorry! I hope it's better now kumquatfelafel @kumquatfelafel Jul 23 2017 20:12 @adittyagi so this is... pretty close to what it looks like for max[1]. Now we just need to generalize it and take care of a few ... hiccups. @adittyagi say we want to generalize this even more... make it work for any element of max. What could we do? Note: if you were to copy paste this generalization and run your code, it still won't quite work in current state so don't be concerned Michiel @MichielHuijse Jul 23 2017 20:15 What is obj here in this program? 
Is it an argument? And what is its initial value? I can't really understand it. function whatIsInAName(collection, source) { // "What's in a name? that which we call a rose // By any other name would smell as sweet.” // -- by William Shakespeare, Romeo and Juliet var srcKeys = Object.keys(source); // filter the collection return collection.filter(function (obj) { for(var i = 0; i < srcKeys.length; i++) { if(!obj.hasOwnProperty(srcKeys[i]) || obj[srcKeys[i]] !== source[srcKeys[i]]) { return false; } } return true; }); } // test here whatIsInAName([{ first: "Romeo", last: "Montague" }, { first: "Mercutio", last: null }, { first: "Tybalt", last: "Capulet" }], { last: "Capulet" }); kumquatfelafel @kumquatfelafel Jul 23 2017 20:16 @MichielHuijse it's just an arbitrary variable name that holds current element being examined. Joseph @revisualize Jul 23 2017 20:17 @MichielHuijse Object.keys()? Jul 23 2017 20:17 @kumquatfelafel max[i] ? kumquatfelafel @kumquatfelafel Jul 23 2017 20:17 Michiel @MichielHuijse Jul 23 2017 20:17 @kumquatfelafel aaah, ok, thanks,. CamperBot @camperbot Jul 23 2017 20:17 :cookie: 529 | @kumquatfelafel |http://www.freecodecamp.com/kumquatfelafel michielhuijse sends brownie points to @kumquatfelafel :sparkles: :thumbsup: :sparkles: Joseph @revisualize Jul 23 2017 20:17 @MichielHuijse OH! obj that's the parameter you specified for your function. @MichielHuijse the .filter() method passes each array element to the function as arguments[0] kumquatfelafel @kumquatfelafel Jul 23 2017 20:19 @adittyagi So now we have... max[i] = arr[i][0]; if(arr[i][j]>max[i]) { max[i]=arr[i][j]; } Jul 23 2017 20:20 okay... kumquatfelafel @kumquatfelafel Jul 23 2017 20:20 @adittyagi now... here's where the slightly awkward part comes in. We have to reincorporate this into our code. Moisés Man @moigithub Jul 23 2017 20:21 @alenreggie what code u did ? 
alenreggie @alenreggie Jul 23 2017 20:22
var quotient = 4.4 / 2.0; // Fix this line
Moisés Man @moigithub Jul 23 2017 20:22
it passed on my screen
Joel Y. @zapcannon99 Jul 23 2017 20:23
@alenreggie Works for me too.
Jul 23 2017 20:24
@kumquatfelafel
function largestOfFour(arr) { // You can do this!
  var result = [];
  for (var i = 0; i < arr.length; i++) {
    var max = arr[i][0];
    for (var j = 0; j < arr[i].length; j++) {
      if (arr[i][j] > max) {
        max = arr[i][j];
      }
    }
    result[i] = max;
  }
  return result;
}
largestOfFour([[4, 9, 1, 3], [13, 35, 18, 26], [32, 35, 97, 39], [1000000, 1001, 857, 1]]);
this worked !! :smile:
alenreggie @alenreggie Jul 23 2017 20:25
Thanks to both of you.. I'll just skip it for now.. I've just read that this happens to some people..
Jul 23 2017 20:25
max is now a single value
result is an array where new max will be stored
@kumquatfelafel thanks a lot ! :smile:
CamperBot @camperbot Jul 23 2017 20:26
adittyagi sends brownie points to @kumquatfelafel :sparkles: :thumbsup: :sparkles:
Moisés Man @moigithub Jul 23 2017 20:26
@adittyagi u could initialize j to 1 for(var j=1; j<ar .... ..then u can figure out why :)
Jul 23 2017 20:27
both are correct why is it so ?
Moisés Man @moigithub Jul 23 2017 20:28
max already contains the first value of the subarray var max = arr[i][0]; no point on compare to itself... starting j with 1 will make it compare to the next value
kumquatfelafel @kumquatfelafel Jul 23 2017 20:28
:point_up: @adittyagi say i is 0 and j is 0. we've just assumed that max = arr[0][0]. Now we're checking if arr[0][0] > max, aka arr[0][0]
Jul 23 2017 20:30
@moigithub @kumquatfelafel ohh okay! thank you
CamperBot @camperbot Jul 23 2017 20:30
adittyagi sends brownie points to @moigithub and @kumquatfelafel :sparkles: :thumbsup: :sparkles:
:star2: 3526 | @moigithub |http://www.freecodecamp.com/moigithub
Jul 23 2017 20:32
Good night guys!
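For reference, here is the accepted solution with the j = 1 tweak moigithub describes applied: max is seeded with the subarray's first element, so the inner loop can start comparing at index 1 instead of comparing the first element to itself.

```javascript
// The working solution from above, with the suggested tweak:
// max already holds arr[i][0], so the inner loop starts at j = 1.
function largestOfFour(arr) {
  var result = [];
  for (var i = 0; i < arr.length; i++) {
    var max = arr[i][0];
    for (var j = 1; j < arr[i].length; j++) {
      if (arr[i][j] > max) {
        max = arr[i][j];
      }
    }
    result[i] = max;
  }
  return result;
}

console.log(largestOfFour([[4, 9, 1, 3], [13, 35, 18, 26], [32, 35, 97, 39], [1000000, 1001, 857, 1]]));
// [9, 35, 97, 1000000]
```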
kumquatfelafel @kumquatfelafel Jul 23 2017 20:32 @adittyagi In this case, the optimization is pretty slight, but if you're doing something pretty intensive inside your for loop, not having to go through that extra iteration can make a difference (so it's good practice to try and notice these things) night Jul 23 2017 20:34 ok I'll take care of those things! kumquatfelafel @kumquatfelafel Jul 23 2017 20:34 don't... drive yourself crazy... but it's good to occasionally look back on what you did and say, "can this be improved" :p Jul 23 2017 20:35 haha yeah! Kelechi Chinaka @ke1echi Jul 23 2017 20:54 hi @kumquatfelafel Antonious Stewart @Antonious-Stewart Jul 23 2017 21:14 what is wrong with my code var Car = function() { this.wheels = 4; this.engines = 1; this.seats = 5; }; // Only change code below this line. var myCar = new Car(); myCar.blazer = "hellfire"; Stanley Su @stanley-su Jul 23 2017 21:17 @Astewart400 what are you trying to do? Antonious Stewart @Antonious-Stewart Jul 23 2017 21:17 In the editor, use the Car constructor to create a new instance and assign it to myCar. Then give myCar a nickname property with a string value. Joel Y. @zapcannon99 Jul 23 2017 21:18 @Astewart400 Ignoring my stupid answer from before, Everything seems to check out. Is there a test that's failing? Antonious Stewart @Antonious-Stewart Jul 23 2017 21:18 yea the last on the challenege Joel Y. @zapcannon99 Jul 23 2017 21:18 can you send the link Antonious Stewart @Antonious-Stewart Jul 23 2017 21:18 says it needs to be a string but it obviously is Stanley Su @stanley-su Jul 23 2017 21:18 @Astewart400 you said to give it a nickname property. Shouldn't it be myCar.nickName = "hellfire";? Antonious Stewart @Antonious-Stewart Jul 23 2017 21:18 Joel Y. @zapcannon99 Jul 23 2017 21:19 What Stanley said is right Antonious Stewart @Antonious-Stewart Jul 23 2017 21:19 bruh.. XD Joel Y. 
@zapcannon99 Jul 23 2017 21:19 lol And here I though blazer was another name for nickname of car >__> I feel dimb Can't spell either right now merp Antonious Stewart @Antonious-Stewart Jul 23 2017 21:20 @stanley-su @zapcannon99 thanks CamperBot @camperbot Jul 23 2017 21:20 astewart400 sends brownie points to @stanley-su and @zapcannon99 :sparkles: :thumbsup: :sparkles: :cookie: 74 | @stanley-su |http://www.freecodecamp.com/stanley-su :cookie: 312 | @zapcannon99 |http://www.freecodecamp.com/zapcannon99 kumquatfelafel @kumquatfelafel Jul 23 2017 21:25 @kelechy hi. Not truly here at moment. But hi, temporarily. :p DistinctWolf @DistinctWolf Jul 23 2017 21:38 anyone who can help me with nodejs kumquatfelafel @kumquatfelafel Jul 23 2017 21:40 Off again. :wave: Moisés Man @moigithub Jul 23 2017 21:45 @FlashHero if u ask ur question maybe some1 who knows will answer :) ... or maybe not :P Guderian Raborg @hypercuber Jul 23 2017 21:48 What is wrong with my code: https://projecteuler.net/problem=8 let str = 73167176531330624919225119674426574742355349194934 96983520312774506326239578318016984801869478851843 85861560789112949495459501737958331952853208805511 12540698747158523863050715693290963295227443043557 66896648950445244523161731856403098711121722383113 62229893423380308135336276614282806444486645238749 30358907296290491560440772390713810515859307960866 70172427121883998797908792274921901699720888093776 65727333001053367881220235421809751254540594752243 52584907711670556013604839586446706324415722155397 53697817977846174064955149290862569321978468622482 83972241375657056057490261407972968652414535100474 82166370484403199890008895243450658541227588666881 16427171479924442928230863465674813919123162824586 17866458359124566529476545682848912883142607690042 24219022671055626321111109370544217506941658960408 07198403850962455444362981230987879927244284909188 84580156166097919133875499200524063689912560717606 05886116467109405077541002256983155200055935729725 
71636269561882670428252483600823257530420752963450; let u = String.fromCharCode(10); let arr = str.split('').filter(s => s !== u).map(s => Number(s)); Array.prototype.prod = function(a, b) { //from index a to b, including b return this.slice(a, b + 1).reduce((a,b) => a * b); //i to end-1 } function nDigitsMaxProduct(n) { let max = 1; for (let i = 0; i < arr.length - n; i++) { if (max < arr.prod(i, i + n)) max = arr.prod(i, i + n); } return max; } console.log(nDigitsMaxProduct(13)); // 70573265280 wrong answer eeflores @eeflores Jul 23 2017 21:50 is that a backtick for str? DistinctWolf @DistinctWolf Jul 23 2017 21:54 does app.post actually render the content of the file passed in as argument or is it just a string that has to be same as form's action attribute Guderian Raborg @hypercuber Jul 23 2017 22:03 @eeflores yes eeflores @eeflores Jul 23 2017 22:14 @hypercuber do you know what the expected answer is? Guderian Raborg @hypercuber Jul 23 2017 22:15 @eeflores It is 23514624000 Moisés Man @moigithub Jul 23 2017 22:17 why ? eeflores @eeflores Jul 23 2017 22:18 @hypercuber I fiddled with the slice, removed the +1 and got your answer @hypercuber I don't think that's right though @hypercuber actually the change I did to slice is correct when you do the +1 you're actually multiplying 14 numbers Jul 23 2017 22:24 hello i'm trying to access an objects keys/properties but it keeps saying undefined console.log(data[0].station); does not work console.log(data) however will give me the object Guderian Raborg @hypercuber Jul 23 2017 22:26 @eeflores I thought slice did not include the second number Moisés Man @moigithub Jul 23 2017 22:27 @adamfaraj data is NOT an array eeflores @eeflores Jul 23 2017 22:28 @hypercuber console.log([1,2,3,4,5].slice(0, 3)); or better still console.log([0,1,2,3,4,5].slice(0, 3)); Jul 23 2017 22:28 @moigithub gotcha!! thanks! 
@moigithub CamperBot @camperbot Jul 23 2017 22:28 adamfaraj sends brownie points to @moigithub :sparkles: :thumbsup: :sparkles: :star2: 3527 | @moigithub |http://www.freecodecamp.com/moigithub eeflores @eeflores Jul 23 2017 22:30 @hypercuber so essentially you want 13 elements from the array. when you use slice(0, 13) you'll get an array with elements from 0 to 12 but that's 13 elements It's Monday morning for me, so my description ... explanation may not be the best Guderian Raborg @hypercuber Jul 23 2017 22:33 @eeflores It is okay, you help me a lot. I had to check the day in my computer to make sure it wasnt Monday for me if not it means I skipped work lol eeflores @eeflores Jul 23 2017 22:34 :thumbsup: Simon Cordova @gbsimon87 Jul 23 2017 22:43 Hey all, good evening! eeflores @eeflores Jul 23 2017 22:47 hello @gbsimon87 :wave: Simon Cordova @gbsimon87 Jul 23 2017 22:50 Hello! It's been a while since I've been here, I miss the community :) dyon3334 @dyon3334 Jul 23 2017 23:00 @gbsimon87 welcome back Simon Cordova @gbsimon87 Jul 23 2017 23:02 Thanks guys :) I used to be here all the time, once (largely thanks to this platform) I got my first programming job about 8 months ago I barely find time anymore. But one can never forget where they came from! dyon3334 @dyon3334 Jul 23 2017 23:05 @gbsimon87 and do you like your job ? 
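The off-by-one eeflores points out comes down to slice's end index being exclusive: slice(i, i + n) returns exactly n elements, while slice(i, i + n + 1) returns n + 1. A small sketch (the prod helper here is a standalone rewrite for illustration, not the original prototype method):

```javascript
// slice's second argument is exclusive, so slice(0, 3) yields 3 elements.
var sample = [1, 2, 3, 4, 5];
console.log(sample.slice(0, 3)); // [1, 2, 3]

// Product of n digits starting at index i: slice(i, i + n), no "+ 1".
function prod(digits, i, n) {
  return digits.slice(i, i + n).reduce(function (a, b) { return a * b; }, 1);
}
console.log(prod(sample, 1, 3)); // 2 * 3 * 4 = 24
```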
How long did you study before?

monkeyfingerz @monkeyfingerz Jul 23 2017 23:14
hey code camp

```js
function smallestCommons(arr) {
  var range = [];
  var answer = 0;
  var sortedArr = arr.sort(function(a, b) {
    return a - b;
  });
  for (var i = 0; i < sortedArr[1]; i++) {
    range.push(i);
  }
  var x = 2;
  for (var j = range[0]; j < range.length; j++) {
    if ((largestPrime(sortedArr[1]) * x) % range[j] === 0) {
    } else {
      j = range[0];
      x++;
    }
  }
  return largestPrime(sortedArr[1]) * x;
}

function largestPrime(num) {
  var range = [];
  var i = num;
  while (range.length === 0) {
    if (isPrime(i)) {
      range.push(i);
    }
    i--;
  }
  return range[range.length - 1];
}

function isPrime(num) { // function to check if the number presented is prime
  for (var j = 2; j < num; j++) {
    if (num % j === 0) {
      return false;
    }
  }
  return true;
}

smallestCommons([23, 18]);
```

I'm trying to find the Smallest Common Multiple on the intermediate challenges but this code isn't passing all the tests
I'm sorry if this is hard to read or understand. i'll write notes and repost
OH i figured out the problem
i'm trying to find the largest prime number in one of my functions but one of the test cases doesn't have a prime number in the range
shoooooooot

Nick Cleary @Hijerboa Jul 23 2017 23:21
So I'm trying to test my twitch API, and have it call the names off all the streamers that are offline. However, the code which displays the streamers (which are in an array labeled offline) is called before the code which pushes the offline streamers to that array finishes executing, despite the fact that I call it after the code which pushes the names to the array. Can someone tell me how I would go about delaying this code?
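For reference (not from the chat): the usual way out of the dead end monkeyfingerz hit is to drop the largest-prime idea entirely and build the answer from the identity lcm(a, b) = a * b / gcd(a, b), folded across the range. A minimal sketch:

```javascript
// Greatest common divisor via the Euclidean algorithm
function gcd(a, b) {
  return b === 0 ? a : gcd(b, a % b);
}

// Least common multiple of two numbers
function lcm(a, b) {
  return (a * b) / gcd(a, b);
}

// Smallest common multiple of every integer in the range [min, max]
function smallestCommons(arr) {
  const [min, max] = [...arr].sort((a, b) => a - b);
  let result = min;
  for (let n = min + 1; n <= max; n++) {
    result = lcm(result, n);
  }
  return result;
}
```

`smallestCommons([23, 18])` evaluates to 6056820, the value the freeCodeCamp tests expect for that input, and no prime search is needed.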
```js
var streamers = ["ESL_SC2", "OgamingSC2", "cretetion", "freecodecamp", "storbeck", "habathcx", "RobotCaleb", "noobs2ninjas"];
var streamURL = "https://wind-bow.glitch.me/twitch-api/streams/";
var usersURL = "https://wind-bow.glitch.me/twitch-api/users/";
var online = [];
var offline = [];

function getStatus() {
  for (let i = 0; i < streamers.length; i++) {
    $.getJSON(streamURL + streamers[i], function(status) {
      if (status.stream === null) {
        console.log(streamers[i] + " is currently offline.");
        offline.push(streamers[i]);
        console.log(offline);
      } else if (status.stream.stream_type == "live") {
        console.log(streamers[i] + " is currently streaming.");
        online.push(streamers[i]);
        console.log(online);
      }
    });
  }
}

function findOffline() {
  if (offline.length > 0) {
    console.log("it works");
  } else if (offline.length == 0) {
    console.log("it doesn't work");
  }
}

$(document).ready(function() {
  getStatus();
  findOffline();
});
```

Marcus Parsons @marcusparsons Jul 23 2017 23:34
@Hijerboa, Hey Nick, what you want to do is make an initial call to your `$.getJSON` call and then call it each time there's a response, regardless of what that response is. You wouldn't want to loop through the streamers array because the loop is going to finish before there is a response back from the API. Here's a codepen I set up using your code. https://codepen.io/marcusparsons/pen/rzNeOe

Stephen James @sjames1958gm Jul 23 2017 23:46
@Hijerboa If you want to launch the requests in parallel, you will need to process the responses in the callbacks. You could push the values to the arrays, and then when the sum of the lengths of the two arrays is streamers.length you can process the response arrays

Moisés Man @moigithub Jul 23 2017 23:48
another way is using ajaxStop @Hijerboa

Dovydas Stirpeika @Giveback007 Jul 23 2017 23:49
@sjames1958gm can you explain this nonsense?
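Stephen's suggestion above (process the responses in the callbacks and only act once all of them have arrived) can be sketched without jQuery by wrapping each request in a promise and waiting on all of them. `getStream` here is a hypothetical stand-in for the `$.getJSON(streamURL + name)` call:

```javascript
// Classify streamers once *every* response has come back, instead of
// calling findOffline() before the async requests have finished.
// getStream(name) is a hypothetical stand-in for $.getJSON(streamURL + name).
function classifyStreamers(streamers, getStream) {
  const requests = streamers.map(name =>
    getStream(name).then(status => ({ name, status }))
  );
  // Promise.all resolves only after all requests have completed
  return Promise.all(requests).then(results => {
    const online = [];
    const offline = [];
    for (const { name, status } of results) {
      if (status.stream === null) {
        offline.push(name);
      } else {
        online.push(name);
      }
    }
    return { online, offline };
  });
}
```

With jQuery specifically, `$.getJSON` already returns a promise-like object, so the same shape works there; the `ajaxStop` route moigithub mentions is another way to detect "all requests done".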
I did `nas[0][3] += 1` and it worked fine, but in this instance it returns NaN

Stephen James @sjames1958gm Jul 23 2017 23:50
@Giveback007 Are you trying to add to uninitialized values?

Dovydas Stirpeika @Giveback007 Jul 23 2017 23:50
@sjames1958gm you can see that they are set to 0 initially
the first log is the initial state
second log is after the conditional += 1

Stephen James @sjames1958gm Jul 23 2017 23:51
@Giveback007 no, I don't see that, I don't know what the score array contains. If score[i][j] is 3 it looks like you add one to an uninitialized value

Dovydas Stirpeika @Giveback007 Jul 23 2017 23:52
score array contains values between 0 to 8

Nick Cleary @Hijerboa Jul 23 2017 23:53
@marcusparsons so basically call it once as a function and use recursion to recall it?
and unfortunately that doesn't actually fix the issue :(

Dovydas Stirpeika @Giveback007 Jul 23 2017 23:53
@sjames1958gm what does uninitialized mean?

Moisés Man @moigithub Jul 23 2017 23:54
if nas array is empty, nas[][j] will be undefined
undefined + 1 ==> NaN

Stephen James @sjames1958gm Jul 23 2017 23:54
@Giveback007 You haven't given it a value

Dovydas Stirpeika @Giveback007 Jul 23 2017 23:54
but I did
i logged it on the right
it's filled with 0
or am I missing something?

Moisés Man @moigithub Jul 23 2017 23:54
nas is not filled... `nas[i] = []` on ur code

Dovydas Stirpeika @Giveback007 Jul 23 2017 23:55
@moigithub ahhh i re-assigned it to []
ahhhH!!
@sjames1958gm @moigithub tnx

CamperBot @camperbot Jul 23 2017 23:55
giveback007 sends brownie points to @sjames1958gm and @moigithub :sparkles: :thumbsup: :sparkles:
:star2: 3528 | @moigithub | http://www.freecodecamp.com/moigithub
:star2: 8143 | @sjames1958gm | http://www.freecodecamp.com/sjames1958gm

Dovydas Stirpeika @Giveback007 Jul 23 2017 23:57
right under my nose too

Moisés Man @moigithub Jul 23 2017 23:57
what should that code do?
Dovydas Stirpeika @Giveback007 Jul 23 2017 23:58
I'm implementing extra features to the GOL

Moisés Man @moigithub Jul 23 2017 23:58
rotate values ?? like if 0 then .. make it 1, if 1 make it 2, if 2 .. make it 3, if 3 .. make it 0

Dovydas Stirpeika @Giveback007 Jul 23 2017 23:59
no, score has a value of 0 to 8
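The pitfall in the exchange above, in runnable form: re-assigning a row to `[]` throws the zeros away, so the next read is `undefined`, and `undefined + 1` is `NaN`.

```javascript
// A 2D counter grid initialized with zeros
let nas = [[0, 0, 0, 0]];
nas[0][3] += 1; // 0 + 1, works fine

// Re-assigning the row to [] discards the zeros...
nas[0] = [];
// ...so the next read is undefined, and undefined + 1 is NaN
nas[0][3] += 1;
console.log(nas[0][3]); // NaN
```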
# The operation of scalar field $\phi (\vec x)$ on vacuum state I am now learning Quantum Field Theory by reading the lecture notes by David Tong. I have a question about the mode expansion of the real scalar field that is canonically quantized by promoting the classical Klein-Gordon field to a quantum field. The mode expansion of the field is given by $$\phi (\vec x) = \int \frac{d^{3}p}{(2 \pi)^3} \frac{1}{\sqrt{2 \omega_{\vec p}}} (a_{\vec p} e^{i {\vec p} \cdot {\vec x}} + a^{\dagger}_{\vec p} e^{-i {\vec p} \cdot {\vec x}})$$ where $\omega_{\vec p} = \sqrt{p^{2} + m^2}$ and $a^{\dagger}_{\vec p}$ will create a spin $0$ particle in the momentum state $\left| \vec p \right\rangle$, namely $a^{\dagger}_{\vec p} \left| {0} \right\rangle = \left| {\vec p} \right\rangle$. The question I am now curious about is what I will get if I act with the quantum field on the vacuum state, that is, $\phi (\vec x) \left| {0} \right\rangle = ?$ It seems that in the lecture notes $\phi (\vec x) \left| {0} \right\rangle = \left| {\vec x} \right\rangle$, where $\left| {\vec x} \right\rangle$ is the spin $0$ particle in the position state at $\vec x$. (eq 2.52 in http://www.damtp.cam.ac.uk/user/tong/qft/two.pdf) However, this seems nontrivial to me, so I carried out the following derivation.
$$\phi (\vec x) \left| {0} \right\rangle = \int \frac{d^{3}p}{(2 \pi)^3} \frac{1}{\sqrt{2 \omega_{\vec p}}} (a_{\vec p} e^{i {\vec p} \cdot {\vec x}} + a^{\dagger}_{\vec p} e^{-i {\vec p} \cdot {\vec x}})\left| {0} \right\rangle$$ $$= \int \frac{d^{3}p}{(2 \pi)^3} \frac{1}{\sqrt{2 \omega_{\vec p}}} (\left| {\vec p} \right\rangle e^{-i {\vec p} \cdot {\vec x}})$$ However, I have no idea how to prove that $$\int \frac{d^{3}p}{(2 \pi)^3} \frac{1}{\sqrt{2 \omega_{\vec p}}} (\left| {\vec p} \right\rangle e^{-i {\vec p} \cdot {\vec x}}) = \left| {\vec x} \right\rangle$$ I know that in elementary quantum mechanics we have the completeness relation $$\left| {\vec x} \right\rangle = \int d^{3}p \, \left| {\vec p} \right\rangle \left\langle {\vec p} | {\vec x} \right\rangle$$ However, this doesn't resemble what I want to prove. It seems a stupid question, but I was just stuck on it. I would be grateful for any suggestion! • It looks like you have basically shown it; you just need to recognise that this is a Fourier transform from $x$ to $p$ space Apr 23, 2018 at 13:14 • related: Recovering QM from QFT. Apr 23, 2018 at 13:34 • Thanks, but there is a $\omega_{\vec p}$ in the integral, and I am not sure how to eliminate it to actually get $\left| {\vec x} \right\rangle$. Apr 23, 2018 at 13:47 • The question is properly answered below, but I would just say you can see equation 2.62 onwards for Tong's normalizations. Apr 23, 2018 at 15:22 ## 2 Answers You almost did everything except the relativistic considerations, which are of course not obvious. The factor $\sqrt{\omega}$ is the relativistic normalization for the eigenstates, and you also need to pay attention to the measure, $d^3 p$. This kind of normalization comes from the fact that the eigenstates and the measure should each be Lorentz invariant individually, even though the whole integral is invariant. You can check that $\frac{d^3 p}{2 \omega}$ and $\sqrt{\omega} | p \rangle$ are indeed Lorentz invariant.
Since the normalization of the position eigenstate is $\frac{1}{\sqrt{\omega}} | x\rangle$, there remains only a $\sqrt{\omega}$ in the denominator. Sometimes there is a convention where the normalization is done on the creation and annihilation operators instead of the eigenstates. It is better to check which convention your textbook (or your own notation) follows, in order not to confuse the definitions and get weird results in your calculations. You can prove that using the Fourier transform from position to momentum space. • Thanks, but there is a $\omega_{\vec p}$ in the integral, and I am not sure how to eliminate it to actually get $\left| {\vec x} \right\rangle$. Apr 23, 2018 at 13:59
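Putting the two answers together, a sketch, assuming the relativistic convention $\left| \vec p \right\rangle = \sqrt{2 \omega_{\vec p}}\, a^{\dagger}_{\vec p} \left| 0 \right\rangle$ that the comments attribute to Tong's eq. 2.62 onwards (rather than the $a^{\dagger}_{\vec p} \left| 0 \right\rangle = \left| \vec p \right\rangle$ used in the question):

$$\phi (\vec x) \left| 0 \right\rangle = \int \frac{d^{3}p}{(2 \pi)^3} \frac{e^{-i {\vec p} \cdot {\vec x}}}{\sqrt{2 \omega_{\vec p}}}\, a^{\dagger}_{\vec p} \left| 0 \right\rangle = \int \frac{d^{3}p}{(2 \pi)^3} \frac{e^{-i {\vec p} \cdot {\vec x}}}{2 \omega_{\vec p}} \left| \vec p \right\rangle .$$

Apart from the Lorentz-covariant factor $\frac{1}{2 \omega_{\vec p}}$, this is the familiar expansion $\left| \vec x \right\rangle = \int \frac{d^{3}p}{(2 \pi)^3} e^{-i {\vec p} \cdot {\vec x}} \left| \vec p \right\rangle$, which is why $\phi (\vec x) \left| 0 \right\rangle$ is interpreted, up to normalization, as a single particle at position $\vec x$.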
# “Hidden” variable in Simulink models In some of the example models included in the Simulink library, there are variables whose value is not visible. For instance, in the model power_PVarray_grid_det the sample time of the powergui is defined as Ts_Power, and I cannot figure out the value of this variable. Is there a way to find these hidden variables, get access to them, and change their value? 1. Click the File/Model Properties/Model Properties item in the model window menu 2. Select the Callbacks tab in the model properties dialog window Then you can see that the PreLoadFcn callback is defined as follows: it assigns `Ts_Power`. This code is executed before the model is loaded, so Ts_Power is initialized to 10^-6. • thank you. Do you know if there is a way to edit these variables through command line or a script? – Yiannis S. Jan 16 at 8:32 • I was about to post this. Save the example locally and remove those variables out of the callback. – JonRB Jan 16 at 8:36 • You can also call your own script from the callback – AVK Jan 16 at 8:57
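In answer to the command-line question in the comments: a sketch using the standard `get_param`/`set_param` functions (the model name is from the discussion above; the callback text shown is a hypothetical example, adjust as needed):

```matlab
% Inspect the model's PreLoadFcn callback from the MATLAB command line
load_system('power_PVarray_grid_det');
get_param('power_PVarray_grid_det', 'PreLoadFcn')

% Change the callback (takes effect the next time the model is loaded) ...
set_param('power_PVarray_grid_det', 'PreLoadFcn', 'Ts_Power = 2e-6;');
save_system('power_PVarray_grid_det');

% ... or simply override the variable in the base workspace after loading
Ts_Power = 2e-6;
```

The same `get_param(bdroot, 'PreLoadFcn')` pattern works for any model callback (PostLoadFcn, InitFcn, etc.).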
# How to convert PDF to text by command line? In this article, I will show you how to convert PDF to text from the command line. Many PDF readers let you select the contained text by mouse drag, copy it with Ctrl+C, and paste it wherever you want with Ctrl+V. But if you need to extract text from many files, or do it accurately and repeatably, manual copy & paste is a weak solution. If you are searching for a better one, please follow my steps. Some analysis of your PDFs is necessary before choosing software, as there are thousands of tools for this conversion and many of them screw up the text. Which one is better? Broadly, PDFs come in two kinds: image PDFs and text PDFs. Text in an image PDF cannot be selected in a PDF reader, while a text PDF allows copy & paste in a PDF reader. There is one exception: if a PDF file contains certain embedded fonts, you may still be able to copy & paste from it, but the output text will be garbled. First, convert a text-based PDF file to text by command line. • Download PDF to TXT Converter; this software can be used either through its GUI or from the command line. After downloading the exe, install it by double-clicking the exe file and following the installation prompts. • When the installation finishes successfully, there will be an icon on the desktop, and you will find the executable file in the installation folder. • Call it from an MS-DOS window and press Enter to see the parameter list. For details, please check the user guide on our website. • The software is extremely easy to use and costs only $38.00 Usage: PDF2TXT <input PDF file> [output TXT file] Example: PDF2TXT C:\test\*.pdf C:\test\*.txt You can use wildcards to do the conversion in batch.
Example: PDF2TXT C:\input.pdf C:\output.txt -open -silent -first 1 -last 10 -unicode With this command line, you set the conversion page range to 1-10. Once the conversion finishes, the text file will be opened automatically, and it will be exported using UTF-8 encoding. Now let us check the conversion result in the following snapshot. Second, convert an image PDF to text by command line. • The software mentioned above can only convert text-based PDFs to text. On an image PDF it will output garbled text, because an image PDF requires OCR technology. • For those, please choose PDF to Text OCR Converter, which can recognize text from scanned documents, image PDFs, and image files. As it uses OCR technology, its price is a little higher (Server License $195.00), and it has more functions than the first tool; I will list some of them below. Input formats: TIFF, BMP, PNG, JPG, PCX, TGA, PDF and others. Usage: pdf2txtocr [options] <input file> <output file> Example: pdf2txtocr.exe -text "PageText %PageNumber% of %PageCount%" C:\in.pdf C:\out.txt With this command, you can add page numbers to the output text. Example: for %F in (D:\temp\*.pdf) do pdf2txtocr.exe -ocr -lang deu "%F" "%~dpnF.txt" This command line will OCR all PDF files in the D:\temp\ folder to text files. If you need to know more about this software, you'd better experience it yourself. Now let us check the OCR conversion result in the following snapshot.
# Symbol doubt Sir, in the picture: • What's the meaning of that symbol (in the red box)? • What is the name of that symbol? • How does that symbol work? • Are there any websites you would suggest for more detailed information about that symbol? Note by Vamsi Krishna Appili 3 years, 8 months ago Sort by: A simple example: for all positive integers $$k$$, $k!=1\cdot 2\cdot 3\cdot \ldots\cdot (k-1)\cdot k = \prod_{i=1}^k i.$ · 3 years, 8 months ago As $$\sum$$ means addition, $$\prod$$ means product. · 3 years, 8 months ago Product symbol · 3 years, 8 months ago we call it pi... it stands for continuous product.... · 3 years, 8 months ago what is $$\rho$$ standing for? · 3 years, 8 months ago pardon??? · 3 years, 8 months ago I think it means: what does $$\rho$$ stand for. · 3 years, 7 months ago
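One more remark on the "how it works" question (my addition, not from the thread): $\prod$ composes its terms by multiplication exactly the way $\sum$ composes them by addition, and for positive terms the two are linked through the logarithm:

$$\prod_{i=1}^{k} a_i = a_1 \cdot a_2 \cdots a_k, \qquad \ln\!\left(\prod_{i=1}^{k} a_i\right) = \sum_{i=1}^{k} \ln a_i \quad (a_i > 0).$$

For instance, $\prod_{i=1}^{4} i = 1 \cdot 2 \cdot 3 \cdot 4 = 24 = 4!$, matching the factorial example above.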
1. ## About Euler's constant (2)... Given ... $\displaystyle \gamma(z)= \int_{0}^{\infty} t^{z}\cdot e^{-t}\cdot dt$ (1) ... and the Euler's constant... $\displaystyle \gamma = \lim_{n \rightarrow \infty} \{\sum_{k=1}^{n} \frac{1}{k} - \ln n \}= .577215664901...$ (2) ... demonstrate from definition (2) that ... $\displaystyle \frac{1}{\gamma(z)} = e^{\gamma z}\cdot \prod_{n=1}^{\infty} (1+ \frac{z}{n})\cdot e^{-\frac{z}{n}}$ (3) Kind regards $\displaystyle \chi$ $\displaystyle \sigma$ 2. Originally Posted by chisigma Given ... $\displaystyle \gamma(z)= \int_{0}^{\infty} t^{z}\cdot e^{-t}\cdot dt$ (1) ... and the Euler's constant... $\displaystyle \gamma = \lim_{n \rightarrow \infty} \{\sum_{k=1}^{n} \frac{1}{k} - \ln n \}= .577215664901...$ (2) ... demonstrate from definition (2) that ... $\displaystyle \frac{1}{\gamma(z)} = e^{\gamma z}\cdot \prod_{n=1}^{\infty} (1+ \frac{z}{n})\cdot e^{-\frac{z}{n}}$ (3) Kind regards $\displaystyle \chi$ $\displaystyle \sigma$ Problem: Given $\displaystyle \gamma(z)=\int_0^{\infty}t^ze^{-t}\,dt=\Gamma(z+1)$, find $\displaystyle \frac{1}{\gamma(z)}$. Proof: Lemma: Let $\displaystyle \Gamma_\sigma(x)=\frac{\sigma!\,\sigma^x}{x\cdot(1+x)\cdots(\sigma+x)}$; then $\displaystyle \Gamma(x)=\lim_{\sigma\to\infty}\Gamma_{\sigma}(x)$. Proof: It can be readily seen that $\displaystyle \Gamma_{\sigma}(x+1)=\frac{\sigma!\,\sigma^{x+1}}{(x+1)(x+2)\cdots(x+\sigma+1)}=\frac{\sigma\cdot x}{\sigma+x+1}\,\Gamma_\sigma(x)$. So that $\displaystyle \Gamma(x+1)=\lim_{\sigma\to\infty}\Gamma_{\sigma}(x+1)=\lim_{\sigma\to\infty}\left\{\frac{\sigma\cdot x}{\sigma+x+1}\,\Gamma_{\sigma}(x)\right\}=x\,\Gamma(x)$. Furthermore $\displaystyle \lim_{\sigma\to\infty}\Gamma_{\sigma}(1)=\lim_{\sigma\to\infty}\frac{\sigma}{\sigma+1}=1$. Thus the functional equation and the boundary condition are satisfied $\displaystyle \quad\blacksquare$ Now using this we can get somewhere.
If we look closely at $\displaystyle \Gamma_{\sigma}(x)$ we can see that $\displaystyle \Gamma_{\sigma}(x)=\frac{\sigma!\,\sigma^{x}}{x\cdot(1+x)\cdots(\sigma+x)}=\frac{\sigma^{x}}{\frac{1}{\sigma!}\,x\cdot(1+x)\cdots(\sigma+x)}=\frac{\sigma^{x}}{x\cdot\left(1+\frac{x}{1}\right)\cdots\left(1+\frac{x}{\sigma}\right)}.$ Now using the common fact that $\displaystyle \sigma^x=e^{x\ln(\sigma)}$ yields $\displaystyle \Gamma_{\sigma}(x)=\frac{e^{x\ln(\sigma)}}{x\cdot\left(1+\frac{x}{1}\right)\cdots\left(1+\frac{x}{\sigma}\right)}.$ Now using a little snake-oil we can see that $\displaystyle x\cdot\ln(\sigma)=x\left(\ln(\sigma)-H_{\sigma}+H_{\sigma}\right)=x\left(\ln(\sigma)-H_{\sigma}\right)+x\cdot H_{\sigma}$ where $\displaystyle H_\sigma=\sum_{\ell=1}^{\sigma}\frac{1}{\ell}$. So then $\displaystyle \Gamma_{\sigma}(x)=\frac{e^{x\left(\ln(\sigma)-H_{\sigma}\right)}\,e^{x\cdot H_\sigma}}{x\cdot\left(1+\frac{x}{1}\right)\cdots\left(1+\frac{x}{\sigma}\right)}$ or equivalently $\displaystyle \Gamma_{\sigma}(x)=\frac{e^{-x\left(H_\sigma-\ln(\sigma)\right)}}{x}\cdot\prod_{\ell=1}^{\sigma}\left\{\frac{e^{\frac{x}{\ell}}}{1+\frac{x}{\ell}}\right\}.$ So that $\displaystyle \frac{1}{\Gamma(x)}=\lim_{\sigma\to\infty}\frac{1}{\Gamma_{\sigma}(x)}=\lim_{\sigma\to\infty}\left\{x\,e^{x\left(H_{\sigma}-\ln(\sigma)\right)}\prod_{\ell=1}^{\sigma}\left\{\left(1+\frac{x}{\ell}\right)e^{\frac{-x}{\ell}}\right\}\right\}=x\,e^{\gamma x}\prod_{\ell=1}^{\infty}\left\{\left(1+\frac{x}{\ell}\right)e^{\frac{-x}{\ell}}\right\}\quad\blacksquare$ 3. The 'lemma' ... $\displaystyle \gamma(z)= \lim_{k \rightarrow \infty} \gamma_{k} (z)$ (1) ... where... $\displaystyle \gamma_{k} (z)= \frac{(k+1)!\cdot k^{z}}{(z+1)\cdot (z+2)\dots (z+k)\cdot (z+k+1)}$ (2) ... must be demonstrated and that can be done with the 'little nice formula' discussed in a previous thread...
$\displaystyle \int_{0}^{\infty} f(t)\cdot e^{-t}\cdot dt = \lim_{k \rightarrow \infty} k\cdot \int_{0}^{1} f(ku)\cdot (1-u)^{k}\cdot du$ (3) If we compute the integral (3) with $\displaystyle f(t)=t^{z}$, applying systematically the integration by parts, we obtain... $\displaystyle \int_{0}^{\infty} t^{z}\cdot e^{-t}\cdot dt = \lim_{k \rightarrow \infty} k^{1+z}\cdot \int_{0}^{1} u^{z}\cdot (1-u)^{k}\cdot du = \lim_{k \rightarrow \infty} \frac{(k+1)!\cdot k^{z}}{(z+1)\cdot (z+2)\dots (z+k)\cdot (z+k+1)} = \lim_{k \rightarrow \infty} \frac{k^{z}}{(1+z)\cdot (1+\frac{z}{2})\dots (1+\frac{z}{k})\cdot (1+\frac{z}{k+1})}$ (4) Now if we take into account the identity... $\displaystyle k^{z}= e^{z \ln k}= e^{z(\ln k -1-\frac{1}{2} -\dots -\frac{1}{k})}\cdot e^{z(1+\frac{1}{2} +\dots +\frac{1}{k})}$ (5) ... from (4) we derive the 'infinite product'... $\displaystyle \frac{1}{\gamma (z)}= e^{\gamma z}\cdot \prod_{k=1}^{\infty} (1+\frac{z}{k})\cdot e^{-\frac{z}{k}}$ (6) Kind regards $\displaystyle \chi$ $\displaystyle \sigma$ 4. Originally Posted by chisigma The 'lemma' ... must be demonstrated and that can be done with the 'little nice formula' discussed in a previous thread... Why must it be 'demonstrated'? I in fact showed that it is equivalent.
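As a quick numerical sanity check of (6) (my addition, not part of the thread), one can truncate the product at N factors and compare it against $1/\gamma(z) = 1/\Gamma(z+1) = 1/z!$ for small integer z:

```javascript
// Truncated right-hand side of (6): e^{γz} · Π_{n=1}^{N} (1 + z/n) e^{-z/n}
const EULER_GAMMA = 0.5772156649015329; // Euler–Mascheroni constant

function invGammaProduct(z, N) {
  let prod = Math.exp(EULER_GAMMA * z);
  for (let n = 1; n <= N; n++) {
    prod *= (1 + z / n) * Math.exp(-z / n);
  }
  return prod;
}

// For z = 1: 1/γ(1) = 1/1! = 1;  for z = 2: 1/γ(2) = 1/2! = 0.5
console.log(invGammaProduct(1, 100000));
console.log(invGammaProduct(2, 100000));
```

The truncated product converges slowly (the error is of order z/N), which is consistent with the logarithmic convergence of $H_N - \ln N$ to $\gamma$ in (2).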
# Using bbPress as a Support Forum bbPress is a tremendous plugin that provides a complete forum system done the WordPress way. Along with simply providing a great forum system for discussion boards, bbPress also works exceptionally well as a support platform, though there are several features that are missing from the core plugin. In this tutorial we're going to walk through configuring the plugin for an optimal support forum. I run two very active support forums, both on bbPress, so most of what I'm going to show you is from my own experience running those support forums. I mentioned a moment ago that there are several features missing from bbPress that are very important if you wish to run a successful support forum; these features are going to get added in the form of add-on plugins for bbPress that extend the plugin's default behavior. ## Setting Topics as Resolved Out of the box, bbPress does not give topics any sort of "status" that can be used to indicate if a question has been resolved by the support staff. This is one of the most fundamental features that every support system should have, and luckily the fine folks from GetShopped.org wrote an extension just for this. The GetShopped Support Forums extension will give you the ability to indicate if a forum should act as a support forum. Any forum that is set to be a support forum will then have a "status" attached to every topic that indicates whether it is resolved, unresolved, or not a support question. In the extension settings (Settings > Forums), you can easily control who has permission to change topic statuses and also what statuses to show. ## Assigning Topics to Support Staff Another fundamental feature that a good support platform must have, at least for support systems with more than one user, is the ability to assign topics (tickets) to staff members. The GetShopped Support Forums extension, mentioned above, provides this feature, along with the topic status feature. 
Anytime a topic is assigned to a staff member, they will receive an email that lets them know a ticket was assigned to them. Along with assigning topics, GetShopped Support Forums also allows staff members to "claim" topics. There is also a widget included that can display the assigned / claimed topics to the staff member that is currently logged in, providing them an easy way to access all topics that need their attention. ## Private Replies If your customers ever need to provide sensitive information, such as site URLs or account credentials, it is absolutely paramount that you give your customers a way to contact you privately. There are a variety of ways to do this, from personal emails to actual ticketing systems, such as Zendesk or Ticksy, but none of them are optimal if you are also using bbPress as your main support platform. Why? Simply because having multiple support areas to manage does nothing but convolute the process. It is always best if all of the support a customer needs can be provided in one place, whether they are leaving sensitive information or not. This is the exact reason Rémi Corson and I wrote the bbPress Private Replies extension. Private Replies allows customers to selectively choose whether they want their reply visible to only forum moderators (support staff) or public and visible to everyone. Private replies are highlighted with a blue background. When a private reply is viewed by a user that is not the original poster or a forum moderator, it will appear like this. ## Attachments Anyone that has ever managed product support knows the importance of screenshots, which is why providing your customers a way to upload screenshots (or other files) with their support tickets is vitally important to managing a good support forum. The GD bbPress Attachments extension handles this feature very well.
When posting a reply, customers will be given an upload field that lets them choose the file(s) they wish to attach. The attachments will then be shown at the bottom of their reply after posting. ## Forum Search When first starting out, including a search feature with your forums may not be critical, but once you have a large number of topics, it is crucial. If your users are unable to find existing topics to answer their questions, they will simply create a new one (they will create a new one most of the time anyhow), so giving your users a way to search the forums will mean much less work on the part of you and your support staff. The default search built into WordPress does not work well with bbPress, but the bbPress Search Widget from David Decker does work quite well. ## New Topic Notifications Some people will disagree with this one, but in my opinion, if you are running a support forum for a commercial product, you must set up notifications for when new support topics are posted. Usually these notifications are in the form of emails, though there are other methods as well. Jared Atchison wrote an extension called bbPress New Topic Notifications that will send out an email every time a new topic is posted. As the site admin, you can set the users (support staff) that new topic notifications are sent to. While it is not difficult to simply log into your support forum once per day to check for new topics, I personally prefer to simply rely on my email inbox to alert me of new support tickets. I simply set up a filter in my inbox (Gmail) to catch all new topic notifications so that I can immediately see how many new topics I have to answer. ## Reply Notifications bbPress has an option built into the core plugin that lets users choose if they want to receive an email notification anytime a new reply is posted to a topic. The problem with this feature is that there is no built-in way to customize the email that gets sent out.
In my own support forums, I found that I really needed and wanted to change the contents of this email, so I built the bbPress Custom Reply Notifications extension, which lets you set the subject and content of the email to anything you want. ## Restricting Access to Your Forums Depending on your product / business model, you might find that you need to restrict access to your support forums to only those users that have paid for a subscription or purchased your product. Depending on how you sell your services or products, there are a variety of ways to do this. If you sell subscriptions through my Restrict Content Pro plugin, there is a free extension that lets you restrict forum access to only paid members called bbPress Restrict Content Pro. If you sell digital products through Easy Digital Downloads, there is an extension called Content Restriction that lets you restrict individual forums to only users that have purchased a particular product. If you use WooCommerce to sell products, there is a great extension called Product Support that will let you give customers access to product-specific support forums. If you sell on the Envato marketplaces and wish to require users to enter a valid purchase key when registering, there is an extension available from Code Canyon for that. If there are other bbPress + product/subscription plugin integrations that you're aware of, please let me know in the comments! ## Notes on Core bbPress Configuration Aside from all of the extensions mentioned above, there are a couple of core bbPress settings you need to take into consideration when setting up a support forum. These settings can all be found in Settings > Forums. • 1. Subscriptions This option allows users to subscribe to topics and receive emails anytime a new reply is posted. There is absolutely no reason that you should not have this enabled, as it is vitally important that your customers have an easy way of knowing when a response has been posted to their question. • 2.
Topic Tags While not vital, tagging topics with relevant keywords can provide customers an easy way to find previously-posted topics that answer the question they have. If you choose to use topic tags, I'd strongly encourage you to actively tag (and retag) topics with relevant keywords, and also provide users a way to browse by tag. • 3. Fancy Editor I would definitely suggest that you enable the fancy (rich) editor for new topics and replies. The main reason is simply that it makes it much easier for customers to post links and format their questions in a way that is easy to read. If you read and answer as many support questions as I do every day, you will highly appreciate nicely formatted posts. • 4. Archive Slugs - Forum Base The Forum Base slug lets you define the URL that is used for displaying the main forum root (where all forums are listed). The reason this is really important is that it's best if you use something like "support" here. This simply makes it more intuitive for your users. If you're looking for support, what do you expect more, /support or /my-awesome-forums? Obviously /support. • 5. Akismet If you are an Akismet user, you can choose whether you want all forum topics to get scanned by Akismet when they are posted. This is probably a good idea if you allow Anonymous posting, but I personally found it to not work well. Support customers tend to post a lot of links, which usually throws up a red flag for Akismet, meaning you will have a lot of legitimate topics get caught in spam, so if you use Akismet, you MUST monitor your spam folder closely. I personally prefer to NOT allow anonymous posting, and therefore all of my spam prevention happens during the forum registration process. ## Other Extensions There are several extensions that I didn't mention above but would like to note quickly: