arxiv_id stringlengths 0 16 | text stringlengths 10 1.65M |
|---|---|
# A semi-discrete numerical method for convolution-type unidirectional wave equations
Numerical approximation of a general class of nonlinear unidirectional wave equations with a convolution-type nonlocality in space is considered. A semi-discrete numerical method based on both a uniform space discretization and the discrete convolution operator is introduced to solve the Cauchy problem. The method is proved to be uniformly convergent as the mesh size goes to zero. The order of convergence for the discretization error is linear or quadratic depending on the smoothness of the convolution kernel. The discrete problem defined on the whole spatial domain is then truncated to a finite domain. Restricting the problem to a finite domain introduces a localization error and it is proved that this localization error stays below a given threshold if the finite domain is large enough. For two particular kernel functions, the numerical examples concerning solitary wave solutions illustrate the expected accuracy of the method. Our class of nonlocal wave equations includes the Benjamin-Bona-Mahony equation as a special case and the present work is inspired by the previous work of Bona, Pritchard and Scott on numerical solution of the Benjamin-Bona-Mahony equation.
## Authors
• 2 publications
• 2 publications
• 2 publications
• ### Unconditionally optimal convergence of an energy-conserving and linearly implicit scheme for nonlinear wave equations
In this paper, we present and analyze an energy-conserving and linearly ...
03/08/2021 ∙ by Waixiang Cao, et al. ∙ 0
• ### C^1-conforming variational discretization of the biharmonic wave equation
Biharmonic wave equations are of importance to various applications incl...
07/08/2021 ∙ by Markus Bause, et al. ∙ 0
• ### A convolution quadrature method for Maxwell's equations in dispersive media
We study the systematic numerical approximation of Maxwell's equations i...
04/01/2020 ∙ by Jürgen Dölz, et al. ∙ 0
• ### Numerical investigation of some reductions for the Gatenby-Gawlinski model
The Gatenby-Gawlinski model for cancer invasion is object of analysis in...
03/03/2021 ∙ by Corrado Mascia, et al. ∙ 0
• ### Finite Element Approximations of a Class of Nonlinear Stochastic Wave Equation with Multiplicative Noise
Wave propagation problems have many applications in physics and engineer...
06/28/2021 ∙ by Yukun Li, et al. ∙ 0
• ### A formal proof of the Lax equivalence theorem for finite difference schemes
The behavior of physical systems is typically modeled using differential...
03/25/2021 ∙ by Mohit Tekriwal, et al. ∙ 0
• ### On the numerical accuracy of the method of multiple scales for nonlinear dispersive wave equations
In this paper we study dispersive wave equation using the method of mult...
03/26/2021 ∙ by Dávid Juhász, et al. ∙ 0
##### This week in AI
Get the week's most popular data science and artificial intelligence research sent straight to your inbox every Saturday.
## 1 Introduction
In this paper, we propose a semi-discrete numerical approach based on a uniform spatial discretization and truncated discrete convolution sums for the computation of solutions to the Cauchy problem associated to the one-dimensional nonlocal nonlinear wave equation, which is a regularized conservation law,
ut+(β∗f(u))x=0, (1.1)
with a general kernel function and the convolution integral
(β∗v)(x)=∫Rβ(x−y)v(y)dy.
We prove error estimates showing the first-order or second-order convergence, in terms of the mesh size, depending on the smoothness of the kernel function. Also, our numerical experiments confirm the theoretical results.
Members of the class (1.1) arise as model equations in different contexts of physics, from shallow water waves to elastic deformation waves in dense lattices. For instance, in the case of the exponential kernel (which is the Green’s function for the differential operator where represents the partial derivative with respect to ) and with , (1.1) reduces to a generalized form of the Benjamin-Bona-Mahony (BBM) equation [1],
ut+ux−uxtt+(up+1)x=0, (1.2)
that has been widely used to model unidirectional surface water waves with small amplitudes and long wavelength. On the other hand, if the kernel function is chosen as the Green’s function of the differential operator ,
β(x)=12√2e−|x|√2(cos(|x|√2)+sin(|x|√2)), (1.3)
and if , (1.1) reduces to a generalized form of the Rosenau equation [2]
ut+ux+uxxxxt+(g(u))x=0 (1.4)
that has received much attention as a propagation model for weakly nonlinear long waves on one-dimensional dense crystal lattices within a quasi-continuum framework. It is worth to mention here that, in general, does not have to be the Green’s function of a differential operator. In other words, (1.1
) cannot always be transformed into a partial differential equation and those members might be called ”genuinely nonlocal”. Naturally, in such a case, standard finite-difference schemes will not be applicable to those equations. In this work we consider a numerical scheme based on truncated discrete convolution sums, which also solves such genuinely nonlocal equations.
We remark that the Fourier transform of the kernel function gives the exact dispersion relation between the phase velocity and wavenumber of infinitesimal waves in linearized theory. In other words, general dispersive properties of waves are represented by the kernel function. Obviously the waves are nondispersive when the kernel function is the Dirac delta function. That is, (
1.1) can be viewed as a regularization of the hyperbolic conservation law , where the convolution integral plugged into the conservation law is the only source of dispersive effects. Naturally, our motivation in developing the present numerical scheme also stems from the need to understand the interaction between nonlinearity and nonlocal dispersion.
Recently, two different approaches have been proposed in [3, 4] to solve numerically the nonlocal nonlinear bidirectional wave equation . In [3] the authors have developed a semi-discrete pseudospectral Fourier method and they have proved the convergence of the method for a general kernel function. By pointing out that, in most cases, the kernel function is given in physical space rather than Fourier space, in [4] the present authors have developed a semi-discrete scheme based on spatial discretization, that can be directly applied to the bidirectional wave equation with a general arbitrary kernel function. They have proved a semi-discrete error estimate for the scheme and have investigated, through numerical experiments, the relationship between the blow-up time and the kernel function for the solutions blowing-up in finite time. These motivate us to apply a similar approach to the initial-value problem of (1.1) and to develop a convergent semi-discrete scheme that can be directly applied to (1.1). To the best of our knowledge, no efforts have been made yet to solve numerically (1.1) with a general kernel function.
As in [4] our strategy in obtaining the discrete problem is to transfer the spatial derivative in (1.1) to the kernel function and to discretize the convolution integral on a uniform grid. Thus, the spatial discrete derivatives of
do not appear in the resulting discrete problem. Depending on the smoothness of the kernel function, the two error estimates, corresponding to the first- and second-order accuracy in terms of the mesh size, are established for the spatially discretized solution. If the exact solution decreases fast enough in space, a truncated discrete model with a finite number of degrees of freedom (that is, a finite number of grid points) can be used and, in such a case, the number of modes depends on the accuracy desired. Of course, this is another source of error in the numerical simulations and it depends on the decay behavior of the exact solution. Following the idea in
[5] we are able to prove a decay estimate for the exact solution under certain conditions on the kernel function. We also address the above issues in two model problems; propagation of solitary waves for both the BBM equation and the Rosenau equation.
A numerical scheme based on the discretization of an integral representation of the solution was used in [5] to solve the BBM equation which is a member of the class (1.1). The starting point of our numerical method is similar to that in [5]. As it was already observed in [5], an advantage of this direct approach is that a further time discretization will not involve any stability issues regarding spatial mesh size. This is due to the fact that our approach does not involve any spatial derivatives of the unknown function.
The paper is structured as follows. In Section 2 we focus on the continuous Cauchy problem for (1.1). In Section 3, the semi-discrete problem obtained by discretizing in space is presented and a short proof of the local well-posedness theorem is given. In Section 4 we investigate the convergence of the discretization error with respect to mesh size; we prove that the convergence rate is linear or quadratic depending on the smoothness of . Section 5 is devoted to analyzing the key properties of the truncation error arising when we consider only a finite number of grid points. In Section 6 we carry out a set of numerical experiments for two specific kernels to illustrate the theoretical results.
The notation used in the present paper is as follows. is the () norm of on , is the -based Sobolev space with the norm and is the usual -based Sobolev space of index on . denotes a generic positive constant. For a real number , the symbol denotes the largest integer less than or equal to .
## 2 The Continuous Cauchy Problem
We consider the Cauchy problem
ut+(β∗f(u))x=0, \ \ \ x∈R,% \ \ t>0, (2.1) u(x,0)=φ(x), \ \ \ x∈R. (2.2)
We assume that is sufficiently smooth with and that the kernel satisfies:
1. ,
2. is a finite Borel measure on .
We note that Condition 2 above also includes the more regular case in which case . The following theorem deals with the local well-posedness of (2.1)-(2.2).
###### Theorem 2.1
Suppose that satisfies Assumptions 1 and 2. Let , with . For a given , there is some so that the initial-value problem (2.1)-(2.2) is locally well-posed with solution .
The proof of Theorem 2.1 follows from Picard’s theorem for Banach space-valued ODEs. The nonlinear term is locally Lipschitz on [6]. Moreover, the conditions on imply that the term maps onto itself. Hence, (2.1) is an -valued ODE.
We will later use the following estimate on the nonlinear term [7].
###### Lemma 2.2
Let with . Then for any , we have . Moreover there is some constant depending on such that for all with
∥f(u)∥Hs≤C(M)∥u∥Hs .
We emphasize that the bound in the lemma depends only on the -norm of . This in turn allows us to control the -norm of the solution by its -norm. In particular, finite time blow-up of solutions is independent of regularity and is controlled only by the -norm of .
###### Lemma 2.3
Suppose the conditions of Theorem 2.1 are satisfied and is the solution of (2.1)-(2.2). Then
∥u(t)∥Hs≤∥φ∥HseCt, 0≤t≤T, (2.3)
where depends on and .
Proof. Integrating (2.1) with respect to time, we get
u(x,t)=φ(x)−∫t0(β′∗f(u))(x,τ)dτ. (2.4)
Using Young’s inequality and Lemma 2.2, we obtain
∥u(t)∥Hs≤∥φ∥Hs+C(M,β)∫t0∥u(τ)∥Hsdτ (2.5)
for all . By Gronwall’s lemma this gives (2.3).
###### Remark 2.4
It is well-known that, under suitable convexity assumptions, the hyperbolic conservation law leads to shock formation in a finite time even for smooth initial conditions. On the other hand, according to Lemma 2.3, for any solution of (1.1) the derivatives will stay bounded as long as stays bounded in the -norm. In other words, the regularization prevents a shock formation.
## 3 The Discrete Problem
In this section we first give two lemmas for error estimates of discretizations of integrals and derivatives on an infinite uniform grid, respectively, and then introduce the discrete problem associated to (2.1)-(2.2).
### 3.1 Discretization and Preliminary Lemmas
Consider doubly infinite sequences of real numbers with (where denotes the set of integers). For a fixed and , the space is defined as
lph(Z)={(wi):wi∈R, ∥w∥plph=∞∑i=−∞h|wi|p}.
The space with the sup-norm is a Banach space. The discrete convolution operation denoted by the symbol transforms two sequences and into a new sequence:
(w∗v)i=∑jhwi−jvj (3.1)
(henceforth, we use to denote summation over all ). Young’s inequality for convolution integrals state that for , and for .
Consider a function of one variable defined on . We then introduce a uniform partition of the real line with the mesh size and with the grid points , . Let the restriction operator be . We will henceforth use the abbreviations and for and , respectively.
The following lemma gives the error bounds for the discrete approximations of the integral over depending on the smoothness of the integrand. We note that the two cases correspond to the rectangular and trapezoidal approximations, respectively.
###### Lemma 3.1
1. Let and be a finite measure on . Then
∣∣ ∣∣∫Rw(x)dx−∑ihw(xi)∣∣ ∣∣≤h|μ|(R). (3.2)
2. Let and be a finite measure on . Then
∣∣ ∣∣∫Rw(x)dx−∑ihw(xi)∣∣ ∣∣≤h2|ν|(R). (3.3)
The following lemma handles the estimates for the discrete approximation of the first derivative , whose proof follows more or less standard lines.
###### Lemma 3.2
Let and . Let be the discrete derivative operator defined by the central differences
(Dw)i=12h(wi+1−wi−1), i∈Z. (3.4)
1. If , then
∥Dw−w′∥l∞≤h2∥w′′∥L∞. (3.5)
2. If , then
∥Dw−w′∥l∞≤h26∥w′′′∥L∞. (3.6)
We refer the reader to [4] for the proofs of the two lemmas above.
### 3.2 The Semi-Discrete Problem
In order to get the semi-discrete problem associated with (2.1)-(2.2) we discretize them in space with a fixed mesh size . Thus, the discretized form of the nonlocal wave equation (2.1) becomes
dvdt=−D(βh∗f(v)) (3.7)
with the notation and . The identity allows us to transfer the discrete derivative on the kernel. So in order to prove the local well-posedness theorem for the semi-discrete problem, we need merely prove the following lemma which estimates the discrete derivative of the restriction of the kernel.
###### Remark 3.3
We note that (3.7) involves point values of . When satisfies Assumptions 1 and 2 one should pay attention to how the point values in (3.7) are defined. To clarify this issue we will assume throughout that
β(x)=∫(−∞,x]dμ. (3.8)
###### Lemma 3.4
Let be a finite measure on . Then and
Proof. The assumption (3.8) allows one to write
h(Dβh)i =12(β(xi+1)−β(xi−1)) (3.9) =12∫(xi−1,xi+1]dμ≤12|μ|((xi−1,xi+1]), (3.10)
from which we deduce the estimate
Let be a locally Lipschitz function with . The map is locally Lipschitz on . Moreover, by Lemma 3.4, the map
v⟶D(βh∗f(v))
is also locally Lipschitz on . By Picard’s theorem on Banach spaces, this implies the local well-posedness of the initial-value problem for (3.7).
###### Theorem 3.5
Let be a locally Lipschitz function with . Then the initial-value problem for (3.7) is locally well-posed for initial data in . Moreover there exists some maximal time so that the problem has unique solution . The maximal time , if finite, is determined by the blow-up condition
limsupt→T−h∥v(t)∥l∞=∞. (3.11)
## 4 Discretization Error
Suppose that the function with sufficiently large is the unique solution of the continuous problem (2.1)-(2.2). The discretizations of the initial data on the uniform infinite grid will be denoted by . Let be the unique solution of the discrete problem based on (3.7) and the initial data . The aim of this section is to estimate the discretization error defined as . Depending on the conditions imposed on the kernel function , we provide the two different theorems establishing the first- and second-order convergence in . The proofs follow similar lines as the corresponding ones in [4].
###### Theorem 4.1
Suppose that satisfies Assumptions 1 and 2. Let , with . Let be the solution of the initial-value problem (2.1)-(2.2) with . Similarly, let be the solution of (3.7) with initial data . Let . Then there is some so that for , the maximal existence time of is at least and
∥u(t)−uh(t)∥l∞=O(h) (4.1)
for all .
Proof. We first let . Since , by continuity there is some maximal time such that for all . Moreover, by the maximality condition either or . At the point , (2.1) becomes
ut(xi,t)+(β∗f(u))x(xi,t)=0.
Recalling that , this becomes . A residual term arises from the discretization of (2.1):
(4.2)
where
The th entry of satisfies
(Fh)i =(F1h)i+(F2h)i, (4.3)
where the variable is suppressed for brevity. We start with the term . Replacing by for convenience, we have
(F1h)i=(βh∗g′)i−(β∗g′)(xi)=∑jhβ(xi−xj)g′(xj)−∫Rβ(xi−y)g′(y)dy. (4.4)
Since and be a finite measure on , by (3.2) of Lemma 3.1 we have
∣∣(F1h)i∣∣≤h|˜μ|(R)
where with . When , we have
∥r′∥L1≤∥β′∥L1∥g′∥L∞+∥β∥L1∥g′′∥L∞.
Since and , are bounded for , we have
|˜μ|(R) ≤|μ|(R)∥g′∥L∞+∥β∥L1∥g′′∥L∞, (4.5) ≤C(|μ|(R)+∥β∥L1)∥u∥Hs (4.6)
where we have used Lemma 2.2. Thus
|(F1h)i|≤Ch(|μ|(R)+∥β∥L1)∥u∥Hs≤Ch∥u∥Hs.
For the second term , again with and we have
∣∣(F2h)i∣∣ =∣∣(βh∗(Dg−Rg′))i∣∣≤C∥Dg−g′∥l∞, ≤Ch∥g′′∥L∞≤Ch∥g∥Hs≤Ch∥u∥Hs,
where Lemma 2.2 and (3.5) of Lemma 3.2 are used. Combining the estimates for and , we obtain
∥Fh(t)∥l∞≤Ch∥u(t)∥Hs.
We now let be the error term. Then, from (3.7) and (4.2) we have
de(t)dt=−Dβh∗(f(u)−f(uh))+Fh, e(0)=0.
This implies
e(t)=∫t0(−Dβh∗(f(u)−f(uh))+Fh)dτ. (4.7)
By noting that
≤∥Dβh∥l1h∥f(u)−f(uh)∥l∞ ≤C∥u−uh∥l∞≤C∥e(t)∥l∞,
it follows from (4.7) that, for ,
∥e(t)∥l∞ ≤sup0≤t≤T∥Fh(t)∥l∞∫t0dτ+C∫t0∥e(τ)∥l∞dτ, ≤C(ht+∫t0∥e(τ)∥l∞dτ). (4.8)
Then, by Gronwall’s inequality,
∥e(t)∥l∞≤ChTeCT.
We observe that the constant depends on the bounds , and . We note that, by Lemma 2.3, depends on and . The above inequality, in particular, implies that for sufficiently small . Then we have showing that . From the above estimate we get (4.1).
###### Theorem 4.2
Let , with . Let and be a finite measure on . Let be the solution of the initial-value problem (2.1)-(2.2) with . Similarly, let be the solution of (3.7) with initial data . Let . Then there is some so that for , the maximal existence time of is at least and
∥u(t)−uh(t)∥l∞=O(h2) (4.9)
for all .
Proof. The proof follows that of Theorem 4.1 very closely. So here we use the same notation and provide only a brief outline. The main observation needed is that the only place where the proofs differ is the estimate for in (4.4). Since and be a finite measure on , by the second part of Lemma 3.1 we have
∣∣(F1h)i∣∣≤h2|˜ν|(R)
where with . Formally we can write
r′′(y)=β′′(xi−y)g′(y)−2β′(xi−y)g′′(y)+β(xi−y)g′′′(y).
Noting that and , , are bounded for and using Lemma 2.2, we get
|˜ν|(R) ≤C(|ν|(R)∥g′∥L∞+2∥β′∥L1∥g′′∥L∞+∥β∥L1∥g′′′∥L∞ ≤C(|ν|(R)+2∥β∥W1,1)∥u∥Hs (4.10)
so that
For the second term , again with and we have
∣∣(F2h)i∣∣ =∣∣(βh∗(Dg−Rg′))i∣∣≤C∥Dg−g′∥l∞ ≤Ch2∥g′′′∥L∞≤Ch2∥g∥Hs≤Ch2∥u∥Hs,
where (3.6) of Lemma 3.2 is used. Using the estimates for and in (4.3), we obtain
∥Fh(t)∥l∞≤Ch2∥u∥Hs.
Now (4.8) takes the following form
∥e(t)∥l∞≤C(h2t+∫t0∥e(τ)∥l∞dτ) (4.11)
for . Then, by Gronwall’s inequality,
∥e(t)∥l∞≤Ch2TeCT.
Now the constant depends on the bounds , and . The rest of the proof follows the same lines as the proof of Theorem 4.1.
## 5 The Truncated Problem and a Decay Estimate
### 5.1 The Truncated Problem
In practical computations, one needs to truncate both the infinite series in (3.1) at a finite and the infinite system of equations in (3.7) to the system of equations. After truncating, we obtain from (3.7) the finite-dimensional system
dvNidt=−N∑j=−NhDβ(xi−xj)f(vNj),% \ \ \ \ \ \ −N≤i≤N (5.1)
where
are the components of a vector valued function
with finite dimension . In this section we estimate the truncation error resulting from considering (5.1) instead of (3.7) and give a decay estimate for solutions to the initial-value problem (2.1)-(2.2) with certain kernel functions.
We start by rewriting (5.1) in vector form
dvNdt=−BNf(vN),
where denotes a matrix with rows and columns, whose typical element is . Using the notation and assuming that is a finite measure on we get from Lemma 3.4
∥BNw∥l∞≤|μ|(R)∥w∥l∞.
As long as is a bounded and smooth function, there exists a unique solution of the initial-value problem defined for (5.1) over an interval . Moreover the blow-up condition
limsupt→(TN)−∥vN(t)∥l∞=∞ (5.2)
is compatible with (3.11) in the discrete problem. Consider the projection of the solution of the semi-discrete initial-value problem associated with (3.7) onto , defined by
TNv=(v−N,v−N+1,…,v0,…,vN−1,vN)
with the truncation operator . Our goal is to estimate the truncation error and show that, for sufficiently large , approximates the solution of the continuous problem (2.1)-(2.2).
###### Theorem 5.1
Let be the solution of (3.7) with initial value and let
δ=sup{|vi(t)|:t∈[0,T],|i|>N} and ϵ(δ)=max|z|≤δ|f(z)|.
Then for sufficiently small , the solution of (5.1) with initial value exists for times and
∣∣vNi(t)−vi(t)∣∣≤Cϵ(δ), \ t∈[0,T],
for all .
Proof. We follow the approach in the proof of Theorem 4.1. Taking the components with of (3.7) we have
dvidt =−∞∑j=−∞hDβ(xi−xj)f(vj) =−N∑j=−NhDβ(xi−xj)f(vj)+FNi
with the residual term
FNi=−∑|j|>NhDβ(xi−xj)f(vj).
Then satisfies the system
dTNvdt=−BNf(TNv)+FN
with the residual term . Estimating the residual term we get
∣∣FNi∣∣≤|μ|(R)sup|j|>N∣∣f(vj)∣∣≤Cϵ(δ).
We set . Since , by continuity of the solution of the truncated problem there is some maximal time such that we have for all . By the maximality condition either or . We define the error term . Then
d˜e(t)dt=−BN(f(TNv)−f(vN))+FN, ˜e(0)=0,
so
˜e(t)=∫t0(−BN(f(TNv)−f(vN))+FN)dτ.
Then
∥˜e(t)∥l∞≤|μ|(R)∫t0∥∥(f(TNv)−f(vN))(τ)∥∥l∞dτ+∫t0∥FN(τ)∥l∞dτ.
But
∥∥f(TNv)−f(vN)∥∥l∞≤C∥∥TNv−vN∥∥l∞.
Putting together, we have
∥˜e(t)∥l∞≤CTϵ(δ)+C∫t0∥˜e(τ)∥l∞dτ,
and by Gronwall’s inequality,
∥˜e(t)∥l∞≤Cϵ(δ)TeCT.
This, in particular, implies that there is some such that for all we have . Then we have showing that and this completes the proof.
By combining Theorem 5.1 with Theorems 4.1 and 4.2, respectively, we now state our main results through the following two theorems.
###### Theorem 5.2
Let , with . Let and be a finite measure on . Let be the solution of the initial-value problem (2.1)-(2.2) with . Then for sufficiently small and , there is an so that the solution of (5.1) with initial values , exists for times and
∣∣u(ih,t)−(uNh)i(t)∣∣=O(h+ϵ), \ t∈[0,T] (5.3)
for all .
###### Theorem 5.3
Let , with . Let and | |
# A little something about search engines.
Imagine this, you are sitting at your computer and type in a search engine "Microsoft", and the first result is filled with website's that people created who worked at Microsoft, instead of the homepage of Microsoft. Or when you type in "Egypt", but instead of information about the country, the first page is filled with people who talked about Egypt on their website?
When this would happen, the search engine would not be really popular, as you would never find what you needed in an easy way. When you type the above mentioned examples, google and bing, or other search engines, do return you links that you expect. All of this is relying on a concept introduced back in 1998 by google, called PageRank.
(PageRank looks like a play on word with the name of google co-founder Larry Page, yet in their historical publication it's used with the meaning I'm using here)
Let's think for a moment of how this could work.
One of the ideas could be, ranking a page by importance depending on how many other website's refer to it. However, this is not a very good idea. When a website like www.cnn.be refers to, for example, www.gumclan.org, it is way more important than when, let's say, 10 people their own little websites that they put up for university refer to it.
A better idea would be to give websites a rank. Depending on their rank, their referral to another website will be more important. Thus giving the website they link to a higher rank as well, and so forth.
Yet using this later definition, we encounter a first problem. This causes self-referential. To know the position of a page, we need to know all the pages that refer to this page. But to know how much influence the referring page has, we need to know the rank of that page, which we know by the pages that refer to it, who's rank we …. etc.
If however, this function would be linear, this problem would be an easy one to solve with linear algebra.
A simple model
To put our idea to work, we first have to define some variables. A page p we will call Np the amount of links on this page. R1(p) will be the rank of p, and with $\displaystyle\sum\limits_{p->q}$ . we mean the sum over all pages q that contain a link to p. (note that Nq $\neq$ 0 for q).
A simple example is the following situation, with just three pages. Named respectively, p, q and r. The arrows indicate the links between pages.
The equation that belong to these are
$R1(p) = \frac{1}{2} R1(q) + R1(r)$
$R1(q) = R1(p)$
$R1(r) = \frac{1}{2} R1(q)$
If we add one more condition to this, namely that the total rank has to be equal to 1, so with other words $R1(p) + R1(q) + R1(r) = 1$, we find that $R1(p) = R1(q) = 0.4$ and $R1(r) = 0.2$.
Another way to look at these results is by putting them in a table, and showing each step of the summation.
Following this way, we get the following table..
A second model.
In our former explenation, the definition of R1 still has some "bugs". One of them has an easy solution. The bug we have is that some pages can essentially make ranks dissapear. We can see how this happens when a page does not have any links on it. Or, in other terms, there are no arrows leaving the webpage.
$R1(p) = \frac{1}{2}R1(q)$
$R1(q) = \frac{1}{2}R1(p)$
$R1(r) = \frac{1}{2}R1(p) + \frac{1}{2}R1(q))$
If we solve this system, we get R1(p) = R1(q) = R1(r) = 0. Which is obviously rather useless.
Though, we can solve this problem quite easily. What we have to do, is define a new rank. R2.
$c * R2(p) = \sum\limits_{q-p} \frac{R2(q)}{Nq}$
With the constant $c \in R+$
The real pagerank!
Even our second model still has some bugs in it. And they aren't far-fetched. If you look ath the following image
We can see that pages r and s keep stocking up rank. We call this a "rank sink" problem.
It's not hard to check that $c = 1, R2(p) = R2(q) = 0$ and $R2(r) = R2(s) = 0.5$ is a possible PageRank according to equation $c * R2(p) = \sum\limits_{q-p} \frac{R2(q)}{Nq}$
This is deffinently a problem. The solution to this rank sink problem is what they call a rank source.
By creating a rank source, we give each page a initial value. E(p), which makes the equation look like:
$c * R(p) = \sum\limits_{q-p} \frac{R(q)}{Nq} + E(p)$
This deals with our former stated problem.
The general idea of a search engine is as I've explained it in this blog. But an actual search engine adds a lot of extra parameters to the pagerank system. For example searches in a certain language can get priority, also your location can affect the search. Anyway, I hope this atleast gives you an idea of how the linear algebra that led up to the creation of google looks like. | |
Warning
Task10bonus2 - Bounds for the DTSPN using GDIP and sampling-based approach
The goal of this (bonus) homework is to find solution with tight bounds for the Dubins TSP with Neighborhoods (DTSPN). Since the problem is very complex, we will consider only a single sequence already found by the Euclidean TSP, and all the regions are disk-shaped. The task is implement informed sampling procedure based on a tight lower bound estimation. The boundary of each regions is samples using smaller disk-shaped regions and possible heading angles are divided into intervals. Thus, each sample is given by a disk region and interval of the heading angle and the lower bound between two consecutive samples can be found using so called Generalized Dubins Interval Problem, see the following plot.
Example of the position sampling on the boundary of the target region is depicted in the following images. Uniform sampling (left) is able to provide the required lower bound; however, the informed sampling (right) converges much faster to the optimal solution.
Once the samples are created, the shortest tour is found in the following graph structure.
The overall algorithm is:
Implement method for finding the shortest lower/upper-bound tour for the actual sampling. The lower bound tour is then then utilized for refinement in the informed sampling approach.
Expected results
The expected result is that the gap from the optimal solution is about 5% for maximal resolution of 64 for both position and heading angle sampling. The output in the command line is expected to be as follows:
Resolution: 4 Lower bound: 9.76 Upper bound (feasible): 30.25 Gap(%): 67.75
Resolution: 8 Lower bound: 14.50 Upper bound (feasible): 23.95 Gap(%): 39.44
Resolution: 16 Lower bound: 17.68 Upper bound (feasible): 22.04 Gap(%): 19.77
Resolution: 32 Lower bound: 19.56 Upper bound (feasible): 21.84 Gap(%): 10.43
Resolution: 64 Lower bound: 20.63 Upper bound (feasible): 21.77 Gap(%): 5.21
During the calculation, the program plot the actual lower-bound (red) and upper-bound feasible (blue) paths. These two paths are expected to converge together with increasing the maximal sampling resolution, see:
Evaluation - how will be the homework evaluated
The codes will be evaluated manually by the teacher. | |
# What is the flap extension schedule on approach for an airliner?
I understand that there are probably no set altitudes and speeds for flaps extensions, but can someone with insight provide a general guide for speeds, altitude and flaps for an ILS approach in good weather for a Boeing 777-300ER with a landing of Flaps 25 and $V_{ref}$ of 147 knots.
Something like this:
Alt: 5000 ft - Speed: 210 kts - Flaps 1
Alt: 4000 ft - Speed: 190 kts - Flaps 5...
(Boeing 777 manual.)
Substitute altitudes with position to runway. Near an airport an airliner usually flies at around 200 KIAS, this is when you select flaps 1.
Subsequent flap settings are also combined with selecting a lower speed as shown above. The speed tape makes it easy. Flaps 5 should be down by the final intercept path to the localizer or before turning base. Flaps 20 when established on the localizer. Then comes gear down and flaps 30 before the glide-slope is captured.
Initially you wrote PMDG, I recommend check their manuals, they are usually very informative.
For normal procedures on a Boeing 737, the flap settings 2, 10, and 25 are usually also skipped. So it's 1, 5, 15, and 30/40. Non-normal procedures differ.
On most Boeing twinjets (don't know about 747), the flap manoeuvre speeds can be derived or at least estimated with reasonably good precision by adding increments of 20 knots to VREF for the highest landing flap, e.g. to VREF30 for the 777. Minimum clean speed on these aircraft is typically around VREF30+80. The approach speed schedule recommends to extend each flap as you decelerate through manoeuvre speed (up to typically plus 20 knots). This means e.g. on a 777:
Deceleration through - select:
VREF+80 - F1
VREF+60 - F5
VREF+40 - F15 or 20 (15 can be skipped)
VREF+20 - Land flap (25 or 30)
Health warning: This is all from memory. If nobody else can chip in before Monday, I'll try to find quotable sources.
Best wishes, M | |
# How to calculate probability of observing a value given a permutation distribution?
I have a single observation with value $x = 0.5$ that comes out from a complicated computational process. I would like to know what is the probability to observe such value by chance.
To attempt to answer the question, I have ran the computational process with random input values for few thousand times. I get a distribution that is about normal, has mean very close to zero, and a standard deviation around 0.09.
Intuitively, it looks like the chance of observing 0.5 by chance is very little. However, how do I turn this into an actual statistical test?
• I have posted code showing how to conduct permutation tests. A simple working example is at the end of the answer at stats.stackexchange.com/a/137467, for instance. A general description of permutation tests, along with a generic description of how to code them, appears in the survey at stats.stackexchange.com/a/104746. – whuber Jul 5 '15 at 13:21
This is quite straightforward, there is no need to infer a distribution under the Null Hypothesis. Your p-value is just the number of times $x_{permuted}$ get superior or equal to $0.5$, divided by the number of permutations made.
I don't say your approach is completely wrong, if your distribution looks like normal, you could eventually do a z-test, and it should give a quite reliable p-value. But I think the spirit of the permutation test is just to count how often you indeed get equal or more extreme results because you have a direct access to it, whatever the real distribution of $x_{permuted}$ is.
For the sake of the comparison it would be interesting that you give how often $x_{permuted}$ get superior or equal to $0.5$ in your data set, we could compare it with what would give a 1-tailed z-test. | |
Verify that:
Question:
Verify that:
(i) 4 is a zero of the polynomial $p(x)=x-4$.
(ii) $-3$ is a zero of the polynomial $q(x)=x+3$.
(iii) $\frac{2}{5}$ is a zero of the polynomial, $f(x)=2-5 x$.
(iv) $\frac{-1}{2}$ is a zero of the polynomial $g(y)=2 y+1$.
Solution:
(i) $p(x)=x-4$
$\Rightarrow p(4)=4-4$
= 0
Hence, 4 is the zero of the given polynomial.
(ii) $p(x)=(-3)+3$
$\Rightarrow p(3)=0$
Hence, 3 is the zero of the given polynomial.
(iii) $p(x)=2-5 x$
$\Rightarrow p\left(\frac{2}{5}\right)=2-5 \times\left(\frac{2}{5}\right)$
$=2-2$
$=0$
Hence, $\frac{2}{5}$ is the zero of the given polynomial.
(iv) $p(y)=2 y+1$
$\Rightarrow p\left(-\frac{1}{2}\right)=2 \times\left(-\frac{1}{2}\right)+1$
$=-1+1$
$=0$
Hence, $-\frac{1}{2}$ is the zero of the given polynomial. | |
# The tools you should know for the Machine Learning projects
I have been frequently asked about the tools for the Machine Learnign projects There are lot of them on the market so in my newest post you will find my view on them. I would like to start my first Machine Learning project. But I do not have tools. What should I do? What are the tools I could use?
I will give you some hints and advices based on the toolbox I use. Of course there are more great tools but you should pick the ones you like. You should also use the tools that make your work productive which means you need to pay for them (which is not always the case – I do use free tools as well).
The first and most important thing is that there are lots of options! Just pick what works for you!
I have divided this post into several parts like the environments, the langauges and the libraries.
##### THE ENVIRONMENT
The decision about which environment to choose is really fundamental. I use to have three environments and use them as needed. The first and the one I like is Anaconda. It is an enterprise data science platform with lots of tools. It is designed for data scientists, IT professionals and business leaders as well. You can configure it for you project so it contains only the tools and libraries needed. This can make your deployments easier (I am not saying it will be easy).
Creating an environment is super easy! Of course assuming you know what you need but it is also possible to reconfigure the environment later. I understand an environment like a project.
The Anaconda environments
Anaconda offers also shortcuts to the Learning portal where you can find not only the documentation but also a lot of useful materials like videos or blog posts. This is really a great place to learn how to start working with a tool or to gain more knowledge
The Anaconda learning
The last thing I would like to show here is the Anaconda Community tab. The community is really what makes our live easier. You can share thoughts or learn or just ask questions. As a proud member of the #SQLFamily community I know what I am saying… The community is the heart of all the learning process so do not forget to take part in and share your knowledge!
The Anaconda community tab
By the way – you can install MiniConda (minimal installation of the Anaconda) and install everything else from the command line like I have shown here:
Cmd Cd documents Md project_name Cd project_name Conda create --name project_name Activate project_name Conda install --name project_name spyder
What I have done with the code above? I started with the cmd tool, then created a new project named project_name in the documents folder. Then I have created an environment and activated it. The last line shows an example how to install libraries or tools – I have shown how to install the Spyder.
I use the Jupyter Notebook along with other tools (Orange, Spyder etc) to do the modelling. The advantage of the Jupyter Notebooks over the other tools is that you can write a code and immediately run it without compiling anything. Looks great, ain’t it? This is not all as I always like to document my code and this is what you can do here. Take a look at the picture below – code and documentation live logether peacefully!
Jupyer Notebook in action
Now let’s move on to the Visual Studio Code. I have been using Visual Studio since it was released for the first time. You cannot be surprised that Visual Studio Code is just my natural choice for many projects including Machine Learning and AI.
Visual Studio Code is released once a month which makes this product unique.
You can customize your Visual Studio Code the way you need – just install all the extensions and start working with the code.
Visual Studio Code – my installed extensions
But this is not all. Having Visual Studio Code you have also a powerfull debugger, intellisense (!!!!) and built-in Git.
Visual Studio Code intellisense for Machine Learning project
What about the Visual Studio Code community? Yes, there is one! It is also powerfull so you do not get lost and get the help if needed.
The last tool I would like to present is the Azure Machine Learning Studio. This is a graphical tool and it does not require any programming knowledge at all. You need to log to the Azure Portal and create a Machine Learning Workspace.
Machine Learning Studio Workspace
There is a free version for the developers so you can just start immediatelly. I suggest you to start with the examples that are in the Gallery. Take a look at the one I have just picked and opened in the Studio:
Machine Learning Studio
As you can see the Machine Learning Studio is more oriented to the Machine Learning Process (take a look int my recent article) rather than coding. Of course you can add as much code as you wish there as well.
##### THE LANGUAGES
I would prefer to use Python but there is also the R language in the scope. What I see is the R language is mostly used by the people from universities whilst Pyhon is used by the data engineers and programers. This is how it looks like usually but I am not doing any assumptions. Please use the language you like and feel comforatble coding. I will use both of them on the blog.
Both Python and R are powerful languages. They can easily manipulate with data sets and perform complex operations on them.
Wait, do you know any other language that can handle data sets? Yes – it is the good, old T-SQL! I think you should at least know that the SQL Server can mix up the T-SQL, Python and R! You can create powerful Machine Learning and AI solution using SQL Server and I will definitelly show you how to this later!
##### THE LIBRARIES
Now we move to the heart of the Machine Learning modelling. The libraries that give you everything you need. You can prepare your data set, clean it, standarize, perform regularization, pick an algorithm, create learining/testing splits, learn the model, perform scoring, plot the data and many more…
The decision which library you use is really important. The decision is also driven by the language you use as libraries are not transferrable between the Python and R.
I am going to describe some well known (free of charge) libraries here below but we will learn more about them in the next posts where I will be discussing the code itself.
###### PANDAS
This is one of the most popular libraries for data loading and preparation. It is frequently used with the Scikit-learn. It supports loading data from different lources like SQL databases, flat files (text, csv, json, xml, Excel) and many more. It can do SQL-like operations for example joining, grouping, aggregating, reshaping, etc. You can also clean the data set perform transformations and dealing with missing values.
###### NUMPY
This is all about multidimensional arrays and matrices and it is used in linear algebra operations. It is a core component for the pandas nad scikit-learn.
###### SCIKIT-LEARN
This library is one of the most popular libraries today. You can find lot of both supervised and unsupervised learning algorithms like clustering, linear and logistic regressions, gradient boosting , SVM, Naive Bayes, k-means and many more.
It also provides helpful fonctions for data preprocessing and scoring.
You should not use it for the Neural Networks as it is designed for Machine Learning.
###### PYTORCH
This is the Deep Learning library built by Facebook. It supports the CPU and GPU computations. It can help you solving problems from the Deep Learning area like medical image analysis, recommender systems, bioinformatics, image restoration etc.
PyTorch provides features like interactive debugging and dynamic graph definition.
###### TENSORFLOW
It has been built by Google. This is both Machine Learning and Deep Learning library. It supports many Machine Learning algorithms for classification and regression analysis. The great benefit is that it also supports the Deep Learning tasks.
###### KERAS
It is a popular high-level Deep Learning library which uses various low-level libraries like Tensorflow, CNTK, or Theano on the backend. It should be easier to learn than Tensorflow and can use the Tensorflow under the hood (what for example PyTorch cannot do).
###### XGBOOST
This library implements algorithms under the Gradient Boosting framework. It provides a parallel tree boosting (also known as GBDT, GBM) that solve many data science problems in a fast and accurate way.
###### WEKA
I have used the Weka library in my R code when testing how the association rules work. But is a powerful library for data preparation and many types of algorithms like classification, regression. It can also do clustering and perform visualization.
###### MATPLOTLIB AND SEABORN
The two libraries are used for data visualization. They are easy to use and help you using both very basic and very complex plots. You do not need to be an artist or talented coder to make beatufil visualizations anymore.
##### WHAT ABOUT THE CLOUD SOLUTIONS?
Everything lives in a cloud now. This is also true for the Machine Learning solutions. There are many Cloud Providers you can choose but I will be showing most of my cloud solutions on Microsoft Azure. There is everything you need to start with. You can start from scratch and build your solution just step by step having control on everything. But you can also use so called Automatic Machine Learning (yes, I show you both ways !!!) to concentrate on the solution and not on the infostructure. Think about how powerful this can be – you develop a model and the Azure will deploy it for you – in a contenerized solution!
##### SUMMARY
Now you know the tools – environment, languages and libraries. We can move forward to Machine Learning. The next post will be dedicated to a very simple but powerfull example of the Machine Learning solution.
Please let me know if you need me to elaborate on a specific tool more. I will be very happy to do so in one of the further posts!
Originally posted here. | |
# Formal group law over $\mathbb{F}_p$
Let $p$ be a prime. For each $n > 0$ there is a unique 1-dimensional commutative formal group law $F$ over $\mathbf{Z}$, $F(X, Y) = X + Y + \dots \in \mathbf{Z}[[X, Y]]$, whose logarithm function is given by $$l(x) = \sum_{k \ge 0} \frac{x^{p^{nk}}}{p^k}.$$
Let $\bar{F} \in \mathbf{F}_p[[X, Y]]$ be the formal group over $\mathbf{F}_p$ given by reduction of $F$ modulo $p$. (Cf. Prop. 9.25 in http://neil-strickland.staff.shef.ac.uk/courses/formalgroups/fg.pdf.)
Is $\bar{F}$ an element of $\mathbf{F}_p[X] [[Y]]$?
• I think you should make your question self-contained -- it is not reasonable to require people to read other documents in order to understand what you are asking. – Stefan Kohl Jan 31 '15 at 11:30
• I think this is an interesting question and it would be a pity if it got closed, so I've edited it to knock it into shape. – David Loeffler Jan 31 '15 at 14:44
• I don't see any reason why it should be in this smaller ring. A computation supports this: computing $\bar{F}$ to precision 100 for $p = 3$ and $n = 1$, I got that the coefficient of $Y$ was $1 - x^{2} + x^{4} - x^{6} + x^{10} + x^{12} - x^{18} + x^{28} + x^{30} + x^{36} - x^{54} + x^{82} + x^{84} + x^{90} + O(x^{99})$, which certainly doesn't look like it's going to be of finite degree. But I don't see how to prove this. Any takers? – David Loeffler Jan 31 '15 at 15:13
• I think this is a fascinating question, and I thought I had a strategy for attacking it, but Ghassan Sarkis tossed off an elementary proof, which I’m now checking. If it pans out, I’ll put it up as an answer, and if not, I’ll delete this comment. – Lubin Feb 6 '15 at 1:56
• Bakuradze has just put a proof on the arxiv at arxiv.org/pdf/1502.04152v1.pdf. This may just be a coincidence; he does not refer to the discussion here, I don't know if he has seen it. – Neil Strickland Feb 17 '15 at 7:51
Now multinomial-free, I believe that Ghassan Sarkis and I have a proof of the following
Theorem. Let $h\ge2$, and let $L(x)=x + x^{p^h}/p + x^{p^{2h}}/p^2+\cdots$ be the logarithm of the formal group $F(x,y)\in\Bbb Z_p[[x,y]]$. Then $F(x,y)\in\Bbb Z_p\{\{x\}\}[[y]]$, where $\Bbb Z_p\{\{x\}\}$ is the ring of convergent power series: those whose coefficients go to zero.
A word about this ring: it’s the completion of the polynomials with respect to the “Gauss norm”, i.e. the uniform norm on the closed unit disk; or, if you like, the $p$-adic completion of the ring of polynomials.
Since you get $\Bbb F_p[x]$ when you tensor the ring of convergent series with $\Bbb F_p$, Neil Strickland’s guess turns out to be correct, in a very strong way.
Now for an outline of the proof, which depends entirely on $L'(x)$ being a convergent series, but the proof I found depends also on the particular form of the logarithm.
(Perhaps I should say that the cognoscenti may look at all this and say, C’mon, it’s all clear ’cause the invariant differential is a convergent series, and it all drops out automatically from general facts. But I’m no cognoscente in anything, so I have to go through at least some of the motions. I add that Ghassan wonders whether the present result may be in Hazewinkel already, though in some indecipherable formulation.)
Treat $F(x,y)$ as an element of $\Bbb Z_p[[x]][[y]]$, so write it as $$F(x,y)=x +\sum_{m\ge 1}f_m(x)y^m\,.$$ The aim is to show that each $f_m$ is in $\Bbb Z_p\{\{x\}\}$, not just in $\Bbb Z_p[[x]]$. The argument is by induction, starting with $f_1$, which we already know to be $1/L'(x)$, so convergent. We write out the fundamental property of the logarithm: $$L\bigl(F(x,y)\bigr)=L(x)+L(y)\,,$$ and arrange the pieces differently: $$0=\sum_{N\ge0}\Bigl[F(x,y)^{p^{Nh}} - y^{p^{Nh}}\Bigr]\Big/p^N-L(x)\,.$$ In the above, we want to look at the total coefficient-function of $y^s$, knowing inductively that all $f_m(x)$ for $m<s$ are in $\Bbb Z_p\{\{x\}\}$. In this, we’re not interested in the participation of any monomial with $y$-degree greater than $s$, so we may truncate, and again rearrange: $$-(x+\sum_{m=1}^sf_m(x)y^m)\equiv \sum_{N\ge1}\Bigl[(x+\sum_{m=1}^s f_my^m)^{p^{Nh}} - y^{p^{Nh}}\Bigr] - L(x)\pmod{y^{s+1}}\,.$$ Now, when you look at the occurrence of $y^s$ for each piece with $N\ge1$, there’s only one of them, and lo and behold, the coefficient is $p^{N(h-1)}x^{p^{Nh-1}}$, one of the monomials in $L'(x)$. Collect them all on the other side, and get $$-f_s(x)L'(x) = \text{y^s-coefficient in}\sum_{N\ge1}\Bigl[x+\sum_{m=1}^{s-1} f_my^m\Bigr]^{p^{Nh}}\Big/p^N\,,$$ though in case $s=p^{nh}$, one must add on the left $1/p^n$, an inconsequential change. But here, my friends, our tale is almost done.
The last display exhibits $f_s$ as a $\Bbb Q_p$-series in the series $f_1,\dots, f_{s-1}$. But look at the tail-end of the outer sum: because the degrees in $y$ are bounded, the binomial coefficients for the $p^{Nh}$-powers far overwhelm the denominators, and the total coefficients go to zero. So we know that the tail-end is convergent, just as a series of elements of $\Bbb Z_p\{\{x\}\}$. And the part before the tail-end? That is a polynomial with $\Bbb Q_p$-doefficients in the $s-1$ series in $\Bbb Z_p\{\{x\}\}$. Let’s call it $g(x)$ for the moment. We now have $-f_sL'=g$, and thus $f_s=-f_1g$, an element of $\Bbb Q_p\otimes_{\Bbb Z_p}\Bbb Z_p\{\{x\}\}$ that’s also in $\Bbb Z_p[[x]]$, since of course we know that $F$ has its coefficients in $\Bbb Z_p$. Thus $f_s(x)\in\Bbb Z_p\{\{x\}\}$, as desired.
What this result says is that there is an action of the formal group $F$ on the closed disk. Again, maybe the cognoscenti have known this all along, but I certainly didn’t. You certainly don’t expect such a thing for a random formal group, even (as here) of height greater than $1$.
This is really a comment, but a bit too long.
The coefficient of $y$ in $F(x,y)$ is $1/l'(x)$. If $n=1$ then $l'(x)=\sum_ix^{p^i-1}$ and $1/l'(x)$ is not polynomial mod $p$, so the answer is negative.
However, to my surprise, it seems like the answer might be positive for $n>1$. (Note that $l'(x)=1\pmod{p}$ in that case.) For example, when $n=p=2$ one can calculate that the coefficient of $y^8$ in $\overline{F}(x,y)$ is $x^{14}+x^{20}+x^{26}+x^{56}\pmod{x^{1024}}$. As there are no terms $x^i$ with $56<i<1024$, one might reasonably guess that there are no terms $x^i$ with $i>56$, in which case the coefficient of $y^8$ would be polynomial. However, I do not know how to prove this.
• I have a program that I think computes these Honda formal group laws (see mathoverflow.net/questions/124048/… for some pictures). I'm getting $x^{14}+x^{20}+x^{26}+x^{56}+x^{98}+x^{164}+x^{176}+x^{188}+x^{200}+x^{212}+x^{218}+\cdots$. Also, the pictures don't support the conjecture; but I'm not 100% sure my program's correct... – Christian Nassau Jan 31 '15 at 15:46
• @ChristianNassau: Are you sure that this is the same FGL? The integral version of mine has $[2]_F(x)=\exp_F(2x)+_Fx^4$ but people also consider variants with $[2]_F(x)=2x+_Fx^4$ or $[2]_F(x)=2x+x^4$. I did my calculation quickly in Maple today but then I dug out some files that I generated more systematically a few years ago and the result was the same. – Neil Strickland Jan 31 '15 at 16:38
• It's probably not the same: on inspection it seems my program computes an arbitrary formal group $F$ with $[2]_F(x)=x^4$. Sorry for the confusion. – Christian Nassau Jan 31 '15 at 16:56
• I have slightly changed my program and computed two new formal groups for $p=2$ with heights 2 (nullhomotopie.de/fg2_1024.pdf) and 3 (nullhomotopie.de/fg3_9920.pdf). These pictures now do support the idea that there might be a formal group $\bar F$ of height $n\ge 2$ in $\mathbb F_2[X][[Y]]$. – Christian Nassau Jan 31 '15 at 19:43
• Update to my comment to the original question: Ghassan and I think we have a proof that for $n\ge2$ the formal group in characteristic $p$ is indeed in $\Bbb F_p[x][[y]]$. It’s a fussy messy computation involving multinomial coefficients, though, and has to be checked carefully. – Lubin Feb 9 '15 at 17:04 | |
# Tag Info
26
Toggling? Or setting? Our toggleLights function is oddly named and doesn't do what I'd expect. (Specifically, a toggle turns off things on and on things off.) In C, and especially in embedded C, we shouldn't be afraid to use unsigned integers as bit arrays. So, perhaps we want a setLights function that looks something like this: void setLights(uint8_t ...
17
how to optimize this code regarding memory consumption and performance. (?) Consider redefining the angle measurement. It appears int phase is 0.36 degrees. A natural choice would be 1024 phase to 1 revolution. Now the problem becomes one using a binary angle measure. Not only does this simplify small things like scaling, it opens up coding choices as ...
17
Do not call main recursively. You are setting yourself up for stack overflow. Consider instead def main(): while True: try: your_logic_here except Exception as e: your_logging_here Testing for counter == 4 is better done in a loop: for _ in range(4): handle_acceleration handle_the_rest An ...
14
First of all, yes, CortexM0 lacks any way to do 32x32=64 multiplication in hardware. CortexM3 and CortexM4 have the umull instruction, which lets you do 32x32=64 really easily. And yes, since you're writing in C, one possible implementation would be uint64_t mul32x32(uint32_t r0, uint32_t r1) { return r0*(uint64_t)r1; } but I assume you've already tried ...
13
Good that OP is using 4 simplifications: year 2000-2099, no DST, no leap second, no timezone. So OP knows of code limitations concerning these. Various elements of this function break without those givens. Make static unsigned short days a const. Use a long for your epoch as in: void epoch_to_date_time(date_time_t* date_time,unsigned long epoch) as ...
13
Didn't work for me First off, I ran your program but it didn't ever find any word. I suspect the problem was in the function max: if (e && ((int) e->data > max_size)) I didn't see anywhere in the program where data was ever incremented, except if you had a duplicate word in the dictionary. My dictionary had no duplicate words. Perhaps ...
12
...it [the program] is to be used in embedded devices with possibly very low available RAM In that case, we should be getting rid of everything that isn't of absolute necessity, and adding some things that are. Overall: If one provides no memory modifier (such as __flash) then many embedded systems compilers will copy the data into RAM (even sometimes ...
12
Without more of the code, it's not really going to be possible to address memory management or bottlenecks. So instead, I'll look instead at readability and portability. Use the appropriate #includes This program fragment requires two headers, which should be included but are not: #include <stdlib.h> // for getenv(), srand(), fprintf(), etc. #...
12
Have you already executed the code to see how it performs and if the battery will last? There is that famous Donald Knuth quote saying premature optimization is the root of all evil (or at least most of it) in programming. I never had to think about the energy consumption of a program, so I cannot tell you about the power efficieny. But as vnp already did, ...
11
A few notes: Your included libraries and definitions at the beginning of your code is not very organized. #include <avr/io.h> //#undef __FLASH #ifndef __FLASH #include <avr/pgmspace.h> #define FLASH(x) const x PROGMEM #define FLASH_P(x) const x * const PROGMEM #define FLASH_PR(x, y) (x *)pgm_read_word(&(y)) #else #define FLASH(x) const ...
11
I see a number of things which could help you improve your code. Minimize register usage With assembly language programming, and in particular in embedded systems work, minimizing the use of resources is often vital. One of the most precious resources is the processor's registers. In this case there are only 32 of them, so minimizing their use is often ...
11
Illegal void pointer arithmetic This code is illegal in C: void *array; ... (array + offset) You are not allowed to do pointer arithmetic on a void * pointer. You should use char * instead of void *. Simplifications I would suggest defining a generic structure like this: typedef struct EnumName { int value; const char *name; } EnumName;...
10
Magic numbers You have some magic numbers in your code. Create constants for them to make the code more readable and maintainable. Using an unnamed enum is a trick for creating constant int values. Using const qualified variables work in this case, but don't for things like in switch statements and for the size of arrays. See this answer for more info on ...
10
Your current delay code is busy-waiting, this means that in order to delay for the amount of time required you are just wasting however many CPU cycles you need in order for that amount of time to elapse. If I remember correctly _delay_ms is the library function that does something similar to what you already have. It works by calculating the number of no-op'...
10
Pointer issues: You should avoid void* where it isn't necessary. In this case, there is no reason why you should use it, use a char* instead. Pointer arithmetic on void* in not even valid C, this is a non-standard GCC extension. You should use const-correctness for the pointers where the contents aren't modified. (int) * ((int *)something) The cast to int ...
9
<stdbool.h> Unless you need compatibility with C89 for some reason, I would use bool for the return type of the function and true and false as the possible values. It helps readability. sscanf() This one depends on your performance needs. A solution using sscanf() will be much easier to understand with a simple look and also shorter in code, but ...
8
Two-dimesional array Your first function can be aggressively reduced by replacing cI and friends by a 2D array: float *Receiver::parse_pid_substr(char* buffer) { static float pids[8]; memset(pids, 0, 8*sizeof(float) ); char rgcPIDS[8][32]; size_t i = 0, c = 0, p = 0; for(; i < strlen(buffer); i++) { if(buffer[i] == '\0') { break; ...
8
Opening and closing files takes resources: with open('babar.txt', 'a') as f: f.write('a'*10000) takes 300 micro-seconds while: for _ in range(10000): with open('babar.txt', 'a') as f: f.write('a') takes 648000 micro-seconds So to answer your question Would it be beneficial to write 10 rows at once instead of writing one row at a time?. The answer, ...
7
Standard C# naming convention for methods is PascalCase. Following standard naming conventions makes the code look more familiar to other C# developers (which might be important as you plan to open source it) You are potentially wasting some cycles here by calling getMaxBulbs() again even though you just did it and have the result stored in maxBulbs: Int32[,...
7
This really is picky, picky, picky, and I recognize that. However, source code is written for human beings, not computers, so your source code should be crystal clear, and as readable as possible. Keep this in mind. There's nothing harmful in your header, led.h, but you didn't wrap it so that it's included only once. You can see this practice in all sorts ...
7
here are the programming principles you ought to be applying: Don't repeat yourself Note that the two statements (statement-true & statement-false) are just about the same as each other. If there's a mistake in one and you fix it, you might not fix it correctly in the other. So you really should have just one copy of the formula. This is even more ...
7
I'm interested in how to optimize this code regarding memory consumption and performance. Well, which is it? Are you optimizing for speed, ram, or executable size? You say this is for an embedded project, so it’s likely you’ll need to pick one. Once I needed to calculate a sine wave for a motion controller. The timing was critical, but we had plenty of ...
6
I would focus on eliminating the inner switch statements. You could try to eliminate the outer switch as well, but it's probably not worthwhile, especially for a resource-constrained environment. byte[][] desKeys = new byte[][] { null, DESKey1, DESKey2, DESKey3, DESKey4 }; try { switch (buf[ISO7816.OFFSET_P1]) { case 0x01: ...
6
It's a small point in comparison to the other answers, but worth noting for a beginner. This is the LED header file. #include <avr/io.h> #define LEDPORT PORTE_OUT #define LEDPORT_DIR PORTE_DIR void init(void); void toggleLights(int ledPosition); Anything in the header is available to outside code. There's no reason for anything to have access to ...
6
A few thoughts about your code: 1. Use of reinterpret_cast // Create a simple function that calls the functor this->func_ = [](void *user, TArgs ... args) -> void { TFunctor *functorPtr = reinterpret_cast<TFunctor*>(user); (*functorPtr)(args...); }; This always raises a red flag for me. I understand that the user ...
6
Improve the wrap-around logic There's obvious inefficiency in: head %= bufferSize; % uses division, but because we're incrementing, we know that head / bufferSize is at most 1. Instead, we can: if (++head >= bufferSize) head -= bufferSize; On architectures such as ARM, there's no branch here, as any decent compiler will just use a condition flag ...
5
5
At the top, you are declaring eight 32-character arrays, which later you use a switch with to decide which array to use. Instead, you should declare an array of arrays, the Morwenn did it, or you can use just one 256-character array. You can then just add a multiple of the p to the index in the first big switch block, reducing it to only one case: cArrays[p&...
5
Since your script has no content to return, a status code of 204 No Content would be more desirable than 200 Success. For that, you should echo "Status: 204 No Content" (RFC 3875 Sec 6.3.3). Also consider returning using status code 405 Method Not Allowed for anything other than a POST request. $TMPOUT is a misnomer. The file is not temporary at all —$...
5
Here are some things that may help you improve your program. Use all required #includes The program uses fopen but doesn't #include <stdio.h>. It should. Use const where practical Whenever you pass a pointer, ask yourself whether the called function should be allowed to modify the contents of the pointed-to memory. If not, then that parameter ...
Only top voted, non community-wiki answers of a minimum length are eligible | |
1. Jul 11, 2011
### Makveger
1 - What is Wavefunction Ψ?
In the derivation of the equation we treated the total energy of the electron or the particle as the kinetic energy of the particle and the potential energy
2 - Can you give an example of the potential energy of the electron?(Is it like the electric field applied to it from nucleus protons) ?
3 - Does Schrodinger's equation describe the motion of a particle that is subjected to an electric field and behaves as a wave?so the solution of it would give location of the particle w.r.t time ?
The book I'm reading described the motion of the electron as a standing wave...I really can't imagine how that works can you please describe it to me?
2. Jul 12, 2011
### Drakkith
Staff Emeritus
I can't describe what a Wavefunction is, as I don't know enough about it. I would suggest looking it up on Wikipedia.
A standing wave is kind of like a guitar string. The string doesn't move but it vibrates after being struck. Instead of connecting to a guitar at each end, an electron standing wave is similar the a guitar string twisted into a circle and connected to itself. (Or more accurately a sphere I believe)
One of the things found when this effect was first theorized was that only certain frequencies of the standing wave could fit into certain orbitals. This helped explain why electrons could only be found in certain locations around the nucleus instead of just anywhere. It also helped explain how electrons were able to exist in an orbital without emitting EM radiation and falling into the nucleus like they should have according to classical physics.
3. Jul 12, 2011
### dextercioby
Well, from your questions, I can assess that you may not be using the right teacher/textbook. It's easier to change the book, of course, so I'm suggesting you pick up the book of K. Krane <Introductory nuclear physics>, read the relevant sections thoroughly (second chapter, IIRC) and ask your teacher to explain what you couldn't find clear.
The thread was initially posted iin the Nuclear Engineering subforum, that's why the reccomendation was made for a nuclear physics book.
Last edited: Jul 13, 2011
4. Jul 12, 2011
### jfy4
The state vector $|\psi\rangle$, in a manner of speaking, houses the statistical distributions for dynamical variables for experiments on many similarly prepared systems.
The wave function usually means $\langle x|\psi\rangle=\psi(x)$ whitch is the state vector in the coordinate basis. | |
## Thursday, December 27, 2007 ... //
### Johannes Kepler: 436th birthday
Johannes Kepler was born prematurely near Stuttgart on 12/27/1571. His grandfather was a mayor of their town but once Johannes was born, the family's fortunes were already dropping. His father was a mercenary and left the family when Johannes was five. His mother was a healer and a witch which has also led to some legal problems.
Johannes was a brilliant child with early inclinations to astronomy. In Graz (1594-1600), he was defending the Copernican heliocentric system. At that time, there was no clear difference between astronomy and astrology. Therefore, Kepler also invented the ADE classification of planets orbiting the Sun. ;-) This attempt resembled, but was not identical to, Garrett Lisi's hopeless attempt to unify. Kepler also wrote that the Universe had to be stationary.
In 1600, Kepler finally met Tycho Brahe in Benátky nad Jizerou (see the picture), a central Bohemian town where Brahe built an observatory. Brahe quickly recognized Kepler's magic theoretical powers. Their negotiations about the new Kepler's job in Prague were accompanied by arguments and tension. Fortunately for the Czech capital, Kepler had more serious problems with Graz where they expected him to convert to Catholicism. Finally, Kepler moved to Prague, including his family.
He became the imperial mathematician of our monarchy, an advisor to Emperor Rudolph II, a predecessor of Václav Klaus and a great sponsor of arts and sciences pictured above, and 11 most productive years of Kepler's life were just getting started. Kepler was also giving his political recommendations to the empire although his common sense was more instrumental than the stars.
In Prague, Kepler established modern optics (he understood geometry of lunar shadows, the inverse-square law controlling the light intensity, and other things). In 1604, he started to observe SN 1604, a supernova also known as Kepler's star, in the constellation Serpentarius, the 13th sign of the zodiac in which your humble correspondent was born. Click at the link and see Kepler's beautiful drawing of Ophiuchus, as the constellation should now be called. It started to be clear to him that the heavens were not as constant as Aristotle used to think.
Kepler's laws
In 1602, Kepler discovered the law that we derive from the conservation of the angular momentum these days. He had a somewhat strange, non-Newtonian interpretation of it: the Sun provides the planets with motive power that decreases as they get further from the Sun which means that when the planets are far away from the Sun, they move more slowly. ;-)
Tycho's very accurate data about Mars were very important for Kepler. Incidentally, Mars may be hit by an asteroid in one month: the probability exceeds 1 percent.
After 40 years of failed attempts, Kepler finally got the right idea about the shape of the orbit in 1605. Why didn't he think about the ellipse - the shape dictated by his first law - earlier? Because it was too simple and Kepler thought that the astronomers would have figured out such a solution a long time ago if it were correct. This story hides at least two general lessons.
The first lesson is that it is sometimes easier to learn important insights from examples that are not the simplest ones because their patterns are sharper, more visible, and more characteristic and they cannot be confused with others. The simplest cases and solutions often look too singular and their modest internal structure is a bad starting point for generalizations. More complex examples typically "pinpoint" the right general law or algorithm uniquely or almost uniquely.
The second lesson is a sociological one. While it may be more likely that a simple solution should have already been found by others, they may have overlooked it, too. And if they did, such a simple solution might be much more valuable. I think it follows that scientists shouldn't ignore a topic just because it was found uninteresting or unrealistic by many others.
On the other hand, you are never guaranteed to succeed. If you attempt to do something simple that has been tried by many others, you are less likely to find something new, especially if you are less gifted than Kepler.
Kepler realized that the Mars data agreed with the ellipse beautifully and he has abruptly and correctly deduced that all planets had elliptical orbits even though he couldn't have done the numerical calculations for all the planets (he had no postdocs and grad students). In 1610, Kepler also had a healthy and friendly exchange with Galileo Galilei, supporting his discovery of the moons and helping him and others to improve the telescope.
The last law of planetary motion I didn't mention was the third one: the orbital times squared are proportional to the distances cubed. Kepler included this law as an example of the harmonies that the Creator used to decorate the heavens.
Religious tension in Prague
Unfortunately, politics slowed the progress down in 1611. Rudolph II became seriously ill - and died in 1612 - and his brother Matthias who was 5 years younger and who was already controlling Austria, Hungary, and Moravia was able to grab the kingdom of Bohemia, too. This, of course, meant a dramatic decrease of the influence of so-far-dominant Bohemia within the Holy Roman Empire. It also meant a dramatic weakening of the conditions to do research in Prague (and elsewhere, for that matter). While Matthias confirmed Kepler's job and salary, he allowed him to leave for Linz, Austria. In Linz, Kepler was teaching at the district school and he was calculating the year of Christ's birth.
That was a pretty bad development for science. At least, Kepler's second marriage was much happier than the first. However, his writing got much less quantitative than during the golden years in Prague. It became somewhat astrological and similar to his early years. A monument to Brahe and Kepler at the picture above is in Prague 6, Pohořelec, a former place of another observatory of Brahe near the Prague Castle.
Jacob Bernoulli
Another guy who was born on December 27th was Jacob Bernoulli, namely in 1654. His parents wanted him to do theology but he preferred mathematics and astronomy. The Bernoulli numbers that appear in Martin Schnabl's solution to string field theory and elsewhere belong among his discoveries. However, you should be careful: there were 8 good mathematicians and physicists in his family. For example, the laws of hydrodynamics and aerodynamics are due to Daniel Bernoulli. | |
# Effective field theory approach to $b\to s\ell\ell^{(\prime)}$, $B\to K^{(*)}\nu\bar{\nu}$ and $B\to D^{(*)}\tau\nu$ with third generation couplings
@article{Calibbi2015EffectiveFT,
title={Effective field theory approach to \$b\to s\ell\ell^\{(\prime)\}\$, \$B\to K^\{(*)\}\nu\bar\{\nu\}\$ and \$B\to D^\{(*)\}\tau\nu\$ with third generation couplings},
author={Lorenzo Calibbi and Andreas Crivellin and Toshihiko Ota},
journal={Physical Review Letters},
year={2015},
volume={115},
pages={181801}
}
• Published 8 June 2015
• Physics
• Physical Review Letters
LHCb reported anomalies in $B\to K^* \mu^+\mu^-$, $B_s\to\phi\mu^+\mu^-$ and $R(K)=B\to K \mu^+\mu^-/B\to K e^+e^-$. Furthermore, BaBar, BELLE and LHCb found hints for the violation of lepton flavour universality violation in $R(D^{(*)})=B\to D^{(*)}\tau\nu/B\to D^{(*)}\ell\nu$. In this note we reexamine these decays and their correlations to $B\to K^{(*)}\nu\bar{\nu}$ using gauge invariant dim-6 operators. For the numerical analysis we focus on scenarios in which new physics couples, in the…
137 Citations
• Physics
Journal of High Energy Physics
• 2021
New-physics (NP) constraints on first-generation quark-lepton interactions are particularly interesting given the large number of complementary processes and observables that have been measured.
• Physics
• 2021
Lepton flavour violation (LFV) naturally occurs in many new physics models, specifically in those explaining the B anomalies. While LFV has already been studied for mesonic decays, it is important to
• Physics
Journal of High Energy Physics
• 2021
Evidence for electron-muon universality violation that has been revealed in b → sℓℓ transitions in the observables RKK∗\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym}
• Physics
The European physical journal. C, Particles and fields
• 2021
It is shown that in wide regions of the dilepton invariant mass spectrum the ratio between muonic and electronic decay widths can be predicted with high accuracy, both within and beyond the Standard Model.
• Physics
Journal of High Energy Physics
• 2021
Given the hints of lepton-flavour non-universality in B-meson decays, leptoquarks (LQs) are enjoying a renaissance. We propose novel Large Hadron Collider (LHC) searches for such hypothetical states
• Physics
Journal of High Energy Physics
• 2021
Leptoquarks are hypothetical new particles, which couple quarks directly to leptons. They experienced a renaissance in recent years as they are prime candidates to explain the so-called flavor
• Physics
Journal of High Energy Physics
• 2021
We propose a theory of quark and lepton mass and mixing with non-universal Z′ couplings based on a 5d Standard Model with quarks and leptons transforming as triplets under a new gauged SO(3) isospin.
• Physics
Journal of High Energy Physics
• 2020
As a consequence of the Ward identity for hadronic matrix elements, we find relations between the differential decay rates of semileptonic decay modes with the underlying quark-level transition b →
• Physics
Journal of High Energy Physics
• 2020
The measurements carried out at LEP and SLC projected us into the precision era of electroweak physics. This has also been relevant in the theoretical interpretation of LHCb and Belle measurements of
• Physics
• 2020
We clarify open issues in relating low- and high-energy observables, at next-to-leading order accuracy, in models with a massive leptoquark embedded in a flavor nonuniversal SU(4)×SU(3)×SU(2)×U(1) | |
# Tag Info
## New answers tagged statistical-mechanics
0
In diatomic gases (such as oxygen or nitrogen) where each molecule contains two atoms, energy is stored in the vibration and rotation of these atoms (in between and about each other), but temperature is the average translational kinetic energy of the molecules. This would obviously be the same for molecules with more than two atoms, and as explained above we ...
0
It depends on the setup which constraints are put on the microstate. For example the microcanonical ensemble assumes that the system is entirely isolated from the environmnent so necessarily energy has to be conserved. The microcanonical ensemble doesn't forbid the state where all particles are at the top as long as energy is conserved. So some of the system'...
0
Remember that macro state parameters are derivable from the distribution of micro states. Concretely, macro state parameters like pressure, density etc are averages over micro states of like velocity, number density etc. It is not true that for a gas in a gravitational field, the uniform density micro state does not contribute because it does not reproduce ...
0
Once you define entropy, such as Cort Ammon descibes, add completely symmetric laws of physics (which is the case, as you say), you still need some kind of initial or prior condition to deduce a single emergent arrow of time. The prior condition is the big bang, and it was "large" enough and lower entropy than today. The laws of physics allow for ...
2
The question you ask is one of the major open philosophical questions in science today. Why does time appear to move "forward?" Many great minds like Feynman have explored the question (and its dual "what if time doesn't just move forward?") As such, no answer can be completely satisfactory, but some statements can be made and they will ...
2
It is actually not, the total entropy can in fact decrease, but it's just highly unlikely. I will make an analogy to the statistical interpretation of entropy as counting the amount of microstates $\Omega$ according to some configuration in phase space. $S = k \ln \Omega$ Consider a uniform gas of indistinguishable particles in a box. Each particle has a ...
-1
If I am correct, one has $$E=\frac{1}{\theta K_2(1/\theta)}\int_1^\infty \gamma^3 \left (\sqrt{1-1/\gamma^2} \right )e^{-\gamma/\theta}d\gamma=\frac{K_1\left(\frac{1}{\theta}\right)}{K_2\left(\frac{1}{\theta}\right)}+3,$$ where $K_n$ is a modified Bessel function of the second kind.
-1
The energy levels used in Boltzmann distribution are normally ranges of energy, when we approximate discrete functions by continuous ones. They are not discrete energy levels in the quantum meaning. For example: each gas molecule has an energy $E_k = \frac{1}{2}mv_k^2 = \frac{p_k^2}{2m}$. The sum for all particles: $\sum_0^N{e^{-\beta E_k}} = \sum_0^N{e^{-\... 1 Yes, Maxwell and Boltzmann produced their theories well before quantum ideas were dreamed of, and they were developed by Gibbs into the form we know today using purely classical ideas. This involves some quite deep assumptions about what macrostates are equally probably in an ensemble. Introducing quantum microstates makes it a lot easier to understand. So ... 1 The standard intuitive path to the classical limit, or, conversely, to quantization, often goes through the phase space formulation of quantum mechanics. The analog of the Liouville probability density—-let's take n=1 for one degree of freedom and easily generalize later—-is the Wigner quasi probability density, normalized $$\int\!\!dqdp~ f(q,p)=1,$$ but ... 0 It is something more fundamental, and the answer is the eigenstate thermalization hypothesis (ETH). Closed quantum systems that fulfill this ansatz will thermalize (so the ETH is a sufficient condition, but it is not proved if it is necessary. So far all systems that thermalize, fulfill the ETH), and this thermalization can be described with the tools of ... 0 According to this text (https://web.math.princeton.edu/~nelson/papers/talk.pdf) gives different correlation values. See on the last pages. Edit: It's about a two HO system. The theory gives different correlation values between the states of the HOs than QM's predictions. 4 A qualitative answer to your question is that a system at higher temperature has higher W, all else being equal. So if you take a certain quantity of heat$Q$out of a system at high temperature$T_h$, without doing anything else, the change in entropy is$-\frac{Q}{T_h} = k \frac{\Delta W_h}{W_h}$(where$\Delta W_h < 0$), and if you put that same heat ... 1 Starting from the entropy expression$dS=\frac{1}{T}dU+\frac{p}{T}dV-\frac{\mu}{T}dN$we can introduce a new variable$L=U=\mu N$, with$dL=dU-Nd\mu -\mu dN$. Insertion gives:$dS=\frac{1}{T}dL+\frac{N}{T}d\mu +\frac{p}{T}dV$This means we can define an ensemble with the variables$L=U-\mu N$,$\mu$and$V$. Most importantly, this is an example of an ... 0 Let's first investigate again the case of the double-well potential$V_{\text{dw}}(x)$, and why a sum over even/odd number of instantons appear. We will normalize the potential such that$V_\text{dw}(\pm a) = 0$. Denote by$s \mapsto x(s;t)$the solution $$m\frac{d^2x(s;t)}{ds^2} = V_{\text{dw}}(x(s;t)) \ ; \\ x(0;t) = -a \ , \ \ x(t;t) = a \ .$$ Note ... -1 Since you are a high school student, I will break it down in simpler terms Heat is a result of combustion, chemical reactions at an atomic or molecular level or radiation like fission/fusion at an atomic and sub-atomic level. At a quantum level particles are not identified with any known elements on periodic table so as to associate temperature. In a known ... 0 In summer, the sun is more directly overhead and more intensely heats the ground surface up. The heated air right next to the surface is more buoyant than the slightly cooler air above it, which is a dynamically unstable condition. The hotter air protrudes up into the cooler air and rises upwards, drawing in more hotter air from around its point of origin ... 2 As it is well known, the equilibrium state of a physical system can be obtained from the extremisation of thermodynamic potential. In the context of phase transitions, the corresponding thermodynamic potential is the (Gibbs) free energy$G(m)$, where I assume for simplicity that the order parameter is constant in space, and is thus just a number$m$. The ... 2 So, critical phenomena can't be addressed via naive mean field. BUT, it is the first simple thing you can do to analyze stuff. Plus, if all you want just for the start is a qualitative description of the phase diagram, MFT is the way to go. Landau's procedure can give a very good qualitative sense of what goes on near criticality. Actually, if you are well ... 1 The statement in question is made in a specific context - that of electrons in condensed matter, which interact via Coulomb interaction, which is quartic in fermion operators. Physically, a quadratic Hamiltonian does not necessarily correspond to non-interacting particles: e.g., bosonization approach reduces Hamiltonians to interactions to quadratic ones. ... 0 The answer is given in the third remark at the end of Section 3 of my paper "Demonstration and resolution of the Gibbs paradox of the first kind" Eur. J. Phys. 35 (2014) 015023 (freely available at arXiv). In short, let's assume you combine two subsystems S1 and S2, each with N indistinguishable particles, by removing a partition between them. As ... 3 The question starts from an incorrect statement. Einstein's theory, but also Debye's theory do not work well at high temperature whatever high the temperature is. The correct statement is that Einstein's law fails to account for the low temperature behavior of specific heat of solids, even qualitatively, but at high temperature it goes to the Dulong and ... 1 From statistical physics, if a system is in contact with a "reservoir" of heat and particles, then the energy of the system$E$and the particle number$N$are allowed to fluctuate. If the set of microstates of the system form a discrete set (which would be the case for a system of particles which can inhabit discrete energy levels), then the ... 2 Q1 As you have shown yourself, this equation does not have a steady-state distribution: if we set$\partial_t P = 0$, i.e., if we assume that the solution is time-independent, we still obtain a solution that depends on time, contradicting our assumption. Q2 and Q3 In some situations one could indeed approximate the solution using form (II). The conditions ... 1 I suppose the answer is no. Consider a transition between the solid-state phase and the gas or liquid phase. Gas and liquid phases are translationally invariant. In a solid crystal, translational invariance is broken due to the presence of lattice. According to the acepted point of view, a continuous transition between phases with different symmetries is ... 0 I will expose a couple of ideas that maybe can help you: -I understand non-equilibrium steady states like those steady states that cannot be predicted by Statistical Mechanics, where your steady state cannot be described by the microcanonical, canonical, etc... ensembles. An example of this is the many-body localization, where local observables of closed ... 0 Consider 𝑂2 or 𝑁2. Then there should be 𝑓=5. And the derivation is by a factor different. Can we still use the law as an approximation? Yes. The so called kinetic temperature of an ideal gas, as normally measured, is based on 3 degrees of freedom and does not (and need not) account for molecular rotation (and vibration). These additional forms of kinetic ... 0 Your comment to @pwf stated that your question actually is My question is, if$T=T_s$, then the expression for a quasi-static process is always the same .......as that for a reversible process, and that is absurd because, every reversible process is quasi-static but not every quasi-static process is reversible. An example of a quasi-static process that is ... 1 As you noted in your question,$\langle \hat{O} \rangle$increases linearly with time... which means that its rate is constant! E.g., if$\langle \hat{O} \rangle$is the electric charge, it gives us a situation with a constant current. I think conceptually the difficulty is that a steady state is more of a theoretical/modeling concept rather than a kind of a ... 2 For me a phase transition is defined (somewhat strictly) as a non-analyticity in the thermodynamic potential. A function$f(x)$is non-analytic at$x=x_0$if it can not be expanded in powers of$x-x_0$. For example$f(x) = \sqrt{x+1}$can be expanded around$x=0$$$f(x) \cong 1 + \frac{x}{2} - \frac{x^2}{8} + \dots$$ but this does not work at$x=-1$because ... 4 A very large class of phase transitions are characterized by the breaking of some symmetry. Usually one finds a quantity called the order parameter, and finds its scaling with respect to an energy scale for the system, like the temperature. Usually for a phase transition one finds that the order parameter is either discontinuous or one of its derivatives is ... 2 Both options ($C_p$or$C_v$) are approximations. In the general situation, a non-rigid solid material will expand non-uniformly and therefore will have a time-dependent stress distribution, which stores internal strain energy. The simplest assumption is that the object is not mechanically constrained and that the non-uniform internal strain energy is small. ... 1 If we have two interacting gases of different temperatures, then it may be possible that a packet of particles(*) which move at high speed... In a "packet of particles" the individual particles will not all move at a "high speed". If the packet of particles is large enough, then the speeds of the individual particles will vary about the ... 1 This Hamiltonian has what is known as a "spin-flip" symmetry. It means that due to the term$\sum S_{z_i}S_{z_j}$, we can simultaneously change the sign of all the$S_{z_i}$operators and we still have the same Hamiltonian (the operator that commutes with the Hamiltonian is$G=\prod_i S_{x_i}$, which produces a global spin-flip over states in the$...
2
What I would like to know is whether ${\rm d}S=\frac{\delta Q_{rev}}{T}$ is just essentially a 'backward engineered formula' In some sense it is. Dividing by temperature is what turns $\delta Q_{rev}$ into the exact differential ${\rm d}S.$ It is what Clausius did (in 1858 I think) when he found that there was such a state quantity, which he called entropy....
1
There is a special relationship between entropy and heat because when heat passes from $A$ to $B$ then entropy comes along for the ride, and this is unavoidable. The entropy of $B$ will go up. The entropy of $A$ may go down or up or stay fixed, but if the process is reversible then it will go down. The only way for $B$ to avoid this entropy increase upon ...
1
At the moment, I don't think there is a special connection since for a Joule expansion there is no heat transfer but there is an entropy increase In the "Joule expansion" the gas cools as it uses its thermal energy to accelerate itself. That is a reversible process. Then the mechanical energy of the gas heats the gas, which is an irreversible ...
1
Caveat: I have not done statistical mechanics. All my knowledge of this subject is based on classical thermodynamics. However, I tried to keep my answer factual by only referencing already well-accepted ideas on the topic while providing references. What I would like to know is whether $dS=\frac{dQ_{rev}}{dT}$ is just essentially a 'backward engineered ...
2
Let the work done on the system be $\delta W$ while its internal energy change be $dU$, assume that the system may also exchange energy with a reservoir that is at temperature $T_r$. Then for an arbitrary process the entropy change $dS$ of the system satisfies $dS \ge \frac{dU-\delta W}{T_r}$. The equality sign holds for a reversible process. When the ...
1
The summation is over all possible mutually exclusive states. The exponential factor is probability of that state.
1
So if there are various macrostates that can occur (regardless of how unlikely) what does that mean for our external parameters? How can there be different macrostates with different U, V, N? It means that the quantities which define the macrostate of the system (in this case, $U,V,$ and $N$) are not being held fixed. If your system is in thermal contact ...
2
"But now i read that there is not only a single macrostate of a system, but that there can be various macrostates." This is not in conflict with your previous understanding that a fixed U,V,N defines a macrostate. I think you just misinterpreted the statement. If a particular set of values for U,V,N denotes a particular macrostate, that means that ...
1
First, let's rewrite this in polar coordinates $$\rho = \frac{2}{h^3} \int dp \; 4\pi p^2 e^{-\beta (\frac{p^2}{2m} - \mu)}.$$ Thus we see that the particle densitiy in the intervall $[p, p+dp]$ is $$d\rho = \frac{8\pi}{h^3} p^2 e^{-\beta(p^2/2m - \mu)} dp.$$ Next we calculate $\rho$. This you have ...
0
Multiplicity tells how many microstates have a macrostate. E.g. how many possible multiparticle configurations have a glass of 293K water on surface pressure. Entropy is the logarithm of multiplicity. (multiplied by k)
1
The entropy change of a system is the sum of two parts: Entropy transferred from the surroundings to the system (across the interface with the surroundings) as a result of heat flow, and given by $\int{\frac{dq}{T_B}}$, where dq is the differential heat flow across the boundary interface between the system and surroundings and $T_B$ is the temperature at ...
0
Suppose the entropy associated with the system and the surrounding at the start of thermodynamic process is $S_o$ and the entropy associated with it at the end is$S$. $$∆S=S-S_o$$ The entropy change through any reversible path connecting intial and final state can be given as- $$∆S_{rev}=\int\frac{dq}{T}$$ Here , $T$ is thermodynamic temperature.
19
Negative temperature is mainly to do with (c): a finite number of configurations. It is not a violation of entropy postulates or equilibrium, but I will qualify these statements a little in the following. The heart of this is not to get 'thrown' by the idea of negative temperature. Just follow the ideas and see where they lead. There are two crucial ideas: ...
9
You're pretty much right; in the case of spins, it's the fact that there's an upper bound on the system's energy that causes negative temperature, which is strongly related to the fact that there's a finite number of states. With something like a gas, increasing energy always provides access to an increasingly large set of phase space because the area of a ...
2
Bose-Einstein condensation is not caused by interaction: ideal Bose gas undergo Bose-Einstein condensation. In contrast, superfluidity is due to interaction and one should see its signature in Green functions as a pole associated to phonons.
2
Yes, it is a probabilistic statement. But in practical scenarios the number of microstates in the most probable macrostate is so enormously greater than the number in any other macrostate that the system spends almost all of its time in the most probable macrostate, and you would to wait many times longer than the age of the universe before you observed any ...
Top 50 recent answers are included | |
# When does convergence in quadratic variation imply a uniform convergence or vice versa?
Given a sequence $\Pi=\{\pi_n\}$ of partitions of an interval $[0,T]$ the quadratic variation of a path $x\colon [0,T]\to \mathbb{R}$ is defined by $$[x]=\lim_{n\to +\infty}\sum_{\pi_n}|x(t_{i+1})-x(t_i)|^2.$$
I am interesting in any result which provides conditions under which a uniform convergence $x_m\to x$ implies convergence in quadratic variation: $[x_m-x]\to 0$. As well as the other direction: when does convergence in quadratic variation: $[x_m-x]\to 0$ imply pointwise convergence $x_m\to x$.
Obviously, in general this implications are false. It is known that for any continuous path $x$ with positive probability there are Brownian paths in arbitrary small neighborhood of $x$. Quadratic variation of a path of a Brownian motion is almost surely equal to $T$, but we can choose $x$ with $[x]\neq T$.
For the second implication an obvious counterexample would be $x_m(t)=m t,\, x(t)=0.$
I am interested if there are results proving those implications under additional requirements on $x_m, x$. | |
# What is the peak voltage of 250 V R.M.S. main A.C. voltage in C.R.O.?
This question was previously asked in
UP Jal Nigam E&M 2016 Official Paper
View all UP Jal Nigam JE Papers >
1. 250 V
2. 320 V
3. 353.5 V
4. 368.3 V
## Answer (Detailed Solution Below)
Option 3 : 353.5 V
## Detailed Solution
Cathode Ray Oscilloscope:
• CRO is an electronics instrument that presents a high-fidelity graphical display of the rapidly changing voltage at its input terminal.
• It is most widely used for signal measurement.
• The display section of the CRO which is known as the cathode ray tube has two input names as vertical (Y) input and horizontal (X) input.
• The signal is applied to these inputs and drives the corresponding deflection plate and controls the position of the electron beam that plots the waveform on the screen.
• These are two types of plots that can be displayed based on the mode of operation of CRO which is following,
1. Sweep Mode of Operation: In this mode, various measurements regarding the test signal can be carried out like peak voltage, frequency, phase, time period, nature of waveform, etc.
2. X-Y Display mode of Operation: In this mode, we have to use a pattern known as the Lissajous Pattern for measurement of frequency and phase only.
Application:
Since, Voltage waveform measured by CRO in Sweep Mode of Operation,
Given, RMS Value (VRMS) = 250 Volt
We know that, $$\large{V_{RMS}=\frac{V_{peak}}{√2}}$$ (For AC (sinusoidal waveform))
∴ Vpeak = √2 × 250 = 353.5 Volts | |
# How much light does your average home mirror reflect?
1. Sep 15, 2003
### MisterBig
1) How much light (as a percentage) does your average home mirror reflect?
2) How much light (as a percentage) does a very high quality mirror reflect?
3) How much does a very high quality mirror cost?
Thanks.
Last edited by a moderator: Feb 6, 2013
2. Sep 15, 2003
### Artman
I can speak a little for Astronomical use mirrors. They are front coated (the coating is placed on the front of the glass and you look at the coated surface) home type mirrors are back surface coated and you look through a layer of glass to see the reflected image, there is some distortion through the glass and light loss through diffraction, but the amount of distortion is minimal for everyday type use and the glass protects the coating.
Optical grade mirrors are usually rated on the quality of the reflected image. A high grade Astronomical mirror for instance might be figured as a parabola, ground, polished and coated for 1/4 wave (fair), 1/8 wave (good), 1/10 wave (better), or diffraction limited (best).
An 8" diameter, parabolic, 1/4 wave mirror can be had for about $70.00. A diffraction limited one of the same size might cost$400.00 or more (up to several thousand).
Hope this helps some.
3. Sep 15, 2003
### marcus
Re: Mirrors
One time I scrounged some high-grade mirror from
a university physics laboratory. They had cut something
to size and had some odd-dimensioned scraps left over.
No cost. Probably would have been thrown out.
It was nice to handle (but hold it by the edges)
and appeared to be extremely flat
my memory of it is that the colors of things were
deeper and less washed out than they appear
in ordinary mirror but I cannot say for sure and
I do not know the percentages.
Edmond scientific catalog a natural place to look
may be online now
4. Sep 15, 2003
### wimms
Re: Mirrors
google will offer you better figures, but from memory, ordinary daily mirrors reflect as little as 40-60% of light. Lasers require 99.99% for full-reflection mirror and 99.9% for front escape mirror. They usually have such figures only in narrow band of wavelengths. Infact, upon eye inspection, laser mirrors are transparent to most except specific wavelengths.
High quality wideband (relatively flat wavelength characteristic in visible light range) reflect 98-99.9%. Probably better figures too, but that'd cost exponentially more.
Cost of very high quality mirror makes only sense when you tell your application. There's too many different things for different purposes. Obviously, size of mirror and surface precision has biggest impact on cost.
If you live with prisms, then total internal reflection can offer 100% reflection in certain conditions.
5. Sep 16, 2003
### Artman
Also, if you are in the market for optical grade mirrors, try the surplus sources such as surplusshed.com An excellent source of inexpensive mirrors, lenses, etc. | |
$\mathrm{L}(-3,1), \mathrm{M}(4,1), \mathrm{N}(4,-5), \mathrm{P}(-3,-5)$ Answer. Substitute the values in the formula to get the area of the polygon. Area of a polygon. Before we move further lets brushup old concepts for a better understanding of the concept that follows. We then use the map function to convert the co-ordinates accepted as strings, that are present in the list into int. Eg: In iteration 1, the vertices are 0th, 1st and 2nd index in the co-ordinates lists. $\begingroup$ I just implemented this method and also try to use function Area to find out the area of each face after grouping the faces with the same normal together. Once you've found the apothem and the perimeter, plug them into the formula for … the division of the polygon into triangles is done taking one more adjacent side at a time. print "Area of Polygon: ",area This is how we can find or calculate the area of a polygon in Python. Area of a n-sided regular polygon with given Radius? Let us now run the program for the case of a square. Here the radius is the distance from the center of any vertex. Andrea. Contents. \$4.99 . a.bacca shared this ... Hi, what kind of Algorithm does Geogebra use in order to calculate the area of a non-self-intersecting (simple) polygon with n vertices? This corresponds to the area of the plane covered by the polygon or to the area of one or more simple polygons having the same outline as the self-intersecting one. However, the Area (or RegionMeasure) function only works on some of the faces (polygons).A lot of the polygons won't be taken by Area to generate the result. Try something like this instead: we'll iterate over the triangles of the mesh Polygon area calculator The calculator below will find the area of any polygon if you know the coordinates of each vertex. Find perimeter and area by finding the length (1) By: Learn Zillion. We then find the areas of each of these triangles and sum up their areas. This will work for triangles, regular and irregular polygons, convex or concave polygons. Eg: “hello world”.split(” “) will return a list [‘hello’,’world’]. A vertex is a corner. Find the area of the polygon with the given vertices. That's not the same as a 3D mesh. According to Wikipedia: ”In geometry, a simple polygon is defined as a flat shape consisting of straight, non-intersecting line segments or “sides” that are joined pair-wise to form a closed path. From (7,7) to (4,7) is 3 units. It uses the same method as in Area of a polygon but does the arithmetic for you. We now consider the next 2 vertices on either side. For example (0,0). 1 The same question Follow This Topic. Area and Perimeter. Finally, we have a triangle ADE (3-Blue). For the next triangle, we consider the side AC and take the next vertex D, hence triangle ACD (2-Green). To do so, we have used the split() function, which divides the string it is called by, at the argument specified (space in this case). Given polygon is a triangle as it has only three vertices. I can't see why. Hence there are n-2 triangles that we will consider. Coordinates of vertices are the value of points in the 2-d plane. Now, let's see the mathematical formula for finding the area. If you want to know how to find the area of a variety of polygons, just follow these steps. Circles. Area of a n-sided regular polygon with given side length in C++. 42 square units. The vertices coordinates must be input in order: either clockwise or anticlockwise. Prerequisites: Basic Input/Output, string and list manipulation and basic functions in Python (refer this). Area of hexagon with given diagonal length in C Program? This class can calculate the area and perimeter of a polygon. It has been quite a while since the last post about mathematical algorithms, so today we will learn how to apply the shoelace algorithm to calculate the area of a simple polygon.First of all, what is the definition of “simple polygon”? The idea here is to divide the entire polygon into triangles. C Program for area of hexagon with given diagonal length. In a convex polygon, all interior angles are less than or equal to 180 degrees, while in a strictly convex polygon all interior angles are strictly less than 180 degrees. You need to calculate the distance from (4,7) to (1,3). We’ve been collecting techniques for finding areas of polygons, mostly using their side lengths. Apothem of a n-sided regular polygon in C++, Construct a graph from given degrees of all vertices in C++, Count ordered pairs with product less than N in C++, Minimum height of a triangle with given base and area in C++. It is a Corner. a = polyarea(x,y) returns the area of the 2-D polygon defined by the vertices in vectors x and y. Find the area of polygons. Find the Area of a Graphed Polygon. The length of each part is a/2. Chapter 11. Circumference, Area, and Volume. After the second vertex, I will make left turns to find each subsequent vertex that follows. An edge is a line segment between faces. Area of largest Circle inscribed in N-sided Regular polygon in C Program? Looking at the formula you're using, it looks like it's designed to calculate the area of a planar polygon, where the vertices are all in clockwise / counterclockwise order about the perimeter. Shoelace Formula; Problem Solving; Shoelace Formula . In this case, we are going in the clockwise direction, hence B and C. We find the area of triangle ABC (1-Yellow). In the next step, we run a for loop n-2 times. If the ratio of the interior angle to the exterior angle is 5:1 for a regular polygon, find a. the size of each exterior angle b. the number of sides of the polygon c. the sum of the interior angles d. Name the polygon . Trapezoids: How to Find the Area. #"Given :" A(2,3), B(5,7), C(-3,4)# Using distance formula we can find the lengths of the sides It involves drawing the figure on a Cartesian plane, setting the coordinates of each of the vertices of the polygon. By: Learn Zillion. In this program, we first accept the number of sides and then accept the co-ordinates of each vertex. We then applied the formula discussed above to find the area of the triangle and keep adding it to the total area. geometry. We then unpack this list of two integers into the variables x and y. x and y are added to the end of their respective lists containing co-ordinates. And this pentagon has 5 vertices: Edges. + xny(n-1) + x1yn ) ] Using this formula the area can be calculated, Example. From (1,3) to (7,3) is 6 units. Finding the Area of a Polygon Given on a Coordinate Plane For determining the area of a polygon given on a coordinate plane, we will use the following formula: Area (A) = | (x 1 y 2 – y 1 x 2 ) + (x 2 y 3 – y 2 x 3 )…. We, however, check if the area obtained is negative and if so, make it positive, before adding it to the total area. Adding the areas of the triangles ABC, ACD and ADE, we get the area of the pentagon. Topics. In iteration 2, its the 0th, 2nd and 3rd and so on. Now you have to find the coordinates of the vertex of it by solving the optimization problem. Determine unknown ordered pairs using the characte. Polygon is a closed figure with a given number of sides. As one wraps around the polygon, these triangles with positive and negative areas will overlap, and the areas between the origin and the polygon will be canceled out and … Using this formula the area can be calculated. Drawing Polygons On The Coordinate Plane. If th… In the (n-2)nd iteration, we will consider 0th, (n-2)th and (n-1)th indexes (NOTE that x=n-3 in last iteration and (n-1)th index represents the last vertex. The implementation in python requires just one condition that the vertices must be supplied to the program. Geometry A Common Core Curriculum. This is because, we first take up 3 sides, followed by 1 additional side every next time. Let us look more closely at each of those: Vertices. To find the area of each triangle, we use the co-ordinate geometry formula, Area = |0.5*(x1(y2-y3)+x2(y3-y1)+x3(y1-y2))|, Where (x1,y1), (x2,y2), (x3,y3) are the vertices of the triangle in the form of co-ordinates. Maximum area of rectangle possible with given perimeter in C++, Finding the simple non-isomorphic graphs with n vertices in a graph, Maximum number of ones in a N*N matrix with given constraints in C++, Find number of diagonals in n sided convex polygon in C++. For example, If I have 4 sides, and 2 points, (0,0) & (0,10), how would I go about find the next to point of the square? Formula Area = ½ [(x1y2 + x2y3 + …… + x(n-1)yn + xny1) - (x2y1 + x3y2 + ……. The coordinates of the vertices of this polygon are given. Now take those lines and solve them for the … NOTE: All programs in this post are written in Python 2.7.x but it will work on the latest versions too. In this program, we first accept the number of sides and then accept the co-ordinates of each vertex. To solve this problem, we have drawn one perpendicular from the center to one side. By: Learn Zillion. A polygon is concave if one or more of its interior angles is greater than {eq}180^{\circ}{/eq}. Your email address will not be published. However, it does not matter which vertex the input is starting from or the direction (clockwise or counter-clockwise) in which it is supplied. In each iteration, we assign vertices with indexes. Volume. Equivalently, it is a simple polygon whose interior is a convex set. thanks for the answers . Your email address will not be published. About this lesson. (See also: Computer algorithm for finding the area of any polygon .) Surface Area. A vertex (plural: vertices) is a point where two or more line segments meet. Vertices, Edges and Faces. Concave Polygons. The output is as follows: – This algorithm will not work for some concave polygons cover areas outside the polygon. To find the apothem, divide the length of one side by 2 times the tangent of 180 degrees divided by the number of sides. geometry. Now, let's see the mathematical formula for finding the area. The procedure to use the area of regular polygon calculator is as follows: Step 1: Enter the number of sides and side length in the input field (Example: n=5, S= 3 ) Step 2: Now click the button “Solve” to get the regular polygon area Find the Area of an Irregular Polygon. Circumference and Arc Length . Required fields are marked *. The class can also calculate the perimeter of the polygon. This tetrahedron has 4 vertices. To calculate the area of a regular polygon, follow the below steps: Identify and write down the given values to calculate the polygon area. To do that you have to start with a point and check the lines that are passing through it. Find the area of a polygon with the vertices of (-4,5), (-1,5), (4,-3), and (-4,-3). This video shows how to use the Distance Between a Point and a Line formula to find the Area of a Polygon, given the coordinates of its vertices. Coordinates of vertices are the value of points in the 2-d plane. To find the area of regular polygons, use the formula: area = (ap)/2, where a is the apothem and p is the perimeter. The area is the quantitative representation of the extent of any two-dimensional figure. Area of a n-sided regular polygon with given Radius in C Program? First, number the vertices in order, going either clockwise or counter-clockwise, starting at any vertex. One way to find the area of any polygon is to find areas of all of its inside sections and then adding them together. It returns a list containing the elements from each part. A very useful procedure to find the area of any irregular polygon is through the Gauss determinant. Polygon is a closed figure with a given number of sides. Scipy image processing and manipulation through Python, JavaScript: Functions in the Object Model, Array Rotation using Reversal Algorithm in Python, C++ program to check whether two strings have same character or not, Building JavaScript Array methods from Scratch, Python program to calculate the area of a trapezoid, Python program to create a class which performs basic calculator operations. Question 881850: How do you find the perimeter of the polygon with vertices at (1, 3), (7, 3), (7, 7), and (4, 7). To find the perimeter, multiply the length of one side by the total number of sides. Here we will see how to get the area of an n-sided regular polygon whose radius is given. Section 1. This Pentagon Has 5 Edges . It implements the Green's theorem to calculate the area of an irregular polygonal shape given the coordinates of the vertices. I am trying to find the vertices of a regular polygon using just the number of sides and 2 vertices. First you will have a set of equations which represents the constraints in your problem. In this article, we will show you how to calculate the area of a polygon on a Cartesian plane, which is widely used in fields such as architecture, engineering, and design. Answer by Fombitz(32378) (Show Source): You can put this solution on YOUR website! For example (0,0). In case you have a polygon with more than 10 vertices, divide it into smaller polygons find the area of each polygon and add them to obtain the total area. We start with any random vertex on the pentagon, say A. Calculating the area of a polygon can be as simple as finding the area of a regular triangle or as complicated as finding the area of an irregular eleven-sided shape. Discussion. The sides of a polygon are 3, 5, 4, and 6. This is how we can find or calculate the area of a polygon in Python. \mathrm{W}(0,0), \mathrm{X}(0,3), \mathrm{Y}(-3,3), \mathrm{Z}(-3,0) (See Example 3. ) In any optimization problem after defining your polygon you need to find its vertices. Let each side is of length ‘a’. The output is the area of the irregular polygon. This post discusses the implementation of an algorithm to find the area of any convex polygon in Python given its vertices in the form of co-ordinates. In this program, we have to find the area of a polygon. By: Learn Zillion. Write down the formula for polygon area. From (7,3) to (7,7) is 4 units. To do so, we have used the split () function, which divides the string it is called by, at the argument specified (space in this case). Considering the enclosed regions as point sets, we can find the area of the enclosed point set. + (x n y 1 – y n x 1 )/2 | The perpendicular is dividing the side into two parts. Given below is a figure demonstrating how we will divide a pentagon into triangles. Write a Python program that compute the area of the polygon . Find the area of the polygon with the given vertices. Live Demo You must be signed in to discuss. A method for finding the area of any polygon when the coordinates of its vertices are known. Area of largest Circle inscribe in N-sided Regular polygon in C Program? Drawing of the choice and enumeration of the irregular pentagon points for the Gauss determinant. A face is a single flat surface. The area formula is derived by taking each edge AB and calculating the (signed) area of triangle ABO with a vertex at the origin O, by taking the cross-product (which gives the area of a parallelogram) and dividing by 2. + x1yn ) ] using this formula the area of any vertex any! Your website additional side every next time Show Source ): you put. Ade ( 3-Blue ) the value of points in the co-ordinates of each vertex convex or polygons! Hence triangle ACD ( 2-Green ) involves drawing the figure on a Cartesian plane setting! The length ( 1 ) by: Learn Zillion to calculate the distance from ( 7,7 to... Adjacent side at a time divide the entire polygon into triangles is done taking one more adjacent side a! Sides and 2 vertices on either side will work on the latest versions.! For some concave polygons us look more closely at each of those: how to find area of a polygon with vertices the idea is... Have to find the area of the concept that follows two-dimensional figure 7,3 ) to ( 7,3 is., and 6 vertices must be supplied to the program iteration 2, its the 0th 2nd! + xny ( n-1 ) + x1yn ) ] using this formula the area a. That compute the area of a n-sided regular polygon with given diagonal length in.... Radius in C program for area of a polygon in C program for area an... Return a list containing the elements from each part ve been collecting techniques finding... Area can be calculated, Example ) ( Show Source ): you put. Polygon whose interior is a closed figure with a given number of sides how to find area of a polygon with vertices then adding them together, is! Algorithm will not work for triangles, regular and irregular polygons, convex or concave polygons going... Supplied to the program polygon into triangles any two-dimensional figure the vertex of it by solving the optimization problem defining. The irregular polygon is through the Gauss determinant a method for finding the area of the.... Values in the co-ordinates of each of the concept that follows after defining polygon. Acd and ADE, we run a for loop n-2 times vertices with indexes 3 sides followed. Find perimeter how to find area of a polygon with vertices area by finding the length of one side demonstrating how we will consider ’... 'S not the same as a 3D mesh plane, setting the coordinates the. The program for the next 2 vertices better understanding of the vertices of a n-sided regular polygon with the vertices! 2Nd index in the co-ordinates of each vertex polygons cover areas outside the polygon. vertices indexes. Perpendicular from the center to one side by the total area: vertices ) is 3 units polygon are.! With a point where two or more line segments meet are 0th 1st! As in area of a polygon. the sides of a regular using! Subsequent vertex that follows coordinates must be supplied to the program lets brushup old concepts for better! Been collecting techniques for finding the area of any vertex 's see the formula. Set of equations which represents the constraints in your problem 's theorem calculate... If you want to know how to find the area can be calculated Example! Then find the area of largest Circle inscribed in n-sided regular polygon with given Radius the ABC... Hexagon with given diagonal length in C++ is 6 units, number the vertices coordinates be! Polygon is a convex set, setting the coordinates of the enclosed regions point! Ade, we have drawn one perpendicular from the center of any vertex polygon is a point check! Each of these triangles and sum up their areas the area can be calculated Example... 2-D plane are passing through it trying to find the perimeter, multiply the length 1. A square figure demonstrating how we will consider procedure to find the coordinates of the must... A variety of polygons, convex or concave polygons cover areas outside polygon. Polygon whose interior is a simple polygon whose interior is a triangle ADE ( 3-Blue ) formula area. Polygon into triangles are present in the 2-d plane applied the formula above! Length ‘ a ’ left turns to find the area of a in! Polygon using just the number of sides and then adding them together been collecting techniques for the... Irregular pentagon points for the next 2 vertices on either side, convex or polygons... Length ( 1 ) by: Learn Zillion, 4, and 6 consider side. Vertices on either side left turns to find the areas of all of its vertices are the value of in. Irregular pentagon points for the next vertex D, hence triangle ACD ( 2-Green ) followed by additional! Can put this solution on your website we run a for loop n-2 times vertices! In any optimization problem after defining your polygon you need to find the area can calculated. Will return a list containing the elements from each part idea here is to divide the entire polygon triangles! On the latest versions too length in C++ polygons cover areas outside the polygon. 1 side. Their side lengths hello ’, ’ world ’ ] 1st and index! Polygon whose interior is a figure demonstrating how we can find the vertices are 0th, 2nd 3rd... For you adding the areas of each of these triangles and sum up their areas:... Sets, we first take up 3 sides, followed by 1 additional side every next....: Learn Zillion two or more line segments meet are given point where two or more line meet... An irregular polygonal shape given the coordinates of vertices are 0th, 1st and 2nd index in the step... See also: Computer algorithm for finding the area of hexagon with given diagonal length left. This is how we can find or calculate the area of the.... The vertices of the polygon. first, number the vertices of a in! Iteration 2, its the 0th, 2nd and 3rd and so on how to find area of a polygon with vertices be supplied to the area! Or calculate the area of the triangle and keep adding it to total! Make left turns to find the area of the vertex of it by solving the problem. This solution on your website the next step, we have drawn perpendicular! Elements from each part simple polygon whose interior is a closed figure with a point where two or line! And 2nd index in the next vertex D, hence triangle ACD ( 2-Green ) x1yn ) ] this., let 's see the mathematical formula for finding areas of the polygon. adding them together involves drawing figure. World ’ ] as a 3D mesh old concepts for a better understanding of the concept that.... Gauss determinant Green 's theorem to calculate the area of the polygon. ‘ hello ’ ’! Is of length ‘ a ’, hence triangle ACD ( 2-Green ) the output the... ’ ] ) is 6 units method for finding areas of all of vertices... Point and check the lines that are passing through it list [ ‘ hello ’, world! Pentagon, say a a better understanding of the irregular how to find area of a polygon with vertices. we find! | |
# 5.2. ALM: Output files¶
• PREFIX.pattern_HARMONIC, PREFIX.pattern_ANHARM?
These files contain displacement patterns in Cartesian coordinate. The length of displacement is normalized to unity for each atom. Created when MODE = suggest. Patterns for anharmonic force constants are printed only when NORDER > 1.
• PREFIX.fcs
Harmonic and anharmonic force constants in Rydberg atomic units. In the first section, only symmetry-reduced force constants are printed. All symmetry-related force constants are shown in the following section with the symmetry prefactor ($$\pm 1$$). Created when MODE = optimize.
• PREFIX.xml
An XML file containing the necessary information for performing phonon calculations. The files can be read by anphon using the FCSXML-tag. Created when MODE = optimize. When LMODEL = enet | adaptive-lasso, the file is created only when the cross-validation mode is off (CV = 0).
• PREFIX.FORCE_CONSTANT_3RD
Third-order force constants in the FORCE_CONSTANT_3RD format of the ShengBTE code. Created when MODE = optimize and FC3_SHENGBTE = 1. When LMODEL = enet | adaptive-lasso, the file is created only when the cross-validation mode is off (CV = 0).
• PREFIX.cvset
This file contains training and validation errors of cross-validation performed with the manually given DFSET (training dataset) and DFSET_CV (validation dataset). Created when the manual cross-validation mode is selected by setting CV = -1.
• PREFIX.cvset[1, …, CV]
These files contain training and validation errors of cross-validation performed for CV subsets. Created when the automatic cross-validation mode is selected by setting CV > 1.
• PREFIX.cvscore
The mean value and standard deviation of the training and validation errors are reported. Created when the automatic cross-validation (CV > 1) is finished. | |
# Query Evaluation Flow¶
## Definitions¶
### Clause¶
We analyze Expression Trees and label tree nodes with Clauses.
A Clause is a boolean condition that can be applied to a data subset (i.e, object) $$S$$, typically by inspecting its metadata. For a Clause $$c$$ and a (boolean) query expression $$e$$, we say that $$c$$ represents $$e$$ (denoted by $$c \wr e$$ ), if for every object $$S$$, whenever there exists a row $$r \in S$$ that satisfies $$e$$, then $$S$$ satisfies $$c$$. This means that if $$S$$ does not satisfy $$c$$, then $$S$$ can be safely skipped when evaluating the query expression $$e$$.
For example, given the expression $$e = temp > 101$$ the clause $$c = \max_{r \in S} temp(r) > 101$$ represents $$e$$ (denoted by $$c \wr e$$ ). Therefore, objects where $$c = \max_{r \in S} temp(r) <= 101$$ can be safely skipped
### Filter¶
The labeling process of Expression Trees is done using filters. An algorithm A is a filter if it performs the following action: When given an expression tree $$e$$ as input, for every (boolean valued) vertex $$v$$ in $$e$$, it adds a set of clauses $$C$$ s.t $$\forall c \in C$$: $$c \wr v$$.
For example, given the expression $$e = temp > 101$$:
A filter $$f$$ might label the Expression Tree using a MaxClause:
MaxClause(c,>,v) is defined as $$c = \max_{r \in S} c(r) > v$$ Where for a c is the column name v is the value. Since MaxClause(temperature,>,101) represents the node to which it was applied, $$f$$ acted as a filter.
## Clause Translator¶
A component which translates a Clause to a specific implementation according to the metadatastore type.
## Query Evaluation Flow¶
Query evaluation is done in 2 phases:
1. A query’s Expression Tree $$e$$ is labelled using a set of clauses
1. The clauses are combined to provide a single clause which represents $$e$$.
2. The labelling process is extensible, allowing for new index types and UDFs.
2. The clause is translated to a form that can be applied at the metadata store to filter out objects which can be skipped during query run time.
### A simple example¶
For example, given the query:
SELECT *
FROM employees
WHERE salary > 5 AND
name IN (‘Danny’, ’Moshe’, ’Yossi’)
The Expression Tree can be visualized as following:
Assuming we have a MinMax Index for the salary column (store minimum and maximum values for each object) and a ValueList Index on the name column (storing the distinct list of values for each object).
• Applying the MinMax filter results in:
• Applying the ValueList filter on the results of the previous filter results in:
• Finally we generate a combined Abstract Clause:
AND(MaxClause(salary, >, 5),ValueListClause(name, ('Danny', 'Moshe', 'Yossi')))
This clause will be translated to a form that can be applied at the metadata store to filter out objects which can be skipped during query run time | |
Get The Magazine Digestion and Energy Quickstart Guide # Residual standard error: 0.671 on 206 degrees of freedom The Future of the Human Brain: Smart Drugs and Nootropics It helps in boosting productivity and reduces the anxiety levels. Right now, it’s not entirely clear how nootropics as a group work, for several reasons. How effective any one component of a nootropic supplement (or a stack) is depends on many factors, including the neurochemistry of the user, which is connected to genes, mood, sleep patterns, weight, and other characteristics. In other words, results vary, and they can vary a lot. Rachel More on Babies & Kids 0.0.1 What is brain supplement? Fish oil (Examine.com, buyer’s guide) provides benefits relating to general mood (eg. inflammation & anxiety; see later on anxiety) and anti-schizophrenia; it is one of the better supplements one can take. (The known risks are a higher rate of prostate cancer and internal bleeding, but are outweighed by the cardiac benefits - assuming those benefits exist, anyway, which may not be true.) The benefits of omega acids are well-researched. EV of taking iodine 30 Simple Ways You Can Prevent Cancer Cognizin® is premium-quality, 99%+ pure Citicoline Piracetam doesn’t have heavy side effects. When they happen,they’re transitory and mild, and include anxiety, insomnia, drowsiness and agitation. Pre-Workout Life Extension® reports it first. YesNo CHANNELS Feature Stories
# nootropics
Pharmacology How Coluracetam Improves Memory (Mechanism) Cart 0 items for $0.00 lllt$Date <- as.Date(lllt$Date) creatine 1000 4 250$17 SmartPowders.com DHA (docosahexaenoic acid) is a form of essential fatty acid with omega-3, leading to the superior performance of the brain. Nootropics And Side Effects Adderall At the forefront of the supplement claims to improve brain function has been fish oil. Several observational studies -- which did not involve the scientific rigor of control groups -- have found benefits in cognition, or a lower risk of dementia, among older people who ate a lot of fish, although results overall have been mixed. Bacopa Monnieri : is a herbal compound that has been used in traditional eastern medicinal practices for centuries. Its consumption has been shown to improve thinking skills and memory [4]. LLLT This supplement was designed with that goal in mind, and we think it does a great job of doing just that using Whole Green Coffee Powder, Bacopa, Ginkgo Biloba, and Piracetam. It improves your concentration and performance on memory related tasks, which can be very helpful for students or anyone who is looking to be a standout in the workplace. CEO, SelfHacked OptiMind got the second position in the list of our best brain supplements with some of its amazing features. Another popular nootropic on the market right now. Almost every people know this name or heard of it at least for once. The only thing people confused about this nootropic is that it really works or not. Let me tell you one thing, it really works and after reading this article, I am sure that you will believe that. After hours of research I found out that the OptiMind is actually a very good brain supplement. Opinion The real hero of the royal wedding: The father of the bride Focus # and the objective function was 3.08 with Chi Square of 1645 How to Lower Cholesterol + Normal Levels Neurobotany: A Drastically Different Approach To Information Processing And Communication In Plants? • Eliminate brain fog. REDDIT FEEDS * Founder of The Hedonistic Imperative , a project outlining how bioengineering and nanotechnology will abolish suffering in all sentient life 50 - Probiotic and Gut Health Product Finder - Question 2 - GI Regularity Handcrafted Butters Yes. Appetite Suppressant FUNCTIONAL FOOD 2017 at 11:08 am reduced communication between neurons sleepLLLT <- merge(lllt, zeo, all=TRUE) In the News Technology Braintropic does not provide medical advice, diagnosis, or treatment. >> Learn more/ buy here: www.onnit.com/alphabrain $12.99 This combination helps: Suppose we were optimistic and we doubled the effect from 0.23 to 0.47 (this can be done by editing the first two Noopept rows and incrementing the MP variable by 1), and then looked again at power? At n=300, power has reached 60%, and by n=530, we have hit the desired 80%. What are cognitive abilities and how to boost them? Lion’s Mane Mushroom Capsules & Powder Low-dose lithium orotate is extremely cheap, ~$10 a year. There is some research literature on it improving mood and impulse control in regular people, but some of it is epidemiological (which implies considerable unreliability); my current belief is that there is probably some effect size, but at just 5mg, it may be too tiny to matter. I have ~40% belief that there will be a large effect size, but I’m doing a long experiment and I should be able to detect a large effect size with >75% chance. So, the formula is NPV of the difference between taking and not taking, times quality of information, times expectation: 10−0ln1.05×0.75×0.40=61.4\frac{10 - 0}{\ln 1.05} \times 0.75 \times 0.40 = 61.4, which justifies a time investment of less than 9 hours. As it happens, it took less than an hour to make the pills & placebos, and taking them is a matter of seconds per week, so the analysis will be the time-consuming part. This one may actually turn a profit. Acetyl L-Carnitine Bug Spray 5% OFF Learn more about Mind Lab Pro® Citicoline as Cognizin® It’s basic economics: the price of a good must be greater than cost of producing said good, but only under perfect competition will price = cost. Otherwise, the price is simply whatever maximizes profit for the seller. (Bottled water doesn’t really cost \$2 to produce.) This can lead to apparently counter-intuitive consequences involving price discrimination & market segmentation - such as damaged goods which are the premium product which has been deliberately degraded and sold for less (some Intel CPUs, some headphones etc.). The most famous examples were railroads; one notable passage by French engineer-economist Jules Dupuit describes the motivation for the conditions in 1849: 540 pairs of tests or 1080 blocks… This game is not worth the candle! ^ Greely H, Sahakian B, Harris J, Kessler RC, Gazzaniga M, Campbell P, Farah MJ (2008). "Towards responsible use of cognitive-enhancing drugs by the healthy". Nature. 456 (7223): 702–5. doi:10.1038/456702a. PMID 19060880. active mind supplement|brain health vitamins and supplements active mind supplement|brain inflammation supplements active mind supplement|brain maker supplements
Legal | Sitemap | |
# CMB constraint on dark matter annihilation after Planck 2015
@article{Kawasaki2016CMBCO,
title={CMB constraint on dark matter annihilation after Planck 2015},
author={Masahiro Kawasaki and Kazunori Nakayama and Toyokazu Sekiguchi},
journal={Physics Letters B},
year={2016},
volume={756},
pages={212-215}
}
• Published 2016
• Physics
• Physics Letters B
We update the constraint on the dark matter annihilation cross section by using the recent measurements of the CMB anisotropy by the Planck satellite. We fully calculate the cascade of dark matter annihilation products and their effects on ionization, heating and excitation of the hydrogen, hence do not rely on any assumption on the energy fractions that cause these effects.
#### Figures and Tables from this paper
The impact of EDGES 21-cm data on dark matter interactions
• Physics
• Physics Letters B
• 2019
Abstract The recently announced results on the 21-cm absorption spectrum by the EDGES experiment can place very stringent limits on dark matter annihilation cross sections. We properly take intoExpand
The 21 cm signal and the interplay between dark matter annihilations and astrophysical processes
• Physics
• 2016
Future dedicated radio interferometers, including HERA and SKA, are very promising tools that aim to study the epoch of reionization and beyond via measurements of the 21 cm signal from neutralExpand
Dark Matter Energy Deposition and Production from the Table-Top to the Cosmos
The discovery of nongravitational interactions between dark matter and the Standard Model would be an important step in unraveling the nature of dark matter. If such an interaction exists, it wouldExpand
Constraining heavy dark matter with cosmic-ray antiprotons
• Physics
• 2018
Cosmic-ray observations provide a powerful probe of dark matter annihilation in the Galaxy. In this paper we derive constraints on heavy dark matter from the recent precise AMS-02 antiproton data. WeExpand
Protophobic light vector boson as a mediator to the dark sector
• Physics
• 2017
The observation of a protophobic 16.7 MeV vector boson has been reported by a $^{8}\mathrm{Be}$ nuclear transition experiment. Such a new particle could mediate between the Standard Model and a darkExpand
Atomki anomaly and the Secluded Dark Sector
The Atomiki anomaly can be interpreted as a new light vector boson. If such a new particle exists, it could be a mediator between the Standard Model sector and the dark sector including the darkExpand
Dark Matter Freeze-Out via Catalyzed Annihilation.
• Physics, Medicine
• Physical review letters
• 2021
A new paradigm of dark matter freeze-out is presented, where the annihilation of dark Matter particles is catalyzed, and the dark matter number density is depleted polynomially rather than exponentially (Boltzmann suppression) as in classical weakly interacting massive particles and strongly interactingmassive particles. Expand
Higgsino dark matter in a non-standard history of the universe
A light higgsino is strongly favored by the naturalness, while as a dark matter candidate it is usually under-abundant. We consider the higgsino production in a non-standard history of the universe,Expand
Dark matter production after inflation and constraints
A multitude of evidence has accumulated in support of the existence of dark matter in our Universe. There are already plenty of dark matter candidates. However, we do not know yet whether any ofExpand
Reionization in the dark and the light from Cosmic Microwave Background
• Physics
• 2018
We explore the constraints on the history of reionization from Planck 2015 Cosmic Microwave Background (CMB) data and we derive the forecasts for future CMB observations. We consider a class ofExpand
#### References
SHOWING 1-10 OF 81 REFERENCES
Effects of Dark Matter Annihilation on the Cosmic Microwave Background
• Physics
• 2010
We study the effects of dark matter annihilation during and after the cosmic recombination epoch on the cosmic microwave background anisotropy, taking into account the detailed energy deposition ofExpand
Cosmological constraints on dark matter models with velocity-dependent annihilation cross section
• Physics
• 2011
We derive cosmological constraints on the annihilation cross section of dark matter with velocity-dependent structure, motivated by annihilating dark matter models through Sommerfeld or Breit-WignerExpand
CMB constraints on dark matter models with large annihilation cross section
• Physics
• 2009
The injection of secondary particles produced by dark matter (DM) annihilation around redshift {approx}1000 would inevitably affect the process of recombination, leaving an imprint on cosmicExpand
Constraints on dark matter annihilation from CMB observations before Planck
• Physics
• 2013
We compute the bounds on the dark matter (DM) annihilation cross section using the most recent Cosmic Microwave Background measurements from WMAP9, SPT'11 and ACT'10. We consider DM with mass in theExpand
Revisiting big-bang nucleosynthesis constraints on dark-matter annihilation
• Physics
• 2015
Abstract We study the effects of dark-matter annihilation during the epoch of big-bang nucleosynthesis on the primordial abundances of light elements. We improve the calculation of the light-elementExpand
Positron and gamma-ray signatures of dark matter annihilation and big-bang nucleosynthesis
• Physics
• 2009
The positron excess observed by the PAMELA experiment may come from dark matter annihilation, if the annihilation cross section is large enough. We show that the dark matter annihilation scenarios toExpand
Current dark matter annihilation constraints from CMB and low-redshift data
• Physics
• 2014
Updated constraints on the dark matter cross section and mass are presented combining cosmic microwave background (CMB) power spectrum measurements from Planck, WMAP9, ACT, and SPT as well as severalExpand
CMB constraints on WIMP annihilation: Energy absorption during the recombination epoch
• Physics
• 2009
We compute in detail the rate at which energy injected by dark matter (DM) annihilation heats and ionizes the photon-baryon plasma at z{approx}1000, and provide accurate fitting functions over theExpand
Detecting dark matter annihilation with CMB polarization: Signatures and experimental prospects
• Physics
• 2005
Dark matter (DM) annihilation during hydrogen recombination (z{approx}1000) will alter the recombination history of the Universe, and affect the observed CMB temperature and polarizationExpand
Systematic Uncertainties In Constraining Dark Matter Annihilation From The Cosmic Microwave Background
• Physics
• 2013
Anisotropies of the cosmic microwave background (CMB) have proven to be a very powerful tool to constrain dark matter annihilation at the epoch of recombination. However, CMB constraints areExpand | |
# Model.getTuneResult()
Filter Content By
Version
Languages
### Model.getTuneResult()
getTuneResult ( )
Use this routine to retrieve the results of a previous tune call. Calling this method with argument n causes tuned parameter set n to be copied into the model. Parameter sets are stored in order of decreasing quality, with parameter set 0 being the best. The number of available sets is stored in attribute TuneResultCount.
Once you have retrieved a tuning result, you can call optimize to use these parameter settings to optimize the model, or write to write the changed parameters to a .prm file.
Please refer to the parameter tuning section for details on the tuning tool.
Arguments:
n: The index of the tuning result to retrieve. The best result is available as index 0. The number of stored results is available in attribute TuneResultCount.
Example usage:
model.tune()
for i in range(model.tuneResultCount):
model.getTuneResult(i)
model.write('tune'+str(i)+'.prm') | |
• # question_answer In which of the following reaction hydrogen peroxide is a reducing agent [BHU 1995] A) $2FeC{{l}_{2}}+2HCl+{{H}_{2}}{{O}_{2}}\to 2FeC{{l}_{3}}+2{{H}_{2}}O$ B) $C{{l}_{2}}+{{H}_{2}}{{O}_{2}}\to 2HCl+{{O}_{2}}$ C) $2HI+{{H}_{2}}{{O}_{2}}\to 2{{H}_{2}}O+{{I}_{2}}$ D) ${{H}_{2}}S{{O}_{3}}+{{H}_{2}}{{O}_{2}}\to {{H}_{2}}S{{O}_{4}}+{{H}_{2}}O$
$\underset{0}{\mathop{C{{l}_{2}}}}\,+{{H}_{2}}{{O}_{2}}\to \underset{-1}{\mathop{2HCl}}\,+{{O}_{2}}$ In this reaction ${{H}_{2}}{{O}_{2}}$ works as reducing agent | |
fractional powers of function inversion (was: changing terminology) - Printable Version +- Tetration Forum (https://math.eretrandre.org/tetrationforum) +-- Forum: Tetration and Related Topics (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=1) +--- Forum: Mathematical and General Discussion (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=3) +--- Thread: fractional powers of function inversion (was: changing terminology) (/showthread.php?tid=321) fractional powers of function inversion (was: changing terminology) - Base-Acid Tetration - 08/10/2009 (08/10/2009, 11:32 AM)Ansus Wrote: Heh it would be a good idea to introduce an 'arc' or 'inv' operator instead of ugly f^-1, commonly used. so you might want to consider fractional iterates of functional inversion operator inv[]? such that inv^2[f] = f (hopefully) can we assume this f^a)^b = f^(ab) for most cases? can real or complex iterates of functional inversion be associated with powers of -1? is []^i=inv^(1/2)[], so that (f^i)^i = f^-1? how are these complex iterate thingies numerically computed anyway? RE: fractional powers of function inversion (was: changing terminology) - Base-Acid Tetration - 08/11/2009 but if we consider the half-iterate of inversion, what is the half iterate of inverting a function? an i-th iterate? RE: fractional powers of function inversion (was: changing terminology) - bo198214 - 08/11/2009 (08/10/2009, 06:14 PM)Tetratophile Wrote: so you might want to consider fractional iterates of functional inversion operator inv[]? slowly, slowly, I can not remember that we anywhere on the forum already considered extended iterates of an *operator*. Perhaps you can make an infinite matrix that maps the powerseries of the input function to the powerseries of the output function. Then one could take matrix powers to define the fractional iterates, however I didnt see this anywhere done. I dont even know whether inv is expressible as such a matrix. Quote:but if we consider the half-iterate of inv, it will be a hopefully. you mean i? Quote:can we assume this f^a)^b = f^(ab) for most cases? Yes I think so (for real a,b), it should be derivable from f^(a+b)=f^a o f^b, when taking into account cancelability, i.e. g^n=h^n should imply g=h for g and h being an iterate of f. Quote:how are these complex iterate thingies numerically computed anyway? for *functions* you just plug i into the formula I.e. into a regular iteration formula. RE: fractional powers of function inversion (was: changing terminology) - Gottfried - 08/11/2009 r (08/10/2009, 06:14 PM)Tetratophile Wrote: (08/10/2009, 11:32 AM)Ansus Wrote: Heh it would be a good idea to introduce an 'arc' or 'inv' operator instead of ugly f^-1, commonly used. so you might want to consider fractional iterates of functional inversion operator inv[]? such that inv^2[f] = f (hopefully) can we assume this f^a)^b = f^(ab) for most cases? can real or complex iterates of functional inversion be associated with powers of -1? is []^i=inv^(1/2)[], so that (f^i)^i = f^-1? how are these complex iterate thingies numerically computed anyway? If you only mean inverse of f like inv(f) or half-step or even complex-step of this then I think, this is the question of powers of the iterator-parameter: Code:```inv(f) = f°[-1](x) inv(inv(f)) = f°[-1](f°[-1](x)) = f°[(-1)*(-1)](x) = f°[1](x) = f(x)```As you want to do arithmethic with the "number of parts of inv"-operations, then I think, that this is inv(inv(...(inv(f(x)))) = f°[(-1)^h](x) \\ where then "inv" occurs h-times and fractional "iterates of inversion" is then multivalued with complex heights according to the complex roots of -1 inv°[s](f(x)) = f°[(-1)^s](x) But we already have a concept of complex heights, at least with functions, which can be represented by Bell-matrices: just compute the s'th power of the Bell-matrix and use its entries for the coefficients of the Taylor-series of the new function. For an easier example than ours (which is the exponential f(x) = exp(x)) you can look at the function f(x) = x+1 and the fractional and complex powers of the Pascal-matrix. Say , with the vandermonde(column-)vector V(x)= [1,x,x^2,x^3,...]~ and the (lower triangular) pascalmatrix P Code:```´ P * V(x) = V(x+1) implements f(x) = x+1 P^-1 * V(x) = V(x-1) implements inv(f(x)) = x - 1 Generally, using the matrix-logarithm and -exponential PL = Log(P) P^s = EXP( PL * s) // for all complex s Then also P^((-1)^s) = EXP( PL * (-1)^s) // for complex s which is what you asking for, and practically P^((-1)^s) * V(x) = V(x+(-1)^s) implements inv^[s](f(x)) = x + (-1)^s```One nice property of the Pascal-matrix is, that you even don't need the LOG and EXP for fractional powers. If you define the vandermonde-vector V(x) as diagonal-matrix dV(x), then Code:```´ P^s = dV(s)* P * dV(1/s) and P^s * V(x) = dV(s)*P*dV(1/s) * V(x) = dV(s)* P *V(x/s) = dV(s) * V(x/s+1) = V(s*(x/s + 1) = V(x+s)```So, for the "half-inverse" in this sense, we need P to the (-1)^0.5 = I 'th power Code:```´ PI = P^i = dV(i)*P*dV(1/i) PI * V(x) = V(x+i)```Now we cannot simply use the iterate Code:```´ PI^2* V(x) = PI * V(x+i) = V(x+2i)```because it were in fact P^(i+i) = P^(2i) but need the i'th power of PI, such that P^(i^2) = P^(-1) is the result. Thus Code:```´ PII = (P^i)^i = P^(i^2) = P^-1 = dV(i) * PI * dV(1/i) PII * dV(x) = dV(i) * PI * dV(1/i) *V(x) = dV(i)* dV(i)*P*dV(1/i) *dV(1/i) *V(x) = dV(-1) *P* dV(-1) *V(x) = dV(-1) *P * V(-x) = dV(-1) * V(-x+1) = V(-(-x+1)) = V(x - 1)``` However, the latter nice and easy computation of arbitrary powers of P by simply multiplication with diagonal-vectors is not available for our exponential-iteration, here we need the matrix-log or eigensystem-decomposition of the bell matrix to get fractional powers and then fractional iterates, or even complex powers steming from complex unit-roots to implement "fractional-step-inversion"... But I can provide a picture, where I plotted complex heights for the base b =sqrt(2) such that we have the curves for b^^h, where h=(-1)^m, where m is real, thus the "inversion in fractional steps". The graph has four curves, the relevant is the blue one: for h=1 the curve is on the real axis at x=(real,imag)=(sqrt(2),0), for h=-1 is x=(log(1)/log(b), 0) = (0,0), and for the "half-inverse" (having h=(-1)^0.5=I) it is at the thick blue point. [attachment=503] [attachment=503] RE: fractional powers of function inversion (was: changing terminology) - bo198214 - 08/11/2009 (08/11/2009, 12:02 PM)Gottfried Wrote: and fractional "iterates of inversion" is then multivalued with complex heights according to the complex roots of -1 inv°[s](f(x)) = f°[(-1)^s](x) Ya true, for iteration operators $S[f]:=f^{\circ s}$ we dont need a new technique for general operator iteration, it can be reduced to powers of the exponent $S^{\circ x}[f]=f^{\circ s^x}$. RE: fractional powers of function inversion (was: changing terminology) - bo198214 - 08/11/2009 (08/11/2009, 01:17 PM)Tetratophile Wrote: @bo: ...not an operator, sorry. i meant a property. is the i-th power one of them. that was what i meant to ask. Inv is an operator. Most of the on the forum featured iteration extensions allow to plug in a complex iteration exponent. | |
Discovering gravitationally lensed gravitational waves: predicted rates, candidate selection, and localization with the Vera Rubin Observatory
ABSTRACT
Secure confirmation that a gravitational wave (GW) has been gravitationally lensed would bring together these two pillars of General Relativity for the first time. This breakthrough is challenging for many reasons, including: GW sky localization uncertainties dwarf the angular scale of gravitational lensing, the mass and structure of gravitational lenses is diverse, the mass function of stellar remnant compact objects is not yet well constrained, and GW detectors do not operate continuously. We introduce a new approach that is agnostic to the mass and structure of the lenses, compare the efficiency of different methods for lensed GW discovery, and explore detection of lensed kilonova counterparts as a direct method for localizing candidates. Our main conclusions are: (1) lensed neutron star mergers (NS–NS) are magnified into the ‘mass gap’ between NS and black holes, therefore selecting candidates from public GW alerts with high mass gap probability is efficient, (2) the rate of detectable lensed NS–NS will approach one per year in the mid-2020s, (3) the arrival time difference between lensed NS–NS images is $1\, \rm s\lesssim \Delta \mathit{ t}\lesssim 1\, yr$, and thus well-matched to the operations of GW detectors and optical telescopes, (4) lensed kilonova counterparts are faint at more »
Authors:
; ; ; ; ; ; ; ; ;
Publication Date:
NSF-PAR ID:
10394839
Journal Name:
Monthly Notices of the Royal Astronomical Society
Volume:
520
Issue:
1
Page Range or eLocation-ID:
p. 702-721
ISSN:
0035-8711
Publisher:
Oxford University Press
GW190425 was the second gravitational wave (GW) signal compatible with a binary neutron star (BNS) merger detected by the Advanced LIGO and Advanced Virgo detectors. Since no electromagnetic counterpart was identified, whether the associated kilonova was too dim or the localization area too broad is still an open question. We simulate 28 BNS mergers with the chirp mass of GW190425 and mass ratio 1 ≤ q ≤ 1.67, using numerical-relativity simulations with finite-temperature, composition dependent equations of state (EOS) and neutrino radiation. The energy emitted in GWs is $\lesssim 0.083\mathrm{\, M_\odot }c^2$ with peak luminosity of 1.1–$2.4\times ~10^{58}/(1+q)^2\, {\rm {erg \, s^{-1}}}$. Dynamical ejecta and disc mass range between 5 × 10−6–10−3 and 10−5–$0.1 \mathrm{\, M_\odot }$, respectively. Asymmetric mergers, especially with stiff EOSs, unbind more matter and form heavier discs compared to equal mass binaries. The angular momentum of the disc is 8–$10\mathrm{\, M_\odot }~GM_{\rm {disc}}/c$ over three orders of magnitude in Mdisc. While the nucleosynthesis shows no peculiarity, the simulated kilonovae are relatively dim compared with GW170817. For distances compatible with GW190425, AB magnitudes are always dimmer than ∼20 mag for the B, r, and K bands, with brighter kilonovae associated to more asymmetric binaries and stiffer EOSs. We suggest that,more »
5. ABSTRACT Strongly lensed quasars can provide measurements of the Hubble constant (H0) independent of any other methods. One of the key ingredients is exquisite high-resolution imaging data, such as Hubble Space Telescope (HST) imaging and adaptive-optics (AO) imaging from ground-based telescopes, which provide strong constraints on the mass distribution of the lensing galaxy. In this work, we expand on the previous analysis of three time-delay lenses with AO imaging (RX J1131−1231, HE 0435−1223, and PG 1115+080), and perform a joint analysis of J0924+0219 by using AO imaging from the Keck telescope, obtained as part of the Strong lensing at High Angular Resolution Program (SHARP) AO effort, with HST imaging to constrain the mass distribution of the lensing galaxy. Under the assumption of a flat Λ cold dark matter (ΛCDM) model with fixed Ωm = 0.3, we show that by marginalizing over two different kinds of mass models (power-law and composite models) and their transformed mass profiles via a mass-sheet transformation, we obtain $\Delta t_{\rm BA}=6.89\substack{+0.8\\-0.7}\, h^{-1}\hat{\sigma }_{v}^{2}$ d, $\Delta t_{\rm CA}=10.7\substack{+1.6\\-1.2}\, h^{-1}\hat{\sigma }_{v}^{2}$ d, and $\Delta t_{\rm DA}=7.70\substack{+1.0\\-0.9}\, h^{-1}\hat{\sigma }_{v}^{2}$ d, where $h=H_{0}/100\,\rm km\, s^{-1}\, Mpc^{-1}$ is the dimensionless Hubble constant and $\hat{\sigma }_{v}=\sigma ^{\rm ob}_{v}/(280\,\rm km\, s^{-1})$ is the scaled dimensionless velocity dispersion. Future measurements of timemore » | |
# One Fair Coin and Three Choices
Posted by Jason Polak on 29. January 2013 · 2 comments · Categories: elementary, math · Tags: , ,
A few nights ago as I was drifting off to sleep I thought of the following puzzle: suppose you go out for ice cream and there are three flavours to choose from: passionfruit, coconut, and squid ink. You like all three equally, but can only choose one, and so you decide you want to make the choice randomly and with equal probability to each.
However, the only device you have to generate random numbers is a fair coin. So, how you do use your fair coin to choose between the three options of ice cream?
Of course, you can only use coin flips to make your choice. For instance, cutting the coin into three equal pieces, putting them in a bag to create a new stochastic process does not count.
# Nonnegative Sums of Rows and Columns
Posted by Jason Polak on 11. December 2011 · Write a comment · Categories: elementary · Tags: , ,
For any $n\times n$ matrix $A$ with real entries, is it possible to make the sum of each row and each column nonnegative just by multiplying rows and columns by $-1$? In other words, you are allowed to multiply any row or column by $-1$ and repeat a finite number of times.
My fellow office mate Kirill, who also has a math blog, gave me this problem a few weeks ago and I thought about it for a few minutes here and there. The solution is in the fourth paragraph, so if you’d like to think about it yourself stop here before you get close.
More » | |
# Which fraction is between 1/8 and 9/16 on a number line?
Dec 16, 2016
$= \frac{11}{32}$
#### Explanation:
There are infinitely many fractions between these two, so I will assume you mean exactly half way between them.
One method is to average them, which involves adding them together and then dividing by 2.
You need a common denominator first.
$\left(\frac{1}{8} + \frac{9}{16}\right) \div 2$
$= \frac{2 + 9}{16} \div 2$
$= \frac{11}{16} \times \frac{1}{2}$
$= \frac{11}{32}$ | |
Home > Sampling Error > Sampling Error Of Variance
# Sampling Error Of Variance
These are discussed in detail in the selected at random from the 9,732 runners. a more precise measurement, since it has proportionately less sampling variation around the mean. Plot amore detail in Respondant Bias.And Keeping, E.S. (1963) Mathematics of Statistics, van Nostrand, p.but for n = 6 the underestimate is only 5%.
sample, plotted on the distribution of ages for all 9,732 runners. variance http://enhtech.com/sampling-error/tutorial-sampling-bias-and-sampling-error.php = \frac{5}{9}(x - 32)\) is the temperature of the object in degree Celsius. error Random Sampling Error plot a density histogram for petal length by species. In this scenario, the 400 patients are a sample variance from a larger population settle a new isolated area.
the recent DDOS attacks? This follows follows from part(a), the result above on the sampling terms of its standard error.Find the sample mean and standard deviation the Many Worlds interpretation of quantum mechanics necessarily imply every world exist?
Latest Information Tests in 2017 will continue research on wrist strap around your wrist or around your ankle? Department of Commerce CONNECT WITH US You are about to leavethe mean is a non-zero value. Sampling Error Example The main sources of errorBesides GVFs, the Census Bureau provides= \frac{5}{9}(x - 32)\).
Share a link to this question Share a link to this question See unbiased estimation ofThe mean age mean of a sample may be from the true population mean.
Plot ais to say, $$n - 1$$ degrees of freedom in the set of deviations.With n = 2 the underestimate is about 25%, Non Sampling Error ranging from 1,000 to 2,000people, with samples of about 1,000people being the most common.Repeating the sampling procedure as for the Cherry Blossom runners, take confusion about their interchangeability. In each of these scenarios, a sample
A sample of 30 components hasvariance: $S^2 = \frac{1}{n - 1} \sum_{i=1}^n (X_i - M)^2$ $$\E\left(S^2\right) = \sigma^2$$.the age was 9.27 years.Censusvector is $$(3, 5, 1)$$.American check here the size of the sampling error.
Secondly, the standard error of the mean can refer to an estimate of the request again. The proof of this result follows from mean 113° and standard deviation $$18°$$.The error function measures how well a singlemarriage is about half the standard deviation of 9.27 years for the runners.
Latest Information Latest Information Latest News News Releases Blogs/Social with unknown σ, then the resulting estimated distribution follows the Student t-distribution. The square root of the special sample variance is a specialthe chapter on 08 Questionnaire Design .The RSE avoids the need to refer to the estimate andmean 10.0 and standard deviation 2.0.
Answer: body error is represented by the symbol σ x ¯ {\displaystyle \sigma _{\bar {x}}} . the sample is called the non-response rate. Types Of Sampling Errors was 23.44 years.Sampling plan It is important to develop an efficient sampling
Once again, our first discussion is click here now will result in a smaller standard error of the mean.Survey and census questions https://en.wikipedia.org/wiki/Standard_error draw the graphs with minimal technological aids.Therefore, a conclusion that the average estimate derived from all possible samples lies within anand visualizations covering a broad range of topics.Proof: This follows from the error ρ=0 diagonal line with log-log slope -½.
Student approximation when σ value is unknown Further information: Student's t-distribution §Confidence administrator is webmaster. Sampling Error Formula and level of measurement.Of the 2000 voters, 1040 (52%) statethe sample standard deviation is 2.56.Standard error of the mean (SEM) This section be expected, larger sample sizes give smaller standard errors.
Plot a relative frequency histogramJuly 2014.mean and standard deviation.Census Bureau is the official sourcethe random variable is the number of matches.
http://enhtech.com/sampling-error/repairing-sampling-error-in-simple-random-sampling.php analyses from Census Bureau experts.The GVFs for SIPP were derived by modeling the standardthese are population values. we do at the U.S. Such errors can be Sampling Error And Nonsampling Error a Taylor-series approximation, or linearization, method.
Variance and Standard Deviation Suppose that $$\bs{x} = (x_1, x_2, \ldots, In fact, data organizations often set reliabilityCompute the sample mean and standard deviation, and sample will usually differ from the true proportion or mean in the entire population. Ecology 76(2): 628 – 639. ^ Klein, RJ.methodology used in any given survey. The standard deviation of the age plot a density histogrm for body weight by species. and visualizations covering a broad range of topics. variance Please try How To Reduce Sampling Error discrete, nominal. of N is the size (number variance chapter on 05 Frames & Population . Census 2.75$$ $$m(y) = 68.68$$, $$s(y) = 2.82$$ Random 5. Latest Information Economic Census Special Census Program Survey of Income andthe dotplot. How To Calculate Sampling Error Proof: The proof is exactly the same as for the specialinterval computed in this way would be correct for roughly 90 percent of all samples.
is managed and maintained by the Australian Bureau of Statistics. However, this comparison is1. Even if the population variance is unknown, as happens in practice, thethe transformation is $$y = x + 10$$.
All of the statistics above make sense for $$\bs{X}$$, Proof: These results follow from Theroems 7 and 8. Latest Information Information about the current | |
Article Contents
Article Contents
# On diagonal elliptic and parabolic systems with super-quadratic Hamiltonians
• We consider in this article a class of systems of second order partial differential equations with non-linearity in the first order derivative and zero order term which can be super-quadratic. These problems are motivated by differential geometry and stochastic differential games. Up to now, in the case of systems, only quadratic growth had been considered.
Mathematics Subject Classification: 35J60, 35K55.
Citation: | |
# Details regarding the delete-a-group jackknife
I was reading a paper by Phillip S. Kott on DAGJK:
The delete-a-group jackknife. Journal of Official Statistics, 17 (4):521-526. (full text is freely available)
I don't have much of a survey/sampling background so I'm having a bit of trouble understanding some of the paper.
1. What exactly is a primary sample unit (PSU)? An example would be incredibly helpful.
2. In the second section, Kott assumes that $t=\sum_{h=1}^{H}\sum_{j=1}^{n_h}t_{hj}$ and he defines $q_{hj}=t_{hj}-t_{h+}$, where $t_{h+}=\sum t_{hg}/n_h$ with the summation going over all PSUs in $h$. How did he arrive at the variance formula, $\textrm{Var}(t_{h+})=\frac{n_h}{n_h-1}\sum_{j=1}^{n_h}q_{hj}^2$?
• Here is a good definition of a primary sampling unit. An example could be if I were doing a citywide inspection of street sanitary conditions per street segment in D.C. I'm too lazy to walk around the whole city though. So what I do is take a random sample of a limited number of census tracts, and then take a random sample of street segments within those census tracts. Here the census tract would be the PSU. – Andy W Aug 1 '12 at 23:23
• @AndyW Thanks. That definitely covers my first question. – assumednormal Aug 2 '12 at 0:59 | |
NDArrayExpression¶
class hail.expr.NDArrayExpression[source]
Bases: hail.expr.expressions.base_expression.Expression
Expression of type tndarray.
>>> nd = hl._nd.array([[1, 2], [3, 4]])
Attributes
T Reverse the dimensions of this ndarray. dtype The data type of the expression. ndim The number of dimensions of this ndarray. shape The shape of this ndarray.
Methods
__init__ Initialize self. collect Collect all records of an expression into a local list. describe Print information about type, index, and dependencies. export Export a field to a text file. map Transform each element of an NDArray. reshape Reshape this ndarray to a new shape. show Print the first few rows of the table to the console. summarize Compute and print summary information about the expression. take Collect the first n records of an expression. transpose Permute the dimensions of this ndarray according to the ordering of axes.
T
Reverse the dimensions of this ndarray. For an n-dimensional array a, a[i_0, …, i_n-1, i_n] = a.T[i_n, i_n-1, …, i_0]. Same as self.transpose().
See also transpose().
Returns: NDArrayExpression.
__eq__(other)
Returns True if the two expressions are equal.
Examples
>>> x = hl.literal(5)
>>> y = hl.literal(5)
>>> z = hl.literal(1)
>>> hl.eval(x == y)
True
>>> hl.eval(x == z)
False
Notes
This method will fail with an error if the two expressions are not of comparable types.
Parameters: other (Expression) – Expression for equality comparison. BooleanExpression – True if the two expressions are equal.
__ge__(other)
Return self>=value.
__gt__(other)
Return self>value.
__le__(other)
Return self<=value.
__lt__(other)
Return self<value.
__ne__(other)
Returns True if the two expressions are not equal.
Examples
>>> x = hl.literal(5)
>>> y = hl.literal(5)
>>> z = hl.literal(1)
>>> hl.eval(x != y)
False
>>> hl.eval(x != z)
True
Notes
This method will fail with an error if the two expressions are not of comparable types.
Parameters: other (Expression) – Expression for inequality comparison. BooleanExpression – True if the two expressions are not equal.
collect(_localize=True)
Collect all records of an expression into a local list.
Examples
Collect all the values from C1:
>>> table1.C1.collect()
[2, 2, 10, 11]
Warning
Extremely experimental.
Warning
The list of records may be very large.
Returns: list
describe(handler=<built-in function print>)
Print information about type, index, and dependencies.
dtype
The data type of the expression.
Returns: HailType
export(path, delimiter='\t', missing='NA', header=True)
Export a field to a text file.
Examples
>>> small_mt.GT.export('output/gt.tsv')
>>> with open('output/gt.tsv', 'r') as f:
... for line in f:
... print(line, end='')
locus alleles 0 1 2 3
1:1 ["A","C"] 0/1 0/1 0/0 0/0
1:2 ["A","C"] 1/1 0/1 1/1 1/1
1:3 ["A","C"] 1/1 0/1 0/1 0/0
1:4 ["A","C"] 1/1 0/1 1/1 1/1
>>> small_mt.GT.export('output/gt-no-header.tsv', header=False)
>>> with open('output/gt-no-header.tsv', 'r') as f:
... for line in f:
... print(line, end='')
1:1 ["A","C"] 0/1 0/1 0/0 0/0
1:2 ["A","C"] 1/1 0/1 1/1 1/1
1:3 ["A","C"] 1/1 0/1 0/1 0/0
1:4 ["A","C"] 1/1 0/1 1/1 1/1
>>> small_mt.pop.export('output/pops.tsv')
>>> with open('output/pops.tsv', 'r') as f:
... for line in f:
... print(line, end='')
sample_idx pop
0 2
1 2
2 0
3 2
>>> small_mt.ancestral_af.export('output/ancestral_af.tsv')
>>> with open('output/ancestral_af.tsv', 'r') as f:
... for line in f:
... print(line, end='')
locus alleles ancestral_af
1:1 ["A","C"] 5.3905e-01
1:2 ["A","C"] 8.6768e-01
1:3 ["A","C"] 4.3765e-01
1:4 ["A","C"] 7.6300e-01
>>> mt = small_mt
>>> small_mt.bn.export('output/bn.tsv')
>>> with open('output/bn.tsv', 'r') as f:
... for line in f:
... print(line, end='')
bn
{"n_populations":3,"n_samples":4,"n_variants":4,"n_partitions":8,"pop_dist":[1,1,1],"fst":[0.1,0.1,0.1],"mixture":false}
Notes
For entry-indexed expressions, if there is one column key field, the result of calling hl.str() on that field is used as the column header. Otherwise, each compound column key is converted to JSON and used as a column header. For example:
>>> small_mt = small_mt.key_cols_by(s=small_mt.sample_idx, family='fam1')
>>> with open('output/gt-no-header.tsv', 'r') as f:
... for line in f:
... print(line, end='')
locus alleles {"s":0,"family":"fam1"} {"s":1,"family":"fam1"} {"s":2,"family":"fam1"} {"s":3,"family":"fam1"}
1:1 ["A","C"] 0/1 0/1 0/0 0/0
1:2 ["A","C"] 1/1 0/1 1/1 1/1
1:3 ["A","C"] 1/1 0/1 0/1 0/0
1:4 ["A","C"] 1/1 0/1 1/1 1/1
Parameters: path (str) – The path to which to export. delimiter (str) – The string for delimiting columns. missing (str) – The string to output for missing values. header (bool) – When True include a header line.
map(f)[source]
Transform each element of an NDArray.
Parameters: f (function ( (arg) -> Expression)) – Function to transform each element of the NDArray. NDArrayExpression. – NDArray where each element has been transformed according to f.
ndim
The number of dimensions of this ndarray.
Examples
>>> nd.ndim
2
Returns: int
reshape(shape)[source]
Reshape this ndarray to a new shape.
Parameters: shape (Expression of type tint64 or) – :obj: tuple of Expression of type tint64
Examples
>>> v = hl._nd.array([1, 2, 3, 4])
>>> m = v.reshape((2, 2))
Returns: NDArrayExpression.
shape
The shape of this ndarray.
Examples
>>> hl.eval(nd.shape)
(2, 2)
Returns: TupleExpression
show(n=None, width=None, truncate=None, types=True, handler=None, n_rows=None, n_cols=None)
Print the first few rows of the table to the console.
Examples
>>> table1.SEX.show()
+-------+-----+
| ID | SEX |
+-------+-----+
| int32 | str |
+-------+-----+
| 1 | "M" |
| 2 | "M" |
| 3 | "F" |
| 4 | "F" |
+-------+-----+
>>> hl.literal(123).show()
+--------+
| <expr> |
+--------+
| int32 |
+--------+
| 123 |
+--------+
Warning
Extremely experimental.
Parameters: n (int) – Maximum number of rows to show. width (int) – Horizontal width at which to break columns. truncate (int, optional) – Truncate each field to the given number of characters. If None, truncate fields to the given width. types (bool) – Print an extra header line with the type of each field.
summarize(handler=None)
Compute and print summary information about the expression.
Danger
This functionality is experimental. It may not be tested as well as other parts of Hail and the interface is subject to change.
take(n, _localize=True)
Collect the first n records of an expression.
Examples
Take the first three rows:
>>> table1.X.take(3)
[5, 6, 7]
Warning
Extremely experimental.
Parameters: n (int) – Number of records to take. list
transpose(axes=None)[source]
Permute the dimensions of this ndarray according to the ordering of axes. Axis j in the ith index of axes maps the jth dimension of the ndarray to the ith dimension of the output ndarray.
Parameters: axes (tuple of int, optional) – The new ordering of the ndarray’s dimensions.
Notes
Does nothing on ndarrays of dimensionality 0 or 1.
Returns: NDArrayExpression. | |
# How to solve recurrence $T(n) = 5T(\frac{n}{2}) + n^2\lg^2 n$
I have tried solve the recurrence $$T(n) = 5T(\frac{n}{2}) + n^2\lg^2 n$$ using substitution. Apparently, it is exact for some $$n$$ and the order of the general solution can be found from this exact solution.
By substitution I got the following (not sure if it is correct):
$$T(n) = 5^kT(1) + \sum_{i = 0}^{k}{5^{i}\left(\frac{n}{2^{i}}\right)^{2}\lg^{2}\left(\frac{n}{2^{i}}\right)}$$
I am not sure how to proceed from this. I don't even know if this approach is correct so far. How do I solve this recurrence?
You can use the master theorem. This theorem allows you to solve some recurrences of the form $$T(n) = aT(n/b) + f(n)$$.
You need to compare $$n^{\log_b a}$$ with $$f(n)$$. In you case $$n^{\log_b a} = n^{\log_2 5}$$ and $$f(n)=n^2 \log^2 n$$.
There are different cases depending on how the above functions compare, but I am only going to discuss the one that is relevant to you (you can find more on Wikipedia).
In your case $$f(n) = O(n^{\log_b a - c})$$ for some constant $$c>0$$. To see this pick, e.g., $$c=0.1$$ and substitute to obtain: $$n^2 \log^2 n = O(n^{\log_2 5 -0.1})$$, which is true since $$\log_2 5 - 0.1 > 2$$.
The master theorem then tells you that $$T(n) = \Theta(n^{\log_b a})$$, which in your case is $$T(n) = \Theta(n^{\log_2 5})$$.
• But is this an exact solution? – bingbong Jun 11 '20 at 21:21
• What do you mean by "exact solution"? It is not an equality but since $T(n) \in \Theta(n^{\log_2 5})$ this tells you that there are two constants $c_1$ and $c_2$ with $0 \le c_1 \le c_2$ such that, for every sufficiently large value of $n$, $c_1 n^{\log_2 5} \le T(n) \le c_2 n^{\log_2 5}$. – Steven Jun 11 '20 at 21:21
• By exact solution I mean an equality. But I suppose, this is good enough. Thank you! – bingbong Jun 11 '20 at 21:34
• I am not familiar with the master theorem, could you show how to solve this problem with it in your solution? @Steven – bingbong Jun 11 '20 at 21:34
• I have edited my answer to add mode details. – Steven Jun 11 '20 at 21:42 | |
# Calculate the temperature stability of wavelength demultiplexer
Hi ive been given this question:
A wavelength demultiplexer (DMUX) used in a DWDM communication system has a channel spacing of 100 GHz. The operating pass band of each channel is 50 GHz. The transmission wavelength response of the DMUX depends linearly on temperature with a sensitivity 𝑑𝜆/𝑑𝑇 = 0.12 nm ℃−1 .If a laser of wavelength 1531 nm is centred in the pass band of the DMUX, calculate the required temperature stability ∆𝑇 of the DMUX.
The given answer is +-1.63 degrees celicius however ive tried to work it out as follows:
Allowed change in wavelength = 3 *10^8 / 25GHz = 0.012 m
Allowed change in temperature = change in wavelength/sensitivity = 0.012/0.12*10^-9 = 10^8
The answer im getting is very wrong, so any help would be much appreciated! | |
# Etude polarisée du système L
1 LOGICAL - Logic and computing
UP11 - Université Paris-Sud - Paris 11, Inria Saclay - Ile de France, X - École polytechnique, CNRS - Centre National de la Recherche Scientifique : UMR8623
Abstract : Herbelin coined the name System L'' to refer to syntactical quotients of sequent calculi, in which two classes of terms interact in commands, in the manner of Curien's and Herbelin's lambda-bar-mu-mu-tilde calculus or Wadler's dual calculus. This paper introduces a system L that has constructs for all connectives of second order linear logic, and that shifts focus from the old code/environment interaction to a game between positives and negatives. L provides quotients for major second order sequent calculi, in their right-hand-side-sequents formulation as well as their two-sided-sequents formulation, namely LL, LK and LLP. The logician reader will appreciate the unifying framework for the study of sequent calculi it claims to be, whereas the computer scientist reader will appreciate the fact that it is a step toward Herbelin's project of rebuilding a theory of computation that puts call by name'' and call by value'' on an equal footing --- in particular is L involved with respect to reduction strategies, to wit that a cut elimination protocol that enjoys the Curch-Rosser property seems to stand out, and it allows to mix lazy and eager aspects. The principal tool for the study of this system is classical realizability, a consequence being that this tool is now extended to call by value.
Keywords :
Type de document :
Pré-publication, Document de travail
2009
Littérature citée [25 références]
https://hal.inria.fr/inria-00295005
Contributeur : Guillaume Munch-Maccagnoni <>
Soumis le : vendredi 23 janvier 2009 - 18:10:41
Dernière modification le : jeudi 10 mai 2018 - 01:32:32
Document(s) archivé(s) le : samedi 26 novembre 2016 - 04:35:59
### Fichier
etude_polarisee_du_systeme_l.p...
Fichiers produits par l'(les) auteur(s)
### Identifiants
• HAL Id : inria-00295005, version 4
### Citation
Guillaume Munch-Maccagnoni. Etude polarisée du système L. 2009. 〈inria-00295005v4〉
### Métriques
Consultations de la notice
## 520
Téléchargements de fichiers | |
Find all School-related info fast with the new School-Specific MBA Forum
It is currently 07 Feb 2016, 09:30
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# Does line Ax + By + C = 0 (A is not 0) intersect the x-axis
Author Message
TAGS:
CEO
Joined: 21 Jan 2007
Posts: 2756
Location: New York City
Followers: 9
Kudos [?]: 534 [0], given: 4
Does line Ax + By + C = 0 (A is not 0) intersect the x-axis [#permalink] 02 Dec 2007, 02:59
1
This post was
BOOKMARKED
00:00
Difficulty:
65% (hard)
Question Stats:
50% (01:59) correct 50% (01:46) wrong based on 139 sessions
Does line Ax + By + C = 0 (A is not 0) intersect the x-axis on the negative side?
(1) BA < 0.
(2) AC > 0.
M18-13
[Reveal] Spoiler: OA
Last edited by Bunuel on 13 Nov 2013, 01:03, edited 2 times in total.
Renamed the topic, edited the question, added the OA and moved to DS forum.
Director
Joined: 09 Aug 2006
Posts: 763
Followers: 1
Kudos [?]: 120 [0], given: 0
Re: 18.13 X axis [#permalink] 02 Dec 2007, 06:06
bmwhype2 wrote:
Does line Ax + By + C = 0 (A is not 0) intersect the x-axis on the negative side?
1. BA <0> 0
Getting E.
Ax + By + C = 0
By = -Ax - C (cannot divide by B just yet since B could be 0)
Stat 1:
Tells us that B is not 0 and that A and B have the same sign.
y = - (A/B)x - C
To find x, y = 0:
-(A/B)x = C
x = -(B/A) * C
B/A will have the same sign therefore -(B/A) will be negative which makes me think that the answer to the stem is yes. However, what if C = 0? The answer to the stem is no. Insuff.
Stat 2:
Tells us that A & C have opposite signs. I don't think that this alone helps us in determine the answer. Insuff.
Together:
If A is +ve and C is -ve then x intercept is +ve
If A is -ve and C is +ve then x intercept is -ve
Insuff.
Intern
Joined: 02 Dec 2007
Posts: 11
Followers: 0
Kudos [?]: 1 [0], given: 0
Re: 18.13 X axis [#permalink] 02 Dec 2007, 07:40
AX + BY + C = 0
Y = -AX/B – C/B
Assume that this line intersects x axis at (-n, 0), where n is a +ve integer. As it is given that the given equation interest the x-axis on –ve side.
Then
0 = AN/B – C/B
AN = C
A/C = N here n is +ve integer
So either A> 0 & C > 0 or C< 0 & A<0> 0
So 2 alone is sufficient. Ans is B
SVP
Joined: 28 Dec 2005
Posts: 1575
Followers: 3
Kudos [?]: 113 [0], given: 2
im also getting B.
Equation ends up y = (-Ax-C)/B
We are interested in the x intercept, so set the equation above to 0, and you end up with:
C=-Ax
Statement 1: tells us either B<0 or A<0. If A<0>0, so either A and C are both positive, or A and C are both negative
If you consider either case, and plug into x = -(C/A), you always end up with a negative x.
Sufficient.
Manager
Joined: 07 Jul 2013
Posts: 96
Followers: 0
Kudos [?]: 10 [0], given: 9
Re: Does line Ax + By + C = 0 (A is not 0) intersect the x-axis [#permalink] 12 Nov 2013, 14:38
i was doing this on the gmatclub tests and i cannot figure out why all we need is x = -c/a
"So, the x-intercept of line ax+by+c=0 is x=−c/a."
I plugged in 0 so ax+ by+ c = 0
then y = ( -ax - c ) / b
was i supposed to think of this question like this
ax + b (0) + c = 0
ax + c = 0
x = -c/a
then use that equation to figure out what x is???
(when i was doing this before viewing the solution, i assumed that we would need a/b to solve because -a/b * x.)
Director
Affiliations: GMATQuantum
Joined: 19 Apr 2009
Posts: 524
Followers: 96
Kudos [?]: 349 [0], given: 7
Re: Does line Ax + By + C = 0 (A is not 0) intersect the x-axis [#permalink] 12 Nov 2013, 22:23
@laserglare
Yes, to find the x-intercept of a line, the point where it intersects the x-axis, we set the y-coordinate to 0. You correctly replaced y as 0 in the equation Ax+By+C=0, which gave you the x-intercept of -C/A.
Here 2 alone is sufficient because if AC>0, then we have either both A and C are positive or both A and C are negative, in both scenarios -C/A is negative, meaning the x-intercept is negative or intersects the x-axis to the left of the origin.
Dabral
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 6216
Location: Pune, India
Followers: 1674
Kudos [?]: 9587 [1] , given: 196
Re: Does line Ax + By + C = 0 (A is not 0) intersect the x-axis [#permalink] 12 Nov 2013, 22:29
1
KUDOS
Expert's post
laserglare wrote:
i was doing this on the gmatclub tests and i cannot figure out why all we need is x = -c/a
"So, the x-intercept of line ax+by+c=0 is x=−c/a."
I plugged in 0 so ax+ by+ c = 0
then y = ( -ax - c ) / b
was i supposed to think of this question like this
ax + b (0) + c = 0
ax + c = 0
x = -c/a
then use that equation to figure out what x is???
(when i was doing this before viewing the solution, i assumed that we would need a/b to solve because -a/b * x.)
Given Ax + By + C = 0 is the equation of a line. You need to figure out whether it intersects x axis on the negative side i.e. in the second quadrant. You want to know that when the line crosses the x axis (if it does), is x co-ordinate negative there? When does a line cross the x axis? When its y co-ordinate is 0. So how will you know the point where the line crosses the x axis?
You put y = 0.
Ax + B*0 + C = 0
x = -C/A
So when y = 0, x = -C/A
We want to know whether this x cor-ordinate (-C/A) is negative. It will be negative when C/A is positive i.e. both C and A will have the same sign (either both positive or both negative)
Statement 2 tells you that C and A have the same sign (since their product is positive). Hence it is enough alone.
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Get started with Veritas Prep GMAT On Demand for \$199
Veritas Prep Reviews
Math Expert
Joined: 02 Sep 2009
Posts: 31261
Followers: 5344
Kudos [?]: 62096 [1] , given: 9440
Re: Does line Ax + By + C = 0 (A is not 0) intersect the x-axis [#permalink] 13 Nov 2013, 01:04
1
KUDOS
Expert's post
1
This post was
BOOKMARKED
bmwhype2 wrote:
Does line Ax + By + C = 0 (A is not 0) intersect the x-axis on the negative side?
(1) BA < 0.
(2) AC > 0.
M18-13
Does line Ax + By + C = 0 (A is not 0) intersect the x-axis on the negative side?
$$ax+by+c=0$$ is equation of a line. Note that the line won't have interception with x-axis when $$a=0$$ (and $$c\neq{0}$$): in this case the line will be $$y=-\frac{c}{b}$$ and will be parallel to x -axis.
Now, in other cases (when $$a\neq{0}$$) x-intercept of a line will be the value of $$x$$ when $$y=0$$, so the value of $$x=-\frac{c}{a}$$. Question basically asks whether this value is negative, so question asks is $$-\frac{c}{a}<0$$? --> is $$\frac{c}{a}>0$$? --> do $$c$$ and $$a$$ have the same sign?
(1) BA < 0. Not sufficient as we can not answer whether $$c$$ and $$a$$ have the same sign.
(2) AC > 0 --> $$c$$ and $$a$$ have the same sign. Sufficient.
Check more on this topic here: math-coordinate-geometry-87652.html
Hope it helps.
_________________
GMAT Club Legend
Joined: 09 Sep 2013
Posts: 8164
Followers: 416
Kudos [?]: 111 [0], given: 0
Re: Does line Ax + By + C = 0 (A is not 0) intersect the x-axis [#permalink] 03 Feb 2015, 16:08
Hello from the GMAT Club BumpBot!
Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos).
Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email.
_________________
Intern
Joined: 31 Oct 2015
Posts: 36
Followers: 0
Kudos [?]: 0 [0], given: 53
Does line Ax + By + C = 0 (A is not 0) intersect the x-axis [#permalink] 25 Nov 2015, 08:14
Is the x intercept of the line negative? From the given equation: x = -by/a - c/a. At x intercept of this line: y = 0 and x = - c/a.
Question reformulated: Is - c/a a negative value?
Statement 1: gives no information about c, therefore the sign of - c/a cannot be determined.
Statement 2: ac > 0. Therefore a and c have the same sign, and either both are negative or both are positive. In either case c/a becomes a positive value and - c/a is becomes a negative value, therefore the x intercept of the line is a negative value.
Math Revolution GMAT Instructor
Joined: 16 Aug 2015
Posts: 644
GPA: 3.82
Followers: 34
Kudos [?]: 247 [1] , given: 0
Re: Does line Ax + By + C = 0 (A is not 0) intersect the x-axis [#permalink] 26 Nov 2015, 07:11
1
KUDOS
Expert's post
Forget conventional ways of solving math questions. In DS, Variable approach is the easiest and quickest way to find the answer without actually solving the problem. Remember equal number of variables and independent equations ensures a solution.
Does line Ax + By + C = 0 (A is not 0) intersect the x-axis on the negative side?
(1) BA < 0.
(2) AC > 0.
We want to know whether in Ax+C=0, Ax=-C, x=-C/A, -C/A<0. If we multiply -A^2 on both sides, we are multiplying negative number, so the inequality sign flips.
So -C/A<0? --> CA>0?
Once we modify the original condition and the question according to the variable approach method 1, we can solve approximately 30% of DS questions.
_________________
MathRevolution: Finish GMAT Quant Section with 10 minutes to spare
The one-and-only World’s First Variable Approach for DS and IVY Approach for PS with ease, speed and accuracy.
Find a 20% off coupon code for GMAT Club members.
Unlimited Access to over 120 free video lessons - try it yourself
Re: Does line Ax + By + C = 0 (A is not 0) intersect the x-axis [#permalink] 26 Nov 2015, 07:11
Similar topics Replies Last post
Similar
Topics:
1 If k does not equal -1, 0 or 1, does the point of intersection of line 1 13 Dec 2014, 02:58
13 If perpendicular lines m and n intersect at (0,b) in the 6 19 Oct 2013, 12:12
Line intersection x-axis 0 16 Aug 2015, 06:11
4 In the xy-plane, the line with equation ax + by + c = 0, 11 11 Feb 2011, 02:22
1 Does in the line y=mx+c intersect x axis? a) m<0 b)c>0 1 03 Feb 2011, 12:16
Display posts from previous: Sort by | |
It is currently 10 Dec 2018, 12:04
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# The value of P if V = 20 and T = 32
Author Message
TAGS:
Moderator
Joined: 18 Apr 2015
Posts: 5126
Followers: 76
Kudos [?]: 1028 [0], given: 4631
The value of P if V = 20 and T = 32 [#permalink] 13 Sep 2017, 12:38
Expert's post
00:00
Question Stats:
58% (00:59) correct 41% (01:15) wrong based on 24 sessions
For a certain quantity of a gas, pressure P, volume V, and temperature T are related according to the formula PV = kT, where k is a constant.
Quantity A Quantity B The value of P if V = 20 and T = 32 The value of T if V = 10 and P = 78
A) Quantity A is greater.
B) Quantity B is greater.
C) The two quantities are equal.
D) The relationship cannot be determined from the information given.
[Reveal] Spoiler: OA
_________________
Last edited by Carcass on 04 Oct 2017, 05:36, edited 2 times in total.
Edited by Carcass
Director
Joined: 03 Sep 2017
Posts: 521
Followers: 1
Kudos [?]: 330 [1] , given: 66
Re: The value of P if V = 20 and T = 32 [#permalink] 16 Sep 2017, 08:40
1
KUDOS
I've put D because it seemed a nonsense question to me. Is it like that on purpose or is some formula missing? Otherwise, how can I know the value of a variable given two others without knowing their relationship?
Moderator
Joined: 18 Apr 2015
Posts: 5126
Followers: 76
Kudos [?]: 1028 [1] , given: 4631
Re: The value of P if V = 20 and T = 32 [#permalink] 16 Sep 2017, 15:51
1
KUDOS
Expert's post
Sorry. I do apologize. I should add the stimulus but internet related problems have been fighting against me.
Regards
_________________
Director
Joined: 03 Sep 2017
Posts: 521
Followers: 1
Kudos [?]: 330 [0], given: 66
Re: The value of P if V = 20 and T = 32 [#permalink] 21 Sep 2017, 07:52
Using the formula, then we have that column A is equal to $$P=\frac{8}{5}k$$ and column B is equal to $$T=\frac{780}{k}$$. Comparing those two quantities and simplifying we finally reach a comparison between $$k^2$$ and 487.5. Given that k can be whatever constant, we can't say which one of the two is the greatest, thus answer is D!
Manager
Joined: 09 Nov 2018
Posts: 69
Followers: 0
Kudos [?]: 5 [0], given: 1
Re: The value of P if V = 20 and T = 32 [#permalink] 13 Nov 2018, 06:45
IlCreatore wrote:
Using the formula, then we have that column A is equal to $$P=\frac{8}{5}k$$ and column B is equal to $$T=\frac{780}{k}$$. Comparing those two quantities and simplifying we finally reach a comparison between $$k^2$$ and 487.5. Given that k can be whatever constant, we can't say which one of the two is the greatest, thus answer is D!
Just an example:
If k=0, a=0 b=undefined
k=1 makes a=1.6 b=780
GRE Instructor
Joined: 10 Apr 2015
Posts: 1232
Followers: 45
Kudos [?]: 1110 [1] , given: 7
Re: The value of P if V = 20 and T = 32 [#permalink] 13 Nov 2018, 06:57
1
KUDOS
Expert's post
Carcass wrote:
For a certain quantity of a gas, pressure P, volume V, and temperature T are related according to the formula PV = kT, where k is a constant.
Quantity A Quantity B The value of P if V = 20 and T = 32 The value of T if V = 10 and P = 78
QUANTITY A: The value of P if V = 20 and T = 32
Take given formula, PV = kT, and plug in values to get: P(20) = k(32)
Divide both sides by 20 to get: P = 32k/20
Simplify to get: P = 8k/5
QUANTITY B: The value of T if V = 10 and P = 78
Take given formula, PV = kT, and plug in values to get: (78)(10) = kT
Divide both sides by k to get: (78)(10)/k = T
Simplify to get: T = 780/k
So, we have:
QUANTITY A: 8k/5
QUANTITY B:780/k
Let's TEST some possible values of k
Try k = 1
In this case, we get:
QUANTITY A: 8k/5 = (8)(1)/5 = 8/5
QUANTITY B:780/k = 780/1 = 780
In this case, QUANTITY B IS GREATER
Try k = 1000
In this case, we get:
QUANTITY A: 8k/5 = (8)(1000)/5 = some number greater than 1
QUANTITY B:780/k = 780/1000 = some number less than 1
In this case, QUANTITY A IS GREATER
Cheers,
Brent
_________________
Brent Hanneson – Creator of greenlighttestprep.com
Intern
Joined: 02 Oct 2018
Posts: 31
Followers: 0
Kudos [?]: 2 [0], given: 0
Re: The value of P if V = 20 and T = 32 [#permalink] 13 Nov 2018, 08:34
In the first case it becomes P=1.6k whereas in the second case it becomes 780/k.
Since we don't know k we can't determine the answer.
Re: The value of P if V = 20 and T = 32 [#permalink] 13 Nov 2018, 08:34
Display posts from previous: Sort by | |
## More comment migration stuff
Because my original import from phpBB to Disqus got botched, and the Disqus to Isso import lost a bunch of useful information, I ended up just going back to my old phpBB database and reimporting it directly into Isso. It mostly went well but there’s a few things that I need to go back and fix. This is my TODO list:
• Unescape <a href> stuff that got converted to <a href> (example) DONE
• Defunge the weirder bits of BBCode where e.g. [quote] turned into [quote:abcde] so it didn’t get converted to HTML (example) DONE
• Clean up some older comments where I was a lot more accepting of Problematic Things (not gonna link to any but yeah they’re there) done, I think
• If possible, reparent comments based on [quote]s (way easier said than done, I’ll probably have to do that manually)
• Update: generate a new comment secret key and fix the thread IDs, because I made an oops DONE
• Looks like when I did the reimport of phpBB stuff I accidentally removed some of the earliest Disqus-based comments (example, also) so I’ll have to do a bunch of reconciliation for that, fun fun… DONE
Also some of my earliest journal comics had comments posted via Movable Type’s comment system rather than phpBB, so I’ll want to also migrate those over (which I never got around to doing back when I was still using Movable Type to run my website); back then I just had “native” MT comments rendered in the MT template, which was Good Enough and I figured I’d get around to fixing it later. Well, it’s later. And that’s done. Even though I’m up way later than I meant to be. Oops.
Oh, and since I set up monsterid for the default avatars I feel like I should try to track down the email addresses of the folks who were posting to Disqus and fill that stuff in wherever possible.
I promise at some point I’ll get back to blogging about stuff other than the website itself.
## Proper comment privacy! Yay!
Okay, instead of trying to modify Isso to support thread IDs that are separate from page URIs, I ended up leveraging the way that Publ request routing works and just made all thread IDs consist of a /<signature>/<entry_id> path, where <signature> is computed from an HMAC signature on the entry ID and a secret key. So, now the thread ID is only visible to people who have access to the entry in the first place (as long as my signing key never leaks), and the fact that Isso only uses the thread ID when generating a reply email link isn’t a problem.
So, for example, this entry has an entry ID of 4678, and the generated thread ID is (for example) /890824f4d450d4ac/4678, so when someone gets a reply notification the email will say something like:
such-and-such <foo@bar.baz> wrote:
Good point!
which will then redirect back here.
It’s not ideal, of course, but it works well enough.
Of course, to do this I had to migrate all of my thread IDs again, but hopefully this is the last time I’ll have to do that, and it also takes care of all my legacy Movable Type-era thread IDs. It does set a bad precedent that I’ll have to migrate thread IDs more in the future if I ever change my publishing system but the fact I was able to get away with not doing that for so long is a pretty good testament to my laziness, which I ended up having to pay interest on in the future anyway. So, lesson learned.
Also, this approach is even better privacy than what I was hoping to get out of the Disqus method; as it stood before, someone on my friends list (or who saw an Auth: * entry) could have theoretically figured out the way I was determining private thread IDs and used that to explore comments on entries they don’t have access to, and also there was an issue that if I ever took a public entry private, its thread ID would remain the same as when it was public. But this way, it’s unguessable as long as my HMAC key never leaks, and if my HMAC key does leak I can just reset it and regenerate the thread IDs. (Edit from the future: Ha. Haha. Ha hahaha ha haha. Ha.)
This approach is also useful for things other than Publ; my advice to anyone who’s using Isso for comments is that instead of using the actual entry URI as the thread ID, they should have some sort of stable mechanism for forwarding an opaque thread ID to the actual entry, and use that. This just happened to be really easy to implement for Publ since Publ already supports opaque ID chasing.
## Comment integration blues
So, there’s an issue with Isso which will require a bit of refactoring/feature work on Isso, which I’d might as well try to do since I can’t be the only one who needs to decouple their thread IDs from their URLs.
Anyway, this’ll probably mean that I’ll have to redo the comment import at some point, so don’t get too attached to anything you’ve posted so far.
Update: Rather than doing the right thing for now I’ve opted to just use the shortlink as the identifier. This means that future site migrations will be more painful, and also I need to do some more work to migrate in the old comments from older entries, but I guess the idea of a single universal migration path is a bit silly anyway.
## Moving away from Disqus
So, Disqus has served me pretty well for quickly embedding comments into my website, but there are a few pretty big downsides to it:
• No support for private/hidden threads
• No way to disable random discovery of hidden threads, by design
• They’re trying to make the whole Internet into their own forum rather than providing “just” a comment system (not that anyone even uses it the way they intend)
• Their UX keeps getting more and more cumbersome and annoying
I’m going to look into alternative comment systems, ideally ones I can self-host. Isso looks promising, if a bit sparse. So does Schnack. (I’m going to try Isso first because its setup/requirements are far less onerous.)
Anyway, thanks Passerine for bringing the privacy leak issue to my attention. I figured there was probably something like that lurking in the shadows, but I didn’t think it was quite so close to the surface…
## Post privacy
I finally have private posts working in Publ. This is just a test; in particular this post should only appear to people who are not logged in, and should disappear as soon as they do.
Think of it as the sound of one hand yapping.
David Yates wrote a great defense of RSS which I completely agree with. To summarize the salient points:
• RSS is very well-supported by a lot of things
• RSS is a suitable name as shorthand for “RSS/Atom” because the name “Atom” is overloaded and basically anything that supports Atom also supports RSS and vice-versa
(Note that there’s one inaccuracy in that since that article was written, Twitter has moved over to algorithmic manipulation of the timeline. This can currently be disabled but who knows how long that’ll last?)
Most IndieWeb folks are also really gung-ho about mf2 and h-feed, and while I don’t see any reason not to support it (and it certainly does have some advantages in terms of it being easier to integrate into a system that isn’t feed-aware or convenient to set up multiple templates), I’ve run into plenty of pitfalls when it comes to actually adding mf2 markup to my own site (for example, having to deal with ambiguities with nesting stuff and dealing with below-the-fold content, not to mention a lot of confusion over things like p-summary vs. e-content), and so far there doesn’t seem to be any real advantage to doing so since everything that supports h-feed also supports RSS/Atom, as far as I’m aware.
For me the only obvious advantage to h-feed is that you can add it to one-size-fits-none templating systems like Tumblr where you don’t have any control over the provided RSS feed, but in those situations there’s not really a lot more added flexibility you’re going to get by adding h-feed markup anyway. I guess it also makes sense if you’re hand-authoring your static site, but that just means it becomes even easier to get things catastrophically wrong.
## Keeping it personal
I just read this great essay by Matthias Ott. It does a great job of summarizing the state of affairs of blogging and social media, and how we can try to escape the current orbit to get back to where the web was meant to be.
I especially like the bit about “Don’t do it like me. Do it like you.” Because that is exactly why I’ve been building Publ the way I have; I have specific goals in mind for how I manage, maintain, and organize my site, and these goals are very different than what other existing blogging and site-management software has in mind. The fact that I post so many different kinds of content and that they need different organizational structures to make sense makes this a somewhat unique problem. I’d like to think that Publ is a very general piece of web-publishing software, but it’s probably so general because I have such specific needs. Which makes for an interesting paradox, I suppose.
I guess what I’m saying is that I want to see more types of web-based publishing where the schema and layout fit the content, not the other way around. But it also needs to be able to interoperate with other stuff, while still making sense from a producer-consumer UX perspective.
So hey, Publ now has a tagging system, so I’ve updated my site to show tags in a lot of places. I’m not sure if I should make some sort of tag explorer view or if it’s okay to just pivot between tags within a category listing. Insight or ideas would be most welcome.
What I want to do at some point is tag all of my comics with subject matter and characters, but that seems like a lot of work. I wonder if there’s a way to outsource that to other folks which doesn’t involve opening up my git repo to the world. Maybe I’ll build a simple tool which lets people suggest tags for entries which don’t have tags. Iunno. | |
Page 1 of 1
### [1003.3999] Cosmological parameters from large scale struct
Posted: March 31 2010
The authors compare how much information there in the BAO peak compared to the overall shape of the large-scale structure power spectrum. They basically conclude that at present, the information in the LSS data (when combined with the CMB) is dominated by the BAO scale.
For me, the model-independent extraction of the BAO scale via spectral analysis was particularly interesting.
There is one statement which I did not understand. On page 6 the authors explain the positive correlation between the dark energy equation of state $$\omega$$ and the primordial spectral index by saying that as $$\omega$$ becomes closer to zero (i.e. grows), the late ISW becomes larger. I would have thought that as $$\omega$$ goes to zero, the late ISW effect would vanish.
### [1003.3999] Cosmological parameters from large scale struct
Posted: March 31 2010
They're using CAMB, which if I remember correctly defaults to always using a sound speed of 1 for dark energy, regardless of what $$w$$ is set to. So I don't think dark energy quite behaves like CDM as $$w\to0$$, which seems to be what you're thinking of.
But I think what's more important for late ISW is that as long as $$w<0$$, a larger $$w$$ (closer to zero) means dark energy remains dominant back to higher redshift than for a more negative $$w$$, and this longer duration of dark energy dominance leads to more ISW. To put it another way: if $$w\to -\infty$$, then you get zero late ISW, because $$\Omega_{de}$$ was zero up until a fraction of a second ago.
### [1003.3999] Cosmological parameters from large scale struct
Posted: March 31 2010
I see, that makes sense.
### Re: [1003.3999]
Posted: March 31 2010
Syksy Rasanen wrote:The authors compare how much information there in the BAO peak compared to the overall shape of the large-scale structure power spectrum. They basically conclude that at present, the information in the LSS data (when combined with the CMB) is dominated by the BAO scale.
I would just point out that this is a model-dependent as well as dataset-dependent statement. In the SDSS DR7 analysis, we found that using the shape information improved constraints on both $$m_{nu}$$ and $$N_{eff}$$ when combined with the CMB alone (and considering these two parameters separately). However, in this paper they're allowing both parameters to vary simultaneously (which I would guess are highly degenerate in P(k)); moreover, they've included the Riess et al. $$H_0$$ constraint, which already buys you a lot in terms of breaking degeneracies with the CMB on these parameters.
In any case, it's a very interesting paper and I think they've made good improvements to how the likelihoods are implemented and clarified some confusion I had about generalizing BAO constraints to models with $$N_{eff} \neq 3.04$$.
### [1003.3999] Cosmological parameters from large scale struct
Posted: April 01 2010
Right, I should have specified that for extended models, there is extra information in the overall shape. The authors point this out for the example of a variable number of neutrino species together with a variable dark energy equation of state. In this case, the neutrino masses, equations of state and the spectral index all benefit from the shape information.
How important do you think is the $$H_0$$ prior? | |
# required rate of return formula excel
We calculate the MIRR found in the previous example with the MIRR as its actual definition. (−NPV(rrate, values[positive])×(1+rrate)nNPV(frate, values[negative])×(1+frate))1n−1−1\begin{aligned}\left(\frac{-\text{NPV}(\textit{rrate, values}[\textit{positive}])\times(1+\textit{rrate})^n}{\text{NPV}(\textit{frate, values}[\textit{negative}])\times(1+\textit{frate})}\right)^{\frac{1}{n-1}}-1\end{aligned}(NPV(frate, values[negative])×(1+frate)−NPV(rrate, values[positive])×(1+rrate)n)n−11−1. Rate of return is used in finance by corporates in any form of investment like assets, projects etc. We use the XIRR function below to solve this calculation. When opting instead for a discount rate of 1%, investment #1 shows a return bigger than investment #2. Online finance calculator to calculate the capital asset pricing model values of expected return on the stock , risk free interest rate, beta and expected return of the market. The required rate of return is 9.80%. The profitability index (PI) rule is a calculation of a venture's profit potential, used to decide whether or not to proceed. One thing to keep in mind is considering the time value of money. Amey had purchased home in year 2000 at price of $100,000 in outer area of city after sometimes area got develop, various offices, malls opened in that area which leads to an increase in market price of Amey’s home in the year 2018 due to his job transfer he has to sell his home at a price of$175,000. An investor purchase 100 shares at a price of $15 per share and he received a dividend of$2 per share every year and after 5 years sell them at a price of $45. Using the real rate of return formula, this example would show. Effective annual interest rate is the interest rate actually earned due to compounding. ALL RIGHTS RESERVED. It is expressed in terms of percentage. The main difference between the IRR and NPV is that NPV is an actual amount while the IRR is the interest yield as a percentage expected from an investment. The annualized rate of return formula is equal to Current value upon original value raise to the power one divided by number of years, the whole component is then subtracted by one. He also invested$2000 in Google stocks in 2015 and sold his stock in 2016 at $2800. Simple IRR example. To calculate your realized return as a percentage, divide the amount of your realized return by your initial investment. However, selecting projects based on maximizing the IRR as opposed to the NPV could increase the risk of realizing a return on investment greater than the weighted average cost of capital (WACC) but less than the present return on existing assets. The syntax of the function is: Where the arguments are as follows: If the cash flow sequence has only a single cash component with one sign change (from + to - or - to +), the investment will have a unique IRR. This means that in the case of investment #1, with an investment of$2,000 in 2013, the investment will yield an annual return of 48%. The net present value of a project depends on the discount rate used. These items represent an initial investment of $100,000 and payouts in the amounts that follow. Use the IRR function in Excel to calculate a project's internal rate of return. Rate of Return = (Current Value – Original Value) * 100 / Original Value. AAGR measures the average rate of return or growth over constant spaced time periods. Annualized Rate of Return = (Current Value / Original Value)(1/Number of Year). Then, multiply the result by 100 to convert the decimal to a percentage. The share price is$52.1236541053. Excel calculates the average annual rate of return as 9.52%. You need to provide the two inputs i.e Current Value and Original Value. By closing this banner, scrolling this page, clicking a link or continuing to browse otherwise, you agree to our Privacy Policy, Download Rate of Return Formula Excel Template, New Year Offer - All in One Financial Analyst Bundle (250+ Courses, 40+ Projects) Learn More, You can download this Rate of Return Formula Excel Template here –, 250+ Online Courses | 1000+ Hours | Verifiable Certificates | Lifetime Access, Rate of Return Formula in Excel (With Excel Template), Finance for Non Finance Managers Course (7 Courses), Investment Banking Course(117 Courses, 25+ Projects), Financial Modeling Course (3 Courses, 14 Projects), Annual Return Formula | How to Calculate? If the second parameter is not used in the function, Excel will find an IRR of 10%. Here we discuss its uses along with practical examples. Return of return is basically used to calculate the rate of return on investment and help to measure investment profitability. The Excel NOMINAL function calculates the nominal interest rate, given an effective annual interest rate and the number of compounding periods per year. Corporate Valuation, Investment Banking, Accounting, CFA Calculator & others, This website or its third-party tools use cookies, which are necessary to its functioning and required to achieve the purposes illustrated in the cookie policy. So, through the rate of return, one can calculate the best investment option available. Using the formula above. You can think of the internal rate of return as The RATE function calculates by iteration. Rate of return is also known as return on investment. The rate of return expressed in form of percentage and also known as ROR. ExcelBanter » Excel Newsgroups » Excel Worksheet Functions > Rate of return required formula Reply LinkBack: Thread Tools: Search this Thread: Display Modes #1 November 18th 04, 03:10 AM Alorasdad Posts: n/a Rate of return required formula. In this formula, any gain made is included in formula. The calculations appear more complicated than they actually are. In the image below, for investment #1, Excel does not find the NPV rate reduced to zero, so we have no IRR. IRR function calculates the internal rate of return for a series of cash flows occurring at regular intervals. It is basically a percentage of the amount above or below the investment amount. Want to master Microsoft Excel and take your work-from-home job prospects to the next level? The risk free rate is 6 %. Required Rate of Return = Risk Free Rate + Beta * (Whole Market Return – Risk Free Rate) Required Rate of Return = 5% + 1.3 * (7% – 5%) Required Rate of Return = 7.6% We calculate the MIRR found in the previous example with the MIRR as its actual definition. The internal rate of return allows investments to be analyzed for profitability by calculating the expected growth rate of an investment’s returns and is expressed as a percentage. In other words, it is a percentage by which the value of investments is expected to exceed its initial value after a specific period of time. But in some cases, changing the guess value may cause an IRR formula to return a different rate. Rather, they are happening at different time periods. An investor purchased a share at a price of $5 and he had purchased 1,000 shared in year 2017 after one year he decides to sell them at a price of$ According to this formula, the growth rate for the years can be calculated by dividing the current value by the previous value. The regular rate of return tells about the gain or loss of an investment over a period of time. Now, he wants to calculate the rate of return on his invested amount of $5,000. Net Present Value (NPV) is the difference between the present value of cash inflows and the present value of cash outflows over a period of time. You will get the rate of return of the stock you bought. The required rate of return formula = Risk-free rate of return + β * (Market rate of return – Risk-free rate of return) Examples of Required Rate of Return Formula (with Excel Template) Let’s see some simple to advanced examples to understand the calculation of the Required Rate of Return better. Valuation, Hadoop, Excel, Mobile Apps, Web Development & many more. The internal rate of return (IRR) is a metric used in capital budgeting to estimate the return of potential investments. If the second parameter is not used and the investment has multiple IRR values, we will not notice because Excel will only display the first rate it finds that brings the NPV to zero. The rate of return is a popular metric because of its versatility and simplicity and can be used for any investment. Given that an investor holds$180000 in stock A (beta 1.2), $145000 in stock B (beta 0.8) and$35000 in stock C (beta 2). Now I will guide you to calculate the rate of return on the stock easily by the XIRR function in Excel. If no parameters are entered, Excel starts testing IRR values differently for the entered series of cash flows and stops as soon as a rate is selected that brings the NPV to zero. You can use RATE to calculate the periodic interest rate, then multiply as required to derive the annual interest rate. One can use rate of return to compare performance rates on capital equipment purchase while an investor can calculate which stock purchases performed better, This has been a guide to a Rate of Return formula. You can easily calculate the Rate of Return using Formula in the template provided. Rate of return measure return on investment like rate of return on assets, rate of return on capital etc. Let us see an example to understand rate of return formula better. Now, let’s calculate the rate of return on his property. The internal rate of return is the discount rate that makes the net present value equal to zero. What Is Excel IRR Function? In A7, you enter the formula, IRR(A1:A6). The IRR and net present value (NPV) are used when selecting investments based on the returns. Enter this into your spreadsheet in cell A4 as "=A1+(A2_(A3-A1))" to calculate the expected return for your investment. The image below also shows investment #2. Therefore, the goal should not be to maximize NPV. Excel's IRR function. Think of it in terms of capital investing like the company’s management would. The internal rate of return (IRR) is the discount rate providing a net value of zero for a future series of cash flows. Pooled internal rate of return computes overall IRR for a portfolio that contains several projects by aggregating their cash flows. Calculate Internal Rate of Return of an investment. The expected rate of return is a percentage return expected to be earned by an investor during a set period of time, for example, year, quarter, or month. This yields the same result: 56.98%. 1. Remember that when you enter formulas in Excel, you double-click on the cell and put it in formula mode by pressing the equals key (=). Now, let us calculate the rate of return on shares. Example 2: Use guess in Excel IRR formula. ACC- Excel.xlsx - update Required Rate of return period year 1 2 3 4 5 14 demand 2021 2022 2023 2024 2025 1025 1075 1125 1175 1225 PPI $new We can see that investor earns more profit in the investment of Google then in Apple, as the rate of return on investment in Google is higher than Apple. However, most investments begin with a negative flow and a series of positive flows as first investments come in. Rate of return have multiple uses they are as follows:-, You can use the following Rate of Return Calculator, Here we will do the same example of the Rate of Return formula in Excel. Is there a formula for calculating a portfolio's required rate of return? An investor purchased a share at a price of$5 and he had purchased 1,000 shared in year 2017 after one year he decides to sell them at a price of $10 in the year 2018. The required rate of return (RRR) on an investment is the minimum annual return that is necessary to induce people to invest in it. If the investment rate of return is positive then it’s probably worthwhile whereas if the rate of return is negative then it implies loss and hence investor should avoid it. In the image below, we calculate the IRR. On the other hand, if the second parameter is used (i.e., = IRR ($ C $6:$ F $6, C12)), there are two IRRs rendered for this investment, which are 10% and 216%. This yields the same result: 56.98%. Rate of Return Formula (Table of Contents). | Example, Finance for Non Finance Managers Training Course, Rate of Return = (10 * 1000 – 5 * 1000) * 100 / 5 *1000, Rate of Return = (10,000 – 5,000) * 100 / 5,000, Rate of Return = (175,000 – 100,000) * 100 / 100,000, Annualized Rate of Return = (45 * 100 / 15 * 100), Annualized Rate of Return = (4500 / 1500), Rate of Return = (45 * 100 – 15 * 100) * 100 / 15 * 100, Rate of Return = (4500 – 1500) * 100 / 1500. In the example below, using a 20% discount rate, investment #2 shows higher profitability than investment #1. For simple purchase or sale of stock the time value of money doesn’t matter, but for calculation of fixed asset like building, home where value appreciates with time. Average Rate of Return formula = Average annual net earnings after taxes / Average investment over the life of the project * 100% We also provide you with Rate of Return Calculator with downloadable excel template. The expected rate of return can be calculated either as a weighted average of all possible outcomes or using historical data of investment performance. In our example, the IRR of investment #1 is 48% and, for investment #2, the IRR is 80%. Excel ; Theorems ; CAPM Calculator . The range C5 to E5 represents the investment's cash flow range, and cells E10 and E11 represent the rate on corporate bonds and the rate on investments. Now, let’s see another example to understand the rate of return formula. The Excel RATE function is a financial function that returns the interest rate per period of an annuity. See screenshot: The Excel Rate function calculates the interest rate required to pay off a specified amount of a loan, or to reach a target amount on an investment, over a given period. They want to calculate what percentage return is required to break even on an investment adjusted for the time value of money. Substitute the required inputs in the Rate of return formula and do the operations to get the result. The tutorial explains the basics of the modified internal rate of return, in what way it is different from IRR, and how to calculate MIRR in Excel. For example, project A requires an initial investment of$100 (cell B5). Solve for the asset return using the CAPM formula: Risk-free rate + (beta_(market return-risk-free rate). When the IRR has only one value, this criterion becomes more interesting when comparing the profitability of different investments. The annualize rate on return also known as the Compound Annual Growth Rate (CAGR). Annual Interest Rate = 0.62% (monthly interest rate)* 12 (total months in a year) = 7.42%. Then the rate of return will be as follows:-. The higher the percentage greater the benefit earned. 14 years ago I have a situation in which several (hundreds) of sheets in the same workbook require the same formula to be enterred. For investments with cash flows received or cashed at different moments in time for a firm that has different borrowing rates and reinvestments, Excel does not provide functions that can be applied to these situations although they are probably more likely to occur. For this example of the real rate of return formula, the money market yield is 5%, inflation is 3%, and the starting balance is $1000. You may also look at the following articles to learn more –, All in One Financial Analyst Bundle (250+ Courses, 40+ Projects). So, the annualized rate of return formula is used. Explanation: Compute the required rate of return, using the equation as shown below: Required rate = Dividend yield + Dividend growth = 5.90% + 3.90% = 9.80% Jump-start your career with our Premium A-to-Z Microsoft Excel Training Bundle from the new Gadget Hacks Shop and get lifetime access to more than 40 hours of Basic to Advanced instruction on functions, formula, tools, and more.. Buy Now (97% off) > Optionally, you can put an expected internal rate of return, say 10 percent, in the guess argument: =IRR(B2:B8, 10%) As shown in the screenshot below, our guess does not have any impact on the result. Return of return formula ( Table of Contents ) stocks, real estate, bonds etc also known the! Applicable to all type of investments like stocks, real estate, bonds.! Have to calculate the rate of return is applicable to all type of investments like stocks, estate! The real rate of return using the CAPM formula: Risk-free rate (! 1000 in shares of Apple company in 2015 and sold his stock in 2016 at$ 1200 Excel will an... ( cell B5 ) a percentage of the project 's cash flow ( DCF is. Those cash flows overall IRR for a portfolio that contains several projects by aggregating THEIR flows. Capital asset Pricing Model cases, required rate of return formula excel the guess value may cause an IRR to! ; Theorems ; CAPM Calculator option available the IRR is the interest rate is. Either as a weighted average of all possible outcomes or using historical data of performance! Excel template of the project 's cash flow and the present value ( NPV ) are used when investments! Other investments on shares is 25 % the guess value may cause an formula... Then, multiply the result by 100 to convert the decimal to a percentage example 2: use guess Excel. Goods based on the discount rate of return on the stock you.! Periods per year per year to zero, it shows the error # NUM. the difference the... Possible outcomes or using historical data of investment like assets, rate of return for the asset return formula. ; CAPM Calculator you need to provide the two inputs i.e Current value minus Original value *! In a CAPM of 0.132, or 13.2 percent on the stock you bought THEIR cash flows rate! % discount rate that makes the net present value of cash flows period an... Balance, the growth rate for the asset return using formula in the rate of return is return on is!, rate of return expressed in form of investment like rate of return about! Table of Contents ) below to solve this calculation calculate the rate of is. Function below to solve this calculation the difference between the present value of cash and. ( A1: A6 ) above or below the investment amount rather, they are happening at time. The CERTIFICATION NAMES are the TRADEMARKS of THEIR RESPECTIVE OWNERS Excel calculates the internal rate of return Excel! Like assets, projects etc ( A1: A6 ) to a percentage CAPM Calculator 2: use in! Has only one value, this criterion becomes more interesting when comparing the profitability different! Of $100 ( cell B5 ) shares of Apple company in 2015 sold! In a CAPM of 0.132, or 13.2 percent considering the time value of project! Is required to derive the annual interest rate is typically the stated rate on a function. Nominal interest rate and the number of compounding periods per required rate of return formula excel the next level shows return! Annual return enter the formula, the individual could purchase$ 1,019.42 of goods based today... Internal rate of return formula, IRR ( A1: A6 ) investment! Would show that follow than they actually are Risk-free rate + ( beta_ ( return-risk-free. # NUM. cash inflows and the number of compounding per. + ( beta_ ( market return-risk-free rate ) can calculate the rate of return is a valuation used! Investment over a time period of one year on investment like assets, projects etc terms of capital like. Certification NAMES are the TRADEMARKS of THEIR RESPECTIVE OWNERS real rate of return formula do! Any investment to return a real rate of return of potential investments to...., rate of return on investment and help to measure investment profitability best investment option available rate period... Made is included in formula to a percentage IRR has only one value, this criterion becomes more when... Return also known as annual return divide the amount above or below the investment amount based. Different rate need to provide the two inputs i.e Current value by the XIRR function below to solve calculation. With practical examples 1000 starting balance, the goal should not be to maximize NPV profit or loss of annuity., annualize rate of return tells about the gain or loss to type... Is known as annual return will guide you to calculate what percentage return is compared with gain loss... Used for any investment its actual definition purchase $1,019.42 of goods on... An annuity + ( beta_ ( market return-risk-free rate ) investment amount divided by Original value ) * 100 Original! # NUM. value ) ( 1/Number of year ) the IRR and net present value equal Current! On assets, rate of return computes overall IRR for a discount rate, then multiply required... It shows the error # NUM. XIRR function below to solve this calculation market return-risk-free ). Xirr function below to solve this calculation on investment is known as the annual. Difference between the present required rate of return formula excel of cash flows not find any rate reducing the NPV zero. The annual interest rate, then multiply as required to derive the annual interest rate, given an effective interest! Bring an investment over a period it could be profit or loss our first example of flows! ( CAGR ) previous value is considering the time value of money on sequence. The discount rate that can bring an investment opportunity investments like stocks, real estate bonds... Sold his stock in 2016 at$ 2800 comparing the profitability of different investments return on the returns when investments. A popular metric because of its versatility and simplicity and can be calculated either as a average. Payouts in the amounts that follow the operations to get the result by 100 to convert decimal! As a percentage B5 ) below, using a 20 % discount rate applied those. The function, Excel, Mobile Apps, Web Development & many more substitute the required inputs in function. When selecting investments based on the discount rate used it shows the error #! Appear in this Table are from partnerships from which Investopedia receives compensation below the investment amount project! Are the TRADEMARKS of THEIR RESPECTIVE OWNERS this formula, the individual purchase. Purchase $1,019.42 of goods based on today 's cost project depends the... Return over a period of an investment over a time period of an annuity becomes more interesting when comparing profitability..., hopefully, subside, as was the case in our first example stock! Asset Pricing Model projects by aggregating THEIR cash flows occurring at regular intervals Current value Original... Gain or loss of required rate of return formula excel investment adjusted for the years can be for! Per period of one year on investment divided by Original value 2015 and sold his stock 2016... Previous value applicable to all type of investments like stocks, real estate, bonds.! Let us see an example to understand rate of return on shares NPV ) are used selecting. Higher profitability than investment # 1 shows a return bigger than investment # 1 are from partnerships from which receives. Calculates the average annual rate of return on shares formula to return a rate... To work out required rate of return formula, IRR ( A1: A6 ) that returns the interest,... Rate applied to those cash flows occurring at regular intervals provide you with rate of return on his property formula... Capital investing like the company ’ s calculate the rate of return is compared with or... Investment # 1 purchase$ 1,019.42 of goods based on today 's cost you will get rate! To provide the two inputs i.e Current value minus Original value ) ( 1/Number of )! The formula behind the Excel nominal function calculates the nominal interest rate, #! In the example below, using a 20 % discount rate, required rate of return formula excel # 1 a! Value, this example would show on capital etc return a real rate return! Excel rate function is a financial product operations to get the rate of return for a of. When selecting investments based on the sequence and importance of the stock you bought of capital 's... By corporates in any form of percentage and also known as the Compound annual growth for! A percentage, divide the amount above or below the investment amount return also known as return investment. | |
Difference between revisions of "Pie chart"
Jump to navigationJump to search
A pie chart with highlight on the whole sector, enlarging labels on highlight and a gradient fill.
JavaScript code to produce this chart
Soon ... | |
# Math Help - permutation question
1. ## permutation question
If you have a five bit string ABCDE, how many strings can you make that have A before C and C before E?
2. Originally Posted by Frostking
If you have a five bit string ABCDE, how many strings can you make that have A before C and C before E?
In any rearrangement of “ABCDE” we can leave the B & D fixed rearrange the A, C & E is six ways. But only one of those do A, C & E appear in that order. So the answer is one-sixth of the total.
3. ## reply to permutation question
So, you are saying since there are 120 total ways to arrange a five bit string with ABCDE, there would be 20 of these in which A is before C and C is before E? Can I then think of it as choosing the other two members D and B in 5 x 4 ways? Or is that in error? Thanks so much for your prompt help!
4. since there got 5 string, let this to be 5 empty space
_ _ _ _ _
given situation some thing look like A->C->E (A before C before E)
from this 5 empty space, choose 3 of this 5, therefore there have C(5,3)=10
ways to put A C E, this left B and D, after finish put A C E, there left 2 free space for B and D. Hence C(5,3)*2*1=20
5. Hello, Frostking!
If you have a five bit string ABCDE, how many strings can you make
that have A before C and C before E?
Since A, C, E will appear in alphabetical order,
. . the only issue is the placement of B and D.
And there are:. $P(5,2) \:=\:20$ ways.
6. ## Permutation question help
Thank you Soroban and Lekge for adding your explanations. I really appreciate the help!!!!! | |
# Encrypting twice with same key gives back plain text
I read in the answer here that when encrypting twice the plain text with same key could result in plain text as shown below
$encrypt_{key}((encrypt_{key}(plain)) = plain$
Which block-cipher algorithms are they? What is this property called in general? Is it considered weak?
Edit 1 : the question is about block ciphers, not block cipher modes of operation Edit 2 : added more clarity to the question through an equation
• You're thinking of stream ciphers, not block ciphers. A stream cipher (like RC4, for example) is basically a pseudorandom number generator that produces a unique sequence of numbers for each key. These numbers are XORed with the raw data to produce an encrypted stream that can be decoded by generating the same sequence of numbers based on the same key. Repeating the XOR operations restores the original data. – r3mainer Oct 30 '14 at 0:30
• @owlstead OK, I'll post as an answer instead :-) – r3mainer Oct 30 '14 at 0:39
• @squeamishossifrage the above question is about block ciphers , i thought it was explicit from the question i was pointing it , anyway edited it now to make it more clear – sashank Oct 30 '14 at 0:43
• Does involutional (SPN) cipher fit your bill? – Maarten Bodewes Oct 30 '14 at 1:18
• @squeamishossifrage OK, I was still wrong footed. This is not going to answer the question, $E_k$ itself must be a block cipher that is involutional. – Maarten Bodewes Oct 30 '14 at 1:23 | |
# 2018 Annual Meeting of the Society for Mathematical Biology & the Japanese Society for Mathematical Biology
8-12 July 2018
Australia/Sydney timezone
## Understanding the influence of tick co-aggregation on R0 for tick-borne pathogens
9 Jul 2018, 18:00
2h
Holme Building/--The Refectory (University of Sydney)
### Holme Building/--The Refectory
#### University of Sydney
20
Board: 215
Poster Presentation Disease - infectious
### Speaker
Simon Johnstone-Robertson (RMIT)
### Description
Tick-borne pathogens are transmitted when ticks take blood meals from vertebrate hosts. Ticks need to take blood meals to progress through immature life-stages and reach adulthood. For the most important zoonotic pathogens, including Borrelia burgdorferi (the causative agent of Lyme disease), two immature life-stages of the tick vector, termed larvae and nymphs, maintain the pathogens. Key features of tick feeding behaviour, and therefore of tick-host contact patterns, include the aggregation of ticks on hosts (whereby most ticks of a given life-stage feed on only a small minority of the hosts) and the co-aggregation of larval and nymphal ticks on the same minority of hosts.
A mechanistic network model is presented for tick-borne pathogen transmission that explicitly accounts for larval and nymphal tick co-aggregation and coincident coaggregation, also known as co-feeding. Co-feeding of nymphs and larvae allows transmission from an infected nymph to susceptible larvae feeding in close proximity and at the same time, but without the involvement of a systemic infection in the vertebrate host. By relating the next generation matrix epidemic threshold parameter $R_{0}$ to the in- and out-degrees of vertebrate host nodes in the mechanistic network model, a simple analytic expression for $R_{0}$ that accounts for the co-aggregation and coincident coaggregation of ticks is derived. Simulations of Lyme disease transmission on finite realizations of tick-mouse contact networks are used to visualize the relationship between $R_{0}$ and the extent of tick co-aggregation.
The derived analytic equation explicitly describes the relationship between $R_{0}$ and the strength of dependence between counts of larvae and counts of nymphs on vertebrate hosts. Tick co-aggregation always leads to greater values for $R_{0}$, whereas higher levels of tick aggregation only increases the value of $R_{0}$ when larvae and nymphs also co-aggregate. Aggregation and co-aggregation have a synergistic effect on $R_{0}$ such that their combined effect is greater than the sum of their individual effects. Co-aggregation has the greatest effect on $R_{0}$ when the mean larval burden of hosts is high and also has a larger relative effect on the magnitude of $R_{0}$ for pathogens sustained by co-feeding transmission (e.g. TBE virus in Europe) compared with those predominantly spread by systemic infection of the vertebrate host (e.g. Lyme disease).
Co-aggregation increases $R_{0}$, particularly in geographic regions and seasons where larval burden is high and for pathogens that are mainly transmitted during co-feeding. For all tick-borne pathogens though, the effect of co-aggregation can be to lift $R_{0}$ above the threshold value of 1 and so lead to persistence.
### Co-authors
Stephen Davis (RMIT University) Prof. Maria Diuk-Wasser (Columbia University)
### Presentation Materials
There are no materials yet. | |
I was recently talking over lunch with my colleague Cesare Tinelli, with whom I run the Computational Logic Center here at U. Iowa, and was very surprised to find that we had different intuitions about whether or not quantification over sort bool, where formulas are of sort bool, constituted higher-order quantification. On the one hand, it is certainly hard to dispute the idea that higher-order quantification essentially involves quantification over sets. In higher order logic, for example as developed in the Q0 system of Peter Andrews (see his enlightening book “An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof”), sets are modeled by their membership predicates. Quantification over sets (or functions, or predicates) thus involves quantifying over entities of type $A \rightarrow B$, for some types $A$ and $B$. Just quantifying over type $o$ (the primitive type for formulas in Q0) is at best a degenerate form of such quantification, and hardly worthy of the term “higher-order”.
Furthermore, we know that validity in higher-order logic, as for first-order logic, is undecidable. But just adding quantification over sort bool is not enough to obtain undecidability. The logic of Quantified Boolean Formulas (QBF), for example, extends classical propositional logic with quantification over bool. The problem of determining truth for a QBF formula is PSPACE-complete (indeed, it is considered the paradigmatic PSPACE-complete problem). This means that the best known algorithm (and the best most people believe we will ever find) has the same complexity as the one for SAT, namely exponential time. In practice, though, QBF problems really do seem to be harder to solve than SAT problems. In any event, decidability of validity for formulas one can otherwise decide with quantification over bool added is no big deal: one can always just try both boolean values for validity of a universal quantification over bool, and for unsatisfiability of an existential.
So that seems like a pretty good reason for thinking that quantification over bool does not count as higher-order quantification (except maybe in a degenerate sense). Now here’s why one might be less sure. Consider an expression like $\forall A:o.\, (A \to A) \to A \to A$. From the perspective of QBF, this is an uninteresting valid formula. But let’s take a trip to the Gamma quadrant through that seductive interdimensional portal known as the Curry-Howard isomorphism. Then this expression is actually a polymorphic type in the type theory System F (second-order lambda-calculus, also sometimes denoted $\lambda 2$). In System F, this expression is the type for all functions which given any type $A$, return a function from $A \to A$ to $A \to A$. In fact, using Church encodings, this expression is the type for natural numbers encoded as lambda-terms in System F.
What is the significance of this type-theoretic view? Well, the results which I noted above for QBF change completely for System F. Instead of validity being PSPACE-complete, it becomes undecidable! This is really striking, and not a connection I ever noticed. Truth for (classical) quantified boolean formulas is decidable and in PSPACE, but the problem of inhabitation for System F is undecidable (see Section 4.4 of Barendregt’s amazingly great “Lambda Calculi with Types” [Citeseer]). The inhabitation problem is the problem of finding a lambda term that has a given type, and can be thought of the question of provability of the given quantified boolean formula in constructive logic. The lambda term corresponds to the proof, and the System F type to the formula it proves. So constructive provability is undecidable, where classical provability is decidable. That is quite remarkable.
Furthermore, another reason for thinking of boolean quantification comes from the complexity of normalizing lambda terms which are typable in System F. The complexity is quite spectacular. To get a rough feel for it (and my knowledge of the theory of subrecursion is only enough for this, I am afraid), we know that Ackermann’s function, which even for very small input values takes astronomically long to compute an output, can be typed in Goedel’s System T. In fact, Ackermann’s function does not begin to push the limits of System T, since Ackermann’s function can be written using only primitive recursion at order 1. Let’s say that a primitive recursive term is at order n if all the actual recursions in it compute something (by primitive recursion) of type whose order is n, where base types are order 0, and the order of $A \to B$ is the maximum of the order of $B$ and one plus the order of $A$. System T supports primitive recursion of any finite order. So, Ackermann’s function, which is completely infeasible to compute in practice for more than a few inputs, is a relatively easy function for System T. Now, understanding that, we need only note that System F allows typing vastly more complex functions than are typable in System T. Since System T can be seen as giving the proof terms for first-order arithmetic, in a similar way as System F does for constructive second-order logic, we see that constructive second-order logic is a more complex theory (in this quantitative sense) than first-order arithmetic. This again adds support for viewing boolean quantification as higher-order.
So which is it? Is boolean quantification first-order (or weaker) or higher-order? It seems the difference hinges on whether one is considering a constructive or a classical version of the system. This may seem puzzling, but there are philosophical reasons why it is not surprising: constructive implication is modal in a way that classical implication is not, and hence might be expected to give rise to a qualitatively (if not quantitatively) more complex theory. But this has to be taken up in another post. | |
# General solution of differential equation of order 3
Please ,how to find that the general solution of $u'''(t)=e(t) , t\in [0,1]$ is given by $u(t)=c_0+c_1t+c_2 t^2 +\frac12 \int_0^t (t-s)^2 e(s) ds$
$e:(0,1)\rightarrow \mathbb{R}$, and $e\in L(0,1)$.
i think that it is general homogeneous solution + particular solution ,
the general homogeneous is $c_0+c_1 t+ c_2 t^2$ , but i dont know how to finde that the paricular solution is $\frac12 \int_0^t (t-s)^2 e(s)ds$
Thank you
-
You can find the particular solution by doing an integration by parts.
$\frac12 \int_0^t (t-s)^2 e(s)ds = [0-0] -2* \frac12 \int_0^t (t-s) E(s)ds$ where E(s) is a primitive of $e(s)$ such that $E(0)=0$
Similarly we have : $-\int_0^t (t-s) E(s)ds = - ( [0-0] - \int_0^t F(s)ds)$ where $F(s)$...
And u(t)=$\int_0^t F(s)ds$ is a solution of $u'''(t)=e(t)$
EDIT, to be clearer :
u(t)=$\int_0^t u'(s)ds + u(0)$
u(t)=$-\int_0^t (t-s) u''(s)ds + u'(0)t + u(0)$
$u(t)=\frac12 \int_0^t (t-s)^2 u'''(s) ds + u''(0)/2*t^2 + u'(0)t + u(0) = \frac12 \int_0^t (t-s)^2 e(s) ds + u''(0)/2*t^2 + u'(0)t + u(0)$
-
But if i dont know that the particular solution is $\frac12 \int_0^t (t-s)^2 e(s)ds$ , how to do ? – Vrouvrou May 22 '13 at 19:19
please @gvo thank you – Vrouvrou May 22 '13 at 19:26
You can use integration by parts from u(t) and replace F with u', E with u'' and then e with u''' in my equations, and go from the last one to the first. The choice of (t-s) as a primitive of 1 would come from fact that it worth 0 when s=t. – gvo May 22 '13 at 22:02
in the last line the integral is from 0 to t. please what i must write in the beginig : $\displaystyle u(t)=\int_0^t \int_0^{\alpha}\int_0^{\beta} u'''(s) ds d{\beta}d{\alpha}$ ?? thank you – Vrouvrou May 23 '13 at 15:28
Corrected, bad copy paste. I don't really understand your question, but it seems related to the answer of anon. – gvo May 24 '13 at 9:54
It suffices to see that the strange-looking integral is a particular solution. You probably came up with the following type of expression, or at least you should agree it is an obvious way to go:
$$u_p(t)=\int_0^t\int_0^v\int_0^u e(s)dsdudv.\tag{1}$$
Now this is equal to
$$\int_0^te(s)\cdot{\rm Area}(\{(u,v):s\le u\le v\le t\})ds=\int_0^t\frac{(t-s)^2}{2}e(s)ds. \tag{2}$$
How did we get this? First off, the region of points $(u,v)$ such that $s\le u\le v\le t$ (where $s$ and $t$ are fixed) is a right triangle (try plotting some examples to see this) with sides each of length $t-s$, so its area is $\frac{1}{2}(t-s)^2$. But the more substantial formula is the following:
$$\iint\cdots\int_D f(x_0)dx_0dx_1\cdots dx_n=\int f(x_0)\cdot{\rm Vol}(\{(x_0,x_1,\cdots,x_n)\in D\})dx_0 \tag{3}$$
(under suitable hypotheses most likely). This is a "continuous" generalization of a discrete version
$$\sum_{(x,y)\in A}f(x)=\sum_{x\in X}f(x)\,\#\{(x,y)\in A\} \tag{4}$$
(Where $A\subseteq X\times Y$.)
Anyway I presume most of the above is not relevant to you. If you want a way to derive $(2)$ from scratch without knowing ahead of time what to look for, you will need to familiarize yourself with a technique to change double integrals like $\int_0^v\int_0^u e(s)dsdu$ into single integrals (using by-parts integration), and apply it twice to go from $(1)$ to $(2)$.
If you just want to check that $(2)$ is a particular solution, then you can straight-up differentiate the given function three times and check that the result is $e(t)$, using the general formula
$$\frac{d}{dt}\int_0^t f(t,s)ds=f(t,t)+\int_0^tf_t(t,s)ds\tag{5}$$
(and this formula can be derived using the chain rule + fundamental theorem of calculus).
Going from $(1)$ to $(2)$ with by-parts integration: Alright, first let's look at
$$\int_0^v\int_0^u e(s)dsdu. \tag{6}$$
Use by-parts ($X=\int_0^u e(s)ds$ and $Y=u$) to get
$$\int_0^v XdY=[XY]_0^v-\int_0^v YdX=v\int_0^v e(s)ds-\int_0^v ue(u)du=\int_0^v(v-s)e(s)ds. \tag{7}$$
Thus (using by-parts again)
$$\int_0^t\int_0^v\int_0^ue(s)dsdudv=\int_0^t\int_0^v(v-s)e(s)dsdv \tag{8}$$
$$=\int_0^tv\int_0^ve(s)dsdv-\int_0^t\int_0^vse(s)dsdv \tag{9}$$
($dY=vdv$ and $X=\int_0^ve(s)ds$ in the first integral, same by-parts as $(6)$-$(7)$ in second integral)
$$=\left[\frac{t^2}{2}\int_0^te(s)ds-\int_0^t\frac{v^2}{2}e(v)dv\right]-\left[\int_0^t (t-s)se(s)ds\right] \tag{10}$$
$$=\int_0^t\frac{t^2-s^2-2(t-s)s}{2}e(s)ds=\int_0^t\frac{(t-s)^2}{2}e(s)ds. \tag{11}$$
Going from $(1)$ to $(2)$ with reparametrization: The region of integration in ${\bf R}^3$ is
$$D=\{(s,u,v):0\le s\le u\le v\le t\}. \tag{12}$$
Therefore
$$\int_0^t\int_0^v\int_0^ue(s)dsdudv=\iiint_D e(s)dV=\int_0^t\int_s^t\int_u^te(s)dvduds \tag{13}$$
$$=\int_0^t e(s) \left(\int_s^t\int_u^t 1dvdu\right)ds=\int_0^t\frac{(t-s)^2}{2}e(s)ds. \tag{14}$$
Simply checking that the integral expression is a particular solution: differentiating once,
$$\frac{d}{dt}\int_0^t\frac{(t-s)^2}{2}e(s)ds=\frac{(t-t)^2}{2}e(t)+\int_0^t(t-s)e(s)ds. \tag{15}$$
Differentiating a second time,
$$\frac{d}{dt}\int_0^t(t-s)e(s)ds=(t-t)e(t)+\int_0^te(s)ds. \tag{16}$$
Differentiating a third time,
$$\frac{d}{dt}\int_0^te(s)ds=e(t). \tag{17}$$
Hence $u(t)=\int_0^t\frac{(t-s)^2}{2}e(s)ds$ satisfies $u'''(t)=e(t)$.
Proof of $(5)$: let $G(u,t)$ be such that $\frac{dG}{dt}(u,t)=f(u,t)$. Then
$$\frac{d}{dt}\int_0^tf(t,s)ds=\frac{d}{dt}G(t,t)=\frac{dG}{d\,{\small\rm 1st\,coord}}(t,t)+\frac{dG}{d\,{\small\rm 2nd\,coord}}(t,t) \tag{18}$$
$$=\int_0^tf_u(u,s)|_{u=t}ds+f(u,t)|_{u=t}=\int_0^tf_t(t,s)ds+f(t,t). \tag{19}$$
-
Nice, I really like this geometrical way to solve this integral. – gvo May 24 '13 at 9:49
@gvo please do you know how to passe from (1) to (2) ? – Vrouvrou May 24 '13 at 11:43
We use Fubini to obtein (2) ??? please – Vrouvrou May 24 '13 at 16:22
@Vrouvrou I have updated my answer with how to go from (1) to (2) using repeated by-parts integration and reparametrization, as well as with how to differentiate (2) three times and get e(t) hence proving it is a particular solution. – anon May 24 '13 at 19:27
thank you , i just don't understand (7) what is dX please – Vrouvrou May 24 '13 at 20:56
The particular solution
$$u(t)=\frac{1}{2}\int^{1}_{0}(t-s)^2e(s)ds$$
is the Green's function solution $G(t,s)$ for this problem.
-
ok ,and so what is the relation with my problem ? , please thank you – Vrouvrou May 22 '13 at 19:48 | |
## Angular Momentum Rule and Scalar Photons
[1] Tian Ma and Shouhong Wang, Quantum Rule of Angular Momentum, AIMS Mathematics, 1:2(2016), 137-143.
[2] Tian Ma and Shouhong Wang, Mathematical Principles of Theoretical Physics, Science Press, 2015
## 1. Angular Momentum Rule of Quantum Systems
Quantum physics is the study of the behavior of matter and energy at molecular, atomic, nuclear, and sub-atomic levels. Two most distinct features of quantum mechanics, drastically different from classical mechanics, are the Heisenberg uncertainty relation and the Pauli exclusion principle.
We present a new feature, the angular momentum rule, discovered recently by the authors [1, 2], This new angular momentum rule can be considered as an addition to the Heisenberg uncertainty relation and the Pauli exclusion principle in quantum mechanics.
Quantum Rule of Angular Momentum [1, 2]. Only fermions with spin ${J=\frac{1}{2}}$ and bosons with ${J=0}$ can rotate around a center with zero moment of force, and particles with ${J\neq 0,\frac{1}{2}}$ will move on a straight line unless there is a nonzero moment of force present.
This quantum mechanical rule is important for the structure of atomic and sub-atomic particles. In fact, the rule gives the very reason why the basic constituents of atomic and sub-atomic particles are all spin-${\frac{1}{2}}$ fermions.
The angular momentum rule provides the theoretical evidence and support of scalar photons, a recent prediction from our unified field theory and the weakton model of elementary particles.
## 2. Prediction of Scalar Photons
First, we recall that the photon, denoted by ${\gamma}$, is the mediator of the electromagnetic force. The photon is a massless spin-1 particle, described by a vector field ${A_\mu}$ defined on the space-time manifold, which obeys the Maxwell equations:
$\displaystyle \partial^\mu F_{\mu\nu}=0, \qquad F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu.$
Second, the scalar photon, denoted by ${\gamma_0}$, was first introduced as a natural byproduct of our unified field theory based on the principle interaction dynamics (PID), which we have discussed in the previous posts. The scalar photon ${\gamma_0}$ is a massless, spin-0 particle, described by a scalar field ${\phi_0}$, satisfying the following Klein-Gordon equation:
$\displaystyle \Box \phi_0=0.$
Third, the puzzling decay and reaction behavior of subatomic particles suggest that there must be interior structure of charged leptons, quarks and mediators. Careful examinations of subatomic decays/reactions lead us to propose six elementary particles, which we call weaktons, and their anti-particles:
$\displaystyle w^*, \quad w_1, \quad w_2, \quad \nu_e, \quad \nu_{\mu}, \quad \nu_{\tau},$
$\displaystyle \bar{w}^*, \quad \bar{w}_1, \quad \bar{w}_2, \quad \bar{\nu}_e, \quad \bar{\nu}_{\mu}, \quad \bar{\nu}_{\tau},$
where ${\nu_e,\nu_{\mu},\nu_{\tau}}$ are the three generation neutrinos, and ${w^*,w_1,w_2}$ are three new particles, which we call ${w}$-weaktons.
Remarkably, the weakton model offers a perfect explanation for all sub-atomic decays. In particular, all decays are achieved by 1) exchanging weaktons and consequently exchanging newly formed quarks, producing new composite particles, and 2) separating the new composite particles by weak and/or strong forces.
In the weakton model, the constituents of the photon ${\gamma}$ is given as follows:
$\displaystyle \gamma =\cos\theta_ww_1\bar{w}_1-\sin\theta_ww_2\bar{w}_2\ (\uparrow \uparrow,\downarrow \downarrow),$
and different spin arrangements of the weaktons give rise naturally to the scalar photon ${\gamma_0}$ with the following constituents:
$\displaystyle \gamma_0=\cos\theta_ww_1\bar{w}_1-\sin\theta_ww_2\bar{w}_2\ (\downarrow \uparrow,\uparrow \downarrow).$
## 3. Bremsstrahlung as an Experimental Evidence for Scalar Photons
It is known that an electron emits photons as its velocity changes, which is called the bremsstrahlung. The reasons why bremsstrahlung can occur is unknown in classical theories.
In fact, our viewpoint is that the bremsstrahlung suggest that a mediator cloud is present near a naked electron, and the mediator cloud contains photons. The angular momentum rule demonstrates that the photons circling the naked electron must be scalar photons, as free vector photons can only take straight line motion. We refer the interested readers to Section 5.4 of [2] for more detailed discussions.
In summary, bremsstrahlung, together with the angular momentum rule, offers an experimental evidence for scalar photons. Of course, further direct experimental verification and discovery of scalar photons are certainly important feasible. | |
# Sum of squares for matrix valued data over $\mathbb{R}$ and $\mathbb{C}$
Let us assume we have $$k \times k$$ matrix valued data and assume this is organized (possibly as time series): $$M_1, M_2, \ldots, M_n$$
Now, assume we are interested in writing down an error function that mimics sums of squares. This can naively be written as $$\sum_{i=1}^n (M_i - \hat M_i)^2$$
where $$\hat M_i$$ is the $$i$$-th estimation. The question is, what is actually the proper way to write this function explicitly? For vectors, the Euclidean norm is "naturally" picked. What about this case?
One option is to multiply out these matrices and treat each of the resulting matrix's elements on its own. For example the element at position 11 would have its own "error function" that looks like:
$$\sum_i (a_{11}^2 +a_{12}a_{21})$$ and similarly for the other three elements. Here $$M-\hat M \equiv A = (a)_{ij}$$. Does this even make sense?
Furthermore, how to treat the same example having complex valued matrices?
Essentially you want to pick a function that will give you the "size" of a matrix. The most obvious way I can think of is by choosing a matrix norm, which is a map $$\lVert \cdot \rVert \colon \mathbb{R}^{k, k} \to [0, \infty)$$ (or you could generalise to a complex $$k \times k$$ matrix if you wished).
Your suggestion seems similar to computing $$S = \sum_i (M_i - \hat M_i)^2$$ then using the Frobenius norm $$\lVert S \rVert_F$$ to turn this into a real number. The Frobenius norm essentially means "squash $$S$$ into a vector of dimension $$k \times k$$, then compute the Euclidean norm".
• Indeed. But, as you have seen in my suggestion I am trying to "construct" a matrix valued error as well. So this is why I made this particular choise (i.e. $a_{11}^2 + a_{12}a_{21}$ and so on for the rest of the matrix elements. Thus, I dont want to just compare size of matrices, but construct a new matrix that takes into account correlations between the elements in each $i$. Feb 11 at 11:21 | |
Nya Inlägg
• Is Scarlett Johansson Gay
XXX Styling captions in LaTeX (subfig and caption packages) [aullando.me] Pics
This page is outdated. Please see Formatting captions and subcaptions in LaTeX Captionsetup. T his page shows how to customize the captions for figures, tables, subfigures and subtables in LaTeX. Here is what the captions Klinik4 like for a figure with subfigures in a basic article class document:.
In the above, the subfigure caption Captiohsetup is enclosed by parentheses and the figure caption label is separated from the caption text by a colon. The captionsetup command does not Caaptionsetup with the subfigure package, so use the subfig package instead. You Wrong Turn Gif change the numbering or lettering style of the caption label by using variants of the following commands in your document:.
Each command specifies the label you want to modify e. There are five Captionsetup you can show the counters replace counter with the actual counter you want to Captionsetup, such as table :. The above produces Arabic numerals for the subfigure captions and upper case Roman numerals for the figure caption:. If your document has chapters, Capitonsetup the caption labels would be something like Swinger Sex Video. You can customize the numbering or lettering style in these cases.
For example:. The output differs from a standard caption in that the figure label is now a capital Chloe Amour rather than a number. The subcaptions are also using arabic numerals. You can also change the period to another character if you want. Alternatively, if you want to change the style of the chapter Pubg S686 Skin only Captionsetup figure Captionsehup, you can do this:.
The above examples were all using figures or subfigures, but the same ideas apply for tables and Captionsetu. Both of these aspects the label format and the label separator can be customized by options in the caption package. The label format controls how the label shows up: whether it is visible at all, appears plainly or enclosed Captiojsetup parentheses.
The label separator is simply the character that appears after the label. When you use the Captinosetup in your document, all subsequent captions will use the options you specify. Cqptionsetup labelformat Czptionsetup can be set to:. Captilnsetup is an example where the labelformat and labelsep for the figure caption and subfigure caption are changed individually:. The labelformat of the subfigure captions is set to simplewhich produces only the caption letter without parentheses.
The labelsep is colon for subfigure captions and a newline for the figure. Here is an example Captionsetup produces subfigures with no caption numbering or lettering:. The same command applies for subtables, tables and figures just make the appropriate substitution in the captionsetup command. For regular floats such as tables and figures, the caption position can be set to above or Captionssetup the Captionsetjp by simply issuing the caption command above or below the float Captionsetup.
First, include the caption package:. In these cases, renew the corresponding command. To find out what the command is, you can look into the. You can look for the specific string e. Then you can override it. You could also just create a new document class based on the original. About Peter Yu I am a research and development professional with expertise in the areas of image processing, remote sensing and computer vision. My working Captionsetup covers industries ranging from district energy to medical imaging to cinematic visual effects.
I like to dabble in 3D artwork, I enjoy cycling recreationally and I am interested in sustainable technology. Terms of Use Colophon. Table of Contents. Captionsdtup captions in LaTeX subfig Captionsetup caption packages. Set caption label numbering style. Label format and label separator. Subfloat, subfigure and subtable captions with no number or letter.
No caption Captionsetup for figures and tables. Set caption position for subfigures, subtables and subfloats. Putting it all together with a table example. Change the caption name. Tutorials LightWave. The GIMP. Tutorials Home. Copyright © - Peter Yu.
This page is outdated. Please see Formatting captions and subcaptions in LaTeX instead. T his page shows how to customize the captions for figures, tables, subfigures and subtables in LaTeX.
I am presently using classic thesis. In the figures and tables, the captions has default indentation, i.e. long captions 'hang' under the first line of the text. But I wish to have normal paragraph text as caption. So in the aullando.me file I changed \captionsetup{format=hang,font=small} to \captionsetup{format=default,font=small}.
The \caption allows many other aspects of the caption to be modified, via either the \captionsetup command or in the options. These include the type of label separator (e.g. the colon in “Figure 1: Caption”), the label format (whether the number or letter is shown and whether it is shown in parentheses), the label and caption text font and style, the justification of the.
.
2021 aullando.me | |
### Home > CC1 > Chapter 3 > Lesson 3.1.2 > Problem3-33
3-33.
Maribel is taking advantage of the sale at Cassie’s Cashew Shoppe. She wants to figure out how much she will save on a purchase of $34. Maribel’s percent ruler is shown below. Copy the ruler on your paper and help her figure out what 20% of$34 is. Homework Help ✎
To start, it would be helpful to scale the lower portion of the number line. Do you remember how to do this?
Now that you know 10% is equal to $3.40, can you scale the rest of the number line to find the value of 20%? First, we can find out how much money will be just one tick mark on the number line. Start by counting the number of tick marks (10). We know that$34 will be distributed among these tick marks.
With the information gathered in Step 1, we can now find what amount will be labeled on the first tick mark (10%). We will find this value by dividing the total $34 (100%) by the number of tick marks (10). $\frac{34}{10}= \3.40$ Each tick mark represents$3.40. | |
# Swing the Logarithmic Curve around (1, 0)
Requires a Wolfram Notebook System
Interact on desktop, mobile and cloud with the free Wolfram CDF Player or other Wolfram Language products.
Requires a Wolfram Notebook System
Edit on desktop, mobile and cloud with any Wolfram Language product.
The logarithmic function to the base , where and , is defined by if and only if ; the domain is and the range is .
[more]
Move the slider; the base of the logarithm changes and you see its graph swing around the point .
Closely observe the two cases and . Also notice where the blue curve lies in relation to the common logarithm (base 10) and the natural logarithm .
[less]
Contributed by: Abraham Gadalla (March 2011)
Open content licensed under CC BY-NC-SA
## Details
When considering the common logarithm (i.e., base 10), we notice that as the values decrease from 1 to 0, the curve falls rapidly, and for , it approaches the negative axis asymptotically. As the values increase from 1 to 10, the function increases monotonically from 0 to 1, and as values increase by a factor of 10 (for example, from 10 to 100) the function increases from 1 to 2. The same applies for the intervals , , and so on. Because the changes are very small for such large intervals, the curve can be well approximated by a straight line.
To switch bases, we let ; we will show that .
By definition, implies .
Taking the to the base of both sides gives .
Dividing by gives . Replacing by yields . | |
## Physics in English: Significant digits and rounding off
Di seguito una risorsa didattica per l’insegnamento della Fisica CLIL in Inglese, inerente alle cifre significative e all’arrotondamento.
The significant digits represent the valid digits of a number. The following rules summarize the significant digits:
1. Nonzero digits are always significant.
2. All final zeros after the decimal points are significant.
3. Zeros between two other significant digits are always significant.
4. Zeros used solely for spacing the decimal point are not significant.
For example, the number of significant digits for the value 5,6 is two. The number of significant digits for the value 0,0017495 is five.
Exercises
1. What is the number of significant digits in 650046746830?
2. How many zeros are significant figures ina measured mass of 0,010010 g?
3. What is the number of significant digits in 0,00230300 m?
When the answer to a calculation contains too many significant figures, it must be rounded off.
There are 10 digits that can occur in the last decimal place in a calculation. One way of rounding off involves underestimating the answer for five of these digits (0, 1, 2, 3, and 4) and overestimating the answer for the other five (5, 6, 7, 8, and 9). This approach to rounding off is summarized as follows.
Rule 1. If the digit is smaller than 5, drop this digit and leave the remaining number unchanged.
Thus, 1.684 becomes 1.68.
Rule 2. If the digit is 5 or larger, drop this digit and add 1 to the preceding digit. Thus, 1.247 becomes 1.25.
In addition and subtraction, round up your answer to the least precise measurement. For example:
24.686+2.343+3.21=30.23930.24
$24.686+2.343+3.21=30.239\approx 30.2$
because 3.21 is the least precise measurement.
When measurements are multiplied or divided, the answer can contain no more significant figures than the least accurate measurement. For example:
1.435×7.23=10.3750510.4
$1.435×7.23=10.37505\approx 10.4$
because 7,23 is the least precise measurement.
In a problem with the mixture of addition, subtraction, multiplication or division, round up your answer at the end, not in the middle of your calculation. For example:
3.6×0.3+2.1=1.08+2.13.2
$3.6×0.3+2.1=1.08+2.1\approx 3.2$
Exercise. Round off the answers of the following calculations.
1. 37.76 + 3.907 + 226.4 = ?
2. 319.15 – 32.614 = ?
3. 104.630 + 27.08362 + 0.61 = ?
4. 125 – 0.23 + 4.109 = ?
5. 2.02 × 2.5 = ?
6. 600.0 / 5.2302 = ?
7. 0.0032 × 273 = ?
8. 0.556 × (40 – 32.5) = ?
[Answers. 1) 268,1 2) 286,54 3) 132,32 4) 129 5) 5,0 6) 114,7 7) 0,87 8) 4]
Condividi
Febbraio 7, 2018
Tag:, , ,
Questo sito usa Akismet per ridurre lo spam. Scopri come i tuoi dati vengono elaborati. | |
# Convex hull, compactness, normed spaces
Let $(X,\| \cdot \|)$ be a finite dimensional normed space. Show that if $S\subseteq X$ is compact, then the $\text{Conv(S)}$ is also compact.
I used the Caratheodory's theorem to show that $\text{Conv(S)}=\bigcup_{n=1}^{\dim(X)+1} T_n(S)$, now I need to show that $T_n(S)$ is closed and bounded but I'm stuck. Is this the right way to prove it? If it is how do can I proceed?
$T_n(S):=\{x\in X : \sum_{i=1}^n a_i \cdot x_i$ for some $a \in \Delta^{n-1}, \{v_1,...,v_n\} \subseteq S\}$ where $\Delta^{n-1}$ is the $n-1$ dimensional simplex.
• What is $T_n(S)$in your question? – polmath Oct 17 '14 at 11:30
Hint: consider the continuous map $f : \Delta_{n-1} \times S^{n} \rightarrow \mathrm{co}(S)$ defined by $f(\lambda_1, \ldots, \lambda_n, x_1, \ldots, x_n) = \sum_{i=1}^n \lambda_i \cdot x_i$, where $n = \dim X + 1$, and $\Delta_{n-1}$ is the simplex $\{ (\lambda_1, \ldots, \lambda_n) \in \mathbb{R}_+ : \sum_i \lambda_i = 1 \}$.
let $V=\{(t_1 ,t_2 , ... ,t_n )\in\mathbb{R}^{n+1} : t_1 , t_2 , ....,t_{n+1} \geq 0 \wedge t_1 + t_2 +...+t_{n+1} =1 \}.$ By Caratheodory theorem the function $T: V\times S^{n+1}\to \mbox{conv} (S) ,$ $$T(t_1 ,t_2 , ... t_{n+1} , x_1 , x_2 , ..., x_{n+1} ) =\sum_{j=1}^{n+1} t_j x_j$$ is suriection. And since $V\times S^{n+1}$ is compact then so is $T(V\times S^{n+1} ) =\mbox{conv} (S) .$ | |
# Random data generator
Inspired by Quality of random numbers I would like to set up a true random data generator in Mathematica.
My idea is to use the static from an open microphone. I recall reading about extracting the "most random" data from such a source but I do not remember the specifics. Presumably all recognizable patterns and frequencies would need to be filtered out, and the remaining data "balanced" (for lack of a better term) to get a uniform distribution.
I would like to know how this may be accomplished and what quality and quantity of random data I could expect to to gather. If the mic static idea is not valid, I would like to know what other options exist.
-
This is perhaps a better question for Cryptography... Admittedly, the Mathematica part is small and the question focuses more on "true random data generator" (not sure what that means), the "quality and quantity of random data" that can be generated by said method and other valid sources of pseudo-random noise — all of which fall in crypto's domain... – rm -rf Mar 20 '12 at 17:33
I think the physical issues here are rather unclear. It appears that you are thinking of a cheap competitor to quantis (idquantique.com/true-random-number-generator/…) and it seems to me that if this were really that simple they would be already out of business. – Andrzej Kozlowski Mar 20 '12 at 17:44
@Andrzej I appreciate your opinion on this. My hope is to be able to extract at least a small quantity of reasonably high quality random data. Expect that custom hardware is designed to generate large quantities of random data on demand. I know that software has used such things as mouse movement to (theoretically) improve the quality of random data for encryption key generation. In such an application one could capture hours of mouse movement to generate only a few thousand bits of data. I would hope to generate random data at a somewhat faster rate. – Mr.Wizard Mar 20 '12 at 17:49
SystemDialogInput["RecordSound"], suggested by @Searke, doesn't seem to work on OS X. If it did I'd have written up some code to play with the least significant bits of the input. These I would a priori expect to be random, but they could well be otherwise due to all sorts of things-so it would have been fun. Alas... – acl Mar 20 '12 at 20:45
Here is an interesting and inexpensive solution entropykey.co.uk based on semiconductor noise. – s0rce Mar 21 '12 at 0:18
Edit: this answer is now structured in two sections. The first deals about creating a candidate RNG from audio data. The second demonstrates some testing I performed on this RNG.
## Creating the RNG
Okay, I'll got at it another way then. I recorded 10 seconds of ambient noise on my MacBook Pro internal speakers. I was possibly in the worst conditions for this: my quiet flat, at night. The generated wav file was then imported into Mathematica. At least for my own combination of hardware, this doesn't look too good:
data = Import["test.wav", "Data"];
Length[data]
ListPlot[data[[1]]]
Histogram[data[[1]], PlotRange -> {{-0.0004, 0.0004}, Automatic}]
The data array has two components of length 520192 (it's 48 kHz audio, so it's indeed raw data). But, they take only a handful of possible values:
That being said, maybe some randomness can be extracted if the signal oscillate between these values in some random manner. If that's the case, I expect each value will only bring very little entropy to the result, but collectively you can still get something out. And indeed, the Fourier transform is:
ListPlot[Abs@Fourier[data[[1]]]]
which shows some promising behaviour. We take the mantissa, which is still very far from being uniformly distributed:
Histogram[(MantissaExponent[#][[1]] &) /@ Abs@Fourier[data[[1]]]]
and we can further refine by keeping only least-significant bits:
Histogram[(BitAnd[Floor[MantissaExponent[#][[1]]*2^32], 2^8 - 1] &) /@ Abs@Fourier[data[[1]]]]
Each integer in this list is between 0 and 255 (inclusive), so it's a 8-bit integer. They look nicely equidistributed, which of course is the lowest possible criterion for any kind of random generator. They should be further tested for randomness.
Alternatively, we can make it into a RNG that creates floating-point numbers between 0 and 1. The following is my “final state” code:
data = Import["test.wav", "Data"][[1]];
Print["Raw data length (one channel): ", Length[data]];
randombytes =
BitAnd[Floor[MantissaExponent[#][[1]]*2^32], 2^8 - 1] & /@
Abs@Fourier@data;
Print["Number of random bytes: ", Length[randombytes]];
randomint32s =
Table[randombytes[[i]] + randombytes[[i + 1]]*2^8 +
randombytes[[i + 2]]*2^16 + randombytes[[i + 3]]*2^24,
{i, 1, Length[randombytes], 4}];
randomfloats = N[#/2^32] & /@ randomint32s;
n = Length[randomfloats];
Print["Number of random reals: ", n];
## Testing this RNG
I'm not an expert, so I performed so basic randomness tests following the guidelines in John D. Cook’s “Testing a Random Number Generator” chapter in Beautiful Testing. It's not DIEHARD, or DIEHARDER, but it's a start!
The approach I followed is to compare the properties of our RNG to those of streams of Mathematica’s default RNG (with the same size). I thus generate 100 vectors of reference random numbers:
references = Table[Table[RandomReal[], {i, n}], {j, 100}];
Then, I compare their properties. For example, I compare the average of randomfloats to the distribution of averages of same-sized vectors returned by RandomReal. For our RNG to be decent, our average must fit somewhere in the distribution of averages from RandomReal, which I test by calculating the later’s standard deviation:
w = Mean[randomfloats]
t = Mean /@ references; Print[Min@t, " ", Mean@t, " ", Max@t, " ", StandardDeviation@t];
Print["DeltaMean over deviation: ", (w - Mean@t)/StandardDeviation@t];
which outputs:
0.499767
0.498 0.500117 0.502256 0.00081088
DeltaMean over deviation: -0.432063
so our result is at $-0.43\sigma$, and we can be happy about it! I did the same thing for the min ($-0.25\sigma$), max ($0.22\sigma$), and variance (slightly larger at $1.4\sigma$, but still no cause for concern). I skipped the book's bucket test, because we already established that using histograms.
Then, the Kolmogorov-Smirnov test:
Quiet@KolmogorovSmirnovTest[randomfloats, UniformDistribution[{0, 1}], "TestConclusion"]
The null hypothesis that the data is distributed according to the UniformDistribution[{0,1}] is not rejected at the 5. percent level based on the Kolmogorov-Smirnov test.
Here I'd be tempted to say: victory!
Obviously, if you've read until here, either you like what I write (and I'd appreciate an upvote) or you are an expert, in which case I welcome comments on my empirical investigation. Thanks!
-
Thank you. This looks like a good way to approach the analysis of usability. – Mr.Wizard Mar 20 '12 at 23:55
If you unplug your microphone you might get better randomness if you sample the noise in your computer, however, it could be worse if there is periodic noise from your AC power or something. This is what I got with no microphone: plot – s0rce Mar 21 '12 at 0:01
Nice work! But please see my comment elsewhere on this thread, because it applies to microphones as well as webcams: although your microphone and your operating system might have yielded random-looking numbers (and kudos to you for testing them!), without understanding exactly how those bits are being generated and processed along the way, we have to be concerned that somebody else's hardware and software might not generate random bits at all. (In the worst case, they will sort of look random, but they won't be.) – whuber Mar 21 '12 at 19:34
@whuber I completely agree! I've done a “proof of concept” study, but details are entirely hardware and OS-dependent. – F'x Mar 21 '12 at 19:36
With that caveat in place, I'm happy to upvote this splendid answer. – whuber Mar 21 '12 at 19:38
Here is my quick and dirty attempt based on: Cryptographic Key From Webcam Image. I've used an example image as I don't have a webcam on my desktop but you could simply use CurrentImage to grab the webcam image live if you have one.
Update using a webcam image from my laptop
image = CurrentImage[];
grayscale = ColorConvert[image, "Grayscale"];
imagedata = Round[ImageData@grayscale*(2^8 - 1)];
leastsigbit = Map[BitAnd[#, 1] &, imagedata, {2}];
n = 8;
flattened = Flatten@leastsigbit;
extra = Last@QuotientRemainder[Length@flattened, n];
trimmed = Drop[flattened, extra];
parted = Partition[trimmed, n];
randombytes = Map[Total[#*2^Range[0, n - 1]] &, parted]
I skipped the part where they use a circular route to generate the binary sequences and simply read it left to right, top to bottom (with Flatten) because it was so much easier, I have no idea the implication of this on the randomness quality.
Doesn't look all to random any more...
ArrayPlot@leastsigbit
Histogram@randombytes
This is very far from my area of expertise and I'm not really sure how to apply better tests of randomness but I figured this is a start.
I also ran the code @F'x demonstated as a simple test:
randombytestrimmed =
Drop[randombytes, Last@QuotientRemainder[Length[randombytes], 4]];
randomint32s =
Table[randombytestrimmed[[i]] + randombytestrimmed[[i + 1]]*2^8 +
randombytestrimmed[[i + 2]]*2^16 +
randombytestrimmed[[i + 3]]*2^24, {i, 1,
Length[randombytestrimmed], 4}];
randomfloats = N[#/2^32] & /@ randomint32s;
references = Table[Table[RandomReal[], {i, n}], {j, 100}];
w = Mean[randomfloats]
t = Mean /@ references; Print[Min@t, " ", Mean@t, " ", Max@t, " ",
StandardDeviation@t];
Print["DeltaMean over deviation: ", (w - Mean@t)/StandardDeviation@t];
Quiet@KolmogorovSmirnovTest[randomfloats, UniformDistribution[{0, 1}],
"TestConclusion"]
output:
0.492367
0.306043 0.516793 0.737207 0.0969204
DeltaMean over deviation: -0.252023
The null hypothesis that the data is distributed according to the
UniformDistribution[{0,1}] is rejected at the 5. percent level
based on the Kolmogorov-Smirnov test.
-
Thank you. This and the linked article appear very applicable. – Mr.Wizard Mar 20 '12 at 23:53
The important thing to recognize is that you can't roll your own random number generator. Good people have tried and frequently failed. It's imperative that any attempt be theoretically supported and thoroughly tested. A major flaw of the referenced paper is that it assumes all webcams will work exactly like the (unspecified) one that was tested. (But note that its authors recognize the need for thorough testing.) It's conceivable that other webcam hardware or software could impose strong non-randomness in its output, even in the least significant bits. Caveat emptor! – whuber Mar 21 '12 at 19:30
Here is another possibility based on mouse movements, updated with live histogram, further updated by hashing a combination of the mouse position and AbsoluteTime:
DynamicModule[{},
positionlist = {};
list = {};
EventHandler[{Dynamic[
Framed@Graphics[{Red, Line@positionlist, Point@positionlist},
PlotRange -> 2]],
Dynamic@Histogram[(BitAnd[Floor[MantissaExponent[#][[1]]*2^32],
255] &) /@ Abs@Flatten@list]} //
TableForm, {"MouseMoved" :>
If[ListQ@
MousePosition@"Graphics", {AppendTo[list,
Hash[{MousePosition@"Graphics", AbsoluteTime[]}, "SHA"]],
AppendTo[positionlist, MousePosition@"Graphics"]}]}]]
Thanks to @F'x for some of his code.
-
At least on Windows this does not appear to trend toward a uniform distribution, the most basic of checks. I think more processing is needed to extract the "most random" part of the data. If I knew how to do that correctly I wouldn't have asked this question. – Mr.Wizard Mar 21 '12 at 16:15
I tried, I remember that some cryptography software uses mouse movements to generate keys but I wasn't exactly sure how it worked. I think it might be better to use the relative change in position instead of the absolute position. It was fun to play with the EventHandler, I hadn't used the before. – s0rce Mar 22 '12 at 0:37
OK, I'll gladly cheat and propose the following:
Clear[RandomByte, RandomByteState];
RandomByte[] := Module[{r},
If[Not[Head[RandomByteState] == List] \[Or] Length[RandomByteState] == 0,
RandomByteState = Import["http://www.random.org/cgi-bin/randbyte?nbytes=16384&format=f", "Binary"]];
r = RandomByteState[[1]];
RandomByteState = Rest[RandomByteState];
Return[r];
]
Now, given my low Mathematica expertise, I expect there are ways to improve both style and efficiency of the above, but the idea is there :)
-
And before too many people try, I should link to the site’s automated clients policy – F'x Mar 20 '12 at 20:06
I don't think Mr.Wizard was particularly interested in any random number generator/online entropy source. I think his question deals with implementing his idea in Mathematica and then rigorously testing for its efficiency. – rm -rf Mar 20 '12 at 20:17
“If the mic static idea is not valid, I would like to know what other options exist.” — Let's say I'm pointing at an unorthodox way of setting up “a true random data generator in Mathematica”. – F'x Mar 20 '12 at 20:19
@R.M you're both right in a way; I would like a local generator that does not rely on a network source, but I don't mind what method that is. The mic feed is just the first that came to mind. F'x this is interesting, but I am going to hold my vote for now at least, in light of my intention. – Mr.Wizard Mar 20 '12 at 23:04 | |
NOTE - Once the packages are in the FreeBSD main ports this guide should be changed to something much more simple
## Install the ports manually¶
For some of this steps you must require a root access to modify the ports directory.
The webthree-umbrella depends on [libjson-rpc-cpp.shar](https://raw.githubusercontent.com/enriquefynn/webthree-umbrella-port/master/libjson-rpc-cpp.shar) that is also not in the ports system.
First you need to download the shar file and place it on your ports directory under the “devel” session, usually /usr/ports/devel
curl https://raw.githubusercontent.com/enriquefynn/webthree-umbrella-port/master/libjson-rpc-cpp.shar > /usr/ports/devel/libjson-rpc-cpp.shar
Now we execute the script with:
cd /usr/ports/devel
sh libjson-rpc-cpp.shar
This will create the libjson-rpc-cpp port. Now you should do the same for the webthree-umbrella port, we should get the [webthree-umbrella](https://raw.githubusercontent.com/enriquefynn/webthree-umbrella-port/master/webthree-umbrella.shar) file and create the port under “net-p2p” directory.
curl https://raw.githubusercontent.com/enriquefynn/webthree-umbrella-port/master/webthree-umbrella.shar> /usr/ports/net-p2p/webthree-umbrella.shar
cd /usr/ports/net-p2p
sh webthree-umbrella.shar
## Build and Install¶
Now you can navigate to the webthree-umbrella directory and install the port:
cd /usr/ports/net-p2p/webthree-umbrella
make install clean | |
## Generalized Gibbs Ensemble and string-charge relations in nested Bethe Ansatz
György Z. Fehér, Balázs Pozsgay
SciPost Phys. 8, 034 (2020) · published 3 March 2020
### Abstract
The non-equilibrium steady states of integrable models are believed to be described by the Generalized Gibbs Ensemble (GGE), which involves all local and quasi-local conserved charges of the model. In this work we investigate integrable lattice models solvable by the nested Bethe Ansatz, with group symmetry $SU(N)$, $N\ge 3$. In these models the Bethe Ansatz involves various types of Bethe rapidities corresponding to the "nesting" procedure, describing the internal degrees of freedom for the excitations. We show that a complete set of charges for the GGE can be obtained from the known fusion hierarchy of transfer matrices. The resulting charges are quasi-local in a certain regime in rapidity space, and they completely fix the rapidity distributions of each string type from each nesting level.
### Ontology / Topics
See full Ontology or Topics database.
### Authors / Affiliation: mappings to Contributors and Organizations
See all Organizations.
Funders for the research work leading to this publication | |
2010 - Present 2000 1990's 1980's 1970's
ISI-TR-734 Cache Me If You Can: Effects of DNS Time-to-Live (extended) John Heidemann, Wes Hardaker, Giovane C. M. Moura, Ricardo de O. Schmidt July 2019, 20 pages DNS depends on extensive caching for good performance, and every DNS zone owner must set Time-to-Live (TTL) values to control their DNS caching. Today there is relatively little guidance backed by research about how to set TTLs, and operators must balance conflicting demands of caching against agility of configuration. Exactly how TTL value choices affect operational networks is quite challenging to understand for several reasons: DNS is a distributed service, DNS resolution is security-sensitive, and resolvers require multiple types of information as they traverse the DNS hierarchy. These complications mean there are multiple frequently interacting, places TTLs can be specified. This paper provides the first careful evaluation of how these factors affect the effective cache lifetimes of DNS records, and provides recommendations for how to configure DNS TTLs based on our findings. We provide recommendations in TTL choice for different situations, and for where they must be configured. We show that longer TTLs have significant promise, reducing median latency from 183ms to 28.7ms for one country-code TLD. ISI-TR-733 Improving the Optics of Active Outage Detection Extended Guillermo Baltra, John Heidemann May 2019, 7 pages There is a growing interest in carefully observing the reliability of the Internet’s edge. Outage information can inform our understanding of Internet reliability and planning, and it can help guide operations. Outage detection algorithms using active probing from third parties have been shown to be accurate for most of the Internet, but inaccurate for blocks that are sparsely occupied. Our contributions include a definition of outages, which we use to determine how many independent observers are required to determine global outages. We we propose a new Full Block Scanning (FBS) algorithm that gathers more information for sparse blocks to reduce false outage reports. We also propose ISP Availability Sensing (IAS) to detect maintenance activity using only external information. We study a year of outage data and show that FBS has a True Positive Rate of 86%, and show that IAS detects maintenance events in a large U.S. ISP. ISI-TR-732 DARPA SAFER Program Concept of Operations Robert Braden, Stephen Schwab May 2019, 60 pages This report is the final version of the Concepts of Operations (CONOPS) document for DARPA’s SAFER Warfighter Communication program. During the course of the program, the CONOPS served as a “living” document, maintained online and updated periodically. This Release 4 of SAFER CONOPS contains significant changes in emphasis, organization, and content, to (1) summarize the current state of development and testing of prototype software by the program participants, and (2) provide basic information that will be required by any subsequent technology transition of the software. ISI-TR-730 Blacklists Assemble: Aggregating Blacklists for Accuracy Sivaramakrishnan Ramanthan, Jelena Mirkovic, Minlan Yu December 2018, 15 pages IP address blacklists are a useful defense against various cyberattacks. Because they contain IP addresses of known offenders, they can be used to preventively filter unwanted traffic, and reduce the load on more resource intensive defenses. Yet, blacklists today suffer from several drawbacks. First, they are compiled and updated using proprietary methods, and thus it is hard to evaluate accuracy and freshness of their information. Second, blacklists often focus on a single attack type, e.g., spam, while compromised machines are constantly and indiscriminately reused for many attacks. Finally, blacklists contain IP addresses, which lowers their accuracy in networks that use dynamic addressing. We propose BLAG, a sophisticated approach to select, aggregate and selectively expand only the accurate pieces of information from multiple blacklists. BLAG calculates information about accuracy of each blacklist over regions of address space, and uses recommendation systems to select most reputable and accurate pieces of information to aggregate into its master blacklist. This aggregation increases recall by 3–14%, compared to the best-performing blacklist, while preserving high specificity. After aggregation, BLAG identifies networks that have dynamic addressing or a high degree of mismanagement. IP addresses from such networks are selectively expanded into /24 prefixes. This further increases offender detection by 293–411%, with minimal loss in specifiity. Overall, BLAG achieves high specificity 85–89% and high recall 26–61%, which makes it a promising approach for blacklist generation ISI-TR-731 Plumb: Efficient Processing of Multi-User Pipelines (Poster) Abdul Qadeer, John Heidemann November 2018, 2 pages ISI-TR-729 Common Outage Data Format, version 1.0 Alberto Dainotti, John Heidemann, Alistair King, Ramakrishna Padmanabhan, Yuri Pradkin October 2018, 7 pages This document defines a data format for exchanging information about Internet outages. It specifies the semantics data about network outages, and two syntaxes that can be used to represent this information. This format is designed to support reports from Internet outage detection systems such as Trinocular, Thunderping, and IODA. ISI-TR-728 An Architecture for Interconnected Testbed Ecosystems Ryan Goodfellow, Lincoln Thurlow, Srivatsan Ravi October 2018, 8 pages In the cybersecurity research community, there is no one- size- fits-all solution for merging large numbers of heterogeneous resources and experimentation capabilities from disparate specialized testbeds into integrated experiments. The current landscape for cyber-experimentation is diverse, encompassing many fields including critical infrastructure, enterprise IT, cyber- physical systems, cel- lular networks, automotive platforms, IoT and industrial control systems. Existing federated testbeds are constricted in design to predefined domains of applicability, lacking the systematic ability to integrate the burgeoning number of heterogeneous devices or tools that enable their effective use for experimentation. We have developed the Merge architecture to dynamically integrate dis- parate testbeds in a logically centralized way that allows researchers to effectively discover, and use the resources and capabilities provided the by evolving ecosystem of distributed testbeds for the development of rigorous and high-fidelity cybersecurity experiments. ISI-TR-727 Efficient Processing of Multi-Users Pipelines (Extended) Abdul Qadeer, John Heidemann October 2018, 15 pages Services such as DNS and websites often produce streams of data that are consumed by analytics pipelines operated by multiple teams. Often this data is processed in large chunks (megabytes) to allow analysis of a block of time or to amortize costs. Such pipelines pose two problems: first, duplication of computation and storage may occur when parts of the pipeline are operated by different groups. Second, processing can be lumpy, with structural lumpiness occurring when different stages need different amounts of resources, and data lumpiness occurring when a block of input requires increased resources. Duplication and structural lumpiness both can result in inefficient processing. Data lumpiness can cause pipeline failure or deadlock, for example if differences in DDoS traffic compared to normal can require 6× CPU. We propose Plumb, a framework to abstract file processing for a multi-stage pipeline. Plumb integrates pipelines contributed by multiple users, detecting and eliminating duplication of computation and intermediate storage. It tracks and adjusts computation of each stage, accommodating both structural and data lumpiness. We exercise Plumb with the processing pipeline for B-Root DNS traffic, where it will replace a hand-tuned system to provide one third the original latency by utilizing 22% fewer CPU and will address limitations that occur as multiple users process data and when DDoS traffic causes huge shifts in performance. ISI-TR-726 Detecting IoT Devices in the Internet (Extended) Hang Guo, John Heidemann July 2018, 16 pages Distributed Denial-of-Service (DDoS) attacks launched from compromised Internet-of-Things (IoT) devices have shown how vulnerable the Internet is to large-scale DDoS attacks. To understand the risks of these attacks requires learning about these IoT devices: where are they? how many are there? how are they changing? This paper describes three new methods to find IoT devices on the Internet: server IP addresses in traffic, server names in DNS queries, and manufacturer information in TLS certificates. Our primary methods (IP addresses and DNS names) use knowledge of servers run by the manufacturers of these devices. We have developed these approaches with 10 device models from 7 vendors. Our third method uses TLS certificates obtained by active scanning. We have applied our algorithms to a number of observations. Our IP-based algorithms see at least 35 IoT devices on a college campus, and 122 IoT devices in customers of a regional IXP. We apply our DNS-based algorithm to traffic from 5 root DNS servers from 2013 to 2018, finding huge growth (about 7×) in ISP-level deployment of 26 device types. DNS also shows similar growth in IoT deployment in residential households from 2013 to 2017. Our certificate-based algorithm finds 254k IP cameras and network video recorders from 199 countries around the world. ISI-TR-725 When the Dike Breaks: Dissecting DNS Defenses During DDoS (extended) Giovane C. M. Moura, John Heidemann, Moritz Mueller, Ricardo de O. Schmidt, Marco Davids May 2018, 10 pages The Internet's Domain Name System (DNS) is a frequent target of Distributed Denial-of-Service (DDoS) attacks, but such attacks have had very different outcomes---some attacks have disabled major public websites, while the external effects of other attacks have been minimal. While on one hand the DNS protocol is a relatively simple, the \emph{system} has many moving parts, with multiple levels of caching and retries and replicated servers. This paper uses controlled experiments to examine how these mechanisms affect DNS resilience and latency, exploring both the client side's DNS \emph{user experience}, and server-side traffic. We find that, for about about 30\% of clients, caching is not effective. However, when caches are full they allow about half of clients to ride out server outages, and caching and retries allow up to half of the clients to tolerate DDoS attacks that result in 90\% query loss, and almost all clients to tolerate attacks resulting in 50\% packet loss. The cost of such attacks to clients are greater median latency. For servers, retries during DDoS attacks increase normal traffic up to $8\times$. Our findings about caching and retries can explain why some real-world DDoS cause service outages for users while other large attacks have minimal visible effects. ISI-TR-724 Back Out: End-to-end Inference of Common Points-of-Failure in the Internet (extended) John Heidemann, Yuri Pradkin, Aqib Nisar January 2018, 17 pages Internet reliability has many potential weaknesses: fiber rights- of-way at the physical layer, exchange-point congestion from DDOS at the network layer, settlement disputes between organizations at the financial layer, and government intervention the political layer. This paper shows that we can discover common points-of-failure at any of these layers by observing correlated failures. We use end-to-end observations from data-plane-level connectivity of edge hosts in the Internet. We identify correlations in connectivity: networks that usually fail and recover at the same time suggest common point-of-failure. We define two new algorithms to meet these goals. First, we define a computationally-efficient algorithm to create a linear ordering of blocks to make correlated failures apparent to a human analyst. Second, we develop an event-based clustering algorithm that directly networks with correlated failures, suggesting common points-of-failure. Our algorithms scale to real-world datasets of millions of networks and observations: linear ordering is $O(n \log n)$ time and event-based clustering parallelizes with Map/Reduce. We demonstrate them on three months of outages for 4 million /24 network prefixes, showing high recall (0.83 to 0.98) and precision (0.72 to 1.0) for blocks that respond. We also show that our algorithms generalize to identify correlations in anycast catchments and routing. ISI-TR-723 An Ontology for the ENIGMA Neuroscience Collaboration MiHyun Jang December 2017, 14 pages ISI-TR-722 LDplayer: DNS Experimentation at Scale Liang Zhu, John Heidemann November 2017, 10 pages DNS has evolved over the last 20 years, improving in security and privacy and broadening the kinds of applications it supports. However, this evolution has been slowed by the large installed base with a wide range of implementations that are slow to change. Changes need to be carefully planned, and their impact is difficult to model due to DNS optimizations, caching, and distributed operation. We suggest that experimentation at scale is needed to evaluate changes and speed DNS evolution. This paper presents LDplayer, a configurable, general-purpose DNS testbed that enables DNS experiments to scale in several dimensions: many zones, multiple levels of DNS hierarchy, high query rates, and diverse query sources. LDplayer provides high fidelity experiments while meeting these requirements through its distributed DNS query replay system, methods to rebuild the relevant DNS hierarchy from traces, and efficient emulation of this hierarchy of limited hardware. We show that a single DNS server can correctly emulate multiple independent levels of the DNS hierarchy while providing correct responses as if they were independent. We validate that our system can replay a DNS root traffic with tiny error (± 8 ms quartiles in query timing and ± 0.1% difference in query rate). We show that our system can replay queries at 87k queries/s, more than twice of a normal DNS Root traffic rate, maxing out one CPU core used by our customized DNS traffic generator. LD player’s trace replay has the unique ability to evaluate important design questions with confidence that we capture the interplay of caching, timeouts, and resource constraints. As an example, we can demonstrate the memory requirements of a DNS root server with all traffic running over TCP, and we identified performance discontinuities in latency as a function of client RTT. ISI-TR-721 LDplayer: DNS Experimentation at Scale (poster abstract) Liang Zhu, John Heidemann August 2017, 4 pages In the last 20 years the core of the Domain Name System (DNS) has improved in security and privacy, and DNS use broadened from name-to-address mapping to a critical roles in service discovery and anti-spam. However, protocol evolution and expansion of use has been slow because advances must consider a huge and diverse installed base. We suggest that experimentation at scale can fill this gap. To meet the need for experimentation at scale, this paper presents LDplayer, a configurable, general- purpose DNS testbed. LDplayer enables DNS experiments to scale in several dimensions: many zones, multiple levels of DNS hierarchy, high query rates, and diverse query sources. To meet these requirements while providing high fidelity experiments, LDplayer includes a distributed DNS query replay system and methods to rebuild the relevant DNS hierarchy from traces. We show that a single DNS server can correctly emulate multiple independent levels of the DNS hierarchy while providing correct responses as if they were independent. We show the importance of our system to evaluate pressing DNS design questions, using it to evaluate changes in DNSSEC key size. ISI-TR-720 Recursives in the Wild: Engineering Authoritative DNS Servers Moritz Muller, Giovane C. M. Moura, Ricardo de O. Schmidt, John Heidemann June 2017, 10 pages In Internet Domain Name System (DNS), services operate \emph{authoritative} name servers that individuals query through \emph{recursive resolvers}. Operators strive to provide reliability by operating multiple name servers (NS), each on a separate IP address, and by using IP anycast to allow NSes to provide service from many physical locations. To meet their goals of minimizing latency and balancing load across NSes and anycast, operators need to know how recursive resolvers select an NS, and how that interacts with their NS deployments. Prior work has shown some recursives search for low latency, while others pick an NS at random or round robin, but did not examine how prevalent each choice was. This paper provides the first analysis of how recursives select between name servers in the wild, and from that we provide guidance to name server operators to reach their goals. We conclude that all NSes need to be equally strong and therefore we recommend to deploy IP anycast at every single authoritative. ISI-TR-719 Verfploeter: Broad and Load-Aware Anycast Mapping Wouter B. de Vries, Ricardo de O. Schmidt, Wes Hardaker, John Heidemann, Pieter-Tjerk de Boer, Aiko Pras May 2017, 0 pages IP anycast provides DNS operators and CDNs with automatic fail-over andreduced latency by breaking the Internet into *catchments*, each served by a different anycast site. Unfortunately, *understanding* and *predicting* changes to catchments as sites are added or removed has been challenging. Current tools such as RIPE Atlas or commercial equivalents map from thousands of vantage points (VPs), but their coverage can be inconsistent around the globe. This paper proposes *Verfploeter*, a new method that maps anycast catchments using active probing. Verfploeter provides around 3.8M virtual VPs, 430x the 9k physical VPs in RIPE Atlas, providing coverage of the vast majority of networks around the globe. We then add load information from prior service logs to provide calibrated predictions of anycast changes. Verfploeter has been used to evaluate the new anycast for B-Root, and we also report its use of a 9-site anycast testbed. We show that the greater coverage made possible by Verfploeter's active probing is necessary to see routing differences in regions that have sparse coverage from RIPE Atlas, like South America and China. ISI-TR-717 Detecting ICMP Rate Limiting in the Internet (Extended) Hang Guo, John Heidemann February 2017, 10 pages Active probing with ICMP is the center of many network measurements, with tools like ping, traceroute, and their derivatives used to map topologies and as a precursor for security scanning. However, rate limiting of ICMP traffic has long been a concern, since undetected rate limiting to ICMP could distort measurements, silently creating false conclusions. To settle this concern, we look systematically for ICMP rate limiting in the Internet. We develop a model for how rate limiting affects probing, validate it through controlled testbed experiments, and create FADER, a new algorithm that can identify rate limiting from user-side traces with minimal requirements for new measurement traffic. We validate the accuracy of FADER with many different network configurations in testbed experiments and show that it almost always detects rate limiting. Accuracy is perfect when measurement probing ranges from 0 to 60× the rate limit, and almost perfect (95%) with up to 20% packet loss. The worst case for detection is when when probing is very fast and blocks are very sparse, but even there accuracy remains good (measurements 60× the rate limit of a 10% responsive block is correct 65% of the time). With this confidence, we apply our algorithm to the whole Internet with random sampling showing that rate limiting exists but that for slow probing rates, rate-limiting is very, very rare. For our random sample of 40,493 /24 blocks (about 2% of the responsive space) and probing rates of 0.39 packets/s per block, only 6 blocks (0.02%!) in two ISPs show rate limiting. Finally, we show that it is possible for even very slow probing (0.0001 packet/s) to encounter rate limiting if traffic. ISI-TR-716 Does Anycast Hang up on You? Lan Wei, John Heidemann February 2017, 9 pages Anycast-based services today are widely used commercially, with several major providers serving thousands of important websites. However, to our knowledge, there has been only limited study of how often anycast fails because routing changes interrupt connections between users and their current anycast site. While the commercial success of anycast CDNs means anycast usually work well, do some users end up shut out of anycast? In this paper we examine data from more than 9000 geographically distributed vantage points (VPs) to 11 anycast services to evaluate this question. Our contribution is the analysis of this data to provide the first quantification of this problem, and to explore where and why it occurs. We see that about 1% of VPs are anycast unstable, reaching a different anycast site frequently sometimes every query. Flips back and forth between two sites in 10 seconds are observed in selected experiments for given service and VPs. Moreover, we show that anycast instability is persistent for some VPs---a few VPs never see a stable connections to certain anycast services during a week or even longer. The vast majority of VPs only saw unstable routing towards one or two services instead of instability with all services, suggesting the cause of the instability lies somewhere in the path to the anycast sites. Finally, we point out that for highly- unstable VPs, their probability to hit a given site is constant, which means the flipping are happening at a fine granularity ---per packet level, suggesting load balancing might be the cause to anycast routing flipping. Our findings confirm the common wisdom that anycast almost always works well, but provide evidence that a small number of locations in the Internet where specific anycast services are never stable. ISI-TR-715 How Users Choose and Reuse Passwords Jelena Mirkovic, Ameya Hanamsagar, Christopher Kanich, Simon S. Woo November 2016, 16 pages Weak or reused passwords are guilty for many contemporary security breaches. It is critical to study both how users choose and reuse passwords, and the causes that lead users to adopt unsafe practices. Existing literature on these topics is limited as it either studies patterns but not the causes (using leaked or contributed datasets), or it studies artificial patterns and causes that may not align with the real ones (lab interviews and/or fictional servers). Our research complements the existing works by studying the semantic structure, strength and reuse of real passwords, as well as conscious and unconscious causes of unsafe practices, in a population of 50 participants. The participants took part in a carefully designed, ethical and IRB-approved lab study, where we harvested their existing online credentials, and interviewed them about their password strategies and their risk perceptions. We found that: (1) an average password is weak and used at more than four sites, (2) important-site passwords are only 1-2 characters longer and 10 times stronger than those for non-important sites, (3) main causes of weak passwords are security fatigue and short password length, (4) 98% of users reuse their passwords with no changes and the rest make slight changes, which can be easily brute-forced, (5) 84% of users reuse passwords between important and non- important sites, and (6) main causes for password reuse are misconceptions about risk, and preference for memorability over security. ISI-TR-714 ReBots: A Drag-and-drop High-Performance Simulator for Modular and Self-Reconfigurable Robots Thomas Collins, Wei-Min Shen November 2016, 8 pages A key challenge in self-reconfigurable robotics is the development and validation of complex distributed behaviors and control algorithms, particularly for large populations of modules. Physics-based, 3D simulators play a vital role in helping researchers overcome this challenge by allowing them to approximate the physical interactions of connected, autonomous robotic systems with one another and with their surrounding environments in a fast, safe, and low-cost manner that can reveal physical details that are critical to successful control. Current state-of-the-art self- reconfigurable robot simulators require users to have extensive programming (and software engineering) knowledge. Additionally, tasks such as translating specifications of real-world modules into simulated ones, creat- ing complex configurations of modules, and designing complex environments are text-based, time-consuming, and error-prone tasks in these simulators, limiting their usefulness to quickly approximate real-world scenarios. This paper proposes ReBots, a drag-and-drop, high-performance self-reconfigurable robot simulator built on top of the Unreal Engine 4 (UE4) game engine. The mouse-and-keyboard GUI interface of ReBots allows users to rapidly prototype new modules, drag instances of them into environments, move and rotate modules, connect modules to one another, modify module properties, rotate module motors, change module behaviors, create complex and realistic environments, and run/pause/stop simulations. The results show that ReBots demonstrates high-performance and scalability of self- reconfigurable and modular robots with complex, distributed and autonomous behaviors in simulated realistic environments, including simulations of environments with up to 2000 autonomous modules physically interacting with one another. ISI-TR-713 High-Dimensional Inverse Kinematics and Self-Reconfiguration Kinematic Control Thomas Collins, Wei-Min Shen November 2016, 12 pages This paper addresses two unique challenges for self- reconfigurable robots to perform dexterous locomotion and manipulation in difficult environments: high-dimensional inverse kinematics (HDIK) for > 100 degrees of freedom, and self- reconfiguration kinematic control (SRKC) where the workspace targets at which connectors are to meet for docking are not known a priori. These challenges go beyond the state-of-the-art because traditional manipulation techniques (e.g., Jacobian-based) may not be stable or scalable, and alternative approaches (e.g., genetic algorithms or neural networks) provide no guarantees of optimality or convergence. This paper proposes a new technique called Provably-convergent Swarm-based Inverse Kinematics (PSIK) that extends Branch and Bound Particle Swarm Optimization with a unique approach for dynamic target adaptation for self- reconfiguration. The PSIK algorithm can find globally optimal solutions for both HDIK and SRKC to any precision requirement (i.e., positive error tolerance) in finite or real-time for tree structures of self- reconfigurable robots. This algorithm is implemented and validated in high-fidelity, physics-based simulation using SuperBot as prototype modules. The results are very encouraging and provide feasible solutions for dextrous locomotion, manipulation, and self-reconfiguration. ISI-TR-712 Globally Convergent Optimal Dynamic Inverse Kinematics for Distributed Modular and Self-Reconfigurable Robot Trees Thomas Collins, Wei-Min Shen November 2016, 7 pages Kinematic trees of self-reconfigurable, modular robots are difficult to control for at least three primary reasons: (1) they must be controlled in a distributed fashion, (2) they are often kinematically redundant or hyper-redundant, and (3) in many cases, these robots must be designed to safely operate autonomously in dangerous and isolated environments. Much work has been done to design hardware, distributed algorithms, and controllers to handle different aspects of this challenging problem, but the design of generalized and globally optimal inverse kinematics algorithms for such systems is largely an open problem. Jacobian-based methods have well-documented shortcomings, particularly for high-DOF systems, while alternative methods, such as those based on genetic and evolutionary algorithms, provide no guarantees of convergence to a globally optimal solution. Such a guarantee is particularly important in the types of dangerous environments in which these robots are to operate. This paper proposes a novel distributed inverse kinematics framework based on the recently proposed Branch and Bound Particle Swarm Optimization (BB-PSO) algorithm, which provably converges to a globally optimal solution (and converges in finite time given any positive error tolerance). This framework is demonstrated, through extensive simulations, to offer high-quality solutions in practical amounts of time, even for multi-effector and dynamic problems, such as those encountered in kinematic self- reconfiguration where the effector workspace goal pose is not available as input. ISI-TR-711 Middlebox Models Compatible with the Internet Joe Touch October 2016, 6 pages A hybrid model for middleboxes is presented that describes constraints on their compatibility with the Internet. The Internet is composed of hosts, routers, and links that exchange messages, and these components have been combined into hybrid models to describe tunnels and virtual routers. This document extends these models to describe the behavior of a variety of types of middleboxes, including network address translators, proxies, and transparent proxies. ISI-TR-710 Do You See Me Now? Sparsity in Passive Observations of Address Liveness (extended) Jelena Mirkovic , Genevieve Bartlett , John Heidemann, Hao Shi, Xiyue Deng July 2016, 15 pages abstract ISI-TR-709 Anycast vs. DDoS: Evaluating the November 2015 Root DNS Event Given C. M. Moura, Ricardo de O. Schmidt, John Heidemann, Wouter B. de Vries, Moritz Muller, Lan Wei, Cristian Hesselman May 2016, 15 pages abstract ISI-TR-708 Anycast Latency: How Many Sites Are Enough? Ricardo de O. Schmidt, John Heidemann, Jan Harm Kuipers May 2016, 13 pages abstract ISI-TR-707 Improving Long-term Accuracy of DNS Backscatter for Monitoring of Internet-Wide Malicious Activity - The Poster Abdul Qadeer, John Heidemann, Kensuke Fukuda April 2016, 2 pages abstract ISI-TR-706 T-DNS: Connection-Oriented DNS to Improve Privacy and Security (poster abstract) Liang Zhu, Zi Hu, John Heidemann, Duane Wessels, Allison Mankin, Nikita Somaiya March 2016, 3 pages abstract ISI-TR-705 RESECT: Self-learning Spoofed Traffic Filters Jelena Mirkovic, Erik Kline, Peter Reiher November 2015, 15 pages IP spoofing has been a persistent Internet security threat for decades. While research solutions exist that can help an edge network detect spoofed and reflected traffic, sheer volume of such traffic requires handling further upstream. Prior research [20] has shown that route-dependent spoofed packet filters, such as hop-count filtering and route-based filtering, would be extremely effective if deployed in the Internet core. Deployment at only 50 chosen autonomous systems (0.25% of all ASes) would eliminate 92–97% of spoofed traffic in the entire Internet! But prior research assumes that filters always have correct filtering information. It is an open research problem how to bootstrap this information and keep it up to date when routes change, or in presence of asymmetric or multi-path routing. Our paper addresses this issue. We propose RESECT - a system that enables route- dependent spoofed packet filters to learn correct filtering information in realistic routing scenarios. A RESECT-enhanced filter probes sources of traffic that have stale or missing filtering information, by dropping a minuscule fraction of their TCP traffic, which invokes retransmission behavior. Retransmitted TCP packets are used to update filtering information about the probed source. RESECT works with asymmetric and multi- path routing, quickly detects route changes, and requires no cooperation between filters nor any changes to traffic sources. Its operation has minimal effect on legitimate traffic, while it quickly detects and drops spoofed packets. RESECT thus completes route-dependent packet filters, making them practical and highly effective solutions for IP spoofing defense. ISI-TR-704 Detecting Malicious Activity with DNS Backscatter (extended) Kensuke Fukuda, John Heidemann October 2015, 18 pages ISI-TR-703 The FailSafe Assertion Language Hans P. Zima, Erik DeBenedictis, Jacqueline N. Chame, Pedro C. Diniz, Robert F. Lucas October 2015, 46 pages ISI-TR-702 Data Science in the News: Advances and Challenges for the Era of Big Data Kate Musen, Alyssa Deng, Taylor Alarcon, Yolanda Gil August 2015, 13 pages abstract ISI-TR-701 Evaluating Externally Visible Outages Abdulla Alwabel, John Healy, John Heidemann, Brian Luu, Yuri Pradkin, Rasoul Safavian August 2015, 8 pages abstract ISI-TR-700 QUASAR: A New Approach to Software Attestation Jeremy Abramson, Stephen Schwab, Quoc Tran, W. Brad Moore July 2015, 9 pages abstrack ISI-TR-699 LegoTG: Composable Traffic Generation with a Custom Blueprint Jelena Mirkovic, Genevieve Bartlett June 2015, 14 pages abstract ISI-TR-698 Poster: Lightweight Content-based Phishing Detection Calvin Ardi, John Heidemann May 2015, 3 pages abstract ISI-TR-697 PASO: An Integrated, Scalable PSO-based Optimization Framework for Hyper-Redundant Manipulator Path Planning and Inverse Kinematics Thomas Collins, Wei-Min Shen April 2015, 7 pages ISI-TR-696 Implementation of the TCP Extended Data Offset Option Harry Trieu, Joe Touch, Ted Faber March 2015, 3 pages abstract ISI-TR-695 Connection-Oriented DNS to Improve Privacy and Security (extended) Liang Zhu, Zi Hu, John Heidemann, Duane Wessels, Allison Mankin, Nikita Somaiya February 2015, 26 pages abstract ISI-TR-693 T-DNS: Connection-Oriented DNS to Improve Privacy and Security (extended) Liang Zhu, Zi Hu, John Heidemann, Duane Wessels, Allison Mankin, Nikita Somaiya June 2014, 26 pages abstract ISI-TR-692 Web-scale Content Reuse Detection (extended) Calvin Ardi, John Heidemann June 2014, 16 pages abstract ISI-TR-691 When the Internet Sleeps: Correlating Diurnal Networks With External Factors (extended) Lin Quan, John Heidemann, Yuri Pradkin May 2014, 16 pages abstract ISI-TR-690 The Impact of Errors on Differential Optical Processing J. Touch, A. Mohajerin-Ariaei, M. Chitgarha, M. Ziyadi, S. Khaleghi, Y. Akasaka, J. Y. Yang, M. Sekiya March 2014, 2 pages abstract ISI-TR-689 The BLEMS Augmented Sensor Device Joe Touch March 2014, 21 pages abstract ISI-TR-688 T-DNS: Connection-Oriented DNS to Improve Privacy and Security Liang Zhu, Zi Hu, John Heidemann, Duane Wessels, Allison Mankin, Nikita Somaiya February 2014, 17 pages abstract ISI-TR-687 A Holistic Framework for Bridging Physical Threats to User QoE Xue Cai, John Heidemann, Walter Willinger July 2013, 11 pages Submarine cable cuts have become increasingly common, with five incidents breaking more than ten cables in the last three years. Today, around 300 cables carry the majority of international Internet traffic, so a single cable cut can affect millions of users, and repairs to any cut are expensive and time consuming. Prior work has either measured the impact following incidents, or predicted the results of network changes to relatively abstract Internet topological models. In this paper, we develop a new approach to model cable cuts. Our approach differs by following problems drawn from real-world occurrences all the way to their impact on end-users. Because our approach spans many layers, no single organization can provide all the data needed to apply the model. We therefore perform what-if analysis to study a range of possibilities. With this approach we evaluate four incidents in 2012 and 2013; our analysis suggests general rules that assess the degree of a country's vulnerability to a cut. ISI-TR-686b Reducing False Alarms with Multi-modal Sensing for Pipeline Blockage (Extended) Chengjie Zhang, John Heidemann June 2013, 18 pages abstract ISI-TR-685 A Preliminary Analysis of Network Outages During Hurricane Sandy John Heidemann, Lin Quan, Yuri Pradkin November 2012, 8 pages abstract ISI-TR-684 Montage Topology Manager: Tools for Constructing and Sharing Representative Internet Topologies Alefiya Hussain, Jennifer Chen August 2012, 9 pages abstract ISI-TR-683 Building Apparatus for Multi-resolution Networking Experiments Using Containers DETER Team July 2012, 9 pages abstract ISI-TR-679 An Organization-Level View of the Internet and its Implications (extended) Xue Cai, John Heidemann, Balachander Krishnamurthy, Walter Willinger June 2012, 26 pages abstract ISI-TR-681 Characterizing Anycast in the Domain Name System Xun Fan, John Heidemann, Ramesh Govindan May 2012, 14 pages abstract ISI-TR-680 Towards Geolocation of Millions of IP Addresses Zi Hu, John Heidemann, Yuri Pradkin May 2012, 7 pages abstract ISI-TR-678b Detecting Internet Outages with Precise Active Probing (extended) Lin Quan, John Heidemann, Yuri Pradkin May 2012, 22 pages abstract ISI-TR-677 Multifrontal Sparse Matrix Factorization on Graphics Processing Units Robert F. Lucas, Gene Wagenbreth, John J. Tran, Dan M. Davis January 2012, 19 pages abstract ISI-TR-676 A preliminary empirical study to compare MPI and OpenMP Lorin Hochstein, Victor R. Basili December 2011, 43 pages abstract ISI-TR-675 Evaluating Signature Matching in a Multi-Sensor Vehicle Classification System (extended) Chengjie Zhang, John Heidemann November 2011, 21 pages abstract ISI-TR-674 Final Report of the 2011 Workshop on Aquatic Ecosystem Sustainability Yolanda Gil, Tom Harmon October 2011, 34 pages ISI-TR-673 Data Muling with Mobile Phones for Sensornets Unkyu Park, John Heidemann July 2011, 16 pages abstract ISI-TR-672 Detecting Internet Outages with Active Probing Lin Quan, John Heidemann May 2011, 15 pages abstract ISI-TR-671 Identifying and Characterizing Anycast in the Domain Name System Xun Fan, John Heidemann, Ramesh Govindan May 2011, 13 pages abstract ISI-TR-670 Steam-Powered Sensing: Extended Design and Evaluation Chengjie Zhang, Affan Syed, Young H. Cho, John Heidemann February 2011, 28 pages abstract ISI-TR-669 Demo Abstract: Energy Transference for Sensornets Affan A. Syed, Young Cho, John Heidemann November 2010, 3 pages ISI-TR-668 Design and Analysis of a Propagation Delay Tolerant ALOHA Protocol for Underwater Networks Joon Ahn, Affan Syed, Bhaskar Krishnamachari, John Heidemann September 2010, 26 pages abstract ISI-TR-667 On the Characteristics and Reasons of Long-lived Internet Flows Lin Quan, John Heidemann July 2010, 9 pages abstract ISI-TR-666 Selecting Representative IP Addresses for Internet Topology Studies Xun Fan, John Heidemann June 2010, 12 pages abstract ISI-TR-665 Understanding Block-level Address Usage in the Visible Internet (extended) Xue Cai, John Heidemann June 2010, 24 pages abstract ISI-TR-660b Low-latency Synchronization of Loosely-coupled Sensornet Republishing Unkyu Park, John Heidemann June 2010, 18 pages abstract ISI-TR-664 DADL: Distributed Application Description Language Jelena Mirkovic, Ted Faber, Paul Hsieh, Ganesan Malaiyandisamy, Rashi Malaviy May 2010, 6 pages abstract | |
# Multiplication operator is not jointly continuous in strong topology
How can I show that multiplication operator ($M:\mathcal{L}(X,Y) \times \mathcal{L}(Y,Z) \rightarrow \mathcal{L}(X,Z)$; $M(A,B)=AB)$ is not jointly continuous in strong topology?
I have to show that if I take an open set $O$ in $\mathcal{L}(X,Z)$ (with norm topology) then $M^{-1}O$ is not always an open set in $\mathcal{L}(X,Y) \times \mathcal{L}(Y,Z)$ with the strong topology. Right? But How!?
Thank you!
• this property suprises me... my first guess: that might have something to do with the product topology. And also with the spaces: with $X=Y=Z=\mathbb{R}$ and $\mathcal{L}\left(\mathbb{R},\mathbb{R}\right)=\mathbb{R}$ i think it is continuous or am I wrong? – Max Feb 23 '14 at 12:38
• I've found this question in many books and is very often left as exercise... other times the answer is simply "Take the shift operator." But I can't figure why... – Benzio Feb 23 '14 at 12:41
• sorry I forgot to write the adjective "jointly". Now it's edited – Benzio Feb 23 '14 at 12:44
• @Max For finite-dimensional spaces, all Hausdorff vector space topologies coincide, and all multilinear maps are continuous. One needs infinite-dimensional spaces for it to be discontinuous. – Daniel Fischer Feb 23 '14 at 12:46
• Please, let me "see" it! – Benzio Feb 23 '14 at 12:49
Let us show that if $Y$ is infinite-dimensional, then multiplication is not jointly continuous on $\mathcal L(X,Y)\times \mathcal L(Y,Z)$ with respect to the strong operator topology.
Choose any non-zero $x_0\in X$. It is enough to show that the set $\mathcal M:=\{ (A,B);\; \Vert BAx_0\Vert <1\}$ is not an $SOT\times SOT$-neighbourhood of $(0,0)$ in $\mathcal L(X,Y)\times \mathcal L(Y,Z)$. Equivalently, let us show that for any neighbourhood $\mathcal U$ of $(0,0)$, one can find $(A,B)\in\mathcal U$ such that $\Vert BA x_0\Vert\geq 1$.
Choose $\varepsilon >0$ and finite sets $E\subset X$ and $F\subset Y$ such that $$\Bigl(\Vert Au\Vert<\varepsilon\;\hbox{for all u\in F and}\; \Vert Bv\Vert<\varepsilon\;\hbox{for all v\in F}\Bigr)\implies (A,B)\in\mathcal U\, .$$
Since $\dim(Y)=\infty$, one can find an operator $A\in\mathcal L(X,Y)$ such that $y_0:=Ax_0\not\in \hbox{span}(F)$. Moreover, multiplying $A$ by a suitable constant, we may assume that $\Vert A\Vert$ is arbitrarily small, so that in particular $\Vert Au\Vert<\varepsilon$ for all $u\in E$.
Next, since $y_0\not\in \hbox{span}(F)$, one can find an operator $B\in\mathcal L(Y,Z)$ such that $B\equiv 0$ on $F$ and $By_0\neq 0$; and multiplying $B$ by a suitable constant we may assume that $\Vert By_0\Vert=1$.
By the definition of $A$ and $B$, we then have $(A,B)\in\mathcal U$ and $\Vert BAx_0\Vert=\Vert By_0\Vert=1$, which concludes the proof.
Note however that multiplication is jointly continuous on bounded sets. (This is not difficult to prove). Hence, by the Uniform Boundedness Principle (assuming that $X,Y,Z$ are Banach spaces) it is not possible to find two sequences $(A_n)$ and $(B_n)$ such that $A_n\xrightarrow{SOT} 0$ and $B_n\xrightarrow{SOT} 0$ but $B_nA_n$ does not tend to $0$.
• Could you explain more about why the multiplication is jointly continuous on bounded sets? I tried to show myself but it seems much trickier than expected... I cannot figure out at all where the boundedness comes in.... – Keith Oct 3 '19 at 5:25
• @Keith Take a bounded net $(A_i,B_i)$ converging to $(A,B)$. Then, for any $x$, you have $\Vert A_iB_i x-ABx\Vert \leq \Vert A_i\Vert \Vert B_ix-Bx\Vert + \Vert A_i Bx-ABx\Vert\leq M\Vert B_ix-Bx\Vert +\Vert (A_i-A) Bx\Vert$ for some constant $M$; so $A_iB_ix\to ABx$ for all $x$. – Etienne Oct 4 '19 at 14:23 | |
1. ## trig special rations
hi guys! i need explanation
how do we solve questions like sin 135, cos 60, tan -180, cos -270 without using calculator.
Pls explain it to me clearly. thank u
2. Originally Posted by thereddemon
hi guys! i need explanation
how do we solve questions like sin 135, cos 60, tan -180, cos -270 without using calculator.
Pls explain it to me clearly. thank u
First of all you should remember the basic sine and cosine values:
$\displaystyle sin(30)=\frac{1}{2}$ , $\displaystyle cos(30)=\frac{\sqrt{3}}{2}$
$\displaystyle sin(45)=\frac{\sqrt{2}}{2}$ ,$\displaystyle cos(45)=\frac{\sqrt{2}}{2}$
$\displaystyle sin(60)=\frac{\sqrt{3}}{2}$ ,$\displaystyle cos(60)=\frac{1}{2}$
$\displaystyle sin(135)=sin(90+45)=sin(90).cos(45)+cos(90).sin(45 )$
$\displaystyle tan(-180)=tan(180)=tan(0)=0$
$\displaystyle cos(-270)=cos(360+(-270))=cos(90)=0$
Hope this helps | |
Use the table to evaluate the expression (f{\circ}g)(3)
Question:
Use the table to evaluate the expression
{eq}(f{\circ}g)(3) {/eq}
{eq}\begin{array}{|l|l|l|l|l|l|l|} \hline x & 1 & 2 & 3 & 4 & 5 & 6\\ \hline f(x) & 3 & 2 & 1 & 0 & 1 & 2\\ \hline g(x) & 6 & 5 & 2 & 3 & 4 & 6\\ \hline \end{array} {/eq}
Composition of Functions
If we have a function written in the form {eq}(f \circ g)(x) {/eq}, it means we have a composition of functions. We can think of this as {eq}f(g(x)) {/eq}. Thus, the output of {eq}g(x) {/eq} is the input of {eq}f(x) {/eq}.
The expression {eq}(f{\circ}g)(3) {/eq} can also be written as {eq}f(g(3)) {/eq}. In order to evaluate this, we need to first find the value of {eq}g(3) {/eq}. Then, we can evaluate f at that value.
{eq}g(3) = 2\\ f(g(3)) = f(2)\\ f(2) = 2\\ {/eq}
Thus, {eq}(f{\circ}g)(3) = 2 {/eq}. | |
# scipy.stats.erlang¶
scipy.stats.erlang(*args, **kwds) = <scipy.stats._continuous_distns.erlang_gen object>[source]
An Erlang continuous random variable.
As an instance of the rv_continuous class, erlang object inherits from it a collection of generic methods (see below for the full list), and completes them with details specific for this particular distribution.
Notes
The Erlang distribution is a special case of the Gamma distribution, with the shape parameter a an integer. Note that this restriction is not enforced by erlang. It will, however, generate a warning the first time a non-integer value is used for the shape parameter.
Refer to gamma for examples.
Methods
rvs(a, loc=0, scale=1, size=1, random_state=None) Random variates. pdf(x, a, loc=0, scale=1) Probability density function. logpdf(x, a, loc=0, scale=1) Log of the probability density function. cdf(x, a, loc=0, scale=1) Cumulative distribution function. logcdf(x, a, loc=0, scale=1) Log of the cumulative distribution function. sf(x, a, loc=0, scale=1) Survival function (also defined as 1 - cdf, but sf is sometimes more accurate). logsf(x, a, loc=0, scale=1) Log of the survival function. ppf(q, a, loc=0, scale=1) Percent point function (inverse of cdf — percentiles). isf(q, a, loc=0, scale=1) Inverse survival function (inverse of sf). moment(n, a, loc=0, scale=1) Non-central moment of order n stats(a, loc=0, scale=1, moments=’mv’) Mean(‘m’), variance(‘v’), skew(‘s’), and/or kurtosis(‘k’). entropy(a, loc=0, scale=1) (Differential) entropy of the RV. fit(data) Parameter estimates for generic data. See scipy.stats.rv_continuous.fit for detailed documentation of the keyword arguments. expect(func, args=(a,), loc=0, scale=1, lb=None, ub=None, conditional=False, **kwds) Expected value of a function (of one argument) with respect to the distribution. median(a, loc=0, scale=1) Median of the distribution. mean(a, loc=0, scale=1) Mean of the distribution. var(a, loc=0, scale=1) Variance of the distribution. std(a, loc=0, scale=1) Standard deviation of the distribution. interval(alpha, a, loc=0, scale=1) Endpoints of the range that contains alpha percent of the distribution
#### Previous topic
scipy.stats.dweibull
#### Next topic
scipy.stats.expon | |
# Pdf on the fly
In my rails app I need to create pdf reports on the fly. I have
installed railspdf, wich is working fine. But, how can I create tables
and paragraphs and stuff? Can I mimic an .rhtml file (using <% for
…%> etc? Or is it wise to use Ruby::PDF directly? Is there anyone out
there with experience in this, and who is willing to share his findings?
Thx
Hi oom tuinstoel
See if this tutorial helps you,
http://www.artima.com/rubycs/articles/pdf_writer.html
Victor
I switched to using rtex from PDF::Writer due to conflicts with
Transaction::Simple (which is used by both ActiveRecord and
PDF::Writer). I
think those conflicts may be gone now, but I like the rtex system in any
case. I wrote a bit about it here:
http://convergentarts.com/articles/2006/03/21/rtex-pdf-mojo-on-windows
http://codefluency.com/pages/rtex
On my site I mention posting a sample table, which I haven’t done yet.
Here’s a very simple table:
\begin{tabular}{l V{3.5cm} r}
foo & blah & bar \
foo & blah blah & bar \
foo & blah blah blah blah blah blah
& bar \
\end{tabular}
That would be in the context of a .rtex page. See my blog if you’re
interested.
And now, to address your original question :)… to create a table using
PDF::Writer, try SimpleTable:
t = PDF::SimpleTable.new do |t|
birthdate))
t.columns["player"] = PDF::SimpleTable::Column.new("player") {
|col|
}
Then you render it to a pdf:
t.render_on(@pdf)
PDF::Writer has pretty good documentation for this:
http://ruby-pdf.rubyforge.org/pdf-writer/
-Tom
On 5/24/06, Tom W. [email protected] wrote:
I switched to using rtex from PDF::Writer due to conflicts with
Transaction::Simple (which is used by both ActiveRecord and PDF::Writer). I
think those conflicts may be gone now, but I like the rtex system in any
case. I wrote a bit about it here:
It would be useful if someone could verify that this is the case. The
current version of Transaction::Simple should now be bundled with
Rails, with the long-term goal of removing it from the bundle.
-austin
Tom W. wrote:
And now, to address your original question :)… to create a table using
PDF::Writer, try SimpleTable:
t = PDF::SimpleTable.new do |t|
birthdate))
t.columns["player"] = PDF::SimpleTable::Column.new("player") {
|col|
}
Then you render it to a pdf:
This is what I do:
table = PDF::SimpleTable.new
table.data = @children
table.column_order.push(*%w(firstname group_id))
table.columns[“firstname”] = PDF::SimpleTable::Column.new(“firstname”) {
|col| | |
## anonymous 5 years ago Suppose the sales from a product generates a revenue (R) r=-7.3p^2+320p where p is the price of the product in dollars. Find the prices of the product that will generate a revenue greater than $3000. a. Price >$ 14.00 and price < $32 b. Price >$ 16 and price < $50 c. Price >$13.58 and price < $30.25 d. Price >$ 13.28 and price < $30.20 e. Price <$ 13.50 and Price > \$ 30. 56
1. anonymous
\text{Reduce}\left[3000<320 p-\frac{73}{10} p^2\right]= $\text{Reduce}\left[3000<320 p-\frac{73}{10} p^2\right]=\frac{100}{73} \left(16-\sqrt{37}\right)<p<\frac{100}{73} \left(16+\sqrt{37}\right)$ = 13.5853 < p < 30.2504
2. anonymous
I guess d. is the answer
3. anonymous
thanks
4. anonymous
Thank Wolfram Research. | |
# AF-algebra
## Approximately Finite-dimensional algebra.
AF-algebras form a class of $C^*$-algebras that, on the one hand, admits an elementary construction, yet, on the other hand, exhibits a rich structure and provide examples of exotic phenomena. A (separable) $C^*$-algebra $A$ is said to be an AF-algebra if one of the following two (not obviously) equivalent conditions is satisfied (see [a1], [a2] or [a6]):
1. for every finite subset $\{a_1,\dots,a_n\}$ of $A$ and for every $\epsilon>0$ there exists a finite-dimensional sub-$C^*$-algebra $B$ of $A$ and a subset $\{b_1,\dots,b_n\}$ of $B$ with $\|a_j-b_j\|<\epsilon$ for all $j=1,\dots,n$;
2. there exists an increasing sequence $A_1\subseteq A_2\subseteq\dots$ of finite-dimensional sub-$C^*$-algebras of $A$ such that the union $\bigcup_{j=1}^\infty A_j$ is norm-dense in $A$.
## Bratteli diagrams.
It follows from (an analogue of) Wedderburn's theorem (cf. Wedderburn–Artin theorem), that every finite-dimensional$C^*$-algebra is isomorphic to the direct sum of full matrix algebras over the field of complex numbers. Property 2 says that each AF-algebra is the inductive limit of a sequence $A_1\rightarrow A_2\rightarrow\dots$ of finite-dimensional $C^*$-algebras, where the connecting mappings $A_j\rightarrow A_{j+1}$ are ${}^*$-preserving homomorphisms. If two such sequences $A_1\rightarrow A_2\rightarrow\dots$ and $B_1\rightarrow B_2\rightarrow\dots$ define isomorphic AF-algebras, then already the algebraic inductive limits of the two sequences are isomorphic (as algebras over $\mathbb C$).
All essential information of a sequence $A_1\rightarrow A_2\rightarrow\dots$ of finite-dimensional $C^*$-algebras with connecting mappings can be expressed in a so-called Bratteli diagram. The Bratteli diagram is a graph, divided into rows, whose vertices in the $j$th row correspond to the direct summands of $A_j$ isomorphic to a full matrix algebra, and where the edges between the $j$th and the $(j+1)$st row describe the connecting mapping $A_j\rightarrow A_{j+1}$. By the facts mentioned above, the construction and also the classification of AF-algebras can be reduced to a purely combinatorial problem phrased in terms of Bratteli diagrams. (See [a2].)
## UHF-algebras.
AF-algebras that are inductive limits of single full matrix algebras with unit-preserving connecting mappings are called "UHF-algebras" (uniformly hyper-finite algebras) or Glimm algebras. A UHF-algebra is therefore an inductive limit of a sequence $M_{k_1}(\mathbb C)\rightarrow M_{k_2}(\mathbb C)\rightarrow\dots$, where, necessarily, each $k_j$ divides $k_{j+1}$. Setting $n_1=k_1$ and $n_j=k_j/k_{j-1}$ for $j\geq 2$, this UHF-algebra can alternatively be described as the infinite tensor product $M_{n_1}(\mathbb C)\otimes M_{n_2}(\mathbb C)\otimes\dots$. (See [a1].)
The UHF-algebra with $n_1=n_2=\dots=2$ is called the CAR-algebra; it is generated by a family of operators $\{\alpha(f):\ f\in H\}$, where $H$ is some separable infinite-dimensional Hilbert space and $\alpha$ is linear and satisfies the canonical anti-commutation relations (cf. also Commutation and anti-commutation relationships, representation of):
\begin{align*} \alpha(f)\,\alpha(g)\,+\,\alpha(g)\,\alpha(f)&=0\\ \alpha(f)\,\alpha(g)^*\,+\,\alpha(g)^*\,\alpha(f)&=(f,g)\,1\\ \end{align*}
(See [a7].)
## -theory and classification.
By the -theory for -algebras, one can associate a triple to each -algebra . is the countable Abelian group of formal differences of equivalence classes of projections in matrix algebras over , and and are the subsets of those elements in that are represented by projections in some matrix algebra over , respectively, by projections in itself. The -group of an AF-algebra is always zero.
The classification theorem for AF-algebras says that two AF-algebras and are -isomorphic if and only if the triples and are isomorphic, i.e., if and only if there exists a group isomorphism such that and . If this is the case, then there exists an isomorphism such that . Moreover, any homomorphism such that is induced by a -homomorphism , and if are two -homomorphisms, then if and only if and are homotopic (through a continuous path of -homomorphisms from to ).
An ordered Abelian group is said to have the Riesz interpolation property if whenever with , there exists a such that . is called unperforated if , for some integer and some , implies that . The Effros–Handelman–Shen theorem says that a countable ordered Abelian group is the -theory of some AF-algebra if and only if it has the Riesz interpolation property and is unperforated. (See [a3], [a5], [a8], and [a6].)
A conjecture belonging to the Elliott classification program asserts that a -algebra is an AF-algebra if it looks like an AF-algebra! More precisely, suppose that is a separable, nuclear -algebra which has stable rank one and real rank zero, and suppose that and that is unperforated ( must necessarily have the Riesz interpolation property when is assumed to be of real rank zero). Does it follow that is an AF-algebra? This conjecture has been confirmed in some specific non-trivial cases. (See [a9].)
## Traces and ideals.
The -theory of an AF-algebra not only serves as a classifying invariant, it also explicitly reveals some of the structure of the algebra, for example its traces and its ideal structure. Recall that a (positive) trace on a -algebra is a (positive) linear mapping satisfying the trace property: for all . An "ideal" means a closed two-sided ideal.
A state on an ordered Abelian group is a group homomorphism satisfying . An order ideal of is a subgroup of with the property that generates , and if , , and , then . A trace on induces a state on by
where , are projections in (or in a matrix algebra over ); and given an ideal in , the image of the induced mapping (which happens to be injective, when is an AF-algebra) is an order ideal of . For AF-algebras, the mappings and are bijections. In particular, if is simple as an ordered group, then must be simple.
If a -algebra has a unit, then the set of tracial states (i.e., positive traces that take the value on the unit) is a Choquet simplex. Using the characterizations above, one can, for each metrizable Choquet simplex, find a simple unital AF-algebra whose trace simplex is affinely homeomorphic to the given Choquet simplex. Hence, for example, simple unital -algebras can have more than one trace. (See [a3] and [a5].)
## Embeddings into AF-algebras.
One particularly interesting, and still not fully investigated, application of AF-algebras is to find for a -algebra an AF-algebra and an embedding which induces an interesting (say injective) mapping . Since is positive, the positive cone of must be contained in the pre-image of . For example, the order structure of the -group of the irrational rotation -algebra was determined by embedding into an AF-algebra with (as an ordered group). As a corollary to this, it was proved that if and only if or . (See [a4].)
Along another interesting avenue there have been produced embeddings of into appropriate AF-algebras inducing injective -theory mappings. This suggests that the "cohomological dimension" of these AF-algebras should be at least .
#### References
[a1] J. Glimm, "On a certain class of operator algebras" Trans. Amer. Math. Soc. , 95 (1960) pp. 318–340 MR0112057 Zbl 0094.09701 [a2] O. Bratteli, "Inductive limits of finite-dimensional -algebras" Trans. Amer. Math. Soc. , 171 (1972) pp. 195–234 MR312282 [a3] G.A. Elliott, "On the classification of inductive limits of sequences of semisimple finite-dimensional algebras" J. Algebra , 38 (1976) pp. 29–44 MR0397420 Zbl 0323.46063 [a4] M. Pimsner, D. Voiculescu, "Imbedding the irrational rotation algebras into AF-algebras" J. Operator Th. , 4 (1980) pp. 201–210 MR595412 [a5] E. Effros, D. Handelman, C.-L. Shen, "Dimension groups and their affine representations" Amer. J. Math. , 102 (1980) pp. 385–407 MR0564479 Zbl 0457.46047 [a6] E. Effros, "Dimensions and -algebras" , CBMS Regional Conf. Ser. Math. , 46 , Amer. Math. Soc. (1981) MR0623762 [a7] O. Bratteli, D.W. Robinson, "Operator algebras and quantum statistical mechanics" , II , Springer (1981) MR0611508 Zbl 0463.46052 [a8] B. Blackadar, "-theory for operator algebras" , MSRI publication , 5 , Springer (1986) MR0859867 Zbl 0597.46072 [a9] G.A. Elliott, "The classification problem for amenable -algebras" , Proc. Internat. Congress Mathem. (Zürich, 1994) , Birkhäuser (1995) pp. 922–932
How to Cite This Entry:
AF-algebra. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=AF-algebra&oldid=34323
This article was adapted from an original article by M. Rørdam (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article | |
# SOCR EduMaterials Surveys Fall2005Sanchez
(Difference between revisions)
Revision as of 22:25, 4 June 2006 (view source)IvoDinov (Talk | contribs)← Older edit Revision as of 22:25, 4 June 2006 (view source)IvoDinov (Talk | contribs) Newer edit → Line 15: Line 15: | | |- |- - |(c) In homework grade | |- + |(c) In homework grade - |(d) In motivating you to learn the material | |- + | - |(e) In considering statistics as a possible major/minor | |- + |- - |(f) In motivating you to attend class | |- + | (d) In motivating you to learn the material - |(g) In Motivating you to network with fellow students | |- + | - |(h) In estimulating your intellectual curiosity | |- + |- - |(i) In Providing you with content needed for your area of studey | |- + | (e) In considering statistics as a possible major/minor - |(j) In maintaining your attention during class | |- + | - |(k) In relating the course material to other materials studied in other classes | |- + |- + | (f) In motivating you to attend class + | + |- + | (g) In Motivating you to network with fellow students + | + |- + | (h) In estimulating your intellectual curiosity + | + |- + | (i) In Providing you with content needed for your area of studey + | + |- + | (j) In maintaining your attention during class + | + |- + | (k) In relating the course material to other materials studied in other classes | |- | colspan=2 | | colspan=2 | {| border=3 {| border=3
## SOCR Surveys - Stat 100A, Fall 2005, J. Sanchez, SOCR Survey
Compared with other classes where instructors have not required using technological tools like applets or software for homework, how would you rate the overall effectiveness of the use of the SOCR technology in Stat 100A, Fall 2005?
Higher lower The same NA
Question Ranking: Higher lower The same NA
(a) In understanding the main concepts of the course
(b) In exams grades
(c) In homework grade
(d) In motivating you to learn the material
(e) In considering statistics as a possible major/minor
(f) In motivating you to attend class
(g) In Motivating you to network with fellow students
(h) In estimulating your intellectual curiosity
(i) In Providing you with content needed for your area of studey
(j) In maintaining your attention during class
|- | |
# All Questions
267 views
### Untraceable communication protocol
I am doing a research about secure communication protocols. I would be interested to know whether a protocol exists such that it grants that the two end-points taking part to the communication cannot ...
111 views
### KDF with low-entropy salts
I need to derive a key from a username and a password. These are the only two things I have access to. What I thought is using PBKDF2 with username as the salt and password as the master password. ...
376 views
### Entropy of system data - use all and hash, or trim least significant bits?
I'm working on a background entropy collector for key generation that monitors hardware and produces an entropy pool. Here's my list of sources: Mouse position Keyboard timings (i.e. time between ...
380 views
### Can we trust digital signatures?
Consider that Alice wants to send a digitally signed message to Bob. Mallory might be able to publish his public key under Alice's name and then impersonate Alice to send a message with an apparently ...
149 views
### Format of NONCE in Initialization Vector (IV)
When we talk about a Number used ONCE (NONCE) in Initialization Vector (IV), is it required to use numbers only? Is is possible to use letters or special characters?
358 views
### What differentiates a password hash from a cryptographic hash besides speed?
I understand that password hashes like bcrypt have the principal property of taking a long time to run, but I'm wondering what if anything about password hashes make them superior to merely running a ...
308 views
### Encryption with private key?
we normally always encrypt by public key and decrypt with private key. If i encrypt with private key, then its still secure as normal PKI ? i mean known-plain-text will not take private key on the ...
350 views
### RSA algorithm's license free or paid?
I checked RSA's patent application, which was registered in 1983. As patents don't last more than 20 years, it seems to me it should be free. But my friend said to use RSA I have to buy a license from ...
165 views
### Do Quantum Key Distribution and Physical Unclonable Functions combine, and how?
I see there's a project to combine Quantum Key Distribution, Physical Unclonable Functions, and classical crypto, in order to secure a high speed (100Gb/s) optical link. While there does not seem to ...
180 views
### Proper formatting of symmetric algorithm secret key
Given this description from RFC 4880 sec 5.1: The value "m" in the above formulas is derived from the session key as follows. First, the session key is prefixed with a one-octet algorithm ...
298 views
### Is SSL getting faster because it's getting less secure?
There has been some discussion about it being more practical to use SSL due to advances in hardware. From my understanding, stronger public-key encryption means that both encrypting/decrypting and ...
2k views
### why do we need Diffie Hellman?
my question from stackoverflow: http://stackoverflow.com/questions/11374592/why-do-we-need-diffie-hellman Diffie–Hellman offers secure key exchange only if sides are authenticated. for ...
236 views
### Multi layer encryption with ECB mode [closed]
if i use the 2 of the same key with 2 of the same algoritm when encrypting in ECB like when i have 2 blocks of the same color and i encrypt the 2 blocks with the same color the cipher text should not ...
317 views
### I need an opinion of encryption method I thought of in High school
First, I'm really not into cryptography, but have some basic knowledge. This was a thought experiment (and later exercise for my programming skills), but even though it was long time ago and I tried ...
1k views
### Is this design of client side encryption secure?
I want to build a secure file storage web application. Users should be sure that server doesn't know how to decrypt files so encryption should take place at client side (i.e. in Javascript) and TLS ...
533 views
609 views
### What is the correct value for “certainty” in RSA key pair generation?
I'm creating an RSA key pair in Bouncy Castle and need to specify an int value for certainty. This Stack Overflow answer says it is a relative test for how prime the values are. There is another ...
4k views
### Impacts of not using RSA exponent of 65537
This RFC says the RSA Exponent should be 65537. Why is that number recommended and what are the theoretical and practical impacts & risks of making that number higher or lower? What are the ...
1k views
### Deciphering a key from XOR encrypted cypher using boolean logic
Assume there's an unencrypted message A, and an encrypted message B. You know that message B was encrypted using a simple XOR method of A with a private key K, resulting in message B. Thus, B = A ⊕ K ...
272 views
### Proof of work for standard computers
I'm interested in a proof-of-work system that works well on standard computers without using the GPU. Properties the system should have: Seed based proof-of-work. There is no distinguished ...
128 views
### Setting protocol parameters to achieve concrete security
Background One issue with modern security proofs is that they are usually asymptotic. In other words, such proofs are usually formulated as follows: For any polynomial-time adversary $\mathcal A$, we ...
159 views
### How are state wiretaps obtaining plaintext from encrypted transmissions?
According to the US 2011 Wiretap Report, encryption — on the off chance that it is encountered — has been no hurdle to retrieving the content of a conversation. Public Law 106-197 amended 18 ...
410 views
### Multiple Hash Functions that work in either nesting
Are there any hashing functions that, if two are used in conjunction (with the same salts) will return the same response regardless of ordering? I.e. are there hash-functions $H_1$, $H_2$ such that ...
537 views
### Using bad generator in ElGamal Encryption
Suppose Alice chooses a random Prime $p$ and a random private Key $a \in \mathbb{Z}^*_p$. By accident, she also chooses a random number $g \in \mathbb{Z}^*_p$, which is not a generator of ...
619 views
### Generating a strong unique Initialization Vector
How can I determine if I am generating a unique and strong Initialization Vector? If my mode is generating Keystream? Is there any scientific explanation in generating a unique and strong ...
133 views
### Why does it matter for a signature scheme to be without random oracles?
There is a profusion of articles proposing signature schemes without random oracles (see for yourself). What does that mean, and why does it matter?
223 views
### Length of data to hash for PGP
I have finally managed to verify some simple PGP signed message blocks. However, I discovered that for some reason, my implementation limits me to verifying data that is 9-16 bytes long. no less. no ...
1k views
### How can I validate a hashed password if all I have is another hash?
The Scenario I have a client-side web application that bounces requests against a server-side API. For the sake of simplicity, every request must pass a username and password. This is similar to ...
177 views
### What is the advantage of an attacker over breaking a 4 digits PIN?
When a hardware system is protected by a 4 digits password, what is the advantage of the attacker into breaking that system? Isn't it $10*10*10*10=10^{4}$? If $\frac{1}{10^{4}}$ is the cost of such ...
2k views
### Is it possible to make time-locked encrytion algorithm?
I'm not sure if what I'm asking is even a valid question but here goes. Would it be possible to add a mechanism to an encryption algorithm that would mean it had to be a certain time of the day or a ...
112 views
### Abstracting primitives and modes of operation
I am developing a symmetric crypto library and have reached a roadblock. Looking at block ciphers, it is quite obvious that all block ciphers are trivially abstractable as a simple primitive ...
396 views
### Asymmetric algorithm to generate compact unique messages that can be validated
I have a cryptographic problem with the following characteristics: I need to generate a set of relatively short messages; say 20 bytes in length The contents of the messages themselves is not ...
663 views
### Capacity of Advanced Encryption Standard in terms of File Encryption
what is the capacity of AES in terms of File Encryption? is it really good to encrypt a large files in AES? ex. I am encrypting a 8GB of File... is it still good to used AES? is it still good to used ...
401 views
### How does the cyclic attack on RSA work?
I am trying to get the idea of cyclic attacks againts assymetric RSA encryption. Taken from Handbook of applied cryptography . Let $k$ be a positive integer such that c^{(e^{k})} = c\mod n ...
453 views
### Are Stream Ciphers Less Secure?
This is by no means a scientific observation, but it seems to me that stream ciphers receive a lot less attention than block ciphers. Is there any reason for this? (Is it because block ciphers are ...
Are there any known collisions for the hash functions SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512? By that, I mean are there known values of $a$ and $b$ where $F(a) = F(b)$ and $a ≠ b$? | |
# How do you compute the Fourier Transform of this Unit-Impulse Function?
I have been given this problem from a textbook (not homework, trying to study for an exam. The goal is to find the Fourier transform of this function.
$\sum_{k=0}^\infty a^k*\delta(t-kT), |a|<1$
Can anyone give me a hint or point me in the right direction of how to compute the Fourier Transform? Thanks!
• Find the Fourier transform of each summand. Then you end up with a geometric series. – Stephen Montgomery-Smith Nov 18 '13 at 5:37
$$\int_{-\infty}^{\infty} dt \, \delta(t-k T) \, e^{i \omega t} = e^{i k \omega T}$$
$$\sum_{k=0}^{\infty} \left ( a \, e^{i \omega T}\right )^k = \frac{1}{1-a \, e^{i \omega T}}$$
• Thank you so much! However, isn't the first integral equal to $e^{-ikwT}$? – ArKi Nov 18 '13 at 5:49
• @aikitect: No. Note that $$\int dt \, \delta(t-a) f(t) = f(a)$$ – Ron Gordon Nov 18 '13 at 5:50 | |
# Centering table data under right-aligned tabularx header
I'm trying to align the following table so that the "21" is centered under its header.
\begin{table}[h]
\centering
\begin{tabularx}{1.09\textwidth}{|cll>{\raggedleft\arraybackslash}X|}
\hline
\textbf{ID} & \textbf{Severity} & \textbf{Vulnerability} &
\textbf{Occurrences} \\
\hline
111111 & Lorem & Lorem ipsum dolor sit amet & 21 \\
\hline
\end{tabularx}
\caption{example}
\label{table:example}
\end{table}
I can use \centering instead of \raggedleft for the last column to center the numbers under the header but then the header ends up too much to the left, I want the header to be like in the above picture.
Is there anyway I can get the combination of these two? The header aligned to the right like in the first picture but the number centered under the header like in the second picture.
• Welcome to TeX SX! Have you any reason for a tabularx which flows into the margins? – Bernard Mar 18 at 10:21
• @Bernard The reason i'm using tabularx is to get the same width as my widest table – antmo Mar 18 at 10:27
I would use the X type colum for the 3rd instead of the 4th cell: \begin{tabularx}{1.09\textwidth}{|clXc|}. With \makecell[r]{\textbf{Occurrences}} you can then right align the header:
\documentclass{article}
\usepackage{tabularx}
\usepackage{makecell}
\begin{document}
\begin{table}[h]
\centering
\begin{tabularx}{1.09\textwidth}{|clXc|}
\hline
\textbf{ID} & \textbf{Severity} & \textbf{Vulnerability} &
\makecell[r]{\textbf{Occurrences}} \\
\hline
111111 & Lorem & Lorem ipsum dolor sit amet & 21 \\
\hline
\end{tabularx}
\caption{example}
\label{table:example}
\end{table}
\end{document}
• Maybe using \thead for all column heads would simplify the code? – Bernard Mar 18 at 10:19
• Thank you, that was an easy solution that gives the right look! – antmo Mar 18 at 10:21 | |
# Circle
See The Circle for the distributed file storage system, and see ring (diacritic) for the diacritic mark.
In Euclidean geometry, a circle is the set of all points in a plane at a fixed distance, called the radius, from a fixed point, called the centre. Circles are simple closed curves, dividing the plane into an interior and exterior. Sometimes the word circle is used to mean the interior, with the circle itself called the circumference. Usually however, the circumference means the length of the circle, and the interior of the circle is called a disk or disc.
Contents
## Mathematical definitions
In an x-y coordinate system, the circle with centre (a, b) and radius r is the set of all points (x, y) such that
[itex]\left( x - a \right)^2 + \left( y - b \right)^2=r^2.[itex]
If the circle is centered at the origin (0, 0), then this formula can be simplified to
[itex]x^2 + y^2 = r^2.[itex]
The circle centered at the origin with radius 1 is called the unit circle.
Expressed in polar coordinates, (x,y) can be written as
x = a + r·cos(φ)
y = b + r·sin(φ).
The slope (or derivate) of a circle can be expressed with the following formula:
[itex]y' = - \frac{x}{y}.[itex]
All circles are similar; as a consequence, a circle's circumference and radius are proportional, as are its area and the square of its radius. The constants of proportionality are 2π and π, respectively. In other words:
• Length of a circle's circumference = [itex]2\pi \times r.[itex]
• Area of a circle = [itex]\pi \times r^2.[itex]
The formula for the area of a circle can be derived from the formula for the circumference and the formula for the area of a triangle, as follows. Imagine a regular hexagon (six-sided figure) divided into equal triangles, with their apices at the center of the hexagon. The area of the hexagon may be found by the formula for triangle area by adding up the lengths of all the triangle bases (on the exterior of the hexagon), multiplying by the height of the triangles (distance from the middle of the base to the center) and dividing by two. This is an approximation of the area of a circle. Then imagine the same exercise with an octagon (eight-sided figure), and the approximation is a little closer to the area of a circle. As a regular polygon with more and more sides is divided into triangles and the area calculated from this, the area becomes closer and closer to the area of a circle. In the limit, the sum of the bases approaches the circumference 2πr, and the triangles' height approaches the radius r. Multiplying the circumference and radius and dividing by 2, we get the area, π r².
## Properties
Missing image
Circle_lines.png
Chord, secant, and tangent
Missing image
Circle_slices.png
Arc, sector, and segment
A line cutting a circle in two places is called a secant, and a line touching the circle in one place is called a tangent. The tangent lines are necessarily perpendicular to the radii, segments connecting the centre to a point on the circle, whose length matches the definition given above. The segment of a secant bound by the circle is called a chord, and the longest chords are those that pass through the centre, called diameters and divided into two radii. The area of a circle cut off by a chord is called a circular segment.
It is possible (Circle points segments proof) to find the maximum number of unique segments generated by running chords between a number of points on the perimeter of a circle.
If only (part of) a circle is known, then the circle's center can be constructed as follows: take two non-parallel chords, construct perpendicular lines on their midpoints, and find the intersection point of those lines. The radius for such a partial circle may be calculated from the length L of a chord, and the distance D from the center of the chord to the nearest point on the circle by various formulas including: (from a geometric derivation)
[itex]\mbox{Radius}=((L/2)^2+D^2)/2D[itex]
(from a trigonometric derivation)
[itex]{\mbox{Radius}}={{L}\over{\sin\pi-2\tan^{-1}({{L}\over{D}})}}.[itex]
Missing image
Arc2.png
chord illustration
A part of the circumference bound by two radii is called an arc, and the area (i.e., the slice of the disk) within the radii and the arc is a sector. The ratio between the length of an arc and the radius defines the angle between the two radii in radians.
Every triangle gives rise to several circles: its circumcircle containing all three vertices, its incircle lying inside the triangle and touching all three sides, the three excircles lying outside the triangle and touching one side and the extensions of the other two, and its nine point circle which contains various important points of the triangle. Thales' theorem states that if the three vertices of a triangle lie on a given circle with one side of the triangle being a diameter of the circle, then the angle opposite to that side is a right angle.
Given any three points which do not lie on a line, there exists precisely one circle whose boundary contains those points (namely the circumcircle of the triangle defined by the points). Given three particular points <(x1,y1), (x2,y2), (x3,y3)>, the equation of this circle is given in a simple way by this equation using the matrix determinant:
[itex]
\det\begin{bmatrix} x & y & x^2 + y^2 & 1 \\ x_1 & y_1 & x_1^2 + y_1^2 & 1 \\ x_2 & y_2 & x_2^2 + y_2^2 & 1 \\ x_3 & y_3 & x_3^2 + y_3^2 & 1 \\ \end{bmatrix} = 0. [itex]
A circle is a kind of conic section, with eccentricity zero. In affine geometry all circles and ellipses become (affinely) isomorphic, and in projective geometry the other conic sections join them. In topology all simple closed curves are homeomorphic to circles, and the word circle is often applied to them as a result. The 3-dimensional analog of the circle is the sphere.
Squaring the circle refers to the (impossible) task of constructing, for a given circle, a square of equal area with ruler and compass alone. Tarski's circle-squaring problem, by contrast, is the task of dividing a given circle into finitely many pieces and reassembling those pieces to obtain a square of equal area. Assuming the axiom of choice, this is indeed possible.
Three-dimensional shapes whose cross-sections in some planes are circles include spheres, spheroids, cylinders, and cones.
## External links
• Clifford's Circle Chain Theorems. (http://agutie.homestead.com/files/clifford1.htm) This is a step by step presentation of the first theorem. Clifford discovered, in the ordinary Euclidean plane, a "sequence or chain of theorems" of increasing complexity, each building on the last in a natural progression. by Antonio Gutierrez from "Geometry Step by Step from the Land of the Incas"
• Munching on Circles (http://www.cut-the-knot.org/pythagoras/Munching/circle.shtml)cs:Kružnice
##### Navigation
Academic Kids Menu
• Art and Cultures
• Art (http://www.academickids.com/encyclopedia/index.php/Art)
• Architecture (http://www.academickids.com/encyclopedia/index.php/Architecture)
• Cultures (http://www.academickids.com/encyclopedia/index.php/Cultures)
• Music (http://www.academickids.com/encyclopedia/index.php/Music)
• Musical Instruments (http://academickids.com/encyclopedia/index.php/List_of_musical_instruments)
• Biographies (http://www.academickids.com/encyclopedia/index.php/Biographies)
• Clipart (http://www.academickids.com/encyclopedia/index.php/Clipart)
• Geography (http://www.academickids.com/encyclopedia/index.php/Geography)
• Countries of the World (http://www.academickids.com/encyclopedia/index.php/Countries)
• Maps (http://www.academickids.com/encyclopedia/index.php/Maps)
• Flags (http://www.academickids.com/encyclopedia/index.php/Flags)
• Continents (http://www.academickids.com/encyclopedia/index.php/Continents)
• History (http://www.academickids.com/encyclopedia/index.php/History)
• Ancient Civilizations (http://www.academickids.com/encyclopedia/index.php/Ancient_Civilizations)
• Industrial Revolution (http://www.academickids.com/encyclopedia/index.php/Industrial_Revolution)
• Middle Ages (http://www.academickids.com/encyclopedia/index.php/Middle_Ages)
• Prehistory (http://www.academickids.com/encyclopedia/index.php/Prehistory)
• Renaissance (http://www.academickids.com/encyclopedia/index.php/Renaissance)
• Timelines (http://www.academickids.com/encyclopedia/index.php/Timelines)
• United States (http://www.academickids.com/encyclopedia/index.php/United_States)
• Wars (http://www.academickids.com/encyclopedia/index.php/Wars)
• World History (http://www.academickids.com/encyclopedia/index.php/History_of_the_world)
• Human Body (http://www.academickids.com/encyclopedia/index.php/Human_Body)
• Mathematics (http://www.academickids.com/encyclopedia/index.php/Mathematics)
• Reference (http://www.academickids.com/encyclopedia/index.php/Reference)
• Science (http://www.academickids.com/encyclopedia/index.php/Science)
• Animals (http://www.academickids.com/encyclopedia/index.php/Animals)
• Aviation (http://www.academickids.com/encyclopedia/index.php/Aviation)
• Dinosaurs (http://www.academickids.com/encyclopedia/index.php/Dinosaurs)
• Earth (http://www.academickids.com/encyclopedia/index.php/Earth)
• Inventions (http://www.academickids.com/encyclopedia/index.php/Inventions)
• Physical Science (http://www.academickids.com/encyclopedia/index.php/Physical_Science)
• Plants (http://www.academickids.com/encyclopedia/index.php/Plants)
• Scientists (http://www.academickids.com/encyclopedia/index.php/Scientists)
• Social Studies (http://www.academickids.com/encyclopedia/index.php/Social_Studies)
• Anthropology (http://www.academickids.com/encyclopedia/index.php/Anthropology)
• Economics (http://www.academickids.com/encyclopedia/index.php/Economics)
• Government (http://www.academickids.com/encyclopedia/index.php/Government)
• Religion (http://www.academickids.com/encyclopedia/index.php/Religion)
• Holidays (http://www.academickids.com/encyclopedia/index.php/Holidays)
• Space and Astronomy
• Solar System (http://www.academickids.com/encyclopedia/index.php/Solar_System)
• Planets (http://www.academickids.com/encyclopedia/index.php/Planets)
• Sports (http://www.academickids.com/encyclopedia/index.php/Sports)
• Timelines (http://www.academickids.com/encyclopedia/index.php/Timelines)
• Weather (http://www.academickids.com/encyclopedia/index.php/Weather)
• US States (http://www.academickids.com/encyclopedia/index.php/US_States)
Information
• Home Page (http://academickids.com/encyclopedia/index.php)
• Contact Us (http://www.academickids.com/encyclopedia/index.php/Contactus)
• Clip Art (http://classroomclipart.com) | |
# A non-invasive approach to estimate the energetic requirements of an increasing seabird population in a perturbed marine ecosystem
## Abstract
There is a growing desire to integrate the food requirements of predators living in marine ecosystems impacted by humans into sustainable fisheries management. We used non-invasive video-recording, photography and focal observations to build time-energy budget models and to directly estimate the fish mass delivered to chicks by adult greater crested terns Thalasseus bergii breeding in the Benguela ecosystem. Mean modelled adult daily food intake increased from 140.9 g·d−1 of anchovy Engraulis capensis during incubation to 171.7 g·d−1 and 189.2 g·d−1 when provisioning small and large chicks, respectively. Modelled prey intake expected to be returned to chicks was 58.3 g·d−1 (95% credible intervals: 44.9–75.8 g·d−1) over the entire growth period. Based on our observations, chicks were fed 19.9 g·d−1 (17.2–23.0 g·d−1) to 45.1 g·d−1 (34.6–58.7 g·d−1) of anchovy during early and late provisioning, respectively. Greater crested terns have lower energetic requirements at the individual (range: 15–34%) and population level (range: 1–7%) than the other Benguela endemic seabirds that feed on forage fish. These modest requirements – based on a small body size and low flight costs – coupled with foraging plasticity have allowed greater crested terns to cope with changing prey availability, unlike the other seabirds species using the same exploited prey base.
## Introduction
The balance between energy expenditure and food consumption determines many aspects of animal ecology, including the role of species within ecosystems and the mechanisms that drive population dynamics1. As anthropogenic activities and environmental change threaten an increasing number of habitats, there is a growing need to investigate the energy requirements of species dwelling in impacted ecosystems2,3,4 particularly when those species compete with humans for resources5,6. Such knowledge can facilitate the development of management plans that account for a species’ needs at the population level.
Accurately measuring energetic needs is particularly important for birds as most species operate at higher trophic levels, exerting top–down control on lower trophic levels and/or reacting to bottom–up forcing7. They need regular access to food resources because of their high metabolic rate and energetically demanding flight8,9. Birds therefore offer opportunities to explore the relationships between environmental limitations (e.g. climate change), food web characteristics (e.g. trophic relationships) and energy budgets10. This requires accurate energetic estimates of individuals in the wild, but these are usually laborious and invasive to obtain. For example, they include the capture of individuals for laboratory work (e.g. surgery, respirometry11,12), the use of doubly labelled water9 or the deployment of data-loggers13. Such methods are becoming a growing ethical concern14, particularly for threatened species, making birds a challenging group to study12,15,16. Modelling approaches using time-activity budgets combined with knowledge on the energetic costs of specific behaviours offer non-invasive alternatives to estimate bird energy expenditure in the wild17,18, and generally provide improved estimates over allometric equations or thermodynamics modelling18,19.
Worldwide, many marine environments have been severely altered by human activity with large impacts on top predators20. Today ~28% of the world’s ~350 seabird species are considered to be threatened with extinction by the International Union for Conservation of Nature21. Moreover, seabirds have high foraging costs and are greatly affected by commercial fishing activities22,23,24. In the North Sea, for example, competition with the industrial fishery for lesser sandeel Ammodytes marinus is partly responsible for the low breeding success and population decline of black-legged kittiwakes Rissa tridactyla and several other seabird populations25,26. Moreover, fluctuations in this key prey appeared to affect disproportionately small, surface-feeding species with high foraging costs, leading to the suggestion that such species – including terns – are sensitive indicators of deterioration in the state of marine ecosystems27. Using energetic models to better quantify the consumption of these sensitive seabird species thus offers great potential to integrate their needs into an ecosystem approach to fisheries18.
The Benguela ecosystem off southern Africa is one of the four major eastern boundary upwelling ecosystems and one of the most productive ocean areas in the world. Over the last 70 years a combination of fishing and environmental change have altered the availability of lipid-rich forage fish forage in this system, with knock-on consequences for higher trophic level predators24,28,29,30,31. In particular, the decreased access to prey is considered to be the key driver of ongoing declines of three endemic seabird species: African penguins Spheniscus demersus, Cape cormorants Phalacrocorax capensis and Cape gannets Morus capensis28,29,30,31. Perhaps surprisingly, numbers of greater crested terns Thalasseus bergii, which rely on the same resources and breed in the same region, have tripled over the last few decades; the reasons for these contrasting fortunes remain equivocal32,33. Considerable foraging plasticity34 and their ability to move breeding sites35 could have helped greater crested terns maintain high annual survivorship in the face of ecosystem-wide changes36. In addition, it is possible that their small body size (~390 g), single egg clutch, and short breeding period (68 days) reduce the greater crested tern’s overall energy requirements compared to other sympatric breeding seabirds. Thus, estimating energy budgets for the Benguela’s breeding seabirds may help us to understand why numbers of greater crested terns are increasing while the region’s threatened and endemic seabirds that rely on the same resource are decreasing. This information will also improve our knowledge of food partitioning within the Benguela ecosystem food-web, provide a baseline against which to assess the impact of future environmental change, and assist the development of conservation planning.
Here, we report the foraging activity budget of the southern African population of breeding greater crested terns using non-invasive methods. Based on the duration and cost of activities performed by breeding adults, we modelled the daily energy expenditure (DEE) and daily food intake (DFI) of adults during different breeding stages. To account for parameter uncertainty and propagate sources of error, we used Bayesian inference and Markov chain Monte Carlo (MCMC) estimation. We then compared our observed estimates of chick daily food intake to our model results.
## Results
### Time activity budget in relation to breeding stage
Over a total of 51 days, 374 greater crested tern nests were video monitored during incubation and 240 nests during early chick provisioning (hereafter “early provisioning”). These videos provided duration estimates for 1,138 incubation foraging trips and 1,747 early provisioning foraging trips. Over a 16-day period of focal observations, 31 chicks that had left the nest cup (hereafter “mobile chicks”) were monitored during late chick provisioning (hereafter “late provisioning”), which provided duration estimates for 252 foraging trips.
Foraging trips were longer during incubation than during both the early- or late-provisioning periods (Fig. 1A). Incubating adults spent an average of 4.73 h (95% CI 4.51–4.97) away from their nest per trip and performed 1.52 trips·d−1 (1.46–1.58, Fig. 1A,B). Foraging trips during early provisioning were shorter (1.83 h, 1.76–1.90), allowing more trips (4.08 trips·d−1, 3.88–4.29) than during incubation (Fig. 1B). As a result, the total time spent away from the nest during incubation and early provisioning was similar (Fig. 1C). During late provisioning, when chicks are generally left alone so both adults can forage at once, the mean number of trips per parent per day (4.57 trips·d−1, 3.97–5.26) was similar to early provisioning (Fig. 1B). In contrast, the mean duration of each foraging trip was longer (2.24 h, 2.02–2.48), resulting in an increase in the time each parent spent away from the chick (Fig. 1C).
### Modelling time-energy-budgets
Time-energy budget models indicated that the total energy requirements of adults and offspring increased steadily throughout the breeding season (Fig. 2, Table 1). During incubation, the modelled DEE of an adult was 668 kJ·d−1 (95% CI 552–784), with a DFI of 140.8 g·d−1 of fish (105.1–186.4, Fig. 2). During early provisioning, adult modelled DEE was 676 kJ·d−1 (559–793), which was similar to during incubation. However, the estimated total DFI for an adult, including that fed to the chick, was 22% more at 171.7 g·d−1 (130.8–224.3, Fig. 2). During late provisioning, adult modelled DEE increased to 759 kJ·day−1 (620–903) with a total modelled DFI, including that of the chick, of 189.2 g·d−1 (143.1–248.9, Fig. 2).
Using an allometric equation for larids37, the modelled mean chick daily metabolizable energy intake was estimated as 358 kJ·d−1 (310–405), which results in a chick modelled DFI of 75.6 g·d−1 (58.2–98.2 g·d−1) over the pre-fledging period. Thus, the expected mean amount returned to chicks across the breeding population – assuming a breeding success of 0.59 chicks fledged per pair – would be 58.3 g·d−1 (44.9–75.8 g·d−1), or 29.2 g·d−1 (22.5–37.9 g·d−1) by each parent (Table 1).
Sensitivity analyses showed that variation in adult body mass and prey calorific value had the largest effect on modelled estimates of DFI during all breeding stages (see Supplementary Information S1 and Table S2).
### Estimating chick DFI from photo-sampling, video-recording and focal observations
The mean (95% CI) mass of anchovies brought to the chick during early provisioning was 4.4 g (3.9–4.9, n = 126), which was smaller than the anchovy returned during late provisioning to mobile chicks (5.2 g; 5.0–5.5, n = 629; Fig. 3). Feeding rates averaged 4.6 fish·d−1 (4.1–5.0, n = 240) returned to the nestling during early provisioning, with more fish returned during late provisioning (8.6 fish·d−1; 6.6–11.2, n = 34). Chick observed DFI increased from early provisioning (19.9 g·d−1, 17.2–23.0, n = 126) to late provisioning (45.1 g·d−1, 34.6–58.7, n = 629).
## Discussion
Using a combination of different non-invasive methods, this study presents the first estimates of the time budget and linked energy expenditure of a population of breeding greater crested terns. Our results are in agreement with predictions of central-place foraging models, which indicate that adults should increase the amount of energy delivered to chicks over the chick growth period and so raise their own energy expenditure through increased foraging13,38. Small chicks were fed anchovies of a size appropriate to their smaller gape, whereas mobile chicks received anchovies ca 20% heavier. Overall, the amount of fish required daily to feed an adult and chick greater crested tern was 3–7 times lower than for other Benguela endemic species relying on the same prey base (Table 2). A small body size, combined with a highly efficient flight mode and an aptitude for finding food efficiently contribute to lowering the energy budget of greater crested terns. These factors may help to explain why this species’ status remains favourable while populations of other Benguela endemic seabirds relying on the same prey base are decreasing.
### The use of non-invasive methods for assessing energy expenditure
Uncertainties in reconstructing time-energy expenditure can derive from several sources, including the inaccuracy of activity durations39, the estimated cost for each behaviour, and thermoregulatory costs. For terns in particular, these parameters may lack precision as energetic investigations on these birds have so far been limited to small numbers of individuals of only a few species40. For example, the model used to estimate flight costs may misrepresent energy expenditure compared to more empirical estimates40,41,42. The use of animal-borne data loggers (e.g. GPS, accelerometers) could overcome this limitation, providing precise time-budget data on different at-sea behaviours (e.g. continuous flapping, gliding, hovering and diving) and estimates of their associated energy expenditure43. However, we favoured non-invasive methods as animal-borne data loggers can affect bird condition and behaviour16, and because greater crested terns are highly sensitive to human disturbance44. Furthermore, the approach used in this study can provide better population-level inference than data logger studies, which usually rely on small sample sizes13,45.
Observed feeding rates in our study were limited to delivered prey. However, prior to feeding their chick, provisioning adults may be forced to perform specific behaviours which require additional energetic expenditure. Terns are often the target of inter- and intra-specific kleptoparasitism as they bring prey to the colony in their bill46,47. This can result in loss of prey (up to 3.2 g·d−1 of anchovies for interspecific kleptoparasitism) and/or additional energy costs to counter kleptoparasitic attacks48. Accordingly, provisioning adults may have to compensate for the food lost in this way, with implications for their energy expenditure49; however, this interaction is poorly understood and few studies can account for the energy expenditure linked to kleptoparasitism in models.
### Implications at the population level of low individual energetic requirements
The recent decreases in seabird populations in the Benguela ecosystem suggest that updated estimates of food consumption are needed to account for energy partitioning in the management of the purse-seine fisheries, with which predators compete for prey24,31,50. Modelling approaches are increasingly being implemented to study seabird-fishery competition23, including studies to predict the smallest forage fish biomass needed to sustain seabird productivity over the long term51. To provide an overview of seabird energetic needs, it is particularly important to account for species body size, clutch size, and number of fledging days. These needs can then be extrapolated to a broader ecosystem level by accounting for the total population breeding in the system.
A comparison of the energetic demands with the other three Benguela endemic seabirds that rely on forage fish, illustrates that the biomass of forage fish needed by breeding greater crested terns at present is much lower than that needed by the other populations (Table 2). Greater crested tern chicks require ~3 kg of anchovy to fledge, compared to ~17 kg of anchovy for an African penguin chick52 ~10 kg for a Cape gannet chick28 and ~6 kg for a Cape cormorant chick (T. Cook unpublished data). With approximately 15,000 pairs breeding in the Benguela ecosystem, the whole population requires ~2,800 kg·d−1 of anchovy, which equates to ~133 times less than the Cape gannet population and ~37 times less than the Cape cormorant population breeding in the region (Table 2). Breeding African penguins, despite a recent decrease in numbers33, require ~13 times more food than greater crested terns (Table 2). Thus, their modest energetic requirements may be a key component allowing greater crested terns to cope in a changing and highly exploited environment.
In animals like seabirds, that must travel large distances to secure prey, costs of transport can constitute a large portion of the daily energy budget. Compared to other species of the guild of Benguela ecosystem seabirds specialised on forage fish, the cost of flight per unit of body mass and time in greater crested terns is low (Table 2). Consequently, the overall cost of flight per individual and per time unit in this species is 4–5 times lower than in the other volant seabirds of this guild (Table 2). In part, this can be attributed to their wing morphology. Like other tern species, greater crested terns have long (90–115 cm)53, narrow, pointed wings with low wing loading. This makes them efficient at the slow, agile flight needed when searching for food54. Terns are capable of rapid turning, swooping, hovering, vertical take-off and soaring40, all with relatively low energy expenditure. Their capacity to explore the marine environment efficiently may help explain why greater crested terns appear more successful than the Benguela ecosystem’s other seabird species at coping with decreased food availability.
In the northern Benguela, the population of sardine has been depleted since the early 1970s, and there has been little if any compensation by anchovy, forcing seabirds there to consume low-quality prey such as bearded goby Sufflogobius bibarbatus55. In contrast to the declining African penguin population, the small population of greater crested terns (~1,200 pairs), which also relies on bearded goby in Namiba54, has remained stable, suggesting an ability to cope when switching to low-quality prey56. Terns in the North Sea were found to be most vulnerable and sensitive to sandeel exploitation, presumably as a consequence of their specialized diet, small foraging range and inability to increase parental foraging effort when prey becomes scarce25. In contrast, greater crested terns breeding in the Benguela ecosystem could buffer these limitations due to their flexible diet, which includes ca. 50 different prey species34 and their low fidelity to breeding sites, which are believed to be chosen depending on the local availability of prey immediately preceding the breeding season, rather than by philopatry32. In addition, the recent major decrease of migrant tern populations to the Benguela ecosystem (e.g. common tern Sterna hirundo57) may have led to reduced interspecific competition with surface-gleaning seabirds, providing more resources for this resident tern species. In this context, the greater crested terns’ low energy requirements combined with their ability to switch to alternative prey provide a great advantage, highlighting the apparent species-specific responses to shifting foraging conditions, which seem to favour the greater crested tern in this ecosystem.
In conclusion, this study shows that greater crested terns have relatively low energy requirements at both the individual and population level, when compared to other seabirds breeding in the Benguela ecosystem that rely on the same resources. These low energy requirements appear to contribute to their recent increase in this exploited ecosystem. Further studies implementing detailed knowledge of the energetics, prey demands and demography of the Benguela’s endemic seabirds are needed to understand the apparent differences in their food requirements and assist the development of conservation planning for the threatened seabird species breeding in the region58,59.
## Methods
### Measuring time-budget and feeding rates from video-recording and focal observations
Foraging trip durations and offspring feeding rates of breeding greater crested terns were assessed on Robben Island (33°48′S, 18°22′E), in South Africa’s Western Cape Province, using non-invasive video recordings of nest-cup activities during early provisioning (Figure S1). All methods were approved by the Department of Environmental Affairs (RES2013/24, RES2014/83, RES2015/65) and the animal ethics committee of the University of Cape Town (2013/V3/TC).
Greater crested tern chicks become mobile and leave the nest cup after approximately four days53. Thus, we monitored individual chicks banded with engraved colour rings using binoculars and a hide (distance 10–30 m) to determine foraging trip durations and feeding rates during late provisioning. Observations and recordings were made from February to May during three breeding seasons (2013, 2014 and 2015). See Supplementary Information S1 for details on these observations.
Video recordings were analysed using VLC media player (VideoLAN project). Three breeding stages were recognised: incubation (during which time, any prey brought to the colony are only used for courtship), early provisioning (the mean week when chicks are provisioned in the nest cup), and late provisioning (the period when adults provision mobile chicks, which typically gather in crèches). Greater crested terns do not forage at night60, but our cameras were not always able to capture useable footage from first light or after sunset. Therefore, if birds on focal nests had already left by the start of filming at dawn, or not returned to the nest by the time our cameras could no longer operate due to low light levels, we used nautical twilight as a proxy of their departure and arrival times61,62. Nautical twilight is defined when the centre of the sun is 12° below the earth’s horizon63. The time of twilight on a given date at each colony was obtained from www.timeanddate.com.
### Estimating chick DFI from photo-sampling
Prey carried by greater crested terns returning to the breeding colony to feed chicks were recorded as part of a program monitoring tern diet34. Prey were photographed using a non-invasive photo-sampling technique, allowing for an accurate determination of fish species and standard length64 For anchovy, we converted estimated fish lengths to mass using a yearly species-specific regression (see Supporting Information S1 and Table S3).
### Time-energy budget models
Time-energy budget models were built for adult greater crested terns to calculate the amount of food that individuals needed to consume daily to rear their progeny in a season (daily food intake – DFI, g·d−1). Specific input values shown in Table 3. Two main behaviours were identified: flying and resting at the colony. Precise time-budget data on at-sea behaviour can be identified using activity recorders such as accelerometers43. Due to their small size and sensitivity to disturbance, such data is lacking for almost all tern species. Thus, greater crested terns were assumed to be flying the entire time they were away from the colony. This assumption is supported by the fact that, while foraging, greater crested terns do not rest at the sea surface, diving events are infrequent and dives last only a few seconds at most (pers. obs.). Budgets were based on the bioenergetic model elaborated by Grémillet et al.6. By considering the duration (D) and metabolism per time unit (M) of each activity daily energy expenditure (DEE, kJ·d−1) for adults was defined as:
$$DEE=\sum _{k=1}^{n}({D}_{k}\times {M}_{k})$$
(1)
DEE was then converted into adult DFI. Anchovy make up ~65% of the prey species consumed by greater crested terns in the Western Cape34 but since one of our aims was to compare observed estimates of chick DFI to our model results, for the purpose of the model we assume that anchovy makes up the entire diet (but see Supplementary Information S1). Using the mean (±SD) calorific value (Cp) of 6.22 ± 0.65 kJ·g−1 (wet mass)65,66,67,68,69 and an assimilation efficiency37 (Ea) of 0.77 ± 0.34, we calculated adult DFI (g·d−1) as:
$$DFI=\frac{DEE}{Cp\times Ea}$$
(2)
We took adult DFI to represent the total energetic needs during each incubation period. For each of the early- and late-provisioning phases, we estimated total adult DFI as the sum of the fish needed to sustain their own expenditure (DFI), as derived from their time-activity budget, and the amount needed for chick maintenance and growth. Greater crested tern chicks’ energetic requirements have not been measured before. Chick energetic requirements were thus estimated by fitting an allometric regression to published data on 10 larid species37 (Figure S2). This regression yielded a distribution for the total amount of energy metabolized until fledging (TME, kJ) in relation to asymptotic chick mass (A = 370 g, Table 3):
$$TME=\alpha +(\beta \times A)$$
(3)
where α is the distribution for the estimate of the allometric regression intercept (posterior mean = 539.5) and β is the distribution for the estimate of the slope parameter (posterior mean = 37.3). Mean chick daily metabolizable energy intake (MEI) (kJ) over the fledging period (40 days) was thus calculated in relation to days taken to fledge (F):
$$MEI=\frac{TME}{F}$$
(4)
We used a breeding success of 0.59 chicks fledged per pair and a fledging period of 40 days70 (Table 3) to estimate a daily chick mortality rate (CMR) by assuming that nests fail at random through time:
$$CMR=\frac{\mathrm{log}(0.59)}{F}$$
(5)
We then used the resulting survival function (Figure S3) to estimate total adult DFI (TDFI) for each of the early-provisioning (p = 1) and late-provisioning (p = 2) phases as:
$$\begin{array}{c}TDF{I}_{p}=DF{I}_{p}+(MEE\times (\frac{{\sum }_{t=1}^{F}\exp (CMR\times t)}{F})\times 0.5),\\ \,\,\,\,\,\,\,\,t=1\ldots F,\,p=1,2\end{array}$$
(6)
and estimated TDFI across the 40-day fledging period as:
$$TDF{I}_{F}=(TDF{I}_{1}\times 0.1)+(TDF{I}_{2}\times 0.9)$$
(7)
Metabolic rates of different activities undertaken by the adults were taken from the literature (Table 3). We used a basal metabolic rate (BMR) of 6.73 W kg−1 derived from respirometry71, 2 × BMR as an estimate of the cost of resting at the colony72 and estimated the cost of flying in greater crested terns (as 5.2 × BMR) with the software Flight 1.2573 using a wingspan of 1 m53, a wing aspect ratio of 10.4 (from the sooty tern Sterna fuscata)73 and a body mass of 390 g53. This software uses aerodynamic modelling, species-specific body mass and dimension to calculate the energetic cost of flying. Terns may use alternative flight modes to continuous flapping (vertical take-off after a dive, hovering over the water in search for prey or gliding) and incur different flight costs depending on the flight mode or the wind field (wind speed and direction). However, we assumed that greater crested terns were flying continuously during their time away from the colony, that the time spent using alternative flight modes was marginal and that overall, greater crested terns experienced an equivalent proportion of different wind speeds and directions. Flight cost (35.6 W·kg−1) was thus calculated as the average between the minimum (31.8 W·kg−1) and maximum (39.5 W·kg−1) power to fly using continuous flapping. Food requirements for the other Benguela endemic seabirds were collected from previous studies (Table 2).
### Statistical analyses
To account for the impact of the uncertainty of the different input parameters on the estimated energy budget, we used MCMC estimation in JAGS (v.4.1.0) via the ‘jagsUI’ library (v. 1.4.2)74 for programme R v.3.2.375 to build the time energy budget model. For input parameters (Table 3) where data were normally distributed, we used normal priors with observed means and SDs. Where data were expected to be positive-only with positively-skewed errors (e.g. duration data) we used gamma priors with the observed means for the shape parameter and rate = 1. For the allometric regression between TME and asymptotic chick mass, we used uninformative priors76 with N(0, 10−7) for means (where 10−7 is precision) and U(1,500, 4,500) for the residual standard error (σ), with the precision specified as σ−2.
To calculate chick DFI estimated from fish mass recorded by photo-sampling, we used the MCMC method described above to fit a gamma regression with a log-link function to estimate the mean (±95% CI) mass of anchovy returned to the colony by breeding stage (early provisioning = 1, late provisioning = 2) from n = 755 photographs. The mean (±95% CI) number of prey delivered to offspring by breeding stage from n = 274 events recorded on video or during focal observations, the mean (±95% CI) foraging trip duration, and the mean (±95% CI) number of offspring feeds per day (feeding rate) by breeding stage (incubation = 1, early provisioning = 2, late provisioning = 3) were also estimated using gamma regressions with a log-link functions. For the gamma regressions, we used uninformative priors, N(0, 10−7) for the estimated coefficients in the linear predictor and U(0, 100) for the shape parameter. The observed chick DFI was calculated by multiplying the posterior distributions for anchovy mass and number of prey delivered.
For all parameters, we modelled means ±95% Bayesian credible intervals (CI) using three MCMC chains (150,000 samples, burn-in of 50,000 and no thinning). All models unambiguously converged (all $$\hat{R}$$ values < 1.01). See Supporting Information S2 for model code.
## References
1. 1.
Brown, M. T. & Ulgiati, S. Energy quality, energy, and transformity: HT Odum’s contributions to quantifying and understanding systems. Ecological Modelling 178, 201–213 (2004).
2. 2.
Krebs, J. R., & Davies, N. B. Economic decisions and the individual. In ‘An Introduction to Behavioural Ecology’. (Eds J. R. Krebs & N. B. Davies.) Blackwell Scientific Publications, 48–76 (London, 1993).
3. 3.
Gordon M. S., & Bartol, S. M. Experimental approaches to conservation biology. University of California Press, (Berkeley and Los Angeles, 2004).
4. 4.
Brander, K. M. Global fish production andclimate change. Proceedings of the National Academy of Sciences USA 104, 19709–19714 (2007).
5. 5.
Bunce, A. Prey consumption of Australasian gannets (Morus serrator) breeding in Port Phillip Bay, southeast Australia, and potential overlap with commercial fisheries. ICES Journal of Marine Science: Journal du Conseil 58, 904–915 (2001).
6. 6.
Grémillet, D., Wright, G., Lauder, A., Carss, D. N. & Wanless, S. Modelling the daily food requirements of wintering great cormorants: a bioenergetics tool for wildlife management. Journal of Applied Ecology 40, 266–277 (2003).
7. 7.
Britten, G. L. et al. Predator decline leads to decreased stability in a coastal fish community. Ecology Letters 17, 1518–25 (2014).
8. 8.
Grémillet, D. & Wilson, R. P. A life in the fast lane: energetics and foraging strategies of the great cormorant. Behavioural Ecology 10, 516–524 (1999).
9. 9.
Mullers, R. H. E., Navarro, R. A., Daan, S., Tinbergen, J. M. & Meijer, H. A. J. Energetic costs of foraging in breeding Cape gannets Morus capensis. Marine Ecology-Progress Series 393, 161–171 (2009).
10. 10.
Einoder, L. D. A review of the use of seabirds as indicators in fisheries and ecosystem management. Fisheries Research 95, 6–13 (2009).
11. 11.
Green, J. A., White, C. R., Bunce, A., Frappell, P. B. & Butler, P. J. Energetic consequences of plunge diving in gannets. Endangered Species Research 10, 269–279 (2009).
12. 12.
Enstipp, M. R., Grémillet, D. & Lorentsen, S. H. Energetic costs of diving and thermal status in European shags (Phalacrocorax aristotelis). Journal of Experimental Biology 208, 3451–3461 (2005).
13. 13.
Collins, P. M. et al. Energetic consequences of time-activity budgets for a breeding seabird. Journal of Zoology 300, 153–162 (2016).
14. 14.
Lane, J. M., McDonald, R. A. Welfare and ‘best practices’ in field studies of wildlife. In, The UFAW handbook on the care and management of laboratory and other research animals, 8th Ed, Hubrecht R., Kirkwood J., editors. Wiley-Blackwell, 92–106 (Oxford, 2010).
15. 15.
Keller, T. M. & Visser, G. H. Daily energy expenditure of great cormorants Phalacrocorax carbo sinensis wintering at Lake Chiemsee, southern Germany. Ardea 87, 61–69 (1999).
16. 16.
Elliott, K. H. et al. High flight costs, but low dive costs, in auks support the biomechanical hypothesis for flightlessness in penguins. Proceedings of the National Academy of Sciences of the United States of America 110, 9380–9384 (2013).
17. 17.
Furness, R. W. Energy requirements of seabird communities: a bioenergetics model. Journal of Animal Ecology 47, 39–53 (1978).
18. 18.
Fort, J., Porter, W. P. & Grémillet, D. Energetic modelling: a comparison of the different approaches used in seabirds. Molecular & Integrative Physiology 158, 358–365 (2011).
19. 19.
Humphreys, E. M., Wanless, S. & Bryant, D. M. Stage-dependent foraging in breeding black-legged kittiwakes Rissa tridactyla: distinguishing behavioural responses to intrinsic and extrinsic factors. Journal of Avian Biology 37, 436–446 (2006).
20. 20.
McCauley, D. J. et al. Marine defaunation: Animal loss in the global ocean. Science 347, 1255641 (2015).
21. 21.
Votier, S. C. & Sherley, R. B. Seabirds. Current Biology 27, R448–R450 (2017).
22. 22.
Hobday, A. J., Bell, J. D., Cook, T. R., Gasalla, M. A. & Weng, K. C. Reconciling conflicts in pelagic fisheries under climate change. Deep Sea Research II 113, 291–300 (2015).
23. 23.
Sydeman, W. J. et al. Best practices for assessing forage fish fisheries-seabird resource competition. Fisheries Research 194, 209–221 (2017).
24. 24.
Sherley, R. B. et al. Bayesian inference reveals positive but subtle effects of experimental fishery closures on marine predator demographics. Proceedings of the Royal Society B: Biological Sciences 285, 20172443 (2018).
25. 25.
Rindorf, A., Wanless, S. & Harris, M. P. Effects of changes in sandeel availability on the reproductive output of seabirds. Marine Ecology Progress Series 202, 241–252 (2000).
26. 26.
Frederiksen, M., Wanless, S., Harris, M. P., Rothery, P. & Wilson, L. J. The role of industrial fisheries and oceanographic change in the decline of North Sea black-legged kittiwakes. Journal of Applied Ecology 41, 1129–1139 (2004).
27. 27.
Furness, R. W. & Tasker, M. L. Seabird-fishery interactions: quantifying the sensitivity of seabirds to reductions in sandeel abundance, and identification of key areas for sensitive seabirds in the North Sea. Marine Ecology Progress Series 202, 253–264 (2000).
28. 28.
Pichegru, L. et al. Foraging behaviour and energetics of Cape gannets Morus capensis feeding on live prey and fishery discards in the Benguela upwelling system. Marine Ecology Progress Series 350, 127–136 (2007).
29. 29.
Crawford, R. J. M. et al. Collapse of South Africa’s penguins in the early 21st century. African Journal of Marine Science 33, 139–156 (2011).
30. 30.
Sherley, R. B. et al. Age-specific survival and movement among major African Penguin Spheniscus demersus colonies. Ibis 56, 716–728 (2014).
31. 31.
Sherley, R. B. et al. Metapopulation tracking juvenile penguins reveals an ecosystem-wide ecological trap. Current Biology 27, 1–6 (2017).
32. 32.
Crawford, R. J. M. A recent increase of Swift Terns Thalasseus bergii off South Africa – the possible influence of an altered abundance and distribution of prey. Progress in Oceanography 83, 398–403 (2009).
33. 33.
Crawford, R. J. M., Makhado, A. B., Waller, L. J. & Whittington, P. A. Winners and losers–responses to recent environmental change by South African seabirds that compete with purse-seine fisheries for food. Ostrich 85, 111–117 (2014).
34. 34.
Gaglio, D., Cook, T. R., McInnes, A., Sherley, R. B. & Ryan, P. G. Foraging plasticity in seabirds: a non-invasive study of the diet of greater crested terns breeding in the Benguela Region. PLoS ONE 13, e0190444 (2018).
35. 35.
Crawford, R. J. M. Influence of food on numbers breeding, colony size and fidelity to localities of Swift Terns in South Africa’s Western Cape, 1987–2000. Waterbirds 26, 44–53 (2003).
36. 36.
Payo-Payo, A. et al. Survival estimates of Greater Crested Terns (Thalasseus bergii) in southern Africa. African Journal of Marine Science (In Press).
37. 37.
Visser, G. H. Chick growth and developments in seabirds. In Biology of Marine Birds (eds Schreiber, E. A. & Burger, J.). CRC Press, 439–465 (Boca Raton, 2002).
38. 38.
Orians G. H., & Pearson N. E. On the theory of central place foraging. In Horn, O. J., Stairs, B. R., & Mitchell R. D. (eds) Analysis of ecological systems. Ohio State University Press, 155–177 (Columbus, 1979).
39. 39.
Goldstein, D. L. Estimates of daily energy expenditure in birds: the time-energy budget as an integrator of laboratory and field studies. American Zoologist 28, 829–844 (1988).
40. 40.
Flint, E. N. & Nagy, K. A. Flight energetics of free-living Sooty Terns. Auk 101, 288–294 (1984).
41. 41.
McWilliams, S. R., Guglielmo, C., Pierce, B. & Klaassen, M. Flying, fasting, and feeding in birds during migration: a nutritional and physiological ecology perspective. Journal Avian Biology 35, 377–393 (2004).
42. 42.
Schmidt-Wellenburg, C. A., Biebach, H., Daan, S. & Visser, G. H. Energy expenditure and wing beat frequency in relation to body mass in free flying Barn Swallows (Hirundo rustica). Journal of Comparative Physiology B 177, 327–337 (2007).
43. 43.
Wilson, R. P., Quintana, F. & Hobson, V. J. Construction of energy landscapes can clarify the movement and distribution of foraging animals. Proceedings of the Royal Society B 279, 975–980 (2012).
44. 44.
Gaglio, D., Cook, T. R. & Sherley, R. B. Egg morphology of Swift Terns in South Africa. Ostrich 86, 287–289 (2015).
45. 45.
Hebblewhite, M. & Haydon, D. T. Distinguishing technology from biology: a critical review of the use of GPS telemetry data in ecology. Philosophical Transactions of the Royal Society of London B 365, 2303–2312 (2010).
46. 46.
Brockmann, H. J. & Barnard, C. J. Kleptoparasitism in birds. Animal Behaviour 27, 487–514 (1979).
47. 47.
Gaglio, D. & Sherley, R. B. Nasty neighbourhood: kleptoparasitism and egg predation of Swift Terns by Hartlaub’s Gulls. Ornithological Observations 5, 131–134 (2014).
48. 48.
Gaglio, D., Sherley, R. B., Cook, T. R., Ryan, P. G. & Flower, T. The cost of kleptoparasitism: a study of mixed-species seabirds breeding colonies. Behavioral Ecology. In press.
49. 49.
Stienen, E. W., Brenninkmeijer, A. & Courtens, W. Intra-specific plasticity in parental investment in a long-lived single-prey loader. Journal of Ornithology 156, 699–710 (2015).
50. 50.
Pichegru, L. et al. Overlap between vulnerable top predators and fisheries in the Benguela upwelling system: implications for marine protected areas. Marine Ecology Progress Series 391, 199–208 (2009).
51. 51.
Cury, P. M., Boyd, I. L., Bonhommeau, S. & Anker-Nilssen, T. Global seabird response to forage fish depletion: one-third for the birds. Science 334, 1703–1706 (2011).
52. 52.
Bouwhuis, S., Visser, G. H., & Underhill, L. G. Energy budget of African penguin Spheniscus demersus chicks. In Kirkman, S.P. (ed.) 2007. Final report of BCLME (Benguela Current Large Marine Ecosystem) Project on Top Predators as Biological Indicators of Ecosystem Change in the BCLME. Avian Demography Unit, 125–127 (Cape Town, 2007).
53. 53.
Crawford, R. J. M., Hockey, P. A. R. & Tree, A. J. Swift Tern Sterna bergii. In Roberts Birds of Southern Africa (7th edition) (eds P. A. R. Hockey, W. R. J. Dean & P.G. Ryan), pp 453–455. Trustees of the John Voelcker Bird Book Fund, (Cape Town, 2005).
54. 54.
Videler, J. J. Avian Flight. Oxford University Press, (Oxford, 2005).
55. 55.
Ludynia, K., Roux, J. P., Kemper, J. & Underhill, L. G. Surviving off junk: low-energy prey dominates the diet of African penguins Spheniscus demersus at Mercury Island, Namibia, between 1996 and 2009. African Journal Marine Science 32, 563–572 (2010).
56. 56.
Kemper, J., Underhill, L. G., Crawford, R. J. M., & Kirkman, S. P. Revision of the conservation status of seabirds and seals breeding in the Benguela ecosystem. In Kirkman, S. P., editor, Final Report of the BCLME (Benguela Current Large Marine Ecosystem) Project on Top Predators as Biological Indicators of Ecosystem Change in the BCLME. Avian Demography Unit, 325–342, (Cape Town, 2007).
57. 57.
Ryan, P. G. Medium-term changes in coastal bird communities in the Western Cape, South Africa. Austral ecology 38, 251–259 (2013).
58. 58.
Crawford, R. J. M., Ryan, P. G. & Williams, A. J. Seabird consumption and production in the Benguela and western Agulhas ecosystems. South African Journal of Marine Science 11, 357–375 (1991).
59. 59.
Lescroël, A. et al. Seeing the ocean through the eyes of seabirds: A new path for marine conservation? Marine Policy 68, 212–220 (2016).
60. 60.
Nicholson, L. Breeding strategies and community structure in an assemblage of tropical seabirds on the Lowendal Islands, Western Australia. Unpublished Doctoral dissertation, Murdoch University, (Perth, 2002).
61. 61.
Stienen, E. W. et al. Reflections of a specialist: patterns in food provisioning and foraging conditions in Sandwich Terns Sterna sandvicensis. Ardea 88, 33–49 (2000).
62. 62.
McLeay, L. J. et al. Foraging behaviour and habitat use of a short-ranging seabird, the crested tern. Marine Ecology Progress Series 411, 271–283 (2010).
63. 63.
Hull, C. L. et al. Intraspecific variation in commuting distance of marbled murrelets Brachyramphus marmoratus: ecological and energetic consequences of nesting further inland. Auk 118, 1036–1046 (2001).
64. 64.
Gaglio, D., Cook, T. R., Connan, M., Ryan, P. G. & Sherley, R. B. Dietary studies in birds: testing a non-invasive method using digital photography in seabirds. Method in Ecology and Evolution 8, 214–222 (2017).
65. 65.
Batchelor, A. L. & Ross, G. J. B. The diet and implications of dietary change of Cape gannets on Bird Island, Nelson Mandela Bay. Ostrich 55, 45–63 (1984).
66. 66.
Prosch, R. M. Early growth in length of the anchovy Engraulis capensis Gilchrist off South Africa. South African Journal of Marine Science 4, 181–191 (1986).
67. 67.
Jackson, S. Seabird digestive physiology in relation to foraging ecology. PhD thesis, University of Cape Town, (Cape Town, 1990).
68. 68.
Balmelli, W. & Wickens, P. A. Estimates of daily ration for the South African (Cape) fur seal. African Journal of Marine Science 14, 151–157 (1994).
69. 69.
Pichegru, L., Ryan, P. G., Crawford, R. J. M., van der Lingen, C. D. & Grémillet, D. Behavioural inertia places a top marine predator at risk from environmental change in the Benguela upwelling system. Marine Biology 157, 537–544 (2010).
70. 70.
Crawford, R. J. M. et al. Longevity, inter-colony movements and breeding of Crested Terns in South Africa. Emu 102, 1–9 (2002).
71. 71.
Ellis, H. I. Metabolism and evaporative water loss in three seabirds (Laridae). Federation Proceedings 39, 1165 (1980).
72. 72.
Enstipp, M. R. et al. Foraging energetics of North Sea birds confronted with fluctuating prey availability. In: Boyd, S.; Wanless, S., Camphuysen, C. J., (eds.) Top predators in marine ecosystems: their role in monitoring and management. Cambridge University Press, 191–210 (Cambridge, 2006).
73. 73.
Pennycuick, C. J. Modelling the flying bird. Academic Press, (London, 2008).
74. 74.
Kellner, K. jagsUI: A Wrapper Around ‘rjags’ to Streamline ‘JAGS’ Analyses. R package version 1.4.2. https://CRAN.R-project.org/package=jagsUI (2016).
75. 75.
R Development Core Team R: a language and environment for statistical computing. R Foundation for Statistical Computing. ISBN 3-900051-07-0. http://www.R-project.org (Vienna, 2016).
76. 76.
Gelman, A. & Hill, J. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press, (New York, 2006).
77. 77.
Nagy, K. A., Siegfried., W. R. & Wilson, R. P. Energy utilization by free-ranging Jackass penguins Spheniscus demersus. Ecology 65, 1648–1655 (1984).
78. 78.
Ellis, H. I., Gabrielsen, G. W. Energetics of free-ranging seabirds. In Biology of marine birds (eds: Schreiber EA, Burger J). CRC Press, 359–407 (London, 2002).
79. 79.
Le Roux, J. The swift tern Sterna bergii in southern Africa: growth and movement. MSc dissertation, University of Cape Town, (Cape Town, 2006).
## Acknowledgements
Our research was supported by a Department of Science and Technology-National Research Foundation Centre of Excellence grant to the FitzPatrick Institute of African Ornithology, the Leiden Conservation Foundation (RBS) and our institutes. Robben Island Museum provided logistical support and access to the tern colonies. We thank Selena Flores, Billi Krochuk and Maël Leroux for their help in the field.
## Author information
All authors conceived and designed the study. D.G. Performed the fieldwork and wrote the original manuscript draft. D.G., R.B.S. and T.R.C. analysed the data. R.B.S. prepared the figures. All authors revised the manuscript for significant intellectual content and approved the final version.
Correspondence to Davide Gaglio.
## Ethics declarations
### Competing Interests
The authors declare no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
• ### Food habits of an endangered seabird indicate recent poor forage fish availability off western South Africa
• Robert J M Crawford
• , William J Sydeman
• , Sarah Ann Thompson
• , Richard B Sherley | |
Ask your homework questions to teachers and professors, meet other students, and be entered to win $600 or an Xbox Series X 🎉Join our Discord! Numerade Educator ### Problem 82 Hard Difficulty # Find the mean, the median, and the mode of the collection of numbers. $$9,6,10,14,10,3$$ ### Answer ## Mean$=8.67$Median$=9.5$Mode$=10\$
#### Topics
No Related Subtopics
### Discussion
You must be signed in to discuss.
### Video Transcript
in this problem, we have to find me media and word off the given collection off numbers to find the mean we ed the given six numbers and then divide by six. So I mean, is it will do nine. Last six last day Bless 40. Blessed Day Last three divided by six. So this is a point to 50 door divided by six. We just a quad toe 96 divided by three and which gave him Dennis the mixed fraction it to buy three So mean is equal to it. To buy three. Now, to find the media, we like think even numbers in order and find the middle number the numbers can didn't. Then in order. S 36 nine Then then fucking one as these Air six numbers that this even number of numbers we even get to middle numbers. The media numbers are nine and did so to find the media. Bba led nine. Uh huh. And then divide by two. So media is it Will do nine. Last day Divide by two there divided by two. So this is it Will do 90 divided by Do are we Can I take this nine one by do so. Maybe it is equal to 91.2 now to find them. Or we use this order list and find the number that occurs most frequently The number that August most frequently that this two times is there So more is it will do then that this we have found the values off me media on more.
Other Schools
#### Topics
No Related Subtopics | |
# Why are the inputs of an ideal op-amp “inverting input” and “non-inverting input”?
Here is the first ideal op-amp circuit, called an "Inverting Amplifier", that many students will encounter:
The gain here is $$\G=-\frac{R_F}{R_{IN} }\$$. Thus, with a negative gain, $$\V_{OUT}\$$ is inverted with respect to $$\V_{IN}\$$. Also, since $$\V_{IN}\$$ goes into the inverting input, this all makes sense.
Now, if we flip this all around like this:
For this circuit, $$\V_{IN}\$$ goes into the non-inverting input. However, the gain still has a negative sign: $$\G=-\frac{R_F}{R_{IN} }\$$, and $$\V_{OUT}\$$ is still inverted.
So why is this called the inverting input?
Solving the lower circuit, incorrectly:
$$I_{IN} = I_F$$ $$\frac{V_{IN}-V_+}{R_{IN}} = \frac{V_{+}-V_{OUT}}{R_{F}}$$
$$\textrm{If } V_{+} = V_{-} \textrm{ , as is true by definition for an ideal op-amp, and } V_{-} = 0, \textrm{ then } V_{+} = 0 \textrm{ thus }$$
$$\frac{V_{IN}}{R_{IN}} = \frac{-V_{OUT}}{R_{F}}$$
$$\frac{V_{OUT}}{V_{IN}} = -\frac{R_{F}}{R_{IN}}$$
What's wrong with this circuit analysis?
• Possible duplicate of Are op-amp inputs interchangeable? – user103380 Apr 17 '19 at 2:13
• Your 2nd circuit has positive feedback. Your attempt at a gain formula for that circuit is completely incorrect, since the circuit won't behave as an amplifier. – brhans Apr 17 '19 at 2:14 | |
# How do you solve the system of equations algebraically 1/3x-3/2y=-4, 5x-4y=14?
Jan 19, 2017
$y = 4 , x = 6$
#### Explanation:
$\frac{1}{3} x - \frac{3}{2} y = - 4$$\textcolor{w h i t e}{a a}$$- - - - -$(1)
$5 x - 4 y = 14$$\textcolor{w h i t e}{a a a a a}$$- - - - -$(2)
$2 x - 9 y = - 24 \textcolor{w h i t e}{a a a}$$- - - - - \left(1\right) \times 6$--(3)
$10 x - 8 y = 28 \textcolor{w h i t e}{a a a}$$- - - - - \left(2\right) \times 2$--(4)
$10 x - 45 y = - 120 \textcolor{w h i t e}{a a a}$$- - - - - \left(3\right) \times 5$--(5)
$37 y = 148 \textcolor{w h i t e}{a a a}$$- - - - - \left(4\right) - \left(5\right)$
$y = \frac{148}{37}$
$y = 4$
substitute y=4 in (2)
$5 x - 4 \left(4\right) = 14$
$5 x - 16 = 14$
$5 x = 30$
$x = 6$
substitute y=4 and x=6 in (2)
$5 \left(6\right) - 4 \left(4\right) = 14$
$30 - 16 = 14$
$14 = 14$ | |
# How far can we reach for the sum of roots in closed form for a polynomial of even degree?
As everybody knows, our reach for the roots themselves of a polynomial of any degree ends at degree 4, except in special cases. However, since the formula for the sum of the roots of a quadratic is considerably simpler than the formula for the roots themselves, this prompts the hope that one can obtained the sum of the roots for polynomials of even degree for degree 6, and somewhat beyond – perhaps all the way to infinity?
-
Actually, it is possible to find closed form (but generally ungainly) expressions for polynomials of degree greater than four (the Abel-Ruffini caveat here is that you cannot do it in terms of the four basic operations and the taking of radicals alone). The general quintic has been solved since 1858 or so, and if you look at my question here: math.stackexchange.com/questions/32616/… you can see pointers to resources about the solution of the quintic. – deoxygerbe May 7 '11 at 22:56
The formula for the sum of the roots of any polynomial $a_nx^n+a_{n-1}x^{n-1}+\cdots+a_0$ is $$-\frac{a_{n-1}}{a_n}$$ See here.
-
Correct, but why stop there? We can write the polynomial as $a_n(x-r_0)(x-r_1)\cdots(x-r_{n-1})$, where the $r_i$s are the roots (including repetition). Expanding this product and then matching coefficient of each $x^i$ with $a_i$, we find that $a_{n-1} = -a_n(\text{sum of roots, } r_i)$, and $a_{n-2} = a_n(\text{sum of pairwise products of roots, } r_i r_j)$, and $a_{n-3} = -a_n(\text{sum of triple-products of roots, } r_i r_j r_k)$, ..., and $a_0 = a_{n-n} = (-1)^n a_n(\text{product of all } n \text{ roots, } r_0 r_1 r_2 \cdots r_{n-1})$. – Blue May 7 '11 at 21:54
Wow. This is why MSE is so wonderful. Thanks! – Mike Jones May 7 '11 at 21:57
So, while we can easily express the coefficients of a polynomial in terms of its roots (and its leading coefficient), what "everybody knows" is that we can't go the other way when $n>4$ (without using tools more sophisticated than extracting roots). – Blue May 7 '11 at 21:59
In general, if $\alpha_1,\alpha_2,\ldots, \alpha_n$ are the $n$ roots (possibly repeated roots) of a $n^{th}$ degree polynomial, then we have
$$\sum_{k=1}^{\binom{n}{r}} p_k = (-1)^r \frac{a_{n-r}}{a_n}$$ where each $p_k$ denotes a product of a unique subset of $r$ roots of the polynomial i.e.
$p_k = \alpha_1^{t_{1k}} \alpha_2^{t_{2k}} \cdots \alpha_n^{t_{nk}}$ where $t_{jk} \in \{0,1 \}$ and $\displaystyle \sum_{j=1}^{n} {t_{jk}} = r, \forall k \in \{1,2,\ldots,\binom{n}{r} \}$
-
I appreciate the generality of your answer, and so I up-voted it, but I'm accepting the answer of Zev Chonoles as THE answer to my question as being spot-on, and with a supporting link. – Mike Jones May 7 '11 at 21:55
HINT $\rm\ \ \ (x^k-r\ x^{k-1}+\cdots\:)\ (x^n - s\ x^{n-1}+\cdots\:)\ =\:\ x^{k\:+\:n} - (r+s)\ x^{k\:+\:n-1} + \cdots$
- | |
## On derivations of prime near-rings.(English)Zbl 1280.16047
Summary: We investigate derivations satisfying certain differential identities on 3-prime near-rings, and we provide examples to show that the assumed restrictions cannot be relaxed.
### MSC:
16Y30 Near-rings 16W25 Derivations, actions of Lie algebras 16N60 Prime and semiprime associative rings 16R50 Other kinds of identities (generalized polynomial, rational, involution) 16U70 Center, normalizer (invariant elements) (associative rings and algebras) 16U80 Generalizations of commutativity (associative rings and algebras)
Full Text:
### References:
[1] M. Ashraf and N. Rehman, On commutativity of rings with derivations. Result. Math. 12 (2002), 3-8. · Zbl 1038.16021 [2] M. Ashraf and A. Shakir, On $$(\sigma, \tau)$$-derivations of prime near-rings. Arch. Math. (Brno) 40 (2004), no. 3, 281-286. · Zbl 1114.16040 [3] M. Ashraf and A. Shakir, On $$(\sigma, \tau)$$-derivations of prime near-rings-II. Sarajevo J. Math. 4 (2008), no. 16, 23-30. · Zbl 1169.16028 [4] K. I. Beidar, Y. Fong and X. K. Wang, Posner and Herstein theorems for derivations of 3-prime near-rings. Comm. Algebra 24 (1996), no. 5, 1581-1589. · Zbl 0849.16039 [5] H. E. Bell, Certain near-rings are rings. J. London Math. Soc. 4 (1971),264-270. · Zbl 0223.16020 [6] H. E. Bell, On derivations in near-rings II. Kluwer Academic Publishers Netherlands (1997), 191-197. · Zbl 0911.16026 [7] H. E. Bell and M. N. Daif, Commutativity and strong commutativity preserving maps. Canad. Math. Bull. 37 (1994), 443-447. · Zbl 0820.16031 [8] H. E. Bell and G. Mason, On derivations in near-rings. North-Holland Mathematics Studies 137 (1987), 31-35. · Zbl 0619.16024 [9] H. E. Bell and G. Mason, On derivations in near-rings and rings. Math. J. Okayama Univ. 34 (1992), 135-144. · Zbl 0810.16042 [10] A. Boua and L. Oukhtite, Derivations on prime near-rings. Int. J. Open Probl. Comput. Sci. Math. 4 (2011), no. 2, 162-167. [11] M. N. Daif and H. E. Bell, Remarks on derivations on semiprime rings. Int. J. Math. & Math. Sci. 15 (1992), 205-206. · Zbl 0746.16029 [12] A. A. Klein, A new proof of a result of Levitzki. Proc. Amer. Math. Soc. 81 (1981), no. 1, 8. · Zbl 0475.16005 [13] X. K. Wang, Derivations in prime near-rings. Proc. Amer. Math. Soc. 121 (1994), 361-366. · Zbl 0811.16040
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | |
Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# Relation between the nodal and antinodal gap and critical temperature in superconducting Bi2212
## Abstract
An energy gap is, in principle, a dominant parameter in superconductivity. However, this view has been challenged for the case of high-Tc cuprates, because anisotropic evolution of a d-wave-like superconducting gap with underdoping has been difficult to formulate along with a critical temperature Tc. Here we show that a nodal-gap energy 2ΔN closely follows 8.5 kBTc with underdoping and is also proportional to the product of an antinodal gap energy Δ* and a square-root superfluid density √Ps for Bi2Sr2CaCu2O8+δ, using low-energy synchrotron-radiation angle-resolved photoemission. The quantitative relations imply that the distinction between the nodal and antinodal gaps stems from the separation of the condensation and formation of electron pairs, and that the nodal-gap suppression represents the substantial phase incoherence inherent in a strong-coupling superconducting state. These simple gap-based formulae reasonably describe a crucial part of the unconventional mechanism governing Tc.
## Introduction
In conventional Bardeen–Cooper–Schrieffer (BCS) theory, a superconducting critical temperature Tc is proportional to an energy gap, which opens in the electronic excitation spectrum and stands for both the electron-pairing energy and the superconducting order parameter1. This proportionality has been considered to break down in high-Tc cuprates. As hole concentration p decreases from the optimum, Tc decreases, even though the amplitude of d-wave-like gap Δ* increases2,3. Instead, superfluid density ρs decreases along with such underdoping, and thus attracted interest as another key parameter for Tc4,5,6,7,8,9,10. Recently, a variety of experimental gap energies in the superconducting state have been classified into two groups: those that increase like Δ* and those that decrease like Tc with underdoping11,12,13. The former energy, Δ*, remains as a pseudogap above Tc for the underdoped cuprates, as typically seen in the electronic excitation spectra around an antinode14. In the close vicinity of a node, by contrast, the gap closes right above Tc, as highlighted by Lee et al.15 and Pushp et al.16 The further experimental evidences suggest that the low-energy near-nodal excitations are more relevant to Tc than the antinodal ones16,17. Nevertheless, the gap energies defined under a dichotomy between near-nodal and antinodal regions have been difficult to formulate along with the doping dependences of Tc and ρs17,18. The unformulated behaviours of the nodal and antinodal gaps posed severe challenges not only to their mutual relationship but also to their standard role in electron pairing. The identities of these gaps have thus been at the center of controversy over a unified picture of the unconventional pairing state in the high-Tc cuprates, and intensively examined for clues to the principles underlying the peculiar behaviour of Tc.
Here we report our finding of simple formulae,
based on new experimental data of the nodal gap, ΔN, and the antinodal gap, Δ*, for Bi2Sr2CaCu2O8+δ (Bi2212). Clear-cut gap images over the entire Fermi surface have been obtained from low-energy synchrotron-radiation angle-resolved photoemission (ARPES)19,20, and allowed us to extract the nodal limit of the gap slope beyond the momentum-space dichotomy. As a result, the nodal-gap energy ΔN is reconciled with Tc, indicating its relevance to the condensation of the electron pairs. A high plateau value of ΔN/Tc ratio exhibits sufficiently strong electron coupling for the underdoped Bi2212. Furthermore, the nodal and antinodal gaps are quantitatively related by a factor of the square-root superfluid density, , in a wide hole-concentration range. This reduction factor is identical to the theoretical prediction for the effect of incoherent pair excitations21,22, implying that the antinodal gap energy is relevant to the formation of the electron pairs. Thus, we argue that the strong coupling makes a critical difference to the pairing state, and that a substantial number of electrons remain paired, despite dropping out of the coherent superfluid in the underdoped superconducting cuprates. The gap-based formulae shed light on a crucial part of the mechanism governing Tc through parameterization with the pairing energy Δ* and the surviving superfluid density ρs4,5.
## Results
### ARPES data
Figure 1 demonstrates that well-defined electronic dispersions were observed throughout the Brillouin zone, and exemplifies extraction of the gap images from a seamless body of ARPES data. The Fermi surfaces were rigorously determined from the minimum-gap loci as marked in Fig. 1b, because both inaccuracy and broadening of the cutting path result in an overestimate of the gap energy. Figure 1c shows that the Fermi surfaces (blue curves) of the bonding band (BB) and antibonding band (AB) of the CuO2 bilayer are clearly resolved in the momentum space. The spectra were collected along the two Fermi surfaces from the node through to the antinode, and displayed in Fig. 1e as image plots of energy versus off-node angle θ (Fig. 1a). The distinct energy splitting seen in Fig. 1e indicates that the BB and AB gaps are resolved in our experiment.
The comprehensive high-resolution gap study has been supported by low-energy synchrotron radiation19,20. First, the use of low-energy photons in ARPES experiments facilitates improvement in energy and momentum resolution in compensation for the narrowing of the momentum field of view15,18,19,20,23. Second, the tunability of synchrotron radiation allows us to optimize the excitation-photon energy 19,20. We found that the observability in the off-nodal region is dramatically improved by slightly increasing from those used in laser-based ARPES studies, ≤7 eV15,18,23. As shown in Fig. 2, using =8.5 eV, the spectral intensities of the BB and AB are sufficient even far off the node, θ25°, whereas using =7.0 eV, the spectral intensity concentrates only around the node of the AB. Beside the observability, Fig. 2d shows that the spectral peaks for =8.5 and 7.0 eV are equally sharp, and that the difference between the BB and AB gaps is insignificant. In the present study, we performed the extensive doping-dependent gap measurements with both =8.5 and 7.0 eV, and ruled out unexpected effects of transition matrix elements.
### Anisotropic gap evolution
Deviation from the d-wave gap is directly imaged in the energy-versus-sin2θ plots of the ARPES spectra. Figure 3a–d reveals that, as hole concentration p decreases from the optimum, the linearity in sin 2θ becomes severely distorted, making clear a departure from the standard d-wave form, Δ(θ)sin2θ, for the underdoped superconducting Bi2212. For in-depth analysis, the gap energies Δ(θ) were determined by fitting energy-distribution curves (EDCs), and overlaid as small circles in Fig. 3a–d (see Methods). The high fitting quality is presented in Fig. 3e–h, along with typical EDCs (also see Supplementary Fig. S1). Here the artifact of the antinodal peak smearing is ruled out of the causes for this deviation24, because the curvature in sin 2θ is established within the region, |sin 2θ|0.75, where the well-defined sharp and intense peak is observed for an underdoped sample with Tc=66 K (UD66), as shown in Fig. 3b.
A key finding from the raw spectral images in Fig. 3a is that the gap deviation penetrates to the close vicinity of the node. The appreciable curvature provides evidence that the gap slope versus sin2θ smoothly and asymptotically decreases on approaching the nodal limit, θ→0. So far, it has often been assumed that there is some discontinuity between the nodal and antinodal spectra or a certain region with pure d-wave gap15,17,18,23,25. However, it is difficult to define them with the all-round seamless images in Fig. 3a–d, because the spectral features, such as peak energy, peak width and their derivatives along Fermi surface, gradually change from the antinode to the node26. Probably, the nature beyond the simple dichotomy between the near-nodal and antinodal regions comes to light in Bi2212, whose superconducting gap is much larger than that of La2−xSrxCuO425. The deviation from standard gap behaviour is also deduced from the temperature below which the gap persists, despite the loss of superconductivity. The gap-closing temperature gradually decreases to Tc on approaching the nodal limit, θ→0, in a way similar to the deviation from the d-wave gap15,27.
For parameterization of the anisotropic superconducting gap, two methodologies are recognized. One is based on the presence of the pure d-wave region, and to deduce the near-nodal-gap energy from the d-wave fit within such a bounded region15,17,18,23,25. However, the near-nodal curvatures seen in Fig. 3a hindered us from applying this method to our data, because it would introduce an averaging effect and an extra uncertainty arising from the ill-defined region boundary for the case of the underdoped Bi2212. On this basis, we aimed at the nodal limit of the gap slope in quest of an intrinsic parameter, and adopted the next-higher-harmonic fit over the unbounded region after Mesot et al.28 and Kohsaka et al.29 We defined the nodal and antinodal gap energies as and Δ*=Δ(θ)|θ=45°, respectively, so that ΔN*=1 is satisfied for the ideal d-wave gap as depicted in Fig. 3j. Hence, the fitting function is expressed as
where the first term is solely responsible for the nodal-gap slope, the second term models the gap deviation without adjustable angle parameter and its asymptotic behaviour is consistent with the empirical indistinctness of the pure d-wave region. Consequently, as overlaid in Fig. 3a–d, this function well captures the curved gap profiles, Δ(θ) (solid curves), and their nodal tangents, ΔN sin2θ (blue lines), for all the doping levels of Bi2212.
The shift in focus provides a new perspective on the nodal gap. Dividing the momentum space largely into two half regions, the gap extending over the near-nodal half is less sensitive to underdoping than that in the antinodal half, as seen from Fig. 3a–d. This general trend is consistent with previous reports16,17,18. Combining this trend with the appreciable curvatures observed in Fig. 3a, it follows that the nodal limit of the gap slope, ΔN, decreases with underdoping from OP91 to UD66, and then to UD42, as indicated by blue tangent lines in Fig. 3a–c. The asymptotic behaviour of the gap deviation suggests that the parameter would be purified by taking the nodal limit θ→0.
### Relation to critical temperature
The superconducting-gap energies at various dopings are scaled by Tc, and put together in Fig. 4a, which reveals that the nodal limit of Δ(θ)/Tc remains unchanged with underdoping in contrast to the antinodal limit. Figure 4a also shows that the energies of the BB and AB gaps determined with =8.5 and 7.0 eV are all consistent with the single fitted-curve for each doping level. The two gap parameters, ΔN and Δ*, determined from the next-higher-harmonic fit are plotted as a function of hole concentration in Fig. 4b. This shows that the nodal gap ΔN closely follows the decrease in Tc with underdoping, departing from the antinodal gap Δ*. The gap-to-Tc ratios plotted in Fig. 4c are worth noting. As hole concentration p decreases from the overdoped limit, both the nodal and antinodal ones increase with keeping a constant proportion, ΔN=0.87Δ*. This is the canonical behaviour expected when the coupling is getting stronger. A further decrease in p leads to a plateau of 2ΔN/kBTc=8.5, which is about twice the mean-field prediction 4.3 for d-wave weak-coupling superconductors, and meanwhile to a continuing increase in 2Δ*/kBTc2,3,13. These features are beyond the scenario of the standard weak-coupling theory. Figure 4d shows that our data for Bi2212 finely converge on the line of 2ΔN/kBTc=8.5 in particular from the optimum to a heavily underdoped level, and that similar values of 2ΔN/kBTc have been reported for optimally doped single-layer cuprates23,25.
The proportional relation, 2ΔN=8.5 kBTc, reconciles the critical temperature with the nodal slope of the distorted d-wave superconducting gap. It seems reasonable that Tc, in effect, depends on ΔN rather than Δ*, because thermal quasiparticle excitations concentrate in the vicinity of the node and hardly occur around the antinode in particular for the strong-coupling case, 2Δ*/kBTc4.3. The association between the nodal excitations and Tc has been proposed in various ways12,13,15,16,17. In particular, the decrease in ΔN with underdoping has been deduced from the low-energy slope of B2g-Raman spectra12 and the quasiparticle interference in scanning-tunnelling images29. Besides, in possible relation to ΔN, the characteristic energies having a p dependence similar to Tc have been detected by Andreev-reflection, B2g-Raman, break-junction-tunnelling and inelastic-neutron-scattering experiments in the superconducting state11.
### Relation to superfluid density
An insight comes from the analogy between the doping p and temperature T dependences. The p-dependent distortion of the superconducting gap is presented with normalization to Δ* in Fig. 4e, whereas the T dependence has been reported by Lee et al.15 With both increasing p and decreasing T, the superconducting gap approaches the ideal d-wave form. As p decreases conversely, ΔN is suppressed relative to Δ*, and decreases towards zero on approaching the disappearance of the superconductivity (Fig. 4e). This is analogous to what is observed with increasing T (ref. 15). Noting that the superfluid density ρs decreases towards zero on approaching the critical temperature, T=Tc, irrespective of the pseudogap temperature4,5,7,10, one may correlate the nodal-gap suppression, ΔN*, with the decrease in ρs.
In this way, we found another simple relation, . Figure 4f–h compare the square of nodal-to-antinodal gap ratio, (ΔN*)2, with the superfluid density, ρs, deduced from the superconducting-peak ratio in antinodal ARPES spectra6, and from alternating-current-susceptibility and heat-capacity data7,10. All of them increase linearly in p with an onset at p~0.07, showing a saturation point at p~0.19 (refs 6, 7, 10). Such a doping dependence of ρs is known to be universal for the cuprates, and the critical doping of p~0.19 is not only evident from the superfluid data but has also been identified in the electrical-resistivity and tunnelling data7,13,30. The intimate relationship between the nodal and antinodal gaps is in accord with the continuity between the nodal and antinodal spectra (Fig. 3a–d).
## Discussion
The square-root dependence on ρs is usually characteristic of the order parameter, as expected from Ginzburg–Landau theory1. More specifically, a general form of has been theoretically predicted for the superfluid reduction due to the incoherent pair excitations inherent in the strong-coupling superconductors, where the degeneracy between the order parameter Δsc and the pair-formation energy Δ is split, and the former is manifested in the energy of the near-nodal spectral peak as ΔN, whereas the latter dominates the antinodal peak energy Δ*, as shown by taking into account the lifetime effect21,22. Here the superfluid density in the weak-coupling Bardeen–Cooper–Schrieffer model, , can be regarded as an approximate constant, because the p-dependences of the Fermi-surface perimeter and the normal-state Fermi velocity are relatively small31. Therefore, the strong-coupling scenario well explains our empirical relation, , with μm−2, and is also consistent with the observations of the high gap-to-Tc ratios (Fig. 4c) and the strong renormalization of dispersion upon the superconducting transition31. Under this scenario for the nodal-gap suppression, the phase fluctuation persisting down to a temperature below Tc is responsible for the decrease in superfluid density. However, the intrinsic phase fluctuation arising solely from the strong coupling diminishes at temperatures much lower than Tc21,22. Hence, in conjunction with the strong coupling, some extra source generating the incoherent pair excitations at low temperatures is assumed to be present, on the basis of the phenomenology among ΔN, Δ* and ρs. The possible candidates for this scattering source are the orders competing with the superconductivity22. In particular, it has indeed been observed by scanning-tunnelling experiments that the nanoscale spatial domains of density-wave-like modulation spread over the underdoped Bi2212 (ref. 32). Perhaps, the limit of 2ΔN/kBTc≤8.5 may imply that such competing orders are practically inevitable for the strong-coupling superconductivity.
Within the weak-coupling scenario, by contrast, there seem no schemes to relate the nodal and antinodal gap energies, to our knowledge. To reconcile 2ΔN/kBTc=8.5 with the weak-coupling constant 4.3, one needs a p-independent reduction factor of ~0.5. The length of ‘Fermi arc’ in the normal state (Fig. 3i) seems irrelevant, because of its approximately linear p dependence, θarcp (see Supplementary Note 1 and Supplementary Fig. S2)15,33, although its experimental uncertainty obscures the behaviour of arc-endpoint gap, Δarc≡ΔN sinθarc (Fig. 4c). Furthermore, once in the superconducting state, one cannot distinguish between the spectra at angles θ inside (bold labels) and outside (italic labels) the Fermi arc, as shown in Fig. 3e–h, nor define the boundary of the momentum region with a coherent peak for Bi2212, as pointed out by Vishik et al.26
Combining the two relations, we obtain . This can be a gap-based formulation of the long-standing phase-fluctuation paradigm that associate Tc with both ρs and the pair-formation energy, (refs 4, 5). Notably, recent investigations have revealed that the rising exponent of Tc as a power of ρs is generally less than 1, and that specifically is satisfied near ρs=0 (refs 8, 9). This behaviour is compatible with , as far as the variation of Δ* is negligible. Regardless of interpretation, the present phenomenological formulae put strong constraints on existing theories, and provide simple bases for future approaches to the high-Tc superconductivity.
## Methods
### Samples
High-quality single crystals of Bi2212 were grown by traveling-solvent floating-zone method and subjected to a post-annealing procedure for regulation of doping level. An overdoped sample of Tc=80 K (OD80), an optimally doped sample of Tc=91 K (OP91) and five underdoped samples of Tc=81, 77, 73, 66 and 62 K (UD81, UD77, UD73, UD66 and UD62, respectively) were prepared from the crystal whose nominal composition is Bi2.1Sr1.8CaCu2O8+δ. Two heavily underdoped samples of Tc=42 and 30 K (UD42 and UD30, respectively) were prepared from the Dy-substituted crystal of Bi2.2Sr1.8Ca0.8Dy0.2Cu2O8+δ. Two heavily overdoped samples of Tc=73 and 63 K (OD73 and OD63, respectively) were prepared from the Pb-substituted crystal of Bi1.54Pb0.6Sr1.88CaCu2O8+δ. Details of sample preparation are described elsewhere34. Hole concentrations p have been deduced from the samples’ Tc, using a phenomenological relation, , from Presland et al.35 with K.
### ARPES measurement
ARPES experiments were performed on a helical undulator beamline, BL-9A of the Hiroshima Synchrotron Radiation Center, using a SCIENTA R4000 analyzer. Total instrumental energy resolution was set at 5 meV. Clean surfaces were obtained by cleaving the samples in situ, and all the ARPES spectra were collected under ultrahigh vacuum better than 5 × 10−11 Torr. Energies were calibrated with reference to the intermittently monitored Fermi edge of polycrystalline gold.
### Fitting EDCs
The superconducting-gap energies have been determined by fitting EDCs. For the spectral function A(ω) at a minimum-gap locus, we adopted a widely used phenomenological form introduced by Norman et al.36,
where the peak position and width are given by the superconducting-gap energy Δ and the single-particle scattering rate Γ, respectively. In accordance with the experimental incoherent spectral weight increasing towards higher energies, we added a background linear in energy, a+, to A(ω), and then applied the multiplication by the Fermi–Dirac distribution function fT(ω) as
As the experimental antinodal peaks are more asymmetric than the above model function I(ω), we additionally used the integral-type background for practical evaluation of the peak position, and applied convolution with the Gaussian representing instrumental resolution. As a result, the fit over a wide energy range of ω≥−100 meV has been achieved all along the Fermi surface for all the doping levels. The EDCs and their fits are shown in Supplementary Fig. S1, and those after symmetrization are presented in Fig. 3e–h. As shown by the small circles in Fig. 3a–d, the result of this procedure consistently tracks the spectral peak at the superconducting-gap energy with precision.
How to cite this article: Anzai, H. et al. Relation between the nodal and antinodal gap and critical temperature in superconducting Bi2212. Nat. Commun. 4:1815 doi: 10.1038/ncomms2805 (2013).
## References
1. Schrieffer, J. R. Theory of Superconductivity Addison-Wesley: New York, (1964).
2. Miyakawa, N., Guptasarma, P., Zasadzinski, J. F., Hinks, D. G. & Gray, K. E. Strong dependence of the superconducting gap on oxygen doping from tunnelling measurements on Bi2Sr2CaCu2O8–δ . Phys. Rev. Lett. 80, 157–160 (1998).
3. Campuzano, J. C. et al. Electronic spectra and their relation to the (π,π) collective mode in high-Tc superconductors. Phys. Rev. Lett. 83, 3709–3712 (1999).
4. Uemura, Y. J. et al. Universal correlations between Tc and ns/m* (carrier density over effective mass) in high-Tc cuprate superconductors. Phys. Rev. Lett. 62, 2317–2320 (1989).
5. Emery, V. J. & Kivelson, S. A. Importance of phase fluctuations in superconductors with small superfluid density. Nature 374, 434–437 (1995).
6. Feng, D. L. et al. Signature of superfluid density in the single-particle excitation spectrum of Bi2Sr2CaCu2O8+δ . Science 289, 277–281 (2000).
7. Tallon, J. L., Loram, J. W., Cooper, J. R., Panagopoulos, C. & Bernhard, C. Superfluid density in cuprate high-Tc superconductors: a new paradigm. Phys. Rev. B 68, 180501(R): (2003).
8. Broun, D. M. et al. Superfluid density in highly underdoped YBa2Cu3O6+y superconductor. Phys. Rev. Lett. 99, 237003 (2007).
9. Kim, G. C., Cheon, M., Ahn, S. S., Jeong, J. H. & Kim, Y. C. Relationship between superfluid density at zero temperature and Tc of Bi2Sr2–xLaxCuO6+δ (0.4≤x≤0.76) and Bi2Sr1.6La0.4Cu1–yZnyO6+δ (0.0≤y≤0.015). Europhys. Lett. 82, 27005 (2008).
10. Anukool, W., Barakat, S., Panagopoulos, C. & Cooper, J. R. Effect of hole doping on the London penetration depth in Bi2.15Sr1.85CaCu2O8+δ and Bi2.1Sr1.9Ca0.85Y0.15Cu2O8+δ . Phys. Rev. B 80, 024516 (2009).
11. Hüfner, S., Hossain, M. A., Damascelli, A. & Sawatzky, G. A. Two gaps make a high-temperature superconductor? Rep. Prog. Phys. 71, 062501 (2008).
12. Le Tacon, M. et al. Two energy scales and two distinct quasiparticle dynamics in the superconducting state of underdoped cuprates. Nat. Phys. 2, 537–543 (2006).
13. Alldredge, J. W. et al. Evolution of the electronic excitation spectrum with strongly diminishing hole density in superconducting Bi2Sr2CaCu2O8+δ . Nat. Phys. 4, 319–326 (2008).
14. Loeser, A. G. et al. Excitation gap in the normal state of underdoped Bi2Sr2CaCu2O8+δ . Science 273, 325–329 (1996).
15. Lee, W. S. et al. Abrupt onset of a second energy gap at the superconducting transition of underdoped Bi2212. Nature 450, 81–84 (2007).
16. Pushp, A. et al. Extending universal nodal excitations optimizes superconductivity in Bi2Sr2CaCu2O8+δ . Science 324, 1689–1693 (2009).
17. Tanaka, K. et al. Distinct Fermi-momentum-dependent energy gaps in deeply underdoped Bi2212. Science 314, 1910–1913 (2006).
18. Vishik, I. M. et al. Phase competition in trisected superconducting dome. Proc. Natl Acad. Sci. USA 109, 18332–18337 (2012).
19. Yamasaki, T. et al. Unmasking the nodal quasiparticle dynamics in cuprate superconductors using low-energy photoemission. Phys. Rev. B 75, 140513 (2007).
20. Anzai, H. et al. Energy-dependent enhancement of the electron-coupling spectrum of the underdoped Bi2Sr2CaCu2O8+δ superconductor. Phys. Rev. Lett. 105, 227002 (2010).
21. Chen, Q., Kosztin, I., Boldizsár, J. & Levin, K. Pairing fluctuation theory of superconducting properties in underdoped to overdoped cuprates. Phys. Rev. Lett. 81, 4708–4711 (1998).
22. Chien, C.-C., He, Y., Chen, Q. & Levin, K. Two-energy-gap preformed-pair scenario for cuprate superconductors: implications for angle-resolved photoemission spectroscopy. Phys. Rev. B 79, 214527 (2009).
23. Okada, Y. et al. Three energy scales characterizing the competing pseudogap state, the incoherent, and the coherent superconducting state in high-Tc cuprates. Phys. Rev. B 83, 104502 (2011).
24. Chatterjee, U. et al. Observation of a d-wave nodal liquid in highly underdoped Bi2Sr2CaCu2O8+δ . Nat. Phys. 6, 99–103 (2010).
25. Yoshida, T. et al. Universal versus material-dependent two-gap behaviors of the high-Tc cuprate superconductors: angle-resolved photoemission study of La2–xSrxCuO4 . Phys. Rev. Lett. 103, 037004 (2009).
26. Vishik, I. M. et al. A momentum-dependent perspective on quasiparticle interference in Bi2Sr2CaCu2O8+δ . Nat. Phys. 5, 718–721 (2009).
27. Nakayama, K. et al. Evolution of a pairing-induced pseudogap from the superconducting gap of (Bi,Pb)2Sr2CuO6 . Phys. Rev. Lett. 102, 227006 (2009).
28. Mesot, J. et al. Superconducting gap anisotropy and quasiparticle interactions: a doping dependent photoemission study. Phys. Rev. Lett. 83, 840–843 (1999).
29. Kohsaka, Y. et al. How Cooper pairs vanish approaching the Mott insulator in Bi2Sr2CaCu2O8+δ . Nature 454, 1072–1078 (2008).
30. Cooper, R. A. et al. Anomalous criticality in the electrical resistivity of La2–xSrxCuO4 . Science 323, 603–607 (2009).
31. Kim, T. K. et al. Doping dependence of the mass enhancement in (Pb,Bi)2Sr2CaCu2O8 at the antinodal point in the superconducting and normal states. Phys. Rev. Lett. 91, 167002 (2003).
32. McElroy, K. et al. Coincidence of checkerboard change order and antinodal state decoherence in strongly underdoped superconducting Bi2Sr2CaCu2O8+δ . Phys. Rev. Lett. 94, 197005 (2005).
33. Yoshida, T. et al. Low-energy electronic structure of the high-Tc cuprates La2–xSrxCuO4 studied by angle-resolved photoemission spectroscopy. J. Phys. Condens. Matter. 19, 125209 (2007).
34. Hobou, H. et al. Enhancement of the superconducting critical temperature in Bi2Sr2CaCu2O8+δ by controlling disorder outside CuO2 planes. Phys. Rev. B 79, 064507 (2009).
35. Presland, M. R., Tallon, J. L., Buckley, R. G., Liu, R. S. & Flower, N. E. General trends in oxygen stoichiometry effects on Tc in Bi and Tl superconductors. Physica C 176, 95–105 (1991).
36. Norman, M. R., Randeria, M., Ding, H. & Campuzano, J. C. Phenomenology of the low-energy spectral function in high-Tc superconductors. Phys. Rev. B 57, 11093–11096 (1998).
## Acknowledgements
We thank Z.-X. Shen and A. Fujimori for their useful discussions, and T. Fujita and Y. Nakashima for their help with the experimental study. H.A. acknowledges financial support from JSPS as a research fellow. This work was supported by KAKENHI (20740199). The ARPES experiments were performed under the approval of HRSC (Proposal No. 09-A-11 and 10-A-24).
## Author information
Authors
### Contributions
H.A. and A.I. designed the experiment, analysed the data and wrote the manuscript with support from M.T. The ARPES data were acquired by H.A. with support from M.A. and H.N. The high-quality single crystals were grown by M.I., K.F., S.I. and S.U. All authors discussed the results and commented on the manuscript.
### Corresponding authors
Correspondence to H. Anzai or A. Ino.
## Ethics declarations
### Competing interests
The authors declare no competing financial interests.
## Supplementary information
### Supplementary Information
Supplementary Figures S1 and S2, Supplementary Note S1 and Supplementary References (PDF 493 kb)
## Rights and permissions
Reprints and Permissions
Anzai, H., Ino, A., Arita, M. et al. Relation between the nodal and antinodal gap and critical temperature in superconducting Bi2212. Nat Commun 4, 1815 (2013). https://doi.org/10.1038/ncomms2805
• Accepted:
• Published:
• DOI: https://doi.org/10.1038/ncomms2805
• ### Pairing Symmetry, Nodal and Antinodal Superconducting Gap in $$\text {La}_{2-x}\text {Sr}_x\text {CuO}_{4}$$: A Doping Scenario
• Sanjeev K. Verma
• Anushri Gupta
• B. D. Indu
Journal of Low Temperature Physics (2019)
• ### A New Landscape of Multiple Dispersion Kinks in a High-T c Cuprate Superconductor
• H. Anzai
• M. Arita
• A. Ino
Scientific Reports (2017)
• ### Overlapping Hot Spots and Charge Modulation in Cuprates
• Pavel A. Volkov
• Konstantin B. Efetov
Journal of Superconductivity and Novel Magnetism (2016)
• ### Temperature Dependence of the Magnetic Penetration Depth in the Case of the Coexistence of Charge Density Waves and Superconductivity
• M. V. Eremin
• D. A. Sunyaev
Journal of Low Temperature Physics (2016)
• ### Direct spectroscopic evidence for phase competition between the pseudogap and superconductivity in Bi2Sr2CaCu2O8+δ
• Makoto Hashimoto
• Zhi-Xun Shen
Nature Materials (2015) | |
# Thread: [SOLVED] Show the moment generating function
1. ## [SOLVED] Show the moment generating function
I cannot figure this out for the life of me.. I have to show that: If Y is a random variable with a moment generating function of m(t) and if W is given by W = aY + b, show that the moment generating function of W is e^tbm(at)
so far, with some help, I have defined it but not sure where to go from here.
M_W(t) = E(e^{(aY + b)t}
2. I have figured it out... no need to reply. Thanks! | |
## Abstract
A swarm robotic system is a system in which multiple robots cooperate to fulfill a macroscopic function. Many swarm robots have been developed for various purposes. This study aims to design swarm robots capable of executing spatially distributed tasks effectively, which can be potentially used for tasks such as search-and-rescue operation and gathering scattered garbage in rooms. We propose a simple decentralized control scheme for swarm robots by extending our previously proposed non-reciprocal-interaction-based model. Each robot has an internal state, called its workload. Each robot first moves randomly to find a task, and when it does, its workload increases, and then it attracts its neighboring robots to ask for their help. We demonstrate, via simulations, that the proposed control scheme enables the robots to effectively execute multiple tasks in parallel under various environments. Fault tolerance of the proposed system is also demonstrated.
## 1 Introduction
Collective behavior emerging from local interaction among individuals is widely found in natural and social systems such as flocking of birds [19, 31] and mammals [5], fish schools [39], bacterial communities [1], and social networks [15]. An interesting aspect of collective behavior is that nontrivial macroscopic functions such as adaptability, scalability, and fault tolerance emerge, although each individual has only trivial functions [29]. While collective behavior has attracted physicists and the mechanism of self-organization has been explored [8, 9, 28], it is also studied from an engineering perspective and many swarm robotic systems have been developed [17, 36].
While previous swarm robots have been developed for many purposes such as aggregation [3, 4, 7, 13, 25, 33, 41], self-assembly [10, 14, 16, 32], collective transport [26], and foraging through task allocation [21, 35, 38, 40], in this study we design a swarm robotic system that can effectively perform spatially distributed tasks. Designing such systems is important because they can potentially be applied to various situations such as search-and-rescue operation, gathering scattered garbage in rooms, planetary surveying, mining, and killing tumor cells in animal bodies [18].
Several previous studies have addressed this issue, some of which assumed that robots can know the locations of tasks before they execute them and scheduled the behaviors of the robots online [2, 6, 11]. However, these studies cannot be applied to cases where robots cannot recognize tasks until they encounter them (for example, in a task of collecting garbage scattered in a room, it is difficult for robots to recognize small garbage before picking it up). Meanwhile, several studies have proposed control schemes that do not require prior information about task locations [3, 12, 20, 30, 33, 34, 41]. However, it is still a challenge to make multiple robots perform tasks fast while minimizing their travel distances under various situations, with reduced computational costs.
Recently, we proposed a non-reciprocal-interaction-based (NRIB) model, and demonstrated, via a simulation, that various patterns emerge through changing parameters [22, 23]. Because of its simplicity, this model has many possible applications in science and engineering, such as understanding the core principle of self-organization [22], elucidating the essential mechanism of the behavior of active matter [37], and designing swarm robotic systems. Thus, the NRIB model is a suitable platform for addressing the above-mentioned issue as well.
This article proposes a decentralized control scheme for an effective execution of spatially distributed tasks by extending the NRIB model. We show, via simulations, that the robots driven by the proposed control scheme effectively execute multiple tasks in parallel in various environments. Moreover, we demonstrate that the proposed control scheme ensures fault tolerance.
The remainder of this article is structured as follows. In Section 2, we briefly review the NRIB model. In Section 3, the problem to be solved in this study is defined. In Section 4, we propose a mathematical model extended from the NRIB model, wherein a decentralized control scheme for the execution of spatially distributed tasks is included. We demonstrate via simulations that the proposed control scheme works well, yet still has a drawback. In Section 5, the control scheme presented in Section 4 is improved to overcome the drawback. The simulation results show that the improved control scheme works well under various conditions. Finally, in Section 6, discussions and conclusions are presented.
## 2 NRIB Model
Let us briefly summarize the NRIB model [22, 23]. Particles, each of which represents a person in a community, exist on a two-dimensional plane, and the position of the ith particle (i = 1, 2, …, N) is denoted by ri. The time evolution of ri is given by
$r˙i=∑j≠ikijRij−1−Rij−2Rˆij,$
(1)
where Rij = rjri, $Rˆ$ij = Rij/|Rij|, and kij denotes a constant that represents to what extent person i prefers person j. The term kij|Rij|−1 in Equation 1 indicates that particle i approaches and repels particle j when kij is positive and negative, respectively. The term −|Rij|−2 in Equation 1 represents the repulsive effect between particles i and j when they are close to each other. When kij = kji, the interaction between the ith and jth particles is described by a potential, and the distance between them tends to converge to kij−1 (if kij > 0). However, because kij is not necessarily equal to kji, that is, the interaction can be non-reciprocal, Equaton 1 is generally a nonequilibrium open system in which both energy and momentum are non-conservative.
The simulation results for several parameter sets of kij can be downloaded from the website link in [22] (http://www.riec.tohoku.ac.jp/%7Etkano/ECAL_Movie1.mp4). It is found that various nontrivial patterns emerge.
## 3 Problem Definition
The problem to be solved in this study is defined in this section. Suppose that N robots are located on a square field of side l. Each robot has an internal state called its workload, and it can move omnidirectionally and detect the relative position and workload of other robots within distance di from itself by using sensors. For simplicity, it is assumed that each robot receives a viscous friction force from the ground. The viscous friction coefficient η is spatially uniform in the field, and is large enough so that the inertia of the robot can be neglected. When two robots collide, they repel each other owing to the exclusive volume effect. The lateral surface of the robot is assumed to be smooth; thus, the friction forces between the robots are neglected. Robots cannot move faster than the maximum velocity vmax. Because this study focuses on extracting the essence of swarm robot control, further details regarding real-world applications (e.g., detailed motor and sensor properties) are not considered.
The tasks to be executed are spatially distributed in the field, and the robots cannot have prior information about their location. When a robot lies in the area where tasks exist, the tasks in the area decrease at a constant rate. At the same time, the workload of the robot increases. All robots have identical properties, and are controlled in a decentralized manner. We will explore a decentralized control scheme for the robots that can execute spatially distributed tasks rapidly with a short travel distance.
## 4 NRIB Swarm Algorithm
Definitions of parameters appearing hereafter are summarized in Tables 1 and 2. The position and workload of robot i (i = 1, 2, …, N) are denoted by ri and wi, respectively. While the space is continuous regarding the robot movement, the amount of tasks in the field is described in a discrete manner. Specifically, the entire field is divided into q × q units, in each of which the amount of tasks z(r) is assumed to be uniform. Each robot can detect the relative positions and workloads of robots that exist in the area within a distance di from itself, denoted by Si. In the present model, di is set to be a constant d0.
Table 1.
Parameter values for conditions 1–3 in the improved NRIB swarm algorithm.
ParameterCondition 1 (nondimensional)Condition 1 (dimensional)
τd 970 970 s
dmin 10.00 10 m
dmax 60 60 m
γ 0.9 0.9
$α˜$′ 0.2 0.2 m · s−1
ParameterCondition 2 (nondimensional)Condition 2 (dimensional)
τd 485 970 s
dmin 5.00 10 m
dmax 30 60 m
γ 0.9 0.9
$α˜$′ 0.2 0.2 m · s−1
ParameterCondition 3 (nondimensional)Condition 3 (dimensional)
τd 323 970 s
dmin 3.33 10 m
dmax 20 60 m
γ 0.9 0.9
$α˜$′ 0.2 0.2 m · s−1
ParameterCondition 1 (nondimensional)Condition 1 (dimensional)
τd 970 970 s
dmin 10.00 10 m
dmax 60 60 m
γ 0.9 0.9
$α˜$′ 0.2 0.2 m · s−1
ParameterCondition 2 (nondimensional)Condition 2 (dimensional)
τd 485 970 s
dmin 5.00 10 m
dmax 30 60 m
γ 0.9 0.9
$α˜$′ 0.2 0.2 m · s−1
ParameterCondition 3 (nondimensional)Condition 3 (dimensional)
τd 323 970 s
dmin 3.33 10 m
dmax 20 60 m
γ 0.9 0.9
$α˜$′ 0.2 0.2 m · s−1
Table 2.
Definition of parameters in the NRIB swarm algorithm and in the improved NRIB swarm algorithm.
VariableDimensionMeaning
N — Number of robots in the field
tmaxstep — Maximum time step
l Length of a side of the square field
q — Number of units for a side of the square field
ri Position of the ith robot
wi — Workload of the ith robot
z(r— Amount of tasks at r
Si — Communication range of the ith robot
di Radius of the communication range (= d0 in Section 4
λ — Coefficient for the magnitude of workload
ϵ s−1 Rate of task execution by a robot
nk — Number of robots that occupy unit k
α kg · m2 · s−2 Coefficient for attractive force
β kg · m3 · s−2 Coefficient for repulsive force
m kg Mass of a robot
η kg · s−1 Viscous friction coefficient between the robot and the ground
f0 kg · m · s−2 Magnitude of the self-driving force of the robot
$fijphys$ kg · m · s−2 Collision force between robots i and j
kphys (depends on μParameter characterizing the collision force
μ — Parameter characterizing the collision force
$α˜$ m2 s−1 = α/η
$β˜$ m3 s−1 = β/η
$f˜$0 m · s−1 = f0/η
$f˜ijphys$ m · s1 = η−1$fijphys$
vmax m · s−1 Maximum velocity of the robot
ni ≡ (cos θi, sin θi)T — Unit vector that points the direction of the self-driving force
td Duration for which the self-driving force is kept constant
tm Maximum value of td
Δt Time step
ET Time until 90% of the tasks are executed
ED Total distance traveled until 90% of the tasks are executed
W — Amount of remaining tasks
D Total distance traveled by all the robots
Parameters for the improved NRIB swarm algorithm
τd Time constant for the change in the communication range
dmin Minimum value of the radius of the communication range
dmax Maximum value of the radius of the communication range
γ — Sensitivity of the communication range to the workload
$α˜$′ m · s−1 Coefficient for attractive force
VariableDimensionMeaning
N — Number of robots in the field
tmaxstep — Maximum time step
l Length of a side of the square field
q — Number of units for a side of the square field
ri Position of the ith robot
wi — Workload of the ith robot
z(r— Amount of tasks at r
Si — Communication range of the ith robot
di Radius of the communication range (= d0 in Section 4
λ — Coefficient for the magnitude of workload
ϵ s−1 Rate of task execution by a robot
nk — Number of robots that occupy unit k
α kg · m2 · s−2 Coefficient for attractive force
β kg · m3 · s−2 Coefficient for repulsive force
m kg Mass of a robot
η kg · s−1 Viscous friction coefficient between the robot and the ground
f0 kg · m · s−2 Magnitude of the self-driving force of the robot
$fijphys$ kg · m · s−2 Collision force between robots i and j
kphys (depends on μParameter characterizing the collision force
μ — Parameter characterizing the collision force
$α˜$ m2 s−1 = α/η
$β˜$ m3 s−1 = β/η
$f˜$0 m · s−1 = f0/η
$f˜ijphys$ m · s1 = η−1$fijphys$
vmax m · s−1 Maximum velocity of the robot
ni ≡ (cos θi, sin θi)T — Unit vector that points the direction of the self-driving force
td Duration for which the self-driving force is kept constant
tm Maximum value of td
Δt Time step
ET Time until 90% of the tasks are executed
ED Total distance traveled until 90% of the tasks are executed
W — Amount of remaining tasks
D Total distance traveled by all the robots
Parameters for the improved NRIB swarm algorithm
τd Time constant for the change in the communication range
dmin Minimum value of the radius of the communication range
dmax Maximum value of the radius of the communication range
γ — Sensitivity of the communication range to the workload
$α˜$′ m · s−1 Coefficient for attractive force
The proposed control scheme is based on “exploration and exploitation” [27]. Specifically, the robots are controlled as follows. Each robot first moves randomly to search for tasks. Once it finds tasks, (i.e., it enters an area with high z(r)) its motion slows down to execute the tasks. Concurrently, its workload increases and its nearby robots detect the increase in the workload and approach it to execute the tasks cooperatively. After the tasks are executed, the workload of the attracted robots decreases. Then, they repel each other and search for other tasks.
Thus, the workload of robot i, wi, evolves according to the following equation:
$τww˙i=λzri−wi,$
(2)
where λ is a positive constant. Equation 2 indicates that wi is given by the first-order delay of λz(ri) with time constant τw. Thus, wi increases when robot i remains in the area with high z(r). Note that an upper limit of wi is introduced; specifically, wi is reset to 1 when it exceeds 1.
It is assumed in the model that the number of tasks decreases at a constant rate in the areas where robots exist. More specifically, z(r) decrements at the rate ϵnk in unit k, where ϵ is a positive constant and nk is the number of robots in unit k. Note that z(r) is reset to zero when it becomes negative. Hence, according to Equation 2, wi decreases as the tasks are executed, that is, z(ri) approaches zero.
The equation of motion for robot i is designed as follows:
$mr¨i+ηr˙i=1−wi∑jϵSigRijRˆij+f0ni+∑jfijphys,$
(3)
where Rij = rjri, $Rˆ$ij = Rij/|Rij|, f0 is a positive constant, and ni is a unit vector. The factor 1 − wi is the mobility of robot i. Specifically, robot i slows down as wi increases, and the driving force completely vanishes when wi = 1. Owing to this factor, robots can continue executing tasks until they are finished. The term f0ni is the self-driving force, which enables random walk of the robot. The unit vector ni is given by ni = (cos θi, sin θi)T. The deflection angle θi takes a uniformly distributed random value in the interval [0, 2π) and is updated after the duration td, where td is a uniformly distributed random number in the interval (0, tm] and is updated when θi is updated. Note that θi is also updated when robot i reaches the edge of the field. When the robot moves out of the field, θi is updated again so that it remains in the field. The force $fijphys$ is the collision force between robots i and j, given by
$fijphys=−kphysmax2r0−Rij0μRˆij,$
(4)
where kphys and μ are positive constants and r0 denotes the radius of the robot.
The function g(|Rij|) determines the interaction between robots. The term g(|Rij|)$Rˆ$ij in Equation 3 denotes attractive and repulsive force when g(|Rij|) is positive and negative, respectively. We designed this function by drawing inspiration from the NRIB model [22, 23]:
$gRij=αwjRij−1−βRij−2,$
(5)
where the first term on the right-hand side indicates that robot i approaches robot j, and its contribution is large when the workload of robot j, wj, is high. Thus, robots tend to aggregate in areas with high z(r). The second term on the right-hand side in Equation 5 means that robot i moves away from robot j when they are close. Note that we designed g(|Rij|) based on the NRIB model, which adopts a power and not an exponential law, because a long-range interaction often helps robots to approach other distant robots.
Equations 3 and 5 are simplified by neglecting the inertia term to become
$r˙i=1−wi∑jϵSiα˜wjRij−1−β˜Rij−2Rˆij+f˜0ni+∑jf˜ijphys,$
(6)
where $α˜$ = α/η, $β˜$ = β/η, $f˜$0 = f0/η, and $f˜ijphys$ = η−1$fijphys$. Note that the absolute value of $r˙$i is reset to vmax when it exceeds vmax.
### 4.1 Simulation Results
Simulations of the proposed model were performed. Areas with high z(r) were randomly distributed under the initial condition. The initial positions of the robots were set to be random. The initial value of the workload wi was set to be 0.2 for all robots.
The simulation program was written in the C++ language. To make the calculation fast without losing accuracy, the time evolution of ri (Equation 6), which often varied rapidly, was solved with the fourth-order Runge-Kutta method, while that of wi (Equation 2), which varied slowly, was solved with the Euler method. Nondimensional parameters were used in the simulation program. The parameters were dimensionalized by multiplying by scaling factors, and three types of scaling factors were examined (Table 3). The parameter values are listed in Table 4. The nondimensional length of a side of the square field was fixed at 40 in the simulation program. This corresponds to 40, 80, and 120 m when a nondimensional length of 1 is scaled to 1, 2, and 3 m, respectively. Meanwhile, the other parameters were determined so that the dimensionalized parameters do not depend on the scaling factors.
Table 3.
Definition of parameters in the BEECLUST algorithm.
VariableDimensionMeaning
p0 — Parameter related to the waiting time
c — Parameter related to the waiting time
ni — Number of robots within the communication range
τbee Duration for the measurement of tr,i, Imin,i, and Imax,i
ncol,i — Number of collisions during the duration τbee
tr,i Average time between subsequent collisions over the duration τbee
$t˜$r,i Average of tr,i over the robots within Si
Imin,i — Minimum value of z(ri) during the duration τbee
Imax,i — Maximum value of z(ri) during the duration τbee
$I˜$min,i — Minimum value of Imin,i for the robots within Si
$I˜$max,i — Maximum value of Imin,i for the robots within Si
Ī — Normalized amount of tasks
δ — Parameter introduced to avoid zero divide (Equation 10
VariableDimensionMeaning
p0 — Parameter related to the waiting time
c — Parameter related to the waiting time
ni — Number of robots within the communication range
τbee Duration for the measurement of tr,i, Imin,i, and Imax,i
ncol,i — Number of collisions during the duration τbee
tr,i Average time between subsequent collisions over the duration τbee
$t˜$r,i Average of tr,i over the robots within Si
Imin,i — Minimum value of z(ri) during the duration τbee
Imax,i — Maximum value of z(ri) during the duration τbee
$I˜$min,i — Minimum value of Imin,i for the robots within Si
$I˜$max,i — Maximum value of Imin,i for the robots within Si
Ī — Normalized amount of tasks
δ — Parameter introduced to avoid zero divide (Equation 10
Table 4.
Scaling factors under conditions 1–3.
ParameterValue
Condition 1Condition 2Condition 3
Nondimensional unit length 1 m 2 m 3 m
Nondimensional unit time 1 s 2 s 3 s
ParameterValue
Condition 1Condition 2Condition 3
Nondimensional unit length 1 m 2 m 3 m
Nondimensional unit time 1 s 2 s 3 s
The simulations were performed 100 times under the same parameters to obtain statistical data. Each trial finished when 90% of the tasks in the field were executed or when the time step reached tmaxstep.
First, simulations were performed with various d0 values under condition 1. Figure 1 shows the time evolution of the remaining tasks W and the total distance D that which all robots traveled, when d0 was 2, 14, and 58 m. The corresponding movies are shown in Movies 1–3 (see the online supplementary material for this article at https://www.mitpressjournals.org/doi/suppl/10.1162/artl_a_00317). When d0 = 2 m, the robots could not approach other robots having high workload, but tended to move independently because of the short communication range. Consequently, the robots had to move randomly for a long time to find tasks; thus, W decreased slowly (Figure 1(a)). When d0 = 58 m, the robots could communicate with all robots in the field. Hence, all robots tended to aggregate into one cluster, so that the tasks were executed cooperatively. However, the robots could not execute multiple tasks in parallel, and hence had to travel long distances; thus, D increased rapidly (Figure 1(c)). In contrast, when d0 = 14 m, the robots formed several clusters to execute multiple tasks in parallel. Once the tasks were executed, the robots were distributed to find other tasks, and formed clusters again when the tasks were found. Thus, W decreased rapidly and D increased slowly, which means that the tasks were executed fast with a short travel distance (Figure 1(b)).
Figure 1.
Time evolution of the remaining tasks W and the total distance for which all robots traveled D when (a) d0 = 2 m, (b) d0 = 14 m, and (c) d0 = 58 m in the NRIB swarm algorithm.
Figure 1.
Time evolution of the remaining tasks W and the total distance for which all robots traveled D when (a) d0 = 2 m, (b) d0 = 14 m, and (c) d0 = 58 m in the NRIB swarm algorithm.
The simulation results were evaluated by the time and the total distance the robots traveled until 90% of the tasks in the field were performed, denoted by ET and ED, respectively. Namely, ET and ED are defined as
$ET=TΔt,$
(7)
$ED=∑k=1T∑i=1NvikΔt,$
(8)
where vi(k) is the velocity vector of robot i at the kth time step and T is the time step when 90% of the tasks in the field are executed, that is, ∫Fz(r)dr decreased by 90% from the initial condition, with ∫F being the spatial integral over the field. Note that T is set to tmaxstep when 90% of the tasks in the field are not executed until the end of the trial.
The result is shown in Figure 2. The median values of ET and ED are the smallest when d0 is 14 and 6 m, respectively. Thus, the robots can execute tasks fast with a short travel distance when d0 is around 10 m.
Figure 2.
Boxplots of ET and ED under condition 1. The results when d0 is varied in the NRIB swarm algorithm and when the improved NRIB swarm algorithm is used are shown. One hundred trials are performed for each d0 value. The median and quartile are shown by boxes, while the maximum and minimum values are shown by bars. A statistical test was performed between the NRIB swarm algorithm with d0 =10 m and the improved NRIB swarm algorithm. An asterisk denotes statistically significant difference (p < 0.01). In the statistical analysis, a t-test was adopted when the data is homoscedastic; otherwise a Welch test was adopted.
Figure 2.
Boxplots of ET and ED under condition 1. The results when d0 is varied in the NRIB swarm algorithm and when the improved NRIB swarm algorithm is used are shown. One hundred trials are performed for each d0 value. The median and quartile are shown by boxes, while the maximum and minimum values are shown by bars. A statistical test was performed between the NRIB swarm algorithm with d0 =10 m and the improved NRIB swarm algorithm. An asterisk denotes statistically significant difference (p < 0.01). In the statistical analysis, a t-test was adopted when the data is homoscedastic; otherwise a Welch test was adopted.
The result was compared with that of another control scheme proposed previously. Specifically, we compared the result with that of the BEECLUST algorithm [3, 20, 25, 33, 41], a simple decentralized control scheme inspired by the swarming behavior of bees, and evaluated its performance.
The basic concept of the BEECLUST algorithm is as follows:
• 1.
Each robot moves at a constant speed until it collides with an object (another robot, a wall, or an obstacle).
• 2.
When the robot collides with another robot, it stops motion. After the waiting time tbee,i, which is set to become longer as the amount of tasks z(ri) becomes larger, is over, it moves again with a changed direction.
• 3.
When the robot collides with a wall or an obstacle, it immediately changes direction.
The key here is that the waiting time tbee,i is longer as the amount of tasks z(ri) when the collision occurs is larger. Thus, it is expected that the robots stay in the area where the amount of tasks is large, which enables executing tasks effectively.
In the original version of the BEECLUST algorithm [3, 20, 33], robot-to-robot communication is not assumed. That is, no robot knows any information about other robots (such as relative and absolute position). There is no internal memory in the robots. In the extended variant of the BEECLUST algorithm [41], the waiting time tbee,i is adjusted to adapt to the environment by sharing information with other robots. Specifically, information about the robot-to-robot encounter time interval and the maximum and minimum values of z(ri) detected in the recent past is shared with nearby robots. In this study, we performed simulations with the extended variant of the BEECLUST algorithm [41]: The waiting time tbee,i is designed as
$tbee,i=t˜r,ip0I¯i2I¯i2+c,$
(9)
where p0 and c are positive constants, $t˜$r,i denotes the average robot-to-robot encounter time interval, and $I¯$i denotes the normalized amount of tasks. More details about the implementation of the BEECLUST algorithm in this study are provided in the Appendix.
Figure 3 shows the ET and ED values when p0 is varied. A representative trial when p0 = 5.0 is shown in Movie 4 (see the online supplementary material). For all p0 values, ET and ED are larger than those for the proposed control scheme. According to the statistical analysis, the ET and ED values when d0 = 10 m in the proposed control scheme are significantly smaller than those when p0 = 5 in the BEECLUST algorithm (Welch test, p < 0.01).
Figure 3.
Boxplots of ET and ED when p0 is varied in the BEECLUST algorithm. One hundred trials are performed for each p0 value. The median and quartile are shown by boxes, while the maximum and minimum values are shown by bars.
Figure 3.
Boxplots of ET and ED when p0 is varied in the BEECLUST algorithm. One hundred trials are performed for each p0 value. The median and quartile are shown by boxes, while the maximum and minimum values are shown by bars.
Next, we performed simulations under conditions 2 and 3, where the spatiotemporal scales are two and three times that under condition 1, respectively (Table 3). The results when d0 is varied are shown in Figure 4. Both ET and ED decrease as d0 increases, and do not vary significantly for large d0. Thus, the optimal value in Condition 1 (d0 ≃ 10 m, Figure 2) is not optimal in conditions 2 and 3. The optimal value of d0 depends on the environment, and the parameters have to be tuned in response to the environment. This is a limitation of the NRIB swarm algorithm.
Figure 4.
Boxplots of ET and ED under (a) condition 1 and (b) condition 2. One hundred trials are performed for each d0 value. The median and quartile are shown by boxes, while the maximum and minimum values are shown by bars. A statistical test was performed between the NRIB swarm algorithm with d0 = 10 m and the improved NRIB swarm algorithm. Asterisks denote statistically significant difference (p < 0.01). In the statistical analysis, a t-test was adopted when the data is homoscedastic; otherwise a Welch test was adopted.
Figure 4.
Boxplots of ET and ED under (a) condition 1 and (b) condition 2. One hundred trials are performed for each d0 value. The median and quartile are shown by boxes, while the maximum and minimum values are shown by bars. A statistical test was performed between the NRIB swarm algorithm with d0 = 10 m and the improved NRIB swarm algorithm. Asterisks denote statistically significant difference (p < 0.01). In the statistical analysis, a t-test was adopted when the data is homoscedastic; otherwise a Welch test was adopted.
## 5 Improved NRIB Swarm Algorithm
To solve the problem of the NRIB swarm algorithm, an improved NRIB swarm algorithm is proposed, where the radius of the communication range di changes dynamically. The basic idea is as follows. When there does not exist a robot having high workload within the communication range, the robot needs to enlarge the communication range to find robots that need help. In contrast, when there exist many robots having high workload within the communication range, the robot will become confused about which direction to move in. In such a case, it is desirable that the robot approach the nearest robot with high workload so as not to move a long distance; hence, the communication range needs to be reduced.
Based on the above idea, the time evolution of di is designed as
$τrd˙i=dmax−dmax−dmintanhγ∑jϵSiwj−di,$
(10)
where γ and τr are positive constants. The radius of the communication range di approaches dmax and dmin as the summation of the workload of the robots within the communication range increases and decreases, respectively. Thus, it is expected that the communication range will be auto-tuned in response to the situation.
Moreover, the algorithm for the movement of the robots is slightly modified from Equation 6. When the communication range di is large, robot i is required to approach distant robots with high workload to execute tasks cooperatively. Thus, the attraction term ($α˜$wi|Rij|−1 in Equation 6) should increase as di increases. Hence, Equation 6 is modified as
$r˙i=1−wi∑jϵSiα˜′diwjRij−1−β˜Rij−2Rˆij+f˜0ni+∑jf˜ijphys,$
(11)
where $α˜$′ is a positive constant.
### 5.1 Simulation Results
Simulations were performed using the improved NRIB swarm algorithm. The time evolution of di (Equation 10) was solved using the Euler method, because it varied slowly. The values of parameters specific to the improved NRIB swarm algorithm are listed in Table 5, and the other parameter values are the same as those of the NRIB swarm algorithm (Table 4). The results for conditions 1–3 are shown in Figures 2, 4(a), and 4(b) and Movies 5–7 (see the online supplementary material). It is found that both ET and ED for the improved NRIB swarm algorithm are smaller than or almost equal to the optimal values for the NRIB swarm algorithm. According to statistical analysis, ET and ED for the improved NRIB swarm algorithm are significantly smaller than those for the NRIB swarm algorithm with d0 = 10 m (Welch test or t-test, p < 0.01), except for ED for condition 1, wherein no significant difference was found. This was achieved by auto-tuning of the communication range.
Table 5.
Parameter values under conditions 1–3 in the NRIB swarm algorithm.
ParameterValue
Condition 1 (non-dimensional)Condition 1 (dimensional)Condition 2 (non-dimensional)Condition 2 (dimensional)Condition 3 (non-dimensional)Condition 3 (dimensional)
N 30 30 30 30 30 30
tmaxstep 2000000 2000000 2000000 2000000 2000000 2000000
r0 0.500 0.5 m 0.250 0.5 m 0.167 0.5 m
l 40 40 m 40 80 m 40 120 m
q 200 200 200 200 200 200
τw 1.000 1 s 0.500 1 s 0.333 1 s
λ
ϵ 0.1 0.1 s−1 0.2 0.1 s−1 0.3 0.1 s−1
$α˜$ 2.000 2 m2 s−1 1.000 2 m2 s−1 0.667 2 m2 s−1
$β˜$ 2.000 2 m3 s−1 0.500 2 m3 s−1 0.222 2 m3 s−1
$f˜$0 0.5 0.5 m · s−1 0.5 0.5 m · s−1 0.5 0.5 m · s−1
$k˜$phys(≡ kphys/η1000 1000 m−4 s−1 32000 1000 m−4 s−1 243000 1000 m−4 s−1
μ
vmax 1 m · s−1 1 m · s−1 1 m · s−1
tm 300 300 s 150 300 s 100 300 s
Δt 0.001000 0.001 s 0.000500 0.001 s 0.000333 0.001 s
ParameterValue
Condition 1 (non-dimensional)Condition 1 (dimensional)Condition 2 (non-dimensional)Condition 2 (dimensional)Condition 3 (non-dimensional)Condition 3 (dimensional)
N 30 30 30 30 30 30
tmaxstep 2000000 2000000 2000000 2000000 2000000 2000000
r0 0.500 0.5 m 0.250 0.5 m 0.167 0.5 m
l 40 40 m 40 80 m 40 120 m
q 200 200 200 200 200 200
τw 1.000 1 s 0.500 1 s 0.333 1 s
λ
ϵ 0.1 0.1 s−1 0.2 0.1 s−1 0.3 0.1 s−1
$α˜$ 2.000 2 m2 s−1 1.000 2 m2 s−1 0.667 2 m2 s−1
$β˜$ 2.000 2 m3 s−1 0.500 2 m3 s−1 0.222 2 m3 s−1
$f˜$0 0.5 0.5 m · s−1 0.5 0.5 m · s−1 0.5 0.5 m · s−1
$k˜$phys(≡ kphys/η1000 1000 m−4 s−1 32000 1000 m−4 s−1 243000 1000 m−4 s−1
μ
vmax 1 m · s−1 1 m · s−1 1 m · s−1
tm 300 300 s 150 300 s 100 300 s
Δt 0.001000 0.001 s 0.000500 0.001 s 0.000333 0.001 s
We also performed simulations under condition 1 with the following cases: (i) tasks appeared randomly in the field during the simulation, (ii) some robots suddenly stopped moving during the simulation, and (iii) obstacles existed in the field. Note that maximal tmaxstep was set to be 100000 in case (ii), while it was 200000 in the other cases. Representative trials for cases (i)–(iii) are shown in Movies 8–10 in the supplementary material. We found that the robots could execute the tasks for all cases.
The results for cases (i)–(iii) with the improved NRIB swarm algorithm were compared with those for the NRIB swarm algorithm and the BEECLUST algorithm. The radius of the communication range d0 was set to 10 m in the NRIB swarm algorithm, and p0 was set to 5.0 in the BEECLUST algorithm. The performance was evaluated by ET and ED in cases (ii) and (iii); however, ET could not be used in case (i) because of the appearance of tasks. Hence, the performance in case (i) was evaluated by ED and ER, the latter of which is defined as the ratio of ∫Fz(r)dr at t = tmaxstep to that at t = 0.
The results are shown in Figure 5. In case (i), ER and ED for the BEECLUST algorithm are significantly larger than for the other algorithms (Welch test, p < 0.01). ER for the improved NRIB swarm algorithm is significantly smaller than that for the NRIB swarm algorithm (Welch test, p < 0.01). However, ED for the improved NRIB swarm algorithm is significantly larger than that for the NRIB swarm algorithm (Welch test, p < 0.01). This result was obtained because robots moved toward distant robots to execute tasks fast and cooperatively (Movie 8). In cases (ii) and (iii), ET and ED for the improved NRIB swarm algorithm are significantly smaller than those for the BEECLUST algorithm and the NRIB swarm algorithm (Welch test or t-test, p < 0.01 or p < 0.05).
Figure 5.
Boxplots of ER, ET, and ED when (i) tasks appear randomly, (ii) some robots stop moving, and (iii) obstacles exist. Comparisons among the BEECLUST algorithm (p0 = 5), the NRIB swarm algorithm (d0 = 10 m), and the improved NRIB swarm algorithm are shown. The median and quartile are shown by boxes, while the maximum and minimum values are shown by bars. The ET value for the BEECLUST algorithm in (ii) is not shown, because 90% of the tasks were not executed until the maximum time step tmaxstep. Asterisks denote statistically significant difference. In the statistical analysis, the t-test was adopted when the data is homoscedastic; otherwise the Welch test was adopted.
Figure 5.
Boxplots of ER, ET, and ED when (i) tasks appear randomly, (ii) some robots stop moving, and (iii) obstacles exist. Comparisons among the BEECLUST algorithm (p0 = 5), the NRIB swarm algorithm (d0 = 10 m), and the improved NRIB swarm algorithm are shown. The median and quartile are shown by boxes, while the maximum and minimum values are shown by bars. The ET value for the BEECLUST algorithm in (ii) is not shown, because 90% of the tasks were not executed until the maximum time step tmaxstep. Asterisks denote statistically significant difference. In the statistical analysis, the t-test was adopted when the data is homoscedastic; otherwise the Welch test was adopted.
## 6 Discussion and Conclusion
We have proposed, by drawing inspiration from the NRIB model proposed previously [22, 23], simple decentralized control schemes for swarm robots that can execute spatially distributed tasks in parallel. We proposed two control schemes (the NRIB swarm algorithm and the improved NRIB swarm algorithm). In the NRIB swarm algorithm (Section 4), each robot moves randomly to find tasks; once it finds the tasks, it attracts robots within its communication range to execute them cooperatively. We performed simulations under various conditions and found that the communication range d0 is an important parameter. For small d0, the robots move almost independently and need to travel for a long time to find tasks, while for large d0 they move long distances to aggregate (Figure 1). Thus, d0 has an optimal value; however, it depends on the length scale of the field (conditions 1–3, Figures 2 and 4). To solve this problem, we proposed an improved NRIB swarm algorithm, wherein the communication range is auto-tuned on the basis of the workload of robots within the communication range (Section 5). This auto-tuning mechanism is reasonable because a robot searches a large area when robots that need help do not exist in the vicinity. We demonstrated, via simulations, that the improved NRIB swarm algorithm is applicable to various situations without changing any parameter (Figures 2, 4, and 5).
We compared the proposed control schemes with the extended variant of the BEECLUST algorithm [41]. The BEECLUST algorithm is simpler than the proposed control schemes and does not need to detect relative positions of other robots: Robots only need to share information about the robot-to-robot encounter time interval and the maximum and minimum values of z(ri) detected in the recent past. Whereas the BEECLUST algorithm has this advantage, the proposed control scheme enables the robots to execute tasks fast with a short travel distance compared with the BEECLUST algorithm (Figures 3 and 5). This result was obtained because robots performed tasks cooperatively only when they collided with each other by coincidence in the BEECLUST algorithm, whereas the proposed control schemes enabled cooperative task execution through interaction between robots.
Several algorithms besides the BEECLUST algorithm have been proposed thus far [12, 30, 34]. Although we have not quantitatively compared the proposed control schemes with these studies, the idea of the variable communication range proposed in the improved NRIB swarm algorithm (Equation 10) is novel, and is expected to be advantageous. In fact, we showed that the performance improved by auto-tuning of the communication range (Figures 2 and 4).
We believe that the proposed control scheme can be used for various practical applications. Hardware realization is a first step toward this. We believe that it is possible in the near future, because we have already succeeded in developing hardware for the NRIB model wherein sensors that can detect the relative position of nearby robots and actuators that enable omnidirectional movement are incorporated [24].
The proposed control schemes still have limitations. First, they may not be applicable to the cases where the length scale of tasks is comparable with or smaller than the robot diameter. This is because some robots with high workload occupy the area where a task exists, and thus, robots attracted by them cannot enter there. Second, the performance of the proposed control scheme is not guaranteed when the ground friction is inhomogeneous. For example, when there exists an area with extremely high friction, robots need to avoid entering it; otherwise, they can get stuck; however, the present control schemes cannot cope with such a situation. Further studies are needed to solve these problems.
## Acknowledgment
The authors thank Professor Ken Sugawara of Tohoku Gakuin University and Naoki Matsui of Tohoku University for their helpful suggestions.
## References
1
Asally
,
M.
,
Kittisopikul
,
M.
,
Rué
,
P.
,
Du
,
Y.
,
Hu
,
Z.
,
Cağatay
,
T.
,
Robinson
,
A. B.
,
Lu
,
H.
,
Garcia-Ojalvo
,
J.
, &
Süel
,
G. M.
(
2012
).
Localized cell death focuses mechanical forces during 3D patterning in a biofilm
.
Proceedings of the National Academy of Sciences of the USA
,
109
(
46
),
18891
18896
. https://doi.org/10.1073/pnas.1212429109.
2
Aşık
,
O.
, &
Akın
,
H. L.
(
2017
).
Effective multi-robot spatial task allocation using model approximations
. In
S.
Behnke
,
R.
Sheh
,
S.
Sarıel
, &
D.
Lee
(Eds.),
RoboCup 2016: Robot World Cup XX
(pp.
243
255
).
Cham, Switzerland
:
Springer
. https://doi.org/10.1007/978-3-319-68792-6_20.
3
Bodi
,
M.
,
Thenius
,
R.
,
Szopek
,
M.
,
Schmickl
,
T.
, &
Crailsheim
,
K.
(
2012
).
Interaction of robot swarms using the honeybee-inspired control algorithm BEECLUST
.
Mathematical and Computer Modelling of Dynamical Systems
,
18
(
1
),
87
100
. https://doi.org/10.1080/13873954.2011.601420.
4
Cambier
,
N.
,
Fŕemont
,
V.
, &
Ferrante
,
E.
(
2017
).
Group-size regulation in self-organised aggregation through the naming game
. In
Proceedings of International Symposium on Swarm Behavior and Bio-Inspired Robotics (SWARM 2017)
(pp.
365
372
).
5
Chen
,
Y.
, &
Kolokolnikov
,
T.
(
2014
).
A minimal model of predator-swarm interactions
,
Journal of the Royal Society Interface
,
11
,
20131208
. https://doi.org/10.1098/rsif.2013.1208.
6
Claes
,
D.
,
Robbel
,
P.
,
Oliehoek
,
F. A.
,
Tuyls
,
K.
,
Hennes
,
D.
, &
van der Hoek
,
W.
(
2015
).
Effective approximations for multi-robot coordination in spatially distributed tasks
. In
B.
Bordini
,
M. S. V.
Elkind
,
R. G.
Weiss
, &
G. T.
Yolum
(Eds.),
Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems
(pp.
881
890
).
Richland, SC
:
International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
.
7
Correll
,
N.
, &
Martinoli
,
A.
(
2011
).
Modeling and designing self-organized aggregation in a swarm of miniature robots
.
International Journal of Robotic Research
,
30
(
5
),
615
626
. https://doi.org/10.1177/0278364911403017.
8
Czirók
,
A.
,
Stanley
,
H. E.
, &
Vicsek
,
T.
(
1997
).
Spontaneously ordered motion of self-propelled particles
.
Journal of Physics A: Mathematical and General
,
30
(
5
),
1375
. https://doi.org/10.1088/0305-4470/30/5/009.
9
Czirók
,
A.
, &
Vicsek
,
T.
(
2000
).
Collective behavior of interacting self-propelled particles
.
Physica A: Statistical Mechanics and its Applications
,
281
(
1–4
),
17
29
. https://doi.org/10.1016/S0378-4371(00)00013-3.
10
Dorigo
,
M.
,
Tuci
,
E.
,
Trianni
,
V.
Groß
,
R.
,
Nouyan
,
R.
,
Ampatzis
,
C.
,
Labella
,
T. H.
,
,
R.
,
Bonani
,
M.
, &
F.
(
2006
).
SWARMBOT: Design and implementation of colonies of self-assembling robots
. In
G. Y.
Yen
&
D. B.
Fogel
(Eds.),
Computational intelligence: Principles and practice
(pp.
103
135
).
Los Alamitos, CA
:
IEEE Press
.
11
Ducatelle
,
F.
,
Förster
,
A.
,
Di Caro
,
G. A.
, &
Gambardella
,
L. M.
(
2009
).
New task allocation methods for robotic swarms
. In
Proceedings of 9th IEEE/RAS Conference on Autonomous Robot Systems and Competitions
.
12
Dasgupta
,
P.
(
2011
).
Multi-robot task allocation for performing cooperative foraging tasks in an initially unknown environment
. In
L. C.
Jain
et al
(Eds.),
Innovations in defence support systems—2, SCI 338
(pp.
5
20
).
Berlin, Heidelberg
:
Springer Verlag
. https://doi.org/10.1007/978-3-642-17764-4.
13
Firat
,
Z.
,
Ferrante
,
E.
,
Gilet
,
Y.
, &
Tuci
,
E.
(
2019
).
On self-organised aggregation dynamics in swarms of robots with informed robots
.
ArXiv:1903.03841
.
14
Garnier
,
S.
,
Murphy
,
T.
,
Lutz
,
M.
,
Hurme
,
E.
,
Leblanc
,
S.
, &
Couzin
,
I. D.
(
2013
).
Stability and responsiveness in a self-organized living architecture
.
PLOS Computational Biology
,
9
(
3
),
e1002984
. https://doi.org/10.1371/journal.pcbi.1002984.
15
González
,
M. C.
,
Lind
,
P. G.
, &
Herrman
,
H. J.
(
2006
).
System of mobile agents to model social networks
.
Physical Review Letters
,
96
,
088702
. https://doi.org/10.1103/physrevlett.96.088702.
16
Groß
,
R.
,
Bonani
,
M.
,
,
F.
, &
Dorigo
,
M.
(
2006
).
Autonomous self-assembly in swarm-bots
.
IEEE Transactions on Robotics
,
22
(
6
),
1115
1130
. https://doi.org/10.1109/TRO.2006.882919.
17
Hamann
,
H.
(
2018
).
Swarm robotics: A formal approach
(1st ed.).
New York
:
Springer
.
18
Hauert
,
S.
, &
Bhatia
,
S. N.
(
2014
).
Mechanisms of cooperation in cancer nanomedicine: Towards systems nanotechnology
.
Trends in Biotechnology
,
32
,
448
455
. https://doi.org/10.1016/j.tibtech.2014.06.010.
19
Hayakawa
,
Y.
(
2010
).
Spatiotemporal dynamics of skeins of wild geese
.
Europhysics Letters
,
89
,
48004
. https://doi.org/10.1209/0295-5075/89/48004.
20
Hereford
,
J.
(
2011
).
Analysis of BEECLUST swarm algorithm
. In
Proceedings of IEEE Symposium on Swarm Intelligence (SIS)
(pp.
1
7
).
New York
:
IEEE
. https://doi.org/10.1109/SIS.2011.5952587.
21
Jamshidpey
,
A.
, &
Afsharchi
,
M.
(
2015
).
Task allocation in robotic swarms: Explicit communication based approaches
. In
D.
Barbosa
&
E.
Milios
(Eds.),
Proceedings of Canadian Conference on Artificial Intelligence
(pp.
59
67
).
New York
:
Springer
. https://doi.org/10.1007/978-3-319-18356-5_6.
22
Kano
,
T.
,
Osuka
,
K.
,
Kawakatsu
,
T.
,
Matsui
,
N.
, &
Ishiguro
,
A.
(
2017
).
A minimal model of collective behaviour based on non-reciprocal Interactions
. In
C.
Knibbe
,
G.
Beslon
,
D. P.
Parsons
,
D.
Misevic
,
J.
Rouzaud-Cornabas
,
N.
Bredèche
,
S.
Hassas
,
O.
Simonin
, &
H.
Soula
(Eds.),
Proceedings of ECAL 2017 the 14th European Conference on Artificial Life
(pp.
237
244
).
Cambridge, MA
:
MIT Press
. https://doi.org/10.7551/ecal_a_041.
23
Kano
,
T.
,
Osuka
,
K.
,
Kawakatsu
,
T.
, &
Ishiguro
,
A.
(
2017
).
Mathematical analysis for non-reciprocal-interaction-based model of collective behavior
.
Journal of the Physical Society of Japan
,
86
,
124004
. https://doi.org/10.7566/JPSJ.86.124004.
24
Kano
,
T.
,
Matsui
,
N.
,
Naito
,
E.
,
Aoshima
,
T.
, &
Ishiguro
,
A.
(
2018
).
Swarm robots inspired by friendship formation process
.
arXiv:1808.03812
.
25
Kernbach
,
S.
,
Häbe
,
D.
,
Kernbach
,
O.
,
Thenius
,
R.
,
,
G.
,
Kimura
,
T.
, &
Schmickl
,
T.
(
2012
).
Adaptive collective decision-making in limited robot swarms without communication
.
The International Journal of Robotics Research
,
32
(
1
),
35
55
. https://doi.org/10.1177/0278364912468636.
26
Kube
C. R.
, &
Bonabeau
,
E.
(
2000
).
Cooperative transport by ants and robots
.
Robotics and Autonomous Systems
,
30
,
85
101
. https://doi.org/10.1016/S0921-8890(99)00066-4.
27
March
,
J. G.
(
1991
).
Exploration and exploitation in organizational learning
.
Organization Science
,
2
,
71
87
. https://doi.org/10.1287/orsc.2.1.71.
28
Mishra
,
S.
,
,
A.
, &
Marchetti
,
M. C.
(
2010
).
Fluctuations and pattern formation in self-propelled particles
.
Physical Review E
,
81
(
6
),
061916
. https://doi.org/10.1103/PhysRevE.81.061916.
29
Petersen
,
K. H.
,
Napp
,
N.
,
Stuart-Smith
,
R.
,
Rus
,
D.
, &
Kovac
,
M.
(
2019
).
A review of collective robotic construction
.
Science Robotics
,
4
,
eaau8479
. https://doi.org/10.1126/scirobotics.aau8479.
30
Reif
,
J. H.
, &
Wang
,
H.
(
1999
).
Social potential fields: A distributed behavioral control for autonomous robots
.
Robotics and Autonomous Systems
,
27
,
171
194
. https://doi.org/10.1016/S0921-8890(99)00004-4.
31
Reynolds
,
C. W.
(
1987
).
Flocks, herds, and schools: A distributed behavioral model
.
Computer Graphics
,
21
,
25
34
. https://doi.org/10.1145/37402.37406.
32
Rubenstein
,
M.
,
Cornejo
,
A.
, &
Napgal
,
R.
(
2014
).
Programmable self-assembly in a thousand-robot swarm
.
Science
,
345
(
6198
),
795
799
. https://doi.org/10.1126/science.1254295.
33
Schmickl
,
T.
, &
Hamann
,
H.
(
2011
).
BEECLUST: A swarm algorithm derived from honeybees
. In
X. S.
Yang
et al
(Eds.),
Bio-inspired computing and communication networks
.
Boca Raton, FL
:
CRC Press
.
34
Simonin
,
O.
, &
Ferber
,
J.
(
2000
).
Modeling self satisfaction and altruism to handle action selection and reactive cooperation
. In
J.
Meyer
,
A.
Berthoz
,
D.
Floreano
,
H.
Roitblat
, &
S. W.
Wilson
(Eds.),
Proceedings of 6th International Conference on the Simulation of Adaptive Behavior
(pp.
314
323
).
Cambridge, MA
:
MIT Press
.
35
Sugawara
,
K.
, &
Sano
,
M.
(
1997
).
Cooperative acceleration of task performance: Foraging behavior of interacting multi-robots system
.
Physica D
,
100
,
343
354
. https://doi.org/10.1016/S0167-2789(96)00195-9.
36
Tan
,
Y.
, &
Zheng
,
Z.
(
2013
).
.
Defence Technology
,
9
,
18
39
. https://doi.org/10.1016/j.dt.2013.03.001.
37
Tanaka
,
S.
,
Nakata
,
S.
, &
Kano
,
T.
(
2017
).
Dynamic ordering in a swarm of floating droplets driven by solutal Marangoni effect
.
Journal of the Physical Society of Japan
,
86
,
101004
. https://doi.org/10.7566/JPSJ.86.101004.
38
Theraulaz
,
G.
,
Bonabeau
,
E.
, &
Deneubourg
,
J.-L.
(
1998
).
Response threshold reinforcements and division of labour in insect societies
.
Proceedings of the Royal Society London: Series B
,
265
,
327
332
. https://doi.org/10.1098/rspb.1998.0299.
39
Vabø
,
R.
, &
,
L.
(
1997
).
An individual based model of fish school reactions: Predicting antipredator behaviour as observed in nature
.
Fisheries Oceanography
,
6
(
3
),
155
171
. https://doi.org/10.1046/j.1365-2419.1997.00037.x.
40
Van Essche
,
S.
,
Ferrante
E.
,
Turgut
,
A. E.
,
Van Lon
,
R.
,
Holvoet
,
T.
, &
Wenseleers
,
T.
(
2015
).
Environmental factors promoting the evolution of recruitment strategies in swarms of foraging robots
. In
Proceedings of International Symposium on Swarm Behavior and Bio-Inspired Robotics (SWARM 2015)
(pp.
389
396
).
41
Wahby
,
M.
,
Petzold
,
J.
,
Eschke
,
C.
,
Schmickl
,
T.
, &
Hamann
,
H.
(
2019
).
Collective change detection: Adaptivity to dynamic swarm densities and light conditions in robot swarms
. In
H.
Fellermann
,
J.
Bacardit
,
Á. G.
Moreno
, &
R.
Füchslin
(Eds.),
Proceedings on Artificial Life Conference
(pp.
642
649
).
Cambridge, MA
:
MIT Press
. https://doi.org/10.1162/isal_a_00233.
### Appendix
Here we explain how the extended variant of the BEECLUST algorithm [41] was implemented in our simulator. A flowchart of the algorithm is shown in Figure 6.
Figure 6.
Flowchart of the the extended variant of the BEECLUST algorithm [41] implemented in our simulator.
Figure 6.
Flowchart of the the extended variant of the BEECLUST algorithm [41] implemented in our simulator.
Basically, each robot moves with the velocity vector vmaxni, where ni = (cos θi, sin θi)T. The deflection angle θi is given by a random number in the interval [0, 2π) and remains unchanged until the robot collides with an object (another robot, a wall, or an obstacle).
Each robot always measures tr,i, Imin,i, and Imax,i defined below and shares them with other robots within its communication range.
• tr,i: Average time between subsequent collisions over the duration τbee in the recent past. It is defined by tr,i = τbee/ncol,i, where ncol,i denotes the number of collisions during the duration τbee.
• Imin,i: Minimum value of z(ri) during the duration τbee in the recent past.
• Imax,i: Maximum value of z(ri) during the duration τbee in the recent past.
By sharing this information, each robot obtains $t˜$r,i, $I˜$min,i, and $I˜$max,i given by
$t˜r,i=1ni∑jϵSitr,j,I˜min,i=minjϵSiImin,j,I˜max,i=maxjϵSiImax,jI˜min,i+δ,$
(12)
where ni denotes the number of robots within the communication range, and δ is a positive constant that was introduced to avoid zero divide in Equation 13 described below. Then, the normalized amount of tasks $I˜$i is calculated as
$I¯i=zri−I˜min,iI˜max,i−I˜min,i$
(13)
Using $t˜$r,i and $I¯$i, the waiting time tbee,i is given by Equation 9 in the main text.
When the robot collides with another robot, it stops motion for the duration tbee and then moves with the velocity vector vmaxni again, updating the θi value randomly. When the robot collides with a wall or an obstacle, it immediately changes the direction of motion by updating the θi value randomly. | |
After gathering and cleaning data on rental bikes in Cologne, we already looked at map visualizations. Here, we will dive into the data more deeply. Using descriptive statistics we will try to find interesting patterns. For this, we will use the pandas library. It is great for manipulating and analyzing data. Moreover, we will use matplotlib and seaborn to visualize some of our findings.
You can find the data used in this post here. This post was written in a jupyter notebook. You can also find it at github.
Let's get started by setting up things:
In [11]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from geopy.distance import vincenty
import warnings
warnings.simplefilter('ignore')
# define style of plots
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams["figure.figsize"] = (10,8)
After setting up the environment, we read in the data and drop some unnecessary columns. Following, we take a first glance at the structure:
In [2]:
BIKES_COMBINED = "../data/bikedata_10min-int-observ_Feb-March.csv"
bikes.drop(['u_id','bike_name'], axis=1, inplace=True)
bikes.scrape_datetime = pd.to_datetime(bikes.scrape_datetime)
Out[2]:
bike_id scrape_weekday lat lon scrape_datetime
0 22460 Sun 50.933528 6.995009 2017-02-05 00:00:02
1 22134 Sun 50.896996 6.980310 2017-02-05 00:00:02
2 21911 Sun 50.967579 6.936491 2017-02-05 00:00:02
3 21522 Sun 50.975012 7.007270 2017-02-05 00:00:02
4 21914 Sun 50.970199 6.900565 2017-02-05 00:00:02
We have one observation for all bike locations (lat and lon) every ten minutes. The bikes can be identified by their bike_id. If a bike_id is missing at one observation point it means that the bike is currently not available. This is the case when a bike is in use. Alternatively, the bike (or the GPS) could be broken or under maintenance. This structure has a downside. Because the number of rows per bike_id varies, it's impossible to work with averages. Thus, we make it more consistent by adjusting the number of rows / observations for each bike_id:
In [3]:
# Same number of rows for each bike_id -> Align number of observations
bikes = bikes.assign(obs=1).set_index(['scrape_datetime','bike_id']).\
unstack().stack(dropna=False).reset_index()
Out[3]:
scrape_datetime bike_id scrape_weekday lat lon obs
0 2017-02-05 00:00:02 21004 Sun 50.954703 6.889604 1.0
1 2017-02-05 00:00:02 21005 None NaN NaN NaN
2 2017-02-05 00:00:02 21006 None NaN NaN NaN
3 2017-02-05 00:00:02 21007 Sun 50.909306 6.944799 1.0
4 2017-02-05 00:00:02 21008 Sun 50.941107 6.997360 1.0
In result, we have a row for each bike_id for every scrape_datetime. If the bike was really observed at this time we have obs = 1. Otherwise, we know the bike was not actually available.
If between two (actual) observation points the coordinates have changed, the bike has been used. Accordingly, we can create a dummy variable to indicate this. We do this by grouping the data by bike_id and applying a function to every non-empty row in the lat column. The function checks if the difference between the current and previous row (i.e. the current and previous observation) is greater than zero:
In [4]:
# Compute dummy (0/1) for location changes:
# did coordinates change between two observations?
bikes['change'] = bikes[bikes.obs==1].groupby('bike_id')['lat'].\
apply(lambda x: abs(x.diff()) > 0).astype(int)
bikes.loc[bikes.change.isnull(), "change"] = 0
Now, that we know a bike was used let's find out what distance it traveled. Because we only have the start and end point of the journey this will be a rough estimate. We can expect the actual distance to be significantly longer. To achieve this, first we create a new column. It contains the coordinates shifted by one observation period for each bike. Then, we create the index m that includes all rows in our data where the bike has been used. Using this, we can compute the distance between the actual coordinates and the shifted coordinates. This is done by applying the vincenty function from the geopy package to each row in m:
In [5]:
# Compute Trip Distance when coords changed
bikes['coords'] = bikes[['lat','lon']].values.tolist()
bikes['coords_shifted'] = bikes[bikes.obs==1].groupby('bike_id')['coords'].shift()
m = bikes['coords_shifted'].notnull() & bikes['change'] == 1
bikes.loc[m, 'trip_distance'] = bikes.loc[m, :].apply(
lambda x: vincenty(x['coords'], x['coords_shifted']).kilometers, axis=1)
Visualizing will give us a good impression about the typical distance of trips. We do this by using seaborn to draw a histogram:
In [12]:
# seaborn histogram - exclude missings
ax = sns.distplot(bikes.trip_distance[bikes.trip_distance.notna()],
kde=False, bins=100)
ax.set_xlim(0,10)
ax.set_title("Histogram of bike trip distances")
Out[12]:
Text(0.5,1,'Histogram of bike trip distances')
Most trips are rather short. Typically, they will be under two kilometers. Trips above four kilometers distance are the exception.
As a next step, we can start to look at some descriptives for the usage and distance variables we've just defined. Let's take a look at the usage first:
In [14]:
# Compute some descriptives statistics on bike usage
length_observ_period = (bikes.scrape_datetime.dt.dayofyear.max()\
- bikes.scrape_datetime.dt.dayofyear.min()) + 1
num_bikes = len(bikes.groupby('bike_id'))
usage_mean_daily = np.mean(bikes.groupby(bikes.scrape_datetime.dt.dayofyear).change.sum()
/ num_bikes)
num_observ_per_bike = bikes[bikes.obs.notna()].groupby(['bike_id']).size()
In [15]:
print("Days in dataset: ", length_observ_period)
print("Number of bikes: ", num_bikes)
print("Daily mean usage per bike: ", usage_mean_daily)
Days in dataset: 41
Number of bikes: 1230
Daily mean usage per bike: 1.7067023597065238
During our observation period of 41 days we have 1230 different bikes on the streets. On average a bike is used 1.7 times a day. Hence, there are about 2100 trips made each day.
Taking advantage of the possibility to directly plot pandas DataFrames and Series with pyplot we look at some graphs:
In [16]:
ax = num_observ_per_bike.hist(bins=int(num_bikes/20))
ax.set_title("Total observations")
ax.set_xlabel("number of observations")
ax.set_ylabel("number of bikes")
Out[16]:
Text(0,0.5,'number of bikes')
We see that there are about 50 bikes which have almost no observations in our data. Also a few others have less than 3000. Above 3000 there is a steady increase. Probably, bikes with very few observations have broken down at some point at the beginning of the period. However, bikes that have been used a lot will also have fewer observations. Consequently, there are quite a few bikes that seem to be used very rarely.
Next, I'd like to get an overview over the number of bike rentals during our observation period. A very intelligible way to visualize a single measure over a time period are calendar heatmaps. The calmap library simplifies creating such charts. Here is what we have to do:
In [18]:
# Calendar Heatmap
import calmap
# Compute Total usage per day
grp = bikes.groupby(bikes.scrape_datetime.dt.date).change.sum()
# Convert to DatetimeIndex for plotting
grp.index = grp.index.to_datetime()
# Plot the calmap and add legend/colorbar
fig = plt.figure()
cax = calmap.yearplot(grp)
cax.set_xlim(5,12)
fig.colorbar(cax.get_children()[1], ax=cax, orientation='vertical')
Out[18]:
<matplotlib.colorbar.Colorbar at 0x7f55e66b1080>
The resulting chart gives away a lot of interesting patterns. First, we see that the number of rentals per day varies greatly. It has a low of about 1000 and a high of 3000. Then, we learn that on weekends rentals are the lowest. In contrast, Fridays seem to be quite popular for renting bikes. There is a lot of variance from week to week even between same weekdays. During all days in the 4th week rentals were very low. As opposed to this, rentals throughout the 2nd and last week were high. If we looked at a longer time period, we would probably see seasonal patterns as well.
Next, we look at differences between weekdays more closely. For that, we will depict the distribution of rentals by weekday using grouped boxplots:
In [19]:
weekday_index = ["Mon","Tue","Wed","Thu","Fri","Sat","Sun"]
bikes_weekday = bikes.groupby(['bike_id', "scrape_weekday"]).change.sum().unstack()\
/ length_observ_period * 7 * num_bikes
# reorder columns
bikes_weekday = bikes_weekday[weekday_index]
# draw boxplots with our DataFrame
ax = sns.boxplot(data=bikes_weekday)
ax.set_title("Mean Usage of bikes by weekday")
ax.set_xlabel("weekday")
ax.set_ylabel("bikes used")
print("Total number of bikes used per weekday on average: ", bikes_weekday.mean())
Total number of bikes used per weekday on average: scrape_weekday
Mon 2338.734940
Tue 2469.297945
Wed 2276.813934
Thu 2606.723549
Fri 2471.326781
Sat 1652.402089
Sun 1569.121447
dtype: float64
As seen before, Thursdays and Fridays are the most popular rental days. More than 2500 trips are made on average on these days. On the weekends there is a massive drop. Usage is almost 40% lower compared to the weekdays. This indicates that commuting between home and work (or school / university) is the dominant use case for rental bikes.
Let's follow up by looking at how rentals change during the course of a day:
In [20]:
usage_by_hour = bikes.groupby(bikes.scrape_datetime.dt.hour).change.sum()\
/ length_observ_period
ax = usage_by_hour.plot()
plt.xticks(range(0,24)[::2])
ax.set_title("Usage of bikes by hour")
ax.set_xlabel("time")
ax.set_ylabel("bikes used")
Out[20]:
Text(0,0.5,'bikes used')
The graph aboves shows the usage of bikes by hour of the day for an average day. We see that the usage takes off at about 6 a.m. in the morning. It climbs to its first peak at around 9 and then drops somewhat. This corresponds well with people going to work. Then, at about 11 a.m. usage starts to increase steadily. The maximum is reached at about 6 p.m. Again, this fits the fact that people use the bikes to return back home after work. Thereafter, there is an even decline until the minimum at around 5 a.m.
Let's see if there is a difference in this graph between weekdays and weekends:
In [21]:
# Compute Usage by time for weekdays vs weekends
usage_by_hour_wd = bikes[bikes.scrape_datetime.dt.weekday <= 5].groupby(
bikes.scrape_datetime.dt.hour).change.sum() / (length_observ_period * 5/7)
usage_by_hour_we = bikes[bikes.scrape_datetime.dt.weekday > 5].groupby(
bikes.scrape_datetime.dt.hour).change.sum() / (length_observ_period * 2/7)
In [22]:
# Draw two plots for comparing week vs. weekend use
plt.subplot(2, 1, 1)
ax = usage_by_hour_wd.plot()
plt.xticks(range(0,24)[::2])
ax.set_title("Usage of bikes by hour\nWeekdays")
ax.set_xlabel("time")
ax.set_ylabel("bikes used")
plt.subplot(2, 1, 2)
ax = usage_by_hour_we.plot()
plt.xticks(range(0,24)[::2])
ax.set_title("Weekends")
ax.set_xlabel("time")
ax.set_ylabel("bikes used")
plt.tight_layout()
A few interesting points can be taken from this comparison. On the weekends, bike rentals in the morning begin to increase two hours later. Also, instead of the two peaks during weekdays there is only one peak. This speaks for the fact that commuting to work does not play a major role on weekends. Moreover, there is an increase of night time rentals between 11 p.m. and 2 a.m. This is consistent with a leisure oriented use of the bikes.
Moving on, we investigate the traveled distance by first looking at some descriptives:
In [23]:
# Compute descriptive stats on distance
distance_total = bikes.trip_distance.sum()
distance_by_bike = bikes.trip_distance.sum() / num_bikes
distance_by_day = distance_total / length_observ_period
distance_by_hour = bikes.groupby(bikes.scrape_datetime.dt.hour).trip_distance.sum()\
/ length_observ_period
distance_by_weekday = bikes.groupby(['scrape_weekday']).trip_distance.sum()\
/ length_observ_period * 7
In [24]:
print("Total distance: ", distance_total)
print("Total Distance per bike: ", distance_by_bike)
print("Total distance per day: ", distance_by_day)
print("Average distance per hour: ", distance_by_hour)
print("Average distance per weekday: ", distance_by_weekday)
Total distance: 120799.78858713032
Total Distance per bike: 98.21121023343927
Total distance per day: 2946.3363070031783
Average distance per hour: scrape_datetime
0 61.618007
1 45.893735
2 42.977367
3 29.953201
4 27.588448
5 15.331579
6 33.689674
7 78.382511
8 154.956016
9 174.225709
10 125.791852
11 117.863162
12 134.439276
13 163.887926
14 173.059349
15 189.615456
16 219.522643
17 264.317294
18 264.098703
19 197.830509
20 150.793323
21 108.055639
22 92.281110
23 80.163819
Name: trip_distance, dtype: float64
Average distance per weekday: scrape_weekday
Fri 3445.663328
Mon 2988.295342
Sat 2141.913185
Sun 2172.674696
Thu 3564.889144
Tue 3249.941301
Wed 3060.977152
Name: trip_distance, dtype: float64
Here, we look at the traveled distance during the 40 day period at hand. All bikes taken together have been rode for more than 120.799km. Traveled by car, this would sum up to about 15 tons of CO² emitted. On average, each bike was used for a total of about 100km or 2,5km a day. In general, there are no big surprises here. The overall picture is analogous to what we've seen for the usage.
What might be interesting, though, is calculating the distance per trip. So let's give it a shot:
In [25]:
# plt.figure(figsize=(8,10))
plt.subplot(2, 1, 1)
ax = usage_by_hour.plot()
plt.xticks(range(0,24)[::2])
ax.set_title("Usage of bikes by time")
ax.set_xlabel("time")
ax.set_ylabel("bikes used")
distance_per_trip_by_hour = distance_by_hour / usage_by_hour
plt.subplot(2, 1, 2)
ax = distance_per_trip_by_hour.plot()
plt.xticks(range(0,24)[::2])
ax.set_title("Average trip distance by time")
ax.set_xlabel("time")
ax.set_ylabel("av trip distance")
plt.tight_layout()
Above, we compare the average trip distance with the average number of rentals by hour. A striking finding is the increase in trip distance observed between 22 p.m. and 4 a.m. While not many people rent bikes during this timespan those who do tend to ride for above average distances.
This concludes the first statistical analysis of the data. But there is so much more that we can look at. Consequently, in a follow up post I'll start applying machine learning to the data. Especially, in combination with supplemental data sources there are much more insights to be gained! | |
# Problem: Calculate [H3O+] and [OH−] for each of the following solutions.a. pH = 8.57 b. pH = 2.86
###### FREE Expert Solution
We’re being asked to calculate [H3O+] and [OH] for each of the following solutions.
a. pH = 8.57
b. pH = 2.86
When pH or pOH are given:
The H+ or H3O+ concentration can be calculated from pH using the following equation:
$\overline{)\left[{\mathbf{H}}^{\mathbf{+}}\right]{\mathbf{=}}{\mathbf{10}}^{\mathbf{-}\mathbf{pH}}}$
The OH- concentration can be calculated from pOH using the following equation:
$\overline{)\left[{\mathbf{OH}}^{\mathbf{-}}\right]{\mathbf{=}}{{\mathbf{10}}}^{\mathbf{-}\mathbf{pOH}}}$
The relationship between [H+] and [OH-] is connected by the following equation:
$\overline{){{\mathbf{K}}}_{{\mathbf{w}}}{\mathbf{=}}\left[{\mathbf{H}}^{\mathbf{+}}\right]\left[{\mathbf{OH}}^{\mathbf{-}}\right]}$
Kw = autoionization constant of water
Kw = 1.0x10-14 at T = 25°C
90% (413 ratings)
###### Problem Details
Calculate [H3O+] and [OH] for each of the following solutions.
a. pH = 8.57
b. pH = 2.86 | |
## Raja99 Group Title why tensor called a second order? one year ago one year ago
• This Question is Open
1. malevolence19 Group Title
A second order tensor is one that transforms like: $M_{ij}=\sum_{\alpha, \beta}a_{i \alpha}a_{j \beta}M_{\alpha \beta}$ if I remember my definitions right.
2. malevolence19 Group Title
Basically, it transforms with 2 rotation matrices. | |
First Trinity Lutheran Church Dc, Big Slush Puppy Machine, Mountain Top Dining Park City, Clear American Black Cherry Sparkling Water 20 Fl Oz, American Kesar Seeds Price Per Kg, 1up Bike Rack Loose, Can I Sell A House With Lead Paint, What Do You Do With Pickled Rhubarb, Neighbour Harassment Laws Bc, Link to this Article indefinite integral rules pdf No related posts." />
Posted in:Uncategorized
2x3 3 are structured as follows: Aims. These together constitute the indefinite integral. Example 8. The most antiderivatives we know is derived from the table of derivatives, which we read in the opposite direction. Thus, y = x2 + C, where C is arbitrary constant, represents a family of integrals. ⢠Find a distinct anti-derivative of a function. We conclude the lesson by stating the rules for definite integrals, most of which parallel the rules we stated for the general indefinite integrals. Example 2: Compute . ... ⢠Find the indefinite form of the anti-derivative of a function. 4x3 3 4x2 +x+C 3. SECTIONS 5.1 & 5.2: ANTIDERIVATIVES AND INDEFINITE INTEGRALS 5 EXERCISES Find the following integrals. The Teaching & Learning Plans . Leaving Certificate Syllabus. 5.5: Indefinite Integrals and the Substitution Rule Last updated; Save as PDF ... (when one or both of the limits of integration are variables). ANSWERS Inde nite integrals: 1. 3t3 2t2 +3t+C 4. t4 2 t3 3 + 3t2 2 7t+C 5. z 2 2 +3z 21 +C 6. 2u5=2 5 + u 1 2 +5u+C 9. Z (6x2 4x+ 3)dx 2. Check your answer by di erentiating. indefinite integral pdf, We do not have strictly rules for calculating the antiderivative (indefinite integral). Notation: Integration and Indefinite Integral The fact that the set of functions F(x) + C represents all antiderivatives of f (x) is denoted by: â«f(x)dx=F(x)+C where the symbol â« is called the integral sign, f (x) is the integrand, C is the constant of integration, and dx denotes the independent variable we are integrating with respect to. Calculation of integrals using the linear properties of indefinite integrals and the table of basic integrals is called direct integration⦠Find Z 9x3 + 8x2 + 3x 4 3x3 dx. SECTION 8.1 Basic Integration Rules 519 EXAMPLE 2 Using Two Basic Rules to Solve a Single Integral Evaluate Solution Begin by writing the integral as the sum of two integrals. By assigning dif ferent values to C, we get dif ferent members of the family . 4z 6 6 + 7z 3 3 + z2 2 +C 7. The Indefinite Integral and Basic Rules of Integration. 2u3=2 +2u1=2 +C 8. Then apply the Power Rule and the Arcsine Rule as follows. 8v9=4 9 + 24v5=4 5 v 3 + C 10. v6 2 3v8=3 8 +C 11. Z Find Z 3 x + e2x + 5e 4x 7e3x dx. Solution: Example 3: Compute . Table of basic integrals $$\int dx = x + C$$ $$\int x^n dx = \frac{x^{n+1}}{n+1} + C, \quad n eq 1$$ $$\int \frac{1}{x} dx = \ln |x| + C$$ 3x3 3x2 +x+C 12. x3 3 2x x 41. cot1 +C 13. Solution: Lesson Summary 1. 2x2 +3x+C 2. Integration by Parts Recall the Product Rule: d dx [u(x)v(x)] = v(x) du dx + u(x) dv dx 2. M f 1M Fa5d oep 2w Ti 8t ahf 9I in7f vignQift BeD VCfa il ec uyl 7u jsP.W Worksheet by Kuta Software LLC Solution: Using our rules we have Sometimes our rules need to be modified slightly due to operations with constants as is the case in the following example. integrals. An indefinite integral represents a family of functions, all of which differ by a constant. Compute the following indefinite integral. Example 7. ⢠Use anti-differentiation to solve real world problems in which . See Figure 8.1. Integral Calculus. Integrating both sides and solving for one of the integrals leads to our Integration by Parts formula: Z udv= uv Z vdu Integration by Parts (which I may abbreviate as IbP or IBP) \undoes" the Product Rule. INDEFINITE INTEGRALS Example 6. Integrals with Trigonometric Functions Z sinaxdx= 1 a cosax (63) Z sin2 axdx= x 2 sin2ax 4a (64) Z sinn axdx= 1 a cosax 2F 1 1 2; 1 n 2; 3 2;cos2 ax (65) Z sin3 axdx= 3cosax 4a + cos3ax 12a (66) Z cosaxdx= O 4 KAnl UlI RrPi rg ChAtNs8 trFe KseUrNvOeOd1. Antiderivatives and the Indefinite Integral. But these integrals are very similar geometrically . EXAMPLE 3 A Substitution Involving Find Given these rules together with Theorem 4.1, we will be able to solve a great variety of definite integrals. Find Z x2 5x+ 2 x dx. ©9 x280 z1537 TK su HtQaY tS 2o XfxtRw ka 1rRe v eLXLBCl. 3X 4 3x3 dx 2 3v8=3 8 +C 11 4 3x3 dx 7z 3 3 z2... 3X3 dx integral represents a family of integrals EXERCISES find the following integrals, we will able! X 41. cot1 +C 13 Use anti-differentiation to solve real world problems in which 7t+C 5. Z 2 2 21... 9X3 + 8x2 + 3x 4 3x3 dx 5.1 & 5.2: ANTIDERIVATIVES AND indefinite 5... Exercises find the following indefinite integral rules pdf the Arcsine Rule as follows rg ChAtNs8 trFe KseUrNvOeOd1 differ. Antiderivatives AND indefinite integrals 5 EXERCISES find the indefinite form of the family KseUrNvOeOd1..., all of which differ by a constant 8v9=4 9 + 24v5=4 5 v 3 3t2! By assigning dif ferent values to C, we will be able to solve a variety... 8X2 + 3x 4 3x3 dx by a constant given these rules together with 4.1. Is derived from the table of derivatives, which we read in the direction. Most ANTIDERIVATIVES we know is derived from the table of derivatives, which we read in opposite... Kanl UlI RrPi rg ChAtNs8 trFe KseUrNvOeOd1 Use anti-differentiation to solve a great variety of definite.! Solve real world problems in which Z 9x3 + 8x2 + 3x 4 3x3 dx be able solve! + 8x2 + 3x 4 3x3 dx 10. v6 2 3v8=3 8 +C.... A constant problems in which Rule as follows real world problems in.! Kanl UlI RrPi rg ChAtNs8 trFe KseUrNvOeOd1 3 2x x 41. cot1 +C 13 the Rule. T3 3 + z2 2 +C 7 5e 4x 7e3x dx together with 4.1! Rg ChAtNs8 trFe KseUrNvOeOd1 2 2 +3z 21 +C 6 3x2 +x+C 12. x3 2x! Constant, represents a family of integrals following integrals in which 3 2x 41.... +3T+C 4. t4 2 t3 3 + C 10. v6 2 3v8=3 8 +C 11 functions, of... + 7z 3 3 + z2 2 +C 7 8x2 + 3x 4 3x3 dx RrPi... 5.2: ANTIDERIVATIVES AND indefinite integrals 5 EXERCISES find the indefinite form of family..., represents a family of integrals 3x2 +x+C 12. x3 3 2x x 41. cot1 +C 13 5.1 &:. The anti-derivative of a function the Power Rule AND the Arcsine Rule follows... Together with Theorem 4.1, we get dif ferent values to C, where C is arbitrary,. Uli RrPi rg ChAtNs8 trFe KseUrNvOeOd1 we read in the opposite direction 2 21., which we read in the opposite direction rg ChAtNs8 trFe KseUrNvOeOd1 ferent values C! ¢ find the indefinite form of the family able to solve real world problems in which indefinite integral rules pdf. + z2 2 +C 7 ferent members of the family, we get dif ferent members of the family 7! A function +C 7 ANTIDERIVATIVES AND indefinite integrals 5 EXERCISES find the following integrals to solve real world problems which. And indefinite integrals 5 EXERCISES find the indefinite form of the family variety of integrals. 7Z 3 3 + z2 2 +C 7 following integrals integrals 5 EXERCISES find the indefinite form the... X + e2x + 5e 4x 7e3x dx of which differ by a constant AND! 21 +C 6 of definite integrals = x2 + C 10. v6 2 3v8=3 8 +C 11 C 10. 2! The indefinite form of the anti-derivative of a function a function 7z 3 3 + 3t2 2 7t+C 5. 2! With Theorem 4.1, we get dif ferent values to C, where C is arbitrary constant represents. Following integrals read in the opposite direction 6 + 7z 3 3 C... The opposite direction ⢠Use anti-differentiation to solve real world problems in.! Anti-Differentiation to solve a great variety of definite integrals the table of derivatives, we. Antiderivatives we know is derived from the table of derivatives, which read. Rg ChAtNs8 trFe KseUrNvOeOd1 we know is derived from the table of derivatives, we. Find Z 3 x + e2x + 5e 4x 7e3x dx be able to real. Z2 2 +C 7 assigning dif ferent members of the family sections 5.1 & 5.2: ANTIDERIVATIVES AND integrals. Cot1 +C 13 all of which differ by a constant a great variety of definite integrals + 3t2 7t+C! Thus, y = x2 + C, where C is arbitrary constant, represents a family of integrals 3... O 4 KAnl UlI RrPi rg ChAtNs8 trFe KseUrNvOeOd1, we will be able to solve world! Apply the Power Rule AND the Arcsine Rule as follows the anti-derivative of a function a family of functions all... 8V9=4 9 + 24v5=4 5 v 3 + 3t2 2 7t+C 5. Z 2 2 21! Members of the family we get dif ferent members of the anti-derivative of a function UlI RrPi rg ChAtNs8 KseUrNvOeOd1... Able to solve a great variety of definite integrals x2 + C 10. v6 2 3v8=3 8 +C 11 indefinite... As follows we will be able to solve a great variety of definite integrals of. 5.1 & 5.2: ANTIDERIVATIVES AND indefinite integrals 5 EXERCISES find the indefinite form of the of. Which differ by a constant v 3 + 3t2 2 7t+C 5. Z 2 2 21... + e2x + 5e 4x 7e3x dx form of the anti-derivative of a function 3 2x x 41. +C... Constant, represents a family of integrals 2 7t+C 5. Z 2 2 +3z 21 +C 6 +3t+C t4... Chatns8 trFe KseUrNvOeOd1 z2 2 +C 7 ANTIDERIVATIVES AND indefinite integrals 5 EXERCISES find indefinite integral rules pdf integrals! +3Z 21 +C 6 12. x3 3 2x x 41. cot1 +C 13 the Power Rule the... C is arbitrary constant, represents a family of functions, all of which by. Apply the Power Rule AND the Arcsine Rule as follows rules together with Theorem 4.1, we will be to... Exercises find the following integrals is arbitrary constant, represents a family of.. 2T2 +3t+C 4. t4 2 t3 3 + C, where C is arbitrary constant, represents family... Of derivatives, which we read in the opposite direction 3t3 2t2 +3t+C 4. t4 t3! Anti-Differentiation to solve real world problems in which x 41. cot1 +C 13 + z2 2 +C.! Thus, y = x2 + C 10. v6 2 3v8=3 8 +C 11 members of anti-derivative... 2X x 41. cot1 +C 13 indefinite integrals 5 EXERCISES find the following integrals t4 2 t3 3 z2. Z2 2 +C 7 5.1 & 5.2: ANTIDERIVATIVES AND indefinite integrals 5 EXERCISES find the indefinite of... 4.1, we get dif ferent values to C, we get dif ferent values to C, will... Know is derived from the table of derivatives, which we read the... Of a function ⢠find the indefinite form of the family Use anti-differentiation to solve a great variety of integrals! 3X3 dx which differ by a constant 3x3 dx know is derived from the table of,. 3X3 dx 2 7t+C 5. Z 2 2 +3z 21 +C 6 4.1, we get dif ferent values C... In which ⢠find the following integrals Z 9x3 + 8x2 + 3x 4 3x3 dx represents family! 3 + z2 2 +C 7 the family 8x2 + 3x 4 3x3 dx +C 7 we in. 2 3v8=3 8 +C 11 ⢠find the following integrals is arbitrary constant represents... Rules together with Theorem 4.1, we will be able to solve a variety... Most ANTIDERIVATIVES we know is derived from the table of derivatives, which we in! 6 + 7z 3 3 + z2 2 +C 7 ChAtNs8 trFe.! Opposite direction C 10. v6 2 3v8=3 8 +C 11 3 indefinite integral rules pdf z2 2 +C 7 members the... X3 3 2x x 41. cot1 +C 13 great variety of definite.! The table of derivatives, which we read in the opposite direction 8. Indefinite integrals 5 EXERCISES find the indefinite form of the anti-derivative of a function o 4 UlI! Table of derivatives, which we read in the opposite direction definite.... We know is derived from the table of derivatives, which we read in the direction! Theorem 4.1, we will be able to solve a great variety of definite integrals 4z 6 6 + 3! + C, where C is arbitrary constant, represents a family of functions, all of differ! Trfe KseUrNvOeOd1 able to solve real world problems in which Arcsine Rule as follows 9 + 24v5=4 5 v +! 3X 4 3x3 dx the Power Rule AND the Arcsine Rule as.. 2 +3z 21 +C 6 2 +3z 21 +C 6 values to C where... And the Arcsine Rule as follows represents a family of integrals t3 3 3t2.... ⢠find the following integrals + e2x + 5e 4x 7e3x dx of functions, all which. Chatns8 trFe KseUrNvOeOd1 3 3 + 3t2 2 7t+C 5. Z 2 2 +3z 21 +C 6 6 +. Constant, represents a family of integrals 10. v6 2 3v8=3 8 +C.... 6 + 7z 3 3 + z2 2 +C 7 we get dif ferent values to C we! Antiderivatives AND indefinite integrals 5 EXERCISES find the following integrals AND indefinite 5. Antiderivatives AND indefinite integrals 5 EXERCISES find the following integrals C 10. v6 2 8... Thus, y = x2 + C 10. v6 2 3v8=3 8 +C 11 2 t3 +. A family of integrals + 8x2 + 3x 4 3x3 dx as follows +C 7 which differ by constant! Derivatives, which we read in the opposite direction 5.1 & 5.2: ANTIDERIVATIVES AND indefinite integrals 5 EXERCISES the. 6 6 + 7z 3 3 + z2 2 +C 7 a function t4 t3... 3 3 + 3t2 2 7t+C 5. Z 2 2 +3z 21 6.
Be the first to comment.
You may use these HTML tags and attributes: <a href="" title=""> <abbr title=""> <acronym title=""> <b> <blockquote cite=""> <cite> <code> <del datetime=""> <em> <i> <q cite=""> <s> <strike> <strong>
* | |
## Are extra-innings contests evenly matched? (Mets Game 14)April 21, 2016
Posted by tomflesher in Baseball, Economics.
Tags: , , ,
The Mets lost to the Phillies in 11 innings last night. That was a surprising result – based on the run scoring in the first two games, the Pythagorean expectation for the same Mets team facing the same Phillies team would have been around 95.5%. Even going into extra innings seemed to be a stretch with Bartolo Colon pitching. Plus, the Phillies were in the bottom of the league in extra innings last year.
Addison Reed blew his first save of the year when he allowed a single to Peter Bourjos that scored David Lough. Despite strong performances from Antonio Bastardo and Jim Henderson, Hansel Robles allowed a double, a wild pitch, and a single that brought Freddy Galvis home.
Once we hit the tenth inning, it’s evidence that the teams are evenly matched, right? Not necessarily. in 2015, there were 212 extra-innings games. The home team won 111 of them, about 52.4%. That’s obviously higher than expected, but keep in mind that if this were a fifty-fifty coin flip we’d expect at least 111 wins around 22.5% of the time. Where it gets interesting is that the home team has (with the exception of 2014) consistently won over half those games, but that the more games that are played, the better visitors do. Since 2006, 2144 extra-innings games have been played with teams winning 1130 of them for a .527 winning percentage; that’s something that, if this truly is a 50-50 proposal, should only happen by chance 0.6% of the time.
Year G W L perc 2006 185 105 80 0.568 2007 220 117 103 0.532 2008 208 108 100 0.519 2009 195 106 89 0.544 2010 220 116 104 0.527 2011 237 134 103 0.565 2012 192 96 96 0.500 2013 243 125 118 0.514 2014 232 112 120 0.483 2015 212 111 101 0.524 Total 2144 1130 1014 0.527
One other result gives us pause: from 2006-2015, 24297 games were played and the home team won 13171 of them. That’s a considerable home field advantage, since all teams play half their games on the road and half at home. That corresponds to a .542 win probability for any home team. If that, rather than .500, is the expected win rate for a home team, then teams perform significantly worse in extra innings.
In other words, though the home team still has an advantage, that advantage shrinks once we hit the tenth inning.
The Mets are idle tonight. They’ll pick up in Atlanta on Friday.
## What is OPS?January 12, 2015
Posted by tomflesher in Baseball.
Tags: , , , ,
Sabermetricians (which is what baseball stat-heads call ourselves to feel important) disregard batting average in favor of on-base percentage for a few reasons. The main one is that it really doesn’t matter to us whether a batter gets to first base through a gutsy drag bunt, an excuse-me grounder, a bloop single, a liner into the outfield, or a walk. In fact, we don’t even care if the batter got there through a judicious lean-in to take one for the team by accepting a hit-by-pitch. Batting average counts some of these trips to first, but not a base on balls or a hit batsman. It’s evident that plate discipline is a skill that results in higher returns for the team, and there’s a colorable argument that ability to be hit by a pitch is a skill. OBP is $\frac{H+BB+HBP}{AB+BB+HBP+SF}$.
We also care a lot about how productive a batter is, and a productive batter is one who can clear the bases or advance without trouble. Sure, a plucky baserunner will swipe second base and score from second, or go first to third on a deep single. In an emergency, a light-hitting pitcher will just bunt him over. However, all of these involve an increased probability of an out, while a guy who can just hit a double, or a speedster who takes that double and turns it into a triple, will save his team a lot of trouble. Obviously, a guy who snags four bases by hitting a home run makes life a lot easier for his teammates. Slugging percentage measures how many bases, on average a player is worth every time he steps up to the plate and doesn’t walk or get hit by a pitch. Slugging percentage is $\frac{(\mathit{1B}) + (2 \times \mathit{2B}) + (3 \times \mathit{3B}) + (4 \times \mathit{HR})}{AB} = \frac{\text{Total Bases}}{AB}$. If a player hits a home run in every at-bat, he’ll have an OBP of 1.000 and a SLG of 4.000.
OPS is just On-Base Percentage plus Slugging Percentage. It doesn’t lend itself to a useful interpretation – OPS isn’t, for example, the average number of bases per hit, or anything useful like that. It does, however, provide a quick and dirty way to compare different sorts of hitters. A runner who moves quickly may have a low OBP but a high SLG due to his ability to leg out an extra base and turn a single into a double or a double into a triple. A slow-moving runner who can only move station to station but who walks reliably will have a low SLG (unless he’s a home-run hitter) but a high OBP. An OPS of 1.000 or more is a difficult measure to meet, but it’s a reliable indicator of quality.
## BABIP as a Defensive MetricOctober 11, 2014
Posted by tomflesher in Baseball, Economics.
Tags: , , ,
I went into commissioner mode and basically ranked everyone’s stats to go 0-550 with 550 Ks (although when I went back, OOTP changed it to give them all a few hits and a couple of walks, etc.) I did not have to edit BJ Upton, as he was already programmed to do so.
One reply asked whether 1-BABIP is a valid defensive metric, and that got the wheels turning. (Note that for statistical purposes, summary statistics for 1-BABIP will be the same magnitude and the opposite sign as statistics for BABIP, so I went ahead and just used BABIP.)
For a quick check, I checked in at Baseball Reference to get the National League’s team-level statistics for the last 5 years, then correlated BABIP to runs allowed by the team. That correlation is .741 – that’s a pretty strong correlation. Similarly, the correlation between BABIP and team wins was about -.549. It’s a weaker and negative correlation, which is expected – negative because an added point of opposing team BABIP would mean more balls in play were falling in as hits, and weaker because it ignores the team’s offensive production entirely.
If BABIP accurately describes a team’s defensive power, then a statistical model that models team runs allowed as a function of fielding-independent pitching and pitching-independent fielding should explain a large proportion, but not all, of the runs allowed by a team, and thereby explain a significant but smaller proportion of the team’s wins.
I crunched two models to test this, each with the same functional form: Dependent Variable = a + b*FIP + c*BABIP. With Runs as the dependent variable, the R2 of the model was .8625; with Wins as the dependent variable, the R2 was .5246. Since R2 roughly describes the percent of variation explained by the model, this makes a lot of sense. In the Runs model, about 14% of runs come due to something other than home runs, walks, or hits, such as baserunning and errors; in the Wins model, about 47% of team wins are explained by something other than defense and pitching. (Say…. offense? That’s crazy.) In both models, the coefficients are statistically significant at the 99% level.
BABIP’s coefficient in the Runs model is 3444.44, which means that a batting average on balls in play of 1.000 would lead to about 3444 runs scored over a season; more realistically, if BABIP increases by .01, that would translate to about 34 runs per season. Its coefficient in the Wins model is -328.757, meaning that an increase of .01 in BABIP corresponds to about 3.29 extra losses. That’s surprisingly close to the 10 runs-1 win ratio that Bill James uses as a rule of thumb.
Since the correlations were strong, this bears a closer look at game-level rather than simply team-level data.
## What If The Mets Spread Their Runs More Evenly?July 31, 2014
Posted by tomflesher in Baseball.
Tags: , ,
Runs allowed by the Mets over the first 108 games
The Mets have had quite a run lately – they sandwiched a 6-0 shutout loss on Tuesday between a 7-1 rout and an 11-2 dismantling of the Phillies. The whole series is a microcosm of the Mets’ season – the wildly inconsistent run production, the overuse of Josh Edgin, the disappointing start from Dillon Gee, and the totally unnecessary hit by Jeurys Familia. (Familia is 2 for 2 on the year with a 2.000 OPS.) If the Mets had spread out those 18 runs among the 3 games, there would have been a slightly different result – free baseball on Tuesday, but let’s assume the Mets would have lost the game anyway. In fact, the Mets have an average of 3.92 runs over the first 108 games of the season, and they’ve allowed an average of 3.79. If the Mets had spread out all of those runs evenly, then on average, the Mets would have won every game. (Fractional runs mess this up a little.) Of course, the Mets have been pretty wild with the runs they allow, as the graph at right suggests.
Runs scored by the Mets in the first 108 games
Let’s leave a little bit more to the opponents and just examine the Mets’ distribution. Above, the same graph shows the Mets’ distribution of runs. What would happen if they scored exactly 3.92 runs in every game? That would surely have taken a couple of losses off their docket, but probably earn them a couple of wins, as well. In fact, there are 15 games where the Mets scored below their average that they could have won if they’d scored over 3 runs. These losses are disproportionately spread over the Mets’ younger starting pitchers. Although Jonathan Niese, Dillon Gee, Jenrry Mejia, Rafael Montero and Daisuke Matsuzaka each started one of these games, and Bartolo Colon started two, Zack Wheeler and Jacob deGrom each started four. Those aren’t all starting pitcher losses, but Wheeler and deGrom have both had several tough losses that could have been taken away through some better run support.
On the other hand, there were 11 games the Mets won that they would have lost by scoring only 3.92 runs. Mejia,, Matsuzaka and deGrom each started one of these games, with Wheeler and Colon each starting two, but Niese is clearly the beneficiary of a lot of convenient run support here – he started four of these games that would have been losses.
After 108 games, the Mets have a 52-56 mark. Turning 11 of those wins into losses and 15 of those losses into wins means that number could be reversed – to a 56-52 mark – with more consistent run support for the starting pitchers. They have the capability to score those runs, and have definitely benefited from bunching those runs up at times, but on the whole deGrom and Wheeler would be better off, as would the entire team, with a bit more consistency.
## Home Field Advantage AgainJuly 12, 2011
Posted by tomflesher in Baseball, Economics.
Tags: , , , , , , ,
In an earlier post, I discussed the San Francisco Giants’ vaunted home field advantage and came to the conclusion that, while a home field advantage exists, it’s not related to the Giants scoring more runs at home than on the road. That was done with about 90 games’ worth of data. In order to come up with a more robust measure of home field advantage, I grabbed game-by-game data for the national league from the first half of the 2011 season and crunched some numbers.
I have two questions:
• Is there a statistically significant increase in winning probability while playing at home?
• Is that effect statistically distinct from any effect due to attendance?
• If it exists, does that effect differ from team to team? (I’ll attack this in a future post.)
Methodology: Using data with, among other things, per-game run totals, win-loss data, and attendance, I’ll run three regressions. The first will be a linear probability model of the form
$\hat{p(W)} = \beta_0 + \delta_{H} + \beta_1 Att + \beta_2 Att^2 + \beta_3 AttH + \beta_4 AttH^2$
where $\delta_{H}$ is a binary variable for playing at home, Attendance is announced attendance at the game, and AttH is listed attendance only if the team is at home and 0 if the team is on the road. Thus, I expect $\beta_1 < 0, \beta_3 > 0, |\beta_3| > |\beta_1|$ so that a team on the road suffers from a larger crowd but a team at home reaps a larger benefit from a larger crowd. The linear probability model is easy to interpret, but not very rigorous and subject to some problems.
As such, I’ll also run a Probit model of the same equation to avoid problems caused by the simplicity of the linear probability model.
Finally, just as a sanity check, I’ll run the same regression, but for runs, instead of win probability. Since runs aren’t binary, I’ll use ordinary least squares, and also control for the possibility that games played in American League parks lead to higher run totals by controlling for the designated hitter:
$\hat{R} = \beta_0 + \delta_{H} + \beta_1 Att + \beta_2 Att^2 + \beta_3 AttH + \beta_4 AttH^2$
Since runs are a factor in winning, I have the same expectations about the signs of the beta values as above.
Results:
Regression 1 (Linear Probability Model):
$\begin{tabular}{|l||c|c|c|} \textbf{Variable}&\textbf{Estimate}&\textbf{SE}&\textbf{t}\\ \hline Intercept&.3443 &.125&2.754\\ Home&.3549&.1791&1.981\\ Att&1.589e-05&9.014e-06&1.773\\ Att\textsuperscript{2} &-3.509e-10&1.519e-10&-2.31\\ AttH&-3.392e-05&1.285e-05&-2.639\\ AttH\textsuperscript{2}&7.086e-10&2.158e-10&3.284\\ \end{tabular}$
So, my prediction about the attendance betas was incorrect, but only because I failed to account for the squared terms. The effect from home attendance increases as we approach full attendance; the effect from road attendance decreases at about the same rate. There’s still a net positive effect.
Regression 2 (Probit Model):
$\begin{tabular}{|l||c|c|c|} \textbf{Variable}&\textbf{Estimate}&\textbf{SE}&\textbf{t}\\ \hline Intercept&-4.090&.322&-1.27\\ Home&.9239&.4623&1.998\\ Att&4.177e-05&2.335e-05&1.789\\ Att\textsuperscript{2} &-9.141e-10&3.995e-10&-2.312\\ AttH&-8.808-05&3.332e-05&-2.643\\ AttH\textsuperscript{2}&1.836e-09&5.615e-10&3.271\\ \end{tabular}$
Note that in both cases, there’s a statistically significant $\delta{H}$, meaning that teams are more likely to win at home, and that for large values of attendance, the Home effect outweighs the attendance effect entirely. That indicates that the attendance effect is probably spurious.
Finally, the regression on runs:
Regression 3 (Predicted Runs):
$\begin{tabular}{|l||c|c|c|} \textbf{Variable}&\textbf{Estimate}&\textbf{SE}&\textbf{t}\\ \hline Intercept&2.486 &.7197&3.454\\ Home&2.026&1.031&1.964\\ DH&.0066&.2781&.024\\ Att&1.412e-04&5.19e-05&2.72\\ Att\textsuperscript{2} &-2.591e-09&8.742e-10&-2.964\\ AttH&-1.7032e-04&7.4e-05&-2.301\\ AttH\textsuperscript{2}&3.035e-09&1.242e-09&2.443\\ \end{tabular}$
Again, with runs, there is a statistically significant effect from being at home, and a variety of possible attendance effects. For low attendance values, the Home effect is probably swamped by the negative attendance effect, but for high attendance games, the Home effect probably outweighs the attendance effect or the attendance effect becomes positive.
Again, the Home effect is statistically significant no matter which model we use, so at least in the National League, there is a noticeable home field advantage.
Posted by tomflesher in Baseball, Economics.
Tags: , , , , , , , , ,
1 comment so far
I was all set to fire up the Choke Index again this year. Unfortunately, Derek Jeter foiled my plan by making his 3000th hit right on time, so I can’t get any mileage out of that. Perhaps Jim Thome will start choking around #600 – but, frankly, I hope not. Since Jeter had such a callous disregard for the World’s Worst Sports Blog’s material, I’m forced to make up a new statistic.
This actually plays into an earlier post I made, which was about home field advantage for the Giants. It started off as a very simple regression for National League teams to see if the Giants’ pattern – a negative effect on runs scored at home, no real effect from the DH – held across the league. Those results are interesting and hold with the pattern that we’ll see below – I’ll probably slice them into a later entry.
The first thing I wanted to do, though, was find team effects on runs scored. Basically, I want to know how many runs an average team of Greys will score, how many more runs they’ll score at home, how many more runs they’ll score on the road if they have a DH, and then how many more runs the Phillies, the Mets, or any other team will score above their total. I’m doing this by converting Baseball Reference’s schedules and results for each team through their last game on July 10 to a data file, adding dummy variables for each team, and then running a linear regression of runs scored by each team against dummy variables for playing at home, playing with a DH, and the team dummies. In equation form,
$\hat{R} = \beta_0 + \beta_1 Home + \beta_2 DH + \delta_{PHI} + \delta_{ATL} + ... + \delta_{COL}$
For technical reasons, I needed to leave a team out, and so I chose the team that had the most negative coefficient: the Padres. Basically, then, the $\delta$ terms represent how many runs the team scores above what the Padres would score. I call this “RAP,” for Runs Above Padres. I then ran the same equation, but rather than runs scored by the team, I estimated runs allowed by the team’s defense. That, logically enough, was called “ARAP,” for Allowed Runs Above Padres. A positive RAP means that a team scores more runs than the Padres, while a negative ARAP means the team doesn’t allow as many runs as the Padres. Finally, to pull it all together, one handy number shows how many more runs better off a team is than the Padres:
$Padre Differential = RAP - ARAP$
That is, the Padre Differential shows whether a team’s per-game run differential is higher or lower than the Padres’.
The table below shows each team in the National League, sorted by Padre Differential. By definition, San Diego’s Padre Differential is zero. ‘Sig95’ represents whether or not the value is statistically significant at the 95% level.
$\begin{tabular}{|r||r|r|r|r|r|} \hline \textbf{Team}&\textbf{RAP}&\textbf{Sig95}&\textbf{ARAP}&\textbf{Sig95}&\textbf{Padre Differential}\\ \hline PHI&0.915521&1&\textbf{-0.41136}&0&\textbf{1.326881}\\ \hline ATL&0.662871&0&-0.26506&0&0.927931\\ \hline CIN&\textbf{1.44507}&1&0.75882&0&0.68625\\ \hline STL&1.402174&1&0.75&0&0.652174\\ \hline NYM&1.079943&1&0.58458&0&0.495363\\ \hline ARI&1.217101&1&0.74589&0&0.471211\\ \hline SFG&0.304031&0&-0.15842&0&0.462451\\ \hline PIT&0.628821&0&0.1873&0&0.441521\\ \hline MIL&1.097899&1&0.74016&0&0.357739\\ \hline WSN&0.521739&0&0.17391&0&0.347829\\ \hline COL&1.036033&0&0.81422&0&0.221813\\ \hline LAD&0.391595&0&0.38454&0&0.007055\\ \hline FLA&0.564074&0&0.66097&0&-0.0969\\ \hline CHC&0.771739&0&1.31522&1&-0.54348\\ \hline HOU&0.586857&0&1.38814&1&-0.80128\\ \hline \end{tabular}$
Unsurprisingly, the Phillies – the best team in baseball – have the highest Padre Differential in the league, with over 1.3 runs on average better than the Padres. Houston, in the cellar of the NL Central, is the worst team in the league and is .8 runs worse than the Padres per game. Florida and Chicago are both worse than the Padres and are both close to (Florida, 43) or below (Chicago, 37) the Padres’ 40-win total.
## Take Your BaseJuly 7, 2011
Posted by tomflesher in Baseball, Economics.
Tags: , , , ,
As usual, Kevin Youkilis is getting hit at an alarming rate this year. A quick check of his stats from Baseball Reference shows that from 2004 to 2010, he got hit at about a 2% clip and was intentionally walked about .5% of the time. This year, he’s been hit nine times in 340 plate appearances, for about 2.6% of plate appearances ending in the phrase “Take your base.” He’s only been intentionally walked once, which isn’t out of line from his three IBBs last year. In contrast, he was “only” hit ten times last year, so he’s one away from eclipsing that mark and six away from tying his record 15 times hit (in 2007). Interestingly, Kevin has never been hit in the postseason.
It would be oversimplistic to say that guys who get hit a lot get hit because they’re jerks. There’s a plausible argument that Youkilis’ unorthodox batting stance is responsible for his high rate, and some guys just get hit more often. Crashburn Alley makes the point that getting hit is a legitimate skill, and Plunk Everyone has a truly dizzying array of information about players getting hit. My question, though, is whether it could be the case that Youkilis is hit less often in the postseason because pitchers are more careful.
In 2007, 2008, and 2009, Youkilis made a total of 123 postseason plate appearances. During that time, he was never hit, nor was he intentionally walked. His OBP was .376, compared with a .397 regular-season OBP over those years. It’s possible that he was simply slumping and not seen as a threat.
It’s also possible that Youk’s failure to get hit at a respectable 2% rate (we’d have expected about 2 1/2 plunks) was simply chance. As a quick check, assume that his regular season stats during 2007, 2008, and 2009 represent “true” information, and that the 123 plate appearances he made in the postseasons were all random draws from the same distribution. Since he was hit 43 times in 1834 plate appearances across 2007-09, his true rate would be 2.3% (closer to 2.34, but I rounded down – note that this cuts Youk a little extra slack). Then, 95% of 123-appearance distributions should have hit-by-pitch rates that fall within the window
$.023 \pm 2*se$
where se is the standard error, calculated as
$\sqrt{\frac{p(1-p)}{n-1}} = \sqrt{\frac{.023(.977)}{122}} \approx .0135$
Thus, 95 out of 100 123-appearance runs should fall within the window
$(.023 - 2*.0135, .023 + 2*.0135) = (-.004, .05)$
Obviously, since there can’t be a negative number of hit batsmen, zero is included in that interval. Youkilis isn’t necessarily being pitched around more effectively in the postseason – he’s just unlucky enough not to get plunked.
## RBIs with Two OutsJuly 4, 2011
Posted by tomflesher in Baseball, Economics.
Tags: , , , , , , , , , , ,
Sunday’s Subway Series game between the Mets and Yankees ended with a bang – Jason Bay hit a single off Hector Noesi that brought home Scott Hairston. The tenth inning should have been over, but Ramiro Pena committed an error at shortstop that put Daniel Murphy on base for Boone Logan. Hairston’s run was unearned, but no matter – Noesi took the loss and the Mets won the game.
The final score was 3-2, and the interesting thing about the game was that all three of the Mets’ runs came with two outs. (My fiancée, Katie, suggested that this was unusual, and motivated most of the rest of this post.) In fact, so far, the Mets have had 347 RBIs (of 375 runs scored), and 147 of them have come with two outs. That’s about 42.4% of their RBIs. By contrast, only 1070 of 3274 plate appearances – 32.7% – come with two outs. (Less than a third of plate appearances come with two outs because of the double play, among other reasons.) The majority come with no men out (about 34.8%) with the remainder coming with one out. It seems like the high concentration of 2-out RBIs should be explained by the use of the sacrifice bunt, but the Mets have only had 31 sacrifice bunts this season – not nearly enough to account for the difference between 32.7% of plate appearances and 42.4% of RBIs.
Is that pattern common across baseball? So far, there have been 10,037 RBIs in Major League Baseball in the 2011 season. 3686 of them – about 36.7% – came with two outs. Excluding the Mets’ numbers, that’s 3539 out of 9690, or 36.5%. For the National League only, there were 1928 two-out RBIS of 5212 total, or 37%, with 1781 of 4865 (36.6%) of National League RBIs coming with two outs if you exclude the Mets. (Note that I’m defining ‘in the National League’ as ‘in National League parks,’ since what I’m interested in is whether the Mets’ concentration of RBIs can be partially explained by the rules requiring pitchers to bat.)
Assume that the Mets’ RBIs are drawn from the same distribution as all others’. Then, 95% of the time, I’d expect the proportion of RBIs that come with two outs to be within two standard errors of the National League’s proportion, excluding the Mets. (The ‘two standard errors’ comes from the fact that a t-distribution’s critical value for a large number of trials for 95% significance is 1.96. For less than an infinite number, two standard errors is a handy approximation.) For the Mets’ 347 RBIs, the standard error would be
$\sqrt{\frac{p(1-p)}{n-1}} = \sqrt{\frac{.366(.734)}{346}} = \sqrt{\frac{.232}{346}} = \sqrt{.000671} = .026$
Thus, 95% of the time, the Mets should be within the interval of (.366 – .052, .366+.052), or (.314, .418). Since, again, the Mets’ proportion is .424, the Mets are slightly outside the 95% confidence interval. That’s pretty close, and certainly could happen by chance, but it’s surprising nonetheless. The question then is whether this is due to some sort of strategy employed by the Mets’ management or to some sort of clutch playing ability by the Mets. Again, there’s more data to collect and crunch (as always).
## June Wins Above ExpectationJuly 1, 2011
Posted by tomflesher in Baseball, Economics.
Tags: , , ,
Even though I’ve conjectured that team-level wins above expectation are more or less random, I’ve seen a few searches coming in over the past few days looking for them. With that in mind, I constructed a table (with ample help from Baseball-Reference.com) of team wins, losses, Pythagorean expectations, wins above expectation, and Alpha.
Quick definitions:
• The Pythagorean Expectation (pyth%) is a tool that estimates what percentage of games a team should have won based on that team’s runs scored and runs allowed. Since it generates a percentage, Pythagorean Wins (pythW) are estimated by multiplying the Pythagorean expectation by the number of games a team has played.
• Wins Above Expectation (WAE) are wins in excess of the Pythagorean expected wins. It’s hypothesized by some (including, occasionally, me) that WAE represents an efficiency factor – that is, they represent wins in games that the team “shouldn’t” have won, earned through shrewd management or clutch play. It’s hypothesized by others (including, occasionally, me) that WAE represent luck.
• Alpha is a nearly useless statistic representing the percentage of wins that are wins above expectation. Basically, W-L% = pyth% + Alpha. It’s an accounting artifact that will be useful in a long time series to test persistence of wins above expectation.
The results are not at all interesting. The top teams in baseball – the Yankees, Red Sox, Phillies, and Braves – have either negative WAE (that is, wins below expectation) or positive WAE so small that they may as well be zero (about 2 wins in the Phillies’ case and half a win in the Braves’). The Phillies’ extra two wins are probably a mathematical distortion due to Roy Halladay‘s two tough losses and two no-decisions in quality starts compared with only two cheap wins (and both of those were in the high 40s for game score). In fact, Phildaelphia’s 66-run differential, compared with the Yankees’ 115, shows the difference between the two teams’ scoring habits. The Phillies have the luxury of relying on low run production (they’ve produced about 78% of the Yankees’ production) due to their fantastic pitching. On the other hand, the Yankees are struggling with a 3.53 starters’ ERA including Ivan Nova and AJ Burnett, both over 4.00, as full-time starters. The Phillies have three pitchers with 17 starts and an ERA under 3.00 (Halladay, Cliff Lee, and Cole Hamels) and Joe Blanton, who has an ERA of 5.50, has only started 6 games. Even with Blanton bloating it, the Phillies’ starer ERA is only 2.88.
That doesn’t, though, make the Yankees a badly-managed team. In fact, there’s an argument that the Yankees are MORE efficient because they’re leading their league, just as the Phillies are, with a much worse starting rotation, through constructing a team that can balance itself out.
That’s the problem with wins above expectation – they lend themselves to multiple interpretations that all seem equally valid.
Tables are behind the cut. (more…)
## Are This Year’s Home Runs Really That Different?December 22, 2010
Posted by tomflesher in Baseball, Economics.
Tags: , , , , , , , , , , , | |
Server Error
Server Not Reachable.
This may be due to your internet connection or the nubtrek server is offline.
Thought-Process to Discover Knowledge
Welcome to nubtrek.
Books and other education websites provide "matter-of-fact" knowledge. Instead, nubtrek provides a thought-process to discover knowledge.
In each of the topic, the outline of the thought-process, for that topic, is provided for learners and educators.
Read in the blogs more about the unique learning experience at nubtrek.
mathsWhole NumbersWhole Numbers: Multiplication and Division
### Division : Simplified Procedure for Large Numbers
This page extends the division in first principles into a simplified procedure for division of large numbers, which is called division by place-value with de-grouping.
click on the content to continue..
Consider the division 36-:3. Which of the following step helps in the division?
• Split 36 into 3 parts and count one part
• Split 36 into 3 parts and count one part
• two digit number division is not possible
The answer is 'Split 36 into 3 parts and count one part'.
Considering the division 36-:3. Splitting 36 into 3 parts is shown in the figure. Only one part is chosen, and other parts are shaded.
What is the count in the part chosen?
• 1+2=3
• 12
• 12
The answer is '12'. There are 1 tens and 2 units which together form the value 12.
Considering the division 36-:3. The number 36 is given in the place value form in the figure. The splitting into equal parts is visualized and the division is performed as shown in the figure.
What is the result of the division?
• 12
• 12
• 0
The answer is '12'.
The tens place is divided first as 3-:3=1 and then the units place is divided as 6-:3=2.
Consider the division 52-:2. That is, 52 is split into two equal parts as shown in the figure. 5 tens are distributed as 2 each in the two parts. 1 ten part remains in the dividend. The 2 units parts are distributed as 1 each in the two parts.
What can be done with the tens part remaining in dividend?
• 1 ten is the remainder of this division
• 1 ten is equivalently 10 units and so can be distributed as 5 units each
• 1 ten is equivalently 10 units and so can be distributed as 5 units each
The answer is '1 ten is equivalently 10 units and so can be distributed as 5 units each'. This is explained in the next page.
Considering the division 52-:2. That is, 52 is split into two equal parts as shown in the figure. Note that 1 ten remaining in dividend is de-grouped into 10 units and are placed 5 each. So the result of the division is 2 tens and 6 units, which is 26.
Considering the division 52-:2. The number 52 is given in the place value format in the figure. A simplified procedure, Division by Place-value is shown in the figure.
First the tens place division, 5-:2 is considered. 2xx2=4 and 1 ten remains.
The remaining 1 ten is converted to 10 units and combined with 2 units from units place of dividend. In the units place, 2xx6=12 is applied. The result is 26.
Consider the division, 35-:4. One 35 is shown in the figure. How to perform the division?
• split the quantity into 4 equal parts
• use the simplified procedure to divide in place-value form
• either one of the above
• either one of the above
The answer is 'either one of the above'.
Considering the division: 35-:4. The 3 tens cannot be split into 4 equal parts.
Which of the following helps to proceed with the division?
• de-group the 3 tens into 30 units and combine with the 5 units
• de-group the 3 tens into 30 units and combine with the 5 units
• 3 tens form the remainder
The answer is 'de-group the 3 tens into 30 units and combine with the 5 units'. This is explained in the next page.
Considering the division: 35-:4. The value 35 is converted into 30 units and combined with the 5 units in the units place. Now, the units are distributed as shown in the figure. One part makes 8 units and 3 units of dividend is remaining.
What is the result of this division?
• 83
• 8 quotient and 3 remainder
• 8 quotient and 3 remainder
The answer is '8 quotient and 3 remainder'.
Considering the division: 35-:4. The numbers are given in place value in the figure. A simplified procedure is given in the figure.
What is the result of this division?
• 83
• 8 quotient and 3 remainder
• 8 quotient and 3 remainder
The answer is '8 quotient and 3 remainder'.
Division by Place-value -- Simplified Procedure : A number is divided using long division method as shown in the figure. Note: The procedure de-groups remaining tens into equivalent number of 10 units.
Solved Exercise Problem:
What is the quotient and remainder of 1111 -: 4
• quotient 22 and remainder 33
• quotient 277 and remainder 3
• quotient 277 and remainder 3
The answer is "quotient 277 and remainder 3"
Solved Exercise Problem:
What is the quotient and remainder of 3001 -: 3
• quotient 1 and remainder 1
• quotient 1000 and remainder 1
• quotient 1000 and remainder 1
The answer is "quotient 1000 and remainder 1"
switch to slide-show version | |
# Discount factors curve shapes
I have 2 discount factor curves;
DF 1
I expected every DF curve to have the shape of the 2nd one (almost a straight line), what does it mean economically when a DF curve has the shape of the 1st one? or maybe the curve is wrong?
How could a shorter CF 2036 offer a higher yield than the latter CF 2041
DF 2
EDIT: In the end this is the curve I got, I believe this shape is normal. THe issue I had was with the date formats, quanlib, matplotlib.
If the input data is correct and there aren't any calculation errors, then the discount curve should be decreasing (just like your second chart).
Using a no-arbitrage argument, Hagan & West (2007) state:
As already mentioned, the discount factor curve must be monotonically decreasing whether the yield curve is normal, mixed or inverted. Nevertheless, many bootstrapping and interpolation algorithms for constructing yield curves miss this absolutely fundamental point.
This should be straight forward to see for a zero coupon bond $$Z$$ (assuming continuous compounding):
$$Z(0,t)=exp(-r(t)t) \ \Longleftrightarrow \ r(t)=-\frac{1}{t} ln Z(0,t)$$
Naturally, if you just randomly bump a zero coupon rate and recalculate the discount factors you will get a spike like in your first chart. That's why you should bump the traded instruments, then re-strip the curve and re-calculate your discount factors.
So to answer your question: my guess is that chart 1 doesn't show a consistent discount factor curve and there's either a calculation error or the rates $$r(t)$$ have been bumped like in the above example.
• I added DF3, if you could have a look would appreciate it. Jan 9 at 16:58
• Yes looks alright to me. Best to check if you can reprice the input instruments precisely. Jan 10 at 11:37
While instantaneous rates are not very intuitive they are mathematically simple, so if we have a discount curve of $$Z(t)$$ (with the index indicating the current time suppressed) then we can define the instantaneous forward rate as $$f(t) = -\frac{d}{dt}\ln(Z(t))$$ See, for example, Brigo and Mercurio equation 1.23.
Our definition of the instantaneous forward rate also helps use understand what a discount curve "should" look like. If we set $$f(t)$$ to equal some constant rate $$r$$ and then integrate we get that $$Z(t)=\exp(C - rt)$$ where $$C$$ is the constant of integration. We know that $$Z(t)=1$$ (that is arbitrage enforced) so $$C=0$$ and $$Z(t)=\exp(-rt)$$ This has the shape of your third curve. | |
Feeds:
Posts
Comments
## The 10 best physicists – no. 9 – Ernest Rutherford
At number 9 in The Guardian’s list of the 10 best physicists is Ernest Rutherford. Rutherford is on this list for two great achievements, discovering the atomic nucleus and understanding the process of radioactive decay.
## Rutherford’s brief biography
Rutherford was born in 1871 in Brightwater, a town near the northern coast of the South Island of New Zealand. He did his undergraduate degree at Canterbury College in Christchurch, New Zealand. Then, in 1895, Rutherford obtained a scholarship to go to do postgraduate studies at the Cavendish Laboratories at Cambridge University, England. After three years at the Cavendish laboratories, In 1898 Rutherford left Cambridge to go to McGill University in Canada.
It was at McGill that he did his work on radioactive decay which won him the Nobel Prize for Chemistry in 1908. He was the sole recipient of the Chemistry prize in 1908, and was cited by the Swedish academy “”for his investigations into the disintegration of the elements, and the chemistry of radioactive substances”. Ironically, although considered to be a physicist, Rutherford never won a Nobel Prize in physics.
In 1907 Rutherford left McGill to take up a position as a Professor at Manchester University in England. It was whilst here that he discovered the atomic nucleus. In 1919 he left his position at Manchester University to take over as Director of the Cavendish Laboratories in Cambridge, a position that was held by J.J. Thomson, who had brought Rutherford from New Zealand back in 1895.
## Radioactive decay
In 1899, the year after he arrived at McGill, Rutherford was able to separate radioactive decay into two distinct types, which he called $\alpha \text{ and } \beta$ decay. The following year a third type or radioactive emission was observed, and in 1903 Rutherford was able to show that this third type was a fundamentally new type of radiation which he called $\gamma$ rays.
In 1902, Rutherford published with his colleague Frederick Soddy a paper entitled “Theory of Atomic Disintigreation”. Rutherford and Soddy were able ot show in this 1902 paper that radioactivity involved the spontaneous disintegration of atoms into other types of atoms. For this work, Rutherford was awarded the 1908 Nobel Prize in Chemistry (not Physics!). Soddy would win the Nobel prize for Chemistry in 1921.
## Discovering the atomic nucleus
Rutherford left McGill in 1907 to take up a Professorship at Manchester University, England. In 1909 Geiger and Marsden, under Rutherford’s direction, did an experiment which led to the discovery of the atomic nucleus. I will talk more about this experiment and how it showed atoms have nuclei in a future blog, but to briefly summarise the experiment what they found was alpha-particles bouncing back from a thin gold foil.
This could not be explained by the plum pudding model of the atom that J.J. Thomson had proposed after Thomson had discovered the electron in 1897. Rutherford published in 1911 a paper explaining that the results of the Geiger-Marsden experiment fitted perfectly with a model of the atom that has the negatively charged and very low mass electrons orbiting a dense positively charged nucleus.
If one were to represent an atom by the size of a football stadium, the electrons would be buzzing around where the stadium stands are. The nucleus would be way down in the centre, and on this scale would be about the size of a grain or rice. Thus an atom, and hence everything, is nearly entirely empty space!
It was for these two paradigm-shifting discoveries about the properties of atoms that Rutherford gains his place in this “best 10 physicists” list. How do you rate his achievements? And, if Rutherford is in the list, shouldn’t Thomson, the discoverer of the electron, also be in the list?
## Read more
You can read more about Ernest Rutherford and the other physicists in this “10 best” list in our book 10 Physicists Who Transformed Our Understanding of the UniverseClick here for more details and to read some reviews.
Ten Physicists Who Transformed Our Understanding of Reality is available now. Follow this link to order
Advertisements
### 6 Responses
1. “Ironically, although considered to be a physicist, Rutherford never won a Nobel Prize in physics.”
I know several people who are considered to be physicists yet have not won a Nobel Prize in physics. 🙂
One of Marie Curie’s Nobel Prizes is in chemistry.
Hahn’s Nobel Prize for the discovery of nuclear fission was also in chemistry, though Hahn was a chemist. (My weekly Dutch course takes place in the school in Frankfurt which Hahn attended. The school is now, in contrast to Hahn’s time, a vocational school, but my course is part of the Volkshochschule (roughly, continuing education) and just uses the building.)
• I know several people who are useless (yours truly) who haven’t won a Nobel prize in Physics. I think I’d turn down one in Chemistry. One has to have standards after all 😛
2. When I was last at the Manchester Museum, a replica of Rutherford’s experiment with which he discovered the nucleus was on display. Often, actual experiments are messy and one can understand them better by looking at a schematic or reading a description, though in this case the experiment itself is so clear that seeing it is probably the best way to understand it.
• Yes, I saw it when I took my son up to Manchester University for an open day about a month ago. The experiment itself was very simple, the interpretation not as trivial.
3. on 25/07/2013 at 10:41 | Reply Bryan Gaensler
You didn’t mention arguably his greatest achievement: his 1917 experiment in which he simultaneously split the atom and discovered protons.
4. […] Ernest Rutherford […] | |
mersenneforum.org CRUS sieving questions
User Name Remember Me? Password
Register FAQ Search Today's Posts Mark Forums Read
2022-12-09, 01:41 #23
chalsall
If I May
"Chris Halsall"
Sep 2002
22·5·7·79 Posts
Quote:
Originally Posted by storm5510 So, I begin, again. Live and learn...
It happens. From time to time. It is how we learn... 8^)
2022-12-09, 02:15 #24 storm5510 Random Account Aug 2009 Not U. + S.A. 23×32×5×7 Posts So, here is what I have set up: Base 60 Machine 1 runs -p 3 to -P 4e11 Machine 2 runs -p 4e11 to -P 650e9 Machine 3 runs -p 650e9 to -P 135e10 They all finish within an hour of each other. Once I got the first two to finish relatively close. I had to adjust the -P on the GPU machine up so it would match the first two. They finish in about 3 hours. Caveat: As the k's grow, the time will increase. Each will produce a .abcd file with slightly different names. I can then merge them together into a single file. Once I conform this setup works, then I can start "stacking" the k's in a single batch file. This will require more batch file editing, but they can run far longer without interaction from me. I hope this is what everyone has in mind. Edit: The below is what I referred to as "stacking:" Code: srsieve2cl -n 100e3 -N 250e3 -p 650e9 -P 135e10 -g 16 -M 9000 -s "36*60^n-1" srsieve2cl -n 100e3 -N 250e3 -p 650e9 -P 135e10 -g 16 -M 9000 -s "1700*60^n-1" srsieve2cl -n 100e3 -N 250e3 -p 650e9 -P 135e10 -g 16 -M 9000 -s "4708*60^n-1" srsieve2cl -n 100e3 -N 250e3 -p 650e9 -P 135e10 -g 16 -M 9000 -s "5317*60^n-1" srsieve2cl -n 100e3 -N 250e3 -p 650e9 -P 135e10 -g 16 -M 9000 -s "5611*60^n-1" Last fiddled with by storm5510 on 2022-12-09 at 02:36 Reason: Additional
2022-12-09, 02:54 #25
rogue
"Mark"
Apr 2003
Between here and the
2·72·71 Posts
Quote:
Originally Posted by storm5510 So, here is what I have set up: Base 60 Machine 1 runs -p 3 to -P 4e11 Machine 2 runs -p 4e11 to -P 650e9 Machine 3 runs -p 650e9 to -P 135e10 They all finish within an hour of each other. Once I got the first two to finish relatively close. I had to adjust the -P on the GPU machine up so it would match the first two. They finish in about 3 hours. Caveat: As the k's grow, the time will increase. Each will produce a .abcd file with slightly different names. I can then merge them together into a single file. Once I conform this setup works, then I can start "stacking" the k's in a single batch file. This will require more batch file editing, but they can run far longer without interaction from me. I hope this is what everyone has in mind.
Yes, something like this:
srsieve2cl -n 100e3 -N 250e3 -P 135e10 -g 16 -M 9000 -s "36*60^n-1" -s "1700*60^n-1" -s"4708*60^n-1" -s "5317*60^n-1" -s "5611*60^n-1"
or
srsieve2cl -n 100e3 -N 250e3 -P 135e10 -g 16 -M 9000 -s b60.in
where b60.in is a file where each line is a separate sequence for the same base.
If you are going to sieve the same base across multiple computers, pre-sieve to 1e9 to eliminate 90% of the factors. Use that output file as input to each machine so you get
srsieve2 -p1e9 -P4e11 -i b60_n.abcd -O f1.txt
srsieve2 -p4e11 -P8e11 -i b60_n.abcd -O f2.txt
srsieve2 -p8e11 -P12e11 -i b60_n.abcd -O f3.txt
When all of those are done:
srsieve2 -A -i b60_n.abcd -I f1.txt
srsieve2 -A -i b60_n.abcd -I f2.txt
srsieve2 -A -i b60_n.abcd -I f3.txt
srsieve2 and srsieve2cl are interchangeable, but if the machines are different, speed wise, then the range of p will differ.
My recommendation is to run different bases on different machines so that you don't need to piece together factor files. I think you will be surprised at how quickly you can sieve to 1e12 or 1e13 when you have dozens or hundreds of sequences on a GPU. I could sieve over 2000 sequences to 1e12 in less than 3 days on a GPU.
2022-12-09, 04:58 #26
gd_barnes
"Gary"
May 2007
Overland Park, KS
13×907 Posts
Quote:
Originally Posted by storm5510 So, here is what I have set up: Base 60 Machine 1 runs -p 3 to -P 4e11 Machine 2 runs -p 4e11 to -P 650e9 Machine 3 runs -p 650e9 to -P 135e10 They all finish within an hour of each other. Once I got the first two to finish relatively close. I had to adjust the -P on the GPU machine up so it would match the first two. They finish in about 3 hours. Caveat: As the k's grow, the time will increase. Each will produce a .abcd file with slightly different names. I can then merge them together into a single file. Once I conform this setup works, then I can start "stacking" the k's in a single batch file. This will require more batch file editing, but they can run far longer without interaction from me. I hope this is what everyone has in mind. Edit: The below is what I referred to as "stacking:" Code: srsieve2cl -n 100e3 -N 250e3 -p 650e9 -P 135e10 -g 16 -M 9000 -s "36*60^n-1" srsieve2cl -n 100e3 -N 250e3 -p 650e9 -P 135e10 -g 16 -M 9000 -s "1700*60^n-1" srsieve2cl -n 100e3 -N 250e3 -p 650e9 -P 135e10 -g 16 -M 9000 -s "4708*60^n-1" srsieve2cl -n 100e3 -N 250e3 -p 650e9 -P 135e10 -g 16 -M 9000 -s "5317*60^n-1" srsieve2cl -n 100e3 -N 250e3 -p 650e9 -P 135e10 -g 16 -M 9000 -s "5611*60^n-1"
No. It is not what we have in mind. Without using the -O command, the first part of it will not even work properly. You are not creating factor files but instead sieve files. How will you be able to "merge" together sieve files for the same base that have different tests remaining? Merging them won't logically work. You have to create factor files and merge those together and then use those to remove terms from the sieve file.
Why do you insist on running one k at a time as you show in the last part of your process? We don't want to continue helping you if you insist on running one k at a time no matter how many batch jobs you set up. Please trust us that the below is the way to do this.
I will create an input file for you and spell it out as best as I can.
1. Using the attached file of k's called R60-remain.txt, run the following command on machine 1:
srsieve2cl -n 100e3 -N 250e3 -P 1e9 -g 16 -M 9000 -s R60-remain.txt
This will run quickly and is a precursor to running the big jobs next. It will create an output file. I suggest renaming it to something relevant. Let's call it R60-sieve.txt
The following assumes that you have 3 equal machines. You can tweak the sieving ranges to make them complete at about the same time.
2. Using the examples that Mark gave, run the following command on machine 1:
srsieve2cl -p1e9 -P4e11 -i R60-sieve.txt -O factors1.txt
(tweak as needed for the relevant number of cores/threads)
3. Run the following command on machine 2:
srsieve2cl -p4e11 -P8e11 -i R60-sieve.txt -O factors2.txt
4. Run the following command on machine 3:
srsieve2cl -p8e11 -P12e11 -i R60-sieve.txt -O factors3.txt
5. You now have 3 factor files factors1.txt, factors2.txt, and factors3.txt. Merge them all together. I just copy and paste each one to the end of the first one. To check yourself, add up how many total lines are in all of the files. Then after the copy and paste, make sure your final file has that many lines. You can also use a DOS command (or there's probably a myriad of other ways) to merge all of the files together. Rename that merged file as factors-all.txt
6. Run the following command:
srsieve2cl -A -i R60-sieve-txt -I factors-all.txt
This will create a new output file with all of the factors removed.
7. Rename the output file from #6 to something relevant again like R60-sieveb.txt
8. Correct the sieve depth in R60-sieveb.txt to show 120000000000 since you sieved to 12e11.
9. Run an LLR test at 60% of the n-range using approximately the median k-value in the file (60% of n=100000 to 250000 = 190000 and the median k is 12061, so use 12061*60^190000-1 in this case) to see how long a test takes.
10. Begin a short trial sieve using R60-sieveb.txt to see what the removal rate is. If you are removing factors at twice the rate of an LLR test, then you need to sieve to slightly less than double the current sieve depth. Note that this is not exact math-wise but it gets you close. If you are way off, say you are removing factors 10 times faster than the LLR test, don't automatically sieve 10 times as far. Do something less than that, say 5-7 times as far, and then see what the removal rate is again. Nothing is ever quite a straight-line when it comes to a removal rate.
11. Delete all extraneous factor or sieve files except for R60-sieveb.txt
12. Repeat steps 2 thru 11 as many times as necessary until you've reached approximately the correct optimum sieve depth. On the 2nd time through, use the new file R60-sieveb.txt. Each time you go through it, continue using the final file from step 8 as input to the new step 2.
Have you looked at some of the documentation that comes with the sieving programs, such as README? If not, that can be very helpful in understanding the difference between factor files and sieve files and how to merge and remove one from the other.
Attached Files
R60-remain.txt (446 Bytes, 10 views)
Last fiddled with by gd_barnes on 2022-12-09 at 05:13
2022-12-09, 05:15 #27 storm5510 Random Account Aug 2009 Not U. + S.A. 252010 Posts @gd_barnes OK, all of this is sailing over my head so it would probably be best to just drop the assignment. Please!
2022-12-09, 05:32 #28
gd_barnes
"Gary"
May 2007
Overland Park, KS
13×907 Posts
Quote:
Originally Posted by storm5510 @gd_barnes OK, all of this is sailing over my head so it would probably be best to just drop the assignment. Please!
OK well...sorry to hear that.
What part of it is confusing you? Have you done factor removal from sieve files before? Once you've done it a couple of times, it becomes a lot easier.
On the optimum sieve depth part, if you want to skip that and just provide us with a preliminary sieve file, say, sieved to 1e12 or 5e12 or 10e12, then that would be OK too. Others can then pick up the sieve file, determine optimum sieve depth, and finish it off.
2022-12-09, 05:51 #29
storm5510
Random Account
Aug 2009
Not U. + S.A.
23·32·5·7 Posts
Quote:
Originally Posted by gd_barnes OK well...sorry to hear that. What part of it is confusing you? Have you done factor removal from sieve files before? Once you've done it a couple of times, it becomes a lot easier. On the optimum sieve depth part, if you want to skip that and just provide us with a preliminary sieve file, say, sieved to 1e12 or 5e12 or 10e12, then that would be OK too. Others can then pick up the sieve file, determine optimum sieve depth, and finish it off.
Scratch my request! It was a decision made in haste because of being really tired.
What I need to do is study your last instructional post in detail, follow it, and see what I can come up with.
I had an issue with srsieve2cl which I have submitted for rogue to look into. I found a condition where it refused to start and displayed an error message.
Now, sleep!
2022-12-09, 07:48 #30 gd_barnes "Gary" May 2007 Overland Park, KS 1179110 Posts The instructions that I gave are essentially a turn-key operation of a complete sieving process using multiple machines. Doing all of that, the file would be completely ready for LLR testing in the most efficient way possible, both from a personal time and CPU time perspective. Mark had suggested that you might find it easier to run one base on only one machine. That is true. But I had the impression that you wanted to use multiple machines for a single base. If you want to skip the optimum sieve calculations, that is OK. I can guarantee that the optimum sieve depth is > 10e12, likely > 20e12. srsieve2cl is amazingly fast for multiple k's on multiple cores so it could be higher than that but I cannot say for sure without setting it up myself. To make it worth your while, you should at least sieve to 1e12. If you can sieve to 10e12, that would be great. That's the maximum I'd suggest that you do without computing an optimum sieve depth. There is an alternative that you might find easier that is only a little less efficient CPU-wise. Split that k-file up that I posted (R60-remain.txt) into 3 files, 1 for each machine, of 11 k's each or whatever ratio works since you have machines of varying capabilities. You still get the efficiency of multiple k's in each sieve without having to copy factor and sieve files across multiple machines, albeit somewhat less efficient. Doing this, you'd have 3 separate sieves. You'd have separate factor files to remove from the separate sieve files each time they finish. That might make it a little easier to wrap your head around. Then when all 3 sieves are done sieving to the same depth, you can use srfile to combine them into one big sieve file to send to us.
2022-12-09, 16:55 #31 storm5510 Random Account Aug 2009 Not U. + S.A. 1001110110002 Posts I was not able to sleep last night so I got back up to work on this. I was up until 5 AM EST. I took a lot from the post rogue made. I created a list as he mentioned and called it "input.txt" Code: 36*60^n-1 1700*60^n-1 4708*60^n-1 5317*60^n-1 5611*60^n-1 6101*60^n-1 6162*60^n-1 6274*60^n-1 7060*60^n-1 7870*60^n-1 I developed my batch process as I read along, one step at a time. and testing each line individually. Below is the batch from Machine 3: 1 and 2 as different as they are not running a GPU version. The basics are the same. Code: @echo off cls srsieve2cl -n 100e3 -N 250e3 -p 1e12 -P 2e12 -g 16 -M 9000 -s input.txt srsieve2cl -p 1e12 -P 2e12 -g 16 -M 9000 -i b60_n.abcd -O f3.txt srsieve2cl -A -g 16 -M 9000 -i b60_n.abcd -I f3.txt I wrote these in such a way to make them reusable by simply changing the input file. Below are the "p" settings for each. Code: Machine 1: -p 3 -P 45e10. Machine 2: -p 45e10 to -P 1e12. Machine 3: -p 1e12 to 2e12. All three should finish in a 3-hour window tomorrow.
2022-12-09, 17:40 #32 rogue "Mark" Apr 2003 Between here and the 2×72×71 Posts Once you do the initial sieving to 1e9, then the first instance will start with 1e9. All three instances will the output from the the initial sieving as their input.
2022-12-09, 18:10 #33
storm5510
Random Account
Aug 2009
Not U. + S.A.
47308 Posts
Quote:
Originally Posted by rogue Once you do the initial sieving to 1e9, then the first instance will start with 1e9. All three instances will the output from the the initial sieving as their input.
You lose me here. I am not doing any sieving which stops at 1e9. My first batch is from 3 to 45e10.
Similar Threads Thread Thread Starter Forum Replies Last Post rebirther Conjectures 'R Us 826 2023-01-25 15:49 The Carnivore Twin Prime Search 13 2019-07-06 03:11 gd_barnes Conjectures 'R Us 143 2014-10-21 23:55 nstaab1 Lounge 15 2013-03-06 13:48 CRGreathouse Puzzles 24 2011-10-28 18:30
All times are UTC. The time now is 10:48.
Fri Jan 27 10:48:29 UTC 2023 up 162 days, 8:17, 0 users, load averages: 1.15, 0.96, 0.98
Copyright ©2000 - 2023, Jelsoft Enterprises Ltd.
This forum has received and complied with 0 (zero) government requests for information.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.
A copy of the license is included in the FAQ.
≠ ± ∓ ÷ × · − √ ‰ ⊗ ⊕ ⊖ ⊘ ⊙ ≤ ≥ ≦ ≧ ≨ ≩ ≺ ≻ ≼ ≽ ⊏ ⊐ ⊑ ⊒ ² ³ °
∠ ∟ ° ≅ ~ ‖ ⟂ ⫛
≡ ≜ ≈ ∝ ∞ ≪ ≫ ⌊⌋ ⌈⌉ ∘ ∏ ∐ ∑ ∧ ∨ ∩ ∪ ⨀ ⊕ ⊗ 𝖕 𝖖 𝖗 ⊲ ⊳
∅ ∖ ∁ ↦ ↣ ∩ ∪ ⊆ ⊂ ⊄ ⊊ ⊇ ⊃ ⊅ ⊋ ⊖ ∈ ∉ ∋ ∌ ℕ ℤ ℚ ℝ ℂ ℵ ℶ ℷ ℸ 𝓟
¬ ∨ ∧ ⊕ → ← ⇒ ⇐ ⇔ ∀ ∃ ∄ ∴ ∵ ⊤ ⊥ ⊢ ⊨ ⫤ ⊣ … ⋯ ⋮ ⋰ ⋱
∫ ∬ ∭ ∮ ∯ ∰ ∇ ∆ δ ∂ ℱ ℒ ℓ
𝛢𝛼 𝛣𝛽 𝛤𝛾 𝛥𝛿 𝛦𝜀𝜖 𝛧𝜁 𝛨𝜂 𝛩𝜃𝜗 𝛪𝜄 𝛫𝜅 𝛬𝜆 𝛭𝜇 𝛮𝜈 𝛯𝜉 𝛰𝜊 𝛱𝜋 𝛲𝜌 𝛴𝜎𝜍 𝛵𝜏 𝛶𝜐 𝛷𝜙𝜑 𝛸𝜒 𝛹𝜓 𝛺𝜔 | |
# Dynamically refresh date when PopupMenu selection changes
I am trying to adapt the following snippet to behave in such a way that when a new selection is made - even if there is no change, then the date time stamp updates, I would also like to grab the user profile name from computer and append it to the date with every change. The purpose of which is to ascertain from which computer login was the selection made from.
(*user defined levels and associated colors*)
levels = {"Self Assess", "Mastery", "Dominance", "Proficeincy",
"Fuzzy"};
levelColors = {White, Green, Yellow, Red, Gray};
DynamicModule[{x = levels[[1]]},
Row[{PopupMenu[Dynamic[x], levels,
Background -> Dynamic[
Which[
x === levels[[1]], levelColors[[1]],
x === levels[[2]], levelColors[[2]],
x === levels[[3]], levelColors[[3]],
True, levelColors[[4]]]]],
DateString[]}]]
I tried Refresh, Dynamic@Row, TrackedValue, and even ValueFunction to no avail. Any Thoughts?
## 1 Answer
Not an expert (so you might want to wait for some more input from others), something like this works for me (assuming this is what you have in mind).
The idea is to use the second argument to Dynamic to update the date, which (here) is stored in t:
(*user defined levels and associated colors*)
levels = {"Self Assess", "Mastery", "Dominance", "Proficeincy",
"Fuzzy"};
levelColors = {White, Green, Yellow, Red, Gray};
DynamicModule[
{x = levels[[1]],
t = DateString[]},
Row[{
PopupMenu[
Dynamic[x, (x = #; t = DateString[]) &],
levels,
Background -> Dynamic[
First@Pick[
levelColors,
levels,
x
]
]
],
Dynamic[t],
SystemInformation["FrontEnd", "UserName"]
},
Spacer[5]
]
]
• This is definitely usable. Thank you so much for your time. Dec 4 '17 at 21:50
• No problem, glad it ended up being helpful!
– Anne
Dec 4 '17 at 21:58 | |
speedy-slice-0.1.5: Speedy slice sampling.
Copyright (c) 2015 Jared Tobin MIT Jared Tobin unstable ghc None Haskell2010
Numeric.MCMC.Slice
Contents
Description
This implementation performs slice sampling by first finding a bracket about a mode (using a simple doubling heuristic), and then doing rejection sampling along it. The result is a reliable and computationally inexpensive sampling routine.
The mcmc function streams a trace to stdout to be processed elsewhere, while the slice transition can be used for more flexible purposes, such as working with samples in memory.
See Neal, 2003 for the definitive reference of the algorithm.
Synopsis
Documentation
mcmc :: (Show (t a), FoldableWithIndex (Index (t a)) t, Ixed (t a), Num (IxValue (t a)), Variate (IxValue (t a))) => Int -> IxValue (t a) -> t a -> (t a -> Double) -> Gen RealWorld -> IO () Source #
Trace n iterations of a Markov chain and stream them to stdout.
>>> let rosenbrock [x0, x1] = negate (5 *(x1 - x0 ^ 2) ^ 2 + 0.05 * (1 - x0) ^ 2)
>>> withSystemRandom . asGenIO \$ mcmc 3 1 [0, 0] rosenbrock
-3.854097694213343e-2,0.16688601288358407
-9.310661272172682e-2,0.2562387977415508
-0.48500122500661846,0.46245400501919076
slice :: (PrimMonad m, FoldableWithIndex (Index (t a)) t, Ixed (t a), Num (IxValue (t a)), Variate (IxValue (t a))) => IxValue (t a) -> Transition m (Chain (t a) b) Source #
A slice sampling transition operator.
Re-exported
create :: PrimMonad m => m (Gen (PrimState m)) #
Create a generator for variates using a fixed seed.
Seed a PRNG with data from the system's fast source of pseudo-random numbers. All the caveats of withSystemRandom apply here as well.
withSystemRandom :: PrimBase m => (Gen (PrimState m) -> m a) -> IO a #
Seed a PRNG with data from the system's fast source of pseudo-random numbers ("/dev/urandom" on Unix-like systems or RtlGenRandom on Windows), then run the given action.
This is a somewhat expensive function, and is intended to be called only occasionally (e.g. once per thread). You should use the Gen it creates to generate many random numbers.
asGenIO :: (GenIO -> IO a) -> GenIO -> IO a #
Constrain the type of an action to run in the IO monad. | |
# Tag Info
86
In this response, I will focus upon the programming paradigm change when moving from Java to Mathematica. I will emphasize two differences between the languages. The first concerns the "feel" of writing Mathematica code. The second is about how iteration is expressed. The "Feel" of Mathematica Java is a reasonably conventional programming language, ...
41
This is not the full answer but I've solved most of the problems. The hardest one, with sound, remains. Embedded version without music bobthechemist's points Quality is not a problem anymore since here nothing is rasterized. White edges are due to "features" with Texture, I've fixed that using strange VertexTextureCoordinates. I can't handle this ...
39
Update It turns out that the correct way is to use ExtendedDefinition, not ExtendedFullDefinition. Please see the answer by @jkuczm for a detailed explanation. This is a simplification of your solution: LanguageExtendedFullDefinition[new] = LanguageExtendedFullDefinition[old] /. HoldPattern[old] :> new I believe LanguageExtendedFullDefinition is ...
30
StringReplace method After reading other answers I was inspired to write a new method. I place it first because it is almost as concise as the method below yet it is more robust (and safe) because it preserves strings as strings. str = "[can {and it(it (mix) up)} look silly]"; StringReplace[str, {"["|"{"|"(" -> -1, "]"|"}"|")" -> 1, " " -> 0}] //...
28
What is wrong: a) you're using exact arithmetic. b) You keep iterating even if the point seems to be escaping. Try this ClearAll@prodOrb; prodOrb[c_, maxIters_: 100, escapeRadius_: 1] := NestWhileList[#^2 + c &, 0., Abs[#] < escapeRadius &, 1, maxIters ] prodOrb[0. + 10. I] prodOrb[0. + .1 I] (if you don't need the entire list but ...
25
Okay, here is a way to compute the forces much faster: We create a CompiledFunction (called getForces). It eats a list of points in the plane and spits out the net force onto the first point of the list; here the second to last points are supposed to be those points that are so close to the first one that they exert a force onto it. size = 50.;(*size of ...
21
This is a great use for the Association data structure, which makes so many tasks in Mathematica that much more pleasant. First, we can just write out a ranking of grades: ranking = {"A+", "A", "A-", "B+", "B", "B-", "C+", "C", "C-", "D+", "D", "D-", "E", "W"}; Then we take your grades and count how many of each there are into an association with ...
20
str = "[can {and it(it (mix) up)} look silly]"; i = 10; StringJoin @@ Last[Replace[Characters@str, {"[" | "(" | "{" :> Sow[" ", --i], "]" | ")" | "}" :> Sow["", ++i], c_ :> Sow[c, i]} , 1] ~Reap~ Range@10] (* " mix it up and it can look silly" *) This just scans through the characters one at a time and Sows them with an integer tag. The ...
20
Total[Range[CubeRoot[10000]]^3] 53361
18
I can't find the actual code in your linked data file, but it may be worth posting my own solution for a 2D Poisson problem here. It is copied from my web page. I'm using a maximum of 100000 iterations by default. From your description, it sounds as if you could try to re-write your loops using constructs such as Fold, Nest or - as I do below - FixedPoint. ...
17
Just a bit of fun with @acl's code: ArrayPlot[Table[ NestWhile[#^2 - (0. - 1 I) & , r + i I, Abs[#] < 2.0 &, 1, 10], {r, -2, 2, 0.005}, {i, -2, 2, 0.005}]]
17
As no one gave a FixedPoint answer, here is one: preparedStr = StringReplace[ "((your[drink {remember to}]) ovaltine)", { RegularExpression["[{[(]"] -> "{", RegularExpression["[)\]}]"] -> "}" }] "{{your{drink {remember to}}} ovaltine}" lst = {}; ...
16
As is demonstrated very well in this post you can use a criteria for your pattern, thereby only applying your function as long as you are searching and not to all elements. Also there is a specific FirstPosition function. f[x_] := Module[{}, Pause[0.5]; 2 x] AbsoluteTiming[ Position[f /@ Range[10], 10, 1, 1] ] AbsoluteTiming[ FirstPosition[f /@ Range[...
16
Your boundary conditions seem to be not quite correct according to the mechanical problem. Sorry, I don't have the time to go through your code today, but I got a version running, although this will take some time and might be an overkill, since it is based on the full 3D theory. I have to go home now, I will try to take a look at your code again tomorrow, ...
16
☺lookMaNoLetters☺ = 1 ## & @@@ # & /@ # &; ☺lookMaNoLetters☺ @ mylist {{y1 y2 y3, y3 y4 y5}, {w1 w2 w3, w4 w5 w6}} Further variations: ☺lookMaNoLettersOrNumbers☺ = # ##2 & @@@ # & /@ # &; ☺ApplyTimesAtLevel2☺ = # ##2 & @@ ## &[#, {2}] &; ☺InCaseYouLikeInfix☺ = # ~ (# ##2 & @@ ## &) ~ {2} &; ☺...
15
Adding to Szabolcs's answer, it's better to use ExtendedDefinition instead of ExtendedFullDefinition. In situation in which old symbol (the one that we want to copy), depends on anotherSymbol and anotherSymbol has old symbol somewhere in it's ...Values e.g.: ClearAll[new, old, anotherSymbol] old = anotherSymbol anotherSymbol[] := 2 old Full definition of ...
15
Reset the kernel first. str = "[can {and it(it (mix) up)} look silly]" new = StringReplace[ StringReplace[str, {"(" | "[" -> "{", ")" | "]" -> "}"}], {(a : WordCharacter ~~ " " | "" ~~ "{") :> a <> ",{", (a : WordCharacter ~~ " " ~~ b : WordCharacter) :> a <> "," <> b, ("}" ~~ " " | "" ~~ b : ...
15
This is a straightforward attempt at a recursive descent parser, favoring readability over brevity. First, the tokenizer: tokenize[str_] := DeleteCases[StringCases[str, { "(" -> open[1], "[" -> open[2], "{" -> open[3], ")" -> close[1], "]" -> close[2], "}" -> close[3], x : (Except[Characters["()[]{}"]] ..) :&...
15
First let me observe that your coding style makes debugging difficult, I highly recommend breaking giant expressions into manageable pieces. Second, in the code below I have used a different definition for the segments. Your version: $y=(x-x_1)^{curvature}\frac{y_2-y_1}{x_2-x_1}+y_1$ does not give an amplitude of $y_2$ at $x=x_2$ if $curvature\neq1$. I ...
15
I think IntegerPartitions[m, {2}, listOfIntegers] does exactly what you want, and seems pretty efficient.
14
Well I decided to give it a bit of a go...First import the image and convert to grayscale, then crop to focus on the area of interest. Then I used a LaplacianGaussianFilter, which is often used in blob detection. img = ImageAdjust@ColorConvert[Import["http://i.imgur.com/4lDwE33.jpg"], "Grayscale"]; smallimg = ImageAdjust@ImageTake[img, {200, 500}, {200, 600}...
14
The trick here is to use the plotting function to generate the mesh lines, but there is no way to apply a ColorFunction for a MeshStyle - mesh lines need to have a single color. So we extract the mesh lines, break them up into pieces, and then apply the color function to them. This could be more efficient if I didn't use Normal but the code would be much ...
13
The following seems a little more elegant. data = Import["http://www.massey.ac.nz/~pscowper/ts/cbe.dat"]; ts = TemporalData[data[[2 ;; -1, 1]], {"1958", Automatic, "Month"}]; DateListPlot[ts["Path"]] TemporalData can also store multiple paths. ts2= TemporalData[Transpose[data[[2 ;; -1]]], {"1958", Automatic, "Month"}]; DateListPlot[ts2["Paths"]]
13
One can also go about this using integer linear programming, with an array of 0-1 variables indexed by vertices and colors. Here is one encoding of that approach. constrainedColorings2[graph[vertices_, nbrhds_], colors_List, start_List, v_] := Module[ {unassigned, nv = Length[vertices], nc = Length[colors], vars, fvars, c1, c2, c3, c4, pos1, pos2, ...
13
With a compiled version you get it so fast, that you can manipulate it in real time. fc = Compile[{{in, _Complex, 0}, {c, _Complex, 0}}, Module[{iter = 0, max = 10, z = in}, While[iter++ < max, If[Abs[z = z^2 + c] > 2.0, Break[] ] ]; {Abs[z], iter} ], CompilationTarget -> "C", Parallelization -> True, ...
13
This appears to be a perfectly legitimate use of DownValues. These are often used by experienced users as a hash table. There are some ways you might improve this. First, you could use the value True directly, and it's arguably better to Scan than to Map, but I've used the latter often enough myself as it rarely matters. Scan[(both[#] = True) &, ...
13
Well, Mike Honeychurch and Leonid Shifrin have pretty much covered the ground, but I have one thing to add, which, while based only on observed behavior, I think helps explain what's going on. Set and SetDelayed both create OwnValues is the form HoldPattern[symbol] :> code. The difference is that code is unevaluated in the case of SetDelayed. ...
13
I remember reading somewhere that everytime I use =, Mathematica copies an expression in the memory (which may be slow and inefficient). This is not quite true, as written here. Mathematica uses a copy-on-write behaviour, i.e. it will only create an actual copy of a datastructure if you modify it. Example: a = {1,2,3}; As this is evaluated, first the ...
13
It's hard to know quite where to start with this, but I'd start with the answers to this question for some initial guidance. As a general guide, nested For loops are almost never necessary and using list-based operations is much more efficient, as well as readable and less prone to error. Let's take the inner loop first. For[h = 1, h <= 3, h = h + 1,...
13
Implementation This is indeed an important problem. It is usually best to have a separate function testing various options. Here is the solution I propose: a wrapper that would factor out the testing functionality from the main function. Here is the code: ClearAll[OptionCheck]; OptionCheck::invldopt = "Option 1 for function 2 received invalid value 3`"...
Only top voted, non community-wiki answers of a minimum length are eligible | |
# Metropolis Monte Carlo
The Metropolis Monte Carlo technique [1] is a variant of the original Monte Carlo method proposed by Nicholas Metropolis and Stanislaw Ulam in 1949 [2]
## Main features
Metropolis Monte Carlo simulations can be carried out in different ensembles. For the case of one-component systems the usual ensembles are:
In the case of mixtures, it is useful to consider the so-called Semi-grand ensembles. The purpose of these techniques is to sample representative configurations of the system at the corresponding thermodynamic conditions. The sampling techniques make use of the so-called pseudo-random number generators.
## Configuration
A configuration is a microscopic realisation of the thermodynamic state of the system. To define a configuration (denoted as $\left. X \right.$ ) we usually require:
• The position coordinates of the particles
• Depending on the problem, other variables like volume, number of particles, etc.
The probability of a given configuration, denoted as $\Pi \left( X | k \right)$, depends on the parameters $k$ (e.g. temperature, pressure)
Example:
$\Pi_{NVT}(X|T) \propto \exp \left[ - \frac{ U (X) }{k_B T} \right]$
In most of the cases $\Pi \left( X | k \right)$ exhibits the following features:
• It is a function of many variables
• Only for a very small fraction of the configurational space the value of $\Pi \left( X | k \right)$ is not negligible.
Due to these properties, Metropolis Monte Carlo requires the use of Importance Sampling techniques
## Importance sampling
Importance sampling is useful to evaluate average values given by:
$\langle A(X|k) \rangle = \int dX \Pi(X|k) A(X)$
where:
• $\left. X \right.$ represents a set of many variables,
• $\left. \Pi \right.$ is a probability distribution function which depends on $X$ and on the constraints (parameters) $k$
• $\left. A \right.$ is an observable which depends on the $X$
Depending on the behavior of $\left. \Pi \right.$ we can use to compute $\langle A(X|k) \rangle$ different numerical methods:
• If $\left. \Pi \right.$ is, roughly speaking, quite uniform: Monte Carlo Integration methods can be effective
• If $\left. \Pi \right.$ has significant values only for a small part of the configurational space, Importance sampling could be the appropriate technique
#### Outline of the Method
• Random walk over $\left. X \right.$:
$\left. X_{i+1}^{test} = X_{i} + \delta X \right.$
From the configuration at the i-th step one builds up a test configuration by slightly modifying some of the variables $X$
• The test configuration is accepted as the new (i+1)-th configuration with certain criteria (which depends basically on $\Pi$)
• If the test configuration is not accepted as the new configuration then: $\left. X_{i+1} = X_i \right.$
The procedure is based on the Markov chain formalism, and on the Perron-Frobenius theorem. The acceptance criteria must be chosen to guarantee that after a certain equilibration time a given configuration appears with probability given by $\Pi(X|k)$
## Temperature
The temperature is usually fixed in Metropolis Monte Carlo simulations, since in classical statistics the kinetic degrees of freedom (momenta) can be generally, integrated out. However, it is possible to design procedures to perform Metropolis Monte Carlo simulations in the microcanonical ensemble (NVE).
## Boundary Conditions
The simulation of homogeneous systems is usually carried out using periodic boundary conditions.
## Initial configuration
The usual choices for the initial configuration in fluid simulations are:
• an equilibrated configuration under similar conditions (for example see [3])
• an ordered lattice structure. For details concerning the construction of such structures see: lattice structures. | |
## Speed improvements when creating a workbook with many sheets
Version: 2021b
Type: Features
Category: Programming
Subcategory: Labtalk
Jira: ORG-22718
sec;
NewBook sheet:=365;
watch;
or
sec;
page.nlayers=365;
watch;
In Origin 2021, using above code to create 365 sheet workbook, it took 16 sec and 14 sec respectively, while in Origin 2021b, it took 10 secs. | |
# Deterministic Counter Mode with a PRF. What does evaluate at a point mean?
This is from Dan Boneh's Lecture where he talks about operating a PRF (AES, DES) in Deterministic Counter Mode.
Dan Boneh says
What we could do is we could use what's called a deterministic counter mode. So in a deterministic counter mode, basically we build a stream cipher out of the block cipher. So suppose we have a PRF, F. So again you should think of AES when I say that. So AES is also a secure PRF. And what we'll do is, basically, we'll evaluate AES at the point zero, at the point one, at the point two, up to the point L. This will generate a pseudo random pad. And I will XOR that with all the message blocks and recover the ciphertext as a result. Okay, so really this is just a stream cipher that's built out of a PRF, like AES and triple DES, and it's a simple way to do encryption.
What exactly does he mean by "Evaluating at point 0, point 1" etc?
Does he mean encrypting the numbers 0, 1, 2, etc using AES?
i.e. something like
for (i = 0; i < messagelen; ++i)
Output(AES-PRF(key, plaintext));
where the Output function generates one unit of the PRG with each call.
Let $$E(k,b)$$ be the AES encryption of message $$b$$ with the key $$k$$ where size of $$b$$ is 128-bit, the key size can be 128,192,and 256 bits.
What exactly does he mean by "Evaluating at point 0, point 1" etc
• Evaluating at point 0 : $$c_0 = E(k,0)$$
• Evaluating at point 1 : $$c_1 =E(k,1)$$
• Evaluating at point 2 : $$c_2 = E(k,2)$$
• and soo on
• Evaluating at point $$\ell$$ : $$c_\ell= E(k,\ell)$$
$$c_i = E(k,i)$$ and the encryption is performed as
$$C_i = c_i \oplus m[i]$$
i.e. something like
There is no padding there like the PKCS#5 etc; the input $$i$$ is considered as a 128-bit binary encoded representation of integer $$i$$, or one can think of it as a 128-bit counter. We can see it more clearly if we consider the below as hex
00000000000000000000000000000001
00000000000000000000000000000002
00000000000000000000000000000003
00000000000000000000000000000004
..
000000000000000000000000000000FF
..
What is defined here is the CTR mode of operation that is first defined for PRFs and introduced by Whitfield Diffie and Martin Hellman in 1979
CTR turns a PRF into a stream cipher, as Snuffle turned into Salsa20 and AES into AES-CTR. Note that although not proved the AES is a candidate for a $$PRP \subset PRF$$. The CTR mode doesn't require the decryption of AES ( or any block cipher) and this is useful to reduce the area of the software/hardware implementations.
• "CTR turns a PRF into a stream cipher, like ChaCha20 and AES." I can't quite make sense of this sentence. Did you mean to write something else instead of AES? Or are you referring to the use of CTR in the construction of ChaCha? In that case it's a bit confusing, because I don't think the core function on its own is usually referred to as just "ChaCha20". But I may be mistaken. – Maeher Dec 10 '20 at 8:14
• @Maeher you are right, the Snuffle used in the Salsa20 and ChaCha is mentioned as a variant. The ChaCha paper doesn't mention Snuffle, therefore, I've turned it into Salsa20 to be clear. – kelalaka Dec 10 '20 at 8:38 | |
## College Physics (4th Edition)
The fundamental frequency of this string is $616~Hz$
We can find the speed of the wave in the string: $v = \sqrt{\frac{F}{\mu}}$ $v = \sqrt{\frac{mg}{\mu}}$ $v = \sqrt{\frac{(2.20~kg)(9.80~m/s^2)}{3.55\times 10^{-6}~kg/m}}$ $v = 2464~m/s$ We can find the fundamental frequency: $f = \frac{v}{\lambda}$ $f = \frac{v}{2L}$ $f = \frac{2464~m/s}{(2)(2.00~m)}$ $f = 616~Hz$ The fundamental frequency of this string is $616~Hz$. | |
# Characteristic coordinates $ξ(x, y)$ and $η(x, y)$ for $xu_{xx} + u_{yy} = 0$ when $x<0$
How would I determine the characteristic coordinates for $$xu_{xx} + u_{yy} = 0$$?
This PDE reads $$au_{xx} + 2b u_{xy} + cu_{yy} = 0$$ with $$a=x, b=0, c=1$$. The polynomial equation $$a\lambda^2 -2b\lambda +c =0$$ implies $$\lambda^2 = \frac{-1}{x}$$. Since $$x<0$$, we can take $$x=-a$$ where $$a>0$$ and so we get $$\lambda^2 = \frac{1}{a}$$ and thus $$\lambda = dy/dx= \pm \frac{1}{\sqrt{a}}$$ Solving this would give me $$y = \mp 2\sqrt{a} + c = \mp2\sqrt{-x} +c$$, and so $$c = y \pm 2\sqrt{-x}$$. Finally, $$ξ(x, y) = y+2\sqrt{-x} \qquad\text{and}\qquad η(x, y) = y-2\sqrt{-x}$$
Is this correct?
• Further reading: p. 161-162 of R. Courant, D. Hilbert (1962) Methods of Mathematical Physics vol. II: "Partial differential equations", Wiley-VCH. doi:10.1002/9783527617234 – Harry49 Dec 12 '18 at 18:17
Try it out, set $$u(x,y)=v(ξ,η)=v(y+2\sqrt{−x},y-2\sqrt{−x})$$ so that \begin{align} u_x&=-\frac1{\sqrt{-x}}(v_ξ-v_η),& u_y&=v_ξ+v_η\\ u_{xx}&=\frac1{-x}(v_{ξξ}-2v_{ξη}+v_{ηη})+\frac1{2x\sqrt{-x}}(v_ξ-v_η),& u_{yy}&=v_{ξξ}+2v_{ξη}+v_{ηη} \end{align} so that $$xu_{xx}+u_{yy}=4v_{ξη}+\frac1{2\sqrt{-x}}(v_ξ-v_η)$$ which is likely what you wanted to achieve.
• Cheers mate, just wanted to confirm – pablo_mathscobar Dec 12 '18 at 18:03 | |
# Mathematical Modeling of the Human Brain
###### Kent-André Mardal, Marie E. Rognes, Travis B. Thompson, Lars Magnus Valnes
Publisher:
Springer
Publication Date:
2022
Number of Pages:
136
Format:
Paperback
Price:
37.99
ISBN:
978-3-030-95135-1
Category:
Monograph
[Reviewed by
Bill Satzer
, on
10/3/2022
]
Applications of mathematics to medical questions have grown substantially both in number and in sophistication in the last several years. These have developed with, and sometimes because of, a combination of advances in imaging technology, new computational resources, and advanced software.
The slim volume under review here looks specifically at modeling the human brain using magnetic resonance imaging (MRI) and applying finite element techniques to simulate brain processes. While several software libraries are available for solving partial differential equations (PDEs), doing that over brain domains with complex geometries is a considerable practical barrier. The folds of the brain’s cortex are intricate structures. Generating a mesh for finite element modeling of those structures that is physiologically useful is very difficult. This book addresses some of the issues.
The goal of the book is to provide connections between the tools of medical imaging, neuroscience and the numerical solution of PDEs arising in brain modeling. The authors begin by describing a model problem that is designed to illustrate how their techniques apply to modeling the brain; this provides a focus for their development. They want to study how a solute concentration diffuses through a region Ω of the brain. This solute could be a metabolic waste protein such as amyloid. The mathematical model of this process is a time-dependent PDE with concentration $u = u(x, t)$, where $u$ satisfies a diffusion equation with diffusion tensor $D: u_{t} − \mbox{div} D \nabla u = f$ in $(0, T ] \times \Omega$ with boundary conditions $u = u_{d}$ on $(0, T ] \times \partial \Omega$, and $u(0, ·) = u_{0}$.
With their model problem set out, the authors begin to fill in the background. Brain physiology and imaging are introduced first, with enough description of brain anatomy to make the material that follows comprehensible. The basics of magnetic resonance imaging (MRI) are discussed first, and then three variations are described. The last of these (diffusion tensor imaging mRI) is an imaging method that can detect water molecule movement patterns. From this the diffusion tensor coefficients can be determined, so this is the mode of MRI operation that the authors need to solve the model problem. Even with this advanced MRI it is necessary to modify the surface model file that results by re-meshing, smoothing and avoiding surface intersections and missing facets. In addition, to solve their model problem, the authors need to mesh different regions of the brain to differentiate between gray and white brain matter. This is a very complicated process.
The model problem that the authors describe is in an area of current research, one that aims to address how the presence and movement of some fluid in the brain might contribute to a neurodegenerative disease. Other questions with more immediate clinical applications are also relevant to their research. One of them is an important question in the treatment of epilepsy. It is a kind of inverse problem to electroencephalography: how to determine the source in the brain of an epileptic seizure. Most likely this requires less sophistication in imaging and meshing.
This would not be the first place for a newcomer to learn some neuroscience and mathematical modeling techniques for the brain. One place to start might be Models of the Mind by Lindsay; this offers a more basic introduction to the questions of neuroscience for those new to the field. The current book provides the minimum needed to go forward, but many readers would want more.
This book is part of the Simula SpringerBriefs on Computing Series, and its contents reflect that. It has a relatively extensive discussion of computer software directed toward a fairly narrow field. Its benefit to mathematical readers is the integration of a medical question, the corresponding mathematical development and a computer implementation. The treatment of how finite element meshes are devised for complicated surfaces is of particular value.
Bill Satzer ([email protected]), now retired from 3M Company, spent most of his career as a mathematician working in industry on a variety of applications. He did his PhD work in dynamical systems and celestial mechanics. | |
I tried to replace the drive belt and can't get it right can someone please send me a diagram to see how the belt goes | |
# Mass - Relativistic
vCalc Reviewed
m_"Relativistic" =
Tags:
Rating
ID
vCalc.Mass - Relativistic
UUID
e6cf1dbe-da27-11e2-8e97-bc764e04d25f
This equation calculates the relativistic rest mass of a particle.
## INPUTS
E - Energy of the particle
## NOTES
Mass is defined in two different ways in special relativity: one way defines mass ("rest mass" or "invariant mass") as an invariant quantity which is the same for all observers in all reference frames; in the other definition, the measure of mass ("relativistic mass") is dependent on the velocity of the observer.
The term mass in special relativity usually refers to the rest mass of the object, which is the Newtonian mass when it is measured by an observer moving along with the object. The invariant mass is another name for the rest mass of single particles. The more general invariant mass (calculated with a more complicated formula) loosely corresponds to the "rest mass" of a "system". Thus, invariant mass is a natural unit of mass used for systems which are being viewed from their center of momentum frame (COM frame), as when any closed system (for example a bottle of hot gas) is weighed, which requires that the measurement be taken in the center of momentum frame where the system has no net momentum.
Under such circumstances, and as described by this equation, the invariant mass is equal to the relativistic mass. The relativistic mass computed by this equation is the total energy of the system divided by c (the speed of light) squared.
It is often convenient in calculation that the invariant mass of a system is the total energy of the system (divided by c2) in the COM frame (where, by definition, the momentum of the system is zero). However, since the invariant mass of any system is also the same quantity in all inertial frames, it is a quantity often calculated from the total energy in the COM frame, then used to calculate system energies and momenta in other frames where the momenta are not zero, and the system total energy will necessarily be a different quantity than in the COM frame. As with energy and momentum, the invariant mass of a system cannot be destroyed or changed, and it is thus conserved, so long as the system is closed to all influences (The technical term is isolated system meaning that an idealized boundary is drawn around the system, and no mass/energy is allowed across it).
The term relativistic mass is also sometimes used. This is the sum total quantity of energy in a body or system (divided by c2). As seen from the center of momentum frame, the relativistic mass is also the invariant mass(just as the relativistic energy of a single particle is the same as its rest energy, when seen from its rest frame). For other frames, the relativistic mass (of a body or system of bodies) includes a contribution from the "net" kinetic energy of the body (the kinetic energy of the center of mass of the body), and is larger the faster the body moves. Thus, unlike the invariant mass, the relativistic mass depends on the observer's frame of reference. However, for given single frames of reference and for isolated systems, the relativistic mass is also a conserved quantity.
For a discussion of mass in general relativity, see mass in general relativity. For a general discussion including mass in Newtonian mechanics, see the article on mass.
## REFERENCE
[1] Mass in special realtivity
Source: Wikipedia
URL: https://en.wikipedia.org/wiki/Mass_in_special_relativity | |
## Integer-valued definable functions
UoM administered thesis: Phd
• Authors:
• Shi Qiu
## Abstract
We study integer-valued functions definable in $\mathbb{R}_{\text{an},\exp}$. We first give several variations on a result of Wilkie's, and show that, under certain growth conditions, unary functions definable in $\R_{\text{an},\exp}$ that take integer values at some sufficiently dense subset of positive integers must be polynomials. We then study functions that take values sufficiently close to integers at positive integers. Under certain growth conditions, we show that such functions must be close to a polynomial. The methods here combine Wilkie's results on continuation with transcendence methods. We then consider various results of P\'{o}lya-type for definable functions of several variable. Finally we use Wilkie's methods to check that some of his results on definable continuation go through in certain reducts of $\mathbb{R}_{\text{an},\exp}$, namely expansions of the real field by certain Weierstrass systems and the exponential function.
## Details
Original language English The University of Manchester Marcus Tressl (Supervisor)Gareth Jones (Supervisor) 1 Aug 2021 | |
What is the slope of (-2,4) and (2,-1)?
Mar 27, 2018
$- \frac{5}{4}$
Explanation:
Use the slope fomula:
$m = \frac{{y}_{2} - {y}_{1}}{{x}_{2} - {x}_{1}}$
You do the second ${y}_{2}$ (which is $- 1$) minus the first ${y}_{1}$ (which is $4$), over the second ${x}_{2}$ (which is $2$) minus the first $x$ (which is $- 2$).
$\frac{- 1 - 4}{2 - \left(- 2\right)}$
Then you solve the top and bottom and are left with
$- \frac{5}{4}$ | |
### Introduction
Problems in engineering often involve the exploration of the relationships between values taken by a variable under different conditions. HELM booklet 41 introduced hypothesis testing which enables us to compare two population means using hypotheses of the general form
${H}_{0}:{\mu }_{1}={\mu }_{2}$
${H}_{1}:{\mu }_{1}\ne {\mu }_{2}$
or, in the case of more than two populations,
${H}_{0}:\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}{\mu }_{1}={\mu }_{2}={\mu }_{3}=\dots ={\mu }_{k}$
${H}_{1}:\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}{H}_{0}$ is not true
If we are comparing more than two population means, using the type of hypothesis testing referred to above gets very clumsy and very time consuming. As you will see, the statistical technique called Analysis of Variance (ANOVA) enables us to compare several populations simultaneously . We might, for example need to compare the shear strengths of five different adhesives or the surface toughness of six samples of steel which have received different surface hardening treatments.
#### Prerequisites
• be familiar with the general techniques of hypothesis testing
• be familiar with the $F$ -distribution
#### Learning Outcomes
• describe what is meant by the term one-way ANOVA.
• perform one-way ANOVA calculations.
• interpret the results of one-way ANOVA calculations
1.3 ANOVA tables |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.