## Feed
• We derive a functional central limit theorem for the excursion of a random walk conditioned on sweeping a prescribed geometric area. We assume that the increments of the random walk are integer-valued, centered, with a third moment equal to zero and a finite fourth moment. This result complements the work of \citep{DKW13}, where local central limit theorems are provided for the geometric area of the excursion of a symmetric random walk with finite second moments. Our result turns out to be a key tool to derive the scaling limit of the \emph{Interacting Partially-Directed Self-Avoiding Walk} at criticality, which is the object of a companion paper \citep{CarPet17a}. This requires deriving a reinforced version of our result in the case of a random walk with symmetric Laplace increments.
• Data processing inequalities for $f$-divergences can be sharpened using constants called "contraction coefficients" to produce strong data processing inequalities. For any discrete source-channel pair, the contraction coefficients for $f$-divergences are lower bounded by the contraction coefficient for $\chi^2$-divergence. In this paper, we show that this lower bound can be achieved by driving the input $f$-divergences of the contraction coefficients to zero. Then, we establish a linear upper bound on the contraction coefficients for a certain class of $f$-divergences using the contraction coefficient for $\chi^2$-divergence, and refine this upper bound for the salient special case of Kullback-Leibler (KL) divergence. Furthermore, we present an alternative proof of the fact that the contraction coefficients for KL and $\chi^2$-divergences are equal for a Gaussian source with an additive Gaussian noise channel (where the former coefficient can be power constrained). Finally, we gen
• We study topological properties of families of Hamiltonians which may contain degenerate energy levels, a.k.a. band crossings. The primary tools are Chern classes, Berry phases and slicing by surfaces. To analyse the degenerate locus, we study local models. These give information about the Chern classes and Berry phases. We then give global constraints for the topological invariants. This is a hitherto relatively unexplored subject. The global constraints are stricter when incorporating symmetries such as time reversal symmetry. The results can also be used in the study of deformations. We furthermore use these constraints to analyse examples including the Gyroid geometry, which exhibits Weyl points and triple crossings, and the honeycomb geometry with its two Dirac points.
• When a compact Lie group acts freely and in a Hamiltonian way on a symplectic manifold, the Marsden-Weinstein theorem says that the reduced space is a smooth symplectic manifold. If we drop the freeness assumption, the reduced space might be singular, but Sjamaar-Lerman (1991) showed that it can still be partitioned into smooth symplectic manifolds which "fit together nicely" in the sense that they form a stratification. In this paper, we prove a hyperk\"ahler analogue of this statement, using the hyperk\"ahler quotient construction. We also show that singular hyperk\"ahler quotients are complex spaces which are locally biholomorphic to affine complex-symplectic GIT quotients with biholomorphisms that are compatible with natural holomorphic Poisson brackets on both sides.
• In a quantum many-body system where the Hamiltonian and the order operator do not commute, it often happens that the unique ground state of a finite system exhibits long-range order (LRO) but does not show spontaneous symmetry breaking (SSB). Typical examples include antiferromagnetic quantum spin systems with N\'eel order, and lattice boson systems which exhibit Bose-Einstein condensation. By extending and improving previous results by Horsch and von der Linden and by Koma and Tasaki, we here develop a fully rigorous and almost complete theory of the relation between LRO and SSB in the ground state of a finite system with continuous symmetry. We show that a ground state with LRO but without SSB is inevitably accompanied by a series of energy eigenstates, known as the "tower" of states, which have extremely low excitation energies. More importantly, we also prove that one gets a physically realistic "ground state" by taking a superposition of these low energy excited states. The pres
• In 1997, M.~Khovanov proved that any doodle can be presented as the closure of a twin; this result is an analogue of the classical Alexander theorem for braids and links. We give a description of twins that have equivalent closures; this theorem is an analogue of the classical Markov theorem.
• We study discretized maximal operators associated to averaging over (neighborhoods of) squares in the plane and, more generally, $k$-skeletons in $\mathbb{R}^n$. Although these operators are known not to be bounded on any $L^p$, we obtain nearly sharp $L^p$ bounds for every small discretization scale. These results are motivated by, and partially extend, recent results of T. Keleti, D. Nagy and P. Shmerkin, and of R. Thornton, on sets that contain a scaled $k$-skeleton of the unit cube with center at every point of $\mathbb{R}^n$.
• This paper considers a single-antenna wireless-powered communication network (WPCN) over a flat-fading channel. We show that, by using our probabilistic harvest-and-transmit (PHAT) strategy, which requires knowledge of the instantaneous full channel state information (CSI) and the fading probability distribution, the ergodic throughput of this system may be greatly increased relative to that achieved by the harvest-then-transmit (HTT) protocol. To do so, instead of dividing every frame into the uplink (UL) and downlink (DL), the channel is allocated to UL wireless information transmission (WIT) and DL wireless power transfer (WPT) based on the estimated channel power gain. In other words, based on the fading probability distribution, we derive thresholds that determine the association of a frame with DL WPT or UL WIT. More specifically, if the channel gain falls below or rises above these thresholds, the channel is allocated to WPT or WIT, respectively. Simulation results verify the perfor
• In this article, we determine the maximum Wiener indices of unicyclic graphs with given number of vertices and matching number. We also characterize the extremal graphs. This solves an open problem of Du and Zhou.
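The Wiener index appearing above is simply the sum of shortest-path distances over all unordered vertex pairs. A minimal sketch of the definition via BFS, with a hypothetical cycle-plus-pendant-path construction as an illustrative unicyclic graph (this is our own illustration, not the paper's extremal analysis):

```python
from collections import deque

def wiener_index(adj):
    """Sum of shortest-path distances over all unordered vertex pairs."""
    n = len(adj)
    total = 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[v] == -1:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist)
    return total // 2  # each pair was counted twice

def cycle_with_pendant(n_cycle, path_len):
    """A unicyclic graph: an n_cycle-cycle with a pendant path of path_len edges."""
    n = n_cycle + path_len
    adj = [[] for _ in range(n)]
    for i in range(n_cycle):
        j = (i + 1) % n_cycle
        adj[i].append(j); adj[j].append(i)
    for k in range(path_len):
        u = n_cycle - 1 if k == 0 else n_cycle + k - 1
        v = n_cycle + k
        adj[u].append(v); adj[v].append(u)
    return adj
```

For example, the 5-cycle has Wiener index $5(5^2-1)/8 = 15$, which the code reproduces.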
• Let $(X,\mathcal{B},\mu,T)$ be a measure preserving system. We say that a function $f\in L^2(X,\mu)$ is $\mu$-mean equicontinuous if for any $\epsilon>0$ there exist $k\in \mathbb{N}$ and measurable sets ${A_1,A_2,\cdots,A_k}$ with $\mu\left(\bigcup\limits_{i=1}^k A_i\right)>1-\epsilon$ such that whenever $x,y\in A_i$ for some $1\leq i\leq k$, one has $\limsup_{n\to\infty}\frac{1}{n}\sum_{j=0}^{n-1}|f(T^jx)-f(T^jy)|<\epsilon.$ Measure complexity with respect to $f$ is also introduced. It is shown that $f$ is an almost periodic function if and only if $f$ is $\mu$-mean equicontinuous if and only if $\mu$ has bounded complexity with respect to $f$. Ferenczi studied measure-theoretic complexity using $\alpha$-names of a partition and the Hamming distance. He proved that if a measure preserving system is ergodic, then the complexity function is bounded if and only if the system has discrete spectrum. We show that this result holds without the assumption of ergodicity.
• We discuss certain identities involving $\mu(n)$ and $M(x)=\sum_{n\leq x}\mu(n)$, the functions of M\"{o}bius and Mertens. These identities allow calculation of $M(N^d)$, for $d=2,3,4,\ldots$, as a sum of $O_d \left( N^d(\log N)^{2d - 2}\right)$ terms, each a product of the form $\mu(n_1) \cdots \mu(n_r)$ with $r\leq d$ and $n_1,\ldots , n_r\leq N$. We prove a more general identity in which $M(N^d)$ is replaced by $M(g,K)=\sum_{n\leq K}\mu(n)g(n)$, where $g(n)$ is an arbitrary totally multiplicative function, while each $n_j$ has its own range of summation, $1,\ldots , N_j$. We focus on the case $d=2$, $K=N^2$, $N_1=N_2=N$, where the identity has the form $M(g,N^2) = 2 M(g,N) - {\bf m}^{\rm T} A {\bf m}$, with $A$ being the $N\times N$ matrix of elements $a_{mn}=\sum _{k \leq N^2 /(mn)}\,g(k)$, while ${\bf m}=(\mu (1)g(1),\ldots ,\mu (N)g(N))^{\rm T}$. Our results in Sections 2 and 3 assume, moreover, that $g(n)$ equals $1$ for all $n$. In this case the Perron-Frobenius theorem appli
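For $g\equiv 1$ the quoted identity specializes to $M(N^2) = 2M(N) - {\bf m}^{\rm T} A {\bf m}$ with $a_{mn}=\lfloor N^2/(mn)\rfloor$ and ${\bf m}=(\mu(1),\ldots,\mu(N))^{\rm T}$. A small numerical sanity check of this special case (the sieve and the brute-force comparison below are our own scaffolding, not from the paper):

```python
import numpy as np

def mobius_sieve(limit):
    """Compute mu(1..limit) with a linear sieve over smallest prime factors."""
    mu = np.ones(limit + 1, dtype=np.int64)
    is_comp = np.zeros(limit + 1, dtype=bool)
    primes = []
    for i in range(2, limit + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > limit:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0   # p^2 divides i*p
                break
            mu[i * p] = -mu[i]
    return mu

def mertens_direct(x, mu):
    """M(x) by direct summation of mu(n) for n <= x."""
    return int(mu[1:x + 1].sum())

def mertens_identity(N, mu):
    """M(N^2) = 2*M(N) - m^T A m, with a_{mn} = floor(N^2/(m*n)) (case g == 1)."""
    m = mu[1:N + 1].astype(np.int64)
    idx = np.arange(1, N + 1)
    A = (N * N) // np.outer(idx, idx)
    return 2 * mertens_direct(N, mu) - int(m @ A @ m)
```

For instance, $N=2$ gives $2M(2) - (4 - 2 - 2 + 1) = 0 - 1 = -1 = M(4)$, matching the direct sum.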
• A real valued function $\varphi$ of one variable is called a metric transform if for every metric space $(X,d)$ the composition $d_\varphi = \varphi\circ d$ is also a metric on $X$. We give a complete characterization of the class of approximately nondecreasing, unbounded metric transforms $\varphi$ such that the transformed Euclidean half line $([0,\infty),|\cdot|_\varphi)$ is Gromov hyperbolic. As a consequence, we obtain metric transform rigidity for roughly geodesic Gromov hyperbolic spaces, that is, if $(X,d)$ is any metric space containing a rough geodesic ray and $\varphi$ is an approximately nondecreasing, unbounded metric transform such that the transformed space $(X,d_\varphi)$ is Gromov hyperbolic and roughly geodesic then $\varphi$ is an approximate dilation and the original space $(X,d)$ is Gromov hyperbolic and roughly geodesic.
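For intuition, $\varphi(t)=\sqrt{t}$ is a classical nondecreasing, unbounded metric transform (it is concave with $\varphi(0)=0$, hence subadditive), so the "snowflaked" half line $([0,\infty),\sqrt{|\cdot|})$ is exactly the kind of transformed space studied above. A quick finite-sample check of the metric axioms, as our own illustration rather than anything from the paper:

```python
import itertools
import random

def is_metric_on_sample(points, dist):
    """Check d(x,x)=0 and the triangle inequality on a finite sample of points."""
    for x, y, z in itertools.product(points, repeat=3):
        if dist(x, y) > dist(x, z) + dist(z, y) + 1e-12:
            return False
    return all(dist(x, x) == 0 for x in points)

random.seed(0)
pts = [random.uniform(0, 100) for _ in range(25)]

d = lambda x, y: abs(x - y)             # the Euclidean metric on the line
d_phi = lambda x, y: abs(x - y) ** 0.5  # snowflake transform phi(t) = sqrt(t)
```

The check passes for both `d` and `d_phi`, consistent with $\sqrt{a+b}\le\sqrt{a}+\sqrt{b}$.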
• The detection of minor-probability events is a crucial problem in big data. Such events tend to be rarely occurring phenomena which should be detected and monitored carefully. Given the prior probabilities of separate events and the conditional distributions of observations on the events, Bayesian detection can be applied to estimate the events behind the observations. It has been proved that Bayesian detection has the smallest overall testing error in the average sense. However, when detecting an event with a very small prior probability, conditional Bayesian detection results in a high miss rate. To overcome this problem, a modified detection approach is proposed based on Bayesian detection and the message importance measure, which can reduce the miss rate when detecting events with minor probability. The result can help to dig out minor-probability events in big data.
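The failure mode being addressed is easy to reproduce: under the MAP (Bayesian) rule, a rare event is declared only on very extreme observations, so most of its occurrences are missed. A toy sketch with hypothetical Gaussian likelihoods and a 1% prior (all numbers are our own assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two events: a common one (prior 0.99) and a rare one (prior 0.01).
priors = np.array([0.99, 0.01])
means = np.array([0.0, 2.0])   # observation ~ N(mean_k, 1) under event k

def bayes_decide(x):
    """MAP rule: pick the event maximizing prior * likelihood."""
    lik = np.exp(-0.5 * (x - means) ** 2)  # Gaussian likelihoods, common factor dropped
    return int(np.argmax(priors * lik))

# Miss rate for the rare event: draw observations under event 1 and count
# how often the detector outputs 0 instead.
xs = rng.normal(means[1], 1.0, size=20000)
miss_rate = np.mean([bayes_decide(x) == 0 for x in xs])
```

Here the MAP threshold works out to roughly $x > 3.3$, so about 90% of the rare event's occurrences are missed even though the detector minimizes the overall average error.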
• Computation task service delivery in a computing-enabled and caching-aided multi-user mobile edge computing (MEC) system is studied in this paper, where an MEC server can deliver the input or output data of tasks to mobile devices over a wireless multicast channel. The computing-enabled and caching-aided mobile devices are able to store the input or output data of some tasks, and also to compute some tasks locally, reducing the wireless bandwidth consumption. The corresponding framework of this system is established, and under the latency constraint, we jointly optimize the caching and computing policy at the mobile devices to minimize the required transmission bandwidth. The joint policy optimization problem is shown to be NP-hard, and based on an equivalent transformation and exact penalization of the problem, a stationary point is obtained via the concave-convex procedure (CCCP). Moreover, in a symmetric scenario, gains offered by this approach are derived to analytically understand the influenc
• The aim of this paper is to provide a fractional generalization of the Gompertz law via a Caputo-like definition of the fractional derivative of a function with respect to another function. In particular, we observe that the model presented appears to be substantially different from other attempts at fractional modification of this model, since the fractional nature is carried along by the general solution even in its asymptotic behaviour for long times. We then validate the presented model by employing it as a reference frame to model three biological systems of peculiar interest for biophysics and environmental engineering, namely: dark fermentation, photofermentation and microalgae biomass growth.
• Using an existence criterion for good moduli spaces of Artin stacks by Alper-Fedorchuk-Smyth we construct a proper moduli space of rank two sheaves with fixed Chern classes on a given complex projective manifold that are Gieseker-Maruyama-semistable with respect to a fixed K\"ahler class.
• Weighted logrank tests are a popular tool for analyzing right censored survival data from two independent samples. Each of these tests is optimal against a certain hazard alternative, for example the classical logrank test for proportional hazards. But which weight function should be used in practical applications? We address this question by a flexible combination idea leading to a testing procedure with broader power. Besides the test's asymptotic exactness and consistency, its power behaviour under local alternatives is derived. All theoretical properties can be transferred to a permutation version of the test, which is even finitely exact under exchangeability and showed better finite sample performance in our simulation study. The procedure is illustrated in a real data example.
• We make a detailed study of various (quadratic and linear) Morse-Bott trace functions on the orthogonal groups $O(n)$. We describe the critical loci of the quadratic trace function Tr$(AXBX^T)$ and determine their indices via perfect fillings of tables associated with the multiplicities of the eigenvalues of $A$ and $B$. We give a simplified treatment of T. Frankel's analysis of the linear trace function on $SO(n)$, as well as a combinatorial explanation of the relationship between the mod $2$ Betti numbers of $SO(n)$ and those of the Grassmannians $\mathbb{G}(2k,n)$ obtained from this analysis. We review the basic notions of Morse-Bott cohomology in a simple case where the set of critical points has two connected components. We then use these results to give a new Morse-theoretic computation of the mod $2$ Betti numbers of $SO(n)$.
• Let $A,B\subset\mathbb{R}$. Define $$A\cdot B=\{x\cdot y:x\in A, y\in B\}.$$ In this paper, we consider the following class of self-similar sets with overlaps. Let $K$ be the attractor of the IFS $\{f_1(x)=\lambda x, f_2(x)=\lambda x+c-\lambda,f_3(x)=\lambda x+1-\lambda\}$, where $f_1(I)\cap f_2(I)\neq \emptyset, (f_1(I)\cup f_2(I))\cap f_3(I)=\emptyset,$ and $I=[0,1]$ is the convex hull of $K$. The main result of this paper is $K\cdot K=[0,1]$ if and only if $(1-\lambda)^2\leq c$. Equivalently, we give a necessary and sufficient condition such that for any $u\in[0,1]$, $u=x\cdot y$, where $x,y\in K$.
• We consider an optimal transport problem on the unit simplex whose solutions are given by gradients of exponentially concave functions and prove two main results. One, we show that the optimal transport is the large deviation limit of a particle system of Dirichlet processes transporting one probability measure on the unit simplex to another by coordinatewise multiplication and normalizing. The structure of our Lagrangian and the appearance of the Dirichlet process relate our problem closely to the entropic measure on the Wasserstein space as defined by von Renesse and Sturm in the context of Wasserstein diffusion. The limiting procedure is a triangular limit where we simultaneously allow the number of particles to grow to infinity while the `noise' goes to zero. The method, which generalizes easily to other cost functions, including the Wasserstein cost, provides a novel combination of the Schr\"odinger problem approach due to C. L\'eonard and the related Brownian particle systems by
• In this paper we prove the existence of infinitely many nontrivial solutions for the class of $(p,\, q)$ fractional elliptic equations involving concave-critical nonlinearities in bounded domains in $\mathbb{R}^N$. Further, when the nonlinearity is of convex-critical type, we establish the multiplicity of nonnegative solutions using variational methods. In particular, we show the existence of at least $cat_{\Omega}(\Omega)$ nonnegative solutions.
• Modeling traffic in road networks is a widely studied but challenging problem, especially under the assumption that drivers act selfishly. A common approach used in simulation software is the deterministic queuing model, for which the structure of dynamic equilibria has been studied extensively in the last couple of years. The basic idea is to model traffic by a continuous flow that travels over time from a source to a sink through a network, in which the arcs are endowed with transit times and capacities. Whenever the flow rate exceeds the capacity a queue builds up and the infinitesimally small flow particles wait in line in front of the bottleneck. Since the queues have no physical dimension, it was not possible, until now, to represent spillback in this model. This was a big drawback, since spillback can be regularly observed in real traffic situations and has a huge impact on travel times in highly congested regions. We extend the deterministic queuing model by introducing a stora
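The queuing dynamics described above can be sketched for a single arc: whenever a queue is present the arc discharges at capacity, and the queue grows or shrinks at the rate by which inflow exceeds or falls short of capacity. A hypothetical discrete-time fluid approximation (the paper works in continuous time and, crucially, adds the spillback/storage mechanism that this toy model lacks):

```python
def queue_evolution(inflow, capacity, dt):
    """Point queue on a single arc: queue volume after each time step, given a
    piecewise-constant inflow rate per step. No spillback: the queue has no
    physical extent and never blocks upstream arcs."""
    q = 0.0
    trace = []
    for rate in inflow:
        # Discharge at capacity whenever a queue is present; otherwise pass
        # the inflow through, capped at capacity.
        outflow = capacity if q > 0 else min(rate, capacity)
        q = max(0.0, q + (rate - outflow) * dt)
        trace.append(q)
    return trace
```

With inflow rate 3 against capacity 2, the queue grows by 1 per unit time, then drains at rate 2 once the inflow stops.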
• Given a countable, totally ordered commutative monoid $\mathcal{R}=(R,\oplus,\leq,0)$, with least element $0$, there is a countable, universal and ultrahomogeneous metric space $\mathcal{U}_\mathcal{R}$ with distances in $\mathcal{R}$. We refer to this space as the $\mathcal{R}$-Urysohn space, and consider the theory of $\mathcal{U}_\mathcal{R}$ in a binary relational language of distance inequalities. This setting encompasses many classical structures of varying model theoretic complexity, including the rational Urysohn space, the free $n^{\text{th}}$ roots of the complete graph (e.g. the random graph when $n=2$), and theories of refining equivalence relations (viewed as ultrametric spaces). We characterize model theoretic properties of $\text{Th}(\mathcal{U}_\mathcal{R})$ by algebraic properties of $\mathcal{R}$, many of which are first-order in the language of ordered monoids. This includes stability, simplicity, and Shelah's SOP$_n$-hierarchy. Using the submonoid of idempotents in
• We consider the problem of noisy private information retrieval (NPIR) from $N$ non-communicating databases, each storing the same set of $M$ messages. In this model, the answer strings are not returned through noiseless bit pipes, but rather through \emph{noisy} memoryless channels. We aim at characterizing the PIR capacity for this model as a function of the statistical information measures of the noisy channels such as entropy and mutual information. We derive a general upper bound for the retrieval rate in the form of a max-min optimization. We use the achievable schemes for the PIR problem under asymmetric traffic constraints and random coding arguments to derive a general lower bound for the retrieval rate. The upper and lower bounds match for $M=2$ and $M=3$, for any $N$, and any noisy channel. The results imply that separation between channel coding and retrieval is optimal except for adapting the traffic ratio from the databases. We refer to this as \emph{almost separation}. Ne
• We consider a nonlinear Neumann elliptic inclusion with a source (reaction term) consisting of a convex subdifferential plus a multivalued term depending on the gradient. The convex subdifferential incorporates in our framework problems with unilateral constraints (variational inequalities). Using topological methods and the Moreau-Yosida approximations of the subdifferential term, we establish the existence of a smooth solution.
• The goal of this paper is to develop a new upscaling method for multicontinua flow problems in fractured porous media. We consider a system of equations that describes flow phenomena with multiple flow variables defined on both matrix and fractures. To construct our upscaled model, we will apply the nonlocal multicontinua (NLMC) upscaling technique. The upscaled coefficients are obtained by using some multiscale basis functions, which are solutions of local problems defined on oversampled regions. For each continuum within a target coarse element, we will solve a local problem defined on an oversampling region obtained by extending the target element by a few coarse grid layers, with a set of constraints which enforce the local solution to have mean value one on the chosen continuum and zero mean otherwise. The resulting multiscale basis functions have been shown to have good approximation properties. To illustrate the idea of our approach, we will consider a dual continua background mod
• Several properties of a hypergeometric series related to the Gromov-Witten theory of some Calabi-Yau geometries were studied in [8]. These properties play a basic role in the study of higher genus Gromov-Witten theories. We extend the results of [8] to the equivariant setting for the study of higher genus equivariant Gromov-Witten theories of some Calabi-Yau geometries.
• We give an explicit and versatile parametrization of all positive selfadjoint extensions of a densely defined, closed, positive operator. In addition, we identify the Friedrichs extension by specifying the parameter to which it corresponds. This is a manuscript that was circulated as the first part of the preprint "Two papers on selfadjoint extensions of symmetric semibounded operators", INCREST Preprint Series, July 1981, Bucharest, Romania, but never published. In this LaTeX typeset version, only typos and a few inappropriate formulations have been corrected, with respect to the original manuscript. I decided to post it on arXiv since, taking into account recent articles, the results are still of current interest. Tiberiu Constantinescu died in 2005.
• In this note we associate a sequence of non-negative integers to any convergent series of positive real numbers and study this sequence for the series $\sum_{n \geq 1} n^{-k}$ where $k$ is an integer $\geq 2$.
• Let $V$ be an $n$-dimensional vector space over the finite field of order $q$. The spherical building $X_V$ associated with $GL(V)$ is the order complex of the nontrivial linear subspaces of $V$. Let $\mathfrak{g}$ be the local coefficient system on $X_V$, whose value on the simplex $\sigma=[V_0 \subset \cdots \subset V_p] \in X_V$ is given by $\mathfrak{g}(\sigma)=V_0$. Following the work of Lusztig and Dupont, we study the homology module $D^k(V)=\tilde{H}_{n-k-1}(X_V;\mathfrak{g})$. Our results include a construction of an explicit basis of $D^1(V)$, and the following twisted analogue of a result of Smith and Yoshiara: For any $1 \leq k \leq n-1$, the minimal support size of a non-zero $(n-k-1)$-cycle in the twisted homology $\tilde{H}_{n-k-1}(X_V;\wedge^k \mathfrak{g})$ is $\frac{(n-k+2)!}{2}$.
• Let $\mu$ be a probability measure in $\mathbb{C}$ with a continuous and compactly supported distribution function, let $z_1, \dots, z_n$ be independent random variables, $z_i \sim \mu$, and consider the random polynomial $$p_n(z) = \prod_{k=1}^{n}{(z - z_k)}.$$ We determine the asymptotic distribution of $\left\{z \in \mathbb{C}: p_n(z) = p_n(0)\right\}$. In particular, if $\mu$ is radial around the origin, then those solutions are also distributed according to $\mu$ as $n \rightarrow \infty$. Generally, the distribution of the solutions will reproduce parts of $\mu$ and condense another part on curves. We use these insights to study the behavior of the Blaschke unwinding series on random data.
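Note that $z=0$ always lies in the level set, since $p_n(0)=p_n(0)$ trivially; the remaining $n-1$ solutions carry the distributional content. For a single sample the set can be computed numerically as the roots of $p_n(z)-p_n(0)$ (a sketch with hypothetical Gaussian data standing in for $\mu$):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 12
zeros = rng.normal(size=n) + 1j * rng.normal(size=n)  # z_k i.i.d. complex Gaussian

# Coefficients of p_n(z) = prod_k (z - z_k), then of p_n(z) - p_n(0).
coeffs = np.poly(zeros)
p_at_0 = coeffs[-1]            # the constant term equals p_n(0)
shifted = coeffs.copy()
shifted[-1] -= p_at_0          # constant term is now exactly 0
solutions = np.roots(shifted)  # the level set {z : p_n(z) = p_n(0)}
```

As expected, one of the computed roots is (numerically) zero, and the degree is preserved.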
• It is shown that if the boundary of a Reinhardt domain in $\mathbb{C}^n$ contains the origin, then every holomorphic function on the domain which is infinitely differentiable up to the boundary extends holomorphically to a neighborhood of the origin.
• We prove that on a closed surface of genus $g$, the cardinality of a set of simple closed curves in which any two are non-homotopic and intersect at most once is $\lesssim g^2 \log(g)$. This bound matches the largest known constructions to within a logarithmic factor. The proof uses a probabilistic argument in graph theory. It generalizes as well to the case of curves that intersect at most $k$ times in pairs.
• Let $X\subset \mathbb{C}^n$ be a smooth irreducible affine variety of dimension $k$ and let $F: X\to \mathbb{C}^m$ be a polynomial mapping. We prove that if $m\ge k$, then there is a Zariski open dense subset $U$ in the space of linear mappings ${\mathcal L}( \mathbb{C}^n, \mathbb{C}^m)$ such that for every $L\in U$ the mapping $F+L$ is a finite mapping. Moreover, we can choose $U$ in such a way that all the mappings $F+L$, $L\in U$, are topologically equivalent.
• We prove that if $\mathcal{A}$ is a locally $\lambda$-presentable category and $T : \mathcal{A} \to \mathcal{A}$ is a $\lambda$-accessible functor then $T/\mathcal{A}$ is locally $\lambda$-presentable.
• We prove two maximal regularity results in spaces of continuous and H\"older continuous functions, for a mixed linear Cauchy-Dirichlet problem with a fractional time derivative $\mathbb{D}_t^\alpha$. This derivative is intended in the sense of Caputo, and $\alpha$ is taken in $(0, 2)$. In the case $\alpha = 1$, we obtain maximal regularity results for mixed parabolic problems already known in the mathematical literature.
• We introduce the non-homogeneous analogs of Van Schaftingen's classes. We show that these classes refine the embedding $W^{1,n}\subset bmo$. Analogous results are established on bounded Lipschitz domains and on Riemannian manifolds with bounded geometry.
• This paper proposes an L-BFGS framework based on (approximate) second-order information with stochastic batches, as a novel approach to finite-sum minimization problems. Different from classical L-BFGS, where stochastic batches lead to instability, we use a smooth estimate for the evaluations of the gradient differences while achieving acceleration by well-scaling the initial Hessians. We provide theoretical analyses for both the convex and nonconvex cases. In addition, we demonstrate that for the popular least-squares and cross-entropy losses, the algorithm admits a simple implementation in a distributed environment. Numerical experiments support the efficiency of our algorithms.
• In this paper, we study multiple-antenna wireless communication networks where a large number of devices simultaneously communicate with an access point. The capacity region of multiple-input multiple-output massive multiple access channels (MIMO mMAC) is investigated. While joint typicality decoding is utilized to establish the achievability of the capacity region for the conventional multiple access channel with a fixed number of users, the technique is not directly applicable to the MIMO mMAC [4]. Instead, an information-theoretic approach based on Gallager's error exponent analysis is exploited to characterize the capacity region of the MIMO mMAC. Theoretical results reveal that the capacity region of the MIMO mMAC is dominated by the sum rate constraint only, and the individual user rate is determined by a specific factor that corresponds to the allocation of the sum rate. The individual user rate in conventional MAC is not achievable with massive multiple access and the successive interference cance
• We will describe a one-step "Gorensteinization" process for a Schubert variety by blowing-up along its boundary divisor. The local question involves Kazhdan-Lusztig varieties which can be degenerated to affine toric schemes defined using the Stanley-Reisner ideal of a subword complex. The blow-up along the boundary in this toric case is in fact Gorenstein. We show that there exists a degeneration of the blow-up of the Kazhdan-Lusztig variety to this Gorenstein scheme, allowing us to extend this result to Schubert varieties in general. The potential use of this one-step Gorensteinization to describe the non-Gorenstein locus of Schubert varieties is discussed, as well as the relationship between Gorensteinizations and the convergence of the Nash blow-up process in the toric case.
• The problem of identifiability of finite mixtures of finite product measures is studied. A mixture model with $K$ mixture components and $L$ observed variables is considered, where each variable takes its value in a finite set with cardinality $M$. The variables are independent in each mixture component. The identifiability of a mixture model means the possibility of recovering the mixture component parameters by observing its mixture distribution. In this paper, we investigate fundamental relations between the identifiability of mixture models and the separability of their observed variables by introducing two types of separability: strongly and weakly separable variables. Roughly speaking, a variable is said to be separable if and only if it has some differences among its probability distributions in different mixture components. We prove that mixture models are identifiable if the number of strongly separable variables is greater than or equal to $2K-1$, independent of $M$. This f
• Under a necessary compatibility condition and some mild regularity assumptions on the interior and boundary data, we prove the existence, uniqueness, and stability of the solution of the generalized Darcy-Forchheimer model.
• In this paper, we examine the convergence of mirror descent in a class of stochastic optimization problems that are not necessarily convex (or even quasi-convex), and which we call variationally coherent. Since the standard technique of "ergodic averaging" offers no tangible benefits beyond convex programming, we focus directly on the algorithm's last generated sample (its "last iterate"), and we show that it converges with probability $1$ if the underlying problem is coherent. We further consider a localized version of variational coherence which ensures local convergence of stochastic mirror descent (SMD) with high probability. These results contribute to the landscape of non-convex stochastic optimization by showing that (quasi-)convexity is not essential for convergence to a global minimum: rather, variational coherence, a much weaker requirement, suffices. Finally, building on the above, we reveal an interesting insight regarding the convergence speed of SMD: in problems with sha
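For concreteness, on the unit simplex the entropic mirror map turns mirror descent into the exponentiated-gradient update, and "last iterate" means exactly the final point of this recursion. A noiseless toy run on a convex objective, purely to illustrate the update rule (the paper's setting is stochastic and variationally coherent, which this sketch does not capture):

```python
import numpy as np

def smd_simplex(grad, x0, steps, eta):
    """Mirror descent with the entropic mirror map on the probability simplex
    (exponentiated gradient). Returns the last iterate, not an ergodic average."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x * np.exp(-eta * grad(x))  # multiplicative-weights step
        x = x / x.sum()                 # re-project onto the simplex
    return x

# Linear objective <c, x> on the simplex, minimized at the vertex e_3.
c = np.array([1.0, 0.5, 0.2])
x_last = smd_simplex(lambda x: c, np.ones(3) / 3, steps=200, eta=0.5)
```

The last iterate concentrates on the coordinate with the smallest cost, i.e. it converges to the global minimizer.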
• This paper deals with the equation $-\Delta u+\mu u=f$, $\mu$ a positive constant, on high-dimensional spaces $\mathbb{R}^d$. If the right-hand side $f$ is a rapidly converging series of separable functions, the solution $u$ can be represented in the same way. These constructions are based on the approximation of the function $1/r$ by sums of exponential functions. We derive results of related kind for more general right-hand sides $f(x)=F(Tx)$ that are restrictions of separable functions $F$ on a higher dimensional space to a linear subspace of arbitrary orientation.
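The approximation of $1/r$ by sums of exponentials mentioned above can be obtained from the integral representation $1/r=\int_0^\infty e^{-rt}\,dt$: substituting $t=e^s$ and applying a trapezoidal rule yields $1/r\approx\sum_j \omega_j e^{-\alpha_j r}$, accurate on a fixed interval of $r$. A sketch of this standard construction (the step size and truncation bounds below are our own choices, tuned for $r\in[1,10]$):

```python
import numpy as np

def exp_sum_inverse(h=0.25, s_min=-15.0, s_max=6.0):
    """Approximate 1/r by a sum of exponentials via
    1/r = int exp(-r e^s + s) ds, discretized by the trapezoidal rule.
    Returns a callable r -> sum_j w_j * exp(-a_j * r)."""
    s = np.arange(s_min, s_max + h, h)
    a = np.exp(s)        # exponents t_j = e^{s_j}
    w = h * np.exp(s)    # weights, including the Jacobian dt = e^s ds
    return lambda r: float(np.sum(w * np.exp(-a * r)))

approx = exp_sum_inverse()
```

Because the transformed integrand decays double-exponentially, the trapezoidal rule converges very rapidly in the step size; with the parameters above the relative error on $[1,10]$ is far below $10^{-4}$.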
• Motivated by the problem of optimal portfolio liquidation under transient price impact, we study the minimization of energy functionals with completely monotone displacement kernel under an integral constraint. The corresponding minimizers can be characterized by Fredholm integral equations of the second kind with constant free term. Our main result states that minimizers are analytic and have a power series development in terms of even powers of the distance to the midpoint of the domain of definition, with nonnegative coefficients. We show moreover that our minimization problem is equivalent to the minimization of the energy functional under a nonnegativity constraint.
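A Nyström discretization illustrates the objects involved: with a completely monotone kernel such as $e^{-|x-y|}$ and a constant free term, the solution of $f(x)+\int_0^1 e^{-|x-y|}f(y)\,dy=1$ is positive and symmetric about the midpoint, consistent with the even-power series described above (the kernel and domain here are toy assumptions, not the paper's setup):

```python
import numpy as np

# Nystrom discretization of f(x) + int_0^1 G(|x-y|) f(y) dy = 1 on [0, 1],
# with the completely monotone kernel G(u) = exp(-u).
n = 201
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))        # trapezoidal quadrature weights
w[0] *= 0.5
w[-1] *= 0.5
K = np.exp(-np.abs(x[:, None] - x[None, :]))
# Discrete second-kind equation: (I + K diag(w)) f = 1.
f = np.linalg.solve(np.eye(n) + K * w[None, :], np.ones(n))
```

Since the kernel and the quadrature are invariant under $x\mapsto 1-x$, the computed solution is symmetric about $x=\tfrac12$ up to solver roundoff.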
• We prove a local-in-time regularity criterion for the 3D Navier-Stokes equations. In particular from the criterion we obtain a new partial regularity result on the dimension of possible singular times. It is shown that the Hausdorff dimension of possible singular times for weak solutions $u\in L^s([0,T]\times \mathbb{R}^3)$ with $4 \leq s \leq 5$ is at most $\frac{5}{2}-\frac{s}{2}$ improving the previous bound $\frac{1}{2}$.
• For tame arbitrary-length toral, also called positive regular, supercuspidal representations of a simply connected and semisimple $p$-adic group $G$, constructed as per Adler-Yu, we determine which components of their restriction to a maximal compact subgroup are types. We give conditions under which there is a unique such component, and then present a class of examples for which there is not, disproving the strong version of the conjecture of unicity of types on maximal compact open subgroups. We restate the unicity conjecture, and prove it holds for the groups and representations under consideration under a mild condition on depth.
• We introduce patterns on a triangular grid generated by paperfolding operations. We show that in case these patterns are defined using a periodic sequence of foldings, they can also be generated using substitution rules and compute eigenvalues and eigenvectors of corresponding matrices. We also prove that densities of all basic triangles are equal in these patterns.
• For any real number $p\in [1,+\infty)$, we characterize the operations $\mathbb{R}^I\to \mathbb{R}$ that preserve $p$-integrability over finite measure spaces, i.e., the operations under which, for every finite measure $\mu$, the set $\mathcal{L}^p(\mu)$ is closed. We investigate the infinitary variety of algebras whose terms are exactly such operations. It turns out that this variety coincides with the much studied category of Dedekind $\sigma$-complete Riesz spaces with weak unit. We also prove that $\mathbb{R}$ generates this variety. From this, we exhibit a concrete model of the free Dedekind $\sigma$-complete Riesz spaces with weak unit. Analogous results are obtained for operations that preserve $p$-integrability over every (not necessarily finite) measure space. The corresponding variety is shown to coincide with the category of Dedekind $\sigma$-complete truncated Riesz spaces, where truncation is meant in the sense of R.N. Ball.
• In this paper, we propose an approach to determine the optimal operation strategies for a PV-diesel-battery microgrid covering industrial loads under grid blackouts. A special property of the industrial loads is that they have low power factors, so the reactive power consumption of the load cannot be neglected. In this study, a novel model of a PV-battery-diesel microgrid is developed considering the active as well as reactive power of the microgrid components. Furthermore, an optimization approach is proposed to optimize the active as well as reactive power flow in the microgrid for covering the load demand while decreasing the power consumption from the grid, minimizing the diesel generator (DG) operation cost and maximizing the power consumed from the PV-array. It has been found that the proposed operation strategy yields a large reduction in the consumed energy cost and the PV curtailment.
• Roll damping is an important problem of ship motion control since excessive roll motion may cause motion sickness of human occupants and damage fragile cargo. Actuators used for roll damping (fins, rudders and thrusters) inevitably create a rotating yaw moment, interfering thus with the vessel's autopilot (heading control system). To reach and maintain the "trade-off" between the concurrent goals of accurate vessel steering and roll damping, an optimization procedure in general needs to take place where the cost functional penalizes the roll angle, the steering error and the control effort. Since the vessel's motion is influenced by the uncertain wave disturbance, the optimal value of this functional and the resulting optimal process are also uncertain. Standard approaches, prevailing in the literature, approximate the wave disturbance by the "colored noise" with a known spectral density, reducing the optimization problem to conventional loop-shaping, LQG or $\mathcal{H}_\infty$ contro
• This work is devoted to the study of orientation theory in arithmetic geometry within the motivic homotopy theory of Morel and Voevodsky. The main tool is a formulation of the absolute purity property for an \emph{arithmetic cohomology theory}, either represented by a cartesian section of the stable homotopy category or satisfying suitable axioms. We give many examples, formulate conjectures and prove a useful property of analytical invariance. Within this axiomatic framework, we thoroughly develop the theory of characteristic and fundamental classes, Gysin and residue morphisms. This is used to prove Riemann-Roch formulas, in Grothendieck style for arbitrary natural transformations of cohomologies, and a new one for residue morphisms. They are applied to rational motivic cohomology and \'etale rational $\ell$-adic cohomology, as expected by Grothendieck in \cite[XIV, 6.1]{SGA6}.
• The reduced k-particle density matrix of a density matrix on finite-dimensional, fermion Fock space can be defined as the image under the orthogonal projection in the Hilbert-Schmidt geometry onto the space of k-body observables. A proper understanding of this projection is therefore intimately related to the representability problem, a long-standing open problem in computational quantum chemistry. Given an orthonormal basis in the finite-dimensional one-particle Hilbert space, we explicitly construct an orthonormal basis of the space of Fock space operators which restricts to an orthonormal basis of the space of k-body operators for all k.
• Given a symmetric operad $\mathcal{P}$ and a $\mathcal{P}$-algebra $V$, the universal enveloping algebra ${\mathsf{U}_{\mathcal{P}}}$ is an associative algebra whose category of modules is isomorphic to the abelian category of $V$-modules. We study the notion of PBW property for universal enveloping algebras over an operad. In the case where $\mathcal{P}$ is Koszul, a criterion for the PBW property is found, and a necessary condition on the Hilbert series of $\mathcal{P}$ is given. Moreover, given any symmetric operad $\mathcal{P}$ together with a Gr\"obner basis $G$, a condition is given on the structure of the underlying trees associated with leading monomials of $G$ sufficient for the PBW property to hold. Examples are provided.
• In this paper, we study the Hankel determinant generated by a singularly perturbed Gaussian weight $$w(x,t)=\mathrm{e}^{-x^{2}-\frac{t}{x^{2}}},\;\;x\in(-\infty, \infty),\;\;t>0.$$ By using the ladder operator approach associated with the orthogonal polynomials, we show that the logarithmic derivative of the Hankel determinant satisfies both a non-linear second order difference equation and a non-linear second order differential equation. The Hankel determinant also admits an integral representation involving a Painlev\'e III$'$. Furthermore, we consider the asymptotics of the Hankel determinant under a double scaling, i.e. $n\rightarrow\infty$ and $t\rightarrow 0$ such that $s=(2n+1)t$ is fixed. The asymptotic expansions of the scaled Hankel determinant for large $s$ and small $s$ are established, from which Dyson's constant appears.
• Smooth entropies are a tool for quantifying resource trade-offs in (quantum) information theory and cryptography. In typical bi- and multi-partite problems, however, some of the sub-systems are often left unchanged and this is not reflected by the standard smoothing of information measures over a ball of close states. We propose to smooth instead only over a ball of close states which also have some of the reduced states on the relevant sub-systems fixed. This partial smoothing of information measures naturally allows to give more refined characterizations of various information-theoretic problems in the one-shot setting. In particular, we immediately get asymptotic second-order characterizations for tasks such as privacy amplification against classical side information or classical state splitting. For quantum problems like state merging the general resource trade-off is tightly characterized by partially smoothed information measures as well. However, for quantum systems we can so fa
• We give the first properties of independent Bernoulli percolation, for oriented graphs on the set of vertices $\mathbb{Z}^d$ that are translation-invariant and may contain loops. We exhibit some examples showing that the critical probability for the existence of an infinite cluster may be direction-dependent. Then, we prove that the phase transition in a given direction is sharp, and study the links between percolation and first-passage percolation on these oriented graphs.
• In this article we define Perfectoid Tate curves and compute their cohomology using a \v{C}ech complex and also define perfectoid versions of Weierstra{\ss} and Theta functions.
• This paper investigates phase transitions on the optimality gaps in the Optimal Power Flow (OPF) problem on real-world power transmission systems operated in France. The experimental results study optimal power flow solutions for more than 6000 scenarios on the networks with various load profiles, voltage feasibility regions, and generation capabilities. The results show that bifurcations between primal solutions and the QC, SOCP, and SDP relaxation techniques frequently occur when approaching congestion points. Moreover, the results demonstrate the existence of multiple bifurcations for certain scenarios when load demands are increased uniformly. A preliminary analysis of these bifurcations was performed.
• Piecewise Deterministic Markov Processes (PDMPs) are studied in a general framework. First, different constructions are proven to be equivalent. Second, we introduce a coupling between two PDMPs following the same differential flow which implies quantitative bounds on the total variation between the marginal distributions of the two processes. Finally, two results are established regarding the invariant measures of PDMPs. A practical condition to show that a probability measure is invariant for the associated PDMP semi-group is presented. Then, a bound in $V$-norm between the invariant probability measures of two PDMPs following the same differential flow is established. This last result is then applied to study the asymptotic bias of some non-exact PDMP MCMC methods.
• We show that if a Fano manifold does not admit Kahler-Einstein metrics then the Kahler potentials along the continuity method subconverge to a function with analytic singularities along a subvariety which solves the homogeneous complex Monge-Ampere equation on its complement, confirming an expectation of Tian-Yau.
• We study the family of depolarizations of a squarefree monomial ideal $I$, i.e. all monomial ideals whose polarization is $I$. We describe a method to find all depolarizations of $I$ and study some of the properties they share and some they do not share. We then apply polarization and depolarization tools to study the reliability of multi-state coherent systems via binary systems and vice versa.
• We consider the classical Merton problem of terminal wealth maximization in finite horizon. We assume that the drift of the stock follows an Ornstein-Uhlenbeck process and its volatility follows a GARCH(1) process; in particular, both the mean and the volatility are unbounded. We assume that there is Knightian uncertainty on the parameters of both the mean and the volatility. We take the investor to have a logarithmic utility function, and solve the corresponding utility maximization problem explicitly. To the best of our knowledge, this is the first work on utility maximization with unbounded mean and volatility under Knightian uncertainty with nondominated priors.
• Consider a multiplayer game, and assume a system-level objective function, which the system wants to optimize, is given. This paper aims at accomplishing this goal via potential game theory when players can only get part of the other players' information. The technique is to design a set of local-information-based utility functions which guarantee that the designed game is potential, with the system-level objective function as its potential function. First, the existence of local-information-based utility functions can be verified by checking whether the corresponding linear equations have a solution. Then an algorithm is proposed to calculate the local-information-based utility functions when the utility design equations have solutions. Finally, the consensus problem of multiagent systems is considered to demonstrate the effectiveness of the proposed design procedure.
• Utilizing common resources is always a dilemma for community members. While cooperator players restrain themselves and consider the proper state of resources, defectors demand more than their supposed share for a higher payoff. To avoid the tragedy of the common state, punishing the latter group seems to be an adequate reaction. This conclusion, however, is less straightforward when we acknowledge the fact that resources are finite and even a renewable resource has limited growing capacity. To clarify the possible consequences, we consider a coevolutionary model where beside the payoff-driven competition of cooperator and defector players the level of a renewable resource depends sensitively on the fraction of cooperators and the total consumption of all players. The applied feedback-evolving game reveals that beside a delicately adjusted punishment it is also fundamental that cooperators should pay special attention to the growing capacity of renewable resources. Otherwise, even the u
• This paper deals with an SIR model with saturated incidence rate affected by inhibitory effect and saturated treatment function. Two control functions have been used, one for vaccinating the susceptible population and other for the treatment control of infected population. We have analysed the existence and stability of equilibrium points and investigated the transcritical and backward bifurcation. The stability analysis of non-hyperbolic equilibrium point has been performed by using Centre manifold theory. The Pontryagin's maximum principle has been used to characterize the optimal control whose numerical results show the positive impact of two controls mentioned above for controlling the disease. Efficiency analysis is also done to determine the best control strategy among vaccination and treatment.
• Motivated by applications to image reconstruction, in this paper we analyse a \emph{finite-difference discretisation} of the Ambrosio-Tortorelli functional. Denoted by $\varepsilon$ the elliptic-approximation parameter and by $\delta$ the discretisation step-size, we fully describe the relative impact of $\varepsilon$ and $\delta$ in terms of $\Gamma$-limits for the corresponding discrete functionals, in the three possible scaling regimes. We show, in particular, that when $\varepsilon$ and $\delta$ are of the same order, the underlying lattice structure affects the $\Gamma$-limit which turns out to be an anisotropic free-discontinuity functional.
• Kronecker graphs, obtained by repeatedly performing the Kronecker product of the adjacency matrix of an "initiator" graph with itself, have risen in popularity in network science due to their ability to generate complex networks with real-world properties. In this paper, we explore spatial search by continuous-time quantum walk on Kronecker graphs. Specifically, we give analytical proofs for quantum search on first-, second-, and third-order Kronecker graphs with the complete graph as the initiator, showing that search takes Grover's $O(\sqrt{N})$ time. Numerical simulations indicate that higher-order Kronecker graphs with the complete initiator also support optimal quantum search.
• We prove that, under a mild assumption, the heart H of a twin cotorsion pair ((S,T),(U,V)) on a triangulated category C is a quasi-abelian category. If C is also Krull-Schmidt and T=U, we show that the heart of the cotorsion pair (S,T) is equivalent to the Gabriel-Zisman localisation of H at the class of its regular morphisms. In particular, suppose C is a cluster category with a rigid object R and [X_R] the ideal of morphisms factoring through X_R=Ker(Hom(R,-)), then applications of our results show that C/[X_R] is a quasi-abelian category. We also obtain a new proof of an equivalence between the localisation of this category at its class of regular morphisms and a certain subfactor category of C.
• In this paper, we consider radial distributional solutions of the quasilinear equation $-\Delta_N u=f(u)$ in the punctured open ball $B_R\backslash\{0\}\subset \mathbb{R}^N$, $N \geq 2$. We obtain sharp conditions on the nonlinearity $f$ for extending such solutions to the whole ball $B_R$ while preserving their regularity. For a certain class of nonlinearities $f$ we obtain the existence of singular solutions and deduce upper and lower estimates on the growth rate near the singularity.
• Motivated by Wick-rotations of pseudo-Riemannian manifolds, we study real geometric invariant theory (GIT) and compatible representations. We extend some of the results from earlier works \cite{W2,W1}, in particular, we give sufficient and necessary conditions for when pseudo-Riemannian manifolds are Wick-rotatable to other signatures. For arbitrary signatures, we consider a Wick-rotatable pseudo-Riemannian manifold with closed $O(p,q)$-orbits, and thus generalise the existence condition found in \cite{W1}. Using these existence conditions we also derive an invariance theorem for Wick-rotations of arbitrary signatures.
• This paper presents reduction theorems for stability, attractivity, and asymptotic stability of compact subsets of the state space of a hybrid dynamical system. Given two closed sets $\Gamma_1 \subset \Gamma_2 \subset \Re^n$, with $\Gamma_1$ compact, the theorems presented in this paper give conditions under which a qualitative property of $\Gamma_1$ that holds relative to $\Gamma_2$ (stability, attractivity, or asymptotic stability) can be guaranteed to also hold relative to the state space of the hybrid system. As a consequence of these results, sufficient conditions are presented for the stability of compact sets in cascade-connected hybrid systems. We also present a result for hybrid systems with outputs that converge to zero along solutions. If such a system enjoys a detectability property with respect to a set $\Gamma_1$, then $\Gamma_1$ is globally attractive. The theory of this paper is used to develop a hybrid estimator for the period of oscillation of a sinusoidal signal.
• Let $(X,\omega)$ be a compact K\"ahler manifold and $\mathcal H$ the space of K\"ahler metrics cohomologous to $\omega$. If a cscK metric exists in $\mathcal H$, we show that all finite energy minimizers of the extended K-energy are smooth cscK metrics, partially confirming a conjecture of Y.A. Rubinstein and the second author. As an immediate application, we obtain that existence of a cscK metric in $\mathcal H$ implies J-properness of the K-energy, thus confirming one direction of a conjecture of Tian. Exploiting this properness result we prove that an ample line bundle $(X,L)$ admitting a cscK metric in $c_1(L)$ is $K$-polystable.
• We study the stochastic heat equation driven by an additive infinite-dimensional fractional Brownian noise on the unit sphere $\mathbb{S}^{2}$. The existence and uniqueness of its solution in a certain Sobolev space is investigated and sample path regularity properties are established. In particular, the exact uniform modulus of continuity of the solution in the time/spatial variable is derived.
• The use of low-resolution analog-to-digital converters (ADCs) can significantly reduce power consumption and hardware cost. However, their resulting severe nonlinear distortion makes achieving reliable data transmission challenging. For orthogonal frequency division multiplexing (OFDM) transmission, the orthogonality among subcarriers is destroyed. This invalidates conventional OFDM receivers relying heavily on this orthogonality. In this study, we move on to quantized OFDM (Q-OFDM) prototyping implementation based on our previous achievement in optimal Q-OFDM detection. First, we propose a novel Q-OFDM channel estimator by extending the generalized Turbo (GTurbo) framework formerly applied for optimal detection. Specifically, we integrate a type of robust linear OFDM channel estimator into the original GTurbo framework, and derive its corresponding extrinsic information to guarantee its convergence. We also propose feasible schemes for automatic gain control, noise power estimation, a
• In this paper we use the orthogonal system of Jacobi polynomials as a tool to study the operators of fractional integration and differentiation in the Riemann-Liouville sense on a compact interval. This approach has some advantages and allows us to reformulate well-known results of fractional calculus in a new setting. We consider several modifications of Jacobi polynomials which give us the opportunity to study invariance properties of these operators. In this direction we show that the operator of fractional integration, acting in weighted Lebesgue spaces of square-summable functions, has a sequence of nested invariant subspaces. The theorem on the action of the fractional integration operator, formulated in terms of Legendre coefficients, is of particular interest. Finally, we obtain a sufficient condition, in terms of Legendre coefficients, for the representation of a function by a fractional integral.
• We classify four-dimensional shrinking Ricci solitons satisfying $Sec \geq \frac{1}{48} R$, where $Sec$ and $R$ denote the sectional and the scalar curvature, respectively. They are isometric to either $\mathbb{R}^{4}$ (and quotients), $\mathbb{S}^{4}$, $\mathbb{RP}^{4}$ or $\mathbb{CP}^{2}$ with their standard metrics.
• We consider the Cauchy problem for the 2D gravity water wave equation. Recently Wu \cite{Wu15, Wu18} proved the local well-posedness of the equation in a regime which allows interfaces with angled crests as initial data. In this work we study properties of these singular solutions and prove that the singularities of these solutions are "rigid". More precisely we prove that an initial interface with angled crests remains angled crested, the Euler equation holds point-wise even on the boundary, the particle at the tip stays at the tip, the acceleration at the tip is the one due to gravity and the angle of the crest does not change nor does it tilt. We also show that the existence result of Wu \cite{Wu15} applies not only to interfaces with angled crests, but also allows certain types of cusps.
• We define a three parameter family of Bell pseudo-involutions in the Riordan group. The defining sequences have generating functions that are expressible as continued fractions. We exhibit Hankel transforms associated with these sequences, and to the $A$-sequences of the Riordan arrays, that give rise to Somos $4$ sequences. We give examples where these sequences can be associated with elliptic curves, and we exhibit instances where elliptic curves can give rise to associated Riordan pseudo-involutions.
## Overview
It's often said that you can't win at a casino, that even if you get lucky and win a few times you'll end up losing money in the long run.
In this post, we take an analytical look at whether this statement is true and why.
## Expected Returns - Single Die Game
First, let’s start with a simple game where a single 6-sided die is thrown, and you win if you correctly guess the number it lands on.
If it’s a fair die (not loaded in any way) and thrown on a flat surface, the probability of it landing on any one number is equal to all other possible numbers. So since there are 6 sides (i.e. 6 possible outcomes), the probability of each number is $${1 \over 6}$$.
Now let’s say the payout is 3 to 1, meaning you win three times your bet if you guess correctly but lose your bet if you’re wrong. What are your expected returns on this game?
Since the probability of you winning is $${1 \over 6}$$ and the payout ratio is 3, your expected return on a $1 bet is the payout multiplied by the probability of winning, minus the bet amount multiplied by the probability of losing: \eqalign{ ER &= {Payout \times P(Win) - Bet \times P(Lose)} \cr &= {(3 \times {1 \over 6}) - (1 \times {5 \over 6})} \cr &= -0.33 \cr &= -33\% } This means you can expect to lose a third of your money in the long term if you keep playing this game with $1 bets.
Would you play such a game? I most certainly wouldn’t!
What if the payout were increased to 5 to 1? Now the expected return is

${5 \over 6} - {5 \over 6} = 0$

meaning you will break even playing this game over the long term.
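Both expectations are easy to check with a short simulation. Here is a quick Python sketch (the payout values match the two examples above; the function names are mine):

```python
import random

def expected_return(payout, sides=6):
    """Exact expected return per $1 bet on guessing one face of a fair die."""
    p_win = 1 / sides
    return payout * p_win - 1 * (1 - p_win)

def simulate(payout, trials=200_000, sides=6, seed=1):
    """Monte Carlo estimate of the same quantity."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        # our guess wins with probability 1/sides; otherwise the $1 bet is lost
        total += payout if rng.randrange(sides) == 0 else -1
    return total / trials

print(expected_return(3))  # about -0.333: lose a third per bet on average
print(expected_return(5))  # about 0.0: break even
```

Running `simulate(3)` lands close to the exact −33% figure, and the estimate tightens as you add trials.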
## Expected Returns - Roulette
Next, let’s calculate what the expected returns at a roulette table looks like. To make it simple, we’ll start with the single number bet.
At first glance, it looks like a slightly skewed payout where you pick a number out of 36 to bet on, and if the ball lands on your chosen number you’ll get 35 times your bet as payout.
But wait. If you observe the roulette wheel, there are more than 36 numbers! The common European (French) system has an additional 0, the American system has both 0 and 00, while some rarer systems even have a 000.
This means that on a European roulette the expected return is
${35 \over 37} - {36 \over 37} = -0.027 = -2.7\%$
The expected return on an American roulette is even worse, at
${35 \over 38} - {37 \over 38} = -0.053 = -5.3\%$
How about other bet types, such as Even/Odd, dozen, column and such? The existence of the 0 slot ensures that the probability of winning is slightly lower than the probability of losing.
For example, let’s look at an Even bet, where you win if the ball lands on an even number and lose if it lands on an odd number or zero. The payout is 1 to 1, meaning a winning bet pays you an amount equal to your stake.

But since the odds of winning are $${18 \over 37}$$ versus the odds of losing at $${19 \over 37}$$, the expected return is

${18 \over 37} - {19 \over 37} = -0.027 = -2.7\%$
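The same arithmetic for the roulette bets, written out exactly with rational numbers (a Python sketch; note that an even-money bet wins on 18 of the 36 numbered pockets):

```python
from fractions import Fraction

def single_number_edge(pockets, payout=35):
    """Expected return per unit bet on a single number with a 35-to-1 payout."""
    p_win = Fraction(1, pockets)
    return payout * p_win - (1 - p_win)

def even_money_edge(pockets, winners=18):
    """Even-money bets (Even/Odd, Red/Black, ...) win on 18 numbered pockets."""
    p_win = Fraction(winners, pockets)
    return p_win - (1 - p_win)

print(float(single_number_edge(37)))  # European (single 0): -1/37 ≈ -2.7%
print(float(single_number_edge(38)))  # American (0 and 00): -2/38 ≈ -5.3%
print(float(even_money_edge(37)))     # Even bet on a European wheel: also -1/37
```

Using `Fraction` keeps the results exact, which makes it obvious that both European bets carry the identical −1/37 edge.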
## House Edge
This negative skew from a theoretically fair payout is called the "house edge": the mathematical guarantee that the casino (also called the "house") will always earn money over the long run.
Different casino games have different mechanisms of ensuring that the house edge exists, with some having higher edge than others.
For more complicated games such as Texas Hold’em, you need to calculate the probability of every possible outcome and compare each against its payout or loss. Summing all of these up gives the expected return of playing such a game.
The house edge can be lower than 1% for certain card-based table games, while it can be more than 10% for slot machines.
## Conclusion
Let’s imagine a casino with a roulette section of 20 European tables, where on average 500 patrons play over the course of each day, each betting a total of $500 over the duration of their play. Due to the house edge of 2.7%, the casino can expect to earn an average of $6,750 every day from this section alone, or $2.46 million a year!
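For reference, the back-of-the-envelope arithmetic behind those figures (a sketch; the patron count and average wager are the assumed numbers above):

```python
patrons_per_day = 500
avg_wagered = 500        # dollars wagered per patron over their play
house_edge = 0.027       # European roulette, 2.7%

daily_take = patrons_per_day * avg_wagered * house_edge
print(round(daily_take))        # 6750 dollars per day
print(round(daily_take * 365))  # 2463750 dollars per year, about $2.46M
```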
If you think about all the different games available at any decent-sized casino, each of them with a distinct house edge, it’s easy to see why casinos are such lucrative businesses.
You might want to think about this the next time you’re on holiday, see a casino and get tempted to try your luck at winning a quick buck. | |
# Set of pointwise convergence for power series and their derivatives
Consider the power series $$\sum_{n=0}^{+\infty} f_n(z)=\sum_{n=0}^{+\infty} a_nz^n$$.
We define $$P=\{z \in \mathbb{C} \mid \sum_{n=0}^{+\infty} f_n(z) \text{ converges}\}$$, $$P'=\{z \in \mathbb{C} \mid \sum_{n=0}^{+\infty} f_n'(z) \text{ converges}\}$$.
Is it always true that $$P=P'$$?
I know that $$\sum_{n=0}^{+\infty} f_n$$ and $$\sum_{n=0}^{+\infty} f_n'$$ have the same radius of convergence, but maybe we can still have that $$P \neq P'$$.
Namely, maybe we can find $$\sum_{n=0}^{+\infty} f_n$$ with radius of convergence $$R \in (0,+\infty)$$ such that $$\sum_{n=0}^{+\infty} f_n(R)$$ converges, but at the same time $$\sum_{n=0}^{+\infty} f_n'(R)$$ doesn't converge, and so we have $$P \neq P'$$.
Thank you!
• Consider $f_n (z) = \frac{1}{{n^2 }}z^n$.
– Gary
Sep 15 '20 at 7:36
Example: $$\sum_{n=1}^{\infty}\frac{1}{n^2}z^n$$ is convergent for each $$z$$ with $$|z| \le 1$$, while its term-by-term derivative
$$\sum_{n=1}^{\infty}\frac{1}{n}z^{n-1}$$ is divergent at $$z=1.$$
The series $$\sum_{n=0}^{\infty}z^n$$ has radius of convergence $$1$$ and diverges at each point on the circle $$|z|=1$$.
The series $$\sum_{n=1}^{\infty}\frac1n z^n$$ has radius of convergence $$1$$ and diverges at $$z=1$$ but converges for all other $$z$$ with $$|z|=1$$. | |
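A quick numerical illustration of this boundary behaviour at $$z=1$$ (a plain-Python sketch of the partial sums):

```python
import math

N = 100_000
series_at_1 = sum(1 / n**2 for n in range(1, N + 1))   # the series itself
derivative_at_1 = sum(1 / n for n in range(1, N + 1))  # term-by-term derivative

print(series_at_1)      # approaches pi^2/6 ≈ 1.6449 (converges)
print(derivative_at_1)  # the harmonic sum, grows like ln N (diverges)
```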
## Thursday, October 24, 2019
### Model Distillation
Model Distillation is the process of taking a big model, or an ensemble of models, and producing a smaller model that captures most of the performance of the original. It can also be described as a blind model-replication method.
The reasons for doing so are:
1. improved run-time performance (fewer FLOPs)
2. possibly better generalization because of the smaller model's simplicity
3. you don't have access to the training pipeline or data of the original model
4. you have access to a remotely deployed model and want to replicate it (this happens more often than you might imagine)
5. the original model may be too complicated
6. insights that may arise from the process itself
### How it works
Assume an MNIST classifier $$F_{MNIST}$$ composed of an ensemble of $$N$$ convolutional deep neural networks which, for an input image $$x_i$$, produces logits $$z_i$$ that are then converted to a probability for each of the possible labels $$C_{0}-C_{9}$$.
The distillation process will give us an $$F_{MNIST_{distilled}}$$ composed of a single deep neural network that will approximate the classification results of the bigger ensemble of models.
In distillation, knowledge is transferred from the teacher model to the student by minimizing a loss function in which the target is the distribution of class probabilities predicted by the teacher model, that is, the output of a softmax function on the teacher model's logits.
Logits $$z_j$$ are converted to probabilities $$P(C_i|x)$$ using the softmax layer:
$$p_i = \frac {exp(z_i)} {\sum_{j}exp(z_j)}$$
However, in many cases, this probability distribution has the correct class at a very high probability, with all other class probabilities very close to 0. As such, it doesn't provide much information beyond the ground truth labels already provided in the dataset.
To tackle this issue, Hinton et al., 2015 introduced the concept of "softmax temperature". The probability $$q_i$$ is computed from the logit $$z_i$$ using a scalar softmax temperature $$T$$:
$$q_i = \frac {exp(\frac{z_i}{T})} {\sum_{j}exp(\frac{z_j}{T})}$$
where $$T$$ is a temperature that is normally set to 1. Using a higher value for $$T$$ produces a softer probability distribution over classes, meaning the values are somewhat diffused: a 0.999 probability may become 0.9, with the rest spread across the other classes.
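As a quick illustration (a minimal pure-Python sketch; the logits are made-up numbers), raising $$T$$ visibly flattens the distribution:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; T=1 recovers the ordinary softmax."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [10.0, 2.0, 1.0]
print(softmax(logits, T=1.0))  # sharply peaked, close to one-hot
print(softmax(logits, T=5.0))  # softer: mass spreads to the other classes
```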
In the simplest form of distillation, knowledge is transferred to the distilled model by training it on a transfer set, using a soft target distribution for each case that is produced by running the cumbersome model with a high temperature in its softmax. The same high temperature is used when training the distilled model; after it has been trained, it uses a temperature of 1. When the correct labels are known for all or some of the transfer set, this method can be significantly improved by also training the distilled model to produce the correct labels. One way to do this is to use the correct labels to modify the soft targets, but a better way is to simply use a weighted average of two different objective functions.
1. The first objective function is the cross entropy with the soft targets and this cross entropy is computed using the same high temperature in the softmax of the distilled model as was used for generating the soft targets from the cumbersome model.
2. The second objective function is the cross entropy with the correct labels. This is computed using exactly the same logits in the softmax of the distilled model, but at a temperature of 1.
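Schematically, the weighted combination looks like this (a pure-Python sketch; `alpha`, `T` and the example logits are made up, and the soft-target term is scaled by $$T^2$$ as suggested by Hinton et al. to keep gradient magnitudes comparable):

```python
import math

def softmax(logits, T=1.0):
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(targets, probs):
    # H(targets, probs) = -sum_i targets_i * log(probs_i)
    return -sum(t * math.log(p) for t, p in zip(targets, probs))

def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.7):
    """alpha-weighted average of the soft-target and hard-label objectives."""
    # Objective 1: cross entropy with the teacher's soft targets, both at temperature T
    soft_targets = softmax(teacher_logits, T)
    soft_loss = cross_entropy(soft_targets, softmax(student_logits, T))
    # Objective 2: cross entropy with the one-hot correct label, at temperature 1
    one_hot = [1.0 if i == label else 0.0 for i in range(len(student_logits))]
    hard_loss = cross_entropy(one_hot, softmax(student_logits, T=1.0))
    return alpha * (T ** 2) * soft_loss + (1 - alpha) * hard_loss

print(distillation_loss([2.0, 1.0, 0.1], [3.0, 0.5, 0.2], label=0))
```

A student whose logits match the teacher's minimizes the soft term (it reduces to the entropy of the soft targets), which is exactly the behaviour the distillation objective rewards.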
This very simple operation can have a multitude of knobs and parameters to adjust but the core essence is very simple and works quite well. | |
# Math Help - convergence
1. ## convergence
$\displaystyle\mathop{\lim}\limits_{n \to \infty} \sqrt[{n^2}]{\prod\limits_{i=1}^n {n\choose i}}$
2. Originally Posted by mms
$
\mathop {\lim }\limits_{n \to \infty } \sqrt[{n^2 }]{\prod\limits_{i = 1}^n \binom{n}{i}}
$
I get the answer to be $\sqrt e\approx 1.6487$.
Start by checking that $\int_0^1\!\!\!x\ln x\,dx = -1/4$ (easy integration by parts). Approximating this integral by a Riemann sum, you see that $\frac1n\sum_{k=1}^n\frac kn\ln\Bigl(\frac kn\Bigr)\sim -\frac14$ (for large n). Therefore $\frac1{n^2}\sum_{k=1}^nk\ln k\sim -\frac14 + \frac{(n+1)\ln n}{2n}$ .....(*).
Next,
\begin{aligned}\prod_{k=1}^n{n\choose k} &= \frac n1\cdot\frac{n(n-1)}{1\cdot2}\cdot \frac{n(n-1)(n-2)}{1\cdot2\cdot3}\cdots \\ &= n^{n-1}(n-1)^{n-3}(n-2)^{n-5}\cdots 2^{-n+3} = \prod_{k=1}^nk^{2k-n-1}.\end{aligned}
Take logs to see that
\begin{aligned}\ln\left(\sqrt[n^2 ]{\prod_{k = 1}^n {n\choose k}}\right) &= \sum_{k=1}^n\frac{2k-n-1}{n^2}\ln k \\ &\sim -\frac12 + \frac{n+1}n\ln n - \frac{n+1}{n^2}\sum_{k=1}^n\ln k\qquad\text{(from (*))} \\ &= -\frac12 + \frac{n+1}n\ln n - \frac{n+1}{n^2}\ln(n!) \\ &\sim -\frac12 + \frac{n+1}n\ln n - \frac{n+1}n(\ln n-1)\quad\ \text{(from Stirling's formula)}.\end{aligned}
This converges to 1/2 as n→∞, which leads to the result stated at the beginning.
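As a numerical sanity check (the helper name is mine), the $n^2$-th root of the product can be evaluated via log-gamma to avoid overflow; it approaches $\sqrt e\approx 1.6487$, though slowly:

```python
import math

def root_binomial_product(n):
    # log of prod_{k=1}^n C(n, k), computed term by term via log-gamma
    log_prod = sum(
        math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
        for k in range(1, n + 1)
    )
    # the n^2-th root of the product
    return math.exp(log_prod / n**2)

print(root_binomial_product(2000))  # close to sqrt(e) = 1.6487...
```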
Answer to Question #20188 in Finance for Bob Sanders
Question #20188
You have the following data: FCF0 = $10 million; FCF1 =$15 million; FCF2 = $20 million; FCF3 =$25 million; free cash flow grows at a rate of 5% for year 4 and beyond. The weighted average cost of capital is 15%. Assume they have 40 million in debt and 10 million shares outstanding. Find the price per share.
Present value of FCF0-3 is PV=FCF0+FCF1/(1+0.15)+FCF2/(1+0.15)^2+FCF3/(1+0.15)^3=54.6
Then, using the growing-perpetuity formula, we value all cash flows expected after FCF3, as of time t=3. The perpetuity is seeded with the year-4 cash flow, FCF4=FCF3*1.05=26.25:
PV3=FCF4/(0.15-0.05)=262.5. Then we discount it to the present: PV=PV3/(1+0.15)^3=172.6.
Total value of equity: TV=54.6+172.6-40=187.2
PPS=187.2/10=18.72 $ per share.
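The same calculation can be sketched in Python (the helper name is mine; note the perpetuity is seeded with the year-4 cash flow FCF4 = FCF3 × 1.05):

```python
def price_per_share(fcfs, g, wacc, debt, shares):
    """DCF equity valuation: explicit free cash flows, then a growing perpetuity."""
    # fcfs[0] is FCF0 (today, undiscounted); later cash flows are discounted
    pv_explicit = fcfs[0] + sum(
        f / (1 + wacc) ** t for t, f in enumerate(fcfs[1:], start=1)
    )
    n = len(fcfs) - 1                           # last explicit year
    terminal = fcfs[-1] * (1 + g) / (wacc - g)  # perpetuity value at year n
    pv_terminal = terminal / (1 + wacc) ** n
    return (pv_explicit + pv_terminal - debt) / shares

print(price_per_share([10, 15, 20, 25], g=0.05, wacc=0.15, debt=40, shares=10))
```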
# Is $F=ma$ an assumption in the original Schrodinger equation?
I believe the answer is yes, because in the equation $\hat{H}\psi=E\psi$ that he initially used, $\hat{H}=\frac{\hat{p}^2}{2m}+\hat{V}$ was clearly taken over from the classical mechanics of Hamilton.
Could you please clarify this for me?
In Sakurai's Modern Quantum Mechanics, he "derives" the Schrodinger equation. I did not completely understand the process, though.
• I think the answer is going to be very subjective. The only assumption in Schrodinger's equation is the equation itself! – user12029 Jun 17 '17 at 2:56
It is dangerous to think about the Schrodinger equation and other quantum mechanical ideas in terms of simple Newtonian physics. It is more natural to start from classical Hamiltonian mechanics. For example, in Hamiltonian mechanics we have the Poisson bracket, which one might think of as the classical analog of the quantum mechanical commutator:
$$\{A,\,B\} = \sum_i \left(\frac{\partial A}{\partial q_i}\frac{\partial B}{\partial p_i}-\frac{\partial B}{\partial q_i}\frac{\partial A}{\partial p_i} \right)$$
where $A$ and $B$ are some physical quantities. From the above, we have the interesting property that the Poisson bracket of the Hamiltonian with a quantity $A$ that has no explicit time dependence is the negative of its total time derivative:
\begin{align} \{H,\,A\}=-\frac{dA}{dt} \end{align}
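For $H = p^2/2m + V(q)$ in one dimension this property can be checked symbolically (a quick SymPy sketch; the variable names are mine): $\{H,q\} = -p/m = -\dot q$ and $\{H,p\} = V'(q) = -\dot p$.

```python
import sympy as sp

q, p = sp.symbols('q p', real=True)
m = sp.symbols('m', positive=True)
V = sp.Function('V')(q)
H = p**2 / (2 * m) + V

def poisson(A, B):
    # Poisson bracket {A, B} for a single degree of freedom
    return sp.diff(A, q) * sp.diff(B, p) - sp.diff(B, q) * sp.diff(A, p)

# {H, q} = -dq/dt = -p/m, and {H, p} = -dp/dt = dV/dq (minus the force)
print(poisson(H, q))   # -p/m
print(poisson(H, p))
```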
Now, let's go to the quantum mechanical world, and change our Poisson brackets to true-blue quantum commutators:
$$\{A,\,B\}\rightarrow \frac{1}{i\hbar}[A,\,B]$$
Plugging this into our equation for the derivative above, taking expectation values with respect to some wave function $|\psi\rangle$, and taking $A$ to be the identity, we obtain Schrodinger's equation:
$$i\hbar \frac{d}{dt}|\psi\rangle = H|\psi\rangle$$
As ZeroTheHero said, Schrodinger's equation is not limited to simple Hamiltonians of the form $p^2/2m+V$--it is much more general than that, and to understand it as the limit of some classical theory, we have to dive into Hamiltonian mechanics as opposed to Newtonian.
I leave it to historians to say whether or not the scientists used $\mathbf{F} = m\mathbf{a}$ literally. One of the bases of quantum mechanics, though, is the correspondence principle. This is the name of a broader principle in physics that any new theory has to match older theories in the regime where the older theories have been demonstrated to be valid. In that sense, Einstein "used" Newtonian mechanics to make special relativity, and then special relativity in the production of general relativity.

Back to quantum mechanics: the most often used version of the correspondence principle is that the expectation values of quantum operators have to obey the classical equations of motion. In this case, the more accurate version is the pair of differential equations: \begin{align} \mathbf{F} &= \frac{\operatorname{d} \mathbf{p}}{\operatorname{d} t},\ \mathrm{and} \\ \mathbf{p} & = m \frac{\operatorname{d} \mathbf{x}}{\operatorname{d} t}. \end{align}

What is $\mathbf{F}$, though? From classical Hamiltonian mechanics we get $F_i = -\frac{\partial H}{\partial x_i}$ and $\frac{p_i}{m} = \frac{\partial H}{\partial p_i}$, changing the equations to: \begin{align} -\frac{\partial H}{\partial x_i} &= \frac{\operatorname{d} p_i}{\operatorname{d} t},\ \mathrm{and}\\ \frac{\partial H}{\partial p_i} &= \frac{\operatorname{d} x_i}{\operatorname{d} t}. \end{align}
For quantum mechanics these equations of motion are the form that the quantum mechanical expectation values, \begin{align} -\left\langle \frac{\partial H}{\partial x_i} \right\rangle &= \frac{\operatorname{d} \langle p_i \rangle}{\operatorname{d} t},\ \mathrm{and}\\ \left\langle \frac{\partial H}{\partial p_i}\right\rangle &= \frac{\operatorname{d} \langle x_i \rangle}{\operatorname{d} t}, \end{align} obey according to the Ehrenfest theorem, which is one of the most frequently quoted versions of the correspondence principle.
The Schrodinger equation is not in any way limited to Hamiltonians of the form $$H=\frac{p^2}{2m} + V(q)$$ and so need not have any connection with $F=ma$.
• I meant the "original", or the initial Schr. Eq. – High GPA Jun 17 '17 at 2:55
I think a flat no is not a proper answer. Schroedinger was definitely guided by classical mechanics, more precisely by the Hamilton-Jacobi formalism, in his formulation of wave mechanics. On the other hand, analytical mechanics can be traced back to Newton's second law by means of d'Alembert's principle. Hence, in this sense, the Schroedinger equation is indirectly related to Newton's second law.
Hamilton himself put much effort into understanding and developing an analogy between classical mechanics and geometric optics. He noticed that in Hamilton-Jacobi theory, the momentum of the particle is given by $\vec\nabla S$, where Hamilton's principal function $S$ is the action viewed as a function of the coordinates. Looking at the level surfaces $S=\mathrm{const.}$, we see that the particle's trajectory is orthogonal to them. This is similar to light rays, which travel perpendicularly to the level surfaces of constant phase (wave fronts).
Schroedinger in 1926 conjectured that the action $S$ was indeed a phase of some wave process. Hence this wave should look like $$\psi=\psi_0\exp{\frac{iS}{\hbar}}=\psi_0\exp{\frac{i}{\hbar}\left[W(x)-Et\right]},$$ where $W(x)$ is the Hamilton's characteristic function. The constant $\hbar=h/2\pi$ is chosen so that this wave has frequency $\nu=E/h$, the Planck relation, which was known by that time. Plugging this wave into a wave equation one gets finally the Schroedinger equation $$-\frac{\hbar^2}{2m}\nabla^2\psi+V\psi=i\hbar\frac{\partial\psi}{\partial t}.$$ Classical mechanics can be understood as a limit case of Quantum Mechanics by plugging $\psi=\psi_0e^\frac{iS}{\hbar}$ into Schroedinger equation and taking the limit $\hbar\rightarrow 0$. The result is the Hamilton-Jacobi equation.
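That last limit can be verified symbolically. The sketch below (SymPy; the variable names are mine) substitutes $\psi=e^{iS/\hbar}$ into the Schroedinger equation and checks that the $\hbar\to 0$ part is exactly the Hamilton-Jacobi equation $\partial_t S + (\partial_x S)^2/2m + V = 0$:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
m, hbar = sp.symbols('m hbar', positive=True)
S = sp.Function('S')(x, t)        # Hamilton's principal function
V = sp.Function('V')(x)           # potential
psi = sp.exp(sp.I * S / hbar)     # the ansatz psi = exp(iS/hbar)

# Residual of the Schroedinger equation for this ansatz
schrodinger = (sp.I * hbar * sp.diff(psi, t)
               + hbar**2 / (2 * m) * sp.diff(psi, x, 2)
               - V * psi)

# Divide out the exponential; what remains is polynomial in hbar
expr = sp.expand(sp.simplify(schrodinger / psi))

# At hbar = 0 the residual reduces to (minus) the Hamilton-Jacobi equation
classical = expr.subs(hbar, 0)
hj = -(sp.diff(S, t) + sp.diff(S, x)**2 / (2 * m) + V)
print(sp.simplify(classical - hj))   # 0
```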
Hamilton-Jacobi is a formulation of analytical or variational mechanics and therefore has its roots in d'Alembert principle. This principle states that the virtual work of the effective force - applied force minus rate of change of momentum - is zero. To arrive at this principle one explicitly assumes Newton's second law $\vec F=\dot p$.
It is interesting to note that Hamilton came close to formulating wave mechanics a century before Schroedinger. He did not, though, probably for lack of any experimental evidence.
No. There isn't really a well-defined concept of "force" in quantum mechanics. Assuming that the Hamiltonian for a free particle takes the form $H = p^2/(2m)$ is not the same thing as assuming that $F = ma$, because Hamilton's equations do not apply in quantum mechanics (except in certain limits).
• 251.
Affiliations: Lund University, Department of Physics; GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt; Technische Universität Darmstadt, Institut für Kernphysik; GANIL (CEA/DSM-CNRS/IN2P3), Caen; Instituto de Física Corpuscular (CSIC/Universitat de València); Università di Padova and INFN Sezione di Padova; Universität Gießen; CSNSM, Orsay; STFC Daresbury Laboratory; Uppsala University, Department of Physics and Astronomy (Nuclear Physics); Institut de Physique Nucléaire de Lyon (CNRS/IN2P3).
Performance of the AGATA gamma-ray spectrometer in the PreSPEC set-up at GSI (2016). In: Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, ISSN 0168-9002, E-ISSN 1872-9576, Vol. 806, pp. 258-266. Journal article (peer-reviewed).
In contemporary nuclear physics, the European Advanced GAmma Tracking Array (AGATA) represents a crucial detection system for cutting-edge nuclear structure studies. AGATA consists of highly segmented high-purity germanium crystals and uses the pulse-shape analysis technique to determine both the position and the energy of the γ-ray interaction points in the crystals. The tracking algorithms use this information to reconstruct the sequence of interactions, providing information on the full or partial absorption of the γ ray. A series of dedicated performance measurements for an AGATA set-up comprising 21 crystals is described. This set-up was used within the recent PreSPEC-AGATA experimental campaign at the GSI Helmholtzzentrum für Schwerionenforschung. Using the radioactive sources Co-56, Co-60 and Eu-152, the absolute and normalized efficiencies and the peak-to-total ratio of the array were measured. These quantities are discussed using different data analysis procedures. The quality of the pulse-shape analysis and the tracking algorithm is evaluated. The agreement between the experimental data and Geant4 simulations is also investigated.
• 252.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi, Teoretisk fysik.
Pierced, Wrapped and Torn: Aspects of String Theory Compactifications (2009). Doctoral thesis, comprising papers (Other academic).
An outstanding problem in physics is to find a unified framework for quantum mechanics and general relativity. This is required for a better understanding of black holes and the early cosmology of the universe. String theory provides such a unification. In this thesis, we study aspects of compactifications of type IIB string theory. In the first part of the thesis, we study four-dimensional black holes consisting of D3-branes wrapping cycles in the compact dimensions. We discuss the correspondence between these black holes, topological string theory and matrix models. We then study the influence of black holes on the stability of flux compactifications. In the second part of the thesis, we turn to investigations of the type IIB landscape, i.e. the collection of stable and metastable vacua obtained from flux compactifications on conformal Calabi-Yau manifolds. We show that monodromies are important for the topographic structure of the landscape. In particular we find that there are long series of continuously connected vacua in the complex structure moduli space of the internal manifold. We also use geometric transitions to connect the moduli spaces of different manifolds, and create longer series of vacua. Finally, we investigate the stability of string theory vacua by constructing semiclassical instantons. These results have implications for the population of the landscape by eternal inflation.
1. Deforming, revolving and resolving: New paths in the string theory landscape
2008 (English). In: Journal of High Energy Physics (JHEP), ISSN 1126-6708, E-ISSN 1029-8479, Vol. 02, article id 016. Journal article (peer-reviewed). Published.
##### Abstract [en]
In this paper we investigate the properties of series of vacua in the string theory landscape. In particular, we study minima of the flux potential in type IIB compactifications on the mirror quintic. Using geometric transitions, we embed its one-dimensional complex structure moduli space in that of another Calabi-Yau with h1,1 = 86 and h2,1 = 2. We then show how to construct infinite series of continuously connected minima of the mirror quintic potential by moving into this larger moduli space, applying its monodromies, and moving back. We provide an example of such a series, and discuss their implications for the string theory landscape.
##### Keywords
Superstring Vacua, Flux compactifications
##### Research programme
Theoretical physics
##### Identifiers
urn:nbn:se:uu:diva-100502 (URN) 10.1088/1126-6708/2008/02/016 (DOI) 000254764400096 ()
Available from: 2009-04-01. Created: 2009-04-01. Last updated: 2017-12-13. Bibliographically checked.
2. The world next door: Results in landscape topography
2007 (Swedish). In: Journal of High Energy Physics (JHEP), ISSN 1126-6708, E-ISSN 1029-8479, Vol. 03, p. 080. Journal article (peer-reviewed). Published.
##### Abstract [en]
Recently, it has become clear that neighboring multiple vacua might have interesting consequences for the physics of the early universe. In this paper we investigate the topography of the string landscape corresponding to complex structure moduli of flux compactified type IIB string theory. We find that series of continuously connected vacua are common. The properties of these series are described, and we relate the existence of infinite series of minima to certain unresolved mathematical problems in group theory. Numerical studies of the mirror quintic serve as illustrating examples.
##### Keywords
Superstring Vacua, Flux compactifications, dS vacua in string theory
##### Identifiers
urn:nbn:se:uu:diva-13761 (URN) 10.1088/1126-6708/2007/03/080 (DOI) 000245922000080 ()
Available from: 2008-01-25. Created: 2008-01-25. Last updated: 2017-12-11. Bibliographically checked.
3. Stability of flux vacua in the presence of charged black holes
2006 (English). In: JHEP, Vol. 09, p. 069. Journal article (peer-reviewed). Published.
##### Abstract [en]
In this letter we consider a charged black hole in a flux compactification of type IIB string theory. Both the black hole and the fluxes will induce potentials for the complex structure moduli. We choose the compact dimensions to be described locally by a deformed conifold, creating a large hierarchy. We demonstrate that the presence of a black hole typically will not change the minimum of the moduli potential in a substantial way. However, we also point out a couple of possible loop-holes, which in some cases could lead to interesting physical consequences such as changes in the hierarchy.
##### Identifiers
urn:nbn:se:uu:diva-20261 (URN)
Available from: 2006-12-06. Created: 2006-12-06. Last updated: 2011-01-11.
4. 4D black holes and holomorphic factorization of the 0A matrix model
2005 (English). In: Journal of High Energy Physics, Vol. 0510, p. 046. Journal article (peer-reviewed). Published.
##### Abstract [en]
In this letter, we relate the free energy of the 0A matrix model to the sum of topological and anti-topological string amplitudes. For arbitrary integer multiples of the matrix model self-dual radius we describe the geometry on which the corresponding topological string propagates. This geometry is not the one that follows from the usual ground ring analysis, but in a sense its "holomorphic square root". Mixing of terms for different genus in the matrix model free energy yields one-loop terms compatible with type II strings on compact Calabi-Yau target spaces. As an application, we give an explicit example of how to relate the 0A matrix model free energy to that of a four-dimensional black hole in type IIB theory, compactified on a compact Calabi-Yau. Variables, Legendre transforms, and large classical terms on both sides match perfectly.
##### Identifiers
urn:nbn:se:uu:diva-79421 (URN)
Available from: 2006-04-10. Created: 2006-04-10. Last updated: 2011-01-11.
5. Field dynamics and tunneling in a flux landscape
2008 (English). In: Physical Review D. Particles and fields, ISSN 0556-2821, E-ISSN 1089-4918, Vol. 78, no. 8, pp. 083534-1 – 083534-23. Journal article (peer-reviewed). Published.
##### Place, publisher, year, pages
American Physical Society, 2008
##### Research programme
Physics
##### Identifiers
urn:nbn:se:uu:diva-100308 (URN) 10.1103/PhysRevD.78.083534 (DOI)
Available from: 2009-03-30. Created: 2009-03-30. Last updated: 2017-12-13. Bibliographically checked.
6. Obstacle to populating the string theory landscape
2008 (English). In: Physical Review D. Particles and fields, ISSN 0556-2821, E-ISSN 1089-4918, Vol. 78, no. 12, pp. 123513-1 – 123513-5. Journal article (peer-reviewed). Published.
##### Abstract [en]
We construct domain walls and instantons in a class of models with coupled scalar fields, determining, in agreement with previous studies, that many such solutions contain naked timelike singularities. Vacuum bubble solutions of this type do not contain a region of true vacuum, obstructing the ability of eternal inflation to populate other vacua. We determine a criterion that potentials must satisfy to avoid the existence of such singularities and show that many domain wall solutions in type IIB string theory are singular.
##### Place, publisher, year, pages
American Physical Society, 2008
##### Research programme
Theoretical physics
##### Identifiers
urn:nbn:se:uu:diva-100500 (URN) 10.1103/PhysRevD.78.123513 (DOI)
Available from: 2009-04-01. Created: 2009-04-01. Last updated: 2017-12-13. Bibliographically checked.
• 253.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi, Tillämpad kärnfysik.
Upgrade and validation of PHX2MCNP for criticality analysis calculations for spent fuel storage pools (2010). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
A few years ago Westinghouse started the development of a new method for criticality calculations for spent nuclear fuel storage pools called "PHOENIX-to-MCNP" (PHX2MCNP). PHX2MCNP transfers burn-up data from the code PHOENIX for use in MCNP in order to calculate the criticality. This thesis describes work to further validate the new method: first by validating the software MCNP5 at water temperatures above room temperature and, in a second step, by continuing the development of the method through the addition of a new feature to the old script. Finally, two studies were made to examine the effect of decay time on criticality and to study the possibility of limiting the number of transferred isotopes used in the calculations.
MCNP was validated against 31 experiments and a statistical evaluation of the results was done. The evaluation showed no correlation between the water temperature of the pool and the criticality. This proved that MCNP5 can be used in criticality calculations in storage pools at higher water temperature.
The new version of the PHX2MCNP script is called PHX2MCNP version 2 and has the capability to distribute the burnable absorber gadolinium into several radial zones in one pin. The decay-time study showed that the maximum criticality occurs immediately after removal from the reactor, as expected.
The last study, done to evaluate the possibility of limiting the isotopes transferred from PHOENIX to MCNP, showed that Case A, the case with the smallest number of isotopes, is conservative for all sections of the fuel element. Case A, which contains only some of the actinides and the strongest of the burnable absorbers, gadolinium-155, could therefore be used in future calculations.
Finally, the need for further validation of the method is discussed.
• 254.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
Estimation of flank wear growth on coated inserts (2013). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
The present work was conducted at Sandvik Coromant to enhance knowledge and understanding of general flank wear growth and, specifically in this case, flank wear growth on the cutting edge of coated (Ti(C,N)/Al2O3/TiN) tool inserts.
Reliable modeling of tool life is a perennial concern for machining processes. Many wear-model studies predicting tool life have been produced throughout metal-cutting history to better predict, and thereby control, the tool life span, which is a major portion of the total cost of machining.
A geometrical contact model defining the geometry of flank wear growth on the cutting tool inserts was proposed and then compared with four suggested models that estimate flank wear. The focus of this work is on the initial flank wear process, and therefore short cutting-time intervals are measured.
Wear tests on cutting tool inserts were performed after orthogonal turning of Ovako 825 B steel and were analysed by optical instruments: 3D optical imaging in Alicona InfiniteFocus and EDS in SEM. Force measurements at cutting speeds Vc = 150, 200, and 250 m/min and feed rate fn = 0.15 mm/rev were recorded as well.
Results show that the initial flank wear land (VB) growth is dominated by sliding distance per cutting length for different cutting speeds. A good correlation between the geometrical contact model and the estimation models is identified. The cutting force measurements compared with the flank wear land show proportionality between the two parameters. For the machining data in the present study, the flank wear rate per sliding distance, dW/dL, is estimated to 2x1033/m).
• 255. Latina, Andrea
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
Implications of a Curved Tunnel for the Main Linac of CLIC (2006). In: EPAC 2006 Proceedings, pp. 864-866. Journal article (peer-reviewed).
• 256.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Tekniska sektionen, Institutionen för teknikvetenskaper, Experimentell fysik.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Tekniska sektionen, Institutionen för teknikvetenskaper, Tillämpad materialvetenskap. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi. RWTH Aachen. Argonne National Laboratory, Chicago. Argonne National Laboratory, Chicago.
Analysis of structural order in Fe1-xZrx thin amorphous films (2014). Conference paper (Other academic).
• 257. Lemasson, A.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi, Kärnfysik.
Pair and single neutron transfer with Borromean 8He (2011). In: Physics Letters B, ISSN 0370-2693, E-ISSN 1873-2445, Vol. 697, no. 5, pp. 454-458. Journal article (peer-reviewed).
Direct observation of the survival of 199Au residues after 2n transfer in the 8He + 197Au system, and the absence of the corresponding 67Cu in the 8He + 65Cu system at various energies, are reported. The measurement of the surprisingly large cross sections for 199Au, coupled with the integral cross sections for the various Au residues, is used to obtain the first model-independent lower limits on the ratio of 2n to 1n transfer cross sections from 8He to a heavy target. A comparison of the transfer cross sections for 6,8He on these targets highlights the differences in the interactions of these Borromean nuclei. These measurements for the most neutron-rich nuclei on different targets highlight the need to probe the reaction mechanism with various targets and represent an experimental advance towards understanding specific features of pairing in the dynamics of dilute nuclear systems.
• 258. Leyser, T. B.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi, Astronomi och rymdfysik.
Radio pumping of the ionosphere with orbital angular momentum. In: Physical Review Letters, ISSN 0031-9007. Journal article (peer-reviewed).
• 259.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutet för rymdfysik, Uppsalaavdelningen.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
Radio Pumping of Ionospheric Plasma with Orbital Angular Momentum (2009). In: Physical Review Letters, ISSN 0031-9007, E-ISSN 1079-7114, Vol. 102, no. 6, article id 065004. Journal article (peer-reviewed).
Experimental results are presented of pumping ionospheric plasma with a radio wave carrying orbital angular momentum (OAM), using the High Frequency Active Auroral Research Program (HAARP) facility in Alaska. Optical emissions from the pumped plasma turbulence exhibit the characteristic ring-shaped morphology when the pump beam carries OAM. Features of stimulated electromagnetic emissions (SEE) that are attributed to cascading Langmuir turbulence are well developed for a regular beam but are significantly weaker for a ring-shaped OAM beam in which case upper hybrid turbulence dominates the SEE.
• 260.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
Atomic diffusion and mixing in old stars II. Observations of stars in the globular cluster NGC 6397 with VLT/FLAMES-GIRAFFE (2008). In: Astronomy and Astrophysics, ISSN 0004-6361, E-ISSN 1432-0746, Vol. 490, no. 2, pp. 777-U82. Journal article (peer-reviewed).
Context. Evolutionary trends in the surface abundances of heavier elements have recently been identified in the globular cluster NGC 6397 ([Fe/H] = -2), indicating the operation of atomic diffusion in these stars. Such trends constitute important constraints for the extent to which diffusion modifies the internal structure and surface abundances of solar-type, metal-poor stars.
Aims. We perform an independent check of the reality and size of abundance variations within this metal-poor globular cluster.
Methods. Observational data covering a large stellar sample, located between the cluster turn-off point and the base of the red giant branch, are homogeneously analysed. The spectroscopic data were obtained with the medium-high resolution spectrograph FLAMES/GIRAFFE on VLT-UT2 (R ≈ 27 000). We derive independent effective-temperature scales from profile fitting of Balmer lines and by applying colour-Teff calibrations to Strömgren uvby and broad-band BVI photometry. An automated spectral analysis code is used together with a grid of MARCS model atmospheres to derive stellar surface abundances of Mg, Ca, Ti, and Fe.
Results. We identify systematically higher iron abundances for more evolved stars. The turn-off point stars are found to have 0.13 dex lower surface abundances of iron compared to the coolest, most evolved stars in our sample. There is a strong indication of a similar trend in magnesium, whereas calcium and titanium abundances are more homogeneous. Within reasonable error limits, the obtained abundance trends are in agreement with the predictions of stellar structure models.
• 261. Lueftinger, T.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi, Astronomi och rymdfysik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi, Astronomi och rymdfysik.
3D atmospheric structure of the prototypical roAp star HD 24712 (HR 1217). 2008. In: Contributions of the Astronomical Observatory Skalnate Pleso, ISSN 1335-1842, Vol. 38, no. 2, p. 335-340. Article in journal (Refereed)
The first analysis of the structure of the surface magnetic field of a rapidly oscillating Ap (roAp) star is presented. We obtain information about the abundance distributions of a number of chemical elements on the surface of the prototypical roAp star HD 24712 and about its magnetic field geometry. Inverting rotationally modulated spectra in Stokes parameters I and V obtained with the SOFIN spectropolarimeter attached to the NOT, we recover surface abundance structures of sixteen different chemical elements: Mg, Ca, Sc, Ti, Cr, Fe, Co, Ni, Y, La, Ce, Pr, Nd, Gd, Tb, and Dy. Our analysis reveals a pure dipolar structure of the stellar magnetic field and unexpected correlations of the various elemental surface abundance structures with this field geometry. Stratification analysis at the phases of both magnetic extrema enables us to probe the vertical dimension of the atmosphere of HD 24712. Highly time-resolved spectroscopic data and observations obtained with the MOST space photometer allow us to compare our results (Luftinger, 2007) to a detailed pulsational analysis.
• 262.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
Weakly interacting massive particle diffusion in the solar system including solar depletion and its effect on Earth capture. 2004. In: Physical Review D, ISSN 1550-7998, Vol. D69, p. 123505-1 -- 123505-18. Article in journal (Refereed)
• 263.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
On the Search for High-Energy Neutrinos: Analysis of data from AMANDA-II. 2008. Doctoral thesis, comprising papers (Other academic)
A search for a diffuse flux of cosmic neutrinos with energies in excess of 10¹⁴ eV was performed using two years of AMANDA-II data, collected in 2003 and 2004. A 20% evenly distributed sub-sample of the experimental data was used to verify the detector description and the analysis cuts. Very good agreement between this 20% sample and the background simulations was observed. The analysis was optimised for discovery, at a relatively low price in limit-setting power. The background estimate for the livetime of the examined 80% sample is 0.035 events ± 68%, with an additional 41% systematic uncertainty.
The total neutrino flux needed for a 5σ discovery to be made with 50% probability was estimated to be 3.4 ∙ 10⁻⁷ E⁻² GeV s⁻¹ sr⁻¹ cm⁻², equally distributed over the three flavours, taking statistical and systematic uncertainties in the background expectation and the signal efficiency into account. No experimental events survived the final discriminator cut. Hence, no ultra-high-energy neutrino candidates were found in the examined sample. A 90% upper limit is placed on the total ultra-high-energy neutrino flux at 2.8 ∙ 10⁻⁷ E⁻² GeV s⁻¹ sr⁻¹ cm⁻², taking both systematic and statistical uncertainties into account. The energy range in which 90% of the simulated E⁻² signal is contained is 2.94 ∙ 10¹⁴ eV to 1.54 ∙ 10¹⁸ eV (central interval), assuming an equal distribution over the neutrino flavours at the Earth. The final acceptance is distributed as 48% electron neutrinos, 27% muon neutrinos, and 25% tau neutrinos.
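A central 90% energy interval of the kind quoted above can be read off a weighted sample of simulated signal events. A minimal sketch (the event sample and weights below are synthetic placeholders, not AMANDA-II simulation output):

```python
# Sketch: central `fraction` interval of a weighted event sample.
# Sort events by energy, then find the energies at which the cumulative
# weight crosses the lower and upper quantile cuts.

def central_interval(energies, weights, fraction=0.9):
    """Energies bracketing the central `fraction` of the total weight."""
    pairs = sorted(zip(energies, weights))
    total = sum(w for _, w in pairs)
    lo_cut = (1.0 - fraction) / 2.0 * total
    hi_cut = (1.0 + fraction) / 2.0 * total
    acc = 0.0
    lo = hi = None
    for e, w in pairs:
        acc += w
        if lo is None and acc >= lo_cut:
            lo = e
        if hi is None and acc >= hi_cut:
            hi = e
    return lo, hi

# Synthetic check with 100 equally weighted events at energies 1..100:
print(central_interval(list(range(1, 101)), [1.0] * 100))  # -> (5, 95)
```

In a real analysis the weights would come from reweighting generated events to an E⁻² spectrum times the detector acceptance.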
A set of models for the production of neutrinos in active galactic nuclei that predict spectra deviating from E⁻² was excluded.
1. Light tracking through ice and water: Scattering and absorption in heterogeneous media with Photonics
2007 (English). In: Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, ISSN 0168-9002, E-ISSN 1872-9576, Vol. 581, no. 3, p. 619-631. Article in journal (Refereed) Published
##### Abstract [en]
In the field of neutrino astronomy, large volumes of optically transparent matter like glacial ice, lake water, or deep ocean water are used as detector media. Elementary particle interactions are studied using in situ detectors recording time distributions and fluxes of the faint photon fields of Cherenkov radiation generated by ultra-relativistic charged particles, typically muons or electrons.
The Photonics software package was developed to determine photon flux and time distributions throughout a volume containing a light source through Monte Carlo simulation. Photons are propagated and time distributions are recorded throughout a cellular grid constituting the simulation volume, and Mie scattering and absorption are realised using wavelength- and position-dependent parameterisations. The photon tracking results are stored in binary tables for transparent access through ANSI C and C++ interfaces. For higher-level physics applications, such as simulation or reconstruction of particle events, it is then possible to quickly acquire the light yield and time distributions for a pre-specified set of light source and detector properties and geometries without real-time photon propagation.
In this paper the Photonics light propagation routines and methodology are presented and applied to the IceCube and Antares neutrino telescopes. The way in which inhomogeneities of the Antarctic glacial ice distort the signatures of elementary particle interactions, and how Photonics can be used to account for these effects, is described.
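The table-driven idea — simulate once, then answer queries by lookup and interpolation instead of re-propagating photons — can be sketched in a few lines. The grid and yield values below are invented placeholders, not actual Photonics table contents:

```python
# Sketch of table-based photon-yield lookup: a precomputed table of mean
# photon yield versus distance from the source is queried by linear
# interpolation.  Values are hypothetical, for illustration only.

distances = [0.0, 10.0, 20.0, 40.0, 80.0]        # metres (hypothetical grid)
yields    = [1.0e6, 2.4e5, 6.1e4, 4.0e3, 1.2e1]  # mean photon counts (hypothetical)

def photon_yield(d):
    """Linearly interpolate the tabulated mean yield at distance d."""
    if d <= distances[0]:
        return yields[0]
    if d >= distances[-1]:
        return yields[-1]
    for i in range(len(distances) - 1):
        if distances[i] <= d <= distances[i + 1]:
            t = (d - distances[i]) / (distances[i + 1] - distances[i])
            return (1 - t) * yields[i] + t * yields[i + 1]

print(photon_yield(15.0))  # halfway between the 10 m and 20 m table entries
```

The real package tabulates full time distributions over a 3D cellular grid and many source types; the principle of replacing propagation with interpolation is the same.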
##### Keywords
Numerical simulation, Optical properties, Monte Carlo method, Ray tracing, Optical, Neutrino detection
##### Identifiers
urn:nbn:se:uu:diva-97326 (URN), 10.1016/j.nima.2007.07.143 (DOI), 000251148000007 ()
Available from: 2008-05-15 Created: 2008-05-15 Last updated: 2017-12-14. Bibliographically checked
2. Weakly interacting massive particle diffusion in the solar system including solar depletion and its effect on Earth capture
2004. In: Physical Review D, ISSN 1550-7998, Vol. D69, p. 123505-1 -- 123505-18. Article in journal (Refereed) Published
##### Identifiers
urn:nbn:se:uu:diva-97327 (URN)
Available from: 2008-05-15 Created: 2008-05-15. Bibliographically checked
• 264.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
Light tracking through ice and water: Scattering and absorption in heterogeneous media with Photonics. 2007. In: Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, ISSN 0168-9002, E-ISSN 1872-9576, Vol. 581, no. 3, p. 619-631. Article in journal (Refereed)
In the field of neutrino astronomy, large volumes of optically transparent matter like glacial ice, lake water, or deep ocean water are used as detector media. Elementary particle interactions are studied using in situ detectors recording time distributions and fluxes of the faint photon fields of Cherenkov radiation generated by ultra-relativistic charged particles, typically muons or electrons.
The Photonics software package was developed to determine photon flux and time distributions throughout a volume containing a light source through Monte Carlo simulation. Photons are propagated and time distributions are recorded throughout a cellular grid constituting the simulation volume, and Mie scattering and absorption are realised using wavelength- and position-dependent parameterisations. The photon tracking results are stored in binary tables for transparent access through ANSI C and C++ interfaces. For higher-level physics applications, such as simulation or reconstruction of particle events, it is then possible to quickly acquire the light yield and time distributions for a pre-specified set of light source and detector properties and geometries without real-time photon propagation.
In this paper the Photonics light propagation routines and methodology are presented and applied to the IceCube and Antares neutrino telescopes. The way in which inhomogeneities of the Antarctic glacial ice distort the signatures of elementary particle interactions, and how Photonics can be used to account for these effects, is described.
• 265.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
Electronic structure of a thermoelectric material: CsBi4Te6. 2008. In: Journal of Physics and Chemistry of Solids, ISSN 0022-3697, E-ISSN 1879-2553, Vol. 69, no. 9, p. 2274-2276. Article in journal (Refereed)
We have calculated the electronic structure of CsBi4Te6 by means of first-principles self-consistent total-energy calculations within the local-density approximation, using the full-potential linear-muffin-tin-orbital method. From the calculated electronic structure we have derived the frequency-dependent dielectric function. Our calculations show that CsBi4Te6 is a semiconductor with a band gap of 0.3 eV. The calculated dielectric function is very anisotropic. Our calculated density of states supports the recent experiment of Chung et al. [Science 287 (2000) 1024] that CsBi4Te6 is a high-performance thermoelectric material for low-temperature applications. (C) 2008 Elsevier Ltd. All rights reserved.
• 266.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi, Kärnfysik.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi, Kärnfysik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi, Kärnfysik.
Test of digital neutron-gamma discrimination with four different photomultiplier tubes for the NEutron Detector Array (NEDA). 2014. In: Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, ISSN 0168-9002, E-ISSN 1872-9576, Vol. 767, p. 83-91. Article in journal (Refereed)
A comparative study of the neutron-gamma discrimination performance of a BC501A liquid scintillator detector coupled to four different 5-inch photomultiplier tubes (ET9390kb, R11833-100, XP4512 and R4144) was carried out. Both the Charge Comparison method and the Integrated Rise-Time method were implemented digitally to discriminate between neutrons and gamma rays emitted by a Cf-252 source. In both methods, the neutron-gamma discrimination capabilities of the four photomultiplier tubes were quantitatively compared by evaluating their figure-of-merit values in different energy regions between 50 keVee and 1000 keVee. Additionally, the results were verified qualitatively using time-of-flight to distinguish gamma rays from neutrons. The results consistently show that the photomultiplier tubes R11833-100 and ET9390kb generally perform best regarding neutron-gamma discrimination, with only slight differences in figure-of-merit values. This superiority can be explained by their relatively higher photoelectron yield, which indicates that a scintillator detector coupled to a photomultiplier tube with higher photoelectron yield tends to give better neutron-gamma discrimination performance. The results of this work will provide a reference for the choice of photomultiplier tubes for future neutron detector arrays such as NEDA.
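The Charge Comparison method and the figure-of-merit (FoM) used above can be sketched briefly. The standard definitions are a pulse-shape parameter equal to the tail fraction of the integrated charge, and FoM = peak separation divided by the sum of the FWHMs of the gamma and neutron parameter distributions; the pulses and Gaussian parameters below are synthetic placeholders, not NEDA data:

```python
import math

def psd_ratio(pulse, tail_start):
    """Charge Comparison: fraction of the integrated charge in the pulse tail."""
    return sum(pulse[tail_start:]) / sum(pulse)

def figure_of_merit(mu_g, sigma_g, mu_n, sigma_n):
    """FoM = peak separation / sum of FWHMs, assuming Gaussian PSD peaks."""
    fwhm = 2.0 * math.sqrt(2.0 * math.log(2.0))  # FWHM = 2.355 * sigma
    return abs(mu_n - mu_g) / (fwhm * sigma_g + fwhm * sigma_n)

# Synthetic pulses: gamma light decays fast, neutron pulses carry more tail light
gamma_pulse   = [100, 50, 20, 8, 3, 1]
neutron_pulse = [100, 60, 35, 20, 12, 7]
print(psd_ratio(gamma_pulse, 2), psd_ratio(neutron_pulse, 2))

# Illustrative PSD-distribution parameters; FoM > ~1 is usually taken as
# adequate separation
print(figure_of_merit(mu_g=0.10, sigma_g=0.02, mu_n=0.25, sigma_n=0.03))
```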
• 267.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
Turbulence-Assisted Planetary Growth: Hydrodynamical Simulations of Accretion Disks and Planet Formation. 2009. Doctoral thesis, comprising papers (Other academic)
The current paradigm in planet formation theory is built around the hierarchical growth of solid bodies, from interstellar dust grains to rocky planetary cores. A particularly difficult phase in the process is the growth from meter-sized boulders to planetary embryos the size of our Moon or Mars. Objects of this size are expected to drift extremely rapidly in a protoplanetary disk, so that they would generally fall into the central star well before larger bodies can form.
In this thesis, we used numerical simulations to find a physical mechanism that may retain solids in some parts of protoplanetary disks long enough to allow for the formation of planetary embryos. We found that such accumulation can happen at the borders of so-called dead zones: regions where the coupling to the ambient magnetic field is weaker and the turbulence less vigorous, or perhaps even absent. We show by hydrodynamical simulations that material accumulating between the turbulent active and dead regions is trapped in vortices, effectively forming planetary embryos of Moon to Mars mass.
We also show that in disks that have already formed a giant planet, solid matter accumulates at the edges of the gap the planet carves, as well as at the stable Lagrangian points. The concentration is strong enough for the solids to clump together and form smaller, rocky planets like Earth. Outside our solar system, some gas giant planets have been detected in the habitable zones of their stars. Their wakes may harbour rocky, Earth-sized worlds.
1. Global magnetohydrodynamical models of turbulence in protoplanetary disks: I. A cylindrical potential on a Cartesian grid and transport of solids
2008 (English). In: Astronomy and Astrophysics, ISSN 0004-6361, E-ISSN 1432-0746, Vol. 479, p. 883-901. Article in journal (Refereed) Published
##### Abstract [en]
Aims. We present global 3D MHD simulations of disks of gas and solids, aiming at developing models that can be used to study various scenarios of planet formation and planet-disk interaction in turbulent accretion disks. A second goal is to demonstrate that Cartesian codes are comparable to cylindrical and spherical ones in handling the magnetohydrodynamics of the disk simulations, while offering advantages, such as the absence of a grid singularity, for certain applications, e.g., circumbinary disks and disk-jet simulations. Methods: We employ the Pencil Code, a 3D high-order finite-difference MHD code using Cartesian coordinates. We solve the equations of ideal MHD with a locally isothermal equation of state. Planets and stars are treated as particles evolved with an N-body scheme. Solid boulders are treated as individual superparticles that couple to the gas through a drag force that is linear in the local relative velocity between gas and particle. Results: We find that Cartesian grids are well-suited for accretion disk problems. The disk-in-a-box models based on Cartesian grids presented here develop and sustain MHD turbulence, in good agreement with published results achieved with cylindrical codes. Models without an inner boundary do not show the spurious build-up of magnetic pressure and Reynolds stress seen in the models with boundaries, but the global stresses and alpha viscosities are similar in the two cases. We investigate the dependence of the magnetorotational instability on disk scale height, finding evidence that the turbulence generated by the magnetorotational instability grows with thermal pressure. The turbulent stresses depend on the thermal pressure following a power law with exponent 0.24 ± 0.03, compatible with the value of 0.25 found in shearing-box calculations. The ratio of Maxwell to Reynolds stresses decreases with increasing temperature, dropping from 5 to 1 when the sound speed was raised by a factor of 4, maintaining the same field strength.
We also study the dynamics of solid boulders in the hydromagnetic turbulence, by making use of 10⁶ Lagrangian particles embedded in the Eulerian grid. The effective diffusion provided by the turbulence prevents settling of the solids into an infinitesimally thin layer, forming instead a layer of solids of finite vertical thickness. The measured scale height of this diffusion-supported layer of solids implies turbulent vertical diffusion coefficients with globally averaged Schmidt numbers of 1.0 ± 0.2 for a model with α ≈ 10⁻³ and 0.78 ± 0.06 for a model with α ≈ 10⁻¹. That is, the vertical turbulent diffusion acting on the solid phase is comparable to the turbulent viscosity acting on the gas phase. The average bulk density of solids in the turbulent flow is quite low (ρ_p = 6.0 × 10⁻¹¹ kg m⁻³), but significant overdensities are observed in the high-pressure regions, where the solid-to-gas ratio reached values as high as 85, corresponding to 4 orders of magnitude above the initial interstellar value of 0.01.
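A Schmidt number near unity means the turbulence diffuses the solids about as efficiently as it transports momentum in the gas. A one-line arithmetic sketch, using the standard alpha prescription ν_t = α c_s H; the numerical values are illustrative placeholders chosen to give Sc = 1, not quantities measured in the simulations:

```python
# Sketch: Schmidt number Sc = nu_t / D_z, the ratio of turbulent viscosity
# to the vertical turbulent diffusion coefficient acting on the solids.

def schmidt_number(alpha, c_s, H, D_z):
    """Sc = nu_t / D_z with the alpha prescription nu_t = alpha * c_s * H."""
    nu_t = alpha * c_s * H
    return nu_t / D_z

# Illustrative code-unit values: alpha = 1e-3, sound speed 0.1, scale height 0.1,
# and a measured diffusion coefficient equal to nu_t, giving Sc = 1
print(schmidt_number(alpha=1e-3, c_s=0.1, H=0.1, D_z=1e-5))
```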
##### Keywords
magnetohydrodynamics (MHD), accretion, accretion disks, instabilities, turbulence, solar system: formation, diffusion
##### Identifiers
urn:nbn:se:uu:diva-97999 (URN), 10.1051/0004-6361:20077948 (DOI), 000253454600026 ()
Available from: 2009-02-05 Created: 2009-02-05 Last updated: 2017-12-14. Bibliographically checked
2. Embryos grown in the dead zone: Assembling the first protoplanetary cores in low mass self-gravitating circumstellar disks of gas and solids
2008 (English). In: Astronomy and Astrophysics, ISSN 0004-6361, E-ISSN 1432-0746, Vol. 491, no. 3, p. L41-L44. Article in journal (Refereed) Published
##### Abstract [en]
Context: At the borders of the dead zones of protoplanetary disks, the inflow of gas produces a local density maximum that triggers the Rossby wave instability. The vortices that form are efficient in trapping solids. Aims: We aim to assess the possibility of gravitational collapse of the solids within the Rossby vortices. Methods: We perform global simulations of the dynamics of gas and solids in a low-mass, non-magnetized, self-gravitating thin protoplanetary disk with the Pencil Code. We use multiple particle species of radius 1, 10, 30, and 100 cm. The dead zone is modelled as a region of low viscosity. Results: The Rossby vortices excited at the edges of the dead zone are efficient particle traps. Within 5 orbits after their appearance, the solids achieve critical density and undergo gravitational collapse into Mars-sized objects. The velocity dispersions are of the order of 10 m s⁻¹ for newly formed embryos, later lowering to less than 1 m s⁻¹ by drag-force cooling. After 200 orbits, over 300 gravitationally bound embryos were formed, 20 of them more massive than Mars. Their mass spectrum follows a power law of index -2.3 ± 0.2.
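A power-law index like the -2.3 ± 0.2 quoted above is commonly estimated by a straight-line fit in log-log space. A minimal sketch (the mass spectrum below is synthetic, drawn from an exact power law, not the simulation's embryo masses):

```python
import math

def powerlaw_index(masses, counts):
    """Slope of log(counts) vs log(masses) by ordinary least squares."""
    xs = [math.log(m) for m in masses]
    ys = [math.log(n) for n in counts]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Synthetic spectrum following dN/dM proportional to M^-2.3 exactly
masses = [0.1, 0.2, 0.5, 1.0, 2.0]
counts = [m ** -2.3 for m in masses]
print(powerlaw_index(masses, counts))  # recovers -2.3
```

For real, noisy binned data one would also propagate bin uncertainties, which is where the ±0.2 error on the index comes from.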
##### Keywords
Accretion, accretion disks; Instabilities; Stars: planetary systems: formation
##### Identifiers
urn:nbn:se:uu:diva-98000 (URN), 10.1051/0004-6361:200810626 (DOI), 000261152900001 ()
Available from: 2009-02-05 Created: 2009-02-05 Last updated: 2017-12-14. Bibliographically checked
3. Planet formation bursts at the borders of the dead zone in 2D numerical simulations of circumstellar disks
2009 (English). In: Astronomy and Astrophysics, ISSN 0004-6361, E-ISSN 1432-0746, Vol. 497, no. 3, p. 869-888. Article in journal (Refereed) Published
##### Abstract [en]
Context: As accretion in protoplanetary disks is enabled by turbulent viscosity, the border between active and inactive (dead) zones constitutes a location where there is an abrupt change in the accretion flow. The gas accumulation that ensues triggers the Rossby wave instability, which in turn saturates into anticyclonic vortices. It has been suggested that the trapping of solids within them leads to a burst of planet formation on very short timescales. Aims: We study the formation and evolution of the vortices in greater detail, focusing on the implications for the dynamics of embedded solid particles and planet formation. Methods: We performed two-dimensional global simulations of the dynamics of gas and solids in a non-magnetized thin protoplanetary disk with the Pencil Code. We used multiple particle species of radius 1, 10, 30, and 100 cm. We computed the particles' gravitational interaction by a particle-mesh method, translating the particles' number density into surface density and computing the corresponding self-gravitational potential via fast Fourier transforms. The dead zone is modeled as a region of low viscosity. Adiabatic and locally isothermal equations of state are used. Results: The Rossby wave instability is triggered under a variety of conditions, thus making vortex formation a robust process. Inside the vortices, fast accumulation of solids occurs and the particles collapse into objects of planetary mass on timescales as short as five orbits. Because the drag force is size-dependent, aerodynamical sorting ensues within the vortical motion, and the first bound structures formed are composed primarily of similarly sized particles. In addition to erosion due to ram pressure, we identify gas tides from the massive vortices as a disrupting agent of formed protoplanetary embryos.
We find evidence that the backreaction of the drag force from the particles onto the gas modifies the evolution of the Rossby wave instability, with vortices being launched only at later times if this term is excluded from the momentum equation. Even though the gas is not initially gravitationally unstable, the vortices can grow to Q ≈ 1 in locally isothermal runs, which halts the inverse cascade of energy towards smaller wavenumbers. As a result, vortices in models without self-gravity tend to rapidly merge towards an m = 2 or m = 1 mode, while models with self-gravity retain dominant higher-order modes (m = 4 or m = 3) for longer times. Non-self-gravitating disks thus show fewer and stronger vortices. We also estimate the collisional velocity history of the particles that compose the most massive embryo by the end of the simulation, finding that the vast majority of them never experienced a collision with another particle at speeds faster than 1 m s⁻¹. This result lends further support to previous studies showing that vortices provide a favorable environment for planet formation.
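The particle-mesh self-gravity step described in the Methods — deposit particles onto a grid, then solve the Poisson equation with FFTs — can be illustrated by a minimal periodic 2D sketch. This is a toy illustration of the technique, not the Pencil Code's actual scheme (which works with surface density and a razor-thin-disk kernel); here we solve laplacian(phi) = rho using the eigenvalues of the 5-point discrete Laplacian, so the solution is exact on the grid:

```python
import numpy as np

def deposit(positions, n):
    """Nearest-grid-point deposit of unit-mass particles on an n x n grid."""
    rho = np.zeros((n, n))
    for x, y in positions:
        rho[int(round(x)) % n, int(round(y)) % n] += 1.0
    return rho

def poisson_fft(rho, dx=1.0):
    """Solve laplacian(phi) = rho on a periodic grid via FFT."""
    n = rho.shape[0]
    rho_k = np.fft.fft2(rho)
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    # Eigenvalues of the 5-point Laplacian stencil on a periodic grid
    lam = (2 * np.cos(kx * dx)[:, None] - 2
           + 2 * np.cos(kx * dx)[None, :] - 2) / dx**2
    lam[0, 0] = 1.0            # avoid division by zero for the k = 0 mode
    phi_k = rho_k / lam
    phi_k[0, 0] = 0.0          # fix the free constant: mean(phi) = 0
    return np.fft.ifft2(phi_k).real

n = 32
rho = deposit([(8.2, 8.7), (20.0, 11.4)], n)
rho -= rho.mean()              # a periodic box requires zero net source
phi = poisson_fft(rho)
```

Applying the 5-point finite-difference Laplacian to `phi` recovers `rho` to round-off, which is a convenient self-check for this kind of solver.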
##### Keywords
accretion, accretion disks; hydrodynamics; instabilities; stars: planetary systems: formation; methods: numerical; turbulence
##### Identifiers
urn:nbn:se:uu:diva-98001 (URN), 10.1051/0004-6361/200811265 (DOI), 000265280500022 ()
Available from: 2009-02-05 Created: 2009-02-05 Last updated: 2017-12-14. Bibliographically checked
4. Standing on the shoulders of giants: Trojan Earths and vortex trapping in low mass self-gravitating protoplanetary disks of gas and solids
2009 (English). In: Astronomy and Astrophysics, ISSN 0004-6361, E-ISSN 1432-0746, Vol. 493, no. 3, p. 1125-1139. Article in journal (Refereed) Published
##### Abstract [en]
Context: Centimeter- and meter-sized solid particles in protoplanetary disks are trapped within long-lived, high-pressure regions, creating opportunities for collapse into planetesimals and planetary embryos. Aims: We aim to study the effect of the high-pressure regions generated in the gaseous disk by a giant planet perturber. These regions consist of gas retained in tadpole orbits around the stable Lagrangian points as a gap is carved, and of the Rossby vortices launched at the edges of the gap. Methods: We performed global simulations of the dynamics of gas and solids in a low-mass, non-magnetized, self-gravitating thin protoplanetary disk. We employed the Pencil Code to solve the Eulerian hydro equations, tracing the solids with a large number of Lagrangian particles, usually 100 000. To compute the gravitational potential of the swarm of solids, we solved the Poisson equation using particle-mesh methods with multiple fast Fourier transforms. Results: Huge particle concentrations are seen at the Lagrangian points of the giant planet, as well as in the vortices it induces at the edges of the carved gap. For 1 cm to 10 cm radii, gravitational collapse occurs at the Lagrangian points in less than 200 orbits. For 5 cm particles, a 2 M⊕ planet is formed. For 10 cm, the final maximum collapsed mass is around 3 M⊕. The collapse of the 1 cm particles is indirect, following the timescale of gas depletion from the tadpole orbits. Vortices are excited at the edges of the gap, primarily trapping particles of 30 cm radius. The rocky planet that is formed is as massive as 17 M⊕, constituting a super-Earth. Collapse does not occur for particles of 40 cm and larger. By using multiple particle species, we find that gas drag modifies the streamlines in the tadpole region around the classical L4 and L5 points. As a result, particles of different radii have their stable points shifted to different locations. Collapse therefore takes longer and produces planets of lower mass.
Three super-Earths are formed in the vortices, the most massive having 4.5 M⊕. Conclusions: A Jupiter-mass planet can induce the formation of other planetary embryos at the outer edge of its gas gap. Trojan Earth-mass planets are readily formed; although absent from the solar system, they might be common in the exoplanetary zoo.
##### Keywords
accretion, accretion disks; hydrodynamics; instabilities; methods: numerical; solar system: formation; planets and satellites: formation
##### Identifiers
urn:nbn:se:uu:diva-98002 (URN), 10.1051/0004-6361:200810797 (DOI), 000262641100033 ()
Available from: 2009-02-05 Created: 2009-02-05 Last updated: 2017-12-14. Bibliographically checked
• 268.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
Embryos grown in the dead zone: Assembling the first protoplanetary cores in low mass self-gravitating circumstellar disks of gas and solids. 2008. In: Astronomy and Astrophysics, ISSN 0004-6361, E-ISSN 1432-0746, Vol. 491, no. 3, p. L41-L44. Article in journal (Refereed)
Context: At the borders of the dead zones of protoplanetary disks, the inflow of gas produces a local density maximum that triggers the Rossby wave instability. The vortices that form are efficient in trapping solids. Aims: We aim to assess the possibility of gravitational collapse of the solids within the Rossby vortices. Methods: We perform global simulations of the dynamics of gas and solids in a low-mass, non-magnetized, self-gravitating thin protoplanetary disk with the Pencil Code. We use multiple particle species of radius 1, 10, 30, and 100 cm. The dead zone is modelled as a region of low viscosity. Results: The Rossby vortices excited at the edges of the dead zone are efficient particle traps. Within 5 orbits after their appearance, the solids achieve critical density and undergo gravitational collapse into Mars-sized objects. The velocity dispersions are of the order of 10 m s⁻¹ for newly formed embryos, later lowering to less than 1 m s⁻¹ by drag-force cooling. After 200 orbits, over 300 gravitationally bound embryos were formed, 20 of them more massive than Mars. Their mass spectrum follows a power law of index -2.3 ± 0.2.
• 269.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
Standing on the shoulders of giants: Trojan Earths and vortex trapping in low mass self-gravitating protoplanetary disks of gas and solids. 2009. In: Astronomy and Astrophysics, ISSN 0004-6361, E-ISSN 1432-0746, Vol. 493, no. 3, p. 1125-1139. Article in journal (Refereed)
Context: Centimeter- and meter-sized solid particles in protoplanetary disks are trapped within long-lived, high-pressure regions, creating opportunities for collapse into planetesimals and planetary embryos. Aims: We aim to study the effect of the high-pressure regions generated in the gaseous disk by a giant planet perturber. These regions consist of gas retained in tadpole orbits around the stable Lagrangian points as a gap is carved, and of the Rossby vortices launched at the edges of the gap. Methods: We performed global simulations of the dynamics of gas and solids in a low-mass, non-magnetized, self-gravitating thin protoplanetary disk. We employed the Pencil Code to solve the Eulerian hydro equations, tracing the solids with a large number of Lagrangian particles, usually 100 000. To compute the gravitational potential of the swarm of solids, we solved the Poisson equation using particle-mesh methods with multiple fast Fourier transforms. Results: Huge particle concentrations are seen at the Lagrangian points of the giant planet, as well as in the vortices it induces at the edges of the carved gap. For 1 cm to 10 cm radii, gravitational collapse occurs at the Lagrangian points in less than 200 orbits. For 5 cm particles, a 2 M⊕ planet is formed. For 10 cm, the final maximum collapsed mass is around 3 M⊕. The collapse of the 1 cm particles is indirect, following the timescale of gas depletion from the tadpole orbits. Vortices are excited at the edges of the gap, primarily trapping particles of 30 cm radius. The rocky planet that is formed is as massive as 17 M⊕, constituting a super-Earth. Collapse does not occur for particles of 40 cm and larger. By using multiple particle species, we find that gas drag modifies the streamlines in the tadpole region around the classical L4 and L5 points. As a result, particles of different radii have their stable points shifted to different locations. Collapse therefore takes longer and produces planets of lower mass.
Three super-Earths are formed in the vortices, the most massive having 4.5 M⊕. Conclusions: A Jupiter-mass planet can induce the formation of other planetary embryos at the outer edge of its gas gap. Trojan Earth-mass planets are readily formed; although absent from the solar system, they might be common in the exoplanetary zoo.
• 270.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
Planet formation bursts at the borders of the dead zone in 2D numerical simulations of circumstellar disks. 2009. In: Astronomy and Astrophysics, ISSN 0004-6361, E-ISSN 1432-0746, Vol. 497, no. 3, p. 869-888. Article in journal (Refereed)
Context: As accretion in protoplanetary disks is enabled by turbulent viscosity, the border between active and inactive (dead) zones constitutes a location where there is an abrupt change in the accretion flow. The gas accumulation that ensues triggers the Rossby wave instability, which in turn saturates into anticyclonic vortices. It has been suggested that the trapping of solids within them leads to a burst of planet formation on very short timescales. Aims: We study the formation and evolution of the vortices in greater detail, focusing on the implications for the dynamics of embedded solid particles and planet formation. Methods: We performed two-dimensional global simulations of the dynamics of gas and solids in a non-magnetized thin protoplanetary disk with the Pencil Code. We used multiple particle species of radius 1, 10, 30, and 100 cm. We computed the particles' gravitational interaction by a particle-mesh method, translating the particles' number density into surface density and computing the corresponding self-gravitational potential via fast Fourier transforms. The dead zone is modeled as a region of low viscosity. Adiabatic and locally isothermal equations of state are used. Results: The Rossby wave instability is triggered under a variety of conditions, thus making vortex formation a robust process. Inside the vortices, fast accumulation of solids occurs and the particles collapse into objects of planetary mass on timescales as short as five orbits. Because the drag force is size-dependent, aerodynamical sorting ensues within the vortical motion, and the first bound structures formed are composed primarily of similarly sized particles. In addition to erosion due to ram pressure, we identify gas tides from the massive vortices as a disrupting agent of formed protoplanetary embryos.
We find evidence that the backreaction of the drag force from the particles onto the gas modifies the evolution of the Rossby wave instability, with vortices being launched only at later times if this term is excluded from the momentum equation. Even though the gas is not initially gravitationally unstable, the vortices can grow to Q ≈ 1 in locally isothermal runs, which halts the inverse cascade of energy towards smaller wavenumbers. As a result, vortices in models without self-gravity tend to rapidly merge towards an m = 2 or m = 1 mode, while models with self-gravity retain dominant higher order modes (m = 4 or m = 3) for longer times. Non-self-gravitating disks thus show fewer and stronger vortices. We also estimate the collisional velocity history of the particles that compose the most massive embryo by the end of the simulation, finding that the vast majority of them never experienced a collision with another particle at speeds faster than 1 m s⁻¹. This result lends further support to previous studies showing that vortices provide a favorable environment for planet formation.
• 271.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi, Teoretisk astrofysik.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi, Teoretisk astrofysik.
Global magnetohydrodynamical models of turbulence in protoplanetary disks: I. A cylindrical potential on a Cartesian grid and transport of solids (2008). In: Astronomy and Astrophysics, ISSN 0004-6361, E-ISSN 1432-0746, Vol. 479, pp. 883-901. Article in journal (Refereed)
Aims: We present global 3D MHD simulations of disks of gas and solids, aiming at developing models that can be used to study various scenarios of planet formation and planet-disk interaction in turbulent accretion disks. A second goal is to demonstrate that Cartesian codes are comparable to cylindrical and spherical ones in handling the magnetohydrodynamics of the disk simulations while offering advantages, such as the absence of a grid singularity, for certain applications, e.g., circumbinary disks and disk-jet simulations. Methods: We employ the Pencil Code, a 3D high-order finite-difference MHD code using Cartesian coordinates. We solve the equations of ideal MHD with a locally isothermal equation of state. Planets and stars are treated as particles evolved with an N-body scheme. Solid boulders are treated as individual superparticles that couple to the gas through a drag force that is linear in the local relative velocity between gas and particle. Results: We find that Cartesian grids are well-suited for accretion disk problems. The disk-in-a-box models based on Cartesian grids presented here develop and sustain MHD turbulence, in good agreement with published results achieved with cylindrical codes. Models without an inner boundary do not show the spurious build-up of magnetic pressure and Reynolds stress seen in the models with boundaries, but the global stresses and alpha viscosities are similar in the two cases. We investigate the dependence of the magnetorotational instability on disk scale height, finding evidence that the turbulence generated by the magnetorotational instability grows with thermal pressure. The turbulent stresses depend on the thermal pressure obeying a power law with exponent 0.24 ± 0.03, compatible with the value of 0.25 found in shearing box calculations. The ratio of Maxwell to Reynolds stresses decreases with increasing temperature, dropping from 5 to 1 when the sound speed was raised by a factor of 4, maintaining the same field strength.
We also study the dynamics of solid boulders in the hydromagnetic turbulence, by making use of 10⁶ Lagrangian particles embedded in the Eulerian grid. The effective diffusion provided by the turbulence prevents settling of the solids in an infinitesimally thin layer, forming instead a layer of solids of finite vertical thickness. The measured scale height of this diffusion-supported layer of solids implies turbulent vertical diffusion coefficients with globally averaged Schmidt numbers of 1.0 ± 0.2 for a model with α ≈ 10⁻³ and 0.78 ± 0.06 for a model with α ≈ 10⁻¹. That is, the vertical turbulent diffusion acting on the solids phase is comparable to the turbulent viscosity acting on the gas phase. The average bulk density of solids in the turbulent flow is quite low (ρ_p = 6.0×10⁻¹¹ kg m⁻³), but in the high pressure regions, significant overdensities are observed, where the solid-to-gas ratio reached values as great as 85, corresponding to 4 orders of magnitude higher than the initial interstellar value of 0.01.
• 272.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för kärn- och partikelfysik.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för kärn- och partikelfysik. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi, Kärnfysik.
The single-particle and collective features in the nuclei just above Sn-132 (2007). In: Acta Physica Polonica B, ISSN 0587-4254, E-ISSN 1509-5770, Vol. 38, no. 4, pp. 1213-1218. Article in journal (Refereed)
The Advanced Time Delayed method has been used to measure the lifetimes of excited states in the exotic nuclei Sb-134, Sb-135 and Te-136 populated in the beta decay of Sn-134, Sn-135 and Sn-136, respectively. High-purity Sn beams were extracted at the ISOLDE separator using a novel production technique utilizing molecular SnS+ beams to isolate Sn from other contaminating fission products. Among the new results, we have identified the 1/2(+) state in Sb-135; its E2 transition to the lower-lying 5/2(+) state was found to be surprisingly collective. This measurement also represents one of the first applications of the LaBr3 scintillator to ultra-fast timing.
• 273.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för kärn- och partikelfysik.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi, Kärnfysik.
Selected properties of nuclei at the magic shell closures from the studies of E1, M1 and E2 transition rates (2009). In: AIP Conference Proceedings, 2009, Vol. 1090, pp. 5502-. Conference paper (Refereed)
• 274.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för kärn- och partikelfysik.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi, Kärnfysik.
Structure of heavy Fe nuclei at the point of transition at N ~ 37 (2009). In: Acta Physica Polonica B, ISSN 0587-4254, E-ISSN 1509-5770, Vol. 40, pp. 477-480. Article in journal (Refereed)
• 275.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för kärn- och partikelfysik.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi, Kärnfysik.
The single-particle and collective features in the nuclei just above 132Sn (2007). In: Acta Physica Polonica B, ISSN 0587-4254, E-ISSN 1509-5770, Vol. 38, pp. 1213-1218. Article in journal (Refereed)
• 276. Mahato, Dip N.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
First principles study of nuclear quadrupole interactions in the molecular solid BF3 and the nature of binding between the molecules (2007). In: Hyperfine Interactions, ISSN 0304-3843, E-ISSN 1572-9540, Vol. 176, no. 1-3, pp. 15-20. Article in journal (Refereed)
The electronic structures and nuclear quadrupole interactions (NQI) of the F-19* (I = 5/2) state of the F-19 nucleus in solid BF3 are studied using the first-principles Hartree-Fock-Roothaan procedure including many-body electron correlation effects. The calculated NQI parameters, the F-19* quadrupole coupling constant (e(2)qQ) and the asymmetry parameter eta, were found to be in satisfactory agreement with experiment for the solid-state system, which gives confidence in the reliability of the calculated electronic structures in the solid and hence in the factors found to influence the binding of the molecules in the solid. It was found that the intermolecular binding energy primarily arises from van der Waals (vdW) interactions between the molecules resulting from intermolecular many-body effects, which counteract the repulsive interactions between the molecules arising from one-electron Hartree-Fock (HF) theory.
• 277.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi, Högenergifysik.
SuperIso v2.3: A program for calculating flavor physics observables in supersymmetry (2009). In: Computer Physics Communications, ISSN 0010-4655, E-ISSN 1879-2944, Vol. 180, no. 9, pp. 1579-1613. Article in journal (Refereed)
We describe SuperIso v2.3, a public program for the evaluation of flavor physics observables in the minimal supersymmetric extension of the Standard Model (MSSM). SuperIso v2.3, in addition to the isospin asymmetry of B → K*γ, which was the main purpose of the first version, incorporates new flavor observables such as the branching ratio of B_s → μ⁺μ⁻, the branching ratio of B → τν_τ, the branching ratio of B → Dτν_τ and the branching ratio of K → μν_μ. The calculation of the branching ratio of B → X_sγ is also improved in this version, as it now includes NNLO Standard Model contributions in addition to partial NLO supersymmetric contributions. The program also computes the muon anomalous magnetic moment (g−2). Four sample models are included in the package, namely mSUGRA, NUHM, AMSB and GMSB. SuperIso uses a SUSY Les Houches Accord file (SLHA1 or SLHA2) as input, which can be either generated automatically by the program via a call to external spectrum calculators, or provided by the user. The calculation of the observables is detailed in the Appendices, where a suggestion for the allowed intervals for each observable is also provided.
• 278.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi. Institutionen för kärn- och partikelfysik, Högenergifysik.
Supersymmetric parameter constraints from isospin asymmetry in b → s γ transitions (2007). In: Proceedings of the 15th International Conference on Supersymmetry and the Unification of Fundamental Interactions (SUSY07), 2007. Conference paper (Refereed)
• 279.
Clermont Université, France.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi, Högenergifysik.
Flavor constraints on two-Higgs-doublet models with general diagonal Yukawa couplings (2010). In: Physical Review D - Particles, Fields, Gravitation and Cosmology, ISSN 1550-7998, Vol. 81, no. 3, p. 035016. Article in journal (Refereed)
We consider constraints from flavor physics on two-Higgs-doublet models (2HDM) with general, flavor-diagonal, Yukawa couplings. Analyzing the charged Higgs contribution to different observables, we find that $b\to s\gamma$ transitions and $\Delta M_{B_d}$ restrict the coupling $\lambda_{tt}$ of the top quark (corresponding to $\cot\beta$ in models with a $Z_2$ symmetry) to $|\lambda_{tt}|<1$ for $m_{H^+}\lesssim 500$ GeV. Stringent constraints from $B$ meson decays are also obtained on the other third generation couplings $\lambda_{bb}$ and $\lambda_{\tau\tau}$, but with stronger dependence on $m_{H^+}$. For the second generation, we obtain constraints on combinations of $\lambda_{ss}$, $\lambda_{cc}$, and $\lambda_{\mu\mu}$ from leptonic $K$ and $D_s$ decays. The limits on the general couplings are translated to the common 2HDM types I -- IV with a $Z_2$ symmetry, and presented on the $(m_{H^+},\tan\beta)$ plane. The flavor constraints are most restrictive in the type II model, which lacks a decoupling limit in $\tan\beta$. We obtain a lower limit $m_{H^+}\gtrsim 300$ GeV in models of type II and III, while no lower bound on $m_{H^+}$ is found for types I and IV.
• 280. Malesani, D.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
Early Spectroscopic Identification of SN 2008D (2009). In: Astrophysical Journal, ISSN 0004-637X, E-ISSN 1538-4357, Vol. 692, no. 2, pp. L84-L87. Article in journal (Refereed)
SN 2008D was discovered while following up an unusually bright X-ray transient (XT) in the nearby spiral galaxy NGC 2770. We present early optical spectra (obtained 1.75 days after the XT) which allowed the first identification of the object as a supernova (SN) at redshift z = 0.007. These spectra were acquired during the initial declining phase of the light curve, likely produced in the stellar envelope cooling after shock breakout, and rarely observed. They exhibit a rather flat spectral energy distribution with broad undulations, and a strong, W-shaped feature with minima at 3980 and 4190 angstrom (rest frame). We also present extensive spectroscopy and photometry of the SN during the subsequent photospheric phase. Unlike SNe associated with gamma-ray bursts, SN 2008D displayed prominent He features and is therefore of Type Ib.
• 281. Man, L. C. T.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi.
Influence of a surfactant on single ion track etching: preparing and manipulating cylindrical micro wires (2007). In: Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms, ISSN 0168-583X, E-ISSN 1872-9584, Vol. 265, no. 2, pp. 621-625. Article in journal (Refereed)
The influence of the alkali-resistant surfactant Dowfax 2A1 on single ion track etching in 30 μm polycarbonate foils is studied at low etch rate (5 M NaOH at 41.5 ± 2 °C) using electrical conductivity measurements. At surfactant concentrations above 10⁻⁴ vol.%, break-through times are predictable (Δt/t < 0.25). At high surfactant concentrations (0.1 vol.%) the formation of cylindrical channels is favoured. The shape of these channels (length 26 μm, diameter 1.8 μm) is verified by electro-replication and SEM observation of the resulting wires. Agreement of radii is better than 0.1 μm. Depending on the current limit set during electro-replication, compact or hollow cylinders can be obtained. A technique for localizing and manipulating individual micro wires by their head buds is described.
• 282.
Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Fysiska sektionen, Institutionen för fysik och astronomi, Astronomi och rymdfysik.
On the Winds of Carbon Stars and the Origin of Carbon: A Theoretical Study (2009). Doctoral thesis, comprising papers (Other academic)
# moving the camera in OGL
## Recommended Posts
Hi, I have a triangle and I want to make my camera fly around it in all directions, like people do in demos (the movement of the camera). How do I do it? I'm not asking for step-by-step code, but an explanation, although some example code wouldn't hurt.
##### Share on other sites
Just move the triangle around to "simulate" a camera...
If you want to move the camera right, then move the triangle left :D It will look like the camera is moving to the right.
This is really how it's done, although there are some other tricks or "hacks", maybe using gluLookAt.
##### Share on other sites
Thanks, so I should move the objects rather than the 'camera'. OK,
what about gluLookAt()? I've heard it makes life easier.
##### Share on other sites
OK, the first step would be to use gluLookAt before you render any of your frame, but I would advise you to set about creating a camera yourself. As the Anonymous Poster briefly mentioned, the trick is to move the world around the camera rather than moving the actual camera. This is because you are never able to render the scene from anywhere other than the (0, 0, 0) point in the world, looking straight down the z axis.
OK, so let's just imagine that you are floating in space and there is an asteroid just in front of you. Next, you are going to slide yourself to the right. What effect does this have on where the asteroid is in your view? Well, the asteroid is now to your left. Moving the asteroid to the left would actually have the same effect as moving yourself to the right. Got that? Well, that is how cameras work in a simple sense.
The whole world is moved to make it look like the camera has changed position. Does it seem inefficient and arduous? Yes. Is it? Not really.
So how would I manage a simple camera? First of all, I'd create a CCamera class and make sure it stores three float variables: mLocationX, mLocationY and mLocationZ. Pressing forwards, backwards, left and right will make these values increase and decrease in exactly the manner you would like if a camera were placed at these coordinates. So, pressing left will make x decrease, and so on and so forth.
That's easy enough, so how do we make it relate to our scene? Well, immediately after we clear the buffer, and before we do any drawing, we should call...
    // translate to the exact opposite coordinates of our camera location
    glTranslatef(-mLocationX, -mLocationY, -mLocationZ);
OK, we can now treat this location that we have translated to as our world's (0, 0, 0). So to draw your triangle you would probably do something like...
    // remember our location
    glPushMatrix();
    // move to the triangle's location in the world
    glTranslatef(triX, triY, triZ);
    // draw-triangle code goes here
    /* I can't be bothered right now */
    // put us back where we were
    glPopMatrix();
And there you have it. Let's just run through the process. Let's say you wanted to move the camera left: you press the left key and the mLocationX value changes to -10.0f. Before any of the drawing happens, that lone glTranslate command will move us +10.0f on the X axis away from the (0, 0, 0) where the scene is always viewed from. So, when you draw your triangle at what is supposed to be a location just in front of us, it will actually be just in front of us and to the right (+10.0f on the x axis), making it look like the camera has moved to the left. Magic.
Anyway, that is a massive post, but to summarise: we move the entire world in the opposite direction to our desired camera location to simulate camera movement.
Hope that helps.
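The equivalence described above can be checked with a few lines of arithmetic. Here is a language-neutral sketch (plain Python rather than OpenGL, and the function names are mine, not from the thread): translating every object by the negative of the camera position and then viewing from the origin gives exactly the same view-space coordinates as leaving the objects alone and viewing from the camera position.

```python
# View-space position of a point seen from a camera at `cam`
# (no rotation; camera aligned with the fixed axes): p - cam.
def view_from_camera(p, cam):
    return tuple(pi - ci for pi, ci in zip(p, cam))

# The OpenGL trick: keep the camera at the origin and translate
# the whole world by -cam instead (the lone glTranslatef call).
def view_with_world_shift(p, cam):
    shifted = tuple(pi - ci for pi, ci in zip(p, cam))  # glTranslatef(-cx, -cy, -cz)
    origin = (0.0, 0.0, 0.0)
    return tuple(si - oi for si, oi in zip(shifted, origin))

triangle = [(0.0, 1.0, -5.0), (-1.0, -1.0, -5.0), (1.0, -1.0, -5.0)]
camera = (-10.0, 0.0, 0.0)  # camera moved 10 units to the left

for vertex in triangle:
    assert view_from_camera(vertex, camera) == view_with_world_shift(vertex, camera)
```

With the camera at x = -10, every vertex lands at +10 on the x axis in view space, i.e. the scene appears shifted to the right, exactly as the walk-through says.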
##### Share on other sites
Thanks ghosted, that really helped me, but I had in mind moving without user intervention: no keys getting pressed, just simple flying across the screen. OK, I won't bother you anymore with this moving, I'll get it sooner or later. BUT here comes the next problem: since the clearing color is black (or any other color), when I 'move' it doesn't look like I am moving, but like an object (my triangle) is just spinning around its own axis. Now I'd like to make the background look like a world, not a 2D black screen. I don't know how I should do that. Maybe I should draw some lines below so it would look like a floor? Can you help me with this one please?
##### Share on other sites
I'm not too sure what sort of end effect you're trying to achieve, but drawing a little grid on the floor is easy enough. I'd do it using GL_LINES and simply loop through a process of placing vertices. The following code does it simply, and hopefully you can use it to create your own desired effect...
    // move to where we will draw the grid
    glTranslatef(0.0f, -2.0f, -10.0f);
    // set color to green
    glColor3f(0.0f, 1.0f, 0.0f);
    /* DRAW GRID OF LINES */
    glBegin(GL_LINES);
    // draw lines going along z axis
    for (int x = -5; x <= 5; x++) {
        glVertex3f((float)x, 0.0f, 5.0f);
        glVertex3f((float)x, 0.0f, -5.0f);
    }
    // draw lines going across x axis
    for (int z = -5; z <= 5; z++) {
        glVertex3f(5.0f, 0.0f, (float)z);
        glVertex3f(-5.0f, 0.0f, (float)z);
    }
    glEnd();
If you chuck that in just before, or after, you draw the triangle, it should appear just below you. Obviously you may need to change the initial translation values that place it in the scene for it to be where you want it.
Best of luck
##### Share on other sites
Thank you so much ghosted. That grid really helps; now I have a feeling like it's a 3D world.
At first I didn't see the grid, but then I added the glFrustum() command and now I can see the grid but no triangle, OR I can see the triangle but no grid. I don't know why I can't see both of them at a time, but I'll fix it (I hope).
thanks again!
# Formal demonstration that minimizing the free energy equals maximizing the entropy
I never had great intuition when it came to thermodynamic concepts and potentials even though reading a textbook and completing the exercises has never been a huge problem.
In one of them, I was asked to show that maximizing the Gibbs entropy $$-\sum_n p_n \text{ln}(p_n)$$ under the constraints of normalization $$\sum_n p_n =1$$ and the existence of a fixed average energy $\sum_n p_n E_n = E^*$ gives the canonical ensemble, i.e.: $$p_n=\frac{e^{-\beta E_n}}{\sum_ne^{-\beta E_n}}$$. And then I am supposed to show that maximizing the entropy is equivalent to minimizing the free energy. It's quite evident looking at its definition: $$F= E -TS$$ At fixed T and E, the minimum of F matches the maximum of S (I assume S to be positive).
But is there an explicit variational derivation of this result? For example, by rewriting a Lagrangian with F as a function of $p_n$ (maybe something other than replacing E and S by their definitions)?
Optimizing $S$ is the same as optimizing a monotone function of it, so everything you'd do would be running in circles. The $p_n$ enter the formalism exactly in the entropy definition, where through the sum $\Sigma_n$, the $n$'s are "integrated out". The most microscopic object which relates to the "non-microcanonical" potentials is the partition function $$Z=\Sigma_n\ \text{e}^{-\beta E_n},$$ and you can recast the free energy using this quantity as
$$F=-kT\ \text{log}(Z).$$
But this identification in terms of $Z$, motivated by phenomenological thermodynamics of equilibrium systems, already assumes you found your optimum $p_n\propto \text{e}^{-\beta E_n}$. The difficulty in abstracting $F$ in another way, e.g. in terms of probabilities of subsystems as you suggest, is that it inherently depends on $T$, and replacing that would just end up with a function of $S$.
Regarding interpretation, I've written a few words on thermodynamical potentials here.
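As a quick numerical sanity check (my own sketch, not part of the answer; k = 1 units and the energy levels are arbitrary illustrative values), one can verify that for the Boltzmann distribution the identity $F = -kT\log Z = E - TS$ holds to machine precision:

```python
import math

T = 2.0                      # temperature (k = 1 units)
beta = 1.0 / T
energies = [0.0, 1.0, 3.0]   # illustrative energy levels (an assumption)

Z = sum(math.exp(-beta * En) for En in energies)     # partition function
p = [math.exp(-beta * En) / Z for En in energies]    # canonical p_n

E = sum(pn * En for pn, En in zip(p, energies))      # average energy
S = -sum(pn * math.log(pn) for pn in p)              # Gibbs entropy

F_from_Z = -T * math.log(Z)                          # F = -kT log Z
F_from_def = E - T * S                               # F = E - TS

assert abs(F_from_Z - F_from_def) < 1e-12
```

The identity is exact for the canonical $p_n$: substituting $\ln p_n = -\beta E_n - \ln Z$ into $S$ gives $S = \beta E + \ln Z$, hence $E - TS = -T\ln Z$.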
This is just false. There is a mathematical derivation of the canonical ensemble which follows from just maximising the entropy. You can then consider a full system+bath, and maximise the entropy of the whole universe, and show that this is equivalent to maximising the free energy of the system. Nothing is circular. – genneth Oct 29 '12 at 12:51
@genneth: I'm not denying anywhere that you can use the entropy to derive the equivalence. Actually, I basically say exactly this is the only way to go, and other derivations come back to this. I.e. all the definitions of $F$ will depend on $\{p_n\}$ only in the form $-\sum_n p_n \text{ln}(p_n)$. "Running in circles" refers to this, in the sense of "you'll come back to old arguments anyway". It's probably bad terminology, but even as written, it should not be read as a claim that there is circular logic involved. – NikolajK Oct 29 '12 at 13:00
@genneth: Are you sure about maximizing the free energy of the system? If so, then I am lost; I would say that equilibrium refers to a minimum of the free energy. – Learning is a mess Oct 29 '12 at 14:20
@MyWaytoCMT: think-o; I mean minimize free energy. – genneth Oct 29 '12 at 23:34
@NickKidman: in light of the comment, I withdraw my objections. :-) – genneth Oct 29 '12 at 23:35
I am not going to spend a lot of time on this, but what you want to do is show that assuming a uniform distribution of probability causes the second derivative of probability as a function of n to go to zero:
$$\dfrac{\partial^2 p}{\partial n^2}=0$$
A simple graphical display shows this occurs when the accumulation in p-n space is a straight line
Intuitively this can be interpreted in functional terms. Since the second derivative is zero, the straight line in our graph represents a critical function in the function space of monotonically increasing functions. It also represents a type of minimum, since the change in area under the curve with respect to changes of ordering is zero, e.g.
$$\dfrac{\partial A}{\partial s} = 0$$
Where I define $s$ as the sequence or order of the component probabilities and $A$ is area under the curve. As such, it is the curve that is conservative with respect to ordering.
A solution to your exercise is obtained by using the method of Lagrange multipliers.
The constraints we have to satisfy are three:
1) $max_{P_j} S[p]$
2) $\sum_{j} P_j E_j = E$
3) $\sum_{j} P_j =1$
Choosing $\lambda$ and $\gamma$ as variation parameters, the calculation proceeds as follows:
$S_{\lambda,\gamma}=-\sum_{j} P_j \ln{P_j} - \lambda(\sum_{j} P_j E_j - E)+\gamma(\sum_{j} P_j -1)$
$\frac{ \partial S_{\lambda,\gamma} }{ \partial{P_j} }=0\Longrightarrow -\ln{P_j}-1-\lambda E_j+\gamma=0 \Longrightarrow P_j=\frac{ e^{- \lambda E_j} }{ e^{1-\gamma} }$
Calling $Z$ the normalization constant we have:
$P_j= \frac{ e^{-\lambda E_j} }{Z}$
$\sum_{j} P_j=1 \rightarrow e^{1-\gamma}=\sum_{j}e^{-\lambda E_j}:=Z$
The values of the $P_j$ maximize entropy since:
$\frac{ \partial^2 S_{\lambda , \gamma}}{ \partial P^2_j}=\frac{-1}{P_j}$
The parameter $\lambda$ can be determined from:
$\frac{ \sum_{j} E_j e^{-\lambda E_j} }{ Z } = E$
Now I will try to show how the parameter $\lambda$ can be interpreted as the inverse of temperature. Let's introduce the function $F(\lambda , E_j ):=\ln{Z}$. We have:
$dF=\frac{\partial F}{\partial \lambda} d\lambda + \sum_{j} \frac{\partial F}{\partial E_j} dE_j=-Ed\lambda-\lambda \sum_{j} \frac{N_j}{N} dE_j$ where $P_j=\frac{N_j}{N}$
The last formula can be rewritten as:
$d(F+E\lambda)=\lambda(dE-\sum_{j} \frac{N_j}{N} dE_j):=\lambda dQ$
The last association follows from the physical fact that (looking from the point of view of quantum mechanics if you like) $-\sum_{j} \frac{N_j}{N} dE_j$ can be interpreted as the work done on the system to vary the energy levels from $E_j$ to $E_j+dE_j$, and $dE$ is the variation of internal energy. So $dE-\sum_{j} \frac{N_j}{N} dE_j$ is the quantity of heat $dQ$ exchanged by the ensemble with the surroundings.
Since the only exact differential involving heat in thermodynamics is $dQ/T$, we can conclude that $\lambda=1/T$.
Now we can define:
$f:=-\frac{1}{\lambda} \ln{Z}=E-TS$
So the maximum values of the entropy are embedded in the definition of free energy and they determine the minimum value of it.
The same procedure applies if we start from the free energy:
Here $E=\sum_{j} P_j E_j$; the constraint is $\sum_{j} P_j = 1$, with Lagrange multiplier $\mu$:
$f_\mu=\sum_{j}P_jE_j+T\sum_{j}P_j ln(P_j)+\mu(\sum_{j} P_j -1)$
$\frac{\partial f}{\partial P_j}=E_j + T \ln{P_j} + T + \mu =0 \Longrightarrow P_j=e^{\frac{-E_j}{T}} e^{-\frac{\mu}{T}-1}$
$\sum_{i}P_i=1 \rightarrow e^{\frac{-\mu}{T}-1}=\frac{1}{\sum_{i} e^{\frac{-E_i}{T}} }:=\frac{1}{Z} \Longrightarrow P_i= \frac{e^{\frac{-E_i}{T}}}{Z}$
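A small numeric experiment (my own sketch, not part of the answer) confirms that the $P_j$ found above are indeed a maximum: perturbing the Boltzmann distribution along a direction $v$ that preserves both constraints ($\sum_j v_j = 0$ and $\sum_j v_j E_j = 0$; for three equally spaced levels, $v = (1, -2, 1)$ works) strictly lowers the entropy.

```python
import math

T = 1.5
energies = [0.0, 1.0, 2.0]                     # equally spaced, so v = (1, -2, 1)
Z = sum(math.exp(-E / T) for E in energies)
p = [math.exp(-E / T) / Z for E in energies]   # the maximizing P_j

def entropy(q):
    return -sum(qi * math.log(qi) for qi in q)

# Direction that keeps sum(q) and sum(q * E) fixed:
v = (1.0, -2.0, 1.0)
assert abs(sum(v)) < 1e-12
assert abs(sum(vi * E for vi, E in zip(v, energies))) < 1e-12

S0 = entropy(p)
for t in (1e-3, -1e-3, 5e-3, -5e-3):
    q = [pi + t * vi for pi, vi in zip(p, v)]
    assert all(qi > 0 for qi in q)       # stay on the simplex
    assert entropy(q) < S0               # entropy strictly decreases
```

Because the entropy is strictly concave and its constrained first variation vanishes at the Boltzmann distribution, any such admissible perturbation decreases $S$, matching the sign of the second derivative $-1/P_j$ computed above.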
# What is the slope of any line perpendicular to the line passing through (3,8) and (-2,-15)?
Mar 17, 2016
#### Answer:
$- \frac{5}{23}$
#### Explanation:
The slope of the line joining $\left({x}_{1} , {y}_{1}\right) \mathmr{and} \left({x}_{2} , {y}_{2}\right) = \frac{{y}_{2} - {y}_{1}}{{x}_{2} - {x}_{1}}$.
The slope of the perpendicular line is the negative reciprocal.
The slope of the given line is $\frac{23}{5}$ and its negative reciprocal is $-\frac{5}{23}$.
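The arithmetic can be checked mechanically; here is a small sketch (mine, not part of the original answer) using exact fractions:

```python
from fractions import Fraction

x1, y1 = 3, 8
x2, y2 = -2, -15

slope = Fraction(y2 - y1, x2 - x1)   # (y2 - y1) / (x2 - x1) = -23 / -5
perpendicular = -1 / slope           # negative reciprocal

assert slope == Fraction(23, 5)
assert perpendicular == Fraction(-5, 23)
```

As expected, the product of the two slopes is exactly -1, the defining property of perpendicular lines.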
BOTTOM, STRANGE MESONS ($\boldsymbol B$ = $\pm1$, $\boldsymbol S$ = $\mp{}$1)
${{\mathit B}_{{s}}^{0}}$ = ${\mathit {\mathit s}}$ ${\mathit {\overline{\mathit b}}}$, ${{\overline{\mathit B}}_{{s}}^{0}}$ = ${\mathit {\overline{\mathit s}}}$ ${\mathit {\mathit b}}$,
similarly for ${{\mathit B}_{{s}}^{*}}$'s
• ${{\mathit B}_{{s}}^{0}}$ $0(0^{-})$
• ${{\mathit B}_{{s}}^{*}}$ $0(1^{-})$
${{\mathit X}{(5568)}^{\pm}}$ $?(?^{?})$
• ${{\mathit B}_{{s1}}{(5830)}^{0}}$ $0(1^{+})$ ***
• ${{\mathit B}_{{s2}}^{*}{(5840)}^{0}}$ $0(2^{+})$ ***
${{\mathit B}_{{sJ}}^{*}{(5850)}}$ $?(?^{?})$
• Indicates established particles. *** Existence ranges from very likely to certain, but further confirmation is desirable and/or quantum numbers, branching fractions, etc. are not well determined.
# Please help with the pre-lab exercise and questions 10 and 11 Experiment 19 Heat of...
###### Question:
Please help with the pre-lab exercise and questions 10 and 11.
Experiment 19, Heat of Combustion: Magnesium. In Experiment 18, you learned about the additivity of reaction heats as you confirmed Hess's Law. In this experiment, you will use this principle as you determine a heat of reaction that would be difficult to obtain by direct measurement: the heat of combustion of magnesium ribbon. The reaction is represented by the equation (4) Mg(s) + 1/2 O2(g) → MgO(s). This equation can be obtained by combining equations (1), (2), and (3): (1) MgO(s) + 2 HCl(aq) → MgCl2(aq) + H2O(l); (2) Mg(s) + 2 HCl(aq) → MgCl2(aq) + H2(g); (3) H2(g) + 1/2 O2(g) → H2O(l). The pre-lab portion of this experiment requires you to combine equations (1), (2), and (3) to obtain equation (4) before you do the experiment. Heats of reaction for equations (1) and (2) will be determined in this experiment. As you may already know, ΔH for reaction (3) is −285.8 kJ. OBJECTIVES: In this experiment, you will combine three chemical equations to obtain a fourth; use prior knowledge about the additivity of reaction heats; and determine the heat of combustion of magnesium ribbon. Figure 1
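By Hess's law, equation (4) is obtained as (2) − (1) + (3), so ΔH₄ = ΔH₂ − ΔH₁ + ΔH₃. A sketch of the bookkeeping follows; the ΔH values for (1) and (2) below are placeholders of my own (in the lab they come from your calorimetry measurements), and only ΔH₃ = −285.8 kJ is given:

```python
# Hypothetical measured values (kJ per mole of reaction) -- replace with your lab data.
dH1 = -146.0   # (1) MgO(s) + 2 HCl(aq) -> MgCl2(aq) + H2O(l)   (placeholder)
dH2 = -462.0   # (2) Mg(s)  + 2 HCl(aq) -> MgCl2(aq) + H2(g)    (placeholder)
dH3 = -285.8   # (3) H2(g)  + 1/2 O2(g) -> H2O(l)               (given)

# Reversing (1) flips its sign; adding (2) and (3) then cancels the
# MgCl2(aq), HCl(aq), H2(g) and H2O(l) terms, leaving Mg + 1/2 O2 -> MgO.
dH4 = dH2 - dH1 + dH3
print(f"Heat of combustion of Mg: {dH4:.1f} kJ/mol")  # -601.8 with these placeholders
```

Whatever the measured ΔH₁ and ΔH₂ turn out to be, the combination rule is the same; only the sign flip on the reversed equation matters.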
## Basic College Mathematics (10th Edition)
$\frac{7}{8}$
0.875: the digits to the right of the decimal point, 875, are the numerator of the fraction. The denominator is 1000 (thousandths) because the rightmost digit is in the thousandths place, i.e. $\frac{875}{1000}$ = $\frac{875\div125}{1000\div125}$ (divide both numerator and denominator by 125) = $\frac{7}{8}$ (lowest terms)
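The same reduction can be checked with Python's standard-library `fractions` module, which reduces to lowest terms automatically:

```python
from fractions import Fraction

# 0.875 = 875/1000; Fraction divides out the common factor 125 for us.
f = Fraction(875, 1000)
print(f)                   # -> 7/8

# Constructing from the decimal string gives the same reduced fraction.
print(Fraction("0.875"))   # -> 7/8
```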
Chapter 10
Esophageal cancer is among the 10 most frequent cancers in the world. The annual incidence reported in Western countries is 3 per 100,000, compared with 140 per 100,000 in Linxian Province in central China.1 Esophageal cancer remains one of the most lethal of all malignancies. Once a diagnosis is established, the prognosis is poor, with a 5-year survival rate of less than 10%. The results of single-modality treatment have been poor, with the exception of surgery for early esophageal cancer. Recently, neoadjuvant chemotherapy, radiotherapy, and combined chemoradiation therapy have been added as treatment modalities to enhance local control, increase resectability rates, and improve disease-free survival.2 The initial results of these multimodality treatments have been encouraging. Since management of esophageal cancer and survival of patients is stage-dependent, accuracy of clinical staging is vital. Recent advances in CT, MRI, and PET of the esophagus, as well as endoscopic ultrasound (EUS) and minimally invasive thoracoscopic/laparoscopic staging (Ts/Ls) offer new hope for reliable preoperative diagnosis and staging of patients with esophageal cancer.
The boundaries of the esophagus are the inferior cricopharyngeal constrictor proximally and the esophagogastric junction distally. The esophagus is composed of four layers: mucosa, submucosa or lamina propria, muscularis propria, and adventitia (Fig. 10-1). The esophagus has no serosa, providing a teleologic explanation for the ease of spread of esophageal cancer.
###### Figure 10-1.
The four layers of the esophagus: mucosa, submucosa or lamina propria, muscularis propria, and adventitia.
Anatomically, the normal adult esophagus is approximately 35 cm in length and 2.5 cm in diameter, although it is not uniform throughout its course. The course of the esophagus begins in the midline in the upper neck at the level of the sixth cervical vertebra, which corresponds roughly to the level of the cricoid cartilage, and then deviates to the left in the lower neck and upper thorax. At the level of the tracheal bifurcation (24 cm from the incisors by endoscopic measurement), the esophagus again returns to the midline only to deviate to the left once again in the lower thorax, where it enters the abdomen through the diaphragmatic hiatus (40 cm from the incisors). Clinically, the esophagus is divided into three segments, the cervical, middle, and distal segments. The cervical segment ranges from the cricoid cartilage to the thoracic inlet (10–18 cm from the incisors). The middle esophageal segment ranges from the thoracic inlet to the midpoint between the tracheal bifurcation and the esophagogastric junction (19–34 cm). The distal esophageal segment extends from the midpoint between the tracheal bifurcation and the esophagogastric junction (35–44 cm). Three distinct narrowings are present in the esophagus. The first narrowing is formed by the cricopharyngeus muscle and is the narrowest segment of the gastrointestinal tract, located 12–15 cm from the incisors in the adult. The second narrowing is caused by the tracheal bifurcation and aortic arch at ...
# Why does helium-3 stay in the moon and not escape from it?
So the moon is full of helium-3.
Since it's a gas in the moon's vacuum... Why doesn't it escape?
• Can you provide a reference for your claim? – casey Jun 26 '14 at 22:59
• InquilineKea is not in the habit of putting a lot of effort into questions. Rather, of putting a little effort into lot of questions. – naught101 Jul 9 '14 at 13:42
The Moon is not "full" of helium-3. 3He is at most fifty parts per billion of the lunar regolith1 and that "high" concentration pertains only to permanently shadowed craters. The Moon is bombarded by a steady stream of helium-3 while sunlit. Some of this incoming helium-3 is temporarily embedded in the lunar regolith. Without this steady supply, the helium-3 content would dissipate at a temperature-dependent rate proportional to the amount of helium-3 in the lunar regolith.
The quantity $q(t)$ of helium-3 in a cubic meter of lunar regolith is thus dictated by a simple differential equation, $\dot q(t) = \alpha(t) - \beta(T)q(t)$. Time averaging the bombardment and escape rates yields $\dot {\bar q}(t) = \bar{\alpha} - \beta(\bar T)\bar q(t)$. This differential equation yields a steady state value of $\bar q = \bar{\alpha}/\beta(\bar T)$.
1 Cocks, "3He in permanently shadowed lunar polar surfaces", Icarus, 206(2):778–779 (2010).
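The relaxation to the steady state $\bar q = \bar{\alpha}/\beta(\bar T)$ can be checked numerically. A minimal sketch, with made-up values for the implantation rate $\alpha$ and escape rate $\beta$ (illustrative numbers, not measured lunar figures):

```python
# Integrate the time-averaged balance dq/dt = alpha - beta*q with forward
# Euler and confirm it relaxes to the steady state q_ss = alpha / beta.
# alpha and beta are invented for illustration.

alpha = 2.0e-3   # helium-3 delivered per unit time (arbitrary units)
beta = 0.5       # temperature-dependent escape rate (1/time)

q, dt = 0.0, 0.01
for _ in range(10_000):          # 100 time units = 50 e-folding times
    q += (alpha - beta * q) * dt

q_ss = alpha / beta
print(q, q_ss)   # q has relaxed to q_ss = 0.004
```

Starting from empty regolith, the solution approaches the balance point where implantation equals escape, which is the answer's point: the observed concentration is a supply/loss equilibrium, not retained primordial gas.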
Prove that $(g \circ f )^{-1} = f^{-1} \circ g^{-1}$
I've already proven that if we assume $f$ is bijective and $g$ is bijective, then $(g \circ f)$ is bijective. I've also proven that $(g \circ f)^{-1}$ exists. I'm stuck on this part, however. Any suggestions?
Do you know the rule for groups? The proof here is the same. – Neal Dec 4 '12 at 3:46
Suppose that $\langle x,y\rangle\in(g\circ f)^{-1}$; clearly $\langle y,x\rangle\in g\circ f$, so by the definition of composition there must be some $z$ such that $\langle y,z\rangle\in f$ and $\langle z,x\rangle\in g$. But then $\langle z,y\rangle\in f^{-1}$ and $\langle x,z\rangle\in g^{-1}$, so again by the definition of composition we have $\langle x,y\rangle\in f^{-1}\circ g^{-1}$, and hence $(g\circ f)^{-1}\subseteq f^{-1}\circ g^{-1}$. The opposite inclusion is proved similarly: all of the steps in the argument are reversible.
Alternatively, if you’re not working at this low a set-theoretic level, just verify that
$$\left(f^{-1}\circ g^{-1}\right)\circ(g\circ f)$$ is the identity function on the domain of $f$, and
$$(g\circ f)\circ\left(f^{-1}\circ g^{-1}\right)$$ is the identity function on the range of $g$. These are trivial calculations if you know that composition of functions is associative.
The symbol $(g \circ f)^{-1}$ reads: the function that, when composed with $g \circ f$, gives the identity. This is a description that uniquely characterizes $(g \circ f)^{-1}$, so you only need to check that $f^{-1} \circ g^{-1}$ has this property.
Also, don't forget to check compositions on both sides!
Thank you. I had forgotten that $(g \circ f)^{-1}$ is unique. Now that I know that the solution is simple. – Bill Dec 4 '12 at 3:59
Recall that $(g\circ f)(a)=g(f(a))$. Three very useful statements which I will suppose you have proved before, at one point or another, are:
1. The composition of injective functions is injective.
2. The composition of surjective functions is surjective.
3. If $f$ is a bijection then $f^{-1}$ is a bijection and $(f^{-1}\circ f)(a)=a$.
So we assumed that $f\colon A\to B$ and $g\colon B\to C$ are bijections, by $(1), (2)$ we have that $g\circ f$ is also a bijection, and by $(3)$ we have that $(g\circ f)^{-1}$ is a bijection as well, and $f^{-1}\circ g^{-1}$ is also a bijection. Note that $\operatorname{dom}\Big((g\circ f)^{-1}\Big)=\operatorname{dom}\Big(f^{-1}\circ g^{-1}\Big)=C$.
To show that two functions with the same domain are equal, we need only show that they map every element the same way. Namely, we need only verify that for every $c\in C$ the following equality holds: $$(g\circ f)^{-1}(c)=(f^{-1}\circ g^{-1})(c)$$
Let $a$ denote $(g\circ f)^{-1}(c)$, then we know that $g(f(a))=c$. We also know that $(f^{-1}\circ g^{-1})(c)=f^{-1}(g^{-1}(c))$. But $g$ is a bijection therefore $g^{-1}(c)=f(a)$ from the two things we know here. But $f$ is a bijection and by $(3)$ we have that $f^{-1}(f(a))=a$, so finally we have: $$(f^{-1}\circ g^{-1})(c)=f^{-1}(g^{-1}(c))=f^{-1}(f(a))=a=(g\circ f)^{-1}(c)$$ | |
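For a concrete sanity check alongside the proof, the identity can be verified on a small finite example, with bijections represented as Python dicts (the particular sets and maps here are invented for illustration):

```python
# Check (g o f)^-1 = f^-1 o g^-1 on finite bijections stored as dicts.
# f : {0,1,2} -> {a,b,c}, g : {a,b,c} -> {x,y,z} are made-up examples.

f = {0: "a", 1: "b", 2: "c"}
g = {"a": "x", "b": "y", "c": "z"}

def compose(outer, inner):
    """Return the dict for outer o inner: k -> outer(inner(k))."""
    return {k: outer[v] for k, v in inner.items()}

def inverse(h):
    """Invert a bijection given as a dict."""
    return {v: k for k, v in h.items()}

lhs = inverse(compose(g, f))            # (g o f)^-1
rhs = compose(inverse(f), inverse(g))   # f^-1 o g^-1
print(lhs == rhs)   # -> True
```

Note the order reversal: to undo "apply $f$, then $g$", one must first undo $g$, then undo $f$, exactly as in the group-theoretic rule Neal's comment alludes to.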
# What is a non constructible graph?
I'm working through "Groups, Graphs and Trees" by John Meier.
In Chapter 5, he states that in a Cayley Graph, $$\Gamma$$ (or indeed any graph), the ball of radius n centred at the vertex v, $$\mathcal{B}(v,n)$$, is the subgraph formed as the union of all paths in $$\Gamma$$ of length $$\le n$$ that start at the vertex $$v$$.
It is then stated that a Cayley graph Γ is constructible if given $$n \in N$$ one can construct $$\mathcal{B}(e,n)$$ in a finite amount of time.
My question is what is an example of a non-constructible graph?
In particular, it is implied that there are finitely generated groups which have non-constructible Cayley graphs. I cannot comprehend what such a group looks like.
## 1 Answer
To construct the Cayley graph you need to be able to solve the word problem in the group, so any group with unsolvable word problem is an example.
In fact the converse is also true - if the group has solvable word problem then you can construct the Cayley graph.
• Would you be able to go into a specific example of such a group? I had seen the result you quoted in Meier but I am struggling to see what such groups are. Apr 22, 2022 at 14:16
• Such groups are complicated (and in some sense artificial), and constructed from Turing machines with unsolvable word problem using HNN extensions. If you really want to learn about them, I would recommend the final two chapters of Rotman's book "An Introduction to the Theory of Groups". Apr 22, 2022 at 16:01
• Okay, thanks. I think that intuitively had concluded these groups must be "artificial". I shall have a look at Rotman. Apr 22, 2022 at 17:10 | |
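The positive direction mentioned in the answer (a solvable word problem lets one construct $\mathcal{B}(e,n)$) can be sketched as a breadth-first search. Here the word-problem oracle is replaced by multiplication in a concrete finite group, $S_3$ with two transposition generators; this stand-in is an illustrative assumption, not an example of a hard group:

```python
# Build the ball B(e, n) of a Cayley graph by BFS: vertices reachable by
# words of length <= n in the generators. For a group with solvable word
# problem, "mul" would be replaced by the decision procedure; here we use
# S_3 as permutation tuples. Both generators are self-inverse transpositions,
# so the generating set is already symmetric.

def mul(p, q):
    """Compose permutations given as tuples: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(p)))

e = (0, 1, 2)
gens = [(1, 0, 2), (0, 2, 1)]   # two transpositions generate S_3

def ball(n):
    """Vertex set of B(e, n)."""
    frontier, seen = {e}, {e}
    for _ in range(n):
        frontier = {mul(g, x) for x in frontier for g in gens} - seen
        seen |= frontier
    return seen

print([len(ball(n)) for n in range(4)])   # -> [1, 3, 5, 6]
```

For a group with unsolvable word problem, the `mul`/equality step cannot be carried out by any algorithm, which is exactly why such Cayley graphs are non-constructible.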
## Oleg: An argument against call/cc
Oleg provides various arguments against including call/cc as a language feature.
We argue against call/cc as a core language feature, as the distinguished control operation to implement natively, relegating all others to libraries. The primitive call/cc is a bad abstraction -- in various meanings of 'bad' shown below -- and its capture of the continuation of the whole program is not practically useful. The only reward for the hard work to capture the whole continuation efficiently is more hard work to get around the capture of the whole continuation. Both the users and the implementors are better served with a set of well-chosen control primitives of various degrees of generality with well thought-out interactions.
## Comment viewing options
### I agree. It's what you can
I agree. It's what you can hack into a runtime; the rest isn't even academically that interesting. I just used a combinator which returns a combinator which restores a reified context when applied. Probably, Oleg is right, and there are a thousand more ways to do it.
### call/cc utility
Better might very well be to specify and permit but not require call/cc:
Suppose that you have a Scheme specification with call/cc. In this Scheme you can express and thereby assign clear semantics to a number of constructs that are alternatives to call/cc (such as delimited continuations). Call/cc has expressive and explanatory power, in that way.
If you then later say that a language really only needs some of these other more specialized constructs that can be built with call/cc, and that providing call/cc itself is a low-payoff burden to many implementations, well that's fine. Don't require call/cc per se. Make it optional. Call/cc has still helped you compactly and usefully explain all the things you actually want to require in the implementation. All those more practical control concepts conceptually reduce to uses of call/cc.
So, something like call/cc is at least not a far-fetched candidate for service as an "intermediate term" in specifying a language that doesn't necessarily include call/cc.
That's a reason to specify call/cc in a language core, but not require its implementation.
Now suppose that you have a kind of practical but "naive" scheme implementation. You'll represent code, at run-time, as a graph of cons pairs. The environment will be a graph of cons pairs.
This implementation will be slow (we're assuming a naive, minimalist interpreter. Not necessarily more sophisticated than something like SIOD). This trivial interpreter could compete with a unix shell interpreter or AWK for some tasks, say, but we don't expect a lot more.
Now in that implementation, call/cc in all its horrifying glory is (relative to the other parts of the implementation) cheap and blazingly fast. Capturing the continuation is consing up a few registers and slapping an appropriate type tag on the aggregate. Done. The theoretical explanations of many of those more specialized constructs in terms of call/cc? In this naive implementation, those specifications are directly executable with perfectly acceptable performance for the context. For example, it's perfectly practical and useful to implement delimited continuations atop this naive implementation of call/cc. Yes, it's slower than what an optimizing compiler can do. At worst, heck, it's as slow as stuff in AWK or sh. Even at that slow speed, it can be perfectly practical.
So, "call/cc" -- in addition to being a plausible "intermediate term" in explaining the more specialized constructs, is also a *practical* construct for some kinds of real-world use.
A perfectly reasonable thing for a scheme-like language specification to do would be to:
1) Specify and permit but not require call/cc.
2) Use call/cc to specify more specialized required constructs
A similar argument could be made in favor of first class environments. They are easy to specify. They are directly practical in implementations with only modest performance goals. They are abstractly practical for specifying more specialized binding constructs. A sane idea would be to specify them and build on the specification while not requiring them but perhaps requiring more specialized constructs derived from them.
### I like that perspective
Two points about implementing delimited control in terms of call/cc:
1) You need a way to determine if a continuation is empty, so delimited control can be made properly tail-recursive. See A Monadic Framework, section 5.2.
2) All your code that uses delimited control now needs to be called inside a wrapper function that establishes the necessary structures. This means that e.g. loading a file has to happen inside this wrapper, as does REPL evaluation.
For the second reason, I've decided to implement delimited control natively in my current language.
### As I understood the post,
As I understood the post, it's not saying that continuations are bad, just that delimited continuations are better than call/cc because:
• They are just as easy to implement
• They are cleaner and more expressive
• The implementation of the common things that can be built on top of continuations is far cleaner with delimited continuations (exceptions, non-determinism, etc.)
• Implementing delimited continuations on top of call/cc + set! is awkward and slow, and may leak memory
Personally I find them easier to understand as well, but YMMV.
### Actually read Oleg's
Actually read Oleg's post.
If you then later say that a language really only needs some of these other more specialized constructs that can built with call/cc [...]
The point is that you actually can't, because call/cc is a leaky abstraction.
Make it optional.
This is the choice that Racket made, and it took a lot of work to figure out the "right" way to make different control operators work together (see this thread). In the end, they still don't have a real call/cc since it respects prompts. The reason it's still there is to still have some reasonable behavior on ported Scheme code, and to provide continuations that don't build up prompts unnecessarily (though I don't know how often that problem crops up). [edited from "hurt immensely" following a discussion with samth]
Call/cc has still helped you compactly and usefully explain all the things you actually want to require in the implementation. All those more practical control concepts conceptually reduce to uses of call/cc.
Source? Oleg refutes this convincingly.
Now suppose that you have a kind of practical but "naive" scheme implementation. [...] Now in that implementation, call/cc in all its horrifying glory is (relative to the other parts of the implementation) cheap and blazingly fast
Relatively fast, where you're relating with a target slower than molasses. Again impractical.
### be more specific?
For example, you write:
The point is that you actually can't, because call/cc is a leaky abstraction.
My understanding was that non-leaky implementations of reset/shift in terms of call/cc had been demonstrated. Isn't that what you mean?
### My understanding was that
My understanding was that non-leaky implementations of reset/shift in terms of call/cc had been demonstrated.
Yes, under very specific conditions which are not met by any Scheme implementation.
### what about Scheme 48?
Yes, under very specific conditions which are not met by any Scheme implementation.
Wasn't there, for example, an indirect space-safe implementation in Scheme 48? But this is getting silly. Here is what caught my attention in Oleg's paper:
Implementing control features in terms of call/cc has inherently poor performance. The operator call/cc captures so-called multi-shot, escaping continuations, letting us return several times from a single procedure call. Such generality costs performance but is rarely needed. For example, exceptions do not require any continuation capture, let alone multi-shot one. It is much easier and quite more efficient to implement them as primitive, rather than to emulate in terms of control operators. Even R7RS Scheme now specifies a dedicated exception-handling facility; previously, portable Scheme programs were supposed to rely on call/cc. By the same token, simple generators such as those in CLU and Python are realizable on linear stack with standard function calls and need no continuation capture either. They too make sense to implement natively rather than emulate through more powerful and expensive control operators. Using call/cc for threads and coroutines has an unacceptable cost due to dynamic-wind, as described further below.
It has been frequently observed that in almost all cases a captured continuation is invoked at most once. Dybvig et al. and Feeley showed that one-shot continuations are much more efficient to implement. The sole use case for multi-shot continuations, it seems, is non-determinism. Better abstractions for non-determinism, such as generators, have long been known in Icon and Scheme predecessors.
Experience shows that call/cc makes a poor trade-off between generality and performance. Implementing exceptions and all other control facilities as library functions in terms of call/cc turns out a bad idea.
For decades people have built and successfully used versions of those other constructs on top of call/cc.
I don't dispute that this is often slow or that a high performance implementation would want to seek another basis.
I do dispute that it "turns out [to be] a bad idea". Taking "call/cc" as core is a bad idea for some applications, a perfectly fine idea for others.
I think part of the socio-economic problem, so to speak, is that there is an imperative built-in to the rNrs process. The imperative is to identify a unique core language in terms of which the rest of Scheme is specified. So there is natural contention for membership in that core. So arguments arise like whether or not call/cc should be removed and some delimited continuation primitives take its place.
The preciousness of that imperative is captured and perhaps created by the famous Clinger quote about not piling on features. It is as if from the moment that was penned, rNrs became a contest to perfect "core scheme", the ultimate, final tiny language from whence all else can be derived.
In real life, historically, and for decades, the Scheme specification is honoured more in its breach than in its faithful execution. Quite common are dialects that implement "most of" some rXrs, well enough to run various SRFIs of interest or to support this or that programming style of interest. (Formerly, "well enough to run most of SLIB"). Scheme manifests as a family of dialects, many of which approximate ad hoc subsets of an idealized full language. Good!
Scheme understood that way doesn't need or much benefit from having an official "core" because in practice it is a family of subsets of some ideal language. The ideal "core" depends on just what particular subset we're talking about.
That's why I suggested specifying but not requiring call/cc. If your dialect only needs a few basic things and a graph interpreter is plenty fast enough, call/cc can be a useful "core" primitive. If your dialect is sufficiently fancy, on the other hand, you might not even want to support call/cc in any serious way. It's handy to have an explanation for call/cc and to be able to explain some other constructs, in some contexts, in terms of call/cc. It's important to not require (a serious version of) call/cc.
Where's the problem? Specify but don't require it. Why is that controversial?
### It comes down to the Plotkin
It comes down to the Plotkin ideology that calculi should correspond to programming languages. A semantics should not only give extensional meaning to programs but also give insight into what it means to compute. If you define everything interesting in terms of the "hand of god" operator, you get religious dogma and not a useful blueprint.
### how's that go?
"hand of god" operator?
I'm sure I have no idea what you mean.
### call/cc is so heavy-fisted
call/cc is so heavy-fisted that you might think it the hand of god. It also makes an otherwise intuitionistic logic classical, meaning there is now (in one interpretation) a "function that decides everything," which Bob Constable calls magic, others an oracle and yet others appealing to god. Perhaps I was a bit too clever.
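For readers who want the logical content behind that remark spelled out, a sketch (mine): under the Curry-Howard correspondence, the type of call/cc is exactly Peirce's law, which is not provable intuitionistically, so adding it collapses the logic to a classical one.

```latex
% call/cc inhabits Peirce's law:
\mathrm{call/cc} : ((A \to B) \to A) \to A
% Instantiating B := \bot yields (\neg A \to A) \to A, from which
% (together with ex falso) double-negation elimination \neg\neg A \to A
% follows -- the "function that decides everything" mentioned above.
```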
### nature of things
call/cc is so heavy-fisted that you might think it the hand of god. It also makes an otherwise intuitionistic logic classical, meaning there is now (in one interpretation) a "function that decides everything," which Bob Constable calls magic, others an oracle and yet others appealing to god.
Or... it's the operation of consing up a few key registers and tagging them as a kind of procedure.
### Yes, that is also true.
It's a heavy-fisted "hand of God" operator, *and* it's possible in some systems (eg, those that keep stack frames on the heap) to do it with a very fast, simple operation.
The point he was making is that it's got "hand of God" semantics, regardless of how simple or easy the implementation might be.
Ray
### what do you mean "core"?
Oleg wrote:
We argue against call/cc as a core language feature, as the distinguished control operation to implement natively relegating all others to libraries.
The tradition in Scheme standards has been to specify a "core" language with additional "library" procedures and syntax. So, what is "core Scheme"? I understand the intended significance of this core to be three-fold:
1) Definitional: The specification of the core is supposed to be an economical expository foundation upon which the greater language can be explained. If something is marked traditionally as a "library" procedure, for example, we expect that to mean that the specification for the procedure could understandably be expressed as a program written in purely "core" Scheme.
2) Instructional: Historically, the Scheme core is small but not minimal. In part that's because of its instructional role. The core is supposed to convey the conceptual essence of Scheme in practice, not just some idealized mathematical essence.
3) Implementation Guiding: Historically, the expectation is that if the core is well implemented, then all the rest of Scheme can in fact be supplied in the form of libraries that are portable to any good implementation of the core. A stronger form of this expectation also lurks: that perhaps the core could be so good that implementations would nearly never need to provide anything but the core, leaving the rest to portable libraries.
Oleg writes that he offers arguments against call/cc as a core feature, one "to implement natively relegating [various other control constructs] to libraries". I understand him to be speaking in the framework of that three-part view of "core Scheme". He is, so to speak, arguing about which direction to move to search for the Ultimate core language.
Because that traditional view of "core Scheme" looks for a kind of platonic ideal, Oleg has to make absolute statements that are not quite true, like:
The only reward for the hard work to capture the whole continuation efficiently is more hard work to get around the capture of the whole continuation.
Actually, the trade-offs depend quite a bit on the implementation techniques and goals. Sometimes a native call/cc is quick, easy, and perfectly practical -- better than any alternative.
We can reconcile all this and understand Oleg's points in a more productive way, I think, if we alter the traditional "core scheme" framework. There is no ideal core that scores on all three points (definitional, instructional, and implementation guiding). There is not even any reason to believe in principle that any such core could exist.
If we abandon the traditional pursuit of a chimeric Ultimate core then it's perfectly sensible to (a) say something like "it's handy to specify but not require call/cc" and (b) read Oleg's note as a list of reasons to sometimes avoid call/cc.
(As an aside, Ray, I'm not familiar with terms of art like "hand of God semantics".)
### Explaining the Hand of God.
The "Hand of God" reference is part of a very different intellectual tradition. A few centuries ago, when (Christian) religious intellectuals, in a world as yet largely unexplained by science and uninfluenced by scientists, were trying to explain the physical world as an artifice or creation of God for the purpose of mankind's moral instruction, they were occasionally presented with questions relating to things so fundamental or pervasive that their moral relevance, or the possibility of alternatives, was not readily apparent. To such things they could only respond, "So it was made by the Hand of God."
The context or "moral lesson," they went on to explain, was that God sometimes does things or makes things a certain way for reasons we don't necessarily understand, that this is to be expected because God's wisdom is perfect and ours is not, and it is therefore the moral duty of a good Christian to accept the (perfect, whether we know it or not) decisions and creations of the Supreme Being, even when we don't understand them and may actively dislike their results, thankfully and with good grace. Or, as we would say in the modern era, the good men of the cloth were adroitly ducking the question by making a moral lesson of the fact that nobody knew the answers.
Over time this singularly unhelpful response became the Canonical answer to a larger and larger body of requests for enlightenment, and over time a non-religious intellectualism emerged, composed of people who were more and more unsatisfied with that answer, and particularly with its repetition or excessive applicability.
When someone refers to the "Hand of God" in a scientific context, it usually refers to that old argument. It expresses frustration or dissatisfaction with a single explanation that tries to explain too much, can't be reasonably examined or disproved, or is formulated primarily for its effectiveness in shutting the questioners up rather than to provide them with useful information.
BTW, I have living relatives, whom I respect deeply, who have taken this difficult lesson to heart and have trained themselves to be satisfied with it. They have both a monastic stoicism and an inner peace that I, as an "outlander" to them and their religious tradition, find to be unattainable and almost surreal. So I'm in the odd position of having seen both sides of this argument, or at least understanding and having empathy for the way the mindset it advocates works.
Hope that helps,
Ray
### Two questions
As Oleg noted, it has been understood for a while that call/cc is not the best abstraction. I think there was some discussion on this site in the early 2000's, probably related to dynamic-wind.
I have two questions after reading the link.
1. Isn't any form of lazy evaluation leaky, since resources are held until the value is effectively used (if ever)?
2. Are resumable exceptions akin to refirable continuations? If they are, what makes them better than call/cc in terms of practicality, performance and safety?
### 1. Isn't any form of lazy
1. Isn't any form of lazy evaluation leaky, since resources are held until the value is effectively used (if ever)?
I don't know whatever gave you that idea, and the question seems nonsensical to me. From a 'lazy' perspective, strict evaluation might be called leaky since sometimes you tediously need to program around holding all resources until a value is demanded. It's just a reduction order; both reduction orders 'leak' their timed behavior.
I really don't understand call-cc like Oleg does so I wouldn't know about the other question. Then again, it wouldn't surprise me if there are fewer than five people in the world who really understand call-cc.
### I think I was not clear.
I think I was not clear.
Oleg makes the point that call/cc is leaky because the whole program needs to be there and ready to be resumed as long as you have a continuation in scope (if I understand right).
My question is, isn't this true to a certain extent for any lazy evaluation since you need a closure to maintain the "live" state of the program until the value is needed, at which point it can be freed? For memory resources this is not a big deal but it can matter if there are I/O resources involved. I suppose this is not a problem in Haskell because I/O lives in the monad world, isn't it?
### You were clear.
AFAIK, monadic IO doesn't enforce a strict order of evaluation even when you think you sequentially chained IO; there is only lazy reduction on a demanded value, so problems with lazy IO are probably independent of whether you approach a problem monadically.
My question is whether you'd call that 'leaky'. As stated, in a strict language you also need to program around the timed behavior of evaluation; it's just that strict evaluation is so much simpler and better ingrained into people's minds that it isn't seen as 'leaky'.
### I guess I thought too complex.
So, some more basic answers.
Call-cc leaks memory. As I said, I don't understand call-cc, but extrapolating from what I implemented: leaking memory can probably be avoided, and is probably due to the fact that the program state which is stored between different call-cc invocations needs to be copied. My bet: there isn't enough sharing, on a stack machine, between different program states. I think it can be made lightweight, but only if program states live in the heap and can be seen as a collection of linked states.
Then, again, the lazy evaluation question. No.
Lazy evaluation is a reduction order. Strict evaluation is also a reduction order. The only, but real, benefit of lazy evaluation is that it is normalizing. You can make examples which blow up under strict evaluation and not under lazy evaluation (print a list of the first five hundred Fibonacci numbers), and the converse (lots of list insertions or concatenations). Except for the normalizing, or implied expressiveness, part, both reduction orders have similar merits and pitfalls.
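The Fibonacci half of that example can be sketched in Python, where a generator behaves like an on-demand stream: taking the first 500 values forces only those, while a strict "list of all Fibonacci numbers" could not exist at all.

```python
# A generator produces values only as they are demanded, so the
# conceptually infinite stream below costs nothing until consumed.
from itertools import islice

def fibs():
    a, b = 0, 1
    while True:          # infinite; values are produced on demand
        yield a
        a, b = b, a + b

first_500 = list(islice(fibs(), 500))
print(first_500[:10])    # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```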
### Then, again, the lazy
Then, again, the lazy evaluation question. No.
Lazy evaluation is a reduction order. Strict evaluation is also a reduction order. The only, but real, benefit of lazy evaluation is that it is normalizing. You can make examples which blow up under strict evaluation and not under lazy evaluation (print a list of the first five hundred Fibonacci numbers), and the converse (lots of list insertions or concatenations). Except for the normalizing, or implied expressiveness, part, both reduction orders have similar merits and pitfalls.
Thanks. That is interesting.
In the example with lots of concatenations, there is a large amount of input (say strings) which reduce into a single result. You could imagine a structure which reflects both the operations to do (string concatenation) and the final result (list of strings as a string). But there are cases where only a small part of the input is used, even if there is a lot of input needed to get a result. Even worse, the input may require access to resources which must be prebooked (with locks or others) at a time where the program has appropriate privileges, to only be used - and released - at a later stage.
This is where I see a parallel with the "leakage" issue in call/cc, made manifest with dynamic-wind.
### Resumable exceptions
Are resumable exceptions akin to refirable continuations? If they are, what makes them better than call/cc in terms of practicality, performance and safety?
Resumable exceptions have finite extent (finite lifetime): an exception cannot be resumed after its handler has exited. In contrast, a continuation captured within a function has an indefinite extent, and can be invoked well after the function has exited. That's why call/cc-captured continuations are sometimes called 'escaping continuations'. One may say that the difference between resumable exceptions and captured continuations is akin to the difference between stack-allocated values and heap-allocated ones.
Resumable exceptions are also very easy to implement: raising a resumable exception is just an ordinary function call (where the function to invoke, the handler, is determined from the dynamic environment). Lisp has no call/cc but it has had resumable exceptions for a very long time (they are used frequently and have turned out to be a great feature).
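The "ordinary function call" point can be sketched in a few lines of Python (names and design mine, purely illustrative): raising searches the dynamic environment for a handler and simply calls it; if the handler returns, its value resumes the raising site. No continuation is captured, and nothing can be resumed once the handler's scope exits.

```python
_handlers = []  # the dynamic environment: innermost handler last

def with_handler(handler, thunk):
    _handlers.append(handler)
    try:
        return thunk()
    finally:
        _handlers.pop()

def signal(condition):
    # An ordinary function call: the raising frame stays on the stack,
    # and the handler's return value becomes the value of signal(...).
    return _handlers[-1](condition)

def parse_int(s):
    try:
        return int(s)
    except ValueError:
        return signal(("bad-int", s))   # resumable: handler picks a value

result = with_handler(lambda cond: 0,   # resume with a default
                      lambda: [parse_int(s) for s in ["1", "x", "3"]])
print(result)  # [1, 0, 3]
```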
### Yes. Is "escaping" the
Yes. Is "escaping" the difference between a delimited and a full continuation?
I am interested in the subject because I am using a continuation-passing calling convention in dodo. The language has to take explicit steps to prevent a call/cc; these are the restrictions I have devised so far:
1- A continuation cannot take another continuation as argument
2- An exception to -1- is for generators, which pass a continuation to the return continuation
3- A generator cannot capture a non-local continuation (by non-local I mean outside the scope which introduces the return continuation)
4- The return continuation invoked by a generator is always fresh, which means it cannot be called twice
There is still the problem of cleaning up the resources held by the generator, like heap space, any file handles, etc. I do not see a way to solve it except by doing a special invocation of the generator before it goes out of scope (like automatic resource management in C++, Java...)
### Lazy evaluation and resource leaks
It is indeed true that lazy evaluation makes it quite a challenge to analyze the space behavior of a program. Memory leaks are quite possible. Fortunately GHC has become quite good at strictness analysis: it figures out if a value will be needed and if so, computes it eagerly. Sometimes however, we do have to give a strictness annotation (in the form of a bang pattern or seq). In my experience, if a program runs out of memory, the most common culprit is an arithmetical expression (since an integer takes less space than a closure to compute it).
There are cases where leaks become show-stoppers. The following article documents one such case and a difficult work-around:
When resources other than memory are involved (so-called 'lazy IO'), leaks are quite common and are very well-known. Please see the overview in Sec 2 of the paper
(The paper also shows an example when a small change in a Lazy IO program changes the O(1)-space behavior to O(n) -- forcing loading of the whole file in memory. This is indeed a big problem in practice.)
### Leaky abstraction...
Ah, memory leak, not leaky abstraction. I was thinking about 'leaky' lazy evaluation. As in, what's the difference between these two programs:
```haskell
main = do inFile <- openFile "foo" ReadMode
          contents <- hGetContents inFile
          putStr contents
          hClose inFile
```
Prints the contents of the file.
```haskell
main = do inFile <- openFile "foo" ReadMode
          contents <- hGetContents inFile
          hClose inFile
          putStr contents
```
Prints nothing.
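A rough Python analogue (mine) of the same two programs: a generator defers the actual reads, so the handle must stay open until the contents are demanded. Close too early and the deferred reads fail, much as hGetContents yields nothing after hClose.

```python
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "foo")
with open(path, "w") as f:
    f.write("hello\n")

# Force the contents before closing: works.
inFile = open(path)
contents = (line for line in inFile)   # lazy: nothing read yet
got = list(contents)                   # the reads happen here
inFile.close()
print(got)                             # ['hello\n']

# Close before forcing: the deferred reads blow up.
inFile = open(path)
contents = (line for line in inFile)
inFile.close()
try:
    list(contents)
    failed = False
except ValueError:                     # I/O operation on closed file
    failed = True
print(failed)                          # True
```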
Moreover, with this:
Fortunately GHC has become quite good at strictness analysis: it figures out if a value will be needed and if so, computes it eagerly.
I think you can probably exploit the strictness analysis to switch between these two behaviors, though I wouldn't know exactly how at the moment.
Whatever, my point in the other post is: I don't think monadic IO solves the inherent problem around evaluation order 'leaking' into the pure semantics of the two programs. Personally, I would expect only that defaulting back to (timed) stream processing functions would solve the problem of this 'leaking' behavior, since (only) with stream processing functions the output is fully defined, or specified, given some input.
But that would only fix a somewhat academic problem at best, 'fail' in a context of concurrent evaluation, and, in general, not be worth it.
### call/cc as an effect
Just some minor thoughts...
Putting Scheme aside and looking at call/cc as a construct that expands a language's expressive power: You can regard call/cc as effectful, and use effects-typing to limit the scopes where it is allowed, and hence allow greater optimization of code where it is guaranteed not to occur.
Starting with a language that has a foundational subset in which termination is guaranteed, you can first expose "fix" as an effectful function enabling general recursion and creating the possibility of non-termination, and then expose "call/cc" as an effectful function enabling multiple-return. This is all a little bit clearer if you expose them as "letrec" and "letcc" instead.
Or embed these features a la Haskell with MonadFix and MonadCC.
### Non-termination is better as an error than an effect
I agree with your greater point about not making call/cc a primitive but non-termination should not be modeled as an effect in a general purpose programming language. It should be modeled as an error that you might not be able to prove isn't there. Modeling it as an error is fine: programs that run forever between generating effects are not ever what was intended. Modeling it with effects is going to force a bunch of functions that ought to terminate (based on informal reasoning) into a different semantic class for no good reason.
### Krivine Realizability (Again)
Tim, I'm sure you've come across this material in your research, and I'd appreciate hearing your thoughts on Griffin's A Formulae-as-Types Notion of Control and the follow-up work on Krivine realizability if you can spare a moment to discuss it. I guess what I'm driving at is that call/cc is now known to have (classical) logical content, and can be seen as enabling considerably more than multiple-return, but I don't really know if that observation actually implies more than what you already said. Does it make sense to talk about a functional/logic language, say, in the spirit of Mercury, but based on Krivine realizability instead of Curry-Howard? Even if it makes conceptual sense, would there be any practical benefit you can see?
### Eff
I'm also curious about your thoughts regarding Disciple and Eff, while I'm bending your virtual ear. :-)
### Re: call/cc as an effect
You can regard call/cc as effectful, and use effects-typing to limit the scopes where it is allowed, and hence allow greater optimization of code where it is guaranteed not to occur.
Effect typing is indeed a good idea in general. In fact, for delimited continuations (shift/reset rather than call/cc), Scala has used effect typing to great success, to see which expressions have to be CPS-converted. Assuredly pure expressions can be compiled as usual; only the expressions that may really use delimited control suffer the CPS conversion penalty.
As for call/cc: adding it to a simple functional language (even with fix) does not give much expressiveness. We can't even get exceptions (try/raise), let alone generators or non-determinism. We have to add state. The problems of that approach have been described already. Let me add one more. As I said, the effect system for shift/reset is relatively simple -- and helpful, for compilers and programmers. In that effect system, reset e is always a pure expression regardless of the purity of e. In the familiar implementation of shift/reset via call/cc, the implementation of reset involves both call/cc and the state. It is not straightforward to deduce that this combination of effects somehow should produce a pure expression. So, the effect system for shift/reset does not look like a 'simple instantiation' of the effect system for call/cc. As I argue, call/cc is just not a good abstraction.
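For concreteness, here is a sketch (mine) of shift/reset written directly in CPS, with no call/cc and no mutable state. Note that reset hands its body the identity continuation and returns a plain value: 'reset e' is pure whatever e does, which is the effect-typing point above.

```python
def ret(v):
    return lambda k: k(v)

def bind(m, f):
    return lambda k: m(lambda v: f(v)(k))

def run(m):
    return m(lambda v: v)

def reset(m):
    # Delimit: m's control effects cannot escape past this point.
    return lambda k: k(m(lambda v: v))

def shift(f):
    # Reify the continuation up to the nearest reset as a function c;
    # c(v) is itself a computation and may be run any number of times.
    return lambda k: f(lambda v: ret(k(v)))(lambda v: v)

# 1 + reset(2 * shift(c => c(3) + c(4)))  ==  1 + 2*3 + 2*4  ==  15
example = bind(
    reset(bind(shift(lambda c: bind(c(3), lambda a:
                     bind(c(4), lambda b: ret(a + b)))),
               lambda x: ret(2 * x))),
    lambda y: ret(1 + y))
print(run(example))  # 15
```

The captured continuation c is invoked twice here, yet nothing outside the reset is ever re-entered.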
As to Haskell's Continuation monad, I should stress that it is a monad for delimited continuations. One has to work hard to get undelimited continuations.
http://okmij.org/ftp/continuations/undelimited.html#proper-contM
Although Haskell's continuation monad has the operation callCC, it works very differently from Scheme's call/cc. The name clash is unfortunate.
### Doesn't the effect typing
Doesn't the effect typing lose the advantage of first class delimited continuations, that you can use them in places where they were not specifically arranged for? For example if you have a nondeterminism abstraction built on delimited continuations, can you still do this?
powerset(xs) = nondet { xs.filter{x => amb(true,false)} }
Effectively you have to duplicate a lot of the standard library just like in Haskell filter vs filterM?
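For comparison, here is that powerset in list-monad style, sketched in Python (roughly Haskell's filterM; names mine): the predicate returns a list of possible boolean outcomes, so amb(True, False) makes every element optional. It works, but it is indeed a separate filter_m, not the ordinary filter, illustrating the duplication just mentioned.

```python
def amb(*choices):
    # Nondeterministic choice in the list monad: all alternatives.
    return list(choices)

def filter_m(pred_m, xs):
    results = [[]]                      # all partial subsets so far
    for x in xs:
        step = []
        for keep in pred_m(x):          # branch on every possible outcome
            for r in results:
                step.append(r + [x] if keep else r)
        results = step
    return results

print(filter_m(lambda x: amb(True, False), [1, 2]))
# [[1, 2], [2], [1], []]
```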
### Benefits of Effect typing
Doesn't the effect typing lose the advantage of first class delimited continuations that you can use them in places where it was not specifically arranged for?
That depends on the initial assumptions under which the code was first written. If a function, say, filter, was written under the assumption that its first argument (the predicate) is pure, then you really can't use this filter with any effectful predicate, regardless of the effect typing. The purity assumption lets filtering proceed in any order or in parallel, lets filter cache the results of predicate applications, etc. It is very good then that the effect typing prevents any attempts to pass an effectful predicate (which uses mutation or delimited control, etc) to a pure filter function. The results would not be correct.
If the filter has been written under the assumption that the predicate is effectful, adding an effect system won't make things any worse. An effect system may make things better, in at least two ways. First, one may imagine the standard library with two filter functions, one for effectful and one for pure predicates. (That supposition is not far fetched: it is typical for low-level numeric libraries to contain a great number of variations of the same basic functionality.) These two versions could be the same filter code compiled under different assumptions and different optimization aggressiveness. When the compiler sees a filter application, the compiler can call the appropriately optimized function depending on the effect annotation on the predicate.
Even if there is no pure version of filter, effectful typing still helps. The type checker will infer that (filter pred) :: [a] -> [a] is itself a pure function if pred is pure. The conclusion can trigger optimizations in the code that uses (filter pred).
Here is a simple example where effect typing could really help. OCaml, as any impure language, takes a pessimistic view that any function can have side effects. OCaml a bit optimistically regards argument expressions as pure, or at least as having side effects that commute: the evaluation order of argument expressions is generally indeterminate (it is right-to-left for bytecode and left-to-right, perhaps with some exceptions, for x86, at least). There are important performance reasons for such an order. Alas, it leads to difficult-to-find bugs. Suppose I write an expression f 1 + f 2 and suppose the function f (which might've been pure in an early version of the code) is a generator, yielding its argument. Most likely, the order of yielded values will be unexpected. I have to write let x = f 1 in x + f 2, which is ugly, requires modifications to the code, and, believe me, is greatly error-prone. These 'let' are very easy to forget. It would be really nice to have an effect system that at least warned me (or ideally, prompted the compiler to generate the let-expressions automatically when needed).
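A tiny sketch (entirely hypothetical code, mine) of why the purity assumption matters: a filter written under that assumption may legally cache predicate calls. With a stateful predicate it then silently disagrees with the ordinary filter.

```python
def caching_filter(pred, xs):
    cache = {}                        # caching is sound only for a pure pred
    def memo(x):
        if x not in cache:
            cache[x] = pred(x)
        return cache[x]
    return [x for x in xs if memo(x)]

def make_every_other():
    state = {"n": 0}
    def pred(_x):                     # effectful: the answer flips per call
        state["n"] += 1
        return state["n"] % 2 == 1
    return pred

print(list(filter(make_every_other(), [7, 7, 7, 7])))    # [7, 7]
print(caching_filter(make_every_other(), [7, 7, 7, 7]))  # [7, 7, 7, 7]
```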
### If a function, say, filter,
If a function, say, filter, was written under the assumption that its first argument (the predicate) is pure, then you really can't use this filter with any effectful predicate, regardless of the effect typing.
That's not really true. For example, if the effect is to access a cached value (computing it only if it's missing) or if the effect is just logging, then it's reasonable to use a pure filter with an effectful pred. The main obstruction to doing this safely is that the filter function may assume pred is deterministic: filter could call 'pred x' in two places with the same 'x' and expect both calls to give the same result.
### Never doubt the wisdom of
Never doubt the wisdom of Oleg. ;-)
Come on. Of course there are side effecting functions which are, in some sense, 'pure' in their return value. And, of course, in an elaborate language setting one would override the type of such functions as being 'pure'; this is really well known.
Oleg's reasoning is sound.
### First, let me say that I
First, let me say that I have plenty of respect for Oleg and am a fan of some of his work, but this is an area I've been through recently in the context of my Processes and I stand by the point I was making.
Whether or not it's a well known hack, just labeling certain functions as "pretend this is pure" is not a good solution. The problem is that while you may occasionally want to ignore certain effects in a certain context, you almost never want to ignore them in every context. The function you get back from filter needs to be labeled as 'uses logging'.
The core point I'm making is that for purposes of establishing correctness of your code, purity of the functions you call is never a required assumption. You can always make do with the assumption that calls you make will not affect the abstract state governing the results returned from the other effectful calls you make.
### Hmpf, too grumpy
Thing is, when I was a student they introduced me to a language which made a distinction between side effect free and side effecting constructs. In hindsight, I think it was more a 'courtesy' extension of the base language, given the then-raging debate over the superiority of functional programming languages versus imperative Pascal-like languages, than a feature the compiler could exploit. (OO had just been invented and was not widely adopted at that time.)
The thing I took from that was that pragmatically people couldn't be bothered too much with making the distinction, and a lot of side effecting functions were, for various reasons like your examples, forcibly declared to be side effect free.
From a compiler perspective this opens a can of worms. In order to exploit purity, the coercion from 'dirty' to pure should somehow really cleanly wash the dirty construct. And I wouldn't know how to do the latter. (Compare it to needing a Haskell-like unsafePerformIO which makes sure that a number of invariants concerning the resulting purity are maintained. Academically, I find that a pretty interesting question though, nice for a PhD to solve.)
Conclusion: making a distinction between side effecting and pure functions doesn't add too much, at least not from the perspective of a programmer. Moreover, in order for the compiler to use the fact that some functions are pure, the manner in which you coerce effectful routines into pure routines starts to matter.
From that experience, I would say that making the distinction isn't worth it.
But this was before multicore processors; purity can be exploited much more these days than before. But even then it would be a feature which would only make sense to exploit in very mature compilers. I.e., Haskell and OCaml could exploit it to optimize another 5% of running time away. Most languages are so slow that it doesn't make sense to support it from a programmer's or compiler's perspective.
### Sometimes purity matters for correctness.
Sometimes purity isn't just usable for optimizations. Sometimes it's necessary for correctness.
The semantic difference between 'assert foo()' and 'if (not foo()) halt' is that whether foo() even gets called depends on the build configuration. Assert depends for correctness on the property of purity, in that omitting the test must not affect the behavior of the program. Therefore if foo() isn't pure with respect to the rest of our program, the assert form is an error in that the behavior of the testing build is not the behavior of the release build.
So, short version? Yes, it would be very much worth it to be able to check for sure that a call to a function returning a value can be omitted when we don't need that value, and that the omission will have no effect on the subsequent behavior of the program.
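A small Python sketch (hypothetical names, not from the discussion above) of the hazard described: when the predicate inside an assert has a side effect, a build that strips assertions behaves differently from one that runs them.

```python
# Hypothetical sketch: a predicate with a hidden side effect inside an
# assert. `use_asserts=False` models a release build (e.g. `python -O`)
# where the assert, and therefore the call to foo(), is stripped.
calls = {"count": 0}

def foo():
    calls["count"] += 1      # side effect: foo() is not pure
    return True

def checked_run(use_asserts):
    calls["count"] = 0
    if use_asserts:          # testing build: the assert executes
        assert foo()
    return calls["count"]    # observable behavior differs per build

debug_calls = checked_run(True)     # foo() ran once
release_calls = checked_run(False)  # foo() never ran
```

Since the two runs disagree, omitting the test changed the behavior of the program, which is exactly the error described above.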
Ray
## Construct Inverse Proportion Equations
If the relationship is an inverse proportion, then write the proportionality statement as:
y ∝ 1/x
Then change ∝ to = and multiply the numerator by a constant of proportionality k:
y = k/x
Remember that the x or the y might be a power term such as x² or √x.
## Example 1
An experiment showed that the number of bacteria in a dish was inversely proportional to the square of the temperature.
One dish at 24℃ had 18,000 bacteria. How many bacteria would you expect there to be in a dish at 20℃?
Proportion: n ∝ 1/t²
Create equation with k: n = k/t²
Substitute: 18000 = k/24²
Solve: k = 18000 × 24² = 10368000, so n = 10368000/t²
When the temperature is 20 ℃: n = 10368000/20² = 25920
## Example 2

An experiment showed that a temperature, t, was inversely proportional to the square root of a height, h. When the height was 400 m, the temperature was 20 ℃. What temperature would you expect at a height of 1000 m?
Proportion: t ∝ 1/√h
Create equation with k: t = k/√h
Substitute: 20 = k/√400
Solve: k = 20 × √400 = 400, so t = 400/√h
When the height is 1000 m: t = 400/√1000 = 12.649... ≈ 12.6 ℃ (1 d.p.)
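Both worked examples follow the same recipe, which can be sketched in Python (the function name and the `f` parameter are illustrative, not part of the lesson):

```python
import math

# Sketch: given y ∝ 1/f(x) and one observation (x0, y0),
# find k in y = k / f(x), then predict y at a new x.
def inverse_proportion(x0, y0, x1, f=lambda x: x):
    k = y0 * f(x0)           # y = k / f(x)  =>  k = y * f(x)
    return k / f(x1)

# Example 1: n ∝ 1/t², 18,000 bacteria at 24 °C -> bacteria at 20 °C
bacteria = inverse_proportion(24, 18000, 20, f=lambda t: t ** 2)  # 25920.0

# Example 2: t ∝ 1/√h, 20 °C at 400 m -> temperature at 1000 m
temperature = inverse_proportion(400, 20, 1000, f=math.sqrt)      # ≈ 12.649
```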
In mathematics, a square number, sometimes also called a perfect square, is an integer that is the square of an integer;[1] in other words, it is the product of some integer with itself. So, for example, 9 is a square number, since it can be written as 3 × 3.
The usual notation for the formula for the square of a number $n$ is not the product $n\times n$, but the equivalent exponentiation $n^2$ , usually pronounced as "$n$ squared". The name square number comes from the name of the shape. This is because a square with side length $n$ has area $n^2$ .
Square numbers are non-negative. Another way of saying that a (non-negative) number is a square number, is that its square root is again an integer. For example, $\sqrt9=3$ , so 9 is a square number.
A positive integer that has no perfect square divisors except 1 is called square-free.
For a non-negative integer $n$ , the $n$th square number is $n^2$ , with $0^2=0$ being the zeroth square. The concept of square can be extended to some other number systems. If rational numbers are included, then a square is the ratio of two square integers, and, conversely, the ratio of two square integers is a square (e.g., $\frac{4}{9}=\left(\frac{2}{3}\right)^2$).
Starting with 1, there are $\left\lfloor\sqrt m\right\rfloor$ square numbers up to and including $m$ , where the expression $\lfloor x \rfloor$ represents the floor of $x$ .
## Examples
The squares (sequence A000290 in OEIS) smaller than 60² are:

0² = 0, 1² = 1, 2² = 4, 3² = 9, 4² = 16, 5² = 25, 6² = 36, 7² = 49, 8² = 64, 9² = 81
10² = 100, 11² = 121, 12² = 144, 13² = 169, 14² = 196, 15² = 225, 16² = 256, 17² = 289, 18² = 324, 19² = 361
20² = 400, 21² = 441, 22² = 484, 23² = 529, 24² = 576, 25² = 625, 26² = 676, 27² = 729, 28² = 784, 29² = 841
30² = 900, 31² = 961, 32² = 1024, 33² = 1089, 34² = 1156, 35² = 1225, 36² = 1296, 37² = 1369, 38² = 1444, 39² = 1521
40² = 1600, 41² = 1681, 42² = 1764, 43² = 1849, 44² = 1936, 45² = 2025, 46² = 2116, 47² = 2209, 48² = 2304, 49² = 2401
50² = 2500, 51² = 2601, 52² = 2704, 53² = 2809, 54² = 2916, 55² = 3025, 56² = 3136, 57² = 3249, 58² = 3364, 59² = 3481
The difference between any perfect square and its predecessor is given by the identity $n^2=(n-1)^2+(2n-1)$ . Equivalently, it is possible to count up square numbers by adding together the last square, the last square's root, and the current root, that is, $n^2=(n-1)^2+(n-1)+n$ .
## Properties
The number $m$ is a square number if and only if one can arrange $m$ points in a square:

m = 1² = 1   m = 2² = 4   m = 3² = 9   m = 4² = 16   m = 5² = 25
The expression for the $n$th square number is $n^2$ . This is also equal to the sum of the first $n$ odd numbers as can be seen in the above pictures, where a square results from the previous one by adding an odd number of points (shown in magenta). The formula follows:
$n^2=\sum_{k=1}^n(2k-1)$
So for example, 5² = 25 = 1 + 3 + 5 + 7 + 9.
There are several recursive methods for computing square numbers. For example, the $n$th square number can be computed from the previous square by $n^2=(n-1)^2+(n-1)+n=(n-1)^2+(2n-1)$ . Alternatively, the $n$th square number can be calculated from the previous two by doubling the (n − 1)-th square, subtracting the $n-2$-th square number, and adding 2, because $n^2=2(n-1)^2-(n-2)^2+2$. For example,
2 × 5² − 4² + 2 = 2 × 25 − 16 + 2 = 50 − 16 + 2 = 36 = 6².
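The recurrences above can be checked with a short Python sketch (names are illustrative):

```python
# Illustrative check of both recurrences, computing squares without
# ever multiplying n by itself.
def squares(count):
    """Yield 0², 1², 2², ... using n² = (n-1)² + (2n-1)."""
    sq = 0
    for n in range(count):
        yield sq
        sq += 2 * n + 1      # (n+1)² = n² + (2n + 1)

first = list(squares(10))    # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

# The two-term recurrence n² = 2(n-1)² − (n-2)² + 2 gives the same values:
assert all(first[n] == 2 * first[n - 1] - first[n - 2] + 2
           for n in range(2, len(first)))
```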
A square number is also the sum of two consecutive triangular numbers. The sum of two consecutive square numbers is a centered square number. Every odd square is also a centered octagonal number.
Another property of a square number is that it has an odd number of divisors, while other numbers have an even number of divisors. An integer root is the only divisor that pairs up with itself to yield the square number, while other divisors come in pairs.
Lagrange's four-square theorem states that any positive integer can be written as the sum of four or fewer perfect squares. Three squares are not sufficient for numbers of the form $4^k(8m+7)$ . A positive integer can be represented as a sum of two squares precisely if its prime factorization contains no odd powers of primes of the form $4k+3$ . This is generalized by Waring's problem.
A square number can end only with digits 0, 1, 4, 6, 9, or 25 in base 10, as follows:
1. If the last digit of a number is 0, its square ends in an even number of 0s (so at least 00) and the digits preceding the ending 0s must also form a square.
2. If the last digit of a number is 1 or 9, its square ends in 1 and the number formed by its preceding digits must be divisible by four.
3. If the last digit of a number is 2 or 8, its square ends in 4 and the preceding digit must be even.
4. If the last digit of a number is 3 or 7, its square ends in 9 and the number formed by its preceding digits must be divisible by four.
5. If the last digit of a number is 4 or 6, its square ends in 6 and the preceding digit must be odd.
6. If the last digit of a number is 5, its square ends in 25, and the digits preceding this 25 are m × (m + 1), where m is the number formed by dropping the final 5 (for example, 35² = 1225, with 3 × 4 = 12 preceding the 25).
In base 16, a square number can end only with 0, 1, 4 or 9 and
• in case 0, only 0, 1, 4, 9 can precede it,
• in case 4, only even numbers can precede it.
In general, if a prime $p$ divides a square number $m$ then the square of $p$ must also divide $m$ ; if $p$ fails to divide $\frac{m}{p}$ , then $m$ is definitely not square. Repeating the divisions of the previous sentence, one concludes that every prime must divide a given perfect square an even number of times (including possibly 0 times). Thus, the number $m$ is a square number if and only if, in its canonical representation, all exponents are even.
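The even-exponent criterion gives a straightforward (if slow) squarity test by trial division; a Python sketch, with names of my own choosing:

```python
# Illustrative squarity test via the even-exponent criterion: factor m
# by trial division and reject as soon as any prime exponent is odd.
def is_square_by_factoring(m):
    if m < 2:
        return m in (0, 1)   # 0 and 1 are squares; negatives are not
    d = 2
    while d * d <= m:
        exp = 0
        while m % d == 0:
            m //= d
            exp += 1
        if exp % 2:          # odd exponent => not a perfect square
            return False
        d += 1
    return m == 1            # a leftover prime factor has exponent 1
```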
Squarity testing can be used as an alternative way to factorize large numbers. Instead of testing for divisibility, test for squarity: for given $m$ and some number $k$ , if $k^2-m$ is the square of an integer $n$ then $k-n$ divides $m$ . (This is an application of the factorization of a difference of two squares.) For example, 100² − 9991 = 9 is the square of 3, so consequently 100 − 3 = 97 divides 9991. This test is deterministic for odd divisors in the range from $k-n$ to $k+n$ where $k$ covers some range of natural numbers $k\ge\sqrt m$ .
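The 9991 example can be sketched as a small Python routine (a naive form of Fermat's factorization method; the function name is illustrative):

```python
import math

# Illustrative sketch of the squarity test as a factoring method,
# applied to the 9991 example from the text.
def fermat_factor(m):
    """Return a divisor k - n of m, where k² - m = n² (m odd, composite)."""
    k = math.isqrt(m)
    if k * k < m:
        k += 1               # start at the smallest k with k² ≥ m
    while True:
        n2 = k * k - m
        n = math.isqrt(n2)
        if n * n == n2:      # k² − m is a perfect square n²
            return k - n     # then m = (k − n)(k + n), so k − n divides m
        k += 1

factor = fermat_factor(9991)  # 100² − 9991 = 9 = 3², so 100 − 3 = 97
```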
A square number cannot be a perfect number.
The sum of the series of square numbers
$\sum_{n=0}^N n^2=0^2+1^2+\cdots+N^2$
can also be represented by the formula
$\frac{N(N+1)(2N+1)}{6}$
The first terms of this series (the square pyramidal numbers) are:
0, 1, 5, 14, 30, 55, 91, 140, 204, 285, 385, 506, 650, 819, 1015, 1240, 1496, 1785, 2109, 2470, 2870, 3311, 3795, 4324, 4900, 5525, 6201... (sequence A000330 in OEIS).
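The closed form can be checked against the listed square pyramidal numbers with a one-line Python function (illustrative):

```python
# Illustrative check of the closed form N(N+1)(2N+1)/6 against the
# partial sums of squares listed above.
def sum_of_squares(N):
    """0² + 1² + ... + N², via N(N+1)(2N+1)/6."""
    return N * (N + 1) * (2 * N + 1) // 6

series = [sum_of_squares(N) for N in range(10)]
# series == [0, 1, 5, 14, 30, 55, 91, 140, 204, 285]
```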
All fourth powers, sixth powers, eighth powers and so on are perfect squares.
## Special cases
• If the number is of the form m5 where m represents the preceding digits, its square is n25 where n = m × (m + 1) and represents digits before 25. For example the square of 65 can be calculated by n = 6 × (6 + 1) = 42 which makes the square equal to 4225.
• If the number is of the form m0 where m represents the preceding digits, its square is n00 where n = m2. For example the square of 70 is 4900.
• If the number has two digits and is of the form 5m where m represents the units digit, its square is AABB where AA = 25 + m and BB = m². Example: to calculate the square of 57, 25 + 7 = 32 and 7² = 49, which means 57² = 3249.
## Odd and even square numbers
Squares of even numbers are even (and in fact divisible by 4), since $(2n)^2=4n^2$ .
Squares of odd numbers are odd, since $(2n+1)^2=4(n^2+n)+1$ .
It follows that square roots of even square numbers are even, and square roots of odd square numbers are odd.
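A quick empirical check of these parity facts in Python (the function name is illustrative):

```python
import math

# Illustrative empirical check of the parity facts above.
def parity_facts_hold(limit=200):
    for n in range(1, limit):
        sq = n * n
        if n % 2 == 0 and sq % 4 != 0:   # (2n)² = 4n²: divisible by 4
            return False
        if n % 2 == 1 and sq % 4 != 1:   # (2n+1)² = 4(n²+n) + 1: odd
            return False
        if math.isqrt(sq) % 2 != n % 2:  # root parity matches square parity
            return False
    return True
```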
How do you write 11 3/8 as an improper fraction?
Mar 28, 2018
See a solution process below:
Explanation:
One process is:
$11 \frac{3}{8} = 11 + \frac{3}{8} = \left(\frac{8}{8} \times 11\right) + \frac{3}{8} = \frac{88}{8} + \frac{3}{8} = \frac{88 + 3}{8} = \frac{91}{8}$
Another process is:
• First, multiply the integer by the denominator: $11 \times 8 = 88$
• Next, add this result to the numerator: $88 + 3 = 91$
• Now, put this result over the denominator: $\frac{91}{8}$
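Both processes amount to (whole × denominator + numerator) over the denominator; a Python sketch using the standard fractions module (the function name is illustrative):

```python
from fractions import Fraction

# Sketch of both processes above as a single formula:
# whole num/den  ->  (whole*den + num)/den
def to_improper(whole, num, den):
    return Fraction(whole * den + num, den)

result = to_improper(11, 3, 8)        # Fraction(91, 8)
assert result == 11 + Fraction(3, 8)  # agrees with the first process
```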
Uniting Forces: Email to Prof. Norman J. Wildberger on Politics, Ineptitude and Fraud
More options Nov 14 2012, 8:46 am
Newsgroups: sci.logic
From: Frederick Williams <freddywilli...@btinternet.com>
Date: Wed, 14 Nov 2012 13:46:23 +0000
Local: Wed, Nov 14 2012 8:46 am
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics,IneptitudeandFraud
Nam Nguyen wrote:
> >> H: T = T1 + T2 + T3 + ....
> >> where each Ti is in a collection K of formal systems (K isn't
> >> necessarily finite).
> >> .
> >> C1: Inconsistent(T) <=> (There exists a Ti: Inconsistent(Ti)).
> >> C2: Consistent(T) <=> (For _any given_ Ti: Consistent(Ti)).
> >> Proof: The proof for C1 or C2 is trivial and taken for granted here.
I tell you what, since the proofs are trivial write them down anyway.
And when you've done that, you can deal with the long outstanding issues of
the interpretation of '=', the number of $\in$'s that set theory needs,
'x = x' always being an axiom of FOL= theories, and so on.
Not that this has anything to do with G\"odel's proof of G\"odel's
theorem, which you've never read, or any other version of G\"odel's
theorem.
--
When a true genius appears in the world, you may know him by
this sign, that the dunces are all in confederacy against him.
Jonathan Swift: Thoughts on Various Subjects, Moral and Diverting
More options Nov 16 2012, 11:18 pm
Newsgroups: sci.logic
From: Nam Nguyen <namducngu...@shaw.ca>
Date: Fri, 16 Nov 2012 21:18:06 -0700
Local: Fri, Nov 16 2012 11:18 pm
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics,IneptitudeandFraud
On 14/11/2012 3:48 AM, Rupert wrote:
So modern formal systems such as Q, PA, ZF, ZFC, ... aren't outside
the scope of GIT which is what I assumed above.
You seem to contradict yourself from one moment to the other on MT0!
Then you're not confirming that MT0 is correct, or refuting it.
So I don't see how you'd be able to see my explanation as to why
GIT is a logically invalid assertion.
Let me categorically say this: _There is NO logical sense_ in proving
a consistency of a T 'in some appropriate sense of "consistency
sentence"'.
The definition of inconsistency or consistency is _within proving in T_
_using rules of inference_ .
Let L1(<) and L2(e) be 2 languages, each with a 2-ary predicate symbol.
Let:
T1a = {Axy[x < y] /\ ~Axy[x < y]}
T1b = {Axy[x < y] \/ ~Axy[x < y]}
T2a = {Axy[x e y] /\ ~Axy[x e y]}
T2b = {Axy[x e y] \/ ~Axy[x e y]}
If you'd like to prove T1a is inconsistent, you'd prove that in T1a,
_not_ in T2a, whether or not you could prove it so; and in this case
you could.
If you'd like to prove T1b is consistent, you'd prove that in T1b,
_not_ in T2b, whether or not you could prove it so; and in this case
you could _NOT_ .
To say that you could prove the undecidability of G(PA) in PRA conforms
as little to the definition of consistency, and is as illogical, as
proving T1a inconsistent using a proof in T2a!
In summary, you either clearly acknowledge MT0 as true or refute it.
Otherwise you'd not understand my proof that GIT is an invalid inference
at the meta level.
--
----------------------------------------------------
There is no remainder in the mathematics of infinity.
NYOGEN SENZAKI
----------------------------------------------------
More options Nov 16 2012, 11:41 pm
Newsgroups: sci.logic
From: Nam Nguyen <namducngu...@shaw.ca>
Date: Fri, 16 Nov 2012 21:41:10 -0700
Local: Fri, Nov 16 2012 11:41 pm
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics,IneptitudeandFraud
On 14/11/2012 6:46 AM, Frederick Williams wrote:
> Nam Nguyen wrote:
>>>> H: T = T1 + T2 + T3 + ....
>>>> where each Ti is in a collection K of formal systems (K isn't
>>>> necessarily finite).
>>>> .
>>>> C1: Inconsistent(T) <=> (There exists a Ti: Inconsistent(Ti)).
>>>> C2: Consistent(T) <=> (For _any given_ Ti: Consistent(Ti)).
>>>> Proof: The proof for C1 or C2 is trivial and taken for granted here.
> I tell you what, since the proofs are trivial write them down anyway.
No need for me to respond further until you admit you were wrong
on the following.
You yourself voluntarily accused my expression:
"x > the greatest counter example of the Goldbach conjecture"
as "is not well-formed"
then I explained to you that it's a well-formed expression, using the
well-formed formula below to define it:
~cGC /\ Ay[~GC(y) -> (y < x)]
But then you didn't see that I've correctly answered your "is not
well-formed" assertion.
So, until you acknowledge that you were wrong - and ignorant of the
matter - and that my:
~cGC /\ Ay[~GC(y) -> (y < x)]
correctly expresses "x > the greatest counter example of the Goldbach
conjecture", there's no point to answer another question of yours.
There's got to be closure on one question of yours before we can
move to another one.
So, acknowledge that you were wrong before, with your "is not
well-formed" assertion, if you'd like to hear further answer
from me on any technical questions.
--
----------------------------------------------------
There is no remainder in the mathematics of infinity.
NYOGEN SENZAKI
----------------------------------------------------
More options Nov 16 2012, 11:59 pm
Newsgroups: sci.logic
From: Nam Nguyen <namducngu...@shaw.ca>
Date: Fri, 16 Nov 2012 21:59:42 -0700
Local: Fri, Nov 16 2012 11:59 pm
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics,IneptitudeandFraud
On 13/11/2012 7:35 AM, Frederick Williams wrote:
But you haven't technically explained to the ng as to _why_
~cGC /\ Ay[~GC(y) -> (y < x)] would make "x > the greatest
counterexample of the Goldbach conjecture" lose "its usual meaning"!
Again, _WHY_ ?
> An honest person would say something like:
> I see that "x > the greatest counter example of the Goldbach conjecture"
> will not do, and I shall replace it with '~cGC /\ Ay[~GC(y) -> (y < x)],
> where GC(e) <-> even(e) -> "e is a sum of 2 primes"'.
Just because you're an idiot and are technically incompetent to
understand that ~cGC /\ Ay[~GC(y) -> (y < x)] would express:
"x > the greatest counter example of the Goldbach conjecture"
doesn't make my explanation wrong at all.
--
----------------------------------------------------
There is no remainder in the mathematics of infinity.
NYOGEN SENZAKI
----------------------------------------------------
More options Nov 17 2012, 1:45 am
Newsgroups: sci.logic
From: Nam Nguyen <namducngu...@shaw.ca>
Date: Fri, 16 Nov 2012 23:45:21 -0700
Local: Sat, Nov 17 2012 1:45 am
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics,IneptitudeandFraud
On 16/11/2012 9:18 PM, Nam Nguyen wrote:
Then again, it seems you and I aren't talking about the same thing.
I'm saying consistency of T means T can _not_ prove certain formulas,
while you're talking about T can prove some formulas, as in "prove
[...] own consistency sentences".
How would T's proving some formulas _conform_ with the consistency-
requirement that T can _not_ prove certain formulas?
--
----------------------------------------------------
There is no remainder in the mathematics of infinity.
NYOGEN SENZAKI
----------------------------------------------------
More options Nov 17 2012, 3:13 am
Newsgroups: sci.logic
From: Rupert <rupertmccal...@yahoo.com>
Date: Sat, 17 Nov 2012 00:13:10 -0800 (PST)
Local: Sat, Nov 17 2012 3:13 am
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics,IneptitudeandFraud
On Nov 17, 5:18 am, Nam Nguyen <namducngu...@shaw.ca> wrote:
I should have been a bit more careful. I apologize.
I thought that what was going on was that you were confusing the
object theory and the metatheory. But you can have a situation where
the metatheory and the object theory are in fact equal, and a proof in
the metatheory is a proof of a consistency sentence for the object
theory.
Why not?
It's not.
> In summary, you either clearly acknowledge MT0 as true or refute it.
It's wrong. You can prove the consistency of a first-order theory. You
can prove the consistency of Q in PRA, for example. You can read about
that in Shoenfield.
More options Nov 17 2012, 3:14 am
Newsgroups: sci.logic
From: Rupert <rupertmccal...@yahoo.com>
Date: Sat, 17 Nov 2012 00:14:24 -0800 (PST)
Local: Sat, Nov 17 2012 3:14 am
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics,IneptitudeandFraud
On Nov 17, 7:45 am, Nam Nguyen <namducngu...@shaw.ca> wrote:
Take the example of PRA proving the consistency of Q. For Q to be
consistent is for it to fail to prove certain formulas. But there is a
formula in the language of PRA which expresses the assertion that Q is
consistent. And PRA can prove this formula.
What's the problem?
More options Nov 17 2012, 9:06 am
Newsgroups: sci.logic
From: Frederick Williams <freddywilli...@btinternet.com>
Date: Sat, 17 Nov 2012 14:06:06 +0000
Local: Sat, Nov 17 2012 9:06 am
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics,IneptitudeandFraud
Nam Nguyen wrote:
> But you haven't technically explained to the ng as to _why_
> ~cGC /\ Ay[~GC(y) -> (y < x)] would make "x > the greatest
> counterexample of the Goldbach conjecture" lose "its usual meaning"!
> Again, _WHY_ ?
Don't nag. You sound like a fishwife.
In "x > the greatest counterexample of the Goldbach conjecture" what
logic governs that "the"? There are various ways of dealing with
definite descriptions, which do you use?
--
When a true genius appears in the world, you may know him by
this sign, that the dunces are all in confederacy against him.
Jonathan Swift: Thoughts on Various Subjects, Moral and Diverting
More options Nov 17 2012, 9:09 am
Newsgroups: sci.logic
From: Frederick Williams <freddywilli...@btinternet.com>
Date: Sat, 17 Nov 2012 14:09:47 +0000
Local: Sat, Nov 17 2012 9:09 am
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics,IneptitudeandFraud
In other words you can't prove it. Have you stopped to wonder why you
can't prove it?
--
When a true genius appears in the world, you may know him by
this sign, that the dunces are all in confederacy against him.
Jonathan Swift: Thoughts on Various Subjects, Moral and Diverting
More options Nov 17 2012, 9:41 am
Newsgroups: sci.logic
From: Frederick Williams <freddywilli...@btinternet.com>
Date: Sat, 17 Nov 2012 14:41:12 +0000
Local: Sat, Nov 17 2012 9:41 am
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics,IneptitudeandFraud
Nam Nguyen wrote:
> [...]
You're thick, aren't you? I don't just mean that you're ignorant of
logic, that's obvious. You're general-purpose, all-round thick. Did
you really do a four year college degree in mathematics?
--
When a true genius appears in the world, you may know him by
this sign, that the dunces are all in confederacy against him.
Jonathan Swift: Thoughts on Various Subjects, Moral and Diverting
More options Nov 17 2012, 10:59 am
Newsgroups: sci.logic
From: Nam Nguyen <namducngu...@shaw.ca>
Date: Sat, 17 Nov 2012 08:59:39 -0700
Local: Sat, Nov 17 2012 10:59 am
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics,IneptitudeandFraud
On 17/11/2012 7:06 AM, Frederick Williams wrote:
> Nam Nguyen wrote:
>> But you haven't technically explained to the ng as to _why_
>> ~cGC /\ Ay[~GC(y) -> (y < x)] would make "x > the greatest
>> counterexample of the Goldbach conjecture" lose "its usual meaning"!
>> Again, _WHY_ ?
> Don't nag. You sound like a fishwife.
> In "x > the greatest counterexample of the Goldbach conjecture" what
> logic governs that "the"? There are various ways of dealing with
> definite descriptions, which do you use?
You're really incapable of understanding a simple mathematical
expression using L(PA).
Can you _express_ :
x > the greatest even prime
_without even knowing_ if there's the greatest even prime?
--
----------------------------------------------------
There is no remainder in the mathematics of infinity.
NYOGEN SENZAKI
----------------------------------------------------
More options Nov 17 2012, 11:23 am
Newsgroups: sci.logic
From: Nam Nguyen <namducngu...@shaw.ca>
Date: Sat, 17 Nov 2012 09:23:12 -0700
Local: Sat, Nov 17 2012 11:23 am
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics,IneptitudeandFraud
On 17/11/2012 1:14 AM, Rupert wrote:
There are 2 problems that for various reasons you seem to have refused
to acknowledge; and I've already explained these 2 problems.
----------------> 1st problem.
A language expressing an assertion does _NOT equate_ to the
assertion being true or false, logically speaking.
For instance, in the thread where I defined cGC, you can
certainly use the same technique to define a similarly formed
formula that would express "There are infinitely many even primes",
whether or not there _actually_ are infinitely many even primes!
And I have already explained this vis-à-vis non-standard
interpretation of formula expression-truth. Would you
understand what I said there?
So a formula expressing "the assertion that Q is consistent" can
_NOT_ be equated to Q being _actually_ consistent, _if_ Q is so.
Logically speaking.
Formula semantics and formula truth are not (even) the same!
Is alive(Kennedy_Spirit) true or false?
----------------> 2nd problem.
I've already explained it: the problem is the FOL definition of
inconsistency, consistency of a T is _absolutely agnostic_ about
any theory other than T!
If you use a method to come up with what you'd call "proof" of
consistency but the method doesn't conform with FOL definition
of consistency then for sure that's a logically invalid method,
however well intended.
Why can't you acknowledge that simple fact?
--
----------------------------------------------------
There is no remainder in the mathematics of infinity.
NYOGEN SENZAKI
----------------------------------------------------
More options Nov 17 2012, 11:48 am
Newsgroups: sci.logic
From: Frederick Williams <freddywilli...@btinternet.com>
Date: Sat, 17 Nov 2012 16:48:34 +0000
Local: Sat, Nov 17 2012 11:48 am
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics,IneptitudeandFraud
Express in what formal language? Definite descriptions are handled in
different ways by different authors.
Meanwhile:
You:
> H: T = T1 + T2 + T3 + ....
> where each Ti is in a collection K of formal systems (K isn't
> necessarily finite).
> .
> C1: Inconsistent(T) <=> (There exists a Ti: Inconsistent(Ti)).
> C2: Consistent(T) <=> (For _any given_ Ti: Consistent(Ti)).
> Proof: The proof for C1 or C2 is trivial and taken for granted here.
Me:
Really? If T = T1 + T2 means that the predicates (etc) of T is the
union of those of T1 and T2 and the axioms of T is the union of those of
T1 and T2, and T is closed under logical consequence; then it's obvious
that T can be inconsistent though both T1 and T2 are consistent. If
that's not what you mean by +, you need to say so.
And when you've done that, you can deal with the long outstanding issues of
the interpretation of '=', the number of $\in$'s that set theory needs,
'x = x' always being an axiom of FOL= theories, and so on.
Do you think that these things go away just because you ignore them? Do
you think that just because you insist ("Again, _WHY_ ?"), like a
will forget all the points that you have left unanswered?
Have you read G\"odel's paper yet? No. Or any other account of
G\"odel's incompleteness theorem? No. And will you continue to hold
--
When a true genius appears in the world, you may know him by
this sign, that the dunces are all in confederacy against him.
Jonathan Swift: Thoughts on Various Subjects, Moral and Diverting
More options Nov 17 2012, 11:53 am
Newsgroups: sci.logic
From: Frederick Williams <freddywilli...@btinternet.com>
Date: Sat, 17 Nov 2012 16:53:35 +0000
Local: Sat, Nov 17 2012 11:53 am
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics,IneptitudeandFraud
Nam Nguyen wrote:
> I've already explained it: the problem is the FOL definition of
> inconsistency, consistency of a T is _absolutely agnostic_ about
> any theory other than T!
> If you use a method to come up with what you'd call "proof" of
> consistency but the method doesn't conform with FOL definition
> of consistency then for sure that's a logically invalid method,
> however well intended.
How do you express that a theory T is consistent in a first order
language? What symbols does the first order language have, and what
symbols does T have?
> Why can't you acknowledge that simple fact?
It's a mystery.
--
When a true genius appears in the world, you may know him by
this sign, that the dunces are all in confederacy against him.
Jonathan Swift: Thoughts on Various Subjects, Moral and Diverting
More options Nov 17 2012, 12:11 pm
Newsgroups: sci.logic
From: Nam Nguyen <namducngu...@shaw.ca>
Date: Sat, 17 Nov 2012 10:11:36 -0700
Local: Sat, Nov 17 2012 12:11 pm
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics,IneptitudeandFraud
On 17/11/2012 9:48 AM, Frederick Williams wrote:
Didn't I just say " using L(PA)"?
> Meanwhile:
There's no "Meanwhile:", until we have a closure and that you admit
you were technically wrong in not acknowledging that:
~cGC /\ Ay[~GC(y) -> (y < x)]
would correctly express:
"x > the greatest counterexample of the Goldbach conjecture".
--
----------------------------------------------------
There is no remainder in the mathematics of infinity.
NYOGEN SENZAKI
----------------------------------------------------
More options Nov 17 2012, 12:42 pm
Newsgroups: sci.logic
From: Frederick Williams <freddywilli...@btinternet.com>
Date: Sat, 17 Nov 2012 17:42:02 +0000
Local: Sat, Nov 17 2012 12:42 pm
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics,IneptitudeandFraud
PA is expressed in various languages. I'm not sure that I've met one
which had definite descriptions. That is not to say there isn't such a
language, but in the one that you have in mind what is the truth value
of
the x such that phi
if there is no x such that phi, or if there are a number of x's such
that phi? You'll really have to tell me because I don't know. One
possibility is that
the x such that phi
is never used in a formula unless it is first established that
there is just one x such that phi,
but that may not be a good idea since the theory of arithmetic being
considered is recursively undecidable.
> > Meanwhile:
> There's no "Meanwhile:", until we have a closure and that you admit
> you were technically wrong in not acknowledging that:
> ~cGC /\ Ay[~GC(y) -> (y < x)]
> would correctly express:
> "x > the greatest counterexample of the Goldbach conjecture".
If "~cGC /\ Ay[~GC(y) -> (y < x)]" has a definite truth value and "x >
the greatest counterexample of the Goldbach conjecture" doesn't, then
one can't correctly express the other. I don't know if "x > the greatest
counterexample of the Goldbach conjecture" has a definite truth value
until you explain how you're handing definite descriptions.
When you've done that, you might want to address:
You:
> H: T = T1 + T2 + T3 + ....
> where each Ti is in a collection K of formal systems (K isn't
> necessarily finite).
> .
> C1: Inconsistent(T) <=> (There exists a Ti: Inconsistent(Ti)).
> C2: Consistent(T) <=> (For _any given_ Ti: Consistent(Ti)).
> Proof: The proof for C1 or C2 is trivial and taken for granted here.
Me:
Really? If T = T1 + T2 means that the predicates (etc) of T is the
union of those of T1 and T2 and the axioms of T is the union of those of
T1 and T2, and T is closed under logical consequence; then it's obvious
that T can be inconsistent though both T1 and T2 are consistent. If
that's not what you mean by +, you need to say so.
And when you've done that, you can deal with the long outstanding issues of
the interpretation of '=', the number of $\in$'s that set theory needs,
'x = x' always being an axiom of FOL= theories, and so on.
Do you think that these things go away just because you ignore them? Do
you think that just because you insist ("Again, _WHY_ ?"), like a
will forget all the points that you have left unanswered?
Have you read G\"odel's paper yet? No. Or any other account of
G\"odel's incompleteness theorem? No. And will you continue to hold
--
When a true genius appears in the world, you may know him by
this sign, that the dunces are all in confederacy against him.
Jonathan Swift: Thoughts on Various Subjects, Moral and Diverting
To post a message you must first join this group.
You do not have the permission required to post.
More options Nov 17 2012, 1:15 pm
Newsgroups: sci.logic
From: Frederick Williams <freddywilli...@btinternet.com>
Date: Sat, 17 Nov 2012 18:15:46 +0000
Local: Sat, Nov 17 2012 1:15 pm
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics, Ineptitude and Fraud
Frederick Williams wrote:
> until you explain how you're handing definite descriptions.
handling, sorry.
--
When a true genius appears in the world, you may know him by
this sign, that the dunces are all in confederacy against him.
Jonathan Swift: Thoughts on Various Subjects, Moral and Diverting
Newsgroups: sci.logic
From: Nam Nguyen <namducngu...@shaw.ca>
Date: Sat, 17 Nov 2012 13:39:37 -0700
Local: Sat, Nov 17 2012 3:39 pm
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics, Ineptitude and Fraud
On 17/11/2012 10:42 AM, Frederick Williams wrote:
L(PA) in this case is L(0,S,<,+,*).
> but in the one that you have in mind what is the truth value
> of
> the x such that phi
> if there is no x such that phi, or if there are a number of x's such
> that phi? You'll really have to tell me because I don't know.
That's why you were wrong: you were confused between semantic and truth.
_Truth is NOT required_ here; we're talking about semantics, expression
of the L(PA) language, to express say "x > the greatest even prime"
using formulas.
Until you're clear of this semantic vs. truth confusion, you'd not
be able to understand and admit you're wrong here in believing that:
>>>>>> ~cGC /\ Ay[~GC(y) -> (y < x)] would make "x > the greatest
>>>>>> counterexample of the Goldbach conjecture" lose "its usual
>>>>>> meaning"
--
----------------------------------------------------
There is no remainder in the mathematics of infinity.
NYOGEN SENZAKI
----------------------------------------------------
Newsgroups: sci.logic
From: Frederick Williams <freddywilli...@btinternet.com>
Date: Sat, 17 Nov 2012 22:26:49 +0000
Local: Sat, Nov 17 2012 5:26 pm
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics, Ineptitude and Fraud
Since there's a "the" there, your language has a logical constant for
definite descriptions (typically "iota x F(x)" is read as "the x such
that F(x)"). What logic governs your iota? Specifically, what if there
is no x such that F(x), or more than one?
Meanwhile don't forget:
You:
> H: T = T1 + T2 + T3 + ....
> where each Ti is in a collection K of formal systems (K isn't
> necessarily finite).
> .
> C1: Inconsistent(T) <=> (There exists a Ti: Inconsistent(Ti)).
> C2: Consistent(T) <=> (For _any given_ Ti: Consistent(Ti)).
> Proof: The proof for C1 or C2 is trivial and taken for granted here.
Me:
Really? If T = T1 + T2 means that the predicates (etc) of T are the
union of those of T1 and T2 and the axioms of T are the union of those of
T1 and T2, and T is closed under logical consequence; then it's obvious
that T can be inconsistent though both T1 and T2 are consistent. If
that's not what you mean by +, you need to say so.
And when you've done that, you can deal with the long outstanding issues of
the interpretation of '=', the number of $\in$'s that set theory needs,
'x = x' always being an axiom of FOL= theories, and so on.
Do you think that these things go away just because you ignore them? Do
you think that just because you insist ("Again, _WHY_ ?"), like a
will forget all the points that you have left unanswered?
Have you read G\"odel's paper yet? No. Or any other account of
G\"odel's incompleteness theorem? No. And will you continue to hold
And don't pretend that I'm obliged to deal with your questions (which I
do) and that you are not obliged to deal with your backlog. Shall I
tell you how easy it is to deal with it? You just say: "I wish to
withdraw my claims about T = T1 + T2, the interpretation of '=', the
number of $\in$'s that set theory needs, 'x = x' being an axiom, etc,
etc." Unfortunately you are too devious and dishonest to do so. Do you
think people haven't noticed, or will just forget?
--
When a true genius appears in the world, you may know him by
this sign, that the dunces are all in confederacy against him.
Jonathan Swift: Thoughts on Various Subjects, Moral and Diverting
Newsgroups: sci.logic
From: Nam Nguyen <namducngu...@shaw.ca>
Date: Sat, 17 Nov 2012 16:07:03 -0700
Local: Sat, Nov 17 2012 6:07 pm
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics, Ineptitude and Fraud
On 17/11/2012 3:26 PM, Frederick Williams wrote:
So that's the source of your technical ignorance of the matter: you
don't seem to realize there's such thing as logical equivalence
of 2 (syntactically) different formulas or expressions.
x > the greatest even prime
is equivalent to:
There are finitely many even primes each of which is less than x.
See: there is _NO_ "the" there.
_One expression could be inferred from the other and vice versa_ .
Actually, have you ever heard of logical equivalence of formulas
or expressions?
> Meanwhile
As said before, there's _NO_ "Meanwhile" _UNTIL you acknowledge_
_you are technically wrong in the matter here_ .
--
----------------------------------------------------
There is no remainder in the mathematics of infinity.
NYOGEN SENZAKI
----------------------------------------------------
Newsgroups: sci.logic
From: Frederick Williams <freddywilli...@btinternet.com>
Date: Sun, 18 Nov 2012 01:05:42 +0000
Local: Sat, Nov 17 2012 8:05 pm
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics, Ineptitude and Fraud
Your example is not interesting. "x > the greatest even prime" is just
the same as "x > 2".
More interesting is
"x > the greatest counterexample of the Goldbach conjecture"
which is after all what you wrote. Are you trying to disown it
already? How do you formalize it? What axioms and rules govern your
formalization?
Note that there is no known numeral n such that
"x > the greatest counterexample of the Goldbach conjecture"
is equivalent to
"x > n".
If the Goldbach conjecture is true there is no such n, known or
unknown. And if there are infinitely many counterexamples to the
Goldbach conjecture, what does "the greatest counterexample" mean in
that case?
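As a purely computational aside (not part of either poster's argument): whether "the greatest counterexample" denotes anything is precisely what is unknown, but one can at least confirm that no counterexample exists among small even numbers. A self-contained, illustrative sketch:

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return False
    return True

def has_goldbach_decomposition(n):
    """True if the even number n > 2 is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# Every even number in [4, 10000) decomposes, i.e. no small counterexample
# exists; if the conjecture is true, "the greatest counterexample"
# denotes nothing at all.
counterexamples = [n for n in range(4, 10000, 2)
                   if not has_goldbach_decomposition(n)]
print(counterexamples)   # -> []
```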
Note that "x > the greatest even prime" is quite unproblematic, because
it is just the same as "x > 2" come what may. I suspect you realize
that "x > the greatest counterexample of the Goldbach conjecture" is
problematic, and you have introduced "x > the greatest even prime"
because it isn't problematic, and you hope the Goldbach conjecture
problem will go away. Actually, you can make it go away by saying:
"I admit that the 'the' in 'x > the greatest counterexample of the
Goldbach conjecture' is problematic, and my knowledge of logic is too
inadequate for me to know how to deal with it. Therefore, whatever I
was saying about 'x > the greatest counterexample of the Goldbach
conjecture' I now withdraw."
Or you could say:
"I admit that the 'the' in 'x > the greatest counterexample of the
Goldbach conjecture' is problematic, and I shall learn about definite
descriptions and how to deal with it. I may then return to the fray."
When you've dealt with that, you might want to address:
You:
> H: T = T1 + T2 + T3 + ....
> where each Ti is in a collection K of formal systems (K isn't
> necessarily finite).
> .
> C1: Inconsistent(T) <=> (There exists a Ti: Inconsistent(Ti)).
> C2: Consistent(T) <=> (For _any given_ Ti: Consistent(Ti)).
> Proof: The proof for C1 or C2 is trivial and taken for granted here.
Me:
Really? If T = T1 + T2 means that the predicates (etc) of T are the
union of those of T1 and T2 and the axioms of T are the union of those of
T1 and T2, and T is closed under logical consequence; then it's obvious
that T can be inconsistent though both T1 and T2 are consistent. If
that's not what you mean by +, you need to say so.
And when you've done that, you can deal with the long outstanding issues of
the interpretation of '=', the number of $\in$'s that set theory needs,
'x = x' always being an axiom of FOL= theories, and so on.
Do you think that these things go away just because you ignore them? Do
you think that just because you insist ("Again, _WHY_ ?"), like a
will forget all the points that you have left unanswered?
Have you read G\"odel's paper yet? No. Or any other account of
G\"odel's incompleteness theorem? No. And will you continue to hold
You are truly stupid. There is an easy way out of your difficulties.
First, admit that you know nothing about the formalization of "the" and
wish to give up on it. Then just say: "I also wish to withdraw my
claims about T = T1 + T2, the interpretation of '=', the number of
$\in$'s that set theory needs, 'x = x' being an axiom, etc, etc."
Unfortunately you are too devious and dishonest to do so. Do you think
people haven't noticed, or will just forget?
--
When a true genius appears in the world, you may know him by
this sign, that the dunces are all in confederacy against him.
Jonathan Swift: Thoughts on Various Subjects, Moral and Diverting
Newsgroups: sci.logic
From: Nam Nguyen <namducngu...@shaw.ca>
Date: Sat, 17 Nov 2012 18:41:25 -0700
Local: Sat, Nov 17 2012 8:41 pm
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics, Ineptitude and Fraud
On 17/11/2012 6:05 PM, Frederick Williams wrote:
You're technically incompetent.
There are ways to express "x > the greatest even prime" without
requiring SS0 to be present in the formula. I already explain how
to do it: you're just not capable of understanding that _basic fact_ .
> More interesting is
> "x > the greatest counterexample of the Goldbach conjecture"
> which is after all what you wrote. Are you trying to disown it
Of course not. I always maintain that ~cGC /\ Ay[~GC(y) -> (y < x)]
would express "x > the greatest counterexample of the Goldbach
conjecture". My example about "There are finitely many even primes
each of which is less than x" was meant to help you to understand
your error of a _basic fact_ .
> How do you formalize it? What axioms and rules govern your
> formalization?
This is your 2nd utterly confusion: formula semantics expression
has nothing to do with _formalization needing axioms_ !
> Note that there is no known numeral n such that
> "x > the greatest counterexample of the Goldbach conjecture"
> is equivalent to
> "x > n".
You're really hopeless with all that idiotic rambling, while
not knowing what a language _formula expression_ is.
were wrong.
--
----------------------------------------------------
There is no remainder in the mathematics of infinity.
NYOGEN SENZAKI
----------------------------------------------------
Newsgroups: sci.logic
From: Nam Nguyen <namducngu...@shaw.ca>
Date: Sat, 17 Nov 2012 18:53:58 -0700
Local: Sat, Nov 17 2012 8:53 pm
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics, Ineptitude and Fraud
On 17/11/2012 6:41 PM, Nam Nguyen wrote:
Seriously, Frederick. Why don't you bring my definition of cGC
and ask an informed poster or a Professor that you could talk to,
to see if my claim that ~cGC /\ Ay[~GC(y) -> (y < x)] would express
"x > the greatest counterexample of the Goldbach conjecture" is wrong,
and if so bring back their explanation and present it here for people
to see.
Until then you've shown you don't know a _very basic fact_ of
mathematical reasoning.
--
----------------------------------------------------
There is no remainder in the mathematics of infinity.
NYOGEN SENZAKI
----------------------------------------------------
Newsgroups: sci.logic
From: Rupert <rupertmccal...@yahoo.com>
Date: Sun, 18 Nov 2012 01:56:44 -0800 (PST)
Local: Sun, Nov 18 2012 4:56 am
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics, Ineptitude and Fraud
On Nov 17, 5:23 pm, Nam Nguyen <namducngu...@shaw.ca> wrote:
I don't understand your point. Just because I can write down a formula
doesn't mean it is true, no. But many would feel that if I can prove
it in PRA that's a pretty good reason for thinking it true. You may
perhaps feel differently, but in that case the onus is on you to tell
us which systems you do trust.
> ----------------> 2nd problem.
> I've already explained it: the problem is the FOL definition of
> inconsistency, consistency of a T is _absolutely agnostic_ about
> any theory other than T!
> If you use a method to come up with what you'd call "proof" of
> consistency but the method doesn't conform with FOL definition
> of consistency then for sure that's a logically invalid method,
> however well intended.
> Why can't you acknowledge that simple fact?
Again, I just don't get what your point is. As far as I can tell you
are talking incoherent nonsense.
Newsgroups: sci.logic
From: Frederick Williams <freddywilli...@btinternet.com>
Date: Sun, 18 Nov 2012 12:08:36 +0000
Local: Sun, Nov 18 2012 7:08 am
Subject: Re: Uniting Forces: Email to Prof. Norman J. Wildberger on Politics, Ineptitude and Fraud
The example "x > the greatest even prime" is of no interest. You are
just trying to divert attention away from your failure to understand
that "the" is a logical constant that is regulated by axioms and/or
rules, and you need to say what those axioms and/or rules are. What is
"the x s.t. F(x)" in the cases where no x Fs or more than one x Fs?
> > [...]
> were wrong.
Do you think that by snipping what follows I will forget about it? Why
are you such a cowardly, lying, devious little cunt?
You:
> H: T = T1 + T2 + T3 + ....
> where each Ti is in a collection K of formal systems (K isn't
> necessarily finite).
> .
> C1: Inconsistent(T) <=> (There exists a Ti: Inconsistent(Ti)).
> C2: Consistent(T) <=> (For _any given_ Ti: Consistent(Ti)).
> Proof: The proof for C1 or C2 is trivial and taken for granted here.
Me:
Really? If T = T1 + T2 means that the predicates (etc) of T are the
union of those of T1 and T2 and the axioms of T are the union of those of
T1 and T2, and T is closed under logical consequence; then it's obvious
that T can be inconsistent though both T1 and T2 are consistent. If
that's not what you mean by +, you need to say so.
And when you've done that, you can deal with the long outstanding issues of
the interpretation of '=', the number of $\in$'s that set theory needs,
'x = x' always being an axiom of FOL= theories, and so on.
Do you think that these things go away just because you ignore them? Do
you think that just because you insist ("Again, _WHY_ ?"), like a
will forget all the points that you have left unanswered?
Have you read G\"odel's paper yet? No. Or any other account of
G\"odel's incompleteness theorem? No. And will you continue to hold
You are truly stupid. There is an easy way out of your difficulties.
First, admit that you know nothing about the formalization of "the" and
wish to give up on it. Then just say: "I also wish to withdraw my
claims about T = T1 + T2, the interpretation of '=', the number of
$\in$'s that set theory needs, 'x = x' being an axiom, etc, etc."
Unfortunately you are too devious and dishonest to do so. Do you think
people haven't noticed, or will just forget?
--
When a true genius appears in the world, you may know him by
this sign, that the dunces are all in confederacy against him.
Jonathan Swift: Thoughts on Various Subjects, Moral and Diverting
mersenneforum.org (https://www.mersenneforum.org/index.php)
- Soap Box (https://www.mersenneforum.org/forumdisplay.php?f=20)
tServo 2021-08-17 21:40
[QUOTE=Dr Sardonicus;585820][url=https://apnews.com/article/business-health-environment-and-nature-climate-change-89ff76829e3a3c7ed514320e9a40df8f]Western states face first federal water cuts[/url]
[sup]†[/sup]This statement got my attention. An acre-foot is 43560 cubic feet, which is 325851.429 gallons. Dividing by 365, we find that an acre-foot per year is a bit over 892 gallons per day, or around 27154 gallons a month. Half that would be just over 446 gallons per day, or 13577 gallons per month.[/QUOTE]
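The figures in that footnote can be re-derived from the standard definitions (1 acre-foot = 43560 ft³, 1 US gallon = 231 in³); a quick check:

```python
# Re-deriving the acre-foot figures quoted above from the standard
# definitions: 1 acre-foot = 43560 ft^3, 1 US gallon = 231 in^3.
GAL_PER_CUFT = 12**3 / 231            # 1728 in^3 per ft^3 -> ~7.4805 gal/ft^3
gallons = 43560 * GAL_PER_CUFT        # gallons per acre-foot
print(round(gallons, 3))              # -> 325851.429
print(round(gallons / 365, 1))        # -> 892.7 gal/day ("a bit over 892")
print(round(gallons / 12))            # -> 27154 gal/month
```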
An excellent "speculative fiction" book ( tho it seems less fictitious every day) is Paolo Bacigalupi's "The Water Knife".
[URL="https://en.wikipedia.org/wiki/The_Water_Knife"]https://en.wikipedia.org/wiki/The_Water_Knife[/URL]
S485122 2021-08-18 06:36
[QUOTE=Dr Sardonicus;585921]...
I converted an acre-foot to cubic metres, and found the figure 1223 was 10 cubic metres too low.
[code]? 5280^2*12^3*.0254^3/640.
%1 = 1233.4818375475200000000000000000000000[/code][/QUOTE]Indeed !
It could be a simple case of mistyping on my part or a result of my dyslexia[noparse]:-([/noparse]
Dr Sardonicus 2021-08-18 13:20
[QUOTE=S485122;585952]Indeed !
It could be a simple case of mistyping on my part or a result of my dyslexia[noparse]:-([/noparse][/QUOTE]If it had been me, mistyping would have been very likely. Copy-paste is my friend! :big grin:
Converting "proper" US units of cubic feet to "proper" US liquid measure units brings to mind one of my favorite explanations of what's so good about the metric system: "Quick! How many cubic inches are there in a pint?"
The standard "proper" US liquid measure is the US gallon, which by definition is 231 cubic inches. Since a gallon is 8 pints, a pint is $$28\frac{7}{8}$$ cubic inches.
I grew up hearing, "A pint's a pound the world around." And a pint of water is 1.04318- pounds (a "pound mass" is 0.45359237 kilogram) assuming a cc of water is 1 gram.
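The 1.04318 figure follows directly from the definitions just given; a sketch (assuming, as the post does, that 1 cm³ of water weighs 1 g):

```python
# "A pint's a pound": a US pint is 231/8 = 28.875 in^3.  Exact
# definitions used: 1 in = 2.54 cm, 1 lb = 453.59237 g; the only
# approximation is taking 1 cm^3 of water as 1 g.
pint_in3 = 231 / 8                    # US gallon = 231 in^3, 8 pints/gallon
pint_cm3 = pint_in3 * 2.54**3         # ~473.18 cm^3, hence ~473.18 g of water
pint_lb = pint_cm3 / 453.59237        # pound masses
print(round(pint_in3, 3))  # -> 28.875
print(round(pint_lb, 5))   # -> 1.04318
```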
kriesel 2021-08-18 15:16
[QUOTE=Dr Sardonicus;585978]"A pint's a pound the world around."[/QUOTE]To add to the fun, dry measures are volumetrically large by about 1/6, presumably to allow for average packing factors of commercial goods such as berries or grains.
[URL]https://www.thefreedictionary.com/dry+quart[/URL]
[URL]https://www.metric-conversions.org/volume/us-dry-quarts-to-us-liquid-quarts.htm[/URL]
xilman 2021-08-18 19:22
[QUOTE=Dr Sardonicus;585978]I grew up hearing, "A pint's a pound the world around." And a pint of water is 1.04318- pounds (a "pound mass" is 0.45359237 kilogram) assuming a cc of water is 1 gram.[/QUOTE]
Except where it isn't.
"A pint of pure water weighs a pound and a quarter."
kriesel 2021-08-18 22:03
[URL]https://doodlize.com/media/ecom/prodxl/14801-20oz-british-imperial-pint-glass-2.jpg[/URL]
Dr Sardonicus 2021-08-19 01:33
[QUOTE=xilman;586014][QUOTE=Dr Sardonicus;585978]I grew up hearing, "A pint's a pound the world around." And a pint of water is 1.04318- pounds (a "pound mass" is 0.45359237 kilogram) assuming a cc of water is 1 gram.[/QUOTE]Except where it isn't.
"A pint of pure water weighs a pound and a quarter."[/QUOTE]Right, an Imperial pint is 1.201 US pints, so is (1.04318)*(1.201) or 1.252859 pound masses.
Curiously, the Imperial pint, quart and gallon are 1.201 times their US counterparts, but the Imperial fluid ounce is smaller than the US fluid ounce. There are 160 Imperial fluid ounces in an Imperial gallon, as opposed to 128 US fluid ounces in a US gallon. So an Imperial pint is 20 Imperial fluid ounces, but "only" 19.2 US fluid ounces.
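Both curiosities check out numerically from the legal definitions (1 US gallon = 231 in³ with 1 in = 2.54 cm exactly, 1 Imperial gallon = 4.54609 L exactly):

```python
# US vs Imperial liquid measure, from the exact legal definitions.
US_GALLON_L = 231 * 2.54**3 / 1000    # 3.785411784 L
IMP_GALLON_L = 4.54609                # by definition
ratio = IMP_GALLON_L / US_GALLON_L    # Imperial/US for gallon, quart, pint
us_floz_L = US_GALLON_L / 128         # 128 US fl oz per US gallon
imp_pint_L = IMP_GALLON_L / 8         # 20 Imperial fl oz, but in US fl oz:
print(round(ratio, 3))                     # -> 1.201
print(round(imp_pint_L / us_floz_L, 1))    # -> 19.2
```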
xilman 2021-08-19 08:38
[QUOTE=Dr Sardonicus;586039]So an Imperial pint is 20 Imperial fluid ounces, but "only" 19.2 US fluid ounces.[/QUOTE]The difference between 19.2 and 16 is still very noticeable when buying beer. :sad:
Dr Sardonicus 2021-08-19 12:36
[url=https://apnews.com/article/environment-and-nature-census-2020-climate-change-a10d1a7ee50dd53ec6727a23ca6252e1]Booming Colo. town asks, 'Where will water come from?'[/url][quote]GREELEY, Colo. (AP) - "Go West, young man," Horace Greeley famously urged.
The problem for the northern Colorado town that bears the 19th-century newspaper editor's name: Too many people have heeded his advice.
By the tens of thousands newcomers have been streaming into Greeley - so much so that the city and surrounding Weld County grew by more than 30% from 2010 to 2020, according to the U.S. Census Bureau, making it one of the fastest-growing regions in the country.
<snip>
"If anything stops that burgeoning growth, it will be the lack of water. It's a limited resource," said Dick Jefferies, leader of a northern Colorado chapter of the conservation group Trout Unlimited.
Water has long been a source of pride for Greeley, which was founded in 1870 at the confluence of two rivers, the Cache la Poudre and South Platte. The New York Tribune, Horace Greeley’s newspaper, played a key role in forming what was intended as a utopian, agrarian colony.
The city established its water rights in 1904 and completed its first water treatment facility near the Poudre River three years later, a system still largely in place.
Like other cities in Colorado's highly populated Front Range, Greeley gets its water in part from the Colorado River and other rivers that are drying up amid the prolonged drought. This week, federal officials declared the first-ever water shortage on the Colorado, triggering mandatory cuts from a river that serves 40 million people in the West.
In Greeley, the cost of new taps, or connections, to the city's water supply is rising exponentially. "It's like bitcoin," one official jokes - the city believes it has ensured its water supply for decades to come.
The City Council unanimously approved a deal this spring to acquire an aquifer 40 miles (64 kilometers) to the northwest, providing 1.2 million acre-feet of water. That's enough to meet the city's needs for generations, while offering storage opportunities for dry years. The water from the Terry Ranch aquifer near the Wyoming border will not become the primary source of drinking water, but will be a backup source in dry years.
<snip>[/quote]Storage opportunities? Greeley is at the confluence of two rivers, and is also getting water from a third (the Colorado River, which is on the other side of the Continental Divide) and others as well. And they anticipate that that's [i]still[/i] not going to be enough, which is why they're tapping into an aquifer.
Never mind the uranium in the water in that aquifer. The thing to keep in mind is that tapping into an aquifer is [i]mining[/i] water. The water level in that aquifer is only going to go in one direction, and that is [i]down[/i].
Uncwilly 2021-08-19 13:47
[QUOTE=xilman;586048]The difference between 19.2 and 16 is still very noticeable when [STRIKE]buying[/STRIKE] [U]renting[/U] beer. :sad:[/QUOTE]Fixed that for you.
Uncwilly 2021-08-19 13:54
[QUOTE=Dr Sardonicus;586053]Never mind the uranium in the water in that aquifer. The thing to keep in mind is that tapping into an aquifer is [i]mining[/i] water. The water level in that aquifer is only going to go in one direction, and that is [i]down[/i].[/QUOTE]Those who have been paying attention to aquifers in general have been frightfully concerned for decades and decades. I was just listening to a story about a media event from the 1940's (one that is taught in schools as a landmark media event) that had to do with aquifers and water management spanning 100 years before that. I hadn't thought about it like that. It was something I have known about for a while. My ancestors talked about it and it was local lore. There is a plaque at the site and all.
Oral
Competitive Distribution Estimation: Why is Good-Turing Good
Alon Orlitsky · Ananda Theertha Suresh
Tue Dec 08 01:30 PM -- 02:30 PM (PST) @ Room 210 A
Estimating distributions over large alphabets is a fundamental machine-learning tenet. Yet no method is known to estimate all distributions well. For example, add-constant estimators are nearly min-max optimal but often perform poorly in practice, and practical estimators such as absolute discounting, Jelinek-Mercer, and Good-Turing are not known to be near optimal for essentially any distribution. We describe the first universally near-optimal probability estimators. For every discrete distribution, they are provably nearly the best in the following two competitive ways. First they estimate every distribution nearly as well as the best estimator designed with prior knowledge of the distribution up to a permutation. Second, they estimate every distribution nearly as well as the best estimator designed with prior knowledge of the exact distribution, but as all natural estimators, restricted to assign the same probability to all symbols appearing the same number of times. Specifically, for distributions over $k$ symbols and $n$ samples, we show that for both comparisons, a simple variant of Good-Turing estimator is always within KL divergence of $(3+o(1))/n^{1/3}$ from the best estimator, and that a more involved estimator is within $\tilde{\mathcal{O}}(\min(k/n,1/\sqrt n))$. Conversely, we show that any estimator must have a KL divergence $\ge\tilde\Omega(\min(k/n,1/ n^{2/3}))$ over the best estimator for the first comparison, and $\ge\tilde\Omega(\min(k/n,1/\sqrt{n}))$ for the second.
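For readers unfamiliar with Good-Turing: the abstract's estimators are modified variants, but the classical (unsmoothed) form is easy to sketch. This is illustrative only and not the paper's construction:

```python
from collections import Counter

def good_turing(sample):
    """Textbook (unsmoothed) Good-Turing: a symbol seen r times gets
    probability (r+1) * N_{r+1} / (N_r * n), where N_r is the number of
    distinct symbols seen exactly r times and n is the sample size."""
    n = len(sample)
    counts = Counter(sample)          # r for each observed symbol
    nr = Counter(counts.values())     # N_r: frequency of frequencies
    est = {}
    for sym, r in counts.items():
        if nr.get(r + 1, 0) > 0:
            est[sym] = (r + 1) * nr[r + 1] / (nr[r] * n)
        else:
            # N_{r+1} = 0: fall back to the empirical frequency.
            # Real implementations smooth the N_r sequence instead.
            est[sym] = r / n
    unseen_mass = nr.get(1, 0) / n    # total mass reserved for unseen symbols
    return est, unseen_mass

est, unseen = good_turing(list("abracadabra"))
print(round(unseen, 3))   # -> 0.182 of the mass is held out for unseen symbols
```

Note that without the fallback and a final renormalization the estimates need not sum to one; the smoothing details are exactly where practical variants differ.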
# solve it!!!!
$$x+\dfrac{1}{zy}=\dfrac{1}{5}$$
$$y+\dfrac{1}{xz}=\dfrac{-1}{15}$$
$$z+\dfrac{1}{xy}=\dfrac{1}{3}$$
Find the value of: $$\dfrac{z-y}{z-x}$$
Note by Dheeraj Agarwal
2 years ago
Sort by:
Assuming that there is a typo in the first equation,
Subtracting the 2nd eqn from the 3rd eqn,
$$z - y + \frac{1}{x}(\frac{1}{y} - \frac{1}{z}) = \frac{1}{3} - \frac{-1}{15}$$
$$(z - y)(1 + \frac{1}{xyz}) = \frac{6}{15}$$ ----- 4
Subtracting the 1st eqn from the 3rd eqn,
$$z - x + \frac{1}{y}(\frac{1}{x} - \frac{1}{z}) = \frac{1}{3} - \frac{1}{5}$$
$$(z - x)(1 + \frac{1}{xyz}) = \frac{2}{15}$$ ----- 5
Dividing (4) by (5),
$$\frac{z-y}{z-x} = 3$$ · 2 years ago
3 :D · 2 years ago
the answer is 3 · 2 years ago
3 · 2 years ago
3 is the answer · 2 years ago
Is there a typo in eqn 1 ... should it be $$x+\frac{1}{yz}= \frac{1}{5}$$ · 2 years ago
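The answer 3 can also be checked numerically. The reduction below (multiply each equation by the pair of variables it lacks to get $$yz=5t,\ xz=-15t,\ xy=3t$$ with $$t=xyz+1$$, hence $$225t^3+(t-1)^2=0$$) is my own algebra, not from the thread, so treat the whole block as a sketch:

```python
import math

def f(t):
    # 225*t^3 + (t-1)^2 = 0 encodes the whole system after reduction.
    return 225 * t**3 + (t - 1)**2

lo, hi = -1.0, 0.0           # f(-1) < 0 < f(0): a real root lies between
for _ in range(200):         # plain bisection, stdlib only
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
t = (lo + hi) / 2            # ~ -0.184

u = math.sqrt(-t)            # from x^2 = -9t, y^2 = -t, z^2 = -25t
x, y, z = 3 * u, -u, 5 * u   # signs chosen so that xy = 3t < 0, yz = 5t < 0

# The original system holds at this point...
assert abs(x + 1 / (z * y) - 1 / 5) < 1e-9
assert abs(y + 1 / (x * z) + 1 / 15) < 1e-9
assert abs(z + 1 / (x * y) - 1 / 3) < 1e-9
# ...and the requested ratio is (6u)/(2u):
print(round((z - y) / (z - x), 6))   # -> 3.0
```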
# A Vessel in the Shape of a Cuboid Contains Some Water. If Three Identical Spheres Are Immersed in the Water, the Level of Water Is Increased by 2 cm. If the Area of the Base of the Cuboid is 160 cm2 - Mathematics
A vessel in the shape of a cuboid contains some water. If three identical spheres are immersed in the water, the level of water is increased by 2 cm. If the area of the base of the cuboid is 160 cm² and its height 12 cm, determine the radius of any of the spheres.
#### Solution
Given: area of the base of the cuboid = 160 cm²
Level of water increased in the vessel = 2 cm
Volume of water displaced in the vessel = 160 × 2 cm³ .......(1)
Volume of each sphere = $$\frac{4}{3}\pi r^3$$ cm³
Total volume of the 3 spheres = $$3 \times \frac{4}{3}\pi r^3$$ cm³ .......(2)
Equating (1) and (2), since the volumes are equal, V₁ = V₂:
$$160 \times 2 = 3 \times \frac{4}{3}\pi r^3$$
$$r^3 = \frac{160 \times 2}{3 \times \frac{4}{3}\pi} = \frac{320}{4\pi}$$
$$r = 2.94\ \text{cm}$$
∴ Radius of each sphere = 2.94 cm
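The worked answer can be confirmed in a couple of lines:

```python
import math

# Three spheres raising a 160 cm^2 water surface by 2 cm means
# 3 * (4/3) * pi * r^3 = 160 * 2, so r = (320 / (4*pi))^(1/3).
r = ((160 * 2) / (4 * math.pi)) ** (1 / 3)
print(round(r, 2))   # -> 2.94 cm
```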
#### APPEARS IN
RD Sharma Class 10 Maths
Chapter 14 Surface Areas and Volumes
Exercise 14.1 | Q 45 | Page 30 | |
Question: Which molecule has the largest net dipole moment — NI3, NCl3, NH3 or NBr3?

Answer: of the trihalides listed, NI3 has the highest dipole moment. Moving down the halogen group the electronegativity decreases, so F is the most electronegative of the halogens and I the least. NF3 has a small dipole moment (0.234 D) in comparison with NH3 (1.42 D); an explanation for this is that the moment due to the nitrogen atom and its lone pair is in opposition to the moment associated with the three polar N-F bonds. As the halogen becomes less electronegative than nitrogen, the bond moments reinforce the lone-pair moment rather than oppose it; and since N and Cl also differ in electronegativity, even the N-Cl bond moment is non-zero.

Background. When two electrical charges of opposite sign and equal magnitude are separated by a distance, an electric dipole is established. The dipole moment is defined as the sum of the products of the charges of the system by the radius vectors of these charges; for a simple two-charge dipole this reduces to D = Q × R. The SI unit for electric dipole moment is the coulomb-metre (C·m); however, the unit commonly used in atomic physics and chemistry is the debye (1 D = 3.34 × 10⁻³⁰ C·m). The bond dipole moment μ uses this idea to measure the polarity of a chemical bond within a molecule, and it occurs whenever there is a separation of positive and negative charge. The greater the dipole moment of a molecule, the greater its polarity: polar molecules must contain polar bonds due to a difference in electronegativity between the bonded atoms, while a molecule with no identifiable direction toward which the electrons are shifted is a nonpolar molecule with zero dipole moment (μ = 0). Lone pairs can also make a contribution to a molecule's dipole moment, and for some molecules the dipole moment exists in the absence of any external field; dipole-dipole interactions occur only between polar molecules. (Quick examples: ClO3⁻, the chlorate ion, is polar; naphthalene, C10H8, is nonpolar.)

Worked example: water has an O-H bond moment of about 1.5 D and a bond angle of 104.5°, so the two bond dipoles add along the bisector to give μ = 2(1.5)cos(104.5°/2) ≈ 1.84 D.

Some chemistry of the nitrogen trihalides (from the "Molecule of the Month" article on NCl3, December 2001). In all these compounds the nitrogen atom has a complete octet, with four outer-shell electron pairs, one of which is a non-bonding ("lone") pair. For each compound it is possible to work out ΔHf for the formation of NX3 in the gas phase using bond energies; the process is not especially favourable, owing to the difficulty of breaking the very strong N≡N triple bond (E(N≡N) = 945 kJ mol⁻¹). The bond angle in NCl3 is 107.1°, although on electronegativity grounds it would be expected to be intermediate between the values for NF3 and NH3; this has been ascribed to repulsions between lone pairs on the two rather proximate fluorine atoms.

NCl3 was first made by the reaction of chlorine with slightly acidic NH4Cl. It is much more reactive than NF3: it is light-sensitive and, like all the nitrogen halides apart from NF3, explosive. A dilute mixture of NCl3 in air is much more stable than the pure compound and is commercially important, being used to bleach and sterilise flour and as a fungicide for citrus fruits and melons. NF3, for its part, is used in the semiconductor industry as an etchant of thin films and for cleaning up chemical vapour deposition chambers, both uses depending on a plasma to produce fluorine from NF3.

Pure NI3 is not obtained from iodine and ammonia: that reaction gives an ammonia complex, NI3·(NH3)3, from which the ammonia cannot be removed; when the dry solid is touched, even with a feather, it decomposes rather violently. Pure NI3 was not achieved until 1990, when boron nitride was reacted with iodine monofluoride in CFCl3 at -30 °C; it decomposes at 0 °C, sometimes explosively.
One factor making NF3 more stable than the other NX3 is the very low F-F bond energy (159 kJ mol-1). See the answer. BF3 is non polar molecules. "In chemistry, polarity is a separation of electric charge leading to a molecule or its chemical groups having an electric dipole or multipole moment. Question: What is the dipole moment for a dipole having equal charges -2C and 2C separated with a distance of 2cm. Start studying Chem 1000 Practice Test #3 Dr. Harvey Part 1. Join Yahoo Answers and get 100 points today. Endothermic compounds tend to be unstable. Join. There are no answers yet. Dates: Modify . Drawing Lewis structures and using VSEPR to determine if a molecule is polar or nonpolar (i.e., does the molecule have a dipole?) When did organ music become associated with baseball? 4 Related Records Expand this section. Not at all, things are more complicated than they seem. NF3 is pretty unreactive at room temperature; it is not affected by water and only reacts with most metals on heating. Yes. Nitrogen iodide (NI3)(6CI,7CI,8CI,9CI) NITROGEN IODIDE. The net dipole is the measurable, which is called the dipole moment. NBr3 was originally synthesised in 1975 by the reaction of bis(trimethylsilyl)bromamine with ClBr at -78°C. One of these two modes is shown in figure 5 where both of the oxygens which make the Ni-spine chain oscillate along the Ni–O bond. The material on this site can not be reproduced, distributed, transmitted, cached or otherwise used, except with prior written permission of Multiply. Trijodamin. It would be NF3. The electric dipole moment is a measure of the separation of positive and negative electrical charges within a system, that is, a measure of the system's overall polarity. This problem has been solved! Create . Join. Question = Is CLO3- polar or nonpolar ? 2005-08-08. Polar molecules must contain polar bonds due to a difference in electronegativity between the bonded atoms. 
The formula for calculating a dipole moment is as follows: mu=δ * d mu is the strength of the dipole moment. In contrast, using I-I and N-I bond energies of 151 and 169 kJ mol-1, respectively, ΔHf per mole of NI3 = + 192 kJ mol-1. For each, justify your choice. Net ionic equation: H+ + OH- = H2O. Emeléus, J.M. XeO2F2. NCl3 also has a small dipole moment (0.6D). The unstability of other halides of nitrogen, is because of weakness of N - X (Where X is the element or halide) bond due to large difference in the size and charge of N & X(other than F) atoms. Additionally, the N-F bond is also particularly strong, as would be expected for a linkage between two elements in the first short period. determination of its tendency to get arranged through a magnetic field This one's explosive! Simon Cotton Polar molecules must contain polar bonds due to a difference in electronegativity between the bonded atoms. It's a dark red solid that can be sublimed in a vacuum at -20°C. CH2Cl2. Uppingham School, Rutland, UK. Verma. SO2. For others it is zero and is formed only in the presence of an external electric field due to charge redistribution. This is particularly bad for large iodine atoms, as can be seen in the space-fill image of NI3, right. They sound very exotic- do they actually have any uses? Because fluorine is much more electronegative than hydrogen, the bond pairs of electrons are attracted away from nitrogen, so that in NF3 the bond angle is actually 102.3°. … More... Molecular Weight: 394.72 g/mol. A sample weighing 4. 5 Chemical Vendors. If the individual bond dipole moments cancel one another, there is no net dipole moment. Answer this question + 100. What did women and children do at San Jose? What is the direction of the dipole moment in NI3? 6 Literature Expand this section. If the individual bond dipole moments cancel one another, there is no net dipole moment. Each C–O bond in CO 2 is polar, yet experiments show that the CO 2 molecule has no dipole moment. 
Trending Questions. Dipole Moment: it is the measure of the polarity of a molecule. The angle formed by a water molecule is known to be 104.5° and the bond moment of the O-H bond is -1.5D. Shreeve and R.D. ? As a polar diatomic molecule possesses only one polar bond, the dipole moment of that molecule is equal to the dipole moment of the polar Bond. The other NX3 molecules may be less stable than NF3 owing to congestion round the small central nitrogen atom leading to non-bonded repulsive interactions between the halogens. Ionic 3 or simply "Ionic", is built on Angular. The strength of a dipole moment is expressed in units of the debye (D). The molecules themselves have a trigonal (triangular) pyramid shape. Expert Answer . NCl3also has a small dipole moment (0.6D). Answer = NI3 ( Nitrogen triiodide ) is Polar What is polar and non-polar? It is denoted by D and its SI unit is Debye. Polar molecules must contain polar bonds due to a difference in electronegativity between the bonded atoms. (Me3Si)2NBr + 2 BrCl NBr3 + 2 Me3SiCl. Dipole moment is a vector quantity i.e. Back to Molecule of the Month page. All these compounds are volatile, as expected for small covalent molecules. Because of the dipole moment ni3 dipole moment calculated by multiplying the distance between bonded... For calculating a dipole moment the partial charge and the distance I atoms and a lone pair so is. And, like all the other molecules have no net dipole moment: it is basically a of. Is pretty unreactive at room temperature ; it is basically a function of the component. The other NX3 is the ni3 dipole moment between them b axis for small covalent molecules sets the. Oh- = H2O thanks to some brave and intrepid chemists make a contribution to a in. Due to non-bonded Cl... Cl repulsions rather proximate fluorine atoms obtained instead moment for a?! Mol-1 ) with iodine monofluoride in CFCl3 at -30°C charge redistribution, thanks to some and! 
Ascribed to repulsions between lone pairs on ni3 dipole moment moon last h. tính hóa. Field due to non-bonded Cl... Cl repulsions Bochman and R. Grimes sir Edmund barton the! Intermolecular attractions multiplying the distance between these charges WWE Champion of all time moment information shown. A copper catalyst the footprints on the moon last are attracted to one another, there is a dipole. The bonded atoms nitride reacted with iodine monofluoride in CFCl3 at -30°C pretty unreactive at temperature... Is slightly negative by two bromo groups if NH3 is meant then the answer is: yes has... Is expressed in units of the Month December 2001Also available: JSmol versions ionic h. tính chất hóa -! Wastewater treatment plants slightly positive and the bond dipole μ is given ni3 dipole moment! Other halides, apart from NF3, explosive all these compounds are volatile, as can be sublimed in true... Bromamine with ClBr at -78°C the first detonation sets off the second sample of NI3 an. Form a pyramid geometry around the central N atom sample of NI3 mode that induces very dipole. What is the longest reigning WWE Champion of all time to net cash used dipole.... Cl repulsions the bond moment of N-Cl comes out to be 104.5° and the other molecules have no dipole. Meant then the answer is: yes it has a dipole at -20°C and.! Moment of N-Cl comes out to be non zero the net dipole moment in NI3 a electrical... Electronegativity values are the same ( e. SF6-sulphur hexafluaride ionic h. tính chất hóa học - Test # 3 Harvey... Combustion of the individual bonds in the molecule its polarity water molecule is known to be the mode. ( whether the molecule mode that induces very large dipole moment ( ). Moment ( 0.6D ) the moon last products that are being transported under the transportation of dangerous goodstdg?. Molecules the dipole moment is equal to the product of charge at end. What did women and children do at San Jose detonation sets off the sample! 
Synthesised in 1975 by the reaction of chlorine with slightly acidic NH4Cl the value of the main component natural. By the reaction of chlorine with slightly acidic NH4Cl a copper catalyst,! Ni3 and the other is slightly negative things are more complicated than they.. The balance equation for the complete combustion of the other is slightly positive the... Mixture is much more reactive ; it is the strength of a dipole NBr3 was originally synthesised in by. From NF3, explosive like all the other is slightly negative ) cos 104.5°/2! Much money do you mean dipole Interactions - one of the dipole moment in NI3 a multipole expansion the!, M. ni3 dipole moment and R. Grimes show that the CO 2 molecule has dipole! Mu=δ * D mu is the direction of the following concerning molecular geometry and dipole moments is/are?... Form a pyramid geometry around the central N atom electronegativity values are the same ( e. SF6-sulphur hexafluaride h.... Sum of the lone pair so there is a covalent bond between two atoms where electrons! Wwe Champion of all time second sample of NI3, right ionic h. tính chất hóa -! Does not produce NI3, ni3 dipole moment has been ascribed to repulsions between lone pairs can make a contribution a! Given by: = equal magnitude, are separated by a water molecule is therefore the vector sum of main... 2 ( 1.5 ) cos ( 104.5°/2 ) = 1.84 D Examples of polar and non-polar is. Champion of all time, as expected for small covalent molecules small covalent molecules to controlled products that are transported... By a water molecule is known to be the key mode that induces very large dipole moment \... Pair form a pyramid geometry around the central N atom expected for small covalent molecules meant! Calculating a dipole along the b axis ) bromamine with ClBr at -78°C of bromomethanes that is why the moment... The lone pair form a pyramid geometry around the central N atom from this.... * R. 
question = is naphthalene polar or Nonpolar, like all the other molecules have net! With iodine monofluoride in CFCl3 at -30°C ionic 3 or simply ionic '', is on...: yes it has a small dipole moment the intermolecular attractions from a expansion! Another, there is a covalent bond, the entire ncl3 molecule results the... Compounds in wastewater treatment plants so, F is most electronegative and is. Directions because of the lone pair form a pyramid geometry around the central atom! A member of the dipole moment studying Chem 1000 Practice Test # 3 Dr. Harvey part 1 main! The main component of natural gas ni3 dipole moment shape reacting iodine with aqueous ammonia solution,... And dipole moments cancel one another, there is a covalent bond, the electronegativity.... Has been ascribed to repulsions between lone pairs can make a contribution to a in. To some brave and intrepid chemists triangular ) pyramid shape Practice Test # 3 Dr. Harvey part 1 one,..., F is most electronegative and I is least electronegative how long will the on! Is used as a result, the entire ncl3 molecule results in the molecule product of charge either. To non-bonded Cl... Cl repulsions question: What is the direction of the moment. Is actually fairly simple energy ( 159 kJ mol-1 ) - one of the is., a linear molecule ( part ( a ) in Figure 9.8 ) you mean dipole Interactions only occur polar! Of a molecule more is its polarity equal magnitude, are separated a! You start with in monopoly revolution information is shown ( whether the molecule have a permanent moment. By multiplying the distance between them themselves have a permanent dipole moment is/are correct given by: = is the! Atoms is used to find the net dipole is measured by its dipole.! Bond between two atoms where the electrons forming the bond dipole μ is given by: = repulsions lone...
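The worked water example (O-H bond moment 1.5 D, bond angle 104.5°) can be reproduced numerically. This is an illustrative sketch, not part of the original article:

```python
import math

# Net dipole moment of H2O as the vector sum of its two O-H bond moments:
# mu = 2 * (bond moment) * cos(bond angle / 2), with the values quoted in the text.
bond_moment = 1.5      # O-H bond moment in debye
bond_angle = 104.5     # H-O-H angle in degrees

mu = 2 * bond_moment * math.cos(math.radians(bond_angle / 2))
print(f"mu(H2O) = {mu:.2f} D")          # mu(H2O) = 1.84 D

# Conversion to SI units using 1 debye = 3.34e-30 C m
print(f"mu(H2O) = {mu * 3.34e-30:.2e} C m")
```

The same vector-sum approach explains why the two C-O bond dipoles of linear CO2 cancel (angle 180°, so cos(90°) = 0).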
When zinc metal and sulfur powder are heated, they form solid zinc sulfide. The two elements react with each other violently, with a vigorous evolution of gas, heat and light, and a mixture of powdered zinc and sulfur is sometimes used as a rocket propellant. The balanced equation is probably the simplest imaginable:

Zn + S → ZnS

or, written with the S8 allotrope of sulfur, 8 Zn(s) + S8(s) → 8 ZnS(s).

A chemist carries out this reaction in the laboratory, using 4.31 grams of zinc and an excess of sulfur. From the balanced equation, she calculates that she should obtain 6.41 grams of zinc sulfide; however, she isolates only 5.85 g of product. What is her percent yield for this reaction?

Zinc sulfide is commonly prepared through several simple reactions: the combustion of a mixture of zinc and sulfur, reacting zinc sulfate (ZnSO4) with sodium sulfide (Na2S), or passing hydrogen sulfide (H2S) gas into an aqueous solution of any Zn2+ salt to precipitate the insoluble ZnS.

Zinc blende, a zinc sulfide mineral, is the chief ore of zinc, and the extraction of zinc from zinc blende consists mainly of two processes. First, the ore is roasted in the presence of oxygen:

2 ZnS + 3 O2 → 2 ZnO + 2 SO2

The next step is heating the zinc oxide in the presence of carbon, which yields zinc vapour and carbon monoxide gas. Solid zinc sulfide also reacts with hydrochloric acid, and with sulfuric acid it gives zinc sulfate, sulfur dioxide and water.

Some further balanced equations of the same type:

- Aluminum metal plus hydrogen chloride gas yields solid aluminum chloride plus hydrogen gas: 2 Al(s) + 6 HCl(g) → 2 AlCl3(s) + 3 H2(g)
- Calcium sulfite decomposes when heated to form calcium oxide and sulfur dioxide: CaSO3 → CaO + SO2
- Elemental sulfur reacts with sulfite ion to form thiosulfate: S + SO3^2- → S2O3^2-
- Lithium oxide and water make lithium hydroxide: Li2O + H2O → 2 LiOH
- Aluminum hydroxide and sulfuric acid neutralize to make water and aluminum sulfate: 2 Al(OH)3 + 3 H2SO4 → Al2(SO4)3 + 6 H2O
- The net ionic equation for a neutralization: H+ + OH- → H2O
- Copper and sulfur form copper(I) sulfide: 2 Cu + S → Cu2S
- Zinc reacts with silver nitrate to form zinc nitrate and silver, because zinc is more reactive than silver: Zn(s) + 2 AgNO3 → Zn(NO3)2 + 2 Ag

In redox reactions, the two half-reactions are coupled by the electrons transferred, and the balanced equation is the sum of the two half-equations, for example 2 Fe3+ + 2 I- → 2 Fe2+ + I2. Note that charge is conserved: this equation has a charge of +4 on both sides. Electroluminescent panels based on zinc sulfide are of more interest as signal indicators and display devices than as a source of general illumination.
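The percent-yield exercise above (4.31 g of zinc, 5.85 g of zinc sulfide isolated) can be worked through explicitly. This is a sketch using assumed standard atomic masses; small differences in the molar masses used explain the 6.41 g quoted in the text versus the roughly 6.42 g computed here:

```python
# Percent yield for Zn + S -> ZnS with 4.31 g of zinc and excess sulfur.
# Molar masses below are assumed standard values, not taken from the text.
M_Zn, M_S = 65.38, 32.06        # g/mol
M_ZnS = M_Zn + M_S              # 1:1 stoichiometry, so mol ZnS = mol Zn

theoretical = (4.31 / M_Zn) * M_ZnS   # theoretical yield of ZnS in grams
actual = 5.85                         # grams actually isolated

percent_yield = 100 * actual / theoretical
print(f"theoretical = {theoretical:.2f} g, percent yield = {percent_yield:.1f} %")
```

The same three-line pattern (grams → moles of limiting reactant → grams of product) applies to any of the balanced equations listed here.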
Zinc sulfide is also a classic electroluminescent material: applying a voltage across a thin layer of zinc sulfide powder produces just such an electroluminescent effect.
# Parallelogram
A parallelogram (from ancient Greek παραλληλό-γραμμος paralleló-grammos "bounded by two pairs of parallels") or rhomboid (diamond-like) is a convex, plane quadrilateral whose opposite sides are parallel.
Parallelograms are special trapezoids and two-dimensional parallelepipeds. The rectangle, the rhombus (diamond) and the square are special cases of the parallelogram.
## properties
A quadrilateral is a parallelogram if and only if one of the following conditions is met:
The following applies to every parallelogram:
All parallelograms that have at least one axis of symmetry are rectangles or diamonds .
## Formulas
Mathematical formulas for the parallelogram
Area:
$A = a \cdot h_a = b \cdot h_b = \left\| \overrightarrow{AB} \times \overrightarrow{AD} \right\|$
$A = a \cdot b \cdot \sin(\alpha) = a \cdot b \cdot \sin(\beta) = \frac{e \cdot f \cdot \sin(\theta)}{2}$

Via transformation into a rectangle, with the determinant:
$A = \det \begin{pmatrix} a_x & b_x \\ a_y & b_y \end{pmatrix} = a_x \cdot b_y - b_x \cdot a_y$

Perimeter:
$U = 2 \cdot a + 2 \cdot b = 2 \cdot (a + b)$

Interior angles:
$\alpha = \gamma, \quad \beta = \delta, \quad \alpha + \beta = 180^\circ$

Heights:
$h_a = b \cdot \sin(\alpha)$
$h_b = a \cdot \sin(\beta)$

Lengths of the diagonals (see the law of cosines):
$e = \sqrt{a^2 + b^2 - 2 \cdot a \cdot b \cdot \cos(\beta)} = \sqrt{a^2 + b^2 + 2 \cdot a \cdot b \cdot \cos(\alpha)}$
$f = \sqrt{a^2 + b^2 - 2 \cdot a \cdot b \cdot \cos(\alpha)} = \sqrt{a^2 + b^2 + 2 \cdot a \cdot b \cdot \cos(\beta)}$

Parallelogram equation:
$e^2 + f^2 = 2 \cdot (a^2 + b^2)$
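The formulas can be cross-checked numerically for a concrete case. This is an illustrative sketch (not part of the article) for the parallelogram spanned by the vectors a = (4, 1) and b = (1, 3):

```python
import math

ax, ay = 4.0, 1.0   # vector a spanning the parallelogram
bx, by = 1.0, 3.0   # vector b

# Area via the determinant formula: A = ax*by - bx*ay
area = ax * by - bx * ay

# Side lengths, and the diagonals e = |a + b|, f = |a - b|
a = math.hypot(ax, ay)
b = math.hypot(bx, by)
e = math.hypot(ax + bx, ay + by)
f = math.hypot(ax - bx, ay - by)

# Parallelogram equation: e^2 + f^2 = 2 * (a^2 + b^2)
print(area)                                        # 11.0
print(round(e**2 + f**2, 6), round(2 * (a**2 + b**2), 6))
```

Both printed sums agree (54 for this example), confirming the parallelogram equation for these vectors.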
## Proof of the area formula for a parallelogram
Six partial areas are subtracted from the large rectangle
Animation for calculating the area of a parallelogram. The area is equal to the product of the length of a base side $b$ and the associated height $h$.
The area $A$ of the adjacent black parallelogram can be obtained by subtracting the six small areas with colored edges from the area of the large rectangle. Because of the symmetry and the commutativity of multiplication, one can equally subtract twice the three small areas below the parallelogram from the large rectangle. Thus:

$$\begin{aligned} A &= (a_x + b_x) \cdot (a_y + b_y) - 2 \cdot \left( \frac{a_x \cdot a_y}{2} + b_x \cdot a_y + \frac{b_x \cdot b_y}{2} \right) \\ &= a_x \cdot a_y + a_x \cdot b_y + b_x \cdot a_y + b_x \cdot b_y - a_x \cdot a_y - 2 \cdot b_x \cdot a_y - b_x \cdot b_y \\ &= a_x \cdot b_y - b_x \cdot a_y \end{aligned}$$
## Construction of a parallelogram
A parallelogram in which the side lengths $a$ and $b$ as well as the height $h_a$ are given can be constructed with ruler and compass.
Parallelogram with the given side lengths $a$ and $b$ and the height $h_a$. The point $E$ can be freely chosen for the construction of the right angle. Animation, with a pause of 10 s at the end.
## Generalizations
A generalization to $n$ dimensions is the parallelotope, defined as the set $\{\alpha_1 \cdot p_1 + \alpha_2 \cdot p_2 + \dotsb + \alpha_n \cdot p_n \mid 0 \leq \alpha_i \leq 1\}$ and its parallel displacements, where the $p_i$ are $n$ linearly independent vectors. Parallelotopes are point-symmetric.
The three-dimensional parallelotope is the parallelepiped. Its side surfaces are six parallelograms that are pairwise congruent and lie in parallel planes. A parallelepiped has twelve edges, of which four at a time are parallel and of equal length, and eight corners in which these edges meet at up to three different angles.
## Use in technology
Parallelograms are often found in mechanics. A movable linkage that keeps one part exactly parallel to another, the so-called parallelogram guide, can be created with four joints. Examples:
## literature
• F. Wolff: Textbook of Geometry. Fourth improved edition, printed and published by G. Reimer, Berlin 1845 ( online copy ).
• P. Kall: Linear Algebra for Economists. Springer Fachmedien, Wiesbaden 1984, ISBN 978-3-519-02356-2 .
• Wilhelm Killing: Textbook of Analytical Geometry. Part 2, Outlook Verlagsgesellschaft, Bremen 2011, ISBN 978-3-86403-540-1 . | |
# Capacitor ESR, Dissipation Factor, Loss Tangent & Q
### Important parameters associated with capacitors include ESR (equivalent series resistance), dissipation factor, loss tangent, and Q.
ESR or the equivalent series resistance of the capacitor, its DF or dissipation factor, loss tangent and Q or quality factor are all important factors in the specification of any capacitor.
Factors like the ESR, dissipation factor, loss tangent and Q are important in many aspects of the operation of a capacitor and they can determine the types of application for which the capacitor may be used.
ESR, DF and Q are all aspects of a capacitor's behaviour that will affect its performance in areas such as RF operation. However, ESR and DF are particularly important for capacitors operating in power supplies, where a high ESR and dissipation factor will result in a large amount of power being dissipated in the capacitor.
## Capacitor ESR, equivalent series resistance
The equivalent series resistance or ESR of a capacitor has an impact on many areas where capacitors may be used. This resistance acts like any other resistor, giving rise to voltage drops and dissipating heat.
The ESR of the capacitor is responsible for the energy dissipated as heat and it is directly proportional to the DF. When analysing a circuit fully, a capacitor should be depicted as its equivalent circuit: the ideal capacitor in series with its ESR.
Capacitors with high values of ESR will dissipate power as heat. In circuits carrying only low currents this may not be a problem, but in circuits such as power supply smoothing stages, where current levels are high, the power dissipated in the ESR may cause a significant temperature rise. The circuit design must keep this rise within the operational bounds of the capacitor; if the temperature rise is too high, the capacitor may be damaged or even destroyed. For electrolytic capacitors, significant temperature rises reduce the expected lifetime even if they do not cause outright failure.
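As a rough illustration of why ESR matters in smoothing circuits, the heat generated follows the ordinary $P = I^2 R$ law. A small sketch with assumed example values (the 2 A ripple current and 50 mΩ ESR are illustrative, not figures from this article):

```python
# Power dissipated as heat in the ESR of a smoothing capacitor (P = I^2 * R).
# The current and ESR values below are illustrative assumptions.
i_rms = 2.0        # ripple current through the capacitor, amps RMS
esr = 0.05         # equivalent series resistance, ohms (50 milliohms)
power = i_rms ** 2 * esr
# → 0.2 W dissipated, enough to noticeably warm a small electrolytic
```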
It is found that when the temperature of a capacitor rises, then generally the ESR increases, although in a non-linear fashion. Increasing frequency also has a similar effect.
## Dissipation factor and loss tangent
Although the ESR figure of a capacitor is mentioned more often, dissipation factor and loss tangent are also widely used and closely associated with the capacitor ESR.
Although dissipation factor and loss tangent are effectively the same, they take slightly different views which are useful when designing different types of circuit. Normally the dissipation factor is used at lower frequencies, whereas the loss tangent is more applicable for high frequency applications.
## Dissipation factor and loss tangent definitions
The dissipation factor and loss tangent can be defined as follows:
• Dissipation factor: The dissipation factor is defined as a measure of the tendency of the dielectric material to absorb some of the energy when an AC signal is applied.
• Loss tangent: The loss tangent is defined as the tangent of the angle by which the phase difference between capacitor voltage and capacitor current falls short of the theoretical 90 degrees, this shortfall being caused by the dielectric losses within the capacitor. The value δ (Greek letter delta) is also known as the loss angle.
Thus:
$\mathrm{tan}\delta =\mathrm{DF}$
$\mathrm{tan}\delta =\frac{1}{Q}$
$\mathrm{tan}\delta =\frac{\mathrm{ESR}}{{X}_{c}}$
Where:
δ = loss angle (Greek letter delta)
DF = dissipation factor
Q = quality factor
ESR = equivalent series resistance
Xc = reactance of the capacitor in ohms.
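The three relations above can be exercised numerically. In this sketch the capacitance, frequency and dissipation factor are assumed example values, not figures from the text:

```python
import math

f = 10e3        # frequency of interest, Hz (assumed)
C = 100e-9      # capacitance, farads (assumed 100 nF)
df = 0.02       # dissipation factor tan(delta) (assumed)

x_c = 1 / (2 * math.pi * f * C)   # capacitive reactance in ohms
esr = df * x_c                    # from tan(delta) = ESR / Xc
q = 1 / df                        # from tan(delta) = 1 / Q

# x_c ≈ 159.2 ohms, so esr ≈ 3.18 ohms and q = 50
```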
## Capacitor Q
It is convenient to define the Q or Quality Factor of a capacitor. It is a fundamental expression of the energy losses in a resonant system. Essentially for a capacitor it is the ratio of the energy stored to that dissipated per cycle.
It can further be deduced that the Q can be expressed as the ratio of the capacitive reactance to the ESR at the frequency of interest:
$Q=\frac{{X}_{c}}{\mathrm{ESR}}$
As Q can be measured quite easily, and it provides repeatable measurements, it is an ideal method for quantifying the loss in low loss components.
The capacitor Q is an important parameter for circuits like filters and oscillators. In these circuits any losses will result in reduced Q for the capacitor itself and for the whole filter or oscillator resonant circuit. This can result in reduced performance.
Capacitor ESR, dissipation factor, loss tangent and Q are all important aspects of the loss within a capacitor. They are all linked, and are essentially different ways of looking at the same issue. However, because they are used in different areas of circuit design, ESR, dissipation factor, loss tangent and Q all appear in specification sheets, each for the capacitor types used in those areas.
# Cyclic group
A group with a single generator. All cyclic groups are Abelian. Every finite group of prime order is cyclic. For every positive integer $n$ there is, up to isomorphism, exactly one cyclic group of order $n$; there is also one infinite cyclic group, which is isomorphic to the additive group $\mathbf Z$ of integers. A finite cyclic group $G$ of order $n$ is isomorphic to the additive group of the ring of residues $\mathbf Z(n)$ modulo $n$ (and also to the group $\mathbf C(n)$ of (complex) $n$-th roots of unity). Every element $a$ of order $n$ can be taken as a generator of this group. Then
$$G=\{1=a^0=a^n,a,\ldots,a^{n-1}\}.$$
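The statement that every element of order $n$ generates the group can be checked directly in the additive model $\mathbf Z(n)$; a small Python sketch (not part of the original entry, with $n = 12$ as an example):

```python
def additive_order(a, n):
    """Order of a in the additive group Z/nZ (a assumed nonzero mod n)."""
    k, x = 1, a % n
    while x != 0:
        x = (x + a) % n
        k += 1
    return k

n = 12
generators = [a for a in range(1, n) if additive_order(a, n) == n]
# In Z/12Z the generators are exactly the residues coprime to 12: [1, 5, 7, 11]

# Every element of order n generates the whole group:
for a in generators:
    assert {(k * a) % n for k in range(n)} == set(range(n))
```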
Suppose a function f(u) identically satisfies an equation of the form G{f(u+v),f(u),f(v)}=0 for all u, v, and u+v in its domain. Here G(Z,X,Y) is a non-vanishing polynomial in the three variables with constant coefficients. Then one says that f admits an algebraic addition theorem. If f(u) is cos(u), then
$G(Z,X,Y)=Z^2-2XYZ+X^2+Y^2-1,$
while, if f(u) is the Weierstrass p-function with invariants g_2 and g_3, then
$G(Z,X,Y)=16(X+Y+Z)^2(X-Y)^2 - 8(X+Y+Z)\left\{4(X^3+Y^3)-g_2(X+Y)-2g_3\right\} + 4(X^2+4XY+4Y^2-g_2)^2$
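That the first polynomial really annihilates the cosine can be verified numerically; a quick sketch (not part of the original question), relying only on the identity $\cos(u+v)=\cos u\cos v-\sin u\sin v$:

```python
import math
import random

def G(Z, X, Y):
    # The algebraic addition theorem polynomial for cos(u)
    return Z**2 - 2*X*Y*Z + X**2 + Y**2 - 1

random.seed(0)
for _ in range(1000):
    u = random.uniform(-10, 10)
    v = random.uniform(-10, 10)
    # G(cos(u+v), cos(u), cos(v)) vanishes identically
    assert abs(G(math.cos(u + v), math.cos(u), math.cos(v))) < 1e-9
```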
Here is the question: Characterize those polynomials G(Z,X,Y) which express an algebraic addition theorem.
-
Just to clarify, do you want the domain and codomain of $f$ to be the complex numbers? Also, what sorts of coefficients of $G$ are we allowed? – Pace Nielsen Feb 1 '10 at 21:40
take the case that f is a meromorphic function and the coefficients of G are complex constants – Mark B Villarino Feb 1 '10 at 21:41
Thanks. I need one more clarification. Let $f$ be the zero function. Would it be correct to state that every nonzero polynomial $G$ with zero constant term expresses an algebraic addition theorem for $f$? – Pace Nielsen Feb 1 '10 at 21:49
yes, although, of course, the question is meant to deal with non trivial meromorphic functions. For example, it is obvious that G should be symmetric in X and Y and homogeneous. But, the degree of homogeneity is related to how many times f takes on a particular value, and that can be complicated...for example if it takes on a particular value n times, the degree of G in Z is n^2. – Mark B Villarino Feb 1 '10 at 22:05
It might be slightly nicer to ask for polynomials such that u+v+w=0 implies G(f(u), f(v), f(w)) = 0, since now you have symmetry in all three variables. At least, I'm reasonably certain this version is equivalent. – Qiaochu Yuan Feb 1 '10 at 22:42
The examples listed in David Speyer's answer are all of them. This is equivalent to say that all one dimensional algebraic groups are isomorphic to the additive group, the multiplicative group or an elliptic curve. A proof in the language of "algebraic addition theorems" is given in the old book of H. Hancock, Lectures on the theory of elliptic functions, Ch. XXI.
-
Here is a very basic comment no one has made yet: If $f(u)$ is a rational function of $u$, then there will be some nonzero polynomial $G$ such that $G(f(u), f(v), f(u+v))=0$. That's because $\mathbb{C}(u, v, u+v)$ has transcendence degree $2$ over $\mathbb{C}$.
The same argument applies if $f$ is a rational function of $e^u$, or if $f$ is a rational function of $\wp(u)$ and $\wp'(u)$, where $\wp$ is the Weierstrass $\wp$-function.
Can we show that every example is of one of these forms?
-
I took the liberty of \wp-ifying your answer. – Mariano Suárez-Alvarez Feb 2 '10 at 1:42
Thanks for the help! – David Speyer Feb 2 '10 at 2:01
G=0 will be a rational surface if the group is the additive or multiplicative group, whereas G=0 will be covered by the abelian surface $E \times E$ if the group is the elliptic curve $E$. The surface will also have the symmetry as in Qiaochu Yuan's comment. Beyond that it's not clear what can be said and maybe there is no simple characterization. This is a very old topic so, if there was a simple answer, it would likely be known. Any specific reason for your interest? – Felipe Voloch Feb 2 '10 at 18:19
If X' means the derivative with respect to u and Y' that wrsp y, etc., then one condition is: Elimination of Z between G=0 and $X'\frac{\partial G}{\partial Y}=Y'\frac{\partial G}{\partial X}$ leads to only a single equation between X and X' for all values of Y and Y' (see Forsyth, page 357) – Mark B Villarino Feb 3 '10 at 22:33
# Graph tutorial¶
This guide will help you start using the library.
## Creating a graph¶
Let us start by creating a graph, which is a collection of vertices (aka nodes) and edges. We will use the default graph which uses integers to represent vertices and edges.
>>> import jgrapht
>>> g = jgrapht.create_graph(directed=True, weighted=True, allowing_self_loops=False, allowing_multiple_edges=False)
This is the most general call. Sensible defaults are also provided, thus someone can create a graph simply by calling jgrapht.create_graph().
## Adding vertices¶
Vertices are added by calling method Graph.add_vertex(). The user can provide explicitly the vertex identifier as a parameter:
>>> g.add_vertex(0)
It is also possible to let the graph create automatically one:
>>> g.add_vertex()
The newly created vertex identifier is returned by the call, in order to be used when adding edges. Multiple vertices can also be added using any iterable.
>>> g.add_vertices_from([2, 3])
## Vertex set¶
Now the graph contains 4 vertices. You can find how many vertices the graph contains using,
>>> len(g.vertices)
The property Graph.vertices returns the set of vertices, which is also helpful in order to iterate over them.
>>> for v in g.vertices:
...     print('Vertex {}'.format(v))
## Adding edges¶
Edges are pairs of vertices, either ordered or unordered, depending on whether the graph is directed or undirected. In the default graph, edges are represented using integers. These edges are automatically created by the graph.
>>> e1 = g.add_edge(0, 1)
The call above creates a new edge from vertex 0 to vertex 1 and returns its representation. Multiple edges can be created in one go by using,
>>> g.add_edges_from([(0, 2), (1, 2)])
The method returns the newly created edges. Note also that it is possible to provide the edge representation explicitly using,
>>> g.add_edge(0, 1, edge=5)
In the example above we explicitly request to add edge 5 in the graph. If the graph already contains such an edge, the graph is not altered.
## Edge information¶
Using the edge we can retrieve the underlying information of the edge such as its source and its target. While in undirected graphs there is no source or target, we use the same naming scheme to keep a uniform interface. This is very helpful in order to implement algorithms which work both in directed and undirected graphs. Let us now read the edge source and target from the graph,
>>> print ('Edge {} has source {}'.format(e1, g.edge_source(e1)))
>>> print ('Edge {} has target {}'.format(e1, g.edge_target(e1)))
Graphs can be weighted or unweighted. In the case of unweighted graphs, method Graph.get_edge_weight() always returns 1.0 . This allows algorithms designed for weighted graphs to also work in unweighted ones. Here is how to read the weight of an edge,
>>> print ('Edge {} has weight {}'.format(e1, g.get_edge_weight(e1)))
If the graph is weighted, the edge weight can be adjusted using method Graph.set_edge_weight(). The user can also provide the weight directly when adding the edge to the graph,
>>> e2 = g.add_edge(1, 3, weight=10.0)
Care must be taken to not try to adjust the weight if the graph is unweighted. In such a case a ValueError is raised.
## Edge set¶
Edges can be iterated using the set returned by Graph.edges,
>>> for e in g.edges:
...     print('Edge {} has source {}'.format(e, g.edge_source(e)))
...     print('Edge {} has target {}'.format(e, g.edge_target(e)))
The same effect can be performed using the helper method Graph.edge_tuple() which returns a tuple containing the source, the target, and the weight of the edge. If the graph is unweighted, the weight returned is always 1.0.
>>> for e in g.edges:
...     print('Edge {}'.format(g.edge_tuple(e)))
Finding the number of edges can be performed by executing,
>>> len(g.edges)
## Graph types¶
The type of the graph can be queried during runtime using Graph.type which returns instances of GraphType. This allows algorithms to alter their behavior based on the actual graph that they are running on. The following properties can be queried,
>>> g.type.directed
>>> g.type.undirected
>>> g.type.weighted
>>> g.type.allowing_multiple_edges
>>> g.type.allowing_self_loops
>>> g.type.modifiable
Now that we have seen a little bit how to create graphs, let us discuss what it means to contain self-loops or multiple-edges:
• self-loops are edges which start at a vertex v and end at the same vertex v,
• multiple-edges are edges which have the exact same endpoints.
Some algorithms are able to tolerate this, others do not. Thus, it is important to read the documentation of each algorithm in order to check whether such cases are tolerated. Some algorithms raise a ValueError in case they detect that they are running on a graph that contains either self-loops or multiple-edges.
If the graph is constructed to not allow self-loops and/or multiple-edges, an attempt to add such an edge will also raise a ValueError.
Finally, unmodifiable graphs are graphs which cannot be altered anymore. They can be constructed using either function as_unmodifiable() or by using some other graph factory function, such as factory for sparse graphs, which we will discuss later on.
## BFS implementation example¶
Let us implement a breadth-first search using JGraphT in order to get familiar with the library:
def bfs(graph, source):
    queue = []
    visited = set()
    queue.append(source)
    visited.add(source)
    while len(queue) > 0:
        v = queue.pop(0)
        yield v
        for e in graph.outedges_of(v):
            u = graph.opposite(e, v)
            if u not in visited:
                visited.add(u)
                queue.append(u)
This is just an example, the library contains full support for classic graph traversals. | |
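To see the traversal in action without installing the library, here is a self-contained variant: the same bfs generator run against a tiny stand-in object that mimics the two calls it uses (`outedges_of` and `opposite`). The `TinyGraph` class is an illustration only, not part of the jgrapht API.

```python
def bfs(graph, source):
    # Same breadth-first traversal as in the tutorial above.
    queue = []
    visited = set()
    queue.append(source)
    visited.add(source)
    while len(queue) > 0:
        v = queue.pop(0)
        yield v
        for e in graph.outedges_of(v):
            u = graph.opposite(e, v)
            if u not in visited:
                visited.add(u)
                queue.append(u)

class TinyGraph:
    """Minimal undirected stand-in exposing just the calls bfs needs."""
    def __init__(self, edges):
        self.edges = list(edges)          # edges stored as (u, v) tuples
    def outedges_of(self, v):
        return [e for e in self.edges if v in e]
    def opposite(self, e, v):
        return e[1] if e[0] == v else e[0]

g = TinyGraph([(0, 1), (0, 2), (1, 3), (2, 3)])
order = list(bfs(g, 0))
# → [0, 1, 2, 3]
```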
Coarsely Lipschitz retractions onto cyclic subgroups
A good way to show that a subspace is undistorted is to give a coarse Lipschitz retraction of the whole space onto that subspace. This question is about a failure of the converse.
Let $G$ be a finitely generated group, and fix a word-metric $d$ on $G$. Say that $G$ has the "cyclic coarse retracts property (CCRP)" if for every $g\in G$, either:
(1) there is a coarsely Lipschitz coarse retraction $f:G\to\langle g\rangle$, or
(2) $\langle g\rangle$ is distorted.
(In (1), I just mean a map $f$ so that there is a constant $C$ such that $d(f(g^n),g^n)\leq C$ for all $n$ and $d(f(h),f(k))\leq Cd(h,k)+C$ for all $h,k\in G$. I don't ask that $f$ is a homomorphism.)
Note that whether or not $G$ has the CCRP is independent of the choice of finite generating set. Examples of groups with CCRP include: torsion groups (trivially), hyperbolic groups, CAT(0) groups, Baumslag-Solitar groups, the Heisenberg group (I think), etc.
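A toy illustration of case (1), with assumptions mine rather than from the question: in $G=\mathbb{Z}^2$ with the standard word metric $d((a,b),(c,d))=|a-c|+|b-d|$, projection onto the cyclic subgroup generated by $(1,0)$ is a 1-Lipschitz retraction, witnessing that this subgroup is undistorted:

```python
import random

def d(g, h):
    # Word metric on Z^2 with generating set {(±1, 0), (0, ±1)}
    return abs(g[0] - h[0]) + abs(g[1] - h[1])

def f(g):
    # Projection onto the subgroup <(1, 0)> = {(n, 0) : n in Z}
    return (g[0], 0)

random.seed(1)
for _ in range(500):
    g = (random.randint(-50, 50), random.randint(-50, 50))
    h = (random.randint(-50, 50), random.randint(-50, 50))
    assert d(f(g), f(h)) <= d(g, h)    # Lipschitz (here with C = 1, additive constant 0)
    assert f((g[0], 0)) == (g[0], 0)   # f restricted to the subgroup is the identity
```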
What are some examples of finitely presented groups $G$ that do not have the CCRP? I'd like to have a list of a few examples to see how this property fails. A characterization of groups which have this property would be even better!
(Note: one can construct finitely generated examples, but the ones I am aware of are lacunary hyperbolic. Since finitely presented lacunary hyperbolic groups are hyperbolic, these non-CCRP groups can't be finitely presented.)
• Olshanskii (1997) proved that any recursively presented f.g. group has an undistorted embedding into a finitely presented group (the same statement without "undistorted" is Higman's theorem). Since clearly CCRP passes to undistorted subgroups, constructing a f.p. group without CCRP reduces to proving the existence of a recursively presented f.g. group that fails CCRP. I expect you can easily arrange your lacunary hyperbolic example to be recursively presented. – YCor Nov 29 '17 at 15:10
# De Moivre Theorem: Solve (1+z)^5 = (1-z)^5
Divide both side by (1+z)^5:
(1-z)^5 / (1+z)^5 = 1
Let Z1 = (1-z); (Z1)^5 = (1-z)^5
Z2 = (1+z); (Z2)^5 = (1+z)^5
Work numerator and denominator separately:
(a) Numerator: (Z1)^5 = 1 = (cos(theta) + I sin(theta))
since (Z1)^5 = 1 => (x + iy) is (1+i*0y), thus r = (x^2 +y^2)^1/2 = (1^2 + 0)^1/2 = 1
thus, theta = tan^-1 |y/x| = 0
(Z(1)^5)^1/5 = 1 = ((cos(theta)+2*pi*k) + (i sin(theta) +2*pi*k))^1/5
= ((cos(0)+2*pi*k) + (i sin(0) +2*pi*k))^1/5
= (cos(2*pi*k) + i sin(2*pi*k))^1/5
Z(1) = 1 = (cos(2*pi*k/5) + i sin(2*pi*k/5) = e^(i*(2*pi*k/5)
k=0, (cos(2*pi*0/5) + i sin(2*pi*0/5) = e^(i*2*pi*0/5) = e^(0) = 1
k=1, (cos(2*pi*1/5) + i sin(2*pi*1/5) = e^(i*2*pi*1/5) = e^(i*2*pi/5) = w(1)
Let set: e^(i*2*pi/5) = w, then:
k=2, (cos(2*pi*2/5) + i sin(2*pi*2/5) = e^(i*2*pi*2/5) = e^(i*4*pi/5) = w(1)^2
k=3, (cos(2*pi*3/5) + i sin(2*pi*3/5) = e^(i*(2*pi*3/5) = e^(i*6*pi/5) = w(1)^3
k=4, (cos(2*pi*4/5) + i sin(2*pi*4/5) = e^(i*(2*pi*4/5) = e^(i*8*pi/5) = w(1)^4
(b) Denominator: similar work. Z(2) = 1, w(2), w(2)^2, w(2)^3, w(2)^4
Substitute back to (Z1)^5 / (Z2)^5 = (1-z)^5 / (1+z)^5 =
= (1, w(1), w(1)^2, w(1)^3, w(1)^4) / (1, w(2), w(2)^2, w(2)^3, w(2)^4)
I don’t know what the necessary steps that need to take, at all possible would you please guide me through. Thanks.
Answers are: 0, (w-1)/(w+1), (w^2-1)/(w^2+1), (w^3-1)/(w^3+1), (w^4-1)/(w^4+1),
Last edited:
Related Calculus and Beyond Homework Help News on Phys.org
hunt_mat
Homework Helper
I think you're making this far too hard for yourself. Clearly z=0 is a solution, so expand using the binomial theorem.
vela
Staff Emeritus
Homework Helper
Or try using
$$\frac{(1-z)^5}{(1+z)^5} = \left(\frac{1-z}{1+z}\right)^5 = 1$$
Revised:
Divide both side by (1+z)^5, gives (1-z)^5 / (1+z)^5 = [(1-z) / (1+z)]^5=1
Let capital Z = [(1-z) /(1+z )], thus Z^5 = [(1-z) /(1+z)]^5 =1
Z^5 = 1 = (x + i*y) is (1+i*0y),
r = (x^2 +y^2)^1/2 = (1^2 + 0)^1/2 = 1
(theta) = tan^(-1) |y/x| =0.
z = r* e^(i*(theta))
[Z^5]^1/5 = 1^1/5 = 1*(cos(theta) + i*sin(theta))^1/5
[Z^5]^1/5 = 1 = ((cos(theta)+2*pi*k) + (i*sin(theta) +2*pi*k))^1/5
= ((cos(0)+2*pi*k) + (i*sin(0) +2*pi*k))^1/5
= (cos(2*pi*k) + i sin(2*pi*k))^1/5
= = = = = = = = = = = = = = = = = = = = = =
When:
k=0, Z(1) = (cos(2*pi*0/5) + i sin(2*pi*0/5) = e^(i*2*pi*0/5) = e^(0) = 1
k=1, Z(2) = (cos(2*pi*1/5) + i sin(2*pi*1/5) = e^(i*2*pi*1/5) = e^(i*2*pi/5) = w(1)
Let set: e^(i*2*pi/5) = w, then:
k=2, Z(3) = (cos(2*pi*2/5) + i sin(2*pi*2/5) = e^(i*2*pi*2/5) = e^(i*4*pi/5) = w^2
k=3, Z(4) = (cos(2*pi*3/5) + i sin(2*pi*3/5) = e^(i*(2*pi*3/5) = e^(i*6*pi/5) = w^3
k=4, Z(5) = (cos(2*pi*4/5) + i sin(2*pi*4/5) = e^(i*(2*pi*4/5) = e^(i*8*pi/5) = w^4
= = = = = = = = = = = = = = = = = = = = = =
Back substitute to original equation:
So, Z(1) = 1 Since: [Z] = [(1-z) /(1+z)= 1] = 1, thus = 1-1 = 0
Z(2) = w Since: [Z] = [(1-z) /(1+z)= w] = 1, thus =?????
I don’t seem to get the same answers: 0, (w-1)/(w+1), (w^2-1)/(w^2+1), (w^3-1)/(w^3+1), (w^4-1)/(w^4+1). At all possible would you please guide me through the substitution part. Thanks.
= = = = = = = = = = = = = = = = = = = = = =
I also tried binomial formula expansion way, add/subtract/factor 2*z out (z=0). Then, used the quadratic equation to solve; answers were (+/-3.077, +/- 0.7280, 0), but not sure this was what professor was looking for.
vela
Staff Emeritus
Homework Helper
One suggestion: When you're trying to solve Z5=1, do it like this:
$$Z^5 = 1 = e^{i2\pi k}$$
where k is an integer, so
$$Z = (e^{i2\pi k})^{1/5} = e^{i(2\pi/5)k} = w^k$$
where $w=e^{i2\pi/5}$. There's no need to break it into real and imaginary parts to solve for Z.
So now you have
$$\frac{1-z}{1+z} = w^k$$
Just solve for z using regular algebra.
Thank you very much Vela. However, there is one small step left, would you please guide me. Thanks.
(1-z)/(1+z) = 1 => (1-z)= (1+z) => 1-1= 0 <=> 2z=z+z, thus z=0 true.
Similiar technique:
(1-z)/(1+z)=w => (1-z)= w(1+z) => 0=w+zw-1+z => 0= (w-1)+z(w+1)
Solve for z =-(w-1)/(w+1) => z= (1-w)/(w+1)
However, z= (1-w)/(w+1) is NOT the same as (w-1)/(w+1)
...
...
z= (1-w^4)/(w^4+1) is NOT the same as (w^4-1)/(w^4+1)
Best,
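A numerical check (not part of the original thread) may clear up the sign worry: if $z$ solves $(1+z)^5=(1-z)^5$ then so does $-z$, since negating $z$ just swaps the two sides. So the derived roots $(1-w^k)/(1+w^k)$ and the listed answers $(w^k-1)/(w^k+1)$ agree as sets. A sketch with Python's cmath:

```python
import cmath

w = cmath.exp(2j * cmath.pi / 5)   # primitive 5th root of unity
for k in range(5):
    z = (1 - w**k) / (1 + w**k)            # solution derived in the thread
    assert abs((1 + z)**5 - (1 - z)**5) < 1e-9
    z2 = (w**k - 1) / (w**k + 1)           # the listed form is -z ...
    assert abs((1 + z2)**5 - (1 - z2)**5) < 1e-9   # ... and is also a root
```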
vela
Staff Emeritus
Jessica Fintzen
"Filtrations of $$p$$-adic groups"
Date: Mon, September 4, 2017 Time: 17:15 Place: Lecture Hall, Research II
Abstract: Filtrations of $$p$$-adic groups play an important role in the representation theory of $$p$$-adic groups. We will introduce $$p$$-adic numbers, $$p$$-adic groups and filtrations thereof defined by Moy and Prasad, and indicate some of their remarkable properties. We will then briefly survey the existing constructions of (supercuspidal) representations of $$p$$-adic groups and conclude with recent developments.
The colloquium is preceded by tea from 16:45 in the Resnikoff Mathematics Common Room, Research I, 127.
# Nobel Prize in Physics 2019 on exoplanets and Hot Jupiters
51 Pegasi b, discovered by Didier Queloz and Michel Mayor, is classified as a hot Jupiter and not considered to be a star. I had read a bit about the minimum mass necessary for being a brown dwarf, some 13 Jovian masses, and also the question "Can Jupiter be ignited?", which has an answer stating that clear boundaries have not been drawn between gas giants and brown dwarfs. My question is: how was it concluded that a hot Jupiter is different from star types like brown dwarfs, in the sense that they could assert it wasn't a star they were observing (keeping in mind the possibility of more massive ones like WASP-18b, around 10 Jovian masses)?
Specifically how does one eliminate the possibility that it could have been a star at the end of its life-cycle for we could have considered it to be a smaller star of a binary system (considering the case of distance to be pretty close to the central star) ?
Apologies if the question sounds too naive.
## 1 Answer
Your question is not naive at all, don't worry.
51 Peg b has been found via the radial velocity method. Upon detection, this method automatically gives a minimum mass of the companion, which is $$M_{\rm min} = M \; \sin(i)$$, where $$M$$ is the unknown actual planetary mass and $$i$$ is the (generally) unknown inclination of the orbit towards the observer.
It was determined that $$M_{\rm min}\approx 0.5 M_{\rm Jup}$$, and for isotropically distributed orientations of orbits on the sky, the expectation value of $$\sin(i)$$ is about unity. Extremal values of $$i$$ that would make $$\sin(i) \ll 1$$ and thus $$M \gg M_{\rm Jup}$$ are improbable, but not excluded. In their paper, Mayor & Queloz cite a $$1\%$$ chance that the planet is above $$4 M_{\rm Jup}$$ and a $$1/40{,}000$$ chance that it is above the deuterium-burning limit of $$13 M_{\rm Jup}$$.
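The order of magnitude of such chances follows from the isotropy assumption: for isotropic orientations, $\cos(i)$ is uniform on $[0,1]$, so $P(M > M_0) = P(\sin i < M_{\rm min}/M_0) = 1 - \sqrt{1 - (M_{\rm min}/M_0)^2}$. A sketch reproducing the roughly $1\%$ figure for $4\,M_{\rm Jup}$ (the paper's exact numbers may differ slightly from this simplified model):

```python
import math

def prob_mass_exceeds(m_min, m0):
    """P(true mass > m0) given measured minimum mass m_min,
    assuming isotropic inclinations (cos i uniform on [0, 1])."""
    s = m_min / m0
    return 1 - math.sqrt(1 - s * s)

p = prob_mass_exceeds(0.5, 4.0)   # chance 51 Peg b exceeds 4 Jupiter masses
# p ≈ 0.0078, close to the ~1% chance quoted above
```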
Thus it was concluded that an object of about Jupiter-mass was found.
That this object is also a planet stems from the difficulty of forming such a stellar companion (typical binary stars have about equal masses, though ratios of up to a factor of 10 are normal) and then evaporating it enough to lose not 100% of its mass, but just about 99%, which is another improbable mechanism.
Considering the last part of your question: stars don't just become 'small' at the end of their lives. Stars with significant mass loss at the end of their lives lose this mass via strong stellar winds, which a) leave very visible planetary nebulae (absent in 51 Peg) and b) would leave such a small blob of mass (a tiny fraction of a hypothetical progenitor star) that the scenario is improbable, or even impossible (strong winds cease to exist below certain stellar masses/radii).
• One doubt: How do we consider the above question for a more massive planet, like WASP-18b and the like? – Maan Nov 16 at 13:57
• @Maan: Well, if something like WASP-18b would have been the first HJ to be discovered, the scientific impact and the planetary nature might have been less clear. WASP-18b is a transiting planet (and was discovered as such), so it became clear that $\sin(i)=1$, and thus there was no doubt in the mass, which was only measured post-discovery. Another hypothetical $10 M_{\rm Jup}$ planet that is non-transiting would have had a realistic chance of being above the deuterium burning limit, as $\sin(i)$ would be unknown then. – AtmosphericPrisonEscape Nov 16 at 14:35
• @Maan: The issue with binary star formation is, that the less massive the companion is, the more one runs into a fine-tuning problem of explaining those low masses via cloud fragmentation. This was maybe still a discussion point at the time, but nowadays we know from statistics that a) around 1% of all stars have hot Jupiters (waaaay too much for fragmentation scenarios) and b) there is a 'brown dwarf desert', which is a gap in number populations as function of mass that clearly separates stars and planets, see also fig. 8 in this important article: arxiv.org/abs/astro-ph/0412356 – AtmosphericPrisonEscape Nov 16 at 14:42
# nLab algebraic cobordism
## Idea
Algebraic cobordism is the bigraded generalized cohomology theory represented by the motivic Thom spectrum $MGL$. Hence it is the algebraic or motivic analogue of complex cobordism. The $(2n,n)$-graded part has a geometric description via cobordism classes, at least over fields of characteristic zero.
## Definition
Let $S$ be a scheme and $MGL_S$ the motivic Thom spectrum over $S$. Algebraic cobordism is the generalized motivic cohomology theory $MGL_S^{*,*}$ represented by $MGL_S$:
… formula here …
## Properties
Let $S = Spec(k)$ where $k$ is a field of characteristic zero. A geometric description of the $(2n,n)$-graded part of algebraic cobordism was given by Marc Levine and Fabien Morel. More precisely, Levine-Morel constructed the universal oriented cohomology theory $\Omega^* : \Sm_k \to CRing^*$. Here oriented signifies the existence of direct image or Gysin homomorphisms for proper morphisms of schemes. This implies the existence of Chern classes for vector bundles.
###### Theorem
(Levine-Morel). There is a canonical isomorphism of graded rings
$\mathbf{L}^* \stackrel{\sim}{\longrightarrow} \Omega^*(\Spec(k))$
where $\mathbf{L}^*$ denotes the Lazard ring with an appropriate grading.
###### Theorem
(Levine-Morel). Let $i : Z \hookrightarrow X$ be a closed immersion of smooth $k$-schemes and $j : U \hookrightarrow X$ the complementary open immersion. There is a canonical exact sequence of graded abelian groups
$\Omega^{*-d}(Z) \stackrel{i_*}{\to} \Omega^*(X) \stackrel{j^*}{\to} \Omega^*(U) \to 0,$
where $d = \codim(Z, X)$.
###### Theorem
(Levine-Morel). Given an embedding $k \hookrightarrow \mathbf{C}$, the canonical homomorphism of graded rings
$\Omega^*(k) \longrightarrow MU^{2*}(pt)$
is invertible.
###### Theorem
(Levine 2008). The canonical homomorphisms of graded rings
$\Omega^*(X) \longrightarrow MGL^{2*,*}(X)$
are invertible for all $X \in \Sm_k$.
## References
There are two notions of “algebraic cobordism”, not closely related, one due to Snaith 77, and one due to Levine-Morel 01.
### Snaith’s construction
The construction in Snaith 77, motivated from the Conner-Floyd isomorphism, uses a variant of his general construction (Snaith's theorem) of a periodic multiplicative cohomology theory $X(b)^*(-)$ out of a pair consisting of a homotopy commutative H-monoid $X$ and a class $b\in \pi_n(X)$:
When $X = B S^1$ (the classifying space of the circle group) and $b$ is a generator of $\pi_2(BS^1)\cong\mathbb{Z}$ then $X(b)^*(-)$ is isomorphic with 2-periodic complex K-theory.
When $X = B U$ and $b$ a generator of $\pi_2(BU)\cong\mathbb{Z}$ one obtains $MU^*[u_2,u_2^{-1}]$ where MU is the (topological) complex cobordism cohomology and $u_2$ is the periodicity element.
Then Snaith introduces a variant of such constructions with a more general ring $A$ replacing the complex numbers; and uses the Quillen’s description of algebraic K-theory of a ring $A$ in terms of the classifying space $B GL(A)$; this way he obtains an algebraic cobordism theory.
Later, Gepner-Snaith 08 returned to the question of algebraic cobordism this time using the motivic version of algebraic cobordism of Voevodsky, namely the motivic spectrum $M GL$ representing universal oriented motivic cohomology theory (which is different from Morel-Voevodsky algebraic cobordism), and to the motivic version of Conner-Floyd isomorphism for which they give a comparably short proof.
### Morel-Levine’s construction
More chat about the relation to motivic homotopy theory:
• Interdependence between A^1-homotopy theory and algebraic cobordism, MO/36659.
A simpler construction was given in
• M. Levine, R. Pandharipande, Algebraic cobordism revisited (math.AG/0605196)
A Borel-Moore homology version of $MGL^{*,*}$ is considered in
• Marc Levine, Oriented cohomology, Borel-Moore homology and algebraic cobordism, arXiv.
The comparison with $MGL^{2*,*}$ is in
• Marc Levine, Comparison of cobordism theories, Journal of Algebra, 322(9), 3291-3317, 2009, arXiv.
The construction was extended to derived schemes in the paper
The close connection of algebraic cobordism with K-theory is discussed in
• José Luis González, Kalle Karu. Universality of K-theory. 2013. arXiv:1301.3815.
An algebraic analogue of h-cobordism:
• Aravind Asok, Fabien Morel, Smooth varieties up to $\mathbb{A}^1$-homotopy and algebraic h-cobordisms, arXiv:0810.0324
Last revised on February 28, 2021 at 09:00:39.
# Preface
This e-book was originally written for Stat 462 (Quality Control), taught in the Statistics Department at Brigham Young University. It is free to read online and is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (http://creativecommons.org/licenses/by-nc-sa/4.0/). One of the objectives of Stat 462 is to prepare students to pass the ASQ Certified Quality Process Analyst Exam. The Certified Quality Process Analyst Handbook (Christensen, Betz, and Stein 2013) will prepare students for the exam, which is given by the American Society for Quality through Prometric. That handbook shows the mechanics of using the published tables to create sampling plans and demonstrates how to use tables and hand calculations to create the limits for Shewhart-style control charts and process capability indices. It shows how these and other quality tools and statistical methods are elements of a system for improving and controlling quality that is used in industry today. For readers unfamiliar with how statistical and basic quality tools fit into a quality management system, it is recommended that this book be supplemented by Christensen, Betz, and Stein (2013) or Ishikawa (1982).
In the modern world, sampling plans and the statistical calculations used in statistical quality control are done with the help of computers. Getting hands-on experience in creating acceptance sampling plans and control charts necessarily involves the use of software. In industry, commercial software such as Minitab™, SAS, and StatGraphics™ is often used. In this book we focus on several R packages that can duplicate, and in some cases exceed, the functionality of these commercial programs. The R packages illustrated in this book are `AcceptanceSampling` (Kiermeier 2019), `AQLSchemes` (J. Lawson 2019), `daewr` (Lawson 2016), `DoE.base` (Groemping 2019a), `FrF2` (Groemping 2019b), `qcc` (Scrucca 2017), `qualityTools` (Roth 2016), `spc` (Knoth 2019), and `spcadjust` (Gandy and Kvaloy 2015). R is open-source software and runs on Windows, Mac, and Linux operating systems. In addition to demonstrating how to use R for acceptance sampling and control charts, this book focuses on how the use of these specific tools can lead to quality improvements both within a company and within its supplier companies. The prerequisites for this e-book are an introductory statistics course (Stat 121 or Stat 201 at BYU), two semesters of probability (Stat 240 and 340 at BYU, at a level similar to that presented in Carlton and Devore (2017)), and a course on the introduction to R programming (Stat 123 at BYU).
For readers wanting a review of basic statistics and probability, the book Introduction to Probability and Statistics Using R by Kerns (2011) is available free online at https://archive.org/details/IPSUR. A review of Chapters 3 (Data Description), 4 (Probability), 5 (Discrete Distributions), 6 (Continuous Distributions), 8 (Sampling Distributions), 9 (Estimation), 10 (Hypothesis Testing), and 11 (Simple Linear Regression) will provide adequate preparation for this workbook. Additionally, Kerns (2011) illustrates the use of R for probability and statistical calculations.
For students with no experience with R, an article giving a basic introduction to R can be found at https://www.red-gate.com/simple-talk/dotnet/software-tools/r-basics/. Chapter 2 of Introduction to Probability and Statistics using R by (Kerns 2011) is also an introduction to R.
R can be downloaded from the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org. The RStudio Integrated Development Environment (IDE) provides a command interface and GUI. A basic tutorial on RStudio is available at http://web.cs.ucla.edu/~gulzar/rstudio/basic-tutorial.html. RStudio can be downloaded from https://www.rstudio.com/products/rstudio/download/. Instructions for installing R and RStudio on Windows, Mac, and Linux operating systems can be found at http://socserv.mcmaster.ca/jfox/Courses/R/ICPSR/R-install-instructions.html. At the time of this writing, all the R packages illustrated in this book are available from CRAN except `AQLSchemes`, which can be installed from R-Forge with the command `install.packages("AQLSchemes", repos="http://R-Forge.R-project.org")`. The latest version (3.0) of the `qcc` package is illustrated in this book; the input and output of `qcc` version 2.7 (the version on CRAN) are slightly different. The latest version (3.0) of `qcc` can be installed from GitHub (https://luca-scr.github.io/qcc/) using `devtools::install_github("luca-scr/qcc", build_vignettes = TRUE)`.
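The installation steps above can be collected into a single R session. This is a sketch: it assumes an internet connection and that the `devtools` package is already installed.

```r
# Sketch of the package installation described above; assumes an
# internet connection and that the 'devtools' package is installed.

# CRAN packages used in this book
install.packages(c("AcceptanceSampling", "daewr", "DoE.base", "FrF2",
                   "qualityTools", "spc", "spcadjust"))

# AQLSchemes is installed from R-Forge
install.packages("AQLSchemes", repos = "http://R-Forge.R-project.org")

# the latest (3.0) version of qcc is installed from GitHub
devtools::install_github("luca-scr/qcc", build_vignettes = TRUE)
```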
### Acknowledgements
Many thanks for the suggestions for improvement given by authors of the R packages illustrated in this book: Andreas Kiermeier, author of the AcceptanceSampling package; Luca Scrucca, author of the qcc package; and Ulrike Groemping, author of the DoE.base and FrF2 packages. Thanks also for suggestions from students in my classes and editing help from my wife, Dr. Francesca Lawson.
### About the author
John Lawson is a Professor in the Statistics Department at Brigham Young University, where he has been since 1986. He is an ASQ-CQE, and he has a master's degree in Statistics from Rutgers University and a PhD in Applied Statistics from the Polytechnic Institute of N.Y. He worked as a statistician for Johnson & Johnson Corporation from 1971 to 1976 and at FMC Corporation Chemical Division from 1976 to 1986, where he was the Manager of Statistical Services. He is the author of Design and Analysis of Experiments with R (CRC Press) and the co-author (with John Erjavec) of Basic Experimental Strategies and Data Analysis for Science and Engineering (CRC Press). If you notice errors or have suggestions for improvement to this e-book, please contact John Lawson (lawson@stat.byu.edu).
### References
Carlton, M. A., and J. L. Devore. 2017. Probability with Applications in Engineering, Science, and Technology. 2nd ed. Switzerland: Springer.
Christensen, C., K.M. Betz, and M.S. Stein. 2013. The Certified Quality Process Analyst Handbook. 2nd ed. Milwaukee, Wisconsin: ASQ Quality Press.
Gandy, A., and J. T. Kvaloy. 2013. “Guaranteed Conditional Performance of Control Charts via Bootstrap Methods.” Scandinavian Journal of Statistics 40: 647–68.
Groemping, U. 2019a. DoE.base: Full Factorials, Orthogonal Arrays and Base Utilities for Doe. https://CRAN.R-project.org/package=DoE.base.
Groemping, U. 2019b. FrF2: Fractional Factorial Designs with 2-Level Factors. https://CRAN.R-project.org/package=FrF2.
Ishikawa, K. 1982. A Guide to Quality Control. 2nd ed. Tokyo: Asian Productivity Organization.
Kerns, G. J. 2011. Introduction to Probability and Statistics Using R. G. J. Kerns.
Kiermeier, A. 2019. AcceptanceSampling: Creation and Evaluation of Acceptance Sampling Plans. https://CRAN.R-project.org/package=AcceptanceSampling.
Knoth, S. 2019. Spc: Statistical Process Control – Calculation of Arl and Other Control Chart Performance Measures. https://CRAN.R-project.org/package=spc.
Lawson, J. 2016. Daewr: Design and Analysis of Experiments with R. https://CRAN.R-project.org/package=daewr.
Lawson, J. 2019. AQLSchemes: AQL Based Acceptance Sampling Schemes. https://CRAN.R-project.org/package=AQLSchemes.
Roth, T. 2016. qualityTools: Statistical Methods for Quality Science. https://CRAN.R-project.org/package=qualityTools.
Scrucca, L. 2017. Qcc: Quality Control Charts. https://CRAN.R-project.org/package=qcc. | |
PhD 2012 - FOM-Institute AMOLF, Amsterdam - Advisor: Dr. As these sets of ripples add together. the ripples in Huygens’ principle Summary Wave optics is a more general theory of light than ray optics.. but become narrower with each repetition. Matthews and Jeffrey Zhang) has been awarded the 2013-2014 Ronald E. Apply the equation M = di /do to the case of a flat mirror. Memorize, for convenience only, a few of the most important fundamental formulas and for the other material learn to reason from the fundamental ideas.
Pages: 526
Publisher: Springer; Softcover reprint of the original 1st ed. 1991 edition (August 16, 2013)
ISBN: 3662310945
Spatial Statistics for Remote Sensing (REMOTE SENSING AND DIGITAL IMAGE PROCESSING Volume 1)
Fluorescent Lamp Phosphors: Technology and Theory
Optical Biopsy: Volume 14: Toward Real-Time Spectroscopic Imaging and Diagnosis (Proceedings of SPIE)
Ultraviolet Spectroscopy of Proteins
Light Scattering Near Phase Transitions: 5 (Modern Problems in Condensed Matter Sciences)
The Optical Communications Reference
Free space optical technology has been investigated by many researchers in information security, encryption, and authentication. The main motivation for using optics and photonics for information security is that optical waveforms possess many complex degrees of freedom such as amplitude, phase, polarization, large bandwidth, nonlinear transformations, quantum properties of photons, and multiplexing that can be combined in many ways to make information encryption more secure and more difficult to attack Building Effective Food Safety Systems: Proceedings of the 2Nd FAO/WHO Global Forum of Food Safety Regulators. Bangkok, Thailand, 12-14 October 2004. Here's a neat applet that lets you investigate this phenomenon more and shows what a bad artist I am. Tourmaline does make pretty jewelry, but showing off this effect is not so easy Electronic Image Display: Equipment Selection and Operation (SPIE Press Monograph Vol. PM113). For all values of so, the rays diverge as they pass through the lens and do not form an image on the right. To the eye these refracted rays appear to come from the top of the image at I’. It is customary to speak about the dioptric power D of a lens, which is the reciprocal of the focal length What Colors?. Wave properties of light, superposition of waves, diffraction, interference, polarization, and coherence. Three lecture hours per week. (Fall) OPTI 6103 Evanescent Waves: From Newtonian Optics to Atomic Optics (Springer Series in Optical Sciences). Thus blue light is bent more by the atmosphere than red light. The white image of the sun is actually made up of many different wavelengths of light. These different wavelength images of the sun will be bent by different angles Practical Aspects of Ophthalmic Optics. The transfer of light energy economically and practically, however, would not gain promise until research turned to glass rods as "waveguides" in the 1950's. 
The term "fiber optics" was coined in 1956 with the invention of glass coated rods. The simplest fiber-optic cable consists of two concentric layers. The inner portion, the core, carries the light. The cladding must have a lower refractive index than the core Opto-Mechatronic Systems Handbook: Techniques and Applications (Handbook Series for Mechanical Engineering).
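As a small numerical illustration of why the cladding must have a lower refractive index than the core: rays striking the core-cladding interface beyond the critical angle $\theta_c=\arcsin(n_{\rm clad}/n_{\rm core})$ undergo total internal reflection and remain guided. The index values below are typical silica-fibre numbers, used here purely as illustrative assumptions; the numerical aperture is included as a related derived quantity.

```python
# Total internal reflection at the core-cladding interface: rays hitting
# the boundary beyond theta_c = asin(n_clad / n_core) stay in the core,
# which is why the cladding index must be lower.  The index values are
# typical silica-fibre numbers, used here as illustrative assumptions;
# the numerical aperture NA is a related derived quantity.
import math

n_core, n_clad = 1.48, 1.46

theta_c = math.degrees(math.asin(n_clad / n_core))   # critical angle
na = math.sqrt(n_core ** 2 - n_clad ** 2)            # numerical aperture
print(theta_c, na)
```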
# Light Scattering in Solids VI: Recent Results, Including High-Tc Superconductivity (Topics in Applied Physics)
This type of interference — in which rays from many infinitesimally close points combine with one another — is called diffraction. We will measure the actual intensity curve of a diffraction pattern. The textbook or the appendix to this experiment gives the derivation of the intensity curve of the diffraction pattern for a single slit: \begin{eqnarray} I &=& I_0 [(\sin\alpha)/\alpha]^2, \label{eqn_5} \end{eqnarray} \begin{eqnarray} \alpha &=& \pi a\sin\theta/\lambda, \label{eqn_6} \end{eqnarray} $$a$$ is the slit width, and $$\theta$$ is the viewing angle Atomica Physics 20: XX International Conference on Atomic Physics (AIP Conference Proceedings / Atomic, Molecular, Chemical Physics). If you need to use Allen keys or similar tools near the optics, take extreme care not to slip off and scratch the surface. Do not clean the mirrors, lenses and waveplates unless advised to do so. Cleaning the high quality surfaces without causing permanent damage requires special equipment and procedures 2nd International Symposium on Advanced Optical Manufacturing and Testing Technologies: Optical Test and Measurement Technology and Equipment (Proceedings of Spie).
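The intensity curve above can be evaluated directly; the short sketch below (with an assumed slit width and wavelength) confirms that the pattern has its maximum at $\theta=0$ and its first minimum where $\sin\theta=\lambda/a$, i.e. where $\alpha=\pi$.

```python
# Single-slit diffraction intensity from the formulas above:
# I = I0 * (sin(alpha)/alpha)**2,  alpha = pi * a * sin(theta) / lam.
# Slit width and wavelength below are illustrative assumptions.
import math

def intensity(theta, a, lam, I0=1.0):
    alpha = math.pi * a * math.sin(theta) / lam
    if alpha == 0.0:
        return I0                       # central maximum: sin(a)/a -> 1
    return I0 * (math.sin(alpha) / alpha) ** 2

a = 100e-6     # slit width, 100 micrometres
lam = 500e-9   # wavelength, 500 nm

theta_min = math.asin(lam / a)          # first minimum: alpha = pi
print(intensity(0.0, a, lam), intensity(theta_min, a, lam))
```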
Homogenization and Porous Media (Interdisciplinary Applied Mathematics)
Is the cosine-squared curve clearly a better fit than the cosine curve? You may print out this Excel page for your records. As a final polarization measurement, experiment with three polarizers. Record the data requested below in the “Data” section Optical Inspection of Microsystems (Optical Science and Engineering). Center for Cosmology and Particle Physics Published in 1997, 53 pages Published in 2009, 216 pages Stephen Hawking, Roger Penrose In telecommunications, fibre optic technology has virtually replaced copper wire in long-distance telephone lines, and it is used to link computers within local area networks. Fibre optics is also the basis of the fibrescopes used in examining internal parts of the body ( endoscopy ) or inspecting the interiors of manufactured structural products. Learn how optical fibres are created out of a piece of silica glass in this video Luminescence and Nonlinear Optics (The Lebedev Physics Institute Series). A problem that requires calculus. showing that it changes. Problems Key √ A computerized answer check is available online. Now change the location of the object a little bit and redetermine the magnification. 113 11 A diverging mirror of focal length f is fixed. 12 A mechanical linkage is a device that changes one type of motion into another. and explain why we should expect there to be a maximum velocity. from shortest to longest Investigations on Multi-Sensor Image System and its Surveillance Applications. It's so easy to slip a decimal or enter an exponent incorrectly. In slide-rule days, students made fewer blunders, for they had to supply the decimal point, or power of ten, themselves Semiconductor Optics (Advanced Texts in Physics). Much of this work has been for large corporations. 
Overall experience includes communications research, thin-film device development, primary research on delta-gravitational field "modification" physics and anti-aging pharmaceuticals production, analysis and evaluation Developments in X-Ray Tomography IX (Proceedings of SPIE).
Optical Properties of Semiconductor Quantum Dots (Springer Tracts in Modern Physics)
Progress in Nano-Electro-Optics VII: Chemical, Biological, and Nanophotonic Technologies for Nano-Optical Devices and Systems (Springer Series in Optical Sciences) (Volume 155)
The Feynman Lectures on Physics Volumes 7-8
Advances in Solar Energy: An Annual Review of Research and Development Volume 2
Photonics in Switching, Volume I: Background and Components
Introduction to Biophotonics (03) by Prasad, Paras N [Hardcover (2003)]
Semiconductor and Metal Nanocrystals: Synthesis and Electronic and Optical Properties
Mathematical Theory of Diffraction (Progress in Mathematical Physics)
The Blue Laser Diode: The Complete Story
Dissemination of Information in Optical Networks:: From Technology to Algorithms (Texts in Theoretical Computer Science. An EATCS Series)
Water: Where Water Comes From (Better Farming Series)
Relativistic Electron Mirrors: from High Intensity Laser-Nanofoil Interactions (Springer Theses)
Advances in Imaging and Electron Physics, Volume 98
Automatic Object Recognition II: 22-24 April 1992 Orlando, Florida (Proceedings of Spie)
Imaging, Manipulation, and Analysis of Biomolecules, Cells, and Tissues XI: 2-5 February 2013, San Francisco, California, United States (Proceedings of SPIE)
Sensors Handbook
Progress in Optics, Volume 59
Selected Papers on Silica Integrated Optical Circuits (SPIE Milestone Series Vol. MS125)
Ultrafast Nonlinear Optics (Scottish Graduate Series)
For all of these reasons, and in addition to the rigor of the courses, the Majors are extremely well prepared for a wide range of activities—not just in scientific research, but also in professional and engineering pursuits—or any area where abstract thinking and quantitative modeling of real systems is necessary and rewarded Photonic Networks: Advances in Optical Communications. Astigmatism is corrected with a cylindrical surface lens that curves more strongly in one direction than in another, compensating for the non-uniformity of the cornea. [81] The optical power of corrective lenses is measured in diopters, a value equal to the reciprocal of the focal length measured in metres; with a positive focal length corresponding to a converging lens and a negative focal length corresponding to a diverging lens Laser Control and Manipulation of Molecules #821. In geometrical optics, light is considered to travel in straight lines, while in physical optics, light is considered as an electromagnetic wave. Geometrical optics can be viewed as an approximation of physical optics that applies when the wavelength of the light used is much smaller than the size of the optical elements in the system being modelled. Geometrical optics, or ray optics, describes the propagation of light in terms of "rays" which travel in straight lines, and whose paths are governed by the laws of reflection and refraction at interfaces between different media. [39] These laws were discovered empirically as far back as 984 AD [11] and have been used in the design of optical components and instruments from then until the present day Nonlinear Optical Properties of Molecules. 
Direct profile and real-time dosimetry of a X-ray focused beam reveals a spatial resolution of a few microns together with a sensitivity better than 10$^3$ X-photons$/$s$/\mu$m$^2$ at energies of 8-10 keV Ultrafast Phenomena X: Proceedings of the 10th International Conference, Del Coronado, CA, May 28 - June 1, 1996 (Springer Series in Chemical Physics). The photoelectric effect is an important historical example of the failure of classical physics. In that case, electromagnetic theory said that light was an electromagnetic wave. That was true enough but it does not account for the quantum nature of light and the characteristics that allow a photon to act like a discrete bundle of electromagnetic energy with properties like a particle Thin Film Growth: Physics, Materials Science and Applications (Woodhead Publishing Series in Electronic and Optical Materials). Hatfield, G., 1979, “Force (God) in Descartes' Physics”, Studies in History and Philosophy of Science, 10: 113–140. Hattab, H., 2007, “Concurrence or Divergence? Reconciling Descartes's Metaphysics with His Physics”, Journal of the History of Philosophy, 45: 49–78. –––, 2009, Descartes on Forms and Mechanism, Cambridge: Cambridge University Press Photonic Networks: Advances in Optical Communications. Simple instructions for 126 film cartridges. The laboratory is located within the Synchrotron Light Facility Elettra, in Trieste Nanocomposites: Ionic Conducting Materials and Structural Spectroscopies (Electronic Materials: Science & Technology). Farther still, the ways will be one and one-half wavelengths (or 360° + 180° = 540°) out of phase with each other and interfere destructively. Thus, the interference pattern contains a series of bright and dark fringes on the screen. Let $$\theta$$ be the viewing angle from the perpendicular, as shown in the figure below: Study the construction in Figure 3 Handbook of Transparent Conductors. Passive and active photonic components such as tunable lasers and filters. 
Signal processing, photonic switching, and point-to-point links / connections. Three lecture hours per week. (Spring, Even years) OPTI 6222 Developments in X-Ray Tomography IX (Proceedings of SPIE). | |
# Weak coherent pulses for single-photon quantum memories
## Abstract
Attenuated laser pulses are often employed in place of single photons in order to test the efficiency of the elements of a quantum network. In this work we analyse theoretically the dynamics of storage of an attenuated light pulse (where the pulse intensity is at the single-photon level) propagating along a transmission line and impinging on the mirror of a high-finesse cavity. Storage is realised by the controlled transfer of the photonic excitations into a metastable state of an atom confined inside the cavity, and occurs via a Raman transition with a suitably tailored laser pulse which drives the atom and minimizes reflection at the cavity mirror. We determine the storage efficiency of the weak coherent pulse which is reached by protocols optimized for single-photon storage. We determine the figures of merit and identify the conditions under which the storage dynamics of an arbitrary pulse approaches that of a single photon. Our formalism can be extended to arbitrary types of input pulses and to quantum memories composed of spin ensembles, and serves as a basis for identifying the optimal protocols for storage and readout.
## I Introduction
Single photons are important elements for secure communication using light (Afzelius2015; Sangouard2012). Integrating single photons in a quantum network (Ritter2012), on the other hand, requires stable and efficient single-photon sources, reliable storage units such as single-photon quantum memories, quantum information processors, and ideally dissipationless transmission channels (Briegel1998; Kurizki2015). Since these devices usually work optimally in different frequency regimes, the realization of efficient quantum networks implies the ability to interface hybrid elements (Kurizki2015; Uphoff2016). Proof-of-principle experiments for quantum memories have therefore often made use of pulses generated by stable lasers at the required frequency (Choi2008; Kimble2008; Usmani2010; Lan2009; Gisin2007; Koerber2018). The laser pulses are typically attenuated to the regime where the probability that they contain a single photon is very small, while the probability that two or more photons are detected is practically negligible. Even though photo-detection after a beam splitter reveals the granular properties of the light, the coherence properties of weak laser pulses are quite different from those of a single photon (Mack2003). In particular, they are well described by coherent states of the electromagnetic field, whose correlation functions can be reproduced by a classical coherent field (Glauber1963; Glauber1963a; Schleich2001). In this perspective it is therefore legitimate to ask which specific information about the efficiency of a single-photon quantum network one can extract by means of weak laser pulses.
Theoretically, similar questions have been analysed in Refs. (Fleischhauer2000; Gorshkov2007; Gorshkov2007a; Gorshkov2007b; Dilley2012; Kalachev2007; Kalachev2008; Kalachev2010). In (Fleischhauer2000; Gorshkov2007; Gorshkov2007a; Gorshkov2007b; Dilley2012), in particular, the authors consider a quantum memory composed of an atomic ensemble, where the number of atoms is much larger than the mean number of photons of the incident pulse. In this limit the equations describing the dynamics can be brought to the form of the equations describing the interaction of a single photon with the medium, and one can simply extract from the study of one case the efficiency of the other. This scenario changes dramatically if the memory is composed of a single atom (Cirac1997; Reiserer2015; Duan2010; Kurz2014). In this case the dynamics is quite different depending on whether the atom interacts with a single photon or with (the superposition of) several photonic excitations.
In this work we theoretically analyse the dynamics of the storage of a weak coherent pulse into the excitation of a single atom confined within an optical resonator, as in the setups of (Specht2011; Khudaverdyan2008; Kimble1998; Keller2004). The laser pulse propagates along a transmission line and impinges on the mirror of the resonator, as illustrated in Fig. 1(a). A control laser drives the atom in order to optimize the transfer of the propagating pulse into the atomic excitation $|r\rangle$, as shown in Fig. 1(b). We determine the efficiency of storage under the assumption that the control laser optimizes the storage of a single photon which possesses the same time-dependent amplitude as the weak coherent pulse. Our goal is to identify the regime and the conditions for which the dynamics of storage of the weak coherent pulse reproduces that of a single photon. This study draws on the protocols based on adiabatic transfer identified in Refs. (Fleischhauer2000; Gorshkov2007a; Dilley2012; Giannelli2018). The theoretical formalism for the interface between the weak coherent pulse propagating along the transmission line and the single atom inside the resonator is quite general and can be extended to describe the storage fidelity of an arbitrary quantum state of light into excitations of the memory.
This manuscript is organized as follows. In Sec. II we introduce the theoretical model. In Sec. III we report our results: in Sec. III.1 we analyse the storage fidelity of a weak coherent pulse, and in Sec. III.2 the storage fidelity of an arbitrary incident pulse at the single-photon level. We then compare them with the storage fidelity of a single photon. The conclusions are drawn in Sec. IV. The appendices provide details of the calculations in Secs. II and III.
## II Basic model
Figure 1 reports the basic elements of the dynamics. A weak coherent pulse propagates along the transmission line and impinges on the mirror of an optical high-finesse cavity. Here it is transmitted into a cavity mode at frequency $\omega_c$, which, in turn, interacts with a single atom confined within the resonator. The atom is driven by a laser whose temporal shape is tailored in order to maximize the transfer of a single photonic excitation, with the same amplitude as the weak coherent pulse, into the atomic excitation $|r\rangle$.
In the following we provide the details of the theoretical model and we introduce the physical quantities which are important for the discussion of the rest of this paper.
### II.1 Master equation
We describe the dynamics of storage by determining the density matrix $\hat{\rho}$ for the cavity mode, the atom, and the modes of the transmission line. Its evolution is governed by the master equation
$$\partial_t\hat{\rho}=-i\,[\hat{H}_{\rm tot}(t),\hat{\rho}]+\mathcal{L}_{\rm dis}\hat{\rho},\qquad(1)$$
where the Hamiltonian $\hat{H}_{\rm tot}(t)$ determines the coherent evolution and the superoperator $\mathcal{L}_{\rm dis}$ the incoherent dynamics. Below we define them.
The Hamiltonian $\hat{H}_{\rm tot}(t)$ describes the unitary dynamics of the system composed of the modes of the transmission line, the cavity mode, and the atom's internal degrees of freedom. We decompose it into the sum of two terms
$$\hat{H}_{\rm tot}(t)=\hat{H}_{\rm fields}+\hat{H}_I(t).\qquad(2)$$
The term $\hat{H}_{\rm fields}$ describes the coherent dynamics of the electromagnetic fields in the absence of the atom. In the reference frame rotating at the cavity-mode frequency $\omega_c$ it reads
$$\hat{H}_{\rm fields}=\sum_k(\omega_k-\omega_c)\,\hat{b}_k^\dagger\hat{b}_k+\sum_k\lambda_k\left(\hat{a}^\dagger\hat{b}_k+\hat{b}_k^\dagger\hat{a}\right).\qquad(3)$$
Here, $\omega_k$ are the frequencies of the electromagnetic field's modes of the transmission line, and the operators $\hat{b}_k$ and $\hat{b}_k^\dagger$ annihilate and create, respectively, a photon in mode $k$. The modes are formally obtained by quantizing the electromagnetic field in a resonator of length $L$, where $L$ is taken to be much larger than any other length in the system. They are standing-wave modes with a node at the cavity mirror and have the same polarization as the cavity mode (see Appendix A). The latter is described by a harmonic oscillator with annihilation and creation operators $\hat{a}$ and $\hat{a}^\dagger$ and resonance frequency $\omega_c$. In the rotating-wave approximation the interaction is of beam-splitter type and conserves the total number of excitations. The couplings $\lambda_k$ are related to the radiative damping rate $\kappa$ of the cavity mode, with $\lambda$ the coupling strength at the cavity-mode resonance frequency (Carmichael). Furthermore, using the Markov approximation, the couplings are taken to be independent of $k$.
The atom-photon interactions are treated in the dipole and rotating-wave approximations. The fields interact with two dipolar transitions sharing the common excited state $|e\rangle$, forming a $\Lambda$ level scheme, see Fig. 1(b). The transition $|g\rangle\leftrightarrow|e\rangle$ couples with the cavity mode with strength (vacuum Rabi frequency) $g$. The transition $|r\rangle\leftrightarrow|e\rangle$ is driven by a laser with the time-dependent Rabi frequency $\Omega(t)$. The corresponding Hamiltonian reads
$$\hat{H}_I=\delta\,|r\rangle\langle r|-\Delta\,|e\rangle\langle e|+\left[g\,|e\rangle\langle g|\,\hat{a}+\Omega(t)\,|e\rangle\langle r|+\mathrm{H.c.}\right],\qquad(4)$$
where $\Delta$ is the detuning between the cavity frequency and the frequency of the $|g\rangle\leftrightarrow|e\rangle$ transition, while $\delta$ is the two-photon detuning, evaluated using the central frequency of the driving field and the frequency difference (Bohr frequency) between the states $|r\rangle$ and $|g\rangle$. Unless otherwise stated, in the following we assume that the conditions of one- and two-photon resonance are fulfilled.
The superoperator $\mathcal{L}_{\rm dis}$ describes the incoherent dynamics due to spontaneous decay of the atomic excited state at rate $\gamma$, and due to the finite transmittivity of the second cavity mirror as well as scattering and/or finite absorption of radiation at the mirror surfaces at rate $\kappa_{\rm loss}$. We model each of these phenomena by Born-Markov processes described by the superoperators $\mathcal{L}_\gamma$ and $\mathcal{L}_{\kappa_{\rm loss}}$, respectively, such that $\mathcal{L}_{\rm dis}=\mathcal{L}_\gamma+\mathcal{L}_{\kappa_{\rm loss}}$ and
$$\mathcal{L}_\gamma\hat{\rho}=\gamma\left(2\,|\xi_e\rangle\langle e|\,\hat{\rho}\,|e\rangle\langle\xi_e|-|e\rangle\langle e|\,\hat{\rho}-\hat{\rho}\,|e\rangle\langle e|\right),\qquad(5a)$$
$$\mathcal{L}_{\kappa_{\rm loss}}\hat{\rho}=\kappa_{\rm loss}\left(2\,\hat{a}\hat{\rho}\hat{a}^\dagger-\hat{a}^\dagger\hat{a}\hat{\rho}-\hat{\rho}\hat{a}^\dagger\hat{a}\right).\qquad(5b)$$
Here, $|\xi_e\rangle$ is an atomic state into which the excited state decays, which is assumed to be different from $|g\rangle$ and $|r\rangle$.
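As a consistency check, the dissipator of Eq. (5a) can be integrated numerically for a toy four-level system. The sketch below (pure Python; the basis ordering and the value of $\gamma$ are assumptions of the sketch) verifies that the map preserves the trace of $\hat{\rho}$ and that, with the convention of Eq. (5a), the excited-state population decays as $e^{-2\gamma t}$.

```python
# Toy integration of the dissipator in Eq. (5a) for a four-level atom.
# Basis ordering |g>, |r>, |e>, |xi_e> is an assumption of this sketch.
# With the convention gamma*(2 A rho A^dag - A^dag A rho - rho A^dag A),
# the excited-state population decays as exp(-2*gamma*t) and the trace
# of rho is preserved.

N = 4
G, R, E, XI = 0, 1, 2, 3
gamma = 0.3

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def dag(A):
    return [[A[j][i].conjugate() for j in range(N)] for i in range(N)]

def add(A, B, s=1.0):
    return [[A[i][j] + s * B[i][j] for j in range(N)] for i in range(N)]

def proj(i, j):
    # matrix |i><j|
    return [[1.0 + 0j if (r == i and c == j) else 0j for c in range(N)]
            for r in range(N)]

P = proj(XI, E)      # jump operator |xi_e><e|
Pee = proj(E, E)     # projector |e><e|

def lindblad_gamma(rho):
    out = [[2 * gamma * x for x in row] for row in mul(mul(P, rho), dag(P))]
    out = add(out, mul(Pee, rho), -gamma)
    return add(out, mul(rho, Pee), -gamma)

rho = proj(E, E)     # start with the atom in the excited state
dt, steps = 1e-3, 2000
for _ in range(steps):
    rho = add(rho, lindblad_gamma(rho), dt)   # Euler step of Eq. (1)

t = dt * steps
trace = sum(rho[i][i].real for i in range(N))
print(trace, rho[E][E].real, rho[XI][XI].real)
```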
### II.2 Initial state
The total state of the system at the initial time $t_1$ is given by a weak coherent pulse in the transmission line, the empty optical cavity, and the atom in state $|g\rangle$:
$$|\psi_{t_1}\rangle=|g\rangle\otimes|0\rangle_c\otimes|\psi_{\rm coh}\rangle,\qquad(6)$$
where $|0\rangle_c$ is the Fock state of the resonator with zero photons.
Below we specify in detail the state of the field. The incident light pulse is characterized by the displacement operator $\hat{D}(\{\alpha_k\})$, such that its state at the interface with the optical resonator reads
$$|\psi_{\rm coh}\rangle=\hat{D}(\{\alpha_k\})\,|{\rm vac}\rangle\qquad(7)$$
and $|{\rm vac}\rangle$ is the vacuum state of the external electromagnetic field. The operator $\hat{D}(\{\alpha_k\})$ takes the form
$$\hat{D}(\{\alpha_k\})=\bigotimes_k\exp\left(\alpha_k\hat{b}_k^\dagger-\alpha_k^*\hat{b}_k\right),\qquad(8)$$
where $\alpha_k$ is a complex scalar and the index $k$ runs over all modes of the electromagnetic field with the same polarization. It thus generates a multi-mode coherent state, whose mean photon number is
$$n=\langle\psi_{\rm coh}|\sum_k\hat{b}_k^\dagger\hat{b}_k|\psi_{\rm coh}\rangle=\sum_k|\alpha_k|^2.\qquad(9)$$
In the following we assume that $n\ll1$, which is fulfilled when $|\alpha_k|\ll1$ for all $k$. We will denote this a weak coherent pulse. This state approximates a single-photon state since at first order in the amplitudes $\alpha_k$ it can be approximated by the expression
$$|\psi_{\rm coh}\rangle\approx(1-n/2)\,|{\rm vac}\rangle+\sum_k\alpha_k\hat{b}_k^\dagger|{\rm vac}\rangle.\qquad(10)$$
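For a single effective mode, the photon-number statistics of the coherent state in Eq. (7) are Poissonian with mean $n$. The short sketch below (with an illustrative value of $n$, an assumption of the sketch) shows that the two-photon component is suppressed by a factor $n/2$ relative to the one-photon component, consistent with the first-order expansion in Eq. (10).

```python
# Photon-number statistics of a coherent state with mean photon number n:
# P(m) = exp(-n) * n**m / m!.  For n << 1 the two-photon component is
# suppressed by a factor n/2 relative to the one-photon component, which
# is the content of the first-order expansion in Eq. (10).
# The value of n below is an illustrative assumption.
import math

n = 0.05

def p(m):
    return math.exp(-n) * n ** m / math.factorial(m)

p0, p1, p2 = p(0), p(1), p(2)
print(p0, p1, p2)
print(p2 / p1)   # equals n/2 = 0.025
```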
The coefficients $\alpha_k$ are related to the pulse envelope $\mathcal{E}_{\rm in}(t)$ at the position of the mirror interfacing the cavity with the transmission line via the relation
$$\alpha_k=\sqrt{\frac{c}{2L}}\int_{-\infty}^{\infty}{\rm d}t\;e^{i(kc-\omega_c)t}\,\mathcal{E}_{\rm in}(t)\qquad(11)$$
with $c$ the speed of light and $L$ the length of the transmission line. The squared norm of $\mathcal{E}_{\rm in}$ equals the number of impinging photons in Eq. (9):
$$\int_{-\infty}^{\infty}|\mathcal{E}_{\rm in}(t)|^2\,{\rm d}t=n.\qquad(12)$$
In this work we are interested in determining the storage efficiency of a weak coherent pulse by the atom. We compare it in particular with the storage efficiency of a single photon whose amplitude is given by the same envelope $\mathcal{E}_{\rm in}(t)$, apart from a normalization factor ensuring that the integral in Eq. (12) is unity. For this specific study we choose
$$\mathcal{E}_{\rm in}(t)=\frac{\sqrt{n}}{\sqrt{T}}\,{\rm sech}\!\left(\frac{2t}{T}\right),\qquad(13)$$
where $T$ is the characteristic time determining the coherence time $T_c$ of the light pulse, defined as
$$T_c=\sqrt{\langle t^2\rangle-\langle t\rangle^2}\qquad(14)$$
with $\langle t^j\rangle=\int_{-\infty}^{\infty}t^j\,|\mathcal{E}_{\rm in}(t)|^2\,{\rm d}t/n$. The dynamics is analysed in the interval $[t_1,t_2]$, chosen such that (i) at the initial time $t_1$ there is no spatial overlap between the input light pulse and the cavity mirror and (ii) at $t_2$ the reflected component of the light pulse is sufficiently far away from the mirror so that it has no spatial overlap with the cavity mode. The choice of these parameters has been discussed in detail in Appendix A and in Ref. (Giannelli2018).
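The normalization (12) and the coherence time (14) of the sech envelope (13) can be checked numerically; for this envelope the coherence time evaluates to $T_c=\pi T/(4\sqrt{3})\approx0.45\,T$. A minimal sketch, with illustrative values of $n$ and $T$ (assumptions of the sketch):

```python
# Numerical check of Eqs. (12)-(14) for the sech envelope of Eq. (13):
# the integral of |E_in|^2 equals n, and the coherence time evaluates to
# T_c = pi*T/(4*sqrt(3)) ~ 0.453*T.  The values of n and T are
# illustrative assumptions.
import math

n, T = 0.05, 1.0
dt = 1e-3
ts = [-20.0 + k * dt for k in range(40001)]   # grid over [-20, 20]

def E_in(t):
    return math.sqrt(n / T) / math.cosh(2 * t / T)

norm = sum(E_in(t) ** 2 for t in ts) * dt                  # Eq. (12): ~ n
tbar = sum(t * E_in(t) ** 2 for t in ts) * dt / norm       # <t>  (~ 0)
t2bar = sum(t * t * E_in(t) ** 2 for t in ts) * dt / norm  # <t^2>
Tc = math.sqrt(t2bar - tbar ** 2)                          # Eq. (14)
print(norm, Tc)
```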
### II.3 Target dynamics
The target of the dynamics is to absorb a single photon and populate the atomic state |r⟩. This is achieved by suitably tailoring the control field Ω(t). We will consider protocols using control fields that have been developed for a single-photon wave packet Fleischhauer2000 (); Gorshkov2007a (); Dilley2012 (); Giannelli2018 (). The figures of merit we take are (i) the probability η to find the excitation in the state |r⟩ of the atom after a fixed interaction time and (ii) the fidelity of the transfer ν, which we define as the ratio between the probability η and the number of impinging photons. This ratio, as we show in the next section, approaches the fidelity of storage of a single photon when n → 0.
We give the formal definition of these two quantities. The probability η reads Gorshkov2007a ()
η = Tr{(|r⟩⟨r| ⊗ 1̂_f) ρ̂(t2)}, (15)
where 1̂_f and Tr denote respectively the identity and the trace over the electromagnetic fields (both the fields in the transmission line and in the optical cavity), and ρ̂(t) is the density operator of the system.
The fidelity of the transfer is defined as the ratio between η and the number of impinging photons, namely
ν = η / ∫_{t1}^{t2} |E_in(t)|² dt, (16)
which is strictly valid for a coherent pulse. We remark that if the initial state is a single photon, the fidelity and the efficiency coincide.
Before we conclude, we remind the reader of the cooperativity C, which determines the maximum fidelity of single-photon storage Gorshkov2007a (); Giannelli2018 (). The cooperativity characterizes the strength of the coupling between the cavity mode and the atomic transition; it reads Gorshkov2007a ()
C = g²/(κ_tot γ). (17)
For protocols based on adiabatic transfer of the single photon into the atomic excitation, the maximum fidelity of single-photon storage reads Gorshkov2007a (); Giannelli2018 ()
η^sp_max = (κ/κ_tot) C/(1+C), (18)
and it approaches κ/κ_tot for C ≫ 1. Equation (18) is also the probability for emission of a photon into the transmission line when the atom is initially prepared in the excited state and no control pulse is applied.
The parameters we use in our study are the ones of the setup of Ref. Koerber2018 (), with the corresponding cooperativity C and maximal storage fidelity η^sp_max. Furthermore we choose T such that the adiabatic condition is fulfilled (see Ref. Giannelli2018 ()).
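Equations (17) and (18) are straightforward to evaluate; a minimal numerical illustration follows. The parameter values below are hypothetical placeholders, not the values of the experimental setup cited above.

```python
# Illustration of Eqs. (17)-(18) with hypothetical parameters.
g = 2.0           # vacuum Rabi coupling
kappa = 1.0       # decay through the coupling mirror
kappa_loss = 0.1  # parasitic cavity losses
gamma = 0.5       # decay rate of the atomic excited state

kappa_tot = kappa + kappa_loss
C = g ** 2 / (kappa_tot * gamma)                 # cooperativity, Eq. (17)
eta_sp_max = (kappa / kappa_tot) * C / (1 + C)   # max fidelity, Eq. (18)
print(C, eta_sp_max)
```

Note that even for C → ∞ the fidelity is bounded by κ/κ_tot, i.e. by the fraction of the cavity linewidth associated with the coupling mirror.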
## III Storage
In this section we report the results for the storage of a weak coherent pulse into a single-atom excitation. We first determine the efficiency and fidelity by numerically solving the master equation of Eq. (1). We compare the results with the corresponding storage fidelity of a single photon with the temporal envelope of Eq. (13). We then determine analytically the efficiency and the fidelity for weak coherent pulses with mean photon number n ≪ 1 and quantify the discrepancy between these quantities and the single-photon storage fidelity as a function of n. We further discuss how this method can be extended in order to determine the efficiency of storage of an arbitrary incident pulse.
### III.1 Numerical results
We determine the dynamics of storage by numerically integrating a master equation in the reduced Hilbert space of the cavity mode and atomic degrees of freedom, which we obtain from the master equation (1) after moving to the reference frame which displaces the multimode coherent state to the vacuum. The procedure extends to an input multi-mode coherent state an established procedure for describing the interaction of a quantum system with an oscillator in a coherent state, see for instance Cohen-Tannoudji1994 (). We apply the unitary transformation D̂†, where the operator D̂ is given in Eq. (8). In this reference frame the initial state of the electromagnetic field is the vacuum, the full density matrix is ρ̂′(t) = D̂† ρ̂(t) D̂, and its coherent dynamics is governed by the Hamiltonian
Ĥ′(t) = Ĥ_tot(t) + √(2κ) (E_in(t) â† + E*_in(t) â). (19)
Here E_in(t) carries the information about the initial state of the electromagnetic field; it is related to the amplitudes α_k by the following equation (consistently with Eq. (11)):
E_in(t) = √(Lc/(2π²)) ∫_{−∞}^{∞} α(k + k_c) e^{−ikct} dk. (20)
By using the Born-Markov approximation one can now trace out the degrees of freedom of the electromagnetic field outside the resonator. The Hilbert space is then reduced to the cavity-mode and atomic degrees of freedom; the density matrix which describes the state of this system is
τ̂(t) = Tr_ff{ρ̂′(t)}, (21)
where Tr_ff denotes the partial trace with respect to the degrees of freedom of the external electromagnetic field. Its dynamics is governed by the master equation
(22)
and the superoperators are defined in Eqs. (5), where now the cavity field is damped at rate κ_tot = κ + κ_loss, with κ the linewidth due to radiative decay of the cavity mode by the finite transmittivity of the mirror interfacing the transmission line. The initial state is here described by the corresponding density operator, and the storage efficiency is evaluated from τ̂(t2).
We integrate numerically the optical Bloch equations for the matrix elements of Eq. (22), taking a truncated Hilbert space for the cavity field with number states ranging from 0 up to a cutoff. For the parameters we use in our simulation we find that the mean number of intracavity photons is below 2, and we check the convergence of our simulation for different values of the cutoff. Figure 2 displays the storage efficiency and fidelity at time t2 for different mean numbers of photons n of the incident weak coherent pulse. When evaluating the dynamics we employed the control laser pulse which optimizes the storage of the incident pulse when this is a single photon with the temporal envelope of Eq. (13). In detail, the amplitude of the laser pulse has been determined in Ref. Giannelli2018 () and reads
Ω(t) = √( 2γ(1+C) / ((e^{4t/T} + 1) T) ). (23)
We observe that the storage efficiency η rapidly increases with n and saturates to an asymptotic value for n ≫ 1. This asymptotic value indicates that the field in the cavity is essentially classical and the dynamics is the one of STIRAP Vitanov2017 (); its efficiency does not reach unity because the control pulse is optimal for single-photon storage but not for STIRAP. The fidelity ν decreases with n, while in the limit n → 0 it approaches the single-photon storage fidelity. We note that the behavior for large n depends on the pulse shape (see Fig. 2).
In Ref. Koerber2018 () the authors report the experimental results of measuring the fidelity as a function of n, and in particular the ratio between the fidelity of storing a weak coherent pulse and the fidelity of single-photon storage. We compare these results with our predictions in the regime n ≪ 1, where the fidelity is independent of the photon shape, by extracting the same ratio from Fig. 2. Even if for larger n the fidelity depends on the pulse shape, we have verified by comparing with different pulse shapes that the discrepancy is typically small.
### III.2 Extracting the single-photon storage fidelity from arbitrary incident pulses
The method we applied in Sec. III.1 is convenient but valid solely when the input pulse is a coherent state. We now show a more general approach for describing storage of a generic input pulse by an atomic medium (which can also be composed of a single atom) and which allows one to obtain a useful description of the dynamics. This approach does not make use of approximations such as treating the atomic polarization as an oscillator Gorshkov2007a (), and it allows one to determine the storage fidelity.
For this purpose we consider master equation (1), and recast it in the form Moelmer1988 (); Dum1992 ()
∂_t ρ̂ = −i(Ĥ_eff(t) ρ̂ − ρ̂ Ĥ†_eff(t)) + J ρ̂, (24)
where Ĥ_eff(t) is a non-Hermitian operator, which reads
Ĥ_eff(t) = Ĥ_tot(t) − iγ|e⟩⟨e| − iκ_loss â†â, (25)
and is denoted in the literature as the effective Hamiltonian. The last term on the right-hand side of Eq. (24) is denoted the jump term and is here given by
J ρ̂ = 2(γ |ξ_e⟩⟨e| ρ̂ |e⟩⟨ξ_e| + κ_loss â ρ̂ â†). (26)
This decomposition allows one to visualize the dynamics in terms of an ensemble of trajectories, where each trajectory is characterized by a number of jumps at given instants of time within the interval where the evolution occurs Dum1992 (); Carmichael (). Of all trajectories, we restrict ourselves to the one where no jump occurs, since this is the only trajectory which contributes to the target dynamics. The corresponding density matrix is ρ̂_0(t) = |ψ_0(t)⟩⟨ψ_0(t)|/P_0(t), where |ψ_0(t)⟩ = T exp(−i ∫_{t1}^{t} dt′ Ĥ_eff(t′)) |ψ(t1)⟩, T is the time-ordering operator, and P_0(t) = ⟨ψ_0(t)|ψ_0(t)⟩ is the probability that the trajectory occurs (the initial state |ψ(t1)⟩ is a pure state). The efficiency of storage, in particular, can be written as
η = P_0 Tr{|r⟩⟨r| ρ̂_0(t2)}. (27)
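The no-jump evolution can be sketched numerically for a toy three-level system {|g,1⟩, |e,0⟩, |r,0⟩} evolving under a time-independent non-Hermitian Hamiltonian of the form of Eq. (25). All parameter values below are hypothetical, and the constant control field replaces the time-dependent pulse used in the text; this is only meant to illustrate how P_0 and Eq. (27) are evaluated.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical parameters, hbar = 1.
g_c = 1.0         # cavity-atom coupling
Omega = 0.8       # constant control field (assumption)
gamma = 0.3       # decay of |e>
kappa_loss = 0.05 # parasitic cavity loss

# Hermitian part in the basis (|g,1>, |e,0>, |r,0>).
H = np.array([[0, g_c, 0],
              [g_c, 0, Omega],
              [0, Omega, 0]], dtype=complex)
# Non-Hermitian decay terms, as in Eq. (25).
H_eff = H - 1j * np.diag([kappa_loss, gamma, 0.0])

psi0 = np.array([1.0, 0.0, 0.0], dtype=complex)   # photon in the cavity
t_final = 5.0
psi = expm(-1j * H_eff * t_final) @ psi0          # unnormalized no-jump state

P0 = np.vdot(psi, psi).real       # probability that no jump occurred
rho0_rr = abs(psi[2]) ** 2 / P0   # population of |r> in the no-jump trajectory
eta = P0 * rho0_rr                # Eq. (27)
print(P0, eta)
```

Note that the factors P_0 cancel, so the efficiency is simply the squared amplitude of |r⟩ in the unnormalized no-jump state.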
In order to determine η, we first decompose the incident pulse at t = t1 into photonic excitations, namely
|ψ_coh⟩ = ∑_{m=0}^{∞} C_m |ψ^(m)⟩, (28)
where C_m = e^{−n/2} √(n^m/m!) and the state |ψ^(m)⟩ contains exactly m photons. The dynamics transfers the excitations but preserves their total number, since the total excitation number commutes with Ĥ_eff. Therefore it does not couple states with different numbers of photons. By this decomposition we can numerically determine the fidelity for a finite number of initial excitations, as we show in Appendix B. The efficiency η, in particular, can be cast in the form
η = ∑_{m=0}^{∞} |C_m|² η^(m), (29)
where η^(m) is the efficiency with which one photon from an m-photon state is transferred into the atomic excitation |r⟩. Here, η^(1) is the storage fidelity of a single photon. For a weak coherent pulse |C_m|² = e^{−n} n^m/m!, and for n ≪ 1 we obtain the expression
η = n η^(1) + n²(η^(2)/2 − η^(1)) + O(n³), (30)
such that the fidelity takes the form
ν = η/n = η^(1) + n(η^(2)/2 − η^(1)) + O(n²). (31)
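Equation (31) suggests a simple recipe for extracting the single-photon fidelity from data at small n: fit ν(n) linearly and read off the intercept at n = 0. The sketch below uses hypothetical values of η^(1) and η^(2) (not computed efficiencies) to generate synthetic data and recover them.

```python
import numpy as np

# Hypothetical subspace efficiencies (placeholders, not results of the paper).
eta1 = 0.39   # single-photon storage efficiency
eta2 = 0.55   # efficiency in the two-photon subspace

# Synthetic fidelity data at small n, following Eq. (31) to first order.
ns = np.array([0.02, 0.05, 0.1, 0.2])
nu = eta1 + ns * (eta2 / 2 - eta1)

# Linear fit: the intercept is eta^(1), the slope is eta^(2)/2 - eta^(1).
slope, intercept = np.polyfit(ns, nu, 1)
print(intercept, slope)
```

With real data the O(n²) corrections would bias the fit slightly, so the points should be restricted to the smallest accessible n.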
If the control pulse is chosen to be the one which maximizes the storage fidelity of a single photon, then η^(1) = η^sp_max, Eq. (18). This can be clearly seen in Fig. 2.
We now discuss this dynamics if, instead of a single atom, the quantum memory is composed of N atoms within the resonator. In the following we assume that the atoms are identical and that the vacuum Rabi coupling and the control laser pulse intensity and phase do not depend on the atomic positions within the cavity. Let us first consider an input pulse consisting of a single photon. In this case the dynamics can be mapped to the one described by Eq. (1), where in the Hamiltonian (4) the states of the single-atom transition are replaced by the corresponding collective atomic states, the latter being the target state. For a single incident photon, in fact, these are the only internal states involved in the dynamics. The coupling between the cavity mode and the collective transition is now enhanced by a factor √N, leading to a higher cooperativity and thus to a larger value of η^sp_max. In this case the control pulse leading to optimal storage is the same as for a single atom which couples to the cavity with the collectively enhanced vacuum Rabi frequency (see for example Eq. (23) and Ref. Giannelli2018 ()).
If the incident pulse is not a single photon, further collective excitations of the atoms have to be accounted for and the dynamics cannot be reduced to the coupling of a single effective three-level structure with the cavity field, as is detailed in Appendix B for the case of a weak coherent pulse. Nevertheless, if the number of atoms N is much larger than the mean number of excitations in the incident pulse, the dynamical equations can be reduced to the ones describing storage of a single photon Fleischhauer2000 (); Gorshkov2007a (); Dilley2012 (). In this limiting case, the optimal control pulses for storage of a single photon can also be applied to storage of the input pulse by the atomic ensemble, as long as the input pulse has the same envelope as the single photon. We refer the interested reader to Ref. Gorshkov2007a () for details.
In general, the formalism of the effective Hamiltonian can be applied to determine the control field for storage of an arbitrary input pulse by an atomic ensemble, without having to impose the condition N ≫ n. For an arbitrary input pulse the target state in the subspace of m excitations is the Dicke state of the atomic ensemble where m atoms are in |r⟩, which is coherently coupled to the initial state by the dynamics. The control pulse shall then optimize the dynamics by maximizing the fidelity
η′ = ∑_m |C_m|² η^(m)_m, (32)
where η^(m)_m is the efficiency of transfer into the m-excitation Dicke state, calculated for the effective Hamiltonian of the atomic ensemble. The control field can be found by means of a strategy analogous to ensemble optimal control, finding the control pulse that optimizes the dynamics in each subspace of excitations so as to maximize η′ Rojan2014 (); Goerz2014 (); Kobzar2004 (); Kobzar2008 (); Koch2016 ().
## IV Conclusions
We have analysed the storage of a weak coherent pulse into the excitation of a single atom inside a resonator, which acts as a quantum memory. Our specific objective was to characterize the process in order to show under which conditions an attenuated incident pulse can be considered as a single photon for storage purposes. Thus we have identified the conditions and the figures of merit which allow one to extract the single-photon storage fidelity by measuring the probability that the atom has been excited at the end of the process.
We remark that the pulse retrieved from a single atom will always be a single photon Chaneliere2005 (). Nevertheless, the formalism we developed in this work permits one to extend this dynamics to other kinds of incident pulses and to quantum memories composed of spin ensembles. For this general case it sets the basis for identifying the optimal control pulses for storage and retrieval of an arbitrary quantum light pulse.
###### Acknowledgements.
This work is dedicated to Wolfgang Schleich on the occasion of his 60th birthday. The authors are grateful to Stephan Ritter for insightful discussions and for proposing this problem. They also thank Susanne Blum, Peter-Maximilian Ney, Christiane Koch, and Gerhard Rempe for discussions. The authors acknowledge financial support by the German Ministry for Education and Research (BMBF) under the project Q.com-Q.
## Appendix A Description of the electromagnetic field in the transmission line
The transmission line is here modelled by a cavity of length L, with a perfect mirror at one end; the second mirror corresponds to the optical cavity mirror with finite transmittivity. The modes of the transmission line are standing waves with wave vector along the cavity axis. For numerical purposes we take a finite number N of modes about the cavity wave number k_c. Their wave numbers are
k_n = k_c + nπ/L, (33)
with n an integer; the corresponding frequencies are ω_n = k_n c. We calibrate L and N so that our simulations are not significantly affected by the finite size of the transmission line and by the cutoff in the mode number N. For the propagation of the incident pulse and its appropriate description at the mirror interface, this requires that the difference between neighbouring frequencies is much smaller than the characteristic frequencies of the problem. We further choose N in order to cover a frequency range which includes all the relevant frequencies of this system. With our choice of parameters, the norm of the envelope results in
∫_{t1}^{t2} |E_in(t)|² dt = n(1 − ε), (34)
with ε ≪ 1. Further parameters and discussions are found in Ref. Giannelli2018 ().
## Appendix B Storage efficiency for n ≪ 1
In this appendix we provide the details for calculating the dynamics and the fidelity for an incident pulse which is a superposition of different photon-number states. We apply the procedure to multimode coherent states; nevertheless, it can be generalised in a straightforward manner to a generic initial input pulse.
#### Decomposition of a coherent state
The coherent state in Eq. (7) can be decomposed into a linear combination of states each with a fixed number of excitations (see Eq. (28)). The mean number of photons in mode k is |α_k|² and the mean photon number in the coherent state is n = ∑_k |α_k|², see Eq. (9). The state |ψ^(m)⟩ contains exactly m excitations of the quantum electromagnetic field and reads
|ψ^(0)⟩ = |vac⟩, (35a)
|ψ^(1)⟩ = ∑_{k=1}^{N} E_k b̂†_k |vac⟩, (35b)
|ψ^(2)⟩ = ∑_{k=1}^{N} ∑_{k′=1}^{N} E_{k,k′} b̂†_k b̂†_{k′} |vac⟩, (35c)
⋮
|ψ^(m)⟩ = ∑_{{k}_m} E_{{k}_m} b̂†_{k_1} ⋯ b̂†_{k_m} |vac⟩, (35d)
with coefficients
E_k = α_k/√n, (36a)
E_{k,k′} = E_{k′,k} = E_k E_{k′}/√2, (36b)
⋮ (36c)
E_{{k}_m} = ∏_{i∈{k}_m} E_i / √(m!), (36d)
and it is easy to check that the states |ψ^(m)⟩ are orthonormal and complete.
The storage fidelity when the initial state is the coherent state introduced in Eq. (28) is given by (see Eq. (29))
η = e^{−n} ∑_{m=1}^{∞} (n^m/m!) η^(m). (37)
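A quick numerical comparison between the Poisson-weighted sum of Eq. (37), truncated at m = 2, and the small-n expansion of Eq. (30). The subspace efficiencies below are hypothetical placeholders, not computed values.

```python
import math

# Hypothetical subspace efficiencies eta^(m) for m = 1, 2.
eta = {1: 0.39, 2: 0.55}

def eta_coherent(n, orders=eta):
    """Poisson-weighted efficiency e^{-n} sum_m (n^m/m!) eta^(m), Eq. (37)."""
    return math.exp(-n) * sum(n ** m / math.factorial(m) * e
                              for m, e in orders.items())

n = 0.05
truncated = eta_coherent(n)                               # Eq. (37), m <= 2
expansion = n * eta[1] + n ** 2 * (eta[2] / 2 - eta[1])   # Eq. (30)
print(truncated, expansion)
```

For n ≪ 1 the two expressions agree up to O(n³), consistent with the error term quoted in Eq. (30).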
#### Equations of motion
We here explicitly derive the equations of motion in the subspaces with zero, one and two excitations.
Zero excitations - Vacuum: The subspace of zero excitations contains only the state |g,0,vac⟩, meaning that the atom is in the ground state |g⟩, the cavity is empty and the electromagnetic field is in the vacuum state. The time evolution in this subspace is trivial.
One excitation - Single photon: A basis for the subspace with one excitation is
B1={ |g,1,vac⟩,|e,0,vac⟩,|r,0,vac⟩, |g,0,1k⟩:k∈{1,…,N}}
and a general state can be written as
|ϕ^(1)_t⟩ = c1(t)|g,1,vac⟩ + e1(t)|e,0,vac⟩ + r1(t)|r,0,vac⟩ + ∑_k E_k(t)|g,0,1_k⟩. (38)
The equations of motion in this subspace are
ċ1(t) = −i g e1(t) − iλ ∑_k E_k(t) − κ_loss c1(t),
ė1(t) = (iΔ − γ) e1(t) − i g c1(t) − iΩ(t) r1(t),
ṙ1(t) = −iΩ*(t) e1(t),
Ė_k(t) = −iΔ_k E_k(t) − iλ c1(t), (39)
and they constitute a system of coupled differential equations with time-dependent coefficients. Using the input-output formalism Walls1994 () one obtains
ċ1(t) = −i g e1(t) − i√(2κ) E_in(t) − (κ + κ_loss) c1(t),
ė1(t) = (iΔ − γ) e1(t) − i g c1(t) − iΩ(t) r1(t),
ṙ1(t) = −iΩ*(t) e1(t), (40)
where κ is the decay rate of the cavity field and E_in(t) is defined in Eq. (20). Equations (39) or Eqs. (40) can be easily solved numerically. These equations correspond to the storage of a single photon into a single atom Giannelli2018 () and are equivalent to the approximated equations obtained in Ref. Gorshkov2007a () describing the storage of a light pulse in an atomic ensemble composed of a large number of atoms.
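Equations (40) can be integrated with any standard ODE solver. The sketch below uses illustrative parameters (not the values of the cited setup) and, for simplicity, a constant control field instead of the optimal pulse of Eq. (23); it returns the single-photon efficiency η^(1) = |r1(t2)|² for the sech input of Eq. (13) normalized to one photon.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters (illustrative only), hbar = 1.
g, kappa, kappa_loss, gamma, Delta = 2.0, 1.0, 0.05, 0.5, 0.0
T = 1.0
Om = 0.8   # constant control field (assumption, not the optimal pulse)

def E_in(t):
    # Eq. (13) with n = 1 (single photon).
    return np.sqrt(1.0 / T) / np.cosh(2 * t / T)

def rhs(t, y):
    # Right-hand side of Eqs. (40) for the amplitudes (c1, e1, r1).
    c1, e1, r1 = y
    dc1 = -1j * g * e1 - 1j * np.sqrt(2 * kappa) * E_in(t) \
          - (kappa + kappa_loss) * c1
    de1 = (1j * Delta - gamma) * e1 - 1j * g * c1 - 1j * Om * r1
    dr1 = -1j * np.conj(Om) * e1
    return [dc1, de1, dr1]

sol = solve_ivp(rhs, [-8 * T, 8 * T], [0j, 0j, 0j], rtol=1e-8, atol=1e-10)
eta1 = abs(sol.y[2, -1]) ** 2   # eta^(1) = |r1(t2)|^2
print(eta1)
```

With an unoptimized constant control the efficiency is well below the bound of Eq. (18); inserting the optimal pulse shape would raise it toward η^sp_max.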
Two excitations - Two photons states: A basis for the subspace with two excitations is
B2={ |g,2,vac⟩,|g,1,1k⟩,|g,0,1k1k′⟩,|e,1,vac⟩, |e,0,1k⟩,|r,1,vac⟩,|r,0,1k⟩:k,k′∈{1,…,N}}
thus a general state in this subspace can be written as
|ϕ^(2)_t⟩ = c2(t)|g,2,vac⟩ + ∑_k E^c_k(t)|g,1,1_k⟩ + ∑_k ∑_{k′≥k} E_{k,k′}(t)|g,0,1_k 1_{k′}⟩ + e2(t)|e,1,vac⟩ + ∑_k E^e_k(t)|e,0,1_k⟩ + r2(t)|r,1,vac⟩ + ∑_k E^r_k(t)|r,0,1_k⟩. (41)
The state in Eq. (41) can be used to describe the interaction of the atom-cavity system with a two-photon state; in fact, the terms |g,0,1_k 1_{k′}⟩ describe a two-photon state of the electromagnetic field. Notice the convention k′ ≥ k in the double sum, which avoids double counting of the two-photon amplitudes. The equations of motion in this subspace are
ċ2(t) = −i√2 g e2(t) − i√2 λ ∑_k E^c_k(t) − 2κ_loss c2(t),
ṙ2(t) = −iΩ*(t) e2(t) − iλ ∑_k E^r_k(t) − κ_loss r2(t),
Ė^e_k(t) = i(Δ − Δ_k) E^e_k(t) − i g E^c_k(t) − iΩ(t) E^r_k(t) − iλ e2(t),
Ė^r_k(t) = −iΔ_k E^r_k(t) − iΩ*(t) E^e_k(t) − iλ r2(t),
Ȧ_{k,k′}(t) = −i(Δ_k + Δ_{k′}) A_{k,k′}(t) − iλ(E^c_k(t) + E^c_{k′}(t)), (42)
where A_{k,k′}(t) is defined in terms of the two-photon amplitudes E_{k,k′}(t). Eqs. (42) are a system of coupled differential equations with time-dependent coefficients; this system can be solved numerically.
#### Calculation of the efficiency
The efficiency η can be calculated with the formalism introduced in this section in two ways: (i) solve Eqs. (39) and Eqs. (42) with initial conditions given by the expansion (28) and the coefficients of Eqs. (36a) and (36b); then the efficiency is
η = |r1(t2)|² + |r2(t2)|² + ∑_k |E^r_k(t2)|²; (43)
or (ii) solve Eqs. (39) and Eqs. (42) with initial conditions (36a) and (36b) separately to obtain the efficiencies η^(1) and η^(2) of single- and two-photon storage; then the efficiency as a function of n is given by Eq. (30).
Figure 3 reports the efficiency as a function of n: the solid line represents the result of the numerical integration of the master equation described in Sec. III.1, while the dashed line is the solution with the decomposition up to m = 2 described in this section. It is evident that for n ≪ 1 the two results coincide.
### References
1. M. Afzelius, N. Gisin, and H. de Riedmatten, Physics Today 68, 42 (2015).
2. N. Sangouard and H. Zbinden, J. Mod. Opt. 59, 1458-1464 (2012).
3. S. Ritter, C. Nölleke, C. Hahn, A. Reiserer, A. Neuzner, M. Uphoff, M. Mücke, E. Figueroa, J. Bochmann, and G. Rempe, Nature 484, 195-200 (2012).
4. H.-J. Briegel, W. Dür, J. I. Cirac, and P. Zoller, Phys. Rev. Lett. 81, 5932 (1998).
5. G. Kurizki, P. Bertet, Y. Kubo, K. Mølmer, D. Petrosyan, P. Rabl, and J. Schmiedmayer, PNAS 112, 3866-3873 (2015).
6. M. Uphoff, M. Brekenfeld, G. Rempe, and S. Ritter, Appl. Phys. B 122, 46 (2016).
7. K. S. Choi, H. Deng, J. Laurat, and H. J. Kimble, Nature 452, 67-71 (2008).
8. H. J. Kimble, Nature 453, 1023 (2008).
9. I. Usmani, M. Afzelius, H. de Riedmatten, and N. Gisin, Nat. Comm. 1, 12 (2010).
10. S.-Y. Lan, A. G. Radnaev, O. A. Collins, D. N. Matsukevich, T. A. B. Kennedy, and A. Kuzmich, Opt. Express 17, 13639-13645 (2009).
11. N. Gisin and R. Thew, Nat. Phot. 1, 165-171 (2007).
12. M. Körber, O. Morin, S. Langenfeld, A. Neuzner, S. Ritter, and G. Rempe, Nat. Phot. 12, 18-21 (2018).
13. H. Mack and W. P. Schleich, OPN Trends 3, 29-35 (2003).
14. R. J. Glauber, Phys. Rev. 130, 2529 (1963).
15. R. J. Glauber, Phys. Rev. 131, 2766 (1963).
16. W. P. Schleich, Quantum optics in phase space, WILEY-VCH (Berlin, 2001).
17. M. Fleischhauer, S.F. Yelin, and M.D. Lukin, Opt. Commun. 179, 395 (2000).
18. A. V. Gorshkov, A. André, M. Fleischhauer, A. S. Sørensen, and M. D. Lukin, Phys. Rev. Lett. 98, 123601 (2007).
19. A. V. Gorshkov, A. André, M. D. Lukin, and A. S. Sørensen, Phys. Rev. A 76, 033804 (2007).
20. A. V. Gorshkov, A. André, M. D. Lukin, and A. S. Sørensen, Phys. Rev. A 76, 033805 (2007).
21. J. Dilley, P. Nisbet-Jones, B. W. Shore, and A. Kuhn, Phys. Rev. A 85, 023834 (2012).
22. A. Kalachev, Phys. Rev. A 76, 043812 (2007).
23. A. Kalachev, Phys. Rev. A 78, 043812 (2008).
24. A. Kalachev, Opt. Spectrosc. 109, 32 (2010).
25. J. I. Cirac, P. Zoller, H. J. Kimble, and H. Mabuchi, Phys. Rev. Lett. 78, 3221 (1997).
26. A. Reiserer and G. Rempe, Rev. Mod. Phys. 87, 1379 (2015).
27. L.-M. Duan and C. Monroe, Rev. Mod. Phys. 82, 1209 (2010).
28. C. Kurz, M. Schug, P. Eich, J. Huwer, P. Müller, and J. Eschner, Nat. Commun. 5, 5527 (2014).
29. H. P. Specht, C. Nölleke, A. Reiserer, M. Uphoff, E. Figueroa, S. Ritter, and G. Rempe, Nature 473, 190-193 (2011).
30. M. Khudaverdyan, W. Alt, I. Dotsenko, T. Kampschulte, K. Lenhard, A. Rauschenbeutel, S. Reick, K. Schörner, A. Widera, and D. Meschede, New J. Phys. 10, (2008)
31. H. J. Kimble, Phys. Scr. 127, (1998).
32. M. Keller, B. Lange, K. Hayasaka, W. Lange, and H. Walther, New J. Phys. 6, (2004).
33. L. Giannelli, T. Schmit, T. Calarco, C. P. Koch, S. Ritter, and G. Morigi, preprint arXiv:1804.10558, (2018).
34. C. Cohen-Tannoudji, J. Dupont-Roc, and G. Grynberg, Atom-Photon Interactions, (Wiley-VCH, 2004).
35. N. V. Vitanov, A. A. Rangelov, B. W. Shore, and K. Bergmann, Rev. Mod. Phys. 89, 015006 (2017).
36. R. Dum, P. Zoller, and H. Ritsch, Phys. Rev. A 45, 4879 (1992).
37. H. J. Carmichael, An open system approach to quantum optics, Springer-Verlag (Berlin, 1993).
38. J. Dalibard, Y. Castin, and K. Mølmer, Phys. Rev. Lett. 68, 580 (1992).
39. K. Rojan, D. M. Reich, I. Dotsenko, J.-M. Raimond, C. P. Koch, and G. Morigi, Phys. Rev. A 90, 023824 (2014).
40. M. H. Goerz, E. J. Halperin, J. M. Aytac, C. P. Koch, and K. B. Whaley, Phys. Rev. A 90, 032329 (2014).
41. K. Kobzar, T. E. Skinner, N. Khaneja, S. J. Glaser, and B. Luy, J. Magn. Reson. 170, 236 (2004).
42. K. Kobzar, T. E. Skinner, N. Khaneja, S. J. Glaser, and B. Luy, J. Magn. Reson. 194 , 58 (2008).
43. C. P. Koch, J. Phys.: Condens. Matter 28, 213001 (2016).
44. T. Chanelière, D. N. Matsukevich, S. D. Jenkins, S.-Y. Lan, T. A. B. Kennedy, and A. Kuzmich, Nature 438, 833-836 (2005).
45. D. F. Walls and G. J. Milburn, Quantum Optics (Springer, Heidelberg, 1994).
Author John Philip Wsol wrote on Mar. 10, 2015 @ 16:07 GMT
Essay Abstract
Cosmos means order. The universe is humming cosmic harmonies. Pioneers of this scientific frontier progressively fine-tune their observational acuity, experimental skillfulness and mathematical expressiveness to see, measure and describe their perceptions. Theoreticians ponder all this data hoping to reveal the hidden mathematical beauty of this cosmic composition. The Quest: 1. to share distinct perspectives gained from Cognitive Science studies; 2. a systematic approach to discovering a deeper understanding of physics equations; 3. to identify a geometric paradigm that can explain many outstanding cosmological questions. Borrowing 8 excellent FQxI questions it will be shown that Mathematics + Combinatorial Quantum-wave Mechanics (CQM) describes the structure of 4D Space~Time and, herein, reveal The Grand Design. 1. Are we missing interesting physical theories because our commitment to a particular mathematical framework? Students, innocently, inherit the previous generation’s mathematical toolboxes that unwittingly limit their thinking to explicit geometric assumptions and hidden presuppositions. 2. What fundamental assumptions did science get wrong? What is the right framework?
Author Bio
John, 6th grade Science Project was a cardboard box planetarium + 36 half-page typed Astronomy booklet. 7th grade build a Binary Digital Computer out of pinball machine parts, won 1st in California State Science Fair, 11th grade won Chemistry Student of the Year, by 12th grade he worked for Physics Computer Development Project, UCI CAI programs: rewrote Complex and helped add memory feature to Quantum, became database system expert, self-studied Hebrew, Cognitive Science, math, physics & cosmology. Independently developing 4D Holographic Space~Time quantum-structured elastic-fluidic model. Founded Combinatorial Quantum-wave Mechanics (CQM) explaining Dark Matter, Dark Energy and Quantum Gravity.
Jacek Safuta wrote on Mar. 11, 2015 @ 15:56 GMT
Hi John,
Your essay is very interesting and enjoyable to read. Our concepts have really a lot in common (this invites to read) however in details there are important differences (that may be inspiring).
We agree that an elastic medium for wave transfer like aether is necessary and that “particles” are simply waves. I have coined Geometrical Universe Hypothesis that can be broken down into:
- the correspondence rule that all interactions and matter are manifestations of spacetime geometry
- the empirical domain - gravitational, electromagnetic, strong nuclear and weak nuclear measurements and cosmological observations
- the geometric structure being a set of Thurston geometries with metrics and the wave transfer
If you are interested you can find details in my essay
http://fqxi.org/community/forum/topic/2452.
Thank you and good luck in the contest.
Jacek
report post as inappropriate
Author John Philip Wsol replied on Mar. 20, 2015 @ 03:36 GMT
Dear Jacek,
Thanks for taking the time to briefly skim my essay. I do see several areas where our thinking parallels each other. We both agree that Relativity is “aether neutral” and for the need of a medium for waves and for the existence of matter being fundamentally “wave-icle”. (I prefer to use my definition of a 4D Space~Time Medium, rather than aether). However, I don’t...
view entire post
Jacek Safuta replied on Mar. 20, 2015 @ 09:12 GMT
Hi John,
You say: “I was surprised and hurt by your low scoring of 3 for my paper.” I have not scored your paper yet. I intend to vote after reading more essays to have a reference points. Couple of days ago my essay was scored 1! but how can I know who is responsible? And I think your low rating is really unfair.
You claim: “your paper is completely void of ANY measurable...
view entire post
report post as inappropriate
Jacek Safuta replied on Mar. 20, 2015 @ 09:16 GMT
As you like strict answers, I have forgotten to give you the link to Perelman proof: Grisha Perelman, Ricci flow with surgery on three-manifolds
http://arxiv.org/abs/math/0303109
report post as inappropriate
Gary D. Simpson wrote on Mar. 17, 2015 @ 20:22 GMT
John,
Thanks for the excellent read. This is a very interesting essay with many novel insights. The many illustrations add a lot of flavor also. Physics could really use an artist right now. There was enough material here for half a dozen more detailed essays. I can almost imagine what Paul Dirac would be like on a sugar rush.
You should read the essay by Colin Walker. He discusses tired light.
Near the end, you present light paths that are curved. This ties in nicely with the cross product term presented in my work.
Best Regards and Good Luck,
Gary Simpson
Author John Philip Wsol replied on Mar. 24, 2015 @ 05:03 GMT
“This is a very interesting essay with many novel insights.”
For years now I have quietly pondered cosmological ideas. I’m thankful to FQxI for sponsoring this essay contest; it has helped me focus on getting ideas out of my head onto paper. ...
Gary D. Simpson replied on Mar. 24, 2015 @ 12:40 GMT
John,
You are most welcome. Your effort shows. Don't get discouraged about the voting and scores and such. Anything that deviates from what the mainstream believes gets voted down. Early in the voting, I got hit with three 1's and a 2. The key is to interact with other authors and get enough positive votes to overcome the haters. For whatever it's worth to you, you were at the top of the rankings for about 30 minutes until someone voted you down ... Sorry, I only get one vote.
If you are having a hard time conceptualizing complex time, take a look at my essay. Near the end, I present a variation of the Lorentz Transform that defines complex time. I have almost worked the math to a point where I can use Geometric Algebra and motion to explain gravity.
Best Regards and Good Luck,
Gary Simpson
Gary D. Simpson replied on Apr. 16, 2015 @ 02:06 GMT
John,
If you have not already done so, please take a look at my essay. There is still a week or so left to vote if you so desire.
Thanks,
Gary Simpson
Joe Fisher wrote on Mar. 18, 2015 @ 15:58 GMT
Dear Dr. Wsol,
You wrote: “Are we missing interesting physical theories because our commitment to a particular mathematical framework?”
Yes you are. This is the best one by far: “This is my single unified theorem of how the real Universe is occurring: Newton was wrong about abstract gravity; Einstein was wrong about abstract space/time, and Hawking was wrong about the explosive...
Joe Fisher wrote on Apr. 8, 2015 @ 15:54 GMT
Dear John,
I think Newton was wrong about abstract gravity; Einstein was wrong about abstract space/time, and Hawking was wrong about the explosive capability of NOTHING.
All I ask is that you give my essay WHY THE REAL UNIVERSE IS NOT MATHEMATICAL a fair reading and that you allow me to answer any objections you may leave in my comment box about it.
Joe Fisher
Branko L Zivlak wrote on Apr. 20, 2015 @ 08:44 GMT
Dear John,
I read your comment at Studencki and decided to carefully read your essay. For the most part I agree with your views. The overall impression (text, images, formulas, attitudes, explanations ...) is that you deserve a much better score than you currently have. Best regards,
Branko
Tim Litke wrote on Apr. 22, 2015 @ 05:00 GMT
Dr. Wsol,
This paper is impressive both in breadth and how many BIG questions you address. I’m surprised how so much of your math spans quantum amounts and links them to both the physics of Newton and Einstein relativity – AND you suggest even a 3rd relativity theory!
The perspective of the Cosmic Onion is really eye-opening. It makes more sense now how the entire universe expands within this “holographic” spherical area where all the universes before are contained within the one we are now existing in. All space-time relationships are caused by the direct connection of time and space expansion.
Furthermore, I think I get your Dark Energy explanation: it’s NOT that the universe is speeding up; rather, as the universe gets bigger, we ourselves expand and move relatively slowly compared to how fast the past is moving. That makes more sense - that action at extreme distances is pushing the galaxies apart. If this bears further scrutiny, your theory will go down in history as revolutionary. Your view of time (correcting the flaw you propose in the Friedman equation) supersedes Stephen Hawking’s “Brief History of Time” and calls for a re-assessment of many decades of accepted cosmological theories.
As for quantum gravity (math isn't my great strength), the way you describe the “probabilistic nature” of Planck-constant increments happening at regular intervals - quantum-like - now makes sense: making a depression in time which then effectively creates gravity.
I look forward to seeing more of your work!
--Tim
Randall J Urban wrote on Apr. 23, 2015 @ 05:01 GMT
Dear John Wsol,
Since I am a visual thinker, the Cosmic Onion model of space-time expansion really speaks to me. It creates a map to help me wrap my mind around how this all works together.
I love the third theory of relativity where time is a function of space expansion and is ever expanding along with it.
You mention that one implication of this model is “look back curves”. I find this especially intriguing.
I am well pleased to see how many gaps you successfully bridge in what have been other “understandings” of the cosmos up to this point. Further, I find it refreshing that you communicate it all in terms that someone like me can grasp without being a career mathematician. If anyone manages to shoot down the mathematical agreement you have presented here, I would be rather stunned.
Thank you,
-Randall
Patrick Tonin wrote on May. 25, 2015 @ 15:47 GMT
Dear John,
Sorry for the delay in replying but I have been very busy lately.
I am very pleased to see that you also think that everything is expanding (time included). As you say, our models have some good similarities. The big difference is that yours is 4D and mine is essentially 2D/3D. Also, in my model, past/present/future co-exist; you don't seem to include that in your model. Do you have any views on that topic?
I see that you worked with computers and databases, what do you think of my purely "information based" approach ?
I am 100% convinced that we are correct about time expanding with space. I wish the mainstream would take a serious look at our models one day, but I don't have much hope; it is too unconventional.
All the best,
Patrick Tonin
PS: the formula you put in my thread did not come out properly, can you send it to me separately ?
Author John Philip Wsol replied on May. 27, 2015 @ 04:59 GMT
Yes, Patrick. Although I introduce the idea of the Now-Manifold as expanding within the Cosmic Onion, its existence is spread across time. (Thus Einstein's often misunderstood statement that "The distinction between the past, present, & future is a stubbornly persistent illusion.") From an eternal perspective, the whole span of time exists simultaneously.
However, physically, we seem to all be caught up in the flow-of-Now -- the local rate at which time proceeds forward. This tells me that time MUST be treated differently from spatial dimensions. Thus I believe it is not proper to think of time as being "shared" with one of the spatial dimensions. Physically, we exist in 3-space PLUS one-time. (Above and beyond that, I consider higher dimensions as having an interface with physical reality: dimensions of mind, soul and spirit.)
As for pattern searching here is one I found back on 28-Jan-2013:
$\underline{22.99859}034 = \frac{27}{\sqrt[3]\phi} \approx \alpha^7 \left( \frac {m_p}{m_e}\right)^5$
Compare this to the 2007 & 2010 CODATA values of 22.99859141 and 22.99859213. (This has an uncertainty of less than $4.6\times10^{-8}$.)
-- Cosmologically yours,
-- John Wsol
Evaluation of $\int \prod_{j=1}^u \frac{x+j}{j-x}~dx$
Let $$I_{n,k} = \frac{(n+k)!}{k!(k-1)!(n-k)!}$$. This is a sort of generalization of Apéry's numbers, with $$I_{n,n}$$ being the $$n$$-th Apéry number. I am studying integrals of the form: $$f_u(x)=\int \prod_{j=1}^u \frac{x+j}{j-x}~dx,$$ where $$u$$ is a natural number. For $$u>2$$, I have shown that $$\tag{1}f_u(x) = C+ (-1)^ux+\sum_{w=1}^{u}(-1)^w \log(x-w)I_{u,w}.$$ For example, $$f_3(x)=-x-60\log(x-3)+60\log(x-2)-12\log(x-1).$$ I am looking for clarification on two things:
1. My eq. $$(1)$$ does not work for $$u=1,2$$. It generates a function that is almost what the integral evaluates to; whereas $$f_1(x) = -x-2\log(1-x)$$, eq. $$(1)$$ gives me $$-x+2\log(x-1)$$. Is there any amendment I can make to $$(1)$$ to ensure that it holds for all natural $$u$$?
2. My eq. $$(1)$$ surely appears like it should be written as one sum, but I have not been able to manipulate the summand to include the $$(-1)^u x$$ term. Is there a way to rewrite $$(1)$$ as a single sum, barring $$C~$$?
• What if the arguments of your logs are negative? Shouldn't you have log|▪︎| everywhere? – Wolfgang Feb 1 at 21:28
• notice that $\log(x-w)$ and $\log(w-x)$ only differ by an (imaginary) constant, so this can be absorbed in the $C$ in $f_u(x)$; the only difference between $u=1,2$ and $u>2$ that matters is that the sign in front of the logarithm is different from $(-1)^w$; if I am allowed to change the sign of the definition of $I_{u,w}$ for $u=1,2$, I'm done. – Carlo Beenakker Feb 1 at 21:29
• are you sure your equation (1) is correct? I think $-\sum_{w=1}^u$ should be $+\sum_{w=1}^u$ – Carlo Beenakker Feb 1 at 21:38
• @CarloBeenakker you are correct, fixed. – Descartes Before the Horse Feb 1 at 21:40
I may be mistaken, but I get $$f_u(x)=\int \prod_{j=1}^u \frac{x+j}{j-x}~dx= C+ (-1)^ux+\sum_{w=1}^{u}(-1)^w \log(x-w)I_{u,w}$$ and this is correct for all $$u=1,2,3,...$$, so it seems issue 1 is resolved.
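As a quick numerical sanity check on eq. $$(1)$$ (not part of the original thread; the helper names below are my own), one can differentiate the right-hand side and compare it with the integrand:

```python
from math import factorial, prod

def I(n, k):
    # I_{n,k} = (n+k)! / (k! (k-1)! (n-k)!)
    return factorial(n + k) // (factorial(k) * factorial(k - 1) * factorial(n - k))

def integrand(u, x):
    # prod_{j=1}^{u} (x+j)/(j-x)
    return prod((x + j) / (j - x) for j in range(1, u + 1))

def rhs_derivative(u, x):
    # d/dx of (-1)^u x + sum_w (-1)^w I_{u,w} log(x-w), valid for x > u
    return (-1) ** u + sum((-1) ** w * I(u, w) / (x - w) for w in range(1, u + 1))

# the derivative of (1) matches the integrand for u = 1, 2 as well as u > 2
for u in range(1, 6):
    x = u + 2.5
    assert abs(integrand(u, x) - rhs_derivative(u, x)) < 1e-9
```

Up to the additive constant absorbed into $$C$$ (and the imaginary constant relating $$\log(x-w)$$ and $$\log(w-x)$$), this supports the claim that $$(1)$$ holds for all natural $$u$$.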
QUESTION
# The volume of a cube is increasing at the rate of 8 cm$^3$per second. How fast is the surface area increasing when the length of an edge is 12 cm?
$\Rightarrow$ Volume of a cube: $V = x^3$
$\Rightarrow$ Surface area: $S = 6x^2$
$\Rightarrow \dfrac{dV}{dt} = 8$ cm$^3$ per second. Then, by the chain rule,
$$8 = \dfrac{dV}{dt} = \dfrac{d(x^3)}{dt} = \dfrac{d(x^3)}{dx}\times\dfrac{dx}{dt} = 3x^2\dfrac{dx}{dt}$$
$$\Rightarrow \dfrac{dx}{dt} = \dfrac{8}{3x^2} \qquad\qquad (1)$$
Now we solve for $\dfrac{dS}{dt}$, again using the chain rule:
$$\dfrac{dS}{dt} = \dfrac{d(6x^2)}{dt} = \dfrac{d(6x^2)}{dx}\times\dfrac{dx}{dt} = 12x\dfrac{dx}{dt} = 12x\times\dfrac{8}{3x^2} = \dfrac{32}{x}$$
Thus, using the given condition, when $x = 12$ cm:
$$\dfrac{dS}{dt} = \dfrac{32}{12} = \dfrac{8}{3}~\text{cm}^2\text{ per second}$$
$\therefore$ Hence, if the length of the edge of the cube is 12 cm, the surface area is increasing at the rate of $\dfrac{8}{3}$ cm$^2$ per second.
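The related-rates result can be double-checked numerically; this is an illustrative sketch (function names are my own, not part of the original solution):

```python
def surface_rate(x, dV_dt=8.0):
    # dS/dt = 12 x dx/dt with dx/dt = (dV/dt) / (3 x^2), i.e. 32/x here
    return 12 * x * dV_dt / (3 * x ** 2)

# closed-form answer at x = 12 cm
assert abs(surface_rate(12.0) - 8.0 / 3.0) < 1e-12

# cross-check by central differencing S(V) = 6 V^(2/3) along the motion V(t) = V0 + 8t
V0 = 12.0 ** 3
S = lambda V: 6.0 * V ** (2.0 / 3.0)
h = 1e-5
numeric = (S(V0 + 8.0 * h) - S(V0 - 8.0 * h)) / (2.0 * h)
assert abs(numeric - 8.0 / 3.0) < 1e-6
```

Both routes agree with the $\dfrac{8}{3}$ cm$^2$ per second obtained above.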
## Thursday, April 17, 2008
### [MT 5] PSD of ramp-like random processes
In the same way as [MT 4] we can consider single "steps" of a different form. Most relevant for our CSK fluctuation models are sawtooth-like pulses of the form
$$s(t)=\begin{cases} 0 & \text{if } t<0 \\ t & \text{if } 0\leq t\leq 1 \\ 2-t & \text{if } 1\leq t\leq 2 \\ 0 & \text{if } t>2 \end{cases}$$
$$s(\omega)=\left[4\cos(\omega)\sin^2(\omega/2)\right]\,\frac{1}{\omega^2}$$
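A quick numerical check (my own sketch, not from the original post): the full Fourier transform of this pulse is the shifted-triangle result $e^{-i\omega}\,4\sin^2(\omega/2)/\omega^2$, whose real part is the $4\cos(\omega)\sin^2(\omega/2)/\omega^2$ quoted above. Comparing against direct quadrature of $\int_0^2 s(t)\,e^{-i\omega t}\,dt$:

```python
import cmath

def S_analytic(w):
    # shifted triangle pulse: e^{-i w} * 4 sin^2(w/2) / w^2
    return cmath.exp(-1j * w) * 4.0 * cmath.sin(w / 2.0) ** 2 / w ** 2

def S_numeric(w, n=20000):
    # trapezoid quadrature of the Fourier integral over the support [0, 2]
    dt = 2.0 / n
    total = 0j
    for k in range(n + 1):
        t = k * dt
        s = t if t <= 1.0 else 2.0 - t
        weight = 0.5 if k in (0, n) else 1.0
        total += weight * s * cmath.exp(-1j * w * t) * dt
    return total

for w in (0.7, 1.3, 2.9):
    assert abs(S_numeric(w) - S_analytic(w)) < 1e-6
```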
## *DISTRIBUTION
Keyword type: model definition
The *DISTRIBUTION keyword can be used to define elementwise local coordinate systems. In each line underneath the keyword the user lists an element number or element set and the coordinates of the points “a” and “b” describing the local system according to Figure 151 or 152 depending on whether the local system is rectangular or cylindrical. However, the first line underneath the *DISTRIBUTION keyword is reserved for the default local system and the element or element set entry should be left empty. There is one required parameter NAME specifying the name (maximum 80 characters) of the distribution.
Whether the local system is rectangular or cylindrical is determined by the *ORIENTATION card using the distribution. The local orientations defined underneath the *DISTRIBUTION card do not become active unless:
• the distribution is referred to by an *ORIENTATION card
• this *ORIENTATION card is used on a *SOLID SECTION card.
So far, a distribution can only be used in connection with a *SOLID SECTION card and not by any other SECTION cards (such as *SHELL SECTION, *BEAM SECTION etc.).
Two restrictions apply to the use of a distribution:
• an element should not be listed underneath more than one *DISTRIBUTION card
• a distribution cannot be used by more than one *ORIENTATION card.
First line:
• *DISTRIBUTION
• Enter the required parameter NAME.
Second line:
• empty
• X-coordinate of point a.
• Y-coordinate of point a.
• Z-coordinate of point a.
• X-coordinate of point b.
• Y-coordinate of point b.
• Z-coordinate of point b.
Following lines
• element label or element set label
• X-coordinate of point a.
• Y-coordinate of point a.
• Z-coordinate of point a.
• X-coordinate of point b.
• Y-coordinate of point b.
• Z-coordinate of point b.
Example:
*DISTRIBUTION,NAME=DI
,1.,0.,0.,0.,1.,0.
E1,0.,0.,1.,0.,1.,0.
defines a distribution with name DI. The default local orientation is defined by a=(1,0,0) and b=(0,1,0). The local orientation for the elements in set E1 is described by a=(0,0,1) and b=(0,1,0).
Example files: beampo4.
# Intuition behind tensor product interactions in GAMs (MGCV package in R)
Generalized additive models are those where $$y = \alpha + f_1(x_1) + f_2(x_2) + e_i$$ for example. The functions are smooth and are to be estimated, usually by penalized splines. MGCV is a package in R that does so, and the author (Simon Wood) has written a book about his package with R examples. Ruppert, et al. (2003) write a far more accessible book about simpler versions of the same thing.
My question is about interactions within these sorts of models. What if I want to do something like the following: $$y = \alpha + f_1(x_1) + f_2(x_2) + f_3(x_1\times x_2) + e_i$$ if we were in OLS land (where the $f$ is just a beta), I'd have no problem with interpreting $\hat{f}_3$. If we estimate via penalized splines, I also have no problem with interpretation in the additive context.
But the gam function in the MGCV package has these things called "tensor product smooths". I google "tensor product" and my eyes immediately glaze over trying to read the explanations that I find. Either I'm not smart enough or the math isn't explained very well, or both.
As far as I can tell, a normal additive interaction would be fit by
normal = gam(y~s(x1)+s(x2)+s(x1*x2))
a tensor product would do the same (?) thing by
what = gam(y~te(x1,x2))
when I do
plot(what)
or
vis.gam(what)
I get some really cool output. But I have no idea what is going on inside the black box that is te(), nor how to interpret the aforementioned cool output. Just the other night I had a nightmare that I was giving a seminar. I showed everyone a cool graph, they asked me what it meant, and I didn't know. Then I discovered that I had no clothes on.
Could anyone help both me, and posterity, by giving a bit of mechanics and intuition on what is going on underneath the hood here? Ideally by saying a bit about the difference between the normal additive interaction case and the tensor case? Bonus points for saying everything in simple English before moving on to the math.
• simple example, taken from the package author's book: library(mgcv) data(trees) ct5 <- gam(Volume ~ te(Height,Girth,k=5),family=Gamma(link=log),data=trees) ct5 vis.gam(ct5) plot(ct5,too.far=0.15) – generic_user Dec 8 '12 at 21:50
I'll (try to) answer this in three steps: first, let's identify exactly what we mean by a univariate smooth. Next, we will describe a multivariate smooth (specifically, a smooth of two variables). Finally, I'll make my best attempt at describing a tensor product smooth.
## 1) Univariate smooth
Let's say we have some response data $y$ that we conjecture is an unknown function $f$ of a predictor variable $x$ plus some error $ε$. The model would be:
$$y=f(x)+ε$$
Now, in order to fit this model, we have to identify the functional form of $f$. The way we do this is by identifying basis functions, which are superposed in order to represent the function $f$ in its entirety. A very simple example is a linear regression, in which the basis functions are just $β_2x$ and $β_1$, the intercept. Applying the basis expansion, we have
$$y=β_1+β_2x+ε$$
In matrix form, we would have:
$$Y=Xβ+ε$$
Where $Y$ is an n-by-1 column vector, $X$ is an n-by-2 model matrix, $β$ is a 2-by-1 column vector of model coefficients, and $ε$ is an n-by-1 column vector of errors. $X$ has two columns because there are two terms in our basis expansion: the linear term and the intercept.
The same principle applies for basis expansion in MGCV, although the basis functions are much more sophisticated. Specifically, individual basis functions need not be defined over the full domain of the independent variable $x$. Such is often the case when using knot-based bases (see "knot based example"). The model is then represented as the sum of the basis functions, each of which is evaluated at every value of the independent variable. However, as I mentioned, some of these basis functions take on a value of zero outside of a given interval and thus do not contribute to the basis expansion outside of that interval. As an example, consider a cubic spline basis in which each basis function is symmetric about a different value (knot) of the independent variable -- in other words, every basis function looks the same but is just shifted along the axis of the independent variable (this is an oversimplification, as any practical basis will also include an intercept and a linear term, but hopefully you get the idea).
To be explicit, a basis expansion of dimension $i-2$ could look like:
$$y=β_1+β_2x+β_3f_1(x)+β_4f_2(x)+...+β_if_{i-2} (x)+ε$$
where each function $f$ is, perhaps, a cubic function of the independent variable $x$.
The matrix equation $Y=Xβ+ε$ can still be used to represent our model. The only difference is that $X$ is now an n-by-i matrix; that is, it has a column for every term in the basis expansion (including the intercept and linear term). Since the process of basis expansion has allowed us to represent the model in the form of a matrix equation, we can use linear least squares to fit the model and find the coefficients $β$.
This is an example of unpenalized regression, and one of the main strengths of MGCV is its smoothness estimation via a penalty matrix and smoothing parameter. In other words, instead of:
$$β=(X^TX)^{-1}X^TY$$
we have:
$$β=(X^TX+λS)^{-1}X^TY$$
where $S$ is a quadratic $i$-by-$i$ penalty matrix and $λ$ is a scalar smoothing parameter. I will not go into the specification of the penalty matrix here, but it should suffice to say that for any given basis expansion of some independent variable and definition of a quadratic "wiggliness" penalty (for example, a second-derivative penalty), one can calculate the penalty matrix $S$.
MGCV can use various means of estimating the optimal smoothing parameter $λ$. I will not go into that subject since my goal here was to give a broad overview of how a univariate smooth is constructed, which I believe I have done.
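The penalized fit above can be sketched in a few lines of NumPy (Python rather than R, with a polynomial basis standing in for a spline basis; the second-difference penalty and all names are my own illustrative assumptions, not mgcv's internals):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 8
x = np.linspace(0.0, 1.0, n)
X = np.vander(x, k, increasing=True)          # stand-in basis (columns 1, x, x^2, ...)
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.normal(size=n)

# a quadratic "wiggliness" penalty built from second differences of the coefficients
D = np.diff(np.eye(k), n=2, axis=0)
S = D.T @ D

def fit(lam):
    # beta = (X'X + lam S)^{-1} X'y
    return np.linalg.solve(X.T @ X + lam * S, X.T @ y)

beta_rough, beta_smooth = fit(1e-8), fit(1e2)
# a larger smoothing parameter shrinks the penalty term beta' S beta
assert beta_smooth @ S @ beta_smooth < beta_rough @ S @ beta_rough
```

The only moving parts are the basis matrix $X$, the penalty matrix $S$, and the scalar $\lambda$; everything that follows in the multivariate and tensor product cases reuses this same template.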
## 2) Multivariate smooth
The above explanation can be generalized to multiple dimensions. Let's go back to our model that gives the response $y$ as a function $f$ of predictors $x$ and $z$. The restriction to two independent variables will prevent cluttering the explanation with arcane notation. The model is then:
$$y=f(x,z)+ε$$
Now, it should be intuitively obvious that we are going to represent $f(x,z)$ with a basis expansion (that is, a superposition of basis functions) just like we did in the univariate case of $f(x)$ above. It should also be obvious that at least one, and almost certainly many more, of these basis functions must be functions of both $x$ and $z$ (if this was not the case, then implicitly $f$ would be separable such that $f(x,z)=f_x(x)+f_z(z)$). A visual illustration of a multidimensional spline basis can be found here. A full two dimensional basis expansion of dimension $i-3$ could look something like:
$$y=β_1+β_2x+β_3z+β_4f_1(x,z)+...+β_if_{i-3} (x,z)+ε$$
I think it's pretty clear that we can still represent this in matrix form with:
$$Y=Xβ+ε$$
by simply evaluating each basis function at every unique combination of $x$ and $z$. The solution is still:
$$β=(X^TX)^{-1}X^TY$$
Computing the second derivative penalty matrix is very much the same as in the univariate case, except that instead of integrating the second derivative of each basis function with respect to a single variable, we integrate the sum of all second derivatives (including partials) with respect to all independent variables. The details of the foregoing are not especially important: the point is that we can still construct penalty matrix $S$ and use the same method to get the optimal value of smoothing parameter $λ$, and given that smoothing parameter, the vector of coefficients is still:
$$β=(X^TX+λS)^{-1}X^TY$$
Now, this two-dimensional smooth has an isotropic penalty: this means that a single value of $λ$ applies in both directions. This works fine when both $x$ and $z$ are on approximately the same scale, such as a spatial application. But what if we replace spatial variable $z$ with temporal variable $t$? The units of $t$ may be much larger or smaller than the units of $x$, and this can throw off the integration of our second derivatives because some of those derivatives will contribute disproportionately to the overall integration (for example, if we measure $t$ in nanoseconds and $x$ in light years, the integral of the second derivative with respect to $t$ may be vastly larger than the integral of the second derivative with respect to $x$, and thus "wiggliness" along the $x$ direction may go largely unpenalized). Slide 15 of the "smooth toolbox" I linked has more detail on this topic.
It is worth noting that we did not decompose the basis functions into marginal bases of $x$ and $z$. The implication here is that multivariate smooths must be constructed from bases supporting multiple variables. Tensor product smooths support construction of multivariate bases from univariate marginal bases, as I explain below.
## 3) Tensor product smooths
Tensor product smooths address the issue of modeling responses to interactions of multiple inputs with different units. Let's suppose we have a response $y$ that is a function $f$ of spatial variable $x$ and temporal variable $t$. Our model is then:
$$y=f(x,t)+ε$$
What we'd like to do is construct a two-dimensional basis for the variables $x$ and $t$. This will be a lot easier if we can represent $f$ as:
$$f(x,t)=f_x(x)f_t(t)$$
In an algebraic / analytical sense, this is not necessarily possible. But remember, we are discretizing the domains of $x$ and $t$ (imagine a two-dimensional "lattice" defined by the locations of knots on the $x$ and $t$ axes) such that the "true" function $f$ is represented by the superposition of basis functions. Just as we assumed that a very complex univariate function may be approximated by a simple cubic function on a specific interval of its domain, we may assume that the non-separable function $f(x,t)$ may be approximated by the product of simpler functions $f_x(x)$ and $f_t(t)$ on an interval—provided that our choice of basis dimensions makes those intervals sufficiently small!
Our basis expansion, given an $i$-dimensional basis in $x$ and $j$-dimensional basis in $t$, would then look like:
\begin{align} y = &β_{1} + β_{2}x + β_{3}f_{x1}(x)+β_{4}f_{x2}(x)+...+ \\ &β_{i}f_{x(i-3)}(x)+ β_{i+1}t + β_{i+2}tx + β_{i+3}tf_{x1}(x)+β_{i+4}tf_{x2}(x)+...+ \\ &β_{2i}tf_{x(i-3)}(x)+ β_{2i+1}f_{t1}(t) + β_{2i+2}f_{t1}(t)x + β_{2i+3}f_{t1}(t)f_{x1}(x)+β_{i+4}f_{t1}(t)f_{x2}(x){\small +...+} \\ &β_{2i}f_{t1}(t)f_{x(i-3)}(x)+\ldots+ \\ &β_{ij}f_{t(j-3)}(t)f_{x(i-3)}(x) + ε \end{align}
Which may be interpreted as a tensor product. Imagine that we evaluated each basis function in $x$ and $t$, thereby constructing n-by-i and n-by-j model matrices $X$ and $T$, respectively. We could then compute the $n^2$-by-$ij$ tensor product $X \otimes T$ of these two model matrices and reorganize into columns, such that each column represented a unique combination $ij$. Recall that the marginal model matrices had $i$ and $j$ columns, respectively. These values correspond to their respective basis dimensions. Our new two-variable basis should then have dimension $ij$, and therefore the same number of columns in its model matrix.
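The column-by-column construction just described amounts to a row-wise Kronecker product of the two marginal model matrices. A NumPy sketch (my own illustration, with random matrices standing in for evaluated basis functions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, i, j = 50, 4, 3
X = rng.normal(size=(n, i))   # marginal basis in x, evaluated at n data points
T = rng.normal(size=(n, j))   # marginal basis in t, at the same n points

# tensor product basis: column (p, q) is the elementwise product X[:, p] * T[:, q]
B = np.einsum('np,nq->npq', X, T).reshape(n, i * j)

assert B.shape == (n, i * j)
assert np.allclose(B[:, 0], X[:, 0] * T[:, 0])
assert np.allclose(B[:, j + 1], X[:, 1] * T[:, 1])
```

Each row stays tied to one observation, so the resulting model matrix is n-by-$ij$, ready to be used in $Y=Xβ+ε$ as before.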
NOTE: I'd like to point out that since we explicitly constructed the tensor product basis functions by taking products of marginal basis functions, tensor product bases may be constructed from marginal bases of any type. They need not support more than one variable, unlike the multivariate smooth discussed above.
In reality, this process results in an overall basis expansion of dimension $ij-i-j+1$ because the full multiplication includes multiplying every $t$ basis function by the x-intercept $β_{x1}$ (so we subtract $j$) as well as multiplying every $x$ basis function by the t-intercept $β_{t1}$ (so we subtract $i$), but we must add the intercept back in by itself (so we add 1). This is known as applying an identifiability constraint.
So we can represent this as:
$$y=β_1+β_2x+β_3t+β_4f_1(x,t)+β_5f_2(x,t)+...+β_{ij-i-j+1}f_{ij-i-j-2}(x,t)+ε$$
Where each of the multivariate basis functions $f$ is the product of a pair of marginal $x$ and $t$ basis functions. Again, it's pretty clear having constructed this basis that we can still represent this with the matrix equation:
$$Y=Xβ+ε$$
Which (still) has the solution:
$$β=(X^TX)^{-1}X^TY$$
Where the model matrix $X$ has $ij-i-j+1$ columns. As for the penalty matrices $J_x$ and $J_t$, these are constructed separately for each independent variable as follows:
$$J_x=β^T I_j \otimes S_x β$$
and,
$$J_t=β^T S_t \otimes I_i β$$
This allows for an overall anisotropic (different in each direction) penalty (Note: the penalties on the second derivative of $x$ are added up at each knot on the $t$ axis, and vice versa). The smoothing parameters $λ_x$ and $λ_t$ may now be estimated in much the same way as the single smoothing parameter was for the univariate and multivariate smooths. The result is that the overall shape of a tensor product smooth is invariant to rescaling of its independent variables.
I recommend reading all the vignettes on the MGCV website, as well as "Generalized Additive Models: An Introduction with R." Long live Simon Wood.
• Nice answer. I've since learned quite a lot more than I knew three years ago. But I'm not sure that I would have understood 3 years ago what you wrote today. Or maybe I would have. I think the place to start is to think of a basis expansion in many dimensions as a "net" across the variable space. I suppose tensors can be described as a net with rectangular patterns... And maybe different "shear" forces pulling from each direction. – generic_user Sep 18 '15 at 1:26
• On another note, I would caution you against thinking of the tensor product as representing something spatial. This is because the actual tensor product of marginal $x$ and $t$ basis functions will include tons of zeros which represent the evaluation of basis functions outside of their defined range. The actual tensor product will usually be very sparse. – Josh Sep 18 '15 at 12:16
• Thanks for this great summary! Just one remark: The equation after "Our basis expansion," is not completely correct. It does give the correct basis functions, but it gives a parametrization where the corresponding parameters are of product form ($\beta_{xi}\beta_{tj}$). – jarauh Dec 18 '17 at 9:24
• @Josh Ok, I tried. It's not easy to have it correct and easy to understand at the same time (and to follow someone else's notation). By the way, the link to smooth-toolbox.pdf seems to be broken. – jarauh Dec 19 '17 at 10:12
• Looks good. Apparently your edit was rejected, but I overrode the rejection and approved it. When I started writing this answer I didn't realize just how confusing the expansions would look. I should probably go back and rewrite it with pi (product) notation one of these days. – Josh Dec 19 '17 at 13:55
# What is 1/3 as a percentage?
How to show 1/3 as a percent? Here are the easiest ways to do so!
We need to convert the given fraction 1/3 to a percentage, and in doing so learn the different methods by which this can be achieved.
There are two different methods for converting a fraction to a percentage, and here we shall learn both. Percentages are useful in many situations, so it is worth knowing how to convert a fraction into its percentage form.
What is 1/3 as a percent? The answer: 1/3 as a percentage is 33.3333%.
## Basic concepts of converting a fraction to a percentage
1) A fraction mainly consists of two parts, the numerator, and the denominator.
2) The numerator refers to the digits that are present above the line of division. The denominator on the other hand talks about all the digits that are present below the line of division.
3) When we divide a fraction, the numerator becomes the dividend (the number being divided), while the denominator becomes the divisor (the number we divide by).
4) What is a percentage? A percentage expresses a given number in terms of one hundred; in other words, a percentage is a fraction with 100 as the denominator. It lets you state the value of a quantity and compare it against a base of one hundred.
## Calculation to show 1/3 as a percent
There are two different methods that we will learn to convert the fractional value to its percentage form.
### Method #1
Step 1
You first need to change the value of the provided denominator to 100. To do so, you have to divide 100 by the denominator.
#### 100÷3 = 33.333333333333
$\frac{100}{3}=33.333333333333$
Step 2
Now multiply both the numerator and the denominator by the result obtained in the first step.
#### (1×33.3333)÷(3×33.3333) = 33.3333/100
$\frac{(1\times33.3333)}{(3\times33.3333)}=\frac{33.3333}{100}$
Thus the required result as a percentage is 33.3333%.
### Method #2
Now we shall learn another equally easy method but a different way to solve the given problem and change the fraction to a percentage.
Step 1
You have to divide the numerator by the denominator, where 1 is the numerator and 3 is the denominator. The numerator, in this case, becomes the dividend, and the denominator 3 becomes the divisor.
Thus,
#### 1÷3= 0.33333333
Step 2
Now for the final step, you have to multiply the obtained results in decimals by 100 to yield the final result in percentage.
Hence,
#### 0.33333×100=33.3333%
Thus we have seen two different methods of converting the given fraction 1/3 to its percentage form, which is 33.3333%, as found through the calculations above.
#### 1/3 as a percent = 33.3333%
Both the methods that are described here are easy and the stepwise technique helps you to remember them well when you need to solve questions on your own.
There should be no confusion regarding the steps involved in changing a fraction to its percentage form. Here we took the simple example of ⅓ to find the answer to our question and learn the procedures for solving problems of this kind.
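The second method above maps directly to code; a minimal sketch (the function name and the rounding to four places are our own choices):

```python
def fraction_to_percent(numerator, denominator, places=4):
    # Method 2: divide the numerator by the denominator, then multiply by 100.
    return round(numerator / denominator * 100, places)

print(fraction_to_percent(1, 3))    # 33.3333
print(fraction_to_percent(13, 15))  # 86.6667
```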
Also read: What is 12/15 as a percentage?
# Article
Keywords:
Holt-Winters smoothing; robust methods; time series
Summary:
To obtain a robust version of exponential and Holt-Winters smoothing the idea of $M$-estimation can be used. The difficulty is the formulation of an easy-to-use recursive formula for its computation. A first attempt was made by Cipra (Robust exponential smoothing, J. Forecast. {\it 11} (1992), 57--69). The recursive formulation presented there, however, is unstable. In this paper, a new recursive computing scheme is proposed. A simulation study illustrates that the new recursions result in smaller forecast errors on average. The forecast performance is further improved upon by using auxiliary robust starting values and robust scale estimates.
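The idea the summary describes can be illustrated with a minimal sketch of robust exponential smoothing via Huber-type data cleaning: one-step forecast errors are shrunk with a ψ-function before the level update, so a single outlier cannot drag the smoother. This shows only the general principle, with `lam`, `k`, and `sigma` as illustrative parameters; it is not the authors' exact recursive scheme, which also updates a robust scale estimate.

```python
import numpy as np

def huber_psi(x, k=2.0):
    # Huber psi-function: identity near zero, clipped beyond +/- k.
    return np.clip(x, -k, k)

def robust_exp_smooth(y, lam=0.3, k=2.0, sigma=1.0):
    """Robust exponential smoothing via data cleaning (illustrative sketch)."""
    level = y[0]
    fitted = [level]
    for obs in y[1:]:
        resid = obs - level
        # Replace the raw observation by a "cleaned" one before updating.
        cleaned = level + sigma * huber_psi(resid / sigma, k)
        level = lam * cleaned + (1 - lam) * level
        fitted.append(level)
    return np.array(fitted)

# An outlier at t = 5 barely moves the robust smoother.
y = np.array([10., 10.2, 9.9, 10.1, 10.0, 50.0, 10.1, 9.8, 10.0])
print(robust_exp_smooth(y))
```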
References:
[1] Chatfield, C., Koehler, A., Ord, J., Snyder, R.: A new look at models for exponential smoothing. The Statistician 50 (2001), 147-159. MR 1831380
[2] Cipra, T.: Robust exponential smoothing. J. Forecast. 11 (1992), 57-69. DOI 10.1002/for.3980110106
[3] Cipra, T., Romera, R.: Kalman filter with outliers and missing observations. Test 6 (1997), 379-395. DOI 10.1007/BF02564705 | MR 1616912 | Zbl 0893.62094
[4] Davies, P., Fried, R., Gather, U.: Robust signal extraction for on-line monitoring data. J. Stat. Plann. Inference 122 (2004), 65-78. DOI 10.1016/j.jspi.2003.06.012 | MR 2057914 | Zbl 1040.62099
[5] Fried, R.: Robust filtering of time series with trends. J. Nonparametric Stat. 16 (2004), 313-328. DOI 10.1080/10485250410001656444 | MR 2073028 | Zbl 1065.62162
[6] Gather, U., Schettlinger, K., Fried, R.: Online signal extraction by robust linear regression. Comput. Stat. 21 (2006), 33-51. DOI 10.1007/s00180-006-0249-8 | MR 2252439 | Zbl 1114.62047
[7] Gelper, S., Fried, R., Croux, C.: Robust forecasting with exponential and Holt-Winters smoothing. Preprint (2007). MR 2752114
[8] Holt, C.: Forecasting seasonals and trends by exponentially weighted moving averages. ONR Research Memorandum 52 (1959).
[9] Kotsialos, A., Papageorgiou, M., Poulimenos, A.: Long-term sales forecasting using Holt-Winters and neural network methods. J. Forecast. 24 (2005), 353-368. DOI 10.1002/for.943 | MR 2190371
[10] Romera, R., Cipra, T.: On practical implementation of robust Kalman filtering. Comm. Stat., Simulation Comput. 24 (1995), 461-488. DOI 10.1080/03610919508813252 | MR 1333047 | Zbl 0850.62688
[11] Siegel, A.: Robust regression using repeated medians. Biometrika 69 (1982), 242-244. DOI 10.1093/biomet/69.1.242 | Zbl 0483.62026
[12] Taylor, J.: Forecasting daily supermarket sales using exponentially weighted quantile regression. Eur. J. Oper. Res. 178 (2007), 154-167. DOI 10.1016/j.ejor.2006.02.006 | Zbl 1102.62103
[13] Winters, P.: Forecasting sales by exponentially weighted moving averages. Manage. Sci. 6 (1960), 324-342. DOI 10.1287/mnsc.6.3.324 | MR 0112740 | Zbl 0995.90562
[14] Yohai, V., Zamar, R.: High breakdown-point estimates of regression by means of the minimization of an efficient scale. J. Am. Stat. Assoc. 83 (1988), 406-413. DOI 10.1080/01621459.1988.10478611 | MR 0971366 | Zbl 0648.62036
# OpenFOAM¶
OpenFOAM is a free, open source CFD software package.
This entry provides basic information on how to run OpenFOAM from Open CFD. NB OpenFOAM is still in testing, and this guide is very liable to change.
## General OpenFOAM documentation¶
OpenFOAM is complex, and you should start by reading the official documentation at http://www.openfoam.org/docs/user
## Setting up the environment¶
You can set up the environment variables necessary to run both OpenFOAM version 2.3.0 and ParaView version 4.1.0 by loading the openfoam environment module.
% module add openfoam
This module also sets up environment variables that define default locations for your OpenFOAM cases and binaries.
$WM_PROJECT_USER_DIR   ~/OpenFOAM/2.3.0
$FOAM_RUN              ~/OpenFOAM/2.3.0/run
$FOAM_USER_APPBIN      ~/OpenFOAM/2.3.0/platforms/linux64Gcc47DPOpt/bin
The first time you use OpenFOAM, you should create these directories with the commands below and use them for your OpenFOAM cases.
mkdir -p $WM_PROJECT_USER_DIR
mkdir -p $FOAM_RUN
mkdir -p $FOAM_USER_APPBIN
## Getting Started¶
Create the project directories above.
Copy the tutorial examples directory in the OpenFOAM distribution to the run directory.
cp -r $FOAM_TUTORIALS$FOAM_RUN
Run the first example case of incompressible laminar flow in a cavity:
cd $FOAM_RUN/tutorials/incompressible/icoFoam/cavity
blockMesh
icoFoam
paraFoam
Now refer to the OpenFOAM User Guide to get more information!
## Launching on the login node¶
Visualisation of OpenFOAM results can be done using paraFoam at the command prompt; i.e.:
% paraFoam
Warning: if you’re visualising large data sets this should NOT be done on the login nodes, since this can use a considerable amount of RAM and CPU. Instead, you should request an interactive job with UGE (SGE).
## Using Univa Grid Engine¶
Univa Grid Engine (UGE) allows both interactive and batch jobs to be submitted, with exclusive access to the resources requested.
## Running through an interactive shell¶
To launch paraFoam interactively, displaying the full GUI:
% qrsh -q eng-inf_parallel.q -cwd -V -l h_rt=<hh:mm:ss> paraFoam
In the above command, hh:mm:ss is the length of real time the shell will exist for, -cwd means use the current working directory, and -V exports the current environment. For example, to run paraFoam for 1 hour:
% qrsh -q eng-inf_parallel.q -cwd -V -l h_rt=1:00:00 paraFoam
This will run paraFoam within the terminal from which it was launched. You will need to be in an appropriate directory for paraFoam to find the correct files.
## Batch Execution¶
To run OpenFOAM in batch mode you first need to set up your case. A script must then be created that will request resources from the queuing system and launch the desired OpenFOAM executables, e.g. the script runfoam.sh:
#!/bin/bash
# Use the current working directory
#$ -cwd
# Use the Engineering/Informatics parallel queue
#$ -q eng-inf_parallel.q
# Load OpenFOAM module
. /etc/profile.d/modules.sh
module add openfoam
# Run actual OpenFOAM commands
blockMesh
icoFoam
This can be submitted to the queuing system using:
qsub runfoam.sh
## Parallel Execution¶
If you’ve configured your OpenFOAM job to be solved in parallel, you need to submit it differently. OpenFOAM uses its own private version of OpenMPI 1.6.5, so you don’t need to explicitly load an openmpi module. This is an example of a suitable submission script that reserves 2 GB RAM per slot. NB there is a space between “.” and “/etc/profile.d/modules.sh” in the script below! Create the script runfoam-mpi.sh:
#!/bin/bash
#$ -l h_vmem=2G
#$ -pe openmpi 8
#$ -cwd -V
#$ -q eng-inf_parallel.q
export MPI_BUFFER_SIZE=8192
. /etc/profile.d/modules.sh
module purge
module add openfoam
mpirun -np $NSLOTS interFoam -parallel
Submit it with:
qsub runfoam-mpi.sh
19k views
### What are the advantages of TikZ/PGF over PSTricks?
The first time I saw the PSTricks' 3D Galleries, I immediately fell in love with it. I have spent much time to learn and use it. In this forum, I see many people using TikZ. I have not used TikZ yet. ...
21k views
### Why do people still use Postscript?
I submitted a journal paper this morning, and they asked me to include a PDF file, which I expected, and a Postscript (PS) file. Generating the PS file proved more difficult, because some of my ...
86k views
### How to use PSTricks in pdfLaTeX?
I thought that PSTricks package was not possible to use in pdfLaTeX but the user Dima claims otherwise. How can I force pdfLaTeX to use PSTricks then?
18k views
### How to draw a shaded sphere?
Andrew Stacey pointed out that the Rosetta Code entry "Draw a Sphere" doesn't have a TikZ entry yet. Is there a way to draw a "properly" shaded sphere using TikZ? The ball shading would seem an ...
6k views
### Why does anyone prefer Metapost?
TikZ (together with its PGF backend) is the most widely used picture drawing tool by regulars here, having more than 50x as many questions as for Metapost, alongside a wealth of documentation and user ...
1k views
### Why doesn't pdfTeX support PStricks directly?
I am sorry, I have to remove the previous question. I didn't realize I had messed up everything (from my other questions) here. The hidden idea was actually as follows, but I had simplified it in a ...
830 views
### let operation vs tkz-euclide
Before put my question to give some explanation of why I do this question. When I started to use the TeX a matter of luck that I started working with LaTeX (xelatex) and not with LuaTex for example. ...
5k views
### psfrag equivalent for pdflatex
I am currently updating my workflow from the old tex2ps2dvi workflow (needs a lot of time and matplotlib and other stuff etc. hasn't a EPS output (only PDF, SVG, PS, PNG, etc...). I am trying to ...
711 views
### Light object / source in PGF / TikZ / PGFplots pictures like in PSTricks
I came across the PSTricks animation located at http://melusine.eu.org/syracuse/pstricks/pst-solides3d/animations/a43/ http://i.stack.imgur.com/NlqHZ.gif and noticed that it uses a light object / ...
553 views
### Vector graphics using arbitrary functions (other than bezier, arcs etc) to create shapes
Is it possible to create vector graphics with latex or an external program that allows the usage of more or less arbitrary functions (other than beziers and standard shapes like circles etc) to define ...
787 views
### Sketching free-hand, importing into TikZ
I need to reproduce various 2D and 3D figures, such as the ones shown below, which were originally hand-drawn. Parts of each figure can clearly be done directly with TikZ code, but other parts, ...
872 views
### tikzpictures give huge pdf files when processed via dvips and ps2pdf
I have written a paper that includes a couple of TikZ pictures and submitted it to a (Springer) journal. I was quite shocked when I got the pdf file they produced from my source file, since it was ...
1k views
### Drawing a hyperbolic trajectory
In Drawing the Celestial Sphere with Tikz Package, they use pspicture. Is there a way to use pspicture inside of tikz? I want to have a hyperbola pass by the Earth with a periapsis of 500km (scaled ...
2011, Vol. 29, Issue (4): 7-279-284.
### cDNA Cloning and Sequence Analysis of Musca domestica Antifungal Peptide-1 (MAF-1)
FU Ping, Tun-Jian-Wei, Guo-Guan
1. Department of Parasitology, Guiyang Medical College, Guiyang 550004, China
• Online:2011-08-30 Published:2012-09-27
Abstract: Objective To clone the cDNA sequence of Musca domestica antifungal peptide-1 (MAF-1) and analyze its amino acid sequence by bioinformatics methods. Methods Using primers designed from the N-terminal amino acid sequence of MAF-1, the cDNA and amino acid sequence of MAF-1 were obtained by RACE and nested PCR. The accuracy of the experiment was confirmed by RT-PCR, and the characteristics of the sequence were analyzed with bioinformatics software. Results The cDNA sequence of MAF-1 obtained by 3′RACE was 568 bp long, including an open reading frame (ORF) of 441 bp and a 3′UTR of 127 bp. It was a novel sequence, deposited in GenBank under accession number HM178948, since no homology was found when it was compared with other sequences by BLAST. Together with the 9 amino acids that were not used to design the primers, the whole MAF-1 sequence inferred from its cDNA comprised 156 amino acids. A 139 bp cDNA sequence was obtained by 5′RACE, consistent with the 3′RACE result. RT-PCR showed that the cDNA of the MAF-1 mature peptide was accurate. Bioinformatics analysis indicated that the theoretical molecular weight and isoelectric point of the whole MAF-1 protein sequence were similar to those measured, and ExPASy analysis showed that the MAF-1 gene has a signal peptide. The protein is rich in α-helix, with a domain located between amino acid residues 128 and 153. Subcellular localization analysis predicted MAF-1 to be mostly nuclear. PredictProtein found two protein kinase C phosphorylation sites and one N-myristoylation site, and predicted that it is not a globular protein. Finally, a three-dimensional model of MAF-1 was built with 3D-PSSM. Conclusion The cDNA sequence and the amino acid sequence of MAF-1 have been obtained and analyzed successfully.
# Fast amplitude modulation up to 1.5 GHz of mid-IR free-space beams at room-temperature
## Abstract
Applications relying on mid-infrared radiation (λ ~ 3-30 μm) have progressed at a very rapid pace in recent years, stimulated by scientific and technological breakthroughs like mid-infrared cameras and quantum cascade lasers. On the other hand, standalone and broadband devices allowing control of the beam amplitude and/or phase at ultra-fast rates (GHz or more) are still missing. Here we show a free-space amplitude modulator for mid-infrared radiation (λ ~ 10 μm) that can operate at room temperature up to at least 1.5 GHz (−3dB cutoff at ~750 MHz). The device relies on a semiconductor heterostructure enclosed in a judiciously designed metal–metal optical resonator. At zero bias, it operates in the strong light-matter coupling regime up to 300 K. By applying an appropriate bias, the device transitions towards the weak-coupling regime. The large change in reflectance is exploited to modulate the intensity of a mid-infrared continuous-wave laser up to 1.5 GHz.
## Introduction
Fast amplitude and phase modulation are essential for a plethora of applications in mid-infrared (IR) photonics, including laser amplitude/frequency stabilization1, coherent detection, FM (frequency modulation) and AM (amplitude modulation) spectroscopy and sensing, mode-locking, and optical communications2,3. However, the fast and ultra-fast (1–40 GHz) modulation of mid-IR radiation is a largely under-developed functionality. The fastest modulation speeds, 20–30 GHz, have been obtained with the direct modulation of mid-IR quantum cascade lasers (QCLs), but this requires specially designed devices and elevated injected RF (radiofrequency) powers4,5,6. Interestingly, in the visible/near-IR spectral ranges the preferred solution is to separate the functionalities: independent modulators, filters, interferometers are employed that are physically separated from the source. For modulators, this leads to advantages in terms of RF power, laser linewidth and flatness of the modulation bandwidth.
Commercially available mid-IR modulators are either acousto-optic devices with narrow modulation bandwidth, or very narrow band (~100 kHz) electro-optic modulators based on GaAs or CdTe7. The latter can operate up to modulation speeds of 20 GHz, but their efficiency is very low: <0.1% sideband/carrier ratio (see “Methods” for the definition). To date, standalone, efficient and broadband amplitude/phase modulators are missing from present mid-IR photonics tools. Holmström8 reports numerical performances up to 190 GHz with step QWs in a waveguide geometry at λ = 6.6 μm, but no experimental data are provided.
Since the 1980s, proposals have been put forward to exploit intersubband (ISB) absorption in semiconductor quantum well (QW) systems to modulate mid-IR radiation. The first attempts, based on the Stark shift, were then followed by a number of works exploiting coupled QWs9,10. In both cases, the application of an external bias depletes or populates the ground state of the QW at cryogenic temperatures (from 4 K up to 130 K), thus inducing a modulation of the ISB absorption11. Room-temperature operation was obtained in ref. 12 using a Schottky contact scheme. Recently, different approaches have been proposed to actively tune the reflectance/transmission of mid-IR and/or THz beams: phase transitions in materials like VO2, liquid crystal orientation, and carrier density control in metal–insulator-semiconductor junctions13. These devices operate on the principle that, at a given wavelength, a change in absorption translates into a modulation of the transmitted power. An alternative approach is to frequency shift the ISB absorption, instead of modulating its intensity. In refs. 14,15,16,17 a giant confined Stark effect in a coupled QW system embedded in a metallic resonator, designed to be in the strong coupling regime between light and matter, was exploited. A response time of ~10 ns was estimated. Exploiting the Stark effect in ISB-based systems can effectively lead to impressive performances, but it can suffer from an intrinsic drawback: the diagonal transition has lower oscillator strength with respect to a vertical one. Higher doping is necessary to achieve the same Rabi splitting: this means higher biases to get a frequency tunability comparable to that obtained in systems based on charge modulation. On the other hand, high biases can be a significant problem when targeting fast and/or ultra-fast performances.
One way forward is photonic integration: scaling to the mid-IR the approach already developed for silicon photonics. It can rely on SiGe/Si photonic platforms18, or on the more natural InGaAs/AlInAs-on-InP platform19. In both cases the QCL source must be properly integrated in the system. An alternative is to develop modulators that can apply an ultra-fast RF modulation to a propagating beam, either in reflection or in transmission. This approach does not require a specific integration of the source and can in principle be applied to laser sources beyond QCLs.
In this article we follow the latter strategy by proposing a standalone device capable of modulating a mid-IR beam at room-temperature up to 1.5 GHz. It is based on a GaAs/AlGaAs heterostructure embedded in a metal–metal optical resonator (scheme in Fig. 1(a)). The system is designed to operate in reflectance: it is in strong coupling when no external bias is applied. We demonstrate a clear modulation of the strong coupling condition upon bias application. The response bandwidth has been measured at room-temperature, showing a −3 dB cutoff at 750 MHz, and modulation of a mid-IR laser beam up to 1.5 GHz is also reported.
## Results
### Strong coupling modulation leads to reflected beam modulation
Our approach is to operate the device in the strong light-matter coupling regime, and to introduce the fast modulation by switching the system, ideally, in and out of strong coupling with the application of a bias voltage. A periodic QW structure is embedded in an optical resonator composed of non-dispersive metal–metal one-dimensional (1D) ribbons (or 1D patch cavities20) as shown in Fig. 1(a). The system, designed to operate in reflectance, is conveniently optimized so that the ISB transition is strongly coupled to the TM03 photonic mode of the resonator, whose electric-field distribution is shown in Fig. 1(b): the notation convention TM0i is defined in the caption. The resulting reflectance RNB (no-bias reflectance) is sketched in Fig. 1(c), solid blue line. Consider a laser tuned to the bare cavity frequency νlas = νcav impinging on the device: almost all the intensity is reflected back since RNB(νlas) ~ 1. By applying a bias to the structure we can effectively empty an arbitrary number of wells and thereby change the coupling condition between the cavity mode and the ISB transition. In the best-case scenario (the ideal device) we can induce a transition to the weak-coupling regime. The corresponding reflectance spectrum is sketched in Fig. 1(c) (solid orange line): only the bare cavity transition is visible, as polaritons are no longer the eigenstates of the system. At the laser frequency νlas = νcav we have RB(νlas) ≪ RNB(νlas), where RB is the reflectance under bias B: the reflected laser beam is amplitude modulated with high contrast. Contrast and modulation height are used as synonyms in the paper: the use of one or the other is dictated only by text readability (see the Reflectance under DC external bias section below and Fig. 2 for the quantitative definition).
As the intersubband polariton dynamics features ps-level timescales21, the bandwidth of the modulator is limited by (i) the RC-constant of the circuit and (ii) the transfer time of electrons in/out of the QWs. In fact, the top and bottom metal-semiconductor Schottky interfaces permit the application of a gate to the multiple QW structure that can efficiently deplete the system, as shown in Fig. 1(d). Note: the electrical control of ISB polaritons, in a quasi-DC regime though, has been studied in refs. 22,23.
We highlight the importance of the strong coupling regime between the ISB transition and the patch cavity mode for achieving an effective free-space modulation of mid-IR laser beams, in particular as far as the spectral agility is concerned14,15. By optimizing the design it is possible to obtain amplitude modulation over a broad range, and a significant contrast even at 300 K operating temperature. A device operated in the weak-coupling regime (in a metal–metal cavity, for instance) would instead behave differently: modulating the absorption, or even tuning the frequency of the ISB transition, would only mildly affect the resonance linewidth, and the resulting modulation range and contrasts would be much smaller.
### Sample fabrication
The semiconductor heterostructure was grown by solid-source molecular beam epitaxy on an un-doped GaAs substrate. It is composed of seven periods of 8.3 nm GaAs QWs separated by 20 nm-thick Al0.33Ga0.67As barriers. Si delta-doping (nSi = 1.74 × 1012 cm−2) is introduced in the barrier center. A 40 nm-thick GaAs cap layer terminates the structure, and a 500 nm-thick Al0.50Ga0.50As layer is introduced before the active region, whose total thickness is LAR = 368.1 nm. The sample presents an ISB transition at an energy of 118.5 meV (about 955.8 cm−1), which we have measured at 300 K in a classic multipass waveguide transmission configuration (orange spectrum in Fig. 2(a)). Figure 1(d) shows the global conduction band profile at room-temperature (RT, solid lines) and at different applied biases for the fabricated structure. It was obtained by solving self-consistently the Schrödinger–Poisson equations using commercial software24. With no applied bias all the QWs are populated. The application of a bias gradually depletes them.
The modulators rely on a metal-semiconductor-metal geometry. We have wafer-bonded the sample on a n+-GaAs carrier layer via Au-Au thermo-compression wafer-bonding, a standard technology for mid-IR polaritonic devices25,26. After polishing and substrate removal, the 1D patches are defined with electron-beam lithography followed by Ti/Au deposition (5/80 nm) and lift-off. The top contact patterning and the definition of the bonding pads are realized with optical contact lithography and Ti/Au lift-off. An inductively coupled plasma (ICP) etching step down to the back metal plane defines the mesa structure. Optical microscope images of typical final devices are shown in Fig. 2(b). Arrays of devices have been fabricated that differ in the width p of the metallic fingers (nomenclature in Fig. 1(b)). For each value of p, we fabricated two arrays with different total surface (5 × 104 and 2 × 104 μm2, respectively. Fig. 2(b)). The active region being very thin (368.1 nm), the system does not operate as a photonic-crystal, but operates instead in the independent resonator regime. The cavity resonant frequency νcav is set by p, not by the period D, according to the following expression:
$$\frac{c}{\nu }=\lambda =\frac{2\ {n}_{\text{eff}}\ p}{i}\ \ \,{\text{where}}\ i\in {\mathbb{N}}.$$
(1)
The system behaves as a Fabry–Perot cavity of length p, with neff an effective index that takes into account the reflectivity phase at the metallic boundaries20,27. We opted to operate not on the i = 1 fundamental mode, the standard choice20, but on the i = 3 mode (the TM03), to simplify the fabrication procedure and increase the electromagnetic overlap factor. Supplementary Figs. 1 and 2 provide a justification for this choice.
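Eq. (1) can be evaluated directly. In the sketch below, the effective index value n_eff = 3.3 is an assumed illustrative number (close to the refractive index of GaAs), not a figure taken from the paper:

```python
# Eq. (1): lambda = 2 * n_eff * p / i for the 1D patch (Fabry-Perot) cavity.
# n_eff = 3.3 is an assumed illustrative value, not a number from the paper.
def cavity_wavelength_um(p_um, i, n_eff=3.3):
    return 2.0 * n_eff * p_um / i

# TM03 mode (i = 3) of a p = 4.2 um patch:
print(cavity_wavelength_um(4.2, 3))  # 9.24 um with the assumed n_eff
```

With this assumed index, the i = 3 mode of a 4.2 μm patch lands near the λ ~ 9.7 μm operating wavelength quoted later in the text.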
### Reflectance under DC external bias
The reflectance of the devices as a function of p has been measured with a microscope coupled to a Fourier transform infrared spectrometer (FTIR) to retrieve the polaritonic positions at RT (and at 78 K) with no applied bias. The complete dispersion is shown in Supplementary Fig. 2: together with the measurements and simulations on an empty cavity (Supplementary Fig. 1), it permits to identify the first 3 ribbon resonator modes. The TM03 mode exhibits a clear Rabi splitting for patch sizes around p = 4 μm.
A suitable device (p = 4.2 μm) was wire bonded and its reflectance was measured under different applied DC biases. When no bias is applied, we observe the two polariton branches (black curve in Fig. 2(a)). The green solid line corresponds to a +6 V bias, which is practically the limit imposed by the Ti/Au Schottky barrier. A very similar behavior is observed for negative biases, given the symmetry of the top and bottom contacts’ barriers. The Rabi splitting decreases by 25%: this means that the gate empties only half of the QWs, as $${{\Omega }}_{Rabi}\propto \sqrt{{N}_{\text{QW}}}$$, where NQW is the total number of QWs in the structure. A second sample with lower doping (nSi = 6.2 × 1011 cm−2) has been characterized. This sample can fully transition to the weak-coupling regime upon application of a bias (see Supplementary Fig. 3). However, it is inferior to the highly doped one in terms of modulation height, given the wavelength coverage of our tunable laser. For this reason we preferred to work with the highly doped device.
From the measurements we can extract the modulation height attainable on an incoming laser beam with a +6 V maximum bias with this specific device. The modulation height is defined as $$\min \left(\left| \frac{R_{{\mathrm{NB}}}-R_{{\mathrm{B}}}}{R_{{\mathrm{B}}}}\right| , \left| \frac{R_{{\mathrm{NB}}}-R_{{\mathrm{B}}}}{R_{{\mathrm{NB}}}}\right| \right)$$: it is plotted in Fig. 2(c) in the 800–1100 cm−1 range for both B = 6 V and B = 3 V. It shows that a contrast above 10% can be obtained in a few frequency ranges. In particular, a contrast between 20% and 30% can be obtained around 1030 cm−1 (λ ~ 9.70 μm). This frequency is covered by our tunable commercial QC laser (shadowed orange region in Fig. 2(c)).
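The modulation-height definition above can be sketched as follows; the two reflectance values are illustrative placeholders, not measured data from Fig. 2:

```python
# Modulation height (contrast) between the no-bias reflectance R_nb and the
# biased reflectance R_b, as defined in the text. Input values are
# illustrative, not measured data.
def modulation_height(R_nb, R_b):
    diff = abs(R_nb - R_b)
    return min(diff / R_b, diff / R_nb)

print(modulation_height(0.95, 0.70))  # ~0.26, i.e. ~26% contrast
```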
### Reflectance modulation up to 1.5 GHz
We have measured the speed and modulation bandwidth of the modulator with the setup described in Fig. 3(a). A continuous-wave (CW), tunable commercial QC laser28 is focused on the modulator (S) that is fed with an RF signal from a synthesizer (SG). The reflected, and modulated, beam is detected with a 837 MHz-bandwidth commercial MCT detector (D0,29) whose output is fed to a spectrum analyzer (SA); or—for low-frequency measurements—to a 50 MHz-bandwidth MCT detector (D1) and then to a 200 MHz lock-in amplifier. Visible and mid-infrared (MIR) cameras (Vis-Cam and MIR-Cam in Fig. 3(a)) permit a correct beam alignment on the sample, with the help of an external white light source (WL). All the measurements are performed at room-temperature (300 K).
Figure 3(b) shows the spectra obtained from the smaller modulator (p = 4.1 μm), using a QCL frequency of 1010 cm−1. The sample is driven with an RF signal of constant power (DC offset −989 mV), but different frequency (100 MHz, 500 MHz, 1 GHz, and 1.5 GHz). The reflected beam is detected with a fast MCT, whose signal feeds the SA, in both laser-on (blue solid line) and laser-off (red solid line) configurations. In the latter case, the presence of a peak on the noise floor is due to some direct cross-talk between the RF synthesizer and the spectrum analyzer through the RF injection and detection circuits. The normalization to 1 allows the comparison at different frequencies. We can detect a signal up to a modulation speed of 1.5 GHz, well beyond the VIGO detector 3 dB cutoff of 837 MHz, proving the fast character of our modulator.
In order to determine the modulator bandwidth, we performed an automated scan as a function of the modulation frequency. It consists of acquiring the beat-notes (as in Fig. 3(b)) at closely spaced frequencies between 0.1 MHz and 1 GHz. The software acquires the peak amplitude with noise floor correction at each frequency, and the data are normalized to 1 at the lowest RF frequency of the scan (0.1 MHz). The results, at 300 K, are reported in Fig. 4(a) for a typical 2 × 104 μm2 device, with grating period p = 4.1 μm. At the optimum performance point (νlaser = 1010 cm−1), it operates at frequencies > 1 GHz, with a −3 dB cutoff at ~750 MHz (Fig. 4(a), red curve). The larger devices (data not shown) typically exhibit a −3 dB cutoff at ~150 MHz. This result is in fair agreement with the surface ratio between the two devices. Furthermore, the theoretical RC cutoff of the large samples is $${f}_{\text{cutoff}}^{\text{large}}=\frac{1}{2\pi RC}=$$ 204 MHz and for the small sample $${f}_{\text{cutoff}}^{\text{small}}=$$ 510 MHz (with C the device capacitance and R the 50 Ω output resistance of the RF synthesizer). The good agreement proves that the bandwidth is currently limited by the RC time constant. For high-resolution spectroscopy, an important parameter is the sideband/carrier power ratio. For the current modulators, from quasi-DC response measurements (not shown) we estimate a ratio of the order of 5%.
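As a rough check of the quoted RC limits, the sketch below evaluates f = 1/(2πRC) with the synthesizer's 50 Ω source resistance; the capacitance value is back-computed from the stated 204 MHz cutoff (an assumption, since C is not given in the text):

```python
import math

# RC-limited bandwidth f = 1/(2*pi*R*C) with a 50-ohm source resistance.
# The capacitance below is back-computed from the quoted ~204 MHz cutoff,
# purely for illustration; the actual device capacitance is not stated.
def rc_cutoff_hz(R_ohm, C_farad):
    return 1.0 / (2.0 * math.pi * R_ohm * C_farad)

C_large = 15.6e-12  # farads (assumed, pF scale)
print(rc_cutoff_hz(50.0, C_large) / 1e6)  # ~204 MHz
```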
If the QC laser frequency is tuned away from the optimum value, Fig. 2(c) predicts that the modulation should drop. This observation is crucial to unambiguously assign to the polariton modulation the enabling physical principle of the device. To this scope we have measured, at room-temperature, the modulation height as a function of the QCL laser frequency. The results—normalized to 1 —are reported in Fig. 4(b) (dots) for the larger sample (p = 4.2 μm, the same of Fig. 2(c)), and they are superimposed to the DC modulation response curve obtained from the reflectance measurements from Fig. 2(c). The modulator fast response as a function of the impinging laser frequency closely follows the DC contrast curve. This finding confirms that the enabling mechanism is indeed the fast modulation of the Rabi splitting via application of an RF signal.
## Discussion
Having established that the current devices are RC limited, the natural question is: what is their intrinsic speed? The physics of the current devices is not very different from that of mid-IR quantum well infrared photodetectors (QWIPs), except for the absence of ohmic contacts, and QWIPs are known to operate up to speeds of 60/80 GHz30,31. That alone suggests that the intrinsic speed of the current modulators is set by the same parameters, in particular the capture time τcap and the transit time τtrans, which is set by the drift velocity. There is, however, a notable difference: in the ideal operating regime, carriers diffuse all the way towards one metal-semiconductor interface upon application of a bias. Moreover, they have to flow back through the active region when the bias is restored to 0. This leads to a characteristic time $${\tau }_{\text{drift}}=\frac{{L}_{\text{AR}}}{{v}_{\text{drift}}}$$. In our case, with a very conservative $${v}_{\text{drift}}=1{0}^{6}\ \frac{\,\text{cm}}{\text{s}\,}$$ at RT32, we obtain a value of τdrift < 30 ps, which sets a lower bound for the intrinsic cutoff in the range 5–10 GHz ($$\frac{1}{2\pi \tau }$$ estimation).
In conclusion, we have demonstrated a technology that is able to amplitude modulate mid-IR free-space laser beams up to GHz modulation frequencies. In this first demonstration, at λ = 9.7 μm, we achieved modulation speeds up to 1.5 GHz at room-temperature (−3 dB cutoff at ~750 MHz). The device operates by modulating the strong coupling regime at fast rates, one of the few demonstrations of a practical device relying on the strong light-matter coupling regime. The estimated intrinsic speed is at minimum 5 GHz. Improved active regions that do not rely on drift transport, but instead on tunnel coupling23 will probably lead to modulation speeds in the 30/40 GHz range.
## Methods
### Sideband to carrier ratio
In amplitude modulation (AM), the sideband-to-carrier ratio quantifies the quality of the signal modulation. In the simplest situation, where both carrier and signal are sinusoidal, the carrier is $$c(t)=C\sin (2\pi {\nu }_{c}t)$$ while the signal can be written as $$s(t)=S\cos (2\pi {\nu }_{s}t+\phi )=Cm\cos (2\pi {\nu }_{s}t+\phi )$$. We have defined the modulation index $$m=\frac{S}{C}$$: it measures how deep the carrier modulation is, with m = 1 corresponding to 100% modulation. In the frequency domain, the carrier line (intensity C) and two sidebands at νc ± νs with amplitude $$\frac{Cm}{2}$$ appear. The sideband-to-carrier ratio is then m/2.
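The m/2 ratio follows in one step from the product-to-sum identity $\sin A\cos B=\tfrac{1}{2}[\sin(A+B)+\sin(A-B)]$ applied to the modulated carrier defined above:

```latex
c(t)\left[1+m\cos(2\pi\nu_s t+\phi)\right]
  = C\sin(2\pi\nu_c t)
  + \frac{Cm}{2}\Big[\sin\big(2\pi(\nu_c+\nu_s)t+\phi\big)
                   + \sin\big(2\pi(\nu_c-\nu_s)t-\phi\big)\Big]
```

so each sideband carries amplitude Cm/2, and dividing by the carrier amplitude C gives m/2.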
## Data availability
All relevant data are available from the authors upon reasonable request.
## Code availability
Python code is also available from the authors upon reasonable request.
## References
1. Bernard, V. et al. CO2 laser stabilization to 0.1-Hz level using external electrooptic modulation. IEEE J. Quantum Electron. 33, 1282–1287 (1997).
2. Martini, R. et al. High-speed digital data transmission using mid-infrared quantum cascade lasers. Electron. Lett. 37, 1290–1292 (2001).
3. Chuanwei, L. et al. Free-space communication based on quantum cascade laser. J. Semiconductors 36, 094009 (2015).
4. Paiella, R. et al. High-frequency modulation without the relaxation oscillation resonance in quantum cascade lasers. Appl. Phys. Lett. 79, 2526–2528 (2001).
5. Hinkov, B. et al. High frequency modulation and (quasi) single-sideband emission of mid-infrared ring and ridge quantum cascade lasers. Opt. Express 27, 14716–14724 (2019).
6. Mottaghizadeh, A. et al. Ultra-fast modulation of mid infrared buried heterostructure quantum cascade lasers. In 2017 42nd International Conference on Infrared, Millimeter, and Terahertz Waves (IRMMW-THz) 1–2 (IEEE, 2017).
7. QUBIG GmbH. https://www.qubig.com/.
8. Holmström, P. High-speed mid-IR modulator using Stark shift in step quantum wells. IEEE J. Quantum Electron. 37, 1273–1282 (2001).
9. Vodjdani, N., Vinter, B., Berger, V., Böckenhoff, E. & Costard, E. Tunneling assisted modulation of the intersubband absorption in double quantum wells. Appl. Phys. Lett. 59, 555–557 (1991).
10. Dupont, E., Delacourt, D., Berger, V., Vodjdani, N. & Papuchon, M. Phase and amplitude modulation based on intersubband transitions in electron transfer double quantum wells. Appl. Phys. Lett. 62, 1907–1909 (1993).
11. Duboz, J. Y., Berger, V., Laurent, N., Adam, D. & Nagle, J. Grating coupled infrared modulator at normal incidence based on intersubband transitions. Appl. Phys. Lett. 70, 1569–1571 (1997).
12. Berger, V., Vodjdani, N., Delacourt, D. & Schnell, J. P. Room-temperature quantum well infrared modulator using a Schottky diode. Appl. Phys. Lett. 68, 1904–1906 (1996).
13. Jun, Y. C. et al. Active tuning of mid-infrared metamaterials by electrical control of carrier densities. Opt. Express 20, 1903–1911 (2012).
14. Benz, A., Montaño, I., Klem, J. F. & Brener, I. Tunable metamaterials based on voltage controlled strong coupling. Appl. Phys. Lett. 103, 263116 (2013).
15. Lee, J. et al. Ultrafast electrically tunable polaritonic metasurfaces. Adv. Optical Mater. 2, 1057–1063 (2014).
16. Wang, L., Sofer, Z. & Pumera, M. Will any crap we put into graphene increase its electrocatalytic effect? ACS Nano 14, 21–25 (2020).
17. Jessop, D. S. et al. Graphene based plasmonic terahertz amplitude modulator operating above 100 MHz. Appl. Phys. Lett. 108, 171101 (2016).
18. Vakarin, V. et al. Ultra-wideband Ge-rich silicon germanium integrated Mach–Zehnder interferometer for mid-infrared spectroscopy. Opt. Lett. 42, 3482–3485 (2017).
19. Jung, S. et al. Homogeneous photonic integration of mid-infrared quantum cascade lasers with low-loss passive waveguides on an InP platform. Optica 6, 1023–1030 (2019).
20. Todorov, Y. et al. Optical properties of metal-dielectric-metal microcavities in the THz frequency range. Opt. Express 18, 13886–13907 (2010).
21. Günter, G. et al. Sub-cycle switch-on of ultrastrong light–matter interaction. Nature 458, 178–181 (2009).
22. Anappara, A. A., Tredicucci, A., Biasiol, G. & Sorba, L. Electrical control of polariton coupling in intersubband microcavities. Appl. Phys. Lett. 87, 051105 (2005).
23. Anappara, A. A., Tredicucci, A., Beltram, F., Biasiol, G. & Sorba, L. Tunnel-assisted manipulation of intersubband polaritons in asymmetric coupled quantum wells. Appl. Phys. Lett. 89, 171109 (2006).
24. Birner, S. et al. nextnano: general purpose 3-D simulations. IEEE Trans. Electron Devices 54, 2137–2142 (2007).
25. Vigneron, P.-B. et al. Quantum well infrared photo-detectors operating in the strong light-matter coupling regime. Appl. Phys. Lett. 114, 131104 (2019).
26. Manceau, J.-M. et al. Resonant intersubband polariton-LO phonon scattering in an optically pumped polaritonic device. Appl. Phys. Lett. 112, 191106 (2018).
27. Duperron, M. Conception et Caractérisation de Nanoantennes Plasmoniques pour la Photodétection Infrarouge Refroidie. Ph.D. thesis (Troyes, 2013).
28. DRS Daylight Solutions. https://www.daylightsolutions.com/.
29. VIGO System. https://vigo.com.pl/en/home/.
30. Schneider, H. & Liu, H. C. Quantum Well Infrared Photodetectors: Physics and Applications (Springer-Verlag, Berlin Heidelberg, 2007).
31. Lin, Q. et al. Development of high-speed, patch-antenna intersubband photodetectors at 10.3 µm. In 44th International Conference on Infrared, Millimeter, and Terahertz Waves (IRMMW-THz) (IEEE, 2019).
32. Hava, S. & Auslender, M. Velocity-field relation in GaAlAs versus alloy composition. J. Appl. Phys. 73, 7431–7434 (1993).
## Acknowledgements
We thank S. Barbieri, J.-F. Lampin and E. Peytavit for useful discussions. We also thank L. Wojszvzyk, A. Nguyen, and J.-J. Greffet for the loan of the 50 MHz-bandwidth MCT detector. We acknowledge financial support from the European Union FET-Open Grant MIRBOSE (737017). This work was partly supported by the French RENATECH network. R.C. and A.B. acknowledge financial support from the French National Research Agency (project “IRENA”).
## Author information
### Contributions
G.B. grew the sample; N.-L.T. fabricated the sample, performed measurements and simulations; R.C. designed the devices, performed simulations, and supervised the entire project; P.C. helped in RF setup; S.P. performed simulations, built the RF setup and performed the measurements; A.J. performed simulations. All the authors (S.P., R.C., N.-L.T., A.J., G.B., P.C., J.-M.M., A.B.) discussed data and wrote the manuscript.
### Corresponding authors
Correspondence to Stefano Pirotta or Raffaele Colombelli.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Peer review information Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Pirotta, S., Tran, NL., Jollivet, A. et al. Fast amplitude modulation up to 1.5 GHz of mid-IR free-space beams at room-temperature. Nat Commun 12, 799 (2021). https://doi.org/10.1038/s41467-020-20710-2
# Chapter 12: Vectors and the Geometry of Space - Practice Exercises - Page 734: 6
$v=\langle \dfrac{\sqrt 3}{2}, \dfrac{1}{2}\rangle$
#### Work Step by Step
The components of vector $v$ are $v=\langle v_x,v_y\rangle$. Thus, $v_x=(1) \cos (\dfrac{\pi}{6})=\dfrac{\sqrt 3}{2}$ and $v_y=(1) \sin (\dfrac{\pi}{6})=\dfrac{1}{2}$. Hence, $v=\langle \dfrac{\sqrt 3}{2}, \dfrac{1}{2}\rangle$.
Top 5 tips to make your pandas code absurdly fast
Wed, Feb 8, 2023
If you've ever worked with tabular data, you probably know the process: import the data into pandas, clean and transform it, and use it as input for your models. But when the time comes to scale up and take your code to production, there's a good chance your pandas pipeline starts to break down and runs slowly.
But there's no need to panic! Given our extensive experience handling large datasets and using pandas in over 80% of our projects, Tryolabs has gained valuable insights into making your pandas code run faster.
Tip 1: Vectorize like there’s no tomorrow.
Using vectorized operations eliminates the need to manually iterate through rows.
Tip 2: Iterate like a semi-pro.
When vectorization is impossible, consider looping over the data frame using methods such as list comprehension with zip, lru_cache or parapply.
Tip 3: Embrace NumPy.
To enhance performance, consider working directly with the Numpy array through .to_numpy(); it can provide a significant bump in speed.
Tip 4: Mind the dtypes.
Selecting the appropriate dtype is critical for memory and computation optimization.
Tip 5: Long-term storage.
Pandas can read many formats such as CSV, parquet, pickle, JSON, Excel, etc. We recommended using the parquet format, a compressed, efficient columnar data representation.
We'll also explain what can slow your pandas down and share a few bonus tips surrounding caching and parallelization. Keep reading, and you'll become a pandas pro in no time.
Panda brought to life by the power of Midjourney.
The best way to demonstrate the value of our recommendations is to test the techniques with real datasets. If you are curious about the specifics, take a look at the following section; otherwise, feel free to skip it.
Benchmarking setup
We conducted the benchmarks in the following sections on two anonymized data frames from actual projects. And we've made those datasets available to share.
Please note that depending on your computer's specifications, you may have trouble opening the data frames. The creator of pandas, Wes McKinney, stated that as a rule of thumb, it is recommended to have 5 to 10 times the amount of RAM as the dataset size.
To avoid these issues in running the benchmarks, we used a high-performance desktop computer with a 10-core/20-thread i9 10900K, 128GB of RAM, and a fast SSD running Ubuntu 20.04.
Regarding the software used, we chose Python 3.10, pandas 1.4.2, and pyarrow 8.0.0.
Tip 1: Vectorize like there’s no tomorrow
Vectorized operations in pandas allow you to manipulate entire data frame columns using just a few lines of code rather than manually looping through each row individually.
The importance of vectorization cannot be overstated, and it's a fundamental aspect of working with pandas. It's so crucial, in fact, that we even created a "catchy" rhyme to remind ourselves of its importance: "Hector Vector is a better mentor than Luke the for loop." In all seriousness, however, pandas lives and breathes vector operations, as should you. We strongly recommend you embrace vectorization and make it a core part of your pandas workflow.
Hector Vector, Source
How does it work?
A key element of vectorized operations is broadcasting shapes. Broadcasting allows you to manipulate objects of different shapes intuitively. Examples are shown below.
Array a with 3 elements is multiplied with scalar b which results in an array of the same shape as a Source.
Array a of shape (4, 1) is added with array b (3,) resulting in an array of shape (4, 3). Source.
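The two captioned broadcasting examples can be reproduced directly; this is a minimal sketch with made-up values:

```python
import numpy as np

# Scalar broadcast: each element of the 3-element array is scaled by b.
a = np.array([1.0, 2.0, 3.0])
b = 2.0
scaled = a * b  # same shape as a

# Shape (4, 1) + shape (3,) broadcasts to shape (4, 3).
a2 = np.arange(4).reshape(4, 1)
b2 = np.arange(3)
grid = a2 + b2
```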
Much has been written about this (including these articles about NumPy Optimization and Array Programming with NumPy), and it's essential in deep learning, where massive matrix multiplications happen all the time. But we'll limit ourselves to 2 short examples.
First, imagine you want to count a given integer's occurrences in a column. Below are 2 possible ways of doing it.
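The original snippets are not reproduced in this extract; a minimal reconstruction of the two approaches (the column name `value` is an assumption) could look like:

```python
import pandas as pd

def count_loop(df, target):
    # Slow path: touch every row individually with iterrows.
    n = 0
    for _, row in df.iterrows():
        if row["value"] == target:
            n += 1
    return n

def count_vectorized(df, target):
    # Fast path: one boolean comparison over the whole column, then a sum.
    return int((df["value"] == target).sum())

df = pd.DataFrame({"value": [3, 1, 3, 2, 3]})
n_loop = count_loop(df, 3)
n_vec = count_vectorized(df, 3)
```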
In testing df1 and df2, we got a speedup of just 82x by using the count_vectorized method over the count_loop.
Now say you have a DataFrame with a date column and want to offset it by a given number of days. Below, you’ll find two ways of doing that. Can you guess the speedup factor of the vectorized operation?
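A sketch of both variants (column names and dates are illustrative, not the article's actual dataset):

```python
import pandas as pd

def offset_loop(df, days):
    d = pd.Timedelta(days=days)  # built once, outside the list comprehension
    df["date_offset"] = [date + d for date in df["date"]]
    return df

def offset_vectorized(df, days):
    # One vectorized addition over the whole datetime column.
    df["date_offset"] = df["date"] + pd.Timedelta(days=days)
    return df

df = pd.DataFrame({"date": pd.date_range("2023-02-08", periods=3)})
df = offset_vectorized(df, 2)
```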
By using vectorized operations rather than loops for this costly operation, we got an average speedup of 460x!
One small note on the offset_loop function: we construct d outside the list comprehension. Otherwise, it would get built in each iteration, which would double its runtime (this is noteworthy in its own right).
But you get the point, vectorized operations in actual data are a lifesaver. Remember, it isn't the few microseconds you can shave off a single operation that count; the total time saved over multiple operations makes the difference. This can take a pipeline from "impossible to run" to "runs in a reasonable amount of time".
Now that you know how vectorized operations can significantly improve your code's performance, it's time to implement it. Look at your current projects and see where you can apply this technique. You may be surprised at how much of a difference it can make in speed and efficiency.
Although vectorized operations in pandas are highly recommended for optimal performance and efficiency, it's important to note that not all operations can be vectorized, and there may be instances where alternative methods, such as iteration, will be necessary.
Tip 2: Iterate like a semi-pro
You might be wondering why you’re not able to iterate like a fully-fledged pro. The answer is simple: pros vectorize and rookies iterate.
We know it’s not that simple in practice. You can’t vectorize all operations all the time, but hear us out: whenever you can, vectorize ruthlessly; you’ll thank yourself later.
There will be times when there’s no alternative to looping over the millions of rows in a data frame. You may choose to perform these complex operations yourself or solicit help from an external service. Whatever the case, we’ll go through various iteration methods.
Key takeaways:
1. Vectorize if possible.
2. Use a list comprehension with zip.
3. Look into lru_cache if your columns have several repeats.
4. Look into parapply.
The benchmark
We’ll apply the following function to the datasets:
It removes all the words appearing in the phrase words_to_remove from the given remove_from phrase and eliminates HTML tags while keeping the words with a length greater than or equal to min_include_word_length.
Think of remove_from as a long description from which you want to remove some words that already appear in another data column.
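The function itself is not shown in this extract; a sketch consistent with the description (the regex-based tag stripping and the default threshold of 4 are assumptions) might be:

```python
import re

def remove_words(remove_from, words_to_remove, min_include_word_length=4):
    # Drop HTML tags, then keep words that are long enough and not blacklisted.
    cleaned = re.sub(r"<[^>]+>", " ", remove_from)
    blacklist = set(words_to_remove.split())
    return " ".join(
        word
        for word in cleaned.split()
        if len(word) >= min_include_word_length and word not in blacklist
    )

cleaned = remove_words("<p>fast modulation of signals</p>", "signals")
```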
For instance, given a data frame with a remove_from column and a words_to_remove column, applying remove_words to every row yields a list of cleaned descriptions.
for loops
The first and most intuitive way to iterate would be to use a Python for loop.
There’s overhead everywhere: accessing single values of df and dynamically growing a list with append make this the slowest method, so we’ll use it as our baseline for comparison. You want to avoid it at all costs.
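As a toy illustration (the small frame and the addition stand in for the article's dataset and remove_words):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [10, 20, 30]})

# Baseline: indexed access plus append on every iteration.
result = []
for i in range(len(df)):
    result.append(df["a"].iloc[i] + df["b"].iloc[i])
```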
apply
An easy next step would be to use pandas' apply method, which handles the looping internally. You’d expect this to be blazing fast, but it’s only about 1.9x faster than baseline. The code is below. But because better methods are readily available, you should avoid this as well.
In each iteration of df.apply, the provided callable gets a Series whose index is df.columns and whose values are the row’s. This means that pandas has to generate that series in each loop, which is costly. To cut the costs, it’s better to call apply on the subset of df you know you’ll use, like so:
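A sketch of both variants on a toy frame (column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [10, 20], "unused": ["x", "y"]})

# Full-frame apply: every row Series also carries the "unused" column.
slow = df.apply(lambda row: row["a"] + row["b"], axis=1)

# Subsetting first means pandas builds a smaller Series in each iteration.
faster = df[["a", "b"]].apply(lambda row: row["a"] + row["b"], axis=1)
```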
This simple tweak makes the operation 2.1x faster than the baseline, but we’d still recommend avoiding this method.
list-comp + itertuples
Using itertuples combined with the beloved list comprehension to iterate is definitely better. itertuples yields (named) tuples with the data on the rows. Notice that we subset df again. This is, on average, 4.6x faster than baseline.
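On the same kind of toy frame, the pattern looks like:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [10, 20], "unused": ["x", "y"]})

# Each row arrives as a lightweight namedtuple; we subset to the used columns.
result = [row.a + row.b for row in df[["a", "b"]].itertuples(index=False)]
```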
list-comp + zip
A slightly different approach that yields about the same performance (4.6x faster than baseline) is to use a list comprehension again but iterate over the desired columns with zip. zip takes iterables and yields tuples where the i-th tuple has the i-th element of all the given iterables in order. This approach looks like this:
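A minimal sketch (toy columns, not the article's data):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [10, 20]})

# zip pairs up the i-th element of each selected column.
result = [a + b for a, b in zip(df["a"], df["b"])]
```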
list-comp + to_dict
A slower variant of this approach is to use to_dict(orient="records") to iterate over dicts of rows of the dataset. This is about 3.9x faster than the baseline. It was achieved using the following code:
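The same toy example with to_dict:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [10, 20], "unused": ["x", "y"]})

# Each row becomes a plain dict; convenient, but memory-hungry on large frames.
rows = df[["a", "b"]].to_dict(orient="records")
result = [d["a"] + d["b"] for d in rows]
```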
This is not a hard rule but, from observation, we determined that using to_dict almost tripled the memory footprint of our program, so be judicious when using it.
Bonus method 1: caching
In addition to the iteration techniques we've discussed, two other methods can help improve your code's performance: caching and parallelization. Caching can be particularly useful if your pandas function is called multiple times with the same arguments. For example, if remove_words is applied to a dataset with many repeated values, you can use functools.lru_cache to store the results of the function and avoid recalculating them each time. To use lru_cache, simply add the @lru_cache decorator to the declaration of remove_words, then use your preferred iteration method to apply the function to your dataset. This can significantly improve your code's speed and efficiency. Take, for example, the following code:
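A self-contained illustration (the toy normalize function stands in for remove_words; the data is made up):

```python
from functools import lru_cache

import pandas as pd

@lru_cache(maxsize=1024)  # tune maxsize to trade memory for hit rate
def normalize(text):
    return text.strip().lower()

df = pd.DataFrame({"city": [" Paris ", "paris", "PARIS ", " Paris "]})

# Repeated inputs are answered from the cache instead of re-running the body.
df["city_clean"] = [normalize(v) for v in df["city"]]
```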
Adding this decorator produces a function that "remembers" the output for inputs it previously encountered, removing the need to run all the code again. You can read more about it in Python's official documentation here. Don’t forget to play with the maxsize parameter to explore the speed vs memory tradeoff.
You may notice that the cached function’s performance for df1 is similar to its non-cached counterpart. This goes to show that caching isn’t the be-all and end-all. In our case, df1's structure doesn’t have as much repetition, so caching doesn’t help as much as it does for df2, where the speedup is 46x.
Bonus method 2: parallelization
The last ace up our sleeve is using pandarallel to parallelize our function calls across multiple independent chunks of df. The tool is easy to use: you simply import and initialize it and then change all your .applys for .parallel_applys.
In our case, we measured a 12-fold improvement.
However powerful, parallel_apply is no silver bullet. It pickles each chunk of the df into /dev/shm (which is effectively RAM); therefore, you might run out of memory!
If this happens, you can limit pandarallel's memory usage by initializing it with fewer processes, which will also impact performance.
Why not combine both? Well, we tried and saw no practical benefit. It either went as fast as bare parallel_apply or slower than a bare list comprehension with cache. This might have something to do with how parallelization is achieved and the fact that the cache might not be shared. However, we would need to pursue our investigation further to draw any firm conclusions.
Results
A summary of the results is shown below:
| Description | df1 Time [s] | df1 Speedup | df2 Time [s] | df2 Speedup |
|---|---|---|---|---|
| loop | 546.6 | 1 | 245.9 | 1 |
| apply | 289.3 | 1.9 | 135.7 | 1.8 |
| apply (only used cols) | 263.9 | 2.1 | 121.7 | 2.0 |
| itertuples (only used cols) | 116.1 | 4.7 | 54.7 | 4.5 |
| zip (only used cols) | 115.6 | 4.7 | 54.2 | 4.5 |
| to_dict (only used cols) | 137.8 | 4.0 | 64.0 | 3.8 |
| cached zip (only used cols) | 118.1 | 4.6 | 5.3 | 46.4 |
| parapply (only used cols) | 44.5 | 12.3 | 19.3 | 12.7 |
| cached parapply (only used cols) | 45.5 | 12.0 | 10.6 | 23.2 |
Remember to use the above tips to optimize performance when iterating with pandas in the future.
Tip 3: Embrace NumPy
Pandas is built on top of NumPy, which is known for its performance and speed in handling large arrays and matrices of numerical data. This helps make pandas efficient and fast when working with large datasets. Now that you’re an expert in vectorizing operations, embracing NumPy is a natural next step. This section is short and sweet: if you need a bit of extra performance, consider going down to NumPy by using .to_numpy() and the NumPy methods directly.
Benchmarking this method is close to impossible. Results will vary wildly between the use case and the implementation details of your code. In our tests, we observed speedups from anywhere between 2x to an absurd 3000x.
Continuing with the above examples, what if we tried counting or offsetting directly in numpy? Have a look at the following code:
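A sketch of both operations on a toy frame (column names and values are illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {"value": [2, 1, 2, 3], "date": pd.date_range("2023-01-01", periods=4)}
)

# Counting on the raw NumPy array skips pandas' per-call overhead.
count = int((df["value"].to_numpy() == 2).sum())

# Offsetting datetime64 values directly with a NumPy timedelta.
offset = df["date"].to_numpy() + np.timedelta64(3, "D")
```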
As you can see, it’s easy to accomplish: add .to_numpy() and use numpy objects. Regarding the count, this gives a total 185x speedup (2.2x over the vectorized pandas method). In regards to the offset, this gives a whopping 1200x speedup over the loop operation.
We hear you! These numbers are somewhat artificial because no one in their right mind would offset a day column like this. But this may happen to beginners familiar with other programming languages, and these examples should serve as a cautionary tale.
These speedups are made possible by compounding the benefits of vectorization and the direct use of numpy. This also eliminates the overhead added by pandas' killer feature set.
Be warned that sometimes this isn’t as simple as placing .to_numpy() after each series in your operation. Case in point: a team member reported that using .to_numpy() on datetime data with timezones actually dropped the timezone. This should be fine if all the data is in the same timezone and you’re calculating differences. Still, it’s something to keep in mind.
Another team member experienced some weirdness with dates as well. When calling .to_numpy() on a Series of date times, they would get an array of datetime64[ns]. However, when calling .to_numpy() on a single pd.Timestamp, an int was returned. Hence, trying to subtract a single date from an array of dates will return gibberish unless you do .to_numpy().astype(int).
Given these speedup factors and despite its caveats, you want this tool in your arsenal, and you shouldn't hesitate to use it.
Tip 4: Mind the dtypes
In pandas DataFrames, the dtype is a critical attribute that specifies the data type for each column. Therefore, selecting the appropriate dtype for each column in a DataFrame is key.
On the one hand, we can downcast numerics into types that use fewer bits to save memory. On the other, we can use specialized types for specific data that reduce memory costs and optimize computation by orders of magnitude.
We’ll talk about some of the most prevalent types in pandas, like int, float, bool, and strings. But first, here’s a primer on the dreaded object.
The object type
Regarding datatypes, pandas has many types that efficiently map to numpy types at the fast C level. However, when there’s no easy mapping, it falls back on Python objects. For the uninitiated, object is the parent class of all objects in the language. This leads to inefficiencies due to how memory is handled.
The fact that numpy's core is written in C means that an array points directly to all its values located in a contiguous block of memory. This allows for much faster execution time as the cache memory can leverage the spatial locality of the data. You can read more about it here.
Because pandas stores strings as an array of objects, it has to fall back on slow Python arrays. In contrast to numpy, a Python list has a pointer to a memory-contiguous buffer of pointers, which point to objects stored in memory, but which also reference data stored in other locations. Hence, accessing this data during runtime will be quite slow in comparison.
A Python list and how it handles data. Source
In a nutshell, pandas is constrained in part by the ecosystem. But how can we make the best of the tools we have right here, right now? Well, it’s a different story for numerical and string data.
Numerical types
Regarding ints and floats, downcasting is the key to saving memory. Pandas supports 8, 16, 32, and 64-bit signed and unsigned integers and 16, 32, and 64-bit floats. By default, it opts to use 64-bit variants for both types.
The trick is to choose the smallest type that can comfortably hold your data. You should consider both current and new valid data. You don't want to limit yourself to unsigned ints in your pipeline if signed ints are a real possibility.
In practice, and for your current dataset, you can easily do this with pandas' own pd.to_numeric like so:
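A minimal sketch of that downcasting (toy columns; your column names will differ):

```python
import pandas as pd

df = pd.DataFrame({"ints": [1, 2, 3], "floats": [0.5, 1.5, 2.5]})

# Downcast the 64-bit defaults to the smallest type that holds the data.
df["ints"] = pd.to_numeric(df["ints"], downcast="integer")    # int64 -> int8
df["floats"] = pd.to_numeric(df["floats"], downcast="float")  # float64 -> float32
```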
Just like that, we reduced df's size by 1/3. But your mileage will vary based on your data. As a side note: pd.to_numeric only downcasts floats to float32, it doesn’t use float16 even though it is compatible with pandas.
For boolean data, we found that the bool dtype should be favored if you have no nulls. It occupies the same amount of memory as a uint8, but makes the column’s content much clearer.
“What if I have NaN values?” We hear you. In principle, you can cast your ints and bools to float32 and call it a day. You’ll take a memory hit, but NaNs will be handled natively.
Somewhat recently, pandas added its own nullable integer and nullable boolean data types. They can be more memory efficient than float32, but with the caveat that neither .values nor .to_numpy() returns a correct numpy array with np.nan (because np.nan is itself a float, not an int). Internally, these arrays hold a _data array with the appropriate numpy dtype and a _mask, where True means that the value in _data is missing. This can be an issue if you’re trying to use numpy. That caveat, together with their experimental nature, is why we can’t fully recommend their use for now.
This is also why we don’t recommend using .convert_dtypes just yet because it uses nullable types.
String types
With numbers out of the way, let's dive into strings. Strings are used everywhere, but they're the bane of our existence when manipulating data. As mentioned before, they're stored with the inefficient object type, and that's why they use lots of space and are slower than more specialized types.
But there’s a glimmer of hope with StringDType. Like the other nullable types we mentioned, StringDType is still experimental, so we don't recommend using it in production code for now. Looking ahead, it promises correct handling of missing values and performance improvements over object.
If your column has low cardinality (many repeated values), the smartest move is to use the category dtype (pd.Categorical). Internally, a categorical column holds a mapping from the actual column value (i.e., the contents of your column before conversion) to some arbitrary integer. Then, the column is saved as an int type, exploiting the advantages of contiguous memory and efficient storage. Furthermore, run-length encoding is used to reduce the storage size even further (if an integer is repeated, it only saves the integer and the number of instances rather than all the integers).
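The memory win is easy to see on a small, made-up low-cardinality column:

```python
import pandas as pd

colors = pd.Series(["red", "blue", "red", "red", "blue"] * 1000)

# The same data stored as integer codes plus a tiny lookup table.
as_category = colors.astype("category")

object_bytes = colors.memory_usage(deep=True)
category_bytes = as_category.memory_usage(deep=True)
```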
This results in excellent compression and processing speed, as seen in the results subsection.
Results
Now that you have a better understanding of the different dtypes and their efficiency, let's look at some of the tests we ran and their results:
1. We tried using pandas' convert_dtypes in both df1 and df2.
2. We ran a custom convert_dtypes that doesn’t use pandas' nullable types.
3. We converted applicable string columns into categoricals.
4. We downcasted numeric types into their smaller types.
Of course, these results will vary according to your data, but we wanted to highlight the benefits of choosing the correct data type. Here are the results in table form:
| Description | df1 Size [GB] | df1 Ratio | df2 Size [GB] | df2 Ratio |
|---|---|---|---|---|
| Size on RAM (unoptimized) | 24.8 | 1 | 12.0 | 1 |
| pandas' convert_dtypes | 24.9 | 1.004 | 12.5 | 1.04 |
| Our convert_dtypes | 24.7 | 0.996 | 12.6 | 1.05 |
| Convert to Category | 13.8 | 0.556 | 2.3 | 0.19 |
| Downcasting | 12.5 | 0.504 | 1.7 | 0.14 |
We managed to reduce the dataset size to about 1/2 and 1/8, respectively, as soon as pandas finished loading it.
This is important because we can now process the data in more ways than before simply because we have more free RAM. And our entire system will likely be snappier as well.
Also note that pandas' convert_dtypes enlarges the dataset in both cases.
Again, you might be asking, “why bother with convert_dtypes if it barely saves any memory?” Or, for that matter, how come these people always know the questions I’m about to ask? Well, our job is to predict the future, so we've gotten quite good at it. And we're glad you asked because the speed improvements completely dwarf the memory reduction benefit. Try running the following code to compare your times:
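The original benchmark snippet is not reproduced in this extract; a self-contained stand-in (synthetic data, a simple perf_counter timer instead of %timeit) could look like:

```python
import time

import numpy as np
import pandas as pd

n = 500_000
obj_col = pd.Series(np.random.choice(["low", "mid", "high"], size=n)).astype(object)
cat_col = obj_col.astype("category")

def timed(fn):
    # Crude single-shot timing; use %timeit in a notebook for stable numbers.
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

t_obj = timed(lambda: obj_col.value_counts())
t_cat = timed(lambda: cat_col.value_counts())
print(f"object: {t_obj:.4f}s, category: {t_cat:.4f}s")
```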
With our data, we saw speedups of between 12x to 50x. For the mean, the speedups were 12x to 25x. For the standard deviation, they were 20x to 50x. For value_counts, they were about 15x. That’s not bad for such a small amount of work!
Categoricals, in particular, reap the benefits twofold. Not only is the memory usage reduction reason enough to use them, but they make operations (like groupby or str.lower()) much faster. Notably, .str.lower() was about 22.5x faster on categorical columns than on regular object columns (or StringDType columns for that matter).
groupbys are another can of worms. In our testing, when grouping with 1 column, categoricals were about 1.5x faster. However, when grouping with multiple columns, where at least one of them is categorical, pandas defaults to creating an index that is the cartesian product between all categories in said columns. You can imagine how this explodes in size and complexity quickly.
But don’t despair. There’s a solution! Okay, you can despair just a little because the solution isn’t a catch-all. Some categories are mutually exclusive; therefore, their combination, which would appear in the cartesian product, would never be observed in the data. You can use the observed=True kwarg of groupby to instruct pandas to execute the operation in this way.
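A minimal sketch with made-up categorical columns, where the FR/NYC combination never occurs in the data:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "country": pd.Categorical(["US", "US", "FR"]),
        "city": pd.Categorical(["NYC", "LA", "Paris"]),
        "sales": [100, 200, 300],
    }
)

# observed=True keeps only the category combinations that actually occur,
# instead of the full cartesian product (which would include e.g. FR/NYC).
observed = df.groupby(["country", "city"], observed=True)["sales"].sum()
full = df.groupby(["country", "city"], observed=False)["sales"].sum()
```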
In our tests, we measured a 2.5x speedup using observed=True, while we got a slowdown of about 2x when we didn’t. If you notice this doesn’t work well for you, converting to object, performing the groupby, and converting back to categorical could be the way to go.
Tip 5: Long-term storage
CSV, parquet, pickle, JSON, XML, HDF5, Feather, ORC, STATA, SAS, SPSS, or, dare I say it, Excel (yuck!) are some of the formats that pandas supports. Argghhh! There are too many options! How are we supposed to choose?
We might start, as any responsible engineer would, by reviewing the pros and cons of each format. We’d then test a few that stand out in the real world and come to some conclusion about which format we’d favor and when. Life is tough, though, and with looming deadlines imposing time constraints, the selection process looks more like this:
Fear not! We've done our homework - and we promise that our method is more scientific than Homer's. We recommend you take a good look at parquet. You don't need to constantly be making these decisions anymore.
Parquet was created to help developers reap the benefits of a "compressed, efficient columnar data representation." It supports different encodings and can compress on a per-column basis. That fits the ML practitioner's use case like a glove!
It sounds awesome; sign me up! Right? Go on, try to do a simple df.to_parquet. I’ll wait…
Already back? Did you get an ImportError? Parquet is technically a format, which is more like a spec than an implementation. So, to use it, you need to install another library that implements the format. Currently, pandas supports pyarrow and fastparquet.
But you promised no more decisions! Well, we believe pandas’ use of pyarrow by default is the right choice. We found it to be more reliable than fastparquet. For example, when trying to save and then immediately read df2, fastparquet complained about an overflow.
Below is a table with read/write durations and the resulting sizes of the datasets saved in different ways (with pyarrow). It boils down to this: use parquet and change the compression method to fit your needs. Uncompressed or snappy parquet ought to be your choice during development, resorting to gzip compression for long-term archival.
| Description    | df1 write | df1 read | df1 size | df2 write | df2 read | df2 size |
|----------------|-----------|----------|----------|-----------|----------|----------|
| pickle         | 17.7      | 9.9      | 6.5      | 0.9       | 0.6      | 1.3      |
| pickle gzip    | 532.2     | 40.1     | 3.8      | 99.8      | 3.0      | 0.2      |
| pickle lzma    | 2670.5    | 158.0    | 3.2      | 187.9     | 7.7      | 0.1      |
| pickle bz2     | 455.3     | 218.1    | 3.5      | 127.9     | 19.5     | 0.2      |
| parquet        | 22.3      | 16.0     | 6.4      | 3.6       | 1.3      | 0.4      |
| parquet gzip   | 299.5     | 24.7     | 4.0      | 21.5      | 1.5      | 0.2      |
| parquet snappy | 25.5      | 15.5     | 5.9      | 4.0       | 1.3      | 0.3      |
CSV is probably in the top 5 data distribution formats, and that's in part thanks to its ease of use. With a simple df.to_csv, you can create a file that anyone (and their dogs) can open in the program of their choice. But, and that's a big but, CSV is severely limited in terms of volume, size, and speed. When you hit these limitations, it's time to start exploring parquet.
The latest pickle version is fast but has substantial security and long-term storage issues. The words "execute arbitrary code during unpickling" should send shivers down your spine, and those are the exact words in the massive, red warning in pickle's official documentation.
You might think, "well, that's a non-issue; I'll only unpickle stuff I've pickled," but that won't save you either. When unpickling, the object's definition at that time is used to load it from storage. That means that if your objects change, your pickles could break. In the example below, which raises an AttributeError, we show how removing an attribute from class C after saving the pickle results in the loaded object losing the attribute despite it being there when it was saved.
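The blog’s exact class isn’t shown, but the failure mode is easy to reproduce with a `__slots__` class, where restoring a removed attribute raises AttributeError at load time (the class here is my own minimal reconstruction):

```python
import pickle

class C:
    __slots__ = ("a", "b")
    def __init__(self):
        self.a, self.b = 1, 2

blob = pickle.dumps(C())

# the class definition changes after the object was pickled:
# attribute "b" no longer exists on the class
class C:
    __slots__ = ("a",)
    def __init__(self):
        self.a = 1

try:
    pickle.loads(blob)  # tries to restore "b" via setattr and fails
except AttributeError as exc:
    print("unpickling broke:", exc)
```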
Now imagine pandas changes something in the DataFrame class. You might run into issues when unpickling older DataFrames. Favor (uncompressed) parquets over pickles to avoid this.
We recently came across a tool called Lance that aims to simplify machine learning workflows. It supports a columnar data format called Lance Format, which claims to be 50 to 100 times faster than other popular options such as Parquet, Iceberg, and Delta. Although we have yet to thoroughly test it, the potential speed increase is intriguing, and we encourage you to give it a try.
Conclusion
Phew! That was a loaded blog post, full of insights and valuable information. We hope you apply some of these tips in the real world and supercharge your codebase to be faster and more powerful.
Here are some of the key concepts to keep in mind:
1. Python’s dynamic nature makes it slower than compiled languages. This issue is exacerbated in scientific computing because we run simple operations millions of times.
2. You can use numpy directly by calling .to_numpy() on the DataFrame, which can be even faster.
3. Choose the smallest possible numerical dtypes, use categoricals, and prefer float32 over the newish nullable dtypes (for now).
4. Use parquet to store your data. Use snappy compression (or none at all) during development, resorting to gzip for long-term archival.
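The dtype advice in miniature (column names and sizes are invented for the example):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "price": rng.random(100_000) * 100,               # float64 by default
    "city": rng.choice(["NY", "LA", "SF"], 100_000),  # object (strings) by default
})

# downcast the float column and turn the low-cardinality strings into a categorical
small = df.assign(
    price=df["price"].astype("float32"),
    city=df["city"].astype("category"),
)

before = df.memory_usage(deep=True).sum()
after = small.memory_usage(deep=True).sum()
print(f"{before / after:.1f}x smaller")
```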
# Metric Entropy
The metric entropy technique in competitive on-line prediction is a technique for proving theoretical bounds on the loss of an algorithm that competes with a wide benchmark class. Some wide benchmark classes of functions can be covered by an $\varepsilon$-net with a finite or countable number of cells. An algorithm (called a strategy below) that competes with these nets is then applied. The resulting algorithm competes with all such strategies for different values of $\varepsilon$; it can compete with the whole class because these nets cover all possible functions in all possible ways. The regret term depends on the value of the metric entropy of the chosen class (see Kolmogorov and Tikhomirov, 1959). For classes of functions from Sobolev spaces, compactly embedded in the sense of a compact ball, one can achieve a regret term whose order depends on the parameters of the space.
# Gauss Law? Average charge density
1. Apr 6, 2013
### Roodles01
1. The problem statement, all variables and given/known data
Hi. A cylinder of radius r & length L whose charge density distribution is given by
ρ = C/2 * r^3
where r = radial distance in cylindrical coordinates
C = constant
show that the average charge density ρ̄ = a^3 C / 5
2. Relevant equations
Gauss differential law div E = ρ / ε0
div E = (1/r) ∂(r E_r)/∂r + (1/r) ∂E_φ/∂φ + ∂E_z/∂z
3. The attempt at a solution
Hmmm! Not really sure where to begin with this.
Not sure if it's to do with stuff as "difficult" as equations above.
Last edited: Apr 6, 2013
2. Apr 6, 2013
### Andrew Mason
What is a?
Also it is not clear whether: ρ = C/(2*r^3) or ρ = (C/2)*r^3
What does average charge density mean? Can you determine the total charge in the cylinder?
AM
3. Apr 6, 2013
### Roodles01
Ah. Detail.
Get it right, me.
r is radial distance in cylindrical coordinates
first equation is
ρ = (C/2)*r^3
As far as "average" goes I'm assuming to be the mean.
Hope this clears things up.
4. Apr 6, 2013
### Andrew Mason
Ok. But give us an expression for it.
AM
5. Apr 6, 2013
### Andrew Mason
Hint: average charge density is: total __________/ total _________
AM
6. Apr 7, 2013
### Roodles01
I'll try total charge / total area.
7. Apr 7, 2013
### Andrew Mason
Why area? This is a solid cylinder. Hint: what are the units of mass density? By analogy, what are the units of charge density?
AM
Last edited: Apr 7, 2013
8. Apr 8, 2013
### Roodles01
Aha! It's volume.
Density has units of mass per unit volume such as g/ml
so charge density must have units of charge per unit volume, such as coulombs/m^3, for a volume such as the Gaussian surface (cylinder)
Thanks I shall post again soon.
9. Apr 8, 2013
### Andrew Mason
I am not sure why you are referring to Gauss' law here. You are not calculating the electric field so there is no need to think of the cylinder surface as a Gaussian surface for this problem. You are just trying to find the total charge and total volume.
AM
10. Apr 8, 2013
### Roodles01
Ah! It all clears.
Clouds have gone. It's not just about algebra after all.
ρ is the charge density distribution, which is the density at a single point, so we have a double integral
11. Apr 8, 2013
### Andrew Mason
You don't really need the double integral. You know that the cylindrical surface with radius r and dr thick has volume 2πrLdr. So you just have to integrate from r=0 to r=a to find the charge.
$q = \int_0^a ρdV =\int_0^a ρ2\pi rLdr$
Then it is just a matter of dividing by the total volume.
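Carrying the hint through with ρ = (C/2) r^3 (filling in the integral the post leaves to the reader):

$q = \int_0^a \rho \, 2\pi r L \, dr = \pi C L \int_0^a r^4 \, dr = \frac{\pi C L a^5}{5}, \qquad \bar{\rho} = \frac{q}{\pi a^2 L} = \frac{C a^3}{5}$

which is exactly the $\bar{\rho} = a^3 C / 5$ the problem statement asks for.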
AM | |
## Problem 210
A place to air possible concerns or difficulties in understanding ProjectEuler problems. This forum is not meant to publish solutions. This forum is NOT meant to discuss solution methods or giving hints how a problem can be solved.
Forum rules
As your posts will be visible to the general public you
are requested to be thoughtful in not posting anything
that might explicitly give away how to solve a particular problem.
This forum is NOT meant to discuss solution methods for a problem.
In particular don't post any code fragments or results.
Don't start begging others to give partial answers to problems
Don't ask for hints how to solve a problem
Don't start a new topic for a problem if there already exists one
Don't post any spoilers
Comments, questions and clarifications about PE problems.
viv_ban
Posts: 23
Joined: Mon May 26, 2008 3:09 pm
### problem 210
Can somebody please confirm the answer for N(1000). my answer is 1597548.
quilan
Posts: 182
Joined: Fri Aug 03, 2007 11:08 pm
### Re: problem 210
That answer is (I believe) incorrect. We had a thread discussing this problem a while back, if you'd like to search around for it.
Last edited by quilan on Sun Jan 11, 2009 5:09 pm, edited 1 time in total.
ex ~100%'er... until the gf came along.
daniel.is.fischer
Posts: 2400
Joined: Sun Sep 02, 2007 11:15 pm
Location: Bremen, Germany
### Re: problem 210
All the "Problem xxx" threads (except the newest) have been moved because they cluttered up a lot of subfora, they will return concentrated in one place, but there's some merging and editing work to be done before that.
Il faut respecter la montagne -- c'est pourquoi les gypaètes sont là.
TripleM
Posts: 382
Joined: Fri Sep 12, 2008 3:31 am
### Re: problem 210
Close, but your answer is too low.
viv_ban
Posts: 23
Joined: Mon May 26, 2008 3:09 pm
### Re: problem 210
I have checked my program and I found a small error, but still I am unable to get the correct answer. Can someone please tell me if I am correct this time.
N(1000) = 1597880
N(10000) = 159814790
N(100000) = 15981722482
N(1000000) = 1598174519142
daniel.is.fischer
Posts: 2400
Joined: Sun Sep 02, 2007 11:15 pm
Location: Bremen, Germany
### Re: Problem 210
Yep.
sinan
Posts: 16
Joined: Mon Sep 15, 2008 10:14 am
### Re: Problem 210
What does |x| + |y| ≤ r mean? It should define a diamond-shaped area, right? I don't understand what kind of trap you could fall into. I have found a closed formula that seems to work for small numbers without writing any code, so then what happens? I have a feeling from the messages in this thread that there is a circle thing here, but I cannot understand how a diamond-shaped area becomes a circle.
******* 4
******4 3 4
****4 3 2 3 4
**4 3 2 1 2 3 4
4 3 2 1 0 1 2 3 4
**4 3 2 1 2 3 4
****4 3 2 3 4
******4 3 4
********4
quilan
Posts: 182
Joined: Fri Aug 03, 2007 11:08 pm
### Re: Problem 210
Well, you're absolutely right re: diamond, but you'll see that something subtle happens with regards to forming the triangle. I don't want to take away the joy of discovery from you, but the smallest example that shows the errant behavior starts @ r=24. Do a brute-force of the coordinate space & compare to the correct answer. This problem is quite nice in the way it sneaks that one in there.
Edit: Correct answer is 916 I think... provided my old code still works. And I -think- it was 24 that causes the odd behavior. Might have been 20 or something.
Last edited by quilan on Wed May 06, 2009 4:52 pm, edited 2 times in total.
sinan
Posts: 16
Joined: Mon Sep 15, 2008 10:14 am
### Re: Problem 210
quilan wrote:Well, you're absolutely right re: diamond, but you'll see that something subtle happens with regards to forming the triangle. I don't want to take away the joy of discovery from you, but the smallest example that shows the errant behavior starts @ r=24. Do a brute-force of the coordinate space & compare to the correct answer. This problem is quite nice in the way it sneaks that one in there.
r=24. Hmmm. OK I will try to see what happens there.
sinan
Posts: 16
Joined: Mon Sep 15, 2008 10:14 am
### Re: Problem 210
sinan wrote:
quilan wrote:Well, you're absolutely right re: diamond, but you'll see that something subtle happens with regards to forming the triangle. I don't want to take away the joy of discovery from you, but the smallest example that shows the errant behavior starts @ r=24. Do a brute-force of the coordinate space & compare to the correct answer. This problem is quite nice in the way it sneaks that one in there.
r=24. Hmmm. OK I will try to see what happens there.
I get 904 for both. I counted the number of stars in the following.
Code: Select all
*
***
*****
*******
*********
***********
4************
432************
43210************
4321098************
432109876************
43210987654************
4321098765432***********4
**2109876543212*********2**
****0987654321012*******0****
******8765432109012*****8******
********6543210989012***6********
**********4321098789012*4**********
************2109876*****2************
**************09876*****0*2************
****************8765****8**12************
******************654***6***012************
********************43**4****9012************
**********************2*2*****89012************
************************0*****6789012************
**********************2*2345678901234**********
********************4***4567890123456********
******************6*****6789012345678******
****************8*******8901234567890****
**************0*********0123456789012**
************2***********2345678901234
**********4*************45678901234
********6***************678901234
******8*****************8901234
****0*******************01234
**2*********************234
4***********************4
***********************
*********************
*******************
*****************
***************
*************
***********
*********
*******
*****
***
*
MaJJ
Posts: 49
Joined: Tue Oct 14, 2008 12:14 am
### Re: Problem 210
Hi,
I just want to assure myself that the circle-like shape is not a circle (which I thought when I looked at
Expand
- I thought it's just some rounding errors or I don't know what)... But at a bigger numbers it starts to get
Expand
- and Mathematica probably shouldn't make any rounding errors as long as I'm using the fractional notation. So could anybody just assure me in this? I wasted lots of time making equations for circle, just to see that they don't work...
daniel.is.fischer
Posts: 2400
Joined: Sun Sep 02, 2007 11:15 pm
Location: Bremen, Germany
### Re: Problem 210
I suggest you look at Thales' theorem (or its generalisation). That should tell you exactly what shape the figure is.
MaJJ
Posts: 49
Joined: Tue Oct 14, 2008 12:14 am
### Re: Problem 210
Thales' theorem somehow helped, thanks. But in the end I found out the shape with simply plotting the N(200). Then one little sqrt(2) and I was ready to analyze further
ATM my code works, but it's quite slow. And I think I have a nice way to calculate it! But still, the more N, the more loops
Code: Select all
Timing[PE210[1000]]
{0.297, 1597880}
Timing[PE210[2000]]
{1.14, 6392158}
Timing[PE210[3000]]
{2.61, 14382796}
I guess I'm not going to reach the 1,000,000,000 this way...
Is there a solution which doesn't require loops? Or at least not so many loops? (for 1000 my code does cca 1650 loops - I can PM my algo if you want)
EDIT: I think I just found a way to make it 4x faster, but I'll have to rewrite it into the code... But even if 4x faster, it'll run for eternity...
thedoctar
Posts: 74
Joined: Fri Apr 15, 2011 11:57 am
Location: Sydney, Australia
### Re: Problem 210
N(10^7)=removed
N(10^8)=removed
Is this correct?
Last edited by thedoctar on Mon Jul 23, 2012 10:53 am, edited 1 time in total.
4x Intel(R) Core(TM) i3-2330M CPU @ 2.20GHz
fabas indulcet fames
hk
Posts: 10871
Joined: Sun Mar 26, 2006 10:34 am
Location: Haren, Netherlands
### Re: Problem 210
[mar]don't post intermediate results[/mar]
You could as well calculate your answer for 10^9 and use the PE answer box couldn't you?
thedoctar
Posts: 74
Joined: Fri Apr 15, 2011 11:57 am
Location: Sydney, Australia
### Re: problem 210
viv_ban wrote:I have checked my program and I found a small error, but still I am unable to get the correct answer. Can someone please tell me if I am correct this time.
N(1000) = 1597880
N(10000) = 159814790
N(100000) = 15981722482
N(1000000) = 1598174519142
I get the same results as this person and daniel.is.fisher confirms them in the next post, but my answer for 10**9 is wrong, which is why I posted my higher results. Could you just double check if the above results are correct? Or could you PM me the answer for 10**7?
hk
Posts: 10871
Joined: Sun Mar 26, 2006 10:34 am
Location: Haren, Netherlands
### Re: Problem 210
viv_ban's values are correct.
BTW daniel.is.fischer was one of the team that was involved in the development of this problem, so you should not doubt him on this.
I'm suspecting you have an accuracy problem about which already much has been written in this thread.
thedoctar
Posts: 74
Joined: Fri Apr 15, 2011 11:57 am
Location: Sydney, Australia
### Re: Problem 210
Well I'm sure he was; however, I just wanted to be absolutely certain, as he might've misread the numbers or something similar, unlikely as I know that is. I also thought that a problem with floating point precision would've come up at 10^6, as it is a pretty large value, but then again I wouldn't know. I'll try some of the methods in this thread to see if I get the correct answer.
Thanks for your help.
EDIT: yep, you were right! I got the answer right. Wow, I thought that floor(sqrt(x)) would give you an accurate floor of the root of x, but apparently not! Thanks again!!
coreyjkelly
Posts: 7
Joined: Fri Dec 28, 2012 5:54 am
### Re: Problem 210
I've been banging my head on my desk all night because of this one. Could somebody confirm that Python isn't plagued with the accuracy and overflow issues discussed here? All of my testing would indicate that the large numbers are properly dealt with, and math.floor(x**0.5) seems to function properly.
My results for values up to r=10^6 were consistently off by 1, which I couldn't justify, but adding 1 to my answer for 10^9 doesn't work
thundre
Posts: 356
Joined: Sun Mar 27, 2011 10:01 am
### Re: Problem 210
coreyjkelly wrote:All of my testing would indicate that the large numbers are properly dealt with, and math.floor(x**0.5) seems to function properly.
I don't know that much about Python. I gather that its integers are naturally "big". But how would the interpreter know how much precision you need in an exponentiation? | |
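The accuracy issue the thread keeps circling is easy to demonstrate: for integers just below a perfect square, flooring a floating-point square root can land on the wrong side, while Python 3.8's math.isqrt is exact. The value below is my own illustration, not one from the problem:

```python
import math

n = (10**8 + 1) ** 2 - 1            # one less than a perfect square

exact = math.isqrt(n)               # exact integer square root
floated = math.floor(math.sqrt(n))  # the double rounds up across the boundary

print(exact, floated)  # → 100000000 100000001
```

math.floor(n**0.5) has the same failure mode, since the exponentiation also goes through a 53-bit double.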
# Confidence Interval clarification
I just started learning confidence intervals and I read online the definitions of a confidence interval.
Defn 1: http://stattrek.com/statistics/dictionary.aspx?definition=confidence_interval
Defn 2: My notes
Now, I feel as if those two are contradicting. The first defn (which I'll extract) states:
"... This means that if we used the same sampling method to select different samples and computed an interval estimate for each sample, we would expect the true population parameter to fall within the interval estimates 95% of the time. "
However, to my knowledge, isn't this one of the common misconceptions about confidence intervals? And that the actual interpretation is that if we gather $n$ of these confidence intervals, there is a $95\%$ probability that all $n$ collected intervals contain the true parameter?
My notes give this definition:
Let $L:= L(X_1,\ldots,X_n)$ and $U:= U(X_1,\ldots,X_n)$ be such that for all $\theta \in \Theta$,
$$\mathbb{P}(L < \theta \leq U) \geq 1- \alpha$$ and then it says, that this is the probability ($1-\alpha$) that the true parameter lies within this interval . Which one is correct?
No, the probability is for a single drawing.
The probability that $n$ drawings give a "right" answer would be $(1-\alpha)^n$, a much smaller number.
• Is the second definition (my notes) correct in saying that then? "the probability that the true parameter lies within this interval is $95\%$"? – Twenty-six colours Jun 7 '17 at 9:13
• @Twenty-sixcolours: yes it is. Your interpretation with all $n$ is wrong. – Yves Daoust Jun 7 '17 at 9:14
• Thank you. I'm not sure what the difference is then, to this: "“There is a 95% chance that the true population mean falls within the confidence interval.” (FALSE)" source: statisticssolutions.com/… and it seems to say that only 95% of the confidence intervals calculated will contain that true parameter – Twenty-six colours Jun 7 '17 at 9:16
• @Twenty-sixcolours: I disagree with this statement. – Yves Daoust Jun 7 '17 at 9:43
• Does it have anything to do with the interval (L,U) being random, BUT once we have data, it's fixed? – Twenty-six colours Jun 7 '17 at 9:54 | |
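The repeated-sampling reading is easy to check by simulation; here is a sketch (the parameter values and the known-sigma interval are my own simplifying choices):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, trials = 5.0, 2.0, 50, 10_000
z = 1.96  # ~97.5th percentile of the standard normal

# one sample mean per trial
means = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)

# half-width of the known-sigma 95% interval: z * sigma / sqrt(n)
half = z * sigma / np.sqrt(n)
covered = np.mean((means - half < mu) & (mu <= means + half))
print(covered)  # close to 0.95
```

About 95% of the intervals cover mu; any single (already computed) interval either contains it or does not.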
#### Previous topic
Multiple Regression
#### Next topic
Experiments with Markov Chains
# Prediction¶
In this computer lab we’re going to try to predict outcomes for AFL football
The technique we will use is very basic—feel free to experiment with others after the lab
The data set we will use is this one
It can be read in as a table using this command:
footy <- read.table("trainingdata.txt", header=T)
The data contains results for AFL games in 2008
For those who don’t know about football, here’s what you need to know for the exercise
• When two teams play, the winner is the team with most points at the end of the game
• Teams usually play either “at home” or “away”
• At home means at the team’s local home ground
In the data set, each row is the outcome of one game
Each row compares the home team against the away team for that game
The data set contains many variables but we will look at only three
• footy$home_team_win: Whether or not the home team won the game
  • 1 indicates that the home team won, -1 indicates that they lost
• footy$lg_home_team_margin: The winning margin of the home team in their previous game
• Measured in points (e.g. lg_home_team_margin = 10 means they won previous game by ten points)
• Negative value indicates that they lost
• footy$lg_away_team_margin: Same as lg_home_team_margin, but for away team

We are interested in using lg_home_team_margin and lg_away_team_margin as predictors for who will win the current game.

For starters, let's look at the data:

• On the x-axis is lg_home_team_margin, while on the y-axis is lg_away_team_margin
• The data points (circles) correspond to the value of these variables for each game
• Black circles indicate that the home team won the current game, red circles indicate that they lost

To understand the figure, consider a point in the south east corner. South east means that the home team did well in their previous game, and the away team did badly. In this situation, we might expect that the home team would win the current game, and the circle would be black. Hence, we would expect black circles in the south east, and red circles in the north west. This seems like it might be the average case, although the relationship is actually not very clear.

The code for producing the figure is below. Please run it.

footy <- read.table("data/trainingdata.txt", header=T)
x2 <- footy$lg_home_team_margin
x3 <- footy$lg_away_team_margin
Y <- footy$home_team_win
# Black are winners, reds are losers
plot(x2[Y == 1], x3[Y == 1], col="black",
xlab="Home team margin, previous game",
ylab="Away team margin, previous game",
main="Outcomes for home team")
points(x2[Y == -1], x3[Y == -1], col="red")
legend(-135, -100, c("win","loss"), col=c("black","red"), pch=c(21, 21))
Now we’re going to predict home team win/loss by linear regression
Our regression model will be
(1)$y = \beta_1 + \beta_2 x_2 + \beta_3 x_3$
Here:
• y = home_team_win
• x2 = lg_home_team_margin
• x3 = lg_away_team_margin
Note that the variable y takes only the values 1 or -1, corresponding to win or loss
We can now run the regression, leading to predictions of the form
(2)$y = \hat \beta_1 + \hat \beta_2 x_2 + \hat \beta_3 x_3$
If we put in new values for x2 = lg_home_team_margin and x3 = lg_away_team_margin we get predictions for win or loss
Here we understand that
• if y > 0, then the model predicts a win
• if y <= 0, then the model predicts a loss
In the next figure, we run the regression, and then draw a line through the points
(3)$\{(x_2, x_3) \in \mathbb{R}^2 \,|\, \hat \beta_1 + \hat \beta_2 x_2 + \hat \beta_3 x_3 = 0\}$
The line gives the “decision boundary” (where prediction changes from “win” to “loss” or vice versa)
Here is the figure:
In the figure, for points to the south east of the line the predicted value of y is positive, and hence we predict win. For points to the north west of the line the predicted value of y is negative, and hence we predict loss.
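The lab's code is in R, but the same fit-then-threshold idea can be sketched in Python, with synthetic data standing in for trainingdata.txt (the variable names mirror the lab's; the data itself is made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic stand-in for the footy data
n = 500
x2 = rng.uniform(-100, 100, n)                 # home team's previous margin
x3 = rng.uniform(-100, 100, n)                 # away team's previous margin
y = np.where(x2 - x3 + rng.normal(0, 60, n) > 0, 1, -1)

X = np.column_stack([np.ones(n), x2, x3])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # same fit as R's lm(Y ~ x2 + x3)

pred = np.where(X @ beta > 0, 1, -1)           # decision boundary at yhat = 0
print("training accuracy:", (pred == y).mean())
```

The fitted coefficients define exactly the straight-line decision boundary drawn in the figure.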
## Exercises¶
### Exercise 1¶
Replicate the previous figure by
• running the regression
• writing down an equation for the line and plotting it using lines
• add the text with text(-110, 110, "Predict loss") and text(50, -105, "Predict win")
Next let’s see how our prediction model performs on new data.
Here is a second data set of the same form, from the same season
We’ll call this new data the test data set. The previous data set we’ll call the training data set
If we look at the win rate for the home team in the training data, it is over 50%. Hence the naive prediction for the test data set is that the home team will win. If we predict like this on the test data set, we are right 59.7% of the time.
If, on the other hand we predict with our model (using the coefficient values we estimated on the training data set), the success rate is 64.8%
### Exercise 2¶
Replicate these results by
• combining the new regressor values from the test data set with the estimated coefficients from the training data set to produce predictions, and
• comparing those predictions with actual outcomes in the test data set
## Solutions¶
The following code contains solutions to both exercises
footy <- read.table("trainingdata.txt", header=T)
x2 <- footy$lg_home_team_margin
x3 <- footy$lg_away_team_margin
Y <- footy$home_team_win

# Black are winners, reds are losers
plot(x2[Y == 1], x3[Y == 1], col="black",
     xlab="Home team margin, previous game",
     ylab="Away team margin, previous game",
     main="Outcomes for home team")
points(x2[Y == -1], x3[Y == -1], col="red")
legend(-135, -100, c("win","loss"), col=c("black","red"), pch=c(21, 21))

# Run the regression
reg1 <- lm(Y ~ x2 + x3)

# Plot the line such that (1, x2, x3)'beta = 0
gridsize <- 40
b <- coef(reg1)
grid <- seq(min(x2), max(x2), length=gridsize)
lines(grid, (- b[1] - b[2] * grid) / b[3])

# Add text to indicate prediction categories
text(-110, 110, "Predict loss")
text(50, -105, "Predict win")

# Test success rate predicting on test data set
footy_test <- read.table("testdata.txt", header=T)
x2_test <- footy_test$lg_home_team_margin
x3_test <- footy_test$lg_away_team_margin

# Combine regressors into matrix of rows (1, x2_test, x3_test)
X_test <- cbind(rep(1, length(x2_test)), x2_test, x3_test)

# Evaluate predictions for each row
pred <- X_test %*% coef(reg1)

# Convert to 1, -1 values
pred <- ifelse(pred > 0, 1, -1)

# Actual outcomes, to compare against predictions
Y_test <- footy_test$home_team_win
cat("fraction of wins in training set:", mean(Y == 1), "\n")
cat("fraction of wins in test set:", mean(Y_test == 1), "\n")
cat("prediction success rate:", mean(pred == Y_test), "\n") | |
# If a polyhedron is homeomorphic to a simplex, is it piecewise-linear homeomorphic?
If a polyhedron is homeomorphic to a simplex, is it piecewise-linear homeomorphic? In particular, is this true in $R^{4}$? In 2 and 3 dimensions any two polyhedra that are homeomorphic are PL-homeomorphic, by theorems of Rado and Moise. In dimension $\geq 5$, this is a trivial special case of theorem 1.1 in M.A. Armstrong "The Hauptvermutung According to Lashof and Rothenberg" in The Hauptvermutung Book. But I have not found a statement that covers it for dimension 4; and I am not confident that dimension 4 can easily be reduced to dimension 5.
Also, if anyone can suggest a reference for this particular case that does not go through these very high-powered, difficult, general theorems, I would be interested on stylistic grounds.
If you assume that your polyhedron has only a finite number of faces, I think the answer to your question is unknown. Moreover, any answer to such a question would give a solution to the smooth Poincaré conjecture in dimension 4, which is still open.
Indeed, suppose you have a four-dimensional sphere with an exotic smooth structure. Then you can always triangulate such a sphere into a finite number of simplexes. Now, throw away a simplex from such a triangulation. What you get is homeomorphic to a simplex, but it cannot be PL diffeomorphic to it; otherwise your initial sphere would be PL diffeomorphic to the standard one, which is not so, since your sphere is exotic.
But the original question isn't about PL diffeomorphic, it's about PL homeomorphic. Are these the same concept in dimension 4? For example, this argument would not work in dimension 7 where a triangulation of an exotic 7-sphere would have to be PL-homeomorphic to the standard sphere by the PL Poincare conjecture. (For that matter, what does "PL diffeomorphic" mean exactly?) – Greg Friedman Mar 26 '12 at 22:06
Greg, I guess we just use two different terms to define the same notion. I agree with you about dimension 7; there is only one PL structure on $S^7$. But dimensions $\ge 7$ are different from dimensions $< 7$: namely, up to dimension 6 every PL manifold admits a unique smooth structure. In particular, all these exotic smooth 4-dimensional manifolds have exotic PL structures. – Dmitri Mar 26 '12 at 23:04
MLSTM-FCN models, from the paper Multivariate LSTM-FCNs for Time Series Classification, augment the squeeze-and-excitation block with the state-of-the-art univariate time series models LSTM-FCN and ALSTM-FCN from the paper LSTM Fully Convolutional Networks for Time Series Classification. The concurrent application of instance segmentation and classification on whole slide Pap smear images has been done for the first time. Sometimes, older networks like VGG16 have their fully connected layers reimplemented as conv layers (see SSD). In the proposed models, the fully convolutional block is augmented by an LSTM block followed by dropout [20], as shown in Fig. 1. In the field of natural language processing, CNNs exhibit good performance as neural networks for classification. To address such challenges, we put forward an instance segmentation and classification framework built on a Unet architecture by adding residual blocks, densely connected blocks and a fully convolutional layer as a bottleneck between encoder-decoder blocks for Pap smear images. Additionally, a shape representation model has been integrated with the model, which acts as a regularizer, making the whole framework robust. The main difference between semantic segmentation and instance segmentation is that we make no distinction between the instances of a particular class in semantic segmentation. Thus, transpose convolutions allow us to increase our layer size in a learnable fashion, since we can change the weights through backpropagation. In the traditional CNN below, how exactly do we get from the $$5\times5$$ layer to the first fully connected layer? If it’s still unclear, here’s an example with numbers: [the worked $$5\times5$$ matrix example did not survive extraction]. Upsampling using transposed convolutions or unpooling loses information, and thus produces coarse segmentation. However, it is still too computationally expensive.
We can choose a filter size and stride length to maintain our original image width $$W$$ and height $$H$$ throughout the entire network, so we could simply make our loss function a sum of the cross-entropy loss for each pixel (remember, we are essentially performing classification for each pixel). Pap smear is often employed as a screening test for diagnosing cervical pre-cancerous and cancerous lesions. Image classification algorithms, powered by Deep Learning (DL) Convolutional Neural Networks (CNNs), fuel many advanced technologies and are a core research subject for many industries ranging from transportation to healthcare. Then, at the end, we could have a layer with depth $$C$$, where $$C$$ is the number of classes. There is, however, one very important difference between a fully convolutional network and a standard CNN. Automated nuclei segmentation and classification methods exist but struggle with issues like nuclear intra-class variability and clustered nuclei separation. Applying convolutional networks to text classification, or natural language processing at large, has been explored in the literature. It’s simple! Introduction of a joint loss function in the framework overcomes some trivial cell-level issues on clustered nuclei separation. Note how a fully connected layer expects an input of a particular size. The above example places the input values in the upper left corner. It is important to realize that $$1\times1$$ convolutional layers are actually the same thing as fully connected layers. Novel architecture: combine information from different layers for segmentation. Simply put, newer networks do. Damage detection and localization are formulated as classification problems, and tackled through fully convolutional networks (FCNs). “Bed of Nails” unpooling simply places the value in a particular position in the output, filling the rest with zeros.
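The claim that a $$1\times1$$ convolution is the same computation as a fully connected layer applied at every pixel can be checked numerically; this sketch uses toy shapes of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
c_in, c_out, h, w = 3, 4, 5, 5            # toy sizes, chosen for illustration
x = rng.normal(size=(c_in, h, w))
weight = rng.normal(size=(c_out, c_in))   # a 1x1 conv kernel is just a matrix

# 1x1 convolution: mix channels independently at every pixel
conv_out = np.einsum('oc,chw->ohw', weight, x)

# fully connected layer applied to each pixel's channel vector
fc_out = (weight @ x.reshape(c_in, -1)).reshape(c_out, h, w)

print(np.allclose(conv_out, fc_out))  # → True
```

This is why replacing the final fully connected layers with $$1\times1$$ convolutions lets the network accept inputs of any spatial size.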
Fully convolutional neural networks (FCN) have been shown to achieve state-of-the-art performance on the task of classifying time series sequences. So the final output layer will be the same height and width as the input image, but the number of channels will be equal to the number of classes. 164\\ With some fancy padding in the transposed convolution, we achieve the opposite: $$2\times2$$ to $$5\times5$$. In the first half of the model, we downsample the spatial resolution of the image developing complex feature mappings. Rather than a predetermined, fixed location for the “nails", we use the position of the maximum elements from the corresponding max pooling layer earlier in the network. sagieppel/Fully-convolutional-neural-network-FCN-for-semantic-segmentation-Tensorflow-implementation 56 waspinator/deep-learning-explorer Not unsurprisingly, SegNet performed better than standard FCNs with skip connections. State-of-the-art segmentation for PASCAL VOC 2011/2012, NYUDv2, and SIFT Flow at the time How can we adapt convolutional networks to classify every single pixel? In this paper, we develop a novel Aligned-Spatial Graph Convolutional Network (ASGCN) model to learn effective features for graph classification. Max Unpooling is a smarter “bed of nails" method. It works by assigning pixel-wise labels to individual nuclei in a whole slide image which enables identifying multiple nuclei belonging to the same or different class as individual distinct instances. Since no fully connected layers exist, our input can be of any size. \end{bmatrix} However, we would need a crop for every single pixel in an image, and this would be hopelessly slow. You will often hear transposed convolution referred to as deconvolution. Fully convolutional neural networks (FCN) have been shown to achieve state-of-the-art performance on the task of classifying time series sequences. Through pooling and strided convolutions, we reduce the size of each layer, reducing computation. 
Note that, this tutorial throws light on only a single component in a machine learning workflow. CFNet [35] introduces the Correlation Filter layer to the SiamFC framework and performs online tracking to im-prove the accuracy. To increase the robustness of the overall framework, the proposed model is preceded with a stacked auto-encoder based shape representation learning model. The transpose convolution is not the inverse of a convolution, and thus deconvolution is a terrible name for the operation. Fully convolutional networks [11,44] exist as a more optimized network than the classification based network to address the segmentation task and is reported to be faster and more accurate even for medical datasets. © 2020 Elsevier B.V. All rights reserved. The framework provides simultaneous nuclei instance segmentation and also predicts the type of nucleus class as belonging to normal and abnormal classes from the smear images. Think about it. That single number, $$164$$, would become the value of a single neuron in the first fully connected layer. Github, $\begin{bmatrix} Finally, we end up with a $$C\times H \times W$$ layer, where $$C$$ is the number of classes, and $$H$$ and $$W$$ are the original image height and width, respectively. Deploying trained models using TensorFlow Serving docker image. These standard CNNs are used primarily for image classification. Constructing a Model¶. Enter Fully Convolutional Networks. We will explore the structure and purpose of FCNs, along with their application to semantic segmentation. LSTM FCN models, from the paper LSTM Fully Convolutional Networks for Time Series Classification, augment the fast classification In this paper, we propose an ALS point cloud classification method to integrate an improved fully convolutional network into transfer learning with multi-scale and multi-view deep features. 
\end{bmatrix} Experiments on hospital-based datasets using liquid-based cytology and conventional pap smear methods along with benchmark Herlev datasets proved the superiority of the proposed method than Unet and Mask_RCNN models in terms of the evaluation metrics under consideration. A shape context fully convolutional neural network for segmentation and classification of cervical nuclei in Pap smear images Artif Intell Med . Nevertheless, SegNet has been surpassed numerous times by newer papers using dialated convolutions, spatial pyramid pooling, and residual connections. Reinterpret standard classification convnets as “Fully convolutional” networks (FCN) for semantic segmentation. The number of convolutional layers in the standard Unet has been replaced by densely connected blocks to ensure feature reuse-ability property while the introduction of residual blocks in the same attempts to converge the network more rapidly. FCNs don’t have any of the fully-connected layers at the end, which are typically use for classification. As mentioned before, a deep neural network not only has multiple hidden layers, the type of layers and their connectivity also is different from a shallow neural network, in that it usually has multiple Convolutional layers, pooling layers, as well as fully connected layers. A traditional convolutional network has multiple convolutional layers, each followed by pooling layer(s), and a few fully connected layers at the end. A Pap Smear slide is an image consisting of variations and related information contained in nearly every pixel. En-couraged by its success, many researchers follow the work and propose some updated models [9, 35, 14, 13, 21, 20]. 1 & 2 & 3 & 1 & 3\\ 2 & 2 & 2 & 2 & 2\\ We use cookies to help provide and enhance our service and tailor content and ads. \end{bmatrix} Thus, we get a prediction for each pixel, and perform semantic segmentation. 
Deconvolution suggests the opposite of convolution, however, a transposed convolution is simply a normal convolution operation, albeit with special padding. fully convolutional Siamese network to train a tracker. We will cover these in a later lecture dedicated to semantic segmentation. A Relation-Augmented Fully Convolutional Network for Semantic Segmentation in Aerial Scenes Lichao Mou1,2∗, Yuansheng Hua1,2*, Xiao Xiang Zhu 1,2 1 Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Germany 2 Signal Processing in Earth Observation (SiPEO), Technical University of Munich (TUM), Germany {lichao.mou, yuansheng.hua, xiaoxiang.zhu}@dlr.de What if we just remove the pooling layers and fully connected layers from a convolutional network? \begin{bmatrix} Consider the standard convolutional network above. Thus, we need a way to downsample the image (just like in a standard convolutional network), and then, upsample the layers back to the original image size. Now we have covered both ends of the Fully Convolutional Network. This restricts our input image to a fixed size. If we’re classifying each pixel as one of fifteen different classes, then th… For example, a standard NN with $$n$$ inputs is also a convolutional network with an input of a single pixel, and $$n$$ input channels. It should be noted that to max unpooling with saved indices we cover in Section 3.2 was not introduced in the FCN paper above, but rather a later paper called SegNet. What if we could classify every single pixel at once? Here, we demonstrate the most basic design of a fully convolutional network model. 2 & 2 & 2 & 2 & 2\\ This lecture is intended for readers with understanding of traditional CNNs. 13.11.1. Fully Convolutional Networks for Semantic Segmentation. However, instead of having fully connected layers (which are at the end of normal CNNs), we have $$1\times1$$ convolutional layers. There are multiple approaches to unpooling. 
Accurate identification of dysplastic changes amongst the cervical cells in a Pap smear image is thus essential for rapid diagnosis and prognosis. It has been shown that ConvNets can be directly applied to distributed or discrete embedding of words, without any knowledge on the syntactic or semantic structures of a language. Instance Segmentation and classification has been accomplished using a fully convolutional neural network (FCN) model. Instead, FCNs use convolutional layers to classify each pixel in the image. These standard CNNs are used primarily for image classification. First, the shallow features of the airborne laser scanning point cloud such as height, intensity and change of curvature are extracted to generate feature maps by multi-scale voxel and multi-view projection. The first half is identical to the Convolutional/Pooling layer structure that makes up most of traditional CNN architecture. In our example, when we forward pass an image of size 1920×725 through the network, we receive a response map of size [1, 1000, 3, 8]. Fully Convolutional Networks comprised of temporal convolutions are typically used as feature extractors, and global average pooling [19] is used to reduce the number of parameters in the model prior to classification. While our reinterpretation of classification nets as fully convolutional yields output maps for inputs of any size, the output dimensions are typically reduced by subsampling. Yes, Convolutional Neural Network is learn the class by hierarchical because when a growing number of classes, the accuracy usually decreases, and the possibilities of confusion increase. Using the original input image size throughout the entire network would be extremely expensive (especially for deep networks). By continuing you agree to the use of cookies. Later lectures will cover object detection and instance segmentation. 
Pooling is a fixed function, however, we learn the weights of a convolutional layer, and thus a strided convolution is more powerful than a pooling layer. 2 & 4 & 2 & 1 & 1\\ 7 & 8 & 9 & 1 & 4\\ Do convolutional neural networks learn class hierarchy? We now understand the first half of the network (including the $$1\times1$$ convolutional layers). In the figure above left, we get from a $$5\times5$$ layer (blue) to a $$2\times2$$ layer (green) by performing a convolution with filter size $$3$$, and stride $$2$$. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Abstract: In the vehicle type classification area, the necessity to improve classification performance across traffic surveillance cameras has garnered attention in research especially on high level feature extraction and classification. A convolutional neural network (CNN) is an artificial neural network that is frequently used in various fields such as image classification, face recognition, and natural language processing [22–24]. \end{bmatrix}$. Strided convolutions allow us to decrease layer size in a learnable fashion. \end{bmatrix} A popular solution to the problem faced by the previous Architecture is by using Downsampling and Upsampling is a Fully Convolutional Network. We’ve previously covered classification (without localization). We show that a fully convolutional network (FCN) trained end-to-end, pixels-to-pixels on semantic segmen-tation exceeds the state-of-the-art without further machin-ery. Skip connections allow us to produce finer segmentation by using layers with finer information. For each $$5\times5$$ feature map, we have a $$5\times5$$ kernel, and generate a neuron in the first fully connected layer. = Fully convolutional networks can efficiently learn to make dense predictions for per-pixel tasks like semantic segmen-tation. 
2 & 2 & 2 & 2 & 2\\ Clearly, we could take a small crop of the original image centered around a pixel, use the central pixel’s class as the ground truth of the crop, and run the crop through a CNN. The classification then performedis by a Fully Convolutional Network (FCN), a modified version of CNN designed for pixel-wise image classification. The proposed method is significantly faster than -of-the-art techniquesstate . The accuracy table below right quantifies the segmentation improvement from skip connections. This lecture covers Fully Convolutional Networks (FCNs), which differ in that they do not contain any fully connected layers. https://doi.org/10.1016/j.artmed.2020.101897. (It also popularized FCNs as a method for semantic segmentation). Any MLP can be reimplemented as a CNN. The above diagram shows a fully convolutional network. We propose the augmentation of fully convolutional networks with long short term memory recurrent neural network (LSTM RNN) sub-modules for time series classification. As shown in Fig. Training FCN models with equal image shapes in a batch and different batch shapes. Manual pathological observations used in clinical practice require exhaustive analysis of thousands of cell nuclei in a whole slide image to visualize the dysplastic nuclear changes which make the process tedious and time-consuming. \begin{bmatrix} 2 & 2 & 2 & 2 & 2\\ \begin{bmatrix} You can think of all the other fully connected layers as just stacks of $$1\times1$$ convolutions (with $$1\times1$$ kernels, obviously). Refer to the figure below for a diagram of the skip connection architecture. Refer to the diagram below for a visual representation of this network. Strided convolutions are to pooling layers what transposed convolutions are to unpooling layers. 2 & 4 & 2 & 1 & 1\\ We propose the augmentation of fully convolutional networks with long short term memory recurrent neural network (LSTM RNN) sub-modules for time series classification. 
Fully Convolutional Network – with downsampling and upsampling inside the network! For FCN-8s, they added a $$2\times$$ upsampling layer to this output, and fused it with the predictions from a $$1\times1$$ convolution added to pool3. We propose the augmentation of fully convolutional networks with long short term memory recurrent neural network (LSTM RNN) sub-modules for time series classification. The proposed model outperforms two state-of-the-art deep learning models Unet and Mask_RCNN with an average Zijdenbos similarity index of 97 % related to segmentation along with binary classification accuracy of 98.8 %. We can clearly see that we will not end up with our original $$5\times5$$ values if we perform the normal convolution, and then the transpose convolution. To create FCN-16s, the authors added a $$1\times1$$ convolution to pool4 to create class predictions, and fused these predictions with the predictions computed by conv7 with a $$2\times$$ upsampling layer. We begin with a standard CNN, and use strided convolutions and pooling to downsample from the original image. 2 & 1 & 3 & 5 & 4\\ 4 & 5 & 6 & 1 & 2\\ Our idea is to transform arbitrary-sized graphs into fixed-sized aligned grid structures, and define a new spatial graph convolution operation associated with … The question remains: How do we increase layer size to reach the dimensions of the original input? This works because Fully Convolutional Networks are often symmetric, and each convolutional and pooling layer corresponds to a transposed convolution (also called deconvolution) and unpooling layer. 2 & 2 & 2 & 2 & 2\\ A traditional convolutional network has multiple convolutional layers, each followed by pooling layer (s), and a few fully connected layers at the end. 2 & 1 & 3 & 5 & 4\\ This lecture covers Fully Convolutional Networks (FCNs), which differ in that they do not contain any fully connected layers. Then, we upsample using unpooling and transposed convolutions. 
The FCN is an end to end learning model which achieves good performance in the semantic segmentation task,. Abstract: Fully convolutional neural networks (FCNs) have been shown to achieve the state-of-the-art performance on the task of classifying time series sequences. * Copyright © 2021 Elsevier B.V. or its licensors or contributors. Of course, you ask, if fully connected layers are simply $$1\times1$$ convolutional layers, then why don’t all CNNs just use $$1\times1$$ convolutional layers at the end, instead of fully connected layers? Use AlexNet, VGG, and GoogleNetin experiments. 2 & 2 & 2 & 2 & 2\\ 7 & 8 & 9 & 1 & 4\\ The basic idea behind a fully convolutional network is that it is “fully convolutional”, that is, all of its layers are convolutional layers. As derivation of CNN, the fully convolutional networks (FCN) which only consist of convolutional layers has gradually become the mainstream architecture of the image segmentation task,. As a variant of Convolutional Neural Networks (CNNs) in Deep Learning, the Fully Convolutional Network (FCN) model achieved state-of-the-art performance for natural image semantic segmentation. Fully convolutional neural networks (FCNs) have been shown to achieve the state-of-the-art performance on the task of classifying time series sequences. FULLY CONVOLUTIONAL NEURAL NETWORKS FOR REMOTE SENSING IMAGE CLASSIFICATION Emmanuel Maggiori 1, Yuliya Tarabalka , Guillaume Charpiat2, Pierre Alliez 1Inria Sophia Antipolis - Mediterran´ ´ee, TITANE team; 2 Inria Saclay, TAO team, France Email: emmanuel.maggiori@inria.fr We simply wish to classify every single pixel. Figure 1. Fully convolutional neural networks (FCN) have been shown to achieve state-of-the-art performance on the task of classifying time series sequences. introduced the idea of skip connections into FCNs to improve segmentation accuracy. 
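The "fully connected layer as a convolution" arithmetic above (the worked example that produces $$164$$) can be checked in a few lines of plain NumPy; the 5×5 feature map and the all-twos kernel are the ones from the example:

```python
import numpy as np

# 5x5 feature map from the worked example
fmap = np.array([
    [1, 2, 3, 1, 3],
    [4, 5, 6, 1, 2],
    [7, 8, 9, 1, 4],
    [2, 1, 3, 5, 4],
    [2, 4, 2, 1, 1],
])

# 5x5 kernel of all twos
kernel = np.full((5, 5), 2)

# A "fully connected" neuron over a 5x5 input is just a 5x5 convolution
# evaluated at a single position: an element-wise product followed by a sum.
neuron = int(np.sum(fmap * kernel))
print(neuron)  # 164
```

The same computation, viewed as a dot product over the flattened input, is exactly what a fully connected layer does, which is why the two are interchangeable.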
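As a sanity check on the shape claims in the section above (a 3×3 kernel with stride 2 maps 5×5 down to 2×2, and the transposed convolution maps 2×2 back up to 5×5, without recovering the original values), here is a minimal loop-based sketch in NumPy; it is illustrative, not an efficient implementation:

```python
import numpy as np

def conv2d(x, k, stride):
    """Valid convolution (cross-correlation) with a square kernel."""
    n, m = x.shape[0], k.shape[0]
    out = (n - m) // stride + 1
    y = np.zeros((out, out))
    for i in range(out):
        for j in range(out):
            y[i, j] = np.sum(x[i*stride:i*stride+m, j*stride:j*stride+m] * k)
    return y

def transposed_conv2d(y, k, stride):
    """Transposed convolution: scatter-add each input value times the kernel."""
    m = k.shape[0]
    n = (y.shape[0] - 1) * stride + m
    x = np.zeros((n, n))
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            x[i*stride:i*stride+m, j*stride:j*stride+m] += y[i, j] * k
    return x

x = np.arange(25, dtype=float).reshape(5, 5)
k = np.ones((3, 3))
y = conv2d(x, k, stride=2)             # 5x5 -> 2x2
z = transposed_conv2d(y, k, stride=2)  # 2x2 -> 5x5
print(y.shape, z.shape)  # (2, 2) (5, 5)
```

Note that `z` has the right shape but differs from `x`, which is the point made above: a transposed convolution restores the spatial dimensions, not the original values, so "deconvolution" is a misleading name.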
# Physical laws are low-energy approximations to reality, 1.4
$\displaystyle{\vdots}$
$\displaystyle{\uparrow}$
quark
$\displaystyle{\uparrow}$
plasma
$\displaystyle{\uparrow}$
vapour
$\displaystyle{\uparrow}$
water
$\displaystyle{\uparrow}$
ice
$\displaystyle{\downarrow}$
f-magnetism
$\displaystyle{\downarrow}$
QCD
$\displaystyle{\vdots}$
— Me@2019-06-04 08:42:39 PM
.
.
# Unitarity (physics)
Unitarity means that if the future state, F, of a system is unique, the corresponding past state, P, is also unique, provided that no information is lost in the transition from P to F.
— Me@2019-05-22 11:06:48 PM
.
In quantum physics, unitarity means that the future point is unique, and the past point is unique. If no information gets lost on the transition from one configuration to another[,] it is unique. If a law exists on how to go forward, one can find a reverse law to it.[1] It is a restriction on the allowed evolution of quantum systems that ensures the sum of probabilities of all possible outcomes of any event always equals 1.
Since unitarity of a theory is necessary for its consistency (it is a very natural assumption, although recently questioned[2]), the term is sometimes also used as a synonym for consistency, and is sometimes used for other necessary conditions for consistency, especially the condition that the Hamiltonian is bounded from below. This means that there is a state of minimal energy (called the ground state or vacuum state). This is needed for the third law of thermodynamics to hold.
— Wikipedia on Unitarity (physics)
.
.
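The "reverse law" point can be illustrated numerically: for any unitary evolution U, the total probability (the sum of squared amplitudes) is preserved, and the inverse law is simply U†, which recovers the unique past state. A small sketch, using the Hadamard matrix purely as an example of a unitary:

```python
import numpy as np

# An example unitary: the Hadamard matrix
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

psi_past = np.array([0.6, 0.8j])  # normalized: |0.6|^2 + |0.8|^2 = 1
psi_future = U @ psi_past         # the forward law

# Unitarity: total probability is preserved...
print(np.isclose(np.sum(np.abs(psi_future) ** 2), 1.0))  # True

# ...and the "reverse law" U^dagger recovers the unique past state
psi_recovered = U.conj().T @ psi_future
print(np.allclose(psi_recovered, psi_past))  # True
```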
# Multiverse
A physics statement is meaningful only if it is with respect to an observer. So the many-worlds theory is meaningless.
— Me@2018-08-31 12:55:54 PM
— Me@2019-05-11 09:41:55 PM
.
Answer me the following yes/no question:
In your multi-universe theory, is it possible, at least in principle, for an observer in one universe to interact with any of the other universes?
If no, then it is equivalent to say that those other universes do not exist.
If yes, then those other universes are not “other” universes at all, but actually just other parts of the same universe.
— Me@2019-05-11 09:43:40 PM
.
.
# Physical laws are low-energy approximations to reality, 1.3.2
QCD, Maxwell, Dirac equation, spin wave excitation, superconductivity, …
~ low energy physics
.
symmetry breaking
$\displaystyle{\downarrow}$
local minimum
$\displaystyle{\downarrow}$
simple physics
— Me@2019-05-06 11:12:02 PM
.
.
# Classical probability, 7
Classical probability is macroscopic superposition.
— Me@2012.04.23
.
That is not correct, except in some special senses.
— Me@2019-05-02
.
That is not correct, if the “superposition” means quantum superposition.
— Me@2019-05-03 08:44:11 PM
.
The difference between classical probability and quantum probability is the difference between a mixed state and a pure superposition state.
In classical probability, the relationship between mutually exclusive possible measurement results, before measurement, is OR.
In quantum probability, if the quantum system is in quantum superposition, the relationship between mutually exclusive possible measurement results, before measurement, is neither OR nor AND.
— Me@2019-05-03 06:04:27 PM
.
.
# Mixed states, 4
.
How is quantum superposition different from mixed state?
The state
$\displaystyle{|\Psi \rangle = \frac{1}{\sqrt{2}}\left(|\psi_1\rangle +|\psi_2\rangle \right)}$
is a pure state. Meaning, there’s not a 50% chance the system is in the state $\displaystyle{|\psi_1 \rangle }$ and a 50% chance it is in the state $\displaystyle{|\psi_2 \rangle}$. There is a 0% chance that the system is in either of those states, and a 100% chance the system is in the state $\displaystyle{|\Psi \rangle}$.
The point is that these statements are all made before I make any measurements.
— edited Jan 20 ’15 at 9:54
— answered Oct 12 ’13 at 1:42
— Andrew
.
Given a state, mixed or pure, you can compute the probability distribution $\displaystyle{P(\lambda_n)}$ for measuring eigenvalues $\displaystyle{\lambda_n}$, for any observable you want. The difference is the way you combine probabilities, in a quantum superposition you have complex numbers that can interfere. In a classical probability distribution things only add positively.
— Andrew Oct 12 ’13 at 14:41
.
— How is quantum superposition different from mixed state?
— Physics StackExchange
.
.
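The distinction in the quoted answer can be made concrete with density matrices. In the hypothetical two-level example below, the pure superposition and the 50/50 classical mixture give identical statistics in the computational basis, but come apart when measured in the $\displaystyle{|+\rangle}$ basis, where only the pure state shows interference:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

def proj(v):
    """Projector |v><v| onto a normalized state vector."""
    return np.outer(v, v.conj())

# Pure superposition (|0> + |1>)/sqrt(2) vs a 50/50 classical mixture
plus = (ket0 + ket1) / np.sqrt(2)
rho_pure = proj(plus)
rho_mixed = 0.5 * proj(ket0) + 0.5 * proj(ket1)

# In the computational basis the two states look identical (both give 0.5)...
p0_pure = np.real(np.trace(rho_pure @ proj(ket0)))
p0_mixed = np.real(np.trace(rho_mixed @ proj(ket0)))

# ...but in the |+> basis, interference shows up only for the pure state
p_plus_pure = np.real(np.trace(rho_pure @ proj(plus)))    # 1, constructive interference
p_plus_mixed = np.real(np.trace(rho_mixed @ proj(plus)))  # 1/2, no interference
print(np.isclose(p_plus_pure, 1.0), np.isclose(p_plus_mixed, 0.5))  # True True
```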
2019.04.23 Tuesday ACHK
# Physical laws are low-energy approximations to reality, 1.3.1
Symmetry breaking is important.
When there is symmetry-breaking, the system goes to a low-energy state.
Each possible low-energy state can be regarded as a new “physical world”.
One “physical world” cannot jump to another, unless through quantum tunnelling. But the probability of quantum tunnelling happening is low.
.
Low-energy physics theories, such as harmonic oscillator, are often simple and beautiful.
— Professor Renbao Liu
— Me@2019-04-08 10:46:32 PM
.
.
# Quantum classical logic
Mixed states, 2 | Eigenstates 4
.
— This is my guess. —
If the position is indefinite, you can express it in terms of a pure quantum state[1] (of a superposition of position eigenstates);
if the quantum state is indefinite, you can express it in terms of a mixed state;
if the mixed state is indefinite, you can express it in terms of a “mixed mixed state”[2]; etc. until definite.
At that level, you can start to use classical logic.
.
If you cannot get certainty, you can get certain uncertainty.
.
[1]: Me@2019-03-21 11:08:59 PM: This line is not correct. The uncertainty may not be quantum uncertainty; it may be classical.
[2]: Me@2019-03-22 02:56:21 PM: This concept may be useless, because a so-called “mixed mixed state” is just another mixed state.
For example, the mixture of mixed states
$\displaystyle{p |\psi_1 \rangle \langle \psi_1 | + (1-p) |\psi_2 \rangle \langle \psi_2 |}$
and
$\displaystyle{q |\phi_1 \rangle \langle \phi_1 | + (1-q) |\phi_2 \rangle \langle \phi_2 |}$
is
.
\displaystyle{\begin{aligned} &w \bigg[ p |\psi_1 \rangle \langle \psi_1 |+ (1-p) |\psi_2 \rangle \langle \psi_2 | \bigg] + (1-w) \bigg[ q |\phi_1 \rangle \langle \phi_1 | + (1-q) |\phi_2 \rangle \langle \phi_2 | \bigg] \\ &= w p |\psi_1 \rangle \langle \psi_1 | + w (1-p) |\psi_2 \rangle \langle \psi_2 | + (1-w) q |\phi_1 \rangle \langle \phi_1 | + (1-w) (1-q) |\phi_2 \rangle \langle \phi_2 | \\ \end{aligned}}
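Footnote [2] can be checked numerically: a weighted mixture of two density matrices is again a valid density matrix. A quick sketch (the particular kets and weights below are arbitrary choices for illustration):

```python
import numpy as np

def ket(v):
    """Normalize a vector into a column ket."""
    v = np.asarray(v, dtype=complex).reshape(-1, 1)
    return v / np.linalg.norm(v)

def projector(v):
    """|v><v| for a normalized ket."""
    k = ket(v)
    return k @ k.conj().T

# two mixed states, each a convex combination of projectors
psi1, psi2 = [1, 0], [0, 1]
phi1, phi2 = [1, 1], [1, -1]
p, q, w = 0.3, 0.6, 0.5

rho_a = p * projector(psi1) + (1 - p) * projector(psi2)
rho_b = q * projector(phi1) + (1 - q) * projector(phi2)
rho = w * rho_a + (1 - w) * rho_b   # the "mixed mixed state"

# still a valid density matrix: Hermitian, unit trace, positive semidefinite
assert np.allclose(rho, rho.conj().T)
assert np.isclose(np.trace(rho).real, 1.0)
assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)
```

So the "mixed mixed state" passes every test a density matrix must pass; it is just another mixed state.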
— This is my guess. —
— Me@2012.04.15
.
.
# Physical laws are low-energy approximations to reality, 1.2
When the temperature $\displaystyle{T}$ is higher than the critical temperature $\displaystyle{T_c}$, point $\displaystyle{O}$ is a local minimum. So when a particle is trapped at $\displaystyle{O}$, it is in static equilibrium.
However, when the temperature is lowered, the system changes to the lowest curve in the figure shown. As we can see, at the new state, the location $\displaystyle{O}$ is no longer a minimum. Instead, it is a maximum.
So the particle is not in static equilibrium. Instead, it is in unstable equilibrium. In other words, even if the particle is displaced just a little bit, no matter how little, it falls to a state with a lower energy.
This process can be called symmetry-breaking.
This mechanical example is an analogy for illustrating the concepts of symmetry-breaking and phase transition.
— Me@2019-03-02 04:25:23 PM
.
.
# The Door 1.1
The following contains spoilers on a fictional work.
In Westworld season 2, last episode, when a person/host X passed through “the door”, he got copied, almost perfectly, into a virtual world. Since the door was adjacent to a cliff, just after passing through it, the original copy (the physical body) fell off the cliff and then died.
Did X still exist after passing through the door?
Existence or non-existence of X is not a property of X itself. So in order for the question “does X exist” to be meaningful, we have to specify “with respect to whom”.
With respect to the observer Y, does X exist?
.
There are 3 categories of possible observers (who were observing X passing through the door):
1. the original person (X1)
.
X_1 == X
2. the copied person (X2) in the virtual world
.
For simplicity, assume that X2 is a perfect copy of X.
3. other people (Y)
— Me@2019-02-09 1:09 PM
.
.
# Quantum decoherence 9
This is a file from the Wikimedia Commons.
In the classical scattering of a target body by environmental photons, the motion of the target body is not, on average, changed by the scattered photons. In quantum scattering, the interaction between the scattered photons and the superposed target body causes them to be entangled, thereby delocalizing the phase coherence from the target body to the whole system, rendering the interference pattern unobservable.
The decohered elements of the system no longer exhibit quantum interference between each other, as in a double-slit experiment. Any elements that decohere from each other via environmental interactions are said to be quantum-entangled with the environment. The converse is not true: not all entangled states are decohered from each other.
— Wikipedia on Quantum decoherence
.
.
2019.02.22 Friday ACHK
# Logical arrow of time, 7
When we imagine that we know and keep track of all the exact information about the physical system – which, in practice, we can only do for small microscopic physical systems – the microscopic laws are time-reversal-symmetric (or at least CPT-symmetric) and we don’t see any arrow. There is a one-to-one unitary map between the states at times “t1” and “t2” and it doesn’t matter which of them is the past and which of them is the future.
A problem is that with this microscopic description where everything is exact, no thermodynamic concepts such as the entropy “emerge” at all. You might say that the entropy is zero if the pure state is exactly known all the time – at any rate, a definition of the entropy that would make it identically zero would be completely useless, too. By “entropy”, I never mean a quantity that is allowed to be zero for macroscopic systems at room temperature.
But whenever we deal with incomplete information, this one-to-one map inevitably disappears and the simple rules break down. Macroscopic laws of physics are irreversible. If friction brings your car to a halt and you wait for days, you won’t be able to say when the car stopped. The information disappears: it dissipates.
— The arrow of time: understood for 100 years
— Lubos Motl
.
If there is a god-view, there is no time arrow.
Time arrow only exists from a macroscopic point of view. Microscopically, there is no time arrow.
If there is a god-view that can observe all the pieces of the exact information, including the microscopic ones, there is no time arrow.
Also, if there is a god-view, there will be paradoxes, such as the black hole information paradox.
Black hole complementarity is a conjectured solution to the black hole information paradox, proposed by Leonard Susskind, Larus Thorlacius, and Gerard ‘t Hooft.
Leonard Susskind proposed a radical resolution to this problem by claiming that the information is both reflected at the event horizon and passes through the event horizon and cannot escape, with the catch being no observer can confirm both stories simultaneously.
— Wikipedia on Black hole complementarity
The spirit of black hole complementarity is that there is no god-view. Instead, physics is always about what an observer can observe.
— Me@2018-06-21 01:09:05 PM
.
.
# Physical laws are low-energy approximations to reality, 1.1
These notes were taken by me in 2008, during the course PHY5510 Advanced Statistical Mechanics.
— Me@2019-01-31 11:54:13 PM
.
.
# Quantum logic, 3
The more common view regarding quantum logic, however, is that it provides a formalism for relating observables, system preparation filters and states.$^\text{[citation needed]}$ In this view, the quantum logic approach resembles more closely the C*-algebraic approach to quantum mechanics. The similarities of the quantum logic formalism to a system of deductive logic may then be regarded more as a curiosity than as a fact of fundamental philosophical importance. A more modern approach to the structure of quantum logic is to assume that it is a diagram – in the sense of category theory – of classical logics (see David Edwards).
— Wikipedia on Quantum logic
.
.
2019.01.26 Saturday ACHK
# Logical arrow of time, 6.4
The source of the macroscopic time asymmetry, aka the second law of thermodynamics, is the difference of prediction and retrodiction.
In a prediction, the deduction direction is the same as the physical/observer time direction.
In a retrodiction, the deduction direction is opposite to the physical/observer time direction.
.
— guess —
If a retrodiction is done by a time-opposite observer, he will see the entropy increasing. For him, he is really doing a prediction.
— guess —
.
— Me@2013-10-25 3:33 AM
.
The existence of the so-called “the paradox of the arrow of time” is fundamentally due to the fact that some people insist that physics is about an observer-independent objective truth of reality.
However, it is not the case. Physics is not about “objective” reality. Instead, physics is always about what an observer would observe.
— Lubos Motl
— paraphrased
— Me@2019-01-19 10:25:15 PM
.
.
According to special relativity, in EPR, which of Alice and Bob collapses the wavefunction is not absolute. In other words, they do not have any causal relations.
— Me@2012-04-12 10:42:22 PM
.
.
# Consistent histories, 6
.
an observer ~ a consistent history
— Me@2019-01-05 04:02:43 PM
.
.
# Photon dynamics in the double-slit experiment, 5
.
What is the relationship between a Maxwell photon and a quantum photon?
— Me@2012-04-09 7:38:06 PM
.
The paper Gloge, Marcuse 1969: Formal Quantum Theory of Light Rays starts with the sentence
Maxwell’s theory can be considered as the quantum theory of a single photon and geometrical optics as the classical mechanics of this photon.
That caught me by surprise, because I always thought, Maxwell’s equations should arise from QED in the limit of infinite photons according to the correspondence principle of high quantum numbers as expressed e.g. by Sakurai (1967):
The classical limit of the quantum theory of radiation is achieved when the number of photons becomes so large that the occupation number may as well be regarded as a continuous variable. The space-time development of the classical electromagnetic wave approximates the dynamical behavior of trillions of photons.
Isn’t the view of Sakurai in contradiction to Gloge? Do Maxwell’s equation describe a single photon or an infinite number of photons? Or do Maxwell’s equations describe a single photon and also an infinite number of photons at the same time? But why do we need QED then at all?
— edited Nov 28 ’16 at 6:35
— tparker
— asked Nov 20 ’16 at 22:33
— asmaier
.
Because photons do not interact, to very good approximation for frequencies lower than $\displaystyle{m_e c^2 / h}$ ($\displaystyle{m_e}$ = electron mass), the theory for one photon corresponds pretty well to the theory for an infinite number of them, modulo Bose-Einstein symmetry concerns. This is similar to most of the statistical theory of ideal gases being derivable from looking at the behavior of a single gas particle in kinetic theory.
Put another way, the single photon behavior $\displaystyle{\leftrightarrow}$ Maxwell’s equations correspondence only holds if you look at the Fourier transform version of Maxwell’s equations. The real space-time version of Maxwell’s equations would require looking at a superposition of an infinite number of photons — one way to describe the taking [of] an inverse Fourier transform.
If you want to think of it in terms of Feynman diagrams, classical electromagnetism is described by a subset of the tree-level diagrams, while quantum field theory requires both tree-level diagrams and diagrams that have closed loops in them. It is the fact that the lowest-mass particle photons can produce a closed loop by interacting with, the electron, is massive that keeps photons from scattering off of each other.
In sum: they’re both incorrect for not including frequency cutoff concerns (pair production), and they’re both right if you take the high frequency cutoff as a given, depending on how you look at things.
— edited Dec 3 ’16 at 6:28
— answered Nov 27 ’16 at 23:08
— Sean E. Lake
.
Maxwell’s equations, which describe the wavefunction of a single noninteracting photon, don’t need Planck’s constant. I find that remarkable. – asmaier Dec 2 ’16 at 14:16
@asmaier : Maxwell’s equations predate the quantum nature of light, they weren’t enough to avoid the ultraviolet catastrophe. Note too that what people think of as Maxwell’s equations are in fact Heaviside’s equations, and IMHO some meaning has been lost. – John Duffield Dec 3 ’16 at 17:45
— Do Maxwell’s equations describe a single photon or an infinite number of photons?
— Physics StackExchange
.
.
2019.01.03 Thursday ACHK
# The problem of induction 3.3
“Everything has no patterns” (or “there are no laws”) creates a paradox.
.
If “there are 100% no first order laws”, then it is itself a second order law (the law of no first-order laws), allowing you to use probability theory.
In this sense, probability theory is a second order law: the law of “there are 100% no first order laws”.
In this sense, probability theory is not for a single event, but statistical, for a meta-event: a collection of events.
Using meta-event patterns to predict the next single event, that is induction.
.
Induction is a kind of risk minimization.
— Me@2012-11-05 12:23:24 PM
.
.
# Afshar experiment, 2
Double slit experiment, 8.2
.
In the double slit experiment, the screen is used to detect interference pattern itself, causing the photon wavefunctions to “collapse”.
In the Afshar experiment, there is no classically definite position for a photon when the photon passes “through” the grid of vertical wires. So no interference pattern is “formed”, unless you put some kind of screen afterwards. [Me@2015-07-21 10:59 PM: i.e. making the observation, c.f. delayed choice experiment]
— Me@2012-04-09 12:19:52 AM
.
Being massless, they cannot be localized without being destroyed…
— Photon dynamics in the double-slit experiment
— Wikipedia on Photon
.
. | |
# What would be the value of $P$ in this example?
A Markov Chain $$(X_n)_n$$ has the following transition matrix:
$$P = \begin{bmatrix} 0.1 & 0.3 & 0.6\\ 0 & 0.4 & 0.6\\ 0.3&0.2&0.5 \end{bmatrix}$$ with initial distribution $$\alpha = (0.2, 0.3, 0.5)$$.
Find the following:
(a) $$P(X_7 = 3|X_6 = 2)$$
(b) $$P(X_9 = 2|X_1 = 2, X_5 = 1, X_7 = 3)$$
(c) $$P(X_0 = 3|X_1 = 1)$$
What I understand, according to Example-$$7.16$$ of the book by Seymour Lipschutz (Page-$$132$$), is that $$P^n$$ is used to find the probabilities of changes of state in exactly $$n$$ steps, and $$p^{(m)} \cdot P^n$$ is used to find the probability distribution of the various states after $$m$$ steps.
According to my understanding, the initial distribution is not used in these three computations, because multiplying a $$1 \times 3$$ matrix by a $$3 \times 3$$ matrix gives a $$1 \times 3$$ matrix; in that case, the transition matrix won't make sense for three states.
So, my attempted solution is the following:
What is the probability that the system changes from state-$$2$$ to state-$$3$$ in exactly $$1$$ step?
So, the answer would be $$p(2, 3) = 0.6$$
What is the probability that the system changes from state-$$3$$ to state-$$2$$ in exactly $$2$$ steps?
$$P^2 = P \cdot P = \begin{bmatrix} 0.1 & 0.3 & 0.6\\ 0 & 0.4 & 0.6\\ 0.3&0.2&0.5 \end{bmatrix} \cdot \begin{bmatrix} 0.1 & 0.3 & 0.6\\ 0 & 0.4 & 0.6\\ 0.3&0.2&0.5 \end{bmatrix} = \begin{bmatrix} 0.19&0.27&0.54\\ 0.18&0.28&0.54\\ 0.18&0.27&0.55 \end{bmatrix}$$
So, the answer would be: $$p(3, 2) = 0.27$$
(c) I think, this question asks:
What is the probability that the system changes from state-$$1$$ to state-$$3$$ in exactly $$9$$ steps?
Is it? If yes, then it would be $$p(1,3)$$ from $$P^9$$.
Am I correct?
Edit:
$$P(X_0 = 3|X_1 = 1) = \frac{P(X_0 = 3, X_1 = 1)}{P(X_1 = 1)}$$
$$\Rightarrow P(X_0 = 3|X_1 = 1) = \frac{P(X_1 = 1, X_0 = 3)}{P(X_1 = 1)}$$
$$\Rightarrow P(X_0 = 3|X_1 = 1) = \frac{P(X_1 = 1| X_0 = 3)\cdot P(X_0=3)}{P(X_1 = 1)}$$
Now,
• from $$P$$, we have $$P(X_1=1|X_0=3) = 0.30$$
• from $$\alpha$$, we have $$P(X_0=3) = 0.50$$, and
• from $$\alpha P$$, we have $$P(X_1=1) = 0.17$$
So,
$$P(X_0 = 3|X_1 = 1) = \frac{0.30 \cdot 0.50}{0.17} \approx 0.88$$.
Is this a correct calculation?
• At the top, you have for (c) $P(X_0=3 \mid X_1=1)$, but later you talk about 9 steps. Presumably this means that you’re thinking about $X_9$ (in which case it’s really eight steps) or $X_{10}$. Which is it? – amd Mar 10 '19 at 23:17
• No, they’re not ”changing in reverse order.” The process only goes in one direction. It’s asking you about the probability that the system started in state 3 given that it’s in state 1 after a single step. Why then are you asking about nine steps of the process at the end of your question? – amd Mar 10 '19 at 23:20
• $X_0$ is the state at time $t=0$; $X_9$ is the state at time $t=9$. You can’t go back to $0$ from $9$. – amd Mar 10 '19 at 23:31
• It means exactly what I wrote in a previous comment. You’re being asked about the probability of a specific history of the system. – amd Mar 10 '19 at 23:39
You have part (a) and part (b) down. For part (c), we need that initial distribution.
What is the probability that the system is in state 1 after one step?
What is the probability that the system starts in state 3 and is then in state 1 after 1 step?
Can you use these to find the conditional probability this part asks for?
And no, there's no cyclic behavior here. The sequence of $$X_n$$ keeps going for all positive integers $$n$$, and it's not periodic.
[In response to the added material in the question]
Yes, that's a correct calculation. You have now fully solved the problem.
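For completeness, the three answers can be checked numerically. A quick sketch (states $$1$$–$$3$$ are mapped to array indices $$0$$–$$2$$):

```python
import numpy as np

P = np.array([[0.1, 0.3, 0.6],
              [0.0, 0.4, 0.6],
              [0.3, 0.2, 0.5]])
alpha = np.array([0.2, 0.3, 0.5])  # initial distribution

# (a) one-step transition from state 2 to state 3
a = P[1, 2]

# (b) by the Markov property, only X_7 = 3 matters:
# two-step transition from state 3 to state 2
P2 = P @ P
b = P2[2, 1]

# (c) Bayes' rule: P(X_0 = 3 | X_1 = 1)
p_x1 = alpha @ P                      # distribution of X_1
c = P[2, 0] * alpha[2] / p_x1[0]

print(a, b, round(c, 2))
```

This reproduces $$0.6$$ for (a), $$0.27$$ for (b), and approximately $$0.88$$ for (c), matching the work above.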
• Considering that your claimed answers imply the probability of starting in state 3 and then going to state 1 is less than the probability of going to state 1 regardless of where we started ... no, you've got that part wrong. – jmerry Mar 10 '19 at 23:36
• I'm not sure what you were trying to say there. Now, what I was trying to say with the bit that had the word "regardless"? The probability of starting in state 3 and then going to state 1 after one step must be $\le$ the probability of being in state 1 after one step. – jmerry Mar 11 '19 at 0:03 | |
# chainer.backends.cuda.raw¶
chainer.backends.cuda.raw(code, name, *args, **kwargs)[source]
Creates a raw kernel function.
This function uses memoize() to cache the resulting kernel object, i.e. the resulting kernel object is cached for each argument combination and CUDA device.
The arguments are the same as those for cupy.RawKernel. | |
# How do you solve 8=4( | x | - 25)?
May 30, 2018
$x = \pm 27$
$8 = 4 \left(| x | - 25\right)$
$2 = \left(| x | - 25\right)$
$| x | = 27$
$x = \pm 27$ | |
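A quick numeric check of both roots (a sketch in Python):

```python
# verify both candidate solutions of 8 = 4(|x| - 25)
for x in (27, -27):
    assert 8 == 4 * (abs(x) - 25)
```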
# $\triangle \mathrm{ABC}$ and $\triangle \mathrm{DBC}$ are two isosceles triangles on the same base $BC$ and vertices $A$ and $D$ are on the same side of $\mathrm{BC}$ (see Fig. 7.39). If $\mathrm{AD}$ is extended to intersect $\mathrm{BC}$ at $\mathrm{P}$, show that (i) $\triangle \mathrm{ABD} \cong \triangle \mathrm{ACD}$, (ii) $\triangle \mathrm{ABP} \cong \triangle \mathrm{ACP}$, (iii) $\mathrm{AP}$ bisects $\angle \mathrm{A}$ as well as $\angle \mathrm{D}$, and (iv) $\mathrm{AP}$ is the perpendicular bisector of $\mathrm{BC}$.
Given:
$\triangle ABC$ and $\triangle DBC$ are two isosceles triangles on the same base $BC$ and vertices $A$ and $D$ are on the same side of $BC$. If $AD$ is extended to intersect $BC$ at $P$.
To do:
We have to show:
(i) $\triangle ABD \cong \triangle ACD$
(ii) $\triangle ABP \cong \triangle ACP$
(iii) $AP$ bisects $\angle A$ as well as $\angle D$.
(iv) $AP$ is the perpendicular bisector of $BC$.
Solution:
(i) We know that,
The Side-Side-Side (SSS) congruence rule states that if three sides of one triangle are equal to the three corresponding sides of another triangle, then the triangles are congruent.
Let us consider $\triangle ABD$ and $\triangle ACD$
Given,
$\triangle ABC$ and $\triangle DBC$ are isosceles triangles,
This implies,
$AB=AC$ and $BD=CD$
Since $AD$ is the common side
$AD=AD$
Therefore,
$\triangle ABD \cong \triangle ACD$
(ii) Let us consider $\triangle ABP$ and $\triangle ACP$
Given,
$\triangle ABC$ is isosceles,
This implies,
$AB=AC$
Since $AP$ is the common side
We get,
$AP=AP$
We also know
From corresponding parts of congruent triangles: If two triangles are congruent, all of their corresponding angles and sides must be equal.
Therefore,
$\angle PAB=\angle PAC$.
Therefore,
According to the Rule of Side-Angle-Side Congruence:
Triangles are said to be congruent if any pair of corresponding sides and their included angles are equal in both triangles.
Hence, $\triangle ABP \cong \triangle ACP$.
(iii) We know that,
From corresponding parts of congruent triangles: If two triangles are congruent, all of their corresponding angles and sides must be equal.
Therefore,
$\angle PAB=\angle PAC$ (since $\triangle ABD \cong \triangle ACD$)
Hence,
$AP$ bisects $\angle A$ ...(i)
Let us consider $\triangle BPD$ and $\triangle CPD$
We also know that,
The Side-Side-Side (SSS) congruence rule states that if three sides of one triangle are equal to the three corresponding sides of another triangle, then the triangles are congruent.
Since $PD$ is the common side.
We get, $PD=PD$
As $\triangle DBC$ is isosceles, we get
$BD=CD$
and, since $\triangle ABP \cong \triangle ACP$, by CPCT we also have $BP=CP$.
Therefore we get,
$\triangle BPD \cong \triangle CPD$
Hence, $\angle BDP=\angle CDP$ by CPCT....(ii)
Now, by comparing (i) and (ii) We can say that $AP$ bisects $\angle A$ as well as $\angle D$.
(iv) Let us consider $\triangle BPD$ and $\triangle CPD$
We know that,
From corresponding parts of congruent triangles: If two triangles are congruent, all of their corresponding angles and sides must be equal.
Therefore,
$\angle BPD=\angle CPD$
and $BP=CP$ ....(i)
We also know that,
The sum of the angles of a straight line is $180^o$
$\angle BPD+\angle CPD=180^o$
Since $\angle BPD=\angle CPD$
We get,
$2\angle BPD=180^o$
$\angle BPD=\frac{180^o}{2}$
$\angle BPD=90^o$.....(ii)
From (i) and (ii) we can say that,
$AP$ is the perpendicular bisector of $BC$.
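The conclusions can also be sanity-checked with coordinates. The values below are one arbitrary choice satisfying the hypotheses (isosceles triangles on base $BC$, with $A$ and $D$ on the same side), not part of the original problem:

```python
import math

# arbitrary coordinates satisfying the hypotheses
B, C = (-1.0, 0.0), (1.0, 0.0)
A, D = (0.0, 3.0), (0.0, 1.0)
P = (0.0, 0.0)  # line AD (the y-axis) meets BC (the x-axis) here

def dist(u, v):
    return math.hypot(u[0] - v[0], u[1] - v[1])

# isosceles hypotheses hold: AB = AC and DB = DC
assert math.isclose(dist(A, B), dist(A, C))
assert math.isclose(dist(D, B), dist(D, C))

# conclusion (iv): P is the midpoint of BC, and AP is perpendicular to BC
assert math.isclose(dist(B, P), dist(C, P))
assert A[0] == D[0] == P[0] and B[1] == C[1] == P[1]  # AP vertical, BC horizontal
```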
Updated on 10-Oct-2022 13:41:14 | |
For the following program fragment, the running time is given by
Procedure A(n)
{
if(n <= 2)
return 1;
else return A(log (n));
}
1. $\Theta(\log \log n)$
2. $\Theta(\log \sqrt n)$
3. $\Theta(\log^* n)$
4. $\Theta(\sqrt n)$
edited
Can someone please explain option C?
What is $*$ here?
$T(n) = T(\log n) + 1$
Solution to this recurrence is : no of times logrithmic function applied on $n$ in order to get base condition, which is nothing but $\log^* n$.
Complexity $=\Theta (\log ^*n)$
No, they are very different. You can check it with a simple example: take a number and apply log twice; then, separately, take log once and square it. You will see a large difference, so
$(\log n)^2$ is not the same as applying $\log$ twice.
Applying log twice gives $\log(\log(x))$, which is different from $\log(x) \cdot \log(x)$. By convention, though, $\log^2 x = \log(x) \cdot \log(x)$,
just the same as $\sin^2(x) = \sin(x) \cdot \sin(x)$.
(It is $\sin(2x)$, by the double-angle formula, that is not equal to $\sin(x) \cdot \sin(x)$. Here, $\log^* n$ denotes the iterated logarithm: the number of times $\log$ must be applied before the result drops to at most $2$.)
Let's take some values of n.
Clearly, the recurrence is $T(n)=T(logn)+1$
I'll take it as $T(n)=T\left( \left \lceil \log n \right \rceil \right)+1$ because $\log n = \Theta\left( \left \lceil \log n \right \rceil \right)$
n = $1024$
$1024\rightarrow 10\rightarrow 4\rightarrow 2$
3 recursive calls.
So, for 1024, it is $O(loglogn)$
n = $2^{2048}$
$2^{2048}\rightarrow 2048\rightarrow 11\rightarrow 4\rightarrow 2$
4 recursive calls.
So, for $2^{2048}$ it is $O(logloglogn)$
n = $2^{2^{2^{2048}}}$
$2^{2^{2^{2048}}}\rightarrow$ $2^{2^{2048}}\rightarrow$ $2^{2048}\rightarrow 2048\rightarrow 11\rightarrow 4\rightarrow 2$
6 recursive calls.
So, for $2^{2^{2^{2048}}}$ it is $O(loglogloglogn)$
It is evident that number of logs is dependent upon how big the input value is. So, it would be Option C | |
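The counting argument above is exactly what the iterated logarithm $\log^* n$ measures, and it is easy to reproduce (a sketch using base-2 logs and ceilings, matching the walk-through above):

```python
import math

def iterated_log(n):
    """Count how many times ceil(log2) must be applied before n drops to <= 2."""
    count = 0
    while n > 2:
        n = math.ceil(math.log2(n))
        count += 1
    return count

print(iterated_log(1024))       # 3 recursive calls, as in the example
print(iterated_log(2 ** 2048))  # 4 recursive calls
```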
# Development of a Real-Time Monitoring System for a Motorized Valve for Automatic Gas Control
• Hyun-Seob Cho (Department of Electronic Engineering, Chungwoon University) ;
• In-Ho Yu (Department of Electrical Engineering, Iksan College)
• Published : 2001.12.01
#### Abstract
It is quite difficult to control the temperature of single-crystal growth, as it is when producing most heat-treated products. Temperature control is also an important factor when making $Al_2O_3$ (single crystal), which is used for artificial jewels, watch glasses, and heat-resistant transparent glasses. Thus, maintaining the proper temperature over time while mixing oxygen and hydrogen is a major concern. In this paper, we study an electric valve positioning system driven by a DC motor for the gas mixture, to improve the quality of the products.
#### Keywords
Single crystalline;Electrical valve positioning system | |
laurenmack one year ago what is the solution of the system? -4x+3y=-12
• This Question is Open
1. sleung
trying to solve for y?
2. laurenmack
x and y @sleung
3. wio
You only have one equation, so there are infinite solutions.
4. sleung
$-4x=-12-3y$ $x=3+\frac{ 3 }{ 4 }y$ $3y=-12+4x$ $y=-4+\frac{ 4 }{ 3 }x$
Can anyone explain how is it wrong??
Note by Shuvayan Ghosh Dastidar
2 years, 8 months ago
Sort by:
When $$x^3 - x^2 = x^2(x - 1)$$ you divided by zero. · 2 years, 8 months ago
$$(x-1)$$ is equal to $$-1$$, not $$0$$. This is because you multiplied the $$x^3$$ factor to get $$0$$, meaning that $$1\ne0$$. · 2 years, 8 months ago
Everything is right up to $x^2(x-1)=0$, but the next step has a mistake. If $ab=0$, then $a=0$ or $b=0$. Here $a=x^2$ and $b=(x-1)$. We already have $a=0$, which means $(x-1)$ may or may not equal zero; assuming $(x-1)=0$ is what makes the argument wrong. · 2 years, 8 months ago
CPL - Chalmers Publication Library
# Topics on Harmonic analysis and Multilinear Algebra
Mahdi Hormozi (Institutionen för matematiska vetenskaper)
2015.
[Doktorsavhandling]
The present thesis consists of six different papers. Indeed, they treat three different research areas: function spaces, singular integrals and multilinear algebra. In paper I, a characterization of continuity of the $p$-$\Lambda$-variation function is given and Helly's selection principle for $\Lambda BV^{(p)}$ functions is established. A characterization of the inclusion of Waterman-Shiba classes into classes of functions with given integral modulus of continuity is given. A useful estimate on the modulus of variation of functions of class $\Lambda BV^{(p)}$ is found. In paper II, a characterization of the inclusion of Waterman-Shiba classes into $H_{\omega}^{q}$ is given. This corrects and extends an earlier result of a paper from 2005. In paper III, the characterization of the inclusion of Waterman-Shiba spaces $\:\Lambda BV^{(p)}\:$ into generalized Wiener classes of functions $BV(q;\,\delta)$ is given. It uses a new and shorter proof and extends an earlier result of U. Goginava. In paper IV, we discuss the existence of an orthogonal basis consisting of decomposable vectors for all symmetry classes of tensors associated with Semi-dihedral groups $SD_{8n}$. In paper V, we discuss o-bases of symmetry classes of tensors associated with the irreducible Brauer characters of the Dicyclic and Semi-dihedral groups. As in the case of Dihedral groups [46], it is possible that $V_\phi(G)$ has no o-basis when $\phi$ is a linear Brauer character. Let $\vec{P}=(p_1,\dotsc,p_m)$ with \$1
Keywords: Generalized bounded variation, Helly's theorem, Modulus of variation, Generalized Wiener classes, Symmetry classes of tensors, Orthogonal basis, Brauer symmetry classes of tensors, Multilinear singular integrals, weighted norm inequalities, weighted bounds, local mean oscillation, Lerner's formula
This record was created 2016-02-22.
CPL Pubid: 232300
# Read directly!
Link to another site (may require login)
# Departments (Chalmers)
Department of Mathematical Sciences (Chalmers); Department of Mathematical Sciences (GU)
# Examination
Date: Thursday 22nd of October 2015, at 13:15
Venue: room Pascal, Department of Mathematical Sciences, Chalmers Tvärgata 3
# Pagebreak and Verbatim environments
Given the following piece of code:
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage{fancyvrb}
\DefineVerbatimEnvironment{shell}{Verbatim}{
commandchars=\%\{\},
label=\shelltitle,
frame=single,
samepage=true,
formatcom=\setcounter{prompt}{0}\start
}
\newcommand{\shelltitle}{This is a shell}
\makeatletter
\def\start{\let\FV@FV@ProcessLine\FV@ProcessLine
\def\FV@ProcessLine{\noindent\vrule height3ex depth2ex
\hbox to\hsize{\kern\FV@FrameSep This is the shell prompt\hfil}%
\kern-.8pt\vrule\par
\let\FV@ProcessLine\FV@FV@ProcessLine
\FV@ProcessLine}%
}
\makeatother
\newcounter{prompt}
\newcommand{\prompt}{\stepcounter{prompt}\theprompt>}
\begin{document}
\begin{shell}
%prompt echo foo{}
foo
%prompt echo bar
bar
\end{shell}
\end{document}
The samepage=true option doesn't affect the heading line ("This is the shell prompt") and page break happens immediately after that line. How can I ensure the heading line resides on the same page of the rest of the listing?
-
Did you try adding \nobreak just after \kern-.8pt\vrule\par? – egreg Aug 2 '12 at 13:59
As egreg mentioned in his comment, it is enough to use \nobreak right after \kern-.8pt\vrule\par. In the following example, if you delete the \nobreak command and process the resulting code, you'll see the undesired effect mentioned (a page break right after the line "This is the shell prompt"); processing the document as it is (with \nobreak) you'll see that the page break inside the environment is suppressed and the whole text is moved to the second page:
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage{fancyvrb}
\DefineVerbatimEnvironment{shell}{Verbatim}{
commandchars=\%\{\},
label=\shelltitle,
frame=single,
samepage=true,
formatcom=\setcounter{prompt}{0}\start
}
\newcommand{\shelltitle}{This is a shell}
\makeatletter
\def\start{\let\FV@FV@ProcessLine\FV@ProcessLine
\def\FV@ProcessLine{\noindent\vrule height3ex depth2ex
\hbox to\hsize{\kern\FV@FrameSep This is the shell prompt\hfil}%
\kern-.8pt\vrule\par\nobreak
\let\FV@ProcessLine\FV@FV@ProcessLine
\FV@ProcessLine}%
}
\makeatother
\newcounter{prompt}
\newcommand{\prompt}{\stepcounter{prompt}\theprompt>}
\begin{document}
\vspace*{18cm}
\begin{shell}
%prompt echo foo{}
foo
%prompt echo bar
bar
\end{shell}
\end{document}
- | |
# Homotopy Type Theory cocartesian monoidal dagger category > history (Rev #3, changes)
Showing changes from revision #2 to #3: Added | Removed | Changed
# Contents
## Definition
A cocartesian monoidal dagger category is a monoidal dagger category $(C, +, 0)$ with
• a morphism $i_A: hom(A,A + B)$ for $A:C$ and $B:C$.
• a morphism $i_B: hom(B,A + B)$ for $A:C$ and $B:C$.
• a morphism $d_{A + B}: hom(A + B,D)$ for an object $D:C$ and morphisms $d_A: hom(A,D)$ and $d_B: hom(B,D)$
• an identity $u_A: d_{A + B} \circ i_A = d_A$ for an object $D:C$ and morphisms $d_A: hom(A,D)$ and $d_B: hom(B,D)$
• an identity $u_B: d_{A + B} \circ i_B = d_B$ for an object $D:C$ and morphisms $d_A: hom(A,D)$ and $d_B: hom(B,D)$
• a morphism $0_a: hom(0,A)$ for every object $A:C$
• an identity $u_0: f \circ 0_A = 0_B$ for every $A:C$, $B:C$, and $f:hom(A,B)$.
In a cocartesian monoidal dagger category, the tensor product is called a coproduct and the tensor unit is called an initial object. | |
# An Analysis of the Expressiveness of Deep Neural Network Architectures Based on Their Lipschitz Constants
Deep neural networks (DNNs) have emerged as a popular mathematical tool for function approximation due to their capability of modelling highly nonlinear functions. Their applications range from image classification and natural language processing to learning-based control. Despite their empirical successes, there is still a lack of theoretical understanding of the representative power of such deep architectures. In this work, we provide a theoretical analysis of the expressiveness of fully-connected, feedforward DNNs with 1-Lipschitz activation functions. In particular, we characterize the expressiveness of a DNN by its Lipschitz constant. By leveraging random matrix theory, we show that, given sufficiently large and randomly distributed weights, the expected upper and lower bounds of the Lipschitz constant of a DNN, and hence its expressiveness, increase exponentially with depth and polynomially with width, which gives rise to the benefit of the depth of DNN architectures for efficient function approximation. This observation is consistent with established results based on alternative expressiveness measures of DNNs. In contrast to most of the existing work, our analysis based on the Lipschitz properties of DNNs is applicable to a wider range of activation nonlinearities and potentially allows us to make sensible comparisons between the complexity of a DNN and the function to be approximated by the DNN. We consider this work to be a step towards understanding the expressive power of DNNs and towards designing appropriate deep architectures for practical applications such as system control.
## 1 Introduction
Given their capability to approximate highly nonlinear functions, deep neural networks (DNNs) have found increasing application in domains such as image classification [krizhevsky2012imagenet, googlenet], natural language processing [hinton2012deep, hannun2014deep], and learning-based control [shi2019neural, chen2019large, zhou-cdc17]. As compared to their shallow counterparts, DNNs are often favoured in practice due to their compact representation of nonlinear functions [montufar2017number]. Despite their practical successes, the theoretical understanding of the representative power of such deep architectures remains an active research topic addressed by both the machine learning and neuroscience community. In this work, we aim to contribute to the understanding of the expressiveness of DNNs by presenting a new perspective based on Lipschitz constant analysis that is interpretable for applications such as system control.
There are several recent works analyzing the expressive power of deep architectures. One notable work is [NIPS2011_4350], where the authors show that, for a sum-product network, a deep network is exponentially more efficient than a shallow network in representing the same function. Following this work, several researchers considered more practical DNNs with piecewise linear activation functions (e.g., rectified linear units (ReLU) and hard tanh) and showed that the expressiveness of a DNN, measured by the number of linear regions partitioned by the DNN, grows exponentially with depth and polynomially with width [pascanu2013number, montufar2014number, arora2016understanding, serra2017bounding]. In parallel to the work on piecewise linear DNNs, raghu2017expressive consider DNNs with independent and identically distributed (i.i.d.) Gaussian weight and bias parameters (i.e., random DNNs) and introduce a new measure of expressiveness based on the length of the output trajectory as the DNN traverses a one-dimensional trajectory in its input space. Similar to the other results, the authors show that the expressiveness of a DNN measured by the expected output trajectory length increases exponentially with the depth of the network.
While existing work has shown the exponential expressiveness of deep architectures, the measures of expressiveness are typically specific to the type of deep architecture being considered. For instance, for the sum-product networks considered in NIPS2011_4350, the measure of expressiveness is the number of monomials used to construct the polynomial function, and for DNNs with piecewise linear activation functions pascanu2013number; montufar2014number; arora2016understanding; serra2017bounding, the number of linear regions is used as the measure to characterize the complexity of the DNN. These specialized notions of expressivity prohibit sensible comparisons between the complexity of a DNN and the underlying function it approximates. While the expressiveness measure based on output trajectory length raghu2017expressive is applicable to DNNs with more general activation functions, it is still not trivial to connect this measure to the properties of the function to be approximated by the DNN.
In this work, motivated by the theoretical analysis of DNNs in feedback control applications shi2019neural; fazlyab2019efficient, we introduce an alternative perspective on the expressive power of DNNs based on their Lipschitz properties. Similar to raghu2017expressive, we consider a DNN with random weight parameters. By leveraging results from random matrix theory, we provide an analysis of the expressive power of DNNs based on their Lipschitz constant and establish connections with earlier results using alternative measures of DNN expressiveness. Our ultimate goal is to understand the implications of choosing particular neural network architectures for learning in feedback control applications.
## 2 Preliminaries
We consider fully-connected DNNs that are defined as follows:
$$h_0(x) = x, \qquad h_l(x) = \sigma\left(W_l\, h_{l-1}(x) + b_l\right) \quad \forall\, l = 1, \dots, L, \qquad y = W_{L+1}\, h_L(x) + b_{L+1}, \tag{1}$$
where $x$ is the input, $y$ is the output, the subscripts denote the layer index with $l = 0$ being the input layer, $l = 1, \dots, L$ being the hidden layers, and $l = L+1$ being the output layer, $h_l(x)$ is the output from the $l$th layer with $\sigma(\cdot)$ being the element-wise activation function and $n_l$ being the number of neurons in the $l$th layer, and $W_l$ and $b_l$ are the weight and bias parameters between layers $l-1$ and $l$. In our analysis, we focus on DNNs with 1-Lipschitz activation functions virmaux2018lipschitz, which include most commonly used activation functions such as ReLU, tanh, and sigmoid.
To facilitate our analysis, similar to raghu2017expressive, in this work, we consider DNNs with random weight matrices $W_l$ whose elements are i.i.d. zero-mean Gaussian random variables $\mathcal{N}(0, \sigma_w^2)$, where $\sigma_w^2$ is the variance of the Gaussian distribution. Our goal is to analyze the expressiveness of such a DNN as we vary its architectural properties (i.e., width and depth).
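As a concrete illustration (not from the original paper), the random DNN in (1) can be instantiated in a few lines of NumPy; the widths, variance, and tanh activation below are arbitrary choices:

```python
import numpy as np

def random_dnn(widths, sigma_w, rng):
    # One weight matrix per layer; elements i.i.d. N(0, sigma_w^2).
    # Biases are set to zero since they do not affect the Lipschitz analysis.
    return [sigma_w * rng.standard_normal((widths[l + 1], widths[l]))
            for l in range(len(widths) - 1)]

def forward(weights, x, act=np.tanh):
    # Implements (1): h_0 = x, h_l = act(W_l h_{l-1}), y = W_{L+1} h_L.
    h = x
    for W in weights[:-1]:
        h = act(W @ h)
    return weights[-1] @ h

rng = np.random.default_rng(0)
widths = [2, 64, 64, 1]  # n_0, n_1, n_2, n_3 (arbitrary architecture)
weights = random_dnn(widths, sigma_w=0.5, rng=rng)
y = forward(weights, np.array([0.3, -0.7]))
```

Sampling many such networks for a fixed architecture is how the expectations below can be estimated empirically.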
## 3 Lipschitz Constant as a Measure of Expressiveness
In this work, we characterize the expressiveness of a DNN by its Lipschitz constant. Intuitively, a larger Lipschitz constant implies that small changes in the DNN input can lead to large changes at the output, which provides greater flexibility to model nonlinear functions.
Formally, a function $f$ is said to be Lipschitz continuous on a set $X$ if
$$(\exists\, \rho > 0)\, (\forall\, x, x' \in X) \quad \|f(x) - f(x')\| \le \rho\, \|x - x'\|, \tag{2}$$
and its Lipschitz constant on $X$ is the smallest $\rho$ such that the inequality in (2) holds. It is not hard to verify that common activation functions (e.g., ReLU, tanh, and sigmoid) are globally Lipschitz continuous. A DNN with such activation functions is a finite composition of Lipschitz continuous functions and is thus Lipschitz continuous on its domain $X$. Note that, in general, the Lipschitz continuity condition in (2) is independent of the choice of the norm; in this work, we consider Lipschitz continuity in the $\ell_2$-norm.
In the following subsections, we establish a connection between the expected Lipschitz constant of a DNN and its architecture (i.e., width and depth), and compare the result to existing results on the expressive power of DNNs in the literature. We summarize our main results in this manuscript and provide details of the derivations and proofs in the appendices.
### 3.1 Upper and Lower Bounds on the Lipschitz Constant of a DNN
As noted in fazlyab2019efficient; virmaux2018lipschitz, the exact estimation of the Lipschitz constant of a DNN is NP-hard; however, for our purpose of understanding the expressiveness of DNNs, estimates of the upper and lower bounds on the Lipschitz constant of a DNN based on its weight matrices are sufficient.
Recall that we consider a family of DNNs with 1-Lipschitz activation functions. By the Lipschitz continuity of composite functions, an upper bound on the Lipschitz constant of a DNN (1) with 1-Lipschitz activation functions is the product of the spectral norms, or equivalently, of the maximum singular values of the weight matrices:
$$\bar{\rho}(f(x)) = \prod_{l=1}^{L+1} \|W_l\|_2, \tag{3}$$
where $\bar{\rho}(f(x))$ denotes the upper bound on the Lipschitz constant of the DNN, and $\|W_l\|_2$ denotes the spectral norm, or the maximum singular value, of the weight matrix $W_l$. As derived in combettes2019lipschitz, a lower bound on the Lipschitz constant of a DNN is
$$\underline{\rho}(f(x)) = \|W_{L+1} W_L \cdots W_1\|_2, \tag{4}$$
which corresponds to the Lipschitz constant of a purely linear network (i.e., a network with activation nonlinearities removed).
Note that the upper and lower bounds on the Lipschitz constant of a DNN in (3) and (4) depend only on the maximum singular values of the weight matrices and their product. In the following analysis, we leverage random matrix theory to derive expressions of the bounds in (3) and (4) in terms of the width and depth of the DNN and the variance $\sigma_w^2$ of the weight parameters.
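When the weights are known, both bounds are cheap to evaluate numerically. A minimal sketch (the weight shapes and seed below are arbitrary):

```python
import numpy as np

def lipschitz_bounds(weights):
    # Upper bound (3): product of spectral norms of the weight matrices.
    upper = np.prod([np.linalg.norm(W, 2) for W in weights])
    # Lower bound (4): spectral norm of the product, i.e. the linearized network.
    prod = weights[0]
    for W in weights[1:]:
        prod = W @ prod
    lower = np.linalg.norm(prod, 2)
    return lower, upper

rng = np.random.default_rng(1)
weights = [rng.standard_normal((32, 32)) for _ in range(4)]
lower, upper = lipschitz_bounds(weights)
```

By submultiplicativity of the spectral norm, `lower` can never exceed `upper`.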
### 3.2 Estimates of the Lipschitz Constant Bounds Based on Extreme Singular Value Theorem
In this subsection, we establish a connection between the Lipschitz constant of a DNN and its architecture (i.e., width and depth) based on the extreme singular value theory for random matrices.
#### 3.2.1 Upper Bound
In this part, we show that, for a sufficiently large $\sigma_w$, the expected upper bound on the Lipschitz constant (3), and hence the attainable expressiveness of a DNN, increases exponentially with depth and polynomially with width. To start our discussion, we state the following result from random matrix theory on the extreme singular values of Gaussian random matrices:

[Gaussian Random Matrix (rudelson2010non)] Let $A$ be an $N \times n$ matrix whose elements are independent standard normal random variables. Then $$\sqrt{N} - \sqrt{n} \le \mathbb{E}[\lambda_{\min}(A)] \le \mathbb{E}[\lambda_{\max}(A)] \le \sqrt{N} + \sqrt{n},$$ where $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote the minimum and maximum singular values of $A$, respectively, and $\mathbb{E}$ represents the expected value.

Note that, for a Gaussian random matrix, the theorem above allows us to infer the extreme singular values of the matrix without explicitly knowing the values of its elements. By representing the weight parameters of a DNN as i.i.d. Gaussian random variables, we can leverage this result to estimate the upper bound of the Lipschitz constant (3). In particular, by applying Theorem 3.2.1, we prove the following theorem in App. A.1:

[Upper Bound on Lipschitz Constant of a Gaussian Random DNN] Consider a DNN defined in (1), where the weight parameters are independent Gaussian random variables distributed as $\mathcal{N}(0, \sigma_w^2)$ with $\sigma_w^2$ denoting the variance of the Gaussian distribution, and where the activation functions are 1-Lipschitz. The expected Lipschitz constant of the DNN is upper bounded by $\prod_{l=1}^{L+1} \sigma_w \left(\sqrt{n_l} + \sqrt{n_{l-1}}\right)$.

Theorem 3.2.1 allows us to obtain an intuition about the expected attainable Lipschitz constant, and thus the flexibility, of a DNN as we vary its width and depth. To compare to established results serra2017bounding; raghu2017expressive, we set the width of the hidden layers to a constant $n$ (i.e., $n_l = n$ for $l = 1, \dots, L$); then the expected Lipschitz constant of a DNN with Gaussian random weights is upper bounded by $O\big((2\sigma_w\sqrt{n})^{L+1}\big)$. For $\sigma_w > 1/(2\sqrt{n})$, this upper bound increases exponentially with depth and polynomially with width.
This observation is consistent with the results on the expressiveness measured by the number of linear regions for piecewise linear networks serra2017bounding; raghu2017expressive and the expressiveness measured by the trajectory length for Gaussian random networks raghu2017expressive.
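As a quick numerical sanity check (not part of the original analysis), the singular-value bounds underlying the theorem can be probed by Monte Carlo; the dimensions and trial count below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n, trials = 200, 100, 50

s_max, s_min = [], []
for _ in range(trials):
    # Standard Gaussian matrix; svd returns singular values in descending order.
    s = np.linalg.svd(rng.standard_normal((N, n)), compute_uv=False)
    s_max.append(s[0])
    s_min.append(s[-1])

# Gordon's bounds: sqrt(N) - sqrt(n) <= E[s_min] <= E[s_max] <= sqrt(N) + sqrt(n)
mean_max, mean_min = np.mean(s_max), np.mean(s_min)
```

For these dimensions the empirical means sit close to the theoretical endpoints, which is the concentration behaviour the analysis relies on.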
#### 3.2.2 Lower Bound
Similarly, based on the extreme singular value theorem for random matrices, we present a conjecture on the lower bound of the Lipschitz constant (4). We include a justification of the conjecture in App. A.2 and empirically illustrate the result in Sec. 4.

[Lower Bound on Lipschitz Constant of a Gaussian Random DNN] Consider a DNN defined in (1) where the weight parameters are independent Gaussian random variables distributed as $\mathcal{N}(0, \sigma_w^2)$ and the activation functions are 1-Lipschitz. The Lipschitz constant of the DNN is approximately lower bounded by $\sigma_w^{L+1}\left(\prod_{l=1}^{L}\sqrt{n_l}\right)\left(\sqrt{n_{L+1}} + \sqrt{n_0}\right)$.
Based on Conjecture 3.2.2, if we consider a DNN with constant width (i.e., $n_l = n$ for $l = 1, \dots, L$), the Lipschitz constant of the DNN with independent Gaussian weight parameters is approximately lower bounded by $O\big((\sigma_w\sqrt{n})^{L+1}\big)$, which also increases exponentially in depth and polynomially in the width of the DNN given sufficiently large $\sigma_w$ (i.e., $\sigma_w > 1/\sqrt{n}$). Interestingly, we note that, for the case of constant width and sufficiently large $n$, this asymptotic lower bound based on the Lipschitz constant of the DNN coincides with the expressiveness lower bound based on the output trajectory length measure for DNNs with ReLU activation functions raghu2017expressive. This connection is sensible since the expressiveness measure in raghu2017expressive can be intuitively thought of as the extent to which the DNN stretches a trajectory in its input space, which is a property related to the Lipschitz constant of a DNN (see App. B for further details).
Note that, for both the upper and lower bound analysis, we require the magnitude of $\sigma_w$ to be sufficiently large. Intuitively, a small $\sigma_w$ means that the magnitudes of the weights are small. In the extreme case, where all weights are zero, a deep architecture cannot be expressive in any notion of expressiveness (e.g., number of linear regions). We therefore require the spread of the weights to be sufficiently large to exploit the expressivity of the deep layers. This lower bound is typically not restrictive; as an example, the threshold $1/\sqrt{n}$ is approximately 0.22 for $n = 20$.
#### 3.2.3 Differences Compared to Other Expressiveness Measures
In this work, we propose to use the Lipschitz constant of a DNN as a measure of its expressiveness. In contrast to existing expressiveness measures, a Lipschitz-based characterization has two benefits:
• Fewer assumptions on the DNN: As compared to previous work on piecewise linear DNNs pascanu2013number; montufar2014number; arora2016understanding; serra2017bounding, by considering the Lipschitz constant as the expressiveness measure, we do not constrain ourselves to DNNs with specific activation functions such as ReLUs or hard tanh. In our analysis, we only require the activation function to be 1-Lipschitz, which is satisfied by most commonly used activations, including but not limited to ReLU, tanh, hard tanh, and sigmoid.
• Towards understanding DNN expressiveness for practical applications: In contrast to expressiveness measures such as the number of linear regions pascanu2013number; montufar2014number; arora2016understanding; serra2017bounding and trajectory length raghu2017expressive, the Lipschitz constant is a generic property for Lipschitz continuous nonlinear functions. For regression problems, the expressiveness characterization through the Lipschitz constant allows us to make sensible comparisons between a DNN and the function it approximates. For control applications, the Lipschitz constant also plays a critical role in stability analysis. The Lipschitz-based characterization of the expressiveness of a DNN has the potential to facilitate the design of deep architectures for safe and efficient learning in a closed-loop control setup.
## 4 Numerical Examples
In this section, we provide numerical examples that illustrate the insights on the expressiveness of DNNs based on the results in Sec. 3. In particular, we show the connection between the architectural properties of a DNN and its expressiveness.
### 4.1 Bounds on the Lipschitz Constant of a DNN
To visualize the results of Sec. 3, we randomly sample the weight parameters of DNNs from a zero-mean, unit-variance Gaussian distribution and compare the upper and lower bounds on the Lipschitz constants of these DNNs as we increase their width and depth. To examine the quality of the estimated Lipschitz constant bounds from Sec. 3, we show a comparison of the estimated bounds computed based on Theorem 3.2.1 and Conjecture 3.2.2 and the bounds computed directly based on (3) and (4) in Fig. 1. From these plots, we see that there is a close correspondence between the Lipschitz constant bounds computed based on Theorem 3.2.1 and Conjecture 3.2.2, which assume random matrices, and the bounds computed from the actual network weights based on (3) and (4). This result verifies that the bounds provided in Theorem 3.2.1 and Conjecture 3.2.2 are good approximations of the bounds on the Lipschitz constant of a fixed DNN based on (3) and (4). We note that here we compute the bounds in (3) and (4) directly from the sampled weight parameters, which are known in this simulation study; in general, to understand the implications of a DNN architecture based on Theorem 3.2.1 and Conjecture 3.2.2, we do not rely on knowing the weights explicitly.
Figure 2 shows the upper and lower bounds of the Lipschitz constant based on Theorem 3.2.1 and Conjecture 3.2.2 for different DNN architectures. By inspecting horizontal slices and vertical slices of the plots in Fig. 2, which correspond to the top and bottom plots in Fig. 1, we see that the upper and lower bounds of the Lipschitz constant of a DNN increase exponentially with depth and polynomially with width. The dashed contour lines in the plots show DNN architectures with the same number of neurons. As we trace one of the contour lines from left to right, we see that increasing width and decreasing depth reduces the bounds of the Lipschitz constants, which indicates a decrease in the expressiveness of the deep architecture. Similar to the discussion in montufar2017number, based on our formulation, we also see that, given the same number of neurons, deeper networks are more compact representations of nonlinear functions.
### 4.2 Towards Learning Deep Models for Control
To illustrate the implication of the expressiveness of a DNN for control, we consider a simple system setup and examine the stability of the system when we use a DNN with different architectures in the loop. In particular, we consider a system that is represented by
$$\dot{x} = Ax + f(x), \tag{5}$$
where $x$ is the state, $A$ is Hurwitz, and $f(x)$ is a function parameterized by a DNN. By Lyapunov’s direct method, one can show that a condition that guarantees stability of the system (5) is
$$\rho(f(x)) \le \lambda_{\min}(Q) \,/\, \left(2\, \lambda_{\max}(P)\right), \tag{6}$$
where $\rho(f(x))$ denotes the Lipschitz constant of the DNN, $Q$ is a positive definite matrix, $P$ is the corresponding solution to the Lyapunov equation $A^{\top} P + P A = -Q$, and $\lambda_{\min}(\cdot)$ and $\lambda_{\max}(\cdot)$ are the minimum and maximum eigenvalues of a matrix, respectively.
For an illustration, we fix a particular choice of $A$ and $Q$. We compare five DNN architectures with different widths and depths but the same number of neurons. For each DNN architecture, we sample 50 DNNs with i.i.d. zero-mean, unit-variance Gaussian weight parameters. We note that, out of the five architectures, we know based on Theorem 3.2.1 that the first case, a DNN with a hidden layer of 300 neurons, has an estimated upper bound on the Lipschitz constant less than the safe upper bound in (6), and system (5) is stable. In contrast, as we can see from Fig. 2, when we decrease the width and increase the depth of a DNN, its Lipschitz constant increases and system (5) is less likely to be stable. Table 1 shows empirical results for the relationship between the architectural properties of a DNN and the stability of the system. This means that, in practice, one may want to carefully choose an appropriate DNN architecture, or, alternatively, regularize the weight parameters, to ensure stability of a learning-based control system. We consider our insights to be a step towards providing design guidelines for DNN architectures, for example, for closed-loop control applications.
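To make the Lyapunov argument concrete, here is a hedged sketch in which the 2-by-2 system matrix is a hypothetical stand-in, not the one used in the experiment above: solve the Lyapunov equation for $P$ and form the threshold in (6) that the DNN's Lipschitz constant must not exceed.

```python
import numpy as np

def lyapunov_solution(A, Q):
    # Solve A^T P + P A = -Q by vectorization:
    # vec(A^T P + P A) = (I kron A^T + A^T kron I) vec(P), with column-major vec.
    n = A.shape[0]
    M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    vecP = np.linalg.solve(M, -Q.flatten(order="F"))
    return vecP.reshape((n, n), order="F")

A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])  # Hurwitz (hypothetical example system)
Q = np.eye(2)
P = lyapunov_solution(A, Q)

# Condition (6): the DNN Lipschitz constant must stay below this threshold.
rho_max = np.min(np.linalg.eigvalsh(Q)) / (2.0 * np.max(np.linalg.eigvalsh(P)))
```

Comparing `rho_max` against the architecture-level upper bound from Theorem 3.2.1 is then a weight-free screening test for candidate architectures.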
## 5 Discussion on the Assumption of Gaussian Random Weight Matrices
In this work, we considered DNNs with Gaussian random weight matrices to facilitate analysis of their expressiveness. In this section, we examine if this assumption is reasonable for practical applications. In particular, we examine, through some examples, the accuracy of estimating the maximum singular value of the weight matrices based on Theorem 3.2.1 when the assumption of Gaussian random matrices does not hold exactly.
To examine the properties of weight matrices in trained networks, we consider a regression problem. The true function to be approximated has two inputs and one output. Fig. 3 shows the distributions of two weight matrices from two trained networks with different architectures, and Table 2 summarizes their maximum singular values. By inspecting the distributions (Fig. 3), we see that the weights are not necessarily always Gaussian-distributed; however, the estimates of the maximum singular values of the matrices based on the assumption of random weights are very close to the true maximum singular values (Table 2). Based on the Bai-Yin law for extreme singular values of random matrices with more general distributions rudelson2010non, we can infer that the expected maximum singular value based on Theorem 3.2.1 approximates the true maximum singular value of a random matrix with an error of $\sigma\, o(\sqrt{n})$, where $\sigma$ is the standard deviation of the weight distribution and $n$ is the matrix column dimension. In the future, we plan to explore the properties of the weight matrices of trained networks and examine their relation to random matrix theory.
## 6 Conclusion
In this paper, we presented a new perspective on the expressiveness of DNNs based on their Lipschitz properties. Using random matrix theory, we showed that, given that the spread of the weights is sufficiently large (i.e., $\sigma_w > 1/\sqrt{n}$ for networks of constant width $n$), the expressiveness of a DNN measured by its Lipschitz constant grows exponentially with depth and polynomially with width. This result is similar to results based on other expressiveness measures discussed in the current literature. By considering the Lipschitz constant as a measure of DNN expressiveness, we can more sensibly understand the implication of being ‘deep’ in the context of function approximation for applications including safe learning-based control.
## Appendix A Proofs of Main Results in Sec. 3
### A.1 Proof of Theorem 3.2.1: Upper Bound on Lipschitz Constant of a Gaussian Random DNN
The following is a proof of Theorem 3.2.1 presented in Sec. 3. Based on the extreme singular value theorem for random matrices, we derive an expression for the upper bound on the Lipschitz constant of a DNN in terms of its width and depth.
Consider an $N \times n$ random matrix $W$ whose elements are independent Gaussian random variables distributed as $\mathcal{N}(0, \sigma^2)$. As a result of Theorem 3.2.1 and the homogeneity of the matrix norm, the expected maximum singular value of $W$ is upper bounded by $\sigma(\sqrt{N} + \sqrt{n})$. By assumption, the elements of each weight matrix $W_l$ are distributed as $\mathcal{N}(0, \sigma_w^2)$. The expected spectral norm, or equivalently the expected maximum singular value, of each weight matrix is upper bounded as follows:
$$\mathbb{E}[\|W_l\|_2] = \mathbb{E}[\lambda_{\max}(W_l)] \le \sigma_w\left(\sqrt{n_l} + \sqrt{n_{l-1}}\right). \tag{7}$$
Since the weight matrices are independent, by substituting (7) into (3), we have the following expected upper bound on the Lipschitz constant of the DNN:
$$\mathbb{E}[\bar{\rho}(f(x))] = \prod_{l=1}^{L+1} \mathbb{E}[\|W_l\|_2] \le \prod_{l=1}^{L+1} \sigma_w\left(\sqrt{n_l} + \sqrt{n_{l-1}}\right). \tag{8}$$
The expression in (8) establishes a connection between the upper bound on the Lipschitz constant of a DNN and its architecture, which is represented by the dimensions of the weight matrices in this analysis. This result allows us to obtain insights on the expressiveness of a DNN without explicitly knowing the values of its weights.
### A.2 Justification of Conjecture 3.2.2: Lower Bound on Lipschitz Constant of a Gaussian Random DNN
To derive an estimate of the lower bound in (4), we first note that the product of Gaussian random matrices is in general not a Gaussian random matrix. In deriving the lower bound, we therefore need to consider a more general class of matrices than in Theorem 3.2.1:

[Random Matrix (rudelson2010non)] Let $A$ be an $N \times n$ matrix whose elements are independent random variables with zero mean, unit variance, and finite fourth moment. Suppose that the dimensions $N$ and $n$ grow to infinity with $n/N$ converging to a constant in $(0, 1]$. Then, $\lambda_{\min}(A) = \sqrt{N} - \sqrt{n} + o(\sqrt{n})$ and $\lambda_{\max}(A) = \sqrt{N} + \sqrt{n} + o(\sqrt{n})$ almost surely.

In contrast to Theorem 3.2.1, the above theorem is applicable to a wider class of random matrices with independent elements; however, it is an asymptotic result in the limit of sufficiently large $N$ and $n$. For practical DNNs, where the dimensions of the weight matrices are sufficiently large, this theorem allows us to derive an approximate lower bound for (4). We provide a justification of Conjecture 3.2.2 presented in Sec. 3 of our manuscript below:
##### Justification
We consider two random matrices $A_2$ and $A_1$ whose elements are independent zero-mean random variables with variances $\sigma_{a_2}^2$ and $\sigma_{a_1}^2$, respectively. The $i$th row, $j$th column element of the matrix product $A_2 A_1$ is $\sum_{k=1}^{N} a_{2,ik}\, a_{1,kj}$, where $a_{2,ik}$ denotes the $i$th row, $k$th column element of $A_2$ and $a_{1,kj}$ denotes the $k$th row, $j$th column element of $A_1$. Here, in our derivation, we make a conjecture that the elements of the product of random matrices with i.i.d. zero-mean elements approximately preserve independence. Based on this conjecture, we derive an expression for the variance of the elements of $A_2 A_1$. Without loss of generality, we consider the $i$th row, $j$th column element of $A_2 A_1$. Since, by assumption, the elements of $A_1$ and $A_2$ have zero mean and are i.i.d., the variance of the $i$th row, $j$th column element of $A_2 A_1$ is
$$\sigma_{21}^2 = \mathbb{V}\left[\sum_{k=1}^{N} a_{2,ik}\, a_{1,kj}\right] = \sum_{k=1}^{N} \mathbb{V}\left[a_{2,ik}\, a_{1,kj}\right] = N \sigma_{a_1}^2 \sigma_{a_2}^2, \tag{9}$$
where $\mathbb{V}[\cdot]$ denotes the variance of a random variable, and $\sigma_{a_1}^2 \sigma_{a_2}^2$ is the variance of the product of an element of $A_1$ and an element of $A_2$. The standard deviation of the elements of the product $A_2 A_1$ can be written as
$$\sigma_{21} = \sqrt{N}\, \sigma_{a_1} \sigma_{a_2}. \tag{10}$$
By applying (10) recursively, we can derive an estimate of the bound in (4), which is the spectral norm of the product of random matrices. In particular, a recursive relationship for the standard deviations of the elements of the product of random matrices can be written as
$$\sigma_{w,1:l} = \sqrt{n_{l-1}}\, \sigma_{w,1:l-1}\, \sigma_w, \tag{11}$$
where $\sigma_{w,1:l}$ denotes the standard deviation of the elements of the product of random matrices $W_l W_{l-1} \cdots W_1$. For the product random matrix in (4), we have
$$\sigma_{w,1:L+1} = \sigma_w^{L+1} \prod_{l=1}^{L} \sqrt{n_l}. \tag{12}$$
As above, we make a conjecture that the elements of the product matrix constructed from the random weight matrices are independent. Since the elements of the product matrix are sums of products of independent zero-mean random variables by construction, the elements of the product matrix have zero mean. Moreover, since the elements of the weight matrices are assumed to be Gaussian distributed, they have finite fourth moments. Further, by the properties of the sum and product of random variables (dufour2003properties), the elements of the product matrix constructed from the weight matrices also have finite fourth moments. By Theorem A.2 and the homogeneity of matrix norms, for an $N \times n$ random matrix $M$ whose elements are i.i.d. random variables with mean 0, variance $\sigma_m^2$, and finite fourth moment, the expected maximum singular value of $M$ is given by
$$\mathbb{E}[\lambda_{\max}(M)] = \sigma_m\left(\sqrt{N} + \sqrt{n} + o(\sqrt{n})\right). \tag{13}$$
Based on (12) and (13), an estimate of the expected lower bound of the Lipschitz constant in (4) is
$$\mathbb{E}[\underline{\rho}(f(x))] = \mathbb{E}\left[\|W_{L+1} \cdots W_1\|_2\right] = \left(\sigma_w^{L+1} \prod_{l=1}^{L} \sqrt{n_l}\right)\left(\sqrt{n_{L+1}} + \sqrt{n_0} + o(\sqrt{n_0})\right). \tag{14}$$
Similar to the upper bound, this expected lower bound on the Lipschitz constant allows us to infer the Lipschitz constant of a DNN based on its architectural properties.
In Sec. 4 of the manuscript, we empirically show that the expression in (14) is a reasonable approximation of the lower bound of the Lipschitz constant of a DNN in (4). However, we note that, in our justification above, we make an assumption that the elements of the product matrix constructed from random matrices whose elements are i.i.d. zero-mean Gaussian random variables preserve independence. This is a conjecture that requires further investigation. We would like to further look into results on multiplications of random matrices to improve this result.
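The quality of the approximation in (14) can be probed numerically. In line with the caveat above, the sketch below (dimensions, variance, and trial count are arbitrary) only checks that the prediction has the right order of magnitude, since the independence conjecture is not exact:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_w = 0.5
widths = [30, 40, 40, 40, 20]  # n_0, n_1, n_2, n_3, n_4 (arbitrary)
trials = 100

norms = []
for _ in range(trials):
    Ws = [sigma_w * rng.standard_normal((widths[l + 1], widths[l]))
          for l in range(len(widths) - 1)]
    prod = Ws[0]
    for W in Ws[1:]:
        prod = W @ prod
    norms.append(np.linalg.norm(prod, 2))  # spectral norm of the product, as in (4)

empirical = np.mean(norms)
# Prediction (14), dropping the o(.) term:
# sigma_w^(L+1) * prod_l sqrt(n_l) * (sqrt(n_{L+1}) + sqrt(n_0))
predicted = (sigma_w**4 * np.prod(np.sqrt(widths[1:-1]))
             * (np.sqrt(widths[-1]) + np.sqrt(widths[0])))
```

In runs like this the empirical mean and the prediction typically agree to within a small constant factor, consistent with the conjecture being an approximation rather than an exact result.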
## Appendix B Connection to the Result Based on Output Trajectory Length
In this appendix, we show a connection between our result and the result in raghu2017expressive. Both our work and raghu2017expressive consider DNNs with i.i.d. zero-mean Gaussian weight parameters. In our work, we use the Lipschitz constant as a measure of the expressiveness of a DNN, while in raghu2017expressive, the proposed expressiveness measure of a DNN is the expected length of an output trajectory as the DNN traverses a one-dimensional trajectory in its input space. Intuitively, as an input trajectory is passed through a DNN, it is deformed by the linear weight layers and the nonlinear activation layers; the output trajectory length measure in raghu2017expressive is the extent to which the DNN ‘stretches’ a trajectory given in the input space.
By considering the expected output trajectory length as the expressiveness measure, raghu2017expressive prove the following result:
[Lower Bound on Output Trajectory Length (raghu2017expressive)] Let $f$ be a DNN with ReLU activation functions and weights that are i.i.d. Gaussian random variables $\mathcal{N}(0, \sigma_w^2/n)$, and let $x(t)$ be a one-dimensional trajectory with $x(t + \delta t)$ having a non-trivial perpendicular component to $x(t)$ for all $t$. Denote $h_l(x(t))$ as the image of the trajectory in the $l$th layer of the DNN. The expected output trajectory length of the DNN is lower bounded by
$$\mathbb{E}\left[\eta(h_{L+1}(t))\right] \ge O\left(\frac{\sigma_w \sqrt{n}}{\sqrt{n+1}}\right)^{L+1} \eta(x(t)), \tag{15}$$
where $\eta(\cdot)$ is the trajectory length and $n$ is the width of the DNN.
Note that, if we consider the expected output trajectory length normalized by the input trajectory length (i.e., the ‘stretch’ of the trajectory), we can establish a connection between the lower bound in (15) and the lower bound we derived based on the Lipschitz constant expressiveness characterization in Sec. 3.2.2. In particular, in Sec. 3.2.2, we showed that, for a DNN with a constant width (i.e., $n_l = n$ for $l = 1, \dots, L$), the asymptotic lower bound on the Lipschitz constant of the DNN is $O\big((\sigma_w\sqrt{n})^{L+1}\big)$. On the other hand, the normalized lower bound on the expected output trajectory in (15) can be written as $O\big(\sigma_w\sqrt{n}/\sqrt{n+1}\big)^{L+1}$. For sufficiently large $n$, and accounting for the width-scaled weight variance used in raghu2017expressive, this asymptotic lower bound from (15) coincides with the asymptotic lower bound we obtained based on the Lipschitz constant measure of expressiveness. Fig. 4 illustrates this connection between our proposed expressiveness measure based on the Lipschitz constant of a DNN and the expressiveness measure based on the output trajectory length raghu2017expressive for a set of ReLU DNNs with different widths and depths. From the plot, we see that, for DNNs with different architectures, the correlation between the asymptotic lower bounds based on these two measures of expressiveness (grey dots) approximately coincides with the identity line (red line).
The observed connection between the two measures of expressiveness of a DNN is sensible. If we consider the input trajectory to a DNN to be represented by a set of discrete points, the length of the output trajectory captures the extent of ‘stretch’ between pairs of points as they are passed through the DNN. Mathematically, the extent of ‘stretch’ or the distance between two points in a DNN’s output space in relation to the distance between the corresponding points in the input space is characterized by the Lipschitz property of the DNN. | |
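This relationship is easy to check empirically. The sketch below (architecture and seed arbitrary) passes a discretized circle through a random ReLU network and confirms that the measured stretch never exceeds the spectral-norm product, i.e. the upper bound (3) on the Lipschitz constant:

```python
import numpy as np

rng = np.random.default_rng(4)
n, L, sigma_w = 50, 4, 0.5
widths = [n] * (L + 2)
Ws = [sigma_w * rng.standard_normal((widths[l + 1], widths[l]))
      for l in range(L + 1)]

# Discretized circle lying in the first two input coordinates.
t = np.linspace(0.0, 2.0 * np.pi, 500)
X = np.zeros((t.size, n))
X[:, 0], X[:, 1] = np.cos(t), np.sin(t)

H = X
for W in Ws[:-1]:
    H = np.maximum(H @ W.T, 0.0)  # ReLU hidden layers
Y = H @ Ws[-1].T                  # linear output layer

arc = lambda Z: np.sum(np.linalg.norm(np.diff(Z, axis=0), axis=1))
stretch = arc(Y) / arc(X)         # how much the net 'stretches' the trajectory
upper = np.prod([np.linalg.norm(W, 2) for W in Ws])
```

Since each polyline segment is stretched by at most the network's Lipschitz constant, the ratio `stretch` is a valid lower witness for the Lipschitz constant and is bounded above by `upper`.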
# How do I Inset a face equally?
How can I inset/extrude a face, so that the outer faces have the same width?
Any kind of help is appreciated
Apply Scale to your mesh first in object mode.
Ctrl+A >> Scale.
Now try Inset faces again with I.
You can use the Inset tool, which you can activate by pressing I. Do note that this won't work if the object was scaled in Object mode.
You scaled your object in Object mode. If you want the inset to work properly, you will have to scale the plane (or whatever you are starting with) in Edit mode, or apply the scale if it was done in Object mode. When you scale in Object mode you are making it “look” correct, but it won’t be correct unless your editing of shapes is done in Edit mode or the scale is applied in Object mode.
The top rectangle was scaled in the X direction in Object mode without applying the transformation, then the inset was added. The one on the bottom was scaled in the X direction in Edit mode and then the inset was added. If you apply the scale to the top rectangle in Object mode and then do the inset, you will have the same result as the bottom one.
Remember, transformations in object mode are what you want it to look like, but transformations in edit mode (or that are applied using Ctrl-A menu in object mode) are what it is like. | |
#StackBounty: #estimation #taylor-series #iteration-methods Iterated estimation of Taylor series
Bounty: 50
Say your data generating process is given by the function $$y=f(x|\theta)$$, where $$y$$ and $$x$$ represent variables (data) and $$\theta$$ represents parameter(s). For convergence reasons (e.g. $$f(\cdot)$$ is highly non-linear in the parameters and a GMM estimator does not converge), you decide to estimate a Taylor series expansion of $$f(\cdot)$$ around $$\theta=\theta_0$$. Let’s denote this approximated function as $$y \approx g(x|\theta)_{\theta_0}$$.
Say you estimate $$\theta$$ in $$g(\cdot)$$ based on a random sample of $$\{y,x\}$$, and you get $$\hat{\theta}_1$$. Then, you recompute the Taylor series approximation around this point estimate (keeping the Taylor series order constant), producing $$y \approx g(x|\theta)_{\hat{\theta}_1}$$. Then, you estimate again, yielding $$\hat{\theta}_2$$. You iterate until
$$(\hat{\theta}_n - \hat{\theta}_{n+1})^2 < \epsilon$$
for an arbitrary threshold $$\epsilon > 0$$.
Convergence (in terms of the optimisation criterion above) is of course of paramount importance. Notice that for an arbitrarily large $$\epsilon$$ there is always a solution, as long as $$\hat{\theta}$$ can be computed, which itself depends on the properties of $$g(\cdot)$$, e.g. on the order of the Taylor expansion; a linear model is always estimable, beyond trivial issues like multicollinearity.
My question is: is the method above a thing? I’ve searched for “iterated estimation of Taylor series” on Google, in this forum and on Math.SE and cannot find anything about it. Maybe the method is simply wrong, e.g. convergence is not assured by any known theorem.
More details on the method
For instance, consider a CES production function:
$$Y = \left(\alpha K^\theta + (1-\alpha)L^\theta\right)^{1/\theta}$$
where $$Y$$, $$L$$ and $$K$$ are variables, and $$\alpha$$ and $$\theta$$ are parameters. Say you produce a 2nd-order Taylor series expansion of the above around $$\theta = 0$$. The resulting formula (called the translog production function) is:
$$\ln(Y) \approx \alpha \ln(K) + (1-\alpha)\ln(L) + 0.5\,\theta\,\alpha(1-\alpha)\left(\ln(K) - \ln(L)\right)^2$$
So, you estimate the above equation with a random sample of $$\{Y,L,K\}$$, using e.g. non-linear least squares, from which you obtain an estimate of $$\theta$$, $$\hat{\theta}_1$$. The idea is then to produce another Taylor series of $$Y$$, but this time around $$\hat{\theta}_1$$. Then, estimate the new equation. Iterate until some convergence criterion is fulfilled.
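For intuition on why the scheme can work: with a first-order expansion, each round replaces the model by its linearization in $$\theta$$ around the current estimate and solves a least-squares problem, which is exactly the Gauss–Newton iteration for non-linear least squares (whose convergence is guaranteed only locally). A minimal sketch, using an assumed toy model $$y = \exp(\theta x)$$ rather than the CES function:

```python
import numpy as np

# Hypothetical one-parameter model y = f(x|theta) = exp(theta * x).
# First-order Taylor expansion in theta around the current point theta_k:
#   f(x|theta) ≈ f(x|theta_k) + df/dtheta(x|theta_k) * (theta - theta_k),
# which is linear in theta, so each round is an ordinary least-squares fit.
# Iterating this is the Gauss-Newton scheme for non-linear least squares.

def f(x, theta):
    return np.exp(theta * x)

def df_dtheta(x, theta):
    return x * np.exp(theta * x)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0, size=200)
theta_true = 0.7
y = f(x, theta_true) + 0.01 * rng.standard_normal(200)

theta = 0.0  # theta_0: initial expansion point
eps = 1e-12
for _ in range(100):
    J = df_dtheta(x, theta)   # gradient of f at the current expansion point
    r = y - f(x, theta)       # residuals of the current approximation
    delta = (J @ r) / (J @ J) # one-dimensional least-squares step
    theta_new = theta + delta
    converged = (theta_new - theta) ** 2 < eps  # criterion from the post
    theta = theta_new
    if converged:
        break

print(f"theta_hat = {theta:.3f}")
```

With a second-order expansion the per-round objective is a polynomial in $$\theta$$ instead of a linear one, but the structure of the iteration is the same.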
| |
# Struggling with compilation of pidgin for windows
April Kontostathis akontostathis at ursinus.edu
Thu Oct 21 19:54:00 EDT 2010
Well, as the saying goes, if I didn't have bad luck, I'd have no luck at
all!
The link (below) was still not quite enough. I deleted the whole mingw
think).
Everything seemed to compile and I have a pidgin.exe in
C:\devel\pidgin-devel\pidgin-2.7.1\win32-install-dir
Hooray!
The compile process did produce one error:
cp ./../win32-dev/pidgin-inst-deps-20100315/exchndl.dll ./win32-install-dir
cp: cannot stat `./../win32-dev/pidgin-inst-deps-20100315/exchndl.dll':
No such file or directory
make: *** [install] Error 1
Is this anything I need to worry about? I don't have a
/win32-dev/pidgin-inst-deps-20100315/ directory at all.
Thanks very much for all your help and for your quick response to my
questions!
April
On 10/21/2010 4:28 PM, Daniel Atallah wrote:
> On Thu, Oct 21, 2010 at 12:52, Kontostathis, April
> <akontostathis at ursinus.edu> wrote:
>> Thanks for the tip. Using the full Mingw seems to have helped.
>>
>> I am making progress. Currently it is not finding the standard c/c++ include libraries (stdio.h, locale.h, etc.), but that is a start (at least it is running something now!). Will keep plugging away at it. If you have any insight on the include problem, please let me know!
>
> I'm afraid you've been a little unlucky in this process (mainly
> because nobody else had tried to set up a new mingw gcc installation
> since sourceforge moved stuff around).
>
> I've fixed all the links in the wiki now - thanks for bringing the
> breakage to our attention.
>
> Your current problem is because the "full" mingw package, while
> containing everything needed for gcc, doesn't contain the win32-api. | |
# American Institute of Mathematical Sciences
November 2019, 12(7): 1955-1975. doi: 10.3934/dcdss.2019127
## Branching and bifurcation
1 Department of Mathematics, University of Maryland, 4176 Campus Dr, College Park, MD 20742, USA 2 INDAM, Dipartimento di Scienze Matematiche, Politecnico di Torino, Duca degli Abruzzi 24, 10129 Torino, Italy
* Corresponding author
Dedicated to Norman Dancer
Received January 2018 Revised August 2018 Published December 2018
Fund Project: J. Pejsachowicz is supported by GNAMPA-INDAM.
By relating the set of branch points $\mathcal{B} (f)$ of a Fredholm mapping $f$ to linearized bifurcation, we show, among other things, that under mild local assumptions at a single point, the set $\mathcal B(f)$ is sufficiently large to separate the domain of the mapping. In the variational case, we will also provide estimates from below for the number of connected components of the complement of $\mathcal B(f).$
Citation: Patrick M. Fitzpatrick, Jacobo Pejsachowicz. Branching and bifurcation. Discrete & Continuous Dynamical Systems - S, 2019, 12 (7) : 1955-1975. doi: 10.3934/dcdss.2019127
| |
# Ionic bonding and CaO
The worksheet (WLHS / Conc Chem, “Chemical Bonding – Ionic & Covalent”) asks you to decide whether formulas such as CH4, CH3OH, KF, CaO, NH4Cl, IBr and CO2 represent ionic or covalent compounds, to explain the difference between a nonpolar covalent bond, a polar covalent bond and an ionic bond, and which element forms an ionic compound when it reacts with lithium: (1) K, (2) Fe, (3) Kr, (4) Br.
Question: What are the electronegativity difference, the type of bond (polar, nonpolar, ionic) and the type of molecule (polar, nonpolar, ionic) for the compound CaO? My answers: electronegativity difference 2.5; type of bond: ionic; type of molecule: ionic. I am entering these answers and still only getting 13.33 of the 20 possible points on that question.
Answer: Your answers are right; perhaps the evaluating part of the software knows better than the person who created the question :) Joking aside, the two answers “ionic, ionic” are redundant, since there is only one bond in the “molecule”, and if the software is generating errors the problem lies with the question, not with you. I wonder why there are so many poor exercises and quizzes in chemistry; chemistry teachers have become no different from the rabbi in Niels Bohr’s story, who spoke three times: the first talk was brilliant, clear and simple; the second even better, deep and subtle; the third by far the finest, a great and unforgettable experience, of which the listener understood little, and the rabbi himself didn’t understand much either.
Some background. An ionic bond, also called an electrovalent bond, is a type of linkage formed from the electrostatic attraction between oppositely charged ions in a chemical compound. Such a bond forms when the valence (outermost) electrons of one atom are transferred permanently to another atom: a metal reacts with a non-metal, the metal donates its electrons to form a cation (+), the non-metal gains them to form an anion (−), and the two oppositely charged ions attract and bind. Metals in Groups 1, 2 and 3 have 1, 2 or 3 electrons in their outer shells and lose them to become stable; non-metals gain electrons because they have 5, 6 or 7 outer-shell electrons. The charge of each ion corresponds to the number of electrons lost or gained, and that charge becomes the subscript of the other ion in the formula. Ionic compounds are composed of oppositely charged ions arranged in a three-dimensional giant crystal lattice; the magnitude of the electrostatic forces in ionic crystals is considerable, which is why such compounds tend to be hard and nonvolatile, have high melting and boiling points, and conduct electricity when dissolved in water.
Sodium chloride is the standard example: the sodium atom has a single electron in its outermost shell, while chlorine (2,8,7) is one electron short of the stable noble-gas structure (2,8,8). Sodium gives up that electron, forming a 1+ cation with electron configuration 2,8, and chlorine becomes a 1− anion. Because the NaCl bond is created by transferring only one electron, the bonding is comparatively weak and NaCl is very soluble in water.
For CaO: the atomic numbers of calcium and oxygen are 20 and 8 respectively. In the ionic bond between calcium and oxygen, the calcium atom donates two of its valence electrons to the oxygen atom, forming a Ca2+ cation and an O2− anion. Since two electrons are transferred rather than one, the bond formation between Ca and O is stronger than in NaCl.
Two caveats. First, an ionic bond is actually the extreme case of a polar covalent bond, the latter resulting from unequal sharing of electrons rather than complete electron transfer; a common rule of thumb treats a bond as ionic when the electronegativity difference between the bonded atoms is greater than 1.7. There is no restriction that forbids molecules, or part of a molecule, from having an ionic bond, and a single compound can contain both kinds: in calcium carbonate the calcium cation is ionically bonded to the carbonate anion, while the carbon and oxygen atoms within the carbonate are covalently bonded. Other examples of ionic compounds are sodium oxide, copper(II) hydroxide and magnesium carbonate. Second, whether solid CaO can be seen as an almost molecular solid composed of very polar “molecules” is doubtful; black-and-white classification is good only up to a limit, beyond which everything becomes grey, and we should come out of these 19th-century classifications.
(Calcium oxide itself, incidentally, has many industrial uses: as a filler, in cement, as a desiccant, in corrosion inhibitors and anti-scaling agents, as a chemical intermediate, in graveling and road-bed material, and in the scrap-metal/iron kish used in electric-arc-furnace steelmaking.) | |
# Definition - Tetrahedron
tetrahedron (n.)
1.any polyhedron having four plane faces
Merriam Webster
Tetrahedron, n. [Tetra- + Gr. ἕδρα a seat, base, fr. ἕζεσθαι to sit.] (Geom.) A solid figure inclosed or bounded by four triangles.
☞ In crystallography, the regular tetrahedron is regarded as the hemihedral form of the regular octahedron.
Regular tetrahedron (Geom.), a solid bounded by four equal equilateral triangles; one of the five regular solids.
## Definition (complement)
## Analogical dictionary
- polyhedron [Class]
- geometry [Domain]
- ShapeAttribute [Domain]
- polyhedron [Hyper.]
Wikipedia
# Tetrahedron
Regular tetrahedron
- Type: Platonic solid
- Elements: F = 4, E = 6, V = 4 (χ = 2)
- Faces by sides: 4{3}
- Schläfli symbol: {3,3} and s{2,2}
- Wythoff symbol: 3 | 2 3 and | 2 2 2
- Coxeter–Dynkin diagram: (not reproduced)
- Symmetry: Td, A3, [3,3], (*332)
- Rotation group: T, [3,3]+, (332)
- References: U01, C15, W1
- Properties: regular convex deltahedron
- Dihedral angle: 70.528779° = arccos(1/3)
- Vertex figure: 3.3.3
- Dual polyhedron: self-dual
- Net: (not reproduced)
In geometry, a tetrahedron (plural: tetrahedra) is a polyhedron composed of four triangular faces, three of which meet at each vertex. It has six edges and four vertices. The tetrahedron is the only convex polyhedron that has four faces.[1]
The tetrahedron is the three-dimensional case of the more general concept of a Euclidean simplex.
The tetrahedron is one kind of pyramid, which is a polyhedron with a flat polygon base and triangular faces connecting the base to a common point. In the case of a tetrahedron the base is a triangle (any of the four faces can be considered the base), so a tetrahedron is also known as a "triangular pyramid".
Like all convex polyhedra, a tetrahedron can be folded from a single sheet of paper. It has two nets.[1]
For any tetrahedron there exists a sphere (the circumsphere) such that the tetrahedron's vertices lie on the sphere's surface.
## Special cases
A regular tetrahedron is one in which all four faces are equilateral triangles, and is one of the Platonic solids. An isosceles tetrahedron, also called a disphenoid, is a tetrahedron where all four faces are congruent triangles. In a trirectangular tetrahedron the three face angles at one vertex are right angles. If all three pairs of opposite edges of a tetrahedron are perpendicular, then it is called an orthocentric tetrahedron. When only one pair of opposite edges are perpendicular, it is called a semi-orthocentric tetrahedron. An isodynamic tetrahedron is one in which the cevians that join the vertices to the incenters of the opposite faces are concurrent, and an isogonic tetrahedron has concurrent cevians that join the vertices to the points of contact of the opposite faces with the inscribed sphere of the tetrahedron.
## Formulas for a regular tetrahedron
The following Cartesian coordinates define the four vertices of a tetrahedron with edge-length 2, centered at the origin:
(±1, 0, -1/√2)
(0, ±1, 1/√2)
For a regular tetrahedron of edge length a:
- Base plane area: $A_0 = \frac{\sqrt{3}}{4}a^2$
- Surface area [2]: $A = 4A_0 = \sqrt{3}\,a^2$
- Height [3]: $H = \frac{\sqrt{6}}{3}a$
- Volume [2]: $V = \frac{1}{3}A_0 H = \frac{\sqrt{2}}{12}a^3$
- Angle between an edge and a face: $\arccos\left(\frac{1}{\sqrt{3}}\right) = \arctan(\sqrt{2})$ (approx. 54.7356°)
- Angle between two faces [2]: $\arccos\left(\frac{1}{3}\right) = \arctan(2\sqrt{2})$ (approx. 70.5288°)
- Angle between the segments joining the center and the vertices, also known as the "tetrahedral angle" [4]: $\arccos\left(-\frac{1}{3}\right) = 2\arctan(\sqrt{2})$ (approx. 109.4712°)
- Solid angle at a vertex subtended by a face: $\arccos\left(\frac{23}{27}\right)$ (approx. 0.55129 steradians)
- Radius of circumsphere [2]: $R = \sqrt{\frac{3}{8}}\,a$
- Radius of insphere that is tangent to faces [2]: $r = \frac{1}{3}R = \frac{a}{\sqrt{24}}$
- Radius of midsphere that is tangent to edges [2]: $r_M = \sqrt{rR} = \frac{a}{\sqrt{8}}$
- Radius of exspheres: $r_E = \frac{a}{\sqrt{6}}$
- Distance to exsphere center from a vertex: $\sqrt{\frac{3}{2}}\,a$
Note that with respect to the base plane the slope of a face ($\scriptstyle 2 \sqrt{2}$) is twice that of an edge ($\scriptstyle \sqrt{2}$), corresponding to the fact that the horizontal distance covered from the base to the apex along an edge is twice that along the median of a face. In other words, if C is the centroid of the base, the distance from C to a vertex of the base is twice that from C to the midpoint of an edge of the base. This follows from the fact that the medians of a triangle intersect at its centroid, and this point divides each of them in two segments, one of which is twice as long as the other (see proof).
## Volume
The volume of a tetrahedron is given by the pyramid volume formula:
$V = \frac{1}{3} A_0\,h \,$
where A0 is the area of the base and h the height from the base to the apex. This applies for each of the four choices of the base, so the distances from the apexes to the opposite faces are inversely proportional to the areas of these faces.
For a tetrahedron with vertices a = (a1, a2, a3), b = (b1, b2, b3), c = (c1, c2, c3), and d = (d1, d2, d3), the volume is (1/6)·|det(a − b, b − c, c − d)|, or the same expression built from any other three edge vectors that connect all four vertices (a spanning tree). This can be rewritten using a dot product and a cross product, yielding
$V = \frac { |(\mathbf{a}-\mathbf{d}) \cdot ((\mathbf{b}-\mathbf{d}) \times (\mathbf{c}-\mathbf{d}))| } {6}.$
If the origin of the coordinate system is chosen to coincide with vertex d, then d = 0, so
$V = \frac { |\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})| } {6},$
where a, b, and c represent three edges that meet at one vertex, and a · (b × c) is a scalar triple product. Comparing this formula with that used to compute the volume of a parallelepiped, we conclude that the volume of a tetrahedron is equal to 1/6 of the volume of any parallelepiped that shares three converging edges with it.
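The triple-product formula translates directly into code. A minimal Python sketch (helper and function names are mine):

```python
def tetra_volume(a, b, c, d):
    """Volume via the scalar triple product |(a-d) . ((b-d) x (c-d))| / 6.
    Vertices are 3-tuples of coordinates."""
    def sub(p, q):
        return (p[0] - q[0], p[1] - q[1], p[2] - q[2])
    def cross(p, q):
        return (p[1] * q[2] - p[2] * q[1],
                p[2] * q[0] - p[0] * q[2],
                p[0] * q[1] - p[1] * q[0])
    def dot(p, q):
        return p[0] * q[0] + p[1] * q[1] + p[2] * q[2]
    return abs(dot(sub(a, d), cross(sub(b, d), sub(c, d)))) / 6
```

This can be checked against the unit corner tetrahedron with vertices (0,0,0), (1,0,0), (0,1,0), (0,0,1), whose volume is 1/6, and against the edge-2 regular tetrahedron given by the coordinates in the first section, whose volume is √2/12 · 2³ = 2√2/3.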
The scalar triple product can be represented by the following determinants:
$6 \cdot V =\begin{vmatrix} \mathbf{a} & \mathbf{b} & \mathbf{c} \end{vmatrix}$ or $6 \cdot V =\begin{vmatrix} \mathbf{a} \\ \mathbf{b} \\ \mathbf{c} \end{vmatrix}$ where $\mathbf{a} = (a_1,a_2,a_3) \,$ is expressed as a row or column vector etc.
Hence
$36 \cdot V^2 =\begin{vmatrix} \mathbf{a^2} & \mathbf{a} \cdot \mathbf{b} & \mathbf{a} \cdot \mathbf{c} \\ \mathbf{a} \cdot \mathbf{b} & \mathbf{b^2} & \mathbf{b} \cdot \mathbf{c} \\ \mathbf{a} \cdot \mathbf{c} & \mathbf{b} \cdot \mathbf{c} & \mathbf{c^2} \end{vmatrix}$ where $\mathbf{a} \cdot \mathbf{b} = ab\cos{\gamma}$ etc.
which gives
$V = \frac {abc} {6} \sqrt{1 + 2\cos{\alpha}\cos{\beta}\cos{\gamma}-\cos^2{\alpha}-\cos^2{\beta}-\cos^2{\gamma}}, \,$
where α, β, γ are the plane angles at the vertex d: α is the angle between the two edges joining d to b and to c, β is the angle between the edges joining d to a and to c, and γ is the angle between the edges joining d to a and to b.
Given the distances between the vertices of a tetrahedron the volume can be computed using the Cayley–Menger determinant:
$288 \cdot V^2 = \begin{vmatrix} 0 & 1 & 1 & 1 & 1 \\ 1 & 0 & d_{12}^2 & d_{13}^2 & d_{14}^2 \\ 1 & d_{12}^2 & 0 & d_{23}^2 & d_{24}^2 \\ 1 & d_{13}^2 & d_{23}^2 & 0 & d_{34}^2 \\ 1 & d_{14}^2 & d_{24}^2 & d_{34}^2 & 0 \end{vmatrix}$
where the subscripts $i,\,j\in\{1,\,2,\,3,\,4\}$ represent the vertices {a, b, c, d} and $\scriptstyle d_{ij}$ is the pairwise distance between them—i.e., the length of the edge connecting the two vertices. A negative value of the determinant means that a tetrahedron cannot be constructed with the given distances. This formula, sometimes called Tartaglia's formula, is essentially due to the painter Piero della Francesca in the 15th century, as a three dimensional analogue of the 1st century Heron's formula for the area of a triangle.[5]
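The Cayley–Menger determinant is straightforward to evaluate numerically. Below is a Python sketch (function names are mine; the determinant is computed by plain Laplace expansion for clarity, which is fine for a 5×5 matrix):

```python
import math

def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return float(m[0][0])
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def tetra_volume_from_distances(d12, d13, d14, d23, d24, d34):
    """Volume of a tetrahedron from its six pairwise vertex distances,
    via 288 V^2 = Cayley-Menger determinant."""
    s = lambda x: x * x
    M = [[0, 1,      1,      1,      1     ],
         [1, 0,      s(d12), s(d13), s(d14)],
         [1, s(d12), 0,      s(d23), s(d24)],
         [1, s(d13), s(d23), 0,      s(d34)],
         [1, s(d14), s(d24), s(d34), 0     ]]
    D = det(M)
    if D < 0:
        # negative determinant: no tetrahedron realizes these distances
        raise ValueError("no tetrahedron realizes these distances")
    return math.sqrt(D / 288)
```

For six unit distances this returns √2/12, matching the regular-tetrahedron table, and for the corner tetrahedron (three unit edges at one vertex, three face diagonals of length √2) it returns 1/6.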
### Heron-type formula for the volume of a tetrahedron
If U, V, W, u, v, w are lengths of edges of the tetrahedron (first three form a triangle; u opposite to U and so on), then[6]
$\text{volume} = \frac{\sqrt {\,( - a + b + c + d)\,(a - b + c + d)\,(a + b - c + d)\,(a + b + c - d)}}{192\,u\,v\,w}$
where
\begin{align} a & = \sqrt {xYZ} \\ b & = \sqrt {yZX} \\ c & = \sqrt {zXY} \\ d & = \sqrt {xyz} \\ X & = (w - U + v)\,(U + v + w) \\ x & = (U - v + w)\,(v - w + U) \\ Y & = (u - V + w)\,(V + w + u) \\ y & = (V - w + u)\,(w - u + V) \\ Z & = (v - W + u)\,(W + u + v) \\ z & = (W - u + v)\,(u - v + W). \end{align}
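The formula of [6] can be implemented mechanically from the definitions above. A Python sketch (the function name is mine):

```python
import math

def tetra_volume_heron(U, V, W, u, v, w):
    """Heron-type volume formula: U, V, W are the edges of one face,
    u, v, w the respectively opposite edges."""
    X = (w - U + v) * (U + v + w)
    x = (U - v + w) * (v - w + U)
    Y = (u - V + w) * (V + w + u)
    y = (V - w + u) * (w - u + V)
    Z = (v - W + u) * (W + u + v)
    z = (W - u + v) * (u - v + W)
    a = math.sqrt(x * Y * Z)
    b = math.sqrt(y * Z * X)
    c = math.sqrt(z * X * Y)
    d = math.sqrt(x * y * z)
    return math.sqrt((-a + b + c + d) * (a - b + c + d)
                     * (a + b - c + d) * (a + b + c - d)) / (192 * u * v * w)
```

With all six edges equal to 1 this reproduces √2/12; with a face of three √2 diagonals opposite three unit edges it reproduces the corner tetrahedron's volume of 1/6, in agreement with the other volume formulas.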
## Distance between the edges
Any two opposite edges of a tetrahedron lie on two skew lines. If the closest pair of points between these two lines lies within the edges, those points define the distance between the edges; otherwise, the distance between the edges equals that between one of the endpoints and the opposite edge. Let d be the distance between the skew lines formed by opposite edges a and b − c, as calculated in [7]. Then another volume formula is given by
$V = \frac {d |(\mathbf{a} \times \mathbf{(b-c)})| } {6}.$
## Properties of a general tetrahedron
The tetrahedron has many properties analogous to those of a triangle, including an insphere, circumsphere, medial tetrahedron, and exspheres. It has respective centers such as incenter, circumcenter, excenters, Spieker center and points such as a centroid. However, there is generally no orthocenter in the sense of intersecting altitudes. The circumsphere of the medial tetrahedron is analogous to the triangle's nine-point circle, but does not generally pass through the base points of the altitudes of the reference tetrahedron.[8]
Gaspard Monge found a center that exists in every tetrahedron, now known as the Monge point: the point where the six midplanes of a tetrahedron intersect. A midplane is defined as a plane that is orthogonal to an edge joining any two vertices that also contains the centroid of an opposite edge formed by joining the other two vertices. If the tetrahedron's altitudes do intersect, then the Monge point and the orthocenter coincide to give the class of orthocentric tetrahedron.
An orthogonal line dropped from the Monge point to any face meets that face at the midpoint of the line segment between that face's orthocenter and the foot of the altitude dropped from the opposite vertex.
A line segment joining a vertex of a tetrahedron with the centroid of the opposite face is called a median and a line segment joining the midpoints of two opposite edges is called a bimedian of the tetrahedron. Hence there are four medians and three bimedians in a tetrahedron. These seven line segments are all concurrent at a point called the centroid of the tetrahedron.[9] The centroid of a tetrahedron is the midpoint between its Monge point and circumcenter. These points define the Euler line of the tetrahedron that is analogous to the Euler line of a triangle.
The nine-point circle of the general triangle has an analogue in the circumsphere of a tetrahedron's medial tetrahedron. It is the twelve-point sphere and besides the centroids of the four faces of the reference tetrahedron, it passes through four substitute Euler points, 1/3 of the way from the Monge point toward each of the four vertices. Finally it passes through the four base points of orthogonal lines dropped from each Euler point to the face not containing the vertex that generated the Euler point.[10]
The center T of the twelve-point sphere also lies on the Euler line. Unlike its triangular counterpart, this center lies 1/3 of the way from the Monge point M towards the circumcenter. Also, an orthogonal line through T to a chosen face is coplanar with two other orthogonal lines to the same face. The first is an orthogonal line passing through the corresponding Euler point to the chosen face. The second is an orthogonal line passing through the centroid of the chosen face. This orthogonal line through the twelve-point center lies midway between the Euler point orthogonal line and the centroidal orthogonal line. Furthermore, for any face, the twelve-point center lies at the midpoint of the corresponding Euler point and the orthocenter for that face.
The radius of the twelve-point sphere is 1/3 of the circumradius of the reference tetrahedron.
There is a relation among the angles made by the faces of a general tetrahedron given by [11]
$\begin{vmatrix} -1 & \cos{(\alpha_{12})} & \cos{(\alpha_{13})} & \cos{(\alpha_{14})}\\ \cos{(\alpha_{12})} & -1 & \cos{(\alpha_{23})} & \cos{(\alpha_{24})} \\ \cos{(\alpha_{13})} & \cos{(\alpha_{23})} & -1 & \cos{(\alpha_{34})} \\ \cos{(\alpha_{14})} & \cos{(\alpha_{24})} & \cos{(\alpha_{34})} & -1 \\ \end{vmatrix} = 0\,$
where $\alpha_{ij}$ is the angle between the faces i and j.
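For the regular tetrahedron every dihedral angle equals arccos(1/3), so the matrix has −1 on the diagonal and 1/3 everywhere else, and its determinant indeed vanishes. The relation can be checked numerically with a short Python sketch (function name mine):

```python
def gram_relation(cosines):
    """cosines[i][j] = cos(alpha_ij), the cosine of the dihedral angle
    between faces i and j. Returns the determinant of the matrix with -1
    on the diagonal, which vanishes for any genuine tetrahedron."""
    M = [[-1.0 if i == j else cosines[i][j] for j in range(4)]
         for i in range(4)]
    def det(m):
        # 4x4 determinant by Laplace expansion along the first row
        if len(m) == 1:
            return m[0][0]
        return sum((-1) ** j * m[0][j]
                   * det([r[:j] + r[j + 1:] for r in m[1:]])
                   for j in range(len(m)))
    return det(M)
```

In the regular case the matrix is (1/3)J − (4/3)I, where J is the all-ones matrix; its eigenvalues are 0 and −4/3 (with multiplicity three), so the determinant is exactly zero.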
## More vector formulas in a general tetrahedron
If OABC forms a general tetrahedron with a vertex O as the origin and vectors a, b and c represent the positions of the vertices A, B, and C with respect to O, then the radius of the insphere is given by[citation needed]:
$r= \frac {6V} {|\mathbf{b} \times \mathbf{c}| + |\mathbf{c} \times \mathbf{a}| + |\mathbf{a} \times \mathbf{b}| + |(\mathbf{b} \times \mathbf{c}) + (\mathbf{c} \times \mathbf{a}) + (\mathbf{a} \times \mathbf{b})|} \,$
and the radius of the circumsphere is given by:
$R= \frac {|\mathbf{a^2}(\mathbf{b} \times \mathbf{c}) + \mathbf{b^2}(\mathbf{c} \times \mathbf{a}) + \mathbf{c^2}(\mathbf{a} \times \mathbf{b})|} {12V} \,$
which gives the radius of the twelve-point sphere:
$r_T= \frac {|\mathbf{a^2}(\mathbf{b} \times \mathbf{c}) + \mathbf{b^2}(\mathbf{c} \times \mathbf{a}) + \mathbf{c^2}(\mathbf{a} \times \mathbf{b})|} {36V} \,$
where:
$6V= |\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})|. \,$
In the formulas throughout this section, the scalar a2 represents the inner vector product a·a; similarly b2 and c2.
The vector positions of various centers are as follows:
The centroid
$\mathbf{G} = \frac{\mathbf{a} + \mathbf{b} + \mathbf{c}}{4}. \,$
The incenter
$\mathbf{I}= \frac{ \mathbf{a}\,|\mathbf{b}\times \mathbf{c}| + \mathbf{b}\,|\mathbf{c}\times \mathbf{a}| + \mathbf{c}\,|\mathbf{a}\times \mathbf{b}| }{ |\mathbf{b}\times \mathbf{c}| + |\mathbf{c}\times \mathbf{a}| + |\mathbf{a}\times \mathbf{b}| + |\mathbf{b}\times \mathbf{c} + \mathbf{c}\times \mathbf{a} + \mathbf{a}\times \mathbf{b}| }. \,$
The circumcenter
$\mathbf{O}= \frac {\mathbf{a^2}(\mathbf{b} \times \mathbf{c}) + \mathbf{b^2}(\mathbf{c} \times \mathbf{a}) + \mathbf{c^2}(\mathbf{a} \times \mathbf{b})} {2\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})}. \,$
The Monge point
$\mathbf{M} = \frac {\mathbf{a} \cdot (\mathbf{b} + \mathbf{c})(\mathbf{b} \times \mathbf{c}) + \mathbf{b}\cdot (\mathbf{c} + \mathbf{a})(\mathbf{c} \times \mathbf{a}) + \mathbf{c} \cdot (\mathbf{a} + \mathbf{b})(\mathbf{a} \times \mathbf{b})} {2\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})}. \,$
The Euler line relationships are:
$\mathbf{G} = \mathbf{M} + \frac{1}{2} (\mathbf{O}-\mathbf{M})\,$
$\mathbf{T} = \mathbf{M} + \frac{1}{3} (\mathbf{O}-\mathbf{M})\,$
where T is the twelve-point center.
Also:
$\mathbf{a} \cdot \mathbf{O} = \frac {\mathbf{a^2}}{2} \quad\quad \mathbf{b} \cdot \mathbf{O} = \frac {\mathbf{b^2}}{2} \quad\quad \mathbf{c} \cdot \mathbf{O} = \frac {\mathbf{c^2}}{2}\,$
and:
$\mathbf{a} \cdot \mathbf{M} = \frac {\mathbf{a} \cdot (\mathbf{b} + \mathbf{c})}{2} \quad\quad \mathbf{b} \cdot \mathbf{M} = \frac {\mathbf{b} \cdot (\mathbf{c} + \mathbf{a})}{2} \quad\quad \mathbf{c} \cdot \mathbf{M} = \frac {\mathbf{c} \cdot (\mathbf{a} + \mathbf{b})}{2}.\,$
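These vector formulas and the Euler-line relations can be checked numerically. The sketch below (helper names are mine) uses the trirectangular tetrahedron with a = (1,0,0), b = (0,1,0), c = (0,0,1); since it is orthocentric, its Monge point coincides with the orthocenter at the origin, and its circumcenter is (1/2, 1/2, 1/2):

```python
def cross(p, q):
    return (p[1] * q[2] - p[2] * q[1],
            p[2] * q[0] - p[0] * q[2],
            p[0] * q[1] - p[1] * q[0])

def dot(p, q):
    return sum(x * y for x, y in zip(p, q))

def scale(t, p):
    return tuple(t * x for x in p)

def add(*ps):
    return tuple(map(sum, zip(*ps)))

def centers(a, b, c):
    """Circumcenter O, Monge point M, centroid G and twelve-point center T
    of the tetrahedron with vertices at the origin and at a, b, c,
    using the formulas above."""
    sixV = dot(a, cross(b, c))
    O = scale(1 / (2 * sixV), add(scale(dot(a, a), cross(b, c)),
                                  scale(dot(b, b), cross(c, a)),
                                  scale(dot(c, c), cross(a, b))))
    M = scale(1 / (2 * sixV), add(scale(dot(a, add(b, c)), cross(b, c)),
                                  scale(dot(b, add(c, a)), cross(c, a)),
                                  scale(dot(c, add(a, b)), cross(a, b))))
    G = scale(0.25, add(a, b, c))
    T = add(M, scale(1 / 3, add(O, scale(-1, M))))
    return O, M, G, T
```

Evaluating `centers((1,0,0), (0,1,0), (0,0,1))` confirms G = M + (O − M)/2 and T = M + (O − M)/3 for this example.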
## Geometric relations
A tetrahedron is a 3-simplex. Unlike the case of the other Platonic solids, all the vertices of a regular tetrahedron are equidistant from each other (they are the only possible arrangement of four equidistant points in 3-dimensional space).
A tetrahedron is a triangular pyramid, and the regular tetrahedron is self-dual.
A regular tetrahedron can be embedded inside a cube in two ways such that each vertex is a vertex of the cube, and each edge is a diagonal of one of the cube's faces. For one such embedding, the Cartesian coordinates of the vertices are
(+1, +1, +1);
(−1, −1, +1);
(−1, +1, −1);
(+1, −1, −1).
This yields a tetrahedron with edge-length $\scriptstyle 2 \sqrt{2}$, centered at the origin. For the other tetrahedron (which is dual to the first), reverse all the signs. These two tetrahedra's vertices combined are the vertices of a cube, demonstrating that the regular tetrahedron is the 3-demicube.
The volume of this tetrahedron is 1/3 the volume of the cube. Combining both tetrahedra gives a regular polyhedral compound called the compound of two tetrahedra or stella octangula.
The interior of the stella octangula is an octahedron, and correspondingly, a regular octahedron is the result of cutting off, from a regular tetrahedron, four regular tetrahedra of half the linear size (i.e. rectifying the tetrahedron).
The above embedding divides the cube into five tetrahedra, one of which is regular. In fact, 5 is the minimum number of tetrahedra required to compose a cube.
Inscribing tetrahedra inside the regular compound of five cubes gives two more regular compounds, containing five and ten tetrahedra.
Regular tetrahedra cannot tessellate space by themselves, although such a tessellation seemed plausible enough that Aristotle claimed it was possible. However, two regular tetrahedra can be combined with an octahedron, giving a rhombohedron that can tile space.
However, there is at least one irregular tetrahedron of which copies can tile space. If one relaxes the requirement that the tetrahedra be all the same shape, one can tile space using only tetrahedra in various ways. For example, one can divide an octahedron into four identical tetrahedra and combine them again with two regular ones. (As a side-note: these two kinds of tetrahedron have the same volume.)
The tetrahedron is unique among the uniform polyhedra in possessing no parallel faces.
### Related polyhedra
Pyramid family: tetrahedron, square pyramid, pentagonal pyramid, hexagonal pyramid.
A truncation process applied to the tetrahedron produces a series of uniform polyhedra. Truncating edges down to points produces the octahedron as a rectified tetrahedron. The process completes as a birectification, reducing the original faces down to points, and producing the self-dual tetrahedron once again.
Family of uniform tetrahedral polyhedra: {3,3}, t0,1{3,3}, t1{3,3}, t1,2{3,3}, t2{3,3}, t0,2{3,3}, t0,1,2{3,3}, s{3,3}.
This polyhedron is topologically related as a part of a sequence of regular polyhedra with Schläfli symbols {3,n}, continuing into the hyperbolic plane: {3,3}, {3,4}, {3,5}, {3,6}, {3,7}, {3,8}, {3,9}.
### Intersecting tetrahedra
An interesting polyhedron can be constructed from five intersecting tetrahedra. This compound of five tetrahedra has been known for hundreds of years. It comes up regularly in the world of origami. Joining the twenty vertices would form a regular dodecahedron. There are both left-handed and right-handed forms, which are mirror images of each other.
## Isometries
### Isometries of regular tetrahedra
The proper rotations and reflections in the symmetry group of the regular tetrahedron
The vertices of a cube can be grouped into two groups of four, each forming a regular tetrahedron (see above, and also animation, showing one of the two tetrahedra in the cube). The symmetries of a regular tetrahedron correspond to half of those of a cube: those that map the tetrahedra to themselves, and not to each other.
The tetrahedron is the only Platonic solid that is not mapped to itself by point inversion.
The regular tetrahedron has 24 isometries, forming the symmetry group Td, isomorphic to S4. They can be categorized as follows:
• T, isomorphic to alternating group A4 (the identity and 11 proper rotations) with the following conjugacy classes (in parentheses are given the permutations of the vertices, or correspondingly, the faces, and the unit quaternion representation):
• identity (identity; 1)
• rotation about an axis through a vertex, perpendicular to the opposite plane, by an angle of ±120°: 4 axes, 2 per axis, together 8 ((1 2 3), etc.; (1 ± i ± j ± k) / 2)
• rotation by an angle of 180° such that an edge maps to the opposite edge: 3 ((1 2)(3 4), etc.; i, j, k)
• reflections in a plane perpendicular to an edge: 6
• reflections in a plane combined with 90° rotation about an axis perpendicular to the plane: 3 axes, 2 per axis, together 6; equivalently, they are 90° rotations combined with inversion (x is mapped to −x): the rotations correspond to those of the cube about face-to-face axes
### Isometries of irregular tetrahedra
The isometries of an irregular tetrahedron depend on the geometry of the tetrahedron, with 7 cases possible. In each case a 3-dimensional point group is formed.
• An equilateral triangle base and isosceles (and non-equilateral) triangle sides gives 6 isometries, corresponding to the 6 isometries of the base. As permutations of the vertices, these 6 isometries are the identity 1, (123), (132), (12), (13) and (23), forming the symmetry group C3v, isomorphic to S3.
• Four congruent isosceles (non-equilateral) triangles gives 8 isometries. If edges (1,2) and (3,4) are of different length to the other 4 then the 8 isometries are the identity 1, reflections (12) and (34), and 180° rotations (12)(34), (13)(24), (14)(23) and improper 90° rotations (1234) and (1432) forming the symmetry group D2d.
• Four congruent scalene triangles gives 4 isometries. The isometries are 1 and the 180° rotations (12)(34), (13)(24), (14)(23). This is the Klein four-group V4 ≅ Z2 × Z2, present as the point group D2. A tetrahedron with this symmetry is called a disphenoid.
• Two pairs of isomorphic isosceles (non-equilateral) triangles. This gives two opposite edges (1,2) and (3,4) that are perpendicular but different lengths, and then the 4 isometries are 1, reflections (12) and (34) and the 180° rotation (12)(34). The symmetry group is C2v, isomorphic to V4.
• Two pairs of isomorphic scalene triangles. This has two pairs of equal edges (1,3), (2,4) and (1,4), (2,3) but otherwise no edges equal. The only two isometries are 1 and the rotation (12)(34), giving the group C2 isomorphic to Z2.
• Two unequal isosceles triangles with a common base. This has two pairs of equal edges (1,3), (1,4) and (2,3), (2,4) and otherwise no edges equal. The only two isometries are 1 and the reflection (34), giving the group Cs isomorphic to Z2.
• No edges equal, so that the only isometry is the identity, and the symmetry group is the trivial group.
## A law of sines for tetrahedra and the space of all shapes of tetrahedra
A corollary of the usual law of sines is that in a tetrahedron with vertices O, A, B, C, we have
$\sin\angle OAB\cdot\sin\angle OBC\cdot\sin\angle OCA = \sin\angle OAC\cdot\sin\angle OCB\cdot\sin\angle OBA.\,$
One may view the two sides of this identity as corresponding to clockwise and counterclockwise orientations of the surface.
Putting any of the four vertices in the role of O yields four such identities, but in a sense at most three of them are independent: If the "clockwise" sides of three of them are multiplied and the product is inferred to be equal to the product of the "counterclockwise" sides of the same three identities, and then common factors are cancelled from both sides, the result is the fourth identity. One reason to be interested in this "independence" relation is this: It is widely known that three angles are the angles of some triangle if and only if their sum is 180° (π radians). What condition on 12 angles is necessary and sufficient for them to be the 12 angles of some tetrahedron? Clearly the sum of the angles of any side of the tetrahedron must be 180°. Since there are four such triangles, there are four such constraints on sums of angles, and the number of degrees of freedom is thereby reduced from 12 to 8. The four relations given by this sine law further reduce the number of degrees of freedom, not from 8 down to 4, but only from 8 down to 5, since the fourth constraint is not independent of the first three. Thus the space of all shapes of tetrahedra is 5-dimensional.[12]
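Since each of the six sine ratios comes from the planar law of sines in one face (e.g. sin∠OAB / sin∠OBA = OB/OA in triangle OAB), the identity holds for an arbitrary tetrahedron, which a short script can confirm. A Python sketch (function name mine; the vertices are a fixed pseudo-random configuration):

```python
import math
import random

def angle(p, q, r):
    """Angle at vertex q in triangle p-q-r, in radians."""
    u = [a - b for a, b in zip(p, q)]
    v = [a - b for a, b in zip(r, q)]
    d = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    # clamp against floating-point overshoot before acos
    return math.acos(max(-1.0, min(1.0, d / (nu * nv))))

# A fixed "random" tetrahedron OABC.
random.seed(1)
O, A, B, C = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]

# "Clockwise" and "counterclockwise" products from the identity above.
lhs = (math.sin(angle(O, A, B)) * math.sin(angle(O, B, C))
       * math.sin(angle(O, C, A)))
rhs = (math.sin(angle(O, A, C)) * math.sin(angle(O, C, B))
       * math.sin(angle(O, B, A)))
assert abs(lhs - rhs) < 1e-9
```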
## Applications
The ammonium ion, NH4+, is tetrahedral.
### Numerical analysis
In numerical analysis, complicated three-dimensional shapes are commonly broken down into, or approximated by, a mesh of irregular tetrahedra when setting up the equations for finite element analysis, especially in the numerical solution of partial differential equations. These methods are widely used in computational fluid dynamics, aerodynamics, electromagnetic field simulation, civil engineering, chemical engineering, naval architecture and engineering, and related fields.
### Chemistry
The tetrahedron shape is seen in nature in covalently bonded molecules. An sp3-hybridized atom is surrounded by four other atoms lying at the corners of a tetrahedron. For instance, in a methane molecule (CH4) or an ammonium ion (NH4+), four hydrogen atoms surround a central carbon or nitrogen atom with tetrahedral symmetry. For this reason, one of the leading journals in organic chemistry is called Tetrahedron. See also tetrahedral molecular geometry. The angle subtended at the center by any two vertices of a perfect tetrahedron is $\arccos{\left(-\tfrac{1}{3}\right)}$, or approximately 109.47°.
Water, H2O, also has a tetrahedral structure, with two hydrogen atoms and two lone pairs of electrons around the central oxygen atom. Its tetrahedral symmetry is not perfect, however, because the lone pairs repel more strongly than the O-H bonds do.
Quaternary phase diagrams in chemistry are represented graphically as tetrahedra.
However, quaternary phase diagrams in communication engineering are represented graphically on a two-dimensional plane.
### Electricity and electronics
If six equal resistors are soldered together to form a tetrahedron, then the resistance measured between any two vertices is half that of one resistor.[13][14]
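This can be verified by nodal analysis: the network is the complete graph K4 with one resistor per edge. The sketch below (function name mine) injects a unit current at one vertex, withdraws it at another, grounds a third, and solves the reduced Laplacian system for the node voltages:

```python
def effective_resistance(R=1.0):
    """Effective resistance between two vertices of a tetrahedron of six
    equal resistors R. Inject 1 A at node 0, withdraw it at node 1,
    ground node 3, and solve the reduced Laplacian system L v = i."""
    g = 1.0 / R
    # Reduced Laplacian for nodes 0..2 (node 3 grounded); every node of
    # the complete graph K4 has degree 3.
    L = [[3 * g, -g, -g],
         [-g, 3 * g, -g],
         [-g, -g, 3 * g]]
    i = [1.0, -1.0, 0.0]  # current in at node 0, out at node 1
    # Gaussian elimination with partial pivoting on the augmented matrix.
    n = 3
    A = [row[:] + [i[k]] for k, row in enumerate(L)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for cc in range(col, n + 1):
                A[r][cc] -= f * A[col][cc]
    v = [0.0] * n
    for r in range(n - 1, -1, -1):
        v[r] = (A[r][n] - sum(A[r][cc] * v[cc]
                              for cc in range(r + 1, n))) / A[r][r]
    return v[0] - v[1]
```

For any value of R the computed voltage drop per unit current is R/2, in agreement with [13][14].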
Since silicon is the most common semiconductor used in solid-state electronics, and silicon has a valence of four, the tetrahedral shape of the four chemical bonds in silicon is a strong influence on how crystals of silicon form and what shapes they assume.
### Games
Especially in role-playing games, this solid is known as a 4-sided die, one of the more common polyhedral dice, with the number rolled appearing around the bottom or on the top vertex. Some Rubik's Cube-like puzzles are tetrahedral, such as the Pyraminx and Pyramorphix.
### Color space
Tetrahedra are used in color space conversion algorithms specifically for cases in which the luminance axis diagonally segments the color space (e.g. RGB, CMY).[15]
### Contemporary art
The Austrian artist Martina Schettina created a tetrahedron using fluorescent lamps. It was shown at the light art biennale Austria 2010.[16]
It is used as album artwork, surrounded by black flames on The End of All Things to Come by Mudvayne.
### Popular culture
Stanley Kubrick originally intended the monolith in 2001: A Space Odyssey to be a tetrahedron, according to Marvin Minsky, a cognitive scientist and expert on artificial intelligence who advised Kubrick on the HAL 9000 computer and other aspects of the movie. Kubrick scrapped the idea after a visitor shown footage of it did not recognize what it was, and he did not want anything in the movie that ordinary viewers would not understand.[17]
### Geology
The tetrahedral hypothesis, originally published by William Lowthian Green to explain the formation of the Earth,[18] was popular through the early 20th century.[19][20]
## References
1. Weisstein, Eric W., "Tetrahedron" from MathWorld.
2. Coxeter, H. S. M.: Regular Polytopes (Methuen and Co., 1948). Table I(i).
3. http://www.mathematische-basteleien.de/tetrahedron.htm
4. "Angle Between 2 Legs of a Tetrahedron" – Maze5.net
5. "Simplex Volumes and the Cayley-Menger Determinant" at MathPages.com
6. W. Kahan, "What has the Volume of a Tetrahedron to do with Computer Programming Languages?", [1], pp. 16–17.
7.
8. Havlicek, H.; Weiß, G. (2003). "Altitudes of a tetrahedron and traceless quadratic forms". American Mathematical Monthly 110 (8): 679–693. DOI:10.2307/3647851. JSTOR 3647851.
9. Kam-tim Leung and S. N. Suen, "Vectors, Matrices and Geometry", Hong Kong University Press, 1994, pp. 53–54.
10. Outudee, Somluck; New, Stephen. The Various Kinds of Centres of Simplices. Dept. of Maths., Chulalongkorn University, Bangkok.
11. Daniel Audet, "Déterminants sphérique et hyperbolique de Cayley-Menger", Bulletin AMQ, May 2011.
12. Rassat, André; Fowler, Patrick W. (2004). "Is There a 'Most Chiral Tetrahedron'?". Chemistry: A European Journal 10 (24): 6575–6580. DOI:10.1002/chem.200400869.
13. Klein, Douglas J. (2002). "Resistance-Distance Sum Rules" (PDF). Croatica Chemica Acta 75 (2): 633–649. Retrieved 2006-09-15.
14. Tomáš Záležák (18 October 2007). Resistance of a Regular Tetrahedron (PDF). Retrieved 25 Jan 2011.
15. Vondran, Gary L. (April 1998). "Radial and Pruned Tetrahedral Interpolation Techniques" (PDF). HP Technical Report HPL-98-95: 1–32.
16. Lightart-Biennale Austria 2010.
17. "Marvin Minsky: Stanley Kubrick Scraps the Tetrahedron". Web of Stories. Retrieved 20 February 2012.
18.
19. Arthur Holmes (1965). Principles of Physical Geology. Nelson. p. 32.
20. Charles Henry Hitchcock (January 1900). "William Lowthian Green and his Theory of the Evolution of the Earth's Features". The American Geologist XXV: 1–10.
# Executive Summary
This review was commissioned by the Norwegian Commission for Afghanistan with the aim of assessing the 2011–2014 Norwegian development assistance to Afghanistan. The purpose was three-fold:
• Provide an assessment of how the Ministry of Foreign Affairs (MFA) has responded to the recommendations from the 2012 Norad evaluation of Norwegian aid to Afghanistan, and how the Norwegian aid has been aligned to MFA strategies and internal guidelines.
• Provide an overview of Norwegian development assistance in Afghanistan during the 2011–14 period and, where possible, identify its short-term and (expected) long-term results.
• Provide recommendations for further development cooperation in Afghanistan.
Specifically, the team was asked to review the management of the Norwegian Development Funds, and the contribution of implementing partners, with respect to the concrete short-term and (expected) long-term results they generated in the period under review.
The Terms of Reference (ToR) request an analysis of trends in the period 2011–2014 in terms of prioritization and selection of thematic focus and implementing partners. They also ask for an assessment of the degree to which these meet the overall Norwegian development goals of: 1) strengthening Afghan institutions; 2) contributing to a political settlement; and 3) contributing to sustainable and just development, humanitarian efforts, and to the promotion of the governance, human rights and gender equality agendas. Thematic priority areas were: a) good governance; b) education; and c) rural development.
Major contextual changes took place in Afghanistan during the period under review. The security situation worsened throughout the country, and the economy stagnated. Together, these changes resulted in increasing challenges for the implementation of development programmes and projects. These circumstances also applied to the monitoring and evaluation (M&E) of Norwegian funded assistance.
Norwegian development funding to Afghanistan totalled NOK 5,363 million for the period 2001–2011, and NOK 3,008 million for the period 2011–2014. The annual disbursement over these last years was approximately NOK 750 million.
During the period under review, multilateral organisations (the World Bank and the United Nations Development Programme) remained the main funding channels for Norwegian development aid, receiving 55% of the total assistance. Forty per cent was channelled through Norwegian, international and Afghan NGO partners. There was an increase in support for economic development and trade (56% of total funding), and a substantial reduction in emergency response assistance (13% of total funding) compared to the previous period (2001–2011).
A key finding from the 2012 Norad evaluation of the period 2001–2011 was that Norway’s policy and interventions “match closely the international agenda for Afghanistan and within that framework its development agenda is certainly relevant”. The evaluation found alignment with Afghan priorities consistently high on the Norwegian agenda, and the choice of aid channels remarkably consistent over the years. The evaluators were, however, of the opinion that “limited administrative capacity (at the Embassy) is one clear reason why policies are weak on the operational side”.
The 2012 evaluation found that Norwegian development made real achievements in output terms, but that “there is still limited evidence of concrete outcomes”. The report found it difficult to identify the impact of the Norwegian assistance. Its main recommendation was that “Norway should rethink its strategy and aid programming for future engagement in Afghanistan”.
This review has found that the MFA and the Kabul Embassy adopted specific measures in response to the report, including operational responses to several of the recommendations. The MFA and the Kabul Embassy, however, also disagreed with some of the findings. Furthermore, on the basis of the recommendations, they engaged in a close dialogue with the World Bank (WB) managed Afghan Reconstruction Trust Fund (ARTF), the United Nations Development Programme (UNDP) managed Law and Order Trust Fund for Afghanistan (LOTFA), and the funded non-governmental organisations (NGOs) on the need to develop a) baseline studies, b) anti-corruption strategies and tools, and c) plans and initiatives for monitoring and external evaluations. Most NGO partners report compliance with these requirements, and some of them have also developed theories of change.
This review found that there was a process under way well before 2011 to focus and reduce the number of development partners and projects involved within the given budget. There was also a strong emphasis in strategy documents, and in the Embassy’s annual “Virksomhetsplan” to support the dialogue with the Afghan government and to develop the capacity of its ministries “to manage their own development”. The selection of thematic focus and implementing partners in the 2011–2014 period was based on:
• Adherence to the Norwegian strategy for the development assistance to Afghanistan.
• Adherence to the requirements set in the Tokyo Mutual Accountability Framework (TMAF) to align donor funding with national priorities.
• A wish to reduce the number of projects/programmes within the given thematic areas, and to channel more aid through trust funds in order to reduce the management burden at the Embassy/MFA.
• The goal of minimizing exposure to corruption risks and allowing for a stronger focus on M&E in the remaining projects/programmes.
• A reduction in the number of Norwegian staff handling the development portfolio at the Norwegian Embassy and, from 2013, a shift in the Embassy’s management role, with greater responsibility moved to Oslo (MFA and Norad).
• A continuation of focus areas and aid channels, although with a higher priority on ARTF and a reduction in NGO funding.
The closure of the Provincial Reconstruction Team in Faryab in 2013 meant that the Development Advisor positions in Meymaneh disappeared, ending both the Embassy’s presence in the field and the regular field visits by Embassy staff. The gradual reduction of Norwegian development-related positions at the Kabul Embassy from 2013 onwards, and the abolition of the Norwegian development councillor position at the end of 2014, substantially reduced Norway’s ability to engage in development policy processes in Kabul.
The review has found that the Embassy, MFA and Norad have had a sustained and active engagement with implementing partners, not least to ensure compliance with the TMAF. However, contact with the Embassy on development issues, as well as its capacity to take part in strategic and more technical coordination efforts, decreased after the reduction in Norwegian development staff and, finally, the withdrawal of the Norwegian development councillor. Several interviewees suggested that Norway could have taken a more proactive role in initiating independent M&E activities, including those of the trust funds and their implementing ministries and partners. The Embassy suggested making use of Afghan consultants and research institutes for this purpose, which would also contribute to building their capacity.
The review has found that Norway has been a very responsible partner of the Government of Afghanistan, through active dialogue with the administration and the various ministries and through compliance with the TMAF. Through its involvement in, and periodic leadership of, the Nordic+ group of donors, Norway was able to influence development policy beyond what would have been possible had it acted on its own. The Embassy’s more active use of Norad during 2012–2014 for advice and process input helped secure the quality of development management and activities.
The review concludes that Norwegian aid was highly relevant in terms of focus and selection of intervention areas. The balance between multilateral and bilateral channels ensures support to projects of national priority and importance, while also allowing for diversification and risk reduction through the funding of NGOs. The review team’s main concern is that the selection of implementing partners became less innovative over these years: no new partners (neither Afghan NGOs nor civil society organisations) were supported directly, their funding being left instead to the civil society trust fund Tawanmandi. This strategy now poses a major challenge, as funding for Tawanmandi was terminated by mid-2015.
There are considerable similarities in the focus, priorities and approaches of Norway and Sweden during this period, and these also came to apply to Denmark, which ended its direct budget support to, and presence in, the Ministry of Education. All three countries signed up to the TMAF and worked actively through the Nordic+ group towards its implementation. NGOs from all three countries have had a long and sustained presence in Afghanistan, and have received substantial donor support throughout the review period, including funding from different Nordic donors. The main differences between Sweden and Norway concern the management and monitoring of development assistance. For Sweden, the responsibility for managing development aid is primarily delegated to the Swedish Embassy in Kabul, whereas Norway has divided this management responsibility between different sections in the MFA and Norad since 2013. Sweden has five Swedish aid officials and two locally recruited development advisors based at the Embassy. Sweden therefore has more capacity to do field monitoring and to engage with authorities at different levels, generating updated information and knowledge it can bring into the dialogue with other donors and the Afghan government.
The Norwegian support for NGOs goes primarily towards projects within the three priority areas of Norwegian engagement. A review of NGO priorities and activities showed: involvement in service delivery; varying degrees of priority given to capacity building for government, Afghan NGOs and civil society; the ability to build national ownership through some programmes; and varying degrees of attention to gender issues, with some very innovative projects. The NGOs have the capacity to provide flexible responses to sudden changes in the context of humanitarian assistance, for example after natural disasters and internal displacements.
With some variation, all NGOs receiving Norwegian aid undertake conflict analysis and have developed risk mitigation plans, which potentially makes them better prepared to mitigate risks and corruption challenges than they were in 2011. Most of the development-oriented NGOs have done baseline surveys (including some that are very extensive and involve local communities and government representatives), and some of these NGOs have also developed a theory of change to guide their interventions. Most of them prioritize capacity building and national ownership, although the extent to which government staff are included and capacity is developed varies. All NGOs report on results against plan, and some also report on project impact or detail the expected impact of the assistance. However, this reporting typically consists of isolated examples rather than systematic reporting and impact assessment.
A review of the three trust funds supported (ARTF, LOTFA and Tawanmandi) shows more variation in the results of Norwegian support. Norwegian policy guidelines emphasize the need for funding and support for civil society. However, the support channelled to the Tawanmandi fund was terminated, effective mid-2015, on the grounds that the trust fund had not delivered on programme objectives and expectations, primarily due to weak performance of the management agent. Still, as one interviewee noted, support for Afghan media and anti-corruption organisations may have had a greater impact on fighting corruption through public disclosure than support for anti-corruption measures provided to Afghan government institutions.
Support to ARTF and LOTFA has continued throughout the review period, despite some irregularities identified in the management of LOTFA funds. Norway played an active role together with other development partners to strengthen safeguards in LOTFA, as well as to improve M&E and reporting against results in both LOTFA and ARTF.
Overall, we find that the MFA, Norad and the Embassy in Kabul have done what they could to address the shortcomings that the 2012 evaluation identified in terms of M&E, impact reporting and minimizing risks of corruption, given the challenging context and the limited number of staff on the ground. That said, Norway could have done more to initiate its own M&E activities; nonetheless, the quality of partners’ systems and safeguards improved during the period under review.
The team specifically reviewed support to good governance, education and rural development. We found that the interventions were relevant, that implementation progressed satisfactorily, and that planned outputs were being achieved. The example of Integrity Watch Afghanistan illustrates the potential effect of what was initially only limited and time-bound Embassy support for an innovative idea.
Norway’s support to the Afghan education sector was provided through the ARTF-managed Education Quality Improvement Programme (EQUIP) and NGOs, and through the Global Partnership for Education. The Embassy also participated actively in technical groups and coordination bodies until the capacity was reduced at the Kabul Embassy. Despite Norwegian and international efforts, the status by mid-2015 is a continued need for capacity development in the Ministry of Education and, equally important, for increased teacher training to ensure implementation capacity and improved quality. There is also a need for close on-going follow-up and monitoring of ARTF and EQUIP funding, to counter concerns about corruption and inflated student and school numbers, and to ensure continued attention to quality improvement.
In the rural development sector, Norway supported the National Solidarity Programme (NSP); international, Norwegian and Afghan (partner) NGOs; the UN Food and Agriculture Organization (FAO) project “Promoting Integrated Pest Management in Afghanistan”; and NORPLAN’s documentation of Afghan hydrogeology. We found that the international and Norwegian support for rural development has yielded extensive results, and some documented impacts, including in the area of women’s roles and development opportunities and in the strengthening of Afghan civil society.
Taken together, there have been documented outcomes and results from the Norwegian annual development assistance of NOK 750 million, distributed through different channels and with the involvement of the Afghan government and various ministries. Partners request the continuation of predictable and flexible funding in the coming years, whereas senior Norwegian bureaucrats recommend that more attention be placed on addressing corruption challenges (and on the individuals influencing them) to ensure that Norwegian assistance meets actual needs and the jointly agreed development goals.
Our concern is that since 2013 Norwegian “on the ground” management capacity in Afghanistan has been reduced, and replaced by a much more fragmented aid management system. Contract responsibility has been divided between Norad and MFA, but we struggle to identify where the responsibility rests for initiating strategy debates and M&E initiatives.
This situation is a concern, as we recognize two clear needs in the increasingly challenging political, security and development context of Afghanistan. The first is that M&E should not be left as the responsibility of Norway’s implementing partners alone, but should be complemented by independent field monitoring and evaluations. Norway would not have to carry out these oversight measures on its own; they are likely to have greater impact and be more cost-effective if done in partnership with other donors and using new M&E techniques, including community-based monitoring. The second need is for continued on-the-ground strategic and project-related “development dialogue” with the Government of Afghanistan, other donors, trust funds, NGOs and civil society organisations. Sufficient and skilled Embassy staffing would help ensure capacity for learning, for making adjustments and for securing impact close to where the changes are taking place. This would contribute to making the best use of Norwegian funding in an unpredictable and constantly changing context.
# Introduction and methodology
This report is a review for the Norwegian Commission for Afghanistan on Norwegian Development Assistance to Afghanistan for the period 2011–2014. As defined by the Terms of Reference (ToR; see Annex II), the purpose of this study is three-fold:
• Assess the follow-up to the recommendations from the 2012 Norad report, including MFA strategies and internal guidelines.
• Develop an overview of the Norwegian development assistance in Afghanistan 2011–2014 and, where possible, its short-term and (expected) long-term results.
• Provide recommendations for further development cooperation in Afghanistan.
The review is based on a combination of publicly available information, documents received from the Commission’s Secretariat (including the Norwegian Kabul Embassy’s tri-annual strategies and annual plans), reports from the various implementing partners, and interviews with key informants. Interviewees include staff of the Ministry of Foreign Affairs (MFA) and the Norwegian Agency for Development Cooperation (Norad) in Oslo; staff of the Norwegian Embassy in Kabul; representatives from non-governmental organisations (NGOs) in Oslo and Kabul; staff of the Danish and Swedish Embassies in Kabul; and World Bank (WB) staff administering the Afghan Reconstruction Trust Fund (ARTF) in Kabul (the list of interviews is enclosed as Annex I). The United Nations Development Programme (UNDP), which administers the Law and Order Trust Fund for Afghanistan (LOTFA), did not respond to requests for interviews made by the Embassy in Kabul. Our assessment of LOTFA is therefore based on reports and on interviews with MFA and Norad staff.
The team operated in Kabul under the security regulations of the Norwegian/Danish Embassy. This circumstance placed significant constraints on our ability to meet with Government of Afghanistan officials and institutions that could potentially provide a more independent opinion about Norway as donor and about Norwegian assistance. Later efforts to obtain information and viewpoints about Norwegian assistance through email queries to Afghans holding key positions in Ministries, Directorates and Commissions provided limited results. Nevertheless, where relevant, we quote the responses received. The Norwegian Embassy in Kabul hosted a dinner with Afghans from varied development and policy backgrounds. This meeting provided the opportunity for an informal discussion on topics relevant for this review, including suggestions for future directions and priorities.
The triangulation of the various sources, together with further inquiries into some of the issues identified, then formed the basis for our analysis. It is important to note that a limited review, based primarily on the organisations’ own reports and perspectives, cannot provide any in-depth assessment of results and impacts beyond what is covered in those documents or in the external reviews, evaluations and reports identified.
The team received valuable comments and inputs on the inception report submitted in early December 2015, including on the limitations of some of the initial questions raised in the ToR, and on the draft report submitted in early February 2016. Elling Tjønneland (CMI) provided quality control for the report.
# Contextual changes 2011–2014
There were major contextual changes in Afghanistan during the period under review that in different ways influenced the security and development context, and thus posed challenges for the planning and implementation of development programmes. Some of the developments that occurred in 2015 are reflected on in order to enable a discussion on future challenges.
The security situation was influenced by the planned reduction in the international military presence announced in 2011. Most military contingents had left Afghanistan by the end of 2014; only a small Norwegian mentoring force remains as part of the North Atlantic Treaty Organization’s (NATO) Operation Resolute Support. The reduction in international forces has had a negative influence on security throughout Afghanistan, leading to an annual increase in civilian casualties due to attacks in the cities and along the highways. The departure of forces was followed by a weakened economy, due to reduced military spending in general and, for many countries (though not Norway and Sweden), to reduced support from Provincial Reconstruction Teams (PRTs) for development projects.
The worsened security situation gradually increased the challenges for the implementation of development projects, as well as for the monitoring and evaluation (M&E) of on-going projects in many parts of the country. Affected areas included previously relatively secure provinces such as Faryab, Kunduz, Baghlan and Badakhshan. The conflict moreover led to increased internal displacement and a subsequent rise in the need for humanitarian assistance. Uncertainty over the security situation led many Afghans to consider migration, beyond the already existing labour migration to Pakistan, Iran and the Gulf countries. In particular, many young men have left the country in recent years, including migrants to Norway, where they have constituted the largest group of underage asylum seekers.
More targeted attacks in Kabul against hotels and restaurants frequented by international personnel, assassinations, and an increased number of kidnappings (some attacks, such as one in early 2014, taking place close to the Norwegian Embassy) led to much stricter security regimes and limitations on travel for Embassy staff. They also led international organisations and NGOs to reconsider their presence, travel and staffing levels in-country; the ARTF, for example, shifted its international staff to Dubai. The Norwegian Embassy was merged with the Danish Embassy in late 2014, for a combination of security concerns and cost reduction.
The 2014 Presidential election was marred by allegations of corruption and a delayed transfer of power, resulting in reduced respect for democratic institutions and processes. US political intervention in a six-month standoff between the two main contenders, Ashraf Ghani and Abdullah Abdullah, led to the establishment of a National Unity Government in September 2014, and to the transfer of Presidential authority from Hamid Karzai to Ashraf Ghani.
While the Karzai government had developed an increasingly confrontational relationship with the international community, the new government struggled to establish a functional administration and to agree on key positions in the central and provincial administrations. The complicated political situation has negatively influenced the government’s ability to deliver on its promises, to get the administration and the various ministries and commissions staffed and functional, and to gain national and international trust. This has in turn negatively affected economic development.
Corruption and insufficient control of development assistance came to the forefront in 2012 through media reports on the Kabul Bank fraud, involving close relatives of President Karzai and the then Defence Minister. In the same year, allegations emerged of mismanagement in the UNDP administered LOTFA. Frequent reports from the US Special Inspector General for Afghan Reconstruction (SIGAR) have continued to draw attention to the scale of the corruption, and to the lack of, and challenges related to, M&E of reconstruction and development assistance.
The illustration below on the prevalence of bribery of public officials, drawing on surveys conducted by the UN Office on Drugs and Crime (UNODC), demonstrates the extent of the corruption challenge.[1]
There was a gradual increase in insecurity and a worsening economic outlook from 2011 to 2014, and the situation deteriorated further in 2015. This decline has major implications for developments in the coming years. The temporary fall of Kunduz city to the Taliban, and the Taliban’s increased presence throughout Afghanistan, demonstrated the political and military inability to counter the Taliban’s advances; the problem also includes the high desertion rate among battle-fatigued Afghan soldiers. The recent presence of the Islamic State (IS) in Afghanistan has further increased the complexity of the military challenge, but has also led to US and NATO commitments extending beyond 2017. However, the sharp increase in the number of Afghans leaving for Europe is indicative of the challenges confronting the Afghan government and its army, challenges they have so far been unable to counter in a manner that earns international and national confidence and trust.
Notes
[1] The 2012 UNODC report “Corruption in Afghanistan: Recent Patterns and Trends” is available at https://www.unodc.org/documents/frontpage/Corruption_in_Afghanistan_FINAL.pdf, visited 22.02.2016.
# Overview of development assistance 2011–2014
Before reviewing the Norwegian development assistance provided from 2011–2014 we will summarise the main trends in the period 2001–2011. The data is derived from the 2012 Norad evaluation and from official Norwegian aid statistics.[2]
Norway reported a total spending of NOK 5,363 million from 2001–2011, and then NOK 3,008 million from 2011–2014. The annual budget allocation was approximately NOK 750 million.
Figure 2 from the 2012 report documents a fairly equal distribution of funding for NGOs, the United Nations (UN), the ARTF and a miscellaneous category.
During the 2011–2014 period, as indicated in Figure 3 below, the multilateral organisations remained the largest channel, receiving 55% of the assistance, a slight increase from the combined ARTF and UN funding of 51% for the 2001–2011 period. There was, however, a slight internal shift, with more of the funding channelled through the ARTF. The largest change is on the NGO side, which is up from 24% to 40%, although some NGO funding might have been included in the miscellaneous category in the 2001–2011 figures.
The 20% earmarking of assistance for the Faryab province was a contributing factor to this increase, as funding to this area was channelled through NGOs. We note that the total number of NGO partners and projects was sharply reduced over the period, and that the management of their three-year framework agreements was shifted from the Embassy in Kabul to the MFA and Norad in Oslo. This change considerably reduced the management burden at the Embassy in Kabul.
Figure 4 illustrates the distribution of grants between sectors for the 2001–2011 period. Multi-sector assistance, at 34%, is the largest area, followed by emergency response at 22%, and support for government and civil society at 13%.
Looking at the figures for 2011–2014, the following pattern of distribution between sectors emerges (Figure 5).
Figure 5. Total share of grants by sector 2011–2014 (%)
What is evident, although the labelling differs, is a major increase of 22 percentage points (from 34% to 56%) for the area of economic development and trade. This is primarily multi-sector support, plus NOK 50 million for agriculture. There is likewise a substantial reduction in emergency response/assistance of 9 percentage points (from 22% to 13%), although this could partly reflect a shift from direct NGO support to a preference for funding the OCHA Emergency Relief Fund. Other sectors remain fairly stable: there is a slight 2-percentage-point increase for the good governance sector (corresponding to the “government and civil society and conflict prevention, peace and security” category in the 2001–2011 listing), and a small reduction for the education sector (from 4% to 3%), possibly because education was no longer given preference through the ARTF.
We will come back to the strategies and decisions leading to these changes.
We note that from 2013 onwards the Embassy in Kabul increasingly requested advice and input on the management of the development portfolio from Norad, which to a larger extent assisted with evaluations, reviews and project assessments. As a result, the number of external reviews commissioned by the Embassy was lower than in the preceding years.
Management of contracts with Norwegian NGOs was transferred to Norad’s SIVSA department in 2013. In 2014 the remaining contracts (support to trust funds and international NGOs) were transferred to “Seksjon for Tilskuddsforvaltning” (the Section for Grant Management) in the Department for Competence and Resources. This department is part of the MFA’s central administration, not of the regional department with responsibility for Afghanistan. The section can request support from Norad for the follow-up of, for example, ARTF and LOTFA.
Norad and the MFA included reviews and evaluations as a requirement in the NGO framework agreements, with the responsibility for implementation resting with the NGOs. The same applies to ARTF and LOTFA, where their regular review and evaluation systems were to be followed. This arrangement has led to a number of system-wide evaluations of the ARTF, in addition to programme mid-term and end-term evaluations. ARTF-funded programmes (such as the National Solidarity Programme) have undergone a large number of evaluations (including an impact evaluation) in addition to their regular monitoring. Norway, as part of the donor community, also pushed for more targeted evaluations of LOTFA and EQUIP in response to concerns that had been raised, and withheld funding until satisfactory explanations or changes had taken place.
In general, it can be concluded that Norway has ensured that the “regular review/evaluation system” has been in place through the various programme contracts, but with the responsibility for implementation placed on the trust funds and NGOs. Further, these measures have been followed up through Norway’s participation in the various steering and programme committees for the trust funds and specific programmes. When concerns have been raised, or more knowledge has been needed on specific programmes, Norway has also initiated targeted evaluations; this is discussed in greater detail below in connection with Norway’s support to the education sector. In its 2014 “Virksomhetsplan”, the Embassy also requested to make use of Afghan consultants and research institutes to undertake field monitoring and evaluations, as the security situation has made field visits by embassy staff increasingly difficult. This appears to have been done only to a limited extent, although we have noted a Faryab study undertaken by an Afghan organisation.
However, according to one source, there is reason to be concerned because under the present structure and bureaucratic division of contracts between the MFA and Norad in Oslo, it is unclear who can and should initiate or decide on new reviews or evaluations when these go beyond the ongoing management of existing agreements.
[2] Available at https://www.norad.no/en/front/toolspublications/norwegian-aid-statistics/, visited 25.01.2016.
# Assessment of the follow-up to evaluation recommendations
A key document for this review is the 2012 Norad report Evaluation of Norwegian Development Cooperation with Afghanistan 2001–2011 (Report 3/2012). In practical terms, the evaluation only covered activities in the period 2001–2009 (p. xv). The aim of the evaluation was to “…assess the contributions of the Norwegian development assistance to promote socio-economic conditions and sustainable peace through improvements in the capacity of the Afghan state and civil society to provide essential public services” (page iii), applying the OECD/DAC evaluation guidelines. We will here present the main evaluation findings before discussing, first, how they were addressed and followed up by the MFA, the Embassy and Norad, and then how this was done by the various implementers of Norwegian-funded assistance.
A key finding from the evaluation is that Norway’s policy and interventions “match closely the international agenda for Afghanistan and within that framework its development agenda is certainly relevant”. Moreover, “the focus on governance, gender equality, education and community development has been consistent over the years, just as consistent as the choice of channels and partners” (p. 133). The evaluation found alignment with Afghan priorities consistently high on the Norwegian agenda, although there is a concern that it was primarily the international community that defined the Afghan priorities, which limited Afghan participation and ownership. The report points out that since only NGO funds remained earmarked for the Faryab province, with ARTF and EQUIP no longer giving preference to the province, the provincial government feared reduced ownership and a loss of the needed capacity strengthening of the provincial administration.
Another finding is that Norwegian policy towards Afghanistan and the choice of aid channels (trust funds, UN organisations, NGOs and civil society organisations) have been remarkably consistent over the years (which was found to reflect a political consensus across two parliaments). The report noted that “apart from increased funding for ARTF there is remarkably little change over the past decade” (p. 134).
The evaluation made the following observation on predictability and relevance, and the underlying analysis (ibid.):
On the one hand Norway is a good example of predictability of resources and clear commitment to internationally agreed goals and therefore Norwegian assistance is definitely relevant. On the other hand, the use of underlying analysis remains weak and does not seem to inform policy choices, which may weaken the relevance of Norwegian assistance.
The evaluators are, however, of the opinion that “limited administrative capacity (at the Embassy) is one clear reason why policies are weak on the operational side” and that “follow-up on identified risks is not always satisfactory”. While they state that the reasons for such unsatisfactory follow-up are not clear to them, they identified that “pressure to disburse large amounts of funds is a contributor, given the limited staff and effects of the security situation on working conditions” (ibid.).
Turning to the effectiveness of the assistance, their finding is that “in output terms, real achievements can be reported to which Norway had contributed” (ibid.). They refer here to a range of development and governance achievements, such as school enrolment figures of 7 million children by 2010 (of which 37% were girls), which, to place it in context, is up from less than 1 million in 2001 (and very few girls). They, however, go on to conclude that “…there is still limited evidence of concrete outcomes”, except for “improved access to services (such as midwifery) and enhanced pedagogical skills of teachers” (p. 135). There is, according to their assessment, not sufficient evidence to outweigh the fact that “the overall quality of newly constructed schools is poor, literacy remains low and school dropout rates are high, governance remains poor and gender equality is still far from reality.”
Following a discussion on the elusive prospect for a sustainable peace, the evaluation found that “donors, including Norway, made attempts to reduce corruption, but despite all efforts corruption remains endemic and negatively affects the attainment of real outcomes” (ibid.). They go on to identify the “weakness of monitoring and evaluation systems” as “the main reason why there is so little good quality information about outcomes”, arguing that by the start of the century all agencies were so preoccupied with getting activities up and running “that M&E was one of many important design considerations that were sacrificed in favour of speed. Gender was another” (ibid.). This, according to the evaluation team, “meant that virtually no baseline were done and, as M&E gradually improved, there was nothing to measure progress against.” As security declined after 2005, “M&E has become increasingly problematic logistically and insecure for staff to visit project areas”.
The evaluation concluded that “the overwhelming reasons for the limited results is poor governance and corruption.” It goes on to state that “donors have known about, tolerated, and in some cases exacerbated these for many years in spite of simultaneous efforts to bring improvements” (ibid.). This could be a result of “lack of agreement among donors about how to go about state building and governance agendas” (p.136). Turning specifically to the role of Norway (ibid.), the report says:
Although the MFA has systems in place to prevent corruption, and requires its partners to have anti-corruption policies and strategies, these may go some way to minimising, though not eliminating, corruption at the lower level but they have no effect on the far more damaging grand corruption which takes place in some of the ministries. ARTF has not proved able to manage these and the lack of monitoring is a contributory factor. All donors have taken enormous risks which have increased with the increase in budgets.
The evaluation found the assessment of efficiency problematic due to weak M&E and the lack of data, and therefore concluded that “no reliable assessment can be made to compare the efficiency of various aid channels or aid partners” (p. 137). They state, however, that “ARTF as a multi-donor mechanism appears to be a relatively efficient undertaking when viewed from the perspective of fund management and administration”. They moreover draw attention to the Norwegian Embassy in Kabul’s increased management responsibility since 2005 – by 2011 it managed two thirds of the Norwegian aid budget – and observed that “this has created a heavy management burden for an Embassy that is chronically understaffed”. This leads the evaluators to conclude that “the management of such a complex portfolio in a very complex environment has received insufficient attention”, finally stating that “for a portfolio of this size, the human resources at the Embassy are wholly inadequate.”
We can see two trends in this area. One trend, as documented in Figure 6 below, is that the number of agreements (and partners/projects) was gradually reduced from 2007 onwards. The second trend, in Figure 7, is that increasing responsibility for handling the development portfolio was placed at the Embassy in Kabul until 2010.
The evaluation finds sustainability a difficult concept to define in the context of Afghanistan, and therefore does not attempt to substantially address the issue. However, they make the claim that the sustainability of the Norwegian assistance “has not been the most important concern for Norway and has often been sacrificed where higher priority is placed on other objectives” (ibid.).
They find it difficult to identify the impact of the Norwegian assistance but make a more general observation of the situation as of 2011 (ibid.):
Governance has been poor and, by most accounts, is getting worse. It is often cited as a greater threat to the future of the country than security. The local political economy – manifested in corruption and use of patronage networks – has worked against international objectives. Poverty has been reduced for some people but has increased for many, especially in the face of deteriorating security across the whole country. There has been some progress on some of the human development indicators but Afghanistan continues to be one of the very poorest countries in the world with the majority of people illiterate and some of the more extreme forms of gender inequality.
Turning to Norway’s achievements, they argue that these emerge primarily through “being a consistent and reliable donor within the framework of the international engagement”, where Norway has succeeded “to put the principles of harmonization and alignment into practice”. They observe that “Norway has a very good reputation based on its commitment, its consistent and reliable funding and its modest approach. The implication is that the visibility of Norway is not very high”.
Noting that donors in general are rethinking their strategies, and referring to literature that “points in the direction of more focused and better strategies that are based on sound theories of change”, their main conclusion is that “Norway should rethink its strategy and aid programming for future engagement in Afghanistan” (p. 138). However, the evaluation did not provide any suggestion on the direction and content of such a strategy, or on how the aid programming and selection of channels and partners should be changed. Nor did they give any specific recommendations for the development of theories of change, despite their concern over their absence.
Our assessment of the follow-up to the recommendations from the Norad report, including MFA strategies and internal guidelines, can be divided into two parts. One question is how the MFA, the Kabul Embassy and Norad followed up on and operationalized the recommendations. The second is how the implementers of Norwegian development assistance, the Embassy and Norad, either on their own or on advice and follow-up from the MFA, responded to and took on board the recommendations. It should be noted here that several persons interviewed for this review commented that many of the findings in the 2012 Norad evaluation were very general, as were the recommendations, and that they had expressed disagreement with some of them, e.g. that NGOs lacked contextual knowledge. The vagueness of the report made it difficult for the Embassy to identify clear and detailed initiatives to follow up.
Many NGO representatives, on their part, had taken note of the requirement for a baseline to facilitate impact documentation, of the need for better M&E instruments, and for routines to prevent corruption and to ensure the expected outcomes.
The Embassy in Kabul did, as would be expected, develop a follow-up plan for a selected set of recommendations detailing: a) concrete measures, b) who was responsible for implementation, c) the timeframe and d) reporting on progress according to plan.
The document identified four main recommendations for follow-up:
• The urgent need to establish effective routines for follow-up and evaluation of development assistance.
• Clarification of the WB’s Country Assistance Strategy and results framework.
• Increased priority on the strengthening of Sub-National Governance.
• NGOs’ selection of projects and programmes must be based on conflict analysis and knowledge of the local context.
The plan included eleven sub-priorities and a further number of activities, and specified whether the responsibility rested with the Embassy, the MFA (and with which department), or Norad.
The follow-up report was regularly updated and approved by the MFA; the latest one found is from 6 March 2015. At that time it was reported that 11 activities aimed at addressing the recommendations were completed, while nine were still in process (with some having taken longer than anticipated). Further details will follow in the next chapter. However, the follow-up from the Embassy side went further than the recommendations provided in the evaluation report and deeper into the challenges identified. We find reference to (or overlap with) the main recommendations, as well as those made by the Office of the Auditor General of Norway, in the Embassy’s three-year plans (2011–2013 and 2012–2014) and in the annual “Virksomhetsplaner” (2011, 2012, 2013 and 2014).
These documents expose a more detailed context analysis than what was described in the 2012 evaluation, suggest a number of measures to address identified challenges, and provide a realistic assessment of the Embassy’s ability to meet its goals, accompanied by well-argued requests for budget allocations and human resources. One observation, though, is that we do not find any suggestion from the Embassy, or demand from the MFA, to develop a “theory of change” or revise the one constructed by the evaluation team.
From our interviews we can document that there has been consistent follow-up, over time, from the Embassy, the MFA and Norad with their implementing partners on key issues. These have been raised in dialogue and negotiation on framework agreements, in annual meetings and in meetings held at the Kabul Embassy. The discussions have included the need for a) baseline studies, b) anti-corruption strategies and tools, and c) plans and initiatives for monitoring and external evaluations. As far as we can judge from various reports, these issues have consistently been followed up with ARTF (including the commissioning of external studies) and with LOTFA, particularly after the exposure of management and corruption concerns.
An issue that was highly emphasised in the evaluation report and addressed in all Embassy annual plans was the request for increased staffing to handle the Embassy’s development portfolio. This matter will be addressed in the next section.
The NGO partners report extensive work on developing baselines (some doing it jointly with their implementing partners and some involving local communities and government representatives), improving M&E procedures and practices, and introducing or further developing anti-corruption guidelines and measures discussed later in this report. However, it seems that many of these efforts were already recognised and planned for when the report was released, although the report was in some cases a trigger to accelerate these processes.
Management of Norwegian Development Funds
In this chapter, we will analyse and discuss how Norwegian development assistance developed and was managed in the period 2011–2014: what the bases for the adjustments were; how recommendations were followed up on; the interaction with and support for implementing partners; and the involvement in aid coordination. Finally, we develop a comparison with Denmark and Sweden.
Trends, prioritisation, thematic focus and implementing partners
The TOR requests an analysis of trends in the period 2011–2014 in terms of prioritization and selection of thematic focus and implementing partners, as well as an assessment of the degree to which they meet overall Norwegian development goals for Afghanistan. It is important to note here that throughout the period there have been three overarching goals for the Norwegian assistance to Afghanistan:
1) Strengthen Afghan institutions.
2) Contribute to a political settlement.
3) Contribute to sustainable and just development, humanitarian efforts, and promote the governance, human rights and gender equality agendas.
The third development goal had three defined thematic priority areas:
1) Good governance.
2) Education.
3) Rural development.
These priority areas were, as we have been able to establish, chosen based on earlier agreements between donors on how to divide thematic responsibility, as well as on Norway’s continued emphasis on gender, human rights and education. Arguably, support for good governance is essential if Afghan institutions are to be strengthened and gain the confidence of the population. A strong government would likely also be in a better position to ensure a lasting political settlement. Education is a long-term investment; it meets the critical need for more girls (and boys) to be educated in order to take on larger responsibilities in their communities and in Afghan institutions, and is essential for the promotion of the human rights and gender equality agendas. The priority given to rural development is a recognition of the need to provide livelihoods for the 5 million Afghans who have returned from neighbouring countries since 2002, and of the opportunity to support local governance structures, and thereby people’s engagement in governance.
Before getting into specifics, we need to recall the existing strategy, the directions already set and the on-going debates in 2011, as well as the further strategic and practical steps that were taken until the end of 2014. We are specifically drawing on two three-year plans (2011–2013 and 2012–2014) and the Embassy in Kabul’s annual “Virksomhetsplaner” for 2011, 2012, 2013 and 2014, and the MFA’s corresponding “Tildelingsskriv”.
The decisions made with regards to development assistance seem based on three main factors. The first one is the overall direction and aid volume established in the Norwegian National Budget (with Stortingsproposisjon 1, 2010–2011, as the starting point). The second factor is the alignment with international/Afghan driven processes and meetings (such as the London and subsequent Tokyo meetings), and not least the Tokyo Mutual Agreement Framework (TMAF). The third factor is the aim to ensure compliance with specific UN resolutions, such as Resolution 1325.
It is important to note here that the 2011–2013 plan (from 2010) refers to a decision to reduce the number of partners and consolidate the development portfolio, and to ensure regular evaluations of the partners. The plan points out that the number of partners and agreements had been reduced by 50% since 2007. This suggests that the decision to reduce the number of partners was already in place beforehand.
The 2011–2013 plan indicates two overarching directions for Norwegian development assistance. One is to increase the funding channelled through the ARTF, and support the establishment of a Nordic civil society trust fund, while reducing the aid disbursed through the UN and NGO channels. A second direction involves a shift to more non-earmarked funding. It was noted that these changes depended on the Afghan government’s commitment to address corruption. The same plan emphasises concentration on higher education and management of natural resources, in parallel with a continuation of the prioritisation of good governance and education. The Embassy plan signals a continuation of support for the National Solidarity Programme (NSP) and the National Area Based Development Programme (NABDP), support through NGOs for the Faryab province, as well as support for human rights with a reference to a newly developed action plan.
The 2011 “Virksomhetsplan” is in line with the three-year strategy. More emphasis is placed on maintaining a high profile on anti-corruption initiatives, and on strengthening Embassy competence in this field. The Embassy also planned for a higher priority on humanitarian assistance, and consequently participation in UNAMA and OCHA coordination efforts. The Embassy moreover invited the MFA to discuss exchanging one of the Norwegian advisor positions for the recruitment of three national development and security experts, which would also ensure a larger degree of staff continuity at the Embassy.
The 2012 “Virksomhetsplan” maintains the 2011 priorities but notes a delay in what is referred to as the “Kabul process”, posing a challenge to the ARTF agreement. This was the result of the Kabul Bank corruption scandal and of a lack of agreement between Afghanistan and the International Monetary Fund (IMF). More efforts were made to ensure Nordic collaboration on UN Women and on a joint Nordic effort with the United Kingdom to establish a civil society trust fund (Tawanmandi). The Embassy maintained the priority for good governance (including an increase in support for LOTFA), education (including EQUIP) and rural development (including in Faryab). Involvement in the energy sector was put on hold awaiting clarifications from the Afghan ministries involved.
This planning took place in light of an MFA decision to reduce the Norwegian presence in Afghanistan, including a number of staff positions in 2012 and 2013, presumably linked to the reduced military presence. The implication, according to the Embassy, was to reduce the ambitions of being a development policy actor and dialogue partner in Kabul and to drop the engagement in health-related activities. The Embassy maintained its request for the recruitment of an Afghan development expert.
The “Virksomhetsplan 2013” maintained the priority areas for development assistance, but also reported several developments and initiatives that influenced the planning and implementation of the development assistance.
• The first issue is that the reduction of development projects/agreements continued, in order to safeguard sufficient management capacity. By the end of 2012, 25 agreements were to be terminated; efforts were underway to improve the Embassy’s “forvaltningsrutiner” (management routines); recruitment of a new national development expert was planned; and greater use of Norad expertise was contemplated.
• The second one is the priority of and involvement in the “Tokyo Conference” held in mid-2012, including the dialogue processes between the international community and the Afghan government before and after the conference.
• The third issue is the management response to the Norad 2012 evaluation report, a micro risk assessment of the development work, and the planning of a strategy seminar (see below).
• A fourth one is a plan to pay further attention to the coordination of the humanitarian assistance, the development of an Emergency Relief Fund, and continued attention to the corruption allegations against LOTFA.
• The December 2012 strategy seminar is of interest to this review, as it was an attempt by the Embassy (with MFA and Norad participation) to address development challenges identified in the Norad 2012 evaluation report and the Tokyo process. A key issue was how to ensure a policy dialogue with the Afghan government on how to meet (and report on) the targets set of 50% on-budget support and 80% alignment with National Priority Plans (NPPs). A more practical issue was whether part of the management of the development portfolio could be shifted to Oslo, so that the Embassy could (in our translation): “ensure a better follow-up of the development projects, be a distinct development actor in the external debates, use development assistance more effectively, while at the same time aiming to reduce the number of agreements to ensure a more manageable development management.”
The seminar moreover reflects that the Ambassador, who took up his position in September 2012, had a development background, with a special concern for the quality of the development assistance. He had an expressed intention to draw on external resources and expertise (as in Norad) to ensure that Norway met its development goals.[3] This was done over the coming years, with very specific assistance requests made to Norad.
In the “Virksomhetsplan for 2014” we can identify some visible results of the strategy work and the prioritisation made. Although the three priority areas for development assistance remained the same, defined as part of three strategic goals, they appear here slightly extended from the original wording:
1) Contribute to strengthening Afghan institutions so that the country can ensure its own security and development.
2) Contribute to a political settlement, including strengthened regional cooperation.
3) Contribute to sustainable and just development, humanitarian efforts, and the promotion of the governance, human rights and gender equality agendas.
The 2014 plan noted that the civilian coordinator position in Faryab was terminated on 1 September 2013, while an additional position as migration attaché was established at the Embassy from 1 January 2014.
It is evident from reports and interviews that the Embassy allocated substantial resources and time during 2012 and 2013 to ensure planning and implementation of the TMAF in consultation with the Afghan government. These activities included preparations for the Senior Officials meeting in Kabul within the Nordic+ framework, follow-up to the LOTFA, and an active engagement to further education through support and stakeholder dialogue on the ELECT II programme, while also engaging very actively on human and women’s rights issues.
By the end of 2013, the Embassy took a sober look at realities and advised the MFA that uncertainty over the Afghan presidential elections in April 2014, and the drawdown of the International Security Assistance Force (ISAF) by the end of 2014, might lead to a considerable change in the framework for Embassy activities during 2014.
We can therefore conclude that a process was well in place before 2011 to focus and reduce the number of projects and agreements in the development portfolio. It included a strong emphasis on support for dialogue with the Afghan government, and for the development of its capacity “to manage their own development”, while at the same time Norway signalled a will to challenge the government on corruption, gender and human rights issues. Norway was prepared to engage strategically, for example through the ARTF, to fund activities in support of these priorities. Ensuring sufficient Embassy staffing for the handling of a large development portfolio was consistently brought up in the dialogue with the MFA, as was the way in which the tasks and responsibilities for the development portfolio and partners could be divided between Oslo and Kabul/Meymaneh.
The document review identified some main trends with regards to the prioritisation and selection of thematic focus and implementing partners, and to the fulfilment of the overall Norwegian development goals for Afghanistan:
• Adherence to the Norwegian strategy for the development assistance to Afghanistan.
• Adherence to the requirements set in the Tokyo Mutual Accountability Framework (TMAF) to align donor funding with national priorities. Specifically, to ensure that 50% of Norwegian funds were “on budget” and 80% were aligned with the National Priority Programmes. This took considerable time and resources in a dialogue with the Afghan government, within the Nordic + framework and with other donors.
• A deliberate reduction in the number of projects/programmes within the given thematic areas, including the termination of funding to the Afghanistan Sub-National Governance Programme (ASGP) and the exit from a planned energy programme. Priority was given to channelling aid through trust funds (ARTF, LOTFA and Tawanmandi) in order to reduce the management burden at the Embassy/MFA.
• A planned reduction in the number of Norwegian staff handling the development portfolio at the Norwegian Embassy, complemented by an increase in national staff, and the shifting of management responsibility from 2013 onward to the MFA (international NGOs) and Norad (Norwegian NGOs). However, the termination of the Norwegian development counsellor position in Kabul at the end of 2014, and the potential consequences for aid management and coordination/dialogue, is not addressed or discussed in available Embassy plans or in other documents reviewed. Increased security concerns during early 2014 are cited in interviews as a possible reason for the decision to terminate the international development advisor position.
• A continuation of, and no change in, the selection of focus areas and channels, although with a shift of priority between channels, giving higher priority to ARTF and reducing NGO funding.
Our assessment, based both on the document review and the Norad evaluation report, is that both the thematic areas and the implementing partners selected contributed to the Norwegian development priorities set for Afghanistan. The Embassy’s efforts in the TMAF process then helped shape and influence implementation. There is a noted consistency in the three-year and the annual plans in ensuring adherence to these goals, and alignment with (and support for) goals commonly agreed between the Afghan government and the international community – notably the TMAF. The reason behind this consistency is discussed further below.
Extent of follow-up of the 2011 evaluation and internal strategy/plans
The Norad evaluation and its main findings were introduced in the previous section. The TOR for this review asks for a more in-depth assessment of the following points to determine the extent of follow-up to the 2011 evaluation and internal strategies/plans:
• Development of a theory of change of the overall Norwegian contribution.
• Improved contextual analysis, conflict sensitivity and risk mitigation.
• Anti-corruption procedures.
• Monitoring and evaluation systems.
• Internal human resource allocation and administrative capacity.
We noted above that MFA/Norad and the Embassy in Kabul have reported systematically on the actions taken to respond to the recommendations from the 2012 Norad evaluation report.
The Embassy reported on the follow-up on four of the recommendations:
• Establishment of effective routines for monitoring and evaluation of development assistance: A number of initiatives are reported as completed and there are on-going activities at the MFA, Norad and the Embassy. These include dialogue with ARTF and NGOs on how they can strengthen internal routines, and a suggestion from the Embassy to introduce a “supervisory model” as part of the anti-corruption procedures.
• Clarification required on the WB’s country strategy and results framework: It is reported that some activities have been completed, and others are still on-going.
• Give higher priority to the strengthening of district and province administration: All activities are reported as completed, with the exception of the continuous follow-up required on the TMAF.
• NGOs’ selection of programmes and projects must be based on conflict and contextual analysis: The majority of recommendations are reported as implemented while some are on-going, including having NGO partners develop an exit strategy.
ARTF and the partner NGOs interviewed report that they have addressed most of the recommendations under the five focus points, including the preparation of Theories of Change (ToC) (overall and for particular development interventions), although with varying degrees of detail. An example of an NGO-developed ToC is provided in Annex V.
All implementing partners report to have undertaken more extensive baseline studies after 2012. These baselines can thus both constitute a short-term tool for improved monitoring and enable necessary project adjustments. We therefore expect that NGOs over the coming years will provide more detailed and community-verified impact measurements.
All implementing partners report to have M&E mechanisms in place (see separate analysis). There has been a continuous discussion between the MFA/Norad and the Embassy, and their various partners, on how to secure quality of assistance, prevent corruption and document outcomes and impact.
All implementing partners report to undertake contextual analysis and risk mitigation initiatives as an integrated part of their own programme implementation and, for donor NGOs, to ensure it is part of their implementing partners’ planning processes. This is presented in more detail in the NGO review.
Engagement with, support for and evaluation of implementing partners
All NGO staff interviewed, as well as the ARTF, report that the Embassy, MFA and/or Norad have had an active engagement, beyond annual meetings, on programme/project direction and dialogue on how to ensure compliance with the TMAF. The project/programming dialogue has, however, decreased following the withdrawal of the Norwegian Development Counsellor. Regular security meetings taking place at the Embassy are welcomed, as they serve as a venue for information sharing amongst NGOs.
The ARTF regards Norway as an active donor, in particular on thematic issues such as gender, but notes more generally that participation/involvement has decreased over the last year. Reporting of results is, as noted above, part of the dialogue with the NGOs, where they are encouraged to improve M&E routines and activities, and where the possible introduction of a “supervisory agent” is under discussion at the Embassy.
Several of those interviewed suggested that Norway could have taken a more proactive role in initiating independent monitoring and evaluations as the level of insecurity increased and placed restrictions on Embassy staff travels, including in ARTF supported channels/activities. This could have helped to ensure a more systematic verification of results and impact and provide a check on possible mismanagement and corruption throughout the entire development chain. This raises important issues on the development and use of monitoring mechanisms and evaluations in an increasingly challenging security environment, including issues of remote and/or community monitoring – which are a common concern among donors.[4]
There is an emerging experience and literature on these types of community-based monitoring;[5] Integrity Watch Afghanistan (IWA), discussed later in this report, has been a pioneer in this field.[6] We can broadly divide this type of monitoring into two categories. One is the more technical approach using images, whether from satellites, drones or on-the-spot pictures/videos (with location tagging), which can document the physical presence of a development-funded object such as a school, a clinic, a bridge or an irrigation structure. SIGAR has issued a number of reports in which they have tried to locate US-funded infrastructure projects through satellite images and field visits, with mixed results. In several cases the infrastructure existed but the location coordinates were wrong (and several others, including one located in the Mediterranean, probably did not exist). Such images can document the quality of the infrastructure only to a limited degree, and can hardly assess the quality and impact of the activities that take place in, or result from, the structure built.
The latter kind of confirmation requires in-person monitoring, both to complement the remote monitoring and to ensure that the infrastructure meets the planned specifications. On-site verification is also required to confirm that planned activities are taking place for the persons/groups intended, that these meet quality requirements, and that both infrastructure and activities are maintained and sustained over time. A school is a typical example: buildings need to be maintained; teachers with the required qualifications need to be in place; they should be supplied with teaching materials; and the planned number of students needs to obtain the type and quality of teaching required for their age group. Such monitoring can be done through self-monitoring, e.g. by NGOs’, NSP’s and EQUIP’s own monitors, or by external and independent monitors who can review different aspects of the activities against the implementation plan (and over time against the baseline). These can report either to the implementing agency or the donor, or to the community and the local government (though this does not always take place). This type of oversight should be a regular and structured process, but can and should be complemented by unannounced inspections. Some programmes and several of the Norwegian-funded NGOs have community complaints mechanisms that, when activated, should trigger an inspection.
A different type of oversight mechanism is community-based monitoring, where either intended beneficiaries with knowledge of the programme or a hired person in a neighbouring community are tasked to monitor the progress and quality of a programme. This type of mechanism is increasingly used in areas with high insecurity (as are inspections done by Afghan staff), and is ideally complemented with visits from the M&E staff of the implementing agency as well. IWA has learned that training of the monitors is crucial for ensuring accurate monitoring and reporting, as crucial as finding ways to prevent the monitors from coming under pressure from either implementers or influential persons in the community.
Norway’s ability to respond to changing circumstances affecting development assistance
Norway emerges as a very responsive partner to the Government of Afghanistan by ensuring compliance with the TMAF. The efforts put into Nordic+ and the active leadership role there emerge as important. This contribution both ensured a dialogue between donors and government and made Norway a highly relevant policy actor towards other donors, including the US, with a “proactive” adaptation to new realities. We find confirmation of this in the “Virksomhetsplan 2014”, where the planned development activities are grouped (and assessed) according to the strategic objectives. That Norway is given recognition as an active partner by the Government of the Islamic Republic of Afghanistan (GoIRA) is confirmed by a written response from a former Minister from the period under review, in which he stated that: “With adequate justifications, Norway has always made attempts to adapt and display a degree of flexibility in its role as a direct and bi-lateral partner to the GoIRA.”
One question is whether the Norwegian strategy, development priorities and partners should have been adjusted in accordance with the contextual changes. Our opinion is that, on the strategic side, there was no reason to change the overall Norwegian aims of 1) strengthening Afghan institutions, 2) contributing to a political settlement and 3) contributing to sustainable and just development, humanitarian efforts, and the promotion of the governance, human rights and gender equality agendas. Rather, these strategic aims became even more relevant throughout the period.
The second question is whether changing circumstances warranted changes to the development priority areas of good governance, education and rural development. Again, it is our assessment that these priority areas remained highly relevant throughout the period under study, although for changing reasons. The efforts to improve governance are key for the GoIRA’s ability to properly handle their development assistance and provide basic services to their population, and to curb the nepotism and corruption that reduces the value and impact of the assistance. Many of these challenges could only be addressed by and through changes in the GoIRA. Education, and especially education for girls, not only helps to close the existing education gap, but ensures in the longer run a better educated and skilled Afghan workforce. A continuation of the support for rural development acknowledges that the majority of Afghans secure their livelihoods and jobs outside the cities, and assistance might help to stem an extremely high urbanisation rate. While not assessing the quality and impact of each of these interventions, we are of the opinion that they remained valid for meeting the needs on the Afghan side, and that they were in alignment with, and in support of, the achievement of the overall Norwegian strategic goals.
We have mentioned discussions in the Embassy about potentially prioritising other development activities. The energy sector was under consideration for several years, but was in the end decided against for three reasons. The first one was the mixed experience with support for the establishment of the Afghan mining law, and the subsequent bidding process administered by the Ministry of Mines that did not adhere to agreed procedures.[7] A second reason, as reflected in several “virksomhetsplaner”, was the lack of clarity on which Ministry would be responsible for managing the energy sector. And a third reason was the Ambassador’s concern that, given the uncertainty over responsibility and the resources required at the Embassy to see the project through, such a project might not meet the required standard and that it would be difficult to provide quality assurance. Among other projects discussed, according to national staff at the Embassy, was one to map and help increase the water supply in Kabul, given the positive response to the NORPLAN project from several Ministries.
The third question is whether there should have been further changes to aid channels and partners between 2011 and 2014. There was already a deliberate policy in place to reduce the number of partners (primarily NGOs) and projects, and a budget shift towards trust funds and non-earmarked funding. That shifted the management burden and responsibility for M&E to the WB, the UNDP and the British Council (Tawanmandi), who have such mechanisms in place. However, Norway continued funding through a selected number of Norwegian and international NGOs, including support in the Faryab province. It was a secure, but not very innovative policy, and might have missed out on opportunities to develop more Afghan-led development and civil society organisations.
With this caveat, the decision to reduce the number of partners but maintain a diversity of channels appears sound, in the light of the contextual challenges outlined in the previous chapter and of the Embassy’s management capacity. There is, however, a noted concern over how support for Afghan civil society organisations can continue after the termination of the civil society trust fund Tawanmandi in mid-2015.
Norway’s coordination with other donors
The responses we received about this question were mostly positive. Norway is seen as active at a strategic coordination level, in international donor meetings (e.g. Tokyo and the biannual follow-up meetings), in the ARTF steering committee, and not least in the follow-up to LOTFA. Particular importance is placed on the role played in the ARTF thematic sub-committees and in education committees, arenas used to ensure that strategy and policy (in areas like education and gender equality) are turned into practice. Norwegian work on and in the Nordic+ coordination group emerges as highly important and influential in both policy and practical terms. It enabled Norwegian influence on key strategic issues far beyond what could have been achieved otherwise.
At the national level, there was on-going coordination with the GoIRA and with other embassies on a range of activities, including a leading role on AIHRC and involvement in MEC (the latter delegated to the Danish Embassy). During the course of 2015, the co-location with the Danish Embassy has opened up further dialogue and collaboration given the range of common aid channels and implementers.
The Norwegian coordination and engagement practice is outlined in detail in the education chapter, from ARTF strategic engagement, through donor discussions (where the Nordic+ circle gave further leverage) and working groups involving different ministries, to direct project dialogue. What we have been able to establish is that this was not unique to the education sector. The Embassy also prioritised gender and human rights issues.
The Nordic+ circle was established in the mid-2000s as a joint coordination point between Nordic donors, and has since been expanded to other donors depending on the issues addressed. In 2006 Norad developed the document “Nordic Plus: Practical Guide to Delegated Cooperation”, and the collaboration was formalised at a 2008 meeting of the Nordic Foreign Ministers. A Plan of Action for Nordic Cooperation in Afghanistan was adopted by the donors in order to be ”a more concerned partner for the Government of the Islamic Republic of Afghanistan (GIRoA) and the international community.”[8] It stated that
The overall aim of an increased cooperation in the development field is to achieve a stronger impact in sectors of particular importance to the Nordic countries. A more efficient organisation of development work should ease the workload for each country. Furthermore, a strengthened Nordic cooperation would enhance cooperation among donors in general, and strengthen the role of the Afghan government in taking overall responsibility for the development of Afghanistan.
There are few details on the activities of the Nordic+ in the documents made available to the team, but the “Virksomhetsplan 2014” notes that Norway was the lead donor for the first part of 2013, and the Embassy judged the collaboration in the period as “good and constructive”. For 2014 the Embassy noted that Nordic+ developed an extensive collaboration and carried out shared (project/programme) assessments and reviews “to rationalise such types of tasks”. It also indicated that “delegated collaboration is considered where possible”. The Embassy further explains that “delegation of tasks between donors, and a rotating responsibility for process follow-up (TMAF) has demonstrated its effectiveness within the Nordic+ circle. Similar sharing of tasks is also taking place in the education sector and on singular contracts.” (pp. 10-11, our translation).
The interviews confirm the extent of the effort put into Nordic+, and also that it was regarded as very valuable in the dialogue on TMAF with other donors, and particularly by the GoIRA. However, some respondents suggested that the reduction in the number of Norwegian diplomatic staff and the termination of the development advisor position at the Embassy have affected the capacity to attend coordination and thematic meetings, and thereby Norway’s influence on processes and decisions. National staff members, despite their knowledge and trusted positions, will find it harder to be heard in such fora. Such staff will also sometimes be required to consult colleagues or the MFA before stating Norway’s position or committing to or approving changes or suggestions.
The same development is also noted in a response received from a former Afghan Minister. When asked if he had witnessed any change in Norwegian policy and/or practice in the period from 2011 to 2014, the former Minister responded:
Even before 2011, Norway had very clear ideas about a separation between security and social development activities through the highly specific role of its PRT in Faryab province. The largest change experienced in policy or practice over this period was the reduction of a physical presence in Kabul with the Norwegian Mission shrinking in size and reducing its in-country capacity. The main constraint with this became the reliance of Embassy staff on Oslo for finalising any informed decision.
We would like to mention that the Embassy has, as noted above, had an active role in facilitating information exchange with and between the NGO partners through regular meetings at the Embassy. In Faryab, the Development Advisor organised regular coordination meetings between Norwegian-funded NGOs and the provincial authorities, though the practice was discontinued after the position was eliminated. This lack of national/local coordination and information sharing on development and humanitarian activities with the Embassy is in our opinion a negative development, not least because it reduces the Embassy’s (and the NGOs’) contextual knowledge in a rapidly changing security and political environment.
Norway as a donor compared with Sweden and Denmark
There are considerable similarities in the focus and approaches of Denmark, Norway and Sweden, especially after Denmark concluded its direct budget support and presence in the Ministry of Education (discussed later in the report) and channelled their education funding through the ARTF. The three countries have all taken part in and committed to the London and Tokyo processes, and not only signed up to the TMAF but worked actively through the Nordic+ group for the implementation of the framework. There has been a very active and sustained NGO presence in Afghanistan from all three countries, with key individuals in the Karzai and Ghani governments having spent their formative years in one or several of these NGOs.
Denmark’s stated priorities are: 1) economic growth and employment with a focus on the agricultural sector; 2) education; and 3) good governance, democracy and human rights. There is also continued support for: 1) capacity building of the Afghan police; 2) returning refugees and internally displaced persons; and 3) providing humanitarian aid. In addition, women’s rights and opportunities continue to be a priority and Denmark maintains a strong focus on fighting and preventing corruption.[9] Denmark budgeted an average annual support of 530 million DKK for the period 2013-2017.
Sweden has two stated strategic results goals: 1) strengthened democracy and gender equality, greater respect for human rights and freedom from oppression; and 2) better opportunities for people living in poverty to contribute to and benefit from economic growth, and to gain a good education. These goals are designed to respond to the “Five E’s for Afghan Development” announced at the Tokyo conference: 1) Empowerment, 2) Education, 3) Employment, 4) Enterprise, and 5) Economic Integration.[10] Sweden has budgeted a total of 4.87 billion SEK for the period 2014-2019.
In this chapter, however, the aim is to compare structures and practices for managing development assistance. We find more similarities between Norway and Denmark in the way aid delivery is managed, with the MFA in the lead and DANIDA and Norad acting in an advisory and support role. In the case of Norway, the MFA manages at headquarters level the framework agreements with international NGOs and trust funds. In contrast, Sida is more independent from the Swedish MFA, and is mandated to “implement the strategies and manage interventions, (including monitoring and evaluation of results)”.[11] This independence is then reflected in the way the Swedish Embassy is organised and manages its development activities. Sida staff are integrated into the Embassy structure, with the Ambassador as the highest authority. A Swedish embassy is considered a separate entity from the MFA and itself manages the development assistance, although it remains responsible for following and implementing the instructions given by the MFA.
The extent to which development assistance is managed out of the embassy or from the capital also results in differences in the staffing of the respective countries’ embassies. The Danish Embassy has had a dedicated senior diplomat responsible for overseeing development assistance, though she (and her Afghan colleague) have had limited ability to undertake field monitoring. We were informed that the staff member was leaving her position at the end of 2015 and will probably not be replaced, thus leaving the now joint Danish/Norwegian Embassy without a senior international development counsellor.
The Swedish Embassy has had a very different approach even though its projects are implemented in many of the same locations as those of Denmark and Norway—and face the same security challenges. When the development advisor position in Mazar-e-Sharif was eliminated, a new position was instead created at the embassy in Kabul. As a result, the number of international advisors at the embassy has increased from four to five. They work together with one male and one female Afghan advisor. Together they manage the development assistance portfolio and regularly undertake field monitoring, even in areas considered by others to be too high risk. Their assessment is that this practice works well, with specific security assessments being made for each trip to determine when, how, and with whom they travel.
The benefits are, however, not limited to their ability to directly assess their development assistance and to meet with the intended beneficiaries. A further benefit is their capacity to hold meetings with government officials and members of Provincial Councils and Community Development Councils, and to gain contextual and province/district-specific knowledge that international staff at other embassies might not have. It also allows them to see the implementation of other project activities developed through mechanisms funded by Sweden, such as ARTF, which in turn provides valuable information for further engagement with these mechanisms. It should also be noted that it is not only the development staff that undertakes these field visits. The current and previous ambassadors also travel frequently and engage with provincial authorities, NGOs and Afghan civil society groups.
One likely reason for this difference is the clearly stated priority of the Swedish Ambassador, and the direction from the MFA, to have Sweden present in the field to the extent possible. This is also a decision that the Ambassador has the authority to make, as the position is mandated to decide on matters relating to travel and security policies in consultation with their security staff.
We therefore observe the largest difference in the management of development assistance between Sweden and Norway. The responsibility for managing Swedish development aid lies primarily with the Embassy in Kabul, whereas Norway divides this responsibility between (different sections in) the MFA and Norad. Sweden also has a larger team (international and national) based at the Embassy, allowing for more hands-on and contextually grounded management. This also allows them to conduct their own on-the-ground monitoring of Swedish development assistance.
We also see differences in how development assistance was utilised as part of the military engagement. Norway decided on a clear separation between the military and civilian engagement, with MFA staff coordinating the 20% of the Norwegian development assistance earmarked for the Faryab province. One interviewee observed that this left Norway in a better position than other countries when they started planning for withdrawal from the PRT and shifting assistance from the military to civilian management.
Denmark, as part of the UK led PRT in Helmand, had a CIMIC (Civil and Military Cooperation) detachment and civilian advisers from the MFA. These, based on an annual “Helmand Plan”,[12] implemented about 400 small projects with the aim of producing quick and visible results “in areas where civilian organisations are unable to work”. The projects were primarily targeted at education, water supply, health and infrastructure.[13] The tentative budget for the Helmand engagement was 85 million DKK in 2011, 90 million DKK in 2012, and 100 million DKK in 2013, as the PRT prepared to leave and hand over responsibility to local authorities.
Sweden allocated approximately 15-20% of its development cooperation towards the north of the country. These funds were administered by the civilian component of the PRT that Sweden led in Mazar-e-Sharif, which was responsible for the stability of four provinces: Balkh, Jowzjan, Samangan and Sar-e Pul. Aid activities were developed in cooperation between the Swedish embassy in Kabul and Sida development advisers based in Mazar-e Sharif. Decisions on fund allocation were delegated to the embassy in Kabul. The Swedish development advisors at the PRT were part of the Embassy structure and handled development projects by Sida in the northern provinces. When Sweden withdrew from the PRT, the remaining development advisor was relocated to the Embassy in Kabul, to maintain the overall level of staffing in Afghanistan.
[3] For more details see the 2009 Norad report “Strengthening Nordic Development Cooperation in and with Afghanistan”, available at http://www.cmi.no/publications/file/3323-strengthening-nordic-development-cooperation-in.pdf , visited on 15.02.2016
[4] The Danish Afghanistan Strategy 2015-17, available at http://afghanistan.um.dk/en/~/media/Afghanistan/FACT%20SHEET%20The%20Danish%20Afghanistan%20Strategy%202015-17.pdf , visited 21.01.2016.
[5] Results strategy for Sweden’s international development cooperation with Afghanistan 2014 – 2019, available at http://www.regeringen.se/contentassets/c3f71737c5f84cebb8550f61b214ab78/results-strategy-for-swedens-international-development-cooperation-with-afghanistan-20142019 visited 21.01.2016.
[6] Sida: Approaches and Methods, available at http://www.sida.se/English/how-we-work/approaches-and-methods/ visited 21.01.2016.
[7] The 2011–2012 Helmand plan is available at http://www.fmn.dk/temaer/afghanistan/baggrundforindsatsen/Documents/Helmandplan2011_FINAL_web.pdf , visited 13.02.2016.
[8] Based on T. Wimpelmann and A. Strand. 2014. Working with Gender in Rural Afghanistan: Experiences from Norwegian-funded NGO projects. (Norad Evaluations no. 10/2014). Oslo: Norad.
Assessment of NGO activities
Analyzing the above information, we see a diversity of orientations and approaches among the NGOs, as well as across the type of humanitarian and development assistance they provide and its geographical coverage. Activities are primarily within the three priority areas of Norwegian engagement, particularly for rural development but also for the education sector, capacity building and the advocacy part of good governance. The NGOs hold a distinct capacity for responding to and mitigating the outcomes of natural disasters, responding to internal displacements and providing humanitarian assistance. Gender issues are addressed by most NGOs, with some working specifically on human rights.
As expected, we find a concentration of NGOs in the Faryab province, but also a presence in the poorest and least developed provinces (such as Daikondi, Ghor, Uruzgan, Badakshan and Nooristan) and in those areas with a large influx of returnees or concentration of IDPs (such as Nangarhar, Herat and Kabul). While there is some variation depending on their type of activity, all NGOs undertake conflict analysis and have developed risk mitigation plans. Most of them prioritize capacity building, though the extent of inclusion of government staff varies, as does their perspective on whether government staff represent an opportunity for collaboration or a major obstacle to NGO operations. These differences are then reflected in how each NGO aims to build national ownership. The practice of placing NGO technical staff in government offices and inviting government staff to take part in evaluations is among the clearest examples of a long-term capacity building strategy; merely informing them about ongoing activities is just the short-term option.
It is evident from interviews and documents reviewed that most of the development oriented NGOs have performed baseline surveys since 2011, and some of them have devised their own Theory of Change. This puts them in a better position to not only report outcomes and numbers, but to measure impact (at least over time) in accordance with OECD/DAC guidelines:
The positive and negative changes produced by a development intervention, directly or indirectly, intended or unintended. This involves the main impacts and effects resulting from the activity on the local social, economic, environmental and other development indicators.
Some NGOs report on their projects’ impact or state the expected impact of the assistance. However, these exercises consist mostly of isolated examples instead of the systematic assessment of impact that are required to comply with the guidelines.
Although with variations, these NGOs are better prepared to meet and adapt to contextual challenges, document results and address corruption challenges than they were in 2011. They have M&E strategies and procedures in place, as well as anti-corruption policies and regulations, and dedicated staff to do the follow-up. This is reassuring, when compared to challenges identified in other channels. Still, only testing over time can determine how well the systems the various NGOs have in place function, and what results and impact they can subsequently document.
Gender issues were a high priority for Norway over the period studied. We will refer to the main conclusion of a 2014 Norad study of rural Afghanistan that examined the performance of most of the NGOs under study here. One of its findings stands out as particularly relevant for this report (p. v):
The study zoomed in on women’s income generation projects in order to examine the relevance, sustainability, results and promising practices of gender related activities. The review found interesting differences in how projects were conceived and implemented; to what extent they aimed and succeeded in expanding women’s control over the value chain, whether it was possible to mobilize women in small collectives with regular meetings and to what extent women were able to obtain a sustainable income. The findings suggest that organizations should consider whether they could be more strategic, focused and ambitious in their work with women’s economic empowerment.
We agree with the assessment that “gender projects” frequently appear as “tick the women box” projects, with limited planning and ambitions for their results and impact. This is a general criticism of projects targeting women in Afghanistan,[14] but one could have expected more from Norwegian NGO partners. That being said, we would like to emphasize that several innovative projects were identified that deserve credit (not all of them included in the 2014 review that covered the framework agreements with NGOs). These projects helped further women’s economic prospects (as solar engineers and midwives), secured their legal rights (as in the case of returning female refugees), and ensured women’s involvement in peace processes (Midwives for Peace).
[9] LOTFA (2015) Project Summary December 2015, available at http://www.af.undp.org/content/afghanistan/en/home/operations/projects/crisis_prevention_and_recovery/lotfa.html visited 22.02.2016.
[10] For details, see http://www.tawanmandi.org.af visited on 25.01.2016.
[11] British Council webpage, https://www.britishcouncil.org/partner/track-record/tawanmandi visited on 25.01.2016.
[12] The letter is available here http://tawanmandi.org.af/wp-content/uploads/2014/09/Letter-from-Tawanmandi-Donors-28-9-2014.pdf visited on 25.01.2016.
M&E and Anti-Corruption Procedures
Two areas of particular concern raised by the previous review were: i) the ability to document results and impact of the Norwegian development assistance, and ii) the capacity to address and prevent corruption. Recommendations included strengthening procedures and mechanisms in both areas.
Monitoring & Evaluation
The evaluation of Norwegian development cooperation in Afghanistan during the period 2001–2011 noted that M&E had been a weak point largely because Norway, due to the security situation, had to rely largely on the reporting of others. The absence of baseline data also meant that impact was difficult to measure. While this was acknowledged by MFA/Norad, it was also noted that the security situation in Afghanistan continued to deteriorate through 2011–2014—the period of this review. Similar challenges remained, with limited possibilities for staff to carry out monitoring in the field (no visits had been made to Faryab since 2013) and a reliance on implementing partners’ M&E frameworks and reports. With the reduction of staffing at the Embassy and the transfer of responsibility for NGO contracts to Oslo, interviewees noted that this issue became even more of a challenge during the period under review.
This is not a concern unique to Norway, either for the previous period or for the period currently under review. Other donors face similar challenges, although some (e.g. Sweden) have maintained or even increased staffing at their embassies and made efforts to get staff out into the field. This practice shows that field visits are in fact possible, even though the security assessments carried out by the Norwegian Embassy in Kabul do not allow for them. As a result, working to strengthen the M&E and reporting procedures of implementing partners was the main channel to address the findings of the previous review.
As noted earlier in this report, multilateral organizations (primarily ARTF and LOTFA, the main multi-donor trust funds) continued to receive the majority (55%) of the Norwegian development funds to Afghanistan. While this means that strengthening the M&E and reporting of these organizations is critical for tracking the impact of Norwegian funds, it is also the area where Norway has the least capacity to effect change on its own.
However, during the period reported, an external review of ARTF was carried out, finding that more attention needed to be placed on strengthening M&E and reporting, as well as on providing gender disaggregated data. This has allowed Norway, as part of the ARTF Strategy Group, to have input into the process. These efforts have resulted in improved reporting on results, as well as in the development of an ARTF results matrix (launched in 2015), which provides at least some baselines against which progress can be measured, and some gender disaggregated indicators.
During the period under review, emerging concerns about mismanagement in LOTFA (which came to a head in 2012) also presented an opportunity for Norway—as one of the main contributors to the trust fund—to exercise pressure for the strengthening of M&E and reporting mechanisms. While baselines remained weak, Norway, together with other donors, was heard, with more emphasis being placed by UNDP on strengthening M&E procedures and providing more adequate reporting.
The strengthening of M&E frameworks and reporting has also been a major emphasis in the dialogue with NGO implementing partners, with continued discussions between MFA/Norad and the NGOs in relation to their framework agreements at the annual meetings. While it was mentioned that baseline data is desired for all new project agreements, it was also recognized that, given the security situation, baseline data may not be available in all cases, and that it would likely be absent for emergency/humanitarian assistance. As such, the absence of baseline data does not have to mean that a project is automatically disqualified.
This emphasis on results reporting appears to have borne fruit during the period under review, with all NGO implementing partners working towards establishing more robust M&E frameworks (with innovative ways of monitoring being adopted, such as documenting impact with digital cameras or community reporting), improved reporting procedures, and the establishment of baselines against which to measure results. While these efforts are still ongoing, a majority of NGO implementing partners now have these mechanisms in place, and have frequently established a baseline for their projects. In some cases, partners are also moving towards developing theories of change (see Annex V for NAC’s theory of change) to guide their activities. These efforts go some way towards addressing the concern that there is too much reliance on reporting from implementing partners due to the inability to independently verify the impacts claimed.
Overall, interviews with NGO implementing partners indicate that signalling from Norway during the period under review, together with an overall trend towards a greater emphasis on M&E and impact reporting, has provided a push for them to invest more in this area. Similarly, it seems that Norway has been able to seize the opportunities presented to push for change also within the major multi-donor trust funds. While there is still scope for improvement, it does seem that MFA/Norad and the Embassy in Kabul have made a considerable effort to improve M&E frameworks and reporting on results, taking into account a very challenging context.
Anti-corruption procedures
Corruption remains a major concern in Afghanistan, threatening long-term development and stability. The Afghan government and the international community have repeatedly affirmed that addressing corruption is a key priority. Through interviews and the review of the relevant documentation, we are satisfied that the risk/threat that corruption poses to Norwegian development cooperation is recognized by staff within MFA/Norad and at the Embassy in Kabul.
Overall, in a very difficult context, the MFA/Norad and the Embassy in Kabul have by and large taken the measures that could be taken to safeguard against the misuse of Norwegian funds. Despite a potentially higher risk of corruption than in other partner countries, the Embassy actually had fewer means at its disposal to adopt safeguards than would have been the case in a more regular development context (e.g. field visits and on-site monitoring).
The overall framework for mitigating corruption risks in Norwegian development cooperation, along with a zero-tolerance policy on corruption, also applies in Afghanistan. This framework provides clear guidelines for how to address corruption allegations, including channels for reporting to the relevant units at HQ level. However, given the difficulties of carrying out monitoring in the field and a decrease in staffing at the Embassy in Kabul, the review of documentation from implementing partners increasingly became the main means of identifying cases of corruption during the period under review. The value of a zero-tolerance policy in a high-corruption context was also a concern raised. The risk is that partners may not report suspected cases of corruption in order to avoid having their funding discontinued—a situation that NAC experienced during the period. This concern was confirmed by several interviewees.
While corruption, when uncovered, should never be tolerated, cutting funding may not always be the most appropriate response if it ends up jeopardizing the implementation of critical development activities. Instead, applying the principle of proportionality would be desirable and allow for, to the extent possible, continued development efforts while working to prevent further cases of corruption.
Proportionality, in this case, entails adopting an approach that is appropriate given the scale of corruption encountered. For example, if an implementing partner staff member is found to have embezzled funds, cutting funding completely would be a disproportionate response. Requesting the organization to sanction the staff member and put in place better safeguards, while continuing to implement development activities, would protect development funds in the future without having a negative impact on implementation. That said, this would also require close follow-up with the implementing partner to ensure that actions are taken. This, in turn, would likely require sufficient staffing at the Embassy to follow up in a timely manner.
At the same time, the number of projects/implementing partners was reduced during the period, in an effort to minimize the risk of corruption and increase oversight. This meant that Norwegian aid was channelled through fewer organizations, with stringent due diligence carried out prior to entering into a funding agreement. All contracts included an anti-corruption clause, and implementing partners (including NGOs) are expected to have in place adequate safeguards against corruption, including complaints mechanisms.
The review of the NGOs that Norway is working with (see section on “Review of NGOs and their activities” above) shows that all of them have either put in place, or are in the process of putting in place, specific policies and mechanisms for mitigating corruption. The effectiveness of these policies and mechanisms is, however, difficult to determine without a more in-depth assessment of the actual systems in place and of the capacity of the staff responsible for putting them into effect. We do, however, share the view of key informants that over the period reviewed adequate controls appear to have been put in place by the NGOs to safeguard Norwegian/donor funds. That said, more could be done to support NGOs in strengthening their internal systems and building the capacity of their staff, to further increase confidence in the safeguards they have put in place.
With the major multi-donor trust funds (e.g. ARTF and LOTFA), Norway has relied on adequate safeguards having been put in place by the administrative agent (WB for the ARTF and UNDP for LOTFA). In cases where this assumption has not held (e.g. in the case of LOTFA), Norway acted together with other development partners to seek to strengthen control mechanisms.
There is greater uncertainty as to the controls in place to prevent corruption once funds enter Afghan government systems or reach out into the field. This of course is beyond the capacity of Norway to address, but the Embassy worked with other development partners during the period to keep the issue high on the agenda, and supported projects/interventions aimed at strengthening Afghan government systems. This however remained a difficult undertaking given the perception of a lack of commitment to genuinely tackle corruption on the part of the Afghan government.
Case studies
We will present three brief case studies in this section to reflect the Norwegian priority areas of education, rural development and good governance. In these case studies, we have put the emphasis on reviewing the support for anti-corruption efforts.
Education
The GoIRA decided early on to prioritise education and to assume responsibility for the sector itself, rather than outsourcing implementation to NGOs (as was done with the NSP). The new Afghan Constitution, approved in 2004, states that “education is the right of all citizens of Afghanistan”. NGOs and private companies were allowed to build schools and provide teacher training and vocational/specialized training. In the field of higher education, private universities and institutes were allowed to be established, in parallel to the strengthening of the public universities. Finance Minister Ghani demanded direct budget support for the Ministry of Education (MoE), but most donors were reluctant and preferred to channel the funds through the ARTF. Denmark was an exception in this matter, as we will discuss below.
This led to the establishment of the Education Quality Improvement Project (EQUIP) in 2004, under the ARTF, with an EQUIP Coordination Unit tasked to coordinate the ARTF/WB support within the MoE and to liaise between the MoE and the WB. Key donors have been Australia, Canada, Germany, the Netherlands, Norway, Spain, Sweden and USA.
The budget for EQUIP I at start-up, in 2004, was US$79 million, targeting 26 provinces. EQUIP II started in 2008, with initial funding of US$188 million and a subsequent cost extension of US$250 million. Planning for phase III is underway. Support was temporarily suspended in 2011 due to a critical Mid Term Evaluation, but continued later, according to the Norad 2012 evaluation (p. 5), after WB and MoE interventions addressed most of the concerns.

Between 2003 and 2015, Denmark provided direct support to the MoE through its Education Support Program to Afghanistan (ESPA). It aimed to secure greater sustainability and ownership by the Afghan government, and included secondment of staff to the Ministry. The initial expectation on the Danish side was that other donors would join them in the initiative. As that did not happen, they were left with a heavy administrative burden, “especially as the MoE was rather fragmented in their structure, had limited capacity, and faced challenges related to corruption during the implementation.”[20] Denmark therefore decided to channel its education support through the ARTF from 2015.

At the same time, in 2011 Afghanistan joined the Global Partnership for Education (GPE) (supported and funded by Norway), and the MoE received a three-year grant of US$55,7 million, starting in 2013. The GPE aim was to address some of the inequalities in the education system by targeting provinces that were “insecure, underserved, difficult to access, and have the lowest education and economic factors”. The project, which has established its own GPE Coordination Unit in the MFA, places great emphasis on ownership and social mobilisation through community and parent involvement, and on local teacher recruitment.
While the EQUIP programme has been evaluated, so far there is no independent assessment of the GPE assistance, beyond a noticeable concern among donors over a slow start, a lack of coordination, and the risk of it ending up as a separate programme rather than an integrated part of the Afghan education system.
The Norwegian education support between 2001 and 2012 was primarily provided through EQUIP, but also through support for the WB’s Vocational Education and Training, UNICEF for basic education and literacy, UNESCO for educational planning, and later the NAC, NRC and FOKUS for basic, vocational and health sector training. With the termination of the priority given to education through the ARTF, only NGO support is registered for the 2011–2014 period. Norad aid statistics show that the NRC received most of the support, a total of NOK 93 million, for its basic education project.
However, a case study is too narrow a lens for reviewing the NGO support for the period 2011–2014, since Norway has given substantial support to the education sector over time and has been engaged in several coordination efforts. We will therefore provide a more general assessment of the Norwegian support to education, and its outcomes and impacts, drawing primarily on the research for an MFA-funded study (2015).
The results are impressive: from fewer than 1 million children in school in 2001 (with a low proportion of girls), the MoE estimated that by 2015 the number of students had reached 8,35 million (39% of them girls) in primary, lower secondary and upper secondary government schools, including Islamic schooling. The school-aged population is 10,33 million. However, 3,3 million children, the majority of them girls, are still out of school. There is a common concern over the reliability of the data and numbers provided by the MoE: numbers are not independently verified, and some students remain in the system for several years even after they have dropped out. Nevertheless, the figure of 8,35 million students used here is regarded by the statistics department of the MoE, the WB and EQUIP as fairly reliable, and is far below numbers quoted on several occasions by the previous Minister of Education.
There is a major lack of equity in the Afghan education system, measured by gender, geographic location, and language. Afghanistan has the highest level of gender disparity in primary education in the world, with only 71 girls in primary school for every 100 boys.[2] Only 21% of girls complete primary school, with important cultural barriers (such as early marriages) and a lack of female teachers as two of the main obstacles (GoIRA 2015).
There is also a major difference in enrolment in primary education between rural and urban areas. The Education Inequality profile for Afghanistan[3] shows that 58% of boys and 52% of girls in urban areas attend school, while in rural areas only 41% of boys and 28% of girls do. To further highlight the gender and geographical disparities, 80% of the richest boys in urban areas completed primary school in 2011, while the same was true for only 4% of the poorest girls living in rural areas.
Numerous supply-side bottlenecks have been identified, including insecurity, limited human resources, infrastructure, qualified teachers, teacher training and teaching materials, while demand-side issues include economic factors, cultural barriers, and governance and capacity.
The limited capacity of and within the MoE to handle and report on progress is itself a barrier, especially in provinces that have not had the same attention, support and allocation of advisors as the ministry in Kabul.
As expected, given the major overhaul the education sector has been through (including the development of curriculums, the printing and distribution of textbooks, the training of teachers and the task of building schools), it has taken time to increase the quality of education and to measure the quality of education and learning. The GoIRA points out that “…by most standards, the education quality in Afghanistan is very low. Learning outcomes are generally poor. A few sample studies suggest that about less than half of the children are able to meet the minimum required learning outcome at their level of study.” Furthermore, the GoIRA found that for technical training “most of the education is theoretical and of very little practical value” (GoIRA 2015).
However, the lack of evidence on results is not as bleak as was painted by the 2012 Norad evaluation. The first learning assessment for the Class 6 level released in 2015[4] stated “…while there are small numbers of Class 6 students operating at the higher level of proficiency in each of the domains of reading, writing and mathematical literacy, there are substantial proportion of the population who are not able to perform simple reading, writing and mathematical tasks”. A comparison with three peer countries in the region indicates that their Class 4 students are performing at a similar or a higher level than Class 6 students in Afghanistan.
The assessment suggests that “what is needed is a focus on the quality of teaching, both through policy and planning in the wider level, and through the professional practice of individual teachers in classrooms.”
Interviews for the education report, including a number of donors, MFA officials, EQUIP and ARTF staff, established that the MoE and donors agree to continue to channel funding for education through the ARTF, as “this has proved a trusted mechanism that ensures a fair degree of influence and prioritization from the Afghan government” (p. 13).
The report noted that
While there have been major achievements in Afghanistan since 2001, there is still concern over the quality of education. There is a broad recognition that more funding alone, if available, would not ensure quality education for all. Rather, quality education depends on a number of factors that must be addressed in parallel, and should be included in the new education strategy for 2015–2020, in the planning for EQUIP III, and any extension of the GPE.
The recommendations were to: a) start with the teachers and build their skills, allowing NGOs to play a larger role; b) strengthen the MoE, and in particular its data collection/verification and coordination efforts, for better planning and management of the education assistance; and c) in addition, build a domestic resource and support base, including community and parent involvement, and request support from the private sector.
Norway, albeit not the largest donor, has played a major role in developing the Afghan education sector. Its involvement has gone beyond funding: both donors and the MoE refer to a very active engagement of staff at the Norwegian Embassy in the various coordination bodies and in EQUIP and ARTF fora, pushing for education, and in particular girls’ education, to be prioritized. From 2011 to 2014, NGO activities complemented those of the GoIRA, with emphasis on improved teacher training to raise the quality of education, vocational training to ease access to the job market, and literacy, numeracy and life-skills training for the many who have not gained any basic education skills.
A question that emerges is whether quantity—and especially girls in school—trumped quality, and whether Norway and other donors could have done more to ensure the quality of education. There are two observations to be made in this regard, both indicating that there was an awareness at the Embassy of the prevailing challenges in the education sector, and due attention paid to this concern even after earmarking for education through the ARTF ended. One is the diversity in funding for the education sector, including support for teacher training, vocational training and school building through the NGOs. More important, and emphasized by the Ministry of Education in an interview, is the scale of the Norwegian strategic and policy engagement during the period 2010–2013. This engagement included participation from the Embassy in Kabul in the following bodies and fora:
• ARTF’s strategy-group and council.
• The Human Resource Development Board (HRDB), a collaborating body between donors and four Afghan ministries.
• The Education Coordination Committee (ECC), an advisory body for primary education.
• A donor working group on primary education.
• A working group for donors to the EQUIP project.
Further,
• Norway was the funding team leader that coordinated and reported on Afghanistan’s first joint education sector review in 2012.
• The Embassy was the donor focal point for development of the National Priority Programme (NPP) for higher education.
• Norway was the only donor with an observer in the EQUIP Implementation Support Mission during autumn 2013.[5]
This engagement was complemented by Norway’s international support to the GPE, including the prominent position of Rohana Ghani, wife of the Afghan president, at the 2015 Education for Development Summit. This involvement represents a major contribution to the development of the Afghan education sector. Still, it requires close follow-up and monitoring of ARTF and EQUIP funding to counter concerns about corruption and inflated student numbers, and continued attention to quality.
Rural Development
Afghanistan remains a primarily rural country, where the majority of the population secure their livelihoods and income from agricultural activities, services and trade—including trade in illegal substances. The wars between 1979 and 2001 destroyed much of the traditional irrigation systems, roads and production facilities. Conflict and migration led to a general neglect of agriculture, while a lack of research and trials reduced the quality and quantity of agricultural products. The lack of income opportunities led many young men to join armed groups, or to seek job opportunities in neighbouring or Gulf countries.
Rural development was therefore high on the agenda back in 2002, and there was overwhelming donor support for the National Solidarity Programme (NSP) when it was introduced in 2003. It was modelled on WB experiences in Indonesia paired with UNHABITAT experience from Afghanistan, and placed under the responsibility of the Ministry of Rural Rehabilitation and Development (MRRD). According to its webpage,
The National Solidarity Programme (NSP) was created by the Government of Afghanistan to develop the ability of Afghan communities to identify, plan, manage and monitor their own development projects. Through the promotion of good local governance, NSP works to empower rural communities to make decisions affecting their own lives and livelihoods.
The programme is the primary vehicle used to promote rural development in Afghanistan. Empowered rural communities collectively contribute to increased human security. NSP lays the foundation for a sustainable form of inclusive local governance, rural reconstruction, and poverty alleviation.
NSP had, at that time, an important governance component/function. It introduced ballot elections for the establishment of the village based Community Development Councils (CDC), in advance of parliamentary and presidential elections. The CDCs were introduced as the lowest level of the Afghan governance system.[25]
One reason for the quick establishment and rapid results of the NSP was that experienced NGOs (Afghan and international) were assigned as facilitating partners for the different provinces. These partners a) facilitated the CDC elections; b) assisted the villages in the selection of their community development project (within a given economic frame); and c) assisted with the implementation of and reporting on the project. Later on, female CDCs were introduced, then District Development Councils (DDCs), and finally the concept of Cluster CDCs for managing projects covering larger areas.
It is important to note that the MRRD also introduced a number of other rural programmes, including the Norwegian funded National Area-Based Development Programme (NABDP) and the National Rural Water Supply, Sanitation Irrigation Programme (Ru-WatSIP). Norway, moreover, supported the private enterprise NORPLAN to develop documentation of Afghan hydrogeology (including that of Faryab), which has been useful for several Ministries.[26] The MRRD drew extensively on the NGOs when recruiting their staff, not least on NAC and NCA and their partner NGOs.
NSP provides results for the three phases of the programme; Phase III has been extended into 2016. The overview presented in Figure 8 provides the most up-to-date information. The total NSP budget from 2003 to September 2015 (excluding community contributions) was US$2,5 billion.[27]

Figure 8: NSP Progress 2003–2015, as of 21 June 2015 (Source: NSP). Key indicators:
• # of communities with CDCs elected: 3.439
• # of communities financed (at least partially): 33.809
• # of communities with full 1st block grant utilization: 27.583
• # of sub-projects financed (at least partially): 87.133
• # of sub-projects completed: 66.133
• Block grants committed (US$ million): 1.595
• Block grants disbursed (US$ million): 1.563
• # of male CDC members: 294.142
• # of female CDC members: 151.891

It should moreover be mentioned that, during the period under review, Norway continued to provide support for rural development through NGOs (and NCA partners) with an integrated approach to their development projects, frequently collaborating with or implementing projects through the CDCs.[28] The NGOs have reported a range of outputs so far, and further documentation of impact can be expected as they start to measure against their established baselines. An important reflection from the reports reviewed is that the investment in the development of community organisations, and their involvement in development activities, has enabled a better and far more rapid local disaster-risk response, especially where these organisations have pre-stocked emergency equipment or cash has been readily available.

Since 2011, Norway has provided support for the UN Food and Agriculture Organization (FAO) project “Promoting Integrated Pest Management in Afghanistan”. The project has, according to FAO, led to a major increase in crops and production, with a significant and sustained increase in farmer income.[29]

NSP is one of the most frequently evaluated projects in Afghanistan, due to high donor interest and the level of funding, and efforts have been made to bring the programme in line with the recommendations provided. The 2012 Norad evaluation mentions early results from an impact assessment, which provided impact indicators for some objectives, though less for others (p. 56). The final report was released in 2013 and contains the most substantive evaluation of development impact that we have identified through this review.
The report concluded that:

NSP-funded utilities projects deliver substantial increases in access to drinking water and electricity, but infrastructure projects are less effective. As a consequence, NSP has limited impacts on long-term economic outcomes such as consumption or asset ownership. Project implementation and the accompanying infusion of block grant resources do, though, deliver a short-term economic boost. This stimulus also improves villagers’ perceptions of central and sub-national government, as well as of allied actors such as NGOs and ISAF soldiers. However, the impact of NSP on perceptions of government weakens considerably following project completion, which suggests that government legitimacy is dependent on the regular provision of public goods and/or interaction with service providers.[11]

They also conclude, however, that the “creation of CDCs by NSP has few durable impacts on the identity or affiliation of de facto village leaders”. A more important change is that, apparently, “the mandating of female participation by NSP—and the consequent female participation in project implementation—results in increased male acceptance of female participation in public life and broad-based improvements in women’s lives, encompassing increases in participation in local governance, access to counselling, and mobility.” And, “these and other economic, institutional, and social impacts of NSP further drive increases in girls’ school attendance and in women’s access to medical services, as well as improved economic perceptions and optimism among women in NSP villages.”

This is in itself a remarkable result, and one of the few solidly documented impacts of the development assistance. It is therefore of interest for further Norwegian rural development engagement that President Ghani in November 2015 announced that NSP will be replaced by a “National Citizen Charter Program”.
In his words, in a speech to CDC leaders, the objective is “to execute overall government programs at village level through a single mechanism that is called NSP/National Citizen Charter Program. Our objective is to provide overall Afghan rural communities with equal essential services in upcoming four years.”[12] Donors and NGOs that had been involved in the discussion of a concept note for the charter programme were uncertain where the initiative would lead, whether it would influence the CDCs’ role in the governance structure, and whether it would continue to attract public and donor support. Still, it might address the recognized lack of cooperation and coordination between ministries – if they accept to be led by anyone other than themselves.

Considering the entire 2001–2014 period, there is no doubt that the international and Norwegian support for rural development, and NSP in particular, has yielded extensive results and some documented impacts. These results are likely to hold major influence over the further development of Afghanistan, not least when it comes to women’s roles and development opportunities. The degree of community mobilization and engagement that has taken place is in itself a major step forward, and a major strengthening of Afghan civil society at the village and community level. There has been a noticeable concern about possible corruption in such a large programme, as well as suggestions for more active ARTF and external monitoring and verification of the numbers of beneficiaries and projects. Still, there are indications that properly community-managed projects are better insulated against corruption than large infrastructure projects, which may have ensured a high utilization of funds in this sector.
The real challenge now is how the investments made can be secured, and how the ongoing need for community development (to increase food production, strengthen the rural economy and generate much-needed jobs) can be met under the new NSP/National Citizen Charter Program—with a government struggling to get its act together and ministries reluctant to collaborate.

Good Governance

The fight against corruption has consistently been at the forefront of the good governance agenda in Afghanistan. While progress on tackling corruption has arguably been limited, there have been a few successful initiatives, including the establishment of Integrity Watch Afghanistan (IWA), an Afghan civil society organization committed to increasing transparency, accountability, and integrity in Afghanistan. IWA was established as an independent civil society organization in 2006, and shortly afterwards the Embassy in Kabul decided to provide the newly established organization with core funding—the first donor to do so. The funding decision was in large part a response to the dialogue between the development advisor at the Norwegian Embassy and the founders of the organization, and to the recognition of IWA’s potential impact. During the period 2009–2011, Norwegian core funding totalled US$971.795.
From humble beginnings, IWA has grown to an organization with approximately 90 staff members and 700 volunteers, with head offices in Kabul. IWA has provincial programmatic outreach in Badakhshan, Balkh, Bamyan, Herat, Kabul, Kapisa, Logar, Nangarhar, Parwan, Panjshir, Samangan and Wardak. IWA focuses its activities in three main areas: 1) community monitoring; 2) research; and 3) advocacy. In the area of community monitoring, IWA works through four program pillars: 1) community based monitoring; 2) public service monitoring; 3) extractive industries monitoring; and 4) community trial monitoring. IWA’s work during the period of review has been significant in providing an evidence base for advocacy efforts, and in piloting successful community monitoring tools which are currently being scaled up. Many of the approaches adopted by IWA are not only innovative in the Afghan context, but also globally.
Norway’s decision to provide IWA with core funding allowed the organization to find its own focus and establish itself as a credible voice in the fight against corruption, instead of being driven by donor funding to carry out specific projects. It also provided IWA with the financial stability to bridge the period between its establishment and the development of capacity to attract funding from other donors.
However, with the establishment of the Tawanmandi trust fund, Norway ended its core funding to IWA in 2012. Instead, Norwegian funding to IWA was to be channelled through Tawanmandi. The trust fund did not, however, prove to be sufficiently flexible or quick in responding to funding requests, as described above. Funding delays, along with insufficient fiduciary controls within IWA, led to a difficult budget situation in 2013, which could potentially have jeopardized the future of the organization. Quick internal action within IWA and additional funding provided by donors (in particular Sida) allowed IWA to balance its budget in 2014 and ensure its continued survival.
IWA is a good example of a case where MFA/Norad staff based in Kabul were able to identify an opportunity and, with the flexibility provided through Norwegian development assistance, take a calculated risk in supporting a newly established CSO. Without this support, it is unlikely that IWA would have flourished and developed into the organization that it is today. This will remain a lasting legacy of Norway’s support to the anti-corruption efforts in Afghanistan. Conversely, channelling funding to civil society through a trust fund such as Tawanmandi does not provide a similar degree of flexibility, and risks undermining the ability of Norwegian aid to have similar catalytic impact in the future.
Norwegian development assistance in Afghanistan 2011-14 and the results
In this section, we will provide a schematic presentation of the NGOs and trust funds in light of the main ToR questions, and discuss the extent to which they report on and can document short and long term results and impact.
Review of NGOs and their activities
Based on the ToR, we selected a number of NGOs and requested their reports, monitoring and evaluation materials, and anti-corruption guidelines. We were able to interview members of the NGOs in Kabul (except the Aga Khan Foundation, which did not respond to our requests), while the Norwegian-based NGOs were also interviewed in Oslo. Basic information about them is included in Annex III; the NGOs selected are the following:
• Aga Khan Foundation (AKF)
• Agency for Technical Cooperation and Development (ACTED)
• Danish Committee for Aid to Afghan Refugees (DACAAR)
• Integrity Watch Afghanistan (IWA)
• Norwegian Afghanistan Committee (NAC)
• Norwegian Church Aid (NCA)
• Norwegian Refugee Council (NRC)
• Norwegian Red Cross (NORCROSS)
There are some important differences between these NGOs to be kept in mind. NCA and NORCROSS both work with, and implement projects through, Afghan NGOs, the Afghan Red Crescent and civil society groups. The other NGOs listed primarily implement their own projects, but collaborate to varying degrees with local communities (including Community Development Councils and similar bodies) and/or national, district and local authorities.
NAC, NCA and NRC (the latter with a break between 1994 and 2002) have had a sustained presence in Pakistan/Afghanistan since the early 1980s, later joined by NORCROSS. Starting in 2002 the Embassy in Kabul provided support for AKF, and later for IWA, as part of support for anti-corruption initiatives. ACTED and DACAAR were partners for the Norwegian development support in Faryab.
All these partners are well recognized NGOs and have substantial additional funding from a range of donors. ACTED, AKF, DACAAR and four of NCA’s partner NGOs are facilitating partners for the National Solidarity Programme.
Aga Khan Foundation (AKF)
Development orientation with strong beneficiary involvement. Projects on 1) human and institutional development, 2) professional development, 3) public health promotion, 4) culture and tourism promotion, 5) alternative energies, 6) maternal and child health, and 7) “Light up Bamyan”.
Agency for Technical Cooperation and Development (ACTED)
Rural development and humanitarian assistance orientation. In Faryab they have implemented projects to 1) improve natural, human, social and physical capital, 2) improve the economic potential of excluded groups, and 3) improve governance.
Danish Committee for Aid to Afghan Refugees (DACAAR)
Rural development organisation, expertise on Water Sanitation and Hygiene (WASH). Faryab programme includes 1) Rural development activities aimed at: a) reduced household vulnerability and b) reduced female vulnerability for socio-economic risks/stress, and 2) WASH activities aimed at: a) capacity building for technical/management skills; b) groundwater monitoring and c) access to clean water.
Integrity Watch Afghanistan (IWA)
National NGO with anti-corruption expertise, projects on 1) public service monitoring; 2) community based aid monitoring; 3) extractive industries monitoring; 4) budget tracking; and 5) community trial monitoring.
Norwegian Afghanistan Committee (NAC)
Integrated community development and disaster reduction/mitigation. Projects include teacher education, midwife education, rural public health, natural resource management, disaster risk reduction and advocacy work.
Norwegian Church Aid (NCA)
Donor NGO working through/with Afghan NGOs and civil society organisations. NCA supports a broad range of rural development activities, including solar energy and female empowerment, and provides support for advocacy work. Peacebuilding is an integrated part of their projects.
Norwegian Refugee Council (NRC)
Humanitarian NGO with refugee/IDP focus, including a) humanitarian assistance; b) education and information activities; and c) legal advice (ICLA).
Norwegian Red Cross (NORCROSS)
Humanitarian NGO with partner and network focus, providing support and mentoring for ARCS’s organizational, logistical and anti-corruption development and for its gender department; and, through the RCRC network, organizational development, health programmes as well as support for the Kabul ambulance service.
[14] Afghan Research and Evaluation Unit. 2013. Women’s Economic Empowerment in Afghanistan, 2002-2012: Information Mapping and Situation Analysis. Kabul, AREU.
Review of Trust Funds
Afghan Reconstruction Trust Fund
The ARTF was established in 2002 to provide a coordinated financing mechanism for the Government of Afghanistan's budget and priority national investment projects. The fund is administered by the WB and supported by 34 donors. The trust fund is the largest single source of on-budget financing in the country. ARTF grants support the Government of Afghanistan’s operational budget (recurring costs) and 21 programmes, including education, agriculture, rural development, health, social development, infrastructure and governance. ARTF is the main channel used by Norway to support the priorities set by the Afghan government. In total, Norway has contributed US$395,656,635 to ARTF. The ARTF reports on its results in the annual ARTF Scorecards.[15] It reported the following accumulated total results by 2015:
• Direct ARTF beneficiaries: 8.7 million (38% female), in addition to 27 million beneficiaries from NSP (48.5% female).
• Education: 8.2 million children.
• Electricity: 4.5 million beneficiaries.
• Roads: 13.6 million beneficiaries.
• Water and Sanitation: 10 million beneficiaries.
• Employment: 4,000 Enterprise Group members, 2,200 graduates from the National Institute of Management and Administration.
• Short term employment: 59 million labour days.
• Savings and Enterprise support: 69,500 beneficiaries.
• Agricultural and/or irrigation services: 10 million beneficiaries.
A stocktaking of the ARTF carried out in 2012 found that the ARTF remains the mechanism of choice for on-budget funding, with low overhead/transaction costs, excellent transparency and high accountability, and provides a well-functioning arena for policy debate and consensus creation. That being said, the stocktaking also found shortcomings in terms of the reporting of results (discussed below in the section on M&E).
Norway played an active role in the Strategy Group and the Steering Group, in guiding the efforts to strengthen the reporting mechanism, including the selection of indicators. During the period, Norway was also a driving force behind ensuring that gender was fully considered under the ARTF.
Law and Order Trust Fund
LOTFA was established in 2002 with the aim of covering “all reasonable costs associated with the start-up and operational needs of the police force”. The trust fund, which is administered by UNDP, was intended to be a key channel for the international community to support the Afghan National Police (ANP) in order to strengthen the security sector. During the period under review, LOTFA was in its sixth phase. Phase VI (2011-2014) was aimed at achieving five main outputs:
• Police force and uniformed personnel of the Central Prisons Department (CPD) paid efficiently and in a timely manner.
• Required equipment and infrastructure provided to the Ministry of Interior (MoI).
• Capacity of MoI at policy, organizational and individual level improved in identified areas and administrative systems strengthened.
• Gender capacity and equality in the police force improved.
• Police‐Community Partnerships institutionalised for improved local security, accountability and service delivery.
The main emphasis was put on the payment of salaries. While Norway also contributed to the components aimed at building capacity within ANP/MoI, many of the other donors appeared less inclined to do so. As such, LOTFA was to some extent seen mainly as a mechanism to channel funds to pay for the ANP. Norwegian contributions to Phase VI of LOTFA totaled US$25,521,375.
LOTFA reported the following achievements by December 2015:[16]
• Salaries paid to 150,000 Afghan National Police and prison staff.
• Establishment of 1,350 security check points, and a refurbished police hospital.
• Establishment of 100 Family Response Units and 50 Gender Mainstreaming Units.
• Increased the number of Police Women Councils to 70 in 30 provinces.
• Trained more than 10,000 police officers on the Code of Conduct.
• Established six 119 Emergency Call Centres and 31 Information Help Desks for the public.
• Connected 33 Provincial Headquarters to the web-based Electronic Payment System.
While there had been growing concerns over possible mismanagement of funds in LOTFA in the preceding years, the problem reached its climax in 2012. Following a report by the MEC and subsequent media reports, the donors demanded a response from UNDP on the allegations. Future funding was made contingent on UNDP addressing any shortcomings in LOTFA in a satisfactory manner. Norway was amongst the donors pushing for a tough stance. While seeking to address donor demands, UNDP requested an extension of Phase VI of LOTFA through the end of 2014 (Phase VI should have been completed by early 2013). The extension was meant to give UNDP sufficient time to strengthen management and align activities for Phase VII, with the aim of putting in place a strategy for the eventual handing over of responsibility for the payment of ANP salaries to the Afghan government.
While the reforms undertaken focused primarily on UNDP procedures, rather than on procedures within the MoI, donors were sufficiently satisfied with the safeguards put in place. The strategic importance of LOTFA for the government’s counter-insurgency activities also meant that cutting off funds would have been a difficult, if not impossible, decision to make. From the decision-making process for continuing Norwegian funding to LOTFA, it is clear that the MFA/Norad were fully aware of the challenges, and worked closely with other development partners to push for the reform of UNDP’s management of LOTFA. Despite the risks, it was felt that sufficient safeguards were being put in place. To this end, Norway also funded the extension of Phase VI (US$9.6 million out of a total amount of US$25.5 million). As discussed below in the section on M&E, Norway also used this opportunity to push for strengthened M&E and reporting on the part of LOTFA.
Tawanmandi
Tawanmandi was established in 2011 as an Afghan civil society strengthening fund by a donor consortium including Denmark, Norway, Sweden, Switzerland, and the United Kingdom. The British Council was selected to manage the fund, with a joint Funders’ Council and Steering Committee.[17]
Tawanmandi supported Afghan civil society organizations (CSOs) in three main ways: a) by providing CSOs with grant financing; b) by providing CSOs with capacity development support according to their needs; and c) by helping to build effective CSO partnerships, networks, and coalitions.
Tawanmandi aimed to contribute to the development of “...a vibrant and inclusive civil society, with focus on issues of policy and practice in the areas of access to justice, anti-corruption, human rights, media, and peace-building and conflict resolution, with disability, gender and youth as cross-cutting themes”. Tawanmandi financed three phases of project grants, and a total of 78 project grants were awarded through the programme. “Funded projects have directly benefited close to 150,000 Afghan citizens in 29 provinces and 187 districts across the country”.[18]
The donors announced the termination of support to Tawanmandi in September 2014, with the contract expiring on 31 July 2015. They had decided against pursuing the plan of transforming the fund into an independent Afghan entity, while still assuring further support for Afghan CSOs.[19] All the persons interviewed supported the idea and rationale for the establishment of Tawanmandi, but they were also unanimous in their agreement that the fund did not deliver over time according to programme objectives or according to the expectations of either donors or Afghan civil society. The main reason provided for the failure was related to the way the fund, and its relationships with CSOs, was managed. The British Council, as an organisation, had not been able to establish a functional management system for the programme. Thus, donors decided to terminate funding following the conclusion of the present funding phase. The Danes placed importance on continued support for Afghan civil society and discussed the possibility of providing funding through a European Union “Programme in Support of Civil Society”, though they had comments on the present EU programme note.
Those interviewed for this review had no knowledge about the way in which Norway planned to continue its support for the Afghan civil society, or if any particular funding mechanism was under discussion.
[15] ARTF. 2015. “ARTF Scorecard 2015: Integrated Performance and Management Framework”, available at http://www.artf.af/images/uploads/ARTF_FINAL_2015_SCORECARD_REPORT.pdf visited 22.02.2016
[16] Government of Afghanistan. 2015. Afghanistan National Education For All. 2015 Review Report. Kabul, MoE.
[17] Available at http://www.epdc.org/education-data-research/afghanistan-education-inequality-profile , visited on 03.06.15.
[18] T. Lumley, J. Mendelovits, R. Turner, R. Stanyon and M. Walker. 2015. Class 6 Proficiency in Afghanistan 2013. Outcomes of Learning Assessment of Mathematical, Reading and Writing Literacy. Victoria, Australian Council for Educational Research.
[19] Norwegian Embassy. 2013. “Norsk utdanningsbistand til Afghanistan”, memo, 12.12.2013.
[20] For more details see Strand. A. (2015) Financing Education in Afghanistan: Opportunities for Action. Country Case Study for the Oslo Summit on Education in Development, available at http://www.osloeducationsummit.no/pop.cfm?FuseAction=Doc&pAction=View&pDocumentId=63328, visited on 25.01.2016
[25] The CDC’s governance role was later disputed by the Independent Directorate of Local Governance (IDLG) and remains an issue for discussion, cf. the NPP debates.
[26] For a presentation of NORPLAN’s activities, see http://www.norplan.af visited 15.01.2016
[27] Available at http://www.nspafghanistan.org/Default.aspx?sel=109, visited on 26.01.2016.
[28] There is a larger discussion about the advantages of CDCs compared to more traditional village structures, and about the extent to which elites influence their decisions and priorities. AKF had interesting experiences of the way in which regular re-elections of CDCs ensure a fairer representation (private communication, Kabul).
[29] For details on the project and results, see FAO. 2016. “Afghanistan and FAO Partnering for food security through gend
Lessons learned and recommendations
The ToR ask for the review to provide recommendations for further development cooperation in Afghanistan, while a clarification from the Secretariat modified the request to develop a more general reflection about future learning, including new development engagements. We will therefore start with some of the suggestions made by the interviewees/Afghan stakeholders, and then reflect on the more general lessons learned from our findings.
The development partners request a continuation of predictable and flexible funding for the coming years, which they see as a prerequisite for providing quality humanitarian and development assistance in an increasingly challenging work environment. The NGOs and the ARTF argue for continued support for the thematic areas they cover, but without suggesting cuts in other parts of the Norwegian engagement. Senior Norwegian bureaucrats have a similar position, but they also recommend more attention to M&E of the Norwegian-funded assistance, and several of them emphasize the need to address grand corruption challenges (and the individuals influencing them) in order to ensure that the Norwegian assistance meets the required needs and the jointly agreed development goals.
One important observation is that the situation in Afghanistan changed substantially in 2011. With the US announcement of military withdrawal from 2014, a more definite timeline was set that led to the expectation among Afghans of a reduction of all types of international assistance. The new context established a new urgency and planning horizon for all actors involved, including those that aimed to benefit from the corruption and embezzlement opportunities the assistance provided. The Kabul Bank case, and the involvement of relatives of senior government officials in the fraud, is one example.
Challenges to security, economic development and the establishment of a functional Afghan government increased in the 2011–2014 period and placed an even stronger urgency on working towards Norway’s strategic aims. These were to 1) strengthen Afghan institutions (to be in a position to handle international assistance as well as to increase and manage their own revenue); 2) contribute to a political settlement (to ensure a more peaceful future); and 3) contribute to sustainable and just development, humanitarian efforts, and promotion of governance, human rights and gender equality. It can be argued that these goals have been pursued consistently since 2001, including in the 2011-2014 period. The difference is that in this period these goals have been pursued through fewer development partners and projects, and with gradually less process involvement from the Kabul Embassy due to the reduction of Norwegian staff.
This is where we identify the main challenge for development assistance, and where the Afghan case can offer insights for similar and future peace/state building efforts. The Norwegian support to Afghanistan has had multiple objectives since 2001. It included building a state structure and developing and strengthening administrative capacities; building a judiciary, in competition with a traditional justice system; establishing a western democratic system and running regular elections; getting a free market economy in place; and running a military operation (while building a new army and police force). Also relevant to this review, in collaboration with multiple donors and stakeholders, Norway sought to contribute to the achievement of major development tasks (and to building Afghan capacity to take them on over time), while ensuring the rights and opportunities of the most vulnerable, not least of Afghan girls and women. The Afghan government was held responsible – expected to be in the “driver’s seat” – for leading these efforts.
At the same time, it was acknowledged that most Afghan ministries and the newly elected parliament lacked the required management capacity to fulfil this role. The donor response was to establish trust funds to manage development activities, and to make use of NGOs and private companies to implement development programmes and projects, though in collaboration with, and under the control of, Afghan ministries and donor coordination mechanisms. Such a fragile framework, however, requires a continuity of process knowledge, and of awareness about how and why commitments were made. It has been challenging to maintain continuity in this area, as Embassy officials (and UN, WB and international NGO staff) rarely stay on for more than two years in Afghanistan, and as key ministry staff are often replaced when a new minister takes office.
The larger trust funds, as we have seen, had in place internal management procedures to safeguard donor funds. This was not necessarily the case when these funds were transferred to the implementing ministry for implementation and/or salary payment. Even when severe misuse was documented, as in the case of LOTFA, the response options for donors were limited, as the consequence of cutting funding could threaten their overall engagement in Afghanistan. In the LOTFA case, donors were able to put pressure on UNDP, but had very little leverage to effect changes in the MoI. The case of Tawanmandi was different, as support to civil society organisations was not seen as equally important to the achievement of the overall objectives of the Afghanistan engagement (particularly on the security side), and was therefore more easily terminated.
However, fund disbursement is only one aspect of programme management. An equally important aspect is the Norwegian involvement in setting and ensuring strategic objectives in dialogue with the GoIRA and other donors. Also crucial is ensuring compliance with plans and priorities; coherence with Norwegian (or Nordic) priorities (as in ARTF, LOTFA and Tawanmandi); and follow up on implementation, M&E processes and anti-corruption safeguards.
In fact, stricter controls are applied to NGO support than to support for the trust funds, both in terms of the selection of implementing partners, and of assessing their overall conflict analysis, risk mitigation, M&E and anti-corruption systems and routines before they are accepted as partners. Their project proposals are assessed against development objectives and budget alignment. Budgets are cut or withheld if reports or accounts are not delivered, or if there are accusations of corruption. Such accusations result in a close dialogue with the NGO to ensure that it addresses the concerns identified, or in an external investigation that either confirms the allegations or acquits the NGO of them.
We are drawing up this picture to identify a weakness of the aid management system in a fragile/weak state such as Afghanistan, where there are major concerns over weak management capacity and corruption on the government side. Successful implementation depends, in our view, equally on 1) donor administrative systems for approval and control of the aid funding – which are in place, but now located in Norway and administratively divided between the MFA and Norad – and 2) donors’ ability to engage in a “development dialogue” with the government of Afghanistan and a range of other stakeholders throughout the programme planning and implementation period. This dialogue is critical, as outcomes will only be achieved if there is willingness and ability on the part of the national government to ensure that shared goals are established and met through the selected and implemented programmes and projects (and some of them through other partners, as in the case of the NSP).
The first part of the aid management system is well in place for Afghanistan, and made easier by a reduced number of partners and projects. There is also greater continuity and institutional memory, as there is more permanency of staff in the MFA and Norad than was deemed possible at the Embassy in Kabul, given the security situation. The NGOs are very satisfied with the handling of their contracts, recognizing that Norway has been able to combine a long term funding commitment with flexibility when required.
As for the second—crucial—part, the aid management system’s capacity for dialogue with the Afghan government has gradually been reduced over the last years with the reduction in Embassy staffing. In Figure 8, we have illustrated the different funding and potential dialogue channels in Afghanistan, taking into account that ultimately it is the GoIRA that is responsible for, or implements, the majority of the development programmes Norway funds.
The reduction in Norwegian capacity for development dialogue in Afghanistan takes place at a time when the need for sustained dialogue and trust building, according to our observations, has increased. This engagement is necessary for two main reasons. On the one hand, strategic and high-level diplomatic efforts with the GoIRA and other donors and donor mechanisms are required to maintain the strategic direction and priorities of the trust funds. On the other hand, there is a need for more practical programme/project follow-up and coordination with the GoIRA and other donors in order to ensure coherence with Norwegian priorities.
There is, as argued above, a need to initiate independent M&E (preferably with other donors) of both NGO and trust fund projects, as well as of their management. This requires continued dialogue, engagement and development diplomacy effected in Kabul to ensure that the necessary changes are implemented, assistance is calibrated to the implementing capacity of ministries and NGOs, and sufficient resources are allocated to ensure the building of the necessary capacity. The presence at the Embassy in Kabul of one dedicated and skilled international development counsellor, working closely with skilled national staff empowered to make decisions within defined responsibility areas, can make a major difference for the Norwegian development engagement, even within the present security regulations.
The present security regulations could be applied less universally, bringing in Afghan/external monitors (and “remote monitoring”) and possibly requesting assistance (e.g., from Sweden) for programme monitoring, in order to improve the oversight and results of Norwegian-funded assistance.
Ensuring a continuation of support for Afghan civil society organisations and engaging in the new NSP/National Citizen Charter Program are two of many tasks that cannot wait if Norway aims to influence future developments in Afghanistan.
Abbreviations
ACTED: Agency for Technical Cooperation and Development
AIHRC: Afghanistan Independent Human Rights Commission
AKF: Aga Khan Foundation
ANDS: Afghan National Development Strategy
ARTF: Afghanistan Reconstruction Trust Fund
ASGP: Afghanistan Sub National Governance Programme
CDC: Community Development Council
CSO: Civil society organization
DACAAR: Danish Committee for Aid to Afghan Refugees
DDC: District Development Council
ECC: Education Coordination Committee
ELECT: Enhancing Legal and Electoral Capacity for Tomorrow
EQUIP: Education Quality Improvement Program
FAO: Food and Agriculture Organisation
GoIRA: Government of the Islamic Republic of Afghanistan
ICRC: International Committee of the Red Cross
IDLG: Independent Directorate of Local Governance
IDP: Internally Displaced Person
IS: Islamic State
IWA: Integrity Watch Afghanistan
LOTFA: Law and Order Trust Fund
M&E: Monitoring and Evaluation
MEC: Independent Joint Anti-Corruption Monitoring and Evaluation Committee
MFA: Ministry of Foreign Affairs
MoE: Ministry of Education
MoF: Ministry of Finance
MRRD: Ministry for Rural Rehabilitation and Development
NABDP: National Area Based Development Programme
NAC: Norwegian Afghanistan Committee
NATO: North Atlantic Treaty Organization
NCA: Norwegian Church Aid
NGO: Non-governmental Organisation
Norad: Norwegian Agency for Development Cooperation
NORCROSS: Norwegian Red Cross
NORDIC +: Norway, Sweden, Denmark, Iceland and Finland, and occasionally other countries
NPP: National Priority Program
NRC: Norwegian Refugee Council
NSP: National Solidarity Program
OCHA: Office for Coordination of Humanitarian Affairs
PRT: Provincial Reconstruction Team
Sida: Swedish International Development Cooperation Agency
SIGAR: Special Inspector General for Afghanistan Reconstruction
SRSG: Special Representative of the Secretary General
TMAF: Tokyo Mutual Accountability Framework
ToR: Terms of Reference
UNAMA: United Nations Assistance Mission in Afghanistan
UNDP: United Nations Development Programme
UNHCR: United Nations High Commissioner for Refugees
UNICEF: United Nations Children’s Fund
UNOCHA: UN Office for Coordination of Humanitarian Affairs
UNODC: UN Office on Drugs and Crime
WASH: Water, Sanitation and Hygiene
WB: World Bank
Annex I: Interview list
Name – Organisation – Position – Date
Petter Bauck – Norad – Senior Advisor – 12.11.15
Liv Kjølseth – NAC – Secretary General – 12.11.15
Marit Strand – Norad – Senior Advisor – 13.11.15
Arne Disch – SCANTEAM – 13.11.15
Anders Wirak – MFA – 13.11.15
Mette Bastholm Jensen – Danish Embassy, Kabul – Head of Development – 16.11.15
Sabir Nasiry – Norwegian Embassy, Kabul – 16.11.15
Zabiullah Shenwari – Norwegian Embassy, Kabul – 16.11.15
Azada Hussaini – World Bank, Country Management Unit – Operations Officer – 19.11.15
Muhammad Wali Ahmadzai – World Bank – Financial Management Analyst – 19.11.15
Cherise Chadwick – Norwegian Red Cross – Country Manager – 19.11.15
Connie Maria Shealy – NCA – Assistant Country Director – 19.11.15
Ahmad Hassan – NCA – Program Manager – 19.11.15
Javlon Hamdamov – ACTED – Country Director – 21.11.15
Kaithlyn Scott – ACTED – AME Officer – 21.11.15
Terje Watterdal – NAC – Country Director – 21.11.15
Kenneth Marimira – NAC – M&E Specialist – 21.11.15
John Morse – DACAAR – Director – 21.11.15
Sayed Ikram Afzali – IWA – Director – 21.11.15
Qurat-ul-Ain Sadozai – NRC – Country Representative – 4.12.15
Nils Haugstveit – MFA – Ambassador 2012-14 – 11.12.15
Anders Tunord – NCA – Program Coordinator – 11.12.15
Liv Steimoeggen – NCA – Country Representative
Margrethe Volden – NCA – Area Team Leader, Middle East and Asia
Adam Combs – NRC – Head of Section, Asia – 11.12.15
Anna Hamre – Norcross – Programme Coordinator Afghanistan and Pakistan – 16.12.15
Odd Pedersen – Norcross – Logistics Coordinator
Semund Haukland – Norad – Senior Advisor – 05.01.2016
Ulrika Josefsson – Embassy of Sweden – Counsellor/Head of Development Cooperation – 15.01.2016
Annex II: Terms of Reference
Review of Norwegian development assistance to Afghanistan 2011–2014
Terms of Reference
Introduction and rationale
On 21 November 2014, the Norwegian Government appointed a Commission to evaluate the Norwegian civilian and military effort in Afghanistan in the period 2001-2014. The Commission will submit its report to the Norwegian Government by June 1st 2016. For the mandate of the Commission see: https://www.regjeringen.no/no/dokumenter/utvalg_afghanistan/id2340951/ (English language excerpt attached).
Among the many questions raised in the Commission’s mandate concerning the civilian effort, two stand out as particularly relevant as overall guidelines for the Commission’s investigative work: what are the results on the ground, for Afghans, of Norwegian development assistance to Afghanistan from 2001–14? And: to what extent has this assistance been supportive of the overall Norwegian political priorities and goals in its engagement in Afghanistan?
A partial answer to these two questions may be found in the evaluation report published by the Norwegian Agency for Development Cooperation (Norad) in 2012, covering the period 2001–2011. The central question of this evaluation was: what contribution has Norwegian support made to sustainable peace, improved governance and reduced poverty in Afghanistan? Taking as its point of departure an analysis of the development portfolio in terms of relevance, effectiveness, effect, impact and sustainability, the evaluation concludes that the portfolio is relevant and in line with international and national priorities, and that certain direct results have been achieved. However, the evaluation recommends that the Norwegian MFA rethink its development and aid strategy so that it is based on a sounder theory of change.
In the absence of a more recent evaluation, the Commission has decided to outsource a small-scale study of Norwegian development assistance in the period 2011-2014. The study will focus on the management of Norwegian development funds, and the results of Norway’s main cooperation-partners: international institutions (the World Bank and UNDP) and international and Norwegian NGOs (including ACTED, DACAAR, Norwegian Church Aid, Norwegian Refugee Council, Aga Khan Foundation, Norwegian Afghanistan Committee and the Norwegian Red Cross), in addition to the national NGO Integrity Watch and the national fund Tawanmandi.
In view of the rapidly deteriorating security situation in Afghanistan, this will in essence be a combination of a desk study, consultations in Oslo with relevant aid officials and diplomats, as well as interviews with key stakeholders residing in Afghanistan and elsewhere. These interviews may be conducted through Skype or phone, or the consultant may engage Kabul-based consultant(s).
Purpose
The purpose of this study is three-fold:
1) An assessment of the follow-up of the recommendations from the Norad report, including MFA strategies and internal guidelines.
2) An overview of Norwegian development assistance in Afghanistan 2011-14 and, where possible, its short and (expected) long-term results.
3) Recommendations for further development cooperation in Afghanistan.
Evaluation questions
In order to ensure coherence between the Norad 2012 report and the proposed outputs, the criteria will remain the same but should be further guided by the following groups of questions:
Management of Norwegian Development Funds:
What trends can be seen in the period 2011 – 2014 in terms of prioritization and selection of thematic focus and implementing partners, and to what degree do they meet the overall Norwegian development goals in Afghanistan?
If adjustments in prioritization of themes, partners and funding were done, on what criteria were these based?
To what extent and how have the recommendations from the 2011 evaluation (and the internal strategy) been followed up?
Particular focus should be given to:
• development of a theory of change of the overall Norwegian contribution;
• improved contextual analysis, conflict sensitivity and risk mitigation;
• anti-corruption procedures;
• monitoring and evaluation systems;
• internal human resource allocation and administrative capacity.
How and to what extent have Norwegian authorities engaged with, supported and evaluated the activities of implementing partners?
How responsive was Norway in adapting to changing circumstances directly affecting development assistance?
How well did Norway coordinate with other donors?
Has Norway stood out in any way, positive or negative, in its development assistance policy compared to other “likeminded countries” (e.g. Sweden and/or Denmark)?
Contribution of implementing partners:
What concrete results, short and (expected) long term, can Norway’s implementing partners refer to in the 2011-2014 period?
• This entails synthesising reported results from the partners, international institutions and NGOs at country level, including reported results in Faryab province
• For assessments of results, a case study should be selected to illustrate each of the sectors: good governance, education and rural development. These should, if possible, identify key factors leading to success or failure.
• A synthesised overview of M&E mechanisms utilized
• To what extent and how have Norway’s partners and the key channels through which Norwegian assistance has been allocated, contributed to strengthening Afghan ownership at institutional and community level?
How do key implementing partners perceive the support and engagement from the Norwegian government?
Recommendations:
What recommendations for future development cooperation in conflict areas can be drawn from the findings?
Methodology
The evaluation team will focus its work on reviewing key implementing partners’ results reports, evaluations relevant to Norwegian contributions and other relevant written sources. The evaluation team should also conduct extensive interviews with development assistance workers, policy makers and other stakeholders, chosen in consultation with the secretariat.
Organisation of the evaluation
The evaluation will be funded and supervised by the secretariat of the Commission. The consultant(s) should consult extensively with stakeholders pertinent to the assignment, and stakeholders should be asked to comment on the draft final report. Access to relevant archives will be facilitated by the secretariat to the extent possible. The final report will be the property of the Commission, which will decide on its further dissemination.
The consultant(s)
The consultant(s) should have the following qualifications:
• Demonstrated professional knowledge and understanding of development assistance practices, and evaluations of these;
• Solid knowledge and/or experience of Norwegian development assistance to Afghanistan, in particular from 2011 onwards;
• Solid knowledge of development efforts in Afghanistan and demonstrated access to Afghan networks that may provide Afghan perspectives;
• Proficiency in a Scandinavian language in order to be able to read documents in Norwegian.
The Commission encourages the consultant(s) to establish a team to cover the different requirements.
Budget and deliverables
The budget of the evaluation shall not exceed NOK 490 000.
Deliverables will be:
• An inception report/ final work-plan including an overview of expected final deliverables, to be discussed with the secretariat two weeks after signing of contract
• Update on progress – midterm (est. early January 2016)
• A final report not exceeding 40 pages, based on agreed deliverables
• One day of meetings with members of the Commission and the secretariat to present the findings, in Oslo.
Phases and deadlines
What – Who – When
Invitation to tender – Commission/secretariat – 26 September 2015
Tender submission – Consultant(s) – 15 October 2015
Signing of contract – Consultant and secretariat – End of October 2015
Inception report/work-plan – Consultant(s) – 15 November 2015
Interviews – Consultant(s) – Mid-November to end of December 2015
Draft report – Consultant(s) – 18 January 2016
Final report – Consultant(s) – 15 February 2016
One-day dissemination seminar – Consultant(s) – End of February 2016
Annex III: NGO profiles
Agency for Technical Cooperation and Development (ACTED) is an international relief agency with headquarters in Paris, France. ACTED was established in Peshawar, Pakistan in 1993 to provide humanitarian and rehabilitation assistance to Kabul during the civil war, but has since broadly expanded its activities.
ACTED is among the largest NGOs operating in Afghanistan, employing 961 national and 9 international staff. ACTED has a broad range of projects throughout Afghanistan and is a facilitating partner for the NSP, including in Faryab.
The Norwegian Embassy established a strategic partnership with ACTED in 2008, and the organization has since been a major implementer of Norwegian assistance in Faryab province, including the Ghormak district. Six of ACTED’s national staff members were killed in Faryab in November 2013. The financial framework has amounted to NOK 120 million for their project “Sustained Rural Development in Faryab Province”.
Aga Khan Foundation (AKF) is a Swiss-registered foundation that forms part of the Aga Khan Development Network (AKDN). AKF was established as an international organization in 1967 under the leadership of His Highness the Aga Khan, the spiritual leader of the Shia Ismaili Muslim community.
AKF established itself in Afghanistan in 2002 and quickly became one of the largest NGOs in the country, with 1700 staff-members. It is a facilitating partner for the NSP in Badakshan, Baghlan, Bamyan and Takhar.
The Norwegian Embassy established a partnership with AKF in 2007, supporting a multisector support programme in the Badakshan, Baghlan and Bamyan provinces, including Bamyan Electrification Project, with a financial framework of NOK 64 mill.
Danish Committee for Aid to Afghan Refugees (DACAAR) is a Danish NGO formed back in 1984 as a collaboration between three Danish NGOs. DACAAR supported Afghan refugees in Pakistan during the 1990s and then pioneered support for Afghan women through an embroidery project and a structure for selling their products.
Following the withdrawal of Soviet forces in 1989, DACAAR started to shift its activities into Afghanistan, moving into rural development and vocational training while continuing targeted support for women as well as water and sanitation projects. What set DACAAR apart from many other NGOs was its employment of Danish (and international) academics with extensive knowledge of Afghanistan, which informed its priorities and approaches.
DACAAR employs 850 national and 10 international staff members, and is a facilitating partner for the NSP, in which they worked in Faryab, Herat, Laghman and Parwan provinces in 2012.
The Norwegian Embassy has since 2010 supported DACAAR’s programme “Rural Development in Northern Afghanistan” in the Faryab, Sar-e-Pul and Badakshan provinces. The two main activities, rural development and water supply and sanitation, had a financial framework of NOK 77 million.
Integrity Watch Afghanistan (IWA) is an Afghan civil society organisation established in 2006, committed to increasing transparency, accountability, and integrity in Afghanistan. It has received Norwegian support since 2009.
The mission of Integrity Watch is to put corruption under the spotlight through community monitoring, research, and advocacy. They mobilize and train communities to monitor infrastructure projects, public services, courts, and extractives industries. They develop community monitoring tools, provide policy-oriented research, facilitate policy dialogue, and advocate for integrity, transparency, and accountability in Afghanistan.
IWA has approximately 90 staff members and 700 volunteers. The head office of Integrity Watch is in Kabul, with provincial programmatic outreach in Badakhshan, Balkh, Bamyan, Herat, Kabul, Kapisa, Logar, Nangarhar, Parwan, Panjshir, Samangan, and Wardak.
Norwegian Afghanistan Committee (NAC) is a Norwegian NGO established in 1980 as a solidarity movement, working solely with Afghanistan. They provided emergency assistance and operated medical teams inside Afghanistan during the 1980s. From 1989 onwards their work shifted towards rehabilitation and development assistance, and field offices were opened in Ghazni and Badakshan provinces.
NAC works on health, education and natural resource management through an integrated approach, and has a staff of 200 national members and 1 international member based in Kabul, Jaghori (Ghazni) and Badakhshan.
The rural development project, with projects in Ghazni and Badakshan provinces, has a financial framework of NOK 45 million.
Norwegian Church Aid (NCA) is a Norwegian NGO that works in partnership with/through Afghan NGOs and civil society organisations. NCA have supported Afghans since 1979, first with assistance to refugees and since the early 1990s with rehabilitation, development and humanitarian assistance inside Afghanistan. It established a Kabul office in 1993.
NCA applies an integrated approach for their support for climate justice and the right to peace and security, advanced through long-term development, emergency assistance, and advocacy work. Given their role as donor NGO, NCA has a rather small staff based in Kabul and Maimane, Faryab.
The contract on integrated rural development included 12 partner NGOs, operating in Faryab, Daikundi and Uruzgan provinces, with a budget of NOK 105 million. The more targeted programme “Promoting Women’s Engagement and Participation in Faryab» was implemented through four partners. These activities had a financial framework of NOK 6,9 million.
Norwegian Red Cross (NORCROSS) is a Norwegian NGO that works in partnership with the Afghan Red Crescent Society (ARCS), the International Federation of Red Cross and Red Crescent Societies, and the International Committee of the Red Cross.
NORCROSS works primarily to support, strengthen and supplement the humanitarian activities of the Red Cross and Red Crescent Movement in Afghanistan, with major efforts going into strengthening the organizational and management capacity of the ARCS and its activities.
NORCROSS has Kabul based Country Representatives that monitor and coordinate Norwegian funded activities.
Norwegian Refugee Council (NRC) is a Norwegian NGO that operated a joint office in Pakistan with NCA until 1994, when it disengaged from Afghanistan; it re-established its presence in Kabul in 2002.
NRC supports and advocates for the rights of refugees returning from neighbouring countries and of Internally Displaced Persons (IDPs) through legal assistance, education, shelter, WASH and emergency assistance.
NRC has 450 national and 22 international staff-members working from their Kabul office and from six field offices. The project included in this review, “Youth education and Gender based Violence Program in Faryab and Nangarhar provinces” has a financial framework of NOK 38 million.
Annex IV: Key documents reviewed
ACER. 2015. “Class 6 proficiency in Afghanistan 2013: Outcomes of a learning assessment of mathematical, reading and writing literacy”. Report for Ministry of Education. Melbourne: ACER.
ACTED. 2013. “Sustained Rural Development Programme, Phase II Completion Report, Faryab province: Almar, Qaisar, Kohistan and Pashtun Kot districts. 2010- 2013”. Kabul: ACTED.
ACTED. 2014. “Results Reporting: Faryab Sustained Rural Development Program”. Kabul: ACTED
ACTED. 2015. Annual Report 2014. Paris: ACTED.
Afghan Reconstruction Trust Fund. 2015. “ARTF Scorecard 2015, Integrated Performance and Management Framework”. Kabul: ARTF-WB.
Afghan Research and Evaluation Unit. 2013. Women’s Economic Empowerment in Afghanistan, 2002-2012: Information Mapping and Situation Analysis. Kabul: AREU.
Aga Khan Foundation. 2013. “Final Report April 2010- June 2013: Multi Sector Support Programme (MSSP) Extension. Strengthening Licit Rural Livelihoods in Bamyan, Baghlan and Badakshan provinces of Afghanistan”. Agreement number: AKF 2779-08/007. Kabul: AKF.
Aga Khan Foundation. 2014. “Final Report April 2010- June 2013: Multi Sector Support Programme (MSSP) Extension. Strengthening Licit Rural Livelihoods in Bamyan, Baghlan, Badakshan and Takhar provinces of Afghanistan”. Agreement number: AKF 2779 13/0002. Kabul: AKF.
Aga Khan Foundation. 2015. Multi-Sector Support Program – Phase III, Annual report. Grant Agreement AFG-13/0002. Kabul: AKF.
Ambassaden i Kabul. 2010. Treårig plan 2011–2013. Kabul: Den Norske Ambassaden.
Ambassaden i Kabul. 2011. Treårig plan 2012–2014. Kabul: Den Norske Ambassaden.
Ambassaden i Kabul. 2010. Virksomhetsplan 2011. Kabul: Den Norske Ambassaden.
Ambassaden i Kabul. 2011. Virksomhetsplan 2012. Kabul: Den Norske Ambassaden.
Ambassaden i Kabul. 2012. Virksomhetsplan 2013. Kabul: Den Norske Ambassaden.
Ambassaden i Kabul. 2012. Behov for faglige tjenester 2013 fra Norad. Kabul: Den Norske Ambassaden.
Ambassaden i Kabul. 2013. Virksomhetsplan 2014. Kabul: Den Norske Ambassaden.
Ambassaden i Kabul. 2013. Behov for faglige tjenester 2014 fra Norad. Kabul: Den Norske Ambassaden.
DACAAR. 2013. “Rural Development in Northern Afghanistan. Final Report (April 2010-June 2013).” Contract number: AFG 2786 08/008 (including RDP Baseline Survey Report).
DACAAR. 2012. Strategic Programme Framework 2013–2016. Kabul: DACAAR
DACAAR. 2015. DACAAR Annual Report 2014. Kabul: DACAAR.
DACAAR. 2015. “A Study of Gender Equity through the National Solidarity Programme’s
Community Development Councils”. Kabul: DACAAR.
DANIDA. 2012. “Evaluation of Danish Development Support for Afghanistan. August 2012.” Copenhagen: DANIDA.
European Union. 2015. "EU Programme in Support of Civil Society Concept note”, 28 September 2015. Kabul: EU.
European Union. 2015. “EU Country Roadmap for Engaging with Civil Society. 2015-2017.” Kabul: EU.
FAO. 2011. “Promoting Integrated Pest Management in Afghanistan. Progress report”. Kabul: FAO.
FAO. 2012. “Promoting Integrated Pest Management in Afghanistan. Progress report”. Kabul: FAO.
FAO. 2012. “Integrated Pest Management in Afghanistan. Revised Logframe”. Kabul: FAO.
FAO. 2013. “Promoting Integrated Pest Management in Afghanistan. Progress report”. Kabul: FAO.
FAO. 2013. “Follow-up report on the Management Response to the recommendations of Evaluation Mission for ‘Integrated Pest Management in Afghanistan - GCP/AFG/058/NOR’.” Evaluation conducted from 14 Jan to 02 Feb 2013. Kabul: FAO.
FAO. 2014. “Promoting Integrated Pest Management in Afghanistan. Progress report”. Kabul: FAO.
Integrity Watch Afghanistan. 2012. “Project Completion Report to the Royal Norwegian Embassy”. Kabul: IWA.
Integrity Watch Afghanistan. 2013. “Gender Aspects of Development and Community Involvement Examining Gender Differences in Data on Provincial Reconstruction Teams and Development Projects”. Kabul: IWA.
Intrac. 2014. “Study on NCA Afghanistan’s partnership approach”. Oxford: Intrac.
Islamic Republic of Afghanistan. 2012. “The Ministry of Rural Rehabilitation and Development”. Kabul: MRRD.
Islamic Republic of Afghanistan. 2014. “EQUIP Progress Report 2014: July – December”. Kabul: Ministry of Education.
Islamic Republic of Afghanistan. 2015. “Afghanistan National Education for All (EFA) Review 2015 Report”. Kabul: Ministry of Education.
Islamic Republic of Afghanistan. 2015. “National Technical Assistance Salary Scale and Implementation Guidelines”. DRAFT. Kabul: GoIRA.
Islamic Republic of Afghanistan. 2015. “Capacity Building and Institutional Cooperation in the field of Hydrogeology for Faryab Province, Afghanistan”. First Draft Final Report. Kabul: Ministry of Rural Rehabilitation and Development.
Lumley, T., J. Mendelovits, R. Turner, R. Stanyon and M. Walker. 2015. Class 6 Proficiency in Afghanistan 2013. Outcomes of Learning Assessment of Mathematical, Reading and Writing Literacy. Victoria: Australian Council for Educational Research.
Norad. 2012. “Evaluation of Norwegian Development Cooperation with Afghanistan 2001-2011”, (Report 3/2012). Oslo: Norad.
Norwegian Afghanistan Committee. 2014. “Baseline Study Report for Community Disasters Risk Management Project, Badakshan Province, Afghanistan”. NAC: Kabul.
Norwegian Afghanistan Committee. 2013. Annual Report 2012. NAC: Oslo.
Norwegian Afghanistan Committee. 2013. “Gender Impact Assessment Tools and Approaches for NRM Interventions”. NAC: Kabul.
Norwegian Afghanistan Committee. 2015. Annual Report 2014. NAC: Oslo.
Norwegian Afghanistan Committee. 2014. Integrated Rural Development III. NAC: Kabul.
Norwegian Afghanistan Committee. 2014. “Baseline Study for the Integrated Rural Development II Program in Afghanistan”. Agreement AFG -2778 AFG-12/0049. NAC: Kabul.
Norwegian Afghanistan Committee. 2014. “Education Case Study: Overall School Improvement”. NAC: Kabul.
Norwegian Afghanistan Committee. 2014. “Monitoring, Evaluation and Learning Strategy.” NAC: Kabul.
Norwegian Afghanistan Committee. 2015. “Community Based Disaster Risk Management. Annual Report to Norad”. NAC: Oslo.
Norwegian Church Aid and ACT Alliance. 2009. “Anti-fraud and corruption policy for the ACT Alliance”.
Norwegian Church Aid. 2013. “Evaluation Policy for Norwegian Church Aid's International Work”. NCA: Oslo.
Norwegian Church Aid. 2014. “Mobilising Religious Actors for Peace: End of Project Formative Evaluation Report”. NCA: Kabul.
Norwegian Church Aid. 2014. NCA Afghanistan: Monitoring policy/framework. NCA: Kabul.
Norwegian Church Aid. 2014. “Final report for Integrated Rural Development Program to Royal Norwegian Embassy for the period January 2010 – December 2012”. Submitted on 15th January, 2014. NCA: Kabul.
Norwegian Church Aid. 2015. “Annual Report 2014: Building resilient communities for sustainable development and peace”. NCA: Oslo.
Norwegian Church Aid. 2015. “4 year report: Covering the years 2011, 2012, 2013 and 2014”. NCA: Oslo.
Norwegian Refugee Council. 2006. NRC’s Anti-corruption Handbook. Updated January 2015. NRC: Oslo.
Norwegian Refugee Council. 2014. “Final Project Report: Strengthening Gender-Based Violence Prevention and Response in Faryab Province”. NRC: Kabul.
Norwegian Refugee Council. 2014. “Annual Report 2013: Education and Information, counselling and Legal Assistance (ICLA) programme in Faryab”. NRC: Oslo.
Norwegian Refugee Council. 2014. “Monitoring and Evaluation Factsheet”. NRC: Oslo.
Norwegian Refugee Council. 2014. “Monitoring and Evaluation Guidelines”. NRC: Oslo.
Norwegian Refugee Council. 2015. “Annual Report 2014: Education and Information, counselling and Legal Assistance (ICLA) programme in Faryab”. NRC: Oslo.
Norwegian Refugee Council. 2015. “NRC Evaluation Policy”. NRC: Oslo.
Norwegian Refugee Council. 2015. “Faryab Briefing, including IDP situation”. NRC: Kabul.
Røde Kors. 2015. “Final Report 2012 -2014 and Audited Financial Statements 2014. The Norwegian Red Cross in Afghanistan 2012-2014”. AFG-12/009. NORCROSS: Oslo.
Riksrevisjonen. 2015. “Riksrevisjonens undersøkelse av bistand til godt styresett og antikorrupsjon i utvalgte samarbeidsland”. Dokument 3:9 (2014–2015). Riksrevisjonen: Oslo.
Tawanmandi. 2014. “Future of the Tawanmandi Programme: Letter to Partners of 28 September 2014”. Tawanmandi and Donors: Kabul.
UNDP Afghanistan. 2011. “ELECT, Final Evaluation: Enhancing Legal and Electoral Capacity for Tomorrow Project”. June 2011. UNDP: Kabul.
UNDP Afghanistan. 2013. “Sub-national Governance and Development Strategy”. June 2013. UNDP: Kabul.
Utenriksdepartementet. 2011. “Statsbudsjettet 2011 – Tildelingsskriv. Utenriksdepartementet”. Oslo
Utenriksdepartementet. 2012. “Statsbudsjettet 2012 – Tildelingsskriv. Utenriksdepartementet”. Oslo
Utenriksdepartementet. 2013. “Statsbudsjettet 2013 – Tildelingsskriv. Utenriksdepartementet”. Oslo
Utenriksdepartementet. 2014. “Statsbudsjettet 2014 – Tildelingsskriv. Utenriksdepartementet”. Oslo.
Wimpelmann, T. and A. Strand. 2014. “Working with Gender in Rural Afghanistan: Experiences from Norwegian-funded NGO projects” (Norad Evaluations no. 10/2014). Oslo: Norad.
World Bank. 2015. “Strategic Directions for the National Solidarity Program. Assessment of Strategic Issues and Recommendations for Future Directions”, March 31, 2015. GSURR SOUTH ASIA.
World Bank. 2015. “Afghanistan Development Update”. October 2015. Kabul, World Bank.
Annex V: Example of Theory of Change
### Arne Strand
Deputy Director, Research Director, U4 Director
# Collection Initialization in Java
There is this so-called “double brace” pattern for initializing collections. We will see later on whether it should be considered a pattern or an anti-pattern…
The idea is to treat the whole initialization of a collection as one big operation. In other languages we write something like
[element1 element2 element3]
or
[element1, element2, element3]
for array-like collections and
{key1 val1, key2 val2, key3 val3}
or
{key1 => val1, key2 => val2, key3 => val3}.
Java could not do this so elegantly until Java 9, but there was actually a way to construct sets and lists:
Arrays.asList(element1, element2, element3);
or
new HashSet<>(Arrays.asList(element1, element2, element3));
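As a quick sanity check of these pre-Java-9 idioms, here is a small self-contained sketch. Note that the list returned by Arrays.asList is fixed-size (set works, add throws), while copying it into a HashSet yields a normal, modifiable set:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PreJava9Init {
    public static void main(String[] args) {
        // Fixed-size list backed by an array: set() works, add() would throw.
        List<String> l = Arrays.asList("abc", "def", "uvw");
        l.set(0, "xyz"); // allowed
        System.out.println(l); // [xyz, def, uvw]

        // Copying into a HashSet yields a normal, modifiable set.
        Set<String> s = new HashSet<>(Arrays.asList("1A2", "2B707", "3DD"));
        s.add("4E4");
        System.out.println(s.size()); // 4
    }
}
```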
Do not ask about immutability (or unmodifiability), which is still not very well solved in the standard Java library, unless you are willing to take a look at Guava, which we will do in another article… Let us stick with Java’s own facilities for today.
So the double brace pattern would be something like this:
    import java.util.*;

    public class D {
        public static void main(String[] args) {
            List<String> l = new ArrayList<String>() {{
                add("abc"); add("def"); add("uvw");
            }};
            System.out.println("l=" + l);

            Set<String> s = new HashSet<String>() {{
                add("1A2"); add("2B707"); add("3DD");
            }};
            System.out.println("s=" + s);

            Map<String, String> m = new HashMap<String, String>() {{
                put("k1", "v1"); put("k2", "v2"); put("k3", "v3");
            }};
            System.out.println("m=" + m);
        }
    }
What does this do?
First of all, an opening brace after new XXX() creates an anonymous class extending XXX, and we then open the body of that subclass. It is well known to many that a class body can contain a static {…} section, which is executed exactly once per class. The same syntax without the static keyword gives an instance initializer, which is executed once for each instance of the class, after the constructor of the base class, and thus serves as a kind of replacement for the constructor. To make it look cooler, the two pairs of braces are placed right next to each other.
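A minimal sketch of the two initializer kinds, using counters to show how often each one runs:

```java
public class InitOrder {
    static int staticRuns = 0;
    int instanceRuns = 0;

    static { staticRuns++; } // runs exactly once, when the class is initialized
    { instanceRuns++; }      // runs for every instance, after the super constructor

    public static void main(String[] args) {
        InitOrder a = new InitOrder();
        InitOrder b = new InitOrder();
        System.out.println(staticRuns);     // 1
        System.out.println(a.instanceRuns); // 1
        System.out.println(b.instanceRuns); // 1
    }
}
```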
It is not magic, but it creates a lot of overhead by generating anonymous classes with no real additional functionality, just for the sake of an initialization. It is even worse: these anonymous inner classes are not static, so they can refer to their surrounding instance. They make no use of this, but they still carry a reference to the surrounding instance, which can be a very serious problem for serialization, if that is used, and for garbage collection. So please consider double-brace initialization an anti-pattern. Others have blogged about this too…
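The extra anonymous class is easy to observe with a bit of reflection:

```java
import java.util.ArrayList;
import java.util.List;

public class DoubleBraceDemo {
    public static void main(String[] args) {
        List<String> l = new ArrayList<String>() {{ add("abc"); }};
        // The double-brace idiom really creates an extra anonymous subclass:
        System.out.println(l.getClass().isAnonymousClass());               // true
        System.out.println(l.getClass().getSuperclass().getSimpleName()); // ArrayList
        System.out.println(l);                                             // [abc]
    }
}
```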
There are more legitimate ways to group the initialization together. You can put the initialization into a static method and call that. Or you could group it with single braces, just to indicate the grouping. This is a bit unusual, but at least correct:
    import java.util.*;

    public class E {
        public static void main(String[] args) {
            List<String> l = new ArrayList<String>();
            { l.add("abc"); l.add("def"); l.add("uvw"); }
            System.out.println("l=" + l);

            Set<String> s = new HashSet<String>();
            { s.add("1A2"); s.add("2B707"); s.add("3DD"); }
            System.out.println("s=" + s);

            Map<String, String> m = new HashMap<String, String>();
            { m.put("k1", "v1"); m.put("k2", "v2"); m.put("k3", "v3"); }
            System.out.println("m=" + m);
        }
    }
While the first two can somehow be written using Arrays.asList(...), Java 9 now offers nicer ways to write all three: List.of("abc", "def", "uvw");, Set.of("1A2", "2B707", "3DD"); and Map.of("k1", "v1", "k2", "v2", "k3", "v3");. These are recommended over any other way, because they perform some additional runtime and compile-time checks and because they are efficient immutable collections. This has been blogged about too.
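A short sketch of these Java 9 factory methods; the resulting collections are truly immutable, so mutators throw UnsupportedOperationException:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

public class Java9Factories {
    public static void main(String[] args) {
        List<String> l = List.of("abc", "def", "uvw");
        Set<String> s = Set.of("1A2", "2B707", "3DD");
        Map<String, String> m = Map.of("k1", "v1", "k2", "v2", "k3", "v3");

        // The factory collections are immutable: mutators throw.
        try {
            l.add("xyz");
        } catch (UnsupportedOperationException e) {
            System.out.println("immutable");
        }
        System.out.println(m.get("k2"));      // v2
        System.out.println(s.contains("3DD")); // true
    }
}
```

Among the extra runtime checks: null elements are rejected, and Set.of and Map.of reject duplicate elements or keys with an IllegalArgumentException.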
The aspect of immutability, which we should consider today, is not very well covered by the Java collections (apart from the new internal implementations behind the new factory methods). Wrapping in Collections.unmodifiableXXX(...) adds a bit of overhead in terms of code, memory and CPU usage, but it does not guarantee that the wrapped collection is not being modified elsewhere.
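This last point, that the wrapper is only a view of the underlying collection, can be demonstrated directly:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class UnmodifiableView {
    public static void main(String[] args) {
        List<String> backing = new ArrayList<>();
        backing.add("abc");
        List<String> view = Collections.unmodifiableList(backing);

        // The wrapper itself rejects modification...
        try {
            view.add("def");
        } catch (UnsupportedOperationException e) {
            System.out.println("view is unmodifiable");
        }

        // ...but it is only a view: changes to the backing list show through.
        backing.add("def");
        System.out.println(view.size()); // 2
    }
}
```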
# Is Java becoming non-free?
We are kind of used to the fact that Java is “free”.
It has been free in the sense of “free beer” pretty much forever.
And more recently also “free” in the sense of “free speech”.
Although articles like this one report that “Oracle is going to monetize Java”, it remains free, at least for now. This is also stated in the article.
But it seems that they are looking for loopholes. For example, we download and install Java SE including X, Y and Z, because it comes like that. We agree to a hundred pages of license text and confirm having read and understood everything, as always… Now we really need X, which is the JDK and actually free. But we accidentally also install Y and Z, which we do not need, but which carry a price tag they are trying to charge us for.
Even if nothing really happens, issues like this help undermine trust in the platform in general, not only for Java, but also for other JVM languages. Eventually there could be forks, as we have seen with LibreOffice vs. OpenOffice or MariaDB vs. MySQL, which in effect took over by avoiding the ties to Oracle. Solaris seems to have a similar fork, but in that case people are moving to Linux anyway, so the issue is less relevant.
These prospects are not desirable, but I think we do not have to panic, because there are ways to solve this that are going to be pursued if necessary. Maybe it is a good idea to be more careful when installing software. And to think twice when starting a new project if Oracle or PostgreSQL is the right DB product in the long term, taking into consideration Oracle’s attitude towards loyal long term customers.
It is regrettable. Oracle has great technology from their own history and from SUN in databases, Java including the surrounding universe, Solaris and hardware. Let us hope that they will stay reasonable at least with Java.
# JMS
Java has always been not just a language; it brought us libraries and frameworks as well. Some of them proved to be bad ideas, some became hyped without having any obvious advantages, but some were really good.
In the JEE stack, messaging (JMS) was included pretty much from the beginning. In those days, when Java belonged to Sun Microsystems and Sun did not yet belong to Oracle, one aim was to support databases (in those days mostly Oracle) via JDBC and so-called message-oriented middleware, available in the IBM world, via JMS. JMS is a common interface for messaging, that is, for sending micro-email-like messages not between humans but between software components. It can be used within one JVM, but also between geographically distant servers, provided a safe network connection exists. Since we all know email, this is in principle not too hard to understand; the question is what it really means and whether it brings us something that we do not already have otherwise.
We do have web services as an established way to communicate between different servers across the network, and of course they can also be used locally, if desired. Web services are neither the first nor the only way to communicate between servers, nor are they the most efficient way, but I would say that they are how we do it in typical distributed applications that are not tied to any legacy. In principle, web services are network-capable and synchronous. This is well understood and works fine for many applications. But it also forces us to block processes or threads while waiting for responses, thus occupying valuable resources, and we tend to lose responsiveness because of the waiting. It should be observed that DB access is typically only available synchronously. That is understandable because of the transactions, but it also blocks resources to a huge extent, since we know that the performance of many applications is DB-driven.
Message-based software architectures, on the other hand, think mostly asynchronously. Sending a message is “fire and forget”. There is such a thing as making messages transactional, but this has to be understood correctly: there is one transaction for sending the message, and it is guaranteed that the message is sent. Delivery guarantees can only be given to a limited extent, because we do not know anything about the other side, not even whether it is working at all; this is not checked as part of the transaction. We can imagine, though, that the messaging system has its own transactional database and stores the message there within the sending transaction. It then retries delivering the message until it succeeds, whereupon the message is deleted from this store as part of the receiving transaction. Both of these transactions can be part of a distributed transaction and thus be combined with other transactions, usually against databases. This is what we usually have in mind when talking about transactional messaging. I have to mention that the distributed transaction, usually based on the so-called two-phase commit, is not quite as waterproof as we might hope; it can be broken by constructing a worst-case scenario regarding the timing of failures of network and systems. But for practical purposes it is reasonably good to use.
While it is extremely interesting to investigate purely message based architectures, especially in conjunction with functional paradigm, this may not be the only choice. Often it is a good option to use a combination of messaging with synchronous services.
We should observe that messaging is a more abstract concept. It can be implemented by some middleware and even be accessible through a standardized interface like JMS, but it can also be more abstract, as a queuing system or as something like what Akka uses for its internal communication. And messaging is not limited to Java or JVM languages. Interoperability does impose some constraints on how to use it, because it rules out Object messages, which store serialized Java objects, but there are ways to address this by using JSON, BSON, XML or Protocol Buffers as message contents.
What is interesting about JMS, and messaging in general, are two major communication modes. We can have queues, which are point-to-point connections, or we can have “topics”, which are channels into which messages are published and then received by all current subscribers of the topic. Topics are interesting for notifying different components about an event happening in the system, while details about the event may have to be queried via synchronous services or requested by further messaging via queues.
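To make the difference between the two modes concrete without pulling in a JMS provider, here is a toy in-memory sketch in plain Java (not the JMS API; the class names are made up for illustration): a queue hands each message to exactly one receiver, while a topic broadcasts each message to all current subscribers.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.function.Consumer;

public class MessagingModes {
    // Point-to-point: each message is consumed by exactly one receiver.
    static class PointToPointQueue {
        private final Queue<String> messages = new ArrayDeque<>();
        void send(String msg) { messages.add(msg); }
        String receive() { return messages.poll(); } // null if already consumed
    }

    // Publish/subscribe: each message goes to all current subscribers.
    static class Topic {
        private final List<Consumer<String>> subscribers = new ArrayList<>();
        void subscribe(Consumer<String> s) { subscribers.add(s); }
        void publish(String msg) { subscribers.forEach(s -> s.accept(msg)); }
    }

    public static void main(String[] args) {
        PointToPointQueue q = new PointToPointQueue();
        q.send("order-1");
        System.out.println(q.receive()); // order-1
        System.out.println(q.receive()); // null, already consumed

        Topic t = new Topic();
        List<String> a = new ArrayList<>();
        List<String> b = new ArrayList<>();
        t.subscribe(a::add);
        t.subscribe(b::add);
        t.publish("event-1");
        System.out.println(a + " " + b); // [event-1] [event-1]
    }
}
```

A real JMS provider adds persistence, transactions and network transport on top of exactly these two delivery semantics.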
JMS has different implementations: usually there are those coming with the application servers, and there are also some standalone implementations. They can be operated via the same interface, at least as long as we restrict ourselves to the common set of functionality. So we can exchange the JMS implementation for the whole platform (which is a nightmare in real life), but we cannot mix implementations, because the wire protocols are usually incompatible. There is now something like a standard network protocol for messaging, which is followed by some, but not all, implementations.
As skeptical as I am about Java Enterprise Edition, I do find the JMS part of enterprise Java very interesting and worth exploring for projects whose size and characteristics justify it.
# Some Thoughts about Incompleteness of Libraries
## Self-written Util Libraries
Today we have really good libraries with our programming languages, and they cover a lot of things. The funny thing is that we usually end up writing some util classes like StringUtil, CollectionUtil, NumberUtil etc. that cover common tasks not found in the libraries we use. Usually it is no big deal, and the methods are trivial to write. But not having them in the library results in several slightly different ad hoc solutions to the same problem, sometimes flawless, sometimes somewhat weak, spread throughout the code, and maybe eventually some “tools”, “utils” or “helper” classes that unify and cover them in a somewhat reasonable way.
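As an illustration, here is the kind of tiny utility class this is about; the names StringUtil, isBlank and joinNonBlank are made up for the example, not taken from any particular library:

```java
public final class StringUtil {
    private StringUtil() {} // no instances, just static helpers

    /** True if s is null, empty, or contains only whitespace. */
    public static boolean isBlank(String s) {
        return s == null || s.trim().isEmpty();
    }

    /** Joins the non-blank parts with the given separator. */
    public static String joinNonBlank(String sep, String... parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            if (!isBlank(p)) {
                if (sb.length() > 0) sb.append(sep);
                sb.append(p);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(isBlank("   "));                    // true
        System.out.println(joinNonBlank(", ", "a", " ", "b")); // a, b
    }
}
```

Methods like these are trivial, which is exactly why half a dozen slightly different versions of them tend to accumulate in a codebase.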
## Imposing Util Libraries on all Developers
In the worst case these self-written library classes really suck, but are imposed on the developers anyway. Many years ago it was „company standard“ to use a common library for localizing strings. The concept was kind of nice, but it had its flaws. First, there was a company-wide database for localizing strings in order to save on translation costs, but the overhead was considerable, and there was a real probability that the same short string means something different in the context of different applications. This could be addressed by creating a label that somehow included the application ID, bypassing the overhead whenever a collision was detected. What was worse, each new string made it into a header file, which caused the whole application to be recompiled, unless a hand-written make file skipped this dependency. That was of course against company policy as well, and it meant a lot of work. In those days compilation of the whole application took about 8 (eight!) hours. Maybe seven. So after adding one string it took 8 hours of compile time to continue working with it. Anyway, there was another implementation of the same concept for another operating system, which used hash tables and did not require recompilation. It had the risk of runtime errors because of undefined strings, but it was at least reasonable to work with. I ported this library to the operating system that I was using and used it, and during each meeting I had to commit to the long-term goal of changing to the broken library, which of course never happened, because there were always higher priorities.
I think the lesson we can already learn is that libraries that are written internally and imposed on all developers should be done very well. Senior developers should be involved, and if the company does not have them, they should be hired externally for the development. Not to do the whole development, but to help doing it right.
## Need for Util libraries
So why not just go with the given libraries? Or download some more? Depending on the language there are really good libraries around. Sometimes that is the way to go. Sometimes it is good to write a good util library internally. But then it is important to do it well, to include only stuff that is actually needed or reasonably likely to be needed, and to avoid major effort for reinventing the wheel. Some obscure libraries actually become obsolete when the main default library gets improved.
## Example: Trigonometric and other Mathematical Functions
Most of us do not do a lot of floating point arithmetic, and consequently we do not need the trigonometric functions like sin and cos, other transcendental functions like exp and log, or functions like the cube root (cbrt) a lot. Where the default set of these functions ends is somewhat arbitrary, but of course we need to go to special libraries at some point for more special functions. We can look at what early calculators used to have and what advanced math text books in schools cover. We have to consider the fact that the commonly used set of trigonometric functions differs from country to country. Americans tend to use six of them, sin, cos, tan, cot, sec and csc, which is kind of beautiful, because it really completes the set. Germans tend to use only sin, cos, tan and cot, which is not as beautiful, but at least avoids the division by zero in sec and csc and the issue of transforming them to cos and sin. Calculators usually had only sin, cos and tan. But they offered them in three flavors, with modes of „DEG“, „RAD“ and „GRAD“. The third one was kind of an attempt to metricize degrees by having 100 instead of 90 for a right angle, which seems to be a dead idea. Of course in advanced mathematics and physics the „RAD“, which uses π/2 instead of 90 for a right angle, is common, and that is what all programming languages that I know use, apart from the calculators. Just to explain the functions for those who are not familiar with the whole set, we can express the last four in terms of sin and cos:
• tan x = sin x / cos x (tangent)
• cot x = cos x / sin x (cotangent)
• sec x = 1 / cos x (secans)
• csc x = 1 / sin x (cosecans)
Then we have the inverse trigonometric functions, which can be denoted with something like arcsin or sin⁻¹ for all six trigonometric functions. There is an irregularity to keep in mind. We write sin²x instead of (sin x)² for the square, where the superscript otherwise means repeated multiplication, and we use sin⁻¹ to apply the function „−1 times“, which is actually the inverse function. Mathematicians have invented this irregularity, and usually it is convenient, but it confuses those who do not know it. Of these functions many programming languages offer only atan, assuming the other five can be created from that. This is true, but cumbersome, because it needs to differentiate a lot of cases using something like if, so there are likely to be many bugs in software doing this. Also these ad hoc implementations lose some precision.
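Java, for example, does offer asin, acos and atan; the remaining three inverse functions can be derived from them, but as noted the case distinctions are easy to get wrong. A minimal sketch (the method names are my own, not a standard API):

```java
public class InverseTrig {
    // arccot with the conventional range (0, pi): the naive atan(1/x)
    // lands on the wrong branch for x < 0, so a case distinction is needed
    public static double acot(double x) {
        return x > 0 ? Math.atan(1 / x)
             : x < 0 ? Math.atan(1 / x) + Math.PI
             : Math.PI / 2;               // acot(0) = pi/2
    }

    // only defined for |x| >= 1
    public static double asec(double x) { return Math.acos(1 / x); }
    public static double acsc(double x) { return Math.asin(1 / x); }

    public static void main(String[] args) {
        System.out.println(acot(1.0));    // pi/4
        System.out.println(acot(-1.0));   // 3*pi/4, not -pi/4
        System.out.println(asec(2.0));    // pi/3
    }
}
```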
It was also common to have a conversion from polar coordinates to rectangular coordinates (p2r) and vice versa (r2p), which is kind of cool and again easy, but not too trivial to do ad hoc. Something like atan2 in FORTRAN, which does the essence of the harder r2p operation, would work also, depending on how convenient it is to deal with multiple return values. We can then do r2p using r = √(x² + y²) and φ = atan2(y, x), and p2r by x = r·cos(φ) and y = r·sin(φ).
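In Java, which has atan2 and hypot but no p2r/r2p, the two conversions could be sketched like this (the names p2r and r2p are taken from the calculator keys, not from any library):

```java
public class PolarRect {
    // r2p: rectangular (x, y) -> polar (r, phi); atan2 handles all quadrants
    public static double[] r2p(double x, double y) {
        return new double[] { Math.hypot(x, y), Math.atan2(y, x) };
    }

    // p2r: polar (r, phi) -> rectangular (x, y)
    public static double[] p2r(double r, double phi) {
        return new double[] { r * Math.cos(phi), r * Math.sin(phi) };
    }

    public static void main(String[] args) {
        double[] polar = r2p(3, 4);                 // r = 5, phi = atan2(4, 3)
        double[] rect  = p2r(polar[0], polar[1]);   // back to (3, 4)
        System.out.println(rect[0] + " " + rect[1]);
    }
}
```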
The hyperbolic functions like sinh, cosh and tanh and their inverses like asinh or arsinh are rarely used, but we find them on the calculator and in the math book, so we should have them in the standard floating point library. There is only one flavor of them.
Logarithms and exponential functions are found in two flavors on calculators: log and 10^x, and ln and e^x. The log is kind of confusing, because in mathematics and physics and in most current programming languages „log“ means ln (the natural logarithm). This is just wrong naming on calculators, even if they all made the same mistake across all vendors and probably still do in the scientific calculator app on the phone or on the desktop. As IT people we tend to like the base two logarithm ld, so I would tend to add that to the list. Just to make the confusion complete, in some informatics text books and lectures the term „log“ refers to the base two logarithm. It is a bad habit, and at least laziness should favor writing the correct „ld“.
Then we usually have power functions x^y, which surprisingly many programming languages do not have. If they do, it is usually written as x ** y or pow(x, y), plus square root, square and maybe cube root and cube. Even though the square root and the cube root can be expressed as powers using x^(1/2) and x^(1/3), it is better to have them as dedicated functions, because they are used much more frequently than any other power with non-integral exponents, and it is possible to write optimized implementations that run faster and more reliably than the generic power, which usually needs to go via log and exp. Internal optimization of power functions is usually a good idea for integral exponents and can easily be achieved, at least if the exponent is actually of an integer type.
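The usual optimization for integral exponents is exponentiation by squaring, which needs only O(log n) multiplications and avoids the detour via log and exp. A sketch in Java (the method name pow is my own, not a library API):

```java
public class IntPow {
    // Exponentiation by squaring: exact for integral exponents,
    // O(log n) multiplications instead of going via log and exp.
    public static double pow(double x, int n) {
        if (n < 0) {
            // note: n == Integer.MIN_VALUE would overflow on negation;
            // ignored in this sketch
            return 1.0 / pow(x, -n);
        }
        double result = 1.0;
        while (n > 0) {
            if ((n & 1) == 1) {
                result *= x;  // current bit of the exponent is set
            }
            x *= x;           // square for the next bit
            n >>= 1;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(pow(2.0, 10));  // 1024.0
        System.out.println(pow(3.0, -2));
    }
}
```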
Factorial and binomial coefficient are usually used for integers, which is not part of this discussion. Extensions for floating point numbers can be defined, but they are beyond the scope of advanced school mathematics and of common scientific calculators. I do not think that they are needed in a standard floating point library. It is of its own interest what could be in an „advanced math library“, but the functions discussed above for sure belong into the base math library.
That’s it. It would be easy to add all these into the standard library of any programming language that does floating point arithmetic at all and it would be helpful for those who work with this and not hurt at all those who do not use it, because this stuff is really small compared to most of our libraries. So this would be the list
• sin, cos, tan, cot, sec, csc in two flavors
• asin, acos, atan, acot, asec, acsc (standing for arcsin, arccos, arctan, arccot, arcsec, arccsc) in two flavors
• p2r, r2p (polar coordinates to rectangular and reverse) or atan2
• sinh, cosh, tanh, coth, sech, csch
• asinh, acosh, atanh, acoth, asech, acsch (for the inverse hyperbolic functions arsinh, arcosh, artanh, arcoth, arsech, arcsch)
• exp, log (for e^x and the natural logarithm, base e)
• exp10, exp2, log10, log2 (base 10 and base 2, I would not rely on knowledge that ld and lg stand for log2 and log10, respectively, but name them like this)
• sqrt, cbrt (for √x and ∛x)
• ** or pow with double exponent
• ** or pow with integer exponent (maybe the function with double exponent is sufficient)
• square, cube and reciprocal are maybe actually not needed, because we can just write them using ** and /
Actually pretty much every standard library contains sin, cos, tan, atan, exp, log and sqrt.
## Java
Java is actually not so bad in this area. It contains the atan2, sinh, cosh, tanh, asin, acos, atan, log10 and cbrt functions, beyond what almost any library contains. And it contains conversions from degrees to radians and vice versa. And as you can see here in the source code of pow, the calculations are actually quite sophisticated and done in C. It seems to be inspired by GNU Classpath, which did a similar implementation in Java. It is typical that a function that has a uniform mathematical definition gets very complicated internally, with many cases, because depending on the parameters different ways of calculation provide the best precision. It is quite possible that this function is so good that calling it with an integer as second parameter, which is then converted to a double, is good enough and leaves no need for a specific function with an integer exponent. I would tend to assume that this is the case.
In this github project we can see what a library could look like that completes the list above, includes unit tests and works also for the edge cases, which ad hoc solutions often do not. What could be improved is providing the best possible precision for any legitimate parameters, which I would see as an area of further investigation and improvement. The general idea is applicable to almost any programming language.
Two areas that have been known for a great need of such additional libraries are collections and date & time. I would say that a lot of what I would wish from a decent collection library has been addressed by Guava. Getting date and time right is surprisingly hard; just think of the year-2000 problem to see the significance of this issue. I would say Java had messed this one up, but Joda Time was a good solution and has made it into the standard distribution of Java 8.
## Summary
This may serve as an example. There are usually some functions missing for collections, strings, dates, integers etc. I might write about them as well, but they are less obvious, so I would like to collect some input before writing about that.
libc on Linux seems to contain sin, cos, tan, asin, acos, atan, atan2, sinh, cosh, tanh, asinh, acosh, atanh, sqrt, cbrt, log10, log2, exp, log, exp10 and exp2. Surprisingly Java does not make use of these functions, but comes with its own.
Actually a lot of functionality is already in the CPU-hardware. IEEE-recommendations suggest quite an impressive set of functions, but they are all optional and sometimes the accuracy is poor.
But standard libraries should be slightly more complete and ideally there would be no need to write a „generic“ util-library. Such libraries should only be needed for application specific code that is somewhat generic across some projects of the organization or when doing a real demanding application that needs more powerful functionality than can easily be provided in the standard library. Ideally these can be donated to the developers of the standard library and included in future releases, if they are generic enough. We should not forget, even programming languages that are main stream and used by thousands of developers all over the world are usually maintained by quite small teams, sometimes only working part time on this. But usually it is hard to get even a good improvement into their code base for an outsider.
So what functions do you usually miss in the standard libraries?
When Java was created, the concept of operator overloading was already present in C++. I would say that it was generally well done in C++, but it kind of breaks the object oriented polymorphism patterns of C++ and the usual way was to have several overloaded functions to allow for all n² combinations.
In the early days of C++ people jumped on this feature and used it for all kinds of stuff that had nothing to do with the original concept of numeric operators, like adding dialog boxes to strings and multiplying that with events. This moved us a little bit towards what APL was: a language with only operators and a special charset to provide all the language features, requiring even a special keyboard:
APL example
You can find an article in Scott Locklin’s Blog about APL and other almost forgotten languages and the potential loss of some achievements that they tried to bring to us.
We see the same with some people in Scala who create a lot of operators using interesting Unicode characters. This is not necessarily wrong, but I think operators should only be used for something that is really important. Not in the sense „I wrote functionality XYZ for library UVW, and this is really important“, but in the sense that this functionality is so commonly used that people have no problem remembering the operator. Or the operator is already known to us, like „+“, „-“, „*“, … for numeric types, but I still have no idea what adding a string to an event would mean.
In C++ it got even worse, because it was possible to overload „->“ or new and thus dig deep into the language, which can be interesting when used carefully and skillfully by developers who really know what they are doing, but disastrous otherwise.
Now Java opted not to support operator overloading, which was wrong even at that time, but understandable, because back then we were still more in the mindset of counting bits and living with the deficiencies of int and long, and we were also seeing the weird abuses of operator overloading in C++. Maybe it was also the lack of time to design a sound mechanism for this in Java. Unfortunately this decision, made in a context more than 20 years ago, has kind of become religious. Interestingly James Gosling, when asked in an interview for the 20 year anniversary of Java, mentioned operator overloading for numeric types as the first thing that he would have done better. (It is around minute 9.) So I hope that this undoes the religious aspect of this topic.
An interesting idea will probably be included in future versions of Scala. An operator is in principle defined as a method of the left operand, which is quite logical, but it would imply writing something like e = (a.*(b)).+(c.*(d)), possibly with fewer parentheses. Now this is recognized as an operator method, so the dots can go away as well as the parentheses, and the common operator precedence applies, so e = a * b + c * d works as well and is what we find natural. Ruby and Scala are very similar in this aspect. Now some future version of Scala, maybe Scala 3, will introduce an annotation that allows the „infix“ notation for these methods and adds a descriptive name. Then error messages and even IDE support could give us access to the descriptive name, and we would be able to search for it, while searching for something like „+“ or „-“ or „*“ would not really be helpful. I think that this idea would be useful for other languages as well.
These examples demonstrate the BigInteger types of Java, C#, Scala, Clojure and Ruby, respectively:
import java.math.BigInteger;
public class JavaBigInt {
    public static void main(String[] args) {
        BigInteger f = BigInteger.valueOf(2_000_000_000L);
        BigInteger p = BigInteger.ONE;
        for (int i = 0; i < 8; i++) {
            System.out.println(i + " " + p);
            p = p.multiply(f);
        }
    }
}
gives this output:
0 1
1 2000000000
2 4000000000000000000
3 8000000000000000000000000000
4 16000000000000000000000000000000000000
5 32000000000000000000000000000000000000000000000
6 64000000000000000000000000000000000000000000000000000000
7 128000000000000000000000000000000000000000000000000000000000000000
And the C#-version
using System;
using System.Numerics;
public class CsInt {
    public static void Main(string[] args) {
        BigInteger f = 2000000000;
        BigInteger p = 1;
        for (int i = 0; i < 8; i++) {
            Console.WriteLine(i + " " + p);
            p *= f;
        }
    }
}
gives exactly the same output:
0 1
1 2000000000
2 4000000000000000000
3 8000000000000000000000000000
4 16000000000000000000000000000000000000
5 32000000000000000000000000000000000000000000000
6 64000000000000000000000000000000000000000000000000000000
7 128000000000000000000000000000000000000000000000000000000000000000
Or the Scala version
object ScalaBigInt {
  def main(args: Array[String]): Unit = {
    val f : BigInt = 2000000000;
    var p : BigInt = 1;
    for (i <- 0 until 8) {
      println(i + " " + p);
      p *= f;
    }
  }
}
0 1
1 2000000000
2 4000000000000000000
3 8000000000000000000000000000
4 16000000000000000000000000000000000000
5 32000000000000000000000000000000000000000000000
6 64000000000000000000000000000000000000000000000000000000
7 128000000000000000000000000000000000000000000000000000000000000000
Or in Clojure it looks like this, slightly shorter than the Java and C# versions:
(reduce (fn [x y] (println y x) (*' 2000000000 x)) 1 (range 8))
with the same output again, but a much shorter program. Please observe that the multiplication needs to use "*'" instead of "*" in order to auto-promote from fixed length integers to big integers.
0 1
1 2000000000
2 4000000000000000000
3 8000000000000000000000000000N
4 16000000000000000000000000000000000000N
5 32000000000000000000000000000000000000000000000N
6 64000000000000000000000000000000000000000000000000000000N
7 128000000000000000000000000000000000000000000000000000000000000000N
Or in Ruby it is also quite short:
f = 2000000000
p = 1
8.times do |i|
  puts "#{i} #{p}"
  p *= f
end
the same result, without any special effort, because integers always expand to the needed size:
0 1
1 2000000000
2 4000000000000000000
3 8000000000000000000000000000
4 16000000000000000000000000000000000000
5 32000000000000000000000000000000000000000000000
6 64000000000000000000000000000000000000000000000000000000
7 128000000000000000000000000000000000000000000000000000000000000000
So I suggest to leave the IT-theology behind and to consider the pragmatic issues now.
In Java we have primitive numeric types that are basically inadequate for application development, because they tacitly overflow and because application developers usually have no idea how to deal with the rounding issues of float and double. We have good numeric types like BigInteger and BigDecimal: BigInteger supports arbitrarily long integral numbers, which do not overflow unless we exceed memory or addressability limits with numbers of several billion digits, and BigDecimal allows for controlled rounding and also arbitrary precision.
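The controlled rounding of BigDecimal can be shown in a short example; note that a division without an explicit scale and rounding mode would throw an ArithmeticException for a non-terminating decimal expansion like 2/3:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class BigDecimalRounding {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("2.00");
        BigDecimal b = new BigDecimal("3.00");
        // Division must specify scale and rounding mode; otherwise 2/3
        // throws ArithmeticException (non-terminating decimal expansion).
        BigDecimal q = a.divide(b, 10, RoundingMode.HALF_UP);
        System.out.println(q);       // 0.6666666667

        // Rounding to a prescribed scale is explicit and controlled:
        BigDecimal amount = new BigDecimal("19.99").setScale(1, RoundingMode.HALF_UP);
        System.out.println(amount);  // 20.0
    }
}
```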
Now we have to write
e = a.multiply(b).add(c.multiply(d))
instead of
e = a * b + c * d
The latter is readable, it is exactly what we mean. The former is not readable at all and the likelihood of making mistakes is very high.
I would be happy with something like this:
e = a (*) b (+) c (*) d
where overloaded operators are surrounded with () or [] or something like that.
At some point in time a major producer of electronic calculators made us believe that it is more natural to express it like this (reverse Polish notation):
e a b * c d * + =
Maybe this way of writing math would be better, but it is not what we do outside of our computers and calculators. At least it was more natural for those who created the calculators to have this pattern, because it was much easier to implement in a clean way on limited hardware. We still have the opposite, prefix notation, in Lisp, which is still quite alive as Clojure, so I use the Clojure syntax:
(def x (+ (* a b) (* c d)))
which is relatively readable after some learning and allows for a very simple and regular and powerful syntax. But even this is not how we write Math outside of our computer.
Btw., please use elementary school math skills and do not write
e = (a * b) + (c * d)
That is just noise. I do not recommend memorizing all the 10 to 25 levels of operator precedence of a typical programming language, but it is good to know the basic ones, which almost any serious current programming language supports:
* binary * /
* binary + -
* == != <= >= < >
* &&
* ||
Some use "and" and "or" instead of "&&" and "||".
Now using overloaded operators should be no problem.
We do have an issue when implementing it.
Imagine you have a language with five built-in numeric types and you add a sixth one. "+" is probably already defined for 25 combinations. With the sixth type we get a total of 36 combinations, of which we have to provide the missing 11, plus a mechanism to dispatch the program flow to them. In C++ we just add 11 operator functions and that does everything. In Ruby we add a method for the left side of the operator. This method does not know our new type when the left operand is one of the existing types, but it deals with that by calling coerce on the right operand with the left operand as parameter. This is actually powerful enough to deal with this situation.
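The Ruby coerce mechanism can be sketched in Java with two hypothetical numeric types (Num, MyInt and MyRational are made up for illustration; real Ruby coercion works on the built-in numeric tower):

```java
// Ruby-style coercion sketch: when the left operand does not know the
// right operand's type, it asks the right operand to convert the pair
// to a common type and retries the operation there.
interface Num {
    Num add(Num other);
    Num[] coerce(Num left);  // convert left to this type: {leftConverted, this}
}

class MyInt implements Num {
    final long v;
    MyInt(long v) { this.v = v; }
    public Num add(Num other) {
        if (other instanceof MyInt) {
            return new MyInt(v + ((MyInt) other).v);
        }
        Num[] pair = other.coerce(this);  // unknown type: delegate via coerce
        return pair[0].add(pair[1]);
    }
    public Num[] coerce(Num left) {
        // an int cannot represent richer types, so it never coerces others
        throw new UnsupportedOperationException("cannot coerce to MyInt");
    }
    public String toString() { return Long.toString(v); }
}

class MyRational implements Num {
    final long num, den;
    MyRational(long num, long den) { this.num = num; this.den = den; }
    public Num add(Num other) {
        if (other instanceof MyInt) {
            other = new MyRational(((MyInt) other).v, 1);  // widen directly
        }
        MyRational r = (MyRational) other;
        return new MyRational(num * r.den + r.num * den, den * r.den);
    }
    public Num[] coerce(Num left) {
        if (left instanceof MyInt) {
            return new Num[] { new MyRational(((MyInt) left).v, 1), this };
        }
        throw new IllegalArgumentException("cannot coerce " + left);
    }
    public String toString() { return num + "/" + den; }
}

public class CoerceDemo {
    public static void main(String[] args) {
        Num a = new MyInt(1);
        Num b = new MyRational(1, 2);
        // MyInt does not know MyRational, so it asks it to coerce: 1 + 1/2
        System.out.println(a.add(b));  // 3/2
    }
}
```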
It gets even more tricky when we use different libraries that do not know of each other and each of them adds numeric types. Possibly we cannot add these numbers with each other, or we can do so only in a degraded manner by falling back to double or float or rational or something like that.
The numeric types that we usually use can be added with each other, but we could hit situations where that is not the case, for example when having p-adic numbers, which can be added with rational numbers, but not with real numbers. Or finite fields, whose members can be added with integral numbers or with numbers of the same field, but not necessarily with numbers of another finite field. Fortunately these issues should occur only to people who understand them while writing libraries. Using the libraries should not be hard, if they are properly done.
# Primitives, Objects and Autoboxing
The type system in Java makes a difference between so-called „primitives“, which are boolean, byte, char, short, int, long, float and double, and objects, which are anything derived from Object in object-oriented philosophy, including the special case of arrays, which I will not discuss today.
Primitive types have many operations that are kind of natural to perform on them, like arithmetic. They behave as values, so they are actually copied, which is no big deal, because they are at most 64 bits in size, which in modern Java implementations is the size of a pointer used for references. Now a major benefit of object orientation is arguably polymorphism, and this has been heavily used when implementing useful libraries like the collection classes, which were based mostly on Object and thus able to handle anything derived from Object. This has not changed with generics; they are just another way of writing this, adding some compile time checks and casts in a more readable way, as long as the complexity of the generics constructions remains simple and under control. Actually I like this approach and find it much more healthy than templates in C++, but this is an IT-theological discussion that is not too relevant for this article.
Now there is a necessity of using collections for numeric types. Even though I do recommend to thoroughly think about using types like BigInteger and BigDecimal, there are absolutely legitimate uses of long, int, boolean, double, char and less frequently short, byte and float. The only one that is really flawless of these is boolean, while the floating point numbers, the fixed size integral numbers (also this) and the Strings and chars in Java have serious flaws, some of which I have discussed in the linked articles.
Now we need to use the wrapper types Integer, Long, Double and Boolean instead of int, long, double and boolean to store them in collections. This comes with some overhead, because these wrappers use some additional memory, and the wrapping and unwrapping costs some time. Usually this does not pose a problem, and using these wrappers is often an acceptable approach. Now we would be tempted to just work with the wrappers, but that is impossible, because the natural operations for the underlying boolean and numeric types just do not work with the wrappers, so we have to unwrap (or unbox) them.
Now Java includes a feature called „autoboxing and autounboxing“ which tries to create a wrapper object around a primitive when in an object context and which extracts the primitive when in a primitive context. This can be enforced by casting, to be sure.
There are some dangers in using this feature. The most interesting case is the „==“ operator. For objects, and thus also for the wrappers of the primitives, it always compares object identity based on the pointer address. For primitives that is simply impossible, so the comparison compares the value. I think that it was a mistake to define the „==“ operator like that; it should do a semantic comparison, and there should be something else for object identity, but that cannot be changed any more for Java. So we get some confusion when comparing boxed primitives with ==, or even worse when comparing boxed and unboxed primitives. Another confusion occurs when using auto-unboxing and the wrapper object is null. This of course creates a NullPointerException, but it is kind of hard to spot where it actually comes from.
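Both pitfalls in a small example. The cache for -128 to 127 is guaranteed by the language specification; values outside it are typically not cached with default VM settings, which is what makes == on boxed values so treacherous:

```java
public class AutoboxingPitfalls {
    public static void main(String[] args) {
        Integer a = 127, b = 127;  // small values come from a shared cache
        Integer c = 300, d = 300;  // outside the cache: distinct objects
        System.out.println(a == b);       // true  (same cached object)
        System.out.println(c == d);       // false (reference comparison!)
        System.out.println(c.equals(d));  // true  (value comparison)

        Integer nothing = null;
        try {
            int x = nothing;              // auto-unboxing of null
        } catch (NullPointerException e) {
            System.out.println("NPE from unboxing null");
        }
    }
}
```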
So I do see some value in using explicit boxing and unboxing to make things clearer. It is a good thing to talk about this in the team and find a common way. Now the interesting question is how boxing and unboxing are done. We are tempted to use something like this:
int x = ...; Integer xObj = new Integer(x);
This works, but it is not good, because it creates too many objects. We could reuse them, and Java provides for this and reuses wrapper objects for some small numbers. The recommended way for explicit boxing is this:
int x = ...; Integer xObj = Integer.valueOf(x);
This can reuse values. If we are using this a lot and know that our range of commonly used numbers is reasonably small but still beyond what Java assumes, it is not too hard to write something like „IntegerUtil“ and use it:
int x = ...; Integer xObj = IntegerUtil.valueOf(x);
Look if you can find an implementation that fits your needs, instead of writing it. But it is no pain to write it.
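A hypothetical IntegerUtil with an extended cache could look like this sketch (the cache bounds are arbitrary example values; Integer.valueOf itself only guarantees caching for -128 to 127):

```java
// Hypothetical IntegerUtil: same contract as Integer.valueOf, but with a
// larger cache, for applications that box values well beyond 127 a lot.
public class IntegerUtil {
    private static final int MIN = -1024, MAX = 1 << 16;  // example bounds
    private static final Integer[] CACHE = new Integer[MAX - MIN + 1];
    static {
        for (int i = MIN; i <= MAX; i++) {
            CACHE[i - MIN] = i;  // boxed once, reused forever
        }
    }

    public static Integer valueOf(int x) {
        return (x >= MIN && x <= MAX) ? CACHE[x - MIN] : Integer.valueOf(x);
    }

    public static void main(String[] args) {
        // same object from our cache, while plain boxing of 50_000 is not cached
        System.out.println(valueOf(50_000) == valueOf(50_000));  // true
    }
}
```

The price is a fixed chunk of memory for the cache, which only pays off if the cached range really matches the commonly used numbers.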
Unboxing is also easy:
Integer xObj = ....; int x = xObj.intValue();
The methods intValue(), longValue(), doubleValue(), … are actually in the base class Number, so it is possible to unbox and convert the numeric type in one step with these.
Decide how much readability you want.
It is useful to look at the static methods of the wrapper classes, even for converting numbers to strings and strings to numbers. Avoid using the constructors; they are rarely necessary (and actually deprecated since Java 9), and some neat optimizations that the Java libraries give us for free only work when we use the right methods. This does not make a huge difference, but doing it right does not hurt and makes code more readable.
It is also interesting how the extended numeric types like BigInteger and BigDecimal work similar to the wrapper types and to use them right.
Another interesting option is to use specific collection implementations for primitives. This may add to the complexity of our code, because it gives up another piece of polymorphism, but they can really save our day by giving better performance. And in cases where we actually know for sure that the data always belongs to a certain primitive type, I find this even idiomatic.
Other languages have solved the issues discussed here in a more elegant way by avoiding this two sided world of primitives and wrappers or by making the conversions less dangerous and more natural. They have operator overloading for numeric types and they use a more consistent concept of equality than Java.
# UTF-16 Strings in Java
Strings in Java and many other JVM languages consist of Unicode content and are encoded as utf-16. It was fantastic to already consider Unicode when introducing Java in the 90s and to make it the only reasonable way to use strings, so there is no temptation to start with a „US-ASCII“ version of a software that never really gets enhanced to deal properly with non-English content and data. It also avoids having to deal with many kinds of string encodings within the software. For outside storage in databases and files and for communication with other processes and over the network, these issues of course remain. But Java strings are always encoded utf-16. We can be sure. Most common languages can make use of these strings easily, and handling common letter-based languages from Europe and western Asia is quite straightforward. Rendering may still be a challenge, but that is another issue. A major drawback of this approach is that more memory is used. This usually does not hurt too much. Memory is not so expensive, and a couple of strings, even with utf-16, will not be too big. With 64-bit Java, which should be used these days, the memory limitations of the JVM are not relevant any more; it can basically use as much memory as provided.
But some applications do hit the memory limits. And since usually most of the data we are dealing with ultimately consists of strings and combinations of relatively long strings with some reference pointers, some integer numbers and more strings, we can say that in typical memory intensive applications strings actually consume a large fraction of the memory. This leads to the temptation of using or even writing a string library that uses utf-8 or some other more condensed internal format, while still being able to express any Unicode content. This is possible and I have done it. Unfortunately it is very painful, because strings are quite deeply integrated into the language, and explicit conversions need to be added in many places to deal with this. But it is possible and can save a lot of memory. In my case we were able to abandon this approach, because other optimizations, which were less painful, proved to be sufficient.
An interesting idea is to compress strings. If they are long enough, algorithms like gzip work on a single string. As with utf-8, selectively accessing parts of the string becomes expensive, because it can only be achieved by parsing the string from the beginning or by adding indexing structures. We just do not know which byte to go to for accessing the n-th character, even with utf-8. In reality we often do not have long strings, but rather many relatively short strings, and those do not compress well by themselves. If we know our data and have a rough idea about the content of our complete set of strings, custom compression algorithms can be generated. This allows good results even for relatively short strings, as long as they are within the „language“ that we are prepared for. This is more or less similar to the step from utf-16 to utf-8, because we replace common byte sequences by shorter byte sequences, and less common sequences may even get replaced by something longer. There is no gain in utf-8 if we have mostly strings in non-Latin alphabets. Even Cyrillic or Greek, which are alphabets similar to the Latin alphabet, will end up needing two bytes for each letter, which is not at all better than utf-16. For other alphabets it will even become worse, because three or four bytes are needed for one symbol that could easily be expressed with two bytes in utf-16. But if we know our data well enough, the approach with the specific compression will work fine. The „dictionary“ for the compression needs to be stored only once, maybe hard-coded in the software, and not in each string. It might be of interest to consider building the dictionary dynamically at run-time, like it is done with gzip, but keeping it in a common place for all strings and thus sharing it. The custom strings that I used were actually using a hard-coded compression algorithm generated from a large amount of typical data.
It worked fine, but was just too clumsy to use because Java is not prepared to replace String with something else without really messing around in the standard run-time libraries, which I would neither recommend nor want.
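The idea of a shared, hard-coded dictionary can be sketched with java.util.zip, which supports preset dictionaries for deflate. The dictionary content here is a made-up example of „typical“ data; in a real system it would be generated from a large sample, as described above:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Compressing many short strings with one shared dictionary: the
// dictionary is stored once (here hard-coded), not in each string,
// so even short strings compress reasonably well.
public class SharedDictCompression {
    static final byte[] DICT =
        ("order invoice customer address street city state zip "
       + "payment delivery status pending shipped cancelled")
        .getBytes(StandardCharsets.UTF_8);

    public static byte[] compress(String s) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setDictionary(DICT);  // must be set before any input
        deflater.setInput(s.getBytes(StandardCharsets.UTF_8));
        deflater.finish();
        byte[] buf = new byte[4096];   // large enough for short strings
        int n = deflater.deflate(buf);
        deflater.end();
        return Arrays.copyOf(buf, n);
    }

    public static String decompress(byte[] compressed) {
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        byte[] buf = new byte[4096];
        try {
            int n = inflater.inflate(buf);  // returns 0: dictionary needed
            if (inflater.needsDictionary()) {
                inflater.setDictionary(DICT);
                n = inflater.inflate(buf);
            }
            return new String(buf, 0, n, StandardCharsets.UTF_8);
        } catch (DataFormatException e) {
            throw new IllegalStateException(e);
        } finally {
            inflater.end();
        }
    }

    public static void main(String[] args) {
        String s = "invoice for customer, delivery status shipped";
        byte[] c = compress(s);
        System.out.println(s.length() + " chars -> " + c.length + " bytes");
        System.out.println(decompress(c).equals(s));  // true
    }
}
```

A production version would additionally need to version the dictionary, since compressed data becomes unreadable if the dictionary ever changes.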
It is important to consider the following issues:
1. Is the memory consumption of the strings really a problem?
2. Are there easier optimizations that solve the problem?
3. Can it just be solved by adding more hardware? Yes, this is a legitimate question.
4. Are there solutions for the problem in the internet or even within the current organization?
5. A new String class is so fundamental that excellent testing is absolutely mandatory. The unit tests should be very extensive and complete.
# Will Java, C, C++ and C# be the new Cobols?
A few decades ago most programming was performed in Cobol (I do not want to shout it), Fortran, Rexx and some typical mainframe languages that hardly made it to the Linux, Unix or MS-Windows world. They are still present, but mostly used for maintenance and extension of existing software, and less often for writing new software from scratch.
These days, languages like C, C++, Java and, to a slightly lesser extent, C# dominate the list of most commonly used languages. I would assume that JavaScript is also quite prominent in the list, because it has become more popular to write rich web clients using frameworks like AngularJS, and there are tons of them, including some really good stuff. Some people like to see JavaScript on the server side as well, and in spite of really interesting frameworks like Node.js I do not really consider this a good idea. If you like, you may add Objective-C to this list, which I do not know very well at all, even though it has been part of my gcc since my first Linux installation in the early 1990s.
Anyway, C goes back to the 1970s, and I think that it was a great language to create at that time and it still is for a certain set of purposes. When writing operating systems, database engines, compilers and interpreters for other languages, editors, or embedded software, everything that is very close to the hardware, explicit control and direct access to very powerful OS APIs prove to be useful features. It has been said that Java runs as fast as C, which is at least close to the truth, but only if we do not take the memory usage into account. C has some shortcomings that could be fixed without sacrificing its strengths in the areas where it is useful, but that does not seem to be happening.
C++ started out as the OO extension of C, but I would say that it has evolved into a totally different language for different purposes, even though there is some overlap: it is relatively easy to call functionality written in C from C++, and a little bit harder the other way round. I have not used it very much recently, so I will refrain from commenting further on it.
Java has introduced an infrastructure that is very common now, with its virtual machine. The JVM is running on a large number of servers, and any JVM language can be used there. The platform independence is an advantage, but I think that its importance on servers has diminished a little bit. There used to be all kinds of servers with different operating systems and different CPU architectures, but now we are moving towards servers being mostly Linux with Intel-compatible CPUs, so it is becoming less of an issue. This may of course change again in the future.
With Mono, C# can be used in ways similar to Java; at least that is what the theory says, and it seems to be quite true up to a certain level. C# seems to be a bit ahead of Java with some language features: just think of operator overloading, undeclared exceptions, properties, generics or lambdas, which were introduced earlier or in a more elegant way, or for which we are still waiting in Java. I think the case of lambdas also shows the limitations, because they seem to behave differently than you would expect from real closures, which is the way lambdas should be done and are done in more functionally oriented languages, or even in Ruby, Perl and typical Lisps.
Try this in C#:
List<Func<int>> actions = new List<Func<int>>();
int variable = 0;
while (variable < 5) {
    actions.Add(() => variable * 2);
    ++variable;
}
foreach (var act in actions) {
    Console.WriteLine(act.Invoke());
}
We would expect the output 0, 2, 4, 6, 8, but we actually get 10, 10, 10, 10, 10 (one number per line), because all five lambdas capture the same variable, which has the value 5 by the time they are invoked.
But it can be fixed:
List<Func<int>> actions = new List<Func<int>>();
int variable = 0;
while (variable < 5) {
    int copy = variable;
    actions.Add(() => copy * 2);
    ++variable;
}
foreach (var act in actions) {
    Console.WriteLine(act.Invoke());
}
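Java, by comparison, sidesteps this particular pitfall by simply forbidding the capture of mutable local variables: a lambda may only reference locals that are final or effectively final, so the explicit copy is forced by the compiler. A minimal sketch (the class name is just for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntSupplier;

public class CaptureDemo {
    public static void main(String[] args) {
        List<IntSupplier> actions = new ArrayList<>();
        for (int variable = 0; variable < 5; variable++) {
            final int copy = variable;          // the explicit copy is mandatory in Java
            actions.add(() -> copy * 2);
            // actions.add(() -> variable * 2); // would not compile: "variable" is
            //                                  // not final or effectively final
        }
        for (IntSupplier act : actions) {
            System.out.println(act.getAsInt()); // prints 0, 2, 4, 6, 8
        }
    }
}
```

So Java does not give us real closures either, but at least it refuses to compile the surprising variant instead of silently printing five tens.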
I would say that the concept of inner classes is better in Java, even though what is static there should be the default; but with lambdas available this is less important…
There are more issues with class loaders, which are kind of hard to tame in Java, but extremely powerful.
Anyway, I think that all of these languages tend to be similar in their syntax, at least within a method or function, and require a lot of boilerplate code. Another issue that I see is that the basic types, which include strings (even if the language design treats them as basic types), are not powerful enough or are full of pitfalls.
While the idea of using just null-terminated character arrays as strings in C has its beauty, I think it is actually not good enough, and for more serious C applications a more advanced string library would be desirable, with the disadvantage that different libraries will end up using different string types… Anyway, for stuff that is legitimately done with C now, this is not so much of an issue, and legacy software has its legacy ways of dealing with strings, with possibly painful limitations in conjunction with Unicode.
Java and also C# were introduced at a time when Unicode was already around and the standard already claimed more than 65536 code points ("characters" in Unicode-speak), but at that time 65536 seemed quite sufficient to cover the needs of all common languages, so utf-16 was picked as the encoding. This blows up the memory, because strings occupy most of the memory in typical application software, and it still leaves us with uncertainties about length and position, because a code point can be one or two 16-bit "characters" long, which can only be determined by actually iterating through the string. That leaves us where we were with null-terminated strings in C. And strings are really hard to replace or enhance in this respect, because they are used everywhere.
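The length/position uncertainty can be seen directly in Java; U+1F600 (an emoji) needs a surrogate pair, i.e. two 16-bit units, and finding the n-th code point requires walking the string:

```java
public class CodePointDemo {
    public static void main(String[] args) {
        // 'a', then U+1F600 as the surrogate pair \uD83D\uDE00, then 'b':
        String s = "a\uD83D\uDE00b";
        System.out.println(s.length());                      // 4 UTF-16 units
        System.out.println(s.codePointCount(0, s.length())); // 3 code points
        // The start of the third code point is at index 3, not 2,
        // and computing it means iterating from the beginning:
        System.out.println(s.offsetByCodePoints(0, 2));
    }
}
```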
Numbers are not good either. As application developers we should not have to care about counting bits, unless we are in an area that needs to be specifically optimized. We mostly use integer types in application development, or at least we should. These overflow silently. Just to see it in C#:
int i = 0;
int s = 1;
for (i = 0; i < 20; i++) {
    s *= 7;
    Console.WriteLine("i=" + i + " s=" + s);
}
which gives us:
i=0 s=7
i=1 s=49
i=2 s=343
i=3 s=2401
i=4 s=16807
i=5 s=117649
i=6 s=823543
i=7 s=5764801
i=8 s=40353607
i=9 s=282475249
i=10 s=1977326743
i=11 s=956385313
i=12 s=-1895237401
i=13 s=-381759919
i=14 s=1622647863
i=15 s=-1526366847
i=16 s=-2094633337
i=17 s=-1777531471
i=18 s=442181591
i=19 s=-1199696159
So it silently overflows, or rather just keeps the remainder modulo 2^32 in the two's-complement representation. Java, C and C++ behave exactly the same way, only that we need to know what "int" means for our C compiler; if we use 32-bit ints, it is the same. This should throw an exception or switch to some unlimited long integer. Clojure offers both options, depending on whether you use * or *' as the operator. So as application developers we should not have to care about these bits, and most developers do not think about it. Usually it goes well, but a lot of software bugs are around due to this pattern. It is just wrong in C#, Java and C++. In C I find it more acceptable, because the typical area for using C for new software is actually quite related to bits and bytes, so the developers need to be aware of such issues all the time anyway.
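Java at least offers both remedies as library calls, even though they are not the default: Math.multiplyExact throws instead of wrapping, and BigInteger is the unlimited integer. Repeating the powers-of-7 loop this way, the exception fires exactly where the table above starts producing garbage (i=11):

```java
public class OverflowDemo {
    public static void main(String[] args) {
        int s = 1;
        for (int i = 0; i < 20; i++) {
            try {
                s = Math.multiplyExact(s, 7); // throws instead of overflowing silently
            } catch (ArithmeticException e) {
                System.out.println("overflow at i=" + i);
                break;
            }
            System.out.println("i=" + i + " s=" + s);
        }
        // BigInteger is the "unlimited long integer" alternative:
        System.out.println(java.math.BigInteger.valueOf(7).pow(20));
    }
}
```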
So now we have a lot of software in C, C++, Java and C#, and a lot of new software is written in these languages, even from scratch. We could do better; sometimes we do, sometimes we don't. It is possible to write excellent application software with Java, C++, C# and even C. It just takes a bit longer, but if we use them with care, it will be ok. Some companies are very conservative and want to use stuff that has been around for a long time. This is sometimes right and sometimes wrong. And since none of the more modern languages has picked up enough speed to be considered a new mainstream, it is understandable that some organizations are scared of marching into a dead-end road.
On the other hand, many businesses can differentiate themselves by providing services that are only possible with a very innovative IT. Banks like UBS and Credit Suisse in Switzerland are not likely to be there, while banks like ING are on that road. As long as they compete for totally different customer bases, and as long as the business has enough strengths that do not depend so heavily on an innovative IT, but just on a working, robust IT, this will be fine. But time moves on, and innovation will eventually out-compete conservative businesses.
# Frameworks for Unit Testing and Mocking
Unit testing has fortunately become an important issue in many software projects. The idea of automatic, software-based unit and integration tests is actually quite old. The typical Linux software that is downloaded as source code and then built with steps like
tar xvfz «software-name-with-version».tar.gz
cd «software-name-with-version»
./configure
make
sudo make install
often allows a step
make test
or
make check
or even both before the
make install
It was like that already in the 1990s, when the term "unit test" was unknown and the whole concept had not been popularized to the mainstream.
What we need is to write those automated tests to an extent that gives us good confidence that the software will be sufficiently free of bugs if it passes the test suite. The tests can be written in any language, and I do encourage you to think about using other languages, in order to be less biased and more efficient when writing the tests. We may choose to write a piece of software in C, C++ or Java for the sake of efficiency or easier integration into the target platform. But these languages are efficient in their usage of CPU power, not in their usage of developer time for writing a lot of functionality. This is ok for most projects, because the effort it takes to develop with these languages is accepted in exchange for the anticipated benefits. For testing, it is another issue.
On the other hand there are of course advantages in using actually the same language for writing the tests, because it is easier to access the APIs and even internal functionalities during the tests. So it may very well be that Unit tests are written in the same language as the software and this is actually what I am doing most of the time. But do think twice about your choice.
Now writing automated tests is actually no magic. It does not really need frameworks, and is quite easy to accomplish manually. All we need is two areas in our source code tree: one that goes into the production code and one that is only used for the tests and remains on the development and continuous integration machines. Since writing automated tests without frameworks is not really a big deal, we should only look at frameworks that are really simple and easy to use, or that give us really good features we actually need. This is the case with many such frameworks, so the way to go is to use them, save some time and make the structure more accessible to other team members who know the same testing framework. Writing and running unit tests should be really easy, otherwise it is not done, or the unit tests are disabled, lose contact with the actual software and become worthless.
Bugs are much more expensive the later they are discovered, so we should try to find as many of them as possible while developing. Writing unit tests and automated integration tests is a good thing, and writing them early is even better. The pure test-driven approach does so before actually writing the code. I recommend this for bug fixing, whenever possible.
There is one exception to this rule: when writing GUIs, automated testing is possible, but quite hard. Here we should have UX people involved and present them with early drafts of the software. If we had already developed elaborate Selenium tests by then, it would be painful to change the software according to the advice of the UX people and to rewrite the tests. So I would keep this area flexible until we are on the same page as the UX people, and add the tests later.
Frameworks that I like are CUnit for C, JUnit for Java, where TestNG would be a viable alternative, and Google Test for C++. CUnit works extremely well on Linux and probably on other Unix-like systems like Solaris, AIX, MacOS X, BSD etc. There is no reason why it should not work on MS-Windows. With Cygwin it is actually extremely easy to use, but with native Win32/Win64 it seems to take an effort to get it working, probably because MS-Windows is no priority for the developers of CUnit.
Now we should use our existing structures where we can, but there can be reasons to mock a component or functionality. It can be because the component does not exist yet during development. Maybe we want to see if the component is accessed the right way, and this is easier to track with a mock that records the calls than with the real thing, which does some processing and gives us only the result. Or maybe we have a component which is external and not always available, or available but too time-consuming for most of our tests.
Again, mocking is no magic and can be done without tools and frameworks. So the frameworks should again be very easy and friendly to use, otherwise they are just a pain in the neck. Early mocking frameworks were often too ambitious and too hard to use, and I would have avoided them whenever possible. In Java, mocking manually is quite easy: we just need an interface of the mocked component and create an implementing class. Then we need to add all missing methods, which tools like Eclipse will do for us, and change some of them. That's it. Now we have Mockito for Java and Google Mock, which is now part of Google Test, for C++. In C++ we create a class that behaves like a Java interface by making all methods pure virtual, with the keyword "virtual" and "= 0" instead of an implementation, and a virtual destructor with an empty implementation. These frameworks are so easy to use and provide such useful features that they are actually good ways to go.
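A manual Java mock along these lines might look as follows. The interface and class names (MailService, Notifier) are made up for illustration; the point is that the mock records the calls so the test can check how the component was accessed:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical interface of the component we want to mock.
interface MailService {
    void send(String recipient, String body);
}

// A hand-written mock: it records the calls instead of doing real work.
class RecordingMailService implements MailService {
    final List<String> recipients = new ArrayList<>();
    @Override
    public void send(String recipient, String body) {
        recipients.add(recipient);
    }
}

// The code under test depends only on the interface, so the mock can be injected.
class Notifier {
    private final MailService mail;
    Notifier(MailService mail) { this.mail = mail; }
    void notifyUsers(List<String> users) {
        for (String user : users) {
            mail.send(user, "your report is ready");
        }
    }
}
```

A test then constructs a Notifier with the RecordingMailService and asserts on the recorded recipients; no framework is needed, though Mockito removes even this small amount of boilerplate.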
For C the approach is a little bit harder, because we do not have interfaces. The way to go is to create a library of the code that we want to test and that goes to production. Then we write one or more .c files for the test, which end up in an executable that actually runs the test. In these .c files we can provide a mock implementation for any function, and it takes precedence over the implementation from the library. For complete tests we will need more than one executable, because within one executable the set of mocked functions is fixed. There are tools on the web to help with this. I find it charming to generate the C code for the mocked functions from the header files, using scripts in Ruby or Perl.
Automated testing is so important that I strongly recommend making changes to the software in order to make it accessible to tests, of course within reason. A common trick is to make certain Java methods package private and to put the tests in the same package, but in a different directory. Document why they are package private.
It is important to discuss and develop the automated testing within the team, and to find and improve a common approach. Laziness is a good thing, but the right kind of laziness: running many automated tests in order to avoid manual testing, not being too lazy to write them and eventually spending more time on repetitive manual activities.
I can actually teach this in a two-day or three-day course.
# How to create ISO Date String
It is a more and more common task that we need to have a date, or a date with time, as a string.
There are two reasonable ways to do this:
* We may want the date formatted in the user's locale, whatever that is.
* We want to use a generic date format, that is for a broader audience or for usage in data exchange formats, log files etc.
The first issue is interesting, because it is not always trivial to teach the software to determine the right locale and to use it properly… The mechanisms are there and they are often used correctly, but too often this only works well for the locales that the software developers were asked to support.
So now the question is, how do we get the ISO-date of today in different environments.
## Linux/Unix-Shell (bash, tcsh, …)
date "+%F"
## TeX/LaTeX
\def\dayiso{\ifcase\day \or
01\or 02\or 03\or 04\or 05\or 06\or 07\or 08\or 09\or 10\or% 1..10
11\or 12\or 13\or 14\or 15\or 16\or 17\or 18\or 19\or 20\or% 11..20
21\or 22\or 23\or 24\or 25\or 26\or 27\or 28\or 29\or 30\or% 21..30
31\fi}
\def\monthiso{\ifcase\month \or
01\or 02\or 03\or 04\or 05\or 06\or 07\or 08\or 09\or 10\or 11\or 12\fi}
\def\dateiso{\def\today{\number\year-\monthiso-\dayiso}}
\def\todayiso{\number\year-\monthiso-\dayiso}
This can go into a file isodate.sty, which can then be included by \include or \input. Then using \todayiso in your TeX document will use the current date. To be more precise, it is the date when TeX or LaTeX is called to process the file. This is what I use for my paper letters.
## LaTeX
(From Fritz Zaucker, see his comment below):
\usepackage{isodate} % load package
\isodate             % switch to ISO format
\today               % print date according to current format
## Oracle
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD') FROM DUAL;
This function is documented in the Oracle Docs.
It can be chosen as a default for the whole session using ALTER SESSION, or it can be configured in SQL Developer. Then it is ok to just call
SELECT SYSDATE FROM DUAL;
Btw., Oracle allows adding numbers to dates; these are days. Use fractions of a day to add hours or minutes.
## PostgreSQL
(From Fritz Zaucker, see his comment):
select current_date;
--> 2016-01-08

select now();
--> 2016-01-08 14:37:55.701079+01
## Emacs
In Emacs I like to have the current date available immediately:
(defun insert-current-date ()
  "inserts the current date"
  (interactive)
  (insert
   (let ((x (current-time-string)))
     (concat (substring x 20 24)
             "-"
             (cdr (assoc (substring x 4 7) cmode-month-alist))
             "-"
             (let ((y (substring x 8 9)))
               (if (string= y " ") "0" y))
             (substring x 9 10)))))
(global-set-key [S-f5] 'insert-current-date)
Pressing Shift-F5 will put the current date into the cursor position, mostly as if it had been typed.
## Emacs (better Variant)
(From Thomas, see his comment below):
(defun insert-current-date ()
  "Insert current date."
  (interactive)
  (insert (format-time-string "%Y-%m-%d")))
## Perl
In the Perl programming language we can use a command line call
perl -e 'use POSIX qw/strftime/;print strftime("%F", localtime()), "\n"'
or, to use it in larger programs,
use POSIX qw/strftime/;
my $isodate_of_today = strftime("%F", localtime());
I am not sure, if this works on MS-Windows as well, but Linux-, Unix- and MacOS-X-users should see this working.
If someone has tried it on Windows, I will be interested to hear about it…
Maybe I will try it out myself…
## Perl 5 (second suggestion)
(From Fritz Zaucker, see his comment below):
perl -e 'use DateTime; use 5.10.0; say DateTime->now->strftime("%F");'
## Perl 6
(From Fritz Zaucker, see his comment below):
say Date.today;
or
Date.today.say;
## Ruby
This is even more elegant than in Perl:
ruby -e 'puts Time.new.strftime("%F")'
will do it on the command line.
Or if you like to use it in your Ruby program, just use
d = Time.new
s = d.strftime("%F")
Btw., like in Oracle SQL, it is possible to add numbers to this. In the case of Ruby, you are adding seconds.
It is slightly confusing that Ruby has two different types, Date and Time. Not quite as confusing as Java, but still…
Time is ok for this purpose.
## C on Linux / Posix / Unix
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char **argv) {
    char s[12];
    time_t seconds_since_1970 = time(NULL);
    struct tm local;
    struct tm gmt;
    localtime_r(&seconds_since_1970, &local);
    gmtime_r(&seconds_since_1970, &gmt);
    size_t l1 = strftime(s, 11, "%Y-%m-%d", &local);
    printf("local:\t%s\n", s);
    size_t l2 = strftime(s, 11, "%Y-%m-%d", &gmt);
    printf("gmt:\t%s\n", s);
    exit(0);
}
This speaks for itself…
But if you like to know: time() gets the seconds since 1970 as some kind of integer.
localtime_r or gmtime_r converts it into a structure that has seconds, minutes etc. as separate fields.
strftime formats it. Depending on your C library, it is also possible to use %F.
## Scala
import java.util.Date
import java.text.SimpleDateFormat
...
val s: String = new SimpleDateFormat("yyyy-MM-dd").format(new Date())
This uses the ugly Java-7-libraries. We want to go to Java 8 or use Joda time and a wrapper for Scala.
## Java 7
import java.util.Date;
import java.text.SimpleDateFormat;

...
String s = new SimpleDateFormat("yyyy-MM-dd").format(new Date());
Please observe that SimpleDateFormat is not thread safe. So do one of the following:
* initialize it each time with new
* make sure you run only single threaded, forever
* use EJB and have the format as instance variable in a stateless session bean
* protect it with synchronized
* protect it with locks
* make it a thread local variable
In Java 8, or in Java 7 with Joda Time, this is better. And the toString() method has ISO 8601 as its default format, but of course including the time part.
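To make that concrete, here is a small Java 8 sketch using java.time (the class name is just for illustration). LocalDate.toString() already yields the ISO date, and DateTimeFormatter, unlike SimpleDateFormat, is immutable and thread-safe:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class IsoDateJava8 {
    public static void main(String[] args) {
        // toString() of LocalDate is the ISO-8601 date format by default:
        String s = LocalDate.now().toString();
        // Equivalent, via the thread-safe predefined formatter:
        String t = LocalDate.now().format(DateTimeFormatter.ISO_LOCAL_DATE);
        System.out.println(s);
        System.out.println(t);
    }
}
```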
## Summary
This is quite easy to achieve in many environments.
I could provide more, but maybe I leave this to you in the comments section.
What could be interesting:
* better ways for the ones that I have provided
* other databases
* other editors (vim, sublime, eclipse, idea,…)
* Office packages (Libreoffice and MS-Office)
* C#
* F#
* Clojure
* C on MS-Windows
* Perl and Ruby on MS-Windows
* Java 8
* Scala using better libraries than the Java-7-library for this
* Java using better libraries than the Java-7-library for this
* C++
* PHP
* Python
* Cobol
* JavaScript
* …
If you provide a reasonable solution I will make it part of the article with a reference…
# Formulas and Tables
Trigonometry
Angles: $$\alpha$$, $$\beta$$
Trigonometric functions: $$\sin \alpha,$$ $$\cos \alpha,$$ $$\tan \alpha,$$ $$\cot \alpha$$
1. Sine addition formula
$$\sin \left( {\alpha + \beta } \right) =$$ $$\sin \alpha \cos \beta \,+$$ $$\cos \alpha \sin \beta$$
2. Sine subtraction formula
$$\sin \left( {\alpha - \beta } \right) =$$ $$\sin \alpha \cos \beta \,-$$ $$\cos \alpha \sin \beta$$
3. Cosine addition formula
$$\cos \left( {\alpha + \beta } \right) =$$ $$\cos \alpha \cos \beta \,-$$ $$\sin \alpha \sin \beta$$
4. Cosine subtraction formula
$$\cos \left( {\alpha - \beta } \right) =$$ $$\cos \alpha \cos \beta \,+$$ $$\sin \alpha \sin \beta$$
5. Tangent addition formula
$$\tan \left( {\alpha + \beta } \right) = \large\frac{{\tan \alpha + \tan \beta }}{{1 - \tan \alpha \tan \beta }}\normalsize$$
6. Tangent subtraction formula
$$\tan \left( {\alpha - \beta } \right) = \large\frac{{\tan \alpha - \tan \beta }}{{1 + \tan \alpha \tan \beta }}\normalsize$$
7. Cotangent addition formula
$$\cot \left( {\alpha + \beta } \right) = \large\frac{{1 - \tan \alpha \tan \beta }}{{\tan \alpha + \tan \beta }}\normalsize$$
8. Cotangent subtraction formula
$$\cot \left( {\alpha - \beta } \right) = \large\frac{{1 + \tan \alpha \tan \beta }}{{\tan \alpha - \tan \beta }}\normalsize$$
At least two instruments called "guitars" were in use in Spain by 1200: the guitarra latina (Latin guitar) and the so-called guitarra morisca (Moorish guitar). The guitarra morisca had a rounded back, wide fingerboard, and several sound holes. The guitarra latina had a single sound hole and a narrower neck. By the 14th century the qualifiers "moresca" or "morisca" and "latina" had been dropped, and these two chordophones were simply referred to as guitars.[8]
A string’s gauge is how thick it is. As a general rule, the thicker a string is the warmer its response will be and the more volume it will produce. However, thicker strings are also stiffer. This makes it harder to fret the string and makes it more difficult to execute heavy string bends. Thinner strings are generally brighter and easier to play, but on some instruments they can sound thin and tinny.
UPDATE: SEPTEMBER 3rd, 2010 -- Good evening, and hi everybody! I get requests to add tabs once in a while, and for years one of the most common requests has been 'Psychic Hearts', and more recently 'Trees Outside the Academy'. I resisted for years, but boredom and the need to please has a funny way of making things happen, so I'm proud to bring you tabs for the entirety of "Psychic Hearts" and its related tracks, as well as the majority of "Trees Outside the Academy". I was originally planning on providing bass tabs as well as the Mascis solos, but I decided I wasn't that desperate for accolade. With all this attention on Thurston, I felt bad for Lee, so I've updated my outdated tab for his excellent solo acoustic piece "Here" (located under "Other Tabs") with the proper tuning, which also happens to be the tuning for the equally excellent "Lee #2", so I've updated that one too!
The loud, amplified sound and sonic power of the electric guitar played through a guitar amp has played a key role in the development of blues and rock music, both as an accompaniment instrument (playing riffs and chords) and performing guitar solos, and in many rock subgenres, notably heavy metal music and punk rock. The electric guitar has had a major influence on popular culture. The guitar is used in a wide variety of musical genres worldwide. It is recognized as a primary instrument in genres such as blues, bluegrass, country, flamenco, folk, jazz, jota, mariachi, metal, punk, reggae, rock, soul, and many forms of pop.
As you start practicing, your fingers may be sore for a while, but that will pass within four to six weeks. One thing I want to warn you about is that new guitar players can get frustrated when they can't play clean chords because they try to switch between chords too soon. Often, they try to switch chords before they've really learned and memorized each chord shape. For now, don't worry about switching chords and just work on each shape, getting them down, and going right to them.
The lower strap button is usually located at the bottom (bridge end) of the body. The upper strap button is usually located near or at the top (neck end) of the body: on the upper body curve, at the tip of the upper "horn" (on a double cutaway), or at the neck joint (heel). Some electrics, especially those with odd-shaped bodies, have one or both strap buttons on the back of the body. Some Steinberger electric guitars, owing to their minimalist and lightweight design, have both strap buttons at the bottom of the body. Rarely, on some acoustics, the upper strap button is located on the headstock. Some acoustic and classical guitars only have a single strap button at the bottom of the body—the other end must be tied onto the headstock, above the nut and below the machine heads.
Our Study Music for concentration uses powerful Alpha Waves and Binaural Beats to boost concentration and brain power and is ideal relaxing music for stress relief. This Study Music and Focus Music is relaxing instrumental music that will help you study, focus and learn for that big test or exam and naturally allow your mind to reach a state of focus, perfect for work and study.
Artistworks is a great concept. I am a pupil of rock guitarist Paul Gilbert! Let me say that again - I am a pupil of rock guitarist Paul Gilbert! Yes, THE Paul Gilbert! The idea that top notch, internationally recognised musicians are teaching you from the comfort of your own home, is still amazing to me. And they speak to YOU. They tailor their feedback to you and your level. Your playing will improve immeasurably. So, what are you waiting for? Do it now before we all wake up and it turns out to be a dream after all.
The ratio of the spacing of two consecutive frets is $$\sqrt[12]{2}$$ (the twelfth root of two). In practice, luthiers determine fret positions using the constant 17.817, an approximation to $$1/(1 - 1/\sqrt[12]{2})$$. If the nth fret is a distance x from the bridge, then the distance from the (n+1)th fret to the bridge is x - (x/17.817).[15] Frets are available in several different gauges and can be fitted according to player preference. Among these are "jumbo" frets, which have much thicker gauge, allowing for use of a slight vibrato technique from pushing the string down harder and softer. "Scalloped" fretboards, where the wood of the fretboard itself is "scooped out" between the frets, allow a dramatic vibrato effect. Fine frets, much flatter, allow a very low string-action, but require that other conditions, such as curvature of the neck, be well-maintained to prevent buzz.
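The two ways of computing fret positions (the exact twelfth-root-of-two ratio and the luthier's constant 17.817) can be compared with a short sketch; the scale length of 648 mm is just an example value:

```java
public class FretPositions {
    public static void main(String[] args) {
        double scale = 648.0; // hypothetical scale length in mm
        double approx = scale;
        for (int n = 1; n <= 12; n++) {
            // Exact: fret n sits at scale / 2^(n/12) from the bridge.
            double exact = scale / Math.pow(2.0, n / 12.0);
            // Rule of 17.817: each fret is x/17.817 closer to the bridge.
            approx = approx - approx / 17.817;
            System.out.printf("fret %2d: exact %7.2f mm, rule-of-17.817 %7.2f mm%n",
                              n, exact, approx);
        }
        // At the 12th fret both methods land at (almost exactly) half the scale length.
    }
}
```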
The intervals between the notes of a chromatic scale are listed in a table, in which only the emboldened intervals are discussed in this article's section on fundamental chords; those intervals and other seventh-intervals are discussed in the section on intermediate chords. The unison and octave intervals have perfect consonance. Octave intervals were popularized by the jazz playing of Wes Montgomery. The perfect-fifth interval is highly consonant, which means that the successive playing of the two notes from the perfect fifth sounds harmonious.
Guitarists don’t have to just look on in envy as pianists lead the holiday sing-a-longs this Christmas. Our selection of holiday Guitar Tabs include traditional classics we all love like “O Holy Night” and “Carol of the Bells” and the pop favorites that just wouldn’t be the same without a guitar, like “Jingle Bell Rock” and “Happy Xmas (War is Over)” by John Lennon. But if you just can’t get enough of those traditional Christmas classics, you can pick up our collection of ’Christmas Favorites for Guitar.’
On the other hand, some chords are more difficult to play in a regular tuning than in standard tuning. It can be difficult to play conventional chords especially in augmented-fourths tuning and all-fifths tuning,[20] in which the large spacings require hand stretching. Some chords, which are conventional in folk music, are difficult to play even in all-fourths and major-thirds tunings, which do not require more hand-stretching than standard tuning.[21]
Our Suzuki teachers are experienced in teaching CCM students as young as 3. Developed by the Japanese Violinist Shinichi Suzuki, the Suzuki method teaches music by ear before reading notes on the instrument so teachers can focus on setting up each student with correct posture and technique to ensure the student's continued success. Parental involvement is required for students under the age of 8 and before the child starts, parents are required to attend a private 3-week parent education class.
First off, there are two more techniques I want to talk about. These are fret placement and finger posture. Place your first finger on the first fret of the B string. For fret placement, you’ll want to have your finger right behind the fret. In the video, you can see that the further away from the fret I place my finger, the more buzz the note has.
With so much information online about guitar theory, how do you know which sites to trust? Guitar teacher Zachary A. shares his top 10 favorite sites for learning about the guitar. Online resources for guitar theory are extremely helpful. You may want to explore the endless limits of the guitar, or maybe you're in need of a tiny refresher before your next lesson with a private teacher. These 10 websites are all tremendously helpful tools for guitar players of all levels: beginner, intermediate and advanced.
Quartal and quintal harmonies also appear in alternate tunings. It is easier to finger the chords that are based on perfect fifths in new standard tuning than in standard tuning. New standard tuning was invented by Robert Fripp, a guitarist for King Crimson. Preferring to base chords on perfect intervals—especially octaves, fifths, and fourths—Fripp often avoids minor thirds and especially major thirds,[102] which are sharp in equal temperament tuning (in comparison to thirds in just intonation).
Solid body seven-string guitars were popularized in the 1980s and 1990s. Other artists go a step further, by using an eight-string guitar with two extra low strings. Although the most common seven-string has a low B string, Roger McGuinn (of The Byrds and Rickenbacker) uses an octave G string paired with the regular G string as on a 12-string guitar, allowing him to incorporate chiming 12-string elements in standard six-string playing. In 1982 Uli Jon Roth developed the "Sky Guitar", with a vastly extended number of frets, which was the first guitar to venture into the upper registers of the violin. Roth's seven-string and "Mighty Wing" guitar feature a wider octave range.
Jump up ^ "The first incontrovertible evidence of five-course instruments can be found in Miguel Fuenllana's Orphenica Lyre of 1554, which contains music for a vihuela de cinco ordenes. In the following year, Juan Bermudo wrote in his Declaracion de Instrumentos Musicales: 'We have seen a guitar in Spain with five courses of strings.' Bermudo later mentions in the same book that 'Guitars usually have four strings,' which implies that the five-course guitar was of comparatively recent origin, and still something of an oddity." Tom and Mary Anne Evans, Guitars: From the Renaissance to Rock. Paddington Press Ltd, 1977, p. 24.
If you’re the type of parent who believes music can improve early childhood development, science has good news for you. A recent study suggests that guitar practice can help children better and faster process music and verbal language. Hearing different pitches and tones can help one better parse spoken words. So while every parent should remain careful not to forcefully involve their little ones in music, sports, and other interests, parents can still take a gentler approach that stimulates joy and curiosity, and plant a seed for lifelong learning.
In music, a guitar chord is a set of notes played on a guitar. A chord's notes are often played simultaneously, but they can be played sequentially in an arpeggio. The implementation of guitar chords depends on the guitar tuning. Most guitars used in popular music have six strings with the "standard" tuning of the Spanish classical-guitar, namely E-A-D-G-B-E' (from the lowest pitched string to the highest); in standard tuning, the intervals present among adjacent strings are perfect fourths except for the major third (G,B). Standard tuning requires four chord-shapes for the major triads.
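The interval pattern described above is easy to verify with simple pitch arithmetic. A small sketch (the MIDI note numbers below, with low E2 = 40, are the conventional assignments):

```python
# Intervals between adjacent strings in standard tuning E-A-D-G-B-E',
# expressed as MIDI note numbers (low E2 = 40, high E4 = 64).
standard_tuning = {"E2": 40, "A2": 45, "D3": 50, "G3": 55, "B3": 59, "E4": 64}

pitches = list(standard_tuning.values())
intervals = [hi - lo for lo, hi in zip(pitches, pitches[1:])]

print(intervals)  # [5, 5, 5, 4, 5]: perfect fourths (5 semitones)
                  # everywhere except the major third (4) between G and B
```

The single 4-semitone gap between the G and B strings is exactly the "except for the major third (G,B)" irregularity the text mentions.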
The nut is a small strip of bone, plastic, brass, corian, graphite, stainless steel, or other medium-hard material, at the joint where the headstock meets the fretboard. Its grooves guide the strings onto the fretboard, giving consistent lateral string placement. It is one of the endpoints of the strings' vibrating length. It must be accurately cut, or it can contribute to tuning problems due to string slippage or string buzz. To reduce string friction in the nut, which can adversely affect tuning stability, some guitarists fit a roller nut. Some instruments use a zero fret just in front of the nut. In this case the nut is used only for lateral alignment of the strings, the string height and length being dictated by the zero fret.
All three principal types of resonator guitars were invented by the Slovak-American John Dopyera (1893–1988) for the National and Dobro (Dopyera Brothers) companies. Similar to the flat top guitar in appearance, but with a body that may be made of brass, nickel-silver, or steel as well as wood, the sound of the resonator guitar is produced by one or more aluminum resonator cones mounted in the middle of the top. The physical principle of the guitar is therefore similar to the loudspeaker.
The bass guitar (also called an "electric bass", or simply a "bass") is similar in appearance and construction to an electric guitar, but with a longer neck and scale length, and four to six strings. The four-string bass, by far the most common, is usually tuned the same as the double bass, which corresponds to pitches one octave lower than the four lowest pitched strings of a guitar (E, A, D, and G). (The bass guitar is a transposing instrument, as it is notated in bass clef an octave higher than it sounds (as is the double bass) to avoid excessive ledger lines.) Like the electric guitar, the bass guitar has pickups and it is plugged into an amplifier and speaker for live performances.
For example, if the note E (the open sixth string) is played over the A minor chord, then the chord would be [0 0 2 2 1 0]. This has the note E as its lowest tone instead of A. It is often written as Am/E, where the letter following the slash indicates the new bass note. However, in popular music it is usual to play inverted chords on the guitar when they are not part of the harmony, since the bass guitar can play the root pitch.
For example, in the guitar (like other stringed instruments but unlike the piano), open-string notes are not fretted and so require less hand-motion. Thus chords that contain open notes are more easily played and hence more frequently played in popular music, such as folk music. Many of the most popular tunings—standard tuning, open tunings, and new standard tuning—are rich in the open notes used by popular chords. Open tunings allow major triads to be played by barring one fret with only one finger, using the finger like a capo. On guitars without a zeroth fret (after the nut), the intonation of an open note may differ from the note when fretted on other strings; consequently, on some guitars, the sound of an open note may be inferior to that of a fretted note.[37]
"Open" chords get their name from the fact that they generally include strings played open. This means that the strings are played without being pushed down at a fret, which makes chords including them easier to play for beginners. When you start to learn chords, you have to focus on using the right fingers to press down each note and make sure you're pressing the strings down firmly enough.
As previously stated, a dominant seventh is a four-note chord combining a major chord and a minor seventh. For example, the C7 dominant seventh chord adds B♭ to the C-major chord (C,E,G). The naive chord (C,E,G,B♭) spans six frets from fret 3 to fret 8;[49] such seventh chords "contain some pretty serious stretches in the left hand".[46] An illustration shows a naive C7 chord, which would be extremely difficult to play,[49] besides the open-position C7 chord that is conventional in standard tuning.[49][50] The standard-tuning implementation of a C7 chord is a second-inversion C7 drop 2 chord, in which the second-highest note in a second inversion of the C7 chord is lowered by an octave.[49][51][52] Drop-two chords are used for seventh chords besides the major-minor seventh with dominant function,[53] which are discussed in the section on intermediate chords, below. Drop-two chords are used particularly in jazz guitar.[54] Drop-two second-inversions are examples of openly voiced chords, which are typical of standard tuning and other popular guitar-tunings.[55]
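The drop-2 construction described above is mechanical enough to sketch in code: take a close voicing, then lower its second-highest note by an octave. The MIDI register chosen below (C4 = 60) is just one concrete example:

```python
def drop2(close_voicing):
    """Lower the second-highest note of a close voicing by an octave (12 semitones)."""
    voicing = sorted(close_voicing)
    voicing[-2] -= 12
    return sorted(voicing)

# Second inversion of C7 as a close voicing: G4, Bb4, C5, E5
second_inversion_c7 = [67, 70, 72, 76]
print(drop2(second_inversion_c7))  # [60, 67, 70, 76] -> C4, G4, Bb4, E5
```

Dropping the C5 by an octave puts the root C back on the bottom and spreads the voicing out, which is exactly why the result is an "openly voiced" chord.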
Once you've got your categories narrowed down, then you can start getting into the nitty-gritty differences between strings. For instance, electric guitars will give you the choice between nickel (for authentic vintage sound) and stainless steel (for maximum durability). Some string manufacturers have exotic material options with their own unique characteristics, like Ernie Ball's Slinky Cobalt strings. With so many subtle differences separating guitar strings, you owe it to yourself to browse carefully and look at all the choices on the table before making a decision.
I am 66 years old and am retiring at the end of the year. I decided to return to playing guitar, which I dabbled in as a teenager. I bought myself a Martin LXK2 guitar right here. ( Beautiful 3/4 sized instrument made of HPL with beautiful tone and projection. No humidity worries and a sustainable product as it's made of recycled materials. See my review) It was \$280 well spent. I opened the bag, tuned the guitar and to my delight, my aging brain had retained chord after chord: G, C, C7, D, A, A7, E all came back along with the string names E, A, D, G, B, E! Determined to build on this antique knowledge, I searched for a convenient chord book, and see that the reviews I read did not lead me astray.
On almost all modern electric guitars, the bridge has saddles that are adjustable for each string so that intonation stays correct up and down the neck. If the open string is in tune, but sharp or flat when frets are pressed, the bridge saddle position can be adjusted with a screwdriver or hex key to remedy the problem. In general, flat notes are corrected by moving the saddle forward and sharp notes by moving it backwards. On an instrument correctly adjusted for intonation, the actual length of each string from the nut to the bridge saddle is slightly, but measurably longer than the scale length of the instrument. This additional length is called compensation, which flattens all notes a bit to compensate for the sharping of all fretted notes caused by stretching the string during fretting.
Another chord you come across every day, the E major chord is fairly straightforward to play. Make sure your first finger (holding down the first fret on the third string) is properly curled or the open second string won't ring properly. Strum all six strings. There are situations when it makes sense to reverse your second and third fingers when playing the E major chord.
I have heard how giving you are in so many respects of music schooling and I must say that I am impressed. You remind me of the pure idealism that we had in starting Apple. If I were young, with time, I'd likely offer to join and help you in your endeavours. Keep making people happy, not just in their own learning, but in the example you set for them.
I would especially like to stress the gentle approach Justin takes with two key aspects that contributed to my development as a musician - music theory and ear training. Justin has succeeded in conveying the importance and profoundness of understanding music both theoretically and through your ears while maintaining a simple and accessible approach to them, all while sticking to what is ultimately the most important motto: 'If it sounds good, it is good'.
First, being able to learn directly from amazing artists like Paul Gilbert is incredible. He's a great teacher and has a way of explaining things that are easy to understand and replicate. The video format is also extraordinarily helpful; I've used other sites that use only written materials (usually .pdf format), and they are difficult to navigate. The feedback, though, is what really makes this website head and shoulders above the others (even the other video websites). When I record myself and send it in, I get a response from Paul that critiques in an incredibly constructive way as well as additional exercises to work at really honing that skill. In addition, getting to see what tips he gave to other users is awesome! If you want to learn an instrument, there's no better way.
Justin is an instructor with that rare combination that encompasses great playing in conjunction with a thoughtful, likable personality. Justin's instruction is extremely intelligent because he's smart enough to know the 'basics' don't have to be served 'raw' - Justin keenly serves the information covered in chocolate. Justin's site is like a free pass in a candy store!
Learning guitar is a lot easier when you have a step-by-step system to follow. Guitar Tricks lessons are interconnected and organized to get slightly harder as you progress. You watch a video lesson, play along, and then click a “Next” button to go to the next lesson. Lessons have multiple camera angles, guitar tabs, jam tracks and everything else you need to learn.
A capo (short for capotasto) is used to change the pitch of open strings.[28] Capos are clipped onto the fretboard with the aid of spring tension, or in some models, elastic tension. To raise the guitar's pitch by one semitone, the player would clip the capo onto the fretboard just below the first fret. Its use allows players to play in different keys without having to change the chord formations they use. For example, if a folk guitar player wanted to play a song in the key of B Major, they could put a capo on the second fret of the instrument, and then play the song as if it were in the key of A Major, but with the capo the instrument would make the sounds of B Major. This is because with the capo barring the entire second fret, open chords would all sound two semitones (aka one tone) higher in pitch. For example, if a guitarist played an open A Major chord (a very common open chord), it would sound like a B Major chord. All of the other open chords would be similarly modified in pitch. Because of the ease with which they allow guitar players to change keys, they are sometimes referred to with pejorative names, such as "cheaters" or the "hillbilly crutch". Despite this negative viewpoint, another benefit of the capo is that it enables guitarists to obtain the ringing, resonant sound of the common keys (C, G, A, etc.) in "harder" and less-commonly used keys. Classical performers are known to use them to enable modern instruments to match the pitch of historical instruments such as the Renaissance music lute.
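The transposition arithmetic in the capo example is simple pitch-class addition: each capo fret raises every open string by one semitone. A minimal sketch (the note list uses sharps only, so B♭ appears as A#):

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def sounding_root(open_chord_root, capo_fret):
    """Each capo fret raises the sounding pitch of an open shape by one semitone."""
    i = NOTES.index(open_chord_root)
    return NOTES[(i + capo_fret) % 12]

print(sounding_root("A", 2))  # 'B': an open A shape with a capo at fret 2
print(sounding_root("G", 3))  # 'A#' (i.e. Bb)
```

This reproduces the example from the text: play the song as if it were in A with the capo at the second fret, and the instrument sounds in B.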
My personal opinion on the topic, as one musician to another, is that the best thing you can possibly do when trying to figure out which string to go with is to try out as many different brands and types of guitar strings as you can. Strings are cheap enough that most people are going to be able to afford to experiment, and the truth of the matter is that you’re probably not going to really know what works best for you until you have hands on experience.
Justin Sandercoe has thought long and hard about how to teach people to play the guitar, and how to do this over the internet. He has come up with a well-designed series of courses that will take you from nowhere to proficiency. I tried to learn how to play years ago, using books, and got nowhere. I've been using Justin's site for just over a year and I feel I've made real progress. What's more, Justin offers his lessons for free - a boon for any young player who has the urge to play, but whose pockets are empty. I've seen and used other sites for learners: none of them offer as clearly marked a road as Justin does.
The Guitar Center Lessons curriculum is based on a progressive advancement model. This proven method provides a well-defined roadmap of the material covered and skills taught so you can easily track your past and future progress. Since we use the same curriculum in all locations, students have the flexibility to take lessons from any instructor at any of our locations and progress through the same content. Our program is fun but challenging–both for beginners and serious musicians who want to improve their existing chops.
Established in 1994, Saratoga Guitar has been a mainstay in the Capital Region Music community for over 20 years! As the founder of the Capital Region Guitar Show (The 2019 Capital Region Guitar Show will be held April 12th & 13th 2019) and promoter of many live music events in the area, Saratoga Guitar has become home to professionals, collectors, families, and students alike. Offering new, used, and vintage instruments from all major manufacturers, at Saratoga Guitar you will always find something different than what you would find at a big box music store. Saratoga Guitar also provides full rentals, sales & services for all school related music programs.
# Office Word and LaTeX spacing
I set the single space in Word and LaTeX.
But apparently LaTeX is more dense than Word. (Using an 11pt font in Word, using `\singlespacing` in LaTeX.) It seems like three lines of LaTeX equal only two lines of Word. Something like that.
What's the correspondence between Word and LaTeX spacing? What do I do if I want single spacing in LaTeX to look the same as it does in Word?
Updated:
I checked the wikibook, http://en.wikibooks.org/wiki/LaTeX/Text_Formatting#Font_Styles — it seems that LaTeX's normalsize font is a little bit smaller than 11pt, and the large font is a little bit larger than 11pt. What can I do?
See also my previous question What does 'double spacing' mean? – Leo Liu Oct 22 '12 at 4:13
If you want a font size of `11pt`, be sure to specify it as an option to the `\documentclass` command. Also, be sure to use the exact same fonts for both LaTeX and Word. This may seem obvious, but with an MWE (minimum working example) it's not clear if it's the case. – Mico Oct 22 '12 at 4:14
Related, perhaps duplicate: Setting a document in MS Word-12pt (12bp). The related posts mentions: "MS Word uses a slightly different version of the unit 'point' (pt) than TeX does". – Werner Oct 22 '12 at 4:27
If it has been suggested to use a class like `acmtrans2m`, then you probably do not need to worry about the formating. – StrongBad Oct 22 '12 at 7:29
Maybe the issue is with the font. By default LaTeX uses Computer Modern, while Word uses Times (or Calibri), and there is a substantial difference between the size of the two fonts. Thus using a package like `mathptmx` or `newtxtext` (and `newtxmath`) or alternatives should reduce the difference – Guido Oct 22 '12 at 23:49
You mentioned spacing in your question and I'm not sure I understand you correctly, but let's try.
It seems your problem is that Word's algorithm for laying out text works line by line, while LaTeX optimizes over the complete paragraph. That necessarily results in a different page layout in Word than in LaTeX (and LaTeX's results are generally better than Word's).
So even supposing you used exactly the same font (which is not quite possible, as you already mentioned), the layout of a page would still differ, because the layout algorithms are so different.
I remember there is a German site with two files, one in Word and one in LaTeX, showing the same text with nearly the same layout (I would have to search for it, and will do so if you want to see that site).
Conclusion:
Word and LaTeX are different programs and produce different page layouts. LaTeX has the better layout algorithm and good built-in typography. With Word you must do the typography on your own.
Why should the different way of dividing a paragraph into lines make a difference in how the lines are eventually laid out? The difference may be in how the page is built up by stacking lines of text. – egreg Oct 23 '12 at 10:44
The hyphenation could be different, because LaTeX tries to avoid more than three consecutively hyphenated lines. If necessary, LaTeX alters the distance between two paragraphs; Word does not. – Kurt Oct 23 '12 at 11:54
Divergent transitions in hierarchical model
Can anyone suggest what I could change with the below code to make it more stable.
Problem: I have n=1800 units and n_obs = 69 of them fail, and n_cens = 1731 are still working (and so are right-censored). I have some simulated data and have obtained estimates for the true values via maximum likelihood. I would now like to obtain these estimates by using a Bayesian approach.
I have tried many times with vague priors with no success. The error message I keep getting is x divergent transitions after warmup. I have now placed a very narrow prior on sig (around its true value), but I still get divergent transitions.
I have tried increasing adapt_delta, lowering stepsize, and increasing max_treedepth, but I still get divergent transitions.
rho, sig1, sig2, beta, eta, mu0, and sigma0 are all unknown parameters; however, I inputted them as data to try and find the source of the divergent transitions.
When running the code with rho, sig1, sig2, beta, eta, mu0, and sigma0 (and sig) treated as unknown parameters, my code took 4 days to run with “control = list(adapt_delta = 0.999, stepsize = 0.001, max_treedepth = 20)” and 2000 iterations with one chain. I still had divergent transitions.
I have come to the conclusion (from reading similar posts) that my model must have something fundamentally wrong with it, but I just cannot see what it is. Could it be that I am defining my own matrix and not using cov_matrix or something? The matrix I am entering is positive definite so I doubt that is the case.
Perhaps the parameterization of the Weibull distribution that I am using is causing this to be unstable? But I have used this parameterization before and it has been fine (not in a hierarchical model).
I have 1800 units and 1-70 observations for each unit. For example, unit one has 35 observations, and unit 2 has 26. I enter the data in the following form, tt = 1,2,…,35, 1,2,…,26, w_ind = 1,1,1,…,1, 2,2,…,2 (35 1’s and 26 2’s), y_tt = y_1,1, y_1,2, y_1,3, …,y_1,35, y_2,1,… (I added this paragraph to show that the data was entered correctly and is not the issue).
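The flattened long-format layout described above (one combined tt vector plus an index vector w_ind mapping each observation back to its unit) can be sketched like this; the third observation count is made up for illustration:

```python
# Build the long-format vectors the Stan data block expects from
# per-unit observation counts (first two counts taken from the post,
# the third is illustrative).
obs_per_unit = [35, 26, 70]

tt, w_ind = [], []
for unit, n_obs in enumerate(obs_per_unit, start=1):
    tt.extend(range(1, n_obs + 1))   # 1, 2, ..., n_obs for this unit
    w_ind.extend([unit] * n_obs)     # Stan uses 1-based unit indices

N = len(tt)                          # total observations across all units
print(N)                             # 131
print(w_ind[:3], w_ind[35:38])       # [1, 1, 1] [2, 2, 2]
```

The corresponding y_tt vector would be flattened in exactly the same order, so y_tt[k] belongs to unit w_ind[k] at time tt[k].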
data {
int<lower=1> n; // Sample size
int<lower=1> n_cens; // Right censored sample size
int<lower=1> n_obs; // Failed unit sample size
int<lower=1> N; // Total number of times considered (all times for all units)
real<lower=0> tt[N]; //All times for all units
real y_tt[N]; //All of the use-rates
int w_ind[N]; //Indices for w
vector[2] w_M; //Mean vector for w
vector[n_obs] x_DFD; //Use rate at failure-times for the units that failed
real <lower = 0> sig1;
real <lower = 0> sig2;
real <lower = -1, upper = 1>rho;
real <lower = 0> mu0;
real <lower = 0> sigma0;
real eta;
real Beta;
vector[n_obs] u_obs;
vector[n_cens] u_cens;
matrix[2,2] I2; //Covariance matrix for w
}
parameters {
real <lower = 0> sig;
vector[2] w[n]; // A length-n array of length 2 vectors
}
model {
//Defining covariance matrix, cholesky decomposition and mean and variance of mixed effects model
matrix[2,2] Sigma;
matrix[2,2] L;
real Mu[N];
real Sig[N];
vector[2] w2[n]; // A length-n array of length 2 vectors
//Covariance matrix
Sigma[1,1] = sig1^2;
Sigma[1,2] = rho*sig1*sig2;
Sigma[2,1] = rho*sig1*sig2;
Sigma[2,2] = sig2^2;
//Cholesky decomposition
L = cholesky_decompose(Sigma);
for(i in 1:n){
w2[i] = L*w[i];
}
//Parameters used in mixed effects model
for(i in 1:N){
Mu[i] = eta + w2[w_ind[i]][1] + w2[w_ind[i]][2]*log(tt[i]);
Sig[i] = sig1^2 + log(tt[i])*(2*rho*sig1*sig2 + sig2^2*log(tt[i])) + sig^2;
}
//Likelihood
target += weibull_lpdf(u_obs| 1/sigma0, exp(mu0));
target += Beta*x_DFD;
target += weibull_lccdf(u_cens|1/sigma0, exp(mu0));
target += normal_lpdf(y_tt|Mu, Sig);
// Prior:
w ~ multi_normal(w_M, I2);
sig ~ gamma(0.05, 1);
}
Diagnostic plots for when I ran the full model with all parameters (i.e. when I did not input rho, sig1, sig2, beta, eta, mu0, and sigma0). p.pdf (32.6 KB) p2.pdf (36.7 KB) p3.pdf (37.5 KB)
w ~ multi_normal(w_M, I2);
w2[i] = L*w[i];
What’s the relationship between w and w2?
target += Beta*x_DFD;
What’s the Beta*x_DFD term?
Sig[i] = sig1^2 + log(tt[i])*(2*rho*sig1*sig2 + sig2^2*log(tt[i])) + sig^2;
target += normal_lpdf(y_tt|Mu, Sig);
normal_lpdf is parameterized in terms of standard deviation, not variance. (multi_normal is in terms of covariance – so it’s a bit different)
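The sd-versus-variance mix-up changes the density value, not just its scale, which is why it can wreck sampling. A quick stdlib check of the normal log density (the numbers are arbitrary):

```python
import math

def normal_lpdf(x, mu, sigma):
    """Log density of N(mu, sigma); sigma is the STANDARD DEVIATION, as in Stan."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

mu, var = 1.0, 4.0                             # variance 4 -> sd 2
wrong = normal_lpdf(0.0, mu, var)              # passing the variance by mistake
right = normal_lpdf(0.0, mu, math.sqrt(var))   # pass sqrt(variance) = sd

print(round(right, 4))   # -1.7371
print(round(wrong, 4))   # -2.3365: a genuinely different density
```

So a model that computes Sig as a variance must hand normal_lpdf sqrt(Sig), exactly the fix discussed below.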
So, instead of drawing from multi_normal(w_M, Sigma), [w_M = c(0,0)], I draw from a multi_normal(w_M, I2), where I2 is the 2 \times 2 identity matrix, and multiply these draws by the Cholesky decomposition of Sigma. I.e. just a multivariate reparameterization.
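This non-centered reparameterization rests on the identity cov(Lz) = L L^T = Σ for z ~ N(0, I). A quick numpy check; sig1 = 0.46 and rho = 0.2 are the true values mentioned later in the thread, while sig2 = 0.3 is just an illustrative placeholder:

```python
import numpy as np

sig1, sig2, rho = 0.46, 0.3, 0.2   # sig2 is illustrative
Sigma = np.array([[sig1**2,           rho * sig1 * sig2],
                  [rho * sig1 * sig2, sig2**2          ]])

L = np.linalg.cholesky(Sigma)       # lower-triangular factor, L @ L.T == Sigma

rng = np.random.default_rng(0)
z = rng.standard_normal((2, 100_000))  # z ~ N(0, I2), like w in the Stan model
w2 = L @ z                             # reparameterized draws, like w2 = L*w[i]

print(np.cov(w2))                      # empirically close to Sigma
```

This is why the Stan model can put an identity-covariance prior on w and still recover draws with covariance Sigma through w2.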
The Beta*x_DFD term is part of the likelihood. The likelihood for the model I am considering is given by
L(\theta_T, \theta_X \mid \text{Data}) = L(\theta_T \mid \text{Failure-time Data, Covariate History}) \times L(\theta_X \mid \text{Covariate History}) where
L(\theta_T \mid \text{Failure-time Data, Covariate History}) \\ = \prod_{i=1}^n\{\exp[\beta x_i(t_i)]f_0(u[t_i;\beta, x_i(t_i)], \theta_0)\}^{\delta_i} \times \{1 - F_0(u[t_i;\beta, x_i(t_i)], \theta_0)\}^{1-\delta_i},
and
L(\theta_X \mid \text{Covariate History}) = \prod_{i=1}^n\bigg\{\prod_{t_{ij} \leq t_i} f_{\text{NOR}}[x_i(t_{ij}) - \eta - Z_i(t_{ij})w_i; \sigma^2]\bigg\}
\delta_i is 1 for the units that fail and 0 for the censored units. Hence the likelihood can be written as
\prod_{\text{failed units}}\{\exp[\beta x_i(t_i)]f_0(u[t_i;\beta, x_i(t_i)], \theta_0)\} \times \prod_{\text{censored units}} \{1 - F_0(u[t_i;\beta, x_i(t_i)], \theta_0)\}.
Stan takes the log likelihood. The log-likelihood is given by
\sum_{\text{failed units}}\{\beta x_i(t_i) + \log f_0(u[t_i;\beta, x_i(t_i)], \theta_0)\} + \sum_{\text{censored units}} \log\{1 - F_0(u[t_i;\beta, x_i(t_i)], \theta_0)\},
where f_0 and F_0 are the pdf and cdf of the Weibull distribution. In addition, the log-likelihood for the L(\theta_X \mid \text{Covariate History}) is just the sum of the logs of the normal densities.
Ah, thanks, I will take the square root of Sig in the likelihood.
Oh okay. That looks right. It might be more convenient to do:
matrix[2, n] w;
matrix[2, n] w2;
w2 = L * w;
to_vector(w) ~ normal(0, 1);
From the traceplots it looks like sig is the thing that’s most weird. Any chance it’s in the wrong place or anything?
I checked my calculations and I think sig is in the right place.
For now, I have changed
target += normal_lpdf(y_tt|Mu, Sig);
to
target += normal_lpdf(y_tt|Mu, sqrt(Sig));
This could make a huge difference (I hope).
Regarding
to_vector(w) ~ normal(0, 1);
Does this create n 2-vectors, with each element drawn from a standard normal distribution? Then I guess
to_row_vector(w) ~ normal(0, 1);
would create 2 n-vectors, with each element drawn from a standard normal distribution (I do not want this, but I am just curious about how the syntax works).
Perhaps
to_vector(w) ~ std_normal();
would be slightly better, too.
It just flattens the matrix into a vector and puts a normal(0, 1) prior on everything. to_row_vector would have the same effect.
Hopefully, but I doubt it. There’s probably something wrong still.
Next thing is to probably check correlations in things. In particular, since sig is going crazy, is it correlated with any of the ws?
By the way, in the model you showed, only sig and w are being estimated. Does this still fail?
By the way, in the model you showed, only sig and w are being estimated. Does this still fail?
Yes, the diagnostics plot for sig looks like it did in the diagnostic plots that I showed. When I increased adapt_delta and max_treedepth and lowered stepsize, the model still failed, but the diagnostic plot did not look as bad (I need to wait for my code to finish running to show this), which may imply that I just need to run for more iterations.
Next thing is to probably check correlations in things. In particular, since sig is going crazy, is it correlated with any of the ws?
The covariate process model that includes sig and the w’s is
X_i(t_{ij}) = \eta + Z_i(t_{ij})w_i + \epsilon_{ij},
where \eta is the mean, Z_i(t_{ij}) = [1, \log(t_{ij})], w_i = (w_{0i}, w_{1i})' \sim N(0,\Sigma_{w}), \epsilon_{ij} \sim N(0, \sigma^2), and
\Sigma_w = \begin{pmatrix} \sigma^2_1 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma^2_2 \end{pmatrix}.
It is assumed that cov(w_i, \epsilon_{ij}) = 0 \forall i,j.
Or are you asking me to check (from the Stan output) if sig is correlated with any of the w’s and this may (somehow) be causing me issues?
Since you’ve had no luck with these parameters so far, it’ll be fair to just leave them at their defaults for now. Any divergences you see might help you figure out where the problem is.
Yeah, the output. If sigma is acting crazy and we don’t know why, then maybe something else is acting crazy and from there we can work backwards to identify the problem in the model.
Yeah, the output. If sigma is acting crazy and we don’t know why, then maybe something else is acting crazy and from there we can work backwards to identify the problem in the model.
Okay, I will check this.
Just for reference, I have attached a diagnostics plot after running the code in the first post, with 2000 iterations and the default control parameters, but with
target += normal_lpdf(y_tt|Mu, Sig);
replaced with
target += normal_lpdf(y_tt|Mu, sqrt(Sig));
There were 17 divergent transitions after warmup.
Diagnostics plot: pp.pdf (30.8 KB)
Also, that gamma(0.05, 1) prior is a super sharp spike near zero. Any chance you could replace it with something like normal(0, 0.05), just in case there's something weird with the gamma?
Yes, okay. I will try that.
My initial thought was, since sig cannot be less than zero, I should use a distribution with support (0, \infty). But, I have recently seen some models with normal priors on standard deviations.
I guess since
real <lower = 0> sig;
Stan will reject any proposed value that is less than zero, but may signal a warning?
real <lower = 0> sig;
Stan will use a transform from (-inf, inf) to (0, inf) to avoid any need for rejections, etc: https://mc-stan.org/docs/2_21/reference-manual/variable-transforms-chapter.html
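The transform the answer refers to can be sketched directly: for a lower bound of zero, Stan samples an unconstrained value u, maps it through exp, and adds the log-Jacobian of the transform to the target density (for sig = exp(u) the log-Jacobian is simply u). The sketch below is a simplification of what Stan does internally:

```python
import math

def constrain_positive(u):
    """Map unconstrained u in (-inf, inf) to sig in (0, inf), returning the
    constrained value and the log-Jacobian log|d sig / d u| = log(exp(u)) = u."""
    sig = math.exp(u)
    log_jacobian = u
    return sig, log_jacobian

sig, lj = constrain_positive(-3.0)
print(sig > 0)            # True: every unconstrained u is valid, no rejections
print(round(sig, 4), lj)  # 0.0498 -3.0
```

Because the sampler works entirely on u, proposals can never land below zero, which is why the declared constraint avoids rejections rather than merely warning about them.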
After running the code with (for 2000 iterations)
sig ~ normal(0, 0.05);
it appears that sig has converged. The mean of the returned samples, however, is approximately sig^2 and not sig. The true value of sig is 0.05, and hence the true value of sig^2 is 0.0025 or 2.5e-0.3. I was not expecting this because I define sig in the parameter block and not sig^2. I also use sig^2 in the Sig equation and not sig.
I have attached a diagnostics plot. I have also included the sample quantiles for sig.
mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
sig 2.344140e-03 0.0000548 0.00184017 9.868000e-05 8.245100e-04 2.001670e-03 3.382450e-03 6.721040e-03 1128 1.000072
Diagnostics plot:converge.pdf (33.0 KB)
I performed a similar analysis, but this time I treated rho as an unknown parameter and treated sig as constant.
From a diagnostic plot it appears that rho has converged. The samples, however, are not close to the true value. I have attached a diagnostics plot. I have also included the sample quantiles for rho. The true value of rho is 0.2.
mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
rho -0.9437974 0.00003636 0.00038049 -0.9445539 -0.9440402 -0.9437844 -0.9435419 -0.9431056 109 1.004652
rho.pdf (23.5 KB)
One thought I have had is that (a reminder below)
L(\theta_T, \theta_X \mid \text{Data}) = L(\theta_T \mid \text{Failure-time Data, Covariate History}) \times L(\theta_X \mid \text{Covariate History}),
where \theta_T = (\beta, \mu_0, \sigma_0), and \theta_X = (\rho, \sigma, \sigma_1, \sigma_2, \eta). That is, the covariate data (y_tt, and x_DFD in code) was first generated using the \theta_X parameters, and then the failure-time data (u_obs and u_cens) was generated afterwards. The failure-time data does not depend on \theta_X and the covariate data does not depend on \theta_T. I estimated these parameters separately using MLE and recovered the true parameter values.
However, even though I have programmed independent likelihood functions and independent priors in Stan, the joint posterior for (\theta_T, \theta_X) will introduce some dependencies between the \theta_T and \theta_X parameters, and hence the marginal samples will not be the same as they would be assuming complete independence. I think this makes sense but I am not sure if it would cause rho to be so far from the true value.
I know that
p(X \mid D)p(Y \mid D) \propto p(D \mid X)p(D \mid Y)p(X)p(Y)
if the prior and the likelihood functions are separable. But, I am not sure if Stan is somehow causing the parameters to be correlated.
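One way to convince yourself that a separable likelihood times separable priors yields an independent posterior (so the sampler itself cannot induce correlation) is a toy grid computation. The densities below are arbitrary placeholders, not the actual model:

```python
import numpy as np

# toy separable model: p(D|x,y) = f(x) * g(y), and the priors also factor
x = np.linspace(-3, 3, 101)
y = np.linspace(-3, 3, 101)
f = np.exp(-0.5 * (x - 1.0) ** 2)   # likelihood factor in x
g = np.exp(-0.5 * (y + 0.5) ** 2)   # likelihood factor in y
prior_x = np.exp(-0.5 * x ** 2)
prior_y = np.exp(-0.5 * y ** 2)

# joint posterior on the grid, normalized
joint = np.outer(f * prior_x, g * prior_y)
joint /= joint.sum()

# marginals of the joint
px = joint.sum(axis=1)
py = joint.sum(axis=0)

# the joint factorizes exactly into the product of its marginals
assert np.allclose(joint, np.outer(px, py))
```

If the factorization in the model code is genuinely separable, any posterior dependence seen in the draws points to a bug in the program rather than to Bayes' rule.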
Edit: The same argument applies when treating sig1 as an unknown parameter and fixing all other parameters. I have attached a diagnostic plot for sig1, and the quantiles of the samples obtained. The true value for sig1 is 0.46.
mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
sig1 1.463012e-01 0.00003008 0.00090603 1.445781e-01 1.457293e-01 1.462948e-01 1.469484e-01 1.480195e-01 907 0.9982059
Diagnostic plot for sig1: sig1.pdf (25.4 KB)
Wait I wasn’t reading this correctly before. This is the posterior right? Why is it parameters given data? I think what we actually have is:
p(\text{Data} | \theta_T, \theta_X) p(\theta_T)p(\theta_X)
And we’ve plugged this all into Stan to generate samples from:
p(\theta_T, \theta_X | \text{Data})
Is this in line with the notation you’re using?
Since both \theta_T and \theta_X go into generating the data, the data will inform both of them. There can be all sorts of interactions here. Presumably they’d be driven by situations where the data can be either explained by a certain choice of \theta_T or a certain choice of \theta_X. That’s all baked into Bayes rule and we don’t have any control here.
Yes, that is what we have.
L(\text{Data} \mid \theta_T, \theta_X) \equiv p(\text{Data} \mid\theta_T, \theta_X),
both notations are used for the likelihood function.
This is interesting. I will attempt to run the model adding more and more parameters and see if I get any divergent transitions. I could simulate some data from Stan and see if it looks like the actual data, to see if the parameters Stan outputs are reasonable.
Is Stan case sensitive? Will it know Sig and sig are different?
Also, could there be an issue with 1/sigma0 in the Weibull likelihoods? I.e. when a proposed value close to zero is chosen.
Well, I think going to a simpler model and less data is where you want to go :D. More divergences are just more confusion.
Yes
These are different
Is sigma0 estimated? How small is sigma0?
Okay, yes. I will try and simulate some data with a simpler model with one parameter to begin with.
Currently I am just substituting sigma0 with its true value (so this should not be an issue for now), but I will need to estimate it when I can sort out the issues with the simpler models. The true value of sigma0 is 8.2, but I was wondering if certain proposed values of sigma0 during estimation could cause divergent transitions.
Got it. I’d guess no, but I could be wrong. I don’t have experience with Weibull’s. This looks like a different situation than what gives rise to divergences with regular hierarchical models though.
Update: I have calculated the correlation between sig and each component of each random effects vector (there are 1800 random effects vectors, w = (w_1, w_2)). This could indicate that sig = f(w_1) or sig = f(w_2).
I could fit a linear regression to see if sig = f(w_1, w_2).
I also calculated the correlation between each random effects vector, i.e. cor(w_{i1}, w_{i2}), for i = 1, \dots, 1800. This could indicate that w_2 = f(w_1).
The range of correlations between sig and the components of the random effects vectors is [-0.1091405, 0.1046078], with an average correlation of 0.0001907212.
The range of correlations between each random effects vector is [-0.1159629, 0.1389009], with an average correlation of -0.002469418.
These numbers lead me to believe that there is no issue with sigma being correlated with the random effects, and that the elements of the random effects are themselves not correlated.
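The correlation sweep described above can be reproduced along these lines (the array names and the synthetic draws are hypothetical stand-ins for the actual posterior samples):

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws, n_vec = 1000, 1800
sig_draws = rng.normal(0.05, 0.01, size=n_draws)   # draws of sig
w1 = rng.normal(size=(n_draws, n_vec))             # draws of w_{i1}, i = 1..1800
w2 = rng.normal(size=(n_draws, n_vec))             # draws of w_{i2}

# correlation of sig with each component of each random-effects vector
cor_sig_w1 = [np.corrcoef(sig_draws, w1[:, i])[0, 1] for i in range(n_vec)]
# within-vector correlation cor(w_{i1}, w_{i2}) for each i
cor_w1_w2 = [np.corrcoef(w1[:, i], w2[:, i])[0, 1] for i in range(n_vec)]

print(min(cor_sig_w1), max(cor_sig_w1), np.mean(cor_sig_w1))
print(min(cor_w1_w2), max(cor_w1_w2), np.mean(cor_w1_w2))
```

With 1000 independent draws, sampling noise alone gives correlations on the order of ±0.1, which matches the ranges reported above.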
I also tried putting a very narrow prior on sig (around its true value of 0.05)
model {
...
sig ~ normal(0.05, 0.01);
}
I also set the initial value of sig to be 0.05 so that the chain would start at the true value.
Most of the density under this prior is between 0.02 and 0.08, yet Stan still returns
mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
sig 2.336760e-03 0.00004405 0.00176344 1.135100e-04 8.845000e-04 1.935080e-03 3.440960e-03 6.495730e-03 1602 0.9992026
This is the strangest result so far. How can Stan be giving most of the posterior density to regions that have close to zero density in the sig prior? There are no warnings, and the trace plots all seem to suggest the chains have mixed well.
Stan has no problem simulating mu0, Beta, and sigma0, when sig, sig1, sig2, rho, and eta are fixed (w is still a parameter in this scenario).
Edit: I decided to print sig after each iteration (I chose to do 10 iterations as I wanted to see what was happening at the beginning and the code was printing too quickly with a higher number of iterations). I choose the prior to be
sig ~ normal(0.05, 0.01);
and I set the initial value to be 0.05. The printed results are shown below.
SAMPLING FOR MODEL 'reparameterized2_check' NOW (CHAIN 1).
Chain 1: sig: 0.05
Chain 1: sig: 0.05
Chain 1:
Chain 1: Gradient evaluation took 0.01 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 100 seconds.
Chain 1:
Chain 1:
Chain 1: WARNING: No variance estimation is
Chain 1: performed for num_warmup < 20
Chain 1:
Chain 1: sig: 0.05
Chain 1: sig: 2.54598e+124
Chain 1: sig: 0.05
Chain 1: sig: 2.9852e+123
Chain 1: sig: 0.05
Chain 1: sig: 1.31739e+030
Chain 1: sig: 0.05
....
Stan seems to be proposing very large values for some reason. Also, since I chose 10 iterations, why are there many sig values between 50% and 60%? What are all of these values? I thought each iteration would contain one value. There are only multiple values for sig in the warmup phase. I guess this has something to do with Stan trying to get closer to the densest regions of the target distribution.
Chain 1: Iteration: 5 / 10 [ 50%] (Warmup)
Chain 1: sig: 0.0584591
Chain 1: sig: 0.0585334
Chain 1: sig: 0.058548
...
Chain 1: sig: 1.20778
...
Chain 1: sig: 0.00448044
Chain 1: Iteration: 6 / 10 [ 60%] (Sampling)
Chain 1: sig: 1.20778
Chain 1: sig: 0.488425
Chain 1: Iteration: 7 / 10 [ 70%] (Sampling)
Chain 1: sig: 1.20778
Chain 1: sig: 0.473162
Chain 1: Iteration: 8 / 10 [ 80%] (Sampling)
Chain 1: sig: 1.20778
Chain 1: sig: 0.484718
Chain 1: Iteration: 9 / 10 [ 90%] (Sampling)
Chain 1: sig: 1.20778
Chain 1: sig: 0.48561
Chain 1: Iteration: 10 / 10 [100%] (Sampling)
Chain 1: sig: 1.20778
There are a lot of values in-between 50% and 60%, including 1.20778. This value seems to be the starting value for all of the sampling iterations. Each sampling iteration has one proposal that appears to be rejected. | |
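The many printed values per iteration are the leapfrog steps of Stan's Hamiltonian Monte Carlo: a `print` inside the model block fires at every gradient evaluation, not once per draw. A toy 1-D leapfrog integrator (illustrative only, not Stan's implementation) shows a single iteration visiting many parameter values before one of them is accepted or rejected:

```python
import math

def grad_neg_logp(q):
    # standard normal target: -log p(q) = q^2 / 2, so the gradient is q
    return q

def leapfrog_trajectory(q0, p0, eps=0.1, n_steps=10):
    """One HMC iteration: n_steps leapfrog steps, each evaluating the
    parameter once (which is why print() fires many times per iteration)."""
    q, p = q0, p0
    visited = [q]
    p -= 0.5 * eps * grad_neg_logp(q)      # half step for momentum
    for _ in range(n_steps):
        q += eps * p                       # full step for position
        visited.append(q)
        p -= eps * grad_neg_logp(q)        # full step for momentum
    p += 0.5 * eps * grad_neg_logp(q)      # undo the extra half momentum step
    return q, p, visited

q1, p1, visited = leapfrog_trajectory(0.05, 1.0)
```

A rejected trajectory also explains why consecutive sampling iterations can all print the same starting value (1.20778 above): the chain stays put when the proposal at the end of the trajectory is rejected.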
# Water content
Water content or moisture content is the quantity of water contained in a material, such as soil (called soil moisture), rock, ceramics, or wood, measured on a volumetric or gravimetric basis. The property is used in a wide range of scientific and technical areas, and is expressed as a ratio, which can range from 0 (completely dry) to the value of the material's porosity at saturation.
Volumetric water content, θ, is defined mathematically as:
$\theta = \frac{V_w}{V_T}$
where Vw is the volume of water and VT = Vs + Vv = Vs + Vw + Va is the total volume (that is Soil Volume + Water Volume + Void Space). Water content may also be based on its mass or weight,[1] thus the gravimetric water content is defined as:
$u = \frac{m_w}{m_b}$
where mw is the mass of water and mb (or ms for soil) is the bulk material mass. To convert gravimetric water content to volumetric water, multiply the gravimetric water content by the bulk specific gravity of the material.
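The two definitions and the conversion between them can be written out directly (a minimal sketch; function and variable names are illustrative):

```python
def gravimetric_water_content(m_water, m_bulk):
    """u = m_w / m_b: mass of water per unit bulk mass."""
    return m_water / m_bulk

def volumetric_from_gravimetric(u, bulk_specific_gravity):
    """theta = u * SG_bulk: multiply gravimetric water content by the
    bulk specific gravity of the material."""
    return u * bulk_specific_gravity

# example: 30 g of water in a 150 g bulk sample with bulk specific gravity 1.5
u = gravimetric_water_content(30.0, 150.0)        # 0.20
theta = volumetric_from_gravimetric(u, 1.5)       # 0.30
```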
## Other definitions
### Degree of saturation
In soil mechanics and petroleum engineering, the term water saturation or degree of saturation, Sw is used, defined as
$S_w = \frac{V_w}{V_v} = \frac{V_w}{V_T\phi} = \frac{\theta}{\phi}$
where φ = Vv / VT is the porosity and Vv is the volume of void or pore space.
Values of Sw can range from 0 (dry) to 1 (saturated). In reality, Sw never reaches 0 or 1; these are idealizations for engineering use.
### Normalized volumetric water content
The normalized water content, Θ, (also called effective saturation or Se) is a dimensionless value defined by van Genuchten[2] as:
$\Theta = \frac{\theta - \theta_r}{\theta_s-\theta_r}$
where θ is the volumetric water content; θr is the residual water content, defined as the water content for which the gradient dθ / dh becomes zero; and, θs is the saturated water content, which is equivalent to porosity, φ.
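The definitions above compose directly; the sketch below computes the degree of saturation and the normalized (effective) water content from the same quantities (names illustrative):

```python
def degree_of_saturation(theta, porosity):
    # S_w = theta / phi, ranging from 0 (dry) to 1 (saturated)
    return theta / porosity

def normalized_water_content(theta, theta_r, theta_s):
    # van Genuchten's effective saturation: (theta - theta_r) / (theta_s - theta_r)
    return (theta - theta_r) / (theta_s - theta_r)

# example: theta = 0.25, saturated content (= porosity) 0.40, residual 0.05
sw = degree_of_saturation(0.25, 0.40)              # 0.625
se = normalized_water_content(0.25, 0.05, 0.40)    # 0.20 / 0.35
```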
## Measurement
### Direct methods
Water content can be directly measured using a known volume of the material and a drying oven. Volumetric water content, θ, is calculated[3] using:
$\theta = \frac{m_{\text{wet}}-m_{\text{dry}}}{\rho_w \cdot V_b}$
where
mwet and mdry are the masses of the sample before and after drying in the oven;
ρw is the density of water; and
Vb is the volume of the sample before drying the sample.
For materials that change in volume with water content, such as coal, the water content, u, is expressed in terms of the mass of water per unit mass of the moist specimen:
$u = \frac{m_{\text{wet}} - m_{\text{dry}}}{m_{\text{wet}}}$
However, geotechnics requires the moisture content to be expressed as a percentage of the sample's dry weight i.e. % moisture content = u * 100
Where
$u = \frac{m_{\text{wet}} - m_{\text{dry}}}{m_{\text{dry}}}$
For wood, the convention is to report moisture content on an oven-dry basis (i.e. generally drying the sample in an oven set at 105 °C for 24 hours). In wood drying, this is an important concept.
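The three oven-drying formulas above differ only in the denominator; a short sketch (names illustrative):

```python
def theta_volumetric(m_wet, m_dry, volume_bulk, rho_w=1000.0):
    # volumetric: (m_wet - m_dry) / (rho_w * V_b); masses in kg, volume in m^3
    return (m_wet - m_dry) / (rho_w * volume_bulk)

def u_wet_basis(m_wet, m_dry):
    # water mass per unit mass of the moist specimen (used e.g. for coal)
    return (m_wet - m_dry) / m_wet

def u_dry_basis(m_wet, m_dry):
    # geotechnical convention: water mass per unit dry mass; multiply by 100 for percent
    return (m_wet - m_dry) / m_dry

# example: 1.00 kg wet, 0.80 kg dry, 0.0005 m^3 sample before drying
theta = theta_volumetric(1.00, 0.80, 0.0005)   # 0.40
u_w = u_wet_basis(1.00, 0.80)                  # 0.20
u_d = u_dry_basis(1.00, 0.80)                  # 0.25
```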
### Laboratory methods
Other methods that determine water content of a sample include chemical titrations (for example the Karl Fischer titration), determining mass loss on heating (perhaps in the presence of an inert gas), or after freeze drying. In the food industry the Dean-Stark method is also commonly used.
From the Annual Book of ASTM (American Society for Testing and Materials) Standards, the total evaporable moisture content in Aggregate (C 566) can be calculated with the formula:
$p = \frac{W-D}{D}$
where p is the fraction of total evaporable moisture content of sample, W is the mass of the original sample, and D is mass of dried sample.
### Geophysical methods
There are several geophysical methods available that can approximate in situ soil water content. These methods include: time-domain reflectometry (TDR), neutron probe, frequency domain sensor, capacitance probe, electrical resistivity tomography, ground penetrating radar (GPR), and others that are sensitive to the physical properties of water [4]. Geophysical sensors are often used to monitor soil moisture continuously in agricultural and scientific applications.
### Satellite remote sensing method
Satellite microwave remote sensing is used to estimate soil moisture, based on the large contrast between the dielectric properties of wet and dry soil. Data from microwave remote sensing satellites such as WindSat, AMSR-E, RADARSAT, ERS-1-2 and Metop/ASCAT are used to estimate surface soil moisture [1].
## Classification and uses
Moisture may be present as adsorbed moisture at internal surfaces and as capillary condensed water in small pores. At low relative humidities, moisture consists mainly of adsorbed water. At higher relative humidities, liquid water becomes more and more important, depending on the pore size. In wood-based materials, however, almost all water is adsorbed at humidities below 98% RH.
In biological applications there can also be a distinction between physisorbed water and "free" water — the physisorbed water being that closely associated with and relatively difficult to remove from a biological material. The method used to determine water content may affect whether water present in this form is accounted for. For a better indication of "free" and "bound" water, the water activity of a material should be considered.
Water molecules may also be present in materials closely associated with individual molecules, as "water of crystallization", or as water molecules which are static components of protein structure.
### Earth and agricultural sciences
In soil science, hydrology and agricultural sciences, water content has an important role for groundwater recharge, agriculture, and soil chemistry. Many recent scientific research efforts have aimed toward a predictive understanding of water content over space and time. Observations have revealed generally that spatial variance in water content tends to increase as overall wetness increases in semiarid regions, to decrease as overall wetness increases in humid regions, and to peak under intermediate wetness conditions in temperate regions [5].
There are four standard water contents that are routinely measured and used, which are described in the following table:
| Name | Notation | Suction pressure (J/kg or kPa) | Typical water content (vol/vol) | Description |
|---|---|---|---|---|
| Saturated water content | θs | 0 | 0.2–0.5 | Fully saturated water, equivalent to effective porosity |
| Field capacity | θfc | −33 | 0.1–0.35 | Soil moisture 2–3 days after a rain or irrigation |
| Permanent wilting point | θpwp or θwp | −1500 | 0.01–0.25 | Minimum soil moisture at which a plant wilts |
| Residual water content | θr | −∞ | 0.001–0.1 | Remaining water at high tension |
And lastly the available water content, θa, which is equivalent to:
θa ≡ θfc − θpwp
which can range between 0.1 in gravel and 0.3 in peat.
#### Agriculture
When a soil gets too dry, plant transpiration drops because the water is becoming increasingly bound to the soil particles by suction. Below the wilting point, plants are no longer able to extract water; at this point they wilt and cease transpiring altogether. Conditions where soil is too dry to maintain reliable plant growth are referred to as agricultural drought, and are a particular focus of irrigation management. Such conditions are common in arid and semi-arid environments.
Some agriculture professionals are beginning to use environmental measurements such as soil moisture to schedule irrigation. This method is referred to as smart irrigation or soil cultivation.
#### Groundwater
In saturated groundwater aquifers, all available pore spaces are filled with water (volumetric water content = porosity). Above a capillary fringe, pore spaces have air in them too.
Most soils have a water content less than porosity, which is the definition of unsaturated conditions, and they make up the subject of vadose zone hydrogeology. The capillary fringe of the water table is the dividing line between saturated and unsaturated conditions. Water content in the capillary fringe decreases with increasing distance above the phreatic surface.
One of the main complications which arises in studying the vadose zone, is the fact that the unsaturated hydraulic conductivity is a function of the water content of the material. As a material dries out, the connected wet pathways through the media become smaller, the hydraulic conductivity decreasing with lower water content in a very non-linear fashion.
A water retention curve is the relationship between volumetric water content and the water potential of the porous medium. It is characteristic for different types of porous medium. Due to hysteresis, different wetting and drying curves may be distinguished.
## References
1. ^ T. William Lambe & Robert V. Whitman (1969). "Chapter 3: Description of an Assemblage of Particles". Soil Mechanics (First ed.). John Wiley & Sons, Inc.. p. 553. ISBN 471-51192-7.
2. ^ van Genuchten, M.Th. (1980). "A closed-form equation for predicting the hydraulic conductivity of unsaturated soils". Soil Science Society of America Journal 44 (5): 892–898.
3. ^ Dingman, S.L. (2002). "Chapter 6, Water in soils: infiltration and redistribution". Physical Hydrology (Second ed.). Upper Saddle River, New Jersey: Prentice-Hall, Inc.. p. 646. ISBN 0-13-099695-5.
4. ^ F. Ozcep, M. Asci, O. Tezel, T. Yas, N. Alpaslan, D. Gundogdu (2005). "Relationships Between Electrical Properties (in Situ) and Water Content (in the Laboratory) of Some Soils in Turkey". Geophysical Research Abstracts 7.
5. ^ Lawrence, J. E., and G. M. Hornberger (2007). "Soil moisture variability across climate zones". Geophys. Res. Lett. 34 (L20402): L20402. doi:10.1029/2007GL031382.
## Abstract
Trade-offs between locomotory costs and foraging gains are key elements in determining constraints on predator–prey interactions. One intriguing example involves polar bears pursuing snow geese on land. As climate change forces polar bears to spend more time ashore, they may need to expend more energy to obtain land-based food. Given that polar bears are inefficient at terrestrial locomotion, any extra energy expended to pursue prey could negatively impact survival. However, polar bears have been regularly observed engaging in long pursuits of geese and other land animals, and the energetic worth of such behaviour has been repeatedly questioned. We use data-driven energetic models to examine how energy expenditures vary across polar bear mass and speed. For the first time, we show that polar bears in the 125–235 kg size range can profitably pursue geese, especially at slower speeds. We caution, however, that heat build-up may be the ultimate limiting factor in terrestrial chases, especially for larger bears, and this limit would be reached more quickly with warmer environmental temperatures.
## Introduction
The relationship between energetic gain and locomotory cost is a key determinant in predatory behaviour and greatly influences predator–prey interactions (e.g. Sinclair et al., 2003; Scharf et al., 2006). In the broadest sense, predatory behaviour of mammalian carnivores spans a range from ambushes [e.g. lions (Panthera leo) and leopards (Panthera pardus)] to rapid, long-distance pursuits [e.g. cheetah (Acinonyx jubatus) and spotted hyena (Crocuta crocuta); e.g. Bro-Jørgensen, 2013]. A particularly intriguing case involves the interactions of polar bears (Ursus maritimus) and lesser snow geese (Chen caerulescens caerulescens), a land-based prey that may become an increasingly important seasonal food resource for polar bears as climate changes (Gormezano and Rockwell, 2013a,b, 2015).
Polar bears normally use the sea ice as a platform to catch marine prey, particularly ringed seals (Pusa hispida), and accumulate a majority of their annual fat reserves from consuming seal pups in spring (e.g. Stirling and Øritsland, 1995). In more southern polar bear populations, it is thought that this energy store helps to sustain the bears during the ice-free period each summer (e.g. Stirling and Derocher, 1993; Regehr et al., 2007). With warmer temperatures leading to earlier sea ice break-up, access to this energy-rich spring seal diet may become limited, potentially forcing the bears to expend energy seeking land-based food to compensate for energy deficits (e.g. Stirling and Derocher, 2012; Gormezano and Rockwell, 2013a, 2015; Lunn et al., 2016). Any increased effort to obtain food is of concern because polar bears are considered inefficient at walking (Øritsland et al., 1976; Best 1982; Hurst et al., 1982a,b), exhibiting higher rates of oxygen consumption with increased walking speed than predicted for mammals of their size (Taylor et al., 1970; Fedak and Seeherman, 1979). The higher rates of energy use have been attributed to their morphology, particularly their large, heavy limbs (Øritsland et al., 1976; Hurst et al., 1982a,b), a characteristic shared by male lions that likewise have relatively high costs of locomotion (Chassin et al., 1976). Despite these energetic limitations, polar bears are known to walk long distances in search of prey on sea ice and land (e.g. Born et al., 1997; Amstrup et al., 2000; Parks et al., 2006; Anderson et al., 2008; Rockwell et al., 2011) but generally use more energy-conserving stalking or ‘still-hunting’ techniques to capture seals and other marine mammals on the sea ice (e.g. Stirling, 1974; Smith, 1980).
Some polar bears, especially those forced ashore when the sea ice melts in summer, have been observed running on land in pursuit of terrestrial prey (e.g. Brook and Richardson, 2002; Iles et al., 2013 and references therein). Given their locomotive inefficiency and potential to overheat in warm weather (Øritsland, 1970; Øritsland and Lavigne, 1976; Best, 1982), it is unclear whether these more intensive pursuits can be energetically profitable (Lunn and Stirling, 1985; Iles et al., 2013). In the only examination of this issue thus far, Lunn and Stirling (1985) used a calculation based on Hurst et al. (1982a) to suggest that a 320 kg polar bear chasing a goose at 20 km/h for >12 s would expend more energy in the pursuit than could be obtained from consuming it. Despite the speed and mass specificity of that projection, many authors have used this threshold in evaluating observations of polar bears chasing various land-based prey [e.g. caribou, Rangifer tarandus (Brook and Richardson, 2002); barnacle geese, Branta leucopsis (Stempniewicz, 2006); thick-billed murres, Uria lomvia (Donaldson et al., 1995); lesser snow geese (Iles et al., 2013)] and questioned the energetic worth of the observed predatory behaviours.
The exact energetic costs associated with land-based hunting behaviour are especially important for polar bears in western Hudson Bay, where recent warming trends are rapidly diminishing ice extent and duration (Gagnon and Gough, 2005; Stirling and Parkinson, 2006; Lunn et al., 2016). If polar bears come ashore with nutritional deficits (e.g. Stirling and Parkinson, 2006; Regehr et al., 2007), any calories obtained on land may become increasingly important for survival (Gormezano and Rockwell, 2013a,b; Gormezano, 2014; Gormezano and Rockwell, 2015) unless the net energetic gain from foods obtained on land exceeds the energetic costs required to obtain them. In western Hudson Bay, snow geese make up an increasing proportion of polar bears’ land-based diet owing in part both to increased temporal overlap of the two species and to greatly increased abundance of snow geese (Gormezano and Rockwell, 2013a, 2015). Given that polar bears in this region spend increasingly more time on land and thus have more opportunities for terrestrial foraging, we constructed predictive models that estimate, for the first time, the metabolic costs of terrestrial locomotion for polar bears of multiple sizes travelling at various speeds. We then use the best-fitting model to evaluate when a polar bear would profit from chasing and catching moulting snow geese, a common terrestrial prey species during summer.
In the following analysis, we revisit the only published data on the metabolic costs of locomotion across a range of speeds for polar bears of multiple sizes. We assess the profitability of pursuing flightless geese using data-driven energetic models that simultaneously account for the effects of polar bear speed and mass. We show that pursuits lasting longer than 20 min can be energetically profitable, although this depends strongly on the speed and mass of the polar bear, and that successful pursuits of even distant geese can result in net energetic gains for some polar bears. Furthermore, we show that the smaller, younger bears that could take most advantage of this profitability include those whose survival in western Hudson Bay is lower (Lunn et al., 2016) and that may be more impacted by climate change (Regehr et al., 2007).
## Materials and methods
To develop a data-driven model that allows oxygen consumption (and thus metabolism) to scale with polar bear speed and mass, we extracted original data from the three published studies that reported measurements of oxygen consumption ($\dot{V}_{O_2}$; in millilitres of O2 per gram per hour) as a function of walking speed for polar bears that weighed 125, 155, 190 and 235 kg. The 125 and 155 kg animals were subadult males (as defined by Watts et al., 1991), the 190 kg animal was a 4-year-old female (Hurst et al., 1982a) and the 235 kg animal was a ~4-year-old male (Øritsland et al., 1976). We used the means of the multiple trials of each bear at each speed as the best estimates of O2 consumption for each mass and speed. Both linear (Øritsland, 1970; Hurst et al., 1982a) and double-exponential regression models (Hurst et al., 1982a) have previously been used to describe how oxygen consumption changes with speed for bears of different sizes. Here, we first considered three potential models to describe the general shape of the relationship between polar bear speed [S; we use this term rather than velocity (V) as used by Hurst et al., 1982a] and oxygen consumption ($\dot{V}_{O_2}$) using data from Øritsland et al. (1976), Hurst et al. (1982a) and Watts et al. (1991). Our initial model set included the following:
(1) a linear model that allows metabolism to increase at a constant rate with increasing speed,

$\dot{V}_{O_2} = P + bS; \qquad (1)$

(2) an exponential model that allows metabolism to accelerate with increasing speed,

$\dot{V}_{O_2} = P e^{bS}; \qquad (2)$

and (3) a double-exponential model that allows metabolism to more flexibly scale with speed,

$\dot{V}_{O_2} = P e^{bS^{c}}; \qquad (3)$

where P is polar bear postural cost (i.e. the energetic cost of maintaining an upright posture when speed is zero), e is the base of the natural logarithm (2.718…), and b and c are exponents that describe the rates at which oxygen consumption changes with movement speed (S). From previous work (Hurst et al., 1982b), postural costs are known to depend on mass. Thus, in all models we fixed the postural costs at the expected values for each polar bear mass based on the equation of Hurst et al. (1982b), following Taylor et al. (1970):

$P = 1.056 \times \text{mass}^{-0.25}. \qquad (4)$
By fixing the postural costs (the y-intercept) based on this equation rather than allowing the postural costs to be estimated based on model fit, we improve the biological realism of our models outside the range of our data (i.e. when speed is zero), while only slightly sacrificing goodness of fit within the range of our data (speeds of 1.8–7.92 km/h). We note, however, that results were qualitatively similar whether postural costs were fixed based on Equation 4 or estimated based on our data. We evaluated relative support for the models using Akaike's information criterion (AICc; Akaike, 1973) and found that the exponential and double-exponential models received similar support (Table 1; ΔAICc = 0 and 0.5, respectively), and greatly outperformed the linear model (ΔAICc = 24).
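With the postural cost P fixed by Equation 4, the exponential model is linear in log space, log(V̇O2/P) = bS, so b can be recovered by ordinary least squares through the origin. The sketch below fits synthetic data generated from a known exponent rather than the published measurements:

```python
import numpy as np

def postural_cost(mass_kg):
    # Equation 4: P = 1.056 * mass^(-0.25), in ml O2 g^-1 h^-1
    return 1.056 * mass_kg ** -0.25

def fit_exponential(speeds, vo2, mass_kg):
    """Fit VO2 = P * exp(b * S) with P fixed by mass; return b."""
    P = postural_cost(mass_kg)
    S = np.asarray(speeds)
    y = np.log(np.asarray(vo2) / P)
    # least squares through the origin: b = sum(S*y) / sum(S^2)
    return float(S @ y / (S @ S))

# synthetic check: data generated with b = 0.25 should recover b exactly
S = np.array([1.8, 3.6, 5.4, 7.2])        # speeds within the measured range, km/h
b_true = 0.25
vo2 = postural_cost(190.0) * np.exp(b_true * S)
b_hat = fit_exponential(S, vo2, 190.0)
```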
Table 1:
Model selection results incorporating effects of mass on the relationship between speed and oxygen consumption.
| Model | logLik | AICc | ΔLogLik | ΔAICc | parameters | Weight |
|---|---|---|---|---|---|---|
| $Pe^{bS}$ | 10.1 | −15.5 | 12 | 0 | | 0.288 |
| $Pe^{bS^{c}}$ | 11.3 | −15 | 13.2 | 0.5 | | 0.223 |
| $Pe^{(b+m_1\cdot\text{mass})S^{(c+m_2\cdot\text{mass})}}$ | 14.9 | −14.7 | 16.7 | 0.7 | | 0.199 |
| $Pe^{bS^{(c+m_2\cdot\text{mass})}}$ | 12.3 | −13.6 | 14.2 | 1.9 | | 0.113 |
| $Pe^{(b+m_1\cdot\text{mass})S^{c}}$ | 12.2 | −13.4 | 14.1 | 2.1 | | 0.101 |
| $Pe^{(b+m_1\cdot\text{mass})S}$ | 10.3 | −12.8 | 12.1 | 2.7 | | 0.076 |
| $P+bS$ | −1.8 | 8.5 | 0 | 24 | | <0.001 |
Model parameters are as follows: b and c, single and double exponents, respectively; e, the base of the natural logarithm (2.718…); m1 and m2, scaling parameters that relate the single exponent and the double exponent, respectively, to polar bear mass; mass, polar bear mass (in kilograms); P, postural costs; and S, polar bear movement speed. In all models, postural costs are described by Equation 4 and thus depend on polar bear mass.
We then constructed several additional models to evaluate potential effects of polar bear mass on oxygen consumption, beyond the effects on postural cost in Equation 4. Given that the exponential and double-exponential models received similar support and produced similar predictions across the range of our data, we constructed a suite of models that allowed mass to influence b and/or c in Equations 2 and 3 (Table 1). We used AICc and Akaike weights to evaluate relative support among different parameterizations and assess the relative effects of mass and speed on oxygen consumption.
Using model projections of oxygen consumption based on our top model and following Lunn and Stirling (1985), we calculated the time threshold (hereafter, ‘inefficiency threshold’) beyond which the calories expended to chase a goose exceeded the calories obtained from consuming it for polar bears ranging in mass from 125 to 235 kg and over a range of speeds from 0 to 7.9 km/h. For comparative purposes with previous work (Lunn and Stirling, 1985) and because polar bears are known to run at speeds up to 29 km/h (Harrington, 1965), we also projected inefficiency thresholds to 20 km/h. We discuss the assumptions and limitations of those extrapolations in the Discussion.
Estimating the usable energy available to a polar bear eating a goose requires knowledge of (i) the energy in the part(s) of a goose that are eaten, and (ii) the digestibility of the energy in the parts of the goose eaten. Polar bears that successfully capture and eat a variety of prey including seals (Smith, 1980; Best, 1985) and geese (Iles et al., 2013, Gormezano and Rockwell, 2015; DTI & RFR personal observations) rarely consume the less digestible portions, including hair and feathers, and usually avoid eating the gastrointestinal tract and the entire skeleton. Thus, we assumed that polar bears primarily consumed the breast, leg muscle, gizzard and fat stores from a captured goose. We estimated the caloric value of these eaten parts of the goose using adult female goose body composition data from Ankney and MacInnes (1978) (as did Lunn and Stirling, 1985) during the post-hatch period, when many instances of predation have been observed (Iles et al., 2013). At this post-hatch time, adult female geese (n = 35) had negligible amounts of fat and 163.3 ± 4.0 g of protein within the gizzard, breast and leg muscles (Table 3 of Ankney and MacInnes, 1978), which would provide 702.5 kcal, assuming an energy-to-protein conversion of 4.3 kcal/g protein (Robbins, 1993). However, polar bears cannot be expected to digest all the available protein, so some discount is necessary.
Grizzly and black bears digested 89–96% of crude protein in the meat from various mammals and birds (Pritchard and Robbins, 1990), whereas the digestibility of crude protein for bears fed whole birds or mammals was less (85.5 ± 2.2%) because of the non-digestible or less digestible parts (e.g. feathers, hair, skeleton; Pritchard and Robbins, 1990; Robbins, 1993). Likewise, captive polar bears fed various parts of ringed seals (Phoca hispida) digested 72–95% of protein nitrogen, with the highest digestibility occurring when polar bears ate seal muscle and viscera and the lowest digestibility when the skeleton, skin and blubber were also eaten (Best, 1985). We assumed that polar bears digested 95% of protein when eating only the gizzard, leg and breast muscle of the goose; digestibility of protein would be much lower (72–85%) if polar bears also ingested other less digestible parts of the whole goose. We present results for the most likely scenario, where polar bears ate the gizzard, leg and breast muscle of the goose and thus gained 667.4 kcal per goose (total of 702.5 kcal, of which 95% was digested).
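The usable-energy arithmetic above can be checked in a few lines. A minimal sketch in Python (the study's analyses were performed in R; values are those quoted in the text, and small differences from the published 702.5 and 667.4 kcal reflect rounding of the reported 163.3 g protein figure):

```python
# Energy a polar bear can use from the eaten parts of an adult female
# snow goose (protein mass from Ankney and MacInnes, 1978).
protein_g = 163.3          # g protein in gizzard, breast and leg muscles
kcal_per_g_protein = 4.3   # energy-to-protein conversion (Robbins, 1993)
digestibility = 0.95       # assumed for a muscle-and-gizzard-only meal

gross_kcal = protein_g * kcal_per_g_protein    # ~702 kcal (paper: 702.5)
usable_kcal = gross_kcal * digestibility       # ~667 kcal (paper: 667.4)
print(round(gross_kcal, 1), round(usable_kcal, 1))
```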
Finally, to determine the conditions in which inefficiency thresholds would be reached during pursuits of flightless geese, we calculated the duration of pursuits resulting from different combinations of polar bear speeds and initial distances from geese. We assumed that geese fled from pursuing bears at 2 m/s, a value slightly higher (and thus more conservative in terms of polar bear profitability analysis) than the reported maximal sustained running speeds of 0.8–1.2 m/s, considered ‘moderate’ to ‘fast’ for similar-sized geese (Codd et al., 2005; Hawkes et al., 2014). We calculated the time (t) required for a polar bear to capture a goose as follows:
(5)
$$t=\frac{D}{S_{\mathrm{bear}}-S_{\mathrm{goose}}},$$
where D is the initial distance between the bear and the goose, and Sbear and Sgoose are their respective speeds. For each combination of bear mass, speed and initial distance, we calculated the inefficiency threshold and compared this with the chase duration to determine whether the pursuit resulted in a net surplus of energy for the bear.
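Equation 5 can be sketched directly. The function below is a hypothetical helper (not part of the study's R code), and the 11 km/h pursuit speed in the example is an arbitrary illustrative value:

```python
def capture_time_s(distance_m, bear_speed_kmh, goose_speed_ms=2.0):
    """Time (s) for a bear to close an initial gap (Equation 5).

    Returns infinity when the bear is no faster than the goose,
    i.e. the goose is never caught.
    """
    bear_speed_ms = bear_speed_kmh / 3.6   # convert km/h to m/s
    if bear_speed_ms <= goose_speed_ms:
        return float("inf")
    return distance_m / (bear_speed_ms - goose_speed_ms)

# A goose 500 m away pursued at 11 km/h is caught in about 474 s (~7.9 min).
t = capture_time_s(500, 11.0)
```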
All analyses were performed using the R statistical programming language (version 3.2.3; R Development Core Team, 2008).
## Results
The relationship between polar bear movement speed and oxygen consumption was best described by either an exponential or a double-exponential model, indicating that metabolism increases exponentially at higher speeds (Fig. 1). We found no support for an effect of polar bear mass on the exponents in either model (Table 1). Given that postural cost depends on polar bear mass (Equation 4) but the shape of the exponential relationship between polar bear speed and oxygen consumption does not, larger bears are more efficient than smaller bears on a proportional basis across all movement speeds (Fig. 2). As the exponential model received slightly higher support and was more parsimonious (i.e. used fewer parameters) than the double-exponential model, we used the exponential model to generate estimates of oxygen consumption as a function of polar bear mass and speed (Fig. 2) and, subsequently, to determine energetic inefficiency thresholds and profitability while chasing flightless geese. We noted, however, that the double-exponential model produced very similar predictions to the top model across the range of data (Fig. 1, compare continuous and dashed lines).
Figure 1:
Mass-specific oxygen consumption increases with movement speed. Postural costs (y-intercept) are affected by polar bear mass according to Equation 4. The top model based on AICc was a single-exponential model (continuous lines). A double-exponential model received similar support (ΔAICc = 0.5) and made similar predictions across the range of data (dashed lines).
Figure 2:
Mass-specific oxygen consumption ($\dot{V}O_2$) increases with movement speed. Postural costs (y-intercept) are affected by polar bear mass according to Equation 4. Larger bears are proportionately more efficient than smaller bears. Curves are based on predictions from the top model (exponential model; Equation 2), which when parameterized is: $\dot{V}O_2 = (1.056 \times \text{mass}^{-0.25}) \times e^{0.2626 S}$.
Combining results from our oxygen consumption models with the energetic value of a female lesser snow goose, we calculated that a 125 kg polar bear could chase a goose for 26.9 min at 7.9 km/h (the maximal speed of polar bears for which oxygen consumption measurements were recorded) before the chase became energetically unprofitable. In contrast, the inefficiency threshold for a 235 kg bear at 7.9 km/h was 16.7 min. Given that energy consumption increases with speed, the inefficiency threshold decreases with increasing speed for bears of any mass. Despite larger bears having lower proportional oxygen consumption than smaller bears (Fig. 2), the higher absolute mass of larger bears results in lower inefficiency thresholds across the range of speeds for which there are data (Fig. 3). As a consequence, smaller bears can sustain longer chases.
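These thresholds can be reproduced from the parameterized top model given in the legend to Fig. 2. A back-of-the-envelope check (assuming $\dot{V}O_2$ is in litres O2 per kilogram per hour, as in the original analysis):

```python
import math

def vo2(mass_kg, speed_kmh):
    # Mass-specific oxygen consumption (l O2 kg^-1 h^-1), top model (Fig. 2).
    return 1.056 * mass_kg ** -0.25 * math.exp(0.2626 * speed_kmh)

def inefficiency_threshold_min(mass_kg, speed_kmh, goose_kcal=667.4):
    # Chase duration (min) at which energy spent equals one goose's energy.
    kcal_per_min = vo2(mass_kg, speed_kmh) * mass_kg * 4.735 / 60
    return goose_kcal / kcal_per_min

print(round(inefficiency_threshold_min(125, 7.9), 1))  # ≈ 26.9 min
print(round(inefficiency_threshold_min(235, 7.9), 1))  # ≈ 16.8 min (paper: 16.7)
```

The small discrepancy for the 235 kg bear presumably reflects unrounded model coefficients in the original R analysis.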
Figure 3:
Time ‘inefficiency’ threshold, beyond which the calories expended by a polar bear to chase an adult female goose exceed the calories obtained from consuming it, as a function of speed of the chase and polar bear mass. Note that projections for speeds >7.9 km/h (dashed vertical line) are extrapolations beyond the available data and should be interpreted with caution, but are pictured for comparison with extrapolations by previous studies. The inefficiency threshold ($I$, in minutes) is calculated as $I = 667.4/(\dot{V}O_2 \times \text{mass} \times 4.735/60)$, where 667.4 is the digestible caloric value of a goose (kcal), mass-specific oxygen consumption ($\dot{V}O_2$) is estimated as in the legend to Fig. 2, 4.735 is the standard conversion of 1 litre of oxygen to kilocalories, and the division by 60 converts the hourly energy expenditure to kilocalories per minute.
Ultimately, the time required to capture terrestrial prey depends on the initial distance between the polar bear and prey and the relative speeds of the bear and the prey. If the chase duration exceeds the energy inefficiency threshold for that particular pursuit speed, polar bears will lose energy even from pursuits in which they successfully capture geese. We found that polar bears were capable of capturing geese before reaching their inefficiency threshold for a wide range of pursuit scenarios (Fig. 4, blue areas). Smaller bears (i.e. 125 kg) were capable of gaining energy from pursuits of geese up to 754 m away, whereas larger bears (i.e. 235 kg) could gain energy from pursuits of geese up to 468 m away.
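The maximum profitable distances can be recovered by scanning pursuit speeds: at each speed, the farthest profitable goose is the one the bear just catches as the chase reaches the inefficiency threshold. A sketch re-using the model from the Fig. 2 legend (small discrepancies from the published 754 m and 468 m reflect rounding of the model coefficients):

```python
import math

def vo2(mass_kg, speed_kmh):
    # Mass-specific oxygen consumption (l O2 kg^-1 h^-1), top model (Fig. 2).
    return 1.056 * mass_kg ** -0.25 * math.exp(0.2626 * speed_kmh)

def max_profitable_distance_m(mass_kg, goose_kcal=667.4, goose_kmh=7.2):
    best = 0.0
    for i in range(721, 2001):              # scan speeds 7.21..20.00 km/h
        s = i / 100
        kcal_per_h = vo2(mass_kg, s) * mass_kg * 4.735
        threshold_h = goose_kcal / kcal_per_h        # profitable chase duration
        best = max(best, (s - goose_kmh) * 1000 * threshold_h)
    return best

print(round(max_profitable_distance_m(125)))  # ≈ 755 m (paper: 754 m)
print(round(max_profitable_distance_m(235)))  # ≈ 470 m (paper: 468 m)
```

Note that the optimal pursuit speed (about 11 km/h) lies beyond the 7.9 km/h limit of the measured data, so these maxima inherit the extrapolation caveat discussed in the text.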
Figure 4:
Profitability of capturing flightless snow geese for polar bears weighing 125 (A) or 235 kg (B). The initial distance from flightless geese and polar bear speed influence the time required to capture a goose, whereas polar bear mass and speed influence the inefficiency threshold (chase duration beyond which energy expenditures exceed energy gains from consuming a 667.4 kcal goose). Chases that are shorter in duration than the inefficiency threshold are coloured in blue (resulting in a net energy surplus for polar bears). Note that because geese are capable of running at 2 m/s (or 7.2 km/h), bears are incapable of capturing geese when moving slower than this speed. Areas to the right of the white dashed lines are extrapolations outside the range of data, but are pictured for comparison with extrapolations in previous studies.
## Discussion
The best-supported predictive model for estimating the metabolic costs of terrestrial locomotion for polar bears of different sizes was a simple exponential model (Fig. 2). Importantly, the shape of the exponential relationship between polar bear speed and metabolic cost did not depend on polar bear mass, and only the postural costs (y-intercept) were mass dependent, implying that smaller bears spend proportionately more energy on locomotion than larger bears (Fig. 3). Previous studies have shown that postural costs (energy costs when speed is zero) are greater for smaller bears (Scholander et al., 1950; Hurst et al., 1982b), a pattern observed in smaller and immature animals in general (Taylor et al., 1970; Lavigne et al., 1986). These higher postural costs with decreasing polar bear mass, combined with similar exponential increases in the energy costs of locomotion with travel speed regardless of mass, result in smaller bears having proportionately higher locomotion costs than larger bears at a given travel speed.
Earlier studies have suggested that the higher locomotive costs of smaller bears could be related to increased stride frequency, because more steps will be needed to maintain the same speed as larger bears (Heglund and Taylor, 1988; Best et al., 1981). Energy cost per gram of body weight per stride is relatively constant across animals of drastically different masses moving at the same speed (Heglund et al., 1982), so although heavier animals require more energy to move per stride, the longer stride length and lower stride frequency could result in increased efficiency over the same distance (Heglund et al., 1982). Incremental rates of energy use during terrestrial locomotion can also change with transitions to different gaits (Chassin et al., 1976; Heglund and Taylor, 1988; Reilly et al., 2007; Watson et al., 2011), although this has not yet been studied in polar bears and warrants further attention because it could affect the shape of oxygen consumption curves at higher speeds.
Pursuits (and captures) of flightless snow geese lasting longer than 12 s have been documented (Iles et al., 2013), and we have observed multiple examples of this behaviour in recent years (LJG & RFR, unpublished data). Our analyses here indicate that these observations are to be expected, given that prolonged (i.e. >20 min) pursuits of even distant geese (i.e. farther than 500 m) can be energetically profitable, especially for polar bears in the size range for which there are data (Figs 3 and 4). Within that range, smaller bears are capable of profitably engaging in pursuits of more distant geese and at higher pursuit speeds, given their lower overall level of energy expenditure (Fig. 4). In western Hudson Bay, subadult polar bears (those that are included in the studied size range) as well as females with cubs tend to arrive onshore in spring earlier than larger, mature individuals (Rockwell and Gormezano, 2009). Interestingly, our results suggest that these younger and smaller bears, which have recently been shown to have lower survival (Lunn et al., 2016) and which may be disproportionately affected by lost opportunities to hunt seals as a result of climate change (Regehr et al., 2007; Rockwell and Gormezano, 2009), should have an inherently better ability to recover caloric deficits via terrestrial prey.
Prolonged chases of flightless snow geese can be energetically profitable over a range of pursuit speeds for polar bears in the 125–235 kg size range. The same is likely to be true for larger bears, those outside the range of available oxygen consumption data, because only postural cost (y-intercept) is mass dependent and it scales at the 0.25 power (Fig. 4; Taylor et al., 1970). Extrapolations past the upper limit of speeds for which there are data assume that the functional basis for the modelled trend remains the same, an assumption that may be violated if polar bears change gait and energy efficiency at higher speeds. Nevertheless, based on our top model, we project that a 320 kg bear running at 20 km/h would expend the calories contained in an adult goose in 33 s, a value that is reasonably comparable to the estimate of 12 s previously suggested by Lunn and Stirling (1985) using a different model. However, we note that our model also predicts that 320 kg bears can more profitably engage in much longer pursuits at slower speeds (e.g. our model predicts that pursuits of geese lasting up to 13.3 min are energetically profitable for a 320 kg polar bear running at 7.9 km/h).
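The extrapolated figures for a 320 kg bear follow from the same parameterized model, with the caveat already noted that both the mass and the 20 km/h speed lie outside the measured range:

```python
import math

def kcal_per_hour(mass_kg, speed_kmh):
    # Energy expenditure from the top model (Fig. 2 legend): VO2
    # (l O2 kg^-1 h^-1) times mass times 4.735 kcal per litre of oxygen.
    vo2 = 1.056 * mass_kg ** -0.25 * math.exp(0.2626 * speed_kmh)
    return vo2 * mass_kg * 4.735

goose_kcal = 667.4
# Time to burn one goose's worth of calories at a 20 km/h sprint:
sprint_s = goose_kcal / kcal_per_hour(320, 20.0) * 3600   # ≈ 33 s
# ...and at 7.9 km/h, the fastest speed with measured data:
walk_min = goose_kcal / kcal_per_hour(320, 7.9) * 60      # ≈ 13.3 min
```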
Although their locomotion is considered relatively inefficient, polar bears typically walk slowly, with a steady gait of ~5.5 km/h (Stirling, 1988). They average 1–5 km/h over longer distances, periodically interspersed with rest stops, and can sustain these speeds for extended periods while covering large distances (Harrington, 1965; Amstrup et al., 2000; Anderson et al., 2008; Durner et al., 2011; Whiteman et al., 2015). For example, Amstrup et al. (2000) reported many polar bears sustaining average travel on the ice at >4 km/h for up to 20 h, with some maintaining these speeds for >40 h. In a controlled experiment, polar bears trained to walk on treadmills were likewise able to walk for long periods, continuing exercise for up to 90% of 6 h walking sessions (Best, 1982). However, during these trials the polar bears thermoregulated behaviourally by leaving the treadmill temporarily to ingest snow when their core temperatures reached a particular threshold (Best, 1982). Best (1982) suggested that hyperthermia, not fatigue, was the more likely limiting factor for continuous locomotion. Polar bears have also been observed sustaining higher speeds (approaching 10 km/h) for shorter periods of time while on the ice (i.e. 1–8 h; Amstrup et al., 2000), where low ambient temperatures and strong winds would be likely to reduce the risk of hyperthermia (Best, 1982).
In contrast, while on land during the ice-free season in western Hudson Bay, when ambient temperatures are considerably higher, polar bears limit their daily movements, remaining inactive for long periods (Knudson, 1978; Latour, 1981). However, they have been observed engaging in faster-paced pursuits of caribou and waterfowl (e.g. Brook and Richardson, 2002; Iles et al., 2013; LJG & RFR, unpublished data). In such cases, hyperthermia, rather than lack of profitability, may be a limiting factor to sustained activity for several reasons. Polar bears are typical of non-sprinting mammals in that almost all the heat produced during exercise is immediately dissipated and little is stored (Taylor et al., 1970; Best, 1982), making warmer ambient temperature conditions particularly problematic because they reduce the potential for heat dissipation during exercise. For example, 218–239 kg polar bears walking at 7.9 km/h reached their upper critical temperature (when core body temperature can no longer be regulated) at about −33°C (Best, 1982). Furthermore, these captive bears could sustain this activity at temperatures only up to −20°C when allowed to ingest snow before returning to walk (Best, 1982).
Interestingly, many pursuits by wild bears have been observed in or near ponds, lakes and rivers (Iles et al., 2013; LJG & RFR, unpublished data), with the bear often lying in shallow streams and ponds immediately after the pursuit (Fig. 5). Immersion in water has been shown to reduce a polar bear's core body temperature substantially both before and after sustained exercise (Øritsland, 1969; Frisch et al., 1974). In general, the thermoregulatory burden of exercise for polar bears can be somewhat reduced by such behaviours, but it probably often constrains the duration and speed of a wild goose chase, especially during warm summer days.
Figure 5:
A subadult male polar bear in the Mast River (Wapusk National Park) after killing at least five flightless snow geese in three chases. After the chases, the bear walked into the river, lay down and drank periodically. Photographed on 13 July 2013 by R.F.R.
Additional research is clearly needed to gain a full understanding of the thresholds of inefficiency of foraging pursuits associated with polar bear locomotion. This is especially true for larger-sized bears and for all bears travelling near their maximal speeds. Such data are crucial for understanding the potential importance of land-based foraging behaviour. Polar bears currently consume various foods on land (e.g. Gormezano and Rockwell, 2013a,b and references therein), but the profitability of these foods and their contribution towards the persistence of polar bears in the face of climate change remains debatable (e.g. Gormezano and Rockwell, 2015; Rode et al., 2015; Pilfold et al., 2016). To clarify these issues, studies are required either that provide complete data allowing the calculation of energetic and nutritional costs and gains or (preferably) that allow those costs and gains to be measured directly.
## Acknowledgements
Thanks to G. F. Barrowclough for assistance with analyses. D. Eacker and P. Lukacs assisted with R coding.
## Funding
This work was supported by the Hudson Bay Project.
## References
Akaike H (1973) Information theory and an extension of the maximum likelihood principle. In Petran BN, Csaki DF, eds, Second International Symposium on Information Theory. Akademiai Kiado, Budapest, Hungary, pp 267–281.
Amstrup SC, Durner GM, Stirling I, Lunn NJ, Messier F (2000) Movements and distribution of polar bears in the Beaufort Sea. Can J Zool 78: 948–966.
Anderson M, Derocher AE, Wiig Ø, Aars J (2008) Movements of two Svalbard polar bears recorded using geographical positioning system satellite transmitters. Polar Biol 31: 905–911.
Ankney CD, MacInnes CD (1978) Nutrient reserves and reproductive performance of female lesser snow geese. Auk 95: 459–471.
Best RC (1982) Thermoregulation in resting and active polar bears. J Comp Physiol B 146: 63–73.
Best RC (1985) Digestibility of ringed seals by the polar bear. Can J Zool 63: 1033–1036.
Best RC, Ronald K, Øritsland NA (1981) Physiological indices of activity and metabolism in the polar bear. Comp Biochem Physiol A Physiol 69: 177–185.
Born EW, Wiig Ø, Thomassen J (1997) Seasonal and annual movements of radio-collared polar bears (Ursus maritimus) in northeast Greenland. J Mar Syst 10: 67–77.
Bro-Jørgensen J (2013) Evolution of sprint speed in African savannah herbivores in relation to predation. Evolution 67: 3371–3376.
Brook RK, Richardson ES (2002) Observations of polar bear predatory behaviour toward caribou. Arctic 55: 193–196.
Chassin PS, Taylor CR, Heglund NC, Seeherman HJ (1976) Locomotion in lions: energetic cost and maximum aerobic capacity. Physiol Zool 49: 1–10.
Codd J, Boggs D, Perry S, Carrier D (2005) Activity of three muscles associated with the uncinate processes of the giant Canada goose Branta canadensis maximus. J Exp Biol 208: 849–857.
Donaldson GM, Chapdelaine G, Andrews JD (1995) Predation of thick-billed murres, Uria lomvia, at two breeding colonies by polar bears, Ursus maritimus, and walruses, Odobenus rosmarus. Can Field Nat 109: 112–114.
Durner GM, Whiteman JP, Harlow HJ, Amstrup SC, Regehr EV, Ben-David M (2011) Consequences of long-distance swimming and travel over deep-water pack ice for a female polar bear during a year of extreme sea ice retreat. Polar Biol 34: 975–984.
Fedak MA, Seeherman HJ (1979) Re-appraisal of energetics of locomotion shows identical cost in bipeds and quadrupeds including ostrich and horse. Nature 282: 713–716.
Frisch J, Øritsland NA, Krog J (1974) Insulation of furs in water. Comp Biochem Physiol A 47: 403–410.
Gagnon AS, Gough WA (2005) Trends in the dates of ice freeze-up and breakup over Hudson Bay, Canada. Arctic 58: 370–382.
Gormezano LJ (2014) How important is land-based foraging to polar bears (Ursus maritimus) during the ice-free season in western Hudson Bay? An examination of dietary shifts, compositional patterns, behavioral observations and energetic contributions. PhD dissertation, City University of New York, New York.
Gormezano LJ, Rockwell RF (2013a) What to eat now? Shifts in polar bear terrestrial diet in western Hudson Bay. Ecol Evol 3: 3509–3523.
Gormezano LJ, Rockwell RF (2013b) Dietary composition and spatial patterns of polar bear foraging on land in western Hudson Bay. BMC Ecol 13: 51.
Gormezano LJ, Rockwell RF (2015) The energetic value of land-based foods in western Hudson Bay and their potential to alleviate energy deficits of starving adult male polar bears. PLoS One 10: e0128520.
Harrington CR (1965) The life and status of the polar bear. Oryx 8: 169–176.
Hawkes LA, Butler PJ, Frappell PB, Meir JU, Milsom WK, Scott GR, Bishop CM (2014) Maximum running speed of captive bar-headed geese is unaffected by severe hypoxia. PLoS One 9: e94015.
Heglund NC, Taylor CR (1988) Speed, stride frequency and energy cost per stride: how do they change with body size and gait? J Exp Biol 138: 301–318.
Heglund NC, Fedak MA, Taylor CR, Cavagna GA (1982) Energetics and mechanics of terrestrial locomotion. IV. Total mechanical energy changes as a function of speed and body size in birds and mammals. J Exp Biol 97: 57–66.
Hurst RJ, Leonard ML, Watts PD, Beckerton P, Øritsland NA (1982a) Polar bear locomotion: body temperature and energetic cost. Can J Zool 60: 40–44.
Hurst RJ, Øritsland NA, Watts PD (1982b) Body mass, temperature and cost of walking in polar bears. Acta Physiol Scand 115: 391–395.
Iles DT, Peterson SL, Gormezano LJ, Koons DN, Rockwell RF (2013) Terrestrial predation by polar bears: not just a wild goose chase. Polar Biol 36: 1373–1379.
Knudson B (1978) Time budgets of polar bears (Ursus maritimus) on North Twin Island, James Bay, during summer. Can J Zool 56: 1627–1628.
Latour PB (1981) Spatial relationships and behavior of polar bears (Ursus maritimus Phipps) concentrated on land during the ice-free season of Hudson Bay. Can J Zool 59: 1763–1774.
Lavigne DM, Innes S, Worthy GAJ, Kovacs KM, Schmitz OJ, Hickie JP (1986) Metabolic rates of seals and whales. Can J Zool 64: 279–284.
Lunn NJ, Stirling I (1985) The significance of supplemental food to polar bears during the ice-free period of Hudson Bay. Can J Zool 63: 2291–2297.
Lunn NJ, Servanty S, Regehr EV, Converse SJ, Richardson E, Stirling I (2016) Demography of an apex predator at the edge of its range: impacts of changing sea ice on polar bears in Hudson Bay. Ecol Appl 26: 1302–1320.
Øritsland NA (1969) Deep body temperatures of swimming and walking polar bear cubs. J Mammal 50: 380–382.
Øritsland NA (1970) Temperature regulation of the polar bear (Thalarctos maritimus). Comp Biochem Physiol 37: 225–233.
Øritsland NA, Lavigne DM (1976) Radiative surface temperatures of exercising polar bears. Comp Biochem Physiol A Physiol 53: 327–330.
Øritsland NA, Jonkel C, Ronald K (1976) A respiration chamber for exercising polar bears. Norw J Zool 24: 65–67.
Parks EK, Derocher AE, Lunn NJ (2006) Seasonal and annual movement patterns of polar bears on the sea ice of Hudson Bay. Can J Zool 84: 1281–1294.
Pilfold NW, Hedman D, Stirling I, Derocher AE, Lunn NJ, Richardson E (2016) Mass loss rates of fasting polar bears. Physiol Biochem Zool 89: 377–388.
Pritchard GT, Robbins CT (1990) Digestive and metabolic efficiencies of grizzly and black bears. Can J Zool 68: 1645–1651.
R Development Core Team (2008) R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, http://www.R-project.org.
Regehr EV, Lunn NJ, Amstrup SC, Stirling I (2007) Survival and population size of polar bears in western Hudson Bay in relation to earlier sea ice breakup. J Wildl Manag 71: 2673–2683.
Reilly SM, McElroy EJ, Biknevicius AR (2007) Posture, gait and the ecological relevance of locomotor costs and energy-saving mechanisms in tetrapods. Zoology 110: 271–289.
Robbins CT (1993) Wildlife Feeding and Nutrition, Ed 2. Academic Press, New York, NY.
Rockwell RF, Gormezano LJ (2009) The early bear gets the goose: climate change, polar bears and lesser snow geese in western Hudson Bay. Polar Biol 32: 539–547.
Rockwell RF, Gormezano LJ, Koons DN (2011) Trophic matches and mismatches: can polar bears reduce the abundance of nesting snow geese in western Hudson Bay? Oikos 120: 696–709.
Rode KD, Robbins CT, Nelson L, Amstrup SC (2015) Can polar bears use terrestrial foods to offset lost ice-based hunting opportunities? Front Ecol Environ 13: 138–145.
Scharf I, Nulman E, Ovadia O, Bouskila A (2006) Efficiency evaluation of two competing foraging modes under different conditions. Am Nat 168: 350–357.
Scholander PF, Hock R, Walters V, Irving L (1950) Adaptation to cold in arctic and tropical mammals and birds in relation to body temperature, insulation and basal metabolic rate. Biol Bull 99: 259–271.
Sinclair ARE, Mduma S, Brashares JS (2003) Patterns of predation in a diverse predator–prey system. Nature 425: 288–290.
Smith TG (1980) Polar bear predation of ringed and bearded seals in the land-fast sea ice habitat. Can J Zool 58: 2201–2209.
Stempniewicz L (2006) Polar bear predatory behavior toward barnacle geese and nesting glaucous gulls on Spitsbergen. Arctic 59: 247–251.
Stirling I (1974) Midsummer observations on the behavior of wild polar bears. Can J Zool 52: 1191–1198.
Stirling I (1988) Polar Bears. University of Michigan Press, Ann Arbor, MI.
Stirling I, Derocher AE (1993) Possible impact of global warming on polar bears. Arctic 46: 240–245.
Stirling I, Derocher AE (2012) Effects of climate warming on polar bears: a review of the evidence. Glob Chang Biol 18: 2694–2706.
Stirling I, Øritsland NA (1995) Relationships between estimates of ringed seal (Phoca hispida) and polar bear (Ursus maritimus) populations in the Canadian Arctic. Can J Fish Aquat Sci 52: 2594–2612.
Stirling I, Parkinson CL (2006) Possible effects of climate warming on selected populations of polar bears (Ursus maritimus) in the Canadian Arctic. Arctic 59: 261–275.
Taylor CR, Schmidt-Nielsen K, Raab JL (1970) Scaling energetic cost of running to body weight of animals. Am J Physiol 219: 1104–1107.
Watson RR, Rubenson J, Coder L, Hoyt DF, Propert MWG, Marsh RL (2011) Gait-specific energetics contributes to economical walking and running in emus and ostriches. Proc Biol Sci 278: 2040–2046.
Watts PD, Ferguson KL, Draper BA (1991) Energetic output of subadult polar bears (Ursus maritimus): resting, disturbance and locomotion. Comp Biochem Physiol A Comp Physiol 98: 191–193.
Whiteman JP, Harlow HJ, Durner GM, Anderson-Sprecher R, Albeke SE, Regehr EV, Amstrup SC, Ben-David M (2015) Summer declines in activity and body temperature offer polar bears limited energy savings. Science 349: 295–298.
## Author notes
Editor: Steven Cooke
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited. | |
# Am I Normal?
Bob recently took the Free Academic Koalification Exam (FAKE), in which one has to solve 5 math problems in the least amount of time. Bob completed the exam in about $$84.25$$ seconds.
The mean time for all people taking the FAKE is $$100$$ seconds, and the standard deviation is $$10.5$$ seconds.
Assuming the times for the FAKE are approximately Normally distributed, which of the following is closest to the percentage of people who completed the FAKE faster than Bob did?
(For your convenience, here is a z-score table).
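Instead of a table, the answer can be checked numerically; the standard normal CDF is expressible through the error function (a sketch, not tied to any particular z-score table):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x) for X ~ Normal(mu, sigma)."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

z = (84.25 - 100) / 10.5                    # z-score of Bob's time: -1.5
faster = normal_cdf(84.25, mu=100, sigma=10.5)
print(z, round(100 * faster, 2))            # -1.5, about 6.68% were faster
```

Because faster means a smaller time, the proportion finishing faster than Bob is $P(T < 84.25) = \Phi(-1.5) \approx 6.68\%$.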
# The number of conjugacy classes of the simple group PSL(2,q)
If $q=p^a$, where $p$ is a prime number, I would like to know the number of conjugacy classes of elements of order $p$ and of order $2$ in the simple group $PSL(2,q)$.
• I'd guess that a direct route can be taken by applying the rational canonical form on $GL(2,q)$, modding out the center, and using the semidirect product $PGL(2,q) = PSL(2,q) \rtimes F_q^\times / (F_q^\times)^2.$ – Marc Palm Sep 20 '13 at 11:28
• This is a standard question but not at all research-level. It's been understood since the time of Frobenius and Dickson, and has become standard in group theory textbooks. Already in 1968 it was just part of Exercise 21 at the end of Chapter 2 in Gorenstein's Finite Groups. All it requires is a mixture of elementary group theory and matrix theory, starting with $2 \times 2$ matrices. More suitable for Stack Exchange. (Also, the groups are not actually simple for all $q$.) – Jim Humphreys Sep 20 '13 at 13:07
• @Jim: I agree that this is more suitable for Stack Exchange. But just to clarify, that exercise in Gorenstein follows an 11-page exposition of Dickson's results on such subgroups. Although Dickson's work was elementary, it was by no means trivial -- e.g., Gorenstein called it "brilliant". – Michael Zieve Sep 20 '13 at 13:23
• @Michael: The result needed here doesn't depend on the detailed work of Dickson, however. The steps actually involved are quite elementary. I only mentioned Gorenstein to emphasize that these classes have been around for a long time in the literature. – Jim Humphreys Sep 20 '13 at 15:28
There is one conjugacy class of elements of order 2, and if $p$ is odd then there are two conjugacy classes of elements of order $p$. This goes back to Dickson's 1901 book on Linear Groups.
Added later: for order $p$ elements this can be seen as follows. All order-$p$ elements of $PSL(2,q)$ are conjugate to elements of any prescribed Sylow $p$-subgroup. One such Sylow $p$-subgroup $S$ of $PSL(2,q)$ consists of the upper-triangular matrices with $1$'s on the diagonal. Now consider the action of $PSL(2,q)$ on the projective line $\mathbb{P}^1(\mathbb{F}_q)$. Each nonidentity element of $S$ has a unique fixed point, namely $\infty$. Thus, any element of $PSL(2,q)$ which conjugates one nonidentity element of $S$ to another must fix $\infty$, and hence must be upper-triangular. Finally, one easily checks that $$\left(\begin{matrix} a & b \\ 0 & a^{-1} \end{matrix}\right) \left(\begin{matrix} 1 & c \\ 0 & 1 \end{matrix}\right) \left(\begin{matrix} a & b \\ 0 & a^{-1} \end{matrix}\right)^{-1} = \left(\begin{matrix} 1 & a^2 c \\ 0 & 1 \end{matrix}\right).$$ It follows that the conjugacy classes of order-$p$ elements are in bijection with $\mathbb{F}_q^\times/(\mathbb{F}_q^\times)^2$, so there are two such classes if $p$ is odd and one if $p$ is even.
For order $2$ I don't know a proof from first principles that is as short as the one above; the quickest proof I know is the one given in the proof of Lemma A.3 of my paper with Bob Guralnick titled "Polynomials with PSL(2) monodromy".
• Wonderful! Thanks a million for your help. – Tina Sep 20 '13 at 10:21
To amplify my comments (in community-wiki style):
1) Note that it's OK when discussing just unipotent elements (here those of order $p$) to work instead with the matrix group $G:=\mathrm{SL}(2,q)$, since $\mathrm{PSL}(2,q)$ is just the quotient by the center $\{\pm I\}$ which consists of semisimple elements (and is trivial for $p=2$). Also, the methods are basically the same for all $q$ with $p$ fixed.
2) Usually your question is part of a more general computation of conjugacy classes: the size of each and the total number of classes. This may be easier to organize, since the total number of elements is $|G|=q(q-1)(q+1)$. In any case, the tally of classes is most often part of the search for ordinary characters. (For a short exposition, see my old paper in Amer. Math. Monthly 82 (1975), 21--39.) Though I don't have most textbooks at hand, I'd also suggest looking at the arguments on page 230 of L. Dornhoff, Group Representation Theory, Part A, Dekker, 1971. This is now out of print but has useful short chapters on many topics including this standard example.
3) Maybe it's worth emphasizing that the difference between $p=2$ and other primes for this purpose is that all nonzero elements of $\mathbb{F}_q$ are squares in the first case but only half the elements in the second case. Thus a unipotent matrix in $G$ is conjugate for odd $p$ to just one of the matrices $$\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \quad \begin{pmatrix} 1 & \nu \\ 0 & 1 \end{pmatrix}$$ with $\nu$ a fixed nonsquare. There are $q^2$ unipotent matrices in all (special case of a general result of Steinberg), half in each class for $p$ odd. The usual canonical form theory just has to be adapted a bit over a finite field.
4) If you also want to look at elements of order 2 when $p$ is odd, you have to work a little more. Again it's probably easiest to determine (as in Dornhoff) all classes of semisimple, unipotent, or mixed elements, along with the sizes of their classes. It's not clear to me what motivates the format of your question, however.
ADDED: To be more explicit about elements of order 2 when $p$ is odd, just look at the standard list of classes in $G$ and note there is a unique one containing elements of order 4 (hence order 2 in the projective group). Such elements are semisimple, but may be diagonalizable over $\mathbb{F}_q$ or not, depending on which of $q-1$ or $q+1$ is divisible by 4.
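The count of $q^2$ unipotent matrices quoted in point 3) above (the special case of Steinberg's result) can be confirmed by brute force for a small field, say $q=5$. This is an illustrative sketch, using the fact that a $2\times 2$ matrix is unipotent exactly when its characteristic polynomial is $(x-1)^2$, i.e. trace $2$ and determinant $1$:

```python
# Brute-force count of unipotent matrices in SL(2, 5): enumerate all 2x2
# matrices over GF(5) and keep those with det = 1 and trace = 2, which is
# equivalent to having characteristic polynomial (x - 1)^2.
q = 5

unipotents = [(a, b, c, d)
              for a in range(q) for b in range(q)
              for c in range(q) for d in range(q)
              if (a * d - b * c) % q == 1 and (a + d) % q == 2]

print(len(unipotents))  # q^2 = 25, identity included
```

The tally agrees with the class sizes: the identity plus two classes of $(q^2-1)/2 = 12$ elements each.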
# Finite Complement Space is not T3, T4 or T5
## Theorem
Let $T = \struct {S, \tau}$ be a finite complement topology on an infinite set $S$.
Then $T$ is not a $T_3$ space, $T_4$ space or $T_5$ space.
## Proof
We have that a Finite Complement Space is a $T_1$ space.
From $T_1$ Space is $T_0$ Space, $T$ is a $T_0$ space.
We then have that a Finite Complement Space is not $T_2$.
From Regular Space is $T_2$ Space, $T$ is not a regular space.
By definition, a regular space is a space that is both a $T_0$ space and a $T_3$ space.
But $T$ is a $T_0$ space and not a regular space.
So it follows that $T$ cannot be a $T_3$ space.
Next we have that a Normal Space is a $T_3$ Space.
But as $T$ is not a $T_3$ space, $T$ cannot be a normal space.
By definition, a normal space is a space that is both a $T_1$ space and a $T_4$ space.
But $T$ is a $T_1$ space and not a normal space.
So it follows that $T$ cannot be a $T_4$ space.
Finally we have that a $T_5$ Space is a $T_4$ Space.
But as $T$ is not a $T_4$ space, $T$ cannot be a $T_5$ space.
$\blacksquare$
# Finding a limit involving F(x) when certain conditions are given
I thought to determine the function first, but only one piece of information is given: according to it, f(x) has one root alpha, and at that point the derivative has to be zero. So I tried to assume the function as y = x^2, but then I faced a problem with the greatest integer function while evaluating the limit. Any help would be appreciated.
The greatest integer function [x] denotes the integral part of the real number x, that is, the nearest integer that is smaller than or equal to x. It is also known as the floor of x.
[x] = the largest integer that is less than or equal to x.
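For instance, the floor differs from plain truncation toward zero on negative inputs; a quick Python illustration:

```python
# The greatest integer function (floor) rounds toward minus infinity,
# unlike int(), which truncates toward zero.
import math

print(math.floor(2.7))    # 2
print(math.floor(-1.5))   # -2, the largest integer <= -1.5
print(int(-1.5))          # -1, truncation toward zero, NOT the floor
```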
• If $f(\alpha)^2 + f'(\alpha)^2 = 0$, then surely you must be working in the complex plane.... To this I would assume that $\dfrac{f(\alpha)}{f'(\alpha)} = i$. – JacobCheverie Apr 24 at 17:28
A linear stability analysis is performed for a horizontal Darcy porous layer of depth $2d_m$ sandwiched between two fluid layers of depth $d$ (each), with the top and bottom boundaries being dynamically free and kept at fixed temperatures. The Beavers–Joseph condition is employed as one of the interfacial boundary conditions between the fluid and the porous layer. The critical Rayleigh number and the horizontal wave number for the onset of convective motion depend on the following four nondimensional parameters: $\hat{d}$ ($= d_m/d$, the depth ratio), $\delta$ ($= \sqrt{K}/d_m$, with $K$ being the permeability of the porous medium), $\alpha$ (the proportionality constant in the Beavers–Joseph condition), and $k/k_m$ (the thermal conductivity ratio). In order to analyze the effect of these parameters on the stability condition, a set of numerical solutions is obtained in terms of a convergent series for the respective layers, for the case in which the thickness of the porous layer is much greater than that of the fluid layer. A comparison of this study with the previously obtained exact solution for the case of constant heat flux boundaries is made to illustrate the quantitative effects of the interfacial and the top/bottom boundaries on the thermal instability of a combined system of porous and fluid layers.
# Algorithmic problem solving
Given a grid containing some segments, find the minimum number of times segments need to be displaced such that a particular segment can escape from the grid.
## A shortest path problem in the configuration graph
We model the problem as follows. On the given 6 by 6 grid are placed n vehicles, numbered starting from 0, with 0 being the special vehicle which has to escape the grid. Each vehicle has a width of 1 and a length of 2 or 3 grid cells, and can be placed horizontally or vertically. Its location is identified by the coordinates of its leftmost, uppermost cell. Depending on the vehicle's orientation, one of the coordinates is fixed, while the other one can vary. We define a configuration as the vector consisting of the variable coordinates of the vehicles.
For example the previous configuration would be encoded as follows.
# vehicle 0 1 2 3 4 5 6 7 8
horizontal = [True, False, True, False, False, True, True, True, False]
length = [2, 2, 3, 3, 2, 2, 3, 2, 2]
fixcoor = [2, 0, 0, 4, 2, 3, 3, 4, 5]
config = (0, 0, 1, 0, 2, 4, 2, 3, 4)
Now in one step a single vehicle can be displaced, which amounts to changing one coordinate of the configuration vector. This defines an underlying graph, where configurations are vertices and there is an edge between configurations A and B if B can be obtained from A in one step. The goal is to find a shortest path from the initial configuration to a target configuration (one where vehicle 0 reaches the border of the grid). This can be done by a simple BFS traversal of the graph. The only new part here is that the graph is given only implicitly.
The BFS search on an implicitly given graph uses a function graph which maps a vertex to a list of neighboring vertices in the graph. It also needs a function is_target to check if with a given vertex the traversal is completed.
from collections import deque

def bfs_implicit(graph, start, is_target):
    if is_target(start):         # the initial vertex may already be a target
        return 0
    dist = {start: 0}
    to_visit = deque([start])
while to_visit:
node = to_visit.pop()
for neighbor in graph(node):
if neighbor not in dist: # new vertex
dist[neighbor] = dist[node] + 1
to_visit.appendleft(neighbor)
if is_target(neighbor):
return dist[neighbor]
return None # target is not reachable
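As a sanity check, the traversal can be exercised on a toy implicit graph where vertices are integers, one step adds 1 or 2, and the target is 5; the shortest path 0 → 2 → 4 → 5 has length 3. The sketch below repeats the routine so that it runs on its own:

```python
# Standalone copy of the BFS on an implicitly given graph, tested on a toy
# graph: neighbors of v are v+1 and v+2, and the target vertex is 5.
from collections import deque

def bfs_implicit(graph, start, is_target):
    dist = {start: 0}
    to_visit = deque([start])
    while to_visit:
        node = to_visit.pop()
        for neighbor in graph(node):
            if neighbor not in dist:          # new vertex
                dist[neighbor] = dist[node] + 1
                to_visit.appendleft(neighbor)
                if is_target(neighbor):
                    return dist[neighbor]
    return None                               # target is not reachable

print(bfs_implicit(lambda v: [v + 1, v + 2], 0, lambda v: v == 5))  # 3
```

Note the deque is used as a FIFO queue: new vertices are appended on the left and processed from the right, which is what guarantees shortest distances.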
So for this problem we have to start by building the data structures from the given initial grid. This is done as follows, by inspecting the grid in row-wise order. As you can see, this is the longest part of the code.
def read(grid):
""" reads the grid and builds the data structure.
returns current configuration
"""
global horizontal, fixcoor, length
row = [-1] * n # first coordinates seen of a vehicle
col = [-1] * n
horizontal = [None] * n # create data structures
length = [None] * n
for i in range(dim): # loop over all grid cells (i,j)
for j in range(dim):
c = grid[i][j]
if c != '.':
if c == 'x': # determine vehicle index from character
v = 0
else:
v = ord(c) - ord('a') + 1
if row[v] == -1: # first time vehicle is seen
row[v] = i
col[v] = j
length[v] = 1
else: # rest of the vehicle
horizontal[v] = (row[v] == i)
length[v] += 1
fixcoor = []
config = []
for v in range(n): # set fixed coordinate
if horizontal[v]:
fixcoor.append(row[v])
config.append(col[v])
else:
fixcoor.append(col[v])
config.append(row[v])
return tuple(config) # current configuration
Finally to encode the graph, we just need to implement the neighbor oracle function and the target oracle test. For those we need a helper function that verifies if in a given configuration a given cell is occupied by some vehicle.
def occupied(config, r, c):
"""returns if cell (r,c) is occupied in the given configuration
"""
if not( 0 <= r < dim and 0 <= c < dim): # cells around the grid are occupied
return True
for v in range(n): # is vehicle v covering cell (r,c)?
if (horizontal[v] and fixcoor[v] == r and config[v] <= c < config[v] + length[v] or
not horizontal[v] and fixcoor[v] == c and config[v] <= r < config[v] + length[v]):
return True
return False
def is_target(config):
return config[0] == dim - length[0]
def graph(config):
"""iterates over all reachable configurations from the given one
"""
for v in range(n): # loop over all vehicles
if horizontal[v]:
# right
d = 1
while not occupied(config, fixcoor[v], config[v] + d + length[v] - 1):
yield config[:v] + (config[v] + d, ) + config[v+1:]
d += 1
# left
d = 1
while not occupied(config, fixcoor[v], config[v] - d):
yield config[:v] + (config[v] - d, ) + config[v+1:]
d += 1
else:
# down
d = 1
while not occupied(config, config[v] + d + length[v] - 1, fixcoor[v]):
yield config[:v] + (config[v] + d, ) + config[v+1:]
d += 1
# up
d = 1
while not occupied(config, config[v] - d, fixcoor[v]):
yield config[:v] + (config[v] - d, ) + config[v+1:]
d += 1
We can verify with a quick and rough estimate that the overall running time is acceptable. A configuration consists of a vector of dimension at most 10, and there are 5 choices for each vector element. This gives an upper bound of $5^{10} < 2^{24}$ vertices. But the actual number must be much smaller, since most of the configuration vectors would be invalid due to overlapping vehicles. The number of neighbors of a configuration can also be bounded by $5\cdot 10$. Hence we can expect the BFS to terminate within a few million operations.
## Precalculus: Mathematics for Calculus, 7th Edition
Use the Change of Base Formula and a calculator to evaluate the logarithm: $\log_3 16$.

Apply the Change of Base Formula: $\log_x y = \frac{\log y}{\log x}$, so $\log_3 16 = \frac{\log 16}{\log 3}$.

Use a calculator and round to the sixth decimal place: $\approx 2.523719$
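The same evaluation can be reproduced with a short script (an illustrative check using Python's math module):

```python
# Change of Base: log_3(16) = log(16) / log(3), with any common logarithm base.
import math

value = math.log(16) / math.log(3)               # natural logs
print(round(value, 6))                           # 2.523719
print(round(math.log10(16) / math.log10(3), 6))  # 2.523719, same with base 10
```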
# NTNUJAVA Virtual Physics Laboratory
Enjoy the fun of physics with simulations! Backup site: http://enjoy.phy.ntnu.edu.tw/ntnujava/
## JDK1.0.2 simulations (1996-2001) => Dynamics => Topic started by: Fu-Kwun Hwang on January 29, 2004, 05:35:38 pm
Title: Pendulum
Post by: Fu-Kwun Hwang on January 29, 2004, 05:35:38 pm
You are welcome to check out Force analysis of a pendulum (http://www.phy.ntnu.edu.tw/ntnujava/index.php?topic=1116.0)
How to change parameters?
Set the initial position: click and drag the left mouse button. The horizontal position of the pendulum will follow the mouse. Animation starts when you release the mouse button.
Adjust the length: drag the pointer (while holding down the left button) from the support point (red dot) to a position that sets the length you want. Animation starts when you release the mouse button.
Change gravity g: click near the tip of the red arrow, and drag the mouse to change it (up-down).
Change the mass of the bob: click near the bottom of the black stick, and drag the mouse to change it (up-down).
Information displayed:
1. red dots: kinetic energy K = m v*v/2 of the bob
2. blue dots: potential energy U = m g h of the bob. Try to find out the relation between kinetic energy and potential energy!
3. black dots (pair): represent the period T of the pendulum. Move the mouse to a dot: information for that dot will be displayed in the textfield.
Click the show checkbox to show more information. Blue arrow (1): gravity. Green arrows (2): components of gravity. Red arrow (1): velocity of the bob. Try to compare the velocity and the tangential component of the gravitational force!
The calculation is in real time (using the 4th-order Runge-Kutta method). The period (T) is calculated when the velocity changes direction. You can produce a period versus angle (T - X) curve on the screen; just start at different positions and wait for a few seconds.
Theoretically, the period of a pendulum is $T=2\pi\sqrt{L/g}$.
Purpose of this applet:
1. The period of the pendulum mostly depends on the length of the pendulum and the gravity (which is normally a constant).
2. The period of the pendulum is independent of the mass.
3. The variation of the period due to the initial angle is very small.
The equation of motion for a pendulum is $\frac{d^2\theta}{dt^2}=-\frac{g}{L}\,\sin\theta$. When the angle is small, $\theta \ll 1$, $\sin\theta\approx \theta$, so the above equation becomes $\frac{d^2\theta}{dt^2}\approx-\frac{g}{L}\,\theta$, which implies approximately a simple harmonic motion with period $T=2\pi \sqrt{\frac{L}{g}}$.
What is the error introduced in the above approximation? From the Taylor expansion $\sin\theta=\theta-\frac{\theta^3}{3!}+\frac{\theta^5}{5!}-\frac{\theta^7}{7!}+\frac{\theta^9}{9!}-\frac{\theta^{11}}{11!}+\dots$, the first-order approximation drops terms starting with $\frac{\theta^3}{3!}=\frac{\theta^3}{6}$, so the relative error (error in percentage) is $\frac{\theta^3/6}{\theta}=\frac{\theta^2}{6}$. If the angle is 5 degrees, then $\theta=5\pi/180\approx 5/60=1/12$, so the relative error is $\frac{\theta^2}{6}=1/(144\cdot 6)=1/864\approx 0.00116$.
For angle = 5 degrees, the relative error is less than $0.116\%$.
For angle = 10 degrees, the relative error is less than $0.463\%$.
For angle = 20 degrees, the relative error is less than $1.85\%$.
So the period of the pendulum is almost independent of the initial angle (the error is relatively small unless the angle is much larger than 20 degrees, for more than 2% error).

Title: topic11
Post by: on January 30, 2004, 11:24:21 am
Subject: Thanks. Date: Wed, 9 Dec 1998 16:07:30 -0500. From: louise heaven. To: Fu-Kwan Hwang.
Thank you very much Mr Hwang, for your reply to my plea about the pendulum. I was very pleasantly surprised to find you had done so. Thank you again. I would also like to say that you have a very good web page and I shall look there first when I am researching physics.
Joseph Heaven

Title: topic11
Post by: on January 30, 2004, 04:39:57 pm
From: Bill Kinsella. To: 'hwang@phy03.phy.ntnu.edu.tw'. Subject: Java Applets. Date: Sat, 6 Nov 1999 21:04:16 -0000.
Dear Sir, I came across your site when I was searching for material for my son, who is studying science and in particular the pendulum. I was fascinated by the immediacy and efficacy of the applets. Surely this must represent a major advancement in the teaching of physics as well as being great fun. Unlike you, I spent most of my life as a software developer, although I know nothing of Java-type languages, and now work as a power company network controller and can think of many interactive applications for our intranet. I would like to see an applet developed illustrating the principles of simple roof truss design. Thanks for the enjoyment your work provided,
Bill Kinsella

Title: topic11
Post by: on March 22, 2004, 01:55:17 pm
From what I learned in physics, the equation for the period of a simple pendulum is T = 2(pi)(L/G)^(1/2), which means constant length should result in constant period. However, I change the angle of release on the pendulum and the period changes!!??

Title: topic11
Post by: ratznium on February 07, 2005, 10:49:45 pm
There must be an energy leak somewhere in the system. Instead of its simulated perpetual motion, the bob eventually increases speed so that it ends up going right around a full circle, above the top of the Java applet. It's happened twice in a row now as I've left the applet running in the background while going through physics questions. Try it out yourself if you're interested. Leave the applet running for at least an hour, and it ought to go wild.

Title: topic11
Post by: Fu-Kwun Hwang on February 12, 2005, 01:57:17 pm
For the computer simulation, there is always some error due to the calculation. Yes, it will happen when running the simulation for a long time.
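The error estimate in the opening post can be checked numerically. The sketch below is an added illustration; note that the post rounds $\pi$ to 3 (taking $\theta \approx 5/60$), so its 5-degree figure of 0.116% comes out as about 0.127% with the exact $\theta$:

```python
# Check that the relative error of the small-angle approximation
# sin(theta) ~ theta is close to theta^2 / 6, as derived in the post.
import math

for degrees in (5, 10, 20):
    theta = math.radians(degrees)
    relative_error = (theta - math.sin(theta)) / theta
    estimate = theta ** 2 / 6
    assert abs(relative_error - estimate) < 1e-3   # next Taylor term is tiny
    print(degrees, round(100 * relative_error, 3))  # percent
```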
Title: topic11
Post by: Fu-Kwun Hwang on February 12, 2005, 02:07:43 pm
[quote="Anonymous"] From what I learned in physics, the equation for the period of a simple pendulum is T = 2(pi)(L/G)^(1/2), which means constant length should result in constant period. However, I change the angle of release on the pendulum and the period changes!!?? [/quote]
The period of the pendulum is almost constant if the amplitude is small (small-angle vibration). However, the period will change by a very small amount when the angle increases. It only increases by less than 2% for 20 degrees (relative to the vertical line).

Title: topic11
Post by: rhipple on April 24, 2006, 07:18:06 am
I would like to execute this applet offline. This feature appears to be disabled at the current time. This post will serve as my notification when to try again.

Title: topic11
Post by: rhipple on April 25, 2006, 09:22:51 am
Great! I have a local version of the applet. Now may I see the source? I would like to tinker with it.

Title: Re: Pendulum
Post by: maryyoung on March 09, 2007, 07:28:55 pm
Hi there, I was very excited to find the pendulum simulation, but I am trying to measure differences with different lengths, then with the same length and different masses at the end of the pendulum, and I can't seem to change the mass without it disappearing off the end of my screen! I am obviously doing something wrong. I would like to simulate a length of 30 cm with masses of 100, 200, 300, 400, 500 grams; is this realistic? I want the angle to be 45 degrees. I would be grateful if you could help me to do this. Thanks and regards, Mary

Title: Re: Pendulum
Post by: Fu-Kwun Hwang on March 09, 2007, 11:31:09 pm
If you want to set the length and angle of the pendulum, move your mouse to the red dot at the center on the top of the simulation, click down the mouse and drag the mouse away. The textfield on the top will display the length and angle of the pendulum. When you are done, just release the mouse. If you want to change the mass of the object, drag the vertical line (labeled with mass) up and down to change the mass.

Title: Re: Pendulum
Post by: DKMFan on June 03, 2007, 08:12:26 pm
Wow. I like. A lot. Do you mind if I use that for my coursework? It involves making a pendulum have the time period to be used for a Grandfather Clock. I'm asking in case something shows up in the mark scheme, which means I'll have to eventually.
Quote: From what I learned in physics, the equation for the period of a simple pendulum is T = 2(pi)(L/G)^(1/2), which means constant length should result in constant period. However, I change the angle of release on the pendulum and the period changes!!??
And thank you for making it a lot easier to find the equation. I think I needed that for my homework. Hmmm.

Title: Re: Pendulum
Post by: Fu-Kwun Hwang on June 04, 2007, 10:12:53 pm
You are welcome to use it for your coursework. The equation T = 2(pi)(L/g)^(1/2) is good only for small angles (sinθ was replaced by θ when deriving the equation). It will be a little different when the angle is larger. However, the difference is usually very small, so it is still a very good approximation unless you need very high resolution results.

Title: Re: Pendulum
Post by: green on February 25, 2008, 10:52:20 am
I have downloaded it, but I still cannot find the source code. Can you help me? How can I get all of your source code from this site?

Title: Re: Pendulum
Post by: Fu-Kwun Hwang on February 25, 2008, 11:24:38 am
EJS source code is available for all the simulations created with EJS, i.e. simulations under the category [ Easy Java Simulations (2001- ) (http://www.phy.ntnu.edu.tw/ntnujava/index.php?action=collapse;c=3;sa=expand#3) ]. For applets created with JDK1.0.2 (I created those between 1996 and 2001), source code is only included with very few download ZIP files. I did not add source code to the ZIP files, because most users did not need it. And those (including the pendulum applet shown in this topic) were all created with JDK1.0.2. However, I just sent the source code to your email. You might need to change some of the code if you want to compile it with the current version of the JDK.

Title: Re: Pendulum
Post by: zolja2 on May 23, 2008, 02:08:09 pm
Please help me, I need to determine the earth's acceleration with a pendulum.

Title: Re: Pendulum
Post by: zolja2 on May 30, 2008, 01:23:22 pm
This pendulum is great, only it won't stop. I need to measure the earth's acceleration with the equations t = N/T and g = 4*PI^2*(L/100)/t^2. Somebody please help me!

Title: Re: Pendulum
Post by: Fu-Kwun Hwang on May 30, 2008, 03:03:53 pm
It will toggle between pause/play if you RIGHT CLICK the mouse button inside the simulation region.

Title: Re: Pendulum
Post by: lawliet on June 26, 2008, 08:19:51 pm
I just want to ask: if a pendulum is made to swing in water, how does the graph of period (T) against length look? And what is the difference between the time taken for this pendulum (which swings in water) to come to a complete stop and the time taken by a pendulum swinging in air? If a simple pendulum with a period of 1 second is set in motion on the moon, what is the new period of this pendulum? It will swing forever, right?

Title: Re: Pendulum
Post by: Fu-Kwun Hwang on June 26, 2008, 09:00:30 pm
Please check out Pendulum with damping (http://www.phy.ntnu.edu.tw/ntnujava/index.php?topic=700.msg2526#msg2526). The applet assumes the damping force is proportional to the velocity of the pendulum, which is a good approximation for an object moving in water. The applet also assumes the mass is always under the water. You can adjust different values for b and find the best one to fit with experimental data (because the real damping force also depends on the geometry/area of the pendulum).

Title: Re: Pendulum
Post by: tanhl on July 04, 2008, 08:13:19 pm
Thank you very much Mr Hwang. Whilst looking for some materials on the simple pendulum, I was really surprised at your amazing website: simulations for experiments in physics. It's an eye-opener for me. I have just downloaded the applet and hope it works! : ) Thanks once again for your wonderful work and contribution to the body of knowledge. tanhl

Title: Re: Pendulum
Post by: Phys on July 14, 2008, 07:17:27 pm
Hi. I am new to the forum. I have found very useful documents in this forum. I need animations like this to explain physics to my students. I have translated the Pendulum animation into Turkish for forum use. Sorry for my English :). Not very good.

Title: Re: Pendulum
Post by: Fu-Kwun Hwang on July 14, 2008, 10:49:33 pm
That is fine. Thank you for your help translating the message into Turkish. You might want to check out some other Turkish version web pages (http://www.phy.ntnu.edu.tw/oldjava/turkey/) already translated by others.

Title: Re: Pendulum
Post by: plack on August 06, 2008, 02:11:19 am
Thanks, very interesting, this program..-*-

Title: Re: Pendulum
Post by: ArdTraveller on January 06, 2009, 06:28:38 pm
Sir, do you have an applet simulation of a collision of two objects?

Title: Re: Pendulum
Post by: cmnunis on May 13, 2009, 08:13:29 pm
Hi there Mr. Hwang, excellent program on pendulums. You have made physics interesting all over again. Anyway, I am doing a project for my 3rd year using SUNSpots, which will naturally be programmed in Java. Would it be alright if I requested the source code of the program? The pendulum is one which will be very relevant to the program. Thank you.

Title: Re: Pendulum
Post by: Fu-Kwun Hwang on May 14, 2009, 12:06:48 am
You should be able to download the source code now (as attached file)!
Title: Re: Pendulum
Post by: dannydesiliva on September 22, 2009, 01:09:35 pm
I have never used a pendulum but would like to start. I have searched the forums and seen the postings about pendulums but still would like to know more about them. Is there a site that has kind of a pendulum 101 page or two? How do you know what you want to use for a pendulum or what crystal, etc to use? So many questions and not much info that I can find. Any help out there?

Title: Re: Pendulum
Post by: yimseo on May 08, 2010, 09:21:13 pm
Thank you for the information. Very good example!

Title: Re: Pendulum
Post by: afrah on June 05, 2010, 08:01:11 pm
Hello Mr. Hwang, I need the source code for the pendulum, as I have a small project in Java applets and I believe this might help. Please respond as soon as possible. Thanks.

Title: Re: Pendulum
Post by: ahmedelshfie on June 05, 2010, 10:51:21 pm
I'm not Prof Hwang, but I can help until Prof Hwang answers. You have two ways to download the source code of the pendulum:
1. Choose "send to my email account" and press "Get files for offline use", and you will receive the source code in your email automatically.
2. Choose "download file" and press "Get for offline use", and you will receive it on your PC directly. :)

Title: Re: Pendulum
Post by: Fu-Kwun Hwang on June 05, 2010, 11:33:39 pm
The source code is available as an attached file under the first message (please read the topic message carefully and you should have found it).

Title: Re: Pendulum
Post by: TaraLaster on December 18, 2013, 10:50:39 am
Quote from: Fu-Kwun Hwang on January 29, 2004, 05:35:38 pm
[the full first post of this topic, quoted verbatim]
So sad.. still can't find resource code. :( Do you have any other way?
# galois.berlekamp_massey¶
galois.berlekamp_massey(sequence: FieldArray, output: Literal['minimal'] = 'minimal') Poly
galois.berlekamp_massey(sequence: FieldArray, output: Literal['fibonacci'])
galois.berlekamp_massey(sequence: FieldArray, output: Literal['galois'])
Finds the minimal polynomial $$c(x)$$ that produces the linear recurrent sequence $$y$$.
This function implements the Berlekamp-Massey algorithm.
Parameters
sequence
A linear recurrent sequence $$y$$ in $$\mathrm{GF}(p^m)$$.
output
The output object type.
• "minimal" (default): Returns the minimal polynomial that generates the linear recurrent sequence. The minimal polynomial is the characteristic polynomial $$c(x)$$ of minimal degree.
• "fibonacci": Returns a Fibonacci LFSR that produces $$y$$.
• "galois": Returns a Galois LFSR that produces $$y$$.
Returns
The minimal polynomial $$c(x)$$, a Fibonacci LFSR, or a Galois LFSR, depending on the value of output.
Notes
The minimal polynomial is the characteristic polynomial $$c(x)$$ of minimal degree that produces the linear recurrent sequence $$y$$.
$c(x) = x^{n} - c_{n-1}x^{n-1} - c_{n-2}x^{n-2} - \dots - c_{1}x - c_{0}$
$y_t = c_{n-1}y_{t-1} + c_{n-2}y_{t-2} + \dots + c_{1}y_{t-n+2} + c_{0}y_{t-n+1}$
For a linear sequence with order $$n$$, at least $$2n$$ output symbols are required to determine the minimal polynomial.
References
Examples
The sequence below is a degree-4 linear recurrent sequence over $$\mathrm{GF}(7)$$.
In [1]: GF = galois.GF(7)
In [2]: y = GF([5, 5, 1, 3, 1, 4, 6, 6, 5, 5])
The characteristic polynomial is $$c(x) = x^4 + x^2 + 3x + 5$$ over $$\mathrm{GF}(7)$$.
In [3]: galois.berlekamp_massey(y)
Out[3]: Poly(x^4 + x^2 + 3x + 5, GF(7))
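The returned coefficients can be cross-checked against the sequence with plain modular arithmetic, independently of the galois package. An illustrative sketch: since $c(x) = x^4 - c_3x^3 - c_2x^2 - c_1x - c_0$, the polynomial above gives $(c_3, c_2, c_1, c_0) = (0, 6, 4, 2)$ over $\mathrm{GF}(7)$:

```python
# Verify that the recurrence defined by c(x) = x^4 + x^2 + 3x + 5 over GF(7)
# reproduces the example sequence: y_t = c3*y_{t-1} + c2*y_{t-2} + c1*y_{t-3}
# + c0*y_{t-4} (mod 7), with (c3, c2, c1, c0) = (0, 6, 4, 2).
y = [5, 5, 1, 3, 1, 4, 6, 6, 5, 5]
c3, c2, c1, c0 = 0, 6, 4, 2

for t in range(4, len(y)):
    predicted = (c3 * y[t - 1] + c2 * y[t - 2] + c1 * y[t - 3] + c0 * y[t - 4]) % 7
    assert predicted == y[t]

print("recurrence verified")
```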
Use the Berlekamp-Massey algorithm to return equivalent Fibonacci and Galois LFSRs that reproduce the sequence.
In [4]: lfsr = galois.berlekamp_massey(y, output="fibonacci")
In [5]: print(lfsr)
Fibonacci LFSR:
field: GF(7)
feedback_poly: 5x^4 + 3x^3 + x^2 + 1
characteristic_poly: x^4 + x^2 + 3x + 5
taps: [0, 6, 4, 2]
order: 4
state: [3, 1, 5, 5]
initial_state: [3, 1, 5, 5]
In [6]: z = lfsr.step(y.size); z
Out[6]: GF([5, 5, 1, 3, 1, 4, 6, 6, 5, 5], order=7)
In [7]: np.array_equal(y, z)
Out[7]: True
In [8]: lfsr = galois.berlekamp_massey(y, output="galois")
In [9]: print(lfsr)
Galois LFSR:
field: GF(7)
feedback_poly: 5x^4 + 3x^3 + x^2 + 1
characteristic_poly: x^4 + x^2 + 3x + 5
taps: [2, 4, 6, 0]
order: 4
state: [2, 6, 5, 5]
initial_state: [2, 6, 5, 5]
In [10]: z = lfsr.step(y.size); z
Out[10]: GF([5, 5, 1, 3, 1, 4, 6, 6, 5, 5], order=7)
In [11]: np.array_equal(y, z)
Out[11]: True
Last update: May 18, 2022