Recently I decided to study EM processes with a massive spin-1 boson represented by a field $\hat {W}_{\mu}$.
At first I used the minimally modified Lagrangian: $$\tag 1 \hat {L} = |\partial_{\mu}\hat {W}_{\nu} - \partial_{\nu}\hat {W}_{\mu}|^{2} + m^{2}|\hat {W}|^{2} \to \hat {L}_{m} = |\hat {D}_{\mu}\hat {W}_{\nu} - \hat {D}_{\nu}\hat {W}_{\mu}|^{2} + m^{2}|\hat {W}|^{2} + F_{\mu \nu}^2 ,$$ where $\hat{D}_{\mu} = \partial_{\mu} - iq_{e}\hat {A}_{\mu}$.
Then I decided to study the $W W^{+} \to \gamma \gamma$ process, which is described by three diagrams of the second (the lowest) order in $q_{e}$.
It turned out that longitudinal photons are involved in the interaction, despite the apparent Lorentz invariance of the theory. I then recalled that theory $(1)$ does not have tree unitarity, while the theory given by $$\hat {L} = \hat {L}_{m} -iq_{e}\hat {F}^{\mu \nu}\hat {W}_{\mu}\hat {W}^{\dagger}_{\nu}$$ does. Now the $W W^{+} \to \gamma \gamma$ amplitude is free of longitudinal photons.
So, the question: does some relation exist between unitarity and gauge invariance? I.e., do we need tree unitarity (not renormalizability; I am not asking about that) when discussing Ward identities and their applications for an arbitrary "gauge-invariant candidate" theory?
This post imported from StackExchange Physics at 2014-07-29 20:50 (UCT), posted by SE-user Andrew McAddams
|
Notice how the two components acting together give the original vector as their resultant.
Step 3 :
Now we can use trigonometry to calculate the magnitudes of the components of the original displacement:
$$s_{N} = 250\sin 30^{\circ} = 125\ \mathrm{km}$$
and
$$s_{E} = 250\cos 30^{\circ} = 216.5\ \mathrm{km}$$
Remember that $s_N$ and $s_E$ are the magnitudes of the components; they are in the directions north and east respectively.
Block on an incline
As a further example of components, let us consider a block of mass $m$ placed on a frictionless surface inclined at some angle $\theta$ to the horizontal. The block will obviously slide down the incline, but what causes this motion?
The forces acting on the block are its weight $mg$ and the normal force $N$ exerted by the surface on the object. These two forces are shown in the diagram below.
Fhsst vectors 50.png
Now the object's weight can be resolved into components parallel and perpendicular to the inclined surface. These components are shown as red arrows in the diagram above and are at right angles to each other. The components have been drawn acting from the same point. Applying the parallelogram method, the two components of the block's weight sum to the weight vector.
To find the components in terms of the weight we can use trigonometry:
$$W_{\parallel} = mg\sin\theta, \qquad W_{\perp} = mg\cos\theta$$
The component of the weight perpendicular to the slope, $W_{\perp}$, exactly balances the normal force $N$ exerted by the surface. The parallel component, $W_{\parallel}$, is unbalanced and causes the block to slide down the slope.
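As a quick numerical check of these formulas (the mass and angle below are made-up values, not from the text):

```python
import math

m = 2.0                    # hypothetical mass in kg
g = 9.8                    # gravitational acceleration in m/s^2
theta = math.radians(30)   # hypothetical incline angle

W_par = m * g * math.sin(theta)    # component parallel to the slope
W_perp = m * g * math.cos(theta)   # component perpendicular to the slope

# the two components recombine to the full weight mg
assert math.isclose(math.hypot(W_par, W_perp), m * g)
print(W_par)   # ~9.8 (N): unbalanced, pulls the block down the slope
print(W_perp)  # ~16.97 (N): balanced by the normal force N
```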
Vector addition using components
In Figure 3.3 two vectors are added in a slightly different way to the methods discussed so far. It might look a little like we are making more work for ourselves, but in the long run it will be easier and we will be less likely to go wrong.
In Figure 3.3 the vectors we are adding are represented by solid lines and are the same vectors as those added in Figure 3.2 using the less complicated looking method.
Fhsst vectors 51.png
Figure 3.2: An example of two vectors being added to give a resultant. Each vector can be broken down into a component in the x-direction and one in the y-direction. These components are two vectors which, when added, give you the original vector as the resultant. Look at the red vector in Figure 3.3: if you add up the two red dotted components in the x-direction and y-direction you get the same vector. For all the vectors we have shown their respective components as dotted lines in the same colour.
But if we look carefully, addition of the x components of the two original vectors gives the x component of the resultant. The same applies to the y components. So if we just added all the components together we would get the same answer! This is another important property of vectors.
Worked Example 12
Adding Vectors Using Components
Question: Let's work through the example shown in Figure 3.3 to determine the resultant.
Answer:
Step 1 :
The first thing we must realise is that the order in which we add the vectors does not matter. Therefore, we can work through the vectors to be added in any order.
Step 2 :
Let us start with the bottom vector. You are told that this vector has a length of 5.385 units and an angle of 21.8° to the horizontal, so we can find its components. We do this by using known trigonometric ratios. First we find the vertical or y component:
$$\begin{matrix}\sin \theta &=&\frac {y}{\text{hypotenuse}}\\ \sin(21.8^{\circ})&=&\frac {y}{5.385}\\ y&=&5.385\sin(21.8^{\circ})\\ y&=&2\end{matrix}$$
Fhsst vectors 53.png
Secondly we find the horizontal or x component:
$$\begin{matrix}\cos \theta &=&\frac {x}{\text{hypotenuse}}\\ \cos(21.8^{\circ})&=&\frac {x}{5.385}\\ x&=&5.385\cos(21.8^{\circ})\\ x&=&5\end{matrix}$$
We now know the lengths of the sides of the triangle for which our vector is the hypotenuse. If we look at these sides we can assign them directions given by the dotted arrows. Then our original red vector is just the sum of the two dotted vectors (its components). When we try to find the final answer we can just add all the dotted vectors, because they add up to the two vectors we want to add.
Step 3 :
Now we move on to the second vector. The green vector has a length of 5 units and a direction of 53.13° to the horizontal, so we can find its components.
$$\begin{matrix}\sin \theta &=&\frac {y}{\text{hypotenuse}}\\ \sin(53.13^{\circ})&=&\frac {y}{5}\\ y&=&5\sin(53.13^{\circ})\\ y&=&4\end{matrix}$$
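The component bookkeeping in this worked example is easy to verify with a short script (a sketch; the lengths and angles are the ones from the example):

```python
import math

def components(length, angle_deg):
    """Resolve a vector given by length and angle (to the horizontal)
    into x- and y-components."""
    a = math.radians(angle_deg)
    return length * math.cos(a), length * math.sin(a)

# red vector: 5.385 units at 21.8 degrees -> components (5, 2)
x1, y1 = components(5.385, 21.8)
# green vector: 5 units at 53.13 degrees -> components (3, 4)
x2, y2 = components(5, 53.13)

# adding all the components gives the components of the resultant
rx, ry = x1 + x2, y1 + y2
magnitude = math.hypot(rx, ry)
print(round(rx), round(ry), round(magnitude))  # 8 6 10
```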
|
In what follows, we turn to the generation of an (input) random field $g\in L^P(\Omega,\Sigma, \mathbb{P}; \mathcal{X})$ with realisations in a Banach space $\mathcal{X}=\mathcal{X}(D)$ defined on some Lipschitz domain $D\subset\mathbb{R}^2$, for some $1<P< \infty$. For example, we have $\mathcal{X} = L^\infty(D)$ for the diffusion field $g=a$, or $\mathcal{X} = H^{-1}(D)$ resp. $\mathcal{X} = L^2(D)$ for the case of the right-hand side $g=f$.
We will discuss two examples of random fields:
As an example we present a simple random cookie field that models a composite material.
Here we state a representation by means of the Karhunen–Loève expansion (KLE): \begin{align} \kappa(x,\omega) = \kappa(x, \xi(\omega)) = \kappa_0(x) + \sum\limits_{m=1}^\infty \sqrt{\lambda_m}\kappa_m(x)\xi_m(\omega) \end{align}
The KLE-based representation and its truncated version are one of the basic motivations for stochastic Galerkin methods based on generalized polynomial chaos expansions (gPCE), which extend Wiener's polynomial chaos expansion (w.r.t. underlying Gaussian random variables).
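As an illustration, a truncated KLE can be sampled in a few lines. The eigenpairs below ($\lambda_m = m^{-2}$, $\kappa_m(x) = \sqrt{2}\sin(m\pi x)$ on $D=(0,1)$, mean $\kappa_0 \equiv 1$, i.i.d. standard normal $\xi_m$) are illustrative choices, not the ones used for the cookie field discussed next:

```python
import numpy as np

def truncated_kle(x, xi, kappa0=1.0):
    """Evaluate kappa(x, xi) = kappa0 + sum_{m=1}^M sqrt(lambda_m) kappa_m(x) xi_m
    for the illustrative eigenpairs lambda_m = m^-2, kappa_m(x) = sqrt(2) sin(m pi x)."""
    M = len(xi)
    m = np.arange(1, M + 1)
    lam = m ** (-2.0)
    basis = np.sqrt(2.0) * np.sin(np.pi * np.outer(x, m))  # shape (len(x), M)
    return kappa0 + basis @ (np.sqrt(lam) * xi)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 101)
sample = truncated_kle(x, rng.standard_normal(20))  # one realisation, M = 20 terms
# averaging many realisations recovers the mean field kappa_0
mean_field = np.mean(
    [truncated_kle(x, rng.standard_normal(20)) for _ in range(2000)], axis=0)
```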
In the following we generate random fields by means of rejection sampling. This applies e.g. to coefficients describing a random composite material. Here we model a matrix composite material with a fixed (but adjustable) number of non-overlapping circular inclusions with random radii and positions, also known as a random cookie material.
At first we load necessary packages.
from dolfin import *
from random import uniform
import mshr
import numpy as np
import matplotlib as mpl
mpl.rcParams['lines.linewidth'] = 0.5
%matplotlib inline
First we generate random radii and positions uniquely describing the circular inclusions.
def generate_non_overlapping_data(no_holes, size_D=[(0., 0.), (1., 1.)]):
    """
    Generate radii and midpoint data for [no_holes] circular subdomains
    that do not overlap within a given minimal distance dist_tol = 0.005.
    The midpoints lie within the region of interest specified by size_D.
    @param no_holes: number of circular subdomains
    @param size_D : specifies the rectangular domain D
    @returns radii and midpoint data as lists
    """
    # preset range of radii of the circular subdomains
    min_rad, max_rad = 0.02, 0.15
    # minimal distance by which the circles may not overlap
    dist_tol = 0.005
    # data for subdomain circles that do not overlap
    radii, m_x, m_y = [], [], []
    success = 0
    while success < no_holes:
        # generate a sample candidate
        rad_new = uniform(min_rad, max_rad)
        cx_new = uniform(size_D[0][0], size_D[1][0])
        cy_new = uniform(size_D[0][1], size_D[1][1])
        add = True
        # check if the sample intersects with the previous circles
        for rad, cx, cy in zip(radii, m_x, m_y):
            distM = np.sqrt((cx_new - cx) ** 2 + (cy_new - cy) ** 2)
            if rad_new + rad + dist_tol > distM:
                add = False
                break
        if add:
            radii.append(rad_new)
            m_x.append(cx_new)
            m_y.append(cy_new)
            success += 1
    return radii, m_x, m_y
Next the above code is used to generate a realisation of the underlying random cookie field.
def composite_material(no, N):
    """
    Return a realisation of a diffusion coefficient that is piecewise constant
    with circular inclusions; may represent a composite material.
    @param no: number of holes
    @param N : discretisation
    :return: FEniCS function and underlying mesh
    """
    coeff1 = 1.
    coeff2 = 10.
    # define 2D geometry
    size_D = [(0., 0.), (1., 1.)]
    domain = mshr.Rectangle(Point(size_D[0][0], size_D[0][1]),
                            Point(size_D[1][0], size_D[1][1]))
    # get realisation of cookie data
    radii, m_x, m_y = generate_non_overlapping_data(no)
    # set subdomains for an adapted mesh
    for k, data in enumerate(zip(radii, m_x, m_y)):
        r, x, y = data
        circ = mshr.Circle(Point(x, y), r)
        domain.set_subdomain(k + 1, circ)
    # generate mesh
    mesh = mshr.generate_mesh(domain, N)

    # a subdomain class representing a cookie, used to define the FE function field
    class circle(SubDomain):
        def __init__(self, r, mx, my):
            super().__init__()  # required for dolfin SubDomain subclasses
            self._r = r
            self._mx = mx
            self._my = my

        def inside(self, x, on_boundary):
            return pow(x[0] - self._mx, 2) + pow(x[1] - self._my, 2) - pow(self._r, 2) <= DOLFIN_EPS

    # collection of cookies
    circles = [circle(r, mx, my) for r, mx, my in zip(radii, m_x, m_y)]
    # discrete space of piecewise constant functions w.r.t. the mesh
    DG0 = FunctionSpace(mesh, "DG", 0)
    # create the output field
    field = Function(DG0)
    # data array associated with field (default value coeff2 outside the cookies)
    data = np.array([coeff2] * DG0.dim())
    # relation between d.o.f.s and topological entities (cell midpoints) ...
    dofs = DG0.dofmap().dofs()
    dofs_x = DG0.tabulate_dof_coordinates().reshape((-1, mesh.geometry().dim()))
    dofVertexMap = zip(dofs, dofs_x)
    # ... used to set the FE function field
    for dof, x in dofVertexMap:
        for c in circles:
            if c.inside(x, False):
                data[dof] = coeff1
                break
    field.vector()[:] = data
    return field, mesh
Now we generate a mesh and a cookie field for $15$ cookies.
field, mesh = composite_material(15, 40)
We plot the underlying mesh to observe the grain circles.
plot(mesh)
Now we plot the field. The field takes the value $1.0$ inside the circular inclusions and $10.0$ in the surrounding matrix material.
plt_field = plot(field)  # fenics.plot returns a matplotlib object
mpl.pyplot.colorbar(plt_field)
Numerically solve the diffusion problem with homogeneous Dirichlet b.c. using the cookie field as diffusion coefficient and plot the discrete solution.
fe_type = "Lagrange"
fe_degree = 1
V = FunctionSpace(mesh, fe_type, fe_degree)
# Define boundary condition
u0 = Constant(0.0)
bc = DirichletBC(V, u0, 'on_boundary')
# Define variational problem
u = TrialFunction(V)
v = TestFunction(V)
kappa = field
f = Constant(1)  # Expression("1", degree=3)
a = inner(kappa * grad(u), grad(v)) * dx
L = f * v * dx
# Compute solution
u = Function(V)
solve(a == L, u, bc)
plot(u)
End of part II. Next: random fields, Part 2.
|
Remember that an oracle machine isn't really a "complete object" - basically anything interesting we might ask of it
depends on what oracle we feed it. For example, whether an oracle machine $\Phi_e^-(e)$ halts or not depends in general on what oracle it's working with.
So let me rephrase the fact you're starting with:
There is an oracle $X$ and an oracle machine $\Phi_e^-$ such that $\Phi_e^X$ (= $\Phi_e^-$ with oracle $X$) computes the halting problem.
Now
every specific oracle has a corresponding halting problem: namely, given an oracle $X$ we let $$X':=\{e: \Phi_e^X(e)\mbox{ halts}\}.$$ The usual proof that the halting problem is not computable translates immediately to prove that $X'$ is not $X$-computable - that is, for every oracle $X$, there is no oracle machine $\Phi_e^X$ such that $\Phi_e^X$ computes $X'$.
Since $X'$ depends heavily on $X$, there is no "halting problem for oracle machines" - rather, each oracle determines a different "relativized halting problem," and the more complicated we make the oracle the more complicated this becomes, with the result that we never "catch our tail."
EDIT: here's how to "relativize" the unsolvability of the halting problem:
Fix an oracle $X$. Suppose $c$ "solves the $X$-halting problem" - that is, for each $n$ we have $\Phi_c^X(n)=1$ iff $n\in X'$. As in the non-oracle case, there is$^*$ an oracle machine $\Phi_e^-$ such that for all $n$, we have $\Phi_e^X(n)\downarrow$ iff $\Phi_c^X(n)\downarrow=0$. Then $\Phi_c^X(e)=0$ iff $e\in X'$, contradicting the assumption on $c$.
$^*$This uses the relativized version of the existence of a universal machine, which is proved analogously as for non-oracle machines. Note, incidentally, that $e$ is independent from $X$: a single $e$ does the job for every oracle.
|
Let $E$ be a dense subset of a metric space $X$, and let $f$ be a uniformly continuous
real function defined on $E$. Prove that $f$ has a continuous extension from $E$ to $X$.
Could the range space $\mathbb{R}^1$ be replaced by any metric space?
Proof: I solved the first part of the problem, but I am interested in the second part.
Let $f(x)=x$, $E=Y=\mathbb{Q}$ and $X=\mathbb{R}^1$. Note that $\mathbb{Q}$ is
not complete with the usual metric.
Suppose there exists a continuous extension $g:\mathbb{R}\to \mathbb{Q}$ such that $g$ is continuous on the real line and $g|_{\mathbb{Q}}=f$. Let $(\alpha_n)$ be a sequence of rational numbers such that $\alpha_n\to \sqrt{2}$. Since $g$ is continuous, $\lim \limits_{n\to \infty}g(\alpha_n)=g(\sqrt{2})$, but the LHS equals $\lim \limits_{n\to \infty}g(\alpha_n)=\lim \limits_{n\to \infty}f(\alpha_n)=\lim \limits_{n\to \infty}\alpha_n=\sqrt{2}$. So $g(\sqrt{2})=\sqrt{2}$, but $\sqrt{2}\notin \mathbb{Q}$. We get a contradiction because $g:\mathbb{R}\to \mathbb{Q}$.
Is my example correct? Can anyone check this please?
|
Let $\Omega\subset\mathbb{R}^d$ be open, and let $P(D)=\sum_{|\alpha|\le N}f_{\alpha}D^{\alpha}$ be an elliptic differential operator. Rudin proves in
Functional Analysis, Part II (the Regularity Theorem for Elliptic Operators), that if the $f_{\alpha}$ are smooth and the $f_{\alpha}$ are constant for $|\alpha|=N$, then $P(D)u$ is locally in $H_s(\Omega)$ if and only if $u$ is locally in $H_{s+N}(\Omega)$, where \begin{equation}H_s=\{u\in D'(\mathbb{R}^d):(1+|y|^2)^{s/2}\hat{u}\in\mathcal{L}^2\}.\end{equation}
I know almost nothing about partial differential equations, and am ignorant of special examples especially. I need some examples showing the conclusion is false for other types of differential operators. That is, something like $Lu=v$ where $v$ is good but $u$ is bad when $L$ is not elliptic.
Thanks!
|
I have a vector field $\vec v=(x^2+y^2+z^2)(x \hat x+y \hat y +z \hat z)$, and I need to compute the integral of $\nabla \cdot \vec v$ over the region $x^2+y^2+z^2 \le 25$. I see that $\vec v=r^2\vec r$, which is $\vec v=r^3 \hat r$. The way I tried to compute the integral was like this:
$$\int r^3\hat r \cdot d\vec a \ ; \ d\vec a = r^2\sin(\phi )\,dr\,d\theta \,d\phi $$
$$\int_0^5 r^3 \hat r\cdot r^2\sin(\phi)\,d\theta \,d\phi= \int_0^{{\pi\over 2}}d\theta\int_0^{{\pi\over 2}}\sin(\phi)\,d\phi\int_0^5r^5\,dr$$
$$=\Big({\pi\over 2} \Big)\Big(1\Big)\Big({1\over 6}5^6\Big)\approx 1302\pi$$
This isn't right, because the solution should be $12{,}500\pi$. However, the worked-out solution doesn't make much sense to me, because it seems like there are a lot of steps missing and there isn't an explanation for what was done:
$$\int \vec A \cdot d\vec a= \int \big(r^3 \hat r \big) \cdot d\vec a$$ $$=r^3 4 \pi r^2=4r^5 \pi = 4 \big(5^5 \big) \pi= 12,500 \pi$$ where $r=\sqrt{25}=5$.
I don't understand this solution. Why wasn't there any integration?
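For what it's worth, the value itself can be confirmed by brute-force quadrature of the flux of $\vec v = r^3\hat r$ through the sphere $r=5$, using the full angular ranges $\theta\in[0,2\pi]$ and $\phi\in[0,\pi]$ (a sketch using numpy, with a simple midpoint rule):

```python
import numpy as np

R = 5.0
N = 100_000
# midpoint rule in phi over [0, pi]; the theta integral contributes 2*pi
phi = (np.arange(N) + 0.5) * np.pi / N
# v . da on the sphere r = R: (R^3) * (R^2 sin(phi) dphi dtheta)
flux = np.sum(R**5 * np.sin(phi)) * (np.pi / N) * (2 * np.pi)
print(flux / np.pi)  # ~12500
```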
|
The null points where the electric field is zero must lie inside the triangle, on the axes of reflection symmetry.
Consider an equilateral triangle with side $2a$. Suppose a null point P lies on the axis through charge 3, as in the diagram below.
Because of symmetry, the horizontal components of the electric fields at P due to charges 1 & 2 cancel out. The vertical components of the electric field for the 3 charges are $$E_1=\frac{K}{r_1^2}\sin\theta=\frac{Ky}{r_1^3},\qquad E_2=E_1,\qquad E_3=\frac{K}{r_3^2}.$$
Using geometry we also have
$$r_1^2=a^2+y^2,\qquad r_3=h-y,\qquad h=a\sqrt3.$$
For P to be a point where the total electric field is zero, we must have $E_1+E_2=E_3$. After substituting from the above equations and rearranging we get an expression containing the single variable $y$ and parameter $a$:
$2y(a\sqrt3-y)^2=(a^2+y^2)^{3/2}$
We wish to find the proportional distance of P from vertex 3, so let $h=1$. The equation becomes
$$2y(1-y)^2=\left(\tfrac13+y^2\right)^{3/2}.$$ If you wish you can expand this into a polynomial equation of degree 6. It may be possible to obtain an analytic solution, but this would be extremely difficult. It is much easier to solve the equation numerically (e.g. using WolframAlpha). The solutions are $y\approx 0.143521$, $y\approx 3.58216$, and $y=\frac13 = 0.333333\ldots$
We must reject the 2nd solution which lies outside the triangle and is unphysical. The 3rd solution is the trivial one in which P lies at the centroid, ie $\frac13h$ from each side. The 1st solution is non-trivial : P lies approx $\frac17 h$ from the nearest side.
There are 3 non-trivial positions for P (one on each median) and 1 trivial position, making a total of 4 points at which the electric field is zero.
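The numerical step is easy to reproduce without WolframAlpha; here is a plain-Python bisection sketch for the non-trivial root:

```python
def f(y):
    # 2y(1-y)^2 - (1/3 + y^2)^(3/2): zero where the fields balance (with h = 1)
    return 2 * y * (1 - y) ** 2 - (1 / 3 + y ** 2) ** 1.5

def bisect(f, lo, hi, tol=1e-10):
    """Simple bisection; assumes f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) * flo <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

root = bisect(f, 0.01, 0.25)   # bracket containing only the non-trivial solution
print(round(root, 6))          # 0.143521
```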
|
I have a number of measurements of the same quantity (in this case, the speed of sound in a material). Each of these measurements has their own uncertainty.
$$ v_{1} \pm \Delta v_{1} $$ $$ v_{2} \pm \Delta v_{2} $$ $$ v_{3} \pm \Delta v_{3} $$ $$ \vdots $$ $$ v_{N} \pm \Delta v_{N} $$
Since they're measurements of the same quantity, all the values of $v$ are roughly equal. I can, of course, calculate the mean:
$$ v = \frac{\sum_{i=1}^N v_{i}}{N}$$
What would the uncertainty in $v$ be? In the limit where all the $\Delta v_i$ are small, $\Delta v$ should be the standard deviation of the $v_i$. If the $\Delta v_i$ are large, then $\Delta v$ should be something like $\sqrt{\frac{\sum_i \Delta v_i^2}{N}}$, right?
So what is the formula for combining these uncertainties? I don't think it's the one given in this answer (though I may be wrong) because it doesn't look like it behaves like I'd expect in the above limits (specifically, if the $\Delta v_i$ are zero then that formula gives $\Delta v = 0$, not the standard deviation of the $v_i$).
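To make the two limiting quantities concrete (the numbers below are made up; the "propagated" line is the standard error-propagation result for an unweighted mean, which differs from the RMS expression in the question by a factor of $\sqrt{N}$):

```python
import numpy as np

v = np.array([343.2, 342.8, 343.5, 342.9, 343.1])  # hypothetical measurements
dv = np.array([0.4, 0.5, 0.3, 0.6, 0.4])           # their individual uncertainties

mean = v.mean()
# scatter of the measurements themselves (what dominates when the dv_i are small)
scatter = v.std(ddof=1)
# propagated uncertainty of the unweighted mean: sqrt(sum dv_i^2) / N
propagated = np.sqrt(np.sum(dv ** 2)) / len(v)
# the RMS expression suggested in the question, for comparison
rms = np.sqrt(np.mean(dv ** 2))
```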
|
I) On a general symplectic manifold $({\cal M},\omega)$ (typically called
phase space by physicists), one can locally choose a symplectic potential $\theta\in \Gamma(T^{*}{\cal M}|_{\cal U})$, which is a one-form such that
$$\tag{1} \mathrm{d}\theta~=~\omega,$$
cf. the Poincaré lemma. Here ${\cal U}\subseteq {\cal M}$ denotes a local neighborhood.
Note that the symplectic potential $\theta$ is
never unique (or 'canonical') in the sense that
$$\tag{2} \theta^{\prime}~=~\theta+\mathrm{d}F$$
would also be a symplectic potential, if $F$ is a zero-form (aka. a function).
For a general symplectic manifold $({\cal M},\omega)$ there may not exist a globally defined symplectic potential $\theta$.
Darboux' theorem states that any $2n$-dimensional symplectic manifold $({\cal M},\omega)$ is locally isomorphic to the cotangent bundle $T^*(\mathbb{R}^n)$ equipped with the canonical symplectic two-form.
II) Consider next the special case where the symplectic manifold ${\cal M}=T^{*}M$ happens to be a cotangent bundle
$$\tag{3} {\cal M}~=~T^{*}M~\stackrel{\pi}{\longrightarrow}~ M $$
equipped with the canonical symplectic two-form $\omega$, which in local coordinates reads
$$\tag{4} \omega|_{\pi^{-1}(U)} ~=~\sum_{i=1}^n\mathrm{d}p_i\wedge \mathrm{d}q^i.$$
Here $U\subseteq M$ denotes a local neighborhood of the base manifold $M$. (The base manifold $M$ is typically called the
configuration space by physicists.) Moreover, $q^i$ are local coordinates on the base manifold $M$, and $p_i$ are local coordinates of the cotangent fibers.
Then there
always exists a globally defined symplectic potential $\theta\in \Gamma(T^{*}{\cal M})$ that in local coordinates reads
$$\tag{5} \theta|_{\pi^{-1}(U)}~=~\sum_{i=1}^n p_i ~\mathrm{d}q^i.$$
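Indeed, in these local coordinates the defining property (1) is a one-line check:

$$\mathrm{d}\theta|_{\pi^{-1}(U)}~=~\sum_{i=1}^n \mathrm{d}p_i\wedge \mathrm{d}q^i~=~\omega|_{\pi^{-1}(U)},$$

since $\mathrm{d}(p_i\,\mathrm{d}q^i)=\mathrm{d}p_i\wedge\mathrm{d}q^i$.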
Since the globally defined one-form (5) comes for free on a cotangent bundle ${\cal M}=T^{*}M$ for
any manifold $M$, it is 'tautological' in that sense. But wait, there is more: it can be defined in a manifestly coordinate-independent way, cf. Wikipedia & MBN's answer.
|
I'm not convinced that a neural network is necessarily the right tool for this. When you have a hammer, it's tempting to think that everything looks like a nail... but sometimes the answer is "No, you dummy, that's a screw, use a screwdriver, not a hammer."
So, I'll comment on several kinds of methods you could explore, for your problem. I suggest you look into them all and see which works best in your specific application.
Clustering
It looks like your problem is basically a clustering problem. There are many standard algorithms for clustering data. For instance, if your space is $S=\mathbb{R}^n$, many clustering algorithms are known. There are also many clustering algorithms known that can work on any metric space. I recommend you take a look at https://en.wikipedia.org/wiki/Cluster_analysis to familiarize yourself with some of the standard methods, and look to see whether any of them might suit your application well.
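To make the clustering route concrete, here is a minimal k-means (Lloyd's algorithm) sketch in numpy, assuming $S=\mathbb{R}^n$ with the Euclidean metric; for real use, a library implementation would be preferable:

```python
import numpy as np

def kmeans(X, k, n_iter=100):
    """Plain Lloyd's algorithm: returns (centroids, labels).
    Deterministic initialisation (evenly spaced data points) for this sketch."""
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# two well-separated blobs are recovered as the two clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(5.0, 0.5, (50, 2))])
centroids, labels = kmeans(X, 2)
```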
Neural networks and autoencoders
If you are set on using a neural network (which, again, might not be the right attitude), an alternative approach might be to learn an auto-encoder for the data. This is a standard way to use neural networks for unsupervised learning. This basically asks the neural network to find a way to project high-dimensional data down to some lower-dimensional space, in a way that preserves as much as possible of the structure of the original data. Empirically, it often seems to be an effective way to do unsupervised learning.
You could train a neural network in this way, and then use it to project your inputs to the lower-dimensional space and measure distances in the projected space. There are standard training algorithms to fit an autoencoder.
See also https://en.wikipedia.org/wiki/Neural_network#Unsupervised_learning and https://en.wikipedia.org/wiki/Autoencoder.
Metric space embeddings
You might also want to look into the literature on metric space embeddings. A metric space embedding is a map $f:S \to \mathbb{R}^n$ from a metric space $S$ to $\mathbb{R}^n$, such that $f$ approximately preserves the distance metric on $S$. In other words, we seek to find $f$ such that
$$\|f(x)-f(y)\|_2 \approx d_S(x,y)$$
for all or most $x,y \in S$. This is an embedding into $\mathbb{R}^n$ with the Euclidean norm (or into $\ell_2$ if we don't care about the dimension $n$).
There's been a lot of work on this subject, including the computational aspects of learning/finding such embeddings. See, e.g.,
I'm not an expert on this subject, so there's probably more relevant work out there (e.g., in the machine learning or systems community).
|
Sorry for the newbie inquiry, but I'm having a little trouble making sense of stationarity and how the presence of a time trend impacts it. I'm working on a model for operating margins, and as a first step I want to determine whether the original series is stationary before proceeding. I first fitted a simple linear trend line to the data, and the time regressor, while small in magnitude, registered as highly significant. I was always under the impression that this implied a non-constant mean, thus non-stationarity, and might require a transform or differencing. I then regressed the first-differenced time series on the lag of the original time series and found the coefficient on the lagged value to be negative and highly significant (t-stat greater than 9). This is where I got a little confused, as these two results seem to contradict my understanding of the subject. I thought a rejection of the null $\gamma = 0$ (Dickey-Fuller test) indicated no unit root, thus mean-reverting and stationary. This seems to conflict with my initial assessment based on the deterministic time trend component. Thanks in advance!
Suppose the data generating process, as you have suspected, is as follows: $$y_t = \gamma t + \epsilon_t$$ A first difference of the series will be $$\Delta y_t = \gamma + \epsilon_t - \epsilon_{t-1}$$ Now, as in your second stage, regressing $\Delta y_t$ on $y_{t-1}$, what you will have estimated is $$\Delta y_t = \alpha + \beta y_{t-1} + e_t$$ which is equivalent to $$\gamma + \epsilon_t - \epsilon_{t-1} = \alpha + \beta y_{t-1} + e_t$$ Re-arranging gives you the following expression: $$\epsilon_t - \epsilon_{t-1} = (\alpha - \gamma) + \beta y_{t-1} + e_t$$ So your second-stage regression should yield $\beta = 0$ if you have a deterministic time trend. The fact that you have a negative and significant coefficient in the second stage suggests that this assumed DGP is wrong. I would highly recommend you perform a residual check on the first stage: fit a deterministic trend to the original series and plot the ACF of the residuals. I suspect you will see significant autocorrelation at many lags, indicating that you might consider fitting more complicated models such as ARIMA-type models.
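To illustrate the point numerically, here is a quick simulation sketch (parameter values are arbitrary): under the pure deterministic-trend DGP, the slope on $y_{t-1}$ in the second-stage regression comes out numerically close to zero, and the intercept is close to $\gamma$.

```python
import numpy as np

# Simulate the pure deterministic-trend DGP:  y_t = gamma * t + eps_t
rng = np.random.default_rng(42)
n, gamma = 2000, 0.5
t = np.arange(n)
y = gamma * t + rng.normal(size=n)

# Second-stage regression: Delta y_t on a constant and y_{t-1}
dy = np.diff(y)
ylag = y[:-1]
X = np.column_stack([np.ones(n - 1), ylag])
alpha_hat, beta_hat = np.linalg.lstsq(X, dy, rcond=None)[0]
# Under this DGP the slope on y_{t-1} is numerically near zero and the
# intercept near gamma; a large, significantly negative slope instead
# points to a different data generating process.
```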
A difference-stationary series will not become stationary by detrending (regression), and a trend-stationary series becomes over-differenced (stationary but non-invertible) if differenced. Trend is deterministic; drift is a non-zero expectation of the change. I recommend Enders, Applied Econometric Time Series.
If there is a time trend in your data, then it is better to detrend the data first and then test for stationarity. If $$Y_t=\alpha + \beta t +e_t $$ $$e_t=Y_t-\alpha-\beta t \sim N(0,\sigma^2)$$
such that $e_t$ is independent and identically distributed.
If $e_t$ is still not stationary, then also try the log difference.
|
Suppose that 10 volunteers have taken an intelligence test; here are the results obtained. The mean obtained on the same test by the entire population is 75. You want to check whether there is a statistically significant difference (at the 5% significance level) between the sample mean and the population mean, assuming that the population variance is known and equal to 18.
65, 78, 88, 55, 48, 95, 66, 57, 79, 81
To solve this problem it is necessary to perform a
one sample Z-test. In R there isn't a built-in function for this, so we can create our own.
Recalling the formula for calculating the value of z, we will write this function:
$$Z=\frac{\overline{x}-\mu_0}{\frac{\sigma}{\sqrt{n}}}$$
z.test = function(a, mu, var) {
  zeta = (mean(a) - mu) / sqrt(var / length(a))
  return(zeta)
}
We have thus built the function
z.test; it takes as input a vector of values (a), the population mean for the comparison (mu), and the population variance (var); it returns the value of zeta. Now let's apply the function to our problem.
a = c(65, 78, 88, 55, 48, 95, 66, 57, 79, 81)
z.test(a, 75, 18)
[1] -2.832353
The value of zeta is equal to -2.83; its absolute value is greater than the critical value
Zcv = 1.96, with
alpha = 0.05 (
2-tailed test). We conclude therefore that the mean of our sample is significantly different from the mean of the population.
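For comparison, here is a rough Python counterpart of the same test (the function name and structure are my own, not a standard library API); it also reports a two-sided p-value computed from the normal CDF via `erfc`:

```python
from math import sqrt, erfc

def z_test(a, mu, var):
    """One-sample Z-test with known population variance `var`."""
    n = len(a)
    z = (sum(a) / n - mu) / sqrt(var / n)
    p = erfc(abs(z) / sqrt(2))   # two-sided p-value: 2 * (1 - Phi(|z|))
    return z, p

a = [65, 78, 88, 55, 48, 95, 66, 57, 79, 81]
z, p = z_test(a, 75, 18)
# z is about -2.83, matching the R output, so we reject H0 at the 5% level.
```

Comparing $|z|$ against 1.96, as in the text, is equivalent to comparing this p-value against 0.05.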
|
The Annals of Statistics Ann. Statist. Volume 19, Number 3 (1991), 1626-1638. Inference for the Crossing Point of Two Continuous CDF's Abstract
Let $\mathscr{F}$ denote the set of cdf's on $\mathbb{R}$ with density everywhere positive. Let $C_A = \{(F,G) \in \mathscr{F} \times \mathscr{F}$: there exists a unique $x^\ast \in \mathbb{R}$ such that $F(x) > G(x) \text{ for } x < x^\ast \text{ and } F(x) < G(x) \text{ for } x > x^\ast\}$, $C_B = \{(F,G) \in \mathscr{F} \times \mathscr{F}: (G,F) \in C_A\}$. Based on independent random samples from $F$ and $G$ (assumed unknown), we give distribution-free tests of $H_0: F = G$ versus the alternatives that $(F,G) \in C_A$, $(F,G) \in C_B$ or $(F,G) \in C_A \cup C_B$. Next, assuming that $(F,G) \in C_A$ (or in $C_B$), a point estimate of the crossing point $x^\ast$ is obtained and is shown to be strongly consistent and asymptotically normal. Finally, an asymptotically distribution-free confidence interval for $x^\ast$ is obtained. All inferences are based on a special criterion functional of $F$ and $G$, which yields $x^\ast$ when maximized (minimized) if $(F,G) \in C_A$ $\lbrack(F,G) \in C_B\rbrack$.
Article information Source Ann. Statist., Volume 19, Number 3 (1991), 1626-1638. Dates First available in Project Euclid: 12 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aos/1176348266 Digital Object Identifier doi:10.1214/aos/1176348266 Mathematical Reviews number (MathSciNet) MR1126342 Zentralblatt MATH identifier 0729.62046 JSTOR links.jstor.org Citation
Hawkins, D. L.; Kochar, Subhash C. Inference for the Crossing Point of Two Continuous CDF's. Ann. Statist. 19 (1991), no. 3, 1626--1638. doi:10.1214/aos/1176348266. https://projecteuclid.org/euclid.aos/1176348266
|
In calculus, I know that one defines the differential quotient$$\frac{dy}{dx} := \lim\limits_{h \rightarrow 0}{\frac{y(x+h)-y(x)}{h}}$$I learned that it is
not a quotient, but can be treated as one in many cases, which one can prove, such as$$ \frac{dy}{dx} = \left(\frac{dx}{dy}\right)^{-1} \quad \text{or} \quad \frac{dy}{dx} \frac{dx}{dt} = \frac{dy}{dt} $$For examples like that, it seems more intuitive and is easier to understand — but in a mathematically rigorous way.
As far as this, no problem — until I reach some content in my book saying things just like $$ dU = d\vec{r} \cdot \operatorname{grad} U \quad \text{or even} \quad d\vec{r} \times \vec{A} = 0 $$
This confuses me in two ways:
Math is the only science where it is essential to define everything which appears in an equation or expression. When I see the term $dy$, I ask myself: "How is this defined?" Each of the terms $\frac{dy}{dx}$, $\int f\,dx$ etc. has a concrete definition, whereas $dy$ doesn't seem to have one — intuitively, one is tempted to say $dx := \lim\limits_{h\rightarrow 0}{\left(x - (x+h)\right)}$, which would be exactly zero. According to what Wikipedia says, it is defined as $df(x,\Delta x):=f'(x)\Delta x$, which does not match the one-parameter differential as it is usually used. Therefore, WP also writes $df(x):=f'(x)dx$, which is not appropriate, because one cannot define a new operator by using that very operator itself. When new content is introduced in a book with these expressions, even if I understand the intuitive sense or meaning of the equation, I feel as if I have not understood a single word (or variable), because 90% of my thoughts are asking how I should evaluate the equation or expression mathematically, and whether it is legitimate to accept such knowledge based on wrong or unclear axioms — which results in a 2h-long bafflement.
Could you please make this topic a little more clear for me?
|
There are several ways of testing for coeliac disease, an autoimmune disorder in which the body responds to gluten proteins (gliadins and glutenins) in wheats, wheat hybrids, barley, oats and rye. One diagnostic approach looks at genetic markers in the HLA-DQ (Human Leukocyte Antigen type DQ), part of the MHC (Major Histocompatibility Complex) Class II receptor system. Genetic testing for a particular haplotype of the HLA-DQ2 gene, called DQ2.5, can lead to a diagnosis in most patients. Unfortunately, it's slow and expensive. Another test, a colonoscopic biopsy of the intestines, looks at the intestinal villi, short protrusions (about 1mm long) into the intestine, for tell-tale damage – but this test is unpleasant, possibly painful and costly.
So, a more frequent way is by looking for evidence of an autoantibody called anti-tissue transglutaminase antibody (ATA) – unrelated to this gene, sadly. ATA testing is cheap and cheerful, and relatively good, with a sensitivity ($S^+_{ATA}$) of 85% and specificity ($S^-_{ATA}$) of 97%. (Lock, R.J.
et al. (1999). IgA anti-tissue transglutaminase as a diagnostic marker of gluten sensitive enteropathy. J Clin Pathol 52(4):274-7.) We also know the rough probability of a sample being from someone who actually has coeliac disease – for a referral lab, it's about 1%.
Let's consider the following case study. A patient gets tested for coeliac disease. Depending on whether the test is positive or negative, what are the chances she has coeliac disease?
First, we need to set our seed variables, i.e. the variables we know by definition:
p_coeliac: 0.01
ATA_sensitivity: 0.85
ATA_specificity: 0.97
D_coeliac = 0.01
ATA_specificity = 0.97
ATA_sensitivity = 0.85
Because events are mutually exclusive ($ p(E \mid \neg E) = 0 $), we can express $ p(\neg D_{coeliac}) $ as $ 1 - p(D_{coeliac}) $.
If ATA is positive (event $ATA^+$), what's the likelihood the patient has coeliac disease (probability $p(D_{coeliac} \mid ATA^+)$)?
By Bayes' theorem, we get$$ p(D_{coeliac} \mid ATA^+) = \frac{p(ATA^+ \mid D_{coeliac}) \cdot p(D_{coeliac})}{p(ATA^+)} $$
Let's consider each term in isolation.
The value of $p(ATA^+)$ is the unconditional probability of a positive test result, calculated as the sum of true and false positives, using specificity ($ S^-_{ATA} $) and sensitivity ($ S^+_{ATA} $).$$ p(ATA^+_{true}) = p(+ \mid D_{coeliac}) \cdot p(D_{coeliac}) = S^+_{ATA} \cdot p(D_{coeliac}) $$$$ p(ATA^+_{false}) = p(+ \mid \neg D_{coeliac}) \cdot p(\neg D_{coeliac}) = (1 - S^-_{ATA}) \cdot (1 - p(D_{coeliac})) $$
ATA_true_pos = ATA_sensitivity * D_coeliac
ATA_false_pos = (1 - ATA_specificity) * (1 - D_coeliac)
The value of $p(D_{coeliac})$ is the known frequency of coeliac disease in the population examined, set at 1% or $0.01$. In reality, the prevalence of coeliac disease in the population is approximately 1:400 or $0.0025$, but it's important to remember that the probability of the actual event has to necessarily pertain to the probability of the event
as perceived at the point of analysis, in this case, at the lab. Purely statistically, the people referred to the lab are not a random sample from the population – they're referred to the lab for a reason, and that reason is that they show symptoms that might be coeliac disease. Bottom line – always know the base population.
The conditional probability of $ATA^+$ given $D_{coeliac}$ comprises the cases when the patient has coeliac disease, and the test correctly detects it – in other words, the sensitivity $S^+_{ATA}$.$$ p(ATA^+ \mid D_{coeliac}) = p(+ \mid D_{coeliac}) = S^+_{ATA} $$
Per Bayes' theorem,$$ p(D_{coeliac} \mid ATA^{+}) = \frac{p(ATA^{+} \mid D_{coeliac}) \cdot p(D_{coeliac})}{p(ATA^{+})} $$
Expanding that, we get$$ p(D_{coeliac} \mid ATA^+) = \frac{S^+_{ATA} \cdot p(D_{coeliac})}{p(ATA^+_{true}) + p(ATA^+_{false})} $$
p_coeliac_if_ATA_pos = (ATA_sensitivity * D_coeliac) / (ATA_true_pos + ATA_false_pos)
print("The probability that a positive ATA test result came from a person "
      "who actually has coeliac disease is {coeliac_ATA:.2f}%.".format(
          coeliac_ATA = p_coeliac_if_ATA_pos * 100))
The probability that a positive ATA test result came from a person who actually has coeliac disease is 22.25%.
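The case study also asked about a negative result. By the same logic, here is a sketch of that branch (restating the seed values so it runs on its own); the posterior after a negative ATA test is tiny:

```python
D_coeliac = 0.01          # prior prevalence among referred samples
ATA_sensitivity = 0.85
ATA_specificity = 0.97

# p(D | ATA-) = p(ATA- | D) p(D) / p(ATA-), where
# p(ATA- | D) = 1 - sensitivity and p(ATA- | not D) = specificity.
ATA_false_neg = (1 - ATA_sensitivity) * D_coeliac
ATA_true_neg = ATA_specificity * (1 - D_coeliac)
p_coeliac_if_ATA_neg = ATA_false_neg / (ATA_false_neg + ATA_true_neg)
print("p(D | ATA-) = {:.4f}%".format(p_coeliac_if_ATA_neg * 100))  # about 0.16%
```

So a negative result takes the probability of disease from the 1% prior down to roughly 0.16%, which is why the test is useful mainly for ruling coeliac disease out.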
One feature of this kind of analysis is that incremental increases in sensitivity and specificity have unequal effects on the posterior. This is best seen when plotting them in a two-dimensional space for a fixed value of $D$. The contour plot shows the likelihood that a positive result will come from a patient with coeliac disease ($p(D_{coeliac} \mid ATA^+)$), depending on specificity ($S^{-}_{ATA}$) and sensitivity ($S^{+}_{ATA}$), for a given proportion of coeliac disease in all samples ($D$ or $p(D_{coeliac})$).
# First, we need to describe the relationship between the
# variables as a single function.
def ATA_sensitivity_specificity_function(sensitivity, specificity, D = 0.01):
    true_pos = sensitivity * D
    false_pos = (1 - specificity) * (1 - D)
    return (sensitivity * D) / (true_pos + false_pos)
%matplotlib inline
from numpy import arange
from pylab import meshgrid, cm, imshow, contour, clabel, colorbar, axis, title, show
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import numpy as np
from math import ceil
D_values = (0.0025, 0.005, 0.01, 0.02, 0.03, 0.04, 0.05, 0.075, 0.1)
cols = 2
rows = ceil(len(D_values) / cols)
m_factor = 5
fig = plt.figure(figsize = (m_factor * cols + 2, m_factor * rows + 1.5))
fig.suptitle("Select values of $p(D_{coeliac}\mid ATA^{+})$ for combinations of specificity, sensitivity and $D$", fontsize = 15)
for idx in range(rows):
    for iidx in range(cols):
        _idx = 2 * idx + iidx
        if _idx + 1 > len(D_values):
            pass
        else:
            D_val = D_values[_idx]
            ax = plt.subplot2grid((rows, cols), (idx, iidx))
            box = ax.get_position()
            ax.set_position([box.x0, box.y0, box.width, box.height * 0.95])
            ax.set_title("$D_{coeliac} = $" + str(D_val * 100) + "%")
            delta = 0.025
            x, y = np.arange(0, 1, delta), np.arange(0, 1, delta)
            X, Y = np.meshgrid(x, y)
            Z = ATA_sensitivity_specificity_function(X, Y, D = D_val)
            CS = ax.contour(X, Y, Z)
            CS.levels = 100 * CS.levels
            CS.ax.set_xlim(0.4, 1)
            CS.ax.set_ylim(0.75, 1)
            CS.ax.set_ylabel("Specificity ($S^-_{ATA}$)")
            CS.ax.set_xlabel("Sensitivity ($S^+_{ATA}$)")
            if plt.rcParams["text.usetex"]:
                fmt = r'%.2f \%%'
            else:
                fmt = '%.2f %%'
            CS.ax.clabel(CS, CS.levels, inline = True, fmt = fmt, fontsize = 10)
plt.tight_layout(rect = [0, 0.03, 1, 0.95])
plt.show()
## To turn this into a D-conditional 3d plot, we can turn the X, Y and Z values into a 3D surface.
## Warning: this is gonna take a truckload of time if you set the resolution too high.
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import matplotlib.pyplot as plt

######
# Define list of D-values here
######
D_values = (0.025, 0.05, 0.1, 0.25, 0.5)

# Define value ranges
sensitivity = arange(0.0, 0.99, 0.05)
specificity = arange(0.3, 0.99, 0.05)

# Define plot size
cols = 2
rows = ceil(len(D_values) / cols)
m_factor = 5

# Creates point mesh grid
X, Y = meshgrid(sensitivity, specificity)

# Create fig
fig = plt.figure(figsize = (m_factor * cols + 2, m_factor * rows + 1.5))
fig.suptitle("Different $S^{+}_{ATA}$ and $S^{-}_{ATA}$ tradeoffs depending on $p(D_{coeliac})$", fontsize = 16)
for idx in range(rows):
    for iidx in range(cols):
        _idx = 2 * idx + iidx
        if _idx + 1 > len(D_values):
            pass
        else:
            D_val = D_values[_idx]
            ax = plt.subplot2grid((rows, cols), (idx, iidx), projection = '3d')
            box = ax.get_position()
            ax.set_position([box.x0, box.y0 + 50, box.width, box.height * 0.95])
            ax.set_title("$D_{coeliac} = $" + str(D_values[_idx] * 100) + "%")
            Z = ATA_sensitivity_specificity_function(X, Y, D = D_values[_idx])
            surf = ax.plot_surface(X, Y, Z, rstride = 1, cstride = 1,
                                   cmap = cm.plasma, linewidth = 0,
                                   antialiased = True, vmin = 0, vmax = 1)
            ax.zaxis.set_major_locator(LinearLocator(10))
            ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
            ax.invert_yaxis()
            fig.colorbar(surf)
            ax.set_xlabel("$ S^+_{ATA} $")
            ax.set_ylabel("$ S^-_{ATA} $")
            ax.set_zlabel("$ p(D_{coeliac} \mid ATA^+)% $")
            ax.set_zlim3d(0, 1)
            ax.set_xlim(0.3, 1)
            ax.set_ylim(0.3, 1)
            ax.set_title("$S^+$ and $S^-$ tradeoff for {v:.2f}% incidence".format(v = 10 * D_values[_idx]), y = 1.1)

### SHOW PLOT
plt.tight_layout(rect = [0, 0.03, 1, 0.95])
plt.show()
|
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI, (1 m²K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
Can you tell me why this question deserves to be negative?I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough . I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca...
I am a bit confused about classical physics's angular momentum. For an orbital motion of a point mass: if we pick a new coordinate system (one that doesn't move w.r.t. the old one), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved; there is an additional term that varies with time.)
in the new coordinate system: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the 1st term varies with time ($\vec{R}$ is the constant shift of the coordinate origin, and $\vec{p}$ is, roughly speaking, rotating).
Would anyone be kind enough to shed some light on this for me?
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet
Is it possible to make a time machine ever? Please give an easy answer, a simple one
A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto10, 47 secs ago
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape, but this leads me to another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud?
|
Difference between revisions of "Carmichael number"
Revision as of 08:48, 22 February 2013
A composite natural number $n$ for which $a^{n-1} \equiv 1$ modulo $n$ whenever $a$ is relatively prime to $n$. Thus they are pseudo-primes (cf. Pseudo-prime) for every such base $a$. These numbers play a role in the theory of probabilistic primality tests (cf. Probabilistic primality test), as they show that Fermat's theorem, to wit $ a^p \equiv a $ modulo $p$ whenever $p$ is prime, is not a sufficient criterion for primality (cf. also Fermat little theorem).
The first five Carmichael numbers are $561$, $1105$, $1729$, $2465$ and $2821$.
R.D. Carmichael [a2] characterized them as follows. Let $\lambda(n)$ be the exponent of the multiplicative group of integers modulo $n$, that is, the least $m$ making all $m$th powers in the group equal to $1$. (This is readily computed from the prime factorization of $n$.) Then a composite natural number $n$ is Carmichael if and only if $\lambda(n) \mid n - 1$. From this it follows that every Carmichael number is odd, square-free, and has at least $3$ distinct prime factors.
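This characterization is easy to check numerically. The following sketch (my own code, testing the definition directly rather than Korselt's criterion) recovers the first five Carmichael numbers:

```python
from math import gcd

def is_carmichael(n):
    """Composite n with a^(n-1) ≡ 1 (mod n) for every a coprime to n."""
    if n < 3 or all(n % p for p in range(2, int(n ** 0.5) + 1)):
        return False  # exclude primes (and tiny n)
    return all(pow(a, n - 1, n) == 1
               for a in range(2, n) if gcd(a, n) == 1)

carmichaels = [n for n in range(2, 3000) if is_carmichael(n)]
# -> [561, 1105, 1729, 2465, 2821]
```

For large $n$ one would instead factor $n$ and apply the $\lambda(n) \mid n-1$ test, which avoids the loop over all bases.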
Let $C(X)$ denote the number of Carmichael numbers $\leq X$. W.R. Alford, A. Granville and C. Pomerance [a1] proved that $C(X) > X^{2/7}$ for sufficiently large $X$. This settled a long-standing conjecture that there are infinitely many Carmichael numbers. It is believed on probabilistic grounds that $C(X) = X^{1 - (1+o(1))\log\log\log X/\log\log X}$ [a4].
P. Erdős proved in 1956 that $ C(X) < X\exp(- k \log X \log\log\log X / \log\log X) $ for some constant $ k > 0 $; he also gave a heuristic suggesting that his upper bound should be close to the true rate of growth of $ C(X) $. [a5]
There is apparently no better way to compute $C(X)$ than to make a list of the Carmichael numbers up to $X$. The most exhaustive computation to date (1996) is that of R.G.E. Pinch, who used the methods of [a3] to determine that $C(10^{16}) = 246683$.
References
[a1] W.R. Alford, A. Granville, C. Pomerance, "There are infinitely many Carmichael numbers" Ann. of Math. 140 (1994) pp. 703–722
[a2] R.D. Carmichael, "Note on a new number theory function" Bull. Amer. Math. Soc. 16 (1910) pp. 232–238 (See also: Amer. Math. Monthly 19 (1912), 22–27)
[a3] R.G.E. Pinch, "The Carmichael numbers up to $10^{15}$" Math. Comp. 61 (1993) pp. 381–391
[a4] C. Pomerance, J.L. Selfridge, S.S. Wagstaff, Jr., "The pseudoprimes to $25\cdot 10^9$" Math. Comp. 35 (1980) pp. 1003–1026
[a5] P. Erdős, "On pseudoprimes and Carmichael numbers" Publ. Math. Debrecen 4 (1956) pp. 201–206
How to Cite This Entry:
Carmichael number.
Encyclopedia of Mathematics.URL: http://www.encyclopediaofmath.org/index.php?title=Carmichael_number&oldid=29459
|
Let's Cover Homotopy (an apologetic revisionary tale)
This post is not really building off of the theory from my previous post (homology pt 1), but it’s about the same subject. It turns out we were using a book I didn’t really like too much, so I switched to something with a bit more of a familiar style for me. So I’ll be using Topology by Munkres for these posts (at least for now).
I’ll not be covering the book in much detail because after all, Munkres did a great job at doing that himself. But I will summarize some things that I feel like summarizing, and do some of the exercises (just the ones I feel like doing because, well, I can – I know, this is sounding more useful by the second).
So let’s dig in!
Part II
But where did part I go? That’s not Algebraic Topology, so we’re not gonna cover that part (at least not until I feel like I need a review).
Chapter 9: The Fundamental Group
Basically, we say $f_0,f_1:X\to Y$ are
homotopic if there is a continuous function $F:X\times I\to Y$ (where $I=[0,1]$) with the following two properties: $\forall x\in X, F(x,0) = f_0(x)$ $\forall x\in X, F(x,1) = f_1(x)$
Basically, at one end of the interval $I$ $F$ is $f_0$ and on the other end $F$ is $f_1$. Since the whole map is continuous, this must be a continuous deformation. And there’s a bunch of (helpful) book-keeping to prove that homotopy defines an equivalence relation, etc. but we won’t be covering that right now. But before we move on, let’s talk about path homotopy…
We say the paths $f_0,f_1:I\to Y$ are
path homotopic if they have the same start and end points ($x_0$ and $x_1$) and there is a homotopy $F$ from $f_0$ to $f_1$ with the following two additional requirements: $\forall s\in I, F(0,s) = x_0$ $\forall s\in I, F(1,s) = x_1$
One thing to note: if you consider a torus (because who doesn’t love donuts), any two non-intersecting paths will be homotopic, but not necessarily path homotopic because they might go around different sides of the donut hole, for instance. The requirement that the end-points are fixed is no trivial thing.
Let’s get into the questions!
Exercises
Show that if $h,h’:X\to Y$ are homotopic and $k,k’:Y\to Z$ are also homotopic, then $k\circ h$ and $k’\circ h’$ are homotopic.
Let $H,K$ be homotopies from $h$ to $h’$ and $k$ to $k’$ respectively. Then consider the map $G:X\times I\to Z$ defined by $G(x,i) = K(H(x,i),i)$.
Claim: $G$ is a homotopy between $k\circ h$ and $k’\circ h’$. Proof: $G$ is obviously continuous (because $H$ and $K$ are), so we only have to prove that $G(x,0) = (k\circ h)(x)$ and $G(x,1) = (k’\circ h’)(x)$. But those follow directly from the definition of $G$. Given spaces $X$ and $Y$, let $[X,Y]$ be the set of homotopy classes of maps from $X$ to $Y$.
a) Show that for any $X$, $[X,I]$ is a singleton.
Let $f,g:X\to I$ be continuous maps. Then since $\forall x \in X$, $f(x),g(x)\in[0,1]$, we know that the following map $H:X\times I\to I$ will always be well defined: $H(x,t) = t\cdot f(x) + (1-t)\cdot g(x)$. $H$ is a homotopy.
b) Show that if $Y$ is path connected, the set $[I,Y]$ is a singleton.
Let $Y$ be path connected and let $f:I\to Y$ be continuous. Pick a point $y_o\in Y$ and define $g:I\to Y$ such that $\forall t\in I$, $g(t)=y_0$. We will prove that $f$ is homotopic to $g$ (and hence all maps are), and conclude that $[I,Y]$ is a singleton. We’ll prove this in two steps: first we’ll show that $f$ is homotopic to the constant map that maps all of $I$ to one of the end-points of $f$; then we’ll show that the aforementioned constant map is homotopic to $g$.
To prove the first part, let $h:I\to Y$ be such that $\forall t\in I$, $h(t)=f(0)$. Let $H:I\times I\to Y$ be such that $H(s,t) = f(s\cdot (1-t))$. Then clearly $H$ is a homotopy between $f$ and $h$.
To prove the second part, since $Y$ is path connected, let $p:I\to Y$ be a path from $f(0)$ to $y_0$. Then let $G:I\times I\to Y$ be such that $G(s,t) = p(t)$. Aaaannnnndddd, that’s our homotopy between the two constant maps.
So, since homotopy is an equivalence relation, chaining $H$ and $G$ together gives our desired homotopy from $f$ to $g$.
A space $X$ is contractible if the identity map $i_X:X\to X$ is homotopic to a constant map (nullhomotopic).
a) Show that $I,\mathbb{R}$ are contractible.
To show that $I$ is contractible, we can just consider it a corollary of the first part of the previous answer. So we’ll prove the second part (that $\mathbb{R}$ is contractible).
Let $i:\mathbb{R}\to\mathbb{R}$ be the identity and let $x_0\in\mathbb{R}$, and $f:\mathbb{R}\to\mathbb{R}$ with $f(x) = x_0$. Our job is to show that there is a homotopy $F:\mathbb{R}\times I\to\mathbb{R}$ between $f$ and $i$.
Consider the straight-line map $F(x,t) = t\,x + (1-t)\,x_0$. Then $F(x,0) = x_0 = f(x)$, $F(x,1) = x = i(x)$, and $F$ is continuous. That is our homotopy.
b) Show that a contractible space is path connected.
Let $Y$ be contractible, and let $F:Y\times I\to Y$ be the map contracting $i_Y$ to the constant map at some point $c\in Y$. Given points $y_0,y_1$, the maps $t\mapsto F(y_0,t)$ and $t\mapsto F(y_1,t)$ are paths from $y_0$ to $c$ and from $y_1$ to $c$ respectively; concatenating the first with the reverse of the second gives a path from $y_0$ to $y_1$. We're done here.
c) Show that if $Y$ is contractible, then for any $X$ the set $[X,Y]$ is a singleton.
The proof is very similar to the last two, so I’m not gonna do it. Ah, the pleasures of self-learning. But basically, contractions have inverses (because they’re homotopies too!).
d) Show that if $X$ is contractible and $Y$ is path connected then $[X,Y]$ is a singleton.
The proof that $[I,Y]$ is a singleton is general enough to accommodate this (mutatis mutandis, obvs). Laziness ENGAGE!
Summary
We’ve done so many exercises I don’t even feel like I need to workout anymore! (These jokes aren’t really free if you consider the toll they take on you).
|
Assume that we have a non-empty finite lattice $(L,\leq)$ and a monotone Boolean-valued function $f : L \rightarrow \mathbb{B}$ (i.e, for every $x,y \in L$, if $f(x)=\mathbf{true}$ and $x \leq y$, then $f(y) = \mathbf{true}$).
What is an efficient algorithm to compute the
frontier set of $f$, i.e., the elements $x \in L$ such that $f(x) = \mathbf{true}$ but for no element $y < x$ we have $f(y) = \mathbf{true}$? Alternatively, what is an efficient algorithm to compute the co-frontier set of $f$, i.e., the elements $x \in L$ such that $f(x) = \mathbf{false}$ but for all elements $y > x$, we have $f(y) = \mathbf{true}$?
The algorithm can use the following operations:
Get the least element of $L$
Get the maximum element of $L$
For some element $x \in L$, compute the direct predecessors of $x$, i.e., the elements $y \in L$ with $y < x$ such that for no $z \in L$ we have $y < z < x$.
For some element $x \in L$, compute the direct successors of $x$, i.e., the elements $y \in L$ with $y > x$ such that for no $z \in L$ we have $x < z < y$.
Evaluate $f$ on some element of the lattice
The algorithm should minimize the number of calls to the oracle $f$. It can be assumed that the number of direct successors or predecessors of some element is constant and "small".
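As a baseline (not an oracle-optimal algorithm), a bottom-up search that evaluates $f$ once per visited element and never expands above a true element finds the frontier of a monotone $f$. The sketch below uses an illustrative product lattice $\{0,\dots,4\}^3$ and an illustrative oracle:

```python
# Bottom-up search for the frontier (minimal true elements) of a monotone
# Boolean function on the product lattice {0,...,4}^3, ordered componentwise.
BOUNDS = (4, 4, 4)

def successors(x):
    """Direct successors of x: increase exactly one coordinate by 1."""
    for i, b in enumerate(BOUNDS):
        if x[i] < b:
            yield x[:i] + (x[i] + 1,) + x[i + 1:]

def frontier(f, bottom):
    """Minimal elements x with f(x) true; f is evaluated once per element."""
    seen, true_elems = set(), []
    stack = [bottom]
    while stack:
        x = stack.pop()
        if x in seen:
            continue
        seen.add(x)
        if f(x):
            true_elems.append(x)      # monotone f: no need to search above x
        else:
            stack.extend(successors(x))
    # discard any true element that strictly dominates another true element
    return [x for x in true_elems
            if not any(y != x and all(a <= b for a, b in zip(y, x))
                       for y in true_elems)]

def f(x):                             # illustrative monotone oracle
    return sum(x) >= 6

front = frontier(f, (0, 0, 0))        # here: exactly the x with sum(x) == 6
```

This still evaluates $f$ on every element below the frontier, so its oracle cost grows with the size of the false region; a serious solution would combine it with binary search along chains to cut the number of calls.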
Some remarks that shall not be seen as part of the problem formulation:
I did a search on algorithms for this already and found a few papers that are not entirely irrelevant to this problem. However, they either do not answer the problem, or they are tied to certain application domains, which makes it hard to tell with certainty whether they are related. An example is "Maximal Antichain Lattice Algorithms for Distributed Computations" by Vijay K. Garg.
The types of lattices that I am actually interested in are direct products of subsets of $\mathbb{N}$ and $\mathbb{B}$. The problem is kind of an optimization problem, where $f$ tells us whether some "quality requirement" vector is feasible or not.
Thanks!
|
When attempting to prove that a number is the infimum of a set, the proof normally proceeds as follows:
1) show the set is bounded below by $t$;
2) assume $s$ is also a lower bound, but $s>t$;
3) show that some element of the set is smaller than $s$, thereby obtaining a contradiction.
However in this question: show that $\inf \{2 + \frac 3 n \mid n \in \Bbb N\} = 2$ using the definition of the infimum, I am confused by this part of the proof and do not understand it:
continuing from 2): let $s>2$ be a lower bound. Then for some $\varepsilon > 0$ we have $s = 2 + \varepsilon$, and for all $n \in \Bbb N$ we have $s = 2 + \varepsilon \le 2 + \frac 2 n$.
But if $\varepsilon = 5$, does the inequality $s = 2 + \varepsilon \le 2 + \frac 2 n$ still hold (as $n=1$ leads to $7 \le 4$)?
Many thanks.
|
I am reading "Topological Field Theory of Time-Reversal Invariant Insulators" by Qi, Hughes, and Zhang (https://arxiv.org/abs/0802.3537). It argues that time reversal invariant (TRI) insulators in 2+1 and 3+1 dimensions are descendants of the fundamental TRI insulator in 4+1 dimensions. When it discusses the quantum Hall effect in 4D, the current is Eq. (58) $$ \int dx dy \,j_w = \frac{C_2 N_{xy}}{2\pi}E_z$$ where $N_{xy}=\int dx dy \, B_z / 2 \pi$. It says that the number of flux quanta $N_{xy}$ is always quantized to be an integer. It may be a trivial question. Why is it quantized? Is this related to the periodic boundary conditions in the $x,y$ directions?
In the integer quantum Hall effect geometry, the magnetic flux through a two-dimensional surface on which the electrons are confined to the lowest Landau level (LLL) should be quantized, even if the surface is noncompact. The reason is that in very strong magnetic fields, in the dynamics projected to the lowest Landau level, the coordinates become noncommutative:
$$[x, y] = i \frac{\hbar c}{e B_z}$$
Please see for example the following work by Richard Szabo (equation 14). The density of states per unit area of this system is just $(2 \pi)^{-1}$ times the reciprocal of the right hand side:
$$\rho =(2 \pi)^{-1} \frac{ e B_z }{ \hbar c}$$
Thus the number of states on the surface is given by:
$$N = \int \rho dxdy$$ Since this number counts states in quantum theory, it should be quantized. (Qi, Hughes and Zhang very briefly mention this argument on page 15 after equation 70).
(This argument is completely analogous to the quantization of the number of states in the case of a free particle; Here we have $[x, p] = i \hbar$ and the number of states equal to the phase space volume in units of the Planck's constant: $N = \frac{\int dx dp}{2 \pi \hbar}$).
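Putting the pieces together (a consistency check using the formulas above), the state count is the flux in units of the flux quantum $\Phi_0 = hc/e$:

```latex
N \;=\; \int \rho \, dx\, dy
  \;=\; \frac{e}{2\pi\hbar c}\int B_z \, dx\, dy
  \;=\; \frac{\Phi}{hc/e}
  \;=\; \frac{\Phi}{\Phi_0}.
```

So requiring $N$ to be an integer is precisely the quantization of $N_{xy} = \int dx\,dy\, B_z/2\pi$ in the natural units ($\hbar = c = e = 1$) used by Qi, Hughes and Zhang.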
|
In set theory, we have the phenomenon of the
universal definition. This is a property $\phi(x)$, first-order expressible in the language of set theory, that necessarily holds of exactly one set, but which can in principle define any particular desired set that you like, if one should simply interpret the definition in the right set-theoretic universe. So $\phi(x)$ could be defining the set of real numbers $x=\mathbb{R}$ or the integers $x=\mathbb{Z}$ or the number $x=e^\pi$ or a certain group or a certain topological space or whatever set you would want it to be. For any mathematical object $a$, there is a set-theoretic universe in which $a$ is the unique object $x$ for which $\phi(x)$.
The universal definition can be viewed as a set-theoretic analogue of the universal algorithm, a topic on which I have written several recent posts:
Let’s warm up with the following easy instance.
Theorem. Any particular real number $r$ can become definable in a forcing extension of the universe.
Proof. By Easton’s theorem, we can control the generalized continuum hypothesis precisely on the regular cardinals, and if we start (by forcing if necessary) in a model of GCH, then there is a forcing extension where $2^{\aleph_n}=\aleph_{n+1}$ just in case the $n^{th}$ binary digit of $r$ is $1$. In the resulting forcing extension $V[G]$, therefore, the real $r$ is definable as: the real whose binary digits conform with the GCH pattern on the cardinals $\aleph_n$. QED
Since this definition can be settled in a rank-initial segment of the universe, namely, $V_{\omega+\omega}$, the complexity of the definition is $\Delta_2$. See my post on Local properties in set theory to see how I think about locally verifiable and locally decidable properties in set theory.
If we push the argument just a little, we can go beyond the reals.
Theorem. There is a formula $\psi(x)$, of complexity $\Sigma_2$, such that for any particular object $a$, there is a forcing extension of the universe in which $\psi$ defines $a$.
Proof. Fix any set $a$. By the axiom of choice, we may code $a$ with a set of ordinals $A\subset\kappa$ for some cardinal $\kappa$. (One well-orders the transitive closure of $\{a\}$ and thereby finds a bijection $\langle\mathop{tc}(\{a\}),\in\rangle\cong\langle\kappa,E\rangle$ for some $E\subset\kappa\times\kappa$, and then codes $E$ to a set $A$ by an ordinal pairing function. The set $A$ tells you $E$, which tells you $\mathop{tc}(\{a\})$ by the Mostowski collapse, and from this you find $a$.) By Easton’s theorem, there is a forcing extension $V[G]$ in which the GCH holds at all $\aleph_{\lambda+1}$ for a limit ordinal $\lambda<\kappa$, but fails at $\aleph_{\kappa+1}$, and such that $\alpha\in A$ just in case $2^{\aleph_{\alpha+2}}=\aleph_{\alpha+3}$ for $\alpha<\kappa$. That is, we manipulate the GCH pattern to exactly code both $\kappa$ and the elements of $A\subset\kappa$. Let $\psi(x)$ assert that $x$ is the set that is decoded by this process: look for the first stage where the GCH fails at $\aleph_{\lambda+2}$, and then extract the set $A$ of ordinals, and then check if $x$ is the set coded by $A$. The assertion $\psi(x)$ did not depend on $a$, and since it can be verified in any sufficiently large $V_\theta$, the assertion $\psi(x)$ has complexity $\Sigma_2$. QED
Let’s try to make a better universal definition. As I mentioned at the outset, I have been motivated to find a set-theoretic analogue of the universal algorithm, and in that computable context, we had a universal algorithm that could not only produce any desired finite set, when run in the right universe, but which furthermore had a robust interaction between models of arithmetic and their top-extensions: any set could be extended to any other set for which the algorithm enumerated it in a taller universe. Here, I’d like to achieve the same robustness of interaction with the universal definition, as one moves from one model of set theory to a taller model. We say that one model of set theory $N$ is a top-extension of another $M$, if all the new sets of $N$ have rank totally above the ranks occurring in $M$. Thus, $M$ is a rank-initial segment of $N$. If there is a least new ordinal $\beta$ in $N\setminus M$, then this is equivalent to saying that $M=V_\beta^N$.
Theorem. There is a formula $\phi(x)$, such that:

1. In any model of ZFC, there is a unique set $a$ satisfying $\phi(a)$.
2. For any countable model $M\models\text{ZFC}$ and any $a\in M$, there is a top-extension $N$ of $M$ such that $N\models \phi(a)$.
Thus, $\phi(x)$ is the universal definition: it always defines some set, and that set can be any desired set, even when moving from a model $M$ to a top-extension $N$.
Proof. The previous manner of coding will not achieve property 2, since the GCH pattern coding started immediately, and so it would be preserved to any top extension. What we need to do is to place the coding much higher in the universe, so that in the top extension $N$, it will occur in the part of $N$ that is totally above $M$.
But consider the following process. In any model of set theory, let $\phi(x)$ assert that $x$ is the empty set unless the GCH holds at all sufficiently large cardinals, and indeed $\phi(x)$ is false unless there is a cardinal $\delta$ and ordinal $\gamma<\delta^+$ such that the GCH holds at all cardinals above $\aleph_{\delta+\gamma}$. In this case, let $\delta$ be the smallest such cardinal for which that is true, and let $\gamma$ be the smallest ordinal working with this $\delta$. So both $\delta$ and $\gamma$ are definable. Now, let $A\subset\gamma$ be the set of ordinals $\alpha$ for which the GCH holds at $\aleph_{\delta+\alpha+1}$, and let $\phi(x)$ assert that $x$ is the set coded by the set $A$.
It is clear that $\phi(x)$ defines a unique set, in any model of ZFC, and so (1) holds. For (2), suppose that $M$ is a countable model of ZFC and $a\in M$. It is a fact that every countable model of ZFC has a top-extension, by the definable ultrapower method. Let $N_0$ be a top extension of $M$. Let $N=N_0[G]$ be a forcing extension of $N_0$ in which the set $a$ is coded into the GCH pattern very high up, at cardinals totally above $M$, and such that the GCH holds above this coding, in such a way that the process described in the previous paragraph would define exactly the set $a$. So $\phi(a)$ holds in $N$, which is a top-extension of $M$ as no new sets of small rank are added by the forcing. So statement (2) also holds.
QED
The complexity of the definition is $\Pi_3$, mainly because in order to know where to look for the coding, one needs to know the ordinals $\delta$ and $\gamma$, and so one needs to know that the GCH always holds above that level. This is a $\Pi_3$ property, since it cannot be verified locally only inside some $V_\theta$.
A stronger analogue with the universal algorithm — and this is a question that motivated my thinking about this topic — would be something like the following:
Question. Is there a $\Sigma_2$ formula $\varphi(x)$, that is, a locally verifiable property, with the following properties?

1. In any model of ZFC, the class $\{x\mid\varphi(x)\}$ is a set.
2. It is consistent with ZFC that $\{x\mid\varphi(x)\}$ is empty.
3. For any countable model $M\models\text{ZFC}$ in which $\{x\mid\varphi(x)\}=a$ and any set $b\in M$ with $a\subset b$, there is a top-extension $N$ of $M$ in which $\{x\mid\varphi(x)\}=b$.
An affirmative answer would be a very strong analogue with the universal algorithm and Woodin’s theorem about which I wrote previously. The idea is that the $\Sigma_2$ properties $\varphi(x)$ in set theory are analogous to the computably enumerable properties in computability theory. Namely, to verify that an object has a certain computably enumerable property, we run a particular computable process and then sit back, waiting for the process to halt, until a stage of computation arrives at which the property is verified. Similarly, in set theory, to verify that a set has a particular $\Sigma_2$ property, we sit back watching the construction of the cumulative set-theoretic universe, until a stage $V_\beta$ arrives that provides verification of the property. This is why in statement (3) we insist that $a\subset b$, since the $\Sigma_2$ properties are always upward absolute to top-extensions; once an object is placed into $\{x\mid\varphi(x)\}$, then it will never be removed as one makes the universe taller.
So the hope was that we would be able to find such a universal $\Sigma_2$ definition, which would serve as a set-theoretic analogue of the universal algorithm used in Woodin’s theorem.
If one drops the first requirement, and allows $\{x\mid \varphi(x)\}$ to sometimes be a proper class, then one can achieve a positive answer as follows.
Theorem. There is a $\Sigma_2$ formula $\varphi(x)$ with the following properties.

1. If the GCH holds, then $\{x\mid\varphi(x)\}$ is empty.
2. For any countable model $M\models\text{ZFC}$ where $a=\{x\mid \varphi(x)\}$ and any $b\in M$ with $a\subset b$, there is a top extension $N$ of $M$ in which $N\models\{x\mid\varphi(x)\}=b$.
Proof. Let $\varphi(x)$ assert that the set $x$ is coded into the GCH pattern. We may assume that the coding mechanism of a set is marked off by certain kinds of failures of the GCH at odd-indexed alephs, with the pattern at intervening even-indexed regular cardinals forming the coding pattern. This is $\Sigma_2$, since any large enough $V_\theta$ will reveal whether a given set $x$ is coded in this way. And because of the manner of coding, if the GCH holds, then no set is coded. Also, if the GCH holds eventually, then only a set-sized collection is coded. Finally, any countable model $M$ where only a set is coded can be top-extended to another model $N$ in which any desired superset of that set is coded. QED
Update. Originally, I had proposed an argument for a negative answer to the question, and I was actually a bit disappointed by that, since I had hoped for a positive answer. However, it now seems to me that the argument I had written is wrong, and I am grateful to Ali Enayat for his remarks on this in the comments. I have now deleted the incorrect argument.
Meanwhile, here is a positive answer to the question in the case of models of $V\neq\newcommand\HOD{\text{HOD}}\HOD$.
Theorem. There is a $\Sigma_2$ formula $\varphi(x)$ with the following properties:

1. In any model of $\newcommand\ZFC{\text{ZFC}}\ZFC+V\neq\HOD$, the class $\{x\mid\varphi(x)\}$ is a set.
2. It is relatively consistent with $\ZFC$ that $\{x\mid\varphi(x)\}$ is empty; indeed, in any model of $\ZFC+\newcommand\GCH{\text{GCH}}\GCH$, the class $\{x\mid\varphi(x)\}$ is empty.
3. If $M\models\ZFC$ thinks that $a=\{x\mid\varphi(x)\}$ is a set and $b\in M$ is a larger set with $a\subset b$, then there is a top-extension $N$ of $M$ in which $\{x\mid \varphi(x)\}=b$.
Proof. Let $\varphi(x)$ hold, if there is some ordinal $\alpha$ such that every element of $V_\alpha$ is coded into the GCH pattern below some cardinal $\delta_\alpha$, with $\delta_\alpha$ as small as possible with that property, and $x$ is the next set coded into the GCH pattern above $\delta_\alpha$. This is a $\Sigma_2$ property, since it can be verified in any sufficiently large $V_\theta$.
In any model of $\ZFC+V\neq\HOD$, there must be some sets that are not coded into the $\GCH$ pattern, for if every set were coded that way, then there would be a definable well-ordering of the universe and we would have $V=\HOD$. So in any model of $V\neq\HOD$, there is a bound on the ordinals $\alpha$ for which $\delta_\alpha$ exists, and therefore $\{x\mid\varphi(x)\}$ is a set. So statement (1) holds.
Statement (2) holds, because we may arrange it so that the GCH itself implies that no set is coded at all, and so $\varphi(x)$ would always fail.
For statement (3), suppose that $M\models\ZFC+\{x\mid\varphi(x)\}=a\subseteq b$ and $M$ is countable. In $M$, there must be some minimal rank $\alpha$ for which there is a set of rank $\alpha$ that is not coded into the GCH pattern. Let $N$ be an elementary top-extension of $M$, so $N$ agrees that $\alpha$ is that minimal rank. Now, by forcing over $N$, we can arrange to code all the sets of rank $\alpha$ into the GCH pattern above the height of the original model $M$, and we can furthermore arrange so as to code any given element of $b$ just above that coding. And so on, we can iterate it so as to arrange the coding above the height of $M$ so that exactly the elements of $b$ now satisfy $\varphi(x)$, but no more. In this way, we will ensure that $N\models\{x\mid\varphi(x)\}=b$, as desired.
QED
I find the situation unusual, in that often results from the models-of-arithmetic context generalize to set theory with models of $V=\HOD$, because the global well-order means that models of $V=\HOD$ have definable Skolem functions, which is true in every model of arithmetic and which sometimes figures implicitly in constructions. But here, we have the result of Woodin’s theorem generalizing from models of arithmetic to models of $V\neq\HOD$. Perhaps this suggests that we should expect a fully positive solution for models of set theory.
Further update. Woodin and I have now established the fully general result of the universal finite set, which subsumes much of the preliminary early analysis that I had earlier made in this post. Please see my post, The universal finite set.
|
The useful properties of kernel SVM are not universal - they depend on the choice of kernel. To get intuition it's helpful to look at one of the most commonly used kernels, the Gaussian kernel. Remarkably, this kernel turns SVM into something very much like a k-nearest neighbor classifier.
This answer explains the following:
1. Why perfect separation of positive and negative training data is always possible with a Gaussian kernel of sufficiently small bandwidth (at the cost of overfitting).
2. How this separation may be interpreted as linear in a feature space.
3. How the kernel is used to construct the mapping from data space to feature space. Spoiler: the feature space is a very mathematically abstract object, with an unusual abstract inner product based on the kernel.

1. Achieving perfect separation
Perfect separation is always possible with a Gaussian kernel because of the kernel's locality properties, which lead to an arbitrarily flexible decision boundary. For sufficiently small kernel bandwidth, the decision boundary will look like you just drew little circles around the points whenever they are needed to separate the positive and negative examples:
(Credit: Andrew Ng's online machine learning course).
So, why does this occur from a mathematical perspective?
Consider the standard setup: you have a Gaussian kernel $K(\mathbf{x},\mathbf{z}) = \exp(- ||\mathbf{x}-\mathbf{z}||^2 / \sigma^2)$ and training data $(\mathbf{x}^{(1)},y^{(1)}), (\mathbf{x}^{(2)},y^{(2)}), \ldots, (\mathbf{x}^{(n)},y^{(n)})$ where the $y^{(i)}$ values are $\pm 1$. We want to learn a classifier function
$$\hat{y}(\mathbf{x}) = \sum_i w_i y^{(i)} K(\mathbf{x}^{(i)},\mathbf{x})$$
Now how will we ever assign the weights $w_i$? Do we need infinite dimensional spaces and a quadratic programming algorithm? No, because I just want to show that I can separate the points perfectly. So I make $\sigma$ a billion times smaller than the smallest separation $||\mathbf{x}^{(i)} - \mathbf{x}^{(j)}||$ between any two training examples, and I just set $w_i = 1$. This means that all the training points are a billion sigmas apart as far as the kernel is concerned, and each point completely controls the sign of $\hat{y}$ in its neighborhood. Formally, we have
$$ \hat{y}(\mathbf{x}^{(k)})= \sum_{i=1}^n y^{(i)} K(\mathbf{x}^{(i)},\mathbf{x}^{(k)})= y^{(k)} K(\mathbf{x}^{(k)},\mathbf{x}^{(k)}) + \sum_{i \neq k} y^{(i)} K(\mathbf{x}^{(i)},\mathbf{x}^{(k)})= y^{(k)} + \epsilon$$
where $\epsilon$ is some arbitrarily tiny value. We know $\epsilon$ is tiny because $\mathbf{x}^{(k)}$ is a billion sigmas away from any other point, so for all $i \neq k$ we have
$$K(\mathbf{x}^{(i)},\mathbf{x}^{(k)}) = \exp(- ||\mathbf{x}^{(i)} - \mathbf{x}^{(k)}||^2 / \sigma^2) \approx 0.$$
Since $\epsilon$ is so small, $\hat{y}(\mathbf{x}^{(k)})$ definitely has the same sign as $y^{(k)}$, and the classifier achieves perfect accuracy on the training data. In practice this would overfit terribly, but it shows the tremendous flexibility of the Gaussian kernel SVM, and how it can act very similarly to a nearest neighbor classifier.
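The argument is easy to check numerically. The sketch below uses made-up data, sets all $w_i = 1$, and takes `sigma` a mere tenth of the smallest separation (rather than a literal billionth); that is already enough to recover every training label:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))                 # 20 made-up training points in R^2
y = np.where(rng.random(20) < 0.5, 1, -1)    # arbitrary +/-1 labels

# Smallest separation between distinct training points.
d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
sigma = d.min() / 10.0                       # bandwidth far below any separation

def y_hat(x):
    k = np.exp(-np.sum((X - x) ** 2, axis=1) / sigma ** 2)
    return float(np.sum(y * k))              # all w_i = 1

pred = np.array([np.sign(y_hat(x)) for x in X])
assert np.array_equal(pred, y)               # perfect accuracy on training data
```

At each training point the self-term contributes $y^{(k)}$ while every other term is suppressed by a factor of at most $e^{-100}$, exactly the $y^{(k)} + \epsilon$ decomposition above.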
2. Kernel SVM learning as linear separation
The fact that this can be interpreted as "perfect linear separation in an infinite dimensional feature space" comes from the kernel trick, which allows you to interpret the kernel as an abstract inner product in some new feature space:
$$K(\mathbf{x}^{(i)},\mathbf{x}^{(j)}) = \langle\Phi(\mathbf{x}^{(i)}),\Phi(\mathbf{x}^{(j)})\rangle$$
where $\Phi(\mathbf{x})$ is the mapping from the data space into the feature space. It follows immediately that $\hat{y}(\mathbf{x})$ is a linear function in the feature space:
$$ \hat{y}(\mathbf{x}) = \sum_i w_i y^{(i)} \langle\Phi(\mathbf{x}^{(i)}),\Phi(\mathbf{x})\rangle = L(\Phi(\mathbf{x}))$$
where the linear function $L(\mathbf{v})$ is defined on feature space vectors $\mathbf{v}$ as
$$ L(\mathbf{v}) = \sum_i w_i y^{(i)} \langle\Phi(\mathbf{x}^{(i)}),\mathbf{v}\rangle$$
This function is linear in $\mathbf{v}$ because it's just a linear combination of inner products with fixed vectors. In the feature space, the decision boundary $\hat{y}(\mathbf{x}) = 0$ is just $L(\mathbf{v}) = 0$, the level set of a linear function. This is the very definition of a hyperplane in the feature space.
3. How the kernel is used to construct the feature space
Kernel methods never actually "find" or "compute" the feature space or the mapping $\Phi$ explicitly. Kernel learning methods such as SVM do not need them to work; they only need the kernel function $K$. It is possible to write down a formula for $\Phi$ but the feature space it maps to is quite abstract and is only really used for proving theoretical results about SVM. If you're still interested, here's how it works.
Basically we define an abstract vector space $V$ where each vector is a function from $\mathcal{X}$ to $\mathbb{R}$. A vector $f$ in $V$ is a function formed from a finite linear combination of kernel slices:

$$f(\mathbf{x}) = \sum_{i=1}^n \alpha_i K(\mathbf{x}^{(i)},\mathbf{x})$$

(Here the $\mathbf{x}^{(i)}$ are just an arbitrary set of points and need not be the same as the training set.) It is convenient to write $f$ more compactly as

$$f = \sum_{i=1}^n \alpha_i K_{\mathbf{x}^{(i)}}$$

where $K_\mathbf{x}(\mathbf{y}) = K(\mathbf{x},\mathbf{y})$ is a function giving a "slice" of the kernel at $\mathbf{x}$.
The inner product on the space is not the ordinary dot product, but an abstract inner product based on the kernel:
$$\langle \sum_{i=1}^n \alpha_i K_{\mathbf{x}^{(i)}},\sum_{j=1}^n \beta_j K_{\mathbf{x}^{(j)}} \rangle = \sum_{i,j} \alpha_i \beta_j K(\mathbf{x}^{(i)},\mathbf{x}^{(j)})$$
This definition is very deliberate: its construction ensures the identity we need for linear separation, $\langle \Phi(\mathbf{x}), \Phi(\mathbf{y}) \rangle = K(\mathbf{x},\mathbf{y})$.
With the feature space defined in this way, $\Phi$ is a mapping $\mathcal{X} \rightarrow V$, taking each point $\mathbf{x}$ to the "kernel slice" at that point:
$$\Phi(\mathbf{x}) = K_\mathbf{x}, \quad \text{where} \quad K_\mathbf{x}(\mathbf{y}) = K(\mathbf{x},\mathbf{y}). $$
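As a small numerical illustration (with arbitrary points, not from the text): representing a vector of $V$ as a list of $(\alpha_i, \mathbf{x}^{(i)})$ pairs, the abstract inner product of two kernel slices reproduces the kernel value, i.e. $\langle \Phi(\mathbf{x}), \Phi(\mathbf{z})\rangle = K(\mathbf{x},\mathbf{z})$:

```python
import numpy as np

def K(x, z, sigma=1.0):
    """Gaussian kernel, as in the answer."""
    return float(np.exp(-np.sum((x - z) ** 2) / sigma ** 2))

# A vector f = sum_i alpha_i K_{x_i} in V, stored as a list of (alpha, x) pairs.
def inner(f, g):
    """Abstract inner product <f, g> = sum_{i,j} alpha_i beta_j K(x_i, x_j)."""
    return sum(a * b * K(x, z) for a, x in f for b, z in g)

x = np.array([0.0, 1.0])
z = np.array([2.0, -1.0])
Phi_x = [(1.0, x)]                 # Phi(x) = K_x: a single kernel slice
Phi_z = [(1.0, z)]
assert abs(inner(Phi_x, Phi_z) - K(x, z)) < 1e-12
```

This is exactly the identity the deliberate definition of the inner product was designed to produce.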
You can prove that $V$ is an inner product space when $K$ is a positive definite kernel. See this paper for details.
|
A note on global existence to a higher-dimensional quasilinear chemotaxis system with consumption of chemoattractant
1.
School of Mathematics and Statistics Science, Ludong University, Yantai 264025, China
2.
School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China
$\left\{ \begin{array}{l}{u_t} = \nabla \cdot (D(u)\nabla u) - \nabla \cdot (u\nabla v),\;\;\;\;x \in \Omega ,t > 0,\\{v_t} = \Delta v - uv,\;\;\;\;\;x \in \Omega ,t > 0,\end{array} \right. \tag{KS}\label{KS} $
where $Ω\subset \mathbb{R}^N$ ($N≥2$) is a bounded domain and the diffusion function satisfies

$D(u)≥ C_D(u+1)^{m-1}~~ \mbox{for all}~~ u≥0~~\mbox{with some}~~ m > 1~~\mbox{and}~~ C_D>0.$

Global bounded solutions are obtained whenever $m >\frac{3N}{2N+2}$. For $N≥3$ this improves the condition $m>2-\frac{6}{N+4}$ from earlier work on this system (for $N= 3$ the latter reads $m > \frac{8}{7}$), while for $N=2$ any $m>1$ suffices.

Mathematics Subject Classification: 35K55, 35Q92, 35Q35, 92C1.

Citation: Jiashan Zheng, Yifu Wang. A note on global existence to a higher-dimensional quasilinear chemotaxis system with consumption of chemoattractant. Discrete & Continuous Dynamical Systems - B, 2017, 22 (2): 669-686. doi: 10.3934/dcdsb.2017032
|
Abstracts

Andreas Bjorklund, Lund University

Title: Faster 3-coloring for diameter two

Abstract:
The complexity of 3-coloring an n-vertex graph of diameter two is unknown. That is, its decision version is not known to be NP-complete, yet the fastest known algorithms are far from polynomial. Mertzios and Spirakis [http://arxiv.org/abs/1202.4665] recently gave an algorithm that runs in $2^{O(\sqrt{n\log n})}$ time.
Given the similarity to the runtime of the fastest known graph isomorphism algorithm, they speculated that the two problems might be related. While that still may be, we break this apparent runtime similarity:
First, we present an algorithm for 3-coloring an n-vertex graph of diameter two that runs in $2^{O(\sqrt{n})}$ time.
Second, for many instances that meet the upper time bound for the algorithm above, we describe another algorithm that runs in $2^{O(n^{1/3+\epsilon}\log^2 n)}$ time. For any fixed $\epsilon>0$ it works for all graphs such that no pair of vertices have more than $n^{\epsilon}$ neighbors in common.
------------------------------------------------------------------
Marcus Brazil, University of Melbourne (visiting DIKU)
Title: Maximising the Lifetime of Robust Wireless Sensor Networks through Strategic Relay Placement

Abstract:
In network design, a bottleneck can be any node or link at which a performance objective attains its least desirable value. For example, in wireless sensor networks a key objective is to optimise the lifetime of individual nodes. Nodes communicating over large distances utilise the most energy and therefore die first due to battery depletion. Consequently these nodes are considered to be bottlenecks of the network, and can be identified as nodes incident to the longest links in the transmission network. Equivalently we may consider the longest links to be the bottlenecks, and this is the approach taken here.
This talk considers the problem of devising an efficient algorithm for adding a fixed number of relays to a sparse Wireless Sensor Network so that the longest "bottleneck" link in the associated transmission network is minimised. The network should also be robust, in the sense that the unexpected death of a single node will not disconnect the remainder of the network. This can be modelled as an optimisation problem known as the bottleneck 2-connected Steiner network problem. We show that this problem can be exactly and efficiently solved using the properties of combinatorial structures such as 2-relative neighbourhood graphs and farthest colour Voronoi diagrams. More generally, this talk demonstrates the important role graph theory and geometry can play in industrial optimisation problems such as the efficient design of wireless sensor networks.
-------------------------------------------------------------------
Bengt Nilsson, Malmoe University
Title:
Combinatorial Methods for Optimizing Web Services
Abstract
We look at the problem of optimizing web services such as e-commerce sites. We consider in particular a combinatorial solution to this optimization based on the well-known Set Cover problem.
We exhibit test results on real data sets showing a 5-10% increase in conversion rate using the Set Cover optimization method in comparison to the standard best seller "top list".
--------------------------------------------------------------------
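The abstract does not spell out the method; as a generic illustration (not taken from the talk), the classic greedy heuristic for Set Cover can be sketched as:

```python
def greedy_set_cover(universe, subsets):
    # Classic greedy heuristic: repeatedly pick the subset covering the
    # most still-uncovered elements; gives an O(log n) approximation.
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("universe is not coverable by the given subsets")
        chosen.append(best)
        uncovered -= best
    return chosen
```

For example, `greedy_set_cover({1,2,3,4,5}, [{1,2,3},{2,4},{3,4},{4,5}])` first picks {1,2,3} and then {4,5}.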
Carsten Witt, Technical University of Denmark, Kgs. Lyngby
Title:
How to analyze evolutionary algorithms?
Abstract:
Evolutionary algorithms and other randomized search heuristics are successfully applied to solve optimization problems, but their theoretical foundation is still in its infancy. Even extremely simplified variants of evolutionary algorithms can be very hard to analyze since the algorithm designer did not have an analysis in mind.
In this talk, we focus on a simplified evolutionary algorithm called (1+1) EA and, as a benchmark problem, we study how fast it optimizes linear functions. At first glance, the problem looks innocent, and it is tempting to believe that the underlying stochastic process can be formulated using a classical coupon collector analysis. This is not the case. Despite technically involved analyses carried out over the last decade, there was until recently a gap of 39% between best known upper and lower bounds on the expected optimization time of the algorithm.
In the talk, we present drift analysis as a tool to close the gap and determine the expected optimization time exactly for the whole class of linear functions. This result has very strong implications. For the first time, it proves the optimality of the so-called mutation probability 1/n (n being the problem dimension), which is most often recommended by practitioners. Moreover, it identifies the artificial-looking (1+1) EA as an optimal algorithm within the large class of mutation-based algorithms, where different population sizes and mutation probabilities may be used. Finally, we learn from the analysis that the stochastic search induced by the algorithm is surprisingly robust since its ability to explore a large neighborhood does not disrupt the search.
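For concreteness, a minimal sketch of the (1+1) EA with mutation probability 1/n, run here on the linear function OneMax (the choice of benchmark function is an illustrative assumption):

```python
import random

def one_plus_one_ea(n, f, f_opt, rng, max_iters=200000):
    # (1+1) EA: flip each bit of the current search point independently
    # with probability 1/n; accept the offspring if it is no worse.
    x = [rng.randint(0, 1) for _ in range(n)]
    for t in range(1, max_iters + 1):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        if f(y) >= f(x):
            x = y
        if f(x) == f_opt:
            return t  # iterations until the optimum was found
    return None

# OneMax: the linear function counting the number of one-bits.
rng = random.Random(1)
iterations = one_plus_one_ea(32, sum, 32, rng)
```

Drift analysis, as in the talk, bounds the expected value of such a run for every linear function, not just OneMax.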
----------------------------------------------------------------------------------------------
Magnus Find, Univ. of Southern Denmark, Odense
Title:
Trade-off between Multiplicative and Gate Complexity
Abstract
We study the relationship between the number of AND and XOR gates in XOR-AND circuits. We give a new construction that shows that arbitrary Boolean functions can be computed using $(1+o(1))2^n/n$ XOR/AND gates.
We establish that if the multiplicative complexity of a function is $M$, the gate complexity is not larger than $6M^2/\log M$, and this bound is asymptotically tight. We use this to show that almost every function can be implemented by XOR-AND circuits with optimal multiplicative complexity and almost optimal gate complexity simultaneously. Every symmetric function can be implemented with a circuit that is at most a constant factor larger than the optimum with respect to both the multiplicative and gate complexity. The computational complexity of determining the multiplicative complexity of a Boolean function is unknown.
We prove that the restricted problem of determining if the function computed by a circuit is affine is complete for co-NP.
----------------------------------------------------------------------------------------------------
Hjalte Wedel Vildhøj, Technical University of Denmark, Kgs. Lyngby
Title:
Time-Space Trade-Offs for Longest Common Extensions
Abstract
We revisit the longest common extension (LCE) problem, that is, preprocess a string T into a compact data structure that supports fast LCE queries. An LCE query takes a pair (i,j) of indices in T and returns the length of the longest common prefix of the suffixes of T starting at positions i and j.
We study the time-space trade-offs for the problem, that is, the space used for the data structure vs. the worst- case time for answering an LCE query.
Let n be the length of T. Given a parameter τ, 1 ≤ τ ≤ n, we show how to achieve either O(n/√τ) space and O(τ) query time, or O(n/τ) space and O(τlog(|LCE(i,j)|/τ)) query time, where |LCE(i, j)| denotes the length of the LCE returned by the query. These bounds provide the first smooth trade-offs for the LCE problem and almost match the previously known bounds at the
extremes when τ = 1 or τ = n. We apply the result to obtain improved bounds for several applications where the LCE problem is the computational bottleneck, including approximate string matching, computing tandem repeats, and computing palindromes. Finally, we also present an efficient technique to reduce LCE queries on two strings to LCE queries on one string.
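For concreteness, the query being preprocessed can be answered with no data structure at all in O(|LCE(i,j)|) time (a naive baseline, not the paper's structure):

```python
def lce(T, i, j):
    # Length of the longest common prefix of the suffixes T[i:] and T[j:].
    k = 0
    while i + k < len(T) and j + k < len(T) and T[i + k] == T[j + k]:
        k += 1
    return k
```

For example, `lce("banana", 1, 3)` compares the suffixes "anana" and "ana" and returns 3. The trade-offs above interpolate between such slow queries with little space and constant-time queries from a full precomputed structure.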
----------------------------------------------------------------------------
Rasmus Fonseca, Univ. of Copenhagen
Title:
Determining Protein Structures with beta-Sheet Enumeration and Branch-and-Bound
Abstract
Proteins are long chains of amino acids that always fold to the same native structure. Predicting this native structure from the amino acid sequence alone is one of the grand challenges of computational biology which is still essentially unsolved. The problem can be stated as an optimization problem where the Gibbs free energy of the protein is minimized.
Simplified versions of the protein structure prediction problem have been shown to be NP-complete, so metaheuristics are widely applied. We have worked on designing representations of protein structures where optimal solutions are not only achievable but also practically useful. This is done by enumerating long-range contacts between amino acids and then using a branch-and-bound implementation.
Long range contacts are represented by $\beta$-pairings consisting of $\beta$-strand segments that form pairs in a ladder-like structure. Although the correct $\beta$-pairing can not be reliably predicted, a small set of predictions can be made that is guaranteed to contain the correct one. The branch-and-bound method first fixes the structure of $\beta$-sheets using one of the predicted pairings and then proceeds to place the remaining loop segments.
--------------------------------------------------------------------------------
Christian Wulff-Nilsen, Univ. of Southern Denmark, Odense
Title:
Multiple Single-Source Single-Sink Max Flows in Planar Digraphs
Abstract
Let $G = (V,E)$ be a planar $n$-vertex digraph. Consider the problem of computing max $st$-flow values in $G$ from a fixed source $s$ to all sinks $t\in V\setminus\{s\}$. We show how to solve this problem in near-linear $O(n\log^3n)$ time. Previously, nothing better was known than running a single-source single-sink max flow algorithm $n-1$ times, giving a total time bound of $O(n^2\log n)$ with the algorithm of Borradaile and Klein. An important implication is that all-pairs max $st$-flow values in $G$ can be computed in near-quadratic time. This is close to optimal as the output size is $\Theta(n^2)$.
We also give a quadratic lower bound on the number of distinct max flow values. This distinguishes the problem from the undirected case where the number of distinct max flow values is $O(n)$. Joint work with Jakub Lacki, Yahav Nussbaum, and Piotr Sankowski.
----------------------------------------------------------------------------
Rolf Fagerberg, Univ. of Southern Denmark, Odense
Title:
De-amortizing Binary Search Trees
Abstract
We give a general method for de-amortizing essentially any Binary Search Tree (BST) algorithm. In particular, used on Splay Trees, the method produces a BST that has the same asymptotic cost as Splay Trees on any access sequence while performing each search in $O(\log n)$ worst case time.
Used on Multi-Splay Trees, it produces a BST that is $O(\log \log n)$ competitive, satisfies the scanning theorem, the static optimality theorem, the static finger theorem, the working set theorem, and performs each search in $O(\log n)$ worst case time. Finally, if a dynamically optimal BST algorithm exists, the method implies the existence of a dynamically optimal BST algorithm answering every search in $O(\log n)$ worst case time.
Joint work with Prosenjit Bose, Sébastien Collette, and Stefan Langerman.
|
The CIR process is given by the SDE $$ \mathrm dr_t = \theta(\mu-r_t)\mathrm dt + \sigma\sqrt{r_t}\mathrm dW_t $$ where $W_t$ is a Brownian motion. I am interested in finite-difference schemes for simulating trajectories of this process; for example I tried the Euler-Maruyama scheme $$ r_{t+\Delta t} \approx r_t + \theta(\mu - r_t)\Delta t + \sigma\sqrt{r_t}\xi_t\sqrt{\Delta t}, \quad \xi_t\sim\mathscr N(0,1) $$ but when I make $\Delta t$ smaller and smaller, the results do not seem nice. In fact, I am also interested in more general simulation techniques for similar kinds of processes. Any suggestions?
1. weighted Milstein Scheme
We assume $\{X_t\}_{t\geq0}$ is described by the following stochastic differential equation $$dX_t=\mu(t,X_t)dt+\sigma(t,X_t)dW_t\,\,\,\,\,\,\,\,\,\,\,\,\,(1)$$ Under the Ito version of this scheme, Equation $(1)$ becomes $$X_{t+\Delta t}=X_t+[\alpha\,\mu(t,X_t)+(1-\alpha)\mu(t+\Delta t,X_{t+\Delta t})]\Delta t+\sigma(t,X_t)\sqrt{\Delta t}\,Z+\frac{1}{2}\sigma(t,X_t)\sigma'(t,X_t)\Delta t(Z^2-1)$$ where $0\leq\alpha\leq1$ is the weight and $Z$ is a standard normal random variable. By application of the weighted Milstein scheme to the CIR model, $$dr_t=\kappa(\theta-r_t)dt+\sigma\sqrt{r_t}dW_t$$ we have $${{r}_{t+\Delta t}}=\frac{{{r}_{t}}+\kappa (\theta -\alpha\,{{r}_{t}})\Delta t+\sigma \sqrt{{{r}_{t}}}\sqrt{\Delta t}\,{{Z}}+\frac{1}{4}{{\sigma }^{2}}\Delta t({{Z}}^{2}-1)}{1+(1-\alpha )\kappa \,\Delta t}$$
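A minimal NumPy sketch of this CIR update (the `max(r, 0)` guard inside the square root is my own safeguard, not part of the scheme as written):

```python
import numpy as np

def weighted_milstein_cir_step(r, kappa, theta, sigma, dt, z, alpha=0.5):
    # One step of the weighted (drift-implicit) Milstein scheme for CIR,
    # following the displayed update; r is clipped at 0 before the sqrt
    # as a guard against negative values (an assumption, see lead-in).
    rp = np.maximum(r, 0.0)
    num = (r + kappa * (theta - alpha * r) * dt
           + sigma * np.sqrt(rp * dt) * z
           + 0.25 * sigma**2 * dt * (z * z - 1.0))
    return num / (1.0 + (1.0 - alpha) * kappa * dt)
```

With $\alpha=1$ this reduces to an explicit Milstein step; $\alpha<1$ puts part of the mean reversion into the denominator, which damps the step for large $\kappa\,\Delta t$.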
2. Balanced Implicit Scheme
This scheme is able to preserve positivity of the variance process. It is defined in Platen and Heath as $${{r}_{t+\Delta t}}=\frac{{{r}_{t}}(1+C(t,r_t))+\kappa (\theta -{{r}_{t}})\Delta t+\sigma \sqrt{{{r}_{t}}}\sqrt{\Delta t}\,{{Z}}}{1+C(t,r_t)}$$ where $$C(t,r_t)=\kappa \,\Delta t+\frac{\sigma \sqrt{\Delta t}\,|Z|}{\sqrt{r_t}}$$
3. Pathwise Adapted Linearization Quadratic
Its convergence is fast, especially for small values of $\sigma$. The discretization scheme is given by $${{r}_{t+\Delta t}}=r_t+(\kappa (\tilde{\theta} -r_t)+\sigma\beta_n\sqrt{r_t}\,)\left(1+\frac{\sigma\beta_n-2\kappa\sqrt{r_t}}{4\sqrt{r_t}}\Delta t\right)\Delta t$$ where $\beta_n=\frac{Z}{\sqrt{\Delta t}}$ and $\tilde{\theta}=\theta-\frac{\sigma^2}{4\kappa}$. This scheme is presented in Kahl and Jäckel.
4. Quadratic-exponential scheme
This scheme, due to Andersen, matches the first two moments of the exact (noncentral chi-squared) transition, using a quadratic Gaussian approximation for large values of $r_t$ and an exponential-type approximation for small values.
There are a lot of methods for simulating such a process; the real problem here is to preserve positivity of the next simulated step, as the Gaussian increment might result in a negative value and then an undefined value for the next "square-root" step.
An approach that might be suitable to your more general needs is the following, where a "consistent-domain" Markov chain approach is used: "Labbé, Remillard, Renaud - A Simple Discretization Scheme for Nonnegative Diffusion Processes, with Applications to Option Pricing".
There are many other methods to sample from this process, search for "Heston model simulation" and you should find all you need.
Best regards
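Beyond these discretizations, the CIR transition density is known in closed form (a scaled noncentral chi-squared), so the process can also be sampled exactly. A sketch with NumPy, using the question's parametrization $dr_t=\theta(\mu-r_t)dt+\sigma\sqrt{r_t}dW_t$ (the function and its name are my own):

```python
import numpy as np

def cir_exact(r0, theta, mu, sigma, T, n_paths, rng):
    # Exact simulation: r_T | r_0 = c * noncentral chi-squared(df, nonc),
    # so positivity holds by construction and there is no discretization bias.
    c = sigma**2 * (1.0 - np.exp(-theta * T)) / (4.0 * theta)
    df = 4.0 * theta * mu / sigma**2
    nonc = r0 * np.exp(-theta * T) / c
    return c * rng.noncentral_chisquare(df, nonc, size=n_paths)

rng = np.random.default_rng(0)
paths = cir_exact(r0=0.03, theta=1.0, mu=0.05, sigma=0.2, T=1.0,
                  n_paths=200_000, rng=rng)
# The sample mean should approach E[r_T] = mu + (r0 - mu) * exp(-theta * T).
```

Because the transition is sampled directly, a whole path can be generated with one draw per time step, at any step size.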
You can find some implementations in the open-source Python library: https://github.com/AlexandreMoulti/bachelier
Your contributions would be very welcome.
|
Correction Open Access Published: Correction to: Some generalizations for \((\alpha-\psi,\phi)\)-contractions in b-metric-like spaces and an application. Fixed Point Theory and Applications volume 2018, Article number: 4 (2018)
The original article was published in Fixed Point Theory and Applications 2017 2017:26
Correction
In the publication of this article [1], there is an error in Section 3.
The error:
Corollary 3.22 Let \(( X,\sigma_{b} ) \) be a complete b-metric-like space with parameter \(s \ge 1\), and let f, g be two self-maps of X with \(\psi \in \Psi \), \(\varphi \in \Phi \) satisfying the condition for all \(x,y \in X\), where \(M ( x,y ) \) is defined as in (3.15) and \(q > 1\). Then f and g have a unique common fixed point in X.
Should instead read:
Corollary 3.22 Let \(( X,\sigma_{b} ) \) be a complete b-metric-like space with parameter \(s \ge 1\), \(f:X \to X\) be a self-mapping, and \(\alpha :X \times X \to \mathopen[ 0,\infty \mathclose) \). Suppose that the following conditions are satisfied: (i) f is an \(\alpha_{qs^{p}} \)-admissible mapping; (ii) there exists a function \(\psi \in \Psi \) such that $$ \psi \bigl( \alpha_{qs^{p}}\sigma_{b} ( fx,fy ) \bigr) \le \lambda \psi \bigl( M ( x,y ) \bigr) ; $$ (iii) there exists \(x_{0} \in X\) such that \(\alpha ( x_{0},fx_{0} ) \ge qs^{p}\); (iv) either f is continuous or property \(H_{qs^{p}}\) is satisfied. Then f has a fixed point \(x \in X\). Moreover, f has a unique fixed point if property \(U_{qs^{p}}\) is satisfied.
The error:
Corollary 3.17
(ii)
there exist functions \(\psi,\varphi \in \Psi\) such that
Should instead read:
Corollary 3.17
(ii)
there exists a function \(\beta \in \mathbb{S}\) such that
This has now been included in this erratum.
References 1.
Zoto, K, Rhoades, BE, Radenović, S: Some generalizations for \((\alpha-\psi,\phi)\)-contractions in
b-metric-like spaces and an application. Fixed Point Theory Appl. 2017, 26 (2017). https://doi.org/10.1186/s13663-017-0620-1
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The online version of the original article can be found under https://doi.org/10.1186/s13663-017-0620-1.
|
Let $(X,\mathscr T)$ be a topological space with a dense subset $D$ and a closed, relatively discrete subset $C$ such that $\mathscr{P}(D)\precsim C$. Then $(X,\mathscr T)$ is not normal.
Notations and definition in the theorem:-
$X\sim Y$- There is a bijection map from $X$ to $Y$.
$X\precsim Y$ - There is a subset $Y'$ of $Y$ such that $X \sim Y'$.
Relatively discrete. A subset $A$ of a topological space $(X,\mathscr T)$ is relatively discrete provided that for each $a\in A$, there exists $U\in \mathscr T$ such that $U\cap A=\{a\}$.
I am having doubt in understanding the underlined statements. I have edited and made my doubts more precise.
Doubt 1:- How do I prove $C\setminus A$ is closed? My attempt:- It is enough to prove that $A$ is open in $C$. Let $x\in A\subset C$; then there is an open set $U\in \mathscr T$ with $U\cap C=\{x\}$. So $A=\bigcup_{x\in A}\{x\}$ is open in $C$. Hence $C\setminus A$ is closed in $C$, and since $C$ is closed in $X$, $C\setminus A$ is closed in $X$. Am I correct?
Doubt2:-How do I prove that $U(A_1) \cap V(A_2)\neq \emptyset$?
What is the idea of the proof afterwards? Where do we arrive at the contradiction?
|
Similar triangles
How I wish I could show this problem in a drawing. This problem deals with the concept of similar triangles.
In a diagram, PQ // YZ, i.e. PQ is parallel to YZ,
|XP| = 2cm, |PY| = 3cm, |PQ|= 6cm and the area of triangle $\displaystyle XPQ = 24cm^2$
Calculate the area of trapezium PQZY.
Let me describe the shape. We have a large triangle XYZ. The large triangle XYZ overlaps a small triangle XPQ.
I was able to find YZ using PX/YX = PQ/YZ
i.e 2/5 =6/YZ
YZ = 30/2 = 15 cm
My greatest problem now is how to find the length XQ of the small triangle XPQ using the given area $\displaystyle 24cm^2$. I find it hard because I can't figure out the height, though I am taking PQ as the base. Could XQ be the height? I don't think so.
If you understand my problem could you please draw the shape so that I can confirm it. I have the diagram with me; is just that I don't have the means to show it.
If XP=2 and PQ=6, then area of XPQ cannot be 24.
Check problem again, and see if you did some mistake on post.
And please change your profile image; I find it too offensive, and I have a right not to be offended.
It's simple..
ar(XPQ)/ar(XYZ) = XP^2/XY^2 = 4 / 25
now, XPQ = 24
therefore, 24 / ar(XYZ) = 4 / 25
ar(XYZ) = 24 * 25/4 = 150
now, ar (PQZY) = ar (XYZ) - ar(XPQ) = 150 - 24 = 126 sq. cm
and moreover you can't find the length of XQ, you can interpret this by using cosine formula because angle XPQ is not constant.
Quote:
$\displaystyle Area( \triangle XPQ ) = \frac{1}{2} \cdot XP \cdot PQ \cdot \sin \alpha = \frac{1}{2} \cdot 2 \cdot 6 \cdot \sin \alpha = 6 \cdot \sin \alpha$
And $\displaystyle max ( 6 \cdot \sin \alpha ) = 6$.
|
The original article rightfully neglects the cost of DES computations (there are less than $2^{90}$) and everything except memory accesses to its
Table 1 and Table 2. I go one step further: considering that Table 1 is initialized only once and then read-only, it could be in ROM, and I neglect all except the accesses to Table 2. The attack requires an expected $2^{88}$ random writes and as many random reads to Table 2, organized as $2^{25}$ 124-bit words.
The cheap PC that I bought today (May 2012) came with 4 GByte of DDR3 DRAM, as a single 64-bit-wide DIMM with 16 DRAM chips each $2^{28}\cdot 8$-bit, costing about \$1 per chip in volume. Bigger chips exist: my brand new 32-GByte server uses 64 chips each $2^{29}\cdot 8$-bit, and these are becoming increasingly common (though price per bit is still higher than for the mainstream $2^{28}\cdot 8$-bit chips).
Two mainstream $2^{28}\cdot 8$-bit chips hold one instance of
Table 2, and one 124-bit word can be accessed as 8 consecutive 8-bit locations in each of the two chips simultaneously (consecutive accesses are like 15 times faster than random accesses). One $2^{29}\cdot 8$-bit chip would be slightly slower.
Assuming DDR3-1066 with 7-cycle latency (resp. DDR3-1333 with 9-cycle latency), 8 consecutive accesses require at least $(7\cdot 2+7)/1066\approx 0.020$ µs (resp. $(9\cdot 2+7)/1333\approx 0.019$ µs). This is a decimal order of magnitude less than considered in the original article. For each instance of
Table 2, that is 0.5 GByte, we can perform at most $365\cdot 86400\cdot 10^6/0.019/2\approx 2^{49.6}$ read+write accesses per year to Table 2 using mainstream DRAM. Thus with $n$ GByte of mainstream DRAM, and unless I err somewhere, the expected duration is $2^{37.4}/n$ years.
Based on press releases of a serious reference, there are less than $2^{31}$ PCs around, and assuming that my cheap PC is representative, that's $2^{33}$ GByte. Another way to look at that is that each 0.25-GByte chip cost about \$$1$; and the DRAM revenues in 2011 is less than \$$2^{35}$, thus enough for $2^{33}$ GByte (but notice that most of the revenue is from chips that are not optimized for cost per bit). I'll guesstimate all the RAM ever built is equivalent to at most $2^{35}$ GByte of mainstream DRAM for the purpose of the attack.
Thus at the end of the day, my answer is: the attack in the original article, updated to use all the RAM chips ever built by mankind to mid 2012 at the maximum of their potential, has an expected duration of
at least 5 years; or equivalently has odds at best 20% to succeed in one year.
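As a check on the arithmetic (plain Python, constants taken from the paragraphs above):

```python
import math

# Read+write access pairs per year to one Table 2 instance,
# at 0.019 µs per 8-burst access (the DDR3-1333 figure above).
accesses_per_year = 365 * 86400 * 1e6 / 0.019 / 2
print(round(math.log2(accesses_per_year), 1))   # ≈ 49.6

# 2^88 expected read+write pairs; one Table 2 instance per 0.5 GByte,
# so n GByte of DRAM gives 2n instances working in parallel.
def years(n_gbyte):
    return 2**88 / accesses_per_year / (2 * n_gbyte)

print(round(math.log2(years(1)), 1))            # ≈ 37.4
print(round(years(2**35), 1))                   # ≈ 5.4 years with 2^35 GByte
```

The last figure matches the "at least 5 years" conclusion for all the RAM ever built.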
Update: as noted by the authors of the original article, "the execution time is not particularly sensitive to the number of plaintext/ciphertext pairs $n$ (provided that $n$ is not too small) because as $n$ increases, the number of operations required for the attack ($2^{120-\log_2 n}$) decreases, but memory requirements increase, and the number of machines that can be built with a fixed amount of money decreases". By the same argument, our required amount of RAM is not much changed if we get more known plaintext/ciphertext pairs.
|
Definition:Partitioning
Jump to navigation Jump to search
Definition
Let $S$ be a set.
Let $\family {S_i}_{i \mathop \in I}$ be a family of subsets of $S$ such that:
$(1): \quad \forall i \in I: S_i \ne \O$, that is, none of the $S_i$ is empty
$(2): \quad \displaystyle S = \bigcup_{i \mathop \in I} S_i$, that is, $S$ is the union of $\family {S_i}_{i \mathop \in I}$
$(3): \quad \forall i, j \in I: i \ne j \implies S_i \cap S_j = \O$, that is, the elements of $\family {S_i}_{i \mathop \in I}$ are pairwise disjoint.
Then $\family {S_i}_{i \in I}$ is a
partitioning of $S$.
Also see
Results about set partitions can be found here.
|
I will try to give an approach to this integral using complex analysis:
First of all we would like to have a form of the integral which is most easily tractable by contour integration, which (at least for me) means an integrand whose branch cut lies on a finite interval.
One way of doing so is exploiting the parity of the integrand and transforming $y\rightarrow 1/x$. We get:$$I=\frac{1}{2}P\int_{-1}^{1}\frac{1}{x}\frac{\sqrt{x^2-1}}{1+x^2}dx$$
Here $P$ denotes Cauchy's principal value. We may now consider the complex function
$$f(z)=\frac{1}{z}\frac{\sqrt{z^2-1}}{1+z^2}$$
Choosing the standard branch of the logarithm, we have a cut on the interval $[-1,1]$; furthermore we have singularities at $\{\pm i,0\}$, where the first two are harmless but the one at zero is on the cut and will need to be handled with care.
Now we may choose a contour which encloses the branch cut and avoids the singularity at 0. We get:
$$\oint f(z)dz = \underbrace{\int_{-1}^{-\epsilon}f(x_+)+\int_{\epsilon}^{1}f(x_+)}_{2I}+\int_{\text{arg}(z)\in(\pi,0], |z|=\epsilon}f(z)dz\\-\underbrace{\int_{-1}^{-\epsilon}f(x_-)-\int_{\epsilon}^{1}f(x_-)}_{-2I}-\int_{\text{arg}(z)\in[0,-\pi), |z|=\epsilon}f(z)dz=\\4I+\underbrace{2\int_{\text{arg}(z)\in(\pi,0], |z|=\epsilon}f(z)dz}_{2 \times \pi i\ \times \text{res}[f(z),z=0] }=\\4I +2\pi \quad (1)$$
Where $\text{res}[f(z),z=0]=i$ because we calculated the residue above the branch cut. Furthermore $f(x_{\pm})$ denotes about which side of the cut we are talking: $\pm$ above/below. Also the limit $\epsilon \rightarrow 0$ is implicit.
Now comes the trick: By looking at the exterior of the contour we can also write (please note that we now enclose the singularities in opposite direction compared to above)
$$\oint f(z)dz=-2\pi i \times(\text{res}[f(z),z=i]+\text{res}[f(z),z=-i])=2\sqrt{2}\pi \quad (2)$$
Equating $(1)=(2)$
$$4I+2\pi=2\sqrt{2}\pi\\$$or$$I=\frac{\pi}{2}\left(\sqrt{2}-1\right)$$
which is the same result as the one obtained by trig. substitution.
|
This question already has an answer here: Proof with 3D vectors
Let $a = \begin{pmatrix}x_a\\y_a\\z_a\end{pmatrix}$, $b = \begin{pmatrix}x_b\\y_b\\z_b\end{pmatrix}$, and $c = \begin{pmatrix}x_c\\y_c\\z_c\end{pmatrix}$. Show that $(x_a,y_a,z_a)$, $(x_b,y_b,z_b)$, and $(x_c,y_c,z_c)$ are collinear if and only if $$a \times b + b \times c + c \times a = 0.$$
I was thinking of proving that the area of the triangle formed by the three points is 0. I thought the box product, $|(a \times b)\bullet c|$, would be helpful but I don't know how to relate that to the equation. All help is greatly appreciated.
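A hint toward relating the two sides (a standard expansion, sketched here rather than the full proof): the three points are collinear exactly when $(b-a)\times(c-a)=\mathbf 0$, and expanding that cross product gives the sum in question, since

```latex
\begin{align*}
(b-a)\times(c-a) &= b\times c - b\times a - a\times c + a\times a \\
                 &= a\times b + b\times c + c\times a ,
\end{align*}
```

using $a\times a = \mathbf 0$, $-b\times a = a\times b$, and $-a\times c = c\times a$. The magnitude of this vector is twice the area of the triangle, which ties back to the area idea.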
|
Differentiation
This course is aimed at teaching concepts, but some advanced mathematics is required. One important skill is to be able to “graphically differentiate” functions. This means identifying the tangent line at a particular point, and finding the slope of the tangent line using
\[\text{slope} = \dfrac{\text{rise}}{\text{run}} = \dfrac{\Delta y}{\Delta x}\]
The graph below shows an example of graphically differentiating by finding the tangent line (in blue) of the function \(f(x)\) (in red) at \(x=x_0\). It also shows how to calculate the slope of that line by extrapolating the line to \(x = x_1\) and \(x=x_2\):
This is one example of how you can graphically differentiate functions, but don't worry about drawing anything like this out. A more important skill is to roughly identify the slope of a tangent line just by looking at the graph. Visualizing images like the one above can be helpful in building this skill.
The table below lists a few functions common in 7C and their derivatives (\(A\), \(\omega\), and \(\phi\) are all constants).
Function \(f(x)\)  |  Derivative \(f'(x)\)
\(Ax\)  |  \(A\)
\(A\sin (\omega x + \phi)\)  |  \(A \omega \cos(\omega x + \phi)\)
\(A\cos (\omega x + \phi)\)  |  \(-A \omega \sin(\omega x + \phi)\)
\(1/x\)  |  \(-1/x^2\)

Integration
In this class quantities that accumulate from an initial to a final point can be represented by the area under some curve. For these quantities it's useful to visualize something like the picture below.
An approximation to the area under the curve between \(x_1\) and \(x_N\) is given by the area in the shaded rectangles, each with width \(\Delta x\). As these rectangles become infinitely thin, our approximation becomes an exact answer. This process is called integrating the function, and our exact value is the integral of the function.

Useful Integration Identities
In this course, it will be useful to remember these characteristics of integration (\(x_i\) and \(x_f\) are arbitrary bounds of integration):
\[\int_{x_i}^{x_f} Af(x)\,\mathrm{d}x = A \int_{x_i}^{x_f} f(x)\,\mathrm{d}x \text{ , for any constant } A\]
\[\int_{x_i}^{x_f} \, \mathrm{d}x = \int_{x_i}^{x_f} (1) \, \mathrm{d}x = \Delta x\]
Useful Integral Solutions
Some useful solutions to integrals common to physics classes are shown below (although none of these are essential for 7C):
\[\int_{x_i}^{x_f} x\, \mathrm{d}x = \dfrac{1}{2} (x^2_f - x^2_i)\]
\[\int_{x_i}^{x_f} \dfrac{1}{x^2}\, \mathrm{d}x = - \dfrac{1}{x_f} + \dfrac{1}{x_i}\]
\[\int_{x_i}^{x_f} \dfrac{1}{x} \, \text{d}x = \ln \left| \dfrac{x_f}{x_i} \right|\]
\[\int_{x_i}^{x_f} \cos (\omega x + \phi) \, \mathrm{d}x = \dfrac{1}{\omega} \left( \sin(\omega x_f + \phi) - \sin(\omega x_i + \phi) \right)\]
\[\int_{x_i}^{x_f} \sin (\omega x + \phi) \, \mathrm{d}x = -\dfrac{1}{\omega} \left( \cos(\omega x_f + \phi) - \cos(\omega x_i + \phi) \right)\]
These integrals are good for seeing interesting relationships, but the focus of this course is conceptual, not mathematical. Do not spend a lot of time trying to memorize these integrals. Also, if you are still uncertain about any of this material, you should review your calculus notes.
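The rectangle picture can also be checked numerically; a short sketch in plain Python (the function and interval are chosen only for illustration):

```python
import math

def riemann_sum(f, a, b, n):
    # Left-endpoint rule: n rectangles of equal width dx between a and b.
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

# As the rectangles get thinner the sum approaches the exact integral;
# here the exact value is sin(pi/2) - sin(0) = 1.
for n in (10, 100, 1000):
    print(n, riemann_sum(math.cos, 0.0, math.pi / 2, n))
```

Each extra factor of 10 in the number of rectangles shrinks the error, which is the limiting process described above.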
|
Assume you want to create a security which replicates the implied volatility of the market; that is, when $\sigma$ goes up, the value of the security $X$ goes up as well.
The method you could use is to buy call options on that market for an amount $C$.
We know that call options have a positive vega $\nu = \frac{\partial C}{\partial \sigma}= S \varphi(d_1)\sqrt{\tau} > 0$ (with $\varphi$ the standard normal density), so if the portfolio were made of the call, $X=C$, then the effect of $\sigma$ on the security is as we desired.
However, there is of course a major issue: the security $X$ would also have embedded underlying-price risk, time risk and interest rate risk. You can use the greeks to hedge against $\Delta$, $\Theta$ and $\rho$ (which are the derivatives of the call option with respect to each source of risk).
In practice, I think you definitely need $X$ to be $\Theta$-neutral and $\Delta$-neutral, but would you also hedge against $\rho$ or other greeks? Have the effects of these variables been important enough on option prices to make a significant impact, or would the cost of hedging be too high for the potential benefit?
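To make the question concrete, here is a sketch of the Black-Scholes call greeks (standard closed-form expressions; the function `bs_call_greeks` and its parameter names are my own):

```python
import math

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call_greeks(S, K, r, sigma, tau):
    # Black-Scholes call greeks; note vega = S * phi(d1) * sqrt(tau) > 0.
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    delta = norm_cdf(d1)
    vega = S * norm_pdf(d1) * math.sqrt(tau)
    theta = (-S * norm_pdf(d1) * sigma / (2.0 * math.sqrt(tau))
             - r * K * math.exp(-r * tau) * norm_cdf(d2))
    rho = K * tau * math.exp(-r * tau) * norm_cdf(d2)
    return {"delta": delta, "vega": vega, "theta": theta, "rho": rho}
```

A vega-exposed but $\Delta$-neutral position then holds one call and shorts `delta` units of the underlying; whether to also neutralize `rho` is exactly the trade-off asked about above.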
|
Dec 9. http://web.stevens.edu/algebraic/MADAY2016/
Dec 2.M.
Nov 18.
Nov 11. Shamgar Gurevich (Madison and Yale)
Nov 4. Title: Random nilpotent groups.
Oct 28. Dima Savchuk (University of South Florida)
Oct 21. Pascal Weil, CNRS and Université de Bordeaux
Oct 14. CUNY has Tuesday schedule
Sep 23. Ben Steinberg (CCNY)
Title: Homological and topological finiteness conditions for monoids
Abstract:
Homological and topological finiteness properties of groups have long been of interest in connection with topology. Interest in homological finiteness conditions for monoids began with the Anick-Squier-Groves-Kobayashi theorem, which says that a monoid with a finite complete rewriting system is of type $FP_{\infty}$. Starting in the early nineties Pride, Otto, Kobayashi and Guba began to investigate homological finiteness properties of monoids in connection with complete rewriting systems (there is also some work of Ivanov and of Sapir).
In group theory, one normally studies homological properties via topology by using Eilenberg-MacLane spaces. For monoids, the work has been almost entirely algebraic in nature, and for this reason progress on understanding finiteness conditions for such basic operations as free product with amalgamation has been slow.
In this talk, we introduce the topological finiteness condition $F_n$ for monoids. It extends the usual notion for groups and seems to be surprisingly robust. We can then extend Ken Brown's topological proof of the Anick-Squier-Groves-Kobayashi theorem to monoids, and we have made new progress on understanding finiteness properties of amalgamations, HNN extensions and HNN-like extensions (in the sense of Otto and Pride). In the process we develop some very rudimentary Bass-Serre theory for monoids.
This is joint work with Bob Gray.
Sep 30. Alexei Miasnikov (Stevens) “What the group rings know about the groups?”
Abstract: How much information about a group G is contained in the group ring K(G) for an arbitrary field K? Can one recover the algebraic or geometric structure of G from the ring? Are the algorithmic properties of K(G) similar to those of G? I will discuss all these questions in conjunction with the classical Kaplansky-type problems for some interesting classes of groups, in particular, for limit, hyperbolic, and solvable groups. At the end I will touch on the solution to the generalized 10th Hilbert problem in group rings and how equations in groups are related to equations in the group rings. The talk is based on joint results with O. Kharlampovich.
Oct 7. Conference in Princeton
Ada Peluso Visiting Professor, Hunter College, CUNY
The study of random algebraic objects sheds a different light on these objects, which complements the algebraic, but also the algorithmic points of view. I will discuss random finitely generated subgroups of free groups from several perspectives: when they are given by a random tuple of generators (of reduced words), and when they are given by a random Stallings graph. The Stallings graph of a subgroup H is a finite labeled graph uniquely associated with H, from which one can efficiently compute invariants of H. It is an interesting combinatorial object in and of itself, whose structure must be understood to enumerate and randomly generate Stallings graphs and subgroups. While both approaches to random subgroups, generators and Stallings graphs, are natural, they yield different distributions, and a different view of what ‘most subgroups’ look like.
Title: Lamplighter groups and (bi)reversible automata from affine transformations of $\mathbb Z_p[[t]]$.
Abstract:
The ring $\mathbb Z_p[[t]]$ of formal power series over $\mathbb Z_p$ can be naturally identified with the boundary of the $p$-ary rooted tree $T_p$. For each $a(t),b(t)\in\mathbb Z_p[[t]]$ with $b(t)$ being a unit, we consider the affine transformation of $\mathbb Z_p[[t]]$ defined by $f(t)\mapsto a(t)+f(t)\cdot b(t)$. This transformation defines an automorphism of $T_p$ that can be explicitly described by an automaton that is finite if and only if both $a(t)$ and $b(t)$ are rational power series.
We prove that the multiplication by a power series corresponding to a rational function $p(t)/q(t)\in\mathbb Z_p(t)$ is defined by a finite automaton that is reversible if and only if $\deg p\leq \deg q$. In particular, if $\deg p=\deg q$, the corresponding automaton is bireversible. This covers several examples that were studied earlier. We also describe the algebraic structure of the corresponding self-similar groups generated by such automata and show that they are isomorphic to lamplighter groups of various ranks.
This is a joint result with Ievgen Bondarenko.
Speaker: Denis Ovchinnikov (Stevens Institute)
Abstract:
The notion of a random (finitely presented) group gives a well-established approach to the question "what do most groups look like?". In the standard models (the few-relator and density models), with probability 1 (or, more precisely, asymptotically almost surely), these groups turn out to be either hyperbolic or trivial. Thus, if one wants to study the question "what do most groups in class N look like?" for some class N of non-hyperbolic groups, the classical models cannot be applied directly.
I will discuss the question above for the class of (finitely generated) nilpotent groups, provide an outline of known approaches to defining a random nilpotent group, and present our results about typical groups in some of these models.
The talk is based on joint work with Albert Garreta-Fontelles and Alexei Miasnikov.
Title: Small Representations of finite classical groups. Abstract: Suppose you have a finite group G and you want to study certain related structures (random walks, expander graphs, word maps, etc.). In many cases, this can be done using sums over the characters of G. A serious obstacle to applying these formulas has been the lack of knowledge about the low-dimensional representations of G. In fact, the “small” representations tend to contribute the largest terms to these sums, so systematic knowledge of them might lead to proofs of some important conjectures. The “standard” method for constructing representations of finite classical groups is due to Deligne and Lusztig (1976). However, their approach seems to have relatively little to say about the small representations.
This talk will discuss a joint project with Roger Howe (Yale), where we introduce a language to define, and a new method to systematically construct, the small representations of finite classical groups.
I will demonstrate our theory with concrete motivations and numerical data obtained with John Cannon (MAGMA, Sydney) and Steve Goldstein (Scientific computing, Madison).
Rachel Skipper (Binghamton University),
Title: The congruence subgroup problem for a family of branch groups
Abstract: A group, G, acting on a regular rooted tree has the congruence
subgroup property (CSP) if every subgroup of finite index contains the
stabilizer of a level of the tree. When the subgroup structure of G
resembles that of the full automorphism group of the tree, additional
tools are available for determining if G has the CSP.
In this talk, we look at the Hanoi towers group, which fails to have
the CSP in a particular way. Then we will generalize this construction to
a new family of groups and discuss the CSP for them.
Nov 25. No seminar. Thanksgiving.
Gromov (NYU, IHES), isoperimetric inequalities in group algebras
|
I understood that One-Time Pad (OTP) encryption ensures perfect secrecy. However, I couldn't find any real-world examples where the OTP is used.
Also, which are some real-world examples where it won't be suitable to use an OTP.
If you want perfect secrecy, then the OTP is the only choice. However, it provides neither integrity nor authentication.
If you want to see when it has been used, see the OTP article on Wikipedia, especially the Cold War era.
It is not suitable for modern usage, where a lot of messages are sent and received. The drawback is the necessary condition: the key must be at least as long as the message. Also, you must somehow transmit the OTP key securely, and not by encryption: you must trust the courier or carry it yourself.
A simple question arises: what will you do when the keystream is depleted? Would you wait for a new key, or would you re-use part of the keystream? Both have serious consequences: either you cannot communicate when needed, or the OTP will fail; see Crib-Dragging.
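To see concretely why pad reuse fails, note that xoring two ciphertexts encrypted under the same pad cancels the pad, so an eavesdropper learns the xor of the two plaintexts; this is exactly what makes crib-dragging possible. A minimal sketch (the messages and the pad value are made up for illustration):

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

pad = bytes.fromhex("8f3a19c2e4b7d0516a2c9e04f1b36d78")  # 16-byte pad, wrongly reused
m1 = b"Save the man    "
m2 = b"Kill the man    "

c1 = xor_bytes(m1, pad)
c2 = xor_bytes(m2, pad)

# The pad cancels out: c1 xor c2 equals m1 xor m2, independent of the pad.
assert xor_bytes(c1, c2) == xor_bytes(m1, m2)
```

From `m1 xor m2` and a guessed crib (say, "the man"), the attacker can recover pieces of both messages.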
Per comment: there is an interesting question on this site, Is there a companion algorithm for OTP to ensure integrity and/or authentication?, asking for companions providing integrity and authenticity, since the OTP only provides confidentiality. Clearly, if you send your data encrypted only with an OTP, then Oscar, the man in the middle, can modify the message. If he has no knowledge of the structure of the data, the modifications are random; otherwise, the results can be catastrophic.
Save the man can be converted into
Kill the man.
As some say, most of the time integrity is more important than confidentiality. You may not need encryption, but integrity is almost always necessary.
OTPs are making quite a resurgence these days as a fundamental product of quantum key distribution networks using the BB84 protocol. It's worth explaining where the OTP fits in with that protocol. Consider the following arrangement:-
Alice has a photonics-based true random number generator (an essential component of OTPs). Its bits randomly select polarised photons/qubits sent to Bob, forming a candidate key. The candidate key is received, error-corrected and sifted, reducing its length. What remains is a key known to both Alice and Bob, suitable for future OTP use.
When QKD was first developed, the sifted-key transmission rate was only a few kbps, so the key was used as a conventional symmetric key to boost effective throughput. Things have moved on, and generation rates are now of the order of 1 Mb/s. Field test of quantum key distribution in the Tokyo QKD Network details a working secure video-conferencing system in Tokyo running entirely on OTPs. The paper also details work on secure OTP-based smartphones. There is also a NIST video surveillance system based on OTPs.
And the Tokyo paper was published in 2011. The equipment will have shrunk and improved and the exchange protocols will have been refined. OTPs will inevitably become more commonplace, especially given the allure of information theoretic security.
On the other hand, since a lot of hardware is required, including true random number generators, OTPs are not really suitable for the consumer yet. 8K UHD movies are not currently the best use case for OTPs. But who can predict the future? A Netflix QKDN? A lot of people have fibre to the premises, so it's feasible. And a true random number generator can fit into a pair of tweezers, as one Swiss chip demonstrates.
Remember:-
"I think there is a world market for about five computers."
-- Thomas J Watson, President IBM.
When you browse to https://crypto.stackexchange.com/ or https://bankoffreedonia.com/, your browser almost certainly uses a one-time pad generated by AES-256 or ChaCha or similar to encrypt the messages exchanged with the server. We call this method of generating a one-time pad a ‘stream cipher’. It turns out to be about the least interesting part of how TLS and HTTPS works, which is why you don't hear much about it!
In particular, the one-time pad model is just to encrypt the $n^{\mathit{th}}$ message $m_n$ with a pad $p_n$, chosen at random for each message with some low statistical distance from uniform and never again used for any other purpose, by setting the ciphertext to be $c_n = m_n \oplus p_n$, where $\oplus$ is xor.
The security arises from the inability of the adversary to guess $p_n$: it is bounded by the statistical distance of the pad from uniform. ‘Perfect security’ or ‘perfect secrecy’, in this case, is the theoretically optimal statistical distance, which is zero: not something necessarily attainable in practical terms, but a theoretical property of the model. Cranks will often play up how their one-time pad systems have perfect security, because everyone has heard of one-time pads and perfect secrecy sounds awesome, but they will play down how far their systems fall short of that ideal in practice.

Because it is difficult and costly to get a uniform distribution on a large number of bits by observing random processes in the real world, and even more difficult and costly for two parties to agree on them and exchange them, we instead pick a small number of bits $k$ from a space nevertheless so vast that nobody will ever guess the same key by chance, even if they spent enough energy trying guesses to boil the oceans. Then we compute the pad $p_n = F_k(n)$, where the function $F$ might be AES-256 in CTR mode, or ChaCha, or what have you; the technical term is that $F$ should be a pseudorandom function family. AES-256 and ChaCha have been studied for decades, and the conclusion is that it is unimaginably difficult to distinguish $F_k(n)$ from uniform random when $k$ is close enough to uniform random. It is then a tiny, mundane, and ubiquitous step to xor the pad with the message to encrypt it. Of course, if you make a bad choice of $F$, like a Vigenère cipher, or if you foolishly attempt to generate $p_n$ by banging on the keyboard like a monkey, then you don't get much security. This is how historical cryptography using one-time pads was broken, long before the invention of AES-256 or ChaCha.
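Both halves of this picture can be sketched in a few lines: a true one-time pad, and a stream-cipher-style pad derived from a short key. Here SHAKE256 stands in for the pseudorandom function family $F$; a real system would use AES-256 in CTR mode or ChaCha, so treat this as an illustration of the structure, not a vetted implementation:

```python
import hashlib
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# True one-time pad: a fresh uniform pad, as long as the message, used once.
msg = b"attack at dawn"
pad = secrets.token_bytes(len(msg))
ct = xor_bytes(msg, pad)
assert xor_bytes(ct, pad) == msg  # decryption recovers the message

# Stream-cipher style: derive the pad p_n = F_k(n) from a short key k.
key = secrets.token_bytes(32)

def pad_for(n, length):
    # SHAKE256 as a stand-in PRF; real systems use AES-CTR or ChaCha.
    return hashlib.shake_256(key + n.to_bytes(8, "big")).digest(length)

ct1 = xor_bytes(b"first message ", pad_for(1, 14))
pt1 = xor_bytes(ct1, pad_for(1, 14))
assert pt1 == b"first message "
```

The crucial discipline is the same in both cases: each pad index $n$ is used for exactly one message.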
The one-time pad model is one of many simple probabilistic models in cryptography that we instantiate in practice. We have an ideal model; a theorem about the model; and a practical instantiation of the model. Here are some examples:
Cipher block chaining is a model for using a uniform random function $f$ from $b$-bit strings to $b$-bit strings to make a nearly uniform random function $\operatorname{CBC}_f$ on an $n$-block message $m$, given by $$\operatorname{CBC}_f(m) = f(\dots f(f(\mathit{iv}_n \oplus m_1) \oplus m_2) \dots \oplus m_n),$$ where $\mathit{iv}_n$ is a short uniform random string. There is a standard theorem (e.g., [1], Theorem 3.1, Information-Theoretic Case) bounding the probability that any algorithm $A$ can distinguish this from a uniform random function $g$ from $nb$-bit strings to $b$-bit strings: $$\Pr[A(\operatorname{CBC}_f)] \leq \Pr[A(g)] + 1.5\cdot q^2 n^2/2^b,$$ where $q$ is the number of times $A$ evaluates the function.
Nobody clamors to choose $f$ among all possibilities uniformly at random by examining bird entrails, store a description of it, and then compute CBC on it: we just use $f = \operatorname{AES256}_k$ for a short uniform random key $k$ without fanfare.
The one-time authenticator is a model for using a uniform random message-length key to detect forgery[2][3][4]: if $m = m_1 \mathbin\| m_2 \mathbin\| \dots \mathbin\| m_\ell$ is a message of $\ell$ blocks of $b$ bits apiece, and the key $s, a_1, a_2, \dots, a_\ell$ is a collection of $\ell + 1$ independent uniform random $b$-bit blocks, then, interpreting both as vectors in $\operatorname{GF}(2^b)$, the authenticator $$s + m_1 a_1 + \dots + m_\ell a_\ell$$ can't be forged on any message other than $m$ with probability better than $2^{-b}$ by any adversary who doesn't know the key.
Nobody clamors to choose $s, a_1, \dots, a_\ell$ among all possibilities uniformly at random by tossing I Ching sticks, store it, and transport it by courier; we just use $s = \operatorname{AES256}_k(0)$ and $a_i = r^i$ where $r = \operatorname{AES256}_k(n)$ for the $n^{\mathit{th}}$ message without fanfare. (There's a good chance your browser is doing this right now with https://crypto.stackexchange.com/ in tandem with the one-time pads it generates with AES or ChaCha!)
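The polynomial-evaluation structure of this authenticator is easy to sketch. To keep the arithmetic simple, this toy works over a prime field rather than $\operatorname{GF}(2^b)$, which preserves the idea but not the exact construction from the text:

```python
import secrets

P = (1 << 61) - 1  # a Mersenne prime standing in for the field GF(2^b)

def one_time_tag(blocks, s, r):
    # tag = s + m_1*r + m_2*r^2 + ... + m_l*r^l  (mod P)
    tag, x = s, r
    for m in blocks:
        tag = (tag + m * x) % P
        x = (x * r) % P
    return tag

s = secrets.randbelow(P)  # one-time key parts: uniform, used for one message
r = secrets.randbelow(P)

msg = [314, 159, 265]
tag = one_time_tag(msg, s, r)

forged = [314, 159, 999]
# Except with probability about 2^-61, a modified message gets a different tag.
assert one_time_tag(forged, s, r) != tag
```

This $a_i = r^i$ shape is the same one used by Poly1305-style polynomial MACs.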
The one-time pad is a model for using a uniform random message-length key to conceal a message from an eavesdropper, as above.
No serious cryptography practitioner clamors to choose the key among all possibilities uniformly at random by reading tea leaves, store it, and transport it by courier; we just generate it with AES or ChaCha under a short uniform random key $k$ without fanfare.
Yet somehow this particular model has attained mystical status in the broader culture as transcending cryptography, a mistake that leads people to shoot themselves in the foot by accidentally reusing pads because they're too unwieldy or by using a broken bespoke key generator like a Vigenère cipher—instead of using modern cryptography to choose them securely from an easily managed short uniform random secret.
We call this composition a stream cipher, and it actually works so reliably that just about everyone uses it every day for net petabytes of data transfer, protecting trillions of euros of economic value, personal privacy, etc. This is a wild success of the one-time pad model! The only way practical systems using it have been broken is by exploiting mistakes of pad generation, not by anything about the one-time pad model.
|
Let's assume that the signal you are analysing is a sinusoid with amplitude $a$:
$x = a\sin(2\pi f_{0} t)$
Its RMS amplitude is then $\dfrac{a}{\sqrt{2}}$, as you noticed in your code. Before performing the DFT, it is good practice to window the signal, as this decreases leakage, etc.
After transforming this signal into the frequency domain, you obtain a two-sided spectrum. We then take the magnitude of these complex values. You would probably notice that all of them are very high; that is because the energy is not normalised. Normalisation can be done by dividing the magnitude values by the energy of your window. Since people tend to take the signal "as it is" (which is equivalent to applying a rectangular window), the spectrum is usually divided by the number of samples $N$. For other types of windows, the factor is equal to the sum of all window samples.
Additionally, we are only interested in one half of the spectrum, so the amplitudes of all samples must be multiplied by $2$ to compensate for the loss of energy, except for the DC component, which appears only once.
Lastly, as you want to analyse the RMS of the signal, you must understand the following. All values of the single-sided spectrum are of height $\dfrac{A_{i}^{2}}{2}$, which is equal to $\left( \dfrac{A_{i}}{\sqrt{2}}\right)^{2}$, where $\dfrac{A_{i}}{\sqrt{2}}$ is the RMS amplitude of the $i$-th frequency component. Translating this into Python code, you obtain:
import numpy as np
import matplotlib.pyplot as plt
plt.close('all')
fs = 5e5
duration = 1
npts = int(fs*duration)
t = np.arange(npts, dtype=float)/fs
f = 1000
ref = 0.004
amp = ref * np.sqrt(2)
signal = amp * np.sin((f*duration) * np.linspace(0, 2*np.pi, npts))
rms = np.sqrt(np.mean(signal**2))
dbspl = 94 + 20*np.log10(rms/ref)
# Window signal
win = np.hamming(npts)
signal = signal * win
sp = np.fft.fft(signal)
freq = np.fft.fftfreq(npts, 1.0/fs)
# Scale the magnitude of FFT by window energy and factor of 2,
# because we are using half of FFT.
sp_mag = np.abs(sp) * 2 / np.sum(win)
# To obtain RMS values, divide by sqrt(2)
sp_rms = sp_mag / np.sqrt(2)
# Shift both vectors to have DC at center
freq = np.fft.fftshift(freq)
sp_rms = np.fft.fftshift(sp_rms)
# Convert to decibel scale
sp_db = 20 * np.log10( sp_rms/ref ) + 94
plt.semilogx(freq, sp_db)
plt.xlim( (0, fs/2) )
plt.ylim( (-100, 100))
plt.grid(True)
# Compare the outputs
print(dbspl, sp_db.max())
As you can see, this is producing consistent results for both time and frequency domain. Good luck!
|
In Mostly Harmless Econometrics: An Empiricist's Companion (Angrist and Pischke, 2009: page 209) I read the following:
(...) In fact, just-identified 2SLS (say, the simple Wald estimator) is approximately unbiased. This is hard to show formally because just-identified 2SLS has no moments (i.e., the sampling distribution has fat tails). Nevertheless, even with weak instruments, just-identified 2SLS is approximately centered where it should be. We therefore say that just-identified 2SLS is median-unbiased. (...)
Though the authors say that just-identified 2SLS is median-unbiased, they neither prove it nor provide a reference to a proof. On page 213 they mention the proposition again, but with no reference to a proof. Also, I can find no motivation for the proposition in their lecture notes on instrumental variables from MIT, page 22.
The reason may be that the proposition is false, since they reject it in a note on their blog. However, just-identified 2SLS is approximately median-unbiased, they write. They motivate this with a small Monte Carlo experiment, but provide no analytical proof or closed-form expression for the error term associated with the approximation. This was the authors' reply to professor Gary Solon of Michigan State University, who commented that just-identified 2SLS is not median-unbiased.

Question 1: How do you prove that just-identified 2SLS is not median-unbiased, as Gary Solon argues?

Question 2: How do you prove that just-identified 2SLS is approximately median-unbiased, as Angrist and Pischke argue?
For Question 1 I am looking for a counterexample. For Question 2 I am (primarily) looking for a proof or a reference to a proof.
I am also looking for a formal definition of median-unbiased in this context. I understand the concept as follows: an estimator $\hat{\theta}(X_{1:n})$ of $\theta$ based on some set $X_{1:n}$ of $n$ random variables is median-unbiased for $\theta$ if and only if the distribution of $\hat{\theta}(X_{1:n})$ has median $\theta$.

Notes
In a just-identified model the number of endogenous regressors is equal to the number of instruments.
The framework describing a just-identified instrumental variables model may be expressed as follows: The causal model of interest and the first-stage equation is $$\begin{cases} Y&=X\beta+W\gamma+u \\ X&=Z\delta+W\zeta+v \end{cases}\tag{1}$$ where $X$ is a $k\times n+1$ matrix describing $k$ endogenous regressors, and where the instrumental variables is described by a $k\times n+1$ matrix $Z$. Here $W$ just describes some number of control variables (e.g., added to improve precision); and $u$ and $v$ are error terms.
We estimate $\beta$ in $(1)$ using 2SLS: Firstly, regress $X$ on $Z$ controlling for $W$ and acquire the predicted values $\hat{X}$; this is called the first stage. Secondly, regress $Y$ on $\hat{X}$ controlling for $W$; this is called the second stage. The estimated coefficient on $\hat{X}$ in the second stage is our 2SLS estimate of $\beta$.
In the simplest case we have the model $$y_i=\alpha+\beta x_i+u_i$$ and instrument the endogenous regressor $x_i$ with $z_i$. In this case, the 2SLS estimate of $\beta$ is $$\hat{\beta}^{\text{2SLS}}=\frac{s_{ZY}}{s_{ZX}}\tag{2},$$ where $s_{AB}$ denotes the sample covariance between $A$ and $B$. We may simplify $(2)$: $$\hat{\beta}^{\text{2SLS}}=\frac{\sum_i(y_i-\bar{y})z_i}{\sum_i(x_i-\bar{x})z_i}=\beta+\frac{\sum_i(u_i-\bar{u})z_i}{\sum_i(x_i-\bar{x})z_i}\tag{3}$$ where $\bar{y}=\sum_iy_i/n$, $\bar{x}=\sum_i x_i/n$ and $\bar{u}=\sum_i u_i/n$, where $n$ is the number of observations.
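As a numerical way to explore the two questions, here is a small Monte Carlo sketch in the spirit of Angrist and Pischke's blog experiment, using the Wald estimator $(3)$. The data-generating process and every parameter value are my own assumptions, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(12345)

def wald_estimates(delta, n=500, reps=2000, beta=1.0):
    """Simple Wald (just-identified 2SLS) estimates of beta, as in (3)."""
    est = np.empty(reps)
    for i in range(reps):
        z = rng.standard_normal(n)
        u = rng.standard_normal(n)
        v = 0.8 * u + 0.6 * rng.standard_normal(n)  # endogeneity: corr(u, v) > 0
        x = delta * z + v                            # first stage
        y = beta * x + u                             # causal model
        est[i] = np.sum((y - y.mean()) * z) / np.sum((x - x.mean()) * z)
    return est

strong = wald_estimates(delta=1.0)   # strong first stage
weak = wald_estimates(delta=0.02)    # extremely weak first stage

# With a strong instrument the median sits close to the true beta = 1;
# with an extremely weak one it drifts toward the (inconsistent) OLS limit.
print(np.median(strong), np.median(weak))
```

In this particular design the median bias is visible once the first stage is weak enough, which is consistent with Solon's objection, while the strong-instrument median is essentially centered, which is the regime Angrist and Pischke's "approximately" covers.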
I made a literature search using the words "just-identified" and "median-unbiased" to find references answering Question 1 and 2 (see above). I found none. All articles I found (see below) make a reference to Angrist and Pischke (2009: page 209, 213) when stating that just-identified 2SLS is median-unbiased.
Jakiela, P., Miguel, E., & Te Velde, V. L. (2015). You've earned it: estimating the impact of human capital on social preferences. Experimental Economics, 18(3), 385-407.
An, W. (2015). Instrumental variables estimates of peer effects in social networks. Social Science Research, 50, 382-394.
Vermeulen, W., & Van Ommeren, J. (2009). Does land use planning shape regional economies? A simultaneous analysis of housing supply, internal migration and local employment growth in the Netherlands. Journal of Housing Economics, 18(4), 294-310.
Aidt, T. S., & Leon, G. (2016). The democratic window of opportunity: Evidence from riots in Sub-Saharan Africa. Journal of Conflict Resolution, 60(4), 694-717.
|
The set of integers $\Bbb Z$ under ordinary addition is cyclic. Both $1$ and $-1$ are generators. But I am a bit confused: how can $1$ generate $0$, and how does $-1$ generate $0$? What is the order of $1$ and $-1$ in this group of integers?
The subgroup generated by a set of elements of a group is the smallest subgroup that contains all the elements. A group by definition always includes the identity element.
The order of an element is the size of the subgroup generated by it. Both $1$ and $-1$ generate all of $(\mathbb Z,+)$, which has infinitely many elements; thus the order of each is infinite.
Note that for any element $n$ of $\mathbb Z$, the generated subgroup consists of all multiples of $n$, which for non-zero $n$ consists of infinitely many elements as well, thus every non-zero integer has infinite order.
On the other hand, one can easily check that $\{0\}$ is a subgroup all by itself, called the trivial subgroup, and therefore this is the subgroup generated by $0$ (note that, again, it consists of all multiples of $0$, since $0n=0$ for all $n\in\mathbb Z$). Since the trivial subgroup contains only one element (namely $0$), the order of $0$ is $1$.
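A quick computational illustration of the point (the helper function is hypothetical, introduced only to make the idea concrete): the subgroup generated by $n$ consists of the multiples of $n$, so $1$ and $-1$ generate the same infinite subgroup, while $0$ generates only the trivial one.

```python
def cyclic_subgroup(n, limit=10):
    """Elements of the subgroup <n> of (Z, +) with absolute value <= limit."""
    if n == 0:
        return [0]  # the trivial subgroup: k*0 = 0 for every k
    k_max = limit // abs(n)
    return sorted(k * n for k in range(-k_max, k_max + 1))

assert cyclic_subgroup(1, 5) == cyclic_subgroup(-1, 5) == list(range(-5, 6))
assert cyclic_subgroup(0) == [0]
assert cyclic_subgroup(3, 10) == [-9, -6, -3, 0, 3, 6, 9]
```

The `limit` parameter only truncates the (infinite) subgroup for display; the subgroup itself is all multiples of $n$.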
Indeed, it is easy to see that for any group $G$, its identity element generates the trivial subgroup and therefore has order $1$. Also, the subgroup generated by any non-identity element $g$ contains at least two elements (namely the identity and $g$ itself), and therefore $g$ has order $\ge 2$.
As a side note, even if we ignore that $0$ is by definition in the subgroup of $(\mathbb Z,+)$ generated by $1$, we can easily get it through the supported operations: since $1$ is in the set, and the set is closed under negation, $-1$ is also in the set. And since the set is also closed under addition, $1+(-1)=0$ is also in the set. However, while this works for groups, the same idea fails for other algebraic structures, like monoids, where you also have an identity but generally no inverses; an example of this would be $(\mathbb Z_{>0},\times)$.
Therefore the right way to think of it is that the neutral element is already there by definition, as this works for all algebraic structures.
In $(\mathbb Z,+)$ there is no positive integer $m$ such that $m \times 1 =0$. We say that the order of $1$ (and of every other integer apart from $0$) is infinite. The order of $0$ is $1$.
gandalf61's answer is technically correct. But let me try to offer a more intuitive explanation.
You can think of the group $G =(\mathbb{Z}, +)$ in terms of functions which permute the integers, i.e., functions of type $\mathbb{Z} \to \mathbb{Z}$. For example, think of $7_G$ as the function that adds $7$ to every integer, i.e., $7_G(x) = 7 + x$. And when you compose two elements of this group, what you are really doing is function composition. For example, $7_G + 2_G = 9_G$ is justified since for any $x \in \mathbb{Z}$, $7_G(2_G(x)) = 7 + (2 + x) = 9 + x = 9_G(x)$. Similarly, negatives correspond to function inverses, and so on. (This perspective is called a group action.)
Now, there is an element $0_G$, which represents the identity function of this type. This function fixes every element of $\mathbb{Z}$; in other words, it does nothing.
We can see that if we had the function $1_G$ (and its inverse), we could repeatedly compose them (starting from $0_G$) to get any desired element of this group of functions. Getting $0_G$ from the generators is trivial, since you do not have to apply them any number of times at all.
This is the idea that if you apply a function $0$ times, you have not changed the argument at all, and hence you get the identity function. (Put more complicatedly, the $\text{id}$ morphism is the identity in the monoid of morphisms.) This is the same spirit in which you get $1$ if you multiply something $0$ times in a ring, i.e., $a^0 = 1$.
|
Computing Stiffness of Linear Elastic Structures: Part 1
Today, we will introduce the concept of structural stiffness and find out how we can compute the stiffness of a linear elastic structure subjected only to mechanical loading. In particular, we will explore how it can be computed and interpreted in different modeling space dimensions (0D and 1D) and which factors affect the stiffness of a structure.
What Is Structural Stiffness?
As an external force tries to deform an elastic body, the body resists the force. This resistance is referred to as stiffness. We often casually use this term as a material property, whereas in reality it can be a property of various geometric and material parameters. We will explore these cases here.
Before we dive in, we need to define stiffness mathematically. Let's assume that a force, F_0, acting on a body deforms it by an amount, u_0. If we require a small force, ΔF, to deform the body by an infinitesimally small amount, Δu, then the ratio of these two quantities gives us the stiffness of the body at the operating point denoted by the state variables F_0 and u_0.
This is the definition of linearized stiffness, which can, in general, be used for both linear and nonlinear force versus displacement curves. The force-displacement relationship and the linearized stiffness can be mathematically expressed using the following equations, respectively:

A typical force vs. displacement curve for a linear elastic structure.

An Example Problem
When modeling various types of structural systems, one of the goals of the analysis could be to come up with an effective value of stiffness and interpret its scope based on how we compute it from the structural problem at hand. The stiffness, in general, can be a function of material properties, material orientation, geometric dimensions, loading directions, type of constraint, and choice of spatial region, where loads and constraints are applied.
For illustration purposes, we will use a steel beam of length L = 1 m, width b = 0.2 m, and thickness t = 0.1 m. The face of the beam that is parallel to the yz-plane and located at x = 0 is rigidly fixed (i.e., zero displacements in the x-, y-, and z-directions). The face that is parallel to the yz-plane and located at x = L has a uniformly distributed force acting on it. All other faces of the beam are unconstrained and unloaded; consequently, they are free to deform.

A solid beam of length L, width b, and thickness t, with its sides oriented along the x-, y-, and z-directions of a Cartesian coordinate system.
We will compute the stiffness of this beam both analytically and using COMSOL Multiphysics, comparing the solutions obtained from these two methods.
Exploring Modeling Space Dimensions
When starting to model a structure, one of the critical choices that we need to make is deciding on how much detail we are really interested in. In other words, we need to determine if we can lump the entire structure as a single point in space or if we need to resolve it in one, two, or even three dimensions to get more details of spatial variation in certain quantities of interest. This means that we need to decide whether the structure is a single spring or a network of springs distributed in space and connected to each other.
To do so, we should try to answer the following questions — and possibly several others depending on what the modeling objective is:
Is there any spatial inhomogeneity in the material properties?
Is there any spatial inhomogeneity in the applied force?
Do the geometric dimensions of the structure vary irregularly in certain directions?
Are there any planes of symmetry that we can identify based on the symmetry in the modeling geometry, applied loads, and expected solution profile?
Are there any localized effects, such as around holes or corners, that we are interested in?
Can we neglect the stresses or strains in certain directions?

Stiffness in 0D Models
We will start by looking at a 0D model of the beam where all effects related to loading, deformation, and material response are lumped into a single point in space and the entire beam is modeled as a single spring.
A 0D representation of the beam using a lumped stiffness, k, with a force, F, acting on it that produces a displacement, u. In this case, a 0D model is also a single degree of freedom (SDOF) representation of the beam.
Assuming that steel behaves as a Hookean solid (i.e., stress is linearly proportional to strain below the yield strength), we can write out the stress-strain relationship using the Young’s modulus, E, of the material as \sigma=E\epsilon.
Using a simplistic definition where stress is equal to force per unit cross-section area, \sigma=F/A, where A=bt, and strain is equal to the ratio of deformation to the original length, \epsilon=u/L, and combining these, we get F=(EA/L)u. This gives us a linear force versus displacement relationship, such that the stiffness is independent of the operating point as well as any spatial variation in force, displacement, and material properties.
Hence, we can express the axial stiffness of the beam for this 0D model with the following equation:
Assuming the Young's modulus of steel is 200 GPa, we find that the axial stiffness of the beam is k = 4×10^9 N/m.
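The 0D result is easy to verify with a few lines of code, using the values from the example problem above:

```python
# Axial stiffness of the lumped (0D) model: k = E*A/L
E = 200e9                # Young's modulus of steel, Pa
b, t, L = 0.2, 0.1, 1.0  # beam width, thickness, and length, m
A = b * t                # cross-section area, m^2

k = E * A / L
print(k)  # approximately 4e9 N/m
```

Note that k scales linearly with the area and inversely with the length, so halving the cross section halves the axial stiffness.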
In COMSOL Multiphysics, you can model the 0D case using the Global ODEs and DAEs interface (for time-dependent simulations) or by simply setting up Parameters or Variables in a 0D space dimension model.

Screenshot of the Parameters table in the COMSOL software.

Stiffness in 1D Models
In reality, we know that the beam is fixed at one end, while the force is being applied at the other. Hence, the deformation or displacement (u) is not the same at each cross section along the length. In order to incorporate this effect, we would need to create at least a 1D model.
Computing Axial Stiffness

A 1D representation of the beam, obtained using the balance of static axial forces in the body.
A 1D model would require us to solve the axial force balance equation on a 1D domain that represents the beam, in order to find the axial displacement (u) as a function of the x-coordinate that defines the 1D space. The axial force balance equation (ignoring any bending or torsional moment) can be written as:
with the boundary conditions at the two ends as u=0 at x=0 and E\frac{du}{dx}=\frac{F}{A} (Hooke’s law) at x=L.
Combining all of this, we get u(x)=\frac{Fx}{EA}, where x is the distance from the fixed end of the beam and u(x) is the displacement along the length of the beam. The 1D model represents an infinite number of springs connected to each other in series. This allows us to get more detailed information on the spatial variation in displacement, stresses, and strains in the beam. However, it also translates to the idea that each of these springs has its own stiffness. Assuming that the Young's modulus and cross-section area do not vary along the length of the beam, if we discretize the beam into n springs in series, the stiffness of each spring (k_i) will be k_i=nEA/L.
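This series interpretation can be checked numerically: n springs of stiffness k_i = nEA/L connected in series recover the overall stiffness EA/L regardless of n. A short sketch, using the example beam's values:

```python
E, A, L = 200e9, 0.02, 1.0  # Young's modulus, area, and length of the beam

for n in (1, 10, 1000):
    k_i = n * E * A / L                            # each of the n series springs
    k_eq = 1.0 / sum(1.0 / k_i for _ in range(n))  # series combination rule
    assert abs(k_eq - E * A / L) < 1.0             # equals E*A/L up to rounding
```

The series rule 1/k_eq = Σ 1/k_i is the discrete analogue of integrating compliance along the beam.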
However, if we want to relate the 1D model with the 0D model, we have to imagine that the entire beam is being approximated by a single spring. Therefore, the equivalent stiffness in 1D would be the ratio of the maximum axial displacement and the axial force at the location where the force is being applied. In this case, u would be maximum at x = L where its value would be u_{max}=FL/EA. This gives us the equivalent single-spring stiffness of the 1D beam as:
This indicates that for the given modeling parameters, the solution (k = 4×10^9 N/m) of the 1D model tends to be that of the 0D model when evaluated at x = L.

Computing Bending Stiffness
An additional advantage of moving to a 1D model is that we can now explore the effect of the loading direction. Although we restrict ourselves to a 1D space, we can compute the out-of-plane displacements v and w, respectively, along the "invisible" y- and z-directions when a force acts on the beam along these directions. Note that based on the chosen boundary conditions (a clamped-free beam), the displacement components v and w vary as functions of the x-coordinate.

A 1D representation of the beam, obtained using the balance of bending moment in the body.
Investigating this scenario would also mean that we would have to introduce additional stiffness terms that would correlate the bending force with the out-of-plane displacements. This would require us to solve the following moment-balance equation:
\frac{d^2}{dx^2}\left(EI\frac{d^2w}{dx^2}\right)=0
with the boundary conditions
at x=0; w=0 and \frac{dw}{dx}=0
and at x=L; \frac{d^2w}{dx^2}=0 and -EI\frac{d^3w}{dx^3}=F
In these equations, we have used the displacement (w) along the
z-direction for representational purposes. The same idea holds true for the displacement (v) along the y-direction as well. Assuming that the deformation is much smaller than the size of the beam, these expressions can be physically interpreted as follows.
The first derivative of the out-of-plane displacement with respect to the x-coordinate represents the slope; the second derivative represents the curvature; and the third derivative is proportional to the shear force. In these equations, the term I denotes the second area moment of inertia and is a function of the direction about which the beam bends. For bending about the y-axis (i.e., force acting along the z-direction), we can express it as I_y=\frac{bt^3}{12}.
For bending about the z-axis (i.e., force acting along the y-direction), we can express it as I_z=\frac{b^3t}{12}.
Combining all of this, we get a maximum deflection of w_{max}=\frac{FL^3}{3EI} at the tip for a tip load F.
Therefore, the equivalent bending stiffness in 1D would be the ratio of the bending load to the maximum out-of-plane displacement at the location where the force is being applied. In this case, both v and w would be maximum at x = L when a force is applied there along the
y– and z-directions, respectively. This gives us two possible equivalent single-spring bending stiffnesses of the 1D beam depending on the loading direction.
The force and displacement along the y-direction can be correlated using the stiffness k_{yy}=\frac{Eb^3t}{4L^3}. The force and displacement along the z-direction can be correlated using the stiffness k_{zz}=\frac{Ebt^3}{4L^3}. For the given modeling parameters, k_{yy} = 4×10^7 N/m and k_{zz} = 1×10^7 N/m.
Computing Stiffness in COMSOL Multiphysics
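Before moving to the COMSOL setup, the analytical bending stiffnesses can also be sanity-checked with a short script, using the equivalent form k = 3EI/L^3 for a clamped-free beam with a tip load. As above, E, L, b, and t are assumed values chosen to reproduce the stiffnesses quoted in the text, not the blog's original inputs.

```python
# Bending stiffnesses of the clamped-free beam, k = 3*E*I/L^3, with
# I_z = b^3*t/12 (bending about z, force along y) and I_y = b*t^3/12
# (bending about y, force along z). Parameter values are ASSUMED.
E = 200e9          # Pa
L = 1.0            # m
b, t = 0.2, 0.1    # m

I_z = b**3 * t / 12
I_y = b * t**3 / 12

k_yy = 3 * E * I_z / L**3   # equals E*b^3*t / (4*L^3)
k_zz = 3 * E * I_y / L**3   # equals E*b*t^3 / (4*L^3)
print(k_yy, k_zz)
```

For these parameters the script reproduces k_yy = 4×10^7 N/m and k_zz = 1×10^7 N/m.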
In COMSOL Multiphysics, you can set up the 1D model by first choosing a 2D or 3D space dimension and then using either the
Truss or the Beam interface.
Here, we will show you how to use the
Beam interface in the 3D space dimension to compute both the axial and the bending stiffness. The 1D structure will be modeled as an Euler-Bernoulli beam. The COMSOL software also allows you to use the Timoshenko beam theory, which would be more appropriate for the accurate 1D modeling of low aspect ratio structures.
Here is the workflow for obtaining the stiffness from the 1D model:
A snapshot of the 1D model made using the Beam interface. Variables are defined to evaluate the axial stiffness (k_{xx}) and bending stiffnesses (k_{yy} and k_{zz}). An Average Coupling Operator is used to evaluate the displacements at the point x = L. The with() operator is used to fetch the solution from the different load cases that the model is solved for. A snapshot of the boundary conditions used in the Beam interface. The Point Load branch is assigned to the point located at x = L.
In this model, we use a force (point load) of F_0 = 1×10^4 N. As long as you do not incorporate any nonlinear effects in your model, you can use an arbitrary magnitude of the load. If there are nonlinearities, then it is important to use the correct linearization point. Such cases will be discussed in a future blog post.
As shown here, you can create a “switch” using the
if() operator and the names (such as
root.group.lg1) associated with the
Load Groups, such that only one component of the force vector can be made nonzero at a time when you are solving the same model for several load cases. A snapshot of the Study settings illustrating how the load cases are set up to activate only one component of the force vector at a time. A Global Evaluation is used to print the values of k_{xx}, k_{yy}, and k_{zz}. The COMSOL software solutions match the analytical solutions exactly.
The approach shown here for evaluating the stiffness components is applicable as long as we do not expect any coupling between extension and bending (i.e., when the stiffness matrix is diagonal). We will present a more general computational approach in Part 2 of this blog series.
Next, we can solve the same model using the Timoshenko beam theory. As expected, this yields the exact same result for the axial stiffness (k_{xx} = 4×10^9 N/m), but the transverse stiffness will be smaller than what we obtained from the Euler-Bernoulli theory. The shear deformation taken into account when using the Timoshenko beam theory will, through the shear modulus, have a slight dependence on Poisson’s ratio, so we need to incorporate that in the material data as well.
Poisson’s ratio | k_{xx} [N/m] | k_{yy} [N/m] | k_{zz} [N/m]
ν = 0   | 4×10^9 | 3.91×10^7 | 9.94×10^6
ν = 0.3 | 4×10^9 | 3.88×10^7 | 9.92×10^6
Concluding Thoughts
Here, you have seen both analytical and COMSOL solutions to computing stiffness of linear elastic structures in 0D and 1D. Next up, we will talk about 2D and 3D cases.
Editor’s note: We published a follow-up blog post on this topic on 4/4/14. Read Part 2 to learn how to compute the stiffness of linear elastic structures in 2D and 3D.
|
The reason you can't prove your proposed algorithm correct is because... it actually is not a correct algorithm for this problem. If you try running it on a small example of a network-flow graph with more than one distinct min cut, you'll see immediately what goes wrong. In particular, this algorithm fails on
every flow graph that contains more than one min cut, so the problem is not at all subtle.
Perhaps it would be helpful for you to recall properties of the min-cut that is produced by applying the min-cut/max-flow theorem to the flow output by a max-flow algorithm. In particular, define $S$ to be the set of vertices reachable from $s$ along some path in the residual graph, and $T$ to be the set of vertices that can reach $t$ along some path in the residual graph. Then $S \cap T = \emptyset$ and $S \cup T = V$, and $(S,T)$ is a $(s,t)$-cut. In particular, $(S,T)$ is the cut that is selected by the min-cut/max-flow theorem (for this flow). Good.
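As an illustration of that construction (this sketch is mine, not part of the original question), the code below computes a max flow with Edmonds-Karp on a small made-up graph, extracts $S$ as the set of vertices reachable from $s$ in the final residual graph, and checks that the capacity of the cut $(S, V\setminus S)$ equals the max-flow value, as the min-cut/max-flow theorem promises.

```python
from collections import deque, defaultdict

def max_flow_and_source_side(edges, s, t):
    """Edmonds-Karp max flow; returns (flow value, S), where S is the set of
    vertices reachable from s in the final residual graph."""
    res = defaultdict(lambda: defaultdict(int))   # residual capacities
    for u, v, c in edges:
        res[u][v] += c
        res[v][u] += 0        # make sure the reverse residual entry exists

    def bfs_path():
        # Shortest augmenting path s -> t along positive-residual edges.
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return None
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        return path

    flow = 0
    while (path := bfs_path()) is not None:
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

    # S = vertices reachable from s along positive-residual edges.
    S, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v, c in res[u].items():
            if c > 0 and v not in S:
                S.add(v)
                q.append(v)
    return flow, S

# A made-up example graph.
edges = [("s", "a", 3), ("s", "b", 2), ("a", "t", 2), ("b", "t", 3)]
flow, S = max_flow_and_source_side(edges, "s", "t")
cut_capacity = sum(c for u, v, c in edges if u in S and v not in S)
print(flow, S, cut_capacity)
```

On this example the max flow is 4, $S = \{s, a\}$, and the cut capacity across $(S, V\setminus S)$ is also 4, as expected.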
Now notice that $T$ is exactly the set of vertices that are reachable from $t$ in the reverse of the residual graph. Therefore, your proposed algorithm amounts to finding a max-flow, computing the sets $S$ and $T$, then checking whether $S$ and $T$ are the same cuts. But the only way to interpret $S$ as a cut is as the cut $(S,V\setminus S)$, and the only way to interpret $T$ as a cut is as the cut $(V\setminus T, T)$ -- and these are always exactly the same cut! In particular, $V \setminus S = T$ and $V \setminus T = S$, so both of these cuts will always be exactly the same cut -- even if the flow graph admits multiple different cuts, your procedure will only find one of them.
In short, your proposed lemma is wrong, and the method you suggest is not a correct algorithm for this problem. The good news is that it
is possible to build a correct algorithm for this problem, within the running time that you specify; see my comments for more about how to do that -- but since you said in the question you don't want some other algorithm for this task, you just want to know if your proposed algorithm is correct, I won't try to elaborate in any further depth.
|
I need to solve this equation (find $\lambda$) using numerical methods:
$\displaystyle N_0e^{\lambda}+v\frac{e^{\lambda}-1}{\lambda}-N_1 = 0$
All other terms are constant and known.
N0 = 1000000; v = 435000; N1 = 1564000;
I need to solve it using bisection, false position, Newton-Raphson, and fixed-point iteration. I have already solved it using the first three approaches; however, I am stuck on the fixed-point method. I cannot find a function such that finding its fixed point represents solving the equation and that also satisfies $|g'(\alpha)| < 1$, where $\alpha$ is near the root (the condition that must be satisfied for the method to converge). I have tried several combinations, like dividing by $N_0$ and changing the starting approximation (I know that the root is around 0.1). Could somebody help me?
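For what it's worth, one candidate rearrangement (my own suggestion, not from the question) is to solve the $N_0 e^{\lambda}$ term for $\lambda$, giving $g(\lambda)=\ln\!\big((N_1 - v(e^{\lambda}-1)/\lambda)/N_0\big)$; numerically, $|g'|$ appears to be around 0.2 near the root, so the iteration converges from a start near 0.1:

```python
import math

# Constants from the question.
N0, v, N1 = 1_000_000, 435_000, 1_564_000

def f(lam):
    """Residual of the original equation; a root lam makes this zero."""
    return N0 * math.exp(lam) + v * (math.exp(lam) - 1) / lam - N1

def g(lam):
    """One candidate fixed-point map: solve N0*e^lam = N1 - v*(e^lam - 1)/lam."""
    return math.log((N1 - v * (math.exp(lam) - 1) / lam) / N0)

lam = 0.1                 # starting guess near the known root
for _ in range(50):
    lam = g(lam)

print(lam, f(lam))
```

A fixed point of g is by construction a root of f, and the iteration settles around 0.101.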
|
Suppose $f:\mathbb{R}\to\mathbb{R}$ attains a local maximum at every point. It's a fact that a real function can have only countably many local maximum values, so if we assume $f$ is continuous, then $f$ must be constant. My question is: if $f$ isn't continuous, can we prove there must be some interval on which $f$ is constant?
I think your conclusion is right. I've written a proof, please help me check if it's right.
Since "local maximum values can only be countable", we assume they are $\{a_n\}_n$. And let $F_n=\{f=a_n\}$. Then $\mathbb{R}=\bigcup_{n\geq1}F_n$.
Due to
Baire's theorem, there is a $n_0$ such that $F_{n_0}$ is dense in an open interval (expressed as $U$).
Because $\{f=a_{n_0}\}$ is dense in $U$, it's easy to prove that $f(x)\geq a_{n_0}$ in $U$.
Assume that $x_0\in\{f=a_{n_0}\}$ is not an interior point of $\{f=a_{n_0}\}$ in $U$. In other words, there exists a sequence $\{x_n\}_n$ with $\{x_n\}_n\cap\{f=a_{n_0}\}=\emptyset$ such that $x_n\to x_0$. But this is impossible: since $f\geq a_{n_0}$ on $U$ and $f(x_n)\neq a_{n_0}$, we would have $f(x_n)>a_{n_0}=f(x_0)$ with $x_n\to x_0$, contradicting that $x_0$ is a local maximum.
Then we know $\{f=a_{n_0}\}$ has an interior point $x_0$ and we arrive at your conclusion. What's more, since $x_0$ is arbitrary, we know that $F_{n_0}\cap U$ is open too.
Suppose that $ f\colon {\mathbb R}\to{\mathbb R}$ is a function such that it has a local maximum at each point of $[a,b]$ but it is not constant on any interval.
By induction we can construct a sequence of points $x_i$ and closed intervals with the properties:
$x_i$ is a local maximum on $I_i$; $f(x_{i+1})<f(x_i)$; $I_{i+1}\subseteq I_i$; $\operatorname{diam} (I_i)\searrow 0$.
Inductive step: Since $f$ is not constant on $I_i$, there is a point $x_{i+1}\in I_i$ such that $f(x_{i+1})<f(x_i)$. Since $f$ has a local maximum at $x_{i+1}$, there is a neighborhood of $x_{i+1}$ on which the values are at most $f(x_{i+1})$. We can take a smaller closed interval $I_{i+1}$ in this neighborhood such that, at the same time, $I_{i+1}\subseteq I_i$ and $\operatorname{diam} (I_i) \le 2^{-i}$.
By the Cantor Intersection Theorem there is a unique point $x\in\bigcap\limits_{i\in\mathbb N} I_i$. Clearly, $\lim\limits_{i\to\infty} x_i=x$. We also have $f(x)<f(x_i)$ for each $i\in\mathbb N$ (since $x\in I_{i+1}$ gives $f(x)\le f(x_{i+1})<f(x_i)$). So $x$ is not a local maximum, which is a contradiction.
|
I am reading the book Nonlinear Systems by Hassan Khalil (Chapter 8.1, Center Manifold Theorem).
In Example 8.2 the author states a system
$$\dot{y}=yz$$ $$\dot{z}=-z+ay^2,$$
in which $a\in \mathbb{R}$.
As the linearized system at $(0,0)$ is already diagonal and $y$ is associated with the zero eigenvalue, we can use $z=h(y)$; plugging this into the second equation, we obtain the center manifold equation
$$\dfrac{\partial h(y)}{\partial y}\dot{y}+h(y)-ay^2=0.$$
Using the first equation and $z=h(y)$ we obtain:
$$\dfrac{\partial h(y)}{\partial y}yh(y)+h(y)-ay^2=0.$$
First, he assumes only that $h(y)=O(|y|^2)$; using this in the first equation, we obtain
$$\dot{y}=yO(|y|^2)=O(|y|^3)$$
from which it is not possible to draw any conclusions.
Now, he assumes that $h(y)=h_2y^2+O(|y|^3)$. We have to determine the coefficient $h_2$ by using the center manifold equation which yields $h_2=a$. Using this for the first equation we obtain
$$\dot{y}=ay^3+O(|y|^4).$$
The author now states that $a<0$ leads to an asymptotically stable origin and $a>0$ to an unstable equilibrium point at the origin.
Question 1: From a $y$–$\dot{y}$ plot it is evident that the origin is asymptotically stable for $a<0$. Is there a more rigorous way to show this? I tried to construct a Lyapunov function $$V(y)=0.5y^2 \implies \dot{V}=y\dot{y}=ay^4+O(|y|^5).$$
I know that for the case of multivariable functions $V$ the higher order terms do matter for the assessment of positiveness/negativeness. But I don't see a reason why this should not be true for the single variable case. Hence, I would think that using this Lyapunov function would be a rigorous way to show the asymptotic stability.
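A crude numerical check (my own, not from the book) supports this conclusion: simulating the full two-dimensional system with forward Euler for $a=-1$ shows the trajectory collapsing onto $z\approx -y^2$ (the center manifold) and $y$ decaying as the reduced equation $\dot y = ay^3$ predicts.

```python
# Forward-Euler simulation of y' = y*z, z' = -z + a*y^2 with a = -1,
# checking that a trajectory near the origin decays (asymptotic stability)
# and settles onto the approximate center manifold z = -y^2.
a = -1.0
y, z = 0.5, 0.25          # made-up initial condition off the manifold
dt, steps = 0.01, 20_000  # integrate to t = 200

for _ in range(steps):
    y, z = y + dt * (y * z), z + dt * (-z + a * y * y)

print(y, z)
```

The decay is slow (roughly $y \sim 1/\sqrt{2t}$), consistent with the cubic reduced dynamics rather than exponential linear decay.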
Then the author also investigates the case $a=0$. It is stated that this implies that $h(y)=0$ and thus $\dot{y}=0$ which implies a stable origin for the nonlinear system.
If I use the center manifold equation ($a=0$) I obtain
$$\dfrac{\partial h(y)}{\partial y}yh(y)+h(y)=0 \implies \left[h'(y)y+1 \right]h(y)=0.$$
Question 2: The solutions to this equation are $h(y)=0$ and $h(y)=\ln\dfrac{C}{y}$. Do we discard the second solution because it does not fulfil $h(0)=h'(0)=0$ and is not defined at $y=0$?
|
I was stuck on (b) until I realized that the inequality is misstated: it should be
$$\sum_{i=1}^k\frac{a_{N+i}}{s_{N+i}}\ge 1-\frac{s_N}{s_{N+k}}\;.$$
To see this, note that $\sum_{i=1}^ka_{N+i}=s_{N+k}-s_N$, and the partial sums are increasing, so
$$\sum_{i=1}^k\frac{a_{N+i}}{s_{N+i}}\ge\sum_{i=1}^k\frac{a_{N+i}}{s_{N+k}}=\frac1{s_{N+k}}\sum_{i=1}^ka_{N+i}=\frac{s_{N+k}-s_N}{s_{N+k}}=1-\frac{s_N}{s_{N+k}}\;.$$
By hypothesis $\lim\limits_{n\to\infty}s_n=\infty$, so for any fixed $N\in\Bbb Z^+$ we have $$\lim_{k\to\infty}\left(1-\frac{s_N}{s_{N+k}}\right)=1\;.$$
Thus, for each $N\in\Bbb Z^+$ there is a $k(N)\in\Bbb Z^+$ such that $k(N)>N$ and
$$\sum_{n=N+1}^{k(N)}\frac{a_n}{s_n}\ge\frac12\;.$$
Let $N_1=1$, and for $i\in\Bbb Z^+$ let $N_{i+1}=k(N_i)$. Then for each $m\in\Bbb Z^+$ we have
$$\sum_{n=1}^{N_m}\frac{a_n}{s_n}=\frac{a_1}{s_1}+\sum_{i=1}^{m-1}\sum_{n=N_i+1}^{N_{i+1}}\frac{a_n}{s_n}\ge 1+(m-1)\cdot\frac12=\frac{m+1}2\to\infty\quad\text{as}\quad m\to\infty\;,$$
and $\sum_{n\ge 1}\frac{a_n}{s_n}$ does indeed diverge.
I’d say that this one (in the corrected version) is of medium difficulty. Proving the inequality is the harder part, since its purpose should be fairly apparent: $\sum_{i=1}^k\frac{a_{N+i}}{s_{N+i}}$ is clearly the sum of the segment of the series from the $(N+1)$-st term through the $(N+k)$-th term, and the inequality says that we can bound it away from zero, so that we can prove divergence using the same technique that’s often used to show that the harmonic series is divergent.
For (c) the first thing that comes to mind is to combine the terms on the righthand side of the inequality into a single fraction to see what we get, and once that’s done, the rest turns out to be easy:
$$\frac1{s_{n-1}}-\frac1{s_n}=\frac{s_n-s_{n-1}}{s_{n-1}s_n}=\frac{a_n}{s_{n-1}s_n}\ge\frac{a_n}{s_n^2}\;,$$
since $s_{n-1}\le s_n$ and therefore $s_{n-1}s_n\le s_n^2$.
Summing terms like $\frac1{s_{n-1}}-\frac1{s_n}$ is easy, because they telescope, so let’s just replace the original series by an upper bound for it:
$$\begin{align*}\sum_{n\ge 1}\frac{a_n}{s_n^2}&=\frac1{a_1}+\sum_{n\ge 2}\frac{a_n}{s_n^2}\\\\&\le\frac1{a_1}+\sum_{n\ge 2}\left(\frac1{s_{n-1}}-\frac1{s_n}\right)\\\\&=\frac1{a_1}+\frac1{s_1}-\frac1{s_2}+\frac1{s_2}-\frac1{s_3}+-\ldots\\\\&=\frac1{a_1}+\frac1{s_1}\\\\&=\frac2{a_1}\;.\end{align*}$$
I did have to make a slight adjustment at the beginning to account for the lack of a non-zero $s_0$, but otherwise it was just using what was there in front of me; I’d classify this one as fairly easy.
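The telescoping bound is easy to check numerically (my own sketch): for an arbitrary positive sequence, the partial sums of $\sum a_n/s_n^2$ stay below $\frac1{a_1}+\frac1{s_1}$.

```python
import random

# Arbitrary positive terms a_n; the bound must hold for any such sequence.
random.seed(1)
a = [random.uniform(0.1, 2.0) for _ in range(10_000)]

s, partial = 0.0, 0.0
for an in a:
    s += an                  # s_n = a_1 + ... + a_n
    partial += an / (s * s)  # running partial sum of a_n / s_n^2

bound = 1 / a[0] + 1 / a[0]  # 1/a_1 + 1/s_1, and s_1 = a_1
print(partial, bound)
```

The partial sums sit strictly below the bound, exactly as the telescoping argument guarantees.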
For (d) note that if $a_n=\frac1n$, the first series diverges, and the second converges. Since the harmonic series is kind of a borderline divergent series, this suggests that the first series might always diverge and the second always converge. However, my attempts to prove that the first series must diverge foundered on the unpredictable behavior of the sequence $\langle a_n:n\in\Bbb Z^+\rangle$, and I eventually realized that $\sum_{n\ge 1}a_n$ might diverge because of a very sparse (but infinite) set of large terms, while the other terms converge to $0$ very rapidly, something like this:
$$a_n=\begin{cases}n,&\text{if }n=k!\text{ for some }k\in\Bbb Z^+\\2^{-n},&\text{otherwise}\;.\end{cases}$$
The first case of that definition ensures that $\sum_{n\ge 1}a_n$ diverges. Now we have
$$\frac{a_n}{na_n+1}=\frac1{n+\frac1{a_n}}=\begin{cases}\frac1{n+\frac1n},&\text{if }n=k!\text{ for some }k\in\Bbb Z^+\\\\\frac1{n+2^n},&\text{otherwise}\;.\end{cases}$$
$\sum_{n\ge 1}\frac1{n+2^n}$ is certainly convergent, and
$$\sum_{n\ge 1}\frac1{n!+\frac1{n!}}\le\sum_{n\ge 0}\frac1{n!}=e\;,$$
so in this case $\sum_{n\ge 1}\frac{a_n}{na_n+1}$ converges. Thus, we cannot in general say anything about the first series.
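The counterexample can also be checked numerically (a sketch of mine): partial sums of $\sum a_n$ jump at each factorial index, while partial sums of $\sum \frac{a_n}{na_n+1}$ stay below roughly $e+1$, consistent with the two bounds above.

```python
import math

# The counterexample: a_n = n if n = k! for some k, else 2^-n.
LIMIT = 100_000
factorials = set()
f, k = 1, 1
while f <= LIMIT:
    factorials.add(f)
    k += 1
    f = math.factorial(k)

sum_a, sum_ratio = 0.0, 0.0
for n in range(1, LIMIT + 1):
    an = float(n) if n in factorials else 2.0 ** -n
    sum_a += an                       # partial sum of the divergent series
    sum_ratio += an / (n * an + 1)    # partial sum of the transformed series

print(sum_a, sum_ratio)
```

Up to 100,000 terms, the first partial sum already exceeds 46,000 (driven almost entirely by the factorial indices), while the second stays comfortably bounded.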
The second series, on the other hand, succumbs to the comparison test:
$$\frac{a_n}{a+n^2a_n}=\frac1{\frac{a}{a_n}+n^2}<\frac1{n^2}\;,$$
so it converges. (I am assuming here that $a>0$; it’s actually $1$ in my edition.)
Open-ended questions are always a bit harder, but I’d say that the second part of (d) isn’t bad. The first part, on the other hand, is probably the hardest part of the whole question.
|
A loan was taken out on 1 September 1998 and was repayable by the following scheme: The first repayment was made on 1 July 1999 and was £1000. Thereafter, repayments were made on 1 November 1999, 1 March 2000, 1 July 2000, 1 November 2000, etc until 1 March 2004, inclusive (note that the loan was fully repaid on 1 March 2004). Each payment was 5% greater than its predecessor. The effective rate of interest throughout the period was 6% per annum.
1)Calculate the effective rate of interest per month j;
2)Show that the amount of loan was £17692 to the nearest pound.
3)Calculate the amount of capital repaid on 1 July 1999.
4)Calculate both the capital component and the interest component of the seventh repayment (1 July 2001)
My attempts:
Take $\Delta t=4$ months.
1) $1+0.06=(1+i)^{12}$, so the effective monthly rate is $i=0.486755\%$.
2) The first payment at $t=10$ months (1 July 1999) has present value $1000 u^{2.5}$, where $u=(1+i)^{-4}$ is the discount factor per period $\Delta t=4$ months.
This means that the next payment, at $t=14$ months, has present value $1000(1+0.05)\times u^{3.5}$.
In total we have to make 15 payments, and the 15th payment has present value $1000(1+0.05)^{14}\times u^{16.5}$.
Factorize to get the present value: $$PV=1000u^{2.5}\left(1+1.05u+(1.05u)^2+\dots+(1.05u)^{14}\right)$$
I can rewrite $1+1.05u+(1.05u)^2+\dots+(1.05u)^{14}$ as $\sum_{k=0}^{14} (1.05u)^k=\frac{1-(1.05u)^{15}}{1-1.05u}$
But when I do the computations I seem to get $PV=20520$
3) Simply the payment (£1000) minus the interest ($((1+i)^{10}-1)\times 17692$), correct? 4) Find the loan outstanding after the 6th payment. This is $17692-1000u^{2.5}(1+1.05u+(1.05u)^2+\dots+(1.05u)^5)$ valued at time 0. Calculate the interest paid by multiplying by the appropriate periodic rate and deduct this from the 7th payment, which is $1.05^6\times 1000$.
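A short script (my own check, not an official solution) confirms parts 1-3 under the convention that payment $k$ ($k=0,\dots,14$) of $1000\times1.05^k$ falls $10+4k$ months after 1 September 1998; the present value comes out within a pound or two of £17692, with the small residual presumably down to rounding conventions.

```python
# Parts 1-3 of the loan question, working in months throughout.
j = 1.06 ** (1 / 12) - 1   # part 1: monthly effective rate, (1+j)^12 = 1.06
u = 1 / (1 + j)            # monthly discount factor

# Part 2: payment k (k = 0..14) of 1000*1.05^k is made 10 + 4k months
# after the loan date, so discount each one back to 1 Sep 1998.
PV = sum(1000 * 1.05 ** k * u ** (10 + 4 * k) for k in range(15))

# Part 3: capital in the first repayment = payment minus the interest
# accrued on the whole loan over the first 10 months.
interest_1 = PV * ((1 + j) ** 10 - 1)
capital_1 = 1000 - interest_1
print(round(PV), round(capital_1, 2))
```

The interest component of the first repayment is roughly £880, leaving about £120 of capital.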
|
Watching a video by the "mathematicalmonk" on the web, I was wondering how to answer this kind of questions:
Given $X_1,\ldots,X_n\sim \mathcal{N}\left(\mu,\sigma^2\right)$. Assume that $\mu$ is unknown. Using the square loss $\mathcal{L}\left(a,b\right)=\left(a-b\right)^2$:
1) Does $\hat{\sigma}^2=\frac{1}{n}\sum_{i=1}^n(X_i-\bar{X})^2$ dominate* $s^2=\frac{n}{n-1}\hat{\sigma}^2$? Or vice versa?
2) Are $\hat{\sigma}^2$ and $s^2$ admissible**?
3) Is there a "best" $s_c^2=c\hat{\sigma}^2$ for some $c\geq 0$?
*given decision rules $a,b$, $a$ dominates $b$ if $\mathcal{R}(\theta,a)\leq\mathcal{R}(\theta,b),\,\, \forall \theta \in\Theta$ and $\mathcal{R}(\theta,a)<\mathcal{R}(\theta,b),\,\,$ for some $\theta \in\Theta$, where $\mathcal{R}(\theta,f)=\mathbb{E}\left[\mathcal{L}\left(\theta,f\left(D\right)\right)|\theta\right]$, and $D$ is the sample.
**A decision rule $a$ is inadmissible if there is another decision rule which dominates $a$. Otherwise, $a$ is admissible.
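For the normal model these questions reduce to a one-dimensional calculation (a sketch of the standard computation, not from the video): with $SS=\sum_i(X_i-\bar X)^2\sim\sigma^2\chi^2_{n-1}$, the squared-error risk of $c\,SS$ is $\sigma^4\big(c^2(2(n-1)+(n-1)^2)-2c(n-1)+1\big)$, so the ordering of estimators in this family does not depend on $(\mu,\sigma^2)$.

```python
# Exact risk (MSE) of the scale family c*SS, in units of sigma^4, where
# SS ~ sigma^2 * chi^2_{n-1}, using E[SS/sigma^2] = m and Var[SS/sigma^2] = 2m.
def mse(c, n):
    m = n - 1
    return c * c * (2 * m + m * m) - 2 * c * m + 1

n = 10
r_best = mse(1 / (n + 1), n)      # minimizer of the quadratic in c
r_mle = mse(1 / n, n)             # hat{sigma}^2 (divide by n)
r_unbiased = mse(1 / (n - 1), n)  # s^2 (divide by n-1)
print(r_best, r_mle, r_unbiased)
```

Since the risk ordering holds uniformly in $\theta$, dividing by $n$ beats dividing by $n-1$ (so $\hat\sigma^2$ dominates $s^2$, answering 1), and the best choice in the family is to divide $SS$ by $n+1$ (answering 3); in particular, both $\hat\sigma^2$ and $s^2$ are dominated and hence inadmissible.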
|
Until now we have talked about our everyday experience of magnetic fields originating from what are called
permanent magnets. But magnetic fields and electric charges are intimately linked, as we will soon see in greater detail. For now, we will switch gears and consider how a charged particle behaves in a magnetic field, and in particular what force it feels.
We do not really care
how the magnetic field got there just yet, and for simplicity’s sake we will imagine that the field was created by a magnet. This ignores the interesting question of what makes a magnet (a question we return to later). In other words, we are going to start with the field model of magnetic fields: \[\text{Irrelevant Thing} \xrightarrow{\text{creates}} \text{Field }\mathbf{B} \xrightarrow {\text{exerts force on}} \text{ Charge } q\] Magnetic forces are, of course, vectors with both magnitude and direction. We will begin by analyzing their magnitude, and for now put aside the discussion of direction for "Right-Hand Rule". Magnetic Force on a Moving Charge
Experiments demonstrate that the magnetic force exerted on a charged particle
depends on its velocity. If a charge is not moving, it feels no force from the magnetic field! The magnitude of the magnetic force on a charge \(q\) traveling with velocity \(\mathbf{v}\) is given by \[| \mathbf{F}_{\mathbf{B} \text{ on } q} | = |q| |\mathbf{v}| |\mathbf{B}| |\sin \theta|\] where \(\theta\) is the angle between the velocity \(\mathbf{v}\) and the magnetic field \(\mathbf{B}\). We've taken the absolute value of every quantity in the above formula because for now we're only considering magnitudes.
Notice that if the velocity \(\mathbf{v}\) and the magnetic field \(\mathbf{B}\) point in the same direction or in opposite directions, then \(\theta\) takes a value that gives \(\sin \theta=0\) and the magnetic force is zero. In fact, to calculate the magnetic force all we need to know is the component of velocity
perpendicular to the \(\mathbf{B}\) field. This is why we include the \(\sin \theta\) term in the above formula. Another way of rewriting the force from the magnetic field is \[| \mathbf{F}_{\mathbf{B} \text{ on } q} | = |q| |\mathbf{v}_{\perp}| |\mathbf{B}| \]This is very similar to calculations of the magnitude of torque from 7B, in which only the component of force perpendicular to the lever arm mattered. Below, both approaches for calculating magnetic force on the same setup are shown.
Right-Hand Rule
When we looked at gravitational fields, we learned that the gravitational force was always in the same direction as the field. When looking at the electric field, we learned that the force for positive and negative charges was with the field and against the field, respectively. The magnetic field is quite different; the force on a charged particle
never points in the direction of the magnetic field.
We noted that the velocity \(\mathbf{v}\) of the charged particle determined the magnitude of the magnetic force. It is also needed to determine the direction of the magnetic force. Experimentally, we find that the magnetic force on a test charge is at 90° to the magnetic field \(\mathbf{B}\), and 90° to the velocity of the test charge \(\mathbf{v}\). If the magnetic field \(\mathbf{B}\) and velocity \(\mathbf{v}\) are not pointing in the same direction, then there are only two possible vectors that are at 90° to both of these directions. One of these directions is the magnetic force on a positively-charged particle; the other is the direction of the force on a negatively-charged particle. This illustration should make things more clear (you may want to review the appendix on vector conventions):
To correctly pick the direction for the positive particle, we use
right hand rule #2 (RHR #2). Point your right thumb in the direction the particle is going (\(\mathbf{v}\)) and your right index finger in the direction that the \(\mathbf{B}\) field is going; your bent middle finger will then point in the direction of the magnetic force. The mnemonic “very bad finger” and the diagram below may help you remember the order:
Compare the hand figure with the pictures above it, and you'll see that RHR #2 gives us the direction a positively charged particle would feel from the field. To perform RHR for a negative charge, find the direction of the magnetic force on a positive charge and reverse it.
All of the above was done under the assumption that the magnetic field \(\mathbf{B}\) and the velocity \(\mathbf{v}\) were in different directions. How do we determine the direction if we decide to fire a charge in exactly the same direction as the magnetic field? The answer is that it does not matter: as we learned before, if the velocity and magnetic field are exactly parallel (or anti-parallel), the angle \(\theta\) between them is an integer multiple of \(\pi\), so \(\sin \theta\) is zero and the magnetic force vanishes.
With both the direction and the magnitude determined, we can now summarize the magnetic force on a test charge \(q\) traveling with speed \(\mathbf{v}\): \[\mathbf{F}_{\mathbf{B} \text{ on } q} = \begin{cases} \text{Magnitude} & = |q| |\mathbf{v}| |\mathbf{B}| |\sin \theta| = |q| |\mathbf{v}_{\perp}| |\mathbf{B}| \\ \text{Direction} & = \text{ Use RHR if } q \text{ is positive, swap direction if }q \text{ is negative} \end{cases}\]
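The boxed rule can be checked with a small script: the vector force \(q\,\mathbf{v}\times\mathbf{B}\) has magnitude \(|q||\mathbf{v}||\mathbf{B}||\sin\theta|\) and is perpendicular to both vectors. The numbers below (a positive charge moving along \(x\) through a field along \(y\)) are made up for illustration.

```python
import math

# Hypothetical numbers: a proton-like charge moving along x, field along y.
q = 1.6e-19                 # C
v = (1.0e5, 0.0, 0.0)       # m/s
B = (0.0, 0.5, 0.0)         # T  (theta = 90 degrees between v and B)

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(a):
    return math.sqrt(sum(x * x for x in a))

F = tuple(q * c for c in cross(v, B))   # F = q v x B; here it points along +z
print(F, norm(F))
```

With \(\theta = 90^\circ\), the magnitude reduces to \(|q||\mathbf{v}||\mathbf{B}|\), and the direction (+z) matches RHR #2 for a positive charge.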
Magnetic Force on a Wire Carrying Current
Regardless of whether a charge is in a vacuum or inside a material, it will experience a force when it moves across magnetic field lines. Therefore, a wire with a current (composed of many individual moving charges) can also feel a force when placed in a magnetic field. That force is the vector sum of all the forces acting on the charge carriers that are individually moving in the wire.
Consider a straight wire segment of length \(L\) with a current \(I\) flowing from left to right, placed on the page. Imagine that there is a \(\mathbf{B}\) field in this region that makes an angle \(\theta\) with respect to the wire. If the charges in the wire are moving at an average speed \(v\), the time they need to travel the length \(L\) is \[t = L/v\]The amount of charge that flows in this time is \[q = It = IL/v\]Therefore the force exerted on the wire is
\[F = qvB \sin \theta = \left( \dfrac{IL}{v} \right) vB \sin \theta = I L B \sin \theta\]
The direction of the magnetic force on a wire is also given by the same right-hand rule used for single charges. Note that the moving charges in the wire are electrons (negatively charged), which travel against the direction of \(I\), since conventional current is defined as the movement of positive charge. Either way, performing the RHR on the conventional current, or on the electrons separately, gives the same answer.
|
In standard cosmology models (Friedmann equations which your favorite choice of DM and DE), there exists a frame in which the total momenta of any sufficiently large sphere, centered at any point in space, will sum to 0 [1] (this is the reference frame in which the CMB anisotropies are minimal). Is this not a form of spontaneous Lorentz symmetry breaking ? While the underlying laws of nature remain Lorentz invariant, the actual physical system in study (in this case the whole universe) seems to have given special status to a certain frame.
I can understand this sort of symmetry breaking for something like say the Higgs field. In that situation, the field rolls down to one specific position and "settles" in a minima of the Mexican hat potential. While the overall potential $V(\phi)$ remains invariant under a $\phi \rightarrow \phi e^{i \theta}$ rotation, none of its solutions exhibit this invariance. Depending on the Higgs model of choice, one can write down this process of symmetry breaking quite rigorously. Does there exist such a formalism that would help elucidate how the universe can "settle" into one frame ? I have trouble imagining this, because in the case of the Higgs the minima exist along a finite path in $\phi$ space, so the spontaneous symmetry breaking can be intuitively understood as $\phi$ settling
randomly into any value of $\phi$ where $V(\phi)$ is minimal. On the other hand, there seems to me to be no clear way of defining a formalism where the underlying physical system will randomly settle into some frame, as opposed to just some value of $\phi$ in a rotationally symmetric potential.
[1] The rigorous way of saying this is: there exists a reference frame S such that for all points P that are immobile in S (i.e., $\vec{r_P}(t_1) = \vec{r_P}(t_2)\ \forall (t_1, t_2)$, where $\vec{r_P}(t)$ is the spatial position of P in S at a given time $t$), and any arbitrarily small $\epsilon$, there will exist a sufficiently large radius R such that the sphere of radius R centered on P will have total momentum $|\vec p|$ satisfying $|\vec p|\,c / E_k < \epsilon$ (where $E_k$ is the total kinetic energy contained in the sphere).
Ben Crowell gave an interesting response that goes somewhat like this:
Simply put, then: causally disconnected regions of space did not have this same "momentumless frame" (let's call it that unless you have a better idea), inflation brings them into contact, the boost differences result in violent collisions, the whole system eventually thermalises, and so today we have vast swaths of causally connected regions that share this momentumless frame.
Now for my interpretation of what this means. In this view, this seems to indeed be a case of spontaneous symmetry breaking, but only locally speaking, because there should be no reason to expect that a distant causally disconnected volume have this same momentumless frame. In other words the symmetry is spontaneously broken by the random outcome of asking "in what frame is the total momentum of these soon to be causally connected volumes 0?". If I'm understanding you correctly, this answer will be unique to each causally connected volume, which certainly helps explain how volumes can arbitrarily "settle" into one such frame. I'm not sure what the global distribution of boosts would be in this scenario though, and if it would require some sort of fractal distributions to avoid running into the problem again at larger scales (otherwise there would still be some big enough V to satisfy some arbitrarily small total momentum).
|
Taking$$m\frac{d^2x}{dt^2} = - kx - \gamma v + F(t) + \eta$$and writing this as$$\mathrm{d}\mathbf{x}_t= A\mathbf{x}_t\mathrm{d}t + \mathbf{F}_t\mathrm{d}t + \sigma\mathrm{d}W_t$$where $\mathbf{x}_t = (x, v)^\mathrm{T}$, $A = \begin{pmatrix}0 & 1 \\ -\frac{k}{m} & -\frac{\gamma}{m}\end{pmatrix}$, $\mathbf{F}_t = (0, F(t))^\mathrm{T}$, $\sigma = (0, \sqrt{2 \gamma k_BT}/m)^\mathrm{T}$.
Solving this, as usual, $$\mathbf{x}_t = e^{tA}\mathbf{x}_0 + \int_0^t e^{-(s-t)A}\mathbf{F}_s\mathrm{d}s + \int_0^t e^{-(s-t)A}\sigma\mathrm{d}W_s$$
The general solution here is a bit messy thanks to the matrix exponential, but if you set $k = 0$ it all simplifies a great deal and you recover the Ornstein-Uhlenbeck process.
Now, I don't have a proof for this; I'm guessing that, at least under typical conditions, the integrated process $\int_0^t \int_0^{t'} f(s,t') \mathrm{d}W_s\mathrm{d}t'$ has a lower variance than $\int_0^t f(s,t) \mathrm{d}W_s$, which I think is equivalent to the statement $\left(f(s,t)\right)^2 > \left(\int_s^tf(s,t')\mathrm{d}t'\right)^2$. In any case, testing with simulations, it seemed quite difficult to recover the temperature from the variance of $x_t$: I computed $x_{t+\Delta t}$ given $x_t$ using the formula above, then took the variance of the difference between the thus-predicted $x_{t+\Delta t}$ and the actual $x_{t+\Delta t}$. This still left a residual term due to the external force, perhaps because of numerical noise (in the sense that Euler-Maruyama, the method I used, does not match the way I computed the integrals accurately enough). This is all to say that this approach is quite sensitive to noise. It worked much better for the velocity (again, as its variance is larger),
$$\operatorname{Var}(v_{t+\Delta t} - v_t) = \int_0^{\Delta t} \left((0, 1)e^{-(s-\Delta t)A}\sigma\right)^2\mathrm{d}s$$
which as you can see depends linearly on $T$.
If you don't need a very automated process of doing this, you can probably get rid of the residuals in a more manual fashion.
|
Let $\{X_{nk}\}$ be an array of rowwise independent, but not necessarily identically distributed, random variables with $EX_{nk}=0$ for all $k$ and $n$. In this paper, we provide a domination condition under which… (More)
Let $\{X_n, n \geq 1\}$ be a sequence of stochastically dominated pairwise independent random variables. Let $\{a_n, n \geq 1\}$ and $\{b_n, n \geq 1\}$ be sequences of constants such that $a_n \neq 0$ and $0
Let $\{X, X_n, n \geq 1\}$ be a sequence of independent and identically distributed (i.i.d.) random variables. Let $\{a_{ni}, 1 \leq i \leq n, n \geq 1\}$ be a triangular array of constants. The purpose… (More)
Call centers have become the prevalent contact points between companies and their customers. By virtue of recent advances in communication technology, the number and size of call centers have grown… (More)
The rate of convergence for an almost surely convergent series of independent random variables is studied in this paper. More specifically, when $S_n$ converges almost surely to a random variable $S$, the… (More)
The Hsu–Robbins–Erdős theorem states that if $\{X_n\}$ is a sequence of independent and identically distributed random variables, then $EX_1=0$ and $E{X_1}^2<\infty$ if and only if… (More)
In this paper, we obtain an analogue of the law of the iterated logarithm for an array of independent, but not necessarily identically distributed, random variables under some moment conditions of the… (More)
|
I would disagree on a few counts.
I don't think it's "probably" possible to prove $P \neq NP$. I certainly don't think it's impossible, but Gödel's incompleteness theorems tell us that there are some sentences in a logical system which are true but which can't be proven.
You ask if there have been proofs to Computer Science theories. There have been thousands at the very least. These vary from insignificant to huge. I'll mention some of the most relevant (aka my favorite) ones here.
- Some problems cannot be solved by ANY algorithm. No matter what. Ever. An example of this is determining if a computer program will run forever. There is no way to look at a program and be guaranteed a yes/no answer for whether it will run forever.
- The functions which can be computed by a Turing Machine are the same as the functions which can be computed by the lambda calculus, which are the same as those of practically every programming language. Basically, though speed might differ, all Turing-complete programming languages (which is most of them) can solve the same set of problems.
- The types of a computer program are directly related to its proof of correctness. This is called the "Curry-Howard Correspondence", and is quite complex, so I won't go into the details here. Note that by types I mean things like integer, string, list, array, etc.
- A list of real numbers can't be sorted in fewer than $\Omega (n\log n)$ comparisons. This means that no matter what comparison-based algorithm you use, in the worst case it will take approximately $n\log n$ steps to sort that list.
- A large number of problems are, at their core, the same problem. If you've ever heard of NP-complete problems, what that means is that these problems all boil down to satisfiability. They come from graph theory, combinatorics, scheduling, and a variety of areas, but ultimately, they are all just checking if there's some input to a logical formula of ORs and ANDs which will spit out true as the overall result.
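As a taste of the first point, here is the classic diagonalization argument rendered as Python. The function `halts` below is a hypothetical oracle, not something that can actually be written:

```python
def halts(program, argument):
    """Hypothetical halting oracle: True iff program(argument) halts.
    No such function can actually be implemented."""
    raise NotImplementedError

def paradox(p):
    # If p(p) would halt, loop forever; otherwise halt immediately.
    if halts(p, p):
        while True:
            pass

# paradox(paradox) halts if and only if it does not halt -- a
# contradiction, so the oracle halts() cannot exist.
```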
Note that these are all necessarily complex topics, which I have simplified to give you a taste of the sort of things provable in computer science.
You ask how Computer Science theories are proven. The problem is that Computer Science is an incredibly diverse field, and it really depends on your sub-field. Theory of computation, programming language construction and formal AI ("neat" AI) are highly mathematical, and are based heavily on logic and proofs.
However, there are many sub-fields which are much more experimental. Human-Computer Interaction or anything having to do with the human side of computing will rely heavily on studies and experiments involving users. Operating Systems, Graphics and Compilers will rely heavily on performance evaluations, seeing which programs are fastest in the real world, not just on paper.
If you are in Junior High, I think there are many computer science problems which are in your reach. Because it is such a young field, there are many unsolved CS problems which are relatively simple to understand. Unlike physics, where you need tons and tons of calculus background, CS really relies on simple logic. If you can learn induction, logic and the basics of discrete structures (graphs, strings, etc.), I think there are probably lots of concepts that are accessible to a Junior High student.
|
I have an $n$-manifold $M$ which is foliated by leaves $F_\alpha$ of dimension $p$ and a path $\gamma:[0,1]\to M$. You can take without problems $\gamma$ to be injective. Is the following statement true?
Claim:There exists a neighborhood $U$ of the image of $\gamma$ and a foliation $L_\beta$ of $U$ of dimension $n-p$ such that $F_\alpha\pitchfork L_\beta$ for all $\alpha,\beta$.
Basically what I would like to do is to have an extension of a complement of the tangent space to the leaves $F_\alpha$ in $TM$ to the tangent of local submanifolds of complementary dimension. I feel that this should be true, but I'm not sure about how to proceed. Would going to local coordinates (respecting the foliation) in charts around $\gamma$ solve the problem? How could I make the obtained complements patch together correctly?
An easy partial result: We can always find such a complement to the foliation in an appropriate chart. Indeed, by definition of foliation we know that for any point $m\in M$ we have a neighborhood $U$ of $m$ and a chart $\phi:U\to\mathbb{R}^n$ such that the leaves correspond to the $p$-planes of constant $x$, where we decompose $(x,y)\in\mathbb{R}^{n-p}\times\mathbb{R}^p=\mathbb{R}^n$. Then the preimages of the planes of constant $y$ are our complement (they are regular by the inverse function theorem and the usual arguments).
|
Let $\mathscr{T}$ be an elementary topos (I use the definition of a Cartesian closed, finitely complete category with a subobject classifier). Let $a$ be any object (that's not isomorphic to $0$) of $\mathscr{T}$, and $0$ the initial object. Is any arrow of the form $f: 0 \to a$ a monomorphism? I believe the answer is yes, but I can't figure out a proof.
The answer is that they are, but for stupid reasons: the same reason that in $\mathbf{Set}$, any map $\emptyset \to A$ is mono.
Indeed, in an elementary topos $\mathscr{T}$, any object $A$ with a map $A\to 0$ is also $0$, so this map is actually unique.
To prove this, first note that for any object $B$, $B\times 0 \simeq 0$. Indeed, by exponentiation, one has $\hom(B\times 0, X)\cong \hom(0,X^B) \cong \{*\}$, naturally in $X$, so $B\times 0$ is an initial object.
Then note that if you have a map $f:A\to 0$, then you have a map $A\to A\times 0$, namely $(id_A,f)$. Now $\pi_A\circ (id_A,f) = id_A$ by definition, hence $\pi_A\circ (id_A,f)\circ \pi_A = id_A\circ \pi_A = \pi_A=\pi_A\circ id_{A\times 0}$; and $\pi_0\circ (id_A,f)\circ \pi_A = f\circ \pi_A= \pi_0\circ id_{A\times 0}$ because $A\times 0$ is initial, so there is a unique map $A\times 0 \to 0$: any two such maps must agree.
These two equations prove (universal property of the product) that $(id_A,f)\circ \pi_A = id_{A\times 0}$; so that $(id_A,f)$ and $\pi_A$ are a pair of isomorphisms: $A\simeq A\times 0$. But $A\times 0 \simeq 0$, so $A\simeq 0$.
Now we proved that any object with a map $A\to 0$ must be initial, so if there are two maps $f,g: B\to 0$ such that [insert any condition you like], then $f=g$. In particular, any map leaving $0$ is a monomorphism
|
I am trying to prove that for any choice of reals $a$ and $b$ such that $a \neq b$, the function $f(x) := \sin(ax) - \sin(bx)$ has $|f(x)| \geq 1$ for some $x>0$.
(This is not an exercise question but something I am trying to prove for myself. Plots with Wolfram Alpha seem to indicate it might be true, and so counterexamples are very much welcome)
Here is my attempt. I regard $f$ as a function of the three variables $a,b,x$: set $g(a,b,x)=f(x)$. We find the value $x_0$ at which $f$ attains a maximum or minimum, and then show that $|f(x_0)|$ is greater than 1, by zeroing the gradient (pending the 2nd derivative test):
$f_a = x \cos(ax)$
$f_b = -x \cos(bx)$
$f_x = a\cos(ax) - b\cos(bx)$
Since the minima/maxima are conjectured to occur at some $x>0$, this means $\cos(ax)$ and $\cos(bx)$ are both zero and the third equation is redundant.
If $a$ and $b$ are rational then obviously we can find such an $x$. But when one of them is irrational, I don't know how I would prove this.
Another attempt was using the $\sin(u)-\sin(v)$ product formula, but that led nowhere too.
Any pointers?
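Not an answer, but here is a quick numeric check in the spirit of the Wolfram Alpha plots (the $(a,b)$ pairs are arbitrary; this is evidence, not a proof). Since $\sin(ax)-\sin(bx)=2\cos\frac{(a+b)x}{2}\sin\frac{(a-b)x}{2}$, the slow envelope peaks near $x=\pi/|a-b|$, so we sample past that point:

```python
import numpy as np

def max_abs_diff(a, b, n=2_000_000):
    # Sample densely out past the first peak of the slow envelope.
    X = 4 * np.pi / abs(a - b) + 100.0
    x = np.linspace(1e-6, X, n)
    return np.max(np.abs(np.sin(a * x) - np.sin(b * x)))

for a, b in [(1, 2), (1, np.sqrt(2)), (np.pi, 3), (1, 1.0001)]:
    print(a, b, max_abs_diff(a, b))
```

In every case tried the sampled maximum exceeds 1, consistent with the conjecture.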
|
I made a high voltage DC supply which people may be interested in. It uses an Ebay Chinese ZVS resonant driver and a custom transformer, which then feeds a bridge rectifier and output inductor. At 35 V input the no-load output is 1 kV. I ran it for a while at 400V 0.375A with no difficulties (150W). The transformer starts to get hot at that current, but with the fan it can probably run indefinitely. Efficiency is about 85%.
There is a switch to direct output to one of two BNC connectors, and terminals for a current meter. Also visible is a buck converter board which drops the input down to 12V for the fan.
The key active component is the ZVS inverter board which converts the DC input to an 87 kHz sine wave. One of the big virtues of this approach is that you can get the boards on Ebay for \$20 or less each. Fancier boards with fan and current limit are more like \$50. You can google this as the Mazilli driver. The circuit here is very similar to my board bought on Ebay.
The Ebay board uses dual feed inductors so that it does not need a center-tapped load (only two output connections). Also, there are two component changes: the IRFP250N is changed to IRFP260N (which has 1/2 the on resistance), and the resonant capacitor is implemented as two 0.3 uF 1200V caps rather than a series parallel arrangement.
I modified the board to put the output capacitors in series rather than parallel by desoldering the capacitors and flipping them over, adding heavy wire to replace the unused PCB traces. This reduces the capacitance by 4x, doubling the resonant frequency. This was necessary because of the fairly large inductance of the transformer (which I had already built some time ago). Efficiency would probably improve if the operating frequency was further increased by changing the transformer inductance and tank capacitance, since the feed inductors are running much warmer than everything else (core losses are high).
The really nice thing about the Mazilli ZVS inverter is that it is a self-oscillating resonant inverter with output power up into the 100's of watts.
Resonant means that the load (here a transformer) is part of a LC resonant circuit which is run at the resonant frequency. Running at resonance virtually eliminates switching losses in the inverter by turning the MOSFETs on/off at a time in the cycle when either the voltage across the switch or the current through the switch is zero. This can be done because the LC resonance causes the output to oscillate or ring, creating zero crossings which are no-loss switching opportunities. This design is Zero Voltage Switched (ZVS). Self-oscillating means that by means of pure analog magic, the circuit figures out what the resonant frequency is. You can stick a very large variety of output loads on the inverter, and it will just “work”, which is user-friendly. In production resonant converters it is more common to use a driven design where there is a separate oscillator that gets tuned to the resonant frequency by other auxiliary circuits. This avoids the startup problems of the Mazilli ZVS design.
The big flaw with this inverter is that it does not self-start. It requires an abrupt turn-on of the input power in order for the oscillation to start, and the oscillation can stop if the output load is too high (arcing or a short). Being stopped might not seem so bad, except that when the inverter is stopped both transistors turn on, shorting the input power supply. This is basically a hobby-grade circuit, and can't be used reliably without some external protection. At a minimum, you need to run it from a supply with output current limiting, so that nothing blows immediately when the inverter stops. Even with the current limiting, the inverter may burn out if it is powered for a long time while not oscillating, since all of the input power is then being dissipated in the inverter itself, rather than in the load.
I have been running this inverter off of a CC/CV bench supply, which provides current limiting, and also lets you roughly control the output current. The inverter is spec'd for 12V to 30V input. There is some room to push the upper voltage limit, I've gone up to 35V and nothing blew. The page linked above suggests 10-40 V input.
The transformer is the trickiest component here. High-frequency power transformers are basically never off-the-shelf components because there are too many variables that need to be optimized simultaneously: input voltage, output voltage, operating frequency, mechanical shape, efficiency. Designing a resonant transformer adds an additional constraint: the primary must have a certain resonant frequency. This balancing act is important to get a decent level of efficiency in the transformer. For hobby use the efficiency concern is mainly keeping the transformer from smoking, rather than reducing your electric bill.
One of the most common uses for these ZVS boards is to drive a flyback transformer (as used in CRT monitors). This will give you 10's of kV output at 10's of mA, and the flyback usually has an integrated HV rectifier also. A flyback transformer would be a good place to start if you want kV outputs. The typical ZVS/flyback setup seems to be optimized for “how long a spark can I make” (operating only a short time to avoid transformer burnout). If you want to operate continuously you will need to settle for lower output.
amazing1.com (Information Unlimited) has some HV transformers that look nice, at a reasonable price: High Voltage Transformers.
But if you want a supply for plasma discharges such as glowbar, hollow cathode, and magnetron, then you need higher current at lower voltage. So: custom transformer. The usual goal of transformer design is to get the smallest transformer which can handle the desired power without getting too hot. Losses in a transformer are divided into core losses and copper losses. A basic principle of transformer design is to make the core and copper losses roughly equal. Other effects may shift you away from equality, but you can make a blunder which makes either the core or copper losses dominate. If you are using all of the winding area in the core, then reducing one loss will increase the other. For example, using more turns of thinner wire will reduce core losses but increase copper losses.
Key points in my transformer design:
I got this part surplus, so don't know the specs, but this is a standard 0.5 inch E-core, dimensions are very close to the Amidon ea-77-500, and it is virtually certain that the material is a power ferrite similar to Amidon #77, with initial permeability $\mu_0$ near 2000. Also, the permeability becomes less important as the gap increases. There are some roughly similar cores in stock on Digi-Key. Buy a bobbin (coil form) if you can. I made my own from paper.
The total core gap is 2x this (7.2 mm) because each part is gapped. I gapped the cores by using a bench grinder with a silicon carbide wheel, but this is kind of a pain because the core material is not strong and tends to break or chip. It may give a bit higher losses, but it is much easier to gap the core by using a non-conductive spacer on each of the three core legs. A few layers of (non-copper clad) PCB material works well. If using spacers you would use the half-space (3.6 mm), since each magnetic circuit has two spacers.
I reverse engineered the windings by measuring the primary/secondary inductances. The turns ratio is the square root of the inductance ratio, 7.6, so is known with fair confidence. I estimated the primary turns (the primary is on top, so I could count layers), then used this to derive the secondary turns.
            Inductance L   DC resistance   Litz size   n (turns)
Primary     27 uH          0.080 Ohm       14/33       60
Secondary   1.57 mH        1.45 Ohm        5/33        456
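The square-root relation mentioned above can be checked directly from the measured inductances (since $L \propto n^2$):

```python
import math

# Turns ratio recovered from the measured inductances: L scales as n^2.
ratio = math.sqrt(1.57e-3 / 27e-6)   # secondary 1.57 mH, primary 27 uH
print(round(ratio, 2))               # close to the quoted 7.6
```

This agrees with the estimated turn counts (456/60 = 7.6).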
The coil is layer-wound, with each layer separated by 2-3 layers of ordinary printer paper and a wrap of Kapton (polyimide) tape. The paper is mainly to provide a flat surface for the next layer, though it does add electrical insulation too, especially if you varnish impregnate it.
The transformer is wound with homemade twisted Litz wire with 33 Ga strands. 14/33 means 14 strands of 33 ga. I used 33 ga because I have a big spool. You would not want it much coarser than that, finer is usable if you increase the number of strands. You can easily make this yourself by wrapping wire between two posts separated by the length of Litz that you need (eg. clamps attached to the backs of two chairs). After you have laid down the desired number of strands, remove one end and twist it using an electric drill. There is also a variety of Litz wire on Ebay.
The ZVS inverter is nice for driving a plasma load because it inherently has a high output impedance (like a current source). A DC plasma discharge always requires a higher voltage to start, and then you need to back off to a lower voltage for running.
Once started, the discharge has a “negative resistance” characteristic, which means that as you increase the current, the discharge voltage decreases. If you tried to drive the discharge with a perfect voltage source (zero impedance) then the output current would run away to infinity (i.e. something blows out).
The transformer design is constrained by being part of a resonant “tank” circuit. In particular, $Z_0 = \sqrt{L/C} = 13.4 \Omega $. The loaded $Q$ of the tank is determined by the ratio $Z_L/Z_0$, where $Z_L$ is the load impedance as reflected back through the transformer. If we ignore the effect of the rectifier, then an output of 0.375A at 400V is about $1 k\Omega$, scaled by $(n_{pri}/n_{sec})^2$ gives $Z_L = 17.3 \Omega$, or $Q = 1.29$. If $Z_L < Z_0$ then the oscillator can't run, so this is probably near the maximum load. But if the $Q$ is too high then there is a lot of resonant current circulating in the tank, increasing losses. So the combination of the desired load impedance together with the operating frequency determine the values of $L$ and $C$. This is why it is necessary to reduce the transformer inductance by gapping the core.
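These figures are easy to re-derive (assuming the 27 uH primary and the two 0.3 uF capacitors in series, i.e. 0.15 uF, as described above):

```python
import math

L = 27e-6        # primary inductance, H
C = 0.15e-6      # tank capacitance, F (two 0.3 uF caps in series)
n_ratio = 7.6    # secondary/primary turns ratio

f_res = 1 / (2 * math.pi * math.sqrt(L * C))  # idealized resonant frequency, Hz
Z0 = math.sqrt(L / C)                         # characteristic impedance, ohms
ZL = 1000 / n_ratio**2                        # ~1 kOhm load reflected to primary
Q = ZL / Z0

print(f"f_res = {f_res/1e3:.0f} kHz, Z0 = {Z0:.1f} ohm, "
      f"ZL = {ZL:.1f} ohm, Q = {Q:.2f}")
```

This reproduces $Z_0 \approx 13.4\ \Omega$ and $Q \approx 1.29$; the idealized resonant frequency comes out near 79 kHz, in the same ballpark as the measured 87 kHz (leakage inductance and loading will shift the real circuit).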
The rectifier is a bridge, where each “diode” is two BYT56MV in series. This is a 1 kV avalanche rated 3A diode with 100 ns recovery time. At first I used an 0.5 A 2 kV diode, and they kept burning out. The peak current in the diode is much higher than the average current because the conduction time is fairly short (output is mostly supplied by the capacitor). For this reason, do not use a larger output filter capacitor; this increases current stress on the diodes. A polypropylene (PP) dielectric capacitor is recommended for the output filter because it has lower losses than polyester. The two resistors and neon lamp serve dual purpose as a HV indicator and bleeder to discharge the output capacitor.
At first I tried having no output capacitor, but I had trouble getting the discharge started. With the capacitor the output is the peak voltage, while the plasma is perhaps responding to more like the average.
The insulated binding posts are connected to the rectifier output. Connect the voltmeter between these two terminals, and the ammeter between whichever side is grounded and the ground terminal. The “polarity jumper” is the wire visible, which is connected to the non-grounded side of the rectifier output. This allows you to switch the output polarity by moving the jumper and ammeter connections.
The jumper feeds the output inductor, which is placed in series with the output to increase the output impedance at high frequencies. This inductor is made by many turns of 24 ga wire on a 56 mm diameter x 36 mm height ferrite bobbin core. The inductance is 15 mH and DC resistance is 2.5 Ohm. This inductor is random wound rather than layer wound, which is not ideal from a voltage withstand perspective, since the coil sees the full output voltage when there is an arc. I didn't experiment with the output inductor because I already had this lying around, and it “worked”. My feel is that the inductance could likely be several times lower, while it is probably impossible for it to be too large.
|
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet, nor a reason why the expression cannot be prime for odd $n$, although there are far more even cases without a known factor than odd cases.
@TheSimpliFire That's what I'm thinking about. I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it. It is really "too elementary", but I like surprises, if they're good.
It is in fact difficult; I did not understand all the details either. But the ECM method is analogous to the p−1 method, which works well when there is a factor p such that p−1 is smooth (has only small prime factors)
Brocard's problem is a problem in mathematics that asks to find integer values of $n$ and $m$ for which $n!+1=m^2$, where $n!$ is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. == Brown numbers == Pairs of the numbers (n, m) that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11), and (7,71)…
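The three known pairs are easy to reproduce by brute force for small $n$ (this of course says nothing about large $n$):

```python
import math

# Brute-force search for Brown numbers: n! + 1 = m^2, small n only.
found = []
for n in range(1, 15):
    target = math.factorial(n) + 1
    m = math.isqrt(target)
    if m * m == target:
        found.append((n, m))
print(found)  # [(4, 5), (5, 11), (7, 71)]
```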
$\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474.
Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$. If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function.
The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation}
Thus $n!$ has $k$ zeroes for some $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has at most $2k$ digits by the conditions in the Corollary. The Corollary follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$. Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation}
Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation}
Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)^2}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation}
Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Logarithmic differentiation gives \begin{align*}\frac{f'(x)}{f(x)}&=\frac2x+\frac{1-\ln(8\pi x)}{4x^2}\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x+\frac18-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$, since the minimum of $x+\frac18-\frac18\ln(8\pi x)$ over the domain (attained at $x=\frac18$) is $\frac14-\frac18\ln\pi>0$.
Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$
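As a numeric sanity check (not part of the proof), the digit-count bound can be verified directly for small $n$: the number of digits of $n!$ exceeds twice its number of trailing zeros for every $n>10$ tested here.

```python
import math

def trailing_zeros(n):
    """Trailing zeros of n! via Legendre's formula (powers of 5)."""
    k, p = 0, 5
    while p <= n:
        k += n // p
        p *= 5
    return k

for n in range(11, 200):
    digits = len(str(math.factorial(n)))
    assert digits > 2 * trailing_zeros(n)
print("bound holds for 11 <= n < 200")
```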
We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better)
@TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-m_1)$ which means $m_1$ is even. We get $4\pmod {20}$ now :P
Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that for distinct, positive integers $a,b$, the only solution to the equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5)$. It is of anticipation that there will be much fewer solutions for incr...
|
The code you found is unfortunately written and explained in a very confusing way, most notably in a way that, in my humble opinion, misses the entire point of Bucketsort.
Let $A$ be an array of $n$ elements, taken from the ordered set $(S, \le)$. Bucketsort is a sorting algorithm that sorts $A$ in linear time with probability $1$, subject to the hypothesis that the statistical distribution of the elements of $A$ over $S$ is known in advance.
For the sake of simplicity, we will assume $S$ to be the real interval $[0, 1)$ and the distribution of $A$ to be uniform over $[0, 1)$. Our second assumption is without loss of generality. In fact, should we be provided with a different probability density function $\rho$, we could simply replace each $x \in A$ with:
$$x' =\int_{0}^x \rho(z) dz$$
An analogous argument holds for different choices of $S$.
Bucketsort works as follows:
1. Split $[0, 1)$ into $n$ consecutive intervals, where $n = |A|$, such that the $i$-th interval is $\left[ \frac{i-1}{n}, \frac{i}{n} \right)$
2. Create $n$ empty lists $L_1, L_2, \dots, L_n$
3. For each element $x$ of $A$, add $x$ to the list corresponding to the interval in which it falls
4. Sort each list individually, then return the concatenation $L_1 \circ L_2 \circ \dots \circ L_n$
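A minimal sketch of these steps in Python (assuming, as above, inputs uniform over $[0, 1)$):

```python
import random

def bucket_sort(a):
    """Sort floats assumed uniformly distributed over [0, 1)."""
    n = len(a)
    buckets = [[] for _ in range(n)]      # steps 1-2: one bucket per element
    for x in a:
        buckets[int(n * x)].append(x)     # step 3: x lands in bucket floor(n*x)
    out = []
    for b in buckets:
        out.extend(sorted(b))             # step 4: each bucket is O(1) w.h.p.
    return out

data = [random.random() for _ in range(1000)]
assert bucket_sort(data) == sorted(data)
```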
Correctness is obvious, but complexity isn't. The crucial hypothesis is that, by assumption, we are sorting elements uniformly distributed over $[0, 1)$. Therefore, if $n$ grows large, as is the case in the usual asymptotic setting, at the end of our third step each of the lists will contain a constant number of elements with probability $1$; if that weren't the case, the elements wouldn't be uniformly distributed after all. Also observe that "with probability $1$" is not the same as "certainly", although it's as close as it gets.
The choice of exactly $n$ buckets is due to the fact that we want our last step to take linear time; but that can only be achieved if each list contains a constant number of elements. We could just as well have chosen to use $n/c$ buckets for any $c \in \Theta(1)$, but we couldn't have started with $k \in o(n)$ buckets; each of them would have contained $n/k \in \omega(1)$ elements. The total cost of sorting them would have been:
$$k \left ( \frac{n}{k} \log \frac{n}{k} \right ) = n \log \frac{n}{k} \in \omega(n),$$ since $\frac{n}{k} \in \omega(1)$.
|
Let $\to_\beta$ be $\beta$-reduction in the $\lambda$-calculus. Define $\beta$-expansion $\leftarrow_\beta$ by $t'\leftarrow_\beta t \iff t\to_\beta t'$.
Is $\leftarrow_\beta$ confluent? In other words, do we have that for any $l,d,r$, if $l \to_\beta^* d\leftarrow_\beta^* r$, then there exists $u$ such that $l\leftarrow_\beta^* u \to_\beta^* r$?
Keywords: Upward confluence, upside down CR property
I started by looking at the weaker property: local confluence (i.e. if $l \to_\beta d\leftarrow_\beta r$, then $l\leftarrow_\beta^* u \to_\beta^* r$). Even if this were true, it would not imply confluence since $\beta$-expansion is non-terminating, but I thought that it would help me understand the obstacles.
(Top) In the case where both reductions are at top-level, the hypothesis becomes $(\lambda x_1.b_1)a_1\rightarrow b_1[a_1/x_1]=b_2[a_2/x_2]\leftarrow (\lambda x_2.b_2)a_2$. Up to $\alpha$-renaming, we can assume that $x_1\not =x_2$, and that neither $x_1$ nor $x_2$ is free in those terms.

(Throw) If $x_1$ is not free in $b_1$, we have $b_1=b_2[a_2/x_2]$ and therefore have $(\lambda x_1.b_1)a_1=(\lambda x_1.b_2[a_2/x_2])a_1\leftarrow(\lambda x_1.(\lambda x_2.b_2)a_2)a_1\rightarrow (\lambda x_2.b_2)a_2$.
A naive proof by induction (on $b_1$ and $b_2$) for the case (Top) would be as follows:
If $b_1$ is a variable $y_1$,
If $y_1=x_1$, the hypothesis becomes $(\lambda x_1.x_1)a_1\rightarrow a_1=b_2[a_2/x_2]\leftarrow (\lambda x_2.b_2)a_2$, and we indeed have $(\lambda x_1.x_1)a_1=(\lambda x_1.x_1)(b_2[a_2/x_2])\leftarrow (\lambda x_1.x_1)((\lambda x_2.b_2)a_2)\rightarrow (\lambda x_2.b_2)a_2$.
If $y_1\not=x_1$, then we can simply use (Throw).
The same proofs apply if $b_2$ is a variable.
For $b_1=\lambda y.c_1$ and $b_2=\lambda y.c_2$, the hypothesis becomes $(\lambda x_1.\lambda y. c_1)a_1\rightarrow \lambda y.c_1[a_1/x_1]=\lambda y.c_2[a_2/x_2]\leftarrow (\lambda x_2.\lambda y.c_2)a_2$ and the induction hypothesis gives $d$ such that $(\lambda x_1.c_1)a_1\leftarrow d\rightarrow (\lambda x_2.c_2)a_2$ which implies that $\lambda y.(\lambda x_1.c_1)a_1\leftarrow \lambda y.d\rightarrow \lambda y.(\lambda x_2.c_2)a_2$. Unfortunately, we do not have $\lambda y.(\lambda x_2.c_2)a_2\rightarrow (\lambda x_2.\lambda y.c_2)a_2$. (This makes me think of $\sigma$-reduction.)
A similar problem arises for applications: the $\lambda$s are not where they should be.
|
Learning Objectives
By the end of this section, you will be able to:
- Describe the right-hand rule to find the direction of angular velocity, momentum, and torque.
- Explain the gyroscopic effect.
- Study how Earth acts like a gigantic gyroscope.
Angular momentum is a vector and, therefore, has direction as well as magnitude. Torque affects both the direction and the magnitude of angular momentum. What is the direction of the angular momentum of a rotating object like the disk in Figure 1? The figure shows the right-hand rule used to find the direction of both angular momentum and angular velocity. Both L and ω are vectors—each has direction and magnitude. Both can be represented by arrows. The right-hand rule defines both to be perpendicular to the plane of rotation in the direction shown. Because angular momentum is related to angular velocity by L = Iω, the direction of L is the same as the direction of ω. Notice in the figure that both point along the axis of rotation.
Now, recall that torque changes angular momentum as expressed by
[latex]\text{net }\tau=\frac{\Delta \mathbf{\text{L}}}{\Delta t}\\[/latex].
This equation means that the direction of
ΔL is the same as the direction of the torque τ that creates it. This result is illustrated in Figure 2, which shows the direction of torque and the angular momentum it creates. Let us now consider a bicycle wheel with a couple of handles attached to it, as shown in Figure 3. (This device is popular in demonstrations among physicists, because it does unexpected things.) With the wheel rotating as shown, its angular momentum is to the woman’s left. Suppose the person holding the wheel tries to rotate it as in the figure. Her natural expectation is that the wheel will rotate in the direction she pushes it—but what happens is quite different. The forces exerted create a torque that is horizontal toward the person, as shown in Figure 3(a). This torque creates a change in angular momentum ΔL in the same direction, perpendicular to the original angular momentum L, thus changing the direction of L but not the magnitude of L. Figure 3 shows how ΔL and L add, giving a new angular momentum with direction that is inclined more toward the person than before. The axis of the wheel has thus moved perpendicular to the forces exerted on it, instead of in the expected direction.
This same logic explains the behavior of gyroscopes. Figure 4 shows the two forces acting on a spinning gyroscope. The torque produced is perpendicular to the angular momentum, thus the direction of the angular momentum is changed, but not its magnitude. The gyroscope
precesses around a vertical axis, since the torque is always horizontal and perpendicular to L. If the gyroscope is not spinning, it acquires angular momentum in the direction of the torque ( L = ΔL), and it rotates around a horizontal axis, falling over just as we would expect. Earth itself acts like a gigantic gyroscope. Its angular momentum is along its axis and points at Polaris, the North Star. But Earth is slowly precessing (once in about 26,000 years) due to the torque of the Sun and the Moon on its nonspherical shape. Check Your Understanding
Rotational kinetic energy is associated with angular momentum. Does that mean that rotational kinetic energy is a vector?
Solution
No, energy is always a scalar whether motion is involved or not. No form of energy has a direction in space and you can see that rotational kinetic energy does not depend on the direction of motion just as linear kinetic energy is independent of the direction of motion.
Section Summary Torque is perpendicular to the plane formed by r and F and is the direction your right thumb would point if you curled the fingers of your right hand in the direction of F. The direction of the torque is thus the same as that of the angular momentum it produces. The gyroscope precesses around a vertical axis, since the torque is always horizontal and perpendicular to L. If the gyroscope is not spinning, it acquires angular momentum in the direction of the torque ([latex]\mathbf{\text{L}}=\Delta\mathbf{\text{L}}\\[/latex]), and it rotates about a horizontal axis, falling over just as we would expect. Earth itself acts like a gigantic gyroscope. Its angular momentum is along its axis and points at Polaris, the North Star. Conceptual Questions
1. While driving his motorcycle at highway speed, a physics student notices that pulling back lightly on the right handlebar tips the cycle to the left and produces a left turn. Explain why this happens.
2. Gyroscopes used in guidance systems to indicate directions in space must have an angular momentum that does not change in direction. Yet they are often subjected to large forces and accelerations. How can the direction of their angular momentum be constant when they are accelerated?
Problems & Exercises
1.
Integrated Concepts
The axis of Earth makes a 23.5° angle with a direction perpendicular to the plane of Earth’s orbit. As shown in Figure 6, this axis precesses, making one complete rotation in 25,780 y.
(a) Calculate the change in angular momentum in half this time.
(b) What is the average torque producing this change in angular momentum? (c) If this torque were created by a single force (it is not) acting at the most effective point on the equator, what would its magnitude be? Glossary right-hand rule: direction of angular velocity ω and angular momentum L in which the thumb of your right hand points when you curl your fingers in the direction of the disk’s rotation Selected Solutions to Problems & Answers
1. (a) 5.64 × 10^33 kg ⋅ m^2/s (b) 1.39 × 10^22 N ⋅ m (c) 2.17 × 10^15 N
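For a quick check, the answers (a)-(c) can be reproduced from standard Earth constants (the mass, radius, and sidereal day below are assumed values, not given in the problem statement):

```python
import math

# Standard constants (assumed, not given in the problem):
M = 5.972e24        # Earth mass, kg
R = 6.371e6         # Earth radius, m
T_spin = 86164.0    # sidereal day, s
year = 3.156e7      # seconds per year

I = 0.4 * M * R**2                  # uniform-sphere moment of inertia
L = I * (2 * math.pi / T_spin)      # spin angular momentum

# (a) The angular momentum vector traces a cone of half-angle 23.5 deg;
# after half a precession period the chord between the two positions is
# |dL| = 2 L sin(23.5 deg).
dL = 2 * L * math.sin(math.radians(23.5))

# (b) Average torque over half of the 25,780 y precession period.
dt = 0.5 * 25780 * year
tau = dL / dt

# (c) A single force at the equator has lever arm R.
F = tau / R

print(f"dL = {dL:.3g}, tau = {tau:.3g}, F = {F:.3g}")
```

The three printed values land within a fraction of a percent of the textbook answers above.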
|
Let $\mathcal{M} = \{0,1\}^n$ and let $m_1 \in \mathcal{M}^L = \underbrace{ \{0,1\}^n \times \cdots \times \{0,1\}^n }_{L\text{ times}}$ be a message consisting of $L$ blocks.
Let $c_1$ be the encryption of $m_1$ using CBC mode.
Let $x \in \mathcal{K}$ be some binary string.
Finally, let $m_2$ be another message such that:
$m_2[1] = m_1[1] \oplus x$, and $m_2[i] = m_1[i]$ for $i = 2, \ldots, L$
How can I modify $c_1$ to be a $c_2$ that is the encryption of $m_2$ ?
I didn't figure out how to modify $c_1$ directly, I could only think about something like $\oplus$ing the Initialization Vector with $x$, so $c_2$ would somehow have both $m[1]$ and $x$ used in its computed first block.
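The IV idea in the last paragraph is exactly the standard CBC-malleability trick: since $m_1[1]$ is XORed with the IV before the first block-cipher call, sending $IV \oplus x$ alongside the *unchanged* ciphertext blocks decrypts to $m_2$. A minimal sketch (the toy XOR "block cipher" is my own stand-in for a real cipher; only the CBC chaining matters):

```python
import os

BLOCK = 16

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_block_cipher(key, block):
    # Invertible toy "block cipher" (plain XOR with the key). It stands in
    # for AES purely so the CBC chaining is runnable; it is NOT secure.
    return xor(key, block)

def cbc_encrypt(key, iv, blocks):
    out, prev = [], iv
    for m in blocks:
        c = toy_block_cipher(key, xor(m, prev))
        out.append(c)
        prev = c
    return out

def cbc_decrypt(key, iv, blocks):
    out, prev = [], iv
    for c in blocks:
        out.append(xor(toy_block_cipher(key, c), prev))
        prev = c
    return out

key, iv, x = (os.urandom(BLOCK) for _ in range(3))
m1 = [os.urandom(BLOCK) for _ in range(4)]
c1 = cbc_encrypt(key, iv, m1)

# The attack: leave every ciphertext block alone and ship IV xor x instead.
iv2 = xor(iv, x)
m2 = cbc_decrypt(key, iv2, c1)

assert m2[0] == xor(m1[0], x)   # first block shifted by x
assert m2[1:] == m1[1:]         # all later blocks unchanged
```

This works because decryption computes $m_2[1] = D_k(c_1[1]) \oplus IV' = (m_1[1] \oplus IV) \oplus IV \oplus x = m_1[1] \oplus x$, while every later block depends only on ciphertext blocks, which were untouched.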
|
You are referring to the following problem:$$H = \{ (\langle M \rangle, w, t) \mid \text{$M$ accepts $w$ in time at most $t$} \}$$where $t$ is binary encoded. You should note this is
not the (classical) halting problem, but, rather, the bounded version of the halting problem. The halting problem itself cannot be complete for $\mathsf{EXP}$ (or any other subclass of the recursive languages).
Now, to answer your question. First, notice that, because $t$ is encoded in binary, the input has length $|\langle M \rangle| + |w| + \log t$. To simulate $M$ on $w$ for $t$ steps we can use a universal TM, which takes time $O(t \log t)$. Since we are doing a worst-case analysis, we can afford to view $|\langle M \rangle|$ as constant. Also, because $t \in O(2^{p(|w|)})$ for some polynomial $p$, $\log t \in O(p(|w|))$. Hence, the input has size $O(|w| + p(|w|))$ while our simulation takes time $O(t \log t) \subset O(p(|w|) 2^{p(|w|)})$ and, in turn, $p(|w|) 2^{p(|w|)} \in 2^{\textrm{poly}(|w|)}$. This proves $H \in \mathsf{EXP}$.
Reducing a problem $P \in \mathsf{EXP}$ to $H$ is also very simple: If $P$ is decidable by $M$ in time bounded by a function $f \in 2^{\textrm{poly}(n)}$, then any instance $x$ of $P$ is a yes-instance if and only if $(\langle M \rangle, x, f(|x|)) \in H$, where $f(|x|)$ is given binary encoded. The time taken by this reduction is dominated by computing $f(|x|)$, which can be performed in time polynomial in $|x|$ (by first converting it to binary and then computing $f(|x|)$ in polynomial time).
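The reduction is little more than assembling the triple; the point is that the binary encoding of $f(|x|)$ has length $\log f(|x|) = \mathrm{poly}(|x|)$. A sketch (the machine description, input, and time bound below are illustrative placeholders):

```python
def reduce_to_H(M_desc, x, f):
    # Map an instance x of P (decided by machine M_desc within f(|x|) steps)
    # to the bounded-halting instance (<M>, x, t) with t encoded in binary.
    t = f(len(x))
    return (M_desc, x, format(t, "b"))

# Example bound f(n) = 2^(n^2), well inside 2^poly(n):
inst = reduce_to_H("<M>", "10110", lambda n: 2 ** (n * n))

# The encoded t has n^2 + 1 bits -- polynomial in |x| = 5, so the
# reduction itself runs in polynomial time.
assert len(inst[2]) == 5 * 5 + 1
```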
|
This question already has an answer here:
(The questions are below the question heading)
I actually agree with it - that is not to say that $\mathbb{Q}$ isn't countable.
Reasoning The definition of bijection is a useful starting point, and it is easy to see that $\mathbb{Z}$ is countable. I shall denote this as $\mathbb{Z}\sim\mathbb{N}$ for simplicity.
I then take: $$f:\mathbb{Z}\times\mathbb{Z}\rightarrow\mathbb{Q}\text{ by }\ f:(a,b)\mapsto\left\{\begin{array}{lr}\frac{a}{b} & \text{if }b\ne 0\\ 1 & \text{otherwise}\end{array}\right.$$
There are other choices (like $\mathbb{Z}\times\mathbb{N}$, say) but this matters not. This mapping is
surjective but not injective.
From this
we know now: $|\mathbb{Q}|\le|\mathbb{N}|$
I got my set theory book out to check this; it exhibits a similar map to mine and considers it proven that $\mathbb{Q}$ is countable. But surjectivity alone is not enough: a function that maps numbers to "even" and "odd" is surjective onto a finite range. It would prove that it is
at most countable though? Can I use that? I am not sure how to go from here; I was looking for a theorem along the lines of:
If $A\subseteq B$ then $|A|\le |B|$
We would then know that $|\mathbb{Q}|\le|\mathbb{N}|$ and $|\mathbb{Q}|\ge|\mathbb{N}|$
Questions Can I do that with infinities? Are they ordered? Does $\le$ make sense? I know that exhibiting a bijection $\implies$ countability. As this is a definition it can be taken as $\iff$ (if we have a countable set, there must be a bijection with $\mathbb{N}$ - if there isn't, that violates countability). Thus for the statement in the image to be true we cannot have countability of $\mathbb{Q}$ ($^*$)
($^*$) - damn, that shortens the point of this post.
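For what it's worth, the counting argument can be made concrete: walking $\mathbb{Z}\times\mathbb{Z}$ along diagonals and discarding repeated values of $a/b$ lists every rational exactly once, which turns the surjection into an explicit bijection with $\mathbb{N}$ (the function and its names are my own illustration):

```python
from fractions import Fraction
from itertools import count

def rationals():
    # Walk Z x Z along diagonals |a| + |b| = n, skip b = 0, and use a set
    # to discard fractions already produced: every rational is yielded
    # exactly once, so the yield order is a bijection N -> Q.
    seen = set()
    for n in count(0):
        for a in range(-n, n + 1):
            for b in (n - abs(a), abs(a) - n):
                if b == 0:
                    continue
                q = Fraction(a, b)
                if q not in seen:
                    seen.add(q)
                    yield q

gen = rationals()
print([next(gen) for _ in range(7)])   # 0, -1, 1, -2, 2, -1/2, 1/2
```

Every rational $a/b$ in lowest terms shows up no later than diagonal $|a|+|b|$, so nothing is missed.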
|
Answer: not known.
The questions asked are natural, open, and apparently difficult; the question now is a community wiki.
Overview
The question seeks to divide languages belonging to the complexity class $P$ — together with the decision Turing machines (TMs) that accept these languages — into two complementary subclasses:
gnostic languages and TMs (that are feasible to validate/understand), versus cryptic languages and TMs (that are infeasible to validate/understand). Definitions: gnostic vs cryptic numbers, TMs, and languages
D0 We say that a computable real number $r$ is gnostic iff it is associated to a non-empty set of TMs, with each TM specified in PA as an explicit list of numbers comprising valid code upon a universal TM, such that for any accuracy $\epsilon\gt0$ supplied as an input, each TM provably (in ZFC) halts with an output number $o$ that provably (in ZFC) satisfies $r-\epsilon\lt o\lt r+\epsilon$. Remark It is known that some computable reals are not gnostic (for a concrete example see Raphael Reitzig's answer to jkff's question "Are there non-constructive algorithm existence proofs?"). To avoid grappling with these computable-yet-awkward numbers, the restriction is imposed that runtime exponents be computable by TMs that are explicitly enumerated in PA (as contrasted with TMs implicitly specified in ZFC). For further discussion see the section Definitional considerations (below).
Now we seek definitions that capture the intuition that the complexity class $P$ includes a subset of
cryptic languages to which no (gnostic) runtime exponent lower-bound can provably be assigned.
To look ahead, the concluding definition (
D5) specifies the idea of a canonically cryptic decision TM, whose definition is crafted with a view toward obviating reductions that (trivially) mask cryptic computations by overlaying computationally superfluous epi-computations. The rationale and sources of this key definition are discussed later on — under the heading Definitional Considerations — and the contributions of comments by Timothy Chow, Peter Shor, Sasho Nikolov, and Luca Trevisan are gratefully acknowledged.
D1 Given a Turing machine M that halts for all input strings, M is called cryptic iff the following statement is neither provable nor refutable for at least one gnostic real number $r \ge 0$:
Statement: M's runtime is $O(n^r)$ with respect to input length $n$
Turing machines that are not cryptic we say are gnostic TMs.
D2 We say that a decision Turing machine M is efficient iff it has a gnostic runtime exponent $r$ such that the language L that M accepts is accepted by no other TM having a gnostic runtime exponent smaller than $r$.
D3 We say that a language L is cryptic iff it is accepted by (a) at least one Turing machine M that is both efficient and cryptic, and moreover (b) no TM that is both efficient and gnostic provably accepts L.
To express
D3 another way, a language is cryptic iff the TMs that accept that language most efficiently are themselves cryptic.
Languages that are not cryptic we say are
gnostic languages.
D4 We say that a cryptic TM is strongly cryptic iff the language it accepts is cryptic.
D5 We say that a strongly cryptic TM is canonically cryptic iff it is efficient.
To express
D5 another way, every cryptic language is accepted by a set of canonically cryptic decision TMs, which are the most efficient decision TMs that accept that language. The questions asked
The following conjecture
C0 is natural and (apparently) open:
C0 The complexity class P contains at least one cryptic language.
Three questions are asked,
Q1– Q3, of which the first is:
Q1 Is the C0 conjecture independent of PA or ZFC?
Under the assumption that
C0 is true — either provably in ZFC, or as an independent axiom that is supplemental to ZFC — two further questions are natural:
Q2 Can at least one cryptic language in P be presented concretely, that is, exhibited as a dictionary of explicit words in a finite alphabet that includes all words up to any specified length? If so, exhibit such a dictionary.
Q3 Can at least one canonically cryptic decision TM be presented concretely, that is, as an enabling description for building a physical Turing machine that decides (in polynomial time) all the words of the dictionary of Q2? If so, construct such a Turing machine and, by computing with it, exhibit the cryptic language dictionary of Q2. Definitional considerations
Definition
D0 implies that every gnostic real number is computable, but it is known that some computable real numbers are not gnostic. For examples, see answers on MathOverflow by Michaël Cadilhac and Ryan Williams and on TCS StackExchange by Raphael Reitzig. More generally, definitions D0–D5 are crafted to exclude references to non-gnostic runtime exponents.
As discussed in the TCS wiki "Does P contain incomprehensible languages?," definitions
D0–D5 ensure that every cryptic language is accepted by at least one TM that is canonically cryptic. (Note also that in the present question the word "cryptic" replaces the less descriptive word "incomprehensible" used in the wiki).
Moreover — in view of
D3(a) and D3(b) — there exists no computationally trivial reduction of a canonically cryptic TM to a gnostic TM that provably recognizes the same language. In particular, D3(a) and D3(b) obstruct the polylimiter reduction strategies that were outlined in comments by Peter Shor, and by Sasho Nikolov, and independently by Luca Trevisan, and obstructs too the polynomially clocked reduction strategy of Timothy Chow, all of which similarly mask cryptic computations by overlaying a computationally superfluous epi-computation.
In general, the definitions of "gnostic" and "cryptic" are deliberately tuned so as to be robust with respect to mathematically trivial reductions (and it is entirely possible that further tuning of these definitions may be desirable).
Methodological considerations
Lance Fortnow's review "The status of the P versus NP problem" surveys methods for establishing the independence (or otherwise) of conjectures in complexity theory; particularly desired are suggestions as to how the methods that Lance reviews might help (or not) to answer
Q1.
It is clear that many further questions are natural. E.g., the Hartmanis-Stearns Conjecture inspires us to ask "Do cryptic real-time multitape Turing machines exist? Is their existence (or not) independent of PA or ZFC?"
Zeilberger-type considerations
In the event that
Q1 is answered by "yes", then oracles that decide membership in $P$ exist outside of PA or ZFC, and therefore, an essential element of modern complexity theory is (at present) not known to reside within any formal system of logic.
In this respect complexity theory stands apart from most mathematical disciplines, such that the apprehensions that Doron Zeilberger expresses in his recent "Opinion 125: Now that Alan Turing turned 100-years-old, it is time to have a Fresh Look at His Seminal Contributions, that did lots of Good But Also Lots of Harm" arguably are well-founded.
Zeilberger's concerns take explicit form as the criterion
Z0 $\equiv $ ( ! Q1 ) && ( ! C0 ), which is equivalent to the following criterion:
Z0 (Zeilberger's sensibility criterion) Definitions of the complexity class P are called Zeilberger-sensible iff all languages in P are provably gnostic.
At present it is not known whether Stephen Cook's definition of the complexity class
P is Zeilberger-sensible. Motivational considerations
The definitions of "gnostic" and "cryptic" are crafted with a view toward (eventually) deciding conjectures like the following:
C1 Let $P'$ and $NP'$ be the gnostic restrictions of $P$ and $NP$ resp. Then $P' \ne NP'$ is either provable or refutable in PA or ZFC.
C2 $P' \ne NP'$ (as explicitly proved in PA or ZFC)
Clearly
C2 $\to$ C1, and conversely it is conceivable that a proof of the (meta) theorem C1 might provide guidance toward a proof of the (stronger) theorem C2.
The overall motivation is the expectation/intuition/hope that for some well-tuned distinction between gnostic and cryptic TMs and languages, a proof of
C1 and possibly even C2 might illuminate — and even have comparable practical implications to — a presumably far harder and deeper proof that $P\ne NP$.
Juris Hartmanis was among the first complexity theorists to seriously pursue this line of investigation; see Hartmanis' monograph
Feasible Computations and Provable Complexity Properties (1978), for example. Nomenclatural considerations
From the Oxford English Dictionary (OED) we have:
gnostic (adj) Relating to knowledge; cognitive; intellectual. "They [the numbers] exist in a vital, gnostic, and speculative, but not in an operative manner."
cryptic (adj) Not immediately understandable; mysterious, enigmatic. "Instead of plain Rules useful to Mankind, they [philosophers] obtrude cryptick and dark Sentences."
Apparently no Mathematical Review has previously used the word "gnostic" in any sense whatsoever. However, attention is drawn to Marcus Kracht’s recent article “Gnosis” (
Journal of Philosophical Logic, MR2802332), which uses the OED sense.
Apparently no Mathematical Review has used the word "cryptic" — in its technical sense — with relation to complexity theory. However, attention is drawn to Charles H. Bennett's article "Logical Depth and Physical Complexity" (in
The Universal Turing Machine: A Half-Century Survey, 1988) which contains the passage
Another kind of complexity associated with an object would be the difficulty, given the object, of finding a plausible hypothesis to explain it. Objects having this kind of complexity might be called
"cryptic": to find a plausible origin for the object is like solving a cryptogram. Considerations of naturality, openness, and difficulty
The naturality of these questions illustrates the thesis of Juris Hartmanis' monograph
Feasible Computations and Provable Complexity Properties (1978) that:
"Results about the complexity of algorithms change quite radically if we consider only properties of computations which can be proven formally."
The openness and difficulty of these questions are broadly consonant with the conclusion of Lance Fortnow's review "The Status of the P Versus NP Problem" (2009) that:
"None of us truly understands the P versus NP problem, we have only begun to peel the layers around this increasingly complex question."
Wiki guidance
Particularly sought are definitional adjustments and proof strategies specifically relating to the questions
Q1–Q3 and broadly illuminating the Hartmanis-type conjectures C1–C2.
|
The positivity condition is not there just so that "programs terminate", as you put it (what programs?), but to make sure the type is well defined in the first place. The inductive definitions define the
smallest type satisfying an equation. That is, we put into the type only those things which are prescribed by the clauses of the definition, and no arbitrary garbage. (Non-normalizing terms count as garbage, and I think that's what you meant when you spoke of terminating programs.)
Let us take a step back and think where the inductive definitions come from. They are solutions to
type equations. Here is a type equation:$$N = \mathtt{unit} + N.$$What could $N$ be? We might be thinking that it must be the natural numbers, since every element of $N$ is either $\mathtt{inl}(\mathtt{tt})$, which is a fancy way to say "zero", or it is of the form $\mathtt{inr}(n)$ for some $n : N$. But this need not be the case! For example, $N$ could also contain a special element $\infty$ satisfying $\infty = \mathtt{inr}(\infty)$. It could also contain an extra "fake zero" $\mathtt{Z}$, together with its successors $\mathtt{inr}(\mathtt{Z})$, $\mathtt{inr}(\mathtt{inr}(\mathtt{Z}))$, ... There are many solutions to the above equation.
The smallest solution is called the
inductive type, and the largest one the coinductive type. I am skipping technicalities about what "smallest" and "largest" mean precisely (they are the "initial algebra" and "final coalgebra", respectively), but I hope you get the point.
In order for us to say that we have a well-defined inductive type, we need to know that the smallest solution to the type equation actually exists. In fact, it does not always exist, sometimes there can be several incomparable solutions, or none at all. The positivity condition in Coq and CIC ensures that a smallest solution exists. It's not the only possible condition one could come up with, and there are variants, but for the purposes of a proof assistant we need something that makes sense computationally, and can actually be verified by a type checker (for instance, it wouldn't do to say that an inductive definition is valid if it defines a $\kappa$-accessible functor for some cardinal $\kappa$, and have the proof checker look for $\kappa$ – hmm, that's an interesting thought).
If we now look at your proposed definition
Inductive t : Type := b : t | c : ((t -> unit) -> unit) -> t.
the question to be asked is: how do you know that the smallest solution exists? Somebody proved that the strict positivity requirement ensures the existence, but your definition falls outside that criterion. You need to provide evidence that there is such a type. The evidence must be a well argued explanation of what the terms of
t are that they do not break strong normalization of CIC. You may well succeed, and there is such a type. The next question then is how do you propose to incorporate your argument into a type checker as a useful algorithm. It's probably not worth the trouble.
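To see why unrestricted negative occurrences endanger strong normalization, consider the textbook declaration `Inductive t := c : (t -> t) -> t` (even simpler than the type above, and likewise rejected by Coq). With $t \cong (t \to t)$, self-application type-checks, and the resulting term reduces only to itself. The divergence can be sketched in Python, where the absence of types lets us actually run it:

```python
# With t ~ (t -> t), a term of type t can be applied to itself:
self_apply = lambda x: x(x)

# omega = self_apply(self_apply) beta-reduces only to itself -- it never
# normalizes.  Python's stack limit makes the divergence observable:
diverged = False
try:
    self_apply(self_apply)
except RecursionError:
    diverged = True
assert diverged
```

Strict positivity rules out such declarations up front, so no well-typed CIC term can loop this way.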
Just to show you how problematic these things are, how about the type
Inductive t : Type := c : ((t -> bool) -> bool) -> t.
Is it still totally obvious that there is a smallest solution? We're trying to solve the type equation$$T = (T \to 2) \to 2.$$It has no solution in set theory.
|
As you know, the identity$$(a+b)(a-b) \;\; = \;\; a^2 - b^2$$shows that if you start with $a+b$ and multiply by $a-b,$ you'll get an expression that contains only even powers of $a$ and $b.$ Also, if you start with $a-b$ and multiply by $a+b,$ you'll get an expression that contains only even powers of $a$ and $b.$
One way to generalize this to a trinomial is to expand $$(a+b+c)(a+b-c)(a-b+c)(a-b-c)$$By using a little strategy, this isn't as difficult as you might think:$$[(a+b)+c][(a+b)-c]\cdot[(a-b)+c][(a-b)-c] \;\; = \;\; \left[(a+b)^2 - c^2 \right] \cdot \left[(a-b)^2 - c^2 \right]$$$$= \;\; (a+b)^{2}(a-b)^2 - (a+b)^{2}c^2 - c^{2}(a-b)^2 + c^4$$$$= \;\; \left[(a+b)(a-b)\right]^2 \; + \; \left[-a^2c^2 - 2abc^2 - b^2c^2 - a^2c^2 + 2abc^2 - b^2c^2 \right] + c^4$$$$= \;\; \left(a^2 - b^2\right)^2 \; - \; 2c^2\left(a^2 + b^2\right) \; + \; c^4 $$Notice that the result is an expression that contains only even powers of $a,$ $b,$ and $c.$ Also, notice that I didn't have to completely expand it to see that only even powers will remain.
This product of trinomials tells you that if, for example, you start with $a-b+c$ and multiply by $(a+b+c)(a+b-c)(a-b-c),$ you'll get an expression that contains only even powers of $a,$ $b,$ and $c.$ A shorthand way of saying this is that an appropriate "conjugate" of a $(-,+)$ pattern is the product of the remaining $3$ patterns. That is, an appropriate "conjugate" for a $(-,+)$ pattern is the product $(+,+)(+,-)(-,-).$
Without going into why this works (while I was writing this answer it appears that André Nicolas gave a brief, but nice, indication of how this can be done), if you start with $(a-b+c-d),$ which we might call a $(-,+,-)$ pattern, and multiply it by all of the $7$ corresponding patterns $(-,+,+),$ $(-,-,+),$ $(-,-,-),$ $(+,+,+),$ $(+,+,-),$ $(+,-,+),$ and $(+,-,-),$ you'll get an expression that contains only even powers of $a,$ $b,$ $c,$ and $d.$
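One can spot-check numerically that these sign-pattern products contain only even powers, using the fact that a polynomial has only even powers in each variable iff its value is unchanged when the sign of any single variable is flipped (the helpers below are my own illustration):

```python
from itertools import product
import random

random.seed(0)

def only_even_powers(f, nvars, trials=100):
    # f has only even powers in each variable iff it is invariant
    # under flipping the sign of any one variable.
    for _ in range(trials):
        xs = [random.uniform(-2, 2) for _ in range(nvars)]
        base = f(*xs)
        for i in range(nvars):
            ys = list(xs)
            ys[i] = -ys[i]
            if abs(f(*ys) - base) > 1e-9 * max(1.0, abs(base)):
                return False
    return True

def sign_product(args):
    # Product over all sign patterns (a +/- b +/- c +/- ...): the first
    # variable's sign stays fixed, the rest range over +/-.
    a, rest = args[0], args[1:]
    p = 1.0
    for signs in product((1, -1), repeat=len(rest)):
        p *= a + sum(s * x for s, x in zip(signs, rest))
    return p

assert only_even_powers(lambda a, b, c: sign_product((a, b, c)), 3)
assert only_even_powers(lambda a, b, c, d: sign_product((a, b, c, d)), 4)
```

The check passes for the 4-factor trinomial product and the 8-factor product of $(a\pm b\pm c\pm d)$ patterns, matching the claim above.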
The same method continues to work for additive combinations of $5$ terms, additive combinations of $6$ terms, etc. However, doing this by hand will get very tedious
very soon since, for an additive combination of $5$ terms, you'll be multiplying it by $2^{4} - 1 = 15$ different expressions each consisting of $5$ terms. For an additive combination of $6$ terms, you'll be multiplying it by $2^{5} - 1 = 31$ different expressions each consisting of $6$ terms.
This method can be generalized to take care of sums in which various roots (square, cube, etc.) appear, but things quickly get even more complicated because you wind up multiplying by "$n$th roots of unity" combinations instead of sign combinations. (The signs $+$ and $-$ can be interpreted as multiplying by $1$ and by $-1,$ the two square roots of unity.) For example, radicals can be cleared from $\sqrt{a} + \sqrt[3]{b} + \sqrt{c},$ which has "sign pattern" $(+,+),$ by multiplying it by all $5$ of the "sign patterns" $(+,-),$ $(\omega,+),$ $(\omega,-),$ $({\omega}^2,+),$ and $({\omega}^2,-),$ where $\omega$ is the "first non-real cube root of $1$" as you travel counterclockwise along the unit circle in $\mathbb C$ beginning at $1$ on the positive real axis.
|
I had an exam today and one of my question was:
Give a function $f$ that is Riemann integrable but not Lebesgue integrable
How is it possible ? I always thought that Riemann $\implies $ Lebesgue, isn't it ?
You may like to visit http://www.math.vanderbilt.edu/~schectex/ccc/gauge/ for an excellent introduction to the different integrals. I am reproducing an image from that website for your quick reference.
A function $f$ is Lebesgue integrable if and only if $f^+=0\vee f$ and $f^-=0\wedge f$ are integrable. So if $f\geq 0$ is Riemann integrable then it will also be Lebesgue integrable. But if $f$ does not have constant sign, then $f$ can be (improperly) Riemann integrable but not Lebesgue integrable, as with $f(x)=\frac{\sin(x)}{x}$ on $[0,\infty [$. And this is because $\int f^+=\int f^-=+\infty$. Therefore $\int f=\int f^+-\int f^-$ is not defined.
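A quick numerical illustration of why $\int f^+ = \int f^- = +\infty$ even though the improper integral converges (the Simpson helper is my own): $\int_0^T \frac{\sin x}{x}\,dx$ settles near $\pi/2$ as $T$ grows, while $\int_0^T \bigl|\frac{\sin x}{x}\bigr|\,dx$ keeps growing roughly like $\frac{2}{\pi}\log T$.

```python
import math

def simpson(f, a, b, n=20000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

f = lambda x: math.sin(x) / x if x else 1.0

# Signed integral: converges (to pi/2 as T -> infinity).
si = [simpson(f, 0.0, T) for T in (100.0, 1000.0)]
# Integral of |f|: diverges, growing roughly like (2/pi) * log T.
ab = [simpson(lambda x: abs(f(x)), 0.0, T) for T in (100.0, 1000.0)]
```

Increasing $T$ tenfold barely moves `si` but adds roughly $\frac{2}{\pi}\ln 10 \approx 1.47$ to `ab`.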
|
Matrices consist of elements of some field. However, if we have a matrix $A\in M_{m,n}(F)$, it is sometimes useful to look at each row as a vector from $F^n$, i.e., we can view the matrix as $$A=\begin{pmatrix}\vec r_1\\\vec r_2\\\vdots\\\vec r_m\end{pmatrix}.$$ (Doing the same thing with columns make sense, too. I will describe stuff in this post with rows, it can be easily changed for columns.)
Sometimes it might be useful to do same thing with vectors from arbitrary vector space $V$ over a field $F$. I.e., we can use notation $$\mathbf{B}=\begin{pmatrix}\vec v_1\\\vec v_2\\\vdots\\\vec v_m\end{pmatrix}.$$ I will use bold for "matrices consisting of vectors".
This is just a different notation for ordered $n$-tuple of vectors. But at least in some ways they are similar to matrices.For example, we can multiply such matrix by $A\in M_{k,m}(F)$
from the left to get$$A\cdot \mathbf{B},$$which is the matrix where the $i$-th row is the linear combination $\sum_{j=1}^m a_{ij}\vec v_j$. (If we choose to work with columns, we would multiply from the right.)
We can also add these matrices and multiply them by a scalar. With these definitions several properties of usual multiplication of matrices still hold - for the products that are allowed. For example, associativity $(AB)\mathbf{C}=A(B\mathbf{C})$, or distributivity - both $(A+B)\mathbf{C}=A\mathbf{C}+B\mathbf{C}$ and $A(\mathbf{C}+\mathbf{D})=A\mathbf{C}+A\mathbf{D}$.
Also some properties which are valid for rank are still valid for dimension of the vector space generated by the rows. (For example, if $A$ is invertible then the "rank" of $\mathbf B$ and $A\mathbf B$ is the same. "Rank" of $A\mathbf B$ is bounded from above by the rank of $A$ and also by the "rank" of $\mathbf B$.)
We cannot multiply from the right, but we still can "cancel" on the right in the sense that if rows of $\mathbf B$ are linearly independent then $A\mathbf{B}=\mathbf{0}$ implies $A=0$ and $A_1\mathbf{B}=A_2\mathbf{B}$ implies $A_1=A_2$.
This notation can be used, for example, to make a compact notation for the transition matrix between two bases by writing $\mathbf B_2=M\mathbf{B_1}$. (And some proofs about transition matrices can be written quite compactly using this notation. Another possible advantage of this notation is that if we are careful only to do "allowed" multiplications, then we can use many properties of the usual matrix multiplication - which after some time spent with linear algebra and matrices are used almost automatically.)
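As a small sanity check of the formalism, the product $A\cdot\mathbf{B}$, whose $i$-th row is $\sum_{j} a_{ij}\vec v_j$, can be coded directly for vectors from any space that supports addition and scalar multiplication (the function and names below are my own sketch):

```python
def mat_times_vecrows(A, B):
    # A: k x m matrix of scalars; B: list of m "row vectors" (here tuples,
    # but any additive structure works).  Row i of the result is the
    # linear combination sum_j A[i][j] * B[j].
    def scale(c, v):
        return tuple(c * x for x in v)

    def add(u, v):
        return tuple(x + y for x, y in zip(u, v))

    out = []
    for row in A:
        acc = scale(row[0], B[0])
        for c, v in zip(row[1:], B[1:]):
            acc = add(acc, scale(c, v))
        out.append(acc)
    return out

# Change-of-basis example in the notation B2 = M . B1:
B1 = [(1, 0), (0, 1)]
M = [[2, 1], [1, 1]]
B2 = mat_times_vecrows(M, B1)
assert B2 == [(2, 1), (1, 1)]
```

The "allowed" associativity $(AB)\mathbf{C} = A(B\mathbf{C})$ can be checked the same way with a second scalar matrix.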
Question. Are there texts which use this formalism, where we can multiply by "non-numerical" matrices whose rows (or columns) consist of vectors from an arbitrary vector space (not necessarily $n$-tuples)? Are there situations where using this approach brings some advantages?
Remark 1. In a sense, the above considerations can be bypassed easily. If we work with the type of "matrices" as above, we can simply take a finite-dimensional subspace $S$ which contains the rows of all matrices we need at the moment. (For example, all rows which appear in some "matrix" identity we are looking at. Or if $V$ is finite-dimensional, we can simply take $S=V$.) If we fix some basis for $S$, this induces an isomorphism between $S$ and $F^n$, where $n=\dim(S)$, and we get a natural map $\mathbf{B} \mapsto B\in M_{m,n}$. Once we have fixed a basis for $S$, any result concerning multiplication of "matrices" is just the usual multiplication - we only need to transfer everything through this isomorphism. Still, I was curious whether it might sometimes be useful to avoid the need to fix a basis and transfer things through the corresponding isomorphism.
Remark 2. Matrix summability methods can be viewed as multiplication of an infinite matrix (of dimensions "$\mathbb N\times\mathbb N$") by a sequence considered as an infinite vector. Although in such "matrices" the rows do not have finitely many coordinates, this is different from what I have in mind, since I work here with matrices that have finitely many rows.
|
OpenCV 3.1.0
Open Source Computer Vision
In this tutorial you will learn what Principal Component Analysis (PCA) is and how to compute it step by step.
Principal Component Analysis (PCA) is a statistical procedure that extracts the most important features of a dataset.
Consider that you have a set of 2D points as it is shown in the figure above. Each dimension corresponds to a feature you are interested in. Here some could argue that the points are set in a random order. However, if you have a better look you will see that there is a linear pattern (indicated by the blue line) which is hard to dismiss. A key point of PCA is the Dimensionality Reduction. Dimensionality Reduction is the process of reducing the number of the dimensions of the given dataset. For example, in the above case it is possible to approximate the set of points to a single line and therefore, reduce the dimensionality of the given points from 2D to 1D.
Moreover, you could also see that the points vary the most along the blue line, more than they vary along the Feature 1 or Feature 2 axes. This means that if you know the position of a point along the blue line you have more information about the point than if you only knew where it was on Feature 1 axis or Feature 2 axis.
Hence, PCA allows us to find the direction along which our data varies the most. In fact, the result of running PCA on the set of points in the diagram consist of 2 vectors called
eigenvectors which are the principal components of the data set.
The size of each eigenvector is encoded in the corresponding eigenvalue and indicates how much the data vary along the principal component. The beginning of the eigenvectors is the center of all points in the data set. Applying PCA to N-dimensional data set yields N N-dimensional eigenvectors, N eigenvalues and 1 N-dimensional center point. Enough theory, let’s see how we can put these ideas into code.
The goal is to transform a given data set
X of dimension p to an alternative data set Y of smaller dimension L. Equivalently, we are seeking to find the matrix Y, where Y is the Karhunen–Loève transform (KLT) of matrix X:
\[ \mathbf{Y} = \mathrm{KLT}\{\mathbf{X}\} \]
Organize the data set
Suppose you have data comprising a set of observations of
p variables, and you want to reduce the data so that each observation can be described with only L variables, L < p. Suppose further, that the data are arranged as a set of n data vectors \( x_1...x_n \) with each \( x_i \) representing a single grouped observation of the p variables. Calculate the empirical mean
Place the calculated mean values into an empirical mean vector
u of dimensions \( p\times 1 \).
\[ \mathbf{u[j]} = \frac{1}{n}\sum_{i=1}^{n}\mathbf{X[i,j]} \]
Calculate the deviations from the mean
Mean subtraction is an integral part of the solution towards finding a principal component basis that minimizes the mean square error of approximating the data. Hence, we proceed by centering the data as follows:
Store mean-subtracted data in the \( n\times p \) matrix
B.
\[ \mathbf{B} = \mathbf{X} - \mathbf{h}\mathbf{u^{T}} \]
where
h is an \( n\times 1 \) column vector of all 1s:
\[ h[i] = 1, i = 1, ..., n \]
Find the covariance matrix
Find the \( p\times p \) empirical covariance matrix
C from the outer product of matrix B with itself:
\[ \mathbf{C} = \frac{1}{n-1} \mathbf{B^{*}} \cdot \mathbf{B} \]
where * is the conjugate transpose operator. Note that if B consists entirely of real numbers, which is the case in many applications, the "conjugate transpose" is the same as the regular transpose.
Find the eigenvectors and eigenvalues of the covariance matrix
Compute the matrix
V of eigenvectors which diagonalizes the covariance matrix C:
\[ \mathbf{V^{-1}} \mathbf{C} \mathbf{V} = \mathbf{D} \]
where
D is the diagonal matrix of eigenvalues of C.
Matrix
D will take the form of a \( p \times p \) diagonal matrix:
\[ D[k,l] = \left\{\begin{matrix} \lambda_k, k = l \\ 0, k \neq l \end{matrix}\right. \]
here, \( \lambda_j \) is the
j-th eigenvalue of the covariance matrix C
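The steps above can be sketched directly in code. Below is an illustrative pure-Python version for the 2-D case (this is not the tutorial's OpenCV/C++ code; the helper name `pca_2d` is ours), computing the empirical mean, the centered matrix B, the covariance C, and a closed-form eigendecomposition of the symmetric 2x2 covariance matrix:

```python
import math

# Illustrative PCA for 2-D points, following the steps above:
# empirical mean, centered matrix B, covariance C = B^T B / (n-1),
# then a closed-form eigendecomposition of the symmetric 2x2 matrix C.
def pca_2d(points):
    n = len(points)
    # Step 1: empirical mean u[j] = (1/n) * sum_i X[i, j]
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # Step 2: deviations from the mean, B = X - h u^T
    b = [(x - mx, y - my) for x, y in points]
    # Step 3: covariance C (real data, so conjugate transpose = transpose)
    cxx = sum(dx * dx for dx, _ in b) / (n - 1)
    cxy = sum(dx * dy for dx, dy in b) / (n - 1)
    cyy = sum(dy * dy for _, dy in b) / (n - 1)
    # Step 4: eigenvalues/eigenvectors of [[cxx, cxy], [cxy, cyy]]
    half_trace = (cxx + cyy) / 2.0
    delta = math.sqrt(((cxx - cyy) / 2.0) ** 2 + cxy * cxy)
    lam1, lam2 = half_trace + delta, half_trace - delta  # lam1 >= lam2
    if abs(cxy) > 1e-12:
        v1 = (lam1 - cyy, cxy)  # eigenvector for lam1 (unnormalized)
    else:
        v1 = (1.0, 0.0) if cxx >= cyy else (0.0, 1.0)
    norm = math.hypot(v1[0], v1[1])
    return (mx, my), (lam1, lam2), (v1[0] / norm, v1[1] / norm)

# Points scattered tightly around the line y = 2x: the first principal
# component should point along (1, 2)/sqrt(5) and carry almost all variance.
pts = [(t, 2.0 * t + 0.01 * (-1) ** t) for t in range(10)]
center, (l1, l2), v1 = pca_2d(pts)
```

A library implementation (e.g. cv::PCA) should agree with this up to the sign of each eigenvector.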
The tutorial code is shown below. You can also download it from here.
Read image and convert it to binary
Here we apply the necessary pre-processing procedures in order to be able to detect the objects of interest.
Extract objects of interest
Then find and filter contours by size and obtain the orientation of the remaining ones.
Extract orientation
Orientation is extracted by calling the getOrientation() function, which performs the whole PCA procedure.
First the data need to be arranged in a matrix of size n x 2, where n is the number of data points we have. Then we can perform the PCA analysis. The calculated mean (i.e. center of mass) is stored in the cntr variable and the eigenvectors and eigenvalues are stored in the corresponding std::vector's.
Visualize result
The final result is visualized through the drawAxis() function, where the principal components are drawn in lines, and each eigenvector is multiplied by its eigenvalue and translated to the mean position.
The code opens an image, finds the orientation of the detected objects of interest and then visualizes the result by drawing the contours of the detected objects of interest, the center point, and the x-axis, y-axis regarding the extracted orientation.
|
This is one of those questions that is much trickier than it appears; many different people contributed to the formulas as we write them today. The short answer, which doesn't really do justice to history, is that only Euler presented volume formulas in this form in his textbooks after 1737.
The principal step was no doubt made by Archimedes in On Sphere and Cylinder, where he proved rigorously that $V_S:V_C=2:3$, where $V_S$ is the volume of a sphere and $V_C$ is the volume of the circumscribed cylinder (he also gave proportion for the surface area). "Obviously", $V_C=\pi r^2\times2r$, so we get the modern formula, right? Although this is a popular way of projecting modern concepts onto history it sells short the ingenuity of ancient Greeks and the difficulties they managed to overcome, not to mention the work of countless others who brought about our mathematical paradise. Here is one problem: how does one assign a number to a volume, or even to a length for that matter? Today we use real numbers and integration theory, but ancient Greeks had none of that. Their ingenious solution was to make do without both. Geometric magnitudes (volumes, areas, lengths) weren't assigned numbers at all, they were related to other like magnitudes as ratios. These ratios weren't numbers, they could be compared but not added, and only occasionally they could be expressed as ratios of whole numbers, the only numbers proper. This is why Archimedes expressed the "formula" the way he did.
But to get to our modern formula even in the ratio form like $V_S:r^3=4\pi:3$ there is another problem standing in the way. This proportion relates the volume of the sphere not to a cylinder, but to the cube on its radius. Problem is, $4\pi:3$ is not a ratio of whole numbers, unlike $2:3$. Of course, Archimedes didn't know that for sure, but Pythagoreans already got into hot water by assuming it for the side and the diagonal of a square, and later proving otherwise. So Archimedes, like geometers before and after him, did not write it that way, and they did not write $A=\pi r^2$ or $A:r^2=\pi$ for the area of a circle either. Not in equations and not in words. Yet again, Greeks rose to the occasion despite the absence of our modern machinery. The theory of proportion, an ingenious invention of Eudoxus of Cnidus presented in Book V of Euclid's Elements, allowed them to make sense of estimates like $r:s<A:r^2<p:q$ with whole numbers $p,q,r,s$. Archimedes and many of his successors did prove many such estimates without invoking mysterious entities, which remained undefined for almost two millennia hence, and without the modern idea that unbeknownst to them those ratios were "approximating" $\pi$.
If this last step seems trivial to us today let me point out that in 17th century Cavalieri and Roberval were still presenting their volumes and areas as ratios to other simpler volumes and areas, and recall the history of zero, which was not understood or used as a number for centuries after Babylonians and Alexandrian astronomers were using a symbol for it as a placeholder. With $\pi$ there wasn't even a symbol. Only at the end of middle ages some Arabs and Europeans started thinking of irrationals as some kind of numbers, while giving them telling nicknames, like "deafmute numbers". And this was for irrationals like $1+\sqrt{5}$ or $\sqrt[3]{2}$ given by algebraic formulas.
It appears that the first person to contemplate that $\pi$ and $e$ were also "some kind of numbers" in print was James Gregory in The True Squaring of the Circle and of the Hyperbola published in 1667. He was also the first to suggest a possibility that quadrature of the circle was unsolvable with straightedge and compass, although his argument for it was flawed. Even then it took time for the idea to percolate until William Jones in 1706 was bold enough to assign a symbol to the new "number", our modern $\pi$, while still saying "the exact proportion between the diameter and the circumference can never be expressed in numbers". Integration theory was sufficiently developed by then to be comfortable with volumes and areas as numbers as well, so Jones could dispose with the ratios and write $A=\pi r^2$. And when Euler adopted the symbol 30 years later, and made it famous, he could finally write $V_S=\frac43\pi r^3$.
|
Motivation
Suppose we observe survival/event times from some distribution \[T_{i\in1:n} \stackrel{iid}{\sim} f(t)\] where \(f\) is the density and \(F(t)=1-S(t)\) is the corresponding CDF expressed in terms of the survival function \(S(t)\). We can represent the hazard function of this distribution in terms of the density, \[\lambda(t) = \frac{f(t)}{S(t)}\] The hazard, CDF, and survival functions are all related. Thus, if we have a model for the hazard, we also have a model for the survival function and the survival time distribution. The well-known Cox proportional hazard approach models the hazard as a function of covariates \(x_i \in \mathbb{R}^p\) that multiply some baseline hazard \(\lambda_0(t)\), \[ \lambda(t_i) = \lambda_0(t_i)\exp(x_i'\theta)\] Frequentist estimation of \(\theta\) follows from maximizing the profile likelihood – which avoids the need to specify the baseline hazard \(\lambda_0(t)\). The model is semi-parametric because, while we don’t model the baseline hazard, we require that the multiplicative relationship between covariates and the hazard is correct.
This already works fine, so why go Bayesian? Here are just a few (hopefully) compelling reasons:
- We may want to nonparametrically estimate the baseline hazard itself.
- Posterior inference is exact, so we don't need to rely on asymptotic uncertainty estimates (though we may want to evaluate the frequentist properties of resulting point and interval estimates).
- Easy credible interval estimation for any function of the parameters.
- If we have posterior samples for the hazard, we also get automatic inference for the survival function as well.
Full Bayesian inference requires a proper probability model for both \(\theta\) and \(\lambda_0\). This post walks through a Bayesian approach that places a nonparametric prior on \(\lambda_0\) – specifically the Gamma Process.
The Gamma Process Prior
Independent Hazards
Recall that the cumulative baseline hazard is \(H_0(t) = \int_0^t \lambda_0(s)\, ds\), where the integral is the Riemann-Stieltjes integral. The central idea is to develop a prior for the cumulative hazard \(H_0(t)\), which will then admit a prior for the hazard, \(\lambda_0(t)\).
The Gamma Process is such a prior. Each realization of a Gamma Process is a cumulative hazard function that is centered around some prior cumulative hazard function, \(H^*\), with a sort of dispersion/concentration parameter, \(\beta\) that controls how tightly the realizations are distributed around the prior \(H^*\).
Okay, now the math. Let \(\mathcal{G}(\alpha, \beta)\) denote the Gamma distribution with shape parameter \(\alpha\) and rate parameter \(\beta\). Let \(H^*(t)\) for \(t\geq 0\) be our prior cumulative hazard function. For example we could choose \(H^*\) to be the exponential cumulative hazard, \(H^*(t)= \eta\cdot t\), where \(\eta\) is a fixed hyperparameter. By definition \(H^*(0)=0\). The Gamma Process is defined as having the following properties:
\(H_0(0) = 0\) \(\lambda_0(t) = H_0(t) - H_0(s) \sim \mathcal G \Big(\ \beta\big(H^*(t) - H^*(s)\big)\ , \ \beta \ \Big)\), for \(t>s\)
The increments of the cumulative hazard are the hazard values. The Gamma Process has the property that these increments are independent and Gamma-distributed. For a set of time increments \(t\geq0\), we can use the properties above to generate one realization of hazards \(\{\lambda_0(t) \}_{t\geq0}\). Equivalently, one realization of the cumulative hazard function is \(\{H_0(t)\}_{t\geq0}\), where \(H_0(t) = \sum_{k=0}^t \lambda_0(k)\). We denote the Gamma Process just described as \[H_0(t) \sim \mathcal{GP}\Big(\ \beta H^*(t), \ \beta \Big), \ \ t\geq0\]
Below in Panel A are some prior realizations of \(H_0(t)\) with a Weibull \(H^*\) prior for various concentration parameters, \(\beta\). Notice that for low \(\beta\) the realizations are widely dispersed around the mean cumulative hazard. Higher \(\beta\) yields tighter dispersion around \(H^*\).
Since there’s a correspondence between the \(H_0(t)\), \(\lambda_0(t)\), and \(S_0(t)\), we could also plot prior realizations of the baseline survival function \(S_0(t) = \exp\big\{- H_0(t) \big\}\) using the realization \(\{H_0(t)\}_{t\geq0}\). This is shown in Panel B with the Weibull survival function \(S^*\) corresponding to \(H^*\).
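One realization of the process is easy to simulate from the independent-increments property. Here is a minimal sketch using only the Python standard library (helper names are ours; note that `random.gammavariate` is parameterised by shape and scale, so rate \(\beta\) becomes scale \(1/\beta\)):

```python
import math
import random

# Draw one realization of H0 ~ GP(beta * Hstar, beta) on a time grid,
# using H0(t) - H0(s) ~ Gamma(shape = beta * (Hstar(t) - Hstar(s)), rate = beta).
def gamma_process_path(hstar, grid, beta, rng):
    h0 = [0.0]  # H0(0) = 0 by definition
    for s, t in zip(grid, grid[1:]):
        shape = beta * (hstar(t) - hstar(s))
        # gammavariate takes (shape, scale); rate beta means scale 1/beta
        inc = rng.gammavariate(shape, 1.0 / beta) if shape > 0 else 0.0
        h0.append(h0[-1] + inc)
    return h0

rng = random.Random(42)
hstar = lambda t: 0.5 * t           # exponential prior hazard, eta = 0.5
grid = [0.1 * i for i in range(101)]
path = gamma_process_path(hstar, grid, beta=5.0, rng=rng)
# baseline survival from the same realization, as in Panel B:
s0 = [math.exp(-h) for h in path]
```

Each increment has mean \(H^*(t)-H^*(s)\) and variance \((H^*(t)-H^*(s))/\beta\), so larger \(\beta\) concentrates paths around \(H^*\), matching the panels described above.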
|
I'm having a pretty hard time with this. I'm asked to show that, in the category of sets with exactly 17 elements, no two objects have either a direct product or a direct sum. Part of me doesn't even believe this statement—but whenever I try to come up with a direct product, I get snagged.
Let $(C, \alpha, \beta)$ be a [potential] direct product of $A$ and $B$. Fix some object, $C'$, with mappings, $\alpha'$ and $\beta'$, from $C'$ to $A$ and $B$ respectively. We need a unique $\gamma: C' \rightarrow C$ such that $\alpha \circ \gamma = \alpha'$ and $\beta \circ \gamma = \beta'$.
$C = A \times B$ just can't work, because $A \times B$ necessarily has more than 17 elements (in fact, no Cartesian-product-like $C$ can work, because the number of elements is fixed). What about some 17-element subset of $A \times B$? (In the category of sets, $\alpha$ and $\beta$ are injective, but not necessarily surjective.) But what if $\alpha'$ and $\beta'$ are both surjective? Then no $\gamma$ could satisfy this, so that can't work. (Doesn't that mean that, in the general category of sets, $\alpha$ and $\beta$ also have to be surjective? If they don't "touch" every element in both $A$ and $B$, then one can just define an $\alpha'$ or $\beta'$ that touches the elements $\alpha$ or $\beta$ don't, thus making a direct product impossible.)
Let $\alpha(c_n) = a_n$ and $\beta(c_n) = b_n$. This contains bijective $\alpha$ and $\beta$, but all we have to do is define a $C'$ such that $\alpha'(c_1) = a_1$ and $\beta'(c_1) = b_2$.
Ok. So, $\alpha$ and $\beta$ have to be bijective. Let's try to derive a contradiction: since these mappings are bijective, they have inverses. Thus, $\gamma$ must satisfy $\gamma = \alpha^{-1} \circ \alpha'$ and $\gamma = \beta^{-1} \circ \beta'$. To show that we can choose an object $C'$ where $\gamma$ can't make the diagram commute, just choose $\alpha'$ and $\beta'$ such that $\alpha^{-1} \circ \alpha' \neq \beta^{-1} \circ \beta'$. Can I assume that such an $\alpha'$ and $\beta'$ will always exist? Was there no point to the number of elements being $17$ specifically? This all seems to work for any category of sets with a fixed number of elements. Is there something crucial I'm missing?
|
Let $S=\{1,2,3,4,5,6,7,8,9,10\}$, $P=\{y \in \mathbb N : y \text { is a prime number}\}$, consider the map $f$ defined as follows: $$\begin{aligned} f:x\in S \rightarrow f(x) \in \wp (P) \end{aligned}$$ and $$\begin{aligned} f(x)=\{y \in P: y \mid x\} \end{aligned}$$
Let $X=\{1,4,5,8,10\}$, so the images are $f(1)=\emptyset$, $f(4)=\{2\}$, $f(5)=\{5\}$, $f(8)=\{2\}$, $f(10)=\{2,5\}$. Let $\Sigma$ be a partial order defined as follows:
$$\begin{aligned} x\text{ }\Sigma \text{ } y \Leftrightarrow f(x) \subset f (y) \text{ or } x=y\end{aligned}$$
draw the Hasse diagram relative to $(X, \Sigma)$.
The function $f$ clearly isn't injective, because $f(4) = f(8)=\{2\}$. I am unsure how the Hasse diagram should be drawn: in this case I have a repetition, so do I have to omit one of the elements with the same image? And is my Hasse diagram correct?
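One mechanical way to validate a hand-drawn Hasse diagram is to compute the covering relation directly. Here is a small sketch (Python; the helper names are ours, not part of the exercise):

```python
# Compute the covering relation of (X, Σ) mechanically, to validate a
# hand-drawn Hasse diagram.
f = {1: frozenset(), 4: frozenset({2}), 5: frozenset({5}),
     8: frozenset({2}), 10: frozenset({2, 5})}
X = sorted(f)

def leq(x, y):
    # x Σ y  iff  f(x) is a *proper* subset of f(y), or x = y
    return x == y or f[x] < f[y]

# y covers x when x Σ y strictly and no z lies strictly between them
covers = [(x, y) for x in X for y in X
          if x != y and leq(x, y)
          and not any(z not in (x, y) and leq(x, z) and leq(z, y) for z in X)]
```

This yields the edges 1-4, 1-5, 1-8 and 4-10, 5-10, 8-10. In particular 4 and 8 are incomparable (their images are equal, so the proper-subset test fails), so neither element is omitted: both appear as separate nodes at the same level.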
|
I am having trouble and I am unsure how to solve this PDE. $x\frac{\partial v}{\partial x} +y\frac{\partial v}{\partial y} = 2xy(x^2-y^2)$ I know you will use method of characteristics, but I am being thrown off by the function on the right side.
I think I already answered a question like this, but where?
Change $\begin{cases} t=\frac{y}{x} \\u(x,y)=U(x,t)\end{cases}\quad\to\quad \begin{cases} u_x=U_x-U_t\frac{y}{x^2} \\u_y=U_t\frac{1}{x} \end{cases}$ $$xu_x+yu_y=xU_x=2xy(x^2-y^2)=2x^2t(x^2-x^2t^2)=2x^4t(1-t^2)$$ $$U_x=2x^3t(1-t^2)$$ $$U=\frac{x^4}{2}t(1-t^2)+f(t)$$ $$u(x,y)=\frac{x^4}{2}\frac{y}{x}\left(1-\frac{y^2}{x^2}\right)+f\left(\frac{y}{x}\right)=\frac{xy}{2}\left(x^2-y^2\right)+f\left(\frac{y}{x}\right)$$ where $f$ is any differentiable function.
Note :
The idea for the change of variable $t=\frac{y}{x}$ comes from first solving the related homogeneous PDE $xu_x+yu_y=0$, whose general solution is $f\left(\frac{y}{x}\right)$. So, we replace the "one variable" function $f$ by a "two variables" function $U\left(x,\frac{y}{x}\right)=U(x,t)$: for inhomogeneous linear PDEs, this is similar to the "variation of parameters" for inhomogeneous linear ODEs.
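A quick numerical spot-check of this general solution, with an arbitrary choice $f(t)=\sin t$ (the check itself is ours, not part of the answer):

```python
import math

# Verify numerically that u(x, y) = (xy/2)(x^2 - y^2) + f(y/x), with the
# arbitrary choice f(t) = sin(t), satisfies x*u_x + y*u_y = 2xy(x^2 - y^2),
# using central finite differences for the partial derivatives.
def u(x, y):
    return 0.5 * x * y * (x * x - y * y) + math.sin(y / x)

def residual(x, y, h=1e-5):
    ux = (u(x + h, y) - u(x - h, y)) / (2.0 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2.0 * h)
    lhs = x * ux + y * uy
    rhs = 2.0 * x * y * (x * x - y * y)
    return abs(lhs - rhs)

worst = max(residual(x, y) for x, y in [(1.0, 0.5), (2.0, -1.0), (0.7, 0.3)])
```

The residual is at the level of finite-difference error, consistent with the solution being exact.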
|
Prove that there is no Homomorphism from $\Bbb Z_8 \oplus \Bbb Z_2$ onto $\Bbb Z_4 \oplus \Bbb Z_4$?
Suppose that $\phi$ is an onto homomorphism between the two groups.
Then $\phi(\Bbb Z_8 \oplus \Bbb Z_2) = \Bbb Z_4 \oplus \Bbb Z_4$ because it's onto and $|\phi(\Bbb Z_8 \oplus \Bbb Z_2)| = |\Bbb Z_4 \oplus \Bbb Z_4| = 16$.
Then $(\Bbb Z_8 \oplus \Bbb Z_2) / \ker\phi \approx \Bbb Z_4 \oplus \Bbb Z_4$.
Then $|(\Bbb Z_8 \oplus \Bbb Z_2) / \ker\phi| = |\Bbb Z_4 \oplus \Bbb Z_4| \Rightarrow |\ker\phi| = 1$.
and this implies that the homomorphism is injective and onto, which implies it's an isomorphism.
Is there something I'm missing here? Because this seems to show that the two groups are isomorphic.
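Since both groups are small, the intended claim (that no onto homomorphism exists) can also be confirmed by brute force. This sketch (names ours) uses the fact that a homomorphism out of $\Bbb Z_8 \oplus \Bbb Z_2$ is determined by the images $u=\phi(1,0)$ and $v=\phi(0,1)$, where $v$ must satisfy $2v=0$:

```python
from itertools import product

# Brute-force check: a homomorphism φ: Z8 ⊕ Z2 → Z4 ⊕ Z4 is determined by
# u = φ(1,0) and v = φ(0,1) with 2v = 0 (8u = 0 holds automatically, since
# Z4 ⊕ Z4 has exponent 4). Its image is {i*u + j*v : 0 <= i < 8, 0 <= j < 2}.
G = list(product(range(4), repeat=2))                   # elements of Z4 ⊕ Z4
V = [v for v in G if (2 * v[0]) % 4 == 0 and (2 * v[1]) % 4 == 0]

def image(u, v):
    return {((i * u[0] + j * v[0]) % 4, (i * u[1] + j * v[1]) % 4)
            for i in range(8) for j in range(2)}

max_image = max(len(image(u, v)) for u in G for v in V)  # never reaches 16
```

The maximal image has 8 elements, so no homomorphism is onto: the flaw in the argument above is the assumption that an onto homomorphism exists in the first place.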
|
Tikhonov regularization and ridge regression are terms often used as if they were identical. Is it possible to specify exactly what the difference is?
Tikhonov regularization is a larger class of methods than ridge regression. Here is my attempt to spell out exactly how they differ.
Suppose that for a known matrix $A$ and vector $\mathbf{b}$, we wish to find a vector $\mathbf{x}$ such that:
$A\mathbf{x}=\mathbf{b}$.
The standard approach is ordinary least squares linear regression. However, if no $x$ satisfies the equation or more than one $x$ does—that is the solution is not unique—the problem is said to be ill-posed. Ordinary least squares seeks to minimize the sum of squared residuals, which can be compactly written as:
$\|A\mathbf{x}-\mathbf{b}\|^2 $
where $\left \| \cdot \right \|$ is the Euclidean norm. In matrix notation the solution, denoted by $\hat{x}$, is given by:
$\hat{x} = (A^{T}A)^{-1}A^{T}\mathbf{b}$
Tikhonov regularization minimizes
$\|A\mathbf{x}-\mathbf{b}\|^2+ \|\Gamma \mathbf{x}\|^2$
for some suitably chosen Tikhonov matrix, $\Gamma $. An explicit matrix form solution, denoted by $\hat{x}$, is given by:
$\hat{x} = (A^{T}A+ \Gamma^{T} \Gamma )^{-1}A^{T}{b}$
The effect of regularization may be varied via the scale of matrix $\Gamma$. For $\Gamma = 0$ this reduces to the unregularized least-squares solution, provided that $(A^{T}A)^{-1}$ exists.
Typically for
ridge regression, two departures from Tikhonov regularization are described. First, the Tikhonov matrix is replaced by a multiple of the identity matrix
$\Gamma= \alpha I $,
giving preference to solutions with smaller norm, i.e., the $L_2$ norm. Then $\Gamma^{T} \Gamma$ becomes $\alpha^2 I$ leading to
$\hat{x} = (A^{T}A+ \alpha^2 I )^{-1}A^{T}{b}$
Finally, for ridge regression, it is typically assumed that the variables in $A$ are scaled so that, writing $X$ for the scaled matrix, $X^{T}X$ has the form of a correlation matrix and $X^{T}\mathbf{b}$ is the vector of correlations between the predictors and $\mathbf{b}$, leading to
$\hat{x} = (X^{T}X+ \alpha^2 I )^{-1}X^{T}{b}$
Note in this form the Lagrange multiplier $\alpha^2$ is usually replaced by $k$, $\lambda$, or some other symbol but retains the property $\lambda\geq0$
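As a concrete sanity check of the closed form $\hat{x} = (A^{T}A+ \alpha^2 I)^{-1}A^{T}b$, here is a small pure-Python sketch (the helper name is ours; the 2-column case is solved by Cramer's rule rather than a linear-algebra library):

```python
# Ridge estimate x̂ = (AᵀA + α²I)⁻¹ Aᵀb for a two-column design matrix,
# solving the 2x2 normal equations by Cramer's rule.
def ridge_2col(A, b, alpha):
    # normal-equation matrix M = AᵀA + α²I and right-hand side r = Aᵀb
    m00 = sum(a[0] * a[0] for a in A) + alpha ** 2
    m11 = sum(a[1] * a[1] for a in A) + alpha ** 2
    m01 = sum(a[0] * a[1] for a in A)
    r0 = sum(a[0] * y for a, y in zip(A, b))
    r1 = sum(a[1] * y for a, y in zip(A, b))
    det = m00 * m11 - m01 * m01
    return ((r0 * m11 - r1 * m01) / det, (m00 * r1 - m01 * r0) / det)

# Data generated exactly from x = (2, -1): with α = 0 we recover it,
# and increasing α shrinks the solution toward the origin.
A = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
b = [2.0 * a0 - 1.0 * a1 for a0, a1 in A]
ols = ridge_2col(A, b, alpha=0.0)
shrunk = ridge_2col(A, b, alpha=2.0)
```

This illustrates both claims above: $\alpha=0$ reproduces the unregularized least-squares solution, and a positive $\alpha$ gives preference to solutions with smaller norm.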
Carl has given a thorough answer that nicely explains the mathematical differences between Tikhonov regularization vs. ridge regression. Inspired by the historical discussion here, I thought it might be useful to add a short example demonstrating how the more general Tikhonov framework can be useful.
First a brief note on context. Ridge regression arose in statistics, and while regularization is now widespread in statistics & machine learning, Tikhonov's approach was originally motivated by inverse problems arising in model-based data assimilation (particularly in geophysics). The simplified example below is in this category (more complex versions are used for paleoclimate reconstructions).
Imagine we want to reconstruct temperatures $u[x,t=0]$ in the past, based on present-day measurements $u[x,t=T]$. In our simplified model we will assume that temperature evolves according to the heat equation $$ u_t = u_{xx} $$ in 1D with periodic boundary conditions $$ u[x+L,t] = u[x,t] $$ A simple (explicit) finite difference approach leads to the discrete model $$ \frac{\Delta\mathbf{u}}{\Delta{t}} = \frac{\mathbf{Lu}}{\Delta{x^2}} \implies \mathbf{u}_{t+1} = \mathbf{Au}_t $$ Mathematically, the evolution matrix $\mathbf{A}$ is invertible, so we have $$\mathbf{u}_t = \mathbf{A^{-1}u}_{t+1} $$ However, numerically, difficulties will arise if the time interval $T$ is too long.
Tikhonov regularization can solve this problem by solving \begin{align} \mathbf{Au}_t &\approx \mathbf{u}_{t+1} \\ \omega\mathbf{Lu}_t &\approx \mathbf{0} \end{align} which adds a small penalty $\omega^2\ll{1}$ on roughness $u_{xx}$.
Below is a comparison of the results:
We can see that the original temperature $u_0$ has a smooth profile, which is smoothed still further by diffusion to give $u_\mathsf{fwd}$. Direct inversion fails to recover $u_0$, and the solution $u_\mathsf{inv}$ shows strong "checkerboarding" artifacts. However the Tikhonov solution $u_\mathsf{reg}$ is able to recover $u_0$ with quite good accuracy.
Note that in this example, ridge regression would always push our solution towards an "ice age" (i.e. uniform zero temperatures). Tikhonov regularization allows us a more flexible, physically-based prior constraint: here our penalty essentially says the reconstruction $\mathbf{u}$ should be only slowly evolving, i.e. $u_t\approx{0}$.
Matlab code for the example is below (can be run online here).
% Tikhonov Regularization Example: Inverse Heat Equation
n=15; t=2e1; w=1e-2; % grid size, # time steps, regularization
L=toeplitz(sparse([-2,1,zeros(1,n-3),1]/2)); % laplacian (periodic BCs)
A=(speye(n)+L)^t; % forward operator (diffusion)
x=(0:n-1)'; u0=sin(2*pi*x/n); % initial condition (periodic & smooth)
ufwd=A*u0; % forward model
uinv=A\ufwd; % inverse model
ureg=[A;w*L]\[ufwd;zeros(n,1)]; % regularized inverse
plot(x,u0,'k.-',x,ufwd,'k:',x,uinv,'r.:',x,ureg,'ro');
set(legend('u_0','u_{fwd}','u_{inv}','u_{reg}'),'box','off');
|
Alright, I have this group $\langle x_i, i\in\mathbb{Z}\mid x_i^2=x_{i-1}x_{i+1}\rangle$ and I'm trying to determine whether $x_ix_j=x_jx_i$ or not. I'm unsure there is enough information to decide this, to be honest.
Nah, I have a pretty garbage question. Let me spell it out.
I have a fiber bundle $p : E \to M$ where $\dim M = m$ and $\dim E = m+k$. Usually a normal person defines $J^r E$ as follows: for any point $x \in M$ look at local sections of $p$ over $x$.
For two local sections $s_1, s_2$ defined on some nbhd of $x$ with $s_1(x) = s_2(x) = y$, say $J^r_p s_1 = J^r_p s_2$ if with respect to some choice of coordinates $(x_1, \cdots, x_m)$ near $x$ and $(x_1, \cdots, x_{m+k})$ near $y$ such that $p$ is projection to first $m$ variables in these coordinates, $D^I s_1(0) = D^I s_2(0)$ for all $|I| \leq r$.
This is a coordinate-independent (chain rule) equivalence relation on local sections of $p$ defined near $x$. So let the set of equivalence classes be $J^r_x E$ which inherits a natural topology after identifying it with $J^r_0(\Bbb R^m, \Bbb R^k)$ which is space of $r$-order Taylor expansions at $0$ of functions $\Bbb R^m \to \Bbb R^k$ preserving origin.
Then declare $J^r p : J^r E \to M$ is the bundle whose fiber over $x$ is $J^r_x E$, and you can set up the transition functions etc no problem so all topology is set. This becomes an affine bundle.
Define the $r$-jet sheaf $\mathscr{J}^r_E$ to be the sheaf which assigns to every open set $U \subset M$ an $(r+1)$-tuple $(s = s_0, s_1, s_2, \cdots, s_r)$ where $s$ is a section of $p : E \to M$ over $U$, $s_1$ is a section of $dp : TE \to TU$ over $U$, $\cdots$, $s_r$ is a section of $d^r p : T^r E \to T^r U$ where $T^k X$ is the iterated $k$-fold tangent bundle of $X$, and the tuple satisfies the following commutation relation for all $0 \leq k < r$
$$\require{AMScd}\begin{CD} T^{k+1} E @>>> T^k E\\ @AAA @AAA \\ T^{k+1} U @>>> T^k U \end{CD}$$
@user193319 It converges uniformly on $[0,r]$ for any $r\in(0,1)$, but not on $[0,1)$, cause deleting a measure zero set won't prevent you from getting arbitrarily close to $1$ (for a non-degenerate interval has positive measure).
The top and bottom maps are tangent bundle projections, and the left and right maps are $s_{k+1}$ and $s_k$.
@RyanUnger Well I am going to dispense with the bundle altogether and work with the sheaf, is the idea.
The presheaf is $U \mapsto \mathscr{J}^r_E(U)$ where $\mathscr{J}^r_E(U) \subset \prod_{k = 0}^r \Gamma_{T^k E}(T^k U)$ consists of all the $(r+1)$-tuples of the sort I described
It's easy to check that this is a sheaf, because basically sections of a bundle form a sheaf, and when you glue two of those $(r+1)$-tuples of the sort I describe, you still get an $(r+1)$-tuple that preserves the commutation relation
The stalk of $\mathscr{J}^r_E$ over a point $x \in M$ is clearly the same as $J^r_x E$, consisting of all possible $r$-order Taylor series expansions of sections of $E$ defined near $x$ possible.
Let $M \subset \mathbb{R}^d$ be a compact smooth $k$-dimensional manifold embedded in $\mathbb{R}^d$. Let $\mathcal{N}(\varepsilon)$ denote the minimal cardinal of an $\varepsilon$-cover $P$ of $M$; that is for every point $x \in M$ there exists a $p \in P$ such that $\| x - p\|_{2}<\varepsilon$....
The same result should be true for abstract Riemannian manifolds. Do you know how to prove it in that case?
I think there you really do need some kind of PDEs to construct good charts.
I might be way overcomplicating this.
If we define $\tilde{\mathcal H}^k_\delta$ to be the $\delta$-Hausdorff "measure" but instead of $diam(U_i)\le\delta$ we set $diam(U_i)=\delta$, does this converge to the usual Hausdorff measure as $\delta\searrow 0$?
I think so by the squeeze theorem or something.
this is a larger "measure" than $\mathcal H^k_\delta$ and that increases to $\mathcal H^k$
but then we can replace all of those $U_i$'s with balls, incurring some fixed error
In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set S in a Euclidean space Rn, or more generally in a metric space (X, d). It is named after the German mathematician Hermann Minkowski and the French mathematician Georges Bouligand.To calculate this dimension for a fractal S, imagine this fractal lying on an evenly spaced grid, and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid...
@BalarkaSen what is this
ok but this does confirm that what I'm trying to do is wrong haha
In mathematics, Hausdorff dimension (a.k.a. fractal dimension) is a measure of roughness and/or chaos that was first introduced in 1918 by mathematician Felix Hausdorff. Applying the mathematical formula, the Hausdorff dimension of a single point is zero, of a line segment is 1, of a square is 2, and of a cube is 3. That is, for sets of points that define a smooth shape or a shape that has a small number of corners—the shapes of traditional geometry and science—the Hausdorff dimension is an integer agreeing with the usual sense of dimension, also known as the topological dimension. However, formulas...
Let $a,b \in \Bbb{R}$ be fixed, and let $n \in \Bbb{Z}$. If $[\cdot]$ denotes the greatest integer function, is it possible to bound $|[abn] - [a[bn]|$ by a constant that is independent of $n$? Are there any nice inequalities with the greatest integer function?
I am trying to show that $n \mapsto [abn]$ and $n \mapsto [a[bn]]$ are equivalent quasi-isometries of $\Bbb{Z}$...that's the motivation.
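A short empirical sketch of the bound (the choice of $a$ and $b$ below is arbitrary): writing $bn=\lfloor bn\rfloor+\theta$ with $\theta\in[0,1)$ gives $abn - a\lfloor bn\rfloor = a\theta \in [0,a)$ for $a>0$, so the two floors differ by at most $\lfloor a\rfloor + 1$, independently of $n$.

```python
import math

# Empirical check of |⌊abn⌋ - ⌊a⌊bn⌋⌋| <= ⌊a⌋ + 1 for one (hypothetical)
# choice of a > 0 and irrational b, over a range of n.
a, b = 2.5, math.sqrt(2)
diffs = [abs(math.floor(a * b * n) - math.floor(a * math.floor(b * n)))
         for n in range(1, 20000)]
worst = max(diffs)
```

A uniform bound in $n$ is exactly what is needed for the two maps to be equivalent quasi-isometries of $\Bbb{Z}$.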
|
Your equation #1 is $$(R+2\lambda x)i=B\ell v=B\ell \frac{dx}{dt}$$ Solving for $x$ as a function of time, and assuming $x(0)=0$, we get $$R+2\lambda x(t)=R e^{kt}$$ where $k=\frac{2\lambda i}{B\ell}$. The velocity of the bar is $v(t)=v_0e^{kt}$ where the initial velocity is $v_0=\frac{iR}{B\ell}$.
We do not need to find an expression for $F$. And we do not need to take account of a resistive force $Bi\ell$ which the magnetic field exerts on the wire. This is because the motion of the wire in the magnetic field is the cause of the current. The magnetic field does not create a current then exert a force on the same current which it has created. If the current had been generated independently of the magnetic field, then there would be a magnetic force on that current.
No work is done by the magnetic field. Only force $F$ does work. The rate of work $P$ done by force $F$ is the sum of 3 components :
(1) the rate $K$ at which the kinetic energy of the bar is increasing; (2) the power $Q$ dissipated as heat in the resistors; and (3) the rate $M$ at which energy is being stored in the magnetic field created by the rectangular loop of current.
Now $M$ depends on the self-inductance $L(x)$ of the rectangular loop, which depends on $x$. Even for such a simple geometry an expression for $L(x)$ is difficult to obtain - see Rectangle Loop Inductance Calculator in All About Circuits website. I shall assume without justification that $M \ll P$, so that $M$ is negligible.
$$K=\frac{d}{dt}(\frac12mv^2)=mv\frac{dv}{dt}=mkv^2=mkv_0^2 e^{2kt}$$ $$Q=i^2(R+2\lambda x)=i^2Re^{kt}$$
The fraction of work done by $F$ which is converted into heat is $$\frac{Q}{P}=\frac{Q}{Q+K}=\frac{1}{1+K/Q}=\frac{1}{1+he^{kt}}$$ where $$h=\frac{mkv_0^2}{i^2 R}=\frac{2\lambda m i R}{(B\ell)^3}$$ This is not constant, it decreases with time. The initial fraction at $t=0$ is $\frac{1}{1+h}$. If the rails have zero resistance ($\lambda=0$) then $k=K=h=0$ so $Q=P$ - ie all of the work done by $F$ is dissipated as heat in resistor $R$, as obtained in Q4 in this worksheet.
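With some made-up parameter values (all numbers below are hypothetical; the symbols are as defined in the answer), the time-dependence of this fraction is easy to evaluate, and the closed form can be cross-checked against computing $K$ and $Q$ directly:

```python
import math

# Evaluate Q/P = 1/(1 + h e^{kt}) and cross-check it against Q/(Q+K)
# computed from the expressions for K and Q above.
m, i, R, lam, B, ell = 0.1, 2.0, 0.5, 0.05, 1.0, 0.4  # hypothetical values
Bl = B * ell
k = 2.0 * lam * i / Bl               # exponential growth rate
v0 = i * R / Bl                      # initial velocity
h = 2.0 * lam * m * i * R / Bl ** 3  # equals m*k*v0^2 / (i^2 R)

def fraction_to_heat(t):
    return 1.0 / (1.0 + h * math.exp(k * t))

def fraction_via_KQ(t):
    K = m * k * v0 ** 2 * math.exp(2.0 * k * t)
    Q = i ** 2 * R * math.exp(k * t)
    return Q / (Q + K)

fracs = [fraction_to_heat(0.1 * j) for j in range(11)]
```

As stated above, the fraction starts at $1/(1+h)$ and decreases monotonically with time.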
|
The probability that an arithmetic Brownian motion process $dX_t = \mu\, dt + \sigma\, dW_t$ hits an upper barrier $U$ before it hits a lower barrier $L$ is given by
$$ \mathbb{P}(\tau_U\leq \tau_L) = \frac{\text{Y}(x_0)-\text{Y}(L)}{\text{Y}(U)-\text{Y}(L)} $$ where $$ \text{Y}(x) = \exp\left(\frac{-2\mu x}{\sigma^2}\right) $$
But what is $\mathbb{P}(\{\tau_U\leq T\} \cap \{\tau_U\leq\tau_L\})$ if both $x_0$ and $x_T$ are known?
i.e. the probability the process hits $U$ before $L$ whilst in between the end points of a Brownian bridge.
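For reference, the quoted unconstrained formula is straightforward to evaluate (this sketch does not answer the bridge-constrained question; the function name is ours, and the $\mu=0$ branch uses the limiting value $(x_0-L)/(U-L)$):

```python
import math

# P(τ_U <= τ_L) = (Y(x0) - Y(L)) / (Y(U) - Y(L)), Y(x) = exp(-2 μ x / σ²).
# For μ = 0 the formula degenerates; its limit is the familiar (x0-L)/(U-L).
def prob_hit_upper_first(x0, L, U, mu, sigma):
    if mu == 0.0:
        return (x0 - L) / (U - L)
    Y = lambda x: math.exp(-2.0 * mu * x / sigma ** 2)
    return (Y(x0) - Y(L)) / (Y(U) - Y(L))

# Started midway between symmetric barriers, an upward drift makes hitting
# the upper barrier first more likely than 1/2.
p = prob_hit_upper_first(x0=0.0, L=-1.0, U=1.0, mu=0.2, sigma=1.0)
```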
|
Why, when calculating the conditional probability of A given B, do we assume that the probability of B is greater than zero?
We have
$$P(A\mid B)=\frac{P(A\cap B)}{P(B)}$$
If $P(B)=0$ then the RHS is undefined.
Also, if we're given that $B$ happens then it cannot be the case that $P(B)=0$. That's a contradiction.
Here is an example where we can still calculate $P(A\mid B)$ even though $P(B)=0$:
Let $X\sim N(0,1)$ and $Y\sim N(0,1)$
Let $A$ be the event that $X\lt1$ and let $B$ be the event that $X+Y=2$
Then $P(A)=\Phi(1) \approx 0.8413$ and $P(B)=0$ since the normal distribution is continuous, but still $P(A\mid B)=0.5$.
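A quick Monte Carlo sanity check of this example (a sketch of ours; it uses the standard bivariate-normal fact that, conditional on $X+Y=2$ with $X,Y$ iid $N(0,1)$, the law of $X$ is $N(1, 1/2)$):

```python
import math
import random

# Conditional on X + Y = 2, X ~ N(1, 1/2), so P(X < 1 | B) = 1/2 exactly
# (the conditional mean is 1 and the normal density is symmetric about it).
rng = random.Random(7)
n = 100_000
hits = sum(rng.gauss(1.0, math.sqrt(0.5)) < 1.0 for _ in range(n))
estimate = hits / n
```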
It is to avoid the division by zero that would otherwise occur in the defining formula.
The definition of a conditional probability mass function, $\mathsf P(A\mid B):=\mathsf P(A\cap B)\div\mathsf P(B)$, is only viable when $B$ has non-zero measure.
Still, however, when $\mathsf P(B)=0$ there are other compatible definitions for conditional probability measures that can be used, although they are not necessarily given by probability mass functions.
|
Quadratic Equations Practise solving quadratic equations algebraically with this self-marking exercise.
This is level 6: three terms, and the roots are not necessarily integers. You can earn a trophy if you get at least 7 correct.
For this exercise all answers should be rounded to three significant figures.
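As an illustration of the rounding convention, here is a small Python sketch (the helper names are ours, not part of the exercise) that solves $ax^2+bx+c=0$ with the quadratic formula and reports real roots to three significant figures:

```python
import math

# Round x to a given number of significant figures (default 3).
def round_sig(x, sig=3):
    if x == 0:
        return 0.0
    return round(x, sig - 1 - int(math.floor(math.log10(abs(x)))))

# Solve ax^2 + bx + c = 0; return real roots (ascending), rounded to 3 s.f.
def solve_quadratic(a, b, c):
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return ()  # no real roots
    r = math.sqrt(disc)
    roots = sorted(((-b - r) / (2.0 * a), (-b + r) / (2.0 * a)))
    return tuple(round_sig(x) for x in roots)
```

For example, $x^2-5x+6=0$ gives roots 2 and 3, while $x^2-3x+1=0$ gives the irrational roots $(3\mp\sqrt5)/2$, reported as 0.382 and 2.62.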
Instructions
Try your best to answer the questions above. Type your answers into the boxes provided leaving no spaces. As you work through the exercise regularly click the "check" button. If you have any wrong answers, do your best to do corrections but if there is anything you don't understand, please ask your teacher for help.
When you have got all of the questions correct you may want to print out this page and paste it into your exercise book. If you keep your work in an ePortfolio you could take a screen shot of your answers and paste that into your Maths file.
Factorising - Factorise algebraic expressions in this structured online self marking exercise.
Level 1 - A quadratic equation presented in a factorised form.
Level 2 - Two terms where the unknown is a factor of both. The roots are integers.
Level 3 - Three terms where the squared term has a coefficient of one. The roots are integers.
Level 4 - Three terms where the squared term has a coefficient other than one and the expression factorises.
Level 5 - The difference between two squares.
Level 6 - Three terms and the roots are not necessarily integers.
Level 7 - Mixed questions on solving quadratic equations
Exam Style Questions - A collection of problems in the style of GCSE or IB/A-level exam paper questions (worked solutions are available for Transum subscribers).
More Algebra including lesson Starters, visual aids, investigations and self-marking exercises.
See the National Curriculum page for links to related online activities and resources.
Here is the formula for solving the equation \(ax^2 + bx + c = 0\).$$ x = \frac{ - b \pm \sqrt {b^2 - 4ac} }{2a} $$
Did you know that there is another formula for finding the roots of quadratic equations? It is called the 'citardauq' (the word quadratic backwards) formula and you can read more about it here but you will never need it for school Maths.
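For readers who want to check their answers for this level, here is a short script (a sketch of my own, not part of the Transum site) that applies the formula and rounds to three significant figures as the instructions require:

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of ax^2 + bx + c = 0, rounded to 3 significant figures."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                              # no real roots
    roots = {(-b + math.sqrt(disc)) / (2 * a),
             (-b - math.sqrt(disc)) / (2 * a)}
    return sorted(float(f"{r:.3g}") for r in roots)

roots = solve_quadratic(1, -3, 1)              # x^2 - 3x + 1 = 0
```

For \(x^2 - 3x + 1 = 0\) this gives the two roots \((3\pm\sqrt5)/2\) rounded to 0.382 and 2.62.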
Don't wait until you have finished the exercise before you click on the 'Check' button. Click it often as you work through the questions to see if you are answering them correctly. You can double-click the 'Check' button to make it float at the bottom of your screen.
Close
|
Hi, Can someone provide me some self reading material for Condensed matter theory? I've done QFT previously for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks
@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :)
2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus
Same thought as you; however, I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein.
However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look to machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown
Since the GR equations are ultimately a system of 10 nonlinear PDEs, it is possible that the solution strategy has some relation to the class of spacetime under consideration, and that might help heavily reduce the parameters needed to simulate them
I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
@ooolb Even if that is really possible (I can always talk about things in a non-joking perspective), the issue is that 1) unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have a reduced probability of appearing in dreams), and 2) for 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams
@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after say training it on 1000 different PDEs
Actually that makes me wonder, are the space of all coordinate choices more than all possible moves of Go?
enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes
orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others
Btw, since gravity is nonlinear, do we expect that if a region where spacetime is frame-dragged in the clockwise direction is superimposed on a spacetime frame-dragged in the anticlockwise direction, the result is a spacetime with no frame drag? (One possible physical scenario where this could occur may be when two massive rotating objects with opposite angular velocities are on course to merge.)
Well, I'm a beginner in the study of General Relativity, ok? My knowledge of the subject is based on books like Schutz, Hartle and Carroll, and on introductory papers. About quantum mechanics I have only a poor knowledge yet.
So, what I meant by "gravitational double-slit experiment" is: is there a gravitational analogue of the double-slit experiment for gravitational waves?
@JackClerk the double slits experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources.
But if we could figure out a way to do it then yes GWs would interfere just like light wave.
Thank you @Secret and @JohnRennie . But for conclude the discussion, I want to put a "silly picture" here: Imagine a huge double slit plate in space close to a strong source of gravitational waves. Then like water waves, and light, we will see the pattern?
So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference space-time would have a flat geometry, and if we put a spherical object in this region the metric will become Schwarzschild-like.
Pardon, I just spent some naive-philosophy time here with these discussions
The situation was even more dire for Calculus and I managed!
This is a neat strategy I have found-revision becomes more bearable when I have The h Bar open on the side.
In all honesty, I actually prefer exam season! At all other times-as I have observed in this semester, at least-there is nothing exciting to do. This system of tortuous panic, followed by a reward is obviously very satisfying.
My opinion is that I need you, Kaumudi, to decrease the probability of h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago
(Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)
that's true. Though back in high school, regardless of the language, our teacher taught us to always indent code to allow easy reading and troubleshooting. We were also taught the 4-space indentation convention
@JohnRennie I wish I can just tab because I am also lazy, but sometimes tab insert 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if then conditions in my code, which is why I had no choice
I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy
I can do all tasks related to my work without leaving the text editor (of course, that text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do)
Hi to all. Does anyone know where I could write MATLAB code online (for free)? Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks.
@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :)
@Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa.
@Blue Ok, thanks. I found a way by connecting to the university's servers (the program isn't installed on the PCs in the computer room, but by connecting to the university's server, i.e. running another environment remotely, I found an older version of MATLAB). But thanks again.
@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject;
it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding
If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
|
Why is there a phase difference between resistor and capacitor? I am looking for the conceptual reason behind this. What does a phase difference of \$\pi/2\$ show conceptually? According to me, the one with positive phase difference would reach its peak value earlier than the other, but how do we come to know the time difference between the two quantities to reach at its peak? What's the reason that there is a difference in reaching at its peak value? Please illustrate in as much detail as possible.
Q = CV in a capacitor and \$\dfrac{dQ}{dt}\$ = current therefore: -
I = \$C\dfrac{dV}{dt}\$
This means that current is proportional to the derivative of voltage.
If that voltage is a sine wave then the derivative is a cosine wave hence a phase difference of pi/2 (90 degrees).
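This can be checked numerically: differentiating a sampled sine wave gives a waveform whose peak sits a quarter period earlier. A sketch (the sampling parameters are arbitrary choices):

```python
import math

T, n = 1.0, 10_000                 # period and number of samples (illustrative)
dt = T / n
v = [math.sin(2 * math.pi * k * dt / T) for k in range(n)]
# forward-difference approximation of dV/dt (take C = 1)
i = [(v[k + 1] - v[k]) / dt for k in range(n - 1)]

t_peak_v = max(range(n), key=lambda k: v[k]) * dt       # voltage peaks at ~T/4
t_peak_i = max(range(n - 1), key=lambda k: i[k]) * dt   # current peaks at ~0
```

The current peaks a quarter period (`T/4`, i.e. 90 degrees) before the voltage.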
In a resistor, V = IR i.e. the relationship between voltage and current is that they are in-phase.
The current through the resistor is proportional to the voltage across it. So the voltage and current will have the same shape and hence they will have maxima and minima together.
In the case of a capacitor, the current through the capacitor is proportional to the rate of change of the voltage across it. So for a sine-voltage source the current will be a cosine wave. One can therefore say that the current through a capacitor leads the voltage across it by 90 degrees (or, equivalently, lags it by 270 degrees).
Why is there a phase difference between resistor and capacitor?
The current through a resistor is proportional to the voltage across it, so current and voltage have the same shape. Whereas in the case of a capacitor, the current through the capacitor is proportional to the derivative of the voltage across it, so for a sinusoidally varying voltage source there is a constant phase shift between them.
how do we come to know the time difference between the two quantities to reach at its peak?
We can measure. Or we can calculate mathematically. The relationship between time difference and phase difference is:
$$\Delta \phi = \frac{\Delta t}{T} \times 2\pi$$
where, T is the total time period of one cycle.
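As a concrete instance of that relation (the numbers are illustrative):

```python
import math

def phase_difference(dt, T):
    """Phase difference in radians for a time offset dt within one period T."""
    return (dt / T) * 2 * math.pi

# a quarter-period offset corresponds to pi/2, i.e. 90 degrees
dphi = phase_difference(dt=0.25e-3, T=1e-3)
deg = math.degrees(dphi)
```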
What's the reason that there is a difference in reaching at its peak value?
For any smooth signal, the derivative is zero at the signal's peaks. So a signal continuous in time and its derivative cannot have their peaks at the same point in time.
One simplified answer would be:
A capacitor is a simple device that charges, raising its own voltage, when a voltage greater than its current voltage is applied, and discharges when you put a load on it (or apply a smaller voltage). This is all fine. But if we start using an AC waveform to rapidly charge and discharge the capacitor, we get this interesting phase shift. In a sine wave the voltage does not change at a constant rate, and the faster you try to charge or discharge a capacitor, the more current you need. It just so happens that the place in a sine wave where the voltage changes the fastest is \$90^\circ\$ (\$0.5 \times \pi\$ or \$0.25 \times \tau\$) offset from where the peak is.
So a capacitors current is offset by \$90^\circ\$ because that's when the voltage changes the fastest.
|
Is there a way to simplify $\det(D + C)$, where $D,C$ are square matrices of matching dimensions, $D$ is diagonal (with different diagonal elements, $D_{ij} = \delta_{ij}d_i$), and $C$ is a constant matrix, that is, all entries $C_{ij}=c$ are equal to the same number?
To be more explicit, assuming $D,C\in\mathbb{R}^{n\times n}$, the matrix $D+C$ has the form:
$$D + C = \left(\begin{array}{ccccc} d_1 + c & c & c & \cdots & c\\ c & d_2 + c & c & \cdots & c\\ c & c & d_3 + c & \cdots & c\\ \vdots & \vdots & \vdots & & \vdots\\ c & c & c & \cdots & d_n + c \end{array}\right)$$
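Although the thread does not spell it out, one standard route is to observe that $C = c\,\mathbf{1}\mathbf{1}^{\mathsf T}$ has rank one, so (when all $d_i \neq 0$) the matrix determinant lemma gives $\det(D+C) = \big(\prod_i d_i\big)\big(1 + c\sum_i 1/d_i\big)$. A quick numerical check of that identity on a small example:

```python
from functools import reduce

def det(m):
    """Plain cofactor expansion along the first row (fine for tiny matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

d, c = [2.0, 3.0, 5.0], 0.7
A = [[c + (d[i] if i == j else 0.0) for j in range(3)] for i in range(3)]

lhs = det(A)                                             # det(D + C) directly
rhs = reduce(lambda a, b: a * b, d) * (1 + c * sum(1 / x for x in d))
```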
|
Alright, I have this group $\langle x_i, i\in\mathbb{Z}\mid x_i^2=x_{i-1}x_{i+1}\rangle$ and I'm trying to determine whether $x_ix_j=x_jx_i$ or not. I'm unsure there is enough information to decide this, to be honest.
Nah, I have a pretty garbage question. Let me spell it out.
I have a fiber bundle $p : E \to M$ where $\dim M = m$ and $\dim E = m+k$. Usually a normal person defines $J^r E$ as follows: for any point $x \in M$ look at local sections of $p$ over $x$.
For two local sections $s_1, s_2$ defined on some nbhd of $x$ with $s_1(x) = s_2(x) = y$, say $J^r_p s_1 = J^r_p s_2$ if with respect to some choice of coordinates $(x_1, \cdots, x_m)$ near $x$ and $(x_1, \cdots, x_{m+k})$ near $y$ such that $p$ is projection to first $m$ variables in these coordinates, $D^I s_1(0) = D^I s_2(0)$ for all $|I| \leq r$.
This is a coordinate-independent (chain rule) equivalence relation on local sections of $p$ defined near $x$. So let the set of equivalence classes be $J^r_x E$ which inherits a natural topology after identifying it with $J^r_0(\Bbb R^m, \Bbb R^k)$ which is space of $r$-order Taylor expansions at $0$ of functions $\Bbb R^m \to \Bbb R^k$ preserving origin.
Then declare $J^r p : J^r E \to M$ is the bundle whose fiber over $x$ is $J^r_x E$, and you can set up the transition functions etc no problem so all topology is set. This becomes an affine bundle.
Define the $r$-jet sheaf $\mathscr{J}^r_E$ to be the sheaf which assigns to every open set $U \subset M$ an $(r+1)$-tuple $(s = s_0, s_1, s_2, \cdots, s_r)$ where $s$ is a section of $p : E \to M$ over $U$, $s_1$ is a section of $dp : TE \to TU$ over $U$, $\cdots$, $s_r$ is a section of $d^r p : T^r E \to T^r U$ where $T^k X$ is the iterated $k$-fold tangent bundle of $X$, and the tuple satisfies the following commutation relation for all $0 \leq k < r$
$$\require{AMScd}\begin{CD} T^{k+1} E @>>> T^k E\\ @AAA @AAA \\ T^{k+1} U @>>> T^k U \end{CD}$$
@user193319 It converges uniformly on $[0,r]$ for any $r\in(0,1)$, but not on $[0,1)$, cause deleting a measure zero set won't prevent you from getting arbitrarily close to $1$ (for a non-degenerate interval has positive measure).
The top and bottom maps are tangent bundle projections, and the left and right maps are $s_{k+1}$ and $s_k$.
@RyanUnger Well I am going to dispense with the bundle altogether and work with the sheaf, is the idea.
The presheaf is $U \mapsto \mathscr{J}^r_E(U)$ where $\mathscr{J}^r_E(U) \subset \prod_{k = 0}^r \Gamma_{T^k E}(T^k U)$ consists of all the $(r+1)$-tuples of the sort I described
It's easy to check that this is a sheaf, because basically sections of a bundle form a sheaf, and when you glue two of those $(r+1)$-tuples of the sort I describe, you still get an $(r+1)$-tuple that preserves the commutation relation
The stalk of $\mathscr{J}^r_E$ over a point $x \in M$ is clearly the same as $J^r_x E$, consisting of all possible $r$-order Taylor series expansions of sections of $E$ defined near $x$ possible.
Let $M \subset \mathbb{R}^d$ be a compact smooth $k$-dimensional manifold embedded in $\mathbb{R}^d$. Let $\mathcal{N}(\varepsilon)$ denote the minimal cardinal of an $\varepsilon$-cover $P$ of $M$; that is for every point $x \in M$ there exists a $p \in P$ such that $\| x - p\|_{2}<\varepsilon$....
The same result should be true for abstract Riemannian manifolds. Do you know how to prove it in that case?
I think there you really do need some kind of PDEs to construct good charts.
I might be way overcomplicating this.
If we define $\tilde{\mathcal H}^k_\delta$ to be the $\delta$-Hausdorff "measure" but instead of $diam(U_i)\le\delta$ we set $diam(U_i)=\delta$, does this converge to the usual Hausdorff measure as $\delta\searrow 0$?
I think so by the squeeze theorem or something.
this is a larger "measure" than $\mathcal H^k_\delta$ and that increases to $\mathcal H^k$
but then we can replace all of those $U_i$'s with balls, incurring some fixed error
In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set S in a Euclidean space Rn, or more generally in a metric space (X, d). It is named after the German mathematician Hermann Minkowski and the French mathematician Georges Bouligand.To calculate this dimension for a fractal S, imagine this fractal lying on an evenly spaced grid, and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid...
@BalarkaSen what is this
ok but this does confirm that what I'm trying to do is wrong haha
In mathematics, Hausdorff dimension (a.k.a. fractal dimension) is a measure of roughness and/or chaos that was first introduced in 1918 by mathematician Felix Hausdorff. Applying the mathematical formula, the Hausdorff dimension of a single point is zero, of a line segment is 1, of a square is 2, and of a cube is 3. That is, for sets of points that define a smooth shape or a shape that has a small number of corners—the shapes of traditional geometry and science—the Hausdorff dimension is an integer agreeing with the usual sense of dimension, also known as the topological dimension. However, formulas...
Let $a,b \in \Bbb{R}$ be fixed, and let $n \in \Bbb{Z}$. If $[\cdot]$ denotes the greatest integer function, is it possible to bound $|[abn] - [a[bn]]|$ by a constant that is independent of $n$? Are there any nice inequalities with the greatest integer function?
I am trying to show that $n \mapsto [abn]$ and $n \mapsto [a[bn]]$ are equivalent quasi-isometries of $\Bbb{Z}$...that's the motivation.
|
(Sorry was asleep at that time but forgot to log out, hence the apparent lack of response)
Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE question for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
|
Both loops oscillate without any loss of energy.
In loop A the voltage across $C_1$ varies sinusoidally between $0$ and $2\epsilon$. The angular frequency of oscillation is $\omega_0=\frac{1}{\sqrt{LC}}$. So the voltage across $C_1$ at time $t_1$ after connection with the battery is $\epsilon (1-\cos\omega_0 t_1)$. See transient current in an LC circuit with a DC supply.
(This is similar to a mass on a spring in a uniform gravity field, with the mass being dropped when the spring is at its relaxed length. The mass oscillates around the equilibrium position for which $kx_0=mg$, so the extension oscillates between $0$ and $2x_0$. )
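The claimed transient $V_{C_1}(t)=\epsilon(1-\cos\omega_0 t)$ can be checked by integrating the loop equations $L\,dI/dt=\epsilon-V$ and $C\,dV/dt=I$ directly. A sketch with illustrative component values (not the ones in the original problem):

```python
import math

L_, C_, eps = 1.0, 1.0, 5.0            # illustrative values
w0 = 1 / math.sqrt(L_ * C_)

dt, V, I, t, v_max = 1e-4, 0.0, 0.0, 0.0, 0.0
while t < 2 * math.pi / w0:            # integrate over one full period
    I += dt * (eps - V) / L_           # L dI/dt = eps - V
    V += dt * I / C_                   # C dV/dt = I  (semi-implicit Euler)
    v_max = max(v_max, V)
    t += dt
# v_max comes out close to 2*eps: the voltage swings between 0 and 2*eps
```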
When the switch reaches position B the PDs across $C_1$ and $C_3$ are opposed. The total PD in the loop is $V=\epsilon_1 - E$ or $E-\epsilon_1$, depending on which is bigger, where $\epsilon_1$ is the PD across $C_1$ when the switch is moved from A to B.
(Query: Are $E$ and $\epsilon$ different? Perhaps the examiner intends them to be the same? Then we can be sure that the initial PD across $C_3$ will be greater than that across $C_1$.)
Loop B oscillates with different natural frequency $\omega=\sqrt{\frac{3}{LC}}$ because the capacitors are in series.
You can (I think) use the principle of superposition to write the PD across each capacitor in loop B in the same way as in loop A.
Imagine the net PD to be contained in a separate battery (as in loop A) of opposite polarity to the net PD $V$, and the 3 capacitors to be initially uncharged. The net PD of the battery oscillates sinusoidally between the one inductor and the 3 capacitors, between $0$ and $-2V$. The voltage across each capacitor reaches a maximum of $-\frac23V$.
On top of this solution you superpose the initial values of the PD across each capacitor. So $C_1, C_2, C_3$ reach maximum PDs of $-\epsilon_1-\frac23V, -\frac23V, E-\frac23V$. (I have used $-\epsilon_1$ as the initial PD on $C_1$ because it had opposite polarity to $C_3$. So assuming that $|\epsilon_1| \lt |E|$ then the reverse polarity on $C_1$ will initially increase.)
|
We know that Pythagoreans in Ancient Greece discovered that the square root of two is an irrational number. Why was that discovery historically significant? What value was that knowledge to the Ancient Greeks?
I do not agree on some details of the interpretation regarding the discovery of the irrationality of $\sqrt{2}$ as a confutation of the
Pythagoreans [...] belief that all numbers could be constructed as the ratio of 2 numbers.
My understanding is that all "archaic" Greek mathematics shared the (implicit) assumption that, given two magnitudes, e.g. two segments of length $a,b$, it is always possible to find a segment of "unit length" $u$ such that it measures both, i.e. such that [using modern algebraic formulae which are totally foreign to Greek math]:
$$ a=n\times u\ \text{and}\ b=m\times u\ \text{for suitable}\ n,m$$
From the above instance of the assumption, it follows that : $$ \frac{a}{b}=\frac{n\times u}{m\times u}=\frac{n}{m}$$
The assumption amounts to saying that the ratio between two magnitudes is always a ratio between integers (i.e., in modern terms, a rational number).
But note that for Greek math the only numbers are the natural ones, and they must be distinguished from magnitudes: a segment, a square, ... which are "measured by" numbers.
For the ancient Greeks there are no rational numbers, only magnitudes measurable with multiples of a suitable unit one.
The discovery of the existence of irrational magnitudes, through the proof that the case where $a$ is the side of the square and $b$ its diagonal is not expressible as a ratio between (natural) numbers, leads Greek math to the withdrawal of the above (implicit) assumption, that we may call : "commensurability assumption" and to the axiomatization of geometry, i.e. the systematic effort to explicitly lists all the needed assumptions.
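For reference, the classical parity argument (stated in modern notation, which is of course foreign to the Greek presentation):

```latex
% Suppose the diagonal and side were commensurable: \sqrt{2} = n/m in lowest terms.
\begin{align*}
\sqrt{2} = \tfrac{n}{m},\ \gcd(n,m)=1
  &\implies n^2 = 2m^2 \implies n \text{ is even, say } n = 2k \\
  &\implies 4k^2 = 2m^2 \implies m^2 = 2k^2 \implies m \text{ is even},
\end{align*}
% contradicting \gcd(n,m) = 1, so no common unit u can measure both magnitudes.
```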
According to this link, Legend has it that Hippasus first discovered the irrationality of $\sqrt{2}$. The second link in fact mentions a legend that held that supporters of Pythagoras murdered Hippasus -- who allegedly discovered the irrationality of $\sqrt{2}$ on a boat in the middle of the sea -- by throwing him overboard immediately after he informed them of his discovery.
The Pythagoreans had the belief that all numbers could be constructed as the ratio of 2 numbers. (That they were rational) So basically it was a big deal because it flew in the face of knowledge. All of their work was based on the premise of rational numbers being all the numbers.
Any new evidence that completely overturns a fundamental truth has often been met with derision. Even in (relatively) modern times, Imaginary Numbers were considered "fictitious or useless, much as zero and the negative numbers once were."
These legends do exist, and have for a long time. But few if any specialist historians of the subject believe the Pythagoreans discovered the irrationality of $\sqrt{2}$. See:
It is very hard to judge of Greek mathematics before Euclid, let alone before Plato, as there is so little evidence. The most widely read single study of it today is probably D. H. Fowler's Mathematics of Plato's Academy, and for what it is worth I think he may well be right. In short he argues that incommensurability was a well-known topic easily handled by Greek mathematicians as far back as our evidence can take us.
The fact that $\sqrt{2}$ existed and is irrational was a blow to the ancient Greeks, who only believed in numbers that they could calculate to a certain degree of precision whenever required; in other words, they were familiar with rational numbers. The fact that other numbers existed would have evoked the same sort of feelings in them as we have when we first encounter topics such as countability, uncountability and the continuum hypothesis in set theory. At first it may all seem to be some sort of circular and wrong argument, but given some time we get used to it. And perhaps so did the Greeks.
As far as practicality is concerned, it would not have been of much practical use to them, as they would not have been able to measure these new numbers to the degree of precision they were used to. But the whole point of gathering knowledge is not where to put that knowledge to use, but why that knowledge should exist in the first place.
|
Hi, I'm just considering the difference between groups and rings when it comes to direct products, and I want to check whether this is right or not.
Let $A_i \le B_i$ [i.e. $A_i$ is a subobject (subring or subgroup) of $B_i$].
It is obvious that
$A_i \le B_i \Rightarrow \Pi _{i=1} ^{n} A_i \le B(=\Pi _{i=1} ^{n} B_i)$
Then question is
First)
Is it true that $A_i \le B_i \Leftarrow \Pi _{i=1} ^{n} A_i \le B(=\Pi _{i=1} ^{n} B_i)$ ?
It looks untrue, but I couldn't find any counterexamples.
Second )
Is every subobject of $A_1 \times A_2\times...\times A_n$ of the form (subobject of $A_1$) $\times$ (subobject of $A_2$) $\times$ .... $\times$ (subobject of $A_n$)?
i.e. Can every subobject (subring or subgroup) of $B$ be expressed as $\Pi _{i=1} ^{n} A_i$?
Third)
If not, what conditions do we need for the above statements to be true, in the ring case and the group case respectively?
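For the second question the answer is no in general: in $\Bbb Z_2 \times \Bbb Z_2$ the diagonal subgroup $\{(0,0),(1,1)\}$ is a subgroup but is not a product of subgroups of the factors. A brute-force check (a sketch, with the group operation written as componentwise addition mod 2):

```python
diag = {(0, 0), (1, 1)}

# closed under the group operation, so it is a subgroup of Z_2 x Z_2:
is_subgroup = all(((a + c) % 2, (b + d) % 2) in diag
                  for (a, b) in diag for (c, d) in diag)

# but it does not equal A x B for any subgroups A, B of Z_2:
subgroups_of_Z2 = [{0}, {0, 1}]
is_product = any(diag == {(a, b) for a in A for b in B}
                 for A in subgroups_of_Z2 for B in subgroups_of_Z2)
```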
|
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay.The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m2K/W. The name comes from the informal word "togs" for clothing which itself was probably derived from the word toga, a Roman garment.The basic unit of insulation coefficient is the RSI, (1 m2K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site, but then I decided to learn what is wrong and see if I ca...
I am a bit confused about angular momentum in classical physics. For orbital motion of a point mass: if we pick a new coordinate system (one that doesn't move w.r.t. the old one), angular momentum should still be conserved, right? (I calculated a rather absurd result: it is no longer conserved; there is an additional term that varies with time.)
In the new coordinates: $\vec {L'}=\vec{r'} \times \vec{p'} = (\vec{R}+\vec{r}) \times \vec{p} = \vec{R} \times \vec{p} + \vec L$,
where the first term varies with time ($\vec R$ is the constant shift of the origin, while $\vec p$ is, in a sense, rotating).
Would anyone be kind enough to shed some light on this for me?
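For what it's worth, that calculation can be checked numerically. The sketch below (my own illustration, not from the post) uses a unit-mass particle on a unit circular orbit and a shift $\vec R = (1,0)$: $\vec L$ about the old origin is constant, while $\vec{L'} = \vec L + \vec R \times \vec p$ oscillates, because a central force toward the old origin exerts a nonzero torque about the shifted origin.

```python
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 200)
r = np.stack([np.cos(t), np.sin(t)], axis=1)    # unit circular orbit
p = np.stack([-np.sin(t), np.cos(t)], axis=1)   # momentum for unit mass

def cross_z(a, b):
    """z-component of the 2D cross product, per time sample."""
    return a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]

L_old = cross_z(r, p)            # about the old origin: constant (= 1)
R = np.array([1.0, 0.0])
L_new = cross_z(R + r, p)        # about the shifted origin: 1 + cos(t)

print(L_old.std())   # essentially 0: conserved
print(L_new.std())   # clearly nonzero: not conserved about this origin
```

So angular momentum about the new origin is conserved only when the net torque about that origin vanishes; the conservation law itself is intact.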
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet
Is it possible to ever make a time machine? Please give an easy, simple answer. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we have neither the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one.
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|_\Sigma$ by specifying $u(\cdot, 0)$ and differentiating along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape but this leads me another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud?
|
Enthalpy $H$ is defined so that $\Delta H = q_P$, the heat at constant pressure. This is convenient for constant-pressure calorimetry---it makes it possible to track energy changes without considering volume changes in the system.
$$\begin{align*}\Delta U &= q + w & \text{(first law)}\\\Delta U_P &= q_P + w_P & \text{(constant pressure)}\\\Delta U_P &= \Delta H + w_P & \text{(definition of enthalpy)}\\\end{align*}$$
...so the difference between $\Delta H$ and $\Delta U$ can be interpreted as minus the work for a constant-pressure process.
But this doesn't answer your question for a general process. If the process is not constant-pressure, we can still apply the definition of enthalpy, but the difference isn't minus the expansion work anymore. But what is it, then? How can we use something we've defined for a constant-pressure process for all processes in general?
We'd like a new thermodynamic function $H$ that includes all of the information in $U$. We have $$dU = TdS - PdV$$ so the natural variables of $U$ are $S$ and $V$. We want this new function $H$ to have $P$ as a natural variable in place of $V$, so it will be convenient for constant-pressure calorimetry. Consider the following plot of $U$ vs. $V$:
The equation for the tangent line at some value of $V$ is$$ U = \left(\frac{\partial U}{\partial V}\right)_S V + \text{intercept}$$
If we define the new function as the intercept of the tangent line shown in the graph, it will have the properties we need:
$$H \equiv U - \left(\frac{\partial U}{\partial V}\right)_S V$$
You can see that the derivative $\left(\dfrac{\partial U}{\partial V}\right)_S = -P$ from the equation for $dU$ above. Then
$$H \equiv U - (-P) V = U + PV$$
so
$$dH = dU + d(PV) = TdS - PdV + PdV + VdP = TdS + VdP$$
By adding $PV$ to $U$, we've transformed the information in $U$ (with natural variables $S$ and $V$) to a new function $H$ (with natural variables $S$ and $P$).
This trick is called a Legendre transformation, and it can be used to understand the definitions of other thermodynamic functions like $G$ and $A$ as well.
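The tangent-intercept picture can be checked symbolically. Here is a sketch with SymPy, using a toy internal energy $U(V) = c/V$ at fixed $S$ (a hypothetical functional form I chose only so the algebra stays visible; it is not meant as a physical equation of state):

```python
import sympy as sp

V, c = sp.symbols('V c', positive=True)
U = c / V                           # toy U(V) at fixed entropy S

P = -sp.diff(U, V)                  # P = -(dU/dV)_S
H = sp.simplify(U + P * V)          # definition H = U + PV

# Intercept of the tangent line to U(V): U - (dU/dV) * V
intercept = sp.simplify(U - sp.diff(U, V) * V)

print(H)                            # 2*c/V
print(sp.simplify(H - intercept))   # 0: H is exactly the tangent intercept
```

The same pattern (subtract slope times variable) gives $A = U - TS$ and $G = H - TS$ from their respective tangent constructions.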
|
NOTE: This excerpt comes from Hörmander's text, An Introduction to Complex Analysis in Several Variables. STATEMENT (Runge approximation theorem): Let $\Omega$ be an open set in $\mathbb{C}$ and $K$ a compact subset of $\Omega$. The following conditions on $\Omega$ and $K$ are equivalent:
(a) Every function which is analytic in a neighborhood of $K$ can be approximated uniformly on $K$ by functions in $A(\Omega)$.
(b) The open set $\Omega\backslash K=\Omega \cap K^c$ has no component which is relatively compact in $\Omega$.
(c) For every $z\in\Omega\backslash K$ there is a function $f\in A(\Omega)$ such that $$|f(z)|>\sup_K |f|$$
Proof: To prove that $(b)\rightarrow (a)$ it suffices to show that every measure which is orthogonal to $A(\Omega)$ is also orthogonal to every function which is analytic in a neighborhood of $K$, for the theorem is then a consequence of the Hahn-Banach theorem. Set $$\varphi(\zeta)=\int(z-\zeta)^{-1}d\mu(z),\;\;\;\;\; \zeta\in K^c$$
By theorem 1.2.2, $\varphi$ is analytic in $K^c$, and when $\zeta\in \Omega^c$ we have $$\varphi^{(k)}(\zeta)=k!\int(z-\zeta)^{-k-1}d\mu(z)=0\;\;\;\;\;\;\text{for every}\;k$$
QUESTION: I have two questions regarding Hörmander's proof. First, how does the problem reduce to an application of the Hahn-Banach theorem? Second, why does $\varphi^{(k)}$ vanish for all $k$ when $\zeta\in\Omega^c$?
|
Given the Minkowski metric $\eta$ the Lorentz Transformation $\Lambda$ satisfies $$\eta=\Lambda^{T}\eta\Lambda$$ which in index form may be written $$\eta_{\mu\nu}=(\Lambda^{T})_{\mu}^{\,\,\alpha}\eta_{\alpha\beta}\Lambda^{\beta}_{\,\,\nu}$$$$\eta_{\mu\nu}=\eta_{\alpha\beta}\Lambda^{\alpha}_{\,\,\mu}\Lambda^{\beta}_{\,\,\nu}$$ How can I obtain an index expression for $\eta^{\mu\nu}$ starting from this expression? I have attempted to multiply through $\eta^{ab}$ using its index raising property, but this is not helping.
Using the index-raising property
$ \eta^{\sigma \lambda} = \eta^{\sigma \mu} \eta_{\mu \nu} \eta^{\nu \lambda}$
Applying your formula
$= \eta^{\sigma \mu} \eta_{\alpha \beta} \Lambda^{\alpha}_{\,\mu} \Lambda^{\beta}_{\,\nu} \eta^{\nu \lambda}$
and using the index-raising property again
$ = \eta_{\alpha \beta} \Lambda^{\alpha \sigma} \Lambda^{\beta \lambda}.$
Is this what you had in mind? Alternatively, the $\eta_{\alpha \beta}$ of the final expression can also be exchanged for $\eta^{\alpha \beta}$ by appropriately lowering one of the indices of each of the transformation matrices:
$\eta_{\alpha \beta} \Lambda^{\alpha \sigma} \Lambda^{\beta \lambda} = \eta_{\alpha \gamma} \eta^{\gamma \delta} \eta_{\delta \beta} \Lambda^{\alpha \sigma} \Lambda^{\beta \lambda} = \eta^{\gamma \delta} \Lambda_{\gamma}^{\,\sigma} \Lambda_{\delta}^{\,\lambda}.$
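Both identities can be verified numerically for a concrete boost. A sketch with NumPy (my addition; signature $(-,+,+,+)$, boost speed $\beta = 0.6$ chosen arbitrarily, matrices read as $\Lambda^{\mu}{}_{\nu}$):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Boost along x: Lambda^mu_nu
Lam = np.array([
    [gamma,        gamma * beta, 0.0, 0.0],
    [gamma * beta, gamma,        0.0, 0.0],
    [0.0,          0.0,          1.0, 0.0],
    [0.0,          0.0,          0.0, 1.0],
])

# Defining relation: eta_{mu nu} = eta_{alpha beta} Lambda^alpha_mu Lambda^beta_nu
print(np.allclose(Lam.T @ eta @ Lam, eta))            # True

# Raise the second index: Lambda^{alpha sigma} = Lambda^alpha_nu eta^{nu sigma}
eta_inv = np.linalg.inv(eta)        # numerically equal to eta here
Lam_up = Lam @ eta_inv

# eta_{alpha beta} Lambda^{alpha sigma} Lambda^{beta lambda} = eta^{sigma lambda}
print(np.allclose(Lam_up.T @ eta @ Lam_up, eta_inv))  # True
```

Numerically $\eta^{-1}$ has the same entries as $\eta$ in these coordinates, which is exactly why the raised and lowered forms of the identity look so similar.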
|
Research | Open Access | Published: A generalization of Hegedüs-Szilágyi’s fixed point theorem in complete metric spaces. Fixed Point Theory and Applications, volume 2018, Article number: 1 (2018)
Abstract
In 1980, Hegedüs and Szilágyi proved a fixed point theorem in complete metric spaces. Introducing a new contractive condition, we generalize Hegedüs-Szilágyi’s fixed point theorem. We discuss the relationship between the new contractive condition and other contractive conditions. We also show that we cannot extend Hegedüs-Szilágyi’s fixed point theorem to Meir-Keeler type.
Introduction and preliminaries
Throughout this paper we denote by \(\mathbb {N}\) the set of all positive integers and by \(\mathbb {R}\) the set of all real numbers.
Let
T be a mapping on a metric space \((X,d)\). Throughout this paper, we define \(D_{T}(x)\) and \(D_{T}(x,y)\) by
for any \(x, y \in X\). That is, \(D_{T}(x)\) is the diameter of the orbit \(\{ x, Tx, T^{2} x,\ldots\}\) of x.
Theorem 1
(Theorem 5 in [1])
Let \((X,d)\) be a complete metric space, and let T be a mapping on X. Assume \(D_{T}(x) < \infty\) for all \(x \in X\). Assume also that there exists a function φ from \([0, \infty)\) into itself satisfying the following: (i)
\(\varphi(t) < t\)
holds for all \(t \in(0, \infty)\);
(ii) φ is upper semicontinuous from the right;
(iii) \(d(Tx,Ty) \leq\varphi\circ D_{T}(x,y) \) holds for all \(x, y \in X\).
Then T has a unique fixed point z. Moreover, \(\{ T^{n} x \}\) converges to z for any \(x \in X\).
Remark 1
Theorem 2
(Theorem 1 in [5])
Let \((X,d)\) be a complete metric space, and let T be a mapping on X. Assume that there exists a function φ from \([0, \infty)\) into itself satisfying (i) and (ii) of Theorem 1 and the following:
(iii) \(d(Tx,Ty) \leq\varphi\circ d(x,y) \) holds for all \(x, y \in X\).
Then T has a unique fixed point z. Moreover, \(\{ T^{n} x \}\) converges to z for any \(x \in X\).
Theorem 3
([6])
Let \((X,d)\) be a complete metric space, and let T be a mapping on X. Assume that, for any \(\varepsilon> 0\), there exists \(\delta> 0\) such that for all \(x, y \in X\). Then T has a unique fixed point z. Moreover, \(\{ T^{n} x \}\) converges to z for any \(x \in X\). Theorem 4
(Theorem 1.2 in [7])
Let \((X,d)\) be a complete metric space, and let T be a mapping on X. Assume that there exists a function φ from \([0, \infty)\) into itself satisfying the following:
(i) φ is nondecreasing;
(ii) \(\lim_{n} \varphi^{n}(t) = 0\) holds for all \(t \in(0, \infty)\);
(iii) \(d(Tx,Ty) \leq\varphi\circ d(x,y) \) holds for all \(x, y \in X\).
Then T has a unique fixed point z. Moreover, \(\{ T^{n} x \}\) converges to z for any \(x \in X\).
From the above, we can tell that Theorem 1 is of Boyd-Wong [5] type (see Definition 8). So it is a very natural question of whether we can extend Theorem 1 to Meir-Keeler [6] type. It is also a natural question of whether we can prove a Matkowski [7] type fixed point theorem.
In this paper, we answer the above two questions; one is negative and the other is affirmative. Indeed, we generalize Theorem 1. The assumption of the new theorem (Theorem 5) is weaker than a Matkowski type condition (see Corollary 7). We also give a counterexample for a Meir-Keeler type condition (Example 16). We further discuss the relationship between the assumption of Theorem 5 and other contractive conditions.
Main results
In this section, we generalize Theorem 1.
Theorem 5
Let \((X,d)\) be a complete metric space, and let T be a mapping on X. Assume \(D_{T}(x) < \infty\) for all \(x \in X\). Assume also that there exists a function φ from \([0, \infty)\) into itself satisfying the following:
(i) \(\varphi(t) < t\) holds for all \(t \in(0, \infty)\);
(ii) for any \(\varepsilon> 0\), there exists \(\delta> 0\) such that, for any \(t \in(0,\infty)\), $$\varepsilon< t < \varepsilon+\delta \quad\textit{implies}\quad \varphi(t) \leq \varepsilon; $$
(iii) for any \(x, y \in X\), $$d(Tx,Ty) \leq\varphi\circ D_{T}(x,y) $$ holds.
Then T has a unique fixed point z. Moreover, \(\{ T^{n} x \}\) converges to z for any \(x \in X\).
Remark 2
\(D_{T}(x,y) < \infty\) obviously holds for any \(x, y \in X\).
Since \(D_{T}(x,y) = 0\) implies \(d(Tx,Ty) = 0\), without loss of generality, we may assume \(\varphi(0) = 0\).
We do not assume that φ is nondecreasing. So, in general, \(D_{T}(Tx,Ty) \leq\varphi\circ D_{T}(x,y)\) does not hold.
Before proving Theorem 5, we need one lemma.
Lemma 6
Let \(x, y \in X\). Assume that either of the following holds:
(a) \(x = y\);
(b) \(\lim_{n} D_{T}(T^{n} x) = \lim_{n} D_{T}(T^{n} y) = 0\).
Then \(\lim_{n} D_{T}(T^{n} x, T^{n} y) = 0\) holds.
Proof
Since
for \(n \in \mathbb {N}\), \(\{ D_{T}(T^{n} x,T^{n} y) \}\) is nonincreasing. So \(\{ D_{T}(T^{n} x,T^{n} y) \}\) converges to some \(\varepsilon\in[0,\infty)\). Arguing by contradiction, we assume \(\varepsilon> 0\). We consider the following two cases:
\(\varepsilon< D_{T}(T^{n} x,T^{n} y) \) holds for any \(n \in \mathbb {N}\);
\(\varepsilon= D_{T}(T^{n} x,T^{n} y) \) holds for some \(n \in \mathbb {N}\).
In the first case, we choose \(\delta\in(0,\infty)\) such that
We choose \(\nu\in \mathbb {N}\) satisfying
In the case of (b), without loss of generality, we may assume
Fix \(m \geq\nu\) and \(n \geq\nu\). Then since
we have
Since
m, n are arbitrary, considering (1), we obtain
which implies a contradiction. In the second case, we choose \(\nu\in \mathbb {N}\) satisfying
In the case of (b), without loss of generality, we may assume
Fix \(m \geq\nu\) and \(n \geq\nu\). Then since
we have
Since
m, n are arbitrary, considering (2), we obtain
which implies a contradiction. Therefore we have shown \(\lim_{n} D_{T}(T^{n} x,T^{n} y) = 0\). □
Proof of Theorem 5
Fix \(x \in X\). By Lemma 6(a), \(\{ D_{T}(T^{n} x) \}\) converges to 0. Thus \(\{ T^{n} x\}\) is a Cauchy sequence in
X. Since X is complete, \(\{ T^{n} x \}\) converges to some \(z \in X\). By Lemma 6(a) again, \(\{ D_{T}(T^{n} z) \}\) also converges to 0. So, by Lemma 6(b), we obtain
So \(\{ T^{n} z \}\) also converges to
z. Hence
holds. Arguing by contradiction, we assume \(\varepsilon:= D_{T}(z) > 0\). Since \(\lim_{n} D_{T}(T^{n} z) = 0\) holds, there exists \(\nu\in \mathbb {N}\) satisfying
where \(T^{0} z = z\). This implies
For \(n > \nu\), we have
Since
n is arbitrary, we obtain
which implies a contradiction. Therefore we have shown \(D_{T}(z) = 0\). Hence
z is a fixed point of T. Since (3) holds for any \(x \in X\), we obtain the uniqueness of the fixed point. □
By Theorem 5, we obtain a Matkowski type fixed point theorem.
Corollary 7
Let \((X,d)\) be a complete metric space, and let T be a mapping on X. Assume \(D_{T}(x) < \infty\) for all \(x \in X\). Assume also that there exists a function φ from \([0, \infty)\) into itself satisfying the following:
(i) φ is nondecreasing;
(ii) \(\lim_{n} \varphi^{n}(t) = 0\) holds for all \(t \in(0, \infty)\);
(iii) \(d(Tx,Ty) \leq\varphi\circ D_{T}(x,y) \) holds for all \(x, y \in X\).
Then T has a unique fixed point z. Moreover, \(\{ T^{n} x \}\) converges to z for any \(x \in X\).
Comparison
In this section, using subsets of \((0,\infty)^{2}\), we discuss the relationship between the new contractive condition in Theorem 5 and other contractive conditions. See [1, 8–11] and the references therein.
Definition 8
Let Q be a subset of \((0,\infty)^{2}\).
(1) Q is said to be Contractive (Cont for short) [12, 13] if there exists \(r \in[0,1)\) such that \(u \leq r t\) holds for any \((t,u) \in Q\).
(2) Q is said to be Browder (Bro for short) [14] if there exists a function φ from \((0, \infty)\) into itself satisfying the following:
(2-i) φ is nondecreasing and right-continuous;
(2-ii) \(\varphi(t) < t\) holds for any \(t \in(0, \infty)\);
(2-iii) \(u \leq\varphi(t)\) holds for any \((t,u) \in Q\).
(3) Q is said to be Boyd-Wong (BW for short) [5] if there exists a function φ from \((0, \infty)\) into itself satisfying the following:
(3-i) φ is upper semicontinuous from the right;
(3-ii) \(\varphi(t) < t\) holds for any \(t \in(0, \infty)\);
(3-iii) \(u \leq\varphi(t)\) holds for any \((t,u) \in Q\).
(4) Q is said to be Meir-Keeler (MK for short) [6] if, for any \(\varepsilon> 0\), there exists \(\delta> 0\) such that \(u < \varepsilon\) holds for any \((t,u) \in Q\) with \(t < \varepsilon+ \delta\).
(5) Q is said to be Matkowski (Mat for short) [7] if there exists a function φ from \((0, \infty)\) into itself satisfying the following:
(5-i) φ is nondecreasing;
(5-ii) \(\lim_{n} \varphi^{n}(t) = 0\) for any \(t \in(0, \infty)\);
(5-iii) \(u \leq\varphi(t)\) holds for any \((t,u) \in Q\).
(6) Q is said to be of New-type (NT for short) if there exists a function φ from \((0, \infty)\) into itself satisfying the following:
(6-i) \(\varphi(t) < t\) for any \(t \in(0,\infty)\);
(6-ii) for any \(\varepsilon> 0\), there exists \(\delta> 0\) such that \(\varepsilon< t < \varepsilon+ \delta\) implies \(\varphi(t) \leq\varepsilon\);
(6-iii) \(u \leq\varphi(t)\) holds for any \((t,u) \in Q\).
(7) Q is said to be Ćirić-Jachymski-Matkowski (CJM for short) [15–18] if the following hold:
(7-i) for any \(\varepsilon> 0\), there exists \(\delta> 0\) such that \(u \leq\varepsilon\) holds for any \((t,u) \in Q\) with \(t < \varepsilon+ \delta\);
(7-ii) \(u < t\) holds for any \((t,u) \in Q\).
It is obvious that the following implications hold:
It is well known that the converse implication of (Cont → Bro) does not hold. The following three examples tell us that for each implication except (Cont → Bro), there exists a counterexample for its converse implication. In particular, MK and NT are independent.
Example 9
Let \(u \in(0,\infty)\) and define
Q by
Then Q is Mat. However, Q is not MK.
Remark 3
We note that the converse implication of (BW → NT) does not hold.
Example 10
Let \(t, u \in(0,\infty)\) with \(t < u\). Define
Q by
Then Q is BW. However, Q is not Mat.
Remark 4
We note that the converse implication of (Mat → NT) does not hold.
Example 11
Let \(t \in(0,\infty)\) and define
Q by
Then Q is MK. However, Q is not NT.
Remark 5
We note that the converse implication of (NT → CJM) does not hold.
In the remainder of this section, we let \((X,d)\) be a complete metric space, and let
T be a mapping on X satisfying \(D_{T}(x) < \infty\) for all \(x \in X\). Define subsets \(P_{T}\) and \(Q_{T}\) of \((0,\infty)^{2}\) by
Lemma 12
Let X be a nonempty set. Let f be a function from X into \([0, \infty)\) such that \(\{ x \in X : f(x) = 0 \}\) consists of at most one element. Define a function d from \(X \times X\) into \([0, \infty)\) by
Let T be a mapping on X satisfying the following:
\(f(x) > 0\) implies \(Tx \neq x\) and \(f(Tx) \leq f(x)\);
\(f(x) = 0\) implies \(Tx = x\).
Then the following hold:
(i) \((X, d)\) is a metric space;
(ii) if either \(\{ x \in X : f(x) = 0 \} \neq\varnothing\) or \(\inf f(X) > 0\) holds, then X is complete;
(iii) \(P_{T} = Q_{T}\).
Proof
We have essentially proved (i) and (ii); see Lemma 7 in [19]. Let us prove (iii). Fix \(x, y \in X\) with \(x \neq y\) and \(f(x) \leq f(y)\). Then we have \(f(y) > 0\) and hence \(Ty \neq y\). We have
Hence
holds. Therefore \(P_{T}=Q_{T}\) holds. □
Example 13
Let \(X = [0,\infty)\) and define a function
d from \(X \times X\) into \([0,\infty)\) by (5), where \(f(x) = x\). That is,
holds. Define a mapping
T on X by
Then the following hold:
(i)
\((X,d)\) is a complete metric space;
(ii)
\(f(x) > 0\) implies \(f(Tx) < f(x)\);
(iii)
\(f(x) = 0\) implies \(Tx = x\);
(iv)
\(P_{T} = Q_{T} = \{ (t,1) : 1 < t \}\);
(v)
\(P_{T}\) and \(Q_{T}\) are Mat;
(vi)
neither \(P_{T}\) nor \(Q_{T}\) is MK.
Proof
Example 14
Put \(X = [0,2)\) and define
f and d as in Example 13. Define a mapping T on X by
Then (i)-(iii) of Example 13 and the following hold:
(iv)
\(P_{T} = Q_{T} = \{ (1+\lambda,2 \lambda) : \lambda\in(0,1) \}\);
(v)
\(P_{T}\) and \(Q_{T}\) are BW;
(vi)
neither \(P_{T}\) nor \(Q_{T}\) is Mat.
Proof
Example 15
Let \(X = [0,1) \cup(1,\infty)\) and define a function
d from \(X \times X\) into \([0,\infty)\) by (5), where \(f(x) = \min\{ x, 1 \}\). Define a mapping T on X by
Then (i)-(iii) of Example 13 and the following hold:
(iv)
\(P_{T} = Q_{T} = \{ (1,u) : 0 < u < 1 \}\);
(v)
\(P_{T}\) and \(Q_{T}\) are MK;
(vi)
neither \(P_{T}\) nor \(Q_{T}\) is NT.
Proof
We finally give the following example, which tells us that we cannot extend Theorem 1 to a Meir-Keeler type contractive condition.
Example 16
Let \(X = [0,1)\) and define a function
d from \(X \times X\) into \([0,\infty)\) by (6). Define a mapping T on X by
Then the following hold:
(i)
\((X,d)\) is a complete metric space;
(ii)
\(d(x,y) < 1\) holds for any \(x, y \in X\);
(iii)
for any \(x \in X\), \(\{ T^{n} x \}\) converges to 1 in the Euclidean space \(\mathbb {R}^{1}\);
(iv)
\(D_{T}(x) = 1\) holds for any \(x \in X\);
(v)
\(TX = (0,1)\);
(vi)
\(Q_{T} = \{ (1,u) : 0 < u < 1 \}\);
(vii)
\(Q_{T}\) is MK;
(viii)
\(Q_{T}\) is not NT;
(ix) T does not have a fixed point.
Proof
We can easily prove (i)-(vi) and (ix). (vii) and (viii) follow from Example 11. □
Conclusions
In this paper, introducing a new contractive condition (see Definition 8(6)), we generalize Hegedüs-Szilágyi’s fixed point theorem (Theorem 1) in complete metric spaces proved in 1980. In Section 3, we discuss the relationship between the new contractive condition and other contractive conditions. We also show that we cannot extend Theorem 1 to Meir-Keeler type (see Example 16).
References
1. Hegedüs, M, Szilágyi, T: Equivalent conditions and a new fixed point theorem in the theory of contractive type mappings. Math. Jpn. 25, 147-157 (1980)
2. Hegedüs, M: New generalizations of Banach’s contraction principle. Acta Sci. Math. 42, 87-89 (1980)
3. Tasković, MR: Some results in the fixed point theory, II. Publ. Inst. Math. (Belgr.) 27, 249-258 (1980)
4. Walter, W: Remarks on a paper by F. Browder about contraction. Nonlinear Anal. 5, 21-25 (1981)
5. Boyd, DW, Wong, JSW: On nonlinear contractions. Proc. Am. Math. Soc. 20, 458-464 (1969)
6. Meir, A, Keeler, E: A theorem on contraction mappings. J. Math. Anal. Appl. 28, 326-329 (1969)
7. Matkowski, J: Integrable Solutions of Functional Equations. Diss. Math., vol. 127. Institute of Mathematics, Polish Academy of Sciences, Warsaw (1975)
8. Jachymski, J: Remarks on contractive conditions of integral type. Nonlinear Anal. 71, 1073-1081 (2009)
9. Meszáros, J: A comparison of various definitions of contractive type mappings. Bull. Calcutta Math. Soc. 84, 167-194 (1992)
10. Rhoades, BE: A comparison of various definitions of contractive mappings. Trans. Am. Math. Soc. 226, 257-290 (1977)
11. Suzuki, T: Discussion of several contractions by Jachymski’s approach. Fixed Point Theory Appl. 2016, Article ID 91 (2016)
12. Banach, S: Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fundam. Math. 3, 133-181 (1922)
13. Caccioppoli, R: Un teorema generale sull’esistenza di elementi uniti in una transformazione funzionale. Rend. Accad. Naz. Lincei 11, 794-799 (1930)
14. Browder, FE: On the convergence of successive approximations for nonlinear functional equations. Proc. K. Ned. Akad. Wet., Ser. A, Indag. Math. 30, 27-35 (1968)
15. Ćirić, L: A new fixed-point theorem for contractive mappings. Publ. Inst. Math. (Belgr.) 30, 25-27 (1981)
16. Jachymski, J: Equivalent conditions and the Meir-Keeler type theorems. J. Math. Anal. Appl. 194, 293-303 (1995)
17. Kuczma, M, Choczewski, B, Ger, R: Iterative Functional Equations. Encyclopedia of Mathematics and Its Applications, vol. 32. Cambridge University Press, Cambridge (1990)
18. Matkowski, J: Fixed point theorems for contractive mappings in metric spaces. Čas. Pěst. Mat. 105, 341-344 (1980)
19. Suzuki, T, Alamri, B: A sufficient and necessary condition for the convergence of the sequence of successive approximations to a unique fixed point II. Fixed Point Theory Appl. 2015, Article ID 59 (2015)
Acknowledgements
The author is supported in part by JSPS KAKENHI Grant Number 16K05207 from Japan Society for the Promotion of Science.
Ethics declarations Competing interests
The author declares that he has no competing interests.
Additional information Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
|
I am trying to solve exercise 12.11 from Jech's "Set Theory":
12.11. If $\kappa$ is an inaccessible cardinal, then $V_\kappa\models \text{there is a countable model of ZFC}$. My attempt. Since $(V_\kappa,\in)$ is a model of ZFC, by the Löwenheim-Skolem theorem there is a countable model $(A,R)$ elementarily equivalent to $(V_\kappa,\in)$. In particular, $(A,R)$ is a model of ZFC. Since $A$ is countable, we can find $E\subset \omega\times\omega$ such that $(A,R)$ is isomorphic to $(\omega,E)$. Since $E\in P(\omega\times\omega)$ and $P(\omega\times\omega)\in V_{\omega+\omega}$, we have $(\omega,E)\in V_\kappa$.
We will prove that $V_\kappa\models (\omega,E)\text{ is a countable model of ZFC}.$ Since $\omega^{V_\kappa}=\omega$, $V_\kappa$ satisfies "$(\omega,E)$ is a countable structure". To prove $V_\kappa\models ((\omega,E)\models ZFC)$, we will check that $V_\kappa \models \varphi^{\omega,E}$ holds for each axiom $\varphi$ of ZFC. By induction for $\varphi$, we can prove $(\varphi^{\omega,E})^{V_\kappa}\leftrightarrow \varphi^{\omega,E}$. If $\varphi$ is an axiom of ZFC, then $\varphi^{\omega,E}$ holds because $(\omega,E)$ is a model of ZFC, so $(\varphi^{\omega,E})^{V_\kappa}$ holds for each axiom $\varphi$ of ZFC.
Is my "proof" correct? If not, how can I improve it? Thanks for any help.
Added: As I think about it, the last part of this "proof" has an error, but I don't know exactly what is wrong.
|
March 4th, 2019, 11:18 AM
# 1
Newbie
Joined: Mar 2019
From: Romania
Posts: 1
Thanks: 0
Need some help to solve this limit
Hi everyone,
I have a problem that I don't know how to solve, ideally without using l'Hôpital. Can you please help?
March 5th, 2019, 06:38 AM
# 2
Math Team
Joined: Dec 2013
From: Colombia
Posts: 7,685
Thanks: 2666
Math Focus: Mainly analysis and algebra
First, writing $y=\frac1x$ we can see that there are actually two problems here:
$$\lim_{x \to 0^-} \frac1x e^{-\frac1x} = \lim_{y \to -\infty} y e^{-y} \to (-\infty) \cdot \infty$$
So the function grows without bound in the negative direction.
The second is $$\lim_{x \to 0^+} \frac1x e^{-\frac1x} = \lim_{y \to +\infty} y e^{-y} = \lim_{y \to +\infty} \frac{y}{ e^{y}} = 0$$
This is a standard result which you would normally be allowed to quote directly.
Since there are two different values for these two limits, you should be able to determine the correct answer to the given problem.
Alternatively, you could graph the function and come to the same conclusion.
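You can also see the two one-sided behaviours numerically. A quick sketch (assuming, per the rewriting above, that the function is $f(x)=\frac1x e^{-1/x}$; I avoid $x$ so close to $0^-$ that $e^{-1/x}$ overflows a float):

```python
import math

def f(x):
    return math.exp(-1.0 / x) / x

# Right-hand side: e^{-1/x} decays far faster than 1/x grows
for x in (0.1, 0.05, 0.01):
    print(x, f(x))
# f(0.01) is about 3.7e-42, so the limit from the right is 0

# Left-hand side: e^{-1/x} = e^{+1/|x|} blows up, with a minus sign from 1/x
for x in (-0.1, -0.05, -0.02):
    print(x, f(x))
# f(-0.02) is about -2.6e+23, so f is unbounded below from the left
```

Since the one-sided limits disagree, the two-sided limit at 0 does not exist.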
Last edited by v8archie; March 5th, 2019 at 06:42 AM.
March 5th, 2019, 01:04 PM
# 3
Global Moderator
Joined: May 2007
Posts: 6,823
Thanks: 723
|