Students should be able to:
Find the electric potential from a system of discrete point sources.
Write the difference vector between two vectors (and its magnitude).
(Optional) Write charge densities in terms of delta functions.
Compute line integrals.
Find power series approximations.
Students may be familiar with the iconic equation for the electric potential (due to a point charge): $$\text{Iconic:} \qquad V=\frac{1}{4 \pi \epsilon_0} \frac{Q}{r}$$ With information about the type of source distribution, one can write or select the appropriate coordinate-independent equation for $V$. For example, if the source is a line of charge: $$\text{Coordinate Independent:} \qquad V=\frac{1}{4 \pi \epsilon_0} \int\frac{\lambda | d\vec r' |}{| \vec r - \vec r' |}$$ Looking at symmetries of the source, one can choose a coordinate system and write the equation for the potential in terms of this coordinate system. Note that this step is often combined with the following step, though one may wish to keep them separate for the sake of careful instruction. $$\text{Coordinate Dependent:} \qquad V=\frac{1}{4 \pi \epsilon_0} \int\frac{\lambda |ds'\ \hat s + s'\ d\phi'\ \hat \phi + dz'\ \hat z|}{\sqrt{s'^2 + s^2 - 2ss' \cos(\phi-\phi') + (z-z')^2}}$$ Using what you know about the geometry of the situation, one can often simplify the numerator. For example, for a ring of charge lying in the plane $z'=0$: $$\text{Coordinate and Geometry Dependent:} \qquad V=\frac{1}{4 \pi \epsilon_0} \int\frac{\lambda s'\ d\phi'}{\sqrt{s'^2 + s^2 - 2ss' \cos(\phi-\phi') + z^2}}$$ Emphasize that “primes” (i.e., $s'$, $\phi'$, $z'$, etc.) are used to indicate the location of charge in the charge distribution.
To find the area under a curve, one may chop up the x-axis into small pieces (of width $dx$). The area under the curve is then found by calculating the area for each region of $dx$ (which is $f(x) dx$) and then summing up all of those areas. In the limit where $dx$ is small enough, the sum becomes an integral. One could also find the area under a curve by chopping up both the x- and y-axes (chop), calculating the area of each small area under the curve (calculate), and adding all of those together with a double sum or double integral. This approach can be used to find the area of a disk, where the 'horizontal' length of each area is $r\, d\phi$ and the 'vertical' length is $dr$, giving an area of $dA = r\, d\phi\, dr$. It is important to make sure that the limits of integration are appropriate so that the integrals range over the whole area of interest. If one wants to calculate something other than length, area, or volume, such as if one sprinkled charge over a thin bar, then chop, calculate, and add still works. Again, chop the bar up into small lengths of $dx$. Then calculate the charge $dQ$ on each length ($dQ = \lambda dx$), and add all of the $dQ$s together in a sum or integral. This also works for calculating something (such as charge) over a volume. For a thick cylindrical shell with a charge density $\rho(\vec r)$, chop the shell into small volumes of $d\tau$ (which will be a product of 3 small lengths, e.g. $d\tau = r\, d\phi\ dr\ dz$), multiply this volume by the charge density at each part of the shell (defined by e.g. $r, \phi,$ and $z$), and add the resulting $dQ$s together.
Lines of Charge (Lecture: 30 min) (FiniteDisk)
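The chop-calculate-add recipe is easy to illustrate numerically. Here is a minimal sketch for the charged-bar example; the linear charge density $\lambda(x) = 2x$ and the unit bar length are made up for illustration:

```python
# Chop-calculate-add, numerically: total charge on a thin bar of length L
# carrying a (hypothetical) non-uniform linear density lambda(x) = 2*x.
L = 1.0          # bar length
N = 100_000      # number of small pieces ("chop")
dx = L / N       # width of each piece

def lam(x):
    # illustrative charge density, chosen so the exact answer is L**2 = 1
    return 2.0 * x

# calculate dQ = lambda(x) dx on each piece, then add them all up
Q = sum(lam((i + 0.5) * dx) * dx for i in range(N))
print(Q)  # -> 1.0 (up to rounding): the integral of 2x from 0 to 1
```

In the limit $dx \to 0$ the sum is exactly the integral $\int_0^1 2x\,dx = 1$; the same pattern, with $d\tau = r\,d\phi\,dr\,dz$, handles the cylindrical shell.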
Starting with the integral expression for the electrostatic potential due to a ring of charge, find the value of the potential everywhere along the axis of symmetry.
Find the electrostatic potential everywhere along the axis of symmetry due to a finite disk of charge with uniform (surface) charge density $\sigma$. Start with your answer to part (a).
Find two nonzero terms in a series expansion of your answer to part (b) for the value of the potential very far away from the disk.
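As a check on parts (b) and (c): the standard on-axis result for a uniform disk of radius $R$ is $V(z)=\frac{\sigma}{2\epsilon_0}\left(\sqrt{z^2+R^2}-z\right)$ for $z>0$, and the two-term far-field expansion can be verified numerically (the values of $R$ and $z$ below are illustrative):

```python
import math

# On-axis potential of a uniform disk of radius R, up to the constant
# sigma/(2*eps0):  f(z) = sqrt(z**2 + R**2) - z   (for z > 0).
# Far from the disk (z >> R) the binomial series gives two nonzero terms:
#   f(z) ~ R**2/(2*z) - R**4/(8*z**3)
R, z = 1.0, 10.0
exact  = math.sqrt(z**2 + R**2) - z
series = R**2 / (2*z) - R**4 / (8*z**3)
# the two agree to about the size of the next term, R**6/(16*z**5)
print(exact, series)
```

Note that the leading term reproduces the point-charge potential, since $Q = \sigma \pi R^2$ gives $\frac{\sigma}{2\epsilon_0}\cdot\frac{R^2}{2z} = \frac{Q}{4\pi\epsilon_0 z}$.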
(InfiniteDisk)
Find the electrostatic potential due to an infinite disk, using your results from the finite disk problem.
(PotentialConeGEM227)
A conical surface (an empty ice-cream cone) carries a uniform charge density $\sigma$. The height of the cone is $a$, as is the radius of the top. Find the potential at point $P$ (in the center of the opening of the cone), letting the potential at infinity be zero.
(WritingII)
Using the handout “Guiding Questions for Science Writing” as a guide, write up your solution for finding the electrostatic potential everywhere in space due to a uniform ring of charge. Be sure to include a series expansion along one of the axes of interest.
|
A role for generalized Fermat numbers
Date: 2016
Authors: Cosgrave, John B.; Dilcher, Karl
Abstract:
We define a Gauss factorial $N_n!$ to be the product of all positive integers up to $N$ that are relatively prime to $n\in\mathbb N$. In this paper we study particular aspects of the Gauss factorials $\lfloor\frac{n-1}{M}\rfloor_n!$ for $M=3$ and 6, where the case of $n$ having exactly one prime factor of the form $p\equiv 1\pmod{6}$ is of particular interest. A fundamental role is played by those primes $p\equiv 1\pmod{3}$ with the property that the order of $\frac{p-1}{3}!$ modulo $p$ is a power of 2 or 3 times a power of 2; we call them Jacobi primes. Our main results are characterizations of those $n\equiv\pm 1\pmod{M}$ of the above form that satisfy $\lfloor\frac{n-1}{M}\rfloor_n!\equiv 1\pmod{n}$, $M=3$ or 6, in terms of Jacobi primes and certain prime factors of generalized Fermat numbers. We also describe the substantial and varied computations used for this paper.
License:
Creative Commons Attribution - Non-Commercial - No Derivatives (CC BY-NC-ND)
|
I'm trying to solve the Nonlinear Schrödinger Equation (NLSE) in 2D using Finite Elements, but I don't know how to handle the nonlinear term. I suppose I have to apply the Newton-Raphson algorithm to my discretized system of PDEs, but I'm not sure how to proceed.
Note: I just started studying Finite Elements two weeks ago, so I'd appreciate any advice on how to tackle the problem more efficiently!
The NLSE is,
$i\frac{\partial{u}}{\partial t}=-\frac{1}{2}\nabla^2u-|u|^2u$, where $u$ is complex valued.
Then $u=r+is$, with $r,s$ real valued functions, and plugging this into the NLSE I obtain a system of coupled PDEs.
$ \left\{ \begin{array}{ll} \partial_tr+\frac{1}{2}\nabla^2s=-(r^2+s^2)s\\ \partial_ts-\frac{1}{2}\nabla^2r=(r^2+s^2)r\\ \end{array} \right. $
Using FE to perform the spatial discretization (and assuming null Dirichlet BC for both the functions and their gradients), the expansion in hat-functions corresponding to each function $r$ and $s$ is $r_h=\sum_j\rho_j(t)\phi_j$ and $s_h=\sum_j\psi_j(t)\phi_j$, and we get
$ \left\{ \begin{array}{ll} M\dot{\rho}-\frac{1}{2}A\psi=-b_1(\rho,\psi)\\ M\dot{\psi}+\frac{1}{2}A\rho=b_2(\rho,\psi)\\ \end{array} \right. $
where $M$ is the mass matrix, $A$ the stiffness matrix, and $(b_1)_i=\int_\Omega (r_h^2+s_h^2)s_h \phi_i$, $(b_2)_i=\int_\Omega (r_h^2+s_h^2)r_h \phi_i$.
I've read something suggesting to treat the term $|u|^2$ as known at every time iteration, given by $|u_{t-1}|^2$, but after implementing this the norm $|u|$ kept growing, so I suppose this is not the correct way to handle this non-linearity.
So my question is, how would I apply the Newton-Raphson method to the time-discretized version (using backward Euler) of the above equation, namely,
$ \left\{ \begin{array}{ll} M\psi_t-M\psi_{t-1}+\frac{1}{2}dtA\rho_t=b_2(\rho_l,\psi_l)dt\\ M\rho_t-M\rho_{t-1}-\frac{1}{2}dtA\psi_t=-b_1(\rho_l,\psi_l)dt\\ \end{array} \right. $
to be able to handle the nonlinearity?
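To make sure I have the mechanics right, here is a toy version of what I mean: one backward-Euler step of the scalar analogue $\dot r=-(r^2+s^2)s$, $\dot s=(r^2+s^2)r$ (Laplacian dropped, so effectively $M=I$ and $A=0$), solved with Newton-Raphson. The real problem would replace the hand-coded $2\times 2$ Jacobian with the big block system:

```python
import math

# One backward-Euler step of the ODE analogue of the NLSE with the
# Laplacian dropped:  r' = -(r^2+s^2) s,  s' = (r^2+s^2) r.
# Unknowns (r, s) at the new time level; Newton-Raphson on the residual.
dt = 0.01
r0, s0 = 1.0, 0.0          # previous time level

def residual(r, s):
    n2 = r*r + s*s
    return (r - r0 + dt * n2 * s,
            s - s0 - dt * n2 * r)

def jacobian(r, s):
    # analytic 2x2 Jacobian of the residual
    n2 = r*r + s*s
    return ((1 + dt * 2*r*s,      dt * (n2 + 2*s*s)),
            (-dt * (n2 + 2*r*r),  1 - dt * 2*r*s))

r, s = r0, s0               # initial Newton guess: the previous step
for _ in range(20):
    f1, f2 = residual(r, s)
    if math.hypot(f1, f2) < 1e-14:
        break
    (a, b), (c, d) = jacobian(r, s)
    det = a*d - b*c
    dr = (d*f1 - b*f2) / det   # solve J [dr, ds]^T = [f1, f2]^T (Cramer)
    ds = (a*f2 - c*f1) / det
    r, s = r - dr, s - ds

print(r, s, residual(r, s))   # residual should be near machine precision
```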
EDIT
I need some help, because I'm not even sure if I'm on the right track.
Last time, I ended up with the following system of equations (discretized spatially and in time),
$ \left\{ \begin{array}{ll} M\psi_t+\frac{1}{2}dtA\rho_t=b_1(\rho_l,\psi_l)dt+M\psi_{t-1}\\ M\rho_t+\frac{1}{2}dtA\psi_t=b_2(\rho_l,\psi_l)dt+M\rho_{t-1}\\ \end{array} \right. $
NOTE: I redefined $b_1$ and $b_2$, just to be in accordance with my notes, they are $b_{1i}=\int(r_h^2+s_h^2)r_h\phi_i$, $b_{2i}=\int-(r_h^2+s_h^2)s_h\phi_i$.
Now I define $\xi_t=\left[\begin{array}{ll}\rho_t\\\psi_t\end{array}\right]$, and the system becomes,
$\begin{bmatrix} M & \frac{1}{2}dtA\\ -\frac{1}{2}dtA & M \end{bmatrix}\xi_t=dt\left[\begin{array}{ll}b_2\\b_1\end{array}\right]+ \begin{bmatrix} M & 0\\ 0 & M \end{bmatrix}\xi_{t-1}\rightarrow\eta(\xi_t)=0 $ where $\eta(\xi_t)$ is the residual. To apply Newton's Method, I compute the Jacobian $J_{mn}=\frac{\partial\eta(\xi_t)_m}{\partial(\xi_t)_n}=\mathcal{M}_{mn}-dt\frac{\partial\bar{b}_m}{\partial(\xi_t)_n}$, where $\mathcal{M}$ is the left side big matrix in the equation above and $\bar{b}$ is the right side big vector.
By expanding $\bar{b}$ in the basis functions (which may be $b_2$ or $b_1$ depending on the index $m$) I was able to compute $J_{mn}$. For example, let's suppose we're looking at $m=1...n_{nodes}$, $\bar{b}_m=-\sum_{ijk}(\rho_i\rho_j+\psi_i\psi_j)\psi_k\int_{domain}\phi_i\phi_j\phi_k\phi_m$, and so,
$ \frac{\partial\bar{b}_m}{\partial(\xi_t)_n} = -\left\{ \begin{array}{ll} 2\sum_{ik}\rho_i\psi_k\mathcal{I}_{inkm}, \text{for } n=1...n_{nodes} \\ 2\sum_{ij}(\rho_i\rho_j+\psi_i\psi_j)\mathcal{I}_{ijnm}, \text{for } n=n_{nodes}+1...2n_{nodes}\\ \end{array} \right. $,
where $\mathcal{I}_{ijnm}=\int_{domain}\phi_i\phi_j\phi_n\phi_m$.
So, is this correct up to now? And if so, how am I supposed to handle the tensor $\mathcal{I}_{ijnm}$? One thing to notice is that I can't simply split the integral over the domain into a sum of integrals over the elements, since $\mathcal{I}_{ijnm}\neq 0$ not only when all of the hat functions $\{i,j,n,m\}$ are supported inside one element, but also when one of them is centered outside the element.
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays
(Elsevier, 2014-11)
The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ...
|
In honor of Halloween, I would like to discuss mathematics associated with one of my favorite fictional mathematicians: Dr. Alex Schwartz, from Charles Stross's Hugo-award-nominated series
The Laundry Files. Alex is from the north of England and "suffers from an inordinate case of impostor syndrome." Like many people with math backgrounds, he has a job outside academia: at the beginning of The Rhesus Chart, he is working in London as a "quant," developing new financial models for the research arm of a major investment bank. He is also a vampire.
There's a long literary tradition associating certain kinds of geometry with horror. It began with the astonishingly influential (and astonishingly racist) writer H.P. Lovecraft's references to non-Euclidean geometry and continues in more light-hearted references to twisting spaces and Escher's art. In Stross's fiction, this association is literal: sufficiently complex algorithmic or mathematical calculations can open gateways to other worlds and the unspeakable horrors that lurk within. Alex's multidimensional financial models create just such a gateway.
My favorite game, though, is trying to work out the topic of Alex's dissertation. What do we know about Alex's mathematical background?
Armed with this information, one can try to identify Alex's research area, and perhaps even his advisor. There are multiple people in the Oxford topology group working on problems in algebraic topology inspired by theoretical physics. Category theory is a meta-subject, focused on common structures that appear in different branches of mathematics, so perhaps it's not surprising for physics-inspired topology problems to cross into category theory. (Oxford also has a very active mathematical physics group incorporating some people who literally work on string theory, but Alex's interest in physics seems to be limited to the math problems it produces.) The obvious choice for Alex's research area, given the Oxford topology group's interests, appears to be Topological Quantum Field Theory, or TQFT. But what is a TQFT? And what does it have to do with category theory?
Let's start with a very simple story motivated by string theory. Suppose we have a little tiny loop. Because this is a string theory story, this loop represents a fundamental component of the universe; perhaps a physicist in a lab would identify it as a light particle like a photon, or something more exotic, like a neutrino. After a little while, our loop splits into two loops. In the physicist's version of events, one particle has become two. We can illustrate this story with a surface, the aptly named pair of pants:
At the top of the picture we have our starting loop, and at the bottom of the picture we have the final two loops. Intermediate slices of the surface correspond to intermediate moments of time.
Now, even if the fundamental components of the universe do act like tiny loops--a matter which is a subject of much debate--we can't observe these loops directly. Not only is there no microscope powerful enough, it is impossible to build a microscope powerful enough. Instead, we associate to each loop a
quantum state, representing quantities that someone could measure in a lab. Geometrically, we can think of these numbers as a vector, a little arrow with its tail at the origin and its head at the coordinates given by our list of numbers. If you're a quantum physics enthusiast, you may recognize that this vector describes the probability that a physicist will make particular measurements. At the moment, though, we're more interested in the shape of our pair of pants. Shapes like this will give us the topology in Topological Quantum Field Theory.
Suppose we have two loops that, after a while, join to form a single loop:
We can combine the pair of pants shape with the upside-down pair of pants shape to form a new shape, corresponding to the story of a loop that splits into two, then recombines into a single loop:
Next, we'd like to recast our stories about loops and strings in the language of category theory. To specify a category in the sense of category theory, you start by specifying the
objects you are studying. For example, I might have a category where my objects are collections of miniature animals:
Categories also come with
morphisms, which are ways of transforming the objects in the category. You can think of morphisms as particularly nice maps or functions. Each morphism has a source (sometimes called a "domain") and a target. In the miniature animal collection category, I have morphisms that consist of "adding an animal":
If the target of one morphism matches the source of another morphism, we can compose the two morphisms. For example, I can add a frog after adding a chicken:
But I can't compose my "adding a chicken" morphism with itself: I only own one toy chicken, so an animal collection with a chicken in it cannot be the source of an adding a chicken morphism.
In our stories about strings, the objects are one or more loops, and the morphisms are the surfaces that connect them. These connecting surfaces are called
cobordisms. The word "cobordism" combines "bord-" from border with "-ism" from morphism; we say "cobordisms" rather than "bordisms" because we are reversing the operation of taking the border of a surface.
It's worth pointing out that there can be multiple different cobordisms relating objects. For instance, one cobordism from a single loop to a single loop is a cylinder:
But we have already seen another cobordism starting and ending with one loop that has a hole in the middle:
Our category of cobordisms has some extra structure, which comes from the geometric operation of combining loops. Mathematically, the procedure of combining multiple loops without gluing them together is called the
disjoint union. (It's "disjoint" because they are not joined!) The nice thing about disjoint union is that it's compatible with our morphisms, the cobordisms, as well as our objects. For example, we can combine the pair of pants with the hole-in-the-middle cobordism to make a cobordism taking two loops to three loops:
In this context, one can think of the disjoint union operation as being like multiplication. There's even an identity element: taking the disjoint union with the empty set gives us the same cobordism we started with. Mathematically, adding this multiplication-like operation to our category makes it a
monoidal category.
Mathematicians don't necessarily stop at 1-dimensional loops and 2-dimensional cobordisms. You can also make a monoidal category out of smooth
d-dimensional shapes (strictly speaking, d-manifolds) that form the boundaries of d+1-dimensional cobordisms. The loops and surface cobordisms category has two advantages, though: it matches the stories from string theory, and it's easy to draw examples.
Our story about strings also included vectors, associated with quantum states. Let's use some of these ingredients to build a different kind of monoidal category. There is certainly a category whose objects are vectors, but we want to do something a little fancier: our objects will be
vector spaces. For example, we could take the two-dimensional vector space consisting of all of the vectors with their tails at the origin and their heads at a point in a plane. In physics terms, each vector space represents all of the possible quantum states that a string might be in at a particular moment in time.
The morphisms in our category will be
linear transformations. These include operations such as stretching, rotating, or flipping all the vectors in a vector space. In terms of equations, each linear transformation corresponds to multiplying by a matrix. For example, we can include a two-dimensional vector space as the $xy$-plane in a three-dimensional vector space by multiplying by an appropriate matrix: $$ \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ y \\ 0 \end{pmatrix}$$
We have objects and morphisms now. For a monoidal category, we also need a "multiplication" operation. This means we need a way to combine two vector spaces
that might have different dimensions in order to make a new vector space. We will use an operation called the tensor product of vectors, which is usually written $\otimes$. Tensor product is a really multiplication-heavy multiplication: we will multiply every single coordinate of the first vector by every coordinate of the second vector, then line up the resulting numbers into a new vector. Thus, if $\vec{u}$ has $m$ coordinates and $\vec{v}$ has $n$ coordinates, the tensor product $\vec{u} \otimes \vec{v}$ has $mn$ coordinates. We can organize the computation of $\vec{u} \otimes \vec{v}$ by computing the matrix $\vec{u} \vec{v}^T$ and then listing its entries in a vector of length $mn$.
As an example, let's compute $\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} \otimes \begin{pmatrix} 4\\ 5 \end{pmatrix}$.
$$ \begin{align} \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} \begin{pmatrix} 4\\ 5 \end{pmatrix}^T &= \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} \begin{pmatrix} 4 & 5 \end{pmatrix} \\ &= \begin{pmatrix} 4 & 5 \\ 8 & 10 \\ 12 & 15\end{pmatrix} . \end{align} $$
Thus,
$$\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} \otimes \begin{pmatrix} 4\\ 5 \end{pmatrix} = \begin{pmatrix} 4 \\ 5 \\ 8 \\ 10 \\ 12 \\ 15 \end{pmatrix}.$$
Now, if we have an $m$-dimensional vector space $V$ and an $n$-dimensional vector space $W$, the tensor product of vector spaces $V \otimes W$ is just all the possible tensor products of a vector from $V$ and a vector from $W$. The new vector space $V \otimes W$ is $mn$-dimensional. It's also possible to take the tensor product of two linear transformations, by finding the output of each linear transformation and then taking the tensor product of those outputs.
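The worked example can be reproduced in a couple of lines (this is just the Kronecker product of the two coordinate vectors):

```python
def tensor(u, v):
    # tensor (Kronecker) product of two coordinate vectors:
    # every coordinate of u times every coordinate of v, in order
    return [ui * vj for ui in u for vj in v]

print(tensor([1, 2, 3], [4, 5]))  # -> [4, 5, 8, 10, 12, 15]
```

Notice the length of the result is $mn = 3 \cdot 2 = 6$, as promised.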
Now that we have two different monoidal categories, we want a way to relate them. When mathematicians want to compare two categories, they usually look for a
functor. Functors assign each object in one category to an object in the other category, and assign each morphism to a corresponding morphism. For example, I could create a functor from my "collections of miniature animals" category to a "sets of names" category. Perhaps the bear is named Emmy, the green turtle is named Logo, and the blue turtle is named Pig. Then my functor takes this collection:
and yields {Emmy, Logo, Pig}. The functor takes the "adding an animal" morphism to the "adding a name" morphism. For example, adding a chicken corresponds to the transformation from {Emmy, Logo, Pig} to {Emmy, Logo, Pig, Henrietta Swan}.
We now have all of the language we need to describe topological quantum field theory. A Topological Quantum Field Theory, or TQFT, is a functor from the category of cobordisms of loops (or, more generally, from the category of cobordisms of $d$-dimensional manifolds) to the category of vector spaces that takes disjoint union to tensor product. In other words, a TQFT is a rule that tells you how to associate a vector space to a loop and a linear transformation to a cobordism. Disjoint unions on the cobordism side give you tensor products on the vector space side.
From a mathematical perspective, the interesting thing about a TQFT is that it relates very different styles of mathematics. Cobordisms are easy to visualize, at least in low dimensions, but the geometric information associated with them can grow very complicated very quickly. In contrast, working with tensor products can be algebraically intensive, but it's a very straightforward kind of labor. In particular, though it's hard to teach a computer to understand the boundaries of surfaces, it's very easy to make a computer manipulate matrices for you.
We've seen that the study of TQFTs combines topology (from cobordisms), algebra (from tensor products), category theory (to describe the functors), and even potentially some coding (to make a computer work out examples). This is excellent preparation for the modern, mathematically inclined vampire!
|
Let $M$ be a smooth compact manifold, and let $X$ be a smooth vector field of $M$ that is nowhere vanishing, thus one can think of the pair $(M,X)$ as a smooth flow with no fixed points. Let us say that a smooth $1$-form $\theta$ on $M$ is
adapted to this flow if:

- $\theta(X)$ is everywhere positive; and
- the Lie derivative ${\mathcal L}_X \theta$ is an exact $1$-form.
(By the way, I'd be happy to take suggestions for a better name than "adapted". Most adjectives such as "calibrated", "polarised", etc. are unfortunately already taken.)
Question. Is it true that every smooth flow with no fixed points has at least one $1$-form adapted to it?
At first I was sure that there must be counterexamples (perhaps many such), but every construction of a smooth flow I tried to make ended up having at least one adapted $1$-form. Some examples:
- If the flow is isometric (that is, it preserves some Riemannian metric $g$), one can take $\theta$ to be the $1$-form dual to $X$ with respect to the metric $g$.
- If the flow is an Anosov flow, one can take $\theta$ to be the canonical $1$-form.
- If $M$ is the cosphere bundle of some compact Riemannian manifold $N$ and $(M,X)$ is the geodesic flow, then one can again take $\theta$ to be the canonical $1$-form. (This example can be extended to a number of other Hamiltonian flows, such as flows that describe a particle in a potential well, which was in fact the original context in which this question arose for me.)
- If the flow is a suspension, one can take $\theta$ to be $dt$, where $t$ is the time variable (in local coordinates).
- If there is a morphism $\phi: M \to M'$ from the flow $(M,X)$ to another flow $(M',X')$ (thus $\phi$ maps trajectories to trajectories), and the latter flow has an adapted $1$-form $\theta'$, then the pullback $\phi^* \theta'$ of that form will be adapted to $(M,X)$.
Some simple remarks:
- If $\theta$ is adapted to a flow $(M,X)$, then so is $(e^{tX})^* \theta$ for any time $t$, where $e^{tX}: M \to M$ denotes the time evolution map along $X$ by $t$. In many cases this allows one to average along the flow and restrict attention to cases where $\theta$ is $X$-invariant. In the case when the flow is ergodic, this would imply in particular that we could restrict attention to the case when $\theta(X)$ is constant. Conversely, in the ergodic case one can almost (but not quite) use the ergodic theorem to relax the requirement that $\theta(X)$ be positive to the requirement that $\theta(X)$ have positive mean with respect to the invariant measure.
- The condition that ${\mathcal L}_X \theta$ be exact implies that $d\theta$ is $X$-invariant, and is in turn implied by $\theta$ being closed. For many vector fields $X$ it is already easy to find a closed $1$-form $\theta$ with $\theta(X) > 0$, but this is not always possible in general, in particular if $X$ is the divergence of a $2$-vector field with respect to some volume form, in which case the integral of $\theta(X)$ along this form must vanish when $\theta$ is closed. However, in all the cases in which this occurs, I was able to locate a non-closed example of $\theta$ that was adapted to the flow. (But perhaps if the flow is sufficiently "chaotic" then one can rule out non-closed examples also?)
|
Lately I’ve been studying up on ray tracing, and one of my goals has been to build a
nonlinear ray tracer — that is, a ray tracer that works in curved space, for example space that is curved by a nearby black hole. (See the finished source code!)
In order to do this, the path of each ray must be calculated in a stepwise fashion, since we can no longer rely on the geometry of straight lines in our world. With each step taken by the ray, the velocity vector of the ray is updated based on an equation of motion determined by a “force field” present in our space.
This idea has certainly been explored in the past, notably by Riccardo Antonelli, who derived a very clever and simple equation for the force field that guides the motion of the ray in the vicinity of a black hole, namely
$$\vec F(r) = -\frac{3}{2} h^2 \frac{\hat r}{r^5}$$
I decided to use the above equation in my own ray tracer because it’s very efficient computationally (and because I’m not nearly familiar enough with the mathematics of GR to have derived it myself). The equation models a simple Schwarzschild black hole (non-rotating, non-charged) at the origin of our coordinate system. The simplicity of the equation has the tradeoff that the resulting images will be mostly unphysical, meaning that they’re not exactly what a real observer would “see” in the vicinity of the black hole. Instead, the images must be interpreted as instantaneous snapshots of how the light bends around the black hole, with no regard for redshifting or distortions relative to the observer’s motion.
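To make the stepping concrete, here is a rough sketch of the marching loop described above (the names, units, and the choice of horizon radius are mine for illustration, not taken from the actual source code; the "kick then drift" update is a semi-implicit Euler scheme):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(a):
    return math.sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2])

def trace_ray(pos, vel, dt=1e-3, steps=10_000, horizon=1.0):
    """March one ray through the force field F(r) = -(3/2) h^2 rhat / r^5,
    where h^2 = |r x v|^2 is fixed by the ray's initial conditions.
    Returns (pos, vel) at the end, or None if the ray falls inside `horizon`."""
    h2 = norm(cross(pos, vel)) ** 2
    for _ in range(steps):
        r = norm(pos)
        if r < horizon:
            return None                          # captured by the black hole
        coef = -1.5 * h2 / r**5                  # signed magnitude along rhat
        # semi-implicit Euler: kick the velocity, then drift the position
        vel = tuple(v + dt * coef * (p / r) for v, p in zip(vel, pos))
        pos = tuple(p + dt * v for p, v in zip(pos, vel))
    return pos, vel
```

A nice property of this kick-then-drift ordering is that, because the force is purely radial, the ray's angular momentum $\vec r \times \vec v$ is conserved to rounding error, which makes a good sanity check on the integrator.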
Nevertheless, this kind of ray tracing provides some powerful visualizations that help us understand the behavior of light around black holes, and help demystify at least some of the properties of these exotic objects.
My goal is to build on this existing work, and create a ray tracer that is more fully featured, with support for other types of objects in addition to the black hole. I also want it to be more extensible, with the ability to plug in different equations of motion, as well as to build more complex scenes, or even to build scenes algorithmically. So, now that my work on this ray tracer has reached a semi-publishable state, let’s dive into all the things it lets us do.
Accretion disk
The ray tracer supports an accretion disk that is either textured or plain-colored. It also supports
multiple disks, at arbitrary radii from the event horizon, albeit restricted to the horizontal plane around the black hole. The collision point of the ray with the disk is calculated by performing a binary search for the exact intersection. If we don’t search for the precise point of intersection, we would see artifacts due to the “resolution” of the steps taken by each ray (notice the jagged edges at the bottom of the disk):
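The refinement step can be sketched like this: once two consecutive ray samples straddle the disk plane, bisect the parameter along that segment (a hypothetical helper of my own; it assumes the ray segment between the two samples is close enough to straight):

```python
def refine_plane_crossing(p0, p1, axis=1, tol=1e-9):
    """Bisect between two consecutive ray points p0, p1 that straddle the
    disk plane (coordinate `axis` = 0) to find the crossing point."""
    def lerp(a, b, t):
        # point at parameter t along the segment from a to b
        return tuple(x + t * (y - x) for x, y in zip(a, b))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # keep the half-interval whose endpoints still straddle the plane
        if (lerp(p0, p1, lo)[axis] > 0) == (lerp(p0, p1, mid)[axis] > 0):
            lo = mid
        else:
            hi = mid
    return lerp(p0, p1, 0.5 * (lo + hi))
```

With the crossing pinned down to a tolerance far below a pixel, the jagged edges disappear regardless of the step size used for the march itself.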
Once the intersection search is implemented, the lines and borders become nice and crisp:
We can also apply different colors to the top and bottom of the disk. Observe that the black hole distorts the disk in a way that makes the bottom (colored in green) appear around the lower semicircle of the photon sphere, even though we’re looking at the disk from above:
Note that the dark black circle is
not the event horizon, but is actually the photon sphere. This is because photons that cross into the photon sphere from the outside cannot escape. (Only photons that are emitted outward from inside the photon sphere can be seen by an outside observer.)
If we zoom in on the right edge of the photon sphere, we can see higher-order images of the disk appear around the sphere (second- and even third-order images are visible). These are rays of light that have circled around the photon sphere one or more times, and eventually escaped back to the observer.
And here is the same image with a more realistic-looking accretion disk:
Great! Now that we have the basics out of the way, it’s time to get a little more crazy with ray tracing arbitrary materials around the black hole.
Additional spheres
The ray tracer allows adding an unlimited number of spheres, positioned anywhere (outside the event horizon, that is!) and either textured or plain-colored. Here is a scene with one hundred “stars” randomly positioned in an “orbit” around the black hole (click to view larger versions of the images):
Notice once again how we can see second- and third-order images of the spheres as we get closer to the photon sphere. By the way, here is a similar image of stars around the black hole, but with the curvature effects turned off (as if the black hole
did not curve the surrounding space):
And here is a video, generated using the ray tracer, that shows the observer circling around the black hole with stars in its vicinity. Once again, this is not a completely realistic physical picture, since the stars are not really “orbiting” around the black hole, but rather it’s a series of snapshots taken at different angles:
Notice how the spherical stars are distorted around the Einstein ring, as well as how the background sky is affected by the curvature.
Reflective spheres
And finally, the ray tracer supports adding spheres that are
perfectly reflective:
All that’s necessary for doing this is to calculate the exact point of impact by the ray on the sphere (again using a binary intersection search) and get the corresponding reflected velocity vector based on the normal vector on the sphere at that point. Here is a similar image, but with a textured accretion disk:
Future work
Eventually I’d like to incorporate more algorithms for different equations of motion for the rays. For example, someone else has encoded a similar algorithm for a Kerr black hole (i.e. a black hole with angular momentum), and there is even a port of it to C# already, which I was able to integrate into my ray tracer easily:
A couple more ideas:
There’s no reason the ray tracer couldn’t support different types of shapes besides spheres, or even arbitrary mesh models (e.g. STL files).
I’d also like to use this ray tracer to create some more animations or videos, but that will have to be the subject of a future post.
Make it run on CUDA?
|
$$ Y(a) = \int_{0}^{\infty} \frac{e^{-ax}}{x^2+4} dx $$
a) Find the values of $a$ for which $Y(a)$ is well defined.
b) Show that on $(0, \infty)$, $Y$ satisfies a second-order non-homogeneous differential equation with constant coefficients.
a) The integral exists if and only if $a \geq 0$. For $a \geq 0$, compare the integrand to $\frac{1}{x^2+4}$ to see that the integral converges. If $a < 0$, then there is an $x_0$ such that the integrand is positive and increasing for all $x > x_0$, which shows that the integral cannot be finite.
b) Differentiating twice under the integral sign, we obtain for $a > 0$ $$Y''(a) = \int_{0}^{\infty} \frac{x^2}{x^2 + 4} e^{-ax}~ \mathrm dx.$$ This yields $$Y''(a) + 4 Y(a) = \int_{0}^{\infty} e^{-ax}~\mathrm dx = \frac{1}{a}, \quad a > 0,$$ which is the desired differential equation.
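As a quick sanity check (not part of the original answer), one can approximate $Y$ numerically and verify $Y''(a) + 4Y(a) = 1/a$, here via a composite Simpson rule on a truncated domain and a central second difference:

```python
import math

def Y(a, upper=60.0, n=20000):
    """Composite Simpson approximation of the integral, truncated at `upper`
    (the tail is negligible for a around 1)."""
    f = lambda x: math.exp(-a * x) / (x * x + 4.0)
    h = upper / n
    total = f(0.0) + f(upper)
    for i in range(1, n):
        total += (4.0 if i % 2 else 2.0) * f(i * h)
    return total * h / 3.0

a, eps = 1.0, 1e-2
Ypp = (Y(a + eps) - 2.0 * Y(a) + Y(a - eps)) / eps**2  # central 2nd difference
print(Ypp + 4.0 * Y(a), 1.0 / a)  # both should be close to 1.0
```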
|
This question is quite general; I'll just write my own point of view and hope others add more to give a complete picture.
0) Let me quote A. Kirillov himself:
"In conclusion I want to express the hope that the orbit method
becomes for my readers a source of thoughts and inspirations as it has
been for me during the last thirty-five years"
Sorry, I cannot find another, much more colorful quote from him, where he says something like: the orbit method not only produced results on many principal questions of representation theory, but also gives informal guidance on how to invent new and new results.
1) What is the context of the orbit method, and why is it related to mathematical physics? The orbit method as a particular case of the quantization ideology.
I think the orbit method should be seen in the context of quantization, and roughly speaking its relation to mathematical physics is that the orbit method is a particular example of the "quantization program". (Well, there are some other relations with integrable systems, but they are not so central, imho.)
Let me try to explain this in more detail. Consider the universal enveloping algebra $U(g)$; it is an algebra with generators $e_k$ and relations $[e_{k}, e_{l} ] = c^{i}_{kl} e_i$. Let us insert a parameter $h$ here: $[e_{k}, e_{l} ] = h c^{i}_{kl} e_i$. For $h=0$ we have just the commutative algebra - denote it $S(g)$; for any non-zero $h$ this algebra is isomorphic to $U(g)$.
Now let us look at $h$ "very small", i.e. we can formally take $h^2=0$. What we will see is that the structure of the non-commutative algebra $U(g)$ reduces to the commutative algebra $S(g)$ "plus" a Poisson bracket on it.
So the moral is that
A non-commutative algebra, when the non-commutativity tends to zero, is a commutative algebra with a Poisson bracket. (This is called the classical limit.)
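To make this concrete (a standard computation, not spelled out in the original answer): with the rescaled relations $[e_k, e_l] = h\, c^i_{kl} e_i$, the bracket that survives on $S(g)$ as $h \to 0$ is

```latex
\{f, g\} := \lim_{h \to 0} \frac{1}{h}\,[f, g],
\qquad\text{so on generators}\qquad
\{e_k, e_l\} = c^{i}_{kl}\, e_i .
```

This is the Kirillov-Kostant-Souriau Poisson bracket on $\mathfrak{g}^*$, whose symplectic leaves turn out to be the coadjoint orbits.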
The big goal of the quantization program is to try to express everything about the non-commutative algebra in terms of the Poisson algebra. The reason is that a Poisson algebra is something simpler than a non-commutative algebra.
In particular we can be interested in a description of the irreps of the non-commutative algebra. What are the corresponding objects for the Poisson algebra? Answer - symplectic leaves.
Observation: the symplectic leaves of $S(g)$ (classical limit of $U(g)$) are exactly the coadjoint orbits.
So this puts the orbit method in the more general framework of quantization, where one may hope to describe the irreps of quantized algebras via symplectic leaves in the Poisson algebra. The problem is that such a construction does not always exist, neither for $U(g)$ nor for general quantum algebras, at least not in any simple sense, and a big activity is to understand where the borders lie between true statements and fakes; it is a neverending activity.
It is worth remarking that this point of view on the orbit method is not the original one, but emerged later when quantization theory began to develop.
The basic questions of representation theory are calculating characters, induction-restriction, tensor products. The natural question: what are the parallel constructions in the Poisson world (i.e. symplectic geometry)? How can representation theory questions be answered with the help of symplectic geometry? There are ideological answers to these questions, and again it is a neverending game to turn the "ideology" into theorems or counter-examples.
Some MO-questions with more details on quantization: Q1, Q2.
2) What is naive quantization, without the metaplectic correction? (It is better to call it a "correction" (imho), not a "quantization", but people use both.)
So, our basic wish is to construct an irreducible representation of $U(g)$ (or more generally of some quantized algebra $\hat A$). The "naive recipe" is the following:
1) Take a symplectic leaf.
2) Consider the algebra of functions on it and split it into two halves, a "P-part" and a "Q-part".
3) The representation space is all functions of the "Q"-variables, and the representation is constructed as: Q-variables act as multiplication operators, while "P" acts as $\partial_Q$.
Now, what do I mean by "split in two parts"? Informally you should think as follows: the Darboux theorem says that the symplectic form is $dp\wedge dq$ in appropriate coordinates, so you have these "p" and "q" as the splitting. The problem is that Darboux's theorem is a local result, and you need something more complicated to make it work; look up the word "polarization" for more information on that.
3) Towards metaplectic correction.
In the previous item I wrote that naively we should take "half of the functions" on the orbit (symplectic leaf) as a Hilbert space.
This is actually the point to be corrected.
We should not take "functions", but should take "half-forms".
The simplest motivation is that we want to have a Hilbert space, so we need an inner product, but there is no canonical one on the functions. But if we take half-forms $f(q) \sqrt{dq}$, we have the canonical inner product: $\int fg \sqrt{dq}^2 = \int fg\, dq$.
So the metaplectic correction story is about how to introduce these "half-forms" into the business in an appropriate way. Some time ago we exercised with the quantization of the sphere $S^2$ and argued that in general this should be consistent with the Duflo isomorphism.
4) Finally to your main question: "quantization of coadjoint orbits".
Well, sorry, I cannot say much. The general ideology here is that we should take a coadjoint orbit and try to construct an irrep. Kirillov did it in 1962 for nilpotent groups; for solvable groups much progress was achieved later. For semisimple groups - generic orbits - no problems, but for many orbits it is impossible (or at least impossible in some naive sense). I do not know what the current state of the art is. It might be that there are some particular classes of orbits where some people think one can indeed construct an irrep, and that it is within reach and a good topic for a PhD paper. It might be that the ideas of Ranee Brylinski, Geometric Quantization of Real Minimal Nilpotent Orbits, can be somehow developed... But I do not know much about it; my impression is that all the remaining open problems are quite difficult and technical, and I would not start this as a PhD. In any case I would ask David Vogan or Jeffrey Adams (he is sometimes on MO). By the way, have a look at D. Vogan's review of "Lectures on the Orbit Method".
Anyway, good luck!
|
What's New in v18.2.9
Visual Studio 2019 Support
CodeRush now installs and runs in Visual Studio 2019.
Unit Test Builder
This release gets a port of the Unit Test Builder from CodeRush Classic, which helps you generate test cases for members of interest as you step through code.
The Unit Test Builder supports NUnit, XUnit and VSTest frameworks.
To generate a new test case as you are stepping through a debug session, place the caret on the method you want to test, invoke the Code Actions menu (with Ctrl+. or Ctrl+~), select the "Generate Test In" menu item and select the target location for the new test. Tests can be added to existing test classes or to a new test fixture.
After your debug session ends, CodeRush will generate unit test stubs for all the tests created, complete with calls to the methods the feature was invoked from. You can add assertions and/or modify the initialization as needed.
New Refactorings and Code Providers
We have added the Initialize code provider. This provider initializes the variable or the field under the caret with the default type value.
Just place the caret on a variable or the field, press Ctrl+. or Ctrl+~ to invoke the 'Code Actions' Menu, select 'Initialize' from the menu and press Enter.
This code provider is available in C# and Visual Basic.
TypeScript Support - Navigation Providers
The following navigation providers are now available in TypeScript code:
Base Types, Derived Types, Members, Instantiations, Implementations
LaTeX Formulas Support (Beta)
This release introduces beta support for LaTeX formulas in comments.
You can view and edit fully-formatted mathematical formulas in source code comments (in C#, Visual Basic, JavaScript, TypeScript, HTML, XAML, CSS, and F# code).
You can also change a formula's foreground color, background color and size.
LaTeX support is currently in beta, and supports a subset of the LaTeX formula commands.
We have also added templates to make LaTeX formula creation easier.
Template Description Expansion Example
/f LaTeX formula in C#. // <formula >
'f LaTeX formula in VB. ' <formula >
!f LaTeX formula in XAML. <!-- <formula > -->
\. Centered dot \cdot
\.. Three centered dots \cdots
\8 Infinity \infty
\b Braces {}
\bca Big Cap \bigcap_{lower}^{upper}
\bcu Big Cup \bigcup_{lower}^{upper}
\cp Co-product \coprod_{lower}^{upper}
\f Fraction \frac{numerator}{denominator}
\l Limit \lim_{x\to\infty}
\lr Left & Right Parens \left( \right)
\nr nth Root \sqrt[root]{value}
\o Circle symbol \circ
\oi Contour integral \oint
\p Product \prod_{lower}^{upper}
\s Sum \sum_{lower}^{upper}
\sq Square Root \sqrt{value}
\v Vector \vec{numerator}
These templates make it easy to create formatted formulas inside code comments from scratch.
New CodeRush templates also include support for Greek letters in LaTeX formulas. To get a Greek letter in a formula, just enter a "." followed by the letter (or letter abbreviation) you want. For example, to reference the Greek pi symbol, use ".p". To get an uppercase Greek letter, use an uppercase letter after the dot. For example, to get an uppercase Delta symbol, use ".D". For a complete list of supported Greek symbols, see the
Comments\LaTeX\Greek Symbols folder on the CodeRush Templates options page.
The LaTeX Formulas Support feature ships disabled by default. You can enable LaTeX formula support on the Rich Comments options page.
Code Places Enhancements
Left-Right Dock Option
We have added the 'Dock Options' to the 'Navigation | Code Places' options page. You can now dock the Code Places panel to the right/left margin.
Single-Click Navigation
You can now jump to the desired type member in the code places list with a single click. Just set the 'Navigate to the member' option to 'Single click' on the 'Navigation | Code Places' options page.
Known Issues
T728015 – IntelliRush does not allow inserting a lambda expression after a user types "=>" in Visual Studio 2019
T729124 – Naming Assistant does not provide any intellisense items in Visual Studio 2019
T729119 – String Format Assistant does not support filtering during typing
Resolved Issues
T726495 – Code Formatting – The local function with the lambda expression is formatted incorrectly
T726351 – Code Places – The Code Places panels shown for multiple source control diff views does not leave much space to view the code
T721690 – Code Places – The Options page is missing under the Navigation option node
T726916 – General - Visual Studio 2019 RC3 - InvalidOperationException with NodeJs in the log file
T719550 – General – NGen does not work in Visual Studio 2019 Preview
T722858 – Markers – CodeRush drops markers even if the "Enable markers" option is disabled
T722534 – Naming Assistant – The Naming Assistant works incorrectly in Visual Studio 2019 Preview
T720387 – Navigation – "Jump to Symbol" causes the access denied exceptions when Quick Nav filters are changed during filtering
T724618 – Refactoring – The "Add to interface" refactoring destroys methods in the existing class
T723733 – Refactoring – The "Convert to Constant" refactoring works incorrectly when it is applied to a read-only field
T725570 – Rich Comments – Rich comments do not work in C++
T723559 – Settings – The Code Cleanup options page does not show code changes in before cleanup and after cleanup previews
T724259 – Settings – The "Locals" filter is not available on IntelliRush menu
T724119 – Smart Semicolon – A semicolon goes in an incorrect place in a certain scenario in JavaScript/TypeScript code
T722629 – Static Code Analysis – CRRSP01: False positive in certain circumstances with words following a new line char
T724175 – Static Code Analysis – The "Possible NullReferenceException" code issue gives false reports
T725542 – Static Code Analysis – "Possible NullReferenceException" false positive for fields with value types
T728539 – Templates – Typing "if … or" expands to "||" in a JavaScript comment
T722875 – Test Runner – The Test Runner incorrectly calculates the count of tests grouped by multiple categories
|
Overview
What in the world do they look like?
Why are they fast? Why are they small? Which one is better, and why?
Why did the authors design them like that?
So, let’s try to solve these doubts step by step.
MobileNet v1 vs. Standard CNN models
MobileNet v1 is smart enough to decompose the standard convolution operation into two separate operations: depth-wise (or channel-wise) convolution and point-wise convolution.
We can take the following figure as an illustration:
Suppose we have the convolutional layer with kernel size $K$, input size $C_{in}\times H\times W$ and output size $C_{out} \times H \times W$ (stride=1). For a standard convolution operation, the computation complexity, here we use MACC (Multiply-accumulate, also known as MADD), is calculated as (for how to calculate FLOPs or MACC, we kindly recommend this great post: How Fast is my model?):
$$\begin{equation}
K\times K\times C_{in}\times C_{out}\times H\times W. \end{equation}\label{eq1}$$
With this decomposition, the two separate operations produce output feature maps of exactly the same size as the standard counterpart, but at a much lower computation cost. How does that work?
OK, depth-wise convolution takes a single channel as input and outputs a single channel for each channel of the input volume, and then concatenates the output channels for the second stage, in which the point-wise convolution takes place. Accordingly, its computation cost is:
$$ K\times K\times H\times W\times C_{in}. $$
The point-wise convolution is a simple 1x1 convolution (also known as network-in-network), which transfers the $C_{in}\times H\times W$ volume produced by the depth-wise operation to a $C_{out}\times H\times W$ output volume. Since we have dealt with the input volume with a channel-by-channel strategy at first, so the purpose of point-wise operation is to combine the information of different channels and fuse them to new features. The point-wise operation costs
$$ 1\times 1\times C_{in}\times C_{out}\times H\times W = C_{in}\times C_{out}\times H\times W.$$
As a result, with the above decomposition, the total MACC is
$$\begin{equation} K\times K\times H\times W\times C_{in} + C_{in}\times C_{out}\times H\times W. \end{equation}\label{eq2}$$
Compared with equation $\eqref{eq1}$, the reduction of computation is $\eqref{eq2}$/$\eqref{eq1}$ $=\frac{1}{C_{out}} + \frac{1}{K^2}$.
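The arithmetic above is easy to check in code (a small illustration of the ratio, with a layer shape I picked for the example, not one from the paper):

```python
def macc_standard(k, c_in, c_out, h, w):
    # K*K*C_in multiply-accumulates per output element, C_out*H*W outputs
    return k * k * c_in * c_out * h * w

def macc_separable(k, c_in, c_out, h, w):
    # depth-wise (K*K per input channel) + point-wise (1x1, C_in -> C_out)
    return k * k * c_in * h * w + c_in * c_out * h * w

k, c_in, c_out, h, w = 3, 128, 256, 56, 56
ratio = macc_separable(k, c_in, c_out, h, w) / macc_standard(k, c_in, c_out, h, w)
print(ratio, 1 / c_out + 1 / k**2)  # the two expressions agree
```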
In addition, the number of parameters of the standard convolution filters is $K\times K\times C_{in}\times C_{out}$. With depth-wise and point-wise convolution, the number of parameters becomes $K\times K\times C_{in} + C_{in}\times C_{out} = C_{in}\times (K\times K + C_{out})$. In this way, both the computation cost and the model size can be considerably reduced. What's more, this can be taken further by applying the Resolution Multiplier and Width Multiplier, which reduce the resolution of the input images and the number of channels of all layers by a multiplier coefficient.
If this is not yet clear, the following is the whole MobileNet v1 structure with all the bells and whistles.
The structure was drawn according to the code in https://github.com/marvis/pytorch-mobilenet, where the filter in each row of the table takes the input whose size is written in the same row and therefore outputs a volume whose size is written in the following row, which is then processed by the next filter. Finally, BR means Batch normalization and ReLU layers after a certain filter.
What surprised me was that there is no residual module at all - what if we add some residuals or shortcuts like in ResNet? After all, the author achieved his purpose: the accuracy on the ImageNet classification task is comparable to that of the same network built with standard convolution filters, as well as to other famous CNN models.
MobileNet v1 vs. SqueezeNet
First, let’s compare these two networks directly,
where 0.50 MobileNet-160 means halving the channels of all layers and setting the resolution of the input images to $160\times 160$. We can see from the table that the only highlight of SqueezeNet is its model size. We cannot ignore that we also need computation speed when we embed our model into resource-restricted devices like mobile phones. It's hard to say that SqueezeNet is good enough when we see that its MACC count is even larger than AlexNet's, by a large margin.
However, it’s worth thinking about why SqueezeNet has so few parameters. Take a look at its basic unit (a fire module):
The basic idea behind SqueezeNet comes from three principles. First, use 1x1 filters as much as possible; second, decrease the number of input channels to 3x3 filters. The last principle is to downsample feature maps after the merging operation of the residual blocks, so as to keep more activations.
By stacking fire modules, we get a small model, though still a computationally expensive one.
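The parameter savings are easy to see with a concrete count (the "fire2"-style configuration of 96 input channels, squeeze to 16, expand to 64 + 64 comes from the SqueezeNet paper; biases are ignored in the counts):

```python
def fire_params(c_in, squeeze, expand1x1, expand3x3):
    # squeeze layer (1x1) + expand layers (1x1 and 3x3) filter weights
    return c_in * squeeze + squeeze * expand1x1 + 9 * squeeze * expand3x3

def conv3x3_params(c_in, c_out):
    # a plain 3x3 convolution with the same input, for comparison
    return 9 * c_in * c_out

print(fire_params(96, 16, 64, 64), conv3x3_params(96, 128))  # 11776 vs 110592
```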
MobileNet v1 vs. MobileNet v2
Keep in mind that MobileNet v1's success is attributable to its use of depth-wise and point-wise convolutions. These two kinds of filters became the very basic tools for most of the following works focusing on network compression and speed-up, including MobileNet v2, ShuffleNet v1 and v2.
For the MobileNet v2, similar to the above illustration, let’s first take a look at its whole structure. For analysis, we take part of it as the whole structure is stacked with similar components.
In this illustration, the green unit means a residual block while the orange one means a normal block (without residual) with stride 2 to do downsampling. The main characteristic of the MobileNet v2 architecture is that every unit or block first expands the number of channels by point-wise convolutions, then applies depth-wise convolutions with kernel size 3x3 on the expanded space, and finally projects back to a low-channel feature space using point-wise convolutions again. For a block that does not have to downsample its input volume, an additional residual connection is applied to enhance the performance. Another feature, illustrated in the above figure by a single B after each block (meaning Batch normalization only), is that it does not use a non-linearity at the output of blocks. Now, I have the following questions:
1. When building a residual block, why connect the shortcut between the two low-channel ends? Why not connect the "fat parts" just like the original ResNet does?
2. Why does it need to be "fat" in the middle of the block? Why not just keep it slim so as to further reduce its size and parameters?
3. Why not apply ReLU at the end of the block? Compared with ResNet, which applies ReLU on the "slim part" of each block, it seems like the two design strategies (ResNet block and MobileNet v2 block) conflict with each other - why?
OK, let’s try to answer these questions (if you have any different idea, please do not hesitate to contact me, the email can be found in my profile).
For question 1, there was an intuition when designing MobileNet v2: bottlenecks actually contain all the necessary information, so connecting them causes no information loss. On the other hand, connecting the "fat parts" is possible, but that also means we would connect two volumes produced by two depth-wise convolutions, which sounds strange because we usually connect the outputs of normal convolutions (here a point-wise convolution is a normal 1x1 convolution) - but nothing stops us from trying.
For question 2, we can find our answer in an analysis of ReLU.
ReLU causes information collapse; however, the higher the dimension of the input, the less the information collapses. So the high dimension in the middle of the block is there to avoid information loss. And intuitively, more channels usually mean more powerful representative features, enhancing the discriminability of the model. By the same reasoning, it makes sense not to apply ReLU at the "slim output" of the block.
We can use the same argument to attack ResNet, which does use ReLU on low-dimensional features. So why is it still so effective? This can be attributed to the high dimensions of the input and output ends of a ResNet block, which preserve its representative ability even with the ReLU layer in the bottleneck.
The design art of MobileNet v2 is to keep a small number of channels at the input and output of each block, while doing the more complicated feature extraction inside the block with enough channels. This ensures the extraction of effective, high-level features of the image while reducing the computation cost, because the main computation cost comes from the 1x1 convolution filters (see the following figure).
MobileNet v2 has even fewer parameters and MACCs than v1. This is because MobileNet v1 feeds more channels into its 1x1 convolutions than v2 does, leading to many more MACCs, while MobileNet v2 smartly avoids giving many channels to the 1x1 convolutions and does feature extraction mainly via depth-wise convolutions.
MobileNet v2 vs. ShuffleNet v1 vs. NasNet
The above figure shows that ShuffleNet v1 (1.5) and MobileNet v2 have similar model sizes (3.4M params) and computation costs ($\approx 300$M MACCs), and furthermore, similar classification accuracy. This means that ShuffleNet v1 is at the same level as MobileNet v2; the two are closely comparable. So, what does ShuffleNet v1 look like? Click here
Again, we capture part of it to analyse.
We have seen that the main computation takes place in the 1x1 convolutions, which also account for the main part of the parameters. Unlike MobileNet v2, which solves the problem by reducing the number of channels fed into the 1x1 convolutions, ShuffleNet v1 is more straightforward. Specifically, rather than only applying group convolution on the 3x3 filters (for group convolution, see ResNeXt; depth-wise convolution can be regarded as an extreme case of group convolution), it also applies the group operation to the 1x1 filters. Although this reduces the computation cost and the number of parameters effectively, it leads to a problem: different groups cannot communicate with each other, which restricts the power of the model.
The shuffle in ShuffleNet v1 solves the above problem by shuffling all the output channels of the 1x1 group convolutions as a whole, so as to enforce information communication among groups. And the most inspiring thing is that the shuffle operation takes no additional parameters and is computationally efficient.
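The shuffle itself is just a reshape-transpose-flatten of the channel axis. A tiny illustration on a list of channel indices (my own sketch, not code from the paper):

```python
def channel_shuffle(channels, groups):
    """Reshape the flat channel list to (groups, per_group), transpose,
    and flatten again, so every group feeds every group downstream."""
    per_group = len(channels) // groups
    return [channels[g * per_group + i]
            for i in range(per_group)
            for g in range(groups)]

# 6 channels in 2 groups: [0 1 2 | 3 4 5] -> interleaved
print(channel_shuffle([0, 1, 2, 3, 4, 5], groups=2))  # [0, 3, 1, 4, 2, 5]
```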
To further reduce model size and computation cost, ShuffleNet v1 also uses bottlenecks, as illustrated:
As discussed above, MobileNet v2 and ShuffleNet v1 both focus on reducing computation cost on 1x1 convolutions, while there are still three more differences according to their structures.
Difference in how the residual is applied. For MobileNet v2, no residual is used when the shapes of the input and output volumes of a block do not match. For ShuffleNet v1, when the two do not match, an AveragePool + Concatenation strategy is used for the shortcut connection.
According to the above diagram, ShuffleNet v1 quickly downsamples the input image from 224x224 to 56x56, while MobileNet v2 only downsamples its input image to 112x112 in the first stages. By the logic of MobileNet v2, ReLU layers should be applied to "fat layers" rather than bottleneck layers, while ShuffleNet (both v1 and v2) more or less does the opposite (e.g., ReLU after the Compress operator, marked red in the figure). Why?
Well, I think it's worth trying to see what happens if we take away the ReLU after the 3x3 convolutions in MobileNet v1 or MobileNet v2 (e.g., only connect the ReLU to the first 1x1 convolution layer of each MobileNet v2 block). On the other hand, the reason why ShuffleNet v1 does not connect a ReLU after the 3x3 convolution layers comes from the explanation in Xception, which argued that for shallow features (i.e., the 1-channel-deep feature spaces of depth-wise convolutions), non-linearity becomes harmful, possibly due to a loss of information.
NasNet, in which the word "Nas" is an abbreviation of Network architecture search, is definitely a more advanced technology for searching for compact and efficient networks. The auto-search algorithms and other very recent research works (from ICLR 2019, ICML 2019 and CVPR 2019) will be covered in another post. Let's proceed to ShuffleNet v2.
ShuffleNet v2 vs. All
The above methods are based on two principles: small model size and low computation cost. However, in practical applications, effort spent on these criteria does not automatically yield a correspondingly faster model on hardware devices. There are other factors we should take into account when designing an embeddable model for hardware devices - memory access cost (MAC) and battery consumption.
Based on the above findings, ShuffleNet v2 rethinks the previous compression models and proposes four useful design guidelines.
G1, Equal channel width minimizes MAC (this means letting number of input channels equal to that of output channels);
G2, Excessive group convolution increases MAC (do not use group convolutions, or use fewer);
G3, Network fragmentation reduces the degree of parallelism (small stacked convolutions within blocks and branches, as in NasNet);
G4, Element-wise operations are non-negligible (like ReLU and the addition operations in residual blocks).
As described in the original paper, ShuffleNet v1 violates G2 (group convolutions) and G1 (bottleneck blocks), MobileNet v2 violates G1 (inverted bottleneck structure) and G4 (ReLU on “thick” feature maps), and NasNet violates G3 (too many branches).
So the problem is:
How to maintain a large number and equally wide channels with neither dense convolution nor too many groups?
We mentioned that all the above guidelines have been backed by a series of validation experiments. Let's draw the building blocks of ShuffleNet v2 here (actually I've also drawn a table for the ShuffleNet v2 structure here, but it takes time to understand...).
How does it solve the problem?
First, the channel split divides the input channels into two parts; one of them is kept untouched, while the other goes through a 1x1 + DW3x3 + 1x1 pipeline in which the 1x1 convolutions do not use group convolution - on one hand to follow G2, on the other hand because the two branches already amount to two groups. Second, the two branches are merged by concatenation. By doing so, there are no add operations (following G4), and all the ReLUs and depth-wise convolutions only touch half of the input channels, which again follows G4. Then, after concatenation, channel shuffling is applied to enforce communication between branches. In addition, the Concat + Shuffle + Split pipeline can be merged into a single element-wise operation, which follows G4. Similar to DenseNet, it takes advantage of feature reuse.
Under the same FLOPs, ShuffleNet v2 is superior to other models.
Conclusion
We have analysed several classical network compression models, from which we can see that the main strategies to reduce model size and computation cost are depth-wise convolution, group convolution and point-wise convolution.
There are other interesting algorithms, like network pruning, network quantization (e.g., binarizing weights and activations) and network architecture search. They also lead to fast and small network models and will be discussed in the next post.
Note: Most of the figures are directly copied from the original paper.
|
Hodge, J.A. and Swinbank, A.M. and Simpson, J.M. and Smail, I. and Walter, F. and Alexander, D.M. and Bertoldi, F. and Biggs, A.D. and Brandt, W.N. and Chapman, S.C. and Chen, C.C. and Coppin, K.E.K. and Cox, P. and Dannerbauer, H. and Edge, A.C. and Greve, T.R. and Ivison, R.J. and Karim, A. and Knudsen, K.K. and Menten, K.M. and Rix, H.-W. and Schinnerer, E. and Wardlow, J.L. and Weiss, A. and van der Werf, P. (2016) 'Kiloparsec-scale dust disks in high-redshift luminous submillimeter galaxies.',
Astrophysical Journal, 833 (1), p. 103.
Abstract
We present high-resolution (0″.16) 870 μm Atacama Large Millimeter/submillimeter Array (ALMA) imaging of 16 luminous (${L}_{\mathrm{IR}}\sim 4\times {10}^{12}\,{L}_{\odot }$) submillimeter galaxies (SMGs) from the ALESS survey of the Extended Chandra Deep Field South. This dust imaging traces the dust-obscured star formation in these $z\sim 2.5$ galaxies on ~1.3 kpc scales. The emission has a median effective radius of ${R}_{e}$ = 0″.24 ± 0″.02, corresponding to a typical physical size of ${R}_{e}=$ 1.8 ± 0.2 kpc. We derive a median Sérsic index of n = 0.9 ± 0.2, implying that the dust emission is remarkably disk-like at the current resolution and sensitivity. We use different weighting schemes with the visibilities to search for clumps on 0″.12 (~1.0 kpc) scales, but we find no significant evidence for clumping in the majority of cases. Indeed, we demonstrate using simulations that the observed morphologies are generally consistent with smooth exponential disks, suggesting that caution should be exercised when identifying candidate clumps in even moderate signal-to-noise ratio interferometric data. We compare our maps to comparable-resolution Hubble Space Telescope ${H}_{160}$-band images, finding that the stellar morphologies appear significantly more extended and disturbed, and suggesting that major mergers may be responsible for driving the formation of the compact dust disks we observe. The stark contrast between the obscured and unobscured morphologies may also have implications for SED fitting routines that assume the dust is co-located with the optical/near-IR continuum emission.
Finally, we discuss the potential of the current bursts of star formation to transform the observed galaxy sizes and light profiles, showing that the $z\sim 0$ descendants of these SMGs are expected to have stellar masses, effective radii, and gas surface densities consistent with the most compact massive (${M}_{* }\,\sim $ 1–2 × 1011 ${M}_{\odot }$) early-type galaxies observed locally.
Status: Peer-reviewed. Publisher web site: https://doi.org/10.3847/1538-4357/833/1/103. Publisher statement: © 2016. The American Astronomical Society. All rights reserved.
|
I've been studying Ring-LWE based crytposystems such as the one in this paper, but I can't seem to find/come up with a proof of correctness for this particular scheme.
The encryption goes as follows:
given a message $m \in R_2$ and a secret key $s$ sampled from a distribution $\chi$, one samples $a$ uniformly from $R_q$ and $e \leftarrow \chi$; the ciphertext $c$ is the pair $c = (c_0, c_1) = (a, as+2e+m)$.
To decrypt:
given $c=(c_0,c_1)$, one computes $(c_1 - c_0 s) \bmod 2$
Can someone please help me with that? Thanks in advance.
|
Scientific posters present technical information and are intended for conferences or presentations to colleagues. Since LaTeX is the most natural choice for typesetting scientific documents, one should be able to create posters with it. This article explains how to create posters with LaTeX.
Contents
The two main options when it comes to writing scientific posters are tikzposter and beamerposter. Both offer simple commands to customize the poster and support large paper formats. Below, you can see a side-by-side comparison of the output generated by both packages (tikzposter on the left and beamerposter on the right).
Tikzposter is a document class that merges the projects fancytikzposter and tikzposter, and it is used to generate scientific posters in PDF format. It accomplishes this by means of the TikZ package, which allows a very flexible layout.
The preamble in a tikzposter class has the standard syntax.
\documentclass[24pt, a0paper, portrait]{tikzposter} \usepackage[utf8]{inputenc} \title{Tikz Poster Example} \author{ShareLaTeX Team} \date{\today} \institute{ShareLaTeX Institute} \usetheme{Board} \begin{document} \maketitle \end{document}
The first command, \documentclass[...]{tikzposter}, declares that this document is a tikzposter. The additional parameters inside the brackets set the font size, the paper size and the orientation, respectively. The available font sizes are 12pt, 14pt, 17pt, 20pt and 24pt; the possible paper sizes are a0paper, a1paper and a2paper. There are some additional options; see the further reading section for a link to the documentation.
The commands \title, \author, \date and \institute are used to set the author information; they are self-descriptive.
The command \usetheme{Board} sets the current theme, i.e. it changes the colours and the decoration around the text boxes. See the reference guide for screenshots of the available themes.

The command \maketitle prints the title on top of the poster.
The body of the poster is created by means of text blocks. Multi-column placement can be enabled and the width can be explicitly controlled for each column, this provides a lot of flexibility to customize the look of the final output.
\documentclass[25pt, a0paper, portrait]{tikzposter} \usepackage[utf8]{inputenc} \title{Tikz Poster Example} \author{ShareLaTeX Team} \date{\today} \institute{ShareLaTeX Institute} \usepackage{blindtext} \usepackage{comment} \usetheme{Board} \begin{document} \maketitle \block{~} { \blindtext } \begin{columns} \column{0.4} \block{More text}{Text and more text} \column{0.6} \block{Something else}{Here, \blindtext \vspace{4cm}} \note[ targetoffsetx=-9cm, targetoffsety=-6.5cm, width=0.5\linewidth ] {e-mail \texttt{sharelatex@sharelatex.com}} \end{columns} \begin{columns} \column{0.5} \block{A figure} { \begin{tikzfigure} \includegraphics[width=0.4\textwidth]{images/lion-logo.png} \end{tikzfigure} } \column{0.5} \block{Description of the figure}{\blindtext} \end{columns} \end{document}
In tikzposter the text is organized in blocks; each block is created by the command \block{}{}, which takes two parameters, each one inside a pair of braces. The first one is the title of the block and the second one is the actual text to be printed inside the block.
The environment columns enables multi-column text; the command \column{} starts a new column and takes as its parameter the relative width of the column: 1 means the whole text area, 0.5 means half the text area, and so on.
The command \note[]{} is used to add additional notes that are rendered overlapping the text block. Inside the brackets you can set some additional parameters to control the placement of the note; inside the braces, type the text of the note.
The standard LaTeX commands to insert figures don't work in tikzposter; the environment tikzfigure must be used instead.
The package beamerposter enhances the capabilities of the standard beamer class, making it possible to create scientific posters with the same syntax as a beamer presentation. So far there are not many themes for this package, and it is slightly less flexible than tikzposter, but if you are already familiar with beamer, using beamerposter doesn't require learning new commands.
Note: In this article a special beamerposter theme is used. The theme "Sharelatex" is based on the theme "Dreuw" created by Philippe Dreuw and Thomas Deselaers, but it was modified to make it easier to insert the logo and print the e-mail address at the bottom of the poster; those are hard-coded in the original themes.
Even though this article explains how to typeset a poster in LaTeX, the easiest way is to use a template as a starting point. We provide several on the ShareLaTeX templates page.
The preamble of a beamerposter is basically that of a beamer presentation, except for one additional command.
\documentclass{beamer} \usepackage[english]{babel} \usepackage[utf8]{inputenc} \usepackage{times} \usepackage{amsmath,amsthm, amssymb, latexsym} \boldmath \usetheme{Sharelatex} \usepackage[orientation=portrait,size=a0,scale=1.4,debug]{beamerposter} \title[Beamer Poster]{ShareLaTeX example of the beamerposter class} \author[sharelatexteam@sharelate.com]{ShareLaTeX Team} \institute[Sharelatex University]{The ShareLaTeX institute, Learn faculty} \date{\today} \logo{\includegraphics[height=7.5cm]{SharelatexLogo}}
The first command in this file is \documentclass{beamer}, which declares that this is a beamer presentation. The theme "Sharelatex" is set by \usetheme{Sharelatex}. There are several beamerposter themes on the web; most of them can be found on the web page of the beamerposter authors.
The command \usepackage[orientation=portrait,size=a0,scale=1.4,debug]{beamerposter} imports the beamerposter package with some special parameters: the orientation is set to portrait, the poster size is set to a0, and the fonts are scaled by 1.4. The available poster sizes are a0, a1, a2, a3 and a4, but the dimensions can also be set arbitrarily with the options width=x,height=y.
The rest of the commands set the standard information for the poster: title, author, institute, date and logo. The command
\logo{} won't work in most of the themes, and has to be set by hand in the theme's .sty file. Hopefully this will change in the future.
Since the document class is beamer, to create the poster all the contents must be typed inside a frame environment.
\documentclass{beamer} \usepackage[english]{babel} \usepackage[utf8]{inputenc} \usepackage{times} \usepackage{amsmath,amsthm, amssymb, latexsym} \boldmath \usetheme{Sharelatex} \usepackage[orientation=portrait,size=a0,scale=1.4]{beamerposter} \title[Beamer Poster]{ShareLaTeX example of the beamerposter class} \author[sharelatexteam@sharelate.com]{ShareLaTeX Team} \institute[Sharelatex University] {The ShareLaTeX institute, Learn faculty} \date{\today} \logo{\includegraphics[height=7.5cm]{SharelatexLogo}} \begin{document} \begin{frame}{} \vfill \begin{block}{\large Fontsizes} \centering {\tiny tiny}\par {\scriptsize scriptsize}\par {\footnotesize footnotesize}\par {\normalsize normalsize}\par ... \end{block} \vfill \begin{columns}[t] \begin{column}{.30\linewidth} \begin{block}{Introduction} \begin{itemize} \item some items \item some items ... \end{itemize} \end{block} \end{column} \begin{column}{.48\linewidth} \begin{block}{Introduction} \begin{itemize} \item some items and $\alpha=\gamma, \sum_{i}$ ... \end{itemize} $$\alpha=\gamma, \sum_{i}$$ \end{block} ... \end{column} \end{columns} \end{frame} \end{document}
Most of the content in the poster is created inside a block environment; this environment takes as its parameter the title of the block.
The environment columns enables multi-column text; the environment column starts a new column and takes as its parameter the width of said column. All LaTeX units can be used here; in the example the column width is set relative to the text width.
Tikzposter themes
Default Rays Basic Simple Envelope Wave Board Autumn Desert
For more information see
|
This is a further development of @ais523's answer, reducing it to only two sets of brackets, and also using a more compact cell placement based on Golomb ruler theory. ais523 has made a compiler for this construction, as well as this TIO session showing a sample resulting BF program running with debug tracing of the TWM counters.
Like the original, this starts with a program in The Waterfall Model, with some restrictions that don't lose generality:
All counters have the same self-reset value $R$; that is, the TWM trigger map $f$ has the property that $f(x,x)=R$ for all $x$. There is a single halting counter $h$. The number $c$ of counters is $(p-1)/2$ for some prime number $p$.

Golomb ruler
We combine the Erdős–Turán construction with the permutation function of a Welch–Costas array in order to get a Golomb ruler with the necessary properties.
(I'm sure this combined construction cannot be a new idea but we just found and fit together these two pieces from Wikipedia.)
Let $r$ be a primitive root of $p=2c+1$. Define the function
$$g(k)=4ck - ((r^k-1)\bmod(2c+1)), k=0,\ldots,2c-1.$$
$g$ is a Golomb ruler of order $2c$. That is, the difference $g(i)-g(j)$ is unique for every pair of distinct numbers $i,j \in \{0,\ldots,2c-1\}$. $g(k)\bmod(2c)$ takes on every value $0,\ldots,2c-1$ exactly once.

Tape structure
For each TWM counter $x\in \{0,\ldots,c-1\}$, we assign two BF tape cell positions, a fallback cell $u(x)$ and a value cell $v(x)$:
$$u(x)=g(k_1)<v(x)=g(k_2)\mbox{ with }u(x)\equiv v(x)\equiv x\pmod c$$
By the second property of $g$ there are exactly two distinct $k_1,k_2$ values to choose from.
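As a sanity check, both properties of $g$, and the resulting cell assignment, can be verified numerically; the small case $c=5$, $p=11$, $r=2$ below is my own choice, purely for illustration:

```python
# Small-case check of the construction (c = 5, p = 11, r = 2 is a primitive root mod 11).
c = 5
p, r = 2 * c + 1, 2
g = [4 * c * k - ((pow(r, k, p) - 1) % p) for k in range(2 * c)]

# Golomb ruler property: all pairwise differences g(i) - g(j), i != j, are distinct.
diffs = [g[i] - g[j] for i in range(2 * c) for j in range(2 * c) if i != j]
assert len(diffs) == len(set(diffs))

# Second property: g(k) mod 2c takes each value 0..2c-1 exactly once.
assert sorted(x % (2 * c) for x in g) == list(range(2 * c))

# Hence each residue class mod c contains exactly two ruler marks,
# which give the fallback cell u(x) and value cell v(x) for counter x.
cells = {x: sorted(pos for pos in g if pos % c == x) for x in range(c)}
u = {x: cells[x][0] for x in range(c)}  # fallback cell
v = {x: cells[x][1] for x in range(c)}  # value cell
assert all(u[x] < v[x] for x in range(c))
```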
A fallback cell's content will most of the time be kept at $0$, except when its counter has just been visited, when it will be at $2R$, twice the counter self-reset value. A value cell will be kept at twice the value of the corresponding TWM counter.
All other cells that can be reached by the BF program execution (a finite number) will be kept at odd values, so that they always test as nonzero. After initialization this is automatic because all cell adjustments are by even amounts.
If desired, all cell positions can be shifted rightwards by a constant in order to avoid moving to the left of the initial BF tape position.
BF program structure
Let $H = v(h)-u(h)$ be the distance between the halting counter's value and fallback cells, and let $N$ be a number large enough that $cN+1 \geq v((x+1)\bmod c) - u(x)$ for all counters $x$. Then the basic BF program structure is
initialization
[
>$\times (H+cN+1)$
[
<$\times c$
]
adjustments
<$\times H$
]
Initialization
The initialization phase sets all cells reachable by the program to their initial values, in a state as if the last counter had just been visited and the active cell was its fallback cell $u(c-1)$: value cells are initialized to twice the initial content of the corresponding TWM counter, except that counter $0$ is pre-decremented; fallback cells are set to $0$, except cell $u(c-1)$, which is set to $2R$; all other cells reachable by the program (a finite number) are set to $1$.
Then the tape pointer is moved to position $u(c-1)-H$ (an always non-zero cell) before we reach the program's first [.
Beginning of outer loop
At the beginning of an iteration of the outer loop, the tape pointer will be at either $u(x)-H$ or $v(x)-H$ for a counter $x$.
Let $y=((x+1)\bmod c)$ be the next counter to visit.
The movement >$\times (H+cN+1)$ places the tape pointer on a position that is $\equiv y\pmod c$ and not to the left of $v(y)$.
The inner loop [<$\times c$] now searches leftwards in steps of $c$ for a zero cell. If counter $y$ is zero, then it will stop at the (zero) value cell $v(y)$; otherwise it will find the fallback cell $u(y)$.
Whichever cell is found becomes the new active cell.

Adjustments
The adjustment phase adjusts various cells on the tape based on their position relative to the active cell. This section contains only +->< commands, so these adjustments happen unconditionally. However, because all counter-related cells are in a Golomb ruler pattern, any adjustments that are not proper for the current active cell will miss all the important cells and adjust some irrelevant cell instead (while keeping it odd).
Separate code must thus be included in the program for each possible required pair of active and adjusted cell, except for an active cell's self-adjustment, which, because adjustment is based solely on relative position, must be shared between all of them.
The required adjustments are:
Adjust the previous counter's fallback cell $u(x)$ by $-2R$. Adjust the current counter's fallback cell $u(y)$ by $2R$, except if the current active cell is $v(h)$ and so we should halt. Adjust the next counter's value cell $v((y+1)\bmod c)$ by $-2$ (decrementing the counter). When the active cell is a value cell $v(y)$ (so the counter $y$ has reached zero), adjust all value cells $v(z)$ by $2f(y,z)$ from the TWM trigger map. $v(y)$ itself becomes adjusted by $2R$.
The first and second adjustments above are made necessary by the fact that all active cells must adjust themselves by the same value, which is $2R$ for value cells, and thus also for fallback cells. This requires preparing and cleaning up the fallback cells to ensure they get back to $0$ in both the value and fallback branches.

End of outer loop
The movement <$\times H$ represents that at the end of the adjustment phase, the tape pointer is moved $H$ places to the left of the active cell.
For all active cells other than the halting counter's value cell $v(h)$, this is an irrelevant cell, and so odd and non-zero, and the outer loop continues for another iteration.
For $v(h)$, the pointer is instead placed on its corresponding fallback cell $u(h)$, for which we have made an exception above to keep it zero, and so the program exits through the final ] and halts.
|
I am confused about whether the Lorentz transform is a tensor or not, since it is a linear transform. If yes, how can I verify that?
When we say something is a tensor, we're saying that it's a linear operator whose components have certain transformation properties under a change of coordinates. The underlying assumption is that this is a quantity that is a function of the state of a system. You can walk up to the system, choose your coordinate system, and measure a particular component of the tensor in that coordinate system.
The Lorentz transformation isn't a function of the state of the system, so it's not meaningful to talk about measuring its components. The Lorentz transformation is independent of the state of the system but does depend on other data: the data defining the two local frames of reference that this Lorentz transformation operates between.
So the general idea is that we have (A) tensor observables, and (B) transformations that convert the components of a tensor from one coordinate system to another. The Lorentz transformation is in category B, not A.
Another way to see this is that it never makes sense to write the Lorentz transformation in abstract index form. When we apply a Lorentz transformation between coordinates $x^\mu$ and coordinates $x^{\mu'}$ in concrete index notation, we have $T^\mu=\Lambda^\mu{}_{\mu'}T^{\mu'}$. If you try to write that relationship in abstract index notation, you can't, because the $\mu$ and $\mu'$ are there to explicitly name the two coordinate systems.
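A small numeric sketch (plain Python, units with $c=1$; the boost velocity and the vector are arbitrary choices of mine) illustrates the "category B" role: the boost changes the components of a vector between frames, while the frame-independent scalar formed from them is unchanged.

```python
import math

# A Lorentz boost along x, acting on components (t, x, y, z) with c = 1.
beta = 0.6
gamma = 1.0 / math.sqrt(1.0 - beta**2)

L = [[gamma, -gamma * beta, 0, 0],
     [-gamma * beta, gamma, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]

def apply(M, v):
    # matrix acting on the component column of a 4-vector
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def minkowski(u, v):
    # Minkowski inner product, signature (+, -, -, -)
    return u[0] * v[0] - u[1] * v[1] - u[2] * v[2] - u[3] * v[3]

T = [2.0, 1.0, 0.5, 0.0]
Tp = apply(L, T)
assert Tp != T                                            # the components change
assert abs(minkowski(Tp, Tp) - minkowski(T, T)) < 1e-12   # the scalar does not
```

The boost is data about the pair of frames, not about the system: nothing in the code measures a property of $T$ itself when it builds $\Lambda$.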
I know what you're thinking:
A Lorentz matrix $\Lambda^{\mu}{}_{\nu}$ looks pretty much like a (1,1) tensor $T^{\mu}{}_{\nu}$, right? No, wrong. Perhaps it is easiest to explain with an analogy: If you are familiar with category theory and know that a category consists of objects and arrows, then a tensor is like a category, a reference frame is like an object, while a Lorentz transformation is like an arrow, which transforms objects. In other words, you're comparing apples and oranges.
A tensor can be defined and written down in any given reference frame. The Lorentz transformation is a statement about how things change when you switch from one frame to another.
Coordinate transforms from first principles
A coordinate transformation maps one tensor-space $\mathcal T_1$ to another tensor-space $\mathcal T_2$. All $[m, n]$-tensors in the one space get mapped to $[m, n]$-tensors in the other one, by different (but related) linear transforms, such that all inner products that result in scalars produce the same scalar. For example, given a covector $u$ (a $[0,1]$-tensor, a linear function mapping vectors to scalars) and a vector $v$ (a $[1, 0]$-tensor) we have that the coordinate transformations on each of these spaces $C_{1,0}[\bullet]$ and $C_{0,1}[\bullet]$ are related by the expression that,$$C_{0, 1}[u](C_{1,0}[v]) = u(v),$$ since the last is a scalar.
If we insert a bunch of basis vectors $\hat e_i$ into this picture we can describe any $v$ as being a sum $v = \sum_i v^i \hat e_i$ and we can invent basis covectors $\underline e^j$ such that $$\underline e^j(\hat e_i) = \{1 \text{ if } i = j \text{ else } 0\}.$$We can then characterize a linear coordinate transform by what it does to these basis vectors and covectors, $C_{1,0}[v] = \sum_i v^i~C_{1,0}[\hat e_i]$. Note that the components do not really "transform" here, what transforms is the basis vectors.
Where we get a matrix is when we have some other basis for the second space—who says we have to use the “natural” basis induced by $C$ that $\hat a_i = C_{1,0}[\hat e_i]$? So if we have this other basis then we have that $$C_{1,0}[\hat e_i] = \sum_{k'} C^{~k'}_i~\hat a_{k'},$$ where we use primes to mentally keep the two different spaces separate in our heads. Now we find that we can interpret the coefficients as changing via $$C_{1,0}[v] = \sum_{k'} v^{k'}\hat a_{k'},\\~~~\text{ where }~~~v^{k'} = \sum_i C_i^{~k'}~v^i.$$

Where it’s easy to get tripped up
Now within the first space $\mathcal T_1$ itself, there is a $[1,1]$-tensor which can be formed by those coefficients, simply by saying "well, those primed indices are just numbers and the transform matrix is just a bunch of numbers, so I will write some tensor $$c(v) = \sum_{i,k'} \hat e_{k'}~ C_i^{~k'}~\underline e^i(v).$$" This is a very general point so I am going to make it very loudly: in general, for anything we say is “not a tensor”—Lorentz transforms, Christoffel symbols—it is possible to take their components and assemble a tensor! But what we mean is that it is illusorily a tensor: it looks like a tensor but it does not interact the way you would expect a tensor to interact. For example, when you are dealing with curved spaces and you start to talk about Christoffel symbols, you can assemble a tensor from the components that the Christoffel symbol gives you. But that tensor will not give the Christoffel symbol after a coordinate transformation!
There is something similar happening here. This tensor is a composition of part of your coordinate transform $C_{1,0}$ going from $\mathcal T_1 \to \mathcal T_2$ with some other coordinate transform $D: \mathcal T_2\to \mathcal T_1$ that maps $\hat a_{k'} \to \hat e_{k'}$. That is why it has this weird mixup of primed and unprimed indices that is $\hat e_{k'}$, because you have "collapsed" both spaces together with this second coordinate transform that transforms the $\mathcal T_2$ basis vectors into the $\mathcal T_1$ basis vectors.
The illusion is revealed by the following failure-to-interact-the-way-you-expect: in fact we said that $C$ is not just $C_{1,0}$ but is some family of functions $C_{m,n}$ mapping $[m,n]$-tensors in $\mathcal T_1$ to $[m, n]$-tensors in $\mathcal T_2$. By deriving that there is this one $c(v)$ tensor for $C_{1,0}$ you might have expected that this mapping of vectors is all that there is, but it is not: actually even under this second coordinate transform $D$ you have to model this coordinate transform $C$ as a family of tensors, a different one from each $[m, n]$-tensor space onto itself. Each one of these is its own $[m+n,m+n]$-tensor, and they are certainly related to this $[1, 1]$-tensor $c(v)$, but even for constant $m+n$ they are not the same. So there is a different $[1, 1]$-tensor mapping covectors to covectors, it is not $\tilde c(u) = v \mapsto u\big(c(v)\big)$ the way such a tensor would normally transform a covector.
In fact the resulting math is no more interesting than just keeping the two spaces separate in the first place, and on that basis we do what's lazy and say “it's not a tensor”: yes, you can imagine it as studying a family of tensors within the space, but that just makes your life much harder than saying “the un-primed indices belong to space one and the primed indices belong to space two.”
|
The Strauss conjecture on negatively curved backgrounds
1.
Department of Mathematics, Johns Hopkins University, Baltimore, MD 21218, USA
2.
School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
This paper is devoted to several small data existence results for semi-linear wave equations on negatively curved Riemannian manifolds. We provide a simple and geometric proof of small data global existence for any power $ p\in (1, 1+\frac{4}{n-1}] $ for the shifted wave equation on hyperbolic space $ {\mathbb{H}}^n $ involving nonlinearities of the form $ \pm |u|^p $ or $ \pm|u|^{p-1}u $. It is based on the weighted Strichartz estimates of Georgiev-Lindblad-Sogge [
Keywords: Wave equations, curvature, Strauss conjecture, Strichartz estimates, weighted Strichartz estimates, Bessel potentials. Mathematics Subject Classification: 35L71, 35L05, 58J45, 35B33, 35B45, 35R01, 58C40. Citation: Yannick Sire, Christopher D. Sogge, Chengbo Wang. The Strauss conjecture on negatively curved backgrounds. Discrete & Continuous Dynamical Systems - A, 2019, 39 (12): 7081-7099. doi: 10.3934/dcds.2019296
References:
[1] [2] [3] [4] [5] [6] [7]
I. Chavel,
[8] [9]
V. Georgiev, H. Lindblad and C. D. Sogge,
Weighted Strichartz estimates and global existence for semilinear wave equations,
[10] [11] [12] [13]
E. Hebey,
[14] [15] [16] [17] [18] [19]
R. R. Mazzeo and R. B. Melrose,
Meromorphic extension of the resolvent on complete spaces with asymptotically constant negative curvature,
[20] [21] [22] [23] [24]
A. G. Setti,
A lower bound for the spectrum of the Laplacian in terms of sectional and Ricci curvature,
[25] [26]
C. D. Sogge,
[27] [28]
R. S. Strichartz,
Restrictions of Fourier transforms to quadratic surfaces and decay of solutions of wave equations,
[29]
D. Tataru, Strichartz estimates in the hyperbolic space and global existence for the semilinear wave equation,
[30]
M. Taylor,
[31] [32]
C. Wang and X. Yu, Recent works on the Strauss conjecture, In
[33] [34] [35]
|
Measurement of azimuthal asymmetries in inclusive charged dipion production in $e^+e^-$ annihilations at $\sqrt{s}$ = 3.65 GeV
(APS Physics, 2016)
We present a measurement of the azimuthal asymmetries of two charged pions in the inclusive process e(+) e(-) -> pi pi X based on a data set of 62 pb(-1) at the center-of-mass energy of 3.65 GeV collected with the BESIII ...
Study of $D^+ \to K^-\pi^+e^+\nu_e$
(APS Physics, 2016)
Observation of $h_c$ radiative decay $h_c \to \gamma\eta'$ and evidence for $h_c \to \gamma\eta$
(2016)
A search for radiative decays of the P-wave spin singlet charmonium resonance $h_c$ is performed based on $4.48 \times 10^{8}$ $\psi(3686)$ events collected with the BESIII detector operating at the BEPCII storage ring. Events of the reaction ...
Measurement of e(+)e(-) -> pi(0)pi(0)psi(3686) at root s from 4.009 to 4.600 GeV and observation of a neutral charmoniumlike structure
(Amer Physical Soc., 2018)
Using $e^+e^-$ collision data collected with the BESIII detector at the BEPCII collider corresponding to an integrated luminosity of 5.2 fb(-1) at center-of-mass energies (root s) from 4.009 to 4.600 GeV, the process e(+)e(-) ...
Determination of the number of psi(3686) events at BESIII
(Science Press, 2018)
The numbers of psi(3686) events accumulated by the BESIII detector for the data taken during 2009 and 2012 are determined to be (107.0 +/- 0.8)x10(6) and (341.1 +/- 2.1)x10(6), respectively, by counting inclusive hadronic ...
Observation of e(+)e(-) -> phi chi(c1) and phi chi(c2) at root s=4.600 GeV
(Amer Physical Soc., 2018)
Using a data sample collected with the BESIII detector operating at the BEPCII storage ring at a center-of-mass energy of root s = 4.600 GeV, we search for the production of e(+)e(-) -> phi chi(c0,1,2). A search is also ...
Measurement of the $e^+e^- \to \pi^+\pi^-$ cross section between 600 and 900 MeV using initial state radiation
(Elsevier, 2016)
We extract the $e^+e^- \to \pi^+\pi^-$ cross section in the energy range between 600 and 900 MeV, exploiting the method of initial state radiation. A data set with an integrated luminosity of 2.93 fb(-1) taken at a center-of-mass energy ...
Observation of $e^+e^- \to \eta' J/\psi$ at center-of-mass energies between 4.189 and 4.600 GeV
(APS Physics, 2016)
The process $e^{+}e^{-}\to \eta^{\prime} J/\psi$ is observed for the first time with a statistical significance of $8.6\sigma$ at center-of-mass energy $\sqrt{s} = 4.226$ GeV and $7.3\sigma$ at $\sqrt{s} = 4.258$ GeV using ...
Precision Measurement of the $e^+e^- \to \Lambda_c^+\bar\Lambda_c^-$ Cross Section Near Threshold
(Amer Physical Soc., 2018)
The cross section of the $e^+e^- \to \Lambda_c^+\bar\Lambda_c^-$ process is measured with unprecedented precision using data collected with the BESIII detector at root s = 4574.5, 4580.0, 4590.0 and 4599.5 MeV. ...
|
The fractional Schrödinger equation with singular potential and measure data
1.
Instituto de Matemática Interdisciplinar, Universidad Complutense de Madrid, Plaza de Ciencias 3, 28040 Madrid, Spain
2.
Departamento de Matemáticas, Universidad Autónoma de Madrid, Calle Francisco Tomás y Valiente 7, 28049 Madrid, Spain
We consider the steady fractional Schrödinger equation $ L u + V u = f $ posed on a bounded domain $ \Omega $; $ L $ is an integro-differential operator, like the usual versions of the fractional Laplacian $ (-\Delta)^s $; $ V\ge 0 $ is a potential with possible singularities, and the right-hand side data are integrable functions or Radon measures. We reformulate the problem via the Green function of $ (-\Delta)^s $ and prove well-posedness for functions as data. If $ V $ is bounded or mildly singular, a unique solution of $ (-\Delta)^s u + V u = \mu $ exists for every Borel measure $ \mu $. On the other hand, when $ V $ is allowed to be more singular, but only on a finite set of points, a solution of $ (-\Delta)^s u + V u = \delta_x $, where $ \delta_x $ is the Dirac measure at $ x $, exists if and only if $ h(y) = V(y) |x - y|^{-(n+2s)} $ is integrable on some small ball around $ x $. We prove that the set $ Z = \{x \in \Omega : \text{no solution of } (-\Delta)^s u + Vu = \delta_x \text{ exists}\} $ is relevant in the following sense: a solution of $ (-\Delta)^s u + V u = \mu $ exists if and only if $ |\mu| (Z) = 0 $. Furthermore, $ Z $ is the set of points where the strong maximum principle fails, in the sense that for any bounded $ f $ the solution of $ (-\Delta)^s u + Vu = f $ vanishes on $ Z $.
Keywords: Nonlocal elliptic equations, bounded domains, Schrödinger operators, singular potentials, measure data. Mathematics Subject Classification: 35R11, 35J10, 35D30, 35J67, 35J75. Citation: David Gómez-Castro, Juan Luis Vázquez. The fractional Schrödinger equation with singular potential and measure data. Discrete & Continuous Dynamical Systems - A, 2019, 39 (12): 7113-7139. doi: 10.3934/dcds.2019298
show all references
References:
[1] [2] [3]
M. Bonforte, A. Figalli and J. Vázquez, Sharp boundary behaviour of solutions to semilinear nonlocal elliptic equations,
[4]
M. Bonforte, Y. Sire and J. L. Vázquez,
Existence, uniqueness and asymptotic behaviour for fractional porous medium equations on bounded domains,
[5]
M. Bonforte and J. L. Vázquez,
Fractional nonlinear degenerate diffusion equations on bounded domains part I. Existence, uniqueness and upper bounds,
[6]
H. Brezis, M. Marcus and A. C. Ponce,
A new concept of reduced measure for nonlinear elliptic equations,
[7]
H. Brezis, M. Marcus and A. C. Ponce, Nonlinear elliptic equations with measures revisited, in
[8]
C. Bucur and E. Valdinoci,
[9]
X. Cabré and Y. Sire,
Nonlinear equations for fractional Laplacians, I: Regularity, maximum principles, and Hamiltonian estimates,
[10] [11]
L. A. Caffarelli and P. R. Stinga,
Fractional elliptic equations, Caccioppoli estimates and regularity,
[12] [13]
M. Cozzi,
Interior regularity of solutions of non-local equations in Sobolev and Nikol'skii spaces,
[14]
E. Di Nezza, G. Palatucci and E. Valdinoci,
Hitchhiker's guide to the fractional Sobolev spaces,
[15]
J. I. Díaz, D. Gómez-Castro and J. Vázquez,
The fractional Schrödinger equation with general nonnegative potentials. The weighted space approach,
[16]
J. I. Díaz, D. Gómez-Castro, J.-M. Rakotoson and R. Temam,
Linear diffusion with singular absorption potential and/or unbounded convective flow: The weighted space approach,
[17] [18] [19]
D. Gilbarg and N. S. Trudinger,
[20]
G. Grubb,
Fractional Laplacians on domains, a development of Hörmander's theory of $\mu$-transmission pseudodifferential operators,
[21]
K.-Y. Kim and P. Kim,
Two-sided estimates for the transition densities of symmetric Markov processes dominated by stable-like processes in $C^{1, \eta}$ open sets,
[22] [23]
L. Orsina and A. C. Ponce, On the nonexistence of Green's function and failure of the strong maximum principle,
[24]
A. C. Ponce,
[25]
A. C. Ponce and N. Wilmet,
Schrödinger operators involving singular potentials and measure data,
[26] [27] [28]
X. Ros-Oton and J. Serra,
The Dirichlet problem for the fractional Laplacian: Regularity up to the boundary,
[29]
H. Triebel,
[30]
J. L. Vázquez,
On a Semilinear Equation in $\mathbb R^2$ Involving Bounded Measures,
[1]
Woocheol Choi, Yong-Cheol Kim.
The Malgrange-Ehrenpreis theorem for nonlocal Schrödinger operators with certain potentials.
[2]
Woocheol Choi, Yong-Cheol Kim.
$L^p$ mapping properties for nonlocal Schrödinger operators with certain potentials.
[3]
Marco Degiovanni, Michele Scaglia.
A
variational approach to semilinear elliptic equations with
measure data.
[4]
Jussi Behrndt, A. F. M. ter Elst.
The Dirichlet-to-Neumann map for Schrödinger operators with complex potentials.
[5]
Xinlin Cao, Yi-Hsuan Lin, Hongyu Liu.
Simultaneously recovering potentials and embedded obstacles for anisotropic fractional Schrödinger operators.
[6] [7]
Mouhamed Moustapha Fall, Veronica Felli.
Unique continuation properties for relativistic Schrödinger operators with a singular potential.
[8]
Verena Bögelein, Frank Duzaar, Ugo Gianazza.
Very weak solutions of singular porous medium equations with measure data.
[9] [10]
Veronica Felli, Elsa M. Marchini, Susanna Terracini.
On the behavior of solutions to Schrödinger equations with dipole type potentials near the singularity.
[11]
Rémi Carles, Christof Sparber.
Semiclassical wave packet dynamics in
Schrödinger equations with periodic potentials.
[12]
Xing Cheng, Ze Li, Lifeng Zhao.
Scattering of solutions to the nonlinear Schrödinger equations with regular potentials.
[13]
Yongsheng Jiang, Huan-Song Zhou.
A sharp decay estimate for nonlinear Schrödinger equations with vanishing potentials.
[14] [15]
Zaihui Gan, Boling Guo, Jian Zhang.
Blowup and global existence of the nonlinear Schrödinger equations with multiple potentials.
[16]
Liang Zhang, X. H. Tang, Yi Chen.
Infinitely many solutions for a class of perturbed elliptic equations with nonlocal operators.
[17]
Haruya Mizutani.
Strichartz estimates for Schrödinger equations with variable coefficients and unbounded potentials II. Superquadratic potentials.
[18]
Lassaad Aloui, Moez Khenissi.
Boundary stabilization of the wave and Schrödinger equations in exterior domains.
[19]
Giuseppe Maria Coclite, Mario Michele Coclite.
On a Dirichlet problem in bounded domains with singular nonlinearity.
[20]
Mingqi Xiang, Patrizia Pucci, Marco Squassina, Binlin Zhang.
Nonlocal Schrödinger-Kirchhoff equations with external magnetic field.
2018 Impact Factor: 1.143
Tools Metrics Other articles
by authors
[Back to Top]
|
Let $G$ be a graph of diameter 2 ($\forall u,v\in V: d(u,v)\leq2$).
Can we decide if $G$ has Hamiltonian path in poly time? What about digraphs?
Perhaps some motivation is in order:
the question arises from Dirac's theorem, which states that if $\forall v\in V:d(v)\geq \frac{n}{2}$ then the graph is Hamiltonian, as well as its generalizations (the Ghouila-Houri theorem and the result from Bang-Jensen and Gutin's book).
I've shown here that these degree requirements imply that the graph has diameter 2, and was wondering if such graphs can be decided without the degree requirements (strong gut feeling: no).
|
I intended to use Mathematica to perform the following numerical integration arising in the context of cosmology. I wish to evaluate the function $\sigma(R)$ at some specific points and plot those points.
$$\sigma(R) = \frac{1}{2\pi^2R^6}\int_{0}^{\infty}\frac{dk}{k^4}P(k) (-3kR\cos(kR)+3\sin(kR))^2$$
$$P(k)=2\pi^2\delta_H^2\frac{k}{H_0^4}T(k)^2$$
The function $T(k)$ is defined via
$$T(x) = \frac{\log[1+0.171x]}{0.171x}[1+0.284x+(1.18x)^2+(0.399x)^3+(0.490x)^4]^{-0.25}$$
$$x = \frac{k}{k_{eq}}$$
where $\delta_H,H_0,k_{eq}$ are some constant cosmological parameters
Apparently, my naive
Mathematica code led to the error message,
NIntegrate::inumr: The integrand has evaluated to non numerical values for all sampling points in the region with boundaries {{∞, 0.}}
Might I ask for some help in correcting my code?
H0 := 0.5*100/(3*10^5) (* Hubble rate in units of Mpc^-1; h = 0.5 is used *)
keq := 0.0731*0.5^2
T[k_] := (Log[1 + 0.171 (k/keq)]/(0.171*(k/keq)))*(1 + 0.284 (k/keq) + (1.18*(k/keq))^2 + (0.399*k/keq)^3 + (0.490 k/keq)^4)^(-0.25)
P[k_] := 2*Pi^2*DeltaH^2*(k*H0^(-4))*T[k]^2
f[R_] := (1/(2*Pi^2*R^6))*NIntegrate[(P[k]/k^4)*(3*(-k*R*Cos[k*R] + Sin[k*R]))^2, {k, 0, Infinity}]
I attach a screen capture detailing my code, definition of the variables and the error message.
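One immediate problem in the posted code is that DeltaH is never assigned a numerical value, so P[k] stays symbolic and NIntegrate sees a non-numerical integrand. As a cross-check of the integral itself, here is a pure-Python sketch using Simpson's rule on a finite cutoff; delta_H is a placeholder set to 1 (it only rescales $\sigma$ by $\delta_H^2$), and the cutoff and grid size are my own choices, not part of the question:

```python
import math

# Placeholder cosmological parameters: delta_H is NOT given in the question,
# so it is set to 1 here; h = 0.5 as in the post.
delta_H = 1.0
H0 = 0.5 * 100 / (3e5)      # Hubble rate in Mpc^-1
keq = 0.0731 * 0.5 ** 2     # matter-radiation equality scale in Mpc^-1

def T(k):
    x = k / keq
    return (math.log(1 + 0.171 * x) / (0.171 * x)) * (
        1 + 0.284 * x + (1.18 * x) ** 2 + (0.399 * x) ** 3 + (0.490 * x) ** 4
    ) ** -0.25

def P(k):
    return 2 * math.pi ** 2 * delta_H ** 2 * (k / H0 ** 4) * T(k) ** 2

def integrand(k, R):
    w = -3 * k * R * math.cos(k * R) + 3 * math.sin(k * R)
    return P(k) / k ** 4 * w ** 2

def sigma(R, kmax=100.0, n=100001):
    # Composite Simpson rule on [eps, kmax]: the integrand vanishes like k^3
    # as k -> 0 and decays like log(k)^2 / k^5 at large k, so replacing the
    # infinite range by a finite cutoff is safe for a rough check.
    eps = 1e-8
    h = (kmax - eps) / (n - 1)
    total = integrand(eps, R) + integrand(kmax, R)
    for i in range(1, n - 1):
        total += (4 if i % 2 else 2) * integrand(eps + i * h, R)
    return total * h / 3 / (2 * math.pi ** 2 * R ** 6)

print(sigma(8.0))  # finite and positive; the scale is set by the placeholder delta_H
```

In Mathematica, assigning DeltaH a numerical value before calling f[R] should make the inumr message go away.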
|
School, from our house as the crow flies, is 5.73 km. If we neglect air resistance and deal strictly with ballistic flight then we can materialize a wonderful fantasy. Starting in the backyard, extending over the top of the house, is a
launch-o-rocket, a rail-like launcher that accelerates the school-bound student until he or she can cruise over the city and arrive without bother of traffic. Our charter is to find the acceleration of the student from the launch-o-rocket. Finding the Initial Velocity
We rely on the well-known fact that the maximum range of a ballistic throw occurs when the departure angle is 45°, at which the vertical and horizontal components of the launch velocity are equal. We denote these two identical speeds as $s$. Since distance is speed multiplied by time, the distance from home to school $d$ is
$$ d = t\cdot s.$$
We know the distance $d = $ 5.73 km.
Turning to the vertical speed, the student departs the launch-o-rocket with vertical speed $s$, but is immediately subject to gravitational acceleration. The student's upward flight is exactly mirrored by his or her downward flight, so the student spends $t/2$ rising and $t/2$ descending. Since the student has no vertical speed at the top, we know that his or her launch speed is
$$ s = g\frac{t}{2},$$
where $g$ is the gravitational acceleration, 9.8 m/s$^2$.
Now, we have a system of equations
$$ d = t\cdot s $$
$$ s = g\frac{t}{2}.$$ The system looks like it has many variables, but really there are only two, $s$ and $t$. We know $g$ and $d$. To solve the system we substitute the second equation for $s$ in the first to get
$$ d = tg\frac{t}{2} = g\frac{t^2}{2}$$
Solve for $t$ $$ t = \sqrt{\frac{2d}{g}} = \sqrt{\frac{2\cdot 5730\,\text{m}}{9.8\,\text{m/s}^2}} \approx 34.2\,\text{s}. $$ Not a bad commute, a little over half a minute.
With $t$ in hand, we can find the magnitude of the initial velocity. Remember that the initial velocity is $s$ in the horizontal direction and $s$ in the vertical direction, so the speed when leaving the launcher is
$$ \left| \mathbf{v}_0\right| = \sqrt{s^2 + s^2} = \sqrt{2s^2} = s\sqrt{2}. $$ The initial speed the student must attain is given by the very first equation, $d = s\cdot t$. Solving for $s$ with the value of $t$ we found, we get
$$ s = \frac{5730\,\text{m}}{34.2\,\text{s}} \approx 168\,\text{m/s}. $$
Finding the Acceleration
The ramp lives on a footprint that is about 80 ft, or 24.4 m. It is also 24.4 m tall, so special zoning is surely required! The rail of the launch-o-rocket is the hypotenuse of a right triangle whose legs are each 24.4 m, giving a total rail length of $\sqrt{2}\cdot 24.4\,\text{m} = 34.5\,\text{m}$.
The formula for position after a period of acceleration is
$$ p = \frac{1}{2}a\tau^2.$$ For our system, we also know that the acceleration is the change in speed divided by the change in time. Our speed goes from zero to 168 m/s in $\tau$. Again, we have a system of equations,
$$ 34.5\, \text{m} = \frac{1}{2}a\tau^2 $$
$$ a = \frac{168\,\text{m/s}}{\tau}.$$
Solve for $a$ by first solving the second equation for $\tau$, and then substituting that result into the first equation to get
$$ 34.5\, \text{m} = \frac{1}{2}a\left(\frac{168\,\text{m/s}}{a}\right)^2 $$
$$ a = \frac{\left( 168\, \text{m/s}\right)^2}{2 \cdot 34.5\,\text{m}}
= 407\, \text{m/s}^2 = 41.5\, g. $$
The typical onset of death occurs when acceleration exceeds about $10g$, so unfortunately,
the launch-o-rocket is a single try system.
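As a sanity check, the whole computation can be replayed in a few lines (a sketch using the same rounded inputs as above):

```python
import math

g = 9.8                       # gravitational acceleration, m/s^2
d = 5730.0                    # home-to-school distance, m
rail = math.sqrt(2) * 24.4    # length of the launch rail, m

t = math.sqrt(2 * d / g)      # total flight time, s
s = d / t                     # horizontal (= vertical) launch speed, m/s
v0 = s * math.sqrt(2)         # total speed leaving the rail, m/s
a = s ** 2 / (2 * rail)       # acceleration along the rail, m/s^2

print(round(t, 1), round(s), round(a), round(a / g, 1))  # 34.2 168 407 41.5
```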
|
I thought about this problem again, and I think I have a full proof. It is a bit more tricky than what I anticipated. Comments are very welcome!
Update: I submitted this proof on arXiv, in case this is useful to someone: http://arxiv.org/abs/1207.2819
$\DeclareMathOperator{\fp}{fp}$$\DeclareMathOperator{\lp}{lp}$$\newcommand{\fpp}[1]{\widehat{\fp{#1}}}$$\newcommand{\lpp}[1]{\widehat{\lp{#1}}}$
Let $L$ be a context-free language over an alphabet $\Sigma$. Let $A$ be a pushdown automaton which recognizes $L$, with stack alphabet $\Gamma$. We denote by $|A|$ the number of states of $A$. Without loss of generality, we can assume that transitions of $A$ pop the topmost symbol of the stack and either push no symbol on the stack or push on the stack the previous topmost symbol and some other symbol.
We define $p' = |A|^2 |\Gamma|$ and $p = |A| (|\Gamma|+1)^{p'}$ the pumping length, and will show that all $w \in L$ such that $|w| > p$ have a decomposition of the form $w = u v x y z$ such that $|vxy| \leq p$, $|vy| \geq 1$ and $\forall n \geq 0, u v^n x y^n z \in L$.
Let $w \in L$ such that $|w| > p$. Let $\pi$ be an accepting path of minimal length for $w$ (represented as a sequence of transitions of $A$); we denote its length by $|\pi|$. We can define, for $0 \leq i < |\pi|$, $s_i$ the size of the stack at position $i$ of the accepting path. For all $N > 0$, we define an
$N$-level over $\pi$ as a set of three indices $i, j, k$ with $0 \leq i < j < k \leq p$ such that: $s_i = s_k$ and $s_j = s_i + N$; for all $n$ such that $i \leq n \leq j$, $s_i \leq s_n \leq s_j$; for all $n$ such that $j \leq n \leq k$, $s_k \leq s_n \leq s_j$.
(For an example of this, see the picture for case 2 below which illustrates an $N$-level.)
We define the level $l$ of $\pi$ as the maximal $N$ such that $\pi$ has an $N$-level. This definition is motivated by the following property: if the size of the stack over a path $\pi$ becomes larger than its level $l$, then the stack symbols more than $l$ levels deep will never be popped. We will now distinguish two cases: either $l < p'$, in which case we know that the same configuration for the automaton state and the topmost $l$ symbols of the stack is encountered twice in the first $p+1$ steps of $\pi$; or $l \geq p'$, and there must be a stacking and unstacking position that can be repeated an arbitrary number of times, from which we construct $v$ and $y$.
Case 1. $l < p'$. We define the configurations of $A$ as the couples of a state of $A$ and a sequence of $l$ stack symbols (where stacks of size less than $l$ will be represented by padding them to $l$ with a special blank symbol, which is why we use $|\Gamma| + 1$ when defining $p$). By definition, there are $|A| (|\Gamma| + 1)^l$ such configurations, which is less than $p$. Hence, in the $p+1$ first steps of $\pi$, the same configuration is encountered twice at two different positions, say $i < j$. Denote by $\widehat{i}$ (resp. $\widehat{j}$) the position of the last letter of $w$ read at step $i$ (resp. $j$) of $\pi$. We have $\widehat{i} \leq \widehat{j}$. Hence, we can factor $w = u v x y z$ with $y z = \epsilon$, $u = w_{0 \cdots \widehat{i}}$, $v = w_{\widehat{i} \cdots \widehat{j}}$, $x = w_{\widehat{j} \cdots |w|}$. (By $w_{x \cdots y}$ we denote the letters of $w$ from $x$ inclusive to $y$ exclusive.) By construction, $|vxy| \leq p$.
We also have to show that $\forall n \geq 0, u v^n x y^n z = u v^n x \in L$, but this follows from our observation above: stack symbols deeper than $l$ are never popped, so there is no way to distinguish configurations which are equal according to our definition, and an accepting path for $u v^n x$ is built from that of $w$ by repeating the steps between $i$ and $j$, $n$ times.
Finally, we also have $|v| > 0$, because if $v = \epsilon$ then, since we have the same configuration at steps $i$ and $j$ in $\pi$, $\pi' = \pi_{0 \cdots i} \pi_{j \cdots |\pi|}$ would be an accepting path for $w$, contradicting the minimality of $\pi$.
(Note that this case amounts to applying the pumping lemma for regular languages by hardcoding the topmost $l$ stack symbols in the automaton state, which is adequate because $l$ is small enough to ensure that $|w|$ is larger than the number of states of this automaton. The main trick is that we must adjust for $\epsilon$-transitions.)
Case 2. $l \geq p'$. Let $i, j, k$ be a $p'$-level. To any stack size $h$, $s_i \leq h \leq s_j$, we associate the last push $\lp(h) = \max(\{y \leq j \mid s_y = h\})$ and the first pop $\fp(h) = \min(\{y \geq j \mid s_y = h\})$. By definition, $i \leq \lp(h) \leq j$ and $j \leq \fp(h) \leq k$. Here is an illustration of this construction. To simplify the drawing, I omit the distinction between the path positions and word positions which we will have to make later.
We say that the full state of a stack size $h$ is the triple formed by: the automaton state at position $\lp(h)$; the topmost stack symbol at position $\lp(h)$; and the automaton state at position $\fp(h)$.
There are $p'$ possible full states, and $p' + 1$ stack sizes between $s_i$ and $s_j$, so, by the pigeonhole principle, there exist two stack sizes $g, h$ with $s_i \leq g < h \leq s_j$ such that the full states at $g$ and $h$ are the same. Like in Case 1, we denote by $\lpp(g)$, $\lpp(h)$, $\fpp(h)$ and $\fpp(g)$ the positions of the last letters of $w$ read at the corresponding positions in $\pi$. We factor $w = u v x y z$ where $u = w_{0 \cdots \lpp(g)}$, $v = w_{\lpp(g) \cdots \lpp(h)}$, $x = w_{\lpp(h) \cdots \fpp(h)}$, $y = w_{\fpp(h) \cdots \fpp(g)}$, and $z = w_{\fpp(g) \cdots |w|}$.
This factorization ensures that $|vxy| \leq p$ (because $k \leq p$ by our definition of levels).
We also have to show that $\forall n \geq 0, u v^n x y^n z \in L$. To do so, observe that each time we repeat $v$, we start from the same state and the same stack top, and we do not pop below our current position in the stack (otherwise we would have to push again at the current position, violating the maximality of $\lp(g)$), so we can follow the same path in $A$ and push the same symbol sequence on the stack. By the maximality of $\lp(h)$ and the minimality of $\fp(h)$, while reading $x$, we do not pop below our current position in the stack, so the path followed in the automaton is the same regardless of the number of times we repeated $v$. Now, if we repeat $y$ as many times as we repeat $v$, since we start from the same state, since we have pushed the same symbol sequence on the stack with our repeats of $v$, and since we do not pop more than what $v$ has stacked by minimality of $\fp(g)$, we can follow the same path in $A$ and pop the same symbol sequence from the stack. Hence, an accepting path for $u v^n x y^n z$ can be constructed from the accepting path for $w$.
Finally, we also have $|vy| \geq 1$ because, as in Case 1, if $v = \epsilon$ and $y = \epsilon$, we can build a shorter accepting path for $w$ by removing $\pi_{\lp(g)\cdots\lp(h)}$ and $\pi_{\fp(h)\cdots\fp(g)}$.
Hence, we have an adequate factorization in both cases, and the result is proved.
(Credit goes to Marc Jeanmougin for helping me with this proof.)
|
Consider, I have a dynamic system model for air compressor. Which means I have modelled the system by including its physics. Does this means I also included the transients of the system?What I think is, when i modelled any system with equations, I think that includes the transients. Is it...
1. Homework Statement: https://i.imgur.com/WPAKuf4.png, seeking G(s) = \frac{\theta_2(s)}{\tau(s)}. 2. Homework Equations 3. The Attempt at a Solution: What does it mean when the viscous drag is parallel to the axis of rotation? It also turns out that this system needs two equations. I...
1. Homework StatementDeriving an s-domain equation for the following inputs a) &b)3. The Attempt at a SolutionI understand how to derive the equation for an input with zero initial conditions (part a) but I'm not sure what to do when there are non-zero initial conditions (part b)
Outdated Control Systems Engineer with a PhD in Industrial Preventive Maintenance, in training for a double major in Chemistry and Biology. Prefer to work the math, reason and deduct, infer, interpolate and extrapolate, yet a lot is presented to me as to be "simple memorization", so I have to...
I´m taking a course on control engineering and I have a test next Tuesday so I need to study the basics which are: Laplace transform, simplification of block diagrams, and analysis of transient and steady state responses. Right now I am dealing with the second one.I know the basic rules of...
1. Homework StatementI am trying to answer two questions:1. Solve part (b) in the image above by hand.2. What are the differences between transfer function (a) and transfer function (b).2. Homework Equationscos(Im(s) / Re(s)) = ζ3. The Attempt at a Solution1. For part one...
I am finding it difficult to get insight about modelling of non-linear (electrical) systems. My queries are, consider the system is a DC-DC converter... 1) I read that to study the dynamics of the system we linearise the non-linear system... why do we want to linearise it?... 2) can someone please...
Hello,My name is Emre and I am a MSc student in mechanical engineering.I am looking for PhD in US.My research interests are mechanical vibrations,rail vehicles,finite element method and control theory.In fact I have 2.68/4 GPA in Undergrad and 3.79/4 GPA in Master.So if anyone has any advice...
Hi,This question on PD control is from a practice quiz.1. Homework StatementIf you can't see it- the question asks to find values for Kp and Kd such that the system achieves 5% OS and has a settling time Ts of 3s.Cs = 3Cd = 2m = 52. Homework Equationsω_n^2/(s^2 + 2ζw_n + ω_n^2) -...
QUESTION: 1) Why are Time Response Characteristics' expressions derived only from zero-state equations? NOTE: Nise Control Systems Engineering 6ed uses step inputs to derive Time Response Characteristics for 1st and 2nd order ordinary differential equations...
Hi,I'm trying to implement an auto track guidance system for ground vehicles (Eg Tractors), I'm using Matlab and Simulink. I'm at a point where I can calculate heading errors. I'm not too sure how to calculate the lateral errors. Also, I need help in designing the controller. I'm using...
1. Homework StatementAn error matrix is in the form, has a characteristic equation:## CE: s^2 + 120s + 7200 = 0 ##A state variable feedback system is described by:## A_F = \begin{bmatrix}0 & 1 \\-616.8 & -40 \end{bmatrix} #### B = \begin{bmatrix}0 \\ 1 \end{bmatrix} #### C =...
OverviewI'm trying to levitate a constrained permanent magnet with 2 electromagnets. I'm having trouble conceptualizing the control system for such an operation.SetupThe permanent magnet is fixed onto a horizontal pendulum and is repelled by an electromagnet above and repelled by an...
1. Homework StatementFind the steady-state error due to a disturbance Td(s) = 1/s.Set R(s) = 0.if given a system:2. Homework Equationsn/a3. The Attempt at a SolutionI need Y(s)/Td(s). To do this I must find Y(s) in terms of the transfer function Y(s)/R(s) which I have obtained...
Can someone please explain intuitively how the terms “frequency” and “dynamics” are related ? I understand the concept of each of these two individually, but I am having some difficulty visualizing what high-frequency dynamics mean.I understand the concept of how time and frequency domains...
Hi all,I would like to know why the Transfre function of the system is represented in S Domain instead of doing all the math in time domain itself. I studied a course on control systems and i wonder why s- domain is taken instead of other domains like Z- domain and others.Please refer belwo...
I am trying to calculate the steady state error of the following system but unable to do it. I have used MATLAB and calculated the steady state error to be 0.1128 but don't understand the steps that I need to do to calculate this.Please help.Thanks
|
Look at the right-most figure in the diagram. The small triangle defines all the major dimensions of the hexagon. Assuming the user measures the edge-to-edge dimension $c$, he or she can calculate the rest of the measurements. A simple right-triangle expression gives the relationship between $s/2$ and $c/2$, and is readily solved for $s$ in terms of $c$,
$$ \frac{s}{2} = \frac{c}{2}\tan\left(30°\right) $$ $$ s = c\tan\left(30°\right) = c\frac{\sqrt{3}}{3}. $$
Similarly, the Pythagorean formula gives the relationship between $c$ and the two other dimensions,
$$ d = \sqrt{ c^2 + s^2} = \frac{2c}{\sqrt{3}}.$$
Substituting these expressions for $s$ and $d$ and simplifying produces
$$ \text{long pitch} = s + \frac{d-s}{2} = \frac{s+d}{2} = \frac{\sqrt{3}}{2}c \approx 0.87c. $$
A blanket $n$ hexagons by $m$ hexagons will be approximately
$$ 0.87c\,n \times m\,c, $$ where the $m$ and $n$ dimensions are as shown in the next figure.
If a blanket will be wider at the ends, as shown in the figure, then $n$ will be odd. The total number of hexagons will then be
$$ \text{number of hexagons} = m \frac{n+1}{2} + (m-1)\frac{n-1}{2}. $$
In the example there are $(n+1)/2=5$ tall columns and $(n-1)/2=4$ short columns. So there are 6×5=30 hexagons in the tall columns and (6-1)×4=20 hexagons in the short columns, or 50 hexagons overall. Using the equation for blanket size and assuming $c=10$ inches, the approximate dimensions of the finished blanket are 9×10×0.87 by 6×10, or 78.3 by 60 inches.
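The sizing and counting rules above can be bundled into a short script (a sketch; the function names are mine). Note that the exact long pitch is $\sqrt{3}/2 \approx 0.866$, so it reports 77.9 inches rather than the 0.87-rounded 78.3:

```python
import math

def hexagon_dims(c):
    """Key dimensions of a regular hexagon from its edge-to-edge width c."""
    s = c * math.tan(math.radians(30))  # side length, s = c/sqrt(3)
    d = 2 * c / math.sqrt(3)            # corner-to-corner width
    long_pitch = (s + d) / 2            # = sqrt(3)/2 * c, about 0.87c
    return s, d, long_pitch

def blanket(c, n, m):
    """Approximate size and hexagon count of an n-by-m blanket (n odd)."""
    _, _, pitch = hexagon_dims(c)
    count = m * (n + 1) // 2 + (m - 1) * (n - 1) // 2
    return pitch * n, c * m, count

width, height, count = blanket(10, 9, 6)
print(round(width, 1), height, count)  # 77.9 60 50
```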
|
Let $\{S_n\}$ and $\{T_n\}$ be independent renewal processes with interrenewal distributions $F$ and $G$. Define$$N(t):=N_S(t) + N_T(t)=\sum_{n=1}^\infty \left[\mathsf 1_{(0,t]}(S_n) +\mathsf 1_{(0,t]}(T_n)\right]. $$Then the sequence of jump times of $N(t)$, $$U_n = \inf\{t: N(t)=n\}, $$is not in general a renewal sequence, because the inter-jump times need not be i.i.d. For a counterexample, consider when $F$ and $G$ are constant distributions, so that $\{S_n\}=\{i, 2i, 3i, \ldots\}$ and $\{T_n\}=\{j, 2j, 3j, \ldots\}$, with $i\ne j$. Take $i=2$ and $j=3$; then, counting the simultaneous renewals at multiples of $6$ as two jumps (which forces a gap of zero), we get for $n\geq 2$$$U_n-U_{n-1} = \begin{cases}1,& n\equiv 2,3\pmod 5\\2,& n\equiv 1,4\pmod 5\\0,& n\equiv 0\pmod 5,\end{cases}$$which is periodic but not identically distributed. Further, $R(t) = \mathbb E[N(t)] = \lfloor t/2\rfloor + \lfloor t/3\rfloor$, so the unit increments $R(t+1)-R(t)$ cycle through the pattern $0,1,1,1,0,2$ for integer $t$; thus $\lim_{t\to\infty}R(t+1)-R(t)$ does not exist (Blackwell's theorem would demand the limit $\tfrac12+\tfrac13=\tfrac56$), and Blackwell's renewal theorem does not hold.
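A quick enumeration of the $i=2$, $j=3$ example (a sketch; simultaneous renewals at multiples of 6 are counted as two jumps, which produces zero gaps) confirms that the inter-jump times are periodic rather than i.i.d.:

```python
# Superpose two deterministic renewal processes with renewals at
# {2, 4, 6, ...} and {3, 6, 9, ...}; keeping duplicates models the
# double jump of N(t) at common multiples of 2 and 3.
S = [2 * n for n in range(1, 16)]   # multiples of 2 up to 30
T = [3 * n for n in range(1, 11)]   # multiples of 3 up to 30
U = sorted(S + T)                   # jump times U_n, duplicates kept
gaps = [b - a for a, b in zip(U, U[1:])]
print(gaps[:10])  # [1, 1, 2, 0, 2, 1, 1, 2, 0, 2] -- periodic, not i.i.d.
```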
There are two remaining questions to consider - is there more we can say about $N(t)$ than that it is a counting process, and whether being stable under superposition is equivalent to having independent and stationary increments (i.e. being a Poisson process)? As for the first, we can describe the jump times $\{U_n\}$ by the transitions in a Markov renewal process on $$E=\{(X_n,t) : X_n\in \{S,T\}, t>0\}. $$ As for the second, the superposition of $\{S_n\}$ and $\{T_n\}$ is a renewal process iff one of the following holds:
(i) One of the processes, WLOG $\{S_n\}$ has multiple renewals and $\{T_n\}$ does not (i.e. $F(0)>0$ and $G(0)=0$), $F$ and $G$ are concentrated on a semi-lattice $\{0,\delta,2\delta,\ldots\}$ and either\begin{align}F(x) &= \left(1 - p^{\left\lfloor\frac x\delta\right\rfloor+1}\right)\mathsf 1_{[0,\infty)}(x), \quad 0<p<1\\G(x) &= \mathbb 1_{[\delta,\infty)}(x)\end{align}or\begin{align}F(x) &= \left(1 - p^{\left\lfloor\frac x\delta\right\rfloor+1}\right)\mathsf 1_{[0,\infty)}(x), \quad 0<p<1\\G(x) &= \left(1 - q^{\left\lfloor\frac x\delta\right\rfloor}\right)\mathsf 1_{[0,\infty)}(x), \quad 0<q<1.\\\end{align}
(ii) Neither process has multiple renewals, and $F$ and $G$ are exponential, and hence $\{S_n\}$, $\{T_n\}$, and their superposition are Poisson processes.
The proof is given by Pairs of renewal processes whose superposition is a renewal process by J.A. Ferreira (2000).
Note in particular that this means that for ordinary renewal processes with strictly positive inter-renewal times, stability under superposition is equivalent to the processes being Poisson.
|
This question was inspired by an answer to the "Magic trick based on deep mathematics" question. I wanted to post it as a comment, but I ran out of characters! I'm sure there must be a collection of standard results related to this question, but I don't know where to start looking.
First, a quick definition. The
diameter of a set $S \subseteq \mathbb{R}^n$ is $\sup\{d(x, y) \mid x, y \in S\}$.
A sheet of paper is a good physical example of a Riemannian 2-manifold with boundary, and a table is a good physical model of (a subset of) $\mathbb{R}^2$. Embed the paper isometrically in $\mathbb{R}^2$ by laying it flat on the table.
Draw the outline of a circular cup on the paper. It seems obvious that no matter how you embed the paper in $\mathbb{R}^2$, the outline of the cup will always be a metric circle, and it will always have the same diameter $D$.
Now, lift the paper into the air, embedding it isometrically in $\mathbb{R}^3$. If you let the paper flop around, the outline of the cup might not be a metric circle anymore...
but will it still have diameter $D$?
Finally, cut along the outline of the cup, removing an open disk from the sheet of paper. The paper now has a second boundary component, and it's no longer simply connected. The paper has also gained a surprising property: you can bend it around in midair (that is, embed it isometrically in $\mathbb{R}^3$) so that the outline of the cup has diameter greater than $D$!
What's the important property of the paper that we changed to make this possible? Comments
I don't think you need to cut along the outline of the cup to make this work... you could probably just cut out any disk contained within the outline of the cup. So maybe simply-connectedness is the important property?
My gut tells me that if you draw two dots on the sheet of paper, the distance between the dots is maximized when the paper is flat on the table. When you bend the paper around in midair, the dots can get closer together, but they can never get farther apart. I think this is equivalent to the statement that if $\delta$ is the natural distance function on the paper, $d$ is the distance function in $\mathbb{R}^3$, and $F$ is an isometric embedding of the paper in $\mathbb{R}^3$, $d(Fx, Fy) \le \delta(x, y)$ for all points $x$ and $y$ on the paper.
|
Beside the wonderful examples above, there should also be counterexamples, where visually intuitive demonstrations are actually wrong. (e.g. missing square puzzle)
Do you know the other examples?
The never ending chocolate bar!
If only I knew of this as a child..
The trick here is that the left piece that is three bars wide grows at the bottom when it slides up. In reality, what would happen is that there would be a gap at the right between the three-bar piece and the cut. This gap is three bars wide and one-third of a bar tall, explaining how we ended up with an "extra" piece.
Side by side comparison:
Notice how the base of the three-wide bar grows. Here's what it would look like in reality$^1$:
1: Picture source https://www.youtube.com/watch?v=Zx7vUP6f3GM
A bit surprised this hasn't been posted yet. Taken from this page:
Visualization can be misleading when working with alternating series. A classical example is \begin{align*} \ln 2=&\frac11-\frac12+\frac13-\frac14+\;\frac15-\;\frac16\;+\ldots,\\ \frac{\ln 2}{2}=&\frac12-\frac14+\frac16-\frac18+\frac1{10}-\frac1{12}+\ldots \end{align*} Adding the two series, one finds \begin{align*}\frac32\ln 2=&\left(\frac11+\frac13+\frac15+\ldots\right)-2\left(\frac14+\frac18+\frac1{12}+\ldots\right)=\\ =&\frac11-\frac12+\frac13-\frac14+\;\frac15-\;\frac16\;+\ldots=\\ =&\ln2. \end{align*}
Here's how to trick students new to calculus (applicable only if they don't have graphing calculators, at that time):
$0$. Ask them to find the inverse of $x+\sin(x)$, which they will be unable to do. Then,
$1$. Ask them to draw graph of $x+\sin(x)$.
$2$. Ask them to draw graph of $x-\sin(x)$
$3$. Ask them to draw $y=x$ on both graphs.
Here's what they will do :
$4$. Ask them, "What do you conclude?". They will say that they are inverses of each other. And then get
very confused.
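The confusion is easy to dispel numerically (a sketch): $x-\sin(x)$ is not actually the inverse of $x+\sin(x)$, even though the two graphs look like reflections of each other about $y=x$; the composition returns $x$ only at multiples of $\pi$.

```python
import math

f = lambda x: x + math.sin(x)  # the function
g = lambda x: x - math.sin(x)  # the alleged "inverse"

# If g were truly the inverse of f, f(g(x)) would equal x for every x.
for x in (0.5, 1.0, 2.0, math.pi):
    print(x, f(g(x)))  # matches x only at x = pi
```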
Construct a rectangle $ABCD$. Now identify a point $E$ such that $CD = CE$ and the angle $\angle DCE$ is a non-zero angle. Take the perpendicular bisector of $AD$, crossing at $F$, and the perpendicular bisector of $AE$, crossing at $G$. Label where the two perpendicular bisectors intersect as $H$ and join this point to $A$, $B$, $C$, $D$, and $E$.
Now, $AH=DH$ because $FH$ is a perpendicular bisector; similarly $BH = CH$. $AH=EH$ because $GH$ is a perpendicular bisector, so $DH = EH$. And by construction $BA = CD = CE$. So the triangles $ABH$, $DCH$ and $ECH$ are congruent, and so the angles $\angle ABH$, $\angle DCH$ and $\angle ECH$ are equal.
But if the angles $\angle DCH$ and $\angle ECH$ are equal then the angle $\angle DCE$ must be zero, which is a contradiction.
Proof (of the classic fallacy that every triangle is isosceles): Let $O$ be the intersection of the perpendicular bisector of $[BC]$ and the bisector of $\widehat{BAC}$. Then $OB=OC$ and $\widehat{BAO}=\widehat{CAO}$. So the triangles $BOA$ and $COA$ are congruent, and $BA=CA$.
Another example :
From "Pastiches, paradoxes, sophismes, etc." and solution page 23 : http://www.scribd.com/JJacquelin/documents
A copy of the solution is added below. The translation of the comment is :
Explanation : The points A, B and P are not on a straight line ( the Area of the triangle ABP is 0.5 ) The graphical highlight is magnified only on the left side of the figure.
I think this could be the goats puzzle (Monty Hall problem) which is nicely visually represented with simple doors.
Three doors, behind 2 are goats, behind 1 is a prize.
You choose a door to open to try and get the prize, but before you open it, one of the other doors is opened to reveal a goat. You then have the option of changing your mind. Should you change your decision?
From looking at the diagram above, you know for a fact that you have a 1/3rd chance of guessing correctly.
Next, a door with a goat in is opened:
A cursory glance suggests that your odds have improved from 1/3rd to a 50/50 chance of getting it right. But the truth is different...
By calculating all possibilities we see that if you change, you have a higher chance of winning.
The easiest way to think about it for me is, if you choose the car first, switching is guaranteed to be a goat. If you choose a goat first, switching is guaranteed to be a car. You're more likely to choose a goat first because there are more goats, so you should always switch.
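The argument above is easy to confirm by brute force; here is a small Python simulation of both strategies (a sketch, with an arbitrary seed):

```python
import random

def play(switch, rng):
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Host opens a door that is neither the player's pick nor the car.
    opened = next(d for d in doors if d != pick and d != car)
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(0)
n = 100_000
stay = sum(play(False, rng) for _ in range(n)) / n
swap = sum(play(True, rng) for _ in range(n)) / n
print(stay, swap)  # ~0.33 vs ~0.67
```

Staying wins exactly when the first pick was the car (probability 1/3); switching wins in the other 2/3 of games.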
A favorite of mine was always the following:
\begin{align*} \require{cancel}\frac{64}{16} = \frac{\cancel{6}4}{1\cancel{6}} = 4 \end{align*}
I particularly like this one because of how simple it is and how it gets the right answer, though for the wrong reasons of course.
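As a throwaway exercise, one can search for every two-digit fraction where this bogus cancellation happens to give the right answer; a Python sketch:

```python
# Search for every two-digit "anomalous cancellation" like 16/64 = 1/4:
# fractions (10a+b)/(10b+d) where striking the shared digit b leaves a/d.
found = []
for num in range(10, 100):
    for den in range(num + 1, 100):
        a, b = divmod(num, 10)   # num = 10a + b
        c, d = divmod(den, 10)   # den = 10c + d
        if b == c and d != 0 and num * d == den * a:
            found.append((num, den))
print(found)  # [(16, 64), (19, 95), (26, 65), (49, 98)]
```

Only four nontrivial cases exist below 100 (plus their reciprocals, such as 64/16 itself).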
A recent example I found which is credited to Martin Gardner and is similar to some of the others posted here but perhaps with a slightly different reason for being wrong, as the diagonal cut really is straight.
I found the image at a blog belonging to Greg Ross.
Spoilers
The triangles being cut out are not isosceles as you might think but really have base $1$ and height $1.1$ (as they are clearly similar to the larger triangles). This means that the resulting rectangle is really $11\times 9.9$ and not the reported $11\times 10$.
Squaring the circle with Kochanski's Approximation
One of my favorites:
\begin{align} x&=y\\ x^2&=xy\\ x^2-y^2&=xy-y^2\\ \frac{(x^2-y^2)}{(x-y)}&=\frac{(xy-y^2)}{(x-y)}\\ x+y&=y\\ \end{align}
Therefore, $1+1=1$
The error here is in dividing by $x-y$, which is zero since $x=y$.
That $\sum_{n=1}^\infty n = -\frac{1}{12}$. http://www.numberphile.com/videos/analytical_continuation1.html
The way it is presented in the clip is completely incorrect, and could spark a great discussion as to why.
Some students may notice the hand-waving 'let's intuitively accept $1-1+1-1+\ldots = 0.5$'.
If we accept this assumption (and the operations on divergent sums that are usually not allowed) we can get to the result.
A discussion of how the seemingly nonsense result follows directly from a nonsense assumption is useful. This can reinforce why it's important to distinguish between convergent and divergent series, and it can be done entirely within the framework of convergent series.
A deeper discussion can consider the implications of allowing such a definition for divergent series (i.e. Ramanujan summation) and can lead to a discussion of whether such a definition is useful given that it leads to seemingly nonsense results. I find this interesting for opening up the idea that mathematics is not set in stone, and it can link to the history of irrational and imaginary numbers (which historically have been considered less-than-rigorous or interesting-but-not-useful).
\begin{equation} \log6=\log(1+2+3)=\log 1+\log 2+\log 3 \end{equation}
Here is one I saw on a whiteboard as a kid... \begin{align*} 1=\sqrt{1}=\sqrt{-1\times-1}=\sqrt{-1}\times\sqrt{-1}=\sqrt{-1}^2=-1 \end{align*}
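The broken step is $\sqrt{-1\times-1}=\sqrt{-1}\times\sqrt{-1}$: the identity $\sqrt{ab}=\sqrt{a}\sqrt{b}$ requires $a,b\ge 0$. A two-line check with Python's complex arithmetic (principal branch):

```python
import cmath

# sqrt(a*b) == sqrt(a)*sqrt(b) only holds for a, b >= 0.  On the principal
# branch, sqrt(-1) = i, and i*i = -1, whereas sqrt((-1)*(-1)) = sqrt(1) = 1.
lhs = cmath.sqrt((-1) * (-1))          # sqrt(1)  = 1
rhs = cmath.sqrt(-1) * cmath.sqrt(-1)  # i * i    = -1
print(lhs, rhs)  # (1+0j) (-1+0j)
```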
I might be a bit late to the party, but here is one which my maths teacher showed me, and which I find to be a very nice example of why one shouldn't solve an equation by looking at hand-drawn plots, or even computer-generated ones.
Consider the following equation: $$\left(\frac{1}{16}\right)^x=\log_{\frac{1}{16}}x$$
At least where I live, it is taught in school what the exponential and logarithmic plots look like when the base is between $0$ and $1$, so a student should be able to draw a plot which would look like this:
Easy, right? Clearly there is just one solution, lying at the intersection of the graphs with the $x=y$ line (the dashed one; note the plots are each other's reflections in that line).
Well, this is clear at least until you try some simple values of $x$. Namely, plugging in $x=\frac{1}{2}$ or $\frac{1}{4}$ gives you two more solutions! So what's going on?
In fact, I have intentionally put in incorrect plots (you get the picture above if you replace $16$ by $3$). The real plot looks like this:
You might disagree, but to me it still seems like a plot with just one intersection point. But, in fact, the part where the two plots meet contains all three points of intersection. Zooming in on the interval with all the solutions lets one barely see what's going on:
The oscillations are truly minuscule there. Here is the plot of the difference of the two functions on this interval:
Note the scale of the $y$ axis: the differences are on the order of $10^{-3}$. Good luck drawing that by hand!
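For anyone who wants to verify this without trusting any plot, here is a small Python check (a sketch; the bisection bracket is my own choice): $x=\frac14$ and $x=\frac12$ solve the equation exactly, and the third solution sits on the line $y=x$.

```python
import math

b = 1 / 16
f = lambda x: b ** x - math.log(x, b)   # difference of the two sides

# x = 1/4 and x = 1/2 really are solutions:
print(f(0.25), f(0.5))   # both 0 up to rounding

# The third solution lies on y = x, i.e. where (1/16)**x = x.
# g(x) = x - b**x is strictly increasing, so bisection finds the root.
lo, hi = 0.1, 0.9
for _ in range(60):
    mid = (lo + hi) / 2
    if mid - b ** mid < 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
print(root)  # ~0.3642, strictly between the two exact solutions 1/4 and 1/2
```

One can check $x=\frac14$ by hand: $(1/16)^{1/4}=\frac12$ and $\log_{1/16}\frac14=\frac12$.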
To get a better idea of what's going on with the plots, here they are with $16$ replaced by $50$:
Here is a measure-theoretic one. 'By picture', if we take a cover of $A:=[0,1]\cap\mathbb{Q}$ by open intervals, we have an interval around every rational, and so we seem to cover $[0,1]$; the Lebesgue measure of $[0,1]$ is 1, so the measure of $A$ is 1. As a sanity check, the complement of this cover in $[0,1]$ can't contain any intervals, so its measure is surely negligible.
This is of course wrong, as the set of all rationals has Lebesgue measure $0$, and sets with no intervals need not have measure 0: see the fat Cantor set. In addition, if you fix the 'diagonal enumeration' of the rationals and take $\varepsilon$ small enough, the complement of the cover in $[0,1]$ contains $2^{ℵ_0}$ irrationals. I recently learned this from this MSE post.
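A short calculation makes the failure of the 'picture' concrete; in this sketch the $k$-th rational (in any fixed enumeration) gets an open interval of length $\varepsilon/2^k$:

```python
# Cover the k-th rational by an open interval of length eps / 2**k.
# The total length of the cover is at most
#     sum_{k>=1} eps / 2**k = eps,
# which can be made as small as we like, so Q ∩ [0,1] has measure 0
# and the cover cannot possibly contain all of [0,1] once eps < 1.
eps = 1e-3
total = sum(eps / 2 ** k for k in range(1, 60))
print(total < eps, total > 0.999 * eps)  # True True
```

The intervals overlap and interleave so densely that the eye cannot see how little total length they use.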
There are two examples on Wikipedia:Missing_square_puzzle Sam Loyd's paradoxical dissection, and Mitsunobu Matsuyama's "Paradox". But I cannot think of something that is not a dissection.
This is my favorite.
\begin{align}-20 &= -20\\ 16 - 16 - 20 &= 25 - 25 - 20\\ 16 - 36 &= 25 - 45\\ 16 - 36 + \frac{81}{4} &= 25 - 45 + \frac{81}{4}\\ \left(4 - \frac{9}{2}\right)^2 &= \left(5 - \frac{9}{2}\right)^2\\ 4 - \frac{9}{2} &= 5 - \frac{9}{2}\\ 4 &= 5 \end{align}
You can generalize it to get any $a=b$ that you'd like this way:
\begin{align}-ab&=-ab\\ a^2 - a^2 - ab &= b^2 - b^2 - ab\\ a^2 - a(a + b) &= b^2 -b(a+b)\\ a^2 - a(a + b) + \left(\frac{a + b}{2}\right)^2 &= b^2 -b(a+b) + \left(\frac{a + b}{2}\right)^2\\ \left(a - \frac{a+b}{2}\right)^2 &= \left(b - \frac{a+b}{2}\right)^2\\ a - \frac{a+b}{2} &= b - \frac{a+b}{2}\\ a &= b\\ \end{align}
It's beautiful because visually the "error" is obvious in the line $\left(4 - \frac{9}{2}\right)^2 = \left(5 - \frac{9}{2}\right)^2$, leading the observer to investigate the reverse FOIL process from the step before, even though this line is valid. I think part of the problem also stems from the fact that grade school / high school math education for the average person teaches there's only one "right" way to work problems and you always simplify, so most people are already confused by the un-simplifying process leading up to this point.
I've found that the number of people who can find the error unaided is something less than 1 in 4. Disappointingly, I've had several people tell me the problem stems from the fact that I started with negative numbers. :-(
Solution
When working with variables, people often remember that $c^2 = d^2 \implies c = \pm d$, but forget that when working with concrete values because the tendency to simplify everything leads them to turn squares of negatives into squares of positives before applying the square root. The number of people that I've shown this to who can find the error is a small sample size, but I've found some people can carefully evaluate each line and find the error, and then can't explain it even after they've correctly evaluated $\left(-\frac{1}{2}\right)^2=\left(\frac{1}{2}\right)^2$.
To give a contrarian interpretation of the question, I will chime in with Goldbach's comet, which counts the number of ways an integer can be expressed as the sum of two primes:
It is mathematically "wrong" because there is no proof that this function doesn't equal zero infinitely often, and it is visually deceptive because it appears to be unbounded, with its lower bound increasing at a linear rate.
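Recomputing the comet's values is straightforward; here is a hedged Python sketch (my own implementation, counting unordered prime pairs):

```python
def goldbach_counts(limit):
    # Sieve of Eratosthenes
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, limit + 1, p):
                sieve[m] = 0
    # g(n): unordered ways to write even n as a sum of two primes
    return {
        n: sum(1 for p in range(2, n // 2 + 1) if sieve[p] and sieve[n - p])
        for n in range(4, limit + 1, 2)
    }

g = goldbach_counts(1000)
print(g[4], g[10], g[100])  # 1 2 6 (e.g. 100 = 3+97 = 11+89 = 17+83 = 29+71 = 41+59 = 47+53)
```

Every value computed here is positive, but that is verification, not proof: nothing in the plot rules out a zero further out.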
This is essentially the same as the chocolate-puzzle. It's easier to see, however, that the total square shrinks.
This is a fake visual proof that a sphere has Euclidean geometry. Strangely enough, in a 3-dimensional hyperbolic space, the curvature of a sphere approaches a nonzero amount as its size grows, and an infinitely large object with exactly that limiting curvature has Euclidean geometry and appears much the way that image appears.
I don't know about you, but to me it looks like the hexagons are stretched horizontally. If you also see it that way and you trust your eyes, then you could take that as a visual proof that $\arctan\frac{7}{4} < 60^\circ$. If that's how you saw it, then it's an optical illusion, because the hexagons are really stretched vertically. Unlike some optical illusions of images that appear different than they are but are still mathematically possible, this is an optical illusion of a mathematically impossible image. The math shows that $\tan 60^\circ = \sqrt{3}$ and $\sqrt{3} < \frac{7}{4}$, because $7^2 = 49$ but $3 \times 4^2 = 48$. It's just as mathematically impossible for something not to be moving when it is moving; yet it is theoretically possible for your eyes to stop sending movement signals to your brain, so that you fail to see movement in something that is moving, which would look creepy to those who have not experienced it, because your brain could still tell, by a more complex route than the signals from the eyes, that it actually is moving.
To draw a hexagonal grid over a square grid more accurately, only the math, and not your eye signals, can be trusted to help you do it accurately. The math shows that the continued fraction of $\sqrt{3}$ is $[1; 1, 2, 1, 2, 1, 2, 1, \ldots]$, which is less than $\frac{7}{4}$, not more.
I do not think this really qualifies as "visually intuitive", but it is definitely funny.
They do such a great job at dramatizing these kind of situations. Who cannot remember of an instance in which he has been either a "Billy" or a "Pa' and Ma'"? Maybe more "Pa' and Ma'" instances on my part...;)
1. Homework Statement
I'm struggling to perform a symplectic reduction and don't really understand the process in general. I have a fairly solid understanding of differential equations but am just starting to explore differential geometry. Hopefully somebody will be able to walk me through this very simple example. Given the following optimization problem:
\begin{equation}\text{min}\;J = \frac{1}{2}\int_0^{t_f} u^2dt\end{equation}
\begin{equation}\dot{x_1} = x_2\end{equation}
\begin{equation}\dot{x_2} = u\end{equation}
where $u$ is a control variable and we have arbitrary boundary conditions defined. Our control Hamiltonian system has $n=2$ states and $2$ co-states, and thus has dimension $2n = 4$.
2. Homework Equations
\begin{equation}H = <\lambda,f> + L = \lambda_1x_2 + \lambda_2u + \frac{1}{2}u^2\end{equation}
\begin{equation}\frac{\partial H}{\partial u} = 0 = \lambda_2 + u\end{equation}
\begin{equation}\dot{x} = \frac{\partial H}{\partial \lambda}\end{equation}
\begin{equation}\dot{\lambda} = -\frac{\partial H}{\partial x}\end{equation}
Through integration, we know the solution has the following form:
\begin{equation}x_2 = \frac{1}{2}C_1t^2 - C_2t + C_3\end{equation}
\begin{equation}x_1 = \frac{1}{6}C_1t^3 - \frac{1}{2}C_2t^2 + C_3t + C_4\end{equation}
And using given boundary conditions, the problem simplifies to a 4 dimensional search for each of the 4 constants (or 5 depending on if final time is free, but let's ignore that for now).
3. The Attempt at a Solution
Knowing that $\lambda_1$ is a constant of motion,
\begin{equation}\lambda_1 = C_1\end{equation}
because
\begin{equation}\dot{\lambda_1} = 0\end{equation}
According to the Marsden-Weinstein reduction theorem, I should be able to reduce the Hamiltonian to dimension 2n-2.
\begin{equation}H = <\lambda,f> + L = C_1x_2 + \lambda_2u + \frac{1}{2}u^2\end{equation}
Yet when I rederive the equations of motion, the constant still reappears.
\begin{equation}u = -\lambda_2\end{equation}
\begin{equation}\dot{x_2} = \lambda_2\end{equation}
\begin{equation}\dot{\lambda_2} = -C_1\end{equation}
According to papers I've read online, I should be able to reduce this optimization problem's dimensionality, but with the first constant reappearing in the equations of motion, I effectively still have a 4-dimensional search. Keep in mind that I'm assuming I'm using some numerical method to solve this problem, not an analytic procedure. Even though this is an easy problem to solve analytically, I don't care about the solution so much as the method itself. If anybody could help me properly reduce the Hamiltonian, that would be great!
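As a numerical cross-check (a Python sketch, not part of the original assignment), integrating the canonical equations confirms that $\lambda_1$ stays frozen at $C_1$ while the other three variables evolve, which is exactly why the constant keeps reappearing as a parameter of the remaining equations:

```python
# Canonical equations for this problem, with u = -lambda_2 from dH/du = 0:
#   x1' = x2,  x2' = -l2,  l1' = 0,  l2' = -l1.
def rhs(state):
    x1, x2, l1, l2 = state
    u = -l2
    return (x2, u, 0.0, -l1)

def rk4(state, dt, steps):
    # Classical 4th-order Runge-Kutta; exact here since the solution is cubic in t.
    for _ in range(steps):
        k1 = rhs(state)
        k2 = rhs(tuple(s + dt / 2 * k for s, k in zip(state, k1)))
        k3 = rhs(tuple(s + dt / 2 * k for s, k in zip(state, k2)))
        k4 = rhs(tuple(s + dt * k for s, k in zip(state, k3)))
        state = tuple(
            s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)
        )
    return state

C1, C2, C3 = 2.0, -1.0, 0.5
x1, x2, l1, l2 = rk4((0.0, C3, C1, C2), 1e-3, 1000)   # integrate to t = 1
print(l1)                          # still 2.0: lambda_1 is conserved
print(x2, 0.5 * C1 - C2 + C3)      # both 2.5: matches x2 = C1 t^2/2 - C2 t + C3
```

The conserved $\lambda_1$ is the momentum map here; the reduction removes the pair $(x_1, \lambda_1)$ but the value $C_1$ necessarily survives as a parameter of the reduced Hamiltonian.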
Before we look at what f does, let's look at some of Knuth's definitions earlier in the chapter.
A computational method is a quadruple ($Q, I, \Omega, f$) where $I,\Omega \subseteq Q$ and $f : Q \to Q$; and $f$ has the property of leaving $\Omega$ pointwise fixed, i.e.: $\forall q \in \Omega, f(q) = q$.
The intended, informal meanings of these terms are the following:
$\fbox{Q}$ the states of the computation
$\fbox{I}$ the input
$\fbox{$\Omega$}$ the output
$\fbox{f}$ the computational rule
Although irrelevant to our discussion, Knuth then defines the notion of algorithm as follows:
An algorithm is a computational method that terminates in finitely many steps for all $x \in I$.
On page 8 he gives an explication of a gcd algorithm in terms of this formal notion. Now, getting back to f, it would be helpful if we keep the following auxiliary definitions in sight:
Occurs $(\theta, \sigma) =_{df} \exists \alpha,\omega \in A^*(\sigma = \alpha\theta\omega)$.
Shortest $(\sigma, \Sigma, \phi) =_{df} \phi(\sigma) \land \forall\tau\in\Sigma(\phi(\tau) \rightarrow [\tau] > [\sigma])$, where [σ] is the length of σ.
With these, we're ready to make sense of f. It maps strings σ and natural numbers j to:
$(\sigma, a_j)$ $~~~~~~~~$if$~~$ $\lnot Occurs(\theta_j,\sigma)$
$(\alpha\phi_j\omega, b_j)$ $~~$if$~~$ $\exists\alpha\in A^*(\sigma = \alpha\theta_j\omega \land Shortest(\alpha, A^*, [\lambda x.\sigma = x\theta_j\omega]))$
$(\sigma, N)$ $~~~~~~~~$ otherwise
Informally, we can describe f's behavior as follows. If no two strings $\alpha$ and $\omega$ exist in $A^*$ that can be wrapped around $\theta_j$ to make $\sigma$, then f returns $\sigma$ with its index $a_j$ (I'm not sure why Knuth differentiates between $a_j$ and $j$ s.t. $0 \leq j \leq N$). Else, if there exist strings $\alpha$ and $\omega$ that can be wrapped around $\theta_j$ as $\alpha\theta_j\omega$ to make $\sigma$, then f returns the concatenation of the shortest such $\alpha$, then $\phi_j$, then $\omega$, along with the index $b_j$. In all other cases, f returns $\sigma$ with the index N.
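To make the case analysis concrete, here is a hypothetical Python sketch of a single application of f for one rule; the names mirror this answer rather than Knuth's book, and "shortest $\alpha$" simply means the leftmost occurrence of $\theta_j$ in $\sigma$. (The remaining case, $f((\sigma, N)) = (\sigma, N)$, is the terminal state and is omitted.)

```python
# One application of f for a single rule (theta_j -> phi_j) with
# successor indices a_j (no match) and b_j (match).
def step(sigma, theta, phi, a_j, b_j):
    i = sigma.find(theta)            # leftmost match = shortest prefix alpha
    if i == -1:
        return (sigma, a_j)          # theta_j does not occur in sigma
    # Replace that single occurrence: alpha + phi_j + omega
    return (sigma[:i] + phi + sigma[i + len(theta):], b_j)

print(step("abcabc", "ca", "X", 0, 1))  # ('abXbc', 1)
print(step("abcabc", "zz", "X", 0, 1))  # ('abcabc', 0)
```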
Hope this helps. If you find errors, feel free to edit this post or add a comment below.
This question is not about good algorithms for solving stochastic differential equations. It is about how to implement simple codes in Mathematica efficiently, exploiting Mathematica's programming methodology. (Hopefully, this may be useful in a stochastic processes course, for instance.)
A simple Langevin Eq. in a single random variable $X$ with additive noise reads \begin{equation} \dot{X} = f(X) + \zeta(t) \end{equation} where $f(X)$ is an arbitrary function and $\zeta(t)$ is a Gaussian white noise satisfying \begin{equation} E(\zeta(t)) = 0, \qquad \text{and} \qquad E(\zeta(t) \zeta(t')) = \Gamma \delta(t-t') \end{equation}
To solve it we discretize time as $t = n dt$ and write \begin{equation} X_{n+1} = X_{n} + f(X_n)dt + \sqrt{\Gamma dt}\xi_n \end{equation} where $\xi_n \sim N(0,1)$.
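As a language-neutral reference point (a hedged Python sketch, independent of the Mathematica implementations below), the same Euler-Maruyama update with $f=0$ reduces to pure diffusion, whose variance after time $t_f$ should be $\Gamma t_f$:

```python
import random

def langevin(x0, f, gamma, tf, n, rng):
    # Euler-Maruyama: X_{k+1} = X_k + f(X_k) dt + sqrt(gamma dt) xi_k
    dt = tf / n
    s = (gamma * dt) ** 0.5
    x = [x0]
    for _ in range(n):
        x.append(x[-1] + f(x[-1]) * dt + s * rng.gauss(0.0, 1.0))
    return x

# With f = 0 the process is a random walk, so Var[X(tf)] ~ gamma * tf.
rng = random.Random(1)
finals = [langevin(0.0, lambda x: 0.0, 0.5, 2.0, 100, rng)[-1] for _ in range(4000)]
var = sum(v * v for v in finals) / len(finals)
print(var)  # close to gamma * tf = 1.0
```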
Here is my best implementation thus far:
Langevin[x0_, f_, G_, tf_, n_, m_: 1] :=
  With[{dt = N[tf/n], s = N[Sqrt[tf G/n]], xx0 = Table[x0, {m}]},
    Transpose@NestList[
      # + dt f[#] + RandomVariate[NormalDistribution[0, s], m] &,
      xx0, n]];
It takes as input a initial condition $x_0$, a function $f[x]$, the spectral density $\Gamma$ (here written as $G$), the final integration time $t_f$ and the number of integration points $n$. The time step is then $dt = t_f/n$. It also takes an optional argument $m$ corresponding to the number of realisations.
The output consists of $m$ vectors $(X_0, X_1, X_2, \ldots,X_n )$ representing the stochastic realisations.
Here is this program applied to the famous bi-stable potential given by $V(x) = -\frac{x^2}{2} + \frac{x^4}{4}$, so that $f(x) = - V'(x) = x-x^3$. It simulates a cold ($\Gamma=0.1$, in data1) and a hot ($\Gamma=1$, in data2) condition:
First@AbsoluteTiming[
  data1 = Langevin[0, -#^3 + # &, 0.1, 10, 10^3, 2000];
  data2 = Langevin[0, -#^3 + # &, 1, 10, 10^3, 2000];]

0.317665
To analyse the steady state I discard some initial points (80% in this example). This shows how the particle remains distributed close to the potential minima when it's cold, but spread out when it's hot:
Show[ Histogram[{Flatten[data1[[All, 800 ;; 1000]]], Flatten[data2[[All, 800 ;; 1000]]]}, Automatic, "PDF"], Plot[-z^2/2 + z^4/4, {z, -1.8, 1.8}, PlotStyle -> Red], AxesOrigin -> {0, 0}, PlotRange -> {-0.3, 1.2}]
Now to the questions:
Any immediate improvements on this function? Is there a better way through a different approach? Can I compile this function as is to gain speed? What about parallelisation?
A follow-up would be to extend all this to systems of Langevin equations, replacing $X$, $f$ and $\zeta$ by vector-valued functions. But then we lose the advantage of computing many realisations at once within the same NestList. I'll think more about this problem and if I come up with any ideas I'll update the question.
Thank you all in advance and I hope this may be of use to other researchers as well.
Note: here is an example using the idea of @R.M.: generate all random numbers at once and use an index through the iteration to move along:
LangevinBad[x0_, f_, G_, tf_, n_, m_: 1] :=
  Block[{i = 1},
    With[{dt = N[tf/n],
        r = RandomVariate[NormalDistribution[0, N[Sqrt[tf G/n]]], {n, m}],
        xx0 = Table[x0, {m}]},
      Transpose@NestList[# + dt f[#] + r[[i++]] &, xx0, n]]];
Maybe my coding is no good, but this version is really bad. Actually, Nest probably has an internal variable to keep track of which iteration step it is on, but I have no idea whether it is possible to access it.
ACL's version
@ACL came up with a really efficient code, which I copy here for completeness.
(* This was originally called l4 by ACL *)
LangevinACL[fn_] := With[{f = fn},
  Compile[{{x0, _Real}, {G, _Real}, {tf, _Real}, {n, _Integer}},
    Module[{dt, s, state, r},
      dt = N[tf/n]; s = N[Sqrt[tf G/n]];
      state = ConstantArray[0., n]; state[[1]] = x0;
      r = RandomVariate[NormalDistribution[0, s], n];
      Do[state[[nc]] = state[[nc - 1]] + dt*f@state[[nc - 1]] + r[[nc - 1]],
        {nc, 2, n}];
      state],
    CompilationTarget -> "C"]]
Then to compile for a given function use
ll = LangevinACL[(# - #^3) &];
AbsoluteTiming[dat = Table[ll[0, .1, 10, 10^3], {2000}];]
This code is always faster than the one originally posted and allows for easy parallelisation.
Vector Equations
In vector equations there are two possibilities; either all particles have the same fluctuating properties, in which case we usually write $E(\zeta_i(t) \zeta_j(t')) = \Gamma\delta_{i,j} \delta(t-t')$ for the components of the fluctuating vector; or each particle has a specific fluctuation: $E(\zeta_i(t) \zeta_j(t')) = \Gamma_{i,j} \delta(t-t')$, where $\Gamma_{i,j}$ are the entries of a covariance matrix.
Here are two implementations of the former (all equations with the same fluctuation).
The first is a simple variation of the original code, as suggested by @ACL, so that instead of computing several realisations at once, each function call evaluates only a single realisation, but for a vector system:
LangevinVec[x0_, f_, G_, tf_, n_] := With[{dt = N[tf/n], s = N[Sqrt[ tf G/n]], m = Length@x0}, NestList[ # + dt f[#] + RandomVariate[NormalDistribution[0, s], m] &, x0, n]];
Everything is exactly as in Langevin, except that on input $x_0$ should be an array of numbers. Note also that there is no failsafe to check if the function $f$ has the correct dimensionality! (it should be a mapping from $\mathbb{R}^m\rightarrow\mathbb{R}^m$, where $m$ is the length of $x_0$).
The second implementation is again motivated by @ACL's code:
LangevinVecACL[fn_] := With[{f = fn}, Compile[{{x0, _Real, 1}, {G, _Real}, {tf, _Real}, {n, _Integer}}, Module[{dt, s, state, r, m}, m = Length@x0; dt = N[tf/n]; s = N[Sqrt[tf G/n]]; state = ConstantArray[0., {n, m}]; state[[1]] = x0; r = RandomVariate[NormalDistribution[0, s], {n, m}]; Do[state[[nc]] = state[[nc - 1]] + dt*f@state[[nc - 1]] + r[[nc - 1]], {nc, 2, n}]; state], CompilationTarget -> "C"]]
Now to applications. Here is a model of ferromagnetism reminiscent of the 1D Ising system. There are $m$ random variables in $\vec{x} = (x_1,x_2,\ldots,x_m)$ representing spins in a linear chain of atoms. The interaction potential is given by
\begin{equation} V(\vec{x}) = - \sum_{i=1}^m (\frac{a x_i^2}{2} - \frac{b x_i^4}{4}) - c \sum_{i=1}^m x_i x_{i+1} \end{equation} This refers to a bi-stable potential (as in the previous example) for each variable representing the magnetic order, plus a harmonic-type interaction between them. The corresponding force is \begin{equation} f_i = a x_i - bx_i^3 + c(x_{i-1}+x_{i+1}) \end{equation}
In matrix notation I can write \begin{equation} f(\vec{x}) = A\vec{x} - b \vec{x}^3 \end{equation} where $\vec{x}^3$ stands for $(x_1^3,x_2^3,\ldots)$ and $A$ is an $m\times m$ tridiagonal matrix (with periodic corners) of the form \begin{equation} A = \left( \begin{array}{ccccc} a & c & 0 & 0 & c \\ c & a & c & 0 & 0 \\ 0 & c & a & c & 0 \\ 0 & 0 & c & a & c \\ c & 0 & 0 & c & a \end{array} \right) \end{equation} Note that I am using periodic boundary conditions $x_{m+1}=x_1$, hence the $c$'s in the upper-right and lower-left corners.
Here is $f(x)$ in Mathematica
m = 100; a = 2.0; b = 3.0; c = 3;
A = SparseArray[{
    {m, 1} -> c, {1, m} -> c,
    Band[{1, 1}] -> a, Band[{2, 1}] -> c, Band[{1, 2}] -> c}, {m, m}];
f[x_] := A.x - b x^3
The choice of parameters is somewhat arbitrary and perhaps this definition of $f(x)$ is not the fastest due to the dot product.
I will use ACL's version LangevinVecACL, which is faster. So I first compile it
llvec = LangevinVecACL[f];
Here are two data sets for $\Gamma = 0.01$ (pretty cold) and $\Gamma = 10$ (pretty hot).
x0 = ConstantArray[0.0, m];AbsoluteTiming[ data1 = llvec[x0, .01, 4, 10^4]; data2 = llvec[x0, 10, 4, 10^4];]
The following code shows the steady-state distribution of a single realisation
GraphicsGrid[{Map[ ListPlot[#, PlotRange -> {{-1, m + 1}, {Floor@Min@#, Ceiling@Max@#}}, Filling -> Axis, Frame -> True, BaseStyle -> 14, FrameLabel -> {"Position", "Magnetization"}] &, {Last@data1, Last@data2}]}, ImageSize -> {600}]
As can be seen, at cold temperatures the system tends to divide itself into domains with all spins aligned either "up" or "down"; conversely, at high temperatures the domain structure is clearly degraded.
The following function animates the time evolution of the system.
animateSpinChain[data_] := Animate[ListPlot[data[[i]], PlotRange -> {{-1, m + 1}, {Floor[Min[data]], Ceiling[Max[data]]}}, Filling -> Axis], {i, 1, Length@data, Floor[Length@data/100]}]
Recall that Goursat's Lemma has the following useful consequence. Let $G_1, G_2$ be finite groups with no common simple non-abelian quotients, and suppose $\gcd(|G_1^{\operatorname{ab}}|, |G_2^{\operatorname{ab}}|) = 1$, where superscript $\operatorname{ab}$ denotes abelianization. If $H \subset G_1 \times G_2$ is a subgroup with the property that the natural projections $H \to G_1$ and $H \to G_2$ are surjective, then $H = G_1 \times G_2$. A statement/proof of this version of Goursat's Lemma can be found in Lemma A.4 of Zywina's article ``Elliptic Curves with Maximal Galois Action on their Torsion Points'' (see http://www.math.cornell.edu/~zywina/papers/MaximalGalois.pdf).
I would like to obtain a similar version of Goursat's Lemma in the following more general situation. Let $I$ be an at-most-countable index set, and let $\{G_i\}_{i \in I}$ be a collection of topological groups. Let the product group $G = \prod_{i \in I} G_i$ be equipped with the usual product topology (a base of opens is given by sets of the form $U_{i_1} \times \cdots \times U_{i_n} \times \prod_{i \neq i_1, \dots, i_n} G_i$, where $U_{i_j} \subset G_{i_j}$ is open for each $1 \leq j \leq n$). Perhaps a statement of Goursat's Lemma in this situation would be something like the following:
Suppose no two of the $G_i$'s have any common simple non-abelian quotients, and suppose further that no two of the $G_i$'s have any common abelian quotients (this is the analogue of saying that the abelianizations have coprime order). If $H \subset G$ is a closed subgroup with the property that the natural projections $H \to G_i$ are surjective for every $i \in I$, then $H = G$.
Is the above blocked statement true, or are further assumptions required to make it true? For example, would the statement hold if the groups $G_i$ were profinite?
Here is what I have found so far:
Simply inducting on the size of $I$ with Goursat's Lemma for finite products isn't going to get the desired result. Even if I consider the projection $H'$ of $H$ onto a product of $G_{i_1}, \dots, G_{i_n}$ and argue that $H' = \prod_{j = 1}^n G_{i_j}$ with Goursat's Lemma for finite products, I still do not immediately know that $H \supset H'$.
For most of the month of June I gathered photometric data for the star OV Boötis (OV Boo), an eclipsing cataclysmic variable star with a very short orbital period. The data were acquired using the #2 16-inch with an SBIG ST8xme camera and clear filter. The images were calibrated using BHO_ImageCalibration, and reduced using the PPX photometry program. With the exception of the first night of observing, the exposure time for each image was 50 seconds. Peranso period-search software was used to determine an orbital period of 3996.7 ± 1.4 seconds.
OV Boötis
Like all cataclysmic variable (CV) stars, OV Boo is a double star system in which a collapsed (white dwarf) star is siphoning material from the outer atmosphere of its companion star, a low-mass red dwarf that is overfilling its Roche lobe. There are, however, some unique features to this star. First of all, its orbital period is shorter than that of any other hydrogen-rich CV. There are reasonably well understood reasons why the orbital period of a “normal” CV cannot be less than about 78 minutes. Observations back up that limit; the only CVs with shorter periods are helium-rich stars. That is, until OV Boo was discovered. OV Boo’s orbital period is 66.6 minutes, a full 12 minutes shorter than is thought possible. It is also the CV with the fastest proper motion. Finally, spectra of the red dwarf star show it to be “metal” poor. These last two points indicate that it is likely a population-II object, the only one among the 3000 or so known CVs.
One of the consequences of being population-II (older, early-generation) stars is that their atmospheres lack the quantity of “metals” (elements heavier than helium) found in population-I (younger, more recent generation) stars. As a result, the atmospheres of population-II stars have much lower opacity. You can think of opacity in this sense as resistance to energy escaping. Because the atmospheres of population-II stars are more transparent, they tend not to “bloat up” as much as their younger cousins and so tend to be a bit smaller for a given mass. There is a strong correlation between the orbital period of a CV and the mean density of the secondary star such that:
$P \propto 1/\sqrt{\rho}$
where $P$ is the orbital period and $\rho$ is the secondary star’s mean density. So we can understand why OV Boo has such a short orbital period; its secondary is smaller, and thus more dense, than the population-I stars that typically are found as secondaries in CVs.
OV Boo has been the subject of a number of investigations which indicated it was similar in many ways to WZ Sge, another short orbital period CV (82 min) that has a very low mass transfer rate and only outbursts very infrequently. In fact, in March of this year, OV Boo was observed in outburst for the very first time. Subsequent observations by the Center for Backyard Astrophysics (CBA) detected what are called “superhumps” in its early outburst phase, confirming its membership in the WZ Sge class.
Better Late than Never?
As often happens, I was not able to participate in the CBA campaign to monitor OV Boo’s outburst, due to a combination of lousy weather and being very busy running and, ultimately, selling my business (Orion Studios). Still, I wanted to see if I could nail down the orbital period. Doing so would require the deepest photometry I’ve yet attempted from BHO, as the eclipses take OV Boo’s brightness down to as low as 19th magnitude. That’s a tall order for a small telescope anywhere, let alone from the middle of a major city with its god-awful light pollution. The challenge was made all the more difficult because the eclipses themselves are very fast. That meant I could not take long exposures which might improve the signal-to-noise ratio of the resulting photometry. Instead, I had to stick to shorter exposures, which limited the precision of my measurements to around $\pm 0.05$ magnitudes.
Observations of OV Boo - June 2017
Date               Number of Images   Number of Eclipses
2017 June 01/02    100                3
2017 June 02/03    195                3
2017 June 03/04    229                3
2017 June 08/09    237                3
2017 June 09/10    216                3
2017 June 24/25    174                3
2017 June 25/26    246                4
2017 June 27/28    266                5
All of the exposures were 50 seconds in duration, except for the first night, where the images were 200 seconds in duration. After that first night I decided to try increasing the sampling cadence as the eclipses were short.
Reductions and Analysis
The images were calibrated using a home-brewed python script called BHO_ImageCalibrate. The script handles all of the usual steps of bias-subtraction, dark-image scaling and subtracting, and flat-fielding. The photometry was performed using PPX, a program written in the IDL language. PPX finds all of the stars in each image and performs matching of stars from one image to the next. PPX then performs variance-weighted optimal-extraction photometry for the comparison stars and the variable star. I used DPlot to produce the chart (shown below) illustrating the photometric results for the night of June 27/28, 2017.
As can be seen in the table of observations, I managed to catch a total of 27 eclipses. The next step was to correct all of the observation times to heliocentric time – the time an observer on the sun would observe a given phenomenon. The heliocentric correction removes the changing arrival time of photons due to the earth’s movement around the sun. In the most extreme cases that time difference can be as much as 19 minutes. The data were then fed to Peranso, which incorporates a number of period-finding algorithms. Just looking at the chart, above, it’s obvious the period is a bit over 0.045 days. I used Peranso’s implementation of the Lomb-Scargle algorithm to determine that the orbital period is $latex 3996.7\pm 1.4$ seconds.
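As a sanity check on that procedure (a Python/SciPy sketch with synthetic data, not the actual Peranso/BHO pipeline), a Lomb-Scargle scan over irregularly sampled data does recover a period planted at the derived value:

```python
import numpy as np
from scipy.signal import lombscargle

# Sample a sinusoid with a 3996.7 s period at irregular times, add noise,
# then scan trial periods with Lomb-Scargle and pick the peak.
rng = np.random.default_rng(0)
period = 3996.7
t = np.sort(rng.uniform(0.0, 20 * period, 500))          # irregular sampling
y = np.sin(2 * np.pi * t / period) + 0.3 * rng.standard_normal(500)

trial_periods = np.linspace(3000.0, 5000.0, 4000)
power = lombscargle(t, y - y.mean(), 2 * np.pi / trial_periods)  # angular freqs
best = trial_periods[np.argmax(power)]
print(best)  # within a few seconds of 3996.7
```

Lomb-Scargle is well suited to this data set precisely because nightly gaps and weather make the sampling uneven, where an ordinary FFT periodogram would not apply.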
|
This attractor is unusual because it uses both the tanh() and abs() functions. A picture can be found here (penultimate image). Here is some dependency-free Python (abridged from the GitHub code, but not flattened!) to generate the data to arbitrary order:

#!/usr/bin/env python3
from sys...
I came across this basic limits question: $\lim_{x\to 0} \frac{\ln(1+x)-\sin(x)+x^2/2}{x\tan(x)\sin(x)}$. The part before the '/' is the numerator and the part after it is the denominator. The problem is, if I substitute the standard limits ($\lim_{x\to 0}\tan(x)/x=1$, $\lim_{x\to 0}\sin(x)/x=1$, $\lim_{x\to 0}\ln(1+x)/x=1$), the...
I have written some ODE solvers, using a method which may not be well known to many. This is my attempt to explain my implementation of the method as simply as possible, but I would appreciate review and corrections.At various points the text mentions Taylor Series recurrences, which I only...
1. Homework Statement: Given ##f(x) = \sum_{n=0}^\infty (-1)^n \frac {\sqrt n} {n!} (x-4)^n##, evaluate ##f^{(8)}(4)##. 2. Homework Equations: The Taylor Series Equation. 3. The Attempt at a Solution: Since the question asks to evaluate at ##x=4##, I figured that all terms in the series except...
1. Homework StatementShow that the magnitude of the net force exerted on one dipole by the other dipole is given approximately by:$$F_{net}≈\frac {6q^2s^2k} {r^4}$$for ##r\gg s##, where r is the distance from one dipole to the other dipole, s is the distance across one dipole. (Both dipoles...
We were informally introduced Taylor series in my physics class as a method to give an equation of the electric field at a point far away from a dipole (both dipole and point are aligned on an axis). Basically for the electric field: $$\vec E_{axis}=\frac q {4πε_o}[\frac {1} {(x-\frac s 2)^2}-...
I've read that, in general, the energy of a wave, as opposed to what's commonly taught, isn't strictly related to the square of the amplitude. It can be seen to be related to a Taylor series, where E = ao + a1 A + a2A2 .... Also, that the energy doesn't depend on phase, so only even terms will...
I already learned to use the Taylor series as ##f(x) = \sum f^{(n)}(x_0)/n! \, (x-x_0)^n##, but I don't see why the series changes when we use different ##x_0## points. For example, take ##f(x) = x^2##. To express the Taylor series at ##x_0 = 0##: ##f(x) = f(0) + f(0)(x-0) + \ldots = 0## due to ##f(0) = (0)^2##. At ##x_0 = 1## the series is...
1. Homework Statement: Find the Taylor series for ##\ln[(x - h^2)/(x + h^2)]##. 2. Homework Equations: ##f(x+h) = \sum_{k=0}^n f^{(k)}(x)\, h^k / k! + E_{n+1}##, where ##E_{n+1} = f^{(n+1)}(\xi)\, h^{n+1}/(n+1)!##. 3. The Attempt at a Solution: ##\ln[(x - h^2)/(x + h^2)] = \ln(x - h^2) - \ln(x + h^2)##. This is as far as I have...
Hi, I've got this:$$\sin{(A*B)}\approx \frac{Si(B^2)-Si(A^2)}{2(\ln{B}-ln{A})}$$, whenever the RHS is defined and B is close to A ( I don't know how close).Here ##Si(x)## is the integral of ##\frac{\sin{x}}{x}##But, to check it, I need to evaluate the ##Si(x)## function. I'm new with Taylor...
1. Homework Statement: Using the Taylor series at the point ##x=0## (also known as the Maclaurin series), find the limit of the expression: $$L=\lim_{x \rightarrow 0} \frac{1}{x}\left(\frac{1}{x}-\frac{\cos x}{\sin x}\right)$$ 2. Homework Equations: 3. The Attempt at a Solution: ##L=\lim_{x \rightarrow...
1. Homework StatementUsing Taylor series, Find a polynomial p(x) of minimal degree that will approximate F(x) throughout the given interval with an error of magnitude less than 10-4F(x) = ∫0x sin(t^2)dt2. Homework EquationsRn = f(n+1)(z)|x-a|(n+1)/(n+1)!3. The Attempt at a Solution...
I ran across an infinite sum when looking over a proof, and the sum gets replaced by a function, however I'm not quite sure how.$$\sum_{n=1}^\infty \frac{MK^{n-1}|t-t_0|^n}{n!} = \frac{M}{K}(e^{K(t-t_0)}-1)$$I get most of the function, I just can't see where the ##-1## comes from. Could...
I am linearizing a vector equation using the first order taylor series expansion. I would like to linearize the equation with respect to both the magnitude of the vector and the direction of the vector.Does that mean I will have to treat it as a Taylor expansion about two variables...
1. Homework StatementFind a power series that represents $$ \frac{x}{(1+4x)^2}$$2. Homework Equations$$ \sum c_n (x-a)^n $$3. The Attempt at a Solution$$ \frac{x}{(1+4x)^2} = x* \frac{1}{(1+4x)^2} $$since \frac{1}{1+4x}=\frac{d}{dx}\frac{1}{(1+4x)^2}$$...
I studied Taylor series but I would like to have an answer to a doubt that I have. Suppose I have ##f(x)=e^{-x}##. Sometimes I've heard things like: "the exponential curve can be locally approximated by a line, furthermore in this particular region it is not very sharp so the approximation is...
1. Homework StatementFind the Taylor Series for f(x)=1/x about a center of 3.2. Homework Equations3. The Attempt at a Solutionf'(x)=-x^-2f''(x)=2x^-3f'''(x)=-6x^-4f''''(x)=24x^-5...f^n(x)=-1^n * (x)^-(n+1) * (x-3)^nI'm not sure where I went wrong...
1. Homework Statement\lim\limits_{x \to 0} \left(\ln(1+x)\right)^x2. Homework EquationsMaclaurin series:\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} + ... + (-1)^{r+1} \frac{x^r}{r} + ...3. The Attempt at a SolutionWe're considering vanishingly small x, so just taking the first term...
So, I was doing a question on Laurent series. Part of it asked me to work out the pole of the function:$$ exp \bigg[\frac{1}{z-1}\bigg]$$The answer is ##1## - since, we can write out a Maclaurin expansion:(1) $$ exp\bigg[\frac{1}{z-1}\bigg] = 1+\frac{1}{z-1}+\frac{1}{2!}\frac{1}{(z-1)^{2}}...
1. Homework Statement## L (v^2 + 2 \pmb{v} \cdot \pmb{ \epsilon } ~ + \pmb{ \epsilon} ^2)##, where ## \pmb{\epsilon}## is infinitesimal and ##\pmb{v}## is a constant vector (## v^2 ## here means ## \pmb{v} \cdot \pmb{v} ## ), must be expanded in terms of powers of ## \pmb{\epsilon} ## to...
Hello,I want to prove that the taylor expansion of f(x)={\frac{1}{\sqrt{1-x}}} converges to ƒ for -1<x<1. If I didn't make a mistake the maclaurin series should look like this:Tf(x;0)=1+\sum_{n=1}^\infty{\frac{(2n)!}{(2^n n!)^2}}x^nMy attempt is to use the lagrange error bound, which is...
1. Homework StatementTo rephrase the question, given a power series representation for a function, like ex , and its MacLaurin Series, when I expand the two there's no difference between the two, but my question is: Is this true for all functions? Or does the Radius of Convergence have to do...
In my multivariable calculus class, we briefly went over Taylor polynomial approximations for functions of two variables. My professor said that the second degree terms include any of the following:$$x^2, y^2, xy$$What surprised me was the fact that xy was listed as a nonlinear term.In...
I don't think I've fully grasped the underlying ideas of this class, so at the moment I'm just sort of flailing for equations to plug stuff into... 1. Homework Statement: Show that in the mean field model, M is proportional to H^{1/3} at T=Tc and that at H=0, M is proportional to (Tc - T)^{1/2}...
1. Homework StatementFor ln(.8) estimate the number of terms needed in a Taylor polynomial to guarantee an accuracy of 10^-10 using the Taylor inequality theorem.2. Homework Equations|Rn(x)|<[M(|x-a|)^n+1]/(n+1)! for |x-a|<d.3. The Attempt at a SolutionAll I've done so far is take a...
I am studying power series right now and I am understanding well how to write them and where they converge but I am having some trouble grasping the Taylor Remainder Theorem for a few reasons.First of all it says the remainder is:f^(n+1)(c)(x-a)^(n+1)/(n+1)! for some c between a and x.I...
I am just trying to clarify this point which I am unsure about:If I am asked to write out (for example) a third order taylor polynomial for sin(x), does that mean I would write out 3 terms of the series OR to the x^3 term.x-x^3/3!+x^5/5!or justx-x^3/3!Also, I have a question for the...
The problem is as the title says. This is an example we went through during the lecture and therefore I have the solution. However there is a particular step in the solution which I do not understand.Using the Taylor series we will write sin(x) as:sin(x) = x - (x^3)/6 + (x^5)B(x)and...
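Several of the threads above ask for limits via Maclaurin expansions; a computer algebra system makes a handy sanity check. For instance, the limit $L=\lim_{x \to 0} \frac{1}{x}\left(\frac{1}{x}-\frac{\cos x}{\sin x}\right)$ from one of the threads comes out to 1/3 (an illustrative check with sympy, not part of any of the original posts):

```python
import sympy as sp

x = sp.symbols('x')
# Maclaurin route: cot(x) = 1/x - x/3 - x^3/45 - ..., so (1/x)(1/x - cot x) -> 1/3.
L = sp.limit((1 / x) * (1 / x - sp.cos(x) / sp.sin(x)), x, 0)
print(L)  # 1/3
```

The same `sp.limit`/`sp.series` calls can verify the other expansion-based limits listed above.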
|
I’m not trying to feed a fed horse (I had to) with this post, but there’s nothing more dangerous than drawing conclusions from a poorly executed AB test. This post will detail the factors to consider when designing an AB test and the reasoning behind each factor.
Null Hypothesis (\(H_{0})\)
Coming up with a hypothesis to test is arguably the most important part of an experiment. There is nothing to test without a hypothesis. The hypothesis we seek to reject is the Null Hypothesis. A common \(H_{0}\) for AB tests is \(\mu_{A}=\mu_{B}\), which states that the mean of sample A is equal to the mean of sample B.
p-value
The p-value is a powerful statistic, and typically if this value is lower than 0.05 we reject the Null Hypothesis. That is a very simplistic definition, as there is more to know about the p-value. Rather than a magic number, we should think of the p-value as the probability of an event equal to or more extreme than the observed one occurring when the Null Hypothesis (\(H_{0}\)) is TRUE. When the p-value is under 0.05, it means such an event would be unlikely under the Null, so we reject the Null and accept an alternative hypothesis.
..but WAIT!!
Calculating the p-value is not enough. In fact, well before we calculate the metric, we need to decide how large our data sample should be, and when running an experiment we need to complete the data-acquisition stage before calculating p-values.
Consider the following fair-coin toss example.
Imagine you want to test whether a coin is fair or not. You flip a coin 10000 times and after each toss count the cumulative number of times the coin landed on Heads. Using the rbinom R function we simulate the tosses, assigning a probability of 50% for Heads. Then, using the binom.test R function, we check the p-value for our \(H_{0}: p = 0.5\). Since we assigned a probability of 50%, we know the Null Hypothesis is TRUE, so we expect the p-value to be high.
library(dplyr)    # for %>%
library(ggplot2)

set.seed(2)
n <- 10000
toss <- rbinom(n, 1, prob = 0.5)
p.values <- numeric(n)
erun <- numeric(n)
for (i in 5:n) {
  p.values[i] <- binom.test(table(toss[1:i]), p = 0.5)$p.value
  erun[i] <- i
}
df <- data.frame(erun, p.values)
df %>% ggplot(aes(x = erun, y = p.values)) +
  geom_line() +
  geom_hline(yintercept = 0.05, color = 'red', size = 1, linetype = 'dashed', alpha = 0.5) +
  labs(x = "Coin Flips", y = "p-value", title = "Change in p-value") +
  theme_bw() +
  theme(axis.text = element_text(size = 10), title = element_text(size = 10))
As we can see in the graph, the p-values stay above the standard threshold of 0.05 throughout the duration of the experiment, but with some large fluctuations: as we continue tossing coins, the p-value jumps all over. In this case we (correctly) fail to reject the Null Hypothesis at every point, but it is possible not to be this lucky.
set.seed(22)
n <- 10000
toss <- rbinom(n, 1, prob = 0.5)
p.values <- numeric(n)
erun <- numeric(n)
for (i in 5:n) {
  p.values[i] <- binom.test(table(toss[1:i]), p = 0.5)$p.value
  erun[i] <- i
}
In this example, we repeated the experiment with the exact same parameters as above, but this time we do see our p-value fall under 0.05. As we continue to toss more coins, the p-value rises back above the threshold. The lesson here: the p-value is a strong metric, but it’s only reliable when the sample size is fixed in advance and large enough. If we peek early, we might draw erroneous conclusions.
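The peeking problem can also be demonstrated directly by simulation. The sketch below (in Python rather than R, with checkpoint spacing and seeds of my own choosing) repeats the fair-coin experiment many times and compares the false-positive rate when we stop as soon as p < 0.05 against the rate when we test only once, at the pre-planned sample size:

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(22)
n_flips, n_experiments = 500, 200
peeked = fixed = 0
for _ in range(n_experiments):
    heads = np.cumsum(rng.integers(0, 2, n_flips))
    # "Peek" every 25 flips; the coin is fair, so every rejection is a false positive.
    pvals = [binomtest(int(heads[i - 1]), i, 0.5).pvalue
             for i in range(25, n_flips + 1, 25)]
    peeked += min(pvals) < 0.05   # stopped early at any checkpoint
    fixed += pvals[-1] < 0.05     # tested only once, at n = 500
print(peeked / n_experiments, fixed / n_experiments)
```

With repeated looks the realized Type I error rate climbs well above the nominal 5%, while the fixed-sample test stays near it.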
Sample Size
We know we need a large enough sample size, but exactly how large? This depends on the effect we wish to detect and the variance of our data. Detecting a smaller effect will increase the required sample size.
The following formula is commonly used to calculate a sample size
\(n = 16 \frac{\sigma^{2}}{\delta^2}\)
Where \( \delta\) is the minimum difference in sample means we wish to detect. Equivalently, with the standardized effect size \(d = \delta/\sigma\) (the difference in means divided by the pooled standard deviation), the formula reduces to \(n = 16/d^2\). Also notice that a large sample variance results in a larger required sample size.
If you’re an R user you can use the following to get a sample size:
library(pwr)
pwr.t.test(n = NULL, d = 0.1, sig.level = 0.05, power = 0.8)

     Two-sample t test power calculation

              n = 1570.733
              d = 0.1
      sig.level = 0.05
          power = 0.8
    alternative = two.sided

NOTE: n is number in *each* group
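The same calculation can be done in Python using the normal approximation behind pwr's t-test (the function name below is my own, not a library API):

```python
from scipy.stats import norm

def two_sample_n(d, alpha=0.05, power=0.8):
    """Per-group n for a two-sided, two-sample test at standardized effect d."""
    z_a = norm.ppf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    z_b = norm.ppf(power)           # about 0.84 for power = 0.80
    return 2 * (z_a + z_b) ** 2 / d ** 2

print(two_sample_n(0.1))  # ~1570 per group, close to pwr.t.test's 1570.733
```

Note that 2(z_a + z_b)^2 is about 15.7 at these defaults, which is where the "16" in the rule-of-thumb formula above comes from.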
Power and significance level
Power is the probability of finding an effect when there is one. We can think of it as 1 - P(Type II error). It is standard to set this at 80%, but this can change based on the nature of the experiment.
Significance level is the probability of finding an effect that is not there; in other words, P(Type I error). The standard is to set this at 0.05.

Conclusion
So there you have it: AB testing is a very “simple” and powerful technique, but we NEED to make sure we have enough data to draw conclusions. Immediately after writing our hypothesis, we should calculate the minimum required sample size, and only if we know we can collect that minimum sample needed to detect an effect should we proceed with the experiment.
Notes: A Type I error occurs when we Reject the Null Hypothesis when it is True. A Type II error occurs when we Fail to Reject the Null Hypothesis when it is False.
|
How many solutions are there to $\sin \theta + 7 = 8$ in the domain $[0^\circ, 1000^\circ]$?
What is the minimum value of a positive number $x$ such that the value of $\sin(x+\frac{\pi}{8})$ is $0$?
How many $x$'s in the interval $0 \leq x \leq 15\pi$ satisfy $\tan x = -\sqrt{3}$?
What are the solutions to $12\sin \left( 2x-\frac{7}{6}\pi \right)=6\sqrt{2}$ in the interval $x \in [0,\pi]$?
Which of the following is a solution to $\sin( 2\theta - 31^\circ ) = \cos 49^\circ$?
|
For purposes of scientific discovery, the field of insider-threat detection often lacks sufficient amounts of time-series training data. Moreover, the limited data that
are available are quite noisy. For instance, Greitzer and Ferryman (2013) state that ‘ground truth’ data on actual insider behavior is typically either not available or is limited. In some cases, one might acquire real data, but for privacy reasons there is no attribution of any individuals relating to abuses or offenses (i.e., there is no ground truth). The data may contain insider threats, but these are not identified or knowable to the researcher (Greitzer and Ferryman, 2013; Gheyas and Abdallah, 2016).

The Problem
Having limited and quite noisy data for insider-threat detection presents a major challenge when estimating time-series models that are robust to overfitting and have well-calibrated uncertainty estimates. Most of the current literature in time-series modeling for insider-threat detection is associated with two major limitations. First, the methods involve visualizing the time series for noticeable structure and patterns such as periodicity, smoothness and growing/decreasing trends, and then hard-coding these patterns into the statistical models during formulation. This approach is suitable for large datasets where more data typically provides more information to learn expressive structure. Given limited amounts of data, such expressive structure may not be easily noticeable. For instance, the figure below shows monthly attachment sizes in emails sent by an insider from his employee account to his home account. Trends such as periodicity, smoothness and growing/decreasing trends are not easily noticeable.
Second, most of the current literature focuses on parametric models that impose strong restrictive assumptions by pre-specifying the functional form and number of parameters. Pre-specifying a functional form for a time-series model could lead either to overly complex model specifications or to simplistic models. It is difficult to know a priori the most appropriate function to use for modeling sophisticated insider-threat behavior that involves complex hidden patterns and many other influencing factors.
Haystax Technology conducted a research study to address these problems.
Data Science Questions
Given the above limitations in the current state-of-art, our study formulated the following three data science questions: Given limited and quite noisy time-series data for insider-threat detection, is it possible to perform:
Pattern discovery without hard-coding trends into statistical models during formulation?
Model estimation that precludes pre-specifying a functional form?
Model estimation that is robust to overfitting and has well-calibrated uncertainty estimates?
Hypothesis
To answer these three questions and address the limitations described above, our study formulated the following hypothesis:
By leveraging current state-of-the-art innovations in nonparametric Bayesian methods, such as Gaussian processes with spectral mixture kernels, it is possible to perform pattern discovery without prespecifying functional forms and hard-coding trends into statistical models.
Methodology
To test our hypothesis, a nonparametric Bayesian approach was proposed to implicitly capture hidden structure from time series having limited data. The proposed model, a Gaussian process with a spectral mixture kernel, precludes the need to pre-specify a functional form and hard-code trends, is robust to overfitting and has well-calibrated uncertainty estimates.
(Mathematical details of the proposed model formulation are described in a corresponding paper that can be found on arXiv through the following link: Emaasit, D. and Johnson, M. (2018). Capturing Structure Implicitly from Time-Series having Limited Data. arXiv preprint arXiv:1803.05867.)
A brief description of the fundamental concepts of the proposed methodology is as follows: Consider for each data point, $i$, that $y_i$ represents the attachment size in emails sent by the insider to his home account and $x_i$ is a temporal covariate, such as month. The task is to estimate a latent function, $f$, which maps input data $x_i$ to output data $y_i$ for $i = 1, 2, \ldots, N$, where $N$ is the total number of data points. Each of the input data $x_i$ is of a single dimension $D = 1$, and $\textbf{X}$ is a $N \times D$ matrix with rows $x_i$.
The observations are assumed to satisfy:
\begin{equation}\label{eqn:additivenoise} y_i = f(x_i) + \varepsilon, \quad where \, \, \varepsilon \sim \mathcal{N}(0, \sigma_{\varepsilon}^2) \end{equation} The noise term, $\varepsilon$, is assumed to be normally distributed with zero mean and variance $\sigma_{\varepsilon}^2$. The latent function $f$ represents the hidden underlying trends that produced the observed time-series data.
Our study proposed a prior distribution, $p(\textbf{f})$, over an infinite number of possible functions of interest, given that it is difficult to know a priori the most appropriate functional form to use for $f$. A natural prior over an infinite space of functions is a Gaussian-process (GP) prior (Williams and Rasmussen, 2006). A GP is fully parameterized by a mean function, $\textbf{m}$, and a covariance function, $\textbf{K}_{N,N}$, denoted as: \begin{equation}\label{eqn:gpsim} \textbf{f} \sim \mathcal{GP}(\textbf{m}, \textbf{K}_{N,N}). \end{equation}
The posterior distribution over the unknown function evaluations, $\textbf{f}$, at all data points, $x_i$, was estimated using Bayes’ theorem, as follows:
\begin{equation}\label{eqn:bayesinfty} \begin{aligned} p(\textbf{f} \mid \textbf{y},\textbf{X}) &= \frac{p(\textbf{y} \mid \textbf{f}, \textbf{X}) \, p(\textbf{f})}{p(\textbf{y} \mid \textbf{X})} = \frac{p(\textbf{y} \mid \textbf{f}, \textbf{X}) \, \mathcal{N}(\textbf{f} \mid \textbf{m}, \textbf{K}_{N,N})}{p(\textbf{y} \mid \textbf{X})}, \end{aligned} \end{equation} where:

$p(\textbf{f}\mid \textbf{y},\textbf{X})$ = the posterior distribution of functions that best explain the email-attachment size, given the covariates

$p(\textbf{y} \mid \textbf{f}, \textbf{X})$ = the likelihood of the email-attachment size, given the functions and covariates

$p(\textbf{f})$ = the prior over all possible functions of email-attachment size

$p(\textbf{y} \mid \textbf{X})$ = the marginal likelihood of the data (a normalizing constant)
This posterior is a GP composed of a distribution of possible functions that best explain the time-series pattern.
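To make the machinery above concrete, the following self-contained numpy sketch computes a GP posterior mean and a 95% predictive band for a toy series. For brevity it uses a plain squared-exponential kernel rather than the spectral mixture kernel of the paper, and the data are synthetic, not the CERT series:

```python
import numpy as np

def rbf(a, b, ls=2.0, var=1.0):
    """Squared-exponential covariance (a simple stand-in for the SM kernel)."""
    return var * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

X = np.arange(10, dtype=float)                       # 10 "months" of training data
y = np.sin(X / 2.0) + 0.1 * np.random.default_rng(0).normal(size=X.size)
noise = 0.1 ** 2

K = rbf(X, X) + noise * np.eye(X.size)
Xs = np.linspace(0.0, 12.0, 50)                      # includes a forecast region
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

mean = rbf(X, Xs).T @ alpha                          # posterior mean
v = np.linalg.solve(L, rbf(X, Xs))
var = np.diag(rbf(Xs, Xs)) - np.sum(v * v, axis=0)   # posterior variance
band = 1.96 * np.sqrt(np.clip(var, 0, None) + noise) # 95% predictive band
```

Observations falling outside mean ± band would be flagged as anomalous, which mirrors the credible-interval test used in the empirical analysis.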
Experiments

Raw data and sample formation
The insider-threat data used for empirical analysis in this study was provided by the Computer Emergency Response Team (CERT) division of the Software Engineering Institute at Carnegie Mellon University. The particular case used is a known insider who sent information as email attachments from his work email to his home email. The pydata software stack, including packages such as pandas, numpy, matplotlib, seaborn and others, was used for data manipulation and visualization. The figure below shows that email attachment sizes increased drastically in March and April 2011.
Empirical analysis
In the figure below, the first 10 data points (shown in black) were used for training and the rest (in blue) for testing. The figure also shows that the GP model with a spectral mixture kernel is able to capture the structure implicitly both in regions of the training and testing data. The 95% predicted credible interval contains the ‘normal’ size of email attachments for the duration of the measurements. The GP model was also able to detect both of the anomalous data points, shown in red, that fall outside of the 95% predicted credible interval.
An ARIMA model was estimated using the statsmodels Python package for comparison. The figure below shows that the ARIMA model is poor at capturing the structure within the region of testing data. This finding suggests that ARIMA models perform poorly on small datasets without noticeable structure. The 95% confidence interval for ARIMA is much wider than the GP model’s, showing a high degree of uncertainty about the ARIMA predictions. The ARIMA model is able to detect only one anomalous data point, in April 2011, missing the earlier anomaly in March 2011.
It’s important to note that the machine-learning approach described above is able to predict anomalous data points, such as unusually large email attachment sizes, but it does not tell us why this behavior happened. Knowledge about causality is often locked in the brains of domain experts (adjudicators, threat experts, psychologists, HR professionals, etc.) who understand the behavior of humans and the leading indicators of insider-threat activity. Data scientists need to capture this domain knowledge and combine it with machine-learned indicators in order to understand this behavior.
At Haystax, we go about this critical step by capturing expertise in a probabilistic (i.e., Bayesian) model and feeding many machine-learned indicators to understand/predict the risk score of an insider. In the next blog post, we will demonstrate this approach by using machine learning to extract more anomalous events related to this user and feed them into a probabilistic model of their risk score.
Daniel Emaasit is a Data Scientist at Haystax Technology. For a more detailed treatment of this study, please see Daniel’s blog.

Source code
For interested readers, two options are provided below to access the source code used for empirical analyses:
The entire project (code, notebooks, data and results) can be found here on GitHub.

References
Emaasit, D. and Johnson, M. (2018). Capturing Structure Implicitly from Noisy Time-Series having Limited Data. arXiv preprint arXiv:1803.05867.
Williams, C. K. and Rasmussen, C. E. (2006). Gaussian processes for machine learning. The MIT Press, 2(3):4.
Knudde, N., van der Herten, J., Dhaene, T., & Couckuyt, I. (2017). GPflowOpt: A Bayesian Optimization Library using TensorFlow. arXiv preprint arXiv:1711.03845.
Wilson, A. G. (2014). Covariance kernels for fast automatic pattern discovery and extrapolation with Gaussian processes. University of Cambridge.
Greitzer, F. L. and Ferryman, T. A. (2013). Methods and metrics for evaluating analytic insider threat tools. Security and Privacy Workshops (SPW), 2013 IEEE, pages 90–97. IEEE.
Gheyas, I. A. and Abdallah, A. E. (2016). Detection and prediction of insider threats to cybersecurity: a systematic literature review and meta-analysis. Big Data Analytics, 1(1):6.
|
A Seifert surface for the (3,4) torus knot is defined by the intersection of $S^3 \subset \mathbb{C}^2$ with the set $\arg(z^4-w^3)=0$. The boundary is at $z^4=w^3$ and this is the torus knot. Here is a picture of its stereographic projection into $\mathbb{R}^3$:
If you replace $w\to e^{\frac{2\pi i}{3}}w$, this surface is unchanged. In the picture, the point in the middle gets mapped to the point at the top, the point at the top gets mapped to the point at the bottom, and the point at the bottom gets mapped to the point in the middle.
I want to use Mathematica to make a smooth animation of this happening. I've tried using ContourPlot3D and ParametricPlot3D, but both produce surfaces that are joined up in a different way to the picture. I've also tried generating some random points on $S^3$, selecting those that satisfy $|\arg(z^4-w^3)|<\epsilon$ for a small $\epsilon$, and that does produce points looking like the picture. I've put the code for doing that at the bottom. If I use ListSurfacePlot3D instead of ListPointPlot3D, it gives the wrong picture. I can't believe this is the best Mathematica can do.

Explicit Definition of the Surface
The surface is the set of points $(z,w)\in \mathbb{C}^2$ such that
$|z|^2+|w|^2=1$ and $\arg(z^4-(e^{i\theta} w)^3)=0$,
where $\theta$ is a parameter that I want to vary for the animation. This surface is projected into $\mathbb{R}^3$ by stereographic projection
$f(z,w)=\left(\frac{\Re(z)}{1-\Im(w)},\frac{\Im(z)}{1-\Im(w)},\frac{\Re(w)}{1-\Im(w)}\right)$
Mathematica Code
n = 100;
ε = 0.01;
R4pts = RandomVariate[MultinormalDistribution[ConstantArray[0, 4], IdentityMatrix[4]], 300000];
S3pts = #/Norm[#] & /@ R4pts;
R3proj[a_] = {x/(1 - w), y/(1 - w), z/(1 - w)} /. {x -> a[[1]], y -> a[[2]], z -> a[[3]], w -> a[[4]]};
complexify[a_] = {x + I y, z + I w} /. {x -> a[[1]], y -> a[[2]], z -> a[[3]], w -> a[[4]]};
C2pts = complexify /@ S3pts;
C2ptsSel = Select[C2pts, (Abs[Arg[z^4 - Exp[(I π)/2] w^3]] < ε) /. {z -> #[[1]], w -> #[[2]]} &];
C2colours = Im[#[[2]]] & /@ C2ptsSel;
realify[a_] = {x, y, z, w} /. {x -> Re[a[[1]]], y -> Im[a[[1]]], z -> Re[a[[2]]], w -> Im[a[[2]]]};
S3ptsSel = realify /@ C2ptsSel;
R3pts = R3proj /@ S3ptsSel;
ListPointPlot3D[{#} & /@ R3pts,
  PlotRange -> {{-2, 2}, {-2, 2}, {-2, 2}},
  PlotStyle -> ((Hue[(# - Min[C2colours])/(Max[C2colours] - Min[C2colours])] &) /@ C2colours),
  AspectRatio -> 1, ViewPoint -> {2, 1, 0.7}, ImageSize -> 400]
EDIT: The same code with ListSurfacePlot3D gives this picture:
|
October 30th, 2014, 01:55 PM
# 1
Help with these surd equations?!
I have these equations
1.$\displaystyle \frac{5^2x5^{-3}}{5^{-4}x5^{6}}$
2.$\displaystyle \frac{(x^5y^3)^4}{x^3y^2}$
3.$\displaystyle \frac{\sqrt{6}+4}{\sqrt{4}}$
Any help?
October 30th, 2014, 05:22 PM
# 2
the x factors in the numerator & denominator divide out to 1
numerator ... $\displaystyle 5^2 \cdot 5^{-3} = 5^{-1}$
denominator ... $\displaystyle 5^{-4} \cdot 5^6 = 5^2$
$\displaystyle \frac{5^{-1}}{5^2} = 5^{-1-2} = 5^{-3} = \frac{1}{5^3}$
2. $\displaystyle \frac{(x^5y^3)^4}{x^3y^2}$
$\displaystyle \frac{x^{20}y^{12}}{x^3y^2}$ ... now use the rule for division for exponents
3. $\displaystyle \frac{\sqrt{6}+4}{\sqrt{4}}$
come on ... what is $\displaystyle \sqrt{4}$ ?
|
Measurement of the form factors in the decay $D^+ \to \omega e^+ \nu_{e}$ and search for the decay $D^+ \to \phi e^+ \nu_{e}$
2016 (English). In: PHYSICAL REVIEW D, ISSN 2470-0010, Vol. 92, no 7, article id 071101. Article in journal (Refereed), Published. Resource type: Text. Abstract [en]:
Using 2.92 fb^-1 of electron-positron annihilation data collected at a center-of-mass energy of √s = 3.773 GeV with the BESIII detector, we present an improved measurement of the branching fraction B(D+→ωe+νe) = (1.63±0.11±0.08)×10^-3. The parameters defining the corresponding hadronic form factor ratios at zero momentum transfer are determined for the first time; we measure them to be rV = 1.24±0.09±0.06 and r2 = 1.06±0.15±0.05. The first and second uncertainties are statistical and systematic, respectively. We also search for the decay D+→ϕe+νe. An improved upper limit B(D+→ϕe+νe) < 1.3×10^-5 is set at 90% confidence level.
Place, publisher, year, edition, pages
2016. Vol. 92, no 7, article id 071101
National Category: Subatomic Physics
Identifiers: URN: urn:nbn:se:uu:diva-287667; OAI: oai:DiVA.org:uu-287667; DiVA, id: diva2:923273
Note
Funding: The BESIII Collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by National Key Basic Research Program of China under Contract No. 2015CB856700; National Natural Science Foundation of China (NSFC) under Contracts No. 11125525, No. 11235011, No. 11322544, No. 11335008, and No. 11425524; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; the CAS Center for Excellence in Particle Physics; the Collaborative Innovation Center for Particles and Interactions; Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contracts No. 11179007, No. U1232201, and No. U1332201; CAS under Contracts No. KJCX2-YW-N29 and No. KJCX2-YW-N45; 100 Talents Program of CAS; National 1000 Talents Program of China; INPAC and Shanghai Key Laboratory for Particle Physics and Cosmology; German Research Foundation DFG under Collaborative Research Center Contract No. CRC-1044; Istituto Nazionale di Fisica Nucleare, Italy; Joint Funds of the National Science Foundation of China under Contract No. U1232107; Ministry of Development of Turkey under Contract No. DPT2006K-120470; Russian Foundation for Basic Research under Contract No. 14-07-91152; The Swedish Research Council; US Department of Energy under Contracts No. DE-FG02-04ER41291, No. DE-FG02-05ER41374, No. DE-SC0012069, and No. DESC0010118; US National Science Foundation; University of Groningen and the Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt; and WCU Program of National Research Foundation of Korea under Contract No. R32-2008-000-10155-0.
|
I'm reading a linear algebra book and it defines the integration operation as $T \in L(P(\Bbb R), \Bbb R)$. However, it defines the differentiation operation as $T \in L(P(\Bbb R), P(\Bbb R))$. Don't they both map to the vector space that contains all polynomials? Why is integration as a linear function defined this way?
Because integration gives you a real number value
Integration is considered a transformation T: $ P(\mathbb R) \to \mathbb R $ because a definite integral of the form $ \int_a^b p(x) dx $ will typically give you a real number value: i.e. the area under the curve of p(x) on [a,b]. For example, consider: $$ T(p(x))= \int_0^1 p(x) dx $$
Let's take this transformation for $p(x)=x^2$: $$ \int_0^1 x^2 dx = \frac{1}{3}$$
The transformation T has taken our polynomial, $x^2$, which is in the set $P(\mathbb R)$, and produced a fraction, which is in the set $\mathbb R$, so for any polynomial in the set, the transformation (the definite integral on [0,1] in this case) will produce a real number. That's why we consider the integral to be a linear transformation between these two spaces.
A derivative applied to a first-order polynomial $(ax+C)$ gives a constant, which can be identified with a real number in the same way.
However, for the indefinite integral $\int p(x) dx $, you are correct that most of the time you will get a polynomial back as an answer, and the derivative of a higher-order polynomial will also give you a polynomial.
TL;DR: the definite integral maps to $\mathbb R$, but the indefinite integral and the derivative remain in $P(\mathbb R)$.
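As a rough numerical illustration (a sketch of my own, using numpy's `Polynomial` class to stand in for elements of $P(\Bbb R)$), the definite integral on $[0,1]$ really is a linear map into $\mathbb R$:

```python
from numpy.polynomial import Polynomial

def T(p):
    # definite integral over [0, 1]: a map P(R) -> R
    antideriv = p.integ()
    return antideriv(1.0) - antideriv(0.0)

p = Polynomial([0, 0, 1])   # x^2
q = Polynomial([1, 2])      # 1 + 2x

print(T(p))                                # the 1/3 from the answer (up to rounding)
print(T(3 * p + 2 * q) - (3 * T(p) + 2 * T(q)))  # linearity: difference is ~0
```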
|
Consider a Gaussian process $X(t):\mathbb{R}\times \Omega \to \mathbb{R}$ with $\Omega$ a probability space and $\mathbb{E} \left[ X_t \right] = 0$ for all $t\in \mathbb{R}$.
Consider also its KL expansion $X(t) = \sum\limits_{k=0}^{\infty} Z_k e_k (t)$, with $Z_k$ being pairwise uncorrelated Gaussian random variables.
I'm interested in what is known about the convergence rate of the finite expansion $\sum\limits_{k=0}^N Z_k e_k (t)$. Mainly
When can we expect an exponentially small pointwise (or a.e.) error term in $N$? Given a smooth function $f$, when can we approximate $\int\limits_{\mathbb{R}} f(X(t)) \, dt $ with an integration over $d^N \mathbf{Z}$ with an exponentially small error term? Extra details: I've learned that the functions $e_k (t)$ are, in fact, the eigenvectors of the following integral kernel over $L^2\left(\mathbb{R} \right)$,$$Ku(t) = \int\limits_{\mathbb{R}} \mathbb{E} \left[ X_t X_s \right] u(s) \, ds \, . $$
For more details, see this concise introduction here.
Thanks
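To make the question concrete, here is a sketch (my own construction, not from the question) of a truncated KL expansion for a process whose eigenpairs are known in closed form: standard Brownian motion on $[0,1]$, with $e_k(t)=\sqrt{2}\sin((k-\tfrac12)\pi t)$ and $\lambda_k = ((k-\tfrac12)\pi)^{-2}$. Note the truncation error here decays only polynomially in $N$; exponential rates are tied to very smooth (e.g. analytic) covariance kernels.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)

def kl_sample(N, rng):
    # Truncated KL expansion: sum_k Z_k sqrt(lambda_k) e_k(t)
    k = np.arange(1, N + 1)
    freq = (k - 0.5) * np.pi          # sqrt(1/lambda_k)
    Z = rng.standard_normal(N)        # uncorrelated standard Gaussians
    return (Z / freq) @ (np.sqrt(2.0) * np.sin(np.outer(freq, t)))

path = kl_sample(50, rng)
# Sanity check: the eigenvalue series reproduces Var X(1) = 1,
# but the tail beyond N terms only decays like 1/N.
```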
|
Given a set of vertices $\{x_\alpha\}$ whose cardinality exceeds $\aleph_1$, (assume the axiom of choice) connect each vertex with its successor by an edge, forming a linear graph. Choose two vertices $x_i$ and $x_j$ such that $card\{x|x_i \lt x \lt x_j\} \gt \aleph_1$. On one hand, it is impossible to construct a path corresponding to the edge path from $x_i$ to $x_j$, since there does not exist a continuous, surjective map $\mathcal f: \mathbf I \to \mathbf S$, from the unit interval to the subgraph $\mathbf S$ between $x_i$ and $x_j$. On the other hand, the graph is connected and locally path-connected, and thereby path-connected, so that there must exist a path from $x_i$ to $x_j$, a contradiction.
You’re essentially looking at the lexicographic order topology on $\preceq$ on $X=\alpha\times[0,1)$ for an ordinal $\alpha\ge\omega_2$. This $X$ is not locally path-connected: the points $\langle\eta,0\rangle$ such that $\operatorname{cf}\eta\ge\omega_1$ (and hence in particular the point $\langle\omega_1,0\rangle$) do not have path-connected nbhds. Thus, there is no contradiction.
If you take $\alpha=\omega_1$, so that you have a set of vertices of cardinality $\omega_1$, you’re looking at the closed long ray, which is connected and locally path-connected (and hence path-connected). If, however, you add a righthand endpoint, you have (up to homeomorphism) a subspace of $X$ (above), $\{x\in X:x\preceq\langle\omega_1,0\rangle\}$, with righthand endpoint $\langle\omega_1,0\rangle$. As noted above, this point does
not have a path-connected nbhd, and indeed this space is not path-connected: there is no path from any $x\prec\langle\omega_1,0\rangle$ to $\langle\omega_1,0\rangle$. (As Dan Rust suggested in the comments, this answer and the comments below it may be helpful.)
|
I suppose I must have learned about Bayes' Theorem at sixth form. In those days, maths was Pure and Applied, so I could - and did - largely avoid statistics until midway through my PhD I was roundly mocked by my future boss for not knowing what 'significant' meant. I quickly put that right.
However, I know I knew about Bayes' Theorem because of something a good friend told me while we were supposed to be working on our seditious sixth-form magazine: "If you had to pick someone to play Minesweeper for your life, it'd be Colin." I think he meant it as a criticism (his record times were far below mine, and he won many more games than I did; however, my win percentage dwarfed his.) My secret? Maths.
Thinking about it, that's not much of a secret.
When you start a game of Minesweeper on the basic level (my friend's favourite, of course), you have to find ten mines spread across a field of 81 squares. Even my pal could work out that that gave you about a one-in-eight chance of hitting a mine. But then, something you might not know: Minesweeper never gives you a mine on the first go, which is awfully nice of it.
Clicking in the corner is, I reckon, the best way to start. First up: you have about a 67% chance of hitting a clear square, which normally gives you plenty to work with. Your chance of getting a useless 1 is about 30%.
Clicking in the middle will leave you with a clear square about 33% of the time, and give you a useless 1 about 41% of the time. The fewer neighbours you have, the more likely you are to get a good start.
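Those opening-move figures can be checked with a hypergeometric count: the first click is guaranteed safe, so the ten mines sit among the other 80 cells, and a cell with $n$ neighbours shows a blank when none of them is a mine, a useless 1 when exactly one is. (The function name below is mine, just for the sketch.)

```python
from math import comb
from fractions import Fraction

def p_neighbour_mines(n_neighbours, k_mines):
    # Hypergeometric: k of the 10 mines among n neighbours,
    # drawn from the 80 cells that might hold a mine.
    return Fraction(comb(10, k_mines) * comb(70, n_neighbours - k_mines),
                    comb(80, n_neighbours))

print(float(p_neighbour_mines(3, 0)))  # corner blank, ~0.67
print(float(p_neighbour_mines(3, 1)))  # corner "1",   ~0.29
print(float(p_neighbour_mines(8, 0)))  # middle blank, ~0.33
print(float(p_neighbour_mines(8, 1)))  # middle "1",   ~0.41
```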
Bayes' Theorem - one of the few bits of progress to come out of the 18th century between Newton and Babbage - is pretty reasonable: if you want to find the probability of something
when you have extra information, the updated ('conditional') probability is the probability of both things happening divided by the probability of your information.
For instance, if you want to know the probability of rolling at least a 10 on two dice, given that you've already rolled a 6, you'd work out the probability of rolling either (6,4), (6,5), or (6,6) - the only three ways of rolling at least 10 when the first die is a 6. That would be three rolls out of 36 possible pairs, making one in twelve on the top. The probability of rolling a six - the condition - goes on the bottom; that's one in six. So, you do a twelfth divided by a sixth, which is a half. In case you've given up on fractions like a fool:
$$\frac{1}{12} \div \frac{1}{6} = \frac{1}{12} \times \frac{6}{1} = \frac{6}{12} = \frac{1}{2}$$
(Really, read up on fractions. They'll make your life so much easier.)
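If you'd rather trust a computer than fractions, the dice example is small enough to brute-force: count both events, divide by the condition, exactly as the definition says.

```python
from itertools import product
from fractions import Fraction

# Enumerate all 36 ordered rolls of two dice and apply
# P(total >= 10 | first die is 6) = P(both) / P(condition).
rolls = list(product(range(1, 7), repeat=2))
both = sum(1 for a, b in rolls if a == 6 and a + b >= 10)
cond = sum(1 for a, b in rolls if a == 6)

result = Fraction(both, 36) / Fraction(cond, 36)
print(result)  # -> 1/2
```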
For example, imagine you have failed to take this advice and clicked somewhere in the middle of the screen to get a 1 - then you've compounded your error and clicked next to it to get a 2. What were you thinking? OK, well, let's figure out what you need to do next.
What you've now done is split the world up into four distinct types of cell: you've got three cells (marked A) next to the 1 but not next to the 2; three other cells (marked B) next to the 2 but not next to the 1; and four cells (marked C) next to both of them. Oh, and you have 69 other cells that aren't next to either.
You've also got two possible scenarios: either there's a mine in zone A and two in zone B, (leaving 7 elsewhere) or there's a mine in zone C and one in zone B (leaving 8 elsewhere).
Look, here's a table showing the probability of hitting a mine if you pick a random cell in a given zone in each situation:
| Zone | Scenario ABB | Scenario BC |
|---|---|---|
| A | $\frac{1}{3}$ | $0$ |
| B | $\frac{2}{3}$ | $\frac{1}{3}$ |
| C | $0$ | $\frac{1}{4}$ |
| other | $\frac{7}{69}$ | $\frac{8}{69}$ |
One thing jumps out: you definitely don't want to click in zone B. You have at least a one in three chance of dying if you do; zone A is definitely a better proposition than zone B in either case. However, the other choices aren't as clear-cut: clicking elsewhere is about 10% in either case, and zones A and C are both fairly low probabilities - depending on how likely the two situations are, either of them might be a better bet than clicking randomly elsewhere.
What we have here are conditional probabilities: the 1/3 next to the A is $P($ A1 is a mine $|$ setup is ABB $)$, for instance. You can write that as $P(A1 | ABB) = \frac{P(A1 \cap ABB)}{P(ABB)}$, and rearrange it (and the same thing for its friend) to get:
$$P(ABB) \cdot P(A1 | ABB) = P(A1 \cap ABB)$$
$$P(BC) \cdot P(A1 | BC) = P(A1 \cap BC)$$
Since ABB and BC are mutually exclusive (they can't both happen) and exhaustive (nothing else can happen), $P(A)$ is just $P(A1 \cap ABB) + P(A1 \cap BC)$ - which is the average of the two probabilities, weighted by the probability of each situation.
Which is the problem, of course: we don't know how likely either situation is! Yet, at least.
Luckily, Impala the Koala is brilliant at combinatorics and can work this out without Bayes' Theorem. What's that you say, li'l buddy? ABB is a $\frac{3}{7}$ shot and BC is $\frac{4}{7}$? Thanks, that's really helpful - have some eucalyptus!
That means we can work out the probability of dying if we click in any of the zones:
$$P(A) = \frac{3}{7} \cdot \frac{1}{3} + \frac{4}{7} \cdot 0 = \frac{1}{7} \simeq 0.143$$
$$P(B) = \frac{3}{7} \cdot \frac{2}{3} + \frac{4}{7} \cdot \frac{1}{3} = \frac{10}{21} \simeq 0.476$$
$$P(C) = \frac{3}{7} \cdot 0 + \frac{4}{7} \cdot \frac{1}{4} = \frac{1}{7} \simeq 0.143$$
$$P(other) = \frac{3}{7} \cdot \frac{7}{69} + \frac{4}{7} \cdot \frac{8}{69} = \frac{53}{483} \simeq 0.110$$
This supports the idea that B is a really bad move (you're almost 50-50 to die); A and C are just as likely as each other to contain a mine, but clicking elsewhere is slightly more likely to leave you alive.
(Final aside: no, those probabilities don't add up to 1. They're not mutually exclusive - there are definitely mines in both B and other, so there's no reason they should add up to 1.)
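The whole zone calculation fits in a few lines of exact arithmetic - here's a sketch using the $\frac{3}{7}$ and $\frac{4}{7}$ weights from above (Impala's numbers) and the per-zone conditional probabilities from the table:

```python
from fractions import Fraction as F

# Scenario weights (Impala the Koala's combinatorics: 3x3 vs 4x3 placements)
scenarios = {"ABB": F(3, 7), "BC": F(4, 7)}

# P(mine | click in zone, scenario), one row per zone
p_mine = {
    "A":     {"ABB": F(1, 3),  "BC": F(0)},
    "B":     {"ABB": F(2, 3),  "BC": F(1, 3)},
    "C":     {"ABB": F(0),     "BC": F(1, 4)},
    "other": {"ABB": F(7, 69), "BC": F(8, 69)},
}

# Law of total probability: weight each conditional by its scenario
risk = {z: sum(scenarios[s] * p_mine[z][s] for s in scenarios)
        for z in p_mine}
print(risk)  # A and C come out 1/7, B is 10/21, other is 53/483
```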
* Edited 2015-03-13 to fix a counting error (thanks to Stefan Kremer for spotting it!) and a LaTeX error.
|
To get some certain properties for my use case I need a prime $P$ which has the form:
$P=2\cdot Q \cdot R \cdot S \cdot t+1$ with $Q,R,S,t$ primes as well.
Why that form - Use case
Together with this, three factors $q,r,s$ are used. The values $v$ of interest have the form
$v(a,b,c) = q^ar^bs^c\bmod P$,
Those factors have the following properties:
$q^Q \equiv 1 \bmod P$ $r^R \equiv 1 \bmod P$ $s^S \equiv 1 \bmod P$
and the equation holds:
$q^{a+dQ}r^{b+eR}s^{c+fS} \equiv q^{a}r^{b}s^{c} \bmod P$, with any $d,e,f \in \mathbb{N}$
so
$|\{v(a,b,c), \forall a,b,c \in \mathbb{N}\}| = QRS = \frac{P-1}{2t}$
If another factor is added:
$v(a,b,c,T) = q^ar^bs^c T\bmod P$, with any $T\in\mathbb{N} < P$
you can achieve:
$|\{v(a,b,c,T), \forall a,b,c,T <P \in \mathbb{N}\}| = P-1$
Two different $T$ produce value sets that are either disjoint or identical.
For those properties to hold, the prime $P$ needs to have the form:
$P=2QRSt+1$
(constructed myself, there might be better options)
It also works with $t=1, T=1$. With this, half of all values ($(P-1)/2$) can be generated.
How safe is such a prime?
A user, and also a possible attacker, has access to the source code and all runtime variables. For a given $v$, which is not computed at the local PC (it's just a random number), the attacker should not be able to determine the values $a,b,c$ and $T$ in:
$v(a,b,c,T) \equiv q^ar^bs^c T\bmod P$
or to be more exact, he should not be able to derive one $v'$ out of another $v$
$v'(a',b',c',T') \equiv v \cdot q^{a'}r^{b'}s^{c'} T'\bmod P$
The attacker knows all other values $P,Q,R,S,q,r,s,t$
$Q,R,S$ need to be about the same size; $t$ is much smaller ($t\ll Q,R,S$), in the use case $t<1000$.
I read about safe and strong primes. Neither holds for that kind of prime form. How much safety is lost with that form? Would it help if
$Q,R,S$ are safe/strong primes
if $P+1$ has a large prime factor
Do you know about other enhancements?
Comparison to normal discrete logarithm
The form above is different to the normal discrete logarithm problem form like:
$v'\equiv g^x \bmod P'$ and finding $x$ for a given $v'$
I'm not familiar with all discrete-log solving algorithms. Does it make a difference if there is only one base ($g$) or three of them ($q,r,s$)? Is three harder or faster to solve?
Assuming $S$ is a safe prime and the largest out of $Q,R,S,t$, could you compare the mean solving time complexity of
finding $a,b,c,T$ for a given $v$ by solving:
$v \equiv q^ar^bs^c T\bmod P$
with finding d for a given $v'$
$v'\equiv g^d \bmod S$, with g prime root of $S$
Is it harder or faster? How would a normal form look which has about the same solving time (to get an idea how much worse my form is)?
(toy) example
$P=35531=2 \cdot 11 \cdot 17 \cdot 19 \cdot 5+1$
$r=4999, q=21433, s=3181$
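As a sketch of how such $q,r,s$ can be constructed for a given $P$ (my own construction, not necessarily how the toy values above were built): raise any element to the cofactor $(P-1)/Q$; if the result is not $1$, it has order exactly $Q$, since $Q$ is prime.

```python
# Toy example from the question: P = 2*Q*R*S*t + 1
P = 35531                 # = 2 * 11 * 17 * 19 * 5 + 1
Q, R, S, t = 11, 17, 19, 5

def element_of_order(order):
    # g^((P-1)/order) has order dividing `order`; since `order` is prime,
    # any result != 1 has order exactly `order`.
    for g in range(2, P):
        x = pow(g, (P - 1) // order, P)
        if x != 1:
            return x

q = element_of_order(Q)
r = element_of_order(R)
s = element_of_order(S)
# Each satisfies q^Q = r^R = s^S = 1 (mod P), as required.
```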
|
The point about circuits is that a circuit has a fixed number of inputs. This means that, to define a language, we need a family of circuits $C_0, C_1, C_2, \dots$ such that the circuit $C_i$ tells you which strings of length $i$ are in the language, for each $i$. This doesn't require that there should be any relationship between the circuits $...
If $\mathsf{NP} \subseteq \mathsf{P}/\mathsf{poly}$, then $\mathsf{SAT} \in \mathsf{SIZE}[O(n^k)]$ for some fixed constant $k$. The claimed results should follow by using this circuit to replace the $\mathsf{NP}$ oracle(s) involved in the relevant classes. For example, (2) follows by noting that $\mathsf{ZPP}^{\mathsf{NP}} = \mathsf{ZPP}^{\mathsf{SAT}}$ and ...
The trick is, unlike classical gates, quantum gates have to be reversible (aka invertible). In other words, for every possible output, there must be one and only one possible input producing that output. This means the classical NAND gate can't possibly work in quantum computing: there are more inputs than outputs, so by the pigeonhole principle there must ...
The NAND gate is not reversible, you can't recover its inputs using its outputs, so it's not a well defined quantum gate. Or, at the very least, it must contain some sort of internal measurement mechanism that would cause decoherence. This would prevent it from being universal for quantum computation. A simple way to fix the reversibility problem is to have ...
In the zeroth order sense, it is correct that the logic depth and the time to execute the logic would be the same. There are nuances to this because you need to do something with the result. What logic is a medium to do work. In the simplest sense, you have a bounded function with inputs and outputs: In most actual systems, you need a way to hold the ...
Unrestricted Boolean circuits are not very interesting, since they can compute any function. The class of Boolean functions they compute is the class of all Boolean functions. In order to make Boolean circuits more interesting, we need to put restrictions on them. The following seem to be the most common restrictions: Upper bounds on the size. Upper bounds ...
The automaton as represented by a combinational logic circuit (CLC) is called combinational logic. Its model of computation is: given an input, a CLC is used to compute the output. The name of that model of computation can also be called combinational logic. (This is similar to Turing machine, which is a kind of computing machine and also the name of the ...
Whenever you are faced with two Boolean expressions $f,g$ on $n$ variables and wish to know whether they are equivalent, there is a simple algorithm you can apply: Go over all $2^n$ possible truth assignments, and check whether $f$ and $g$ have the same truth value on each. While this is infeasible for large $n$, in your case $n = 4$, so there are only ...
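The described brute-force check is a one-liner; here is a sketch for two hypothetical 4-variable expressions (the particular $f$ and $g$ are made up for illustration, and happen to be equivalent by De Morgan's laws):

```python
from itertools import product

# f and g look different but compute the same function
f = lambda a, b, c, d: (a and not b) or (c != d)
g = lambda a, b, c, d: not ((not a or b) and (c == d))

# Check all 2^4 = 16 truth assignments
equivalent = all(f(*v) == g(*v) for v in product([False, True], repeat=4))
print(equivalent)  # -> True
```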
The Cook–Levin theorem shows how to construct a circuit of size $f(n)^{O(1)}$. I'm not sure what's the best exponent. The opposite direction is impossible, since circuits of size 1 can compute the following language: $$ \{ w : \text{the $|w|$th Turing machine halts on the empty input} \}. $$ More generally, circuits of size 1 can compute any language ...
OK, so now I figured this out. The problem is $NP$-complete. We could simply verify an assignment by checking each gate as an equation. For solving SAT of a boolean function $A(x)$, simply constructing an equation $y=y\otimes\lnot A(x)$ would suffice.
In an $\mathsf{NC^0}$ circuit, every output bit depends on a bounded number of input bits. But the $k$th bit of the output (counting from the LSB) depends on the first $k$ bits of each input. To see that $\mathsf{AC^0}$ circuits can compute addition, we need to produce such a circuit. Hopefully you have seen such circuits, and otherwise perhaps you can ...
With such an instruction set, all you could express are straight-line programs. Without branches, you can't have loops. Thus, the program would not be able to handle arbitrary-length inputs: it would be limited to dealing with fixed-size inputs (or inputs with a fixed known upper bound on the size of the input). So, that wouldn't be very satisfactory.
Multiplication can be done even with stronger restrictions, like $AC^1$ with bounded fan-in. The proof is a little hard to typeset here, but I will outline the sketch and give a link. You shall prove that addition of two $m$-bit numbers has an $O(m)$-size, constant-depth circuit and thus is in $AC^0$. This is pretty simple (start with looking for $O(m)$ depth ...
What you describe is essentially Turing machines with advice, the advice for length $i$ being simply the description of $T_i$. It is a classic result that the two models are equivalent in the case of poly-time TMs and poly-sized circuits, that is, both produce the same class $\mathsf{P}/\mathrm{poly}$. If the description length of $T_i$ is allowed to be ...
Morioka considers uniform versions of his circuit classes: Throughout this paper we write $\mathbf{NC^1}$ to mean $\mathbf{Dlogtime}$-uniform $\mathbf{NC^1}$, which is equivalent to the class $\mathbf{Alogtime}$ of languages accepted by an alternating Turing machine in $O(\log n)$ time. The paper you mention should imply the equivalence of these two ...
An $NC^0$ circuit can only have gates of fan-in 2. If we try to add using a full adder with a lookahead gate to calculate the carry, every full adder needs 3 input signals. But in $NC^0$ only 2 inputs are allowed. If we try to replace the full adder with other logic gates, we hurt the depth of the circuit. image-source: https://upload.wikimedia.org/...
The Boolean circuits you are referring to are a non-uniform model. This means that, for every input length, you have a different circuit. When we say that Boolean circuits solve a particular problem what is actually meant is that a family of circuits does, which is an infinite sequence $(C_n)_{n \in \mathbb{N}_0}$, $C_n$ being the circuit for inputs of ...
The NAND gate is "universal" in that a network of NAND gates can implement any combinational or sequential logic function. So to construct a "program" out of NANDs all you need is a way of specifying the network that interconnects them. So if every NAND gate in your "computer" has an address, and the list of instructions is of the form inputA,inputB you ...
Unfortunately, you've gotten as far as you can get on this problem. (Note: I'm going to write AND, OR, NOT, and NAND as $\wedge \vee \neg \uparrow$ respectively, since the standard sum/product/overbar notation tends to end up with lots of stacked bars, and I don't like those.) You currently have: $$(a \wedge \neg b) \vee (\neg a \wedge b) \vee (a \wedge b)...
Some further thoughts, at least for the case when batch generation is done by feeding a gray code or other simply generated sequence into a circuit with internal memory. For an ordinary combinatorial logic circuit without memory, we can bound the size of a circuit needed to evaluate a general function on n bits in two ways: by providing a general ...
You can think of Turing machines as a finite description of a function whose domain is $\mathbb{N}$. When you want to specify a different behavior for each member in a collection of finite sets which covers the entire domain, Turing machines are not the natural object to discuss. You could obviously talk about a sequence of machines $M_n$, where $M_i$ ...
|
1. Observation of a peaking structure in the J/psi phi mass spectrum from B-+/- -> J/psi phi K-+/- decays
PHYSICS LETTERS B, ISSN 0370-2693, 06/2014, Volume 734, pp. 261 - 281
A peaking structure in the J/psi phi mass spectrum near threshold is observed in B-+/- -> J/psi phi K-+/- decays, produced in pp collisions at root s = 7 TeV...
PHYSICS, NUCLEAR | ASTRONOMY & ASTROPHYSICS | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | scattering [p p] | J/psi --> muon+ muon | experimental results | Particle Physics - Experiment | Nuclear and High Energy Physics | Phi --> K+ K | vertex [track data analysis] | CERN LHC Coll | B+ --> J/psi Phi K | Peaking structure | hadronic decay [B] | Integrated luminosity | Data sample | final state [dimuon] | mass enhancement | width [resonance] | (J/psi Phi) [mass spectrum] | Breit-Wigner [resonance] | 7000 GeV-cms | leptonic decay [J/psi] | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
2. Measurement of the ratio of the production cross sections times branching fractions of $B_c^\pm \to J/\psi\pi^\pm$ and $B^\pm \to J/\psi K^\pm$ and $\mathcal{B}(B_c^\pm \to J/\psi\pi^\pm\pi^\pm\pi^\mp)/\mathcal{B}(B_c^\pm \to J/\psi\pi^\pm)$ in pp collisions at $\sqrt{s}=7$ TeV
Journal of High Energy Physics, ISSN 1029-8479, 1/2015, Volume 2015, Issue 1, pp. 1 - 30
The ratio of the production cross sections times branching fractions σ B c ± ℬ B c ± → J / ψ π ± / σ B ± ℬ B ± → J / ψ K ± $$ \left(\sigma...
B physics | Branching fraction | Hadron-Hadron Scattering | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory
Journal Article
Physics Letters B, ISSN 0370-2693, 03/2017, Volume 766, Issue C, pp. 212 - 224
Journal Article
Physics Letters B, ISSN 0370-2693, 05/2016, Volume 756, Issue C, pp. 84 - 102
A measurement of the ratio of the branching fractions of the meson to and to is presented. The , , and are observed through their decays to , , and ,...
scattering [p p] | pair production [pi] | statistical | Physics, Nuclear | 114 Physical sciences | Phi --> K+ K | Astronomy & Astrophysics | LHC, CMS, B physics, Nuclear and High Energy Physics | f0 --> pi+ pi | High Energy Physics - Experiment | Compact Muon Solenoid | pair production [K] | Science & Technology | mass spectrum [K+ K-] | Ratio B | Large Hadron Collider (LHC) | Nuclear & Particles Physics | 7000 GeV-cms | leptonic decay [J/psi] | J/psi --> muon+ muon | experimental results | Nuclear and High Energy Physics | Physics and Astronomy | branching ratio [B/s0] | CERN LHC Coll | B/s0 --> J/psi Phi | CMS collaboration ; proton-proton collisions ; CMS ; B physics | Physics | Física | Physical Sciences | hadronic decay [f0] | [PHYS.HEXP]Physics [physics]/High Energy Physics - Experiment [hep-ex] | Physics, Particles & Fields | 0202 Atomic, Molecular, Nuclear, Particle And Plasma Physics | colliding beams [p p] | hadronic decay [Phi] | mass spectrum [pi+ pi-] | B/s0 --> J/psi f0
Journal Article
5. Search for rare decays of $\mathrm{Z}$ and Higgs bosons to $\mathrm{J}/\psi$ and a photon in proton-proton collisions at $\sqrt{s} = 13\,\text{TeV}$
The European Physical Journal C, ISSN 1434-6044, 2/2019, Volume 79, Issue 2, pp. 1 - 27
A search is presented for decays of $\mathrm{Z}$ and Higgs bosons to a $\mathrm{J}/\psi$ meson and a photon, with the subsequent decay of the...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article
Nuclear Physics, Section A, ISSN 0375-9474, 02/2019, Volume 982, pp. 1038 - 1039
Journal Article
Journal of High Energy Physics, ISSN 1126-6708, 2012, Volume 2012, Issue 5
Journal Article
Physics Letters B, ISSN 0370-2693, 06/2014, Volume 734, pp. 261 - 281
A peaking structure in the mass spectrum near threshold is observed in decays, produced in pp collisions at collected with the CMS detector at the LHC. The...
Journal Article
9. J/ψ production cross section and its dependence on charged-particle multiplicity in p + p collisions at $\sqrt{s}=200$ GeV
Physics Letters B, ISSN 0370-2693, 11/2018, Volume 786, pp. 87 - 93
We present a measurement of inclusive production at mid-rapidity ( ) in collisions at a center-of-mass energy of GeV with the STAR experiment at the...
[formula omitted] collisions | Charged-particle multiplicity | Quarkonium | Multiple parton interactions | p+p collisions | NUCLEAR PHYSICS AND RADIATION PHYSICS | quarkonium | charged-particle multiplicity | multiple parton interactions
Journal Article
European Physical Journal C, ISSN 1434-6044, 07/2018, Volume 78, Issue 7
We report on the measurement of the inclusive J/ψ polarization parameters in pp collisions at a center of mass energy √s=8 TeV with the ALICE detector at the...
Engineering (miscellaneous); Physics and Astronomy (miscellaneous) | Astrophysics | J/psi: hadroproduction | 114 Physical sciences | J/psi: leptonic decay | High Energy Physics - Experiment | High Energy Physics | Nuclear Experiment | Engineering (miscellaneous) | p p: colliding beams | [PHYS.NEXP]Physics [physics]/Nuclear Experiment [nucl-ex] | Physics and Astronomy (miscellaneous), Relativistic Heavy-Ion collisions | Physics and Astronomy (miscellaneous) | muon: pair production | experimental results | Experiment | Nuclear and particle physics. Atomic energy. Radioactivity | CERN LHC Coll | 8000 GeV-cms | J/psi: polarization | helicity | [PHYS.HEXP]Physics [physics]/High Energy Physics - Experiment [hep-ex] | rapidity dependence | transverse momentum dependence | p p: scattering | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article
|
I'm writing some code to solve problems in nonlinear elasticity using finite element methods. I have been following Bathe's book but I am having trouble with some nagging details.
My question is related to which configuration variables to use for calculating different quantities. I'm working on static problems in the updated Lagrangian formulation, so for each "time" $t$ I have a loading coefficient $\lambda_t$, and a displacement $\sideset{^t}{}{U}$. In chapter 6, Bathe derives a linearization of the principle of virtual work, resulting in the equation (6.103): \begin{equation} \left(\sideset{^t_{t\,}}{_\text{L}}{K}+\sideset{^t_{t\,}}{_\text{NL}}{K}\right)U=\sideset{^{t+\Delta t}}{}R-\sideset{^t_t}{}F \end{equation} where $\sideset{^t_{t\,}}{_\text{L}}{K}$ is the "linear" part of the incremental stiffness matrix in the configuration with displacement $\sideset{^t}{}{U}$, $\sideset{^t_{t\,}}{_\text{NL}}{K}$ is the "nonlinear" part, $U$ is the displacement increment, $\sideset{^{t+\Delta t}}{}R$ is the externally applied load at "time" $t+\Delta t$, and $\sideset{^t_t}{}F$ is the nodal point force resulting from the stress, at "time" $t$.
My question is about actually implementing this as a system of nonlinear equations to be solved via Newton's method. As I understand it, the idea is to consider a function $H(\sideset{^{t}}{}U)=\sideset{^{t+\Delta t}}{}R-\sideset{^t_t}{}F$, where $\sideset{^t_t}{}F$ is a function of $\sideset{^{t}}{}U$ via the stress.
We want to find a displacement $\sideset{^{t+\Delta t}}{}U$ s.t. the unbalanced force vector is zero:\begin{equation}0=H(\sideset{^{t+\Delta t}}{}U)\approx H(\sideset{^t}{}U)+\nabla H(\sideset{^t}{}U)\left(\sideset{^{t+\Delta t}}{}U-\sideset{^t}{}U\right)=\sideset{^{t+\Delta t}}{}R-\sideset{^t_t}{}F - \left(\sideset{^t_{t\,}}{_\text{L}}{K}+\sideset{^t_{t\,}}{_\text{NL}}{K}\right)U\end{equation}so that $\left(\sideset{^t_{t\,}}{_\text{L}}{K}+\sideset{^t_{t\,}}{_\text{NL}}{K}\right)$ is minus the Jacobian of the unbalanced force vector $\sideset{^{t+\Delta t}}{}R-\sideset{^t_t}{}F$. Is this interpretation correct?
Is the sum of the incremental stiffness matrices derived by Bathe exactly equal to minus the Jacobian of the unbalanced force vector, when all are evaluated in the same configuration? In implementing Newton's method, an iteration is introduced, so that we can write $U^{(k)}$ for the incremental displacement at the $k$th iteration. How does this affect the terms in the system of equations? At iteration $k$, are the integrals still calculated over the volume $\sideset{^t}{}V$, or over $\sideset{^{t+\Delta t}}{}V^{(k-1)}$? What about the stress, and the stiffness matrices?
It seems to me that a full implementation of Newton's method, with no modifications, would involve calculating the unbalanced force vector AND the stiffness matrices in the most recently calculated configuration $\sideset{^{t+\Delta t}}{^{(k-1)}}U$: \begin{equation} 0=H(\sideset{^{t+\Delta t}}{^{(k)}}U)\approx H(\sideset{^{t+\Delta t}}{^{(k-1)}}U)+\nabla H(\sideset{^{t+\Delta t}}{^{(k-1)}}U)\left(\sideset{^{t+\Delta t}}{^{(k)}}U-\sideset{^{t+\Delta t}}{^{(k-1)}}U\right)=\sideset{^{t+\Delta t}}{}R-\sideset{^{t+\Delta t}_{t+\Delta t}\,}{^{(k-1)}}F - \left(\sideset{^{t+\Delta t}_{t+\Delta t\,}}{^{(k-1)}_\text{L}}{K}+\sideset{^{t+\Delta t}_{t+\Delta t\,}}{^{(k-1)}_\text{NL}}{K}\right)U \end{equation} and solving the linear system for the incremental displacement $U$. However, when I set things up this way, the Jacobian of $H$ at displacement $\sideset{^{t+\Delta t}}{^{(k-1)}}U$, as calculated by finite difference, is not equal to $-\left(\sideset{^{t+\Delta t}_{t+\Delta t\,}}{^{(k-1)}_\text{L}}{K}+\sideset{^{t+\Delta t}_{t+\Delta t\,}}{^{(k-1)}_\text{NL}}{K}\right)$.
One clue is that the coded Jacobian matches the finite difference Jacobian almost exactly, for all iterations at $t=0$. My first instinct was to look for errors coming from the "nonlinear" stiffness matrix $K_{\text{NL}}$, but that matrix has entries which are far too small to explain the disparity.
I am fairly confident that I've implemented the calculation properly, provided my understanding is correct. I can provide details if anyone is willing to help me check them, and help me debug the many steps in this calculation - Green-Lagrange strain, deformation gradient, second Piola-Kirchhoff stress, Cauchy stress, etc. Suggestions for sanity checks are much appreciated as well. Thanks!
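One sanity check that can be prototyped independently of the Bathe-specific machinery: verify a hand-coded tangent against a finite-difference Jacobian on a small toy residual, then run full Newton re-evaluating everything at the latest iterate. Below, `H` is a stand-in residual (not Bathe's $\sideset{^t_t}{}F$), and `K_analytic` uses the sign convention from the question, $K = -\nabla H$:

```python
import numpy as np

def fd_jacobian(H, U, eps=1e-7):
    """Central-difference Jacobian of a residual H at U (generic sanity check)."""
    n = len(U)
    J = np.zeros((n, n))
    for j in range(n):
        dU = np.zeros(n)
        dU[j] = eps
        J[:, j] = (H(U + dU) - H(U - dU)) / (2 * eps)
    return J

# Toy residual standing in for R - F(U); any smooth H works for the check.
def H(U):
    return np.array([U[0]**3 + U[1] - 1.0,
                     U[1]**3 - U[0] + 2.0])

def K_analytic(U):
    # Hand-coded tangent with the question's convention: K = -dH/dU.
    return -np.array([[3 * U[0]**2, 1.0],
                      [-1.0, 3 * U[1]**2]])

U = np.array([1.2, -0.9])
# The analytic tangent must match minus the FD Jacobian at ANY state,
# not just at equilibrium -- this is the check that fails in the question.
assert np.allclose(K_analytic(U), -fd_jacobian(H, U), atol=1e-5)

# Full Newton: re-evaluate H and K at the latest iterate each step.
for _ in range(50):
    dU = np.linalg.solve(K_analytic(U), H(U))  # K dU = H, since K = -dH/dU
    U = U + dU
    if np.linalg.norm(H(U)) < 1e-12:
        break
```

If the `allclose` check holds at the initial state but not at later iterates, the mismatch usually comes from a term that was frozen at the old configuration (volume, stress, or shape-function derivatives) while the residual moved on.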
|
So this question had two main parts that I got stuck on:
Suppose that $(X,d)$ is a complete metric space and $f : X \rightarrow X$ is a map.
Parts a) & b) just asked for the definition of a contraction and to prove that $f$ has at most one fixed point without using Banach's fixed point theorem, which I was fine with.
(c) Prove that $f : \mathbb{R} \rightarrow \mathbb{R}, x\mapsto f(x)= $ $\frac{1}{20} \frac{1}{1+x^4}$ is a contraction.
(d) Use the Banach fixed point theorem to prove that the polynomial equation $x^5 + 3x − 1 = 0$ has exactly one real solution and compute this solution numerically to 3 decimal places.
So for part c) I have:
By the mean value theorem, a $C^1$ function satisfies $|f(x)-f(y)|\leqslant M|x-y|$ provided $|f'(x)|\leqslant M$ for all $x$.
We compute
$$f'(x)= -\frac{x^3}{5(x^4+1)^2}$$
so $$|f'(x)|=\frac{|x|^3}{(x^4+1)^2}\cdot\frac{1}{5}\leqslant \frac{1}{5},$$ since $|x|^3\leqslant(x^4+1)^2$ for all $x$.
Therefore $$|f(x)-f(y)|\leqslant\frac{1}{5}|x-y|$$and hence $f$ is a contraction.
If somebody could tell me if this is correct I would appreciate it a lot!
Part d) I am completely stuck on and don't really know how to tackle it! All I managed to do was compute the root to be 0.332 by iterating.
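For part d), one route (my own rearrangement; the intended one may differ) is to note that $x^5+3x-1=0 \iff x(x^4+3)=1 \iff x=\frac{1}{x^4+3}$, so the root is the unique fixed point of $g(x)=\frac{1}{x^4+3}$, and one can check $|g'(x)|=\frac{4|x|^3}{(x^4+3)^2}<1$ on a suitable interval. A quick numerical check of the iteration:

```python
# Rewrite x^5 + 3x - 1 = 0 as x = 1/(x^4 + 3) and iterate the map g.
# (This rearrangement is my own choice; one still has to verify g is a
# contraction before invoking Banach's fixed point theorem.)
def g(x):
    return 1.0 / (x**4 + 3.0)

x = 0.0
for _ in range(50):
    x = g(x)

print(round(x, 3))                 # 0.332, matching the value found by hand
assert abs(x**5 + 3 * x - 1) < 1e-12  # it really is a root of the quintic
```

The contraction factor near the root is about $0.016$, so the iteration converges to machine precision in a handful of steps.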
|
Hi, can someone suggest some self-study reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks
@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :)
2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus
Same thought as you; however, I think the major challenge for such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein.
However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look to machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown
Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration, so that might help heavily reduce the parameters needed to simulate them
I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
@ooolb Even if that is really possible (I always can talk about things in a non joking perspective), the issue is that 1) unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) for 6 years, my dream has yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams
@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after say training it on 1000 different PDEs
Actually that makes me wonder, are the space of all coordinate choices more than all possible moves of Go?
enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes
orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others
Btw, since gravity is nonlinear, do we expect if we have a region where spacetime is frame dragged in the clockwise direction being superimposed on a spacetime that is frame dragged in the anticlockwise direction will result in a spacetime with no frame drag? (one possible physical scenario that I can envision such can occur may be when two massive rotating objects with opposite angular velocity are on the course of merging)
Well, I'm a beginner in the study of General Relativity, ok? My knowledge of the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I have poor knowledge yet.
So, what I meant by "gravitational double slit experiment" is: is there a gravitational analogue of the double slit experiment for gravitational waves?
@JackClerk the double slits experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources.
But if we could figure out a way to do it then yes GWs would interfere just like light wave.
Thank you @Secret and @JohnRennie . But for conclude the discussion, I want to put a "silly picture" here: Imagine a huge double slit plate in space close to a strong source of gravitational waves. Then like water waves, and light, we will see the pattern?
So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference spacetime would have a flat geometry, and if we put a spherical object in this region the metric will become Schwarzschild-like.
Pardon, I just spent some naive-philosophy time here with these discussions**
The situation was even more dire for Calculus and I managed!
This is a neat strategy I have found: revision becomes more bearable when I have The h Bar open on the side.
In all honesty, I actually prefer exam season! At all other times-as I have observed in this semester, at least-there is nothing exciting to do. This system of tortuous panic, followed by a reward is obviously very satisfying.
My opinion is that I need you Kaumudi to decrease the probability of h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago
(Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)
that's true. Though back in high school, regardless of the language, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the 4-space indentation convention
@JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice
I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy
I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in a optimal way (but most of the work-related ones do)
Hi to all. Does anyone know where I could write MATLAB code online (for free)? Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks.
@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :)
@Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa.
@Blue Ok, thanks. I found a way by connecting to the university's servers (the program isn't installed on the PCs in the computer room, but if I connect to the university's server - which means remotely running another environment - I found an older version of MATLAB). But thanks again.
@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject;
it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding
If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
|
I'm trying to learn more about finite difference methods here.
Theoretically, using the Taylor series $u(x)=\sum_{n=0}^{\infty}\frac{(x-x_i)^n}{n!}\left(\frac{d^nu}{dx^n}\right)_i$ you can get the forward/backward/central difference methods, which are:
forward: $(\frac{du}{dx})_i\approx\frac{u_{i+1}-u_i}{\Delta x}$
backward: $(\frac{du}{dx})_i\approx\frac{u_{i}-u_{i-1}}{\Delta x}$
central: $(\frac{du}{dx})_i\approx\frac{u_{i+1}-u_{i-1}}{2\Delta x}$
But I'm having a rough time trying to understand how the above Taylor series is expanded to obtain the difference methods. Not having a very clear idea of how Taylor series work, plus the subindex notation, is confusing me. The lecture says that $u_i \approx u(x_i)$ and $x_i=i\Delta x$.
Also, the lecture I'm following introduces FDM saying that:
$\frac{\partial u}{\partial x}(x)=\lim_{\Delta x\to 0} \frac{u(x+\Delta x) - u(x)}{\Delta x} = \lim_{\Delta x\to 0} \frac{u(x)-u(x-\Delta x)}{\Delta x} = \lim_{\Delta x\to 0} \frac{u(x+\Delta x)-u(x-\Delta x)}{2\Delta x} $
But how can I prove that these expressions are equal? Any ideas?
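A quick numerical check makes the three quotients (and the orders of accuracy that the Taylor expansions predict) concrete, comparing each against the exact derivative of a known function:

```python
import math

# Compare the three difference quotients against the exact derivative of
# u = sin at x = 1, and watch the errors shrink as the step h does.
def forward(u, x, h):
    return (u(x + h) - u(x)) / h

def backward(u, x, h):
    return (u(x) - u(x - h)) / h

def central(u, x, h):
    return (u(x + h) - u(x - h)) / (2 * h)

x, exact = 1.0, math.cos(1.0)
for h in (1e-1, 1e-2, 1e-3):
    ef = abs(forward(math.sin, x, h) - exact)   # error ~ O(h)
    ec = abs(central(math.sin, x, h) - exact)   # error ~ O(h^2)
    print(f"h={h:g}  forward err={ef:.2e}  central err={ec:.2e}")
```

All three converge to the same limit as $h \to 0$, but the forward/backward errors drop by one decade per decade of $h$ while the central error drops by two, which is exactly the first- vs second-order behaviour from the Taylor remainders.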
|
Let $A$ be a real asymmetric $n \times n$ matrix with i.i.d. random, zero-mean elements. What results, if any, are there for the eigenvectors of $A$? In particular:
- How are individual eigenvectors distributed (probably zero-mean multivariate normal, but what is the covariance)?
- If $u_i$ and $u_j$ are eigenvectors of $A$, what is the distribution of $|u_i^*u_j|$, or, even better, the $n^2$-d joint distribution $P(u_1, ... u_n)$?
- What is the joint distribution of eigenvalues and their corresponding eigenvectors (or, perhaps more in line with my application described below, the conditional distribution of an eigenvector given an eigenvalue)?
- Numerically, I've found that every eigenvector corresponding to a complex eigenvalue has a single real element. (Naturally, real eigenvalues have corresponding real eigenvectors.) Has this been proven?
- What is the distribution of the number of real eigenvalues of $A$?
Note: I'm not a mathematician, but a physicist working in dynamical systems, and I skipped nuclear so my knowledge of GUE/GOE results is limited to basically the circular laws. I'm really interested in constructing random real matrices $A = VDV^{-1}$ where $D$ is a diagonal matrix of eigenvalues drawn from a distribution that I control and differs from the one given by the various circular laws, and $V$ is the matrix of eigenvectors drawn from the conditional distribution of eigenvectors of random matrices given their corresponding eigenvalue. So this question can be summarized: how do I draw $V$? I don't imagine that there are complete answers to my questions yet, but any insights along those lines that will help me draw $V$ "realistically" would be appreciated. Heck, I just realized bullets two and three may have somewhat incompatible assumptions: bullet three (or, rather, my proposed application) assumes that $P(u_1, ... u_n | \lambda_1, ..., \lambda_n) = P(\lambda_1,...,\lambda_N) \prod_i P(u_i | \lambda_i)$ where $i$ ranges over a single member of each complex conjugate pair of eigenvalues, where two makes no such assumptions and just jumps to $\int P(u_1, ... u_n, \lambda_1, ..., \lambda_n) d\vec{\lambda}$ where $\vec{\lambda}$ is circular law distributed.
If that seems like a weird application, my motivation is to study the influence of only the eigenvalues of the adjacency matrix of a dynamical process that takes place on a random network. A simple first attempt at this by drawing $A$, performing SVD on it to get $V$, and mucking with $D$ only gives either interesting or disastrous results depending on how you look at it. A still simple but second attempt (which to my mind seems like it should work if the conditional independence assumption holds) of drawing several random $A$'s and choosing eigenvalues and corresponding eigenvectors from them according to my desired distribution is even more disastrous (but no more interesting, I think).
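Two editorial notes on the numerical observations above. First, the "one real element per complex eigenvector" effect is plausibly a normalization artifact rather than a property of the ensemble: LAPACK's nonsymmetric eigensolver scales each eigenvector so that its largest-magnitude component is real, and any complex eigenvector can be rotated by a phase to make one chosen entry real. Second, the number of real eigenvalues is well studied; for real i.i.d. Gaussian (Ginibre) matrices its expectation grows like $\sqrt{2n/\pi}$, which the quick experiment below is consistent with:

```python
import numpy as np

# Sample real Gaussian matrices and count real eigenvalues per draw.
# For dgeev, real eigenvalues come back with exactly zero imaginary part.
rng = np.random.default_rng(0)
n, trials = 50, 200
counts = []
for _ in range(trials):
    A = rng.standard_normal((n, n))
    w = np.linalg.eigvals(A)
    counts.append(int(np.sum(np.abs(w.imag) < 1e-9)))

print(sum(counts) / trials, (2 * n / np.pi) ** 0.5)  # empirical mean vs asymptotic
```

This doesn't answer the joint-distribution questions, but it is a cheap way to sanity-check any proposed sampling scheme for $V$ against the unconditioned ensemble.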
|
Group Theory and Rubik's Cubes
I got the idea in my head the other day to write a Rubik's cube solver. While working on that I accidentally learned some interesting math that I thought might be nice to share. Here, then, is a brief, probably over-simplified introduction to group theory and its applications to Rubik's cubes.
What is Group Theory, Anyway?
Group theory is the kind of math that for whatever reason most people never know about unless they study math at a university even though the basics are simple enough that a first grader could understand them. There are really just two building blocks underpinning the fun stuff: functions and sets.
Functions
If you've programmed a computer or taken a high school math class, you already have the right idea of what a function is. Functions are basically like little black boxes that accept some input and map it to an output. For example, \(f(x) = x^2\) takes every number and maps it to its square.
When talking about functions, we use
domain to talk about the kinds of inputs a function can take and range to describe its output. The domain and range of a function that takes the natural numbers (\(\Bbb N\)) and maps them to the rationals (\(\Bbb Q\)) - maybe something like \(f(x, y) = \frac{x}{y}\) - would be written as \(f: \Bbb N \rightarrow \Bbb Q\). That is, \(f\) takes everything in \(\Bbb N\) and maps it to something in \(\Bbb Q\).
The last major notational thing to mention that's relevant to group theory is the idea of inverses. The inverse of a function is just another function that "undoes" it. To use the example from before, the inverse of \(f(x) = x^2\) would be \(f^{-1}(x) = \sqrt{x}\) (ignoring the multiple square roots of course).
Sets
Sets are also pretty easy to understand by intuition plus a little math. Set theory gets a lot fancier post-19th century, but Naive Set Theory is good enough for working with Rubik's cubes. In Naive Set Theory, a set is simply any well-defined group of objects. For example, the integers are a set, usually written \(\Bbb Z\). Similarly, the positive even numbers are a set (\(\{2, 4, 6, 8, \dots\}\)), the primes are a set (\(\{2, 3, 5, 7, \dots\}\)), and so on. For the purposes of this article, we're mostly interested in set membership. Membership is described using the \(\in\) symbol. For example, to say that \(x\) is a member of some set \(S\), we would write \(x \in S\). For the opposite, it's \(x \notin S\).
Groups
Now that we sort of understand functions and sets, it's pretty easy to make the leap to groups. A group has two parts: a set and an operation. These are usually written as \(G\) and \(*\) and the group is referred to by the tuple \((G, *)\). There are four axioms that these have to satisfy in order to be considered a group:
closure, associativity, identity element, and inverse element. Closure
\(\forall a, b \in G,\: a * b \in G\)
This means that for all members of the set \(G\), applying the operation to any pair of them gives you a result that's also a member of \(G\), i.e. the domain and range of our operator \(*\) are both \(G\).
Associativity
\(\forall a, b, c \in G,\: (a * b) * c = a * (b * c)\)
This just means that the operation we pick (\(*\)) has to be associative, that is, we can group the operations however we want without changing the result. The integers, for example, are a group under addition but not subtraction because addition is associative but subtraction is not, e.g. \(3 + (2 + 1) = (3 + 2) + 1\) but \(3 - (2 - 1) \neq (3 - 2) - 1\). That is, \((\Bbb Z, +)\) is a group but \((\Bbb Z, -)\) is not.
Identity Element
\(\exists e \in G\) such that \(\forall a \in G,\: e * a = a * e = a\)
This means that there is always a member of the set \(G\) that basically does nothing when combined with another element. To use the addition example again, the identity element for \((\Bbb Z, +)\) is 0, since \(0 + a = a + 0 = a\). For multiplication, the identity element is 1. This means that \(\Bbb N^*\), the natural numbers starting at 1, are not a group under addition since the identity element for addition, 0, is not a member of the set.
Inverse Element
\(\forall a \in G,\: \exists b \in G\) such that \(a * b = b * a = e\)
This axiom means that each element \(a\) in \(G\) must have an inverse (\(a^{-1}\)) that gives you the identity element when you \(*\) them together. In \((\Bbb Z, +)\), the inverse of each number is just its additive inverse (\(a^{-1} = -a\)) since \(a + -a = 0\). This axiom also excludes the natural numbers under addition since none of the inverses of natural numbers are themselves natural numbers.
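The four axioms are easy to check by brute force for small finite sets, which makes the examples above concrete. This little checker (my own toy, not part of the original post) confirms that $(\Bbb Z_6, +\bmod 6)$ is a group while subtraction fails:

```python
from itertools import product

# Brute-force check of the four group axioms on a finite set G with
# binary operation op. Feasible only for small G, but very concrete.
def is_group(G, op):
    closure = all(op(a, b) in G for a, b in product(G, repeat=2))
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a, b, c in product(G, repeat=3))
    idents = [e for e in G if all(op(e, a) == a == op(a, e) for a in G)]
    # Every element needs some b with a * b = b * a = e.
    inverses = bool(idents) and all(
        any(op(a, b) == idents[0] == op(b, a) for b in G) for a in G)
    return closure and assoc and bool(idents) and inverses

Z6 = set(range(6))
print(is_group(Z6, lambda a, b: (a + b) % 6))  # True
print(is_group(Z6, lambda a, b: (a - b) % 6))  # False: subtraction isn't associative
```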
Mechanics of Rubik's Cubes
In order to apply group theory to solving Rubik's cubes, we first need to understand how Rubik's cubes work. Since a Rubik's cube is, well, a cube, it has six faces. We call these up (U), down (D), right (R), left (L), front (F), and back (B). Each face is subdivided into nine smaller cubes called "cubies." There are 26 cubies in the Rubik's cube, \(3^3 - 1\) since there is no center cubie. Here are all of the faces labeled so you can get an idea of how they relate to each other:
You can see that there are eight corners (4 on the U face and 4 on the D face) and twelve edges (4 on U, 4 on D, and 4 in the middle). This plus the six center cubies gives us our total of 26.
Each face can be rotated about its center. A move consisting of a clockwise rotation of a face is written simply as the name of the face (e.g. R for clockwise rotation of the right face), whereas a counterclockwise rotation is written with a prime or single quote (e.g. B' for counterclockwise rotation of the back face). Since rotating a face four times gets you back to the original state, we can write all of the counterclockwise rotations as combinations of clockwise rotations, for example B' = BBB since rotating the back face clockwise three times is the same as rotating it once counterclockwise.
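The B' = BBB claim is easy to verify if you model a quarter turn as a permutation of sticker positions. Here is a deliberately minimal stand-in (one 4-cycle instead of the full cube permutation):

```python
# Toy model: a quarter turn cycles four sticker positions 0→1→2→3→0.
# Four turns restore the identity, so three clockwise turns equal one
# counterclockwise turn: B' = BBB.
def compose(p, q):
    """Apply permutation p first, then q."""
    return tuple(q[p[i]] for i in range(len(p)))

identity = (0, 1, 2, 3)
B = (1, 2, 3, 0)          # one clockwise quarter turn, as a 4-cycle
BB = compose(B, B)
BBB = compose(BB, B)
assert compose(BBB, B) == identity  # four quarter turns do nothing
assert compose(B, BBB) == identity  # hence BBB is the inverse of B
```

The real cube works the same way, just with a permutation on 48 sticker positions per face turn instead of 4.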
Note that the centers of the faces never move relative to each other; in fact, if you take apart a Rubik's cube, you'll see that the internal structure actually connects all the centers together so the faces can turn. Most standard cubes look something like this:
So Where's the Group?
Well, we know a group consists of a set and an operation, so what does that correspond to for the Rubik's cube? One way to think about it would be to have a set of all the possible cube states and use the rotations as operations for going from state to state. This sort of works, but we run into trouble when we try to satisfy the identity axiom: what move would be the identity move? Then of course there's the fact that we're actually trying to define six different operations on our set, which is too many for a valid group.
It's more useful if instead we use the moves themselves as the set (call it \(\Bbb G\)) and then the operation can be concatenating moves together (call it \(*\)). The trick is that a move might consist of zero rotations or millions of rotations - this allows us to represent every possible cube state as the sequence of moves that gets us there from a solved cube. So, does this make a group? Let's see if it satisfies our four axioms.
Closure
Well, there are no two moves you can make that combine to give you something other than a valid move. More formally, \(\forall M_1, M_2 \in \Bbb G,\: M_1 * M_2 \in \Bbb G\).
Associativity
This one's a little tricky. First, it's important to note that this isn't the same thing as commutativity, i.e. it's OK that we don't get the same result if the moves are applied in a different order - we still have a group (a non-abelian group if you're interested). That aside, how do we show that concatenating moves is associative?
Well, what happens if we think of a move as a function that takes us from one state of the cube to another? If we call the state \(S\) then we might write a move as \(M(S)\). If we combine two moves with \(*\) then we would get something like \(M_1 * M_2 = M_2(M_1(S))\). Note that \(M_2\) is on the outside since it gets applied to the cube state after \(M_1\). So here's what associativity looks like with our function notation then: $$M_1 * (M_2 * M_3) = (M_1 * M_2) * M_3\\ M_1 * M_3(M_2(S)) = M_2(M_1(S)) * M_3\\ M_3(M_2(M_1(S))) = M_3(M_2(M_1(S)))$$ The two sides of the equation are the same, so \(*\) is associative.
Identity
If we include a null move consisting of no rotations (call it \(E\)) in our set, then this requirement is satisfied since doing nothing followed by a move (or a move followed by nothing) is the same as just doing that move. That is, \(\forall M \in \Bbb G,\: E * M = M * E = M\).
Inverse
This one's also pretty simple: for every move \(M\), we can define \(M'\) to be the steps that undo it. For example, if \(M = \{RFL'\}\) then \(M' = \{LF'R'\}\). Since \(M'\) has to consist of nothing but face rotations, it must be a member of our set. Formally, \(\forall M \in \Bbb G,\: \exists M' \in \Bbb G\) such that \(M * M' = M' * M = E\).
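The inversion recipe (reverse the sequence, invert each turn) is mechanical enough to code directly. The helper below uses my own string notation, with "L'" for a counterclockwise turn of L:

```python
# Invert a move sequence: reverse the order and invert each turn,
# e.g. (R F L')⁻¹ = L F' R'.
def invert(moves):
    inv = {m: m + "'" for m in "UDLRFB"}
    inv.update({m + "'": m for m in "UDLRFB"})
    return [inv[m] for m in reversed(moves)]

print(invert(["R", "F", "L'"]))  # ['L', "F'", "R'"]
```

Applying `invert` twice returns the original sequence, as you'd expect from $(M^{-1})^{-1} = M$.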
There you go, all the axioms are satisfied. \(\Bbb G\) is a group.
Using Group Theory to Solve the Cube
Just proving that the group of Rubik's cube moves exists is neat and all, but it doesn't seem to be terribly useful. It turns out, however, that we can use groups to make finding solutions to scrambled cubes a lot faster.
If I were writing an algorithm to solve a Rubik's cube, I would probably (in fact I did) start with some kind of iterative deepening algorithm where we take our cube and see if applying any one of the base moves solves it. If not, we try two moves, then three moves, etc. Eventually we'll find the shortest solution, but it will probably take a very long time since there are about 43 quintillion possible cube states all within 20 or so moves. I tried this and it starts to get unusably slow around 7 moves, which is not enough to solve every cube.
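The brute-force search described here fits in a few lines. The toy "puzzle" below (reach a target number using moves that add 1 or 3) stands in for the cube, since a real cube state and move table would be long; the exponential blow-up with depth is exactly why this approach stalls around 7 moves on a real cube:

```python
from itertools import product

# Iterative deepening: try every move sequence of length 0, 1, 2, ...
# until one solves the puzzle. The first hit is a shortest solution.
def iddfs(start, moves, apply_move, is_solved, max_depth=8):
    for depth in range(max_depth + 1):
        for seq in product(moves, repeat=depth):
            state = start
            for m in seq:
                state = apply_move(state, m)
            if is_solved(state):
                return list(seq)
    return None  # no solution within max_depth

sol = iddfs(0, [1, 3], lambda s, m: s + m, lambda s: s == 5)
print(sol)  # [1, 1, 3]
```

With $b$ moves the cost of depth $d$ is $O(b^d)$, so for the cube ($b = 12$ counting primes) each extra move multiplies the work by an order of magnitude.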
One of the first fast algorithms for cube solving was discovered by Morwen Thistlethwaite. The so-called "descent through nested sub-groups" algorithm works by moving the cube state into subgroups that require fewer and fewer moves until it gets to the solved state. The first group, \(G_0\), is the same as our \(\Bbb G\) group; it contains moves made of all six base moves. The next group consists only of moves made of double turns of the up and down faces, but 90 degree turns of the other faces are allowed. The next two groups work the same way, eliminating single turns of the front and back faces and finally removing single turns of the left and right faces. From there, the only subgroup left is the one that contains the solved state.
This algorithm works because the subgroups are so much smaller than the original group that it's possible to construct lookup tables that tell you how to get from one subgroup to the next. The original algorithm used a few preliminary moves to get the cube into a state that matched the lookup table for the \(G_0 \rightarrow G_1\) transition, but now there are lookup tables for that as well. You can find these lookup tables here if you're interested.
If we take advantage of some facts related to group theory, we can take a problem that's too big to solve reasonably (find a solution to an arbitrary cube) and make it trivially solvable — modern cube solving algorithms can find solutions in milliseconds. Group theory has practical applications all over the place though; modern cryptography depends on it, pretty much any error correcting algorithm uses group theory in some way, and it informs a lot of current physics and chemistry research, among other things.
The ID solver I wrote, along with a little Rust library to support it, is on my GitHub if you're interested. I'll probably add some fancier solution algorithms (like Thistlethwaite's algorithm for starters) when I have time.
|
Dear Uncle Colin, I've got a line with equation $10y+36x=16.5$. That equation has no negative numbers in it, yet its gradient is apparently negative. I don't understand why. -- Silly Line, Only Positive Equation Dear SLOPE, It looks like we're in misconception-land! In fact, you can write the equation of… Read More →
The estimable @solvemymaths tweeted, some time back: hmm, perhaps I'll keep this one as "sin(22)" pic.twitter.com/cT5IHonoyb — solve my maths (@solvemymaths) January 16, 2016 A sensible option? Perhaps. But Wolfram Alpha is being a bit odd here: that's something that can be simplified significantly. (One aside: I'm not convinced that… Read More →
Dear Uncle Colin, I get $-\frac{\ln(0.02)}{0.03}$ as my answer to a question. They have $\frac{100\ln(50)}{3}$. Numerically, they seem to be the same, but they look completely different. What gives? -- Polishing Off Weird Exponents, Really Stuck Dear POWERS, What you need here are the log laws (to show that $-\ln(0.02)=\ln(50)$,… Read More →
Some while back, Ben Orlin of the brilliant Maths With Bad Drawings blog posted a puzzle he'd set for some eleven-year-olds: Which is larger, $\frac{3997}{4001}$ or $\frac{4996}{5001}$? Hint: they differ by less than 0.000 000 05. He goes on to explain how he solved it (by considering the difference between… Read More →
Dear Uncle Colin, I have to solve the inequality $x^2 - \left|5x-3\right| \lt 2+x$. I rearranged to make it $x^2 - x - 2 \lt \left|5x-3\right|$, but the final answer is eluding me. -- Put Right Inequality Muddle Hello, PRIM! You're off to a good start; the next thing… Read More →
While I'm no Mathematical Ninja, it does amuse me to come up with mental approximations to numbers, largely to convince my students I know what I'm doing. One number I've not looked at much is $\sqrt{\frac{1}{3}}$, which comes up fairly frequently, as it's $\tan\left(\frac{\pi}{6}\right)$. Ninja-chops taught me all about… Read More →
Dear Uncle Colin, My maths mock went terribly, and I got a U. Since then I've done some real revision and got a good grade on a paper I did off my own bat. However, I'm a long way behind on the new material and I feel like it's too… Read More →
The Mathematical Ninja played an implausible trick shot, not only removing himself from a cleverly-plotted snooker, but potting a red his student had presumed safe and setting himself up on the black. Again. "One!" he said, brightly, and put some chalk on the end of his cue. The student sighed. Read More →
In this month's Wrong, But Useful, Dave and Colin discuss: Colin gets his plug for Cracking Mathematics in early Colin is upset by a missing apostrophe Dave teases us with the number of the podcast and asks about the kinds of things it's reasonable to expect students to know, and… Read More →
Dear Uncle Colin, Could you please tell me how to solve simultaneous equations? I have a rough idea, but I get confused about it. -- Stuck In Mathematical Examinations/Qualifications Hello, SIMEQ! Here’s how I attack linear simultaneous equations, such as: $5x + 6y = -34$ (A) $7x + 2y = \dots$ Read More →
|
Physics case
The SoLid experiment aims to address one of the most outstanding issues in neutrino physics. Recent experimental neutrino oscillation results suggest the existence of a new neutrino state. This possible new state, with a mass of around 1 eV, is called sterile because of its vanishing weak interaction quantum numbers. Its discovery would modify in depth our current understanding of the elementary interactions.
Electronic antineutrinos produced at nuclear reactors can help to shed light on this new state. The electron-to-sterile antineutrino oscillation induces a modification of the flux of the antineutrinos close to the reactor core (< 10 m). The measurement of the antineutrino flux from \(^{235}\rm U\) at short distance from the production will provide an essential reference for predictions used in current and future neutrino experiments.
SoLid experiment
The unique strengths of the SoLid experiment rely on both the antineutrino source and the detection technology. The detector is installed at the BR2 research reactor of \( \rm{SCK \! \cdot \! CEN (Mol,Belgium) } \), which provides a very pure source of electron antineutrinos (>95 % \(^{235}\rm U\)). The experimental site allows flux measurements to be performed at distances varying from 5.5 to 12 m from the core, where the modification of the flux in the presence of a sterile neutrino is maximal. In addition to these beneficial characteristics, the site is distinguished by its exceptionally low background environment.
The measurement principle is based on the identification of the inverse beta decay products when an electron antineutrino interacts with a proton inside the detector \( \rm{(\bar\nu_e + p \rightarrow e^+ + n)} \). This produces a prompt signal originating from the ionization energy loss and subsequent annihilation of the positron as well as a time-delayed signal induced by the capture of the neutron after thermalisation.
In order to measure these signals, the SoLid detector is composed of detection units made of polyvinyl-toluene (PVT) cubes of 5 x 5 x 5 cm\(^3\) lined with two layers of \( \rm{ ^6LiF:ZnS(Ag) }\). The PVT cubes provide the hydrogen-rich target for the electron antineutrinos. They are optically isolated using Tyvek wrapping and assembled in planes of 16 x 16 units. In its final design, the SoLid detector is made of 5 modules, each consisting of 10 planes. The positron is detected by the scintillation light generated in the PVT cubes. The neutron thermalises and is subsequently captured by the \( ^6\rm{Li} \), producing a \( ^3\rm H \) nucleus and an \( \alpha \) particle with a combined energy of 4.8 MeV, which excites scintillation in the \( \rm{ZnS(Ag)} \) layers. This light is collected via two horizontal (vertical) fibres running through each row (column) of cubes. Each fibre is coupled to a silicon photomultiplier at one end and a mirror at the other end. This enables a 3D reconstruction of all signals in any cube. The light signals from the PVT and \( \rm{ZnS(Ag)} \) have different decay times, allowing the positron and neutron signals to be separated.
|
Given a secure PRG $G :\{0,1\}^n \rightarrow \{0,1\}^{2n}$, I've constructed the following PRG: $$G'(x) = \begin{cases}0^{2n} & \text{if $x$ is a palindrome;}\\G(x) & \text{otherwise;}\\\end{cases}$$My intuition is that $G'$ is secure because it's "predictable" only for a negligible portion of the inputs, so I tried to write a formal reduction.
I tried showing that given an efficient distinguisher $D'$ for $G'$, I can construct an efficient distinguisher $D$ for $G$. The immediate issue I faced is that $D$ receives as input either a random string $r$ of length $2n$ or $G(s)$ for a random $s$ of length $n$, and in the latter case I'm having trouble rearranging $G(s)$ to be "suitable" for $D'$ (which I'm using as a black box). Is my intuition about the security of $G'$ correct? If so, what is the way of proving its security?
Indeed, your intuition is perfectly correct, and the distinguisher $D$ you're trying to build is basically $D'$ itself. To see this, divide the experiment for $G'$ into two cases: either the chosen input is a palindrome or it is not. The first case occurs with negligible probability, so we can focus on the second case, which is easy since it corresponds to the original experiment for $G$.
If you want to be formal, just keep in mind that \begin{multline*} \mathbb{P}\left[D'(G'(x)) = 1\right] = \mathbb{P}\left[D'(G'(x)) = 1|x\text{ is palindrome}\right]\cdot\overbrace{\mathbb{P}\left[x\text{ is palindrome}\right]}^{\delta} \\ + \mathbb{P}\left[D'(G'(x)) = 1|x\text{ is not palindrome}\right]\cdot\underbrace{\mathbb{P}\left[x\text{ is not palindrome}\right]}_{1-\delta} \end{multline*} and $$\mathbb{P}\left[D'(r) = 1\right] = \delta\cdot\mathbb{P}\left[D'(r) = 1\right] + (1-\delta)\cdot\mathbb{P}\left[D'(r) = 1\right].$$ Now, when you take the difference $$\begin{align*} \left|\mathbb{P}\left[D'(G'(x)) = 1\right] - \mathbb{P}\left[D'(r) = 1\right]\right| \end{align*}$$ try to associate terms appropriately and apply some straightforward inequalities to show this is negligible.
A palindrome is a bit string that equals its own reversal, $x=\sigma(x)$, where $\sigma$ denotes the bit-reversal permutation. For every $x\in\{0,1\}^n$ we can append $n$ additional bits to create a palindrome $x||\sigma(x)$. To verify that this construction produces a palindrome, note that $\sigma$ is an involution, $x = \sigma(\sigma(x))$ (self-inverse), and that it reverses string concatenation, $\sigma(a||b) = \sigma(b)||\sigma(a)$, so that $\sigma(x||\sigma(x)) = \sigma(\sigma(x))||\sigma(x) = x||\sigma(x)$, as claimed.
Of the $2^n$ possible ways to extend $x$ to $x||y$ with $y\in\{0,1\}^n$, only one choice of $y$ produces a palindrome, namely $y=\sigma(x)$. This means that the total number of palindromes of the form $x||z$ with $z\in\{0,1\}^n$ equals the number of possible choices of $x$, which is $2^n$.
Thus the probability that a uniformly random string of length $2n$ is a palindrome is $2^{n}/2^{2n} = 2^{-n}$.
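As a quick sanity check on this count, one can enumerate small cases by brute force (a small illustrative script, not part of the argument; `count_palindromes` is just a throwaway helper name):

```python
from itertools import product

def count_palindromes(length):
    # brute-force count of palindromic bit strings of a given length
    return sum(1 for bits in product("01", repeat=length)
               if bits == bits[::-1])

# compare with the closed form 2^n for strings of length 2n
for n in (1, 2, 3, 4):
    assert count_palindromes(2 * n) == 2 ** n
```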
|
I would like your help to show a statement that uses the law of iterated expectations.
In my notation $Supp_X$ denotes the support of a random variable $X$.
Consider the random variables $\epsilon, X, Y$. Take the function $z: Supp_X\rightarrow \mathbb{R}^L$ with $L>1$. Take the function $r: Supp_{(X,Y)}\rightarrow \{0,1\}$.
Assume that $E(\epsilon| X,Y)=0$. I want to show that $$ E(\epsilon \times r(X,Y) \times z_l(X))=0 \hspace{1cm} \forall l=1,...,L $$ where $z_l(X)$ is the $l$-th element of $z(X)$.
I believe that this statement makes use of the law of iterated expectations, but I'm not sure how to show it formally. I'm confused because I've always used the law of iterated expectations to get rid of the conditioning event, which is not the case here.
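For what it's worth, the usual route is to condition on $(X,Y)$ and pull out the $(X,Y)$-measurable factors (a sketch, assuming all expectations exist):

```latex
\begin{align*}
E\big(\epsilon \, r(X,Y) \, z_l(X)\big)
  &= E\Big( E\big(\epsilon \, r(X,Y) \, z_l(X) \mid X, Y\big) \Big)
     && \text{law of iterated expectations} \\
  &= E\Big( r(X,Y) \, z_l(X) \, E\big(\epsilon \mid X, Y\big) \Big)
     && \text{$r(X,Y)\,z_l(X)$ is $(X,Y)$-measurable} \\
  &= E\big( r(X,Y) \, z_l(X) \cdot 0 \big) = 0 .
\end{align*}
```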
|
Homework Statement: A boy decides to hang his bag from a tree branch, with the purpose of raising it. The boy walks with constant velocity along the x axis, and the bag moves along the y axis only. Taking into account that the length of the rope is constant and that we know the height h of the branch with respect to the floor: a) find the acceleration of the bag. Homework Equations: kinematic equations
So what I did was first consider the case where the kid is below the branch, so that x=0, t=0. Then I thought that the length L of the rope should be ##L=2h##, because we know the radius from the branch to the kid satisfies ##x^2+y^2=r^2## and when x=0, y=h. So then I wrote the motion equations for the bag: $$a(t)=a-g$$ $$x(t)=\frac{(a-g)t^2}{2}$$
And I want to find the time t so that the bag is at the branch, then x(t)=h: $$h=\frac{(a-g)t^2}{2} \rightarrow \sqrt{\frac{2h}{a-g}}=t$$ At this moment, ##x=v_0 \sqrt{\frac{2h}{a-g}}##, then we have that the length of the rope gives ##h^2+(v_0 \sqrt{\frac{2h}{a-g}})^2=4h^2##, and we find that $$a= \frac{2{v_0}^2}{3h}+g$$ Is this correct?
|
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) has been measured at mid-rapidity ($|y| < 0.5$) in proton–proton collisions at $\sqrt{s}$ = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary $\pi^{\pm}$, $K^{\pm}$, p and $\bar{p}$ production at mid-rapidity ($|y| < 0.5$) in proton–proton collisions at $\sqrt{s}$ = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
|
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV
(American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
|
Nandakumaran, AK and George, Raju K (2004)
Exact Controllability of a Semilinear Thermoelastic System. In: Numerical Functional Analysis and Optimization, 25 (3 - 4). pp. 271-285.
Abstract
In this paper, we obtain the exact controllability of a semi-linear thermo-elastic system described by the partial differential equations: $$\omega_{tt} - \gamma\Delta \omega_{tt} + \Delta^2\omega + \alpha\Delta\theta + f(\theta)= 0 \text{ in } \Omega_T$$ $$\theta_t - \Delta\theta + \sigma\theta -\alpha\Delta\omega_t + g(\omega) = u \text{ in } \Omega_T$$ with Dirichlet boundary conditions when the Lipschitz constants of the non-linearities f and g are small.
Item Type: Journal Article
Additional Information: The copyright of this article belongs to Taylor and Francis Group.
Keywords: Exact controllability; Thermo-elastic system; Fixed point theorem
Department/Centre: Division of Physical & Mathematical Sciences > Mathematics
Depositing User: Ravi V Nandhan
Date Deposited: 19 Nov 2004
Last Modified: 16 Jan 2012 10:19
URI: http://eprints.iisc.ac.in/id/eprint/2181
|
The Diffie-Hellman on curve25519 is usually calculated using the base point $(9,…)$ which induces a cyclic subgroup of $G:=\{\infty\}\cup(E(F_{p^2})\cap(F_p\times F_p))$ with index 8, i.e. there is a prime $p_1$ such that $|G|=8p_1$ and the order of $(9,…)$ is $p_1$. An attacker does not have to use a multiple of $(9,…)$ though and can even choose an element in the twist group $T:=\{\infty\}\cup(E(F_{p^2})\cap(F_p\times \sqrt 2 F_p))$ which has order $|T|=4p_2$ for a prime $p_2$.
Contributory behaviour (afaik) describes the property that none of the participants of the Diffie-Hellman exchange can force the outcome to be one of a small set of values. Such a property is for example interesting to defend against something like the triple handshake attack. The website on curve25519 lists 12 values to reject to assure contributory behaviour.
I understand where eleven of these come from, namely the elements of the subgroups of order 8 and 4 of $G$ and $T$ respectively. As they both share the same identity element ($\infty$) there are $8+4-1=11$ of those elements.
(If an element is not in those subgroups of order 8 and 4, then its order is $\geq \min(p_1,p_2)$ and thus the set of possible values that result out of the multiplication with the private scalar of the other party is large.)
Which of the 12 elements listed on the website is not one of the above eleven and why is it there?
|
Sara Maloni (University of Virginia) The geometry of quasi-Hitchin symplectic Anosov representations
Location: IMO, room 2L8
Abstract: In this talk we will focus on our joint work in progress with Daniele Alessandrini and Anna Wienhard about quasi-Hitchin representations in Sp(4,C), which are deformations of Fuchsian representations that remain Anosov. These representations act on the space Lag(C^4) of complex Lagrangian subspaces of C^4. We will show that the quotient of the domain of discontinuity for this action is a fiber bundle over the surface, and we will describe the fiber. In particular, we will describe how the projection map comes from an interesting parametrization of Lag(C^4) as the space of regular ideal hyperbolic tetrahedra and their degenerations.
Last-minute note: Café culturel at 1 pm, given by Daniel Monclair
Luca Calatroni (Ecole Polytechnique) Anisotropic image osmosis models for visual computing
Abstract: We consider a drift-diffusion PDE modelling the non-symmetric physical phenomenon of osmosis (Weickert, '13) and apply it to solve efficiently several imaging tasks such as image cloning, image compression and shadow removal. For the latter problem, in order to overcome the smearing artefacts on the shadow boundary due to the action of the Laplace operator, we extend the linear model by means of directional diffusion weights allowing for a combined osmosis and non-linear inpainting procedure. In particular, analogies with the second order diffusion inpainting equations (e.g. Harmonic, Absolutely Minimising Lipschitz Extensions, Total Variation) and connections with Grushin operators are shown. Numerical details on the efficient implementation of the model via appropriate stencils mimicking the anisotropy at a discrete level are presented, and applications to camera and cultural heritage conservation images are also presented.
Simão Correia (Université de Strasbourg) Some new local and global well-posedness results for the nonlinear Schrödinger equation
Location: IMO, room 3L8
Abstract: In this presentation, we shall consider the nonlinear Schrödinger equation on $\mathbb R^d$, $$iu_t + \Delta u + \lambda |u|^\sigma u = 0$$ with an initial condition at $t=0$. This is already a classical equation, with a vast literature regarding the behaviour of the solutions to this problem. We discuss the extension of the $H^1$ local well-posedness theory to some larger spaces which, in particular, do not lie inside $L^2$. As a byproduct, we develop the theory for the plane wave transform, which is of independent mathematical interest. If time allows, we present some global existence results, which rely either on a small data theory or on the concept of finite speed of disturbance.
|
In this talk, we investigate the ratio of the connected version of a problem to the original problem in graphs, called the Price of Connectivity (PoC).

Firstly, we study the PoC for the Domination problem. The ratio of the connected domination number, \(\gamma_c\), to the domination number, \(\gamma\), is strictly bounded from above by 3. We consider the computational aspect of computing the PoC and we extend some structural results. For instance, it was shown by Zverovich that for every connected \((P_5, C_5)\)-free graph, \(\gamma = \gamma_c\), and the class of \((P_5, C_5)\)-free graphs is the largest satisfying this equality for all induced subgraphs. We prove that being a connected \((P_6, C_6)\)-free graph is equivalent to satisfying \(PoC(H) \leq \frac{3}{2}\) for all induced subgraphs \(H\). Moreover, for every connected \((P_8, C_8)\)-free graph, \(\frac{\gamma_c}{\gamma} \leq 2\); however, this class is not the largest one.

Secondly, to find the list of forbidden induced subgraphs, we introduce the notion of critical graphs: a graph \(G\) such that \(PoC(H)<PoC(G)\) for every induced subgraph \(H\). In order to characterize these graphs, we develop a new system called GraphsInGraphs (GIG). The system links a graph with its induced subgraphs. Perhaps you already have an application for our system? It is welcome!

This is joint work with Oliver Schaudt and Gilles Caporossi.
This seminar is open only to GERAD students. Please confirm your attendance by giving your full name. Pizza and drinks will be served to participants, or you may bring your own lunch.
|
Colloquium di Matematica
Invariance principle for the random Lorentz gas beyond the [Boltzmann-Grad / Gallavotti-Spohn] limit
Balint Toth
23-05-2018 - 16:00 Aula F, primo piano, edificio Aule - Largo San Leonardo Murialdo,1
Let hard ball scatterers of radius $r$ be placed in $\mathbb R^d$, centred at the points of a
Poisson point process of intensity $\rho$. The volume fraction $r^d \rho$ is assumed to be
sufficiently low so that with positive probability the origin is not trapped in a finite domain
fully surrounded by scatterers. The Lorentz process is the trajectory of a point-like particle
starting from the origin with randomly oriented unit velocity subject to elastic collisions with
the fixed (infinite mass) scatterers. The question of diffusive scaling limit of this process is
a major open problem in classical statistical physics.
Gallavotti (1969) and Spohn (1978) proved that under the so-called Boltzmann-Grad limit, when $r \to 0$, $\rho \to \infty$ so that $r^{d-1}\rho \to 1$ and the time scale is fixed, the Lorentz process (described informally above) converges to a Markovian random flight process, with independent exponentially distributed free flight times and Markovian scatterings. It is essentially straightforward to see that taking a second diffusive scaling limit (after the Gallavotti-Spohn limit) yields the invariance principle.
I will present new results going beyond the [Boltzmann-Grad / Gallavotti-Spohn] limit, in $d=3$:
Letting $r \to 0$, $\rho \to \infty$ so that $r^{d-1} \rho \to 1$ (as in B-G) and simultaneously
rescaling time by $T \sim r^{-2+\epsilon}$ we prove invariance principle (under diffusive
scaling) for the Lorentz trajectory. Note that the B-G limit and diffusive scaling are done
simultaneously and not sequentially. The proof is essentially based on control of the effect of
re-collisions by probabilistic coupling arguments. The main arguments are valid in $d=3$ but not
in $d=2$.
org: CAPUTO Pietro
|
Suppose I know the Dirchlet series $$\sum_{n=1}^{\infty} \frac{f(n)}{n^s} = \frac{\zeta(s)}{\zeta(3s)},$$ where $\zeta(s)$ is the usual Riemann zeta function.
My question is - is there a way to determine $f(n)$ from this information? If so, how?
Yes: \begin{align} \sum_{n = 1}^\infty \frac{f(n)}{n^s} & = \frac{\zeta(s)}{\zeta(3s)}\\ & = \prod_p \frac{1 - p^{-3s}}{1 - p^{-s}} &&\text{by the Euler product for $\zeta$ where the $p$s are all primes}\\ & = \prod_p \left(1 + \frac{1}{p^s} + \frac{1}{p^{2s}} \right) &&\text{by the identity $ 1 - x^3 = (1 - x)(1 + x + x^2)$} \end{align} Comparing with the Euler product formula, we see that $f$ is a multiplicative function and that $$f(p^\alpha) = \begin{cases} 1 & \text{if $\alpha = 1$ or $2$,} \\ 0 & \text{if $\alpha = 3, 4, 5, \ldots$.} \end{cases}$$
Because a multiplicative function is completely determined by its values at the powers of prime numbers, we can restate the function as $$f(n) = \begin{cases} 1 & \text{if 1 is the largest cube that divides $n$,} \\ 0 & \text{otherwise.} \end{cases}$$
The Dirichlet series for $\zeta(s)$ and $\zeta(3s)$ both converge absolutely for $s > 1$, so this definition of $f(n)$ is the only possible one (see Theorem 4.8 at http://www.math.illinois.edu/~ajh/ant/main4.pdf).
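A small numerical cross-check of the answer (illustrative only; `f_cubefree`, `f_dirichlet` and `mobius` are throwaway helper names): the Euler-product argument says $f$ is the cube-free indicator, while expanding $1/\zeta(3s) = \sum_d \mu(d)/d^{3s}$ and multiplying by $\zeta(s)$ gives $f(n) = \sum_{d^3 \mid n} \mu(d)$; both should agree.

```python
def mobius(n):
    """Möbius function via trial division."""
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:      # p^2 divides the original n
                return 0
            result = -result
        else:
            p += 1
    if n > 1:                   # one leftover prime factor
        result = -result
    return result

def f_cubefree(n):
    """f(n) = 1 iff no cube > 1 divides n (Euler-product answer)."""
    d = 2
    while d ** 3 <= n:
        if n % d ** 3 == 0:
            return 0
        d += 1
    return 1

def f_dirichlet(n):
    """f(n) = sum of mu(d) over d with d^3 | n (from zeta(s)/zeta(3s))."""
    return sum(mobius(d) for d in range(1, n + 1)
               if d ** 3 <= n and n % d ** 3 == 0)

assert all(f_cubefree(n) == f_dirichlet(n) for n in range(1, 2000))
```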
|
This article provides answers to the following questions, among others:
Why are slip steps in a single crystal preferentially oriented at an angle of 45° to the tensile axis? What is meant by multiple gliding? Under what condition does easy gliding occur? Why are slip steps usually not visible on a polycrystal?

Introduction
If a material is characterized on the atomic level by a uniform lattice orientation (no grain boundaries), it is called a single crystal.
If such a single crystal is deformed under tension, inclined rings form in the deformation region. This can be clearly seen in the copper single crystal shown in the figure below. These rings are so-called slip steps.
Why these slip steps are preferentially aligned at an angle of 45° will be clarified in the following sections.
Shear force
The single crystal with a cross-sectional area \(A_0\) is externally loaded with the tensile force \(F_0\). In order to initiate the deformation process and allow the atomic planes to shear off, a force \(F_{\parallel}\) must act in the slip planes. Only in this way can a plane actually be moved in the slip direction. A normal force acting perpendicular to the slip planes, on the other hand, has no influence on the deformation: it would merely compress the layers but not move them.
If the normal of a slip plane lies at an angle \(\alpha\) to the tensile axis, then the external force \(F_0\) can be divided into a component parallel to the slip plane (shear force \(F_{\parallel}\)) and a component perpendicular to it (normal force \(F_{\perp}\)). Mathematically, the shear force and the normal force can be determined as functions of the angle \(\alpha\) as follows:
\begin{align}
\label{kraft} F_{\parallel}&=F_0 \cdot \sin(\alpha) ~~~~~\text{shear force} \\[5px] F_{\perp}&=F_0 \cdot \cos(\alpha)~~~~~\text{normal force} \\[5px] \end{align}
If the angle \(\alpha\) between the surface normal and the tensile axis is relatively small, the shear force in the slip plane is also relatively low. The force may not be sufficient to activate the slip plane and initiate the deformation process.
If, on the other hand, the slip plane normal lies at a relatively large angle \(\alpha\) to the tensile axis, the shear force in the slip plane is at first relatively large. However, at the same time the slip plane area, and with it the binding force between the planes, increases. Even in this case, the greater shear force may still not be enough to shear off the plane and start the deformation process.
The situation can be compared with a hook-and-loop fastener, wherein the mutual entanglements between the closure materials correspond to the bonds between the atomic planes. If the shear forces to open the hook-and-loop fastener are too low, the closure materials do not slide off each other. However, a multiplication of the force does not lead to slippage, if at the same time the overlap area of the two closure materials increases disproportionately.
There must therefore be a favorable ratio of force to area in order to reach the optimum state where the most force per area, i.e. the greatest shear stress, is obtained.

Shear stress
Decisive for a slipping of the atomic planes is therefore not the force in the slip plane alone but the acting force per area, i.e. the shear stress (see also fundamentals of deformation):
\begin{equation}
\label{schubspannung} \tau=\frac{F_{\parallel}}{A} \end{equation}
While the force \(F_{\parallel}\) depends on the angle \(\alpha\) of the slip plane accordingly to the equation (\ref{kraft}), the slip plane area \(A\) is determined as follows by the sample’s cross section \(A_0\):
\begin{equation}
\label{flaeche} A=\frac{A_0}{\cos(\alpha)} \end{equation}
The diagram below schematically shows the increase of the shear force (blue curve) and of the area (black curve) with increasing angle. In addition, the resulting curve of the shear stress is shown (red curve). Obviously, the largest force per area acts at an angle of 45°. This will be shown mathematically in the following section.
Maximum shear stress
The shear stress curve shown in the figure above is obtained by inserting the equation (\ref{kraft}) and the equation (\ref{flaeche}) in the shear stress equation (\ref{schubspannung}):
\begin{equation}
\label{scherspannung} \tau=\frac{F_{\parallel}}{A}=\frac{F_0 \cdot \sin(\alpha)}{\frac{A_0}{\cos(\alpha)}} =\frac{F_0}{A_0} \cdot \cos(\alpha) \cdot \sin(\alpha) \end{equation}
The quotient \(\frac{F_0}{A_0}\) appearing in equation (\ref{scherspannung}) corresponds to the normal stress \(\sigma_0\) applied from the outside (“force per cross-sectional area”). For the angle-dependent shear stress \(\tau(\alpha)\), therefore, the following equation applies:
\begin{equation}
\tau(\alpha)=\sigma_0 \cdot \cos(\alpha) \cdot \sin(\alpha) \end{equation}
The term \(\cos(\alpha)\cdot \sin(\alpha)\) can further be replaced by the expression \(\frac{\sin(2\alpha)}{2}\). Thus, the effective shear stress \(\tau\) in a slip plane which is aligned at the angle \(\alpha\) to the tensile axis, can be calculated as follows:
\begin{equation}
\label{tau} \boxed{\tau(\alpha)={\frac{\sin(2 \alpha)}{2} \cdot \sigma_0} } \end{equation}
The internal shear stress \(\tau\) thus depends on the applied external stress \(\sigma_0\) and in particular on the angular position \(\alpha\). The term \(\sin(2 \alpha)\) reaches the maximum value 1 for an angle of \(\alpha\) = 45°.
It is now also shown mathematically that the maximum shear stress \(\tau_{max}\) is reached at an angle of \(\alpha\) = 45°. In this case, the maximum shear stress corresponds to exactly half of the external normal stress \(\sigma_0\)!
\begin{equation}
\label{max} \boxed{\tau_{max}=\frac{\sigma_0}{2}} ~~~ \text{for} ~~~ \alpha=45° \end{equation}
External normal stresses induce shear stresses in the material interior, which become maximum at an angle of 45°. In those favorably lying slip planes, the atomic layers therefore prefer to shear off!
This is the reason why the copper single crystal shown above exhibits slip steps at an angle of about 45° to the tensile axis. The prerequisite for this is that the lattice structure is spatially oriented in such a way that a slip plane is aligned at an angle of about 45°. The effects of the deformation process when no slip plane lies at 45° will be discussed in the next section.
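The angular dependence derived above is easy to check numerically. The following short sketch (the function name `shear_stress` is just an illustrative choice) evaluates \(\tau(\alpha)=\sigma_0\sin(2\alpha)/2\) and confirms the 45° maximum and the \(\sigma_0/2\) value:

```python
import math

def shear_stress(alpha_deg, sigma0=1.0):
    # tau(alpha) = sigma0 * sin(2*alpha) / 2, with alpha in degrees
    return sigma0 * math.sin(2 * math.radians(alpha_deg)) / 2

# the maximum over 0..90 degrees sits at 45 degrees and equals sigma0/2
best_angle = max(range(91), key=shear_stress)
assert best_angle == 45
assert abs(shear_stress(45) - 0.5) < 1e-12
```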
Note: In addition to the optimal orientation of the slip plane, the slip direction must also be oriented accordingly. This optimal orientation has been tacitly assumed. The article Schmid's law deals with this topic in more detail.

Influence of lattice orientation
What does the deformation of a single crystal look like if its slip planes are not at an angle of 45°? Due to this less favorable position, larger external normal stresses are required in this case, so that the critical resolved shear stress in the slip plane (and in the slip direction!) is exceeded.
Multiple gliding
In the case of a very unfavorable position of the (primary) slip planes, this can even lead to the activation of other slip planes, which normally only come into play at higher stresses, because the critical shear stress is exceeded in them first. Slipping then takes place in several different slip systems.
This will be the case if the different slip planes are arranged relatively symmetrically with respect to the tensile axis. The critical shear stress is then reached approximately simultaneously in all slip systems. For face-centered cubic single crystals, the unit cells must be aligned nearly parallel to the tensile axis (see figure after next).
All four slip planes are then activated equally. Because the shearing takes place on several different slip planes, one also speaks of
multiple gliding. By such multiple gliding, the crystal is deformed in different directions. The slip steps exit the material at different angles and in different spatial directions. The superposition of these slip steps makes it impossible to distinguish individual slip steps from one another. They are therefore not visible to the naked eye.
Figure: Multiple gliding in a single crystal
Easy gliding
The image of the copper single crystal above, on the other hand, obviously shows distinct, uniform slip steps. The reason for this is that the fcc lattice was specially aligned with respect to the tensile axis. The single crystal was designed so that only one slip plane was oriented at an angle of approximately 45° to the tensile axis.
Thus, upon reaching the limit stress, only this slip plane was activated, while the shear stresses in the remaining slip planes (or in the slip direction) remained below the critical resolved shear stress. Since in such a case the sliding takes place only within one slip plane, this process is also referred to as
single gliding or easy gliding.
The external normal stress \(\sigma_{crit}\) that has to be applied to a material in order to exceed the critical resolved shear stress in a slip plane is referred to as
yield strength or elastic limit.
This characteristic value thus refers to the (tensile) stress at which an irreversible deformation process begins. As explained, the yield strength of single crystals depends on the spatial orientation of the lattice! The lowest value of the yield strength is therefore obtained for single crystals whose slip planes are aligned at an angle of 45° to the tensile axis.
The yield strength marks the stress limit from which on a material deforms plastically!
Polycrystals
Such a clear appearance of slip steps at an angle of 45° is only evident in specially oriented single crystals. However, single crystals can only be produced with great technical effort. They are therefore limited to special applications such as nickel-based single-crystal turbine blades or silicon chip production.
In general, metals do not have a uniform lattice orientation (note: the term lattice orientation must not be confused with the term lattice structure!). Rather, a metal consists of many small microscopic regions (
grains), each with a different lattice orientation. Such crystals are also referred to as polycrystals.
Thus, in such polycrystals several slip systems with different slip-plane orientations are always activated simultaneously (
multiple gliding). A uniform pattern of slip steps is therefore not visible in such materials anyway. How such a nonuniform lattice orientation comes about is explained in more detail in the chapter microstructure.
In this article, only the processes that lead to the initiation of a deformation were discussed. The actual deformation process in single crystals is discussed in more detail in the article deformation process in single crystals.
|
This posting presents the \(N_l\) spectra to be used for Phase 2 of the CMB-S4 data challenge and is the analogue of this posting for Phase 1. These noise spectra are based on the most recent optimization, which includes extra low-frequency bands. The documentation and evolution of these specifications have been laid out through multiple postings on the CMB-S4 wiki. Below is a useful historical recap that builds on the posting linked above (for Phase 1), and includes work from many people:
In this posting I do the following things:
As mentioned in the posting above, the effort distributions in the tables below were calculated given an optimized solution for a minimal \(\sigma_r\), taking into account contributions from foregrounds and CMB lensing. The assumed unit of effort is equivalent to 500 det-yrs at 150 GHz. For other channels, the number of detectors is calculated as \(n_{det,150}\times \left(\frac{\nu}{150}\right)^2\), i.e. assuming comparable focal-plane area. A conversion between the (150-equivalent) number of det-yrs and (actual) number of det-yrs is given for each band. This is just one way to implement a detector cost function, and other suggestions are welcome.
\(f_{sky}=0.03\)

| \(\nu\), GHz | # det-yrs (150 equiv) | # det-yrs (actual) | FWHM, arcmin | BB \(\sigma_{map}\), \(\mu K\)-arcmin | BB \(l_{knee}\) | BB \(\gamma\) | EE \(\sigma_{map}\), \(\mu K\)-arcmin | EE \(l_{knee}\) | EE \(\gamma\) | TT \(\sigma_{map}\), \(\mu K\)-arcmin | TT \(l_{knee}\) | TT \(\gamma\) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 20 | 30,000 | 533 | 76.6 | 14.69 | 50 | -2.0 | 15.06 | 50 | -2.0 | 17.99 | 175 | -4.1 |
| 30 | 22,500 | 900 | 76.6 | 9.36 | 50 | -2.0 | 9.59 | 50 | -2.0 | 11.47 | 175 | -4.1 |
| 40 | 22,500 | 1,600 | 57.5 | 8.88 | 50 | -2.0 | 9.10 | 50 | -2.0 | 10.88 | 175 | -4.1 |
| 85 | 182,500 | 58,600 | 27.0 | 1.77 | 50 | -2.0 | 1.81 | 50 | -2.0 | 2.17 | 175 | -4.1 |
| 95 | 182,500 | 73,200 | 24.2 | 1.40 | 50 | -2.0 | 1.43 | 50 | -2.0 | 1.72 | 175 | -4.1 |
| 145 | 67,500 | 63,075 | 15.9 | 2.19 | 60 | -3.0 | 2.29 | 65 | -3.0 | 4.89 | 230 | -3.8 |
| 155 | 67,500 | 72,075 | 14.8 | 2.19 | 60 | -3.0 | 2.29 | 65 | -3.0 | 4.89 | 230 | -3.8 |
| 220 | 57,500 | 118,130 | 10.7 | 5.61 | 60 | -3.0 | 5.87 | 65 | -3.0 | 12.54 | 230 | -3.8 |
| 270 | 57,500 | 186,300 | 8.5 | 7.65 | 60 | -3.0 | 8.01 | 65 | -3.0 | 17.11 | 230 | -3.8 |

Total degree-scale effort: 690,000 (150 equiv) / 574,420 (actual). Total arcmin-scale effort: 310,000 / 289,680. Total effort: 1,000,000 / 864,100.
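The analytic fitting parameters above are typically combined into a noise spectrum of the form \(N_\ell = \sigma_{map}^2\,[1 + (\ell/\ell_{knee})^\gamma]\), with \(\sigma_{map}\) converted from \(\mu K\)-arcmin to \(\mu K\)-rad. That functional form is a standard convention, assumed here since the posting does not spell it out. A minimal sketch:

```python
import math

def n_ell(ell, sigma_uk_arcmin, ell_knee, gamma):
    """White + 1/f noise model: N_ell = sigma^2 * (1 + (ell/ell_knee)^gamma)."""
    sigma_rad = sigma_uk_arcmin * math.pi / (180.0 * 60.0)  # muK-arcmin -> muK-rad
    return sigma_rad**2 * (1.0 + (ell / ell_knee) ** gamma)

# 95 GHz BB parameters from the table: sigma = 1.40, l_knee = 50, gamma = -2.0
low = n_ell(30, 1.40, 50, -2.0)    # below the knee: 1/f dominated
high = n_ell(500, 1.40, 50, -2.0)  # well above the knee: nearly white noise
assert low > high
```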
|
Let $\chi : (\mathbb Z/f\mathbb Z)^\times \to K = \mathbb Q(\mu_{\phi(f)})$ be a primitive Dirichlet character. Assume moreover that it is
not quadratic, that is, $\chi^2$ is not the trivial character. Let $\pi_1,\dots,\pi_g$ be the primes lying over $2$ and $v_1,\dots,v_g$ be the corresponding valuations. Recall that:$$L(0,\chi) = \frac1f\sum_{n=1}^fn\chi(n)$$
Experimentally (up to conductor 200), I find that there always exists some $k$ such that $v_k(L(0,\chi)) > 0$. Does anyone know a proof?
Note that it is not true that $2 \mid L(0,\chi)$. For instance, for the character of conductor $5$ mapping $2 \mapsto i$, we have $L(0,\chi) = (i+3)/5$. There are lots of other examples.
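The conductor-$5$ example can be checked quickly (sign conventions for $L(0,\chi)$ vary, but the valuation is unaffected). Elements of $\mathbb{Q}(i)$ are tracked as pairs of rationals; since $2$ ramifies as $(1+i)^2$ in $\mathbb{Z}[i]$, a positive valuation at the prime above $2$ is equivalent to the field norm having even numerator:

```python
from fractions import Fraction

# primitive character mod 5 with chi(2) = i; values stored as (re, im)
chi = {1: (1, 0), 2: (0, 1), 3: (0, -1), 4: (-1, 0)}  # chi(3) = i^3, chi(4) = i^2

re = Fraction(sum(n * chi[n][0] for n in chi), 5)  # (1/5) * sum n*chi(n), real part
im = Fraction(sum(n * chi[n][1] for n in chi), 5)  # imaginary part

norm = re**2 + im**2           # field norm of L(0,chi), independent of sign convention
assert norm == Fraction(2, 5)  # even numerator => positive valuation above 2
```

This is a sanity check of the observation for one conductor, not a proof.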
Note also that we do require the condition that $\chi$ is non quadratic. For instance, if $f = p \equiv 3 \pmod4$ and $\chi$ is quadratic, then: $$pL(0,\chi) \equiv (p-1)/2 \equiv 1 \pmod 2.$$
I asked this question a few hours ago on Stack Exchange but, at someone's suggestion, I am posting it here. I have deleted the question there.
|
This is a collection of a few ideas. Let me see where it takes us.
It is probably too verbose since I am writing it as I think the question.
Maybe later I can shorten it and correct errors.
If you are in a hurry, it is safe to scroll down to the last part, where there is
what I think is an answer to the OP's question.
I want to concentrate on differentiation and integration as symbolic operations.
For differentiation we can consider a class $E$ of function-symbols that contains the constants (complex constants, say) and $x$, and is closed under the arithmetic operations and composition. We can throw in other function-symbols like $e^x$, $\ln(x)$ together with $x^{-1}$, or many others. But notice that for every new function-symbol we throw into $E$ we assume we know how to compute its derivative (we have a symbol for it). The minimal assumption of having the constants and $x$ gives $E$ to be the polynomials. A larger option would be the elementary functions.
If differentiation is considered as an operation in the symbols in $E$ then it is, by the definition of $E$, an algorithmic operation. Given a function-symbol from $E$ (which by this act it is assumed to be formed from a few symbols for which we know the derivative and arithmetic and composition operations) we can compute its derivatives because the properties of differentiation cover the operations generating $E$. In principle, what might be hard is the question of whether a function belongs to $E$.
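For instance, over the minimal $E$ of polynomials in $x$, differentiation is a short recursive algorithm; a sketch where expressions are modeled as nested tuples:

```python
# expressions: a constant, the symbol 'x', or ('+', a, b) / ('*', a, b)
def d(e):
    """Symbolic derivative over the class generated by constants, x, +, *."""
    if e == 'x':
        return 1
    if isinstance(e, (int, float)):
        return 0
    op, a, b = e
    if op == '+':   # linearity
        return ('+', d(a), d(b))
    if op == '*':   # product rule
        return ('+', ('*', d(a), b), ('*', a, d(b)))
    raise ValueError(f"unknown operation {op!r}")

# d/dx (x*x) -> x*1 + 1*x, i.e. 2x before simplification
assert d(('*', 'x', 'x')) == ('+', ('*', 1, 'x'), ('*', 'x', 1))
```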
Claim: Integration is at least as hard as differentiation (maybe harder).
This is clear for the case of polynomials, which are always contained
in a reasonable minimal $E$.
Observation: The tentative claim that integration is harder than differentiation is necessarily going to depend on $E$, since for $E$ the polynomials, both are simple operations.
_
Let us now construct a domain, as we have done for differentiation, that is adapted to the operation of integration. Consider $I$ to be a collection of function-symbols that contains the constants, $x$, and possibly others ($e^x$, $x^{-1}$, ...) for which we assume we know their integrals. Assume that $I$ is closed under the following operations:
If $f$ and $g$ are in $I$:
1. $af+bg\in I$ for any constants $a$ and $b$;
2. $f\oplus g:=fg'+f'g\in I$;
3. $f\otimes g:=(f\circ g)\cdot g'\in I$.
An $I$ like this is a reasonable minimum domain in which to define integration. It is clear that in such an $I$, integration is algorithmic, for a given function written using these operations.
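The point of the operations $\oplus$ and $\otimes$ is that their integrals are known by construction: $\int (f\oplus g) = fg$ and $\int (f\otimes g) = F\circ g$, where $F$ is an integral of $f$. A quick numeric illustration of the first identity, with $f=\sin$ and $g(t)=t^2$ chosen arbitrarily:

```python
import math

# check numerically that the integral of f⊕g = f*g' + f'*g over [a, b]
# equals f(b)g(b) - f(a)g(a)
f, fp = math.sin, math.cos
g, gp = (lambda t: t * t), (lambda t: 2 * t)

a, b, n = 0.0, 1.0, 100_000
h = (b - a) / n
integral = sum(f(a + i * h) * gp(a + i * h) + fp(a + i * h) * g(a + i * h)
               for i in range(n)) * h   # left Riemann sum
assert abs(integral - (f(b) * g(b) - f(a) * g(a))) < 1e-4
```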
Claim: In $I$, differentiation is simple if we assume $I$ contains the constants and either operation 2 ($\oplus$) or operation 3 ($\otimes$) is available.
In fact, for a given $f$ in $I$, its derivative is $f'=f\oplus 1=1\otimes f$.
This means that a single basic operation in $I$ suffices to compute derivatives.
_
To translate the OP's question into another question:
Given an $E$, we already have a way to define linear combinations with constant coefficients, $\oplus$, and $\otimes$, since these are defined using the operations allowed in $E$. So, for an $E$ to be an $I$, or to extract from it a subclass that is an $I$, we would need an algorithm that checks whether an element of $E$ belongs to such an $I$ (i.e., can be written using function-symbols whose integrals are function-symbols, linear combinations with constant coefficients, $\oplus$, and $\otimes$).
We have that the existence of such an algorithm depends on the $E$, on the function-symbols available in it. For $E$ being the polynomials in $x$ it is clear we have such an algorithm and it is simple.
We have also that for some $E$ the problem is undecidable. From Richardson's theorem, we know this is the case if $E$:
1. contains $\ln(2),\pi,e^x,\sin(x)$;
2. contains $|x|$; and
3. contains a function with no primitive in $E$.
Condition $3$ is satisfied for the $E$-closure of the elementary functions together with $|x|$, since we can take $e^{x^2}$ to verify $3$.
The theorem holds because one can construct an elementary function (using also $|x|$) $M(n,x)$ which, for each natural number $n$, is identically equal to either $0$ or $1$, but for which it is undecidable which of the two it is. Given such a function, if we could decide integration in $E$, then we could decide, for each natural number $n$, whether $f_n(x):=e^{x^2}M(n,x)$ is integrable in $E$ or not. But this would tell us whether $M(n,x)$ is zero or one, since $f_n(x)$ is integrable when $M(n,x)=0$ and non-integrable in $E$ when $M(n,x)=1$.
So, for certain classes $E$ we have that, while differentiation is elementary (after having shown the function belongs to $E$), integration is undecidable. This already shows that integration is harder than differentiation (the statement depending, of course, on the class of functions we want to integrate).
Observation: The undecidability of integration for the $E$ above is, of course, deeply related to having function-symbols in $E$ without a primitive function-symbol in $E$. This trivially disappears if we close $E$ by throwing in a symbol for each primitive. On the other hand, the inconvenience is that $E$ is then no longer generated by finitely many symbols, which makes the problem of detecting when a function is represented by a symbol in $E$ even more complex. So, the reason why, for this large $E$, we can compute the integral of a function we know is in $E$ is that we are pretty much assuming we can, by assuming that the input is in $E$.
It remains then the question:
Question: How small can $E$ be such that integration is harder than differentiation?
|
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV
(Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...
Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at root s=7 TeV
(Springer-verlag, 2012-11)
The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at $\sqrt{s}=7$ TeV through its electron-pair decay on a data sample corresponding to an integrated luminosity $L_{int}$ = 5.6 nb$^{-1}$. The fraction ...
Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV
(Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ...
Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ...
Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-03)
The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...
Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV
(American Physical Society, 2012-12)
The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
|
I'm facing this problem about the Banach fixed-point theorem. The theorem says that if a function on a complete metric space is a contraction (Lipschitz with constant less than one), then it has exactly one fixed point.

**Q:** Let $f(x,y)=(e^{-1-y}+\frac{x}{3},\ e^{-1-y}+\frac{y}{3})$ and $A=\{(x,y) \in \mathbb{R}^2 : y\geqslant 0\}$. Show that the restriction of $f$ to $A$ has one and only one fixed point.

So, for this exercise I want to show that $f$ is a contraction, that is, $$d(f(u),f(v))\leqslant \lambda\, d(u,v),\qquad \lambda \in [0,1)$$ But my question starts here. How can I prove this? A hint/tip would be great.
Note that $f(x) \in A$ for all $x \in A$.
Note that $Df(x) = \begin{bmatrix} {1 \over 3} & - e^{-(1+x_2)} \\ 0 & {1 \over 3} - e^{-(1+x_2)} \end{bmatrix} = {1 \over 3} I - e^{-(1+x_2)} e e_2^T$, where $x = (x_1, x_2)$ and $e= (1,1)^T$.
If we use the induced 2-norm, then for $x \in A$ we have $\|Df(x)\| \le {1 \over3} + {1 \over e} \|e e_2^T \| = {1 \over3} + {\sqrt{2} \over e} <1$, since $e^{-(1+x_2)} \le e^{-1}$ when $x_2 \ge 0$. Let $\lambda = {1 \over3} + {\sqrt{2} \over e}$.
To show that $f$ is a contraction, we use the following result: If $a^T b \le M \|a\|$ for all $a$, then $\|b\| \le M$.
Pick $a$; then for some $t \in (0,1)$, the mean value theorem gives $a^T(f(y)-f(x)) = a^T Df(x+t(y-x))(y-x) \le \lambda \|a\| \|y-x\|$, and so we have $\|f(y)-f(x) \| \le \lambda \|y-x\|$.
Hint: Let $w\equiv(x,y)$ for brevity. We know that, by a generalized form of the mean value theorem,$$\left\Vert f(w)-f(w^{\prime})\right\Vert _{\infty}\leq\left\Vert w-w^{\prime}\right\Vert _{\infty}\sup_{t\in(0,1)}\left\Vert Df(\left(1-t\right)w+tw^{\prime})\right\Vert _{\infty}$$where $Df$ is the Jacobian of $f$.
Can you show that $\left\Vert Df\right\Vert _{\infty}$ is strictly less than one on the region $A$?
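A quick numerical sanity check (not a proof): iterating $f$ from two different points of $A$ converges to the same fixed point, as the contraction estimates above predict.

```python
import math

def f(p):
    x, y = p
    return (math.exp(-1 - y) + x / 3, math.exp(-1 - y) + y / 3)

# fixed-point iteration from two different starting points in A
p, q = (0.0, 0.0), (10.0, 5.0)
for _ in range(200):
    p, q = f(p), f(q)

assert max(abs(p[0] - q[0]), abs(p[1] - q[1])) < 1e-12  # both reach the same limit
assert abs(f(p)[0] - p[0]) < 1e-12 and abs(f(p)[1] - p[1]) < 1e-12
assert p[1] >= 0  # the fixed point lies in A
```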
|
Is the initial velocity just the average of all the velocities? I mean, here is a little of my data:

| Velocity | Stopping distance | Velocity squared |
|---|---|---|
| 1.18 | 0.22 | 1.392 |
| 1.58 | 0.32 | 2.49 |
| 1.27 | 0.295 | 1.61 |
| 1.12 | 0.151 | 1.25 |
| 1.33 | 0.28 | 1.76 |
In this example, did you push the object 5 times? (This is what I've understood so far).
What does the velocity stand for? Is it the velocity you pushed the object? If yes then it's the initial velocity of the object and its velocity will decrease until the object stops.
I'll do as if it was true. First example :
$a(t)=a$.
$v(t)=a t+1.18$.
$x(t)=\frac{a t^2}{2}+1.18\,t$. But we know that when $x=0.22$, the velocity is $0\ \mathrm{m/s}$.
So let's find at what $t$ that occurs. The best way to do that is to count how many photos the photogate took from the instant you pushed the object until it stopped completely. Say it had time to take 20 photos with time intervals of $0.1\ \mathrm{s}$. This means the body stopped at $t=2\ \mathrm{s}$.
So now we know that at $t=2\ \mathrm{s}$ the velocity is $0\ \mathrm{m/s}$. Thus $v(2)=2a+1.18=0 \Leftrightarrow a=\frac{-1.18}{2}=-0.59\ \mathrm{m/s^2}$ in this case. Do that for all the values you got and you obtain all the accelerations it seems they are asking you for.
But still, I don't understand why you calculated the velocity squared, so I must be missing something. I'm almost sure I don't understand well what you've done.
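Since the data includes a velocity-squared column, another route (assuming a roughly constant friction deceleration, which is probably why the velocities were squared) is the kinematic relation $v_0^2 = 2|a|d$. A sketch using the posted values:

```python
# assuming constant deceleration a, v0^2 = 2*|a|*d gives |a| = v0^2 / (2*d)
data = [(1.18, 0.22), (1.58, 0.32), (1.27, 0.295), (1.12, 0.151), (1.33, 0.28)]
decelerations = [v**2 / (2 * d) for v, d in data]
# e.g. the first trial gives |a| = 1.18**2 / 0.44, in the units of the data
assert abs(decelerations[0] - 1.18**2 / (2 * 0.22)) < 1e-12
```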
|
If I take the Fourier transform of data $x \pm \sigma$, is there a standard approach to what the error in the outputs will be? Would the best way be a direct evaluation of the upper and lower bounds?
Assembled from comments of @AlexE:
The Fourier transform is linear, so the error in the Fourier domain is the Fourier transform of the error in the spatial (original) domain.
So, if $\sigma$ is understood as a variance spread not being a function of $x$, one can use the Fourier transform's uncertainty relation.
This StackOverflow post demonstrates this behaviour using Python code.
I've pondered this question before. The best I can come up with is as follows.
The Fourier transform
y = fft(x) can be expressed as some matrix, $X$, multiplied with $x$.
See the scipy documentation examples for how to generate the Fourier matrix.
This matrix representation means that the Fourier transform can be thought of as a linear least-squares problem. That is, the Fourier coefficients are the fit parameters. The problem of estimating the fit parameters' standard deviations has a known solution.
See the Wikipedia article on ordinary least squares (section "Unbiasedness and variance of $\hat\beta$") for how to do so.
For the sake of completeness, the quantity one wishes to find is the standard deviation of the fit parameters, $\sigma_\beta$.
Using the wikipedia article above
$\sigma_\beta^2 =E[(\hat{\beta}-\beta)(\hat{\beta}-\beta)^T] = \sigma \sigma^T (X^T X)^{-1}$
Where $X$ is the fourier matrix.
Note that $\sigma \sigma^T$ is a covariance matrix, not a scalar. In your case, it will most likely be a diagonal matrix.
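A small Monte Carlo check of the linearity argument (a sketch, not part of the answers above): for i.i.d. noise of standard deviation $\sigma$ and an unnormalized FFT of length $N$, each complex coefficient away from DC and Nyquist ends up with total variance $N\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma, trials = 64, 0.5, 4000

# FFT of pure i.i.d. Gaussian noise, one row per trial
coeffs = np.fft.fft(rng.normal(0.0, sigma, size=(trials, N)), axis=1)

k = 5  # any bin away from DC (k = 0) and Nyquist (k = N/2)
var_k = coeffs[:, k].real.var() + coeffs[:, k].imag.var()
assert abs(var_k - N * sigma**2) / (N * sigma**2) < 0.15
```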
|
"editing and improving tag-infos should certainly be encouraged. However, it seems that not many users of this site find the time to do this". I don't think it's a matter of time. Indeed, the site's feature for this are very poor, at least to my understanding. Edits would typically require discussion, and a discussion is not really possible, except specifically posting on meta for this, which is not practical, and discouraging for minor issues. — YCor 1 hour ago how the tag should be used- in which case I consider the fact that usage of the tag is unclear or inconsistent a bigger problem than a missing tag-wiki. The fact that these days there is not much discussion around tags is confirmed for example in the message by François G. Dorais which I quoted in chat. (Where we can discuss this further if this is too much of a digression.) — Martin Sleziak 1 min ago
Let $n\ge 1$ be an integer and let us work over the field of complex numbers. Let $\mathcal{R}_n$ denote the set of rational conic bundles $\pi\colon X\to \mathbb{P}^n$ (morphisms such that the generic fibre is a geometry irreducible conic and $X$ is rational), up to square equivalence: two conic...
Consider the quadratic forms $$ Q_1 = x^2 + y^2 - (t^2+1)z^2,\qquad Q_2 = x^2 + y^2 - z^2 $$ over the rational function field $\mathbb{F}_p(t)$, where $p > 2$ is a prime such that $t^2 + 1$ is irreducible over $\mathbb{F}_p$, i.e., $p \equiv 3 \ (\mathrm{mod} \ 4)$. These forms are isomorphic ov...
1) In the following OEIS Sequence, what is the meaning of One alphabet labeled, the other $3$ unlabeled? The number of Lyndon words (aperiodic necklaces) with $4n$ beads of $4$ colors, $n$ beads of each color. One color labeled, the other $3$ unlabeled. $1, 52, 5133, 656880, 97772875, 1603...
There are 7 occurrences of lmis, whose tag info is "Linear Matrix Inequalities (LMIs)". I suggest to replace this tag with non-existing linear-matrix-inequalities since tags are usually not abbreviations.
|
Chapter 5: Games with Sequential Actions: Reasoning and Computing with the Extensive Form
Page number: 115
Section number:5.1.2
Content: In Definition 5.1.2, pure strategies as defined are products of the set of all actions available at all choice nodes owned by a player, rather than products of actions available at all choice nodes owned by a player. To correct this, change $\prod_{h \in H, \rho(h) = i} \chi(h)$ to $\prod_{h \in H, \rho(h) = i, a \in \chi(h)} a$
Content: In Figure 5.7, change the "forall" to "for". (Under the previous wording, it is possible to understand that the different iterations are executed in parallel rather than sequentially.)
Page number:136-142
Section number:5.2.3
Date:4/4/2014
Name:Haden Lee
Email:haden[dot]lee[at]stanford[dot]edu
Content: After Definition 5.2.8 (until the end of section 5.2.3), the book keeps using "I" to refer to a certain information set in "I_i" (for player i), but earlier in the book in Definition 5.2.1 the book defined I to be the tuple of the information sets of n players (namely, I = (I_1, I_2, ..., I_n)). This may lead to a huge change, but just to be consistent with the rest of the chapter, it might be wise to call it "I_{i,j}" or "I_{i,k}" (or such) instead of just "I".
The following errors are fixed in the second printing of the book and online PDF v1.1
Content: Theorem 5.2.12 should read "In extensive-form games of perfect information, the set of subgame-perfect equilibria is exactly the set of sequential equilibria," and footnote 9 should be removed. (The issue with genericity arises only under games of imperfect information.)
Page number: 132
Section number: 5.2.2
Date: Feb 10, 2009
Name: Sean Sutherland
Email: ssuther(at) cs.ubc.ca
Content: The paragraph beginning "We illustrate this distinction first..." should be removed. We discussed that it could be misleading about the differences between behavioral and mixed strategies, while not really contributing something worthwhile. One point was that it doesn't mention that correlations in mixed strategies only occur off-path.
Content:"from the root of the tree up to the root" should be "from the leaves of the tree up to the root"
Page number: 133
Section number:5.2.3
Date:Feb19, 2010
Name B.J.Buter
Email: bjbuter [at] science [dot] uva [dot] nl
Content: Definition 5.2.3, using n as a counter is confusing since in other definitions n indicates the number of players.
Page number: 130
Section number:5.2.1
Date:Feb 18, 2010
Name: B.J.Buter
Email: bjbuter [at] science [dot] uva [dot] nl
Content: Definition 5.2.1 "I=..., equivalence relation" is not an equivalence relation. An equivalence relation is a collection of ordered pairs, thus if I_{i,j} were a relation, then where h \in I_{i,j} is written, h should be an ordered pair, this is not meant. I would rewrite as:"I=(I_1,...,I_n), where I_i={I_{i,1},...,I_{i,k_i}} is a partition of \{h\in H : \rho(h) = i\}, with I_{i,j} equivalence classes."
Page number: 133
Section number:5.2.3
Date:Feb19, 2010
Name B.J.Buter
Email: bjbuter [at] science [dot] uva [dot] nl
Content: Definition 5.2.3, item 2 equivalence classes are only defined for nodes where a player acts, according to definition 5.2.1. The restriction if \rho(h_j)=i should be added.
Page number: 137-138
Section number:5.2.4
Date:Feb 27, 2010
Name:Nicolas Lambert
Content:All instances of S should be replaced by s for consistency with the rest of the book.
|
One problem that is pervasive in the power grid and not well understood is the Transient Recovery Voltage (TRV). When a short circuit due to a fault in a high-voltage or medium-voltage system is switched out by a circuit breaker, the voltage across the breaker poles rises rapidly, to as high as twice the operating voltage. See picture below.
Although the interrupting chamber inside the breaker has enough dielectric strength to withstand the elevated voltage, the rate at which the recovery voltage rises (also known as RRRV) can overwhelm the insulating medium. This leads to arc re-ignition at the poles and eventually breaker failure.
Let’s split hairs here and understand the underlying cause of TRV. I’ll present this issue in two steps.
Step 1: Inductance and Capacitance
When a layman sees pieces of power grid (shown in picture below), it is natural to process them as mere steel structures and energized wires.
However an electrical engineer must visualize it as inductor L, capacitor C, and resistor R. Parameters L and C are born as a consequence of energizing the wire. How?
AC current passing through a wire or a piece of rigid metal produces a magnetic field. This field alternates just as its source does, and thereby induces a voltage back onto the same wire that opposes any sudden change in current. A physical manifestation of this push-back effect occurs when you drop a magnet through a non-ferrous metallic tube, shown below. The magnet's fall is retarded by the magnetic field generated in the copper tube.
We looked at the effects of alternating current. The alternating voltage on the line creates its own effect: the capacitance. Remember, the earth conducts electricity. It is the return path for unbalanced currents (when a neutral is unavailable) or fault currents (involving earth). So two conductors separated by a dielectric (air, in this case) essentially form a capacitor. Now, because there is a path from the energized line to ground during normal operation (through a capacitor), the capacitor draws a leading current called the
charging current. This current flows even when one end of the line is open.
The last piece is the resistor. The resistance is that of the material used for routing power, usually aluminum or copper. It always exists, whether the line is energized or de-energized.
Step 2: L-C circuit behavior during a short circuit
Now that we have established R-L-C components in the grid, how do they behave when the breaker operates?
Figure below will be used to explain.
Let’s begin with following conditions
- A solid 3-phase fault close to the circuit breaker is assumed – it produces the most severe TRV.
- Resistance R is ignored for now.
- The line-to-ground capacitance close to the breaker is considered. This capacitance exists due to capacitance in bushings (specifically condenser bushings), current transformers, the adjacent power transformer, etc.
At the instant a short-circuit occurs, before the breaker opens, the line to ground capacitance is shorted-out. However as the breaker opens, it draws an arc. As soon as it is extinguished, the source voltage starts to charge the line to ground capacitor through the system inductance.
The consequence? The voltage across the breaker poles is equal to the system nominal voltage
plus the voltage created by the natural response of an L-C circuit. There are two circuits at play here now. Let’s look at the L-C circuit specifically.
In a lossless L-C circuit, the voltage oscillates indefinitely. The energy stored in capacitor discharges and in-turn charges the inductor – storing energy in its magnetic field. Once inductor is charged, it discharges to charge the capacitor. This cycle looks like below. See KhanAcademy for additional details.
The voltage equation for a L-C circuit is given by
V(t)=V_{m}[1-\cos(\omega_{0}t)] ……….[1]
where the first peak 2\times V_{m} occurs when \omega_{0}t = \pi
The frequency of oscillations is given by
f = \frac{1}{2\pi\sqrt{LC}} ……….[2]
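Plugging illustrative numbers into equations [1] and [2] gives a feel for the magnitudes involved. The L and C values below are assumed order-of-magnitude examples, not data for any specific installation:

```python
import math

L = 5e-3   # source inductance in henrys (assumed example value)
C = 10e-9  # stray line-to-ground capacitance in farads (assumed example value)

f0 = 1 / (2 * math.pi * math.sqrt(L * C))  # equation [2]: natural frequency
w0 = 2 * math.pi * f0
Vm = 138e3 * math.sqrt(2) / math.sqrt(3)   # peak line-to-ground voltage, 138 kV system

t_peak = math.pi / w0                      # first peak of equation [1] at w0*t = pi
v_peak = Vm * (1 - math.cos(w0 * t_peak))  # equals 2*Vm

assert abs(v_peak - 2 * Vm) < 1e-6
assert 20e3 < f0 < 25e3  # tens of kilohertz for these example values
```

The first recovery-voltage peak is twice the peak operating voltage, consistent with the waveform described above.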
The L-C circuit in the power grid is
not lossless, however. There is resistance from the copper or aluminum conductors, and this resistance damps the oscillations. The peak magnitude of the TRV is also subdued by several factors. One of them is the type of fault: we have considered a 3-phase fault, but asymmetric faults like line-to-ground or double-line-to-ground make the fault current cross the zero axis before the system voltage reaches its peak. Another is the way the system is grounded – low impedance or high impedance.
When you superimpose damped L-C oscillations over operating voltage at its peak, the resultant waveform looks like below.
Note the difference in the duration of oscillations. The stray capacitance near the 138kV breaker is typically in the nanofarad range and the inductance in the millihenry range. When you plug these into equation [2] above, it generates a frequency in the kilohertz range. This high-frequency transient occurs at such a rapid pace that the nominal voltage looks constant in comparison. It is precisely for this reason that a breaker not equipped to handle TRV fails.
Specifying circuit breaker to overcome TRV
The TRV performance for breakers have been well established in several ANSI/IEEE standards, namely:
IEEE C37.04 – Standard rating structure for AC high voltage circuit breakers IEEE C37.06 – Standard for AC high voltage CBs rated on a symmetrical current basis IEEE C37.011 – Guide for the application of TRV for AC HV circuit breakers
For instance, for a 145kV (maximum voltage) rated circuit breaker in an effectively grounded system, the TRV it must handle at rated interrupting rating (40kA or 63kA) is 215,000 volts. The Rate of Rise of Recovery Voltage (RRRV) is 2,000V/microsecond.
Well, are these ratings adequate? The answer is driven by each utility’s experience with breaker failures associated with TRV.
There are areas in the power substation where TRV is acutely severe, for instance, when switching a capacitor bank or a lightly loaded EHV line. Breaker manufacturers don’t just rely on IEEE guidelines here. A detailed study is conducted to determine the size of an external capacitor and resistor which are used to proactively tame the L-C circuit. These are then mounted across breaker bushing or interrupting chamber. More on this in another post.
Summary
AC systems create complexities by introducing inductance and capacitance. In a L-C circuit, a circuit breaker interrupting a fault is subjected to high frequency – high voltage magnitude transients. This article presented details of how this happens and why it is catastrophic for the circuit breaker.
|
Given that $f$ is a OWF and $|f(x)|=|x|$ for all $x$, is $g(x)=f(x)\oplus x$ necessarily also a OWF?
While poncho's answer gives an interesting example of why this can go wrong in practice, it does not necessarily answer the question from a theoretical point of view. After all, we don't know whether $f(x) = AES_k(x) \oplus x$ is one-way. (Even if it might be reasonable to assume that.)
So, let's give a theoretical example. Assume that a one-way function $h$ exists where in- and output length are the same. We call this length $n/2$. I.e. we have a one-way function $$h : \{0,1\}^{n/2} \to \{0,1\}^{n/2}.$$
From this function, we now construct a new function $$f : \{0,1\}^{n} \to \{0,1\}^{n}$$ as follows: $$f(x_1\Vert x_2) = 0^{n/2}\Vert h(x_1),$$ where $|x_1|=|x_2|=n/2$.
It is easy to show via reduction that $f$ is one-way whenever $h$ is one-way. Let $\mathcal{A}$ be an attacker against the one-wayness of $f$, then we construct an attacker $\mathcal{B}$ against the one-wayness of $h$ as follows: Upon input of $y$, $\mathcal{B}$ invokes $\mathcal{A}$ on input $0^{n/2}\Vert y$. Eventually, $\mathcal{A}$ outputs $x_1'\Vert x_2'$ and $\mathcal{B}$ outputs $x_1'$.
It is trivial to see that if $\mathcal{A}$ runs in polynomial time (in input length $n$) then $\mathcal{B}$ also runs in polynomial time (in input length $n/2$).
It is also easy to see the following holds: $$\Pr[\mathcal{B}(y) \in h^{-1}(y)] = \Pr[\mathcal{A}(0^{n/2}\Vert y) \in f^{-1}(0^{n/2}\Vert y)].$$ Therefore it follows that $f$ is one-way whenever $h$ is.
Now let's use this function $f$ in the proposed construction:
$$g(x) = f(x)\oplus x = (0^{n/2}\Vert h(x_1)) \oplus (x_1\Vert x_2) = x_1\Vert (h(x_1)\oplus x_2)$$
This is obviously not one-way. An attacker upon seeing an image $x_1\Vert y$ can simply output $x_1\Vert (y\oplus h(x_1))$ as a valid preimage.
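The inversion attack on $g$ can be demonstrated concretely. In this toy sketch (my own illustration), SHA-256 truncated to one byte stands in for $h$; it is of course not one-way at this size, but the structure of the attack is exactly the one described above:

```python
import hashlib

N = 16  # total bit length n; x1 and x2 are n/2 = 8 bits each

def h(x1: int) -> int:
    # toy stand-in for the length-preserving one-way function h (NOT one-way at 8 bits)
    return hashlib.sha256(bytes([x1])).digest()[0]

def f(x: int) -> int:
    # f(x1 || x2) = 0^{n/2} || h(x1)
    return h(x >> (N // 2))

def g(x: int) -> int:
    return f(x) ^ x

def invert_g(y: int) -> int:
    # attacker: the image is x1 || (h(x1) xor x2), so x1 can be read off directly
    x1 = y >> (N // 2)
    x2 = (y & 0xFF) ^ h(x1)
    return (x1 << (N // 2)) | x2

assert invert_g(g(0xAB12)) == 0xAB12  # g is inverted trivially
```

The attacker never needs to invert $h$ at all, which is the whole point of the counterexample.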
No, you can find $f$ such that $f(x)$ is a OWF, but $f(x)\oplus x$ is not.
One example would be $f(x) = AES_k(x) \oplus x$ (for a public key $k$, perhaps the all-zeros key). $f(x)$ is believed to be one way; as there is no known practical way, given a value $y$, to find an $x$ with $f(x) = y$. However, $g(x) = f(x) \oplus x = AES_k(x)$ is easy to invert (because we know the AES key $k$).
|
Theorem 11.4. For any constant $\delta > 0$, every problem in NP has probabilistically checkable proofs of length poly($n$), where the verifier flips $O(\log n)$ coins and looks at three bits of the proof, with completeness $1-\delta$ and soundness $1/2+\delta$.
Moreover, the conditions under which Arthur accepts are extremely simple - namely, if the parity of these three bits is even.
This theorem has powerful consequences for inapproximability. If we write down all the triplets of bits that Arthur could look at, we get a system of linear equations mod 2. If Merlin takes his best shot at a proof, the probability that Arthur will accept is the largest fraction of these equations that can be simultaneously satisfied. Thus even if we know that this fraction is either at least $1-\delta$ or at most $1/2+\delta$, it is NP-hard to tell which. This implies that it is NP-hard to approximate Max-XORSAT within any constant factor better than $1/2$.
First, I don't think it can be as simple as just checking that the parity is even. Merlin could just make Arthur accept anything by handing him a proof of all zeros, and make him reject anything with a proof of all ones. Surely the parity must instead equal some value that Arthur computes along the way.
Further, it's not a priori clear to me that Arthur would select every triplet with equal likelihood. If that wasn't the case, wouldn't it be possible for a no-instance to yield a Max-XORSAT formula where more than $1/2+\delta$ of the clauses can be satisfied even though Arthur accepts with probability less than $1/2+\delta$?
Let's take a degenerate case as an example. Arthur asks Merlin to help him decide whether a given sequence of $n$ bits $\mathbf{b}$ is all ones - $\mathbf{b=1}$ for short. This problem is trivially in NP. And Arthur clearly doesn't need Merlin's help. But he can go out of his way to use it anyway. Here's what they do:
First Merlin gives Arthur a PCP $\mathbf{x} \in \{0,1\}^n$ (let's assume $n \gg 3$). Then Arthur checks if $\mathbf{b=1}.$ If it is, he checks that $x_1 \oplus x_2 \oplus x_3 = 1$ holds in Merlin's PCP. Otherwise, he checks either $x_1 \oplus x_2 \oplus x_3 = 1$ or $\overline{x_1} \oplus \overline{x_2} \oplus \overline{x_3} = 1$ (note that these can't both be satisfied) with equal probability 0.45. Or with probability 0.1 he checks $x_i \oplus x_j \oplus x_k = 1$ for a uniformly random triplet. Merlin can thus convince Arthur that $\mathbf{b=1}$ by giving him, for example, the proof consisting of all ones. But if $\mathbf{b\neq1}$ there's no proof that Arthur accepts with probability greater than 0.55. This gives PCP's with completeness $1-\delta$ (indeed we also have completeness 1) and soundness $1/2+\delta$ where $\delta = 0.05$. Now what's the Max-XORSAT formula corresponding to $\mathbf{b\neq1}$? Well, Arthur could look at any triplet $x_i, x_j, x_k$ as well as $\overline{x_1},\overline{x_2},\overline{x_3}$. So presumably it's the conjunction of all those triplets when XORed together. All but one of these clauses can be satisfied. So the fraction is very close to one. But that's nowhere near the probability that Arthur will actually accept.
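The soundness bound of this toy protocol can be verified by brute force over every possible proof (my own check, using exact rational arithmetic with $n=6$):

```python
from fractions import Fraction
from itertools import combinations, product

n = 6
triplets = list(combinations(range(n), 3))

def accept_prob(proof):
    # When b != 1: with prob 9/20 Arthur checks x1^x2^x3 = 1, with prob 9/20
    # he checks the complemented triple (equivalent to x1^x2^x3 = 0), and with
    # prob 1/10 he checks a uniformly random triplet for odd parity.
    # Exactly one of the two fixed checks holds, contributing 9/20 for every proof.
    odd = sum(proof[i] ^ proof[j] ^ proof[k] for i, j, k in triplets)
    return Fraction(9, 20) + Fraction(1, 10) * Fraction(odd, len(triplets))

best = max(accept_prob(p) for p in product((0, 1), repeat=n))
print(best)  # 11/20, i.e. the claimed soundness bound 0.55
```

The maximum is attained by the all-ones proof, for which every triplet has odd parity; yet, as the argument above notes, the naive Max-XORSAT fraction over all listed clauses is much larger than 0.55.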
PS: We need a PCP tag.
|
I've been reading about count-min sketch and I'm interested in the performance of this data structure when doing conservative updates. To my understanding from the Wikipedia article, conservative updating means that when you see an item, instead of incrementing all the counters you only increment the counter(s) that have the lowest current count. Intuitively, I understand that this method will give a strictly better estimate than the standard update procedure because collisions are now less likely to cause us to overestimate a count.
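A minimal sketch of both update rules may make this concrete (my own illustrative implementation; the salted built-in hash stands in for the pairwise-independent hash functions of the real data structure):

```python
import random
from collections import Counter

class CountMinSketch:
    def __init__(self, width, depth, seed=0, conservative=False):
        rng = random.Random(seed)
        self.width = width
        self.conservative = conservative
        self.salts = [rng.getrandbits(64) for _ in range(depth)]
        self.rows = [[0] * width for _ in range(depth)]

    def _cells(self, item):
        # one counter per row, chosen by a salted hash
        return [(r, hash((salt, item)) % self.width)
                for r, salt in enumerate(self.salts)]

    def add(self, item):
        cells = self._cells(item)
        if self.conservative:
            # conservative update: raise only the counters sitting at the minimum
            target = min(self.rows[r][c] for r, c in cells) + 1
            for r, c in cells:
                self.rows[r][c] = max(self.rows[r][c], target)
        else:
            for r, c in cells:
                self.rows[r][c] += 1

    def query(self, item):
        return min(self.rows[r][c] for r, c in self._cells(item))

# demo: feed the same stream through both variants
stream = [ch for ch in "aabbbccccdddddeeeeee" for _ in range(5)]
std = CountMinSketch(8, 3)
con = CountMinSketch(8, 3, conservative=True)
for item in stream:
    std.add(item)
    con.add(item)
true = Counter(stream)
```

For every item the conservative estimate sits between the true count and the plain count-min estimate, matching the intuition that conservative updates can only reduce overestimation.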
I know that it's possible with traditional count-min sketch to tune the parameters of the data structure based on the error rate $\delta$ you're willing to tolerate. This error analysis shows that to solve $\epsilon$-approximate heavy hitters with probability $1 - \delta$, we should use a sketch with $\frac{e}{\epsilon}$ columns and $\ge \ln \frac{1}{\delta}$ rows/hash functions. What would this analysis look like if we were performing conservative updates?
I found this paper which I believe calculates the probability of a false negative using conservative updates to be $(1 - (1-\frac{1}{W})^N)^d$ where $W=\frac{e}{\epsilon}$ is the number of columns, $N$ is the number of unique additions, and $d$ is the number of rows. What I don't understand:
- How does this calculation take into account the fact that only the counter(s) with the lowest current count are incremented in each step?
- Intuitively, why does this probability depend on the size of the stream $N$ (when it seems that the probability of false negatives in traditional count-min sketch does not)?
|
Homework Statement: Two identical audio speakers, connected to the same amplifier, produce monochromatic sound waves with a frequency that can be varied between 300 and 600 Hz. The speed of sound is 340 m/s. You find that, where you are standing, you hear minimum-intensity sound.
a) Explain why you hear minimum-intensity sound
b) If one of the speakers is moved 39.8 cm toward you, the sound you hear has maximum intensity. What is the frequency of the sound?
c) How much closer to you from the position in part (b) must the speaker be moved to the next position where you hear maximum intensity?
Homework Equations: interference
I have no idea on how to proceed
I started with
## f = \frac{v}{\lambda} = \frac{340\ \mathrm{m/s}}{\lambda} ##
then
## d\sin\alpha = \frac{\lambda}{2} ##
but now I'm stuck.
Any help please?
|
The local criterion for flatness goes this way:
Let $\phi : (A,m)\rightarrow (B,m')$ be a local morphism of local Noetherian rings, and $M$ a finitely generated $B$-module. If $x\in m$ is a non zero-divisor on $M$ then $M$ is flat over $A$ iff $M/xM$ is flat over $A/xA$.
One usual geometric interpretation (see for instance Eisenbud,
Commutative Algebra with a View towards Algebraic Geometry, chapter 6.4) is:
If we have a morphism of affine varieties $X\rightarrow Y$ over $\mathbb{A}^1$ such that the maps to $\mathbb{A}^1$ are flat and dominant, for any point $p$ in $\mathbb{A}^1$ choose a point $p'$ in $Y$ above $p$ and a point $p''$ in $X$ above $p'$. If the map of fibers $X_{p}\rightarrow Y_{p}$ is flat in a neighborhood of $p''$ in $X_{p}$, then the map $X\rightarrow Y$ is also flat in a neighborhood of $p''$ in $X$.
I fail to see the obviousness of this interpretation: does this mean that if $R$ and $S$ are the respective affine rings defining the affine varieties $Y$ and $X$ over the field $k$, if $P'$ and $P''$ are the maximal ideals defining the points $p'$ and $p''$, if we have $S_{P''}$ flat over $R_{P'}$, there exist an element $f''$ of $S$ not contained in $P''$ such that $S_{f''}$ is flat above $R$?
I mean that, using the local criterion for flatness, I see how I can get the flatness of the rings localized at maximal ideals from the flatness on the fibers, but how does one extend it to a neighborhood of each point?
Edit: after re-reading the clear answer from Akhil Mathew, I cannot help but wondering if there is a way to get the geometric interpretation of Eisenbud without using the result on the open locus for flat maps which is above the level of chapter 6 of Eisenbud classical book. Can somebody enlighten me here?
Edit2: an interesting thread on Math.SE which gave me all the answers I needed using generic flatness instead https://math.stackexchange.com/a/2321347/14860
|
Consider the sets $$\begin{aligned}A &=\{x \mid -8 < x < 4,\ x \text{ is an integer} \}\\ B &=\{x \mid -2 < x \leq 11,\ x \text{ is an integer} \}.\end{aligned}$$ What is $\lvert A \triangle B \rvert$?
Details and assumptions
You may choose to read the summary page Set Notation.
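As a quick check of this first problem, Python's built-in set type implements the symmetric difference directly via the `^` operator (this snippet is a verification aid, not part of the original problem):

```python
# A = {x | -8 < x < 4, x integer},  B = {x | -2 < x <= 11, x integer}
A = set(range(-7, 4))
B = set(range(-1, 12))

sym_diff = A ^ B  # elements belonging to exactly one of the two sets
print(len(sym_diff))  # |A △ B| = 14
```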
Consider two sets:
$$A = \{ 5, 6, 7, 8, 9, 10 \} \\ B = \{ 6, 8, 10, 16, 24 \}$$
What is $\lvert A \triangle B \rvert$?
Consider the sets $A=\{1,2,a+7\}$ and $B=\{a+3, a^2, -a+10, 16\}$. If $A \cap B=\{1, 16\}$, what is the sum of all the elements in the set $A \triangle B$?
If sets $A$ and $B$ satisfy $$\begin{aligned}A & =\{23, 11, 55, 6, 4, 18, 37 \},\\ A\cap B & =\{37, 55\}, \\ \lvert A \triangle B \rvert & =5,\end{aligned}$$ what is the sum of all the elements in the set $B$?
If $A$ and $B$ are two sets such that $|A|=24$, $|B|=17$, $|B-A|=9$, what is the value of $\lvert A \triangle B \rvert$?
|
The pgfplots package is a powerful tool, based on tikz, dedicated to creating scientific graphs.

Pgfplots is a visualization tool that simplifies the inclusion of plots in your documents. The basic idea is that you provide the input data/formula and pgfplots does the rest.
\begin{tikzpicture}
\begin{axis}
\addplot[color=red]{exp(x)};
\end{axis}
\end{tikzpicture}
%Here ends the first plot
\hskip 5pt
%Here begins the 3d plot
\begin{tikzpicture}
\begin{axis}
\addplot3[
    surf,
]
{exp(-x^2-y^2)*x};
\end{axis}
\end{tikzpicture}
Since pgfplots is based on tikz, the plot must be inside a tikzpicture environment. Then the environment declaration \begin{axis}, \end{axis} will set the right scaling for the plot; check the Reference guide for other axis environments.
To add an actual plot, the command \addplot[color=red]{log(x)}; is used. Inside the square brackets some options can be passed; in this case we set the colour of the plot to red. The square brackets are mandatory; if no options are passed, leave a blank space between them. Inside the curly brackets you put the function to plot. It is important to remember that this command must end with a semicolon ;.
To put a second plot next to the first one, declare a new tikzpicture environment. Do not insert a new line, but a small blank gap; in this case \hskip 5pt will insert a 5pt-wide blank space.
The rest of the syntax is the same, except for \addplot3[surf,]{exp(-x^2-y^2)*x};. This will add a 3d plot, and the option surf inside the square brackets declares that it's a surface plot. The function to plot must be placed inside curly brackets. Again, don't forget to put a semicolon ; at the end of the command.
Note: it's recommended as good practice to indent the code (see the second plot in the example above) and to add a comma , at the end of each option passed to \addplot. This way the code is more readable and it is easier to add further options if needed.
To include pgfplots in your document is very easy; add the next line to your preamble and that's it:

\usepackage{pgfplots}
Some additional tweaking for this package can be made in the preamble. To change the size of each plot and also guarantee backwards compatibility (recommended) add the next line:
\pgfplotsset{width=10cm,compat=1.9}
This changes the size of each pgfplots figure to 10 centimeters, which is huge; you may use different units (pt, mm, in). The compat parameter is for the code to work on package version 1.9 or later.
Since LaTeX was not initially conceived with plotting capabilities in mind, when there are several pgfplots figures in your document, or they are very complex, it takes a considerable amount of time to render them. To improve the compiling time you can configure the package to export the figures to separate PDF files and then import them into the document; add the code shown below to the preamble:
\usepgfplotslibrary{external}
\tikzexternalize
See this help article for further details on how to set up tikz-externalization in your Overleaf project.
Pgfplots 2D plotting functionalities are vast; you can personalize your plots to look exactly the way you want. Nevertheless, the default options usually give very good results, so all you have to do is feed in the data and LaTeX will do the rest:
To plot mathematical expressions is really easy:
\begin{tikzpicture}
\begin{axis}[
    axis lines = left,
    xlabel = $x$,
    ylabel = {$f(x)$},
]
%Below the red parabola is defined
\addplot [
    domain=-10:10,
    samples=100,
    color=red,
]
{x^2 - 2*x - 1};
\addlegendentry{$x^2 - 2x - 1$}
%Here the blue parabola is defined
\addplot [
    domain=-10:10,
    samples=100,
    color=blue,
]
{x^2 + 2*x + 1};
\addlegendentry{$x^2 + 2x + 1$}
\end{axis}
\end{tikzpicture}
Let's analyse the new commands line by line:

- axis lines = left. Draws the axis lines only on the left and bottom sides of the plot instead of the default surrounding box.
- xlabel = $x$ and ylabel = {$f(x)$}. Set the labels of the x and y axes; values containing special characters must be grouped in curly brackets.
- \addplot. The main plotting command, as explained in the introduction.
- domain=-10:10. The range of x values over which the function is evaluated.
- samples=100. The number of points at which the function is sampled; more samples give a smoother curve.
- \addlegendentry{$x^2 - 2x - 1$}. Adds an entry to the legend for the most recently added plot.

To add another graph to the plot just write a new \addplot entry.
Scientific research often yields data that has to be analysed. The next example shows how to plot data with
pgfplots:
\begin{tikzpicture}
\begin{axis}[
    title={Temperature dependence of CuSO$_4\cdot$5H$_2$O solubility},
    xlabel={Temperature [\textcelsius]},
    ylabel={Solubility [g per 100 g water]},
    xmin=0, xmax=100,
    ymin=0, ymax=120,
    xtick={0,20,40,60,80,100},
    ytick={0,20,40,60,80,100,120},
    legend pos=north west,
    ymajorgrids=true,
    grid style=dashed,
]
\addplot[
    color=blue,
    mark=square,
]
coordinates {
    (0,23.1)(10,27.5)(20,32)(30,37.8)(40,44.6)(60,61.8)(80,83.8)(100,114)
};
\legend{CuSO$_4\cdot$5H$_2$O}
\end{axis}
\end{tikzpicture}

There are some new commands and parameters here:
- title={Temperature dependence of CuSO$_4\cdot$5H$_2$O solubility}. Sets the plot title; the value is grouped in curly brackets because it contains special characters.
- xmin=0, xmax=100, ymin=0, ymax=120. Set the range of values displayed on each axis.
- xtick={0,20,40,60,80,100}, ytick={0,20,40,60,80,100,120}. Set the positions of the tick marks on each axis.
- legend pos=north west. Places the legend box in the upper-left corner of the plot.
- ymajorgrids=true. Enables horizontal grid lines at the major ticks of the y axis; use xmajorgrids to enable grid lines on the x axis.
- grid style=dashed. Draws the grid lines dashed.
- mark=square. Marks each data point with a square.
- coordinates {(0,23.1)(10,27.5)(20,32)...}. Passes the data points to be plotted as a list of coordinate pairs.
If the data is in a file, which is the case most of the time, instead of the commands \addplot and coordinates you should use \addplot table {file_with_the_data.dat}; the rest of the options are valid in this environment.
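For instance, the solubility plot above could read its points from a file; the file name solubility.dat below is an assumed example, containing the same pairs as the coordinates list, one x y pair per line:

```latex
\begin{tikzpicture}
\begin{axis}[
    xlabel={Temperature [\textcelsius]},
    ylabel={Solubility [g per 100 g water]},
]
% assumes a whitespace-separated file solubility.dat
% with two columns of numbers, e.g. "0 23.1" per line
\addplot[color=blue, mark=square] table {solubility.dat};
\end{axis}
\end{tikzpicture}
```

Keeping the data out of the document makes it easy to regenerate the file from an experiment or script without touching the LaTeX source.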
Scatter plots are used to represent information by using some kind of marks, these are common, for example, when computing statistical regression. Lets start with some data, the sample below is to show the structure of the data file we are going to plot (see the end of this section for a link to the LaTeX source and the data file):
GPA ma ve co un
3.45 643 589 3.76 3.52
2.78 558 512 2.87 2.91
2.52 583 503 2.54 2.4
3.67 685 602 3.83 3.47
3.24 592 538 3.29 3.47
2.1 562 486 2.64 2.37
The next example is a scatter plot of the first two columns in this table:
\begin{tikzpicture}
\begin{axis}[
    enlargelimits=false,
]
\addplot+[
    only marks,
    scatter,
    mark=halfcircle*,
    mark size=2.9pt,
]
table[meta=ma] {scattered_example.dat};
\end{axis}
\end{tikzpicture}
The parameters passed to the axis and addplot environments can also be used in a data plot, except for scatter. Below is a description of the code:

- enlargelimits=false. The axis limits are not extended beyond the data range.
- only marks. Draws only the marks, with no lines joining the data points.
- scatter. Enables scatter mode, where the appearance of each mark depends on the meta parameter explained below.
- mark=halfcircle*. Uses a filled half-circle as the mark.
- mark size=2.9pt. Sets the size of each mark.
- table[meta=ma]{scattered_example.dat};. Reads the data from the file; the meta=ma option selects the column named ma as the source of the per-point meta data used to colour the marks.
Bar graphs (also known as bar charts and bar plots) are used to display gathered data, mainly statistical data about a population of some sort. Bar plots in pgfplots are highly customisable, but here we are going to show an example that 'just works':
\begin{tikzpicture}
\begin{axis}[
    x tick label style={
        /pgf/number format/1000 sep=},
    ylabel=Year,
    enlargelimits=0.05,
    legend style={at={(0.5,-0.1)},
        anchor=north,legend columns=-1},
    ybar interval=0.7,
]
\addplot
    coordinates {(2012,408184) (2011,408348)
        (2010,414870) (2009,412156)};
\addplot
    coordinates {(2012,388950) (2011,393007)
        (2010,398449) (2009,395972)};
\legend{Men,Women}
\end{axis}
\end{tikzpicture}
The figure starts with the already explained declaration of the tikzpicture and axis environments, but the axis declaration has a number of new parameters:

- x tick label style={/pgf/number format/1000 sep=}. Removes the thousands separator from the x tick labels, so years print as 2012 instead of 2,012.
- ylabel=Year. Sets the label of the y axis; each \addplot command within this axis adds a new bar series (the ybar parameter described below is mandatory for this to work).
- enlargelimits=0.05. Extends the axis limits 5% beyond the data range.
- legend style={at={(0.5,-0.1)}, anchor=north,legend columns=-1}. Places the legend below the plot, centred, with all entries in a single row.
- ybar interval=0.7. Draws vertical bars that span 70% of each x interval.
The coordinates in this kind of plot determine the base point of the bar and its height.
The labels on the y-axis will show up to 4 digits; if the numbers you are working with are greater than 9999, pgfplots will use the same notation as in the example.

Pgfplots has the 3d plotting capabilities that you may expect in a plotting software.
There's a simple example about this in the introduction; let's work on something slightly more complex:
\begin{tikzpicture}
\begin{axis}[
    title=Example using the mesh parameter,
    hide axis,
    colormap/cool,
]
\addplot3[
    mesh,
    samples=50,
    domain=-8:8,
]
{sin(deg(sqrt(x^2+y^2)))/sqrt(x^2+y^2)};
\addlegendentry{$\frac{sin(r)}{r}$}
\end{axis}
\end{tikzpicture}
Most of the commands here have already been explained, but there are 3 new things:

- hide axis. Hides the axis lines, ticks and labels, showing only the surface.
- colormap/cool. Selects the predefined cool colour scheme for the surface.
- mesh. Draws the plot as a wireframe mesh instead of a filled surface.
Note: when working with trigonometric functions pgfplots uses degrees as the default unit; if the angle is in radians (as in this example) you have to use the deg function to convert to degrees.
In pgfplots it is possible to draw contour plots, but the data has to be precalculated by an external program. Let's see:
\begin{tikzpicture}
\begin{axis}[
    title={Contour plot, view from top},
    view={0}{90}
]
\addplot3[
    contour gnuplot={levels={0.8, 0.4, 0.2, -0.2}}
]
{sin(deg(sqrt(x^2+y^2)))/sqrt(x^2+y^2)};
\end{axis}
\end{tikzpicture}
This is a plot of some contour lines for the same equation used in the previous section. The value of the title parameter is inside curly brackets because it contains a comma, so we use the grouping brackets to avoid any confusion with the other parameters passed to the \begin{axis} declaration. There are two new commands:

- view={0}{90}. Sets the viewpoint to look straight down from the top, so the 3d plot is seen as a 2d projection.
- contour gnuplot={levels={0.8, 0.4, 0.2, -0.2}}. Computes the contour lines with an external gnuplot call; levels is a list of values of elevation levels where the contour lines are to be computed.
To plot a set of data into a 3d surface all we need is the coordinates of each point. These coordinates could be an unordered set or, in this case, a matrix:
\begin{tikzpicture}
\begin{axis}
\addplot3[
    surf,
]
coordinates {
    (0,0,0) (0,1,0) (0,2,0)

    (1,0,0) (1,1,0.6) (1,2,0.7)

    (2,0,0) (2,1,0.7) (2,2,1.8)
};
\end{axis}
\end{tikzpicture}
The points passed to the coordinates parameter are treated as contained in a 3 x 3 matrix, with a blank row acting as the separator between matrix rows.
All the options for 3d plots in this article apply to data surfaces.
The syntax for parametric plots is slightly different. Let's see:
\begin{tikzpicture}
\begin{axis}[
    view={60}{30},
]
\addplot3[
    domain=0:5*pi,
    samples=60,
    samples y=0,
]
({sin(deg(x))}, {cos(deg(x))}, {x});
\end{axis}
\end{tikzpicture}
There are only two new things in this example: first, samples y=0, which prevents pgfplots from joining the extreme points of the spiral; and second, the way the function to plot is passed to the addplot3 environment. Each parameter function is grouped inside curly brackets and the three parameters are delimited with parentheses.
Reference guide of commands, options and environments:

- axis: normal plot with linear scaling on both axes.
- semilogxaxis: logarithmic scaling of x and normal scaling for y.
- semilogyaxis: logarithmic scaling for y and normal scaling for x.
- loglogaxis: logarithmic scaling for the x and y axes.
- axis lines: changes the way the axes are drawn; default is box. Possible values: box, left, middle, center, right, none.
- legend pos: position of the legend box. Possible values: south west, south east, north west, north east, outer north east.
- mark: type of marks used in data plotting; when a single character is used, its appearance is very similar to the actual mark. Possible values: *, x, +, |, o, asterisk, star, 10-pointed star, oplus, oplus*, otimes, otimes*, square, square*, triangle, triangle*, diamond, halfdiamond*, halfsquare*, right*, left*, Mercedes star, Mercedes star flipped, halfcircle, halfcircle*, pentagon, pentagon*, cubes (cubes only work in 3d plots).
- colormap: colour scheme to be used in a plot; can be personalized, but there are some predefined colormaps: hot, hot2, jet, blackwhite, bluered, cool, greenyellow, redyellow, violet.
For more information see:
|
I'm having some trouble with understanding the derivation of the action of the $X$ operator. It seems to be a result of the notation used and not a property of itself.
The usual argument is to consider eigenfunctions of the $X$-operator: $X|x\rangle = x|x\rangle$ where $X$ is an operator, $|x\rangle$ is an eigenket of $X$ and $x$ is the corresponding eigenvalue. Then \begin{eqnarray*} \color{red}{\langle x'|}X|x\rangle &=& \color{red}{\langle x'|}x|x\rangle \\ \\ \langle x'|X|x\rangle &=& x\langle x'|x\rangle \\ \\ \langle x'|X|x\rangle &=& x\,\delta(x'-x) \end{eqnarray*} where $\delta$ is Dirac's $\delta$-"function". Then we ask what $X$ does to arbitrary kets like $|f\rangle$: \begin{eqnarray*} (Xf)(x) &=& \langle x | X | f\rangle \\ \\ &=&\int_{-\infty}^{+\infty} \langle x|X|x'\rangle\langle x'|f\rangle~\mathrm dx' \\ \\ &=& \int_{-\infty}^{+\infty}f(x')\color{red}{\langle x|X|x'\rangle}~\mathrm dx' \\ \\ &=& \int_{-\infty}^{+\infty} f(x')\,\color{red}{x'\,\delta(x-x')}~\mathrm dx' \end{eqnarray*} The defining property of the $\delta$-"function" is that $\int_{\mathbb R} f(y)\,\delta(x-y)~\mathrm dy=f(x)$, and so $$(Xf)(x) = x\,f(x)$$
However, if I do this with other symbols, then I can't get the same result. Let's say $X|x\rangle = \lambda |x\rangle$. Then following the same steps gives $\langle x'|X|x\rangle = \lambda \langle x'|x\rangle=\lambda \,\delta(x'-x)$, and whence$$(Xf)(x)=\lambda f(x)$$This is to be expected: $Xf$ is just an eigenvalue multiple of $f$. It seems that the property that $X : f \mapsto xf$ comes from the fact that we used $x$ to denote the eigenvalue of $X$. What am I missing here?
Perhaps because $x$ is a real number and the set of all kets $|x\rangle$ can be identified with the real line by $|x\rangle \mapsto x$, and that $x$ must be an eigenvalue of $|x\rangle$ under this construction?
|
A matrix is positive if and only if it is Hermitian (and thus unitarily diagonalizable) and all its eigenvalues are positive (that they are real follows automatically from it being Hermitian). If this is not the way you define a positive operator, then you need to specify how you do, so that we can prove the equivalence.
In other words, $A$ is positive, $A\ge0$, iff it can be written as$$A=\sum_k \lambda_k v_k v_k^*\equiv \sum_k \lambda_k \lvert v_k\rangle\!\langle v_k\rvert, \quad \lambda_k\ge0,$$with $\langle v_k\rvert v_j\rangle\equiv v_k^* v_j=\delta_{jk}$. I used here both dyadic notation and bra-ket notation just to point out that these are simply two equivalent ways to write the same thing.
Why is $A^\dagger A\ge0$?
One way to show this is to start from the fact that unitarily diagonalizable is equivalent to normal. This means that a matrix $B$ can be written as $B=UDU^\dagger$ for some unitary $U$ and diagonal matrix $D$ if and only if $BB^\dagger=B^\dagger B$.
As you can readily verify, it is the case that $(A^\dagger A)^\dagger (A^\dagger A)=(A^\dagger A)(A^\dagger A)^\dagger$. It follows that we can write$$A^\dagger A=\sum_k s_k\lvert v_k\rangle\!\langle v_k\rvert$$for some (generally complex as far as we know now) $s_k$, satisfying $A^\dagger A\lvert v_k\rangle=s_k\lvert v_k\rangle$.
That $s_k\in\mathbb R$, and more precisely $s_k\ge0$, now follows from$$A^\dagger A\lvert v_k\rangle=s_k\lvert v_k\rangle\Longrightarrow s_k=\langle v_k\rvert A^\dagger A\lvert v_k\rangle=\|Av_k\|^2\ge0.$$
Why $\sqrt{A^\dagger A}\ge0$?
The square root of a positive operator is (can be) defined through the square root of its eigenvalues. In other words, if $A=\sum_k s_k\lvert v_k\rangle\!\langle v_k\rvert$ with $s_k\ge0$, then we define$$\sqrt A=\sum_k \sqrt{s_k}\lvert v_k\rangle\!\langle v_k\rvert.$$Clearly, $s_k\ge0\Longrightarrow \sqrt{s_k}\ge0$, and thus $A\ge0\Longrightarrow \sqrt A\ge0$.
This is not directly related to the question asked, but as we are talking about this in connection with the polar decomposition, let me show another way to get to the polar decomposition.
Start from the previously shown fact that $A^\dagger A\ge0$. This is equivalent to it being writable as $A^\dagger A=\sum_k s_k\lvert v_k\rangle\!\langle v_k\rvert$ for $s_k\ge0$. But then again, note that$$A^\dagger A v_k=s_k v_k\Longleftrightarrow \langle A v_k,Av_j\rangle=\delta_{jk}s_k\Longleftrightarrow Av_j=\sqrt{s_j}w_j$$for some orthonormal set of vectors $w_j$. This is nothing but the singular value decomposition of $A$.
Now, to get to the polar decomposition, we just rewrite this as$$A=\sum_k \sqrt{s_k}\lvert w_k\rangle\!\langle v_k\rvert=\Bigg(\underbrace{\sum_k \lvert w_k\rangle\!\langle v_k\rvert}_{U}\Bigg)\Bigg(\underbrace{\sum_k \sqrt{s_k} \lvert v_k\rangle\!\langle v_k\rvert}_{\sqrt{A^\dagger A}}\Bigg),$$which is nothing but the polar decomposition of $A$.
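These facts are straightforward to verify numerically. The sketch below (my own check with NumPy) confirms, for a random complex matrix, that $A^\dagger A$ has nonnegative eigenvalues, that the eigenvalue-wise square root squares back to $A^\dagger A$, and that $A = U\sqrt{A^\dagger A}$ with $U$ unitary:

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# A^dagger A is Hermitian with nonnegative eigenvalues
H = A.conj().T @ A
eigvals, V = np.linalg.eigh(H)
assert np.all(eigvals >= -1e-10)

# square root defined through the square roots of the eigenvalues
sqrtH = (V * np.sqrt(np.clip(eigvals, 0, None))) @ V.conj().T
assert np.allclose(sqrtH @ sqrtH, H)

# polar decomposition A = U * sqrt(A^dagger A) via the SVD
U_svd, s, Vh = np.linalg.svd(A)
U = U_svd @ Vh
assert np.allclose(U @ U.conj().T, np.eye(4))  # U is unitary
assert np.allclose(U @ sqrtH, A)               # A = U * sqrt(A^dagger A)
```

The construction of $U$ from the SVD factors is exactly the $\sum_k \lvert w_k\rangle\!\langle v_k\rvert$ of the decomposition above.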
|
To do this analysis, one needs data up to 20000 psi (1400 bars) on either enthalpy vs. temperature and pressure (a graph) or compressibility factor Z vs. temperature and pressure. The only graphs I have found of the former type go up to only 100 bars, which is a factor of 14 too low. However, I have found data on the compressibility factor Z at temperatures up to 300 K and pressures up to 1000 bars: https://cds.cern.ch/record/1444601/files/978-1-4419-9979-5_BookBackMatter.pdf Although this is still a factor of 1.4 too low, it might provide some idea of the temperature rise that might be expected in the valve.
So here is how the data would be used.
The effect of pressure on enthalpy (per mole) of gas is given by $$\left(\frac{\partial H}{\partial P}\right)_T=V-T\left(\frac{\partial V}{\partial T}\right)_P\tag{1}$$For a real gas, the equation of state in terms of the compressibility factor Z=Z(P,T) is given by$$PV=ZRT\tag{2}$$If we substitute Eqn. 2 into Eqn. 1, we obtain: $$\left(\frac{\partial H}{\partial P}\right)_T=-\frac{RT^2}{P}\left(\frac{\partial Z}{\partial T}\right)_P\tag{3}$$Integrating Eqn. 3 between P=0 and arbitrary P at constant temperature yields the so-called Residual Enthalpy $H^R$:$$H^R(P,T)=-RT^2\int_0^P{\left(\frac{\partial Z}{\partial T}\right)_{P'}\frac{dP'}{P'}}=-RT^2\frac{\partial}{\partial T}\left(\int_0^P{(Z(T,P')-1)\frac{dP'}{P'}}\right)\tag{4}$$where P' is a dummy variable of integration.
If the final pressure coming out of the valve is low (so that the gas exiting the valve is in the ideal gas region), we can write: $$\Delta H=-H^R+C_p\Delta T=0$$where, for a monoatomic gas like Helium, $C_p=\frac{5}{2}R$. Therefore, $$\Delta T=-\frac{2}{5}T^2\frac{\partial}{\partial T}\left(\int_0^P{(Z(T,P')-1)\frac{dP'}{P'}}\right)\tag{5}$$This expression would be evaluated using the data presented in the reference above.
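To sketch how Eqn. 5 would be evaluated from tabulated Z data: integrate numerically in P and take a finite difference in T. The virial-type Z model below is a placeholder assumption purely for illustration; in practice one would interpolate the tabulated Z(T,P) values from the reference instead:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def residual_integral(Z, T, P, n=4000):
    """Numerically evaluate int_0^P (Z(T,P') - 1) dP'/P' by the trapezoid rule."""
    Ps = np.linspace(P * 1e-6, P, n)
    f = (Z(T, Ps) - 1.0) / Ps
    return float(np.sum((f[1:] + f[:-1]) * np.diff(Ps)) / 2.0)

def delta_T(Z, T, P, dT=0.01):
    """Eqn. 5: temperature change for a monatomic gas with Cp = (5/2) R."""
    dI_dT = (residual_integral(Z, T + dT, P) - residual_integral(Z, T - dT, P)) / (2 * dT)
    return -0.4 * T**2 * dI_dT

# Placeholder virial-type model Z = 1 + B(T) P / (R T); the B(T) coefficients
# are assumed for illustration, NOT fitted helium data.
def Z_model(T, P):
    B = 12e-6 - 1e-3 / T  # second virial coefficient, m^3/mol
    return 1.0 + B * P / (R * T)

# throttling from 1400 bar (1.4e8 Pa) at 300 K; positive means the gas warms
print(delta_T(Z_model, 300.0, 1.4e8))
```

With a positive second virial coefficient, as helium has near room temperature, the sign comes out positive, i.e. a temperature rise across the valve.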
If a reference can be found with data going out to 1400 bar, that would be even better.
|
I have the following CFG,
$S \rightarrow CB$
$C \rightarrow aCa \mid bCb \mid \text{#}B$
$B \rightarrow AB \mid \varepsilon$
$A \rightarrow a \mid b$
This is the CFG for the following language:
$$L= \left\{w \text{#} x\mid w^R \text{ is a substring of }\ x \text{, where } x,w\in \{a, b\}^*\right \}$$
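To make the language concrete, here is a small brute-force membership checker (my own illustration; the PDA of course has to decide this nondeterministically rather than by substring search):

```python
def in_L(s: str) -> bool:
    """True iff s = w#x with w, x over {a, b} and reverse(w) a substring of x."""
    if s.count('#') != 1:
        return False
    w, x = s.split('#')
    if not set(w) <= {'a', 'b'} or not set(x) <= {'a', 'b'}:
        return False
    return w[::-1] in x

assert in_L("abbaa#aabbbbbbbaabbabbbbbb")  # contains aabba = reverse(abbaa)
assert not in_L("ab#bbbb")                 # ba does not occur in bbbb
```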
I have a problem with constructing PDA for this CFG.
My attempt
My idea was to push characters onto the stack until the "#" character; then, as the sequence of reversed characters comes, pop from the stack. If the stack is empty at the end of the input, then we are done.
The problem is that for the following string, for example:
abbaa#aabbbbbbb(aabba)bbbbbb
when we read characters after "#", the PDA will pop 4 characters, then it will see that the sequence is not valid and proceed with the input. How can I return these 4 characters to the stack so that I can check the sequence again? I need the full stack to proceed with the accepted reversed substring that I have shown in brackets.
|
I am studying the lecture The Complexity of Propositional Proofs. Here there is a definition together with a discussion (page 3). I don't understand that discussion.
Let $F$ denote the set of propositional formulas over the connectives $\wedge$, $\vee$, $\rightarrow$ and $\lnot$, with a countably infinite supply of propositional variables. An abstract propositional proof system is a polynomial time function $V: F \times \{0,1\}^* \to \{0,1\}$ such that for every tautology $\tau$ there is a proof $P \in \{0,1\}^*$ with $V(\tau, P) = 1$ and for every non-tautology $\tau$, for every $P$, $V(\tau, P)=0$. The size of the proof is $|P|$.
Definition equates propositional proof systems with non-deterministic algorithms for the language of tautologies.
Why does the author conclude "Definition equates propositional proof systems with non-deterministic algorithms for the language of tautologies."?
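To make the definition concrete, here is one instance of an abstract proof system (my own illustration, not from the lecture): the "truth-table" system. $V$ ignores the content of the proof and checks every assignment; the proof string is mere padding whose length pays for the exponential work, which keeps $V$ polynomial in $|\tau| + |P|$:

```python
from itertools import product

def eval_formula(f, assignment):
    """Formulas as nested tuples: ('var', name), ('not', f),
    ('and', f, g), ('or', f, g), ('imp', f, g)."""
    op = f[0]
    if op == 'var':
        return assignment[f[1]]
    if op == 'not':
        return not eval_formula(f[1], assignment)
    a, b = eval_formula(f[1], assignment), eval_formula(f[2], assignment)
    return {'and': a and b, 'or': a or b, 'imp': (not a) or b}[op]

def variables(f):
    if f[0] == 'var':
        return {f[1]}
    return set().union(*(variables(g) for g in f[1:]))

def V(tau, proof):
    """Truth-table proof system: accept iff tau is a tautology and the
    proof is long enough (padding) to make V polynomial in |tau| + |proof|."""
    vs = sorted(variables(tau))
    if len(proof) < 2 ** len(vs):        # the proof must pay for the 2^n work
        return 0
    for bits in product([False, True], repeat=len(vs)):
        if not eval_formula(tau, dict(zip(vs, bits))):
            return 0
    return 1

# tau = x -> (y -> x), a tautology on two variables
tau = ('imp', ('var', 'x'), ('imp', ('var', 'y'), ('var', 'x')))
print(V(tau, '0' * 4))   # 1: accepted with a 2^2-symbol "proof"
```

The guess-and-verify reading is then: on input $\tau$, nondeterministically guess a string $P$ and run $V(\tau, P)$.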
|
This is a question from a Calculus exam. Can someone help?
Does $\int _0^{\infty }\frac{\sin \pi x}{\left|\ln \left(x\right)\right|^{3/2}}\,dx$ converge?
I proved that it converges on $[1,\infty)$ using the comparison test with the integral of $\frac{1}{x^{3/2}}$. Between $1/2$ and $1$ I used the Dirichlet test to prove it converges. Is that correct?
Any thoughts about how I can prove convergence between $0$ and $1/2$?
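Not a proof, but a quick numerical sanity check of that remaining piece (near $x = 0$ the integrand behaves like $\pi x/\lvert\ln x\rvert^{3/2} \to 0$, so convergence is expected):

```python
from mpmath import mp, mpf, quad, sin, log, pi

mp.dps = 30
# integrand on (0, 1/2); there |ln x| = -ln x
f = lambda x: sin(pi * x) / (-log(x)) ** mpf('1.5')
val = quad(f, [0, mpf('0.5')])
print(val)  # a finite positive value, consistent with convergence
```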
|
As already mentioned by TheSimpliFire, you cannot simply change from real to complex variables without making changes within your solution. I will present a solution which does not rely on complex analysis at all.
Recently reading this post dealing with related integrals I have finally found a way to evaluate your integral. However, the crucial part we are in need of is precisely the Mellin Transform of the sine function which is given by
$$\mathcal M_s\{\sin(x)\}~=~\int_0^\infty x^{s-1}\sin(x)\mathrm dx~=~\Gamma(s)\sin\left(\frac{\pi s}2\right)\tag1$$
Here $\Gamma(z)$ denotes the Gamma Function. There are different possible ways to show this relation; for myself I prefer using Ramanujan's Master Theorem as it is done here for example $($just substitute $s$ by $-s$$)$, but I will leave this out for now since it is not our concern here. Note that we can use a variation of Feynman's Trick, i.e. Differentiation under the Integral Sign. Before applying this technique we may rewrite the RHS of $(1)$ in the following way
$$\Gamma(s)\sin\left(\frac{\pi s}2\right)=\Gamma(s)\sin\left(\frac{\pi s}2\right)\frac{2\cos\left(\frac{\pi s}2\right)}{2\cos\left(\frac{\pi s}2\right)}=\frac1{{2\cos\left(\frac{\pi s}2\right)}}\Gamma(s)\sin(\pi s)=\frac\pi2\frac1{\Gamma(1-s)\cos\left(\frac{\pi s}2\right)}$$
Here we used Euler's Reflection Formula. Even though the new form seems to be more complicated in the context of taking derivatives it actually prevents us from running into indefinite expressions which are harder to deal with. Anyway, differentiating w.r.t. $s$ leads us to
\begin{align*}\frac{\mathrm d}{\mathrm ds}\int_0^\infty x^{s-1}\sin(x)\mathrm dx&=\frac{\mathrm d}{\mathrm ds}\frac\pi2\frac1{\Gamma(1-s)\cos\left(\frac{\pi s}2\right)}\\\int_0^\infty \frac{\partial}{\partial s}x^{s-1}\sin(x)\mathrm dx&=\frac\pi2\frac{\mathrm d}{\mathrm ds}\frac1{\Gamma(1-s)\cos\left(\frac{\pi s}2\right)}\\\int_0^\infty x^{s-1}\log(x)\sin(x)\mathrm dx&=\frac\pi2\left[\frac1{\cos\left(\frac{\pi s}2\right)}\frac{-(-1)\Gamma'(1-s)}{\Gamma^2(1-s)}+\frac1{\Gamma(1-s)}\frac{-\frac\pi2\sin\left(\frac{\pi s}2\right)}{\cos^2\left(\frac{\pi s}2\right)}\right]\\\int_0^\infty x^{s-1}\log(x)\sin(x)\mathrm dx&=\frac\pi2\left[\frac1{\cos\left(\frac{\pi s}2\right)}\frac{\psi^{(0)}(1-s)}{\Gamma(1-s)}-\frac\pi2\frac1{\Gamma(1-s)}\frac{\sin\left(\frac{\pi s}2\right)}{\cos^2\left(\frac{\pi s}2\right)}\right]\end{align*}
Now we are basically done. Since every occurring term is defined at $s=0$ we can simply plug in this value. Utilizing that the Digamma Function $\psi^{(0)}(z)$ is closely related to the Euler-Mascheroni Constant we can deduce that
$$\int_0^\infty x^{0-1}\log(x)\sin(x)\mathrm dx=\frac\pi2\left[\underbrace{\frac1{\cos\left(\frac{\pi\cdot0}2\right)}\frac{\psi^{(0)}(1-0)}{\Gamma(1-0)}}_{=-\gamma}-\underbrace{\frac\pi2\frac1{\Gamma(1-0)}\frac{\sin\left(\frac{\pi\cdot0}2\right)}{\cos^2\left(\frac{\pi\cdot0}2\right)}}_{=0}\right]$$
$$\therefore~\int_0^\infty \frac{\log(x)\sin(x)}x\mathrm dx~=~-\frac{\gamma\pi}2$$
|
By a "globally bounded $G$-function," following G. Christol, I will mean a solution to a (minimal) linear differential equation on $\mathbb{P}^1$ (then necessarily of the Fuchsian type and with rational exponents), which is regular at $x = 0$, has a Taylor expansion in $O_{K,S}[[x]]$, and has a positive radius of convergence at each place of $K$, where $O_{K,S}$ is the ring of integers of a number field localized at a finite set $S$ of primes. In other words, the coefficients are required to be $S$-integral, excluding the typical examples of $G$-functions with logarithmic branching: the polylogarithms.
Question. May a globally bounded $G$-function have a logarithmic branch point? (I.e., a logarithmic term in the asymptotic expansion near a singular point of the Fuchsian differential equation.)
For example, algebraic functions are globally bounded by Eisenstein's theorem, and they only admit algebraic branching (Puiseux expansions at every point).
More generally, the diagonals of a rational function in several variables are globally bounded $G$-functions. (They in fact satisfy a Picard-Fuchs differential equation for periods of the complement of an algebraic hypersurface. See, for instance, http://pierre.lairez.fr/objets.html , "Theorem of Christol-Lipschitz.") Christol has a conjecture that all globally bounded $G$-functions are of this form. (Does the question have a negative answer for diagonals of rational functions?)
Motivation. If the question has a negative answer, it would imply the following conjecture of Ruzsa: If a mapping $a : \mathbb{N}_0 \to \mathbb{Z}$ preserves congruences (this is to say, it has the divisibility property $n - m \mid a(n)-a(m)$ characteristic of polynomials), and if there is an $A < e$ such that $|a(n)| < A^n$ for all $n \gg 0$, then $a(n)$ is a polynomial. Indeed:
Perelli and Zannier have shown that such an $f_0(x) := \sum_{n \geq 0} a(n)x^n \in \mathbb{Z}[[x]]$ is $D$-finite; it is therefore a globally bounded $G$-function. Assuming as we may that $a(0) = 0$, the divisibility property is easily seen to imply the global boundedness of each of the series of iterated integrals $$ f_{k+1}(x) := - \frac{f_k'(0)}{1-x} + \frac{1}{x} \int_0^x f_k(t) \frac{dt}{t}. $$ For example, the $n$-th coefficient of $f_1$ is $$ b(n) := \frac{a(n+1)}{n+1}-a(1) = \frac{a(n+1)-a(0)}{n+1} -a(1) \in \mathbb{Z}, $$ and satisfies a mildly weaker form of the divisibility property: $$ n-m \mid (n+1)(m+1)(b(n)-b(m)), $$ sufficient to yield $f_2 \in \frac{1}{2} \mathbb{Z}[[x]]$ upon applying it with $m = 0$.
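The integrality claims are easy to sanity-check computationally. A small sketch with an assumed congruence-preserving sequence (here $a(n)=n^3+2n$, a polynomial with $a(0)=0$, so $n-m \mid a(n)-a(m)$ automatically):

```python
from fractions import Fraction

def a(n):
    return n**3 + 2*n        # a sample congruence-preserving map with a(0) = 0

def b(n):
    # b(n) = a(n+1)/(n+1) - a(1); integrality follows from n+1 | a(n+1) - a(0)
    return Fraction(a(n + 1), n + 1) - a(1)

# the divisibility property: n - m | a(n) - a(m)
for n in range(1, 30):
    for m in range(n):
        assert (a(n) - a(m)) % (n - m) == 0

# b(n) is an integer for all n
assert all(b(n).denominator == 1 for n in range(30))

# and b satisfies the weakened property: n - m | (n+1)(m+1)(b(n) - b(m))
for n in range(1, 30):
    for m in range(n):
        assert ((n + 1) * (m + 1) * (b(n) - b(m))) % (n - m) == 0

print("checks passed for a(n) = n^3 + 2n, n < 30")
```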
Therefore each $f_k$ is a globally bounded $G$-function. Finally, it is easy to see that unless $f_0$ is meromorphic on all of $\mathbb{P}^1$, regular outside $x = 1$, and vanishing at infinity (in which case the $f_k$ stabilize to $0$ after $\deg{f_0}$ steps; those are exactly the functions predicted by Ruzsa's conjecture), a logarithmic term will enter the iterated integral at each singularity $a \neq 1$ (or at $a = 1$ if the latter is a branch point) as soon as $k \gg_a 0$.
For example, if $a$ is a Laurent pole of order $-m < 0$ of $f_0$, the expansion of $f_m$ at $x = a$ will involve $\log{(x-a)}$. Similar remarks apply to the branch points.
Added.
A negative answer to the question would refine the classical theorem of Pólya (resp. its generalization by André) which states that a globally bounded power series whose derivative is a rational function (resp. an algebraic function) is itself a rational (resp. algebraic) function.
A stronger (contrapositive) question would be the following. If an irreducible linear differential operator $L$ with polynomial coefficients has one globally bounded solution (e.g., recall Apery's differential equation from the proof of the irrationality of $\zeta(3)$), does it follow that $L$ has finite local monodromies?
|
Adiabatic Temperature Rise Constant
The adiabatic temperature rise constant k is the constant term used in the adiabatic temperature rise equation:
[math] A = \frac{\sqrt{i^{2}t}}{k} \, [/math]
Where:
[math]A \,[/math] is the minimum cross-sectional area of the PE conductor ([math]mm^{2}[/math])
[math]i^{2}t \,[/math] is the energy of the short circuit ([math]A^{2}s[/math])
[math]k \,[/math] is the adiabatic temperature rise constant
The constant k is made up of the material properties and temperature range of the conductor material (see the derivation of the constant here).
IEC 60364-5-54 Annex A gives some guidance on the calculation of the constant k, according to the following equation:
[math]k = \sqrt{\frac{Q_{c} (\beta + 20 ^\circ C)}{\rho_{20}} \ln \left( 1 + \frac{\theta_{f} - \theta_{i}}{\beta + \theta_{i}} \right)} [/math]
Where:
[math]Q_{c} \,[/math] is the volumetric heat capacity of the conductor material at 20 °C (J/°C·mm³)
[math]\beta \,[/math] is the reciprocal of the conductor temperature coefficient of resistivity at 0 °C (°C)
[math]\rho_{20} \,[/math] is the electrical resistivity of the conductor material at 20 °C ([math]\Omega[/math]·mm)
[math]\theta_{i} \,[/math] and [math]\theta_{f} \,[/math] are the conductor initial and final temperatures respectively (°C)

Table of material values
Material | [math]\beta \,[/math] (°C) | [math]Q_{c} \,[/math] (J/°C·mm³) | [math]\rho_{20} \,[/math] (Ω·mm)
Copper | 234.5 | 3.45 × 10⁻³ | 17.241 × 10⁻⁶
Aluminium | 228 | 2.5 × 10⁻³ | 28.264 × 10⁻⁶
Lead | 230 | 1.45 × 10⁻³ | 214 × 10⁻⁶
Steel | 202 | 3.8 × 10⁻³ | 138 × 10⁻⁶

Table of temperature ranges for conductors not incorporated in cables and not bunched with other cables
Insulation | [math]\theta_{i} \,[/math] | [math]\theta_{f} \,[/math]
70 °C PVC | 30 | 160 or 140(*)
90 °C PVC | 30 | 160 or 140(*)
90 °C Thermosetting | 30 | 250
60 °C Rubber | 30 | 200
85 °C Rubber | 30 | 220
Silicone rubber | 30 | 350

(*) For PVC insulated conductors greater than 300 mm²

Table of temperature ranges for bare conductors in contact with cable coverings but not bunched with other cables

Cable Covering | [math]\theta_{i} \,[/math] | [math]\theta_{f} \,[/math]
PVC | 30 | 200
Polyethylene | 30 | 150
CSP | 30 | 220

Table of temperature ranges for conductors incorporated in cables and bunched with other cables or other insulated conductors

Insulation | [math]\theta_{i} \,[/math] | [math]\theta_{f} \,[/math]
70 °C PVC | 70 | 160 or 140(*)
90 °C PVC | 90 | 160 or 140(*)
90 °C Thermosetting | 90 | 250
60 °C Rubber | 60 | 200
85 °C Rubber | 85 | 220
Silicone rubber | 180 | 350

(*) For PVC insulated conductors greater than 300 mm²

Table of temperature ranges for protective conductors as a metallic layer of a cable, e.g. armour, metallic sheath, concentric conductor, etc.

Insulation | [math]\theta_{i} \,[/math] | [math]\theta_{f} \,[/math]
70 °C PVC | 60 | 200
90 °C PVC | 80 | 200
90 °C Thermosetting | 80 | 200
60 °C Rubber | 55 | 200
85 °C Rubber | 75 | 220
Mineral, PVC covered | 70 | 200
Mineral, bare sheathed | 105 | 250

Table of temperature ranges for bare conductors with no risk of damaging neighbouring material

Conditions | [math]\theta_{i} \,[/math] | Copper [math]\theta_{f} \,[/math] | Aluminium [math]\theta_{f} \,[/math] | Steel [math]\theta_{f} \,[/math]
Visible in restricted area | 30 | 500 | 300 | 500
Normal conditions | 30 | 200 | 200 | 200
Fire risk | 30 | 150 | 150 | 150
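As a worked example of the IEC 60364-5-54 formula, a short script computing k for a copper conductor with 70 °C PVC insulation, not incorporated in a cable (θi = 30 °C, θf = 160 °C), using the material values above. The result reproduces the familiar k = 143 for separate copper protective conductors with PVC insulation:

```python
import math

def k_constant(Qc, beta, rho20, theta_i, theta_f):
    """Adiabatic temperature rise constant per IEC 60364-5-54 Annex A.
    Qc in J/(degC.mm^3), beta in degC, rho20 in ohm.mm, temperatures in degC."""
    return math.sqrt(Qc * (beta + 20) / rho20
                     * math.log(1 + (theta_f - theta_i) / (beta + theta_i)))

# Copper, 70 degC PVC insulation, separate conductor: 30 -> 160 degC
k = k_constant(Qc=3.45e-3, beta=234.5, rho20=17.241e-6, theta_i=30, theta_f=160)
print(round(k))  # 143
```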
|
During the 2014/2015 season, Real Madrid's striker position was covered by the Frenchman Karim Benzema. As opposed to Mourinho's philosophy of rotating the striker almost on a game-by-game basis, Carlo Ancelotti had Benzema starting every game and left Javier "Chicharito" Hernandez on the bench for the majority of the season. Benzema played 2,312 La Liga minutes while Hernandez played 859 minutes. One could assume Chicharito was not world-class quality, but every time he jumped onto the pitch, he responded by scoring. How can we determine which player deserved to be in the initial lineup?
Determining who is the better player can become a never-ending opinion battle, so I'm focusing this analysis on who is the most effective striker, which I define as the player who has the highest probability of scoring given that he is included in the starting squad. The data comes from Squawka and the module I'm experimenting with is the PyMC Python module.
A little bit of data wrangling before we begin… I build two dataframes including the date of the match and the goals scored for Benzema and Hernandez. These dataframes exclude games where the player was active for less than 10 minutes, since that may simply not be enough time for the striker to perform.
As mentioned before, we want to calculate the probability of scoring at least one goal given that the player was included in the starting squad. The probability can be solved by using Bayes' Rule as shown below:
\( P( k > 0 | Y= 1) = \frac{P( Y = 1 | k >0 ) \times P( k > 0)}{P( Y = 1 | k >0) \times P(K > 0) + P( Y = 1 | K =0) \times P(K=0)} \)
Where:
K: number of goals scored in the game
Y=1: starting the game
Y=0: entering as a substitute
From the equation above, it is clear we need to find the probability of scoring \( P(K>0) \) for Hernandez and Benzema. To do this, we will use the PyMC module.
To begin with, I set up my IPython environment to load all these modules.
The first step in the analysis is to define the probability distribution we want to base our model on. The Poisson distribution is the most appropriate because it is discrete and assigns higher probability to lower values of K. Keep in mind we are talking about soccer and, although possible, it is rare to see a player score multiple goals in one match. The parameter that differentiates Hernandez's Poisson distribution from Benzema's Poisson distribution is the \( \lambda \) parameter, which we now need to define.
The \( \lambda \) parameter is what determines the shape of the distribution. A larger \( \lambda \) gives greater probability to larger values of K; in our case, a larger \( \lambda \) indicates a higher chance of scoring a larger number of goals. We initially don't know \( \lambda \), but we can begin by estimating it. Heuristically, I determined that the best prior was an Exponential distribution with rate parameter \( \alpha \) equal to the inverse of the sample mean. This is going to be our prior distribution for \( \lambda \).
In the code above, we set our prior distribution PyMC variables for \( \lambda _{CH} \) and \( \lambda _{KB} \). We then used the @pm.deterministic decorator to mark observed_proportion_CH and observed_proportion_KB as deterministic functions. (This is required to work with PyMC models.)
The next step is to develop a model. In the code below, the variable obsCH uses the value parameter to mold the striker's goal scoring distribution. Note that we "educate" our distribution with our data using the "value" parameter in the function.
At this point, we have successfully defined a posterior distribution for our parameters \( \lambda _{CH} \) and \( \lambda _{KB} \). To extract samples, we use the following code:
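Since the Exponential(\( \alpha \)) prior is a Gamma(1, \( \alpha \)) distribution and is conjugate to the Poisson likelihood, the same posterior can also be sketched without MCMC. A self-contained sketch (the goal tallies below are illustrative stand-ins, not the real Squawka data):

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_p_score(goals_per_game, n_samples=100_000, rng=rng):
    """Exponential(alpha) prior on lambda with alpha = 1/mean(data), Poisson
    likelihood => posterior lambda ~ Gamma(1 + sum(k), rate = alpha + n).
    Returns a Monte Carlo estimate of P(K > 0) under the posterior."""
    data = np.asarray(goals_per_game)
    alpha = 1.0 / data.mean()              # prior rate, as in the post
    shape = 1 + data.sum()                 # Gamma posterior shape
    rate = alpha + len(data)               # Gamma posterior rate
    lam = rng.gamma(shape, 1.0 / rate, n_samples)
    return np.mean(1 - np.exp(-lam))       # average of P(K > 0 | lambda)

# illustrative tallies of goals per game (NOT the actual 2014/15 data)
benzema = [1, 0, 0, 2, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1]
chicharito = [1, 0, 2, 0, 1, 0, 1]
print(posterior_p_score(benzema), posterior_p_score(chicharito))
```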
These parameters are shown in the graph below. \( \lambda _{KB} \) has a higher value than \( \lambda _{CH} \) on average. Using these parameters in a poisson distribution gives us the graph below.
We could assume Benzema is more effective than Javier Hernandez, but we are missing one final step: we have ignored whether the player was a starter or a substitute. For Benzema, no adjustment is necessary since he was never a substitute. In his case, the Bayesian equation reduces to the tautology:
\( P( K > 0 \mid Y = 1) = P( K > 0 ) \)
Where:
\( P( k > 0 ) = 43.46\% \)
This is not the case for Javier Hernandez. Hernandez played 19 games, 11 as a substitute and 8 as a starter. He was a starter on 4 of the 5 occasions he scored and on 4 of the 14 occasions he didn't get his name on the scoreboard. Hence, Hernandez's Bayesian equation:
\( P( K > 0 \mid Y= 1) = \frac{P( Y = 1 \mid K >0 ) \times P( K > 0)}{P( Y = 1 \mid K >0) \times P(K > 0) + P( Y = 1 \mid K =0) \times P(K=0)} \)
can be answered as:
\( P( K > 0 \mid Y= 1) = \frac{\frac{4}{5} \times P( K > 0)}{\frac{4}{5} \times P(K > 0) + \frac{4}{14} \times P(K=0)} \)
\( P( k > 0 | Y= 1) = 57.19\% \)
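The arithmetic of that update is simple enough to spell out as a small helper, where the prior \( p = P(K>0) \) comes from the fitted Poisson model (back-solving, the post's 57.19% corresponds to a prior of roughly \( p \approx 0.323 \)):

```python
def p_score_given_start(p, p_start_given_score=4/5, p_start_given_blank=4/14):
    """Bayes' rule: P(K>0 | Y=1) from the starter/substitute counts."""
    num = p_start_given_score * p
    return num / (num + p_start_given_blank * (1 - p))

print(round(100 * p_score_given_start(0.323), 2))  # ≈ 57.19 (%)
```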
According to our analysis, Hernandez is 13.73 percentage points more likely to score than the Frenchman Karim Benzema when in the starting eleven.
Upon showing this report to people, a common response was “Yes, Hernandez scores as a sub because the defense is tired”. So we’re now going to compare apples to apples and use only full game statistics.
Notice how Hernandez's \( \lambda \) has wider tails than Benzema's \( \lambda \). This is because we have less data for Hernandez (8 games vs. Benzema's 29 games). On the plot below, we can appreciate the scoring probabilities of both. As we should expect from the \( \lambda \) parameters, Hernandez is more likely to score than Benzema.
Hernandez has a 47.4% probability of scoring when starting a game and Benzema has a 40.8% probability of scoring.
|
Ed25519 is a typical elliptic-curve signature scheme, in a group of large prime order $\ell \approx 2^{252}$ on a curve over the field $\mathbb F_p$ for $p = 2^{255} - 19$. A secret key is a uniform random 32-byte string; a signature is a 64-byte string encoding a point $R$ on the curve generated by the standard base point, and encoding a scalar $s \in \mathbb Z/\ell\mathbb Z$.
In fact, for each choice of $R$, of which there are $\ell$ possibilities, there is a unique choice of $s$, so there are only about $2^{252}$ possible signatures—obviously, some secret keys share a common signature on $m$ (but there are so few collisions it doesn't matter in practice). Since for any fixed message $m$ the key-to-signature function is a map from a 32-byte space to a 64-byte space, it can't possibly induce a uniform distribution on the output space.
Further, the signatures are easily distinguishable from uniform random bit strings: the scalar is always less than $\ell$, and the encoding of $R$ contains an encoding of an integer $x$ such that $(1 + x^2)/(1 + (121665/121666) x^2)$ is a quadratic residue modulo $2^{255} - 19$.
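The scalar bound alone already gives a concrete distinguisher. With $\ell$ as in RFC 8032, a uniform random 256-bit string decodes to a valid $s < \ell$ only about once in 16 tries, while genuine signatures always pass:

```python
# Ed25519 group order (from RFC 8032)
ell = 2**252 + 27742317777372353535851937790883648493

# probability that a uniform 256-bit string passes the s < ell check
p_pass = ell / 2**256
print(p_pass)  # ≈ 0.0625

# so roughly 15 of every 16 random strings fail the check, and a handful
# of samples suffices to distinguish signatures from uniform 64-byte strings
```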
Does the output distribution have approximately the same entropy as the uniform distribution on secret keys? At best it can have about 252 bits of entropy, because there are only about $2^{252}$ possible signatures—but that's pretty close to the 256 bits of entropy possible in a secret key. At worst it ought not to have substantially less than 128 bits of entropy, because if it did, there would be a random algorithm with substantially better probability of success at forgery than, e.g., Pollard's $\rho$ to recover the secret scalar from the public key, at comparable cost.
As it turns out, we choose between these options more or less uniformly at random: the choice of $R$ is determined by a scalar in $\mathbb Z/\ell\mathbb Z$ chosen by a pseudorandom function of the message, keyed by the secret key. So, to anyone without knowledge of the public key: yes, the distribution on signatures has about 252 bits of entropy.
This is why you can make a VRF out of it, though you need a little more work, like an output filter, to get all the properties of a VRF, so that even knowing the public key is not enough to distinguish outputs from uniform random if you don't have the proof of output too.
|
I have proved the 'resolution of the identity' for a normal operator, namely that there is a unique spectral measure $E$ such that $\int_{\sigma(T)} \lambda\,dE = T$.
If $(\lambda_{n})$ is the sequence of eigenvalues of $T$, how do I prove
a) $\sum_{n=1}^\infty \int_{\{\lambda_n\}}\lambda\,dE(\lambda)=\sum_{n=1}^\infty \lambda_n E(\{\lambda_n\})$
and b) that $E(\{\lambda_n\})$ is the orthogonal projection onto the eigenspace of $\lambda_n$?
|
Now showing items 11-17 of 17
Measurement of azimuthal asymmetries in inclusive charged dipion production in e+e− annihilations at √s = 3.65 GeV
(APS Physics, 2016)
We present a measurement of the azimuthal asymmetries of two charged pions in the inclusive process e(+) e(-) -> pi pi X based on a data set of 62 pb(-1) at the center-of-mass energy of 3.65 GeV collected with the BESIII ...
Study of D+ → K− π+ e+ νe
(APS Physics, 2016)
Experimental study on cylindrical grinding with helical grooves wheels
(Karamanoglu Mehmetbey University, 2016)
Gastronomi ve Yiyecek Tarihi
(Tema Yayıncılık, 2010)
Observation of e(+)e(-) -> eta' J/psi at center-of-mass energies between 4.189 and 4.600 GeV
(APS Physics, 2016)
The process $e^{+}e^{-}\to \eta^{\prime} J/\psi$ is observed for the first time with a statistical significance of $8.6\sigma$ at center-of-mass energy $\sqrt{s} = 4.226$ GeV and $7.3\sigma$ at $\sqrt{s} = 4.258$ GeV using ...
The role of social media strategies in competitive banking operations worldwide
(IGI-Global Publications, 2014)
Social media has rapidly taken its place among the important phenomena of today. It has an important role in institutionalization and companies' financial effectiveness in many fields. This chapter discusses concept, ...
|
Is there a simple way to simplify an expression in terms of vector operations?
For example, when I evaluate this;
v1 = {x1, y1, z1}; v2 = {x2, y2, z2}; v3 = {x3, y3, z3}; v4 = {x4, y4, z4};
Integrate[1, {x, y, z} \[Element] Tetrahedron[{v1, v2, v3, v4}]]
I get this horrible abomination;
1/6 Abs[x3 y2 z1 - x4 y2 z1 - x2 y3 z1 + x4 y3 z1 + x2 y4 z1 - x3 y4 z1 - x3 y1 z2 + x4 y1 z2 + x1 y3 z2 - x4 y3 z2 - x1 y4 z2 + x3 y4 z2 + x2 y1 z3 - x4 y1 z3 - x1 y2 z3 + x4 y2 z3 + x1 y4 z3 - x2 y4 z3 - x2 y1 z4 + x3 y1 z4 + x1 y2 z4 - x3 y2 z4 - x1 y3 z4 + x2 y3 z4]
However, this is simply the formula for the tetrahedron volume;
$$\frac16 \left| ( \vec{v}_2 - \vec{v}_1 ) \cdot ( ( \vec{v}_3 - \vec{v}_1 ) \times ( \vec{v}_4 - \vec{v}_1 ) ) \right|$$
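Not a Mathematica answer, but the identity in question is easy to confirm symbolically. A quick check (with sympy as a stand-in) that the triple product equals the $4\times4$ determinant whose expansion, inside the absolute value, is the long expression above:

```python
import sympy as sp

coords = sp.symbols('x1:5 y1:5 z1:5')   # x1..x4, y1..y4, z1..z4
v1, v2, v3, v4 = (sp.Matrix(coords[i::4]) for i in range(4))

# triple-product form of the (unsigned) tetrahedron volume, without the 1/6
triple = (v2 - v1).dot((v3 - v1).cross(v4 - v1))

# 4x4 determinant form, whose expansion is the long expression above
det = sp.Matrix([[1, *v1], [1, *v2], [1, *v3], [1, *v4]]).det()

print(sp.simplify(triple - det))  # 0
```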
Can Mathematica show the result I get in terms of vectors and vector operations?
There are other questions similar to this, but the answers are hacky manipulations and not quite what I'm looking for.
Isn't there a simple, non-hacky, built-in way? There must be! C'mon Mathematica...
|
Buried in the physics paper by Nekrasov and Okounkov, a strange identity is proven: $$ \prod_{n > 0} (1 - q^n)^{\mu^2-1} = \sum_{\mathbf{k}} q^{|\mathbf{k}|} \prod_{\square \in \mathbf{k}} \left( 1 - \frac{\mu^2}{h(\square)^2}\right) $$ where the left side is a $q$-series and the right side is a sum over all partitions $\mathbf{k}$, with $h(\square)$ the hook length of the cell $\square$. This was proven by physical considerations, evaluating the Yang-Mills partition function in two different ways.
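The identity can be machine-checked to low order. A sketch (my own, in exact rational arithmetic) verifying it at the specialization $\mu^2 = 9$ for all coefficients up to $q^8$:

```python
from fractions import Fraction

N, mu2 = 8, 9   # check coefficients of q^0..q^N at mu^2 = 9

# LHS: coefficients of prod_{n>=1} (1 - q^n)^(mu2 - 1) up to q^N
coef = [Fraction(0)] * (N + 1)
coef[0] = Fraction(1)
for n in range(1, N + 1):
    for _ in range(mu2 - 1):            # multiply by (1 - q^n), repeatedly
        for k in range(N, n - 1, -1):
            coef[k] -= coef[k - n]

def partitions_of(n, largest=None):
    """All partitions of n as weakly decreasing lists."""
    largest = n if largest is None else largest
    if n == 0:
        yield []
        return
    for p in range(min(n, largest), 0, -1):
        for rest in partitions_of(n - p, p):
            yield [p] + rest

def hook_factor(lam):
    """Product over the cells of lam of (1 - mu^2 / h(cell)^2)."""
    conj = [sum(1 for row in lam if row > j) for j in range(lam[0])] if lam else []
    out = Fraction(1)
    for i, row in enumerate(lam):
        for j in range(row):
            h = (row - j) + (conj[j] - i) - 1   # hook length: arm + leg + 1
            out *= 1 - Fraction(mu2, h * h)
    return out

for k in range(N + 1):
    assert sum(hook_factor(l) for l in partitions_of(k)) == coef[k]
print("identity verified up to q^%d" % N)
```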
Partitions of $n$ index the irreducible representations of the symmetric group $S_n$. We can define a measure on partitions, i.e. on $\mathrm{Irr}(S_n)$, by
$$ \mathbb{P}_{\mu, q} (\mathbf k) = \prod_{n \geq 1} (1-q^n)^{1-\mu^2}\, q^{|\mathbf{k}|} \prod_{\square \in \mathbf{k}} \left( 1 - \frac{\mu^2}{h(\square)^2}\right) $$
In fact, three years later Alexei Borodin explained that this formula interpolates between the uniform and Plancherel measures on partitions.
Can this be extended to a q,t-deformation of uniform measure on the permutation group? Maybe through something similar to Robinson-Schensted correspondence.
|
Is there a need for $L\subseteq \Sigma^*$ to be infinite to be undecidable?
I mean, what if we choose a language $L'$ to be a bounded, finite version of $L\subseteq \Sigma^*$, that is $|L'|\leq N$ ($N \in \mathbb{N}$), with $L' \subset L$. Is it possible for $L'$ to be an undecidable language?
I see that there is a problem of how to choose the $N$ words that belong to $L'$: we would have to establish a rule for choosing the first $N$ elements of $L'$, a kind of "finite" Kleene star operation. The aim is to find an undecidable language without needing an infinite set, but I can't see it.
EDIT Note: Although I chose an answer, many answers and all comments are important.
|