Prove or disprove the following with reasons:
1. If $X,Y,Z$ are iid standard exponential random variables, then $X + Y$ and $Y + Z$ are both gamma random variables.
2. If $X,Y$ are continuous random variables with joint density $f(x,y)=2$ for $x, y \ge 0$, $x+y \le 1$, then $X,Y$ are marginally both $\operatorname{U}[0,1]$.
3. Suppose $X,Y$ have a bivariate uniform distribution in the unit circle. Then $E(XY-XY^2)=0$.
4. If $X$ is a standard exponential random variable, then $P(X$ is an even integer$) = 0$.
What I have so far:
1. True, because all exponential random variables are gamma random variables with $\alpha=1$.
2. False, because $f_X(x)=\int_{0}^{1-x}2\,dy \ne 1$ (not sure about this one).
Is my answer and reasoning for 1 correct? What about 2 (could someone explain this a bit more since I'm kinda confused)? For 3 and 4 I can only guess an answer.
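For (2), a quick simulation (a sketch assuming numpy) supports the "False" answer: the integral above gives the marginal density $f_X(x)=2(1-x)$, so $P(X \le 0.5)$ should come out near $0.75$ rather than the $0.5$ a $\operatorname{U}[0,1]$ marginal would give:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rejection-sample uniform points from the triangle x, y >= 0, x + y <= 1
# (where the joint density is the constant 2).
pts = rng.random((400_000, 2))
x = pts[pts.sum(axis=1) <= 1.0][:, 0]

# Density f_X(x) = 2(1-x) predicts P(X <= 0.5) = 2*0.5 - 0.5**2 = 0.75;
# a U[0,1] marginal would predict 0.5.
p = (x <= 0.5).mean()
print(round(p, 2))  # ~0.75
```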
|
I'd like to draw a diagram with tikz-cd for a composition of natural transformations between functors that looked like the following
The best I could do so far is given by the following code:
\documentclass{article}
\usepackage{amsmath,amsfonts}
\usepackage{tikz-cd}
\begin{document}
\[
\begin{tikzcd}[row sep=huge]
  \mathcal{A}
    \arrow[r, bend left=65, "F"]
    \arrow[r, "G", swap]
    \arrow[r, bend right=65, "H", swap]
  & \mathcal{B}.
\end{tikzcd}
\]
\end{document}
I'm wondering how to put $G$ over the middle arrow and how to draw the vertical arrows $\alpha$ and $\beta$.
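One common approach (a sketch I have not compiled) uses tikz-cd's empty labels as named anchors on the arrows, and `description` to place $G$ on the middle arrow itself:

```latex
\documentclass{article}
\usepackage{amsmath,amsfonts}
\usepackage{tikz-cd}
\begin{document}
\[
\begin{tikzcd}[row sep=huge]
  \mathcal{A}
    \arrow[r, bend left=65, "F", ""{name=U, below}]
    \arrow[r, "G"{description}, ""{name=M1, above}, ""{name=M2, below}]
    \arrow[r, bend right=65, "H"{swap}, ""{name=D, above}]
  & \mathcal{B}
  % The empty labels act as coordinates for the 2-cells:
  \arrow[Rightarrow, from=U, to=M1, "\alpha"]
  \arrow[Rightarrow, from=M2, to=D, "\beta"]
\end{tikzcd}
\]
\end{document}
```

Here `Rightarrow` draws the double arrows for the natural transformations, and the anchor names (`U`, `M1`, `M2`, `D`) are arbitrary.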
|
There is the statement that the $\beta$-function vanishes for super-renormalizable theories. In $D=2$, a scalar field has mass dimension zero, so any polynomial interaction is super-renormalizable. Then shouldn't all of them have vanishing $\beta$-functions? But there are many theories (e.g., sine-Gordon) in $2D$ which have a nontrivial $\beta$-function. I must be missing something very basic here.
In a QFT, it may be possible to redefine parameters other than the coupling to absorb the infinities coming from higher-order corrections. In that case the coupling constant does not get renormalized, and hence the beta function vanishes. This is a real possibility in a super-renormalizable theory, since fewer diagrams are divergent and the condition is easier to satisfy.
As an example, for the sine-Gordon model the action is
$$\mathcal{S}(\theta)=\int d^2x \left[\frac{1}{2}(\partial_\mu\theta(x))^2-\frac{m^2}{k^2}\cos k\theta(x)\right]$$ Redefining $\tilde\theta=k\theta$ gives $$\mathcal{S}(\tilde\theta)=\frac{1}{t}\int d^2x \left[\frac{1}{2}(\partial_\mu\tilde\theta(x))^2-m^2\cos\tilde\theta(x)\right]$$ with $t=k^2$. A perturbative expansion in powers of $k$ only modifies the $\cos\tilde\theta$ term as a self-interaction, and the divergences that arise can be absorbed by a redefinition of $m$. In this way the coupling constant does not get renormalized, and hence the beta function vanishes.
This property is not true in general, as the vanishing of the beta function to all orders implies a finite theory (as in $\mathcal{N}=4$ SYM), a result that needs to be obtained from a non-perturbative analysis unless it is trivially true as in the former case. Most QFTs exist only perturbatively, and the existence of fixed points is not known non-perturbatively. A super-renormalizable theory does not generally have a vanishing beta function, as can be seen from the $\phi^3$ theory beta function, which in $d$ dimensions reads
$$\beta(g)=(d/2-3)g-\frac{3g^3}{256\pi^3}+O(g^5)$$ (Collins, "Renormalization", eq. 7.3.7)
$\phi^3$ theory is super-renormalizable for $d<6$, but its $\beta$ function is not zero. It does, however, show asymptotic freedom, which is a property of super-renormalizable theories (I am not aware of the proof, though).
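The classical (first) term in that $\beta$-function just tracks the engineering dimension of the coupling; a one-line dimensional analysis (in units where the action is dimensionless) gives

```latex
% phi^3 coupling dimension in d spacetime dimensions:
[\mathcal{L}] = d \;\Rightarrow\; [\phi] = \tfrac{d-2}{2},
\qquad
[g\phi^3] = d \;\Rightarrow\; [g] = d - 3\cdot\tfrac{d-2}{2} = 3 - \tfrac{d}{2}.
```

So $g$ has positive mass dimension exactly when $d<6$ (super-renormalizability), and the dimensionless coupling $\hat g = g\,\mu^{d/2-3}$ reproduces the classical $(d/2-3)g$ term quoted above.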
|
Here is a well-known interview/code golf question: a knight is placed on a chess board. The knight chooses from its 8 possible moves uniformly at random. When it steps off the board it doesn’t move anymore. What is the probability that the knight is still on the board after \( n \) steps?
We could calculate this directly but it’s more interesting to frame it as a Markov chain.
Calculation using the transition matrix
Model the chess board as the tuples \( \{ (r, c) \mid 0 \leq r, c \leq 7 \} \).
Here are the valid moves and a helper function to check if a move \( (r,c) \rightarrow (u,v) \) is valid and if a cell is on the usual \( 8 \times 8 \) chessboard:

moves = [(-2, 1), (-1, 2), (1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1)]

def is_move(r, c, u, v):
    for m in moves:
        if (u, v) == (r + m[0], c + m[1]):
            return True
    return False

def on_board(x):
    return 0 <= x[0] < 8 and 0 <= x[1] < 8
The valid states are all the on-board positions plus the immediate off-board positions:
states = [(r, c) for r in range(-2, 8+2) for c in range(-2, 8+2)]
Now we can set up the transition matrix.
def make_matrix(states):
    """
    Create the transition matrix for a knight on a chess board with all
    moves chosen uniformly at random. When the knight moves off-board,
    no more moves are made.
    """
    # Handy mapping from (row, col) -> index into 'states'
    to_idx = dict([(s, i) for (i, s) in enumerate(states)])
    P = np.array([[0.0 for _ in range(len(states))]
                  for _ in range(len(states))], dtype='float64')
    assert P.shape == (len(states), len(states))
    for (i, (r, c)) in enumerate(states):
        for (j, (u, v)) in enumerate(states):
            # On board: equal probability to each destination, even if it
            # goes off the board.
            if on_board((r, c)):
                if is_move(r, c, u, v):
                    P[i][j] = 1.0 / len(moves)
            # Off board: no more moves.
            else:
                if (r, c) == (u, v):  # terminal state
                    P[i][j] = 1.0
                else:
                    P[i][j] = 0.0
    return to_idx, P
We can visualise the transition graph using graphviz (full code here):
Oops! The corners aren’t connected to anything so we have 5 communicating classes (the 4 corners plus the rest). We never reach these nodes from any of the starting positions so we can get rid of them:
corners = [(-2, 9), (9, 9), (-2, -2), (9, -2)]
states = [(r, c) for r in range(-2, 8+2) for c in range(-2, 8+2)
          if (r, c) not in corners]
Here’s the new transition graph:
Intuitively, the knight's problem is symmetric, and this graph is symmetric, so it's likely that we've set things up correctly.
Let \( X_0 \), \( X_1 \), \( \ldots \), \( X_n \) be the positions of the knight. Then the probability of the knight moving from state \( i \) to \( j \) in \( n \) steps is
\[
P(X_n = j \mid X_0 = i) = (P^n)_{i,j} \]
So the probability of being on the board after \( n \) steps, starting from \(i\), will be
\[
\sum_{k \in \mathcal{B}} (P^n)_{i,k} \]
where \( \mathcal{B} \) is the set of on-board states. This is easy to calculate using Numpy:
start = (3, 3)
n = 5
idx = to_idx[start]
Pn = matrix_power(P, n)
pr = sum([Pn[idx][j] for (j, s) in enumerate(states) if on_board(s)])
For this case we get probability \( 0.35565185546875 \).
Here are a few more calculations:
start: (0, 0)  n: 0    Pr(on board): 1.0
start: (3, 3)  n: 1    Pr(on board): 1.0
start: (0, 0)  n: 1    Pr(on board): 0.25
start: (3, 3)  n: 4    Pr(on board): 0.48291015625
start: (3, 3)  n: 5    Pr(on board): 0.35565185546875
start: (3, 3)  n: 100  Pr(on board): 5.730392258771815e-13
It’s always good to do a quick Monte Carlo simulation to sanity check our results:
def do_n_steps(start, n):
    # Returns True if the knight is still on the board after n steps.
    current = start
    for _ in range(n):
        move = random.choice(moves)
        current = (current[0] + move[0], current[1] + move[1])
        if not on_board(current):
            return False
    return True

N_sims = 10000000
n = 5
nr_on_board = 0
for _ in range(N_sims):
    if do_n_steps((3, 3), n):
        nr_on_board += 1
print('pr on board from (3,3) after 5 steps:', nr_on_board / N_sims)
The estimate is fairly close to the value we got from taking the power of the transition matrix:
pr on board from (3,3) after 5 steps: 0.3554605

Absorbing states
An absorbing state of a Markov chain is a state that, once entered, cannot be left. In our problem the absorbing states are precisely the off-board states.
A natural question is: given a starting location, how many steps (on average) will it take the knight to step off the board?
With a bit of matrix algebra we can get this from the transition matrix \( \boldsymbol{P} \). Partition \( \boldsymbol{P} \) by the state type: let \( \boldsymbol{Q} \) be the transitions of transient states (here, these are the on-board states to other on-board states); let \( \boldsymbol{R} \) be transitions from transient states to absorbing states (on-board to off-board); and let \( \boldsymbol{I} \) be the identity matrix (transitions of the absorbing states). Then \( \boldsymbol{P} \) can be written in block-matrix form:
\[
\boldsymbol{P}= \left( \begin{array}{c|c} \boldsymbol{Q} & \boldsymbol{R} \\ \hline \boldsymbol{0} & \boldsymbol{I} \end{array} \right) \]
We can calculate powers of \( \boldsymbol{P} \):
\[
\boldsymbol{P}^2= \left( \begin{array}{c|c} \boldsymbol{Q} & \boldsymbol{R} \\ \hline \boldsymbol{0} & \boldsymbol{I} \end{array} \right) \left( \begin{array}{c|c} \boldsymbol{Q} & \boldsymbol{R} \\ \hline \boldsymbol{0} & \boldsymbol{I} \end{array} \right) = \left( \begin{array}{c|c} \boldsymbol{Q}^2 & (\boldsymbol{I} + \boldsymbol{Q})\boldsymbol{R} \\ \hline \boldsymbol{0} & \boldsymbol{I} \end{array} \right) \]
\[
\boldsymbol{P}^3= \left( \begin{array}{c|c} \boldsymbol{Q}^3 & (\boldsymbol{I} + \boldsymbol{Q} + \boldsymbol{Q}^2)\boldsymbol{R} \\ \hline \boldsymbol{0} & \boldsymbol{I} \end{array} \right) \]
In general:
\[
\boldsymbol{P}^n= \left( \begin{array}{c|c} \boldsymbol{Q}^n & (\boldsymbol{I} + \boldsymbol{Q} + \cdots + \boldsymbol{Q}^{n-1})\boldsymbol{R} \\ \hline \boldsymbol{0} & \boldsymbol{I} \end{array} \right) \]
We want to calculate \( \lim_{n \rightarrow \infty} \boldsymbol{P}^n \) since this will tell us the long-term probability of moving from one state to another. In particular, the top-right block will tell us the long-term probability of moving from a transient state to an absorbing state.
Here is a handy result from matrix algebra:
Lemma. Let \( \boldsymbol{A} \) be a square matrix with the property that \( \boldsymbol{A}^n \rightarrow \mathbf{0} \) as \( n \rightarrow \infty \). Then \[ \sum_{n=0}^\infty \boldsymbol{A}^n = (\boldsymbol{I} - \boldsymbol{A})^{-1}. \]
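As a quick numerical sanity check of the lemma (a throwaway sketch assuming numpy), compare a partial sum of the series against \( (\boldsymbol{I} - \boldsymbol{A})^{-1} \) for a small matrix whose powers vanish:

```python
import numpy as np

# A small matrix with spectral radius < 1, so A^n -> 0.
A = np.array([[0.5, 0.2],
              [0.1, 0.3]])

# Partial sum I + A + A^2 + ... + A^199 of the series.
S = np.zeros((2, 2))
An = np.eye(2)
for _ in range(200):
    S += An
    An = An @ A

ok = np.allclose(S, np.linalg.inv(np.eye(2) - A))
print(ok)  # True
```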
Applying this to the block form gives:
\[
\begin{align*} \lim_{n \rightarrow \infty} \boldsymbol{P}^n &= \lim_{n \rightarrow \infty} \left( \begin{array}{c|c} \boldsymbol{Q}^n & (\boldsymbol{I} + \boldsymbol{Q} + \cdots + \boldsymbol{Q}^{n-1})\boldsymbol{R} \\ \hline \boldsymbol{0} & \boldsymbol{I} \end{array} \right) \\ &= \left( \begin{array}{c|c} \lim_{n \rightarrow \infty} \boldsymbol{Q}^n & \lim_{n \rightarrow \infty} (\boldsymbol{I} + \boldsymbol{Q} + \cdots + \boldsymbol{Q}^{n-1})\boldsymbol{R} \\ \hline \boldsymbol{0} & \boldsymbol{I} \end{array} \right) \\ &= \left( \begin{array}{c|c} \mathbf{0} & (\boldsymbol{I} - \boldsymbol{Q})^{-1}\boldsymbol{R} \\ \hline \boldsymbol{0} & \boldsymbol{I} \end{array} \right) \end{align*} \]
where \( \lim_{n \rightarrow \infty} \boldsymbol{Q}^n = \mathbf{0} \) since all of the states indexed by \( \boldsymbol{Q} \) are transient.
The top-right block also contains the fundamental matrix, as defined in the following theorem:

Theorem. Consider an absorbing Markov chain with \( t \) transient states. Let \( \boldsymbol{F} \) be a \( t \times t \) matrix indexed by the transient states, where \( \boldsymbol{F}_{i,j} \) is the expected number of visits to \( j \) given that the chain starts in \( i \). Then
\[
\boldsymbol{F} = (\boldsymbol{I} - \boldsymbol{Q})^{-1}. \]
Taking the row sums of \( \boldsymbol{F} \) gives the expected number of steps \( a_i \) starting from state \( i \) until absorption (i.e. we count the number of visits to each transient state before eventual absorption):
\[
a_i = \sum_{k} \boldsymbol{F}_{i,k} \]
Back in our Python code, we can rearrange the states vector so that the transition matrix is appropriately partitioned. Taking the \( \boldsymbol{Q} \) matrix is very quick using Numpy’s slicing notation:
states = ([s for s in states if on_board(s)] +
          [s for s in states if not on_board(s)])
(to_idx, P) = make_matrix(states)

# k states
k = len(states)
# t transient states
t = len([s for s in states if on_board(s)])

Q = P[:t, :t]
assert Q.shape == (t, t)
assert Q.shape == (64, 64)

F = linalg.inv(np.eye(*Q.shape) - Q)

# example calculation for a_(3,3):
state = (3, 3)
print(F[to_idx[state], :].sum())
Again, compare to a Monte Carlo simulation to verify that the numbers are correct:
start: (0, 0)  Avg nr steps to absorb (MC): 1.9527606
start: (0, 0)  Avg nr steps (F matrix):     1.9525249995183136
start: (3, 3)  Avg nr steps to absorb (MC): 5.4187947
start: (3, 3)  Avg nr steps (F matrix):     5.417750460813215
So, on average, if we start in the corner \( (0,0) \) we will step off the board after about \( 1.95 \) steps; if we start in the centre at \( (3,3) \) we will step off the board after about \( 5.42 \) steps.
Further reading
The theoretical parts of this blog post follow the presentation in chapter 3 of Introduction to Stochastic Processes with R (Dobrow).
|
Second version, hopefully correct.
I claim that solving the feasibility problem $\exists? x: Ax \le b$ reduces in strongly polynomial time to finding a linear separator. Then it's easy to reduce linear programming to the feasibility problem.
Let us first reduce the strict feasibility problem $\exists? x: Ax < b$ to finding a linear separator. Towards this, notice that you can reduce to the special case $b=0$: just transform the constraints to $$Ax + tb < 0, \\t < 0 $$If $Ax < b$ is feasible for some $x$, then $(x, -1)$ is feasible for the problem above. Conversely, if $(x, t)$ is feasible for the problem above, then $\tilde{x} = -x/t$ is feasible for $A\tilde{x} < b$. This reduction just adds a row and a column to $A$, and a new variable $t$.
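The homogenization step above is mechanical; here is a sketch in code (assuming numpy; the function names are mine):

```python
import numpy as np

def homogenize(A, b):
    """Turn 'exists x: Ax < b' into 'exists y: A_h y < 0' by adding a
    variable t: the constraints become Ax + t*b < 0 and t < 0."""
    m, n = A.shape
    top = np.hstack([A, b.reshape(-1, 1)])   # rows encoding Ax + t*b < 0
    bottom = np.zeros((1, n + 1))
    bottom[0, n] = 1.0                       # row encoding t < 0
    return np.vstack([top, bottom])

def recover(y):
    """Map a solution (x, t) of the homogenized system, with t < 0,
    back to x~ = -x/t satisfying A x~ < b."""
    x, t = y[:-1], y[-1]
    return -x / t

# Example: Ax < b with A = I, b = (1, 1) is satisfied by x = (0, 0);
# correspondingly y = (0, 0, -1) satisfies the homogenized system.
A = np.eye(2)
b = np.ones(2)
A_h = homogenize(A, b)
y = np.array([0.0, 0.0, -1.0])
print(np.all(A_h @ y < 0), np.all(A @ recover(y) < b))  # True True
```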
Assume then that you are given the problem $Ax < 0$. Create a data set which has one point labeled red for every row of $A$, and is equal to that row. It also contains the origin $0$, labeled blue. Any hyperplane $H$ separating the red-labeled points from the blue-labeled origin gives a solution to $Ax < 0$: just take the normal of $H$ in the direction of the origin to be $x$.
The final step is to reduce the feasibility problem $Ax \le b$ to the strict feasibility problem $Ax < b$. We argue that given an oracle for the strict feasibility problem, we can solve the feasibility problem. Let $S$ be an inclusion-maximal set of constraints such that the system $\forall i \in S: (Ax)_i < b_i$ is feasible: we can find $S$ in at most $m$ calls to the oracle (assuming $A$ is $m\times n$). If $S$ contains all constraints, we are done. Otherwise, it is not hard to see that, if $Ax \le b$ is feasible, then for any $i \not \in S$ the constraint $(Ax)_i \le b_i$ must be satisfied with equality for all feasible $x$. Use one (or more) of these constraints to eliminate one (or more) of the variables, and recurse on the remaining variables.
This equivalence between finding a linear separator and linear programming is surely well-known, but I do not know what the right reference is. For example, I have seen something like the final reduction above attributed to Chvátal.
|
Here is a complete proof. (It is inspired by a similar proof that an ideal in a ring that is maximal with the property of not being finitely generated, must be prime.)
Assume that there exist subgroups that are not finitely generated. They are ordered by inclusion, and by Zorn's lemma there exists a maximal subgroup that is not finitely generated; call it $X$. (Zorn's lemma applies because the union of a chain of non-finitely-generated subgroups is again not finitely generated: a finite generating set of the union would already lie in one member of the chain.)
Now consider any $x\in G\setminus X$. Since $\langle X, x\rangle$ ($= X + \mathbb Z\cdot x$) is strictly larger than $X$, it is finitely generated. Consider a set of generators $$ v_i + \lambda_i x, \quad i = 1,\dots, n, $$ with $v_i \in X$ and $\lambda_i \in \mathbb Z$, for $\langle X, x\rangle$.
Next, consider the subgroup
$$ M = \{ \mu \in\mathbb Z \mid \mu x \in X\}. $$
Since subgroups of $\mathbb Z$ are finitely generated, $M$ is finitely generated, say by $\mu_1,\dots,\mu_m$. (If $M = 0$, then this is generated by the empty set. In fact $m=1$ will always suffice because subgroups of $\mathbb Z$ are cyclic.)
We will show that $X = \langle v_1,\dots,v_n, \mu_1 x , \dots, \mu_m x\rangle$. One inclusion is pretty obvious, since $v_i\in X$ and $\mu_i x\in X$ because of the way we defined them. To see the other inclusion, consider any $g\in X$ and note that there exist $\sigma_i\in \mathbb Z$ such that $$ g = \sum_i \sigma_i (v_i + \lambda_i x). $$However this implies that$$ \Big(\sum_i \sigma_i \lambda_i\Big) x = g - \sum_i \sigma_i v_i \in X,$$thus from the definition of $M$, we have $\sum_i \sigma_i \lambda_i\in M$ and therefore there exist $\tau_i$ such that $$\sum_i \sigma_i \lambda_i = \sum_i \tau_i \mu_i.$$But this implies that$$g = \sum_i \sigma_i v_i + \sum_i \tau_i \mu_i x,$$thus $g$ is indeed in the subgroup generated by the $v_i$ and the $\mu_i x$, which completes the proof.
(Note that this is a proof by contradiction: we assume that $X$ is not finitely generated and construct a finite set of generators. Nonetheless, the proof yields a practical procedure for finding generators: Start from an arbitrary subgroup $V$, add generators of $V$ until you find a group for which you know a finite generating set (this will exist because $G$ is finitely generated), and repeatedly use the above argument to get rid of the generators one by one.)
|
I was reading the following notes on tensor products: http://www.math.uconn.edu/~kconrad/blurbs/linmultialg/tensorprod.pdf
At some point (p. 39) there is the following example
In the last paragraph, he says that using exterior powers it can be proved that if $I\oplus I\simeq S^2$ as $S$-module, then $I\otimes_S I\simeq S$ as $S$-modules.
I do not know a lot about exterior powers (just the definition), but I would like to know what is the property being used here and what is the isomorphism he finds out.
Can you give me some hints?
As a matter of fact, I think it really proves that $I\otimes_S I$ is isomorphic to $S\otimes_S S\simeq S$, but I cannot construct a surjective map from $S\otimes_S S$ to $I\otimes_S I$.
|
Overview of the Model
As you begin this chapter, listening in lecture and working in DL, it must seem, at least at first, that you are being introduced to a lot of new concepts. The representation of the motion of an object and the forces acting on an object are necessary ideas to understand before we can fully understand this new conserved quantity called momentum. One goal of this (and the next) chapter is to understand the effects of forces on motion and we begin to do this in this chapter through a discussion of momentum and transfers of momentum.
We introduce two concepts which are completely new: momentum and impulse. However, we are taking great pains to help you see how these concepts play roles very similar to energy and work. So, yes, you have to memorize that momentum, \(p\), is the product of mass and velocity \((p = mv)\). And you have to be careful not to forget that momentum has vector properties, just as velocity does. But impulse is not an isolated construct you file away in your brain somewhere. Rather, you should really strive to understand impulse in analogy to work. A transfer of energy as work changes the energy of a physical system. Similarly, a transfer of momentum as an impulse changes the momentum of a system. Energy is conserved. Momentum is also conserved. Of course, there are differences between momentum and energy and between impulse and work. As you work in DL, as you study this text, as you work the FNTs, and as you interact with other students and with instructors as you mentally struggle with this material, try to understand these new concepts in relation to what you already know, rather than as simply some more isolated facts that you memorize.

Review and Extension of the “Before and After” Interaction Approach
In Chapters 1 and 2 we focused on changes in the energy of a physical system. Energy has meaning for one particle or \(10^{23}\) particles, for objects as small as the nucleus of an atom and as large as a galaxy. It really is a universal concept that applies to any physical system. It turns out that there are two other concepts that are like energy in that they are universally applicable, are transferred among systems as a result of interactions, and the amount transferred gives very useful information that does not depend on the details of the interaction. These are the concepts of momentum and angular momentum.

Integrating “the Agent” of Interactions
We have called force the “agent of interactions”. Interactions occur between objects as they exert forces on each other. Objects experience changes in energy when other objects exert forces on them and do work on them. We recall that the amount of energy change caused by a force is the integral of the force over a distance. This integral is called the \(work\) done on a system. The only component that contributes to the work, however, is the component of the force that is parallel to the motion. We usually indicate this component with the symbol \(F_{||}\):
\[W = \int _{x_i} ^{x_f} F_{||}(x) dx = E_f - E_i = \Delta E \tag{7.1.1}\]
Or, if \(F\) is constant, or we define an average force \(F_{avg}\), we can write
\[W = F_{avg,||} \Delta x = E_f - E_i = \Delta E \tag{7.1.2}\]
In other words, the parallel component of force integrated over the path of the motion is the work, and this work equals the amount of energy transferred to the system due to the application of the force by an object outside the system.
A similar integral of the force is equal to the change in momentum of the system. But instead of integrating over distance, we now integrate over time. This integral is called the impulse of the force \(F\). We represent the impulse with the symbol \(J\).
\[J = \int _{t_i} ^{t_f} F(t) dt = p_f - p_i = \Delta p \tag{7.1.3}\]
Or, if \(F\) is constant, or we define an average force, \(F_{avg}\), then
\[J = F_{avg} \Delta t = p_f - p_i = \Delta p \tag{7.1.4}\]
Impulse is a vector quantity and causes a change in a vector property of a system: specifically, a change in the linear momentum, \(\Delta p\). The change in momentum is, of course, independent of which Galilean reference frame we choose to measure the momenta in.
Note on units: Force has SI units of newtons, of course. Impulse must therefore have units of newton seconds, \(N \times s\). Momentum, the product of mass and velocity, must have SI units of kilogram meter per second, kg m/s. Since these two quantities are equated, these units must be equivalent, as you can show using the relation \(N = kg~ m/s^2\).
Linear Momentum
The linear momentum of an object is simply the product of the object’s mass and velocity: \(p = mv\). Linear momentum incorporates the notion of inertia, expressed as mass, as well as the speed and direction of motion. In some ways it is similar to kinetic energy, \(\frac{1}{2}mv^2\), but an obvious difference is that momentum has a direction; it is described as a vector. (Often, the word “momentum” is used without the modifier “linear” when talking about linear momentum. Later, however, the modifier “angular” is always used when talking about angular momentum.)
Example 7.1.1: Calculating Momentum: A Football Player and a Football
(a) Calculate the momentum of a 110-kg football player running at 8.00 m/s. (b) Compare the player’s momentum with the momentum of a hard-thrown 0.410-kg football that has a speed of 25.0 m/s.
Strategy
No information is given regarding direction, so we can calculate only the magnitude of the momentum, \(p\). (As usual, a symbol in italics is a magnitude, whereas one that is italicized, boldfaced, and has an arrow is a vector.) In both parts of this example, the magnitude of momentum can be calculated directly from the definition of momentum, which becomes
\[p = mv\]
when only magnitudes are considered.
Solution for (a)
To determine the momentum of the player, substitute the known values for the player’s mass and speed into the equation.
\[p_{player} = (110 \space kg)(8.00 \space m/s) = 880 \space kg \cdot m/s\]
Solution for (b)
To determine the momentum of the ball, substitute the known values for the ball’s mass and speed into the equation.
\[p_{ball} = (0.410 \space kg)(25.0 \space m/s) = 10.3 \space kg \cdot m/s\]
The ratio of the player’s momentum to that of the ball is
\[\dfrac{p_{player}}{p_{ball}} = \dfrac{880}{10.25} \approx 85.9\]
Discussion
Although the ball has greater velocity, the player has a much greater mass. Thus the momentum of the player is much greater than the momentum of the football, as you might guess. As a result, the player’s motion is only slightly affected if he catches the ball. We shall quantify what happens in such collisions in terms of momentum in later sections.
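The arithmetic in the example above can be checked with a throwaway script (not part of the text):

```python
# Momenta from Example 7.1.1: p = m * v.
p_player = 110 * 8.00     # kg*m/s
p_ball = 0.410 * 25.0     # kg*m/s

# Ratio of the player's momentum to the ball's (using unrounded values).
ratio = p_player / p_ball
print(p_player, p_ball, round(ratio, 1))  # 880.0 10.25 85.9
```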
Temporary restriction to non-rotating objects and center of mass
Until we consider rotation of objects in the next model/approach, Angular Momentum Conservation, we will consider phenomena in which extended objects act only like point particles. A useful construct that will become much more meaningful when we consider rotation is the center of mass. Right now we can simply consider that any extended object acts like a single particle whose mass is equal to the mass of the object, located at the special point, the center of mass.

The Construct of Net Force and Net Impulse
Be sure to review the discussion of net force in Chapter 6. The central point is that the effect of all forces acting on an object can be represented by a single vector construct called the unbalanced force or the net force, \(\sum F\). When we use the concept of impulse, it is sometimes useful to consider the impulse, \(J\), due to a particular force; we would write this as:
\[J_A = F_A \Delta t \tag{7.1.5}\]
\((\text{The subscript “A” tells us that the impulse is due to the particular force exerted by object A.}) \)
However, the power of the construct of impulse comes into play when we consider the impulse of the net force; we write this as
\[J_{net} = J = \sum F \Delta t = \Delta p \tag{7.1.6}\]
\((\text{J without a subscript usually means the impulse due to the net force}.) \)
In words, \(\text{the net impulse is equal to the change in the linear momentum of the system}\). We explore this relationship further below.
Momentum of a System of Particles
The momentum of a single object is simply the product of its mass and velocity. Suppose we define a physical system that contains several particles which move with different velocities. The total linear momentum of this physical system is the vector sum of the individual linear momenta.
\[p_{system} = p_1+p_2+p_3+...=\sum p_i \tag{7.1.7}\]
If the particles in our system interact with each other, they exert forces on each other, and there will be an impulse associated with each of these forces. Newton’s 3rd law tells us, however, that the impulse that particle \(a\), for example, exerts on particle \(b\) is equal in magnitude and opposite in direction to the impulse exerted by particle \(b\) on particle \(a\). And using the relation that the impulse is equal to a change in momentum of a particle, we see that the changes in momenta of particles \(a\) and \(b\) due to their interaction will be equal in magnitude, but opposite in direction.

Generalizing the above argument to interactions between any of the particles within the system, we see that if the momentum of one particle changes a certain amount, another particle’s momentum changes the same amount in the opposite direction. Thus, when we sum over all the momenta of the system, the total momentum of the system does not change in response to interactions among the particles within the system.
However, if the particles of our system interact with particles (objects) outside the system, then the total momentum of the system might change. Figure 7.1.1 shows some of the forces that might be acting on the particles of the system. Some, labeled “int” (for internal), don’t change the total momentum of the system. The forces labeled “ext” (for external) do change the momentum of the system.

Of the various impulses shown in Figure 7.1.1, only the impulse caused by \(F_{ext ~on ~c}\) causes a change in momentum of the system of particles.

Statement of Conservation of Momentum
So, for a system of particles (objects) it is useful to write the impulse/momentum relation in a way that emphasizes the external interaction:
\[Net ~Impulse_{ext} = J_{ext} = \int \sum F_{ext}(t) dt = p_f - p_i = \Delta p_{system} \tag{7.1.8}\]
A system acted on by external forces undergoes a change in total linear momentum equal to the net impulse (total impulse) of the external forces. We can rephrase the relationship stated above as a conservation principle for the total momentum of a system of particles:
Note:
Conservation of Linear Momentum
If the net external impulse acting on a system is zero, then there is no change in the total linear momentum of that system; otherwise, the change in momentum is equal to the net external impulse.
This statement is an expression of Conservation of Linear Momentum: the total linear momentum of a system of objects remains constant as long as there is no net impulse due to forces that arise from interactions with objects outside the system. It does not matter that the objects of the system interact with each other and exert impulses on each other. These internal impulses cause changes in the individual momenta of the objects, but not the sum or total momentum of the system of objects.
We can rephrase this discussion in terms of open and closed systems:
Closed system: A closed system does not interact with its environment, so there is no net external impulse. The total momentum of a closed system is conserved. That is, the total momentum of the system remains constant.

Open system: An open system interacts with its environment, so that it can exchange both energy and momentum with the environment. For an open system, the change in the total momentum is equal to the net impulse added from the environment, that is, from objects outside the system.
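As a toy illustration of the closed-system statement (not an example from this text), a one-dimensional elastic collision between two particles leaves the total momentum unchanged; the post-collision velocities below are the standard elastic-collision results:

```python
def elastic_collision_1d(m1, v1, m2, v2):
    # Standard 1-D elastic collision outcome (conserves both momentum
    # and kinetic energy).
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

m1, v1 = 2.0, 3.0
m2, v2 = 1.0, -1.0
v1p, v2p = elastic_collision_1d(m1, v1, m2, v2)

# Total momentum of the (closed) two-particle system, before and after.
p_before = m1 * v1 + m2 * v2
p_after = m1 * v1p + m2 * v2p
print(p_before, p_after)  # both 5.0 (up to rounding)
```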
|
Let's review the design of ChaCha to see how the nonce, the counter, and the number of rounds all fit into it.
How do we encrypt a sequence of
messages $m_1, m_2, \dots, m_\ell$? One way is to pick a sequence of message-length pads $p_1, p_2, \dots, p_\ell$ independently and uniformly at random, and encrypt the $n^{\mathit{th}}$ message $m_n$ with the $n^{\mathit{th}}$ pad $p_n$ as the ciphertext $$c_n = m_n \oplus p_n,$$ where $\oplus$ is xor. If the adversary can guess a pad, you lose; if you ever repeat a pad for two different messages, you lose. Otherwise, this model, called the one-time pad, has a very nice security theorem, but choosing and agreeing on independent uniform random message-length pads $p_n$ is hard.
Can we make do with a
short uniform key $k$, say 256 bits long? Approximately, yes: if we had a deterministic function $F_k$ from message sequence numbers $n$ to message-length pads $F_k(n)$ which are hard to distinguish from independent uniform random when $k$ is uniformly distributed, then we could pick $$p_n = F_k(n)$$ and we only need to choose and agree on a 256-bit secret key $k$. We call $F_k$ a pseudorandom function family. This makes our job easier without making it much easier for any adversary even if they could spend humanity's entire energy budget on breaking it.
How do we design our short-input, long-output PRF $F_k(n)$? If we had a short-input, short-output PRF $f_k(n, c)$ which computed a fixed-size block given a message sequence number and an extra input $c$, we could simply generate a lot of blocks for each message, using a block counter for the extra input $c$, and concatenate them: $$F_k(n) = f_k(n, 0) \mathbin\| f_k(n, 1) \mathbin\| f_k(n, 2) \mathbin\| \cdots.$$ How do we design our short-input, short-output function $f_k(n, c)$? If $\pi$ were a uniform random permutation, then the function $S(x) = \pi(x) + x$ would be hard to distinguish from a uniform random function, and almost certainly noninvertible. We could define $$f_k(n, c) = S(k \mathbin\| n \mathbin\| c \mathbin\| \sigma).$$ Of course, we don't have a uniform random permutation, but if $\delta$ is a permutation without much structure, and if we define $\pi$ by iterating $\delta$ many times, $$\pi(x) = \delta(\delta(\cdots(\delta(x))\cdots)) = \delta^r(x),$$ then $\pi$ will have even less exploitable structure than $\delta$: with any luck, so little structure that it will destroy any patterns a cryptanalyst could look for within humanity's energy budget.
Recapitulating, the design of the ChaCha$(2r)$ is as follows:
Start with a permutation $\delta$ of 512-bit blocks that doesn't have much structure. The permutation $\delta$ is called the ChaCha doubleround. (Why a ‘doubleround’? ChaCha alternates between ‘row rounds’ and ‘column rounds’; $\delta$ does one row round and one column round.) Define the permutation $$\pi(x) = \delta^r(x),$$ the $r$-fold iteration of $\delta$. The number $2r$ is the number of rounds. (For instance, in ChaCha20, the default, we iterate $\delta$ ten times; in ChaCha8, the smallest unbroken number of rounds, we iterate $\delta$ four times.) Define the function $$S(x) = \pi(x) + x.$$ This function $S$ is called the ChaCha core. Define the short-input, short-output pseudorandom function family $$f_k(n, c) = S(k \mathbin\| n \mathbin\| c \mathbin\| \sigma),$$ where $\sigma$ is a fixed constant with moderate Hamming weight. (When unambiguous, $f_k$ is sometimes just called ChaCha, or the ChaCha PRF.) Define the short-input, long-output pseudorandom function family $$F_k(n) = f_k(n, 0) \mathbin\| f_k(n, 1) \mathbin\| f_k(n, 2) \mathbin\| \cdots.$$ Here we use the $c$ parameter of $f_k$ as a block counter. (When unambiguous, $F_k$ is sometimes just called ChaCha, or the ChaCha stream cipher.) For the $n^{\mathit{th}}$ message, compute the pad $$p_n = F_k(n).$$ Here we use the $n$ parameter of $F_k$ as a nonce. Encrypt the $n^{\mathit{th}}$ message $m_n$ by computing the ciphertext $$c_n = m_n \oplus p_n.$$
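The recapitulation above can be mirrored in code. Here is a minimal sketch of the doubleround $\delta$, the iterated permutation $\pi = \delta^r$, and the core $S(x) = \pi(x) + x$, following the RFC 8439 word layout (4 constant words, 8 key words, 1 counter word, 3 nonce words). This is illustrative only—it checks no official test vectors, and real applications should use a vetted library:

```python
def rotl(x, n):
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def quarterround(s, a, b, c, d):
    """The ChaCha quarterround, mutating four words of the state in place."""
    s[a] = (s[a] + s[b]) & 0xFFFFFFFF; s[d] = rotl(s[d] ^ s[a], 16)
    s[c] = (s[c] + s[d]) & 0xFFFFFFFF; s[b] = rotl(s[b] ^ s[c], 12)
    s[a] = (s[a] + s[b]) & 0xFFFFFFFF; s[d] = rotl(s[d] ^ s[a], 8)
    s[c] = (s[c] + s[d]) & 0xFFFFFFFF; s[b] = rotl(s[b] ^ s[c], 7)

def doubleround(s):
    """The permutation delta: one round on columns, one on diagonals."""
    quarterround(s, 0, 4, 8, 12); quarterround(s, 1, 5, 9, 13)
    quarterround(s, 2, 6, 10, 14); quarterround(s, 3, 7, 11, 15)
    quarterround(s, 0, 5, 10, 15); quarterround(s, 1, 6, 11, 12)
    quarterround(s, 2, 7, 8, 13); quarterround(s, 3, 4, 9, 14)

def chacha_block(key_words, counter, nonce_words, rounds=20):
    """S(x) = pi(x) + x on the state (sigma || key || counter || nonce)."""
    sigma = [0x61707865, 0x3320646E, 0x79622D32, 0x6B206574]  # "expand 32-byte k"
    state = sigma + list(key_words) + [counter] + list(nonce_words)
    working = state[:]
    for _ in range(rounds // 2):  # r doublerounds give 2r rounds
        doubleround(working)
    # the final wordwise addition mod 2^32 is what makes the core noninvertible
    return [(w + x) & 0xFFFFFFFF for w, x in zip(working, state)]
```

Note how incrementing the counter word produces a fresh, unrelated-looking 512-bit block from the same key and nonce, which is exactly how the stream cipher walks along a message.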
When you are using ChaCha, as in the NaCl
crypto_stream_chacha_xor(output, msg, len, n, k), your obligations are to choose $k$ uniformly at random and never reuse the nonce $n$ with the same key $k$. The counter is an implementation detail that does not concern you in most protocols.
Note 1: You almost certainly shouldn't use ChaCha directly either; you should use an authenticated cipher like ChaCha/Poly1305 or NaCl
crypto_secretbox_xsalsa20poly1305. Unauthenticated data is pure evil—don't touch it!
Note 2: That ChaCha's counter enables random access to blocks within a message also shouldn't concern you; your messages should be short enough that a forgery won't waste much memory before you are guaranteed to realize it's a forgery and drop it on the floor. Use the nonce for random access to a sequence of authenticated messages instead so you're not tempted to reach inside a box of pure evil.
To address the specific questions you asked:
Does the counter at position of 13th byte actually increment by one? Can I extract the number of iterations from the state of ChaCha20?
The counter increments for each block within a single message, as illustrated above.
The number of iterations (or ‘rounds’) is not encoded into the state. The number of iterations for ChaCha20 is always 20. If you have ciphertexts under ChaCha12 and ChaCha20 with an unknown key, you can't tell whether they were made with ChaCha12 or ChaCha20 either.
In particular, the ChaCha20 core, $\operatorname{ChaCha20}_{\mathit{key}}(\mathit{nonce}, \mathit{counter})$ permutes the 512-bit state $(\mathit{key}, \mathit{nonce}, \mathit{counter}, \mathit{constant})$ (encoded in some bit order) with 20 rounds to produce a
single 512-bit block of pad at a time; the ChaCha20 cipher then moves on to using $\operatorname{ChaCha20}_{\mathit{key}}(\mathit{nonce}, \mathit{counter} + 1)$ for the next block, and then $\mathit{counter} + 2$, and so on.
From the specification, I'd say that the state gets randomized after as much as one iteration.
There's an illustration of the diffusion of a change in a single byte of the Salsa20 core here: https://cr.yp.to/snuffle/diffusion.html (Salsa20 is closely related to ChaCha; they have almost the same security.)
Does this mean then that nonce can also be made public (just like IV for block ciphers) without compromising the security? (of course provided that key stays confidential)
Yes. Not only can it be public, but it can be predictable in advance—unlike a CBC IV.
The
security contract for ChaCha20 obliges you never to repeat a nonce with the same key, and obliges you to limit the messages to at most $2^\ell\cdot 512$ bits long, where $\ell$ is the number of bits reserved for the counter—in NaCl, $\ell = 64$ so messages can be of essentially arbitrary length, while in RFC 7539 as used in, e.g., TLS, $\ell = 32$ so messages are limited to 256 GB which is more than enough for sensible applications which break messages into bite-sized pieces to be authenticated anyway—you are using this as a part of the authenticated cipher ChaCha/Poly1305 or similar, right?
Neither the nonce nor the counter need be secret in the security contract; normally they are prescribed by the protocol and algorithm,
e.g. to be a message sequence number starting at 0, and a block sequence number starting at 0, respectively.
It is still unclear to me what the function of counter is. Why not just use larger 128bit nonce, instead of a 32bit counter + 96bit nonce?
If you used a 128-bit nonce with no counter, each nonce would yield a single 512-bit block of pad, so your messages would be limited to 64 bytes long.
|
I am asked to verify that the sequence $\left(\frac{1}{6n^2+1}\right)$ converges to $0$:
$$\lim \frac{1}{6n^2+1}=0.$$
Here is my work:
$$\left|\frac{1}{6n^2+1}-0\right|<\epsilon$$
$\frac{1}{6n^2+1}<\epsilon$, since $\frac{1}{6n^2+1}$ is positive
$$\frac{1}{\epsilon}<6n^2+1$$
$$\frac{1}{\epsilon}-1<6n^2$$
$$\frac{1}{6\epsilon}-\frac{1}{6}<n^2$$
At this point, I am stuck. I'm not sure if I take the square root of both sides if I then have to deal with $\pm\sqrt{\frac{1}{6\epsilon}-\frac{1}{6}}$. That doesn't seem right.
The book provides the answer:
$$\sqrt{\frac{1}{6\epsilon}}<n$$
But I don't understand (1) what happened to the $\frac{1}{6}$, and (2) why there's not a +/- in front of the square root.
Any help is greatly appreciated.
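(As a numeric sanity check, not a proof: since $\sqrt{1/(6\epsilon)} \geq \sqrt{1/(6\epsilon) - 1/6}$, the book's larger threshold is still sufficient, and $n$ ranges over positive integers, so only the positive square root is relevant. A quick check of the bound:)

```python
import math

def threshold(eps):
    # the book's bound: any integer n > sqrt(1/(6*eps)) works; this is a
    # slightly larger (hence still sufficient) threshold than
    # sqrt(1/(6*eps) - 1/6), and n is a positive integer, so only the
    # positive root matters
    return math.sqrt(1 / (6 * eps))

for eps in [0.5, 0.1, 1e-3, 1e-6]:
    n = math.floor(threshold(eps)) + 1  # first integer beyond the threshold
    assert 1 / (6 * n**2 + 1) < eps     # the sequence term is within eps of 0
```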
|
I have to agree the exposition in this paper was quite sparse. I think answering these two questions requires a bit of review of previous work, but you can skip to the bottom if you just want the answers.
Recall the energy function of binary RBMs.
$$E(v,h) = v^TWh + v^Ta + h^Tb $$
In the binomial RBM work, they duplicate each visible unit a bunch of times, and correspondingly the bias $a_i$ of each visible unit and the rows $W_i$ corresponding to "connections" to the hidden units are also copied.
Now, let $W'$, $a'$, and $v'$ denote the appropriately duplicated values of $W$, $a$, and $v$.
Critically, the energy function does not change, other than swapping out some symbols:
$$E(v',h) = v'^TW'h + v'^Ta' + h^Tb $$
From an implementation and computational standpoint, it's quite wasteful to do extra computation for all those units. You might ask if we can save any effort by representing $K$ duplicates of a binary unit using a single variable which just stores how many of those $K$ duplicates are set to 1 (and how many are set to 0). I'll use $v^*$ to denote the combined duplicates: in other words, $v^*$ is the
sum of each group of $K$ duplicates in $v'$
In order to do so, we need to make sure we can still sample hidden units given visible units and vice versa, which is a prerequisite for running contrastive divergence.
Sampling hidden from visible is easy because $P(h=1|v') = \sigma(b+W'^Tv') = \sigma(b+W^Tv^*)$
Sampling visible from hidden is slightly trickier. We have: $P(v'=1|h) = \sigma(a'+W'h)$. Now note that $v^*|h \sim \text{Binomial}(K, \sigma(a+Wh))$. By grouping our duplicated units together, we have gone from sampling from a Bernoulli distribution to sampling from a binomial one.
Note that the gradient of the log probability:
$$\frac{\partial \log p(v)}{\partial W} = E_\text{data}[vh^T] - E_\text{model}[vh^T]$$
remains unchanged with our binomial units.
Now suppose after duplicating each unit $K$ times, we added different fixed biases to the duplicated units: the first duplicate bias is offset by $-0.5$, the second by $-1.5$, the third $-2.5$, and so on. In fact, why stop at $K$ duplicates -- why not make infinite duplicates of each unit. This is not as unreasonable as it might first sound, because each successive duplicate has a greater negative bias offset, so it has a vanishingly small chance of ever being switched on.
Whereas in the binomial case it was nice that we were able to group duplicate units together, here is it actually crucial, since computing with infinite binary units is not exactly possible. Whereas in the binomial case the sum of the duplicated units followed a binomial distribution, here it is possible to prove that the sum of the duplicated units approximately follows a distribution
$$\text{relu}(\mathcal{N}(x,\sigma(x)))$$
Take care to note that $\sigma$ here denotes the sigmoid function and not the standard deviation. So instead of sampling from a binomial distribution, we sample from this relu'd normal distribution, and everything else proceeds as previously described.
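As an illustration, the sampling step can be sketched in NumPy; the pre-activations $x = b + W^Tv$ below are hypothetical placeholders, not taken from any particular model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_nrelu(x, rng):
    """Approximate the sum of infinitely many tied binary units with stepped
    bias offsets: relu(N(x, sigma(x))), where sigma is the logistic sigmoid
    used here as a *variance* (not a standard deviation)."""
    noise = rng.standard_normal(x.shape) * np.sqrt(sigmoid(x))
    return np.maximum(0.0, x + noise)

# hypothetical hidden-unit pre-activations x = b + W^T v for three units
rng = np.random.default_rng(0)
x = np.array([-5.0, 0.0, 5.0])
h = sample_nrelu(x, rng)
```

For strongly positive pre-activations the samples hover near $x$ itself (the relu rarely clips), while for strongly negative pre-activations the unit is almost always exactly zero—matching the intuition that the higher-offset duplicates almost never switch on.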
Now that we have an understanding of what exactly it means to have relu activations in an RBM, we can return to the questions at hand.
How should the energy function change?
We have seen above that it does not change at all!
How should max(0,x) be interpreted as a probability?
It shouldn't, and the relu RBM model never uses relu as a probability value of any sort. relu is only used to approximate the sampling of infinite binary units.
|
I haven't been taught tensor product in class but they have taught us addition of spin. I looked up online in this link->http://homepage.univie.ac.at/reinhold.bertlmann/pdfs/T2_Skript_Ch_7.pdf#page=10 (pg 148, pg 10 in the pdf) and found an explanation. I think I understand most of it except this step:
$$ \vec{S}^{(A)} \otimes \vec{S}^{(B)} = \frac{\hbar^2}{4}(\sigma_x \otimes \sigma_x + \sigma_y \otimes\sigma_y + \sigma_z \otimes \sigma_z). $$
I know that:
$$ S^{2} = \frac{\hbar^2}{4}((\sigma_x)^2 + (\sigma_y)^2 + (\sigma_z)^2), $$
but I don't see the connection.
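(A numeric sanity check of the step in question, in units where $\hbar = 1$: the notation $\vec{S}^{(A)} \otimes \vec{S}^{(B)}$ abbreviates the sum $\sum_i S_i^{(A)} \otimes S_i^{(B)}$, and since each component is $S_i = \frac{\hbar}{2}\sigma_i$, the prefactor $\hbar^2/4$ appears. The expected singlet/triplet eigenvalues fall out:)

```python
import numpy as np

hbar = 1.0  # work in units with hbar = 1
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# S^(A) . S^(B) means sum_i S_i^(A) tensor S_i^(B), with S_i = (hbar/2) sigma_i,
# which is where the hbar^2/4 prefactor comes from
SAdotSB = (hbar**2 / 4) * sum(np.kron(s, s) for s in (sx, sy, sz))

# each particle's spin squared: (hbar^2/4)(sx^2 + sy^2 + sz^2) = (3/4) hbar^2 I
SA2 = (hbar**2 / 4) * sum(s @ s for s in (sx, sy, sz))

# total spin squared: (S_A + S_B)^2 = S_A^2 tensor I + I tensor S_B^2 + 2 S_A.S_B
S2 = np.kron(SA2, I2) + np.kron(I2, SA2) + 2 * SAdotSB
```

Diagonalizing, $\vec{S}^{(A)}\cdot\vec{S}^{(B)}$ has eigenvalue $\hbar^2/4$ on the triplet and $-3\hbar^2/4$ on the singlet, so $S^2$ has eigenvalues $2\hbar^2$ and $0$, i.e. $s(s+1)\hbar^2$ for $s=1,0$.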
|
Double bifurcation diagrams and four positive solutions of nonlinear boundary value problems via time maps
1. Department of Mathematics and Physics, North China Electric Power University, Beijing, 102206, China
2. School of Applied Science, Beijing Information Science & Technology University, Beijing, 100192, China

This paper studies positive solutions of the boundary value problem
$\left\{ \begin{array}{l} - u''(x) = \lambda f(u),\;\;\;\;0 < x < 1,\\u(0) = 0,\\\frac{{u(1)}}{{u(1) + 1}}u'(1) + \left[ {1 - \frac{{u(1)}}{{u(1) + 1}}} \right]u(1) = 0,\end{array} \right.$
where $\lambda>0$ and $f(u)>0$ for $u>0$. Model nonlinearities include $f(u) = e^{u}$, $f(u) = a^{u}\ (a>0)$, $f(u) = u^{p}\ (p>0)$, $f(u) = e^{u}-1$, $f(u) = a^{u}-1\ (a>1)$, and $f(u) = (1+u)^{p}\ (p>0)$.

Keywords: Double bifurcation diagrams, existence and multiplicity of positive solutions, nonlinear boundary conditions, time map.
Mathematics Subject Classification: 34B18, 74G35.
Citation: Xuemei Zhang, Meiqiang Feng. Double bifurcation diagrams and four positive solutions of nonlinear boundary value problems via time maps. Communications on Pure & Applied Analysis, 2018, 17 (5): 2149-2171. doi: 10.3934/cpaa.2018103
|
Use partial fraction decompostion. This requires factoring the quadratic first.
$$I(s)=\frac{6}{L}\frac{1}{s^2 + \frac{R}{L}s + \frac{1}{LC}}$$
The roots of that quadratic are$$s_{1,2} = -\frac{R}{2L}\pm\sqrt{\left(\frac{R}{2L}\right)^2-\frac{1}{LC}}$$
If $s_{1,2}$ are distinct, your function can be represented as$$I(s)=\frac{6}{L}\frac{1}{(s-s_1)(s-s_2)} = \frac{A}{s-s_1} + \frac{B}{s-s_2}$$
$A$ can be found by multiplying the equation through by $s-s_1$ and then taking $\lim_{s\rightarrow s_1}$; $B$ can be found in the analogous way with $s_2$. The form of each term is in the Laplace transform tables; each term will correspond to an exponential.
If $s_{1,2}$ happen to be identical (i.e. the discriminant is exactly zero), you'll use the representation$$I(s)=\frac{6}{L}\frac{1}{(s-s_1)^2} = \frac{A}{s-s_1} + \frac{B}{(s-s_1)^2}$$
and again, find constants $A$ and $B$ that satisfy the above. Each of that kind of term will also be found in the transform tables.
The top case is more likely, where the roots are distinct. In that case you'll either get two real roots or a complex conjugate pair. If there are two real roots, you'll get two decaying exponentials after the inverse Laplace transform. If you have a complex conjugate pair, you'll get an exponentially decaying sinusoid, where the decay will look like $e^{-\frac{R}{2L}t}$ and the sinusoidal part will have real frequency $\sqrt{\frac{1}{LC}-\left(\frac{R}{2L}\right)^2}$
In the second case, you should get something like $(C_1t+C_2)e^{-\frac{R}{2L} t}$ for some constants $C_1,C_2$.
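A numeric sketch of the distinct-root case, with assumed component values (not from the original problem). For these values the discriminant is negative, so the roots are a complex-conjugate pair and the result is a decaying sinusoid:

```python
import numpy as np

# assumed component values for illustration
R, L, C = 2.0, 1.0, 0.1

# roots of s^2 + (R/L) s + 1/(L C); here the discriminant is negative,
# so s1, s2 are complex conjugates: -1 +/- 3j
disc = complex((R / (2 * L)) ** 2 - 1 / (L * C))
s1 = -R / (2 * L) + np.sqrt(disc)
s2 = -R / (2 * L) - np.sqrt(disc)

# residues: A = lim_{s->s1} (s - s1) I(s) = (6/L)/(s1 - s2), and B = -A
A = (6 / L) / (s1 - s2)
B = -A

def I_direct(s):
    return (6 / L) / (s**2 + (R / L) * s + 1 / (L * C))

def I_partial(s):
    return A / (s - s1) + B / (s - s2)

def i_t(t):
    # inverse transform of the partial fractions: a real decaying sinusoid
    return (A * np.exp(s1 * t) + B * np.exp(s2 * t)).real
```

With these values the two complex exponentials combine into $i(t) = 2e^{-t}\sin 3t$, matching the predicted decay rate $R/2L = 1$ and frequency $\sqrt{1/LC - (R/2L)^2} = 3$.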
|
I will attempt to atone for my previous error by showing something opposite -- that $\tilde{\Theta}\left(\frac{1}{\epsilon^2}\right)$ samples are sufficient (the lower bound of $1/\epsilon^2$ is almost tight)! See what you think....
The key intuition starts from two observations. First, in order for distributions to have an $L_2$ distance of $\epsilon$, there must be points with high probability ($\Omega(\epsilon^2)$). For example, if we had $1/\epsilon^3$ points of probability $\epsilon^3$, we'd have $\|D_1 - D_2\|_2 \leq \sqrt{\frac{1}{\epsilon^3} (\epsilon^3)^2} = \epsilon^{3/2} < \epsilon$.
Second, consider uniform distributions with an $L_2$ distance of $\epsilon$. If we had $O(1)$ points of probability $O(1)$, then they would each differ by $O(\epsilon)$ and $1/\epsilon^2$ samples would suffice. On the other hand, if we had $O(1/\epsilon^2)$ points, they would each need to differ by $O(\epsilon^2)$ and again $O(1/\epsilon^2)$ samples (a constant number per point) suffices. So we might hope that, among the high-probability points mentioned earlier, there is always some point differing "enough" that $O(1/\epsilon^2)$ draws distinguishes it.
Algorithm. Given $\epsilon$ and a confidence parameter $M$, let $X = M \log(1/\epsilon^2)$. Draw $\frac{X}{\epsilon^2}$ samples from each distribution. Let $a_i,b_i$ be the respective higher,lower number of samples for point $i$. If there is any point $i \in [n]$ for which $a_i \geq \frac{X}{8}$ and $a_i-b_i \geq \sqrt{a_i} \frac{\sqrt{X}}{4}$, declare the distributions different. Otherwise, declare them the same.
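The decision rule can be sketched as a function of the observed per-point counts (illustrative only; the counts would come from the $\frac{X}{\epsilon^2}$ draws described above):

```python
import math

def declare_different(counts1, counts2, eps, M):
    """Decision rule from the algorithm above: counts1[i], counts2[i] are the
    observed numbers of samples of point i from each distribution after
    X/eps^2 draws from each."""
    X = M * math.log(1 / eps**2)
    for c1, c2 in zip(counts1, counts2):
        a, b = max(c1, c2), min(c1, c2)  # higher, lower count for this point
        if a >= X / 8 and a - b >= math.sqrt(a) * math.sqrt(X) / 4:
            return True   # declare the distributions different
    return False          # declare them the same
```

The rule only fires on points whose counts are both large (at least $X/8$) and far apart relative to their sampling noise ($\sqrt{a_i}\cdot\sqrt{X}/4$), which is exactly the "high-probability, well-separated point" the claims below guarantee must exist.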
The correctness and confidence bounds ($1-e^{-\Omega(M)}$) depend on the following lemma which says that all of the deviation in $L_2$ distance comes from points whose probabilities differ by $\Omega(\epsilon^2)$.
Claim. Suppose $\|D_1 - D_2\|_2 \geq \epsilon$. Let $\delta_i = |D_1(i) - D_2(i)|$. Let $S_k = \{i : \delta_i > \frac{\epsilon^2}{k}\}$. Then
$$\sum_{i \in S_k} \delta_i^2 \geq \epsilon^2\left(1-\frac{2}{k}\right).$$
Proof. We have $$ \sum_{i \in S_k} \delta_i^2 ~ + ~ \sum_{i \not\in S_k} \delta_i^2 \geq \epsilon^2. $$Let us bound the second sum; we wish to maximize $\sum_{i \not\in S_k} \delta_i^2$ subject to $\sum_{i \not\in S_k} \delta_i \leq 2$. Since the function $x \mapsto x^2$ is strictly convex and increasing, we can increase the objective by taking any $\delta_i \geq \delta_j$ and increasing $\delta_i$ by $\gamma$ while decreasing $\delta_j$ by $\gamma$. Thus, the objective will be maximized with as many terms as possible at their maximum values, and the rest at $0$. The maximum value of each term is $\frac{\epsilon^2}{k}$, and there are at most $\frac{2k}{\epsilon^2}$ terms of this value (since they sum to at most $2$). So $$ \sum_{i \not\in S_k} \delta_i^2 \leq \frac{2k}{\epsilon^2}\left(\frac{\epsilon^2}{k}\right)^2 = \frac{2\epsilon^2}{k} . ~~~~ \square $$
Claim. Let $p_i = \max\{D_1(i),D_2(i)\}$. If $\|D_1 - D_2\|_2 \geq \epsilon$, there exists at least one point $i \in [n]$ with $p_i > \frac{\epsilon^2}{4}$ and $\delta_i \geq \frac{\epsilon \sqrt{p_i}}{2}$.
Proof. First, all points in $S_k$ have $p_i \geq \delta_i > \frac{\epsilon^2}{k}$ by definition (and $S_k$ cannot be empty for $k > 2$ by the previous claim).
Second, because $\sum_i p_i \leq 2$, we have $$ \sum_{i \in S_k} \delta_i^2 \geq \epsilon^2 \left(\frac{1}{2} - \frac{1}{k}\right) \sum_{i \in S_k} p_i, $$or, rearranging, $$ \sum_{i \in S_k} \left( \delta_i^2 - p_i \epsilon^2 \left(\frac{1}{2} - \frac{1}{k}\right)\right) \geq 0 , $$so the inequality $$ \delta_i^2 \geq p_i \epsilon^2 \left(\frac{1}{2} - \frac{1}{k}\right) $$holds for at least one point in $S_k$. Now pick $k=4$. $\square$
Claim (false positives). If $D_1 = D_2$, our algorithm declares them different with probability at most $e^{-\Omega(M)}$.
Sketch. Consider two cases: $p_i < \epsilon^2/16$ and $p_i \geq \epsilon^2/16$. In the first case, the number of samples of $i$ will not exceed $X/8$ from either distribution: The mean number of samples is $< X/16$ and a tail bound says that with probability $e^{-\Omega(X/p_i)} = \epsilon^2 e^{-\Omega(M/p_i)}$, $i$'s samples do not exceed their mean by an additive $X/16$; if we are careful to keep the value $p_i$ in the tail bound, we can union bound over them no matter how many such points there are (intuitively, the bound decreases exponentially in the number of possible points).
In the case $p_i \geq \epsilon^2/16$, we can use a Chernoff bound: It says that, when we take $m$ samples and a point is drawn with probability $p$, the probability of differing from its mean $pm$ by $c \sqrt{pm}$ is at most $e^{-\Omega((c\sqrt{pm})^2/pm)} = e^{-\Omega(c^2)}$. Here, let $c = \frac{\sqrt{X}}{16}$, so the probability is bounded by $e^{-\Omega(X)} = \epsilon^2 e^{-\Omega(M)}$.
So with probability $1-\epsilon^2e^{-\Omega(M)}$, (for both distributions) the number of samples of $i$ is within $\sqrt{p_i\frac{X}{\epsilon^2}}\frac{\sqrt{X}}{16}$ of its mean $p_i\frac{X}{\epsilon^2}$. Thus, our test will not catch these points (they are very close to each other), and we can union bound over all $16/\epsilon^2$ of them. $\square$
Claim (false negatives). If $\|D_1 - D_2\|_2 \geq \epsilon$, our algorithm declares them identical with probability at most $\epsilon^2 e^{-\Omega(M)}$.
Sketch. There is some point $i$ with $p_i > \epsilon^2/4$ and $\delta_i \geq \epsilon \sqrt{p_i}/2$. The same Chernoff bound as in the previous claim says that with probability $1-\epsilon^2 e^{-\Omega(M)}$, the number of samples of $i$ differs from its mean $p_i m$ by at most $\sqrt{p_i m} \frac{\sqrt{X}}{16}$. That is for (WLOG) distribution $1$ which has $p_i = D_1(i) = D_2(i) + \delta_i$; but there is an even lower probability of the number of samples of $i$ from distribution $2$ differing from its mean by this additive amount (as the mean and variance are lower).
So with high probability the number of samples of $i$ from each distribution is within $\sqrt{\frac{p_i X}{\epsilon^2}} \frac{\sqrt{X}}{16}$ of its mean; but their probabilities differ by $\delta_i$, so their means differ by $$ \frac{X}{\epsilon^2}\delta_i \geq \frac{X \sqrt{p_i}}{2\epsilon} = \sqrt{\frac{p_i X}{\epsilon^2}} \frac{\sqrt{X}}{2} . $$
So with high probability, for point $i$, the number of samples differs by at least $\sqrt{\# samples(1)} \frac{\sqrt{X}}{4}$. $\square$
To complete the sketches, we would need to more rigorously show that, for $M$ big enough, the number of samples of $i$ is close enough to its mean that, when the algorithm uses $\sqrt{\# samples}$ rather than $\sqrt{mean}$, it doesn't change anything (which should be straightforward by leaving some wiggle room in the constants).
|
You shouldn't dismiss Karo's graph. Drawing a graph to get a feel for the equation (when you don't know how to proceed) is most helpful. And this isn't nonsense, in fact the graph is the key for the solution, as it makes it clear that we can consider only $x$ for which $-2 \leq x \leq 2$.
This can be conveniently rewritten as $-1\leq\frac{x}{2}\leq1$.
This inequality reminds us immediately of the one that $\sin a$ and $\cos a$ also satisfy, and indeed we can assume WLOG $\frac{x}{2}=\sin a$, for some angle $a$ in radians of course. This makes $x=2 \sin a$, so let's substitute this in the equation.
$$\begin{align}8\sin^3 a-6\sin a&=\sqrt{2(\sin a+1)}\\-2(-4\sin^3a+3\sin a)&=\sqrt{2(\sin a+1)}\\-2\sin3a&=\sqrt{2(\sin a+1)}\end{align}$$
It helped that we could factor it into $\sin 3a$, but then again we can't get rid of the square root in a nice way. This is because $\sin^2a$ is given in terms of $\cos 2a$, and not $\sin 2a$, so we can't turn the radicand into a nice square of a sine. On the other hand, $\cos^2 a$
can be expressed in terms of a cosine (indeed, we have the trigonometric identity $\cos^2a=\frac{1+\cos2a}{2}$). So we are lead to believe that the substituition $x=2\cos a$ is much more fortunate. It's worth a shot:
$$\begin{align}8\cos^3 a-6\cos a&=\sqrt{2(\cos a+1)}\\2(4\cos^3a-3\cos a)&=\sqrt{4\frac{(\cos a+1)}{2}}\\2\cos3a&=\sqrt{2^2\cos^2\frac{a}{2}}\\\cos3a &=\cos\frac{a}{2}\end{align}$$
The equation has been successfully trivialized. We should note that imposing $0\leq a\leq\pi$, $\cos a$ will still assume all of the values it possibly could, so we impose this restriction on $a$.
Therefore, we can have $3a=\frac{a}{2}\Rightarrow a=0$, which gives us the solution $x=2\cos0=2$, or we can have the other, non-trivial solutions: $$3a=\frac{a}{2}+2\pi\Rightarrow a=\frac{4\pi}{5}$$ which gives us $x=2\cos\frac{4\pi}{5}=-2\cos\frac{\pi}{5}$. If $0\leq a\leq\pi$, then $0\leq 3a \leq 3\pi\lt4\pi$, so we don't need to look at the case $3a=\frac{a}{2}+4\pi$ or higher, and we just need to consider the last case:$$3a=2\pi-\frac{a}{2}\Rightarrow a=\frac{4\pi}{7}$$Which gives us the last solution, $x=2\cos\frac{4\pi}{7}=-2\cos\frac{3\pi}{7}$.
Note: This solution was inspired by the one the Brazilian mathematician Nicolau C. Saldanha presented in a lesson. Here's the link: http://y2u.be/jFFdOSsVGgg (someone who's fluent in Spanish shouldn't have trouble understanding the explanations). It is also shown that with some simple manipulations on the regular pentagon, one can arrive at $\cos\frac{\pi}{5}=\frac{1+\sqrt{5}}{4}$, so the second solution can be written as $x=-\frac{1+\sqrt{5}}{2}$ (which justifies the quadratic factor $x^2-x-1$ of the 6th degree polynomial you get when you square both sides, which obviously contains some extraneous roots).
I'm not sure there's an easy way to turn $\cos\frac{3\pi}{7}$ into one algebraic term (if there is one at all), so we just leave it that way. The solution set is $$x \in \left. \left \{ 2,-\frac{1+\sqrt{5}}{2},-2\cos\frac{3\pi}{7} \right. \right \}$$
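A quick numeric check of the three roots. Undoing the substitution $x = 2\cos a$ in the trigonometric form above, the original equation reads $x^3 - 3x = \sqrt{x+2}$:

```python
import math

def residual(x):
    # the original equation recovered from the substitution: x^3 - 3x = sqrt(x + 2)
    return x**3 - 3 * x - math.sqrt(x + 2)

roots = [2.0, -(1 + math.sqrt(5)) / 2, -2 * math.cos(3 * math.pi / 7)]
residuals = [residual(x) for x in roots]
```

All three residuals vanish to floating-point precision, and in each case the left-hand side $x^3-3x$ is nonnegative, so none of these is an extraneous root introduced by squaring.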
|
Darcy Weisbach Equation statement
It is an empirical equation in fluid mechanics named after Henry Darcy and Julius Weisbach. The Darcy Weisbach Equation relates the loss of pressure or head loss due to friction along the given length of pipe to the average velocity of the fluid flow for an incompressible fluid.
Darcy Weisbach Equation
\(H_{F}=\frac{4fLv^{2}}{2gd}\)
Where,
\(H_{F}\) is the head loss or pressure loss.
\(f\) is the coefficient of friction or friction factor.
\(v\) is the velocity of the incompressible fluid.
\(L\) is the length of the pipe.
\(d\) is the diameter.
\(g\) is the acceleration due to gravity (\(g = 9.8\,m/s^{2}\)).

Derivation of Darcy Weisbach Equation
Figure (1): Uniform horizontal pipe with a steady flow of fluid.
Step 1: Terms and Assumptions
Consider a uniform horizontal pipe with fixed diameter d and area A, which allows a steady flow of an incompressible fluid.
For simplicity, consider two sections, S1 and S2, of the pipe separated by a distance L.
At every point of S1, the pressure is \(P_{1}\) and the velocity is \(v_{1}\).
At every point of S2, the pressure is \(P_{2}\) and the velocity is \(v_{2}\).
Consider the fluid flow as shown in figure (1). Thus, the pressure at S1 is greater than the pressure at S2, i.e., \(P_{1}>P_{2}\). This pressure difference makes the fluid flow along the pipe.
When the fluid flows there will be a loss of energy due to friction, so we apply Bernoulli’s principle with a head-loss term.
Bernoulli’s principle
It states that a decrease in the pressure or potential energy of a fluid increases its flow speed; in other words, “For an incompressible fluid, the sum of its potential energy, pressure, and velocity remains constant.”
Step 2: Applying Bernoulli’s principle
Step 2: Applying Bernoulli’s principle
On applying Bernoulli’s equation at sections S1 and S2 we get-\(P_{1}+\frac{1}{2}\rho v_{1}^{2}+\rho gh_{1}=P_{2}+\frac{1}{2}\rho v_{2}^{2}+\rho gh_{2}+\rho g H_{F}\) —–(1)
Where,
\(H_{F}\) is the head loss due to friction.
On dividing above equation (1) by 𝜌g we get-\(\frac{P_{1}}{\rho g}+\frac{v_{1}^{2}}{2g}+h_{1}=\frac{P_{2}}{\rho g}+\frac{v_{2}^{2}}{2g}+h_{2}+H_{F}\) ——-(2)
For a horizontal pipe (that is, the inlet and the outlet of the pipe are at the same level from the reference plane),
\(h_{1}=h_{2}\).
Here, the diameter is uniform; for a uniform diameter,
\(v_{1}=v_{2}\).
On substituting them, equation (2) becomes-\(\frac{P_{1}}{\rho g}+\frac{v_{1}^{2}}{2g}+h_{1}=\frac{P_{2}}{\rho g} +\frac{v_{1}^{2}} {2g}+h_{1}+H_{F}\) \(\Rightarrow \frac{P_{1}}{\rho g}=\frac{P_{2}}{\rho g}+H_{F}\) —–(3)
Thus, on rearranging equation (3) for the head loss we get-\(H_{F}= \frac{P_{1}}{\rho g}-\frac{P_{2}}{\rho g}\) \(\Rightarrow H_{F}= \frac{P_{1}-P_{2}}{\rho g}\) \(\Rightarrow H_{F}\rho g= P_{1}-P_{2}\) ——-(4)
Step 3: Find frictional resistance
Due to the combined effect of the wet surface and surface roughness, resistance is offered to the flow of fluid by friction. As a result, the speed is reduced. Froude was the first to observe the dependency of frictional resistance on surface roughness.
The frictional resistance is well expressed through Froude’s formula.
Let f’ be the frictional resistance per unit area(wet) per unit velocity.
Frictional resistance F = f’ × wet area × (velocity)²
= f’ × 2𝜋rL × v²
= f’ × 𝜋dL × v²
F = f’ × PL × v² ——-(5)

Step 4: Net force acting on the fluid at sections S1 and S2
The net force is the sum of Force due to pressure at S1, S2, and Fluid friction.
At S1, the pressure is given by \(P_{1}=\frac{F_{1}}{A}\), which implies the force \(F_{1}=P_{1}A\).
For convenience, take the direction of this force due to pressure as positive.
At S2, the pressure is given by \(P_{2}=\frac{F_{2}}{A}\), which implies the force \(F_{2}=P_{2}A\); here the direction of the force due to pressure is negative.
Fluid frictional force F: it is a resistive force, so its direction is negative.
Thus, resolving all the forces along the horizontal direction we get-
\(P_{1}A - P_{2}A - F = 0\)
\(P_{1}A - P_{2}A = F\)
\((P_{1} - P_{2})A = F\) ——-(6)
Substitute the values for F and \((P_{1} - P_{2})\) from equations (5) and (4) respectively.
On rearranging the terms we get-\(H_{F}=\frac{f’PLv^{2}}{A \rho g}\) \(H_{F}=\frac{f’}{\rho g}\times Lv^{2}\times \frac{P}{A}\) —–(7)
But, \(\frac{P}{A}=\frac{Wetted\;perimeter}{Area}=\frac{\pi d}{\frac{\pi }{4}d^{2}}=\frac{4}{d}\)
Substituting \(\frac{P}{A}=\frac{4}{d}\) in equation(7),\(\left ( 7 \right )\Rightarrow H_{F}=\frac{f’}{\rho g}\times \frac{4Lv^{2}}{d}\) ——-(8)
Now substitute \(\frac{f’}{\rho}=\frac{f}{2}\)
Where,
f’ is the frictional resistance per unit wet area per unit velocity,
ρ is the density of the fluid, and
f is the coefficient of friction.
Because the fluid is incompressible, applying an external force produces no change in density.\(\left ( 8 \right )\Rightarrow H_{F}=\frac{f}{2g}\times \frac{4Lv^{2}}{d}\)
Thus, On rearranging, we finally arrive at Darcy Weisbach Equation
\(H_{F}=\frac{4fLv^{2}}{2gd}\) Application of Darcy Weisbach Equation:
Is used to calculate the loss of head due to friction in the pipe.
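The final formula can be wrapped in a small helper; the sketch below uses made-up illustrative values for the friction coefficient and pipe dimensions:

```python
def darcy_head_loss(f, L, v, d, g=9.81):
    """Head loss due to friction: H_F = 4 f L v^2 / (2 g d)."""
    return 4 * f * L * v**2 / (2 * g * d)

# Illustrative values: f = 0.005, 100 m of 0.2 m diameter pipe, v = 2 m/s.
print(round(darcy_head_loss(0.005, 100.0, 2.0, 0.2), 3))  # 2.039 (metres)
```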
|
In model theory, the satisfiability relation $ \vDash$ between a model $M= (D,f)$ and a set of formulas tells us when a formula $\varphi$ is true or not in the model ("interpretation") $M$.
This relation is usually defined recursively in the following way. I won't be 100% precise giving the definition, but it's something like this:
$M \vDash \neg \varphi$ if and only if $M \not \vDash \varphi;$
$M \vDash \varphi \wedge \psi $ if and only if $M \vDash \varphi$ and $M \vDash \psi$;
$M \vDash \forall x(\varphi)$ if and only if for all $x$ in $D$ the formula $\varphi$ holds;
$M \vDash \exists x(\varphi)$ if and only if there exists an $x$ in $D$ such that the formula $\varphi$ holds;
...and so on. At this point, there's something that really bothers me.
Model theory is developed in set theory (models are sets), which uses the language of first order logic with the connectives $\wedge, \vee, \neg, \Rightarrow$ and the quantifiers $\exists , \forall$. Well, we know that set theory is so powerful that it allows us to talk about first order logic, and this is why we can describe the semantics of first order logic within set theory (the model theoretic semantics). That said, why is natural language used in the recursive definition of $\vDash?$ Shouldn't we use the usual connectives $\wedge, \vee, \neg, \Rightarrow$ and the quantifiers $\exists , \forall$ in the definition?
I know that the symbols would be graphically the same (since we are describing first order logic with first order logic) and it would look circular, but if necessary, we could change a bit the connectives since they are symbols of a language and define $\vDash$ for this symbols in a way that they still model our intuitive idea of the logical connectives (and anyway $\vDash$ is a relation between sets, and there's nothing circular about that, but that's not the point). I mean something like this:
$M \vDash \neg_L \varphi \Leftrightarrow M \not \vDash \varphi;$
$M \vDash \varphi \wedge_L \psi \Leftrightarrow \Big (M \vDash \varphi \wedge M \vDash \psi \Big )$;
$M \vDash \forall_L x(\varphi) \Leftrightarrow \forall x \in D:\varphi;$
$M \vDash \exists_L x(\varphi) \Leftrightarrow \exists x \in D: \varphi$
...and so on. That's a lot more rigorous, and in this way it's easier to see the "interpretation" of the considered language $L$.
Why isn't the definition stated in this way, using the formal connectives? Is there something that I'm missing or are we informally using our natural language just because it's easier to understand for the reader and it's implicitly used the formal language as I did?
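For what it's worth, the recursive clauses can be implemented literally. Below is a sketch in Python (the tuple encoding of formulas, the function name `sat`, and the example model are all my own illustrative choices); each branch mirrors one clause of the definition:

```python
def sat(D, P, phi, env=None):
    """Recursive satisfaction: D is the domain, P interprets the predicate P."""
    env = env or {}
    op = phi[0]
    if op == "P":                       # atomic: P(x)
        return env[phi[1]] in P
    if op == "not":                     # M |= ~phi  iff  M |/= phi
        return not sat(D, P, phi[1], env)
    if op == "and":                     # M |= phi & psi iff both hold
        return sat(D, P, phi[1], env) and sat(D, P, phi[2], env)
    if op == "forall":                  # M |= Ax phi iff phi holds for all x in D
        return all(sat(D, P, phi[2], {**env, phi[1]: d}) for d in D)
    if op == "exists":                  # M |= Ex phi iff phi holds for some x in D
        return any(sat(D, P, phi[2], {**env, phi[1]: d}) for d in D)
    raise ValueError(op)

# A model with domain {1, 2, 3} where P holds only of 2:
print(sat({1, 2, 3}, {2}, ("exists", "x", ("P", "x"))))  # True
print(sat({1, 2, 3}, {2}, ("forall", "x", ("P", "x"))))  # False
```

Note that the Python `not`, `and`, `all`, `any` on the right-hand sides play exactly the role of the metatheoretic connectives and quantifiers in the definition.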
|
Background: A real period is defined to be the value of an integral of the form
$$\int_D R(x_1,\cdots,x_n)dV$$
where $R$ is a rational function with rational coefficients, and $D\subseteq\Bbb R^n$ is a set of points satisfying a system of inequalities involving rational functions also with rational coefficients (combined via the Boolean operators "and" and "or"), and the integral converges absolutely. A period is then a complex number whose real and imaginary parts are real periods. The set of periods is called $\cal P$. In my other question here, I've asked why we can simplify this definition to only volume integrals over domains defined by polynomial inequalities, and why we can generalize the definition by replacing both instances of the phrase "rational coefficients" with "(real) algebraic coefficients."
So, question: how do we express an arbitrary (wlog real) algebraic number as a period integral?
I know using the argument principle from complex analysis, if $w$ is a root of $f(z)$ then
$$\frac{1}{2\pi i}\oint_{\partial D}z\frac{f'(z)}{f(z)}dz=w$$
where $D\subset\Bbb C$ is a disc of small enough radius around $w$ to not include any other zeros. (This is assuming that $w$ has multiplicity one; for instance take $f$ to be its minimal polynomial, and the boundary of the disc $\partial D$ is oriented clockwise.) But it's not obvious to me how to convert this into a real (multidimensional) integral. Perhaps I could invoke a continuum of different $D$ whose $\partial D$s form their own disk (like an onion), and then convert via polar coordinates? There's also the issue of $2\pi i$ to contend with as well. (Conjecturally, $\frac{1}{2\pi i}$ is not a period.)
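As a numerical sanity check of the contour formula (not a reduction to a real period integral), one can discretize the circle; the radius 0.5, the polynomial $f(z)=z^2-2$, and the standard counterclockwise orientation below are illustrative choices:

```python
import math
import cmath

# Discretize (1/2*pi*i) * contour integral of z f'(z)/f(z) on a circle of
# radius 0.5 around w = sqrt(2), a root of f(z) = z^2 - 2.  The radius is
# an assumption chosen so the other root -sqrt(2) lies outside the contour.

def f(z):
    return z * z - 2

def fprime(z):
    return 2 * z

w = math.sqrt(2)
r = 0.5
N = 20_000
total = 0j
for k in range(N):
    t = 2 * math.pi * k / N
    z = w + r * cmath.exp(1j * t)
    dz = 1j * r * cmath.exp(1j * t) * (2 * math.pi / N)  # z'(t) dt
    total += z * fprime(z) / f(z) * dz

root = total / (2j * math.pi)
print(root.real)  # ~ 1.4142135...
```

The trapezoid rule on a periodic integrand converges very fast here, so the recovered value matches $\sqrt 2$ to high precision.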
(I can figure out why $\cal P$ is closed under addition: we can always write
$$\int_A R(\vec{x})dV+\int_B S(\vec{x})dV=\int_{\large A\sqcup(B+\vec{v})} \hskip -0.5in R(\vec{x})+S(\vec{x}-\vec{v})\,dV $$
for some rational-coordinate displacement vector $\vec{v}$ of sufficiently large magnitude, in which case $A\sqcup(B+\vec{v})$ can be described in the obvious way. Furthermore we can write
$$\int_A \hskip -0.05in R(x_1,\cdots,x_n)dV \hskip -0.05in \times \hskip -0.05in \int_BS(x_1,\cdots,x_k)dV \hskip -0.05in = \hskip -0.05in \int_{A\times B} \hskip -0.2in R(x_1,\cdots,x_n)S(x_{n+1},\cdots,x_{n+k})dV $$
describing $A\times B$ in the obvious way as well, so I know how $\cal P$ is closed under multiplication.)
|
Spurious points in WMAP3 likelihood code
Apologies if this has already been discussed, but I've seen some spurious model likelihoods from the WMAP code; specifically, beyond about 2σ there are some parameter combinations that give a likelihood in excess of 200 ln units better than the maximum.
e.g. WMAP3 (TT,TE,EE) data only:
lhood \Omega_b h^2 \Omega_c h^2 \theta \tau n_s ln A_s
-5373.072 0.016048 0.177486 1.056867 0.501029 1.078956 3.022956
-5410.972 0.0161 0.156035 1.016962 0.462042 1.050303 2.878407
the best fit ln like is about -5626.
Hi,
I have checked your models; they give large, negative values of TT beam and point source corrections, which almost cancel the likelihoods from TT spectrum. This may be due to the failure of the gaussian approximations in the TT beam and point source corrections. See the topic "WMAP likelihood approximations," where I gave a model with -ln L = 5302 (k_{pivot}=0.002).
Loison Hoi
2 Aug 2006
|
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt {s_{NN}}$ = 5.02TeV with ALICE at the LHC
(Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($\nu_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{⁎+}$ and $D^{s+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} =5.02$ TeV. The increased ...
ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV
(Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ( p T ). At central rapidity, ( |y|<0.8 ), ALICE can reconstruct J/ ψ via their decay into two electrons down to zero ...
|
What's in a language? Languages in abstraction
This post is about Languages from a mathematical and abstract linguistics point of view. Not much more to say about that, so let’s get right to it!
The Rigorous definition:
Let \( \Sigma \) be an alphabet and let \( \Sigma^k \) be the set of all strings of length k over that alphabet. Then, we define \( \Sigma^* \) to be \( \bigcup\limits_{k\in\mathbb{N}}\Sigma^k \) (the union of \( \Sigma^k \) over all natural numbers k). If \( L\subseteq\Sigma^* \), we call \( L \) a language.

The Intuition Behind the Definition
Consider an alphabet (some finite set of characters), for example we can consider the letters of the English language, the ASCII symbols, the symbols \( {0, 1} \) (otherwise known as binary), or the symbols \( {1, 2, 3, 4, 5, 6, 7, 8, 9, 0, +, \times , =} \) . We can then construct the infinite list of all the different ways we can arrange those characters (e.g. \( 1001011 \) or \( 0011011011 \) , etc. if we’re using binary). We call these arrangements “strings”. Once we have all that machinery built up, a language is just some subset of that infinite collection of strings. The language may itself be infinite.
Some Examples

The alphabet: \( \Sigma=\{0, 1\} \)
The language: \( \{x\in\Sigma^* \mid x \text{ is prime}\} \) (all prime numbers in binary)
Some strings from the language: \( 10, 11, 101… \)

The alphabet: ASCII characters
The language: All syntactically correct Clojure programs (the source code)

The alphabet: All Clojure functions, operators, etc., and the list \( \{x, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0\} \)
The language: All syntactically correct Clojure programs (the source code)
You see that we need to have an alphabet before we can have a Formal Language. Also, different alphabets may result in equivalent languages – by equivalent, we mean that both languages contain the same strings.
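The first example can be generated mechanically; here is a sketch in Python (the helper names and the no-leading-zeros convention are my own choices):

```python
from itertools import product

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

alphabet = "01"

def sigma_star(max_len: int):
    """All strings over the alphabet of length 1..max_len (a finite slice of Sigma*)."""
    for k in range(1, max_len + 1):
        for chars in product(alphabet, repeat=k):
            yield "".join(chars)

# L = { x in Sigma* : x, read in binary, is prime }, written without
# leading zeros so each number appears once.
language = [s for s in sigma_star(5)
            if not s.startswith("0") and is_prime(int(s, 2))]
print(language[:4])  # ['10', '11', '101', '111']
```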
What are we getting at?
Well, there are two ways to look at this. On the one hand, linguists would like to study language in its abstract essence. For this, Formal Languages may come in handy (if endowed with a Grammar and possibly more). That is not the reason I will be studying Formal Languages. I'm learning about formal languages to find their applications to computing. Apparently, with the help of a little mathematical thinking, we can assign semantics to the strings in a language and somehow correlate them with real world problems, such as computability, the P = NP problem, cryptography, and more!
|
Don't be intimidated, semiclassical quantization is very simple, and it can be straightforwardly understood from a few examples which lead to the general case.
Consider a particle in a box. The classical motions are reflections off the wall. These make a box in phase space, as the particle goes left, hits the wall, goes right, and hits the other wall. If the particle has momentum p and the length of the box is L, the area enclosed by this motion in phase space is
$$ p L $$
and the condition is that this is an integer multiple of $h=2\pi\hbar$. This gives the momentum quantization condition from quantum mechanics.
For a 1-dimensional system, the rule is that
$$ \int p dx = n h$$
With a possible offset, so that the right-hand side might be $(n+1/2)h$, or $(n+3/4)h$, as appropriate, but the spacing between levels is given by this rule to leading order in h. This rule can be understood from deBroglie's relation--- the momentum at any x is the wavenumber, or the rate of change of the phase of the wavefunction. The condition (in natural units where $h=2\pi$ ) is saying that the phase change as you follow a classical orbit should be an integer multiple of $2\pi$, i.e. that the wave should form a standing wave.
This formula is not exact, because the quantum wave doesn't follow the classical trajectory, but the WKB approximation just takes this as a starting point, and makes a wave whose phase is given by the value of this integral, and whose amplitude is the reciprocal of the square root of the classical velocity.
The reason this works was known already before quantum theory was fully formulated, but to understand it requires familiarity with action-angle variables.
Action-angle variables
Consider an orbit of a particle in one-dimension, with position x and momentum p. You call the area in phase space enclosed by the orbit J, and this is the action. J is only a function of H and it is constant in time (by definition).
The conjugate variable to J is a variable which distinguishes the points of the orbit, and this is called $\theta$. Now you notice that the area in phase space is invariant under canonical transformations (for infinitesimal canonical transformations this is Liouville's theorem), so that the area between the orbits at J and J+dJ is the same as the area in x-p coordinates between J and J+dJ, which is just dJ because that's the definition of J. But this area in J,$\theta$ coordinates is dJ times the period of $\theta$, so $\theta$ has the same period for all J, which I will take to be $2\pi$.
The rate at which $\theta$ increases with time is given by Hamilton's equations
$$ \dot{\theta} = {\partial H\over \partial J} = H'(J) $$
And this is constant over the entire orbit, because H is constant, and so is J. So you learn that $\theta$ increase monotonically at a constant rate at each J, and the time period of $\theta$ is:
$$ T = {2\pi\over H'(J)} $$
Semiclassical quantization
Suppose you weakly couple this one-dimensional system to electromagnetism. The classical orbital frequency is going to be the frequency of the emitted photons (and double this frequency, and three times this frequency), so that if you want to have discrete photon-emission transitions, you must ensure that emitting a photon of frequency $f={1\over T}$, and taking away energy $hf$ leaves you with a quantum state to fall to. So if there is a quantum state corresponding to a classical motion with one value of J, at energy H(J), there must be another quantum state with energy
$$ H(J) - {2\pi h\over T} = H(J) - H'(J)h \approx H(J-h) $$
in other words, the quantum states must be spaced evenly in J. To this order, this means that there are states at J-h,J-2h,J-3h and so on, and transitions to these states have to reproduce the classical radiation harmonics produced when you weakly couple the thing to electromagnetism.
So the quantization rule is $J=nh$, up to a possible offset. The derivation makes it clear that it is only true to leading order in h. This was Bohr's correspondence argument for the quantization condition.
When you have more than one degree of freedom, and the system is integrable, you have action variables $J_1,J_2...J_n$ and conjugate angle variables periodic with period $2\pi$ each. You can couple any of the degrees of freedom to electromagnetism weakly, and each classical period of the $\theta$ variable in time is
$$T_k = {2\pi \over \partial H / \partial J_k}$$
so the statement is that for each orbit, each J variable is quantized according to the Bohr rule.
$$ J_k = nh $$
The $J_k$ variable is the area enclosed in the one dimensional projection of the motion in those coordinates where the motion separates into multiperiodic motion (this is the torus of Bar Moshe's answer). This is Sommerfeld's extension of Bohr quantization.
So the integral $\int p dq$ is taken with p and q any conjugate variables which make a periodic motion. In 1d, there is nothing to do; in multiple dimensions, you just choose variables which separately execute a 1d motion, and in general, you have to find J variables. This procedure doesn't work for classically chaotic systems.

This post imported from StackExchange Physics at 2017-03-13 12:20 (UTC), posted by SE-user Ron Maimon
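As a concrete check of the 1-d rule, the harmonic oscillator $H = p^2/2m + m\omega^2x^2/2$ has elliptical phase-space orbits of area $2\pi E/\omega$, so $J = nh$ reproduces $E_n = n\hbar\omega$, the exact spectrum up to the constant offset mentioned above. A numerical sketch (the unit choices $m=\omega=\hbar=1$ are my own):

```python
import math

hbar = 1.0
m = 1.0
omega = 1.0
h = 2 * math.pi * hbar

def orbit_area(E, steps=200_000):
    """Numerically compute J = closed integral of p dx = 2 * int p(x) dx."""
    x_turn = math.sqrt(2 * E / (m * omega**2))      # classical turning point
    dx = 2 * x_turn / steps
    area = 0.0
    for i in range(steps):
        x = -x_turn + (i + 0.5) * dx                # midpoint rule
        p = math.sqrt(max(0.0, 2 * m * (E - 0.5 * m * omega**2 * x**2)))
        area += p * dx
    return 2 * area                                 # top and bottom of the ellipse

for n in (1, 2, 3):
    E = n * hbar * omega          # semiclassical level (offset dropped)
    print(n, orbit_area(E) / h)   # each ratio comes out ~ n
```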
|
A transition element is defined as one which has an incompletely filled d orbital in its ground state or in any one of its oxidation states. Zinc, cadmium and mercury are not typical transition elements because they have fully filled d orbitals in their ground state as well as in their common oxidation state. These elements show none of the characteristic properties of transition metals, such as the lattice structure shown below in the picture. They probably do not have proper metal-metal bonding, a characteristic feature of transition metals. But they are still studied along with the chemistry of transition metals. But what about manganese? It is a core and important element in the transition series. Here, X is an unknown lattice structure.
Ah, good old Mn. This is one of the uglier ones in terms of phases. $\alpha$-Mn, the room temperature phase, is CBCC (A12 family), fairly unusual. Near 1000K, Mn transitions to $\beta$-Mn, with a simple cubic (A13 family) crystal structure. $\gamma$-Mn is FCC, face-centered cubic (A1), and $\delta$-Mn is BCC, body-centered cubic (A2).
One excellent reference is A.T. Dinsdale, SGTE Data for Pure Elements, CALPHAD 15(4) 317-425 (1991). There is a readily available PDF of an updated version of that paper that can be found (Dinsdale). This paper gives the accepted Gibbs free energy functions for the elements in their various stable, and some unstable, forms.
Editing to add additional information on the $\alpha$- and $\beta$-Mn crystal phases. Both of these phases are pretty weird, and each is the first observed crystal with their structure (hence they are the 'prototype' in crystallography terms).
$\alpha$-Mn was described by A.J. Bradley and J. Thewlis in 'The Crystal Structure of $\alpha$-Manganese', Proc. Royal Society 115, 456-471 (1927). The unit cell is based on body-centered cubic, but contains 58 atoms representing 4 distinct positions. It can be thought of as a bcc Bravais lattice with a 29 atom basis. Mn is the only element that exhibits this crystal structure.
$\beta$-Mn was described by G.D. Preston in Phil. Mag. 5(33) 1207-1212 (1928). The unit cell is simple cubic, containing 20 atoms in 2 groups. Preston's unit cell drawing is significantly worse than that of Bradley and Thewlis, although representations of the unit cell can be easily found on the web these days. Again, Mn is the only element that exhibits this crystal structure.
The $\delta$-Mn and $\gamma$-Mn are normal, boring bcc and fcc crystals, so no need to point to references for them.
I would provide links to the papers if I could - my institute does not have direct access to them.
As yet further info on the crystal structure references. The A12, A13, ... notation is the Strukturbericht notation used by crystallographers. Unfortunately, there are a number of other notations used as well, including Schoenflies and Hermann-Mauguin. The Strukturbericht is pretty basic, mostly a catalog of crystal types, while the others convey the actual symmetries of the crystal structure. Some of this confusion of notations seems to date from the early days of crystallography when it was done in many places and languages. There are seven crystal systems, 14 Bravais lattices, 32 crystallographic point groups, and 230 space groups to be described to some level by the various notations. For this answer I stuck to the Strukturbericht because that is the notation used in the Dinsdale reference on Gibbs free energies of the elemental phases.
|
I was puzzling: "What is the shortest proof of $\exists x \forall y (P(x) \to P(y)) $?" (a variation of the drinkers paradox see Proof of Drinker paradox) given a certain set of inference rules and using natural deduction.
I managed to prove it in 23 lines, but I am not sure if this is the shortest possible proof.
That made me think maybe I could turn it into a little competition and offer say 200 reputation as reward for the shortest proof.
Would such a (competition) question be allowed?
|
I've started learning sequences and I'm having a hard time calculating the following, for $a > 0$:
$$\lim_{n\to ∞}{\frac{\lfloor na\rfloor}{n}} $$
Using Heine’s Lemma I'm trying to solve it analogously to the corresponding limit definitions for functions, but I get stuck. I've tried mostly with the Squeeze theorem.
Any help is appreciated.
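Not a proof, but a quick numerical look at the squeeze $\frac{na-1}{n} < \frac{\lfloor na\rfloor}{n} \le a$ (the choice $a=\sqrt 2$ is illustrative):

```python
import math

a = math.sqrt(2)  # any a > 0 works; sqrt(2) is an illustrative choice
for n in (10, 1_000, 100_000):
    print(n, math.floor(n * a) / n)
# 10 1.4
# 1000 1.414
# 100000 1.41421
```

Since $\lfloor na\rfloor/n$ differs from $a$ by less than $1/n$, the terms visibly settle onto $a \approx 1.41421356$.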
|
We highly recommend (I cannot stress this enough) that you download the LaTeX fonts (just 151 KB) to your PC and install them. This will not only improve the look of the website, but also save bandwidth, as you will not need to download the images every time you see an equation.
Instructions to install and enable fonts:
A. How (and why) to install LaTeX fonts in your PC:
1. Download the fonts attached with this post (we have attached 8 fonts) and unzip them. Then copy all 8 fonts (Ctrl+C). Go to Start > Control Panel > Fonts. Now paste the fonts (Ctrl+V). The fonts will be installed.

2. Then restart your browser, and go to this topic.
3. After that click on the jsMath button at the bottom-right corner of your page. Click on "Options", and check if your settings are as shown in the following image. If not, change them.

(In particular, set the setting for 5 years, and select "use native TeX fonts".)
4. Bingo! We are done!
When you don't have fonts installed, you shall see something like this (and it will take an eon to load):
Whereas, with the settings enabled, and fonts installed you should see something nice like the following.
Now compare and decide if you have done everything correctly.
$2\sum \sqrt{(a^{4}+b^{2}c^{2})(b^{4}+c^{2}a^{2})} \geq 2[3,1,0]$ (Cauchy)
$a^4+b^4+c^4+a^{2}b^{2}+a^{2}c^{2}+b^{2}c^{2}+ 2\sum \sqrt{(a^{4}+b^{2}c^{2})(b^{4}+c^{2}a^{2})}\geq 2[3,1,0]+\frac{[4,0,0]+[2,2,0]}{2}$
$2[3,1,0]+\frac{[4,0,0]+[2,2,0]}{2}\geq\frac{[4,0,0]}{2}+ \frac{3[3,1,0]}{2}+\frac{[2,1,1]+[2,2,0]}{2}$
I think that after seeing these examples, I don't need to repeat why we should install LaTeX fonts.

B. LaTeX Intro:
In advanced mathematics or science textbooks you can see nicely typeset equations. LaTeX makes this possible. LaTeX is a typesetting program that can generate professional looking equations (and much more cool stuff!). However, the aim of this post is to tell you how to write simple nice equations.
The art of problem solving (AOPS) forum has a great LaTeX guide for the beginners. However, we shall need to focus on a few parts for being able write in our posts:
1. How to write equations in the posts
2. All the symbols (you probably haven't seen all of them before)
Now you can write equations without learning LaTeX code. However, learning LaTeX is fun, and might be useful in your later life. So try learning LaTeX by studying the code.
Read: Writing Equation using LaTeX was never easier!
C. Our Own guide:
If you want to write a simple equation within a line like the following:

Just use dollar signs (technically, use them when you are writing inline math) like the following: This is Pythagoras' theorem $a^2+b^2=c^2$
Code: Select all
This is Pythagoras' theorem $a^2+b^2=c^2$
Use: The following is the normal distribution function \[F(x) = \tfrac{1}{\sqrt{2\pi\sigma^2}}\; e^{ -\frac{(x-\mu)^2}{2\sigma^2} } \]
Code: Select all
\[ Your equation within these backslash brackets \]
Code: Select all
The following is normal distribution function \[F(x) = \tfrac{1}{\sqrt{2\pi\sigma^2}}\; e^{ -\frac{(x-\mu)^2}{2\sigma^2} } \]
The best way to learn LaTeX is to see the examples. Double click on the examples to see the code.
*** for writing power use ^ i.e. x^a=$x^a$
*** for writing subscript use _ i.e. x_a=$x_a$
*** for writing nice looking fractions use \frac{a}{b}=$\frac{a}{b}$. You can use \frac within a \frac command, i.e. \frac{\frac{ab}{c}}{\frac{f}{g}}$=\dfrac{\frac{ab}{c}}{\frac{f}{g}}$
** "\" this sign is very frequently used in LaTeX. See the symbol guide of AOPS to learn how to write symbols and letters like \pi=$\pi$
When you think that you have learned some basic LaTeX, you an try it on codecogs equation editor or on our Test Forum.
|
I am given a zero-inflated Poisson (ZIP) model, where random data $X_1, \dots, X_n$ are of the form $X_i=R_iY_i$, where the $Y_i$'s have a Poisson($\lambda$) distribution and the $R_i$'s have a Bernoulli($p$) distribution, all independent of each other. Given an outcome $x = (x_1, \dots, x_n)$, the objective is to estimate both $\lambda$ and $p$.
We can use a hierarchical Bayes model:
$p \sim$ Uniform(0,1) (prior for $p$),

$(\lambda|p) \sim$ Gamma(a,b) (prior for $\lambda$),

$(r_i|p, \lambda) \sim$ Bernoulli($p$) independently (from the model above),

$(x_i|r, \lambda, p) \sim$ Poisson($\lambda r_i$) independently (from the model above)
Since $ a$ and $ b$ are known parameters, and $ r = (r_1,…,r_n)$ , it follows that
$f(x,r, \lambda, p) = \frac{b^a \lambda^{a-1} e^{-b \lambda}}{\Gamma(a)} \prod_{i=1}^n\frac{e^{-\lambda r_i} (\lambda r_i)^{x_i}}{x_i!} p^{r_i}(1-p)^{1-r_i}$
My question is to obtain the following:
1) $\lambda|p,r,x\sim Gamma(a+ \sum_{i}x_i, b+ \sum_{i}r_i)$

2) $p|\lambda,r,x\sim Beta(1+ \sum_{i}r_i, n+1 - \sum_{i}r_i)$

3) $r_i|\lambda,p,x \sim Bernoulli\left(\frac{pe^{- \lambda}}{pe^{- \lambda}+(1-p)I\{x_i=0\}}\right)$
For 1) and 2), I am able to deduce them by integrating out the other variables. However, for 3), I was not able to do so and eventually obtained the following expression:
$ e^{\lambda \sum r_i} r_i^{\sum x_i} p^{\sum r_i} (1-p)^{n-\sum r_i}$
Can anyone show me if what I did is correct and perhaps how to obtain the required expression?
Thank you.
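If it helps to see the three full conditionals in action, here is a sketch of the resulting Gibbs sampler in Python (the hyperparameters, data, and iteration counts are made-up illustrative choices; for $x_i>0$ the conditional in 3) forces $r_i=1$):

```python
import math
import random

random.seed(0)

def gibbs_zip(x, a=1.0, b=1.0, iters=2000):
    """Gibbs sampler cycling through the conditionals 1)-3) above."""
    n = len(x)
    lam, p = 1.0, 0.5
    r = [1 if xi > 0 else 0 for xi in x]
    draws = []
    for _ in range(iters):
        # 1) lambda | p, r, x ~ Gamma(a + sum x_i, rate = b + sum r_i)
        lam = random.gammavariate(a + sum(x), 1.0 / (b + sum(r)))
        # 2) p | lambda, r, x ~ Beta(1 + sum r_i, n + 1 - sum r_i)
        p = random.betavariate(1 + sum(r), n + 1 - sum(r))
        # 3) r_i | lambda, p, x: the indicator forces r_i = 1 when x_i > 0;
        #    for x_i = 0, r_i ~ Bernoulli(p e^{-lam} / (p e^{-lam} + 1 - p))
        q = p * math.exp(-lam) / (p * math.exp(-lam) + 1 - p)
        r = [1 if xi > 0 else (1 if random.random() < q else 0) for xi in x]
        draws.append((lam, p))
    return draws

x = [0, 0, 0, 2, 3, 0, 1, 4, 0, 2]  # made-up data
draws = gibbs_zip(x)
lam_mean = sum(d[0] for d in draws[500:]) / len(draws[500:])
print(round(lam_mean, 1))
```

Note that `random.gammavariate` takes a scale parameter, hence the `1.0 / (b + sum(r))` for a rate-$b+\sum r_i$ Gamma.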
|
I'm trying to price a "power contract" and would appreciate guidance on the next step. The payoff at time $T$ is $(S(T)/K)^\alpha$, where $K > 0$, $\alpha \in \mathbb{N}$, $T > 0$. $S$ is adapted to $\mathscr{F}$, and we are currently at time $t \in [0,T)$. Let $Q$ denote the risk-neutral measure and $\beta(t) = e^{\int_0^t r(s)ds}$ be the domestic savings account/discount factor. Also, $W(t)$ is standard Brownian Motion.
Here's my progress:
$\displaystyle \ \ \text{value}_t = E^Q[\frac{\beta(t)}{\beta(T)}(S(T)/K)^\alpha \big|\mathscr{F}_t]$
$\displaystyle \ \ = \frac{\beta(t)}{\beta(T)K^\alpha}E^Q[S(T)^\alpha \big|\mathscr{F}_t]$
We take $\displaystyle \ \ S(T)^\alpha = S(t)^\alpha \exp{\{\bigg[ (r-\frac12 \sigma^2)(T-t)+\sigma(W(T)-W(t))\bigg]\alpha \}}$.
Therefore:
$\displaystyle \ \ \text{value}_t = \frac{\beta(t)S(t)^\alpha}{\beta(T)K^\alpha}\exp{\{ \alpha(r-\frac12 \sigma^2)(T-t) + \frac12 \alpha^2 \sigma^2(T-t) \}}$
by the fact that $E[e^z] = e^{\mu + \frac12 \sigma^2}$ when $z \sim \mathscr{N}(\mu,\sigma^2)$.
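A quick Monte-Carlo sanity check of the closed form (the parameter values below are made-up, and $\beta(t)/\beta(T)=e^{-r(T-t)}$ is taken deterministic as in the derivation):

```python
import math
import random

random.seed(1)
S_t, K, r, sigma, tau, alpha = 100.0, 90.0, 0.03, 0.2, 1.0, 2

# Closed form: e^{-r tau} (S_t/K)^alpha exp{ alpha (r - sigma^2/2) tau
#                                            + alpha^2 sigma^2 tau / 2 }
closed = math.exp(-r * tau) * (S_t / K) ** alpha * math.exp(
    alpha * (r - 0.5 * sigma**2) * tau + 0.5 * alpha**2 * sigma**2 * tau
)

# Monte Carlo under Q: simulate S(T) and discount the payoff (S_T/K)^alpha.
n = 200_000
acc = 0.0
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    S_T = S_t * math.exp((r - 0.5 * sigma**2) * tau + sigma * math.sqrt(tau) * z)
    acc += (S_T / K) ** alpha
mc = math.exp(-r * tau) * acc / n

print(closed, mc)  # the two numbers should agree to about 1%
```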
This is homework but is not graded.
|
Let's Cover The Fundamental Group (pt 1)

Prerequisites:
Firstly, this post builds off of my previous post, so if you’re learning these things for the first time and find yourself kinda lost, start there.
In addition to the previous post, the following are mathematical definitions you should probably know before reading this post. If you don’t know any of these, trust google.
- Group
- Homomorphism
- Kernel
- Subgroup
- Normal subgroup
- Homotopy
- Retraction
- Star convex

Moving onward

Definition:
Given a path $\alpha$ from $x_0$ to $x_1$ in $X$, we define $\hat\alpha:\pi_1(X,x_0)\to\pi_1(X,x_1)$ as follows: $\hat\alpha([f]) = [\bar\alpha]\star[f]\star[\alpha]$.
And it’s a theorem that $\hat\alpha$ is a group isomorphism.
Definition:
Given a continuous map $h:(X,x_0)\to(Y,y_0)$, we define:
$h_\star:\pi_1(X,x_0)\to\pi_1(Y,y_0)$ as follows:
$h_\star([f]) = [h\circ f]$.
And with that…
Let’s get into the questions!
Exercises Show that if $A\subseteq\mathbb{R}^n$ is star convex then $A$ is simply connected.
Let $a\in A$ be one of the points that make the set star convex. Our task now is to show:
For any $x,y\in A$, there is a path entirely within $A$ connecting $x$ and $y$, and $\pi_1(A,a)$ is trivial.
For the first: just let $x,y\in A$ and let $f_x,f_y$ be the paths connecting them to $a$. Then note that $f_x \star \bar f_y$ is a path connecting $x$ to $y$.
For the second we can use the same straight-line homotopy that we used for the convex version: let $f$ be a path starting and ending at $a$ and let $H(s,t) = t\cdot a + (1-t)\cdot f(s)$.
Lastly, because part a (which I didn't mention) is to find a set that's star convex but not convex, I'll leave the question with the Star of David.

Show that if $\alpha,\beta$ are paths from $x_0\to x_1\to x_2$ all in $X$ and $\gamma = \alpha\star\beta$, then $\hat\gamma = \hat\beta\circ\hat\alpha$.
Since these are functions between equivalence classes, our job is now to show that the outputs for a given input are homotopic.
So let $f$ be a path in $X$ starting and stopping at $x_0$. Then:
Show that if $x_0,x_1$ are points in a path-connected space $X$, $\pi_1(X,x_0)$ is abelian if and only if for every pair of paths from $\alpha,\beta$ from $x_0$ to $x_1$, $\hat\alpha = \hat\beta$.
If $\pi_1(X,x_0)$ is abelian, we have:
Let $A\subset X$ and let $r:X\to A$ be a retraction. Show that for $a_0\in A$, $r_\star:\pi_1(X,a_0)\to\pi_1(A,a_0)$ is surjective.
Well, any path $\alpha$ in $A$ starting and stopping at $a_0$ will also be a path in $X$ (because $A\subset X$). So $r_\star([\alpha]_X) = [\alpha]_A$. It’s surjective because it’s the identity map when we restrict paths to $A$.
Let $A\subset\mathbb{R}^n$ and $h:(A,a_0)\to(Y,y_0)$. Show that if $h$ is extendable to a continuous map $\tilde h:\mathbb{R}^n\to Y$, then $h_\star$ is trivial (i.e. sends everybody to the class of the constant loop).
Let $G=\pi_1(A,a_0), H=\pi_1(Y,y_0)$. Then, as a reminder, $h_\star:G\to H$ such that $h_\star([\alpha]) = [h\circ\alpha]$.
So let $\alpha,\beta$ be paths in $(A,a_0)$. Since $\alpha,\beta$ were arbitrary, it is sufficient to show that $h_\star([\alpha])$ is homotopic to $h_\star([\beta])$.
Consider $F:I\times I\to \mathbb{R}^n$ given by $F(s,t) = t\alpha(s) + (1-t)\beta(s)$ (the straight line homotopy between the two loops). Since $h$ is extendable to $\tilde h:\mathbb{R}^n\to Y$, even if $F$ is a homotopy that leaves $A$ (into some other part of $\mathbb{R}^n$), $\tilde h\circ F$ is a homotopy between $h\circ\alpha$ and $h\circ\beta$ that stays entirely in $Y$. Hence $[\tilde h\circ \alpha] = [\tilde h\circ \beta]$.
|
Let
$\beta \in K \setminus F; \tag 1$
then, since $[K:F] = 2$, there must exist a linear dependence over $F$ between $1$, $\beta$, and $\beta^2$; that is, there are $a, b, c \in F$, not all zero, such that
$a \beta^2 + b\beta + c = 0; \tag 2$
if now $a = 0$,
$b\beta + c = 0; \tag 3$
then $b = 0$ forces $c = 0$, so $a = b = c = 0$, contrary to our assumption; if $b \ne 0$, then (3) implies $\beta \in F$, another contradiction; thus we may rule out the case $a = 0$.
With $a \ne 0$, (2) yields
$\beta^2 + d \beta + e = 0, \tag 4$
where $d = a^{-1}b$ and $e = a^{-1}c$; then we may write
$\beta^2 + d \beta = -e, \tag 5$
whence, with $\text{char}(F) \ne 2$,
$(\beta + \dfrac{d}{2})^2 = \beta^2 + d \beta+ \dfrac{d^2}{4} = \dfrac{d^2}{4} - e \in F; \tag 6$
we note that $4 = 2^2 \ne 0$ in $F$; otherwise $2^2 = 0 \Longrightarrow 2 = 0$, and $\text{char}(F) = 2$; now setting
$\alpha = \beta + \dfrac{d}{2}, \tag 7$
we see from (6) that $\alpha^2 \in F$; also,
$\alpha \in K \setminus F, \tag 8$
thus $[F(\alpha):F] = 2$, so
$K = F(\alpha), \tag 9$
and we have produced the requisite $\alpha$.
If $F = \Bbb Q$, then
$\alpha^2 = \dfrac{p}{q} \tag{10}$
with $p, q \in \Bbb Z$; note $q \ne 0$; then
$q \alpha^2 = p, \tag{11}$
so
$(q \alpha)^2 = q^2 \alpha^2 = pq \in \Bbb Z; \tag{12}$
finally,
$K = F(\alpha) = F(q\alpha). \tag{13}$
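A quick numeric illustration of the completing-the-square step and the integer-clearing step (the concrete values $d = 3$, $e = -2$ are my own choice for illustration):

```python
import math

# concrete illustration over F = Q: take beta with beta^2 + d*beta + e = 0,
# d = 3, e = -2, so the discriminant 17 is not a perfect square and beta
# is irrational (a genuine degree-2 element over Q)
d, e = 3.0, -2.0
beta = (-d + math.sqrt(d * d - 4 * e)) / 2

# completing the square as in (6)-(7): alpha = beta + d/2 squares into F
alpha = beta + d / 2
assert math.isclose(alpha ** 2, d * d / 4 - e)   # alpha^2 = 17/4, a rational

# clearing denominators as in (10)-(13): alpha^2 = p/q with p = 17, q = 4,
# and (q * alpha)^2 = p * q is an integer
q = 4
assert math.isclose((q * alpha) ** 2, 17 * 4)
```

Here $\beta = (-3+\sqrt{17})/2$, $\alpha = \sqrt{17}/2$, and $q\alpha = 2\sqrt{17}$, matching $K = \Bbb Q(\sqrt{17})$.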
|
Please excuse the non-specific title, this is a rather long problem.
So on our last exam in multivariable calculus, our professor gave us a very lengthy vector manipulation problem as a bonus. Seeing as it's no longer worth any points to me, I was wondering if someone could help me understand how to solve it.
So let $ \vec v_1,\vec v_2,\vec v_3 \in \Bbb R^3$, such that $\vec v_1 \cdot (\vec v_2 \times \vec v_3) \neq 0$
Now define
$$k_1 = \frac{\vec v_2 \times \vec v_3}{\vec v_1 \cdot (\vec v_2 \times \vec v_3)}, k_2 = \frac{\vec v_3 \times \vec v_1}{\vec v_1 \cdot (\vec v_2 \times \vec v_3)}, k_3 = \frac{\vec v_1 \times \vec v_2}{\vec v_1 \cdot (\vec v_2 \times \vec v_3)}$$
We must show:
i) Each $\vec k_i$ is perpendicular to every $\vec v_j$ with $j \neq i$;
ii) $\vec k_1 \cdot (\vec k_2 \times \vec k_3) = \frac{1}{\vec v_1 \cdot (\vec v_2 \times \vec v_3)}$
The second part seems somewhat straightforward to solve through brute force, but I'm wondering if there's a simpler way. However, I have no idea how to approach the first part; I have no notion of vector division, and that seems to be what the definitions suggest.
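For what it's worth, no vector division is involved — only dot and cross products. Both claimed identities can be checked numerically; here is a small numpy sketch (my own illustration, with random vectors):

```python
import numpy as np

rng = np.random.default_rng(3)
v1, v2, v3 = rng.standard_normal((3, 3))
triple = np.dot(v1, np.cross(v2, v3))          # assumed nonzero, as given

k1 = np.cross(v2, v3) / triple
k2 = np.cross(v3, v1) / triple
k3 = np.cross(v1, v2) / triple

# i) k_i . v_j = delta_ij: perpendicular when i != j, and 1 when i == j
#    (a cross product is perpendicular to both of its factors)
K = np.array([k1, k2, k3])
V = np.array([v1, v2, v3])
assert np.allclose(K @ V.T, np.eye(3))

# ii) the reciprocal triple product equals the reciprocal of the original
assert np.isclose(np.dot(k1, np.cross(k2, k3)), 1.0 / triple)
```

Part i) follows immediately from the fact that $\vec v_2 \times \vec v_3$ is perpendicular to $\vec v_2$ and $\vec v_3$, and similarly for the others; the numerator choice only affects the $i = j$ case.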
|
A parallel plate capacitor is located horizontally so that one of its plates is submerged into the liquid while the other is over its surface. The permittivity of the liquid is equal to $\epsilon$, its density is equal to $\rho$. To what height will the level of the liquid in the capacitor rise after its plates get a charge of surface charge density $\sigma$?
This is a somewhat controversial problem, as different authors give different answers. The two answers provided are:
$$1) \ h=\dfrac{(\epsilon-1)\sigma^2}{2\epsilon_o\epsilon\rho g}$$ $$2) \ h=\dfrac{(\epsilon^2-1)\sigma^2}{2\epsilon_o\epsilon^2\rho g}$$
Solution for 1 and solution for 2
Please help!
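Purely to see how much the two candidate answers actually differ, here is a quick numeric comparison (all parameter values are my own illustrative assumptions): algebraically, the two formulas differ by exactly a factor of $(\epsilon+1)/\epsilon$, so for large $\epsilon$ they nearly agree.

```python
import math

eps0 = 8.854e-12   # vacuum permittivity, F/m
eps = 81.0         # relative permittivity of the liquid (water-like), assumed
rho = 1000.0       # density, kg/m^3, assumed
g = 9.81           # m/s^2
sigma = 1e-5       # surface charge density, C/m^2, assumed

h1 = (eps - 1) * sigma ** 2 / (2 * eps0 * eps * rho * g)
h2 = (eps ** 2 - 1) * sigma ** 2 / (2 * eps0 * eps ** 2 * rho * g)

# the two answers differ exactly by the factor (eps + 1) / eps
assert math.isclose(h2 / h1, (eps + 1) / eps)
print(h1, h2)
```

So the disagreement between the two derivations is a modest multiplicative factor, largest for liquids with small $\epsilon$.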
|
Definitions
Let $\mathbb{F}$ be the set of floating point numbers in a given format, which could be either IEEE-754 binary32 (single precision) or binary64 (double precision).
Let $m(Z)$ the floating point number obtained rounding the real number $Z$.
Let $\phi(Z)$ the next floating point number greater than $Z$.
Let $\lfloor Z \rfloor$ be truncation to 64 bit unsigned integer of the positive floating point number $Z < 2^{64}$ (i.e. there are no overflow concerns).
Hypothesis
Given three numbers $X, Y, A$, such that: $$ X, Y, A \in \mathbb{F} \\ 0<X<Y \\ A > m \left( \frac{1}{m(Y-X)} \right) \\ \lfloor m(A\cdot X) \rfloor < \lfloor m(A\cdot Y) \rfloor $$
Question
Is it always true that? $$ \lfloor m(\phi(A)\cdot X) \rfloor < \lfloor m(\phi(A)\cdot Y) \rfloor $$
Explanation on the Question
The function $\lfloor m(z\cdot X) \rfloor$ is non-decreasing in $z$ and its behavior is stepwise constant (it looks like a ladder). Increasing $z$ by a tiny bit, the result either does not change, or it jumps to a greater unit.

I am asking whether, as $z$ increases, the terms on the left-hand side and right-hand side always jump together, or whether it is possible that the left-hand side term jumps before the right one, locally breaking the inequality.
Type of Answer Sought
Either a formal proof that it holds (or that it doesn't), or a numerical example where it does not hold (either in single or double precision) would be accepted as an answer.
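Not an answer, but here is a small random-search harness in Python (binary64, so each arithmetic operation is exactly one application of $m(\cdot)$; the function names and search ranges are my own choices). It is evidence only, not a proof — it found no counterexample in the magnitude range tried:

```python
import math
import random

def floor_u64(z):
    # truncation of a nonnegative binary64 value (int() truncates toward zero)
    return int(z)

def check_pair(X, Y, A):
    """Return True/False for the conjectured implication, or None if the
    hypotheses are not satisfied."""
    if not (0.0 < X < Y):
        return None
    if not A > 1.0 / (Y - X):                  # A > m(1/m(Y-X))
        return None
    if not floor_u64(A * X) < floor_u64(A * Y):
        return None
    A_next = math.nextafter(A, math.inf)       # phi(A)
    return floor_u64(A_next * X) < floor_u64(A_next * Y)

# random search for a counterexample at moderate magnitudes
rng = random.Random(42)
for _ in range(100_000):
    X = rng.uniform(1e-3, 1000.0)
    Y = X + rng.uniform(1e-6, 10.0)
    A = (1.0 / (Y - X)) * rng.uniform(1.001, 4.0)
    assert check_pair(X, Y, A) is not False, (X, Y, A)
```

One caveat the search does not probe: when $A\cdot X$ is large enough that an ulp of the product exceeds 1, rounding could in principle interact with the floors differently, so the large-magnitude regime deserves separate attention.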
|
Homology, Homotopy and Applications Homology Homotopy Appl. Volume 5, Number 1 (2003), 407-421. Extensions of homogeneous coordinate rings to $A_ \infty$-algebras Abstract
We study $A_\infty$-structures extending the natural algebra structure on the cohomology of $\oplus_{n\in\mathbb{Z}} L^n$, where $L$ is a very ample line bundle on a projective $d$-dimensional variety $X$ such that $H^i(X,L^n)=0$ for $0 < i < d$ and all $n \in \mathbb{Z}$. We prove that there exists a unique such nontrivial $A_{\infty}$-structure up to a strict $A_{\infty}$-isomorphism (i.e., an $A_{\infty}$-isomorphism with the identity as the first structure map) and rescaling.
In the case when $X$ is a curve we also compute the group of strict $A_{\infty}$-automorphisms of this $A_{\infty}$-structure.
Article information

Source: Homology Homotopy Appl., Volume 5, Number 1 (2003), 407-421.
Dates: First available in Project Euclid: 13 February 2006
Permanent link to this document: https://projecteuclid.org/euclid.hha/1139839940
Mathematical Reviews number (MathSciNet): MR2072342
Zentralblatt MATH identifier: 1121.55005

Citation
Polishchuk, A. Extensions of homogeneous coordinate rings to $A_ \infty$-algebras. Homology Homotopy Appl. 5 (2003), no. 1, 407--421. https://projecteuclid.org/euclid.hha/1139839940
|
This question assumes some familiarity with Jensen's fine structure analysis of the constructible universe L (https://en.wikipedia.org/wiki/Jensen_hierarchy, http://www.math.cmu.edu/~laiken/papers/FineStructure.pdf).
Everything to follow is in L.
Given some $J_\alpha$, the n-th projectum of $J_\alpha, \rho_n(J_\alpha)$ is defined as follows: the least $\rho\leq \alpha$ such that there exists a subset of $\omega\cdot \rho$ which is $\Sigma_n(J_\alpha)$ but not in $J_\alpha$. Another equivalent characterization is that $\rho_n$ is the least $\delta\leq \alpha$ such that there exists a $\Sigma_n(J_\alpha)$ function that maps $\omega\cdot \delta$ onto $J_\alpha$. Of course if $1<\rho_n<\alpha$, then $J_\alpha\models \rho_n \text{ is a cardinal}$ so $\omega\cdot \rho_n = \rho_n$.
The exercise is asking to produce any arbitrary pattern of the projectums. More concretely like the following, exhibit a $J_\alpha$ such that $\rho_k(J_\alpha)=\alpha, k=0,1,2,3, \rho_4(J_\alpha)<\alpha, \rho_5<\rho_4, \rho_j=\rho_5 \forall j\geq 6$.
What I can do now is to produce one drop (I feel if somehow I know how to produce two drops then I am done). More precisely, consider $J_{\omega_2}$. Let $\xi$ be the least ordinal in $J_{\omega_2}$ which is not $\Sigma_4$-definable from $\omega_1$. Take the $\Sigma_4$ Skolem Hull in $J_{\omega_2}$ with parameters from $\omega_1 \cup \{\xi\}$, denoted by $Hull_{\Sigma_4}^{J_{\omega_2}}(\omega_1\cup \{\xi\}) \simeq_\pi J_{\beta}=Hull_{\Sigma_4}^{J_{\beta}}(\omega_1\cup \{\pi(\xi)\})$ by condensation via $\pi$. Then it's not hard to verify that $\rho_4(J_\beta)=\omega_1$ (with standard parameter $\{\pi(\xi)\}$) and $\rho_k(J_\beta)=\beta, k<4$ by elementarity.
But it is obvious that the above construction also yields that $\rho_k(J_\alpha)=\omega_1$ for all $k\geq 4$ by cardinality considerations. My feeling is that I should probably produce those projectums starting from $\rho_5$ (i.e. backtrack). But I don't see how, so far, to get another projectum drop. Thanks in advance!
|
The following answer doesn't really answer the question given, because it's not about counting spanning subgraphs but about counting all subgraphs where some edges are fixed (i.e., cannot be removed). However, I'm posting it because it gives me the strong impression that having different path lengths is not crucial to achieving hardness for this kind of problem. Maybe a variant of the argument can be used to show #P-hardness for the actual problem asked in the question. Edit: Another point is that this answer uses directed graphs, not undirected graphs.
If you can fix edges, then the problem is #P-hard. To see why, let's reduce from the problem #PP2DNF of counting the satisfying assignments of a positive partitioned 2-DNF Boolean formula, i.e., a formula $\phi$ on variables $X_1, \ldots, X_n, Y_1, \ldots, Y_m$ of the form $\bigvee_{1 \leq j \leq k} X_{n_j} \land Y_{m_j}$. This problem is #P-complete by this paper (sorry, couldn't find an open-access version).
Given $\phi$, let's build a DAG $G$ with the source $s$, the sink $t$, and vertices $x_1, \ldots, x_n$ and $y_1, \ldots, y_m$. Put one edge from $s$ to $x_i$ for each $i$, one edge from $y_i$ to $t$ for each $i$, and for each clause $X_{n_j} \land Y_{m_j}$ add an edge from $x_{n_j}$ to $y_{m_j}$ which is fixed. This completes the definition of $G$: note that all possible paths from $s$ to $t$ have length 3.
Now, choosing a subgraph of $G$ while keeping the fixed edges amounts to keeping a subset of the edges incident to $s$, and a subset of the edges incident to $t$. It is clear that there is a bijection between such subgraphs and the valuations of the variables of $\phi$, where we set a variable to true if we keep its one incident edge that is not fixed. Further, any path from $s$ to $t$ in such a subgraph of $G$ witnesses the existence of a clause that satisfies $\phi$ in the corresponding valuation. Hence, the number of satisfying valuations of $\phi$ is exactly the number of subgraphs of $G$ that keep the fixed edges, concluding the proof.
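To sanity-check the bijection on a toy instance, here is a brute-force Python sketch (my own illustration, not from the sources cited): it counts satisfying valuations of a small positive partitioned 2-DNF and counts the subgraphs of the corresponding $G$ (choices of non-fixed edges) that contain an $s$-$t$ path, and checks that the counts agree.

```python
from itertools import product

def count_sat(n, m, clauses):
    # clauses: list of pairs (a, b) meaning X_{a+1} AND Y_{b+1}, 0-indexed
    return sum(
        1
        for xs in product([0, 1], repeat=n)
        for ys in product([0, 1], repeat=m)
        if any(xs[a] and ys[b] for a, b in clauses)
    )

def count_subgraphs_with_st_path(n, m, clauses):
    # a subgraph keeping the fixed clause edges is a choice of which
    # s->x_i and y_j->t edges to keep; an s-t path exists iff some fixed
    # edge x_a -> y_b has both of its incident non-fixed edges kept
    return sum(
        1
        for keep_x in product([0, 1], repeat=n)
        for keep_y in product([0, 1], repeat=m)
        if any(keep_x[a] and keep_y[b] for a, b in clauses)
    )

# phi = (X1 AND Y1) OR (X2 AND Y2) OR (X1 AND Y2)
clauses = [(0, 0), (1, 1), (0, 1)]
assert count_sat(2, 2, clauses) == count_subgraphs_with_st_path(2, 2, clauses)
```

The two counting functions are structurally identical, which is precisely the point of the reduction: the valuation-to-subgraph bijection is the identity on the underlying bit vectors.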
(This proof is inspired by the proof of Theorem 3.2 in the book Probabilistic Databases by Suciu, Olteanu, Ré, and Koch; sorry again but I don't have an open-access link to this either.)
|
I don't know why, but when I wanted to upgrade to Brilliant^2, there's this problem I faced. I have a photo attached showing the problem. Can you shed some light on this for me?
Thank you :)
Note by Vaibhav Reddy 5 years, 4 months ago
Thanks Vaibhav, we will look into it. If we require any personal information, we will be in contact via email. @Suyeon Khim
Thank you :) ....
I just tried paying for a subscription using my dad's credit card, but it was declined 4 times. He called the credit card company (Chase) and they had no record of it. In their words: "It is not us". Can I pay using PayPal instead? Any ideas? Thanks, Lucas
Hi Lucas, I just sent you an email. We'll try to sort it out :)
Hi Vaibhav - I am sorry to hear that your card was declined. Usually, this means that your bank's automated system has decided that the charge looks suspicious, and has decided to decline it. It's possible that this is happening because we are a company based in the United States, and you typically only make purchases in India with that card. The way to resolve this issue is 1) ask your bank to accept future charges to Brilliant, or 2) try a different card.
Please feel free to email me with questions if you try either of the above and continue to have trouble. We are also working on adding more payment options for India that are more compatible with the Indian banking system, but it may take a month or two for us to implement it.
Oh thanks... :)
I already tried another card. I actually did only one payment with it. That card was also declined.

I don't know how to ask my bank to accept charges from Brilliant. Should I go to the bank and ask them, or is there any way to do it online? Can you tell me how?
Hi, I have the same problem. My card was declined when I applied for a premium subscription. I really want to be able to subscribe within the next hour, before the discount offer ends. What can I do to fix this?
Hi, I have the same problem here. I'm from Morocco and I get the message "This card was declined" when I enter the information. I have paid with this card on Coursera and at some other foreign sites without any problems. If you could help me with this issue. Thank you :)
Even I am having the same problem. Someone help please.

Hey @Vaibhav Reddy, were you able to pay?

I want to join Brilliant squared but every time I try my card gets declined. What should I do?
|
Sine-cubed function
Latest revision as of 21:26, 3 September 2011

This article is about a particular function from a subset of the real numbers to the real numbers. Information about the function, including its domain, range, and key data relating to graphing, differentiation, and integration, is presented in the article. View a complete list of particular functions on this wiki.

For functions involving angles (trigonometric functions, inverse trigonometric functions, etc.) we follow the convention that all angles are measured in radians. Thus, for instance, the angle of is measured as .

Contents

Definition
For brevity, we write $\sin^3x$ or $(\sin x)^3$.
Key data
- Default domain: all real numbers, i.e., all of $\mathbb{R}$
- Range: the closed interval $[-1,1]$ (absolute maximum value: 1, absolute minimum value: -1)
- Period: $2\pi$
- Local maximum values and points of attainment: all local maximum values are equal to 1, and they are attained at all points of the form $\pi/2 + 2n\pi$ where $n$ varies over integers.
- Local minimum values and points of attainment: all local minimum values are equal to -1, and they are attained at all points of the form $-\pi/2 + 2n\pi$ where $n$ varies over integers.
- Points of inflection (both coordinates): all points of the form $(n\pi, 0)$, as well as the points where $\sin^2x = 2/3$, where $n$ varies over integers.
- Derivative: $3\sin^2x \cos x$
- Second derivative: $3\sin x\,(2 - 3\sin^2x)$
- Antiderivative: $\frac{\cos^3x}{3} - \cos x + C$
- Important symmetries: odd function (follows from composite of odd functions is odd, and the fact that the cube function and sine function are both odd); half turn symmetry about all points of the form $(n\pi, 0)$; mirror symmetry about all lines $x = \pi/2 + n\pi$.
Identities
We have the identity:

$\sin^3x = \frac{3\sin x - \sin 3x}{4}$
Graph
Here is the basic graph, drawn on the interval :
Here is a more close-up graph, drawn on the interval . The thick black dots correspond to local extreme values, and the thick red dots correspond to points of inflection.
Differentiation

First derivative
To differentiate once, we use the chain rule for differentiation. Explicitly, we consider the function as the composite of the cube function and the sine function, so the cube function is the outer function and the sine function is the inner function.
We get:

$\frac{d}{dx}(\sin^3x) = 3\sin^2x \cos x$
Integration

First antiderivative: standard method
We rewrite $\sin^3x = (1 - \cos^2x)\sin x$ and then do integration by $u$-substitution where $u = \cos x$. Explicitly:

$\int \sin^3x\,dx = \int (1 - \cos^2x)\sin x\,dx$

Now put $u = \cos x$. We have $du = -\sin x\,dx$, so we can replace $\sin x\,dx$ by $-du$, and we get:

$\int (u^2 - 1)\,du$

By polynomial integration, we get:

$\frac{u^3}{3} - u + C$

Plugging back $u = \cos x$, we get:

$\frac{\cos^3x}{3} - \cos x + C$.

Here, $C$ is an arbitrary real constant.
First antiderivative: using triple angle formula
An alternate method for integrating the function is to use the identity:

$\sin^3x = \frac{3\sin x - \sin 3x}{4}$

We thus get:

$\int \sin^3x\,dx = -\frac{3\cos x}{4} + \frac{\cos 3x}{12} + C$

This answer looks superficially different from the other answer. However, using the identity $\cos 3x = 4\cos^3x - 3\cos x$, we can verify that the antiderivatives are exactly the same.
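As a quick numerical sanity check (a sketch of my own in Python), one can verify both the triple-angle identity and the antiderivative from the standard method:

```python
import math

def sin_cubed(x):
    return math.sin(x) ** 3

# triple angle identity: sin^3 x = (3 sin x - sin 3x) / 4
for x in [0.0, 0.5, 1.3, -2.7]:
    assert abs(sin_cubed(x) - (3 * math.sin(x) - math.sin(3 * x)) / 4) < 1e-12

# antiderivative F(x) = cos^3(x)/3 - cos(x); its central difference
# should recover sin^3 x
def F(x):
    return math.cos(x) ** 3 / 3 - math.cos(x)

h = 1e-6
for x in [0.3, 1.1, 2.2]:
    assert abs((F(x + h) - F(x - h)) / (2 * h) - sin_cubed(x)) < 1e-6
```

Both checks pass to within floating-point tolerance, confirming that the two antiderivatives agree up to a constant.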
Repeated antidifferentiation
The antiderivative of $\sin^3x$ involves $\cos^3$ and $\cos$, both of which can be antidifferentiated, and this in turn involves $\sin^3$ and $\sin$. We can thus antidifferentiate (i.e., integrate) the function any number of times, with the antiderivative expression alternating between a cubic function of sine and a cubic function of cosine.
Power series and Taylor series

Computation of power series

We can use the identity:

$\sin^3x = \frac{3\sin x - \sin 3x}{4}$

We have the power series:

$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \dots = \sum_{k=0}^\infty \frac{(-1)^k x^{2k+1}}{(2k+1)!}$

We thus get the power series:

$\sin 3x = \sum_{k=0}^\infty \frac{(-1)^k 3^{2k+1} x^{2k+1}}{(2k+1)!}$

Plugging into the formula, we get:

$\sin^3x = \sum_{k=1}^\infty \frac{(-1)^{k+1}\left(3^{2k+1} - 3\right)}{4\,(2k+1)!}\, x^{2k+1}$

The first few terms are as follows:

$\sin^3x = x^3 - \frac{x^5}{2} + \frac{13x^7}{120} - \dots$
|
Why do negative even numbers plugged into the Zeta function produce a zero? The Riemann Hypothesis implies that the non-trivial zeros are connected to the primes, so how does that fit with negative even numbers?
The functional equation of the zeta function, needed to extend that function analytically to $\;\Bbb C\setminus\{1\}\;$ and one of the most astonishingly beautiful equations in mathematics, is
$$\zeta(s)=2^s\pi^{s-1}\,\sin\frac{\pi s}2\,\Gamma(1-s)\,\zeta(1-s)$$
Well, now for $\;s=-2n\;,\;\;n\in\Bbb N\;$ , you get $\;\zeta(-2n)=0\;$ ...
For example, the zeta function at $-2$ corresponds formally to the divergent series $1^2 + 2^2 + 3^2 + \dots$; it is the analytic continuation, not a literal sum, that assigns the value $\zeta(-2) = 0$.

Similarly, at $-4$ the formal series is $1^4 + 2^4 + 3^4 + \dots$, and again the continued value is $\zeta(-4) = 0$.

The same holds at $-6$, $-8$, and so on: these zeros come straight from the $\sin\frac{\pi s}2$ factor in the functional equation, which vanishes at negative even integers while the remaining factors stay finite. For that reason, these are considered the trivial zeros of the zeta function.
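One can check both the trivial zeros and the functional equation numerically with arbitrary-precision arithmetic (a sketch using the mpmath library; the precision setting and spot-check point are my own choices):

```python
from mpmath import mp, mpc, zeta, gamma, sin, pi, power

mp.dps = 30  # 30 significant digits

# trivial zeros: zeta(-2n) = 0 for n = 1, 2, 3, ...
for n in range(1, 6):
    assert abs(zeta(-2 * n)) < mp.mpf("1e-25")

# spot-check the functional equation at a generic complex point s
s = mpc("0.3", "0.4")
lhs = zeta(s)
rhs = power(2, s) * power(pi, s - 1) * sin(pi * s / 2) * gamma(1 - s) * zeta(1 - s)
assert abs(lhs - rhs) < mp.mpf("1e-20")
```

The check at a generic $s$ shows the functional equation is not special to the trivial zeros; at $s = -2n$ it simply forces the value to vanish through the sine factor.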
|
My current lecture states the theorem of monotone convergence:
Let $(X, \mathcal{E}, \mu)$ a measure space and $\varphi_n: X \rightarrow \mathbb{R}$ an increasing sequence of $\mu$-integrable functions with:
$$\exists M \in \mathbb{R}: \forall n \in \mathbb{N}: \int_X\varphi_n\,d\mu \leq M$$
Then $\varphi := \lim \varphi_n: X \rightarrow \overline{\mathbb{R}}$ is $\mu$-integrable with:
$$\int_X \varphi\,d\mu = \lim_{n \rightarrow \infty} \int_X\varphi_n\,d\mu$$
Note: A function $f: X \rightarrow \mathbb{R}$ is called $\mu$-integrable iff $\int_X|f|\,d\mu < \infty$.
Wikipedia states:
Let $(X, \mathcal{E}, \mu)$ be a measure space. For a pointwise non-decreasing sequence of $\mathcal{E}$-measurable non-negative functions $f_k: X \rightarrow [0, \infty]$ consider the pointwise limit $$f := \lim_{k\rightarrow\infty} f_k$$ Then $f$ is $\mathcal{E}$-measurable with:
$$ \lim_{k\rightarrow\infty} \int_X f_k\, d\mu = \int_X f\,d\mu$$
Now I'm a little bit confused: my lecture doesn't require the functions of the sequence to be non-negative, but it introduces the extra upper bound. What is the difference here?

The Wikipedia version allows one to set the integral of the limit equal to the limit of the integrals unconditionally (both sides may be $+\infty$). The lecture version permits this only under the condition that the sequence of integrals is bounded. Is this boundedness really required?
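A concrete instance of the lecture version (my own illustrative sketch using numpy): take $\varphi_n(x) = \min(n, x^{-1/2})$ on $(0,1]$. The sequence is increasing, each $\varphi_n$ is integrable, the integrals are bounded by $M = 2$, and the pointwise limit $x^{-1/2}$ is integrable with integral exactly 2 — so the conclusion of the theorem applies.

```python
import numpy as np

# phi_n(x) = min(n, x^{-1/2}) on (0, 1]: increasing in n, each integrable,
# integrals bounded by 2; the pointwise limit x^{-1/2} has integral 2
xs = np.linspace(1e-9, 1.0, 200_001)
dx = xs[1] - xs[0]

vals = []
for n in [1, 2, 5, 10, 100]:
    phi_n = np.minimum(n, 1.0 / np.sqrt(xs))
    vals.append(float(np.sum(phi_n) * dx))   # crude Riemann sum

# the integrals increase and stay below the bound M = 2
assert all(a <= b + 1e-9 for a, b in zip(vals, vals[1:]))
assert all(v <= 2.0 + 1e-3 for v in vals)
```

By contrast, with $\varphi_n \equiv n$ the integrals are unbounded and the limit is not integrable: the Wikipedia version still holds trivially as $\infty = \infty$, while the lecture version's conclusion (integrability of the limit) fails — which is exactly why the lecture adds the bound.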
|
I don't have an answer to the question "why would one want to consider such crazy stuff in
physics?" since I don't know much physics, but as a mathematics student I do have an answer to the question "why would one want to consider such crazy stuff in mathematics?"
What physicists call Grassmann numbers are what mathematicians call elements of the exterior algebra $\Lambda(V)$ over a vector space $V$. The exterior algebra naturally arises as the solution to the following geometric problem. Say that $V$ has dimension $n$ and let $v_1, ... v_n$ be a basis of it. We would like a nice natural definition of the $n$-dimensional volume of the parallelotope defined by the vectors $\epsilon_1 v_1 + ... + \epsilon_n v_n, \epsilon_i \in \{ 0, 1 \}$. When $n = 2$ this is the standard parallelogram defined by two linearly independent vectors, and when $n = 3$ this is the standard parallelepiped defined by three linearly independent vectors.
The thing about the naive definition of volume is that it is very close to having really nice mathematical properties: it is almost multilinear. That is, if we denote the volume we're looking at by $\text{Vol}(v_1, ... v_n)$, then it is almost true that $\text{Vol}(v_1, ... v_i + cw, ... v_n) = \text{Vol}(v_1, ... v_n) + c \text{Vol}(v_1, ... v_{i-1}, w, v_{i+1}, ... v_n)$. You can draw nice diagrams to see this readily. However, it isn't actually completely multilinear: depending on how you vary $w$ you will find that sometimes the volume shrinks to zero and then goes back up in a non-smooth way when really it ought to keep getting more negative. (You can see this even in two dimensions, by varying one of the vectors until it goes past the other.)
To fix that, we need to look instead at oriented volume, which can be negative, but which has the enormous advantage of being completely multilinear and smooth. The other major property it satisfies is that if any two of the vectors $v_i$ agree (that is, if the vectors are linearly dependent) then the oriented volume is zero, which makes sense. It turns out (and this is a nice exercise) that this is equivalent to oriented volume coming from a "product" operation, the exterior product, which is anticommutative. Formally, these two conditions define an element of the top exterior power $\Lambda^n(V)$ defined by the exterior product $v_1 \wedge v_2 ... \wedge v_n$, and choosing an element of this top exterior power (a volume form) allows us to associate an actual number to an $n$-tuple of vectors which we can call its oriented volume in the more naive sense. If $V$ is equipped with an inner product, then there are two distinguished elements of $\Lambda^n(V)$ given by a wedge product of an orthonormal basis in some order, and it's natural to pick one of these as a volume form.
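In coordinates, the top oriented volume is just a determinant, so the three properties above — multilinearity, antisymmetry, and vanishing on repeated vectors — can be checked numerically (a small numpy sketch of my own):

```python
import numpy as np

rng = np.random.default_rng(0)
v1, v2, v3, w = rng.standard_normal((4, 3))

# oriented volume of the parallelotope spanned by three vectors in R^3
vol = lambda a, b, c: np.linalg.det(np.column_stack([a, b, c]))

c = 1.7
# multilinearity in each slot
assert np.isclose(vol(v1 + c * w, v2, v3), vol(v1, v2, v3) + c * vol(w, v2, v3))
# antisymmetry: swapping two vectors flips the sign
assert np.isclose(vol(v1, v2, v3), -vol(v2, v1, v3))
# a repeated vector gives zero volume
assert np.isclose(vol(v1, v1, v3), 0.0)
```

The unsigned (naive) volume would be `abs(vol(...))`, and taking the absolute value is exactly what destroys multilinearity and smoothness, as described above.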
Alright, so what about the rest of the exterior powers $\Lambda^p(V)$ that make up the exterior algebra? The point of these is that if $v_1, ... v_p, p < n$ is a tuple of vectors in $V$, we can consider the subspace they span and talk about the $p$-dimensional oriented volume of the parallelotope given by the $v_i$ in this subspace. But the result of this computation shouldn't just be a number: we need a way to do this that keeps track of what subspace we're in. It turns out that mathematically the most natural way to do this is to keep in mind the requirements we really want out of this computation (multilinearity and the fact that if the $v_i$ are not linearly independent then the answer should be zero), and then just define the result of the computation to be the universal thing that we get by imposing these requirements and nothing else, and this is nothing more than the exterior power $\Lambda^p(V)$.
This discussion hopefully motivated for you why the exterior algebra is a natural object from the perspective of geometry. Since Einstein, physicists have been aware that geometry has a lot to say about physics, so hopefully the concept makes a little more sense now.
Let me also say something about how modern mathematicians think about "space" in the abstract sense. The inspiration for the modern point of view actually derives at least partially from physics: the only thing you can really know about a space are observables defined on it. In classical physics, observables form a commutative ring, so one might say roughly speaking that the study of commutative rings is the study of "classical spaces." In mathematics this study, in the abstract, is called algebraic geometry. It is a very sophisticated theory that encompasses classical algebraic geometry, arithmetic geometry, and much more, and it is in large part because of the success of this theory and related commutative ring approaches to geometry (topological spaces, manifolds, measure spaces) that mathematicians have gotten used to the slogan that "commutative rings are rings of observables on some space."
Of course, quantum mechanics tells us that the actual universe around us doesn't work this way. The observables we care about don't commute, and this is a big issue. So mathematically what is needed is a way to think about noncommutative rings as "quantum spaces" in some sense. This subject is very broad, but roughly it goes by the name of noncommutative geometry. The idea is simple: if we want to take quantum mechanics completely seriously, our spaces shouldn't have "points" at all because points are classical phenomena that implicitly require a commutative ring of observables, which we know is not what we actually have. So our spaces should be more complicated things coming from noncommutative rings in some way.
Grassmann numbers satisfy one of the most tractable forms of noncommutativity (actually they are commutative if one alters the definition of "commutative" very slightly, but never mind that...), and even better it is a form of noncommutativity that is clearly related to something physicists care about (the properties of fermions), so anticommuting observables are a natural step up from commuting observables in order to get our mathematics to align more closely with reality while still being able to think in an approximately classical way.
|
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session if it crashes with 100% certainty).
And Chrome has a Personal Blocklist extension which does what you want.
: )
Of course you already have a Google account but Chrome is cool : )
Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies?
do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times using some quite deep ideas from logic a new rigorous theory of infinitesimals was created.
@QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value.
I have a problem showing that the limit of the following function $$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$ equals $1$ as $n \to \infty$.
@QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0.
@KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what/s y? "dy/dx is by definition not continuous" it's not a function how can you ask whether or not it's continous, ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people
in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results
@QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O
@NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that.
@NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment.
@QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h).
@KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow)
Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with \int_0^{2 \pi} \frac{d}{dn} e^{inx} dx when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0, but if he first differentiates and then integrates it's not 0. Does anyone know?
|
I have to prove the following problem in propositional logic:
Let $F$ be a set of clauses and let $F' = F \cup \{res(C_1,C_2,A_i)\}$ be the extension of $F$ by a resolvent of some clauses $C_1,C_2 \in F$, where $A_i$ is a literal occurring positively in $C_1$ and negatively in $C_2$.
Prove that: If $F$ is valid, then $F'$ is valid.
So in other words I have to prove that when I construct the union of the original formula $F$ and the formula resulting by applying resolution on $F$ over a literal $A_i$ in $F$, validity is still preserved.
I think that this should be provable by applying a direct proof.
Recall that resolution is defined as follows: given two clauses $C_1 = (A_1 \lor \dots \lor A_i \lor \dots \lor A_n)$ and $C_2 = (B_1 \lor \dots \lor B_j \lor \dots \lor B_m)$ such that for some $i, j$ with $1 \leq i \leq n$ and $1 \leq j \leq m$, $A_i = \neg B_j$,
the resolvent of $C_1$ and $C_2$ on $A_i$ is the clause
$res(C_1,C_2,A_i) = (A_1 \lor \dots \lor A_{i-1} \lor A_{i+1} \lor \dots \lor A_n \lor B_1 \lor \dots B_{j-1} \lor B_{j+1} \lor \dots \lor B_m)$
EDIT: Here is my try:
Let $I$ be an interpretation taken from the set of models $Mod(F)$; hence $I(F) = 1$. Because this interpretation satisfies $F$, it must also satisfy $C_1$ and $C_2$. Looking at the structure of $C_1$ and $C_2$, we have to distinguish between $2$ different cases:
(1) $A_i$ is positive in $C_1$ but negative in $C_2$ and
(2) $A_i$ is negative in $C_1$ but positive in $C_2$.
In case (1), if $A_i$ is true under $I$, then the corresponding literal $\neg A_i$ in $C_2$ is false, so $I$ must satisfy some other literal $B_{j'}$ ($j' \neq j$) of $C_2$; if $A_i$ is false under $I$, then $I$ must satisfy some other literal $A_{i'}$ ($i' \neq i$) of $C_1$. Case (2) is symmetric.
In either case, $I$ satisfies a literal that also occurs in $res(C_1,C_2,A_i)$, so the resolvent is satisfied as well. Since $I$ was an arbitrary model of $F$, every model of $F$ is a model of $F'$.
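The argument above can be spot-checked by brute force on a small, hypothetical clause set (the clauses and variable names here are made up purely for illustration):

```python
from itertools import product

# Hypothetical example: clauses over variables p, q, r.
# C1 = (p OR q), C2 = (NOT p OR r); resolving on p gives res(C1, C2, p) = (q OR r).
# A literal is a (name, sign) pair; sign True means the positive literal.
C1 = [("p", True), ("q", True)]
C2 = [("p", False), ("r", True)]
resolvent = [("q", True), ("r", True)]

def satisfies(assignment, clause):
    """A clause is true under an assignment iff some literal of it is true."""
    return any(assignment[name] == sign for name, sign in clause)

# Every interpretation I that satisfies both C1 and C2 (i.e., every model of F)
# also satisfies the resolvent, so F and F' = F ∪ {resolvent} have the same models.
for values in product([False, True], repeat=3):
    I = dict(zip(["p", "q", "r"], values))
    if satisfies(I, C1) and satisfies(I, C2):
        assert satisfies(I, resolvent)
print("every model of {C1, C2} satisfies res(C1, C2, p)")
```

The exhaustive loop is exactly the case analysis of the proof: whenever p is true, C2 forces r; whenever p is false, C1 forces q.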
|
Why is multiplying fractions the same as multiplying the tops and multiplying the bottoms? $$\frac{a}{b}\times \frac{c}{d}=\frac{a\times c}{b \times d}.$$ And why is $$\frac{a}{b}\times \frac{c}{c}=\frac{a}{b},$$ and also $$\frac{a}{b}+\frac{c}{b}=\frac{a+c}{b}?$$ I understand it, but I want a mathematical approach, as a math student would prove it. I also want to know which area of mathematics this question belongs to (number theory, logic, etc.). A full answer is not necessary; just a reference.
I'll give an abstract look at why these identities hold in arbitrary fields.
In some sense, this is
the definition of addition and multiplication of fractions. Specifically, we can define division to be the multiplication of the numerator with the inverse of the denominator.
For example, we can write $\frac{a}{b}=ab^{-1}$, identifying division as taking the multiplicative inverse. Then$$ \frac{a}{b}\cdot \frac{c}{d} =(a\cdot b^{-1})\cdot (c\cdot d^{-1})$$
Now, if we want multiplication to be associative and commutative, then we would find that $$(a\cdot b^{-1})\cdot (c\cdot d^{-1})=(a\cdot c)\cdot (d^{-1}\cdot b^{-1})$$ It is a general fact that $(xy)^{-1}=y^{-1}x^{-1}$, which can be verified directly by multiplying $(xy)$ by both $y^{-1}x^{-1}$ and $(xy)^{-1}$. Then we find that $$ \frac{a}{b}\cdot \frac{c}{d}=(a\cdot c)\cdot (d^{-1}\cdot b^{-1})=(a\cdot c)\cdot (b\cdot d)^{-1}=\frac{a\cdot c}{b\cdot d}$$
Similar justifications can be given for the remaining identities. For example, $\frac{c}{c}=1$ can be verified by $\frac{c}{c}=c\cdot c^{-1}=1$. Again, this is rather definitional.
There's another way in which we can view these identities for fractions as well: an approach mirroring the construction of the rationals from the integers. If we're given an integral domain (i.e. a commutative ring in which $ab=0$ implies that one of $a$ and $b$ is equal to $0$) $(R,+,\cdot,0,1)$, where $+$ is some notion of "addition", $\cdot$ some notion of "multiplication", $0$ the identity for addition, and $1$ the identity for multiplication, then we can form a field $\operatorname{Quot}(R)$ called the quotient field or fraction field of $R$.
Specifically, we define the underlying set of $\operatorname{Quot}(R)$ by the quotient $[R\times (R\setminus\{0\})]/\sim$, where $\sim$ is the equivalence relation defined by $(a,b)\sim(c,d)$ if and only if $a\cdot d=b\cdot c$. The idea is that the ordered pairs $(a,b)\in R\times (R\setminus \{0\})$ represent the fractions of elements in $R$, but we also want to identify "equivalent" fractions, and thus we introduce the equivalence relation.
We'll represent the equivalence class of an element $(a,b)$ in $\operatorname{Quot}(R)$ by $\frac{a}{b}$.
Then the definition of addition and multiplication are exactly the commonly given identities for the addition and multiplication of fractions: $$\frac{a}{b}+\frac{c}{d}=\frac{ad+bc}{bd} \quad \text{and} \quad \frac{a}{b}\cdot \frac{c}{d}=\frac{a\cdot c}{b\cdot d}$$
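As a sanity check (not a proof), Python's `fractions.Fraction` implements exactly this field of fractions for the integral domain $\mathbb{Z}$, and the three identities from the question can be verified on sample values:

```python
from fractions import Fraction

# Sample values; b and d must be nonzero.
a, b, c, d = 3, 4, 5, 7

# (a/b) * (c/d) = (a*c)/(b*d)
assert Fraction(a, b) * Fraction(c, d) == Fraction(a * c, b * d)

# (a/b) * (c/c) = a/b, since c/c = 1
assert Fraction(a, b) * Fraction(c, c) == Fraction(a, b)

# a/b + c/b = (a+c)/b, addition over a common denominator
assert Fraction(a, b) + Fraction(c, b) == Fraction(a + c, b)

print("all three identities hold for the sample values")
```

`Fraction` also normalizes representatives of the equivalence classes (e.g. `Fraction(2, 4) == Fraction(1, 2)`), mirroring the relation $(a,b)\sim(c,d) \iff ad = bc$.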
|
Elliptic boundary value problems in spaces of continuous functions
Dipartimento di Matematica Applicata, Università di Pisa, Via Buonarroti 1/C, 56127 Pisa
full regularity occurs: namely, for each $\,\lambda>\,0\,$ and arbitrary real $\,\alpha\,$, $\,\nabla^2\,u $ and $\,f\,$ enjoy the same $\, C^{0,\,\lambda}_\alpha(\overline{\Omega}) \,$ regularity. All of the above setup is presented as part of a more general picture.

Keywords: data spaces of continuous functions, full regularity, linear elliptic boundary value problems, continuity properties of higher order derivatives, classical solutions.

Mathematics Subject Classification: Primary: 31B10; Secondary: 31B35, 33E30, 35A09, 35B65, 35J25, 58F15.

Citation: Hugo Beirão da Veiga. Elliptic boundary value problems in spaces of continuous functions. Discrete & Continuous Dynamical Systems - S, 2016, 9 (1) : 43-52. doi: 10.3934/dcdss.2016.9.43
|
Show that $f$ is continuous and in neigbourhood of $(0,0)$ has bounded partial derivatives but is
not differentiable.$$f(x,y) = \begin{cases}\frac{xy}{\sqrt{x^2+y^2}} & x^2+y^2 \neq 0 \\0 & x = y = 0\end{cases}$$This seems like basic introductory material and is also a copy of the same question on StackExchange, but unfortunately it was not sufficiently answered there.
I'm stuck at the beginning. The function $f$ is said to be continuous at the point $(0,0)$ if it satisfies the following: $$ \lim_{(x,y) \rightarrow (0,0)}f(x,y)= f(0,0) $$
When I plug that in, I immediately get the indeterminate form: $$ \lim_{(x,y) \rightarrow (0,0)}f(x,y) = \lim_{(x,y) \rightarrow (0,0)} \frac{xy}{\sqrt{x^2 + y^2}} = \frac{0}{0} = \text{ ?} $$ How should I tackle that problem?
|
Just as we did for linear momentum conservation, we will summarize the main ideas of the angular momentum conservation model/approach by listing the constructs, i.e., the “things” or ideas that get “used” in the model; the relationships, in mathematical or sentence form, that connect the constructs in meaningful ways; and the ways of representing the relationships.
Developing a deep and rich understanding of the relationships in a model/approach comes slowly. It is absolutely not something you can memorize. This understanding comes only with repeated hard mental effort over a period of time. A good test you can use to see if you are “getting it” is whether you can tell a full story about each of the relationships. It is the meaning behind the equations, behind the simple sentence relationships, that is important for you to acquire. With this kind of understanding, you can apply a model/approach to the analysis of phenomena you have not thought about before. You can
reason with the model.

Listed here are the major, most important constructs, relationships, and representations of the angular momentum conservation model.

Constructs

Angular Velocity, \(\omega\)

Rotational Inertia, \(I\)

Angular Momentum, \(L\)

Net Torque, \(\Sigma\tau\)

Angular Impulse, \(angJ\)

Newton’s 3rd law

Conservation of angular momentum

Relationships
The angular velocity is the time derivative of the angular displacement:
\[ \omega = \frac{d\theta}{dt} \: or \: \omega_{average} = \frac{\Delta\theta}{\Delta t} \]
The angular momentum of an object measured about some fixed axes is simply the product of the object’s rotational inertia and angular velocity:
\[ L = I\omega \]
The angular impulse of the total (or net) external torque acting on an object equals the product of the average torque and the time interval during which the torque acted.
\[ Net \: Angular \: Impulse_{ext} = angJ = \Sigma\tau_{avg \: ext}\Delta t = \int \Sigma\tau_{ext}(t)dt \]
The directions of torque, impulse, angular velocity, and angular momentum as determined by the right-hand rule.
The torque (angular impulse) exerted by object A on object B is equal and opposite to the torque (angular impulse) exerted by object B on object A.
\[\tau_{A\:on\:B}=-\tau_{B\:on\:A} \:\: \text{and} \:\: angJ_{A\:on\:B} = -angJ_{B\:on\:A} \]
Conservation of Angular Momentum
If the net external angular impulse acting on a system is zero, then there is no change in the total angular momentum of that system; otherwise, the change in angular momentum is equal to the net external angular impulse.
\[ Net \: Angular \: Impulse_{ext} = angJ= \int \Sigma\tau_{ext}(t)dt = L_{f} - L_{i} = \Delta L_{system} \]
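A minimal numerical sketch of this conservation relationship, with made-up numbers for the classic spinning-skater scenario (no external torque, so \(L\) stays constant while \(I\) changes):

```python
# Assumed example values: a skater with arms extended has rotational inertia
# I1 = 3.0 kg·m² and spins at omega1 = 2.0 rad/s; pulling the arms in
# reduces the rotational inertia to I2 = 1.0 kg·m².
I1, omega1 = 3.0, 2.0
I2 = 1.0

L = I1 * omega1        # angular momentum before, L = I·ω
omega2 = L / I2        # zero net angular impulse → L unchanged, so ω2 = L/I2

assert abs(I2 * omega2 - L) < 1e-12   # ΔL = 0
print(omega2)  # 6.0 rad/s: the spin rate triples as I drops to a third
```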
Representations
Graphical representation of all vector quantities and (vector relationships) as arrows whose length is proportional to the magnitude of the vector and whose direction is in the direction of the vector quantity.
Algebraic vector equations. Vectors denoted as bold symbols or with small arrows over the symbol.
Component algebraic equations, one equation for each of the three independent directions.

A useful way to organize and use the representations of the various quantities that occur in phenomena involving angular momentum, change in angular momentum, and angular impulse and torques is an angular momentum chart, which is totally analogous to the linear momentum chart. The angular momentum chart helps us keep track of what we know about the interaction, as well as helping us see what we don’t know.
The boxes are to be filled in with scaled arrows representing the various angular momenta and changes in angular momenta.
Contributors
Authors of Phys7B (UC Davis Physics Department)
|
In mathematics, the radian is the standard unit for measuring angles. The radian measure of an angle equals the length of the corresponding arc of a unit circle; more generally, the radian expresses the relationship between the arc length and the radius of a circle. The degree and radian measure formulas below are what one needs to convert degrees to radians and radians to degrees.
\[\LARGE Radian=\frac{Arc\;Length}{Radius\;Length}\]
\[\LARGE Radian=\frac{Degree\times \pi}{180}\]
Further, degrees can be used to define the directionality and the size of an angle. If you stand facing directly north, that direction is defined as zero degrees; if you end up facing north again after a complete turn, that full revolution is marked as 360°.
Next comes the radian, but the question is why we need to learn radians when we already have degrees. In mathematics we need plain numbers to perform calculations, and degrees must first be converted to some equivalent numeric form before they can be used. Writing a full revolution as 2π is far more useful in calculation than 360°. Here is how degrees are converted to radians in mathematics.
360°=2π radians, and
180°=π radians
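These conversions are easy to encode; the sketch below (Python, with the standard library's own converters used as a cross-check) follows the formula Radian = Degree × π / 180:

```python
import math

def deg_to_rad(degrees):
    """Radian = Degree × π / 180."""
    return degrees * math.pi / 180

def rad_to_deg(radians):
    """Degree = Radian × 180 / π."""
    return radians * 180 / math.pi

# The two anchor facts from the text:
assert abs(deg_to_rad(360) - 2 * math.pi) < 1e-12   # 360° = 2π radians
assert abs(deg_to_rad(180) - math.pi) < 1e-12       # 180° = π radians

# Cross-check against the standard library:
assert abs(deg_to_rad(30) - math.radians(30)) < 1e-12
assert abs(rad_to_deg(math.pi / 6) - 30) < 1e-12
```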
Both degrees and radians have their place. Once you are sure how to work with these two popular units, mathematical calculations will be much simpler than you expect. Also, you should check whether the rotation is clockwise or anticlockwise to make the solutions more accurate and precise.
|
Learning Objectives
Formulate the principle of conservation of mechanical energy, with or without the presence of non-conservative forces
Use the conservation of mechanical energy to calculate various properties of simple systems
In this section, we elaborate and extend the result we derived in Potential Energy of a System, where we re-wrote the work-energy theorem in terms of the change in the kinetic and potential energies of a particle. This will lead us to a discussion of the important principle of the conservation of mechanical energy. As you continue to examine other topics in physics, in later chapters of this book, you will see how this conservation law is generalized to encompass other types of energy and energy transfers. The last section of this chapter provides a preview.
The terms ‘conserved quantity’ and ‘conservation law’ have specific, scientific meanings in physics, which are different from the everyday meanings associated with the use of these words. (The same comment is also true about the scientific and everyday uses of the word ‘work.’) In everyday usage, you could conserve water by not using it, or by using less of it, or by re-using it. Water is composed of molecules consisting of two atoms of hydrogen and one of oxygen. Bring these atoms together to form a molecule and you create water; dissociate the atoms in such a molecule and you destroy water. However, in scientific usage, a
conserved quantity for a system stays constant, changes by a definite amount that is transferred to other systems, and/or is converted into other forms of that quantity. A conserved quantity, in the scientific sense, can be transformed, but not strictly created or destroyed. Thus, there is no physical law of conservation of water.

Systems with a Single Particle or Object
We first consider a system with a single particle or object. Returning to our development of Equation 8.2, recall that we first separated all the forces acting on a particle into conservative and non-conservative types, and wrote the work done by each type of force as a separate term in the work-energy theorem. We then replaced the work done by the conservative forces by the change in the potential energy of the particle, combining it with the change in the particle’s kinetic energy to get Equation 8.2. Now, we write this equation without the middle step and define the sum of the kinetic and potential energies, K + U = E; to be the
mechanical energy of the particle.
Conservation of Energy
The mechanical energy E of a particle stays constant unless forces outside the system or non-conservative forces do work on it, in which case, the change in the mechanical energy is equal to the work done by the non-conservative forces:
$$W_{nc,\; AB} = \Delta (K + U)_{AB} = \Delta E_{AB} \ldotp \label{8.12}$$
This statement expresses the concept of
energy conservation for a classical particle as long as there is no non-conservative work. Recall that a classical particle is just a point mass, is nonrelativistic, and obeys Newton’s laws of motion. In Relativity, we will see that conservation of energy still applies to a non-classical particle, but for that to happen, we have to make a slight adjustment to the definition of energy.
It is sometimes convenient to separate the case where the work done by non-conservative forces is zero, either because no such forces are assumed present, or, like the normal force, they do zero work when the motion is parallel to the surface. Then
$$0 = W_{nc,\; AB} = \Delta (K + U)_{AB} = \Delta E_{AB} \ldotp \label{8.13}$$
In this case, the conservation of mechanical energy can be expressed as follows: The mechanical energy of a particle does not change if all the non-conservative forces that may act on it do no work. Understanding the concept of energy conservation is the important thing, not the particular equation you use to express it.
Problem-Solving Strategy: Conservation of Energy
1. Identify the body or bodies to be studied (the system). Often, in applications of the principle of mechanical energy conservation, we study more than one body at the same time.
2. Identify all forces acting on the body or bodies.
3. Determine whether each force that does work is conservative. If a non-conservative force (e.g., friction) is doing work, then mechanical energy is not conserved. The system must then be analyzed with non-conservative work, Equation 8.12.
4. For every force that does work, choose a reference point and determine the potential energy function for the force. The reference points for the various potential energies do not have to be at the same location.
5. Apply the principle of mechanical energy conservation by setting the sum of the kinetic energies and potential energies equal at every point of interest.
Example 8.7
Simple Pendulum
A particle of mass m is hung from the ceiling by a massless string of length 1.0 m, as shown in Figure 8.8. The particle is released from rest, when the angle between the string and the downward vertical direction is 30°. What is its speed when it reaches the lowest point of its arc?
Strategy
Using our problem-solving strategy, the first step is to define that we are interested in the particle-Earth system. Second, only the gravitational force is acting on the particle, which is conservative (step 3). We neglect air resistance in the problem, and no work is done by the string tension, which is perpendicular to the arc of the motion. Therefore, the mechanical energy of the system is conserved, as represented by Equation 8.13, 0 = \(\Delta\)(K + U). Because the particle starts from rest, the increase in the kinetic energy is just the kinetic energy at the lowest point. This increase in kinetic energy equals the decrease in the gravitational potential energy, which we can calculate from the geometry. In step 4, we choose a reference point for zero gravitational potential energy to be at the lowest vertical point the particle achieves, which is mid-swing. Lastly, in step 5, we set the sum of energies at the highest point (initial) of the swing to the lowest point (final) of the swing to ultimately solve for the final speed.
Solution
We are neglecting non-conservative forces, so we write the energy conservation formula relating the particle at the highest point (initial) and the lowest point in the swing (final) as
$$K_{i} + U_{i} = K_{f} + U_{f} \ldotp$$
Since the particle is released from rest, the initial kinetic energy is zero. At the lowest point, we define the gravitational potential energy to be zero. Therefore our conservation of energy formula reduces to
$$\begin{split} 0 + mgh & = \frac{1}{2} mv^{2} + 0 \\ v & = \sqrt{2gh} \ldotp \end{split}$$
The vertical height of the particle is not given directly in the problem. This can be solved for by using trigonometry and two givens: the length of the pendulum and the angle through which the particle is vertically pulled up. Looking at the diagram, the vertical dashed line is the length of the pendulum string. The vertical height is labeled h. The other partial length of the vertical string can be calculated with trigonometry. That piece is solved for by
$$\cos \theta = \frac{x}{L} \Rightarrow x = L \cos \theta \ldotp$$
Therefore, by looking at the two parts of the string, we can solve for the height h,
$$\begin{split} x + h & = L \\ L \cos \theta + h & = L \\ h & = L - L \cos \theta \\ & = L(1 - \cos \theta) \ldotp \end{split}$$
We substitute this height into the previous expression solved for speed to calculate our result:
$$v = \sqrt{2gL(1 - \cos \theta)} = \sqrt{2(9.8\; m/s^{2})(1\; m)(1 - \cos 30^{o})} = 1.62\; m/s \ldotp$$
Significance
We found the speed directly from the conservation of mechanical energy, without having to solve the differential equation for the motion of a pendulum (see Oscillations). We can approach this problem in terms of bar graphs of total energy. Initially, the particle has all potential energy, being at the highest point, and no kinetic energy. When the particle crosses the lowest point at the bottom of the swing, the energy moves from the potential energy column to the kinetic energy column. Therefore, we can imagine a progression of this transfer as the particle moves between its highest point, lowest point of the swing, and back to the highest point (Figure 8.9). As the particle travels from the lowest point in the swing to the highest point on the far right hand side of the diagram, the energy bars go in reverse order from (c) to (b) to (a).
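The arithmetic of this example can be reproduced in a few lines (Python used here purely as a calculator):

```python
import math

# Values from the example: L = 1.0 m, θ = 30°, g = 9.8 m/s².
g, L, theta = 9.8, 1.0, math.radians(30)

h = L * (1 - math.cos(theta))   # height dropped from release to the lowest point
v = math.sqrt(2 * g * h)        # mgh = ½mv² → v = √(2gh); the mass cancels

print(round(v, 2))  # 1.62 (m/s), matching the result above
```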
Exercise 8.7
How high above the bottom of its arc is the particle in the simple pendulum above, when its speed is 0.81 m/s?
Example 8.8
Air Resistance on a Falling Object
A helicopter is hovering at an altitude of 1 km when a panel from its underside breaks loose and plummets to the ground (Figure 8.10). The mass of the panel is 15 kg, and it hits the ground with a speed of 45 m/s. How much mechanical energy was dissipated by air resistance during the panel’s descent?
Strategy
Step 1: Here only one body is being investigated.
Step 2: Gravitational force is acting on the panel, as well as air resistance, which is stated in the problem.
Step 3: Gravitational force is conservative; however, the non-conservative force of air resistance does negative work on the falling panel, so we can use the conservation of mechanical energy, in the form expressed by Equation 8.12, to find the energy dissipated. This energy is the magnitude of the work:
$$\Delta E_{diss} = |W_{nc,if}| = |\Delta (K + U)_{if}| \ldotp$$
Step 4: The initial kinetic energy, at yi = 1 km, is zero. We set the gravitational potential energy to zero at ground level out of convenience.
Step 5: The non-conservative work is set equal to the energies to solve for the work dissipated by air resistance.
Solution
The mechanical energy dissipated by air resistance is the algebraic sum of the gain in the kinetic energy and loss in potential energy. Therefore the calculation of this energy is
$$\begin{split} \Delta E_{diss} & = |K_{f} - K_{i} + U_{f} - U_{i}| \\ & = \Big| \frac{1}{2} (15\; kg)(45\; m/s)^{2} - 0 + 0 - (15\; kg)(9.8\; m/s^{2})(1000\; m) \Big| \\ & = 130\; kJ \ldotp \end{split}$$
Significance
Most of the initial mechanical energy of the panel ($U_{i}$), 147 kJ, was lost to air resistance. Notice that we were able to calculate the energy dissipated without knowing what the force of air resistance was, only that it was dissipative.
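The same bookkeeping in code form (values taken from the example above):

```python
# From the example: m = 15 kg, fall height h = 1000 m, impact speed v = 45 m/s.
m, g, h, v = 15.0, 9.8, 1000.0, 45.0

K_gain = 0.5 * m * v**2        # kinetic energy gained (panel starts from rest)
U_loss = m * g * h             # gravitational potential energy lost
E_diss = abs(K_gain - U_loss)  # dissipated by air resistance, per Eq. 8.12

print(f"{E_diss:.1f} J")  # 131812.5 J, i.e. about 130 kJ to two significant figures
```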
Exercise 8.8
You probably recall that, neglecting air resistance, if you throw a projectile straight up, the time it takes to reach its maximum height equals the time it takes to fall from the maximum height back to the starting height. Suppose you cannot neglect air resistance, as in Example 8.8. Is the time the projectile takes to go up (a) greater than, (b) less than, or (c) equal to the time it takes to come back down? Explain.
In these examples, we were able to use conservation of energy to calculate the speed of a particle just at particular points in its motion. But the method of analyzing particle motion, starting from energy conservation, is more powerful than that. More advanced treatments of the theory of mechanics allow you to calculate the full time dependence of a particle’s motion, for a given potential energy. In fact, it is often the case that a better model for particle motion is provided by the form of its kinetic and potential energies, rather than an equation for force acting on it. (This is especially true for the quantum mechanical description of particles like electrons or atoms.)
We can illustrate some of the simplest features of this energy-based approach by considering a particle in one-dimensional motion, with potential energy U(x) and no non-conservative interactions present. Equation 8.12 and the definition of velocity require
$$K = \frac{1}{2} mv^{2} = E - U(x)$$
$$v = \frac{dx}{dt} = \sqrt{\frac{2(E - U(x))}{m}} \ldotp$$
Separate the variables x and t and integrate, from an initial time t = 0 to an arbitrary time, to get
$$t = \int_{0}^{t} dt = \int_{x_{0}}^{x} \frac{dx}{\sqrt{\frac{2(E - U(x))}{m}}} \ldotp \label{8.14}$$
If you can do the integral in Equation 8.14, then you can solve for x as a function of t.
Example 8.9
Constant Acceleration

Use the potential energy U(x) = E \(\left(\dfrac{x}{x_{0}}\right)\), for E > 0, in Equation 8.14 to find the position x of a particle as a function of time t.
Strategy
Since we know how the potential energy changes as a function of x, we can substitute for U(x) in Equation 8.14, integrate, and then solve for x. This results in an expression of x as a function of time with constants of energy E, mass m, and the initial position x\(_{0}\).

Solution
Following the first two suggested steps in the above strategy,
$$t = \int_{x_{0}}^{x} \frac{dx}{\sqrt{\left(\dfrac{2E}{mx_{0}}\right)(x_{0} - x)}} = \frac{1}{\sqrt{\left(\dfrac{2E}{mx_{0}}\right)}} \Big| -2\sqrt{(x_{0} - x)} \Big|_{x_{0}}^{x} = \frac{-2\sqrt{(x_{0} - x)}}{\sqrt{\left(\dfrac{2E}{mx_{0}}\right)}} \ldotp$$
Solving for the position, we obtain
$$x(t) = x_{0} - \frac{1}{2} \left(\dfrac{E}{mx_{0}}\right) t^{2} \ldotp$$
Significance
The position as a function of time, for this potential, represents one-dimensional motion with constant acceleration of magnitude a = \(\left(\dfrac{E}{mx_{0}}\right)\), directed toward decreasing x and starting from rest at position x\(_{0}\). This is not so surprising, since this is a potential energy for a constant force, F = \(− \frac{dU}{dx}\) = \(-\frac{E}{x_{0}}\), and a = \(\frac{F}{m}\).
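A quick numerical check (with arbitrary assumed constants) that the solution x(t) found above reproduces the speed relation used inside the integral, v² = (2E/mx₀)(x₀ − x):

```python
# Assumed sample constants (any positive values work).
E, m, x0 = 2.0, 1.0, 1.5
a = E / (m * x0)                 # magnitude of the constant acceleration

for t in [0.0, 0.3, 0.7, 1.0]:
    x = x0 - 0.5 * a * t * t     # x(t) from the example
    v = -a * t                   # v = dx/dt
    # the energy relation behind Equation 8.14:
    assert abs(v * v - (2 * E / (m * x0)) * (x0 - x)) < 1e-12

print("x(t) is consistent with the energy relation at all sampled times")
```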
Exercise 8.9
What potential energy U(x) can you substitute in Equation 8.14 that will result in motion with constant velocity of 2 m/s for a particle of mass 1 kg and mechanical energy 1 J?
We will look at another more physically appropriate example of the use of Equation 8.14 after we have explored some further implications that can be drawn from the functional form of a particle’s potential energy.
Systems with Several Particles or Objects
Systems generally consist of more than one particle or object. However, the conservation of mechanical energy, in one of the forms in Equation 8.12 or Equation 8.13, is a fundamental law of physics and applies to any system. You just have to include the kinetic and potential energies of all the particles, and the work done by all the non-conservative forces acting on them. Until you learn more about the dynamics of systems composed of many particles, in Linear Momentum and Collisions, Fixed-Axis Rotation, and Angular Momentum, it is better to postpone discussing the application of energy conservation to then.
Contributors
Samuel J. Ling (Truman State University), Jeff Sanny (Loyola Marymount University), and Bill Moebs with many contributing authors. This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
|
I needed to solve the following equation: $$\tan\theta + \tan 2\theta+\tan 3\theta=\tan\theta\tan2\theta\tan3\theta$$
Now, the steps that I followed were as follows.
Transform the LHS first: $$\begin{split} \tan\theta + \tan 2\theta+\tan 3\theta &= (\tan\theta + \tan 2\theta) + \dfrac{\tan\theta + \tan 2\theta} {1-\tan\theta\tan2\theta} \\ &= \dfrac{(\tan\theta + \tan 2\theta)(2-\tan\theta\tan2\theta)} {1-\tan\theta\tan2\theta} \end{split}$$
And, RHS yields $$\begin{split} \tan\theta\tan2\theta\tan3\theta &= (\tan\theta\tan2\theta)\dfrac{\tan\theta + \tan 2\theta} {1-\tan\theta\tan2\theta} \end{split}$$
Now, the common factor $\dfrac{\tan\theta + \tan 2\theta}{1-\tan\theta\tan2\theta}$ can be cancelled from the LHS and RHS, yielding the equation:
$$ \begin{split} 2-\tan\theta\tan2\theta &= \tan\theta\tan2\theta\\ \tan\theta\tan2\theta &= 1, \end{split}$$ which can be further reduced as: $$\tan^2\theta=\frac{1}{3}\implies\tan\theta=\pm\frac{1}{\sqrt3}$$
Now, we can yield the general solution of this equation:
$\theta=n\pi\pm\dfrac{\pi}{6},n\in Z$. But, setting $\theta=\dfrac{\pi}{6}$ in the original equation is giving one term $\tan\dfrac{\pi}{2}$, which is not defined.
What is the problem in this computation?
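A quick numerical check makes the trouble visible: the condition $\tan\theta\tan2\theta=1$ obtained above is exactly where the factor $1-\tan\theta\tan2\theta$ (which was divided out when forming $\tan3\theta$) vanishes, and there $3\theta=\pi/2$, where $\tan3\theta$ is undefined:

```python
import math

theta = math.pi / 6
p = math.tan(theta) * math.tan(2 * theta)
print(abs(p - 1.0) < 1e-9)                   # True: tan(theta)*tan(2*theta) = 1
print(abs(3 * theta - math.pi / 2) < 1e-12)  # True: 3*theta = pi/2, so tan(3*theta) is undefined
```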
|
Drábek, Pavel , Takáč, Peter Convergence to travelling waves in Fisher’s population genetics model with a non-Lipschitzian reaction term
We consider the Fisher equation for the advance of an advantageous gene with a non-Lipschitzian reaction term. We establish new travelling wave profiles and the convergence of the Cauchy problem to one of those profiles.
Kašparová, Martina , Honzík, Lukáš , Hora, Jaroslav , Pěchoučková, Šárka Mathematics school-leaving examination in Bavaria: tasks and brief solutions. Part 1
The article presents the tasks of the mathematics school-leaving examination at Bavarian high schools in May 2017. These tasks are supplemented by brief descriptions of the solutions.
Tomiczek, Petr Forced duffing equation with a non-strictly monotonic potential
This article is devoted to the study of the existence of a solution to the periodic nonlinear second-order ordinary differential equation with damping u''(x)+ c u'(x)+g(x,u)=f(x), x in [0,T], u(0)=u(T), u'(0)=u'(T), where c in R, g is a Carathéodory function, f in L^1(...
Ryjáček, Zdeněk , Vrána, Petr , Wang, Shipeng Closure for {K(1,4),K(1,4)+e}-free graphs
We introduce a closure concept for hamiltonicity in the class of {K(1,4),K(1,4)+e}-free graphs, extending the closure for claw-free graphs introduced by Ryjáček (1997). The closure of a {K(1,4),K(1,4)+e}-free graph G with minimum degree at least 6 is uniquely determined, is a line graph...
Looseová, Iveta , Nečesal, Petr The Fucik spectrum of the discrete Dirichlet operator
In this paper, we deal with the discrete Dirichlet operator of the second order and we investigate its Fučík spectrum, which consists of a finite number of algebraic curves. For each non-trivial Fučík curve, we are able to detect a finite number of its points, which ...
Vršek, Jan Contour curves and isophotes on rational ruled surfaces
Ruled surfaces, i.e., surfaces generated by a one-parametric set of lines, are widely used in the field of applied geometry. An isophote on a surface is a curve consisting of those surface points whose normals form a constant angle with a fixed vector. Choosing the angle&...
Bobkov, Vladimír , Tanaka, Mieko On sign-changing solutions for resonant (p,q)-Laplace equations
We provide two existence results for sign-changing solutions to the Dirichlet problem for the family of equations $-\Delta_p u -\Delta_q u = \alpha |u|^{p-2}u + \beta |u|^{q-2}u$, where $1<q<p$ and $\alpha$, $\beta$ are parameters. First, we show the existence in...
Musso, Monica , Agudelo Rico, Oscar Iván , Correa, Santiago , Restrepo, Daniel , Vélez, Carlos Multiplicity results and qualitative properties for Neumann semilinear elliptic problems
In this paper we establish the existence of multiple ordered classical solutions for Neumann semilinear elliptic problems and provide qualitative information about them. No symmetry assumptions are required neither on the non-linearity nor on the domain. For some results, the growth of ...
Marek, Patrice , Vávra, František Comparison of Home Team Advantage in English and Spanish Football Leagues
Home team advantage in sports is a widely analysed phenomenon. This paper builds on results of recent research that -- instead of points gained -- uses goals scored and conceded to describe home team advantage. Using this approach, the home team advantage is a random variable that...
Šedivá, Blanka , Marek, Patrice Stability analysis of optimal mean-variance portfolio due to covariance estimation
The objective of this paper is to study the stability of the mean-variance portfolio optimization. The results of the mean-variance optimal selection problem are very sensitive to the model parameters (portfolio calibration window and frequency of portfolio rebalancing). There are presented ...
Audoux, Benjamin , Bobkov, Vladimír , Parini, Enea On multiplicity of eigenvalues and symmetry of eigenfunctions of the p--Laplacian
We investigate multiplicity and symmetry properties of higher eigenvalues and eigenfunctions of the $p$-Laplacian under homogeneous Dirichlet boundary conditions on certain symmetric domains $\Omega \subset \R^N$. By means of topological arguments, we show how symmetries of $\Omega$ help...
Bobkov, Vladimír , Tanaka, Mieko Remarks on minimizers for (p, q)-Laplace equations with two parameters
We study in detail the existence, nonexistence and behavior of global minimizers, ground states and corresponding energy levels of the (p,q)-Laplace equation in a bounded domain under zero Dirichlet boundary condition, where p>q>1. A curve on the plane of parameters which allocates...
Anoop, T.V. , Bobkov, Vladimír , Sasi, Sarath On the strict monotonicity of the first eigenvalue of the p-Laplacian on annuli
Let B1 be a ball in R^N centred at the origin and let B0 be a smaller ball compactly contained in B1. For p ∈ (1,∞), using the shape derivative method, we show that the first eigenvalue of the p-Laplacian in the annulus B1\B0 strictly decreases as the inner ...
Bobkov, Vladimír , Parini, Enea On the higher Cheeger problem
We develop the notion of higher Cheeger constants for a measurable set $\Omega \subset \mathbb{R}^N$. By the $k$-th Cheeger constant we mean the value \[h_k(\Omega) = \inf \max \{h_1(E_1), \dots, h_1(E_k)\},\] where the infimum is taken over all $k$-tu...
Kotrla, Lukáš Maclaurin series for sin_p with p an Integer greater than 2
We find an explicit formula for the coefficients $\alpha_n$, $n \in \mathbb{N}$, of the generalized Maclaurin series for $\sin_p$ provided $p > 2$ is an integer. Our method is based on an expression of the $n$-th derivative of $\sin_p$ in the form \[...
Dvořák, Zdeněk , Kabela, Adam , Kaiser, Tomáš Planar graphs have two-coloring number at most 8
We prove that the two-coloring number of any planar graph is at most 8. This resolves a question of Kierstead et al. (2009). The result is optimal.
Cibulka, Radek , Dontchev, Asen L. , Preininger, Jakob , Veliov, Vladimir M. , Roubal, Tomáš Kantorovich-Type Theorems for Generalized Equations
We study convergence of the Newton method for solving generalized equations with a continuous but not necessarily smooth single-valued part and a set-valued mapping with closed graph, both acting in Banach spaces. We present a Kantorovich-type theorem concerning r-linear convergence for a...
Cibulka, Radek , Dontchev, Asen L. , Krastanov, Mikhail I. , Veliov, Vladimir M. Metrically Regular Differential Generalized Equations
In this paper we consider a control system coupled with a generalized equation, which we call a differential generalized equation (DGE). This model covers a large territory in control and optimization, such as differential variational inequalities, control systems with constraints, as well...
Drábek, Pavel , Ho, Ngoc Ky , Sarkar, Abhishek The Fredholm alternative for the p-Laplacian on exterior domains
We investigate the Fredholm alternative for the p-Laplacian in an exterior domain which is the complement of the closed unit ball in R^N (N ≥ 2). By employing techniques of Calculus of Variations we obtain the multiplicity of solutions. The striking difference between our case...
Ryjáček, Zdeněk , Vrána, Petr , Xiong, Liming Hamiltonian properties of 3-connected graphs without induced K(1,3) and hourglass subgraphs
We show that some sufficient conditions for hamiltonian properties of claw-free graphs can be substantially strengthened under an additional assumption that G is hourglass-free (where hourglass is the graph with degree sequence 4, 2, 2, 2, 2).
DSpace at University of West Bohemia / Publications / Faculty of Applied Sciences / Department of Mathematics
|
Bulk Stress, Strain, and Modulus
When you dive into water, you feel a force pressing on every part of your body from all directions. What you are experiencing then is bulk stress, or in other words, pressure. Bulk stress always tends to decrease the volume enclosed by the surface of a submerged object. The forces of this “squeezing” are always perpendicular to the submerged surface (Figure \(\PageIndex{1}\)). The effect of these forces is to decrease the volume of the submerged object by an amount \(\Delta V\) compared with the volume \(V_{0}\) of the object in the absence of bulk stress. This kind of deformation is called bulk strain and is described by a change in volume relative to the original volume:
$$bulk\; strain = \frac{\Delta V}{V_{0}} \label{12.37}$$
The bulk strain results from the bulk stress, which is a force \(F_{\perp}\) normal to a surface that presses on the unit surface area \(A\) of a submerged object. This kind of physical quantity, or pressure \(p\), is defined as
$$pressure = p \equiv \frac{F_{\perp}}{A} \ldotp \label{12.38}$$
We will study pressure in fluids in greater detail in Fluid Mechanics. An important characteristic of pressure is that it is a scalar quantity and does not have any particular direction; that is, pressure acts equally in all possible directions. When you submerge your hand in water, you sense the same amount of pressure acting on the top surface of your hand as on the bottom surface, or on the side surface, or on the surface of the skin between your fingers. What you are perceiving in this case is an increase in pressure \(\Delta p\) over what you are used to feeling when your hand is not submerged in water. What you feel when your hand is not submerged in the water is the normal pressure \(p_{0}\) of one atmosphere, which serves as a reference point. The bulk stress is this increase in pressure, or \(\Delta p\), over the normal level, \(p_{0}\).
When the bulk stress increases, the bulk strain increases in response, in accordance with Equation \ref{12.33}. The proportionality constant in this relation is called the bulk modulus, B, or
$$B = \frac{bulk\; stress}{bulk\; strain} = \frac{\Delta p}{\frac{\Delta V}{V_{0}}} = - \Delta p \frac{V_{0}}{\Delta V} \ldotp \label{12.39}$$
The minus sign that appears in Equation \ref{12.39} is for consistency, to ensure that \(B\) is a positive quantity. Note that the minus sign (–) is necessary because an increase \(\Delta\)p in pressure (a positive quantity) always causes a decrease \(\Delta\)V in volume, and decrease in volume is a negative quantity. The reciprocal of the bulk modulus is called
compressibility k, or
$$k = \frac{1}{B} = - \frac{\frac{\Delta V}{V_{0}}}{\Delta p} \ldotp \label{12.40}$$
The term ‘compressibility’ is used in relation to fluids (gases and liquids). Compressibility describes the change in the volume of a fluid per unit increase in pressure. Fluids characterized by a large compressibility are relatively easy to compress. For example, the compressibility of water is \(4.64 \times 10^{-5}\)/atm and the compressibility of acetone is \(1.45 \times 10^{-4}\)/atm. This means that under a 1.0-atm increase in pressure, the relative decrease in volume is approximately three times as large for acetone as it is for water.
Example \(\PageIndex{1}\): Hydraulic Press
In a hydraulic press (Figure \(\PageIndex{2}\)), a 250-liter volume of oil is subjected to a 2300-psi pressure increase. If the compressibility of oil is \(2.0 \times 10^{-5}\)/atm, find the bulk strain and the absolute decrease in the volume of oil when the press is operating.

Strategy

We must invert Equation \ref{12.40} to find the bulk strain. First, we convert the pressure increase from psi to atm, \(\Delta p\) = 2300 psi = \(\frac{2300}{14.7}\) atm ≈ 160 atm, and identify \(V_{0}\) = 250 L.

Solution
Substituting values into the equation, we have
$$bulk\; strain = \frac{\Delta V}{V_{0}} = \frac{\Delta p}{B} = k \Delta p = (2.0 \times 10^{-5}\; /atm)(160\; atm) = 0.0032$$
The absolute decrease in volume is then
$$\Delta V = 0.0032 V_{0} = 0.0032 (250\; L) = 0.78\; L \ldotp$$
Significance
Notice that since the compressibility of water is 2.32 times larger than that of oil, if the working substance in the hydraulic press of this problem were changed to water, the bulk strain as well as the volume change would be 2.32 times larger.
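The arithmetic in this example can be reproduced with a short script; note that using the unrounded psi-to-atm conversion gives the quoted 0.78 L, whereas the rounded value of 160 atm would give 0.80 L:

```python
# Check of the hydraulic-press example, using the values quoted in the text.
psi_per_atm = 14.7        # 1 atm = 14.7 psi
k = 2.0e-5                # compressibility of oil, per atm
V0 = 250.0                # initial oil volume, litres

dp_atm = 2300.0 / psi_per_atm     # pressure increase in atm (~156.5, rounds to 160)
bulk_strain = k * dp_atm          # ~0.0031
dV = bulk_strain * V0             # ~0.78 L
print(bulk_strain, dV)
```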
Exercise \(\PageIndex{1}\)
If the normal force acting on each face of a cubical 1.0-m\(^{3}\) piece of steel is changed by \(1.0 \times 10^{7}\) N, find the resulting change in the volume of the piece of steel.

Shear Stress, Strain, and Modulus
The concepts of shear stress and strain concern only solid objects or materials. Buildings and tectonic plates are examples of objects that may be subjected to shear stresses. In general, these concepts do not apply to fluids.
Shear deformation occurs when two antiparallel forces of equal magnitude are applied tangentially to opposite surfaces of a solid object, causing no deformation in the transverse direction to the line of force, as in the typical example of shear stress illustrated in Figure \(\PageIndex{3}\). Shear deformation is characterized by a gradual shift \(\Delta x\) of layers in the direction tangent to the acting forces. This gradation in \(\Delta x\) occurs in the transverse direction along some distance \(L_{0}\). Shear strain is defined by the ratio of the largest displacement \(\Delta x\) to the transverse distance \(L_{0}\):
$$shear\; strain = \frac{\Delta x}{L_{0}} \ldotp \label{12.41}$$
Shear strain is caused by shear stress. Shear stress is due to forces that act parallel to the surface. We use the symbol \(F_{\parallel}\) for such forces. The magnitude \(F_{\parallel}\) per surface area \(A\) where the shearing force is applied is the measure of shear stress:
$$shear\; stress = \frac{F_{\parallel}}{A} \ldotp \label{12.42}$$
The shear modulus is the proportionality constant in Equation \ref{12.33} and is defined by the ratio of stress to strain. Shear modulus is commonly denoted by \(S\):
$$S = \frac{shear\; stress}{shear\; strain} = \frac{\frac{F_{\parallel}}{A}}{\frac{\Delta x}{L_{0}}} = \frac{F_{\parallel}}{A} \frac{L_{0}}{\Delta x} \ldotp \label{12.43}$$
Example \(\PageIndex{2}\): An Old Bookshelf
A cleaning person tries to move a heavy, old bookcase on a carpeted floor by pushing tangentially on the surface of the very top shelf. However, the only noticeable effect of this effort is similar to that seen in Figure \(\PageIndex{2}\), and it disappears when the person stops pushing. The bookcase is 180.0 cm tall and 90.0 cm wide with four 30.0-cm-deep shelves, all partially loaded with books. The total weight of the bookcase and books is 600.0 N. If the person gives the top shelf a 50.0-N push that displaces the top shelf horizontally by 15.0 cm relative to the motionless bottom shelf, find the shear modulus of the bookcase.
Strategy
The only pieces of relevant information are the physical dimensions of the bookcase, the value of the tangential force, and the displacement this force causes. We identify \(F_{\parallel}\) = 50.0 N, \(\Delta x\) = 15.0 cm, \(L_{0}\) = 180.0 cm, and \(A\) = (30.0 cm)(90.0 cm) = 2700.0 cm\(^{2}\), and we use Equation \ref{12.43} to compute the shear modulus.

Solution
Substituting numbers into the equations, we obtain for the shear modulus
$$S = \frac{F_{\parallel}}{A} \frac{L_{0}}{\Delta x} = \frac{50.0\; N}{2700.0\; cm^{2}} \frac{180.0\; cm}{15.0\; cm} = \frac{2}{9}\; N/cm^{2} = \frac{2}{9} \times 10^{4}\; N/m^{2} = \frac{20}{9} \times 10^{3}\; Pa = 2.222\; kPa \ldotp \nonumber$$
We can also find shear stress and strain, respectively:
$$\frac{F_{\parallel}}{A} = \frac{50.0\; N}{2700.0\; cm^{2}} = \frac{5}{27}\; kPa = 185.2\; Pa \nonumber$$
$$\frac{\Delta x}{L_{0}} = \frac{15.0\; cm}{180.0\; cm} = \frac{1}{12} = 0.083 \ldotp \nonumber$$
Significance
If the person in this example gave the shelf a healthy push, it might happen that the induced shear would collapse it to a pile of rubbish. Much the same shear mechanism is responsible for failures of earth-filled dams and levees; and, in general, for landslides.
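As a quick sketch, the bookshelf numbers can be checked with a few lines of code (unit conversion: 1 N/cm² = 10⁴ N/m²):

```python
# Check of the bookshelf shear-modulus example.
F_par = 50.0    # tangential force, N
A = 2700.0      # shelf area, cm^2
L0 = 180.0      # height, cm
dx = 15.0       # horizontal displacement of the top shelf, cm

S = (F_par / A) * (L0 / dx) * 1.0e4   # shear modulus in Pa (1 N/cm^2 = 1e4 N/m^2)
stress = (F_par / A) * 1.0e4          # shear stress in Pa
strain = dx / L0                      # shear strain (dimensionless)
print(S, stress, strain)              # ~2222 Pa, ~185.2 Pa, ~0.083
```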
Exercise \(\PageIndex{2}\)
Explain why the concepts of Young’s modulus and shear modulus do not apply to fluids.
|
I note $L_{t}^{[T_s, T_e]}$ the forward rate at time $t$ for the period $[T_s, T_e]$. Recall it is the strike making equal to $0$ the value at time $t$ of a forward contract for the period $[T_s, T_e]$.
The strategy I am looking at is the following : I enter (for $0$) at time $t$ in a payer (I pay the strike) forward contract for the period $[T_s, T_e]$ and at a later time $t'$ I unwind my position by entering in a receiver (I receive the strike) forward contract for the period $[T_s, T_e]$. Noting $X$ the notional and $\delta$ the year fraction represented by the period $[T_s, T_e]$, my payout at time $T_e$ is $$X\delta\left( L_{T_s}^{[T_s, T_e]} - L_{t}^{[T_s, T_e]}\right) - X\delta\left( L_{T_s}^{[T_s, T_e]} - L_{t'}^{[T_s, T_e]}\right) = X\delta \left(L_{t'}^{[T_s, T_e]} - L_{t}^{[T_s, T_e]}\right).$$
(From two times nothing I have generated a non zero P&L.) From there, assuming non arbitrage and therefore the existence of a local martingale numéraire $N$ and an associated local martingale measure $\mathbf{Q}^N$, I basically want to apply the $\mathbf{E}^{\mathbf{Q}^N} \left[ \bullet | \mathscr{F}_t\right]$ operator and conclude that $L_{t}^{[T_s, T_e]}$ is a martingale under $\mathbf{Q}^N$.
Dividing $X\delta \left(L_{t'}^{[T_s, T_e]} - L_{t}^{[T_s, T_e]}\right)$ by $N_t$ leaves me, for the "$L_{t'}^{[T_s, T_e]}$ part", with a $\frac{N_{t'}}{N_t}$ factor I cannot get rid of.
How can I do this properly ? (I voluntarily stay in a non-diffusive setting.)
|
This site contains various discussions of one-way functions and their relation to P versus NP.
Some of these discussions use a language $L=\{(x',y) ~\mid~ x'\le x \text{ and } f(x)=y \}$, where $f:\Sigma^*\to\Sigma^*$ is the one-way function and $x'\le x$ is the prefix relation. Now one central claim is that this language $L$ is contained in NP, since the word $x$ is a YES-certificate for $(x',y)\in L$.
I do not see why this claim is justified.
Why is the length of the certificate $x$ polynomially bounded in the length of $(x',y)$?
Couldn't it be possible that $x$ is exponentially long in $y$ and $x'$, but $f(x)$ is short and quickly computable from $x$?
|
Exercise
If $H$ is the Heaviside function, prove, using the definition below, that $\lim \limits_{t \to 0}{H(t)}$ does not exist.
Definition
Let $f$ be a function defined on some open interval that contains the number $a$, except possibly $a$ itself. Then we say that the limit of $f(x)$ as $x$ approaches $a$ is $L$, and we write $$\lim \limits_{x \to a}{f(x)} = L$$ if for every number $\epsilon > 0$ there is a number $\delta > 0$ such that $$\text{if } 0 < |x - a| < \delta \text{ then } |f(x) - L| < \epsilon$$
Hint
Use an indirect proof as follows. Suppose that the limit is $L$. Take $\epsilon = \frac{1}{2}$ in the definition of a limit and try to arrive at a contradiction.
Attempt
Let $\delta$ be any (preferably small) positive number.
$H(0 - \delta) = H(-\delta) = 0$
$H(0 + \delta) = H(\delta) = 1$
$H(0 - \delta) =^? H(0 + \delta) \implies 0 =^? 1 \implies 0 \neq 1 \implies H(0 - \delta) \neq H(0 + \delta)$
$\lim \limits_{t \to 0^-}{H(t)} \neq \lim \limits_{t \to 0^+}{H(t)} \implies \lim \limits_{t \to 0}{H(t)}$ does not exist
Request
I don't even know where to begin, even with the hint.
Can someone kickstart the proof for me?$^1$

$^1$ Update: I've come up with an attempt. Is it valid? It seems that I don't use the hint to my advantage; so if indeed my attempt is correct, what is the alternative proof using the hint?
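Following the hint, one way the argument can run (a sketch):

```latex
\textbf{Claim.} $\lim_{t\to 0} H(t)$ does not exist.

\textbf{Proof sketch.} Suppose, for contradiction, that $\lim_{t\to 0} H(t) = L$.
Take $\epsilon = \tfrac12$ in the definition: there is $\delta > 0$ such that
\[
0 < |t| < \delta \implies |H(t) - L| < \tfrac12 .
\]
Apply this to $t = \delta/2$ and $t = -\delta/2$:
\[
|1 - L| = |H(\delta/2) - L| < \tfrac12,
\qquad
|L| = |H(-\delta/2) - L| < \tfrac12 .
\]
By the triangle inequality,
\[
1 = |(1 - L) + L| \le |1 - L| + |L| < \tfrac12 + \tfrac12 = 1,
\]
a contradiction. Hence no such $L$ exists. $\square$
```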
|
This paper presents the two-dimensional (2D) steady incompressible flow in a lid-driven sectorial cavity. In order to analyze the flow structures, the 2D Navier–Stokes equations are solved by using the finite element method. Different cases of the cavity aspect ratio A and three cases of the speed ratio \((S=-1,0,1)\) of the upper and the lower lids are considered. The finite element formulation for the governing equations is adopted via the velocity-pressure formulation. By varying A for each S, the effect of the Reynolds number on the streamline patterns and their bifurcations is investigated in the range \(Re\in [0,200]\). A comparison between the obtained results and some earlier studies is presented.
Keywords
Bifurcation, Eddy, Finite element, Flow structure, Stagnation point, Streamline
List of symbols

\(r_{1}, r_{2}\): radii of the inner and outer circles, respectively
\(A\): cavity aspect ratio \(= r_{2}/r_{1}\)
\(2\alpha\): angle of the sector
\(u\): dimensionless fluid velocity
\(U_{1}, U_{2}\): speeds of the upper and lower lids, respectively
\(S\): speed ratio of the moving lids \(= U_{2}/U_{1}\)
\(\psi\): streamfunction
\(Re\): Reynolds number
Mathematics Subject Classification
35Q35 74S05 00A69
|
Here's what I say when I'm teaching this. These comments are usually spread over several lectures, but I'll say them all at once here.
The first use of partitions of unity is usually to construct integrals over manifolds. For example, let $S$ be a surface in $\mathbb{R}^3$, and say we want to integrate a $2$-form $\omega$ over it (or alternatively, integrate the flux of a vector field across it.)
What we would probably do in practice is break $S$ up into patches $S = \bigcup U_i$ with parametrizations $f_i : P_i \to U_i$ by various open sets $P_i \subset \mathbb{R}^2$, pull the differential form back across the parametrization and integrate on each $P_i$. We then need the patches $U_i$ to cover $S$ up to measure $0$. For example, if $S$ is the unit sphere, we might use a single patch in spherical coordinates with $P = (-\pi, \pi) \times (-\pi/2, \pi/2)$ and $f(\theta, \phi) = (\cos \theta \cos \phi, \sin \theta \cos \phi, \sin \phi)$. Alternatively we might parameterize the northern and southern hemispheres separately, taking $P_1 = P_2 = \{ (u,v) : u^2+v^2<1 \}$, with $f_1(u,v) = (u,v,\sqrt{1-u^2-v^2})$ and $f_2(u,v) = (u,v,-\sqrt{1-u^2-v^2})$.
This is exactly how to compute integrals in practice. But if we use it as our definition in theory, it becomes messy -- we have to talk about the combinatorics of how the patches fit together, and our integrands will have discontinuities at the boundaries of the patches.
A partition of unity allows us to blend from one patch to another more smoothly. For example, instead of saying that every point is either exactly in the northern hemisphere, or exactly in the southern hemisphere, we have two functions $\phi_1, \phi_2$ with $\phi_1+\phi_2=1$, where $\phi_1$ measures how much we will count the point toward the northern integral, and $\phi_2$ measures how much we will count the point toward the southern integral. This gives integrals that are much worse for hand computation, but have cleaner theoretical properties.
Incidentally, I believe this should be
better for machine Monte Carlo integration. That is to say, suppose I want to integrate a $2$-form $\omega$ over the sphere $S^2 \subset \mathbb{R}^3$. One approach would be to parametrize the northern hemisphere and southern hemisphere separately, pulling $\omega$ back to forms supported on discs in two copies of $\mathbb{R}^2$, with discontinuities at the boundary of the disc, and compute these integrals by Monte Carlo. Alternatively, I could use a continuous partition of unity supported on the open sets $z<0.1$ and $z>-0.1$, and pull back by stereographic coordinates to slightly larger discs; my integrands would then be continuous. I believe Monte Carlo integration usually prefers continuous integrands, as that way it is not important to determine exactly which side of the discontinuity a random sample point lies on.
Later uses of partitions of unity are also often of the form "I would like to chop my manifold into pieces, but that is too discontinuous an operation." For example, let's show that every short exact sequence of vector bundles splits. Let $0 \to A \to B \to C \to 0$ be the short exact sequence and let $X$ be the manifold. One would like to cut $X$ into pieces $U_i$ where the bundles are trivial and write down a section $\sigma_i : C \to B$ on each $U_i$. But gluing the $\sigma_i$ together is not continuous. If instead we take a partition of unity $\phi_i$ and write $\sigma = \sum \phi_i \sigma_i$, then $\sigma$ is a smoother version of gluing the $\sigma_i$, and it is continuous.
My mental metaphor for a partition of unity is feathering out paint. If your paint stops abruptly at the edge of the brush stroke, it will leave a visible line even once you paint the wall next to it. Instead, you need to smear out the edge of your stroke so it thins out gradually. I haven't tried bringing a paint can into class though yet!
|
$$\frac{1+\cos 5x+i\sin 5x}{1+\cos 5x-i\sin 5x}=\cos 5x+i\sin 5x$$
When I attempted this I first tried multiplying top and bottom of the LHS by the complex conjugate of what's on the bottom, $1+\cos 5x+i\sin 5x$. After simplification I got:
$$LHS=\frac{1+2\cos 5x+2i\sin 5x+\cos^25x+2i\sin 5x \cos5x-\sin^25x}{2\cos 5x+\sin 5x}$$
I cannot see a way of simplifying further to give the RHS, where have I gone wrong?
Also, I know that since $\cos 5x+i\sin 5x=(\cos x+i\sin x)^5$ I could do an expansion but after doing that I could also see no way of getting the LHS. Please help.
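One clean route (a sketch, and not necessarily the intended one) avoids conjugates entirely and uses the half-angle factorization:

```latex
\[
1 + \cos 5x + i\sin 5x
  = 2\cos^2\tfrac{5x}{2} + 2i\sin\tfrac{5x}{2}\cos\tfrac{5x}{2}
  = 2\cos\tfrac{5x}{2}\left(\cos\tfrac{5x}{2} + i\sin\tfrac{5x}{2}\right)
  = 2\cos\tfrac{5x}{2}\,e^{i5x/2},
\]
\[
1 + \cos 5x - i\sin 5x = 2\cos\tfrac{5x}{2}\,e^{-i5x/2},
\]
so, provided $\cos\tfrac{5x}{2}\neq 0$,
\[
\frac{1+\cos 5x+i\sin 5x}{1+\cos 5x-i\sin 5x}
  = \frac{e^{i5x/2}}{e^{-i5x/2}} = e^{i5x} = \cos 5x + i\sin 5x.
\]
```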
|
Let $\Sigma\subset\mathbb{R}^3$ be a regular surface. Prove that $\Sigma$ is minimal if and only if its coordinate functions are harmonic.
I know that every regular surface admits an isothermal parametrization. Assuming that a parametrization $X:U\subset\mathbb{R}^2\to\Sigma$ is isothermal (i.e., $<X_u,X_u>=<X_v,X_v>$ and $<X_u,X_v>=0$), I know how to prove $X_{uu}+X_{vv}=2<X_u,X_u>^2HN$, where $H$ is the mean curvature and $N=\frac{X_u\wedge X_v}{||X_u\wedge X_v||}$. So obviously $H\equiv 0 \iff X_{uu}+X_{vv}\equiv 0$.
The problem doesn't say anything about $X$ being isothermal, so I'm stuck with the general case. I tried to use brute force, but it was such a ridiculous amount of work that I eventually gave up.
Is there some clever way around it?
|
How can I solve this system of trigonometric equations analytically? It is from physics class. $$ \begin{cases} 30t\cos{\alpha}=50\\ -30t\sin{\alpha}-4.9t^2=0 \end{cases} $$
Hint: Squaring both the equations, you will get $900t^2\cos^2{\alpha}=2500\\ 900t^2\sin^2{\alpha}={4.9}^2t^4$.
Note that $\sin^2{\alpha}+\cos^2{\alpha}=1$.
So add both the equations and solve for $t$ using the substitution $t^2=u$.
$30t\cos{\alpha}=50 \implies t=\frac{5}{3} \sec\alpha$
You can plug this information into the other equation and solve:
$$-30t\sin{\alpha}-4.9t^2=0\implies -30(\frac{5}{3} \sec\alpha)\sin{\alpha}-4.9(\frac{5}{3} \sec\alpha)^2=0$$
$$-50(\tan\alpha)-4.9(\frac{5}{3} \sec\alpha)^2=0$$
$$-50(\tan\alpha)-4.9\frac{25}{9} \sec^2\alpha=0$$
$$-50(\tan\alpha)-4.9\frac{25}{9} (\tan^2\alpha+1)=0$$
Taking $y=\tan\alpha$ you can solve a quadratic equation.
$$-50(y)-4.9\frac{25}{9} (y^2+1)=0$$
I think you're probably in good shape from here?
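The remaining arithmetic can be checked numerically. A quick sketch (both roots of the quadratic satisfy the original system; which one is physical depends on the sign conventions of the original problem):

```python
import math

# Quadratic from above: -50*y - 4.9*(25/9)*(y**2 + 1) = 0 with y = tan(alpha),
# i.e. a*y**2 + 50*y + a = 0 where a = 4.9*25/9.
a = 4.9 * 25 / 9
disc = 50.0**2 - 4 * a * a
roots = [(-50.0 + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]

for y in roots:
    alpha = math.atan(y)
    t = (5.0 / 3.0) / math.cos(alpha)              # t = (5/3) sec(alpha)
    r1 = 30 * t * math.cos(alpha) - 50             # residual of the first equation
    r2 = -30 * t * math.sin(alpha) - 4.9 * t**2    # residual of the second equation
    print(t, math.degrees(alpha), abs(r1) < 1e-9, abs(r2) < 1e-9)
```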
|
A very general (but extremely useful) approach is by noting the following
$$\sin(x) = \frac{e^{ix} - e^{-ix}}{2i}$$$$\cos(x) = \frac{e^{ix} + e^{-ix}}{2}$$
and since $\tan(x) = \frac{\sin(x)}{\cos(x)}$
$$\tan(x) = \frac{1}{i}\frac{e^{ix} - e^{-ix}}{e^{ix} + e^{-ix}} = -i \frac{e^{ix} - e^{-ix}}{e^{ix} + e^{-ix}} $$
We note here that $i$ is the imaginary unit (basically, it's a number such that $i^2 = -1$). I will leave you to go ahead and verify that these formulas reproduce every trig identity you have already memorized; more information is here: http://en.wikipedia.org/wiki/Euler%27s_formula
So we wish to prove:
$$\frac{\sin(A+B)}{\sin(A-B)} = \frac{\tan(A) + \tan(B)}{\tan(A) - \tan(B)} $$
We prepare the left side (noting that both fractions take the form $\frac{A}{2i}$, and therefore we can drop the $2i$ denominators):
$$\frac{\sin(A+B)}{\sin(A-B)} = \frac{e^{i(A+B)} - e^{-i(A+B)}}{e^{i(A-B)} - e^{-i(A-B)}}$$
So because I'm slightly lazy (and for the sake of giving you something to practice), you need to do the exact same job with the $\tan(x)$ expressions, where each instance of $x$ becomes $A$ or $B$ depending on what's being evaluated.
Now the goal is to systematically simplify both expressions by transforming expressions of the form $e^{-k}$ to $\frac{1}{e^k}$
Followed by taking sums and giving them common denominators $\frac{A}{C} + \frac{B}{D} = \frac{AD + BC}{CD}$
And dividing out common factors.
It's a tedious process, but once done, both expressions will look exactly the same... Well, you don't have to take my word for it: do it yourself and prove that it works ;)
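A quick numerical sanity check of the identity, using exactly these exponential definitions (a sketch; `A` and `B` are arbitrary test angles of my choosing):

```python
import cmath
import math

def sin_e(x):
    # sin via Euler's formula
    return (cmath.exp(1j * x) - cmath.exp(-1j * x)) / 2j

def tan_e(x):
    # tan via Euler's formula
    return -1j * (cmath.exp(1j * x) - cmath.exp(-1j * x)) / (cmath.exp(1j * x) + cmath.exp(-1j * x))

A, B = 0.7, 0.3
lhs = sin_e(A + B) / sin_e(A - B)
rhs = (tan_e(A) + tan_e(B)) / (tan_e(A) - tan_e(B))

assert abs(sin_e(A) - math.sin(A)) < 1e-12   # the definition matches the usual sin
assert abs(lhs - rhs) < 1e-12                # the identity holds numerically
```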
|
It is widely known that $0^0$ is usually defined to be 1. I wonder why we cannot employ a similar technique to ascribe values to functions having poles in a point.
Now take the Gamma function. As $x\to 0$, the real part $\Re(\Gamma(i x))$ tends to $-\gamma$ (minus the Euler-Mascheroni constant). We thus can use the formulas:
$$\Gamma(-n)=\lim_{h\to0} \Re (\Gamma(-n+ih))$$
or, alternatively,
$$\Gamma(-n)=\lim_{h\to0} \frac{\Gamma(-n+h)+\Gamma(-n-h)}2$$
Thus we can ascribe natural values to Gamma function at negative integers: $$\Gamma(0)=-\gamma$$ $$\Gamma(-1)=\gamma-1$$ $$\Gamma(-2)=\frac{3}{4}-\frac{\gamma }{2}$$ $$\Gamma(-3)=\frac{\gamma }{6}-\frac{11}{36}$$ $$\Gamma(-4)=\frac{25}{288}-\frac{\gamma }{24}$$
etc.
I wonder why these values are not given in tables and not used in computer algebra systems; this could simplify things a lot.
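The symmetric-limit values listed above are easy to check numerically (a sketch; `h` is a small finite stand-in for the limit, and the closed forms are the ones quoted in the question):

```python
import math

# Gamma(-n) := lim_{h->0} (Gamma(-n+h) + Gamma(-n-h))/2,
# compared with the closed forms above (g = Euler-Mascheroni constant).
g = 0.5772156649015329
h = 1e-5
expected = [-g, g - 1, 3 / 4 - g / 2, g / 6 - 11 / 36, 25 / 288 - g / 24]
for n, val in enumerate(expected):
    approx = (math.gamma(-n + h) + math.gamma(-n - h)) / 2
    assert abs(approx - val) < 1e-6, (n, approx, val)
```

The pole terms $\pm 1/h$ cancel in the symmetric average, so the error is only $O(h^2)$.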
|
The displacement of a particle performing simple harmonic motion is given by $x = A \sin(\omega t + \phi)$, where $A$ is the amplitude, $\omega$ is the angular frequency, $t$ is the time, and $\phi$ is the phase constant. What is the significance of $\phi$? How is it used? Please explain the meaning of the phase constant.
The equation you state $$x=A\sin(\omega t+\phi)$$ describes the displacement motion of a passive linear harmonic oscillator without loss. In other words there is no input or driving function. Whatever motion the oscillator exhibits is solely due to its initial conditions. $\phi$ in this case provides a point of reference in space for the oscillations.
But for the driven oscillator, $\phi$ provides a more significant role in terms of how efficiently energy is transferred from the driver to the oscillator (system). If the driving force is in perfect phase with the system and pointing in the right direction, maximum energy is transferred at the harmonic resonant frequency. Either side of this point either leads or lags, decreasing the efficiency of energy transfer.
All the phase angle does is to give you a facility to decide on the displacement of the particle undergoing shm at time $t=0$ or any other time.
With your phase angle of $\phi$, assuming it to be positive, the graphs of $x_1 = a \sin (\omega t)$ (grey) and $x_2= A \sin (\omega t + \phi)$ (red) are shown below.
In this case the motion $x_2$ is in advance of the motion of $x_1$ by a time $t$ (shown in the diagram) or a phase angle of $\phi= \dfrac{2\pi t}{T}$ where $T$ is the period of the motion, equal to $\dfrac {2 \pi}{\omega}$.
So everything that particle with displacement $x_2$ does the particle with displacement $x_1$ does a time $t$ later.
The equation of motion for a simple harmonic oscillator is
$$ \ddot x+\omega^2 x=0 $$
and the most general solution to this is
$$ x(t) = A_1 \cos \omega t + A_2 \sin \omega t $$
Note there are two constants of integration that correspond to the equation being a second order differential equation. More physically, the velocity is given by
$$ v(t) = - \omega A_1 \sin \omega t + \omega A_2 \cos \omega t $$
and the two constants of integration are fixed by the configuration of the system at any given time. Said differently, if you know the position and velocity at time $t_0$ you can solve for $A_1$ and $A_2$. You should do that as an exercise.
Now note that the expression you have can be written as
$$ x(t) = A \sin \phi \cos \omega t + A \cos \phi \sin \omega t $$
and you can therefore relate $A$ and $\phi$ to $A_1$ and $A_2$ and from there to the position and velocity at time $t_0$.
In the basic SHM equation, you get $x=A\sin(\omega t)$ where at $t=0$ the object is at the mean position, i.e. zero displacement. Now, what is the significance of the angle inside the sine function? It gives you the position of the particle performing SHM. When the angle is $\pi/2$, the displacement is maximum, i.e. $A$. When it is $\pi$, the displacement is once again $0$. So, for the equation $A\sin(\omega t+\phi)$, it simply means that the SHM does not begin at $x=0$, and the position at $t=0$ is $A\sin(\phi)$ (depending upon the value of $\phi$ it could be $A$, $A/2$, anything).
If the initial position is $S$, then $\phi=\sin^{-1}(S/A)$.
What is the significance of $\phi$?
The phase angle $\phi$ represents the relation between the displacement and velocity of the simple harmonic oscillator at the point in time arbitrarily designated as $t=0$. In particular,$$\tan\phi = \omega \frac{x(0)}{v(0)}$$ The point in time at which $t$ is zero is completely arbitrary. With a different time axis given by $t' = t-t_0$, the state of the SHO can be expressed as $x(t) = A \sin(\omega t' + \phi')$, where $\phi' = \phi + \omega t_0$.
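As a small numerical illustration of this relation (a sketch; the parameter values are arbitrary choices of mine):

```python
import math

omega, A, phi = 2.0, 1.5, 0.7          # assumed oscillator parameters
x0 = A * math.sin(phi)                  # x(0)
v0 = A * omega * math.cos(phi)          # v(0) = dx/dt at t = 0

# tan(phi) = omega * x(0) / v(0): recover the phase from the initial state
phi_rec = math.atan2(omega * x0, v0)
A_rec = math.hypot(x0, v0 / omega)

assert abs(phi_rec - phi) < 1e-12
assert abs(A_rec - A) < 1e-12
```

Using `atan2` rather than `atan` keeps the correct quadrant when $v(0)<0$.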
Based on a point raised by @docscience this answer addresses the phase in terms of "initial conditions" introduced by driving forces. In fact one can think of this as answering how the SHO was set in motion in the first place.
The position of a simple harmonic oscillator at time $t$ that experienced force at time $t'$ and that was at rest in the far past
$$ \lim_{t\to -\infty} x(t)=0 \\ \lim_{t \to -\infty} \dot x(t)=0 $$
is given by
$$ x(t) = \int_{-\infty}^t \frac{1}{\omega} \sin (\omega (t-t')) f(t')\, dt' $$
This has been obtained by using retarded Green's function for the SHO details of which can be found elsewhere but one can check that this satisfies the SHO equation of motion.
(1) For the simplest case lets take the case of a pulse of force at time $t'=t_0$ then we get
$$ x(t) = \frac{1}{\omega} \sin(\omega( t- t_0)) \Theta(t-t_0) $$ where $\Theta(t-t_0)$ is the Heaviside step function. Thus we see that the oscillator is at rest for $t<t_0$ and after that the 'phase' is $-\omega t_0$.
(2) Now lets take the case of two pulses at times $t_0$ and $t_1$ with amplitude $f_0$ and $f_1$ i.e.
$$ f(t)=f_0 \delta(t-t_0) + f_1 \delta(t-t_1) $$
with $t_1>t_0$. It's easy to see the solution is
$$ x(t)=\frac{f_0}{\omega} \sin(\omega( t- t_0)) \Theta(t-t_0) + \frac{f_1}{\omega} \sin(\omega( t- t_1)) \Theta(t-t_1) $$
Here is where we see the meaning of phase clearly: if we take $f_1 = f_0$ then we see it is possible to choose $t_0$ and $t_1$ such that the two pulses are "in-phase" and the amplitude doubles, or "out-of-phase" such that the amplitude cancels and the second pulse just stops the SHO. These correspond to $\omega(t_1- t_0)=2 n \pi$ and $\omega(t_1-t_0)=n\pi$ for $n$ an odd integer.
|
Let me remind you about the following classical examples in quantum mechanics.
Example 1. Bound states in a 1-dim potential $V(x)$. Let $V(x)$ be a symmetric potential, i.e. $$V(x) = V(-x)$$ Let us introduce the parity operator $\hat\Pi$ in the following way: $$\hat\Pi f(x) = f(-x).$$ It is obvious that $$[\hat H,\hat\Pi] = 0.$$ Therefore, for any eigenfunction of $\hat H$ we have: $$\hat H\hat\Pi|\psi_E(x)\rangle = \hat\Pi\hat H|\psi_E(x)\rangle = E\hat\Pi|\psi_E(x)\rangle,$$ i.e. the state $\hat\Pi|\psi_E(x)\rangle$ is an eigenfunction with the same eigenvalue. Is $E$ a degenerate level? No, because $|\psi\rangle$ and $\hat\Pi|\psi\rangle$ are linearly dependent.
Consider the second example.
Example 2. Bound states in a 3-dim potential $V(r)$, where $V(r)$ possesses central symmetry, i.e. depends only on the distance to the center. In that potential we can choose eigenfunctions of the angular momentum $\hat L^2$ as a basis, $$|l,m\rangle,$$ where $l$ is the total angular momentum and $m$ its projection on a chosen axis (usually $z$). Because of isotropy, eigenfunctions with different $m$ but the same $l$ correspond to one energy level and are linearly independent. Therefore, $E_l$ is a degenerate level.
My question is if there is some connection between symmetries and degeneracy of energy levels. Two cases are possible at the first sight:
1. Existence of symmetry $\Rightarrow$ Existence of degeneracy
2. Existence of degeneracy $\Rightarrow$ Existence of symmetry
It seems like the first case is not always fulfilled, as shown in the first example. I think case 1 may be fulfilled if there is a continuous symmetry. I think the second case is always true.
|
A dirty little ditty on Finite Automata
This post builds on the previous post about Formal Languages.
Some Formal Definitions

A Deterministic Finite Automaton (DFA) is
- A set \( \mathcal{Q} \) called “states”
- A set \( \Sigma \) called “symbols” or “alphabet”
- A function \( \delta_F:\mathcal{Q}\times\Sigma \to \mathcal{Q} \)
- A designated state \( q_0\in\mathcal{Q} \) called the start point
- A subset \( F\subseteq\mathcal{Q} \) called the “accepting states”
The DFA is then often referred to as the ordered quintuple \( A=(\mathcal{Q},\Sigma,\delta_F,q_0,F) \).
Defining how strings act on DFAs.
Given a DFA, \( A=(\mathcal{Q}, \Sigma, \delta, q_0, F) \), a state \( q_i\in\mathcal{Q} \), and a string \( w\in\Sigma^* \), we can define \( \delta(q_i,w) \) like so:
- If \( w \) only has one symbol, we can consider \( w \) to be the symbol and define \( \delta(q_i,w) \) to be the same as if we considered \( w \) as the symbol
- If \( w=xv \), where \( x\in\Sigma \) and \( v\in\Sigma^* \), then \( \delta(q_i, w)=\delta(\delta(q_i,x),v) \)
And in this way, we have defined how DFAs can interpret strings of symbols rather than just single symbols.
The language of a DFA
Given a DFA, \( A=(\mathcal{Q}, \Sigma, \delta, q_0, F) \), we can define “the language of \( A \)”, denoted \( L(A) \), as \( \{w\in\Sigma^*\mid\delta(q_0,w)\in F\} \).
Some Examples, Maybe? Example 1:
Let’s construct a DFA that accepts only strings beginning with a 1 that, when interpreted as binary numbers, are multiples of 5. So some examples of strings that would be in \( L(A) \) are 101, 1010, 1111
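One concrete way to finish Example 1 (my own sketch, not part of the original post): track the value of the bits read so far modulo 5, using \( \delta(r, b) = (2r + b) \bmod 5 \), with an extra start state to enforce the leading 1 and a dead state for strings that start with 0:

```python
# States 0..4 are the remainder mod 5; 'start' forces the first symbol to be 1,
# and 'dead' absorbs strings that start with 0.
def delta(q, b):
    if q == 'start':
        return 1 if b == '1' else 'dead'   # first bit must be 1 -> value 1
    if q == 'dead':
        return 'dead'
    return (2 * q + int(b)) % 5            # appending a bit maps n -> 2n + b

def accepts(w):
    q = 'start'
    for b in w:
        q = delta(q, b)
    return q == 0                          # accepting state: remainder 0

assert accepts('101') and accepts('1010') and accepts('1111')   # 5, 10, 15
assert not accepts('110') and not accepts('0101')               # 6, leading 0
```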
Some More Formal Definitions

A Nondeterministic Finite Automaton (NFA) is
- A set \( \mathcal{Q} \) called “states”
- A set \( \Sigma \) called “symbols”
- A function \( \delta_N:\mathcal{Q}\times\Sigma \to \mathcal{P}\left(\mathcal{Q}\right) \)
- A designated state \( q_0\in\mathcal{Q} \) called the start point
- A subset \( F\subseteq\mathcal{Q} \) called the “accepting states”
The NFA is then often referred to as the ordered quintuple \( A=(\mathcal{Q},\Sigma,\delta_N,q_0,F) \).
Defining how strings act on NFAs.
Given an NFA, \( N=(\mathcal{Q}, \Sigma, \delta, q_0, F) \), a collection of states \( \mathcal{S}\subseteq\mathcal{Q} \), and a string \( w\in\Sigma^* \), we can define \( \delta(\mathcal{S},w) \) like so:
- If \( w \) only has one symbol, then we can consider \( w \) to be the symbol and define \( \delta(\mathcal{S},w):=\bigcup\limits_{s\in\mathcal{S}}\delta(s,w) \)
- If \( w=xv \), where \( x\in\Sigma \) and \( v\in\Sigma^* \), then \( \delta(\mathcal{S}, w)=\bigcup\limits_{s\in\mathcal{S}}\delta(\delta(s,x),v) \)
And in this way, we have defined how NFAs can interpret strings of symbols rather than just single symbols.
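To make the set-valued definition concrete, here is a tiny simulator for an NFA of my own choosing that accepts strings over \( \{0,1\} \) ending in “01”:

```python
# delta maps (state, symbol) -> set of successor states; missing keys mean the empty set.
delta = {
    ('q0', '0'): {'q0', 'q1'},
    ('q0', '1'): {'q0'},
    ('q1', '1'): {'q2'},
}
accepting = {'q2'}

def run(S, w):
    # Extended transition on a *set* of states, exactly as defined above
    for x in w:
        S = set().union(*(delta.get((s, x), set()) for s in S))
    return S

assert run({'q0'}, '0101') & accepting     # ends in 01 -> accepted
assert not run({'q0'}, '10') & accepting   # does not end in 01 -> rejected
```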
|
Consider the function $f: [0,1] \to \mathbb{R}$ where $$f(x)= \begin{cases} \frac 1q & \text{if } x\in \mathbb{Q} \text{ and } x=\frac pq \text{ in lowest terms}\\ 0 & \text{otherwise} \end{cases}$$
Determine whether or not $f$ is in $\mathscr{R}$ on $[0,1]$ and prove your assertion. For this problem you may consider $0= 0/1$ to be in lowest terms.
Here's an attempt. I may have abused a bit of notation here, but the ideas are there.
Proof: Let $M_i = \sup \limits_{x \in [x_{i-1},x_i]} f(x)$.
Notice first that the lower Riemann sums are always $0$, since every interval contains an irrational number. Thus, to prove $f \in \mathscr{R}$, it suffices to prove that, given any $\epsilon >0$, $\sum \limits_{i \in P} M_i \Delta x_i < \epsilon$ for some partition.
Let $\epsilon > 0 $ and $M > \frac{2}{\epsilon}$. We first show that there exists $\eta(x,\frac{1}{M})$ so that $|f(x) - f(y)| < \frac{1}{M}$ if $|x-y| < \eta$. Fix $x \in (\mathbb{R} \setminus \mathbb{Q}) \cap [0,1]$. Now, consider the set $$R_{M} := \{ r \in \mathbb{Q} : r = \frac{p}{n}, n \leq M, p \leq n, p \in \mathbb{N} \}.$$ Clearly this set is finite, enumerate it as $\{q_1,\ldots, q_m\}$. So, let $$\eta(x,\frac{1}{M}) = \min_{i=1,\ldots, m} |x- q_i|.$$ We see then, $|f(x) - f(y)| < \frac{1}{M}$ on this $\eta$-neighborhood.
After we choose that $\eta$ so that $x \in (\mathbb{R} \setminus \mathbb{Q}) \cap [0,1]$, is continuous in a $\eta$-neighborhood, we see $$ A:= [0,1] \setminus R_M \subset \left( \bigcup_{ x \in ( \mathbb{R} \setminus \mathbb{Q}) \cap [0,1]} B_{\eta(x)} (x) \right) \cap [0,1].$$ Since $A$ is compact, we may take finite sub-covering, and let $\delta = \min \limits_{i=1,\ldots,n} \{\eta(x_i)\}$. Take a partition $P_1$ of $A$ so that $\Delta x_i < \delta$. Since $R_M$ is non-empty, we can take a partition $P_2$ of $R_M$ so that $\Delta x_i < \frac{\epsilon}{2m}.$ Moreover, we see that, on $[0,1]$, $f$ is at most $1$. Let $P = P_1 \cup P_2$. Thus,
\begin{eqnarray*} \sum_{i \in P} M_i \Delta x_i &=& \sum_{i \in P_1} M_i \Delta x_i + \sum_{i \in P_2} M_i \Delta x_i \\ &\leq& \frac{1}{M} \sum_{i \in P_1} \Delta x_i + \sum_{i \in P_2} \Delta x_i \\ &<& \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon \end{eqnarray*}
Comments?
EDITED I think I resolved the issue.
|
Based on Leo's answer, but I removed one column and replaced the \big) with a vertical rule. Now I want to make all columns have the same width. How can I do this?
\documentclass{article}
\usepackage{amssymb}
\begin{document}
\[x^3 - x + 1 = (x-1)(x^2+x) + 1 \in \mathbb{F}_3[x]\]
\[
\renewcommand\arraystretch{1.2}
\begin{array}{*{6}{r}}
& & & 1& 1& 0\\
\cline{3-6}
1& -1& \multicolumn{1}{|r}{1}& 0& -1& 1\\
& & 1& -1& & \\
\cline{3-5}
& & & 1& -1& \\
& & & 1& -1& \\
\cline{4-6}
& & & & & 1\\
\end{array}
\]
\end{document}
I got the solution by using \begin{array}{*{6}{>{\hfill}m{1cm}}}, but is there any other solution?
|
I am currently working on my bachelor's thesis on the anapole / toroidal moment and it seems that I am stuck with a tensor decomposition problem.
I have actually never had a course about tensors, so I am a complete newbie.
I need to expand a localized current density, which is done by expressing the current via delta distribution and expanding the latter:
$$\vec{j}(\vec{r},t) = \int\vec{j}(\vec{\xi},t) \delta(\vec{\xi}-\vec{r}) d^3\xi$$ $$\delta(\vec{\xi}-\vec{r}) = \sum_{l=0}^{\infty} \frac{(-1)^l}{l!} \xi_i ...\xi_k \nabla_i ... \nabla_k \delta(\vec{r}) $$
So I get some result containing the following tensor:
$$B_{ij...k}^{(l)}(t) := \frac{(-1)^{l-1}}{(l-1)!} \int j_i \xi_j ... \xi_k d^3\xi$$
So far, I have understood the math. But now comes the tricky part. In the paper, it says that "we can decompose the tensors $B_{ij...k}^{(l)}$ into irreducible tensors, separating the various multipole moments and radii." and further "...of third rank, $B_{ijk}^{(3)}$ can obviously be reduced according to the scheme $1 \times (2+0) = (3+1)+2+1$. It can be seen that the representation of weight $l=1$ is extracted twice from $B_{ijk}^{(3)}$." And then follows what seems like the decomposition, and I am hopelessly lost.
$$j_i\xi_j\xi_k = \frac{1}{3} \left[ j_i\xi_j\xi_k + j_k\xi_i\xi_j + j_j\xi_k\xi_i - \frac{1}{5} \left( \delta_{ij}\theta_k + \delta_{ik}\theta_j + \delta_{jk}\theta_i \right) \right] - \frac{1}{3} \left( \epsilon_{ijl} \mu_{kl} + \epsilon_{ikl}\mu_{jl}\right)$$ $$+ \frac{1}{6} \left( \delta_{ij}\lambda_k + \delta_{ik}\lambda_j - 2 \delta_{jk}\lambda_i \right) + \frac{1}{5} \left( \delta_{ij}\theta_k + \delta_{ik}\theta_j + \delta_{jk}\theta_i \right)$$
with
$$\mu_{ik} = \mu_i\xi_k + \mu_k\xi_i \ , \ \mu_i=\frac{1}{2} \epsilon_{ijk}\xi_j j_k$$ $$\theta_i=2\xi_i \vec{\xi}\cdot \vec{j} + \xi^2 j_i$$ $$\lambda_i=\xi_i\vec{\xi}\cdot \vec{j} - \xi^2 j_i$$
This decomposition obviously contains many quantities that later on appear also in the multipole expansion, e.g. the magnetic quadrupole moment $\mu_{ik}$. So on the physics side of things, this makes sense to me.
But not on the mathematical side. On this board I found some questions regarding tensor decomposition and in the answers I learned something about symmetric and antisymmetric tensors and that every tensor can be decomposed in several irreducible ones, which better represent physical properties of the system and symmetries.
But still, some questions remain:
1.) What do the numbers $\frac{1}{3}$, $\frac{1}{5}$, etc. mean? Is this some kind of normalization?
2.) How exactly does one decompose the tensor? How can I reconstruct what exactly has been done, i.e. which steps one has to follow to decompose it like this?
|
Given a translationally invariant Matrix Product State (assuming periodic boundary condition) on $N$ sites of dimension $d$ each, which takes the form
$\sum_{i_1,i_2\ldots i_N=1}^dTr(A_{i_1}A_{i_2}\ldots A_{i_N})|i_1,i_2\ldots i_N\rangle$,
with $A_i$ being $D\times D$ matrices. If $d<D^2$, then matrices $A_i$ do not span the space of $D\times D$ matrices. But by blocking $l$ sites together, so that $d^l\geq D^2 > d^{l-1}$, one notices that in most cases, the matrices $A_{i_1}A_{i_2}\ldots A_{i_l}$ do span the space of $D\times D$ matrices. Then one can write down a blocked Matrix Product representation, in which case, above state takes the form
$\sum_{i_1,i_2\ldots i_N=1}^{d^l}Tr(B_{i_1}B_{i_2}\ldots B_{i_M})|i_1,i_2\ldots i_M\rangle$,
with $M=\frac{N}{l}$.
Given this representation, there is a 2-local parent hamiltonian in the blocked picture, defined as the projector orthogonal to the subspace $\{\sum_{i_1,i_2}Tr(XB_{i_1}B_{i_2})|i_1,i_2\rangle \mid X \text{ ranges over all } D\times D \text{ matrices}\}$. Thus the overall locality of the parent hamiltonian is $2l$. Moreover, this Matrix Product State is the unique ground state of the parent hamiltonian constructed above.
But now, lets compute these quantities for AKLT model in one dimension (http://en.wikipedia.org/wiki/AKLT_model) . Since particles are spin-1, we have $d=3, D=2$. The matrices for AKLT do not span the space of all $2\times 2$ matrices. So we need to block two neighbouring sites, which implies the parent hamiltonian will have locality 4. But then why does AKLT hamiltonian have locality 2? Is AKLT hamiltonian not the same as parent hamiltonian for AKLT? Note that AKLT hamiltonian has the AKLT state as its unique ground state, assuming periodic boundary condition.
|
As some people on this site might be aware I don't always take downvotes well. So here's my attempt to provide more context to my answer for whoever decided to downvote.
Note that I will confine my discussion to functions $f: D\subseteq \Bbb R \to \Bbb R$ and to ideas that should be simple enough for anyone who's taken a course in scalar calculus to understand. Let me know if I haven't succeeded in some way.
First, it'll be convenient for us to define a new notation. It's called "little oh" notation.
Definition: A function $f$ is called little oh of $g$ as $x\to a$, denoted $f\in o(g)$ as $x\to a$, if
$$\lim_{x\to a}\frac {f(x)}{g(x)}=0$$
Intuitively this means that $f(x)\to 0$ as $x\to a$ "faster" than $g$ does.
Here are some examples:
- $x\in o(1)$ as $x\to 0$
- $x^2 \in o(x)$ as $x\to 0$
- $x\in o(x^2)$ as $x\to \infty$
- $x-\sin(x)\in o(x)$ as $x\to 0$
- $x-\sin(x)\in o(x^2)$ as $x\to 0$
- $x-\sin(x)\not\in o(x^3)$ as $x\to 0$
Now what is an affine approximation? (Note: I prefer to call it affine rather than linear -- if you've taken linear algebra then you'll know why.) It is simply a function $T(x) = A + Bx$ that approximates the function in question.
Intuitively it should be clear which affine function should best approximate the function $f$ very near $a$. It should be $$L(x) = f(a) + f'(a)(x-a).$$ Why? Well consider that any affine function really only carries two pieces of information: slope and some point on the line. The function $L$ as I've defined it has the properties $L(a)=f(a)$ and $L'(a)=f'(a)$. Thus $L$ is the unique line which passes through the point $(a,f(a))$ and has the slope $f'(a)$.
But we can be a little more rigorous. Below I give a lemma and a theorem that tell us that $L(x) = f(a) + f'(a)(x-a)$ is the best affine approximation of the function $f$ at $a$.
Lemma: If a differentiable function $f$ can be written, for all $x$ in some neighborhood of $a$, as $$f(x) = A + B\cdot(x-a) + R(x-a)$$ where $A, B$ are constants and $R\in o(x-a)$, then $A=f(a)$ and $B=f'(a)$.
Proof: First notice that because $f$, $A$, and $B\cdot(x-a)$ are continuous at $x=a$, $R$ must be too. Then setting $x=a$ we immediately see that $f(a)=A$.
Then, rearranging the equation we get (for all $x\ne a$)
$$\frac{f(x)-f(a)}{x-a} = \frac{f(x)-A}{x-a} = \frac{B\cdot (x-a)+R(x-a)}{x-a} = B + \frac{R(x-a)}{x-a}$$
Then taking the limit as $x\to a$ we see that $B=f'(a)$. $\ \ \ \square$
Theorem: A function $f$ is differentiable at $a$ iff, for all $x$ in some neighborhood of $a$, $f(x)$ can be written as $$f(x) = f(a) + B\cdot(x-a) + R(x-a)$$ where $B \in \Bbb R$ and $R\in o(x-a)$.
Proof: "$\implies$": If $f$ is differentiable then $f'(a) = \lim_{x\to a} \frac{f(x)-f(a)}{x-a}$ exists. This can alternatively be written $$f'(a) = \frac{f(x)-f(a)}{x-a} + r(x-a)$$ where the "remainder function" $r$ has the property $\lim_{x \to a} r(x-a)=0$. Rearranging this equation we get $$f(x) = f(a) + f'(a)(x-a) -r(x-a)(x-a).$$ Let $R(x-a):= -r(x-a)(x-a)$. Then clearly $R\in o(x-a)$ (confirm this for yourself). So $$f(x) = f(a) + f'(a)(x-a) + R(x-a)$$ as required.
"$\impliedby$": Simple rearrangement of this equation yields
$$B + \frac{R(x-a)}{x-a}= \frac{f(x)-f(a)}{x-a}.$$ The limit as $x\to a$ of the LHS exists and thus the limit also exists for the RHS. This implies $f$ is differentiable by the standard definition of differentiability. $\ \ \ \square$
Taken together, the above lemma and theorem tell us not only that $L(x) = f(a) + f'(a)(x-a)$ is the only affine function whose remainder tends to $0$ as $x\to a$ faster than $x-a$ itself (this is the sense in which this approximation is the best), but also that we can even define the concept of differentiability by the existence of this best affine approximation.
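Numerically, the little-oh condition on the remainder is easy to watch in action (a sketch; the choices $f=\sin$ and $a=0.5$ are mine):

```python
import math

# Best affine approximation of f = sin at a: L(x) = f(a) + f'(a)(x - a)
f, fprime, a = math.sin, math.cos, 0.5
L = lambda x: f(a) + fprime(a) * (x - a)

# R(x - a) / (x - a) should tend to 0 as x -> a (the little-oh condition)
ratios = [abs(f(a + h) - L(a + h)) / h for h in (1e-1, 1e-2, 1e-3)]
assert ratios[0] > ratios[1] > ratios[2]     # shrinking toward 0
assert ratios[2] < 1e-3                      # already tiny at h = 1e-3
```

Here the remainder behaves like $-\tfrac{\sin(a)}{2}h^2$, so the ratio shrinks linearly in $h$.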
|
Let $u_0\ge 2$ and $\alpha\ge 1$ integers.
I'm trying to study the sequence $(u_n)_{n\ge 0}$ defined by : $$\forall n\ge 0,\quad u_{n+1}=d(u_n)^{\alpha}, $$ where $d$ is the number-of-divisors function.
If $\alpha=1$, it's not hard to prove that $$\lim\limits_{n\rightarrow +\infty}u_n=2,$$ And if $\alpha=2$ we also have : $$\lim\limits_{n\rightarrow +\infty} u_n=9. $$ However, if $\alpha\ge 3$, the sequence doesn't always converge.
For example, when $\alpha=3$, if $u_0=2$, then for $n\ge 1$, $u_{2n}=2^6$ and $u_{2n+1}=7^3$.
Do we have more results about the behaviour of the sequence $(u_n)_{n\ge 0}$ when $\alpha\ge 3$ ?
$\textit{Edit 2 -}$ After many simulations, I have the following conjecture :
$\boxed{\textbf{Conjecture -} \text{ For all $\alpha\ge 1$, the sequence $(u_n)_{n\ge 0}$ is eventually periodic.}}$
When $\alpha=4$, the sequence always seems to converge, but the limit depends on $u_0$ (for $u_0=2$ the limit is $625$ and for $u_0=6$ the limit is $6561$).
For example, when $\alpha=5$, if $u_0=2$, then the sequence converges to $3^52^5$, but if $u_0=2^53^511^{10}$, then the sequence is $2$-periodic (with $u_1=2^{10}3^{10}11^{5}$).
We can easily prove that :
$\boxed{\text{For all $\alpha\ge 1$, there are infinitely many numbers such that $(u_n)_{n\ge 0}$ converges.}} $
Indeed, if $u_0=p_1...p_l$ is a squarefree integer, then : $$u_1=2^{\alpha l} $$ and $$u_2=(\alpha l+1)^{\alpha}. $$ By Dirichlet's theorem on arithmetic progressions, there are infinitely many integers $l\ge 1$ such that $\alpha l+1$ is a prime number. If we choose such an integer $l\ge 1$, then for all $n\ge 2$ : $$u_n=(\alpha l+1)^{\alpha}, $$ and the sequence $(u_n)_{n\ge 0}$ converges. $\blacksquare$
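The small cases quoted in the question are easy to reproduce by brute force (a sketch; the naive divisor count is fine at these sizes):

```python
# Iterate u_{n+1} = d(u_n)^alpha and reproduce the behaviour described above.
def d(n):
    # number-of-divisors function (naive but sufficient here)
    return sum(1 for k in range(1, n + 1) if n % k == 0)

def orbit(u0, alpha, steps):
    u, out = u0, []
    for _ in range(steps):
        u = d(u) ** alpha
        out.append(u)
    return out

assert orbit(12, 1, 10)[-1] == 2                      # alpha = 1: converges to 2
assert orbit(2, 2, 10)[-1] == 9                       # alpha = 2: converges to 9
assert orbit(2, 3, 10)[-4:] == [343, 64, 343, 64]     # alpha = 3: 2-cycle 2^6, 7^3
```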
|
How to get the optimal value for $k$?
You have to define a measure of optimality. The problem with that is that with bigger $k$ most measures become smaller (better). One measure which is independent of $k$ is the silhouette coefficient:
Let $C = (C_1, \dots, C_k)$ be the clusters. Then:
Average distance between an object $o$ and the other objects in the same cluster: $$a(o) = \frac{1}{|C(o)|} \sum_{p \in C(o)} \text{dist}(o, p)$$ Average distance to the nearest other cluster: $$b(o) = \min_{C_i \neq C(o)} \frac{1}{|C_i|} \sum_{p\in C_i} \text{dist}(o, p)$$ Silhouette of an object: $$s(o) = \begin{cases}0 &\text{if } a(o) = 0, \text{i.e. } |C(o)|=1\\ \frac{b(o)-a(o)}{\max(a(o), b(o))} &\text{otherwise}\end{cases}$$ Silhouette of a clustering $C$: $$\text{silh}(C) = \frac{1}{|C|} \sum_{C_i \in C} \frac{1}{|C_i|} \sum_{o \in C_i} s(o)$$
You can see that $s(o) \in [-1, 1]$ and $\text{silh}(C) \in [-1, 1]$. Higher is better. Smaller than 0 is very bad.
Now you can start with $k=1$ and increase $k$ until $\text{silh}(C)$ gets smaller again.
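A minimal pure-Python rendering of these formulas (my own toy data; `dist` is absolute difference in one dimension, and, as in the formula above, $a(o)$ averages over the whole cluster including $o$ itself):

```python
def dist(a, b):
    return abs(a - b)

def silhouette(clusters):
    def a(o, C):
        return sum(dist(o, p) for p in C) / len(C)
    def b(o, C):
        return min(sum(dist(o, p) for p in Ci) / len(Ci)
                   for Ci in clusters if Ci is not C)
    def s(o, C):
        if len(C) == 1:
            return 0.0
        ao, bo = a(o, C), b(o, C)
        return (bo - ao) / max(ao, bo)
    # average the per-cluster average silhouettes, as in silh(C)
    return sum(sum(s(o, C) for o in C) / len(C) for C in clusters) / len(clusters)

good = silhouette([[0.0, 1.0], [10.0, 11.0]])   # respects the natural gap
bad = silhouette([[0.0, 10.0], [1.0, 11.0]])    # cuts across the gap
assert 0.9 < good <= 1.0
assert good > bad
```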
However, there are alternatives to $k$-means clustering:
|
A tantalizing open question in computational complexity is to understand the 'behavioral differences' between the determinant and the permanent. While the former is computable in polynomial time with Gauss pivot, the second is $\# P$-hard by Valiant's result. There have been attempts to generalize the notions of determinant/permanent, but I was wondering if the following notion of subgroup-restricted determinant had been considered.
Fix a series $\Sigma$ of subgroups $G_1 \leq G_2 \leq \ldots \leq G_k$ of $S_n$, and for an index $p \leq k$ let $\Sigma_p$ denote the truncated series $G_1 \leq G_2 \leq \ldots \leq G_p$. Say that $\Sigma$ is normal if (i) $G_1$ is simple, (ii) for each $1 \leq i < k$, $G_i$ is a maximal normal subgroup of $G_{i+1}$. We may then define, for an $n \times n$ matrix $A$:
Given $K \subseteq S_n$, let $Sum_{K}(A) = \sum_{\sigma \in K} a_{1,\sigma(1)} \ldots a_{n,\sigma(n)}$;
Let $SgrDet_{\Sigma}(A) = Sum_{G_k}(A) - \sum_{i = 1}^{k-1} [G_i : G_k] SgrDet_{\Sigma_i}(A)$.
Observe that when $G_{k}$ is isomorphic to $Z_n$, any maximal series $\Sigma$ corresponds to a prime factorization of $n = p_1^{i_1} \ldots p_k^{i_k}$, and then $SgrDet_{\Sigma}(I_n)$ corresponds to the 'signed divisor function' $\zeta'(n) = [i_1]_{-p_1} \ldots [i_k]_{-p_k}$ where by convention $[n]_q = \sum_{i = 0}^{n-1} q^i$.
Observe that we can recover the permanent and determinant with the series $(S_n)$ and $(A_n \lhd S_n)$, respectively. Note that the second series is normal (by the simplicity of $A_n$) while the first is not. This leads to the following questions, probably difficult:
(1) does the formula always have exponential algebraic complexity when the series is not normal?
(2) are there other examples of a normal series for which the formula has polynomial complexity?
(NOTE: I updated the definition of $SgrDet$ to have a better-behaved operator, but it's still unclear whether it is the 'right' definition).
|
Let $T:X\to Y$ a linear bounded operator between Banach spaces. Let $U$ a neighbourhood of $0\in Y$, $t\in(0,1)$ and suppose that $\forall u\in U$ $\exists \bar x\in X$ with $\|\bar x\|\le1$ and $\bar u\in U$ such that $$u=T\bar x +t\bar u.$$Then if I take $u\in U$, I can write
$$u=T\left(\sum_{i=0}^{\infty} t^ix_i\right), \ \ \ \|x_i\|\le1$$ and the series makes sense since $$\left\|\sum_{i=0}^{\infty} t^ix_i\right\|\le \sum_{i=0}^{\infty} t^i=\frac{1}{1-t}=C<+\infty.$$
In this way I obtain that $$U\subset T\left(B_X(0,C)\right).$$
Is it correct?
|
Here is a complete proof. It comes from modifying this answer. We also make use of the lemma $\theta(x)\leq 3x$ shown in this answer. Throughout, $\theta(x)$ refers to the weighted prime counting function $\sum_{p\leq x}\log p$.
Consider $\binom{2N}{N}.$ For each prime $p$, let $v_{p}$ denote the number of times it divides $\binom{2N}{N}$. Then $$\binom{2N}{N}=\prod_{p\leq2N}p^{v_{p}}.$$ Let $t=\log_{p}(2N)$, so that $\lfloor\frac{2N}{p^{j}}\rfloor=0$ for $j>t$. Now, using the fact that we know how many times a prime divides $n!$, and $\binom{2n}{n}=\frac{(2n)!}{n!n!}$ we have that $$v_{p}=\left(\lfloor\frac{2N}{p}\rfloor+\lfloor\frac{2N}{p^{2}}\rfloor+\cdots\right)-2\left(\lfloor\frac{N}{p}\rfloor+\lfloor\frac{N}{p^{2}}\rfloor+\cdots\right)=\sum_{i=1}^{t}\lfloor\frac{2N}{p^{i}}\rfloor-2\lfloor\frac{N}{p^{i}}\rfloor.$$ Since $\lfloor\frac{2N}{p^{i}}\rfloor-2\lfloor\frac{N}{p^{i}}\rfloor=0$ or $\lfloor\frac{2N}{p^{i}}\rfloor-2\lfloor\frac{N}{p^{i}}\rfloor=1$ for each $i$ we see that $v_{p}\leq [t],$ the floor of $t$. Now notice that $[t]=1$ as long as $p>\sqrt{2N}$, and that $[t]=2$ when $\sqrt{2N}\geq p>(2N)^\frac{1}{3}$, etc... Hence $$\log\binom{2N}{N}\leq\theta\left(2N\right)+\theta\left(\sqrt{2N}\right)+\theta\left(\left(2N\right)^{\frac{1}{3}}\right)+\cdots $$ $$=\psi\left(2N\right):=\sum_{p^{k}\leq2N}\log p. $$
Now, since the central binomial coefficient is the largest, we have $\binom{2N}{N}\geq \frac{4^N}{2N}$. (We get $2N$ as a denominator instead of $2N+1$ by noting $1$ appears at both ends of the triangle) Hence $$N\log 4-\log (2N) \leq \psi(2N).$$
Idea how to proceed: Since you only need $$N\log 2\leq \theta(2N),$$ we can use upper bounds for $\theta$ and remove the undesirable terms from the inequality by showing they are less than the other $N\log 2$ which we throw away.
Specifics: In this answer here we show by a similar approach that $\theta(x)\leq 3x$ for all $x$. Hence $$\theta(\sqrt{2N})+\theta\left((2N)^\frac{1}{3}\right)+\cdots\leq 3\sqrt{2N}+3(2N)^\frac{1}{3}+\cdots\leq 6\sqrt{2N}$$ for $N\geq 200$. Now by using methods of calculus, you can show that for all $N\geq 200$ we have that $$N\log 2> \log(2N)+6\sqrt{2N}.$$ This implies that $N\log 2<\theta(2N)$ when $N\geq 200$, and hence $$\prod_{p\leq 2N} p >2^N$$ for all $N\geq 200$. Now, just check via computer or by hand for $N\leq 200$, and the inequality will be proven for all $N$.
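The finishing computer check is tiny. One caveat observed when running it: at $N=1$ we get equality, $\prod_{p\le 2}p = 2 = 2^1$, so the strict check below starts at $N=2$:

```python
# Check prod_{p <= 2N} p > 2^N directly for the small cases 2 <= N <= 200
limit = 400
sieve = [True] * (limit + 1)
sieve[0] = sieve[1] = False
for i in range(2, int(limit ** 0.5) + 1):
    if sieve[i]:
        for j in range(i * i, limit + 1, i):
            sieve[j] = False

for N in range(2, 201):
    prod = 1
    for p in range(2, 2 * N + 1):
        if sieve[p]:
            prod *= p
    assert prod > 2 ** N, N
```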
|
Let $n \in \mathbb{N}$ be a composite number, and $n = pq$ where $p,q$ are
distinct primes. Let $F : \mathbb{N} \rightarrow \mathbb{N} \times \mathbb{N}$ (*) be an algorithm which takes as an input $x \in \mathbb{N}$ and returns two primes $u, v$ such that $x = uv,$ or returns FAIL if there is no such factorization ($F$ uses, say, an oracle). That is, $F$ solves the RSA factorization problem. Note that whenever a prime factorization $x = uv$ exists for $x,$ $F$ is guaranteed to find it.
Can $F$ be used to solve the prime factorization problem in general? (i.e. given $n \in \mathbb{N},$ find primes $p_i \in \mathbb{N},$ and integers $e_i \in \mathbb{N},$ such that $n = \prod_{i=0}^{k} p_{i}^{e_i}$)
If yes, how? A rephrased question would be: is the factorization problem harder than factoring $n = pq$?
(*) abuse of the function type notation. More appropriately $F : \mathbb{N} \rightarrow \mathbb{N} \times \mathbb{N} \bigcup \mbox{FAIL} $
Edit 1: $F$ can determine $p,q,$ or FAIL in polynomial time. The general factoring algorithm is required to be polynomial time. Edit 2: The question is now cross-posted on cstheory.SE.
|
This question already has an answer here:
As the title says, I want to prove that there is a natural number $k$ such that $2^k$ is starting with $999$. Can you help me please ?
If $\log_{10}(2^k) = k \log_{10}(2)$ is extremely close to an integer, but less than such integer, we are happy. To get some working values for $k$, it is enough to compute the continued fraction of $$\frac{\log 2}{\log 10}=[0; 3, 3, 9, 2, 2, 4, 6, 2, 1, 1, 3, 1, 18,\ldots ]. $$
By considering the expansion of $[0; 3, 3, 9, 2, 2, 4, 6, 2, 1, 1, 3]$ we get:$$ 254370\cdot\log_{10}(2) = 76572.999997\ldots $$hence the number $\color{red}{2^{254370}}$ starts with the digits $999$ as wanted.
The same happens with $\color{red}{2^{13301}}$ that is associated with the continued fraction $[0; 3, 3, 9, 2, 2, 4, 6]$.
The least power of two with the wanted property should be $\color{red}{2^{2621}}$ that is associated with the continued fraction $[0, 3, 3, 9, 2, 2, 5]$.
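All three claims are easy to confirm with exact integer arithmetic; a quick check (the brute-force search below also agrees that $2^{2621}$ is the least such power):

```python
# The claimed exponents, checked with exact integer arithmetic.
for k in (2621, 13301):
    assert str(2 ** k).startswith("999"), k

# Brute-force search for the least exponent whose power of two starts "999".
k, power = 1, 2
while not str(power).startswith("999"):
    k += 1
    power *= 2
print(k)  # 2621, matching the continued-fraction prediction
```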
|
How can I show that the following holds?
$$\langle nlm\mid \partial_z^2\mid nlm\rangle=-\int_0^{4\pi}d\Omega\int_0^{\infty}drr^2\left|\partial_z\psi_{nlm}\right|^2$$
The wave functions of a free particle are: $\mid nlm\rangle=\psi_{nlm}$.
This conversion is stated in the Quantum Mechanics book by Landau, Lifshitz (Vol.3) on the bottom of the page 137. (https://books.google.de/books?id=neBbAwAAQBAJ&pg=PA137&lpg=PA137&dq=boundary+deformation+as+perturbation+spheric+infinite+potential+well&source=bl&ots=FiqcKgb76e&sig=kmhf0opstnXK3R3fBnVMMv5ZO2s&hl=de&sa=X&ei=QM2NVaeSB8nuUJyZuPgO&ved=0CC0Q6AEwAQ#v=onepage&q=boundary%20deformation%20as%20perturbation%20spheric%20infinite%20potential%20well&f=false)
The only hint that is given there is that integration by parts is used. So I tried: $$\langle nlm\mid \partial_z^2\mid nlm\rangle=\int d^3r\,\psi_{nlm}^\star \left(\partial_z^2\psi_{nlm}\right)=\int dx\int dy\int dz\, \psi_{nlm}^\star\left(\partial_z^2\psi_{nlm}\right)=$$ $$=\int dx\int dy\left[\left.\psi_{nlm}^\star \left(\partial_z\psi_{nlm}\right)\right|_{z=0}^\infty-\int dz\left|\partial_z\psi_{nlm}\right|^2\right]=$$ $$=\int dx\int dy\,\left[\psi_{nlm}^\star \left(\partial_z\psi_{nlm}\right)\right]_{z=0}^\infty-\int d^3r\left|\partial_z\psi_{nlm}\right|^2$$ Now the first term has to vanish? For the upper boundary: $\psi_{nlm}=R_{nl}Y_{lm}\overset{z\rightarrow\infty}{\rightarrow}0$ is trivial because $R_{nl}\propto e^{-r}$. But what happens for $z\rightarrow0$?
|
The aim is to determine the group velocity of a wave packet with the general form
$$\Psi\left(x,t\right)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \phi\left(k\right)e^{i\left(kx-\omega t\right)}dk.$$
Quoting from
Introduction to Quantum Mechanics by David Griffiths:
Since the integrand is negligible except in the vicinity of $k_{0}$, we may as well Taylor expand the function $\omega\left(k\right)$ about that point and keep only the leading terms: $$\omega\left(k\right) \approx \omega_{0}+\omega_{0}'\left(k-k_{0}\right)$$ where $\omega_{0}'$ is the derivative of $\omega$ with respect to $k$
What is unclear to me here is why do we Taylor-expand the function $\omega\left(k\right)$
Proceeding on, the author performed a change of coordinate variables from $k$ to $s\equiv k-k_{0}$ so that
$$\Psi\left(x,t\right)\approx \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \phi\left(k_{0}+s\right)e^{i\left[\left(k_{0}+s\right)x-\left(\omega_{0}+\omega_{0}'s\right)t\right]}ds$$
Perhaps I am not understanding the preceding argument completely but what is the motivation for performing this change of variables from $k$ to $s$?
Again, the author performed a change of coordinates so that we have
$$\Psi\left(x,t\right)\approx \frac{1}{\sqrt{2\pi}}e^{i\left(-\omega_{0}t+k_{0}\omega_{0}t\right)}\int_{-\infty}^{\infty} \phi\left(k_{0}+s\right)e^{i\left(k_{0}+s\right)\left(x-\omega_{0}'t\right)}ds.$$
A fuzzy intuition I can conjure to account for the change of variables is that the wave packet behaves like a moving wave front (i.e., like the ablation front of a layer of sublimating material), so the best way to prevent the coordinate system from 'moving' relative to the packet is to introduce a comoving change of variable. An explanation would really be ideal.
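One way to make the stationary-phase argument concrete is to synthesize a packet numerically and watch its envelope. Assuming a quadratic dispersion $\omega(k)=k^2/2$ with $k_0=5$ (my own choice of test case, not Griffiths' general derivation), the peak of $|\Psi|$ should travel at $\omega'(k_0)=5$, twice the phase velocity $\omega_0/k_0 = 2.5$:

```python
import numpy as np

# Assumed dispersion and packet parameters -- a concrete test case only.
k0, sigma, t_final = 5.0, 0.5, 10.0
k = np.linspace(k0 - 5.0, k0 + 5.0, 2001)
dk = k[1] - k[0]
phi = np.exp(-(k - k0) ** 2 / (2 * sigma ** 2))  # phi(k), sharply peaked at k0
omega = k ** 2 / 2                               # omega(k); omega'(k0) = k0 = 5

x = np.linspace(-20.0, 80.0, 1001)

def packet(t):
    # Psi(x, t) as a Riemann sum over k of phi(k) e^{i(kx - w t)}
    return np.exp(1j * (np.outer(x, k) - omega * t)) @ phi * dk

peak_0 = x[np.argmax(np.abs(packet(0.0)))]
peak_t = x[np.argmax(np.abs(packet(t_final)))]
v_group = (peak_t - peak_0) / t_final
print(v_group)  # close to omega'(k0) = 5.0, not the phase velocity 2.5
```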
|
I was reading up on de Sitter spaces, where I came across the statement that the gravitational effects of a black hole are indistinguishable from those of any other spherically symmetric mass distribution. This makes a lot of sense to me.
I'm now super curious, can we just formulate all the properties of a black hole beyond the limit at which GR is needed in a completely Maxwellian / Newtonian sense? The Wikipedia article on the Kerr-Newman metric seems to indicate so, but the equations are in terms of the GR metrics. This is not what I want, I want a simplification, a limit-case of that math.
Ted Bunn answered a part of my question in Detection of the Electric Charge of a Black Hole. Let me repeat the stupidly simple form of the gravitational and electric field for a black hole beyond the point at which GR is needed.
$$\vec{g} = \frac{GM}{r^2} \hat{r}$$
$$\vec{E} = \frac{Q}{4\pi\epsilon_0 r^2} \hat{r}$$
Correct me if I'm wrong, but these would be a meaningful and accurate approximation in many, in fact, most situations where we would plausibly interact with a black hole (if we were close enough that these are no longer representative, we'd be risking a date with eternity).
Question: Fill in the blank; what would the magnetic field $\vec{B}$ around a black hole be?
Here is why I find it non-trivial: Every magnet in "our" world has some significant waist to it. So here is a normal magnet.
What happens when this is a black hole? Would the approximation for $\vec{B}$ that I'm asking for have all the magnetic field lines pass through the singularity? Or would they all pass through the event horizon radius but not necessarily a single point?
I think most people who have understood this already, but ideally the answer would use the 3 fundamental metrics of a black hole. Mass $M$, charge $Q$, and angular momentum $L$. The prior equations for gravity and electric field already fit this criteria. So the answer I'm looking for should be doable in the following form.
$$\vec{B} = f \left( M, Q, L, \vec{r} \right) $$
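For the record, the far-field limit of the Kerr–Newman solution is known to behave like a point magnetic dipole aligned with the spin axis, with moment $\mu = QL/M$ in SI units (the famous $g=2$ result), so in that regime every field line of the approximation threads the central point. A sketch of that asymptotic dipole in the requested form (function name and vector conventions are mine; valid only far outside the horizon):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (SI)

def b_field(M, Q, L_vec, r_vec):
    """Far-field magnetic dipole approximation for a Kerr-Newman hole.

    Assumes the standard asymptotic result mu = Q * L / M (g-factor 2).
    SI units throughout; only meaningful well outside the horizon.
    """
    m = (Q / M) * np.asarray(L_vec, dtype=float)  # magnetic moment vector
    r = np.asarray(r_vec, dtype=float)
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4 * np.pi * rn ** 3) * (3 * np.dot(m, rhat) * rhat - m)
```

On the spin axis this reduces to the usual $B = \mu_0\mu/(2\pi z^3)$, i.e. the "zero-waist" dipole picture rather than a finite magnet.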
|
Flavour anomalies after the $R_{K^*}$ measurement / D'Amico, Guido (CERN) ; Nardecchia, Marco (CERN) ; Panci, Paolo (CERN) ; Sannino, Francesco (CERN ; Southern Denmark U., CP3-Origins ; U. Southern Denmark, Odense, DIAS) ; Strumia, Alessandro (CERN ; Pisa U. ; INFN, Pisa) ; Torre, Riccardo (EPFL, Lausanne, LPTP) ; Urbano, Alfredo (CERN) The LHCb measurement of the $\mu/e$ ratio $R_{K^*}$ indicates a deficit with respect to the Standard Model prediction, supporting earlier hints of lepton universality violation observed in the $R_K$ ratio. We show that the $R_K$ and $R_{K^*}$ ratios alone constrain the chiralities of the states contributing to these anomalies, and we find deviations from the Standard Model at the $4\sigma$ level. [...] arXiv:1704.05438; CP3-ORIGINS-2017-014; CERN-TH-2017-086; IFUP-TH/2017; CP3-Origins-2017-014.- 2017-09-04 - 31 p. - Published in : JHEP 09 (2017) 010 Article from SCOAP3: PDF; Fulltext: PDF; Preprint: PDF
Multi-loop calculations: numerical methods and applications / Borowka, S. (CERN) ; Heinrich, G. (Munich, Max Planck Inst.) ; Jahn, S. (Munich, Max Planck Inst.) ; Jones, S.P. (Munich, Max Planck Inst.) ; Kerner, M. (Munich, Max Planck Inst.) ; Schlenk, J. (Durham U., IPPP) We briefly review numerical methods for calculations beyond one loop and then describe new developments within the method of sector decomposition in more detail. We also discuss applications to two-loop integrals involving several mass scales. CERN-TH-2017-051; IPPP-17-28; MPP-2017-62; arXiv:1704.03832.- 2017-11-09 - 10 p. - Published in : J. Phys. : Conf. Ser. 920 (2017) 012003 Fulltext from Publisher: PDF; Preprint: PDF; In : 4th Computational Particle Physics Workshop, Tsukuba, Japan, 8 - 11 Oct 2016, pp.012003
Anomaly-Free Dark Matter Models are not so Simple / Ellis, John (King's Coll. London ; CERN) ; Fairbairn, Malcolm (King's Coll. London) ; Tunney, Patrick (King's Coll. London) We explore the anomaly-cancellation constraints on simplified dark matter (DM) models with an extra U(1)$^\prime$ gauge boson $Z'$. We show that, if the Standard Model (SM) fermions are supplemented by a single DM fermion $\chi$ that is a singlet of the SM gauge group, and the SM quarks have non-zero U(1)$^\prime$ charges, the SM leptons must also have non-zero U(1)$^\prime$ charges, in which case LHC searches impose strong constraints on the $Z'$ mass. [...] KCL-PH-TH-2017-21; CERN-TH-2017-084; arXiv:1704.03850.- 2017-08-16 - 19 p. - Published in : JHEP 08 (2017) 053 Article from SCOAP3: PDF; Preprint: PDF
Single top polarisation as a window to new physics / Aguilar-Saavedra, J.A. (Granada U., Theor. Phys. Astrophys.) ; Degrande, C. (CERN) ; Khatibi, S. (IPM, Tehran) We discuss the effect of heavy new physics, parameterised in terms of four-fermion operators, in the polarisation of single top (anti-)quarks in the $t$-channel process at the LHC. It is found that for operators involving a right-handed top quark field the relative effect on the longitudinal polarisation is twice larger than the relative effect on the total cross section. [...] CERN-TH-2017-013; arXiv:1701.05900.- 2017-06-10 - 5 p. - Published in : Phys. Lett. B 769 (2017) 498-502 Article from SCOAP3: PDF; Elsevier Open Access article: PDF; Preprint: PDF
Colorful Twisted Top Partners and Partnerium at the LHC / Kats, Yevgeny (CERN ; Ben Gurion U. of Negev ; Weizmann Inst.) ; McCullough, Matthew (CERN) ; Perez, Gilad (Weizmann Inst.) ; Soreq, Yotam (MIT, Cambridge, CTP) ; Thaler, Jesse (MIT, Cambridge, CTP) In scenarios that stabilize the electroweak scale, the top quark is typically accompanied by partner particles. In this work, we demonstrate how extended stabilizing symmetries can yield scalar or fermionic top partners that transform as ordinary color triplets but carry exotic electric charges. [...] MIT-CTP-4897; CERN-TH-2017-073; arXiv:1704.03393.- 2017-06-23 - 34 p. - Published in : JHEP 06 (2017) 126 Article from SCOAP3: PDF; Preprint: PDF
Where is Particle Physics Going? / Ellis, John (King's Coll. London ; CERN) The answer to the question in the title is: in search of new physics beyond the Standard Model, for which there are many motivations, including the likely instability of the electroweak vacuum, dark matter, the origin of matter, the masses of neutrinos, the naturalness of the hierarchy of mass scales, cosmological inflation and the search for quantum gravity. So far, however, there are no clear indications about the theoretical solutions to these problems, nor the experimental strategies to resolve them [...] KCL-PH-TH-2017-18; CERN-TH-2017-080; arXiv:1704.02821.- 2017-12-08 - 21 p. - Published in : Int. J. Mod. Phys. A 32 (2017) 1746001 Preprint: PDF; In : HKUST Jockey Club Institute for Advanced Study : High Energy Physics, Hong Kong, China, 9 - 26 Jan 2017
Radiative symmetry breaking from interacting UV fixed points / Abel, Steven (Durham U., IPPP ; CERN) ; Sannino, Francesco (CERN ; U. Southern Denmark, CP3-Origins ; U. Southern Denmark, Odense, DIAS) It is shown that the addition of positive mass-squared terms to asymptotically safe gauge-Yukawa theories with perturbative UV fixed points leads to calculable radiative symmetry breaking in the IR. This phenomenon, and the multiplicative running of the operators that lies behind it, is akin to the radiative symmetry breaking that occurs in the Supersymmetric Standard Model. CERN-TH-2017-066; CP3-ORIGINS-2017-011; IPPP-2017-23; arXiv:1704.00700.- 2017-09-28 - 14 p. - Published in : Phys. Rev. D 96 (2017) 056028 Fulltext: PDF; Preprint: PDF
Continuum limit and universality of the Columbia plot / de Forcrand, Philippe (ETH, Zurich (main) ; CERN) ; D'Elia, Massimo (INFN, Pisa ; Pisa U.) Results on the thermal transition of QCD with 3 degenerate flavors, in the lower-left corner of the Columbia plot, are puzzling. The transition is expected to be first-order for massless quarks, and to remain so for a range of quark masses until it turns second-order at a critical quark mass. [...] arXiv:1702.00330; CERN-TH-2017-022.- SISSA, 2017-01-30 - 7 p. - Published in : PoS LATTICE2016 (2017) 081 Fulltext: PDF; Preprint: PDF; In : 34th International Symposium on Lattice Field Theory, Southampton, UK, 24 - 30 Jul 2016, pp.081
|
Suppose I have a finitely generated group $G$ of known rank $n$, and a set $\{s_i\}$ of $n$ group elements. Are there some simple necessary and sufficient conditions to determine whether $s_i$ generates $G$? (Suppose that I don't have any known generating set which I can try to generate with the $\{s_i\}$.)
For example, I think this is a necessary condition:
$\forall s \in \{s_i\} \; \;\not \exists g \in G \; : \; \langle s\rangle \subset \langle g \rangle$
Is it also sufficient?
|
Simulation Tools for Solving Wave Electromagnetics Problems
When solving wave electromagnetics problems with either the RF or Wave Optics modules, we use the finite element method to solve the governing Maxwell’s equations. In this blog post, we will look at the various modeling, meshing, solving, and postprocessing options available to you and when you should use them.
The Governing Equation for Modeling Frequency Domain Wave Electromagnetics Problems
COMSOL Multiphysics uses the finite element method to solve for the electromagnetic fields within the modeling domains. Under the assumption that the fields vary sinusoidally in time at a known angular frequency \omega = 2 \pi f and that all material properties are linear with respect to field strength, the governing Maxwell’s equations in three dimensions reduce to:

\nabla \times \left( \mu_r^{-1} \nabla \times \mathbf{E} \right) - \frac{\omega^2}{c_0^2} \left( \epsilon_r - \frac{j \sigma}{\omega \epsilon_0} \right) \mathbf{E} = 0
With the speed of light in vacuum, c_0, the above equation is solved for the electric field, \mathbf{E}=\mathbf{E}(x,y,z), throughout the modeling domain, where \mathbf{E} is a vector with components E_x, E_y, and E_z. All other quantities (such as magnetic fields, currents, and power flow) can be derived from the electric field. It is also possible to reformulate the above equation as an eigenvalue problem, where a model is solved for the resonant frequencies of the system, rather than the response of the system at a particular frequency.
The above equation is solved via the finite element method. For a conceptual introduction to this method, please see our blog series on the weak form, and for a more in-depth reference, which will explain the nuances related to electromagnetic wave problems, please see
The Finite Element Method in Electromagnetics by Jian-Ming Jin. From the point of view of this blog post, however, we can break down the finite element method into these four steps:

1. Model Set-Up: Defining the equations to solve, creating the model geometry, defining the material properties, setting up metallic and radiating boundaries, and connecting the model to other devices.
2. Meshing: Discretizing the model space using finite elements.
3. Solving: Solving a set of linear equations that describe the electric fields.
4. Postprocessing: Extracting useful information from the computed electric fields.
Let’s now look at each one of these steps in more detail and describe the options available at each step.
Options for Modifying the Governing Equations
The governing equation shown above is the frequency domain form of Maxwell’s equations for wave-type problems in its most general form. However, this equation can be reformulated for several special cases.
Let us first consider the case of a modeling domain in which there is a known background electric field and we wish to place some object into this background field. The background field can be a linearly polarized plane wave, a Gaussian beam, or any general user-defined beam that satisfies Maxwell’s equations in free space. Placing an object into this field will perturb the field and lead to scattering of the background field. In such a situation, you can use the
Scattered Field formulation, which solves the above equation, but makes the following substitution for the electric field:

\mathbf{E} = \mathbf{E}_{background} + \mathbf{E}_{relative}
where the background electric field is known and the relative field is the field that, once added to the background field, gives the total field that satisfies the governing Maxwell’s equations. Rather than solving for the total field, it is the relative field that is being solved. Note that the relative field is
not the scattered field.
For an example of the usage of this
Scattered Field formulation, which considers the radar scattering off of a perfectly electrically conductive sphere in a background plane wave and compares it to the analytic solution, please see our Computing the Radar Cross Section of a Perfectly Conducting Sphere tutorial model.
Next, let’s consider modeling in a 2D plane, where we solve for \mathbf{E}=\mathbf{E}(x,y) and can additionally simplify the modeling by considering an electric field that is polarized either In-Plane or Out-of-Plane. The In-Plane case will assume that E_z=0, while the Out-of-Plane case assumes that E_x=E_y=0. These simplifications reduce the size of the problem being solved, compared to solving for all three components of the electric field vector.
For modeling in the 2D axisymmetric plane, we solve for \mathbf{E}=\mathbf{E}(r,z), where the vector \mathbf{E} has the components E_r, E_\phi, and E_z. We can again simplify our modeling by considering the In-Plane and Out-of-Plane cases, which assume E_\phi=0 and E_r=E_z=0, respectively.
When using either the
2D or the 2D axisymmetric In-Plane formulations, it is also possible to specify an Out-of-Plane Wave Number. This is appropriate to use when there is a known out-of-plane propagation constant, or known number of azimuthal modes. For 2D problems, the electric field can be rewritten as:

\mathbf{E}(x,y,z) = \mathbf{\tilde E}(x,y) \exp(-i k_z z)
and for 2D axisymmetric problems, the electric field can be rewritten as:

\mathbf{E}(r,\phi,z) = \mathbf{\tilde E}(r,z) \exp(-i m \phi)
where k_z or m, the out-of-plane wave number, must be specified.
This modeling approach can greatly simplify the computational complexity for some types of models. For example, a structurally axisymmetric horn antenna will have a solution that varies in 3D but is composed of a sum of known azimuthal modes. It is possible to recover the 3D solution from a set of 2D axisymmetric analyses by solving for these out-of-plane modes at a much lower computational cost, as demonstrated in our Corrugated Circular Horn Antenna tutorial model.
Meshing Requirements and Capabilities
Whenever solving a wave electromagnetics problem, you must keep in mind the mesh resolution. Any wave-type problem must have a mesh that is fine enough to resolve the wavelengths in all media being modeled. This idea is fundamentally similar to the concept of the
Nyquist frequency in signal processing: The sampling size (the finite element mesh size) must be less than one-half of the wavelength being resolved.
By default, COMSOL Multiphysics uses second-order elements to discretize the governing equations. A minimum of two elements per wavelength are necessary to solve the problem, but such a coarse mesh would give quite poor accuracy. At least five second-order elements per wavelength are typically used to resolve a wave propagating through a dielectric medium. First-order and third-order discretization is also available, but these are generally of more academic interest, since the second-order elements tend to be the best compromise between accuracy and memory requirements.
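The five-elements-per-wavelength rule translates directly into a bound on the maximum element size; a back-of-the-envelope helper (the function and its names are my own sketch, not a COMSOL API):

```python
C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def max_element_size(freq_hz, eps_r=1.0, mu_r=1.0, elements_per_wavelength=5):
    """Upper bound on mesh element size for a lossless dielectric medium."""
    wavelength = C0 / (freq_hz * (eps_r * mu_r) ** 0.5)
    return wavelength / elements_per_wavelength

# 10 GHz in vacuum: lambda ~ 30 mm, so elements should stay under ~6 mm;
# in a dielectric with eps_r = 4 the wavelength (and the bound) halves.
print(max_element_size(10e9))             # ~0.0060 m
print(max_element_size(10e9, eps_r=4.0))  # ~0.0030 m
```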
The meshing of domains to fulfill the minimum criterion of five elements per wavelength in each medium is now automated within the software, as shown in this video, which shows not only the meshing of different dielectric domains, but also the automated meshing of Perfectly Matched Layer domains. The new automated meshing capability will also set up an appropriate periodic mesh for problems with periodic boundary conditions, as demonstrated in this Frequency Selective Surface, Periodic Complementary Split Ring Resonator tutorial model.
With respect to the type of elements used, tetrahedral (in 3D) or triangular (in 2D) elements are preferred over hexahedral and prismatic (in 3D) or rectangular (in 2D) elements due to their lower dispersion error. This is a consequence of the fact that the maximum distance within an element is approximately the same in all directions for a tetrahedral element, but for a hexahedral element, the ratio of the longest to the shortest line that fits within a perfect cubic element is \sqrt3. This leads to greater error when resolving the phase of a wave traveling diagonally through a hexahedral element.
It is only necessary to use hexahedral, prismatic, or rectangular elements when you are meshing a perfectly matched layer or have some foreknowledge that the solution is strongly anisotropic in one or two directions. When resolving a wave that is decaying due to absorption in a material, such as a wave impinging upon a lossy medium, it is additionally necessary to manually resolve the skin depth with the finite element mesh, typically using a boundary layer mesh, as described here.
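Since the skin depth sets that boundary-layer resolution, it helps to estimate it before meshing; a sketch using the classical formula \delta = \sqrt{2 / (\omega \mu \sigma)} (the copper conductivity below is an approximate handbook value):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def skin_depth(freq_hz, sigma_s_per_m, mu_r=1.0):
    """Classical skin depth for a good conductor."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * MU0 * mu_r * sigma_s_per_m))

# Copper (sigma ~ 5.8e7 S/m) at 1 GHz: about two microns, versus a 30 cm
# free-space wavelength -- hence the need for a boundary layer mesh.
print(skin_depth(1e9, 5.8e7))  # ~2.1e-6 m
```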
Manual meshing is still recommended, and usually needed, for cases when the material properties will vary during the simulation. For example, during an electromagnetic heating simulation, the material properties can be made functions of temperature. This possible variation in material properties should be considered before the solution, during the meshing step, as it is often more computationally expensive to remesh during the solution than to start with a mesh that is fine enough to resolve the eventual variations in the fields. This can require a manual and iterative approach to meshing and solving.
When solving over a wide frequency band, you can consider one of three options:
1. Solve over the entire frequency range using a mesh that will resolve the shortest wavelength (highest frequency) case. This avoids any computational cost associated with remeshing, but you will use an overly fine mesh for the lower frequencies.
2. Remesh at each frequency, using the parametric solver. This is an attractive option if your increments in frequency space are quite widely spaced, and if the meshing cost is relatively low.
3. Use different meshes in different frequency bands. This will reduce the meshing cost, and keep the solution cost relatively low. It is essentially a combination of the above two approaches, but requires the most user effort.
It is difficult to determine ahead of time which of the above three options will be the most efficient for a particular model.
Regardless of the initial mesh that you use, you will also always want to perform a mesh refinement study. That is, re-run the simulation with progressively finer meshes and observe how the solution changes. As you make the mesh finer, the solution will become more accurate, but at a greater computational cost. It is also possible to use adaptive mesh refinement if your mesh is composed entirely of tetrahedral or triangular elements.
Solver Options
Once you have properly defined the problem and meshed your domains, COMSOL Multiphysics will take this information and form a system of linear equations, which are solved using either a direct or iterative solver. These solvers differ only in their memory requirements and solution time, but there are several options that can make your modeling more efficient, since 3D electromagnetics models will often require a lot of RAM to solve.
The direct solvers will require more memory than the iterative solvers. They are used for problems with periodic boundary conditions, eigenvalue problems, and for all 2D models. Problems with periodic boundary conditions do require the use of a direct solver, and the software will automatically do so in such cases.
Eigenvalue problems will solve faster when using a direct solver as compared to using an iterative solver, but will use more memory. For this reason, it can often be attractive to reformulate an eigenvalue problem as a frequency domain problem excited over a range of frequencies near the approximate resonances. By solving in the frequency domain, it is possible to use the more memory-efficient iterative solvers. However, for systems with high Q-factors it becomes necessary to solve at many points in frequency space. For an example of reformulating an eigenvalue problem as a frequency domain problem, please see these examples of computing the Q-factor of an RF coil and the Q-factor of a Fabry-Perot cavity.
The iterative solvers used for frequency-domain simulations come with three different options defined by the Analysis Methodology settings of
Robust (the default), Intermediate, or Fast, and can be changed within the physics interface settings. These different settings alter the type of iterative solver being used and the convergence tolerance. Most models will solve with any of these settings, and it can be worth comparing them to observe the differences in solution time and accuracy and choose the option most appropriate for your needs. Models that contain materials that have very large contrasts in the dielectric constants (~100:1) will need the Robust setting and may even require the use of the direct solver, if the iterative solver convergence is very slow.

Postprocessing Capabilities
Once you’ve solved your model, you will want to extract data from the computed electromagnetic fields. COMSOL Multiphysics will automatically produce a slice plot of the magnitude of the electric field, but there are many other postprocessing visualizations you can set up. Please see the Postprocessing & Visualization Handbook and our blog series on Postprocessing for guidance and to learn how to create images such as those shown below.
Attractive visualizations can be created by plotting combinations of the solution fields, meshes, and geometry.
Of course, good-looking images are not enough — we also want to extract numerical information from our models. COMSOL Multiphysics will automatically make available the S-parameters whenever using Ports or Lumped Ports, as well as the Lumped Port current, voltage, power, and impedance. For a model with multiple Ports or Lumped Ports, it is also possible to automatically set up a
Port Sweep, as demonstrated in this tutorial model of a Ferrite Circulator, and write out a Touchstone file of the results. For eigenvalue problems, the resonant frequencies and Q-factors are automatically computed.
For models of antennas or for scattered field models, it is additionally possible to compute and plot the far-field radiated pattern, the gain, and the axial ratio.
Far-field radiation pattern of a Vivaldi antenna.
You can also integrate any derived quantity over domains, boundaries, and edges to compute, for example, the heat dissipated inside of lossy materials or the total electromagnetic energy within a cavity. Of course, there is a great deal more that you can do, and here we have just looked at the most commonly used postprocessing features.
Summary of Wave Electromagnetics Simulation Tools
We’ve looked at the various different formulations of the governing frequency domain form of Maxwell’s equations as applied to solving wave electromagnetics problems and when they should be used. The meshing requirements and capabilities have been discussed as well as the options for solving your models. You should also have a broad overview of the postprocessing functionality and where to go for more information about visualizing your data in COMSOL Multiphysics.
This information, along with the previous blog posts on defining the material properties, setting up metallic and radiating boundaries, and connecting the model to other devices should now give you a reasonably complete picture of what can be done with frequency domain electromagnetic wave modeling in the RF and Wave Optics modules. The software documentation, of course, goes into greater depth about all of the features and capabilities within the software.
If you are interested in using the RF or Wave Optics modules for your modeling needs, please contact us.
|
Or : Multiplying by \(i\).
The beginning of complex numbers is often attributed to Cardano (1501 – 1576). Complex numbers provide a solution to the equation \(x^2 = -1\).
They are an extension of the number system from \(\Bbb R\) (the real numbers) to \(\Bbb C\) (the complex numbers).
Complex numbers take the form :
\(z = a + bi\),
where \(a,b\) are real numbers and \(i = \sqrt{-1}\).
The normal rules of arithmetic and algebra apply with the additional rule that :
\(i^2 = -1\)
When two complex numbers are multiplied together the real numbers, \(a\) and \(b\), act as scalars and the imaginary number, \(i\), acts as a rotation about the origin, \(0\), (counter-clockwise from the positive real axis). Multiplication by \(i\) on its own leads to a rotation of \(\pi/2\) radians in the complex plane.
\(\begin{align}1\cdot i & = i & = i\\
i\cdot i & = i^2 & = -1\\ -1\cdot i & = i^3 & = -i\\ -i\cdot i & = -i^2 & = 1 \end{align}\)
Starting at any point on the real axis, (Re), and repeatedly multiplying by \(i\) traces larger or smaller circles.
The multiplication of two complex numbers gives another complex number.
Let :
\(z = a + bi\) and \(w = c + di\)
Then :
\(\begin{align} zw & = (a + bi)(c + di)\\
& = ac + bdi^2 + adi + bci\\ & = ac-bd + (ad + bc)i \end{align}\)
Taking our starting point as one again, this time we multiply by \(1 + i\) repeatedly.
\(\begin{align}1(1 + i) & = 1 + i\\
(1 + i)(1 + i) & = 2i\\ 2i(1 + i) & = -2 + 2i\\ (-2 + 2i)(1 + i) & = -4\\ -4(1 + i) & = -4-4i\\ (-4 – 4i)(1 + i) & = -8i\\ (-8i)(1 + i) & = 8-8i\\ (8 – 8i)(1 + i) & = 16 \end{align}\)
We still have a rotational locus but now in the form of a spiral.
Each multiplication by \(1 + i\) leads to a rotation by \(\pi/4\) and an increase in distance from the origin by \(\sqrt 2\).
This is because the point \(1 + i\) is at a distance \(\sqrt 2\) from the origin, \(0\), (by Pythagoras) and the angle that this point makes to the positive real axis is arctan \((1/1) = \pi/4\), where arctan is the inverse of the tangent function.
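Python's built-in complex numbers make the rotation-and-scaling claims easy to verify:

```python
import cmath

# Walk through the eight products listed above, starting from 1.
z = 1 + 0j
for expected in [1 + 1j, 2j, -2 + 2j, -4, -4 - 4j, -8j, 8 - 8j, 16]:
    z *= 1 + 1j
    assert z == expected

# Each step scales the modulus by sqrt(2) and advances the argument by pi/4.
w = 3 + 4j
step = w * (1 + 1j)
assert abs(abs(step) - abs(w) * 2 ** 0.5) < 1e-12
assert abs(cmath.phase(step) - cmath.phase(w) - cmath.pi / 4) < 1e-12
```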
We look at an example.
Let :
\(z = 3 + 4i\) and \(w = 5 + 2i\)
Then :
\(zw = 7 + 26i\),
by the above formula.
We can break it down to see what is happening.
\((3 + 4i)5 = 15 + 20i\)
The point has been scaled up by a factor of \(5\) and the angle is left unchanged.
\((3 + 4i)2i = -8 + 6i\)
The point has been scaled up by a factor of \(2\) and rotated by an angle of \(\pi/2\).
The addition of these two parts gives us the point \(7 + 26i\)
In terms of distances and angles (the polar representation) :
\(z = (r_1, \theta), w = (r_2, \phi)\) and \(zw = (r_1r_2, \theta + \phi)\).
\(z = 3 + 4i = (5, \theta)\), where \(\theta = \)arctan\((4/3) \approx 0.9273\) radians \(\approx 53.13^\circ\)
\(w = 5 + 2i = (\sqrt{29}, \phi)\), where \(\phi = \)arctan\((2/5) \approx 0.3805\) radians \(\approx 21.80^\circ\)
\(zw = 7 + 26i = (5\sqrt{29}, \theta + \phi)\), where \(\theta + \phi = \)arctan\((26/7) \approx 1.3078\) radians \(\approx 74.93^\circ\)
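The cmath module confirms both the Cartesian product and the polar picture (moduli multiply, arguments add):

```python
import cmath
import math

z, w = 3 + 4j, 5 + 2j
assert z * w == 7 + 26j

r1, theta = cmath.polar(z)
r2, phi = cmath.polar(w)
r3, psi = cmath.polar(z * w)

assert math.isclose(r1, 5) and math.isclose(r2, math.sqrt(29))
assert math.isclose(r3, r1 * r2)       # moduli multiply
assert math.isclose(psi, theta + phi)  # arguments add
print(math.degrees(theta), math.degrees(phi), math.degrees(psi))
# ~53.13, ~21.80, ~74.93 degrees
```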
Next time we will rotate in three different directions using the quaternions.
© OldTrout \(2019\)
No Audio file – Does not translate well
|
Definition:Connected Relation
(Redirected from Definition:Dichotomous Relation)
Definition

Let $\mathcal R$ be a relation on a set $S$.

Then $\mathcal R$ is connected if and only if:

$\forall a, b \in S: a \ne b \implies \tuple {a, b} \in \mathcal R \lor \tuple {b, a} \in \mathcal R$

Also known as
Some sources use the term weakly connected, using the term strictly connected relation for what is defined on $\mathsf{Pr} \infty \mathsf{fWiki}$ as total relation.

Also see

Definition:Total Relation: a connected relation which also insists that $\tuple {a, b} \in \mathcal R \lor \tuple {b, a} \in \mathcal R$ even for $a = b$.

Results about connected relations can be found here.

Sources

1993: Keith Devlin: The Joy of Sets: Fundamentals of Contemporary Set Theory (2nd ed.) ... (previous) ... (next): $\S 1$: Naive Set Theory: $\S 1.5$: Relations
|
Problem for Higher Secondary Group from Divisional Mathematical Olympiad will be solved here.
Forum rules
Please
don't post problems (by starting a topic)in the "Higher Secondary: Solved" forum. This forum is only for showcasing the problems for the convenience of the users. You can post the problems in the main Divisional Math Olympiad forum. Later we shall move that topic with proper formatting, and post in the resource section.
Assume, $\Phi : A \to A, A=\{0,1,2,\cdots \}$ is a function, which is defined as,
\[\Phi(x) = \begin{cases} 0 \quad \text{if } x \text{ is a prime}\\ \Phi(x - 1) \quad \text{if } x \text{ is not a prime} \end{cases} \] Find \[ \sum_{x=2}^{2010} \Phi(x)\] (Corrected)
I have no idea about summations. I do not even know what the symbols and variables mean there (I beg someone to teach me what they are; physics gives me a lot of trouble for this). But coming to the function: for any number $n$, if it is prime then $\Phi(n)$ is $0$, and if it is not prime then it will continue to go $\Phi(n-1)$... until it reaches a prime, and then it gets $0$ as well!!! (I hope I got the question and concept right) So ultimately for any $n$, $\Phi(n) = 0$!!! Now please clear up the summation thing so that I can at least try to post a solution.......
The answer is 0..
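The answer can be confirmed by brute force; a short Python sketch (mine, not from the thread):

```python
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

def phi(x):
    # Walk down from x until a prime is reached; primes map to 0,
    # and since 2 is prime the walk always terminates for x >= 2.
    while not is_prime(x):
        x -= 1
    return 0

print(sum(phi(x) for x in range(2, 2011)))  # 0
```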
|
Cauchy test
The Cauchy criterion for the convergence of a series: Given a series $\sum_{n=1}^{\infty}u_n$ with non-negative real terms, if there exists a number $q$, $0\leq q<1$, such that, for all sufficiently large $n$, one has the inequality $(u_n)^{1/n}\leq q$, which is equivalent to the condition $\limsup_{n\to\infty} u_n^{1/n} < 1$, then the series is convergent. Conversely, if for all sufficiently large $n$ one has the inequality $u_n^{1/n}\geq 1$, or even the weaker condition: There exists a subsequence $u_{n_1}, u_{n_2}, \ldots$, with terms satisfying the inequality $u_{n_k}^{1/n_k}\geq 1$, then the series is divergent.
In particular, if $\lim_{n\to\infty} u_n^{1/n}$ exists and is $<1$, then the series is convergent; if it is $>1$, then the series is divergent. This was proved by A.L. Cauchy [1]. In the case of a series with terms of arbitrary sign, the condition $\limsup_{n\to\infty} |u_n|^{1/n}>1$ implies that the series is divergent; if $\limsup_{n\to\infty} |u_n|^{1/n}<1$, the series is absolutely convergent.
The integral Cauchy test, or the Cauchy–MacLaurin integral criterion: Given a series $\sum_{n=1}^{\infty}u_n$ with non-negative real terms, if there exists a non-increasing non-negative function $f(x)$, defined for $x\geq 1$, such that $f(n)=u_n$, then the series is convergent if and only if the integral $\int_1^\infty f(x)\,\mathrm{d}x$ is convergent. This test was first presented in a geometrical form by C. MacLaurin [2], and later rediscovered by A.L. Cauchy [3].
References
[1] A.L. Cauchy, "Analyse algébrique" , Gauthier-Villars (1821) pp. 132–135 (German translation: Springer, 1885) [2] C. MacLaurin, "Treatise of fluxions" , 1 , Edinburgh (1742) pp. 289–290 [3] A.L. Cauchy, "Sur la convergence des séries" , Oeuvres complètes Ser. 2 , 7 , Gauthier-Villars (1889) pp. 267–279 [4] S.M. Nikol'skii, "A course of mathematical analysis" , 1 , MIR (1977) (Translated from Russian) Comments
See also Cauchy criteria. The following is also known as Cauchy's condensation test or Cauchy's convergence theorem (criterion): If the terms of a series form a monotone decreasing sequence $u_1 \geq u_2 \geq \cdots \geq 0$, then the series $\sum_{n=1}^{\infty} u_n$ and the condensed series $\sum_{k=0}^{\infty} 2^k u_{2^k}$ converge or diverge together.
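The condensation test lends itself to a quick numerical illustration (this example is not part of the original entry); here Python compares $\sum 1/n^2$ against its condensed series:

```python
import math

# Condensation: for monotone decreasing u_n, sum(u_n) and
# sum(2^k * u_{2^k}) converge or diverge together.
# Illustrated here on u_n = 1/n^2, which converges to pi^2/6.
u = lambda n: 1.0 / n**2

partial = sum(u(n) for n in range(1, 100001))
condensed = sum(2**k * u(2**k) for k in range(60))   # sum of 2^-k, tends to 2

print(partial)    # close to pi^2/6, about 1.64493
print(condensed)  # close to 2.0; finite, so the original series converges too
```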
References
[a1] K. Knopp, "Theorie und Anwendung der unendlichen Reihen" , Springer (1964) (English translation: Blackie, 1951 & Dover, reprint, 1990) [a2] G.H. Hardy, "A course of pure mathematics" , Cambridge Univ. Press (1975) How to Cite This Entry:
Cauchy test.
Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Cauchy_test&oldid=29316
|
As Maarten Bodewes points out, you should use a known key derivation function instead of this method (i.e. don't roll your own crypto). Having said that we can still try to understand what happens if you do use this method.
I assume that the method you're describing is something as follows. You take a block cipher $E:\mathsf{K}\times \mathsf{X} \to \mathsf{X}$, and a master key $K\in\mathsf{K}$ and then generate subkeys using $$K_0 = E_K(0), K_1 = E_K(1), K_2 = E_K(2), \ldots\,,$$where $0$, $1$, $2$, $\ldots$ are distinct strings in $\mathsf{X}$.
If you only use the subkeys $K_0,K_1,K_2,\ldots$ as keys in the mode, there should not be an issue, since you're basically using the block cipher as a key derivation function. In order to do this, all you need to assume about the block cipher is that it is a secure pseudorandom permutation when keyed with $K$, $K_0$, $K_1$, $\ldots$. This, of course, also assumes that it is meaningful to use the output of $E_K$ as its key, which you can do for AES128, but not immediately with AES256 (you would have to use two block cipher outputs as a key).
However, the downside to the above approach is that you have to switch keys a few times, which might not be so bad depending upon the application, but is undesirable in general. The optimizations performed by modes such as GCM, EAX, and OCB make it so that you do not need to switch block cipher keys. Their trick is to use $E_K$ and the value $L = E_K(0)$ (or something similar) alongside each other in the mode, without ever using $E_L$. In particular, to ensure security this means that the mode needs to avoid outputting $E_K(0)$ as part of a ciphertext or tag, since then you've released $L$ to the adversary.
Just to be clear, neither method will help you against birthday attacks on the mode. As long as the master key stays the same, you will still find attacks with birthday bound complexity. Also, I don't immediately see the benefit in using a longer nonce, since the number of messages you can process using a key is often well below the number of nonces you can generate using half of a block length. For example, your nonce does not need to exceed 64 bits when using AES128 in GCM, since the birthday bound limits you anyway.
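As an illustration of the $K_i = E_K(i)$ pattern, here is a minimal sketch (not production code): the Python standard library ships no block cipher, so HMAC-SHA256 stands in for the keyed pseudorandom function $E_K$; with a real AES implementation the structure would be identical:

```python
import hmac, hashlib

# Sketch of the K_i = E_K(i) pattern.  HMAC-SHA256 plays the role of the
# keyed PRF here, since the standard library has no block cipher.
def derive_subkey(master_key: bytes, index: int) -> bytes:
    counter = index.to_bytes(16, "big")   # the distinct strings 0, 1, 2, ...
    return hmac.new(master_key, counter, hashlib.sha256).digest()[:16]

K = bytes(16)                             # toy all-zero master key, for illustration
K0, K1, K2 = (derive_subkey(K, i) for i in range(3))
print(len({K0, K1, K2}))                  # 3: distinct inputs give distinct subkeys
```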
|
One way of summarizing the main ideas in a model/approach is to list the \((1)\) constructs, i.e., the “things” or ideas that are “used” in the model, \((2)\) the relationships (in mathematical or sentence form) that connect the constructs in meaningful ways, and \((3)\) the ways of representing the relationships. During your study of the model/approach you should have developed a good understanding of the meaning of each of the constructs. Some of these constructs probably start out as nothing but memorized definitions, but eventually take on a deeper meaning. The relationships might also start out as nothing more meaningful than a simple equation relating some of the constructs, e.g., \(J = \sum F \Delta t = \Delta p\). By the time you finish this part of the course, however, you should understand this particular relationship, for example, as expressing one of the most fundamental, universal, and widely applicable principles in all of physics. Developing a deep and rich understanding of the relationships in a model/approach comes slowly. It is absolutely not something you can memorize. This understanding comes only with repeated mental effort over a period of time. A good test you can use to see if you are “getting it” is whether you can tell a full story about each of the relationships. It is the meaning behind the equations, behind the simple sentence relationships, that is important for you to acquire. With this kind of understanding, you can apply a model/approach to the analysis of phenomena you have not thought about before. You can reason with the model.

Listed here are the major, most important constructs, relationships, and representations of the momentum conservation model.

Constructs
Velocity, \(v\)
Momentum, \(p\)
Net Force, \(\sum F\)
Impulse, \(J\)
Newton’s 3rd law
Conservation of momentum
Relationships
The velocity is the time derivative of the displacement:
\(v = \frac{dr}{dt} \quad\text{or}\quad v_{average} = \frac{\Delta r}{\Delta t} \tag{7.3.1}\)
The linear momentum of an object measured in some coordinate system is simply the product of the object’s mass and velocity:
\[p = mv \tag{7.3.2} \]
The linear momentum of a system of particles is the vector sum of the individual momenta:
\[p_{system} = \sum p_i \tag{7.3.3} \]
The net force acting on an object (physical system) is the vector sum of all forces acting on that object (physical system) due to the interactions with other objects (physical systems).
\[\sum F_A = F_{B~ on~ A} + F_{C~ on~ A} + F_{D ~on~ A} + … \tag{7.3.4} \]
The impulse of the total (or net) external force acting on a system equals the product of the average force and the time interval during which the force acted.
\[Net~ Impulse_{ext} = J = \sum F_{avg~ ext} \Delta t = \int \sum F_{ext}(t) dt \tag{7.3.5} \]
The force (impulse) exerted by object A on object B is equal and opposite to the force (impulse) exerted by object B on object A.
\[F_{A ~on~ B} = – F_{B~ on~ A}~~ and~~ J_{A ~on~ B} = – J_{B ~on ~A} \tag{7.3.6} \]
Conservation of Linear Momentum
If the net external impulse acting on a system is zero, then there is no change in the total linear momentum of that system; otherwise, the change in momentum is equal to the net external impulse.
\[Net~ Impulse_{ext} = J = \int \sum F_{ext}(t) dt = p_f - p_i = \Delta p_{system} \tag{7.3.7} \]
Collisions
The momentum of the system of objects (particles) remains constant if the external impulses are negligible. This is true whether the collision is elastic or inelastic.
\[p_{tot_i} = p_{tot_f} \tag{7.3.8} \]
If a collision is elastic, then none of the mechanical energy is transferred to bond or thermal energies, and both the total mechanical energy (all kinetic and elastic energies) and the momentum remain constant.
\[(mechanical ~energy)_i = (mechanical~ energy)_f~ and~ p_{tot_i} = p_{tot_f} \tag{7.3.9} \]
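As a concrete check of these conservation relationships, here is a short sketch (not part of the text) using the standard 1D elastic-collision formulas, which follow from solving Equations 7.3.8 and 7.3.9 simultaneously; the masses and velocities are made-up example values:

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities of a 1D elastic collision, obtained by solving the
    momentum and kinetic-energy conservation equations simultaneously."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

m1, v1, m2, v2 = 2.0, 3.0, 1.0, -1.0    # made-up example values (SI units)
v1f, v2f = elastic_collision_1d(m1, v1, m2, v2)

print(m1 * v1 + m2 * v2, m1 * v1f + m2 * v2f)    # equal: momentum conserved
print(0.5 * m1 * v1**2 + 0.5 * m2 * v2**2,
      0.5 * m1 * v1f**2 + 0.5 * m2 * v2f**2)     # equal: kinetic energy conserved
```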
Representations
Graphical representation of all vector quantities and (vector relationships) as arrows whose length is proportional to the magnitude of the vector and whose direction is in the direction of the vector quantity.
Algebraic vector equations. Vectors denoted as bold symbols or with small arrows over the symbol.
Component algebraic equations, one equation for each of the three independent directions.
A useful way to organize and use the representations of the various quantities that occur in phenomena involving momentum, change in momentum, and impulse and forces is a momentum chart. The momentum chart, like an energy-system diagram, helps us keep track of what we know about the interaction, as well as helping us see what we don’t know.
The boxes are to be filled in with scaled arrows representing the various momenta and changes in momenta.
Closed System
Typically used for collisions/interactions involving two or more objects.
For total system: \(\Delta p = 0\)
For each object: \(p_i + \Delta p = p_f\)
(written as component equations, if useful)
Write expressions for each momentum vector, such as \(p = mv\)
Open System
Typically used when the phenomenon involves a net impulse acting on the system.
For total system: \(\Delta p = J \)
\(p_i + \Delta p = p_f\)
(and for component equations, if useful)
Write expressions for each momentum vector, such as \(p = mv\)
Below the momentum chart draw a force diagram for the object. The net force gives the direction of the impulse and \(\Delta p\).
|
1. Search for a Heavy Neutral Particle Decaying to eμ, eτ, or μτ in pp Collisions at √s = 8 TeV with the ATLAS Detector
Physical Review Letters, ISSN 0031-9007, 2015, Volume 115, Issue 3, p. 031801
Journal Article
2. Measurement of fiducial differential cross sections of gluon-fusion production of Higgs bosons decaying to WW ∗→eνμν with the ATLAS detector at √s = 8 TeV
Journal of High Energy Physics, ISSN 1126-6708, 2016, Volume 2016, Issue 8, pp. 1 - 63
This paper describes a measurement of fiducial and differential cross sections of gluon-fusion Higgs boson production in the H → W W∗→ eνμν channel, using 20.3...
Hadron-Hadron scattering (experiments) | Fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Subatomär fysik | Natural Sciences
Journal Article
3. Search for a heavy narrow resonance decaying to eμ, eτ, or μτ with the ATLAS detector in √s = 7 TeV pp collisions at the LHC
Physics Letters B, ISSN 0370-2693, 2013, Volume 723, Issue 1-3, pp. 15 - 32
This Letter presents the results of a search for a heavy particle decaying into an e±μ∓e±μ∓, e±τ∓e±τ∓, or μ±τ∓μ±τ∓ final state in pp collisions at √s=7 TeV....
Journal Article
4. Measurements of top-quark pair differential cross-sections in the eμ channel in pp collisions at √s = 13 TeV using the ATLAS detector
The European Physical Journal C: Particles and Fields, ISSN 1434-6052, 2017, Volume 77, Issue 5, pp. 1 - 30
This article presents measurements of tt¯ differential cross-sections in a fiducial phase-space region, using an integrated luminosity of 3.2 fb-1 of...
Luminosity | Quarks | Cross sections | Transverse momentum | Fysik | Subatomär fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Natural Sciences
Journal Article
5. High-E T isolated-photon plus jets production in pp collisions at s=8 TeV with the ATLAS detector
Nuclear Physics B, ISSN 0550-3213, 05/2017, Volume 918, Issue C, pp. 257 - 316
The dynamics of isolated-photon plus one-, two- and three-jet production in pp collisions at a centre-of-mass energy of 8 TeV are studied with the ATLAS...
NUCLEAR PHYSICS AND RADIATION PHYSICS
Journal Article
6. $e^+e^- \to \pi^+\pi^-\pi^+\pi^-$, $K^+K^-\pi^+\pi^-$, and $K^+K^-K^+K^-$ cross sections at center-of-mass energies 0.5–4.5 GeV measured with initial-state radiation
Physical Review D, ISSN 1550-7998, 2005, Volume 71, Issue 5, p. 052001
We study the process $e^+e^- \to \pi^+\pi^-\pi^+\pi^-\gamma$, with a hard photon radiated from the initial state. About 60 000 fully reconstructed events have...
MONTE-CARLO | TAGGED PHOTONS | PHYSICS | ASTRONOMY & ASTROPHYSICS | BHABHA SCATTERING | PHYSICS, PARTICLES & FIELDS | Physics | High Energy Physics - Experiment
Journal Article
7. Measurement of the tt¯ production cross-section using eμ events with b-tagged jets in pp collisions at s=13TeV with the ATLAS detector
Physics Letters B, ISSN 0370-2693, 10/2016, Volume 761, Issue C, pp. 136 - 157
Journal Article
8. Addendum to ‘Measurement of the $t\bar{t}$ production cross-section using $e\mu$ events with b-tagged jets in pp collisions at $\sqrt{s}$ = 7 and 8 TeV with the ATLAS detector’
The European Physical Journal C, ISSN 1434-6044, 11/2016, Volume 76, Issue 11, pp. 1 - 14
The ATLAS measurement of the inclusive top quark pair ($t\bar{t}$) cross-section $\sigma_{t\bar{t}}$ in proton–proton collisions at...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article
9. Search for heavy resonances decaying into WW in the eνμν final state in pp collisions at √s=13TeV with the ATLAS detector
European Physical Journal C, ISSN 1434-6044, 01/2018, Volume 78, Issue 1
Journal Article
10. Search for a Heavy Neutral Particle Decaying to $e\mu$, $e\tau$, or $\mu\tau$ in $pp$ Collisions at $\sqrt{s}=8$ TeV with the ATLAS Detector
03/2015
Phys. Rev. Lett. 115, 031801 (2015) This Letter presents a search for a heavy neutral particle decaying into an opposite-sign different-flavor dilepton pair,...
Physics - High Energy Physics - Experiment
Journal Article
The European Physical Journal C: Particles and Fields, ISSN 1434-6052, 2017, Volume 77, Issue 5, pp. 1 - 53
During 2015 the ATLAS experiment recorded $3.8\,\mathrm{fb}^{-1}$ of proton–proton collision data at a centre-of-mass energy of...
PHYSICS, PARTICLES & FIELDS | Collisions (Nuclear physics) | Protons | Data acquisition systems | Physics - High Energy Physics - Experiment | High Energy Physics - Phenomenology | Physics | PARTICLE ACCELERATORS | Regular - Experimental Physics | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article
Physical Review Letters, ISSN 0031-9007, 2006, Volume 96, Issue 4
Journal Article
13. Measurement of fiducial differential cross sections of gluon-fusion production of Higgs bosons decaying to WW ∗→eνμν with the ATLAS detector at $\sqrt{s}=8$ TeV
Journal of High Energy Physics, ISSN 1029-8479, 8/2016, Volume 2016, Issue 8, pp. 1 - 63
This paper describes a measurement of fiducial and differential cross sections of gluon-fusion Higgs boson production in the H → W W ∗→ eνμν channel, using...
Quantum Physics | Quantum Field Theories, String Theory | Hadron-Hadron scattering (experiments) | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | Phenomenology | High Energy Physics | Nuclear Experiment | Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
14. Measurements of gluon–gluon fusion and vector-boson fusion Higgs boson production cross-sections in the H → WW⁎ → eνμν decay channel in pp collisions at s=13TeV with the ATLAS detector
Physics Letters B, ISSN 0370-2693, 02/2019, Volume 789, pp. 508 - 529
Higgs boson production cross-sections in proton–proton collisions are measured in the decay channel. The proton–proton collision data were produced at the...
Journal Article
|
For Confidential Transactions a Pedersen commitment is being used. The commitment preserves addition and the commutative property applies: $$C(\text{BF}_1, \text{data}_1) \oplus C(\text{BF}_2, \text{data}_2) = C(\text{BF}_1 + \text{BF}_2, \text{data}_1 + \text{data}_2)$$
$\oplus$ doesn't denote XOR here but more generally "another type of addition operation". How can I prove these properties, and how do we get the blinding factor?
For Confidential Transactions a Pedersen commitment is being used. The commitment preserves addition and the commutative property applies: $$C(\text{BF}_1, \text{data}_1) \oplus C(\text{BF}_2, \text{data}_2) = C(\text{BF}_1 + \text{BF}_2, \text{data}_1 + \text{data}_2)$$
First, we'll recap the Pedersen-Commitment scheme and then we'll show that it is indeed additively homomorphic. For reference, the original paper by Pedersen is free by now.
The commitment scheme
Let $q,p\in\mathbb P$ be primes such that $p=r\cdot q+1$, for some $r\in\mathbb N$. Let $q$ be the order of a subgroup of $\mathbb Z_p^*$ called $G_q$. Let $g,h\in G_q$ further be elements of this group such that $\log_g(h)$ is unknown to all parties.
A committer now commits to a value $s\in\mathbb Z_q$ by choosing a random $t\in\mathbb Z_q$ and publishing $$E(s,t)=g^s\cdot h^t\bmod p$$
He opens the commitment by simply publishing $(s,t)$; he can't change the choice of $s$ after publishing $E(s,t)$ unless he knows $\log_g(h)$, and obviously nobody else can find out what $s$ is before the opening because it's blinded.
The homomorphic property
This assumes that you have committed to $s_1,s_2$ using $t_1,t_2$ as random blinding exponents. Now you can observe that (in $\mathbb Z_p^*$): \begin{eqnarray} E(s_1,t_1)\cdot E(s_2,t_2) & \equiv &(g^{s_1}\cdot h^{t_1})\cdot(g^{s_2}\cdot h^{t_2}) \\ & \equiv & (g^{s_1}\cdot g^{s_2})\cdot(h^{t_1}\cdot h^{t_2}) \\ & \equiv & g^{s_1+s_2}\cdot h^{t_1+t_2} \\ & \equiv & E(s_1+s_2,t_1+t_2) \end{eqnarray}
You can create a new commitment (as a sum) from two old ones, but then you also have to publish the new blinding exponent.
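The homomorphic property is easy to check computationally; here is a toy sketch with deliberately tiny, insecure parameters chosen only for illustration (in a real system the group is large and $\log_g(h)$ is unknown to every party):

```python
# Toy Pedersen commitments in the order-11 subgroup of Z_23^*.
p, q = 23, 11
g, h = 4, 9                      # both generate the subgroup of order 11

def commit(s, t):
    return (pow(g, s, p) * pow(h, t, p)) % p

s1, t1 = 3, 5
s2, t2 = 7, 2

lhs = (commit(s1, t1) * commit(s2, t2)) % p
rhs = commit((s1 + s2) % q, (t1 + t2) % q)
print(lhs == rhs)                # True: product of commitments = commitment to sums
```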
|
Problem for Higher Secondary Group from Divisional Mathematical Olympiad will be solved here.
Forum rules
Please
don't post problems (by starting a topic)in the "Higher Secondary: Solved" forum. This forum is only for showcasing the problems for the convenience of the users. You can post the problems in the main Divisional Math Olympiad forum. Later we shall move that topic with proper formatting, and post in the resource section.
If $x$ is very, very, very small, $\sin x \approx x$. An operator $S_n$ is defined such that $S_n(x)= \sin \sin \sin \cdots \sin x$ (a total of $n$ $\sin$ operators are included here). For sufficiently large $n$, $S_n(x) \approx S_{n-1}(x)$.
In that case, express $\cos (S_n(x))$ as the nearest rational value.
Solved here: viewtopic.php?p=1314#p1314
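For a numerical sanity check (not part of the thread): iterating $\sin$ drives any starting value toward $0$ (roughly like $\sqrt{3/n}$), so $\cos(S_n(x))$ approaches $\cos 0 = 1$, and the nearest rational value is $1$:

```python
import math

def S(n, x):
    # n-fold iterated sine
    for _ in range(n):
        x = math.sin(x)
    return x

# S(n, x) shrinks toward 0, so cos(S_n(x)) climbs toward cos(0) = 1.
for n in (10, 1000, 100000):
    print(n, math.cos(S(n, 1.0)))

print(round(math.cos(S(100000, 1.0))))  # 1
```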
|
An indefinite nonlinear diffusion problem in population genetics, II: Stability and multiplicity
1.
Department of Mathematics, The Ohio State State University, Columbus, Ohio 43210
2.
School of Mathematics, University of Minnesota, Minneapolis, Minnesota 55455
3.
School of Mathematics, University of Minnesota, Minneapolis, MN 55455
$ u_{t}=d\Delta u+g(x)f(u),~0\leq u\leq 1 $ in Ω × (0, ∞),
$ \frac{\partial u}{\partial\nu}=0 $ on ∂ Ω × (0, ∞),
where $\Delta$ denotes the Laplace operator, $g$ may change sign in $\Omega$, and $f(0)=f(1)=0$, $f(s)>0$ for $s\in(0,1)$. Our main results include stability/instability of the trivial steady states $u\equiv 0$ and $u\equiv 1$, and the multiplicity of nontrivial steady states. This is a continuation of our work [12]. In particular, the conjecture of Nagylaki and Lou [11, p. 152] has been largely resolved. Similar results are obtained for Dirichlet and Robin boundary value problems as well.
Mathematics Subject Classification: Primary: 35K57; Secondary: 35B3. Citation: Yuan Lou, Wei-Ming Ni, Linlin Su. An indefinite nonlinear diffusion problem in population genetics, II: Stability and multiplicity. Discrete & Continuous Dynamical Systems - A, 2010, 27 (2) : 643-655. doi: 10.3934/dcds.2010.27.643
|
I've got a fun question, which is somewhat testing my topology skills.
The space we're working with is the quotient given by the map $\mathbb{R} \rightarrow \mathbb{R}/\!\sim$, which sends $x$ to $[x] = \{y \in \mathbb{R}: x-y \in \mathbb{Q} \}$, and what I'm trying to show is that $\mathbb{R}/\!\sim$ isn't Hausdorff.
What I'm struggling with is proving that, for certain $[x],[y] \in \mathbb{R}/\!\sim$, ALL open sets $U_{[x]}, U_{[y]}$ around them have a non-empty intersection. Intuition says that these open sets overlap: any open set around $[x]$ or $[y]$ must contain a rational, so this open set must also contain all of $\mathbb{Q}$, so these open sets have all of $\mathbb{Q}$ in common.
Formalizing this is giving me trouble. How do I take an arbitrary open set around such an equivalence class? Is this just $[B_\varepsilon(x)]=\{[y] \in \mathbb{R}/\!\sim\,: y \in B_\varepsilon(x)\}$?
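One way to make the intuition precise (a sketch of one standard route, assuming $\mathbb{R}/\!\sim$ carries the quotient topology; this is my sketch, not necessarily the intended formalization) is to work with preimages under the quotient map rather than with open sets in the quotient directly:

```latex
Let $q : \mathbb{R} \to \mathbb{R}/\!\sim$ be the quotient map and let
$U, V \subseteq \mathbb{R}/\!\sim$ be nonempty open sets. By definition of the
quotient topology, $q^{-1}(U)$ and $q^{-1}(V)$ are open in $\mathbb{R}$, and they
are \emph{saturated}: if $x \in q^{-1}(U)$ and $r \in \mathbb{Q}$, then
$x + r \in q^{-1}(U)$. A nonempty open saturated set contains an interval
$(a, b)$ together with all of its rational translates $(a + r, b + r)$, and is
therefore dense in $\mathbb{R}$. Two dense open subsets of $\mathbb{R}$ must
intersect, so
\[
  q^{-1}(U) \cap q^{-1}(V) \neq \emptyset
  \quad\Longrightarrow\quad
  U \cap V \neq \emptyset ,
\]
and hence $\mathbb{R}/\!\sim$ is not Hausdorff.
```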
|
Implicit in that explanation is 1) some state space $(\cal{X},\mathscr{X})$ in which $X$ takes values, and 2) that $P(\cdot\mid H)=P(\cdot,H)$ is a probability measure on $(\cal{X},\mathscr{X})$. The existence of a $P(\cdot,H)$ satisfying 2) and also satisfying the relation mentioned toward the end of the screenshot $P(A,H)=P(A|H)=P(A\cap H)/P(H)$ (also implicit is $A\in\mathscr{X}$) amounts to the existence of a regular conditional probability distribution for $X$ on $(\Omega,\mathcal{F},P)$, which will hold, e.g., when $X$ is real-valued. As pointed out in the comment below, when we pass to the more everyday bayesian formula definition of event $A$ conditional on event $B$ we actually want to write $P(X^{-1}(A)\cap H)/P(H)$
"What I want is a very clear, purely real analysis notation, i.e., one that specifies the domain over which to integrate," --> $\cal X$
the corresponding σσ-algebra, --> $\mathscr X$
the measure --> $P(\cdot,H)$
and the integrand function" --> the identity
Maybe some of the confusion is also due to the notation of the integrator $P(dx,H)$. This just means: integrate with dummy variable $x$ using the measure $P(\cdot,H)$; e.g., for Lebesgue measure I might write $\int_0^\pi \cos(x)\,\lambda(dx)$ for $\int_0^\pi \cos(x)\,dx$.
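To make the notational point concrete, here is a quick numerical check of that example integral (my addition), approximating the Lebesgue integral by a midpoint Riemann sum, which agrees with it here:

```python
import math

# Midpoint Riemann sum as a stand-in for ∫_0^π cos(x) λ(dx),
# which equals sin(π) - sin(0) = 0.
n = 100_000
dx = math.pi / n
total = sum(math.cos((i + 0.5) * dx) * dx for i in range(n))
print(total)   # very close to 0
```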
Another possible point of confusion: using the same letter $P$ to refer to the underlying measure on the probability space $(\Omega,\cal{F},P)$ and the rcpd $P(\cdot | \cdot)$.
|
I have the following theorem:
Theorem: Let$(S,\mathcal{A},\mu)$ be a measurable space and let $(A_n)_{n\geq 1}$ be a sequence in $\mathcal{A}$.
i) If $A_n \uparrow A$, then $\mu(A_n) \uparrow \mu(A)$.
ii) If $A_n \downarrow A$ and $\mu(A_1)<\infty$, then $\mu(A_n)\downarrow \mu(A)$.
(Note that in part i) $A = \bigcup\limits_{n=1}^{\infty}A_n$, while in part ii) $A = \bigcap\limits_{n=1}^{\infty}A_n$).
Exercise: Prove i) and ii). What I've tried:
i) Define the disjoint sequence $(B_n)_{n\geq 1}$ by $B_1 = A_1$ and $B_n = A_n \setminus A_{n-1}$ for $n \geq 2$. We have that $\bigcup\limits_{j=1}^{\infty}B_j = A$ and $\bigcup\limits_{j=1}^{n}B_j = A_n$. Therefore, $\sigma$-additivity gives$$\mu(A) = \sum\limits_{j =1}^{\infty}\mu(B_j) = \lim\limits_{n\to\infty}\sum\limits_{j = 1}^n \mu(B_j) = \lim\limits_{n\to\infty}\mu(A_n).$$ Since the partial sums are non-decreasing, $\mu(A) = \lim\limits_{n\to\infty}\mu(A_n)$ also implies that $\mu(A)\geq \mu(A_n)$ for every $n$.
ii) I'm not sure what to do here. Again, I pick a disjoint sequence $(B_n)_{n\geq1}$. We have that $\bigcap\limits_{j=1}^\infty B_j = A$ and $\bigcap\limits_{j=1}^n B_j = A_n$. This time I need to show that $\mu(A_n)\geq \mu(A)$, right? I think this is equivalent to showing that $\bigcap\limits_{j=1}^\infty \mu(A_j) = \mu(A)$. However, I don't know how to use $\sigma$-additivity in this case.
Question: How do I show that if $A_n \downarrow A$ and $\mu(A_1)<\infty$, then $\mu(A_n)\downarrow \mu(A)$?
Thanks in advance!
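To see concretely what the theorem asserts, and why the finiteness hypothesis in ii) matters, here is a small numerical illustration (my addition) using interval lengths as a stand-in for Lebesgue measure:

```python
# Continuity of measure, illustrated with Lebesgue measure on intervals
# (the length of an interval stands in for mu).
length = lambda a, b: b - a

# i)  A_n = (0, 1 - 1/n) increases to A = (0, 1):  mu(A_n) -> mu(A) = 1
incr = [length(0, 1 - 1/n) for n in (1, 10, 100, 1000)]

# ii) A_n = (0, 1 + 1/n) decreases to A = (0, 1]:  mu(A_n) -> mu(A) = 1
decr = [length(0, 1 + 1/n) for n in (1, 10, 100, 1000)]

print(incr)   # approaches 1 from below
print(decr)   # approaches 1 from above

# The hypothesis mu(A_1) < infinity in ii) is essential: A_n = (n, oo)
# decreases to the empty set, yet mu(A_n) = oo for every n.
```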
|
I have to solve the following SDE. $$ \mathrm dY_t= f(X_t) \mathrm dt, \tag{1} $$ where $X_t$ is a two-state Markov Process possesses states $a$ and $b$.
Moreover, I would like to solve $$ \mathrm dY_t= Z_t \mathrm dt, \tag{2} $$ where $Z_t$ is a random variable conditioned on $X_t$, i.e., $Z_t \mid X_t \sim \operatorname{Bin}(n,g(X_t))$.
After seeing Did's comment and Jay.H's answer, I tried to derive $Y_t$ with Riemann Sum. Let us introduce $$ Y_t := \lim_{\lambda \rightarrow 0} \sum_{t_i} f(X_{t_i}) \Delta t, $$ where $0=t_0<t_1<t_2<\cdots<t_n<t_{n+1}=t$, $\lambda=\max(t_{j+1}-t_j)$ for $j=0 \ldots n$. Assume that the Markov process has a stationary state $\pi$, and $\pi_a$, $\pi_b$ are probabilities of the state $a$ and $b$ respectively. Since the terms added are infinite, thus $$ \Pr \{ \lim_{\lambda \rightarrow 0} \sum_{t_i} f(X_{t_i}) \Delta t\ =\pi_a f(X_a) + \pi_b f(X_b)\} = 1, $$ namely $$ \Pr \{ Y_t =\pi_a f(X_a) + \pi_b f(X_b)\} = 1. $$ Therefore, we can define $Y_t:=\pi_a f(X_a) + \pi_b f(X_b)$ with probability 1.
It's very strange that $Y_t$ is a deterministic real number, what's wrong with the above derivation?
Any suggestions? Thanks in advance!
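A small simulation sketch (all parameters made up for illustration: unit jump rates, $f(a)=0$, $f(b)=1$, $t=5$) shows that $Y_t = \int_0^t f(X_s)\,ds$ is genuinely random for fixed $t$; the constant $\pi_a f(a)+\pi_b f(b)$ only describes the long-run time average $Y_t/t$ as $t\to\infty$, which suggests where the derivation goes wrong:

```python
import random

# Simulate Y_t = integral_0^t f(X_s) ds for a two-state chain.  With
# f(a) = 0 and f(b) = 1, Y_t is simply the time spent in state b before t.
def sample_Y(t_end, rng):
    state, t, Y = "a", 0.0, 0.0
    while t < t_end:
        hold = rng.expovariate(1.0)       # Exp(1) holding time
        if state == "b":
            Y += min(hold, t_end - t)     # f(b) = 1 contributes the elapsed time
        t += hold
        state = "b" if state == "a" else "a"
    return Y

rng = random.Random(0)
samples = [sample_Y(5.0, rng) for _ in range(10)]
print(samples)   # ten different values: Y_t is random, not a constant
```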
|
Inelastic Electron Spectroscopy of an H2 molecule placed between 1D Au chains

Version: 2017

This tutorial is focused on simulating inelastic electron tunneling spectroscopy (IETS) [Ree08] for a device consisting of two one-dimensional (1D) gold electrodes and an H2 molecule placed in between. After DFT calculations of the dynamical matrix and the Hamiltonian derivatives with respect to vibrations of the H2 molecule, IETS is computed and analyzed based on the lowest-order expansion (LOE) [FPBJ07] and extended LOE (XLOE) [LCF+14] approximations.
Note
It is assumed that you are familiar with QuantumATK. If not, go through the basic QuantumATK Tutorials first.
Introduction
We consider electron transport through a device consisting of an H2 molecule clamped between 1D gold electrodes. In a spectroscopy experiment, if the applied bias is comparable to the energy of one of the vibrational modes of the H2 molecule, a new inelastic transmission channel opens, in which the transmission of electrons is mediated by the emission of a phonon. This process is due to the coupling between electronic and vibrational degrees of freedom, that is, to the electron-phonon coupling. At a voltage where an inelastic channel opens, the derivative of the conductance, i.e., the second derivative of the current, has a peak. In experiments, the IETS signal is defined as the second derivative of the current normalized by the differential conductance, $\mathrm{IETS} = \frac{d^2 I/dV^2}{dI/dV}$, and is measured to investigate, e.g., the electron-phonon coupling strength, heating, configuration, and local doping.
In the tutorial, you will first compute the dynamical matrix and the Hamiltonian derivatives with respect to vibrations of the H2 molecule by DFT calculations, and then calculate and analyze the IETS signal based on two different approximations: LOE and XLOE. The rest of this tutorial is hence organized as follows:

- Device setup
- Calculation of the dynamical matrix and the Hamiltonian derivatives
- Calculation of IETS based on LOE and XLOE
- Analysis of IETS based on the vibrational modes of the H2 molecule
- Comparison between the LOE and XLOE results

Device setup
Start up QuantumATK and create a new project. Instead of building the device configuration in the Builder, we use the QuantumATK Python script au-h2-au_0.py, which provides the Au|H\(_2\)|Au device. Download the script, save it in the project folder, and open it from the Script Generator.
Note
The k-point sampling is 1x1x93 by default. It should, however, be modified appropriately if the device is periodic in the A and/or B directions.
Set the output-file name to “au-h2-au.nc” and save the Python script as “au-h2-au.py”.
Note
The Python script must now be the same as au-h2-au.py. To save time, you may simply download the script and use it.
Calculation of IETS
The IETS calculation requires the dynamical matrix, which is obtained from the forces induced by displacing the hydrogen atoms back and forth along each spatial direction. The computation involves six mutually independent DFT calculations and is hence performed efficiently on six CPU cores.
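The finite-difference recipe behind this step can be illustrated on a toy harmonic model: displace each atom by \(\pm\delta\), record the forces, and assemble \(D_{ij} = -\,(\partial F_i/\partial x_j)/m\). The spring constants below are invented toy values, not H\(_2\) parameters:

```python
import numpy as np

# Finite-difference dynamical matrix for a toy 1D model: two atoms coupled
# to each other and to rigid walls by springs (all constants are toy values).
k_hh, k_wall, m = 1.0, 0.3, 1.0       # spring constants and mass, toy units
x0 = np.array([0.0, 1.0])             # equilibrium positions
delta = 1e-4                          # displacement amplitude

def forces(x):
    """Harmonic forces on the two atoms at positions x."""
    f = np.zeros(2)
    stretch = (x[1] - x[0]) - (x0[1] - x0[0])
    f[0] = k_hh * stretch - k_wall * (x[0] - x0[0])
    f[1] = -k_hh * stretch - k_wall * (x[1] - x0[1])
    return f

# D_ij = -dF_i/dx_j / m, sampled by displacing each atom back and forth
D = np.zeros((2, 2))
for j in range(2):
    xp, xm = x0.copy(), x0.copy()
    xp[j] += delta
    xm[j] -= delta
    D[:, j] = -(forces(xp) - forces(xm)) / (2 * delta * m)

omega2 = np.linalg.eigvalsh(D)        # squared mode frequencies, ascending
freqs = np.sqrt(omega2)
```

Diagonalizing \(D\) yields a low-frequency in-phase (center-of-mass) mode and a high-frequency out-of-phase (stretch) mode, the same qualitative pair found for the longitudinal H\(_2\) modes 4 and 5 below.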
In the Script Generator, click Analysis from File and select “au-h2-au.nc”. Add a DynamicalMatrix object and set it as follows:
The IETS calculation requires the Hamiltonian derivatives as well. This calculation also consists of six independent parts and is hence carried out efficiently on six CPU cores.
Add a HamiltonianDerivatives object and set it analogously. Then add an InelasticTransmissionSpectrum block with LOE as the method, and another block set in the same way except choosing XLOE as the method. Set the output-file name to “analysis.nc” and save the Python script as “analysis.py”.
Note
The Python script must be the same as analysis.py. To save time, you may simply download the script and use it.
The calculation takes less than one hour on six CPU cores and saves all results in “analysis.nc”.

Analysis

Vibrational modes of the H\(_2\) molecule
There are six modes in total: four transverse modes (0, 1, 2, 3) and two longitudinal modes (4 and 5). The mode pairs (0, 1) and (2, 3) are each doubly degenerate. Note that the temperature is set to T = 1000 K in the movies below to make the vibrations clearly visible.
Mode 0: 25.44 meV; Mode 1: 25.63 meV (the small energy difference is due to numerical error)
Mode 2: 60.54 meV; Mode 3: 60.56 meV (the small energy difference is due to numerical error)
Mode 4: 127.46 meV
Mode 5: 264.06 meV
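For comparison with spectroscopy conventions, the mode energies above can be converted to frequencies via \(E = h\nu\); for instance, the 264.06 meV stretch mode corresponds to roughly 64 THz, or about 2130 cm\(^{-1}\):

```python
# Convert the computed mode energies from meV to THz and cm^-1 via E = h*nu.
H_PLANCK = 4.135667696e-15   # Planck constant in eV*s
C_CM = 2.99792458e10         # speed of light in cm/s

def mev_to_thz(e_mev):
    return e_mev * 1e-3 / H_PLANCK / 1e12

def mev_to_invcm(e_mev):
    return e_mev * 1e-3 / (H_PLANCK * C_CM)

modes_mev = [25.44, 25.63, 60.54, 60.56, 127.46, 264.06]
modes_thz = [mev_to_thz(e) for e in modes_mev]
modes_invcm = [mev_to_invcm(e) for e in modes_mev]
```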
IETS
Now look at the IETS signal. Select one of the InelasticTransmissionSpectrum objects \(T({\bf k},{\bf q})\) in the LabFloor and click the Inelastic Transmission Spectrum Analyzer plugin, which is available from ATK 2017 onwards.
Set the panel as follows and click Show.
Attention
Because the density of states (DOS) of the gold electrodes is almost flat within this bias window, there is only a minor difference between the LOE and XLOE results. See the discussion below for more details.
Three significant inelastic features appear (steps in the conductance and peaks in the IETS signal). Note that the inelastic conductance is increased by the coupling to the low-frequency transverse modes 0-3 and decreased by the coupling to the high-frequency longitudinal mode 5. Analyzing the scattering states clarifies the reason for the conductance increases.
Drag the TransmissionEigenstates object \(\psi_{i,{\bf k},\epsilon}\) from the LabFloor into the Viewer. Then drag the DeviceConfiguration into the Viewer, go to its properties, and, under the Isosurface tab, set the Isovalue to 0.5.
The scattering eigenstate has a clear d symmetry on the gold chain and therefore couples only very weakly to the s-like states of the hydrogen atoms, so the elastic transmission is small. The coupling to the transverse modes 0-3 can, however, open new inelastic channels, leading to positive steps in the conductance and peaks in the IETS signal.

Detailed analysis of transmission: Comparison between LOE and XLOE
In the same Inelastic Transmission Spectrum Analyzer window, choose Transmissions as the Plot type, set the options as shown below, and click Show. This displays the positive symmetric and asymmetric parts of the inelastic transmission:
To understand the details of the transmission, note that, in both LOE and XLOE, the current as a function of the bias voltage \(V\) and the temperature \(T\) reads

\[ I(V,T) = G_0 V\, \mathcal{T}(\epsilon_F) + \sum_{\lambda} \left[ \mathcal{I}^{\mathrm{sym}}_{\lambda}(V,T)\, \mathcal{T}^{\mathrm{sym}}_{\lambda} + \mathcal{I}^{\mathrm{asym}}_{\lambda}(V,T)\, \mathcal{T}^{\mathrm{asym}}_{\lambda} \right], \]

where \(\epsilon_F\) is the Fermi energy, \(\mathcal{T}(\epsilon_F)\) is the elastic transmission, and \(\mathcal{T}^{\mathrm{sym}}_{\lambda}\) and \(\mathcal{T}^{\mathrm{asym}}_{\lambda}\) are the symmetric and asymmetric inelastic transmission functions for phonon mode \(\lambda\). In either approximation, the universal functions \(\mathcal{I}^{\mathrm{sym}}_{\lambda}\) and \(\mathcal{I}^{\mathrm{asym}}_{\lambda}\) appear in the same form: the symmetric one is

\[ \mathcal{I}^{\mathrm{sym}}_{\lambda}(V,T) = \frac{G_0}{e}\left[ 2 e V\, n_B(\hbar\omega_{\lambda}) + \frac{\hbar\omega_{\lambda} - eV}{e^{(\hbar\omega_{\lambda} - eV)/k_B T} - 1} - \frac{\hbar\omega_{\lambda} + eV}{e^{(\hbar\omega_{\lambda} + eV)/k_B T} - 1} \right], \]

with \(n_B\) the Bose-Einstein phonon occupation, while the asymmetric one involves the Hilbert transform of differences of Fermi-Dirac distribution functions (see Refs. [FPBJ07] and [LCF+14] for its explicit form),
where \(G_0=2e^2/h\) is the conductance quantum, \(n_F(\cdot)\) is the Fermi-Dirac distribution function, and \(\mathcal{H}_{\epsilon'}[\cdot](\epsilon)\) denotes the Hilbert transform. In XLOE, the transmission functions \(\mathcal{T}^{\mathrm{sym}}_{\lambda}\) and \(\mathcal{T}^{\mathrm{asym}}_{\lambda}\) are given by traces over products of the retarded Green's function, the electron-phonon coupling matrix of mode \(\lambda\), the lead self-energies, and the spectral functions, evaluated at the energies \(\mu_L\) and \(\mu_R\) (see Ref. [LCF+14] for the explicit expressions). This XLOE expression of the transmission functions reduces to the LOE expression by setting \(\mu_L=\mu_R=\epsilon\). Importantly, in the expression for the current, the Green's function, the self-energies, and the spectral functions entering the transmission functions are evaluated at two different energies \(\epsilon_F\pm \hbar\omega_{\lambda}\) in XLOE, whereas they are all evaluated at the same energy \(\epsilon_F\) in LOE. See Ref. [LCF+14] for more details.
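As a numerical check, the symmetric universal function can be evaluated in a form commonly quoted in the LOE literature (the overall prefactor is dropped here and the exact form should be checked against Ref. [FPBJ07]); at low temperature it reduces to a conductance step of unit height at the phonon threshold:

```python
import numpy as np

# Symmetric LOE universal function, prefactor dropped -- treat as a sketch.
def bose_frac(x, kT):
    # x / (exp(x/kT) - 1), with the x -> 0 limit (value kT) handled explicitly
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 1e-12, kT, x / np.expm1(x / kT))

def i_sym(eV, hw, kT):
    """Inelastic current correction vs bias eV for a phonon of energy hw."""
    n_bose = 1.0 / np.expm1(hw / kT)           # phonon occupation
    return (2.0 * eV * n_bose
            + bose_frac(hw - eV, kT)
            - bose_frac(hw + eV, kT))

hw, kT = 0.264, 0.005                          # mode-5 energy; kT assumed (eV)
eV = np.linspace(0.0, 0.5, 512)
dIdV = np.gradient(i_sym(eV, hw, kT), eV)      # inelastic conductance step
```

At low \(k_BT\) the correction vanishes below the threshold and grows linearly as \(eV-\hbar\omega_\lambda\) above it, i.e., a unit conductance step at \(eV=\hbar\omega_\lambda\) that is thermally smeared over a few \(k_BT\).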
For a direct comparison between the LOE and XLOE results, the inelastic transmissions for each phonon mode are shown below. The lines labeled positive/negative (a)symmetric transmission are plots of \(\mathcal{T}^{(a)sym}_{\lambda}(\epsilon)\) as functions of \(\epsilon\) with \(\mu_R=\epsilon\pm \hbar\omega_{\lambda}\). The positive and negative (a)symmetric parts are identical in LOE by definition, but differ in XLOE.
The LOE and XLOE results agree well in regions where the transmission varies slowly with \(\epsilon\). Around -0.11 eV, however, the wide-band approximation in LOE, i.e., the approximation \(\mu_L=\mu_R=\epsilon\), breaks down due to a resonance in the DOS of the 1D gold chain, and a difference between the LOE and XLOE results appears. The symmetric and asymmetric inelastic transmissions for mode 5 show this very clearly: the single peak at -0.11 eV in LOE splits into two peaks in XLOE, with a separation of about 264 meV, which corresponds to the phonon energy of mode 5.
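The peak splitting can be mimicked with a purely schematic toy model: a Lorentzian resonance sampled at a single energy (LOE-like, wide-band) versus the average of two samples shifted by the phonon energy (XLOE-like). All parameters except the mode-5 energy and the resonance position quoted in the text are invented:

```python
import numpy as np

# Schematic illustration of the wide-band breakdown -- not the real formulas.
hw = 0.264            # mode-5 phonon energy (eV)
e_r = -0.11           # resonance position (eV), as in the text
gamma = 0.02          # assumed resonance width (eV)

def lorentz(e):
    return gamma**2 / ((e - e_r)**2 + gamma**2)

e = np.linspace(-0.6, 0.4, 4001)
sig_loe = lorentz(e)                               # single peak at e_r
sig_xloe = 0.5 * (lorentz(e) + lorentz(e - hw))    # peaks at e_r and e_r + hw

peak_lo = e[np.argmax(np.where(e < 0.02, sig_xloe, 0.0))]
peak_hi = e[np.argmax(np.where(e > 0.02, sig_xloe, 0.0))]
splitting = peak_hi - peak_lo                      # ~ hw, one phonon quantum
```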
References
[FPBJ07] (1, 2) T. Frederiksen, M. Paulsson, M. Brandbyge, and A.-P. Jauho. Inelastic transport theory from first principles: Methodology and application to nanoscale devices. Phys. Rev. B, 75:205413, May 2007. doi:10.1103/PhysRevB.75.205413.
[LCF+14] (1, 2) J.-T. Lü, R. B. Christensen, G. Foti, T. Frederiksen, T. Gunst, and M. Brandbyge. Efficient calculation of inelastic vibration signals in electron transport: Beyond the wide-band approximation. Phys. Rev. B, 89:081405, Feb 2014. doi:10.1103/PhysRevB.89.081405.
[Ree08] M. A. Reed. Inelastic electron tunnelling spectroscopy. Materials Today, 2008. doi:10.1016/S1369-7021(08)70238-4.