# Category: MEAN
## How to load MongoDB data into my own front-end table
How do I load MongoDB data into my own front-end table? I created my table with Angular, but when I try to put the data into it, nothing appears, and I don't know why: there are no errors, and when I log the data I can see and read it. I've put my code below in the hope that someone can help me. PS: sorry for my bad English. 😀 student-service.ts @Injectable() […]
## AngularJS POST Fails: Failed to load http://localhost:3000/done: Response for preflight has invalid HTTP status code 404
When I POST data from Postman it shows a correct response, but when I POST data from the app (client) side it fails with Failed to load http://localhost:3000/done: Response for preflight has invalid HTTP status code 404. Source: AngularJS
## How to get the templates of one server from another (Angular 1, Angular 2, or plain JS)
I have a folder with a list of templates. I want to fetch these templates through an HTTP call, or any other method if that suits better. The folder is on the server at the URL www.testserver/email/templates/. Can anyone please suggest how? Thanks. Source: AngularJS
---
# At what points on the curve $x^2+y^2-2x-4y+1=0$ are the tangents parallel to the y-axis?
## 1 Answer
Toolbox:
• If the tangent of a curve is parallel to $y$ - axis then $\large\frac{dx}{dy}=0$
Step 1
Equation of the curve is $x^2+y^2-2x-4y+1=0\: \: (1)$
differentiating w.r.t $x$ we get,
$2x+2y\large\frac{dy}{dx}-2-4\large\frac{dy}{dx}=0$
$\Rightarrow \large\frac{dy}{dx}(2y-4)=2-2x$
$\therefore \large\frac{dy}{dx}=\large\frac{(1-x)}{(y-2)}$
It is given that the tangent is parallel to the $y$-axis, so $\large\frac{dx}{dy}=0$
$\Rightarrow \large\frac{y-2}{1-x}=0$
$\therefore y-2=0$
$\Rightarrow y=2$
Substituting for $y=2$ in equation (1)
$x^2+4-2x-8+1=0$
$\Rightarrow x^2-2x-3=0$
$\Rightarrow (x-3)(x+1)=0$
$\therefore x=3$ and $x=-1$
Hence the required points are $(3, 2)$ and $(-1, 2)$
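The result can be verified numerically with a short script (my own check, using equation (1) exactly as written in the answer):

```python
# Check: F(x, y) = x^2 + y^2 - 2x - 4y + 1 is equation (1), and
# dF/dy = 2y - 4 vanishes at y = 2, which is the vertical-tangent
# condition dx/dy = 0.
def F(x, y):
    return x * x + y * y - 2 * x - 4 * y + 1

assert 2 * 2 - 4 == 0        # dF/dy = 0 at y = 2
for x in (3, -1):
    assert F(x, 2) == 0      # (3, 2) and (-1, 2) both lie on the curve
```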
answered Aug 4, 2013
---
### An implication of the Farrell-Jones conjecture
A ‘well-known’ implication of the Farrell-Jones conjecture (for a given group G) is that the map $\widetilde{K_0(\mathbb{Z}G)} \to \widetilde{K_0(\mathbb{Q}G)}$ in reduced algebraic K-theory is rationally trivial. What at first might seem as a technical statement about algebraic K-theory turns out to have an interesting geometric consequence. It implies the Bass conjecture, which is equivalent to … Continue reading "An implication of the Farrell-Jones conjecture"
### Kaplansky’s direct finiteness conjecture
Not too long ago I blogged about the first counter-example to Kaplansky’s unit conjecture (link) stating that there are no non-trivial units in the group ring K[G] for K a field and G a torsion-free group. A related conjecture of Kaplansky (one that I was not aware of until recently) is that K[G] is directly … Continue reading "Kaplansky’s direct finiteness conjecture"
### Message from the EMS president
In the last EMS Magazine (2021/No. 121) Volker Mehrmann reflected in his editorial (link) on the bygone (virtual) European Congress 8ECM. At the end he asked us to write to him with our opinions about the matters he addressed, which I did. I now want to share my e-mail to him with you: Lieber Volker, … Continue reading "Message from the EMS president"
### Lie groups acting on countable sets
Does every connected Lie group act faithfully on a countable set? In other words: is every Lie group a subgroup of $$\mathrm{Sym}(\mathbb{N})$$? This question is sometimes called Ulam’s problem and there is recent progress in a paper of Nicolas Monod. Monod proves that every nilpotent connected Lie group acts faithfully on a countable set. It … Continue reading "Lie groups acting on countable sets"
### Topological CAT(0)-manifolds
It is an interesting and important fact that a contractible manifold (without boundary) is not necessarily homeomorphic to Euclidean space. This makes the classical Cartan-Hadamard theorem, stating that a contractible manifold equipped with a Riemannian metric of non-positive sectional curvature is diffeomorphic to Euclidean space, even more powerful. One can ask now whether one can … Continue reading "Topological CAT(0)-manifolds"
### PSC obstructions via infinite width and index theory
In a recent preprint (arXiv:2108.08506), Yosuke Kubota proved an intriguing new result on the relation of largeness properties of spin manifolds and index-theoretic obstructions to positive scalar curvature (psc): Let $$M$$ be a closed spin $$n$$-manifold. If $$M$$ has infinite $$\mathcal{KO}$$-width, then its Rosenberg index $$\alpha(M) \in \mathrm{KO}_n(\mathrm{C}^\ast_{\max} \pi_1 M)$$ does not vanish. Let us … Continue reading "PSC obstructions via infinite width and index theory"
### (Non-)Vanishing results for Lp-cohomology of semisimple Lie groups
For a locally compact, second countable group $$G$$ one can define the continuous $$L^p$$-cohomology $$H^*_{ct}(G,L^p(G))$$ of $$G$$ and the reduced version $$\overline{H}^*_{ct}(G,L^p(G))$$ for all $$p > 1$$. In his influential paper “Asymptotic invariants of infinite groups” Gromov asked if $H^j(G,L^p(G)) = 0$ when $$G$$ is a connected semisimple Lie group and $$j < \mathrm{rk}_{\mathbb{R}}(G)$$. … Continue reading "(Non-)Vanishing results for Lp-cohomology of semisimple Lie groups"
### New book about Freedman’s proof
Today I learnt from an article in the QuantaMagazine (link to article) that there is finally a new book trying to explain Freedman’s proof of the 4-dimensional Poincaré conjecture (link to book). The article is fun to read since it contains statements of the involved people about how the whole ‘situation’ about the non-understandable write-up … Continue reading "New book about Freedman’s proof"
---
# Linear regression: p-values of t-tests for significance of regression coefficients are the same as p-values of F-tests about the submodel
Suppose we have regression problem:
$$Y = \beta_{0}+\beta_{1}X_{1} + \beta_{2}X_{2} + \epsilon \text{, } \epsilon \sim \mathcal{N}(0,1),$$
where $$X_1 \sim U(0,1)$$,$$X_{2} \sim U(1,2)$$, and we suppose our model with regression function
$$m:=E[Y|X_1,X_2] = \beta_{0}+\beta_{1}X_{1} + \beta_{2}X_{2}.$$
We apply OLS estimation and obtain estimates of the regression coefficients. Let's do this in $$\verb|R|$$:
X1 <- runif(1000,0,1)
X2 <- runif(1000,1,2)
Y <- X1 + X2 + rnorm(1000,0,1)
model <- lm(Y ~ X1 + X2)
summary(model)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.1401 0.1828 0.766 0.444
X1 1.0250 0.1150 8.909 < 2e-16 ***
X2 0.9247 0.1139 8.119 1.37e-15 ***
---
Signif. codes:
0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.026 on 997 degrees of freedom
Multiple R-squared: 0.1276, Adjusted R-squared: 0.1259
F-statistic: 72.94 on 2 and 997 DF, p-value: < 2.2e-16
Next, we want to test whether the regression coefficient $$\beta_2$$ is significant or not, so here is the t-test for this:
$$H_{0}: \beta_{2} = 0$$ $$H_{1}: \beta_{2} \neq 0$$
The appropriate test statistic is:
$$T_n = \frac{\hat{\beta_{2}}}{S.E(\hat{\beta_{2}})} \sim t_{n-r},$$
and the p-value is $$2\min\{CDF_{t}(t_{0}), 1 - CDF_{t}(t_0)\}$$, where $$CDF_t$$ is the cumulative distribution function of the $$t$$-distribution with $$n-r$$ (in our case $$1000-3=997$$) degrees of freedom. Let's suppose that we also test the submodel:
$$H_{0}: Y \sim X_{1} \text{ holds } (M^{0})$$ vs. $$H_{1}: Y \sim X_{1} + X_{2} \text{ holds } (M).$$
The test statistic is $$F = \frac{\frac{SSe^0 - SSe}{r - r_{0}}}{MSe} \sim F_{r-r_0,n-r}$$ and the p-value is $$1-CDF_F(f_0)$$, where $$CDF_F$$ is the cumulative distribution function of the F-distribution with $$r-r_0$$ and $$n-r$$ degrees of freedom. In $$\verb|R|$$, let's run the test about the submodel:
m0 <- lm(Y ~ X1)
anova(m0,model)
Analysis of Variance Table
Model 1: Y ~ X1
Model 2: Y ~ X1 + X2
Res.Df RSS Df Sum of Sq F Pr(>F)
1 998 1118.2
2 997 1048.8 1 69.352 65.926 1.373e-15 ***
---
Signif. codes:
0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Now here is my question. I know that if $$X \sim t_{n-r}$$, then $$X^2 \sim F_{1,n-r}$$. So if we square the t-value in our summary table, in the row for the regression coefficient $$\hat{\beta}_2$$, we get the F-value in the output of the submodel test; that part is fine. But we also get the same p-values, even though the p-value in the summary table is computed from the t-distribution, not from its square. These numbers imply that $$2\min\{CDF_{t}(t_{0}), 1 - CDF_{t}(t_0)\} = 1-CDF_f(f_0)$$, which I did not think was true in general. What am I missing? How is it possible that p-values computed from two different distributions are the same? (And this happens not only in this case, but in every linear regression problem.) Why are the p-values the same? Please help.
In all cases I assume the normal linear model; that is why I assume those distributions of the test statistics under the $$H_0$$ hypothesis.
You should expect that the p-values are the same. If you reject the null hypothesis that $$\beta_2$$ = 0 (using the t-test), you should simultaneously reject the null hypothesis that the model without $$\beta_2$$ is adequate (using the F-test).
The p-value for the t-test is the probability that $$|T| > |t_0|$$ for $$T \sim t_{n-r}$$, and the p-value for the F-test is the probability that $$F > f_0$$ for $$F \sim F_{1,n-r}$$. But these are mathematically identical statements: since $$f_0 = t_0^2$$ and $$T^2 \sim F_{1,n-r}$$, the event $$|T| > |t_0|$$ is exactly the event $$T^2 > f_0$$. Therefore, the p-values will always be numerically identical in this situation.
It's true that the p-value is computed from two different distributions, but they're found using different test statistics which are related to one another in a way that the p-value will always be identical.
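A tiny Monte Carlo sketch of this event identity (my own illustration; the values of df, t0, and the sample size are arbitrary choices): sampling $$T$$ as a standard normal divided by the root of an independent scaled chi-square, the empirical two-sided t-tail and the empirical tail of $$T^2$$ count exactly the same draws.

```python
import math
import random

# Sample T as Z / sqrt(V / df), with Z standard normal and V an independent
# chi-square with df degrees of freedom. The events {|T| > |t0|} and
# {T^2 > t0^2} are the same event, so the empirical two-sided t-test p-value
# and the empirical F-test p-value coincide exactly, draw for draw.
random.seed(1)
df, t0, n = 10, 1.5, 5000

def sample_t(df):
    z = random.gauss(0.0, 1.0)
    v = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / math.sqrt(v / df)

ts = [sample_t(df) for _ in range(n)]
p_t = sum(abs(t) > abs(t0) for t in ts) / n  # two-sided t-test tail
p_f = sum(t * t > t0 * t0 for t in ts) / n   # F-test tail, since T^2 ~ F(1, df)
assert p_t == p_f  # identical events, so identical counts
```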
• this would be a good answer if you include how are the two test statistics related to each other and why will they always (or almost always?) produce the same p-value Jan 8 '20 at 17:13
• Thanks for your feedback @rep_ho. I updated my answer with some more details on how the statistics are related to one another. Jan 8 '20 at 18:14
• Yes, this is right, thank you for your help. :) Jan 9 '20 at 13:04
---
# Intercepts & standard form
• April 27th 2010, 02:07 PM
buck
Intercepts & standard form
Express each of the following in standard form
y= 1/4x - 1/3
-4(-1/4x + y +1/3)=(0)-4
-3(1x - 4y -4/3)=(0)-3
-3x+12y+4=0
-3x+12y=-4
The answer is 3x-12y=4
I don't know what I did wrong, please help.(Itwasntme)
• April 27th 2010, 02:17 PM
harish21
Quote:
Originally Posted by buck
Express each of the following in standard form
y= 1/4x - 1/3
-4(-1/4x + y +1/3)=(0)-4
-3(1x - 4y -4/3)=(0)-3
-3x+12y+4=0
-3x+12y=-4
The answer is 3x-12y=4
I don't know what I did wrong, please help.(Itwasntme)
Your answer is correct.
You have $-3x+12y=-4$
so,
$-(3x-12y) = -4 \implies 3x-12y=4$
another method:
$y = \frac{x}{4} - \frac{1}{3} = \frac{3x-4}{12}$
$\therefore 12y = 3x-4 \implies 3x-12y=4$
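The equivalence of the two forms can also be spot-checked numerically (a throwaway sketch of mine):

```python
# Spot-check that 3x - 12y = 4 and y = x/4 - 1/3 describe the same line.
for x in (-2.0, 0.0, 1.0, 5.0):
    y = x / 4.0 - 1.0 / 3.0
    assert abs(3.0 * x - 12.0 * y - 4.0) < 1e-12
```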
---
# What are the big problems in probability theory?
Most branches of mathematics have big, sexy famous open problems. Number theory has the Riemann hypothesis and the Langlands program, among many others. Geometry had the Poincaré conjecture for a long time, and currently has the classification of 4-manifolds. PDE theory has the Navier-Stokes equation to deal with.
So what are the big problems in probability theory and stochastic analysis?
I'm a grad student working in the field, but I can't name any major unsolved conjectures or open problems which are driving research. I've heard that stochastic Löwner evolutions are a big field of study these days, but I don't know what the conjectures or problems relating to them are.
Does anyone have any suggestions?
• Perhaps should be CW... Maybe look at recent papers in probability in top journals and see what people are working on? – Gerald Edgar Aug 30 '10 at 12:59
• Though this question is imperfect, I vote to keep it open. As a frequent consumer of probability theory I find it interesting and useful. – Steve Huntsman Aug 31 '10 at 6:14
• I feel that the answers, while nice, leave large areas of probability untouched. – Gil Kalai Sep 1 '10 at 7:57
• As much as I love maths and their open problems, I don't think the word sexy applies to them – Luis Mendo Dec 6 '20 at 19:43
To my mind the sexiest of open problems in probability is to show that there is "no percolation at the critical point" (mentioned in particular in section 4.1 of Gordon Slade's contribution to the Princeton Companion to Mathematics). A capsule summary: write $\mathbb{Z}_{d,p}$ for the random subgraph of the nearest-neighbour $d$-dimensional integer lattice, obtained by independently keeping each edge with probability $p$. Then it is known that there exists a critical probability $p_c(d)$ (the percolation threshold) such that for $p < p_c$, with probability one $\mathbb{Z}_{d,p}$ contains no infinite component, and for $p > p_c$, with probability one there exists a unique infinite component.
The conjecture is that with probability one, $\mathbb{Z}_{d,p_c(d)}$ contains no infinite component. The conjecture is known to be true when $d =2$ or $d \geq 19$.
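The objects in the conjecture are easy to simulate on a finite window (a rough sketch of my own, purely illustrative; for bond percolation on $\mathbb{Z}^2$, Kesten's theorem gives $p_c = 1/2$ exactly): below $p_c$ the largest open cluster occupies a vanishing fraction of the box, above it a positive fraction.

```python
import random

# Bond percolation on an L x L piece of Z^2: keep each nearest-neighbour
# edge independently with probability p, then measure the largest open
# cluster as a fraction of the window, using union-find.
def largest_cluster_fraction(L, p, seed=0):
    rng = random.Random(seed)
    parent = list(range(L * L))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for x in range(L):
        for y in range(L):
            if x + 1 < L and rng.random() < p:
                union(x * L + y, (x + 1) * L + y)  # horizontal edge kept
            if y + 1 < L and rng.random() < p:
                union(x * L + y, x * L + y + 1)    # vertical edge kept

    sizes = {}
    for v in range(L * L):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / (L * L)

# Subcritical vs supercritical: the contrast is stark even in a small window.
assert largest_cluster_fraction(100, 0.3) < 0.05
assert largest_cluster_fraction(100, 0.7) > 0.4
```

The hard open question is, of course, about the behaviour exactly at $p = p_c$, which no finite simulation can settle.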
Incidentally, one of the most effective ways we have of understanding percolation -- a technique known as the lace expansion, largely developed by Takeshi Hara and Gordon Slade -- is also one of the key tools for studying self-avoiding walks and a host of other random lattice models.
That article of Slade's is in fact full of intriguing conjectures in the area of critical phenomena, but the conjecture I just mentioned is probably the most famous of the lot.
• I agree that this conjecture (referred to as the dying percolation conjecture) is a great open problem. It is especially challenging in dimensions 3,4, and 5 where the Hara Slade results do not hold. – Gil Kalai Aug 30 '10 at 21:56
Maybe the number-one problem of probability is to make rigorous what one finds in just about any textbook of statistical mechanics. In other words, it is to put the predictions of Wilson's renormalization group theory on a rigorous footing. Many of the topics mentioned in this post are particular conjectures in this broader program.
Update: A nice recent review on this topic by Gordon Slade can be found here.
Understand self-avoiding random walks, see http://gowers.wordpress.com/2010/08/22/icm2010-smirnov-laudatio/.
• Understanding SAW is certainly one of the biggest outstanding problems in probability theory. Nonetheless, it's premature to select it as The Answer within hours of posting your question. – Tom LaGatta Aug 30 '10 at 18:50
• Since it is not CW, this means there is one, unique, answer. So this must be it! – Gerald Edgar Aug 30 '10 at 21:39
One major problem is extending the wonderful understanding of planar stochastic models to higher dimensions. So understanding 3,4-dimensional percolation, Ising Model, self avoiding walks, loop erased random walks and their scaling limits is a rather important problem.
The normal distribution and the many places it occurs in mathematics and its application is a primary example of a universal phenomenon. Proving and understanding other universal phenomena in probability is of great importance. One example I like is to understand the distributions that came from random matrix theory and occur in various other places. One such distribution is the distribution of the largest eigenvalue of a random matrix discovered by Tracy and Widom.
Michel Talagrand has a number of open problems (with bounty) listed on his website. I haven't looked at them all, but knowing him, I guarantee you that they are very hard and quite important. These are motivated by his research directions, but unlike some fields, there's not one research direction and one set of open problems that dominate probability theory right now.
• I like the use of the word "very" to mean "probably extremely" (from what little I've tried to read of Talagrand's stuff) – Yemon Choi Aug 30 '10 at 21:55
To determine the limit shape of first passage percolation.
In the $n$-dimensional grid, start with a vertex colored black and all others colored white. Choose uniformly a bicolor edge (one black end, one white end) and color in black its white end. Continue this process forever.
The black part grows, and it is known that if we rescale it so that it has constant diameter, it converges to a convex shape. What we do not know is what the shape is.
• What Benoît has described is the Richardson growth model, which has limiting shape equal to that of first-passage percolation with i.i.d. exponential passage times. What is most fascinating to me is that the limiting shape is not known for any distribution of i.i.d. passage times. There are related models for which the limiting shape is known (e.g. last-passage percolation, Euclidean FPP, FPP with stationary and ergodic passage times), but the i.i.d. case has resisted all attack. – Tom LaGatta Aug 30 '10 at 19:01
• Another variation (the general Richardson model) is to choose some $p$, and for each boundary edge to color its white end black with probability $p$. The limit shape is not known except obviously for $p=1$, and when $p\to 0$ the limit shape converges to the limit shape of first passage percolation. An interesting fact is known, though: if $p$ is close enough to $1$, then the limit shape is not strictly convex. – Benoît Kloeckner Aug 31 '10 at 8:15
The lack of a so-called big problem in probability theory seems to suggest the richness of the subject itself. One of the most fascinating subfields is the determination of convergence rates of finite-state-space Markov chains. Many convergence problems, even on finite groups, have exhausted current analytic techniques. For instance, intuition from the coupon collector's problem suggests that the random adjacent transposition walk exhibits cutoff in total variation convergence to the uniform measure on the symmetric group, and the gap between the upper and lower bounds is only a factor of 2. There are many tools one can employ to study such problems, such as representation theory and discretized versions of inequalities from PDE theory, which makes the solutions very creative.
• Number theory has several big problems but is also a very rich subject (not all work in number theory is directed towards the Riemann hypothesis or the Birch and Swinnerton-Dyer conjecture), so the lack of a big problem in probability does not really point to the subject's richness. – KConrad Feb 2 '11 at 17:30
• I suppose if you couple it with the fact so many people work in it, then richness does become a corollary. – John Jiang Feb 2 '11 at 19:55
• I wasn't intending to suggest probability is not a rich subject, but I don't buy the argument that the lack of a very prominent unsolved question or program in an area is in some way a sign that the area is rich. Seems kind of after-the-fact justification to me. – KConrad Apr 15 '14 at 21:17
• Modern mathematicians never cease to amaze me. Just found out that the random adjacent transposition walk has been shown to exhibit cutoff at the Wilson lower bound back in 2016, after reading one of my advisor Persi's latest paper: arxiv.org/pdf/1309.3873.pdf – John Jiang Apr 8 '18 at 5:23
David Aldous has a list of open problems on his website, though they look like personal favorites rather than "big" questions. You might look at the problems Aldous labels as "Type 2:We have a precise mathematical problem, but we do not see any plausible outline for a potential proof."
Chapter 23 of the recent monograph Markov Chains and Mixing Times is a list of open problems. Again, though, I cannot say which of these are "big."
• In an earlier version (stat.berkeley.edu/~aldous/Research/problems.ps) of that list of open problems, Aldous states that 'they are not intended to be "representative" or "the most important" ... of all open problems in probability. The majority are (I think) my own invention and have not been discussed extensively elsewhere'. That having been said, I really enjoy Aldous' list and find many of his open problems dangerously fun to think about. – Louigi Addario-Berry Aug 30 '10 at 19:44
Maybe the 1917 Cantelli conjecture? If $f$ is a positive function on real numbers, if $X$ and $Z$ are $N(0,1)$ independent rv such that $X+f(X)Z$ is normal, prove that $f$ is a constant ae.
• What kind of information is there out there about the history of this problem? – weakstar Feb 3 '11 at 1:56
• Victor Kleptsyn and Aline Kurtzmann claim to give a counterexample (front.math.ucdavis.edu/1202.2250 ). – Ori Gurel-Gurevich Feb 13 '12 at 19:39
You can also have a look at the list of open problems on Michael Aizenman's homepage:
http://www.math.princeton.edu/~aizenman/OpenProblems.iamp/
These are very important for (mathematical) physics, and several fall in the realm of probability theory (in particular: Soft phases in 2D O(N) models, and Spin glass).
In limit theorems, one of the biggest problem is to give an answer to Ibragimov's conjecture, which states the following:
Let $(X_n,n\in\Bbb N)$ be a strictly stationary $\phi$-mixing sequence for which $\mathbb E(X_0^2)<\infty$ and $\operatorname{Var}(S_n)\to +\infty$, where $S_n:=\sum_{j=1}^nX_j$. Then $S_n$, suitably centered and normalized, is asymptotically normally distributed.
$\phi$-mixing coefficients are defined as $$\phi_X(n):=\sup\{|\mu(B\mid A)-\mu(B)| : A\in\mathcal F^m,\ B\in \mathcal F_{m+n},\ m\in\Bbb N \},$$ where $\mathcal F^m$ and $\mathcal F_{m+n}$ are the $\sigma$-algebras generated by the $X_j$ with $j\leqslant m$ (respectively $j\geqslant m+n$), and $\phi$-mixing means that $\phi_X(n)\to 0$.
It was posed in Ibragimov and Linnik paper in 1965.
Peligrad showed that the result holds under the extra assumption $\liminf_{n\to +\infty}n^{-1}\operatorname{Var}(S_n)>0$. It also holds when $\mathbb E\lvert X_0\rvert^{2+\delta}$ is finite for some positive $\delta$ (due to Ibragimov, I think).
Developing tools for handling random surfaces and proving a universal central limit theorem for them under minimal conditions (compare the usual CLT):
a) The Gaussian free field (GFF) has shown up as the limiting universal object for many random surfaces (in KPZ 2+1, random tilings, and the Ginibre ensembles of random matrix theory; cf. Borodin's and Kenyon's work).
b) Schramm-Loewner evolutions (SLE) have shown up as the limiting interfaces for families of statistical models.
c) Finally, merging the above two pictures, since SLEs can be coupled to the GFF (cf. Sheffield).
Another powerful result would be showing the equivalence of random planar maps and Liouville quantum gravity (LQG) (a promising approach by Miller and Sheffield). This matters because statistical models often become easier to handle on these random surfaces (the Kazakov Ising model, LERW via Duplantier).
---
# TIP4P/Ice model of water
The TIP4P/Ice model [1] is a re-parameterisation of the TIP4P potential for simulations of ice phases. TIP4P/Ice is a rigid planar model, having a similar geometry to the original Bernal and Fowler model.
## Parameters
| $r_{\mathrm {OH}}$ (Å) | $\angle$HOH (deg) | $\sigma$ (Å) | $\epsilon/k$ (K) | q(O) (e) | q(H) (e) | q(M) (e) | $r_{\mathrm {OM}}$ (Å) |
|---|---|---|---|---|---|---|---|
| 0.9572 | 104.52 | 3.1668 | 106.1 | 0 | 0.5897 | -2q(H) | 0.1577 |
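As a quick illustration (my own sketch, not from the original article), the oxygen-oxygen Lennard-Jones interaction implied by the $\sigma$ and $\epsilon/k$ values above can be coded directly, with energies expressed in kelvin (i.e. divided by Boltzmann's constant):

```python
# O-O Lennard-Jones term of TIP4P/Ice, using the parameters in the table.
sigma = 3.1668        # Å
eps_over_k = 106.1    # K (epsilon divided by Boltzmann's constant)

def u_lj(r):
    """LJ pair energy in kelvin for an O-O distance r in Å."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps_over_k * (sr6 * sr6 - sr6)

# The potential minimum sits at r = 2^(1/6) * sigma with depth -epsilon/k,
# and the energy vanishes at r = sigma.
r_min = 2.0 ** (1.0 / 6.0) * sigma
assert abs(u_lj(r_min) + eps_over_k) < 1e-9
assert abs(u_lj(sigma)) < 1e-9
```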
## Virial coefficients
The second virial coefficient has been calculated by Chialvo et al [3].
## Melting point
$T_m = 269.8 \pm 0.1$ K [4].
## References
---
# Complex Analysis: Integration
#### Shay10825
1. The problem statement, all variables and given/known data
Evaluate the following integral for 0<r<1 by writing $$\cos\theta = \frac{1}{2}(e^{i\theta} + e^{-i\theta})$$ reducing the given integral to a complex integral over the unit circle.
$$Evaluate: \displaystyle{\frac{1}{2\pi}\int_0^{2\pi}\frac{1}{1-2r\cos\theta + r^2}\,d\theta}$$
2. Relevant equations
none
3. The attempt at a solution
$$\displaystyle{\cos\theta = \frac{1}{2}(e^{i\theta} + e^{-i\theta})}$$
$$\displaystyle{z=e^{i\theta}}$$
$$\displaystyle{\cos\theta = \frac{1}{2}\left(z+ \frac{1}{z}\right)}$$
$$\displaystyle{\frac{dz}{iz}= d\theta}$$
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$$\displaystyle{\frac{1}{2\pi}\oint \frac{1}{1-2r[\frac{1}{2}(z+\frac{1}{z})]+r^2}\,\frac{dz}{iz}}$$
$$\displaystyle{\frac{1}{2\pi}\oint \frac{1}{z-2rz[\frac{1}{2}(z+\frac{1}{z})]+r^2}\,\frac{dz}{i}}$$
$$\displaystyle{\frac{-i}{2\pi}\oint \frac{1}{z-2rz[\frac{1}{2}(z+\frac{1}{z})]+r^2}\,dz}$$
$$\displaystyle{\frac{-i}{2\pi}\oint \frac{1}{z-r^2z^2-r+r^2}\,dz}$$
But I get stuck here. What do I do with the "r"? Should I factor it out, and if yes then how?
Thanks
#### gabbagabbahey
Homework Helper
Gold Member
$$\displaystyle{\frac{-i}{2\pi}\oint \frac{1}{z-r^2z^2-r+r^2}\,dz}$$
But I get stuck here. What do I do with the "r"? Should I factor it out, and if yes then how?
Thanks
Leave the "r" where it is, the location of the poles will depend on its value...find those poles by solving the quadratic $z-r^2z^2-r+r^2=0$ for $z$...
Edit: You've also got a couple of algebra errors, double check your last 4 steps
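As a sanity check that is not part of the original thread: for $0 < r < 1$ the integrand is the (unnormalized) Poisson kernel, so the integral equals $\frac{1}{1-r^2}$, which is the value the residue computation should produce once the poles are located correctly. A quick numerical quadrature confirms it:

```python
import math

# The equally spaced rule converges extremely fast for smooth periodic
# integrands, so a modest n already gives machine precision here.
def poisson_integral(r, n=4096):
    total = 0.0
    for k in range(n):
        theta = 2.0 * math.pi * k / n
        total += 1.0 / (1.0 - 2.0 * r * math.cos(theta) + r * r)
    return total / n  # equals (1/2π) ∫₀^{2π} dθ / (1 - 2 r cos θ + r²)

for r in (0.3, 0.7):
    assert abs(poisson_integral(r) - 1.0 / (1.0 - r * r)) < 1e-9
```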
---
# Mission #1: Exploring the MTBoS
Logarithms grow painfully slow. Students hear me say that but they don’t get it. I want them to really understand this type of function. I want them to grasp how slowly these graphs march off through the Cartesian plane in their deliberate quest to be part of the infinite. If a student cries out, “logarithms grow as slowly as their inverse exponential counterparts grow quickly” I’ve won. Okay, that never happens. But when I say that and they nod their heads instead of squint their eyes, it’s a start.
Consider graphing $y=log(x)$ on a typical whiteboard coordinate grid where every square is 1in. x 1in. and this whiteboard grid is at the front of the class for all to behold its power. There’s no pinching or dragging this graph. The axes are fixed.
Now imagine that this rich Desmos graph of the common log below were on this aforementioned static whiteboard.
Question 1: How many inches would we have to travel from the origin to reach a height of six inches? Six inches, that’s all. Start from the graph on the board in my room and follow the curve until it has climbed to a height of six inches.
Question 2: Where in San Diego county would we be? The answer would astound most any student. Would we find ourselves in neighboring Mr. W’s room? Tijuana? La Jolla Shores? The Laguna mountains? That’s the two-fold question. Go in any direction. Ignore the curvature of the Earth to play this game. Flat map.
The answer is 1,000,000 inches, since 10^6 is 1,000,000 and so log(1,000,000) = 6. After some conversion, students will come up with 15.78 miles. But how to interpret that on a map? 15.78 miles in any direction. Enter Mr. Circle and the need for a compass. Students have to make sense of the map they're given, its scale, and how to measure off 15.78 miles.
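The unit conversion behind the answer can be scripted in a few lines (a throwaway sketch; the 1-inch-per-unit whiteboard scale is the assumption described above):

```python
# How far along y = log10(x), drawn at 1 inch per unit, must we walk
# to reach a given height in inches?
def miles_to_height(h_inches):
    x_inches = 10.0 ** h_inches        # log10(x) = h  =>  x = 10^h
    return x_inches / 12.0 / 5280.0    # inches -> feet -> miles

assert round(miles_to_height(6), 2) == 15.78   # height 6 in. is ~15.78 mi out
```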
When I ask the question now I give them a paper Google map, and tell them to go at it. It’s a 15 minute or so activity that connects logarithms, geography, geometry, measurement, and their imagination. In the near future when the access to tech is no biggie, they’ll pull up their own map and use tools like this website http://www.freemaptools.com/radius-around-point.htm and we can compare reaching heights of seven inches, one foot, etc. The shifting of the map wouldn’t be a problem on a device. Technology here would reduce the thinking involved to construct a circle of a given radius, but it’d allow deeper conceptual questions.
I think when I get to logarithms this year and after students have played with the flat map, we can talk about the Earth’s curvature and what that really means. If the Earth curves roughly 8 inches per mile, how would the results change? Now we’re getting to “how” questions which are supremely better than “what” and “where” questions.
I also want to play with the metaphor of a ride and the value of thrill. Here's what I'm thinking. A huge ride is built whose track is shaped like the common log function (or any log function, for that matter). The further you go on the ride, the more the ticket to ride costs. But the further you go, the greater the thrill at the end. For at the end, you stop, pivot, and come screaming straight down. Where is the most thrill for your money? Justify it. And I'll need a cool name for the ride. Suggestions taken.
---
This topic is now archived and is closed to further replies.
# The difference between true and TRUE
## Recommended Posts
As the title says, I would like to know what's the difference between "true" and "TRUE" in C++ (and of course "false" and "FALSE" are analogous, I think). I use VC++ 6.0. Well, I've noticed that true is an int var and TRUE is a bool var, but besides that, is there any real difference? Thanks DariusKing
---
true is a bool constant, built into the C++ language.
TRUE is a #defined int constant used by windows APIs, with a value of 1
While it is true that all non-zero integral values evaluate to the boolean true and that true can be converted to the integral value 1, the two are not equivalent (I had a code snippet showing that, but I can't remember it).
The use of true is preferable in any case. TRUE is a hack that hails back to C (which didn't have a bool type until the C99 Standard) and pre-standard C++. It has no place in modern code... except maybe for backward compatibility with legacy APIs.
Here's an example:
template <class T>
void Foo(T f)
{
    if (f == (T)42)
        cout << "Equal" << endl;
}
If you pass true, it will print "Equal", but not if you pass TRUE. In the first case, 42 is converted into a bool and thus is evaluated as true, while in the second case, it stays an int and, well, 1 != 42.
There are other tricks, playing on the fact that it is the bool that gets converted into an int when checking whether they are equal, and other mixed-type comparisons.
The sizes are also not guaranteed to be the same: sizeof( true ) may be different from sizeof( TRUE ).
---
I usually use true too, but today I was coding a little MFC application and I got a warning when I tried to compile this piece of code:
// bla bla... other things
if(m_bMesaj == true )
{
// other stuff here
}
the warning was:
"warning C4805: '==' : unsafe mix of type 'int' and type 'const bool' in operation"
and then I replaced true with TRUE and the code compiled without any warnings.....
So that's why I've posted here....
DariusKing
---
# Hopf invariant
In mathematics, in particular in algebraic topology, the Hopf invariant is a homotopy invariant of certain maps between spheres.
## Motivation
In 1931 Heinz Hopf used Clifford parallels to construct the Hopf map
${\displaystyle \eta \colon S^{3}\to S^{2}}$,
and proved that ${\displaystyle \eta }$ is essential, i.e. not homotopic to the constant map, by using the linking number (=1) of the circles
${\displaystyle \eta ^{-1}(x),\eta ^{-1}(y)\subset S^{3}}$ for any ${\displaystyle x\neq y\in S^{2}}$.
It was later shown that the homotopy group ${\displaystyle \pi _{3}(S^{2})}$ is the infinite cyclic group generated by ${\displaystyle \eta }$. In 1951, Jean-Pierre Serre proved that the rational homotopy groups
${\displaystyle \pi _{i}(S^{n})\otimes \mathbb {Q} }$
for an odd-dimensional sphere (${\displaystyle n}$ odd) are zero unless i = 0 or n. However, for an even-dimensional sphere (n even), there is one more bit of infinite cyclic homotopy in degree ${\displaystyle 2n-1}$.
## Definition
Let ${\displaystyle \phi \colon S^{2n-1}\to S^{n}}$ be a continuous map (assume ${\displaystyle n>1}$). Then we can form the cell complex
${\displaystyle C_{\phi }=S^{n}\cup _{\phi }D^{2n},}$
where ${\displaystyle D^{2n}}$ is a ${\displaystyle 2n}$-dimensional disc attached to ${\displaystyle S^{n}}$ via ${\displaystyle \phi }$. The cellular chain groups ${\displaystyle C_{\mathrm {cell} }^{*}(C_{\phi })}$ are just freely generated on the ${\displaystyle i}$-cells in degree ${\displaystyle i}$, so they are ${\displaystyle \mathbb {Z} }$ in degree 0, ${\displaystyle n}$ and ${\displaystyle 2n}$ and zero everywhere else. Cellular (co-)homology is the (co-)homology of this chain complex, and since all boundary homomorphisms must be zero (recall that ${\displaystyle n>1}$), the cohomology is
${\displaystyle H_{\mathrm {cell} }^{i}(C_{\phi })={\begin{cases}\mathbb {Z} &i=0,n,2n,\\0&{\mbox{otherwise}}.\end{cases}}}$
Denote the generators of the cohomology groups by
${\displaystyle H^{n}(C_{\phi })=\langle \alpha \rangle }$ and ${\displaystyle H^{2n}(C_{\phi })=\langle \beta \rangle .}$
For dimensional reasons, all cup-products between those classes must be trivial apart from ${\displaystyle \alpha \smile \alpha }$. Thus, as a ring, the cohomology is
${\displaystyle H^{*}(C_{\phi })=\mathbb {Z} [\alpha ,\beta ]/\langle \beta \smile \beta =\alpha \smile \beta =0,\alpha \smile \alpha =h(\phi )\beta \rangle .}$
The integer ${\displaystyle h(\phi )}$ is the Hopf invariant of the map ${\displaystyle \phi }$.
## Properties
Theorem: ${\displaystyle h\colon \pi _{2n-1}(S^{n})\to \mathbb {Z} }$ is a homomorphism. If ${\displaystyle n}$ is odd, ${\displaystyle h}$ is trivial, since ${\displaystyle \pi _{2n-1}(S^{n})}$ is then a finite group. If ${\displaystyle n}$ is even, ${\displaystyle h}$ maps onto ${\displaystyle 2\mathbb {Z} }$.
The Hopf invariant is ${\displaystyle 1}$ for the Hopf maps (where ${\displaystyle n=1,2,4,8}$, corresponding to the real division algebras ${\displaystyle \mathbb {A} =\mathbb {R} ,\mathbb {C} ,\mathbb {H} ,\mathbb {O} }$, respectively, and to the fibration ${\displaystyle S(\mathbb {A} ^{2})\to \mathbb {PA} ^{1}}$ sending a direction on the sphere to the subspace it spans). It is a theorem, proved first by Frank Adams and subsequently by Michael Atiyah with methods of topological K-theory, that these are the only maps with Hopf invariant 1.
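As a quick check of the definition (a standard computation, not spelled out in the text above): the mapping cone of the Hopf map ${\displaystyle \eta \colon S^{3}\to S^{2}}$ is the complex projective plane, and its cohomology ring forces the Hopf invariant to be 1.

```latex
% Mapping cone of the Hopf map:
C_\eta \;=\; S^2 \cup_\eta D^4 \;\cong\; \mathbb{CP}^2,
\qquad
H^*(\mathbb{CP}^2;\mathbb{Z}) \;=\; \mathbb{Z}[\alpha]/(\alpha^3),
\quad \deg\alpha = 2.

% Taking \beta := \alpha^2 as the generator in degree 4:
\alpha \smile \alpha \;=\; 1\cdot\beta
\quad\Longrightarrow\quad
h(\eta) \;=\; 1.
```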
## Generalisations for stable maps
A very general notion of the Hopf invariant can be defined, but it requires a certain amount of homotopy theoretic groundwork:
Let ${\displaystyle V}$ denote a vector space and ${\displaystyle V^{\infty }}$ its one-point compactification, i.e. ${\displaystyle V\cong \mathbb {R} ^{k}}$ and
${\displaystyle V^{\infty }\cong S^{k}}$ for some ${\displaystyle k}$.
If ${\displaystyle (X,x_{0})}$ is any pointed space (as it is implicitly in the previous section), and if we take the point at infinity to be the basepoint of ${\displaystyle V^{\infty }}$, then we can form the smash product
${\displaystyle V^{\infty }\wedge X}$.
Now let
${\displaystyle F\colon V^{\infty }\wedge X\to V^{\infty }\wedge Y}$
be a stable map, i.e. stable under the reduced suspension functor. The (stable) geometric Hopf invariant of ${\displaystyle F}$ is
${\displaystyle h(F)\in \{X,Y\wedge Y\}_{\mathbb {Z} _{2}}}$,
an element of the stable ${\displaystyle \mathbb {Z} _{2}}$-equivariant homotopy group of maps from ${\displaystyle X}$ to ${\displaystyle Y\wedge Y}$. Here "stable" means "stable under suspension", i.e. the direct limit over ${\displaystyle V}$ (or ${\displaystyle k}$, if you will) of the ordinary, equivariant homotopy groups; and the ${\displaystyle \mathbb {Z} _{2}}$-action is the trivial action on ${\displaystyle X}$ and the flipping of the two factors on ${\displaystyle Y\wedge Y}$. If we let
${\displaystyle \Delta _{X}\colon X\to X\wedge X}$
denote the canonical diagonal map and ${\displaystyle I}$ the identity, then the Hopf invariant is defined by the following:
${\displaystyle h(F):=(F\wedge F)(I\wedge \Delta _{X})-(I\wedge \Delta _{Y})(I\wedge F).}$
This map is initially a map from
${\displaystyle V^{\infty }\wedge V^{\infty }\wedge X}$ to ${\displaystyle V^{\infty }\wedge V^{\infty }\wedge Y\wedge Y}$,
but under the direct limit it becomes the advertised element of the stable homotopy ${\displaystyle \mathbb {Z} _{2}}$-equivariant group of maps. There exists also an unstable version of the Hopf invariant ${\displaystyle h_{V}(F)}$, for which one must keep track of the vector space ${\displaystyle V}$.
## References
• Adams, J. F. (1960), "On the non-existence of elements of Hopf invariant one", Annals of Mathematics, 72 (1): 20–104, doi:10.2307/1970147, JSTOR 1970147
• Adams, J. F.; Atiyah, M. F. (1966), "K-Theory and the Hopf Invariant", The Quarterly Journal of Mathematics, 17 (1): 31–38, doi:10.1093/qmath/17.1.31
The backslash (\) is a typographical mark used mainly in computing and is the mirror image of the common slash (/). The difference is easy to remember: / = the forward slash leans forward; \ = the backslash leans back.

Where is the backslash key located on my keyboard? On most keyboards it sits near the "Enter" or "Return" key — most often directly to the left of it, though it can also be placed above or below it. If pressing it prints a hash or nothing at all, check the keyboard layout setting: on Windows, the "EN" icon next to the taskbar switches between the UK and United States layouts, and it may need a couple of clicks before the change takes effect. The same layout fix applies when the backslash key does not register in a remote console, for example while installing Windows in a VM on a VMware ESXi host.

In Linux and Unix shells, the backslash is the escape character: it preserves the literal value of the character that follows, stripping whatever special meaning that character usually has in context. The one exception is the newline — a backslash at the end of a line tells the shell to keep reading the command on the next line. There are three ways to stop the shell from interpreting a metacharacter: escape it with a backslash, or enclose it in single or double quotes. The double quote protects everything between the quote marks except $, ', " and \; single quotes protect everything, but you cannot escape a single quote within a single-quoted string.

On the command line a space is a special character, so a directory named "Linux Stuff" must be entered as cd Linux\ Stuff (it is usually simpler to avoid spaces and special characters in file and directory names altogether). A dollar sign must be escaped the same way: sudo sqlcmd -S myserver -U sa -P Test\$\$ works, while the unescaped version does not. Alternatively, single-quote the whole argument, e.g. '//windowsmachine/this is my folder'.

On Windows, Domain\User is the "old" logon format, called the down-level logon name (also known as the SAMAccountName or pre-Windows 2000 logon name). Prefixing the user name with .\ is a shorthand way of saying "this computer": at a logon screen it lets you authenticate against the local machine without knowing or caring about its name, which comes in very handy over RDP or when scripting against local accounts. A double backslash introduces a network path (\\server\share), and the commands used to mount or access Windows shares from Linux (samba/cifs/smbclient) may need a backslash to escape special characters as well.

BackSlash Linux is an Ubuntu-based desktop distribution for 64-bit computers, featuring a custom shell running on top of the KDE Plasma desktop and a user interface inspired by macOS. Releases are named after characters from the 2013 Disney film Frozen; the default user of BackSlash Anna is "backslash" with no password. With the 2.0 "Pearl" release the project shifted focus to its own customized desktop, PDE, instead of shipping releases in a variety of desktop environments. Development has since been stopped temporarily, with the project citing financial problems. Ask BackSlash is a community-edited knowledge base and support forum for the distribution; it is not endorsed or supported by the BackSlash Linux project itself.

A few related notes: the bash prompt can be customized by modifying the PS{n} variables to display information such as the time, system load, number of logged-in users, uptime and more. agetty's -l/--login-program option invokes a specified login program instead of /bin/login (allowing, for example, one that asks for a dial-up password or uses a different password file), its -H login_host option writes the specified host into the utmp file, and only a small number of password failures are permitted before login exits and the communications link is severed. In the Computer Modern math font family, the shapes of \setminus and \backslash are identical, although the vertical size of \backslash can be modified by \left and \right directives, which is not the case for \setminus.

Finally, programming languages treat the backslash specially inside string literals too. In a C# verbatim string a backslash is an ordinary character; in an ordinary string constant it acts as an escape and must be doubled.
On many Linux systems, running in an xterm window: Backspace key emits (8 or ^H) Ctrl-Backspace key combination emits (127 or ^?) I got it to work on my system by creating a. Hello Andre. Linux for Programmers and Users, Section 5. humberto sanchez wrote:Hi Everyone, Basic String question It is possible to append a "\" (backslash) to a String, without having "\\"? There might be some weird, roundabout, "clever" way to do it, but why would you want to?. Given IE_sanitize is only used with files uploaded from Internet Explorer 7 and earlier which represents less than 3% of the total browser share, I don't think it is no longer necessary to use this method. Okay After Enough of those injection we are now moving towards Bypassing Login pages using SQL Injection. There are several ways you can match multiple strings or patterns from a file using grep. To install Backslash goto backslash official site to download the ISO Image to boot into your hard drive. hola mis estimados, estoy tratando de insertar en mi tabla que tengo en mysql unas rutas de unos archivos para luego poder llamr a esos. I thought I could do it this way: gsub(". By joining our community you will have the ability to post topics, receive our newsletter, use the advanced search, subscribe to threads and access many other special features. 0 Pearl instead of releasing many releases in a variety of desktop environments we are focusing on our new PDE version which is our own customized DE. The default is the user name only. When you start learning something new, it’s normal that you won’t know all commands by heart.
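The escaping options described above can be seen directly at a shell prompt. A minimal sketch (any POSIX shell):

```shell
# Three ways to stop the shell from interpreting a metacharacter,
# demonstrated on '$', which normally triggers variable expansion:

echo \$HOME     # backslash escape: prints the literal text $HOME
echo '$HOME'    # single quotes: everything inside is taken literally
echo "\$HOME"   # double quotes: '$' stays special, so it needs the backslash
```

All three commands print the literal string `$HOME` rather than the value of the variable.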
|
|
# 3.5 Addition of velocities (Page 3/12)
Page 3 / 12
Solution
Because ${\mathbf{\text{v}}}_{\text{tot}}$ is the vector sum of ${\mathbf{\text{v}}}_{\text{w}}$ and ${\mathbf{\text{v}}}_{\text{p}}$, its x- and y-components are the sums of the x- and y-components of the wind and plane velocities. Note that the plane only has a vertical component of velocity, so ${v}_{px}=0$ and ${v}_{py}={v}_{\text{p}}$. That is,
${v}_{\text{tot}x}={v}_{\text{w}x}$
and
${v}_{\text{tot}y}={v}_{\text{w}y}+{v}_{\text{p}}\text{.}$
We can use the first of these two equations to find ${v}_{\text{w}x}$ :
${v}_{\text{w}x}={v}_{\text{tot}x}={v}_{\text{tot}}\text{cos 110º}\text{.}$
Because ${v}_{\text{tot}}=\text{38}\text{.}0 m/\text{s}$ and $\text{cos 110º}=–0.342$, we have
${v}_{\text{w}x}=\left(\text{38.0 m/s}\right)\left(\text{–0.342}\right)=\text{–13.0 m/s.}$
The minus sign indicates motion west, which is consistent with the diagram.
Now, to find ${v}_{\text{w}\text{y}}$ we note that
${v}_{\text{tot}y}={v}_{\text{w}y}+{v}_{\text{p}}$
Here ${v}_{\text{tot}y}={v}_{\text{tot}}\text{sin 110º}$; thus,
${v}_{\text{w}y}=\left(\text{38}\text{.}0 m/s\right)\left(0\text{.}\text{940}\right)-\text{45}\text{.}0 m/s=-9\text{.}\text{29 m/s.}$
This minus sign indicates motion south, which is consistent with the diagram.
Now that the perpendicular components of the wind velocity ${v}_{\text{w}x}$ and ${v}_{\text{w}y}$ are known, we can find the magnitude and direction of ${\mathbf{\text{v}}}_{\text{w}}$ . First, the magnitude is
$\begin{array}{lll}{v}_{\text{w}}& =& \sqrt{{v}_{\text{w}x}^{2}+{v}_{\text{w}y}^{2}}\\ & =& \sqrt{\left(-\text{13}\text{.}0 m/s{\right)}^{2}+\left(-9\text{.}\text{29 m/s}{\right)}^{2}}\end{array}$
so that
${v}_{\text{w}}=\text{16}\text{.}0 m/s\text{.}$
The direction is:
$\theta ={\text{tan}}^{-1}\left({v}_{\text{w}y}/{v}_{\text{w}x}\right)={\text{tan}}^{-1}\left(-9\text{.}\text{29}/-\text{13}\text{.}0\right)$
giving
$\theta =\text{35}\text{.}6º\text{.}$
Discussion
The wind’s speed and direction are consistent with the significant effect the wind has on the total velocity of the plane, as seen in [link] . Because the plane is fighting a strong combination of crosswind and head-wind, it ends up with a total velocity significantly less than its velocity relative to the air mass as well as heading in a different direction.
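The component arithmetic in this example is easy to double-check numerically. A short Python sketch (not part of the original text; it uses the example's values of 38.0 m/s at 110º for the total velocity and a 45.0 m/s plane velocity due north):

```python
import math

# Known quantities from the example
v_tot = 38.0                  # total ground speed, m/s
theta = math.radians(110.0)   # direction of v_tot, measured from east (x-axis)
v_p = 45.0                    # plane's airspeed, due north (+y), m/s

# Components of the total velocity
v_tot_x = v_tot * math.cos(theta)   # ≈ -13.0 m/s
v_tot_y = v_tot * math.sin(theta)   # ≈ 35.7 m/s

# Wind components: v_tot = v_w + v_p, with v_p purely in +y
v_wx = v_tot_x
v_wy = v_tot_y - v_p                # ≈ -9.29 m/s

# Magnitude and direction of the wind velocity
v_w = math.hypot(v_wx, v_wy)                   # ≈ 16.0 m/s
angle = math.degrees(math.atan(v_wy / v_wx))   # ≈ 35.6º (south of west)

print(round(v_w, 1), round(angle, 1))  # 16.0 35.6
```

The printed values match the results derived in the solution above.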
Note that in both of the last two examples, we were able to make the mathematics easier by choosing a coordinate system with one axis parallel to one of the velocities. We will repeatedly find that choosing an appropriate coordinate system makes problem solving easier. For example, in projectile motion we always use a coordinate system with one axis parallel to gravity.
## Relative velocities and classical relativity
When adding velocities, we have been careful to specify that the velocity is relative to some reference frame . These velocities are called relative velocities . For example, the velocity of an airplane relative to an air mass is different from its velocity relative to the ground. Both are quite different from the velocity of an airplane relative to its passengers (which should be close to zero). Relative velocities are one aspect of relativity , which is defined to be the study of how different observers moving relative to each other measure the same phenomenon.
Nearly everyone has heard of relativity and immediately associates it with Albert Einstein (1879–1955), the greatest physicist of the 20th century. Einstein revolutionized our view of nature with his modern theory of relativity, which we shall study in later chapters. The relative velocities in this section are actually aspects of classical relativity, first discussed correctly by Galileo and Isaac Newton. Classical relativity is limited to situations where speeds are less than about 1% of the speed of light—that is, less than 3,000 km/s. Most things we encounter in daily life move slower than this speed.
Let us consider an example of what two different observers see in a situation analyzed long ago by Galileo. Suppose a sailor at the top of a mast on a moving ship drops his binoculars. Where will it hit the deck? Will it hit at the base of the mast, or will it hit behind the mast because the ship is moving forward? The answer is that if air resistance is negligible, the binoculars will hit at the base of the mast at a point directly below its point of release. Now let us consider what two different observers see when the binoculars drop. One observer is on the ship and the other on shore. The binoculars have no horizontal velocity relative to the observer on the ship, and so he sees them fall straight down the mast. (See [link] .) To the observer on shore, the binoculars and the ship have the same horizontal velocity, so both move the same distance forward while the binoculars are falling. This observer sees the curved path shown in [link] . Although the paths look different to the different observers, each sees the same result—the binoculars hit at the base of the mast and not behind it. To get the correct description, it is crucial to correctly specify the velocities relative to the observer.
what is physics
what are the basic of physics
faith
tree physical properties of heat
tree is a type of organism that grows very tall and have a wood trunk and branches with leaves... how is that related to heat? what did you smoke man?
what are the uses of dimensional analysis
Dimensional Analysis. The study of relationships between physical quantities with the help of their dimensions and units of measurements is called dimensional analysis. We use dimensional analysis in order to convert a unit from one form to another.
Emmanuel
meaning of OE and making of the subscript nc
Negash
kinetic functional force
what is a principal wave?
A wave the movement of particles on rest position transferring energy from one place to another
Gabche
not wave. i need to know principal wave or waves.
Haider
principle wave is a superposition of wave when two or more waves meet at a point , whose amplitude is the algebraic sum of the amplitude of the waves
kindly define principal wave not principle wave (principle of super position) if u can understand my question
Haider
what is a model?
hi
Muhanned
why are electros emitted only when the frequency of the incident radiation is greater than a certain value
b/c u have to know that for emission of electron need specific amount of energy which are gain by electron for emission . if incident rays have that amount of energy electron can be emitted, otherwise no way.
Nazir
what is ohm's law
states that electric current in a given metallic conductor is directly proportional to the potential difference applied between its end, provided that the temperature of the conductor and other physical factors such as length and cross-sectional area remains constant. mathematically V=IR
ANIEFIOK
hi
Gundala
A body travelling at a velocity of 30ms^-1 in a straight line is brought to rest by application of brakes. if it covers a distance of 100m during this period, find the retardation.
just use v^2-u^2=2as
Gundala
how often does electrolyte emits?
alhassan
v² - u² = 2as, with v = 0, u = 30, s = 100:
0 - 30² = 2a(100)
-900 = 200a
a = -900/200 = -4.5 m/s²
akinyemi
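The worked solution above can be confirmed with a two-line computation, using v² = u² + 2as solved for a (a minimal sketch with the problem's values):

```python
# Kinematics: v^2 = u^2 + 2as, solved for the acceleration a
u = 30.0   # initial velocity, m/s
v = 0.0    # final velocity, m/s (brought to rest)
s = 100.0  # stopping distance, m

a = (v**2 - u**2) / (2 * s)   # negative sign => retardation
print(a)  # -4.5
```

The retardation is 4.5 m/s², matching the answer in the thread.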
what's acceleration
The change in position of an object with respect to time
Mfizi
Acceleration is velocity all over time
Pamilerin
hi
Stephen
It's not It's the change of velocity relative to time
Laura
Velocity is the change of position relative to time
Laura
acceleration it is the rate of change in velocity with time
Stephen
acceleration is change in velocity per rate of time
Noara
what is ohm's law
Stephen
Ohm's law is related to resistance by which volatge is the multiplication of current and resistance ( U=RI)
Laura
acceleration is the rate of change. of displacement with time.
the rate of change of velocity is called acceleration
Asma
how i don understand
how do I access the Multiple Choice Questions? the button never works and the essay one doesn't either
How do you determine the magnitude of force
mass × acceleration OR Work done ÷ distance
Seema
|
|
|
# Steve Erquiaga (Holiday)
## Selected Discography
### Track List: Windham Hill Holiday Guitar Collection
I am a dj
truly awesome
an excellent rendition of my favorite Christmas song
I love it it reminded me of a good little snowy town
too bland
Yay? Yay!!!!! Yayayayay!!! ^_^ I love it!!!!!!!!!!
Swim champ 159 is so wrong that's hard to do you could not do it so I calling you to ever sing on a sasn
Awesome !!!!
juneliberack: I am enjoying this Christmas guitar work.
lois.is: nice, but a little churchy.
Love that every guitarist can put their own style into a song! Great rendition!!
testdowns: STEEEVE! I named a unicorn after him! Just the first name, but still!
july yes still AWESOME
Beautiful!
mdgold301: beautifully done, very peacefull.
rbc11spen: His guitar work is spectacular. My music teacher used to say that it's much easier to play a piece fast, that slow playing allows the listener to hear each note and every error. This man is a virtuoso. In my mind's eye I can see him using both hands to play top and bottom of the strings. Quite amazing.
relaxing
Have always loved this song... even more with the soulful guitar
Lovely acoustical piece reflecting the simplicity of the carol.
Oh, excellent indeed! I love the complex rhythms and descants.
swimchamp159: this guy sucks!!! it is sooo slow!!!! it is the worst christmas song ever!!!
dmjreed: Excellent!
|
|
• anonymous
1) Consider the rate data for the reaction given below: This reaction occurs under conditions of ultraviolet light for which atomic chlorine is stable. Determine a consistent rate law and evaluate the rate constant, k (with units!).
Chemistry
|
|
+0
# hello i need help please
0
468
5
x²+y²=25
x+7y=25
x=?
y=?
Guest Dec 2, 2014
#3
+20633
+8
x²+y²=25 x+7y=25 x=? y=?
$$\\\boxed{x^2+y^2=25} \small{\text{ is a circle }}\\ \boxed{x+7y=25} \small{\text{ is a line}}\\$$
$$\small{\text{The line cuts the circle in 2 points.}} \\ \small{\text{Point 1 is ( x=4, y=3) }}\\ \small{\text{Point 2 is ( x=-3, y=4) }}\\$$
heureka Dec 2, 2014
#1
+27227
+5
The first equation can be written as x² + y² = 5²
There is a well-known Pythagorean triple: 3² + 4² = 5²
so it is possible that x and y are 3 and 4.
If x = 3 and y = 4 then the second equation, x + 7y = 25 is not true. However, if x = 4 and y = 3 then it is true that x + 7y = 25. Therefore
x = 4
y = 3
.
If you didn't spot the Pythagorean triple, then, more generally, you would write the 2nd equation as x = 25 - 7y, substitute this into the first equation and solve the resulting quadratic equation in y.
.
Alan Dec 2, 2014
#2
+5
yes but thats what i wrote and i dont know how to solve it
x=25-7y
(25-7y)²+y²=25
625-350+49y²+y²=25
50y²=-250
y²=-5
Guest Dec 2, 2014
#4
+27227
+5
You have left out a "y" that multiplies the 350.
(25 - 7y)² + y² = 25
25² - 2*25*7*y + 7²y² + y² = 25
625 - 350y + 50y² = 25
50y² - 350y + 600 = 0
y² - 7y + 12 = 0
(y - 3)(y - 4) = 0
y = 3 or y = 4
so x = 25 - 7*3 = 4 when y = 3
and x = 25 - 7*4 = -3 when y = 4 (as Heureka illustrates).
.
Alan Dec 2, 2014
#5
+20633
+5
x=25-7y
(25-7y)²+y²=25 okay
$$\underbrace{(25-7y)^2}_{=25^2-2*25*7*y+49y^2}+y^2=25 \\\\ 25^2-2*25*7*y+49y^2+y^2=25 \\\\ 50y^2-50*7*y + 625 - 25 = 0 \\ \\ 50y^2-50*7*y + 600= 0 \quad | \quad : 50\\ \\ y^2-7*y + 12= 0 \\ \\ y_{1,2}= \frac{7\pm\sqrt{49-4*12} }{2} \\ \\ y_{1,2}= \frac{7\pm\sqrt{49-48} }{2} \\ \\ y_{1,2}= \frac{7\pm 1}{2} \\ \\ y_1 = \frac{7 +1}{2} = \frac{8}{2} = 4 \qquad x_1 = 25 - 7*4 = -3\\ \\ y_2 = \frac{7 -1}{2} = \frac{6}{2} = 3 \qquad x_2 = 25 - 7*3 = 4\\ \\$$
heureka Dec 2, 2014
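As a sanity check on the algebra in this thread, the following Python sketch substitutes x = 25 - 7y into the circle equation and solves the resulting quadratic:

```python
import math

# x + 7y = 25  =>  x = 25 - 7y; substitute into x^2 + y^2 = 25:
# (25 - 7y)^2 + y^2 = 25  =>  50y^2 - 350y + 600 = 0  =>  y^2 - 7y + 12 = 0
a, b, c = 1.0, -7.0, 12.0
disc = b**2 - 4*a*c                      # discriminant = 49 - 48 = 1
roots_y = sorted([(-b - math.sqrt(disc)) / (2*a),
                  (-b + math.sqrt(disc)) / (2*a)])    # y = 3 and y = 4
solutions = [(25 - 7*y, y) for y in roots_y]          # (4, 3) and (-3, 4)

# Both points must lie on the circle and on the line
for x, y in solutions:
    assert abs(x**2 + y**2 - 25) < 1e-9
    assert abs(x + 7*y - 25) < 1e-9
print(solutions)  # [(4.0, 3.0), (-3.0, 4.0)]
```

Both intersection points found above agree with the two answers in the thread.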
|
|
# Uniqueness of Hermitian inner product
Let $V$ be an irreducible representation of a finite group $G$. How does one show that, up to scalars, there is a unique Hermitian inner product on $V$ preserved by $G$? I know how to get such an inner product, but I have no idea about the uniqueness part. I think I have to use Schur's lemma in some way.
An inner product is the same as a map from $V \to \bar{V}$: $\langle -, - \rangle$ corresponds to $v \to \langle -, v \rangle$. A $G$-invariant inner product corresponds to a $G$-invariant map, i.e., an element of $Hom_G(V, \bar{V})$. What can you say about this space by Schur's lemma?
@K.Ghosh, try to work out how $G$ acts on $Hom(V, \bar{V})$, and what it means for a map to be $G$-invariant. – user27126 Jan 20 '13 at 7:47
|
|
The answer in no, because of the following result:
Theorem 1. Let $X$ be a non-ruled minimal surface. Then there exists a finite ramified covering $S \to X$ of degree $>1$, such that $S$ is minimal of general type with $K_S$ very ample, $\pi_1(S) \cong \pi_1(X)$ and $S$ is not birationally equivalent to $X$. We can moreover assume that $S$ has negative index, i.e. $K_S^2 - 8 \chi(\mathcal{O}_S) <0$.
So the fundamental group $\pi_1(X)$ alone does not determine the birational type of $X$, and in general not even its diffeomorphism type. When $X$ is the product of two curves, however, something more can be said, provided that one also knows the topological Euler number. More precisely, one proves the following
Theorem 2. Let $C_1$, $C_2$ be smooth curves of genus $g_1$, $g_2$, with $g_i \geq 2$, and let $X=C_1 \times C_2$. Then any surface $S$ such that $\pi_1(S) \cong \pi_1(X)$ and $e(S)=e(X)$ is isomorphic to a product of two curves of the same genera.
Theorems 1 and 2 were proven by F. Catanese in his paper Fibred surfaces, varieties isogenous to a product and related moduli spaces, which considers the more general situation $X=(C_1 \times C_2)/G$, where $G$ is a finite group acting freely on the product $C_1 \times C_2$.
|
|
# Uncertainty Relation
Quantum Uncertainty:
In quantum mechanics, the Heisenberg uncertainty principle, dictates the uncertainty product of position and momentum of a quantum mechanical system. The inequality arises from definitions of variance and the Cauchy-Schwarz inequality. The proof can be found on wikipedia.
Now, a quantum mechanical state has a specific position and momentum with uncertainty (model 1). A similar analogy exists in the quadratures of light (model 2).
Position and momentum operators (or in-phase and out-of-phase quadratures of light) can be represented as Hermitian operators $\hat{q}$ and $\hat{p}$ respectively.
These operators are canonically conjugate variables, which means they satisfy the following commutation relation
$[\hat{\textbf{q}},\hat{\textbf{p}}] = \hat{\textbf{q}}\hat{\textbf{p}} - \hat{\textbf{p}}\hat{\textbf{q}} = i\hbar$
Then the uncertainty relation for a given wavefunction $\left|\psi\right\rangle$ can be expressed as follows.
$D(\hat{p})D(\hat{q}) \ge \frac{\hbar^2}{4}$
where
$D_\psi(\hat{q}) = ||(\hat{\textbf{q}}-{x})\psi||^2 = \left \langle \psi|(\hat{\textbf{q}}-x)(\hat{\textbf{q}}-x)|\psi\right\rangle\\ D_\psi(\hat{p}) = ||(\hat{\textbf{p}}-{y})\psi||^2 = \left \langle \psi|(\hat{\textbf{p}}-{y})(\hat{\textbf{p}}-{y})|\psi\right\rangle$
and
${x} = \left \langle \psi |\hat{\textbf{q}}|\psi\right\rangle\\ {y} = \left \langle \psi |\hat{\textbf{p}}|\psi\right\rangle$
q represents position and p represents momentum, and D(q) and D(p) are their variances. In a phase-space picture the state can be drawn as a circle whose radius is related to the uncertainty product; the minimum uncertainty condition is met when the circle has the smallest radius.
Naturally occurring states are the number states.
The ground state (for the quantum harmonic oscillator) or the vacuum state (for quantized light) is the minimum uncertainty state, centered at the origin. As the energy level / number of photons in the system ($n$) increases, the associated uncertainty also increases.
The dependence of the uncertainty on the photon number/principal quantum number is
$D(\hat{q})D(\hat{p}) = \frac{\hbar}{2\omega}(1+2n)\times\frac{\hbar\omega}{2}(1+2n)$
The uncertainty increases with increase in photon/quantum number. Hence the ground/vacuum state is a minimum uncertainty state.
All minimum uncertainty states are called coherent states. A coherent state is hence, a ground state/vacuum state displaced by $(x,y)$. The solution $\left\langle q|\psi\right \rangle$ for the minimum uncertainty condition turns out to be a Gaussian.
The transformation from a ground/vacuum state to a coherent state is dictated by the Weyl operator.
Conceptually, the uncertainty in position(or in-phase quadrature) and momentum(or out-of-phase quadrature) are related in a way that, if one increases, the other must decrease, to conserve the product. So for a particle whose position is accurately known, the uncertainty in momentum of the particle is infinitely high and vice versa.
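To illustrate the minimum-uncertainty property numerically, here is a small Python sketch (an illustration, not from the original text; it sets ħ = 1 and uses a unit-mass, unit-frequency oscillator) showing that the Gaussian ground state gives D(q)D(p) = 1/4:

```python
import numpy as np

hbar = 1.0
q = np.linspace(-10, 10, 4001)
dq = q[1] - q[0]

# Ground-state wavefunction of a unit-mass, unit-frequency harmonic oscillator
psi = np.exp(-q**2 / 2.0)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dq)   # normalize on the grid

# Position variance D(q) = <q^2> - <q>^2
mean_q = np.sum(q * np.abs(psi)**2) * dq
var_q = np.sum((q - mean_q)**2 * np.abs(psi)**2) * dq

# Momentum variance via the derivative: <p^2> = hbar^2 * int |dpsi/dq|^2 dq
# (valid here because psi is real, so <p> = 0)
dpsi = np.gradient(psi, dq)
var_p = hbar**2 * np.sum(np.abs(dpsi)**2) * dq

product = var_q * var_p
print(round(product, 3))  # ≈ 0.25, the minimum-uncertainty value hbar^2/4
```

Replacing the ground state with an excited number state would give a larger product, consistent with the (1 + 2n) scaling above.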
Fourier Uncertainty:
In signal processing, a signal can be represented in the time domain or the frequency domain. A signal that is 'spread out' and unbounded in the time domain is 'squeezed' and bounded in the frequency domain, and vice versa. A function in the time domain and its Fourier representation are also conjugate variables.
Fractional Fourier Domains by Ozaktas and Aytur offers a good fundamental understanding of the time-frequency representation, and of the transforms between the various bases that lie in the time-frequency plane.
For a signal $\psi$ and a coordinate multiplication operator $X_a$ defined in domain $x_a$ parametrized by $a$,
$(X_a\psi)(x_a) = x_a\psi(x_a)$
$x_0$ is the time domain and $x_1$ is the frequency domain. $x_a$ and $x_a'$ are intermediate domains.
It can be proved that $X_a$ and $X_a'$ are related by a similar commutation and uncertainty equation
$[X_a,X_a'] = \frac{i}{2\pi}\sin(\phi'-\phi)$
$D(x_a)D(x_a') \ge \frac{1}{16\pi^2}\sin^2(\phi'-\phi)$
The transformation from one domain to the other is performed using Fractional Fourier Transform.
Chirp and Wavelet transforms are special forms of Fractional Fourier Transforms (ref).
|
|
Minimal regularity solutions of semilinear generalized Tricomi equations
Research paper by Zhuoping Ruan, Ingo Witt, Huicheng Yin
Indexed on: 05 Aug '16. Published on: 05 Aug '16. Published in: Mathematics - Analysis of PDEs
Abstract
We prove the local existence and uniqueness of minimal regularity solutions $u$ of the semilinear generalized Tricomi equation $\partial_t^2 u-t^m \Delta u =F(u)$ with initial data $(u(0,\cdot), \partial_t u(0,\cdot)) \in \dot{H^{\gamma}}(\mathbb R^n) \times \dot{H}^{\gamma-\frac2{m+2}}(\mathbb R^n)$ under the assumption that $|F(u)|\lesssim |u|^\kappa$ and $|F'(u)| \lesssim |u|^{\kappa -1}$ for some $\kappa>1$. Our results improve previous results of M. Beals [2] and of ourselves [15-17]. We establish Strichartz-type estimates for the linear generalized Tricomi operator $\partial_t^2 -t^m \Delta$ from which the semilinear results are derived.
|
|
# Computing the partition function from a Metropolis Monte Carlo sample
I must be missing something. I could not find an answer in similar posts.
Suppose I have an energy $$E(x)$$ and have sampled many points, $$\{x_1, x_2, ..., x_N\}$$, through a Metropolis Monte Carlo simulation. If the space is of high enough dimension that numerically integrating over it is impossible, what are my options for estimating the partition function (or free energy)?
P.S. I don't really care about the accuracy of the estimate. This question is more for didactic reasons than for practical ones.
• Sampled according to which distribution? -- Note that from a normalized sample of the Gibbs distribution, you cannot reconstruct the normalization (unless you use the sample to estimate the entropy and compute the free energy). Jul 8 at 9:40
• What is each individual point $x_i$? Jul 8 at 9:55
• Correct, I was assuming the canonical ensemble. And $x \in \mathbb{R}^N$ where $N$ is the dimension of the space. $x_i$ is a single point in $\mathbb{R}^N$ and is the $i$th point in the sample. Jul 9 at 10:19
The aim of the Metropolis Monte Carlo method is to evaluate the main macroscopic thermodynamic quantities of a system. These quantities are evaluated through ensemble averages, and the idea of the Metropolis algorithm is to replace ensemble averages with time averages over a sufficiently large number $$M$$ of steps. The partition function $$Q$$ is not a quantity of interest for this method: it is not required for the calculation of the acceptance probability, and it is not involved in the calculation of time averages. Moreover, $$Q$$ is not a function of the coordinates; it depends only on the parameters $$N, V, T$$.

I see from your question that you also refer to the free energy $$A=U-TS$$. The evaluation of the energy average is quite straightforward: $$U=\frac{1}{M}\sum_{i=1}^{M}U(\vec{R}_{i})$$ where $$\vec{R}_{i}=(\vec{r}_1...\vec{r}_N)$$ is the set of all the coordinates at time $$i$$.

On the other hand, we need to be very careful with the entropy calculation. First of all, we cannot calculate the "real" thermodynamic entropy, only a "coarse-grained" function which does not necessarily approximate the fine-grained entropy. The entropy and the free energy do not admit an explicit expression in time while the system is evolving, but we can compute the differences of these quantities between equilibrium states. Here is a recipe for evaluating $$S$$: divide the phase space into $$R$$ cells (all of the same size) and calculate the "weights" $$w_j$$, i.e., the number of particles in the $$j$$-th cell divided by the total number of particles $$N$$. The coarse-grained entropy is then $$S=-\sum_{j=1}^{R}w_j\ln(w_j)$$

P.S.: when we run a computer simulation starting from anywhere in the state space, we expect the distribution to become approximately stationary after a large number of steps, so one can discard the non-equilibrium values of the quantities in order to get a more precise evaluation of the averages.
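To make the recipe concrete, here is a minimal sketch assuming a toy one-dimensional energy E(x) = x²/2 at β = 1 (the energy, cell boundaries, and step sizes are all arbitrary choices for illustration): it draws a Metropolis sample, discards the non-equilibrium burn-in, bins the sample into R equal cells, and evaluates the coarse-grained entropy.

```python
import math
import random

random.seed(0)

def energy(x):
    return 0.5 * x * x          # toy 1-D energy; stands in for E(x)

# --- Metropolis sampling of p(x) ∝ exp(-beta * E(x)) ---
beta, n_steps, step = 1.0, 50_000, 1.0
x, samples = 0.0, []
for _ in range(n_steps):
    x_new = x + random.uniform(-step, step)
    # standard Metropolis acceptance rule; note Q never appears
    if random.random() < math.exp(-beta * (energy(x_new) - energy(x))):
        x = x_new
    samples.append(x)

samples = samples[5_000:]       # discard non-equilibrium burn-in values

# --- coarse-grained entropy over R equal cells ---
R, lo, hi = 40, -5.0, 5.0
width = (hi - lo) / R
counts = [0] * R
for s in samples:
    j = min(R - 1, max(0, int((s - lo) / width)))
    counts[j] += 1
w = [c / len(samples) for c in counts]              # the "weights" w_j
S = -sum(wj * math.log(wj) for wj in w if wj > 0)   # coarse-grained entropy
```

Note that S here is in units of k_B and depends on the chosen cell size, which is exactly the coarse-graining caveat above.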
• But your expression for the partition function does not seem correct now either. I have the sense that if my sample was uniformly distributed in $\mathbb{R}^N$ your method might work. But the sample is distributed $p(x)=e^{-\beta E(x)}$ since it comes from a Metropolis Monte Carlo simulation (I'm assuming the simulation is long enough to converge to equilibrium). Jul 9 at 13:12
|
|
## Introduction
Substance use disorders (SUD) and addiction represent a global public health problem with substantial socioeconomic implications1,2. In 2010, 147.5 million cases of alcohol and drug abuse were reported (Whiteford et al., 2015), and SUD prevalence is expected to increase over time. Genetic factors have been implicated in SUD etiology, with genes involved in the regulation of several neurobiological systems (including dopaminergic and glutamatergic) found to be important (for a review see Prom-Wormley et al., 2017)3. However, limitations intrinsic to most genetic epidemiological studies support the search for additional risk genes.
## Subjects and methods
### Subjects
We used independent populations from disparate regions of the world (n = 2698) ascertained through patients affected with ADHD co-morbid with disruptive behaviors (Paisa, Spanish and MTA samples) or SUD (Spanish and Kentucky samples).
### Paisa sample
This population isolate is unique in that it was used to identify ADHD susceptibility genes by linkage and association strategies. Detailed clinical and demographic information on this sample has been published elsewhere23,25,29. The sample consists of 1176 people (adults, adolescents, and children), mean age 28 ± 17 years, ascertained from 18 extended multigenerational and 136 nuclear Paisa families inhabiting the Medellin metropolitan area in the State of Antioquia, Colombia. Initial coded pedigrees were obtained through a fixed sampling scheme from a parent or grandparent of an index proband after having collected written informed consent from all subjects or their parent/guardian, as approved by the University of Antioquia and the NIH Ethics Committees, and in accordance with the Helsinki Declaration. Patients were recruited under NHGRI protocol 00-HG-0058 (NCT00046059).
Exclusion criteria for ADHD participants were IQ < 80 or any autistic or psychotic disorder. Parents underwent a full psychiatric structured interview regarding their offspring (Diagnostic Interview for Children and Adolescents—Revised—Parents version (DICA-IV-P), Spanish version translated with permission from Dr. Wendy Reich, Washington University, St. Louis). All adult participants were assessed using the Composite International Diagnostic Interview (CIDI), as well as the Disruptive Behavior Disorders module from the DICA-IV-P modified for retrospective use. The interview was conducted by a “blind” rater (either a psychologist, a neuropsychologist, or a psychiatrist) at the Neurosciences Clinic of the University of Antioquia, or during home visits. ADHD status was defined by the best estimate method. Specific information regarding clinical diagnoses and co-morbid disruptive disorders, affective disorders, anxiety, and substance use has been published elsewhere3.
From the 1176 individuals in this cohort, only founder members were included in analyses (n = 472). This was done to avoid kinship relatedness bias and to exclude children and adolescents, as they may have not been exposed to substances of abuse yet. Of these 472 individuals, 17% (n = 79) fulfilled criteria for ADHD, 17% (n = 78) for ODD, 18% (n = 84) for CD, 22% nicotine dependence (n = 102), 27% alcohol dependence (n = 124), 3% drug dependence (n = 12), 37% social/simple phobia (n = 156), 13% any other anxiety disorder (n = 58), and 25% major depressive disorder (n = 117) (Table 1).
### Spanish sample
The SUD sample consisted of 494 adults (mean age 37 ± 9 years and 76% males, n = 376) recruited and evaluated at the Addiction and Dual Diagnosis Unit of the Psychiatry Department at the Hospital Universitari Vall d’Hebron with the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I). All patients fulfilled DSM-IV criteria for drug dependence beyond nicotine dependence. None were evaluated for ADHD.
The control sample consisted of 483 blood donors (mean age 42 ± 20 years, 74% males) in which DSM-IV lifetime ADHD symptomatology was excluded under the following criteria: (1) not having been diagnosed with ADHD and (2) answering negatively to the lifetime presence of the following DSM-IV ADHD symptoms: (a) often has trouble keeping attention on tasks, (b) often loses things needed for tasks, (c) often fidgets with hands or feet or squirms in seat, and (d) often gets up from seat when remaining in seat is expected. Individuals affected with SUD were excluded from this sample. None of them had self-administered drugs intravenously. It is important to mention that the exposure criterion was not applied; therefore, this set cannot be classified as “pure” controls.
All patients and controls were Spanish of Caucasian descent. This study was approved by the ethics committee of the Hospital Universitari Vall d’Hebron and informed consent was obtained from all subjects in accordance with the Helsinki Declaration.
### MTA sample
DSM-IV abuse or dependence was based on a positive parent or child report with the Diagnostic Interview Schedule for Children version 2.3/3.0 (DISC)46 at the 6- and 8-year follow-up assessments. The DISC includes both lifetime and past year diagnoses. The Diagnostic Interview Schedule-IV47 was used at the 8-year follow-up for participants aged 18 and older (n = 111). SUD was defined as the lifetime presence of any abuse or dependence (excluding tobacco dependence, due to differences in the meaning of abuse/dependence for tobacco versus other substances).
Additional analyses explored SUD for alcohol, tobacco, and cannabis/other drugs (recreational or misused prescription medications) separately10. All patients in this study provided informed written consent as approved by the NIH Ethics Committee.
### Kentucky sample
A sample of 560 inpatients and outpatients with severe SUD from Central Kentucky psychiatric facilities was collected during a pharmacogenetics investigation48. Patient interviews and medical record information (including urine drug screens and substance abuse counselor notes) were used by the research nurse to assess the Clinician Rating of Alcohol Use Disorder (CRAUD) and of Drug Use Disorder (CRDUD)49,50, which provide a score from 1 = abstinence (not used in the assessed period) to 5 = severe dependence. Scores of 3 and higher are pathological and were considered positive in our analyses. All drugs were combined into one rating48. Descriptions of the training provided to research nurses to assess the CRAUD and CRDUD were published elsewhere48,51.
DNA was available from 533 of 560 study subjects. Of the 533 subjects with available DNA, 53% (n = 285) were male, 82% (n = 436) were Caucasian, 16% (n = 87) were African American, and 2% (n = 10) were from other ethnicities. Additional clinical information for this sample has been described elsewhere48,51 and included: (1) clinical diagnosis obtained from medical records, (2) prior psychiatric history, (3) history of daily smoking, (4) reviews of current and psychiatric medication use, and (5) body mass index (Supplemental Table 2). All participants in the Kentucky study provided informed written consent as approved by the University of Kentucky IRB.
### Genotyping
DNA was extracted from whole blood (Paisa, Spanish and MTA sample) or buccal swabs (Kentucky sample) using standard protocols. The Paisa sample was genotyped using the service provided by Illumina (San Diego, CA). The Spanish, MTA, and Kentucky samples were genotyped for select variants using pre-designed TaqMan® SNP genotyping assays (Thermo Fisher Scientific, Waltham, MA). Allelic discrimination real-time PCR reactions were performed in a 384-well plate format for each individual sample according to the manufacturer’s instructions. Briefly, 20 ng of genomic DNA were mixed with 2.5 μL of 2X TaqMan Universal PCR Master Mix and 0.25 μL of 20X SNP Genotyping Assay in a total volume of 5 μL per reaction. Assays were run in an ABI 7900HT Fast Real-Time PCR System (Thermo Fisher Scientific). Allele calling was made by end-point fluorescent signal analysis using the ABI’s SDS2.3 software. In addition, we had previously collected exome genotype data from the MTA sample26 using the Infinium® HumanExome-12 v1.2 BeadChip kit (Illumina), which covers putative functional exonic variants selected from over 12,000 individual exome and whole-genome sequences. Processed and raw intensity signals for the array data can be accessed at GEO (GSE112652). SNP markers harbored at the ADGRL3 gene were filtered in from this dataset and added to those genotyped using TaqMan® assays.
### Dataset quality control and preparation for analysis
Genotype data were imported into Golden Helix® SVS 8.3.1 (Golden Helix, Bozeman, MT) for quality control analysis. Markers with a minor allele frequency (MAF) < 0.01 (rare variants), significant deviation from Hardy–Weinberg equilibrium (P-values < 0.0001), or a genotyping success rate < 90% were excluded. For the Paisa and Spanish samples, a subset of variants in the ADGRL3 minimal critical region (MCR), 5′UTR and 3′UTR were selected based on a previous ADHD association study30. Because the Paisa sample is a family-based cohort and recursive-partitioning analysis does not correct for kinship relatedness, only founder members from the pedigrees were included in the analyses. For the MTA sample, a total of 8568 markers with a MAF ≥ 1.0% were retained from the 244,414 markers genotyped with the exome chip after linkage disequilibrium (LD) pruning, and variants within ADGRL3 were selected for analyses. For the Kentucky sample, only four ADGRL3 variants were selected for analyses after LD pruning of a list of markers located within the ADGRL3 5′UTR and MCR regions that was available to us. Variants rs7659636 and rs5010235 had been imputed from ADHD genome-wide association data funded through the Genetics Analysis Information Network (GAIN) initiative, a public-private partnership between the NIH and the private sector (https://www.genome.gov/19518664/genetic-association-information-network-gain/#al-4). ADGRL3 variants used in this study for each cohort are presented in Supplemental Table 3.
### Advanced recursive-partitioning (tree-based) approach (ARPA)
Association studies of ADGRL3 variants with ADHD, ODD, CD, response to stimulant treatment and severity outcome have been published elsewhere for the Paisa and Spanish populations24,29,32,52. We used ARPA to build a predictive framework to forecast the behavioral outcome of children with ADHD, suitable for translational applications. Our goal was to test the hypothesis that ADGRL3 variants predisposing to ADHD also increase the risk of co-morbid disruptive symptoms, including SUD.
ARPA is a tree-based method widely used in predictive analyses because it accounts for non-linear and interaction effects, offers fast solutions that reveal hidden complex substructures, and provides unbiased analyses of high-dimensional, seemingly unrelated data53. In a visionary manuscript, D.C. Rao suggested that recursive-partitioning techniques could be useful for genetic dissection of complex traits54. ARPA accounts for the effect of hidden interactions better than alternative methods, and is independent of the type of data (i.e., categorical, continuous, ordinal, etc.) and of the type of data distribution (i.e., fitting or not fitting normality)54. Furthermore, results supplied by tree-based analytics are easy to interpret visually and logically53. Therefore, to generate the most comprehensive and parsimonious classificatory model to predict the susceptibility to disruptive behaviors, we applied ARPA using a set of different modules implemented in the Salford Predictive Modeler® (SPM) software, namely, Classification and Regression Trees (CART), Random Forest, and TreeNet (http://www.salford-systems.com). One important advantage of SPM when compared to other available data mining software is its ability to use raw data with sparse or empty cells, a problem frequently encountered in genetic data.
Briefly, CART is a non-parametric approach whereby a series of recursive subdivisions separate the data by dichotomization55. The aim is to identify, at each partition step, the best predictive variable and its best corresponding splitting value while optimizing a splitting statistical criterion, so that the dataset can be successfully split into increasingly homogeneous subgroups55. We used a battery of different statistical criteria as splitting rules (e.g., GINI Index, Entropy, and Twoing) to determine the splitting rule, maximally decreasing the relative cost of the tree while increasing the prediction accuracy of target variable categories55. The best split at each dichotomous node was chosen by either a measure of between-node dissimilarity or iterative hypothesis testing of all possible splits to find the most homogeneous split (lowest impurity). Similarly, we used a wide range of empirical probabilities (priors) to model numerous scenarios recreating the distribution of the targeted variable categories in the population55. Following this iterative process, each terminal node was assigned to a class outcome. To avoid finishing with an over-fitted CART predictive model (a common problem in CART analyses), and to ensure that the final splits were well substantiated, we applied tree pruning. During the procedure, predictor variables that were close competitors (surrogate predictors with comparable overall classification error to the optimal predictors) were pruned to eliminate redundant commonalities among variables, so the most parsimonious tree would have the lowest misclassification rate for an individual not included in the original data55.
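The GINI-based dichotomization step described above can be sketched in a few lines of pure Python. This is a toy illustration only: the features, thresholds, and labels are invented, and it is not the SPM implementation.

```python
def gini(labels):
    """GINI impurity of a binary-labeled node: 2p(1-p)."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2.0 * p * (1.0 - p)

def best_split(rows, labels):
    """Exhaustively try every (feature, threshold) pair and return the one
    minimizing the weighted GINI impurity of the two child nodes (the
    dichotomization step that CART applies recursively)."""
    n = len(rows)
    best = (None, None, gini(labels))   # (feature, threshold, impurity)
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            left = [labels[i] for i in range(n) if rows[i][f] <= t]
            right = [labels[i] for i in range(n) if rows[i][f] > t]
            impurity = (len(left) * gini(left) + len(right) * gini(right)) / n
            if impurity < best[2]:
                best = (f, t, impurity)
    return best

# Four hypothetical subjects: (age, genotype coded 0/1) with a binary SUD label.
rows = [(25, 0), (30, 0), (40, 1), (50, 1)]
labels = [0, 0, 1, 1]
feature, threshold, impurity = best_split(rows, labels)
```

Here age <= 30 separates the toy labels perfectly, so the returned impurity is zero; on real data the split with the lowest weighted impurity becomes the node and the procedure recurses into each child.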
Additionally, we applied the Random Forest (RF) methodology using a bagging strategy to identify the most important set of variables predicting disruptive behaviors56. The RF strategy differs from CART in that it uses a limited number of variables to derive each node while creating hundreds to thousands of trees. This strategy has proved to be immune to the overfitting generated by CART56. In RF, variables that appeared repeatedly as predictors in the trees were identified. The misclassification rate was recorded for each approach.
The TreeNet strategy was used as a complement to the CART and RF strategies because it reaches a level of accuracy that is usually not attainable by single models such as CART or by ensembles such as bagging (i.e., RF)57. The TreeNet algorithm generates thousands of small decision trees built in a sequential error-correcting process converging on an accurate model57. The number of variables considered to derive each node with RF was $$\sqrt n$$, where n is the number of independent variables (either 3 or 4).
To derive honest assessments of the derived models and have a better view of their performance on future unseen data, we applied a cross-validation strategy where both training with all the data and then indirectly testing with all the data were performed. To do so, we randomly divided the data into separate partitions (folds) of different sizes. This strategy allowed us to review the stability of results across multiple replications55. We used a 10-fold cross-validation as implemented in the SPM software.
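The 10-fold cross-validation loop described above can be sketched in plain Python. The classifier here is a trivial majority-class stand-in (not CART), and the data are synthetic; the point is only the fold mechanics and the accuracy estimate.

```python
import random

random.seed(1)

def ten_fold_cv(X, y, fit, predict, k=10):
    """Shuffle the sample indices, split them into k folds, train on k-1
    folds, and score on the held-out fold; return the overall fraction of
    correctly classified individuals."""
    idx = list(range(len(X)))
    random.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    correct = 0
    for fold in folds:
        train = [i for i in idx if i not in fold]
        model = fit([X[i] for i in train], [y[i] for i in train])
        correct += sum(predict(model, X[i]) == y[i] for i in fold)
    return correct / len(X)

# Stand-in classifier: always predicts the majority class of its training fold.
fit = lambda X, y: round(sum(y) / len(y))
predict = lambda model, x: model

X = [[i] for i in range(100)]
y = [0] * 70 + [1] * 30
accuracy = ten_fold_cv(X, y, fit, predict)
```

With a 70/30 class split, every training fold still has class 0 in the majority, so this stand-in scores exactly 0.7; a real model's cross-validated accuracy would vary fold to fold.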
A fixed-effects meta-analysis of the overall fraction of correctly classified individuals (accuracy) using the derived models from each of the four samples was applied to derive a general perspective of the SUD predictive capacity of this demographic-clinical-genetic framework.
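A fixed-effects (inverse-variance) pooling of per-cohort accuracies can be sketched as follows. The accuracies and sample sizes below are hypothetical placeholders, not the paper's values, and the normal approximation var = p(1-p)/n is an assumption of this sketch.

```python
import math

def fixed_effects_meta(accs, ns):
    """Inverse-variance (fixed-effects) pooling of proportions, weighting
    each cohort by 1/var with var approximated as p(1-p)/n; returns the
    pooled estimate and its 95% confidence interval."""
    weights = [n / (p * (1.0 - p)) for p, n in zip(accs, ns)]
    pooled = sum(w * p for w, p in zip(weights, accs)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-cohort accuracies and sample sizes for illustration.
accs = [0.82, 0.83, 0.67, 0.71]
ns = [472, 977, 500, 533]
pooled, ci = fixed_effects_meta(accs, ns)
```

The pooled estimate always lies between the smallest and largest cohort accuracies, pulled toward the larger, more precise cohorts.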
## Results
A series of predictive models were built on our data using combinations of the following criteria: (i) the rules of splitting (GINI index, twoing, order twoing, and entropy); (ii) the priors; (iii) the size of the terminal nodes; (iv) the costs; (v) the depth of branching; and (vi) the size of the folds for cross-validation, to maximize the accuracy of the derived classification tree while considering class assignment, tree pruning, testing and cross-validation.
A parsimonious and informative reconstructed predictive tree derived from CART for the Paisa sample revealed demographic (age), clinical (CD), and genetic variables (rs5010235 and rs4860437) (Fig. 1a). The importance of these variables was corroborated, and their potential overfitting ruled out, by the TreeNet analyses that revealed a set of predictors for SUD containing those derived by CART (Fig. 1b). This predictive model displays good sensitivity and specificity as shown by areas under the receiver-operating characteristic (ROC) curve (0.954 and 0.87 for the learning and the test data, respectively) during TreeNet cross-validation using folding (Fig. 1c). The proportions of misclassification for SUD cases in the cross-validation experiment for the learning and testing data were 0.124 and 0.177, respectively (Fig. 1d).
In the case of the Spanish sample, a parsimonious and informative tree was reconstructed with CART revealing demographic (sex), clinical (CD, ODD, depression, and ADHD), and genetic variables (rs4860437 and rs1868790) (Fig. 2a). The TreeNet analysis revealed a set of predictors for SUD containing those derived by CART (Fig. 2b). This predictive model displayed good sensitivity and specificity as shown by areas under the ROC curve (AUC) of 0.911 and 0.897 for learning and testing samples, respectively, during TreeNet cross-validation using folding (Fig. 2c). The proportions of misclassification for SUD cases obtained by TreeNet analysis for learning and testing data were 0.151 and 0.175, respectively (Fig. 2d).
As in the previous cohorts, for the MTA sample we derived a parsimonious and informative predictive tree with CART depicting demographic (site of ascertainment) and genetic variables (rs2172802, rs61747658, rs12509110, and rs6856328) (Fig. 3a). The TreeNet analyses revealed a set of predictors for SUD containing those derived by CART (Fig. 3b). This predictive model displays good sensitivity and specificity as shown by AUCs of 0.808 and 0.643 for the learning and testing samples, respectively, during TreeNet cross-validation using folding (Fig. 3c). The proportions of misclassification for SUD cases obtained by TreeNet analysis for learning and testing data were 0.314 and 0.358, respectively (Fig. 3d).
Finally, for the Kentucky sample, we derived a parsimonious and informative predictive tree with CART involving demographic (sex), clinical (high body mass index (HBMI) and schizophrenia diagnosis), and genetic variables (rs4860437 and rs7659636) (Fig. 4a). The TreeNet analyses revealed a set of predictors for SUD containing those derived by CART (Fig. 4b). This predictive model displays good sensitivity and specificity as shown by AUCs of 0.811 and 0.744 for the learning and testing samples, respectively, during TreeNet cross-validation using folding (Fig. 4c). The proportions of misclassification for SUD cases obtained by TreeNet analysis for learning and testing data were 0.285 and 0.252, respectively (Fig. 4d). The results from the RF analysis were consistent with those produced by TreeNet cross-validation using folding.
A fixed-effects meta-analysis for overall accuracy returned a value of 0.727 (95% CI = 0.710–0.744) (Fig. 5), suggesting potential clinical utility of these predictive values. Overall, ADGRL3 marker rs4860437 was the most important variant predicting susceptibility to SUD, a commonality suggesting that these networks may be accurate in predicting the development of SUD based on ADGRL3 genotypes.
We conducted independent analyses for alcohol or nicotine dependence and compared these results with those of our composite SUD phenotype, as defined by the disjunctive presence of substance use phenotypes and explained by likely common neuropathophysiological mechanisms. In general, across cohorts, we found significant alcohol and nicotine risk variants, some of which have reasonably high odds ratios (OR). For instance, in the Spain sample, marker rs2271339 conferred significant risk of nicotine use: the heterozygous A/G genotype confers a 43% increased risk of being diagnosed with nicotine use (OR = 1.43, 95% CI = 1.12–1.82). In the same vein, we found in the Paisa sample that the heterozygous A/T genotype for rs1456862 confers an 83% increased risk of nicotine use relative to the A/A genotype (corrected OR = 1.84, 95% CI = 1.03–3.38). Regarding alcohol use, we found in the Paisas that the heterozygous C/T genotype for rs2159140 confers susceptibility, whereas the C/C genotype does not (corrected OR = 1.64, 95% CI = 1.01–2.72). Supplemental Fig. 1 shows the ROC curves of nicotine and alcohol use prediction in the Paisa sample. Note that the AUC is greater than 0.7 in both cases, which suggests strong performance of markers rs1456862 and rs2159140 in predicting nicotine and alcohol use, respectively.
To determine the significance of improvement of prediction when genetic markers are introduced in the ARPA-based predictive model for SUD, we compared the performance measures (i.e., sensitivity, specificity, classification rate, and lift) across all cohorts under two disjunctive scenarios: inclusion of genetic markers or not. We found that including genetic markers improved the performance measures of the resulting ARPA-based predictive model of SUD, regardless of cohort (Supplemental Fig. 2 and Supplemental Table 1). For instance, the AUC for the Spain sample was 81.6% (95% CI = 79.8–83.4) when genetic information was included, and 77.5% (95% CI = 75.9–79.1) when it was excluded. A bootstrap-based test with 10,000 replicates revealed that the former AUC was statistically greater than the latter (P < 0.0001, Supplemental Table 1). Similar results were obtained for the Paisa sample: the AUC was 90% (95% CI = 86.6–93.0) when genetic information was included versus 78.8% (95% CI = 75.8–81.7) when it was not (P < 0.0001, Supplemental Table 1). Improvements were also observed in the correct classification rate for the Spanish and Paisa samples, the sensitivity values in all samples, the specificity in the Spanish and Paisa samples, and the lift in the Paisa sample (Supplemental Table 1). Similar results were observed for the MTA and Kentucky samples, where including genetic information in the predictive model for SUD drastically improved these performance measures (Supplemental Table 1).
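A bootstrap comparison of two models of this kind can be sketched as follows. This is a simplified stand-in that compares accuracies rather than AUCs, on synthetic per-subject correctness indicators, since the per-subject predictions behind the paper's test are not available.

```python
import random

random.seed(2)

def bootstrap_pvalue(correct_with, correct_without, reps=2000):
    """Paired bootstrap: resample subjects with replacement and count how
    often the model WITH genetic markers fails to beat the one WITHOUT.
    The returned fraction is a one-sided bootstrap p-value."""
    n = len(correct_with)
    worse = 0
    for _ in range(reps):
        idx = [random.randrange(n) for _ in range(n)]
        acc_with = sum(correct_with[i] for i in idx) / n
        acc_without = sum(correct_without[i] for i in idx) / n
        if acc_with <= acc_without:
            worse += 1
    return worse / reps

# Synthetic per-subject correctness indicators (1 = classified correctly).
with_genetics = [1] * 180 + [0] * 20      # 90% accurate
without_genetics = [1] * 140 + [0] * 60   # 70% accurate
p = bootstrap_pvalue(with_genetics, without_genetics)
```

Because both accuracies are recomputed on the same resampled subjects, the comparison is paired, which is what makes the test sensitive to a consistent per-subject improvement.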
## Discussion
SUD genetic epidemiological studies across multiple substances have been plagued with inconsistency in the replication of genetic association results. This may be due to reasons such as: (i) small effect size of variants expected to influence the SUD phenotype, as with any complex disease;58 (ii) insufficient power to detect significant associations due to small sample size;59 (iii) phenotypic heterogeneity of SUD across samples that may reflect different disease stages or multiple subtypes (i.e., single-drug versus poly-drug dependence/use); (iv) genetic heterogeneity arising from distinct risk genes sets; (v) ethnicity inconsistencies between discovery and replication samples;60,61 and (vi) comorbidity with other psychiatric conditions (e.g., ADHD) with shared genetic and environmental architecture62,63. Consequently, additional studies are required to identify new SUD candidate genes and to help dissect genetic contributions in the context of complex interactions with co-morbid conditions.
In this study, we present a demographic, clinical and genetic framework generated using ARPA that is able to predict the risk of developing SUD. Interestingly, marker rs4860437 showed a differential splitting pattern in the Paisa, Spain, and Kentucky cohorts. For instance, in Fig. 1a, rs4860437 splits into (G/G, G/T) and T/T; in Fig. 2a, the same variable splits into (G/G, T/T) and G/T; and in Fig. 4a, it splits into (G/T, T/T) and G/G. The most parsimonious and plausible explanation of this splitting pattern is the presence of genomic variability surrounding this proxy marker, reflecting ancestral composition. Future studies of genomic regions surrounding rs4860437 might reveal a cryptic mechanism. It is particularly compelling that ADGRL3 marker rs4860437, which is a major predictor variable component in the trees for SUD, is in complete LD with ADHD susceptibility markers rs6551665 and rs1947274 in Caucasians28,30,52, suggesting that the phenotype underpinning SUD is under the pleiotropic effect of ADGRL3 variants. Unfortunately, rs4860437 was not included in the exome chip used to genotype the MTA sample and, therefore, could not be included in the analyses for this sample. Given the limited overlap of markers across datasets and possible stratification differences among study populations, a gene- rather than a marker-level approach has been advocated64.
Adopting such a perspective, our results suggest that genetic variants harbored in the ADGRL3 locus confer susceptibility to SUD in populations from disparate regions of the world. These populations are from three different countries and involve different investigators, diverse inclusion criteria, and different clinical assessments, which suggests that our results may replicate in other settings and are likely to be clinically relevant. Of particular interest is the generalization of our findings to a longitudinal study (the MTA sample), where adding genetic information to baseline data predicted the development of SUD at later ages, as determined from information gathered over a period of more than 10 years. Additionally, our results generalized to a sample of patients with severe SUD from Kentucky (U.S.) that were not ascertained on the basis of ADHD diagnosis.
The first genome-wide significant ADHD risk loci were published recently65. Marker rs4860437 is not represented in this dataset; however, that study was not aimed at identifying loci shared between ADHD and SUD. In any case, while genome-wide association studies are a useful tool for discovering novel risk variants—as they involve a hypothesis-free interrogation of the entire genome—the lack of genetic association may be a reflection of the polygenic, multifactorial nature of ADHD, with both common and rare variants likely contributing small effects to its etiology66,67,68. In addition, an important factor may be the genetic heterogeneity of ADHD subtypes, which may have different underlying genetic mechanisms. Therefore, genome-wide significance may identify loci with larger genetic effects, while others with smaller effects remain undetected for a given sample size.
|
|
The primary tool for inspecting Linux disk performance is iostat. The output includes many important statistics, but they’re difficult for beginners to understand. This article explains what they mean and how they’re calculated.
I usually run iostat like this:
iostat -dx 5
This makes iostat print an extended disk device report every 5 seconds forever until you cancel it. The first report will be over the time interval since the system was booted; subsequent reports will be for just that 5-second interval.
The input data comes from /proc/diskstats, which contains a number of fields that, when interpreted correctly, reveal the inner workings of disk (block) devices.
The order and location of fields in /proc/diskstats varies between kernels, but since 2.6, there’s at least the following for both reads and writes:
• the number of operations completed
• the number of operations merged because they were adjacent
• the number of sectors read/written
• the number of milliseconds spent reading/writing
There’s also the following fields, which are not available separately for reads and writes:
• the number of operations in progress as of the instant of reading /proc/diskstats
• the total number of milliseconds during which I/Os were in progress
• the weighted number of milliseconds spent doing I/Os
With the exception of operations in progress, all of the fields are cumulative counters that increase over time. The last field, the weighted number of milliseconds spent doing I/O operations, is special because it includes operations currently in flight: it is basically the sum of the time spent on every operation, including those not yet completed:
Field 11 – weighted # of milliseconds spent doing I/Os This field is incremented at each I/O start, I/O completion, I/O merge, or read of these stats by the number of I/Os in progress (field 9) times the number of milliseconds spent doing I/O since the last update of this field. This can provide an easy measure of both I/O completion time and the backlog that may be accumulating.
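For reference, here is a minimal sketch of parsing these fields. The dictionary keys are my own labels, and the field order assumed is the post-2.6 layout listed above.

```python
def parse_diskstats(text):
    """Parse /proc/diskstats text into {device: {field: value}}.
    Assumes the post-2.6 field order: per-read and per-write counters,
    then the three fields not split by direction."""
    names = ["reads", "reads_merged", "sectors_read", "ms_reading",
             "writes", "writes_merged", "sectors_written", "ms_writing",
             "in_flight", "ms_io", "weighted_ms_io"]
    stats = {}
    for line in text.strip().splitlines():
        parts = line.split()
        # parts[0:3] are major number, minor number, device name
        dev = parts[2]
        stats[dev] = dict(zip(names, (int(v) for v in parts[3:3 + len(names)])))
    return stats

# A synthetic line in /proc/diskstats format, for illustration.
sample = "8 0 sda 100 5 1600 400 100 7 1600 600 0 800 2000"
stats = parse_diskstats(sample)
```

Newer kernels append further fields (e.g., for discards), which this sketch simply ignores.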
To interpret the output of iostat, you need to know a little performance terminology:
• Throughput is the rate at which a system completes operations, in units of operations per second.
• Concurrency is the number of operations in progress at a time, either as an instantaneous measure or an average over an interval of time.
• Latency is the total time operations require to complete, from the perspective of the user or system who requested them. It is in units of seconds per operation. Latency is the round-trip time between making the request and getting the response. It is also called residence time because it’s how long the request was resident in the system that was doing the work. Latency is composed of two parts, as follows…
• Queue time is the first component of latency: the time the request spends waiting, queued for service, after the request is made but before the work begins. It is sometimes called “wait” time; however, this term is ambiguous in the context of iostat, so beware.
• Service time is the second component of latency, after the device accepts a request from the queue and does the actual work.
• Utilization is the portion of time during which the device is busy servicing requests. It is a fraction from 0 to 1, usually expressed as a percentage.
The iostat tool works by capturing a snapshot of /proc/diskstats and all its fields, then waiting and grabbing another snapshot. It subtracts the two snapshots and does some computations with the differences.
Here’s how iostat computes the output fields, and what each of them means:
• There are a number of throughput metrics: rrqm/s, wrqm/s, r/s, w/s, rsec/s, and wsec/s. These are per-second metrics whose names indicate what they mean (read requests merged per second, and so on). These are computed by simply dividing the delta in the fields from the file by the elapsed time during the interval.
• avgrq-sz is the average request size: the number of sectors transferred divided by the number of I/O operations.
• avgqu-sz is average concurrency overall during the interval. It is computed from the last field in the file—the weighted number of milliseconds spent doing I/Os—divided by the milliseconds elapsed. As per Little’s Law, the units cancel out and you just get the average number of operations in progress during the time period, which is a good measure of load or backlog. The name, short for “average queue size”, is misleading. This value does not show how many operations were queued but not yet being serviced—it shows how many were either in the queue waiting, or being serviced. The exact wording of the kernel documentation is “…as requests are given to appropriate struct request_queue and decremented as they finish.”
• %util is utilization: the total time during which I/Os were in progress, divided by the sampling interval. This tells you how much of the time the device was busy, but it doesn’t really tell you whether it’s reaching its limit of throughput, because the device could be backed by many disks and hence capable of multiple I/O operations simultaneously.
• await is average latency: the total time for all I/O operations summed, divided by the number of I/O operations completed. It includes queue wait and service time. It’s important to note that “await” stands for “average wait,” but this is not what a performance engineer might understand “wait” to mean. A performance engineer might think wait is queue time; here it’s total latency.
• svctm is the average service time. It is the most confusing to derive, because to understand why it is valid you need to know Little’s Law and the Utilization Law. It is the utilization divided by the throughput. You saw utilization above; the throughput is the number of I/O operations completed per unit of time during the interval.
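As a concrete sketch of that arithmetic, here is a minimal Python version of the computations described above (the counter order follows the kernel's documented /proc/diskstats field layout; the sample numbers are invented):

```python
def iostat_metrics(prev, curr, interval_s):
    """Compute iostat-style metrics from two per-device diskstats samples.

    prev/curr are 11-tuples of counters in kernel order:
    (reads, reads_merged, sectors_read, ms_reading,
     writes, writes_merged, sectors_written, ms_writing,
     in_flight, ms_doing_io, weighted_ms_doing_io)
    """
    d = [c - p for p, c in zip(prev, curr)]     # deltas between snapshots
    reads, writes = d[0], d[4]
    ios = reads + writes
    ms = interval_s * 1000.0

    r_per_s = reads / interval_s
    w_per_s = writes / interval_s
    # avgqu-sz: growth of the weighted-ms counter over elapsed ms (Little's Law)
    avgqu_sz = d[10] / ms
    # utilization: fraction of the interval with at least one I/O in flight
    util = d[9] / ms
    # await: total time of all operations (queue + service) over completions
    await_ms = (d[3] + d[7]) / ios if ios else 0.0
    # svctm: busy time over completions, i.e. utilization / throughput
    svctm_ms = d[9] / ios if ios else 0.0
    return dict(r_s=r_per_s, w_s=w_per_s, avgqu_sz=avgqu_sz,
                util=util, await_ms=await_ms, svctm_ms=svctm_ms)

# Two snapshots one second apart: 100 reads and 100 writes completed,
# 2 ms average latency, device busy for 300 ms of the second.
prev = (1000, 0, 8000,  900, 2000, 0, 16000, 1100, 0, 5000, 6000)
curr = (1100, 0, 8800, 1100, 2100, 0, 16800, 1300, 0, 5300, 6400)
m = iostat_metrics(prev, curr, 1.0)
print(m)  # r/s=100, w/s=100, avgqu-sz=0.4, util=0.3, await=2.0 ms, svctm=1.5 ms
```

Note that field 9 (in-flight) is the one field that is *not* cumulative, so its delta is not used; everything else is a difference of counters divided by either time or operation count.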
Although the computations and their results seem both simple and cryptic, it turns out that you can derive a lot of information from the relationship between these various numbers.
I’ve shown how the numbers are computed, but now you might ask, why are those things true? Why are those the correct relationships to use when computing these metrics?
The answer lies in several interrelated theories and properties:
1. Queueing Theory. This is the study of “customers” arriving at “servers” to be serviced. In the disk’s case, the customers are I/O requests, and the disks are the servers. Queueing theory explains the relationship between the length of the queue, the time spent waiting in the queue, and the system’s utilization.
2. Little’s Law, which states that in a stable system, where all requests eventually complete, over the long run L = λW, or as I prefer to state it, N=XR. The number of requests (customers) resident in the system (whether queued or in service) is L or N. It is equal to the arrival rate λ (or throughput X) times the residence time W (or response time R).
3. The utilization law, ρ = λS. This states that utilization is throughput times service time.
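Both laws can be checked on a toy trace. In the sketch below (all numbers invented), each operation is a (start, queue, service) triple in milliseconds, scheduled so that a single device never serves two operations at once:

```python
# Numeric check of Little's Law (N = X * R) and the Utilization Law
# (rho = X * S) on a small synthetic trace of I/O operations.
# Service windows: 0-4, 4-8, 8-12, 12-16 ms, so the device serves one at a time.
ops = [(0, 0, 4), (1, 3, 4), (2, 6, 4), (12, 0, 4)]  # (start, queue, service)
T = 20.0  # observation window, ms; every operation completes inside it

latencies = [q + s for _, q, s in ops]      # residence time = queue + service
X = len(ops) / T                            # throughput, operations per ms
R = sum(latencies) / len(ops)               # mean latency (residence time)
N = sum(latencies) / T                      # time-averaged number in system
S = sum(op[2] for op in ops) / len(ops)     # mean service time
rho = sum(op[2] for op in ops) / T          # utilization of the single device

print(N, X * R)    # 1.25 1.25  (Little's Law)
print(rho, X * S)  # 0.8 0.8    (Utilization Law)
```

Little’s Law is what licenses iostat’s avgqu-sz computation, and the Utilization Law is what licenses svctm.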
If you’d like to learn more about queueing theory and these relationships, I encourage you to do so. I wrote a free introduction to queueing theory. I also wrote a replacement for iostat, called pt-diskstats, which exposes additional insight into disk device behavior and performance; iostat computes a lot of valuable metrics, but there’s more that it doesn’t compute.
Image by doug88888.
|
|
Journal of Electrochemical Science and Technology 2013;4(3):93-101. DOI: https://doi.org/10.5229/JECST.2013.4.3.93

# Nucleation Process of Indium on a Copper Electrode

Yonghwa Chung, Chi-Woo Lee
Department of Advanced Materials Chemistry, Korea University

**Abstract** The electrodeposition of indium onto a copper electrode from an aqueous sulfate solution containing $In^{3+}$ was studied by means of cyclic voltammetry and chronoamperometry. Reduction and oxidation of indium on copper were investigated using cyclic voltammograms at different negative limiting potentials and at different scan rates in cumulative cycles. The cyclic voltammograms indicated that the reduction and oxidation processes of indium can involve various reactions. Chronoamperometry was carried out to analyze the nucleation mechanism of indium in the early stage of electrodeposition. The non-dimensional plot of the current transients at different potentials showed that the shape of the plot depends on the applied potential. The nucleation of indium at potential steps of -0.6 to -0.8 V was close to progressive nucleation limited by diffusion. However, at potential steps below -0.8 V the non-dimensional plot of the current transients deviated from the theoretical curves.

Key Words: Indium, Electrodeposition, Reduction, Oxidation, Nucleation
|
|
# A generalization of a problem of A. G. Ioachimescu
### Full PDF
creative_1993_2_049_053
In this note we study the sequence $(s_n)_{n\ge 1}$ given by (4). It is shown that $(s_n)$ is convergent, with $\lim_{n\to\infty} s_n \in [-p,\,1-p]$, and that the order of convergence is given by (5) or (6).
In the particular case p=2, we obtain a problem from the first issue of Gazeta Matematică, in 1895, due to A. G. Ioachimescu, one of the founders of this old Romanian magazine.
|
|
# Majoranas in topological insulators and superconductors

## Introduction
We have a returning lecturer for the first chapter of this week’s lectures: Carlo Beenakker from Leiden University, who will tell us more about different ways to create Majoranas in superconducting vortices.
## Different types of bulk-edge correspondence
By now, we have seen examples of how the topological properties of the bulk of a material can give birth to new physical properties at its edges, and how these edge properties cannot exist without a bulk. This is the essence of bulk-edge correspondence. For example, the unpaired Majorana bound states at the edges of a Kitaev chain exist because they are separated by the bulk of the chain.
Observe that the systems we have studied so far all had something in common: the topologically protected edge states were separated by a bulk that is one dimension higher than the dimension of the edge states. For example, the 0D Majorana bound states are separated by the 1D bulk of a Kitaev chain, and 1D chiral edge modes are separated by a 2D Chern insulator.
In this week, we will see that this does not need to be the case.
The dimension of the bulk does not need to be one higher than the dimension of the topologically protected edge. Any dimension higher than the dimension of the edge works equally well.
We will see how this simple insight opens new avenues in the hunt for topological protection.
## Turning the helical edge into a topological superconductor
In the past weeks, we have studied two systems that appear very different, but where topology showed up in a very similar way.
First, let’s consider the quantum spin-Hall insulator. As we saw two weeks ago, it is characterized by a fermion parity pump: if you take a Corbino disk made out of a quantum spin-Hall insulator and change the flux by half a normal flux quantum, that is by $$h/2e$$, one unit of fermion parity is transferred from one edge of the sample to the other.
Secondly, let us consider a one-dimensional topological superconductor, like we studied in weeks two and three. If such a system is closed into a Josephson ring, and the flux through the ring is advanced by one superconducting flux quantum $$h/2e$$, the fermion parity at the Josephson junction connecting the two ends changes from even to odd, or vice versa. This is the $$4\pi$$ Josephson effect, one of the main signatures of topological superconductivity.
Note that the change in flux is equal to $$h/2e$$ in both cases, since a superconducting flux quantum $$h/2e$$ is half of the normal flux quantum $$h/e$$.
This suggests that once you have a quantum spin-Hall insulator, you are only one small step away from topological superconductivity and Majoranas. The only ingredient that is missing is to introduce superconducting pairing on the quantum spin-Hall edge.
But this is easy to add, for instance by putting a superconductor on top of the outer edge of our quantum spin-Hall Corbino disk:
The superconductor covers the entire quantum spin-Hall edge except for a small segment, which acts as a Josephson junction with a phase difference given by $$\phi = 2e\Phi/\hbar$$, where $$\Phi$$ is the magnetic flux through the center of the disk. We imagine that the superconductor gaps out the helical edge by proximity, which means that Cooper pairs can tunnel in and out from the superconductor into the edge. In order for this to happen, a conventional $$s$$-wave superconductor is enough.
We will now repeat our pumping experiment, that is, increase the flux $$\Phi$$ by $$h/2e$$. We know that one unit of fermion parity must be transferred from the inner edge of the disk to the outer edge. However, the only place where we can now find a zero-energy state is the Josephson junction, because the rest of the edge is gapped.
From the point of view of the superconducting junction, this means that when the phase difference $$\phi$$ is advanced by $$2\pi$$, the ground state fermion parity of the junction changes. Recalling what we learned in the second and third weeks, we can say that the Josephson effect is $$4\pi$$-periodic.
#### What happens to the Josephson current in the setup shown above if you remove the inner edge of the Corbino disk?
The pumping argument fails and the Josephson effect becomes $2\pi$ periodic.
Then you can no longer apply a flux through the disk.
The Josephson effect remains $4\pi$ periodic, but the fermion parity becomes fixed.
Nothing changes if the inner edge of the Corbino disk is removed.
### Majoranas on the quantum spin-Hall edge
We know that the $$4\pi$$-periodicity of the Josephson effect can always be associated with the presence of Majorana zero modes at the two superconducting interfaces of the Josephson junction.
However, if you compare the system above with the Josephson ring studied in week three, you will notice an important difference. In that case, the Josephson junction was formed by an insulating barrier. Now on the other hand, the two superconducting interfaces are connected by the quantum spin-Hall edge.
This means that our Majoranas are connected by a gapless system, and therefore always strongly coupled. In order to see unpaired Majoranas, or at least weakly coupled ones, we need to gap out the segment of the edge forming the Josephson junction.
To gap it out, we can try to place another superconductor in the gap. Unfortunately, this doesn’t really help us, because it results in the formation of two Josephson junctions connected in series, and we only want one.
However, we know that the edge modes of the quantum spin-Hall insulator are protected from backscattering by time-reversal symmetry. To gap them out, we need to break time-reversal symmetry. Since a magnetic field breaks time-reversal symmetry, we can gap out the edge modes by placing a magnet on the segment of the edge between the two superconductors:
In the sketch above, you see two Majoranas drawn, one at each interface between the magnet and the superconductor. Their wavefunctions decay as we move away from the interfaces. As Carlo Beenakker mentioned in the introductory video, these Majoranas are quite similar to those we found at the ends of quantum wires.
To understand them in more detail, note that the magnet and the superconductor both introduce a gap in the helical edge, but through a completely different physical mechanism. The magnet flips the spin of an incoming electron, or hole, while the superconductor turns an incoming electron with spin up into an outgoing hole with spin down. These two different types of reflection processes combine together to form a Majorana bound state.
We can capture this behavior with the following Bogoliubov-de Gennes Hamiltonian for the edge:
$H_\textrm{BdG}=(-iv\sigma_x \partial_x-\mu)\tau_z+m(x)\,\sigma_z+\Delta(x)\,\tau_x.$
The first term is the edge Hamiltonian of the quantum spin-Hall effect, describing spin up and down electrons moving in opposite direction, together with a chemical potential $$\mu$$. The matrix $$\tau_z$$ acts on the particle-hole degrees of freedom, doubling the normal state Hamiltonian as usual. The second term is the Zeeman term due to the presence of the magnet. Finally, the last term is the superconducting pairing.
The strength of the Zeeman field $$m(x)$$ and the pairing $$\Delta(x)$$ both depend on position. At a domain wall between the superconductor and the magnet, when the relevant gap for the edge changes between $$m$$ and $$\Delta$$, the Hamiltonian above yields a Majorana mode.
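Heuristically (this is a sketch, not the full derivation), at $$\mu=0$$ the zero mode bound to a domain wall at $$x=0$$, with the magnet on the left and the superconductor on the right, decays on each side over a length set by the local gap, in the spirit of the Jackiw-Rebbi domain-wall state:

$|\psi(x)| \sim \begin{cases} e^{\,m x/v}, & x<0 \;\;\text{(magnet side)},\\ e^{-\Delta x/v}, & x>0 \;\;\text{(superconductor side)},\end{cases}$

so the Majorana is localized within a distance $$v/m$$ of the wall on the magnet side and $$v/\Delta$$ on the superconductor side.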
This is shown below in a numerical simulation of a quantum spin-Hall disk. The left panel shows the edge state of the disk without any superconductor or magnet. In the right panel we cover one half of the disk by a superconductor and the other by a magnet, and obtain two well-separated Majoranas:
The density of states plot of the lowest energy state reveals one Majorana mode at each of the two interfaces between the magnet and the superconductor.
This clearly shows how it is possible to obtain 0D topologically protected states (the Majorana modes) from a 2D bulk topological phase (the quantum spin-Hall insulator). All we had to do was to add the appropriate ingredients (the superconductor and the magnet).
## Two-dimensional $$p$$-wave superconductors
Let us now move on to Majoranas in vortices, as discussed by Carlo Beenakker in the introductory video. We will need a model for a 2D topological superconductor. How do we obtain it?
It turns out that the method we used to construct 2D Chern insulators in week 4, namely stacking 1D Kitaev chains and coupling them, can also be used to construct 2D topological superconductors.
That isn’t very surprising though, is it? Remember that back then, we told you to forget that the Kitaev model was really a superconductor. Bearing that in mind, it comes as no surprise that stacking 1D superconductors gives us a 2D superconductor.
So let’s look back at the Hamiltonian we obtained for a Chern insulator by coupling a stack of Kitaev chains:
$H_\textrm{2D}(\mathbf{k})=-(2t\cos{k_x}+\mu)\,\tau_z+\Delta\sin{k_x}\tau_y-2\gamma\,(\cos{k_y}\tau_z+\sin{k_y}\,\tau_x).$
Those of us who are careful would want to check that the above Hamiltonian is indeed a superconductor, in particular that the terms coupling different chains do not spoil particle-hole symmetry.
And indeed if we consider the operator $$\mathcal{P}=\tau_x \mathcal{K}$$ with $$\mathcal{K}$$ the complex conjugation operator, we find that the Bloch Hamiltonian obeys $$H_\textrm{2D}(\mathbf{k}) = -\tau_x H^*_\textrm{2D}(-\mathbf{k}) \tau_x$$, precisely the symmetry obeyed by the Kitaev chain, extended to two dimensions (if you do not remember how to apply an anti-unitary symmetry in momentum space, you can go back to week 1 and look at the original derivation).
The Hamiltonian above is quite anisotropic - it looks different in the $$x$$ and $$y$$ directions, a consequence of the way we derived it in week four. For our purposes, however, it is convenient to make it look isotropic. Thus, we tweak the coefficients in $$H$$ to make it look similar in the $$x$$ and $$y$$ directions. This is fine as long as we do not close the gap, because the topological properties of $$H$$ remain unchanged.
In this way we arrive at the canonical Hamiltonian of a so-called $$p$$-wave superconductor:
$H(k_x,k_y)=-[2t\,(\cos{k_x}+\cos{k_y})+\mu]\,\tau_z+\Delta\,(\sin{k_x}\tau_y-\sin{k_y}\tau_x).$
Apart from looking more symmetric between the $$x$$ and $$y$$ directions, the Hamiltonian clearly separates normal hopping, which is proportional to $$t$$, and superconducting pairing, which is proportional to $$\Delta$$. This superconductor is $$p$$-wave because the pairing is linear in momentum, just like in the Kitaev chain. This can be seen explicitly by expanding $$H$$ around $$\mathbf{k}=\mathbf{0}$$, which gives
$H(k_x,k_y)\approx [t\,(k_x^2+k_y^2)-\mu-4 t]\tau_z+[-i \Delta(k_x-i k_y)\tau_++\textrm{h.c.}],$
where $$\tau_+=(\tau_x+i\tau_y)/2$$. Note that the pairing is linear in momentum, proportional to $$k_x-ik_y$$, and it breaks both time-reversal and inversion symmetries.
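The particle-hole symmetry discussed above, $$H(\mathbf{k}) = -\tau_x H^*(-\mathbf{k})\tau_x$$, can be verified numerically for the isotropic lattice Hamiltonian. Here is a minimal sanity check (the parameter values are arbitrary choices, not taken from the text):

```python
import numpy as np

# Pauli matrices acting on the particle-hole (Nambu) degree of freedom
tau_x = np.array([[0, 1], [1, 0]], dtype=complex)
tau_y = np.array([[0, -1j], [1j, 0]])
tau_z = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx, ky, t=1.0, mu=0.5, delta=1.0):
    """Bloch Hamiltonian of the p-wave superconductor defined above."""
    return (-(2 * t * (np.cos(kx) + np.cos(ky)) + mu) * tau_z
            + delta * (np.sin(kx) * tau_y - np.sin(ky) * tau_x))

rng = np.random.default_rng(0)
for kx, ky in rng.uniform(-np.pi, np.pi, size=(5, 2)):
    # P = tau_x K is antiunitary, so P H(k) P^{-1} = tau_x H^*(k) tau_x
    assert np.allclose(tau_x @ H(kx, ky).conj() @ tau_x, -H(-kx, -ky))
print("particle-hole symmetry verified")
```

The same check applied to the anisotropic stacked-chain Hamiltonian works identically, since the symmetry operator does not depend on the hopping structure.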
Even though we have reinterpreted the Hamiltonian $$H$$ as a superconductor, it is still originally a Chern insulator. This means that the system is still characterized by a bulk Chern number, which determines the presence of chiral edge states. A chiral edge state can be described by a simple effective Hamiltonian, equivalent to that of a quantum Hall edge:
$H_\textrm{edge}=\hbar v k$
with $$v$$ the velocity and $$k$$ the momentum along the edge. Note that the edge Hamiltonian maintains the particle-hole symmetry of the bulk: for every state with energy $$E$$ and momentum $$k$$ there is a state with energy $$-E$$ and momentum $$-k$$.
We are now ready to see how unpaired Majoranas can appear in a 2D $$p$$-wave superconductor.
## Vortices in 2D $$p$$-wave superconductors
So far we have considered a uniform superconducting pairing $$\Delta$$, with constant amplitude and phase. This is an idealized situation, which corresponds to a perfect superconductor with no defects.
If you apply a small magnetic field to a superconducting film, or if there are defects in the material, a vortex of supercurrent can form to lower the free energy of the system. In a vortex, there is a supercurrent circulating in a small area around the defect or the magnetic field lines penetrating the superconductor. The magnetic flux enclosed by the vortex supercurrent is equal to a superconducting flux quantum $$h/2e$$.
The amplitude $$\Delta$$ of the superconducting pairing is suppressed in the core of the vortex, going to zero in its center, and the superconducting phase winds by $$2\pi$$ around a closed path surrounding it. The situation is sketched below:
Because the pairing $$\Delta$$ goes to zero in the middle of the vortex, there can be states with an energy smaller than $$\Delta$$ which are localized at the vortex core. We now want to see whether it is possible to have a non-degenerate zero energy solution in the vortex - because of particle-hole symmetry, this would be an unpaired Majorana mode!
To compute the spectrum of the vortex we could introduce a position dependent-phase for $$\Delta$$ in the Hamiltonian of the superconductor, and solve it for the energy spectrum by going through quite some algebra. But as usual in this course, we will take a shortcut.
Our shortcut comes from answering the following question: how is the spectrum of the chiral edge states affected by introducing a vortex in the middle of the superconductor?
From week one, we know that changing the flux through a superconducting ring by a flux quantum changes the boundary condition from periodic to antiperiodic, or vice versa.
A vortex has precisely the same effect on the chiral edge states. Therefore, in the presence of a vortex, the allowed values $$k_n$$ of momentum in a disk shift by $$\pi/L$$, with $$L$$ the length of the edge. The energy levels depend linearly on momentum and are shifted accordingly,
$E_n\,\to\, E_n + \hbar v \pi / L.$
Now, with or without the vortex, the spectrum must be symmetric around $$E=0$$ because of particle-hole symmetry. The energy levels $$E_n$$ correspond to standing waves and are equally spaced, with spacing given by $$2\hbar v \pi / L$$. There are only two such spectra consistent with particle-hole symmetry, $$E_n = 2\pi\,n\, \hbar v / L$$ and $$E_n = 2\pi\,(n+1/2)\, \hbar v / L$$.
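To see why these are the only two possibilities, write the equally spaced spectrum as $$E_n = E_0 + n\delta$$ with $$\delta = 2\pi\hbar v/L$$. Particle-hole symmetry requires the set of levels to be mapped onto itself by $$E \to -E$$, which pins the offset $$E_0$$:

$\{E_0 + n\,\delta\} = \{-E_0 - n\,\delta\} \;\Leftrightarrow\; 2E_0 \in \delta\,\mathbb{Z} \;\Leftrightarrow\; E_0 = 0 \;\text{or}\; E_0 = \delta/2 \pmod{\delta}.$

The first choice contains a level pinned at zero energy; the second does not.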
Which one of the two spectra corresponds to the presence of a vortex?
To answer this question, observe that the energy spectrum $$E_n = 2 \pi\,n\,\hbar v / L$$ includes a zero-energy solution, which would be an unpaired Majorana mode at the edge! Since Majorana modes always come in pairs, this is impossible unless there is a second zero-energy solution somewhere else, and the only other place where it can appear is the core of the vortex.
Just by looking at the edge state momentum quantization, we have thus demonstrated that a vortex in a $$p$$-wave superconductor must come with a Majorana.
Below, we plot the wave function of the lowest energy state in a $$p$$-wave disk with a vortex in the middle. The lowest energy wavefunction is an equal superposition of the two Majorana modes. Here you can see that half of it is localized close to the vortex core and half of it close to the edge.
The wave function is not zero in the bulk between the edge and the vortex because of the relatively small size of the system. The separation between edge and vortex, or between different vortices, plays the same role as the finite length of a Kitaev chain, i.e. it splits the Majorana modes away from zero energy by an exponentially small amount.
#### What happens if you add a second vortex to the superconductor? Imagine that the vortices and edge are all very far away from each other
The second vortex has no Majorana.
Both vortices have a Majorana, and the edge has two Majoranas.
The Majorana mode at the edge goes away, and each vortex has its own Majorana.
Vortices can only be added in pairs because Majoranas only come in pairs.
## Vortices in a 3D topological insulator
Unfortunately, superconductors with $$p$$-wave pairing are very rare, with mainly one material being a good candidate. But instead of waiting for nature to help us, we can try to be ingenious.
As Carlo mentioned, Fu and Kane realized that one could obtain an effective $$p$$-wave superconductor and Majoranas on the surface of a 3D TI.
We already know how to make Majoranas with a 2D topological insulator. Let us now consider an interface between a magnet and a superconductor on the surface of a 3D topological insulator. Since the surface of the 3D TI is two dimensional, such an interface will be a one dimensional structure and not a point defect as in the quantum spin-Hall case.
The Hamiltonian of the surface is a very simple extension of the edge Hamiltonian, $$\sigma_x k_x + \sigma_y k_y$$ instead of just $$\sigma_x k_x$$. We can imagine that $$k_y$$ is the momentum along the interface between the magnet and the superconductor, and it is conserved. The effective Bogoliubov-de Gennes Hamiltonian is
$H_\textrm{BdG}=(-i\sigma_x \partial_x+ \sigma_y k_y-\mu)\tau_z+m(x)\,\sigma_z+\Delta(x) \tau_x.$
What is the dispersion $$E(k_y)$$ of states along the interface resulting from this Hamiltonian? Well, for $$k_y=0$$ we have exactly the Hamiltonian of the magnet/superconductor interface in the quantum spin-Hall case, which had a zero mode. So we know that the interface is gapless. The magnet breaks time-reversal symmetry, so we will have a chiral edge state, with energy $$E$$ proportional to $$k_y$$. Just like in the $$p$$-wave superconductor case!
At this point, analyzing the case of a vortex is very simple. We just have to reproduce the geometry we analyzed before. That is, we imagine an $$s$$-wave superconductor disk with a vortex in the middle, surrounded by a magnetic insulator, all on the surface of a 3D topological insulator:
The introduction of a vortex changes the boundary conditions for the momentum at the edge, like in the $$p$$-wave case, and thus affects the spectrum of the chiral edge states going around the disk.
Following the same argument as in the $$p$$-wave case, particle-hole symmetry dictates that there is a Majorana mode in the vortex core on a 3D TI. Interestingly, the vortex core is spatially separated from the magnet - so the vortex should contain a Majorana mode irrespective of the magnet that was used to create the chiral edge mode.
In fact, the magnet was only a crutch that we used to make our argument. We can now throw it away and consider a vortex in a superconductor which covers the entire surface of the topological insulator.
To confirm this conclusion, below we show the result of a simulation of a 3D BHZ model in a cube geometry, with a vortex line passing through the middle of the cube. To make things simple, we have added superconductivity everywhere in the cube, and not just on the surface (nothing prevents us from doing this, even though in real life materials like Bi$$_2$$Te$$_3$$ are not naturally superconducting).
In the right panel, you can see a plot of the wavefunction of the lowest energy state. You see that it is very well localized at the end points of the vortex line passing through the cube. These are precisely the two Majorana modes that Carlo Beenakker explained at the end of his introductory video.
|
|
# Comparison of definitions for Functions of Bounded Variation
I have been trying to understand the functions of bounded variation and I came across the following definitions
Definition 1: A function $$f:\mathbb{R^d} \rightarrow \mathbb{R}$$ is of bounded variation iff $$\begin{split} \operatorname{TV}(f)&:=\int\limits_{\mathbb{R}^{d-1}}\mathcal{TV}(f(\cdot,x_2,\cdots,x_d))dx_2 \cdots dx_d +\cdots+\\ & \quad+\int\limits_{\mathbb{R}^{d-1}}\mathcal{TV}(f(x_1, \cdots, x_{d-1},\cdot)) dx_1\cdots dx_{d-1} < \infty. \end{split}$$
where, for $$g:\mathbb{R} \rightarrow \mathbb{R}$$, $$\mathcal{TV}(g):=\sup \left\{\sum\limits_{k=1}^N{\left|g(\xi_k)-g(\xi_{k-1})\right|}\right\}$$ and the supremum is taken over all $$N \geq 1$$ and all partitions $$\{\xi_0,\xi_1,\ldots,\xi_N\}$$ of $$\mathbb{R}.$$
Definition 2: A function $$f:\mathbb{R^d} \rightarrow \mathbb{R}$$ is of bounded variation iff
$$\operatorname{TV}(f)= \sup \left\{\,\int\limits_{\mathbb{R}^d}f \operatorname{div}(\phi): \phi \in C_c^1(\mathbb{R^d})^d, \|\phi\|_{L^{\infty}} \leq 1\, \right\} < \infty.$$
Clearly, a function $$f$$ that is of bounded variation in the sense of definition 2 need not be of bounded variation in the sense of definition 1.
In this regard, I have the following doubts.
1. If $$f$$ satisfies definition 1, then do we have $$f$$ satisfies definition 2? (I felt so but could not prove it rigorously).
2. If (1) holds, are the values of $$\operatorname{TV}(f)$$ calculated by definition 1 and definition 2 equal?
3. If $$f$$ satisfies definition 2, does there exist a function $$g:\mathbb{R}^d \rightarrow \mathbb{R}$$ a.e equal to $$f$$ such that $$g$$ satisfies definition 1? If so how to prove it?
P.S. : I have read somewhere that 3 is true in one dimension and in-fact we can find $$g$$ which is right continuous. But I could not find the rigorous proof and also I could not find any such result in multi-d.
• Are both definitions equivalent for $d = 1$? In higher dimensions, I'd try using $\Phi = \phi_1e_1+\cdots+\phi_ne_n$, with $\phi_i$ a suitable choice. Can you prove something under the assumption that $f$ is smooth? – user90189 Apr 2 at 13:01
The questions, despite looking like a representation problem in functional analysis, run much deeper, as they bring out the history of the topic, notably of $$BV$$-functions, and the reasons why the customary definition adopted for the variation of a multivariate function is definition 2 above. The answers below therefore need to dwell a bit on this history: that said, let's start.
1. If $$f$$ satisfies definition 1, then do we have $$f$$ satisfies definition 2? (I felt so but could not prove it rigorously).
No: the two definitions are in general not equivalent. The main problem is that definition 1 is not invariant with respect to coordinate changes for all $$L^1$$ functions: in particular, there exist functions for which the value of the variation $$\mathrm{TV}(f)$$ depends on the choice of coordinate axes, as shown by Adams and Clarkson ([1], pp. 726-727) with their counterexample. Precisely, by using the ternary set, they construct a function of two variables such that the total variation according to definition 1 passes from a finite value to an infinite one simply under a rotation of the coordinate axes by an angle of $${\pi}/{4}$$.
However, for particular classes of functions the answer is yes: this happens, for example, for continuous functions, as Leonida Tonelli was well aware when he introduced definition 1. We'll see something more in the joint answer to the second and third questions.
2. If [1] is true then $$\operatorname{TV}(f)$$ calculated by definition 1 and definition 2 are they equal?
3. If $$f$$ satisfies definition 2, does there exist a function $$g:\mathbb{R}^d \rightarrow \mathbb{R}$$ a.e equal to $$f$$ such that $$g$$ satisfies definition 1? If so how to prove it?
Since definition 1 is not coordinate invariant in $$L^1$$ while definition 2 is, for questions 2 and 3 the answer is no. However, things change if, instead of the total (pointwise) variation $$\mathcal{TV}$$, one considers the essential variation defined as $$\newcommand{\eV}{\mathrm{essV}} \eV(g):=\inf \left\{\mathcal{TV}(v) : g=v\;\; L^1\text{-almost everywhere (a.e.) in }\Bbb R\right\}$$ (see [2], §3.2, p. 135 or [4], §5.3, p. 227 for an alternative definition involving approximate continuity, closer to the original Lamberto Cesari's approach). Then you have the following theorem
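A standard one-dimensional example (not taken from the cited references) shows how far apart the pointwise and essential variations can be: the indicator function $$\chi_{\Bbb Q}$$ of the rationals. Partitions alternating between rational and irrational points make its pointwise variation sums arbitrarily large, while $$\chi_{\Bbb Q}=0$$ almost everywhere, so

$$\mathcal{TV}(\chi_{\Bbb Q})=+\infty \qquad\text{while}\qquad \eV(\chi_{\Bbb Q})=\mathcal{TV}(0)=0,$$

and definition 2 likewise gives $$\operatorname{TV}(\chi_{\Bbb Q})=0$$, since $$\int \chi_{\Bbb Q}\operatorname{div}(\phi)=0$$ for every test function $$\phi$$.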
Theorem 5.3.5 ([4], pp. 227-228) Let $$f\in L^1_\text{loc}(\mathbb{R}^n)$$. Then $$f\in BV_\text{loc}(\Bbb R^n)$$ if and only if $$\int\limits_{R^{n-1}}\eV_i\big(f(x)\big)\,{\mathrm{d}} x_1\cdots{\mathrm{d}}x_{i-1}\cdot {\mathrm{d}}x_{i+1}\cdots {\mathrm{d}}x_n< \infty\quad \forall i=1,\ldots,n$$ where
• $$\eV_i\big(f(x)\big)$$ is the essential variation of the one dimensional sections of $$f$$ along the $$i$$-axis and
• $$R^{n-1}\subset \Bbb R^{n-1}$$ is any $$(n-1)$$-dimensional hypercube.
This result, apart from its intrinsic interest, is valuable since it allows one to prove a variant of the sought-for result: namely $$\sum_{i=1}^n \int\limits_{R^{n-1}}\eV_i\big(f(x)\big)\,{\mathrm{d}} x_1\cdots{\mathrm{d}}x_{i-1}\cdot {\mathrm{d}}x_{i+1}\cdots {\mathrm{d}}x_n =\sup \left\{\,\int\limits_{\mathbb{R}^d}f \operatorname{div}(\phi): \phi \in C_c^1(\mathbb{R^d})^d, \|\phi\|_{L^{\infty}} \leq 1\, \right\}\label{1}\tag{V}$$ The proof of \eqref{1} follows the proof of theorem 5.3.5 above: the method is the same, but instead of the essential variation along a single $$i$$-th axis ($$i=1,\ldots,n$$), the sum of the $$n$$ essential variations is considered. Also, both sides of equation \eqref{1} are lower semicontinuous; thus, given any sequence of $$BV$$ functions $$\{f_j\}_{j\in\Bbb N}$$ for which they converge to a common (finite) value, it is possible to find a subsequence converging to a $$BV$$ function $$f$$: simply stated, the supremum is attained for the limit function of the subsequence and thus it is a maximum. Thus questions 2 and 3 have an affirmative answer if the essential variation is considered instead of the (pointwise) total variation.
Notes
• Definition 1 defines the so-called "total variation in the sense of Tonelli", and was introduced by Leonida Tonelli only for continuous functions, since the problem of non-invariance of the value of the variation with respect to a change of coordinate axes, pointed out by Adams and Clarkson ([1], pp. 726-727), does not exist in this class. The multidimensional total variation defined by using the essential variation, i.e. $$\mathrm{TV}(f)=\sum_{i=1}^n \int\limits_{R^{n-1}}\eV_i\big(f(x)\big)\,{\mathrm{d}} x_1\cdots{\mathrm{d}}x_{i-1}\cdot {\mathrm{d}}x_{i+1}\cdots {\mathrm{d}}x_n$$ is called the "total variation in the sense of Tonelli and Cesari" and was introduced by Lamberto Cesari in [3], pp. 299-300 to overcome the known limitation of definition 1.
• I took reference [1] from the answer by @Piotr Hajlasz to this Q&A: as I pointed out there, Definition 1 is the original definition of bounded variation for functions of several variables, given by Lamberto Cesari in 1936. Definition 2 was introduced later by Mario Miranda in the early sixties of the 20th century.
References
[1] C. Raymond Adams, James A. Clarkson, "Properties of functions $$f(x,y)$$ of bounded variation" (English), Transactions of the American Mathematical Society 36, 711-730 (1934), MR1501762, Zbl 0010.19902.
[2] Luigi Ambrosio, Nicola Fusco, Diego Pallara, Functions of bounded variation and free discontinuity problems, Oxford Mathematical Monographs, New York and Oxford: The Clarendon Press/Oxford University Press, New York, pp. xviii+434 (2000), ISBN 0-19-850245-1, MR1857292, Zbl 0957.49001.
[3] Lamberto Cesari, "Sulle funzioni a variazione limitata" (Italian), Annali della Scuola Normale Superiore, Serie II, 5 (3–4), 299–313 (1936), JFM , MR1556778, Zbl 0014.29605
[4] William P. Ziemer, Weakly differentiable functions. Sobolev spaces and functions of bounded variation. Graduate Texts in Mathematics, 120. New York: Springer-Verlag, pp. xvi+308, 1989, ISBN: 0-387-97017-7, MR1014685, Zbl 0692.46022
The answer to all three questions is yes, but there are some subtleties. In Definition 2, if you modify the function on a set of measure zero, $$TV$$ does not change. So you need to take sets of measure zero into account in Definition 1 when $$d>1$$. The idea is to change $$\mathcal{TV}$$ so that instead of using arbitrary partitions you only use partitions made of points which are Lebesgue points of your function. This is called the essential pointwise variation of the function. You can find these results in Leoni. The case $$d=1$$ is Theorem 7.3 and is exactly what you wrote. For $$d>1$$ the result you want is due to Serrin and is given in Theorem 14.20, using the essential pointwise variation instead of the pointwise variation. For $$d=1$$ you can also look at Evans and Gariepy, Theorem 5.21. Unfortunately none of the proofs are easy.
|
|
##### Walking in circles
Stefan loves exploring forests but he usually gets lost in them. This time, he went a little too deep into the forest and no longer knows if he is even in a forest still. Help Stefan find out how many trees there are in the forest, or tell him that he is not in a forest anymore and is walking in circles. (Just to clarify, a forest is a graph that is not necessarily connected and contains no cycles. In other words, it is a collection of trees).
#### Input Specification
The first line contains two integers: N (The number of nodes in the graph) and M (The number of edges in the graph).
The next M lines contain two integers: u, v indicating that there is an undirected edge connecting the nodes u and v (1 ≤ u, v ≤ N)
#### Output Specification
If the graph given is a forest, print Stefan is in a forest with T trees
where T is the number of trees in the forest.
Otherwise, Stefan is walking in circles, and you should print Middle of nowhere.
#### Constraints (and partial points)
For all cases: 1 ≤ N, M ≤ 10^5
For batch 1 (30 points): 1 ≤ N ≤ 500
For batch 2 (10 points): Stefan is guaranteed to be in a forest
For batch 3 (60 points): No additional constraints
#### Sample Input 1
9 6
1 2
2 3
2 4
5 6
5 7
8 9
#### Output for Sample Input 1
Stefan is in a forest with 3 trees
#### Sample Input 2
7 6
1 2
2 3
3 4
3 5
2 6
3 7
#### Output for Sample Input 2
Stefan is in a forest with 1 trees
Explanation for Sample Case 2:
Stefan is in a forest with 1 tree (still considered a forest I guess)
#### Sample Input 3
6 5
1 2
2 3
3 1
4 5
5 6
#### Output for Sample Input 3
Middle of nowhere
Explanation for Sample Case 3:
One of the components of the graph contains a cycle, so Stefan is no longer in a forest
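The check itself is small: a graph is a forest iff no edge ever connects two nodes that are already in the same connected component, and the number of trees is then the number of components. Below is a sketch in Python using union-find; the function name `classify` and the edge-list interface are my own choices (the real submission would read N, M, and the edges from standard input), and the output strings follow the sample outputs above.

```python
def classify(n, edges):
    """Return the judge string for a graph with nodes 1..n and the given edges."""
    parent = list(range(n + 1))

    def find(x):
        # Find the root of x, with path halving to keep trees shallow.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            # An edge inside an existing component closes a cycle.
            return "Middle of nowhere"
        parent[ru] = rv

    # No cycles: every component is a tree, so count the roots.
    trees = sum(1 for x in range(1, n + 1) if find(x) == x)
    return "Stefan is in a forest with %d trees" % trees

# Sample 1: three separate trees.
print(classify(9, [(1, 2), (2, 3), (2, 4), (5, 6), (5, 7), (8, 9)]))
```

This runs in near-linear time, comfortably within the stated limits of N, M ≤ 10^5.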
|
|
# Quantitative Research Analyst (Financial Engineer) in DC Office of Risk Analysis and Surveillance
#### Karla Jones
##### Senior Recruiter, SEC
The Securities and Exchange Commission is seeking a Quantitative Research Analyst (Financial Engineer) in our Office of Risk Analysis and Surveillance within the Office of Compliance Inspections and Examinations in our Washington, DC (HQ) location.
Salary Range: $110,910 - $181,398
The mission of the SEC is to protect investors, maintain fair, orderly, and efficient markets, and facilitate capital formation. We seek high-caliber professionals who share the same values of integrity, fairness, accountability, resourcefulness, teamwork and commitment to excellence. The SEC offers challenging work in a collegial environment, while enjoying quality of life and a competitive compensation package.
The Quantitative Research Analyst (Financial Engineer) will:
• Serve as a quantitative research analyst working with SEC staff in building analytical models, mining data, determining proper empirical methodology, organizing data collection, writing unique programs, preparing written reports, and summarizing the studies in formal and informal presentations.
• Assist in the collection and aggregation of risk information as it relates to specific topics or Registrant types.
• Provide technical expertise to design and conduct financial data studies, surveys, reviews, and research projects.
• Conduct research in areas such as the analysis of new financial instruments and strategies, options, and derivatives which involves the application of financial engineering methodologies and employing financial theory and applied mathematics, as well as computation and the practice of programming.
• Support the review and verification of trading strategies for a variety of instruments and markets such as high frequency trading, algorithmic trading, statistical arbitrage, correlation trading, and volatility trading. Perform research and development for statistical analysis of real-time market-making systems, including predictive forecasting algorithms and high-throughput, low-latency, multi-threaded systems or other smart execution systems.
• Work with large volumes of quantitative and qualitative data from different sources for back-testing and validation of models, algorithms, and calculations.
• Develop and present authoritative reports based on the evaluation and interpretation of studies in the assigned area of financial engineering.
REQUIREMENTS:
The successful candidate MUST be a US Citizen.
• Candidates must submit Official/Unofficial transcripts at the time of application. Failure to provide transcripts will result in your application being disqualified.
• Resume
SK 14 Level: Must possess at least one year of experience equivalent to at least the GS/SK-13 grade level applying the theories, principles, and processes of quantitative research; interpreting complex financial and securities industry data; using models and other types of data analysis and statistical software applications, to manipulate and use large data sets and ensure the accuracy of information produced; developing, maintaining and/or validating models used for forecasting, valuation, instrument and strategy selection, portfolio construction, and risk management covering a wide range of financial instruments, including equities, fixed income, currencies, futures, commodities, and/or derivatives; conveying complex and technical information both orally and in writing and presenting technical findings in meetings and formal presentations.
PREFERRED EXPERIENCE:
• Experience in working with large volumes of data from different sources to include back-testing models, algorithms, and strategies for validation.
• Proficiency in computer processes, methods, and languages such as Java, C/C++, Matlab, R, SQL, VBA, Perl, or similar languages and the state-of-the-art database techniques.
• Experience in utilizing models and products for managing risks in portfolio construction, trade decision, and execution and hedging, including multi-factor models such as BARRA; risk management metrics and methods such as VaR and stress testing models, hedging techniques, credit risk, counterparty risk, market risk, valuation and pricing, and model sensitivity and risk statistics.
• Strong interpersonal skills to interact effectively with industry representatives as well as with SEC senior officials, supervisors, co-workers, and the public.
EDUCATION:
• Candidate must possess at least an undergraduate degree in finance, engineering, mathematics, statistics, computer science, actuarial science, economics, or a related technical field.
http://www.sec.gov/jobs/ohr/job712662.html
Please use Vacancy Identification Number: 712662 The closing date of this position is: August 20, 2012
#### markhobbus
##### Member
looks like they are finally starting to get on board. salary is comparable as well which is heartening in the wake of the knightmare on wall street. however, i don't think they will find someone with such skills and knowledge with only a BA/BS.
#### Andy Nguyen
##### Member
Which is why SEC put this on QuantNet where there are more qualified candidates (MFE grads). I know some who work there and said the work environment is great. SEC is the only government agency that has a competitive salary. You must be a US citizen.
#### elektor
##### Active Member
C++ Student
Seems like a lot for a single financial engineer to handle and that too with only a BS? High frequency analysis, Model validation, collection aggregation and presentation of risk reports.....
Model validation for ALL the different asset classes would need very experienced people in my opinion.
#### Andy Nguyen
##### Member
i don't think they will find someone with such skills and knowledge with only a BA/BS.
Seems like a lot for a single financial engineer to handle and that too with only a BS?
It's clearly stated that "Candidate must possess at least an undergraduate degree".
|
|
Kerodon
$\Newextarrow{\xRightarrow}{5,5}{0x21D2}$
Corollary 3.2.5.7. Let $f: (X,x) \rightarrow (S,s)$ be a Kan fibration between pointed Kan complexes. Then the image of the induced map $\pi _{1}(f): \pi _1(X,x) \rightarrow \pi _{1}(S,s)$ is equal to the stabilizer of $[x] \in \pi _0(X_ s)$ (with respect to the action of $\pi _{1}(S,s)$ on $\pi _0(X_ s)$ supplied by Variant 3.2.4.5).
|
|
Suppose Tony tosses a fair die $90\text{ times}$. Of these $90\text{ tosses}$, $18$ resulted in a $1$ showing face-up on the die.
What is the sample proportion, and what is the mean of the sampling distribution?
A
Sample proportion = $20\%$
Mean of sampling distribution = $16.67\%$
B
Sample proportion = $16.67\%$
Mean of sampling distribution = $16.67\%$
C
Sample proportion = $16.67\%$
Mean of sampling distribution = $20\%$
D
Sample proportion = $20\%$
Mean of sampling distribution = $20\%$
E
Sample proportion = $18\%$
Mean of sampling distribution = $16.67\%$
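The arithmetic behind the answer choices is quick to check: the sample proportion is the observed fraction 18/90, while the mean of the sampling distribution of the sample proportion equals the true probability of rolling a 1 on a fair die, 1/6. A short sketch:

```python
sample_proportion = 18 / 90   # observed fraction of 1s in the 90 tosses
true_p = 1 / 6                # mean of the sampling distribution of p-hat

print(round(sample_proportion * 100, 2))  # 20.0  (i.e. 20%)
print(round(true_p * 100, 2))             # 16.67 (i.e. 16.67%)
```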
|
|
A Column Generation Approach for the Periodic Vehicle Routing Problem with Time Windows (bibtex)
by Sandro Pirkwieser and Günther R. Raidl
Reference:
A Column Generation Approach for the Periodic Vehicle Routing Problem with Time Windows. Sandro Pirkwieser, Günther R. Raidl. In Proceedings of the International Network Optimization Conference 2009 (Maria Grazia Scutellà et al., eds.), 2009.
Bibtex Entry:
@INPROCEEDINGS{pirkwieser-09,
author = {Sandro Pirkwieser and G{\"u}nther~R. Raidl},
title = {A Column Generation Approach for the Periodic Vehicle Routing Problem with Time Windows},
editor = {Maria Grazia Scutell{\`a} and others},
booktitle = {Proceedings of the International Network Optimization Conference 2009},
  year = {2009},
}
|
|
LEAP 2025 Algebra I Chapter 3
### LEAP 2025 Algebra I Chapter 3 Sample
DOK: 1 1 pt
3.
A box has a length of 20 inches, a height of 12 inches, and a width of 38 inches. What is the volume of the box, using the formula V = lwh?
DOK: 1 1 pt
5.
Select the expression from the following word problem.
Fifteen minus a number, then divided by two equals eleven.
DOK: 2 1 pt
8.
A plumber charges $45 per hour plus a $25.00 service charge. If a represents his total charges in dollars and b represents the number of hours worked, which formula below could the plumber use to calculate his total charges?
DOK: 2 1 pt
10.
Alicia found 3 times as many seashells as Juan, who found $\frac{1}{2}$ as many seashells as Blake. The three friends put all their seashells together and then evenly divided them between two teachers. Which expression below shows the number of seashells, s, received by each teacher?
|
|
# Inconsistency in Values of Laplace Transformed Current when Changing Units
I was trying to convert the units of the following current between the time domain and the complex-frequency domain (s-domain): $$i(t)=20\cdot e^{-t}H(t)\,\,\,\,\,\text{[mA,ms]}$$ where $$\text{[mA, ms]}$$ indicates that in this formula time must be entered in $$\text{ms}$$ and the calculated current will be in $$\text{mA}$$.
Time Domain: $$\bbox[8px,border:1px solid black] { i_1(t)=20\cdot e^{-t}H(t)\,\,\,\text{[mA,ms]}\,\, \require{AMScd} \begin{CD} @>>> \end{CD} \,\, i_2(t)=0.02 \cdot e^{-1000t}H(1000t)\,\,\,\text{[A,s]} }$$
If, for example, $$t=1\,\text{ms}=0.001\,\text{s}$$, we have: \begin{alignat}{1} i_1(1)&=20\cdot e^{-1}H(1)=\frac{20}{e}\,\text{mA} &= \frac{0.02}{e}\,\text{A} \\[6pt] i_2(0.001)&=0.02\cdot e^{-1000\,\cdot\, 0.001}H(1)\,\,\,\,\,\,\,\,\,\, &=\frac{0.02}{e}\,\text{A}\,\,\,\,\,\,✔️ \end{alignat} Here no big deal, trivial conversion. The problem is in the s-domain.
S-Domain: \bbox[4px,border:1px solid black] { \begin{alignat}{1} I_1(s)=\mathscr{L}[\,i_1(t)\,]=\frac{20}{s+1} \,\,\,\left[\text{mA,}\frac{1}{\text{ms}}\right] \\[6pt] I_2(s)=\mathscr{L}[\,i_2(t)\,]=\frac{0.02}{s+1000}\,\,\,\left[\text{A,}\frac{1}{\text{s}}\right] \end{alignat} }
If, for example, $$s=(e-1)\,\frac{1}{\text{ms}}=1000 \cdot (e-1) \,\frac{1}{\text{s}}$$, we have: \begin{alignat}{1} I_1(e-1)&=\frac{20}{e-1+1}=\frac{20}{e}\,\text{mA} &= \frac{0.02}{e}\,\text{A} \\[4pt] I_2\left(1000 \cdot (e-1)\right) &=\frac{0.02}{1000e-1000+1000}\,\,\,\,\,\,\,\,\,\, &=\frac{0.02}{1000e}\,\text{A}\,\,\,\,❌ \end{alignat}
Here the conversion didn't work. The results differ by a factor of $$10^{-3}$$ and I can't figure out why. The transforms and operations are right; I did them in Mathematica to make sure they weren't wrong:
A note about the units of the complex frequency $$s$$ - the product $$s \cdot t$$ must be dimensionless, so:
• Since the time in $$i_1(t)$$ is measured in $$\text{milliseconds}$$, the complex frequency $$s$$ in $$I_1(s)$$ must be in $$\frac{1}{\text{milliseconds}}$$;
• Similarly, $$s$$ in $$I_2(s)$$ is in $$\frac{1}{\text{seconds}}$$ because $$t$$ in $$i_2(t)$$ is measured in $$\text{seconds}$$.
• What is [mA, ms]? $e^{-t}$ and H(t) are unitless. $i(t)$ is a current so the unit is A or mA, etc. – Chu Sep 3 at 8:50
• @Chu $\text{[mA, ms]}$ indicates that in this formula time must be entered in $\text{ms}$ and the calculated current will be in $\text{mA}$ – Vinicius ACP Sep 3 at 10:13
• OK, I've not seen that notation previously. Now, what is $s=(e-1)\frac{1}{ms}$ and why are you doing it? – Chu Sep 3 at 10:52
• s = (e-1)·1/ms doesn't make sense because it belongs to another domain, not the time domain, so denoting it in ms or s is misleading. – Mitu Raj Sep 3 at 11:25
• I think I get your question now... Will try to put an answer – Mitu Raj Sep 3 at 16:56
Nothing to do with Laplace. Just some mathematics. I found that you have dimensional inconsistency while you converted units. For instance in this equation :- $$I_1(s)=\frac{20}{s+1}$$ The dimension of current is milliAmperes, hence the whole RHS should evaluate to 'milliAmperes' units as well.
$$s$$ has units 1/ms as you explained. Okay, so in the term $$(s+1)$$, '1' should also be representing a quantity with units 1/ms, otherwise we cannot add them dimensionally, i.e., $$(s+1)$$ has units 1/ms.
Which means, whatever the numerator '20' is representing, is not of units milliAmperes as you assumed, but of units 'milliAmperes per millisecond' or mA/ms. Otherwise the whole RHS becomes dimensionally incorrect.
So when you evaluate the expression for $$I_2(s)$$, where $$s$$ is in terms of 1/s now, you not only have to multiply each term in the denominator by 1000 (which you have done correctly), but also have to multiply the numerator by 1000 because of the 'per millisecond' thing which you missed out there. And also, to convert milliamperes to amperes, divide the numerator by 1000 as well. I.e.,
$$I_2(s) = \frac{\frac{20}{1000}\cdot 1000}{1000e}\,\text{A} = \frac{0.02}{e}\,\text{A}$$
I hope this clears up why your answer differs by a factor of $$10^{-3}$$.
• I think you have inverted things. The frequency unit of $I_1(s)$ is $\frac{1}{\text{ms}}$ and of $I_2(s)$ is $\frac{1}{\text{s}}$. It seems that this make the dimensional inconsistency disappear. – Vinicius ACP Sep 3 at 17:31
• Either way, you missed out 'per second' or 'per millisecond'. – Mitu Raj Sep 3 at 17:35
• Corrected the 'inverted' thing to avoid confusion. – Mitu Raj Sep 3 at 17:55
• What you said makes sense to me. But if I anti-transform this new $I_2(s)$, now the problem will manifest itself into the time domain: $$i_2(t)=20\cdot e^{-1000t}H(1000t) \rightarrow i_2(0.001)=\frac{20}{e}\,\text{A}$$ – Vinicius ACP Sep 3 at 18:21
• No. your time domain is right as in the question. the exponent of e which is of the form (t or t/1) should be dimensionless hence your calculations are correct there. – Mitu Raj Sep 3 at 18:24
In signal theory, the Laplace Transformation is linear with respect to units like voltage and current. If you have a function/signal like f(kI, t) and I is the input current level, the LT is of the form kF(I,s), k being a constant, I is the current, t is time, s is complex frequency.
But the Laplace Transformation of a function of the form f(t) is not linear with respect to the time unit. It means that in general L{f(I,pt)} does not equal qF(I,s), p and q being constants. There are special formulas for LTs, e.g. L{f(I,ct)} = 1/c (F(I,(s/c))), c >0. With this last formula the result will be the same.
• "With this last formula the result will be the same." I tried to go that way before, but it didn't work either, look: $$i_1(t)=20\cdot e^{-t}H(t)\,\,\,\text{[mA,ms]}\,\, \\i_1(1000t)=20\cdot e^{-1000t}H(1000t)\,\,\,\ \text{[} \color{red}{mA}{,s]} \\i_2 = i_1(1000t) \cdot 10^{-3}\,\,\,\ \text{[} \color{red}{A}{,s]} \rightarrow I_2(s)=\frac{1}{1000}\frac{20}{\frac{s}{1000}+1}\cdot 10^{-3}=\frac{0.02}{s+1000}\,\,\,\left[\text{A,}\frac{1}{\text{s}}\right]$$ The current values given by $I_1(s)$ and $I_2(s)$ will still be different, just as happened in the question example. – Vinicius ACP Sep 3 at 16:55
• You have a product of 2 functions in the time domain, f(t) is the product 20 exp(-t) H(t). If H(t) is not changed into H(ct), the calculation is easy since H(t) can be ignored in many cases. But in this case here you have a product of 2 functions that needs to be transformed. How do you transform a product of 2 functions in the time domain? Maybe convolution in the frequency domain rings the bell. – xeeka Sep 3 at 19:41
I decided to investigate how changes in the time domain affect the s-domain, to see if I could find something that would help solve my question. And I found something! Let's start with $$i_1(t)$$ and investigate the values in both domains that produce the same current:
$$\bbox[8px,border:1px solid black] { i_1(t)=20\cdot e^{-t}H(t)\,\,\,\text{[mA,ms]}\,\, \require{AMScd} \begin{CD} @>>> \end{CD} \,\, I_1(s)=\frac{20}{s+1} \,\,\,\left[\text{mA,}\frac{1}{\text{ms}}\right] }$$
At this point we can conclude that $$I_1(e^t-1)=i_1(t)$$. Doing the same for $$i_2(t)$$:
$$\bbox[8px,border:1px solid black] { i_2(t)=0.02 \cdot e^{-1000t}H(1000t)\,\,\,\text{[A,s]}\,\, \require{AMScd} \begin{CD} @>>> \end{CD} \,\,I_2(s)=\frac{0.02}{s+1000}\,\,\,\left[\text{A,}\frac{1}{\text{s}}\right] }$$
But the Laplace Transformation of a function of the form f(t) is not linear with respect to the time unit.
Summing it all up in one diagram:
$$\require{AMScd} \begin{CD} i_1(t) @>\text{s-domain}>> I_1(e^t-1) \\ @V \text{linear} V V @VV \text{nonlinear}V\\ i_2(t) @>\text{s-domain}>> I_2(e^{1000t}-1000) \end{CD}$$
For $$t=1 \,\mathrm{ms}$$, we have:
$$\require{AMScd} \begin{CD} i_1(1) @>\text{s-domain}>> I_1(e-1) \\ @V \text{linear} V V @VV \text{nonlinear}V\\ i_2(0.001) @>\text{s-domain}>> I_2(e-1000) \end{CD}$$
So, now everything is clear. I erroneously assumed the linearity of the complex frequency. When I calculated $$I_2\left(1000 \cdot (e-1)\right)$$ I wasn't calculating what I was expecting.
Instead, the value obtained there corresponds to $$t \approx 0.00790776 \,\mathrm{s}$$ in the time domain (i.e. $$0.02/(1000e) = i_2(0.00790776)$$) and not $$t=0.001\,\mathrm{s}$$. Finally, calculating with the correct input:
$$I_2\left(e-1000\right) =\frac{0.02}{e-1000+1000}=\frac{0.02}{e}\,\text{A}\,\,\,\,✔️$$
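This correspondence can be sanity-checked numerically. The sketch below only evaluates the two closed-form transforms from the question ($I_1$ in mA with $s$ in 1/ms, $I_2$ in A with $s$ in 1/s) and confirms that the matching $s$-values are $e-1$ and $e-1000$, not $e-1$ and $1000(e-1)$:

```python
import math

e = math.e

def I1(s):
    # I1(s) = 20/(s+1), result in mA, s in 1/ms
    return 20 / (s + 1)

def I2(s):
    # I2(s) = 0.02/(s+1000), result in A, s in 1/s
    return 0.02 / (s + 1000)

reference = I1(e - 1) / 1000   # 0.02/e in A (mA converted to A)
naive = I2(1000 * (e - 1))     # linear rescaling of s: off by a factor of 10^-3
correct = I2(e - 1000)         # mapping worked out above: agrees with reference

print(reference, naive, correct)
```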
|
|
Whilst re-implementing the paper "A Cooperative Coevolutionary Approach to Function Optimization" by Mitchell Potter and Kenneth De Jong for a Computer Science assignment, I noticed a typo in one of the benchmark functions.
In the paper, the Schwefel function is listed on page 5 as
$$f(x) = 418.9829\,d + \sum_{i=1}^d x_i \sin{(\sqrt{|x_i|})}$$
Whereas the actual function is
$$f(x) = 418.9829\,d - \sum_{i=1}^d x_i \sin{(\sqrt{|x_i|})}$$
Notice the sign change in the middle.
I have found 3 references that back this up:
It's probably also worth noting that one of these sources gives the constant to greater accuracy: 418.982887.
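A quick numerical check supports the minus sign: with it, the function is approximately zero at the well-known global minimiser $x_i \approx 420.9687$, which is how the benchmark is meant to behave. (A sketch; the function name and the 10-dimensional test point are my own choices.)

```python
import math

def schwefel(x):
    # Correct form: f(x) = 418.9829*d - sum_i x_i * sin(sqrt(|x_i|))
    d = len(x)
    return 418.9829 * d - sum(xi * math.sin(math.sqrt(abs(xi))) for xi in x)

x_star = [420.9687] * 10           # known global minimiser in each coordinate
print(abs(schwefel(x_star)) < 0.01)  # True: the value is ~0 at the minimiser
```

With the plus sign from the paper, the same point evaluates to roughly twice the constant per dimension instead of zero.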
|
|
# cotangent to flags as a quiver variety
It is easy to realize the cotangent space of the flag variety $Fl=SL_n/B$ as a Nakajima quiver variety: consider the finite quiver of type A with dimension vectors v=(1,2,...,n-1), w=(0,...,0,n); an appropriate stability condition (polarization) amounts to the condition that the arrow from the i-dimensional space to the (i+1)-dimensional one is injective, and we end up with a complete flag in the n-space, the arrows in the opposite direction giving a cotangent vector.
Now, if I understand correctly, the other stability conditions (of which there are $n!$) should produce quiver varieties which are also isomorphic to $T^*(Fl)$. How does one see this, preferably using equally explicit linear algebra? Is it explained in the literature?
-
I'm not sure this is correct - can you elaborate on what your notion of 'stability' is? If you take the trivial character and form the GIT quotient of the fibre over 0 of the moment map used to define Nakajima quiver varieties then we obtain the categorical affine quotient which is Spec of the invariant polynomial functions on this fibre (see Ginzburg's notes 'Lectures on Nakajima Quiver Varieties', Thm. 4.5.6). This is how one obtains the 'quiver variety analog' of the Springer resolution. Perhaps I am misunderstanding your stability condition - where are the n! possibilities coming from? – George Melvin Nov 26 '12 at 18:44
I just mean the usual $\theta$ stability, as defined for example in Ginzburg's notes cited – Roman Nov 27 '12 at 2:13
Can you give a little more information on how 'the other stability conditions' arise? In particular, why are there n! different stability conditions? The $\theta=0$ stability condition (as mentioned in my previous comment) gives a singular variety so cannot be the same as the cotangent bundle. – George Melvin Nov 27 '12 at 19:30
Of course I am considering a generic $\theta$ in order to get cotangent bundle which is nonsingular variety. For example $\theta=(1,...,1)$. – Roman Nov 28 '12 at 0:50
They are compatible with the Weyl group action on the space of stability parameters, i.e., the Cartan subalgebra. The dimension vector $v$ is also changed compatibly with the Weyl group action, if we think of $w - Cv$ as a weight.
|
|
# A refined mean field approximation of synchronous discrete-time population models
1 POLARIS - Performance analysis and optimization of LARge Infrastructures and Systems
Inria Grenoble - Rhône-Alpes, LIG - Laboratoire d'Informatique de Grenoble
Abstract : Mean field approximation is a popular method to study the behaviour of stochastic models composed of a large number of interacting objects. When the objects are asynchronous, the mean field approximation of a population model can be expressed as an ordinary differential equation. When the objects are (clock-) synchronous the mean field approximation is a discrete time dynamical system. We focus on the latter. We study the accuracy of mean field approximation when this approximation is a discrete-time dynamical system. We extend a result that was shown for the continuous time case and we prove that expected performance indicators estimated by mean field approximation are $O(1/N)$-accurate. We provide simple expressions to effectively compute the asymptotic error of mean field approximation, for finite time-horizon and steady-state, and we use this computed error to propose what we call a \emph{refined} mean field approximation. We show, by using a few numerical examples, that this technique improves the quality of approximation compared to the classical mean field approximation, especially for relatively small population sizes.
Document type: Journal article
Performance Evaluation, Elsevier, 2018, pp. 1-27. 〈10.1016/j.peva.2018.05.002〉
https://hal.inria.fr/hal-01845235
Contributor: Nicolas Gast <>
Submitted on: Friday, July 20, 2018 - 13:04:59
Last modified on: Thursday, October 11, 2018 - 08:48:05
Document(s) archived on: Sunday, October 21, 2018 - 18:07:20
### Files
GaLaMa17.pdf
Files produced by the author(s)
### Citation
Nicolas Gast, Diego Latella, Mieke Massink. A refined mean field approximation of synchronous discrete-time population models. Performance Evaluation, Elsevier, 2018, pp.1-27. 〈10.1016/j.peva.2018.05.002〉. 〈hal-01845235〉
|
|
Talks
Spring 2020
# Lockable Obfuscation
Wednesday, March 25th, 2020 9:45 am10:45 am
In this talk we will discuss the notion of lockable obfuscation. In a lockable obfuscation scheme there exists an obfuscation algorithm Obf that takes as input a program P and a string called the lock, and outputs an obfuscated program P'. One can evaluate the obfuscated program P' on any input x, where the output of evaluation is 1 iff P(x) = lock; otherwise the output is a rejecting symbol. The security requirement states that if the lock is uniformly random, then the obfuscated program P' hides the program P. We will first discuss one of the applications of lockable obfuscation - anonymous encryption schemes. Next, we will see a construction of lockable obfuscation, followed by a proof of security based on the Learning with Errors (LWE) assumption.
This talk is based on two concurrent works by Goyal-K-Waters and Wichs-Zirdelis.
|
|
# strings and text in py5#
We’ll be looking at how to render text in py5 in this section. Before we get into it, we’ll have to briefly discuss strings and functions revolving around them.
To put it simply, a string is just something we call text in programming terms. Really, we deal with one character of text at a time, and a string is a series (or string… get it?) of those characters. These data types are always wrapped in quotation marks when we use them. You can use single or double quote marks, but only one at a time - something unorthodox "like this' won’t work correctly.
Let’s mess around with strings in the form of variables to understand how they work.
size(500, 500)
background('#004477')
hello = 'hello world'
print(hello)
We created a variable, hello, and stored a string, “hello world”, inside of it. Running this code will print that string back to you.
What if you need to store data with quotation marks inside of it? This code…
whatsup = 'what's up!'
…would cause an error - the string has ended at that apostrophe/single quotation mark, and everything left on the line is unreadable. This, however, would work:
whatsup = "what's up!"
Or, you can put a backslash directly before that apostrophe. This is called escaping the character, and will force it to be read as regular text. Either works, but you might find escaping preferable if you’re dealing with large strings that could contain both kinds of quotation marks.
size(500, 500)
background('#004477')
hello = 'hello world'
print(hello)
whatsup = "what\'s up!"
question = 'is your name really "world"?'
print(whatsup)
print(question)
## concatenation and formatting of strings#
When we’re dealing with numbers (whether whole - integers - or decimal - floats), you can just perform addition on them using the + operator. However, when you’re dealing with strings, the + operator will instead concatenate the strings, or connect them together.
size(500, 500)
background('#004477')
hello = 'hello world'
print(hello)
whatsup = "what\'s up!"
question = 'is your name really "world"?'
print(whatsup)
print(question)
all = hello + whatsup + question
print(all)
You’ll notice this code prints a bit of a mess:
hello worldwhat's up!is your name really "world"?
Concatenating just joins your strings together exactly as-is, with nothing separating them. When you’re concatenating strings, you’ll often want to add in spaces, punctuation or other separators yourself.
size(500, 500)
background('#004477')
hello = 'hello world'
print(hello)
whatsup = "what\'s up!"
question = 'is your name really "world"?'
print(whatsup)
print(question)
all = hello + '. ' + whatsup + ' ' + question
print(all)
This displays a much nicer looking string of text.
hello world. what's up! is your name really "world"?
There’s an alternative in Python (and thus py5) called string formatting. Using the % sign as a placeholder, you can format some text any way you like and then tell py5 which strings to use for those placeholders. For the same result as our concatenation above:
all = ('%s. %s %s') % (hello, whatsup, question)
This has its advantages, but it’s a bit harder to read and understand as a beginner, so we’ll be using concatenation for now. Before we move on, we’ll go over a few different ways to work with text that may come in useful.
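To see that the two approaches agree, the sketch below redefines the three variables (so it can run on its own) and builds the same sentence both ways:

```python
hello = 'hello world'
whatsup = "what's up!"
question = 'is your name really "world"?'

# Concatenation with manual separators, as above.
concatenated = hello + '. ' + whatsup + ' ' + question
# printf-style formatting with %s placeholders.
formatted = '%s. %s %s' % (hello, whatsup, question)

print(formatted)                  # hello world. what's up! is your name really "world"?
print(formatted == concatenated)  # True
```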
### length#
“Length” (len for short) will just give you the total number of characters in any string.
print( len(all) ) # displays total number of characters (52)
### slice notation#
Slice notation (using square brackets) in Python lets you fetch the 1st, or 5th, or 37th (etc, etc) character from any string. We refer to the number of the character you’re trying to get as the index, and we start counting at 0 instead of at 1.
print( all[0] ) # displays the first character (h)
print( all[1] ) # displays character at index 1 (e)
print( all[4] ) # displays character at index 4 (o)
You can also use a colon (:) in your slice notation to get sections, or ranges, of characters in a string. To get everything up to, but not including, the character at this index:
print( all[:4] ) # displays: hell
A number on each side of the colon can get only the characters between those two index numbers:
print( all[1:4] ) # displays: ell
You can also get everything from a given index to the end of the string:
print( all[4:] ) # displays: o world... (and so on)
And you can even use negative numbers for an interesting wrapping-around behavior!
# [:-x] returns everything from index 0 up to
# but not including the fourth last character
print( all[:-4] ) # ...our name really "wor
# [-x:] returns everything from the fourth last character
# to the end of the string
print( all[-4:] ) # ld"?
# [x:-y] returns everything from index 4
# up to but not including the fourth last character
print( all[4:-4] ) # o world. ...eally "wor
## string methods#
In addition to functions, which take our variables as an argument, Python and py5 have methods, which are appended to the end of our variables. For now, it’s enough to know the difference between them and how you write them. Using our length function has the format len(all), while a method might look more like all.upper() (which we’ll explain in a moment). Trying to use a function as a method (all.len()) or a method as a function (upper(all)) will give you an error, so it’s important not to mix them up. Let’s go over a few methods that you can add to your code.
.upper() will give you a version of the string with all lowercase characters converted to uppercase.
print( all.upper() ) # HELLO WORLD... (and so on)
.title() will give you a version of the string in title case, where the first letter of each word is capitalized.
print( all.title() ) # Hello World...
.count() will give you the number of times the character (or sequence of characters) appears in a string.
print( all.count('o') ) # 4
print( all.count('or') ) # 2
.find() will return the index at which you can first find a given string. If it can’t find this substring (string within a string) at all, it will instead return -1.
print( all.find('world') ) # 6
print( all.find('lemon') ) # -1
If something appears multiple times, it might be helpful to start .find() at a certain point - you can add a second optional argument that is the index to begin finding from, and even a third argument, the index where .find() should stop.
print( all.find('world',7) ) # 45
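As a sketch with a fresh string (this is standard Python string behavior, not py5-specific), the second and third arguments bound where the search happens:

```python
phrase = 'one ring to rule them all, one ring to find them'
print(phrase.find('ring'))         # first match, at index 4
print(phrase.find('ring', 5))      # search again from index 5: next match at 31
print(phrase.find('ring', 5, 20))  # nothing between indexes 5 and 20, so -1
```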
Using just the all variable we’ve created, how can you print the following to the console?
To start you off, here’s how you might print Hello -
print( all[0:5].title() )
You’ll have to combine a few different methods and concatenate them to get the output you’re looking for.
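As a worked illustration with a made-up string (not the tutorial's all variable), here's how slicing, a method, and concatenation can be combined in one expression:

```python
greeting = 'hello world'
# slice out each word, restyle it, and glue the pieces back together
result = greeting[:5].title() + ', ' + greeting[6:].upper() + '!'
print(result)  # Hello, WORLD!
```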
## typography#
We’ve just learned how to wrangle text within py5. The next step is actually displaying it in our sketches.
Typography is just arranging and styling text (or type) to make it legible, readable, and ideally even enjoyable to look at. You’re probably exposed to a lot of bad typography, with letters that are spaced strangely, hard-to-read or ugly fonts, and huge paragraphs that your eye gets lost in. Good typography is often invisible, but it’s worth understanding.
## fonts#
Fonts on computers used to be made of pixels, so for every size of font there was a different set of letter glyphs. Now we use vector graphics for digital fonts, which can be scaled to any size. py5 actually comes with a default sans-serif font, which is used if you don’t load any external fonts into your sketch. If you don’t know, a serif font will have small decorative lines on the tips of characters; a sans-serif font has none.
You’ll also run into fonts that are described as monospace or monospaced. These can be serif or sans-serif, but each letter will take up exactly the same amount of space horizontally. This can be much more legible in code, where a line of 25 characters will always have the same visual spacing and length as a line of 25 different characters. This is especially useful if you’re trying to line text up into columns.
Let’s create a variable storing a string in a new sketch, using Hanlon’s Razor.
size(500, 500)
background('#004477')
fill('#FFFFFF')
stroke('#0099FF')
stroke_weight(3)
razor = 'Never attribute to malice that which is adequately explained by stupidity.'
You can’t see it yet, but py5 knows about it now. Let’s go over some functions that you can use to actually display this text visually.
## text(string, x, y)#
text() draws text in the sketch window, coloring it with your current fill() color. The second and third arguments position it.
size(500, 500)
background('#004477')
fill('#FFFFFF')
stroke('#0099FF')
stroke_weight(3)
razor = 'Never attribute to malice that which is adequately explained by stupidity.'
text(razor, 0,50)
You can also add a fourth and fifth argument to define its width and height.
size(500, 500)
background('#004477')
fill('#FFFFFF')
stroke('#0099FF')
stroke_weight(3)
razor = 'Never attribute to malice that which is adequately explained by stupidity.'
text(razor, 0,50, 100, 1000)
## text_size(size)#
text_size() sets a size, in pixels, for all subsequent text.
size(500, 500)
background('#004477')
fill('#FFFFFF')
stroke('#0099FF')
stroke_weight(3)
razor = 'Never attribute to malice that which is adequately explained by stupidity.'
text(razor, 0,50)
text_size(20)
text(razor, 0,100)
## create_font(font name, size)#
create_font() can be used to load a font on your computer into a format usable by py5. To get a list of fonts you could use, you can use Py5Font.list(). Remember that a font on your computer may not exist on someone else’s computer, so it’s best to include the font file with your sketch. Notice that we save this created font to a variable, exactly as we would load an image into py5.
print ( Py5Font.list() ) # show a list of available fonts
## text_font(font)#
Now that we have a font loaded in, we can use text_font() to use it for all subsequent text. Combining everything together…
size(500, 500)
background('#004477')
fill('#FFFFFF')
stroke('#0099FF')
stroke_weight(3)
razor = 'Never attribute to malice that which is adequately explained by stupidity.'
text(razor, 0,50)
text_size(20)
text(razor, 0,100)
text(razor, 0,150)
## text_leading(leading)#
text_leading() sets the spacing, in pixels, between the lines of any subsequent text() functions.
size(500, 500)
background('#004477')
fill('#FFFFFF')
stroke('#0099FF')
stroke_weight(3)
razor = 'Never attribute to malice that which is adequately explained by stupidity.'
text(razor, 0,50)
text_size(20)
text(razor, 0,100)
text(razor, 0,150)
text(razor, 0,200, 250,100)
## text_align(alignment)#
text_align() sets the text alignment for any subsequent text - you can use LEFT, CENTER or RIGHT.
size(500, 500)
background('#004477')
fill('#FFFFFF')
stroke('#0099FF')
stroke_weight(3)
razor = 'Never attribute to malice that which is adequately explained by stupidity.'
text(razor, 0,50)
text_size(20)
text(razor, 0,100)
text(razor, 0,150)
text(razor, 0,200, 250,100)
text_align(CENTER)
text(razor, 0,250, 250,100)
### text_width(string)#
Finally, text_width() calculates and gives you the width of any string of text. You can use this to easily draw something that fits with any length of text!
size(500, 500)
background('#004477')
fill('#FFFFFF')
stroke('#0099FF')
stroke_weight(3)
razor = 'Never attribute to malice that which is adequately explained by stupidity.'
text(razor, 0,50)
text_size(20)
text(razor, 0,100)
text(razor, 0,150)
|
|
# Induced map in K-theory by a "trivial" bimodule
Let $$R$$ be a ring (not necessarily commutative) and let $$P_{\bullet}$$ be a perfect $$R$$-bimodule (chain complex). I will denote the category of perfect right $$R$$-chain complexes by $$\textbf{Perf}(R)$$. The endofunctor $$-\otimes_{R}P_{\bullet} :\textbf{Perf}(R)\rightarrow \textbf{Perf}(R)$$ induces a map in algebraic $$K$$-theory given by
$$K_{\ast}(-\otimes_{R}P_{\bullet}):K_{\ast}(R)\rightarrow K_{\ast}(R)$$.
If the class $$[P_{\bullet}] \in K_{0}(R)$$ is trivial $$(=0)$$ does it mean that $$K_{\ast}(-\otimes_{R}P_{\bullet})$$ is a 0 map ?
• When you write $[P_{\bullet}] \in K_{0}(R)$, do you mean the class of $P_\bullet$ considered as a complex of right $R$-modules (forgetting the left $R$-module structure)? Jun 14 '20 at 20:09
• @JeremyRickard Yes Jun 14 '20 at 20:14
No. Let $$R=\mathbb{Z}\times\mathbb{Z}$$, let $$P$$ and $$Q$$ be the projective modules $$\mathbb{Z}\times0$$ and $$0\times\mathbb{Z}$$, and let $$P_\bullet=\dots\longrightarrow0\longrightarrow P\otimes_\mathbb{Z}P \stackrel{0}{\longrightarrow}Q\otimes_\mathbb{Z}P\longrightarrow0\longrightarrow\dots$$
|
|
## Monday, 1 August 2011
### RAID increases failure rate
Surprising, isn't it? Actually, RAID does indeed increase the failure rate. Adding disks decreases the MTBF: even with RAID5, the mean time between disk failures goes down.
In fault-tolerant storage, the time between failures (MTBF) does not matter. What matters is the time between data loss events. This is called either mean time to data loss (MTTDL) or mean time between data losses (MTBDL).
You know you can set up a three-way RAID1 (three mirrored copies instead of two), i.e. the mirror can have more than two disks. So, let's imagine a RAID1 with an infinite number of disks. This unit will have an MTBF of zero, because at any given moment one of the infinite number of disks is failing. It will also be continuously rebuilding while still delivering infinite linear read speed. Still, this imaginary device will have zero probability of losing data to disk failure, because the infinite number of disks cannot all fail at the same time.
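The distinction can be made concrete with a back-of-the-envelope sketch (the standard first-order approximations, with hypothetical disk numbers): the array's MTBF shrinks as disks are added, while the mirror's MTTDL grows enormously.

```python
mtbf_disk = 1_000_000  # hours, hypothetical single-disk MTBF
mttr = 24              # hours to replace and rebuild a failed disk

# MTBF of the array: with N disks, *some* disk fails N times as often
n_disks = 2
mtbf_array = mtbf_disk / n_disks

# First-order MTTDL for a 2-disk RAID1 mirror: data is lost only when
# the second disk dies during the rebuild window of the first
mttdl_mirror = mtbf_disk ** 2 / (n_disks * mttr)

print(f'array MTBF:   {mtbf_array:,.0f} h')    # failures happen MORE often
print(f'mirror MTTDL: {mttdl_mirror:,.0f} h')  # data loss is far LESS likely
```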
|
|
# Logic Gates in Computer Architecture
## Logic Gates:
A logic gate is an electronic circuit which makes logical decisions. To arrive at these decisions, the most common logic gates used are OR, AND, NOT, NAND and NOR gates. The NAND and NOR are called the universal gates. The exclusive-OR gate is another logic gate which can be constructed using basic gates such as AND, OR and NOT gates.
## OR Gate:
The OR gate performs logical addition commonly known as OR function. The OR gate has two or more inputs and only one output. The operation of the OR gate is such that a HIGH(1) on the output is produced when any of the inputs is HIGH(1). The output is LOW(0) only when all inputs are LOW(0).
If X and Y are the input variables of an OR gate and Z is its output, then
```Z = X+Y
```
## AND Gate:
The AND gate performs logical multiplication commonly known as AND function. It has two or more inputs and a single output. The output of an AND gate is HIGH only when all the inputs are HIGH. Even if any one of the inputs is LOW, the output will be LOW.
If X and Y are the input variables of an AND gate and Z is its output, then
```Z = X.Y
```
## NOT Gate:
The NOT gate performs the basic logical function called inversion or complementation. The purpose of this gate is to convert one logic level into the opposite logic level. It has one input and one output. When a HIGH level is applied to an inverter, a LOW level appears at its output and vice versa.
## NAND Gate:
NAND is a contraction of NOT-AND. It has two or more inputs and only one output. When all inputs are HIGH, the output is LOW. If any one or both inputs are LOW, then the output is HIGH. The logic symbol for the NAND gate is shown in the figure below:
## NOR Gate:
NOR is a contraction of NOT-OR. It has two or more inputs and only one output. The output is HIGH only when all the inputs are LOW. If any one or both inputs are HIGH, then the output is LOW. The logic symbol for the NOR gate is shown in the figure below:
## Ex-OR Gate:
An Ex-OR gate is a gate with two or more inputs and one output. The output of a two-input Ex-OR gate assumes a HIGH state if one and only one input assumes a HIGH state. This is equivalent to saying that the output is HIGH if either input A or input B is HIGH exclusively, and LOW when both inputs are 1 or both are 0 simultaneously.
## Ex-NOR Gate:
The Ex-NOR Gate is an EX-OR gate followed by an inverter. It has two or more inputs and one output. The output of a two-input Ex-NOR Gate assumes a HIGH state if both the inputs assume the same logic state or have an even number of 1’s, and its output is LOW when the inputs assume different logic states or have an odd number of 1’s.
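The gate definitions above can be sketched as small Python functions (for two-input gates over the values 0 and 1) and checked against their truth tables:

```python
def AND(x, y):  return x & y
def OR(x, y):   return x | y
def NOT(x):     return 1 - x
def NAND(x, y): return NOT(AND(x, y))  # NOT-AND
def NOR(x, y):  return NOT(OR(x, y))   # NOT-OR
def XOR(x, y):  return x ^ y           # HIGH when exactly one input is HIGH
def XNOR(x, y): return NOT(XOR(x, y))  # HIGH when both inputs match

# Print the full truth table for every gate
print('x y | AND OR NAND NOR XOR XNOR')
for x in (0, 1):
    for y in (0, 1):
        print(x, y, '|', AND(x, y), OR(x, y), NAND(x, y),
              NOR(x, y), XOR(x, y), XNOR(x, y))
```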
|
|
# Electron cyclotron resonance
Electron cyclotron resonance is a phenomenon observed both in plasma physics and condensed matter physics. An electron in a static and uniform magnetic field will move in a circle due to the Lorentz force. The circular motion may be superimposed with a uniform axial motion, resulting in a helix, or with a uniform motion perpendicular to the field, e.g., in the presence of an electrical or gravitational field, resulting in a cycloid. The angular frequency (ω = 2π f ) of this cyclotron motion for a given magnetic field strength B is given (in SI units[1]) by
$\omega_{ce}=\frac{eB}{m}$.
where e is the elementary charge and m is the mass of the electron. For the commonly used microwave frequency 2.45 GHz and the bare electron charge and mass, the resonance condition is met when B = 875 G = 0.0875 T.
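The resonance condition can be checked numerically from the SI constants given in the footnote:

```python
import math

e = 1.602e-19  # elementary charge, C
m = 9.109e-31  # electron mass, kg
f = 2.45e9     # microwave frequency, Hz

omega = 2 * math.pi * f  # angular frequency, rad/s
B = omega * m / e        # field satisfying omega = e*B/m
print(f'B = {B:.4f} T = {B * 1e4:.0f} G')  # about 0.0875 T = 875 G
```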
### In plasma physics
An ionized plasma may be efficiently produced or heated by superimposing a static magnetic field and a high-frequency electromagnetic field at the electron cyclotron resonance frequency. In the toroidal magnetic fields used in magnetic fusion energy research, the magnetic field decreases with the major radius, so the location of the power deposition can be controlled within about a centimeter. Furthermore, the heating power can be rapidly modulated and is deposited directly into the electrons. These properties make electron cyclotron heating a very valuable research tool for energy transport studies. In addition to heating, electron cyclotron waves can be used to drive current. The inverse process of electron cyclotron emission can be used as a diagnostic of the radial electron temperature profile.
### ECR Ion Sources
Since the early 1980's, following the award-winning pioneering work done by Dr. Richard Geller[2], Dr. Claude Lyneis, and Dr. H. Postma [3]; respectively from French Atomic Energy Commission, Lawrence Berkeley National Laboratory and the Oak Ridge National Laboratory, the use of electron cyclotron resonance for efficient plasma generation, especially to obtain large numbers of multiply charged ions, has acquired a unique importance in various technological fields. Many diverse activities depend on electron cyclotron resonance technology, including
• advanced cancer treatment, where ECR ion sources are crucial for proton therapy,
• advanced semiconductor manufacturing, especially for high density DRAM memories, through plasma etching or other plasma processing technologies,
• electric propulsion for spacecraft, where a broad range of devices (HiPEP, some ion thrusters, and electrodeless plasma thrusters) rely on ECR plasma generation,
• for particle accelerators, on-line mass separation and radio-active ion charge breeding [4].
• and, as a more mundane example, painting of plastic bumpers for cars.
The ECR ion source makes use of the Electron Cyclotron Resonance to heat a plasma. Microwaves are injected into a volume, at the frequency corresponding to the Electron Cyclotron Resonance defined by a magnetic field applied to a region inside the volume. The volume contains a low pressure gas. The microwaves heat free electrons in the gas which in turn collide with the atoms or molecules of the gas in the volume and cause ionization. The ions produced correspond to the gas type used. The gas may be pure, a compound gas or can be a vapour of a solid or liquid material.
ECR ion sources are able to produce singly charged ions with high intensities (e.g. H+ and D+ ions of more than 100 mA (electrical) in DC mode [5] using a 2.45 GHz ECR ion source).
For multiply charged ions, the ECR ion source has the advantage that it is able to confine the ions for long enough for multiple collisions to take place (leading to multiple ionization) and that the low gas pressure in the source avoids recombination. The VENUS ECR ion source at Lawrence Berkeley National Laboratory has produced an intensity of 0.25 mA (electrical) of Bi29+ [6].
Some of these industrial fields would not even exist without the use of this fundamental technology, which makes electron cyclotron resonance ion and plasma sources one of the enabling technologies of today's world.
### In condensed matter physics
Within a solid the mass in the cyclotron frequency equation above is replaced with the effective mass tensor $\begin{Vmatrix}m^*\end{Vmatrix}$. Cyclotron resonance is therefore a useful technique to measure effective mass and Fermi surface cross-section in solids. In a sufficiently high magnetic field at low temperature in a relatively pure material
$\begin{matrix}\omega_{ce} > 1/\tau \\ \hbar \omega_{ce} > k_B T \\ \end{matrix}$
where τ is the carrier scattering lifetime, kB is Boltzmann's constant and T is temperature. When these conditions are satisfied, an electron will complete its cyclotron orbit without engaging in a collision, at which point it is said to be in a well-defined Landau level.
## References
1. ^ In SI units, the elementary charge e has the value 1.602×10-19 coulombs, the mass of the electron m has the value 9.109×10–31 kilograms, the magnetic field B is measured in teslas, and the angular frequency ω is measured in radians per second.
2. ^ R. Geller, Proc. 1st Int. Conf. Ion Sources, Saclay, p. 537, 1969
3. ^ H. Postma, Phys. Lett. A, 31, p196, 1970
4. ^ Handbook of Ion Sources, B. Wolf, ISBN 0-8493-2502-1, pp. 136-146
5. ^ R. Gobin et al., Saclay High Intensity Light Ion Source Status, Euro. Particle Accelerator Conf. 2002, Paris, France, June 2002, p. 1712
6. ^ VENUS reveals the future of heavy-ion sources CERN Courier, 6 May 2005
|
|
# Math Help - Classification of Non-Abelian Groups of Order 42
1. ## Classification of Non-Abelian Groups of Order 42
I'm trying to do this methodically. Let $G$ be a group of order 42. The Sylow Theorem says that $G$ has a normal subgroup of order 7, and the number of Sylow-3 subgroups is either 1 or 7.
If there is only one Sylow-3 subgroup, then $Z_3\trianglelefteq G$, so $Z_3Z_7\cong Z_3\times Z_7$ and is normal because its index is 2. We now look for homomorphisms $\phi_i:Z_2\to\mbox{Aut}(Z_3\times Z_7)$. We note that $\mbox{Aut}(Z_3\times Z_7)\cong Z_2\times Z_6$ and write $Z_2$ as $\langle~x~\rangle$, and $Z_2\times Z_6$ as $\langle~y~\rangle\times\langle~z~\rangle$. (Both have identity $1$.)
1. $\phi_0:x\mapsto(1,1)$ This is the trivial mapping and gives the only abelian group $G=Z_{42}$.
2. $\phi_1:x\mapsto(1,z^3)$
3. $\phi_2:x\mapsto(y,1)$
4. $\phi_3:x\mapsto(y,z^3)$
The last three mappings result in three non-abelian groups of order 42, namely $Z_{21}\rtimes_{\phi_i}Z_2$ for $i\in\{1,2,3\}$. My guess is that $\phi_1$ produces $Z_3\times D_{14}$, $\phi_2$ produces $Z_7\times D_6$, and $\phi_3$ produces $D_{42}$, but I don't know how to prove it. So my question is: What is the best way to determine whether these (#2, #3, and #4) are all unique up to isomorphism and, furthermore, whether they are isomorphic to more familiar groups?
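The claim that $\mbox{Aut}(Z_3\times Z_7)\cong Z_2\times Z_6$ can be verified computationally, since for a cyclic group this automorphism group is the unit group $(\mathbb{Z}/21)^\times$; a quick sketch:

```python
from math import gcd

# Aut(Z_21) is the multiplicative group of units modulo 21
units = [a for a in range(1, 21) if gcd(a, 21) == 1]
print(len(units))  # order 12, consistent with Z_2 x Z_6

# Elements of order 2 are the candidate images of the generator of Z_2,
# i.e. they index the nontrivial homomorphisms Z_2 -> Aut(Z_21)
involutions = [a for a in units if a != 1 and (a * a) % 21 == 1]
print(involutions)  # three of them, matching phi_1, phi_2, phi_3 above
```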
Next, if there are seven Sylow-3 subgroups, then we look for homomorphisms $\varphi:Z_3\to\mbox{Aut}(Z_7)$. We note that $\mbox{Aut}(Z_7)\cong Z_6$ and write $Z_3$ as $\langle~x~\rangle$ and $Z_6$ as $\langle~y~\rangle$. With this presentation, $\varphi:x\mapsto y^2$ is a valid homomorphism, so $Z_7\rtimes_{\varphi}Z_3\cong Z_7Z_3$. I will call this group $F_{21}$ because I think groups like this are called Frobenius groups. So now I want to look for homomorphisms $\psi_i:Z_2\to\mbox{Aut}(F_{21})$. Again writing $Z_2$ as $\langle~x~\rangle$, we have:
1. $\psi_0:x\mapsto1$ This is the trivial mapping and gives $G=F_{21}\times Z_2$.
2. $\psi_1:x\mapsto\,?$
I don't know whether $?$ can be anything valid because I don't understand the structure of $F_{21}$. How can I determine whether its automorphism group has any elements of order 2?
2. ## Re: Classification of Non-Abelian Groups of Order 42
Originally Posted by redsoxfan325
I'm trying to do this methodically. Let $G$ be a non-abelian group of order 42. The Sylow Theorem says that $G$ has a normal subgroup of order 7, and the number of Sylow-3 subgroups is either 1 or 7.
If $n_3=1$, then $Z_3\trianglelefteq G$, so $Z_{21}=Z_3\times Z_7\cong Z_3Z_7$ and is normal because its index is 2. If we look for homomorphisms $\phi:Z_2\to\mbox{Aut}(Z_{21})$, we find the trivial one (which gives $Z_{42}$), or if $Z_2=\langle~x~\rangle$ and $\mbox{Aut}(Z_{21})\cong Z_2\times Z_6=\langle~y~\rangle\times\langle~z~\rangle$, then $\phi_1(x)=(1,z^3)$, $\phi_2(x)=(y,1)$, and $\phi_3(x)=(y,z^3)$. This gives us three non-abelian groups of order 42, namely $Z_{21}\rtimes_{\phi_i}Z_2$ for $i\in\{1,2,3\}$. What is the best way to determine whether these are all unique up to isomorphism and, furthermore, whether they are isomorphic to more familiar groups?
Next, if $n_3=7$, then we look for homomorphisms $\varphi:Z_3\to\mbox{Aut}(Z_7)$. If $Z_3=\langle~x~\rangle$ and $\mbox{Aut}(Z_7)=Z_6=\langle~y~\rangle$, then $\varphi(x)=y^2$ is a valid homomorphism, so $Z_7\rtimes_{\varphi}Z_3\cong Z_7Z_3$, which I will call $F_{21}$ because I think these are called Frobenius groups. So now I want to look for homomorphisms $\psi:Z_2\to\mbox{Aut}(F_{21})$. Aside from the trivial one, which gives $G=F_{21}\times Z_2$, I don't know whether there are any other ones because I don't understand the structure of $F_{21}$, so in particular I don't know whether its automorphism group has any elements of order 2.
What a messy problem. Let's see if we can work on the second paragraph. So, we want to look for morphisms $Z_2\to\text{Aut}(Z_3Z_7)$. If $Z_3Z_7$ is abelian then we know that $\text{Aut}(Z_3Z_7)\cong\text{Aut}(\mathbb{Z}_{21})\cong\left(\mathbb{Z}_{21}\right)^\times\cong\mathbb{Z}_3^\times\times\mathbb{Z}_7^\times\cong\mathbb{Z}_2\times\mathbb{Z}_6$, which you can check has three subgroups of order two, thus ostensibly three possibilities. If $Z_3Z_7$ is nonabelian you can check that its automorphism group has only one subgroup of order 2 up to conjugacy, and thus there is one choice there.
Can you work from there, and come back with what you find? All four possibilities I mentioned are, in fact, nonisomorphic.
3. ## Re: Classification of Non-Abelian Groups of Order 42
Originally Posted by Drexel28
What a messy problem. Let's see if we can work on the second paragraph. So, we want to look for morphisms $Z_2\to\text{Aut}(Z_3Z_7)$. If $Z_3Z_7$ is abelian then we know that $\text{Aut}(Z_3Z_7)\cong\text{Aut}(\mathbb{Z}_{21})\cong\left(\mathbb{Z}_{21}\right)^\times\cong\mathbb{Z}_3^\times\times\mathbb{Z}_7^\times\cong\mathbb{Z}_2\times\mathbb{Z}_6$, which you can check has three subgroups of order two, thus ostensibly three possibilities. If $Z_3Z_7$ is nonabelian you can check that its automorphism group has only one subgroup of order 2 up to conjugacy, and thus there is one choice there.
Can you work from there, and come back with what you find? All four possibilities I mentioned are, in fact, nonisomorphic.
Sorry for writing a somewhat messy question. I actually have done the work from there in my opening post. If $Z_3Z_7$ is abelian, then those three possibilities are the three groups I found in the first paragraph ( $Z_{21}\rtimes_{\phi_i}Z_2$ for $i=1,2,3$). If $Z_3Z_7$ is non-abelian, then this is where I get stuck because I'm not sure how to verify (short of a long and tedious calculation) that $Z_3Z_7$ has one subgroup of order 2 (up to conjugacy). Furthermore, what criteria do you use to determine that all four groups are non-isomorphic?
4. ## Re: Classification of Non-Abelian Groups of Order 42
Originally Posted by Drexel28
What a messy problem.
I tried to clean up my original post a bit.
5. ## Re: Classification of Non-Abelian Groups of Order 42
Originally Posted by redsoxfan325
I tried to clean up my original post a bit.
I did this a while ago as part of a project. I'll try to find the paper for you and post it here. I believe what I did was show that the groups arising in the abelian case can all be distinguished by element/order counting.
|
|
IMAGE FORMING APPARATUS
Imported: 10 Mar '17 | Published: 27 Nov '08
Katsuyuki Yamazaki
USPTO - Utility Patents
Abstract
An image forming apparatus that forms a latent image on an image carrier based on image data, the apparatus includes: a first calculating unit adapted to calculate an exposure amount of a pixel of interest included in a partial region configured of a plurality of pixels that constitute the image data; a second calculating unit adapted to calculate an exposure amount of surrounding pixels that are located around the pixel of interest and constitute the partial region; and a toner consumption amount calculating unit adapted to calculate a toner consumption amount of the pixel of interest based on the exposure amount of the pixel of interest and the exposure amount of the surrounding pixels, wherein the second calculating unit calculates the exposure amount of the pixel of interest by weighting the image data corresponding to the surrounding pixels on a pixel-by-pixel basis.
Description
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an image forming technique.
2. Description of the Related Art
In recent years, as printers become advanced in functionality, and provide higher quality images at a higher speed, there is an increased demand for the reduction of the cost of the printers themselves and the consumable items used in image formation, such as toner. To satisfy such demand, proposals have been made to reduce wasted toner, shorten the adjustment time of electrophotographic systems, and suppress the running cost by efficiently controlling the use of image forming materials (developing materials) such as toner (e.g., Japanese Patent Laid-Open Nos. 06-138769, 2002-189385, 06-11969, 06-175500, 2003-122205, and 10-239980).
As a method for detecting the amount of toner consumed, there is a method called video count. Video count is a method for estimating the amount of the image forming material consumed when producing an image, by adding/cumulating image data to be produced using a printer or digital copying machine, or the driving time of a writing device (e.g., the laser emission time of a laser printer's exposure unit). In other words, the amount of toner consumed is estimated through addition/cumulation by counting the number of dot pixels written onto a photosensitive member.
A conventional video count method shall now be described. When handling image data in bitmap format, a large amount of memory is required if all the bitmap information is to be held (stored) in a memory or the like. Accordingly, digital image forming apparatuses generally handle data as video streams, which are easily processed on a pixel-by-pixel basis.
The video count method generally involves real-time addition/cumulation of the video streams when forming an image. However, it is often the case that the added/cumulated value (video count value) is not directly proportional to the amount of toner that is actually consumed. The reason is because the pixels cannot be approximated accurately to a rectangular shape. FIG. 7 is a schematic diagram illustrating the potential of a linear latent image in the main scanning direction on a photosensitive drum of a conventional laser scanner. As shown in 7a to 7e of FIG. 7, a substantially circular exposure spot extends beyond a rectangular pixel, resulting in escaping light (leaking light).
For example, in the stages shown in 7a to 7e of FIG. 7, the influence of the leaking light cannot be ignored when calculating a change in the leaking light and exposure condition and the amount of toner consumed based on that change.
In an electrophotographic image forming apparatus, a single dot (a single pixel) written onto the photosensitive member is influenced heavily by the dots adjacent thereto. For this reason, the amount of toner fixed to a single dot (pixel) that is written varies depending on whether the adjacent pixels are white or black (i.e., whether the single dot (pixel) is surrounded by white data or black data).
Accordingly, the method in which a single pixel is counted as one without considering the state of the adjacent pixels has a problem in that a significant margin of error occurs over time because of such addition/cumulation, and an accurate estimation of the amount of toner remaining or the amount of toner consumed cannot be achieved using the video count value. In order to eliminate the disparity between the video count value and the amount of toner consumed, a proposal has been made to correct the number of pixels of interest using the write information of the adjacent pixels; specifically, when performing a video count taking the center pixel of a 3×3 matrix as the pixel of interest, a correction is made by the number of pixels that are written among the adjacent eight pixels surrounding the pixel of interest (for example, Japanese Patent Laid-Open No. 2006-195246).
However, according to the video count method of Japanese Patent Laid-Open No. 2006-195246, the number of pixels of interest is corrected only by the write information of the pixels that are adjacent to the pixel of interest. Accordingly, this method has a problem in that the relationship between the video count value and the actual amount of toner consumed is insufficient, and a margin of error is left between the resulting video count value and the actual amount of toner consumed. The reason why a margin of error is left between the resulting video count value and the actual amount of toner consumed can be considered as follows.
That is, the reason is that the cumulated number of pixels and the cumulated exposure amount are treated as equivalent. In other words, when cumulating the number of adjacent pixels as a binary value, it is assumed that the pixels located to the right, left, top, and bottom of the pixel of interest and the pixels located diagonally to the pixel of interest have the same weight. In fact, the degree of influence exerted on the pixel of interest varies according to the position (distance) of the adjacent pixels: measured between pixel centers, the diagonal neighbors are farther away by a factor of √2. Further, the pixel of interest is also influenced by pixels that are not adjacent to it but are located in its periphery, with the degree of influence again varying according to distance, similar to that of the adjacent pixels. It is therefore necessary to take this into consideration.
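A hypothetical sketch of the idea (the kernel, names, and weights here are illustrative, not taken from the patent): weight each surrounding pixel of a 3×3 neighborhood by the inverse of its center-to-center distance, so that diagonal neighbors contribute less than edge-adjacent ones.

```python
import math

# Inverse-distance weights: edge neighbors 1, diagonal neighbors 1/sqrt(2)
KERNEL = [[1 / math.sqrt(2), 1.0, 1 / math.sqrt(2)],
          [1.0,              0.0, 1.0],
          [1 / math.sqrt(2), 1.0, 1 / math.sqrt(2)]]

def weighted_count(image, r, c):
    """Weighted exposure contribution of the 3x3 neighborhood around (r, c)."""
    total = float(image[r][c])  # the pixel of interest itself, weight 1
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr or dc) and 0 <= rr < len(image) and 0 <= cc < len(image[0]):
                total += KERNEL[dr + 1][dc + 1] * image[rr][cc]
    return total

image = [[0, 1, 0],
         [1, 1, 1],
         [0, 1, 0]]
print(weighted_count(image, 1, 1))  # 1 + four edge neighbors at weight 1 = 5.0
```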
It is important to take such a margin of error into consideration to maintain the accuracy of the video count.
SUMMARY OF THE INVENTION
In view of the problems encountered with the conventional technology described above, it is an object of the present invention to provide an image forming technique with which the amount of an image forming material (toner) consumed can be calculated with high accuracy.
Alternatively, it is an object of the present invention to provide an image forming technique with which it is possible to accurately detect the state of the image forming apparatus based on the calculated amount of consumed toner, and make a notification regarding the timing of maintenance.
Alternatively, it is an object of the present invention to provide an image forming technique with which it is possible to make a notification regarding the timing of toner supply based on the calculated amount of consumed toner, and reduce unnecessary toner supply and wasted toner.
According to one aspect of the present invention, there is provided an image forming apparatus that forms a latent image on an image carrier based on image data, the apparatus comprising: a first calculating unit adapted to calculate an exposure amount of a pixel of interest included in a partial region configured of a plurality of pixels that constitute the image data; a second calculating unit adapted to calculate an exposure amount of surrounding pixels that are located around the pixel of interest and constitute the partial region; and a toner consumption amount calculating unit adapted to calculate a toner consumption amount of the pixel of interest based on the exposure amount of the pixel of interest and the exposure amount of the surrounding pixels, wherein the second calculating unit calculates the exposure amount of the pixel of interest by weighting the image data corresponding to the surrounding pixels on a pixel-by-pixel basis.
According to the present invention, it is possible to calculate the amount of consumed image forming material (toner) with high accuracy.
Alternatively, it is possible to accurately detect the state of the image forming apparatus based on the calculated amount of consumed toner, and make a notification regarding the timing of an image adjustment.
Alternatively, it is possible to make a notification regarding the timing of toner supply based on the calculated amount of toner consumption, thereby reducing unnecessary toner supply and wasted toner.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
DESCRIPTION OF THE EMBODIMENTS
Hereinafter, preferred embodiments of the present invention shall be described with reference to the accompanying drawings. However, it should be noted that the constituent elements described in these embodiments are merely exemplary, and the technical scope of the present invention is defined by the appended claims, rather than by the individual embodiments described below.
FIG. 1 is a diagram illustrating a schematic configuration of an electrophotographic image forming apparatus provided with a laser scanner according to an embodiment of the present invention. A semiconductor laser device 101 shown in 1a of FIG. 1 irradiates a laser beam based on a control signal 105 that controls the laser emission. A polygon mirror 102a reflects the laser beam irradiated by the semiconductor laser device 101, and irradiates the laser beam to a photosensitive drum 121, which is an image carrier, through an f lens 104. A polygon mirror driving apparatus 102b can control the rotational drive of the polygon mirror 102a. The f lens 104 is a lens that converts the laser beam such that the laser beam is scanned in the direction (also referred to as main scanning direction) orthogonal to the rotating direction (also referred to as sub-scanning direction) of the photosensitive drum (image carrier) 121 at a constant velocity.
A photosensor 103 is disposed in a laser beam path 106, and can detect the scanning start of the laser beam in the main scanning direction.
The semiconductor laser device 101, the polygon mirror 102a, the driving apparatus 102b, the photosensor 103, the f lens 104, the control signal 105, and the laser beam path 106 together form a laser beam scanner 100.
As shown in 1b of FIG. 1, the laser beam scanner 100 irradiates a laser beam onto the photosensitive drum 121. The surface of the photosensitive drum 121 is charged to a predetermined potential by a photosensitive drum charger 123. An electrostatic latent image is formed on the charged surface of the photosensitive drum by the irradiation of the laser beam. The photosensitive drum 121 can be rotatively driven under control of a driving apparatus 122. A developing unit 124 bonds an image forming material (toner) onto the electrostatic latent image of the photosensitive drum 121, forming a toner image. A toner density detecting sensor 126 measures the density of the toner image on the photosensitive drum 121. A recording medium such as paper is fed through a conveying path 127. A transfer unit 125 transfers the toner image on the photosensitive drum 121 onto the recording medium.
The photosensitive drum 121, the driving apparatus 122, the photosensitive drum charger 123, the developing unit 124, the transfer unit 125, the toner density detecting sensor 126, and the conveying path 127 together form a toner image forming unit 120.
An image data holding unit 151 receives an input of image data from an external source and holds the data. A density converting unit 152 converts the density of the image data received from the image data holding unit 151 to a laser emission amount using a density conversion table to produce emission amount data. The density converting unit 152 outputs the produced emission amount data to the semiconductor laser device 101 in the form of control signal 105 that controls the laser emission. Reference numeral 160 denotes a video count unit. The image data holding unit 151, the density converting unit 152, and the video count unit 160 together form an image processing unit 150.
A system control unit 190 controls the entire image forming apparatus. The laser beam scanner 100, the toner image forming unit 120, and the image processing unit 150 can be operated in conjunction with each other in accordance with instructions from the system control unit 190.
The image processing unit 150 receives various image data from an information processing apparatus connected to the image forming apparatus.
The image processing unit 150 produces data (emission amount data) for providing a laser emission amount necessary for forming an image based on the received image data, and outputs the produced data in the form of control signal 105, which controls the laser emission, to the laser beam scanner 100. The semiconductor laser device 101 of the laser beam scanner 100 irradiates a laser beam based on the control signal 105. The beam is scanned on the photosensitive drum 121, and an electrostatic latent image is formed. The toner image forming unit 120 develops the electrostatic latent image with toner.
The video count unit 160 shall be described in detail next. FIG. 3 is an enlarged view of an internal circuit of the video count unit 160.
Reference numeral 301 denotes a video input signal (video stream), which is image data corresponding to a laser emission pattern. When forming an image, the video stream 301 is input to the video count unit 160.
Reference numerals 310a to 310h denote line buffers (storage units) that store data of a plurality of scan lines corresponding to the main scanning direction along which the laser is irradiated. The eight line buffers correspond to eight scans performed by the laser beam scanner 100. The line buffers 310a to 310h are switched in sequence by a line synchronizing signal (not shown), and image data is written into the eight line buffers in sequence. When the writing of image data into the eight line buffers 310a to 310h is finished, the first written line buffer is overwritten with the image data of the ninth line. In this manner, the writing operation is performed for the tenth line, the eleventh line, and so on until the end of the video stream 301 is processed. Herein, it is assumed that the video stream 301 is blank for seven lines at its head and tail, and each line contains seven pixels' worth of blank data that correspond to the right and left ends of an image (see FIG. 6).
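The circular writing into the eight line buffers can be sketched as follows (a hypothetical Python model; the buffer count and the seven-pixel lines follow the description above):

```python
# Sketch of the circular line-buffer writing described above:
# eight buffers, overwritten in sequence once all are full.
NUM_BUFFERS = 8

class LineBuffers:
    def __init__(self):
        self.buffers = [None] * NUM_BUFFERS
        self.next_index = 0  # buffer to be (over)written next

    def write_line(self, line):
        # Lines 1-8 fill buffers 310a-310h in sequence; the 9th line
        # then overwrites the first-written buffer, and so on.
        self.buffers[self.next_index] = list(line)
        self.next_index = (self.next_index + 1) % NUM_BUFFERS

bufs = LineBuffers()
for n in range(10):            # write ten lines of seven pixels each
    bufs.write_line([n] * 7)
# The 9th line (n=8) overwrote the first buffer, the 10th (n=9) the second.
```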
Reference numerals 311a to 311g denote output data that are output from the line buffers 310a to 310g. Seven scans' worth of image data scanned by the laser beam scanner 100, seven pixels per scan (i.e., 49 pixels' worth of data in total, the number of pixels constituting a partial region), is input to a multiplier 330, where the data is stored. The line buffers are switched in sequence, and the data output from the line buffers is input to the multiplier 330.
When the data is written into the line buffers for seven lines including the blank, the video count unit 160 starts the calculation of an effective video count.
Reference numeral 320 denotes a weighting data holding unit that holds weighting data weighted based on the distance (pixel distance) to a pixel of interest. A signal 322 is a signal for inputting weighting data from the system control unit 190 to the weighting data holding unit 320. A coefficient value calculated based on a laser spot profile obtained from a laser scanner optical system (e.g., numerical values shown in FIG. 5B) is inputted to the weighting data holding unit 320.
The weighting data holding unit 320 (holding unit) can output the output signals 321a to 321g of the weighting data to the multiplier 330 in synchronization with the start of the calculation of the video count.
The multiplier 330 defines one pixel in the image data of the fourth line, which is the center of the seven lines, as a pixel of interest, and multiplies the output data of the line buffers (exposure amounts) by the weighting data for the pixel of interest and the 48 surrounding pixels.
The multiplier 330 defines the weighted result for the pixel of interest included in the partial region that constitutes a page as center pixel exposure amount data. In this case, the multiplier 330 functions as a first calculating unit. For the 48 surrounding pixels that are located around the pixel of interest and constitute the partial region, the multiplier 330 multiplies each pixel's data by the weighting data corresponding to its pixel distance, and defines the obtained results as surrounding pixel exposure amount data. In this case, the multiplier 330 functions as a second calculating unit.
The multiplier 330 then outputs, to an exposure amount adder 340 (third calculating unit), output signals (partial exposure amount data 331a to 331g) indicative of the result of the multiplication of the total of 49 pixels, that is, the center pixel exposure amount data and the surrounding pixel exposure amount data of the 48 pixels.
The exposure amount adder 340 adds the output signals (partial exposure amount data 331a to 331g) indicative of the result of the multiplication output from the multiplier 330. The exposure amount adder 340 (third calculating unit) outputs the result of the addition as total exposure amount data 341 of the pixel of interest.
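A minimal sketch of the multiplier 330 and the exposure amount adder 340, assuming a 7 pixel by 7 pixel partial region and weighting data given as a 7x7 array (the function name is hypothetical):

```python
def total_exposure(region, weights):
    # Weighted sum over the 7x7 partial region: multiply each pixel's
    # image data (exposure) by its weighting data (multiplier 330),
    # then add the 49 products (exposure amount adder 340).
    # weights[3][3] applies to the pixel of interest at the center.
    total = 0.0
    for i in range(7):
        for j in range(7):
            total += region[i][j] * weights[i][j]
    return total
```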
A conversion lookup table (LUT) 350 (converting unit) converts the total exposure amount data 341 of the pixel of interest output from the exposure amount adder 340 to data indicative of the amount of toner consumption per pixel (pixel toner consumption amount data) according to the LUT data, and outputs the pixel toner consumption amount data 351.
As used herein, the LUT data is a conversion coefficient for converting the amount of exposure to the amount of toner consumption. A signal 352 is a signal that is transmitted from the system control unit 190 and used to input the LUT data into the conversion lookup table (LUT) 350. A conversion coefficient for converting the exposure amount obtained through a self-adjustment sequence, which shall be described later, to an amount of toner consumption is input to the conversion lookup table (LUT) 350.
The pixel toner consumption amount data 351 converted by the conversion lookup table (LUT) 350 is input to a toner consumption amount calculating unit 360. The toner consumption amount calculating unit 360 can calculate the amount of toner consumption per page, and is initialized upon receiving a control signal 362 that is transmitted from the system control unit 190. This initialization clears the data of the amount of toner consumption per page to zero.
The toner consumption amount calculating unit 360 can accumulate the pixel toner consumption amount data 351 one after another for the effective region, excluding the blank region and the like.
The process described above is performed while changing the pixel of interest, and the toner consumption amount calculating unit 360 accumulates the pixel toner consumption amount data 351 to eventually calculate an amount of toner consumption per page 361, which it outputs.
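The conversion LUT 350 and the accumulator 360 can be sketched as follows, assuming for illustration that the LUT is a simple list indexed by a clamped, quantized exposure value (the actual quantization is not specified in the text):

```python
def page_toner(total_exposures, lut):
    # Convert each pixel's total exposure amount to a pixel toner
    # consumption amount via the LUT (conversion lookup table 350),
    # then accumulate over the effective region of the page
    # (toner consumption amount calculating unit 360).
    page = 0.0
    for e in total_exposures:
        idx = min(max(int(round(e)), 0), len(lut) - 1)  # clamp to table
        page += lut[idx]
    return page
```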
The physical background as to why the number of pixels in the partial region that constitutes a page is set to seven pixels by seven pixels shall be described hereinafter. FIG. 4A is a graph illustrating a relationship (exposure amount profile) between the distance (pixel) from the center of the laser spot of a given pixel of interest and the amount of exposure on the surface of a photosensitive drum.
The laser spot has a spot shape of a substantially perfect circle, and has an optical intensity sufficient to form a single pixel. It is known from optical designing and numerical values that there is leaking light in the distance (position) two pixels away, three pixels away, and so on, from a given pixel of interest. The distribution of the amount of exposure is symmetric with respect to the pixel of interest.
If the target accuracy is set to a margin of error of about 1% relative to the total amount of exposure, in the case of FIG. 4A, it is sufficient to consider the distribution of the amount of exposure up to the distance three pixels away from a pixel of interest (0). For the distance four or more pixels away from the pixel of interest (0), the calculation of the amount of exposure is not necessary. In the present embodiment, the video count unit 160 calculates the total amount of exposure for a rectangular region (seven pixels by seven pixels) that secures three pixels in the right, left, upper and lower directions relative to the pixel of interest (0). The video count unit 160 can set the size of the rectangular region to n pixels by n pixels, or n pixels by m pixels (where n and m are natural numbers) according to the target accuracy relative to the total amount of exposure.
FIG. 5A is a graph illustrating an exemplary distribution of weighting data stored in the weighting data holding unit 320 based on the exposure amount profile of FIG. 4A. In FIG. 5A, the x direction corresponds to the main scanning direction along which the laser beam is scanned on the photosensitive drum 121. The y direction corresponds to the sub-scanning direction corresponding to the rotating direction of the photosensitive drum 121. The z direction indicates weighting data of the pixels located in a plane defined by the main scanning direction (x) and the sub-scanning direction (y).
A block 510 indicates a pixel of image data. In FIG. 5A, the weighting data of a block 530 is 42.7. The unit of the weighting data is indicated as a relative value in the calculation process, and is made dimensionless. FIG. 5B is a table illustrating exemplary weighting data of the blocks located in a plane defined by the main scanning direction (x) and the sub-scanning direction (y).
A single block corresponds to a substantially circular laser spot 520 that is converged onto the photosensitive drum 121 by the laser beam scanner 100 (FIG. 5A). Because the overlapping of a plurality of partial exposure amounts corresponds linearly to a change in the amount of charge on the photosensitive drum 121, an addition algorithm can be used to determine the total amount of exposure.
As used herein, the weighting data H can be expressed by the following formula (1), where the function that indicates a weighting amount calculating LUT is represented by f, the position of a pixel of interest is represented by (x0,y0), and the position of the pixel of the calculated coefficient is represented by (x,y). Note that the weighting coefficient can be set asymmetrically in terms of the distances of the main scanning direction and the sub-scanning direction according to the shape of the laser spot, which is a flat circular shape or the like.
H = f(x − x0, y − y0)    (1)
In the case of the pixel of interest being located at (S1, 1), for example, f(0,0)=42.72 (FIG. 5B).
FIG. 6 is a diagram illustrating an exemplary distribution of the amount of exposure of the pixels of image data. The numerical value (Nij) of each block gives the exposure amount data, arranged as a two-dimensional array. For example, the partial exposure amount data of a pixel of interest 601 is expressed by formula (2), using the weighting data shown in FIG. 5B and the exposure amount data.
$\sum_{i=1}^{7}\left[\sum_{j=1}^{7}\left\{N_{ij}\,f(i-4,\,j-4)\right\}\right]\quad(2)$
When the pixel of interest (Nij) is changed to the adjacent pixel 602 located on the right, the partial exposure amount data is calculated by the formula (3).
$\sum_{i=2}^{8}\left[\sum_{j=1}^{7}\left\{N_{ij}\,f(i-5,\,j-4)\right\}\right]\quad(3)$
The calculation described above is performed for the other pixels of interest, whereby the partial exposure amount data can be calculated for each. By changing the pixel of interest and adding up the partial exposure amount data calculated for each pixel of interest one after another, the amount of consumption of the image forming material (toner) per page can be calculated.
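Formulas (2) and (3) generalize to a sliding-window sum. A sketch for an arbitrary pixel of interest at (x0, y0), where `f` stands for the weighting function of formula (1) and the image is assumed to carry the blank border described above so that the window stays in bounds:

```python
def partial_exposure(image, x0, y0, f):
    # Formula-(2)-style weighted sum: N[x][y] * f(x - x0, y - y0)
    # over the 7x7 window centered on the pixel of interest.
    s = 0.0
    for x in range(x0 - 3, x0 + 4):
        for y in range(y0 - 3, y0 + 4):
            s += image[x][y] * f(x - x0, y - y0)
    return s
```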
A method for determining the conversion profile (conversion coefficient) of the conversion lookup table (LUT) 350 shall be described next.
The image forming apparatus according to the present embodiment can execute a self-adjustment sequence for finely adjusting (controlling) the operation of the laser beam scanner 100 and the toner image forming unit 120. The system control unit 190 starts the self-adjustment sequence according to the operating conditions such as a change in the surrounding environment in which the image forming apparatus is installed, a continuous operation, and suspend time (e.g., power off time). The system control unit 190 can execute the self-adjustment sequence, for example, in the interval between image forming jobs (i.e., after the completion of a first image forming job, and before the start of a second image forming job), at the start-up of the image forming apparatus, or the like. The conversion profile (conversion coefficient) is determined based on the self-adjustment sequence.
FIG. 2A is a flowchart illustrating the process flow of the self-adjustment sequence.
In step S201, the system control unit 190 determines whether or not there is a change in the state of the image forming apparatus. As described earlier, the change in the state is determined based on the operation conditions such as a change in the surrounding environment, a continuous operation, and suspend time. If no change occurs in the state of the image forming apparatus, the system control unit 190 stands by while monitoring the occurrence of a change in the state, without performing the self-adjustment sequence.
If it is determined in step S201 that there is a change in the state (Yes in S201), the process advances to step S202, and the self-adjustment sequence starts.
In step S203, the system control unit 190 causes the toner image forming unit 120 to form a measurement pattern, and loads the measurement pattern using the toner density detecting sensor 126. In step S204, the system control unit 190 determines the density of the measurement pattern based on the loaded result.
In step S205, the system control unit 190 compares the density of the measurement pattern against a predetermined reference density. If the density of the measurement pattern is equal to or greater than the reference density (Yes in S205), the self-adjustment sequence ends (S207).
If it is determined in step S205 that the density of the measurement pattern is less than the reference density (No in S205), the process advances to step S206.
In step S206, the system control unit 190 adjusts the image forming conditions such that the conditions for the reference density are satisfied by, for example, controlling the control signal 105 to increase the intensity of the laser irradiated from the semiconductor laser device 101, to increase the developing high voltage or the like.
Then, the process returns to step S203, and the same process is repeated. If the density of a newly formed measurement pattern is equal to or greater than the reference density (Yes in S205), the self-adjustment sequence ends.
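The loop of steps S203 through S206 can be sketched as follows (the callables are hypothetical stand-ins for pattern formation/measurement and for the laser-intensity or developing-bias adjustment; the iteration cap is added for illustration only):

```python
def self_adjustment(measure_density, increase_intensity, reference,
                    max_iters=10):
    # S203/S204: form a measurement pattern and determine its density.
    # S205: compare against the reference density; if satisfied, end.
    # S206: otherwise adjust the image forming conditions and retry.
    for _ in range(max_iters):
        density = measure_density()
        if density >= reference:
            return True          # S207: self-adjustment sequence ends
        increase_intensity()
    return False                 # safety cap reached
```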
FIG. 2B is a diagram illustrating an exemplary measurement pattern formed in the self-adjustment sequence. The measurement pattern is formed by irradiating a pulse width modulated (PWM) laser by gradually changing the laser emission time (DUTY). The measurement pattern is formed in a shape and at a position that can be detected by the toner density detecting sensor 126. Based on the result of the detection of the toner density detecting sensor 126, the intensity of the laser beam irradiated from the semiconductor laser device 101 and the amount of exposure, as well as the amount of charge necessary for the adjustment of the toner image forming unit 120, the developing high voltage (developing bias) and the like are adjusted by the system control unit 190.
The system control unit 190 calculates a conversion coefficient for converting the amount of exposure to the amount of toner consumption based on the exposure intensity (amount of exposure) of the laser used for image formation and the toner density detected by the toner density detecting sensor 126.
FIG. 4B is a graph illustrating toner consumption amount characteristics (profiles) and the exposure intensity (exposure amount) gradually changed in the self-adjustment sequence, in which four different exemplary profiles A, B, C and D obtained by changing the conditions are shown. The horizontal axis represents pixel total exposure amount (exposure amount), and the vertical axis represents pixel toner consumption amount. All the units are made dimensionless and shown in relative values in the calculation process.
The amount of exposure of the photosensitive drum 121 is proportional to the amount of toner consumption as an amount of charge on the photosensitive drum 121, but because there are upper and lower limits on the amount of charge on the surface of the photosensitive drum, the minimum saturation amount of toner consumption and the maximum saturation amount of toner consumption are determined per pixel regardless of the amount of exposure. Accordingly, in each profile, the amount of toner consumption is nearly flat around the upper and lower limits.
The system control unit 190 can select a profile suitable for exposure conditions from the profiles of FIG. 4B, and calculate a conversion coefficient for converting the amount of exposure to the amount of toner consumption. In this case, as for the data of the points except for the points used to measure the measurement pattern, the system control unit 190 can perform an interpolation calculation based on the profiles of FIG. 4B to obtain a conversion coefficient for converting the amount of exposure to the amount of toner consumption.
This interpolation calculation process is performed based on monotonic characteristics and saturation characteristics of the electrostatic development, and it is devised so that the accuracy degradation will be small when reproducing toner development characteristics.
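The interpolation between measured profile points can be sketched as linear interpolation, clamped at both ends to model the saturation characteristics; this is a minimal sketch under the monotonicity assumption stated above, not the exact procedure, which the text does not detail:

```python
def interpolate_profile(points, exposure):
    # points: measured (exposure, toner) pairs, sorted by exposure.
    # Clamping at the ends reflects the flat minimum/maximum toner
    # consumption amounts near the charge limits (FIG. 4B).
    if exposure <= points[0][0]:
        return points[0][1]
    if exposure >= points[-1][0]:
        return points[-1][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= exposure <= x1:
            t = (exposure - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
```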
The system control unit 190 writes the calculated conversion coefficient into the conversion lookup table (LUT) 350 with the signal 352. The conversion lookup table (LUT) 350 converts the total exposure amount data 341 of the pixel of interest output from the exposure amount adder 340 to the data indicative of the amount of toner consumption per pixel (pixel toner consumption amount data) according to the conversion coefficient (LUT data). The conversion lookup table (LUT) 350 outputs the converted pixel toner consumption amount data 351 to the toner consumption amount calculating unit 360.
The toner consumption amount calculating unit 360 accumulates the pixel toner consumption amount data 351 of the effective region, excluding the blank region and the like, one after another. By changing the pixel of interest, calculating the pixel toner consumption amount data 351, and accumulating the data, the amount of toner consumption of an entire page can be calculated.
According to the present invention, it is possible to calculate the amount of consumption of an image forming material with high accuracy.
The calculated amount of toner consumption is applicable to various applications. For example, it is possible to measure or determine the life of various consumable items, such as a toner cartridge, an electrophotographic process cartridge, a photosensitive drum, a photosensitive drum cleaner and a fixer cleaner, with high accuracy.
For example, the system control unit 190 can measure or determine the life of a consumable item by storing the result of the calculation of the amount of toner consumption, and referring to the stored cumulated value of the amount of toner consumption.
It is also possible that the system control unit 190 controls the timing (adjustment timing) of executing the self-adjustment sequence (FIG. 2A) that adjusts an image such as adjusting the density or tone of an image based on the stored cumulated value of the amount of toner consumption.
The system control unit 190 can also accurately detect the state of the image forming apparatus. Thereby, it is possible to make a notification regarding the timing of maintenance at an appropriate timing, and execute maintenance (adjusting operation), so that the inactive time (down time) of the image forming apparatus can be shortened.
The system control unit 190 can also make a notification regarding the timing of toner supply based on the stored cumulated value of the amount of toner consumption. Thereby, the amount of toner supplied from a toner bottle to the developing unit is controlled precisely, so that unnecessary toner supply and wasted toner can be reduced.
(Variations)
In the above embodiment, the calculation of the amount of consumption of an image forming material (toner) was described taking, as an example, a single laser beam image forming apparatus in which one pixel is defined as a pixel of interest, and leaking light occurs around the pixel of interest. However, the spirit of the present invention is not limited thereto, and it is also possible to apply the present invention to, for example, a photolithography apparatus in which leaking light occurs around the pixel of interest serving as a light-emitting spot, such as a multi-beam laser scanner.
The image forming apparatus described above is configured to include a memory that holds the image data of a pixel of interest and the surrounding pixels, but similar effects can be attained also by performing a calculation using digital data with software of a calculator before forming an image. The range of the partial image region (two dimensional area) formed by a single pixel of interest and the surrounding pixels is not limited to seven pixels by seven pixels, and it is possible to set the range to any range by, for example, taking the extent of the influence of leaking light into consideration based on the exposure amount profile shown in FIG. 4A.
For example, the range of the partial image region may be nine pixels including eight pixels that surround a pixel of interest, and it is also possible to set the range to nine pixels by nine pixels, or a range larger than nine pixels by nine pixels. In addition, the partial image region is not limited to a square shape; the size defined by the main scanning direction along which a laser beam is irradiated and the sub-scanning direction orthogonal to the main scanning direction can be set to, for example, three pixels by five pixels. The range of the partial image region can be determined by trading off the desired accuracy against the calculating unit resources. The weighting coefficient can be set or calculated asymmetrically in terms of the distances in the main scanning direction and the sub-scanning direction, according to the relationship between the laser's main scanning direction and time, the flat circular shape of the laser spot, or the like, or an asymmetric LUT can be set, so as to increase the accuracy of calculating the amount of toner consumption.
As for various image data in which dots or blank dots occur randomly even in a one dimensional area of only the main scanning direction, it is possible to obtain higher calculation accuracy of the amount of consumption of the image forming material (toner) by adding the total amount of exposure and calculating the proximity effect of leaking light.
The above-described embodiment employs the configuration in which the weighting coefficient of a region of seven pixels by seven pixels is determined based on the distance between a pixel of interest and the surrounding pixels. However, a mathematical calculation formula can be used instead of the LUT as long as the weighting coefficient can be determined by the correlation with the distance.
Similarly, as for the conversion lookup table (LUT) 350 that converts from the total amount of exposure to the amount of toner consumption, a mathematical calculation formula can be used instead of the LUT as long as the relationship between the total amount of exposure and the amount of consumption of toner in a pixel of interest can be formulated by the mathematical calculation formula.
Other Embodiments
It should be noted that the object of the invention is attained also by supplying a storage medium in which software program code that implements the functions of the foregoing embodiment is recorded, to a system or apparatus, by loading the program code stored in the storage medium with a computer (or CPU or MPU) of the system or apparatus, and then executing the program code.
In this case, the program code per se loaded from the storage medium implements the functions of the aforementioned embodiment, and the storage medium in which the program code is stored constitutes the present invention.
Examples of storage media that can be used for supplying the program code are a flexible disk, hard disk, optical disk, magneto-optical disk, CD-ROM, CD-R, non-volatile type memory card, ROM, etc.
The functions of the foregoing embodiment are implemented by executing computer-loaded program code. Also, an operating system (OS) or the like running on the computer based on the instructions of the program code may perform all or a part of the actual processing so that the functions of the foregoing embodiment can be implemented by this processing.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2007-134580 filed on May 21, 2007 and No. 2008-124973 filed on May 12, 2008, which are hereby incorporated by reference herein in their entirety.
Claims
1. An image forming apparatus that forms a latent image on an image carrier based on image data, the apparatus comprising:
a first calculating unit adapted to calculate an exposure amount of a pixel of interest included in a partial region configured of a plurality of pixels that constitute the image data;
a second calculating unit adapted to calculate an exposure amount of surrounding pixels that are located around the pixel of interest and constitute the partial region; and
a toner consumption amount calculating unit adapted to calculate a toner consumption amount of the pixel of interest based on the exposure amount of the pixel of interest and the exposure amount of the surrounding pixels,
wherein the second calculating unit calculates the exposure amount of the pixel of interest by weighting the image data corresponding to the surrounding pixels on a pixel-by-pixel basis.
2. The image forming apparatus according to claim 1,
wherein the toner consumption amount calculating unit includes a third calculating unit adapted to calculate a total exposure amount of the pixel of interest based on the exposure amount of the pixel of interest and the exposure amount of the surrounding pixels; and
a converting unit adapted to convert the total exposure amount to the toner consumption amount of the pixel of interest.
3. The image forming apparatus according to claim 1,
wherein the toner consumption amount calculating unit calculates a toner consumption amount of the image data based on the toner consumption amount of the pixel of interest calculated for each pixel of interest.
4. The image forming apparatus according to claim 1, further comprising a storage unit adapted to store a plurality of scan lines' worth of image data, the scan lines corresponding to a main scanning direction when forming a latent image on the image carrier,
wherein the storage unit outputs image data corresponding to the number of pixels that constitute a predetermined partial region.
5. The image forming apparatus according to claim 1, further comprising a holding unit adapted to hold weighting data that is weighted for each pixel in the partial region,
wherein the first calculating unit calculates the exposure amount of the pixel of interest based on image data of the pixel of interest and the weighting data of the pixel of interest, and
the second calculating unit calculates the exposure amount of the surrounding pixels based on image data of the surrounding pixels and the weighting data of the surrounding pixels.
6. The image forming apparatus according to claim 2,
wherein the converting unit refers to a conversion lookup table that stores a conversion coefficient for converting an exposure amount to a toner consumption amount, and converts the total exposure amount to the toner consumption amount of the pixel of interest.
7. The image forming apparatus according to claim 1,
wherein the size of the partial region is determined by the number of pixels in a main scanning direction along which a laser is irradiated and the number of pixels in a sub-scanning direction orthogonal to the main scanning direction.
8. The image forming apparatus according to claim 1, further comprising a control unit adapted to control the operation of the image forming apparatus,
wherein the control unit determines an adjustment timing of adjusting an image of the image forming apparatus based on the toner consumption amount.
9. The image forming apparatus according to claim 8,
wherein the control unit makes a notification regarding a timing of toner supply based on the toner consumption amount.
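The calculation the claims describe is essentially a weighted-window (convolution-like) sum of exposure over the partial region around the pixel of interest, followed by a lookup-table conversion to a toner amount. The sketch below is our illustrative reading of the claims, not code from the patent: the window size, the weighting data, the LUT indexing, and the name `tonerForPixel` are all hypothetical.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical sketch: toner consumption of a pixel of interest, computed as a
// per-pixel weighted sum of exposure over the partial region (pixel of interest
// plus surrounding pixels), then converted via a conversion lookup table.
double tonerForPixel(const std::vector<std::vector<double>>& image,
                     const std::vector<std::vector<double>>& weights,
                     const std::vector<double>& lut, // exposure -> conversion coefficient
                     int y, int x) {
    int h = (int)weights.size(), w = (int)weights[0].size();
    double total = 0.0; // total exposure amount of the pixel of interest
    for (int dy = 0; dy < h; ++dy)
        for (int dx = 0; dx < w; ++dx) {
            int yy = y + dy - h / 2, xx = x + dx - w / 2;
            if (yy < 0 || yy >= (int)image.size() ||
                xx < 0 || xx >= (int)image[0].size()) continue;
            total += weights[dy][dx] * image[yy][xx]; // per-pixel weighting
        }
    // Convert the total exposure to a toner consumption amount via the LUT.
    int idx = std::min((int)total, (int)lut.size() - 1);
    if (idx < 0) idx = 0;
    return lut[idx] * total;
}
```

Summing per-image-region results in the same way would give the per-page toner consumption of claim 3.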
|
|
Educational Codeforces Round 67 (Rated for Div. 2)
A. Stickers and Toys
time limit per test: 2 seconds, memory limit per test: 256 megabytes
Your favorite shop sells n Kinder Surprise chocolate eggs. You know that exactly s stickers and exactly t toys are placed in n eggs in total.
Each Kinder Surprise can be one of three types:
• it can contain a single sticker and no toy;
• it can contain a single toy and no sticker;
• it can contain both a single sticker and a single toy.
But you don’t know which type a particular Kinder Surprise has. All eggs look identical and indistinguishable from each other.
What is the minimum number of Kinder Surprise Eggs you have to buy to be sure that, whichever types they are, you’ll obtain at least one sticker and at least one toy?
Note that you do not open the eggs in the purchasing process, that is, you just buy some number of eggs. It’s guaranteed that the answer always exists.
Input
The first line contains the single integer T (1≤T≤100) — the number of queries.
Next T lines contain three integers n, s and t each (1≤n≤10^9, 1≤s,t≤n, s+t≥n) — the number of eggs, stickers and toys.
All queries are independent.
Output
Print T integers (one number per query) — the minimum number of Kinder Surprise Eggs you have to buy to be sure that, whichever types they are, you’ll obtain at least one sticker and one toy.
Example
input
3
10 5 7
10 10 10
2 1 1
output
6
1
2
Note
In the first query, we have to take at least 6 eggs because there are 5 eggs with only a toy inside and, in the worst case, we’ll buy all of them.
In the second query, all eggs have both a sticker and a toy inside, that’s why it’s enough to buy only one egg.
In the third query, we have to buy both eggs: one with a sticker and one with a toy.
A giveaway problem.
#include<bits/stdc++.h>
using namespace std;
#define rep(i,a,b) for(int i=(a);i<=(b);++i)
#define dep(i,a,b) for(int i=(a);i>=(b);--i)
#define pb push_back
typedef long long ll;
const int maxn=(int)2e5+100;
int n,s,t;
int main(){
int T;cin>>T;
while(T--){
cin>>n>>s>>t;
int ans=0;
ans=max(ans,n-s+1); // n-s eggs have no sticker: n-s+1 purchases guarantee a sticker
ans=max(ans,n-t+1); // n-t eggs have no toy: n-t+1 purchases guarantee a toy
printf("%d\n",ans);
}
}
B. Letters Shop
time limit per test: 2 seconds, memory limit per test: 256 megabytes
The letters shop showcase is a string s, consisting of n lowercase Latin letters. As the name tells, letters are sold in the shop.
Letters are sold one by one from the leftmost to the rightmost. Any customer can only buy some prefix of letters from the string s.
There are m friends, the i-th of them is named t_i. Each of them is planning to estimate the following value: how many letters (the length of the shortest prefix) would s/he need to buy if s/he wanted to construct her/his name from the bought letters. The name can be constructed if each letter is present in an equal or greater amount.
• For example, for s=”arrayhead” and t_i=”arya” 5 letters have to be bought (”arrayhead”).
• For example, for s=”arrayhead” and t_i=”harry” 6 letters have to be bought (”arrayhead”).
• For example, for s=”arrayhead” and t_i=”ray” 5 letters have to be bought (”arrayhead”).
• For example, for s=”arrayhead” and t_i=”r” 2 letters have to be bought (”arrayhead”).
• For example, for s=”arrayhead” and t_i=”areahydra” all 9 letters have to be bought (”arrayhead”).
It is guaranteed that every friend can construct her/his name using the letters from the string s.
Note that the values for friends are independent, friends are only estimating them but not actually buying the letters.
Input
The first line contains one integer n (1≤n≤2⋅10^5) — the length of showcase string s.
The second line contains string s, consisting of exactly n lowercase Latin letters.
The third line contains one integer m (1≤m≤5⋅10^4) — the number of friends.
The i-th of the next m lines contains t_i (1≤|t_i|≤2⋅10^5) — the name of the i-th friend.
It is guaranteed that ∑_{i=1}^{m} |t_i| ≤ 2⋅10^5.
Output
For each friend print the length of the shortest prefix of letters from s s/he would need to buy to be able to construct her/his name from them. The name can be constructed if each letter is present in an equal or greater amount.
It is guaranteed that every friend can construct her/his name using the letters from the string s.
Example
input
9
arrayhead
5
arya
harry
ray
r
areahydra
output
5
6
5
2
9
An ad-hoc problem.
#include<bits/stdc++.h>
using namespace std;
#define rep(i,a,b) for(int i=(a);i<=(b);++i)
#define dep(i,a,b) for(int i=(a);i>=(b);--i)
#define pb push_back
typedef long long ll;
const int maxn=(int)2e5+100;
int n,q;
string str;
int num[26],a[26][maxn];
int main(){
ios::sync_with_stdio(0);
cin>>n>>str>>q;
rep(i,0,n-1){
a[str[i]-'a'][++num[str[i]-'a']]=i; // index of the k-th occurrence of each letter
}
while(q--){
string s;cin>>s;
int len=s.length(),ans=0;
int tem[26]={0};
rep(i,0,len-1) tem[s[i]-'a']++; // letter counts needed for this name
rep(i,0,25) ans=max(ans,a[i][tem[i]]); // latest position among required occurrences
printf("%d\n",ans+1);
}
}
C. Vasya And Array
time limit per test: 1 second, memory limit per test: 256 megabytes
Vasya has an array a_1, a_2, …, a_n.
You don’t know this array, but he told you m facts about this array. The i-th fact is a triple of numbers t_i, l_i and r_i (0≤t_i≤1, 1≤l_i<r_i≤n) and it means:
• if t_i=1 then the subarray a_{l_i}, a_{l_i+1}, …, a_{r_i} is sorted in non-decreasing order;
• if t_i=0 then the subarray a_{l_i}, a_{l_i+1}, …, a_{r_i} is not sorted in non-decreasing order. A subarray is not sorted if there is at least one pair of consecutive elements in this subarray such that the former is greater than the latter.
For example if a=[2,1,1,3,2] then he could give you three facts: t_1=1, l_1=2, r_1=4 (the subarray [a_2,a_3,a_4]=[1,1,3] is sorted), t_2=0, l_2=4, r_2=5 (the subarray [a_4,a_5]=[3,2] is not sorted), and t_3=0, l_3=3, r_3=5 (the subarray [a_3,a_4,a_5]=[1,3,2] is not sorted).
You don’t know the array a. Find any array which satisfies all the given facts.
Input
The first line contains two integers n and m (2≤n≤1000, 1≤m≤1000).
Each of the next m lines contains three integers t_i, l_i and r_i (0≤t_i≤1, 1≤l_i<r_i≤n).
If t_i=1 then the subarray a_{l_i}, a_{l_i+1}, …, a_{r_i} is sorted. Otherwise (if t_i=0) the subarray is not sorted.
Output
If there is no array that satisfies these facts, print NO in the only line (in any letter case).
If there is a solution, print YES (in any letter case). In the second line print n integers a_1, a_2, …, a_n (1≤a_i≤10^9) — the array a, satisfying all the given facts. If there are multiple satisfying arrays you can print any of them.
Examples
input
7 4
1 1 3
1 2 5
0 5 6
1 6 7
output
YES
1 2 2 3 5 4 4
input
4 2
1 1 4
0 2 3
output
NO
Assign a color to every maximal non-decreasing interval forced by the f=1 facts; then for every f=0 fact check whether its l and r lie in the same non-zero color block — if so, the answer is NO.
#include<bits/stdc++.h>
using namespace std;
#define rep(i,a,b) for(int i=(a);i<=(b);++i)
#define dep(i,a,b) for(int i=(a);i>=(b);--i)
#define pb push_back
typedef long long ll;
const int maxn=(int)2e5+100;
int n,k,a[1010],c[1010],d[1010],pos=1;
map<int,int> mp;
struct node{
int l,r;
}q[1010];
int main(){
cin>>n>>k;
rep(i,1,k){
int f,l,r;scanf("%d%d%d",&f,&l,&r);
if(f) a[l]++,a[r]--;
else q[pos].l=l,q[pos++].r=r,d[l]++,d[r+1]--;
}
rep(i,1,n) a[i]+=a[i-1];
rep(i,1,n) d[i]+=d[i-1];
int fr=1,color=1,flag=0;
rep(i,1,n){
if(fr&&a[i]){
fr=0;
c[i]=color;
mp[i]=1;
}
else if(a[i]){
if(flag) mp[i]=1,flag=0;
c[i]=color;
}
else if(a[i-1]) c[i]=color++,flag=1;
}
int ok=1;
rep(i,1,pos) if(c[q[i].l]==c[q[i].r]&&c[q[i].l]){ok=0;break;}
if(!ok) return puts("NO"),0;
puts("YES");
color=10086;
rep(i,1,n){
if(d[i]&&c[i]==0) color--;
else if(d[i]&&mp[i]) color--;
printf("%d ",color);
}
}
D. Subarray Sorting
time limit per test: 2 seconds, memory limit per test: 256 megabytes
You are given an array a_1, a_2, …, a_n and an array b_1, b_2, …, b_n.
For one operation you can sort in non-decreasing order any subarray a[l…r] of the array a.
For example, if a=[4,2,2,1,3,1] and you choose subarray a[2…5], then the array turns into [4,1,2,2,3,1].
You are asked to determine whether it is possible to obtain the array b by applying this operation any number of times (possibly zero) to the array a.
Input
The first line contains one integer t (1≤t≤3⋅10^5) — the number of queries.
The first line of each query contains one integer n (1≤n≤3⋅10^5).
The second line of each query contains n integers a_1, a_2, …, a_n (1≤a_i≤n).
The third line of each query contains n integers b_1, b_2, …, b_n (1≤b_i≤n).
It is guaranteed that ∑n ≤ 3⋅10^5 over all queries in a test.
Output
For each query print YES (in any letter case) if it is possible to obtain an array b and NO (in any letter case) otherwise.
Example
input
4
7
1 7 1 4 4 5 6
1 1 4 4 5 7 6
5
1 1 3 3 5
1 1 3 3 5
2
1 1
1 2
3
1 2 3
3 2 1
output
YES
YES
NO
NO
Note
In the first test case we can sort subarray a_1…a_5, then a will turn into [1,1,4,4,7,5,6], and then sort subarray a_5…a_6.
First record, for every value, the index in a of each of its occurrences (the cnt[a[i]]-th occurrence maps to index i). Then scan b: let t be the index in a of the current occurrence of b[i]. If the minimum of the remaining elements of a over [1, t] is smaller than b[i], the answer is NO, because that smaller element can never be moved past b[i] to reach its required position.
#include<bits/stdc++.h>
using namespace std;
#define rep(i,a,b) for(int i=(a);i<=(b);++i)
#define dep(i,a,b) for(int i=(a);i>=(b);--i)
#define pb push_back
#define lson rt<<1,L,mid
#define rson rt<<1|1,mid+1,R
#define delf int mid=(L+R)>>1
typedef long long ll;
const int maxn=(int)3e5+100;
int n,a[maxn],b[maxn],cnt[maxn],s[maxn<<2];
void pushup(int rt){
s[rt]=min(s[rt<<1],s[rt<<1|1]);
}
void build(int rt,int L,int R){
if(L==R) {s[rt]=a[L];return;}
delf; build(lson); build(rson);
pushup(rt);
}
void update(int rt,int L,int R,int pos,int val){
if(L==R){s[rt]=val;return;} delf;
if(pos<=mid) update(lson,pos,val);
else update(rson,pos,val);
pushup(rt);
}
int query(int rt,int L,int R,int l,int r){
if(l<=L&&R<=r) return s[rt];
delf;
int minx=n+1;
if(l<=mid) minx=min(minx,query(lson,l,r));
if(r>mid) minx=min(minx,query(rson,l,r));
return minx;
}
void solve(){
scanf("%d",&n);
rep(i,0,n) cnt[i]=0;
map<pair<int,int>,int> pos;
rep(i,1,n) scanf("%d",&a[i]),pos[{a[i],++cnt[a[i]]}]=i;
rep(i,1,n) scanf("%d",&b[i]),--cnt[b[i]];
rep(i,1,n) if(cnt[i]) {puts("NO");return;}
build(1,1,n);
rep(i,1,n){
int t=pos[{b[i],++cnt[b[i]]}];
if(query(1,1,n,1,t)<b[i]){puts("NO");return;}
update(1,1,n,t,n+1);
}
puts("YES");
}
int main(){
int T;cin>>T;
while(T--) solve();
}
E. Tree Painting
time limit per test: 2 seconds, memory limit per test: 256 megabytes
You are given a tree (an undirected connected acyclic graph) consisting of n vertices. You are playing a game on this tree.
Initially all vertices are white. On the first turn of the game you choose one vertex and paint it black. Then on each turn you choose a white vertex adjacent (connected by an edge) to any black vertex and paint it black.
Each time when you choose a vertex (even during the first turn), you gain the number of points equal to the size of the connected component consisting only of white vertices that contains the chosen vertex. The game ends when all vertices are painted black.
Let’s see the following example:
Vertices 1 and 4 are painted black already. If you choose the vertex 2, you will gain 4 points for the connected component consisting of vertices 2, 3, 5 and 6. If you choose the vertex 9, you will gain 3 points for the connected component consisting of vertices 7, 8 and 9.
Input
The first line contains an integer n — the number of vertices in the tree (2≤n≤2⋅10^5).
Each of the next n−1 lines describes an edge of the tree. Edge i is denoted by two integers u_i and v_i, the indices of vertices it connects (1≤u_i,v_i≤n, u_i≠v_i).
It is guaranteed that the given edges form a tree.
Output
Print one integer — the maximum number of points you gain if you will play optimally.
Examples
input
9
1 2
2 3
2 5
2 6
1 4
4 9
9 7
9 8
output
36
input
5
1 2
1 3
2 4
2 5
output
14
Note
The first example tree is shown in the problem statement.
Rerooting (written about before as well). First a DFS from vertex 1 computes the subtree size of every vertex; dp[1] is the sum of all subtree sizes. Then we try moving the root: each transfer is dp[v] = dp[x] + n − 2*SZ[v], because every vertex contributes 1, so the vertices on the old root's side each gain 1 while those in v's subtree each lose 1.
#include<bits/stdc++.h>
using namespace std;
#define rep(i,a,b) for(int i=(a);i<=(b);++i)
#define dep(i,a,b) for(int i=(a);i>=(b);--i)
#define pb push_back
typedef long long ll;
const int maxn=(int)2e5+100;
int n,head[maxn],tot;
ll SZ[maxn],dp[maxn],ans;
struct node{
int v,next;
}g[maxn<<1];
void add(int u,int v){ // adjacency list
g[++tot]={v,head[u]};
head[u]=tot;
}
void dfs1(int x,int fa){ // subtree sizes with vertex 1 as root
SZ[x]=1;
for(int i=head[x];i;i=g[i].next){
int v=g[i].v;
if(v!=fa){
dfs1(v,x);
SZ[x]+=SZ[v];
}
}
}
void dfs2(int x,int fa){ // rerooting transfer: dp[v]=dp[x]+n-2*SZ[v]
ans=max(ans,dp[x]);
for(int i=head[x];i;i=g[i].next){
int v=g[i].v;
if(v!=fa){
dp[v]=dp[x]+n-2*SZ[v];
dfs2(v,x);
}
}
}
int main(){
cin>>n;
rep(i,1,n-1){
int u,v;scanf("%d%d",&u,&v);
add(u,v);add(v,u);
}
dfs1(1,0);
rep(i,1,n) dp[1]+=SZ[i]; // total score when the game starts at vertex 1
dfs2(1,0);
printf("%lld\n",ans);
}
|
|
<1r>
Hypoth. 1. Bodies move uniformly in straight lines unless so far as they are retarded by the resistence of ye Medium or disturbed by some other force.
Hyp. 2. The alteration of motion is ever proportional to ye force by wch it is altered.
Hyp. 3. Two Motions imprest in \two/ different lines, if those lines be taken in proportion to the motions & completed into a parallelogram, compose a motion whereby the diagonal of ye Parallelogram shall be described in the same time in wch ye sides thereof would have been described by those compounding motions apart. The motions AB & AC compound the motion AD.
## Prop. 1.
If a body move in vacuo & be continually attracted toward an immoveable center, it shall constantly move in one & the same plane, & in that plane describe equal areas in equall times.
Let A be ye center towards wch ye body is attracted, & suppose ye attraction acts not continually but by discontinued impressions made at equal intervalls of time wch intervalls we will consider as physical moments. Let BC be ye right line in wch it begins to move from B & {illeg} wch it describes wth uniform motion in the first physical moment before ye attraction make its first impression upon it. At C let it be attracted towards ye center A wth|by| one impuls or impression of force, & let CD be ye line in wch it shall move after that impuls. Produce BC to I so that CI be equall to BC & draw ID parallel to CA & the point D in wch it cuts CD shall be ye place of ye body at the end of ye second moment. And because the bases BC CI of the triangles ABC, ACI are equal those two triangles shall be equal. Also because the triangles ACI, ACD stand upon the same base AC & between two parallels they shall be equall. And therefore the triangles ACD described in the second moment shall be equal to ye triangle ABD described in the first moment. And by the same reason if the body at ye end of the 2d, 3d, 4th, 5t & following moments be attracted by single impulses in <1r> D, E, F, G &c describing the line DE in ye 3d moment, EF in the 4th, FG in ye 5t &c: the triangle AED shall be equall to the triangle ADC & all the following triangles AFE, AGF & to the preceding ones & to one another. And by consequence the areas compounded of these equall triangles (as ABE, AEG, ABG &c) are to one another as the {l}ines times in wch they are described. Suppose now that the moments of time be diminished in length & encreased in number in infinitum, so yt the impulses or impressions of ye attraction may become continuall & that ye line BCDEFG by ye infinite number & infinite littleness of its sides BC, CD, DE &c may become a curve one: & the body by the continual attraction shall describe areas of this Curve ABE, AEG, ABG & proportionall to the times in wch they are described. W. W. to be Dem.
## Prop. 2.
If a body be attracted towards either focus of an Ellipsis & the quantity of the attraction be such as suffices to make ye body revolve in the circumference of the Ellipsis: the attraction at ye two ends of the Ellipsis shall be reciprocally as the squares of the body in those ends from that focus.
Let AECD be the Ellipsis, A, C its two ends or vertices, F that focus towards wch the body is attracted, & AFE, CFD areas wch the body with a ray drawn from that focus to its center, describes at both ends in equal times: & those areas by the foregoing Proposition must be equal because proportionall to the times: that is the rectangles $\frac{1}{2}AF×AE$ & $\frac{1}{2}FC×DC$ must be equal supposing the arches AE & CD to be so very short that they may be taken for right lines & therefore AE is to CD as FC to FA. Suppose now that AM & CN are tangents to the Ellipsis at its two ends A & C & that EM & DN are perpendiculars let fall from the points E & D upon those tangents: & because the Ellipsis is alike crooked at both ends those perpendiculars EM & DN will be to one another as the squares of the arches AE & CD, & therefore EM is to DN as FCq to FAq. Now in the times that the body by means of the attraction moves in the arches AE <2r> & CD from A to E & from C to D it would without attraction move in the tangents from A to M & from C to N. Tis by ye force of the attractions that the bodies are drawn out of the tangents from M to E & from N to D & therefore the attractions are as those distances ME & ND, \that is the attraction/ at the end of the Ellipsis A is to the attraction at ye other end of ye Ellipsis C as ME to ND & by consequence as FCq to FAq. W. w. to be dem.
## Lemma. 1.
If a right line touch an Ellipsis in any point thereof & parallel to that tangent be drawn another right line from the center of the Ellipsis wch shall intersect a third right line drawn from ye touch point through either focus of the Ellipsis: the segment of the last named right line lying between ye point of intersection & ye point of contact shall be equal to half ye long axis of ye Ellipsis.
Let APBQ be the Ellipsis; AB its long axis; C its center; F, f its Foci; P the point of contact; PR the tangent; CD the line parallel to the tangent, & PD the segment of the line FP. I say that this segment shall be equal to AC.
For joyn PF Pf & draw fE parallel to CD & because Ff & F{illeg} {are} \is/ bisected in C, & {illeg} FE shall be bisected in D & therefore 2PD shall be equal to half the summ of PF & PE that is to half the summ of PF & Pf, that is to AB & therefore PD shall be equal to AC. W. W. to be Dem.
## Lemma. 2.
Every line drawn through either Focus of any Ellipsis & terminated at both ends by the Ellipsis is to that diameter of the Ellipsis wch is parallel to this line as the same Diameter is to the long Axis of the Ellipsis.
Let APBQ be ye Ellipsis, AB its long Axis, F, f its foci, C its center, PQ ye line drawn through its focus F, & VCS its diameter parallel to PQ & PQ will be to VS as VS to AB.
For draw FP \fp/ parallel to QFP & cutting the Ellipsis in p. Joyn Pp cutting VS in T & draw PR wch shall touch the <2v> Ellipsis in P & cut the diameter VS produced in R & CT will be to CS as CS to CR, as has been shewed by all those who treat of ye Conic sections. But CT is ye semisumm of FP & fp that is of FP & FQ & therefore 2CT is equal to PQ. Also 2CS is equal to VS & (by ye foregoing Lemma) 2CR is equal to AB. Wherefore PQ is to VS as VS to AB. W. W. to be Dem.
Corol. $AB×PQ={VS}^{q}=4{CS}^{q}$.
## Lem. 3.
If from either focus F of any Ellipsis unto any point in the perimeter of the Ellipsis be drawn a right line & another right line doth touch ye Ellipsis in that point & the angle of contact be subtended by any third right line drawn parallel to the first line: the rectangle wch that subtense conteins wth the same subtense produced to the other side of the Ellipsis is to the rectangle wch the long Axis of the Ellipsis conteins wth ye first line produced to the other side of the Ellipsis as the square of the distance between the subtense & the first line is to the square of the short Axis of the Ellipsis.
Let AKBL be the Ellipsis, AB its long Axis, KL its short Axis, C its center, F, f its foci, P ye point of the perimeter, PF ye first line PQ that line produced to the other side of the Ellipsis PX the tangent, XY ye subtense produced to ye other side of the Ellipsis & YZ the distance between this subtense & the first line. I say that the rectangle YXI is to the rectangle $AB×PQ$ as YZq to KLq
For let VS be the diameter of the Ellipsis parallel to the first line PF & GH another diametrer parallel to ye tangent PX, & the rectangle YXI will be to the square of the tangent PXq as the rectangle SCV to ye rectangle GCH that is as SVq to GHq. This a property of the Ellipsis demonstrated by all that write of the conic sections. And they have also demonstrated that all the Parallelogramms circumscribed about an Ellipsis are equall. Whence the rectangle $2PE×GH$ is equal to ye rectangle $AB×KL$ & consequently GH is to KL as AB that is (by Lem. 1) 2PD to 2PE & in the same proportion is PX to YZ. Whence GH PX is to GH as YZ to KL & PXq to GHq as YZq to KLq. But PXq was to GHq as <3r> YXI was to PXq as SVq {illeg} that is (by Lem Cor. Lem. 2) $AB×PQ$ to GHq, whence invertedly YXI is to $AB×PQ$ as PXq to GHq & by consequence as YZq to KLq. W. w. to be Dem.
## Prop. III.
If a body be attracted towards either focus of any Ellipsis & by that attraction be made to revolve in the Perimeter of ye Ellipsis: the attraction shall be reciprocally as the square of the distance of the body from that focus of the Ellipsis.
Let P be the place of the body at {any} in the Ellipsis at any moment of time & PX the tangent in wch the body would move uniformly were it not attracted & X ye place in that tangent at wch it would arrive in any given part of time & Y the place in the perimeter of the Ellipsis at wch the body doth arrive in the same time by means of the attraction. Let us suppose the time to be divided into equal parts & that those parts are very little ones so yt they may be considered as physical moments & yt ye attraction acts not continually but by intervalls only once in the beginning of every physical moment & let ye first action be upon ye body in P, the next upon it in Y & so on perpetually, so yt ye body may move from P to Y in the chord of ye arch PY & from Y to its next place in ye Ellipsis in the chord of ye next arch & so on for ever. And because the attraction in P is made towards F & diverts the body from ye tangent PX into ye chord PY so that in the end of the first physical moment it be not found in the place X where it would have been without ye attraction but in Y being by ye force of ye attraction in P translated from X to Y: the line XY generated by the force of ye attraction in P must be proportional to that force & parallel to its direction that is parallel to PF Produce XY & PF till they cut the Ellipsis in I & Q. Ioyn FY & upon FP let fall the perperpendicular {sic} YZ & let AB be the long Axis & KL ye short Axis of ye Ellipses. And by the third Lemma YXI will be to $AB×PQ$ as YZq to KLq & by consequence YX will be equall to $\frac{AB×PQ×{YZ}^{q}}{XI×{KL}^{q}}$.
And in like manner if py be the chord of another Arch py wch the revolving body describes in a physical moment of time & px be the tangent of the Ellipsis at p & xy the subtense of <3v> the angle of contact drawn parallel to pF, & if pF & xy produced cut ye Ellipsis in q & i & from y upon pF be let fall the perpendicular yz: the subtense yx shall be equal to $\frac{AB×pq×{yz}^{quad.}}{xi×{KL}^{quad.}}$ . And therefore YX shall be to yx as $\frac{AB×PQ×{YZ}^{q}}{XI×{KL}^{q}}$ to $\frac{AB×pq×{yz}^{quad.}}{xi×{KL}^{quad.}}$ , that is as $\frac{PQ}{XI}{YZ}^{q}$ to $\frac{pq}{xi}{yz}^{quad.}$
And because the lines PY py are by the revolving body described in equal times, the areas of the triangles PYF pyF must be equal by the first Proposition; & therefore the rectangles $PF×YZ$ & $PF×yz$ are equal, & by consequence YZ is to yz as pF to PF. Whence $\frac{PQ}{XI}{YZ}^{q}$ is to $\frac{pq}{xi}{yz}^{quad.}$ as $\frac{PQ}{XI}{pF}^{quad.}$ {illeg} to $\frac{pq}{xi}{PF}^{quad.}$ And therefore YX is to yx as $\frac{PQ}{XI}{pF}^{quad}$ to $\frac{pq}{xi}{PF}^{quad.}$.
And as we told you that XY was the line generated in a physical moment of time by ye force of the attraction in P, so for the same reason is xy the line generated in the same quantity of time by the force of the attraction in p. And therefore the attraction in P is to the attraction in p as the line XY to the line xy, that is as $\frac{PQ}{XI}{pF}^{quad}$ to $\frac{pq}{xi}{PF}^{quad.}$
Suppose now that the equal lines in wch the revolving body describes the lines PY & py become infinitely little, so that the attraction may become continual & the body by this attraction revolve in the perimeter of the Ellipsis: & the lines PQ, XI as also pq, xi becoming coincident & by consequence equal, the quantities $\frac{PQ}{XI}{pF}^{quad}$ & $\frac{pq}{xi}{PF}^{quad.}$ will become ${pF}^{quad}$ & ${PF}^{quad}$. And therefore the attraction in P will be to the attraction in p as ${pF}^{q}$ to ${PF}^{q}$, that is reciprocally as the squares of the distances of the revolving bodies from the focus of the Ellipsis. W. W. to be Dem.
|
|
Vapour Absorption (V-A) Cycle MCQ Quiz - Objective Questions with Answers for Vapour Absorption (V-A) Cycle
Last updated on Jan 31, 2023
Latest Vapour Absorption (V-A) Cycle MCQ Objective Questions
Vapour Absorption (V-A) Cycle Question 1:
In a vapour absorption refrigeration system, heating, cooling, and refrigeration take place at temperatures of 373 K, 293 K and 268 K respectively. The maximum COP of the system is _______.
1. 2.3
2. 1.9
3. 3.3
4. 4.3
Option 1 : 2.3
Vapour Absorption (V-A) Cycle Question 1 Detailed Solution
Concept:
COP of VARS is given by relation;
$${\bf{COP}} = \frac{{{{\bf{T}}_{\bf{E}}}\;\left( {{{\bf{T}}_{\bf{G}}} - {{\bf{T}}_0}} \right)}}{{{{\bf{T}}_{\bf{G}}}\;\left( {{{\bf{T}}_0} - {{\bf{T}}_{\bf{E}}}} \right)}}$$
Where, TE = Evaporator temperature, TG = Generator temperature, T0 = Condenser and absorber temperature.
and the pump work is neglected.
Calculation:
Given:
TE = 268 K, TG = 373 K, T0 = 293 K.
$${\bf{COP}} = \frac{{{{\bf{T}}_{\bf{E}}}\;\left( {{{\bf{T}}_{\bf{G}}} - {{\bf{T}}_0}} \right)}}{{{{\bf{T}}_{\bf{G}}}\;\left( {{{\bf{T}}_0} - {{\bf{T}}_{\bf{E}}}} \right)}}$$
$${\rm{COP}} = \frac{{268\;\left( {373 - 293} \right)}}{{373\;\left( {293 - 268} \right)}}$$
∴ COP = 2.3
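For quick numeric checks of the ideal-COP relation used in these solutions, here is a minimal C++ sketch (the helper name `vars_max_cop` is ours; temperatures are in kelvin and pump work is neglected, as in the formula above):

```cpp
#include <cmath>

// Maximum (ideal) COP of a vapour absorption refrigeration system with pump
// work neglected: COP = TE*(TG - T0) / (TG*(T0 - TE)).
// TE = evaporator, TG = generator, T0 = condenser/absorber temperature (kelvin).
double vars_max_cop(double TE, double TG, double T0) {
    return TE * (TG - T0) / (TG * (T0 - TE));
}
// e.g. vars_max_cop(268, 373, 293) gives about 2.3 (this question),
//      vars_max_cop(280, 400, 320) gives 1.4 (the next question).
```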
Vapour Absorption (V-A) Cycle Question 2:
In a vapour absorption refrigeration system, the refrigeration temperature is 7 ºC, the generator temperature is 127 ºC, and the temperature of the heat sink is 47 ºC. The maximum possible COP of the system is:
1. 0.714
2. 0.86
3. 1.4
4. 0.11
Option 3 : 1.4
Vapour Absorption (V-A) Cycle Question 2 Detailed Solution
Concept:
COP of VARS is given by relation;
$${{COP}} = \frac{{{{{T}}_{{E}}}\;\left( {{{{T}}_{{G}}} - {{{T}}_0}} \right)}}{{{{{T}}_{{G}}}\;\left( {{{{T}}_0} - {{{T}}_{{E}}}} \right)}}$$
Where, TE = Evaporator temperature, TG = Generator temperature, T0 = Condenser and absorber temperature.
Calculation:
Given:
TE = 7 + 273 = 280 K, TG = 127 + 273 = 400 K, T0 = 47 + 273 = 320 K.
$${{COP}} = \frac{{{{{T}}_{{E}}}\;\left( {{{{T}}_{{G}}} - {{{T}}_0}} \right)}}{{{{{T}}_{{G}}}\;\left( {{{{T}}_0} - {{{T}}_{{E}}}} \right)}}$$
$${\rm{COP}} = \frac{{280\;\left( {400 - 320} \right)}}{{400\;\left( {320 - 280} \right)}}$$
∴ COP = 1.4
Vapour Absorption (V-A) Cycle Question 3:
Directions: Each of the next Six (06) items consists of two statements, one labelled as the 'Statement (I)' and the other as 'Statement (II)'. You are to examine these two statements carefully and select the answers to these items using the codes given below:
Statement (I): The vapour absorption system uses heat energy to change the condition of the refrigerant from the evaporator.
Statement (II): The load variations do not affect the performance of a Vapour absorption system.
1. Both Statement (I) and Statement (II) are individually true and Statement (II) is the correct explanation of Statement (I)
2. Both Statement (I) and Statement (II) are individually true but Statement (II) is NOT the correct explanation of Statement (I)
3. Statement (I) is true but Statement (II) is false
4. Statement (I) is false but Statement (II) is true
Option 2 : Both Statement (I) and Statement (II) are individually true but Statement (II) is NOT the correct explanation of Statement (I)
Vapour Absorption (V-A) Cycle Question 3 Detailed Solution
Explanation:
Simple vapour Absorption System:
• A simple vapour absorption system consists of an absorber, a pump, a generator and a pressure-reducing valve to replace the compressor of the vapour compression system.
• The other components of the system are the condenser, expansion valve and evaporator as in the vapour compression system.
• Ammonia is used as a refrigerant while water is used as an absorbent.
• Liquid ammonia (normally a mixture of liquid and vapour) from the expansion valve enters the evaporator, either it absorbs heat from the evaporator space or it cools the secondary refrigerant in a heat exchanger.
• Normally these units have a large cooling capacity of the order of 80 TR and above.
• In such units, liquid ammonia absorbs heat from the secondary refrigerant which would be used as a medium to cool the space or products in the refrigerated space.
• Low-pressure ammonia vapour then enters the absorber.
• This vapour is allowed to be mixed and absorbed in the absorber with a weak solution of aqua ammonia flowing from the generator under gravity through a pressure-reducing valve.
• The water has the ability to absorb very large quantities of ammonia vapour and the solution thus formed is known as aqua-ammonia.
• The absorption of ammonia vapour in water lowers the pressure in the absorber which in turn draws more ammonia vapour from the evaporator and thus raises the temperature of the solution.
• Some form of cooling arrangement (Usually water cooling) is employed in the absorber to remove the heat of the solution evolved in it.
• This is necessary in order to increase the absorption capacity of water because, at higher temperatures, water absorbs less Ammonia vapour.
• The strong solution thus formed in the absorber is pumped to the generator by a liquid pump.
• The pump increases the pressure of the solution up to 10 bar.
• The strong solution of ammonia in the generator is heated by some external source such as gas or steam.
• During the heating process, ammonia vapour is driven off the solution at high pressure leaving behind the hot weak ammonia solution in the generator.
• This weak ammonia solution flows back to the absorber at low pressure after passing through the pressure-reducing valve.
• The high-pressure ammonia vapour from the generator is condensed in the condenser to high-pressure liquid ammonia.
• This liquid ammonia is passed to the expansion valve through the receiver and then to the evaporator. This completes the simple vapour absorption cycle.
• The heat required for the operation of the generator can be supplied by burning kerosene using solar energy or waste heat from the process industry in the case of industrial applications.
• The load variation does not affect the performance of the vapour absorption system. (Statement II is correct)
• The load variations are met by controlling the quantity of aqua circulated and the quantity of steam supplied to the generator.
• The electrical energy required for the operation of the aqua pump in this system is extremely small compared to the electrical energy needed for the compressor of a vapour compression cycle.
• The basic difference here is that the aqua pump handles the liquid ammonia while the compressor has to work with the refrigerant vapour of high specific volume.
Coefficient of performance (COP) of vapour absorption refrigeration system:
• COP = $$(\frac{T_E}{T_C~-~T_E})~\times~(\frac{T_G~-~T_C}{T_G})$$
Both Statement (I) and Statement (II) are individually true but Statement (II) is NOT the correct explanation of Statement (I)
Vapour Absorption (V-A) Cycle Question 4:
When water – Lithium Bromide is used in a vapour absorption refrigeration system, then
1. they together act as refrigerant.
2. water is the refrigerant.
3. lithium bromide is refrigerant.
4. None of these
Option 2 : water is the refrigerant.
Vapour Absorption (V-A) Cycle Question 4 Detailed Solution
Concept:
In the vapour absorption system, the water is used as the refrigerant while lithium bromide (Li Br) is used as the absorbent.
Basic Vapour Absorption Refrigeration System
• The basic absorption cycle employs two fluids, the absorbate or refrigerant, and the absorbent
• The most common fluids are Water/Ammonia as the refrigerant and lithium bromide/ water as the absorbent
• These fluids are separated and recombined in the absorption cycle
• In the absorption cycle, the low-pressure refrigerant vapour is absorbed into the absorbent releasing a large amount of heat (Absorber pressure is equal to evaporator pressure)
• The liquid refrigerant/absorbent solution is pumped to a high-operating pressure generator using significantly less electricity than that for compressing the refrigerant for an electric chiller
• Heat is added at the high-pressure generator from a gas burner, steam, hot water or hot gases
• The added heat causes the refrigerant to desorb from the absorbent and vaporize
• The vapours flow to a condenser, where heat is rejected and condense to a high-pressure liquid
• The liquid is then throttled through an expansion valve to the lower pressure in the evaporator where it evaporates by absorbing heat and provides useful cooling
• The remaining liquid absorbent, in the generator, passes through a valve, where its pressure is reduced, and then is recombined with the low-pressure refrigerant vapours returning from the evaporator so the cycle can be repeated
Vapour Absorption (V-A) Cycle Question 5:
Following component is absent in vapour absorption refrigeration system.
1. Evaporator
2. Condenser
3. Compressor
4. Expansion device
Option 3 : Compressor
Vapour Absorption (V-A) Cycle Question 5 Detailed Solution
Concept:
Vapor absorption system:
• In the vapor absorption system, the energy input is given in the form of the heat.
• While in the vapour compression system the energy input is given in the form of the mechanical work from the electric motor run by the electricity.
• Waste Heat or free energy is effectively used in the vapour absorption refrigeration cycle
• Thus, the Vapour absorption system works on low-grade thermal energy such as waste heat or solar energy.
In the vapour absorption cycle the compressor is replaced by
• Absorber
• Pump
• Generator or desorber
Therefore the vapour-absorption refrigeration cycle has only one moving part, i.e. the liquid pump.
The coefficient of performance of the vapour-absorption refrigeration cycle is much lower than that of the vapour-compression cycle, because a large heat input is required for a given refrigerating effect.
Top Vapour Absorption (V-A) Cycle MCQ Objective Questions
Vapour Absorption (V-A) Cycle Question 6
Vapour absorption refrigeration system uses ________ as input energy.
1. electricity only
2. water energy only
3. low grade heat energy
4. non conventional energy only
Option 3 : low grade heat energy
Vapour Absorption (V-A) Cycle Question 6 Detailed Solution
Concept:
Vapor absorption system:
• In the vapor absorption system, the energy input is given in the form of the heat.
• While in the vapour compression system the energy input is given in the form of the mechanical work from the electric motor run by the electricity.
• Waste Heat or free energy is effectively used in the vapour absorption refrigeration cycle
• Thus, the Vapour absorption system works on low-grade thermal energy such as waste heat or solar energy.
In the vapour absorption cycle the compressor is replaced by
• Absorber
• Pump
• Generator or desorber
Therefore the vapour-absorption refrigeration cycle has only one moving part, i.e. the liquid pump.
The coefficient of performance of the vapour-absorption refrigeration cycle is much lower than that of the vapour-compression cycle, because a large heat input is required for a given refrigerating effect.
Basic Vapour Absorption Refrigeration System
• The basic absorption cycle employs two fluids, the absorbate or refrigerant, and the absorbent
• The most common fluids are Water/Ammonia as the refrigerant and lithium bromide/ water as the absorbent
• These fluids are separated and recombined in the absorption cycle
• In the absorption cycle, the low-pressure refrigerant vapour is absorbed into the absorbent releasing a large amount of heat (Absorber pressure is equal to evaporator pressure)
• The liquid refrigerant/absorbent solution is pumped to a high-operating pressure generator using significantly less electricity than that for compressing the refrigerant for an electric chiller
• Heat is added at the high-pressure generator from a gas burner, steam, hot water or hot gases
• The added heat causes the refrigerant to desorb from the absorbent and vaporize
• The vapours flow to a condenser, where heat is rejected and condense to a high-pressure liquid
• The liquid is then throttled through an expansion valve to the lower pressure in the evaporator where it evaporates by absorbing heat and provides useful cooling
• The remaining liquid absorbent, in the generator, passes through a valve, where its pressure is reduced, and then is recombined with the low-pressure refrigerant vapours returning from the evaporator so the cycle can be repeated
Vapour Absorption (V-A) Cycle Question 7
In a Li-Br water absorption cycle _____ is used as a refrigerant.
1. water
2. air
3. ammonia
4. lithium-bromide
Option 1 : water
Vapour Absorption (V-A) Cycle Question 7 Detailed Solution
Explanation:
In vapour absorption refrigeration systems most commonly these two pairs are used.
1. Water-Lithium Bromide (H2O – LiBr) system for above 0°C applications such as air conditioning. Here water is the refrigerant and lithium bromide is the absorbent.
2. Ammonia-Water (NH3 – H2O) system for refrigeration applications with ammonia as refrigerant and water as absorbent.
• In vapour absorption refrigeration system (VARS), the components used are Evaporator, Absorber, Pump, Generator, Condenser, Expander.
• In vapour compression refrigeration system (VCRS), the components used are Evaporator, Compressor, Condenser and Expander.
• Thus generator, absorber and pump are absent in VCRS.
• In VARS less mechanical work is used compared to VCRS.
• In the vapour absorption system, the compressor is replaced with the absorber, pump and generator.
Vapour Absorption (V-A) Cycle Question 8
The operating temperatures of a single stage Vapour Absorption Refrigeration System (VARS) are : Generator 90°C, condenser and absorber 40°C, evaporator 0°C. If the pump work is negligible, find the ideal COP of this VARS.
1. 0.94
2. 1.4
3. 0.84
4. 2.4
Option 1 : 0.94
Vapour Absorption (V-A) Cycle Question 8 Detailed Solution
Concept:
COP of VARS is given by the relation:
$${\bf{COP}} = \frac{{{{\bf{T}}_{\bf{E}}}\;\left( {{{\bf{T}}_{\bf{G}}} - {{\bf{T}}_0}} \right)}}{{{{\bf{T}}_{\bf{G}}}\;\left( {{{\bf{T}}_0} - {{\bf{T}}_{\bf{E}}}} \right)}}$$
Where TE = Evaporator temperature, TG = Generator temperature, T0 = Condenser and absorber temperature.
and the pump work is neglected.
Calculation:
Given:
TE = 0 + 273 = 273 K, TG = 90 + 273 = 363 K, T0 = 40 + 273 = 313 K.
$${\bf{COP}} = \frac{{{{\bf{T}}_{\bf{E}}}\;\left( {{{\bf{T}}_{\bf{G}}} - {{\bf{T}}_0}} \right)}}{{{{\bf{T}}_{\bf{G}}}\;\left( {{{\bf{T}}_0} - {{\bf{T}}_{\bf{E}}}} \right)}}$$
$${\rm{COP}} = \frac{{273\;\left( {363 - 313} \right)}}{{363\;\left( {313 - 273} \right)}}$$
∴ COP = 0.94.
Vapour Absorption (V-A) Cycle Question 9
Which of the following refrigeration systems is most suitable for solar cooling?
1. Ejector refrigeration system
2. Vapour absorption system
3. Desiccant refrigeration system
4. Vortex tube refrigeration system
Option 3 : Desiccant refrigeration system
Vapour Absorption (V-A) Cycle Question 9 Detailed Solution
Explanation:
Desiccant refrigeration system:
• Desiccant cooling systems are heat-driven cooling units and they can be used as an alternative to conventional vapor compression and absorption cooling systems.
• The operation of a desiccant cooling system is based on the use of a rotary dehumidifier (desiccant wheel) in which air is dehumidified.
• The resulting dry air is somewhat cooled in a sensible heat exchanger (rotary regenerator), and then further cooled by an evaporative cooler.
• The resulting cool air is directed into a room. The system may be operated in a closed cycle or more commonly in an open cycle in ventilation or recirculation modes.
• A heat supply is needed to regenerate the desiccant. Low-grade heat at a temperature of about 60–95°C is sufficient for regeneration, so renewable energies such as solar and geothermal heat as well as waste heat from conventional fossil-fuel systems may be used.
• The system is simple and the thermal coefficient of performance (COP) is usually satisfactory.
Vapour Absorption (V-A) Cycle Question 10
Lithium bromide is used as an absorbent in ________.
1. vapour compression refrigeration
2. vapour absorption refrigeration
3. steam jet refrigeration
4. Electrolux refrigerators
Option 2 : vapour absorption refrigeration
Vapour Absorption (V-A) Cycle Question 10 Detailed Solution
Concept:
In the vapour absorption system, the water is used as the refrigerant while lithium bromide (Li Br) is used as the absorbent.
Basic Vapour Absorption Refrigeration System
• The basic absorption cycle employs two fluids, the absorbate or refrigerant, and the absorbent
• The most common fluids are Water/Ammonia as the refrigerant and lithium bromide/ water as the absorbent
• These fluids are separated and recombined in the absorption cycle
• In the absorption cycle, the low-pressure refrigerant vapour is absorbed into the absorbent releasing a large amount of heat (Absorber pressure is equal to evaporator pressure)
• The liquid refrigerant/absorbent solution is pumped to a high-operating pressure generator using significantly less electricity than that for compressing the refrigerant for an electric chiller
• Heat is added at the high-pressure generator from a gas burner, steam, hot water or hot gases
• The added heat causes the refrigerant to desorb from the absorbent and vaporize
• The vapours flow to a condenser, where heat is rejected and condense to a high-pressure liquid
• The liquid is then throttled through an expansion valve to the lower pressure in the evaporator where it evaporates by absorbing heat and provides useful cooling
• The remaining liquid absorbent, in the generator, passes through a valve, where its pressure is reduced, and then is recombined with the low-pressure refrigerant vapours returning from the evaporator so the cycle can be repeated
Vapour Absorption (V-A) Cycle Question 11
Aqua ammonia solution used in a vapour absorption refrigeration system is a solution of ammonia in ______
1. CF3
2. H2O
3. LiBr
4. H2O2
Option 2 : H2O
Vapour Absorption (V-A) Cycle Question 11 Detailed Solution
Explanation:
Vapour Absorption Refrigeration System:
• The common vapour absorption refrigeration system is based on NH3 - H2O, where NH3 is the refrigerant and H2O is the absorbent.
• Ammonia vapour enters the absorber where it gets dissolved in water, this reaction is exothermic and a lot of heat is released.
• The solubility of Ammonia is inversely proportional to temperature hence cooling water is circulated to maintain the low temperature of the solution.
• The solution rich in ammonia is pumped to the generator where the solution is heated and the ammonia vapour separates.
• The rectifier is installed for the complete removal of water vapour from ammonia vapour.
Important Points
Coefficient of performance ( COP ) for Vapour Absorption Refrigeration System ( VARS ):
COPmax = $$T_E~(~T_G~-~T_0~)\over T_G~(~T_0~-~T_E~)$$
where,
TE = Absolute temperature of Evaporator, TG = Absolute temperature of Generator, T0 = Absolute temperature of the atmosphere.
• Vapour Absorption refrigeration system ( VARS ) works on low-grade energy i.e. Heat, therefore, the COP is very less as compared with the Vapour compression refrigeration system ( VCRS ).
• Solar refrigeration system and geothermal refrigeration system works on VARS.
• Other common VARS are based on LiBr - H2O and LiCl - H2O. In both these water acts as a refrigerant, therefore, can be used only for air conditioning applications.
• Heat is absorbed in the evaporator and generator, and heat is rejected in the condenser and absorber.
Vapour Absorption (V-A) Cycle Question 12
In a vapour absorption refrigeration, heat is rejected in
1. Generator only
2. Condenser only
3. Condenser and absorber
4. Absorber only
Option 3 : Condenser and absorber
Vapour Absorption (V-A) Cycle Question 12 Detailed Solution
Explanation:
Basic Vapour Absorption Refrigeration System
• The basic absorption cycle employs two fluids, the absorbate or refrigerant, and the absorbent
• The most common fluids are Water/Ammonia as the refrigerant and lithium bromide/ water as the absorbent
• These fluids are separated and recombined in the absorption cycle
• In the absorption cycle, the low-pressure refrigerant vapour is absorbed into the absorbent releasing a large amount of heat (Absorber pressure is equal to evaporator pressure)
• The liquid refrigerant/absorbent solution is pumped to a high-operating pressure generator using significantly less electricity than that for compressing the refrigerant for an electric chiller
• Heat is added at the high-pressure generator from a gas burner, steam, hot water or hot gases
• The added heat causes the refrigerant to desorb from the absorbent and vaporize
• The vapours flow to a condenser, where heat is rejected and condense to a high-pressure liquid
• The liquid is then throttled through an expansion valve to the lower pressure in the evaporator where it evaporates by absorbing heat and provides useful cooling
• The remaining liquid absorbent, in the generator, passes through a valve, where its pressure is reduced, and then is recombined with the low-pressure refrigerant vapours returning from the evaporator so the cycle can be repeated
Vapour Absorption (V-A) Cycle Question 13
In vapour absorption refrigeration system, heat is rejected in
1. condenser only
2. generator only
3. absorber only
4. condenser and absorber
Option 4 : condenser and absorber
Vapour Absorption (V-A) Cycle Question 13 Detailed Solution
Explanation:
Basic Vapour Absorption Refrigeration System
• The basic absorption cycle employs two fluids, the absorbate or refrigerant, and the absorbent
• The most common fluids are Water/Ammonia as the refrigerant and lithium bromide/ water as the absorbent
• These fluids are separated and recombined in the absorption cycle
• In the absorption cycle, the low-pressure refrigerant vapour is absorbed into the absorbent releasing a large amount of heat (Absorber pressure is equal to evaporator pressure)
• The liquid refrigerant/absorbent solution is pumped to a high-operating pressure generator using significantly less electricity than that for compressing the refrigerant for an electric chiller
• Heat is added at the high-pressure generator from a gas burner, steam, hot water or hot gases
• The added heat causes the refrigerant to desorb from the absorbent and vaporize
• The vapours flow to a condenser, where heat is rejected and condense to a high-pressure liquid
• The liquid is then throttled through an expansion valve to the lower pressure in the evaporator where it evaporates by absorbing heat and provides useful cooling
• The remaining liquid absorbent, in the generator, passes through a valve, where its pressure is reduced, and then is recombined with the low-pressure refrigerant vapours returning from the evaporator so the cycle can be repeated
Vapour Absorption (V-A) Cycle Question 14
In a vapour absorption refrigerator, the temperatures of evaporator and ambient air are 10°C and 30°C respectively. For obtaining COP of 2 for this system, the temperature of the generator is to be nearly
1. 90 °C
2. 85 °C
3. 80 °C
4. 75 °C
Option 3 : 80 °C
Vapour Absorption (V-A) Cycle Question 14 Detailed Solution
Concept:
$${\left( {COP} \right)_{VARS}} = \left( {\frac{{{T_G} - {T_0}}}{{{T_G}}}} \right) \cdot \left( {\frac{{{T_E}}}{{{T_0} - {T_E}}}} \right)$$
TG is the generator temperature, TE is the evaporator temperature and T0 is the environment temperature
Calculation:
Given:
TE = 10°C = 283 K, T0 = 30°C = 303 K, COP = 2
Now,
$${\left( {COP} \right)_{VARS}} = \left( {\frac{{{T_G} - {T_0}}}{{{T_G}}}} \right) \cdot \left( {\frac{{{T_E}}}{{{T_0} - {T_E}}}} \right)$$
$$2 = \left( {\frac{{{{\rm{T}}_{\rm{G}}} - 303}}{{{{\rm{T}}_{\rm{G}}}}}} \right)\left( {\frac{{283}}{{303 - 283}}} \right)$$
TG = 352.87 K
TG = 79.87°C
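The rearrangement used above can be sketched and checked numerically (the function name is an illustrative assumption):

```python
def generator_temp(cop, t_e, t_0):
    """Solve COP = ((T_G - T_0)/T_G) * (T_E/(T_0 - T_E)) for T_G (kelvin).

    Rearranging gives T_0/T_G = 1 - COP*(T_0 - T_E)/T_E.
    """
    return t_0 / (1.0 - cop * (t_0 - t_e) / t_e)

t_g = generator_temp(cop=2.0, t_e=283.0, t_0=303.0)
print(round(t_g, 1), round(t_g - 273.0, 1))  # ≈ 352.9 K, i.e. about 79.9 °C
```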
Vapour Absorption (V-A) Cycle Question 15
Which of the following components of a vapour absorption refrigeration system operate at the same pressure level [neglecting the pipe loss]?
1. Generator and absorber
2. Evaporator and absorber
3. Generator and evaporator
4. Condenser and evaporator
Option 2 : Evaporator and absorber
Vapour Absorption (V-A) Cycle Question 15 Detailed Solution
Explanation:
From the P-T diagram of vapour absorption cycle, it is clear that evaporator and absorber have the same pressure.
Now, let's have a look at the whole cycle.
In the vapour absorption cycle the compressor is replaced by
• Absorber
• Pump
• Generator or desorber
Therefore the vapour-absorption refrigeration cycle has only one moving part, i.e. the liquid pump.
The coefficient of performance of the vapour-absorption refrigeration cycle is much lower than that of the vapour-compression cycle, because a large heat input is required for a given refrigerating effect.
Basic Vapour Absorption Refrigeration System
• The basic absorption cycle employs two fluids, the absorbate or refrigerant, and the absorbent
• The most common fluids are Water/Ammonia as the refrigerant and lithium bromide/ water as the absorbent
• These fluids are separated and recombined in the absorption cycle
• In the absorption cycle, the low-pressure refrigerant vapour is absorbed into the absorbent releasing a large amount of heat (Absorber pressure is equal to evaporator pressure)
• The liquid refrigerant/absorbent solution is pumped to a high-operating pressure generator using significantly less electricity than that for compressing the refrigerant for an electric chiller
• Heat is added at the high-pressure generator from a gas burner, steam, hot water or hot gases
• The added heat causes the refrigerant to desorb from the absorbent and vaporize
• The vapours flow to a condenser, where heat is rejected and condense to a high-pressure liquid
• The liquid is then throttled through an expansion valve to the lower pressure in the evaporator where it evaporates by absorbing heat and provides useful cooling
• The remaining liquid absorbent, in the generator, passes through a valve, where its pressure is reduced, and then is recombined with the low-pressure refrigerant vapours returning from the evaporator so the cycle can be repeated.
|
|
# Thread: Decimal to Irrational Fraction
1. ## Decimal to Irrational Fraction
I was wanting to know how I can use the graphing calculator to take a decimal and change it into an irrational fraction. Is there some sort of setup I can use...Thanks!
2. Originally Posted by qbkr21
I was wanting to know how I can use the graphing calculator to take a decimal and change it into an irrational fraction. Is there some sort of setup I can use...Thanks!
Ummm...what is an "irrational" fraction?
You don't really need to make the calculator do it, but if you want to try, what I would suggest is to put the calculator into "exact" mode, then enter your decimal, followed by "enter." It should give you back a fraction. The problem is that if you put in 0.333333 you will get back:
$\frac{333333}{1000000}$
There's no way I know of to get around the problem of repeating decimals.
-Dan
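As an aside (not a graphing-calculator feature, just an illustration in Python), the simplest fraction near a truncated repeating decimal can be recovered with the standard library:

```python
from fractions import Fraction

# 0.333333 taken literally is 333333/1000000 ...
exact = Fraction(333333, 1000000)
# ... but bounding the denominator recovers the intended repeating decimal 1/3.
guess = Fraction("0.333333").limit_denominator(100)
print(exact, guess)  # → 333333/1000000 1/3
```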
3. Originally Posted by qbkr21
I was wanting to know how I can use the graphing calculator to take a decimal and change it into an irrational fraction. Is there some sort of setup I can use...Thanks!
There is a way to find a repeating decimal.
And I also found a way to find decimals for irrationals of the form,
$\frac{a+\sqrt{b}}{c}$
But it might be too advanced; it involves continued fractions and solving quadratic equations.
|
|
# How does one create an alpha signal
I am curious and want to do some personal research into alpha signals, but I couldn't find much relevant information. What I think would be the way to go is to start with a return series, build a long-short portfolio (e.g. top/bottom decile or some more refined ML techniques), take those returns, calculate z-scores and do something like
z-score * IC * volatility
to get a real alpha signal that I can use in a portfolio optimisation context. Would be great to get more insight.
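A minimal sketch of the scheme described in the question; all numbers, names and the IC value are hypothetical illustrations, not a validated methodology:

```python
from statistics import mean, pstdev

def alpha_from_scores(scores, ic, vols):
    """Convert raw cross-sectional scores into alphas: z-score * IC * volatility."""
    mu, sigma = mean(scores), pstdev(scores)
    return [(s - mu) / sigma * ic * v for s, v in zip(scores, vols)]

scores = [0.5, 1.0, 1.5, 2.0, 2.5]      # hypothetical factor scores
vols = [0.20, 0.25, 0.30, 0.22, 0.18]   # hypothetical asset volatilities
alphas = alpha_from_scores(scores, ic=0.05, vols=vols)
print([round(a, 4) for a in alphas])
```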
• Generally, the questions of type "How to make money?" are off topic here. – LazyCat Jun 4 '17 at 17:27
• If the goal is to only understand how to generate an alpha signal in general terms that could be fed in something like Black and Litterman, I thinks its okay. Asking for a method to generate alpha for today's markets? Not so much. – Bob Jansen Jun 4 '17 at 17:54
• I am interested in the concept, I thought that's fine as it's a big part of factor research. – ThatQuantDude Jun 4 '17 at 19:21
• One paper I know that talks about how to incorporate empirical data signals into a portfolio optimisation is Brandt's well known paper on Parametric Portfolio Policies. But I don't think that paper talks about Alpha per se. – Alex C Jun 4 '17 at 19:52
• The only 'practical' publication I could find was from MSCI titled 'Converting Scores into Alphas'. – ThatQuantDude Jun 4 '17 at 19:58
|
|
## Algebraic & Geometric Topology
### Spaces of orders of some one-relator groups
#### Abstract
We show that certain left-orderable groups admit no isolated left orders. The groups we consider are cyclic amalgamations of a free group with a general left-orderable group, the HNN extensions of free groups over cyclic subgroups, and a particular class of one-relator groups. In order to prove the results about orders, we develop perturbation techniques for actions of these groups on the line.
#### Article information
Source
Algebr. Geom. Topol., Volume 18, Number 7 (2018), 4161-4185.
Dates
Revised: 20 July 2018
Accepted: 29 July 2018
First available in Project Euclid: 18 December 2018
https://projecteuclid.org/euclid.agt/1545102069
Digital Object Identifier
doi:10.2140/agt.2018.18.4161
Mathematical Reviews number (MathSciNet)
MR3892243
Zentralblatt MATH identifier
07006389
#### Citation
Alonso, Juan; Brum, Joaquín. Spaces of orders of some one-relator groups. Algebr. Geom. Topol. 18 (2018), no. 7, 4161--4185. doi:10.2140/agt.2018.18.4161. https://projecteuclid.org/euclid.agt/1545102069
#### References
• J Alonso, J Brum, C Rivas, Orderings and flexibility of some subgroups of $\mathrm{Homeo}_+(\mathbb R)$, J. Lond. Math. Soc. 95 (2017) 919–941
• G Baumslag, Topics in combinatorial group theory, Birkhäuser, Basel (1993)
• V V Bludov, A M W Glass, On free products of right ordered groups with amalgamated subgroups, Math. Proc. Cambridge Philos. Soc. 146 (2009) 591–601
• S D Brodskiĭ, Equations over groups, and groups with one defining relation, Sibirsk. Mat. Zh. 25 (1984) 84–103 In Russian; translated in Siberian Math. J. 25 (1984) 235–251
• A Clay, D Rolfsen, Ordered groups and topology, Graduate Studies in Mathematics 176, Amer. Math. Soc., Providence, RI (2016)
• P Dehornoy, Monoids of $O$–type, subword reversing, and ordered groups, J. Group Theory 17 (2014) 465–524
• B Deroin, A Navas, C Rivas, Groups, orders, and dynamics, preprint (2014)
• E Ghys, Groups acting on the circle, Enseign. Math. 47 (2001) 329–407
• T Ito, Dehornoy-like left orderings and isolated left orderings, J. Algebra 374 (2013) 42–58
• T Ito, Construction of isolated left orderings via partially central cyclic amalgamation, Tohoku Math. J. 68 (2016) 49–71
• V M Kopytov, N Y Medvedev, Right-ordered groups, Consultants Bureau, New York (1996)
• P A Linnell, The space of left orders of a group is either finite or uncountable, Bull. Lond. Math. Soc. 43 (2011) 200–202
• D Malicet, K Mann, C Rivas, M Triestino, Ping-pong configurations and circular orders on free groups, preprint (2017)
• A Navas, On the dynamics of (left) orderable groups, Ann. Inst. Fourier $($Grenoble$)$ 60 (2010) 1685–1740
• A Navas, A remarkable family of left-ordered groups: central extensions of Hecke groups, J. Algebra 328 (2011) 31–42
• C Rivas, Left-orderings on free products of groups, J. Algebra 350 (2012) 318–329
• C Rivas, R Tessera, On the space of left-orderings of virtually solvable groups, Groups Geom. Dyn. 10 (2016) 65–90
• A S Sikora, Topology on the spaces of orderings of groups, Bull. London Math. Soc. 36 (2004) 519–526
|
|
# Skullish Remains of Esben
The skull still seems to contain some of the Void Entity’s power.
Tier UT
MP Cost 110
On Equip +40 HP, +4 ATT, -3 SPD
Effect(s) On enemies: Inflicts Slowed for 2.5 seconds
Damage $\begin{cases}160&(\text{wis}<50)\\3.2\cdot\text{wis}&(50\le\text{wis})\end{cases}$
Defense Ignored 30
Heal 90
Heal Range 5 (+0.5 tiles for every 10 WIS above 50) tiles
XP Bonus 7%
Feed Power 800
Forging Cost 45 / 90 / 400 / 1
Dismantling Value 15 / 30
Loot Bag Esben the Unwilling, Prismimic Defender, Mighty Quest Chest
Blueprint Grand Bazaar ( 5175 / 575)
Notes
This skull has the unique ability of slowing its targets temporarily, giving the Necromancer some extra utility and making it very useful in many situations, such as when there are many fast-moving enemies and you need the breathing room. However, it only slows for 2.5 seconds, so it is nearly impossible to keep enemies permanently slowed with this skull alone (unless you have close to a maxed Magic Heal pet).
It has a fairly small radius (the same as the Heartstealer Skull) and does a bit less damage than the Lifedrinker Skull. The skull also costs the same as the latter, and the +4 attack bonus usually outweighs the -3 Speed debuff, even for a slower class like the Necromancer. However, keep in mind that it does also give 30 fewer HP compared to the aforementioned Lifedrinker Skull, making the Necromancer squishier than most other options.
Nevertheless, the skull can be safely used as a main. It is one of the more common white bag items, so can be relatively easily replaced. If you truly don’t have any use of it, it becomes nice pet food, bringing 800 feed power to whichever pet you want to feed it to.
The -3 SPD stat bonus for the skull is mirrored by the +3 SPD bonus of the Staff of Esben.
History
The description used to refer to 'the Dark Spirit', but was changed to the Void Entity to accommodate the lore for the Lost Halls. The old description is as follows:
This skull still seems to contain some of the Dark Spirit’s power.
Before Exalt Version 1.5.0.0 (May 2021), this item had the following sprite:
Before Exalt Version 2.0.0.0 (Aug 2021), this item was soulbound.
|
|
## Precalculus (6th Edition) Blitzer
$5$ years from 2014 is the year $2019$.
Let x be the number of years after 2014. 11.3 increased by $0.2x\quad=11.3+0.2x$ We want to find the x for which this equals $12.3.$ $11.3+0.2x=12.3 \qquad$ ... subtract $11.3$ $0.2x=12.3-11.3$ $0.2x=1.0 \qquad$ ... multiply by 5 $x=5$ $5$ years from 2014 is the year $2019$.
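The algebra can be double-checked with exact rational arithmetic, which avoids the floating-point error of evaluating 12.3 - 11.3 directly:

```python
from fractions import Fraction

# Solve 11.3 + 0.2x = 12.3 exactly.
x = (Fraction("12.3") - Fraction("11.3")) / Fraction("0.2")
print(x, 2014 + x)  # → 5 2019
```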
|
|
# How do you graph y=5/2x-2 using intercepts?
May 20, 2017
We can find that the x-intercept is $\left(\frac{4}{5} , 0\right)$ and the y-intercept is $\left(0 , - 2\right)$.
Drawing a straight line through these two points creates a graph of the function.
#### Explanation:
The x-intercept is the point at which the function meets the x-axis, which is the line $y = 0$.
If we substitute $y = 0$ into the equation, we get:
$0 = \frac{5}{2}x - 2$
Rearranging,
$\frac{5}{2} x = 2$
$5 x = 4$
$x = \frac{4}{5}$
So one point on the line is the point $\left(\frac{4}{5} , 0\right)$.
The y-intercept, similarly, is the point at which the function meets the y-axis, which is the line $x = 0$. Substituting $x = 0$ into the equation yields:
$y = \frac{5}{2} \left(0\right) - 2$
$y = - 2$
So the y-intercept is the point $\left(0 , - 2\right)$.
Simply drawing a line through these two points will create a graph of the function, using intercepts.
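The two intercept computations can be sketched as follows (exact fractions keep $\frac{4}{5}$ from becoming 0.8):

```python
from fractions import Fraction

m, b = Fraction(5, 2), Fraction(-2)   # y = (5/2)x - 2

x_intercept = (-b / m, Fraction(0))   # set y = 0 and solve for x
y_intercept = (Fraction(0), b)        # set x = 0
print(x_intercept, y_intercept)       # x-intercept (4/5, 0), y-intercept (0, -2)
```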
|
|
# Is this the mathematical equation for the Vigenere Cipher
Is this the mathematical equation for the Vigenere Cipher or something else and if something else what is it?
Where t is the letter of plain text and n is the position of t within the text and c is the ciphered character.
For 't' and 'c' taking a as 1 and z as 26.
((t + n) > 25 -> c = (t + n) - 25) ^ ((t + n) <= 25 -> c = (t + n))
-
Sorry, I meant a zero-based index (a as 0 and z as 25); thanks to @fgrieu for pointing out my mistake. – Chris Dec 18 '13 at 15:13
NO, the question does not contain the mathematical equation for the Vigenere Cipher with plaintext t and ciphertext c in the set $\{1\dots 26\}$ (with the letter a as 1 and the letter z as 26), and displacement (or key) n, for two reasons:
• the best interpretation I can make of the expression given is by considering -> to be the $\implies$ mathematical symbol, and ^ to mean logical XOR (or perhaps logical OR; the two meanings are equivalent in this context), that is, one of the left-hand or right-hand statements holds; but even with these assumptions, for n of 1 (meaning next character, circularly) the expression given maps the plaintext y (coded by t of 25) to the ciphertext a (coded by c of 1) instead of the desired z (coded 26);
• the expression given does not parse (to me) as a mathematical equation.
With the original formalism, the Vigenere Cipher with plaintext t and ciphertext c in $\{1\dots 26\}$, and displacement n in the set $\{0\dots 26\}$, a correct expression would be:
((t + n) > 26 -> c = (t + n) - 26) ^ ((t + n) <= 26 -> c = (t + n))
and a passable mathematical equation would be: $$t\mapsto c=((t+n+25)\bmod 26)+1$$
If we use the set $\{0\dots 25\}$ for plaintext and ciphertext, we get the nicer $$t\mapsto c=(t+n)\bmod 26$$
or in the original formalism the expression
((t + n) > 25 -> c = (t + n) - 26) ^ ((t + n) <= 25 -> c = (t + n))
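The 0-based formula $c=(t+n)\bmod 26$ translates directly into code. The following is my own illustrative sketch, not part of the original answer:

```python
# Vigenere encryption using the 0-based formula c = (t + n) mod 26,
# where a = 0, ..., z = 25 and n is the shift given by the key letter.
def vigenere_encrypt(plaintext: str, key: str) -> str:
    out = []
    for i, ch in enumerate(plaintext):
        t = ord(ch) - ord('a')                  # plaintext letter as 0..25
        n = ord(key[i % len(key)]) - ord('a')   # key letter gives the shift
        out.append(chr((t + n) % 26 + ord('a')))
    return ''.join(out)
```

With the classic test vector, vigenere_encrypt('attackatdawn', 'lemon') gives 'lxfopvefrnhr'.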
-
Sorry, yes, I meant a zero-based index (0 to 25). Although what I was trying to do with the equation (the -> and ^) was mathematically write an if/else statement, as in if((t + n) > 25) then c = (t + n) - 25 elseif((t + n) <= 25) then c = (t + n). Could you please explain what the ↦ means in your equation though? And now with this if/elseif statement, would my -> mean ⟹ like you interpreted? – Chris Dec 18 '13 at 15:11
@Chris If you click on 'edit' below any answer you can read the plaintext that gets rendered into Latex with mathjax. The symbol you asked about is created with the command "\mapsto" and the symbol indeed does have this meaning. In the context above, assuming we represent the cipher by the function $f$, then the expression $t\mapsto c=((t+n+25)\bmod 26)+1$ is equivalent to $f(t)=c=((t+n+25)\bmod 26)+1$. To say the original out loud: "t maps (under the Vigenere Cipher) to..." – Kaya Dec 18 '13 at 15:49
@Chris: Welcome to CSE. Adding to the previous comment: to see what's in a math statement, right-click on it and use "Show Math As.. TeX Commands". Also: depending on your programming language, what you actually wanted may be c = (t+n)%26; (without any test). – fgrieu Dec 18 '13 at 16:11
Thanks guys, I think i get it now. – Chris Dec 18 '13 at 17:21
So would this be right as well then? $$t \mapsto c = \begin {cases} (t + n) > 25 & (t + n) - 25 \\ t + n \le 25 & (t + n) \end {cases}$$ – Chris Dec 18 '13 at 17:32
|
|
# Positive rotational symmetric solution for p-Laplacian
I have the following problem and I just can't get my head around how to solve it. Let $1<p<n$ and $q=\frac{np}{n-p}$, $u\in\mathcal{C}_{n,p}=\{f\in W^{1,p}_{loc}: \|f\|_{L^q(\mathbb{R}^n)}=1\}$ and
$\mathcal{F}(u)=\int_{\mathbb{R}^n}|Du|^p.$
Find all positive, rotational symmetric solutions for the corresponding Euler-Lagrange-Equation.
I don't need a perfect solution just some ideas on how to solve this.
-
I suggest you write out the Euler-Lagrange equation (which is the p-Laplace equation), and then write the equation in spherical coordinates. Rotationally symmetric solutions mean that the function is of $r$ alone, which should give you an ODE. – Ray Yang Feb 10 '13 at 19:54
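Following that hint, here is a sketch of what the reduction looks like (my own summary, to be checked against the actual computation). The Euler-Lagrange equation of $\mathcal{F}$ under the constraint $\|u\|_{L^q(\mathbb{R}^n)}=1$ is the critical $p$-Laplace equation with a Lagrange multiplier $\lambda$:
$$-\operatorname{div}\left(|Du|^{p-2}Du\right)=\lambda\,u^{q-1},\qquad u>0,$$
and writing $u=u(r)$ with $r=|x|$ turns this into the ODE
$$-\left(r^{n-1}\,|u'(r)|^{p-2}u'(r)\right)'=\lambda\,r^{n-1}\,u(r)^{q-1}.$$
The positive radial solutions should then be, up to the normalization, of Talenti type: $u(r)=\left(a+b\,r^{p/(p-1)}\right)^{-(n-p)/p}$ with constants $a,b>0$.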
|
|
# Striim 3.10.3 documentation
### Making visualizations interactive
Drill-downs can be used within a single page to filter data interactively. For example, the PosApp visualization below can be filtered by clicking on the pie-chart labels:
If you click the WARM label of the right-hand pie chart, the map and heat map display only data for merchants currently in that category:
This interactivity is defined by setting the drill-down configuration in the pie charts, then using their Id values in the queries for the two other visualizations. The drill-down configurations are:
visualization
drill-down configuration
Status (left) pie chart
Page: Interactive HeatMap Id: status Source field: Status
Category (right) pie chart
Page: Interactive HeatMap Id: category Source field: Category
Since these visualizations are on the Interactive HeatMap page, the drill-down filters the data without switching to another page.
The US map's query is:
select * from Samples.MerchantActivity [15 minute and push] where (:status
IS NULL or (Status = :status)) and (:category IS NULL or (Category = :category))
group by merchantId;
When the page is first loaded, the :status and :category values are null, so the map displays all data. When you click the WARM label, the :category value is set to WARM, and the map updates accordingly.
To clear the filter and return to viewing all data, click Clear All (next to the funnel icon at left).
To explore this more, run PosApp and go to the Interactive HeatMap page.
|
|
#### Biography
I was born and brought up in North Wales until the age of eight, when my family moved to the Sultanate of Oman. I spent four and a half years at school there at the British School Muscat, before going to Malvern College in Worcestershire for five years. During that time my family left Oman and moved to Santiago in Chile. After Malvern I went to St John's College, Cambridge to read Natural Sciences, obtaining a 2.1 in Astrophysics in 2005. I then moved to Linacre College, Oxford to do a DPhil in Physics based at AOPP, which I completed in December 2009. This was followed by a short stint at the Centre for the Analysis of Time Series at the London School of Economics, before moving back to Oxford to start a post-doc position in the Geophysical and Planetary Fluid Dynamics group in AOPP. I married Julia Angell in August 2009, who is a Speech and Language Therapist, and we lived in Wolvercote until 2017, and then in Évry near Paris until 2019. My family is based near Conwy in North Wales, so I spend time there as well. I have a sister, Lucy.
I am a Fellow of the Royal Astronomical Society, an Associate Fellow of the Royal Meteorological Society, and a Member of the Institute of Physics.
In my time at Malvern and Cambridge I spent a lot of time involved with rifle shooting, which took up much of my time outside of school/university. When I moved to Oxford I turned my attention to fencing and cricket instead. In days gone by I have also been known to play the trombone and piano. These days I spend what spare time I have involved in tabletop wargaming and painting, and I also enjoy having two pet dogs, Django and Scout.
#### A miscellany of useful technical / computing things and links I have found useful
Not updated for some time!
Commenting: some LaTeX classes don't have a selective comment (i.e. ignore text) command. This can be rectified by putting \newcommand{\ct}[1]{} at the top of your document; any subsequent text placed within \ct{} will be ignored by the compiler.
Derivatives can be contracted by using the following custom commands:
Full, first derivative: \newcommand{\fd}[2]{\frac{d #1}{d #2}}
Full, second derivative: \newcommand{\ffd}[2]{\frac{d^2 #1}{d #2^2}}
Partial, first derivative: \newcommand{\pd}[2]{\frac{\partial #1}{\partial #2}}
Partial, second derivative: \newcommand{\ppd}[2]{\frac{\partial^2 #1}{\partial #2^2}}
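As a usage sketch (the heat-equation example is mine, not from the page), with these macros defined in the preamble a derivative-heavy equation becomes much shorter to type:

```latex
% Preamble:
\newcommand{\pd}[2]{\frac{\partial #1}{\partial #2}}
\newcommand{\ppd}[2]{\frac{\partial^2 #1}{\partial #2^2}}

% Body: the 1-D heat equation, written with the shortcuts.
\begin{equation}
  \pd{u}{t} = \kappa \, \ppd{u}{x}
\end{equation}
```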
The Comprehensive LaTeX Symbol List: A list of all the symbols you could ever think of with their LaTeX commands, and then some more.
Various LaTeX guides are here and here.
The Comprehensive TeX Archive Network - first place to look for packages to download.
JabRef reference manager for BibTeX. Much more user-friendly than typing in all the syntax yourself, and easy to manipulate large sets of references at once. Works under both Windows and Linux.
KBibTeX is also a good GUI for BibTeX documents.
TeXnicCenter (Windows) and Kile (Linux). Two good GUI frontends for LaTeX editing.
Conditional compilation: this is useful for example when you need to include different versions of the same figure for online and print versions of a paper (usually one in colour, one not). Insert the following before the \begin{document} command:
\usepackage{ifthen}
\newboolean{onlineversion}
\setboolean{onlineversion}{true}
Then in the main part of the document, wherever you need only one option to be compiled, use
\ifthenelse{\boolean{onlineversion}}{ONLINE version content}{PRINT version content}
Only the online content will appear. To get the print content to appear, set the boolean onlineversion to false.
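Putting the pieces together, a minimal compilable document might look like this (a sketch; \ifthenelse is provided by the `ifthen` package):

```latex
\documentclass{article}
\usepackage{ifthen}

\newboolean{onlineversion}
\setboolean{onlineversion}{true}  % set to false for the print version

\begin{document}
\ifthenelse{\boolean{onlineversion}}%
  {This is the ONLINE version (e.g.\ include the colour figure here).}%
  {This is the PRINT version (e.g.\ include the greyscale figure here).}
\end{document}
```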
A presentation class to use with LaTeX (pdflatex, to be precise). It can be found here. I think the presentations made with this package look a lot more professional than PowerPoint (I can't comment on Apple's Keynote, as I have never used it). Particularly neat is the ability to place bars along the top and bottom of the slides which allow the audience to see at a glance where you are in the presentation. As with all things LaTeX, it is somewhat fiddly to start with, but I feel it is worth the effort.
I have found that when IDL figures are used in presentations they usually look grey and are hard to see. This is because the lines are too thin when printed to the .ps format. To get around this, add the following line to the plotting code:
XTHICK=6,YTHICK=6,THICK=4,TICKLEN=0.03,CHARSIZE=1.7,CHARTHICK=5
The result looks weird when displayed within IDL, but much better than the default when used to output as postscript.
There is a useful list of IDL colour tables here.
TeXtoIDL: routines to include LaTeX syntax in plots, etc.
IDL colour bars
A Library of IDL Programs by Daithi Stone.
Functional list of IDL routines
One-liners in csh, awk, and sed.
Alan Iwi has a page with a lot of useful UM information here. In particular, the file utilities and parallel installation guide are very useful.
xconv / convsh - software for file manipulation and conversion
Panoply - netCDF viewer created by NASA for geo-gridded data.
VAPOR: Visualization and Analysis Platform for Ocean, Atmosphere, and Solar Researchers.
#### Some comments on presentation technique
27 April 2007
Updated 2 September 2011
Having attended 97 talks over five days at the 2007 European Geosciences Union General Assembly, I now feel suitably qualified to identify some things that make a good talk, and some things that make a bad one. The most surprising thing I learned at the conference was that speakers don't necessarily bring their 'A-game' to international conferences, in terms of presentation technique and preparation. More experience ⇒ better presentation is definitely not a general statement; many poor talks were given by speakers whose experience would lead you to expect otherwise. True, many were presenting in a foreign language, but you will see below that most (but not all!) of the points relate to slide and talk structure, rather than oratorical style. Every example of 'poor' technique noted below was made at least once.
Presentation preparation depends primarily on (1) the target audience and (2) what the speaker wants to get across. These notes were compiled from attending talks at a conference covering many different fields. The typical audience member could therefore be assumed to be interested and intelligent, but not necessarily an expert (similar to the audience at a departmental seminar, for example). The notes below apply primarily with this audience in mind; the approach will differ when the audience is one's research group, for example.
Beamer is a presentation class for LaTeX. In my opinion, it looks more professional than PowerPoint; an example presentation can be found here. It is a bit fiddly (like everything in LaTeX), but I think the results are well worth the effort.
Oh, and I'm sure I am guilty of some of these things too - please tell me if I am, otherwise I will never improve!
#### Some 'schoolboy' errors...
• Face the audience! If you must face the screen, at least point your feet towards the audience and turn your head towards the screen.
• If a microphone is available and you have a sore throat, use it!
• If something can be said simply, say it simply. Leave out as many unnecessary technical details as possible (e.g. names of model variables).
• Try to avoid using the words 'obviously' or 'clearly', or words to that effect, as you can be guaranteed that at least one person in the audience will think it is neither.
• Use a large font size on axes. The plot and axes labels should be as large as the main text in order to be seen from the back of the room.
#### General points:
• Spend at least one minute on each slide - it takes that long to read and take in most slides, and some people may be taking notes.
• If you are going faster than your normal speaking speed, there is too much material.
• Put the most important parts of the slide in the top half - the audience seating may not be stepped and the screen may not be raised.
• Speak loudly, smile and be generally enthusiastic. If you look bored, or come across as confused, arrogant or unconfident, your material will appear so too.
• A colourful title page gives the audience confidence that the rest of the talk will be interesting.
• Not every slide needs a title.
• If a movie doesn't work when running through the presentation beforehand, make sure you have an image or two to replace it. Going to a new slide mid-presentation and finding that your movie doesn't work looks rather amateurish. To avoid this, use your own laptop if possible - try to avoid using the provided hardware as inevitably something won't work.
• If you have many slides you know you won't use, remove them (for example, if the presentation is a shorter version of an earlier one).
• A conclusion / summary slide at the end is essential.
• Humour can work, but only in context...
#### Some things I saw which worked well:
• Leaving up a page of references as the final slide, while questions are asked.
• Linking to subplots relevant to only part of a diagram by clicking on that part of the diagram.
• Comparisons with familiar concepts, e.g. comparing the size of Antarctica with the relative positions of European cities.
• If the main focus of the talk can be posed as a single question, pose and answer that question as part of the introduction, before expanding on it. This is instead of building up to the answer, as people will eventually forget what the question was, and any impact will be lost.
• If your presentation style is a bit 'off the wall', the audience will stay interested for longer, and will be more likely to remember your talk afterwards...
• Flow charts work well, as long as they are built up sequentially and not displayed all at once.
• If displaying a large matrix (particularly a matrix of data), use a contour representation of the numbers instead of a table, unless the matrix is very sparse.
• Lists are usually OK and can be used effectively (e.g. the ExoMars payload list by priority for a number of different funding scenarios).
#### Things to avoid:
• Complete paragraphs / covering the slide with text (probably 40-50% of speakers were guilty here). However, lots of text sometimes works if you know that your spoken English isn't very clear.
• Equations. Only include equations if absolutely essential or very simple - replace them with images or words if at all possible. Usually equations serve to confuse rather than clarify, and anyone sufficiently clued up with the subject to understand the equations in the time spent on the slide will know them already.
• Abbreviations, unless you are sure they are part of that particular audience's common knowledge (e.g. NASA, EGU), as people will forget them (maybe put the full version in small text at the bottom of the slide).
• Overrunning - people moving between rooms at a multi-session conference are relying on talks finishing on time, in order to get to the next talk before it starts. If you have something groundbreaking to say then this can be relaxed, but if so it should have been put earlier in the talk! Most of the audience won't remember the details, and will resent you for overrunning and for cutting into the time available for questions.
• Tables, unless they are 2x2 - even 3x2 tables are difficult to follow with other things on the slide. If nothing else is on the slide, however, then slightly larger tables can work.
• Slides with multiple but very similar plots, without explicitly stating the difference between them. You need to say how the parameters change between plots (i.e. why there are multiple plots at all) and what the difference is between the plots themselves.
• Assuming the audience is completely familiar with the plotting techniques you are using - at least put labels on plots to explain them (e.g. Talagrand or Hovmöller diagrams).
• Splitting words over lines - choose a different word or use an abbreviation.
• In a short talk, avoid a summary slide at the beginning, unless your talk structure is non-standard (i.e. different from motivation, aims, method, results, and future work).
• Significant areas of the slide taken up by titles/toolbars/logos/template structure etc., leaving little space for actual content (which is then displayed too small).
• Displaying an extended technical scheme (e.g. all the components of a satellite instrument and their interactions) - just include the important bits. Otherwise, people not familiar with the diagram will wonder why you didn't talk about the bits you missed out.
#### Some colour combinations which don't work:
• Green and light blue
• Grey and blue
• Red and black
• Blue (text) on red (background)
• Light green on white
• Black on blue
• Black on white is just boring, if it is the only thing on the slide.
#### A few comments on posters:
I found it easier to spot faults in orals than on posters. Having said that, I spent 85% of my time in talks and only about 15% of the time looking at posters. A few points:
• Make A4 copies for people to take, and leave them by the poster board after taking down the poster.
• Make the title big enough and contrasted enough with the background to be seen on the other side of a lecture-hall sized room.
#### Finally...
In one talk I attended, the speaker inflected the end of every sentence of a 15 minute talk, except the very last sentence of the talk. It was quite hypnotic, eventually hilarious, and quickly distracted the audience from the material. Not to be recommended. If anyone you know does this, please tell them, as they probably don't realise it.
#### About the sport of rifle shooting in the UK
The basic idea is to shoot at targets between distances of 25 and 1200 yards away, and to try to get as many shots as close to the centre as possible! At Cambridge I shot in nine Varsity matches against Oxford in various disciplines of the sport, winning four Half Blues, and I was Captain of the University first team in my final year.
#### Rifles
• At short range (25 yards, small-bore) we use the .22 rifle. This fires small lead bullets about 2cm in length (including the casing) and 0.22 inches in diameter at card targets (the area within the box is about A4, giving an indication of the size of each target).
• At longer ranges (300-1000 yards, full-bore) we use high-powered target rifles with "iron" sights (two iron circles mounted on either end of the rifle), firing a 7.62mm calibre (diameter) bullet.
• At the longest range, 1000-1200 yards (also referred to as full-bore), we use match rifles, which are very similar to target rifles but have a telescopic sight mounted on the top. The bullets they fire are the same calibre as the target rifle, but have about 30% more gunpowder in them.
The accuracy of these rifles means that it is possible for a good shot to hit the bull's-eye from 1200 yards away with 6 out of 10 shots, with the remaining three falling within a foot or so of the 'bull'. At 1200 yards the bull is about the same size as a bathroom sink. As the standard of equipment has improved, it has become necessary to introduce a smaller 'V-bull' inside the bull to separate the top competitors. This is worth 5.1 points instead of 5 for a bull (however, ten V-bulls are worth 50.10 in total, not 51). At 1200 yards the 'V' is about the size of a large dinner plate. It was not until 2001 that the maximum score of 100.20 was made in top-level competition.
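The scoring convention can be made concrete with a tiny sketch (my own illustration, not an official scoring routine): a V counts the same 5 points as a bull, but the V count is tallied separately as a tie-breaker.

```python
# Fullbore score for a string of shots, e.g. ['5', 'V', '4'].
# A V-bull scores 5 points like a bull, but the V count is kept
# separately, so ten V-bulls read "50.10" rather than 51.
def fullbore_score(shots):
    points = sum(5 if s == 'V' else int(s) for s in shots)
    vees = sum(1 for s in shots if s == 'V')
    return f"{points}.{vees}"
```

Twenty V-bulls give "100.20", the maximum score mentioned above.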
The bullet takes just over a second to reach the target from 1200 yards away, leaving the barrel at about 900 m/s (supersonic). In the right light and humidity conditions, it is possible to see the bullet travelling down the range if you position yourself directly behind a firer and look through a telescope at the target; the shape formed is approximately a left-handed helix of pitch 2π.
The practice of competing with a particular weapon is called a discipline. I was most involved with the latter two full-bore disciplines, so the text below refers primarily to those.
#### Competition
Most competitions (or 'shoots') in full-bore consist of a string of ten shots fired from the prone position (lying on your front), with up to two non-scoring shots beforehand to 'sight' the rifle and to test the wind conditions. Some shoots are 7 or 15 shots long, and in match rifle a handful of competitions are 20 shots long - this is very hard on the shoulder and upper back in particular as the 'kick' from these rifles is harder than that from a shotgun or a military rifle such as the AK-47 or SA80. To counter this, a thick padded shooting jacket is worn, in addition to the obvious safety kit such as ear protection. Some match rifle shots choose to shoot while lying on their backs; this is called the supine position.
The strength and direction of the wind is very important and an individual's performance in a particular shoot depends greatly on an ability to 'read' the wind. While this can be done systematically, after a time it becomes more of an intuition. At long range, a change in the angle of the wind by 30 degrees may mean the bullet lands on the target over a metre away from where it was aimed!
#### International competition
The two full-bore disciplines are most common in the UK, the Commonwealth and in former British colonies, along with a few other countries such as Germany and the USA (although it is very much a minority discipline in the USA compared with other types of shooting!). As of ~2005, the UK, Canada and Germany were the strongest national teams.
International team and open individual championships take place each year at the national ranges of each of the major nations who compete: the UK, South African, Australian and Canadian meetings are the main events in the calendar. The nature of the sport (i.e. being minimally dependent on fitness and strength) means that it is one of the few in the world where every person competes on equal terms; there are no ability divisions by age or between men and women.
The most prestigious of these open competitions is the Imperial Meeting, which is held over three weeks each July at Bisley Camp near Guildford in Surrey. The two full-bore disciplines form the bulk of the Meeting, with about 1500 competitors from all over the world. There are a number of other disciplines competed in such as Service Rifle, Historic Arms, and the Schools Meeting, which is a week of competition for CCF units in UK independent schools.
Bisley is the 'home' of the sport and the individual competitions which make up the Imperial Meeting are regarded as being the de facto target shooting world championships. Bisley is a very strange place. It contains two main ranges: Century has 108 targets and is 600 yards long (almost exactly a square), and Stickledown has 50 targets and is 1200 yards long (very much not a square). Surrounding these are 40 to 50 clubhouses, about ten smaller ranges, several camping sites and caravan parks, and a lot of green space. Many of the clubhouses are over 100 years old; as a result of this, and of the rather conservative attitudes associated with a sport of this type, it is occasionally said that Bisley is the last true remnant of the British Empire, and that entering Bisley is like stepping back into the 19th century.
The National Rifle Association of the UK has its HQ at Bisley, and is the UK governing body for rifle shooting. The UK NRA should not be confused with the NRA in the USA - the UK NRA is almost exclusively a sporting organisation and not a political one like its American counterpart.
As with all sports, some competitions are more prestigious than others. There are five competitions held at the Imperial Meeting which are the most important competitions in the UK rifle shooting calendar:
• The Grand Aggregate (individual, target rifle) - An aggregate of all the individual target rifle shoots; the winner usually gets no less than about 695/705.
• The Kolapore (team, target rifle) - The main international target rifle match, for teams of 8. In recent years the Great Britain team has been very strong, setting a record score of 1199/1200 in 2012. Other important international matches are the Palma Match, held every four years, and the Australia Match (called the Empire Match until 1988), which is usually held every year. Both of these matches are held in a different country each time, but the Kolapore is always held at the UK Imperial Meeting.
• The Hopton (individual, match rifle) - An aggregate of all the individual match rifle shoots; the record score is currently 1004/1025, set in 2004.
• The Elcho (team, match rifle) - The home nations (England, Scotland, Wales, Ireland) team match, for teams of 8. It is held each year.
• HM The Queen's Prize (individual, target rifle) - This is the most prestigious prize in the sport of target shooting. It is a three-stage competition, held each year. The first and second stages are at short range (300, 500 and 600 yards), and the third stage is at long range (900 and 1000 yards). Only 100 people compete in the third stage. Queen's III is the last event of the Imperial Meeting and attracts a crowd of several thousand spectators; the spectator-unfriendly nature of the sport is helped in this event by each firer having a continually-updated scoreboard behind their firing point, and there is a leaderboard at the side of the range which is updated shot-by-shot. After the competition is complete the winner is chaired from the range by friends, and spends the rest of the day (and night!) being chaired round all the clubhouses on Bisley Camp, usually accepting a drink from each one... The winner of the Queen's is immortalised in the sport; such is the significance of the competition that winners may subsequently use the letters GM after their names in shooting circles. When the Queen's Prize was first held the prize itself was £250, enough to buy a house; however, the prize has remained at £250 ever since! Most other national meetings have an equivalent competition (for example the Governor General's Prize in Canada), but the status of Bisley as the home of target shooting has meant that the Queen's Prize remains the premier competition in the sport.
#### CURA
At Cambridge I was part of the Cambridge University Rifle Association (CURA) and the Cambridge University Small Bore Club (CUSBC), which are the full-bore and small-bore clubs respectively.
CURA has a long history stretching back over 100 years, and is one of the oldest sports clubs in the University. The Club competes as a team and as individuals at the Imperial Meeting described above, at which the Varsity Matches against Oxford take place.
There are three Varsity matches each year: the Chancellors (target rifle, teams of 8, and the most important of the three), the Humphry (match rifle, teams of 4) and the Heslop (small bore, teams of 8, which takes place in February in London). Between 1981 and 2004 CURA won an unprecedented 24 straight Chancellors Varsity match victories, a record not even approached by any other sport at either University. The run was stopped in 2005, one short of a quarter-century of victories, in a match won convincingly by Oxford 1155.112v-1142.115v (out of 1200). This followed two very close results; the 2004 match was won by Cambridge by one point, and the 2003 match produced record scores from both teams, 1170.126v-1164.132v.
The sport has Discretionary Full Blue status: all participants in the two full-bore Varsity matches and some of the participants in the small-bore Varsity match are awarded Half Blues, and if a set of very stringent individual-score-based criteria are met, a Full Blue may be won. The Chancellors is shot at the same time as the Kolapore (see above), and the criteria for a Full Blue are based on the Great Britain score in that match. Therefore a Full Blue is won for being of approximately international standard; only 19 have been won since the sport was granted this status in 1985.
In my time with CURA I held positions as Secretary and Vice-Captain, and was Captain of the club in my final year. Unfortunately my legacy as Captain was the first Varsity Match loss in 25 years! I competed in the Chancellors four times, the Humphry twice and the Heslop three times.
|
|
24.01.2020, GreenHerbz206
# How am i supposed to make a function? number 15
Answer is A.
Step-by-step explanation:
Negative slope.
Part A
T(n) = 58n+1800
Work Shown:
1800 = initial cost or starting cost
58*1 = 58 = cost for one person
58*2 = 116 = cost for two people
58*n = cost for n people
58n+1800 = total cost
58n+1800 = T(n)
T(n) = 58n+1800
Part B
D(T) = 0.85T
Work Shown:
T = total bill
15% discount means you still pay 85% of the bill (85%+15% = 100%)
D = discounted cost
D = 85% of T
D = 0.85*T
Part C
D(T(n)) = 49.3n + 1530. See attached image below for the table.
Work Shown:
We combine parts A and B.
T(n) = 58n+1800 from part A
D(T) = 0.85*T from part B
D(T(n)) = 0.85*( T(n) )
D(T(n)) = 0.85*( 58n+1800 ) ... plug in T(n) = 58n+1800
D(T(n)) = 0.85*( 58n+1800 )
D(T(n)) = 0.85*(58n)+0.85*(1800) ... distribute
D(T(n)) = 49.3n + 1530
If we plugged in n = 0, then,
D(T(n)) = 49.3n + 1530
D(T(0)) = 49.3*0 + 1530
D(T(0)) = 1530
Or we could plug n = 0 into T(n) to get T(0) = 1800
Then plug T = 1800 into D(T) = 0.85*T to get D(1800) = 1530
Either way you'll get the same answer.
Repeat this (using either method) for n = 50, 100, 150, 200 and you'll get the table of values you see below in the attached image.
Part D
The D( T(n) ) function allows us to find the discounted cost D for any number n of people that show up.
The input is n, where n is some positive integer less than or equal to 200 (because this is the reception hall's max capacity).
The output is the final discounted cost.
Example: n = 50 is one input which leads to its corresponding output of D( T(n) ) = 3995 as the table shows. So 50 people would have a discounted total cost of $3,995.
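The composition in Parts A-C can be checked directly in a few lines of Python (the function names follow the problem's T and D):

```python
# Part A: total cost for n people.
def T(n):
    return 58 * n + 1800

# Part B: a 15% discount means 85% of the bill remains.
def D(total):
    return 0.85 * total

# Part C: the composition D(T(n)) = 49.3n + 1530.
def discounted_total(n):
    return D(T(n))
```

Evaluating discounted_total at n = 0, 50, 100, 150, 200 reproduces the table values (compare with a small tolerance, since 0.85 is not exactly representable in floating point).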
# Peeter Joot's (OLD) Blog.
# Posts Tagged ‘maxwells equations’
## Polarization angles for normal transmission and reflection
Posted by peeterjoot on January 22, 2014
## Question: Polarization angles for normal transmission and reflection ([1] pr 9.14)
For normal incidence, without assuming that the reflected and transmitted waves have the same polarization as the incident wave, prove that this must be so.
Working with coordinates as illustrated in fig. 1.1, the incident wave can be assumed to have the form
fig 1.1: Normal incidence coordinates
\begin{aligned}\tilde{\mathbf{E}}_{\mathrm{I}} = E_{\mathrm{I}} e^{i (k z - \omega t)} \hat{\mathbf{x}}\end{aligned} \hspace{\stretch{1}}(1.0.1a)
\begin{aligned}\tilde{\mathbf{B}}_{\mathrm{I}} = \frac{1}{{v}} \hat{\mathbf{z}} \times \tilde{\mathbf{E}}_{\mathrm{I}} = \frac{1}{{v}} E_{\mathrm{I}} e^{i (k z - \omega t)} \hat{\mathbf{y}}.\end{aligned} \hspace{\stretch{1}}(1.0.1b)
Assuming a polarization $\hat{\mathbf{n}} = \cos\theta \hat{\mathbf{x}} + \sin\theta \hat{\mathbf{y}}$ for the reflected wave, we have
\begin{aligned}\tilde{\mathbf{E}}_{\mathrm{R}} = E_{\mathrm{R}} e^{i (-k z - \omega t)} (\hat{\mathbf{x}} \cos\theta + \hat{\mathbf{y}} \sin\theta)\end{aligned} \hspace{\stretch{1}}(1.0.2a)
\begin{aligned}\tilde{\mathbf{B}}_{\mathrm{R}} = \frac{1}{{v}} (-\hat{\mathbf{z}}) \times \tilde{\mathbf{E}}_{\mathrm{R}} = \frac{1}{{v}} E_{\mathrm{R}} e^{i (-k z - \omega t)} (\hat{\mathbf{x}} \sin\theta - \hat{\mathbf{y}} \cos\theta).\end{aligned} \hspace{\stretch{1}}(1.0.2b)
And finally assuming a polarization $\hat{\mathbf{n}} = \cos\phi \hat{\mathbf{x}} + \sin\phi \hat{\mathbf{y}}$ for the transmitted wave, we have
\begin{aligned}\tilde{\mathbf{E}}_{\mathrm{T}} = E_{\mathrm{T}} e^{i (k' z - \omega t)} (\hat{\mathbf{x}} \cos\phi + \hat{\mathbf{y}} \sin\phi)\end{aligned} \hspace{\stretch{1}}(1.0.3a)
\begin{aligned}\tilde{\mathbf{B}}_{\mathrm{T}} = \frac{1}{{v'}} \hat{\mathbf{z}} \times \tilde{\mathbf{E}}_{\mathrm{T}} = \frac{1}{{v'}} E_{\mathrm{T}} e^{i (k' z - \omega t)} (-\hat{\mathbf{x}} \sin\phi + \hat{\mathbf{y}} \cos\phi).\end{aligned} \hspace{\stretch{1}}(1.0.3b)
With no components of any of the $\tilde{\mathbf{E}}$ or $\tilde{\mathbf{B}}$ waves in the $\hat{\mathbf{z}}$ directions the boundary value conditions at $z = 0$ require the equality of the $\hat{\mathbf{x}}$ and $\hat{\mathbf{y}}$ components of
\begin{aligned}\left( \tilde{\mathbf{E}}_{\mathrm{I}} + \tilde{\mathbf{E}}_{\mathrm{R}} \right)_{x,y} = \left( \tilde{\mathbf{E}}_{\mathrm{T}} \right)_{x,y}\end{aligned} \hspace{\stretch{1}}(1.0.4a)
\begin{aligned} \left( \frac{1}{\mu} \left( \tilde{\mathbf{B}}_{\mathrm{I}} + \tilde{\mathbf{B}}_{\mathrm{R}} \right) \right)_{x,y} = \left( \frac{1}{\mu'} \tilde{\mathbf{B}}_{\mathrm{T}} \right)_{x,y}.\end{aligned} \hspace{\stretch{1}}(1.0.4b)
With $\beta = \mu v/\mu' v'$, those components are
\begin{aligned}E_{\mathrm{I}} + E_{\mathrm{R}} \cos\theta = E_{\mathrm{T}} \cos\phi \end{aligned} \hspace{\stretch{1}}(1.0.5a)
\begin{aligned}E_{\mathrm{R}} \sin\theta = E_{\mathrm{T}} \sin\phi\end{aligned} \hspace{\stretch{1}}(1.0.5b)
\begin{aligned}E_{\mathrm{R}} \sin\theta = - \beta E_{\mathrm{T}} \sin\phi\end{aligned} \hspace{\stretch{1}}(1.0.5c)
\begin{aligned}E_{\mathrm{I}} - E_{\mathrm{R}} \cos\theta = \beta E_{\mathrm{T}} \cos\phi\end{aligned} \hspace{\stretch{1}}(1.0.5d)
Equality of eq. 1.0.5b and eq. 1.0.5c requires
\begin{aligned}- \beta E_{\mathrm{T}} \sin\phi = E_{\mathrm{T}} \sin\phi,\end{aligned} \hspace{\stretch{1}}(1.0.6)
or $(\theta, \phi) \in \{(0, 0), (0, \pi), (\pi, 0), (\pi, \pi)\}$. It turns out that all of these solutions correspond to the same physical waves. Let’s look at each in turn.
• $(\theta, \phi) = (0, 0)$. The system eq. 1.0.5 is reduced to
\begin{aligned}\begin{aligned}E_{\mathrm{I}} + E_{\mathrm{R}} &= E_{\mathrm{T}} \\ E_{\mathrm{I}} - E_{\mathrm{R}} &= \beta E_{\mathrm{T}},\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.7)
with solution
\begin{aligned}\begin{aligned}\frac{E_{\mathrm{T}}}{E_{\mathrm{I}}} &= \frac{2}{1 + \beta} \\ \frac{E_{\mathrm{R}}}{E_{\mathrm{I}}} &= \frac{1 - \beta}{1 + \beta}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.8)
• $(\theta, \phi) = (\pi, \pi)$. The system eq. 1.0.5 is reduced to
\begin{aligned}\begin{aligned}E_{\mathrm{I}} - E_{\mathrm{R}} &= -E_{\mathrm{T}} \\ E_{\mathrm{I}} + E_{\mathrm{R}} &= -\beta E_{\mathrm{T}},\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.9)
with solution
\begin{aligned}\begin{aligned}\frac{E_{\mathrm{T}}}{E_{\mathrm{I}}} &= -\frac{2}{1 + \beta} \\ \frac{E_{\mathrm{R}}}{E_{\mathrm{I}}} &= -\frac{1 - \beta}{1 + \beta}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.10)
Effectively the sign for the magnitude of the transmitted and reflected phasors is toggled, but the polarization vectors are also negated, with $\hat{\mathbf{n}} = -\hat{\mathbf{x}}$, and $\hat{\mathbf{n}}' = -\hat{\mathbf{x}}$. The resulting $\tilde{\mathbf{E}}_{\mathrm{R}}$ and $\tilde{\mathbf{E}}_{\mathrm{T}}$ are unchanged relative to those of the $(0,0)$ solution above.
• $(\theta, \phi) = (0, \pi)$. The system eq. 1.0.5 is reduced to
\begin{aligned}\begin{aligned}E_{\mathrm{I}} + E_{\mathrm{R}} &= -E_{\mathrm{T}} \\ E_{\mathrm{I}} - E_{\mathrm{R}} &= -\beta E_{\mathrm{T}},\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.11)
with solution
\begin{aligned}\begin{aligned}\frac{E_{\mathrm{T}}}{E_{\mathrm{I}}} &= -\frac{2}{1 + \beta} \\ \frac{E_{\mathrm{R}}}{E_{\mathrm{I}}} &= \frac{1 - \beta}{1 + \beta}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.12)
Effectively the sign for the magnitude of the transmitted phasor is toggled. The polarization vectors in this case are $\hat{\mathbf{n}} = \hat{\mathbf{x}}$, and $\hat{\mathbf{n}}' = -\hat{\mathbf{x}}$, so the transmitted phasor magnitude change of sign does not change $\tilde{\mathbf{E}}_{\mathrm{T}}$ relative to that of the $(0,0)$ solution above.
• $(\theta, \phi) = (\pi, 0)$. The system eq. 1.0.5 is reduced to
\begin{aligned}\begin{aligned}E_{\mathrm{I}} - E_{\mathrm{R}} &= E_{\mathrm{T}} \\ E_{\mathrm{I}} + E_{\mathrm{R}} &= \beta E_{\mathrm{T}},\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.13)
with solution
\begin{aligned}\begin{aligned}\frac{E_{\mathrm{T}}}{E_{\mathrm{I}}} &= \frac{2}{1 + \beta} \\ \frac{E_{\mathrm{R}}}{E_{\mathrm{I}}} &= -\frac{1 - \beta}{1 + \beta}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.14)
This time, the sign for the magnitude of the reflected phasor is toggled. The polarization vectors in this case are $\hat{\mathbf{n}} = -\hat{\mathbf{x}}$, and $\hat{\mathbf{n}}' = \hat{\mathbf{x}}$. In this final variation the reflected phasor magnitude change of sign does not change $\tilde{\mathbf{E}}_{\mathrm{R}}$ relative to that of the $(0,0)$ solution.
We see that there is only one solution for the polarization angle of the transmitted and reflected waves relative to the incident wave. Although we fixed the incident polarization with $\mathbf{E}$ along $\hat{\mathbf{x}}$, the polarization of the incident wave is maintained regardless of TE or TM labeling in this example, since our system is symmetric with respect to rotation.
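As a sanity check (not part of the original problem), the $(0, 0)$ boundary value system eq. 1.0.7 can be solved symbolically with sympy, confirming the coefficients of eq. 1.0.8:

```python
import sympy as sp

E_I, beta = sp.symbols('E_I beta', positive=True)
E_R, E_T = sp.symbols('E_R E_T')

# Boundary conditions for theta = phi = 0 (eq. 1.0.7)
sol = sp.solve([sp.Eq(E_I + E_R, E_T),
                sp.Eq(E_I - E_R, beta * E_T)],
               [E_R, E_T], dict=True)[0]

# Transmission and reflection ratios match eq. 1.0.8
assert sp.simplify(sol[E_T] / E_I - 2 / (1 + beta)) == 0
assert sp.simplify(sol[E_R] / E_I - (1 - beta) / (1 + beta)) == 0
```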
# References
[1] D.J. Griffiths. Introduction to Electrodynamics. Prentice-Hall, 1981.
## PHY450H1S. Relativistic Electrodynamics Tutorial 4 (TA: Simon Freedman). Waveguides: confined EM waves.
Posted by peeterjoot on March 14, 2011
# Motivation
While this isn’t part of the course, the topic of waveguides has so many applications that it is worth a mention, and that will be done in this tutorial.
We will setup our system with a waveguide (conducting surface that confines the radiation) oriented in the $\hat{\mathbf{z}}$ direction. The shape can be arbitrary
PICTURE: cross section of wacky shape.
## At the surface of a conductor.
At the surface of the conductor (I presume this means the interior surface where there is no charge or current enclosed) we have
\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E} &= - \frac{1}{{c}} \frac{\partial {\mathbf{B}}}{\partial {t}} \\ \boldsymbol{\nabla} \times \mathbf{B} &= \frac{1}{{c}} \frac{\partial {\mathbf{E}}}{\partial {t}} \\ \boldsymbol{\nabla} \cdot \mathbf{B} &= 0 \\ \boldsymbol{\nabla} \cdot \mathbf{E} &= 0\end{aligned} \hspace{\stretch{1}}(1.1)
If we are talking about the exterior surface, do we need to make any other assumptions (perfect conductors, or constant potentials)?
## Wave equations.
For electric and magnetic fields in vacuum, we can show easily that these, like the potentials, separately satisfy the wave equation
Taking curls of the Maxwell curl equations above we have
\begin{aligned}\boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{E}) &= - \frac{1}{{c^2}} \frac{\partial^2 {\mathbf{E}}}{\partial {{t}}^2} \\ \boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{B}) &= - \frac{1}{{c^2}} \frac{\partial^2 {\mathbf{B}}}{\partial {{t}}^2},\end{aligned} \hspace{\stretch{1}}(1.5)
but we have for vector $\mathbf{M}$
\begin{aligned}\boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{M})=\boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{M}) - \Delta \mathbf{M},\end{aligned} \hspace{\stretch{1}}(1.7)
which gives us a pair of wave equations
\begin{aligned}\square \mathbf{E} &= 0 \\ \square \mathbf{B} &= 0.\end{aligned} \hspace{\stretch{1}}(1.8)
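The curl of curl identity used above (eq. 1.7) can be spot checked on an arbitrary smooth field; a sympy sketch (not in the original notes), with the Laplacian written out component-wise:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

def vec_laplacian(F):
    # Component-wise Laplacian, valid in Cartesian coordinates
    comps = [F.dot(e) for e in (N.i, N.j, N.k)]
    lap = [c.diff(x, 2) + c.diff(y, 2) + c.diff(z, 2) for c in comps]
    return lap[0] * N.i + lap[1] * N.j + lap[2] * N.k

# An arbitrary smooth test field M
M = x * y**2 * N.i + x * sp.sin(z) * N.j + y * z * N.k

# curl(curl M) = grad(div M) - laplacian(M), checked component by component
diff = curl(curl(M)) - (gradient(divergence(M)) - vec_laplacian(M))
for e in (N.i, N.j, N.k):
    assert sp.simplify(diff.dot(e)) == 0
```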
We still have the original constraints of Maxwell’s equations to deal with, but we are free now to pick the complex exponentials as fundamental solutions, as our starting point
\begin{aligned}\mathbf{E} &= \mathbf{E}_0 e^{i k^a x_a} = \mathbf{E}_0 e^{ i (k^0 x_0 - \mathbf{k} \cdot \mathbf{x}) } \\ \mathbf{B} &= \mathbf{B}_0 e^{i k^a x_a} = \mathbf{B}_0 e^{ i (k^0 x_0 - \mathbf{k} \cdot \mathbf{x}) },\end{aligned} \hspace{\stretch{1}}(1.10)
With $k_0 = \omega/c$ and $x_0 = c t$ this is
\begin{aligned}\mathbf{E} &= \mathbf{E}_0 e^{ i (\omega t - \mathbf{k} \cdot \mathbf{x}) } \\ \mathbf{B} &= \mathbf{B}_0 e^{ i (\omega t - \mathbf{k} \cdot \mathbf{x}) }.\end{aligned} \hspace{\stretch{1}}(1.12)
For the vacuum case, with monochromatic light, we treated the amplitudes as constants. Let’s see what happens if we relax this assumption, and allow for spatial dependence (but no time dependence) of $\mathbf{E}_0$ and $\mathbf{B}_0$. For the LHS of the electric field curl equation we have
\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E}_0 e^{i k_a x^a} &= (\boldsymbol{\nabla} \times \mathbf{E}_0 - \mathbf{E}_0 \times \boldsymbol{\nabla}) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \times \mathbf{E}_0 - \mathbf{E}_0 \times \mathbf{e}^\alpha i k_a \partial_\alpha x^a) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \times \mathbf{E}_0 + \mathbf{E}_0 \times \mathbf{e}^\alpha i k^a {\delta_\alpha}^a ) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \times \mathbf{E}_0 + i \mathbf{E}_0 \times \mathbf{k} ) e^{i k_a x^a}.\end{aligned}
Similarly for the divergence we have
\begin{aligned}0 &= \boldsymbol{\nabla} \cdot \mathbf{E}_0 e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \cdot \mathbf{E}_0 + \mathbf{E}_0 \cdot \boldsymbol{\nabla}) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \cdot \mathbf{E}_0 + \mathbf{E}_0 \cdot \mathbf{e}^\alpha i k_a \partial_\alpha x^a) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \cdot \mathbf{E}_0 - \mathbf{E}_0 \cdot \mathbf{e}^\alpha i k^a {\delta_\alpha}^a ) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \cdot \mathbf{E}_0 - i \mathbf{k} \cdot \mathbf{E}_0 ) e^{i k_a x^a}.\end{aligned}
This provides constraints on the amplitudes
\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E}_0 - i \mathbf{k} \times \mathbf{E}_0 &= -i \frac{\omega}{c} \mathbf{B}_0 \\ \boldsymbol{\nabla} \times \mathbf{B}_0 - i \mathbf{k} \times \mathbf{B}_0 &= i \frac{\omega}{c} \mathbf{E}_0 \\ \boldsymbol{\nabla} \cdot \mathbf{E}_0 - i \mathbf{k} \cdot \mathbf{E}_0 &= 0 \\ \boldsymbol{\nabla} \cdot \mathbf{B}_0 - i \mathbf{k} \cdot \mathbf{B}_0 &= 0\end{aligned} \hspace{\stretch{1}}(1.14)
Applying the wave equation operator to our phasor we get
\begin{aligned}0 &=\left(\frac{1}{{c^2}} \partial_{tt} - \boldsymbol{\nabla}^2 \right) \mathbf{E}_0 e^{i (\omega t - \mathbf{k} \cdot \mathbf{x})} \\ &=\left(-\frac{\omega^2}{c^2} - \boldsymbol{\nabla}^2 + \mathbf{k}^2 \right) \mathbf{E}_0 e^{i (\omega t - \mathbf{k} \cdot \mathbf{x})}\end{aligned}
So the momentum space equivalents of the wave equations are
\begin{aligned}\left( \boldsymbol{\nabla}^2 +\frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \mathbf{E}_0 &= 0 \\ \left( \boldsymbol{\nabla}^2 +\frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \mathbf{B}_0 &= 0.\end{aligned} \hspace{\stretch{1}}(1.18)
Observe that if $c^2 \mathbf{k}^2 = \omega^2$, then these amplitudes are harmonic functions (solutions to Laplace's equation). However, it doesn’t appear that we require such a lightlike relation for the four vector $k^a = (\omega/c, \mathbf{k})$.
# Back to the tutorial notes.
In class we went straight to an assumed solution of the form
\begin{aligned}\mathbf{E} &= \mathbf{E}_0(x, y) e^{ i(\omega t - k z) } \\ \mathbf{B} &= \mathbf{B}_0(x, y) e^{ i(\omega t - k z) },\end{aligned} \hspace{\stretch{1}}(2.20)
where $\mathbf{k} = k \hat{\mathbf{z}}$. Our Laplacian was also written as the sum of components in the propagation and perpendicular directions
\begin{aligned}\boldsymbol{\nabla}^2 = \frac{\partial^2 {{}}}{\partial {{x_\perp}}^2} + \frac{\partial^2 {{}}}{\partial {{z}}^2}.\end{aligned} \hspace{\stretch{1}}(2.22)
With no $z$ dependence in the amplitudes we have
\begin{aligned}\left( \frac{\partial^2 {{}}}{\partial {{x_\perp}}^2} +\frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \mathbf{E}_0 &= 0 \\ \left( \frac{\partial^2 {{}}}{\partial {{x_\perp}}^2} +\frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \mathbf{B}_0 &= 0.\end{aligned} \hspace{\stretch{1}}(2.23)
# Separation into components.
It was left as an exercise to separate out our Maxwell equations, so that our field components $\mathbf{E}_0 = \mathbf{E}_\perp + \mathbf{E}_z$ and $\mathbf{B}_0 = \mathbf{B}_\perp + \mathbf{B}_z$ in the propagation direction, and components in the perpendicular direction are separated
\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E}_0 &=(\boldsymbol{\nabla}_\perp + \hat{\mathbf{z}}\partial_z) \times \mathbf{E}_0 \\ &=\boldsymbol{\nabla}_\perp \times \mathbf{E}_0 \\ &=\boldsymbol{\nabla}_\perp \times (\mathbf{E}_\perp + \mathbf{E}_z) \\ &=\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp +\boldsymbol{\nabla}_\perp \times \mathbf{E}_z \\ &=( \hat{\mathbf{x}} \partial_x +\hat{\mathbf{y}} \partial_y ) \times ( \hat{\mathbf{x}} E_x +\hat{\mathbf{y}} E_y ) +\boldsymbol{\nabla}_\perp \times \mathbf{E}_z \\ &=\hat{\mathbf{z}} (\partial_x E_y - \partial_y E_x) +\boldsymbol{\nabla}_\perp \times \mathbf{E}_z.\end{aligned}
We can do something similar for $\mathbf{B}_0$. This allows for a split of 1.14 into $\hat{\mathbf{z}}$ and perpendicular components
\begin{aligned}\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp &= -i \frac{\omega}{c} \mathbf{B}_z \\ \boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp &= i \frac{\omega}{c} \mathbf{E}_z \\ \boldsymbol{\nabla}_\perp \times \mathbf{E}_z - i \mathbf{k} \times \mathbf{E}_\perp &= -i \frac{\omega}{c} \mathbf{B}_\perp \\ \boldsymbol{\nabla}_\perp \times \mathbf{B}_z - i \mathbf{k} \times \mathbf{B}_\perp &= i \frac{\omega}{c} \mathbf{E}_\perp \\ \boldsymbol{\nabla}_\perp \cdot \mathbf{E}_\perp &= i k E_z - \partial_z E_z \\ \boldsymbol{\nabla}_\perp \cdot \mathbf{B}_\perp &= i k B_z - \partial_z B_z.\end{aligned} \hspace{\stretch{1}}(3.25)
So we see that once we have a solution for $\mathbf{E}_z$ and $\mathbf{B}_z$ (by solving the wave equation above for those components), the remaining perpendicular field components can be found from them. Alternately, if one solves for the perpendicular components of the fields, the propagation direction components are available immediately with only differentiation.
In the case where the perpendicular components are taken as given
\begin{aligned}\mathbf{B}_z &= i \frac{ c }{\omega} \boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp \\ \mathbf{E}_z &= -i \frac{ c }{\omega} \boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp,\end{aligned} \hspace{\stretch{1}}(3.31)
we can express the remaining ones strictly in terms of the perpendicular fields
\begin{aligned}\frac{\omega}{c} \mathbf{B}_\perp &= \frac{c}{\omega} \boldsymbol{\nabla}_\perp \times (\boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp) + \mathbf{k} \times \mathbf{E}_\perp \\ \frac{\omega}{c} \mathbf{E}_\perp &= \frac{c}{\omega} \boldsymbol{\nabla}_\perp \times (\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp) - \mathbf{k} \times \mathbf{B}_\perp \\ \boldsymbol{\nabla}_\perp \cdot \mathbf{E}_\perp &= -i \frac{c}{\omega} (i k - \partial_z) \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp) \\ \boldsymbol{\nabla}_\perp \cdot \mathbf{B}_\perp &= i \frac{c}{\omega} (i k - \partial_z) \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp).\end{aligned} \hspace{\stretch{1}}(3.33)
Is it at all helpful to expand the double cross products?
\begin{aligned}\frac{\omega^2}{c^2} \mathbf{B}_\perp &= \boldsymbol{\nabla}_\perp (\boldsymbol{\nabla}_\perp \cdot \mathbf{B}_\perp) -{\boldsymbol{\nabla}_\perp}^2 \mathbf{B}_\perp + \frac{\omega}{c} \mathbf{k} \times \mathbf{E}_\perp \\ &= i \frac{c}{\omega}(i k - \partial_z)\boldsymbol{\nabla}_\perp \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp)-{\boldsymbol{\nabla}_\perp}^2 \mathbf{B}_\perp + \frac{\omega}{c} \mathbf{k} \times \mathbf{E}_\perp \end{aligned}
This gives us
\begin{aligned}\left( {\boldsymbol{\nabla}_\perp}^2 + \frac{\omega^2}{c^2} \right) \mathbf{B}_\perp &= - \frac{c}{\omega} (k + i\partial_z) \boldsymbol{\nabla}_\perp \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp) + \frac{\omega}{c} \mathbf{k} \times \mathbf{E}_\perp \\ \left( {\boldsymbol{\nabla}_\perp}^2 + \frac{\omega^2}{c^2} \right) \mathbf{E}_\perp &= -\frac{c}{\omega} (k + i\partial_z) \boldsymbol{\nabla}_\perp \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp) - \frac{\omega}{c} \mathbf{k} \times \mathbf{B}_\perp,\end{aligned} \hspace{\stretch{1}}(3.37)
but that doesn’t seem particularly useful for completely solving the system? It appears fairly messy to try to solve for $\mathbf{E}_\perp$ and $\mathbf{B}_\perp$ given the propagation direction fields. I wonder if there is a simplification available that I am missing?
# Solving the momentum space wave equations.
Back to the class notes. We proceeded to solve for $\mathbf{E}_z$ and $\mathbf{B}_z$ from the wave equations by separation of variables. We wish to solve equations of the form
\begin{aligned}\left( \frac{\partial^2 {{}}}{\partial {{x}}^2} + \frac{\partial^2 {{}}}{\partial {{y}}^2} + \frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \phi(x,y) = 0\end{aligned} \hspace{\stretch{1}}(4.39)
Write $\phi(x,y) = X(x) Y(y)$, so that we have
\begin{aligned}\frac{X''}{X} + \frac{Y''}{Y} = \mathbf{k}^2 - \frac{\omega^2}{c^2}\end{aligned} \hspace{\stretch{1}}(4.40)
One solution is sinusoidal
\begin{aligned}\frac{X''}{X} &= -k_1^2 \\ \frac{Y''}{Y} &= -k_2^2 \\ -k_1^2 - k_2^2&= \mathbf{k}^2 - \frac{\omega^2}{c^2}.\end{aligned} \hspace{\stretch{1}}(4.41)
The example in the tutorial now switched to a rectangular waveguide, still oriented with the propagation direction down the z-axis, but with lengths $a$ and $b$ along the $x$ and $y$ axis respectively.
Writing $k_1 = 2\pi m/a$, and $k_2 = 2 \pi n/ b$, we have
\begin{aligned}\phi(x, y) = \sum_{m n} a_{m n} \exp\left( \frac{2 \pi i m}{a} x \right)\exp\left( \frac{2 \pi i n}{b} y \right)\end{aligned} \hspace{\stretch{1}}(4.44)
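As a quick symbolic check (an addition, not from the tutorial), a product mode satisfies the Helmholtz style equation eq. 4.39 exactly when the separation constants obey eq. 4.41:

```python
import sympy as sp

x, y, k1, k2, w, c = sp.symbols('x y k1 k2 omega c', positive=True)

# A separated exponential mode X(x) Y(y)
phi = sp.exp(sp.I * k1 * x) * sp.exp(sp.I * k2 * y)

# Separation condition (eq. 4.41): -k1^2 - k2^2 = k^2 - omega^2/c^2
ksq = w**2 / c**2 - k1**2 - k2**2

# Residual of eq. 4.39 should vanish identically
residual = phi.diff(x, 2) + phi.diff(y, 2) + (w**2 / c**2 - ksq) * phi
assert sp.simplify(residual) == 0
```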
We were also provided with some definitions
\begin{definition}TE (Transverse Electric)
$\mathbf{E}_3 = 0$.
\end{definition}
\begin{definition}
TM (Transverse Magnetic)
$\mathbf{B}_3 = 0$.
\end{definition}
\begin{definition}
TEM (Transverse Electromagnetic)
$\mathbf{E}_3 = \mathbf{B}_3 = 0$.
\end{definition}
\begin{claim}TEM modes do not exist in a hollow waveguide.
\end{claim}
Why: I had in my notes
\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E} = 0 & \implies \frac{\partial {E_2}}{\partial {x^1}} -\frac{\partial {E_1}}{\partial {x^2}} = 0 \\ \boldsymbol{\nabla} \cdot \mathbf{E} = 0 & \implies \frac{\partial {E_1}}{\partial {x^1}} +\frac{\partial {E_2}}{\partial {x^2}} = 0\end{aligned}
and then
\begin{aligned}\boldsymbol{\nabla}^2 \phi &= 0 \\ \phi &= \text{const}\end{aligned}
In retrospect I fail to see how these are connected? What happened to the $\partial_t \mathbf{B}$ term in the curl equation above?
It was argued that we have $\mathbf{E}_\parallel = \mathbf{B}_\perp = 0$ on the boundary.
So for the TE case, where $\mathbf{E}_3 = 0$, we have from the separation of variables argument
\begin{aligned}\hat{\mathbf{z}} \cdot \mathbf{B}_0(x, y) =\sum_{m n} a_{m n} \cos\left( \frac{2 \pi m}{a} x \right)\cos\left( \frac{2 \pi n}{b} y \right).\end{aligned} \hspace{\stretch{1}}(4.45)
No sines because
\begin{aligned}B_1 \propto \frac{\partial {B_3}}{\partial {x^1}} \rightarrow \cos(k_1 x^1).\end{aligned} \hspace{\stretch{1}}(4.46)
The quantity
\begin{aligned}a_{m n}\cos\left( \frac{2 \pi m}{a} x \right)\cos\left( \frac{2 \pi n}{b} y \right).\end{aligned} \hspace{\stretch{1}}(4.47)
is called the $TE_{m n}$ mode. Note that the constant ($m = n = 0$) solution is excluded: with no enclosed current, an Ampere loop argument requires such a constant $\mathbf{B}$ to vanish.
Writing
\begin{aligned}k &= \frac{\omega}{c} \sqrt{ 1 - \left(\frac{\omega_{m n}}{\omega}\right)^2 } \\ \omega_{m n} &= 2 \pi c \sqrt{ \left(\frac{m}{a} \right)^2 + \left(\frac{n}{b} \right)^2 }\end{aligned} \hspace{\stretch{1}}(4.48)
When $\omega < \omega_{m n}$ we have $k$ purely imaginary, and the term
\begin{aligned}e^{-i k z} = e^{- {\left\lvert{k}\right\rvert} z}\end{aligned} \hspace{\stretch{1}}(4.50)
represents the die off.
$\omega_{10}$ is the smallest.
Note that the convention is that the $m$ in $TE_{m n}$ is the bigger of the two indexes, so $\omega > \omega_{10}$.
The phase velocity
\begin{aligned}V_\phi = \frac{\omega}{k} = \frac{c}{\sqrt{ 1 - \left(\frac{\omega_{m n}}{\omega}\right)^2 }} \ge c\end{aligned} \hspace{\stretch{1}}(4.51)
However, energy is transmitted with the group velocity, the ratio of the Poynting vector and energy density
\begin{aligned}\frac{\left\langle{\mathbf{S}}\right\rangle}{\left\langle{{U}}\right\rangle} = V_g = \frac{\partial {\omega}}{\partial {k}} = 1/\frac{\partial {k}}{\partial {\omega}}\end{aligned} \hspace{\stretch{1}}(4.52)
(This can be shown).
Since
\begin{aligned}\left(\frac{\partial {k}}{\partial {\omega}}\right)^{-1} = \left(\frac{\partial {}}{\partial {\omega}}\sqrt{ (\omega/c)^2 - (\omega_{m n}/c)^2 }\right)^{-1} = c \sqrt{ 1 - (\omega_{m n}/\omega)^2 } \le c\end{aligned} \hspace{\stretch{1}}(4.53)
We see that the energy is transmitted at less than the speed of light as expected.
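The relation $V_\phi V_g = c^2$ implied by eq. 4.51 and eq. 4.53 is easy to verify numerically; a sketch with assumed sample values for the frequencies:

```python
import math

c = 3.0e8        # speed of light, m/s
w_mn = 2.0e10    # an assumed cutoff frequency (sample value)
w = 3.0e10       # an operating frequency above cutoff

root = math.sqrt(1 - (w_mn / w) ** 2)
k = (w / c) * root          # eq. 4.48
v_phase = w / k             # eq. 4.51, always >= c
v_group = c * root          # eq. 4.53, always <= c

assert v_group <= c <= v_phase
assert abs(v_phase * v_group - c * c) < 1e-6 * c * c
```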
# Final remarks.
I’d started converting my handwritten scrawl for this tutorial into an attempt at working through these ideas with enough detail that they are self contained, but gave up part way. This appears to me to be too big a sub-discipline to do justice in a one hour class. As is, it is enough to at least get a concept of some of the ideas involved. I think were I to learn this for real, I’d need a good text as a reference (or the time to attempt to blunder through the ideas in much more detail).
## PHY450H1S. Relativistic Electrodynamics Lecture 11 (Taught by Prof. Erich Poppitz). Unpacking Lorentz force equation. Lorentz transformations of the strength tensor, Lorentz field invariants, Bianchi identity, and first half of Maxwell’s.
Posted by peeterjoot on February 24, 2011
Covering chapter 3 material from the text [1].
Covering lecture notes pp. 74-83: Lorentz transformation of the strength tensor (82) [Tuesday, Feb. 8] [extra reading for the mathematically minded: gauge field, strength tensor, and gauge transformations in differential form language, not to be covered in class (83)]
Covering lecture notes pp. 84-102: Lorentz invariants of the electromagnetic field (84-86); Bianchi identity and the first half of Maxwell’s equations (87-90)
# Chewing on the four vector form of the Lorentz force equation.
After much effort, we arrived at
\begin{aligned}\frac{d{{(m c u_l) }}}{ds} = \frac{e}{c} \left( \partial_l A_i - \partial_i A_l \right) u^i\end{aligned} \hspace{\stretch{1}}(2.1)
or
\begin{aligned}\frac{d{{ p_l }}}{ds} = \frac{e}{c} F_{l i} u^i\end{aligned} \hspace{\stretch{1}}(2.2)
## Elements of the strength tensor
Claim: there are only 6 independent elements of this matrix (tensor)
\begin{aligned}\begin{bmatrix}0 & . & . & . \\ & 0 & . & . \\ & & 0 & . \\ & & & 0 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.3)
This is a no-brainer, for we just have to mechanically plug in the elements of the field strength tensor
Recall
\begin{aligned}A^i &= (\phi, \mathbf{A}) \\ A_i &= (\phi, -\mathbf{A})\end{aligned} \hspace{\stretch{1}}(2.4)
\begin{aligned}F_{0\alpha} &= \partial_0 A_\alpha - \partial_\alpha A_0 \\ &= -\partial_0 (\mathbf{A})_\alpha - \partial_\alpha \phi \\ \end{aligned}
\begin{aligned}F_{0\alpha} = E_\alpha\end{aligned} \hspace{\stretch{1}}(2.6)
For the purely spatial index combinations we have
\begin{aligned}F_{\alpha\beta} &= \partial_\alpha A_\beta - \partial_\beta A_\alpha \\ &= -\partial_\alpha (\mathbf{A})_\beta + \partial_\beta (\mathbf{A})_\alpha \\ \end{aligned}
Written out explicitly, these are
\begin{aligned}F_{1 2} &= \partial_2 (\mathbf{A})_1 -\partial_1 (\mathbf{A})_2 \\ F_{2 3} &= \partial_3 (\mathbf{A})_2 -\partial_2 (\mathbf{A})_3 \\ F_{3 1} &= \partial_1 (\mathbf{A})_3 -\partial_3 (\mathbf{A})_1 .\end{aligned} \hspace{\stretch{1}}(2.7)
We can compare this to the elements of $\mathbf{B}$
\begin{aligned}\mathbf{B} = \begin{vmatrix}\hat{\mathbf{x}} & \hat{\mathbf{y}} & \hat{\mathbf{z}} \\ \partial_1 & \partial_2 & \partial_3 \\ A_x & A_y & A_z\end{vmatrix}\end{aligned} \hspace{\stretch{1}}(2.10)
We see that
\begin{aligned}(\mathbf{B})_z &= \partial_1 A_y - \partial_2 A_x \\ (\mathbf{B})_x &= \partial_2 A_z - \partial_3 A_y \\ (\mathbf{B})_y &= \partial_3 A_x - \partial_1 A_z\end{aligned} \hspace{\stretch{1}}(2.11)
So we have
\begin{aligned}F_{1 2} &= - (\mathbf{B})_3 \\ F_{2 3} &= - (\mathbf{B})_1 \\ F_{3 1} &= - (\mathbf{B})_2.\end{aligned} \hspace{\stretch{1}}(2.14)
These can be summarized as simply
\begin{aligned}F_{\alpha\beta} = - \epsilon_{\alpha\beta\gamma} B_\gamma.\end{aligned} \hspace{\stretch{1}}(2.17)
This provides all the info needed to fill in the matrix above
\begin{aligned}{\left\lVert{ F_{i j} }\right\rVert} = \begin{bmatrix}0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.18)
## Index raising of rank 2 tensor
To raise indexes we compute
\begin{aligned}F^{i j} = g^{i l} g^{j k} F_{l k}.\end{aligned} \hspace{\stretch{1}}(2.19)
### Justifying the raising operation.
To justify this consider raising one index at a time by applying the metric tensor to our definition of $F_{l k}$. That is
\begin{aligned}g^{a l} F_{l k} &=g^{a l} (\partial_l A_k - \partial_k A_l) \\ &=\partial^a A_k - \partial_k A^a.\end{aligned}
Now apply the metric tensor once more
\begin{aligned}g^{b k} g^{a l} F_{l k} &=g^{b k} (\partial^a A_k - \partial_k A^a) \\ &=\partial^a A^b - \partial^b A^a.\end{aligned}
This is, by definition $F^{a b}$. Since a rank 2 tensor has been defined as an object that transforms like the product of two pairs of coordinates, it makes sense that this particular tensor raises in the same fashion as would a product of two vector coordinates (in this case, it happens to be an antisymmetric product of two vectors, one of which is an operator, but we have the same idea).
### Consider the components of the raised $F_{i j}$ tensor.
\begin{aligned}F^{0\alpha} &= -F_{0\alpha} \\ F^{\alpha\beta} &= F_{\alpha\beta}.\end{aligned} \hspace{\stretch{1}}(2.20)
\begin{aligned}{\left\lVert{ F^{i j} }\right\rVert} = \begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.22)
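In matrix form the double raising is the sandwich $g F g$, which can be verified numerically (a sketch with arbitrary sample field values, not part of the lecture):

```python
import numpy as np

Ex, Ey, Ez = 1.0, 2.0, 3.0   # arbitrary sample field components
Bx, By, Bz = 4.0, 5.0, 6.0

# Lower index strength tensor, eq. 2.18
F_lo = np.array([
    [0.0,  Ex,   Ey,   Ez],
    [-Ex,  0.0, -Bz,   By],
    [-Ey,  Bz,   0.0, -Bx],
    [-Ez, -By,   Bx,   0.0],
])

g = np.diag([1.0, -1.0, -1.0, -1.0])   # metric (+,-,-,-)

# F^{ij} = g^{il} g^{jk} F_{lk}, or g F g in matrix form
F_hi = g @ F_lo @ g

# Electric components flip sign; magnetic components do not (eq. 2.20, 2.22)
assert np.allclose(F_hi[0, 1:], -F_lo[0, 1:])
assert np.allclose(F_hi[1:, 1:], F_lo[1:, 1:])
assert np.allclose(F_hi, -F_hi.T)   # still antisymmetric
```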
## Back to chewing on the Lorentz force equation.
\begin{aligned}m c \frac{d{{ u_i }}}{ds} = \frac{e}{c} F_{i j} u^j\end{aligned} \hspace{\stretch{1}}(2.23)
\begin{aligned}u^i &= \gamma \left( 1, \frac{\mathbf{v}}{c} \right) \\ u_i &= \gamma \left( 1, -\frac{\mathbf{v}}{c} \right)\end{aligned} \hspace{\stretch{1}}(2.24)
For the spatial components of the Lorentz force equation we have
\begin{aligned}m c \frac{d{{ u_\alpha }}}{ds} &= \frac{e}{c} F_{\alpha j} u^j \\ &= \frac{e}{c} F_{\alpha 0} u^0+ \frac{e}{c} F_{\alpha \beta} u^\beta \\ &= \frac{e}{c} (-E_{\alpha}) \gamma+ \frac{e}{c} (- \epsilon_{\alpha\beta\gamma} B_\gamma ) \frac{v^\beta}{c} \gamma \end{aligned}
But
\begin{aligned}m c \frac{d{{ u_\alpha }}}{ds} &= -m \frac{d{{(\gamma \mathbf{v}_\alpha)}}}{ds} \\ &= -m \frac{d(\gamma \mathbf{v}_\alpha)}{c \sqrt{1 - \frac{\mathbf{v}^2}{c^2}} dt} \\ &= -\gamma \frac{d(m \gamma \mathbf{v}_\alpha)}{c dt}.\end{aligned}
Canceling the common $-\gamma/c$ terms, and switching to vector notation, we are left with
\begin{aligned}\frac{d( m \gamma \mathbf{v}_\alpha)}{dt} = e \left( E_\alpha + \frac{1}{{c}} (\mathbf{v} \times \mathbf{B})_\alpha \right).\end{aligned} \hspace{\stretch{1}}(2.26)
Now for the energy term. We have
\begin{aligned}m c \frac{d{{u_0}}}{ds} &= \frac{e}{c} F_{0\alpha} u^\alpha \\ &= \frac{e}{c} E_{\alpha} \gamma \frac{v^\alpha}{c} \\ \frac{d{{ m c \gamma }}}{ds} &= \frac{e \gamma}{c^2} \mathbf{E} \cdot \mathbf{v}\end{aligned}
Putting the final two lines into vector form we have
\begin{aligned}\frac{d{{ (m c^2 \gamma)}}}{dt} = e \mathbf{E} \cdot \mathbf{v},\end{aligned} \hspace{\stretch{1}}(2.27)
or
\begin{aligned}\frac{d{{ \mathcal{E} }}}{dt} = e \mathbf{E} \cdot \mathbf{v}\end{aligned} \hspace{\stretch{1}}(2.28)
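As a numeric spot check (with assumed sample values, not part of the lecture), the contraction $\frac{e}{c} F_{ij} u^j$ reproduces both the spatial force law eq. 2.26 and the power relation eq. 2.28:

```python
import numpy as np

c = 1.0                         # units where c = 1
e = 1.0                         # sample charge
E = np.array([1.0, 2.0, 3.0])   # sample fields
B = np.array([4.0, 5.0, 6.0])
v = np.array([0.1, 0.2, 0.3])   # particle velocity

gamma = 1.0 / np.sqrt(1.0 - (v @ v) / c**2)
u_up = gamma * np.concatenate(([1.0], v / c))   # u^i

Ex, Ey, Ez = E
Bx, By, Bz = B
F_lo = np.array([                 # F_{ij}, eq. 2.18
    [0.0,  Ex,   Ey,   Ez],
    [-Ex,  0.0, -Bz,   By],
    [-Ey,  Bz,   0.0, -Bx],
    [-Ez, -By,   Bx,   0.0],
])

rhs = (e / c) * F_lo @ u_up     # (e/c) F_{ij} u^j

# Spatial part: mc du_alpha/ds = -(gamma/c) d(m gamma v_alpha)/dt, so the
# contraction equals -(gamma/c) times the force of eq. 2.26
force = e * (E + np.cross(v, B) / c)
assert np.allclose(rhs[1:], -(gamma / c) * force)

# Time part: (e gamma / c^2) E . v, the energy change rate of eq. 2.28
assert np.isclose(rhs[0], e * gamma * (E @ v) / c**2)
```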
# Transformation of rank two tensors in matrix and index form.
## Transformation of the metric tensor, and some identities.
With
\begin{aligned}\hat{G} = {\left\lVert{ g_{i j} }\right\rVert} = {\left\lVert{ g^{i j} }\right\rVert}\end{aligned} \hspace{\stretch{1}}(3.29)
We claim:
The rank two tensor $\hat{G}$ transforms in the following sort of sandwich operation, and this leaves it invariant
\begin{aligned}\hat{G} \rightarrow \hat{O} \hat{G} \hat{O}^\text{T} = \hat{G}.\end{aligned} \hspace{\stretch{1}}(3.30)
To demonstrate this let’s consider a transformed vector in coordinate form as follows
\begin{aligned}{x'}^i &= O^{i j} x_j = {O^i}_j x^j \\ {x'}_i &= O_{i j} x^j = {O_i}^j x_j.\end{aligned} \hspace{\stretch{1}}(3.31)
We can thus write the equation in matrix form with
\begin{aligned}X &= {\left\lVert{x^i}\right\rVert} \\ X' &= {\left\lVert{{x'}^i}\right\rVert} \\ \hat{O} &= {\left\lVert{{O^i}_j}\right\rVert} \\ X' &= \hat{O} X\end{aligned} \hspace{\stretch{1}}(3.33)
Our invariant for the vector square, which is required to remain unchanged is
\begin{aligned}{x'}^i {x'}_i &= (O^{i j} x_j)(O_{i k} x^k) \\ &= x^k (O^{i j} O_{i k}) x_j.\end{aligned}
This shows that we have a delta function relationship for the Lorentz transform matrix, when we sum over the first index
\begin{aligned}O^{a i} O_{a j} = {\delta^i}_j.\end{aligned} \hspace{\stretch{1}}(3.37)
It appears we can put 3.37 into matrix form as
\begin{aligned}\hat{G} \hat{O}^\text{T} \hat{G} \hat{O} = I\end{aligned} \hspace{\stretch{1}}(3.38)
Now, if one considers that the transpose of a rotation is an inverse rotation, and the transpose of a boost leaves it unchanged, the transpose of a general Lorentz transformation, a composition of an arbitrary sequence of boosts and rotations, must also be a Lorentz transformation, and must then also leave the norm unchanged. For the transpose of our Lorentz transformation $\hat{O}$ lets write
\begin{aligned}\hat{P} = \hat{O}^\text{T}\end{aligned} \hspace{\stretch{1}}(3.39)
For the action of this on our position vector let’s write
\begin{aligned}{x''}^i &= P^{i j} x_j = O^{j i} x_j \\ {x''}_i &= P_{i j} x^j = O_{j i} x^j\end{aligned} \hspace{\stretch{1}}(3.40)
so that our norm is
\begin{aligned}{x''}^a {x''}_a &= (O_{k a} x^k)(O^{j a} x_j) \\ &= x^k (O_{k a} O^{j a} ) x_j \\ &= x^j x_j \\ \end{aligned}
We must then also have an identity when summing over the second index
\begin{aligned}{\delta_{k}}^j = O_{k a} O^{j a} \end{aligned} \hspace{\stretch{1}}(3.42)
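Both summation identities, 3.37 and 3.42, are easy to check numerically. In the sketch below (my own verification, not from the lecture) the index objects $O^{ij} = {O^i}_k g^{kj}$ and $O_{ij} = g_{ik} {O^k}_j$ are built as matrix products for a sample boost:

```python
import numpy as np

G = np.diag([1.0, -1.0, -1.0, -1.0])

# ||O^i_j|| for a boost along x with beta = 0.6
beta = 0.6
gam = 1 / np.sqrt(1 - beta**2)
O = np.eye(4)
O[0, 0] = O[1, 1] = gam
O[0, 1] = O[1, 0] = -gam * beta

Ou = O @ G   # ||O^{ij}|| = ||O^i_k g^{kj}||
Ol = G @ O   # ||O_{ij}|| = ||g_{ik} O^k_j||

# O^{ai} O_{aj} = delta^i_j  (sum over the first index)
assert np.allclose(Ou.T @ Ol, np.eye(4))
# O_{ka} O^{ja} = delta_k^j  (sum over the second index)
assert np.allclose(Ol @ Ou.T, np.eye(4))
```

Observe that the first assertion is exactly the matrix statement $\hat{G} \hat{O}^\text{T} \hat{G} \hat{O} = I$ of 3.38.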
Armed with these facts on the products of $O_{i j}$ and $O^{i j}$ we can now consider the transformation of the metric tensor.
The rule (definition) supplied to us for the transformation of an arbitrary rank two tensor, is that this transforms as its indexes transform individually. Very much as if it was the product of two coordinate vectors and we transform those coordinates separately. Doing so for the metric tensor we have
\begin{aligned}g^{i j} &\rightarrow {O^i}_k g^{k m} {O^j}_m \\ &= ({O^i}_k g^{k m}) {O^j}_m \\ &= O^{i m} {O^j}_m \\ &= O^{i m} (O_{a m} g^{a j}) \\ &= (O^{i m} O_{a m}) g^{a j}\end{aligned}
However, by 3.42, we have $O_{a m} O^{i m} = {\delta_a}^i$, and we prove that
\begin{aligned}g^{i j} \rightarrow g^{i j}.\end{aligned} \hspace{\stretch{1}}(3.43)
Finally, we wish to put the above transformation in matrix form, look more carefully at the very first line
\begin{aligned}g^{i j}&\rightarrow {O^i}_k g^{k m} {O^j}_m \\ \end{aligned}
which is
\begin{aligned}\hat{G} \rightarrow \hat{O} \hat{G} \hat{O}^\text{T} = \hat{G}\end{aligned} \hspace{\stretch{1}}(3.44)
We see that this particular form of transformation, a sandwich between $\hat{O}$ and $\hat{O}^\text{T}$, leaves the metric tensor invariant.
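A quick numeric spot check of this sandwich invariance, using a composition of boosts and a rotation for $\hat{O}$ (a sketch of my own, assuming the usual matrix forms for boost and rotation):

```python
import numpy as np

G = np.diag([1.0, -1.0, -1.0, -1.0])

def boost_x(beta):
    """||O^i_j|| for a boost along x."""
    gam = 1 / np.sqrt(1 - beta**2)
    O = np.eye(4)
    O[0, 0] = O[1, 1] = gam
    O[0, 1] = O[1, 0] = -gam * beta
    return O

def rot_z(theta):
    """||O^i_j|| for a spatial rotation about z."""
    O = np.eye(4)
    c, s = np.cos(theta), np.sin(theta)
    O[1, 1] = O[2, 2] = c
    O[1, 2], O[2, 1] = -s, s
    return O

# a generic Lorentz transformation: boost, rotate, boost again
O = boost_x(0.6) @ rot_z(0.3) @ boost_x(-0.2)
assert np.allclose(O @ G @ O.T, G)
```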
## Lorentz transformation of the electrodynamic tensor
Having identified that a composition of Lorentz transformation matrices, acting on the metric tensor, leaves it invariant, a reasonable question to ask is how this form of transformation acts on our electrodynamic tensor $F^{ij}$.
\paragraph{Claim:} A transformation of the following form is required to maintain the norm of the Lorentz force equation
\begin{aligned}\hat{F} \rightarrow \hat{O} \hat{F} \hat{O}^\text{T} ,\end{aligned} \hspace{\stretch{1}}(3.45)
where $\hat{F} = {\left\lVert{F^{i j}}\right\rVert}$. Observe that our Lorentz force equation can be written exclusively in upper index quantities as
\begin{aligned}m c \frac{d{{u^i}}}{ds} = \frac{e}{c} F^{i j} g_{j l} u^l\end{aligned} \hspace{\stretch{1}}(3.46)
Because we have a vector on one side of the equation, and it transforms by multiplication with a Lorentz matrix in SO(1,3)
\begin{aligned}\frac{du^i}{ds} \rightarrow \hat{O} \frac{du^i}{ds} \end{aligned} \hspace{\stretch{1}}(3.47)
The LHS of the Lorentz force equation provides us with one invariant
\begin{aligned}(m c)^2 \frac{d{{u^i}}}{ds} \frac{d{{u_i}}}{ds}\end{aligned} \hspace{\stretch{1}}(3.48)
so the RHS must also provide one
\begin{aligned}\frac{e^2}{c^2} F^{i j} g_{j l} u^lF_{i k} g^{k m} u_m=\frac{e^2}{c^2} F^{i j} u_jF_{i k} u^k.\end{aligned} \hspace{\stretch{1}}(3.49)
Let’s look at the RHS in matrix form. Writing
\begin{aligned}U = {\left\lVert{u^i}\right\rVert},\end{aligned} \hspace{\stretch{1}}(3.50)
we can rewrite the Lorentz force equation as
\begin{aligned}m c \dot{U} = \frac{e}{c} \hat{F} \hat{G} U.\end{aligned} \hspace{\stretch{1}}(3.51)
In this matrix formalism our invariant 3.49 is
\begin{aligned}\frac{e^2}{c^2} (\hat{F} \hat{G} U)^\text{T} \hat{G} \hat{F} \hat{G} U=\frac{e^2}{c^2} U^\text{T} \hat{G} \hat{F}^\text{T} \hat{G} \hat{F} \hat{G} U.\end{aligned} \hspace{\stretch{1}}(3.52)
If we compare this to the transformed Lorentz force equation we have
\begin{aligned}m c \hat{O} \dot{U} = \frac{e}{c} \hat{F'} \hat{G} \hat{O} U.\end{aligned} \hspace{\stretch{1}}(3.53)
Our invariant for the transformed equation is
\begin{aligned}\frac{e^2}{c^2} (\hat{F'} \hat{G} \hat{O} U)^\text{T} \hat{G} \hat{F'} \hat{G} \hat{O} U&=\frac{e^2}{c^2} U^\text{T} \hat{O}^\text{T} \hat{G} \hat{F'}^\text{T} \hat{G} \hat{F'} \hat{G} \hat{O} U \\ \end{aligned}
Thus the transformed electrodynamic tensor $\hat{F}'$ must satisfy the identity
\begin{aligned}\hat{O}^\text{T} \hat{G} \hat{F'}^\text{T} \hat{G} \hat{F'} \hat{G} \hat{O} = \hat{G} \hat{F}^\text{T} \hat{G} \hat{F} \hat{G} \end{aligned} \hspace{\stretch{1}}(3.54)
With the substitution $\hat{F}' = \hat{O} \hat{F} \hat{O}^\text{T}$ the LHS is
\begin{aligned}\hat{O}^\text{T} \hat{G} \hat{F'}^\text{T} \hat{G} \hat{F'} \hat{G} \hat{O} &= \hat{O}^\text{T} \hat{G} ( \hat{O} \hat{F} \hat{O}^\text{T})^\text{T} \hat{G} (\hat{O} \hat{F} \hat{O}^\text{T}) \hat{G} \hat{O} \\ &= (\hat{O}^\text{T} \hat{G} \hat{O}) \hat{F}^\text{T} (\hat{O}^\text{T} \hat{G} \hat{O}) \hat{F} (\hat{O}^\text{T} \hat{G} \hat{O}) \\ \end{aligned}
We’ve argued that $\hat{P} = \hat{O}^\text{T}$ is also a Lorentz transformation, thus
\begin{aligned}\hat{O}^\text{T} \hat{G} \hat{O}&=\hat{P} \hat{G} \hat{P}^\text{T} \\ &=\hat{G}\end{aligned}
This is enough to make both sides of 3.54 match, verifying that this transformation does provide the invariant properties desired.
## Direct computation of the Lorentz transformation of the electrodynamic tensor.
We can construct the transformed field tensor more directly, by simply transforming the coordinates of the four gradient and the four potential directly. That is
\begin{aligned}F^{i j} = \partial^i A^j - \partial^j A^i&\rightarrow {O^i}_a {O^j}_b \left( \partial^a A^b - \partial^b A^a \right) \\ &={O^i}_a F^{a b} {O^j}_b \end{aligned}
By inspection we can see that this can be represented in matrix form as
\begin{aligned}\hat{F} \rightarrow \hat{O} \hat{F} \hat{O}^\text{T}\end{aligned} \hspace{\stretch{1}}(3.55)
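Numerically, this sandwich transformation preserves the scalar $F^{ij} F_{ij}$ for any antisymmetric $\hat{F}$, which makes for a useful spot check (my own sketch, using a random antisymmetric matrix as a stand-in for the field tensor):

```python
import numpy as np

rng = np.random.default_rng(0)
G = np.diag([1.0, -1.0, -1.0, -1.0])

# ||O^i_j|| for a boost along x
beta = 0.5
gam = 1 / np.sqrt(1 - beta**2)
O = np.eye(4)
O[0, 0] = O[1, 1] = gam
O[0, 1] = O[1, 0] = -gam * beta

# random antisymmetric stand-in for ||F^{ij}||
M = rng.standard_normal((4, 4))
Fu = M - M.T

def invariant(Fu):
    Fl = G @ Fu @ G          # ||F_{ij}||
    return np.sum(Fu * Fl)   # F^{ij} F_{ij}

Fu_prime = O @ Fu @ O.T
assert np.isclose(invariant(Fu), invariant(Fu_prime))
```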
# Four vector invariants
For three vectors $\mathbf{A}$ and $\mathbf{B}$ invariants are
\begin{aligned}\mathbf{A} \cdot \mathbf{B} = A^\alpha B_\alpha\end{aligned} \hspace{\stretch{1}}(4.56)
For four vectors $A^i$ and $B^i$ invariants are
\begin{aligned}A^i B_i = A^i g_{i j} B^j \end{aligned} \hspace{\stretch{1}}(4.57)
For $F_{i j}$ what are the invariants? One invariant is
\begin{aligned}g^{i j} F_{i j} = 0,\end{aligned} \hspace{\stretch{1}}(4.58)
but this isn’t interesting since it is uniformly zero (product of symmetric and antisymmetric).
The two invariants are
\begin{aligned}F_{i j}F^{i j}\end{aligned} \hspace{\stretch{1}}(4.59)
and
\begin{aligned}\epsilon^{i j k l} F_{i j}F_{k l}\end{aligned} \hspace{\stretch{1}}(4.60)
where
\begin{aligned}\epsilon^{i j k l} =\left\{\begin{array}{l l}0 & \quad \mbox{if any two indexes coincide} \\ 1 & \quad \mbox{for even permutations of $i j k l = 0123$} \\ -1 & \quad \mbox{for odd permutations of $i j k l = 0123$} \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(4.61)
We can show (homework) that
\begin{aligned}F_{i j}F^{i j} \propto \mathbf{E}^2 - \mathbf{B}^2\end{aligned} \hspace{\stretch{1}}(4.62)
\begin{aligned}\epsilon^{i j k l} F_{i j}F_{k l} \propto \mathbf{E} \cdot \mathbf{B}\end{aligned} \hspace{\stretch{1}}(4.63)
This first invariant serves as the action density for the Maxwell field equations.
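Both proportionalities can be checked numerically using the component matrix for $F_{ij}$ given in 5.69 below: with $\epsilon^{0123} = 1$ the constants work out to $F_{ij} F^{ij} = 2 (\mathbf{B}^2 - \mathbf{E}^2)$ and $\epsilon^{ijkl} F_{ij} F_{kl} = -8 \mathbf{E} \cdot \mathbf{B}$. A sketch of my own:

```python
import numpy as np
from itertools import permutations

G = np.diag([1.0, -1.0, -1.0, -1.0])

# 4d Levi-Civita symbol with eps[0,1,2,3] = 1
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    inv = sum(p[a] > p[b] for a in range(4) for b in range(a + 1, 4))
    eps[p] = -1.0 if inv % 2 else 1.0

def F_lower(E, B):
    """||F_{ij}|| with F_{0a} = E_a, F_{ab} = -eps_{abc} B_c (as in 5.69)."""
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([
        [0.0,  Ex,  Ey,  Ez],
        [-Ex, 0.0, -Bz,  By],
        [-Ey,  Bz, 0.0, -Bx],
        [-Ez, -By,  Bx, 0.0]])

rng = np.random.default_rng(1)
E, B = rng.standard_normal(3), rng.standard_normal(3)
Fl = F_lower(E, B)
Fu = G @ Fl @ G  # ||F^{ij}||

assert np.isclose(np.sum(Fl * Fu), 2 * (B @ B - E @ E))
assert np.isclose(np.einsum('ijkl,ij,kl->', eps, Fl, Fl), -8 * (E @ B))
```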
There are some useful properties of these invariants. One is that if the fields are perpendicular in one frame, then they will be perpendicular in any other.
From the first, note that if ${\left\lvert{\mathbf{E}}\right\rvert} > {\left\lvert{\mathbf{B}}\right\rvert}$, the invariant is positive, and must be positive in all frames. So if ${\left\lvert{\mathbf{E}}\right\rvert} > {\left\lvert{\mathbf{B}}\right\rvert}$ in one frame, we can transform to a frame with only an $\mathbf{E}'$ component, solve that, and then transform back. Similarly, if ${\left\lvert{\mathbf{E}}\right\rvert} < {\left\lvert{\mathbf{B}}\right\rvert}$ in one frame, we can transform to a frame with only a $\mathbf{B}'$ component, solve that, and then transform back.
# The first half of Maxwell’s equations.
\paragraph{Claim: } The source free portions of Maxwell’s equations are a consequence of the definition of the field tensor alone.
Given
\begin{aligned}F_{i j} = \partial_i A_j - \partial_j A_i,\end{aligned} \hspace{\stretch{1}}(5.64)
where
\begin{aligned}\partial_i = \frac{\partial {}}{\partial {x^i}}\end{aligned} \hspace{\stretch{1}}(5.65)
This alone implies half of Maxwell’s equations. To show this we consider
\begin{aligned}e^{m k i j} \partial_k F_{i j} = 0.\end{aligned} \hspace{\stretch{1}}(5.66)
This is the Bianchi identity. To demonstrate this identity, we’ll have to swap indexes, employ derivative commutation, and then swap indexes once more
\begin{aligned}e^{m k i j} \partial_k F_{i j} &= e^{m k i j} \partial_k (\partial_i A_j - \partial_j A_i) \\ &= 2 e^{m k i j} \partial_k \partial_i A_j \\ &= 2 e^{m k i j} \frac{1}{{2}} \left( \partial_k \partial_i A_j + \partial_i \partial_k A_j \right) \\ &= e^{m k i j} \partial_k \partial_i A_j + e^{m i k j} \partial_k \partial_i A_j \\ &= (e^{m k i j} - e^{m k i j}) \partial_k \partial_i A_j \\ &= 0 \qquad \square\end{aligned}
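This cancellation can also be verified symbolically. A sympy sketch of my own, with four arbitrary potential functions $A_j(x^0, \dots, x^3)$:

```python
import sympy as sp
from sympy import LeviCivita

x = sp.symbols('x0:4')
A = [sp.Function(f'A{j}')(*x) for j in range(4)]

# F_{ij} = partial_i A_j - partial_j A_i
F = [[sp.diff(A[j], x[i]) - sp.diff(A[i], x[j]) for j in range(4)]
     for i in range(4)]

# e^{m k i j} partial_k F_{ij} = 0 for each free index m
for m in range(4):
    contraction = sum(LeviCivita(m, k, i, j) * sp.diff(F[i][j], x[k])
                      for k in range(4) for i in range(4) for j in range(4))
    assert sp.simplify(contraction) == 0
```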
This is the 4D analogue of
\begin{aligned}\boldsymbol{\nabla} \times (\boldsymbol{\nabla} f) = 0\end{aligned} \hspace{\stretch{1}}(5.67)
i.e.
\begin{aligned}e^{\alpha\beta\gamma} \partial_\beta \partial_\gamma f = 0\end{aligned} \hspace{\stretch{1}}(5.68)
Let’s do this explicitly, starting with
\begin{aligned}{\left\lVert{ F_{i j} }\right\rVert} = \begin{bmatrix}0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(5.69)
For the $m= 0$ case we have
\begin{aligned}\epsilon^{0 k i j} \partial_k F_{i j}&=\epsilon^{\alpha \beta \gamma} \partial_\alpha F_{\beta \gamma} \\ &= \epsilon^{\alpha \beta \gamma} \partial_\alpha (-\epsilon_{\beta \gamma \delta} B_\delta) \\ &= -\epsilon^{\alpha \beta \gamma} \epsilon_{\delta \beta \gamma }\partial_\alpha B_\delta \\ &= - 2 {\delta^\alpha}_\delta \partial_\alpha B_\delta \\ &= - 2 \partial_\alpha B_\alpha \end{aligned}
We must then have
\begin{aligned}\partial_\alpha B_\alpha = 0.\end{aligned} \hspace{\stretch{1}}(5.70)
This is just Gauss’s law for magnetism
\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{B} = 0.\end{aligned} \hspace{\stretch{1}}(5.71)
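The contraction identity $\epsilon^{\alpha \beta \gamma} \epsilon_{\delta \beta \gamma} = 2 {\delta^\alpha}_\delta$ used above is also easy to spot check (my own sketch):

```python
import numpy as np
from itertools import permutations

# 3d Levi-Civita symbol
eps = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    inv = sum(p[a] > p[b] for a in range(3) for b in range(a + 1, 3))
    eps[p] = -1.0 if inv % 2 else 1.0

# eps^{abc} eps_{dbc} = 2 delta^a_d
assert np.allclose(np.einsum('abc,dbc->ad', eps, eps), 2 * np.eye(3))
```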
Let’s do the spatial portion, for which we have three equations, one for each $\alpha$ of
\begin{aligned}e^{\alpha j k l} \partial_j F_{k l}&=e^{\alpha 0 \beta \gamma} \partial_0 F_{\beta \gamma}+e^{\alpha 0 \gamma \beta} \partial_0 F_{\gamma \beta}+e^{\alpha \beta 0 \gamma} \partial_\beta F_{0 \gamma}+e^{\alpha \beta \gamma 0} \partial_\beta F_{\gamma 0}+e^{\alpha \gamma 0 \beta} \partial_\gamma F_{0 \beta}+e^{\alpha \gamma \beta 0} \partial_\gamma F_{\beta 0} \\ &=2 \left( e^{\alpha 0 \beta \gamma} \partial_0 F_{\beta \gamma}+e^{\alpha \beta 0 \gamma} \partial_\beta F_{0 \gamma}+e^{\alpha \gamma 0 \beta} \partial_\gamma F_{0 \beta}\right) \\ &=2 e^{0 \alpha \beta \gamma} \left(-\partial_0 F_{\beta \gamma}+\partial_\beta F_{0 \gamma}- \partial_\gamma F_{0 \beta}\right)\end{aligned}
This implies
\begin{aligned}0 =-\partial_0 F_{\beta \gamma}+\partial_\beta F_{0 \gamma}- \partial_\gamma F_{0 \beta}\end{aligned} \hspace{\stretch{1}}(5.72)
Referring back to the previous expansions of 2.6 and 2.17, we have
\begin{aligned}0 =\partial_0 \epsilon_{\beta\gamma\mu} B_\mu+\partial_\beta E_\gamma- \partial_\gamma E_{\beta},\end{aligned} \hspace{\stretch{1}}(5.73)
or
\begin{aligned}\frac{1}{{c}} \frac{\partial {B_\alpha}}{\partial {t}} + (\boldsymbol{\nabla} \times \mathbf{E})_\alpha = 0.\end{aligned} \hspace{\stretch{1}}(5.74)
These are just the components of the Maxwell-Faraday equation
\begin{aligned}0 = \frac{1}{{c}} \frac{\partial {\mathbf{B}}}{\partial {t}} + \boldsymbol{\nabla} \times \mathbf{E}.\end{aligned} \hspace{\stretch{1}}(5.75)
# Appendix. Some additional index gymnastics.
## Transposition of mixed index tensor.
Is the transpose of a mixed index object just a substitution of the free indexes? This wasn’t obvious to me that it would be the case, especially since I’d made an error in some index gymnastics that had me temporarily convinced differently. However, working some examples clears the fog. For example let’s take the transpose of 3.37.
\begin{aligned}{\left\lVert{ {\delta^i}_j }\right\rVert}^\text{T} &= {\left\lVert{ O^{a i} O_{a j} }\right\rVert}^\text{T} \\ &= \left( {\left\lVert{ O^{j i} }\right\rVert} {\left\lVert{ O_{i j} }\right\rVert} \right)^\text{T} \\ &={\left\lVert{ O_{i j} }\right\rVert}^\text{T}{\left\lVert{ O^{j i} }\right\rVert}^\text{T} \\ &={\left\lVert{ O_{j i} }\right\rVert}{\left\lVert{ O^{i j} }\right\rVert} \\ &={\left\lVert{ O_{a i} O^{a j} }\right\rVert} \\ \end{aligned}
If the transpose of a mixed index tensor just swapped the indexes we would have
\begin{aligned}{\left\lVert{ {\delta^i}_j }\right\rVert}^\text{T} = {\left\lVert{ O_{a i} O^{a j} }\right\rVert} \end{aligned} \hspace{\stretch{1}}(6.76)
From this it does appear that all we have to do is switch the indexes and we will write
\begin{aligned}{\delta^j}_i = O_{a i} O^{a j} \end{aligned} \hspace{\stretch{1}}(6.77)
We can consider a more general operation
\begin{aligned}{\left\lVert{{A^i}_j}\right\rVert}^\text{T}&={\left\lVert{ A^{i m} g_{m j} }\right\rVert}^\text{T} \\ &={\left\lVert{ g_{i j} }\right\rVert}^\text{T}{\left\lVert{ A^{i j} }\right\rVert}^\text{T} \\ &={\left\lVert{ g_{i j} }\right\rVert}{\left\lVert{ A^{j i} }\right\rVert} \\ &={\left\lVert{ g_{i m} A^{j m} }\right\rVert} \\ &={\left\lVert{ {A^{j}}_i }\right\rVert}\end{aligned}
So we see that we do just have to swap indexes.
## Transposition of lower index tensor.
We saw above that we had
\begin{aligned}{\left\lVert{ {A^{i}}_j }\right\rVert}^\text{T} &= {\left\lVert{ {A_{j}}^i }\right\rVert} \\ {\left\lVert{ {A_{i}}^j }\right\rVert}^\text{T} &= {\left\lVert{ {A^{j}}_i }\right\rVert} \end{aligned} \hspace{\stretch{1}}(6.78)
which followed from careful treatment of the transposition in terms of $A^{i j}$, for which we defined a transpose operation. We assumed as well that
\begin{aligned}{\left\lVert{ A_{i j} }\right\rVert}^\text{T} = {\left\lVert{ A_{j i} }\right\rVert}.\end{aligned} \hspace{\stretch{1}}(6.80)
However, this does not have to be assumed, provided that $g^{i j} = g_{i j}$, and $(AB)^\text{T} = B^\text{T} A^\text{T}$. We see this by expanding this transposition in products of $A^{i j}$ and $\hat{G}$
\begin{aligned}{\left\lVert{ A_{i j} }\right\rVert}^\text{T}&= \left( {\left\lVert{g_{i j}}\right\rVert} {\left\lVert{ A^{i j} }\right\rVert} {\left\lVert{g_{i j}}\right\rVert} \right)^\text{T} \\ &= \left( {\left\lVert{g^{i j}}\right\rVert} {\left\lVert{ A^{i j} }\right\rVert} {\left\lVert{g^{i j}}\right\rVert} \right)^\text{T} \\ &= {\left\lVert{g^{i j}}\right\rVert}^\text{T} {\left\lVert{ A^{i j}}\right\rVert}^\text{T} {\left\lVert{g^{i j}}\right\rVert}^\text{T} \\ &= {\left\lVert{g^{i j}}\right\rVert} {\left\lVert{ A^{j i}}\right\rVert} {\left\lVert{g^{i j}}\right\rVert} \\ &= {\left\lVert{g_{i j}}\right\rVert} {\left\lVert{ A^{i j}}\right\rVert} {\left\lVert{g_{i j}}\right\rVert} \\ &= {\left\lVert{ A_{j i}}\right\rVert} \end{aligned}
It would be worthwhile to go through all of this index manipulation stuff and lay it out in a structured axiomatic form. What is the minimal set of assumptions, and how does all of this generalize to non-diagonal metric tensors (even in Euclidean spaces).
## Translating the index expression of identity from Lorentz products to matrix form
A verification that the matrix expression 3.38 matches the index expression 3.37 as claimed is worthwhile. It would be easy to guess that something similar, like $\hat{O}^\text{T} \hat{G} \hat{O} \hat{G}$, is instead the matrix representation. That was in fact my first erroneous attempt to form the matrix equivalent, but it is the transpose of 3.38. Either way you get an identity, but the indexes didn’t match.
Since we have $g^{i j} = g_{i j}$ which do we pick to do this verification? This appears to be dictated by requirements to match lower and upper indexes on the summed over index. This is probably clearest by example, so let’s expand the products on the LHS explicitly
\begin{aligned}{\left\lVert{ g^{i j} }\right\rVert} {\left\lVert{ {O^{i}}_j }\right\rVert} ^\text{T}{\left\lVert{ g_{i j} }\right\rVert}{\left\lVert{ {O^{i}}_j }\right\rVert} &=\left( {\left\lVert{ {O^{i}}_j }\right\rVert} {\left\lVert{ g^{i j} }\right\rVert} \right) ^\text{T}{\left\lVert{ g_{i j} }\right\rVert}{\left\lVert{ {O^{i}}_j }\right\rVert} \\ &=\left( {\left\lVert{ {O^{i}}_k g^{k j} }\right\rVert} \right) ^\text{T}{\left\lVert{ g_{i m} {O^{m}}_j }\right\rVert} \\ &={\left\lVert{ O^{i j} }\right\rVert} ^\text{T}{\left\lVert{ O_{i j} }\right\rVert} \\ &={\left\lVert{ O^{j i} }\right\rVert} {\left\lVert{ O_{i j} }\right\rVert} \\ &={\left\lVert{ O^{k i} O_{k j} }\right\rVert} \\ \end{aligned}
This matches the ${\left\lVert{{\delta^i}_j}\right\rVert}$ that we have on the RHS, and all is well.
# References
[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.
## PHY450H1S. Relativistic Electrodynamics Lecture 14 (Taught by Simon Freedman). Wave equation in Coulomb and Lorentz gauges.
Posted by peeterjoot on February 17, 2011
Covering chapter 4 material from the text [1].
Covering lecture notes pp.103-114: the wave equation in the relativistic Lorentz gauge (114-114) [Tuesday, Feb. 15; Wednesday, Feb.16]…
Covering lecture notes pp. 114-127: reminder on wave equations (114); reminder on Fourier series and integral (115-117); Fourier expansion of the EM potential in Coulomb gauge and equation of motion for the spatial Fourier components (118-119); the general solution of Maxwell’s equations in vacuum (120-121) [Tuesday, Mar. 1]; properties of monochromatic plane EM waves (122-124); energy and energy flux of the EM field and energy conservation from the equations of motion (125-127) [Wednesday, Mar. 2]
# Trying to understand “c”
Maxwell’s equations in a vacuum were
\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} &= 0 \\ \boldsymbol{\nabla} \times \mathbf{B} &= \frac{1}{{c}} \frac{\partial {\mathbf{E}}}{\partial {t}}\end{aligned} \hspace{\stretch{1}}(2.1)
In terms of the potentials these are
\begin{aligned}\boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}) &= \boldsymbol{\nabla}^2 \mathbf{A} -\frac{1}{{c}} \frac{\partial {}}{\partial {t}} \boldsymbol{\nabla} \phi - \frac{1}{{c^2}} \frac{\partial^2 \mathbf{A}}{\partial t^2} \\ \boldsymbol{\nabla} \cdot \mathbf{E} &= - \boldsymbol{\nabla}^2 \phi - \frac{1}{{c}} \frac{\partial {\boldsymbol{\nabla} \cdot \mathbf{A}}}{\partial {t}} \end{aligned} \hspace{\stretch{1}}(2.3)
There’s a redundancy here since we can change $\phi$ and $\mathbf{A}$ without changing the EOM
\begin{aligned}(\phi, \mathbf{A}) \rightarrow (\phi', \mathbf{A}')\end{aligned} \hspace{\stretch{1}}(2.5)
with
\begin{aligned}\phi &= \phi' + \frac{1}{{c}} \frac{\partial {\chi}}{\partial {t}} \\ \mathbf{A} &= \mathbf{A}' - \boldsymbol{\nabla} \chi\end{aligned} \hspace{\stretch{1}}(2.6)
In particular, we can choose
\begin{aligned}\chi(\mathbf{x}, t) = c \int dt \phi(\mathbf{x}, t)\end{aligned} \hspace{\stretch{1}}(2.8)
which gives
\begin{aligned}\phi' = 0\end{aligned} \hspace{\stretch{1}}(2.9)
\begin{aligned}(\phi, \mathbf{A}) \sim (\phi = 0, \mathbf{A}')\end{aligned} \hspace{\stretch{1}}(2.10)
Maxwell’s equations are now
\begin{aligned}\boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}') &= \boldsymbol{\nabla}^2 \mathbf{A}' - \frac{1}{{c^2}} \frac{\partial^2 \mathbf{A}'}{\partial t^2} \\ \frac{\partial {\boldsymbol{\nabla} \cdot \mathbf{A}'}}{\partial {t}} &= 0\end{aligned}
Can we make $\boldsymbol{\nabla} \cdot \mathbf{A}'' = 0$, while keeping $\phi'' = 0$?
\begin{aligned}\underbrace{\phi'}_{=0} &= \underbrace{\phi''}_{=0} + \frac{1}{{c}} \frac{\partial {\chi'}}{\partial {t}} \end{aligned} \hspace{\stretch{1}}(2.11)
We need
\begin{aligned}\frac{\partial {\chi'}}{\partial {t}} = 0\end{aligned} \hspace{\stretch{1}}(2.13)
How about $\mathbf{A}'$?
\begin{aligned}\mathbf{A}' = \mathbf{A}'' - \boldsymbol{\nabla} \chi'\end{aligned} \hspace{\stretch{1}}(2.14)
We want the divergence of $\mathbf{A}''$ to be zero, which means
\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{A}' = \underbrace{\boldsymbol{\nabla} \cdot \mathbf{A}''}_{=0} - \boldsymbol{\nabla}^2 \chi'\end{aligned} \hspace{\stretch{1}}(2.15)
So we want
\begin{aligned}\boldsymbol{\nabla}^2 \chi' = -\boldsymbol{\nabla} \cdot \mathbf{A}'\end{aligned} \hspace{\stretch{1}}(2.16)
Can we solve this?
Recall that in electrostatics we have
\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} = 4 \pi \rho\end{aligned} \hspace{\stretch{1}}(2.17)
and
\begin{aligned}\mathbf{E} = -\boldsymbol{\nabla} \phi\end{aligned} \hspace{\stretch{1}}(2.18)
\begin{aligned}\boldsymbol{\nabla}^2 \phi = 4 \pi \rho\end{aligned} \hspace{\stretch{1}}(2.19)
This has the identical form (with $\phi \sim \chi'$, and $4 \pi \rho \sim -\boldsymbol{\nabla} \cdot \mathbf{A}'$).
While we aren’t trying to actually solve this (just to show that it can be solved), one way to look at this problem is that it is just a Poisson equation, and we could utilize a Green’s function solution if desired.
## On the Green’s function.
Recall that the Green’s function for the Laplacian was
\begin{aligned}G(\mathbf{x}, \mathbf{x}') = \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}}\end{aligned} \hspace{\stretch{1}}(2.20)
with the property
\begin{aligned}\boldsymbol{\nabla}^2 G(\mathbf{x}, \mathbf{x}') = \delta(\mathbf{x} - \mathbf{x}')\end{aligned} \hspace{\stretch{1}}(2.21)
Our LDE to solve by Green’s method is
\begin{aligned}\boldsymbol{\nabla}^2 \phi = 4 \pi \rho,\end{aligned} \hspace{\stretch{1}}(2.22)
We let this equation (after switching to primed coordinates) operate on the Green’s function
\begin{aligned}\int d^3 \mathbf{x}' {\boldsymbol{\nabla}'}^2 \phi(\mathbf{x}') G(\mathbf{x}, \mathbf{x}') =\int d^3 \mathbf{x}' 4 \pi \rho(\mathbf{x}') G(\mathbf{x}, \mathbf{x}').\end{aligned} \hspace{\stretch{1}}(2.23)
Assuming that the left action of the Green’s function on the test function $\phi(\mathbf{x}')$ is the same as the right action (i.e. $\phi(\mathbf{x}')$ and $G(\mathbf{x}, \mathbf{x}')$ commute), we have for the LHS
\begin{aligned}\int d^3 \mathbf{x}' {\boldsymbol{\nabla}'}^2 \phi(\mathbf{x}') G(\mathbf{x}, \mathbf{x}') &=\int d^3 \mathbf{x}' {\boldsymbol{\nabla}'}^2 G(\mathbf{x}, \mathbf{x}') \phi(\mathbf{x}') \\ &=\int d^3 \mathbf{x}' \delta(\mathbf{x} - \mathbf{x}') \phi(\mathbf{x}') \\ &=\phi(\mathbf{x}).\end{aligned}
Substitution of $G(\mathbf{x}, \mathbf{x}') = 1/{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}$ on the RHS then gives us the general solution
\begin{aligned}\phi(\mathbf{x}) = 4 \pi \int d^3 \mathbf{x}' \frac{\rho(\mathbf{x}') }{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}\end{aligned} \hspace{\stretch{1}}(2.24)
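Away from the singular point, the kernel $1/{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}$ is harmonic, which is easy to confirm symbolically (a sketch of my own; the delta function content at the origin is of course not captured this way):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
G = 1 / r

# Laplacian of 1/r vanishes away from the origin
lap = sp.diff(G, x, 2) + sp.diff(G, y, 2) + sp.diff(G, z, 2)
assert sp.simplify(lap) == 0
```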
## Back to Maxwell’s equations in vacuum.
What are the Maxwell’s vacuum equations now?
With the second gauge substitution we have
\begin{aligned}\boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}'') &= \boldsymbol{\nabla}^2 \mathbf{A}'' - \frac{1}{{c^2}} \frac{\partial^2 \mathbf{A}''}{\partial t^2} \\ \frac{\partial {\boldsymbol{\nabla} \cdot \mathbf{A}''}}{\partial {t}} &= 0\end{aligned}
but we can utilize
\begin{aligned}\boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{A}) = \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}) - \boldsymbol{\nabla}^2 \mathbf{A},\end{aligned} \hspace{\stretch{1}}(2.25)
to reduce Maxwell’s equations (after dropping primes) to just
\begin{aligned}\frac{1}{{c^2}} \frac{\partial^2 \mathbf{A}}{\partial t^2} - \Delta \mathbf{A} = 0\end{aligned} \hspace{\stretch{1}}(2.26)
where
\begin{aligned}\Delta = \boldsymbol{\nabla}^2 = \boldsymbol{\nabla} \cdot \boldsymbol{\nabla} = \frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}\end{aligned} \hspace{\stretch{1}}(2.27)
Note that for this to be correct we have to also explicitly include the gauge condition used. This particular gauge is called the \underline{Coulomb gauge}.
\begin{aligned}\phi &= 0 \\ \boldsymbol{\nabla} \cdot \mathbf{A}'' &= 0 \end{aligned} \hspace{\stretch{1}}(2.28)
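Any plane wave with $\omega = c {\left\lvert{\mathbf{k}}\right\rvert}$ satisfies the wave equation 2.26, consistent with the propagation-speed claim that follows. A sympy sketch of my own, for a single Cartesian component:

```python
import sympy as sp

t, c = sp.symbols('t c', positive=True)
X = sp.symbols('x y z')
K = sp.symbols('k1 k2 k3', positive=True)

omega = c * sp.sqrt(sum(k**2 for k in K))
phase = sum(k * xi for k, xi in zip(K, X)) - omega * t

# one Cartesian component of A for a plane wave
A = sp.cos(phase)
wave_eq = sp.diff(A, t, 2) / c**2 - sum(sp.diff(A, xi, 2) for xi in X)
assert sp.simplify(wave_eq) == 0
```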
# Claim: EM waves propagate with speed $c$ and are transverse.
\paragraph{Note:} Is the Coulomb gauge Lorentz invariant?
\paragraph{No.} We can boost which will introduce a non-zero $\phi$.
The gauge that is Lorentz Invariant is the “Lorentz gauge”. This one uses
\begin{aligned}\partial_i A^i = 0\end{aligned} \hspace{\stretch{1}}(3.30)
Recall that Maxwell’s equations are
\begin{aligned}\partial_i F^{ij} = j^j = 0\end{aligned} \hspace{\stretch{1}}(3.31)
where
\begin{aligned}\partial_i &= \frac{\partial {}}{\partial {x^i}} \\ \partial^i &= \frac{\partial {}}{\partial {x_i}}\end{aligned} \hspace{\stretch{1}}(3.32)
Writing out the equations in terms of potentials we have
\begin{aligned}0 &= \partial_i (\partial^i A^j - \partial^j A^i) \\ &= \partial_i \partial^i A^j - \partial_i \partial^j A^i \\ &= \partial_i \partial^i A^j - \partial^j \partial_i A^i \\ \end{aligned}
So, if we pick the gauge condition $\partial_i A^i = 0$, we are left with just
\begin{aligned}0 = \partial_i \partial^i A^j\end{aligned} \hspace{\stretch{1}}(3.34)
Can we choose ${A'}^i$ such that $\partial_i A^i = 0$?
Our gauge transformation is
\begin{aligned}A^i = {A'}^i + \partial^i \chi\end{aligned} \hspace{\stretch{1}}(3.35)
Hit it with a derivative for
\begin{aligned}\partial_i A^i = \partial_i {A'}^i + \partial_i \partial^i \chi\end{aligned} \hspace{\stretch{1}}(3.36)
If we want $\partial_i A^i = 0$, then we have
\begin{aligned}-\partial_i {A'}^i = \partial_i \partial^i \chi = \left( \frac{1}{{c^2}} \frac{\partial^2}{\partial t^2} - \Delta \right) \chi\end{aligned} \hspace{\stretch{1}}(3.37)
This is the physicist’s proof. Yes, it can be solved. To really solve this, we’d want to use Green’s functions. I seem to recall that the Green’s function is a retarded time version of the Laplacian Green’s function, and we can figure out its exact form by switching to a Fourier frequency domain representation.
Anyways. Returning to Maxwell’s equations we have
\begin{aligned}0 &= \partial_i \partial^i A^j \\ 0 &= \partial_i A^i ,\end{aligned} \hspace{\stretch{1}}(3.38)
where the first is Maxwell’s equation, and the second is our gauge condition.
Observe that the gauge condition is now a Lorentz scalar.
\begin{aligned}\partial^i A_i \rightarrow \partial^j {O_j}^i {O_i}^k A_k\end{aligned} \hspace{\stretch{1}}(3.40)
But the Lorentz transform matrices multiply out to identity, in the same way that they do for the transformation of a plain old four vector dot product $x^i y_i$.
# What happens with a Massive vector field?
\begin{aligned}S = \int d^4 x \left( \frac{1}{{4}} F^{ij} F_{ij} + \frac{m^2}{2} A^i A_i \right)\end{aligned} \hspace{\stretch{1}}(4.41)
## An aside on units
“Note that this action is expressed in dimensions where $\hbar = c = 1$, making the action unit-less (energy and time are inverse units of each other). The $d^4x$ has units of $m^{-4}$ (since $[x] = \hbar/mc$), so $F$ has units of $m^2$, and then $A$ has units of mass. Therefore $d^4x A A$ has units of $m^{-2}$ and therefore you need something that has units of $m^2$ to make the action unit-less. When you don’t take $c=1$, then you’ve got to worry about those factors, but I think you’ll see it works out fine.”
For what it’s worth, I can adjust the units of this action to those that we’ve used in class with,
\begin{aligned}S = \int d^4 x \left( -\frac{1}{{16 \pi c}} F^{ij} F_{ij} - \frac{m^2 c^2}{8 \hbar^2} A^i A_i \right)\end{aligned} \hspace{\stretch{1}}(4.42)
## Back to the problem.
The variation of the field invariant is
\begin{aligned}\delta (F_{ij} F^{ij})&=2 (\delta F_{ij}) F^{ij} \\ &=2 (\delta(\partial_i A_j -\partial_j A_i)) F^{ij} \\ &=2 (\partial_i \delta(A_j) -\partial_j \delta(A_i)) F^{ij} \\ &=4 F^{ij} \partial_i \delta(A_j) \\ &=4 \partial_i (F^{ij} \delta(A_j)) - 4 (\partial_i F^{ij}) \delta(A_j).\end{aligned}
Variation of the $A^2$ term gives us
\begin{aligned}\delta (A^j A_j) = 2 A^j \delta(A_j),\end{aligned} \hspace{\stretch{1}}(4.43)
so we have
\begin{aligned}0 &= \delta S \\ &= \int d^4 x \delta(A_j) \left( -\partial_i F^{ij} + m^2 A^j \right)+ \int d^4 x \partial_i (F^{ij} \delta(A_j))\end{aligned}
The last integral vanishes on the boundary with the assumption that $\delta(A_j) = 0$ on that boundary.
Since this must be true for all variations, this leaves us with
\begin{aligned}\partial_i F^{ij} = m^2 A^j\end{aligned} \hspace{\stretch{1}}(4.44)
The LHS can be expanded into wave equation and divergence parts
\begin{aligned}\partial_i F^{ij}&=\partial_i (\partial^i A^j - \partial^j A^i) \\ &=(\partial_i \partial^i) A^j - \partial^j (\partial_i A^i) \\ \end{aligned}
With $\square$ for the wave equation operator
\begin{aligned}\square = \partial_i \partial^i = \frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta,\end{aligned} \hspace{\stretch{1}}(4.45)
we can manipulate the EOM to pull out an $A_i$ factor
\begin{aligned}0 &= \left( \square -m^2 \right) A^j - \partial^j (\partial_i A^i) \\ &= \left( \square -m^2 \right) g^{ij} A_i - \partial^j (\partial^i A_i) \\ &= \left( \left( \square -m^2 \right) g^{ij} - \partial^j \partial^i \right) A_i.\end{aligned}
If we hit this with a derivative we get
\begin{aligned}0 &= \partial_j \left( \left( \square -m^2 \right) g^{ij} - \partial^j \partial^i \right) A_i \\ &= \left( \left( \square -m^2 \right) \partial^i - \partial_j \partial^j \partial^i \right) A_i \\ &= \left( \left( \square -m^2 \right) \partial^i - \square \partial^i \right) A_i \\ &= \left( \square -m^2 - \square \right) \partial^i A_i \\ &= -m^2 \partial^i A_i \\ \end{aligned}
Since $m$ is presumed to be non-zero here, this means that the Lorentz gauge is already chosen for us by the equations of motion.
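This last chain of manipulations can be verified symbolically as well. A sympy sketch of my own, with arbitrary potential functions $A_i$ and the diagonal metric used to raise indexes:

```python
import sympy as sp

x = sp.symbols('x0:4')
g = [1, -1, -1, -1]          # diagonal metric, g^{ii} = g_{ii}
m = sp.symbols('m')
A = [sp.Function(f'A{i}')(*x) for i in range(4)]

def box(f):
    """d'Alembertian: partial_i partial^i f."""
    return sum(g[i] * sp.diff(f, x[i], 2) for i in range(4))

div_A = sum(g[i] * sp.diff(A[i], x[i]) for i in range(4))  # partial^i A_i

# EOM^j = ((box - m^2) g^{ij} - partial^j partial^i) A_i
eom = [g[j] * (box(A[j]) - m**2 * A[j]) - g[j] * sp.diff(div_A, x[j])
       for j in range(4)]

# partial_j EOM^j reduces to -m^2 partial^i A_i
residual = sum(sp.diff(eom[j], x[j]) for j in range(4)) + m**2 * div_A
assert sp.simplify(residual) == 0
```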
# References
[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.
## Fourier transform solutions and associated energy and momentum for the homogeneous Maxwell equation. (rework once more)
Posted by peeterjoot on December 29, 2009
These notes build on and replace those formerly posted in Energy and momentum for assumed Fourier transform solutions to the homogeneous Maxwell equation.
# Motivation and notation.
In Electrodynamic field energy for vacuum (reworked) [1], building on Energy and momentum for Complex electric and magnetic field phasors [2], a derivation for the energy and momentum density was derived for an assumed Fourier series solution to the homogeneous Maxwell’s equation. Here we move to the continuous case examining Fourier transform solutions and the associated energy and momentum density.
A complex (phasor) representation is implied, so taking real parts when all is said and done is required of the fields. For the energy momentum tensor the Geometric Algebra form, modified for complex fields, is used
\begin{aligned}T(a) = -\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} a F \Bigr).\end{aligned} \hspace{\stretch{1}}(1.1)
The assumed four vector potential will be written
\begin{aligned}A(\mathbf{x}, t) = A^\mu(\mathbf{x}, t) \gamma_\mu = \frac{1}{{(\sqrt{2 \pi})^3}} \int A(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(1.2)
Subject to the requirement that $A$ is a solution of Maxwell’s equation
\begin{aligned}\nabla (\nabla \wedge A) = 0.\end{aligned} \hspace{\stretch{1}}(1.3)
To avoid latex hell, no special notation will be used for the Fourier coefficients,
\begin{aligned}A(\mathbf{k}, t) = \frac{1}{{(\sqrt{2 \pi})^3}} \int A(\mathbf{x}, t) e^{-i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{x}.\end{aligned} \hspace{\stretch{1}}(1.4)
When convenient and unambiguous, this $(\mathbf{k},t)$ dependence will be implied.
Having picked a time and space representation for the field, it will be natural to express both the four potential and the gradient as scalar plus spatial vector, instead of using the Dirac basis. For the gradient this is
\begin{aligned}\nabla &= \gamma^\mu \partial_\mu = (\partial_0 - \boldsymbol{\nabla}) \gamma_0 = \gamma_0 (\partial_0 + \boldsymbol{\nabla}),\end{aligned} \hspace{\stretch{1}}(1.5)
and for the four potential (or the Fourier transform functions), this is
\begin{aligned}A &= \gamma_\mu A^\mu = (\phi + \mathbf{A}) \gamma_0 = \gamma_0 (\phi - \mathbf{A}).\end{aligned} \hspace{\stretch{1}}(1.6)
# Setup
The field bivector $F = \nabla \wedge A$ is required for the energy momentum tensor. This is
\begin{aligned}\nabla \wedge A&= \frac{1}{{2}}\left( \stackrel{ \rightarrow }{\nabla} A - A \stackrel{ \leftarrow }{\nabla} \right) \\ &= \frac{1}{{2}}\left( (\stackrel{ \rightarrow }{\partial}_0 - \stackrel{ \rightarrow }{\boldsymbol{\nabla}}) \gamma_0 \gamma_0 (\phi - \mathbf{A})-(\phi + \mathbf{A}) \gamma_0 \gamma_0 (\stackrel{ \leftarrow }{\partial}_0 + \stackrel{ \leftarrow }{\boldsymbol{\nabla}})\right) \\ &= -\boldsymbol{\nabla} \phi -\partial_0 \mathbf{A} + \frac{1}{{2}}(\stackrel{ \rightarrow }{\boldsymbol{\nabla}} \mathbf{A} - \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}})\end{aligned}
This last term is a spatial curl and the field is then
\begin{aligned}F = -\boldsymbol{\nabla} \phi -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A}\end{aligned} \hspace{\stretch{1}}(2.7)
Applied to the Fourier representation this is
\begin{aligned}F =\frac{1}{{(\sqrt{2 \pi})^3}} \int\left(- \frac{1}{c} \dot{\mathbf{A}}- i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(2.8)
It is only the real parts of this that we are actually interested in, unless physical meaning can be assigned to the complete complex vector field.
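As a quick, non-essential sanity check on the step from (2.7) to (2.8) (a minimal numeric sketch with arbitrary made-up values), each spatial derivative of $e^{i \mathbf{k} \cdot \mathbf{x}}$ pulls down a factor $i k_m$, which is what converts the gradient terms of $F$ into the $i \mathbf{k}$ factors in the Fourier integrand:

```python
import cmath

# Central-difference check that d/dx_m exp(i k . x) = i k_m exp(i k . x),
# the rule used to move the gradient inside the Fourier integral.
k = (0.7, -1.2, 0.4)   # arbitrary wave vector
x = (0.3, 0.9, -0.5)   # arbitrary evaluation point
h = 1e-6               # finite-difference step

def plane_wave(pt):
    return cmath.exp(1j * sum(km * xm for km, xm in zip(k, pt)))

max_err = 0.0
for m in range(3):
    xp = list(x); xp[m] += h
    xm_ = list(x); xm_[m] -= h
    deriv = (plane_wave(xp) - plane_wave(xm_)) / (2 * h)
    max_err = max(max_err, abs(deriv - 1j * k[m] * plane_wave(x)))
```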
# Constraints supplied by Maxwell’s equation.
A Fourier transform solution of Maxwell’s vacuum equation $\nabla F = 0$ has been assumed. Having expressed the Faraday bivector in terms of spatial vector quantities, it is more convenient to do this back substitution after pre-multiplying Maxwell’s equation by $\gamma_0$, namely
\begin{aligned}0&= \gamma_0 \nabla F \\ &= (\partial_0 + \boldsymbol{\nabla}) F.\end{aligned} \hspace{\stretch{1}}(3.9)
Applied to the spatially decomposed field as specified in (2.7), this is
\begin{aligned}0&=-\partial_0 \boldsymbol{\nabla} \phi-\partial_{00} \mathbf{A}+ \partial_0 \boldsymbol{\nabla} \wedge \mathbf{A}-\boldsymbol{\nabla}^2 \phi- \boldsymbol{\nabla} \partial_0 \mathbf{A}+ \boldsymbol{\nabla} \cdot (\boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &=- \partial_0 \boldsymbol{\nabla} \phi - \boldsymbol{\nabla}^2 \phi- \partial_{00} \mathbf{A}- \boldsymbol{\nabla} \cdot \partial_0 \mathbf{A}+ \boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A} ) \\ \end{aligned}
All grades of this equation must simultaneously equal zero, and the bivector grades have canceled (assuming commuting space and time partials), leaving two equations of constraint for the system
\begin{aligned}0 &=\boldsymbol{\nabla}^2 \phi + \boldsymbol{\nabla} \cdot \partial_0 \mathbf{A}\end{aligned} \hspace{\stretch{1}}(3.11)
\begin{aligned}0 &=\partial_{00} \mathbf{A} - \boldsymbol{\nabla}^2 \mathbf{A}+ \boldsymbol{\nabla} \partial_0 \phi + \boldsymbol{\nabla} ( \boldsymbol{\nabla} \cdot \mathbf{A} )\end{aligned} \hspace{\stretch{1}}(3.12)
It is immediately evident that a gauge transformation could be helpful to simplify things. In [3] the gauge choice $\boldsymbol{\nabla} \cdot \mathbf{A} = 0$ is used. From (3.11) this implies that $\boldsymbol{\nabla}^2 \phi = 0$. Bohm argues that for this current and charge free case this implies $\phi = 0$, but he also has a periodicity constraint. Without a periodicity constraint it is easy to manufacture non-zero counterexamples. One is a linear function of the space and time coordinates
\begin{aligned}\phi = p x + q y + r z + s t\end{aligned} \hspace{\stretch{1}}(3.13)
This is a valid scalar potential provided the corresponding vector potential also satisfies its wave equation. We can, however, force $\phi = 0$ by making the transformation $A^\mu \rightarrow A^\mu + \partial^\mu \psi$, which in non-covariant notation is
\begin{aligned}\phi &\rightarrow \phi + \frac{1}{c} \partial_t \psi \\ \mathbf{A} &\rightarrow \mathbf{A} - \boldsymbol{\nabla} \psi\end{aligned} \hspace{\stretch{1}}(3.14)
If the transformed field $\phi' = \phi + \partial_t \psi/c$ can be forced to zero, then the complexity of the associated Maxwell equations is reduced. In particular, antidifferentiation of $\phi = -(1/c) \partial_t \psi$ yields
\begin{aligned}\psi(\mathbf{x},t) = \psi(\mathbf{x}, 0) - c \int_{\tau=0}^t \phi(\mathbf{x}, \tau) d\tau.\end{aligned} \hspace{\stretch{1}}(3.16)
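A numeric spot check (with a made-up $\phi$, illustrative only and unrelated to any particular field solution) that this antidifferentiation does kill the transformed potential $\phi' = \phi + (1/c) \partial_t \psi$:

```python
import math

# Spot check: with psi built by antidifferentiation as in (3.16), the
# transformed potential phi' = phi + (1/c) d(psi)/dt vanishes.
c = 3.0

def phi(t):                  # arbitrary smooth test function of t; the
    return math.sin(2.0 * t) + 0.5 * t   # spatial point is held fixed

def psi(t, n=20000):         # psi(t) = psi(0) - c * integral_0^t phi(tau) dtau
    dt = t / n
    total = 0.0
    for i in range(n):       # midpoint rule
        total += phi((i + 0.5) * dt) * dt
    return 1.7 - c * total   # psi(0) = 1.7, arbitrary

t0, h = 0.8, 1e-4
phi_prime = phi(t0) + (psi(t0 + h) - psi(t0 - h)) / (2 * h) / c
```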
Dropping primes, the transformed Maxwell equations now take the form
\begin{aligned}0 &= \partial_t( \boldsymbol{\nabla} \cdot \mathbf{A} )\end{aligned} \hspace{\stretch{1}}(3.17)
\begin{aligned}0 &=\partial_{00} \mathbf{A} - \boldsymbol{\nabla}^2 \mathbf{A} + \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A} ).\end{aligned} \hspace{\stretch{1}}(3.18)
There are two classes of solutions that stand out for these equations. If the vector potential is constant in time $\mathbf{A}(\mathbf{x},t) = \mathbf{A}(\mathbf{x})$, Maxwell’s equations are reduced to the single equation
\begin{aligned}0&= - \boldsymbol{\nabla}^2 \mathbf{A} + \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A} ).\end{aligned} \hspace{\stretch{1}}(3.19)
Observe that a gradient can be factored out of this equation
\begin{aligned}- \boldsymbol{\nabla}^2 \mathbf{A} + \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A} )&=\boldsymbol{\nabla} (-\boldsymbol{\nabla} \mathbf{A} + \boldsymbol{\nabla} \cdot \mathbf{A} ) \\ &=-\boldsymbol{\nabla} (\boldsymbol{\nabla} \wedge \mathbf{A}).\end{aligned}
The solutions are then those $\mathbf{A}$s that satisfy both
\begin{aligned}0 &= \partial_t \mathbf{A} \\ 0 &= \boldsymbol{\nabla} (\boldsymbol{\nabla} \wedge \mathbf{A}).\end{aligned} \hspace{\stretch{1}}(3.20)
In particular, any non-time dependent potential $\mathbf{A}$ with constant curl provides a solution to Maxwell’s equations. There may also be other, more general solutions to (3.19). Returning to (3.17), a second way to satisfy these equations stands out. Instead of requiring that $\mathbf{A}$ have constant curl, a divergence that is constant in time eliminates (3.17). The simplest resulting equations are those for which the divergence is constant in both time and space (such as zero). The solution set is then spanned by the vectors $\mathbf{A}$ for which
\begin{aligned}\text{constant} &= \boldsymbol{\nabla} \cdot \mathbf{A} \end{aligned} \hspace{\stretch{1}}(3.22)
\begin{aligned}0 &= \frac{1}{{c^2}} \partial_{tt} \mathbf{A} - \boldsymbol{\nabla}^2 \mathbf{A}.\end{aligned} \hspace{\stretch{1}}(3.23)
Any $\mathbf{A}$ that both has constant divergence and satisfies the wave equation will via (2.7) then produce a solution to Maxwell’s equation.
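A concrete instance (all numbers invented for the check): a transverse plane wave has zero divergence and satisfies the wave equation, so by the argument above it generates a Maxwell solution through (2.7). Finite differences confirm both properties:

```python
import math

# A made-up transverse plane wave: A = eps * cos(k.x - c|k| t) with
# eps . k = 0. Finite differences confirm zero divergence and the wave
# equation (1/c^2) d^2A/dt^2 = laplacian(A), i.e. conditions (3.22), (3.23).
c = 2.0
k = (1.0, 2.0, -2.0)                      # |k| = 3
eps = (2.0, -1.0, 0.0)                    # eps . k = 0, so the wave is transverse
knorm = math.sqrt(sum(km * km for km in k))

def A(x, t):
    phase = sum(km * xm for km, xm in zip(k, x)) - c * knorm * t
    return tuple(e * math.cos(phase) for e in eps)

x, t, h = (0.3, -0.7, 0.5), 0.9, 1e-4

def shifted(m, delta):
    xs = list(x); xs[m] += delta
    return xs

div_A = sum((A(shifted(m, h), t)[m] - A(shifted(m, -h), t)[m]) / (2 * h)
            for m in range(3))

def laplacian(comp):
    return sum((A(shifted(m, h), t)[comp] - 2 * A(x, t)[comp]
                + A(shifted(m, -h), t)[comp]) / h**2 for m in range(3))

def A_tt(comp):
    return (A(x, t + h)[comp] - 2 * A(x, t)[comp] + A(x, t - h)[comp]) / h**2

wave_residual = max(abs(A_tt(m) / c**2 - laplacian(m)) for m in range(3))
```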
# Maxwell equation constraints applied to the assumed Fourier solutions.
Let’s consider Maxwell’s equations in all three forms, (3.11), (3.20), and (3.22) and apply these constraints to the assumed Fourier solution.
In all cases the starting point is a pair of Fourier transform relationships, where the Fourier transforms are the functions to be determined
\begin{aligned}\phi(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int \phi(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k} \end{aligned} \hspace{\stretch{1}}(4.24)
\begin{aligned}\mathbf{A}(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int \mathbf{A}(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k} \end{aligned} \hspace{\stretch{1}}(4.25)
## Case I. Constant time vector potential. Scalar potential eliminated by gauge transformation.
Since $\partial_t \mathbf{A} = 0$, from (4.25) we require
\begin{aligned}0 = (2 \pi)^{-3/2} \int \partial_t \mathbf{A}(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.26)
So the Fourier transform also cannot have any time dependence, and we have
\begin{aligned}\mathbf{A}(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int \mathbf{A}(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k} \end{aligned} \hspace{\stretch{1}}(4.27)
What is the curl of this? Temporarily falling back to coordinates is easiest for this calculation
\begin{aligned}\boldsymbol{\nabla} \wedge \mathbf{A}(\mathbf{k}) e^{i\mathbf{k} \cdot \mathbf{x}}&=\sigma_m \partial_m \wedge \sigma_n A^n(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} \\ &=\sigma_m \wedge \sigma_n A^n(\mathbf{k}) i k^m e^{i \mathbf{k} \cdot \mathbf{x}} \\ &=i\mathbf{k} \wedge \mathbf{A}(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} \\ \end{aligned}
This gives
\begin{aligned}\boldsymbol{\nabla} \wedge \mathbf{A}(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.28)
We want to equate the divergence of this to zero. Neglecting the integral and constant factor this requires
\begin{aligned}0 &= \boldsymbol{\nabla} \cdot \left( i \mathbf{k} \wedge \mathbf{A} e^{i\mathbf{k} \cdot \mathbf{x}} \right) \\ &= {\left\langle{{ \sigma_m \partial_m i (\mathbf{k} \wedge \mathbf{A}) e^{i\mathbf{k} \cdot \mathbf{x}} }}\right\rangle}_{1} \\ &= -{\left\langle{{ \sigma_m (\mathbf{k} \wedge \mathbf{A}) k^m e^{i\mathbf{k} \cdot \mathbf{x}} }}\right\rangle}_{1} \\ &= -\mathbf{k} \cdot (\mathbf{k} \wedge \mathbf{A}) e^{i\mathbf{k} \cdot \mathbf{x}} \\ \end{aligned}
Since $\mathbf{k} \cdot (\mathbf{k} \wedge \mathbf{A}) = \mathbf{k}^2 \mathbf{A} - \mathbf{k} (\mathbf{k} \cdot \mathbf{A})$, this vanishes only when $\mathbf{k} \wedge \mathbf{A} = 0$, which implies that $\mathbf{A} \propto \mathbf{k}$. The solution set is then completely described by functions of the form
\begin{aligned}\mathbf{A}(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int \mathbf{k} \psi(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k},\end{aligned} \hspace{\stretch{1}}(4.29)
where $\psi(\mathbf{k})$ is an arbitrary scalar valued function. This is however, an extremely uninteresting solution since the curl is uniformly zero
\begin{aligned}F &= \boldsymbol{\nabla} \wedge \mathbf{A} \\ &= (2 \pi)^{-3/2} \int (i \mathbf{k}) \wedge \mathbf{k} \psi(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned}
Since $\mathbf{k} \wedge \mathbf{k} = 0$, when all is said and done the $\phi = 0$, $\partial_t \mathbf{A} = 0$ case appears to admit only the trivial solution $F = 0$. Moving on, …
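The divergence condition used in this case rests on the identity $\mathbf{k} \cdot (\mathbf{k} \wedge \mathbf{A}) = \mathbf{k}^2 \mathbf{A} - \mathbf{k} (\mathbf{k} \cdot \mathbf{A})$, which in three dimensions is the familiar $-\mathbf{k} \times (\mathbf{k} \times \mathbf{A})$, and which vanishes only when $\mathbf{A} \parallel \mathbf{k}$. A quick numeric confirmation with arbitrary invented vectors:

```python
# Numeric check of k . (k ^ A) = k^2 A - k (k.A) = -k x (k x A) for arbitrary
# vectors, and of its vanishing when A is parallel to k.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

k = (0.5, -1.5, 2.0)
A = (1.0, 0.7, -0.3)

lhs = tuple(dot(k, k) * Am - km * dot(k, A) for km, Am in zip(k, A))
rhs = tuple(-r for r in cross(k, cross(k, A)))     # -k x (k x A)
err = max(abs(l - r) for l, r in zip(lhs, rhs))

Apar = tuple(2.5 * km for km in k)                 # A parallel to k
zero_err = max(abs(dot(k, k) * Am - km * dot(k, Apar))
               for km, Am in zip(k, Apar))
```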
## Case II. Constant vector potential divergence. Scalar potential eliminated by gauge transformation.
Next in the order of complexity is consideration of the case (3.22). Here we also have $\phi = 0$, eliminated by gauge transformation, and are looking for solutions with the constraint
\begin{aligned}\text{constant} &= \boldsymbol{\nabla} \cdot \mathbf{A}(\mathbf{x}, t) \\ &= (2 \pi)^{-3/2} \int i \mathbf{k} \cdot \mathbf{A}(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned}
How can this constraint be enforced? The only obvious way is a requirement for $\mathbf{k} \cdot \mathbf{A}(\mathbf{k}, t)$ to be zero for all $(\mathbf{k},t)$, meaning that our to-be-determined Fourier transform coefficients are required to be perpendicular to the wave number vectors at all times.
The remainder of Maxwell’s equations, (3.23), imposes the additional constraint on the Fourier transform $\mathbf{A}(\mathbf{k},t)$
\begin{aligned}0 &= (2 \pi)^{-3/2} \int \left( \frac{1}{{c^2}} \partial_{tt} \mathbf{A}(\mathbf{k}, t) - i^2 \mathbf{k}^2 \mathbf{A}(\mathbf{k}, t)\right) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.30)
For zero equality for all $\mathbf{x}$ it appears that we require the Fourier transforms $\mathbf{A}(\mathbf{k}, t)$ to be harmonic in time
\begin{aligned}\partial_{tt} \mathbf{A}(\mathbf{k}, t) = - c^2 \mathbf{k}^2 \mathbf{A}(\mathbf{k}, t).\end{aligned} \hspace{\stretch{1}}(4.31)
This has the familiar exponential solutions
\begin{aligned}\mathbf{A}(\mathbf{k}, t) = \mathbf{A}_{\pm}(\mathbf{k}) e^{ \pm i c {\left\lvert{\mathbf{k}}\right\rvert} t },\end{aligned} \hspace{\stretch{1}}(4.32)
also subject to a requirement that $\mathbf{k} \cdot \mathbf{A}(\mathbf{k}) = 0$. Our field, where the $\mathbf{A}_{\pm}(\mathbf{k})$ are to be determined by initial time conditions, is by (2.7) of the form
\begin{aligned}F(\mathbf{x}, t)= \text{Real} \frac{i}{(\sqrt{2\pi})^3} \int \Bigl( -{\left\lvert{\mathbf{k}}\right\rvert} \mathbf{A}_{+}(\mathbf{k}) + \mathbf{k} \wedge \mathbf{A}_{+}(\mathbf{k}) \Bigr) \exp(i \mathbf{k} \cdot \mathbf{x} + i c {\left\lvert{\mathbf{k}}\right\rvert} t) d^3 \mathbf{k}+ \text{Real} \frac{i}{(\sqrt{2\pi})^3} \int \Bigl( {\left\lvert{\mathbf{k}}\right\rvert} \mathbf{A}_{-}(\mathbf{k}) + \mathbf{k} \wedge \mathbf{A}_{-}(\mathbf{k}) \Bigr) \exp(i \mathbf{k} \cdot \mathbf{x} - i c {\left\lvert{\mathbf{k}}\right\rvert} t) d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.33)
Since $0 = \mathbf{k} \cdot \mathbf{A}_{\pm}(\mathbf{k})$, we have $\mathbf{k} \wedge \mathbf{A}_{\pm}(\mathbf{k}) = \mathbf{k} \mathbf{A}_{\pm}$, which allows a factor of ${\left\lvert{\mathbf{k}}\right\rvert}$ to be pulled out. The structure of the solution is not changed by incorporating the $i (2\pi)^{-3/2} {\left\lvert{\mathbf{k}}\right\rvert}$ factors into $\mathbf{A}_{\pm}$, leaving the field with the general form
\begin{aligned}F(\mathbf{x}, t)= \text{Real} \int ( \hat{\mathbf{k}} - 1 ) \mathbf{A}_{+}(\mathbf{k}) \exp(i \mathbf{k} \cdot \mathbf{x} + i c {\left\lvert{\mathbf{k}}\right\rvert} t) d^3 \mathbf{k}+ \text{Real} \int ( \hat{\mathbf{k}} + 1 ) \mathbf{A}_{-}(\mathbf{k}) \exp(i \mathbf{k} \cdot \mathbf{x} - i c {\left\lvert{\mathbf{k}}\right\rvert} t) d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.34)
The original meaning of $\mathbf{A}_{\pm}$ as Fourier transforms of the vector potential is obscured by the tidy up change to absorb ${\left\lvert{\mathbf{k}}\right\rvert}$, but the geometry of the solution is clearer this way.
It is also particularly straightforward to confirm that $\gamma_0 \nabla F = 0$ separately for either half of (4.34).
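The harmonic time dependence (4.32) can also be spot checked directly: differentiating $\mathbf{A}_{\pm}(\mathbf{k}) e^{\pm i c {\left\lvert{\mathbf{k}}\right\rvert} t}$ twice reproduces $-c^2 \mathbf{k}^2$ times the function, as (4.31) requires (all numbers here are arbitrary):

```python
import cmath

# Direct check that A(k, t) = A0 exp(+/- i c |k| t) from (4.32) satisfies the
# harmonic condition (4.31): d^2 A/dt^2 = -c^2 k^2 A.
c, knorm = 3.0, 1.5
A0 = 0.8 - 0.2j

t = 0.4
residual = 0.0
for sign in (+1, -1):
    w = sign * 1j * c * knorm
    A = A0 * cmath.exp(w * t)
    A_tt = w * w * A               # exact second time derivative
    residual = max(residual, abs(A_tt + c**2 * knorm**2 * A))
```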
## Case III. Non-zero scalar potential. No gauge transformation.
Now let’s work from (3.11). In particular, a divergence operation can be factored from (3.11), for
\begin{aligned}0 = \boldsymbol{\nabla} \cdot (\boldsymbol{\nabla} \phi + \partial_0 \mathbf{A}).\end{aligned} \hspace{\stretch{1}}(4.35)
Right off the top, there is a requirement for
\begin{aligned}\text{constant} = \boldsymbol{\nabla} \phi + \partial_0 \mathbf{A}.\end{aligned} \hspace{\stretch{1}}(4.36)
In terms of the Fourier transforms this is
\begin{aligned}\text{constant} = \frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(i \mathbf{k} \phi(\mathbf{k}, t) + \frac{1}{c} \partial_t \mathbf{A}(\mathbf{k}, t)\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.37)
Are there any ways for this to equal a constant for all $\mathbf{x}$ without requiring that constant to be zero? Assuming no for now, and that this constant must be zero, this implies a coupling between the $\phi$ and $\mathbf{A}$ Fourier transforms of the form
\begin{aligned}\phi(\mathbf{k}, t) = -\frac{1}{{i c \mathbf{k}}} \partial_t \mathbf{A}(\mathbf{k}, t)\end{aligned} \hspace{\stretch{1}}(4.38)
A secondary implication is that $\partial_t \mathbf{A}(\mathbf{k}, t) \propto \mathbf{k}$, or else $\phi(\mathbf{k}, t)$ is not a scalar. We had a transverse solution by requiring via gauge transformation that $\phi = 0$; here we instead have the vector potential directed along the propagation direction.
A secondary confirmation that this is a required coupling between the scalar and vector potential can be had by evaluating the divergence equation of (4.35)
\begin{aligned}0 = \frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(- \mathbf{k}^2 \phi(\mathbf{k}, t) + \frac{i\mathbf{k}}{c} \cdot \partial_t \mathbf{A}(\mathbf{k}, t)\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.39)
Rearranging this also produces (4.38). We want to now substitute this relationship into (3.12).
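Before carrying out that substitution, the coupling (4.38) and the divergence condition (4.39) can be spot checked numerically (all values below are invented for the check):

```python
# Check of the coupling (4.38): take dA/dt = k * g (proportional to k, as
# required for phi to be a scalar). The vector inverse 1/k = k/k^2 reduces
# (4.38) to phi = (i/c) g, and the divergence condition (4.39),
# -k^2 phi + (i/c) k . dA/dt = 0, is then satisfied.
c = 2.0
k = (1.0, -2.0, 0.5)
g = 0.3 + 0.7j                           # arbitrary complex scalar
k2 = sum(km * km for km in k)

dA_dt = tuple(km * g for km in k)        # dA/dt proportional to k
phi = (1j / c) * g                       # (4.38) after applying 1/k = k/k^2

residual = abs(-k2 * phi
               + (1j / c) * sum(km * dm for km, dm in zip(k, dA_dt)))
```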
Starting with just the $\partial_0 \phi + \boldsymbol{\nabla} \cdot \mathbf{A}$ part we have
\begin{aligned}\partial_0 \phi + \boldsymbol{\nabla} \cdot \mathbf{A}&=\frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(\frac{i}{c^2 \mathbf{k}} \partial_{tt} \mathbf{A}(\mathbf{k}, t) + i \mathbf{k} \cdot \mathbf{A}\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.40)
Taking the gradient of this brings down a factor of $i\mathbf{k}$ for
\begin{aligned}\boldsymbol{\nabla} (\partial_0 \phi + \boldsymbol{\nabla} \cdot \mathbf{A})&=-\frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(\frac{1}{c^2} \partial_{tt} \mathbf{A}(\mathbf{k}, t) + \mathbf{k} (\mathbf{k} \cdot \mathbf{A})\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.41)
(3.12) in its entirety is now
\begin{aligned}0 &=\frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(- (i\mathbf{k})^2 \mathbf{A}- \mathbf{k} (\mathbf{k} \cdot \mathbf{A})\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.42)
This isn’t terribly pleasant looking. Perhaps it is better to go the other direction. We could write
\begin{aligned}\phi = \frac{i}{c \mathbf{k}} \frac{\partial {\mathbf{A}}}{\partial {t}} = \frac{i}{c} \frac{\partial {\psi}}{\partial {t}},\end{aligned} \hspace{\stretch{1}}(4.43)
so that
\begin{aligned}\mathbf{A}(\mathbf{k}, t) = \mathbf{k} \psi(\mathbf{k}, t).\end{aligned} \hspace{\stretch{1}}(4.44)
\begin{aligned}0 &=\frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(\frac{1}{{c^2}} \mathbf{k} \psi_{tt}- \boldsymbol{\nabla}^2 \mathbf{k} \psi + \boldsymbol{\nabla} \frac{i}{c^2} \psi_{tt}+\boldsymbol{\nabla}( \boldsymbol{\nabla} \cdot (\mathbf{k} \psi) )\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k} \\ \end{aligned}
Note that the gradients here operate on everything to the right, including and especially the exponential. Each application of the gradient brings down an additional $i\mathbf{k}$ factor, and we have
\begin{aligned}\frac{1}{{(\sqrt{2 \pi})^3}} \int \mathbf{k} \Bigl(\frac{1}{{c^2}} \psi_{tt}- i^2 \mathbf{k}^2 \psi + \frac{i^2}{c^2} \psi_{tt}+i^2 \mathbf{k}^2 \psi \Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned}
This is identically zero, so we see that this second equation provides no additional information. That is somewhat surprising, since the first equation supplies few constraints: the function $\psi(\mathbf{k}, t)$ can be anything. Understanding of this curiosity comes from computation of the Faraday bivector itself. From (2.7), that is
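The pairwise cancellation of the bracketed terms can be confirmed numerically with arbitrary values:

```python
# The four bracketed terms cancel pairwise for any psi, psi_tt, k^2 and c;
# a direct numeric check with invented values:
c = 1.7
k2 = 2.3                  # k^2
psi = 0.9 - 0.4j          # psi(k, t), arbitrary
psi_tt = -1.1 + 0.6j      # its second time derivative, also arbitrary
i2 = 1j * 1j              # i^2 = -1, kept explicit to mirror the integrand

total = psi_tt / c**2 - i2 * k2 * psi + i2 * psi_tt / c**2 + i2 * k2 * psi
```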
\begin{aligned}F = \frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(-i \mathbf{k} \frac{i}{c}\psi_t - \frac{1}{c} \mathbf{k} \psi_t + i \mathbf{k} \wedge \mathbf{k} \psi\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.45)
All terms cancel, so we see that a non-zero $\phi$ leads to $F = 0$, as was the case in Case I (which also resulted in $\mathbf{A}(\mathbf{k}) \propto \mathbf{k}$).
Can this Fourier representation lead to a non-transverse solution to Maxwell’s equation? If so, it is not obvious how.
# The energy momentum tensor
The energy momentum tensor is then
\begin{aligned}T(a) &= -\frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint\left(- \frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)+ i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)- i \mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)a\left(- \frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)- i \mathbf{k} \phi(\mathbf{k}, t)+ i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}'.\end{aligned} \hspace{\stretch{1}}(5.46)
Observing that $\gamma_0$ commutes with spatial bivectors and anticommutes with spatial vectors, and writing $\sigma_\mu = \gamma_\mu \gamma_0$, the tensor splits neatly into scalar and spatial vector components
\begin{aligned}T(\gamma_\mu) \cdot \gamma_0 &= \frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint\left\langle{{\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)- i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)+ i \mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)\sigma_\mu\left(\frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)+ i \mathbf{k} \phi(\mathbf{k}, t)+ i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)}}\right\rangle e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}' \\ T(\gamma_\mu) \wedge \gamma_0 &= \frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint{\left\langle{{\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)- i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)+ i \mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)\sigma_\mu\left(\frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)+ i \mathbf{k} \phi(\mathbf{k}, t)+ i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)}}\right\rangle}_{1}e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}'.\end{aligned} \hspace{\stretch{1}}(5.47)
In particular for $\mu = 0$, we have
\begin{aligned}H &\equiv T(\gamma_0) \cdot \gamma_0 = \frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint\left(\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)- i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)\right)\cdot\left(\frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)+ i \mathbf{k} \phi(\mathbf{k}, t)\right)- (\mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)) \cdot (\mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t))\right)e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}' \\ \mathbf{P} &\equiv T(\gamma_0) \wedge \gamma_0 = \frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint\left(i\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)- i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)\right) \cdot\left(\mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)-i\left(\frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)+ i \mathbf{k} \phi(\mathbf{k}, t)\right)\cdot\left(\mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)\right)e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}'.\end{aligned} \hspace{\stretch{1}}(5.49)
Integrating this over all space and identifying the delta function
\begin{aligned}\delta(\mathbf{k}) \equiv \frac{1}{{(2 \pi)^3}} \int e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{x},\end{aligned} \hspace{\stretch{1}}(5.51)
reduces the tensor to a single integral in the continuous angular wave number space of $\mathbf{k}$.
\begin{aligned}\int T(a) d^3 \mathbf{x} &= -\frac{\epsilon_0}{2} \text{Real} \int\left(- \frac{1}{c} {{\dot{\mathbf{A}}}}^{*}+ i \mathbf{k} {{\phi}}^{*}- i \mathbf{k} \wedge {\mathbf{A}}^{*}\right)a\left(- \frac{1}{c} \dot{\mathbf{A}}- i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.52)
Or,
\begin{aligned}\int T(\gamma_\mu) \gamma_0 d^3 \mathbf{x} =\frac{\epsilon_0}{2} \text{Real} \int{\left\langle{{\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}- i \mathbf{k} {{\phi}}^{*}+ i \mathbf{k} \wedge {\mathbf{A}}^{*}\right)\sigma_\mu\left(\frac{1}{c} \dot{\mathbf{A}}+ i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)}}\right\rangle}_{{0,1}}d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.53)
Multiplying out (5.53) yields for $\int H$
\begin{aligned}\int H d^3 \mathbf{x} &=\frac{\epsilon_0}{2} \int d^3 \mathbf{k} \left(\frac{1}{{c^2}} {\left\lvert{\dot{\mathbf{A}}}\right\rvert}^2 + \mathbf{k}^2 ({\left\lvert{\phi}\right\rvert}^2 + {\left\lvert{\mathbf{A}}\right\rvert}^2 )- {\left\lvert{\mathbf{k} \cdot \mathbf{A}}\right\rvert}^2+ 2 \frac{\mathbf{k}}{c} \cdot \text{Real}( i {{\phi}}^{*} \dot{\mathbf{A}} )\right)\end{aligned} \hspace{\stretch{1}}(5.54)
Recall that the only non-trivial solution we found for the assumed Fourier transform representation of $F$ was for $\phi = 0$, $\mathbf{k} \cdot \mathbf{A}(\mathbf{k}, t) = 0$. Thus we have for the energy density integrated over all space, just
\begin{aligned}\int H d^3 \mathbf{x} &=\frac{\epsilon_0}{2} \int d^3 \mathbf{k} \left(\frac{1}{{c^2}} {\left\lvert{\dot{\mathbf{A}}}\right\rvert}^2 + \mathbf{k}^2 {\left\lvert{\mathbf{A}}\right\rvert}^2 \right).\end{aligned} \hspace{\stretch{1}}(5.55)
Observe that we have the structure of a harmonic oscillator for the energy of the radiation system. What is the canonical momentum for this system? Will it correspond to the Poynting vector, integrated over all space?
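For the harmonic solutions of Case II the oscillator analogy is explicit: the integrand of (5.55) is constant in time, with the $(1/c^2) {\left\lvert{\dot{\mathbf{A}}}\right\rvert}^2$ and $\mathbf{k}^2 {\left\lvert{\mathbf{A}}\right\rvert}^2$ terms each contributing $\mathbf{k}^2 {\left\lvert{\mathbf{A}_{\pm}}\right\rvert}^2$. A quick check with arbitrary invented amplitudes:

```python
import cmath

# For A(k, t) = A0 exp(+/- i c |k| t), the integrand of (5.55) is constant in
# time, each of its two terms contributing k^2 |A0|^2.
c, knorm = 2.0, 1.25
A0 = (0.6 + 0.3j, -0.2 + 0.0j, 0.1 - 0.5j)     # Fourier amplitude components

def integrand(t, sign):
    w = sign * 1j * c * knorm
    A = [a * cmath.exp(w * t) for a in A0]
    A_t = [w * a for a in A]                   # exact time derivative
    return (sum(abs(a)**2 for a in A_t) / c**2
            + knorm**2 * sum(abs(a)**2 for a in A))

expected = 2 * knorm**2 * sum(abs(a)**2 for a in A0)
err = max(abs(integrand(t, s) - expected)
          for t in (0.0, 0.7, 2.1) for s in (+1, -1))
```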
Let’s reduce the vector component of (5.53), after first imposing the $\phi=0$ and $\mathbf{k} \cdot \mathbf{A} = 0$ conditions used above to obtain the harmonic oscillator form of the energy relationship. This is
\begin{aligned}\int \mathbf{P} d^3 \mathbf{x} &=\frac{\epsilon_0}{2 c} \text{Real} \int d^3 \mathbf{k} \left( i {\mathbf{A}}^{*}_t \cdot (\mathbf{k} \wedge \mathbf{A})+ i (\mathbf{k} \wedge {\mathbf{A}}^{*}) \cdot \mathbf{A}_t\right) \\ &=\frac{\epsilon_0}{2 c} \text{Real} \int d^3 \mathbf{k} \left( -i ({\mathbf{A}}^{*}_t \cdot \mathbf{A}) \mathbf{k}+ i \mathbf{k} ({\mathbf{A}}^{*} \cdot \mathbf{A}_t)\right)\end{aligned}
This is just
\begin{aligned}\int \mathbf{P} d^3 \mathbf{x} &=\frac{\epsilon_0}{c} \text{Real} i \int \mathbf{k} ({\mathbf{A}}^{*} \cdot \mathbf{A}_t) d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.56)
Recall that the Fourier transforms for the transverse propagation case had the form $\mathbf{A}(\mathbf{k}, t) = \mathbf{A}_{\pm}(\mathbf{k}) e^{\pm i c {\left\lvert{\mathbf{k}}\right\rvert} t}$, where the minus generated the advanced wave, and the plus the receding wave. With substitution of the vector potential for the advanced wave into the energy and momentum results of (5.55) and (5.56) respectively, we have
\begin{aligned}\int H d^3 \mathbf{x} &= \epsilon_0 \int \mathbf{k}^2 {\left\lvert{\mathbf{A}(\mathbf{k})}\right\rvert}^2 d^3 \mathbf{k} \\ \int \mathbf{P} d^3 \mathbf{x} &= \epsilon_0 \int \hat{\mathbf{k}} \mathbf{k}^2 {\left\lvert{\mathbf{A}(\mathbf{k})}\right\rvert}^2 d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.57)
After a somewhat circuitous route, this has the relativistic symmetry that is expected. In particular, for the complete $\mu=0$ tensor we have after integration over all space
\begin{aligned}\int T(\gamma_0) \gamma_0 d^3 \mathbf{x} = \epsilon_0 \int (1 + \hat{\mathbf{k}}) \mathbf{k}^2 {\left\lvert{\mathbf{A}(\mathbf{k})}\right\rvert}^2 d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.59)
The receding wave solution would give the same result, but directed as $1 - \hat{\mathbf{k}}$ instead.
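The $1 \pm \hat{\mathbf{k}}$ factors in (5.59) have the idempotent structure expected of a light-like energy-momentum: since $\hat{\mathbf{k}}^2 = 1$, we have $(1 + \hat{\mathbf{k}})^2 = 2 (1 + \hat{\mathbf{k}})$, and the scalar (energy) and vector (momentum) parts have matching magnitude mode by mode. A small numeric check (the wedge contribution is omitted since $\hat{\mathbf{k}} \wedge \hat{\mathbf{k}} = 0$; the numbers are arbitrary):

```python
import math

# Since khat^2 = 1, the multivector (1 + khat) squares to 2 (1 + khat). The
# geometric product below handles the scalar-plus-vector case with parallel
# vectors, where the wedge term vanishes.
def gp_parallel(s1, v1, s2, v2):
    dotpart = sum(a * b for a, b in zip(v1, v2))
    return (s1 * s2 + dotpart,
            tuple(s1 * b + s2 * a for a, b in zip(v1, v2)))

k = (2.0, -1.0, 2.0)
knorm = math.sqrt(sum(km * km for km in k))    # = 3
khat = tuple(km / knorm for km in k)

s, v = gp_parallel(1.0, khat, 1.0, khat)       # (1 + khat)^2
err = max(abs(s - 2.0), *(abs(vm - 2.0 * hm) for vm, hm in zip(v, khat)))
```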
Observe that we also have the four divergence conservation statement that is expected
\begin{aligned}\frac{\partial {}}{\partial {t}} \int H d^3 \mathbf{x} + \boldsymbol{\nabla} \cdot \int c \mathbf{P} d^3 \mathbf{x} &= 0.\end{aligned} \hspace{\stretch{1}}(5.60)
This follows trivially since both derivatives are zero. If the integration region were more specific, then instead of a $0 + 0 = 0$ relationship we’d have the power ${\partial {H}}/{\partial {t}}$ balanced in magnitude by the flux of $c \mathbf{P}$ through the bounding surface. For a more general surface the time and spatial dependencies shouldn’t necessarily vanish, but we should still have this radiation energy momentum conservation.
# References
[1] Peeter Joot. Electrodynamic field energy for vacuum. [online]. http://sites.google.com/site/peeterjoot/math2009/fourierMaxVac.pdf.
[2] Peeter Joot. Energy and momentum for Complex electric and magnetic field phasors. [online]. http://sites.google.com/site/peeterjoot/math2009/complexFieldEnergy.pdf.
[3] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.
## Energy and momentum for assumed Fourier transform solutions to the homogeneous Maxwell equation.
Posted by peeterjoot on December 22, 2009
# Motivation and notation.
In Electrodynamic field energy for vacuum (reworked) [1], building on Energy and momentum for Complex electric and magnetic field phasors [2], an expression for the energy and momentum density was derived for an assumed Fourier series solution to the homogeneous Maxwell’s equation. Here we move to the continuous case, examining Fourier transform solutions and the associated energy and momentum density.
A complex (phasor) representation is implied, so taking real parts when all is said and done is required of the fields. For the energy momentum tensor the Geometric Algebra form, modified for complex fields, is used
\begin{aligned}T(a) = -\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} a F \Bigr).\end{aligned} \hspace{\stretch{1}}(1.1)
The assumed four vector potential will be written
\begin{aligned}A(\mathbf{x}, t) = A^\mu(\mathbf{x}, t) \gamma_\mu = \frac{1}{{(\sqrt{2 \pi})^3}} \int A(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(1.2)
Subject to the requirement that $A$ is a solution of Maxwell’s equation
\begin{aligned}\nabla (\nabla \wedge A) = 0.\end{aligned} \hspace{\stretch{1}}(1.3)
To avoid latex hell, no special notation will be used for the Fourier coefficients,
\begin{aligned}A(\mathbf{k}, t) = \frac{1}{{(\sqrt{2 \pi})^3}} \int A(\mathbf{x}, t) e^{-i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{x}.\end{aligned} \hspace{\stretch{1}}(1.4)
When convenient and unambiguous, this $(\mathbf{k},t)$ dependence will be implied.
Having picked a time and space representation for the field, it will be natural to express both the four potential and the gradient as scalar plus spatial vector, instead of using the Dirac basis. For the gradient this is
\begin{aligned}\nabla &= \gamma^\mu \partial_\mu = (\partial_0 - \boldsymbol{\nabla}) \gamma_0 = \gamma_0 (\partial_0 + \boldsymbol{\nabla}),\end{aligned} \hspace{\stretch{1}}(1.5)
and for the four potential (or the Fourier transform functions), this is
\begin{aligned}A &= \gamma_\mu A^\mu = (\phi + \mathbf{A}) \gamma_0 = \gamma_0 (\phi - \mathbf{A}).\end{aligned} \hspace{\stretch{1}}(1.6)
# Setup
The field bivector $F = \nabla \wedge A$ is required for the energy momentum tensor. This is
\begin{aligned}\nabla \wedge A&= \frac{1}{{2}}\left( \stackrel{ \rightarrow }{\nabla} A - A \stackrel{ \leftarrow }{\nabla} \right) \\ &= \frac{1}{{2}}\left( (\stackrel{ \rightarrow }{\partial}_0 - \stackrel{ \rightarrow }{\boldsymbol{\nabla}}) \gamma_0 \gamma_0 (\phi - \mathbf{A})- (\phi + \mathbf{A}) \gamma_0 \gamma_0 (\stackrel{ \leftarrow }{\partial}_0 + \stackrel{ \leftarrow }{\boldsymbol{\nabla}})\right) \\ &= -\boldsymbol{\nabla} \phi -\partial_0 \mathbf{A} + \frac{1}{{2}}(\stackrel{ \rightarrow }{\boldsymbol{\nabla}} \mathbf{A} - \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}}) \end{aligned}
This last term is a spatial curl and the field is then
\begin{aligned}F = -\boldsymbol{\nabla} \phi -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A} \end{aligned} \hspace{\stretch{1}}(2.7)
Applied to the Fourier representation this is
\begin{aligned}F = \frac{1}{{(\sqrt{2 \pi})^3}} \int \left( - \frac{1}{{c}} \dot{\mathbf{A}}- i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(2.8)
The energy momentum tensor is then
\begin{aligned}T(a) &= -\frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint \left( - \frac{1}{{c}} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)+ i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)- i \mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)a\left( - \frac{1}{{c}} \dot{\mathbf{A}}(\mathbf{k}, t)- i \mathbf{k} \phi(\mathbf{k}, t)+ i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}'.\end{aligned} \hspace{\stretch{1}}(2.9)
# The tensor integrated over all space. Energy and momentum?
Integrating this over all space and identifying the delta function
\begin{aligned}\delta(\mathbf{k}) \equiv \frac{1}{{(2 \pi)^3}} \int e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{x},\end{aligned} \hspace{\stretch{1}}(3.10)
reduces the tensor to a single integral in the continuous angular wave number space of $\mathbf{k}$.
\begin{aligned}\int T(a) d^3 \mathbf{x} &= -\frac{\epsilon_0}{2} \text{Real} \int \left( - \frac{1}{{c}} {{\dot{\mathbf{A}}}}^{*}+ i \mathbf{k} {{\phi}}^{*}- i \mathbf{k} \wedge {\mathbf{A}}^{*}\right)a\left( - \frac{1}{{c}} \dot{\mathbf{A}}- i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.11)
Observing that $\gamma_0$ commutes with spatial bivectors and anticommutes with spatial vectors, and writing $\sigma_\mu = \gamma_\mu \gamma_0$, one has
\begin{aligned}\int T(\gamma_\mu) \gamma_0 d^3 \mathbf{x} = \frac{\epsilon_0}{2} \text{Real} \int {\left\langle{{\left( \frac{1}{{c}} {{\dot{\mathbf{A}}}}^{*}- i \mathbf{k} {{\phi}}^{*}+ i \mathbf{k} \wedge {\mathbf{A}}^{*}\right)\sigma_\mu\left( \frac{1}{{c}} \dot{\mathbf{A}}+ i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)}}\right\rangle}_{{0,1}}d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.12)
The scalar and spatial vector grade selection operator has been added for convenience and does not change the result since those are necessarily the only grades anyhow. The post multiplication by the observer frame time basis vector $\gamma_0$ serves to separate the energy and momentum like components of the tensor nicely into scalar and vector aspects. In particular for $T(\gamma^0)$, one could write
\begin{aligned}\int T(\gamma^0) d^3 \mathbf{x} = (H + \mathbf{P}) \gamma_0,\end{aligned} \hspace{\stretch{1}}(3.13)
If these are correctly identified with energy and momentum then it also ought to be true that we have the conservation relationship
\begin{aligned}\frac{\partial {H}}{\partial {t}} + \boldsymbol{\nabla} \cdot (c \mathbf{P}) = 0.\end{aligned} \hspace{\stretch{1}}(3.14)
However, multiplying out (3.12) yields for $H$
\begin{aligned}H &= \frac{\epsilon_0}{2} \int d^3 \mathbf{k} \left(\frac{1}{{c^2}} {\left\lvert{\dot{\mathbf{A}}}\right\rvert}^2 + \mathbf{k}^2 ({\left\lvert{\phi}\right\rvert}^2 + {\left\lvert{\mathbf{A}}\right\rvert}^2 )- {\left\lvert{\mathbf{k} \cdot \mathbf{A}}\right\rvert}^2 + 2 \frac{\mathbf{k}}{c} \cdot \text{Real}( i \phi {{\dot{\mathbf{A}}}}^{*} )\right)\end{aligned} \hspace{\stretch{1}}(3.15)
The vector component takes a bit more work to reduce
\begin{aligned}\mathbf{P} &= \frac{\epsilon_0}{2} \int d^3 \mathbf{k} \text{Real} \left(\frac{i}{c} ({{\dot{\mathbf{A}}}}^{*} \cdot (\mathbf{k} \wedge \mathbf{A})+ {{\phi}}^{*} \mathbf{k} \cdot (\mathbf{k} \wedge \mathbf{A})+ \frac{i}{c} (\mathbf{k} \wedge {\mathbf{A}}^{*}) \cdot \dot{\mathbf{A}}- \phi (\mathbf{k} \wedge {\mathbf{A}}^{*}) \cdot \mathbf{k}\right) \\ &=\frac{\epsilon_0}{2} \int d^3 \mathbf{k} \text{Real} \left(\frac{i}{c} \left( ({{\dot{\mathbf{A}}}}^{*} \cdot \mathbf{k}) \mathbf{A} -({{\dot{\mathbf{A}}}}^{*} \cdot \mathbf{A}) \mathbf{k} \right)+ {{\phi}}^{*} \left( \mathbf{k}^2 \mathbf{A} - (\mathbf{k} \cdot \mathbf{A}) \mathbf{k} \right)+ \frac{i}{c} \left( ({\mathbf{A}}^{*} \cdot \dot{\mathbf{A}}) \mathbf{k} - (\mathbf{k} \cdot \dot{\mathbf{A}}) {\mathbf{A}}^{*} \right)+ \phi \left( \mathbf{k}^2 {\mathbf{A}}^{*} -({\mathbf{A}}^{*} \cdot \mathbf{k}) \mathbf{k} \right) \right).\end{aligned}
Canceling and regrouping leaves
\begin{aligned}\mathbf{P}&=\epsilon_0 \int d^3 \mathbf{k} \text{Real} \left(\mathbf{A} \left( \mathbf{k}^2 {{\phi}}^{*} + \mathbf{k} \cdot {{\dot{\mathbf{A}}}}^{*} \right)+ \mathbf{k} \left( -{{\phi}}^{*} (\mathbf{k} \cdot \mathbf{A}) + \frac{i}{c} ({\mathbf{A}}^{*} \cdot \dot{\mathbf{A}})\right)\right).\end{aligned} \hspace{\stretch{1}}(3.16)
This has no explicit $\mathbf{x}$ dependence, so the conservation relation (3.14) is violated unless ${\partial {H}}/{\partial {t}} = 0$. There is no reason to assume that will be the case. In the discrete Fourier series treatment, a gauge transformation allowed for elimination of $\phi$, and this implied $\mathbf{k} \cdot \mathbf{A}_\mathbf{k} = 0$ or $\mathbf{A}_\mathbf{k}$ constant. We will probably have a similar result here, eliminating most of the terms in (3.15) and (3.16). Except for the constant $\mathbf{A}_\mathbf{k}$ solution of the field equations there is no obvious way that such a simplified energy expression will have zero derivative.
A more reasonable conclusion is that this approach is flawed. We ought to be looking at the divergence relation as a starting point, and instead of integrating over all space, employ Gauss’s theorem to convert the divergence integral into a surface integral. Without math, the conservation relationship probably ought to be expressed as: the energy change in a volume is matched by the momentum change through the surface. However, without an integral over all space, we do not get the nice delta function cancellation observed above. How to proceed is not immediately clear. Stepping back to review applications of Gauss’s theorem is probably a good first step.
# References
[1] Peeter Joot. Electrodynamic field energy for vacuum. [online]. http://sites.google.com/site/peeterjoot/math2009/fourierMaxVac.pdf.
[2] Peeter Joot. {Energy and momentum for Complex electric and magnetic field phasors.} [online]. http://sites.google.com/site/peeterjoot/math2009/complexFieldEnergy.pdf.
## Electrodynamic field energy for vacuum (reworked)
Posted by peeterjoot on December 21, 2009
# Previous version.
Reducing the products in the Dirac basis makes life more complicated than it needs to be (this became obvious when attempting to derive an expression for the Poynting integral).
# Motivation.
From Energy and momentum for Complex electric and magnetic field phasors [PDF] how to formulate the energy momentum tensor for complex vector fields (ie. phasors) in the Geometric Algebra formalism is now understood. To recap, for the field $F = \mathbf{E} + I c \mathbf{B}$, where $\mathbf{E}$ and $\mathbf{B}$ may be complex vectors we have for Maxwell’s equation
\begin{aligned}\nabla F = J/\epsilon_0 c.\end{aligned} \quad\quad\quad(1)
This is a doubly complex representation, with the four vector pseudoscalar $I = \gamma_0 \gamma_1 \gamma_2 \gamma_3$ acting as a non-commutative imaginary, as well as real and imaginary parts for the electric and magnetic field vectors. We take the real part (not the scalar part) of any bivector solution $F$ of Maxwell’s equation as the actual solution, but allow ourselves the freedom to work with the complex phasor representation when convenient. In these phasor vectors, the imaginary $i$, as in $\mathbf{E} = \text{Real}(\mathbf{E}) + i \text{Imag}(\mathbf{E})$, is a commuting imaginary, commuting with all the multivector elements in the algebra.
The real valued, four vector, energy momentum tensor $T(a)$ was found to be
\begin{aligned}T(a) = \frac{\epsilon_0}{4} \Bigl( {{F}}^{*} a \tilde{F} + \tilde{F} a {{F}}^{*} \Bigr) = -\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} a F \Bigr).\end{aligned} \quad\quad\quad(2)
To supply some context that gives meaning to this tensor the associated conservation relationship was found to be
\begin{aligned}\nabla \cdot T(a) &= a \cdot \frac{1}{{ c }} \text{Real} \left( J \cdot {{F}}^{*} \right).\end{aligned} \quad\quad\quad(3)
and in particular for $a = \gamma^0$, this four vector divergence takes the form
\begin{aligned}\frac{\partial {}}{\partial {t}}\frac{\epsilon_0}{2}(\mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*})+ \boldsymbol{\nabla} \cdot \frac{1}{{\mu_0}} \text{Real} (\mathbf{E} \times {\mathbf{B}}^{*} )+ \text{Real}( \mathbf{J} \cdot {\mathbf{E}}^{*} ) = 0,\end{aligned} \quad\quad\quad(4)
relating the energy term $T^{00} = T(\gamma^0) \cdot \gamma^0$ and the Poynting spatial vector $T(\gamma^0) \wedge \gamma^0$ with the current density and electric field product that constitutes the energy portion of the Lorentz force density.
Let’s apply this to calculating the energy associated with the field that is periodic within a rectangular prism as done by Bohm in [2]. We do not necessarily need the Geometric Algebra formalism for this calculation, but this will be a fun way to attempt it.
# Setup
Let’s assume a Fourier representation for the four vector potential $A$ for the field $F = \nabla \wedge A$. That is
\begin{aligned}A = \sum_{\mathbf{k}} A_\mathbf{k}(t) e^{i \mathbf{k} \cdot \mathbf{x}},\end{aligned} \quad\quad\quad(5)
where summation is over all angular wave number triplets $\mathbf{k} = 2 \pi (k_1/\lambda_1, k_2/\lambda_2, k_3/\lambda_3)$. The Fourier coefficients $A_\mathbf{k} = {A_\mathbf{k}}^\mu \gamma_\mu$ are allowed to be complex valued, as is the resulting four vector $A$, and the associated bivector field $F$.
Fourier inversion, with $V = \lambda_1 \lambda_2 \lambda_3$, follows from
\begin{aligned}\delta_{\mathbf{k}', \mathbf{k}} =\frac{1}{{ V }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} e^{ i \mathbf{k}' \cdot \mathbf{x}} e^{-i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3,\end{aligned} \quad\quad\quad(6)
but only this orthogonality relationship and not the Fourier coefficients themselves
\begin{aligned}A_\mathbf{k} = \frac{1}{{ V }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} A(\mathbf{x}, t) e^{- i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3,\end{aligned} \quad\quad\quad(7)
will be of interest here. Evaluating the curl for this potential yields
\begin{aligned}F = \nabla \wedge A= \sum_{\mathbf{k}} \left( \frac{1}{{c}} \gamma^0 \wedge \dot{A}_\mathbf{k} + \gamma^m \wedge A_\mathbf{k} \frac{2 \pi i k_m}{\lambda_m} \right) e^{i \mathbf{k} \cdot \mathbf{x}}.\end{aligned} \quad\quad\quad(8)
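As an aside, the orthogonality relation (6) is easy to spot check numerically. The sketch below (not part of the derivation; the period, grid size, and mode numbers are all arbitrary choices) verifies the one-dimensional analogue with a simple Riemann sum over one full period:

```python
import numpy as np

# One dimensional analogue of the orthogonality relation (6):
# (1/L) \int_0^L e^{2 pi i n x / L} e^{-2 pi i m x / L} dx = delta_{n,m}
L = 2.5                              # arbitrary period
N = 4000                             # number of sample points
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N

def inner(n, m):
    f = np.exp(2j * np.pi * n * x / L) * np.exp(-2j * np.pi * m * x / L)
    return np.sum(f) * dx / L

same = inner(3, 3)                   # matching modes: expect 1
different = inner(3, 5)              # distinct modes: expect 0
```

A uniform sum over a full period is exact here (up to float rounding), since the off-diagonal case reduces to a geometric sum of roots of unity.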
Since the four vector potential has been expressed using an explicit split into time and space components it will be natural to re-express the bivector field in terms of scalar and (spatial) vector potentials, with the Fourier coefficients. Writing $\sigma_m = \gamma_m \gamma_0$ for the spatial basis vectors, ${A_\mathbf{k}}^0 = \phi_\mathbf{k}$, and $\mathbf{A}_\mathbf{k} = {A_\mathbf{k}}^m \sigma_m$, this is
\begin{aligned}A_\mathbf{k} = (\phi_\mathbf{k} + \mathbf{A}_\mathbf{k}) \gamma_0.\end{aligned} \quad\quad\quad(9)
The Faraday bivector field $F$ is then
\begin{aligned}F = \sum_\mathbf{k} \left( -\frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} - i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) e^{i \mathbf{k} \cdot \mathbf{x}}.\end{aligned} \quad\quad\quad(10)
This is now enough to express the energy momentum tensor $T(\gamma^\mu)$
\begin{aligned}T(\gamma^\mu) &= -\frac{\epsilon_0}{2} \sum_{\mathbf{k},\mathbf{k}'}\text{Real} \left(\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}'})}}^{*} + i \mathbf{k}' {{\phi_{\mathbf{k}'}}}^{*} - i \mathbf{k}' \wedge {{\mathbf{A}_{\mathbf{k}'}}}^{*} \right) \gamma^\mu \left( -\frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} - i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x}}\right).\end{aligned} \quad\quad\quad(11)
It will be more convenient to work with a scalar plus bivector (spatial vector) form of this tensor, and right multiplication by $\gamma_0$ produces such a split
\begin{aligned}T(\gamma^\mu) \gamma_0 = \left\langle{{T(\gamma^\mu) \gamma_0}}\right\rangle + \sigma_a \left\langle{{ \sigma_a T(\gamma^\mu) \gamma_0 }}\right\rangle\end{aligned} \quad\quad\quad(12)
The primary object of this treatment will be consideration of the $\mu = 0$ components of the tensor, which provide a split into energy density $T(\gamma^0) \cdot \gamma_0$, and Poynting vector (momentum density) $T(\gamma^0) \wedge \gamma_0$.
Our first step is to integrate (12) over the volume $V$. This integration and the orthogonality relationship (6), removes the exponentials, leaving
\begin{aligned}\int T(\gamma^\mu) \cdot \gamma_0&= -\frac{\epsilon_0 V}{2} \sum_{\mathbf{k}}\text{Real} \left\langle{{\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \gamma^\mu \left( -\frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} - i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) \gamma_0 }}\right\rangle \\ \int T(\gamma^\mu) \wedge \gamma_0&= -\frac{\epsilon_0 V}{2} \sum_{\mathbf{k}}\text{Real} \sigma_a \left\langle{{ \sigma_a\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \gamma^\mu \left( -\frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} - i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) \gamma_0}}\right\rangle \end{aligned} \quad\quad\quad(13)
Because $\gamma_0$ commutes with the spatial bivectors, and anticommutes with the spatial vectors, the remainder of the Dirac basis vectors in these expressions can be eliminated
\begin{aligned}\int T(\gamma^0) \cdot \gamma_0&= -\frac{\epsilon_0 V }{2} \sum_{\mathbf{k}}\text{Real} \left\langle{{\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle \end{aligned} \quad\quad\quad(15)
\begin{aligned}\int T(\gamma^0) \wedge \gamma_0&= -\frac{\epsilon_0 V}{2} \sum_{\mathbf{k}}\text{Real} \sigma_a \left\langle{{ \sigma_a\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle \end{aligned} \quad\quad\quad(16)
\begin{aligned}\int T(\gamma^m) \cdot \gamma_0&= \frac{\epsilon_0 V }{2} \sum_{\mathbf{k}}\text{Real} \left\langle{{\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \sigma_m\left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle \end{aligned} \quad\quad\quad(17)
\begin{aligned}\int T(\gamma^m) \wedge \gamma_0&= \frac{\epsilon_0 V}{2} \sum_{\mathbf{k}}\text{Real} \sigma_a \left\langle{{ \sigma_a\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \sigma_m\left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle.\end{aligned} \quad\quad\quad(18)
# Expanding the energy momentum tensor components.
## Energy
In (15) only the bivector-bivector and vector-vector products produce any scalar grades. Except for the bivector product this can be done by inspection. For that part we utilize the identity
\begin{aligned}\left\langle{{ (\mathbf{k} \wedge \mathbf{a}) (\mathbf{k} \wedge \mathbf{b}) }}\right\rangle= (\mathbf{a} \cdot \mathbf{k}) (\mathbf{b} \cdot \mathbf{k}) - \mathbf{k}^2 (\mathbf{a} \cdot \mathbf{b}).\end{aligned} \quad\quad\quad(19)
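For real spatial vectors this identity can also be written with cross products, since $\left\langle{{ (\mathbf{k} \wedge \mathbf{a}) (\mathbf{k} \wedge \mathbf{b}) }}\right\rangle = -(\mathbf{k} \times \mathbf{a}) \cdot (\mathbf{k} \times \mathbf{b})$. A numeric spot check on random real vectors (a sketch only, arbitrary seed):

```python
import numpy as np

rng = np.random.default_rng(0)
k, a, b = rng.standard_normal((3, 3))   # three random real 3-vectors

# Scalar grade of the bivector product, via the cross product correspondence:
# <(k ^ a)(k ^ b)> = -(k x a) . (k x b)
lhs = -np.dot(np.cross(k, a), np.cross(k, b))

# Right hand side of identity (19)
rhs = np.dot(a, k) * np.dot(b, k) - np.dot(k, k) * np.dot(a, b)
```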
This leaves for the energy $H = \int T(\gamma^0) \cdot \gamma_0$ in the volume
\begin{aligned}H = \frac{\epsilon_0 V}{2} \sum_\mathbf{k} \left(\frac{1}{{c^2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2 +\mathbf{k}^2 \left( {\left\lvert{\phi_\mathbf{k}}\right\rvert}^2 + {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 \right) - {\left\lvert{\mathbf{k} \cdot \mathbf{A}_\mathbf{k}}\right\rvert}^2+ \frac{2}{c} \text{Real} \left( i \phi_\mathbf{k} \, \mathbf{k} \cdot {{\dot{\mathbf{A}}_\mathbf{k}}}^{*} \right)\right)\end{aligned} \quad\quad\quad(20)
We are left with a completely real expression, and one without any explicit Geometric Algebra. This does not look like the Harmonic oscillator Hamiltonian that was expected. A gauge transformation to eliminate $\phi_\mathbf{k}$ and an observation about when $\mathbf{k} \cdot \mathbf{A}_\mathbf{k}$ equals zero will give us that, but first let’s get the mechanical jobs done, and reduce the products for the field momentum.
## Momentum
Now move on to (16). For the factors other than $\sigma_a$ only the vector-bivector products can contribute to the scalar product. We have two such products, one of the form
\begin{aligned}\sigma_a \left\langle{{ \sigma_a \mathbf{a} (\mathbf{k} \wedge \mathbf{c}) }}\right\rangle&=\sigma_a (\mathbf{c} \cdot \sigma_a) (\mathbf{a} \cdot \mathbf{k}) - \sigma_a (\mathbf{k} \cdot \sigma_a) (\mathbf{a} \cdot \mathbf{c}) \\ &=\mathbf{c} (\mathbf{a} \cdot \mathbf{k}) - \mathbf{k} (\mathbf{a} \cdot \mathbf{c}),\end{aligned}
and the other
\begin{aligned}\sigma_a \left\langle{{ \sigma_a (\mathbf{k} \wedge \mathbf{c}) \mathbf{a} }}\right\rangle&=\sigma_a (\mathbf{k} \cdot \sigma_a) (\mathbf{a} \cdot \mathbf{c}) - \sigma_a (\mathbf{c} \cdot \sigma_a) (\mathbf{a} \cdot \mathbf{k}) \\ &=\mathbf{k} (\mathbf{a} \cdot \mathbf{c}) - \mathbf{c} (\mathbf{a} \cdot \mathbf{k}).\end{aligned}
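Both reductions are forms of the dot product of a vector with a bivector, $\mathbf{a} \cdot (\mathbf{k} \wedge \mathbf{c}) = (\mathbf{a} \cdot \mathbf{k}) \mathbf{c} - (\mathbf{a} \cdot \mathbf{c}) \mathbf{k}$, which for real vectors equals $-\mathbf{a} \times (\mathbf{k} \times \mathbf{c})$ by the BAC-CAB rule. A quick numeric sketch (random real vectors, arbitrary seed):

```python
import numpy as np

rng = np.random.default_rng(1)
a, k, c = rng.standard_normal((3, 3))   # three random real 3-vectors

# Vector grade of a (k ^ c), as reduced above:
reduced = c * np.dot(a, k) - k * np.dot(a, c)

# Same thing via the BAC-CAB rule: a . (k ^ c) = -a x (k x c)
via_cross = -np.cross(a, np.cross(k, c))
```

The second reduction above, with the bivector on the left, is just the negative of this one.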
The momentum $\mathbf{P} = \int T(\gamma^0) \wedge \gamma_0$ in this volume follows by computation of
\begin{aligned}&\sigma_a \left\langle{{ \sigma_a\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle \\ &= i \mathbf{A}_\mathbf{k} \left( \left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} \right) \cdot \mathbf{k} \right) - i \mathbf{k} \left( \left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} \right) \cdot \mathbf{A}_\mathbf{k} \right) \\ &- i \mathbf{k} \left( \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} \right) \cdot {{\mathbf{A}_\mathbf{k}}}^{*} \right) + i {{\mathbf{A}_{\mathbf{k}}}}^{*} \left( \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} \right) \cdot \mathbf{k} \right)\end{aligned}
All the products are paired in nice conjugates, taking real parts, and premultiplication with $-\epsilon_0 V/2$ gives the desired result. Observe that two of these terms cancel, and another two have no real part. Those last are
\begin{aligned}-\frac{\epsilon_0 V \mathbf{k}}{2 c} \text{Real} \left( i \left( {{\dot{\mathbf{A}}_\mathbf{k}}}^{*} \cdot \mathbf{A}_\mathbf{k} + \dot{\mathbf{A}}_\mathbf{k} \cdot {{\mathbf{A}_\mathbf{k}}}^{*} \right) \right)&=-\frac{\epsilon_0 V \mathbf{k}}{2 c} \text{Real} \left( i \frac{d}{dt} \left( \mathbf{A}_\mathbf{k} \cdot {{\mathbf{A}_\mathbf{k}}}^{*} \right) \right)\end{aligned}
Since $\mathbf{A}_\mathbf{k} \cdot {{\mathbf{A}_\mathbf{k}}}^{*} = {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2$ is real, this quantity is purely imaginary, and taking its real part gives zero, leaving just
\begin{aligned}\mathbf{P} &= \epsilon_0 V \sum_{\mathbf{k}}\text{Real} \left(i \mathbf{A}_\mathbf{k} \left( \frac{1}{{c}} {{\dot{\mathbf{A}}_\mathbf{k}}}^{*} \cdot \mathbf{k} \right)+ \mathbf{k}^2 \phi_\mathbf{k} {{ \mathbf{A}_\mathbf{k} }}^{*}- \mathbf{k} {{\phi_\mathbf{k}}}^{*} (\mathbf{k} \cdot \mathbf{A}_\mathbf{k})\right)\end{aligned} \quad\quad\quad(21)
I am not sure why exactly, but I actually expected a term with ${\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2$, quadratic in the vector potential. Is there a mistake above?
## Gauge transformation to simplify the Hamiltonian.
In (20) something that looked like the Harmonic oscillator was expected. On the surface this does not appear to be such a beast. Exploitation of gauge freedom is required to make the simplification that puts things into the Harmonic oscillator form.
If we are to change our four vector potential $A \rightarrow A + \nabla \psi$, then Maxwell’s equation takes the form
\begin{aligned}J/\epsilon_0 c = \nabla (\nabla \wedge (A + \nabla \psi)) = \nabla (\nabla \wedge A) + \nabla (\underbrace{\nabla \wedge \nabla \psi}_{=0}),\end{aligned} \quad\quad\quad(22)
which is unchanged by the addition of the gradient to any original potential solution to the equation. In coordinates this is a transformation of the form
\begin{aligned}A^\mu \rightarrow A^\mu + \partial^\mu \psi,\end{aligned} \quad\quad\quad(23)
and we can use this to force any one of the potential coordinates to zero. For this problem, it appears that it is desirable to seek a $\psi$ such that $A^0 + \partial_0 \psi = 0$. That is
\begin{aligned}\sum_\mathbf{k} \phi_\mathbf{k}(t) e^{i \mathbf{k} \cdot \mathbf{x}} + \frac{1}{{c}} \partial_t \psi = 0.\end{aligned} \quad\quad\quad(24)
Or,
\begin{aligned}\psi(\mathbf{x},t) = \psi(\mathbf{x},0) - c \sum_\mathbf{k} e^{i \mathbf{k} \cdot \mathbf{x}} \int_{\tau=0}^t \phi_\mathbf{k}(\tau) \,d\tau.\end{aligned} \quad\quad\quad(25)
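For a single Fourier mode this gauge function can be checked symbolically: with $\psi = -c e^{i \mathbf{k} \cdot \mathbf{x}} \int_0^t \phi_\mathbf{k}(\tau) d\tau$ the requirement (24) is satisfied. A sympy sketch, writing the scalar symbol $k_x$ as a stand-in for the phase $\mathbf{k} \cdot \mathbf{x}$:

```python
import sympy as sp

t, tau, c = sp.symbols('t tau c', positive=True)
kx = sp.symbols('k_x', real=True)        # stands in for the phase k . x
phi = sp.Function('phi')                 # one Fourier coefficient phi_k(t)

# Candidate single-mode gauge function psi(x, t)
psi = -c * sp.exp(sp.I * kx) * sp.Integral(phi(tau), (tau, 0, t))

# Requirement (24) for this mode: phi_k(t) e^{i k.x} + (1/c) dpsi/dt = 0
residual = sp.simplify(phi(t) * sp.exp(sp.I * kx) + sp.diff(psi, t) / c)
```

Differentiating the integral with respect to its upper limit returns $\phi_\mathbf{k}(t)$, so the residual vanishes identically.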
With such a transformation, the $\phi_\mathbf{k}$ and $\dot{\mathbf{A}}_\mathbf{k}$ cross term in the Hamiltonian (20) vanishes, as does the $\phi_\mathbf{k}$ term in the four vector square of the last term, leaving just
\begin{aligned}H = \frac{\epsilon_0}{c^2} V \sum_\mathbf{k}\left(\frac{1}{{2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{1}{{2}} \Bigl((c \mathbf{k})^2 {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 - {\left\lvert{ c \mathbf{k} \cdot \mathbf{A}_\mathbf{k}}\right\rvert}^2\Bigr)\right).\end{aligned} \quad\quad\quad(26)
Additionally, wedging (5) with $\gamma_0$ now does not lose any information so our potential Fourier series is reduced to just
\begin{aligned}\mathbf{A} &= \sum_{\mathbf{k}} \mathbf{A}_\mathbf{k}(t) e^{i \mathbf{k} \cdot \mathbf{x}} \\ \mathbf{A}_\mathbf{k} &= \frac{1}{{ V }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} \mathbf{A}(\mathbf{x}, t) e^{-i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3.\end{aligned} \quad\quad\quad(27)
The desired harmonic oscillator form would be had in (26) if it were not for the $\mathbf{k} \cdot \mathbf{A}_\mathbf{k}$ term. Does that vanish? Returning to Maxwell’s equation should answer that question, but first it has to be expressed in terms of the vector potential. While $\mathbf{A} = A \wedge \gamma_0$, the lack of an $A^0$ component means that this can be inverted as
\begin{aligned}A = \mathbf{A} \gamma_0 = -\gamma_0 \mathbf{A}.\end{aligned} \quad\quad\quad(29)
The gradient can also be factored into scalar and spatial vector components
\begin{aligned}\nabla = \gamma^0 ( \partial_0 + \boldsymbol{\nabla} ) = ( \partial_0 - \boldsymbol{\nabla} ) \gamma^0.\end{aligned} \quad\quad\quad(30)
So, with this $A^0 = 0$ gauge choice the bivector field $F$ is
\begin{aligned}F = \nabla \wedge A = \frac{1}{{2}} \left( \stackrel{ \rightarrow }{\nabla} A - A \stackrel{ \leftarrow }{\nabla} \right) \end{aligned} \quad\quad\quad(31)
From the left the gradient action on $A$ is
\begin{aligned}\stackrel{ \rightarrow }{\nabla} A &= ( \partial_0 - \boldsymbol{\nabla} ) \gamma^0 (-\gamma_0 \mathbf{A}) \\ &= ( -\partial_0 + \stackrel{ \rightarrow }{\boldsymbol{\nabla}} ) \mathbf{A},\end{aligned}
and from the right
\begin{aligned}A \stackrel{ \leftarrow }{\nabla}&= \mathbf{A} \gamma_0 \gamma^0 ( \partial_0 + \boldsymbol{\nabla} ) \\ &= \mathbf{A} ( \partial_0 + \boldsymbol{\nabla} ) \\ &= \partial_0 \mathbf{A} + \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}} \end{aligned}
Taking the difference we have
\begin{aligned}F &= \frac{1}{{2}} \Bigl( -\partial_0 \mathbf{A} + \stackrel{ \rightarrow }{\boldsymbol{\nabla}} \mathbf{A} - \partial_0 \mathbf{A} - \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}} \Bigr).\end{aligned}
Which is just
\begin{aligned}F = -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A}.\end{aligned} \quad\quad\quad(32)
For this vacuum case, premultiplication of Maxwell’s equation by $\gamma_0$ gives
\begin{aligned}0 &= \gamma_0 \nabla ( -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &= (\partial_0 + \boldsymbol{\nabla})( -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} - \partial_0 \boldsymbol{\nabla} \cdot \mathbf{A} - \partial_0 \boldsymbol{\nabla} \wedge \mathbf{A} + \partial_0 ( \boldsymbol{\nabla} \wedge \mathbf{A} ) + \underbrace{\boldsymbol{\nabla} \cdot ( \boldsymbol{\nabla} \wedge \mathbf{A} ) }_{\boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A})}+ \underbrace{\boldsymbol{\nabla} \wedge ( \boldsymbol{\nabla} \wedge \mathbf{A} )}_{=0} \\ \end{aligned}
The spatial bivector and trivector grades are all zero. Equating the remaining scalar and vector components to zero separately yields a pair of equations in $\mathbf{A}$
\begin{aligned}0 &= \partial_t (\boldsymbol{\nabla} \cdot \mathbf{A}) \\ 0 &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} + \boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}) \end{aligned} \quad\quad\quad(33)
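The identity used in the reduction above, $\boldsymbol{\nabla} \cdot (\boldsymbol{\nabla} \wedge \mathbf{A}) = \boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A})$, is the familiar cross product identity $\boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{A}) = \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}) - \boldsymbol{\nabla}^2 \mathbf{A}$ in GA dress. A symbolic spot check on an arbitrarily chosen polynomial field (a sketch only):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
# Arbitrary sample field for the spot check.
A = sp.Matrix([x**2 * y, x * sp.sin(z), y * z**2])

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def div(F):
    return sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

laplacian = sp.Matrix([div(grad(A[i])) for i in range(3)])
check = sp.simplify(curl(curl(A)) - (grad(div(A)) - laplacian))
```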
If the divergence of the vector potential is constant we have just a wave equation. Let’s see what that divergence is with the assumed Fourier representation
\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{A} &=\sum_{\mathbf{k} \ne (0,0,0)} {\mathbf{A}_\mathbf{k}}^m 2 \pi i \frac{k_m}{\lambda_m} e^{i \mathbf{k} \cdot \mathbf{x}} \\ &=i \sum_{\mathbf{k} \ne (0,0,0)} (\mathbf{A}_\mathbf{k} \cdot \mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} \\ &=i \sum_\mathbf{k} (\mathbf{A}_\mathbf{k} \cdot \mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} \end{aligned}
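The per-mode divergence used in this sum, $\boldsymbol{\nabla} \cdot (\mathbf{A}_\mathbf{k} e^{i \mathbf{k} \cdot \mathbf{x}}) = i (\mathbf{A}_\mathbf{k} \cdot \mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}}$, can also be spot checked symbolically (arbitrary sample values, a sketch only):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
k = sp.Matrix([1, 2, 3])        # sample angular wave vector
Ak = sp.Matrix([4, -1, 2])      # sample (x-independent) Fourier coefficient

phase = sp.exp(sp.I * (k[0] * x + k[1] * y + k[2] * z))
A = Ak * phase                  # one Fourier mode of the vector potential

divergence = sp.diff(A[0], x) + sp.diff(A[1], y) + sp.diff(A[2], z)
residual = sp.simplify(divergence - sp.I * k.dot(Ak) * phase)
```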
Since $\mathbf{A}_\mathbf{k} = \mathbf{A}_\mathbf{k}(t)$, there are two ways to have $\partial_t (\boldsymbol{\nabla} \cdot \mathbf{A}) = 0$: for each $\mathbf{k}$ we require either $\mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0$ or $\mathbf{A}_\mathbf{k} = \text{constant}$. The constant $\mathbf{A}_\mathbf{k}$ solution appears to represent a standing spatial wave with no time dependence. Is that of any interest?
The more interesting seeming case is where we have some non-static time varying state. In this case, if $\mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0$, the second of these Maxwell’s equations is just the vector potential wave equation, since the divergence is zero. That is
\begin{aligned}0 &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} + \boldsymbol{\nabla}^2 \mathbf{A} \end{aligned} \quad\quad\quad(35)
Solving this isn’t really what is of interest, since the objective was just to determine if the divergence could be assumed to be zero. This shows then, that if the transverse solution to Maxwell’s equation is picked, the Hamiltonian for this field, with this gauge choice, becomes
\begin{aligned}H = \frac{\epsilon_0}{c^2} V \sum_\mathbf{k}\left(\frac{1}{{2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{1}{{2}} (c \mathbf{k})^2 {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 \right).\end{aligned} \quad\quad\quad(36)
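Each mode in (36) is an independent oscillator with angular frequency $c {\left\lvert{\mathbf{k}}\right\rvert}$. A quick numeric check (arbitrary amplitude and frequency, one real mode) that the per-mode energy is constant for the solution $\mathbf{A}_\mathbf{k}(t) = \mathbf{A}_0 \cos(c {\left\lvert{\mathbf{k}}\right\rvert} t)$:

```python
import numpy as np

ck = 2.0                         # c |k|, arbitrary
A0 = 1.5                         # arbitrary real mode amplitude
t = np.linspace(0.0, 10.0, 1001)

A = A0 * np.cos(ck * t)          # solves d^2 A / dt^2 = -(c|k|)^2 A
Adot = -A0 * ck * np.sin(ck * t)

# Per-mode energy, dropping the constant epsilon_0 V / c^2 prefactor
H = 0.5 * Adot**2 + 0.5 * ck**2 * A**2
spread = H.max() - H.min()       # should vanish: energy is conserved
```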
How does the gauge choice alter the Poynting vector? From (21), all the $\phi_\mathbf{k}$ dependence in that integrated momentum density is lost
\begin{aligned}\mathbf{P} &= \epsilon_0 V \sum_{\mathbf{k}}\text{Real} \left(i \mathbf{A}_\mathbf{k} \left( \frac{1}{{c}} {{\dot{\mathbf{A}}_\mathbf{k}}}^{*} \cdot \mathbf{k} \right)\right).\end{aligned} \quad\quad\quad(37)
The $\mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0$ solutions to Maxwell’s equation are seen to result in zero momentum for this infinite periodic field. My expectation was something of the form $c \mathbf{P} = H \hat{\mathbf{k}}$, so intuition is either failing me, or my math is failing me, or this contrived periodic field solution leads to trouble.
# Conclusions and followup.
The objective was met: a reproduction of Bohm’s Harmonic oscillator result using a complex exponential Fourier series instead of separate sines and cosines.
The reason for Bohm’s choice to fix zero divergence as the gauge choice upfront is now clear: that automatically cuts complexity from the results. Working this problem with complex valued potentials, while also using the Geometric Algebra formulation, probably made the work a bit more difficult, since it meant blundering through both simultaneously instead of one at a time.
This was an interesting exercise though, since doing it this way I am able to understand all the intermediate steps. Bohm employed some subtler argumentation to eliminate the scalar potential $\phi$ upfront, and I have to admit I did not follow his logic, whereas blindly following where the math leads all makes sense to me.
As a bit of followup, I’d like to consider the constant $\mathbf{A}_\mathbf{k}$ case in more detail, and any implications of the freedom to pick $\mathbf{A}_0$.
The general calculation of $T^{\mu\nu}$ for the assumed Fourier solution should be possible too, but was not attempted. Doing that general calculation with a four dimensional Fourier series is likely tidier than working with scalar and spatial variables as done here.
Now that the math is out of the way (except possibly for the momentum which doesn’t seem right), some discussion of implications and applications is also in order. My preference is to let the math sink-in a bit first and mull over the momentum issues at leisure.
# References
[2] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.
## Electrodynamic field energy for vacuum.
Posted by peeterjoot on December 19, 2009
# Motivation.
We now know how to formulate the energy momentum tensor for complex vector fields (ie. phasors) in the Geometric Algebra formalism. To recap, for the field $F = \mathbf{E} + I c \mathbf{B}$, where $\mathbf{E}$ and $\mathbf{B}$ may be complex vectors we have for Maxwell’s equation
\begin{aligned}\nabla F = J/\epsilon_0 c.\end{aligned} \quad\quad\quad(1)
This is a doubly complex representation, with the four vector pseudoscalar $I = \gamma_0 \gamma_1 \gamma_2 \gamma_3$ acting as a non-commutative imaginary, as well as real and imaginary parts for the electric and magnetic field vectors. We take the real part (not the scalar part) of any bivector solution $F$ of Maxwell’s equation as the actual solution, but allow ourselves the freedom to work with the complex phasor representation when convenient. In these phasor vectors, the imaginary $i$, as in $\mathbf{E} = \text{Real}(\mathbf{E}) + i \text{Imag}(\mathbf{E})$, is a commuting imaginary, commuting with all the multivector elements in the algebra.
The real valued, four vector, energy momentum tensor $T(a)$ was found to be
\begin{aligned}T(a) = \frac{\epsilon_0}{4} \Bigl( {{F}}^{*} a \tilde{F} + \tilde{F} a {{F}}^{*} \Bigr) = -\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} a F \Bigr).\end{aligned} \quad\quad\quad(2)
To supply some context that gives meaning to this tensor the associated conservation relationship was found to be
\begin{aligned}\nabla \cdot T(a) &= a \cdot \frac{1}{{ c }} \text{Real} \left( J \cdot {{F}}^{*} \right).\end{aligned} \quad\quad\quad(3)
and in particular for $a = \gamma^0$, this four vector divergence takes the form
\begin{aligned}\frac{\partial {}}{\partial {t}}\frac{\epsilon_0}{2}(\mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*})+ \boldsymbol{\nabla} \cdot \frac{1}{{\mu_0}} \text{Real} (\mathbf{E} \times {\mathbf{B}}^{*} )+ \text{Real}( \mathbf{J} \cdot {\mathbf{E}}^{*} ) = 0,\end{aligned} \quad\quad\quad(4)
relating the energy term $T^{00} = T(\gamma^0) \cdot \gamma^0$ and the Poynting spatial vector $T(\gamma^0) \wedge \gamma^0$ with the current density and electric field product that constitutes the energy portion of the Lorentz force density.
Let’s apply this to calculating the energy associated with the field that is periodic within a rectangular prism as done by Bohm in [1]. We do not necessarily need the Geometric Algebra formalism for this calculation, but this will be a fun way to attempt it.
# Setup
Let’s assume a Fourier representation for the four vector potential $A$ for the field $F = \nabla \wedge A$. That is
\begin{aligned}A = \sum_{\mathbf{k}} A_\mathbf{k}(t) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}},\end{aligned} \quad\quad\quad(5)
where summation is over all wave number triplets $\mathbf{k} = (p/\lambda_1,q/\lambda_2,r/\lambda_3)$. The Fourier coefficients $A_\mathbf{k} = {A_\mathbf{k}}^\mu \gamma_\mu$ are allowed to be complex valued, as is the resulting four vector $A$, and the associated bivector field $F$.
Fourier inversion follows from
\begin{aligned}\delta_{\mathbf{k}', \mathbf{k}} =\frac{1}{{ \lambda_1 \lambda_2 \lambda_3 }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} e^{2 \pi i \mathbf{k}' \cdot \mathbf{x}} e^{-2 \pi i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3,\end{aligned} \quad\quad\quad(6)
but only this orthogonality relationship, and not the Fourier coefficients themselves,
\begin{aligned}A_\mathbf{k} = \frac{1}{{ \lambda_1 \lambda_2 \lambda_3 }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} A(\mathbf{x}, t) e^{-2 \pi i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3,\end{aligned} \quad\quad\quad(7)
will be of interest here. Evaluating the curl for this potential yields
\begin{aligned}F = \nabla \wedge A= \sum_{\mathbf{k}} \left( \frac{1}{{c}} \gamma^0 \wedge \dot{A}_\mathbf{k} + \sum_{m=1}^3 \gamma^m \wedge A_\mathbf{k} \frac{2 \pi i k_m}{\lambda_m} \right) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}}.\end{aligned} \quad\quad\quad(8)
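As a quick numerical aside (my own sanity check, not part of the derivation), the orthogonality relation (6) that underpins the Fourier inversion can be verified in one dimension with a discrete Riemann sum; the box length and integer wave numbers below are arbitrary choices:

```python
import numpy as np

# Verify (1/L) ∫_0^L exp(2πi k' x/L) exp(-2πi k x/L) dx = δ_{k',k}
# in one dimension.  L and the integer wave numbers are arbitrary choices.
L = 2.5
N = 4096
x = np.linspace(0.0, L, N, endpoint=False)  # uniform grid, periodic integrand

def inner(kp, k):
    # left Riemann sum; exact for these periodic exponentials on a uniform grid
    return np.mean(np.exp(2j * np.pi * kp * x / L) * np.exp(-2j * np.pi * k * x / L))

print(abs(inner(3, 3)))  # 1.0 (k' = k)
print(abs(inner(3, 5)))  # ~0  (k' ≠ k)
```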
We can now form the energy density
\begin{aligned}U = T(\gamma^0) \cdot \gamma^0=-\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} \gamma^0 F \gamma^0 \Bigr).\end{aligned} \quad\quad\quad(9)
With implied summation over all repeated integer indexes (even without matching uppers and lowers), this is
\begin{aligned}U =-\frac{\epsilon_0}{2} \sum_{\mathbf{k}', \mathbf{k}} \text{Real} \left\langle{{\left( \frac{1}{{c}} \gamma^0 \wedge {{\dot{A}_{\mathbf{k}'}}}^{*} - \gamma^m \wedge {{A_{\mathbf{k}'}}}^{*} \frac{2 \pi i k_m'}{\lambda_m} \right) e^{-2 \pi i \mathbf{k}' \cdot \mathbf{x}}\gamma^0\left( \frac{1}{{c}} \gamma^0 \wedge \dot{A}_\mathbf{k} + \gamma^n \wedge A_\mathbf{k} \frac{2 \pi i k_n}{\lambda_n} \right) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}}\gamma^0}}\right\rangle.\end{aligned} \quad\quad\quad(10)
The grade selection used here doesn’t change the result since we already have a scalar, but it conveniently filters out any higher order products that cancel anyway. Integrating over the volume element and taking advantage of the orthogonality relationship (6), the exponentials are removed, leaving the energy contained in the volume
\begin{aligned}H = -\frac{\epsilon_0 \lambda_1 \lambda_2 \lambda_3}{2}\sum_{\mathbf{k}} \text{Real} \left\langle{{\left( \frac{1}{{c}} \gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} - \gamma^m \wedge {{A_{\mathbf{k}}}}^{*} \frac{2 \pi i k_m}{\lambda_m} \right) \gamma^0\left( \frac{1}{{c}} \gamma^0 \wedge \dot{A}_\mathbf{k} + \gamma^n \wedge A_\mathbf{k} \frac{2 \pi i k_n}{\lambda_n} \right) \gamma^0}}\right\rangle.\end{aligned} \quad\quad\quad(11)
# First reduction of the Hamiltonian.
Let’s take the products involved one at a time, evaluating each; later we add them and take real parts as required. The products are
\begin{aligned}\frac{1}{{c^2}}\left\langle{{ (\gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} ) \gamma^0 (\gamma^0 \wedge \dot{A}_\mathbf{k}) \gamma^0 }}\right\rangle &=-\frac{1}{{c^2}}\left\langle{{ (\gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} ) (\gamma^0 \wedge \dot{A}_\mathbf{k}) }}\right\rangle \end{aligned} \quad\quad\quad(12)
\begin{aligned}- \frac{2 \pi i k_m}{c \lambda_m} \left\langle{{ (\gamma^m \wedge {{A_{\mathbf{k}}}}^{*} ) \gamma^0 ( \gamma^0 \wedge \dot{A}_\mathbf{k} ) \gamma^0}}\right\rangle &=\frac{2 \pi i k_m}{c \lambda_m} \left\langle{{ (\gamma^m \wedge {{A_{\mathbf{k}}}}^{*} ) ( \gamma^0 \wedge \dot{A}_\mathbf{k} ) }}\right\rangle \end{aligned} \quad\quad\quad(13)
\begin{aligned}\frac{2 \pi i k_n}{c \lambda_n} \left\langle{{ ( \gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} ) \gamma^0 ( \gamma^n \wedge A_\mathbf{k} ) \gamma^0}}\right\rangle &=-\frac{2 \pi i k_n}{c \lambda_n} \left\langle{{ ( \gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} ) ( \gamma^n \wedge A_\mathbf{k} ) }}\right\rangle \end{aligned} \quad\quad\quad(14)
\begin{aligned}-\frac{4 \pi^2 k_m k_n}{\lambda_m \lambda_n}\left\langle{{ (\gamma^m \wedge {{A_{\mathbf{k}}}}^{*} ) \gamma^0(\gamma^n \wedge A_\mathbf{k} ) \gamma^0}}\right\rangle. &\end{aligned} \quad\quad\quad(15)
The expectation is to obtain a Hamiltonian for the field that has the structure of harmonic oscillators, where the middle two products would have to be zero or sum to zero or have real parts that sum to zero. The first is expected to contain only products of ${\left\lvert{{\dot{A}_\mathbf{k}}^m}\right\rvert}^2$, and the last only products of ${\left\lvert{{A_\mathbf{k}}^m}\right\rvert}^2$.
While one might initially guess that (13) and (14) cancel, this isn’t so obviously the case. The use of cyclic permutation of multivectors within the scalar grade selection operator $\left\langle{{A B}}\right\rangle = \left\langle{{B A}}\right\rangle$, plus a change of dummy summation indexes in one of the two, shows that this sum is of the form $Z + {{Z}}^{*}$. Such a sum is intrinsically real, so we can neglect one of the two and double the other, but we will still be required to show that the real part of either is zero.
Let’s reduce these one at a time, starting with (12), and write $\dot{A}_\mathbf{k} = \kappa$ temporarily
\begin{aligned}\left\langle{{ (\gamma^0 \wedge {{\kappa}}^{*} ) (\gamma^0 \wedge \kappa) }}\right\rangle &={\kappa^m}^{{*}} \kappa^{m'}\left\langle{{ \gamma^0 \gamma_m \gamma^0 \gamma_{m'} }}\right\rangle \\ &=-{\kappa^m}^{{*}} \kappa^{m'}\left\langle{{ \gamma_m \gamma_{m'} }}\right\rangle \\ &={\kappa^m}^{{*}} \kappa^{m'}\delta_{m m'}.\end{aligned}
So the first of our Hamiltonian terms is
\begin{aligned}\frac{\epsilon_0 \lambda_1 \lambda_2 \lambda_3}{2 c^2}\left\langle{{ (\gamma^0 \wedge {{\dot{A}_\mathbf{k}}}^{*} ) (\gamma^0 \wedge \dot{A}_\mathbf{k}) }}\right\rangle &=\frac{\epsilon_0 \lambda_1 \lambda_2 \lambda_3}{2 c^2}{\left\lvert{{{\dot{A}}_{\mathbf{k}}}^m}\right\rvert}^2.\end{aligned} \quad\quad\quad(16)
Note that summation over $m$ is still implied here, so we’d be better off with a spatial vector representation of the Fourier coefficients $\mathbf{A}_\mathbf{k} = A_\mathbf{k} \wedge \gamma_0$. With such a notation, this contribution to the Hamiltonian is
\begin{aligned}\frac{\epsilon_0 \lambda_1 \lambda_2 \lambda_3}{2 c^2} \dot{\mathbf{A}}_\mathbf{k} \cdot {{\dot{\mathbf{A}}_\mathbf{k}}}^{*}.\end{aligned} \quad\quad\quad(17)
To reduce (13) and (14), this time writing $\kappa = A_\mathbf{k}$, we can start with just the scalar selection
\begin{aligned}\left\langle{{ (\gamma^m \wedge {{\kappa}}^{*} ) ( \gamma^0 \wedge \dot{\kappa} ) }}\right\rangle &=\Bigl( \gamma^m {{(\kappa^0)}}^{*} - {{\kappa}}^{*} \underbrace{(\gamma^m \cdot \gamma^0)}_{=0} \Bigr) \cdot \dot{\kappa} \\ &={{(\kappa^0)}}^{*} \dot{\kappa}^m\end{aligned}
Thus the contribution to the Hamiltonian from (13) and (14) is
\begin{aligned}\frac{2 \epsilon_0 \lambda_1 \lambda_2 \lambda_3 \pi k_m}{c \lambda_m} \text{Real} \Bigl( i {{(A_\mathbf{k}^0)}}^{*} \dot{A_\mathbf{k}}^m \Bigr)=\frac{2 \pi \epsilon_0 \lambda_1 \lambda_2 \lambda_3}{c} \text{Real} \Bigl( i {{(A_\mathbf{k}^0)}}^{*} \mathbf{k} \cdot \dot{\mathbf{A}}_\mathbf{k} \Bigr).\end{aligned} \quad\quad\quad(18)
Most definitely not zero in general. Our final expansion (15) is the messiest. Again with $A_\mathbf{k} = \kappa$ for short, the grade selection of this term in coordinates is
\begin{aligned}\left\langle{{ (\gamma^m \wedge {{\kappa}}^{*} ) \gamma^0 (\gamma^n \wedge \kappa ) \gamma^0 }}\right\rangle&=- {{\kappa_\mu}}^{*} \kappa^\nu \left\langle{{ (\gamma^m \wedge \gamma^\mu) \gamma^0 (\gamma_n \wedge \gamma_\nu) \gamma^0 }}\right\rangle\end{aligned} \quad\quad\quad(19)
Expanding this out yields
\begin{aligned}\left\langle{{ (\gamma^m \wedge {{\kappa}}^{*} ) \gamma^0 (\gamma^n \wedge \kappa ) \gamma^0 }}\right\rangle&=- ( {\left\lvert{\kappa_0}\right\rvert}^2 - {\left\lvert{\kappa^a}\right\rvert}^2 ) \delta_{m n} + {{\kappa^n}}^{*} \kappa^m.\end{aligned} \quad\quad\quad(20)
The contribution to the Hamiltonian from this, with $\phi_\mathbf{k} = A^0_\mathbf{k}$, is then
\begin{aligned}2 \pi^2 \epsilon_0 \lambda_1 \lambda_2 \lambda_3 \Bigl(-\mathbf{k}^2 {{\phi_\mathbf{k}}}^{*} \phi_\mathbf{k} + \mathbf{k}^2 ({{\mathbf{A}_\mathbf{k}}}^{*} \cdot \mathbf{A}_\mathbf{k})+ (\mathbf{k} \cdot {{\mathbf{A}_\mathbf{k}}}^{*}) (\mathbf{k} \cdot \mathbf{A}_\mathbf{k})\Bigr).\end{aligned} \quad\quad\quad(21)
A final reassembly of the Hamiltonian from the parts (17) and (18) and (21) is then
\begin{aligned}H = \epsilon_0 \lambda_1 \lambda_2 \lambda_3 \sum_\mathbf{k}\left(\frac{1}{{2 c^2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{2 \pi}{c} \text{Real} \Bigl( i {{ \phi_\mathbf{k} }}^{*} (\mathbf{k} \cdot \dot{\mathbf{A}}_\mathbf{k}) \Bigr)+2 \pi^2 \Bigl(\mathbf{k}^2 ( -{\left\lvert{\phi_\mathbf{k}}\right\rvert}^2 + {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 ) + {\left\lvert{\mathbf{k} \cdot \mathbf{A}_\mathbf{k}}\right\rvert}^2\Bigr)\right).\end{aligned} \quad\quad\quad(22)
This is finally reduced to a completely real expression, and one without any explicit Geometric Algebra. All the four vector Fourier potentials have been written out explicitly in terms of the spacetime split $A_\mathbf{k} = (\phi_\mathbf{k}, \mathbf{A}_\mathbf{k})$, which is natural since an explicit time and space split was the starting point.
# Gauge transformation to simplify the Hamiltonian.
While (22) has considerably simpler form than (11), what was expected was something that looked like the Harmonic oscillator. On the surface this does not appear to be such a beast. Exploitation of gauge freedom is required to make the simplification that puts things into the Harmonic oscillator form.
If we are to change our four vector potential $A \rightarrow A + \nabla \psi$, then Maxwell’s equation takes the form
\begin{aligned}J/\epsilon_0 c = \nabla (\nabla \wedge (A + \nabla \psi)) = \nabla (\nabla \wedge A) + \nabla (\underbrace{\nabla \wedge \nabla \psi}_{=0}),\end{aligned} \quad\quad\quad(23)
which is unchanged by the addition of the gradient to any original potential solution to the equation. In coordinates this is a transformation of the form
\begin{aligned}A^\mu \rightarrow A^\mu + \partial^\mu \psi,\end{aligned} \quad\quad\quad(24)
and we can use this to force any one of the potential coordinates to zero. For this problem, it appears that it is desirable to seek a $\psi$ such that $A^0 + \partial_0 \psi = 0$. That is
\begin{aligned}\sum_\mathbf{k} \phi_\mathbf{k}(t) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}} + \frac{1}{{c}} \partial_t \psi = 0.\end{aligned} \quad\quad\quad(25)
Or,
\begin{aligned}\psi(\mathbf{x},t) = \psi(\mathbf{x},0) -\frac{1}{{c}} \sum_\mathbf{k} e^{2 \pi i \mathbf{k} \cdot \mathbf{x}} \int_{\tau=0}^t \phi_\mathbf{k}(\tau) d\tau.\end{aligned} \quad\quad\quad(26)
With such a transformation, the $\phi_\mathbf{k}$ and $\dot{\mathbf{A}}_\mathbf{k}$ cross term in the Hamiltonian (22) vanishes, as does the $\phi_\mathbf{k}$ term in the four vector square of the last term, leaving just
\begin{aligned}H = \frac{\epsilon_0}{c^2} \lambda_1 \lambda_2 \lambda_3 \sum_\mathbf{k}\left(\frac{1}{{2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{1}{{2}} \Bigl((2 \pi c \mathbf{k})^2 {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 + {\left\lvert{ ( 2 \pi c \mathbf{k}) \cdot \mathbf{A}_\mathbf{k}}\right\rvert}^2\Bigr)\right).\end{aligned} \quad\quad\quad(27)
Additionally, wedging (5) with $\gamma_0$ now does not lose any information, so our potential Fourier series is reduced to just
\begin{aligned}\mathbf{A} &= \sum_{\mathbf{k}} \mathbf{A}_\mathbf{k}(t) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}} \\ \mathbf{A}_\mathbf{k} &= \frac{1}{{ \lambda_1 \lambda_2 \lambda_3 }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} \mathbf{A}(\mathbf{x}, t) e^{-2 \pi i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3.\end{aligned} \quad\quad\quad(28)
The desired harmonic oscillator form would be had in (27) if it were not for the $\mathbf{k} \cdot \mathbf{A}_\mathbf{k}$ term. Does that vanish? Returning to Maxwell’s equation should answer that question, but first it has to be expressed in terms of the vector potential. While $\mathbf{A} = A \wedge \gamma_0$, the lack of an $A^0$ component means that this can be inverted as
\begin{aligned}A = \mathbf{A} \gamma_0 = -\gamma_0 \mathbf{A}.\end{aligned} \quad\quad\quad(30)
The gradient can also be factored into scalar and spatial vector components
\begin{aligned}\nabla = \gamma^0 ( \partial_0 + \boldsymbol{\nabla} ) = ( \partial_0 - \boldsymbol{\nabla} ) \gamma^0.\end{aligned} \quad\quad\quad(31)
So, with this $A^0 = 0$ gauge choice the bivector field $F$ is
\begin{aligned}F = \nabla \wedge A = \frac{1}{{2}} \left( \stackrel{ \rightarrow }{\nabla} A - A \stackrel{ \leftarrow }{\nabla} \right) \end{aligned} \quad\quad\quad(32)
From the left the gradient action on $A$ is
\begin{aligned}\stackrel{ \rightarrow }{\nabla} A &= ( \partial_0 - \boldsymbol{\nabla} ) \gamma^0 (-\gamma_0 \mathbf{A}) \\ &= ( -\partial_0 + \stackrel{ \rightarrow }{\boldsymbol{\nabla}} ) \mathbf{A},\end{aligned}
and from the right
\begin{aligned}A \stackrel{ \leftarrow }{\nabla}&= \mathbf{A} \gamma_0 \gamma^0 ( \partial_0 + \boldsymbol{\nabla} ) \\ &= \mathbf{A} ( \partial_0 + \boldsymbol{\nabla} ) \\ &= \partial_0 \mathbf{A} + \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}} \end{aligned}
Taking the difference we have
\begin{aligned}F &= \frac{1}{{2}} \Bigl( -\partial_0 \mathbf{A} + \stackrel{ \rightarrow }{\boldsymbol{\nabla}} \mathbf{A} - \partial_0 \mathbf{A} - \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}} \Bigr).\end{aligned}
Which is just
\begin{aligned}F = -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A}.\end{aligned} \quad\quad\quad(33)
For this vacuum case, premultiplication of Maxwell’s equation by $\gamma_0$ gives
\begin{aligned}0 &= \gamma_0 \nabla ( -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &= (\partial_0 + \boldsymbol{\nabla})( -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} - \partial_0 \boldsymbol{\nabla} \cdot \mathbf{A} - \partial_0 \boldsymbol{\nabla} \wedge \mathbf{A} + \partial_0 ( \boldsymbol{\nabla} \wedge \mathbf{A} ) + \underbrace{\boldsymbol{\nabla} \cdot ( \boldsymbol{\nabla} \wedge \mathbf{A} ) }_{\boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A})}+ \underbrace{\boldsymbol{\nabla} \wedge ( \boldsymbol{\nabla} \wedge \mathbf{A} )}_{=0} \\ \end{aligned}
The spatial bivector and trivector grades are all zero. Equating the remaining scalar and vector components to zero separately yields a pair of equations in $\mathbf{A}$
\begin{aligned}0 &= \partial_t (\boldsymbol{\nabla} \cdot \mathbf{A}) \\ 0 &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} + \boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}) \end{aligned} \quad\quad\quad(34)
If the divergence of the vector potential is constant we have just a wave equation. Let’s see what that divergence is with the assumed Fourier representation
\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{A} &=\sum_{k \ne (0,0,0)} {\mathbf{A}_\mathbf{k}}^m 2 \pi i \frac{k_m}{\lambda_m} e^{2\pi i \mathbf{k} \cdot \mathbf{x}} \\ &=2 \pi i \sum_{k \ne (0,0,0)} (\mathbf{A}_\mathbf{k} \cdot \mathbf{k}) e^{2\pi i \mathbf{k} \cdot \mathbf{x}} \\ \end{aligned}
Since $\mathbf{A}_\mathbf{k} = \mathbf{A}_\mathbf{k}(t)$, there are two ways for $\partial_t (\boldsymbol{\nabla} \cdot \mathbf{A}) = 0$. For each $\mathbf{k} \ne 0$ there must be a requirement for either $\mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0$ or $\mathbf{A}_\mathbf{k} = \text{constant}$. The constant $\mathbf{A}_\mathbf{k}$ solution to the first equation appears to represent a standing spatial wave with no time dependence. Is that of any interest?
The more interesting seeming case is where we have some non-static time varying state. In this case, if $\mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0$ for all $\mathbf{k} \ne 0$, the second of these Maxwell’s equations is just the vector potential wave equation, since the divergence is zero. That is
\begin{aligned}0 &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} + \boldsymbol{\nabla}^2 \mathbf{A} \end{aligned} \quad\quad\quad(36)
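To confirm the transverse picture concretely, here is a small finite-difference check (my own verification, not from Bohm, with $c = 1$ and an arbitrary wave vector and polarization) that a plane wave $\mathbf{A} = \mathbf{a} \cos(\mathbf{k} \cdot \mathbf{x} - \omega t)$ with $\mathbf{a} \cdot \mathbf{k} = 0$ and $\omega = c {\left\lvert{\mathbf{k}}\right\rvert}$ has zero divergence and satisfies (36):

```python
import numpy as np

c = 1.0
k = np.array([2.0, 1.0, 0.0])    # arbitrary spatial wave vector
a = np.array([1.0, -2.0, 0.0])   # polarization chosen so that a · k = 0
w = c * np.linalg.norm(k)        # dispersion relation ω = c|k|
h = 1e-4                         # finite difference step

def A(t, r):
    return a * np.cos(k @ r - w * t)

t0, r0 = 0.4, np.array([0.1, 0.2, 0.3])
e = np.eye(3)

# divergence ∇·A by central differences
div = sum((A(t0, r0 + h * e[i])[i] - A(t0, r0 - h * e[i])[i]) / (2 * h)
          for i in range(3))
print(div)  # ~0: the divergence condition holds

# wave equation residual -(1/c²) ∂_tt A + ∇² A, component by component
residuals = []
for i in range(3):
    dtt = (A(t0 + h, r0)[i] - 2 * A(t0, r0)[i] + A(t0 - h, r0)[i]) / h**2
    lap = sum((A(t0, r0 + h * e[j])[i] - 2 * A(t0, r0)[i]
               + A(t0, r0 - h * e[j])[i]) / h**2 for j in range(3))
    residuals.append(-dtt / c**2 + lap)
print(residuals)  # each ~0: the wave equation holds
```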
Solving this isn’t really what is of interest, since the objective was just to determine if the divergence could be assumed to be zero. This shows then, that if the transverse solution to Maxwell’s equation is picked, the Hamiltonian for this field, with this gauge choice, becomes
\begin{aligned}H = \frac{\epsilon_0}{c^2} \lambda_1 \lambda_2 \lambda_3 \sum_\mathbf{k}\left(\frac{1}{{2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{1}{{2}} (2 \pi c \mathbf{k})^2 {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 \right).\end{aligned} \quad\quad\quad(37)
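As a numerical aside (my own check, with $\epsilon_0$, $c$ and the box volume all set to one for convenience), each mode of (37) behaves exactly as a harmonic oscillator: taking $\mathbf{A}_\mathbf{k}(t) = \mathbf{a} \cos(\omega t)$ with $\omega = 2 \pi c {\left\lvert{\mathbf{k}}\right\rvert}$, the per-mode energy is constant in time:

```python
import numpy as np

eps0, c, V = 1.0, 1.0, 1.0             # ε₀, c, and the box volume λ₁λ₂λ₃
k = np.array([1.0, 2.0, 2.0])          # wave number triplet
w = 2 * np.pi * c * np.linalg.norm(k)  # oscillator frequency 2πc|k|
a = np.array([0.5, 0.0, 0.0])          # mode amplitude (real, for simplicity)

def H_mode(t):
    # single-mode term of the Hamiltonian (37)
    A = a * np.cos(w * t)
    Adot = -w * a * np.sin(w * t)
    return (eps0 * V / c**2) * (0.5 * Adot @ Adot + 0.5 * w**2 * (A @ A))

vals = np.array([H_mode(t) for t in np.linspace(0.0, 1.0, 50)])
print(vals.std() / vals.mean())  # ~0: the mode energy is conserved
```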
# Conclusions and followup.
The objective was met: a reproduction of Bohm’s Harmonic oscillator result using a complex exponential Fourier series instead of separate sines and cosines.
The reason for Bohm’s choice to fix zero divergence as the gauge choice upfront is now clear. That automatically cuts complexity from the results. Working this problem with complex valued potentials and with the Geometric Algebra formulation at the same time probably also made the work a bit more difficult, since it required blundering through both simultaneously instead of just one at a time.
This was an interesting exercise though, since doing it this way I am able to understand all the intermediate steps. Bohm employed some subtler argumentation to eliminate the scalar potential $\phi$ upfront, and I have to admit I did not follow his logic, whereas blindly following where the math leads me all makes sense.
As a bit of followup, I’d like to consider the constant $\mathbf{A}_\mathbf{k}$ case, and any implications of the freedom to pick $\mathbf{A}_0$. I’d also like to construct the Poynting vector $T(\gamma^0) \wedge \gamma_0$, and see what the structure of that is with this Fourier representation.
A general calculation of $T^{\mu\nu}$ for an assumed Fourier solution should be possible too, but working in spatial quantities for the general case is probably torture. A four dimensional Fourier series is likely a superior option for the general case.
# References
[1] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.
## Energy and momentum for Complex electric and magnetic field phasors.
Posted by peeterjoot on December 15, 2009
# Motivation.
In [1] a complex phasor representations of the electric and magnetic fields is used
\begin{aligned}\mathbf{E} &= \boldsymbol{\mathcal{E}} e^{-i\omega t} \\ \mathbf{B} &= \boldsymbol{\mathcal{B}} e^{-i\omega t}.\end{aligned} \quad\quad\quad(1)
Here the vectors $\boldsymbol{\mathcal{E}}$ and $\boldsymbol{\mathcal{B}}$ are allowed to take on complex values. Jackson uses the real part of these complex vectors as the true fields, so one is really interested in just these quantities
\begin{aligned}\text{Real} \mathbf{E} &= \boldsymbol{\mathcal{E}}_r \cos(\omega t) + \boldsymbol{\mathcal{E}}_i \sin(\omega t) \\ \text{Real} \mathbf{B} &= \boldsymbol{\mathcal{B}}_r \cos(\omega t) + \boldsymbol{\mathcal{B}}_i \sin(\omega t),\end{aligned} \quad\quad\quad(3)
but carries the whole complex quantity through the manipulations to make things simpler. It is stated that the energy for such complex vector fields takes the form (ignoring constant scaling factors and units)
\begin{aligned}\text{Energy} \propto \mathbf{E} \cdot {\mathbf{E}}^{*} + \mathbf{B} \cdot {\mathbf{B}}^{*}.\end{aligned} \quad\quad\quad(5)
In some ways this is an obvious generalization. Less obvious is how this and the Poynting vector are related in their corresponding conservation relationships.
Here I explore this, employing a Geometric Algebra representation of the energy momentum tensor based on the real field representation found in [2]. Given the complex valued fields and a requirement that both the real and imaginary parts of the field satisfy Maxwell’s equation, it should be possible to derive the conservation relationship between the energy density and Poynting vector from first principles.
# Review of GA formalism for real fields.
In SI units the Geometric algebra form of Maxwell’s equation is
\begin{aligned}\nabla F &= J/\epsilon_0 c,\end{aligned} \quad\quad\quad(6)
where one has for the symbols
\begin{aligned}F &= \mathbf{E} + c I \mathbf{B} \\ I &= \gamma_0 \gamma_1 \gamma_2 \gamma_3 \\ \mathbf{E} &= E^k \gamma_k \gamma_0 \\ \mathbf{B} &= B^k \gamma_k \gamma_0 \\ (\gamma^0)^2 &= -(\gamma^k)^2 = 1 \\ \gamma^\mu \cdot \gamma_\nu &= {\delta^\mu}_\nu \\ J &= c \rho \gamma_0 + J^k \gamma_k \\ \nabla &= \gamma^\mu \partial_\mu = \gamma^\mu {\partial {}}/{\partial {x^\mu}}.\end{aligned} \quad\quad\quad(7)
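These algebraic relations can be modelled concretely with the standard Dirac representation of the gamma matrices; this is just an illustrative sanity check of the signature and pseudoscalar properties (my addition, not needed for the derivation):

```python
import numpy as np

# Pauli matrices and the Dirac-representation gamma matrices
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]]).astype(complex)
def g(s):
    # spatial gamma matrix built from a Pauli matrix
    return np.block([[0 * I2, s], [-s, 0 * I2]])
g1, g2, g3 = g(sx), g(sy), g(sz)

I = g0 @ g1 @ g2 @ g3                    # the pseudoscalar

print(np.allclose(g0 @ g0, np.eye(4)))   # (γ⁰)² = 1
print(np.allclose(g1 @ g1, -np.eye(4)))  # (γᵏ)² = −1
print(np.allclose(I @ I, -np.eye(4)))    # I² = −1
print(np.allclose(I @ g0, -g0 @ I))      # I anticommutes with vectors
```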
The symmetric electrodynamic energy momentum tensor for real fields $\mathbf{E}$ and $\mathbf{B}$ is
\begin{aligned}T(a) &= \frac{-\epsilon_0}{2} F a F = \frac{\epsilon_0}{2} F a \tilde{F}.\end{aligned} \quad\quad\quad(15)
It may not be obvious that this is in fact a four vector, but this can be seen since it can only have grade one and three components, and also equals its reverse implying that the grade three terms are all zero. To illustrate this explicitly consider the components of $T^{\mu 0}$
\begin{aligned}\frac{2}{\epsilon_0} T(\gamma^0) &= -(\mathbf{E} + c I \mathbf{B}) \gamma^0 (\mathbf{E} + c I \mathbf{B}) \\ &= (\mathbf{E} + c I \mathbf{B}) (\mathbf{E} - c I \mathbf{B}) \gamma^0 \\ &= (\mathbf{E}^2 + c^2 \mathbf{B}^2 + c I (\mathbf{B} \mathbf{E} - \mathbf{E} \mathbf{B})) \gamma^0 \\ &= (\mathbf{E}^2 + c^2 \mathbf{B}^2) \gamma^0 + 2 c I ( \mathbf{B} \wedge \mathbf{E} ) \gamma^0 \\ &= (\mathbf{E}^2 + c^2 \mathbf{B}^2) \gamma^0 + 2 c ( \mathbf{E} \times \mathbf{B} ) \gamma^0 \\ \end{aligned}
Our result is a four vector in the Dirac basis as expected
\begin{aligned}T(\gamma^0) &= T^{\mu 0} \gamma_\mu \\ T^{0 0} &= \frac{\epsilon_0}{2} (\mathbf{E}^2 + c^2 \mathbf{B}^2) \\ T^{k 0} &= c \epsilon_0 (\mathbf{E} \times \mathbf{B})_k \end{aligned} \quad\quad\quad(16)
Similar expansions are possible for the general tensor components $T^{\mu\nu}$, but let’s defer this more general expansion until considering complex valued fields. The main point here is to remind oneself how to express the energy momentum tensor in a fashion that is natural in a GA context. We also know that there is a conservation relationship associated with the divergence of this tensor, $\nabla \cdot T(a)$ (i.e. $\partial_\mu T^{\mu\nu}$), and want to rederive this relationship after guessing what form the GA expression for the energy momentum tensor takes when the field vectors are allowed to take complex values.
# Computing the conservation relationship for complex field vectors.
As in (5), if one wants
\begin{aligned}T^{0 0} \propto \mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*},\end{aligned} \quad\quad\quad(19)
it is reasonable to assume that our energy momentum tensor will take the form
\begin{aligned}T(a) &= \frac{\epsilon_0}{4} \left( {{F}}^{*} a \tilde{F} + \tilde{F} a {{F}}^{*} \right)= \frac{\epsilon_0}{2} \text{Real} \left( {{F}}^{*} a \tilde{F} \right)\end{aligned} \quad\quad\quad(20)
For real vector fields this reduces to the previous results and should produce the desired mix of real and imaginary dot products for the energy density term of the tensor. This is also a real four vector even when the field is complex, so the energy density and power density terms will all be real valued, which seems desirable.
## Expanding the tensor. Easy parts.
As with real fields expansion of $T(a)$ in terms of $\mathbf{E}$ and $\mathbf{B}$ is simplest for $a = \gamma^0$. Let’s start with that.
\begin{aligned}\frac{4}{\epsilon_0} T(\gamma^0) \gamma_0&=-({\mathbf{E}}^{*} + c I {\mathbf{B}}^{*} )\gamma^0 (\mathbf{E} + c I \mathbf{B}) \gamma_0-(\mathbf{E} + c I \mathbf{B} )\gamma^0 ({\mathbf{E}}^{*} + c I {\mathbf{B}}^{*} ) \gamma_0 \\ &=({\mathbf{E}}^{*} + c I {\mathbf{B}}^{*} ) (\mathbf{E} - c I \mathbf{B}) +(\mathbf{E} + c I \mathbf{B} ) ({\mathbf{E}}^{*} - c I {\mathbf{B}}^{*} ) \\ &={\mathbf{E}}^{*} \mathbf{E} + \mathbf{E} {\mathbf{E}}^{*} + c^2 ({\mathbf{B}}^{*} \mathbf{B} + \mathbf{B} {\mathbf{B}}^{*} ) + c I ( {\mathbf{B}}^{*} \mathbf{E} - {\mathbf{E}}^{*} \mathbf{B} + \mathbf{B} {\mathbf{E}}^{*} - \mathbf{E} {\mathbf{B}}^{*} ) \\ &=2 \mathbf{E} \cdot {\mathbf{E}}^{*} + 2 c^2 \mathbf{B} \cdot {\mathbf{B}}^{*}+ 2 c ( \mathbf{E} \times {\mathbf{B}}^{*} + {\mathbf{E}}^{*} \times \mathbf{B} ).\end{aligned}
This gives
\begin{aligned}T(\gamma^0) &=\frac{\epsilon_0}{2} \left( \mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*} \right) \gamma^0+ \frac{\epsilon_0 c}{2} ( \mathbf{E} \times {\mathbf{B}}^{*} + {\mathbf{E}}^{*} \times \mathbf{B} ) \gamma^0\end{aligned} \quad\quad\quad(21)
The sum of ${{F}}^{*} a F$ and its conjugate has produced the desired energy density expression. An implication of this is that one can form and take real parts of a complex Poynting vector $\mathbf{S} \propto \mathbf{E} \times {\mathbf{B}}^{*}$ to calculate the momentum density. This is stated but not demonstrated in Jackson, perhaps considered too obvious or messy to derive.
Observe that the choice to work with complex valued vector fields gives a nice consistency, and one has the same factor of $1/2$ in both the energy and momentum terms. While the energy term is obviously real, the momentum terms can be written in an explicitly real notation as well, since one has a quantity plus its conjugate. Using a more conventional four vector notation (omitting the explicit Dirac basis vectors), one can write this out as a strictly real quantity.
\begin{aligned}T(\gamma^0) &=\epsilon_0 \Bigl( \frac{1}{{2}}(\mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*}),c \text{Real}( \mathbf{E} \times {\mathbf{B}}^{*} ) \Bigr)\end{aligned} \quad\quad\quad(22)
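A short numerical check of the phasor bookkeeping (mine, not Jackson’s): with physical fields $\text{Real}(\mathbf{E} e^{-i \omega t})$, the time averages over a period of the quadratic energy and Poynting products reproduce half the real parts of the corresponding phasor products, which is why the $1/2$ factors above are the natural ones:

```python
import numpy as np

rng = np.random.default_rng(1)
Ep = rng.normal(size=3) + 1j * rng.normal(size=3)  # arbitrary phasor amplitudes
Bp = rng.normal(size=3) + 1j * rng.normal(size=3)
w = 2.0 * np.pi                                    # one period is t ∈ [0, 1)

t = np.linspace(0.0, 1.0, 2048, endpoint=False)
Ert = np.real(np.outer(np.exp(-1j * w * t), Ep))   # physical field Re(E e^{-iωt})
Brt = np.real(np.outer(np.exp(-1j * w * t), Bp))

# time average of E(t)·E(t) over a period equals ½ Re(E·E*)
lhs_energy = np.mean(np.sum(Ert * Ert, axis=1))
rhs_energy = 0.5 * np.real(Ep @ np.conj(Ep))
print(lhs_energy, rhs_energy)  # agree

# time average of E(t)×B(t) over a period equals ½ Re(E×B*)
lhs_S = np.mean(np.cross(Ert, Brt), axis=0)
rhs_S = 0.5 * np.real(np.cross(Ep, np.conj(Bp)))
print(lhs_S, rhs_S)  # agree componentwise
```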
Observe that when the vector fields are restricted to real quantities, the conjugate and real part operators can be dropped and the real vector field result (16) is recovered.
## Expanding the tensor. Messier parts.
I intended here to compute $T(\gamma^k)$, and my starting point was a decomposition of the field vectors into components that anticommute or commute with $\gamma^k$
\begin{aligned}\mathbf{E} &= \mathbf{E}_\parallel + \mathbf{E}_\perp \\ \mathbf{B} &= \mathbf{B}_\parallel + \mathbf{B}_\perp.\end{aligned} \quad\quad\quad(23)
The components parallel to the spatial vector $\sigma_k = \gamma_k \gamma_0$ are anticommuting $\gamma^k \mathbf{E}_\parallel = -\mathbf{E}_\parallel \gamma^k$, whereas the perpendicular components commute $\gamma^k \mathbf{E}_\perp = \mathbf{E}_\perp \gamma^k$. The expansion of the tensor products is then
\begin{aligned}({{F}}^{*} \gamma^k \tilde{F} + \tilde{F} \gamma^k {{F}}^{*}) \gamma_k&= - ({\mathbf{E}}^{*} + I c {\mathbf{B}}^{*}) \gamma^k ( \mathbf{E}_\parallel + \mathbf{E}_\perp + c I ( \mathbf{B}_\parallel + \mathbf{B}_\perp ) ) \gamma_k \\ &- (\mathbf{E} + I c \mathbf{B}) \gamma^k ( {\mathbf{E}_\parallel}^{*} + {\mathbf{E}_\perp}^{*} + c I ( {\mathbf{B}_\parallel}^{*} + {\mathbf{B}_\perp}^{*} ) ) \gamma_k \\ &= ({\mathbf{E}}^{*} + I c {\mathbf{B}}^{*}) ( \mathbf{E}_\parallel - \mathbf{E}_\perp + c I ( -\mathbf{B}_\parallel + \mathbf{B}_\perp ) ) \\ &+ (\mathbf{E} + I c \mathbf{B}) ( {\mathbf{E}_\parallel}^{*} - {\mathbf{E}_\perp}^{*} + c I ( -{\mathbf{B}_\parallel}^{*} + {\mathbf{B}_\perp}^{*} ) ) \\ \end{aligned}
This isn’t particularly pretty to expand out. I did attempt it, but my result looked wrong. For the application I have in mind I do not actually need anything more than $T^{\mu 0}$, so rather than show something wrong, I’ll just omit it (at least for now).
## Calculating the divergence.
Working with (20), let’s calculate the divergence and see what one finds for the corresponding conservation relationship.
\begin{aligned}\frac{4}{\epsilon_0} \nabla \cdot T(a) &=\left\langle{{ \nabla ( {{F}}^{*} a \tilde{F} + \tilde{F} a {{F}}^{*} )}}\right\rangle \\ &=-\left\langle{{ F \stackrel{ \leftrightarrow }\nabla {{F}}^{*} a + {{F}}^{*} \stackrel{ \leftrightarrow }\nabla F a }}\right\rangle \\ &=-{\left\langle{{ F \stackrel{ \leftrightarrow }\nabla {{F}}^{*} + {{F}}^{*} \stackrel{ \leftrightarrow }\nabla F }}\right\rangle}_{1} \cdot a \\ &=-{\left\langle{{ F \stackrel{ \rightarrow }\nabla {{F}}^{*} +F \stackrel{ \leftarrow }\nabla {{F}}^{*} + {{F}}^{*} \stackrel{ \leftarrow }\nabla F+ {{F}}^{*} \stackrel{ \rightarrow }\nabla F}}\right\rangle}_{1} \cdot a \\ &=-\frac{1}{{\epsilon_0 c}} {\left\langle{{ F {{J}}^{*} - J {{F}}^{*} - {{J}}^{*} F+ {{F}}^{*} J}}\right\rangle}_{1} \cdot a \\ &= \frac{2}{\epsilon_0 c} a \cdot ( J \cdot {{F}}^{*} + {{J}}^{*} \cdot F) \\ &= \frac{4}{\epsilon_0 c} a \cdot \text{Real} ( J \cdot {{F}}^{*} ).\end{aligned}
We have then for the divergence
\begin{aligned}\nabla \cdot T(a) &= a \cdot \frac{1}{{ c }} \text{Real} \left( J \cdot {{F}}^{*} \right).\end{aligned} \quad\quad\quad(25)
Let’s write out $J \cdot {{F}}^{*}$ in the (stationary) observer frame where $J = (c\rho + \mathbf{J}) \gamma_0$. This is
\begin{aligned}J \cdot {{F}}^{*} &={\left\langle{{ (c\rho + \mathbf{J}) \gamma_0 ( {\mathbf{E}}^{*} + I c {\mathbf{B}}^{*} ) }}\right\rangle}_{1} \\ &=- (\mathbf{J} \cdot {\mathbf{E}}^{*} ) \gamma_0- c \left( \rho {\mathbf{E}}^{*} + \mathbf{J} \times {\mathbf{B}}^{*}\right) \gamma_0\end{aligned}
Writing out the four divergence relationships in full one has
\begin{aligned}\nabla \cdot T(\gamma^0) &= - \frac{1}{{ c }} \text{Real}( \mathbf{J} \cdot {\mathbf{E}}^{*} ) \\ \nabla \cdot T(\gamma^k) &= - \text{Real} \left( \rho {{(E^k)}}^{*} + (\mathbf{J} \times {\mathbf{B}}^{*})_k \right)\end{aligned} \quad\quad\quad(26)
Just as in the real field case one has a nice relativistic split into energy density and force (momentum change) components, but one has to take real parts and conjugate half the terms appropriately when one has complex fields.
Combining the divergence relation for $T(\gamma^0)$ with (22), the conservation relation for this subset of the energy momentum tensor becomes
\begin{aligned}\frac{1}{{c}} \frac{\partial {}}{\partial {t}}\frac{\epsilon_0}{2}(\mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*})+ c \epsilon_0 \text{Real} \boldsymbol{\nabla} \cdot (\mathbf{E} \times {\mathbf{B}}^{*} )=- \frac{1}{{c}} \text{Real}( \mathbf{J} \cdot {\mathbf{E}}^{*} ) \end{aligned} \quad\quad\quad(28)
Or
\begin{aligned}\frac{\partial {}}{\partial {t}}\frac{\epsilon_0}{2}(\mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*})+ \text{Real} \boldsymbol{\nabla} \cdot \frac{1}{{\mu_0}} (\mathbf{E} \times {\mathbf{B}}^{*} )+ \text{Real}( \mathbf{J} \cdot {\mathbf{E}}^{*} ) = 0\end{aligned} \quad\quad\quad(29)
It is this last term that puts some meaning behind Jackson’s treatment since we now know how the energy and momentum are related as a four vector quantity in this complex formalism.
While I’ve used geometric algebra to get to this final result, I would be interested to compare how the intermediate mess compares with the same complex field vector result obtained via traditional vector techniques. I am sure I could try this myself, but am not interested enough to attempt it.
Instead, now that this result is obtained, proceeding on to application is now possible. My intention is to try the vacuum electromagnetic energy density example from [3] using complex exponential Fourier series instead of the doubled sum of sines and cosines that Bohm used.
# References
[1] JD Jackson. Classical Electrodynamics Wiley. 2nd edition, 1975.
[2] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.
[3] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.
## Electromagnetic Gauge invariance.
Posted by peeterjoot on September 24, 2009
At the end of section 12.1 of [1], Jackson states that it is obvious that the Lorentz force equations are gauge invariant.
\begin{aligned}\frac{d \mathbf{p}}{dt} &= e \left( \mathbf{E} + \frac{\mathbf{u}}{c} \times \mathbf{B} \right) \\ \frac{d E}{dt} &= e \mathbf{u} \cdot \mathbf{E} \end{aligned} \quad\quad\quad(1)
Since I didn’t remember what gauge invariance was, it wasn’t so obvious to me. But looking ahead to problem 12.2 on this invariance, we have a gauge transformation defined in four vector form as
\begin{aligned}A^\alpha \rightarrow A^\alpha + \partial^\alpha \psi\end{aligned} \quad\quad\quad(3)
In vector form with $A = \gamma_\alpha A^\alpha$, this gauge transformation can be written
\begin{aligned}A \rightarrow A + \nabla \psi\end{aligned} \quad\quad\quad(4)
so this is really a statement that we add a spacetime gradient of something to the four vector potential. Given this, how does the field transform?
\begin{aligned}F &= \nabla \wedge A \\ &\rightarrow \nabla \wedge (A + \nabla \psi) \\ &= F + \nabla \wedge \nabla \psi\end{aligned}
But $\nabla \wedge \nabla \psi = 0$ (assuming partials are interchangeable), so the field is invariant regardless of whether we are talking about the field equations themselves

\begin{aligned}\nabla F = J/\epsilon_0 c\end{aligned} \quad\quad\quad(5)

or the Lorentz force

\begin{aligned}\frac{dp}{d\tau} = e F \cdot v/c\end{aligned} \quad\quad\quad(6)
So, once you know the definition of the gauge transformation in four vector form, yes, this is justifiably obvious; however, to anybody who is not familiar with Geometric Algebra, perhaps it is still not so obvious. How does this translate to the more commonplace tensor or spacetime vector notations? The tensor four vector translation is the easier of the two, and there we have
\begin{aligned}F^{\alpha\beta} &= \partial^\alpha A^\beta -\partial^\beta A^\alpha \\ &\rightarrow \partial^\alpha (A^\beta + \partial^\beta \psi) -\partial^\beta (A^\alpha + \partial^\alpha \psi) \\ &= F^{\alpha\beta} + \partial^\alpha \partial^\beta \psi -\partial^\beta \partial^\alpha \psi \\ \end{aligned}
Just as for $\nabla \wedge \nabla \psi = 0$, interchange of partials means the field components $F^{\alpha\beta}$ are unchanged by adding this gradient. Finally, in plain old spatial vector form, how is this gauge invariance expressed?
In components we have
\begin{aligned}A^0 &\rightarrow A^0 + \partial^0 \psi = \phi + \frac{1}{{c}}\frac{\partial \psi}{\partial t} \\ A^k &\rightarrow A^k + \partial^k \psi = A^k - \frac{\partial \psi}{\partial x^k}\end{aligned} \quad\quad\quad(7)
This last in vector form is $\mathbf{A} \rightarrow \mathbf{A} - \boldsymbol{\nabla} \psi$, where the sign inversion comes from $\partial^k = -\partial_k = -\partial/\partial x^k$, assuming a $+---$ metric.
We want to apply this to the electric and magnetic field components
\begin{aligned}\mathbf{E} &= -\boldsymbol{\nabla} \phi - \frac{1}{{c}}\frac{\partial \mathbf{A}}{\partial t} \\ \mathbf{B} &= \boldsymbol{\nabla} \times \mathbf{A}\end{aligned} \quad\quad\quad(9)
The electric field transforms as
\begin{aligned}\mathbf{E} &\rightarrow -\boldsymbol{\nabla} \left( \phi + \frac{1}{{c}}\frac{\partial \psi}{\partial t}\right) - \frac{1}{{c}}\frac{\partial }{\partial t} \left( \mathbf{A} - \boldsymbol{\nabla} \psi \right) \\ &= \mathbf{E} -\frac{1}{{c}} \boldsymbol{\nabla} \frac{\partial \psi}{\partial t} + \frac{1}{{c}}\frac{\partial }{\partial t} \boldsymbol{\nabla} \psi \end{aligned}
With partial interchange this is just $\mathbf{E}$. For the magnetic field we have
\begin{aligned}\mathbf{B} &\rightarrow \boldsymbol{\nabla} \times \left( \mathbf{A} - \boldsymbol{\nabla} \psi \right) \\ &= \mathbf{B} - \boldsymbol{\nabla} \times \boldsymbol{\nabla} \psi \end{aligned}
Again since the partials interchange we have $\boldsymbol{\nabla} \times \boldsymbol{\nabla} \psi = 0$, so this is just the magnetic field.
Alright. Having worked this three different ways, now I can say it’s obvious.
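As a numerical coda, all three derivations hinge on the same fact: mixed partial derivatives commute. That is easy to spot-check with finite differences for a concrete, arbitrarily chosen gauge function $\psi$ (this particular $\psi$ is a made-up example, not anything from Jackson):

```python
import math

# A concrete smooth gauge function psi(x, y, z, t); any smooth choice works here.
def psi(x, y, z, t):
    return math.sin(x * t) + y * math.exp(-z) * t

h = 1e-5

def d(f, i, point):
    """Central finite difference of f along coordinate i at the given point."""
    lo, hi = list(point), list(point)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h)

p = (0.3, -0.7, 1.1, 2.0)
# Mixed partials commute: d/dt(d psi/dx_i) == d/dx_i(d psi/dt).  This is the
# identity behind curl(grad psi) = 0 and the cancellation in the E transform.
for i in range(3):
    dt_dxi = d(lambda *q: d(psi, i, q), 3, p)
    dxi_dt = d(lambda *q: d(psi, 3, q), i, p)
    assert abs(dt_dxi - dxi_dt) < 1e-4
print("mixed partials agree; the gauge terms cancel")
```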
# References
[1] J.D. Jackson. Classical Electrodynamics. Wiley, 2nd edition, 1975.
# Expected value
In probability theory, the expected value of a random variable is a key aspect of its probability distribution. Intuitively, a random variable's expected value represents the average of a large number of independent realizations of the random variable. For example, the expected value of rolling a six-sided die is 3.5, because the average of all the numbers that come up converges to 3.5 as the number of rolls approaches infinity (see § Examples for details). The expected value is also known as the expectation, mathematical expectation, mean, or first moment.
A bit more formally, the expected value of a discrete random variable is the probability-weighted average of all its possible values. In other words, each possible value the random variable can assume is multiplied by its probability of occurring, and the resulting products are summed to produce the expected value. The same principle applies to an absolutely continuous random variable, except that an integral of the variable with respect to its probability density replaces the sum. The formal definition subsumes both of these and also works for distributions which are neither discrete nor absolutely continuous; the expected value of a random variable is the integral of the random variable with respect to its probability measure.[1][2]
The expectation of a random variable plays an important role in a variety of contexts. For example, in decision theory, an agent making an optimal choice in the context of incomplete information is often assumed to maximize the expected value of their utility function. For a different example, in statistics, where one seeks estimates for unknown parameters based on available data, the estimate itself is a random variable. In such settings, a desirable criterion for a "good" estimator is that it is unbiased – that is, the expected value of the estimate is equal to the true value of the underlying parameter.
## History
The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players who have to end their game before it's properly finished. This problem had been debated for centuries, and many conflicting proposals and solutions had been suggested over the years, when it was posed in 1654 to Blaise Pascal by French writer and amateur mathematician Chevalier de Méré. Méré claimed that this problem couldn't be solved and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all. He began to discuss the problem in a now famous series of letters to Pierre de Fermat. Soon enough they both independently came up with a solution. They solved the problem in different computational ways but their results were identical because their computations were based on the same fundamental principle. The principle is that the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased by the fact that they had found essentially the same solution and this in turn made them absolutely convinced they had solved the problem conclusively; however, they did not publish their findings. They only informed a small circle of mutual scientific friends in Paris about it.[3]
Three years later, in 1657, a Dutch mathematician Christiaan Huygens, who had just visited Paris, published a treatise (see Huygens (1657)) "De ratiociniis in ludo aleæ" on probability theory. In this book he considered the problem of points and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens also extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players). In this sense this book can be seen as the first successful attempt at laying down the foundations of the theory of probability.
In the foreword to his book, Huygens wrote: "It should be said, also, that for some time some of the best mathematicians of France have occupied themselves with this kind of calculus so that no one should attribute to me the honour of the first invention. This does not belong to me. But these savants, although they put each other to the test by proposing to each other many questions difficult to solve, have hidden their methods. I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs." (cited by Edwards (2002)). Thus, Huygens learned about de Méré's Problem in 1655 during his visit to France; later on in 1656 from his correspondence with Carcavi he learned that his method was essentially the same as Pascal's; so that before his book went to press in 1657 he knew about Pascal's priority in this subject.
### Etymology
Neither Pascal nor Huygens used the term "expectation" in its modern sense. In particular, Huygens writes[4]:
That any one Chance or Expectation to win any thing is worth just such a Sum, as wou'd procure in the same Chance and Expectation at a fair Lay. ... If I expect a or b, and have an equal chance of gaining them, my Expectation is worth (a+b)/2.
More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract "Théorie analytique des probabilités", where the concept of expected value was defined explicitly[5]:
… this advantage in the theory of chance is the product of the sum hoped for by the probability of obtaining it; it is the partial sum which ought to result when we do not wish to run the risks of the event in supposing that the division is made proportional to the probabilities. This division is the only equitable one when all strange circumstances are eliminated; because an equal degree of probability gives an equal right for the sum hoped for. We will call this advantage mathematical hope.
The use of the letter E to denote expected value goes back to W. A. Whitworth in 1901,[6] who used a script E. The symbol has become popular since for English writers it meant "Expectation", for Germans "Erwartungswert", for Spanish "Esperanza matemática" and for French "Espérance mathématique".[7]
## Definition
### Finite case
Let ${\displaystyle X}$ be a random variable with a finite number of finite outcomes ${\displaystyle x_{1},x_{2},\ldots ,x_{k}}$ occurring with probabilities ${\displaystyle p_{1},p_{2},\ldots ,p_{k},}$ respectively. The expectation of ${\displaystyle X}$ is defined as
${\displaystyle \operatorname {E} [X]=\sum _{i=1}^{k}x_{i}\,p_{i}=x_{1}p_{1}+x_{2}p_{2}+\cdots +x_{k}p_{k}.}$
Since all probabilities ${\displaystyle p_{i}}$ add up to 1 (${\displaystyle p_{1}+p_{2}+\cdots +p_{k}=1}$), the expected value is the weighted average, with ${\displaystyle p_{i}}$’s being the weights.
If all outcomes ${\displaystyle x_{i}}$ are equiprobable (that is, ${\displaystyle p_{1}=p_{2}=\cdots =p_{k}}$), then the weighted average turns into the simple average. If the outcomes ${\displaystyle x_{i}}$ are not equiprobable, then the simple average must be replaced with the weighted average, which takes into account the fact that some outcomes are more likely than the others.
An illustration of the convergence of sequence averages of rolls of a die to the expected value of 3.5 as the number of rolls (trials) grows.
#### Examples
• Let ${\displaystyle X}$ represent the outcome of a roll of a fair six-sided die. More specifically, ${\displaystyle X}$ will be the number of pips showing on the top face of the die after the toss. The possible values for ${\displaystyle X}$ are 1, 2, 3, 4, 5, and 6, all of which are equally likely with a probability of 1/6. The expectation of ${\displaystyle X}$ is
${\displaystyle \operatorname {E} [X]=1\cdot {\frac {1}{6}}+2\cdot {\frac {1}{6}}+3\cdot {\frac {1}{6}}+4\cdot {\frac {1}{6}}+5\cdot {\frac {1}{6}}+6\cdot {\frac {1}{6}}=3.5.}$
If one rolls the die ${\displaystyle n}$ times and computes the average (arithmetic mean) of the results, then as ${\displaystyle n}$ grows, the average will almost surely converge to the expected value, a fact known as the strong law of large numbers.
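This convergence is easy to watch in a quick simulation (the sample sizes below are arbitrary):

```python
import random

random.seed(1)

def mean_of_rolls(n):
    """Average of n fair six-sided die rolls."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

for n in (10, 1_000, 100_000):
    print(n, mean_of_rolls(n))

# By the law of large numbers the 100,000-roll average is very close to 3.5.
assert abs(mean_of_rolls(100_000) - 3.5) < 0.05
```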
• The roulette game consists of a small ball and a wheel with 38 numbered pockets around the edge. As the wheel is spun, the ball bounces around randomly until it settles down in one of the pockets. Suppose random variable ${\displaystyle X}$ represents the (monetary) outcome of a \$1 bet on a single number ("straight up" bet). If the bet wins (which happens with probability 1/38 in American roulette), the payoff is \$35; otherwise the player loses the bet. The expected profit from such a bet will be
${\displaystyle \operatorname {E} [\,{\text{gain from }}\$1{\text{ bet}}\,]=-\$1\cdot {\frac {37}{38}}+\$35\cdot {\frac {1}{38}}=-\${\frac {1}{19}}.}$

That is, the bet of \$1 stands to lose ${\displaystyle \${\frac {1}{19}}}$ on average, so its expected value is ${\displaystyle -\${\frac {1}{19}}.}$
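The same straight-up bet can be checked exactly as a weighted sum, using rational arithmetic to avoid rounding:

```python
from fractions import Fraction

# Outcomes of a $1 straight-up bet in American roulette: win $35 or lose $1.
outcomes = {Fraction(35): Fraction(1, 38), Fraction(-1): Fraction(37, 38)}

expected = sum(x * p for x, p in outcomes.items())
assert sum(outcomes.values()) == 1       # probabilities sum to one
assert expected == Fraction(-1, 19)      # about -5.26 cents per dollar bet
print(expected, float(expected))
```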
### Countably infinite case
Intuitively, the expectation of a random variable taking values in a countable set of outcomes is defined analogously as the weighted sum of the outcome values, where the weights correspond to the probabilities of realizing that value. However, convergence issues associated with the infinite sum necessitate a more careful definition. A rigorous definition first defines expectation of a non-negative random variable, and then adapts it to general random variables.
Let ${\displaystyle X}$ be a non-negative random variable with a countable set of outcomes ${\displaystyle x_{1},x_{2},\ldots ,}$ occurring with probabilities ${\displaystyle p_{1},p_{2},\ldots ,}$ respectively. Analogous to the discrete case, the expected value of ${\displaystyle X}$ is then defined as the series
${\displaystyle \operatorname {E} [X]=\sum _{i=1}^{\infty }x_{i}\,p_{i}.}$
Note that since ${\displaystyle x_{i}p_{i}\geq 0}$, the infinite sum is well-defined and does not depend on the order in which it is computed. Unlike the discrete case, the expectation here can be equal to infinity, if the infinite sum above increases without bound.
For a general random variable ${\displaystyle X}$ that need not be non-negative, first define ${\displaystyle X^{+}=\max\{X,0\}}$ and ${\displaystyle X^{-}=\max\{-X,0\}}$. Observe that ${\displaystyle X=X^{+}-X^{-}}$, and both ${\displaystyle X^{+}}$ and ${\displaystyle X^{-}}$ are non-negative random variables. Hence, ${\displaystyle \operatorname {E} [X^{+}]}$ and ${\displaystyle \operatorname {E} [X^{-}]}$ are well-defined (using either the definition for finite discrete random variables or non-negative countable random variables). Then, we define ${\displaystyle \operatorname {E} [X]}$ as follows:
${\displaystyle \operatorname {E} [X]={\begin{cases}\operatorname {E} [X^{+}]-\operatorname {E} [X^{-}]&{\text{if }}\operatorname {E} [X^{+}]<\infty {\text{ and }}\operatorname {E} [X^{-}]<\infty ;\\\infty &{\text{if }}\operatorname {E} [X^{+}]=\infty {\text{ and }}\operatorname {E} [X^{-}]<\infty ;\\-\infty &{\text{if }}\operatorname {E} [X^{+}]<\infty {\text{ and }}\operatorname {E} [X^{-}]=\infty ;\\{\text{undefined}}&{\text{if }}\operatorname {E} [X^{+}]=\infty {\text{ and }}\operatorname {E} [X^{-}]=\infty .\\\end{cases}}}$
#### Examples
• Suppose ${\displaystyle x_{i}=i}$ and ${\displaystyle p_{i}={\frac {k}{i2^{i}}},}$ for ${\displaystyle i=1,2,3,\ldots }$, where ${\displaystyle k={\frac {1}{\ln 2}}}$ (with ${\displaystyle \ln }$ being the natural logarithm) is the scale factor such that the probabilities sum to 1. Then, using the direct definition for non-negative random variables, we have
${\displaystyle \operatorname {E} [X]=\sum _{i}x_{i}p_{i}=1\left({\frac {k}{2}}\right)+2\left({\frac {k}{8}}\right)+3\left({\frac {k}{24}}\right)+\dots ={\frac {k}{2}}+{\frac {k}{4}}+{\frac {k}{8}}+\dots =k.}$
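A numeric check of this series, truncated where the $2^{-i}$ decay makes further terms negligible (the cutoff is arbitrary):

```python
import math

k = 1 / math.log(2)
N = 60  # truncation point; terms decay like 2**-i, so this is plenty

probs = [k / (i * 2**i) for i in range(1, N + 1)]
expectation = sum(i * p for i, p in zip(range(1, N + 1), probs))

assert abs(sum(probs) - 1) < 1e-12      # probabilities sum to one
assert abs(expectation - k) < 1e-12     # E[X] = k = 1/ln 2, about 1.4427
print(expectation)
```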
• An example where the expectation is infinite arises in the context of the St. Petersburg paradox. Let ${\displaystyle x_{i}=2^{i}}$ and ${\displaystyle p_{i}={\frac {1}{2^{i}}}}$ for ${\displaystyle i=1,2,3,\ldots }$. Once again, since the random variable is non-negative, the expected value calculation gives
${\displaystyle \operatorname {E} [X]=\sum _{i=1}^{\infty }x_{i}\,p_{i}=2\cdot {\frac {1}{2}}+4\cdot {\frac {1}{4}}+8\cdot {\frac {1}{8}}+16\cdot {\frac {1}{16}}+\cdots =1+1+1+1+\cdots \,=\infty .}$
• For an example where the expectation is not well-defined, suppose the random variable ${\displaystyle X}$ takes values 1, −2, 3, −4, ..., with respective probabilities ${\displaystyle {\frac {c}{1^{2}}},{\frac {c}{2^{2}}},{\frac {c}{3^{2}}},{\frac {c}{4^{2}}}}$, ..., where ${\displaystyle c={\frac {6}{\pi ^{2}}}}$ is a normalizing constant that ensures the probabilities sum up to one.
Then, it follows that ${\displaystyle X^{+}}$ takes value ${\displaystyle 2k-1}$ with probability ${\displaystyle c/(2k-1)^{2}}$ for ${\displaystyle k=1,2,3,\cdots }$ and takes value ${\displaystyle 0}$ with remaining probability. Similarly, ${\displaystyle X^{-}}$ takes value ${\displaystyle 2k}$ with probability ${\displaystyle c/(2k)^{2}}$ for ${\displaystyle k=1,2,3,\cdots }$ and takes value ${\displaystyle 0}$ with remaining probability. Using the definition for non-negative random variables, one can show that both ${\displaystyle \operatorname {E} [X^{+}]=\infty }$ and ${\displaystyle \operatorname {E} [X^{-}]=\infty }$ (see Harmonic series). Hence, the expectation of ${\displaystyle X}$ is not well-defined.
### Absolutely continuous case
If ${\displaystyle X}$ is a random variable whose cumulative distribution function admits a density ${\displaystyle f(x)}$, then the expected value is defined as the following Lebesgue integral, if the integral exists:
${\displaystyle \operatorname {E} [X]=\int _{\mathbb {R} }xf(x)\,dx.}$
The expected value of a random variable may be undefined if the integral does not exist. An example of such a random variable is one with the Cauchy distribution,[8] due to its large "tails".
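As a sketch of the definition, the integral can be approximated by a Riemann sum for a standard density; here an exponential density with rate $\lambda = 2$, for which the expected value is $1/\lambda$ (the grid spacing and cutoff below are arbitrary):

```python
import math

lam = 2.0
f = lambda x: lam * math.exp(-lam * x)   # exponential density on [0, inf)

# Riemann-sum approximation of E[X] = integral of x f(x) dx, truncated at x = 20.
dx = 1e-4
expectation = sum(x * f(x) * dx for x in (i * dx for i in range(int(20 / dx))))

assert abs(expectation - 1 / lam) < 1e-3
print(expectation)
```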
### General case
In general, if ${\displaystyle X}$ is a non-negative random variable defined on a probability space ${\displaystyle (\Omega ,\Sigma ,\operatorname {P} )}$, then the expected value of ${\displaystyle X}$, denoted by ${\displaystyle \operatorname {E} [X]}$, is defined as the Lebesgue integral
${\displaystyle \operatorname {E} [X]=\int _{\Omega }X(\omega )\,d\operatorname {P} (\omega ).}$
For a general random variable ${\displaystyle X}$, define as before ${\displaystyle X^{+}(\omega )=\max(X(\omega ),0)}$ and ${\displaystyle X^{-}(\omega )=-\min(X(\omega ),0)}$, and note that ${\displaystyle X=X^{+}-X^{-}}$, with both ${\displaystyle X^{+}}$ and ${\displaystyle X^{-}}$ nonnegative. Then, the expected value of ${\displaystyle X}$ is defined as
${\displaystyle \operatorname {E} [X]={\begin{cases}\operatorname {E} [X^{+}]-\operatorname {E} [X^{-}]&{\text{if }}\operatorname {E} [X^{+}]<\infty {\text{ and }}\operatorname {E} [X^{-}]<\infty ;\\\infty &{\text{if }}\operatorname {E} [X^{+}]=\infty {\text{ and }}\operatorname {E} [X^{-}]<\infty ;\\-\infty &{\text{if }}\operatorname {E} [X^{+}]<\infty {\text{ and }}\operatorname {E} [X^{-}]=\infty ;\\{\text{undefined}}&{\text{if }}\operatorname {E} [X^{+}]=\infty {\text{ and }}\operatorname {E} [X^{-}]=\infty .\end{cases}}}$
For multidimensional random variables, their expected value is defined per component, i.e.
${\displaystyle \operatorname {E} [(X_{1},\ldots ,X_{n})]=(\operatorname {E} [X_{1}],\ldots ,\operatorname {E} [X_{n}])}$
and, for a random matrix ${\displaystyle X}$ with elements ${\displaystyle X_{ij}}$, ${\displaystyle (\operatorname {E} [X])_{ij}=\operatorname {E} [X_{ij}].}$
## Basic properties
The basic properties below replicate or follow immediately from those of the Lebesgue integral.
• Let ${\displaystyle {\mathbf {1} }_{A}}$ denote the indicator function of an event ${\displaystyle A}$. Then ${\displaystyle \operatorname {E} [{\mathbf {1} }_{A}]=1\cdot \operatorname {P} (A)+0\cdot \operatorname {P} (\Omega \setminus A)=\operatorname {P} (A).}$
• If ${\displaystyle X=Y}$ (a.s.), then ${\displaystyle \operatorname {E} [X]=\operatorname {E} [Y]}$.
• If ${\displaystyle X=c}$ (a.s.) for some constant ${\displaystyle c\in [-\infty ,+\infty ]}$, then ${\displaystyle \operatorname {E} [X]=c}$. In particular, for a random variable ${\displaystyle X}$ with well-defined expectation, ${\displaystyle \operatorname {E} [\operatorname {E} [X]]=\operatorname {E} [X]}$.
• Non-negativity: If ${\displaystyle X\geq 0}$ (a.s.), then ${\displaystyle \operatorname {E} [X]\geq 0}$.
• Linearity of expectation: The expected value operator (or expectation operator) ${\displaystyle \operatorname {E} [\cdot ]}$ is linear in the sense that, for any random variables ${\displaystyle X}$ and ${\displaystyle Y}$, and a constant ${\displaystyle a}$,
{\displaystyle {\begin{aligned}\operatorname {E} [X+Y]&=\operatorname {E} [X]+\operatorname {E} [Y],\\\operatorname {E} [aX]&=a\operatorname {E} [X],\end{aligned}}}
whenever the right-hand side is well-defined. This means that the expected value of the sum of any finite number of random variables is the sum of the expected values of the individual random variables, and the expected value scales linearly with a multiplicative constant.
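Notably, no independence is assumed. A check with a deliberately dependent pair, using a made-up joint distribution:

```python
from fractions import Fraction as F

# Joint distribution of (X, Y); X and Y are dependent by construction.
joint = {(0, 0): F(1, 2), (1, 1): F(1, 4), (1, 3): F(1, 4)}

E = lambda g: sum(g(x, y) * p for (x, y), p in joint.items())

a = F(7)
assert E(lambda x, y: x + y) == E(lambda x, y: x) + E(lambda x, y: y)
assert E(lambda x, y: a * x) == a * E(lambda x, y: x)
print(E(lambda x, y: x), E(lambda x, y: y))   # 1/2 and 1
```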
• The following statements regarding a random variable ${\displaystyle X}$ are equivalent:
• ${\displaystyle \operatorname {E} [X]}$ exists and is finite.
• Both ${\displaystyle \operatorname {E} [X^{+}]}$ and ${\displaystyle \operatorname {E} [X^{-}]}$ are finite.
• ${\displaystyle \operatorname {E} [|X|]}$ is finite.
Sketch of proof: Indeed, ${\displaystyle |X|=X^{+}+X^{-}}$. By linearity, ${\displaystyle \operatorname {E} [|X|]=\operatorname {E} [X^{+}]+\operatorname {E} [X^{-}]}$.
For the reasons above, the expressions "${\displaystyle X}$ is integrable" and "the expected value of ${\displaystyle X}$ is finite" are used interchangeably throughout this article.
• For a random variable ${\displaystyle X}$ with well-defined expectation: ${\displaystyle |\operatorname {E} [X]|\leq \operatorname {E} |X|}$.
The proof follows from the triangle inequality and the linearity of expectation, as follows:
{\displaystyle {\begin{aligned}|\operatorname {E} [X]|&={\Bigl |}\operatorname {E} [X^{+}]-\operatorname {E} [X^{-}]{\Bigr |}\leq {\Bigl |}\operatorname {E} [X^{+}]{\Bigr |}+{\Bigl |}\operatorname {E} [X^{-}]{\Bigr |}&=\operatorname {E} [X^{+}]+\operatorname {E} [X^{-}]=\operatorname {E} [X^{+}+X^{-}]&=\operatorname {E} |X|.\end{aligned}}}
This result is a special case of the more general Jensen's inequality.
• Monotonicity: If ${\displaystyle X\leq Y}$ (a.s.), and both ${\displaystyle \operatorname {E} [X]}$ and ${\displaystyle \operatorname {E} [Y]}$ exist, then ${\displaystyle \operatorname {E} [X]\leq \operatorname {E} [Y]}$.
Proof follows from the linearity and the non-negativity property for ${\displaystyle Z=Y-X}$, since ${\displaystyle Z\geq 0}$ (a.s.).
• Non-degeneracy: If ${\displaystyle \operatorname {E} |X|=0}$, then ${\displaystyle X=0}$ (a.s.).
• If ${\displaystyle \operatorname {E} [X]<+\infty }$ then ${\displaystyle X<+\infty }$ (a.s.). Similarly, if ${\displaystyle \operatorname {E} [X]>-\infty }$ then ${\displaystyle X>-\infty }$ (a.s.).
• Non-multiplicativity: In general, the expected value operator is not multiplicative, i.e. ${\displaystyle \operatorname {E} [XY]}$ is not necessarily equal to ${\displaystyle \operatorname {E} [X]\cdot \operatorname {E} [Y]}$. Indeed, let ${\displaystyle X}$ assume the values of 1 and −1 with probability 0.5 each. Then
${\displaystyle \left(\operatorname {E} [X]\right)^{2}=\left({\frac {1}{2}}\cdot (-1)+{\frac {1}{2}}\cdot 1\right)^{2}=0,}$
and
${\displaystyle \operatorname {E} [X^{2}]={\frac {1}{2}}\cdot (-1)^{2}+{\frac {1}{2}}\cdot 1^{2}=1,{\text{ so }}\operatorname {E} [X^{2}]\neq (\operatorname {E} [X])^{2}.}$
However, if ${\displaystyle X}$ and ${\displaystyle Y}$ are independent, then indeed one can show that ${\displaystyle \operatorname {E} [XY]=\operatorname {E} [X]\operatorname {E} [Y]}$.
• If the random variables are dependent, then in general the expected values do not multiply.
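Both claims are quick to verify exactly for the ±1 variable above, together with an independent copy of it:

```python
from fractions import Fraction as F

# X takes values -1 and 1 with probability 1/2 each.
px = {-1: F(1, 2), 1: F(1, 2)}

EX = sum(x * p for x, p in px.items())
EX2 = sum(x * x * p for x, p in px.items())
assert EX == 0 and EX2 == 1               # E[X^2] != (E[X])^2

# For independent X, Y the joint probabilities factor, and E[XY] = E[X]E[Y].
EXY = sum(x * y * p * q for x, p in px.items() for y, q in px.items())
assert EXY == EX * EX
print(EX, EX2, EXY)
```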
• Law of the unconscious statistician: The expected value of a measurable function of ${\displaystyle X}$, ${\displaystyle g(X)}$, given that ${\displaystyle X}$ has a probability density function ${\displaystyle f(x)}$, is given by the inner product of ${\displaystyle f}$ and ${\displaystyle g}$:
${\displaystyle \operatorname {E} [g(X)]=\int _{\mathbb {R} }g(x)f(x)\,dx.}$
This formula also holds in multidimensional case, when ${\displaystyle g}$ is a function of several random variables, and ${\displaystyle f}$ is their joint density.[9][10]
• If X and Y are two random variables, and Y can be written as a function of X, that is, Y = f(X), then one can compute the expected value of Y using the distribution function of X.[11]
## Uses and applications
It is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies.
The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions.
To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller.
This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. ${\displaystyle \operatorname {P} ({X\in {\mathcal {A}}})=\operatorname {E} [{\mathbf {1} }_{\mathcal {A}}]}$, where ${\displaystyle {\mathbf {1} }_{\mathcal {A}}}$ is the indicator function of the set ${\displaystyle {\mathcal {A}}}$.
The mass of probability distribution is balanced at the expected value, here a Beta(α,β) distribution with expected value α/(α+β).
In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose X is a discrete random variable with values xi and corresponding probabilities pi. Now consider a weightless rod on which are placed weights, at locations xi along the rod and having masses pi (whose sum is one). The point at which the rod balances is E[X].
Expected values can also be used to compute the variance, by means of the computational formula for the variance
${\displaystyle \operatorname {Var} (X)=\operatorname {E} [X^{2}]-(\operatorname {E} [X])^{2}.}$
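The two forms of the variance agree exactly; a check with the fair-die distribution from the earlier example, in exact arithmetic:

```python
from fractions import Fraction as F

# Fair six-sided die.
dist = {x: F(1, 6) for x in range(1, 7)}

E = lambda g: sum(g(x) * p for x, p in dist.items())
mu = E(lambda x: x)

var_definition = E(lambda x: (x - mu) ** 2)          # E[(X - E[X])^2]
var_computational = E(lambda x: x * x) - mu ** 2     # E[X^2] - (E[X])^2

assert var_definition == var_computational == F(35, 12)
print(var_definition)
```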
A very important application of the expectation value is in the field of quantum mechanics. The expectation value of a quantum mechanical operator ${\displaystyle {\hat {A}}}$ operating on a quantum state vector ${\displaystyle |\psi \rangle }$ is written as ${\displaystyle \langle {\hat {A}}\rangle =\langle \psi |{\hat {A}}|\psi \rangle }$. The uncertainty in ${\displaystyle {\hat {A}}}$ can be calculated using the formula ${\displaystyle (\Delta A)^{2}=\langle {\hat {A}}^{2}\rangle -\langle {\hat {A}}\rangle ^{2}}$.
#### Interchanging limits and expectation
In general, it is not the case that ${\displaystyle \operatorname {E} [X_{n}]\to \operatorname {E} [X]}$ even if ${\displaystyle X_{n}\to X}$ pointwise. Thus, one cannot interchange limits and expectation without additional conditions on the random variables. To see this, let ${\displaystyle U}$ be a random variable distributed uniformly on ${\displaystyle [0,1]}$. For ${\displaystyle n\geq 1,}$ define a sequence of random variables
${\displaystyle X_{n}=n\cdot \mathbf {1} \left\{U\in \left[0,{\tfrac {1}{n}}\right]\right\},}$
with ${\displaystyle {\mathbf {1} }\{A\}}$ being the indicator function of the event ${\displaystyle A}$. Then, it follows that ${\displaystyle X_{n}\to 0}$ (a.s). But, ${\displaystyle \operatorname {E} [X_{n}]=n\cdot \operatorname {P} \left(U\in \left[0,{\tfrac {1}{n}}\right]\right)=n\cdot {\tfrac {1}{n}}=1}$ for each ${\displaystyle n}$. Hence, ${\displaystyle \lim _{n\to \infty }\operatorname {E} [X_{n}]=1\neq 0=\operatorname {E} \left[\lim _{n\to \infty }X_{n}\right].}$
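The counterexample can be simulated directly: the sample mean of $X_n$ stays near 1 for every $n$, even though each individual realization is eventually 0 (the trial count is arbitrary):

```python
import random

random.seed(0)

def estimate_E_Xn(n, trials=200_000):
    """Monte Carlo estimate of E[X_n], where X_n = n * 1{U <= 1/n}."""
    hits = sum(1 for _ in range(trials) if random.random() <= 1 / n)
    return n * hits / trials

for n in (2, 10, 100):
    e = estimate_E_Xn(n)
    print(n, e)
    assert abs(e - 1) < 0.1   # E[X_n] = 1 for every n
# Yet for any fixed draw U > 0, X_n(U) = 0 once n > 1/U, so X_n -> 0 pointwise.
```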
Analogously, for a general sequence of random variables ${\displaystyle \{Y_{n}:n\geq 0\}}$, the expected value operator is not ${\displaystyle \sigma }$-additive, i.e.
${\displaystyle \operatorname {E} \left[\sum _{n=0}^{\infty }Y_{n}\right]\neq \sum _{n=0}^{\infty }\operatorname {E} [Y_{n}].}$
An example is easily obtained by setting ${\displaystyle Y_{0}=X_{1}}$ and ${\displaystyle Y_{n}=X_{n+1}-X_{n}}$ for ${\displaystyle n\geq 1}$, where ${\displaystyle X_{n}}$ is as in the previous example.
A number of convergence results specify exact conditions which allow one to interchange limits and expectations, as specified below.
• Monotone convergence theorem: Let ${\displaystyle \{X_{n}:n\geq 0\}}$ be a sequence of random variables, with ${\displaystyle 0\leq X_{n}\leq X_{n+1}}$ (a.s) for each ${\displaystyle n\geq 0}$. Furthermore, let ${\displaystyle X_{n}\to X}$ pointwise. Then, the monotone convergence theorem states that ${\displaystyle \lim _{n}\operatorname {E} [X_{n}]=\operatorname {E} [X].}$
Using the monotone convergence theorem, one can show that expectation indeed satisfies countable additivity for non-negative random variables. In particular, let ${\displaystyle \{X_{i}\}_{i=0}^{\infty }}$ be non-negative random variables. It follows from monotone convergence theorem that
${\displaystyle \operatorname {E} \left[\sum _{i=0}^{\infty }X_{i}\right]=\sum _{i=0}^{\infty }\operatorname {E} [X_{i}].}$
• Fatou's lemma: Let ${\displaystyle \{X_{n}\geq 0:n\geq 0\}}$ be a sequence of non-negative random variables. Fatou's lemma states that
${\displaystyle \operatorname {E} [\liminf _{n}X_{n}]\leq \liminf _{n}\operatorname {E} [X_{n}].}$
Corollary. Let ${\displaystyle X_{n}\geq 0}$ with ${\displaystyle \operatorname {E} [X_{n}]\leq C}$ for all ${\displaystyle n\geq 0}$. If ${\displaystyle X_{n}\to X}$ (a.s), then ${\displaystyle \operatorname {E} [X]\leq C.}$
Proof is by observing that ${\displaystyle \textstyle X=\liminf _{n}X_{n}}$ (a.s.) and applying Fatou's lemma.
• Dominated convergence theorem: Let ${\displaystyle \{X_{n}:n\geq 0\}}$ be a sequence of random variables such that ${\displaystyle X_{n}\to X}$ pointwise (a.s.), ${\displaystyle |X_{n}|\leq Y}$ (a.s.), and ${\displaystyle \operatorname {E} [Y]<\infty }$. Then, according to the dominated convergence theorem,
• ${\displaystyle \operatorname {E} |X|\leq \operatorname {E} [Y]<\infty }$;
• ${\displaystyle \lim _{n}\operatorname {E} [X_{n}]=\operatorname {E} [X]}$
• ${\displaystyle \lim _{n}\operatorname {E} |X_{n}-X|=0.}$
• Uniform integrability: In some cases, the equality ${\displaystyle \displaystyle \lim _{n}\operatorname {E} [X_{n}]=\operatorname {E} [\lim _{n}X_{n}]}$ holds when the sequence ${\displaystyle \{X_{n}\}}$ is uniformly integrable.
#### Inequalities
There are a number of inequalities involving the expected values of functions of random variables. The following list includes some of the more basic ones.
• Markov's inequality: For a nonnegative random variable ${\displaystyle X}$ and ${\displaystyle a>0}$, Markov's inequality states that
${\displaystyle \operatorname {P} (X\geq a)\leq {\frac {\operatorname {E} [X]}{a}}.}$
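The bound is easy to check by Monte Carlo sampling. The sketch below uses an Exponential(1) variable, an arbitrary illustrative choice: its exact tail P(X ≥ 3) = e⁻³ ≈ 0.05 sits comfortably under the Markov bound E[X]/3 ≈ 0.33.

```python
import random

# Monte Carlo check of Markov's inequality for X ~ Exponential(1),
# where E[X] = 1 and the exact tail is P(X >= a) = e^(-a).
def markov_check(a, n_samples=100_000, seed=0):
    rng = random.Random(seed)
    samples = [rng.expovariate(1.0) for _ in range(n_samples)]
    mean = sum(samples) / n_samples
    tail = sum(1 for x in samples if x >= a) / n_samples
    return tail, mean / a   # empirical P(X >= a) and the Markov bound E[X]/a

tail, bound = markov_check(a=3.0)
assert tail <= bound        # e^(-3) ≈ 0.05 is well below the bound ≈ 0.33
```

The bound is loose here, which is typical: Markov's inequality uses no information about the distribution beyond its mean.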
• Bienaymé-Chebyshev inequality: Let ${\displaystyle X}$ be an arbitrary random variable with finite expected value ${\displaystyle \operatorname {E} [X]}$ and finite variance ${\displaystyle \operatorname {Var} [X]\neq 0}$. The Bienaymé-Chebyshev inequality states that, for any real number ${\displaystyle k>0}$,
${\displaystyle \operatorname {P} {\Bigl (}{\Bigl |}X-\operatorname {E} [X]{\Bigr |}\geq k{\sqrt {\operatorname {Var} [X]}}{\Bigr )}\leq {\frac {1}{k^{2}}}.}$
• Jensen's inequality: Let ${\displaystyle f:{\mathbb {R} }\to {\mathbb {R} }}$ be a measurable convex function and ${\displaystyle X}$ a random variable such that ${\displaystyle \operatorname {E} |X|<\infty }$. Jensen's inequality states that
${\displaystyle f(\operatorname {E} (X))\leq \operatorname {E} (f(X)).}$
For example, Jensen's inequality implies that ${\displaystyle |\operatorname {E} [X]|\leq \operatorname {E} |X|}$ since the absolute value function is convex.
• Lyapunov's inequality: Let ${\displaystyle 0<s<t}$. Lyapunov's inequality states that
${\displaystyle \left(\operatorname {E} |X|^{s}\right)^{1/s}\leq \left(\operatorname {E} |X|^{t}\right)^{1/t}.}$
Proof. Applying Jensen's inequality to ${\displaystyle |X|^{s}}$ and ${\displaystyle g(x)=|x|^{t/s}}$, we obtain ${\displaystyle {\bigl (}\operatorname {E} |X|^{s}{\bigr )}^{t/s}\leq \operatorname {E} {\bigl (}|X|^{s}{\bigr )}^{t/s}=\operatorname {E} |X|^{t}}$. Taking the ${\displaystyle t^{th}}$ root of each side completes the proof.
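The monotonicity asserted by Lyapunov's inequality can be seen on a sample; standard normal draws are an arbitrary illustrative choice. (The check can in fact never fail, since Lyapunov's inequality also applies to the empirical distribution of the sample.)

```python
import random

# Empirical check that s -> (E|X|^s)^(1/s) is nondecreasing in s.
rng = random.Random(1)
xs = [rng.gauss(0.0, 1.0) for _ in range(50_000)]

def s_norm(s):
    """Empirical (E|X|^s)^(1/s) over the sample."""
    return (sum(abs(x) ** s for x in xs) / len(xs)) ** (1.0 / s)

norms = [s_norm(s) for s in (0.5, 1.0, 2.0, 3.0, 4.0)]
assert all(a <= b for a, b in zip(norms, norms[1:]))  # nondecreasing in s
```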
• Cauchy–Bunyakovsky–Schwarz inequality: The Cauchy–Bunyakovsky–Schwarz inequality states that
${\displaystyle (\operatorname {E} [XY])^{2}\leq \operatorname {E} [X^{2}]\cdot \operatorname {E} [Y^{2}].}$
• Hölder's inequality: Let ${\displaystyle p}$ and ${\displaystyle q}$ satisfy ${\displaystyle 1\leq p\leq \infty }$, ${\displaystyle 1\leq q\leq \infty }$, and ${\displaystyle 1/p+1/q=1}$. Hölder's inequality states that
${\displaystyle \operatorname {E} |XY|\leq (\operatorname {E} |X|^{p})^{1/p}(\operatorname {E} |Y|^{q})^{1/q}.}$
• Minkowski inequality: Let ${\displaystyle p}$ be a real number satisfying ${\displaystyle 1\leq p\leq \infty }$. Let, in addition, ${\displaystyle \operatorname {E} |X|^{p}<\infty }$ and ${\displaystyle \operatorname {E} |Y|^{p}<\infty }$. Then, according to the Minkowski inequality, ${\displaystyle \operatorname {E} |X+Y|^{p}<\infty }$ and
${\displaystyle {\Bigl (}\operatorname {E} |X+Y|^{p}{\Bigr )}^{1/p}\leq {\Bigl (}\operatorname {E} |X|^{p}{\Bigr )}^{1/p}+{\Bigl (}\operatorname {E} |Y|^{p}{\Bigr )}^{1/p}.}$
## Relationship with characteristic function
The probability density function ${\displaystyle f_{X}}$ of a scalar random variable ${\displaystyle X}$ is related to its characteristic function ${\displaystyle \varphi _{X}}$ by the inversion formula:
${\displaystyle f_{X}(x)={\frac {1}{2\pi }}\int _{\mathbb {R} }e^{-itx}\varphi _{X}(t)\,dt.}$
For the expected value of ${\displaystyle g(X)}$ (where ${\displaystyle g:{\mathbb {R} }\to {\mathbb {R} }}$ is a Borel function), we can use this inversion formula to obtain
${\displaystyle \operatorname {E} [g(X)]={\frac {1}{2\pi }}\int _{\mathbb {R} }g(x)\left[\int _{\mathbb {R} }e^{-itx}\varphi _{X}(t)\,dt\right]\,dx.}$
If ${\displaystyle \operatorname {E} [g(X)]}$ is finite, then changing the order of integration gives, in accordance with the Fubini–Tonelli theorem,
${\displaystyle \operatorname {E} [g(X)]={\frac {1}{2\pi }}\int _{\mathbb {R} }G(t)\varphi _{X}(t)\,dt,}$
where
${\displaystyle G(t)=\int _{\mathbb {R} }g(x)e^{-itx}\,dx}$
is the Fourier transform of ${\displaystyle g(x).}$ The expression for ${\displaystyle \operatorname {E} [g(X)]}$ also follows directly from the Plancherel theorem.
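The identity can be verified numerically. In the sketch below, X is standard normal (so φ_X(t) = e^(−t²/2)) and g is chosen as g(x) = e^(−x²/2) purely because its Fourier transform has the closed form G(t) = √(2π)·e^(−t²/2); both sides then equal 1/√2.

```python
import math

def trapezoid(f, lo, hi, n=20_000):
    """Plain trapezoidal rule; ample accuracy for these decaying integrands."""
    h = (hi - lo) / n
    return h * (0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n)))

phi = lambda t: math.exp(-t * t / 2)            # characteristic function of N(0,1)
g = lambda x: math.exp(-x * x / 2)              # test function (illustrative choice)
G = lambda t: math.sqrt(2 * math.pi) * phi(t)   # Fourier transform of g
density = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

lhs = trapezoid(lambda x: g(x) * density(x), -10, 10)              # E[g(X)] directly
rhs = trapezoid(lambda t: G(t) * phi(t), -10, 10) / (2 * math.pi)  # via the formula

assert abs(lhs - rhs) < 1e-9   # both equal 1/sqrt(2) ≈ 0.7071
```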
## Alternative formula for expected value
For non-negative random variables, one can compute the expected value using an alternative formula involving only the cumulative distribution function of the random variable. Similar formulas for general random variables can be obtained by using the relation ${\displaystyle X=X^{+}-X^{-}}$ and noting that both ${\displaystyle X^{+}}$ and ${\displaystyle X^{-}}$ are non-negative, for which the following applies.
#### Finite and countably infinite case
For a non-negative integer-valued random variable ${\displaystyle X:\Omega \to \{0,1,2,3,\ldots \}\cup \{+\infty \},}$
${\displaystyle \operatorname {E} [X]=\sum _{n=0}^{\infty }\operatorname {P} (X>n)=\sum _{n=0}^{\infty }{\bar {F}}(n),}$
where ${\displaystyle {\bar {F}}(x)=1-F(x)}$, and ${\displaystyle F(x)=P(X\leq x)}$ is the cdf of ${\displaystyle X}$.
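A quick check of this tail-sum identity for a concrete distribution; the geometric distribution on {0, 1, 2, ...} is an arbitrary illustrative choice, since both sides have closed forms.

```python
# Verify E[X] = sum_{n>=0} P(X > n) for a geometric variable on {0, 1, 2, ...}
# with P(X = k) = (1 - p)^k * p, so that P(X > n) = (1 - p)^(n + 1)
# and E[X] = (1 - p) / p.
p = 0.3
N = 1000  # truncation point; the geometric tail beyond N is negligible

direct = sum(k * (1 - p) ** k * p for k in range(N))       # definition of E[X]
tail_sum = sum((1 - p) ** (n + 1) for n in range(N))       # sum of P(X > n)

assert abs(direct - tail_sum) < 1e-9
assert abs(direct - (1 - p) / p) < 1e-9                    # closed form (1-p)/p
```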
#### General case
If ${\displaystyle X:\Omega \to [0,+\infty ]}$ is a non-negative random variable, then
${\displaystyle \operatorname {E} [X]=\int \limits _{[0,+\infty )}\operatorname {P} (X>x)\,dx=\int \limits _{[0,+\infty )}{\bar {F}}(x)\,dx,}$
where ${\displaystyle {\bar {F}}(x)=1-F(x)}$, and ${\displaystyle F}$ is the cdf of ${\displaystyle X}$.
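The continuous version can be checked the same way; an Exponential(λ) variable is an arbitrary illustrative choice, since its survival function and mean are both in closed form.

```python
import math

# Verify E[X] = integral_0^inf P(X > x) dx for X ~ Exponential(lam),
# where P(X > x) = e^(-lam*x) and E[X] = 1/lam.
lam = 2.0
n, hi = 100_000, 20.0   # truncate the integral at 20 (e^(-40) is negligible)
h = hi / n
# Midpoint rule over [0, 20]:
integral = h * sum(math.exp(-lam * (i + 0.5) * h) for i in range(n))

assert abs(integral - 1.0 / lam) < 1e-6
```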
## Literature
• Edwards, A.W.F (2002). Pascal's arithmetical triangle: the story of a mathematical idea (2nd ed.). JHU Press. ISBN 0-8018-6946-3.
• Huygens, Christiaan (1657). De ratiociniis in ludo aleæ (English translation published in 1714).
|
|
Geometric Algebra can be applied to Physics, and many of the introductions to GA online cover this, but they immediately jump to electromagnetic fields or quantum mechanics, which is unfortunate since GA can also greatly simplify 2D kinematics. One such example is uniform circular motion.
You should be familiar with all the concepts presented in An Introduction to Geometric Algebra over R^2 before proceeding.
If we have a vector p that rotates at a constant rate of ω rad/s from a starting position p0, then we can describe the vector p very easily:
$$\boldsymbol{p} = \boldsymbol{p_0} e^{\omega t \boldsymbol{I}}$$
Let's figure out what the derivative of a Rotor looks like, by first recalling its definition:
$$e^{\theta \boldsymbol{I}} := \cos(\theta) + \sin(\theta)\boldsymbol{I}$$
We take the derivative with respect to θ:
\begin{align*} \frac{d}{d \theta} e^{\theta \boldsymbol{I}} &= \frac{d}{d \theta} (\cos(\theta) + \sin(\theta)\boldsymbol{I}) \\ &= -\sin(\theta) + \cos(\theta)\boldsymbol{I} \\ \end{align*}
At this point observe that cos and sin just changed places, along with a sign change, but we know of another operation that does the same thing, which is multiplication by I, so we get:
\begin{align*} \frac{d}{d \theta} e^{\theta \boldsymbol{I}} &= \frac{d}{d \theta} (\cos(\theta) + \sin(\theta)\boldsymbol{I}) \\ &= -\sin(\theta) + \cos(\theta)\boldsymbol{I} \\ &= \boldsymbol{I} (\cos(\theta) + \sin(\theta)\boldsymbol{I}) \\ &= \boldsymbol{I} e^{\theta \boldsymbol{I}} \\ \end{align*}
Not only does the derivative have a nice, neat expression, but we can also read off from the formula what is happening: the derivative is a vector that is rotated 90 degrees from the original vector. Also note that the geometric product isn't commutative in general, but in this case both parts are rotors, so the order doesn't matter.
We can go through the same process to show what happens if θ has a constant multiplier k:
\begin{align*} \frac{d}{d \theta} e^{k \theta \boldsymbol{I}} &= \frac{d}{d \theta} (\cos(k \theta) + \sin(k \theta)\boldsymbol{I}) \\ &= k \boldsymbol{I} e^{k \theta \boldsymbol{I}} \\ \end{align*}
With our new derivative in hand we can now find the velocity vector for our position vector p, since velocity is just the derivative of position with respect to time.
\begin{align*} \boldsymbol{v} &= \frac{d}{dt} \boldsymbol{p} \\ &= \frac{d}{dt} \boldsymbol{p_0} e^{\omega t \boldsymbol{I}} \\ &= \boldsymbol{p_0} \omega \boldsymbol{I} e^{\omega t \boldsymbol{I}} \\ &= \omega \boldsymbol{p_0} \boldsymbol{I} e^{\omega t \boldsymbol{I}} \\ \end{align*}
Again, because we are using Geometric Algebra, we can read off what is going on geometrically from the formula: the derivative is a vector orthogonal to the position vector, scaled by ω.
Note that we've drawn the vector as starting from the position, but that's not required.
We get the acceleration vector in the same manner, by taking the derivative of the velocity vector with respect to time.
\begin{align*} \boldsymbol{a} &= \frac{d}{dt} \boldsymbol{v} \\ &= \frac{d}{dt} \omega \boldsymbol{p_0} \boldsymbol{I} e^{\omega t \boldsymbol{I}} \\ &= \omega \boldsymbol{p_0} \boldsymbol{I} \omega \boldsymbol{I} e^{\omega t \boldsymbol{I}} \\ &= \omega^2 \boldsymbol{p_0} \boldsymbol{I} \boldsymbol{I} e^{\omega t \boldsymbol{I}} \\ &= - \omega^2 \boldsymbol{p_0} e^{\omega t \boldsymbol{I}} \end{align*}
And again we can just read off from the formula what is going on geometrically: we end up with a vector that is rotated 180 degrees from the position vector and scaled by ω².
We can place the acceleration and velocity vectors as starting from the position vector, and that looks like:
Note how simple this was to derive and that the geometric interpretation could be read off of the resulting formulas. We didn't need to leave the 2D plane; that is, all of these calculations took place in 𝔾². The more classical derivations of uniform circular motion rely on the cross product, which takes you out of ℝ² into ℝ³ and doesn't generalize to higher dimensions.
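The whole derivation can be checked numerically. The even subalgebra of 𝔾² (scalars plus multiples of the bivector I) behaves exactly like the complex numbers, so the sketch below models the rotor e^(ωtI) with `cmath.exp(1j*ω*t)` and a 2D vector with a complex number; the ω and p0 values are illustrative.

```python
import cmath

omega = 1.5        # angular rate, rad/s (illustrative value)
p0 = 2.0 + 1.0j    # starting position (illustrative value)

def p(t): return p0 * cmath.exp(1j * omega * t)   # position: p0 e^(omega t I)
def v(t): return omega * p(t) * 1j                # omega p I: rotated 90 deg, scaled by omega
def a(t): return -omega ** 2 * p(t)               # -omega^2 p: rotated 180 deg

# Check the closed forms against centered finite differences.
t, h = 0.7, 1e-6
v_fd = (p(t + h) - p(t - h)) / (2 * h)
a_fd = (v(t + h) - v(t - h)) / (2 * h)
assert abs(v(t) - v_fd) < 1e-6
assert abs(a(t) - a_fd) < 1e-5
assert abs(abs(p(42.0)) - abs(p0)) < 1e-9   # rotors never change the length
```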
|
|
# Use Metallicity in a sentence
1. Metallicity definition is - the quality or state of being metallic.
2. Metallicity is the property of a metal to conduct electricity or heat. A simple metal is directly tied to its transport properties
3. 3.2 Metallicity and SFR Together with the available gas supplies, Metallicity and SFR are properties that intrinsically depend on one another. As mentioned earlier, most of the heavy elements are produced in short lived massive stars and released after their terminal explosion.
4. In astronomy and physical cosmology, the Metallicity of an object is the proportion of its matter made up of chemical elements other than hydrogen and helium.
5. Metallicity (plural metallicities) The quality or state of being metallic. (astronomy) The abundance of elements heavier than helium in stars as a result of nucleosynthesis.
6. However, the dependence of this law on Metallicity is still largely debated
7. In this paper, we combine three samples of Cepheids in the Milky Way (MW), the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC) in order to derive the Metallicity term (hereafter $\gamma$) of the PL relation
8. Continue Shopping Metallicity Jewellery Design
9. My current project is focused on calibrating the behavior of the tip of the red giant branch in the near-infrared, and I have previously worked on emission line stars, RR Lyrae period-luminosity-Metallicity relations, and instrumentation for the Wide-Field Camera 3 on the Hubble Telescope.
10. A second way to characterize Metallicity is through the alpha-to-iron ratio, [ /Fe], which involve elements built by combining helium nuclei, such as Oxygen, Silicon, Neon, etc
11. Two channels to build up Metallicity enhancements:
12. Metallicity is a term in astronomy that refers to the proportion of elements in an astronomical object (usually a star) that are other than hydrogen or helium. In astronomy, all elements heavier than hydrogen and helium are collectively referred to with the blanket term "metals"
13. Unlike carbon nanotubes (which can exhibit metallicity depending on their chirality), isolated armchair and zigzag graphene nanoribbons (GNRs) always feature a …
14. Metallicity The Metallicity of an object is the proportion of its material that is made up of metals. In astronomy, the term 'metals' is used for any element heavier than hydrogen or helium
15. Metallicity is an informative property of dwarf galaxies, offering insight into a dwarf galaxy’s history of gas accretion, loss, and star formation, which helps us understand the dynamics of galaxy formation
16. We studied the Metallicity, stellar age, and halo environment of satellite dwarf galaxies in the Justice League high-resolution
17. Metallicity project, Таллин
18. The Metallicity Distribution Function (MDF) is a hallmark of a stellar population
19. For example, we know that the halo is characterized by a mean [Fe/H] ~ -1.6 while the MDF of the disk is centered closer to solar Metallicity.
20. The Metallicity is typically determined by measuring the abundance of iron in the photosphere relative to the abundance of hydrogen.
21. In the past years, a systematic downward revision of the Metallicity of the Sun has led to the "solar modeling problem", namely the disagreement between predictions of standard solar models and inferences from helioseismology.
22. Astronomers use the term “Metallicity” in reference to elements heavier than hydrogen and helium, such as oxygen, silicon, and iron.
23. Dimensional, volume-complete Metallicity distribution of »2.5 million F/G stars at heliocentric distances of up to »8 kpc
24. SDSS spectroscopic Metallicity was used to calibrate a photometric Metallicity indicator based on the u ¡ g and g ¡ r colors, and an explicit Metallicity dependence term was added to the photometric parallax relation.
25. The Metallicity distribution function is an important concept in stellar and galactic evolution.It is a curve of what proportion of stars have a particular Metallicity ([Fe/H], the relative abundance of iron and hydrogen) of a population of stars such as in a cluster or galaxy.
26. The main factor in determining open cluster Metallicity appears to be galactocentric radius (e.g
27. Since U-B color measures both simultaneously, to separate Metallicity and temperature effects one can make a "two-color diagram" where B-V sorts stars by temperature, and variations in U-B show Metallicity effects
28. The Metallicity distribution of stars as a function of position in the Galactic disc would be an interesting constraint on chemical evolution studies of the Galaxy
29. Transition metal-free magnetism and half-Metallicity recently has been the subject of intense research activity due to its potential in spintronics application
30. More interestingly, room-temperature ferromagnetism, graphene–CaCl heterojunction, coexistence of piezoelectricity-like property and Metallicity, as well as the distinct hydrogen storage and release capability of the CaCl crystals in rGO membranes are experimentally demonstrated.
31. They have been building Metallicity, extinction, and star formation maps of supernovae-rich galaxies like the Fireworks Galaxy
32. The Metallicity distributions of dIrrs resemble simple, leaky box chemical evolution models, whereas dSphs require an additional parameter, such as gas accretion, to explain the shapes of their Metallicity distributions
33. Furthermore, the Metallicity distributions of the more luminous dSphs have sharp, metal-rich cut-offs that are consistent
34. Metallicity (met-ă-liss -ă-tee) (metal abundance) Symbol: Z
35. Since metals are produced by nucleosynthesis in stars, the Metallicity of an object or class of objects depends on when it was formed: objects of later origin in general have a higher
36. We study the effect of multiple SNe using idealized 1D hydrodynamic simulations which explore a large parameter space of the number of SNe, and the background gas density and Metallicity.
37. Http://www.theaudiopedia.com What is Metallicity? What does Metallicity mean? Metallicity meaning - Metallicity pronunciation - Metallicity defini
38. In terms of Metallicity, massive ellipticals with bigger velocity dispersions are a bit more metal-rich than the Sun, whilst smaller galaxies may get down to half the solar Metallicity
39. The gradient in Metallicity is attributed to the density of stars in the galactic center: there are more stars in the centre of the galaxy and so, over time, more metals have been returned to the interstellar medium and incorporated into new stars
40. By a similar mechanism, larger galaxies tend to have a higher Metallicity than their smaller
41. Katharsys - Metallicity LP by Katharsys, released 13 February 2017 1
42. Additional Metallicity measurements, based on other indicators that use a wider set of emission lines, are needed to confirm the trend with stellar mass revealed by the N2 indicator; the Metallicity at z ≳ 2, as measured by the quantity 12 + log(O/H), increases monotonically from < 8.2 for galaxies with ⟨M⋆⟩ = 2.7 × 10⁹ M⊙ to 8.6 for galaxies with ⟨M⋆⟩ = …
43. The Metallicity difference between the two hemispheres is nearly zero at pc and increases to Δ[Fe/H] = 0.05 dex at pc
44. We present optical longslit spectra for a subset of the xGASS and xCOLD GASS galaxies to investigate the correlation between radial Metallicity profiles and cold gas content
45. GRAPHENE Inducing Metallicity in graphene nanoribbons via zero-mode superlattices Daniel J
48. @article{osti_7188661, title = {Metallicity and RR Lyrae light curves}, author = {Simon, N R}, abstractNote = {The quantitative technique of Fourier decomposition is used here to revive the idea of a relation between Metallicity and light curve structure among the field RR(ab) stars
49. One of the ways we categorize stars is by their Metallicity
50. Metallicity effects on open cluster dynamics 1209 Hurley et al
51. Here we report a Metallicity of 0.09 times solar for a massive cloud that is falling into the disk of the Milky Way
52. Surprisingly, Trappist-1 type planetary systems, known as compact and multiple planet systems, seem to preferentially form around low-Metallicity stars
53. The correlation between the gas giant planet occurrence and the Metallicity of their host stars has long been established
|
|
## A question connected with increasing the dimensionality of Euclidean spaces. [closed]
For each positive integer $n$ let $S(n)$ be a compact convex subset of $n$-dimensional Euclidean space with a non-empty interior and let $S(n)$ be a proper subset of $S(n+1)$. Can anyone provide an example of an infinite sequence $S(1),S(2),...,S(n),...$ of this sort in which, as $n$ approaches infinity, the diameters of these sets remain bounded but their $n$-dimensional volumes (i.e. their $n$-dimensional Lebesgue measures) do not converge to zero?
-
Am I missing something here? Suppose all the sets $S(n)$ have diameter at most $d$. Then $S(n) \subset B(x_n,d)$, for some $x_n \in \mathbb{R}^n$, and consequently $$\mathcal{L}^n(S(n)) \le \mathcal{L}^n(B(x_n,d)) = \frac{\pi^{n/2}d^n}{\Gamma(\frac{n}{2}+1)} \to 0$$ as $n \to \infty$. (As you can see this depends on the normalization constants for the measures of the unit balls. If you change those you still see the answer to your question just by looking at the measures of the balls.) Perhaps you are trying to ask something else? – Tapio Rajala Dec 26 2010 at 20:30
to Tapio Rajala: No, you are not missing anything. You have just shown me what should have been obvious to me and why what I was asking for was impossible. It was a stupid question to ask and should probably be deleted. I got confused by trying to understand why the volume of an n-dimensional ball of fixed radius approaches zero as n approaches infinity. – Garabed Gulbenkian Dec 26 2010 at 21:21
Think of the ratio between the volumes of the unit sphere and the unit box in n dimensions. For n=1 it is 1/1. For n=2 it is 1/1 in the middle but $\pi$/4 overall. For n=3 it is $\pi$/4 at the equator and would stay $\pi$/4 if we had a cylinder, but for the sphere it is $\pi$/6. – Aaron Meyerowitz Dec 26 2010 at 21:38
@Garabed Gulbenkian: OK. Usually I am the one making the stupid mistakes and replies, so it is nice to see it going this way for once. :) And yes, this is probably not a research level question.. but still on the subject: Even if you were given some other Haar measures $\mu^n$, your question would still come down to estimating the measures of the (unit) balls. Either $$\limsup_{n \to \infty}\mu^n(B(0,d)) = \limsup_{n \to \infty}\mu^n(B(0,1))d^n \to 0$$ for all $d >0$ or else you have an example using balls. Therefore there is no need to consider any other convex sets. – Tapio Rajala Dec 26 2010 at 22:13
Voting to close, in view of Tapio's excellent answer. – Gerry Myerson Dec 27 2010 at 2:47
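The volume estimate in the accepted comment is easy to reproduce numerically; the sketch below (not part of the original thread, with an illustrative radius d = 2) shows the n-ball volume growing, peaking, and then collapsing toward zero as n increases.

```python
import math

# Volume of an n-dimensional ball of radius d: pi^(n/2) * d^n / Gamma(n/2 + 1).
# For any fixed d this tends to 0 as n grows, which is why bounded-diameter
# convex sets cannot keep their n-dimensional volume away from zero.
def ball_volume(n, d):
    return math.pi ** (n / 2) * d ** n / math.gamma(n / 2 + 1)

vols = [ball_volume(n, d=2.0) for n in (1, 5, 10, 50, 100)]
assert vols[1] > vols[0]               # still growing at n = 5
assert vols[-1] < vols[-2] < vols[-3]  # then decaying
assert vols[-1] < 1e-8                 # essentially gone by n = 100
```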
|
|
Chapter 9
### Introduction
Objectives
1. Apply the concept of time constant to the physiology of mechanical ventilation.
2. Compare constant flow and descending ramp flow patterns during volume-controlled ventilation.
3. Describe the effect of respiratory mechanics on the airway pressure waveform during volume-controlled ventilation.
4. Describe the effect of resistance and compliance on flow during pressure-controlled ventilation.
5. Describe the effect of rise time adjustment during pressure-controlled and pressure support ventilation.
6. Describe the effect of termination flow during pressure support ventilation.
7. Discuss the role of sigh breaths during mechanical ventilation.
8. Discuss the physiologic effects of I:E ratio manipulations.
Microprocessor-controlled ventilators allow the clinician to choose among various inspiratory flow waveforms. This chapter describes the technical and physiologic aspects of various inspiratory waveforms during mechanical ventilation.
### Time Constant
An important principle for understanding pulmonary mechanics during mechanical ventilation is that of the time constant. The time constant determines the rate of change in the volume of a lung unit that is passively inflated or deflated. It is expressed by the relationship:
Vt = Vi × e^(−t/τ), where Vt is the volume of a lung unit at time t, Vi is the initial volume of the lung unit, e is the base of the natural logarithm, and τ is the time constant. The relationship between Vt and τ is illustrated in Figure 9-1. Note that the volume change is nearly complete in five time constants.
###### Figure 9-1
The time constant function for lung emptying. After one time constant, 37% of the volume remains in the lungs, 13% remains after two time constants, 5% remains after three time constants, 2% remains after four time constants, and < 1% remains after five time constants.
For respiratory physiology, τ is the product of resistance and compliance. Lung units with a higher resistance and/or a higher compliance will have a longer time constant and require more time to fill and to empty. Conversely, lung units with a lower resistance and/or compliance will have a shorter time constant and thus require less time to fill and to empty. A simple method to measure the expiratory time constant is to divide the expired tidal volume by the peak expiratory flow during passive positive pressure ventilation:
τ = VT / V̇e(peak), where VT is the expired tidal volume and V̇e(peak) is the peak expiratory flow. Although this is a useful index of the global expiratory time constant, it treats the lung as a single compartment and thus does not account for time constant heterogeneity in the lungs.
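These relationships can be sketched numerically. The resistance and compliance values below are illustrative, not taken from the chapter; the exponential fractions reproduce the percentages quoted in the Figure 9-1 caption.

```python
import math

# Time constant as the product of resistance and compliance.
R = 10.0     # airway resistance, cm H2O/L/s (illustrative value)
C = 0.05     # respiratory system compliance, L/cm H2O (illustrative value)
tau = R * C  # 0.5 s

# Fraction of volume remaining after k time constants of passive exhalation:
remaining = [math.exp(-k) for k in range(1, 6)]
# approximately 37%, 13.5%, 5%, 2%, and <1% (cf. Figure 9-1)
assert abs(remaining[0] - 0.37) < 0.005
assert remaining[4] < 0.01

# Bedside estimate: expired tidal volume divided by peak expiratory flow.
VT = 0.5          # L (illustrative value)
peak_flow = 1.0   # L/s (illustrative value)
assert abs(VT / peak_flow - tau) < 1e-9
```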
### Flow Waveforms
#### Volume-Controlled Ventilation
The flow, pressure, and volume waveforms produced with a constant flow pattern are shown in Figure 9-2. This is often called square-wave or rectangular-wave ventilation due to the shape of the flow waveform. With the constant flow pattern, the volume ...
|
|
# Ekman layer
Figure: The Ekman layer is the layer in a fluid where the flow is the result of a balance between pressure gradient, Coriolis, and turbulent drag forces. In the figure, a wind blowing north creates a surface stress, and a resulting Ekman spiral is found below it in the column of water.
The Ekman Layer is the layer in a fluid where there is a force balance between pressure gradient force, Coriolis force and turbulent drag. It was first described by Vagn Walfrid Ekman.
## History
Ekman developed the theory of the Ekman layer after Fridtjof Nansen observed that ice drifts at an angle of 20°-40° to the right of the prevailing wind direction while on an Arctic expedition aboard the Fram. Nansen asked his colleague, Vilhelm Bjerknes to set one of his students upon study of the problem. Bjerknes tapped Ekman, who presented his results in 1902 as his doctoral thesis.[1]
## Mathematical Formulation
The mathematical formulation of the Ekman layer can be found by assuming a neutrally stratified fluid, with horizontal momentum in balance between the forces of pressure gradient, Coriolis and turbulent drag.
$-fv = -\frac{1}{\rho_o} \frac{\partial p}{\partial x}+K_m \frac{\partial^2 u}{\partial z^2}$
$fu = -\frac{1}{\rho_o} \frac{\partial p}{\partial y}+K_m \frac{\partial^2 v}{\partial z^2}$
$0 = -\frac{1}{\rho_o} \frac{\partial p}{\partial z}$,
where $\ K_m$ is the diffusive eddy viscosity, which can be derived using mixing length theory.
### Boundary Conditions
There are many regions where an Ekman layer is theoretically plausible; they include the bottom of the atmosphere, near the surface of the earth and ocean; the bottom of the ocean, near the sea floor; and the top of the ocean, near the air-water interface.
Each of the different regions will have different boundary conditions. We will consider boundary conditions of the Ekman layer in the upper ocean[2]:
at $z = 0 : A \frac{\partial u}{\partial z} = \tau^x; \quad A \frac{\partial v}{\partial z} = \tau^y$
where $\ \tau$ is the surface stress of the wind field or ice layer at the top of the ocean.
at $z \to -\infty : u = u_g, v = v_g$,
where $\ u_g$ and $\ v_g$ are the geostrophic flows.
### Solution
These differential equations can be solved to find:
$u = u_g + \frac{\sqrt{2}}{fd}e^{z/d}\left[\tau^x \cos(z/d - \pi/4) - \tau^y \sin(z/d - \pi/4)\right]$
$v = v_g + \frac{\sqrt{2}}{fd}e^{z/d}\left[\tau^x \sin(z/d - \pi/4) + \tau^y \cos(z/d - \pi/4)\right].$
and by applying the continuity equation we can have the vertical velocity as following
$w = \frac{1}{f\rho_o}\left[-\left(\frac{\partial \tau^x}{\partial x} + \frac{\partial \tau^y}{\partial y}\right)e^{z/d}\sin(z/d) + \left(\frac{\partial \tau^y}{\partial x} - \frac{\partial \tau^x}{\partial y}\right)\left(1-e^{z/d}\cos(z/d)\right)\right]$
Note that when vertically integrated the volume transport associated with the Ekman spiral is to the right of the wind direction in the Northern Hemisphere.
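The surface deflection and transport direction can be checked directly from the solution. The sketch below evaluates the quoted formulas for a wind stress purely along +x with zero geostrophic flow in the Northern Hemisphere; the values of f, d, and the kinematic stress are illustrative, not taken from the article.

```python
import math

f = 1e-4              # Coriolis parameter, 1/s (illustrative)
d = 20.0              # Ekman depth scale, m (illustrative)
tx, ty = 1e-4, 0.0    # kinematic surface stress components (illustrative)

def uv(z):            # the solution quoted above, with u_g = v_g = 0 and z <= 0
    c = math.sqrt(2) / (f * d) * math.exp(z / d)
    u = c * (tx * math.cos(z / d - math.pi / 4) - ty * math.sin(z / d - math.pi / 4))
    v = c * (tx * math.sin(z / d - math.pi / 4) + ty * math.cos(z / d - math.pi / 4))
    return u, v

# At the surface the current is 45 degrees to the right of the stress:
u0, v0 = uv(0.0)
assert abs(math.degrees(math.atan2(v0, u0)) + 45.0) < 1e-6

# The depth-integrated transport is 90 degrees to the right of the stress
# (the -y direction), with magnitude tau_x / f (midpoint-rule integration):
n, zmin = 60_000, -300.0
h = -zmin / n
U = V = 0.0
for i in range(n):
    ui, vi = uv(zmin + (i + 0.5) * h)
    U += ui * h
    V += vi * h
assert abs(U) < 1e-3 * abs(V)            # no net transport along the wind
assert abs(V + tx / f) < 1e-3 * (tx / f) # transport to the right of the wind
```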
## Experimental Observations of the Ekman Layer
There is much difficulty associated with observing the Ekman layer for two main reasons: the theory is too simplistic as it assumes a constant eddy viscosity, which Ekman himself anticipated[3], saying
“ It is obvious that $\ \left[\nu \right]$ cannot generally be regarded as a constant when the density of water is not uniform within the region considered ”
and because it is difficult to design instruments with great enough sensitivity to observe the velocity profile in the ocean.
### In the Atmosphere
In the atmosphere, the Ekman solution generally overstates the magnitude of the horizontal wind field because it does not account for the velocity shear in the surface layer. Splitting the boundary layer into the surface layer and the Ekman layer generally yields more accurate results.[4]
### In the Ocean
The Ekman layer, with its distinguishing feature the Ekman spiral, is rarely observed in the ocean. The Ekman layer near the surface of the ocean extends only about 10 - 20 meters deep,[4] and instrumentation sensitive enough to observe a velocity profile in such a shallow depth has only been available since around 1980.[2]
#### Instrumentation
Observations of the Ekman layer have only been possible since the development of robust surface moorings and sensitive current meters. Ekman himself developed a current meter to observe the spiral that bears his name, but was not successful.[5] The Vector Measuring Current Meter [6] and the Acoustic Doppler Current Profiler are both used to measure current.
#### Observations
The first observation of the Ekman spiral came in 1980 during the Mixed Layer Experiment.[7]
See also: Ekman transport
## References
1. ^ Cushman-Roisin, Benoit (1994). "Chapter 5 - The Ekman Layer". Introduction to Geophysical Fluid Dynamics (1st ed.). Prentice Hall. pp. 76–77.
2. ^ a b Vallis, Geoffrey K. (2006). "Chapter 2 - Effects of Rotation and Stratification". Atmospheric and Oceanic Fluid Dynamics (1st ed.). Cambridge, UK: Cambridge University Press. pp. 112–113.
3. ^ Ekman, V.W. (1905). "On the influence of the earth's rotation on ocean currents". Ark. Mat. Astron. Fys. 2 (11): 1–52.
4. ^ a b Holton, James R. (2004). "Chapter 5 - The Planetary Boundary Layer". Dynamic Meteorology. International Geophysics Series. 88 (4th ed.). Burlington, MA: Elsevier Academic Press. pp. 129–130.
5. ^ Rudnick, Daniel (2003). "Observations of Momentum Transfer in the Upper Ocean: Did Ekman Get It Right?". Near-Boundary Processes and their Parameterization. Manoa, Hawaii: School of Ocean and Earth Science and Technology.
6. ^ Weller, R.A.; Davis, R.E. (1980). "A vector-measuring current meter". Deep-Sea Res. 27: 565–582.
7. ^ Davis, R.E.; de Szoeke, R.; Niiler, P. (1981). "Part II: Modelling the mixed layer response". Deep-Sea Res. 28: 1453–1475.
|
|
# Getting apostrophes right in listings [duplicate]
I have a very simple problem. I want to display some simple PHP code in LaTeX using listings. For example:
{goToPage('nextpage');
}
My problem is that when I compile the code, the apostrophes (') are displayed as acute accents (´) in the document. I can't figure out how to change this.
Here is the LaTeX code I use:
\documentclass[12pt,a4paper,xcolor=dvipsnames]{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{listings}
\usepackage[T1]{fontenc}
\usepackage[usenames,dvipsnames,svgnames,table]{xcolor}
\lstset{ %
language=php, % the language of the code
basicstyle=\footnotesize, % the size of the fonts that are used for the code
numbers=left, % where to put the line-numbers
numberstyle=\tiny\color{gray}, % the style that is used for the line-numbers
stepnumber=1, % the step between two line-numbers. If it's 1, each line
% will be numbered
inputencoding = utf8,
numbersep=5pt, % how far the line-numbers are from the code
backgroundcolor=\color{white}, % choose the background color. You must add \usepackage{color}
showspaces=false, % show spaces adding particular underscores
showstringspaces=false, % underline spaces within strings
showtabs=false, % show tabs within strings adding particular underscores
frame=single, % adds a frame around the code
extendedchars=true,
rulecolor=\color{black}, % if not set, the frame-color may be changed on line-breaks within not-black text (e.g. commens (green here))
tabsize=2, % sets default tabsize to 2 spaces
captionpos=b, % sets the caption-position to bottom
breaklines=true, % sets automatic line breaking
breakatwhitespace=false, % sets if automatic breaks should only happen at whitespace
title=\lstname, % show the filename of files included with \lstinputlisting;
% also try caption instead of title
keywordstyle=\color{blue}, % keyword style
stringstyle=\color{Purple}, % string literal style
escapeinside={\%*}{*)}, % if you want to add a comment within your code
morekeywords={*,...} % if you want to add more keywords to the set
}
\lstset{framextopmargin=50pt,frame=bottomline}
\author{Philipp Hubert}
\title{SoSci Hilfe}
\begin{document}
\begin{lstlisting}[language=php]
{goToPage('***');
}
\end{lstlisting}
\end{document}
• You should add \begin{document} and \end{document} and things like that so that your example is a complete document. Then someone will surely answer with a change that works for the kind of document you have.
– pst
Apr 17, 2018 at 16:48
\begin{verbatim} Your code \end{verbatim}
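If you want to keep using listings rather than switching to verbatim, the standard fix for this exact symptom is the `upquote` option of the listings package (which requires textcomp); a minimal example:

```latex
\documentclass{article}
\usepackage{textcomp} % required by listings' upquote option
\usepackage{listings}
\lstset{language=php,
        upquote=true} % print ' as a straight quote instead of an acute accent
\begin{document}
\begin{lstlisting}
{goToPage('nextpage');
}
\end{lstlisting}
\end{document}
```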
|
|
# R/graphon-package.R In graphon: A Collection of Graphon Estimation Methods
#' graphon : A Collection of Graphon Estimation Methods
#'
#' The \pkg{graphon} package provides a (not-so-comprehensive) collection of methods for estimating a graphon,
#' a symmetric measurable function, from a single observed network or from multiple observed networks.
#' It also contains several auxiliary functions for generating sample networks using
#' various network models and graphons.
#'
#'
#' @section What is Graphon?:
#' A graphon - a graph function - is a symmetric measurable function \deqn{W:[0,1]^2\rightarrow[0,1]} that arises
#' in the study of exchangeable random graph models and sequences of dense graphs. In the language of
#' graph theory, it can be understood as a two-stage procedural network model in which 1) each vertex/node in the graph
#' is assigned an independent random variable \eqn{u_j} from the uniform distribution \eqn{U[0,1]}, and
#' 2) each edge \eqn{(i,j)} is randomly determined with probability \eqn{W(u_i,u_j)}. Due to such
#' procedural aspect, the term \emph{probability matrix} and \emph{graphon} will be interchangeably used
#' in further documentation.
#'
#'
#' @section Composition of the package:
#' The package mainly consists of two types of functions whose names start with \code{'est'}
#' and \code{'gmodel'} for estimation algorithms and graph models, respectively.
#'
#' The \code{'est'} family has five estimation methods in the current version,
#' \itemize{
#' \item \code{\link{est.LG}} for empirical degree sorting in stochastic blockmodel.
#' \item \code{\link{est.SBA}} for stochastic blockmodel approximation.
#' \item \code{\link{est.USVT}} for universal singular value thresholding.
#' \item \code{\link{est.nbdsmooth}} for neighborhood smoothing.
#' \item \code{\link{est.completion}} for matrix completion from a partially revealed data.
#' }
#'
#' Also, the current release has the following graph models implemented,
#' \itemize{
#' \item \code{\link{gmodel.P}} generates a binary graph given an arbitrary probability matrix.
#' \item \code{\link{gmodel.ER}} is an implementation of Erdos-Renyi random graph models.
#' \item \code{\link{gmodel.block}} is used to generate networks with block structure.
#' \item \code{\link{gmodel.preset}} has 10 exemplary graphon models for simulation.
#' }
#'
#' @section Acknowledgements:
#' Some of the codes are translated from a \href{https://github.com/airoldilab/}{MATLAB package} developed by
#' \href{https://engineering.purdue.edu/ChanGroup/stanleychan.html}{Stanley Chan} (Purdue) and
#' \href{http://www.people.fas.harvard.edu/~airoldi/}{Edoardo Airoldi} (Harvard).
#'
#' @author Kisung You
#' @docType package
#' @name graphon-package
#' @import Rdpack
#' @importFrom utils packageVersion
#' @importFrom ROptSpace OptSpace
#' @importFrom graphics image par
#' @importFrom stats quantile rbinom runif
NULL
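The two-stage sampling procedure described above (latent uniforms, then independent edges with probability \code{W(u_i, u_j)}) can be sketched generically; this illustration is in plain Python and does not use the package's R API:

```python
import random

def sample_graphon(W, n, seed=0):
    """Sample an undirected graph from graphon W via the two-stage procedure:
    1) draw latent positions u_i ~ U[0,1] independently;
    2) connect each pair (i, j) independently with probability W(u_i, u_j)."""
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]           # stage 1: latent positions
    A = [[0] * n for _ in range(n)]                # adjacency matrix
    for i in range(n):
        for j in range(i + 1, n):                  # stage 2: independent edges
            if rng.random() < W(u[i], u[j]):
                A[i][j] = A[j][i] = 1
    return A

# Example: W(x, y) = x*y makes vertices with large latent u more connected.
A = sample_graphon(lambda x, y: x * y, n=50)
```

The resulting adjacency matrix is symmetric with a zero diagonal, as expected for a simple undirected graph.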
graphon documentation built on Sept. 21, 2018, 6:26 p.m.
|
|
NeuralFuse: Improving the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes
Hao-Lun Sun · Lei Hsiung · Nandhini Chandramoorthy · Pin-Yu Chen · Tsung-Yi Ho
Deep neural networks (DNNs) are state-of-the-art models adopted in many machine learning based systems and algorithms. However, a notable issue with DNNs is their considerable energy consumption during training and inference. At the hardware level, one current energy-saving approach at the inference phase is to reduce the voltage supplied to the DNN hardware accelerator. However, operating in the low-voltage regime induces random bit errors in memory and thereby degrades model performance. To address this challenge, we propose $\textbf{NeuralFuse}$, a novel input transformation technique used as an add-on module, to protect the model from severe accuracy drops in low-voltage regimes. With NeuralFuse, we can mitigate the tradeoff between energy and accuracy without retraining the model, and it can be readily applied to DNNs with limited access, such as DNNs on non-configurable hardware or accessed remotely through cloud-based APIs. Compared with unprotected DNNs, our experimental results show that NeuralFuse can reduce memory access energy by up to 24% while improving accuracy in low-voltage regimes by up to 57%. To the best of our knowledge, this is the first model-agnostic approach (i.e., requiring no model retraining) to mitigating the accuracy-energy tradeoff in low-voltage regimes.
|
|
Classical Mechanics
# Deriving the kinematic relations
In the last quiz, we were able to relate position, velocity, and acceleration in special cases. We were also able to write down the relationship between the pairs $$(a,v)$$ and $$(v,d)$$.
In this quiz we're going to formalize our findings, derive the general kinematic relations, and obtain the relations for the special case of constant acceleration, $$a(t) = a_0$$.
When we explored Kinematics in the City, we saw that for constant velocity, $$d=vT$$. In general, when $$v$$ depends on time, this relation generalizes to a differential equation for the rate of change of the displacement $$d$$. How can we write it as such an equation?
Suppose that we break up the $$v$$ vs. $$t$$ plot into rectangles. We can write $r_n = v_1\Delta t + v_2\Delta t + \cdots + v_n \Delta t.$ What is $$r_n - r_{n-1}$$?
In the continuous limit where the differences in time (and position) are infinitesimal, we can allow the differences to become differentials, i.e. \begin{align} \Delta r_n = r_n - r_{n-1} &\longleftrightarrow dr \\ \Delta t_n = t_n - t_{n-1} &\longleftrightarrow dt. \end{align} At each moment, the change in $$r(t)$$ is given by $$v(t)dt$$. Therefore, we can write $\frac{dr}{dt} = v(t),$ or equivalently $r(t_f) = r(t_i) + \int\limits_{t_i}^{t_f} v(t) dt.$
Just as $r_n = v_1\Delta t + v_2 \Delta t + \cdots +v_n \Delta t,$ we have $v_n = a_1\Delta t + a_2 \Delta t + \cdots + a_n \Delta t.$ Similarly, we have $\frac{dv}{dt} = a(t) \longleftrightarrow v(t_f) = v(t_i) + \int\limits_{t_i}^{t_f}a(t)dt.$ As we claimed, the relationship between $$a$$ and $$v$$ is the same as that between $$v$$ and $$r$$.
Suppose you're given the following form for the acceleration of a particle that starts from rest: $a(t) = a_0 e^t.$
Find the velocity at time $$t_f,$$ $$v(t_f),$$ given that the initial velocity, $$v(t_i),$$ is zero.
Thus, given an expression for $$a(t)$$, we can find the velocity $$v(t_f)$$ at all times. In fact, we can integrate once more to find the position $$r(t_f)$$.
Suppose we're given an arbitrary form for the acceleration $$a(t)$$ of a particle that starts from rest at $$r(t_i) = 0$$.
Find its position $$r(t_f)$$.
This is all a bit abstract. Let's apply our formula to a case that we've solved already. Recall our second motorcycle (the one that could briefly boost from $$v$$ to $$v+\Delta v$$).
If it starts from rest at $$r(0)$$, we can set $$v(0)$$ equal to zero, and write the acceleration $$a(t)$$ as $$a_0$$.
Then, \begin{align} r(t_f) &= r(0) + \int\limits_0^{t_f} dt \int\limits_0^{t} dt^\prime a_0 \\&= \int\limits_0^{t_f} dt\ a_0 t \\ &= \frac12 a_0 {t_f}^2. \end{align}
Putting it all together, if our motorcycle has an initial velocity $$v(0) = v_0$$, then we find that the position is given by
\begin{align} r(T) &= r_0 + \int\limits_0^T v_0 dt + \int\limits_0^T dt \left[\int\limits_0^t dt^\prime\ a_0\right] \\ &= r_0 + v_0T + \int\limits_0^T dt\ a_0 t \\ &= r_0 + v_0T + \frac12 a_0T^2. \end{align}
Suppose our motorcyclist starts at their apartment, with initial speed $$v_0$$ and accelerates at a constant rate so that they're moving at $$v_f$$ a time $$T$$ later.
How far do they travel during this motion?
Sometimes it is necessary to relate the initial and final velocities of an object in terms of its distance traveled, without reference to the time. As one example, an insurance specialist may come to the scene of a motorcycle crash and know the braking deceleration of the motorcycle along with the distance of its skid, but want to know how fast it was going before the accident.
Suppose a motorcycle starts at $$v=v_0$$ and then skids over a distance of $$d$$, ending the skid at $$v=v_f$$ .
Find $${v_f}^2 - {v_0}^2$$ in terms of the distance $$d$$ and the deceleration $$a$$.
We've shown how to derive the kinematic relations from the fundamental relationships between $$r$$, $$v$$, and $$a$$: $\frac{dr}{dt}=v, \quad \frac{dv}{dt} = a.$ For the case of constant acceleration, we showed $r_T = r_0 + v_0T + \frac12 a_0 T^2.$ Finally, we showed that we can relate the initial and final velocity without reference to time via ${v_T}^2 - {v_0}^2 = 2a(r_T - r_0).$
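The constant-acceleration results above can be checked numerically. This sketch (with arbitrarily chosen values for $$r_0$$, $$v_0$$, $$a_0$$, and $$T$$) steps the defining equations $$dr/dt = v$$, $$dv/dt = a$$ forward in small time increments and compares against the closed forms:

```python
# Numerically integrate dv/dt = a, dr/dt = v for constant a and compare
# with the closed-form kinematic relations derived above.
def integrate(a0, v0, r0, T, n=100_000):
    dt = T / n
    r, v = r0, v0
    for _ in range(n):
        v_new = v + a0 * dt
        r += 0.5 * (v + v_new) * dt   # trapezoidal step (exact for constant a)
        v = v_new
    return r, v

r0, v0, a0, T = 1.0, 3.0, 2.0, 5.0            # arbitrary test values
r_num, v_num = integrate(a0, v0, r0, T)

r_exact = r0 + v0 * T + 0.5 * a0 * T**2        # r_T = r_0 + v_0 T + a_0 T^2 / 2
assert abs(r_num - r_exact) < 1e-6
# time-free relation: v_T^2 - v_0^2 = 2 a (r_T - r_0)
assert abs(v_num**2 - v0**2 - 2 * a0 * (r_num - r0)) < 1e-5
```

Both the position formula and the time-free velocity relation agree with the step-by-step integration, as the derivation predicts.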
In this quiz, we've derived the relations in the context of 1D motion. However, these derivations work just the same in 2 or more dimensions. In general, for $$N$$-dimensional $$\mathbf{r}$$, $$\mathbf{v}$$, and $$\mathbf{a}$$, we have \begin{align} \mathbf{r_T} &= \mathbf{r_0} + \mathbf{v_0}T + \frac12 \mathbf{a_0}T^2 \\ \mathbf{v_T}^2 - \mathbf{v_0}^2 &= 2\mathbf{a}\cdot\left(\mathbf{r_T} - \mathbf{r_0}\right). \end{align} One common example we'll revisit is that of projectile motion, where the vertical motion of an object in a gravitational field $$g$$ is uncoupled from its horizontal motion.
|
|
# What is the wavelength of the light?
1. ### atsum
9
Calculate the wavelength of the light emitted when an electron in a one-dimensional box of length 5.2nm makes a transition from the n = 7 state to the n = 6 state.
I calculated in this way:
E = (n_f^2 - n_i^2)*h^2/(8*m*a^2)
E = (7^2 - 6^2)*(6.626*10^-34)^2/[8*(9.1*10^-31)*(5.2*10^-9)^2]
E = 2.9*10^-20 J
E = hc/λ
λ = 6.85*10^-6 m
What is wrong with my calculation?
2. ### Simon Bridge
15,474
How do you know the answer is wrong?
It can help if you do the algebra first, then use more convenient units:$$\lambda=\frac{8(m_e c^2) L^2}{hc}\frac{1}{n_f^2-n_i^2}$$
##hc=1.240\text{eV$\mu$m}##
##m_ec^2=511\text{keV}=511000\text{eV}##
##L=0.0052\text{$\mu$m}##
... give it a go.
(Don't forget to check my algebra to get that equation.)
3. ### atsum
9
It is the same calculation as mine.
I did the exercise on MasteringChemistry. It only said my answer is wrong.
4. ### Simon Bridge
15,474
But I got a different order of magnitude from you.
Repeat the calculation ... you have misplaced a decimal point someplace.
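As a quick numerical cross-check of the original calculation (standard values of the constants assumed, and the usual particle-in-a-box levels $$E_n = n^2h^2/8mL^2$$):

```python
# Redo the arithmetic for the n = 7 -> 6 transition in a 5.2 nm box.
h = 6.626e-34        # Planck constant, J s
c = 2.998e8          # speed of light, m/s
m_e = 9.109e-31      # electron mass, kg
L = 5.2e-9           # box length, m

dE = (7**2 - 6**2) * h**2 / (8 * m_e * L**2)   # transition energy, J
lam = h * c / dE                               # emitted wavelength, m
# dE comes out near 2.9e-20 J and lam near 6.86e-6 m,
# matching the original poster's 6.85e-6 m.
```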
|
|
# Bounding Castelnuovo-Mumford regularity in a short exact sequence
Let $R$ be a commutative ring with unity, $I$ be an ideal and $a\in R$ be an element in $R$. We have the following short exact sequence:$$0\rightarrow R/(I:a)\rightarrow R/I\rightarrow R/(I+(a))\rightarrow 0$$ where the injection is multiplication by $a$, and the surjection is the canonical one. Moreover, it is known that whenever we have a short exact sequence of finitely generated graded modules over a polynomial ring over a field: $$0\rightarrow M''\rightarrow M\rightarrow M'\rightarrow 0$$ we can bound CM regularity as $\mathop{\rm reg}M\leq\max(\mathop{\rm reg} M'',\mathop{\rm reg}M')$.
In particular, if we let $R=\mathbb{C}[x]$, $I=(x^2)$ and $a=x$, we have $(I:a)=I+(a)=(x)$ homogeneous with $$\mathop{\rm reg}(R/I)=\mathop{\rm reg}(\mathbb{C}[x]/(x^2))=1$$ and $$\mathop{\rm reg}(R/(I:a))=\mathop{\rm reg}(R/(I+(a)))=\mathop{\rm reg}(\mathbb{C}[x]/(x))=0.$$ This seems to contradict the bound on the regularity. Where is the problem?
## 1 Answer
Of course, after I have been thinking about this for three weeks now, I find the answer right when I post the question. I am sorry.
The problem is that the exact sequence above is not a graded exact sequence. We need to shift the grading, and then the estimation is fine.
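Concretely (a sketch): the injection "multiply by $a$" raises degrees by $d=\deg a$, so the first term must be twisted by $(-d)$ for all maps in the sequence to be degree-preserving:

```latex
0 \longrightarrow \bigl(R/(I:a)\bigr)(-d)
  \xrightarrow{\ \cdot a\ } R/I
  \longrightarrow R/(I+(a)) \longrightarrow 0,
\qquad d = \deg a.
```

In the example, $d=1$ and $\mathop{\rm reg}\bigl((R/(x))(-1)\bigr)=0+1=1$, so the bound gives $\mathop{\rm reg}(R/(x^2))\leq\max(1,0)=1$, consistent with the direct computation.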
|
|
# zbMATH — the first resource for mathematics
Shimura varieties. (English) Zbl 1435.14004
London Mathematical Society Lecture Note Series 457. Cambridge: Cambridge University Press (ISBN 978-1-108-70486-1/pbk; 978-1-108-64971-1/ebook). iii, 333 p. (2020).
Publisher’s description: This is the second volume of a series of mainly expository articles on the arithmetic theory of automorphic forms. It forms a sequel to L. Clozel (ed.) et al. [Stabilization of the trace formula, Shimura varieties, and arithmetic applications. Volume 1: On the stabilization of the trace formula. Somerville, MA: International Press (2011; Zbl 1255.11027)]. The books are intended primarily for two groups of readers: those interested in the structure of automorphic forms on reductive groups over number fields, and specifically in qualitative information on multiplicities of automorphic representations; and those interested in the classification of $$\ell$$-adic representations of Galois groups of number fields. Langlands’ conjectures elaborate on the notion that these two problems overlap considerably. These volumes present convincing evidence supporting this, clearly and succinctly enough that readers can pass with minimal effort between the two points of view. Over a decade’s worth of progress toward the stabilization of the Arthur-Selberg trace formula, culminating in Ngô Bảo Châu’s proof of the Fundamental Lemma, makes this series timely.
Systematically develops the Langlands-Kottwitz method for Shimura varieties through the important example of unitary groups
Constructs Galois representations attached to automorphic representations of $$GL(n)$$
Includes several surveys of contemporary developments as well as original research
The articles of this volume will be reviewed individually.
Indexed articles:
Haines, T. J.; Harris, M., Introduction to Volume II, 1-21 [Zbl 1436.11004]
Genestier, A.; Ngô, B. C., Lectures on Shimura varieties, 22-71 [Zbl 1440.11001]
Nicole, Marc-Hubert, Unitary Shimura varieties, 72-95 [Zbl 1440.14138]
Rozensztajn, Sandra, Integral models of Shimura varieties of PEL type, 96-114 [Zbl 1440.14139]
Zhu, Yihang, Introduction to the Langlands-Kottwitz method, 115-150 [Zbl 1440.14141]
Kisin, Mark, Integral canonical models of Shimura varieties: an update, 151-165 [Zbl 1440.14136]
Mantovan, Elena, The Newton stratification, 166-191 [Zbl 1440.14115]
Viehmann, Eva, On the geometry of the Newton stratification, 192-208 [Zbl 1440.14140]
Shin, Sug Woo, Construction of automorphic Galois representations: the self-dual case, 209-250 [Zbl 1455.11079]
Scholze, Peter, The local Langlands correspondence for GL$$_n$$ over $$p$$-adic fields, and the cohomology of compact unitary Shimura varieties, 251-265 [Zbl 1440.11218]
Chenevier, Gaëtan, An application of Hecke varieties from unitary groups, 266-296 [Zbl 07219416]
Sorensen, Claus M., A patching lemma, 297-305 [Zbl 1440.11219]
Johansson, Christian; Thorne, Jack A., On subquotients of the étale cohomology of Shimura varieties, 306-333 [Zbl 07219418]
##### MSC:
14-06 Proceedings, conferences, collections, etc. pertaining to algebraic geometry
11-06 Proceedings, conferences, collections, etc. pertaining to number theory
14G35 Modular and Shimura varieties
11G18 Arithmetic aspects of modular and Shimura varieties
00B15 Collections of articles of miscellaneous specific interest
|
|
# SPARK 2014 Rationale: Information Flow
## by Florian Schanda – Apr 25, 2014
We will start off with a simple example. Let's assume that we want to write a procedure that doubles and then swaps the variables X and Y. The final value of X should depend only on the original value of Y, and the final value of Y should depend only on the original value of X. Now let's write some code and add the Depends contract we just described.
procedure Double_And_SWAP (X, Y : in out Integer)
with Global => null, -- We use no global variables.
Depends => (X => Y, -- This reads as: "X depends on Y"
Y => X) -- This reads as: "Y depends on X"
is
Tmp : Integer;
begin
X := X * 2;
Y := Y * 2;
Tmp := X;
X := Y;
X := Tmp; -- Oops, I mistyped... (should be "Y := Tmp;")
end Double_And_SWAP;
When the tools analyze the above code, they complain that the Depends annotation does not match the implementation: each variable ends up depending on itself instead of on the other. At this point, to make the error go away, we have to either change the code or change the dependency relation. In this particular example the problem lies with the code. However, this might not always be the case; it could very well be that our contracts/specifications were wrong because we failed to notice a dependency and consequently failed to capture it in the Depends aspect. Had we not added the dependency relation, it would have been easy to miss the typo and end up with an error in our code. Spotting the error was easy on this occasion, but the more complicated the code, the harder it gets. The tools make our life easier by highlighting the path in the code that leads to the discrepancy.
The "Plan first, act later!" advice applies here: programmers should first formulate their dependency relations and then proceed to the implementation.
Let's now point out some key characteristics of the Depends aspect. The Depends aspect tells us how the outputs of a subprogram relate to its inputs. Inputs always remain unchanged, so they cannot depend on anything. If an output 'X' does not depend on any input, then we have to state this explicitly by writing "Depends => (X => null)". Similarly, if an input 'Y' of the subprogram is not used by any output, we have to state this by writing "Depends => (null => Y)".
Suppose that we want to write a procedure that takes a single parameter 'Y' and then sleeps for 'Y' milliseconds. Since time is not modelled in SPARK, this procedure will appear to have no output and input 'Y' will appear to be doing nothing. The dependency relation of this Sleep procedure will look exactly as mentioned before "Depends => (null => Y)".
Let's now try the inverse: we will look at an unannotated piece of code and try to figure out what the corresponding Depends aspect should be.
procedure No_Depends is
begin
if Condition then
X := Y;
end if;
end No_Depends;
So let me think out loud... Since the global variables Condition and Y are only read, they are inputs. Global variable X, on the other hand, is only ever written, so it has to be just an output. So the first dependency relation that pops into mind is "Depends => (X => (Y, Condition))". Right?
...
WRONG! When Condition is False, X remains exactly the way it was. So X depends on itself and is in fact also an input. It is as if we had written:
procedure No_Depends is
begin
if Condition then
X := Y;
else
X := X;
end if;
end No_Depends;
The correct dependency for the code above would be "Depends => (X => (X, Y, Condition))". A shorthand for this is "Depends => (X =>+ (Y, Condition))". The '+' symbol means that variables on the left hand side also depend on themselves. So, the thing to remember here is that even though calculating the dependency relation of a subprogram is not too hard, there are some subtleties involved.
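Putting the pieces together, a fully annotated version might look like the following sketch (the Global aspect and the variable declarations are assumptions here, since the original excerpt omits them):

```ada
X, Y      : Integer := 0;
Condition : Boolean := False;

procedure No_Depends
  with Global  => (Input  => (Y, Condition),
                   In_Out => X),               --  X is read and written
       Depends => (X =>+ (Y, Condition))       --  X depends on itself, Y and Condition
is
begin
   if Condition then
      X := Y;
   end if;
end No_Depends;
```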
Aspect Depends tells us how the outputs of a subprogram relate to its inputs. This improves readability and maintainability of the code by strengthening the interface specification of a subprogram. In certain contexts, such as the development of secure systems, this is a very powerful verification/assurance technique. Here, it is recommended that programmers provide dependency relations before they start writing the actual code so that the tools can verify the validity of the implementation against the annotations. If we all were to adopt this habit, higher quality code would be generated and the world would be a better and more secure place! :D
Posted in #Formal Verification #SPARK
|
|
|
|
# Crushed Shell Calculator
At Pierce Materials & Services LLC in Lakeland, Florida, we provide the finest aggregate materials and dependable residential and commercial services at competitive prices. 5 tons can cover one cubic yard. In Imperial or US customary measurement system, the density is equal to 53 pound. Snape grass is an item used in the making of prayer potions and fishing potions. 00 ½ YD $12. ” These aggregates contain the soil particles sand, silt and clay, along with gravel or rock particles. Sand Shell (or shell sand) is naturally occurring in layers of the Earth's surface (especially near the coast). If you do not know the product density, use the optional density estimator* or contact a local sales representative. With a variety of sizes and kinds, All. Landscape Materials Calculator -- Calculate the amount of material, in yards, you need for a job. You have searched for sea shell shadow box and this page displays the closest product matches we have for sea shell shadow box to buy online. Earth (loose) 2,050 lb. Buy Walnut Shells at BrambleBerry. 73 then we can calculate that its density is 0. As sea shells break down, the calcium they contain enters the soil and serves as additional nutrients to the plants. If you're in the US, divide 2295 by 2000 (there are 2000lb in a US ton). A few weeks ago, I posted pictures of the first steps of reclaiming my classic Old Florida style crushed shell driveway. Fruit seeds: Experiment with cranberry, blackberry, blueberry, strawberry, apricot pits, etc. Brand Name: Midwest Manufacturing. The first thing most callers say isThis is a small job. Normally the mantle is expanded to meet the outer lip of the shell and you can see it encroaching and sometimes covering the columella. Converter for quantity amounts of Tomatoes, crushed, canned food between units measured in g, gram, dag, dekagram (10g), portion 100 g, grams, kg, kilogram (1,000g), oz, ounce (28. flexiblepavements. 0 g phosphorus per 1000 calories. 
Quantities can vary depending on labor costs, workmanship, breakage and waste. Whether you’re celebrating a birthday, graduation. Simply enter the material to be conveyed and the length of the conveyor that is being considered. Hardscaping. The Inland Shell Comes in four sizes Fines, 1/2” Shell, 1 1/2” Shell and Inland Reef Shell. ~ A crushed basalt gravel available in 5/8″ size with a dark grey look when dry, and very black look when wet. 2 acre property was purchased by Robert and Patrica Martell in 1974. lot coverage. Crushed Oyster Shell is a superior alternative to grit for your feathered friend. All of our crushed concrete at SCC consists of uncontaminated materials that meet TxDOT and city specifications and assists in the production of quality secondary aggregates. We have been in the business for 20+ years and take pride in providing our customers with the best customer service and the highest quality of materials available. Crushed Shell Driveway Cost. Landscaping With Shells - Rock Solid Landscape Stone Center …. Our materials calculator is meant to serve as an estimate of product needed. Melissa Kaplan, 1995. Established in 1891, we are still one of only a few family owned brick manufacturing facilities. size and attractive white color. Complete installation of a driveway with a base and surface layer of compacted gravel starts at roughly$0. GET A QUOTE PRODUCT AVAILABILITY MATERIAL CALCULATOR Flex Base TX Dot Item 247 Flex Base material is widely used for temporary roads as well as base material for underneath asphalt and concrete paving, sidewalks, driveways, and laydown areas. Formulas Used. NO HOA RULES!!! There is so much potential with this house. Prices depend on the type of shells and location. Please contact our representative for coverage estimates. And rake it out. Due to reduced capacity, some items have been temporarily removed from the site, and new orders can take up to 30 days to arrive. 
STC Ratings for Windows The STC rating of windows depends on the thickness of the glass, use of PVB or EVA, the number of panes, the frame material, and weather-stripping seals. I understand this calculator is designed to provide an estimated materials cost only. DOT-Certified Asphalt, Licensed and Insured, Knowledgeable Staff. 00 Select options;. 21 t/yd³ or 0. Used for extra grit in ice control sand, or for those seeking coarse sand with very little fines. One Yard will cover 100 Square Feet to a depth of 2 inches for Rock and Mulch, 80 Square Feet for Bark and 150 Square Feet for Shell. Crushed concrete is exactly what it sounds, concrete ground into small pieces. Application. The crushed shell extends into the fenced backyard through a gate making a perfect spot to store your boat or RV. While a large portion of this aggregate material is barged in from the northern states, PMC produces much. Crushed clamshells are the cheapest in Florida, Georgia, and Alabama, where it's $10 per ton,$14 per yard, or $0. If you're in the US, divide 2295 by 2000 (there are 2000lb in a US ton). Commercial Limerock. These sea shells cover approximately 3 sq. It is a delicate sac containing vital organs, including the lung and gills. Derivative calculator Integral calculator Definite integrator Limit calculator Series calculator Equation solver Expression simplifier Factoring calculator Expression calculator Inverse function Taylor series Matrix calculator Matrix arithmetic Graphing calculator. Crushed oyster shell grit is ideal as an additional source of calcium to help your bird maintain strong bones and healthy feathers. Here you can browse through our top exotic aggregate products from Mother of Pearl chips and other beautiful shell aggregates found regionally to our headquarters in Charlotte, North Carolina. For best results, please consult a Pioneer Sales Professional at any of our locations. We recommend rounding up on all approximations, to ensure you have enough material. 
Find more information about this product on the Manufacturer's website. Another simpler method is to store the shells in a baggy or bowl in your refrigerator until you are ready to crush them for use. Due to reduced capacity, some items have been temporarily removed from the site, and new orders can take up to 30 days to arrive. Drop rate estimates based on data collected by /u/Rathus and the Grand Order Subreddit. Converters between imperial and metric units, as well between units of the same system, but different scale. TERRAZZCO® crushes and processes marble, glass and shell chips varying in color and sizes. The specifications are 1″ minus with very little powder. 00 ½ YD$22. We provide high quality hauling services to deliver materials like fill dirt, rip rap, shell, sand, 57 stone, top soil and more! At Carrillo Trucking of Sarasota, Inc. deep, and pack a 6-in. It is perfect for healthy, smooth shell development in snails. • Minimum 3 characters. (crushed) 2,565. Decomposed Granite and D. A crushed stone calculator will help you figure out the amount of crushed stone you'll need for a patio, path or driveway project. Type in inches and feet of your project and calculate the estimated amount of Sand / Screenings in cubic yards, cubic feet and Tons, that your need for your project. A cubic yard is about $40 and a ton about$50. Sharing the same values in providing buffers to stabilize and enhance the pH, crushed corals usually push the pH up to 7. For best results, please consult a Pioneer Sales Professional at any of our locations. * Price per cubic yard…. Crushed Shells Cleaned, crushed, processed shells for driveways, walkways, etc. The LnL has arrived and I have the dies for. Another crushed stone variety is the riprap stone, which measures 3 1/2 inches in diameter and serves well as a soil stabilizer, as a backing for stone walls and retaining wells. Yet with a little. 
We are the area's premier supplier of large and small quantities of rock, shell, and fill materials of all varieties, and our trucks are also available for haul-offs of dirt and concrete from your construction site. Crushed limestone base is a staple for road and driveway work. If you are looking for a unique and contemporary appearance, crushed shell is a popular driveway choice in Tampa, FL. For aquarium buffering, note the practical difference between the two shell products: crushed oyster shell dissolves and vanishes, letting you know when to add more, while crushed coral remains, looking good long after it has stopped doing anything. In peanut processing, shells of the sized peanuts are crushed by passing them between rollers adjusted for peanut size. No matter what your project entails or how frequently you've worked with crushed stone, it's extremely important to be able to accurately estimate the quantity of materials you'll need to buy.
Shell is a great alternative to other types of ground cover and base materials: a unique look in a long-lasting package. Grit stays in a chicken's gizzard, where it is used to grind food. Crushed shell is also perfect for healthy, smooth shell development in snails. Our Gray Tan Crushed Stone is a blend of gray, tan, and brown stone crushed to various sizes. To use this calculator, enter your desired width, length, and depth requirements; it is designed to provide an estimated materials cost only. To prep a wood surface before sealing, you may want to use a sander to remove imperfections that you do not want sealed inside.
A gravel driveway is the perfect choice for a countryside residence and is also your cheapest driveway option. Crushed shells share some of gravel's qualities, such as helping to control erosion and runoff. Pecan shells are a richly colored red-brown mulch used in landscaping and gardening. When cracking peanuts, the gap between the rollers must be narrow enough to crack the hulls but wide enough to prevent damage to the kernels. Graded surge crushed stone that is 4 to 10 inches in diameter is an erosion controller like riprap and is also used on creek banks and in large storm-drain lines. In the oil palm, the kernel sits inside the seed's shell; when the kernel is crushed, it yields palm kernel oil. A cubic yard calculator for topsoil, sand, gravel, or dirt works by multiplying the area (length times width) by the depth. A related tool is the spherical-shell volume calculator, which returns the volume of the shell in cubic meters.
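The spherical-shell volume mentioned above follows the standard formula V = 4/3 · π · (R³ − r³). The sketch below assumes the calculator's inputs are the outer radius and a wall thickness in meters; that parameterization is an assumption, since the original only names the outer radius.

```python
import math

def spherical_shell_volume(outer_radius_m: float, thickness_m: float) -> float:
    """Volume of a spherical shell in cubic meters.

    V = 4/3 * pi * (R^3 - r^3), where r = R - thickness is the inner radius.
    """
    inner_radius_m = outer_radius_m - thickness_m
    if inner_radius_m < 0:
        raise ValueError("thickness cannot exceed the outer radius")
    return (4.0 / 3.0) * math.pi * (outer_radius_m ** 3 - inner_radius_m ** 3)

# A 1 m sphere with a 0.1 m thick shell:
print(round(spherical_shell_volume(1.0, 0.1), 3))  # ~1.135 m³
```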
Now that you know your square footage, use the product calculator below to figure out how much material you need. Crushed shells are actually a very effective surface for walkways and driveways and, when bought in bulk, can be very affordable. Once sand shell is installed, it takes a few weeks of settling for the shells to appear at the surface (watering in is recommended). Our washed, crushed shell is composed of partial and whole seashell parts mined from underground veins; the size and grading of the pieces decide how your pathway will perform. If the patio is small, 8 or 10 feet, you can simply tape a 4-foot level to the top edge of a straight 8- or 10-foot 2-by-4 and use it to check the surface. Crushed oyster shell is also sold in bulk, for example Manna Pro Oyster Shell in 50 lb bags.
Our materials calculator is meant to serve as an estimate of the product needed: enter the width, length, thickness, and product density and hit the Calculate button. If desired, you can enter material-specific densities for a more accurate volume estimate. Quantities can vary depending on labor, workmanship, breakage, and waste. Whether by the ton, yard, or truckload, crushed stone is the foundation material for many outdoor stone projects, and crushed gravel driveways are commonly composed of sand, silt, clay, and larger aggregates (pebbles and small stones). As seashells break down, the calcium they contain enters the soil and serves as an additional nutrient for plants. When balancing phosphorus with calcium in a diet, it is the elemental calcium that matters for Ca:P ratios.
Inland shell is a great alternative to other types of ground cover and base materials, and washed seashells are an elegant, beachy alternative to gravel. Crushed concrete base shall follow FDOT Standard Specifications 2015. To estimate volume: length in feet x width in feet x depth in feet (inches divided by 12). Note that we sell material by the ton, so measure twice and order once.
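The length x width x depth formula above reduces to a one-line function once you divide by 27 (the cubic feet in a cubic yard, restated later in this piece as 3 ft x 3 ft x 3 ft). This is a minimal sketch, not any vendor's actual calculator:

```python
def cubic_yards(length_ft: float, width_ft: float, depth_in: float) -> float:
    """Length (ft) x width (ft) x depth (ft, i.e. inches / 12), then / 27."""
    return length_ft * width_ft * (depth_in / 12.0) / 27.0

# The driveway example used later in the text: 36 ft long, 9 ft wide, 3 in deep.
print(cubic_yards(36, 9, 3))  # 3.0 cubic yards
```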
Gravel is commonly sold by the cubic yard, so to find the material needed for a driveway, find the volume in yards; this calculator should be used for estimates only. A washed sand is used to backfill petroleum tanks and in sewer and water-main applications. Traditional dry measures can be surprising: a bushel of tomatoes, for example, is supposed to weigh 56 pounds, as is a bushel of shelled corn.
We stock 1 1/2-2″ round stone and 3/4″ red crushed stone, among other premium crushed and decorative stone products, at seven accessible locations in northeastern Connecticut and southern Rhode Island. Recycled crushed concrete needs no new materials and little transport, so its cost is severely diminished. Enter the width, length, thickness, and product density and hit the Calculate button; the calculator will tell you approximately how many cubic yards or tons of material you'll need. For instance, for a gravel driveway that is 36 feet long, 9 feet wide, and 3 inches deep, multiply 36 x 9 x 0.25 and divide by 27. You can also calculate volumes for concrete slabs, walls, footers, columns, steps, curbs, and gutters. The crushed stone calculator offers 4 "Box" area fields and 2 "Circular" area fields so you can calculate multiple areas simultaneously (back yard, front yard, driveway, garden, etc.).
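The multi-area calculator described above (box fields plus circular fields, summed to one volume) can be sketched like this; the helper names are illustrative, and the example dimensions are made up:

```python
import math

def box_area_sqft(length_ft: float, width_ft: float) -> float:
    """Area of one rectangular 'Box' field."""
    return length_ft * width_ft

def circle_area_sqft(diameter_ft: float) -> float:
    """Area of one 'Circular' field, given its diameter."""
    return math.pi * (diameter_ft / 2.0) ** 2

def total_cubic_yards(areas_sqft, depth_in: float) -> float:
    """Sum the individual areas, then convert to yd³ at one shared depth."""
    return sum(areas_sqft) * (depth_in / 12.0) / 27.0

# Hypothetical yard: two rectangles and one round bed, all 3 inches deep.
areas = [box_area_sqft(20, 10), box_area_sqft(12, 25), circle_area_sqft(8)]
print(round(total_cubic_yards(areas, 3), 2))
```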
Enter dimensions in US units (inches or feet) or metric units (centimeters or meters) of your concrete structure to get the cubic yards of concrete you will need. Calculate the amount of gravel or aggregate needed in tons and cubic yards by entering the dimensions below. Typical reference unit weights (FEECO): plate glass, about 172 lb per cubic foot; whole oyster shells, about 75-80 lb per cubic foot. Our recycled products include crushed concrete base at $30 (a base layer for new driveways that binds with dirt), crushed concrete 610 at $32 (a mix of 1″ rock down to fines for new driveways), and crushed asphalt millings at $42 (a black base material with no dust). All of our crushed concrete consists of uncontaminated materials that meet TxDOT and city specifications.
We offer competitive prices on premium bulk stone, and our stone calculator will help you estimate how many cubic yards of stone you need for your desired coverage area. Flagstone's casual, free-form design lets you relax and be creative rather than worrying about precise cutting and fitting. Cleaned, sized crushed glass aggregates are easy to work with and reduce landfill waste. Per IS 2386 (Part II), organic impurities, when determined by the colour of the liquid, shall be lighter than that specified in the code. Now that you know your square footage, use the product calculator to figure out how much material you need, and allow for settling: expect the average mulch product to settle by about 15 to 20 percent, and soil products to settle between 25 and 35 percent, shortly after installation.
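The settling allowance above is easy to fold into an order quantity: divide the target volume by one minus the worst-case settling fraction. A minimal sketch, assuming the document's 15-20% (mulch) and 25-35% (soil) figures:

```python
# Worst-case settling fractions from the text above.
SETTLING_WORST = {"mulch": 0.20, "soil": 0.35}

def yards_to_order(yards_needed: float, material: str) -> float:
    """Cubic yards to order so the settled volume still meets the target."""
    return yards_needed / (1.0 - SETTLING_WORST[material])

print(round(yards_to_order(10, "mulch"), 2))  # 12.5 yd³ to end up with 10
```

For soil, the same 10 yd³ target calls for a little over 15 yd³, which is why suppliers tell you to round up.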
Our decorative shell products include crushed and washed coquina shell, coastal shell, inland shell, and larger reef shell. Type in the inches and feet of your project to calculate the estimated amount of limestone in cubic yards, cubic feet, and tons. You can soak crushed oyster shells in apple cider vinegar to extract the calcium carbonate, but don't confuse oyster shell with grit. For reference, packed earth weighs about 2,565 lb per cubic yard. You can calculate the required volume of gravel by multiplying the gravel path area by the desired thickness.
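The path calculation above (area x thickness) plus the ~1.21 t/yd³ density figure that recurs in this piece gives both volume and weight. A sketch under those assumptions; the 3-inch thickness in the example is hypothetical:

```python
TONS_PER_YD3 = 1.21  # the document's rough density figure for crushed material

def path_gravel(area_yd2: float, thickness_in: float):
    """Return (volume in yd³, approximate weight in tons) for a gravel path."""
    volume_yd3 = area_yd2 * (thickness_in / 36.0)  # 36 inches per yard
    return volume_yd3, volume_yd3 * TONS_PER_YD3

volume, tons = path_gravel(23, 3)  # a 23 yd² path, assuming 3 in thick
print(round(volume, 2), round(tons, 2))
```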
Crushed shell can be installed and used in many applications, including landscape beds, patio areas, around pools, and on walkways and paths. Materials for lawns, gardens, and driveways, such as lawn dressing, soil, mulch, and limestone, are also available. Crushed and washed shell will even help the soil retain moisture, helping to keep your garden green and healthy. Paver base sand is used as the bedding base when setting pavers or building retaining walls. Fill dirt in Florida is generally a sandy material. In concrete-block terminology, the shell is the sides and recessed ends of the block.
A great alternative to gravel, crushed shells can be used on paths, patios, courtyards, driveways, and even bocce ball courts (the shells don't hold water or imprints from shoes and balls). Our calculator is designed for user-friendliness, efficiency, and accuracy; remember to figure the quantities needed for each bed in your landscape. Our crushed shell is sourced from the Firth of Thames, and roughly 1.32 tons of crushed oyster shell equals one cubic yard. For all landscaping projects we recommend a minimum depth of 2-3 inches of material. One cubic yard is 3 feet by 3 feet by 3 feet, or 27 cubic feet.
Dredged shell consists of shell materials dredged from ocean, bay, or lake deposits. Crushed limestone in 1 1/2-inch and smaller pieces is used in road building throughout the world. For general landscaping you can use small-sized crushed shells, medium-sized crushed shells, or a finely crushed 50-50 blend that combines the shells with silt; a finer grade is better if you want to create a pathway. Note that eggshells don't contain significant amounts of any essential garden macronutrients. Pecan shell mulch is a Dallas product available in bulk. Once sand shell is installed, it takes a few weeks of settling before the shells appear at the surface (watering in is recommended). Natural taupe and tan colored crushed stone is available in 3/8″, 3/4″, and 1-1/2″ sizes for driveways, walkways, and other applications.
Our aggregate products include FDOT limerock, #4, #57, and #89 stone, crushed concrete base, custom screened shell, commercial limerock, crushed concrete fill, double washed shell, FDOT shell base, granite in all sizes, 131 screenings, bank run shell, washed sand, washed shell, rip rap (4″-6″ and 6″-12″), beach sand, septic sand, perc sand, and fill material. For all landscaping projects we recommend a minimum depth of 2-3 inches of material. Crushed rock is a medium size of gravel, usually measuring around 1.5 inches. Each natural shell is unique and chosen for its 1-3 inch size; even if you don't live by the beach, crushed shells are another great option for driveway coverage. This gravel does have some smaller crushed stone and sand mixed in.
Free delivery is available for bulk orders of mulch. Crushed walnut shell (INCI: Juglans) is sold as an abrasive. Infiltrator Quick4 chambers are high-density polyethylene arches that interlock to form a continuous drainage area with a much greater storage 'surge' volume. For the spherical-shell volume calculator, choose your units and enter the outer radius (r) of the sphere. Prices vary depending on the size and type of gravel; whether it is bought by the bag, in 5-gallon amounts, by the ton, or by the cubic yard (equivalent to 3'x3'x3'); whether it is delivered to the project site or picked up by the customer; and location. Crushed shell can be used as a driveway topping, but it has a tendency to track.
Knowledge of the weight of the cargo comes in handy when it comes to transportation. Coral and cuttlebone have a much higher calcium content than crushed shell grit; they are much denser and larger. For reference, 3/8″ pea pebbles weigh about 2,700 lb per cubic yard. Some names for this type of compactable stone are "dense grade aggregate (DGA)" or "crusher run." Our landscape materials calculator computes the amount of material, in yards, that you need for a job: calculate the square feet of an area using the length and width, then enter it into the yardage calculator along with the depth. For 1-man rock, you must instead determine how many cubic feet of space you will need to fill with rock.
The shells break down after a while and turn into dust as they get driven on. 00 ½ YD$12. It includes gravel, crushed rock, sand, recycled concrete, slag, and synthetic aggregate. 5~\mathrm{mol~dm^{-3}}$to$2~\mathrm{mol~dm^{-3}}$. Decomposed Granite and D. 60 per square foot. Construction Aggregate Calculator. Crushed clamshells are the cheapest in Florida, Georgia, and Alabama, where it's$10 per ton, $14 per yard, or$0. Calculate Volume of Square Slab Calculator Use. Crushed Shell Rock. However, for a more decorative look, crushed shell and stone can also support the needs of a driveway, as long as the material is properly supported and installed. 41 per square foot*. More than just nuts. Products include near-surface fill material, coarse and fine aggregates, base material and specialty aggregates. Sure, you probably don't like finding bits of eggshell in your omelet, but your pooch doesn't mind them crushed up in her food. A crushed stone calculator is the best way to get accurate results. This is an overview of the process used to repair chelonian shells that have been fractured or damaged by infection; there is another document that covers the treatment of shell rot. Pros Cheap (hopefully it's recycled) Permeable (better drainage) Cons Tricky to plow snow off of Depending upon its coarseness, can be difficult to locate yard furniture on. Add the price per cubic yard to estimate the cost of the stone. We offer a wide range of exclusive products and solutions at affordable prices. From the iconic PUMA Suede Classic to more contemporary Basket Heart sneaker styles, PUMA trainers always deserve pride of place in your shoe closet. I dropped by the local Feed and Grain store and picked up their only two bags of crushed oyster shell (50lb bags). Stone Calculator Crushed Stone Cubic Yard Calculat. Door Shells by Replace®. As premium crushed and decorative stone suppliers, quality comes first. Crushed Shell. 5 inches in diameter. 
No material is kept in inventory, Delivery from commercial mining and stockpile yards at discount. Stage one has Online assessment which was divided into 2parts, Part A had Verbal Reasoning: 24 questions, Numerical Reasoning: 16 questions Abstract Reasoning: 10 questions (12mins for all), and B had the online ‘working styles’ assessment. 50 to $4 per square foot,$14 to $120 per cubic yard, or$10 to \$86 per ton. What is the cost comparison for crushed concrete vs lime rock. VAKKI 8mm Hawaiian Koa Wood and Abalone Shell/Imitated Opal Inlay Tungsten Carbide Rings Wedding Bands for Men Comfort Fit Size 4 to 17 4. 911-2 Materials. Bulk Material Calculator (Sand, Gravel, Soil and Mulch) Please enter the measurements below and press "Calculate" to receive the approximate number of cubic yards needed for the specified area. This will tell you how many cubic yards of crushed stone you need. I applied online. EP Henry - Quality for Life. It stays in the gizzard and it's used to grind food. POKEMON Pokeball Coin Purse. Nature has given the egg a natural package - the shell. Using shell as a landscape accent is a popular choice for Florida homes. Length in feet x Width in feet x Depth in feet (inches divided by 12). Stone Calculator - Crushed Stone Cubic Yard Calculator Our Stone calculator will help you estimate how many Cubic Yards of Stone you need for your desired coverage area. Sauza - Blanco Tequila - 750ml. is a family owned and operated by Steve Larson. The shell of a clam that has lived for an eternity. Been a while so not sure what it is right now. The Shell Method. It’s rich red color gives any field a more professional finished look. If the patio is small--8 or 10 feet--you can simply tape a 4-foot level to the top edge of a straight 8- or 10-foot 2 by 4 and use it to check the surface. Coconut flakes: Sprinkle some on top or incorporate in your soap—a great combination with coconut milk soap. 
Especially for energy-dense foods… those containing more than 4000 calories per kilogram. CaribSea Florida Crushed Coral, Geo-Marine Formula Sand Geo-Marine: Florida Crushed Coral It's the only crushed coral with aragonite, which provides up to 25 times the buffering power of other crushed corals, dolomite or oyster shell. 46 per square foot*. , the Preferred Local Provider for Landscape Supplies for the Lower and Outer Cape is now located by the cell tower at 2780 Nauset Road, N. Stone Calculator: To determine how much crushed stone you need, use our Stone Calculator. Please enter your ZIP code. The final figure will be the estimated amount of tons required. The large backyard has mature landscaping with plenty of open. Crushed Oyster Shell is a superior alternative to grit for your feathered friend. Crushed stone or angular rock is a form of construction aggregate, typically produced by mining a suitable rock deposit and breaking the removed rock down to the desired size using crushers. 21 t/yd³ or 0. is rated 4. Enjoy great deals on furniture, bedding, window home decor. 60 per square foot. Crushed Shell Rock. Online tarmac calculator. I have a 20 gallon tank. RE: Crushed stone CN psmart (Civil). Crushed Shells. To see images of the different types of natural and crushed gravels we can supply, click here. As specific gravity is just a comparison, it can be applied across any units. 3/4 Inch Grey Tan Crushed Stone. Coconut flakes: Sprinkle some on top or incorporate in your soap—a great combination with coconut milk soap. 8" - 1' rock would yeild 80 - 100 stone In a wall you would get 12 to 15 facial foot. is a supplier of crushed concrete road base and other aggregate materials at their Bradenton, FL, Tampa, FL and Michigan City, IN locations. It includes gravel, crushed rock, sand, recycled concrete, slag, and synthetic aggregate. Descending list. Calculate volumes for concrete slabs, walls, footers, columns, steps, curbs and gutters. 
With over 30 years of experience and service in landscaping supply business, since he was a little boy. In this example the possessive noun "platypus's" modifies the noun "eggs" and the noun phrase "the platypus's eggs" is the direct object of the verb "crushed. #1 Crushed limestone. Calculator Select the material, desired size, and the area you need to cover to find out how much of that material you’ll need. Find the right amount of gravel for your next landscaping project with our gravel calculator. This beautiful crushed sea shell mix will inspire you to add a sea-side accent to many craft projects. Construction Aggregate Calculator. Crushed Stone Prices / Crushed Rock Prices: For more information about our crushed stone wholesale pricing, call ATAK Trucking at 917-912-2900 and ask to speak to Tom. 00134 pounds [lbs] Oyster shells, ground weighs 0. Note, we sell material by the ton and one (1) cubic yard is approximately 1. Crushed shell driveway path kitchen colonial crushed shell driveway pros and cons seashell paths crushed shell driveway florida photoblog co crushed shell driveway pros and cons seashell paths crushed shell driveway calculator how to landscape with 52 landscape supply gravel yard oyster shell. Calculate the square footage. Consider using a stabilising mat on uneven surfaces in your garden to help ensure you get an even spread, so you don’t need to worry about gaps or. Crushed walnuts dietary and nutritional information facts contents table. gfpr0d76u5nso61 id8giwa1c5p5 zsrih5uurvd iu0clx9zfrc bv3kjoim2gxo seg7dj76hqec8v pzgn1i6qxvl1fsd 1vfv63id0xnx mxe139wsy95 pbyw0q6ave8yj btl4qnkrlztu5n 75bwkbygmug1 9ka30lqvyub 5pq08bgw7cqg ed41qexxhs1u hjnxpnliucucs 9ucej8k9azeytr gjudy81m45rg wjneuexowpsdq wwpafvjex29puaw zt23pn33c0uzcd jgokobv8jksxz 97z8mxij4rhr0da 1kbgrnewqt v3svsao90c8d6 di3ljsazpr0x 6ch69uxuqbw ypn4g0yh2bpci89 6s7l09hrilir0
|
|
# Definition:Reflexive Transitive Closure/Reflexive Closure of Transitive Closure
Let $\mathcal R$ be a relation on a set $S$.
The reflexive transitive closure of $\mathcal R$ is denoted $\mathcal R^*$, and is defined as the reflexive closure of the transitive closure of $\mathcal R$:
$\mathcal R^* = \left({\mathcal R^+}\right)^=$
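For a finite relation on a finite set, this definition can be computed directly: build the transitive closure first, then add the diagonal. A minimal Python sketch (the function names are illustrative, not part of the definition):

```python
def transitive_closure(pairs):
    """R+: repeatedly add (a, d) whenever (a, b) and (b, d) are both present."""
    closure = set(pairs)
    while True:
        new_pairs = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if new_pairs <= closure:
            return closure
        closure |= new_pairs

def reflexive_closure(pairs, universe):
    """R=: add (x, x) for every x in the underlying set S."""
    return set(pairs) | {(x, x) for x in universe}

def reflexive_transitive_closure(pairs, universe):
    # R* is the reflexive closure of the transitive closure of R
    return reflexive_closure(transitive_closure(pairs), universe)
```

For example, with $S = \{1, 2, 3\}$ and $\mathcal R = \{(1, 2), (2, 3)\}$, the transitive step adds $(1, 3)$ and the reflexive step adds the three diagonal pairs.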
Improved Neighborhood Search for Collaborative Filtering
Yeounoh Chung1, Noo-ri Kim2, Chang-yong Park3, and Jee-Hyong Lee2
1Department of Computer Science, Brown University, Rhode Island, USA, 2Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Korea, 3LG Electronics, Seoul, Korea
Correspondence to: Jee-Hyong Lee (john@skku.edu)
Received February 1, 2018; Revised March 10, 2018; Accepted March 21, 2018.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
k-Nearest Neighbor (k-NN) and other user-based collaborative filtering (CF) algorithms have gained popularity because of their simplicity and performance. As the performance of such algorithms largely depends on neighborhood selection, it is important to select the most suitable neighborhood for each active user. Previous user-based CF simply relies on similar users or common experts in this regard; however, because users have different tastes as well as different expectations for expert advice, similar users or common experts may not always be the best neighborhood for CF. In search of a more suitable neighborhood, we propose so-called personalized experts and develop personalized expertise features to identify them. Through experimentation, we show that personalized experts are different from similar users, common experts, or similar common experts. The personalized expert-based CF algorithm outperforms k-NN and other user-based CF algorithms.
Keywords : Personalized experts, Recommender system, Collaborative filtering, Support vector machine
1. Introduction
With the success of many e-commerce services (e.g., Amazon, Netflix, Last.fm), recommender systems have gained significant interest and popularity in recent years, and considerable effort has been dedicated to researching and building better recommender systems and algorithms [1]. One of the most popular algorithms for recommender systems is collaborative filtering (CF), which simply finds patterns among similar users or items [2]. CF achieved widespread success because of its simplicity and efficiency, despite several drawbacks (e.g., the sparsity problem) [3–7].
Typical CF (neighborhood or user-based CF) develops user profiles based on the item-consumption profiles of those users and provides personalized recommendations to active (or target) users based on a combination of similar user profiles. CF is based on the simple assumption that users with tastes that are similar to the active user may give more useful information, which may lead to a better recommendation. However, in some cases, users with similar tastes may not give useful information because the active user may have already consumed what the similar users have consumed. Ideally, to obtain the best performance, CF algorithms require users who can provide useful information for recommendations, and they may not necessarily be similar users.
In parallel to similar user-based CF, expert-based CF and recommender systems have been proposed. General users who lack domain knowledge often trust more reliable and knowledgeable experts when making decisions to purchase items. A study conducted in the field of retail and marketing shows that consumers regard expert opinions as more reliable [8]. In agreement with this observation, several recent studies have exploited the knowledge of experts [9–18]. Those approaches are based on the assumption that users with more expertise may give more useful information, which will lead to more accurate recommendations. Expert-based CF can be more robust than similar user-based CF in situations where there are not enough item-consumption histories available from which to draw similarities between users (i.e., the sparsity problem) [9, 12]. However, expert-based CF is limited in that the experts can only recommend items that are generally popular. In other words, the recommendations are less customized.
In this work, we seek to find a better neighborhood for user-based CF and to combine the merits of both user-based and expert-based approaches. The notion of personalized experts as the better neighborhood from which to provide useful information was first proposed in our previous works [19, 20]. However, personalized expertise was expressed in crudely developed features for support vector machine (SVM) model training and yielded less accurate recommendations than k-Nearest Neighbor (k-NN). Here, we examine the notion of personalized expertise in various aspects and carefully design new expertise features to identify personalized experts for users with various profiles and preferences. Notably, our new personalized expert-based recommender system outperforms k-NN in terms of prediction accuracy. Furthermore, we present a better learning process for a single global SVM model to find customized expert groups for each user, without any given expert labels or explicit user feedback. The key idea is to train an SVM model to learn the mapping between different user profiles and the most beneficial groups of neighbors. In [19], we proposed to search for personalized experts among similar users. This reduced the cost of training, but also bounded the personalized experts to similar users (what if a user does not want any suggestions from similar neighbors?). Instead, we refine the expert pool to be users with any expertise characteristics (e.g., early adopter, heavy access, niche-item access) and select more diversified personalized experts from it.
Our approach is expert-based, but unlike previous expert-based approaches, different experts are chosen for each active user to accommodate different needs. Some users prefer similar users; others prefer early adopters or even users with very eccentric tastes. Furthermore, the personalized expert identification problem is studied more thoroughly to yield a machine learning solution using an SVM. The resulting recommendations from personalized expert-based CF prove to be more accurate than those of k-NN and expert-based CF systems and more customized than those of expert-based CF.
The rest of this study is organized as follows. In Section 2, we briefly discuss previous user-based CF algorithms. In Section 3, we describe the personalized expert identification problem along with the personalized expertise measures in detail. In Section 4, we present the experimental results and analysis. In Section 5, we further discuss the robustness of the proposed recommender system considering the sparsity problem. Finally, we conclude in Section 6.
2. Related Work
A recommender system based on a k-NN CF algorithm relies on the collaborative opinions of a neighborhood with similar user profiles computed from item-consumption histories. Because recommendations are generated based on user profiles alone, similar user-based recommender systems result in accurate recommendations for various users. However, the recommendations may be inaccurate if the item-consumption histories are not sufficient to build rich user profiles [4–6]. This lack of information in item-consumption histories is referred to as the sparsity problem, and it is one of the most limiting factors for performance in practice. Many techniques, ranging from dimension reduction to sparse data smoothing, have been proposed to address this issue [4, 21–24].
To alleviate the sparsity problem and build a better recommender system, several researchers have suggested expert-based CF. Papagelis et al. [6] show that expert profiles from a movie review website can be used to model user profiles of a much larger user group. By collaboratively filtering the opinions of similar external experts, the authors were able to produce recommendations comparable to k-NN. Similarly, other external expert-based CF algorithms used external expert knowledge identified from web blogs or from real human participants who can provide dynamic feedback for recommendations [11, 16]. This type of external expert-based CF is robust to the sparsity problem; however, it is very expensive to source expert knowledge in most cases, which may limit the scalability of the applications.
Instead of using external expert knowledge, other researchers focused on identifying experts among active users. As the performance of CF algorithms largely depends on neighbor selection (i.e., the source of collective opinions in CF), defining and identifying appropriate experts is important for successful expert-based recommender systems [10–15, 17, 18]. The expert groups used in those works are early adopters, personal innovators, and users with highly common expert measures.
Song et al. [14] proposes three common expert measures and identifies a set of common experts from an active user group. Because the same common experts are used in CF for all active users, the resulting recommendations are less personalized. Similarly, Lee and Lee [12] identified common experts per similar item group in their recent work. Their approach suggests different expert groups for different item groups, but recommendations are still not personalized with respect to the active users.
3. Personalized Expert Search
Instead of simply choosing similar users, our approach chooses different experts for each active user who can better accommodate various needs and expectations. We define personalized experts per each active user as neighbors who are the most resourceful for CF-based recommendations. To efficiently determine whether a neighboring user is a personalized expert or not, we train a single global SVM model that learns the matching pattern between personalized experts and active users. Because the task is not just finding similar user profiles, the matching pattern can be complicated, and generating an accurate SVM learner to solve this personalized expert identification problem is challenging. In the following subsections, we discuss three challenges and the solutions for them.
### 3.1 How to Label Training Data?
Training an accurate SVM learner to find personalized experts for active users requires training data with labels that identify which experts belong to which users. Because such labels are not available (i.e., it is very expensive to obtain explicit feedback from users), we approximate the labels with a random search.
First, we define a personalized expert group for an active user as a set of users who give the most accurate recommendations. With this definition, and by only using the training data, we select a group of users of a fixed size, called Vui, at random for each active user, ui, to carry out CF with the group and evaluate performance increases. For each iteration, we randomly switch one user in Vui with one user not in Vui. If the new Vui yields better recommendation accuracy, the new Vui is accepted.
This random search procedure repeats for a fixed number of iterations, and the final Vui is used as an approximated personalized expert group for ui. However, this technique is too costly from a computational perspective. To reduce the computational complexity, we assume that the personalized experts exhibit several degrees of common expertise that are accepted by the general population; in other words, we reduce the search space to a handful of users with a higher expertise. The expertise measures are defined in the next subsection. This generic random search algorithm is simple and yet very useful for obtaining a near-optimal solution. In solving ill-structured global optimization problems with many potential stationary points, a random search ensures convergence to a global optimum in terms of probability. Essentially, if the random selection does not ignore any part of the search space, then the algorithm is guaranteed to converge with probability one [25]. As it follows a geometric distribution, the expected number of iterations until near-optimal convergence (within distance ɛ of the optimum) is as follows:
$E\left[N(V_{u_i}^*+\epsilon)\right]=\frac{1}{p(V_{u_i}^*+\epsilon)}.$
Finding the optimum is still very expensive for a practical recommender system, even with the search space reduction. In this work, we limit the number of iterations for finding personalized experts to 1,000, which is empirically shown to be sufficient.
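The search procedure described above can be sketched as a simple accept-if-better loop. Here `cf_error` is a hypothetical callback (not part of the paper's notation) that runs CF with a candidate neighborhood on held-out training ratings and returns its error, lower being better:

```python
import random

def random_search_neighborhood(candidates, size, cf_error, iterations=1000, seed=0):
    """Approximate a personalized expert group V_ui by accept-if-better random swaps.

    candidates : pool of user ids (already reduced to users with high common expertise)
    cf_error   : callable mapping a neighborhood (frozenset) to a CF error (lower is better)
    """
    rng = random.Random(seed)
    current = set(rng.sample(sorted(candidates), size))
    best_err = cf_error(frozenset(current))
    for _ in range(iterations):
        outside = set(candidates) - current
        if not outside:
            break
        # randomly switch one member of V_ui with one user outside it
        swap_out = rng.choice(sorted(current))
        swap_in = rng.choice(sorted(outside))
        candidate = (current - {swap_out}) | {swap_in}
        err = cf_error(frozenset(candidate))
        if err < best_err:  # accept the new V_ui only if accuracy improves
            current, best_err = candidate, err
    return current
```

The paper caps this loop at 1,000 iterations, which was found empirically sufficient once the candidate pool is restricted to high-expertise users.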
### 3.2 How to Describe Personalized Expertise?
To extract a meaningful matching pattern, we carefully develop features to represent the relationship between any pair of users. The personalized expertise feature vector, Xij, indicates how an active user, ui, views a neighbor, uj . We measure such a pair-wise view with absolute and relative measures. The absolute expertise measures describe how much a neighbor uj is generally accepted as an expert, and the relative measures are used to represent the information of uj with respect to an active user, ui.
We express the absolute expertise measures with four features: Early Adopter, Heavy Access, Niche-Item Access, and Eccentricity. Early Adopter, Heavy Access, and Niche-Item Access are common expertise measures [14], and Eccentricity indicates how eccentric and unique a user is. The relative expertise measures, in contrast, are defined between a pair of users, namely an active user and a neighbor. They are expressed in three features: Similarity, Common-Item Access, and Unknown-Item Access.
In the following expression, we define the expertise measures used to express different notions of neighborhood expertise:
$\vec{X}_{ij}=\left\langle EA(u_j),\,HA(u_j),\,NA(u_j),\,EC(u_j),\,Sim(u_i,u_j),\,CA(u_i,u_j),\,UA(u_i,u_j)\right\rangle.$
Early Adopter (EA(ui)): an early adopter uses new items before others, and their opinions can be influential. The measure captures how long it takes, on average, for ui to access newly released items. Given the reference time (TR), the release time of item m (Tm), the time at which ui rated m (Tui,m), and the list of items that ui accessed (I(ui)), we compute EA(ui) as follows:
$EA(u_i)=\frac{\sum_{m\in I(u_i)}(T_R-T_{u_i,m})}{|I(u_i)|}.$
Heavy Access (HA(ui)) measures how many items a user accessed. In general, more experience means more expertise:
$HA(u_i)=\log(|I(u_i)|+1).$
Niche-Item Access (NA(ui)) measures the average unpopularity of accessed items. In a sense, users who find hidden items that are not popular are ad hoc experts. Given the list of users who accessed item m, U(m), we compute NA(ui) as follows:
$NA(u_i)=\frac{1}{|I(u_i)|}\sum_{m\in I(u_i)}\frac{\log 2}{\log(|U(m)|+1)}.$
Eccentricity (EC(ui)) measures the average deviation of a user's item preferences from popular beliefs, i.e., from the population mean. Some believe that experts must have different and more eccentric views on matters than the rest of the world. Given the average rating on m ($\bar{R}_m$), the actual rating of ui on m (Rui,m), the upper bound of rating values (Rmax), and the lower bound of rating values (Rmin), we compute EC(ui) as follows:
$EC(u_i)=\frac{1}{|I(u_i)|}\sum_{m\in I(u_i)}\frac{\log(|\bar{R}_m-R_{u_i,m}|+1)}{\log(R_{max}-R_{min})}.$
Similarity (Sim(ui, uj)) measures the similarity between two user profiles. It is measured with the Pearson correlation coefficient of the ratings of the two users. Users with similar item preferences may be more helpful, but some users may prefer users with very different item preferences, so we consider Sim(ui, uj) in our neighbor search:
$Sim(u_i,u_j)=\frac{\sum_{m\in I(u_i)\cap I(u_j)}(R_{u_i,m}-\bar{R}_{u_i})(R_{u_j,m}-\bar{R}_{u_j})}{\sqrt{\sum_{m\in I(u_i)\cap I(u_j)}(R_{u_i,m}-\bar{R}_{u_i})^2}\,\sqrt{\sum_{m\in I(u_i)\cap I(u_j)}(R_{u_j,m}-\bar{R}_{u_j})^2}}.$
Common-Item Access (CA(ui, uj)) is different from similarity. A user may trust other users with the same item experiences. If two users consume exactly the same set of items and both users like or dislike the same item, the similarity will be high. However, CA(ui, uj) will be high if the number of commonly accessed items is large:
$CA(u_i,u_j)=\log(|I(u_i)\cap I(u_j)|+1).$
Unknown-Item Access (UA(ui, uj)) measures how many new items uj has accessed, of which ui has no prior knowledge. ui may prefer neighbors with more experience with new items:
$UA(u_i,u_j)=\log(|I(u_j)\setminus I(u_i)|+1).$
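As an illustration, the measures above can be computed from simple data structures. The sketch below assumes each user's ratings are stored as a dict mapping items to ratings, and `item_users` maps each item to the users who accessed it; this layout is an assumption for illustration, not from the paper:

```python
import math

def heavy_access(items_i):
    # HA: log of the number of accessed items
    return math.log(len(items_i) + 1)

def niche_item_access(items_i, item_users):
    # NA: average unpopularity log(2) / log(|U(m)| + 1) over accessed items
    return sum(math.log(2) / math.log(len(item_users[m]) + 1) for m in items_i) / len(items_i)

def similarity(ratings_i, ratings_j):
    # Sim: Pearson correlation over commonly rated items
    common = set(ratings_i) & set(ratings_j)
    if not common:
        return 0.0
    mean_i = sum(ratings_i.values()) / len(ratings_i)
    mean_j = sum(ratings_j.values()) / len(ratings_j)
    num = sum((ratings_i[m] - mean_i) * (ratings_j[m] - mean_j) for m in common)
    den_i = math.sqrt(sum((ratings_i[m] - mean_i) ** 2 for m in common))
    den_j = math.sqrt(sum((ratings_j[m] - mean_j) ** 2 for m in common))
    return num / (den_i * den_j) if den_i and den_j else 0.0

def common_item_access(items_i, items_j):
    # CA: log of the number of commonly accessed items
    return math.log(len(items_i & items_j) + 1)

def unknown_item_access(items_i, items_j):
    # UA: items u_j has accessed of which u_i has no prior knowledge
    return math.log(len(items_j - items_i) + 1)
```

Early Adopter follows the same pattern once access timestamps are available alongside the ratings.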
### 3.3 How to Train SVM on Class-Imbalanced Data?
The performance of personalized expert-based CF largely depends on the qualifications of the personalized experts; thus, the classification accuracy of the SVM learner is very important. One of the biggest concerns in approximating expert labels is that the number of personalized experts for ui is very small compared to the size of the entire user group. As a result, the accuracy of an SVM learner trained on such imbalanced training data is degraded [26]. To cope with this, we use the cost-sensitive support vector machine (C-SVM) learner [20], which assigns different training-error penalties to different classes to learn effectively from imbalanced data [27]. The personalized expert identification problem transformed into an SVM optimization problem is as follows:
$\underset{\vec{W}}{\text{minimize}}\ \frac{1}{2}\vec{W}\cdot\vec{W}+C^{+}\sum_{ij:\,y_{ij}=+1}\varepsilon_{ij}+C^{-}\sum_{ij:\,y_{ij}=-1}\varepsilon_{ij}\quad\text{subject to}\ y_{ij}(\vec{W}\cdot\vec{X}_{ij}+b)\ge 1-\varepsilon_{ij},\ \varepsilon_{ij}\ge 0,$
where C+ and C− control the trade-off between training errors and margin maximization for positive and negative examples, respectively. By tuning the cost factor, C+/C−, one can learn more effectively from class-imbalanced data.
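To make the role of the two cost factors concrete, here is a toy subgradient-descent solver for a linear cost-sensitive SVM. It is an illustrative sketch, not the solver used in the paper:

```python
def train_csvm(X, y, c_pos, c_neg, lr=0.01, epochs=300):
    """Toy linear C-SVM: minimize 1/2 ||w||^2 plus class-weighted hinge losses."""
    dim = len(X[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, label in zip(X, y):
            margin = label * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            cost = c_pos if label > 0 else c_neg   # class-dependent error penalty
            if margin < 1:                          # hinge loss is active
                w = [wi - lr * (wi - cost * label * xi) for wi, xi in zip(w, x)]
                b += lr * cost * label
            else:                                   # only the regularizer contributes
                w = [wi - lr * wi for wi in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
```

A larger C+/C− ratio penalizes errors on the rare positive (personalized expert) class more heavily, which is the point of the cost-sensitive formulation.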
4. Experiment
In this section, we present experimental results to show that personalized expert-based CF can produce better recommendations than similar user- or common expert-based CF recommender systems. We use MovieLens data sets to accomplish this. The data sets are widely used in recommender systems and CF studies, and they are compiled and collected over various periods of time [4]. Specifically, we use MovieLens 100k data set (ML-100k), which contains 100,000 ratings from 943 users and 1,682 items. We divide the data set into five folds for cross-validation.
### 4.1 Evaluation Metrics
Different CF algorithms and recommender systems exhibit different performance characteristics, and several properties of recommender systems trade off against one another. Therefore, various performance metrics must be used to evaluate CF algorithms [6]. In this work, we consider both prediction accuracy evaluation and recommendation list evaluation.
Prediction accuracy is by far the most common and important metric in recommender system evaluation. To evaluate the prediction accuracy of any CF-based recommender system, we use the Mean Absolute Error.
Mean Absolute Error (MAE) measures the average difference between the predicted ratings and the actual ratings. MAE for ui is calculated as follows:
$MAE(u_i)=\frac{\sum_{m\in I(u_i)_{test}}|\hat{R}_{u_i,m}-R_{u_i,m}|}{|I(u_i)_{test}|}.$
Here, $\hat{R}_{u_i,m}$ is the predicted rating of ui on m, and I(ui)test is the list of items ui accessed among the items in the test data. The MAE(ui) values of all users are then averaged to evaluate the MAE of a recommender system.
Recommendation list evaluation is important for studying various properties of recommender systems. In this domain, we consider Item Coverage, User Coverage, Diversity, Precision and Recall of returned recommendations.
Item Coverage (Covitem) measures the proportion of items that a recommender system can recommend from the entire item space:
$Cov_{item}=\frac{\sum_{m\in Item_{test}}|U(m)|\cdot\delta(m,Rec(User_{test}))}{\sum_{m\in Item_{test}}|U(m)|}.$
Here, δ(m,Rec(Usertest)) = 1 only if item m appears in any recommendation list for the given test data, and U(m) is the list of users who accessed item m. A list of a fixed number of recommendations, Rec, is produced for each active user ui, and we define recommendable items as items with predicted ratings greater than the average rating of ui. Rec contains the items with the highest predicted ratings.
Diversity (Div) measures how diverse recommendation lists are. The pairwise diversity for two users is computed by the following formula:
$Div(u_i,u_j)=1-\frac{|Rec(u_i)\cap Rec(u_j)|}{|Rec|}.$
The Div(ui, uj) values for all pairs of users are then averaged to evaluate the Diversity of the recommender systems. This measure is of particular interest if one is interested in the customization of the recommendations given to each individual.
Precision (Prec) measures the proportion of the successful recommendations among all recommendations. Precision indicates the quality of the produced recommendations with an emphasis on recommendation successes, rather than recommendation failures. Precision is calculated by the following formula:
$Prec=\frac{|tp|}{|tp|+|fp|}.$
tp and fp are the numbers of true-positive and false-positive recommendation results, respectively. All possible recommendation results are shown in
Recall (Rec) measures the proportion of successful recommendations with respect to the items that users actually liked. Recall indicates the quality of the produced recommendations with an emphasis on recommendation failures, rather than recommendation successes. Recall is calculated by the following formula:
$Rec=\frac{|tp|}{|tp|+|fn|}.$
Because each active user has a different watch history and different access counts for the items in the test data, it is impossible to generate recommendation lists of the same fixed size for all users. Therefore, Precision and Recall are measured on all recommendations that can be validated with true ratings in the test data; the two metrics measure the quality of recommendations in different respects. Precision increases with more recommendation successes, while Recall increases with fewer missed recommendation opportunities.
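The accuracy and list metrics above reduce to a few lines each. This sketch assumes ratings are stored in plain dicts and item ids in lists, an illustrative layout not taken from the paper:

```python
def mae(predicted, actual):
    """Mean Absolute Error over one user's test items."""
    return sum(abs(predicted[m] - actual[m]) for m in actual) / len(actual)

def precision_recall(recommended, liked, validated):
    """Precision and Recall over recommendations that can be validated with true ratings."""
    rec = set(recommended) & set(validated)   # keep only validated recommendations
    hits = rec & set(liked)
    tp = len(hits)                            # recommended and actually liked
    fp = len(rec - set(liked))                # recommended but not liked
    fn = len((set(liked) & set(validated)) - rec)  # liked but not recommended
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Per-user MAE values would then be averaged over all users, as described above.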
### 4.2 Baseline
We compare the proposed recommender system with three different types of CF recommender systems: a similar user-based recommender system (SU), a common expert-based recommender system (CE), and a similar common expert-based recommender system (SCE). SU computes pairwise similarities for every pair of users based on their previous rating histories; then, a number of similar neighboring users are selected. Finally, CF is used to predict the ratings or produce recommendations for each user. CE chooses a fixed number of experts considering three absolute expertise measures (Early Adopter, Heavy Access, Niche-Item Access) and then uses the chosen experts as the neighbors for all users. The last baseline is SCE. It first creates a pool of common experts by considering the three absolute expertise measures and then chooses neighbors for each active user by similarity. It is thus expected to strike a good balance between recommendation accuracy and customization.
In tuning the recommender systems, the neighborhood size, k, can be chosen using a validation data set; however, previous works using the MovieLens data set [28, 29] reported consistent results with a fixed k. In this work, we set k to 50 to compare the performance of the different neighborhoods.
To predict user preference (i.e., ratings), we use the following CF algorithm:
1. Select k users as a neighborhood for the given active user.
2. Assign a user weight to the selected users.
3. Compute a rating prediction of the active user ui on an item as weighted average rating of the neighborhood.
In SU, the Pearson correlation (i.e., Similarity) is used not only as the similarity measure between users but also as the weights of the selected users (w(ui, uj)). CE uses the expertise of users to choose a neighborhood in step 1 and the Pearson correlation to determine user weights in step 2. In step 3, the weighted average of the ratings of the selected neighborhood is computed using the following formula:
$\hat{R}_{u_i,m}=\bar{R}_{u_i}+\frac{\sum_{u_j\in N(u_i)}w(u_i,u_j)\cdot(R_{u_j,m}-\bar{R}_{u_j})}{\sum_{u_j\in N(u_i)}w(u_i,u_j)}.$
We strictly follow the traditional CF algorithm in order to compare and focus on the qualities of the different types of neighborhoods; if none of the selected neighbors has used the item, the system predicts the mean user rating ($\bar{R}_{u_i}$).
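Steps 1–3 amount to a mean-centered weighted average with a fallback to the user's mean rating. A minimal sketch, where the neighbor tuple layout is assumed for illustration:

```python
def predict_rating(active_mean, neighbors, item):
    """Predict a rating as a mean-centered weighted average over a neighborhood.

    neighbors: list of (weight, ratings_dict, neighbor_mean) tuples.
    Falls back to the active user's mean rating when no neighbor rated the item.
    """
    num, den = 0.0, 0.0
    for weight, ratings, mean in neighbors:
        if item in ratings:
            num += weight * (ratings[item] - mean)   # mean-centered deviation
            den += weight
    if den == 0:
        return active_mean   # no selected neighbor has used the item
    return active_mean + num / den
```

The same routine serves SU, CE, SCE, and PE; only the neighborhood selection and the weights differ.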
### 4.3 Results
We first compare the prediction accuracy, in MAE, of the different recommender systems. Table 2 shows the comparison results; the proposed approach (PE) yields more accurate results than the baselines, with an 11.9% improvement over SU, 18.4% over CE, and 4.8% over SCE.
CE yields the least accurate results. It is interesting that SCE is the second best. Both PE and SCE are basically personalized expert-based approaches. SCE first identifies common experts and simply chooses neighbors from the common experts based on similarity to the active user; however, PE first learns the patterns of neighbor selection of each user by SVM considering absolute and relative expertise. Thus, PE can identify more personalized neighbors who can better serve users with different needs and expectations.
To examine various properties of the proposed recommender system, we evaluate the recommendations produced by each system. Table 3 shows the Item Coverage of recommendation lists produced by the different recommender systems. Item Coverage measures the proportion of items that a recommender system can recommend, and the measure increases as the size of the recommendation list increases. In this respect, SU, whose neighbors share the active user's movie tastes, generates recommendation lists with higher Item Coverage, while both PE and CE give recommendations that are more widely acceptable, based on their expert knowledge. PE covers slightly more items than CE (a 2% increase from 0.3837 to 0.3917 at |Rec| = 20), and SCE sits in between SU and PE.
For some applications, it is more important to recommend a variety of items; a seller also needs to sell unknown and unpopular items in stock, in addition to the popular ones. Table 4 shows the Diversity of the recommendation lists produced by the different recommender systems. Diversity decreases as the recommendation list size increases, as more common items are included in the recommendation lists given to active users. Higher Diversity means that more diverse recommendation lists are given to different active users. Similar to Item Coverage, SU yields the most diverse recommendation lists, followed by SCE, PE, and CE. The results indicate that SU provides more diverse recommendations that may better serve the diverse preferences of users; however, recommendation lists with high Item Coverage and Diversity are not necessarily accurate, as shown in Table 2.
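For concreteness, Item Coverage and one plausible reading of the Diversity measure can be sketched as follows (illustrative code, not the authors' implementation; here Diversity is taken as the average pairwise dissimilarity between different users' recommendation lists):

```python
from itertools import combinations

def item_coverage(rec_lists, catalog_size):
    """Fraction of catalog items recommended to at least one user."""
    recommended = set().union(*rec_lists)
    return len(recommended) / catalog_size

def diversity(rec_lists):
    """Average pairwise dissimilarity (1 - overlap fraction) between
    users' recommendation lists; one plausible reading of the paper's
    Diversity measure, not its exact definition."""
    pairs = list(combinations(rec_lists, 2))
    if not pairs:
        return 0.0
    total = 0.0
    for a, b in pairs:
        overlap = len(set(a) & set(b)) / max(len(a), len(b))
        total += 1.0 - overlap
    return total / len(pairs)
```

With two disjoint lists, Diversity is 1.0 (every user gets different items); with identical lists, it is 0.0, matching the intuition above.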
Table 5 shows the Precision and Recall of the recommendations. The high Precision and low Recall of SU indicate that SU provides only a few recommendations, but with high confidence. CE, in contrast, recommends more items with fewer successes, resulting in low Precision and high Recall. PE and SCE achieve both high Precision and high Recall, which implies good recommendation quality. PE yields better recommendations than SCE: there is no significant difference in Precision, and the Recall of PE is higher at 0.7357 (a 2.6% improvement over SCE at 0.7171). Taking the opinions of experts with simply high similarity and high common expertise results in good-quality recommendations; however, because users need different levels of expert assistance (high or low measures), customizing the neighborhood in terms of various expertise measures, including similarity, further improves recommendation quality.
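The Precision and Recall of a single user's recommendation list can be computed as below (an illustrative sketch; `relevant` stands for the items the user actually liked in the test set):

```python
def precision_recall(recommended, relevant):
    """Precision and Recall of a recommendation list against the
    items the user actually liked in the test set."""
    hits = len(set(recommended) & set(relevant))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```

For instance, recommending `[1, 2, 3, 4]` when the user liked `[2, 4, 5]` gives 2 hits, hence Precision 0.5 and Recall 2/3; a system-level score averages these over all active users.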
The results indicate that PE generates recommendations that are more accurate in terms of lower MAE and higher Precision and Recall than other recommender systems. In this work, we define personalized experts as neighbors who can help generate more accurate recommendations for an active user; hence, PE places more importance on accuracy over recommendation list customization and selects neighbors who can give the most accurate recommendations to each active user. If we want PE to generate more customized and diverse recommendation lists, we can facilitate that by searching for personalized experts who can give diverse recommendations, as opposed to the accurate recommendations discussed in this work.
5. Discussion
### 5.1 The Sparsity Problem
CF performance suffers when there is insufficient information, which is also known as the sparsity problem. A typical SU can generate accurate recommendations, but it is not robust to the sparsity problem. In this section, we compare the performance of different recommender systems with varying sparsity levels.
Table 6 shows the different sparsity levels as we introduce more sparseness into the training data. The original data set is not very sparse (1 − 100000/(943·1682) = 0.9369) before splitting into training and testing data. To introduce more sparseness into the training data, we removed the rating information received during the last 1-month or 2-month period.
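The sparsity figure quoted above follows directly from the data set dimensions (a small illustrative computation):

```python
def sparsity(num_ratings, num_users, num_items):
    """Fraction of the user-item rating matrix that is empty."""
    return 1.0 - num_ratings / (num_users * num_items)

# ML-100K before the train/test split: 100,000 ratings by 943 users on 1,682 items
ml100k_sparsity = sparsity(100_000, 943, 1_682)
```

Removing the most recent 1- or 2-month rating slices lowers `num_ratings`, which raises the sparsity to the 95.8% and 96.9% levels in Table 6.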
Table 7 illustrates the sensitivity of the different recommender systems to varying sparsity levels. The prediction accuracy of the user-based CF algorithm decreases as the training data sparseness increases. Among the four neighborhoods in comparison, PE yields the best prediction accuracy, with the lowest MAE at all sparsity levels. As the sparsity level increases from 95.8% to 96.9%, the prediction accuracy of CE drops 25.9% (MAE rises from 1.3710 to 1.7260), the accuracy of SCE drops 25.0% (from 1.3383 to 1.6733), the accuracy of SU drops 13.0% (from 1.2829 to 1.4502), and the accuracy of PE drops 15.0% (from 1.2000 to 1.3803). At all sparsity levels, PE yields the most accurate prediction results.
The quality of the recommendation also degrades with increasing data sparseness. The Precision and Recall values from Tables 8 and 9 indicate that, with sparser data (95.8% and 96.9%), SU yields high Precision and low Recall recommendations, CE and SCE yield low Precision and low Recall recommendations, and PE yields high Precision and high Recall recommendations. At all sparsity levels, PE yields the best quality recommendations.
Neighborhoods are selected using the sparse training data; hence, the lack of accurate information results in inaccurate neighborhood selection. Furthermore, it is more likely that none of the selected neighbors has watched the item in question. In such a case, a recommendation opportunity is missed, as the CF algorithm predicts the active user's average rating and the item is not recommended. To provide more accurate recommendations, it may be beneficial to recommend only a few items with confidence; however, many opportunities are then missed, as evidenced by the low Recall of such a passive recommender system. Table 10 shows the recommendation miss rate of the different neighborhoods.
The recommendation miss rate increases with the training data sparsity level for all types of neighborhoods. An appropriate neighborhood should be able to provide answers to the requests of each active user. Although personalized experts are selected to maximize the prediction accuracy of the CF algorithm, PE can provide recommendations in most opportunities at a sparsity level of 94.9% (the original training data), and even at 96.9% with sparser data. As seen in Table 10, SU provides more accurate predictions and recommendations than CE, while the recommendation miss rates of SU are higher than those of CE in most cases (at the sparsity levels of 94.9% and 96.9%): CE recommends items more carelessly than SU, and it yields more recommendation failures than recommendation misses.
### 5.2 Neighborhood Study
In this subsection, we discuss how different neighborhood characteristics result in performance differences among the recommender systems. As seen in Section 4, different neighborhood-based CF algorithms exhibit different performance characteristics. For instance, SU results in recommendations with higher Diversity than the other recommender systems. This is because SU recommends items that each active user likes; consequently, the overall recommendations across all users are more diverse. However, CE recommends items that the common experts like, which results in overall recommendations with lower Diversity. We want to customize the neighborhood for each user to obtain the best recommendation result; we argue that neighborhoods for users should differ in terms of the degrees of the various expertise measures from Section 3.2.
Figure 1 shows different neighborhood characteristics for two different users (User ID: 123 and 456). The neighborhood size is 50, and the standardized expertise measures for all members within each neighborhood are averaged to define the characteristics of the neighborhood. SU, CE, SIMCE and PE show very different characteristics, but the patterns are similar among different users. The SU neighborhood consists of neighbors with the highest similarity only, and its radial graphs peak toward the Sim measure. The CE neighborhood consists of neighbors with the highest common expertise measure (||〈EA,NA,HA〉||), and its radial graphs expand toward EA, NA, and EC, with a peak at EA. Note that CE consists of the same common experts for all users and the absolute measures (EA, NA, HA, EC) are constant, whereas the relative measures vary by active user. SIMCE stands in-between SU and CE, as its neighborhood consists of neighbors with high similarity and high common expertise. Lastly, the PE neighborhood consists of personalized experts; the neighborhood characteristic of PE is very different from the others and expands toward CA, UA, NA and HA. This confirms that personalized experts are not just similar users or common experts; PE provides more accurate recommendations to users, as seen in Section 4.
Having shown that a personalized expert is a better alternative to similar users, we now examine how well personalized each expert group is for each active user. By using the Jaccard Index, we measure group similarity among different personalized expert groups. The Jaccard Index is one if two clusters are identical and it is zero if two clusters have no common elements. Given two groups, N1 and N2, the Jaccard Index is defined as follows:
$$J(N_1,N_2)=\frac{|N_1\cap N_2|}{|N_1\cup N_2|}.$$
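In code, the Jaccard Index of two neighborhoods is simply (an illustrative sketch):

```python
def jaccard(n1, n2):
    """Jaccard Index of two neighborhoods, given as collections of user IDs."""
    n1, n2 = set(n1), set(n2)
    union = n1 | n2
    return len(n1 & n2) / len(union) if union else 1.0
```

Identical neighborhoods give 1, disjoint ones give 0; for example, `jaccard({1, 2, 3}, {2, 3, 4})` is 0.5.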
Table 11 shows the neighborhood similarity averages for the different types of neighborhoods. Given three different users (User ID: 15, 123, 456), we measure the Jaccard Index for every pair of their neighborhoods and average the pairwise values for each neighborhood type. As expected, CE has a neighborhood similarity of one, as the same common experts are suggested to all users; the neighborhoods for SU and SIMCE tend to be more diverse because they are more likely to select neighbors based on Sim, and users have diverse preferences. We originally expected the personalized expert groups to be more diverse than what we see here; however, the personalized expert groups overlap significantly and exhibit very similar neighborhood characteristics (high CA, UA, NA, HA), which are also obvious characteristics of heavy access users who access most of the items. In fact, 42 of the heaviest access users (the top 5% in HA) are included in each personalized expert group. From this finding and our analysis of the personalized expert groups, we conclude that our personalized expert search correctly identifies the most effective neighborhood for the given data set.
k-NN and other user-based CF algorithms gained much popularity for the simplicity of the algorithms and their performance. As the performance of such algorithms largely depends on the neighborhood selection, it is important to select the most suitable neighborhood for each active user. In this work, we customize the neighborhood for each active user and call such neighborhoods personalized experts; the proposed personalized expert-based recommender system serves users with more accurate recommendations. Furthermore, the proposed neighborhood-based recommender system is more robust to sparse data.
In the neighborhood study, we show that personalized experts are significantly different from similar users, common experts, or similar common experts, and the novel neighborhood (PE) is customized for each active user. We have shown a way to build a global model to find a personalized neighborhood for each active user, but building such a global model can be impractically costly (see Section 3.1) and limits the scalability of the system. In this regard, we plan to explore unsupervised or reinforcement learning algorithms in the future.
Acknowledgements
This research was supported by Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2014M3C4A7030503). Also, this work was supported by the NRF grant funded by the Korea government (MSIP) (No. NRF-2016R1A2B4015820).
Conflict of Interest
Figures
Fig. 1.
Expertise measures (standardized) of different neighborhood types: similar users (k-NN), common experts (CE), similar common experts (SIMCE), personalized experts (PE). (a) User ID: 123, (b) User ID: 456.
TABLES
### Table 1
Recommendation results classification
Recommended | Not recommended
### Table 2
Prediction accuracy of recommender systems (MAE)
| | SU | CE | SCE | PE |
| --- | --- | --- | --- | --- |
| MAE | 0.8709 | 0.9466 | 0.8111 | 0.7723 |
### Table 3
Item coverage of different recommender systems
| \|Rec\| | SU | CE | SCE | PE |
| --- | --- | --- | --- | --- |
| 10 | 0.8608 | 0.2973 | 0.3620 | 0.2986 |
| 20 | 0.9202 | 0.3837 | 0.5133 | 0.3917 |
| 30 | 0.9409 | 0.4602 | 0.6103 | 0.4803 |
| 40 | 0.9531 | 0.5100 | 0.6720 | 0.5611 |
| 50 | 0.9595 | 0.5614 | 0.7216 | 0.6368 |
### Table 4
Diversity of different recommender systems
| \|Rec\| | SU | CE | SCE | PE |
| --- | --- | --- | --- | --- |
| 10 | 0.9290 | 0.6405 | 0.8832 | 0.6914 |
| 20 | 0.9165 | 0.6393 | 0.8496 | 0.6726 |
| 30 | 0.9017 | 0.6333 | 0.8201 | 0.6663 |
| 40 | 0.8862 | 0.6317 | 0.7940 | 0.6672 |
| 50 | 0.8701 | 0.6287 | 0.7699 | 0.6669 |
### Table 5
Precision and recall of recommendations of recommender systems
| | SU | CE | SCE | PE |
| --- | --- | --- | --- | --- |
| Precision | 0.6533 | 0.5985 | 0.6485 | 0.6433 |
| Recall | 0.3412 | 0.6328 | 0.7171 | 0.7357 |
### Table 6
Training data sparsity levels
| | All data | −1 month | −2 month |
| --- | --- | --- | --- |
| ML-100K | 94.9% | 95.8% | 96.9% |
### Table 7
Prediction accuracy by different sparsity levels (MAE)
| Sparsity level (%) | SU | CE | SCE | PE |
| --- | --- | --- | --- | --- |
| 94.9 | 0.8803 | 0.9500 | 0.8111 | 0.7762 |
| 95.8 | 1.2829 | 1.3710 | 1.3383 | 1.2000 |
| 96.9 | 1.4502 | 1.7260 | 1.6733 | 1.3803 |
### Table 8
Precision by different sparsity levels
| Sparsity level (%) | SU | CE | SCE | PE |
| --- | --- | --- | --- | --- |
| 94.9 | 0.6533 | 0.5985 | 0.6485 | 0.6433 |
| 95.8 | 0.6521 | 0.5291 | 0.5473 | 0.6521 |
| 96.9 | 0.6490 | 0.5682 | 0.5834 | 0.6490 |
### Table 9
Recall by different sparsity levels
| Sparsity level (%) | SU | CE | SCE | PE |
| --- | --- | --- | --- | --- |
| 94.9 | 0.3413 | 0.6329 | 0.7171 | 0.7358 |
| 95.8 | 0.2949 | 0.1375 | 0.1143 | 0.5490 |
| 96.9 | 0.2782 | 0.2818 | 0.2554 | 0.4790 |
### Table 10
Recommendation miss rate by different sparsity levels
| Sparsity level (%) | SU | CE | SCE | PE |
| --- | --- | --- | --- | --- |
| 94.9 | 0.5052 | 0.0261 | 0.0439 | 0.0072 |
| 95.8 | 0.5249 | 0.6998 | 0.7856 | 0.1578 |
| 96.9 | 0.5287 | 0.4230 | 0.5333 | 0.2248 |
### Table 11
Jaccard index of different recommender systems
| | SU | CE | SCE | PE |
| --- | --- | --- | --- | --- |
| Jaccard | 0.0792 | 1.0000 | 0.2200 | 0.8363 |
References
1. Sarwar, BM, Karypis, G, Konstan, J, and Riedl, J 2002. Recommender systems for large-scale e-commerce: scalable neighborhood formation using clustering., Proceedings of the 5th International Conference on Computer and Information Technology, Dhaka, Bangladesh, pp.291-324.
2. Deivendran, P, Mala, T, and Shanmugasundaram, B (2011). Content based recommender systems. International Journal of Computer Science & Emerging Technologies. 2, 148-152.
3. Formoso, V, Cacheda, F, and Carneiro, V (2008). Algorithms for efficient collaborative filtering. Efficiency Issues in Information Retrieval Workshop. Heidelberg: Springer, pp. 17-28
4. Herlocker, JL, Konstan, JA, Borchers, A, and Riedl, J 1999. An algorithmic framework for performing collaborative filtering., Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Berkeley, CA, pp.230-237.
5. Kim, N, and Lee, JH (2015). Performance analysis of group recommendation systems in TV domains. International Journal of Fuzzy Logic and Intelligent Systems. 15, 45-52.
6. Papagelis, M, Plexousakis, D, and Kutsuras, T (2005). Alleviating the sparsity problem of collaborative filtering using trust inferences. Trust Management. Heidelberg: Springer, pp. 224-239
7. Shambour, Q, and Lu, J (2015). An effective recommender system by unifying user and item trust information for B2B applications. Journal of Computer and System Sciences. 81, 1110-1126.
8. Senecal, S, and Nantel, J (2004). The influence of online product recommendations on consumers online choices. Journal of Retailing. 80, 159-169.
9. Amatriain, X, Lathia, N, Pujol, JM, Kwak, H, and Oliver, N 2009. The wisdom of the few: a collaborative filtering approach based on expert opinions from the web., Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, Boston, MA, pp.532-539.
10. Kawamae, N 2010. Serendipitous recommendations via innovators., Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Geneva, Switzerland, pp.218-225.
11. Kumar, A, and Bhatia, M (2012). Community expert based recommendation for solving first rater problem. International Journal of Computer Applications. 37, 7-13.
12. Lee, K, and Lee, K (2013). Using experts among users for novel movie recommendations. Journal of Computing Science and Engineering. 7, 21-29.
13. Rusmevichientong, P, Zhu, S, and Selinger, D 2004. Identifying early buyers from purchase data., Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Seattle, WA, pp.671-677.
14. Song, SI, Lee, S, Park, S, and Lee, SG 2012. Determining user expertise for improving recommendation performance., Proceedings of the 6th International Conference on Ubiquitous Information Management and Communication, Kuala Lumpur, Malaysia.
15. Tyler, SK, Zhu, S, Chi, Y, and Zhang, Y 2009. Ordering innovators and laggards for product categorization and recommendation., Proceedings of the 3rd ACM Conference on Recommender Systems, New York, NY, pp.29-36.
16. Kim, SW, Chung, CW, and Kim, D (2009). An opinion-based decision model for recommender systems. Online Information Review. 33, 584-602.
17. Cheng, L, Fan, Y, Yu, C, and Du, Y 2016. An improved trust-aware recommender system for personalized user recommendation in Tmall., Proceedings of the 2nd International Conference on Mechanical, Electronic and Information Technology Engineering, Chongqing, China, pp.60-63.
18. Huang, J, Zhu, K, and Zhong, N (2016). A probabilistic inference model for recommender systems. Applied Intelligence. 45, 686-694.
19. Chung, Y, Jung, HW, Kim, J, and Lee, JH (2013). Personalized expert-based recommender system: training C-SVM for personalized expert identification. Machine Learning and Data Mining in Pattern Recognition. Heidelberg: Springer, pp. 434-441
20. Chung, Y, Lee, SW, and Lee, JH (2013). Personalized expert-based recommendation. Journal of Korean Institute of Intelligent Systems. 23, 7-11.
21. Allison, B, Guthrie, D, and Guthrie, L (2006). Another look at the data sparsity problem. Text, Speech and Dialogue. Heidelberg: Springer, pp. 327-334
22. Billsus, D, and Pazzani, MJ 1998. Learning collaborative information filters., Proceedings of the 15th International Conference on Machine Learning, Madison, WI, pp.46-54.
23. Sarwar, B, Karypis, G, Konstan, J, and Riedl, J 2001. Item-based collaborative filtering recommendation algorithms., Proceedings of the 10th International Conference on World Wide Web, Hong Kong, China, pp.285-295.
24. Sun, M, Lebanon, G, and Kidwell, P (2012). Estimating probabilities in recommendation systems. Journal of the Royal Statistical Society Series C (Applied Statistics). 61, 471-492.
25. Zabinsky, ZB (2009). Random search algorithms. Wiley Encyclopedia of Operations Research and Management Science. Chichester: John Wiley & Sons
26. Akbani, R, Kwek, S, and Japkowicz, N (2004). Applying support vector machines to imbalanced datasets. Machine Learning: ECML 2004. Heidelberg: Springer, pp. 39-50
27. Zheng, EH, Li, P, and Song, ZH (2006). Cost sensitive support vector machines. Control and Decision. 21, 473-476.
28. Bellogin, A, Castells, P, and Cantador, I (2014). Neighbor selection and weighting in user-based collaborative filtering: a performance prediction approach. ACM Transactions on the Web. 8.
29. Wilson, J, Chaudhury, S, and Lall, B 2014. Improving collaborative filtering based recommenders using topic modelling., Proceedings of the 2014 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT), Warsaw, Poland, pp.340-346.
Biographies
Yeounoh Chung received his B.S. in Electrical and Computer Engineering and his M.S. in Computer Science from Cornell University, Ithaca, USA, in 2008 and 2009, respectively. He is currently pursuing Ph.D. in Computer Science at Brown University, Providence, RI, USA. His current research interests focus on big data management and data mining.
E-mail: yeounoh chung@brown.edu
Noo-ri Kim received the B.S. in computer engineering from Sungkyunkwan University, Suwon, Korea in 2013. He is currently pursuing his M.S.-Ph.D. in Computer Engineering at Sungkyunkwan University. His research interests include recommender systems, text mining, and machine learning.
E-mail: pd99j@skku.edu
Chang-yong Park received his B.S. in Computer Engineering from Dongguk University, Korea, in 2010, and his M.S. in Computer Engineering from Sungkyunkwan University in 2014. Now he works at LG Electronics as a software engineer. His research interests include software engineering, context-aware recommender system, and intelligent agents.
E-mail: changyong1.park@lge.com
Jee-Hyong Lee received his B.S., M.S., and Ph.D. in computer science from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 1993, 1995, and 1999, respectively. From 2000 to 2002, he was an international fellow at SRI International, USA. He joined Sungkyunkwan University, Suwon, Korea, as a faculty member in 2002. His research interests include recommender systems, intelligent systems, and machine learning.
E-mail: john@skku.edu
June 2018, 18 (2)
Linking to Microsoft Platform SDK Libraries
The Microsoft Platform SDK libraries must be installed and linked to on a development platform to effectively use the WS-Management sample applications included in the Intel AMT SDK. The Microsoft Platform SDK includes the WinHTTP library, which is required for the proper compilation of the Intel AMT SDK samples for Microsoft* Windows.
Note:
• WinHTTP.lib is required for the WS-Management C++ samples when they are configured to run over openwsman.
• The WS-Management C# samples do not depend on WinHTTP.lib.
• WinHTTP.lib is also required for the configuration sample performing SOAP-based post-configuration connections.
Before any of the samples can be compiled, the paths to the Microsoft Platform SDK lib and include directories need to be referenced by the development environment. There are a number of methods to accomplish this. The following method adds the two directories globally so they are available to any project.
Note: In the following example, the default root directory to the platform SDK is referenced: C:\Program Files\Microsoft Platform SDK
1. In Visual Studio, select Tools > Options.
2. In the Options tree, expand the Projects folder and in the right pane, from the Show directories for drop-down list, select Include files. If "C:\Program Files\Microsoft Platform SDK\include" is not shown in the displayed list, add it to the top of the list. Adding this directory below the Microsoft Visual Studio directories could cause unexpected errors.
3. From the Show directories for drop-down list, select Library Files. If "C:\Program Files\Microsoft Platform SDK\lib" is not shown in the displayed list, add it to the bottom of the list. Adding this directory above the Microsoft Visual Studio directories could cause unexpected issues and errors. Adding this directory to the bottom of the list may contradict other instructions; however, linking errors may occur if it is not placed below the others.
Note: Building projects after this setup may produce a warning that the environment variable $MSSDK is undefined. To eliminate this warning, define MSSDK = C:\Program Files\Microsoft Platform SDK, either with a SET command or by defining the environment variable in c:\autoexec.bat.
# Prism
A prism is a solid with identical, parallel top and bottom faces (the bases), which are congruent polygons with any number of sides. The side faces of a prism are rectangular and are known as lateral faces. The distance between the two bases is known as the height (or length) of the prism.
• Volume = Area of base × Height
• Lateral Surface Area (LSA) = Perimeter of the base × height
• Total surface Area (TSA) = LSA + (2 × Area of the base)
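These formulas can be sketched in Python (the function names are illustrative). For example, a square prism with side 3 and height 5 has base area 9 and base perimeter 12:

```python
def prism_volume(base_area, height):
    """Volume = area of base x height."""
    return base_area * height

def prism_lsa(base_perimeter, height):
    """Lateral Surface Area = perimeter of base x height."""
    return base_perimeter * height

def prism_tsa(base_area, base_perimeter, height):
    """Total Surface Area = LSA + 2 x area of base."""
    return prism_lsa(base_perimeter, height) + 2 * base_area
```

For that square prism, the volume is 45, the LSA is 60, and the TSA is 78.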
# How do you solve for x in 3(x-1) = 2 (x+3)?
Sep 28, 2014
$3(x - 1) = 2(x + 3)$
$3 \cdot x + 3 \cdot (-1) = 2 \cdot x + 2 \cdot 3$
$3 x + \left(- 3\right) = 2 x + 6$
$3 x - 3 = 2 x + 6$
To combine like terms, you can either eliminate the $2x$ on the right-hand side (RHS) or the $3x$ on the left-hand side (LHS); below, $3x$ is subtracted from both sides.
$3 x - 3 - 3 x = 2 x + 6 - 3 x$
$- 3 = - x + 6$
subtracting $6$ from both sides;
$- 3 - 6 = - x + 6 - 6$
$- 9 = - x$
$x = 9$
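The result can be double-checked numerically. The sketch below (a hypothetical `solve_linear` helper, not part of the original answer) solves the general equation $a(x+b) = c(x+d)$ for $x$, then verifies the solution for $3(x-1) = 2(x+3)$:

```python
def solve_linear(a, b, c, d):
    """Solve a*(x + b) = c*(x + d) for x, assuming a != c.
    Expanding: a*x + a*b = c*x + c*d  ->  (a - c)*x = c*d - a*b."""
    return (c * d - a * b) / (a - c)

x = solve_linear(3, -1, 2, 3)  # 3(x - 1) = 2(x + 3)
```

Substituting back: $3(9-1) = 24$ and $2(9+3) = 24$, so both sides agree.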
# Find explicit form of following: $a_n=3a_{n-1}+3^{n-1}$
I wanted to find the explicit form of a recurrence relation, but I got stuck on the non-homogeneous part.
Find explicit form of following: $$a_n=3a_{n-1}+3^{n-1}$$ where $$a_0=1 , a_1 =4,a_2=15$$
My attempt:
For the homogeneous part, the solution is clearly $$c_1 3^n$$.
For the non-homogeneous part, I tried a particular solution of the form $$C\,3^n$$: substituting into the recurrence gives $$C\,3^n = 3C\,3^{n-1} + 3^{n-1} = C\,3^n + 3^{n-1},$$ which has no solution for $$C$$.
However, the answer is $$n3^{n-1} + 3^n$$. What am I missing?
Hint. Dividing both sides by $$3^n$$, we get $$\frac{a_n}{3^n}=\frac{a_{n-1}}{3^{n-1}}+\frac{1}{3}.$$ Now let $$b_n=\frac{a_n}{3^n}$$, so $$b_n=b_{n-1}+\frac{1}{3}$$.
• A general strategy for solving this type of problem is constructing similar structures (like $\frac{a_n}{3^n}$ above). Jun 1 at 14:53
The homogeneous part has solution in $$3^n$$ and the RHS part also has $$3^n$$ so you need to search for a particular solution of the form $$(an+b)3^n$$.
When you have a root $$r$$ of the characteristic equation of multiplicity $$m$$ and if the RHS is $$P(n)r^n$$ with $$P$$ polynomial then you need to search for a particular solution of the form $$Q(n)r^n$$ with $$Q$$ polynomial and $$\deg(Q)=\deg(P)+m$$
Note that in the case RHS is $$P(n)\alpha^n$$ with $$\alpha$$ not a root, then we just say $$m=0$$.
Here $$r=3,\ m=1$$ (single root of $$r-3=0$$) and $$P(n)=\frac 13$$ is a constant, thus a polynomial of degree $$0$$, so $$Q$$ is of degree $$1$$ or simply $$Q(n)=an+b$$.
Let $$A(z)=\sum_{n \ge 0} a_n z^n$$ be the ordinary generating function. The recurrence relation and initial condition imply that \begin{align} A(z) &= a_0 + \sum_{n \ge 1} a_n z^n \\ &= 1 + \sum_{n \ge 1} (3a_{n-1} + 3^{n-1}) z^n \\ &= 1 + 3z \sum_{n \ge 1} a_{n-1} z^{n-1} + z \sum_{n \ge 1} 3^{n-1} z^{n-1} \\ &= 1 + 3z \sum_{n \ge 0} a_n z^n + z \sum_{n \ge 0} (3z)^n \\ &= 1 + 3z A(z) + \frac{z}{1-3z}. \end{align} Solving for $$A(z)$$ yields \begin{align} A(z) &= \frac{1+z/(1-3z)}{1-3z} \\ &= \frac{1-2z}{(1-3z)^2} \\ &= \frac{2/3}{1-3z} + \frac{1/3}{(1-3z)^2} \\ &= \frac{2}{3}\sum_{n\ge 0}(3z)^n + \frac{1}{3}\sum_{n\ge 0}\binom{n+1}{1}(3z)^n \\ &= \sum_{n\ge 0}\left(\frac{2}{3}+\frac{1}{3}(n+1)\right)3^n z^n \\ &= \sum_{n\ge 0}(n+3)3^{n-1} z^n. \end{align} Hence $$a_n=(n+3)3^{n-1}$$ for $$n \ge 0$$.
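As a quick sanity check (not part of the answers above), one can verify in Python that the closed form $(n+3)3^{n-1}$ — equivalently $n3^{n-1}+3^n$ — satisfies the recurrence:

```python
def a_recursive(n):
    """Iterate a_n = 3*a_{n-1} + 3^(n-1) with a_0 = 1."""
    a = 1
    for k in range(1, n + 1):
        a = 3 * a + 3 ** (k - 1)
    return a

def a_closed(n):
    """Closed form (n + 3) * 3^(n-1), kept in exact integers as
    (n + 3) * 3^n // 3 (the product is always divisible by 3)."""
    return (n + 3) * 3 ** n // 3
```

The first few values are 1, 4, 15, matching the initial conditions in the question.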
# The pdf of sum of -log($U_i$) in which Ui is iid uniform distributed
Suppose $$U_i$$ are independently uniformly distributed on $$[0,b]$$, and let $$Y = -\sum_1^n \log(U_i)$$. What is the pdf of $$Y$$? I tried using the characteristic function, but it does not match any of the usual distributions.
Note: the following argument assumes $b=1$. To generalise, note that each $-\ln U_i$ picks up an extra $-\ln b$, so $y$ shifts by $-n\ln b$ and its pdf shifts accordingly.
You probably already worked out $$-\ln U_i\sim\operatorname{Exp}(1)$$, because $$P(-\ln U\le x)=P(U\ge\exp -x)=1-\exp -x.$$Of course, this implies $$-\ln U_i$$ has characteristic function $$1/(1-it)$$, so $$Y$$ has cf $$1/(1-it)^n$$. Now, what distribution is that? Spoiler: it's
a Gamma distribution with $$k=n,\,\theta=1$$, so the pdf is $$\frac{y^{n-1}}{(n-1)!}\exp -y$$ for $$y\ge 0$$.
• Thanks. I have already worked out the case when $b=1$, however I couldn't figure out the generalised part. Why can we just add $-nlnb$ to y? How can you prove it is right regularization? – T.y Jan 19 at 9:11
• @T.y Because $U_i\sim U(0,\,b)$ iff $U_i/b\sim U(0,\,1)$. – J.G. Jan 19 at 9:13
• @T.y Scaling $U_i$ to $bU_i$ changes the characteristic function $\varphi(t)$ to $\varphi(bt)$, not $b^{it}\varphi(t)$. – J.G. Jan 19 at 9:32
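A quick Monte Carlo check of this answer (illustrative code, not from the thread): simulate $Y$ for $U_i \sim U(0,b)$, undo the $-n\ln b$ shift, and the sample mean should be close to $n$, the mean of a $\operatorname{Gamma}(n,1)$ distribution.

```python
import math
import random

random.seed(0)

n, b, trials = 3, 2.0, 100_000

# Y = -sum(log U_i) with U_i ~ Uniform(0, b).  Adding back n*log(b)
# undoes the shift, leaving a Gamma(n, 1) sample whose mean is n.
total = 0.0
for _ in range(trials):
    y = -sum(math.log(random.uniform(0, b)) for _ in range(n))
    total += y + n * math.log(b)

mean = total / trials  # should be close to n
```

With 100,000 trials the sample mean lands within a few hundredths of $n = 3$, consistent with the Gamma identification above.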
# Criteria for Involutive Subbundles
Preliminaries: Let $M$ be a smooth manifold with tangent bundle $TM$. A vector subbundle $VM$ of $TM$ is called involutive if the section space $\Gamma(VM)$ of $VM$ is closed under the Lie bracket of $\Gamma(TM)$ or in other words if $[X,Y] \in \Gamma(VM)$ for all $X,Y \in \Gamma(VM)$.
On the other side the Lie bracket of two vector field can be expressed entirely by the flow transformations of the fields, that is we have:
$$[X,Y] = \frac{1}{2}\left.\frac{\partial^2}{\partial t^2}\right|_{t=0}\left(Fl^Y_{-t}\circ Fl^X_{-t}\circ Fl^Y_{t}\circ Fl^X_{t}\right)$$
where $Fl^X$ and $Fl^Y$ are the flow transformations of $X$ and $Y$ respectively.
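As a concrete illustration (not part of the original question), this formula can be checked numerically on $\mathbb{R}$ with $X = \partial_x$ and $Y = x\,\partial_x$, whose flows are translation and scaling and whose bracket is $[X,Y] = \partial_x$:

```python
import math

# Vector fields on the real line: X = d/dx (flow: translation),
# Y = x d/dx (flow: scaling).  Their Lie bracket is [X, Y] = d/dx.
def flow_X(t, x):
    return x + t

def flow_Y(t, x):
    return x * math.exp(t)

def commutator_curve(t, x0):
    """c(t) = Fl^Y_{-t} o Fl^X_{-t} o Fl^Y_t o Fl^X_t (x0)."""
    return flow_Y(-t, flow_X(-t, flow_Y(t, flow_X(t, x0))))

# (1/2) d^2/dt^2 |_{t=0} c(t), via a central second difference
x0, h = 0.7, 1e-3
second_deriv = (commutator_curve(h, x0) - 2 * commutator_curve(0.0, x0)
                + commutator_curve(-h, x0)) / h**2
bracket_at_x0 = 0.5 * second_deriv  # should be ~1, the coefficient of d/dx
```

Here $c(t) = x_0 + t - t e^{-t}$ in closed form, so $c'(0)=0$ and $\tfrac12 c''(0)=1$, which the finite difference reproduces to high accuracy.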
Now the question is: can we (and if yes, how) decide whether or not a subbundle of $TM$ is involutive entirely in terms of flow transformations?
This would be useful if we want to prove that the bracket is closed on a subbundle about which we know very little directly, but whose associated flow transformations we understand well.
The subbundle is involutive if and only if the image of any given $x \in M$ under all of the flow transformations is an immersed smooth submanifold of $M$ with dimension equal to the rank of the subbundle.
ADDED: One direction follows by the Frobenius theorem. The other direction is even easier.
I think the definitive work on that has been done by Hector Sussmann (Orbits of families of vector fields and integrability of distributions. Trans. Amer. Math. Soc. 180 (1973), 171–188, freely available here). Let $D$ be a smooth subbundle of $TM$ and $G$ the pseudo-group of local diffeomorphisms generated by the flows of smooth sections of $D$. Sussmann proved there is a smallest $G$-invariant smooth distribution $\bar D$ containing $D$, which moreover is involutive and admits maximal integral manifolds through any point. It is important to note that the dimension of $\bar D_x$ will not be constant for $x\in M$. We have $\bar D=D$ if and only if $D$ is $G$-invariant.
Sussmann's paper indeed studies a more general situation, appropriate for control theory, and contains many more very interesting results about the complete integrability of distributions. In fact I would recommend it as a "classic".
Although I agree with your characterization of Sussmann's work, the question is about a much simpler situation, namely a subbundle, which has a fixed constant rank, say, $r$. This means that in a neighborhood of each point in $M$, there are $r$ nonzero linearly independent vector fields spanning the subbundle. The involutivity assumption implies that the Lie bracket of any pair of these vector fields lies in their span. With these assumption the classical Frobenius theorem already tells you exactly what the flow looks like in a neighorhood of that point. – Deane Yang Mar 22 '12 at 22:54
If you have a set of vector fields whose flows form a group which is closed in the topology of $C^1$ convergence on compact sets, then you can compose and take limits, so you get bracket closure. If you have constant orbit rank, these vector fields span a subbundle of the tangent bundle.
you mean "span an involution"? Because "span a subbundle" was already one of my assumptions. – Mark.Neuhaus Mar 22 '12 at 9:36
Mark: I think Ben meant generate (=bracket-generate) a subbundle, as opposed to span (=linearly span). – Claudio Gorodski Mar 22 '12 at 14:02
|
|
# Explaining the photoelectron spectrum of krypton
The photoelectron spectrum of krypton is shown below.
(where the x-axis is in electron volts (eV))
Since both the $3p$ and $3d$ shells are more than half full, based on Hund's rules the lowest-energy level is the one with the largest $J$, i.e. the ${}^2\mathrm{P}_{3/2}$ and ${}^2\mathrm{D}_{5/2}$ states should be lowest in energy (higher value in eV). However, based on the peak intensities, this is not the ordering observed (see labeling in diagram). Why is this the case here?
• I think lowest J is the lowest energy, see here for example. – ron May 21 '15 at 14:08
• This is only true for shells that are less than half full. In Krypton's case, the 3p and 3d shells are full so largest J values are lowest in energy. – 218 May 21 '15 at 14:27
|
|
dc.contributor.author: Marinelli, I.
dc.date.accessioned: 2019-12-10T15:15:44Z
dc.date.available: 2019-12-10T15:15:44Z
dc.date.issued: 2019-12-10
dc.identifier.uri: http://hdl.handle.net/20.500.11824/1052
dc.description.abstract: Insulin-secreting pancreatic $\beta$-cells are responsible for maintaining whole-body glucose homeostasis. Dysfunction or loss of $\beta$-cell mass results in impaired insulin secretion and, in some cases, diabetes. Many of the factors that influence $\beta$-cell function or insulin exocytosis, however, are not fully understood. To support the investigation, mathematical models have been developed and used to design experiments. In this dissertation, we present the Integrated Oscillator Model (IOM), one of the mathematical models used to investigate the mechanism behind the bursting activity that underlies intracellular Ca$^{2+}$ oscillations and pulsatile insulin secretion. The IOM describes the interaction of the cellular electrical activity and intracellular Ca$^{2+}$ with glucose metabolism via numerous feedforward and feedback pathways. These interactions, in turn, produce metabolic oscillations with a sawtooth or pulsatile time course, reflecting different oscillation mechanisms. We determine conditions favorable to each type of oscillation, and show that the model accounts for key experimental findings of $\beta$-cell activity. We propose several extensions of the model to include all the main elements involved in insulin secretion. The latest and most sophisticated model describes the complex metabolism in the mitochondria and the several biological processes in the insulin exocytosis cascade. The model also captures the changes in $\beta$-cell activity and the resulting amount of secreted insulin in response to different concentrations of glucose in the blood. The model predictions, in agreement with findings reported in the experimental literature, show an increase of insulin secretion when the glucose level is high and a low basal insulin concentration when the glucose level decreases. Finally, we use the new model to simulate the interaction among $\beta$-cells (through gap junctions) within the same islet. The simulations show that electrical coupling is sufficient to synchronize the $\beta$-cells within an islet. We also show that the amplitude of the oscillations in the insulin secretion rate is larger when the $\beta$-cells synchronize. This suggests a more efficient secretion of insulin into the bloodstream when the cells burst in unison, as has been observed experimentally.
dc.format: application/pdf
dc.language.iso: eng
dc.rights: Reconocimiento-NoComercial-CompartirIgual 3.0 España
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/es/
dc.subject: Mathematical Modelling; Pancreatic $\beta$-cells; Ordinary Differential Equation; Dynamical System; Oscillations; Bursting
dc.title: Advanced Mathematical Modelling of Pancreatic β-Cells
dc.type: info:eu-repo/semantics/doctoralThesis
dc.relation.projectID: ES/1PE/SEV-2017-0718; ES/1PE/MTM2015-69992-R; ES/2PE/RTI2018-093416-B-I00; EUS/BERC/BERC.2018-2021
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.type.hasVersion: info:eu-repo/semantics/publishedVersion
### This item appears in the following Collection(s)
Except where otherwise noted, this item's license is described as Reconocimiento-NoComercial-CompartirIgual 3.0 España
|
|
# Indirect Fourier transform
Indirect Fourier transform (IFT) is a solution of the ill-posed problem given by the Fourier transform of noisy data (as from biological small-angle scattering), proposed by Glatter.[1] IFT is used instead of a direct Fourier transform of noisy data, since a direct FT would give large systematic errors.[2]
The transform is computed by a linear fit to a subfamily of functions corresponding to constraints on a reasonable solution. If the result of the transform is a distance distribution function, it is common to assume that the function is non-negative and that P(0) = 0 and P(Dmax) ≥ 0, where Dmax is the maximum diameter of the particle. This is approximately true, although it disregards inter-particle effects.
IFT is also performed in order to regularize noisy data.[3]
## Fourier transformation in small angle scattering
See Lindner et al. for a thorough introduction.[4]
The intensity I per unit volume V is expressed as:
$I(\mathbf{q}) = \frac{1}{V}\int_V\int_V\rho(\mathbf{r})\rho(\mathbf{r}')e^{-i\mathbf{q}(\mathbf{r}-\mathbf{r}')}\text{d}\mathbf{r}\text{d}\mathbf{r}',$
where $\rho(\mathbf{r})$ is the scattering length density. We introduce the correlation function $\gamma(\mathbf{r})$ by:
$I(\mathbf{q}) = \int_V\gamma(\mathbf{r})e^{-i\mathbf{q}\cdot\mathbf{r}}\text{d}\mathbf{r}$
That is, taking the Fourier transformation of the correlation function gives the intensity.
The probability of finding, within a particle, a point $i$ at a distance $r$ from a given point $j$ is given by the distance probability function $\gamma_0(r)$. And the connection between the correlation function $\gamma(r)$ and the distance probability function $\gamma_0(r)$ is given by:
$\gamma(r) = b_i\cdot b_j\gamma_0(r)V$,
where $b_k$ is the scattering length of the point $k$. That is, the correlation function is weighted by the scattering length. For X-ray scattering, the scattering length $b$ is directly proportional to the electron density $\rho_e$.
## Distance distribution function p(r)
See main article on distribution functions.
We introduce the distance distribution function $p(r)$ also called the pair distance distribution function (PDDF). It is defined as:
$p(r) = \gamma(r)\cdot r^2.$
The $p(r)$ function can be considered as a probability of the occurrence of specific distances in a sample, weighted by the scattering length density $\rho(\mathbf{r})$. For diluted samples, the $p(r)$ function is not weighted by the scattering length density, but by the excess scattering length density $\Delta\rho(\mathbf{r})$, i.e. the difference between the scattering length density at position $\mathbf{r}$ in the sample and the scattering length density of the solvent. The excess scattering length density is also called the contrast. Since the contrast can be negative, the $p(r)$ function may contain negative values. That is e.g. the case for alkyl groups in fat when dissolved in H2O.
## Introduction to indirect Fourier transformation
This is a brief outline of the method introduced by Otto Glatter (Glatter, 1977).[1] Another approach is given by Moore (Moore, 1980).[5]
In indirect Fourier transformation, a Dmax is defined and an initial distance distribution function $p_i(r)$ is expressed as a sum of N cubic spline functions $\phi_i(r)$ evenly distributed on the interval (0,Dmax):
$p_i(r) = \sum_{i=1}^N c_i\phi_i(r),$
(1)
where $c_i$ are scalar coefficients. The relation between the scattering intensity I(q) and the PDDF pi(r) is:
$I(q) = 4\pi\int_0^\infty p(r)\frac{\sin(qr)}{qr}\text{d}r.$
(2)
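Equation (2) can be sanity-checked numerically against a shape whose distance distribution is known in closed form. The Python sketch below is an illustration added here, not part of the original method; it uses the homogeneous sphere of radius $R$, whose distance probability function is $\gamma_0(r) = 1 - \tfrac{3}{4}(r/R) + \tfrac{1}{16}(r/R)^3$ for $r \le 2R$:

```python
import math

def sphere_p(r: float, R: float = 1.0) -> float:
    """p(r) = r^2 * gamma0(r) for a homogeneous sphere of radius R.

    gamma0(r) = 1 - (3/4)(r/R) + (1/16)(r/R)^3 for r <= 2R, and 0 beyond.
    """
    if r >= 2.0 * R:
        return 0.0
    u = r / R
    return r * r * (1.0 - 0.75 * u + u ** 3 / 16.0)

def intensity_ratio(q: float, R: float = 1.0, n: int = 4000) -> float:
    """I(q)/I(0) by applying eq. (2) numerically; 4*pi and dr cancel in the ratio."""
    dr = 2.0 * R / n
    num = den = 0.0
    for i in range(1, n):  # endpoints contribute zero since p(0) = p(2R) = 0
        r = i * dr
        p = sphere_p(r, R)
        num += p * math.sin(q * r) / (q * r)
        den += p
    return num / den
```

The ratio $I(q)/I(0)$ obtained this way reproduces the analytic sphere form factor $[3(\sin qR - qR\cos qR)/(qR)^3]^2$.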
Inserting the expression for pi(r) (1) into (2) and using that the transformation from p(r) to I(q) is linear gives:
$I(q) = 4\pi\sum_{i=1}^N c_i\psi_i(q),$
where $\psi_i(q)$ is given as:
$\psi_i(q)=\int_0^\infty\phi_i(r)\frac{\sin(qr)}{qr}\text{d}r.$
The $c_i$'s are unchanged under the linear Fourier transformation and can be fitted to data, thereby obtaining the coefficients $c_i^{fit}$. Inserting these new coefficients into the expression for $p_i(r)$ gives a final PDDF $p_f(r)$. The coefficients $c_i^{fit}$ are chosen to minimize the reduced $\chi^2$ of the fit, given by:
$\chi^2 = \frac{1}{M-P}\sum_{k=1}^{M}\frac{[I_{experiment}(q_k)-I_{fit}(q_k)]^2}{\sigma^2(q_k)}$
where $M$ is the number of data points, $P$ is the number of free parameters and $\sigma(q_k)$ is the standard deviation (the error) on data point $k$. However, the problem is ill-posed, and a highly oscillating function would also give a low $\chi^2$. Therefore, the smoothness function $S$ is introduced:
$S = \sum_{i=1}^{N-1}(c_{i+1}-c_i)^2$.
The larger the oscillations, the higher $S$. Instead of minimizing $\chi^2$ alone, the Lagrangian $L = \chi^2 + \alpha S$ is minimized, where the Lagrange multiplier $\alpha$ is called the smoothness parameter. It seems reasonable to call the method indirect Fourier transformation, since the transformation is not performed directly but in three steps: $p_i(r) \rightarrow \text{fitting} \rightarrow p_f(r)$.
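The three steps can be sketched end to end in a few lines. The following Python/NumPy toy is a hedged illustration, not Glatter's implementation: hat functions stand in for the cubic splines, and the grids and default $\alpha$ are arbitrary choices made for this example.

```python
import numpy as np

def ift_fit(q, I_obs, sigma, d_max, n_basis=20, alpha=1e-3):
    """Fit p(r) = sum_i c_i phi_i(r) to I(q) by minimizing chi^2 + alpha * S."""
    r = np.linspace(0.0, d_max, 400)
    dr = r[1] - r[0]
    knots = np.linspace(0.0, d_max, n_basis)
    width = knots[1] - knots[0]
    # phi[i, j]: hat function centred on knot i, evaluated at r[j]
    phi = np.maximum(0.0, 1.0 - np.abs(r[None, :] - knots[:, None]) / width)
    # psi[i, k] = 4*pi * integral of phi_i(r) sin(q_k r)/(q_k r) dr   (eq. 2)
    qr = np.outer(q, r)
    safe = np.where(qr == 0.0, 1.0, qr)
    sinc = np.where(qr == 0.0, 1.0, np.sin(safe) / safe)
    psi = 4.0 * np.pi * (phi[:, None, :] * sinc[None, :, :]).sum(axis=-1) * dr
    # error-weighted linear least squares: rows = data points, cols = basis
    A = (psi / sigma[None, :]).T
    b = I_obs / sigma
    # first-difference operator encoding the smoothness penalty S
    D = np.diff(np.eye(n_basis), axis=0)
    c = np.linalg.solve(A.T @ A + alpha * D.T @ D, A.T @ b)
    return r, phi.T @ c  # r grid and the fitted p(r)
```

Feeding in noiseless sphere data and forward-transforming the returned $p(r)$ reproduces the input intensities, which is the basic consistency check for a fit of this kind.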
## Applications
There have been recent proposals for automatic determination of the constraint parameters using Bayesian reasoning[6] or heuristics.[7]
## Alternative approaches
The distance distribution function $p(r)$ can also be obtained by IFT with an approach using maximum entropy (e.g. Jaynes, 1983;[8] Skilling, 1989[9])
## References
1. ^ a b O. Glatter (1977). "A new method for the evaluation of small-angle scattering data". Journal of Applied Crystallography 10: 415–421. doi:10.1107/s0021889877013879.
2. ^ S. Hansen, J.S. Pedersen (1991). "A Comparison of Three Different Methods for Analysing Small-Angle Scattering Data". Journal of Applied Crystallography 24: 541–548. doi:10.1107/s0021889890013322.
3. ^ A. V. Semenyuk and D. I. Svergun (1991). "GNOM – a program package for small-angle scattering data processing". Journal of Applied Crystallography 24: 537–540. doi:10.1107/S002188989100081X.
4. ^ Neutrons, X-rays and Light: Scattering Methods Applied to Soft Condensed Matter by P. Lindner and Th. Zemb (chapter 3 by Olivier Spalla)
5. ^ P.B. Moore (1980). Journal of Applied Crystallography 13: 168–175. doi:10.1107/s002188988001179x.
6. ^ B. Vestergaard and S. Hansen (2006). "Application of Bayesian analysis to indirect Fourier transformation in small-angle scattering". Journal of Applied Crystallography 39: 797–804. doi:10.1107/S0021889806035291.
7. ^ Petoukhov M. V. and Franke D. and Shkumatov A. V. and Tria G. and Kikhney A. G. and Gajda M. and Gorba C. and Mertens H. D. T. and Konarev P. V. and Svergun D. I. (2012). "New developments in the ATSAS program package for small-angle scattering data analysis". Journal of Applied Crystallography 45: 342–350. doi:10.1107/S0021889812007662.
8. ^ Jaynes E.T. "Papers on Probability, Statistics and Statistical Physics". Dordrecht: Reidel.
9. ^ Skilling J. (1989). Maximum Entropy and Bayesian Methods. Dordrecht: Kluwer Academic Publishers. pp. 42–52.
|
|
### Home > PCT > Chapter Ch9 > Lesson 9.1.3 > Problem 9-42
9-42.
1. Complete the following.
1. What is the average rate of change of the function g(x) = 6 − 2x over the interval [2, 6]?
2. Over the interval [5, 7]?
3. Do you think it is true that g(x) will have a constant average rate of change over any interval? Why or why not?
4. Prove your answer to part (c) by computing the average rate of change for g(x) over the interval [a, b].
$\frac{g(6) - g(2)}{6-2}$
$\frac{g(7) - g(5)}{7-5}$
Yes, because it is linear.
$\frac{g(b) - g(a)}{b-a} = \frac{(6 - 2b) - (6 - 2a)}{b - a} = \frac{-2(b-a)}{b-a} = -2$
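The computation in part (d) can be double-checked numerically; in this short Python snippet (helper name chosen here for illustration), every interval gives the same slope of -2:

```python
def avg_rate_of_change(g, a, b):
    """Average rate of change of g over the interval [a, b]."""
    return (g(b) - g(a)) / (b - a)

g = lambda x: 6 - 2 * x  # the linear function from the problem
```

`avg_rate_of_change(g, 2, 6)`, `avg_rate_of_change(g, 5, 7)`, and any other interval all evaluate to -2, matching the algebra above.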
|
|
PACTF_2017: XOR 2
Category:
Points: 40
Description:
Miles just sent me a really cool article to read! Unfortunately, he encrypted it before he sent it to me. Can you crack the code for me so I can read the article? Article.txt.
Hint:
Did you know that in typical English writing, a character is the same as the one k characters in front of it about 8% of the time, regardless of k?
Write-up
A hint at repeating-key XOR, broken easily with xortool.
$ xortool -x -b -m 1000 Article.txt
$ grep -R 'flag' -i xortool_out/
Binary file xortool_out//000.out matches
xortool_out//032.out:There are infinitely many even numbers, too, but they re much more common: exactly 500 out of the first 1,000. In fact, it s pretty apparent that out of the first X numbers, just about (1/2)X will be even. The flag is primes_are_cool.
Therefore, the flag is primes_are_cool.
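The hint is the whole trick: under repeating-key XOR, shifts that are a multiple of the key length preserve the roughly 8% self-coincidence of English text, while other shifts look essentially random. A minimal Python sketch of the key-length guess that xortool automates (illustrative code, not xortool's own):

```python
def coincidence_rate(data: bytes, k: int) -> float:
    """Fraction of positions whose byte equals the byte k positions later."""
    matches = sum(1 for i in range(len(data) - k) if data[i] == data[i + k])
    return matches / (len(data) - k)

def likely_key_length(data: bytes, max_len: int = 40) -> int:
    """Pick the shift with the highest self-coincidence.

    For repeating-key XOR, multiples of the key length preserve the
    plaintext's ~8% coincidence rate; other shifts score far lower.
    """
    return max(range(1, max_len + 1), key=lambda k: coincidence_rate(data, k))
```

Once the key length is known, each key byte can be recovered independently by frequency analysis of its slice of the ciphertext.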
|
|
Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.
— Robert Frost
.
Explanation and interpretations
The poem, especially its last lines, where the narrator declares that taking the road “one less traveled by” “made all the difference,” can be seen as a declaration of the importance of independence and personal freedom. However, Frost likely intended the poem as a gentle jab at his great friend and fellow poet Edward Thomas, and seemed amused at this slightly “mischievous” misinterpretation. The Road Not Taken seems to illustrate that once one takes a certain road, there is no turning back; although one might change paths later on, one still cannot change the past.
— Wikipedia
.
.
2008.03.05 Wednesday
|
|
Contents
Idea
A logical framework is a formal metalanguage for deductive systems, such as logic, natural deduction, type theories, sequent calculus, etc. Of course, like any formal system, these systems can be described in any sufficiently strong metalanguage. However, all logical systems of this type share certain distinguishing features, so it is helpful to have a particular metalanguage which is well-adapted to describing systems with those features.
Much of the description below is taken from (Harper).
Overview
The sentences of a logical framework are called judgments. It turns out that in deductive systems, there are two kinds of non-basic forms that arise very commonly, which we may call
• hypothetical judgments: one judgment is a logical consequence of some others.
• generic judgments: a judgment that is made generally for all values of some parameters, each ranging over a “domain” or “syntactic category”. (Note that a “syntactic category” in this sense is unrelated to the notion of syntactic category that is a category built out of syntax; here we mean “category” in its informal sense of a subgrouping, so that for instance we have one syntactic category of “types” and another of “terms”.)
These two forms turn out to have many parallel features, e.g. reflexivity and transitivity of hypothetical judgments correspond to variable-use and substitution in generic judgments. Appealing to the propositions as types principle, therefore, it is convenient to describe a system in which they are actually both instances of the same thing. That is, we identify the notion of evidence for a judgment with the notion of object of a syntactic category.
This leads to a notion that we will call an LF-type. Thus we will have types such as
• The LF-type of evidence for some judgment.
• The LF-type of objects of a syntactic category.
We will also have some general type-forming operations. Perhaps surprisingly, it turns out that
• dependent product types (dependent function types)
are all that we need.
There is a potential confusion of terminology, because these LF-types in a logical framework (being itself a type theory) are distinct from the objects that may be called “types” in any particular logic we might be talking about inside the logical framework. Thus, for instance, when formalizing Martin-Lof type theory in a logical framework, there is an “LF-type” which is the type of objects of the syntactic category of MLTT-types. This is furthermore distinct from a type of types, which is itself an object of the syntactic category of MLTT-types, i.e. a term belonging to the LF-type of such.
The type theory of a logical framework often includes a second layer called LF-kinds, which enables us to classify families of LF-types. For instance, the universe $Type$ of all LF-types is an LF-kind, as is the collection $A\to Type$ of all families of LF-types dependent on some LF-type $A$. The LF-types and LF-kinds together are very similar to a pure type system with two sorts $Type$ and $Kind$, with axiom $Type:Kind$ and rules $(Type,Kind)$ and $(Type,Type)$, although there are some minor technical differences such as the treatment of definitional equality (PTS’s generally use untyped conversion, whereas logical frameworks are often formulated in a way so that only canonical forms exist).
Thus, we might have the following hierarchy of “universes”, which we summarize to fix the notation:
• $Kind$, the sort of LF-kinds
• $Type$, the LF-kind of LF-types, i.e. $Type:Kind$.
• $tp$ or $type$, the LF-type of all types in some object theory being discussed in LF, i.e. $tp : Type$.
• $\mathcal{U}_i$, a type-of-types in such an object-theory, i.e. $\mathcal{U}_i:tp$.
Once we have set up the logical framework as a language, there are then two approaches to describing a given logic inside of it. See (Harper), and the other references, for more details.
Synthetic presentations
In a synthetic presentation, we use LF-types to represent the syntactic objects and judgments of the object theory. Thus, if the object theory is a type theory, then in LF we have things like:
• an LF-type $tp$ of object-theory types
• an LF-type $tm$ of object-theory terms
• a dependent LF-type $of : tm \to tp \to Type$ expressing the object-theory typing judgment. That is, for each object-theory term $a:tm$ and each object-theory type $A:tp$, we have an LF-type $of(a,A)$ expressing the object-theory judgment “$a:A$” that $a$ is of type $A$. According to propositions as types (at the level of the metatheory, sometimes called “judgments as types”), the elements of $of(a,A)$ are “proofs” that $a:A$.
Note that we do not have to explicitly carry around an ambient context, as we sometimes do when presenting type theories in a more explicit style of a deductive system. This is because the notions of hypothetical and generic judgments are built into the logical framework and handled automatically by its contexts. We will discuss this further below.
Synthetic presentations are preferred by the school of Harper-Honsell-Plotkin and are generally used with implementations such as Twelf. They are very flexible and can be used to represent many different object-theories. Moreover, they generally support an adequacy theorem, that there is a compositional bijection between the syntactic objects of the object-theory, as usually presented, and the canonical forms of appropriate LF-type in its LF-presentation. Here “compositional” means that the bijection respects substitution.
Note that the adequacy theorem is a correspondence at the level of syntax; it does not even incorporate the object-theory notion of definitional equality! Two object-theory terms such as $(\lambda x.x)y$ and $y$ that are definitionally equal (by beta-reduction) are syntactically distinct, and hence also correspond to distinct syntactic entities in the LF-encoding. The definitional equality that relates them is represented by an element of the LF-type $defeq(\dots)$ encoding the definitional-equality judgment of the object-theory. This is appropriate because such LF-encodings are used, among other things, for the study of the syntax of the object-theory, e.g. for proving properties of its definitional equality.
However, synthetic presentations do not make maximal use of the framework in the case when the object-theory is also a type theory whose judgments are “analytic”. Here “synthetic” means roughly “requires evidence” whereas “analytic” means roughly “obvious”.
Analytic presentation
An analytic presentation is only possible for certain kinds of object-theories, generally those which are type theories similar to LF itself. In this case, we represent object-theory types by LF-types themselves. Thus we still have the LF-type $tp$ of object-theory types, but instead of the LF-type $tm$ of terms and the dependent LF-type $of$ representing the object-theory typing judgment, we have
• a dependent LF-type $el : tp \to Type$
which assigns to each object-theory type, the LF-type of its elements. In other words, the typing judgment of the object-theory is encoded by the typing judgment of the meta-theory.
Now we have to make a choice about how to represent the definitional equality of the object-theory. A consistent choice is to also represent it by the definitional equality of the meta-theory. That is, in addition to merely giving “axioms” such as $tp$ and $el$, we must give equations representing the rules of the object-theory as equalities in the logical framework. For instance, we must have a beta-reduction rule such as
app A B (lam A B F) M = F M
If the object-theory is itself a dependent type theory whose only definitional equalities are beta-reductions like this, then if we make the coercion $el$ implicit, we can think of the resulting encoding as analogous to a pure type system with three sorts, $tp$, $Type$, and $Kind$, with $tp:Type$ and $Type:Kind$.
However, from a practical point-of-view, rather than extending the logical framework with ad hoc definitional equalities to represent a particular object-theory, often what is actually done is that equality is defined as another type family with explicitly-introduced constructors. In other words, we use the analytic representation of types, but the synthetic representation of definitional equality. For example, in Twelf, the above equation could be represented by first assuming an LF-type family
eq : {A:tp} el A -> el A -> type
(Twelf uses braces as notation for the dependent product), and then postulating a constant
beta : {A:tp} {B:tp} {F : el A -> el B} {M : el A} eq B (app A B (lam A B F) M) (F M)
in addition to the other axioms of equality.
The analytic encoding is associated with Martin-Löf. While convenient for the description of rules in type theories, it is often less appropriate for the purposes of metatheoretic analysis.
For instance, in the hybrid style with the LF-type family $eq$, the terms in $el(A)$ will involve explicit coercions along equalities in $eq$. This destroys the adequacy theorem, since coercions along definitional equalities are generally silent in the usual presentation of a theory. Moreover, one must assert explicitly that dependent types respect definitional equalities, and for multiply-dependent types this requires dependent products in the object-theory.
On the other hand, the “fully analytic” version where definitional equalities in the object-theory are also definitional equalities in the meta-theory involves adding additional equations to the meta-theory, which in general can make it impossible to reason about. One needs a decision procedure for these equalities even to be able to check proofs.
Thus, analytic approaches are less general and flexible; they are best adapted to describing the rules and semantics of a dependent type theory, whereas synthetic approaches are better for reasoning about syntax and for studying more general object-theories.
Higher-order abstract syntax
In both synthetic and analytic presentations, we use higher-order abstract syntax (HOAS). Roughly, this means that variables in the object-theory are not terms of some LF-type, but are represented by actual LF-variables. For instance, when describing a type theory containing function types synthetically, we would have
• an LF-term $arr : tp \to tp \to tp$, where for object-theory types $A:tp$ and $B:tp$, the term $arr(A,B):tp$ represents their function-type
• an LF-term $app : tm \to tm \to tm$, where $app(f,a)$ represents the function application $f(a)$
• an LF-term $lam : (tm \to tm) \to tm$, representing lambda abstraction.
The point is that the argument of $lam$ (the “body” of the lambda abstraction) is not a “term containing a free variable $x$” but rather an LF-function from object-theory terms to object-theory terms. This is intended to be the function “substitute” which knows about the body of the lambda-abstraction, and when given an argument it substitutes it for the variable in that body and returns the result.
This approach completely avoids dealing with the problems of variable binding and substitution in the object language, by making use of the binding and substitution in the metalanguage LF. One might say that the variables in LF are the “universal notion of variable” which is merely reused by all object-theories.
The power of weak frameworks
It may be tempting to think of the LF-types such as $tp$ and $tm$ as inductively defined by their specified constructors (such as $arr$ for $tp$, and $app$ and $lam$ for $tm$). However, this is incorrect; LF does not have inductive types. In fact, this weakness is essential in order to guarantee “adequacy” of the HOAS encoding.
Suppose, for instance, that $tm$ were inductively defined inside of LF. Then we could define a function $tm\to tm$ by pattern-matching on the structure of $tm$, doing one thing if $tm$ were a lambda-abstraction and another thing if it were a function application. But such a function is definitely not the sort of thing that we want to be able to pass to the LF-function $lam$! By disallowing such matching, though, we can guarantee that the only functions $tm\to tm$ we can define and pass to $lam$ correspond to “substituting in a fixed term” as we intended.
As an even simpler example, suppose we consider an object-theory containing just one LF-type $nat$ together with constructors $z : nat$ and $s : nat \to nat$. Although we would like to think of $nat$ as representing the natural numbers, because of the lack of an induction principle, the LF-type $nat \to nat$ certainly cannot be shown to contain all the functions from natural numbers to natural numbers (essentially, we can only construct the constant functions and those incrementing their argument by a fixed constant). On the other hand, to some extent it is possible to get around this restriction by taking a relational rather than a functional point-of-view. For example, addition of natural numbers can be defined as a type family
add : nat -> nat -> nat -> type
together with a pair of constructors
add/z : {N:nat} add z N N.
add/s : {M:nat}{N:nat}{P:nat} add M N P -> add (s M) N (s P).
Now, it is still not possible to prove inside LF that $add$ is a total functional relation (i.e., that for all M:nat and N:nat there exists a unique P:nat such that add M N P). However, in this case that is certainly easy to verify by inspection, and the Twelf proof assistant has facilities for verifying such properties automatically (though in general checking totality is better supported than checking uniqueness).
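The relational reading of `add` can be mimicked outside LF. The Python sketch below is an analogy only, not Twelf's semantics: it decides whether `add M N P` is derivable using exactly the two constructors `add/z` and `add/s`:

```python
def derivable_add(m: int, n: int, p: int) -> bool:
    """Is `add m n p` derivable from the clauses add/z and add/s?"""
    if m == 0:
        # add/z : add z N N
        return n == p
    # add/s : add M N P -> add (s M) N (s P), read backwards as a search step
    return p > 0 and derivable_add(m - 1, n, p - 1)
```

Here `derivable_add(2, 3, 5)` succeeds while `derivable_add(2, 3, 6)` fails, illustrating that the relation is in fact functional even though LF itself cannot express that proof.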
Implementations
One of the uses of a logical framework is that as a type theory itself, it can be implemented in a computer. This provides a convenient system in which one can “program” the rules of any other specific type theory or logic which one wants to study.
For a list of logical framework implementations, see Specific logical Frameworks and Implementations.
Historically, the first logical framework implementation was Automath. The goal of the Automath project was to provide a tool for the formalization of mathematics without foundational prejudice. Many modern logical frameworks carry influences of this.
Then inspired by the development of Martin-Löf dependent type theory was the Edinburgh Logical Framework (ELF). The logic and type theory-approaches were later combined in the Elf language. This gave rise to Twelf.
References
The original logical framework using a synthetic approach was introduced in
• Bob Harper, Furio Honsell, and Gordon Plotkin, A framework for defining logics
while the analytic version was proposed by
• Per Martin-Löf, On the meanings of the logical constants and the justifications of the logical laws
General overviews include:
• Frank Pfenning, Logical frameworks – a brief introduction (pdf)
• Frank Pfenning, Logical frameworks In Alan Robinson and Andrei Voronkov (eds.) Handbook of Automated Reasoning, chapter 17, pages 1063–1147. Elsevier Science Publishers, 1999. (ps).
• Frank Pfenning, Logical frameworks web site (web), including an extensive bibliography and a list of implementations
• Randy Pollack, Some recent logical frameworks (2010) (pdf)
A number of examples of encoding object-theories into LF can be found in
• Arnon Avron, Furio Honsell, Ian A. Mason, and Robert Pollack, Using typed lambda calculus to implement formal systems on a machine
Revised on June 15, 2016 09:25:10 by Noam Zeilberger (193.55.177.48)
|
|
## On Efficiency and Accuracy in Cardioelectric Simulation
Please always quote using this URN: urn:nbn:de:0297-zib-10934
• Reasons for the failure of adaptive methods to deliver improved efficiency when integrating monodomain models of myocardial excitation are discussed. Two closely related techniques for reducing the computational complexity of linearly implicit integrators, deliberate sparsing and splitting, are investigated with respect to their impact on computing time and accuracy.
|
|
# Magnetic Fields in Matter
In dielectric media, dipoles get aligned by an external electric field - a polarization density $$\mathbf{P}\left(\mathbf{r}\right)=\varepsilon_{0}\chi\left(\mathbf{r}\right)\mathbf{E}\left(\mathbf{r}\right)$$ builds up. In magnetic media, the same happens with the magnetic field and magnetic dipoles. Here, however, the effect is called magnetization $$\mathbf{M}\left(\mathbf{r}\right)=\chi_{m}\left(\mathbf{r}\right)\mathbf{H}\left(\mathbf{r}\right)$$ with the "magnetic susceptibility" $$\chi_{m}\left(\mathbf{r}\right)$$. Then, the magnetic induction is given by $\begin{eqnarray*}\mathbf{B}\left(\mathbf{r}\right)&=&\mu_{0}\left(\mathbf{H}\left(\mathbf{r}\right)+\mathbf{M}\left(\mathbf{r}\right)\right)\\&=&\mu_{0}\left(\mathbf{H}\left(\mathbf{r}\right)+\chi_{m}\left(\mathbf{r}\right)\mathbf{H}\left(\mathbf{r}\right)\right)\\&\equiv&\mu_{0}\color{red}{\mu\left(\mathbf{r}\right)}\mathbf{H}\left(\mathbf{r}\right)\end{eqnarray*}$with the introduced relative permeability $$\mu\left(\mathbf{r}\right)$$. If $$\mu\left(\mathbf{r}\right)\neq1$$ and so $$\chi_{m}\left(\mathbf{r}\right)$$ does not vanish, we speak of a magnetic medium. In the given problems we will use Maxwell's equations in the magnetostatic approximation to calculate the magnetic field(s) in the presence of magnetic media.
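For a linear medium the constitutive relation above is a one-liner to evaluate; the following Python snippet (illustrative only, SI units, scalar fields for simplicity) computes $\mathbf{B} = \mu_{0}(1+\chi_{m})\mathbf{H}$:

```python
import math

MU_0 = 4.0e-7 * math.pi  # vacuum permeability, in T*m/A

def b_field(h: float, chi_m: float) -> float:
    """Magnetic induction B = mu_0 * (1 + chi_m) * H in a linear medium."""
    mu_r = 1.0 + chi_m  # relative permeability mu(r)
    return MU_0 * mu_r * h
```

Setting $\chi_m = 0$ recovers the vacuum relation $B=\mu_0 H$, while $\chi_m = -1$ (a perfect diamagnet, as in the superconductor problem below) gives $B = 0$.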
## Superconductors and Their Magnetostatic Fields
The Proca formulation of electrodynamics allows one to account for a hypothetical massive photon. This formulation has led to astonishing experiments but is formally equivalent to the London theory of superconductivity. Learn in this problem how a magnetic field is expelled from a superconductor by "massive photons".
## A Magnetic Sphere with Surface Current
Using the magnetostatic potential can be extremely useful to calculate magnetostatic problems. However, it can only be defined if no currents are present. In this problem you will discover how we can still use the potential in situations of strongly confined surface currents.
|
|
## Thursday, 25 August 2011
### The Mathematics of Super String Theory
The single most important equation in (first-quantized bosonic) string theory is the N-point scattering amplitude. This treats the incoming and outgoing strings as points, which in string theory are tachyons, with momenta $k_i$ that connect to the string world surface at the surface points $z_i$. It is given by the following functional integral, which integrates (sums) over all possible embeddings of this 2D surface in 26 dimensions.
$A_N = \int D\mu \int D[X]\, \exp\left( -\frac{1}{4\pi\alpha'} \int \partial_z X_{\mu}(z,\overline{z})\, \partial_{\overline{z}} X^{\mu}(z,\overline{z})\, \text{d}^2 z + i \sum_{i=1}^{N} k_{i\,\mu} X^{\mu}(z_i,\overline{z}_i) \right)$
The functional integral over the embeddings $X$ can be performed in closed form because the integrand is Gaussian; up to normalization, the result is the Koba–Nielsen factor $\prod_{i<j}|z_i-z_j|^{2\alpha\, k_i\cdot k_j}$.
This is then integrated over the various points $z_i$. Special care must be taken because two parts of this complex region may represent the same point on the 2D surface, and you don't want to integrate over them twice. You also need to make sure you are not integrating multiple times over different parameterisations of the surface. When this is taken into account, it can be used to calculate the 4-point scattering amplitude (the 3-point amplitude is simply a delta function):
$A_4 = \frac{ \Gamma (-1+\frac12(k_1+k_2)^2) \Gamma (-1+\frac12(k_2+k_3)^2) } { \Gamma (-2+\frac12((k_1+k_2)^2+(k_2+k_3)^2)) }$
This is a beta function. It was this beta function which was apparently found before full string theory was developed. With superstrings, the equations contain not only the 10D space-time coordinates $X$ but also the Grassmann coordinates $\theta$. Since there are various ways this can be done, this leads to different string theories.
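As a quick numerical illustration (any concrete kinematic values here would be made up), the amplitude above is just the Euler beta function $B(a,b)=\Gamma(a)\Gamma(b)/\Gamma(a+b)$ evaluated at $a=-1+\frac12(k_1+k_2)^2$ and $b=-1+\frac12(k_2+k_3)^2$:

```python
import math

def four_point_amplitude(a, b):
    """Euler beta function B(a, b) = Gamma(a) * Gamma(b) / Gamma(a + b)."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

# Sanity check against a known value: B(2, 3) = 1/12.
print(four_point_amplitude(2.0, 3.0))  # 0.08333...
```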
When integrating over surfaces such as the torus, we end up with expressions in terms of theta functions and elliptic functions such as the Dedekind eta function. Only when raised to the 24th power is the eta function smooth everywhere, which it has to be to make physical sense. This is the origin of the 26 dimensions of space-time needed for bosonic string theory: the extra two dimensions arise as degrees of freedom of the string surface.
### D-Branes
D-Branes are membrane-like objects in 10D string theory. They can be thought of as arising from a Kaluza-Klein compactification of 11D M-Theory, which contains membranes. Because compactification of a geometric theory produces extra vector fields, the D-branes can be included in the action by adding an extra U(1) vector field to the string action.
$\partial_z \rightarrow \partial_z +iA_z(z,\overline{z})$
In type I open string theory, the ends of open strings are always attached to D-brane surfaces. A string theory with more gauge fields such as SU(2) gauge fields would then correspond to the compactification of some higher dimensional theory above 11 dimensions which is not thought to be possible to date.
### Why Five Superstring Theories?
For a 10 dimensional supersymmetric theory we are allowed a 32-component Majorana spinor. This can be decomposed into a pair of 16-component Majorana-Weyl (chiral) spinors. There are then various ways to construct an invariant depending on whether these two spinors have the same or opposite chiralities:
| Superstring Model | Invariant |
| --- | --- |
| Heterotic | $\partial_zX^\mu-i\overline{\theta_{L}}\Gamma^\mu\partial_z\theta_{L}$ |
| IIA | $\partial_zX^\mu-i\overline{\theta_{L}}\Gamma^\mu\partial_z\theta_{L}-i\overline{\theta_{R}}\Gamma^\mu\partial_z\theta_{R}$ |
| IIB | $\partial_zX^\mu-i\overline{\theta^1_{L}}\Gamma^\mu\partial_z\theta^1_{L}-i\overline{\theta^2_{L}}\Gamma^\mu\partial_z\theta^2_{L}$ |
The heterotic superstrings come in two types, SO(32) and E8×E8, and the type I superstrings include open strings.
Sources:
Wikipedia
|
|
# URL in Footnote [duplicate]
I have a problem with long URLs in a Footnote
Here is an example:
\documentclass{scrreprt}
\usepackage[utf8]{inputenc}
\usepackage{url}
\begin{document}
Lorem Ipsum\footnote{Vgl. \url{http://sourceforge.net/p/jaudio/git/ci/jAudio2.0-dev/tree/src/jAudioFeatureExtractor/AudioFeatures}}
\end{document}
This is how it looks:
You can see the big space between the german "Vgl." and the URL.
How can I set
1. explicit url break points at .../j|Audio|Feature|Extractor/... and
2. set the text alignment in the footnote to left-aligned (flush left)?
• \raggedright should work. Additionally, please see Zeilenumbrüche in Bibliographielinks. – Johannes_B May 18 '15 at 15:35
• although the question itself seems to be on a slightly different topic, there is a useful answer therein: [How to fill an underfull box in footnote with url?](tex.stackexchange.com/q/88553/579) – barbara beeton May 18 '15 at 15:42
• @Johannes_B thank you! \raggedright works for left-alignment. But your posted link is about URLs in BibTeX and doesn't say how to set explicit break points. – Vertex May 18 '15 at 15:43
• I didn't have the right TeX.SX Q/A at hand and my washing machine was beeping, so i left a link to a place where i knew for sure the right answer can be found. But it is tricky, the answer is actually in the question. ;-) – Johannes_B May 18 '15 at 16:01
• @Vertex Sorry, i not really at the keyboard and just had time to leave short comments. You cannot add breakpoints directly within the url, but you can add specific letters to the list of breakable chars. Please see Forcing linebreaks in \url for further details. If that answers the question, please drop me a note. – Johannes_B May 18 '15 at 19:04
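Putting the comments together, one possible solution is to append the desired letters to the `\UrlBreaks` hook of the `url` package (treat this as a sketch of the approach described in the linked answers, not a canonical fix; the chosen break letters `A`, `F`, `E` correspond to the `jAudioFeatureExtractor` path segment):

```latex
\documentclass{scrreprt}
\usepackage[utf8]{inputenc}
\usepackage{url}
% Allow line breaks before the capital letters A, F and E inside URLs,
% in addition to the characters url already breaks at:
\expandafter\def\expandafter\UrlBreaks\expandafter{\UrlBreaks\do\A\do\F\do\E}
\begin{document}
Lorem Ipsum\footnote{\raggedright Vgl.
  \url{http://sourceforge.net/p/jaudio/git/ci/jAudio2.0-dev/tree/src/jAudioFeatureExtractor/AudioFeatures}}
\end{document}
```

`\raggedright` inside the footnote addresses the left-alignment part of the question, as suggested in the first comment.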
|
|
# Find the value of $\tan 22^{\large\circ}30'$
$\begin{array}{1 1}(A)\;\large\frac{1}{\sqrt 2+1}&(B)\;\large\frac{1}{\sqrt 2-1}\\(C)\;\large\frac{1}{\sqrt 3+1}&(D)\;\large\frac{1}{\sqrt 3-1}\end{array}$
Toolbox:
• $\sin 2\theta=2\sin \theta\cos\theta$
• $\cos^2\theta=\large\frac{1}{2}$$(1+\cos 2\theta)$

Given: $\tan 22^{\large\circ}30'$. Let $\theta=45^{\large\circ}$; then $\large\frac{\theta}{2}$$=22^{\large\circ}30'$.
$\tan \large\frac{\theta}{2}=\frac{\sin \Large\frac{\theta}{2}}{\cos \Large\frac{\theta}{2}}$
Multiply both numerator and denominator by $2\cos \large\frac{\theta}{2}$
$\Rightarrow \large\frac{2\sin \Large\frac{\theta}{2}\normalsize \cos \Large\frac{\theta}{2}}{2\cos^2\Large\frac{\theta}{2}}$
$\Rightarrow \large\frac{\sin \theta}{2\big[\large\frac{1}{2}\normalsize (1+\cos \theta)\big]}$
$\Rightarrow \large\frac{\sin \theta}{1+\cos \theta}$
Where $\theta=45^{\large\circ}$
$\Rightarrow \large\frac{\sin 45^{\large\circ}}{1+\cos 45^{\large\circ}}$
$\Rightarrow \large\frac{\Large\frac{1}{\sqrt 2}}{1+\Large\frac{1}{\sqrt 2}}$
$\Rightarrow \large\frac{1}{\sqrt 2+1}$
Hence (A) is the correct answer.
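A quick numerical sanity check of the result with the standard library:

```python
import math

theta = math.radians(22.5)  # 22 degrees 30 minutes
lhs = math.tan(theta)
rhs = 1.0 / (math.sqrt(2) + 1.0)  # option (A); equals sqrt(2) - 1 after rationalizing

print(lhs, rhs)  # both approximately 0.41421356
```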
|
|
# A very simple discrete dynamical system with pebbles
Let us suppose we have slots $n$ slots $1, \ldots, n$ and $k$ pebbles, each of which is initially placed in some slot. Now the pebbles want to space themselves out as evenly as possible, and so they do the following. At each time step $t$, each pebble moves to the slot closest to the halfway point between its neighboring pebbles; if there is a tie, it chooses the slot to the left. The leftmost and rightmost pebbles apply the same procedure, but imagining that there are slots numbered $0$ and $n+1$ with pebbles in them.
Formally, numbering the pebbles $1, \ldots, k$ from left to right, and letting $x_i(t)$ be the slot of the $i$'th pebble at time t, we have $$x_i(t+1) = \lfloor \frac{x_{i-1}(t)+x_{i+1}(t)}{2} \rfloor, i = 2, \ldots, k-1$$ $\lfloor \cdot \rfloor$ rounds down to the closest integer. Similarly, $$x_1(t+1) = \lfloor (1/2) x_2(t) \rfloor, x_k(t+1) = \lfloor \frac{x_{k-1}(t) + (n+1)}{2} \rfloor.$$
Now the fixed point of this procedure is the arrangement in which $x_{i+1} - x_i$ and $x_i - x_{i-1}$ differ by $1$. My question: is it true that this fixed point is reached by the above procedure after sufficiently many iterations?
Why I care: no concrete reason really, I am just reading about finite difference methods, and this seemed like a simple problem connected with some of the things which are confusing me.
You're missing a factor of 1/2 in your expression for $x_1(t+1)$. – mjqxxxx Jan 21 '11 at 7:40
@mjqxxxx - thanks, fixed now. – angela o. Jan 21 '11 at 12:38
Update: Even simpler, with 5 slots: $(1,4) \rightarrow (2,3) \rightarrow (1,4)$.
You have a total number of unoccupied spaces $n-k$, divided into $k+1$ intervals between pebbles. Your procedure looks at adjacent pairs of empty intervals and tries to make them more similar. If they differ by more than one, the larger of the two intervals will be reduced and the smaller increased so that their difference becomes zero or one. If they differ by zero, they will be left alone. If they differ by one, they will be swapped if necessary so that the larger of the two intervals is on the right. In short, the only steady states will be those where each interval is the same size or one greater than its neighboring interval on the left -- there are many of these. If the update rules are applied one pebble at a time, a steady state will always be reached, because the total discrepancy, $D \equiv \sum_{i=1}^{k} |x_{i+1} - 2x_i + x_{i-1} - 1/2|$, decreases with each move. But if the update rules are applied simultaneously, it is less clear that you must reach a steady state. I think that you still must, but I don't have as simple a proof.
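The two-pebble counterexample above is easy to verify by simulating the synchronous update rule (a minimal sketch, using the slot conventions defined in the question, with virtual pebbles in slots $0$ and $n+1$):

```python
def step(x, n):
    """One synchronous update; x is the sorted list of occupied slots in 1..n."""
    k = len(x)
    new = []
    for i in range(k):
        left = x[i - 1] if i > 0 else 0           # virtual pebble in slot 0
        right = x[i + 1] if i < k - 1 else n + 1  # virtual pebble in slot n+1
        new.append((left + right) // 2)           # floor of the midpoint
    return new

# n = 5 slots, pebbles at (1, 4): the synchronous dynamics enter a 2-cycle.
state = [1, 4]
for _ in range(4):
    print(state)
    state = step(state, 5)
# prints [1, 4], [2, 3], [1, 4], [2, 3]
```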
|
|
# Integral of 1/(2x)

Since $\frac{1}{2}$ is constant with respect to $x$, it can be moved out of the integral:

$$\int \frac{1}{2x}\,dx = \frac{1}{2}\int \frac{1}{x}\,dx = \frac{1}{2}\ln|x| + C = \ln\sqrt{|x|} + C.$$

## Why two methods seem to disagree

A common point of confusion, raised in the quoted forum post: pulling out the factor of $\frac{1}{2}$ gives $\frac{1}{2}\ln|x| + C$, while substituting $u = 2x$ gives $\frac{1}{2}\ln|2x| + C$. These antiderivatives differ only by the constant $\frac{1}{2}\ln 2$, which is absorbed into $C$, so both are correct.
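A numeric spot-check (standard library only) that $\int_1^e \frac{1}{2x}\,dx = \frac{1}{2}$, consistent with the antiderivative $\frac{1}{2}\ln|x|$:

```python
import math

def trapezoid(f, a, b, n=100000):
    """Approximate the integral of f over [a, b] with n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

integral = trapezoid(lambda x: 1.0 / (2.0 * x), 1.0, math.e)
print(integral)  # approximately 0.5, since (1/2) * ln(e) = 1/2
```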
|
|
# There does not exist a group $G$ such that $|G/Z(G)|=pq$ for $p,q$ prime.
Let $p$ and $q$ be primes, with $p<q$ and $p \nmid (q-1)$, i.e. $pk \neq q-1 \ , \ \forall k \in \mathbb{Z}$. Show that there does not exist a group $G$ such that $$\left|\frac{G}{Z(G)}\right|=pq,$$ where $Z(G)$ is the center of $G$.
## 1 Answer
Hint: prove the basic
Lemma: For any group $\,G\,$ , the quotient $\,G/Z(G)\,$ cannot be cyclic non-trivial, or in other words: for any group, $\,G/Z(G)\,$ is cyclic iff $\,G\,$ is abelian, and in this case the quotient is the trivial group.
Now just show that under the given data, a group of order $\,pq\,$ must be cyclic...
continue please – Jarbas Dantas Silva Oct 17 '12 at 1:16
What "continue" and where? – DonAntonio Oct 17 '12 at 2:55
Why a group of order pq must be ciclic? – Jarbas Dantas Silva Oct 17 '12 at 15:57
(1) How many Sylow $\,p-\,$subgroups it has? (2) Thus, what is $\,PQ\,$ , with $\,P,Q\,$ Sylow subgroups of order $\,p,q\,$ respectively? (3) Thus, $\,PQ\cong P\times Q\,$ ... – DonAntonio Oct 17 '12 at 17:05
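Spelling out DonAntonio's Sylow hint as a short derivation (a sketch under the stated hypothesis $p \nmid q-1$; this fills in the comment, it is not part of the original answer):

```latex
\begin{align*}
n_q &\equiv 1 \pmod q, \quad n_q \mid p, \quad p < q
      &&\Longrightarrow\quad n_q = 1,\\
n_p &\equiv 1 \pmod p, \quad n_p \mid q, \quad p \nmid (q-1)
      &&\Longrightarrow\quad n_p = 1.
\end{align*}
% Both Sylow subgroups P, Q of G/Z(G) are normal, intersect trivially, and
% have coprime orders p and q, hence
%   G/Z(G) \cong P \times Q \cong \mathbb{Z}_p \times \mathbb{Z}_q
%          \cong \mathbb{Z}_{pq},
% i.e. G/Z(G) is cyclic and non-trivial -- contradicting the lemma.
```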
|
|
# Description of problem
I’m playing with voltage- and spike-timing-dependent plasticity, aka the Clopath et al. model. While LTD has a spike-train term with delta functions and can be moved to the on_pre pathway, LTP involves only continuous variables and should be computed in a clock-driven manner. Both LTP and LTD use linear rectification of the voltage and of the low-pass-filtered voltage.
\frac{dw_{ij}}{dt} = -A^{-}S_j(t)\lfloor \bar{u}^-_i - \theta_{-}\rfloor_{+}+ A^+\bar{x}_j \lfloor \bar{u}^+_i - \theta_{-}\rfloor_{+} \lfloor v_i - \theta_{+}\rfloor_{+}
\begin{array}{rlrl} du^-_i/dt &=\left(v-u^-_i \right)/\tau^- & \tau^-&=10 \text{ms} \\ du^+_i/dt &=\left(v-u^+_i \right)/\tau^+ & \tau^+ &=7 \text{ms} \\ d\bar{x}_i/dt &=\left(S_i(t)-\bar{x}_i \right)/\tau^x & \tau^x &=15 \text{ms} \\ \end{array}
where w_{ij} is a synaptic conductance from j^{th} presynaptic neuron to i^{th} postsynaptic. v is a voltage and S(t) = \sum \delta(t-t') is a spike train.
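To make the update rule concrete, here is a minimal plain-Python forward-Euler sketch for a single synapse (an illustration only, not the Brian2 implementation below; all parameter values are made up, and the delta functions in $S_j(t)$ are handled as discrete events):

```python
def relu(x):
    """Linear rectification, the floor(.)_+ bracket in the equations above."""
    return x if x > 0.0 else 0.0

def clopath_step(state, v, pre_spike, dt=0.1,
                 A_minus=1e-4, A_plus=1e-4,
                 theta_minus=-70.0, theta_plus=-45.0,
                 tau_minus=10.0, tau_plus=7.0, tau_x=15.0):
    """One Euler step (times in ms) of the voltage-based plasticity rule."""
    w, u_minus, u_plus, xbar = state
    # Continuous LTP term: A+ * xbar_j * relu(u+ - theta-) * relu(v - theta+)
    w += dt * A_plus * xbar * relu(u_plus - theta_minus) * relu(v - theta_plus)
    if pre_spike:
        # Event-driven LTD term from the delta functions in S_j(t)
        w -= A_minus * relu(u_minus - theta_minus)
        xbar += 1.0 / tau_x  # jump of the presynaptic trace
    # Low-pass filters of the postsynaptic voltage and the presynaptic trace
    u_minus += dt * (v - u_minus) / tau_minus
    u_plus  += dt * (v - u_plus)  / tau_plus
    xbar    -= dt * xbar / tau_x
    return (w, u_minus, u_plus, xbar)

# Hyperpolarized cell, no spike: both rectifiers gate plasticity off.
print(clopath_step((1.0, -80.0, -80.0, 0.5), v=-80.0, pre_spike=False)[0])  # 1.0
```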
To make code a little faster, I moved computations of low pass filtered voltages and ReLU functions to the neuron side. So, part of my neuron equations looks like that
nrn_equ="""
.... neuron dynamics ....
dvlp/dt = (v-vlp)/tlpf_p : 1 # low-pass filter of the voltage for LTD
dvlm/dt = (v-vlm)/tlpf_m : 1 # low-pass filter of the voltage for LTP
vup = int(v>Tltp)*(v-Tltp) : 1
vlpup = int(vlp>Tltd)*(vlp-Tltd) : 1
vlmup = int(vlm>Tltd)*(vlm-Tltd) : 1
"""
Synaptic model can be implemented like this
syn = Synapses(lgninput, neurons, model="""
dwsyn/dt = Altp*x*vup_post*vlpup_post/tausynscal: 1 (clock-driven)
dx/dt = -x/tlpf_s : 1 (clock-driven) # slow presynaptic FR
dy/dt = -y/tfrest : 1 (clock-driven) # low-pass filter for FR estimate
""",
on_pre='''
x += 1*ms/tlpf_s
wsyn -= Altd*(y*second/tfrest/frRef)*vlmup_post
.... synaptic dynamics .....
''',
on_post='''
y += 1*ms/tfrest''')
This code works and works well.
============================
However, to debug my model, I want to separate LTP and LTD components and record them. So I modified synaptic equations like this:
syn = Synapses(lgninput, neurons, model="""
dwsyn/dt = LTP/tausynscal: 1 (clock-driven)
dx/dt = -x/tlpf_s : 1 (clock-driven) # slow presynaptic FR
dy/dt = -y/tfrest : 1 (clock-driven) # low-pass filter for FR estimate
LTP=Altp*x*vup_post*vlpup_post : 1
LTD : 1
""",
on_pre='''
x += 1*ms/tlpf_s
LTD = Altd*(y*second/tfrest/frRef)*vlmup_post
wsyn -= LTD
.... synaptic dynamics .....''',
on_post='''
y += 1*ms/tfrest''')
This code returns an error, which I don’t understand
ERROR Brian 2 encountered an unexpected error. If you think this is a bug in Brian 2, please report this issue either to the discourse forum at <http://brian.discourse.group/>, or to the issue tracker at <https://github.com/brian-team/brian2/issues>. Please include this file with debug information in your report: /tmp/brian_debug_8n4izity.log Additionally, you can also include a copy of the script that was run, available at: /tmp/brian_script_35ik7dpv.py Thanks! [brian2]
Traceback (most recent call last):
File "PLS-cortex.py", line 448, in <module>
onenrnpres_srec = StateMonitor(syn,"wsyn x y LTP LTD".split(),record=synids,dt=mth["/rec/cont/dt"]*ms)
File "/home/rth/.local/lib/python3.8/site-packages/brian2/monitors/statemonitor.py", line 248, in __init__
File "/home/rth/.local/lib/python3.8/site-packages/brian2/core/variables.py", line 1829, in add_reference
File "/home/rth/.local/lib/python3.8/site-packages/brian2/core/variables.py", line 1768, in add_referred_subexpression
File "/home/rth/.local/lib/python3.8/site-packages/brian2/core/variables.py", line 1773, in add_referred_subexpression
File "/home/rth/.local/lib/python3.8/site-packages/brian2/core/variables.py", line 1821, in add_reference
raise TypeError(f"Cannot link variable '{name}' to '{varname}' in "
TypeError: Cannot link variable '___source_LTP_synapses_vup_post_synapses__vup_post_neurongroup_v' to '_vup_post_neurongroup_v' in group 'synapses' -- need to precalculate direct indices but index _postsynaptic_idx can change
It may be related to standalone OpenMP mode, I used to load all cores of my processor.
set_device('cpp_standalone')
Hi!
I guess the culprit is setting only some synapses to be recorded in conjunction with standalone mode:
record=synids
Can you try to record all synapses (only as a check)?
Cheers,
Sebastian
Hi Sebastian, thank you for looking into it!
It doesn’t seem that the problem in recording, but in caches somewhere.
I add this line of code:
os.system("rm -fR tmp __pycache__ /home/rth/.cython/brian_extensions/_cython_magic_* "+standalone_dir)
and the problem is gone.
However, it’s a bit odd, so maybe something else is going on here.
Hi Sebastian,
After half an hour of investigation I should admit that you were right, and I had overlooked a bug in my script.
It is related to synaptic recording.
Hi everyone,
huh, I don’t quite see how the cache plays a role here… The problem is a very technical one, and one that we could certainly handle better, but it is entangled deeply with the code generation machinery. Here’s a (minimal ?) example that generates the same error:
G = NeuronGroup(2, '''v : 1
x = 2*v : 1''')
S = Synapses(G, G, '''w : 1
y = w + x_post : 1''')
S.connect()
mon = StateMonitor(S, 'y', record=[0])
This fails with:
TypeError: Cannot link variable '___source_y_synapses_x_post_synapses__x_post_neurongroup_v' to '_x_post_neurongroup_v' in group 'synapses' -- need to precalculate direct indices but index _postsynaptic_idx can change
The problem is due to the use of the subexpression x from the postsynaptic cell as part of a subexpression in the synapse. Brian can handle these things, and its approach is using indices. E.g. here, we need the value of v[postsynaptic_idcs[record_idx]] when we loop over record_idx to record all the values we ask for. To make our code machinery not too complex, we try to limit this a bit, and pre-calculate the direct indices that we need. However, this errors out since it is worried about postsynaptic_idcs being “dynamic” – the StateMonitor wants to calculate these indices when it is initialized, but in particular in standalone mode this might not be possible since the postsynaptic indices have not been determined yet. Again, just to be clear, this is not handled well here and there are many situations (including yours, I think) where we could actually make this work. It is just not supported by the way things are done currently.
All that said, there are at least two workarounds (I’ll demonstrate them with my simple example, but hopefully they are easy to transfer).
1. Do not use subexpressions, but record things individually and reconstruct them afterwards:
# ...group definitions
S_mon = StateMonitor(S, 'w', record=[0])
G_mon = StateMonitor(S, 'x', record=S.j[0])
# Result is S_mon.w + G_mon.x
2. Use (constant over dt) for the subexpressions
G = NeuronGroup(2, '''v : 1
x = 2*v : 1 (constant over dt)''')
S = Synapses(G, G, '''w : 1
y = w + x_post : 1 (constant over dt)''')
Using constant over dt means that these are no longer subexpressions, but e.g. x = 2*v is replaced by x: 1 together with a run_regularly operation that sets x = 2*v at the beginning of the time step. When such a variable is referred in a differential equation, this is not exactly equivalent (e.g. rk4 will only use one value instead of several slightly different values), but in practice this shouldn’t be an issue if the variable changes slowly with respect to the simulation time step.
Actually, using record=True is not possible in standalone mode. The reason is similar to what I described above: StateMonitor wants to determine the indices of the synapses to record from, but in standalone mode the synapses do not exist yet.
@mstimberg, yes, it was my fault. I tried to debug and overlooked some flags in my script.
I see how it may work. Do you mean
G_mon = StateMonitor(G, 'x', record=S.j[0])
Sorry, yes. Alternatively (and more reasonable if you would otherwise record the same values several times), you could use
G_mon = StateMonitor(G, 'x', record=True)
and use the indices when you reconstruct the values, i.e. something like
y_values = S_mon.w + G_mon.x[S.j[:], :]
|