I am wondering about the best way to find a numerical solution to a finite general equilibrium model in Matlab. The model is a basic neoclassical growth model with equilibrium equations:
$c_t+k_{t+1}=k_t(1-\delta)+k_t^{\alpha}$ (budget constraint)
$c_t^{-\gamma}=\beta c_{t+1}^{-\gamma}(1-\delta+\alpha k_{t+1}^{\alpha-1})$ (Euler equation)
This is on a 500 time step time horizon. I tried to solve this using fsolve, but the program does not like the negative exponents in the expression. Is there another way in Matlab to program these? Also there are boundary conditions, let's just say they are $k_1=1$ and $k_{500}=0$.
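One way around the solver's trouble with the negative exponents is to substitute the budget constraint for consumption and multiply the Euler equation through by $c_t^{\gamma} c_{t+1}^{\gamma}$, so only positive powers of $c$ remain. Below is a minimal sketch of that reformulation in Python with SciPy's `fsolve` (the structure translates directly to Matlab's `fsolve`); the parameter values and the short horizon are illustrative, not taken from the question.

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative parameters (not from the question)
alpha, beta, gamma, delta = 0.36, 0.95, 2.0, 0.1
T = 10                        # short horizon for the sketch; the question uses 500
k_first, k_last = 1.0, 0.0    # boundary conditions k_1 and k_T

def residuals(k_inner):
    # Full capital path with the boundary values pinned down
    k = np.concatenate(([k_first], k_inner, [k_last]))
    kpos = np.maximum(k, 1e-9)   # guard: fractional powers need a positive base
    # Budget constraint defines consumption: c_t = k_t(1-delta) + k_t^alpha - k_{t+1}
    c = kpos[:-1] * (1 - delta) + kpos[:-1] ** alpha - k[1:]
    cpos = np.maximum(c, 1e-9)
    # Euler equation multiplied through by c_t^gamma * c_{t+1}^gamma, so that
    # no negative exponent of consumption appears:
    #   c_{t+1}^gamma = beta * c_t^gamma * (1 - delta + alpha * k_{t+1}^(alpha-1))
    mpk = 1 - delta + alpha * kpos[1:-1] ** (alpha - 1)
    return cpos[1:] ** gamma - beta * cpos[:-1] ** gamma * mpk

guess = np.linspace(k_first, k_last, T)[1:-1]   # interior capital stocks k_2..k_{T-1}
sol, info, ier, msg = fsolve(residuals, guess, full_output=True)
k_path = np.concatenate(([k_first], sol, [k_last]))
c_path = k_path[:-1] * (1 - delta) + k_path[:-1] ** alpha - k_path[1:]
```

The same rearrangement works in Matlab: write the Euler residual as `c(t+1)^gamma - beta*c(t)^gamma*(1-delta+alpha*k(t+1)^(alpha-1))` so the solver never has to evaluate a negative power of a possibly non-positive iterate.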
|
A circle can be thought of as the 2-D counterpart of a sphere. The total region enclosed by the boundary of the circle is the area of the circle; when we speak of the area of a circle, this enclosed region is what is meant.
When the radius, the diameter, or the circumference of the circle is given, we can use the formula below to find the area. The area is expressed in square units.
The area of a circle is given as
$\large Area\;of\;a\;circle: A=\pi \times r^{2}$
Solved example
Question: What is the radius of the circle whose area is 314.159 $cm^{2}$?
Solution:
Using the formula: $A=\pi \times r^{2}$
Substituting:
$314.159=\pi \times r^{2}$
$r^{2}=\frac{314.159}{\pi}$
$r^{2}\approx 100$
$r=\sqrt{100}$
$r=10\;cm$
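The same computation, using the full value of $\pi$, can be checked with a few lines of Python:

```python
import math

area = 314.159                 # given area in cm^2
r = math.sqrt(area / math.pi)  # invert A = pi * r^2
print(round(r, 3))             # approximately 10.0 cm
```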
|
An ellipsoid is a closed quadric surface that is a three-dimensional analogue of an ellipse. The standard equation of an ellipsoid centered at the origin of a Cartesian coordinate system, with semi-axes $a$, $b$ and $c$, is $\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}+\frac{z^{2}}{c^{2}}=1$; for an ellipsoid in general position, the spectral theorem can be used to obtain a standard equation akin to this one.
The volume formula of an ellipsoid is given below:
\[\large V=\frac{4}{3}\pi\,a\,b\,c\]
or the formula can also be written as:
\[\large V=\frac{4}{3}\pi\,r1\,r2\,r3\]
Where,
r1 = first semi-axis of the ellipsoid
r2 = second semi-axis of the ellipsoid
r3 = third semi-axis of the ellipsoid

Volume of an Ellipsoid Formula solved example
Example: An ellipsoid has semi-axes a = 9 cm, b = 6 cm and c = 3 cm.
Find the volume of the ellipsoid.
Solution:
Given,
Semi-axis (a) = 9 cm, semi-axis (b) = 6 cm, semi-axis (c) = 3 cm
Using the formula: $V=\frac{4}{3}\pi\,a\,b\,c$
$V=\frac{4}{3}\times\pi\times9\times6\times3=216\pi$
$V\approx678.58\,cm^{3}$
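As a quick numeric check of the example (using the exact value of $\pi$, which gives $216\pi \approx 678.58$):

```python
import math

a, b, c = 9.0, 6.0, 3.0                 # semi-axes in cm
V = (4.0 / 3.0) * math.pi * a * b * c   # V = (4/3) * pi * a * b * c = 216 * pi
print(round(V, 2))                      # approximately 678.58 cm^3
```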
|
As previous answers have stated, the wavelength (or frequency) and intensity of the beam are important, as well as the type and amount of impurities in the air. The beam must be of a wavelength that is visible to humans, and fog or dust scatters the light very strongly so that you can see it. However, even in pure, clean air, you will be able to see a laser beam under certain conditions.
This is because light can scatter from air molecules themselves via Rayleigh scattering. Rayleigh scattering has a strong inverse dependence on wavelength, specifically $\lambda^{-4}$, so it will be easier to see with a green, and especially a blue, laser[1]. It also has a scattering angle dependence that goes like $1+\cos^2 \theta$, so it may be easier to see if your viewing angle is very close to the beam[2].
With a 5mW green laser pointer, Rayleigh scattering is pretty easy to see. I imagine it would be even easier with blue/violet, but I'm not sure, since human eyes are most sensitive at green, so that may tip the balance. A more intense beam, like those used at night clubs or laser light shows, would be very easy to see if the beam were held still, but in those situations the beams are moving around rapidly to produce the light show, so Rayleigh scattering alone wouldn't really let you see much. In situations like night clubs, the scattering from fog produced by fog machines is much more important.
You are correct that, in space, because there is no atmosphere and nothing to scatter off of, you wouldn't see any sort of laser beam.
1: This is also why the sky is blue, incidentally.
2: DO NOT EVER TRY TO TEST THIS WITH A BEAM POINTED TOWARDS YOU. If you want to try this out, take a laser pointer, hold it near your head (e.g. against your temple), and point it away from you, in the dark.
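The $\lambda^{-4}$ dependence quoted above can be made concrete: relative to a red pointer, Rayleigh scattering per molecule is a few times stronger for green and blue beams. A quick sketch (the wavelengths are typical pointer values, not taken from the answer):

```python
def rayleigh_relative(lambda_nm, reference_nm=650.0):
    """Rayleigh scattering intensity relative to a reference wavelength,
    using the lambda^-4 dependence (geometry and beam power held fixed)."""
    return (reference_nm / lambda_nm) ** 4

print(rayleigh_relative(532.0))   # green vs. red: ~2.2x stronger
print(rayleigh_relative(445.0))   # blue  vs. red: ~4.6x stronger
```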
|
Hello, I've never ventured into chat before but cfr suggested that I ask in here about a better name for the quiz package that I am getting ready to submit to ctan (tex.stackexchange.com/questions/393309/…). Is something like latex2quiz too audacious?
Also, is anyone able to answer my questions about submitting to ctan, in particular about the format of the zip file and putting a configuration file in $TEXMFLOCAL/scripts/mathquiz/mathquizrc
Thanks. I'll email first, but it sounds like a flat file with a TDS included is the right approach. (There are about 10 files for the package proper and the rest are for the documentation -- all of the images in the manual are auto-generated from "example" source files. The zip file is also auto-generated so there's no packaging overhead...)
@Bubaya I think luatex has a command to force “cramped style”, which might solve the problem. Alternatively, you can lower the exponent a bit with f^{\raisebox{-1pt}{$\scriptstyle(m)$}} (modify the -1pt if need be).
@Bubaya (gotta go now, no time for followups on this one …)
@egreg @DavidCarlisle I already tried to avoid ascenders. Consider this MWE:
\documentclass[10pt]{scrartcl}
\usepackage{lmodern}
\usepackage{amsfonts}
\begin{document}
\noindent If all indices are even, then all $\gamma_{i,i\pm1}=1$.
In this case the $\partial$-elementary symmetric polynomials specialise to those from at $\gamma_{i,i\pm1}=1$, which we recognise as the ordinary elementary symmetric polynomials $\varepsilon^{(n)}_m$.
The induction formula from indeed gives
\end{document}
@PauloCereda -- okay. poke away. (by the way, do you know anything about glossaries? i'm having trouble forcing a "glossary" that is really an index, and should have been entered that way, into the required series style.)
@JosephWright I'd forgotten all about it but every couple of months it sends me an email saying I'm missing out. Oddly enough facebook and linked in do the same, as did research gate before I spam filtered RG:-)
@DavidCarlisle Regarding github.com/ho-tex/hyperref/issues/37, do you think that \textNFSSnoboundary would be okay as name? I don't want to use the suggested \textPUnoboundary as there is a similar definition in pdfx/l8uenc.def. And textnoboundary isn't imho good either, as it is more or less only an internal definition and not meant for users.
@UlrikeFischer I think it should be OK to use @, I just looked at puenc.def and for example \DeclareTextCompositeCommand{\b}{PU}{\@empty}{\textmacronbelow}% so @ needs to be safe
@UlrikeFischer that said I'm not sure it needs to be an encoding specific command, if it is only used as \let\noboundary\zzznoboundary when you know the PU encoding is going to be in force, it could just be \def\zzznoboundary{..} couldn't it?
@DavidCarlisle But puarenc.def is actually only an extension of puenc.def, so it is quite possible to do \usepackage[unicode]{hyperref}\input{puarenc.def}. And while I used a lot @ in the chess encodings, since I saw you do \input{tuenc.def} in an example I'm not sure if it was a good idea ...
@JosephWright it seems to be the day for merge commits in pull requests. Does github's "squash and merge" make it all into a single commit anyway so the multiple commits in the PR don't matter or should I be doing the cherry picking stuff (not that the git history is so important here) github.com/ho-tex/hyperref/pull/45 (@UlrikeFischer)
@JosephWright I really think I should drop all the generation of README and ChangeLog in html and pdf versions; it failed there as the xslt is version 1 and I've just upgraded to a version 3 engine, and it's dropped 1.0 compatibility :-)
|
(Sorry was asleep at that time but forgot to log out, hence the apparent lack of response)
Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE question for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
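Concretely, phase difference $= k \times$ path difference $= \frac{2\pi}{\lambda} \times$ path difference. A minimal sketch with illustrative values:

```python
import math

wavelength = 500e-9         # 500 nm, illustrative
path_diff = 250e-9          # half a wavelength
k = 2 * math.pi / wavelength
phase_diff = k * path_diff  # = pi, i.e. destructive interference
```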
|
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for
@JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default?
@JosephWright anyway, if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever
I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font.
@DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma).
@egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge.
@barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually)
@barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording?
@barbarabeeton the \overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash that did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us.
@DavidCarlisle -- okay. are you sure the \smash isn't involved? i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.)
@barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead but it still overprinted when in the \ialign construct but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow)
if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod), but the submitter wasn't very specific about the intent.
@egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended.
@barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really
@DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts.
@DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ...
@DavidCarlisle I see no real way out. The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts.
MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located. The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers...
has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the Word document editable?
I'm not familiar with word, so I'm not sure if there are things there that would just get goofed up or something.
@baxx never use word (have a copy just because but I don't use it;-) but have helped enough people with things over the years, these days I'd probably convert to html with latexml or tex4ht, then import the html into word and see what comes out
You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit, but any text editor will do for that. Given

x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}

make a small html file that looks like <!...
@baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but they are extremes but the thing is you just never know, you may see a simple article class document that uses no hard looking packages then get half way through and find \makeatletter several hundred lines of trick tex macros copied from this site that are over-writing latex format internals.
|
I recently replaced the Lambertian BRDF in my path-tracer with Oren-Nayar, under the assumption that I could adjust it to use the GGX distribution model with appropriate masking/shadowing.
PBR suggests this isn't viable, though - Oren-Nayar is formulated without any clear $D$/$F$/$G$ parameters, and a note under the Torrance-Sparrow model states that "one of the nice things about the Torrance-Sparrow model is that
the derivation doesn't depend on the particular microfacet distribution being used" (implying this isn't the case for Oren-Nayar).
If Oren-Nayar
is extensible, how would I do that? I suspect I could replace my $\sigma$ values with an NDF taking some $\alpha$ parameter (effectively convolving against the implicit Gaussian in the Oren-Nayar definition), and multiply the evaluated Oren-Nayar function against $G$ to capture masked segments of $dA$, but won't this clash with assumptions made by the derivation? PBR states that the function natively accounts for Gaussian-distributed masking, so applying another function over the top should result in over-darkening...
If it isn't extensible, can I adjust Torrance-Sparrow instead? Locking $F(\omega_o)$ to $1$ should remove Fresnel effects, and the specular assumption can be negated by extracting the $d\omega_h = \frac{d\omega_o}{4\cos\theta_o}$ relationship to create the modified BRDF $f(\omega_o, \omega_i) = \frac{D(\omega_h)G(\omega_o,\ \omega_i)d\omega_h}{d\omega_o\cos\theta_o}$
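For concreteness, here is a hedged sketch (not PBR's code; the GGX $D$ and Smith $G_1$ below are the commonly published forms) of evaluating the Torrance-Sparrow term with $F$ locked to $1$:

```python
import numpy as np

def ggx_D(cos_h, alpha):
    # GGX / Trowbridge-Reitz normal distribution function
    a2 = alpha * alpha
    denom = cos_h * cos_h * (a2 - 1.0) + 1.0
    return a2 / (np.pi * denom * denom)

def smith_G1(cos_v, alpha):
    # Smith masking term for GGX, one direction
    a2 = alpha * alpha
    return 2.0 * cos_v / (cos_v + np.sqrt(a2 + (1.0 - a2) * cos_v * cos_v))

def f_no_fresnel(wo, wi, n, alpha):
    # Torrance-Sparrow with F = 1: f = D(h) G(wo, wi) / (4 cos_o cos_i)
    h = wo + wi
    h = h / np.linalg.norm(h)
    cos_o, cos_i, cos_h = n @ wo, n @ wi, n @ h
    return ggx_D(cos_h, alpha) * smith_G1(cos_o, alpha) * smith_G1(cos_i, alpha) \
        / (4.0 * cos_o * cos_i)

n = np.array([0.0, 0.0, 1.0])
wo = np.array([0.0, 0.6, 0.8])    # unit vectors, illustrative directions
wi = np.array([0.0, -0.6, 0.8])
val = f_no_fresnel(wo, wi, n, alpha=0.3)
```

Whether this captures what Oren-Nayar's built-in Gaussian masking already accounts for is exactly the question above; the sketch only shows the $F=1$ specular-to-"rough mirror" degeneration, not a diffuse derivation.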
|
According to the KMV[1] model, the standard deviation for the default of the \(i\)th borrower, \(\sigma_{Di}\), is given by \(\sigma_{Di}=\sqrt{(EDF)(1-EDF)}\), where \(EDF\) is the expected default frequency.
Specifically, let \(p\) be the probability of complete repayment of the loan, and \((1-p)\) be the probability of default, or the expected default frequency (EDF).
Probability of repayment
A Bernoulli trial[2]: repayment (probability \(p\)) returns 1 and default (probability \(1-p\)) returns 0.
Expected outcome:
$$ E(\mbox{payoff}) = 1 \times p + 0 \times (1-p)=p$$
Standard deviation:
\begin{align*}\sigma & =\sqrt{(1-p)^2 \times p + (0-p)^2 \times (1-p)} \\
& =\sqrt{(1-2p + p^2) \times p + p^2 \times (1-p)} \\
& =\sqrt{(p-2p^2 + p^3) + (p^2 -p^3)} \\
& =\sqrt{p-p^2} \\
& =\sqrt{p(1-p)}
\end{align*}

Probability of default
A Bernoulli trial[3]: default (probability \(EDF\)) returns 1 and repayment (probability \(1-EDF\)) returns 0.
Expected outcome:
$$ E(\mbox{payoff}) = 1 \times EDF + 0 \times (1-EDF) = EDF $$
Standard deviation:
\begin{align*}\sigma & =\sqrt{(1-EDF)^2 \times EDF + (0-EDF)^2 \times (1-EDF)} \\
& =\sqrt{(1-2EDF + EDF^2) \times EDF + EDF^2 \times (1-EDF)} \\
& =\sqrt{(EDF-2EDF^2 + EDF^3) + (EDF^2 - EDF^3)} \\
& =\sqrt{EDF-EDF^2} \\
& =\sqrt{EDF(1-EDF)}
\end{align*}

Notes

1. The KMV model is named after Stephen Kealhofer, John McQuown and Oldrich Vasicek. The model was acquired by Moody's Analytics in 2002.
2. Saunders and Cornett (2010, p. 330)
3. KMV

References

Saunders A and M Cornett (2010), Financial institutions management: a risk management approach, 7th ed., McGraw-Hill.
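The formula is easy to sanity-check numerically: the standard deviation of the default indicator should match a direct variance computation. A small sketch with a made-up EDF value:

```python
import math

EDF = 0.02                       # illustrative expected default frequency
sigma = math.sqrt(EDF * (1 - EDF))

# Direct check: variance of an indicator that is 1 w.p. EDF and 0 w.p. 1-EDF
mean = 1 * EDF + 0 * (1 - EDF)
var = (1 - mean) ** 2 * EDF + (0 - mean) ** 2 * (1 - EDF)
```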
|
Question:
Let $L_1$ and $L_2$ be languages over the alphabet $\Sigma$. If $L_1 \cap L_2$ is decidable, then $L_1$ is decidable or $L_2$ is decidable (or they both are).
Definition of a decidable language: A language $L$ is decidable if there exists a Turing-machine $M$ such that on input $x$, $M$ accepts if $x \in L$; otherwise $M$ rejects $x$.
Proof:
By the definition of a decidable language, we know that there is a Turing-machine $M$ over $\Sigma$ that accepts input $x \in L_1 \cap L_2$ and rejects $x$ otherwise. This means we can construct another Turing-machine $M'$ such that $M'$ accepts $x \in \overline{L_1} \cup \overline{L_2}$, i.e. $M'$ accepts when $M$ rejects and rejects when $M$ accepts. We know that $L(M)$ and $L(M')$ are both decidable languages. Since $L(M) \cup L(M') = \Sigma^*$, $\Sigma^*$ is decidable, and since $L_1 \subseteq \Sigma^*$, $L_1$ must be a decidable language. By the same logic, $L_2 \subseteq \Sigma^*$, which means $L_2$ is also a decidable language.
|
8.2.3.2.1 - Minitab Express: 1 Sample Mean t Test, Raw Data
Research question: Is the mean GPA in the population different from 3.0?
Null hypothesis: \(\mu\) = 3.0
Alternative hypothesis: \(\mu\) ≠ 3.0
The GPAs of \(n = 226\) students are available.
A one sample mean \(t\) test should be performed because the shape of the population is unknown; however, the sample size is large (\(n \ge 30\)).
To perform a one sample mean \(t\) test in Minitab Express:

1. Open the Minitab data set
2. On a PC: select STATISTICS > One Sample > t; on a Mac: select Statistics > 1-Sample Inference > t
3. Double-click on the variable GPA to insert it into the Sample box
4. Check the box Perform a hypothesis test
5. For the Hypothesized mean enter 3
6. Click the Options tab
7. Use the default Alternative hypothesis of Mean ≠ hypothesized value
8. Use the default Confidence level of 95
9. Click OK
This should result in the following output:
N    Mean     StDev    SE Mean   95% CI for \(\mu\)
226  3.23106  0.51040  0.03395   (3.16416, 3.29796)

\(\mu\): mean of GPA

Null hypothesis \(H_0\): \(\mu\) = 3
Alternative hypothesis \(H_1\): \(\mu\) ≠ 3

T-Value   P-Value
6.81      <0.0001
We could summarize these results using the five step hypothesis testing procedure:
We do not know if the population is normally distributed; however, the sample size is large (\(n \ge 30\)), so we can perform a one sample mean t test.
\(H_0: \mu = 3.0\)
\(H_a: \mu \ne 3.0\)
\(t (225) = 6.81\)
\(p < 0.0001\)
\(p \le \alpha\), reject the null hypothesis
There is evidence that the mean GPA in the population is different from 3.0.
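These figures can be reproduced from the summary statistics alone; a sketch in Python with SciPy (the numbers are the ones from the Minitab output above):

```python
from math import sqrt
from scipy.stats import t as t_dist

n, xbar, s, mu0 = 226, 3.23106, 0.51040, 3.0

se = s / sqrt(n)                                  # standard error of the mean
t_stat = (xbar - mu0) / se                        # one sample t statistic
p_value = 2 * t_dist.sf(abs(t_stat), df=n - 1)    # two-sided p-value, df = n - 1

print(round(se, 5), round(t_stat, 2))             # 0.03395 and 6.81, matching the output
```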
|
Let $f$ be a real-valued Lebesgue measurable function on $\mathbb{R}$. Prove that there exist Borel measurable functions $g$ and $h$ such that $g(x)=h(x)$ almost everywhere and $g(x)\le f(x) \le h(x)$ for every $x \in \mathbb R$. I know that $f$ is measurable since there exists a sequence of simple functions that converges to $f$. I have no further idea how to tackle this problem.
It seems that this is in fact not possible: there exists a measurable function $f:\mathbb R\to\mathbb R$ for which one cannot find any Borel function $h:\mathbb R\to\mathbb R$ such that $f\leq h$ everywhere.
Here is a (hopefully correct) example. Since my previous answer was wrong, I would be glad to know if this one is OK...
Take a perfect set $K\subseteq\mathbb R$ with Lebesgue measure $0$. The family of all perfect subsets of $K$ has the cardinality of the continuum; so (using the axiom of choice), one can enumerate the perfect subsets of $K$ as $(L_\alpha)_{\alpha<\mathfrak c}$. Then, one can define by transfinite induction a family $(C_\alpha)_{\alpha<\mathfrak c}$ of countable (infinite) sets such that $C_\alpha\subseteq L_\alpha$ and $C_\alpha\bigcap \left(\bigcup_{\beta<\alpha} C_\beta\right)=\emptyset$ for all $\alpha<\mathfrak c$. Indeed, if the $C_\beta$ have been found for all $\beta<\alpha$, then the set $Z_\alpha:=\bigcup_{\beta<\alpha} C_\beta$ has cardinality less than $\mathfrak c$, so $L_\alpha\setminus Z_\alpha$ is infinite because $L_\alpha$ has cardinality $\mathfrak c$, and hence $L_\alpha\setminus Z_\alpha$ contains a countable infinite set $C_\alpha$.
Now, enumerate each $C_\alpha$ as $C_\alpha=\{ x_{\alpha,n};\; n\in\mathbb N\}$, and define a function $f:\mathbb R\to\mathbb R$ as follows: $f(x_{\alpha,n}):=n$ for all $\alpha,n$, and $f(x)\equiv0$ outside $\bigcup_{\alpha<\mathfrak c} C_\alpha$. This is a measurable function because $f(x)\equiv 0$ outside $K$ (and hence almost everywhere). On the other hand, the function $f$ has the following property: it is not bounded above on any perfect subset of $K$.
Assume that one can find a Borel function $h:\mathbb R\to\mathbb R$ such that $f\leq h$. Since $h$ is real-valued, we have $K=\bigcup_{n\in\mathbb N} B_n$, where $B_n:=\{ x\in K;\; h(x)\leq n\}$. Since $K$ is uncountable, at least one of the sets $B_n$ must be uncountable; and since $B_n$ is a Borel set (because $h$ is Borel), it must contain some perfect set $L$ (by a well known theorem). Then $h$ is bounded above on $L$, a contradiction since $h\geq f$ and $f$ is not bounded above on $L$.

Warning. What follows does not provide an answer to the question.
First, assume that $f$ is an indicator function, say $f=\mathbf 1_A$ where $A$ is a measurable. You can find Borel sets $B,C$ such that $B\subseteq A\subseteq C$ and $C\setminus B$ has measure $0$; then take $g:= \mathbf 1_B$ and $h:=\mathbf 1_C$.
Next, assume that $f$ is a simple function, say $f=\sum_{i=1}^N \alpha_i\mathbf 1_{A_i}$, where the sets $A_i$ are measurable and $\alpha_i\in\mathbb R$. By the first case, one can choose Borel functions $g_i$ and $h_i$ such that $g_i=h_i$ almost everywhere and $g_i\leq \mathbf 1_{A_i}\leq h_i$ if $\alpha_i\geq 0$, whereas $h_i\leq \mathbf 1_{A_i}\leq g_i$ if $\alpha_i<0$. Then take $g:=\sum_i\alpha_i g_i$ and $h:=\sum_i\alpha_i h_i$.
Finally, assume that $f$ is an arbitrary measurable function. One can find a sequence of simple measurable functions $(f_n)$ such that $f_n\to f$ everywhere. For each $n$, choose Borel functions $g_n$ and $h_n$ such that $g_n\leq f_n\leq h_n$ and $g_n=f_n=h_n$ almost everywhere. Now define $\widetilde g:=\limsup g_n$ and $\widetilde h:=\liminf h_n$. These are Borel extended real-valued functions, and $\widetilde g\leq f\leq\widetilde h$ because $g_n\leq f_n\leq h_n$ for all $n$ and $f_n\to f$. Moreover, we have $\widetilde g=f=\widetilde h$ almost everywhere because $g_n=f_n=h_n$ almost everywhere for each $n\in\mathbb N$ and $f_n\to f$. Now, set $E:=\{ \widetilde g=-\infty\}\cup\{\widetilde h=+\infty\}$, a Borel set of measure $0$ because $f$ is real-valued and $\widetilde g=f=\widetilde h$ a.e. Define $g:=\widetilde g$ on $\mathbb R\setminus E$ and $g:=0$ on $E$; and likewise $h:=\widetilde h$ on $\mathbb R\setminus E$ and $h:=0$ on $E$.
Unfortunately, this doesn't work since, as pointed out by @quartermind, one does not know that $g\leq f\leq h$ on $E$.
|
Since this answer grew slowly, let's start with the result: the same image once scaled to .3\linewidth and once scaled to \linewidth. The first section talks about the idea while the implementation is at the very bottom.
Source: image.tikz is at the bottom of this post.
\documentclass[]{scrreprt}
\usepackage{standalone}
\usepackage{tikz}
\usepackage{tikzscale}
\usetikzlibrary{calc}
\begin{document}
\begin{figure}
%\includestandalone[width=0.9\linewidth]{subfiles/TopView}
\includegraphics[width=.3\linewidth]{image}
\caption{Top view}\label{fig:topView}
\end{figure}
\includegraphics[width=\linewidth]{image}
\end{document}
There is a package tikzscale which does most of what you ask. From the documentation:
So although the original tikzpicture itself has the width of a
complete line, it gets proportionally scaled down to half the width
while being loaded from the \includegraphics command. Neither the
line’s thickness nor the text center are scaled, [...]
If you use your graphics in documents with different font sizes though, be aware that the units em and ex change; thus their use might get you unwanted results if you mix them with absolute units.
I usually have one file with the headers and the preview package, which I use to render the image standalone. I use \input to include the different images in that file.
\documentclass{report}
\usepackage{tikz}
\usepackage[margin=0cm,nohead]{geometry}
\usepackage[active,tightpage]{preview}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc} %input encoding
\usetikzlibrary{...}
\begin{document}
\input{image1}
%\input{image2}
\end{document}
So far I have not come around to using tikz externalize, which would leave you with pdfs automatically, and it is supported by tikzscale. This might make my preview wrapper unnecessary.
Note that you do need to render the tikz files for every document you scale them in, since you don't want to scale the font/linewidths.
Scaling
Regarding OP's comment: scaling as you requested cannot work from prerendered images, since in an image file all pixels are equal; there is no difference between fonts/line widths and line lengths. The only option I know of to rescale tikz images while keeping font sizes and line widths is tikzscale. tikzscale has some very strict requirements, one being that the image is in a file with the extension .tikz, so I renamed the second file accordingly. I then changed your main file to this:
\documentclass[]{scrreprt}
\usepackage{standalone}
\usepackage{tikz}
\usepackage{tikzscale}
\begin{document}
\begin{figure}
%\includestandalone[width=0.9\linewidth]{subfiles/TopView}
\includegraphics[width=.5\linewidth]{image}
\caption{Top view}\label{fig:topView}
\end{figure}
\end{document}
If you now run this you will get an error along the lines of:
Request to scale an unscalable image.
This comes from the fact that your image IS unscalable. Your image is based on a node with an \includegraphics. tikzscale only scales tikz coordinates. You then place nodes at the edges of the unscalable node holding text (text may not scale, so these don't scale either).
So how does one obtain a scalable image?
Add this as the last line of your picture:
\node at (2,2) {scaler};
You can try this method with different tikzpictures; then it should work.
Or as a workaround: place two coordinates as corner points of a rectangle (best with measurements in an absolute unit like cm), calculate the size of this and use \scalebox to include the pdf. This way, if tikzscale shifts the coordinates, your included pdf scales. Therefore your image is now scalable.
Here is an implementation of the workaround. It works as follows: with \pgfgettransformentries we obtain the current scale matrix and use the x- or y-scale to scale the included image via the graphicx scale option, i.e. \includegraphics[scale=\a]{image2.pdf}. Therefore the image size changes and the whole thing can be scaled.
\documentclass[tikz,convert=false]{standalone}
\usepackage{tikz, fp, tikz-3dplot}
\usetikzlibrary{calc, intersections, arrows, fixedpointarithmetic, decorations.markings}
\graphicspath{{../images/}}
\begin{document}
\begin{tikzpicture}[>=latex,line cap=round]
\pgfgettransformentries{\a}{\b}{\c}{\d}{\xtrans}{\ytrans}
\node[anchor=south west,inner sep=0] (image) at (0,0) {\includegraphics[scale=\a]{image2.pdf}};
\begin{scope}[x={(image.south east)},y={(image.north west)}]
\draw[help lines,xstep=.1,ystep=.1] (0,0) grid (1,1);
\foreach \x in {0,1,...,9} { \node [anchor=north] at (\x/10,0) {0.\x}; }
\foreach \y in {0,1,...,9} { \node [anchor=east] at (0,\y/10) {0.\y}; }
\node at (0.854,0.855) [rotate=-40]{$\vartheta^\prime$};
\node at (0.19,0.27) [rotate=40]{$f$};
\node at (0.055,0.605) [rotate=30]{$\delta$};
\node at (0.53,0.63) [rotate=40]{$R$};
\node at (0.15,0.2) [rotate=-40]{$l_\mathrm{c}$};
\draw[-latex, black] (0.8,0.1) -- (0.8,0.3) node[pos = .5, above, rotate = 90]{$\vec{u}$};
\draw[-latex, blue] (0.1,0.8) -- (0.3,0.8) node[pos = .5, above, rotate = 0]{$\vec{c}_\mathrm{m}$};
\end{scope}
\end{tikzpicture}
\end{document}
|
Even before quantization, charged bosonic fields exhibit a certain "self-interaction". The body of this post demonstrates this fact, and the last paragraph asks the question.
Notation/ Lagrangians
Let me first provide the respective Lagrangians and elucidate the notation.
I am talking about complex scalar QED with the Lagrangian $$\mathcal{L} = \frac{1}{2} D_\mu \phi^* D^\mu \phi - \frac{1}{2} m^2 \phi^* \phi - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$ Where $D_\mu \phi = (\partial_\mu + ie A_\mu) \phi$, $D_\mu \phi^* = (\partial_\mu - ie A_\mu) \phi^*$ and $F^{\mu \nu} = \partial^\mu A^\nu - \partial^\nu A^\mu$. I am also mentioning usual QED with the Lagrangian $$\mathcal{L} = \bar{\psi}(iD_\mu \gamma^\mu-m) \psi - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$ and "vector QED" (U(1) coupling to the Proca field) $$\mathcal{L} = - \frac{1}{4} (D^\mu B^{* \nu} - D^\nu B^{* \mu})(D_\mu B_\nu-D_\nu B_\mu) + \frac{1}{2} m^2 B^{* \nu}B_\nu - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$
The four-currents are obtained from Noether's theorem. Natural units $c=\hbar=1$ are used. $\Im$ means imaginary part.
Noether currents of particles
Consider the Noether current of the complex scalar $\phi$ $$j^\mu = \frac{e}{m} \Im(\phi^* \partial^\mu\phi)$$ Introducing local $U(1)$ gauge we have $\partial_\mu \to D_\mu=\partial_\mu + ie A_\mu$ (with $-ieA_\mu$ for the complex conjugate). The new Noether current is $$\mathcal{J}^\mu = \frac{e}{m} \Im(\phi^* D^\mu\phi) = \frac{e}{m} \Im(\phi^* \partial^\mu\phi) + \frac{e^2}{m} |\phi|^2 A^\mu$$ Similarly for a Proca field $B^\mu$ (massive spin 1 boson) we have $$j^\mu = \frac{e}{m} \Im(B^*_\nu(\partial^\mu B^\nu-\partial^\nu B^\mu))$$ which by the same procedure leads to $$\mathcal{J}^\mu = \frac{e}{m} \Im(B^*_\nu(\partial^\mu B^\nu-\partial^\nu B^\mu))+ \frac{e^2}{m} |B|^2 A^\mu$$
Similar $e^2$ terms also appear in the Lagrangian itself as $e^2 A^2 |\phi|^2$. On the other hand, for a bispinor $\psi$ (spin 1/2 massive fermion) we have the current $$j^\mu = \mathcal{J}^\mu = e \bar{\psi} \gamma^\mu \psi$$ since it does not contain any $\partial_\mu$.
"Self-charge"
Now consider very slowly moving or even static particles, we have $\partial_0 \phi, \partial_0 B \to \pm im\phi, \pm im B$ and the current is essentially $(\rho,0,0,0)$. For $\phi$ we have thus approximately $$\rho = e (|\phi^+|^2-|\phi^-|^2) + \frac{e^2}{m} (|\phi^+|^2 + |\phi^-|^2) \Phi$$ Where $A^0 = \Phi$ is the electrostatic potential and $\phi^\pm$ are the "positive and negative frequency parts" of $\phi$ defined by $\partial_0 \phi^\pm = \pm im \phi^\pm$. A similar term appears for the Proca field.
For the interpretation let us pass back to SI units, in this case we only get a $1/c^2$ factor. The "extra density" is $$\Delta \rho = e\cdot \frac{e \Phi}{mc^2}\cdot |\phi|^2$$ That is, there is an extra density proportional to the ratio of the energy of the electrostatic field $e \Phi$ and the rest mass of the particle $mc^2$. The sign of this extra density is dependent only on the sign of the electrostatic potential and both frequency parts contribute with the same sign (which is superweird). This would mean that
classically, the "bare" charge of bosons in strong electromagnetic fields is not conserved; only this generalized charge is.
After all, it seems a bad convention to call $\mathcal{J}^\mu$ the electric charge current. By multiplying it by $m(c^2)/e$ it becomes a matter density current with the extra term corresponding to mass gained by electrostatic energy. However, that does not change the fact that the "bare charge density" $j^0$ seems not to be conserved for bosons.
Now to the questions:
1. On a theoretical level, is charge conservation at least temporarily or virtually violated for bosons in strong electromagnetic fields? (Charge conservation will quite obviously not be violated in the final S-matrix, and as an $\mathcal{O}(e^2)$ effect it will probably not be reflected in first order processes.)
2. Is there an intuitive physical reason why such a violation does not occur for fermions even on a classical level?
3. Charged bosons do not have a high abundance in fundamental theories, but they do often appear in effective field theories. Is this "bare charge" non-conservation anyhow reflected in them, and does it have associated experimental phenomena?
4. Extra clarifying question: Say we have $10^{23}$ bosons with charge $e$ so that their charge is $e 10^{23}$. Now let us bring these bosons from very far away to very close to each other. As a consequence, they will be in a much stronger field $\Phi$. Does their measured charge change from $e 10^{23}$? If not, how do the bosons compensate in terms of $\phi, B, e, m$? If this is different for bosons rather than fermions, is there an intuitive argument why?
This post imported from StackExchange Physics at 2015-06-09 14:50 (UTC), posted by SE-user Void
|
Young's modulus is defined as,
$Y = \frac{\sigma}{\epsilon}$
where $\sigma$ is the stress, defined as $\sigma = F/A$, and $\epsilon$ is the strain, defined as $\epsilon = \frac{\Delta L}{L_0}$.
In this case we require the stress, thus rearranging the first equation gives,
$\sigma = \epsilon Y$
After plugging in the definition of $\epsilon$ we then arrive at,
$\sigma = \frac{\Delta L}{L_0}Y$
Now as you already worked out, $\Delta L = L_0\alpha\Delta T$. Therefore after plugging this in, we arrive at the required result:
$\sigma = \frac{L_0\alpha\Delta T}{L_0}Y = \alpha\Delta T Y$
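As a quick numeric illustration of this result (the material constants below are illustrative, roughly those of steel, and not taken from the question):

```python
# Thermal stress sigma = alpha * dT * Y, as derived above.
# Material values are illustrative (roughly steel), not from the question.
alpha = 12e-6   # linear thermal expansion coefficient, 1/K
dT = 50.0       # temperature change, K
Y = 200e9       # Young's modulus, Pa

strain = alpha * dT   # epsilon = Delta L / L0 = alpha * dT (dimensionless)
sigma = strain * Y    # stress in Pa
print(sigma / 1e6)    # stress in MPa
```

Note that $L_0$ cancels, so the stress is independent of the bar's length.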
|
I suspect there are a number of errors in the equations, but it's unclear because the entire idea behind all these transformations of coordinates is unclear.
Yes, the ellipse projected onto an oblique plane can be rotated and translated in 3-D space back onto a plane parallel to the circle. But what's the purpose of that? Or you could rotate and translate some ellipse in the parallel plane onto the projected ellipse, but how do you construct the correct ellipse in the parallel plane to begin with?
It seems to me a much simpler approach is to write out the equations of the cone and the plane in three dimensions ($x,$ $y,$ and $z$ coordinates) and solve the equations. If the circle is parallel to the $x,y$ plane of the first set of coordinates, it may be easiest to write the equations first in that system and then transform the coordinates (all three coordinates, not just $x$ and $y$) before solving the equations.
Here's an attempt via the second approach. I make a few assumptions based on an interpretation of the diagrams of the cone and the intersecting plane. Assume the coordinates of the point $O$ in all three coordinate systems are $(x^r_O,y^r_O,z^r_O) = (x^g_O,y^g_O,z^g_O) = (x^b_O,y^b_O,z^b_O) = (0,0,0).$ Assume the coordinates of $O'$ in the "red" system are $(x^r_{O'},y^r_{O'},z^r_{O'}) = (0,0,z^r_{O'}).$ Assume the circle $CC'$ has radius $R$ and is parallel to the red plane, so the "red" coordinates of points on that circle satisfy the simultaneous equations \begin{align}(x^r)^2 + (y^r)^2 &= R^2,\\ z^r &= z^r_{O'}.\end{align}
Now to find an equation of the cone whose vertex is at $P$ and whose sides pass through the circle $CC',$ consider an arbitrary cross-section of the cone parallel to the red plane. The cross-section is a circle with center on the line $PO'$ and radius proportional to the distance from the parallel plane through $P.$ In particular, the center of the cross-section has "red" coordinates $(x^r,y^r,z^r) = \left(h(z^r - z^r_{O'}), k(z^r - z^r_{O'}), z^r \right)$ where $$h = \frac{x^r_P}{z^r_P - z^r_{O'}} \quad\text{and}\quad k = \frac{y^r_P}{z^r_P - z^r_{O'}},$$ and the radius is $\left\lvert\dfrac{z^r_P - z^r}{z^r_P - z^r_{O'}}\right\rvert R.$ The equation of the cone in "red" coordinates is therefore $$\left(x^r - h(z^r - z^r_{O'})\right)^2 + \left(y^r - k(z^r - z^r_{O'})\right)^2 = \left(\frac{z^r_P - z^r}{z^r_P - z^r_{O'}} R\right)^2. \tag1$$
Now to find the equation in "blue" coordinates, we need to work out the conversion of coordinates. The point with "blue" coordinates $(x^b,y^b,z^b)_b$ has "green" coordinates $$(x^g,y^g,z^g)_g = (x^b\cos\beta + z^b\sin\beta, y^b, -x^b\sin\beta + z^b\cos\beta)_g.$$ (This assumes that a small positive rotation angle $\beta$ would bring the positive $z$ axis of the blue plane closer to the positive $x$ axis of the green plane; if the positive direction of rotation is in the other direction, just reverse the sign of $\sin\beta$ in the formula.) The point with "green" coordinates $(x^g,y^g,z^g)_g$ has "red" coordinates $$(x^r,y^r,z^r)_r = (x^g, y^g\cos\alpha - z^g\sin\alpha, y^g\sin\alpha + z^g\cos\alpha)_r$$ (assuming the positive direction of rotation takes the positive $y$ axis toward the positive $z$ axis; if it goes the other way, reverse the sign of $\sin\alpha$).
Now suppose a point on the cone has "blue" coordinates $(x^b,y^b,z^b)_b.$ The "red" coordinates of that point, $(x^r,y^r,z^r)_r,$ have the formulas \begin{align}x^r &= x^g = x^b\cos\beta + z^b\sin\beta,\\[6pt]y^r &= y^g\cos\alpha - z^g\sin\alpha \\ &= y^b\cos\alpha - (-x^b\sin\beta + z^b\cos\beta)\sin\alpha \\ &= x^b\sin\beta\sin\alpha + y^b\cos\alpha - z^b\cos\beta\sin\alpha,\\[6pt]z^r &= y^g\sin\alpha + z^g\cos\alpha \\ &= y^b\sin\alpha + (-x^b\sin\beta + z^b\cos\beta)\cos\alpha \\ &= - x^b\sin\beta\cos\alpha + y^b\sin\alpha + z^b\cos\beta\cos\alpha.\end{align}
That is, the "red" coordinates of the point with "blue" coordinates $(x^b,y^b,z^b)_b$ are\begin{align}x^r &= a_{11}x^b + a_{13}z^b, \tag2\\y^r &= a_{21}x^b + a_{22}y^b + a_{23}z^b, \tag3\\z^r &= a_{31}x^b + a_{32}y^b + a_{33}z^b \tag4\end{align}where\begin{align} a_{11} &= \cos\beta, & & & a_{13} &= \sin\beta, \\ a_{21} &= \sin\beta\sin\alpha, & a_{22} &= \cos\alpha, & a_{23} &= -\cos\beta\sin\alpha,\\ a_{31} &= -\sin\beta\cos\alpha, & a_{32} &= \sin\alpha, & a_{33} &= \cos\beta\cos\alpha.\end{align}
If $(x^b,y^b,z^b)_b$ are the "blue" coordinates of a point on the cone, then the "red" coordinates of the same point must satisfy Equation $(1),$ above. That is, we can use Equations $(2),$ $(3),$ and $(4)$ to make substitutions for $x^r,$ $y^r,$ and $z^r$ in Equation $(1).$ The resulting equation is\begin{multline}\left(a_{11}x^b + a_{13}z^b - h(a_{31}x^b + a_{32}y^b + a_{33}z^b - z^r_{O'})\right)^2 \\+ \left(a_{21}x^b + a_{22}y^b + a_{23}z^b - k(a_{31}x^b + a_{32}y^b + a_{33}z^b - z^r_{O'})\right)^2 \\= \left(\frac{z^r_P - (a_{31}x^b + a_{32}y^b + a_{33}z^b)} {z^r_P - z^r_{O'}} R\right)^2. \tag5\end{multline}
But we are only interested in the intersection of the cone with the blue plane, where $z^b = 0.$ So we can substitute $z^b = 0$ in Equation $(5),$ with the result \begin{multline}\left(a_{11}x^b - h(a_{31}x^b + a_{32}y^b - z^r_{O'})\right)^2+ \left(a_{21}x^b + a_{22}y^b - k(a_{31}x^b + a_{32}y^b - z^r_{O'})\right)^2 \\= \left(\frac{z^r_P - (a_{31}x^b + a_{32}y^b)} {z^r_P - z^r_{O'}} R\right)^2. \end{multline}
Now, this may still look daunting, but everything in this equation except $x^b$ and $y^b$ is a known constant. You can multiply out the products and squares of the expressions in parentheses until everything is just individual terms, each of which is some kind of constant times$x^b,$ $y^b,$ $(x^b)^2,$ $(y^b)^2,$ or $x^b y^b.$Collect all the terms together on one side of the equation so that it looks like$$ Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 ,$$and then you can find the center, major axis, minor axis, and angle of the ellipse by following one of the procedures in the answers to
these questions:
Note that the center of the ellipse will not usually be at the same point as the projection of $O'$ onto the blue plane.
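As a numerical sanity check of Equations (1)–(5) (the angles, radius and vertex coordinates below are arbitrary test values, not from the question), one can generate points on the cone–plane intersection directly from the cone's generators and verify that they satisfy the blue-plane equation:

```python
import numpy as np

# Arbitrary test geometry (illustrative values only).
alpha, beta = 0.3, 0.5
R, zO = 2.0, 1.0                  # circle radius R and its height z^r_{O'}
P = np.array([0.4, -0.7, 5.0])    # cone vertex in "red" coordinates

# Blue -> red rotation matrix, rows (a_1j), (a_2j), (a_3j) from the derivation.
ca, sa, cb, sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
A = np.array([
    [cb,      0.0,  sb],
    [sb * sa, ca,  -cb * sa],
    [-sb * ca, sa,  cb * ca],
])

h = P[0] / (P[2] - zO)
k = P[1] / (P[2] - zO)

def cone_residual_blue(xb, yb):
    """Equation (5) with z^b = 0: zero iff (xb, yb) lies on the intersection."""
    xr, yr, zr = A @ np.array([xb, yb, 0.0])
    lhs = (xr - h * (zr - zO))**2 + (yr - k * (zr - zO))**2
    rhs = ((P[2] - zr) / (P[2] - zO) * R)**2
    return lhs - rhs

# Each generator through the vertex P and a circle point C meets the blue
# plane where z^b = 0; every such point must satisfy the residual equation.
for theta in np.linspace(0.0, 2 * np.pi, 7):
    C = np.array([R * np.cos(theta), R * np.sin(theta), zO])  # red coords
    Pb, Cb = A.T @ P, A.T @ C      # red -> blue is the transpose (orthogonal)
    t = Pb[2] / (Pb[2] - Cb[2])    # parameter where the line hits z^b = 0
    Q = Pb + t * (Cb - Pb)
    assert abs(cone_residual_blue(Q[0], Q[1])) < 1e-9
```

The residual could then be expanded symbolically (or sampled numerically) to extract the conic coefficients $A, B, C, D, E, F$.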
|
What is the "best LaTeX practices" for writing absolute value symbols? Are there any packages which provide good methods?
Some options include |x| and \mid x \mid, but I'm not sure which is best...
I have been using the code below, using \DeclarePairedDelimiter from the mathtools package.
Since I don't think I have a case where I don't want this to scale based on the parameter, I make use of Swap definition of starred and non-starred command so that the normal use will automatically scale, and the starred version won't:
If you want it the other way around, comment out the code between \makeatother...\makeatletter.
\documentclass{article}
\usepackage{mathtools}

\DeclarePairedDelimiter\abs{\lvert}{\rvert}%
\DeclarePairedDelimiter\norm{\lVert}{\rVert}%

% Swap the definition of \abs* and \norm*, so that \abs
% and \norm resizes the size of the brackets, and the
% starred version does not.
\makeatletter
\let\oldabs\abs
\def\abs{\@ifstar{\oldabs}{\oldabs*}}%
\let\oldnorm\norm
\def\norm{\@ifstar{\oldnorm}{\oldnorm*}}
\makeatother

\newcommand*{\Value}{\frac{1}{2}x^2}%

\begin{document}
  \[\abs{\Value}  \quad \norm{\Value}  \qquad\text{non-starred} \]
  \[\abs*{\Value} \quad \norm*{\Value} \qquad\text{starred}\qquad\]
\end{document}
Note if you just use | you get mathord spacing, which is different from the spacing you'd get from paired mathopen/mathclose delimiters or from \left/\right, even if \left/\right doesn't stretch the symbol. Personally I prefer the left/right spacing from mathinner here (even if @egreg says I'm generally wrong :-)
\documentclass{amsart}
\begin{document}
$ \log|x||y|b $
$ \log\left|x\right|\left|y\right|b $
$ \log\mathopen|x\mathclose|\mathopen|y\mathclose|b $
\end{document}
One can also use the commath package.
\documentclass{article}
\usepackage{commath}
\begin{document}
\[ \norm{a \vec{u}} = \abs{a} \, \norm{\vec{v}} \]
\end{document}
The physics LaTeX package also implements \abs and \norm:
\documentclass{article}
\usepackage{physics}
\begin{document}
  \[ c = \abs{-c} \]
  \[ \vu{a} = \frac{\vb{a}}{\norm{\vb{a}}} \]
\end{document}
A simple, LaTeX native way of doing this is by using the \| delimiter, with the standard \left and \right modifiers (source).
For example:
\left\| \sum_{i=1}^{n} x^2 \right\|
gives
For LyX users: maybe I have just overlooked how to do it correctly, but I couldn't find a way of doing this natively. I thus used a 1x1 matrix environment and set its kind to determinant. It might just be a hack, but it works fine in my use case.
|
Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s}$ = 7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity ($|y| < 0.8$) in the transverse momentum range $1 < p_T < 8$ GeV/c with the ALICE experiment at the CERN LHC in ...
Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC
(Elsevier, 2013-12)
The average transverse momentum $\langle p_T \rangle$ versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ...
Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(American Physical Society, 2013-12)
The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both, the rapidity odd ($v_1^{odd}$) and ...
Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2013-10)
Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ...
Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2013-03)
The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ...
Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE
(Springer, 2013-06)
Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ...
Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(American Physical Society, 2013-02)
The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ...
Mid-rapidity anti-baryon to baryon ratios in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV measured by ALICE
(Springer, 2013-07)
The ratios of yields of anti-baryons to baryons probes the mechanisms of baryon-number transport. Results for anti-proton/proton, anti-$\Lambda/\Lambda$, anti-$\Xi^{+}/\Xi^{-}$ and anti-$\Omega^{+}/\Omega^{-}$ in pp ...
Charge separation relative to the reaction plane in Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV
(American Physical Society, 2013-01)
Measurements of charge dependent azimuthal correlations with the ALICE detector at the LHC are reported for Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. Two- and three-particle charge-dependent azimuthal correlations ...
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC
(Springer, 2013-09)
We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
|
Can you provide proofs or counterexamples for the claims given below?
Inspired by Lucas-Lehmer-Riesel primality test I have formulated the following two claims:
First claim
Let $P_m(x)=2^{-m}\cdot((x-\sqrt{x^2-4})^m+(x+\sqrt{x^2-4})^m)$. Let $M= k \cdot b^{n}-1$ where $k$ is a positive natural number, $k<2^n$, $b$ is an even positive natural number and $n\ge3$. Let $a$ be a natural number greater than two such that $\left(\frac{a-2}{M}\right)=1$ and $\left(\frac{a+2}{M}\right)=-1$, where $\left(\frac{}{}\right)$ denotes the Jacobi symbol. Let $S_i=P_b(S_{i-1})$ with $S_0$ equal to $P_{kb/2}(P_{b/2}(a)) \bmod M$. Then $M$ is prime if and only if $S_{n-2} \equiv 0 \pmod{M}$.
You can run this test here .
Second claim
Let $P_m(x)=2^{-m}\cdot((x-\sqrt{x^2-4})^m+(x+\sqrt{x^2-4})^m)$. Let $N= k \cdot b^{n}+1$ where $k$ is a positive natural number, $k<2^n$, $b$ is an even positive natural number and $n\ge3$. Let $a$ be a natural number greater than two such that $\left(\frac{a-2}{N}\right)=-1$ and $\left(\frac{a+2}{N}\right)=-1$, where $\left(\frac{}{}\right)$ denotes the Jacobi symbol. Let $S_i=P_b(S_{i-1})$ with $S_0$ equal to $P_{kb/2}(P_{b/2}(a)) \bmod N$. Then $N$ is prime if and only if $S_{n-2} \equiv 0 \pmod{N}$.
You can run this test here .
I have tested these claims for many random values of $k$, $b$ and $n$, and there were no counterexamples.
REMARK
It is possible to reformulate these claims into more compact form:
Let $P_m(x)=2^{-m}\cdot((x-\sqrt{x^2-4})^m+(x+\sqrt{x^2-4})^m)$. Let $N= k \cdot b^{n}\pm 1$ where $k$ is a positive natural number, $k<2^n$, $b$ is an even positive natural number and $n\ge3$. Let $a$ be a natural number greater than two such that $\left(\frac{2-a}{N}\right)=\left(\frac{a+2}{N}\right)=-1$, where $\left(\frac{}{}\right)$ denotes the Jacobi symbol. Let $S_i=P_b(S_{i-1})$ with $S_0$ equal to $P_{kb/2}(P_{b/2}(a)) \bmod N$. Then $N$ is prime if and only if $S_{n-2} \equiv 0 \pmod{N}$.
GUI application that implements these tests can be found here .
A command line program that implements these tests can be found here .
|
Let’s say we have a current wire with a current $I$ flowing. We know there is a field of $B=\frac{\mu_0I}{2\pi r}$ by using Ampère's law, and a simple integration path which goes circularly around the wire. Now if we take a path of integration such that the surface it spans doesn’t intercept the wire, we trivially get $B=0$, which is obviously incorrect.
I see that I have essentially treated it as if there is no current even present. But a similar argument is used in other situations without fault.
Take for example a conducting cylinder with a hollow, cylindrical shaped space inside. By the same argument there is no field inside.
To further illustrate my point, the derivation of the B field inside of a solenoid requires you to intercept the currents. You can’t simply do the loop inside of the air gap.
This, at least to me, seems like the same thing, and I can’t justify why one is correct and the other is incorrect. Please point out why I am stupid.
|
I'm currently preparing for an exam in functional analysis, and I have a question about the extension of the spectral theorem for bounded self adjoint operators to bounded normal operators.
Starting point is the spectral theorem for bounded self adjoint operators: Let $T$ be a bounded self adjoint operator in an Hilbert space $X$, then there exists a unique spectral measure $E : \Sigma_\mathbb{R} \rightarrow B(X)$, which has compact support in $\mathbb{R}$ (Here $\Sigma_\mathbb{R}$ is the Borel-$\sigma$-algebra on $\mathbb{R}$ and $B(X)$ is the set of all bounded and linear operators in $X$) and $T = \int\limits_{\mathbb{R}}\lambda dE_\lambda$. Moreover the mapping $f \rightarrow f(T) := \int\limits_{\mathbb{R}} f(\lambda) dE_\lambda$, for bounded and measurable functions $f$, satisfies the conditions of the (unique) measurable functional calculus.
If a normal operator $T \in B(X)$ is given, one can define the operators $S_1 := \frac{1}{2} \left( T+T^{\ast} \right)$ and $S_2 := \frac{1}{2i} \left( T-T^{\ast} \right)$. Then we get that $T = S_1 + i S_2$ and that $S_1$ and $S_2$ are self adjoint. Then by the spectral theorem for self adjoint operators there exist two spectral measures $E^1$ and $E^2$. Since $T$ is normal, $S_1$ and $S_2$ commute, and therefore so do the spectral measures $E^1$ and $E^2$.
Then there exists a unique spectral measure $E : \Sigma_{\mathbb{R}^2} \rightarrow B(X)$ such that for all $A, B \in \Sigma_\mathbb{R}$ we have that $E(A \times B) = E^1(A)E^2(B)$. (See: Schmüdgen - Thm. 4.10)
By identifying $\mathbb{R}^2$ with $\mathbb{C}$ one gets a unique spectral measure $E : \Sigma_\mathbb{C} \rightarrow B(X)$ and is able to define integrals with respect to this spectral measure in the natural way: first for step functions and then for bounded measurable functions by approximation.
Now I have to show that $E$ has the same properties as the spectral measure for self adjoint operators, i.e.: $T = \int\limits_{\mathbb{C}} z dE_z$ and the mapping $f \rightarrow f(T) := \int\limits_{\mathbb{C}} f(z) dE_z$, for bounded and measurable functions $f$, satisfies the conditions of the (unique) measurable functional calculus.
My question now is: is there any other way to show this, besides redoing the proof of the spectral theorem for self adjoint operators? It's not that much work, once one has the proof of the self adjoint case. I'm just curious if there's a more elegant way ...
Thanks in advance, GordonFreeman
|
Value
1. INTRODUCTION
Value investing is a strategy that buys cheap stocks. Cheapness is measured by the ratio of the price to the fundamental value of the stock. An example, and probably the most commonly used value metric, is the book-to-market (BM) ratio - the higher the ratio, the cheaper the stock. The value strategy has a long tradition and dates back to the 1930s, when Graham and Dodd (1934) published the popular book “Security Analysis”. The study of Basu (1977) [1] marks the beginning of a more recent, extremely rich academic value literature. The seminal work of Fama and French (1992, 1993) [2] [3] and the corresponding 3-factor model have established value (and also size) as one of the key risk factors explaining cross-sectional equity returns. However, value investing is not limited to equity asset classes, as Asness et al. (2013) [4] show. In their study, they find value (and momentum) premia across several asset classes and markets. Moreover, these returns across asset classes exhibit strong co-movements for value (and momentum).
Importantly, value investing has also been a very successful investment strategy in practice over the last century. Most prominently, Warren Buffett’s success can, at least partly, be attributed to the value premium, as Frazzini, Kabiller and Pedersen (2013) [5] show (low-risk and quality are the other important factors according to the authors). Next to Buffett, many asset managers utilize value either as a standalone strategy or in combination with other factors, for example, momentum.
Like any other strategy, value investing is primarily a selection strategy (stock A vs. stock B) at any point in time, and it has to be differentiated from a value-oriented market timing (stocks vs. cash)/time-series value approach.
2. PRACTICAL IMPLEMENTATION
Private investors most easily gain access to value by simply buying passive value ETFs, which are cheaply available for most major equity indices. Most ETF providers offer cheap value solutions, mainly tracking the MSCI value indices.
Constructing a value portfolio involves the same logic as the other factors do, so we only give a brief overview of criteria that could be used.
A value portfolio similar to the one suggested by Fama and French (1993) [3] can be constructed by sorting stocks with respect to their book-to-market value. However, practitioners usually use a combination of several value measures. Let us take the MSCI value methodology as an example. MSCI value indices combine three measures, namely, the book-to-price ratio, the 12-months forward earnings-to-price ratio and the dividend yield. The overall value measure of a company is then the sum of the three z-scores. Z-scores are obtained by standardizing each of the variables ($\frac{x-\mu}{\sigma}$). Since most fundamental data contain some outliers, the MSCI methodology employs winsorization of the data. This means that values which exceed the 95th percentile (or fall below the 5th percentile) are replaced by the 95th (respectively the 5th) percentile value. The final index is then obtained by filling up the portfolio with the cheapest stocks starting from the top; once 50% of the free-float adjusted market capitalization is reached, no more stocks are added to the value index.
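A minimal sketch of this scoring step (the fundamental data below are made up for illustration; the winsorization percentiles follow the 5th/95th rule just described):

```python
import numpy as np

# Illustrative fundamental data for 6 hypothetical stocks (rows):
# columns = book-to-price, forward earnings-to-price, dividend yield.
X = np.array([
    [0.8, 0.07, 0.030],
    [0.3, 0.04, 0.010],
    [1.5, 0.09, 0.045],
    [0.5, 0.05, 0.020],
    [2.9, 0.15, 0.080],   # deliberate outlier row
    [0.9, 0.06, 0.025],
])

# Winsorize each column at the 5th/95th percentiles, as in the MSCI rule.
lo, hi = np.percentile(X, [5, 95], axis=0)
Xw = np.clip(X, lo, hi)

# Column-wise z-scores (x - mu) / sigma, then sum the three scores per stock.
z = (Xw - Xw.mean(axis=0)) / Xw.std(axis=0)
value_score = z.sum(axis=1)

# Higher composite score = cheaper stock; rank descending for selection.
ranking = np.argsort(-value_score)
print(ranking)
```

In a real index construction one would then walk down `ranking`, adding stocks until the free-float market-capitalization threshold is reached.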
More active investors aim to add (literally) more value by using more than just three dimensions and by selecting a more concentrated value portfolio, i.e. by including fewer stocks and/or applying a different weighting scheme. Another hint for improving classical value approaches can be found in Fama and French (2012) [6], which indicates that the value premium (and also momentum) is more pronounced for smaller stocks.
Another interesting paper is Asness and Frazzini (2013) [7] --- they show the importance of using unlagged price information when forming book-to-market ratios. Originally, Fama and French (1992) [2] form value portfolios in June of each year, based on B/M information prior to December 31st of the previous year. The results indicate that monthly updating of the price information delivers a significant increase in five-factor alphas in contrast to the yearly approach followed by the original study. This, of course, has practical implications: if monthly updating and rebalancing is truly more profitable, as the results suggest, one might ask whether an ETF based on semi-annual updates can fully capture the value premium.
For non-equity asset classes, where no fundamental data are available, value can be proxied as in Asness et al. (2013) [4] by using the negative of the 5-year return. The ranks of the worst performers, measured over a period of 5 years, show, in the case of equities, a high correlation with ranks based on BM ratios.
Value is an attractive stand-alone strategy. However, a combined factor strategy with quality or momentum delivers significant improvements, in particular, with respect to risk characteristics.
3. RISKS INVOLVED
Value investing can be difficult for short-term oriented investors, as the past has shown pronounced periods of depressed value returns. For example, prior to the bursting of the dot-com bubble at the beginning of the century, or during the financial crisis in 2008, value investors had to endure years of underperformance.
If the rational theory is right, value is painful when you already feel pain (see below). Hence, the risk you face is that, during bad times, you are hit particularly bad by being exposed to value risk.
4. THEORETICAL EXPLANATIONS
Ang (2014) [8] summarizes possible explanations for the existence of a positive value premium. Similar to other factors, there are rational and behavioral theoretical approaches.
The rational theory of the value effect boils down to the idea that all value stocks, or stocks with value characteristics, contain exposure to a systematic value factor, which cannot be diversified away, similar to the systematic equity market risk. A positive risk premium for the value factor can be justified when the value factor performs particularly badly during bad times of the average investor. In finance theory these bad times correspond to a state of the economy where a decline in asset prices is on average more painful than in good times (see the consumption example below). The value premium is simply a risk compensation for holding value during these periods. Zhang (2005) [9] formalizes the theory in a neoclassical framework. During bad times, value firms face the problem that their capital is less productive and adjustments to the capital stock are difficult. Hence, value firms perform worse during these periods. On the other hand, growth firms are more flexible, but face higher adjustment costs during booms. Therefore, they tend to underperform during booming markets. These characteristics are priced when investors have a concave utility function. This becomes clear when we apply the argument of marginal utility in a consumption-based asset pricing world: value performs worst when marginal utility is high (low consumption level) and growth performs worst when marginal utility is low (high consumption level). In more simplistic wording, value hurts when most already suffer from a negative economic environment.
One behavioral explanation assumes an overextrapolation of recent news. Strong earnings growth is assumed to persist, while deteriorating earnings probably continue to get worse. Hence, prices are formed on biased expectations. If future earnings turn out to disappoint in the case of growth stocks and, on the other hand, surprise positively for value stocks, then a price reversal occurs, driving the prices of previously “overpriced” and “underpriced” stocks to fundamental values.
If the value premium indeed boils down to behavioral defects, Ang (2014) [8] asks the right question: “why don’t more investors buy value stocks and, in doing so, push up their prices and remove the value premium,...”
5. SUMMARY
Value has historically been a successful investment approach, however at the price of pronounced periods of depressed returns. Hence, risk reduction plays the key role for value investors. One way to achieve it is the combination of value with other strategies, such as momentum and quality, as this helps to drastically reduce the length of depressed return periods and drawdown risk. Cost-effective (long only) value exposure can easily be gained through ETFs for the most important developed markets/regions; long-short value exposure is typically pricier and mainly offered by specialized active asset managers.
References
Investment performance of common stocks in relation to their price-earnings ratios: A test of the efficient market hypothesis, Basu, Sanjoy , The Journal of Finance, Volume 32, Number 3, p.663–682, (1977) The cross-section of expected stock returns, Fama, Eugene F., and French Kenneth R. , The Journal of Finance, Volume 47, Number 2, p.427–465, (1992) Common risk factors in the returns on stocks and bonds, Fama, Eugene F., and French Kenneth R. , Journal of Financial Economics, Volume 33, Number 1, p.3–56, (1993) Value and Momentum Everywhere, Asness, Clifford S., Moskowitz Tobias J., and Pedersen Lasse Heje , The Journal of Finance, Volume 68, Number 3, p.929–985, (2013) Buffett’s Alpha, Frazzini, Andrea, Kabiller David, and Pedersen Lasse H. , National Bureau of Economic Research, (2013) Size, value, and momentum in international stock returns, Fama, Eugene F., and French Kenneth R. , Journal of Financial Economics, Volume 105, p.457–472, (2012) The devil in HML’s details, Asness, Cliff, and Frazzini Andrea , The Journal of Portfolio Management, Volume 39, p.49-68, (2013) Asset Management: A Systematic Approach to Factor Investing, Ang, Andrew , (2014) The value premium, Zhang, Lu , The Journal of Finance, Volume 60, p.67–103, (2005)
|
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this on a classification of groups of order $p^2qr$. There order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know that whether we can know the structure of $H$ that can be present?
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be the Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
There is good motivation for such a definition here
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and an offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, it's all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get into some good masters program in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good things about Palka. Also, if you do not worry about a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the ODE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C(1+x)e^{-x}$, which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
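This can also be checked symbolically; here is a quick sketch of my own (using sympy), applying $F$ to a generic cubic:

```python
import sympy as sp

# Apply F(P) = x*P'' + (x+1)*P''' to a generic element of R_3[x]
# and read off when the result vanishes identically.
x, a, b, c, d = sp.symbols('x a b c d')
P = a*x**3 + b*x**2 + c*x + d
F = sp.expand(x*sp.diff(P, x, 2) + (x + 1)*sp.diff(P, x, 3))

# F equals 6a*x^2 + (6a + 2b)*x + 6a, which is zero iff a = b = 0,
# so ker(F) = {c*x + d}, as claimed.
print(F)
```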
|
Consider a quantum
query algorithm that takes as input $x \in \{0,1\}^n$. Denote by $X_i$ the variable that evaluates to $1$ on input $x$ if the $i$-th bit of $x$ is 1, and $-1$ otherwise. Let $X_{\vec{i}}$ be the product of the variables whose indexes are in $\vec{i}$ (for instance : $X_{\overrightarrow{4,1,3}} = X_4X_1X_3$).
The polynomial method relies on the fact that the state $|\phi_t(x)\rangle$ of the algorithm on input $x$ at time $t$ can be written as $|\phi_t(x)\rangle = \sum_u P_u(x) |u\rangle$, where $u$ ranges over all the basis states and the $P_u$'s are
multilinear polynomials of degree at most $t$ in the $n$ variables $X_1, \dots, X_n$. In other words, there exist complex numbers $(a_{u,\vec{i}})$ such that for all $x$:
$$|\phi_t(x)\rangle = \sum_{u,\vec{i}} a_{u,\vec{i}} \cdot X_{\vec{i}} \ |u\rangle$$
where $P_u(X) = \sum_{\vec{i}} a_{u,\vec{i}} \cdot X_{\vec{i}}$ (for the sake of simplicity, assume from now on that the $a_{u,\vec{i}}$'s are just real numbers).
Finally, define the L2-norm $||P||_2$ of a polynomial to be the sum of the squares of its coefficients, i.e. $||P_u||_2 = \sum_{\vec{i}} a_{u,\vec{i}}^2$.
My question relies on the following observation:
$$\mathbb{E}_x (\phi_t(x)^2) = 1 = \sum_{u,\vec{i}} a_{u,\vec{i}}^2 = \sum_u ||P_u||_2$$
Indeed, $\phi_t(x)^2 = 1$ for all $x$ (recall that we took real numbers instead of complex ones, so this expresses the fact that $|\phi_t(x)\rangle$ must have norm one). And $\mathbb{E}_x (X_{\vec{i}}) = 0$ whenever $X_{\vec{i}}$ is not the constant 1.
Thus the squares of the $a_{u,\vec{i}}$'s must sum to one! In fact, I derived this equality from this paper: https://arxiv.org/pdf/1411.5729v1.pdf (p. 32), but I did not find any other result in quantum computing that uses it. For instance, is there a connection with the acceptance probability of the algorithm (if $u_a$ is the accepting state, can we relate the acceptance probability with $||P_{u_a}||_2$?)
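The orthonormality fact underlying the observation — $\mathbb{E}_x[X_{\vec{i}} X_{\vec{j}}] = \delta_{\vec{i}\vec{j}}$ under the uniform distribution, so the mean square of any such multilinear polynomial equals the sum of its squared coefficients — can be sanity-checked numerically. The polynomial below is a random toy example of mine, not one arising from an actual algorithm:

```python
import itertools
import numpy as np

n = 3
rng = np.random.default_rng(0)
# one real coefficient per subset of {0, ..., n-1}, i.e. per monomial X_I
subsets = [s for r in range(n + 1) for s in itertools.combinations(range(n), r)]
coef = rng.normal(size=len(subsets))

def eval_poly(bits):
    X = [1 if b else -1 for b in bits]       # X_i = +1 if x_i = 1, else -1
    return sum(c * np.prod([X[i] for i in s]) for c, s in zip(coef, subsets))

# E_x[P(x)^2] equals the sum of squared coefficients (Parseval over {-1,1}^n)
mean_sq = np.mean([eval_poly(x)**2 for x in itertools.product([0, 1], repeat=n)])
print(mean_sq, np.sum(coef**2))
```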
|
I like the point-set topology definition of continuous function. It’s elegant, generalises well, and I think puts a bunch of things on firmer foundations than epsilon-delta definitions.
But it is also confusing to some people. Why open sets? Why is it that the preimage of an open set under a continuous function is open, rather than the image?
One way to fix this is to start with different but equivalent definitions of topological spaces. This is fine, but it’s a little unsatisfying. The open set formulation is widely used because it’s quite powerful. It would be nice to be able to make intuitive sense of it. Additionally, the same sort of definition crops up elsewhere – e.g. a measurable function is one where the pre-image of measurable sets are measurable.
So I’d like to give you some intuition as to why open sets make sense and why given that intuition the definition of the continuous function is the “obvious” one.
I suggest that the intuitive concept you should attach to an open set is that an open set is an approximate measurement.
What does this mean?
Well, first let me pin down what I mean by the words individually.
A “measurement” does not here mean something like “this rod is exactly 1.23 meters long”. “This rod is less than a mile long” or “this rod is between 1 and 2 meters long” are also measurements. “The length of this rod is no more than 100 times its diameter” is also a measurement. A measurement in this case is anything that helps you pin down the range of possible objects.
And “approximate” does not mean “I guessed”. It means “you do not need to know the exact value arbitrarily well in order to validate this measurement”. You can easily validate that the rod is between 1 and 2 meters long with a tape measure. You
can’t validate that it’s exactly 1.23 meters long with a tape measure (but you can validate that it’s not).
An approximate measurement is, more or less, one where you only need a finite amount of information to validate it.
Note that you might need an infinite amount of information to
refute it. If I tell you that the rod is less than one meter long and it turns out that the rod is exactly one meter long down to such a subquantum scale that it turns out we’re all living in a simulation of a platonic euclidean universe then you need to measure its length infinitely precisely in order to tell me I’m wrong – even if you measure it down to the nearest micron it might be half a micron short of one meter.
So this is our intuitive and imprecise definition of an open set: An open set is one where for any member of the set we can prove that it’s a member of that set with a finite amount of information.
This is of course nonsense. How does this give rise to different topologies? And what constitutes information?
What those questions are then determines our topology. They don’t
need to actually correspond to any notion of finiteness (for example we could simply define the discrete topology in which all of the questions “Is it this point?” are permitted), but many classic ones do: e.g. You only need to evaluate a real number to a finite number of decimal places to prove that it’s in an open set.
Essentially these two resolve themselves together: Topologies correspond to different sorts of questions we can ask, and then “finite amount of information” just means that for every member we can prove that it’s a member by only asking a finite number of those questions.
This intuition corresponds nicely to the topology axioms: You only need 0 questions to determine if a member of the whole set is a member of the whole set, the empty set satisfies the property vacuously. If you have an arbitrary union \(\bigcup U_i\) then for \(x \in \bigcup U_i\), \(x \in U_j\) for some \(j\) and you only need a finite set of questions to prove that. If \(x \in U \cap V\) then you can take the finite proof that \(x \in U\) and the finite proof that \(x \in V\) and union them together.
You can make all this formal and get yet another characterisation of topological spaces but it’s not very interesting and ends up mostly corresponding to existing notions.
With that notion of approximate measurements hand-waved, we can now hand-wave our notion of a continuous function:
If you apply a continuous function to some input and make an approximate measurement of the result, this gives you an approximate measurement of the input.
So for example if we just consider the length of a rod and make an approximate measurement of that, this gives us an approximate measurement of the whole rod: It still constrains the space of possible objects in a way we only need to ask finitely many questions to answer.
And this is precisely what “the preimage of an open set is open” means: If we make some measurement \(V\) and constrain \(f(x) \in V\) then this precisely corresponds to \(x \in f^{-1}(V)\). So “an approximate measurement of the result of a continuous function gives an approximate measurement of its input” is exactly “The preimage of an open set under a continuous function is open”.
But
why does that match what we would intuitively think of as “continuity”?
Well, in some cases it doesn’t really, but that’s OK. For examples where we have more intuition about what continuous should mean it matches quite nicely:
Consider e.g. \(f\) with \(f(0) = 1\) and \(f(x) = 0\) otherwise. Now consider the measurement \(f(x) > \frac{1}{2}\). In order to know whether this holds for \(x\) we’re back in the “this rod is exactly one meter long” territory – no matter how precisely you measure \(x\) it might be just a bit closer to zero than that but still non-zero.
This works in more generality: At any point of discontinuity \(x\) you will find open sets that you need to know \(y\) arbitrarily well to distinguish it from \(x\) in order to determine membership.
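A tiny numeric sketch of the step-function example (the function names here are mine): points arbitrarily close to \(0\) fall outside the preimage of the open set \(\{y : y > \frac{1}{2}\}\), so that preimage, \(\{0\}\), is not open.

```python
def f(x):
    # f(0) = 1 and f(x) = 0 otherwise: discontinuous at 0
    return 1.0 if x == 0 else 0.0

# 0 is in the preimage of (1/2, oo), but no matter how small a
# neighbourhood of 0 we take, it contains points outside that preimage.
for eps in [1e-3, 1e-9, 1e-15]:
    assert f(0) > 0.5 and f(eps) <= 0.5
print("no interval around 0 stays inside {x : f(x) > 1/2}")
```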
Note also that an approximate measurement of the input to a continuous function does not give you an approximate measurement of the output. Consider e.g. the constant function \(f(x) = 1\). Then given some open set \(U\), in order to determine if \(y \in f(U)\) we need to test if \(y = 1\). This requires infinitely many decimal places of \(y\) and thus is not an approximate measurement.
Anyway, that’s enough hand waving. I don’t know if this actually clears things up for anyone (I figured this representation out long after I’d already internalized the rules of topology), but hopefully it’s given a different perspective on it.
|
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV
(Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...
Measurement of prompt J/ψ and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV
(Springer-verlag, 2012-11)
The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity $L_{int}$ = 5.6 nb$^{-1}$. The fraction ...
Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV
(Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV down to $p_t$ = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ...
Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ...
Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-03)
The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...
Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
|
I have reduced my solution of a 1D heat equation boundary value problem to the following:
$$W(z, t) = \sum_{n=1}^\infty b_n \sin(\lambda_n z) e^{-\lambda_n^2 \alpha t}$$
To get the coefficients $b_n$, I apply the initial condition that: $W(z, 0) = T_0$, which gives the Fourier Sine Series:
$$\sum_{n=1}^\infty b_n \sin(\lambda_n z) = T_0$$
My question is how to obtain the coefficients $b_n$ for my problem here using the integral formula for the Fourier Sine Series? Namely, if
$$f(x) = \sum_{n = 1}^\infty b_n \sin(nx)$$
Then:
$$b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(nx)\,dx$$
The argument of the sine function in my problem is $\lambda_n =$ (some function of $n$), not explicitly equal to $n$ as in the integral formula above. Is there a way that I am supposed to transform the argument so that the formula can be applied? Thanks kindly in advance,
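For what it's worth, if the boundary conditions force $\lambda_n = n\pi/L$ on an interval $[0, L]$ (an assumption on my part; substitute your actual eigenvalues), then orthogonality of $\sin(\lambda_n z)$ on $[0, L]$ gives the coefficients directly, which sympy can confirm:

```python
import sympy as sp

z, L, T0 = sp.symbols('z L T_0', positive=True)
n = sp.symbols('n', integer=True, positive=True)
lam = n*sp.pi/L                      # assumed eigenvalues lambda_n = n*pi/L

# orthogonality on [0, L]: b_n = (2/L) * integral_0^L T0 * sin(lambda_n z) dz
b_n = sp.simplify(2/L * sp.integrate(T0*sp.sin(lam*z), (z, 0, L)))
# b_n vanishes for even n and equals 4*T0/(pi*n) for odd n
print(b_n)
```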
|
If we consider the sequent calculus formalization of the positive fragment of the logic of ticket entailment (let's label the logic $LT_{+\leftarrow\rightarrow}^{\circ t})$, that is a Lindenbaum algebra, we have various axioms (classic sequent calculus axioms for substructural logics) and the following structural rules:
$\;$
$\cfrac{\alpha[\beta;(\gamma;\delta)]\vdash A}{\alpha[(\beta;\gamma);\delta]\vdash A}$ $\;$ $\cfrac{\alpha[\gamma;(\beta;\delta)]\vdash A}{\alpha[(\beta;\gamma);\delta]\vdash A} \;\cfrac{\alpha[(\beta;\gamma);\gamma]\vdash A}{\alpha[\beta;\gamma]\vdash A} \;\cfrac{\alpha[\beta]\vdash A}{\alpha[t;\beta]\vdash A} \; \cfrac{\alpha[t;t]\vdash A}{\alpha[t]\vdash A}$
$\;$
If we add to the logic two connectives $(\leftarrow)$ and $(\circ)$, we have two more structural rules:
$\;$
$\cfrac{\alpha[t;\beta;\gamma]\vdash A}{\alpha[\beta;(t;\gamma)]\vdash A}$, comes with $(\vdash\leftarrow)$ and $(\leftarrow\vdash)$ rules;
$\;$
$\cfrac{\alpha[(t;\beta);\gamma]\vdash A}{\alpha[t;(\beta;\gamma)]\vdash A}$, comes with $(\vdash\circ)$ and $(\circ\vdash)$ rules.
$\;$
The only other connective that we admit in our sequent calculus is $\rightarrow$, and some properties, like the admissibility of the Cut rule or Soundness and Completeness, are preserved in the extension to a logic with $\leftarrow$ and $\circ$.
$\;$
Why are the two added rules necessary (or not)? The hint of the book (the chapter on Relevance Logic in Philosophy of Logic, vol. 5 of the Handbook of the Philosophy of Science, Gabbay, D., P. Thagard and J. Woods) is that they are necessary for replacement and equivalence classes.
So, I guess the question, in general, is: how does replacement work for equivalence classes in a Lindenbaum algebra? And how is this related to the structural rules?
Thanks!
|
Linear regression is a statistical modeling technique used to describe a continuous response variable as a function of one or more predictor variables. It can help you understand and predict the behavior of complex systems or analyze experimental, financial, and biological data.
Linear regression techniques are used to create a linear model. The model describes a dependent variable \(y\) (also called the response) as a function of one or more independent variables \(X_i\) (called the predictors). The general equation for a linear regression model is:
\[y = \beta_0 + \sum_i \beta_i X_i + \epsilon\]

where the \(\beta\) terms represent linear parameter estimates to be computed and \(\epsilon\) represents the error term.
There are several types of linear regression models:
There are several types of linear regression models:

Simple: model with only one predictor
Multiple: model with multiple predictors
Multivariate: model for multiple response variables

After fitting a model, you can generate predictions, compare linear model fits, plot residuals, evaluate goodness-of-fit, and detect outliers.
To create a linear model that fits curves and surfaces to your data, see Curve Fitting Toolbox.
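As a concrete illustration (in Python rather than MATLAB, with made-up data), a simple model \(y = \beta_0 + \beta_1 x + \epsilon\) can be fit by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.0 + 3.0*x + rng.normal(scale=0.5, size=x.size)   # true beta0=2, beta1=3

X = np.column_stack([np.ones_like(x), x])   # design matrix with intercept term
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta                    # for residual plots / goodness-of-fit
print(beta)                                 # estimates close to [2, 3]
```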
|
If you need a pdf version of these notes you can get it here
The ray model of light we have been using in the last few lectures works well when light interacts with objects that are large compared to the wavelength. For objects that have features on the same or smaller length scale as the wavelength of the light we need to consider wave effects.
When we introduced refraction we invoked Huygens' principle, which explicitly uses a wave model to explain refraction. We can also use this model to explain some other important effects that occur, particularly interference and diffraction.

Huygens' principle states that every point on a wavefront can be considered to be the centre of a new secondary spherical wave, and the sum of these secondary waves determines the form of the wavefront at any subsequent time.
When we considered sound last semester we saw that when two speakers are placed some distance apart from each other, the sound waves interfere with each other. As light is a wave, we should expect similar effects to occur for light. This is indeed the case: if we shine monochromatic coherent light on a pair of slits we see an interference pattern. We can treat each of the slits as a point source of circular wavefronts. The condition for constructive interference (bright fringes) is
$d\sin\theta=m\lambda$ (m=0,1,2,..)
and for destructive interference (dark fringes)
$d\sin\theta=(m+\frac{1}{2})\lambda$ (m=0,1,2,..)
For small angles the angles $\theta_{1}$ and $\theta_{2}$ can be approximated as
$\theta_{1}=\frac{\lambda}{2d}$ and $\theta_{2}=\frac{\lambda}{d}$
The positions of the fringes are
$x_{1}=l\tan\theta_{1}\approx l\theta_{1}$ and $x_{2}=l\tan\theta_{2}\approx l\theta_{2}$
So we can see that if the screen is far from the slit the fringes are approximately equally spaced from each other.
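A quick numeric check of the equal-spacing claim, with values for $\lambda$, $d$ and $l$ that are my own assumptions:

```python
import numpy as np

lam = 550e-9    # wavelength (assumed, green light)
d = 0.25e-3     # slit separation (assumed)
l = 1.0         # slit-to-screen distance (assumed)

m = np.arange(5)
x_bright = l * m * lam / d          # small-angle positions of bright fringes
print(np.diff(x_bright))            # constant spacing of ~2.2 mm
```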
As in the case of a wave on a rope that is incident on a heavier rope and is reflected with a $180^{\circ}$ phase change, when a light wave is reflected from a more optically dense medium a $180^{\circ}$ phase change occurs. This effect is important when we want to consider interference effects in thin films.

In the absence of interference, the degree to which reflection occurs is given by the reflection coefficient $R$, which at normal incidence for light going from a medium with refractive index $n_{0}$ to one with refractive index $n_{1}$ is given by
$R=(\frac{n_{0}-n_{1}}{n_{0}+n_{1}})^2$
The transmission $T$ and reflectance $R$ add to 1 and represent the fraction of the incident light intensity that is either transmitted or reflected.
For most lenses we want as much of the incident light to be transmitted as possible. Suppose we take a glass lens with refractive index $n=1.52$. We can see from the reflectance equation
$R=(\frac{n_{0}-n_{1}}{n_{0}+n_{1}})^2=(\frac{1-1.52}{1+1.52})^2=0.043$
that about 4% of the incident light is reflected. This percentage can be reduced by the use of an anti-reflective coating. Ideally we would use a coating that produced an equal amount of reflection at both interfaces, but there is no suitable material with the required refractive index, $n=1.26$, so we use magnesium fluoride, MgF$_{2}$.
As the two reflections both occur from more optically dense media they both experience a phase change of $\pi$ on reflection, which corresponds to advancing the wave by $\frac{\lambda}{2}$. To have the two reflected waves be out of phase we need the light that goes through the coating to have advanced by $\frac{\lambda}{2}$ for destructive interference to occur. Critically, when destructive interference occurs the light is not lost, but is instead transmitted. As the wavelength of light in a medium is given by $\lambda=\frac{\lambda_{0}}{n}$, where $n$ is the refractive index of the medium and $\lambda_{0}$ is the wavelength of the light in free space, the thickness of the coating should be $\frac{\lambda_{0}}{4n_{2}}$.
In practice the light incident will not all be the same wavelength, so the thickness of the coating is typically chosen to work optimally in the center of the visible band (~550nm).
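Putting numbers in (the refractive index of MgF$_2$ is taken here as 1.38, a commonly quoted approximate value):

```python
n_coating = 1.38            # MgF2 (approximate)
wavelength_vacuum = 550e-9  # design wavelength, centre of the visible band (m)

# quarter-wave condition: t = lambda_0 / (4 * n_2)
t = wavelength_vacuum / (4 * n_coating)
print(f"coating thickness ~ {t*1e9:.0f} nm")   # roughly 100 nm
```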
Interference effects can also be observed when light is reflected from the gap between two glass surfaces, which leads to the phenomenon known as Newton's Rings. A similar problem is that of the air wedge.
For a single wavelength dark stripes will occur whenever
$2t=m\lambda$ (m=0,1,2,..)
and bright stripes will occur whenever
$2t=(m+\frac{1}{2})\lambda$ (m=0,1,2,..)
For white light different colors will experience constructive interference at different thicknesses, leading to the colorful lines we see when an air gap is under normal light.
|
Definition:Lattice (Group Theory)

Definition

A (point) lattice is a discrete subgroup of $\R^m$ under addition.

Let $\R^m$ be the $m$-dimensional real Euclidean space.

Let $\left\{ {b_1, b_2, \ldots, b_n}\right\}$ be a set of linearly independent vectors of $\R^m$.

A lattice in $\R^m$ is the set of all integer linear combinations of such vectors. That is:

$\displaystyle \mathcal L (b_1, b_2, \ldots, b_n) = \left\{ {\sum_{i \mathop = 1}^n x_i b_i : x_i \in \Z}\right\}$
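A small computational illustration (the basis vectors are my own choice), enumerating a window of points of $\mathcal L (b_1, b_2)$ in $\R^2$:

```python
import itertools
import numpy as np

B = np.array([[1.0, 0.0],
              [0.5, 1.0]])    # rows are the basis vectors b_1, b_2

# all integer combinations x_1*b_1 + x_2*b_2 with |x_i| <= 2
points = [np.asarray(x) @ B for x in itertools.product(range(-2, 3), repeat=2)]
print(len(points))            # 25 lattice points in this window
```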
|
Suppose we have random variables $X_1, X_2, \ldots, X_n, \ldots$ that are iid, but we don't know what distribution they follow.
I know that the sample mean $\bar{X}$ is an unbiased estimator of the population mean. But how can I prove that the square of the sample mean is a biased (or maybe unbiased) estimator of the variance?
My particular doubt is how to continue this:
$E[\bar{X}^2] = E[(\frac{\sum_{i=1}^nX_i}{n})^2] = E[\frac{\sum_{i=1}^nX_i}{n} \times\frac{\sum_{i=1}^nX_i}{n}] = \frac{1}{n^2} E[\sum_{i=1}^nX_i \times \sum_{i=1}^nX_i] = .....$
I think the estimator is biased, but I want to confirm it...
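A quick simulation (the distribution and parameters are my arbitrary choice) supports that suspicion: since $E[\bar{X}^2] = \mu^2 + \sigma^2/n$, the square of the sample mean is in general a biased estimator of the variance.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 1.0, 2.0, 10, 200_000

samples = rng.normal(mu, sigma, size=(reps, n))
xbar_sq = samples.mean(axis=1)**2

# E[xbar^2] = mu^2 + sigma^2/n = 1.4, not sigma^2 = 4
print(xbar_sq.mean())
```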
|
Yeah, this software cannot be too easy to install, my installer is very professional looking, currently not tied into that code, but directs the user how to search for their MikTeX and or install it and does a test LaTeX rendering
Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for revision of code.
he is usually on the 2nd monitor chat room. There are a lot of people on those chat rooms that help each other with projects.
i'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off tangent.
your project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl.
In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of $\hat{f}(z)=\prod_{m=1}^{\infty}(\cos...$
Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval
@AkivaWeinberger are you familiar with the theory behind Fourier series?
anyway here's a food for thought
for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost surely.
(a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$?
@AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it.
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite.
I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d...
Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style that the prof. (who creates the exam) creates questions.
@AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations
hence you're free to rescale the sides, and therefore the (semi)perimeter as well
so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality
that makes a lot of the formulas simpler, e.g. the inradius is identical to the area
It is asking how many terms of the Euler Maclaurin formula do we need in order to compute the Riemann zeta function in the complex plane?
$q$ is the upper summation index in the sum with the Bernoulli numbers.
This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .."
|
This week I was at ICDE’19 where I presented Efficient Synchronization of State-based CRDTs (here’s a link to paper and slides). Following Pedro’s idea, and since the talk is still fresh, this post will be a transcript of it.

Context
In Figure 1, I briefly explain the CRDT acronym. My previous post was just about that.
Outline

CRDT variants: First, we’ll take a look at CRDT synchronization models. The main variants are operation-based and state-based. There’s also delta-based, which is a variant of state-based synchronization.

From filter to decomposition; filter; join: In classic delta-based, when a replica receives a delta, it checks whether this delta has something new (the filter), and if it has, the delta is propagated to the peers in the system. We’ll show that this is insufficient, and a more sophisticated approach is necessary: replicas should instead decompose the delta (the decomposition), select the relevant parts (the filter), and then merge them back together (the join), effectively extracting what’s new in the received delta.

A super-realistic example of decomposition; filter; join: Consider a wedding cake. When we decompose it, we get all the layers. Then, we decide that we just want the first and the third layer. In the end, we glue the interesting layers back together.

Decomposition of state-based CRDTs: The filter and join operations are already part of the CRDT framework. We’ll see how they look in the simplest CRDT (a grow-only set). Then, we’ll introduce the missing part (the decomposition) and the three properties that define “good” decompositions.
And finally, some experimental results before we go.

CRDT synchronization models
Two of the main differences between CRDTs synchronization models are:
the guarantees that the synchronization middleware should provide
the payload exchanged between replicas upon a new update

Operation-based

In the operation-based model, the middleware should provide exactly-once causal delivery of operations (which can be expensive). The upside: only the update operation (typically small) needs to be sent.

State-based
In the state-based model, messages can be dropped, reordered and duplicated, meaning no expensive middleware is required in state-based. The downside: the full CRDT state needs to be sent when synchronizing replicas.

Delta-state-based
Delta-state-based promises to be the best of both worlds by inheriting the middleware guarantees from state-based (i.e. none), and by exchanging deltas that are typically as small as operations.

But in reality…
There are inefficiencies in the delta-based synchronization model:
In terms of bandwidth (LHS of Figure 4), delta-based can be as expensive as state-based (i.e., effectively sending the full state on every synchronization).
And since large states are being exchanged, trying to compute these deltas results in a substantial CPU overhead, when compared to state-based (RHS of Figure 4).

So, in this paper…

State-based CRDTs
As I promised above, we'll explore the ideas behind the paper resorting to the simplest CRDT that exists: a set. And from sets, we'll just need two binary operations: subset and set union.
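As a quick sketch (my own illustration — the talk itself shows no code), here are those two operations on plain Python sets:

```python
# The two binary operations we need from sets: subset and set union.
a = {"x"}
b = {"x", "y"}

assert a <= b               # subset: {x} carries nothing beyond {x, y}
assert not (b <= a)         # ...but {x, y} does carry more than {x}
assert a | b == {"x", "y"}  # union merges the information of both sets
```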
I mentioned before that all CRDTs have a filter and a join operation. For sets, the filter is the subset, while the join is the set union.

State-based vs Delta-based
State-based CRDTs define functions that allow us to update the CRDT state. These are called
mutators.
On the other hand, delta-based CRDTs define delta-mutators that return deltas. We can think of these deltas as the state difference between the state before the update and the state after.
As an example, if we’re trying to add the element
c to a set that contains a and b (i.e., {a, b}), the mutator will return the set {a, b, c}, while the delta-mutator simply returns the delta {c}.
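In code, the distinction might look like this (a sketch with function names of my own choosing, not from the paper):

```python
# Grow-only set: the mutator returns the full new state,
# while the delta-mutator returns only the state difference.
def add_mutator(state, elem):
    return state | {elem}

def add_delta_mutator(state, elem):
    return {elem}

state = {"a", "b"}
assert add_mutator(state, "c") == {"a", "b", "c"}
assert add_delta_mutator(state, "c") == {"c"}

# Joining the delta with the local state reproduces the mutator's result.
delta = add_delta_mutator(state, "c")
assert state | delta == add_mutator(state, "c")
```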
This delta is then:

1. joined with the local state resorting to the join operation (in this case, the set union); after this step, the local state is the same as it would have been if the mutator was used (in our example, we would get {a, b, c})
2. added to a delta-buffer that contains deltas to be propagated to peers

Introducing some notation before an example
On the right of our replica, we'll depict the current CRDT state (an empty set in Figure 7). At the bottom, we'll have the current list of deltas in the delta-buffer (an empty delta-buffer in Figure 7).

The problem with the classic delta propagation
Consider the example in Figure 8 with four replicas, A, B, C, and D. All replicas start with an empty set and an empty delta-buffer.
1. When A adds the element x to the set, the delta-mutator produces the delta {x}. This delta is joined with the local state (which becomes {x}, since the union of { } with {x} is {x}), and it is added to the delta-buffer.
2. A synchronizes with C by sending the new deltas in the delta-buffer (i.e. just {x}). When C receives this delta, it joins it with the local state and also adds the delta to its delta-buffer, so that this delta can be further propagated.
3. C syncs with D (the process is similar to the one above).
4. Now A adds element y to the set. The resulting delta, {y}, is joined with the local state (which becomes {x, y}) and added to the delta-buffer.
5. A syncs with B by sending the join of all the deltas never sent to B (i.e. {x, y}). B does the standard thing when receiving the delta.
6. B syncs with C, and now we get to the interesting part. Recall that the local state of C before receiving this delta from B is {x}. Although I didn't mention it yet in this running example, in the outline of the talk we've seen that when a delta is received, a filter occurs: when C receives {x, y}, it checks whether this delta has something new (compared to the local state). And indeed, there is something new (element y)! Since there's something new, the delta is joined with the local state and added to the delta-buffer.
7. And finally, C syncs with D by sending the new delta in its delta-buffer (i.e. {x, y}).
The last step is problematic because C sends {x, y} to D, even though in its previous sync step with D it sent {x}. We would expect that now it would send the new state changes, i.e. only {y}.
This shows that the simple filter done by C when it received {x, y} is not enough! Instead, C must decompose the received delta {x, y}, select/filter the interesting bits, join them back together, and only then, add what results (that should be {y}) to the delta-buffer.
We already know the filter and join operations (at least for sets). Let's now see how we do the decomposition.

Decomposition of state-based CRDTs
In Figure 9 we have a decomposition example: given the set
{a, b, c}, the decomposition of this set should be {{a}, {b}, {c}} 1.
In the paper, we define three properties, that when respected, produce “good” decompositions:
1. The join of everything in the decomposition produces the original element. As an example, {{b}, {c}} is not a good decomposition because its join only produces {b, c}, and not {a, b, c}.
2. All elements in the decomposition are needed. {{a, b}, {b}, {c}} is not a good decomposition because {b} is not needed to produce {a, b, c}.
3. No element can be further decomposed. {{a, b}, {c}} is not a good decomposition because one of its elements, namely {a, b}, can be further decomposed into {a} and {b}.
Only the decomposition
{{a}, {b}, {c}} respects all three properties.
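A sketch of this (my own code, not the paper's): decomposing a set into singletons and checking the three properties above:

```python
from functools import reduce

def decompose(state):
    return [{e} for e in state]   # {a, b, c} -> [{a}, {b}, {c}]

state = {"a", "b", "c"}
decomp = decompose(state)

# 1. Joining everything in the decomposition produces the original element.
assert reduce(set.union, decomp, set()) == state

# 2. All elements are needed: dropping any part loses information.
for i in range(len(decomp)):
    rest = decomp[:i] + decomp[i + 1:]
    assert reduce(set.union, rest, set()) != state

# 3. No element can be further decomposed (every part is a singleton).
assert all(len(part) == 1 for part in decomp)
```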
(In the paper we show that for all states of CRDTs used in practice, a decomposition always exists, and furthermore, this decomposition is unique.)

Introducing the CRDT-difference operation
Once we have a decomposition, now we can build a
CRDT-difference binary operation.
For sets, this operation boils down to the set difference that can be simply defined as:
$$ a \setminus b = \{ x \in a \mid x \not \in b \}$$
As an example (from Figure 10), the difference between
{x, y} and {x} is {y} (i.e. {x, y}$\setminus$ {x} = {y}).
This operation can be generalized for CRDTs resorting to the decomposition we've just introduced. We don't need to worry much about the formula in Figure 10, but the general idea is the following: for each element x in the decomposition of the first argument (i.e. a), we check whether that x "is needed" in the second argument (i.e. b) resorting to the filter operation; in the end, we join all x that passed the filter, in order to obtain the final CRDT state difference.

Going back to our example
Now we’re ready to “fix” our example.
When C receives {x, y} from B, instead of checking if {x, y} has anything new, it computes the difference between the received delta {x, y} and its local state {x}. This returns only the new pieces of information in the received delta, i.e. simply {y}.
The returned
{y} is added to the delta-buffer, and now the last sync step is exactly what we expected: only {y} is sent from C to D.
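Putting the pieces together, the difference computed by C can be sketched as decompose; filter; join (again my own code, following the idea described for Figure 10):

```python
from functools import reduce

def decompose(state):
    return [{e} for e in state]

def difference(a, b):
    # Keep each part of a's decomposition that is not already
    # contained in b (the filter), then join the survivors.
    kept = [part for part in decompose(a) if not part <= b]
    return reduce(set.union, kept, set())

received = {"x", "y"}   # delta C received from B
local = {"x"}           # C's local state
assert difference(received, local) == {"y"}  # only {y} enters the delta-buffer
```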
Since the replica receiving the delta removes redundant state in the received delta, we denote this optimization by RR.

Note that this problem only occurred because C received the same piece of information ({x}) from two different replicas: both from A and from B. This means that we have a cycle in the network topology (this will be relevant in the experimental evaluation).

A very simple optimization
In Figure 12 we have depicted another problem in delta propagation:
A produced a delta (by adding an element to the set), sent this delta to B, and B sent this delta back to A.
This behavior is undesirable but fortunately it is super simple to fix. Each delta in the delta-buffer should be tagged with its origin (in the example, B would tag {x} with A), and never sent back to the origin in the next sync steps. This optimization is denoted by BP, given that replicas should avoid back-propagation of received deltas.

Evaluation
This post is already becoming super long, so let’s now try to finish it quickly.
As we’ve seen before, cycles in the network topology might result in redundant state being propagated between replicas. With that, in the evaluation we wanted to
observe the behavior of the different synchronization models with and without cycles in the topology.
For that, we ran experiments with a tree (acyclic) and a partial-mesh (cyclic). Both these topologies are depicted in Figure 13.

Set micro-benchmark
In Figure 14 we have the bandwidth required by state-based, delta-based, delta-based BP, delta-based RR, and delta-based BP+RR, when synchronizing a replicated set. The results with the tree are on the left, and the results with the partial-mesh are on the right.

- If we look at the first two bars, we can see that vanilla delta-based represents no improvement when compared to state-based, independently of the topology employed.
- The third and fourth bars reveal something interesting: RR is only required when the topology has cycles, since BP alone achieves the best result (BP+RR) in the acyclic topology.

You may be wondering what's difficult about that!?!?
For sets, this looks super simple. And I really hope that’s indeed the case, since this was the goal. However, all of this generalizes for any CRDT, as long as we know how to decompose them.
An excerpt from the last appendix in the paper:
In this section we show that for each composition technique there is a corresponding decomposition rule. As the lattice join ⊔ of a composite CRDT is defined in terms of the lattice join of its components [35], decomposition rules of a composite CRDT follow the same idea and resort to the decomposition of its smaller parts.
You can find such rules in Figure 15.
There’s much more in the paper On the experimental sidewe have other micro-benchmarks ( counterand map), comparisons with operation-based CRDTsand two variants of the Scuttlebuttalgorithm, and a Retwisbenchmark On the theory side: Resorting to the CRDT-difference operation, now we know what optimal delta-mutatorsshould return (optimal in the sense that they return the smallest delta) State-based CRDTs are not only join-semilattices, they are lattices, and even more than that, they are distributive lattices!!! Resorting to the CRDT-difference operation, now we know what The end
I hope you’ve enjoyed this transcript, and that it didn’t end up being too dense. This paper has also been covered by The Morning Paper.
If any question comes up, don’t hesitate!
|
Global bifurcations and a priori bounds of positive solutions for coupled nonlinear Schrödinger Systems
1. School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
2. School of Mathematical Sciences, Capital Normal University, Beijing 100048, China
3. HLM, CEMS, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China
4. School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
$ \begin{equation} \left\{ \begin{array}{ll} -\Delta u+\lambda_1 u = \mu_1 u^3+\beta uv^2-\gamma v &\text{in } \mathbb{R}^N, \\ -\Delta v+\lambda_2 v = \mu_2 v^3+\beta vu^2-\gamma u &\text{in } \mathbb{R}^N, \\ u(x), v(x)\rightarrow 0 \text{ as } \vert x\vert\rightarrow+\infty. \end{array} \right.\nonumber \end{equation} $
$ \lambda_1 = \lambda_2, \mu_1 = \mu_2 $
$ \gamma $
$ \beta\in(-1, +\infty) $
$ \gamma $
$ [-1, 0]\times H_r^1\left( \mathbb{R} ^N\right)\times H_r^1\left( \mathbb{R} ^N\right) $
$ \gamma = 0 $

Mathematics Subject Classification: 35Q55, 35B32, 58C40, 58E07.

Citation: Guowei Dai, Rushun Tian, Zhitao Zhang. Global bifurcations and a priori bounds of positive solutions for coupled nonlinear Schrödinger Systems. Discrete & Continuous Dynamical Systems - S, 2019, 12 (7) : 1905-1927. doi: 10.3934/dcdss.2019125
|
The theorem states that if $(v_1, ...,v_m)$ is linearly dependent in $V$ and $v_1 \neq 0$ then there exists $j \in \{2,...,m\}$ such that $v_j \in span(v_1,...,v_{j-1})$.
If $(v_1, v_2, ..., v_m)$ are linearly dependent, then by definition of linear dependence, at least one of these vectors can be expressed as a linear combination of remaining vectors.
To be more precise, vectors are linearly dependent if not all scalars $a_i$ have to be zero in this equality:
$$a_1 v_1 + a_2 v_2 + ... + a_n v_n = 0$$
For instance if $0v_1+v_2+3v_3=0$, then $v_2=-3v_3$. So you just pick the vector $v_j$ that's associated with a non-zero $a_j$, subtract everything else from both sides and get $v_j$ expressed as a linear combination of the remaining vectors. Therefore $v_j \in span(v_1,...,v_{j-1})$, by definition of span.
I just don't understand the requirement that $v_1 \ne 0$. It always works, no matter if $v_1$ is zero or not.
|
Please answer all questions. Each short question is worth 10% of the grade and each long question is worth 30%. Good luck.
Short Questions
Suppose that
\(Y(t) = \text{exp}(g_At)F(\text{exp}(g_Kt)K(t), \text{exp}(g_Lt)L(t))\),
where \(F\) exhibits constant returns to scale. Suppose that \(\dot{L}(t)/L(t) = n\) and \(\dot{K}(t) = sY(t)\). Suppose also that \(F\) is not Cobb-Douglas (more specifically, suppose the share of labor is not constant as the effective capital-labor ratio \(\text{exp}(g_Kt)K(t) / \text{exp}(g_Lt)L(t)\) changes). Show that balanced growth, where output grows at a constant rate, is only possible if \(g_K = g_A = 0\).
Consider the following overlapping generations model with competitive markets. There are \(N\) generations, each of which lives for two periods. Agents from generation \(i\) supply labor at time \(t = i\) and live off capital income at time \(t = i + 1\). The last generation \(N\) simply receives an exogenous rate of return \(\bar{R}\) on their savings at time \(t = N + 1\). Is the competitive equilibrium of this economy Pareto optimal? Now consider the same economy with \(N = \infty\). Is the competitive equilibrium still Pareto optimal? Provide an economic intuition (no need for math) for your answer.

Answer both parts:

- The fact that changes in the policies and institutions of countries have no effect on their long-run growth rate is a challenge to endogenous growth models. True or false?
- Endogenous technological change models imply that product market competition is welfare-reducing because, by reducing monopoly rents, it discourages technological change and economic growth. True or false?

Consider an economy with two types of labor, \(L\) and \(H\). Whether an increase in the supply of \(H\) induces a change in technology in a direction that is further (relatively) biased towards that factor depends on the elasticity of substitution between \(L\) and \(H\) (where a change in technology relatively biased towards \(H\) increases the wage of \(H\) relative to that of \(L\) at given supplies of \(L\) and \(H\)). True or false?

Long Questions

Problem 1. Consider a variant of the neoclassical economy with preferences at time 0 given by
\(\int_0^{\infty}\text{exp}(-{\rho}t)\frac{c(t)^{1-\theta}-1}{1-\theta}dt\).
Population is constant at \(L\), and labor is supplied inelastically. The aggregate production function is given by
\(F(K,L) = A_KK + A_LL^{1-\alpha}K^{\alpha}\),
where \(\alpha \in (0,1)\) and \(A_K > \rho + \delta\), and capital depreciates at rate \(\delta\). Capital and labor markets are competitive.
1. Derive the differential equation system that characterizes the evolution of the capital stock and consumption in equilibrium.
2. Show that this economy generates sustained growth without technological change.
3. What determines the asymptotic growth rate in this economy? [Hint: conjecture an equilibrium in which the capital stock asymptotically grows at a constant rate \(g > 0\). Simplify the differential equation system obtained in part 2 under this conjecture. Solve the simplified system and verify that there is an asymptotic equilibrium with a constant growth rate.]
4. What additional condition do we need to impose to ensure that the equilibrium you have just characterized is meaningful?
5. What happens if \(L\) grows at a constant rate? In what way does this type of growth fail to be a good approximation to the aggregate behavior of OECD countries?

Problem 2. Consider the following endogenous growth model. Population at time \(t\) is \(L(t)\) and grows at the constant rate \(n\) (i.e., \(\dot{L}(t) = nL(t)\)). All agents have preferences given by
\(\int_0^{\infty}\text{exp}(-{\rho}t)\frac{C(t)^{1-\theta}-1}{1-\theta}dt\),
where \(C\) is consumption defined over the final good of the economy. This good is produced as
\(Y(t) = \left[\int_0^{N(t)}y(\nu,t)^{\beta}d{\nu}\right]^{1/{\beta}}\),
where \(y(\nu,t)\) is the amount of intermediate good \(\nu\) used in production at time \(t\) and \(N(t)\) denotes the number of intermediate goods available at time \(t\). The production function of each intermediate is
\(y(\nu,t) = l(\nu,t)\)
where \(l(\nu,t)\) is labor allocated to this good at time \(t\). New goods are produced by allocating workers to the R&D process, with the production function
\(\dot{N}(t) = {\eta}N^{\phi}(t)L_R(t)\)
where \(\phi \leq 1\) and \(L_R(t)\) is labor allocated to R&D at time \(t\). So labor market clearing requires \(\int_0^{N(t)}l(\nu,t)d\nu + L_R(t) = L(t)\). Risk-neutral firms hire workers for R&D. A firm who discovers a new good becomes the monopoly supplier, with a perfectly and indefinitely enforced patent.
1. Characterize the BGP in the case where \(\phi = 1\) and \(n = 0\). Why does the long-run growth rate depend on \(\theta\)? Why does the growth rate depend on \(L\)? Do you find this plausible?
2. Now suppose that \(\phi = 1\) and \(n > 0\). What happens? Interpret.
3. Now characterize the BGP when \(\phi < 1\) and \(n > 0\). Does the growth rate depend on \(L\)? Does it depend on \(\theta\)? On \(n\)? Why?
4. Do you think that the configuration \(\phi < 1\) and \(n > 0\) is more plausible than the one with \(\phi = 1\) and \(n = 0\)?
|
I am confused as to when you use ΔU and when to use ΔH, I don’t really understand the difference. Is it that ΔU is used for closed systems whilst ΔH is used for open systems?
Is it that ΔU is used for closed systems whilst ΔH is used for open systems?
That's not quite correct.
Consider the case of a simple system: a volume of gas. The problem is that when processes take place, the gas may expand or contract. Since the gas pushes back on the external pressure, some amount of work may be done by the system ($W = p\Delta V$). The internal energy change $\Delta U$ does not automatically factor in this work.
Enthalpy, on the other hand, is a quantity which is defined ($H = U + pV$) such that it does account for the pressure-volume work of the system, so $\Delta H$ is the change in the energy of the system plus the pressure-volume work done by the system.
So, the summary is that if you have a constant volume process (where pressure volume work is by definition zero), use $U$. If you have a constant pressure system, use $H$.
Even the free energies are this way as well. There's Helmholtz free energy ($A$ or $F$) for constant volume processes and Gibbs free energy ($G$) for constant pressure processes.
In chemistry, most processes are done in open air (unless you're using a bomb calorimeter), in which case all processes are simply subject to atmospheric pressure, so these processes would all be best described using
enthalpy and Gibbs free energy.
You would use $\Delta U$ along with Q-W for a closed system. For an open system (involving mass entering and leaving a control volume) operating at steady state, you would use $\Delta H$ along with $Q-W_s$, where $W_s$ is the shaft work per unit mass passing through the system (which does not include the work to push mass into and out of the system) and Q is the heat added to the control volume per unit mass passing through the system.
The open system version of the first law of thermodynamics (neglecting kinetic- and potential energy effects) is $$\dot{U}=\dot{Q}-\dot{W_s}+\dot{m}_{in}h_{in}-\dot{m}_{out}h_{out}$$where h is the enthalpy per unit mass of entering and leaving streams to the control volume. At steady state, $\dot{U}=0$ and $\dot{m}_{out}=\dot{m}_{in}=\dot{m}$. So, for steady state operation, $$\dot{Q}-\dot{W_s}+\dot{m}(h_{in}-h_{out})=0$$or equivalently, $$\Delta h=\frac{\dot{Q}}{\dot{m}}-\frac{\dot{W_s}}{\dot{m}}$$
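As a numeric illustration of the constant-volume vs constant-pressure distinction (my own example, not from the answers above, using the ideal-gas relations $\Delta U = nC_v\Delta T$ and $\Delta H = nC_p\Delta T$):

```python
# Heating one mole of a monatomic ideal gas by 10 K. At constant volume
# the heat added equals dU; at constant pressure it equals dH, because
# dH = dU + n*R*dT folds in the expansion (pressure-volume) work.
R = 8.314       # J/(mol K), gas constant
n = 1.0         # mol
dT = 10.0       # K
Cv = 1.5 * R    # monatomic ideal gas: Cv = (3/2) R
Cp = Cv + R     # ideal gas relation: Cp = Cv + R

dU = n * Cv * dT        # internal energy change (any ideal-gas process)
dH = n * Cp * dT        # enthalpy change
pv_work = n * R * dT    # Delta(pV) = n*R*dT at constant pressure

assert abs(dH - (dU + pv_work)) < 1e-9
```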
|
Does anybody have any good papers or references to explain the differences between the Heisenberg model and Ising model?
As the comments already suggest, this is more introductory textbook material. Parkinson and Farnell's "An Introduction to Quantum Spin Systems" (Springer Lecture Notes in Physics 816, 2010) might be a good choice for you. It's an overall accessible introduction to quantum spin systems, particularly in one and two dimensions, covering models and common techniques. It may also be available as an e-book through many university libraries. (But you should be able to find the distinction between Ising and Heisenberg models in many other books on magnetism or statistical physics too.) From p. 17:
Depending on the types of atom involved and the environment in which they exist the exchange interaction may have different forms. Examples are:
Heisenberg $J\mathbf{S}_1\cdot\mathbf{S}_2$ (as before)
Ising $JS_1^zS_2^z$
Anisotropic (a combination of the above) $J[\Delta S_1^z S_2^z + (S_1^xS_2^x + S_1^y S_2^y)]$
Biquadratic $J\left( \mathbf{S}_1 \cdot \mathbf{S}_2\right)^2$
To the best of my knowledge, I am aware that the Hamiltonians are similar, however the Heisenberg model represents the spins with Pauli operators.
Both models are generally defined in terms of spin operators, as shown above. In the important case of spin-1/2 spins, the spin operator can be represented in terms of Pauli matrices. In fact, there's an equality $\mathbf{S}=\hbar \vec{\sigma}/2$, where $\vec{\sigma}=(\sigma^x, \sigma^y, \sigma^z)$ is the vector of Pauli matrices. Often we just redefine the $J$ to avoid the factor of $\hbar/2$, which is why you might have seen an Heisenberg model written $J\, \vec{\sigma}_1\cdot\vec{\sigma}_2$, but it can make a difference when comparing observables calculated using different notations.
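To make the distinction concrete, here is a small sketch (my own illustration, not from the book) building the two-spin-1/2 Heisenberg and Ising Hamiltonians from Pauli matrices, with $\mathbf{S} = \vec{\sigma}/2$ (setting $\hbar = 1$) and $J = 1$:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def two_site(op1, op2):
    """S-operator op1 on spin 1 tensored with op2 on spin 2 (S = sigma/2)."""
    return np.kron(op1 / 2, op2 / 2)

J = 1.0
H_heis = J * (two_site(sx, sx) + two_site(sy, sy) + two_site(sz, sz))
H_ising = J * two_site(sz, sz)

# Ising: diagonal in the z basis, eigenvalues +-J/4 (each doubly degenerate).
# Heisenberg: a singlet at -3J/4 and a threefold-degenerate triplet at +J/4.
eig_ising = np.linalg.eigvalsh(H_ising)
eig_heis = np.linalg.eigvalsh(H_heis)
```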
|
I need to give the LL grammar for the language below and explain why the grammar is LL and what the value of $k$ should be:
$$L = \{ a^n b^m c^{n+m} : n \ge 1, m \ge 1 \}. $$
I have the following, but it works for $n \ge 0$ and $m \ge 0$. I'm not sure how to get rid of that case where $n$ and $m$ are 0.
$$ \begin{align*} &S \to aSc \mid Z \mid \lambda \\ &Z \to bZc \mid \lambda \end{align*} $$
|
$\newcommand{\ket}[1]{\left|{#1}\right\rangle}$
Or... does this mean that the sphere is simply a graphical
representation of $\theta$ and $\phi$, while $\lvert 0\rangle$ and
$\lvert 1\rangle$ do not geometrically correspond to any vector on the
sphere? (but here it writes $\hat{z}=\lvert 0\rangle$ and
$-\hat{z}=\lvert 1\rangle$...)
This is
not an artificial graphical representation. But this representation of $\Psi$ on the Bloch sphere is based on stereographic projections, it is not a "linear" representation. For example the Euclidean equality $\ket{1}=-\ket{0}$ that you have noted, occurs only for the representations of $\ket{1}$ and $\ket{0}$, not for the "true" $\ket{1}$ and $\ket{0}$.
The $\frac{\theta}{2}$ can be seen on the picture given in my answer below. The red vector $\xi$ is the key point.
Once one writes the pure qubits as $$\Psi=\cos\frac{\theta}{2} \lvert 0\rangle + e^{i\varphi}\sin\frac{\theta}{2} \lvert 1\rangle \qquad (\star),$$it is obvious that the spherical coordinates provide a one-one correspondence (a homeomorphism) between a pure qubit and a point on the 2D-sphere (the Riemann sphere, or Bloch sphere in this context). But I want to show that this homeomorphism is not artificial.
Pure qubits are rays
One usually defines a qubit as a vector in the complex vector space $\mathbb{C}^2$$$\Psi = v_0 \ket{0} + v_1 \ket{1} = v_0 \begin{pmatrix} 1 \\ 0 \end{pmatrix}+ v_1 \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} v_0 \\ v_1 \end{pmatrix}$$where $v_0$ and $v_1$ are
complex numbers satisfying ${|v_0|}^2+{|v_1|}^2=1$ (then $\Psi$ is said to be a normalized vector). The space of qubits has dimension $3$.
But, when $\Psi$ and $\Psi'$ are two qubits differing by a complex proportionality factor $z$ (necessarily having modulus $1$, hence $z=e^{i\alpha}$, and called a phase factor):$$\Psi' = z \Psi = \begin{pmatrix} zv_0 \\ zv_1 \end{pmatrix}$$they define the same "logic" through the Born rule (which also means that $\langle \Psi, A\Psi\rangle = \langle \Psi', A\Psi'\rangle$ for self-adjoint operators $A$), and considering only qubits having the form$$\Psi=\cos\frac{\theta}{2} \lvert 0\rangle + e^{i\varphi}\sin\frac{\theta}{2} \lvert 1\rangle \qquad (\star),$$called the pure qubits, is enough when we look at qubits up to a complex proportionality factor.
The space of pure qubits defined in this way (qubits up to a proportionality factor) is also known as the space of rays or the complex projective space $\mathbb{C}\mathbb{P}^1$. This is the mathematical formalism behind pure qubits, and I will come back to this point.

Homeomorphism with the Riemann sphere
It is obvious that the expression $(\star)$ provides an homeomorphism between the space of pure qubits and the Riemann sphere with the help of spherical polar coordinates. Obviously this homeomorphism is not linear; for example it is clear that $\ket{1} \neq -\ket{0}$ while this relation can be seen on the
representations of $\ket{1}$ and $\ket{0}$ on the Riemann sphere. And it is clear that the linear combination $$\Psi=\cos\frac{\theta}{2} \lvert 0\rangle + e^{i\varphi}\sin\frac{\theta}{2} \lvert 1\rangle$$does not occur on the 3D Euclidean representation.
Nevertheless,
this homeomorphism is not an artificial one. In the sequel, let us carefully distinguish between $\Psi$ and its representation (the $\Psi$ shown on the sphere).
It is well known that the Riemann sphere $\mathbb{S}^2$ is a representation (is homeomorphic to) the space $\bar{\mathbb{C}}$ of complex numbers
"plus a point at infinity" through the stereographic projection. The stereographic projection of the representation of $\Psi$ in the $(xy)$-plane is the vector $$\xi = \tan\frac{\theta}{2} e^{i\varphi}, $$shown in red on the figure below.
Interpreting the $(xy)$-plane as the space of complex numbers, note that $\xi$ actually lies in $\bar{\mathbb{C}}$ because $\lvert 1\rangle$ at the south pole is sent to the point at infinity (whereas $\lvert 0\rangle$ is sent to the origin of the plane). Denote by $\textit{Stereo1}$ this usual stereographic projection:$$\textit{Stereo1}\colon \mathbb{S}^2 \to \bar{\mathbb{C}}, $$which sends the
representation of $\Psi$ to the red vector $\xi$.
The point is the following one. As said before, the space of qubits is the complex projective space $\mathbb{C}\mathbb{P}^1$. And this one is known to be homeomorphic to the Riemann sphere too. This homeomorphism is called the stereographic projection too:$$\textit{Stereo2}\colon \mathbb{C}\mathbb{P}^1 \text{(the space of pure qubits)} \to \bar{\mathbb{C}}, $$and it is given by $$\textit{Stereo2}(\Psi) = \frac{v_1}{v_0} = \frac{e^{i\varphi}\sin \frac{\theta}{2}}{\cos\frac{\theta}{2}}= \tan\frac{\theta}{2} e^{i\varphi} = \xi. $$
This is why I said the homeomorphism provided by spherical polar coordinates is not an artificial one: it is a natural homeomorphism because of the relation$$\textit{Stereo1}(\text{representation of $\Psi$}) = \textit{Stereo2}(\Psi),$$that is to say $$\text{representation of $\Psi$} = {\textit{Stereo1}}^{-1} \bigl(\textit{Stereo2}(\Psi)\bigr).$$
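As a numerical sanity check of the relation $\textit{Stereo1}(\text{representation of }\Psi) = \textit{Stereo2}(\Psi)$, here is a short sketch. It assumes the usual conventions: the representation of $(\star)$ is the Bloch vector $(\sin\theta\cos\varphi, \sin\theta\sin\varphi, \cos\theta)$, and $\textit{Stereo1}$ projects from the south pole so that $\lvert 1\rangle$ goes to the point at infinity.

```python
import cmath
import math

def bloch_vector(theta, phi):
    # Riemann-sphere representation of the pure qubit (star)
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def stereo1(x, y, z):
    # stereographic projection from the south pole |1>, so |0> maps to 0
    return complex(x, y) / (1 + z)

def stereo2(theta, phi):
    # ratio v1/v0 of the components of the qubit (star)
    v0 = math.cos(theta / 2)
    v1 = cmath.exp(1j * phi) * math.sin(theta / 2)
    return v1 / v0

theta, phi = 1.1, 0.7
assert abs(stereo1(*bloch_vector(theta, phi)) - stereo2(theta, phi)) < 1e-12
```

Both routes produce the same complex number $\xi = \tan\frac{\theta}{2} e^{i\varphi}$, which is exactly the commutativity claimed above.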
Summary card
|
Find the solution $u(x, t)$ to the 1-D heat equation
$$ u_t = c^2u_{xx}$$
with initial conditions
$$ u(x,0) = \begin{cases} \hfill 0 \hfill & \text{ if $x < 0$ } \\ \hfill u_0 \hfill & \text{ if $x > 0$} \end{cases} $$
The final answer should be expressed in terms of the error function, which is defined as:
$$ \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-w^2} \, dw$$
So far, I have taken the Fourier transform of the PDE in $x$ (with frequency variable $w$) and obtained
$$\frac {d\hat{u}}{dt} = -c^2w^2\hat{u}$$
Then, I took the Fourier transform of the initial condition and obtained
$$ \hat{u}(w, 0) = \frac {u_0}{iw \sqrt{2\pi}}$$
After this part, I am not sure what to do. Can someone clarify whether what I have done is correct so far and tell me what I should do next?
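For reference, the closed form one expects here is the standard similarity solution of the heat equation with a step initial condition, $u(x,t)=\frac{u_0}{2}\bigl(1+\operatorname{erf}\bigl(\frac{x}{2c\sqrt{t}}\bigr)\bigr)$. A quick finite-difference sanity check of that candidate (a sketch, assuming $u_0 = c = 1$):

```python
import math

def erf_u(x, t, u0=1.0, c=1.0):
    # candidate similarity solution u(x,t) = (u0/2) * (1 + erf(x / (2 c sqrt(t))))
    return 0.5 * u0 * (1.0 + math.erf(x / (2.0 * c * math.sqrt(t))))

# numerically check u_t = c^2 u_xx at a sample point via central differences
x, t, h = 0.3, 0.5, 1e-4
u_t = (erf_u(x, t + h) - erf_u(x, t - h)) / (2 * h)
u_xx = (erf_u(x + h, t) - 2 * erf_u(x, t) + erf_u(x - h, t)) / h**2
assert abs(u_t - u_xx) < 1e-5
```

The check confirms that the candidate satisfies the PDE, and $u(x,0^+)$ matches the step initial condition.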
|
I have seen it stated many times that the $BF$ theory has non-trivial ground-state degeneracy (GSD), typically on the torus, but I cannot see how this conclusion comes out. Recently I found a paper by Hansson, Oganesyan and Sondhi, [Superconductors are topologically ordered], in which the superconductor is described by a Maxwell$-BF$ theory. It has a section on the GSD of a $BF$ theory in $2+1$ $d$, but I still have questions about it.
The $BF$ theory in $2+1$ $d$ is given by the action
$$
S = \frac{1}{\pi} \int d^3 x \epsilon^{\mu \nu \sigma} b_{\mu} \partial_{\nu} a_{\sigma}, \qquad (1)
$$
where $a_{\mu}$ and $b_{\mu}$ are $U(1)$ gauge fields.
$\mu,\nu,\sigma = 0,x,y$.
Working on the $2$-torus, as in section [IV.A] of Hansson's paper, the $BF$ theory can be written in the form
$$
S = \frac{1}{\pi}\int d^3x[\epsilon^{ij} \dot{a}_i b_j+
a_0 \epsilon^{ij} \partial_i b_j + b_0 \epsilon^{ij} \partial_i a_j],
$$
where $\dot{a} = \partial_0 a$ and $i,j = x,y$. They interpret $a_0$ and $b_0$ as Lagrange multipliers enforcing the constraints
$\epsilon^{ij} \partial_i b_j = 0$ and $\epsilon^{ij} \partial_i a_j = 0$.
Upon inserting $a_i = \partial_i \Lambda_a + \bar{a}_i/L$
and $b_i = \partial_i \Lambda_b + \bar{b}_i/L$,
where $\Lambda_{a/b}$ are periodic functions on the torus, $\bar{a_i}$ and $\bar{b_i}$ are spatially constant, $L$ denotes the size of the system, the above $BF$ theory reduces to
$$
S = \frac{1}{\pi}\int d^3 x \epsilon^{ij} \dot{\bar{a}}_i \bar{b}_j. \qquad (2)
$$
Then they say that from Eq. (2) one can obtain the commutation relations ([Eq. (38)] in their paper)
$$
[\bar{a}_x, \frac{1}{\pi}\bar{b}_y] = i, \quad
[\bar{a}_y,-\frac{1}{\pi}\bar{b}_x] = i. \qquad (3)
$$
Moreover, from the commutation relations Eq. (3), one can have ( [Eq. (39)] in their paper)
$$
A_x B_y + B_y A_x = 0, \quad
A_y B_x + B_x A_y = 0, \qquad (4)
$$
where $A_i = e^{i\bar{a}_i}$ and $B_i = e^{i\bar{b}_i}$.
They claim that the relations in Eq. (4) indicate a $2\times2 = 4$-fold GSD and that "$B_i$ can be interpreted either as measuring the $b$-flux or inserting an $a-$flux."
There are several points that I don't understand.
1. How can I get the commutation relations Eq. (3) from the action Eq. (2)?
2. Why do the relations in Eq. (4) indicate a $4$-fold GSD?
3. How should I understand the statement "$B_i$ can be interpreted either as measuring the $b$-flux or inserting an $a-$flux."?
I would be very grateful if anyone could give me some hints or suggest some relevant references.
|
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session if it crashes with 100% certainty).
And Chrome has a Personal Blocklist extension which does what you want.
: )
Of course you already have a Google account but Chrome is cool : )
Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies?
do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times using some quite deep ideas from logic a new rigorous theory of infinitesimals was created.
@QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value.
I have problem with showing that the limit of the following function$$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$equal to $1$, with $n \to \infty$.
@QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0.
@KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what's y? "dy/dx is by definition not continuous" - it's not a function, how can you ask whether or not it's continuous, ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people
in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results
@QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O
@NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that.
@NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment.
@QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h).
@KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow)
Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with $\int_0^{2 \pi} \frac{d}{dn} e^{inx}\, dx$ when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0, but if he first differentiates and then integrates, it's not 0. Does anyone know?
|
The mistake is in what you're attempting to split. The pumping lemma says if $L$ is a regular language, there exists a number $m$ (the pumping length) such that every
string $x \in L$ can be split into three parts, $x = pqr$, such that:
$|pq| \leq m$
$|q| \geq 1$
$pq^{i}r \in L$ for all $i \in \mathbb{N}$
So it's the strings in the language that can be split and pumped, not the language.
So for the language you give, which is regular, you should be able to take any string, split it and pump it. One problem with doing this in practice is that the pumping length $m$ is not typically known, however we can take a larger example here and we don't have to worry.
Consider the string $y = a^{2m+3}b^{4}$, then we know that $pq = a^k$ for some $k \in [1,m]$, and in particular $p = a^{k_{1}}$ with $k_{1} \in [0,m-1]$, $q = a^{k_{2}}$ with $k_{2} \in [1,m]$ and $k_{1}+k_{2}=k$. Then $r = a^{2m+3-k}b^{4}$. As $k \leq m$, this string is in the language $A$ (it fits the definition in the second part).
Now we get to the pumping part; as $A$ is regular, assuming that we've split things correctly, we should be able to pump $y$. If we pump down, we get $a^{2m+3-k_{2}}b^{4}$, which is still in $A$ (again, $k_{2} \leq k \leq m$). If we pump up $n$ times, we get $a^{2m+3+nk_{2}}b^{4}$, which still fits the definition and is in $A$.
Then as $A$ is regular, we should be able to do this for every string in the language, but there's no real reason we would want to as it doesn't tell us anything - there are non-regular languages which can still be pumped (e.g. see the wikipedia article). What we use the pumping lemma for is showing that languages
are not regular by showing that the language we suspect is not regular contains at least one string that cannot be broken up and pumped in any way.
|
In the linear algebra found in Michael Artin's
Algebra, coordinates are not emphasized*.
You have a field (like $\mathbb{R}$ or $\mathbb{C}$), and your vector space (like $\mathbb{R}^n$ or $\mathbb{C}^n$) which you can multiply by elements of your field. This has to satisfy some big list of axioms, but no axiom mentions coordinates. $\mathbb{R}^n$, consisting of lists of real $n$-tuples, is an example of a vector space, but you could imagine defining it without making use of coordinates. You can define the dimension of the vector space without making use of coordinates either: the dimension is the size of the largest possible set of linearly independent vectors. So there's no coordinate dependence here.
In linear algebra, you can continue using abstract definitions. You care about linear operators on vector spaces. Linear maps have their own abstract definition. Once you choose a basis, composition of linear maps turns into matrix multiplication! So there doesn't have to be any coordinate dependence here either.
But even with all of this coordinate-free stuff, we still don't have the "coordinate-free" nature that physicists care about. We actually have to throw
geometry in to get anything useful for physics. In classical mechanics, we throw in Euclidean geometry.
The simplest way to throw in Euclidean geometry is to... use coordinates. We could say that vectors $(x,y)$ have length $\sqrt{x^2+y^2}$ (Note: normally this is the pythagorean theorem and is a "theorem". Here it's an axiom), and that the angle between two vectors satisfies $(v_x,v_y)\cdot(w_x,w_y)=v_x w_x+v_y w_y=\|v\|\|w\|\cos(\theta)$. This turns the vector space into an inner product space, but we
still don't have quite enough for physics, just yet. To get something more physical: dictate that all of our physical laws have to be invariant under transformations which leave all lengths and angles unchanged.
Finally, under the structure of a vector space, the Euclidean inner product, and the demand that physics should be unchanged under all linear maps that leave the inner product conserved, we have something resembling a vector in physics.
The moral is that it's a pain in the butt to spell out all of the mathematical assumptions. The vectors in mathematics are precisely anything that satisfied the vector axioms I linked above. The vectors in physics have tons of extra structure on them - like, for velocity vectors in classical mechanics, Euclidean geometry. However this doesn't necessarily have anything to do with coordinates.
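The demand that physics be invariant under inner-product-preserving maps can be illustrated concretely: a rotation of the plane leaves the Euclidean dot product, and hence all lengths and angles, unchanged. A minimal sketch:

```python
import math

def dot(v, w):
    # Euclidean inner product on the plane
    return sum(a * b for a, b in zip(v, w))

def rotate(v, theta):
    # a linear map that preserves the inner product (an element of SO(2))
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

v, w, theta = (1.0, 2.0), (-0.5, 3.0), 0.8
# the dot product, and so every length and angle, is invariant
assert abs(dot(rotate(v, theta), rotate(w, theta)) - dot(v, w)) < 1e-12
```

Any physical law written only in terms of such invariant quantities is automatically unchanged by the transformation, which is the sense in which the formulation is "coordinate-free".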
|
Ashok tries to prove Lorentz invariance of the Dirac equation. If the spinor follows the transformation rule $\Psi' = S\Psi$, then
$$ (i\gamma^\mu\partial_\mu-m)\Psi = 0\to (i\gamma^\mu\Lambda^\nu_{\;\mu}\partial'_\nu-m)S^{-1}\Psi = 0. $$
Afterwards he writes
$$ (i\Lambda^\mu_{\;\nu}\gamma^\nu\partial'_\mu-m)S^{-1}\Psi = 0. $$
It may appear at first glance that he just commutes the Lorentz transformation with the Dirac gamma matrices and swaps the indices $\mu \leftrightarrow\nu$. Is this correct, is it an erratum, or is there something more involved here?
|
Please give your comment and help with the following exercise.
[=>]: Let $y_1, y_2 \in Y, y_1 \neq y_2$. Since $f$ is surjective, $y_1 = f(x_1), y_2 = f(x_2)$ for some $x_1,x_2 \in X$. Since $Y$ is Hausdorff, there exist in $Y$ open neighborhoods $N(y_1), N(y_2)$ of $y_1,y_2$ respectively, such that $N(y_1) \cap N(y_2) = \emptyset$. Since $f$ is continuous, $f^{-1}(N(y_1)) \text{ and } f^{-1}(N(y_2))$ are open in $X$, necessarily disjoint, and they contain $x_1, x_2$, respectively.
It's here that I got stuck. My idea is to use what I know about $f^{-1}(N(y_1)) \text{ and } f^{-1}(N(y_2))$ to show that $(X \times X)\backslash R$ is open, hence $R$ is closed. But I don't know how to proceed.
[<=]: I might need some hints here also.
|
Suppose that we have a random vector \(\mathbf{X}\).
\(\textbf{X} = \left(\begin{array}{c} X_1\\ X_2\\ \vdots \\X_p\end{array}\right)\)
with population variance-covariance matrix
\(\text{var}(\textbf{X}) = \Sigma = \left(\begin{array}{cccc}\sigma^2_1 & \sigma_{12} & \dots &\sigma_{1p}\\ \sigma_{21} & \sigma^2_2 & \dots &\sigma_{2p}\\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{p1} & \sigma_{p2} & \dots & \sigma^2_p\end{array}\right)\)
Consider the linear combinations
\(\begin{array}{lll} Y_1 & = & e_{11}X_1 + e_{12}X_2 + \dots + e_{1p}X_p \\ Y_2 & = & e_{21}X_1 + e_{22}X_2 + \dots + e_{2p}X_p \\ & & \vdots \\ Y_p & = & e_{p1}X_1 + e_{p2}X_2 + \dots +e_{pp}X_p\end{array}\)
Each of these can be thought of as a linear regression, predicting \(Y_{i}\) from \(X_{1}\), \(X_{2}\), ... , \(X_{p}\). There is no intercept, but \(e_{i1}\), \(e_{i2}\), ..., \(e_{ip}\) can be viewed as regression coefficients.
Note that \(Y_{i}\) is a function of our random data, and so is also random. Therefore it has a population variance
\(\text{var}(Y_i) = \sum\limits_{k=1}^{p}\sum\limits_{l=1}^{p}e_{ik}e_{il}\sigma_{kl} = \mathbf{e}'_i\Sigma\mathbf{e}_i\)
Moreover, \(Y_{i}\) and \(Y_{j}\) have population covariance
\(\text{cov}(Y_i, Y_j) = \sum\limits_{k=1}^{p}\sum\limits_{l=1}^{p}e_{ik}e_{jl}\sigma_{kl} = \mathbf{e}'_i\Sigma\mathbf{e}_j\)
Collect the coefficients \(e_{ij}\) into the vector
\(\mathbf{e}_i = \left(\begin{array}{c} e_{i1}\\ e_{i2}\\ \vdots \\ e_{ip}\end{array}\right)\)
First Principal Component (PCA1): \(\boldsymbol{Y}_{1}\) Section
The first principal component is the linear combination of x-variables that has maximum variance (among all linear combinations). It accounts for as much variation in the data as possible.
Specifically we define coefficients \(e_{11}, e_{12}, \ldots, e_{1p}\) for the first component in such a way that its variance is maximized, subject to the constraint that the sum of the squared coefficients is equal to one. This constraint is required so that a unique answer may be obtained.
More formally, select \(e_{11}, e_{12}, \ldots, e_{1p}\) that maximizes
\(\text{var}(Y_1) = \sum\limits_{k=1}^{p}\sum\limits_{l=1}^{p}e_{1k}e_{1l}\sigma_{kl} = \mathbf{e}'_1\Sigma\mathbf{e}_1\)
subject to the constraint that
\(\mathbf{e}'_1\mathbf{e}_1 = \sum\limits_{j=1}^{p}e^2_{1j} = 1\)
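Numerically, the maximizing coefficients turn out to be the leading eigenvector of \(\Sigma\) (derived formally in the next lesson section). A small sketch with a hypothetical \(3 \times 3\) covariance matrix, checking that no random unit vector achieves a larger value of \(\mathbf{e}'\Sigma\mathbf{e}\):

```python
import numpy as np

rng = np.random.default_rng(1)
Sigma = np.array([[4.0, 1.5, 0.5],
                  [1.5, 3.0, 1.0],
                  [0.5, 1.0, 2.0]])   # a hypothetical covariance matrix

lam, E = np.linalg.eigh(Sigma)       # eigh returns ascending eigenvalues
e1 = E[:, -1]                        # eigenvector of the largest eigenvalue

# e1' Sigma e1 attains the largest eigenvalue ...
best = e1 @ Sigma @ e1
assert np.isclose(best, lam[-1])

# ... and no random unit vector does better (Rayleigh quotient bound)
for _ in range(1000):
    u = rng.normal(size=3)
    u /= np.linalg.norm(u)
    assert u @ Sigma @ u <= best + 1e-9
```

The unit-norm constraint \(\mathbf{e}'_1\mathbf{e}_1 = 1\) corresponds to the normalization `u /= np.linalg.norm(u)` in the sketch.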
Second Principal Component (PCA2): \(\boldsymbol{Y}_{2}\) Section
The second principal component is the linear combination of x-variables that accounts for as much of the remaining variation as possible, with the constraint that the correlation between the first and second components is 0.
Select \(e_{21}, e_{22}, \ldots, e_{2p}\) that maximizes the variance of this new component...
\(\text{var}(Y_2) = \sum\limits_{k=1}^{p}\sum\limits_{l=1}^{p}e_{2k}e_{2l}\sigma_{kl} = \mathbf{e}'_2\Sigma\mathbf{e}_2\)
subject to the constraint that the sums of squared coefficients add up to one,
\(\mathbf{e}'_2\mathbf{e}_2 = \sum\limits_{j=1}^{p}e^2_{2j} = 1\)
along with the additional constraint that these two components are uncorrelated.
\(\text{cov}(Y_1, Y_2) = \sum\limits_{k=1}^{p}\sum\limits_{l=1}^{p}e_{1k}e_{2l}\sigma_{kl} = \mathbf{e}'_1\Sigma\mathbf{e}_2 = 0\)
All subsequent principal components have this same property – they are linear combinations that account for as much of the remaining variation as possible and they are not correlated with the other principal components.
We will do this in the same way with each additional component. For instance:
\(i^{th}\) Principal Component (PCAi): \(\boldsymbol{Y}_{i}\) Section
We select \(e_{i1}, e_{i2}, \ldots, e_{ip}\) to maximize
\(\text{var}(Y_i) = \sum\limits_{k=1}^{p}\sum\limits_{l=1}^{p}e_{ik}e_{il}\sigma_{kl} = \mathbf{e}'_i\Sigma\mathbf{e}_i\)
subject to the constraint that the sums of squared coefficients add up to one...along with the additional constraint that this new component is uncorrelated with all the previously defined components.
\(\mathbf{e}'_i\mathbf{e}_i = \sum\limits_{j=1}^{p}e^2_{ij} = 1\)
\(\text{cov}(Y_1, Y_i) = \sum\limits_{k=1}^{p}\sum\limits_{l=1}^{p}e_{1k}e_{il}\sigma_{kl} = \mathbf{e}'_1\Sigma\mathbf{e}_i = 0\),
\(\text{cov}(Y_2, Y_i) = \sum\limits_{k=1}^{p}\sum\limits_{l=1}^{p}e_{2k}e_{il}\sigma_{kl} = \mathbf{e}'_2\Sigma\mathbf{e}_i = 0\),
\(\vdots\)
\(\text{cov}(Y_{i-1}, Y_i) = \sum\limits_{k=1}^{p}\sum\limits_{l=1}^{p}e_{i-1,k}e_{il}\sigma_{kl} = \mathbf{e}'_{i-1}\Sigma\mathbf{e}_i = 0\)
Therefore all principal components are uncorrelated with one another.
|
In many books, I see projectile motion caused by gravity when the object is in a state of free fall, so the velocity on the $y$ axis changes with respect to time and the velocity on $x$ axis is constant. But when the velocity on both the $x$ and $y$ axes change with respect to time, can the motion still be called projectile motion?
An example of the type of motion you mention is the projectile motion with air resistance taken into account. In this case both vertical and horizontal components are functions of time. So this type of motion is of course possible and it is actually closer to the real behavior of a projectile.
Your question is about the name of this type of motion. The name is just a matter of convention. If you look up "projectile motion with air drag" you will see that it is pretty common to call this motion "projectile motion". So I would say that you can use the term without danger of unusual terminology.
Yes, it could be. The choice of axes is arbitrary.
Yes and no. Projectile motion occurs when a single constant force is applied to a moving object. The force must be constant in magnitude
as well as direction.
It is convenient to decompose the motion about two axes, one along the force and one perpendicular in order to simplify the solution, but you don't
have to.
The resulting motion is a parabola, which we call projectile motion. The solution in vector form is:
$$ \boldsymbol{a} = \frac{1}{m} \boldsymbol{F} $$ $$ \boldsymbol{v} = \boldsymbol{v}_0 + \boldsymbol{a}\,t $$ $$ \boldsymbol{p} = \boldsymbol{p}_0 + \boldsymbol{v}_0 \,t + \frac{1}{2} \boldsymbol{a} t^2 $$
If along a particular direction the force (and hence the acceleration) is zero, then the position along that direction would be a result of constant motion.
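A minimal sketch of the closed-form solution above, applied component-wise (assuming SI units and $g = 9.8\,\mathrm{m/s^2}$ along $-y$):

```python
import math

def state(p0, v0, a, t):
    # closed-form constant-acceleration kinematics, applied per component
    p = tuple(pc + vc * t + 0.5 * ac * t * t for pc, vc, ac in zip(p0, v0, a))
    v = tuple(vc + ac * t for vc, ac in zip(v0, a))
    return p, v

# launch at 45 degrees with speed 10 m/s; the force acts along -y only
g = 9.8
p0 = (0.0, 0.0)
v0 = (10 * math.cos(math.pi / 4), 10 * math.sin(math.pi / 4))
a = (0.0, -g)

t_flight = 2 * v0[1] / g            # time to return to launch height
p, v = state(p0, v0, a, t_flight)
assert abs(p[1]) < 1e-9             # back at launch height
assert abs(v[0] - v0[0]) < 1e-12    # horizontal velocity unchanged
```

Along the force-free direction the motion is uniform, exactly as the last paragraph states.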
Combining the answers of Chase Ryan Taylor and nasu:
Physicists usually consider coordinate systems a matter of choice, unrelated to what actually happens, and only useful for describing and analysing it conveniently. Therefore the same "classic projectile motion" could be viewed from a coordinate system rotated with respect to the one usually used for this problem: for example, the x axis could be parallel to the initial velocity of the projectile, or point in any arbitrary direction. One could even use a 3D system (though this would usually be of little use, except perhaps as a vector manipulation exercise).
However, interpreting the question the way you probably meant it (that the y axis is parallel to the local gravity vector, while x is perpendicular to it, being horizontal), the answer is still yes. If there is a horizontal force acting on the projectile so that it is decelerating in this direction, it is still useful to label it "projectile motion", since this motion is also observable in projected (launched) objects moving through a fluid.
The necessary and sufficient conditions for projectile motion to occur are: 1) the velocity remains constant along one of the axes; 2) a constant acceleration acts along (or opposite to) the other axis. Hope that answers your question.
If you want the $x$ component of the velocity to change, some force must act along $x$ as well. Suppose, for instance, that a gravity-like deceleration also acts along the $x$ axis; then there are three cases. If the $x$ velocity reaches zero while the object is still in the air, the trajectory steepens until the object falls straight down. If the $x$ deceleration continues after that, the object is pushed back toward its throwing point, much like a projectile thrown straight up. If the object lands before the $x$ velocity reaches zero, the motion still resembles a projectile, but the slope of the trajectory keeps increasing throughout the flight. As for the equations, if the same deceleration $g$ acts along both axes, then $x = v\cos\theta\, t - gt^2/2$ and $y = v\sin\theta\, t - gt^2/2$, so $y - x = vt(\sin\theta - \cos\theta)$: the difference grows linearly in time, while the slope of the trajectory gradually changes with time depending on $\theta$. In particular, if $\theta = 45°$ then $y = x$ throughout, and the object returns to its throwing point along the same line.
|
How do we find the coefficients \(\boldsymbol{e_{ij}}\) for a principal component? Section
The solution involves the eigenvalues and eigenvectors of the variance-covariance matrix \(Σ\).
Let \(λ_1\) through \(λ_p\) denote the eigenvalues of the variance-covariance matrix \(Σ\). These are ordered so that \(λ_1\) is the largest eigenvalue and \(λ_p\) is the smallest.
\(\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_p\)
Let the vectors \(\boldsymbol{e}_1\) through \(\boldsymbol{e}_p\)
\(\boldsymbol{e}_1 , \boldsymbol{e}_2 , \dots , \boldsymbol{e}_p\)
denote the corresponding eigenvectors. It turns out that the elements for these eigenvectors are the coefficients of our principal components.
The variance of the \(i\)th principal component is equal to the \(i\)th eigenvalue.
\(var(Y_i) = \text{var}(e_{i1}X_1 + e_{i2}X_2 + \dots e_{ip}X_p) = \lambda_i\)
Moreover, the principal components are uncorrelated with one another.
\(\text{cov}(Y_i, Y_j) = 0 \text{ for } i \neq j\)
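Both properties can be checked numerically. A short sketch using NumPy, with simulated data and a hypothetical mixing matrix: the sample variance of the \(i\)th component equals the \(i\)th eigenvalue of the sample covariance matrix, and distinct components are uncorrelated.

```python
import numpy as np

# simulated data built from a hypothetical 3x3 mixing matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) @ np.array([[2.0, 0.3, 0.0],
                                          [0.0, 1.0, 0.5],
                                          [0.0, 0.0, 0.2]])

S = np.cov(X, rowvar=False)        # sample variance-covariance matrix
lam, E = np.linalg.eigh(S)         # eigh returns ascending eigenvalues
lam, E = lam[::-1], E[:, ::-1]     # reorder so lambda_1 is the largest

Y = (X - X.mean(axis=0)) @ E       # principal component scores

C = np.cov(Y, rowvar=False)
assert np.allclose(np.diag(C), lam)                           # var(Y_i) = lambda_i
assert np.allclose(C - np.diag(np.diag(C)), 0.0, atol=1e-10)  # cov(Y_i, Y_j) = 0
```

The columns of `E` are the eigenvectors \(\mathbf{e}_i\), which `eigh` returns with unit norm, matching the constraint \(\mathbf{e}'_i\mathbf{e}_i = 1\).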
The variance-covariance matrix may be written as a function of the eigenvalues and their corresponding eigenvectors. This is determined by the Spectral Decomposition Theorem. This will become useful later when we investigate topics under factor analysis.
Spectral Decomposition Theorem Section
The variance-covariance matrix can be written as the sum over the
p eigenvalues, multiplied by the product of the corresponding eigenvector times its transpose as shown in the first expression below:
\begin{align} \Sigma & = \sum_{i=1}^{p}\lambda_i \mathbf{e}_i \mathbf{e}_i' \\ & \cong \sum_{i=1}^{k}\lambda_i \mathbf{e}_i\mathbf{e}_i'\end{align}
The second expression is a useful approximation if \(\lambda_{k+1}, \lambda_{k+2}, \dots , \lambda_{p}\) are small. We may approximate Σ by
\(\sum\limits_{i=1}^{k}\lambda_i\mathbf{e}_i\mathbf{e}_i'\)
Again, this is more useful when we talk about factor analysis.
Earlier in the course we defined the total variation of \(\mathbf{X}\) as the trace of the variance-covariance matrix, that is the sum of the variances of the individual variables. This is also equal to the sum of the eigenvalues as shown below:
\begin{align} trace(\Sigma) & = \sigma^2_1 + \sigma^2_2 + \dots +\sigma^2_p \\ & = \lambda_1 + \lambda_2 + \dots + \lambda_p\end{align}
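Both identities, the spectral decomposition and the trace-eigenvalue equality, are easy to verify numerically (a sketch with a hypothetical covariance matrix):

```python
import numpy as np

# a hypothetical variance-covariance matrix
Sigma = np.array([[4.0, 1.5, 0.5],
                  [1.5, 3.0, 1.0],
                  [0.5, 1.0, 2.0]])

lam, E = np.linalg.eigh(Sigma)

# Sigma equals the sum over i of lambda_i * e_i e_i'
recon = sum(l * np.outer(e, e) for l, e in zip(lam, E.T))
assert np.allclose(recon, Sigma)

# trace(Sigma) equals the sum of the eigenvalues
assert np.isclose(np.trace(Sigma), lam.sum())
```

Dropping the terms with the smallest eigenvalues from the sum gives the approximation in the second expression above.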
This will give us an interpretation of the components in terms of the amount of the full variation explained by each component. The proportion of variation explained by the
ith principal component is then defined to be the eigenvalue for that component divided by the sum of the eigenvalues. In other words, the ith principal component explains the following proportion of the total variation:
\(\dfrac{\lambda_i}{\lambda_1 + \lambda_2 + \dots + \lambda_p}\)
A related quantity is the proportion of variation explained by the first k principal components. This is the sum of the first k eigenvalues divided by the total variation.
\(\dfrac{\lambda_1 + \lambda_2 + \dots + \lambda_k}{\lambda_1 + \lambda_2 + \dots + \lambda_p}\)
Naturally, if the proportion of variation explained by the first k principal components is large, then not much information is lost by considering only the first k principal components.

Why It May Be Possible to Reduce Dimensions Section
When we have correlation (multicollinearity) between the x-variables, the data may more or less fall on a line or plane in a lower number of dimensions. For instance, imagine a plot of two x-variables that have a nearly perfect correlation. The data points will fall close to a straight line. That line could be used as a new (one-dimensional) axis to represent the variation among data points. As another example, suppose that we have verbal, math, and total SAT scores for a sample of students. We have three variables, but really (at most) two dimensions to the data because total = verbal + math, meaning the third variable is completely determined by the first two. The reason for saying "at most" two dimensions is that if there is a strong correlation between verbal and math, then it may be possible that there is only one true dimension to the data.
Note!
All of this is defined in terms of the population variance-covariance matrix Σ which is unknown. However, we may estimate Σ by the sample variance-covariance matrix given in the standard formula here:
\(\textbf{S} = \frac{1}{n-1} \sum\limits_{i=1}^{n}(\mathbf{X}_i-\bar{\textbf{x}})(\mathbf{X}_i-\bar{\textbf{x}})'\)
Procedure Section
Compute the eigenvalues \(\hat{\lambda}_1, \hat{\lambda}_2, \dots, \hat{\lambda}_p\) of the sample variance-covariance matrix
S, and the corresponding eigenvectors \(\hat{\mathbf{e}}_1, \hat{\mathbf{e}}_2, \dots, \hat{\mathbf{e}}_p\).
Then we define the estimated principal components using the eigenvectors as the coefficients:
\begin{align} \hat{Y}_1 & = \hat{e}_{11}X_1 + \hat{e}_{12}X_2 + \dots + \hat{e}_{1p}X_p \\ \hat{Y}_2 & = \hat{e}_{21}X_1 + \hat{e}_{22}X_2 + \dots + \hat{e}_{2p}X_p \\&\vdots\\ \hat{Y}_p & = \hat{e}_{p1}X_1 + \hat{e}_{p2}X_2 + \dots + \hat{e}_{pp}X_p \\ \end{align}
Generally, we only retain the first k principal components. Here we must balance two conflicting desires:
To obtain the simplest possible interpretation, we want k to be as small as possible. If we can explain most of the variation by just two principal components, this gives us a simple description of the data. When k is small, the first k components explain a large portion of the overall variation; if the first few components explain only a small amount of variation, we need more of them to explain a desired percentage of the total variance, resulting in a large k.
To avoid loss of information, we want the proportion of variation explained by the first k principal components to be large, ideally as close to one as possible; i.e., we want
\(\dfrac{\hat{\lambda}_1 + \hat{\lambda}_2 + \dots + \hat{\lambda}_k}{\hat{\lambda}_1 + \hat{\lambda}_2 + \dots + \hat{\lambda}_p} \cong 1\)
|
Example 11-2: Places Rated Section
We will use the Places Rated Almanac data (Boyer and Savageau) which rates 329 communities according to nine criteria:
Climate and Terrain
Housing
Health Care & Environment
Crime
Transportation
Education
The Arts
Recreation
Economics
Notes: The data for many of the variables are strongly skewed to the right. The log transformation was used to normalize the data.
Download the text file that contains the data here: places.txt
Using SAS
The SAS program will implement the principal component procedures:
Download the SAS program here: places.sas. View the video explanation of the SAS code.
When you examine the output, the first thing that SAS does is provide summary information. There are 329 observations representing the 329 communities in our dataset and 9 variables. This is followed by simple statistics that report the means and standard deviations for each variable.
Below this is the variance-covariance matrix for the data. You should be able to see that the variance reported for climate is 0.01289.
What we really need to draw our attention to here is the eigenvalues of the variance-covariance matrix. In the SAS output, the eigenvalues are in ranked order from largest to smallest. These values appear in Table 1 below for discussion.
Using Minitab
View the video below to see how to perform a principal components analysis of the places_rated.txt data using the Minitab statistical software application.
Data Analysis Step 1: Examine the eigenvalues to determine how many principal components should be considered:
Table 1. Eigenvalues and the proportion of variation explained by the principal components.
Component   Eigenvalue   Proportion   Cumulative
1           0.3775       0.7227       0.7227
2           0.0511       0.0977       0.8204
3           0.0279       0.0535       0.8739
4           0.0230       0.0440       0.9178
5           0.0168       0.0321       0.9500
6           0.0120       0.0229       0.9728
7           0.0085       0.0162       0.9890
8           0.0039       0.0075       0.9966
9           0.0018       0.0034       1.0000
Total       0.5225
If you take all of these eigenvalues and add them up, you get the total variance of 0.5225.
The proportion of variation explained by each eigenvalue is given in the third column. For example, 0.3775 divided by 0.5225 equals approximately 0.7227; about 72% of the variation is explained by this first eigenvalue. The cumulative percentage explained is obtained by adding the successive proportions of variation explained to obtain the running total. For instance, 0.7227 plus 0.0977 equals 0.8204, and so forth. Therefore, about 82% of the variation is explained by the first two eigenvalues together.
Next we need to look at successive differences between the eigenvalues. Subtracting the second eigenvalue, 0.0511, from the first, 0.3775, we get a difference of 0.3264. The difference between the second and third eigenvalues is 0.0232; the next difference is 0.0049. Subsequent differences are even smaller. A sharp drop from one eigenvalue to the next may serve as another indicator of how many eigenvalues to consider.
The first three principal components explain 87% of the variation. This is an acceptably large percentage.
An alternative method to determine the number of principal components is to look at a Scree Plot. With the eigenvalues ordered from largest to smallest, a scree plot is the plot of \(\hat{\lambda}_i\) versus \(i\). The number of components is determined at the point beyond which the remaining eigenvalues are all relatively small and of comparable size. The following plot is made in Minitab.
The scree plot for the variables without standardization (covariance matrix)
As you can see, we could have stopped at the second principal component, but we continued to the third. Relatively speaking, the contribution of the third component is small compared to that of the second.
Step 2: Next, we compute the principal component scores. For example, the first principal component can be computed using the elements of the first eigenvector:
\begin{align}\hat{Y}_1 & = 0.0351 \times (\text{climate}) + 0.0933 \times (\text{housing}) + 0.4078 \times (\text{health})\\ & + 0.1004 \times (\text{crime}) + 0.1501 \times (\text{transportation}) + 0.0321 \times (\text{education}) \\ & + 0.8743 \times (\text{arts}) + 0.1590 \times (\text{recreation}) + 0.0195 \times (\text{economy})\end{align}
In order to complete this formula and compute the principal component for the individual community of interest, plug in that community's values for each of these variables. A fairly standard procedure is to use the difference between the variables and their sample means rather than the raw data. This is known as a translation of the random variables. Translation does not affect the interpretations because the variances of the original variables are the same as those of the translated variables.
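A minimal sketch of that computation (the coefficients are the first-eigenvector entries quoted above; the community values and sample means are placeholders you would replace with real data):

```python
# First-eigenvector coefficients from the formula for Y-hat_1 above
coefs = {'climate': 0.0351, 'housing': 0.0933, 'health': 0.4078,
         'crime': 0.1004, 'transportation': 0.1501, 'education': 0.0321,
         'arts': 0.8743, 'recreation': 0.1590, 'economy': 0.0195}

def pc1_score(community, means):
    """First principal component score: e1' (x - xbar).

    Translating by the sample means does not change the variances,
    so the interpretation of the component is unaffected."""
    return sum(coefs[k] * (community[k] - means[k]) for k in coefs)
```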
The magnitudes of the coefficients give the contributions of each variable to that component. However, these magnitudes also depend on the variances of the corresponding variables.
|
L # 1
Show that
It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Offline
Last edited by krassi_holmz (2006-03-09 02:44:53)
IPBLE: Increasing Performance By Lowering Expectations.
L # 2
If
Let
log x = x', log y = y', log z = z'. Then:
x'+y'+z'=0.
Rewriting in terms of x' gives:
Well done, krassi_holmz!
L # 3
If x²y³=a and log (x/y)=b, then what is the value of (logx)/(logy)?
log a = 2 log x + 3 log y
b = log x - log y
log a + 3b = 5 log x
log a - 2b = 3 log y + 2 log y = 5 log y
log x / log y = (log a + 3b) / (log a - 2b).
Last edited by krassi_holmz (2006-03-10 20:06:29)
Very well done, krassi_holmz!
L # 4
You are not supposed to use a calculator or log tables for L # 4. Try again!
Last edited by JaneFairfax (2009-01-04 23:40:20)
No, I didn't
I remember
You still used a calculator / log table in the past to get those figures (or someone else did and showed them to you). I say again:
no calculators or log tables to be used (directly or indirectly) at all!! Last edited by JaneFairfax (2009-01-06 00:30:04)
log a = 2 log x + 3 log y
b = log x - log y
log a + 3b = 5 log x
log a - 2b = 3 log y + 2 log y = 5 log y
log x / log y = (log a + 3b) / (log a - 2b)
Hi ganesh
for L # 1: since log_b(a) = 1/log_a(b) and log_a(a) = 1, we have
1/log_a(abc) + 1/log_b(abc) + 1/log_c(abc) = log_{abc}(a) + log_{abc}(b) + log_{abc}(c) = log_{abc}(abc) = 1
Best Regards
Riad Zaidan
Hi ganesh
for L # 2 I think that the following proof is easier:
Assume Log(x)/(b-c) = Log(y)/(c-a) = Log(z)/(a-b) = t.
So Log(x) = t(b-c), Log(y) = t(c-a), Log(z) = t(a-b).
So Log(x) + Log(y) + Log(z) = tb - tc + tc - ta + ta - tb = 0.
So Log(xyz) = 0, so xyz = 1. Q.E.D.
Best Regards
Riad Zaidan
Gentlemen,
Thanks for the proofs.
Regards.
\(\log_2(16) = \log_2 \left( \frac{64}{4} \right) = \log_2(64) - \log_2(4) = 6 - 2 = 4\)
\(\log_2(\sqrt[3]{4}) = \frac{1}{3} \log_2(4) = \frac{2}{3}\)
L # 4
I don't want a method that will rely on defining certain functions, taking derivatives,
noting concavity, etc.
Change of base:
Each side is positive, and multiplying by the positive denominator
keeps whatever direction of the alleged inequality the same direction:
On the right-hand side, the first factor is equal to a positive number less than 1,
while the second factor is equal to a positive number greater than 1. These facts are by inspection combined with the nature of exponents/logarithms.
Because of (log A)B = B(log A) = log(A^B), I may turn this into:
I need to show that
Then
Then 1 (on the left-hand side) will be greater than the value on the
right-hand side, and the truth of the original inequality will be established.
I want to show
Raise a base of 3 to each side:
Each side is positive, and I can square each side:
-----------------------------------------------------------------------------------
Then I want to show that when 2 is raised to a number equal to
(or less than) 1.5, then it is less than 3.
Each side is positive, and I can square each side:
Last edited by reconsideryouranswer (2011-05-27 20:05:01)
Signature line:
I wish a had a more interesting signature line.
Hi reconsideryouranswer,
This problem was posted by JaneFairfax. I think it would be appropriate she verify the solution.
Hi all,
I saw this post today and saw the probs on log. Well, they are not bad, they are good. But you can also try these problems here by me (Credit: to a book):
http://www.mathisfunforum.com/viewtopic … 93#p399193
Practice makes a man perfect.
There is no substitute to hard work All of us do not have equal talents but everybody has equal oppurtunities to build their talents.-APJ Abdul Kalam
JaneFairfax, here is a basic proof of L4:
For all real a > 1, y = a^x is a strictly increasing function.
log(base 2)3 versus log(base 3)5
2*log(base 2)3 versus 2*log(base 3)5
log(base 2)9 versus log(base 3)25
2^3 = 8 < 9, so log(base 2)9 > 3
3^3 = 27 > 25, so log(base 3)25 < 3
So the left-hand side is greater than the right-hand side, because
its (doubled) logarithm is the larger number.
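A quick numeric check of the comparison (not in the original thread), using Python's math module:

```python
import math

# The two quantities compared in L # 4
lhs = math.log(3, 2)        # log base 2 of 3, about 1.585
rhs = math.log(5, 3)        # log base 3 of 5, about 1.465

# The doubled (squared-argument) comparison used in the proof above
log2_9 = math.log(9, 2)     # > 3, since 2^3 = 8 < 9
log3_25 = math.log(25, 3)   # < 3, since 3^3 = 27 > 25
```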
|
Example 11-3: Places Rated (after Standardization)
Using SAS
The SAS program implements the principal component procedures with standardized data:
Download the SAS program here: places1.sas. View the video explanation of the SAS code.
The output begins with descriptive information including the means and standard deviations for the individual variables presented.
This is followed by the Correlation Matrix for the data. For example, the correlation between the housing and climate data is only 0.273. (No hypotheses that these correlations equal zero are being tested here.) We will use this correlation matrix, instead of the covariance matrix, to obtain our eigenvalues and eigenvectors.
Using Minitab
View the video below to see how to perform a principal components analysis using the correlation matrix with the Minitab statistical software application.
Analysis
We need to focus on the eigenvalues of the correlation matrix that correspond to each of the principal components. In this case, the total variation of the standardized variables is equal to p, the number of variables. After standardization each variable has variance equal to one, and the total variation is the sum of these variances; in this case the total variation will be 9.
The eigenvalues of the correlation matrix are given in the second column in the table below. The proportion of variation explained by each of the principal components as well as the cumulative proportion of the variation explained are also provided.
Step 1
Examine the eigenvalues to determine how many principal components to consider:
Component  Eigenvalue  Proportion  Cumulative
1          3.2978      0.3664      0.3664
2          1.2136      0.1348      0.5013
3          1.1055      0.1228      0.6241
4          0.9073      0.1008      0.7249
5          0.8606      0.0956      0.8205
6          0.5622      0.0625      0.8830
7          0.4838      0.0538      0.9368
8          0.3181      0.0353      0.9721
9          0.2511      0.0279      1.0000
The first principal component explains about 37% of the variation. Furthermore, the first four principal components explain 72%, while the first five principal components explain 82% of the variation. Compare these proportions with those obtained using non-standardized variables. This analysis is going to require a larger number of components to explain the same amount of variation as the original analysis using the variance-covariance matrix. This is not unusual.
In most cases, the required cut-off is pre-specified; i.e., how much of the variation to be explained is pre-determined. For instance, I might state that I would be satisfied if I could explain 70% of the variation. If we do this, then we would select components until we reach at least 70% of the variation explained. This would be one approach. This type of judgment is arbitrary and hard to make if you are not experienced with these types of analyses. The goal - to some extent - also depends on the type of problem at hand.
Another approach would be to plot the differences between the ordered values and look for a break or a sharp drop. The only sharp drop that is noticeable in this case is after the first component. One might, based on this, select only one component. However, one component is probably too few, particularly because we have only explained 37% of the variation. Consider the scree plot based on the standardized variables.
The scree plot for standardized variables (correlation matrix)
Step 2
Next, we can compute the principal component scores using the eigenvectors. This is a formula for the first principal component:
\(\begin{array}{lll} \hat{Y}_1 & = & 0.158 \times Z_{\text{climate}} + 0.384 \times Z_{\text{housing}} + 0.410 \times Z_{\text{health}}\\ & & + 0.259 \times Z_{\text{crime}} + 0.375 \times Z_{\text{transportation}} + 0.274 \times Z_{\text{education}} \\ & & + 0.474 \times Z_{\text{arts}} + 0.353 \times Z_{\text{recreation}} + 0.164 \times Z_{\text{economy}}\end{array}\)
And remember, this is now a function of the standardized data, not of the raw data.
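As a sketch, the standardized score can be computed like this (eigenvector entries from the formula above; the means and standard deviations are placeholders for the sample statistics):

```python
# First-eigenvector coefficients for the correlation-matrix PCA, in the
# variable order climate, housing, health, crime, transportation,
# education, arts, recreation, economy
coefs = [0.158, 0.384, 0.410, 0.259, 0.375, 0.274, 0.474, 0.353, 0.164]

def pc1_standardized(x, means, sds):
    """Standardize each variable, then dot with the first eigenvector."""
    z = [(xi - m) / s for xi, m, s in zip(x, means, sds)]
    return sum(c * zi for c, zi in zip(coefs, z))
```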
The magnitudes of the coefficients give the contributions of each variable to that component. Because the data have been standardized, the coefficients do not depend on the variances of the corresponding variables.
Step 3
Let's look at the coefficients for the principal components. In this case, because the data are standardized, the relative magnitude of each coefficient can be directly assessed within a column. Each column here corresponds with a column in the output of the program labeled Eigenvectors.
Principal Component coefficients (components 1-5):
Variable        1      2      3      4      5
Climate         0.158  0.069  0.800  0.377  0.041
Housing         0.384  0.139  0.080  0.197  -0.580
Health          0.410  -0.372 -0.019 0.113  0.030
Crime           0.259  0.474  0.128  -0.042 0.692
Transportation  0.375  -0.141 -0.141 -0.430 0.191
Education       0.274  -0.452 -0.241 0.457  0.224
Arts            0.474  -0.104 0.011  -0.147 0.012
Recreation      0.353  0.292  0.042  -0.404 -0.306
Economy         0.164  0.540  -0.507 0.476  -0.037
Interpretation of the principal components is based on which variables are most strongly correlated with each component. In other words, we need to decide which numbers are large within each column. In the first column, we see that Health and Arts are large. This is very arbitrary. Other variables might have also been included as part of this first principal component.
Component Summaries Section First Principal Component Analysis - PCA1
The first principal component is a measure of the quality of Health and the Arts, and to some extent Housing, Transportation, and Recreation. This component is associated with high ratings on all of these variables, especially Health and Arts. They are all positively related to PCA1 because they all have positive signs.
Second Principal Component Analysis - PCA2
The second principal component is a measure of the severity of crime, the quality of the economy, and the lack of quality in education. PCA2 is associated with high ratings of Crime and Economy and low ratings of Education. Here we can see that PCA2 distinguishes cities with high levels of crime and good economies from cities with poor educational systems.
Third Principal Component Analysis - PCA3
The third principal component is a measure of the quality of the climate and poorness of the economy. PCA3 is associated with high Climate ratings and low Economy ratings. The inclusion of economy within this component will add a bit of redundancy within our results. This component is primarily a measure of climate, and to a lesser extent the economy.
Fourth Principal Component Analysis - PCA4
The fourth principal component is a measure of the quality of education and the economy and the poorness of the transportation network and recreational opportunities. PCA4 is associated with high Education and Economy ratings and low Transportation and Recreation ratings.
Fifth Principal Component Analysis - PCA5
The fifth principal component is a measure of the severity of crime and the quality of housing. PCA5 is associated with high Crime ratings and low housing ratings.
|
Assuming a square base of width $w$ with mass $M$, and a horizontal wind load $F_w$ at a height $h$, then the condition for static equilibrium is
$$\sum\tau = 0$$$$\tau_{wind} + \tau_{base} = 0$$
Since if the sign tips, it would rotate around the edge of the base, that's a convenient axis about which to compute the torques:$$- F_w \frac{h}{2} + M g \frac{w}{2} = 0$$
Solving for M...
$$M = \frac{F_w h}{g w}$$
Plugging in your numbers (0.94 lbs-force = 4.18 N, 9 ft = 2.74 meters),
$$M = \frac{11.46 ~\rm Nm}{9.81 ~\rm{m/s^2} ~w} $$$$M = \frac{1.17 ~\rm kg~m}{w} $$
If you prefer lbs and feet,
$$M = \frac{8.46 ~\rm lbs~ft}{w}$$
For example, if your square base has a width of 3 feet, then you need a minimum of $M = 2.82 ~\rm lbs$
Note 1: This analysis assumes the mass of the sign itself is negligible compared to the base. This is a conservative assumption, since extra mass on the sign will make it more stable for initial tipping.
Note 2: I'm very skeptical that a sign of any significant size would only experience a wind load of 0.94 lbs in any significant wind. I would double-check that figure.
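A small sketch of the final formula (the function name and the sample numbers are mine, not the OP's):

```python
G = 9.81  # gravitational acceleration, m/s^2

def min_base_mass(wind_force_n, sign_height_m, base_width_m):
    """Minimum base mass (kg) so the restoring torque M*g*w/2 balances
    the tipping torque F_w*h/2 about the base edge: M = F_w*h/(g*w)."""
    return wind_force_n * sign_height_m / (G * base_width_m)
```

Doubling the base width halves the required mass, as the formula shows.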
EDIT: I revised my answer now that the OP made it clear that the sign is a rectangle that extends from the ground up to 9 feet.
|
Math question on Newton's method and detecting actual zeros
02-07-2017, 05:04 PM
Post: #1
Math question on Newton's method and detecting actual zeros
(Admins: If this is in the wrong forum, please feel free to move it)
This came up during a debugging process in which Newton's method (using backtracking linesearch) gave me a solution to the system
\[ \frac{x\cdot y}{x+y} = 127\times 10^{-12}, \quad \left( \frac{x+y}{x} \right)^2 = 8.377 \]
(This problem was posed on the HP Prime subforum: http://hpmuseum.org/forum/thread-7677.html)
One solution I found was: \( x=1.94043067156\times 10^{-10}, \
y=3.67576704293\times 10^{-10} \) (hopefully no typos).
On the Prime, the errors for the equations are on the order of \(10^{-19} \) and \(10^{-11}\) for the first and second equations, respectively (again, assuming I made no typos copying). So my question is: should a numerical solver treat \(1.27\times 10^{-10}\) as "significant" or as 0 (especially when it comes time to check for convergence, when the tolerance for \( |f_i| \) might be set to, say, \( 10^{-10} \) -- here \( f_i \) is the i-th equation in the system, set equal to 0)?
Graph 3D | QPI | SolveSys
02-07-2017, 06:45 PM
Post: #2
RE: Math question on Newton's method and detecting actual zeros
.
Hi, Han:
(02-07-2017 05:04 PM)Han Wrote: (Admins: If this is in the wrong forum, please feel free to move it)
Your system is trivial to solve by hand, like this:
1) Parameterize:
y = t*x
2) Substitute y=t*x into the first equation (a = 127E-12):
x*t*x = a*(x+t*x) -> t*x^2 = a*(1+t)*x -> (assuming x is not 0, which would make the second equation meaningless) t*x = a*(1+t) -> x = a*(1+t)/t
3) Substitute y=t*x in the second equation (b=8.377)
(1+t)^2 = b -> 1+t= sqr(b) -> t = sqr(b)-1 or t = -sqr(b)-1
4) let's consider the first case (the second is likewise):
t = sqr(b)-1 = 1.8943047524405580466334231771918
5) substitute the value of t in the first equation above in (2):
x = a*(1+t)/t = 1.9404306676968291608003859882111e-10
6) now, y=t*x, so:
y = t*x = 3.6757670355995087192244474350336e-10
which gives your solution. Taking the negative sqrt would give another.
As for your question, the best way to check for convergence is not to rely on some tolerance for the purported zero value when evaluating both equations at the computed x,y approximations in every iteration, but rather to stop when consecutive approximations differ by less than a user-set tolerance expressed in ulps, i.e. units in the last place.
For instance, if you're making your computation with 10 digits and you set your tolerance to 2 ulps, you would stop iterating as soon as consecutive approximations for both x and y have 8 digits in common (mantissa digits, regardless of the exponents, which of course should be the same).
Once you stop the iterations you should then check the values of f(x,y) and g(x,y) to determine whether you've found a root, a pole, or an extremum (maximum, minimum) but as far as stopping the iterations is concerned, the tolerance in ulps is the one to use for best results as it is completely independent of the magnitude of the roots, they might be of the order of 1E38 or of 1E-69 and it wouldn't matter.
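A rough Python sketch of that advice (mine, not Valentin's), applied to the system above: a plain Newton iteration with an analytic Jacobian, stopping when consecutive approximations agree to a relative tolerance (a practical stand-in for "within so many ulps") rather than when the residuals look small:

```python
def solve_system(x, y, rel_tol=1e-13, max_iter=50):
    """Newton's method for
         f(x, y) = x*y/(x + y) - 127e-12   = 0
         g(x, y) = ((x + y)/x)**2 - 8.377  = 0
    The stopping test compares consecutive iterates, not |f| and |g|:
    the unknowns themselves are ~1e-10, so a fixed tolerance like
    1e-10 on |f_i| says little here."""
    a, b = 127e-12, 8.377
    for _ in range(max_iter):
        f = x * y / (x + y) - a
        g = ((x + y) / x) ** 2 - b
        # Analytic Jacobian
        j11 = (y / (x + y)) ** 2             # df/dx
        j12 = (x / (x + y)) ** 2             # df/dy
        j21 = -2.0 * (x + y) * y / x ** 3    # dg/dx
        j22 = 2.0 * (x + y) / x ** 2         # dg/dy
        det = j11 * j22 - j12 * j21
        # Newton step: solve J * [dx, dy]^T = -[f, g]^T by Cramer's rule
        dx = (-f * j22 + g * j12) / det
        dy = (-g * j11 + f * j21) / det
        x_new, y_new = x + dx, y + dy
        # Relative agreement of consecutive approximations
        if (abs(x_new - x) <= rel_tol * abs(x_new)
                and abs(y_new - y) <= rel_tol * abs(y_new)):
            return x_new, y_new
        x, y = x_new, y_new
    return x, y
```

Starting from x = y = 1e-10, the iteration homes in on the positive root computed in closed form above.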
Regards.
V.
.
02-07-2017, 08:03 PM
Post: #3
RE: Math question on Newton's method and detecting actual zeros
(02-07-2017 06:45 PM)Valentin Albillo Wrote: .
Thank you for the detailed solution; though in truth it was merely to present a case where a function might itself produce outputs that are extremely tiny. The math I understand quite well; it's the computer science part of implementing Newton's method that was giving me trouble. Your explanation above regarding ulps was precisely the answer I was looking for.
Graph 3D | QPI | SolveSys
User(s) browsing this thread: 1 Guest(s)
|
We define a class of subsets of $\Omega$ to be a Monotone Class $\mathcal{M}$ iff it has the following two properties:
$(1)$ if $A_i \in \mathcal{M}$ and $A_1 \subset A_2 \subset \dots $ then $\bigcup_{i\ge 1} A_i \in \mathcal{M}$
$(2)$ if $A_i \in \mathcal{M}$ and $A_1 \supset A_2 \supset \dots$ then $\bigcap_{i\ge 1} A_i \in \mathcal{M}$
But doesn't every class of subsets satisfy these properties? For a finite chain, if $A_i \in \mathcal{M}$ and $A_1 \subset A_2 \subset \dots \subset A_n$, then $\bigcup_{i=1}^{n} A_i = A_n \in \mathcal{M}$. Similarly, if $A_i \in \mathcal{M}$ and $A_1 \supset A_2 \supset \dots \supset A_n$, then $\bigcap_{i=1}^{n} A_i = A_n \in \mathcal{M}$.
Is there a class of subsets which isn't a monotone class?
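A small worked example (mine, not from the question) showing why the finite-chain argument doesn't settle the matter: the defining properties quantify over countably infinite chains, where the union need not be the largest member.

```latex
\text{Take } \Omega = \mathbb{N}, \qquad
\mathcal{C} = \bigl\{\, \{1, \dots, n\} : n \in \mathbb{N} \,\bigr\}.
% Every finite chain in C has union equal to its largest member,
% which lies in C, exactly as in the question.  But the increasing
% sequence A_n = {1, ..., n} has
\bigcup_{n \ge 1} \{1, \dots, n\} = \mathbb{N} \notin \mathcal{C},
% so C fails property (1): it is not a monotone class.
```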
|
High Energy Physics - Theory
Title: Scheme-Independent Series Expansions at an Infrared Zero of the Beta Function in Asymptotically Free Gauge Theories
(Submitted on 3 Oct 2016 (v1), last revised 6 Dec 2016 (this version, v2))
Abstract: We consider an asymptotically free vectorial gauge theory, with gauge group $G$ and $N_f$ fermions in a representation $R$ of $G$, having an infrared (IR) zero in the beta function at $\alpha_{IR}$. We present general formulas for scheme-independent series expansions of quantities, evaluated at $\alpha_{IR}$, as powers of an $N_f$-dependent expansion parameter, $\Delta_f$. First, we apply these to calculate the derivative $d\beta/d\alpha$ evaluated at $\alpha_{IR}$, denoted $\beta'_{IR}$, which is equivalent to the anomalous dimension of the ${\rm Tr}(F_{\mu\nu}F^{\mu\nu})$ operator, to order $\Delta_f^4$ for general $G$ and $R$, and to order $\Delta_f^5$ for $G={\rm SU}(3)$ and fermions in the fundamental representation. Second, we calculate the scheme-independent expansions of the anomalous dimension of the flavor-nonsinglet and flavor-singlet bilinear fermion antisymmetric Dirac tensor operators up to order $\Delta_f^3$. The results are compared with rigorous upper bounds on anomalous dimensions of operators in conformally invariant theories. Our other results include an analysis of the limit $N_c \to \infty$, $N_f \to \infty$ with $N_f/N_c$ fixed, calculation and analysis of Pad\'e approximants, and comparison with conventional higher-loop calculations of $\beta'_{IR}$ and anomalous dimensions as power series in $\alpha$.
Submission history
From: Robert Shrock [view email]
[v1] Mon, 3 Oct 2016 02:07:52 GMT (46kb)
[v2] Tue, 6 Dec 2016 15:28:52 GMT (46kb)
|
Definition (weak convergence): A sequence $x_n \in X$ converges weakly to $x_0$ if $\lim_{n\to\infty}f(x_n)=f(x_0)$ for every $f\in X'$.
Proposition: If a sequence converges in norm, then it converges weakly. Proof: $|f(x_n)-f(x_0)|=|f(x_n-x_0)|\leqslant \|f\|\,\|x_n-x_0\|\to 0$ as $n\to\infty$.
Then my doubt arises in the following example: Consider the space $c_0$ (the space of all sequences that converge to 0) with the max norm. Let $e_k=(0,\dots,0,1,0,0,\dots)$, with the $1$ in the $k$-th position; then $\|e_i-e_j\|=1$ for $i\neq j$, where $e_1,e_2,\dots$ is a basis of $c_0$.
$f\in (c_0)'\simeq \ell_1$
$f(x)=\sum_\limits{k=1}^{\infty}x_k\xi_k,\qquad \sum_\limits{k=1}^{\infty}|\xi_k|<\infty$
$\xi_k=f(e_k)$
Then $|f(e_k)|\to 0$ as $k\to\infty$
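A tiny numeric illustration (the particular $\ell_1$ sequence is my choice, not the author's): for $f$ given by an absolutely summable $(\xi_k)$, the values $f(e_k)=\xi_k$ tend to 0 even though $\|e_k\|=1$ for every $k$.

```python
# An l^1 sequence xi (assumption: geometric, so sum |xi_k| = 1 < infinity)
xi = [2.0 ** -(k + 1) for k in range(60)]

def f(x):
    """The functional on c_0 induced by xi: f(x) = sum_k x_k * xi_k."""
    return sum(xk * xik for xk, xik in zip(x, xi))

def e(k, n=60):
    """Unit vector e_k (1 in position k, 0 elsewhere), truncated to n terms."""
    return [1.0 if i == k else 0.0 for i in range(n)]
```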
Questions: 1) How does the author establish $\xi_k=f(e_k)$? 2) Is it correct to conclude that, since $|f(e_k)|\to 0$, the sequence converges weakly but not strongly?
Thanks in advance!
|
Let $H \subseteq \Bbb R^n$.
Prove that $H$ is compact $\iff$ every cover $\{E_{\alpha}\}_{\alpha \in A}$ where $E_{\alpha}$'s are relatively open in $H$ has a finite subcovering.
$\bf{Solution \ trial:}$
For $\Rightarrow$
Suppose $H$ is compact.
Suppose $\{E_{\alpha}\}$ is a relatively open covering of $H$. Since each $E_{\alpha}$ is relatively open in $H$,
$\exists$ an open set $U_{\alpha}$ such that $U_{\alpha} \cap H= E_{\alpha}$.
Then $\{U_{\alpha}\}$ is an open covering of $H$.
Since $H$ is compact, $\exists $ finite subset $A_0 \subset A$ such that $$H\subseteq \bigcup_{\alpha \in A_0} \{U_{\alpha}\}$$
Then, $\{E_{\alpha}\}_{\alpha\in A_0} $ is a finite subcovering of $\{E_{\alpha}\}_{\alpha\in A} $
For $\Leftarrow$
Suppose every relatively open cover of $H$ has a finite subcover, and let $\{U_{\alpha}\}_{\alpha\in A}$ be an arbitrary open covering of $H$.
Then $\{U_{\alpha}\cap H\}_{\alpha\in A}$ is a relatively open covering of $H$.
By assumption, $\exists$ a finite subset $A_0 \subset A$ such that $\{ U_{\alpha} \cap H\}_{\alpha \in A_0}$ covers $H$.
Hence $\{ U_{\alpha} \}_{\alpha \in A_0}$ covers $H$,
i.e. $H$ is compact.
Is the proof enough? Are there any mistakes or missing details in the solution?
Please correct them. Thank you.
|
S.3.1 Hypothesis Testing (Critical Value Approach)
The critical value approach involves determining "likely" or "unlikely" by determining whether or not the observed test statistic is more extreme than would be expected if the null hypothesis were true. That is, it entails comparing the observed test statistic to some cutoff value, called the "critical value." If the test statistic is more extreme than the critical value, then the null hypothesis is rejected in favor of the alternative hypothesis. If the test statistic is not as extreme as the critical value, then the null hypothesis is not rejected.
Specifically, the four steps involved in using the critical value approach to conducting any hypothesis test are:
1. Specify the null and alternative hypotheses.
2. Using the sample data and assuming the null hypothesis is true, calculate the value of the test statistic. To conduct the hypothesis test for the population mean μ, we use the t-statistic \(t^*=\frac{\bar{x}-\mu}{s/\sqrt{n}}\), which follows a t-distribution with n - 1 degrees of freedom.
3. Determine the critical value by finding the value of the known distribution of the test statistic such that the probability of making a Type I error — which is denoted \(\alpha\) (greek letter "alpha") and is called the "significance level" of the test — is small (typically 0.01, 0.05, or 0.10).
4. Compare the test statistic to the critical value. If the test statistic is more extreme in the direction of the alternative than the critical value, reject the null hypothesis in favor of the alternative hypothesis. If the test statistic is less extreme than the critical value, do not reject the null hypothesis.
Example S.3.1.1 Mean GPA
In our example concerning the mean grade point average, suppose we take a random sample of n = 15 students majoring in mathematics. Since n = 15, our test statistic t* has n - 1 = 14 degrees of freedom. Also, suppose we set our significance level α at 0.05, so that we have only a 5% chance of making a Type I error.
Right-Tailed
The critical value for conducting the right-tailed test H0: μ = 3 versus HA: μ > 3 is the t-value, denoted \(t_{\alpha, n-1}\), such that the probability to the right of it is \(\alpha\). It can be shown using either statistical software or a t-table that the critical value \(t_{0.05, 14}\) is 1.7613. That is, we would reject the null hypothesis H0: μ = 3 in favor of the alternative hypothesis HA: μ > 3 if the test statistic t* is greater than 1.7613. Visually, the rejection region is shaded red in the graph.
Left-Tailed
The critical value for conducting the left-tailed test H0: μ = 3 versus HA: μ < 3 is the t-value, denoted \(-t_{\alpha, n-1}\), such that the probability to the left of it is \(\alpha\). It can be shown using either statistical software or a t-table that the critical value \(-t_{0.05, 14}\) is -1.7613. That is, we would reject the null hypothesis H0: μ = 3 in favor of the alternative hypothesis HA: μ < 3 if the test statistic t* is less than -1.7613. Visually, the rejection region is shaded red in the graph.
Two-Tailed
There are two critical values for the two-tailed test H0: μ = 3 versus HA: μ ≠ 3 — one for the left tail, denoted \(-t_{\alpha/2, n-1}\), and one for the right tail, denoted \(t_{\alpha/2, n-1}\). The value \(-t_{\alpha/2, n-1}\) is the t-value such that the probability to the left of it is \(\alpha/2\), and the value \(t_{\alpha/2, n-1}\) is the t-value such that the probability to the right of it is \(\alpha/2\). It can be shown using either statistical software or a t-table that the critical value \(-t_{0.025, 14}\) is -2.1448 and the critical value \(t_{0.025, 14}\) is 2.1448. That is, we would reject the null hypothesis H0: μ = 3 in favor of the alternative hypothesis HA: μ ≠ 3 if the test statistic t* is less than -2.1448 or greater than 2.1448. Visually, the rejection region is shaded red in the graph.
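A minimal sketch of the comparison step (the critical values 1.7613 and 2.1448 are the tabled values quoted above for α = 0.05 with 14 degrees of freedom; the function itself is illustrative, not from the course materials):

```python
def reject_null(t_star, t_crit, tail):
    """Critical-value decision rule for a t-test.

    tail: 'right' (H_A: mu > mu_0), 'left' (H_A: mu < mu_0),
          or 'two' (H_A: mu != mu_0).
    t_crit is the positive critical value for the chosen tail."""
    if tail == 'right':
        return t_star > t_crit
    if tail == 'left':
        return t_star < -t_crit
    return abs(t_star) > t_crit   # two-tailed
```

For example, an observed t* = 2.0 rejects in the right-tailed test (2.0 > 1.7613) but not in the two-tailed test (2.0 < 2.1448), illustrating why the tail matters.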
|
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only)
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this on a classification of groups of order $p^2qr$. There order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know that whether we can know the structure of $H$ that can be present?
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish among 2 cases. First, suppose that the sylow $q$-subgroup of $G/F$ acts non trivially on the sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
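That replacement loop is straightforward to sketch in code. A minimal illustration of my own (not from any particular source): inverses are encoded by case-swapping, and the toy rules come from the presentation $\langle a \mid a^3\rangle$ of $\Bbb Z/3$, just to exercise the rewriting:

```python
def free_reduce(w):
    # Cancel adjacent inverse pairs; inverses are case-swapped letters.
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return ''.join(out)

def dehn_trivial(w, rules):
    """Dehn's algorithm: True iff w represents the identity, given a
    Dehn presentation encoded as pairs (u, v) with |u| > |v|."""
    w = free_reduce(w)
    while w:
        for u, v in rules:
            i = w.find(u)
            if i != -1:
                w = free_reduce(w[:i] + v + w[i + len(u):])
                break
        else:
            return False
    return True

# Toy rules from <a | a^3>: the relator aaa gives u = "aa", v = "A".
rules = [("aa", "A"), ("AA", "a")]
```

Since every replacement strictly shortens the word, the loop terminates in linear time in the length of the input.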
There is good motivation for such a definition here
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and an offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk; it's all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get into some good master's program in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch involves the study of graphs in connection with linear algebra; especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good things about Palka. Also, if you do not mind a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C(1+x)e^{-x}$, which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
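For what it's worth, the kernel claim is easy to sanity-check by computing the matrix of $F$ in the monomial basis. A quick sketch, assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
basis = [sp.Integer(1), x, x**2, x**3]  # basis of R_3[x]

def F(P):
    # F(P) = x P'' + (x + 1) P'''
    return sp.expand(x * sp.diff(P, x, 2) + (x + 1) * sp.diff(P, x, 3))

# Column j holds the coefficients of F applied to the j-th basis vector.
M = sp.Matrix([[F(b).coeff(x, i) for b in basis] for i in range(4)])
kernel = M.nullspace()
# Two basis vectors, the coefficient vectors of 1 and x,
# so ker(F) = {ax + b} as claimed.
```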
|
8.2.2.1.3 - Example: Milk
A study of 66,831 dairy cows found that the mean milk yield was 12.5 kg per milking with a standard deviation of 4.3 kg per milking (data from Berry, et al., 2013). Construct a 95% confidence interval for the average milk yield in the population.
First, let's compute the standard error:
\(SE=\frac{s}{\sqrt{n}}=\frac{4.3}{\sqrt{66831}}=0.0166\)
The standard error is small because the sample size is very large.
Next, let's find the \(t^*\) multiplier:
\(df=66831-1=66830\)
\(t^{*}=1.960\)
Now, we can construct our 95% confidence interval:
95% C.I.: \(12.5\pm1.960(0.017)=12.5\pm0.033=[12.467,\;12.533]\)
We are 95% confident that the mean milk yield in the population is between 12.467 and 12.533 kg per milking.
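The same arithmetic can be reproduced in a few lines of code. This is just a sketch of the computation above; the $t^*$ multiplier is hard-coded at 1.960, since with $df=66830$ the $t$ distribution is essentially normal:

```python
import math

n, mean, sd = 66831, 12.5, 4.3
se = sd / math.sqrt(n)        # standard error of the mean
t_star = 1.960                # t* multiplier for 95% confidence, df = 66830
margin = t_star * se
lower, upper = mean - margin, mean + margin
print(f"SE = {se:.4f}, 95% CI = [{lower:.3f}, {upper:.3f}]")
```

Running this prints `SE = 0.0166, 95% CI = [12.467, 12.533]`, matching the interval above.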
Berry, D. P., Coyne, J., Boughlan, B., Burke, M., McCarthy, J., Enright, B., Cromie, A. R., McParland, S. (2013). Genetics of milking characteristics in dairy cows. Animal, 7(11), 1750-1758.
|
Rahman, Mohammad Mahbubur (1997)
Central limit theorem for some classes of dynamical systems. Masters thesis, Concordia University.
Abstract
We consider a transformation T of the unit interval (0, 1) into itself which is piecewise $C\sp2$ and expanding. Using the spectral decomposition of the Frobenius-Perron operator of T, we give a proof of the Central Limit Theorem for$$\left({1\over n}\right)\sum\sbsp{i=0}{n-1}f\circ T\sp{i},$$where f is a function of bounded variation. It is also shown that the speed of convergence in the Central Limit Theorem is of the order ${1\over\sqrt n}.$
Divisions: Concordia University > Faculty of Arts and Science > Mathematics and Statistics Item Type: Thesis (Masters) Authors: Rahman, Mohammad Mahbubur Pagination: v, 71 leaves ; 29 cm. Institution: Concordia University Degree Name: M.Sc. Program: Mathematics Date: 1997 Thesis Supervisor(s): Gora, Pawel ID Code: 236 Deposited By: Concordia University Library Deposited On: 27 Aug 2009 17:10 Last Modified: 18 Jan 2018 17:13 Related URLs:
|
If a graph with $n$ vertices has more than $\frac{(n-1)(n-2)}{2}$ edges then it is connected.
I am a bit confused about this question, since I can always prove that for a graph to connected you need more than $|E|>n-1$ edges.
I am not sure what bothers you but as I see it you are confused about the following two facts
If a graph is connected then $e \geq n-1.$
If a graph has more than $e > \frac{(n-1)(n-2)}{2}$ then it is connected.
Notice that the implications in 1 and 2 are in opposite directions.
For a proof of 2. you can check out this link.
I think your problem might be to prove that you cannot construct an undirected graph with more than $\dfrac{(n-1)(n-2)}{2}$ edges that is not connected. You are thinking about it the wrong way: the $E = n - 1$ formula is about how few edges you can use to connect all the vertices.
Imagine you are an adversary trying to design a horrible highway system so that one town is disconnected. No matter how inefficiently you spend your roads, you'll still have to connect all the towns if there are so many roads.
Consider what the worst possible design could be, eg, the one that uses as many roads as possible but still leaves one town disconnected. How many edges does that have? What happens when you add one more edge to that?
1.As you mentioned we have:
$G\text{ is connected} \Rightarrow |V|-1 \le |E|$
But the other direction is not true, i.e:
$G\text{ is connected} \Leftrightarrow |V|-1 \le |E|$
is a wrong statement.
So you can not use it for further reasoning. Sample counter example is this graph ($K_t$ is a complete graph on $t$ vertices, and $\cup$ means disjoint union of graphs):
$G = K_{n-1} \cup K_1$
$G$ has ${n-1\choose 2}$ edges and $n$ nodes, and ${n-1\choose 2} > n-1$ for $n>4$.
2.On the other hand, to prove that :
${|V|-1 \choose 2} < |E| \Rightarrow G\text{ is connected}$
We can do it as follow:
Suppose not; then $G$ is a disjoint union of two graphs, $G=G_1\cup G_2$, with $|G_1| = k, |G_2| = n-k, 0<k<n$. If we connect all the vertices of $G_1$ and $G_2$ together to make a graph $G''$, then $|E_{G''}|\le {n \choose 2}$ (because $G''$ has at most as many edges as the complete graph), but:

${n-1 \choose 2} + 1 + k\cdot (n-k) \le |E_{G''}| \le {n \choose 2} \Rightarrow$

$(k-1)(n-k-1) + 1 \le 0$, which contradicts $0<k<n$.
A graph $G$ with $n$ nodes splits as $n=(n-1)+1$. Among disconnected graphs, the number of edges is maximized when one vertex is isolated and the remaining $n-1$ vertices form a complete graph, giving at most $C(n-1,2)$ edges.

So every graph with more than $C(n-1,2)$ edges is connected.
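Both directions of the discussion above (the bound implies connectivity, and the bound is tight) can be brute-force checked for small $n$. A quick sketch of my own, using a union-find connectivity test:

```python
from itertools import combinations

def is_connected(n, edges):
    # Union-find over vertices 0..n-1.
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)}) == 1

def check(n):
    all_edges = list(combinations(range(n), 2))
    bound = (n - 1) * (n - 2) // 2
    # Every graph with more than bound edges must be connected...
    for m in range(bound + 1, len(all_edges) + 1):
        for es in combinations(all_edges, m):
            if not is_connected(n, es):
                return False
    # ...and the bound is tight: K_{n-1} plus an isolated vertex
    # has exactly bound edges and is disconnected.
    return not is_connected(n, list(combinations(range(n - 1), 2)))

assert all(check(n) for n in range(3, 7))
```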
|
PCTeX Talk Discussions on TeX, LaTeX, fonts, and typesetting
Author Message murray Joined: 07 Feb 2006 Posts: 47 Location: Amherst, MA, USA
Posted: Wed Mar 15, 2006 10:53 pm Post subject: Y&Y TeX vs. MiKTeX with MathTimePro2 fonts I used updated fonts just prior to posting of 0.98.
Initially I had different page breaks with a test document of my own in the two TeX systems, but that seems to have disappeared once I updated geometry.sty in Y&Y to the same version I use with MiKTeX.
There are two other peculiarities, indirectly related to the fonts:
1. With Y&Y, when mtpro2.sty is loaded, I get messages that \vec is already defined, and then ditto for \grave, \acute, \check, \breve, \bar, \hat, \dot, \tilde, \ddot. Perhaps this is due to different versions of amsmath.sty?
2. In Y&Y, I must include
\usepackage[T1]{fontenc}
for otherwise I get message:
OT1/ptm/m/n/10.95=ptm7t at 10.95pt not loadable:
Metric (TFM) file not found
I am clearly using different psnfss package files with Y&Y than with MiKTeX. I tried updating the Y&Y versions to be the same as those for MiKTeX, but then all hell breaks loose over encodings -- with Y&Y expecting to find TeXnAnsi encodings and not finding them. (It may be that in Y&Y I have to update tfm's for Times, too. But I am loathe to mess further with Y&Y with respect to a working font configuration.) WaS Joined: 07 Feb 2006 Posts: 27 Location: Erlangen, Germany
Posted: Thu Mar 16, 2006 3:47 am Post subject: please, send me <w.a.schmidt@gmx.net> your test document and the
log files that would result with and without \usepackage[T1]{fontenc}
TIA
Walter WaS Joined: 07 Feb 2006 Posts: 27 Location: Erlangen, Germany
Posted: Fri Mar 17, 2006 9:27 am Post subject: Preliminary answers:
1) Using T1 encoding with Times cannot work on Y&Y-TeX.
Y&Y-TeX supports Times and other fonts from the non-TeX world
only with LY1 encoding.
2) Updating psnfss on Y&Y-TeX is pointless. The psnfss collection
supports the Base35 fonts with OT1 and T1/TS1 encoding, which
does not work on Y&Y-TeX; see above.
3) Loading fontenc should not be necessary at all, but
I do not yet understand why you get the error re. OT1/ptm.
Does it help to issue \usepackage[LY1]{fontenc} before loading
mtpro2?
4) The errors re. \vec etc. may be due to an obsolete amsmath.sty,
as compared with MikTeX. Please, run a minimal test document
that does not use amsmath to check this.
More info on Sunday. murray Joined: 07 Feb 2006 Posts: 47 Location: Amherst, MA, USA
Posted: Fri Mar 17, 2006 7:49 pm Post subject: Your answers identified the problems & solutions!
WaS wrote: Preliminary answers:
1) Using T1 encoding with Times cannot work on Y&Y-TeX....
2) Updating psnfss on Y&Y-TeX is pointless....
3) Loading fontenc should not be necessary at all, but
I do not yet understand why you get the error re. OT1/ptm.
Does it help to issue \usepackage[LY1]{fontenc} before loading
mtpro2?
4) The errors re. \vec etc. may be due to an obsolete amsmath.sty,
as compared with MikTeX. Please, run a minimal test document
that does not use amsmath to check this.
Re 3): Yes, \usepackage[LY1]{fontenc} in my test document avoids the error about OT1.
5) Yes, the error about \vec, etc., was due to an obsolete amsmath.sty. Refreshing the amsmath files fixed this.
Thank you!
|
Could anyone help with this problem? Thanks
A joint density function is given as follows:
$$f(x,y) =\begin{cases}{} 0.125\cdot (x+y+1) \ \ \text{for} -1<x<1, 0<y<2 \\ 0, \text{otherwise} \end{cases}$$
Calculate $P(X>Y)$
Just recall what the density function represents: the probability of an event $A$ is the integral of the density function over $A$. So you have to integrate the function over the set of points $A= \{(x,y) \mid x>y\}$. So $x$ can be any number in $[-1, 1]$, and $y$ has to be smaller than $x$ (and positive, since the density vanishes for $y\le 0$).
Hence, compute $\int\limits_{-1}^{1} \int\limits_{0}^{x} f(x,y) \, dy \, dx$.
As you integrate 0 in the inner integral whenever $x$ is negative, it is the same as $\int\limits_{0}^{1} \int\limits_{0}^{x} f(x,y) \, dy \, dx$. You can easily compute this.
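That last integral works out to exactly $\tfrac18$, and a crude midpoint Riemann sum (a sketch I added for verification) confirms it numerically:

```python
def p_x_gt_y(steps=400):
    # Midpoint Riemann sum of f(x, y) = 0.125*(x + y + 1)
    # over the region 0 < y < x < 1; the density vanishes for y <= 0,
    # so nothing is contributed when x is negative.
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        for j in range(i):          # y-midpoints strictly below x
            y = (j + 0.5) * h
            total += 0.125 * (x + y + 1) * h * h
    return total
```

The staircase along the diagonal introduces an $O(h)$ error, so the sum lands slightly below $0.125$ but well within tolerance.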
|
I decided to follow a recent trend and ask a question about logarithmic integrals :)
Is there a closed form for this integral? $$\int_0^1\frac{\log(x)\log^2(1-x)\log^2(1+x)}{x}\mathrm dx$$
This integral is equal to $$ -4\big( \zeta(-3,-1,-1,-1) +\zeta(-3,-1,1,-1) +\zeta(-3,1,-1,1) +\zeta(3,-1,-1,-1) +\zeta(3,-1,1,-1) +\zeta(3,1,-1,1) \big) $$ in terms of the multiple zeta function, which can also be simplified to $$ 2\zeta(-5,-1)-2\zeta(-5,1)+2\zeta(5,-1)+{\textstyle\frac32}\zeta(5,1)+4\zeta(-3,1,1,1), $$ of which only $$ \begin{aligned} \zeta(5,1) &= {\textstyle\frac34}\zeta(6)-{\textstyle\frac12}\zeta(3)^2 \\ \zeta(5,-1) &= {\textstyle\frac{111}{64}} \zeta (6)-{\textstyle\frac{9}{32}} \zeta (3)^2-{\textstyle\frac{31}{16}} \zeta (5) \log (2) \end{aligned} $$ have a known closed form (see also this article about Euler sums, and also Euler Sums and Contour Integral Representations by Philippe Flajolet and Bruno Salvy).
There is no closed form for this integral as the answer involves $\sum_{n=1}^\infty\frac{H_n}{n^52^n}$ and $\sum_{n=1}^\infty\frac{(-1)^nH_n}{n^5}$ which have no known closed form and here is how I found them:
Let $I$ denote our integral $\displaystyle \int_0^1\frac{\ln x\ln^2(1-x)\ln^2(1+x)}{x}\ dx$
Using the algebraic identity
$$12a^2b^2=(a+b)^4+(a-b)^4-2a^4-2b^4$$ and by letting $a=\ln(1-x)$ and $b=\ln(1+x)$ we can write our integral :
$$\small{12I=\underbrace{\int_0^1\frac{\ln x\ln^4(1-x^2)}{x}}_{1-x^2\mapsto x}+\underbrace{\int_0^1\frac{\ln x\ln^4\left(\frac{1-x}{1+x}\right)}{x}}_{\frac{1-x}{1+x}\mapsto x}-2\underbrace{\int_0^1\frac{\ln x\ln^4(1-x)}{x}}_{1-x\mapsto x}\ dx-2\int_0^1\frac{\ln x\ln^4(1+x)}{x}\ dx}$$
$$12I=-\frac74\underbrace{\int_0^1\frac{\ln(1-x)\ln^4x}{1-x}\ dx}_{K}+2\underbrace{\int_0^1\frac{\ln\left(\frac{1-x}{1+x}\right)\ln^4x}{1-x^2}\ dx}_{J}-2\underbrace{\int_0^1\frac{\ln x\ln^4(1+x)}{x}\ dx}_{M}$$
$$K=\int_0^1\frac{\ln(1-x)\ln^4x}{1-x}\ dx=-\sum_{n=1}^\infty H_n\int_0^1x^n\ln^4x\ dx\\ =-24\sum_{n=1}^\infty\frac{H_n}{(n+1)^5}=-24\sum_{n=1}^\infty\frac{H_n}{n^5}+24\zeta(6)=\boxed{12\zeta^2(3)-18\zeta(6)}$$
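As a numerical sanity check on this boxed value (my addition, assuming the mpmath library is available): $12\zeta^2(3)-18\zeta(6)\approx-0.9729$, and direct quadrature of the defining integral agrees:

```python
from mpmath import mp, quad, log, zeta

mp.dps = 30
# K = ∫_0^1 ln(1-x) ln^4(x) / (1-x) dx; tanh-sinh quadrature copes
# with the mild logarithmic endpoint behaviour.
K_num = quad(lambda t: log(1 - t) * log(t)**4 / (1 - t), [0, 1])
K_closed = 12 * zeta(3)**2 - 18 * zeta(6)
```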
To evaluate $J$ we are going to use the identity
$$\frac{1}{1-x^2}\ln\left(\frac{1-x}{1+x}\right)=\sum_{n=1}^{\infty}\left(H_n-2H_{2n}\right)x^{2n-1}$$
$$J=\int_0^1\frac{\ln\left(\frac{1-x}{1+x}\right)\ln^4x}{1-x^2}\ dx=\sum_{n=1}^{\infty}\left(H_n-2H_{2n}\right)\int_0^1x^{2n-1}\ln^4x\ dx\\ =\sum_{n=1}^{\infty}\left(H_n-2H_{2n}\right)\left(\frac{3}{4n^5}\right)=-\frac{93}{4}\sum_{n=1}^\infty\frac{H_n}{n^5}-24\sum_{n=1}^\infty\frac{(-1)^nH_n}{n^5}\\ =\boxed{\frac{93}{8}\zeta^2(3)-\frac{651}{16}\zeta(6)-24\sum_{n=1}^\infty\frac{(-1)^nH_n}{n^5}}$$
I managed to simplify $M$ here
$$M=-120\operatorname{Li}_6\left(\frac12\right)-72\ln2\operatorname{Li}_5\left(\frac12\right)-24\ln^22\operatorname{Li}_4\left(\frac12\right)+78\zeta(6)+\frac34\ln2\zeta(5)-\frac32\ln^22\zeta(4)-3\ln^32\zeta(3)+2\ln^42\zeta(2)+12\zeta^2(3)-12\ln2\zeta(2)\zeta(3)-\frac{17}{30}\ln^62+24\sum_{n=1}^\infty\frac{H_n}{n^52^n}$$
Combining the results of $K$, $J$ and $M$ we get
$$I=20\operatorname{Li}_6\left(\frac12\right)+12\ln2\operatorname{Li}_5\left(\frac12\right)+4\ln^22\operatorname{Li}_4\left(\frac12\right)-\frac{549}{32}\zeta(6) -\frac18\ln2\zeta(5)+\frac14\ln^22\zeta(4)\\ +\frac12\ln^32\zeta(3)-\frac13\ln^42\zeta(2)-\frac{29}{16}\zeta^2(3)+2\ln2\zeta(2)\zeta(3)\\ +\frac{17}{180}\ln^62-4\sum_{n=1}^\infty\frac{H_n}{n^52^n}-4\sum_{n=1}^\infty\frac{(-1)^nH_n}{n^5}$$
and here we see that the two sums have appeared, and because their numerical values (given by Wolfram) are different, they unfortunately do not cancel each other out. So the integral $I$ has no known closed form.
|
I'm a mathematician interested in abstract QFT. I'm trying to undersand why, under certain (all?) circumstances, we must have $T^2 = -1$ rather than $T^2 = +1$, where $T$ is the time reversal operator. I understand from the Wikipedia article that requiring that energy stay positive forces $T$ to be represented by an anti-unitary operator. But I don't see how this forces $T^2=-1$. (Or maybe it doesn't force it, it merely allows it?)
Here's another version of my question. There are two distinct double covers of the Lie group $O(n)$ which restrict to the familiar $Spin(n)\to SO(n)$ cover on $SO(n)$; they are called $Pin_+(n)$ and $Pin_-(n)$. If $R\in O(n)$ is a reflection and $\tilde{R}\in Pin_\pm(n)$ covers $R$, then $\tilde{R}^2 = \pm 1$. So saying that $T^2=-1$ means we are in $Pin_-$ rather than $Pin_+$. (I'm assuming Euclidean signature here.) My question (version 2): Under what circumstances are we forced to use $Pin_-$ rather than $Pin_+$ here?
(I posted a similar question on physics.stackexchange.com last week, but there were no replies.)
EDIT: Thanks to the half-integer spin hint in the comments below, I was able to do a more effective web search. If I understand correctly, Kramers' theorem says that for even-dimensional (half-integer spin) representations of the Spin group, $T$ must satisfy $T^2=-1$, while for the odd-dimensional representations (integer spin), we have $T^2=1$. I guess at this point it becomes a straightforward question in representation theory: Given an irreducible representation of $Spin(n)$, we can ask whether it is possible to extend it to $Pin_-(n)$ (or $Pin_+(n)$) so that the lifted reflections $\tilde R$ (e.g. $T$) act as an anti-unitary operator.
|
Headquarters of Joint Institute for Nuclear Research in Dubna
The Joint Institute for Nuclear Research, JINR (Russian: Объединённый институт ядерных исследований, ОИЯИ), in Dubna, Moscow Oblast (110 km north of Moscow), Russia, is an international research centre for nuclear sciences, with 5500 staff members and 1200 researchers, including 1000 with Ph.D. degrees, from eighteen member states (including Armenia, Azerbaijan, Belarus and Kazakhstan). Most scientists, however, are eminent Russian scientists.
The Institute has seven laboratories, each with its own specialisation: theoretical physics, high energy physics (particle physics), heavy ion physics, condensed matter physics, nuclear reactions, neutron physics, and information technology. The institute has a division to study radiation and radiobiological research and other ad hoc experimental physics experiments.
Principal research instruments include a nuclotron superconductive particle accelerator (particle energy: 7 GeV), three isochronic cyclotrons (120, 145, 650 MeV), a phasotron (680 MeV) and a synchrophasotron (4 GeV). The site has a neutron fast-pulse reactor (1500MW pulse) with nineteen associated instruments receiving neutron beams.
Founding
The agreement on the establishment of JINR was signed on March 26, 1956 in Moscow, with Wang Ganchang and Vladimir Veksler among the founders.[1]

The institute was established on the basis of two research institutes of the USSR Academy of Sciences: the Institute for Nuclear Problems and the Electrophysical Laboratory.

Although the first research instrument was built at Dubna in 1947, it was not until the creation of CERN in 1954 that a countervailing group from the West was created: JINR.

Structure
Eduard Kozulin, head of the nuclear reaction laboratory of the United Nuclear Research Institute, checking the experiment readiness of the supersensitive analyzer of heavy atoms mass.
The JINR has eight laboratories and University Centre:
Bogoliubov Laboratory of Theoretical Physics (BLTP)
Veksler and Baldin Laboratory of High Energies (VBLHE)
Laboratory of Particle Physics (LPP)
Dzhelepov Laboratory of Nuclear Problems (DLNP)
Flerov Laboratory of Nuclear Reactions (FLNR)
Frank Laboratory of Neutron Physics (FLNP)
Laboratory of Information Technologies (LIT)
Laboratory of Radiation Biology (LRB)
University Centre (UC)
Experimental Physics workshops are also parts of the Institute.
Fields of research
The main fields of the Institute's research are:
Discoveries
More than 40 major discoveries have been made.
1959 – nonradiative transitions in mesoatoms
1960 – antisigma-minus hyperon
1963 – element 102
1972 – postradiative regeneration of cells
1973 – quark counting rule
1975 – phenomenon of slow neutron confinement
1988 – regularity of resonant formation of muonic molecules in deuterium
1999–2005 – elements 114, 116, 118, 115 and 113
2006 – chemical identification of element 112
2010 – successful synthesis of element 117 [2]
Elements discovered at JINR: bohrium (1976), flerovium (1999), livermorium (2001), ununtrium (2004), ununpentium (2004), ununoctium (2006), ununseptium (2010).

JINR Prize
In 1961 the JINR Prizes were instituted.
A group of physicists headed by Wang Ganchang, deputy director from 1958 to 1960, and the Soviet Professor Vladimir Veksler was awarded the first prize for the discovery of the antisigma-minus hyperon. The experimental group led by Professor Wang Ganchang analysed more than 40,000 photographs, which recorded tens of thousands of nuclear interactions, taken in the propane bubble chamber; the 10 GeV synchrophasotron was used to bombard a target, producing high-energy mesons. The group was the first to discover the anti-sigma-minus hyperon particle on March 9, 1959:[3]

$$\pi^- + C\to \bar\Sigma^- + K^0 + \bar K^0 + K^- + p^+ + \pi^+ + \pi^- + \text{nucleus recoil}$$

The discovery of this new unstable antiparticle, which decays in $(1.18\pm0.07)\cdot 10^{-10}$ s into an antineutron and a negative pion, was announced in September of that year:[4]

$$\bar\Sigma^-\to \bar n^0 + \pi^-$$

No-one doubted at the time that this particle was elementary, but a few years later this hyperon, the proton, the neutron, the pion and other hadrons had lost their status of elementary particles, as they turned out to be complex particles too, consisting of quarks and antiquarks.

Directors

See also

References

External links

JINR Website
Frank Laboratory of Neutron Physics Website
|
$\newcommand{\ket}[1]{\left| #1 \right>}$ $\newcommand{\bra}[1]{\left< #1 \right|}$
Talking about the partial measurement the professor defines the state $\ket \psi$ to be
$$\ket{\psi} = \sum_{i,j} a_{ij} \ket{e_i} \otimes \ket{f_j} $$
where $\ket{e_i} \in V$ and $\ket{f_j} \in W$ are orthonormal bases. Then he rewrites the state $\ket \psi$ as
$$\ket \psi = \sum_i \ket {e_i} \otimes \ket{w_i}$$
where $\ket{w_i}= \sum_j a_{ij}\ket{f_j}$. I'm fine up to this point. However, he does the following:
$$\bra{e_i}\sum_j \ket{e_j} \otimes \ket{w_j}$$
and says, I quote, "You should understand this equation as $\bra{e_i}$ only
talks to $\ket{e_j}$. You could have written the $\bra{e_i}$ in different ways maybe you could have written $\bra{e_i} \otimes \mathbf 1$" and he says that writing $\bra{e_i} \otimes \mathbf 1$ is a little strange because $\bra{e_i}$ is a bra and $\mathbf 1$ is an operator.
Assuming that what we mean by the bra $\bra{e_i}$ is $\bra{e_i} \otimes \mathbf 1$, I have difficulty understanding what the following mathematical object is:
$$\bra{e_i} \otimes \mathbf 1 \sum_j \ket{e_j} \otimes \ket{w_j} = \sum_j \delta_{ij} \otimes \ket{w_j}$$
I can think of $\delta_{ij}$ as a number but then I don't know what $1 \otimes \ket{w_i}$ (note that $1$ is a number in this expression) should mean.
If on the other hand, I think of $\delta_{ij}$ as a tensor then I cannot simplify this any further and I should write the last expression as:
$$\delta_i^{\;j} \otimes \ket{w_j}$$
where there is a funny summation over $j$. Either way I cannot reduce it any further in order to get to the equation that he has at the very last which is the following:
$$\bra{e_i}\sum_j \ket{e_j} \otimes \ket{w_j} = \ket{w_i}$$
I don't understand how we have gone from a space with dimension $v\cdot w$, where $\mathrm{dim}(V)=v$ and $\mathrm{dim}(W)=w$, to a space with dimension $w$. What is the meaning of $s\otimes \ket{e_i}$, where $s$ is a scalar, if such an object really exists, and lastly, is the mathematics behind the above calculation correct?
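The dimension bookkeeping may be easier to see numerically. The map $\bra{e_i}\otimes\mathbf 1$ is a $w\times vw$ matrix, so it takes $V\otimes W$ (dimension $vw$) to $W$ (dimension $w$), and applying it to $\ket\psi$ returns exactly $\ket{w_i}$; no stray scalar tensor factor appears. A small NumPy sketch of my own, with arbitrary dimensions:

```python
import numpy as np

v, w = 3, 2                      # dim V, dim W
rng = np.random.default_rng(0)
a = rng.standard_normal((v, w)) + 1j * rng.standard_normal((v, w))

# |psi> = sum_{ij} a_ij |e_i> (x) |f_j>; in the standard bases this
# is just the flattened coefficient array, a vector of length v*w.
psi = a.reshape(v * w)

for i in range(v):
    e_i = np.zeros(v)
    e_i[i] = 1.0
    # <e_i| (x) 1 is a (w x v*w) matrix, not a bra tensored with a scalar.
    bra_i = np.kron(e_i.conj(), np.eye(w))
    w_i = bra_i @ psi
    assert w_i.shape == (w,)        # lands in W, dimension w
    assert np.allclose(w_i, a[i])   # equals |w_i> = sum_j a_ij |f_j>
```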
|
K A ALY
Articles written in Bulletin of Materials Science
Volume 39 Issue 2 April 2016 pp 491-498
The present study deals with the effect of composition on the elastic and optical properties of Ge$_x$Se$_2$Sb$_{1−x}$ ($0.0 \le x \le 1.0$) glasses. The various elastic moduli of these glasses such as Young’s modulus ($Y$) and the bulk modulus ($B$) along with the micro-hardness ($H$), Poisson’s ratio ($\rho$) and Debye temperature ($T_D$) were obtained from the values of the longitudinal ($v_l$) and shear ($v_s$) ultrasonic velocities. On the basis of measurements of the transmittance and reflectance spectra in the wavelength range of 0.4–2.5 $\mu$m, the optical constants such as the film thickness ($t$), the refractive index ($n$) and the optical band gap ($E_g$) were investigated with high accuracy. The optically determined bulk modulus of these glasses was in good agreement with that elastically investigated. The obtained results were discussed in terms of the changes in the glass density, electronegativity and electronic polarizability with the variation in antimony content.
Volume 39 Issue 7 December 2016 pp 1791-1799
In this work, new glasses were synthesized from wastes of limestone and phosphate rocks besides commercial borax. The glasses were characterized by FTIR, DTA, ultrasonic techniques and UV spectroscopy. It was found that the concentration of both CaO and P$_2$O$_5$ increases and the concentrations of B$_2$O$_3$ and Na$_2$O decrease as the content of phosphate rocks increases. Variation of the contents of the different oxides affects the concentration of the structural units constituting the glass, which was indicated by the behaviour of the fraction N$_4$ of BO$_4$ units in the borate matrix. The density and the refractive index of the glasses decrease as the CaO and P$_2$O$_5$ contents increase, which was attributed to the increase of [BO$_3$] structural units. On the other hand, the physical parameters such as the ultrasonic velocity, the elastic moduli, the optical bandgap and the optical polarizability increased, which was attributed to the higher coordination number of CaO$_6$ compared with the coordination of borate structural units and to the former effect of P$_2$O$_5$. As a result, a polymerization of the total co-ordination number of the glass, crosslink density and connectivity within the glass network will occur.
Volume 39 Issue 7 December 2016 pp 1819-1825
The variations in structure and optical properties of amorphous a-Se and a-M$_5$Se$_{95}$ (M = Ge, Ga and Zn) films have been studied based on FTIR and optical measurements. FTIR transmittance spectra for a-Se and a-M$_5$Se$_{95}$ (M = Ge, Ga and Zn) glasses were measured as a function of wavenumber. The addition of Ge, Ga and Zn increases the vibrational frequency of the a-Se main band. The absorption edge of Ge$_5$Se$_{95}$ shifted towards the long-wavelength side in comparison with that of the a-Se film. This shift increases gradually in the case of Ga$_5$Se$_{95}$ and Zn$_5$Se$_{95}$ films. So, the optical bandgap of M$_5$Se$_{95}$ films was decreased, but the index of refraction was increased. The first and third order of electric susceptibility ($\chi_{(1)}$ and $\chi_{(3)}$) and the non-linear index of refraction ($n_2$) were increased by adding Ge, Ga and Zn into a-Se.
|
R Shanker
Articles written in Pramana – Journal of Physics
Volume 52 Issue 5 May 1999 pp 493-502 Research Articles
The absolute doubly differential cross-sections (DDCS) for production of the thick-target X-ray bremsstrahlung spectra in collisions of 6.5 keV and 7.5 keV electrons with thick Hf target are measured. The X-ray photons are counted by a Si(Li) detector placed at 90° to the electron beam direction. The bremsstrahlung spectra are corrected for various ‘solid-state effects’ namely, electron energy-loss, electron back-scattering, and photon-attenuation in the target, in addition to the correction for detector’s efficiency. The DDCS values after correction, are compared with the predictions of a most accurate thin-target bremsstrahlung theory [H K Tseng and R H Pratt,
$_{47}$Ag, $_{79}$Au and $_{72}$Hf) at 7.0 keV and 7.5 keV electron energies has been studied. The agreement between experiment and theory is found to be satisfactory within the 27% systematic error of the measurements. However, an apparent systematic difference between experiment and theory in the region of low-energy photons has been explained qualitatively by considering the fact that the hexagonal atomic structure of Hf possibly offers a greater magnitude of ‘solid-state effects’ in blocking the low-energy bremsstrahlung photons from coming out of the target surface than does the face-centered cubic structure of the Ag and Au targets under similar experimental conditions.
Volume 60 Issue 6 June 2003 pp 1203-1215
Relative cross sections, differential in energy and angle, for electrons ejected from CH$_4$ and C$_3$H$_8$ molecules under 16.0 keV electron impact have been measured. Electrons were analyzed by a 45° parallel-plate electrostatic analyzer at emission angles varying from 60° to 135° with energies from 50 eV to 1000 eV. The angular distributions of electrons exhibit structures which are found to arise from Coulomb and non-Coulomb interactions. Furthermore, the double differential cross sections of electrons ejected from the C$_3$H$_8$ molecule are found to be higher in magnitude than those from CH$_4$. This result supports the fact that the number of ejected electrons participating in collisions with C$_3$H$_8$ molecules is larger than that in CH$_4$. Also, the angular distributions of C K-shell Auger electrons emitted from the target molecules have been studied and shown to be isotropic within the experimental uncertainty.
Volume 68 Issue 3 March 2007 pp 507-515 Research Articles
It is shown experimentally that under energetic electron bombardment the backscattered electrons from solid targets contribute significantly ($\sim 80$%) to the observed total electron yield, even for targets of high backscattering coefficients. It is further found that for tungsten ($Z = 74$), with a backscattering coefficient of about 0.50, about $20$% of the total electron yield is contributed by the total secondary electrons for impact energies in the range of 8–28 keV. The yield of true backscattered electrons at normal incidence ($\eta_{0}$), total secondary electrons ($\delta$) and the total electron yield ($\delta_{\text{tot}}$) produced in collisions of 8–28 keV electrons with W have been measured and compared with predictions of available theories. The present results indicate that the constant-loss model for primary electrons in the target plays a significant role in producing the secondary electrons and that it yields a better fit to the experiment than the power law.
Volume 68 Issue 3 March 2007 pp 517-528 Research Articles
The energy and angular distributions of backscattered electrons produced under the impact of 5 keV electrons on thick Al, Ti, Ag, W and Pt targets are measured. The energy range of the backscattered electrons considered is between $E_{B} = 50$ eV and 5000 eV. The angle of incidence $\alpha$ and take-off angle $\theta$ are chosen to have values $\alpha = 0^{\circ}$ and $10^{\circ}$ and $\theta = 100^{\circ}$, $110^{\circ}$ and $120^{\circ}$, respectively. The measured energy spectra are compared with the available theoretical models for $\alpha = 0^{\circ}$ and $10^{\circ}$. The elastic-peak intensity of backscattered electrons is found to be a function of the angle of incidence, the take-off angle and the atomic number of the target material. The considered theories are in reasonably good agreement with experiment for the energy spectra of backscattered electrons having reduced energies $\epsilon (= E_{B}/E_{0})$ in the range 0.20 to 1.00.
Volume 74 Issue 4 April 2010 pp 563-574 Research Articles
We present new experimental data on thick target bremsstrahlung spectra generated from the interaction of energetic electrons with bulk matter. The ‘photon yields’ in terms of double differential cross-sections (DDCS) are measured for pure elements of thick targets: Ti ($Z = 22$), Ag ($Z = 47$), W ($Z = 74$) and Pt ($Z = 78$) under the impact of 10 keV electrons. Comparison of DDCS obtained from the experimental data is made with those predicted by Monte-Carlo (MC) calculations using PENELOPE code. A close agreement between the experimental data and the MC calculations is found for all the four targets within the experimental error of 16%. Furthermore, the ratios of DDCS of bremsstrahlung photons emitted from Ag, W and Pt with those from Ti as a function of photon energy are examined with a relatively lower uncertainty of about 10% and they are compared with MC calculations. A satisfactory agreement is found between the experiment and the calculations within some normalizing factors. The variations of DDCS as a function of Z and of photon energy are also studied which show that the DDCS vary closely with Z; however, some deviations are observed for ‘tip’ photons emitted from high Z targets.
|
Odd and Even Zeroes
In mathematics, the factorial of a positive integer number $n$ is written as $n!$ and is defined as follows:\begin{equation*} n! = 1 \times 2 \times 3 \times 4 \times \cdots \times (n-1) \times n = \prod _{i=1}^ n{i} \end{equation*}
The value of $0!$ is considered as $1$. $n!$ grows very rapidly with the increase of $n$. Some values of $n!$ are:\begin{align*} 0! & = 1 & 5! & = 120 \\ 1! & = 1 & 10! & = 3628800 \\ 2! & = 2 & 14! & = 87178291200 \\ 3! & = 6 & 18! & = 6402373705728000 \\ 4! & = 24 & 22! & = 1124000727777607680000 \\ \end{align*}
You can see that for some values of $n$, $n!$ has an odd number of trailing zeroes (e.g. $5!$, $18!$) and for some values of $n$, $n!$ has an even number of trailing zeroes (e.g. $0!$, $10!$, $22!$). Given the value of $n$, your job is to find how many of the values $0!,\, 1!,\, 2!,\, 3!,\, ...\, ,\, (n - 1)!,\, n!$ have an even number of trailing zeroes.
Input
Input file contains at most 1000 lines of input. Each line contains an integer $n$ ($0 \leq n \leq 10^{18}$). Input is terminated by a line containing a $-1$.
Output
For each line of input produce one line of output. This line contains an integer which denotes how many of the numbers $0!,\, 1!,\, 2!,\, 3!,\, ...\, ,\, n!$ contain an even number of trailing zeroes.
Sample Input 1
2
3
10
100
1000
2000
3000
10000
100000
200000
-1

Sample Output 1
3
4
6
61
525
1050
1551
5050
50250
100126
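The number of trailing zeroes of $n!$ equals the exponent of 5 in its factorization (Legendre's formula), since factors of 2 always outnumber factors of 5. A brute-force sketch of my own, not the intended solution — it is far too slow for $n$ up to $10^{18}$, but it reproduces the small sample cases:

```python
def trailing_zeros(n):
    # Legendre's formula: the exponent of 5 in n!, which equals the number
    # of trailing zeroes of n! (factors of 2 always outnumber factors of 5).
    z, p = 0, 5
    while p <= n:
        z += n // p
        p *= 5
    return z

def count_even(n):
    # How many of 0!, 1!, ..., n! have an even number of trailing zeroes.
    # Brute force over k -- fine for small sample inputs only.
    return sum(1 for k in range(n + 1) if trailing_zeros(k) % 2 == 0)
```

For the judge's limits one would instead count the parity flips of $z(k)$ in closed form, but the brute force matches the samples: `count_even(2)` gives 3 and `count_even(100)` gives 61.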
|
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this on a classification of groups of order $p^2qr$. There order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know that whether we can know the structure of $H$ that can be present?
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
There is good motivation for such a definition here
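The replacement procedure just described can be sketched directly on strings. This is my own toy illustration over plain positive words (ignoring free reduction of inverse pairs); the rule set in the example is made up and is not a genuine Dehn presentation of a hyperbolic group:

```python
def dehn_reduce(w, rules):
    # rules: list of pairs (u, v) with len(u) > len(v).
    # Repeatedly replace the first occurrence of some u_i by v_i.
    # Terminates because each replacement strictly shortens w.
    changed = True
    while changed:
        changed = False
        for u, v in rules:
            i = w.find(u)
            if i != -1:
                w = w[:i] + v + w[i + len(u):]
                changed = True
                break
    return w

# Toy rule "aaa" -> "" (think of a relator a^3): a positive word reduces
# to the empty word exactly when its length is a multiple of 3.
```

The point of the Dehn condition is that for a true Dehn presentation this greedy loop decides the word problem: `w` represents the identity iff `dehn_reduce(w, rules)` is the empty word.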
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, its all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get into some good masters program in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good things about Palka. Also, if you do not worry about a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C(1+x)e^{-x}$, which is obviously not a polynomial unless $C = 0$, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
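The kernel claim can also be checked at the level of coefficients (my own sketch; polynomials in $\mathbb{R}_3[x]$ are represented as coefficient lists $[p_0, p_1, p_2, p_3]$):

```python
def F(p):
    # F(P) = x P''(x) + (x + 1) P'''(x) for P = p0 + p1 x + p2 x^2 + p3 x^3.
    p0, p1, p2, p3 = p
    # P''  = 2 p2 + 6 p3 x   and   P''' = 6 p3, so
    # x P'' = 2 p2 x + 6 p3 x^2   and   (x + 1) P''' = 6 p3 + 6 p3 x.
    return [6 * p3, 2 * p2 + 6 * p3, 6 * p3, 0]
```

The result vanishes exactly when $p_2 = p_3 = 0$, confirming $\ker F = \{a x + b\}$.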
|
Suppose there are $N$ people in a two-person game (in my case, Super Smash Bros. Melee), and imagine a simple model where the players have "skill levels" $s_i\in\mathbb{R}$ for $1\leq i\leq N$, and the probability person $i$ will win a game over person $j$ is
$$ \mathbb{P}(i\text{ wins over }j)=\sigma(s_i-s_j) $$
where $\sigma:\mathbb{R}\rightarrow[0,1]$ is some monotonically-increasing function.
Since $\mathbb{P}(j\text{ wins over }i)=1-\mathbb{P}(i\text{ wins over }j)$, the only constraint on $\sigma$ is that
$$ \sigma(x)+\sigma(-x)=1. $$
The logistic sigmoid is one choice satisfying this.
Note the probability of player $i$ having $w$ wins and $l$ losses over player $j$ is binomial-distributed,
$$ \mathbb{P}(i\text{ has }w\text{ wins, }l\text{ losses over }j)=\binom{l+w}{w} \left(1-\sigma \left(s_i-s_j\right)\right){}^l \sigma \left(s_i-s_j\right){}^w. $$
Now suppose further that we've observed a number of matches between players, with $w(i,j)$ and $\ell(i,j)$ the number of times player $i$ won and lost against player $j$. From these matches, I'd like to estimate the $s_i$, which initially are assumed distributed via some uninformative Bayesian prior $\mathbb{P}(s_1,\ldots,s_N)$.
By Bayes Theorem, we can obtain the posterior distribution of the $s_i$ by
$$ \mathbb{P}(s_1,\ldots,s_N|\text{observed matches}) = \frac{\mathbb{P}(\text{observed matches}|s_1,\ldots,s_N)\mathbb{P}(s_1,\ldots,s_N)}{C} \\ =\frac{\mathbb{P}(s_1,\ldots,s_N)}{C}\prod_{1\le i<j\le N}\binom{l(i,j)+w(i,j)}{w(i,j)} \left(1-\sigma \left(s_i-s_j\right)\right){}^{l(i,j)} \sigma \left(s_i-s_j\right){}^{w(i,j)} $$
where $C=\int_{\mathbb{R}^N}\mathbb{P}(\text{observed matches}|s_1,\ldots,s_N)\mathbb{P}(s_1,\ldots,s_N)\,\mathrm{d}\mathbf{s}$ is a constant normalization factor that I'm ignoring.
I'm stuck here. In my use case, the number of players $N\approx 5000$ and the number of games $\frac{1}{2}\sum_{i,j}w(i,j)+l(i,j)\approx 100000$ (game data obtained via the Smash GG API's history of tournament matches from the past 3 years), and the posterior distribution is not (as best I can tell) separable. As a result, the problem as stated appears computationally intractable.
My question is this:
What tricks are commonly used to make high-dimensional Bayesian posteriors like this computationally tractable? Are there approximations I can apply to make it separable?
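One common trick is to start from a point estimate rather than the full posterior: with the logistic choice of $\sigma$ this is the Bradley–Terry model, whose log-posterior is concave, so a MAP estimate is cheap even for $N \approx 5000$. A minimal sketch, assuming a logistic $\sigma$ and an independent $\mathcal{N}(0,\tau^2)$ prior on each $s_i$ (both assumptions are mine, not part of the question):

```python
import numpy as np

def log_posterior(s, W, tau=1.0):
    # Unnormalized log-posterior for the model above, with sigma = logistic
    # sigmoid and an independent N(0, tau^2) prior on each skill s_i.
    # W[i, j] = number of wins of player i over player j (zero diagonal).
    d = s[:, None] - s[None, :]           # d[i, j] = s_i - s_j
    log_sig = -np.logaddexp(0.0, -d)      # log sigma(d), numerically stable
    return np.sum(W * log_sig) - np.sum(s ** 2) / (2 * tau ** 2)

def map_estimate(W, tau=1.0, lr=0.1, steps=500):
    # MAP point estimate by plain gradient ascent; the Bradley-Terry
    # log-posterior is concave, so this converges for a small enough lr.
    n = W.shape[0]
    games = W + W.T                       # total games between each pair
    s = np.zeros(n)
    for _ in range(steps):
        sig = 1.0 / (1.0 + np.exp(-(s[:, None] - s[None, :])))
        # expected wins minus observed wins, plus the prior's pull to zero
        grad = (W - games * sig).sum(axis=1) - s / tau ** 2
        s += lr * grad
    return s - s.mean()                   # skills identified only up to a shift
```

For uncertainty beyond the MAP, a Laplace approximation around it, variational inference, or MCMC (e.g. Stan) are the usual tools; all exploit the fact that the log-posterior couples the $s_i$ only pairwise.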
|
I'm tutoring a student, and we were trying to solve the following question:
Find the local extreme values of $f(x) = -x^2 + 2x + 9$ over $[-2,\infty)$.
According to the textbook, the local extreme values are essentially the peaks and the valleys in the graph of the function $f$, so basically where $f'(x) = 0$. This is relatively easy to compute: $$f'(x) = -2x + 2,$$ whose only critical point is $x = 1$. Likewise, $f''(x) = -2 < 0$, which means $f$ is concave down everywhere, and thus $x = 1$ is where a maximum value occurs on the graph. The maximum is $f(1) = -1 + 2 + 9 = 10$.
Of course, the endpoint $x = -2$ yields $$f(-2) = -(-2)^2 +2(-2) + 9 = 1,$$ but since the graph is concave down everywhere, $\displaystyle \lim_{x\to\infty}f = -\infty$ implies there really is no minimum per se... right?
The online computer program tells us that $(-2,1)$ is a local minimum, and $(1,10)$ is a local maximum. But in accordance with the definition from the textbook, why is $(-2,1)$ where a
local minimum of the graph occurs? It's neither a peak nor a valley in the graph. What exactly does local mean when the interval is infinite? It doesn't quite make logical sense, unless the definition is not as rigorous as it ought to be.
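A quick numeric sanity check of the values involved (my own sketch; the last assertion spells out the one-sided "local" condition that makes the endpoint a local minimum under the usual definition):

```python
def f(x):
    return -x**2 + 2*x + 9

def fprime(x):
    return -2*x + 2

assert fprime(1) == 0    # the only critical point
assert f(1) == 10        # local (indeed global) maximum value
assert f(-2) == 1        # value at the left endpoint
# Endpoint sense of "local": f(-2) <= f(x) for all x in the domain within a
# small one-sided neighbourhood [-2, -2 + delta], which is all the
# definition requires -- no two-sided "valley" is needed.
assert all(f(-2) <= f(-2 + k * 1e-3) for k in range(1, 1000))
```

So the computer program is consistent with the one-sided convention: at an endpoint, only points inside the domain are compared.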
|
Mathematics and Science were invented by humans to understand and describe the world around us. A lot of mathematical quantities are used in Physics to explain the concepts clearly. A few examples of these include force, speed, velocity and work. These quantities are often described as being either scalar or vector quantities. Scalars and vectors are differentiated by their definitions. A scalar quantity is defined as a physical quantity that has only magnitude, for example, mass and electric charge. On the other hand, a vector quantity is defined as a physical quantity that has both magnitude and direction, like force and weight. The other way of differentiating these two quantities is by notation.
What Is Scalar Quantity?
Scalar quantity is defined as
The physical quantity with magnitude and no direction.
Some physical quantities can be described just by their numerical value (with their respective units), without any direction. The addition of such quantities follows the simple rules of algebra: only their magnitudes are added.
What is Scalar quantity example?
There are plenty of scalar quantity examples, some of the common examples are:
Mass
Speed
Distance
Time
Area
Volume
Density
Temperature

What Is Vector Quantity?
A vector quantity is defined as
The physical quantity that has both direction as well as magnitude.
A vector with magnitude equal to one is called a unit vector, represented by a lowercase letter with a circumflex (“hat”), e.g. “û”.

What are some examples of vector quantities?
Vector quantity examples are many, some of them are given below:
Linear momentum
Acceleration
Displacement
Angular velocity
Force
Electric field
Polarization

Difference Between Scalar and Vector
The difference between scalar and vector quantities is crucial to understand when learning physics. Below are a few differences for better understanding.
Definition: Vector – a physical quantity with both magnitude and direction. Scalar – a physical quantity with only magnitude.
Representation: Vector – a number (magnitude), a direction (unit cap or arrow at the top) and a unit. Scalar – a number (magnitude) and a unit.
Symbol: Vector – quantity symbol in bold with an arrow sign above. Scalar – quantity symbol.
Direction: Vector – yes. Scalar – no.
Example: Vector – velocity and acceleration. Scalar – mass and temperature.

Vector Addition and Subtraction
The addition and subtraction of vector quantities do not follow the simple arithmetic rules. A special set of rules is followed for the addition and subtraction of vectors. Following are some points to be noted while adding vectors:
Addition of vectors means finding the resultant of a number of vectors acting on a body.
The component vectors whose resultant is to be calculated are independent of each other. Each vector acts as if the other vectors were absent.
Vectors can be added geometrically but not algebraically.
Vector addition is commutative in nature, i.e., \(\vec{A}+\vec{B}=\vec{B}+\vec{A}\)
Now, talking about vector subtraction: subtracting a vector is the same as adding its negative. To better understand, let us look at the example given below.
Let us consider two vectors \(\vec{A}\) and \(\vec{B}\) as shown in the figure below. We are required to subtract \(\vec{B}\) from \(\vec{A}\). This is the same as adding \(-\vec{B}\) to \(\vec{A}\). The resultant is shown in the figure below.

Vector Notation
For a vector quantity, an arrow is usually drawn on top, as in \(\vec{v}\), which represents the vector value of the velocity and indicates that the quantity has both magnitude and direction.
Following is the table explaining other related concepts:
Triangle Law of Vector Addition
Scalar And Vector Products
Position And Displacement Vectors
Resolution Of A Vector In A Plane – Rectangular Components
Scalar And Vector Quantities

Problems With Solutions

Q1: Given below is a list of quantities. Categorize each quantity as being either a vector or a scalar.
20 degrees Celsius
5 mi., North
256 bytes
5 m
30 m/sec, East
4000 Calories

Answer:

20 degrees Celsius: Scalar
5 mi., North: Vector
256 bytes: Scalar
5 m: Scalar
30 m/sec, East: Vector
4000 Calories: Scalar

Q2: Ashwin walks 10 m north, 12 m east, 3 m west and 5 m south and then stops to drink water. What is the magnitude of his displacement from his original point?

Answer: We know that displacement is a vector quantity; hence the direction Ashwin walks will either be positive or negative along an axis.
Now, to find the total distance traveled along the y-axis, let us consider the movement towards the north to be positive and the movement towards the south to be negative.
\(\sum y=10\,m-5\,m=5\,m\)
He moved a net of 5 meters to the north along the y-axis.
Similarly, let us consider his movement towards the east to be positive and the movement towards the west to be negative.
\(\sum x=-3\,m+12\,m=9\,m\)
He moved a net of 9 m to the east.
Using the Pythagorean theorem, the resultant displacement can be found as follows:\(D^2=(\sum x)^2+(\sum y)^2\)

Substituting the values, we get\(D^2=9^2+5^2=106\) \(D=\sqrt{106}\) \(D \approx 10.30\,m\)
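The same component method in a few lines (a sketch; north and east are taken as the positive directions, as in the worked solution):

```python
import math

dy = 10 - 5             # net metres north (10 m north minus 5 m south)
dx = 12 - 3             # net metres east (12 m east minus 3 m west)
D = math.hypot(dx, dy)  # resultant displacement, sqrt(dx^2 + dy^2)
print(round(D, 2))      # 10.3
```

`math.hypot` computes the Euclidean length directly, which is exactly the Pythagorean step above.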
Q3. What is the magnitude of a unit vector? Answer: The magnitude of a unit vector is unity. A unit vector has no units or dimensions.
|
Two Types of Random Variables
A random variable [latex]\text{x}[/latex], and its distribution, can be discrete or continuous.
Learning Objectives
Contrast discrete and continuous variables
Key Takeaways

Key Points

A random variable is a variable taking on numerical values determined by the outcome of a random phenomenon.
The probability distribution of a random variable [latex]\text{x}[/latex] tells us what the possible values of [latex]\text{x}[/latex] are and what probabilities are assigned to those values.
A discrete random variable has a countable number of possible values. The probability of each value of a discrete random variable is between 0 and 1, and the sum of all the probabilities is equal to 1.
A continuous random variable takes on all the values in some interval of numbers. A density curve describes the probability distribution of a continuous random variable, and the probability of a range of events is found by taking the area under the curve.

Key Terms

random variable: a quantity whose value is random and to which a probability distribution is assigned, such as the possible outcome of a roll of a die
discrete random variable: obtained by counting values for which there are no in-between values, such as the integers 0, 1, 2, ….
continuous random variable: obtained from data that can take infinitely many values

Random Variables
In probability and statistics, a random variable is a variable whose value is subject to variations due to chance (i.e. randomness, in a mathematical sense). As opposed to other mathematical variables, a random variable conceptually does not have a single, fixed value (even if unknown); rather, it can take on a set of possible different values, each with an associated probability.
A random variable’s possible values might represent the possible outcomes of a yet-to-be-performed experiment, or the possible outcomes of a past experiment whose already-existing value is uncertain (for example, as a result of incomplete information or imprecise measurements). They may also conceptually represent either the results of an “objectively” random process (such as rolling a die), or the “subjective” randomness that results from incomplete knowledge of a quantity.
Random variables can be classified as either discrete (that is, taking any of a specified list of exact values) or as continuous (taking any numerical value in an interval or collection of intervals). The mathematical function describing the possible values of a random variable and their associated probabilities is known as a probability distribution.
Discrete Random Variables
Discrete random variables can take on either a finite or at most a countably infinite set of discrete values (for example, the integers). Their probability distribution is given by a probability mass function which directly maps each value of the random variable to a probability. For example, the value of [latex]\text{x}_1[/latex] takes on the probability [latex]\text{p}_1[/latex], the value of [latex]\text{x}_2[/latex] takes on the probability [latex]\text{p}_2[/latex], and so on. The probabilities [latex]\text{p}_\text{i}[/latex] must satisfy two requirements: every probability [latex]\text{p}_\text{i}[/latex] is a number between 0 and 1, and the sum of all the probabilities is 1. ([latex]\text{p}_1+\text{p}_2+\dots + \text{p}_\text{k} = 1[/latex])
Examples of discrete random variables include the values obtained from rolling a die and the grades received on a test out of 100.
Continuous Random Variables
Continuous random variables, on the other hand, take on values that vary continuously within one or more real intervals, and have a cumulative distribution function (CDF) that is absolutely continuous. As a result, the random variable has an uncountable infinite number of possible values, all of which have probability 0, though ranges of such values can have nonzero probability. The resulting probability distribution of the random variable can be described by a probability density, where the probability is found by taking the area under the curve.
Numbers selected at random between 0 and 1 are an example of a continuous random variable, because there are infinitely many possibilities.
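For instance, for a uniform random number $U$ on $[0,1]$ the density is the constant 1, so the probability of an interval is just its length, and any single point has probability 0 (a minimal sketch; the function name is my own):

```python
def uniform_prob(a, b):
    # P(a <= U <= b) for U ~ Uniform(0, 1): the area under the constant
    # density f(u) = 1 over [a, b], clipped to the support [0, 1].
    lo, hi = max(a, 0.0), min(b, 1.0)
    return max(hi - lo, 0.0)
```

This illustrates the defining feature of continuous random variables: probability comes from areas under the density curve, not from individual values.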
Probability Distributions for Discrete Random Variables
Probability distributions for discrete random variables can be displayed as a formula, in a table, or in a graph.
Learning Objectives
Give examples of discrete random variables
Key Takeaways

Key Points

A discrete probability function must satisfy the following: [latex]0 \leq \text{f}(\text{x}) \leq 1[/latex], i.e., the values of [latex]\text{f}(\text{x})[/latex] are probabilities, hence between 0 and 1.
A discrete probability function must also satisfy the following: [latex]\sum \text{f}(\text{x}) = 1[/latex], i.e., adding the probabilities of all disjoint cases, we obtain the probability of the sample space, 1.
The probability mass function has the same purpose as the probability histogram, and displays specific probabilities for each discrete random variable. The only difference is how it looks graphically.

Key Terms

discrete random variable: obtained by counting values for which there are no in-between values, such as the integers 0, 1, 2, ….
probability distribution: a function of a discrete random variable yielding the probability that the variable will have a given value
probability mass function: a function that gives the relative probability that a discrete random variable is exactly equal to some value
A discrete random variable [latex]\text{x}[/latex] has a countable number of possible values. The probability distribution of a discrete random variable [latex]\text{x}[/latex] lists the values and their probabilities, where value [latex]\text{x}_1[/latex] has probability [latex]\text{p}_1[/latex], value [latex]\text{x}_2[/latex] has probability [latex]\text{p}_2[/latex], and so on. Every probability [latex]\text{p}_\text{i}[/latex] is a number between 0 and 1, and the sum of all the probabilities is equal to 1.
Examples of discrete random variables include:
The number of eggs that a hen lays in a given day (it can’t be 2.3) The number of people going to a given soccer match The number of students that come to class on a given day The number of people in line at McDonald’s on a given day and time
A discrete probability distribution can be described by a table, by a formula, or by a graph. For example, suppose that [latex]\text{x}[/latex] is a random variable that represents the number of people waiting at the line at a fast-food restaurant and it happens to only take the values 2, 3, or 5 with probabilities [latex]\frac{2}{10}[/latex], [latex]\frac{3}{10}[/latex], and [latex]\frac{5}{10}[/latex] respectively. This can be expressed through the function [latex]\text{f}(\text{x})= \frac{\text{x}}{10}[/latex]
, [latex]\text{x}=2, 3, 5[/latex], or through a table. Notice that these two representations are equivalent, and that the same distribution can also be shown graphically as a probability histogram.
The formula, table, and probability histogram satisfy the following necessary conditions of discrete probability distributions:
[latex]0 \leq \text{f}(\text{x}) \leq 1[/latex], i.e., the values of [latex]\text{f}(\text{x})[/latex] are probabilities, hence between 0 and 1. [latex]\sum \text{f}(\text{x}) = 1[/latex], i.e., adding the probabilities of all disjoint cases, we obtain the probability of the sample space, 1.
Sometimes, the discrete probability distribution is referred to as the probability mass function (pmf). The probability mass function has the same purpose as the probability histogram, and displays specific probabilities for each discrete random variable. The only difference is how it looks graphically.
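The two conditions above can be checked directly for the example pmf [latex]\text{f}(\text{x})=\frac{\text{x}}{10}[/latex] on the support {2, 3, 5}. A minimal sketch in Python (the function and variable names are just for illustration):

```python
# Example pmf from the text: f(x) = x/10 with support {2, 3, 5}.
def f(x):
    return x / 10

support = [2, 3, 5]
probs = [f(x) for x in support]

# Condition 1: every f(x) is a probability, i.e. lies in [0, 1].
assert all(0 <= p <= 1 for p in probs)
# Condition 2: the probabilities of all disjoint cases sum to 1.
assert abs(sum(probs) - 1) < 1e-12

print(probs)   # [0.2, 0.3, 0.5]
```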
Expected Values of Discrete Random Variables
The expected value of a random variable is the weighted average of all possible values that this random variable can take on.
Learning Objectives
Calculate the expected value of a discrete random variable
Key Takeaways Key Points The expected value of a random variable [latex]\text{X}[/latex] is defined as: [latex]\text{E}[\text{X}] = \text{x}_1\text{p}_1 + \text{x}_2\text{p}_2 + \dots + \text{x}_\text{i}\text{p}_\text{i}[/latex], which can also be written as: [latex]\text{E}[\text{X}] = \sum \text{x}_\text{i}\text{p}_\text{i}[/latex]. If all outcomes [latex]\text{x}_\text{i}[/latex] are equally likely (that is, [latex]\text{p}_1=\text{p}_2=\dots = \text{p}_\text{i}[/latex]), then the weighted average turns into the simple average. The expected value of [latex]\text{X}[/latex] is what one expects to happen on average, even though sometimes it results in a number that is impossible (such as 2.5 children). Key Terms discrete random variable: obtained by counting values for which there are no in-between values, such as the integers 0, 1, 2, …. expected value: of a discrete random variable, the sum of the probability of each possible outcome of the experiment multiplied by the value itself Discrete Random Variable
A discrete random variable [latex]\text{X}[/latex] has a countable number of possible values. The probability distribution of a discrete random variable [latex]\text{X}[/latex] lists the values and their probabilities, such that [latex]\text{x}_\text{i}[/latex] has a probability of [latex]\text{p}_\text{i}[/latex]. The probabilities [latex]\text{p}_\text{i}[/latex] must satisfy two requirements:
Every probability [latex]\text{p}_\text{i}[/latex] is a number between 0 and 1. The sum of the probabilities is 1: [latex]\text{p}_1+\text{p}_2+\dots + \text{p}_\text{i} = 1[/latex]. Expected Value Definition
In probability theory, the expected value (or expectation, mathematical expectation, EV, mean, or first moment) of a random variable is the weighted average of all possible values that this random variable can take on. The weights used in computing this average are probabilities in the case of a discrete random variable.
The expected value may be intuitively understood by the law of large numbers: the expected value, when it exists, is almost surely the limit of the sample mean as sample size grows to infinity. More informally, it can be interpreted as the long-run average of the results of many independent repetitions of an experiment (e.g. a dice roll). The value may not be expected in the ordinary sense—the “expected value” itself may be unlikely or even impossible (such as having 2.5 children), as is also the case with the sample mean.
How To Calculate Expected Value
Suppose random variable [latex]\text{X}[/latex] can take value [latex]\text{x}_1[/latex] with probability [latex]\text{p}_1[/latex], value [latex]\text{x}_2[/latex] with probability [latex]\text{p}_2[/latex], and so on, up to value [latex]\text{x}_i[/latex] with probability [latex]\text{p}_i[/latex]. Then the expectation value of a random variable [latex]\text{X}[/latex] is defined as: [latex]\text{E}[\text{X}] = \text{x}_1\text{p}_1 + \text{x}_2\text{p}_2 + \dots + \text{x}_\text{i}\text{p}_\text{i}[/latex], which can also be written as: [latex]\text{E}[\text{X}] = \sum \text{x}_\text{i}\text{p}_\text{i}[/latex].
If all outcomes [latex]\text{x}_\text{i}[/latex] are equally likely (that is, [latex]\text{p}_1 = \text{p}_2 = \dots = \text{p}_\text{i}[/latex]), then the weighted average turns into the simple average. This is intuitive: the expected value of a random variable is the average of all values it can take; thus the expected value is what one expects to happen on average. If the outcomes [latex]\text{x}_\text{i}[/latex] are not equally probable, then the simple average must be replaced with the weighted average, which takes into account the fact that some outcomes are more likely than the others. The intuition, however, remains the same: the expected value of [latex]\text{X}[/latex] is what one expects to happen on average.
For example, let [latex]\text{X}[/latex] represent the outcome of a roll of a six-sided die. The possible values for [latex]\text{X}[/latex] are 1, 2, 3, 4, 5, and 6, all equally likely (each having the probability of [latex]\frac{1}{6}[/latex]). The expectation of [latex]\text{X}[/latex] is: [latex]\text{E}[\text{X}] = 1\cdot\frac{1}{6} + 2\cdot\frac{1}{6} + 3\cdot\frac{1}{6} + 4\cdot\frac{1}{6} + 5\cdot\frac{1}{6} + 6\cdot\frac{1}{6} = 3.5[/latex]. In this case, since all outcomes are equally likely, we could have simply averaged the numbers together: [latex]\frac{1+2+3+4+5+6}{6} = 3.5[/latex].
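The weighted-average formula for the die example can be verified with a few lines of Python (a sketch using exact fractions to avoid rounding):

```python
from fractions import Fraction

# Expected value E[X] = sum of x_i * p_i for a fair six-sided die.
values = [1, 2, 3, 4, 5, 6]
p = Fraction(1, 6)          # every outcome equally likely

expected = sum(x * p for x in values)
print(expected)             # 7/2, i.e. 3.5

# Because the outcomes are equally likely, the simple average agrees:
simple_avg = Fraction(sum(values), len(values))
print(simple_avg)           # 7/2
```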
|
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet and neither a reason why the expression cannot be prime for odd n, although there are far more even cases without a known factor than odd cases.
@TheSimpliFire That's what I'm thinking about, I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it, it is really "too elementary", but I like surprises, if they're good.
It is in fact difficult, I did not understand all the details either. But the ECM method is analogous to the p-1 method, which works well when there is a factor p such that p-1 is smooth (has only small prime factors)
Brocard's problem is a problem in mathematics that asks to find integer values of $n$ and $m$ for which $$n!+1=m^2,$$ where $n!$ is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. Pairs of the numbers $(n, m)$ that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: $(4,5)$, $(5,11)$...
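The three known Brown numbers — $(4,5)$, $(5,11)$ and $(7,71)$ — can be checked directly, and a naive search confirms there are no further pairs for small $n$ (a quick sketch, not part of the excerpt above):

```python
import math

# Verify n! + 1 = m^2 for the three known Brown-number pairs.
brown = [(4, 5), (5, 11), (7, 71)]
for n, m in brown:
    assert math.factorial(n) + 1 == m * m

# Naive search: any other n < 100 with n! + 1 a perfect square?
extra = [n for n in range(1, 100)
         if n not in (4, 5, 7)
         and math.isqrt(math.factorial(n) + 1) ** 2 == math.factorial(n) + 1]
print(extra)   # []
```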
$\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ that satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474.
Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$. If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function.
The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation}
Thus $n!$ has $k$ zeroes for some $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has at most $2k$ digits by the conditions in the Corollary. The Corollary therefore follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$. Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation}
Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation}
Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation}
Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$ as $\min\{x-\frac18\ln(8\pi x)\}>0$ in the domain.
Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$
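The two quantitative ingredients used in the proof — Legendre's count of trailing zeros of $n!$ and the bound $k<n/4$ — can be sanity-checked numerically (a sketch, not part of the proof; the helper name is made up):

```python
import math

def trailing_zeros(n):
    """Legendre's formula: k = sum over i >= 1 of floor(n / 5^i)."""
    k, p = 0, 5
    while p <= n:
        k += n // p
        p *= 5
    return k

for n in [10, 25, 100, 625]:
    k = trailing_zeros(n)
    s = str(math.factorial(n))
    # n! ends in exactly k zeros...
    assert s.endswith("0" * k) and not s.endswith("0" * (k + 1))
    # ...and the geometric-series bound k < n/4 holds.
    assert k < n / 4
```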
We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better)
@TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-m_1)$ which means $m_1$ is even. We get $4\pmod {20}$ now :P
Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that for distinct, positive integers $a,b$, the only solution to this equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5).$ It is of anticipation that there will be much fewer solutions for incr...
|
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get it by taking the derivative: $v(t) = 3t^2-12t+9$. But how could I find the intervals?
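One common approach, sketched below in plain Python, is to find the roots of $v(t)$ and test the sign of $v$ between them. Here $v(t)=3t^2-12t+9=3(t-1)(t-3)$, so the particle moves left where $v(t)<0$:

```python
import math

# Velocity v(t) = 3t^2 - 12t + 9; the particle moves left where v(t) < 0.
def v(t):
    return 3 * t**2 - 12 * t + 9

# Roots of v(t) = 0 from the quadratic formula.
a, b, c = 3, -12, 9
disc = math.sqrt(b * b - 4 * a * c)
roots = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
print(roots)      # [1.0, 3.0]

# Sign test between the roots: v(2) < 0, so the
# particle moves to the left on the interval (1, 3).
print(v(2))       # -3
```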
Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that $$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$ but cannot ...
So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$.
Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$.
Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow.
Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$.
Well, we do know what the eigenvalues are...
The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.
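As a small numerical illustration of the Spectral Mapping Theorem (a sketch; the matrix and polynomial below are made up), for an upper-triangular $A$ the eigenvalues sit on the diagonal, and $p(A)$ is again upper triangular with $p(\lambda_i)$ on its diagonal:

```python
# Upper-triangular A: eigenvalues are the diagonal entries 2 and 3.
A = [[2, 1], [0, 3]]

def matmul(X, Y):
    # Plain 2x2 matrix product.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# p(A) = A^2 + I for the polynomial p(x) = x^2 + 1.
A2 = matmul(A, A)
pA = [[A2[i][j] + (1 if i == j else 0) for j in range(2)] for i in range(2)]

# p(A) is upper triangular, so its eigenvalues are its diagonal entries:
print([pA[0][0], pA[1][1]])   # [5, 10], i.e. [p(2), p(3)]
```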
Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker
"a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...
Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidently sent this before writing my answer. Writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1)
For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
|
According to my book, the magnetic field at any location inside the toroid (the empty region inside the toroid's circumference) is zero, because if we consider a circular loop passing through that point, the magnetic field would be zero as there is no current inside that loop. If we use this logic inside the solenoid and take a small circle inside the solenoid centered on the axis, we would get the same zero magnetic field there, which is not true. If we apply this logic we will get a zero magnetic field wherever there is no current. I know this is not right and I am missing some point here.
The Amperian Loop centered at the axis of the solenoid encloses no current. This is true.
You note two rules:

1. By Ampere's Law, there is no magnetic field.
2. The solenoid has a magnetic field.

You think 1. and 2. contradict. However, realize that Ampere's Law only describes the net magnetic field along the Amperian Loop. It does not say anything about other magnetic fields. So let's modify the rules:

1. By Ampere's Law, there is no net magnetic field along the Amperian Loop.
2. The solenoid has a magnetic field.
To see why 1. and 2. are true and don't contradict, let's consider an ideal example, then generalize.
Ideal Example
The magnetic field lines within an infinitely long solenoid are perfectly parallel with the central axis of the solenoid. This is consistent with 2.
If you consider a circle centered at the solenoid's central axis, realize that all magnetic field lines are perpendicular to the circle.
So, there are no magnetic field lines that go along the Amperian Loop. This is consistent with 1.
General Example
In the real world, we can't have infinitely long solenoids. Real world solenoids deviate from the ideal solenoid.
A single wire loop has a circularly symmetric magnetic field. Similarly, a stack of single wire loops have a circularly symmetric magnetic field. This is consistent with rule 2.
Since the solenoid is made up of circles, note that any deviation from the ideal must be circularly symmetric.
By circular symmetry, there is no net magnetic field along the circular Amperian Loop. This is consistent with rule 1.
Conclusion
Rules 1 and 2 hold for all solenoids, and don't contradict each other.
(P.S. I'm pretty sure, but cannot prove, that real-world solenoids have magnetic field lines that deviate radially outwards from the central axis, radially perpendicular to the Amperian Loop. So, there is no single magnetic field line that contributes to the Amperian Loop, and we don't even need to consider circular symmetry.)
You have to be careful and understand what you are doing when evaluating the left hand side of the equation $\displaystyle \int_{\rm loop}\vec B \cdot d\vec l= \mu_0 I_{\rm enclosed}$.
For loops $X$ and $Z$ you can appeal to symmetry and say that the magnetic field $B$ has the same magnitude and is in the same direction as the loop at all points around the loop.
However for loop $Y$ because the integral $\displaystyle \int_{\rm loop}\vec B \cdot d\vec l$ is zero around the loop that does not mean that the magnetic field is zero at each point along the loop.
You will see from the diagram you will get positive and negative contributions to $\vec B \cdot d\vec l$ which will add up to zero.
|
By Ehrenfest's theorem we know that: $\frac{d \langle \Omega \rangle}{dt} =\frac{i}{\hbar}\langle[H,\Omega]\rangle$
Where $\Omega$ is an operator and $H$ is the Quantum Hamiltonian .
I would like to know what is wrong in the following steps:
\begin{equation} \frac{d \langle \Omega \rangle}{dt} := \frac{d\langle \psi | \Omega|\psi \rangle}{dt} \tag{1} \end{equation} Using the product rule, \begin{equation} \frac{d\langle \psi | \Omega|\psi \rangle}{dt} = \left(\frac{\partial\langle\psi| }{\partial t} \right)\Omega|\psi\rangle + \langle \psi |\frac{\partial(\Omega|\psi\rangle)}{\partial t} \tag{2} \end{equation} Schrödinger's equation states:
\begin{equation} \frac{\partial|\psi \rangle}{\partial t} = \frac{-i H|\psi\rangle }{\hbar}\tag{3} \end{equation}
So taking the hermitian conjugation (and since the Hamiltonian is hermitian): \begin{equation} \frac{\partial\langle\psi| }{\partial t} = \frac{i \langle\psi|H }{\hbar} \tag{4} \end{equation} Now applying the time dependent Schrodinger equation to the state $\Omega|\psi\rangle$ (This can be done as $\Omega|\psi\rangle$ is also a state in the function space),
\begin{equation} \frac{\partial\Omega|\psi \rangle}{\partial t} = \frac{-i H\Omega|\psi\rangle }{\hbar}\tag{5} \end{equation} Thus, applying equations (4) and (5) to equation (2), \begin{equation} \frac{d\langle \psi | \Omega|\psi \rangle}{dt} = \left( \frac{i \langle\psi|H }{\hbar} \right)\Omega|\psi\rangle + \langle \psi| \frac{-i }{\hbar}H\Omega|\psi\rangle\tag{6} \end{equation} As we can observe, this gives
\begin{equation} \frac{d\langle \psi | \Omega|\psi \rangle}{dt} = \frac{i}{\hbar}\langle\psi|H \Omega|\psi\rangle - \frac{i}{\hbar}\langle\psi|H \Omega|\psi\rangle = 0 \end{equation} This goes against Ehrenfest's theorem, as $\Omega$ was a general operator. I suspect that one of my steps has implicitly assumed something, but I can't figure out what it is.
|
the article linked below is very instructive and advanced about real Clifford algebras and their relationship with the Lorentz group. After a general introduction of a Clifford algebra, $\mathcal{Cl}(V,\Phi)$, over a real, finite-dimensional vector space $V$ with a non-degenerate quadratic form $\Phi$ as the quotient algebra: $$\mathcal{Cl}(V,\Phi)\sim\frac{\mathcal{T}(V)}{\mathcal{I}(V,\Phi)}$$ where $\mathcal{T}(V)$ is the tensor algebra of $V$ and $\mathcal{I}(V,\Phi)$ is the bilateral ideal generated by all elements of the form: $$v \otimes v - \Phi(v)\cdot\mathbb{1}\quad,\quad v \in V$$ and after presenting all the implications of this definition, the article goes straight to define an automorphism (which is also an involution) $\alpha:\mathcal{Cl}(V,\Phi)\rightarrow\mathcal{Cl}(V,\Phi)$ as: $$\alpha(i(v)):=-i(v)$$ where the map $i:V\rightarrow\mathcal{Cl}(V,\Phi)$ is an injection. Here, in the proof of $\alpha$ being an involution, the authors write that:
[...]. Furthermore, every $x\in\mathcal{Cl}(V,\Phi)$ can be written as:
$$x=x_1···x_m$$ with $x_j \in i(V)$ [...].
Now, my question is: if $\mathcal{T}(V)=\bigoplus\limits_{i=0}^{n}V^{\otimes i}$, shouldn't $x$ be a superposition of "polynomials" of elements of $i(V)$ rather than "monomials" as above? If not, how can I demonstrate this?
|
In a queuing system (M/M/1) with a finite packet capacity $z$, how do you determine the probability of packet loss if we assume that packets are dropped when the system is full? Packets arrive according to a Poisson process with rate $\lambda$ and are served at rate $\mu$ (exponential service times).
My attempt at a solution:
Probability of packet loss = probability that the system has exactly $z$ packets (i.e., it is at its capacity)?
I also have a formula from my notes where probability of packet loss is approximated at $(1 - \frac{\lambda}{\mu})(\frac{\lambda}{\mu})^z$ (so this is most likely correct but can someone please explain why?).
I'm not sure how to calculate the rate at which packets are dropped. I already know the probabilities $P(n)$ that there are $n$ packets in the system where $n = 0, 1, ..., z$ packets.
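For reference, the stationary distribution of an M/M/1/z queue satisfies $P(n)\propto\rho^n$ with $\rho=\lambda/\mu$, and (by the PASTA property) the loss probability equals $P(z)$, the probability an arriving packet finds the system full. A sketch comparing the exact blocking probability with the approximation from the notes (function names are made up):

```python
def blocking_probability(lam, mu, z):
    """Exact loss probability P(z) for an M/M/1/z queue."""
    rho = lam / mu
    # Unnormalized stationary weights P(n) ∝ rho**n for n = 0..z.
    weights = [rho**n for n in range(z + 1)]
    # Normalize; equivalently (1-rho)*rho**z / (1 - rho**(z+1)).
    return weights[z] / sum(weights)

def approx_loss(lam, mu, z):
    """The infinite-buffer approximation (1 - rho) * rho**z from the notes."""
    rho = lam / mu
    return (1 - rho) * rho**z

# With lambda=1, mu=2, z=3 the two are already close:
print(blocking_probability(1, 2, 3))   # ~0.0667
print(approx_loss(1, 2, 3))            # 0.0625
```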
|
I came across John Duffield Quantum Computing SE via this hot question. I was curious to see an account with 1 reputation and a question with hundreds of upvotes.It turned out that the reason why he has so little reputation despite a massively popular question is that he was suspended.May I ...
@Nelimee Do we need to merge? Currently, there's just one question with "phase-estimation" and another question with "quantum-phase-estimation". Might we as well use just one tag? (say just "phase-estimation")
@Blue 'merging', if I'm getting the terms right, is a specific single action that does exactly that and is generally preferable to editing tags on questions. Having said that, if it's just one question, it doesn't really matter although performing a proper merge is still probably preferable
Merging is taking all the questions with a specific tag and replacing that tag with a different one, on all those questions, on a tag level, without permanently changing anything about the underlying tags
@Blue yeah, you could do that. It generally requires votes, so it's probably not worth bothering when only one question has that tag
@glS "Every hermitian matrix satisfy this property: more specifically, all and only Hermitian matrices have this property" ha? I though it was only a subset of the set of valid matrices ^^ Thanks for the precision :)
@Nelimee if you think about it it's quite easy to see. Unitary matrices are the ones with phases as eigenvalues, while Hermitians have real eigenvalues. Therefore, if a matrix is not Hermitian (does not have real eigenvalues), then its exponential will not have eigenvalues of the form $e^{i\phi}$ with $\phi\in\mathbb R$. Although I'm not sure whether there could be exceptions for non diagonalizable matrices (if $A$ is not diagonalizable, then the above argument doesn't work)
This is an elementary question, but a little subtle so I hope it is suitable for MO.Let $T$ be an $n \times n$ square matrix over $\mathbb{C}$.The characteristic polynomial $T - \lambda I$ splits into linear factors like $T - \lambda_iI$, and we have the Jordan canonical form:$$ J = \begin...
@Nelimee no! unitarily diagonalizable matrices are all and only the normal ones (satisfying $AA^\dagger =A^\dagger A$). For general diagonalizability, if I'm not mistaken, one characterization is that the sum of the dimensions of the eigenspaces has to match the total dimension
@Blue I actually agree with Nelimee here that it's not that easy. You get $UU^\dagger = e^{iA} e^{-iA^\dagger}$, but if $A$ and $A^\dagger$ do not commute it's not straightforward that this doesn't give you an identity
I'm getting confused. I remember there being some theorem about one-to-one mappings between unitaries and hermitians provided by the exponential, but it was some time ago and may be confusing things in my head
@Nelimee if there is a $0$ there then it becomes the normality condition. Otherwise it means that the matrix is not normal, therefore not unitarily diagonalizable, but still the product of exponentials is relatively easy to write
@Blue you are right indeed. If $U$ is unitary then for sure you can write it as exponential of an Hermitian (time $i$). This is easily proven because $U$ is ensured to be unitarily diagonalizable, so you can simply compute it's logarithm through the eigenvalues. However, logarithms are tricky and multivalued, and there may be logarithms which are not diagonalizable at all.
I've actually recently asked some questions on math.SE on related topics
@Mithrandir24601 indeed, that was also what @Nelimee showed with an example above. I believe my argument holds for unitarily diagonalizable matrices. If a matrix is only generally diagonalizable (so it's not normal) then it's not true
also probably even more generally without $i$ factors
so, in conclusion, it does indeed seem that $e^{iA}$ unitary implies $A$ Hermitian. It therefore also seems that $e^{iA}$ unitary implies $A$ normal, so that also my argument passing through the spectra works (though one has to show that $A$ is ensured to be normal)
Now what we need to look for is 1) The exact set of conditions for which the matrix exponential $e^A$ of a complex matrix $A$, is unitary 2) The exact set of conditions for which the matrix exponential $e^{iA}$ of a real matrix $A$ is unitary
@Blue fair enough - as with @Semiclassical I was thinking about it with the t parameter, as that's what we care about in physics :P I can possibly come up with a number of non-Hermitian matrices that gives unitary evolution for a specific t
Or rather, the exponential of which is unitary for $t+n\tau$, although I'd need to check
If you're afraid of the density of diagonalizable matrices, simply triangularize $A$. You get $$A=P^{-1}UP,$$ with $U$ upper triangular and the eigenvalues $\{\lambda_j\}$ of $A$ on the diagonal.Then$$\mbox{det}\;e^A=\mbox{det}(P^{-1}e^UP)=\mbox{det}\;e^U.$$Now observe that $e^U$ is upper ...
There's 15 hours left on a bountied question, but the person who offered the bounty is suspended and his suspension doesn't expire until about 2 days, meaning he may not be able to award the bounty himself? That's not fair: It's a 300 point bounty. The largest bounty ever offered on QCSE. Let h...
|
This question already has an answer here:
How to find the factorial of a fraction? 5 answers
The TI-84 says 52.342777 but other calculators says domain error.
Using the Gamma function we have $$(9/2)!=\Gamma(11/2)=\frac{945}{32}\sqrt{\pi}=52.3427777 $$
The factorial is defined only for nonnegative integers, but it is generalized to real numbers using the gamma function. For each positive integer $n$, you have $\Gamma(n+1)=n!$. You have that $\Gamma(5.5)=52.342777..$
You're getting the answer for $\Gamma(11/2)$, where
$$\Gamma(n)=(n-1)!$$
The gamma function $\Gamma(x)$ is defined for all $x$ except $0, -1, -2, -3, ...$. The factorial function, however, is defined only for nonnegative integers.
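The value the TI-84 reports can be reproduced with Python's standard library, which exposes the gamma function directly (a quick sketch):

```python
import math

# (9/2)! interpreted through the Gamma function: x! = Gamma(x + 1).
val = math.gamma(11 / 2)

# Closed form from the answer above: Gamma(11/2) = (945/32) * sqrt(pi).
closed = 945 / 32 * math.sqrt(math.pi)

assert abs(val - closed) < 1e-9
print(round(val, 6))   # 52.342778
```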
|
It's widely believed that p-branes and Dp-branes are the same objects in string theory. p-branes are charged solutions of type IIB supergravity that have the topology of a black hole; one would expect to be able to construct the same black hole using a superposition of D-branes and to count the microstates to get a statistical entropy.
Maldacena showed in his thesis that one has to have D1-branes, D5-branes and some momentum in the $x^5$ direction to get a black hole with nonzero horizon area, so that the Bekenstein-Hawking entropy can be computed.
In the count of microstates of the configuration, the idea is to account for all the fields on the branes, keeping in mind that the open strings can end on the D1 (1,1), D5 (5,5), and mixed D1-D5 (1,5), D5-D1 (5,1) branes. It seems in the literature that one has to account for the bound state of the system in order to have a black hole; what is the explanation for this?
If one looks at the bound state of the system, it's given by the Higgs branch, which comes from giving the hypermultiplets of the superpotential an expectation value:
$V = \frac{1}{(2\pi \alpha ')^2} \mid X_i \chi - \chi Y_i \mid^2 + \frac{g_1^2}{4} D_1^A D_1^A + \frac{g_5^2}{4V_4}D_5^2D_5^2 $
In the literature, all the states that matter come from the massless excitations of the branes, which I can get by setting the superpotential to zero. Why do I only care about the massless excitations of the branes?
|
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for
@JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default?
@JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever
I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font.
@DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma).
@egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge.
@barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually)
@barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording?
@barbarabeeton overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash wat did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us.
@DavidCarlisle -- okay. are you sure the \smash isn't involved? i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.)
@barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead but it still overprinted when in the \ialign construct but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow)
if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod), but the submitter wasn't very specific about the intent.
@egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended.
@barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really
@DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts.
@DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ...
@DavidCarlisle I see no real way out. The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts.
MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located. The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers...
has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the word editable?
I'm not familiar with word, so I'm not sure if there are things there that would just get goofed up or something.
@baxx never use word (have a copy just because but I don't use it;-) but have helped enough people with things over the years, these days I'd probably convert to html latexml or tex4ht then import the html into word and see what comes out
You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} Make a small html file that looks like<!...
@baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but they are extremes but the thing is you just never know, you may see a simple article class document that uses no hard looking packages then get half way through and find \makeatletter several hundred lines of trick tex macros copied from this site that are over-writing latex format internals.
|
Let $(M,g)$ be a Riemannian manifold, $\nabla$ the Levi-Civita connection of $g$. A vector field $V$ on $M$ is called a Killing field if for every $p\in M$ and every $X,Y\in T_p M$, $$ g(\nabla_X V, Y)+g(X,\nabla_Y V)=0 $$ Show that if $(M,g)$ is a compact Riemannian manifold, and $V$ is a Killing field, then the flow $\Psi_t$ of $V$ is an isometry for each $t$.
Now to get us started: first we shall show that the rate of change of the metric $g_{\Psi_t(x)}(D_x\Psi_t X, D_x\Psi_t Y)$ with $t$ is zero at $t=0$ for any $X$ and $Y$ in $T_p M$. Then we use the local group property of the flow.
Any help is appreciated.
First we claim that $$(\nabla_X g)(X_i,X_j)=X(g(X_i,X_j))-g([X,X_i],X_j)-g([X,X_j],X_i)=X(g(X_i,X_j)).\qquad(\dagger)$$
Step 1: Define a connection for differentiating covector fields (1-forms). The derivative $\nabla_Y\omega$ should satisfy $$Y(\omega(X))=(\nabla_Y\omega)(X)+\omega(\nabla_Y X).$$ Hence, $$(\nabla_Y\omega)(X) := Y(\omega(X))-\omega(\nabla_Y X).\qquad (1)$$ Applying (1) to the metric, we have \begin{align*} (\nabla_Y g)(X_i, X_j)&=Y(g(X_i,X_j))-g(\nabla_Y X_i,X_j)-g(X_i,\nabla_Y X_j). \end{align*}
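The outline above can be packaged in one line (a sketch, using the standard identity relating the Lie derivative of the metric to the Levi-Civita connection):

$$(\mathcal{L}_V g)(X,Y)=g(\nabla_X V,Y)+g(X,\nabla_Y V)=0 \quad\Longrightarrow\quad \frac{d}{dt}\,\Psi_t^*g=\Psi_t^*(\mathcal{L}_V g)=0,$$

so $\Psi_t^*g=\Psi_0^*g=g$ for every $t$ (compactness of $M$ guarantees the flow is defined for all $t$), i.e. each $\Psi_t$ is an isometry.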
|
I have a number of measurements of the same quantity (in this case, the speed of sound in a material). Each of these measurements has their own uncertainty.
$$ v_{1} \pm \Delta v_{1} $$ $$ v_{2} \pm \Delta v_{2} $$ $$ v_{3} \pm \Delta v_{3} $$ $$ \vdots $$ $$ v_{N} \pm \Delta v_{N} $$
Since they're measurements of the same quantity, all the values of $v$ are roughly equal. I can, of course, calculate the mean:
$$ v = \frac{\sum_{i=1}^N v_{i}}{N}$$
What would the uncertainty in $v$ be? In the limit that all the $\Delta v_i$ are small, then $\Delta v$ should be the standard deviation of the $v_i$. If the $\Delta v_i$ are large, then $\Delta v$ should be something like $\sqrt{\frac{\sum_i \Delta v_i^2}{N}}$, right?
So what is the formula for combining these uncertainties? I don't think it's the one given in this answer (though I may be wrong) because it doesn't look like it behaves like I'd expect in the above limits (specifically, if the $\Delta v_i$ are zero then that formula gives $\Delta v = 0$, not the standard deviation of the $v_i$).
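To make the two limits concrete, here is a small stdlib-only sketch of the kind of combination I have in mind; the choice of adding the scatter of the $v_i$ and the mean squared individual uncertainty in quadrature is just my guess, not an established formula, and the function name is mine:

```python
import math

def combine(vs, dvs):
    """Hypothetical combination: scatter of the v_i and the mean
    individual uncertainty, added in quadrature."""
    n = len(vs)
    mean = sum(vs) / n
    # Population standard deviation of the v_i (statistical scatter):
    scatter = math.sqrt(sum((v - mean) ** 2 for v in vs) / n)
    # sqrt(sum_i dv_i^2 / N) (mean measurement uncertainty):
    meas = math.sqrt(sum(dv ** 2 for dv in dvs) / n)
    return mean, math.sqrt(scatter ** 2 + meas ** 2)

# Limit 1: all dv_i = 0   ->  uncertainty reduces to the std dev of the v_i
# Limit 2: all v_i equal  ->  uncertainty reduces to sqrt(sum_i dv_i^2 / N)
```

This reproduces both limiting behaviours described above, but I don't know whether it is the statistically correct prescription.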
|
I'm studying the holographic entanglement entropy (HEE) in this paper (Ryu-Takayanagi, 2006). In section 6.3 they compute the HEE for a segment in a 2D CFT. To do so, they obtain the corresponding geodesic in the bulk (in the Poincaré patch) and compute its length.
I understand all that process, but I'm having some trouble when they introduce the cutoff. The metric diverges when $z\to0$ so we introduce a cutoff $\epsilon>0$, I understand that. But then they say
Since $e^\rho\sim x^i/z$ near the boundary, we find $z\sim a$
Here, $\rho$ is the hyperbolic radial coordinate in the global coordinates for AdS,
$$ ds^2 = R^2(-\cosh^2\rho\ d\tau^2 + d\rho^2 + \sinh^2\rho\ d\Omega^2) $$
$x^i$ and $z$ are coordinates in the Poincaré patch,
$$ ds^2 = \frac{R^2}{z^2}(dz^2-dt^2+\sum_i(dx^i)^2) $$
And $a$ is the inverse of the UV cutoff of the CFT in the boundary, that is, the spacing between sites.
I have two problems:
1) First, I don't see why near the boundary $e^\rho\sim x^i/z$. I worked out the relations between the two coordinate systems and I find more complicated relations than that (even setting $z\sim0$).
2) Even assuming the previous point, I don't understand why we obtain that relation between the CFT and the $z$ cutoff.
|
Now showing items 1-10 of 24
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and $\bar{\rm p}$ production at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
|
Now showing items 1-10 of 27
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb-Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
|
Let me rearrange the logic of the Moyal Bracket that @ACuriousMind discussed neatly, by visiting a notional planet where people somehow discovered classical mechanics and quantum mechanics
independently; but suffered a terrible mental block that prevented them from appreciating there was a connection between the two, at first.
Then, one day, their Groenewold observed that, starting from QM, where capitals denote QM operators, $[P,Q]=PQ-QP=-i\hbar$, etc., and lower case denotes classical phase-space entities, he could take any operator function of
P and Q, Φ, and package all of its matrix elements into the following c-number generating function, $$ f(q,p)= 2 \int_{-\infty}^\infty \text{d}y~e^{-2ipy/\hbar}~ \langle q+y| \Phi (Q,P) |q-y \rangle,$$(what we'd recognize here as our Wigner map to phase space on our planet), that is, to say, completely specified by the totality of its matrix elements,$$ \langle x| \Phi |y \rangle = \int_{-\infty}^\infty {\text{d}p\over h} ~e^{ip(x-y)/\hbar}~ f\left({x+y\over2},p\right) . $$He thus discovered that the operator Φ could actually be extracted out of inverting the above, so it is an operator functional of the c-number function quantum f(q,p), which of course also depends on $\hbar$, in general, $$ \Phi [f] = \frac{1}{(2\pi)^2}\iint\!\!\! \iint f(q,p) ~e^{i(a(Q-q)+b(P-p))}~ \text{d}q\, \text{d}p\, \text{d}a\, \text{d}b.$$
Observe how this form expresses
Φ(Q,P), with its complicated and capricious ordering of strings of Qs and Ps, now in a form where Qs and P are completely symmetric (the exponential being the formal infinite power series development thereof).
(On our planet, this inverse map is called the Weyl map, and was discovered first, in a misguided effort to start with classical quantities
f(q,p) and somehow, magically!, be led to their quantum correspondents, which know about $\hbar$, so with more information appearing out of thin air, but no matter. Still Kubo was the one to appreciate this procedure automatically Weyl-orders arbitrary operators, i.e. yields equal operators in this special ordering, in general looking different.)
Moreover, this Wigner map maps Hilbert space operator commutators $[\Phi,\Gamma]/(i\hbar)$ to what we call the Moyal Bracket,$$\frac{2}{\hbar} ~ f(x,p)\ \sin \left ( {{\tfrac{\hbar }{2}}(\overleftarrow{\partial}_x\overrightarrow{\partial}_{p}-\overleftarrow{\partial}_{p}\overrightarrow{\partial }_{x}} )\right ) \ g(x,p), $$where you note the leading term in the Taylor series w.r.t. $\hbar$ is just $\{f,g\}$, the Poisson Bracket. Hilbert space traces map to phase-space integrals.
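Expanding the sine order by order makes the classical limit explicit (a standard expansion of the expression above, not specific to any one reference):

$$\frac{2}{\hbar}\, f \sin\!\left(\tfrac{\hbar}{2}\big(\overleftarrow{\partial}_x\overrightarrow{\partial}_p-\overleftarrow{\partial}_p\overrightarrow{\partial}_x\big)\right) g = \{f,g\} - \frac{\hbar^2}{24}\, f\,\big(\overleftarrow{\partial}_x\overrightarrow{\partial}_p-\overleftarrow{\partial}_p\overrightarrow{\partial}_x\big)^{3}\, g + O(\hbar^4),$$

so for functions varying slowly on the scale of $\hbar$ the bracket collapses to the Poisson bracket.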
(Full disclosure: an expansion of these moves can be found in our booklet
A Concise Treatise on Quantum Mechanics in Phase Spaceby Curtright, Fairlie, and Zachos, WS 2014, cf. online update, or most other popular texts on the subject.) So far, absolutely no physics, or insight: through a technical change of language, plain QM was simply re-expressed in c-number phase space.
Now, however, our Tralfamadorian Groenewold must have been very pleased indeed, since he also knew this was within the scope of classical mechanics, so he could discuss both QM and classical mechanics in the same breath. He could then observe that most "large", macroscopic, systems and entities involving large quantum numbers, and large actions compared to $\hbar$, behave as classical c-number functions of phase space familiar from classical mechanics (corrected by $\hbar$-fuzz, ignorable for very small $\hbar$), the Moyal Bracket for
slowly varying functions (on the scale of $\hbar$ again, where waviness and interference rule), devolved to the Poisson brackets, etc... He must have been beside himself with the emergent classical mechanics limit he found.
So, even though
f, g, etc, depend on $\hbar$, as full quantum objects, those that have a nonsingular limit as $\hbar\to 0$ reduce to neat engineering physics (freshman lab) quantities free from the frustrating complications of quantum mechanics. Oh, dear: variables are effectively commutative, when you sacrifice (quantum) information... Suddenly, talking about trajectories, in general, could make sense! (But then chaos and entropy reared their ugly heads. But we are digressing.)
OK, this is the outline of emergent classical behavior. Several subtleties are swept under the rug, including macroscopic quantum systems, etc..., but ginger treading conquers the fog of $\hbar$, and decoherence is a friend.
The invertible maps above, nevertheless, have nothing to do with quantization--they are mere changes of variables. But they help you monitor it, if you wished to go the Dirac way, and hence the misnomer "deformation quantization": you pretend you start with $\hbar$-independent
fs and the PB and "cleverly deform it" to the MB by guessing the $\hbar$-corrections on intuitive beauty principles. But you'll never get the correct square of the angular momentum this way. Quantization is an art, a mystery.
Convenience Edit to connect to antistandard ordering: @OkThen replicates the antistandard ordering prescription that Kirkwood 1933 utilized, in eqn (121) of the book cited above; I couldn't resist the teachable moment. It is, of course, equivalent to the Wigner-Weyl map discussed here, as @ACuriousMind and @tparker point out. All of these Hilbert-space to phase-space maps are equivalent; agreement with the classical entities at $O(\hbar^0)$ is essentially enforced as a boundary condition, so failure of the Dirac correspondence would be evidence of an error, as emphasized by @ACuriousMind.
Explicitly, sticking an extra factor $\exp(i\hbar ab/2)$ onto the exponential of the above
Φ converts the above operator kernel to $e^{ib(P-p)} e^{ia(Q-q)}$, yielding a slightly different Φ', mappable invertibly to Φ, of course. The corresponding image of the Moyal bracket is, as given, a bit less symmetric, $~f\left(1-\exp\left(i\hbar(\overleftarrow{\partial}_x\overrightarrow{\partial}_{p}-\overleftarrow{\partial}_{p}\overrightarrow{\partial}_{x})\right)\right)g/i\hbar$, but of course mappable to the MB invertibly, by the same map. This was actually Dirac's original thesis observation, that correspondence of q with Q and p with P automatically yields the boundary condition discussed, so it could not fail. It was only subsequent cookie-cutter quantization scheme seekers who unwisely insisted on applying such maps to quantization, now safely excluded by Groenewold.
Note added on Bracken's emergence : In a remarkable 2003 paper, Bracken observes that the obverse side of the standard quantization relation $MB=\frac{2}{\hbar}\sin (\hbar ~PB /2)=PB + O(\hbar^2)$ is $PB=\frac{2}{\hbar}\arcsin (\hbar ~MB /2)=MB + O(\hbar^2)$, so emergent classical mechanics is an infinite asymptotic series of $\hbar$ quantum corrections to the quantum result: the magic here is the complete cancellation of all $\hbar$ dependence, analogous to the destructive interference of quantum phases in the functional integral yielding the classical extremizing result. It's good to know as a formal wisecrack, but I have never seen a brass-tacks utilization of it in a cogent nontrivial calculation.
|
I was looking into statistical and quantum mechanics and their overlap but I can't seem to solve a basic question as to what a partition function,
$$Z = \frac{\exp\left(-\beta\hbar \omega/2\right)}{1 - \exp(-\beta\hbar \omega)} $$ looks like when the temperature goes to infinity, where $\beta = 1/kT$. When $T\to\infty$, $\beta\to 0$. Obviously plugging this into $Z$ just cancels everything out. So I've tried using a Taylor series expansion
$$\exp(x) = 1+x+\cdots$$ to solve it. However I still can't get rid of the $\beta$. Best case scenario I end up with $Z=-1/2$ which is incorrect since I need to manipulate the $Z$ value later on to play with expectation energies.
If you guys have any idea on how to approach this please let me know. I understand rules are quite harsh on stack exchange and that I'm new, so I apologize for any infractions. Thank you for your help.
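To see the limit numerically: the partition function can be rewritten as $Z = 1/(2\sinh(\beta\hbar\omega/2))$, so as $\beta\to 0$ one gets $Z \approx 1/(\beta\hbar\omega) = kT/\hbar\omega$, i.e. $Z$ diverges rather than cancelling. A quick stdlib-only check (variable names are mine, with $\hbar\omega$ set to 1):

```python
import math

def Z(beta, hw=1.0):
    # Harmonic-oscillator partition function; hw stands for hbar * omega.
    return math.exp(-beta * hw / 2) / (1 - math.exp(-beta * hw))

# As beta -> 0, the product beta * Z(beta) should approach 1 / (hbar * omega):
for beta in (1e-1, 1e-2, 1e-3):
    print(beta, beta * Z(beta))
```

The printed products tend to 1, confirming $Z \to kT/\hbar\omega$ rather than a finite constant.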
|
Is there a way to find number of "different" solutions to the equation $xy +yz + zx = N$, given the value of $N$.
Note: $x,y,z$ can have only non-negative values.
The problem is difficult, as it is related to the determination of class numbers of quadratic number fields. See the references I have given in the comments.
equation:
$XY+XZ+YZ=N$
Solutions in integers can be written by using a factorization of the number: $N=ab$
And using solutions of Pell's equation: $p^2-(4k^2+1)s^2=1$
$k$ is some integer which we choose on our own.
Solutions can be written:
$X=ap^2+2(ak+b+a)ps+(2(a-2b)k+2b+a)s^2$
$Y=2(ak-b)ps+2(2ak^2+(a+2b)k+b)s^2$
$Z=bp^2-2(2b+a)kps+(4bk^2-2ak-b)s^2$
And more:
$X=-2bp^2+2(k(4b+a)+b)ps-2((4b+2a)k^2+(2b-a)k)s^2$
$Y=-(2b+a)p^2+2(k(4b+a)-b-a)ps-(8bk^2-(4b+2a)k+a)s^2$
$Z=bp^2-2(2b+a)kps+(4bk^2-2ak-b)s^2$
Perhaps these formulas are too complicated for some. Then for the equation:
$XY+XZ+YZ=N$
If we pick any number $p$ such that the sum $p^2+N$ can be factored as $p^2+N=ks$, then solutions can be written:
$X=p$
$Y=s-p$
$Z=k-p$
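This last family is easy to verify: if $p^2+N=ks$, then with $(X,Y,Z)=(p,\,s-p,\,k-p)$ one gets $XY+XZ+YZ = sk - p^2 = N$ identically. A small sanity check in Python (the function name is mine; note that the factor pairs do not guarantee non-negative components):

```python
def solutions_from_p(N, p):
    # For each factorization p*p + N = k*s, emit (p, s - p, k - p).
    m = p * p + N
    out = []
    for k in range(1, m + 1):
        if m % k == 0:
            s = m // k
            X, Y, Z = p, s - p, k - p
            # The identity XY + XZ + YZ = sk - p^2 = N holds for every factor pair.
            assert X * Y + X * Z + Y * Z == N
            out.append((X, Y, Z))
    return out

print(solutions_from_p(10, 1))  # e.g. [(1, 10, 0), (1, 0, 10)]
```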
For the equation: $XY+XZ+YZ=a(X+Y+Z)$
Solutions can be written using the solutions of Pell's equation:
$p^2-(k^2-k+1)s^2=a$
$k$ can be any number, chosen by us.
Then the solutions will take the form:
$X=p^2+(k+1)ps$
$Y=p^2+(k-2)ps$
$Z=p^2+(1-2k)ps$
And more.
$X=(k+1)ps-(k^2-k+1)s^2$
$Y=(k-2)ps-(k^2-k+1)s^2$
$Z=(1-2k)ps-(k^2-k+1)s^2$
Here is a C# program that does the job in a pretty naive way. It eliminates some possibilities, then bruteforces all solutions. You can also use the Mathematica code provided on the OEIS page on this sequence (thank Gerry Myerson for this!).
using System;

class Program
{
    static void Main(string[] args)
    {
        Console.Write("N = ");
        int N = int.Parse(Console.ReadLine());
        Console.WriteLine();
        int u = 0, add = -1, n = 0, s = 0;
        while (u < 2 * N) { u += (add += 2); n++; }
        while (n <= N + 1)
        {
            if (canBeWrittenAsSumOfThreeSquares(u - 2 * N))
            {
                for (int x = 0; x <= n / 3; x++)
                    for (int y = x; y <= (n - x) / 2; y++)
                    {
                        int z = n - x - y;
                        if (x * y + y * z + z * x == N)
                        {
                            Console.Write(x + ", " + y + ", " + z + " ");
                            if (x != y && y != z && x != z) { Console.WriteLine("(6 permutations)"); s += 6; }
                            else if (x == y && y == z) { Console.WriteLine("(1 permutation)"); s++; }
                            else { Console.WriteLine("(3 permutations)"); s += 3; }
                        }
                    }
            }
            u += (add += 2);
            n++;
        }
        Console.WriteLine(s + " solutions found.");
        Console.ReadKey(true);
    }

    static bool canBeWrittenAsSumOfThreeSquares(int n)
    {
        while (n % 4 == 0 && n > 0) n /= 4;
        return n % 8 != 7;
    }
}
The code uses the fact that $xy + yz + zx = \frac{(x + y + z)^2 - x^2 - y^2 - z^2}{2}$. When we have $xy + yz + zx = N$, we have $(x + y + z)^2 - x^2 - y^2 - z^2 = 2N$ and $ (x + y + z)^2 - 2N = x^2 + y^2 + z^2 $. A number can be written as a sum of three squares iff it is not of the form $4^k(8m + 7)$ (see the Wikipedia article). So we simply check all $u \in \mathbb{N}$ so that $0 \leq u^2 - 2N \leq N^2 $, and then iterate through all increasing ($x \leq y \leq z$) 3-tuples $(x, y, z)$, multiplying with the number of permutations that is possible (either 1, if all three components are the same, 3, when two components are the same and one different, or 6, when all the components are different).
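The divisibility test used above is Legendre's three-square theorem. For readers who prefer Python, here is a port of the helper (my own restatement of the C# `canBeWrittenAsSumOfThreeSquares` above):

```python
def sum_of_three_squares(n):
    # Legendre's three-square theorem: n >= 0 is a sum of three squares
    # iff n is NOT of the form 4^k * (8m + 7).
    while n > 0 and n % 4 == 0:
        n //= 4
    return n % 8 != 7

print(sum_of_three_squares(6))   # True: 6 = 1 + 1 + 4
print(sum_of_three_squares(7))   # False: 7 = 8*0 + 7
```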
I haven't used any advanced theorem (class numbers?), but I just wanted to illustrate that there is an approach that works for small $N$. I was able to calculate the amount of solutions for $N=4000$ in about one minute. This approach probably can be expanded to be more efficient.
|